From specs to release notes — automatically


This week I visited Exception Twente, a fun gathering of people working in the software industry in my region. One of the talks was on Behaviour-Driven Development (BDD), by Tim Schlechter.

His interest in BDD was sparked by the fact that the Don’t Repeat Yourself (DRY) principle was not adhered to when creating specs. Basically, specifications end up “copied” into functional and unit tests. BDD looked like an opportunity to directly relate the tests, and therefore the code, to the specifications, minimising duplication.

Creating specifications

Being a .NET developer, Tim showed SpecFlow, a BDD tool, in combination with NUnit, a .NET unit testing tool. The Java equivalents of these tools are Cucumber and JUnit. Using SpecFlow or Cucumber you can specify the intended behaviour of the system in plain text, sort of. Tim demonstrated this with an example specification, translated for his talk.
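Tim’s actual example isn’t reproduced here, but SpecFlow and Cucumber specifications are written in the Gherkin format, and one looks roughly like this (a hypothetical example of my own; the feature, scenario, and values are made up):

```gherkin
Feature: Account withdrawal

  Scenario: Withdraw money from an account with sufficient balance
    Given an account with a balance of '100' euros
    When the user withdraws '40' euros
    Then the account balance is '60' euros
```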

Based on this specification, SpecFlow creates a testing class with an empty method for each line in the specification. The next thing you have to do is implement these generated methods so they execute the behaviour that is described in the specification, and your test is finished. The pieces of specification that are between apostrophes are variables, allowing you to reuse a method with different input. This is just a brief explanation of how SpecFlow works; if you want to know more, head over to the SpecFlow Getting Started page.
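To illustrate the mechanism (this is not SpecFlow’s actual API, just a sketch of the idea; the step text and function names are my own), a step definition can match its specification line with a regular expression, where the quoted pieces become capture groups:

```python
import re

# A step definition: the text between apostrophes becomes a capture
# group, so one method can be reused with different input values.
STEP_PATTERN = re.compile(r"the user withdraws '(\d+)' euros")

def run_step(step_text, balance):
    """Match a specification line against the pattern and execute it."""
    match = STEP_PATTERN.fullmatch(step_text)
    if match is None:
        raise ValueError(f"no step definition matches: {step_text!r}")
    amount = int(match.group(1))  # the quoted piece is a variable
    return balance - amount

# The same step definition handles different inputs from the spec:
print(run_step("the user withdraws '40' euros", 100))  # 60
print(run_step("the user withdraws '15' euros", 100))  # 85
```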

Just documentation for unit tests?

At first I was not really impressed. This looks just like additional documentation for unit tests, I thought. I wasn’t sure whether the extra work of creating these human-readable specifications was worth the gain, or whether a functional designer or information analyst would be tech-savvy enough to write this kind of specification.

Then Tim started to talk about reporting on these unit tests. He explained that it’s possible to create a screenshot at the end of every step, and to combine these screenshots with the human-readable specification in a document. This has the advantage that all steps leading up to the failure of a unit test are clearly visible. Since this is also possible with regular unit tests, it still did not really convince me of its usefulness.

Automate your release notes

But then Tim started talking about his ideas for business needs that could be met with this kind of reporting. When all tests succeed, the report is basically functional documentation of your software. And because it’s based on the actual software, it’s always accurate and always up to date. And it has screenshots! This was starting to sound useful! Manually created functional documentation, no matter how hard you try to keep it updated, always seems to become outdated or inaccurate almost immediately.

Taking this idea even further, Tim said, why not use these reports to automatically create release notes? Store a report for each version of the software, and the only thing you have to do to have accurate release notes is look at the difference between two of these reports.
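As a rough sketch of that idea (the scenario titles and versions below are made up), diffing the scenario titles from two reports already yields a skeleton for the release notes:

```python
def release_notes(old_report, new_report):
    """Compare the scenario titles of two test reports.

    Scenarios that only appear in the new report are new behaviour;
    scenarios that disappeared were removed.
    """
    old, new = set(old_report), set(new_report)
    return {
        "added": sorted(new - old),
        "removed": sorted(old - new),
    }

# Hypothetical reports: one scenario title per succeeded test
v1 = ["Deposit money", "Withdraw money"]
v2 = ["Deposit money", "Withdraw money", "Transfer money"]

print(release_notes(v1, v2))  # {'added': ['Transfer money'], 'removed': []}
```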

These are very interesting ideas! Now I wonder: which other business needs could we automatically fulfil using our development process?

Tim Schlechter on GitHub
Tim’s example for Exception Twente