For a while now, I've been practicing one of the current 'crazes' of the Ruby/Rails development ecosystem: automated acceptance testing using Cucumber.
Acceptance testing is something that every software engineering project (hell, any project of any form) needs and, in some sense, already has. It's performed by the person who commissioned a project when they determine whether the project fulfills their criteria and expectations, and it may be formal or informal. Automated acceptance testing is the practice of writing these tests in a form a computer can run. This allows an iterative approach to acceptance: the client has acceptance tests for the system that can be run easily (although not necessarily quickly; many acceptance tests are slow because they exercise the entire stack), so parts of the project can be accepted as they are completed, with the tests acting as a guard against regression. Once a feature is finished, its acceptance tests pass; if a future change breaks those tests, the code has regressed and is no longer acceptable.
Cucumber itself is a tool for writing these tests. It uses a syntax called 'Gherkin' and breaks your acceptance tests down into features, each made up of several scenarios. A scenario is written in the form 'Given <x>, When <y>, Then <z>'. x, y and z are called steps, and each step has a definition that 'wires' the scenario up to the functionality required to pass the test. The idea is that these features can be written with the traditional TDD process of 'Red, green, refactor', although the overall loop is longer than the tight loop of unit-level TDD and will encompass many smaller 'Red, green, refactor' cycles as the underlying functionality is built.
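To make that concrete, here's a minimal sketch of what a feature might look like. The sign-in behavior, page names and labels are invented for illustration, not taken from any real project:

    Feature: Signing in
      In order to access my account
      As a registered user
      I want to sign in with my email and password

      Scenario: Successful sign-in
        Given a user exists with email "alice@example.com" and password "secret"
        When I sign in with email "alice@example.com" and password "secret"
        Then I should see "Welcome back"

Each line of the scenario is matched against a step definition written in Ruby. The sketch below assumes a Rails app with Capybara driving the browser simulation, and a hypothetical User model; your wiring will differ:

    Given /^a user exists with email "([^"]*)" and password "([^"]*)"$/ do |email, password|
      # Hypothetical model call; substitute however your app creates users.
      User.create!(:email => email, :password => password)
    end

    When /^I sign in with email "([^"]*)" and password "([^"]*)"$/ do |email, password|
      # Capybara helpers: drive the app through its real UI.
      visit "/session/new"
      fill_in "Email", :with => email
      fill_in "Password", :with => password
      click_button "Sign in"
    end

    Then /^I should see "([^"]*)"$/ do |text|
      page.should have_content(text)
    end

Running the cucumber command then executes every feature and reports each step as passing, failing or pending, which is what gives you the red/green cycle at the acceptance level.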
My experiences with the system have been mostly positive. It has helped me work out what I need to do in my projects, giving me starting points and a clear description of what I need to support, and it helps clarify the routes through the system that support specific tasks. However, it adds extra overhead and requires the extra skill of writing scenarios. The 'advised' method of getting customers to write these doesn't seem to be working, and besides, the projects I've been working on haven't had a customer as such; it's just been me working out what I think I need.
I'm not going to abandon the practice in the near future, as I do think it adds worthwhile value to a project. However, it isn't a silver bullet for project success, and writing good, maintainable features takes a fair bit of effort.
References:
Cucumber: http://cukes.info/