Is your functional suite done right?

On the last two projects I have worked on, both fairly large in terms of people (40+), I have seen enormous effort spent on functional testing. The effort, though not completely wasted, hasn’t yielded proportional gains in quality or in faster feedback at a higher level of integration. The following list addresses the issues I have seen, and my take on fixing them.

Separation of Concerns
Functional suites suffer most from the lack of a clear directive on what they are written for. Adding view tests (tests of window UI/HTML output) that cannot be covered by your regular unit tests to your functional suite is a recipe for disaster. View tests do not belong in the functional suite, yet such tests often form almost half of it. Coupling the two not only increases the run time of the suite, it also mandates that the same testing tool is used for both sets. Think of an ASP.Net website you are developing. The view test suite can be written using the lightning-fast NUnitASP toolset, because view tests don’t need to tackle cross-browser compatibility issues or the integration between your user interface and services; the functional suite can then be written with Selenium or your favorite browser-based testing tool. Also, view tests should be part of the tests a pair runs before they check in, while all functional tests might not be (teams generally decide on a subset as smoke tests), so dividing your tests judiciously between view tests and functional tests is of utmost importance.
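To make the divide concrete, here is a rough sketch of the two kinds of tests for an ASP.Net login page (the page, control, and URL names are made up for illustration, and the NUnitASP usage is from memory). The first test exercises only the rendered output, no browser involved; the second drives a real browser through Selenium RC to check that the UI and the services behind it work together:

    using NUnit.Framework;
    using NUnit.Extensions.Asp;
    using NUnit.Extensions.Asp.AspTester;
    using Selenium;

    // View test: exercises only the rendered page through NUnitASP.
    // Fast enough to run before every check-in.
    [TestFixture]
    public class LoginViewTest : WebFormTestCase
    {
        [Test]
        public void EmptySubmitRendersValidationMessage()
        {
            Browser.GetPage("http://localhost/myapp/Login.aspx");
            ButtonTester submit = new ButtonTester("submitButton", CurrentWebForm);
            LabelTester error = new LabelTester("errorLabel", CurrentWebForm);

            submit.Click();
            AssertEquals("Username is required", error.Text);
        }
    }

    // Functional test: drives a real browser via Selenium RC, checking
    // integration and functionality rather than markup.
    [TestFixture]
    public class LoginFunctionalTest
    {
        private ISelenium selenium;

        [SetUp]
        public void SetUp()
        {
            selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                                           "http://localhost/myapp/");
            selenium.Start();
        }

        [TearDown]
        public void TearDown()
        {
            selenium.Stop();
        }

        [Test]
        public void ValidUserLandsOnDashboard()
        {
            selenium.Open("/Login.aspx");
            selenium.Type("id=username", "alice");
            selenium.Type("id=password", "secret");
            selenium.Click("id=submitButton");
            selenium.WaitForPageToLoad("30000");
            Assert.IsTrue(selenium.IsTextPresent("Welcome, alice"));
        }
    }

Note that only the second test needs a browser, a running server and real data; that is exactly why the two don’t belong in the same suite.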

Learning: There should be a clear divide between regression functional tests and view tests. Before adding a new test, ask yourself whether you want to test just the view, or integration and functionality.

Performance
Functional tests need to be extremely fast; if they cannot be, the suite must at least scale, with or without parallelization. More often than not, the functional suite has the longest feedback cycle on a project, and the long running time forces teams to leave this feedback out of their primary builds. A functional suite that takes 12 hours to run is not going to help you much with continuous integration. It’s not just the wait that hurts, it’s the lack of granularity at the individual check-in level on how new development affects existing functionality.
If the testing tools you are using are not extremely fast, you must look at creating parallel streams of tests. Continuous Integration tools like Cruise, with their agent-based architectures, allow you to divide tests into separate meaningful suites and parallelize their run. For browser-based applications, tools like Selenium Grid help you do the same a level below CI (if you use Selenium for your tests), and are an excellent choice for distributing your tests across multiple machines. But before you jump into this, your suite must be ready to be distributed. A classic example of bad design is logging in with one user’s credentials in all your tests: most sites/applications don’t allow concurrent sessions for the same user, and hence such tests cannot run in parallel (this one is easy enough to fix, as sketched below, but the team should be on the lookout for such issues).
An idea I am trying to push within ThoughtWorks is to have a small Selenium RC cloud that any developer can use to run their tests while hosting the server on their own machine (this could be taken further into a SETI-like “use hardware when idle” approach, but I would be happy with ten dedicated regular boxes right now). This would make the feedback cycle shorter and probably help promote regression tests to a primary build target rather than a nightly one.
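As a sketch of the credentials fix (the user pool, environment variable names and application URL are all invented for illustration): give each parallel stream its own pre-created account, and read the Selenium RC host from the environment so the same suite can point at a local server or a shared grid hub without changing a single test:

    using System;
    using NUnit.Framework;
    using Selenium;

    // Base fixture for parallel-safe functional tests. Each test stream
    // gets its own account, so no two streams ever share a login session.
    public abstract class ParallelSafeTest
    {
        // Hypothetical pool of accounts pre-created in the test environment.
        private static readonly string[] UserPool =
            { "ci_user_01", "ci_user_02", "ci_user_03", "ci_user_04" };

        protected ISelenium selenium;
        protected string username;

        [SetUp]
        public void StartBrowser()
        {
            // Which stream are we? The CI agent sets TEST_STREAM to 0..n-1.
            int stream = int.Parse(
                Environment.GetEnvironmentVariable("TEST_STREAM") ?? "0");
            username = UserPool[stream % UserPool.Length];

            // Default to a local Selenium RC; the build sets SELENIUM_HOST
            // and SELENIUM_PORT when the suite should run against the grid.
            string host =
                Environment.GetEnvironmentVariable("SELENIUM_HOST") ?? "localhost";
            int port = int.Parse(
                Environment.GetEnvironmentVariable("SELENIUM_PORT") ?? "4444");

            selenium = new DefaultSelenium(host, port, "*firefox",
                                           "http://myapp.example/");
            selenium.Start();
        }

        [TearDown]
        public void StopBrowser()
        {
            selenium.Stop();
        }
    }

With something like this in place, pointing the suite at a grid is purely a matter of build configuration; the tests themselves don’t change.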

Learning: Design with parallelization in mind. Keep the suite run-time within a decided time span (say, 20 minutes), and keep analyzing and refactoring the suite to stay below that limit.

Ownership
Having all your functional tests in one suite makes it very difficult to divide tests into regression tests and specifications. When this happens, developers and analysts are all involved in writing functional tests, which is a mess. Developers want refactorable tests written in their favorite language. Analysts want an easy way to specify and run tests, preferably with a tool that has recording support, which developers dislike because recorded tests are fragile. Analysts are hence forced into complicated object-oriented functional suites, which takes them away from their primary concern. Fixing tests broken by new functionality is another ownership issue between developers and analysts.
Ideally, the analysis team and the quality team should specify new behavior for a release using tests written with tools of their choice, and the development team should work towards making these tests green. This suite need not be monitored for failure, but it must be monitored for progress; it doesn’t need to be highly performant and can be run once per iteration. The development team, on the other hand, should have its own refactorable regression suite, based on those specifications. Failures in this suite must be monitored closely, and it should be written with quick feedback in mind.
Ideals aside, it is much easier to manage the quality process of a team this way, because every suite has a dedicated owner and there are concrete responsibilities for who runs what, and when. It is easier to track progress, and the people who write code are responsible for fixing the tests they break by adding new functionality, rather than leaving them to be attended to by someone else.
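For the developer-owned regression suite, refactorability is the key property. A page-object-style wrapper over Selenium (the class names and locators below are invented) keeps each screen’s plumbing in one place, so a UI change breaks one class instead of fifty tests:

    using Selenium;

    // Page objects keep the regression suite refactorable: when the login
    // flow changes, only this class changes, not every test that logs in.
    public class LoginPage
    {
        private readonly ISelenium selenium;

        public LoginPage(ISelenium selenium)
        {
            this.selenium = selenium;
        }

        public HomePage LoginAs(string user, string password)
        {
            selenium.Open("/Login.aspx");
            selenium.Type("id=username", user);
            selenium.Type("id=password", password);
            selenium.Click("id=submitButton");
            selenium.WaitForPageToLoad("30000");
            return new HomePage(selenium);
        }
    }

    public class HomePage
    {
        private readonly ISelenium selenium;

        public HomePage(ISelenium selenium)
        {
            this.selenium = selenium;
        }

        public bool IsShowingWelcomeFor(string user)
        {
            return selenium.IsTextPresent("Welcome, " + user);
        }
    }

A regression test then reads as new LoginPage(selenium).LoginAs("alice", "secret"), which is exactly the kind of test developers are happy to own and refactor, while analysts stay with their specification tooling.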
Learning: Do not hesitate to use different tools for different goals within a team. Have a specification suite and a regression suite to define clear responsibilities.

The most wonderful thing about functional automation is that it is fun and easy to do. To write an effective suite, treat it the way you treat the product it is supposed to test: with clear design goals and responsibilities, an eye on performance every now and then, and effective reporting on progress. Don’t leave it as the dark side of the project; make it the release mechanism that gives you confidence.