Tuesday 22 November 2011

Testing - Discipline

My dad says "if a job's worth doing, it's worth doing well". I say "if a bit of code is worth writing, it's worth testing it properly". Maybe I'm stretching the old saying a little, but the principle remains true.

Software testing is a very large subject area; I'm not going to try to reproduce a textbook here. I'm simply going to list some of the principles I apply to the testing phases of my projects and then show some useful macros that I have developed to aid the re-use of tests. There are many different types of test phase, each with different objectives. Some of these were briefly covered in my "SAS Software Development With The V-Model" paper at this year's SAS Global Forum (SGF).
  • To test something, you need to know what it should do, in all circumstances. This means you need to have established an agreed set of requirements and/or specifications.
  • There are a number of reasons why you might need to re-run a test - because the test failed, or for regression testing. For this reason, and for others, automated tests are preferable to manual tests.
  • Look upon your tests as an investment. Firstly, finding bugs before go-live is always "a good thing" for a number of reasons. But secondly, tests invariably need to be re-run, so the more effort you put into them the more they'll repay you when you have to re-run them. A library of re-usable tests is an asset.
  • Don't just test the "happy path" through your system. Test that the system rejects bad input and handles unexpected situations elegantly; this is called "negative testing". In simple terms, this might mean testing with values of zero, one, two, negative numbers, non-integers, and very large numbers.
  • Document your test strategy. This includes stating which testing methods & tools will be used for each different type of system element, e.g. data entry screens, report-generation wizards, small files, big files, important reports (to be sent to regulatory authorities, for example), and less important reports (for internal information only, for example).
  • Document your test plan and test cases, i.e. the individual steps (and expected results) that the tester should follow.
  • Documenting your test steps means that they can reliably be re-run if the tests have to be done again.
  • With regard to documentation, I always preach the "barely adequate" approach, i.e. do what needs to be done ("adequate") but don't go beyond ("barely"). In order to do this, you need to clearly understand the objectives of each document and the intended audience(s). Sometimes you need separate documents; sometimes you can put all of the content into one document.
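To make the negative-testing idea concrete, here's a hedged sketch in SAS. The %validate_qty macro and its rules are purely illustrative (they're not from any real system); the point is simply that you exercise the boundary values listed above and write an easily-searched marker to the log for each one.

```sas
/* Illustrative only: a hypothetical validation rule, exercised with
   boundary values (zero, one, negative, non-integer, very large). */
%macro validate_qty(qty);
  %if %sysevalf(&qty <= 0) or %sysevalf(%sysfunc(mod(&qty, 1)) ne 0)
    %then %put TESTNOTE: value &qty rejected, as expected;
  %else %put TESTNOTE: value &qty accepted;
%mend validate_qty;

%validate_qty(0);    /* boundary: zero          */
%validate_qty(1);    /* boundary: one           */
%validate_qty(-3);   /* negative input          */
%validate_qty(2.5);  /* non-integer input       */
%validate_qty(1e9);  /* very large input        */
```

Because each call writes a consistent "TESTNOTE:" prefix, you can scan or grep the log for the results rather than reading it end-to-end.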
So, having stressed the importance of testing, let me give you some hints on how I keep the test phase efficient and effective on my projects. Actually, I'm going to offer a series of hints over the next few days. In the first I'll offer some tips for automating your tests, and I'll describe a simple macro that you can use to highlight test results in your log.
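As a taster, here's a minimal sketch of the kind of log-highlighting check I have in mind. The macro name and message prefixes here are my illustrative choices, nothing more:

```sas
/* A bare-bones assertion macro: evaluates a condition and writes a
   distinctive, searchable marker to the log. */
%macro assert(cond, msg);
  %if %eval(&cond) %then %put TESTPASS: &msg;
  %else %put TESTFAIL: &msg;
%mend assert;

/* Usage: compare an actual value against an expected one. */
%let expected=42;
%let actual=42;
%assert(&expected = &actual, record count matches);
```

Searching the log for "TESTFAIL" then tells you instantly whether anything went wrong, which is what makes re-running the whole suite cheap.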

I'll finish today by recommending a suite of SAS macros named FUTS (Framework for Unit Testing SAS programs) from Thotwave. These are available for free download after registering with the site (the download includes documentation and some examples of usage too). Developed by Greg Barnes-Nelson and colleagues, the macros are pure gold.

You can read background to the macros in the following SAS conference papers which chart the development and use of the macro (from their original incarnation as SASUnit through to FUTS):

Automated Testing and Real-time Event Management: An Enterprise Notification System, SUGI 29, 2004

SASUnit: Automated Testing for SAS, PhUSE, 2004

Drawkcab Gnimmargorp: Test-Driven Development with FUTS, SUGI 31, 2006