Any system change needs to be tested from two perspectives:
- Has our changed functionality taken effect? (incremental testing)
- Have we broken any existing functionality? (regression testing)
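The two perspectives can be sketched with a small, entirely hypothetical example: a pricing function gains a new bulk-discount tier. The function name and numbers below are illustrative, not from the source.

```python
def discounted_price(price: float, quantity: int) -> float:
    """Return the total price, applying a 10% discount for orders of 10 or more."""
    total = price * quantity
    if quantity >= 10:
        total *= 0.90  # the newly introduced bulk discount
    return total

# Incremental test: has the changed functionality taken effect?
assert discounted_price(5.0, 10) == 45.0  # 10 units now get the discount

# Regression tests: have we broken any existing functionality?
assert discounted_price(5.0, 1) == 5.0    # single item unchanged
assert discounted_price(5.0, 9) == 45.0   # 9 units still pay full price
```

Both kinds of test exercise the same change, but they answer different questions: the first confirms the new behavior exists, the others confirm the old behavior survived.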
At one extreme, such as when building software for passenger jet aircraft, we might adopt the following strategy:
- When introducing a system, write and execute test cases for all testable elements
- When we introduce a new function, we write test cases for the new function, run those new test cases to make sure it works, and re-run all the previous test cases to make sure we didn't break anything (they should all still pass, because nothing else changed, right?)
- When we update existing functionality, we update the existing test cases for that function, run those updated test cases to make sure the change works, and re-run all the previous test cases to make sure we didn't break anything (again, they should all still pass, because nothing else changed)
For most systems, though, exhaustive re-testing is a trade-off, and the practical questions become:
- How much confidence do we need?
- How many tests (and of what type) do we need to establish that degree of confidence?
Every one of our test cases is an investment, not an ephemeral artifact. If we're investing in test cases and managing our technical debt, we're on the way to a responsibly managed development team!