My view of regression testing has gradually changed since I started as a software tester 8 months ago. One of the first things I was introduced to was the regression tests we run for every new kit. This was a good place to start, as it helped me build up my knowledge of the product and get familiar with a wide range of tests. With each run of the regression tests I became more confident, and I found I could get through the tests more quickly each time, with less need to read the test steps word for word. While this was all positive, I could also feel myself getting a bit too comfortable, and I didn't feel I was putting enough thought into my regression testing.
It felt as though this testing was gradually turning into checking, but at the same time I thought this must simply be the nature of the beast. However, running the same manual tests time after time is not what I'd call fun.
So what can we do to change things?
Of course, automation is one tool we can use to reduce the number of regression tests we run manually. Automation is especially useful when the tests follow identical steps every time and the results are simple enough for a computer to interpret.
Some tests cannot be automated, or if they were, their results could not be interpreted reliably by a computer. This may be down to timing issues in running the test, or to something like audio quality assessment of the output, which we cannot (yet?) trust a computer to make a valid judgement on.
However, as well as these special cases, I believe there will always be fundamental manual tests that must be carried out by a human to give confidence that the kit has the basics right, before moving on to the 'more interesting' testing.
Every tester knows it is not possible to run every test variant on a product of any reasonable size, let alone get close to this with every set of regression tests. For this reason regression tests need to be chosen carefully, so that they are of maximum use to the specific kit in question. To remain useful, I believe several factors need to be taken into account when selecting what tests to run as part of a regression suite. I would include the following in my areas for consideration:
- Recent new product features - Areas of the product which have recently been created. Even if they have had thorough testing, I would generally still want to focus on a new area rather than an older one. I would expect more bugs to be lurking there, as new areas have had less exposure to varied input than more established parts of the product.
- Customer impact - If I had to choose between testing only area A or area B, both of equal size, and was told that area A was considered essential to 70% of customers while area B was important to only 30%, I would choose to test area A.
- Time - We can obviously only perform a finite amount of testing within whatever time frame we have. So, continuing the example above, in simple terms, if I had 100 minutes to test I would spend 70 minutes on area A and 30 minutes on area B.
- Bug fixes - Even when bugs have been fixed and successfully retested, I still like to focus on these areas: they are effectively newer code now, so the same principle applies as for new feature areas.
- Refactoring - Again, for the same reasons as new features and bug fixes.
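To make the time example concrete, the proportional split above can be sketched in a few lines of code. This is purely an illustration of the idea, not a real tool we use; the function name and the weighting scheme (minutes allocated in proportion to customer impact) are my own invention:

```python
def allocate_test_time(total_minutes, impact_weights):
    """Split a testing budget across areas in proportion to their weights.

    impact_weights maps each area to a relative importance score
    (e.g. the percentage of customers it matters to). Returns the
    number of minutes to spend on each area, rounded to whole minutes.
    """
    total_weight = sum(impact_weights.values())
    return {
        area: round(total_minutes * weight / total_weight)
        for area, weight in impact_weights.items()
    }

# The example from the text: 100 minutes, area A at 70%, area B at 30%.
print(allocate_test_time(100, {"A": 70, "B": 30}))
```

In practice the other factors above (new features, bug fixes, refactoring) would feed into the weights too, which is exactly where the judgement comes in.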
Targeted and meaningful regression testing should include the core tests which cannot be automated, but should also take all of the factors above into account. The examples above are obviously very simplified, so deciding what to test and for how long is not straightforward. I think part of the skill of being a tester is weighing up all the factors and choosing one's tests wisely rather than blindly following a regression test script. I'd like to think that's how regression testing is done in the majority of companies, but I fear it may not be.