Saturday 31 August 2013

Embrace Uncertainty

Being a software tester can be a bit of an emotional roller coaster at times.  This may sound a little over-dramatic, but I think fellow testers will know what I mean.

In this instance I am specifically talking about the area of finding bugs and how it makes me feel. I often find myself not knowing how to feel and therefore have mixed emotions when I discover a new bug.

The bugs are always there, so why should I feel differently each time I find one?  A lot of it depends on the context:


  • The type of bug (e.g. functional/security/user experience)
  • When was it discovered? (at what point during the release cycle, or at what point since its creation)
  • How reproducible is it? 
  • How much of a customer impact would it have?
  • Is it a recurring bug? (is the same developer failing to fix it properly?)
  • In what environment is it found? (e.g. test or live)


Below are some examples of how I feel when discovering these different types of bugs.

Sometimes I feel proud of myself for catching something which would have had a major customer impact but which was not immediately obvious.  It can then be fixed so customers will never have to experience it.  On other occasions I feel disappointed that I didn't spot a bug earlier and I start beating myself up.  The further through the release cycle we are when I find a bug, the more I question why I didn't find it earlier.  I can get quite excited when I find an obscure bug which exhibits unusual behaviour.  I can get wound up when I find intermittent bugs, as it can be very frustrating being unable to reproduce something I saw moments earlier.  I can even almost convince myself that I imagined seeing it.

I think over time and with experience the emotional side of discovering bugs will diminish, though that is not to say I will lose enthusiasm for testing.  I can't see there being a downside to the positive feelings one gets when revealing new bugs.  However, I expect the negative feelings associated with finding a bug to turn more into constructive thinking and, consequently, action.  For example, when I find bugs later in the release cycle I will work out how I can lessen the chance of this happening in future.  Perhaps it is a bug which I would have found if I'd seen it on a developer machine earlier in the life cycle.  Maybe I can improve my note-taking skills so those unreproducible bugs become easier to replicate (I'm sure some of these bugs do still disappear for no apparent reason though!).

My take-home message for this post is that you can never be certain of eradicating all bugs in a system.  If that makes you uncomfortable then perhaps testing is not for you.  Referring back to the title of this post, you must embrace uncertainty and use the associated feelings in a positive and constructive way if you wish to enjoy testing and flourish as a tester.

Monday 20 May 2013

How Techy Should a Non Techy Tester Be?

I recently went to the UK TMF (Test Management Forum) in London, where there were several talks related to testing. The 38th Test Management Forum took place on Wednesday 24 April 2013 at the conference centre at Balls Brothers, Minster Pavement.
One of the talks I chose to attend was hosted by Paul Gerrard and was entitled 'How Techy Should a Non Techy Tester Be?'  I was drawn to this talk as I was really interested to know the answer and to hear the opinions of others who worked in the test industry.  I also aspire to become a more technical tester and was interested to see how far along the continuum I was.
The first thing we discussed was what is actually meant by 'technical' when applied to testing.  There was a lot of input from everyone, and the areas considered technical included the following (please note, since it's nearly a month ago now, I may have added some areas which weren't mentioned and forgotten others which were): security, performance, Selenium, SpecFlow, developer tools such as Fiddler and Firebug, coding, SQL, and event logs.

It was concluded that a large proportion of testers in the industry fall into the non-technical tester category, in that they do not include any of the areas mentioned above in their testing.  It was also agreed that there are people who specialise in each of those areas, and they may not even badge themselves as testers but have more glamorous titles such as Performance Specialist or Security Consultant.  This category of testers is obviously a lot smaller than the non-technical one.

So my takeaway from this talk is that to differentiate yourself from the pool of non-technical testers in the marketplace, and have the edge when striving to progress, you either need to become a real specialist in a defined area of testing, or brush up on your technical skills over a relatively wide area.  At a minimum you really have to start looking at what's going on behind the scenes when software is running, rather than only at what any lay customer would see.  Otherwise you could be in danger of reinforcing what should be an unjustified belief among many: that anyone off the street can be a software tester.

I would like to think I am personally somewhere in the middle of the continuum from 'Not at all technical' to 'Technical Specialist'.  Obviously, this continuum is vast so I'm not giving much away, but I do know that I want to move further in the direction of the technical specialist from where I'm at and I believe that can only make me a better tester.


Sunday 14 April 2013

Are you a Comfortable Tester?

Having been a tester for over a year now I am beginning to feel comfortable with the role.  But is this a good or a bad thing?  It depends what is meant by comfortable.

If comfortable means doing the exact same things in the same way every day, then that is not a good thing in my book.  Perhaps you are a poor soul who has not been given the opportunity to do anything other than run test scripts from which you must not deviate.  In this case I sincerely hope you are not comfortable in your role.  If you are, then you are unlikely to progress very far in the testing industry.

When I say I'm getting comfortable I am not talking about things becoming easier because they are repetitive.  Of course, there are always some things you will have to do in a certain way: bug reports need to contain all the relevant information, and Gherkin tests should always be written in the Given, When, Then format.
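
To give a flavour of that format, here is a minimal, made-up Gherkin scenario; the feature and the steps are purely hypothetical and not taken from any real product:

  # Hypothetical example only, to illustrate the Given, When, Then structure
  Feature: Customer login
    Scenario: Logging in with valid credentials
      Given a registered customer with the username "jsmith"
      When they log in with the correct password
      Then they are shown their account home page

Tools such as SpecFlow can then bind each step to automation code, but the readable Given, When, Then structure stays the same.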

My comfort comes from gaining a better understanding of the product, the environment, the people, the resources, and test techniques.  That is not to say there is not still a lot to learn in all of these areas.

One prime area where complacency can easily creep in is regression testing.  Our product is continually changing, so we can't stick with the same tests every time; I have to adapt and update the regression tests to match the current state of the product, removing tests which are no longer needed and adding new tests for newly coded areas.  Obviously this is not a black and white task.  It can be tempting to remove tests which always pass, but it is not always clear from the outside which parts of the product are interlinked, so you can never assume that some tests will always pass.  Sometimes it might be better to think of the product from a black box point of view, as if you are a customer who has never used the product but is tasked with testing as many areas as possible.  It can be a dangerous trap to think you know the product inside out and therefore not respect the possibility of unforeseen change.

I never want to feel that I know it all and that there is no need for me to keep learning.  I think your days are becoming numbered as a tester as soon as you start to feel this way.  I believe the test industry is one of the fastest changing industries in existence so you can never rest on your laurels.  I always want to be reading about the latest test techniques and tools and spending time with people from whom I can learn more.

So for want of a better phrase I believe to be a good tester you should always try to remain slightly uncomfortable.

Friday 8 February 2013

The Invisible Gorilla - Book Review


I have recently finished reading 'The Invisible Gorilla' by Christopher Chabris and Daniel Simons.
It was quite an interesting read and contained many (perhaps too many) examples of where we can make assumptions and fall into traps.  This is largely because we don't analyse our views as thoroughly as we might, and can therefore fool ourselves into believing we have all the information we need to make an observation or decision.  On reading this book I've realised that much of what we see and experience in life, as well as in testing, is not always quite what it appears to be on closer inspection.  I don't think the concept of WYSIATI (what you see is all there is) is referred to in this book, but it sums up a lot of what the book describes.  We must sometimes look beyond the immediately obvious visual information, as what we see is not always the full picture.  One of the reasons we fall for these illusions, even when we are aware of the shortcoming, is that our initial assessment is right the majority of the time, so we have less reason to believe that our judgment will sometimes be wrong.  Below I've written about some of the main areas covered in the book.

Confidence
Confidence can be mistaken for an indication of the accuracy of someone's statements.  Equally, a statement delivered by an individual with low confidence can come across as less believable, even if they know exactly what they're talking about.  So it's important not to base our own confidence in others' statements on the confidence of their delivery or demeanour.  It's better to make a decision based on a fully informed assessment of the facts rather than on the opinions of the most confident or highly ranked.

Familiarity
Another trap we can fall into is believing we know more about a subject than we actually do.  For example, if you were asked whether you know how a bike works you would very likely say 'Yes'.  But if you were asked to describe technically, in detail, how the brakes or gears work, you would find this a lot harder.  This made me realise that understanding the general concept of how something works by no means makes me an expert on the subject, and it showed me just how much I don't know.

Memory
The illusion of memory is another area covered, describing how we often fill in the gaps in our memory with fictional details, and this is not always deliberate.  Our recounting of an event may also change each time it is recalled, even though we may be confident we know exactly what happened each time the story is told.

Correlation and Causation
I believe we can all be guilty of drawing conclusions based on associations, where two events happen at the same time or one just before the other.  It seems perfectly natural, almost expected, to make a link between the events where there may not necessarily be one.  Even when two events consistently happen together they may not be causally related; there may be a third factor which leads to both of the otherwise unrelated events happening.  Since reading the book I've noticed that news reports make very suspect associations of this kind with very little concrete evidence, using phrases such as 'may be linked' or 'there could be a correlation between'.  I found myself feeling infuriated by this where before I wouldn't have given it a second thought, especially when questionable links are suggested in relation to health and disease, causing unnecessary worry for the public.

In relation to testing, all of the points made in the book are relevant, and it is a very good idea to keep them in mind when making observations.  This applies not only to observations of software but also of other people, and it means being aware of our own assumptions and interpretations of events.  I highly recommend this book to software testers, as it should make you think differently and take your time rather than relying on your first impressions.