Wednesday 26 March 2014

Work Smarter Not Harder


Do you ever feel like you're under pressure as a tester?  Are you comfortable with the amount and quality of testing that features undergo?  Do you feel like corners are being cut?



This may not be an issue with your people, but things may improve if you change some of your processes.

There are several ceremonies (for want of a better word) which, when executed properly, should result in more reliable software being released to customers.  Carrying out these tasks may take longer in the short term - for example, a story may take 5 days to finish rather than 3 or 4.
However, in the long term, this should result in customers giving far less negative feedback about those features.

Some of the resistance to this way of working comes from our innate desire to have everything now, or even yesterday or last month.  So it can be hard to change the short-term mindset from 'We can give this to our customers today' (and sweep the risks under the carpet) to 'Let's spend an extra week on this and make sure it's of higher quality and minimal risk'.  The first mentality may keep your customers happy for a few weeks, until the edge cases start to creep in and come back to bite you as customer complaints. The second option may frustrate some customers in the short term, but you will reap the long-term rewards of happy customers and a priceless reputation for producing quality software.



Described in these terms it looks like a no-brainer to choose the second approach.  However, when you're in the heat of battle, so to speak, with product managers and customers begging for features now, it can be hard not to succumb to the pressure.


To put into context what I've talked about, the practices we incorporate into our development cycle are very briefly described below.  A lot of people will say 'We already do that' but complacency is the enemy of progress, so it does no harm to constantly evaluate whether you are really sticking to these positive behaviours.


Sprint Kickoff - So everyone, including developers, testers and product managers, can start thinking about upcoming work

Story Planning - To get realistic estimates so customer expectations can be set, and to formulate test ideas to share with the developers

Team Huddle (developers and testers get together) - So the whole team can sing from the same hymn sheet on requirements and acceptance criteria before any coding starts.  Specification By Example tests in the Given, When, Then format should be written in readiness for automation (see the sketch after this list).

Automation (using Specification By Example) - You really need to have your developers on board for automation to become a success.  Done properly, this should significantly reduce the manual testing required once the feature reaches the tester. That way you can have confidence that the majority of tests have been covered and can focus your attention on edge cases and Exploratory Testing

Demo to Test - Before committing their code, the developer should demo the feature on their machine if possible and allow the tester to test on it.  That way any bugs can be caught and fixed before the software reaches the test environment.  Catching bugs early can save a lot of time otherwise lost to repeated deployments.

Exploratory Testing (ET) - This should not be the exception, but the rule for the tester.  If all the tasks above have been completed successfully, the tester's main form of testing should only need to be the most interesting kind - ET
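
To make the Specification By Example step a little more concrete, here is a minimal sketch of what one of those Given, When, Then scenarios might look like once it is wired up for automation.  I'm assuming the Python 'behave' library here, and the feature wording and the little Basket class are purely hypothetical illustrations rather than anything from a real product.

# features/deals.feature (the Gherkin agreed at the team huddle):
#   Feature: Two for one deal
#     Scenario: The deal is applied once two pizzas are in the basket
#       Given a two for one deal is active
#       When I add 2 pizzas costing 10.00 each
#       Then the basket total should be 10.00

# features/steps/deals_steps.py
from behave import given, when, then

class Basket:  # hypothetical stand-in for the real domain code
    def __init__(self, two_for_one=False):
        self.two_for_one = two_for_one
        self.prices = []

    def add(self, price):
        self.prices.append(price)

    def total(self):
        # With two-for-one active, the cheaper half of the items is free.
        discount = sum(sorted(self.prices)[:len(self.prices) // 2]) if self.two_for_one else 0
        return sum(self.prices) - discount

@given("a two for one deal is active")
def step_deal_active(context):
    context.basket = Basket(two_for_one=True)

@when("I add 2 pizzas costing 10.00 each")
def step_add_pizzas(context):
    context.basket.add(10.00)
    context.basket.add(10.00)

@then("the basket total should be 10.00")
def step_check_total(context):
    assert context.basket.total() == 10.00

The point is that the examples agreed at the huddle become the automated checks, so the tester is not writing them from scratch after the code arrives.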


It is our job as testers to champion and promote the practices above.  If executed well, they should make all of our lives easier, especially our customers'.



Saturday 31 August 2013

Embrace Uncertainty

Being a software tester can be a bit of an emotional roller coaster at times.  This may sound a little overdramatic, but I think fellow testers will know what I mean.

In this instance I am specifically talking about the area of finding bugs and how it makes me feel. I often find myself not knowing how to feel and therefore have mixed emotions when I discover a new bug.

The bugs are always there so why should I feel differently each time I find one?  A lot of it is dependent on the context.


  • The type of bug (e.g. functional, security, user experience)
  • When was it discovered? (at what point during the release cycle, or at what point since its creation)
  • How reproducible is it?
  • How much of a customer impact would it have?
  • Is it a recurring bug? (is the same developer failing to fix it properly?)
  • In what environment was it found? (e.g. test or live)


Some examples of how I feel when discovering some of these different types of bugs are below.

Sometimes I feel proud of myself for catching something which would have had a major customer impact but which is not immediately obvious.  It can then be fixed so customers will never have to experience it.  On other occasions I feel disappointed that I didn't spot a bug earlier and I start beating myself up.  The further through the release cycle we are when I find a bug the more I start to question myself as to why I didn't find it earlier.  I can get quite excited when I find an obscure bug which exhibits unusual behaviour.  I can get wound up when I find intermittent bugs as it can be very frustrating being unable to reproduce something I saw moments earlier.  I can even almost convince myself that I imagined seeing it.

I think over time and with experience the emotional side of discovering bugs will diminish, though that's not to say I will lose enthusiasm for testing.  I can't see there being a downside to the positive feelings one gets when revealing new bugs.  However, I expect the negative feelings associated with finding a bug to turn into more constructive thinking and, consequently, action.  For example, when I find bugs later in the release cycle I will work out how I can lessen the chance of this happening in future.  Perhaps it is a bug which I would have found if I'd seen it on a developer machine earlier in the life cycle.  Maybe I can improve my note-taking skills so those unreproducible bugs become easier to replicate (I'm sure some of these bugs do still disappear for no apparent reason though!).

My take-home message for this post is that you can never be certain of eradicating all bugs in a system.  If that makes you uncomfortable then perhaps testing is not for you.  Referring back to the title of this post, you must embrace uncertainty and use the associated feelings in a positive and constructive way if you wish to enjoy the job and flourish as a tester.

Monday 20 May 2013

How Techy Should a Non Techy Tester Be?

I recently went to the UK TMF (Test Management Forum) in London, where there were several talks related to testing. The 38th Test Management Forum took place on Wednesday 24 April 2013 at the conference centre at Balls Brothers, Minster Pavement.
One of the talks I chose to attend was hosted by Paul Gerrard and was entitled 'How Techy Should a Non Techy Tester Be?'  I was drawn to this talk as I was really interested to know the answer and to hear the opinions of others who worked in the test industry.  I also aspire to become a more technical tester and was interested to see how far along the continuum I was.
The first thing we discussed was what is actually meant by being technical when applied to testing.  There was a lot of input from everyone, and the areas considered technical included the following (please note, since it's nearly a month ago now, I may have added some areas which weren't mentioned and forgotten others which were): security, performance, Selenium, SpecFlow, developer tools such as Fiddler and Firebug, coding, SQL, and event logs.

It was concluded that a large proportion of testers in the industry fall into the non-technical tester category, in that they do not include any of the areas mentioned above in their testing.  It was also agreed that there are people who specialise in each of the areas above, and they may not even badge themselves as testers but have more glamorous titles such as Performance Specialist or Security Consultant.  This category of testers is obviously a lot smaller than the non-technical one.

So my takeaway from this talk is that to differentiate yourself from the pool of non-technical testers in the marketplace, and have the edge when striving to progress, you either need to become a real specialist in a defined area of testing or brush up on your technical skills over a relatively wide area.  At a minimum you really have to start looking at what's going on behind the scenes when software is running, rather than only at what a lay customer would see.  Otherwise you could be in danger of reinforcing the belief, which should be unjustified, that anyone off the street can be a software tester.

I would like to think I am personally somewhere in the middle of the continuum from 'Not at all technical' to 'Technical Specialist'.  Obviously this continuum is vast, so I'm not giving much away, but I do know that I want to move further towards the technical specialist end from where I am now, and I believe that can only make me a better tester.


Sunday 14 April 2013

Are you a Comfortable Tester?

Having been a tester for over a year now I am beginning to feel comfortable with the role.  But is this a good or a bad thing?  It depends what is meant by comfortable.

If comfortable means doing the exact same things in the same way every day then that is not a good thing in my book.  Perhaps you are a poor soul who has not been given the opportunity to do anything other than follow test scripts from which you must not deviate.  In this case I sincerely hope you are not comfortable in your role.  If you are, then you are unlikely to progress very far in the testing industry.

When I say I'm getting comfortable I am not talking about things becoming easier because they are repetitive.  Of course, there are always some things you will have to do in a certain way: bug reports need to contain all the relevant information, and Gherkin tests should always be written in the Given, When, Then format.

My comfort comes from gaining a better understanding of the product, the environment, the people, the resources, and test techniques.  That is not to say there is not still a lot to learn in all of these areas.

One prime area where complacency can easily creep in is regression testing.  Our product is continually changing, so we can't stick with the same tests every time; regression tests have to be adapted and updated to match the current state of the product, removing tests that are no longer needed and adding new tests for newly coded areas.  Obviously this is not a black and white task.  It can be tempting to remove tests which always pass, but it is not always clear from the outside which parts of the product are interlinked, so you can never assume that some tests will always pass.  Sometimes it might be better to think of the product from a black box point of view, as if you are a customer who has never used the product but is tasked with testing as many areas as possible.   It can be a dangerous trap to fall into to think you know the product inside out and therefore not respect the possibility of unforeseen change.
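
As a rough illustration of keeping a regression suite in step with the product, here is a minimal sketch assuming pytest: tests are tagged by product area so slices of the suite can be selected, pruned or extended deliberately rather than by habit.  The marker names and the tiny search function are hypothetical stand-ins, not anything from our product.

# A hypothetical, minimal regression suite slice; custom markers such as
# 'regression' and 'search' should be registered in pytest.ini.
import pytest

def search(catalogue, term):  # hypothetical stand-in for real product code
    return [item for item in catalogue if term in item]

@pytest.mark.regression
@pytest.mark.search
def test_search_finds_partial_matches():
    # Kept even though it "always passes": search is interlinked with the
    # catalogue code, so never assume it cannot break.
    assert search(["red shirt", "blue hat"], "shirt") == ["red shirt"]

# Run just the slice that matters for a given change, e.g.:
#   pytest -m "regression and search"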

I never want to feel that I know it all and that there is no need for me to keep learning.  I think your days are becoming numbered as a tester as soon as you start to feel this way.  I believe the test industry is one of the fastest changing industries in existence so you can never rest on your laurels.  I always want to be reading about the latest test techniques and tools and spending time with people from whom I can learn more.

So, for want of a better phrase, I believe that to be a good tester you should always try to remain slightly uncomfortable.

Friday 8 February 2013

The Invisible Gorilla - Book Review


I have recently finished reading 'The Invisible Gorilla' by Christopher Chabris and Daniel Simons.
It was quite an interesting read and contained many (perhaps too many) examples of where we can make assumptions and fall into traps.   This is largely because we don't analyse our views as thoroughly as we might and therefore can fool ourselves into believing we have all the information we need to make an observation or decision.  On reading this book I've realised that much of what we see and experience in life, as well as testing, is not always quite what it appears on closer inspection.  I don't think the concept of WYSIATI (what you see is all there is) is referred to in this book but it sums up a lot of what the book describes.  We must sometimes look beyond the immediately obvious visual information as what we see is not always the full picture.  One of the reasons we fall for it even when we are aware of this shortcoming is that we are right the majority of the time in our initial assessment so we have less reason to believe that sometimes we will be wrong in our judgment.  Below I've written about some of the main areas covered in the book.

Confidence
Confidence can be mistaken for an indication of the accuracy of someone's statements.  Equally, a statement delivered by an individual with low confidence can come across as less believable even if they know exactly what they're talking about.  So it's important not to base our own confidence in others' statements on the confidence of their delivery or demeanour.   It's better to make a decision based on a fully informed assessment of the facts rather than the opinions of the most confident or highly ranked.

Familiarity
Another trap we can fall into is believing we know more about a subject than we actually do.  For example, if you were asked whether you know how a bike works you would very likely say 'Yes'.  But if you were asked to describe technically, in detail, how the brakes or gears work, you would find it a lot harder.  This made me realise that understanding the general concept of how things work by no means makes me an expert on the subject, and it brought home how much I don't know.

Memory
The illusion of memory is another area covered, and it describes how often we fill in the gaps in our memory with fictional details, not always deliberately.  Our recounting of an event may also change each time it is recalled, even though we may be confident we know exactly what happened each time the story is told.

Correlation and Causation
I believe we can all be guilty of drawing conclusions based on associations, where two events happen at the same time or one just before the other.  It seems perfectly natural, almost expected, to make a link between the events where there may not necessarily be one.  Even when two events consistently happen together they may not be causally related; there may be a third factor which causes both of the otherwise unrelated events.  Since reading the book I've noticed that news reports will make very suspect associations such as these with very little concrete evidence, using phrases such as 'may be linked' or 'there could be a correlation between'.  I found myself feeling infuriated by this where before I wouldn't have given it a second thought, especially when they suggest questionable links in relation to health and disease, causing unnecessary worry for the public.
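
As a small worked example of that third-factor point (assuming Python 3.10 or later for statistics.correlation), imagine something hypothetical like server load driving both page response times and error counts: the two measurements will correlate strongly even though neither causes the other.

# Correlation without causation: a hidden factor ("load") drives both
# measurements, so they correlate even though neither causes the other.
import random
import statistics

random.seed(1)
load = [random.uniform(0, 100) for _ in range(1000)]        # the hidden third factor
response_ms = [x * 3 + random.gauss(0, 20) for x in load]   # driven by load
errors = [x * 0.1 + random.gauss(0, 2) for x in load]       # also driven by load
print(statistics.correlation(response_ms, errors))          # strongly positive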

In relation to testing, all of the points made in the book are relevant, and it is a very good idea to keep them in mind when making observations, not only of software but also of other people, and to be aware of our own assumptions and interpretations of events.  I highly recommend this book to software testers as it should make you think differently and take your time rather than relying on your first impressions.

Thursday 6 December 2012

Personas

Rather than testing in the same way every time, it can be a good idea to use the product in the way a certain type of person would.  For example, use a website as someone who only uses the keyboard to navigate, so all actions have to be done without the use of a mouse.  Or as someone who always changes their mind, so keep using the back button to revisit pages.  The first is likely to reveal accessibility and usability issues, and the second could reveal inappropriate caching issues.
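
To make the keyboard-only persona concrete, here is a minimal sketch assuming Selenium WebDriver for Python and a local Chrome installation; the URL and the number of TAB presses are just placeholders.

# Keyboard-only persona: drive the site with TAB and ENTER, never the mouse.
from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder site

# Walk the page with TAB only, noting where focus actually lands (or fails to).
for _ in range(10):
    ActionChains(driver).send_keys(Keys.TAB).perform()
    focused = driver.switch_to.active_element
    print(focused.tag_name, focused.get_attribute("aria-label"))

# Activate whatever currently has focus with ENTER rather than a mouse click.
ActionChains(driver).send_keys(Keys.ENTER).perform()
driver.quit()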

I realised the other day that if you can fully get into character then you can really experience and act out how certain users would feel using a product and thus reveal flaws or bugs which you otherwise might not discover.

What follows is a description of my findings when I adopted a persona to test a website without even realising it.

Last Tuesday I wanted to order 2 pizzas from Pizza Hut for my wife and me, and I wanted to do it as soon as possible, in whatever way possible, and as cheaply as possible.  I was very hungry, and when I'm hungry my patience is severely reduced and any small frustrations are magnified.
So I wanted to get my order in as quickly and easily as possible.  Was Pizza Hut online going to be up to the challenge?

So the first thing I did was Google Pizza Hut.  Up came a link to the Pizza Hut menu. Clicking this took me to a menu for all their pizzas, which seemed like a good start.  However, although I wanted pizza fast I also wanted a good price, and that meant getting the best deal I could.  On the page there was no clear indication of any offers available.  I didn't know where to find the deals, so I gave up (remember, I was not in the mood to spend time searching around) and clicked on the 'Order Pizza' button, which took me to a page to select whether I wanted to Order for Delivery or Order for Collection.  I chose delivery and selected the delivery time.  Then I was taken to a page where I could select DEALS, Pizzas, Sides and Dips, and Desserts and Drinks.  Why did I have to get this far before I could find out that any deals were even on offer??  So I clicked on Deals, where I was presented with 11 different deals, including Buy 1 Get 1 Half Price, £21.99 Medium Super Saver (2 medium pizzas and 2 sides), £25.99 Medium Full Works (2 medium pizzas, 2 classic sides, 2 desserts and 1 drink), and Two'sday Tuesday (buy one pizza, get one free).  Luckily for me, and for Pizza Hut, it was a Tuesday so I could get the best deal of 2 pizzas for the price of one.




So I chose a large Stuffed Crust Cajun Chicken Sizzler for myself.  This showed the total cost of £17.49 in the 'Your Order' section of the screen.  I then ordered a large Stuffed Crust Super Supreme at £19.49 for my wife.  The only problem was that the total for my order now showed as £36.98.  'How much??' I said out loud. 'Where's my 2 for 1 deal gone?  You've just told me I'm getting a 2 for 1 deal and now where has it gone??'  Maybe I'd pressed the wrong button or missed something, so I pressed the browser back button a couple of times.  I was hoping this would take me back to the page where I could restart my order and empty my basket.  I was back at the menu page but my basket still showed £36.98!  'How do I empty my basket!!'  By this point I was really getting annoyed and losing the will to live, so I clicked the Checkout button below my basket total.  This took me to another page where, miraculously, my deal was taking effect and a -£17.49 showed that I would only be paying for 1 pizza.  'Why couldn't they give me a clue I was doing the right thing on the previous page??'

Anyway, from here I managed to pay with my debit card with no disasters and my pizza even arrived on time.

From a testing point of view I found the three issues below:

  • My choices were cached and kept in my basket when I tried to start again
  • The back button did not take me back to the original page
  • The price with the offer applied was not shown until the checkout button was pressed

I'm not sure if any of these would be classed as a bug, but it depends whose definition you use.  I would go by Cem Kaner's definition of a bug as 'something that would bug someone that matters', i.e. the customers.

The more important questions are probably how many customers these issues would bug and what the financial impact of any lost sales would be.  Also, does the cost of fixing these issues outweigh the financial gain of keeping the sales?  Product managers are the ones paid to answer these sorts of questions, but they are interesting nonetheless.

In conclusion, I found it very interesting looking back on the persona I had briefly adopted.  It really changed the way I approached the website, which, on another less hungry day, would have caused me no angst at all. More importantly, on that other day it would possibly not have led to me noticing any, or as many, issues with the way the site worked.  I will aim to think up other personas for future use in my testing because, as demonstrated here, it will help find new and interesting software quirks which could otherwise be overlooked.

Sunday 4 November 2012

Context Switching and Multitasking

Although not specific to testing, I feel that software development, particularly in an agile environment, can lead to a lot of inevitable priority changes, sometimes several times a day.  Depending on the urgency of the new priorities, this may mean people have to immediately stop focusing on whatever task they are currently involved with to start working on the latest priority.

This change of attention from one task to another is known as context switching.

The Wikipedia definition of Context Switching refers to computers rather than humans but I feel the definition below can equally apply to people.

A context switch is the computing process of storing and restoring the state (context) of a CPU so that execution can be resumed from the same point at a later time. This enables multiple processes to share a single CPU. The context switch is an essential feature of a multitasking operating system.


Wikipedia also has the following definition for human multitasking:

Human multitasking is the best performance by an individual of appearing to handle more than one task at the same time. The term is derived from computer multitasking. An example of multitasking is taking phone calls while typing an email. Some believe that multitasking can result in time wasted due to human context switching and apparently causing more errors due to insufficient attention.


The notable word in the paragraph above is 'appearing', as I think multitasking, by definition, means one cannot devote 100% of one's attention to more than one task at a time.  As I write this I happen to be watching Match of the Day 2, and I know that this blog is taking a lot longer to write than it would if I were not looking up every time a goal is scored!  A perfect example of how not to multitask.  In my opinion, where focused attention is required, multitasking rarely saves time or produces better quality work compared with doing the same tasks sequentially.

A very important aspect of context switching is not the fact that what you are doing changes, but the fact that some amount of time is used up in getting your mind into a state of focus and readiness for the new task.  In testing, this may extend to getting your environment set up for the new task as well as having to start thinking about something new.  With several context switches per day this wasted time can soon add up.

Interruptions can come in many forms, such as meetings, background conversations, questions, lunch, phone calls, and e-mails, to name a few.  There are measures you can take to minimise these interruptions, such as wearing headphones and turning off e-mail.  Some people use the Pomodoro Technique so that they are left alone for at least 20 minutes at a time.

Personally, I feel there are positives in having at least two things to work on (but not at the same time), so that if what you're working on becomes blocked you can start work on the second.  The key is to remain in control of this switching so as to limit the time 'wasted' in refocusing on the new task.

One of my weaknesses (if it can be seen as such) is that I find it difficult to say no to someone who asks me to look into something for them.  Sometimes I even welcome the interruption if I happen to be stuck in a rut with what I'm currently doing.

Going forward, I feel I need to be more aware of my own context switches, and I will be trying to reduce them to only those which are absolutely necessary or beneficial. I would urge everyone to do the same.