Top Glitches Played Havoc With Reputations in 2011

As the year winds to a close, Susan Chadwick of Edge Testing Solutions explores the minefield that is risk-based software testing and rates her top glitches for 2011

There will always be a media frenzy surrounding big software failures. But as software grows increasingly complex and requirements ever more demanding, it is really quite surprising that we haven't seen more glitches hit the headlines this year.

Of course, software working well, with no glitches reported and customers all happy, is news that will never make the headlines, although it would be nice once in a while to see the ICT sector get a bit of credit for what it is achieving!

Small errors can have a big impact

The term ‘glitch’ has become shorthand for all manner of failures, and while we can’t always be sure exactly what went wrong in the high-profile cases, it is clear that what might seem like a small error can have a big impact on reputation and the bottom line.

This is where the importance of risk-based testing comes into play. It is crucial that we deliver systems that perform well throughout their lifecycle, and this means testing from the outset, so that testing is responsible not just for finding defects but for preventing them.

Historically, the testing focus has been on trying to cover everything. Today, with finite time and resources, it is clearly neither practical nor cost-effective to test every combination and variable, so the impact and probability of a failure should always be taken into account in a risk-based analysis.

The Boston Matrix approach is a robust and clear method for assessing potential risks and determining which areas need the most protection. Following this approach, if an element is high impact in terms of reputation and cost, as well as high probability, then more time should be spent testing it.
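To make the idea concrete, here is a minimal sketch, illustrative only and not drawn from any particular project, of how impact and probability can be combined into a simple risk score so that testing effort goes where a failure would hurt most. The test areas, scores and thresholds below are hypothetical examples.

# Illustrative sketch only: scoring hypothetical test areas by
# impact x probability, as in a Boston Matrix, and ordering testing
# effort accordingly.
from dataclasses import dataclass

@dataclass
class TestArea:
    name: str
    impact: int       # 1 (low) to 5 (high): reputational and financial cost if it fails
    probability: int  # 1 (low) to 5 (high): likelihood of a failure

    def risk_score(self) -> int:
        return self.impact * self.probability

areas = [
    TestArea("Online payments", impact=5, probability=4),
    TestArea("Price calculator", impact=4, probability=3),
    TestArea("Help page layout", impact=1, probability=2),
]

# Test the highest-risk areas first and give them the most time.
for area in sorted(areas, key=lambda a: a.risk_score(), reverse=True):
    band = "high priority" if area.impact >= 3 and area.probability >= 3 else "lower priority"
    print(f"{area.name}: risk {area.risk_score()} ({band})")

In Boston Matrix terms, the high-impact, high-probability quadrant is where the bulk of the testing budget should go, while low-impact, low-probability areas can safely receive lighter coverage.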

Of course, it is even more difficult when multiple organisations and different systems are involved. We should not underestimate how complicated software is today, so it is important to choose a testing approach that will optimise your chance of success with the resources at your disposal.

When things do go wrong, the key is how companies recover, learn the lessons and put systems in place so we don’t see the same mistakes happening again. If the testing process is driven by clear business objectives, such as quality of functionality and avoiding reputational damage or loss of revenue, then there will be a better chance of delivering business benefits, and of avoiding headlines like those that made this list.

Top Glitches of 2011

3 January: ‘Thousands of iPhone Users Hit By Software Failure’

Apple customers didn’t have the best start to the New Year after a software glitch silenced iPhone alarms for two days at the beginning of January.

1 February: ‘Software Glitch Costs Tax Office Millions’

Problems with HM Revenue and Customs’ software resulted in both overpayments and underpayments, and left many a taxpayer facing unexpected and unwelcome tax demands.

10 October: ‘Blackberry Services Collapse’

BlackBerry scored an own goal in its ongoing battle with Apple after a glitch at the RIM data centre in Slough. Messaging services and internet access were affected, causing a three-day global meltdown.

2 December: ‘Royal Mail’s Price Finder Website Hits Glitch’

Royal Mail dampened the festive spirit after technical difficulties forced the closure of its price finder page. Other services, including online stamp buying, redirection and redelivery, were affected for over a week.

Susan Chadwick is the co-founder of Edge Testing Solutions