How do you measure errors in e.g. your web application?
Most people answer: "What do you mean by measuring?" For these people, errors are handled on an ongoing basis. It is like putting out fires, some would say.
Then we have a more evolved group, who say they use some kind of error tracking and a severity scale. This is a good step forward, but they are not really measuring the errors; they have just summarized the number of fires they have to put out.
Error measuring is a vital tool for your development team. It exists to prevent or at least minimize future errors. It can tell you where to focus your testing efforts, what to remember in the future, how to focus your development time and give an indication of your level of quality.
When measuring an error, you track several things: how many errors there are, who made them, how severe they are, when and where they were found, what type they are, and how long they took to fix.
The amount of errors is the easiest thing to measure. Many errors means "you have a problem"; few errors means "it might only be a small one".
Note: Remember to agree on the point from which you start identifying errors. For me, it is the point where the developer says "I am finished" and releases the object for testing. For you it might be different.
Who made the error is always tricky to track, because it gets personal. If the same person is responsible for 10 highly critical errors, you might want to have a talk with him; he is clearly not doing his job. Just remember that he might still be doing his best, and that the errors might be caused by a lack of training or experience.
If one person makes a huge number of minor errors, you have another problem. Either he is lazy, or he is not good at getting into the details.
Note: It is rather common to see developers lose interest in something once it is 90% finished. They want to create new, exciting things, not spend days finishing something "old". If you have a small development team, each individual needs to be able to complete their assigned task. But if you have a large team, you are in luck. With a large team it might be beneficial to hire people who are good at finishing. They can then take over an object from the innovative developers once those developers have lost interest. This way you get all developers doing what they do best, which will reflect positively on the final product.
The severity of errors is a great way to measure quality. If you find a major error, you know that the team did not do what they were supposed to. If the developers tested the object thoroughly before releasing it for testing, you should not encounter any major issues, ever!
Minor errors, on the other hand, are usually found because someone worked in a different manner than expected. In this case, the developers probably did their best.
When the error was found is also a good indicator. If the test team identifies an error almost immediately, you have a bigger problem: errors found quickly indicate a lack of testing, or a lack of understanding of user behaviors and workflows.
An error found 6 months after the product is released is a lesser problem. These errors, even if they are critical, only happen in edge cases.
You should also measure at what stage the errors were found. Was it during the internal testing phase, during the beta, or by the client? An error found internally is always better than one found by the client.
The type of error is a bit more difficult to measure, because there are three different types:
Unforeseeable errors: These errors could not have been prevented, because there has never been a situation in the past where they would have been a problem.
This could be either because you have done the same thing in many other situations (where it worked), or because you are innovating, so the error has no prior reference.
Note: Unforeseeable errors are acceptable in most organizations.
Foreseeable errors: These should have been prevented, either because the problem was logical to anticipate (with the information available) or because a similar situation has happened in the past.
Note: Foreseeable errors are unacceptable in most organizations.
Collateral damage: These could not have been prevented, because they are caused by other errors and are not themselves the source of the problem.
These errors should be omitted from the error measurement results.
The time it takes to fix an error is another important indicator, as it measures the responsiveness of your team. If an error is solved quickly, you have a responsive and responsible team. If not, you either have a team that is overworked, or one that is lazy and irresponsible.
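To make this concrete, here is a minimal sketch of what recording these dimensions could look like. The field names, categories, and the `summarize` helper are all illustrative assumptions, not a prescribed schema; adapt them to your own severity scale and workflow:

```python
from dataclasses import dataclass
from datetime import date
from collections import Counter

@dataclass
class ErrorRecord:
    developer: str    # who made the error
    severity: str     # e.g. "minor", "major", "critical"
    stage: str        # where it was found: "internal", "beta", "client"
    error_type: str   # "unforeseeable", "foreseeable", "collateral"
    reported: date    # when the error was found
    fixed: date       # when it was solved

    @property
    def days_to_fix(self) -> int:
        return (self.fixed - self.reported).days

def summarize(errors):
    """Tally measurable errors per developer and per severity,
    omitting collateral damage as suggested above."""
    counted = [e for e in errors if e.error_type != "collateral"]
    by_developer = Counter(e.developer for e in counted)
    by_severity = Counter(e.severity for e in counted)
    avg_fix_days = sum(e.days_to_fix for e in counted) / len(counted)
    return by_developer, by_severity, avg_fix_days

# Hypothetical sample data.
errors = [
    ErrorRecord("alice", "critical", "internal", "foreseeable",
                date(2011, 5, 2), date(2011, 5, 3)),
    ErrorRecord("bob", "minor", "client", "unforeseeable",
                date(2011, 5, 10), date(2011, 5, 20)),
    ErrorRecord("bob", "minor", "beta", "collateral",
                date(2011, 5, 11), date(2011, 5, 12)),
]
by_dev, by_sev, avg_days = summarize(errors)
print(by_dev)    # the collateral error is excluded from the counts
print(avg_days)
```

The point is not the code itself but the discipline: once every error carries these fields, every question in this article becomes a one-line query.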
The strength of error measuring is that it gives you the ability to identify trends and to identify your weaknesses. You should then take your newfound knowledge and change the way you develop and test in the future.
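As one small example of trend-spotting (assuming each error is logged with the date it was found and a severity, as above), a monthly tally quickly shows whether quality is moving in the right direction:

```python
from collections import Counter
from datetime import date

# Hypothetical log of (date found, severity) pairs.
error_log = [
    (date(2011, 3, 4), "major"),
    (date(2011, 3, 18), "minor"),
    (date(2011, 4, 2), "major"),
    (date(2011, 5, 9), "minor"),
    (date(2011, 5, 23), "minor"),
]

# Count errors per (month, severity); a falling count of major
# errors over successive months suggests your process changes work.
per_month = Counter((d.strftime("%Y-%m"), sev) for d, sev in error_log)
for (month, sev), n in sorted(per_month.items()):
    print(month, sev, n)
```

Even a spreadsheet can do this; what matters is that the tally is reviewed regularly, not filed away.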
Note: Always remember to introduce error measuring as a tool for the people doing the work. It is not something you should introduce for management purposes. Instead, the developers should use it directly, allowing them to improve themselves without interference from management.