Errors - Can we measure them?

Written by Thomas Baekdal on March 21, 2005

How do you measure errors in, say, your web application?

Most people answer: "What do you mean by measuring?" For these people, errors are handled on an ongoing basis. It is like putting out fires, some would say.

Then we have a more evolved group who say they use some kind of error tracking and a severity scale. This is a good step forward, but they are not really measuring the errors; they have just summarized the number of fires they have to put out.

So what is error measuring exactly?

Error measuring is a vital tool for your development team. It exists to prevent, or at least minimize, future errors. It can tell you where to focus your testing efforts, what to remember in the future, and how to prioritize your development time, and it gives an indication of your level of quality.

When measuring an error you track (see the record sketch after this list):

  1. How many errors did you find?
  2. Who caused them?
  3. What are their severities?
  4. When did you find them?
  5. What type of error is it?
  6. How quickly were they solved?
  7. What was the complexity of the project?
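
To make this concrete, here is a minimal sketch of what such an error record could look like. This is just an illustration in Python, not part of any particular tracking tool; the class and field names are made up for the example.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class ErrorRecord:
        caused_by: str                 # 2. who caused the error
        severity: int                  # 3. e.g. 1 (minor) to 5 (critical)
        found_at: datetime             # 4. when it was found
        error_type: str                # 5. "unforeseeable", "foreseeable" or "collateral"
        stage: str                     # where it was found: "internal", "beta" or "client"
        resolved_at: Optional[datetime] = None  # 6. when it was solved
        project_complexity: int = 1    # 7. a relative measure of project size

    # 1. "How many errors did you find?" is then simply len(records).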

How many errors did you find?

This one is the easiest to measure. Many errors means "you have a problem"; few errors means "it might only be a small one".

Note: Remember to agree on the point from which you start counting errors. For me, it is from the point where the developer says "I am finished" and releases the object for testing. For you it might be different.
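
As a sketch, counting from that agreed point could look like this, using the illustrative ErrorRecord above; the cut-off is whatever moment you agreed on.

    # Count only the errors found after the agreed starting point - here,
    # the moment the developer released the object for testing.
    def count_errors(records, released_for_testing):
        return sum(1 for r in records if r.found_at >= released_for_testing)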

Who caused them?

This one is always tricky, because it gets personal. If you have 10 highly critical errors caused by the same person, you might want to have a talk with him - he is clearly not doing his job. Just remember that he might still be doing his best; the errors might be caused by a lack of training or experience.

If one person makes a huge number of minor errors, you have another problem. Either he is lazy - or he is not good with details.
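
A simple way to surface both patterns is to tally errors per person, split by severity. A sketch, again using the record above; the threshold of 4 on the assumed 1-5 scale is my choice, not a standard.

    from collections import Counter

    # Tally errors per person, split into critical and minor, so patterns
    # like "10 critical errors from one person" stand out.
    def errors_by_person(records, critical_threshold=4):
        critical, minor = Counter(), Counter()
        for r in records:
            bucket = critical if r.severity >= critical_threshold else minor
            bucket[r.caused_by] += 1
        return critical, minor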

Note: It is rather common to see developers lose interest in something once it is 90% finished. They want to create new and exciting things - not spend days finishing something "old". If you have a small development team, each individual needs to be able to complete their assigned tasks. But if you have a large team, you are in luck. With a large team it might be beneficial to hire people who are good at finishing. They can then take over an object from the innovative developers once its developer has lost interest in it. By doing this you get all developers doing what they do best - which will reflect positively on the final product.

What are their severities?

Severity of errors is a great way to measure quality. If you find a major error, you know that the team did not do what they were supposed to do. If the object was tested thoroughly before it was released for testing - you should not encounter any major issues, ever!

On the other hand, minor errors are usually found because the person worked in a different manner than expected. In this case the developers probably did their best.
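
One way to turn severities into a single quality indicator is a severity-weighted error count, normalized by project complexity (item 7 on the list) so that small and large projects can be compared. The weighting and normalization below are my illustration, not something the measurement requires.

    # Severity-weighted error count, normalized by project complexity so
    # that projects of different sizes can be compared. Lower is better.
    def weighted_error_score(records, complexity):
        return sum(r.severity for r in records) / max(complexity, 1)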

When did you find them?

This one is also a good indicator. If the test team identifies an error almost immediately, you have a bigger problem. Errors that are found quickly indicate a lack of testing - or a lack of understanding of user behaviors and workflows.

By contrast, an error found 6 months after the product is released is a lesser problem. These errors, even if they are critical, only happen in edge cases.

You should also measure the stage at which the errors were found. Was it during the internal testing phase, during beta, or by the client? An error found internally is always better than one found by the client.
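
Both ideas are easy to express as simple measures - how long an error survived before discovery, and where it surfaced. A sketch, with the same assumed record fields as above:

    from collections import Counter

    # How many errors surfaced at each stage; "internal" is the best
    # outcome, "client" the worst.
    def errors_by_stage(records):
        return Counter(r.stage for r in records)

    # Days from release for testing until discovery; a very short delay
    # suggests a lack of testing before release.
    def detection_delay_days(record, released_for_testing):
        return (record.found_at - released_for_testing).days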

What type of error is it?

This one is a bit difficult, because there are three different types of errors.

Unforeseeable errors: These errors could not have been prevented, because there has never been a situation in the past where it would have been a problem.

This could either be because you have done the same thing in many other situations (where it worked) - or because you are innovating, so the error has no prior reference.

Note: Unforeseeable errors are acceptable in most organizations.

Foreseeable errors: These should have been prevented, either because the problem was logically predictable (with the information at hand) or because a similar situation has happened in the past.

Note: Foreseeable errors are unacceptable in most organizations.

Collateral damage: These could not have been prevented, because they are caused by other errors and are not in themselves the source of the problem.

These errors should be omitted from the error measuring result.
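
In practice that means filtering them out before computing any of the other measures - a short sketch, assuming the error_type field from the record above:

    # Collateral-damage errors are excluded from the measurement; only
    # the source errors count toward the result.
    def measurable(records):
        return [r for r in records if r.error_type != "collateral"]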

How quickly were they solved?

Another important indicator, as it measures how quickly your team responds. If an error is solved quickly, you have a responsive and responsible team.

If not - you either have a team that is overworked, or one that is lazy and irresponsible.
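
A robust way to put a number on this is the median time from discovery to fix, counting only errors that have actually been resolved. A sketch:

    from statistics import median

    # Median days from discovery to fix, over the resolved errors only.
    def median_resolution_days(records):
        days = [(r.resolved_at - r.found_at).days
                for r in records if r.resolved_at is not None]
        return median(days) if days else None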


What now?

The strength of error measuring is that it gives you the ability to identify trends and weaknesses. You should then take your newfound knowledge and change the way you develop and test in the future.
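
The simplest trend to start with is errors over time - for example, errors found per month. A sketch:

    from collections import Counter

    # Errors found per month, e.g. {"2005-03": 7, ...} - a simple way to
    # see whether the trend is moving in the right direction.
    def errors_per_month(records):
        return Counter(r.found_at.strftime("%Y-%m") for r in records)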

Note: Always remember to introduce error measuring as a tool for the people doing the work. It is not something you should introduce for management purposes. Instead, the developers should use it directly - allowing them to improve themselves without interference from management.

Thomas Baekdal

Founder of Baekdal, author, writer, strategic consultant, and new media advocate.
