Process TestLab: Looking at some of the test results

Last week we put out a short animation showing some of the results we collected in the Process TestLab over the past year. We intentionally provided no explanation or analysis of the figures, simply because we felt that any attempt to explain would detract from the scale of the problem (as we see it).

But of course, with the benefit of having sat down with our clients to discuss the test results of their processes, we do have a very good impression of some of the causes that have led to the results we published.

Before we look at those numbers in more detail, here are some general observations to provide some context in which to view the results:

The Process TestLab acts as an independent body and is limited to conducting tests and quality assessments on business processes and providing clients with the test results. The Process TestLab does not offer advice or consulting services based on those results.

The figures we published were based on tests of several hundred processes submitted by clients from various industries. The processes ranged from technical delivery processes to management processes to HR processes.

In almost all cases, the Process TestLab was tasked with planning and conducting a validation test. The objective of the validation test is to allow project teams, process designers and other interested groups to experience the to-be process before implementation. To enable this, our validation system actually runs a scaled-down version of the process (no trimmings, no fancy user interfaces, etc.) and makes the process available to the people or roles involved. Think of putting a process into a virtual environment like Second Life and then working on the tasks defined in the process. The quality issue at stake here is not so much ‘Does the process work at all?’ but ‘Does the process work the way I want it to work?’
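For illustration only, here is a minimal sketch in Python of what such a scaled-down walkthrough could look like: the process is reduced to an ordered list of tasks, and each role receives a worklist of the steps it would have to perform. All role and task names are invented, and our actual validation system is of course considerably richer than this.

```python
# Purely illustrative sketch of a scaled-down process walkthrough.
# Role and task names are invented examples.

from collections import defaultdict

# The to-be process, reduced to an ordered list of (role, task) pairs.
process = [
    ("Sales",     "Record customer order"),
    ("Finance",   "Approve credit limit"),
    ("Sales",     "Confirm order to customer"),
    ("Logistics", "Schedule delivery"),
]

def worklists(tasks):
    """Group the tasks by role, keeping process order, so each participant
    sees exactly the steps they would have to perform in the to-be process."""
    lists = defaultdict(list)
    for step, (role, task) in enumerate(tasks, start=1):
        lists[role].append(f"step {step}: {task}")
    return lists

for role, items in worklists(process).items():
    print(f"{role}:")
    for item in items:
        print(f"  {item}")
```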

This validation test is actually phase 2 of our testing procedure and is preceded by a test for formal and logical errors in the original process design. The reason is simple when you think about it: we can only validate a process that can actually be run.
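To give a flavour of what a phase-1 test looks for, here is a simple, purely illustrative sketch in Python. It checks two formal properties of a process graph: can every task be reached from the start, and can the process still finish from every task? The example graph and the error messages are invented for this sketch, not taken from our actual test catalogue.

```python
# Illustrative phase-1 check on a process graph: every task must be
# reachable from the start, and the end must be reachable from every task.

from collections import deque

def reachable(graph, start):
    """Breadth-first search: all nodes reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def formal_errors(graph, start, end):
    errors = []
    from_start = reachable(graph, start)
    for node in graph:
        if node not in from_start:
            errors.append(f"unreachable task: {node}")
    # Reverse all edges to test which tasks can still reach the end.
    reverse = {}
    for node, successors in graph.items():
        for succ in successors:
            reverse.setdefault(succ, []).append(node)
    to_end = reachable(reverse, end)
    for node in graph:
        if node not in to_end:
            errors.append(f"process cannot finish from: {node}")
    return errors

# Invented example: 'archive' is never reached, and the 'review' loop has no exit.
flow = {
    "start":   ["review"],
    "review":  ["review"],
    "archive": ["end"],
    "end":     [],
}
print(formal_errors(flow, "start", "end"))
```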

Enough of that, let’s take a closer look at those test results. First off, there were no obvious differences in the error rates or error types that could be attributed to particular industries or process types. So when we state that …

on average, we found 126 design errors per business process,

this number shows no correlation with any particular industry.

Of course, the absolute number of errors and warnings should not be given too much emphasis, as there is a correlation with the size, the scope and the complexity of the process tested. There is also the fact that, for technical reasons, some errors and warnings appear more than once on our result sheet, albeit in different error classes.

One of our first reactions to the sheer number of errors found was that we were probably only getting to test processes about which our clients themselves felt insecure. Rather like a doctor in a hospital who sees only sick patients, we thought we were handed only those processes which actually required testing and error identification. Unfortunately, this turned out to be a wrong assumption.

We also know that some clients asked us to test process designs that had not been finalized, but on which they wanted quick feedback regarding the impact on other processes. This will probably have contributed to the absolute number of errors we identified.

Any way you look at it, it doesn’t really matter if you have 126 or ‘only’ 20 reasons why your process will not work.

We will look in more detail at some of the more striking and representative causes of these error rates in future postings, but in general terms we can draw the following conclusions:

Distributed process design (especially when dealing with large-scale and complex processes) seems to lead to a higher number of errors in data and document flow. We have found several examples of process descriptions that required a data or document input which could not have been available at the required time and place in the logical structure of the process.
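A simplified illustration of this type of data-flow check, assuming a strictly sequential process (real processes branch and merge, which makes the check considerably harder). Task and document names are invented:

```python
# Illustrative data-flow check on a strictly sequential process:
# walk the tasks in execution order and flag any required document
# that does not yet exist. Task and document names are invented.

def data_flow_errors(tasks):
    """tasks: list of (name, inputs, outputs) in execution order."""
    available, errors = set(), []
    for name, inputs, outputs in tasks:
        for doc in inputs:
            if doc not in available:
                errors.append(f"'{name}' needs '{doc}' before it exists")
        available.update(outputs)
    return errors

process = [
    ("Draft contract",   [],                  ["contract draft"]),
    ("Sign contract",    ["approved draft"],  ["signed contract"]),  # too early
    ("Approve draft",    ["contract draft"],  ["approved draft"]),   # too late
    ("Archive contract", ["signed contract"], []),
]
print(data_flow_errors(process))
# -> ["'Sign contract' needs 'approved draft' before it exists"]
```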

Process designs can easily reach a level of complexity that makes manual testing near impossible. We have pointed out to clients that some error types should not be regarded as the fault of the process designers but are down to human limitations. Our results clearly show that this is true for designing multi-layered rule systems as well as for processes that are highly integrated into complex process architectures and have a high level of interaction with other processes. On the other hand, breaking complex process structures down into a number of simpler process designs and linking them together can be just as counterproductive, because any change to one model then has to be checked in the context of all connected models.
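The knock-on effect of linked models can be shown with a small, hypothetical sketch: whenever one model changes, everything transitively connected to it has to be re-checked. Model names and links are invented:

```python
# Illustrative sketch: a change to one model forces a re-check of every
# model transitively connected to it. Model names and links are invented.

def models_to_recheck(links, changed):
    """links: model -> models it exchanges data or control with."""
    affected, stack = set(), [changed]
    while stack:
        model = stack.pop()
        if model in affected:
            continue
        affected.add(model)
        stack.extend(links.get(model, []))
    affected.discard(changed)
    return affected

links = {
    "Order entry":  ["Credit check", "Fulfilment"],
    "Credit check": ["Order entry"],
    "Fulfilment":   ["Order entry", "Invoicing"],
    "Invoicing":    ["Fulfilment"],
}
print(sorted(models_to_recheck(links, "Credit check")))
# -> ['Fulfilment', 'Invoicing', 'Order entry']: one change, three re-checks
```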

As a side note, we also found no significant difference between clients who used formal design tools like ARIS, Casewise or others and clients who used more ‘liberal’ approaches like Visio or even pure text descriptions.

Likewise, clients who had stringent design rules in place fared no better than those who relied on more laissez-faire guidelines. Maybe the laissez-faire approach was compensated for, to an extent, by better internal organisational mechanisms (more emphasis on project staffing, more interaction between process designers, etc.).

On the whole, although our primary wish is to be able to classify the processes we test as ‘process quality approved’, meaning free of errors and ready to implement, we believe that methodical testing of processes before hand-over to implementation needs far more attention than it currently receives. We also know, from our own experience as well as from client statements, that the separation of process design project, process implementation project and process operations, with the changing responsibilities that accompany these phases, often leads to a ‘let THEM deal with the consequences’ attitude.

While process testing can be a tedious manual exercise, and one that many will feel should be avoided at all costs, it is precisely for cost reasons that companies should insist on it: past studies from Hewlett-Packard and others have shown a sharp rise in the cost of correcting process errors the later they are found. The studies differ in their absolute estimates, but in relative terms the cost rises to 50 to 100 times the amount required to correct the same error during the design phase.

In simple terms, that means an error identified and corrected during the design phase at a baseline cost of €100 will quickly develop into a €1,000 correction effort during IT implementation, and will eventually require €7,500 to rectify during the actual running of the process. Do the maths with those 20 errors (or even 126 errors) and you’ll quickly realize why many process-related projects exceed their budget and still will not, and maybe cannot, deliver to expectations.
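For the record, the back-of-envelope maths using the figures quoted above:

```python
# Back-of-envelope maths with the per-error cost figures quoted above (EUR).
design, implementation, operation = 100, 1_000, 7_500

for n in (20, 126):
    print(f"{n} errors: EUR {n * design:,} in design, "
          f"EUR {n * implementation:,} in implementation, "
          f"EUR {n * operation:,} in operation")
# 20 errors: EUR 2,000 in design, EUR 20,000 in implementation, EUR 150,000 in operation
# 126 errors: EUR 12,600 in design, EUR 126,000 in implementation, EUR 945,000 in operation
```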

You might also like to read:

Why do so many BPM projects fail? – Interesting discussion on the ebizQ forum

High Pressure on Process Quality

Republished with author's permission from original post.
