
Rapposcience Labs1

‘There is no doubt that it was a disaster for the laboratory. It was the first time that a client had withdrawn from a contract so soon, and it was our fault entirely. It was also a disaster for Vincent [De Smet]. I feel sorry for him. I had known him for years. He was a good guy with seemingly unlimited energy and a host of good ideas. But in the end he had to go.’ (Peter Mertens, Chief Scientist, Rapposcience Labs)

Peter Mertens was talking about his predecessor, Vincent De Smet, who was in charge of the
Laboratories (simply known internally as ‘the Lab’) when one of their larger clients, MGQ Services, an
extraction services firm, had exercised their right to withdraw from a commercial contract with
Rapposcience for ‘persistent and significant failure to comply with testing and analytical performance’.
This came as a shock to the Lab because, although they were aware that their performance had not
been entirely satisfactory, MGQ had not formally complained about the Lab’s performance. MGQ’s
withdrawal not only created a hole in the Lab’s revenue projections, it also attracted enough negative
publicity in the industry for the Lab’s Private Equity owners, Brighthorpe Holdings, to replace Vincent
De Smet with Peter Mertens. With a background in analytical and industrial forensic testing, Peter
started the job of rescuing the Lab’s reputation.

Rapposcience Labs
Rapposcience Labs was located at Beveren near Antwerp in Belgium. In the past, it had been one of
the most reputable labs for analysing mineral deposit, soil, and mixed inert and biological samples for a number of clients, mainly from the extraction (mining) and oil and gas industries, and from public environmental agencies.
It employed 47 staff, almost all with a science or technical background, the majority in testing and
analysis roles, together with some in administrative and sales roles. Up until the MGQ ‘disaster’,
Brighthorpe had adopted a ‘hands off’ policy towards how the Lab was run. That changed after De
Smet’s replacement, and Peter Mertens had been given the clear message that he must turn
Rapposcience around, or its future would be bleak. ‘We lost the MGQ contract in February. Ironically,
the previous 12 months had brought in record levels of business for the Lab. Yet it was business won
by undercutting rivals on price. In fact, with hindsight, it is obvious that we had been running at a
marginal loss all that year. I arrived in March, and I have spent the last month doing my best to reassure
our remaining clients that they can still trust us to deliver a timely and trustworthy service. Unfortunately, a couple of contracts were up for renewal at that time and, regretfully, we lost them. We are now running at what looks like a sustained loss for the first time in our history.’ (Peter Mertens)

1 Extract from Slack, N., Brandon-Jones, A. and Burgess, N. (2022) Operations Management, 10th Edition, Pearson.
The Rapposcience laboratory process
The laboratory divided its activities into four phases of what it called its ‘testing cycle’. These were: Pre-contract, Field operations, Analytics and Post-analytics. Table 17.5 summarises these phases.

Table 17.5 The testing cycle

Pre-contract: sample specification; delivery to lab agreed; report outline
Field operations: sampling protocols (including training pack); containing; couriering
Analytics: sample preparation; pre-analysis treatment; analysis and recording
Post-analytics: report generated; data recording

Pre-contract occurred at the start of the contract and involved agreeing with the client the exact specification of the service to be provided. This usually included the range of sample specifications, how they would be delivered to the lab, the nature of the report that would be prepared, and the contracted performance in terms of analytical accuracy (that indicates the veracity of the analysis), precision (that indicates the reproducibility of the analysis), and the timeliness of the report. Laboratory errors had a reported frequency of between 0.012% and 0.6%. Although this frequency was not large in itself, errors could have a huge impact on clients’ decision-making, as 60-70% of their operational and investment decisions were made on the basis of laboratory tests.

Field operations was the responsibility of the client, but the Lab often supplied the containers used for
the samples and instructions for taking and packaging the sample. Some clients also insisted on more
detailed sampling protocols for their field technicians, including training packs.

The analytics phase included all the testing within the Lab itself. This would vary depending on the nature of the tests and the procedures specified in the contract. Generally, though, all testing followed three stages: sample preparation, pre-analysis treatment, and analysis. (See Figure 17.12.)
One of the first modifications to the process came when Vincent had decided to split the sample into two parts before it was tested. Almost always there was sufficient material to be able to do this, and the advantage was that, if the testing proved inconclusive, or some performance indicators were outside the permitted range, the tests could be repeated. Performance indicators showed whether the analytical process was behaving as planned, whether it had revealed a statistical anomaly that required investigation, or whether a test had failed. Most contracts specified a particular confidence level for the results (usually 99.5%), but any small error or contamination in the testing procedure could reduce the confidence level. If this happened, the ‘back-up’ sample could be tested. However, this almost certainly meant that the Lab would not be able to meet its promised report delivery time.

The post-analytics phase consisted of preparing the results of the analysis for the client. This was
usually a simple report describing the composition of the sample, but some clients also required a more
detailed comparative report where sample data were compared with previous sample readings. Even if
such comparative reporting was not required, the Lab recorded all sample data.

Initiatives during the De Smet period


Peter Mertens was not unsympathetic to what Vincent De Smet had been trying to do at Rapposcience.
Not only had Vincent tried to introduce some worthwhile reforms to the Lab’s operating procedures, he had also been labouring under pressure to increase the profitability of the operation. ‘I think that Vincent had been
trying to increase the volume of business while keeping staffing levels the same. Presumably he figured
that increased revenue with costs held down would equal healthy profitability. He also complicated
things by introducing a number of initiatives, all at more or less the same time.’

One of Vincent’s initiatives had been his decision to split the sample into two parts before it was tested.
He did this as a ‘failsafe’ in case there were problems during the analysis phase and the tests had to
be repeated. The response of the Lab’s technicians to this move had been mixed. Some felt that it was a sensible move that reduced the chances of recording a ‘failed through insufficient material’ result. Although this did not happen often, it was at best embarrassing to the Lab, and at worst extremely
irritating for the client. Others felt that, because there was the possibility of re-testing a sample, there
was a tendency to take less care and ‘adopt testing shortcuts’ because the consequences of testing
errors were less serious.

Another of Vincent’s innovations had been the introduction of limited statistical process control (SPC).
Although the Lab had always recorded measures of its analytical performance, it had not formally
examined its analytics process performance in any systematic manner. It was the MGQ contract that
Vincent won (and lost) that prompted the Lab to take the potential of SPC seriously. During the pre-contract phase, MGQ had insisted on the use of SPC during all testing on their samples, together with periodic
SPC summaries being submitted. Vincent had invested in a ‘smart laboratory’ IT system that was
advertised as being able to automate the data management and statistical processes in the Lab.
However, almost a year after its partial introduction, the consensus in the Lab was that it had not been
a success. ‘It was just too sophisticated for us’, said Peter Mertens, ‘we were trying to run before we
could walk’.

The final initiative instituted during Vincent’s time as Chief Scientist was an enhanced set of reporting
protocols. ‘It wasn’t a bad idea actually’, admitted Peter Mertens, ‘we already prepared more extensive
reports for some clients, so we had the expertise to interpret their test results and advise them on their
sampling processes and how they might interpret results. In other words, we have expertise that can
add real value for our clients, so why not use it to enhance our quality of service? The problem, when Vincent introduced the idea, was that he tried to push it as a sales promotion tool. Clients were inclined to dismiss the potential of enhanced reporting because they thought that we were simply trying to get
more money out of them.’

Getting back to basics


Peter had taken over from Vincent in March. After three or four weeks talking with all the staff in the
Lab, he felt he was ready to shape his plans for the Lab’s future. He was convinced that the Lab had to
understand what really mattered to clients and then do everything to improve its performance in a way that would have an impact on the quality of service it was providing. Unfortunately, he was also
facing pressure from Brighthorpe, the Lab’s owners, to cut costs. ‘I persuaded them to give me time to restore our reputation. We would find it difficult to do that if we were shedding staff at the same time. Not only would it send the wrong message to the market, it would make it difficult to improve the way we do things. Having said that, we decided not to replace any staff who left the Lab of their own volition. We also delayed any non-essential expenditure. The main objective was to survive long enough to get back to the basics of how we could serve clients better.’

His first action was to look at how SPC had been used in the Lab since it had been introduced. He talked with the Chief Field Engineer at MGQ who had approved the initial contract that the Lab had lost, and who had also insisted on them using SPC. What he said gave Peter much to think about. ‘I kind of knew, when we insisted on Rapposcience using SPC, that they really didn’t understand what it was all about. They were simply doing it because it was what the client wanted.
Their culture said, “If the samples are returned as per the specification, then it’s OK; if not, then as long as it doesn’t happen too often, well, that’s OK also”. They just didn’t get that seeing their process charts enabled us to see more or less exactly what was happening right inside their processes. I take some of the blame on myself. I should have made sure that they fully understood why we were so keen for them to use SPC. It was for them to help themselves by improving their process performance. It wasn’t just a whim on our part.’ (Chief Field Engineer, MGQ)

The first thing Peter did was to hold a series of meetings, first with the supervisors in each department, then with everyone in each department. He was mainly listening to their experiences of using the SPC system that Vincent had imposed, but his secondary motive was to try and judge how much they understood about the fundamentals of SPC. The answer seemed to be, ‘not a lot’. They were all used to using quite sophisticated statistics within their testing procedures, but not for controlling the performance of the processes themselves. Peter reflected on this. ‘I guess it’s because the statistics that our technicians use every day is essentially static. It deals with the probability of certain elements or contaminants being present in a single sample. SPC deals with dynamic probabilities (time series, in effect) that show whether process behaviour is changing. However, the positive outcome from these meetings was that staff had little problem understanding the basic concepts of SPC once they were explained. They were not frightened by the maths.’
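
To illustrate the distinction Peter draws between the ‘static’ statistics used within tests and the time-series view that SPC takes, the sketch below judges a run of process readings against Shewhart-style three-sigma control limits. All of the numbers, variable names and the three-sigma convention are illustrative assumptions; they are not taken from the Lab’s own procedures.

```python
# Minimal SPC illustration: judge each new reading against control limits
# derived from a stable reference period. All values are hypothetical.
from statistics import mean, stdev

# Readings taken while the process was known to be behaving normally
reference = [5.02, 4.98, 5.01, 4.99, 5.03, 4.97, 5.00, 5.02, 4.98, 5.01]

centre = mean(reference)
sigma = stdev(reference)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma   # conventional 3-sigma limits

# New readings arriving over time; the order matters, which is what makes
# this 'dynamic' rather than a one-off probability calculation
new_readings = [5.01, 5.04, 5.06, 5.09, 5.12]

for i, x in enumerate(new_readings, start=1):
    status = "in control" if lcl <= x <= ucl else "OUT OF CONTROL"
    print(f"reading {i}: {x:.2f} -> {status}")
```

Run on these made-up numbers, the gradual upward drift only breaches the upper limit at the fourth reading, which is exactly the kind of change in process behaviour that a single-sample test statistic would not reveal.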

Peter realised that, in fact, the biggest problem was attitudinal. ‘We had been working for a year with the attitude that testing productivity was paramount. Don’t waste time. Get as many tests done as possible every day. It took time to move to an attitude that stressed error-free testing. What was the point of carrying on with testing when the processes themselves were “out of control”? They would only have to be repeated, wasting everyone’s time. It may be counter-intuitive, but being slow but methodical, and checking the process regularly, can actually increase effective productivity.’ With the agreement of his staff, Peter devised a set of ‘check rules’. These were reference values for all the major procedures in the sample preparation, pre-analysis and analysis stages. They flagged test results at any stage that, although within the limits indicating a reliable result, were close to those limits. If results violated these ‘check rules’, the test would be suspended and the sample investigated before it was allowed to progress. Peter had three reasons for instituting the ‘check rules’. First, it prevented effort being wasted on samples that could be compromised. Second, it stressed the importance of trying to investigate the root causes of any problems with the process. Third, it emphasised the importance of the Lab’s processes in determining their quality of service to customers, and therefore to the Lab’s profitability and survival.
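
A minimal sketch of how such a ‘check rule’ might behave is shown below. The case does not give the actual reference values, so the limits, the width of the ‘warning band’ and the function name are all hypothetical.

```python
# Hypothetical 'check rule': a stage result may only progress if it sits
# comfortably inside the limits that indicate a reliable result; results
# near either limit suspend the test for investigation. Values are illustrative.

def check_rule(result, lower, upper, band_fraction=0.1):
    """Classify a stage result as 'fail', 'investigate' or 'proceed'."""
    if not (lower <= result <= upper):
        return "fail"                         # outside the limits: unreliable result
    band = (upper - lower) * band_fraction    # width of the warning zone at each end
    if result <= lower + band or result >= upper - band:
        return "investigate"                  # close to a limit: suspend and investigate
    return "proceed"                          # comfortably within the limits

print(check_rule(0.51, lower=0.0, upper=1.0))   # proceed
print(check_rule(0.97, lower=0.0, upper=1.0))   # investigate
print(check_rule(1.20, lower=0.0, upper=1.0))   # fail
```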
The ‘root cause’ programme
By September the Lab’s process performance had improved to the point where the number of samples
that failed the reliability test had almost halved, and the number of late reports had fallen by over a third.
But Peter believed that further improvements were possible. ‘The most significant change is in the Lab’s culture. Before, staff were simply going through the motions. They were not deliberately being careless, but they were not really digging beneath what they were doing; they were not building their process
knowledge. If asked, they would tell you what they were doing rather than why they were doing it. Now
there is genuine curiosity about how testing procedures could be made better.’
Peter wanted to use the staff’s new-found interest in the process to make further improvements through
what he called the ‘root cause’ initiative. As the name implies, this was a push to discover what was
causing problems in testing. The data collected from those occasions where the check rules had been invoked provided valuable information, which was further supplemented by individual investigations by ‘root cause teams’ in each department. Peter, with the support of supervisors in each department, had encouraged the formation of these teams, but not made them compulsory. However, most staff elected
to become ‘root cause team’ members.

By the end of October, Peter was in a position to consolidate the data on the root causes of all the
occasions when an error of some sort had occurred in the Lab’s processes. This included any defect
from ordering tests to reporting and interpretation of the results. Table 17.6 shows the root causes.

Table 17.6 Root cause by phase of the testing process


Sample preparation (62% of total errors): Mislabelled sample (F) (24%); Badly contained (F) (18%); Preparation error (8%); Request error (F) (6%); Insufficient material (F) (3%); Damaged sample (F) (3%)
Pre-analysis (19% of total errors): Reagent error (6%); Contamination (5%); Spillage (5%); Process violation (3%)
Analysis (15% of total errors): Calibration error (6%); Process violation (4%); In-test calculation (3%); Contamination (2%)
Report and record (4% of total errors): Reading error (2%); Interpretation error (1%); Missing data (1%)
(F) = Root cause in the field (client’s responsibility)


What was interesting to Peter was the dominance of errors with a root cause outside the Lab. The data indicated that more than half of all errors were outside the scope of the Lab’s responsibility. ‘This shouldn’t lead us into any form of complacency. We can still do a lot to tackle the errors in the phases of the process for which we are clearly responsible. Basic laboratory errors, such as choosing the wrong reagent, violating process rules, or failing to prevent contamination, should not be happening. Also, I suspect that we are actually committing more “errors” in the “report and record” phase than it seems. Errors in testing are more obvious, but reporting is not always right or wrong. There are probably opportunities to enhance our service to clients that we are missing. You could class them as just as much of an “error” as a contaminated sample.’
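
Peter’s observation that more than half of all errors originated outside the Lab can be checked directly from Table 17.6. The short calculation below simply regroups the percentages in the table by responsibility; the variable names are illustrative, not part of the Lab’s own reporting.

```python
# Root causes from Table 17.6, as percentages of total errors.
# True marks causes flagged (F), i.e. root cause in the field (client's responsibility).
root_causes = {
    "Mislabelled sample": (24, True),
    "Badly contained": (18, True),
    "Preparation error": (8, False),
    "Request error": (6, True),
    "Insufficient material": (3, True),
    "Damaged sample": (3, True),
    "Reagent error": (6, False),
    "Contamination (pre-analysis)": (5, False),
    "Spillage": (5, False),
    "Process violation (pre-analysis)": (3, False),
    "Calibration error": (6, False),
    "Process violation (analysis)": (4, False),
    "In-test calculation": (3, False),
    "Contamination (analysis)": (2, False),
    "Reading error": (2, False),
    "Interpretation error": (1, False),
    "Missing data": (1, False),
}

field_share = sum(pct for pct, in_field in root_causes.values() if in_field)
lab_share = sum(pct for pct, in_field in root_causes.values() if not in_field)
print(f"Field (client) responsibility: {field_share}%")   # 54%
print(f"Lab responsibility: {lab_share}%")                 # 46%
```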

Questions
1. In hindsight, what were Vincent’s mistakes in running the Lab?
2. How did Peter’s approach differ, and why was it more successful?
3. Is a ‘missed opportunity’ in the report and record stage as much of an error as a contaminated
sample, as Peter suggests?
4. What do you suggest that Peter does next to improve process quality further?
