
1 INTRODUCTION

1.1 What is Quality?
1.2 Quality Management
1.3 Quality Assurance
1.4 Quality Control
1.5 Good Laboratory Practice (GLP)

Since this manual is aimed at improving the performance of a laboratory, the activities
involved focus on the term "quality". The quality of the product, in the present case
analytical results, should obviously be acceptable. To establish whether the product fulfils
the quality requirements, these have to be defined first. Only then can it be decided
whether the product is satisfactory, or whether and which corrective actions need to be taken.

1.1 What is Quality?


The term "quality" has a relative meaning. This is expressed by the ISO definition: "The
totality of features and characteristics of a product or service that bear on its ability to
satisfy stated or implied needs". In simpler words, one can say that a product has good
quality when it "complies with the requirements specified by the client". When projected on
analytical work, quality can be defined as "delivery of reliable information within an agreed
span of time under agreed conditions, at agreed costs, and with necessary aftercare". The
"agreed conditions" should include a specification as to the precision and accuracy of the
data which is directly related to "fitness of use" and which may differ for different
applications. Yet, in many cases the reliability of data is not questioned and the request for
specifications omitted. Many laboratories work according to established methods and
procedures which are not readily changed and have inherent default specifications.
Moreover, not all future uses of the data and reports can be foreseen so that specifications
about required precision and accuracy cannot even be given. Consequently, this aspect of
quality is usually left to the discretion of the laboratory. However, all too often the
embarrassing situation exists that a laboratory cannot evaluate and account for its quality
simply because the necessary documentation is lacking.

In the ensuing discussions numerous activities aimed at maintaining the production of
quality are dealt with. In principle, three levels of organization of these activities can be
distinguished. From the top down these levels are:

1. Quality Management (QM)
2. Quality Assurance (QA)
3. Quality Control (QC)

1.2 Quality Management


Quality Management is the assembly and management of all activities aimed at the
production of quality by organizations of various kinds. In the present case this implies the
introduction and proper running of a "Quality System" in laboratories. A statement of
objectives and policy to produce quality should be made for the organization or department
concerned (by the institute's directorate). This statement also identifies the internal
organization and responsibilities for the effective operation of the Quality System.

Quality Management can be considered a somewhat wider interpretation of the concept of
"Good Laboratory Practice" (GLP). Therefore, inevitably, the basics of the present
Guidelines largely coincide with those of GLP. These are discussed below in Section 1.5.

Note. An even wider concept of quality management is presently coming into vogue: "Total
Quality Management" (TQM). This concept includes additional aspects such as leadership
style, ethics of the work, social aspects, relation to society, etc. For an introduction to TQM
the reader is referred to Parkany (1995).

1.3 Quality Assurance


Proper Quality Management implies consistent implementation of the next level: Quality
Assurance. The ISO definition reads: "the assembly of all planned and systematic actions
necessary to provide adequate confidence that a product, process, or service will satisfy
given quality requirements." The result of these actions aimed at the production of quality,
should ideally be checked by someone independent of the work: the Quality Assurance
Officer. If no QA officer is available, then usually the Head of Laboratory performs this job
as part of his quality management task. In case of special projects, customers may require
special quality assurance measures or a Quality Plan.

1.4 Quality Control


A major part of quality assurance is Quality Control, defined by ISO as "the
operational techniques and activities that are used to satisfy quality requirements." An
important part of quality control is Quality Assessment: the system of activities used to
verify whether the quality control activities are effective; in other words, an evaluation of
the products themselves.

Quality control is primarily aimed at the prevention of errors. Yet, despite all efforts, it
remains inevitable that errors are made. Therefore, the control system should have
checks to detect them. When errors or mistakes are suspected or discovered it is essential
that the "Five Ws" are traced:

- what error was made?
- where was it made?
- when was it made?
- who made it?
- why was it made?

Only when all these questions are answered can proper action be taken to correct the
error and prevent the same mistake from being repeated.

The techniques and activities involved in Quality Control can be divided into four levels of
operation:

1. First-line control: Instrument performance check.
2. Second-line control: Check of calibration or standardization.
3. Third-line control: Batch control (control sample, identity check).
4. Fourth-line control: Overall check (external checks: reference samples, interlaboratory
exchange programmes).

Because the first two control levels both apply to the correct functioning of the instruments
they are often taken together and then only three levels are distinguished. This designation
is used throughout the present Guidelines:

1. First-line control: Instrument check / calibration.
2. Second-line control: Batch control.
3. Third-line control: External check.

It will be clear that producing quality in the laboratory is a major enterprise requiring a
continuous human effort and input of money. The rule of thumb is that 10-20% of the total
costs of analysis should be spent on quality control. Therefore, for quality work at least
four conditions should be fulfilled:

- means are available (adequate personnel and facilities)
- efficient use of time and means (cost aspect)
- expertise is available (answering questions; aftercare)
- upholding and improving the level of output (continuity)

In quality work, management aspects and technical aspects are inherently intertwined,
and for a clear insight into, and proper functioning of, the laboratory these aspects have
to be broken down into their components. This is done in the ensuing chapters of this
manual.

1.5 Good Laboratory Practice (GLP)


Quality Management in the present context can be considered a modern version of the
hitherto much-used concept "Good Laboratory Practice" (GLP), with a somewhat wider
interpretation. The OECD Document defines GLP as follows: "Good Laboratory Practice
(GLP) is concerned with the organizational process and the conditions under which
laboratory studies are planned, performed, monitored, recorded, and reported."

Thus, GLP prescribes that a laboratory work according to a system of procedures and
protocols. This implies that the organization of the activities and the conditions under which
these take place are controlled, reported and filed. GLP is a policy for all aspects of the
laboratory which influence the quality of the analytical work. When properly applied, GLP
should then:

- allow better laboratory management (including quality management)
- improve efficiency (thus reducing costs)
- minimize errors
- allow quality control (including tracking of errors and their cause)
- stimulate and motivate all personnel
- improve safety
- improve communication possibilities, both internally and externally.

The result of GLP is that the performance of a laboratory is improved and its working
effectively controlled. An important aspect is also that the standards of quality are
documented and can be demonstrated to authorities and clients. This results in an
improved reputation for the laboratory (and for the institute as a whole). In short, the
message is:

- say what you do
- do what you say
- do it better
- be able to show what you have done

The basic rule is that all relevant plans, activities, conditions and situations are recorded
and that these records are safely filed and can be produced or retrieved when necessary.
These aspects differ strongly in character and need to be attended to individually.

As an assembly, the involved documents constitute a so-called Quality Manual. This
then comprises all relevant information on:

- Organization and Personnel
- Facilities
- Equipment and Working materials
- Analytical or testing systems
- Quality control
- Reporting and filing of results.

Since institutions having a laboratory are of divergent natures, there is no standard format
and each has to make its own Quality Manual. The present Guidelines contain examples
of forms, protocols, procedures and artificial situations. They need at least to be adapted
and many new ones will have to be made according to the specific needs, but all have to
fulfil the basic requirement of usefulness and verifiability.

As already indicated, the guidelines for Quality Management given here are mainly based
on the principles of Good Laboratory Practice as they are laid down in various relevant
documents such as ISO and ISO/IEC guides, ISO 9000 series, OECD and CEN (EN
45000 series) documents, national standards (e.g. NEN standards)*, as well as a number
of text books. The consulted documents are listed in the Literature. Use is also made of
documents developed by institutes which have obtained accreditation or are working
towards this. This concerns mainly so-called Standard Operating Procedures (SOPs) and
Protocols. Sometimes these documents are hard to acquire as they are classified
information for reasons of competitiveness. The institutes and persons which cooperated
in the development of these Guidelines are listed in the Acknowledgements.

* ISO: International Organization for Standardization; IEC: International Electrotechnical
Commission; OECD: Organization for Economic Cooperation and Development; CEN:
European Committee for Standardization; EN: European Standard; NEN: Dutch Standard.
2 STANDARD OPERATING PROCEDURES

2.1 Definition
2.2 Initiating a SOP
2.3 Preparation of SOPs
2.4 Administration, Distribution, Implementation
2.5 Laboratory notebook
2.6 Relativization as encouragement
SOPs

2.1 Definition
An important aspect of a quality system is to work according to unambiguous Standard
Operating Procedures (SOPs). In fact the whole process from sampling to the filing of the
analytical result should be described by a continuous series of SOPs. A SOP for a
laboratory can be defined as follows:

"A Standard Operating Procedure is a document which describes the regularly recurring
operations relevant to the quality of the investigation. The purpose of a SOP is to carry out
the operations correctly and always in the same manner. A SOP should be available at the
place where the work is done".

A SOP is a compulsory instruction. If deviations from this instruction are allowed, the
conditions for these should be documented including who can give permission for this and
what exactly the complete procedure will be. The original should rest at a secure place
while working copies should be authenticated with stamps and/or signatures of authorized
persons.

Several categories and types of SOPs can be distinguished. The name "SOP" may not
always be appropriate; e.g., the description of situations or other matters may be better
designated as protocols, instructions or simply registration forms. Also, worksheets belonging
to an analytical procedure have to be standardized (to avoid jotting down readings and
calculations on odd pieces of paper).

A number of important SOP types are:

- Fundamental SOPs. These give instructions on how to make SOPs of the other categories.
- Methodic SOPs. These describe a complete testing system or method of investigation.
- SOPs for safety precautions.
- Standard procedures for operating instruments, apparatus and other equipment.
- SOPs for analytical methods.
- SOPs for the preparation of reagents.
- SOPs for receiving and registration of samples.
- SOPs for Quality Assurance.
- SOPs for archiving and how to deal with complaints.

2.2 Initiating a SOP


As implied above, the initiative and further procedure for the preparation, implementation
and management of the documents is a procedure in itself which should be described.
These SOPs should at least mention:

a. who can or should make which type of SOP;
b. to whom proposals for a SOP should be submitted, and who adjudges the draft;
c. the procedure of approval;
d. who decides on the date of implementation, and who should be informed;
e. how revisions can be made or how a SOP can be withdrawn.

It should be established and recorded who is responsible for the proper distribution of the
documents, the filing and administration (e.g. of the original and further copies). Finally, it
should be indicated how frequently a valid SOP should be periodically evaluated (usually
every 2 years) and by whom. Only officially issued copies may be used; only then is the use
of the proper instruction guaranteed.

In the laboratory the procedure for the preparation of a SOP should be as follows:

The Head of Laboratory (HoL) charges a staff member of the laboratory to draft a SOP (or
the HoL does this himself or a staff member takes the initiative). In principle, the author is
the person who will work with the SOP, but he or she should always keep in mind that the
SOP needs to be understood by others. The author requests a new registration number
from the SOP administrator or custodian (which in smaller institutes or laboratories will
often be the HoL, see 2.4). The administrator verifies if the SOP already exists (or is
drafted). If the SOP does not exist yet, the title and author are entered into the registration
system. Once the writing of a SOP is undertaken, the management must actively support
this effort and allow authors adequate preparation time.

In case of methodic or apparatus SOPs the author asks one or more qualified colleagues
to try out the SOP. In case of execution procedures for investigations or protocols, the
project leader or HoL could do the testing. In this phase the wording of the SOP is fine-
tuned. When the test is passed, the SOP is submitted to the SOP administrator for
acceptance. Revisions of SOPs follow the same procedure.

2.3 Preparation of SOPs


The make-up of the documents should meet a minimum number of requirements:

1. Each page should have a heading and/or footing mentioning:

a. date of approval and/or version number;
b. a unique title (abbreviated if desired);
c. the number of the SOP (preferably with category);
d. page number and total number of pages of the SOP;
e. the heading (or only the logo) of originals should preferably be printed in a colour other
than black.

Categories can be denoted with a letter or combination of letters (a simple numbering
sketch is given after this list), e.g.:

- F for fundamental SOP
- A or APP for apparatus SOP
- M or METH for analytical method SOP
- P or PROJ for procedure to carry out a special investigation (project)
- PROT for a protocol describing a sequence of actions or operations
- ORG for an organizational document
- PERS for describing personnel matters
- RF for registration form (e.g. chemicals, samples)
- WS for worksheet (related to analytical procedures)
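
By way of illustration only, the sketch below (in Python, as are the other sketches in this
chapter) shows how such a category code could be combined with a registration number and
a version into a unique SOP identifier. The exact format (category prefix, three-digit serial,
version suffix) is an assumption for the example, not a format prescribed by these Guidelines.

    # Hypothetical SOP-number generator; the exact format is an assumption,
    # not prescribed by these Guidelines.
    CATEGORIES = {"F", "APP", "METH", "PROJ", "PROT", "ORG", "PERS", "RF", "WS"}

    def sop_number(category: str, serial: int, version: int = 1) -> str:
        """Compose a unique SOP identifier such as 'METH 042 v1'."""
        if category not in CATEGORIES:
            raise ValueError(f"unknown SOP category: {category}")
        return f"{category} {serial:03d} v{version}"

    print(sop_number("F", 2))     # 'F 002 v1', cf. Model F 002
    print(sop_number("PROT", 5))  # 'PROT 005 v1', cf. Model PROT 005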

2. The first page, the title page, should mention:

a. general information mentioned under 2.3.1 above, including the complete title;

b. a summary of the contents with purpose and field of application (if these are not evident
from the title); if desired, the principle may be given, including a list of points that may
need attention;

c. any related SOPs (of operations used in the present SOP);

d. possible safety instructions;

e. name and signature of author, including date of signing. (It is possible to record the
authors centrally in a register);

f. name and signature of person who authorizes the introduction of the SOP (including
date).

3. The necessary equipment, reagents (including grade) and other means should be
detailed.

4. A clear, unambiguous, imperative description is given in a language mastered by the
user.

5. It is recommended to include criteria for the control of the described system during
operation.

6. It is recommended to include a list of contents particularly if the SOP is lengthy.

7. It is recommended to include a list of references.

2.4 Administration, Distribution, Implementation


From this description it would seem that the preparation and administration of a SOP and
other quality assurance documentation is an onerous job. However, once the draft is
made, with the use of word processors and a simple distribution scheme of persons and
departments involved, the task can be considerably eased.

A model for a simple preparation and distribution scheme is given in Figure 2-1. This is a
relation matrix which can be used not only for the laboratory but for any department or a
whole institute. In this matrix (which can be given the status of a SOP) all persons or
departments involved with the subject can be indicated, as well as the kind of their
involvement. This is indicated in the scheme with an involvement code (a simple electronic
rendering of such a matrix is sketched after Figure 2-1). Some of the most usual
involvements are (the number can be used as the code):

1. Taking initiative for drafting
2. Drafting the document
3. Verifying
4. Authorizing
5. Implementing/using
6. Copy for information
7. Checking implementation
8. Archiving

Fig. 2-1. Matrix of information organization (see text).
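
As an illustration, such a relation matrix can also be kept in a simple electronic form. The
sketch below is a hypothetical rendering of Figure 2-1: each cell couples a person or
department and a document to one of the involvement codes listed above. The document
numbers and names are invented for the example.

    # Hypothetical electronic rendering of the relation matrix of Fig. 2-1.
    # Involvement codes as listed above: 1 initiative, 2 drafting, 3 verifying,
    # 4 authorizing, 5 implementing/using, 6 copy for information,
    # 7 checking implementation, 8 archiving.
    matrix = {
        "METH 042": {"HoL": 4, "Williams": 2, "Farr": 5, "QA officer": 7,
                     "SOP administrator": 8},
        "PROT 005": {"HoL": 4, "QA officer": 2, "all laboratory staff": 5},
    }

    def holders(document):
        """Everyone who must receive a copy: users (code 5) and information copies (code 6)."""
        return [who for who, code in matrix.get(document, {}).items()
                if code in (5, 6)]

    print(holders("METH 042"))  # ['Farr']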


There is a multitude of valid approaches for distribution of SOPs but there must always be
a mechanism for informing potential users that a new SOP has been written or that an
existing SOP has been revised or withdrawn.

It is worthwhile to set up a good filing system for all documents right at the outset. This will
spare much inconvenience, confusion and embarrassment, not only in internal use but
also with respect to the institute's management, authorities, clients and, if applicable,
inspectors of the accreditation body.

The administrator responsible for distributing and archiving SOPs may differ per institute.
In large institutes or institutes with an accredited laboratory this will be the Quality
Assurance Officer; otherwise it may be an officer of the department of Personnel &
Organization or someone else. In non-accredited laboratories the administration can
most conveniently be done by the head of laboratory or his deputy. The administration
may be done in a logbook, by means of a card system or, more conveniently, with a
computerized database such as PerfectView or Cardbox. Suspension files are very useful
for keeping originals, copies and other information on documents. The most logical system
seems to be an appropriate grouping into categories, with a master index for easy
retrieval. It is most convenient to keep these files at a central place such as the office of
the head of laboratory. Naturally, this does not apply to working documents that obviously
are used at the work place in the laboratory, e.g., instrument logbooks, operating
instruction manuals and laboratory notebooks.

The data which should be stored per document are (a registry sketch is given after this list):

- SOP number
- version number
- date of issue
- date of expiry
- title
- author
- status (title submitted; being drafted; draft ready; issued)
- department of holders/users
- names of holders
- number of copies per holder if this is more than one
- registration numbers of SOPs to which reference is made
- historical data (dates of previous issues)
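
A computerized registry covering these data could be as simple as the following sketch. The
field names, and the helper flagging the periodic evaluation mentioned in Section 2.2, are
illustrative assumptions; the Guidelines only require that these data be stored and retrievable
(e.g. with a database program such as PerfectView or Cardbox).

    from dataclasses import dataclass, field
    from datetime import date

    # Hypothetical registry record covering the data listed above;
    # field names are illustrative, not prescribed.
    @dataclass
    class SOPRecord:
        number: str            # SOP number, e.g. "F 002"
        version: int
        date_of_issue: date
        date_of_expiry: date
        title: str
        author: str
        status: str            # title submitted / being drafted / draft ready / issued
        holders: dict = field(default_factory=dict)     # holder name -> number of copies
        references: list = field(default_factory=list)  # SOP numbers referred to
        history: list = field(default_factory=list)     # dates of previous issues

        def review_due(self, today, every_years=2):
            """Flag the periodic evaluation (usually every 2 years, see 2.2)."""
            return today >= date(self.date_of_issue.year + every_years,
                                 self.date_of_issue.month, self.date_of_issue.day)

    rec = SOPRecord("F 002", 1, date(1995, 6, 21), date(1997, 6, 21),
                    "Administration of Standard Operating Procedures",
                    "(author)", "issued")
    print(rec.review_due(date(1997, 7, 1)))  # True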

The SOP administrator keeps at least two copies of each SOP; one for the historical and
one for the back-up file. This also applies to revised versions. Superseded versions should
be collected and destroyed (except the copy for the historical file) to avoid confusion and
unauthorized use.

Examples of various categories of SOPs will be given in the ensuing chapters. The
contents of a SOP for the administration and management of SOPs can be distilled from
the above. An example of the basic format is given as Model F 002.

2.5 Laboratory notebook


Unless recorded automatically, raw data and readings of measurements are most
conveniently written down on worksheets that can be prepared for each analytical method
or procedure, including calibration of equipment. In addition, each laboratory staff member
should have a personal Notebook in which all observations, remarks, calculations and
other actions connected with the work are recorded in ink, not with a pencil, so that they
will not be erased or lost. To ensure integrity such a notebook must meet a few minimum
requirements: on the cover it must carry a unique serial number, the owner's name, and
the date of issue. The copy is issued by the QA officer or head of laboratory who keeps a
record of this (e.g. in his/her own Notebook). The user signs for receipt, the QA officer or
HoL for issue. The Notebook should be bound and the pages numbered before issue
(loose-leaf bindings are not GLP!). The first one or two pages can be used for an index of
contents (to be filled in as the book is used). Such Notebooks can be made from ordinary
notebooks on sale (before issue, the page numbering should then be done by hand or with
a special stamp) or with the help of a word processor, and then printed and bound in a
graphical workshop.

The instructions for the proper use of a laboratory notebook should be set down in a
protocol; an example is given as Model PROT 005. A model for the pages in a laboratory
notebook is also given.

2.6 Relativization as encouragement


In the Preface it was stated that documentation should not be overdone and that for the
implementation of all new Quality Management rules the philosophy of a step-by-step
approach should be adopted. It is emphasized that protocols and SOPs, as well as the
administration involved, should be kept as simple as possible, particularly in the beginning.
The Quality Management system must grow by trial and error, with increasing experience,
by group discussions and with changing perceptions. In the beginning, attention will be
focused on basic operational SOPs, later shifting to record keeping (as more and more
SOPs are issued) and filling gaps as practice reveals missing links in the chain of Quality
Assurance. Inevitably problems will turn up. One way to solve them is to talk with people in
other laboratories who have faced similar problems.

Do not forget that Quality Management is a tool rather than a goal. The goal is quality
performance of the laboratory.

SOPs

F 002 - Administration of Standard Operating Procedures


PROT 005 - The Use of Laboratory Notebooks
Model page of Laboratory Notebook

F 002 - Administration of Standard Operating Procedures


LOGO STANDARD OPERATING PROCEDURE Page: 1 of 2
Model: F 002 Version: 1 Date: 95-06-21
Title: Administration of Standard Operating Procedures File:

1. PURPOSE

To give unambiguous instruction for proper management and administration of Standard
Operating Procedures as they are used in the Regional Soil Survey Institute (RSSI).

2. PRINCIPLE

Standard Operating Procedures are an essential part of a quality system. For all jobs and
duties relevant operating procedures should be available at the work station. To guarantee
that the correct version of the instruction is used copying Standard Operating Procedures
is prohibited. Standard Operating Procedures are issued on paper with the heading printed
in green.

3. FIELD OF APPLICATION

Generally for use in the quality system of RSSI but more specifically this instruction is for
use in the Chemistry Department.

4. RELATED SOPs

- F 011 The preparation of SOPs for apparatus
- F 012 The preparation of SOPs for methods
- PROJ 001 The preparation of SOPs for special investigations

5. REQUIREMENTS

Database computer program, e.g. PerfectView or Cardbox

6. PROCEDURE

6.1 Administration

The administration of SOPs for the Chemistry Department can be done by the Head of
Laboratory.

6.2 Initiating new SOP

(See these Guidelines, 2.2)

6.3 Revision of SOPs

(see these Guidelines, 2.2)

Author: Sign.:
QA Officer (sign.): Date of Expiry:

6.5 Distribution of SOPs


When the SOP fulfils all the necessary requirements it is printed. The author hands over
the manuscript (or the floppy disk with the text) to the SOP administrator, who is responsible
for the printing. The number of copies is decided by him/her and the author. Make a matrix
of distribution (see Guidelines for Quality Management, Fig. 2-1).

The author (or his successor) signs all copies in the presence of the administrator before
distribution. As the new copies are distributed, the old ones (if there were any) are taken in.
For each SOP a list of holders is made. The holder signs for receipt of a copy. The list is
kept with the spare copies.

Copying SOPs is forbidden. Extra copies can be obtained from the SOP administrator.

Users are responsible for proper keeping of the SOPs. If necessary, copies can be
protected by a cover or foil, and/or be kept in a loose-leaf binding.

7. ARCHIVING

Proper archiving is essential for good administration of SOPs. All operating instructions
should be kept up-to-date and be accessible to personnel. Good Laboratory Practice
requires that all documentation pertaining to a test or investigation should be kept for a
certain period. SOPs belong to this documentation.

8. REFERENCES

Mention here the Standards used and other references for this SOP.

PROT 005 - The Use of Laboratory Notebooks


LOGO STANDARD OPERATING PROCEDURE Page: 1 of 2
Model: PROT 005 Version: 1 Date: 95-11-28
Title: The Use of Laboratory Notebooks File:

1. PURPOSE

To give instruction for proper lay-out, use and administration of Laboratory Notebooks in
order to guarantee the integrity and retrievability of raw data (if no preprinted Work Sheets
are used), calculations and notes pertaining to the laboratory work.

2. PRINCIPLE

Laboratory Notebooks may either be issued to persons for personal use or to Study
Projects for common use by participating persons. They are used to write down
observations, remarks, calculations and other actions in connection with the work. They
may be used for raw data but bound preprinted Work Sheets are preferred for this.

3. RELATED SOPs

F 001 Administration of SOPs
PROJ 001 The preparation of SOPs for Special Investigations

4. REQUIREMENTS

Bound notebooks with about 100-150 consecutively numbered pages. Any binding which
cannot be opened is suitable; a spiral binding is very convenient.

Both ruled and squared paper can be used. On each page, provisions may be made for
dating and signing entries, and for signing for verification or inspection.

5. PROCEDURE

5.1 Issue

Notebooks are issued by or on behalf of the Head of Laboratory who keeps a record of the
books in circulation (this record may have a format similar to a Laboratory Notebook or be
part of the HoL's own Notebook).

On the cover, the book is marked with an assigned (if not preprinted) serial number and
the name of the user (or of the project). On the inside of the cover the HoL writes the date
of issue and signs for issue. The user (or Project Leader) signs the circulation record for
receipt.

5.2 Use

All entries are dated and made in ink. The person who makes the entry signs per entry (in
project notebooks) or at least per page (in personal notebooks). The Head of Laboratory
(and/or Project Leader) may inspect or verify entries and pages and may sign for this on
the page(s) concerned.

If entries are corrected, the original should be lined out with a single line so that it remains
possible to see what has been corrected. Essential corrections should be initialled and
dated and the reason for the correction stated. Pages may not be removed; if necessary, a
whole page may be deleted by a diagonal line.

Author: Sign.:
QA Officer (sign.): Date of Expiry:

5.3 Withdrawal

When full, the Notebook is exchanged for a new one. The HoL is responsible for proper
archiving. A notebook belonging to a Study Project is withdrawn when the study is
completed.

When an employee leaves the laboratory for another post, (s)he should hand in her/his
notebook to the HoL.

6. ARCHIVING

The Head of Laboratory is custodian of the withdrawn Laboratory Notebooks. They must
remain accessible for inspection and audit trailing.

7. REFERENCES

Model page of Laboratory Notebook

3 ORGANIZATION AND PERSONNEL

3.1 Function and aims of the institute
3.2 Scope of the laboratory
3.3 Organigram
3.4 Description of processes
3.5 Job descriptions, personnel records, job allocation, replacement of staff
3.6 Education and training of staff
3.7 Introduction of new staff
SOPs

In this chapter the place and internal structure of the Organization or Institute, of which the
laboratory is a part, is discussed. The description of the internal structure inherently
includes the job description of the various positions throughout the organization as well as
a list of all the involved personnel, their qualifications, knowledge, experience and
responsibilities. For the continuity of the work it is important that, in case of illness
or other absence of staff, replacement by a qualified and experienced colleague is
pre-arranged.

3.1 Function and aims of the institute


The function and/or the aims of the institute should be drawn up in order to set a
framework defining the character of the laboratory. This description should rest in several
places so that it can easily be produced upon request (Directorate, Secretariat, heads of
departments or sections including Personnel & Organization, as well as the public relations
officer). As an example, the aims of ISRIC, an institute with an analytical laboratory, are
given.

3.2 Scope of the laboratory


If the field of work, or the scope of the laboratory, is not made specifically clear in the
description of the Institute's activities, it should be elaborated in a separate statement. Soil
analysis for soil characterization and land evaluation is not the same as analysis for soil
fertility purposes and advice to farmers. Such a statement should be kept with the overall
statement about the scope of the institute.

3.3 Organigram
The organizational set-up of an institute can conveniently be represented in a diagram,
the organigram (also called organogram). An organigram should be drawn by the
department of Personnel & Organization (P&O) (or equivalent) on behalf of the
Directorate. Since the organization of an institute is usually quite dynamic, frequent
updating of this document might be necessary. For the laboratory an important aspect of
the organigram is the hierarchical line of responsibilities, particularly in case of problems
such as damage, accidents or complaints from clients. Not all details of these
responsibilities can be given in the main organigram. Such details are to be documented in
sub-organigrams, the various job descriptions (see 3.5) as well as in regulations and
statutes of the institute as a whole.

As an example the simplified organigram of ISRIC is given (Model ORG 001); a sub-
organigram of the laboratory is given on a Job Description Form (Model PERS 011).

3.4 Description of processes


The way work is organized in the laboratory should be described in a SOP. This includes
the kind and frequency of consultations and meetings, how jobs are assigned to laboratory
personnel, how instructions are given and how results are reported. The statement that
personnel are protected from improper pressure of work can also be made in this SOP.

3.5 Job descriptions, personnel records, job allocation,


replacement of staff

3.5.1 Job descriptions


3.5.2 Personnel records
3.5.3 Substitution of staff

Quality assurance in the laboratory requires that all work be done by staff who are
qualified for the job. Thus, to ensure a job is done by the right man or woman, it is
essential for the management to have records of all personal skills and qualifications of
staff as well as of the qualifications required for the various jobs.

3.5.1 Job descriptions

The professional requirements for each position in an organization have to be established
and laid down in a Job Description Form which, for clarity, may carry an organigram or
sub-organigram showing the position (Model PERS 011).

The job descriptions of the heads of departments or sections are usually drawn up by the
department of P&O in consultation with the directorate; those of other jobs by P&O (on
behalf of the directorate) in consultation with the respective heads of departments or
sections. Copies should rest with P&O and the heads of departments concerned, as well
as with the person(s) filling the position.
3.5.2 Personnel records

The list of laboratory personnel with their capabilities and skills is made by the head of
laboratory in consultation with the department of Personnel & Organization and both
should have a copy. A record of the personal qualifications and skills of each staff member
can be called a Staff Record Form and a model is shown here as PERS 012 . When this
form is completed the place of the person in the organization can be indicated by a code of
the position as shown in the sub-organigram drawn on the Job Description Forms (Model
PERS 011), in this case capitals A, B, C, etc.

From the Job Descriptions and the Qualifications of Staff (PERS 011 and PERS 012) a
short-list can be derived indicating the positions of staff. An example of such a list is Model
PERS 013. For quick reference, a matrix table is a convenient and surveyable way of
listing the skills of staff. This is shown in Model PERS 014, where it is recorded for which
jobs each person is qualified. In fact, such a proficiency list is the basis of the job
allocation to staff. This allocation of jobs, i.e. a listing of all relevant tasks with the persons
who perform the tasks (who-is-doing-what), including substitutes, can be indicated on
a Job Allocation Form (Model PERS 015). Combinations of lists are of course always
possible (e.g. PERS 013 and 014).

All these lists are prepared by the heads of departments and P&O, and should be made
available to the directorate, secretariat, and heads of other departments. Staff of
departments should at least have access to a copy of these lists of their own department.
Although for small working groups such lists may seem to be overdone and perhaps
superfluous, in departments with many people they are necessary.

3.5.3 Substitution of staff

The absence of a staff member may create a problem as a part of the work of the
laboratory is interrupted. For holidays this problem is usually limited as these are planned
and measures can be taken in advance: a job can be properly completed or a substitute
can be organized in time. Unexpected absence, such as in the case of illness, presents a
different situation, as for certain procedures a substitute needs to be arranged at short
notice and a person might not be readily available. The extent of disruption varies with
the type of job concerned. Some jobs can be left unattended for a few days but others
need instant take-over, e.g. when extracts have been prepared and need to be measured
soon after. Other jobs are essential for the continuity of the work programme. If the
preparation of samples is interrupted, the supply to the laboratory stops. When moisture
determinations are not done, the calculation of results of many analyses cannot be done.
Usually the head of laboratory, knowing his staff, will ask a colleague of the absentee to
stand in. However, such a simple solution may not always be at hand. The colleague may
be engaged in a job at the time, he may be absent also, or the head himself may be away
and then his deputy, who may not have the same insight, has to act. To cope with these
situations a scenario for substitution has to be available. To a large extent such a scenario
is based on the personal qualifications, skills and experience of the laboratory staff.
Sometimes help must be sought from outside: when the necessary expertise is not
available, or when the absence is too protracted.

A scenario for substitution can be made in several ways. The most obvious way is based
on the Job Allocation Form (PERS 015). First on the list for each task is the one who
normally performs the job. If, in case of absence, no one is available for substitution,
several options can be considered (a simple lookup along these lines is sketched after
the options below):

1. The job is not carried out (perhaps someone becomes available soon).

2. Someone from outside the laboratory is hired or borrowed (having ascertained that he
or she has the necessary skills).

3. The job is put out to contract (ascertain that the other laboratory has satisfactory quality
standards).
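
Purely as an illustration, such a substitution scenario can be derived mechanically from the
proficiency list (Model PERS 014) and the job allocation form (Model PERS 015). The sketch
below is an assumption about how this lookup could be automated; the staff names are
taken loosely from the models and the orderings are invented for the example.

    # Hypothetical substitution lookup based on a proficiency list (cf. PERS 014)
    # and a job allocation with substitutes (cf. PERS 015).
    qualified = {  # task -> staff qualified for it, in order of preference
        "Particle-size analysis": ["Farr", "O'Brien", "Peters"],
        "Kjeldahl-nitrogen": ["Williams", "Pedro", "Peters"],
    }

    def substitute(task, absent):
        """Return the first qualified person who is present; None means that
        options 1-3 above (postpone, hire from outside, or contract out)
        must be considered."""
        for person in qualified.get(task, []):
            if person not in absent:
                return person
        return None

    print(substitute("Kjeldahl-nitrogen", {"Williams"}))                   # 'Pedro'
    print(substitute("Kjeldahl-nitrogen", {"Williams", "Pedro", "Peters"}))  # None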

In case of incidental short-term substitution of a staff member in the laboratory, e.g. in the
case of illness, this change from the normal occupation can usually adequately be
documented in laboratory Notebooks and on the various worksheets and/or data sheets
pertaining to the jobs concerned. In any case, the head of laboratory should keep a record
in his own Notebook. More permanent changes in staff or in the organization, however,
require more paper work. All such changes have to be recorded on all the relevant
registration forms mentioned above. Therefore, these must be revised accordingly. As
observed in Chapter 1, the most onerous aspect of the procedure is the distribution of the
revised documents to the persons and offices where they are required (and the obsolete
ones taken back). On the other hand, should the work involved provide an incentive to limit
changes in laboratory staff, then it serves an unintended additional purpose: a rapid turn-
over of staff is, generally, detrimental to the continuity and quality of the work.

3.6 Education and training of staff


To maintain or improve the quality of the work, it is essential that staff members follow
training or refresher courses from time to time. These may concern new developments in
analytical techniques or approaches, data handling, the use of computers, laboratory
management (such as Quality Management and LIMS) or training in the use of newly
acquired instruments.

Such training can be given within the institute, by outside specialists, or centrally
conducted courses can be attended, if necessary abroad. In certain cases it may be
worthwhile to second someone to another laboratory for a certain period to get in-service
training and experience in a different laboratory culture.

Ideally, after training or attending a course, the staff member should report and convey his
experience or knowledge to colleagues and make proposals for any change of existing
procedures or adoption of new practices to improve the performance of the laboratory.
Tests to assess the proficiency of analysts are discussed in Chapter 6.

In many laboratories it is common practice that technicians change duties from time to
time (e.g. each half year) or carry out more than one type of analysis in order to avoid
creating bad habits and to increase job satisfaction and motivation. An advantage is
gained in the increased flexibility of the laboratory staff with respect to skills, but a
disadvantage is the possible reduction of productivity and quality of results in the
transitional period.
3.7 Introduction of new staff
When a new employee is appointed in the laboratory, he or she should be properly
introduced to the other staff, to the rules of the laboratory in general and in particular to
details of his/her new job. In order to ensure that this is properly done it is useful to draw
up a SOP with a checklist of all aspects involved. A programme of training and monitoring
the settling into the job has to be made. After a probationary period the head of laboratory
will make an evaluation and report this to P&O. If applicable, a final decision as to the
appointment can be made.

Example of concise description of function and aims of an institute.

INTERNATIONAL SOIL REFERENCE AND INFORMATION CENTRE (ISRIC),


Wageningen, Netherlands

Position

The International Soil Reference and Information Centre, ISRIC, is a centre for documentation,
research, and training about the world's soils, with emphasis on the resources of developing
countries. It houses a large collection of soil monoliths with related data and documents, books,
reports and maps.

ISRIC collects, generates and transfers information on soils by lecturing and by publishing
monographs and papers on the collected materials and research data. Training courses are given,
usually in developing countries.

Participation in scientific working groups is directed towards developments in soil genesis,


classification and correlation, mapping, soil databases (e.g. the use of Geographic Information
Systems - GIS), and land evaluation.

ISRIC was born out of an initiative of the International Society of Soil Science. It was adopted by
Unesco as one of its activities in the field of earth sciences. The Centre was founded in 1966 by the
Government of the Netherlands.

Advice on the programme and activities of ISRIC is given by a Scientific Advisory Council with
members from the Dutch agricultural scientific community and from international organisations such
as FAO and Unesco. Core funds are provided by the Dutch Directorate-General for Development
Cooperation. Project activities are generally externally funded.

Aims

 To serve as a Data Centre for documentation about soil as a natural resource, through
assembling soil monoliths, reports, maps and other information on soils of the world, with emphasis
on the developing countries.

 To contribute to an increase in the understanding of the soil for sustained utilization in a changing
global environment.

 To improve the accessibility of soil and terrain information for the widest possible range of users
through applied research, improvement of research methods, and advice on the establishment of
soil laboratories, soil reference collections and databases.

 To contribute to developments in soil classification, soil mapping and land evaluation and in the
development of geographically referenced soils and terrain digital databases.

Visitors services

ISRIC provides information on soils of the world, on the preparation of soil monoliths for display,
and techniques of soil information systems, etc.

Visitors may consult the collections of soil monoliths, reports, maps, books and soil databases
through

- individual visits during which visitors may consult the collections with or without help of the staff.

- group visits, which include one- or two-day visits by groups of students to get an introduction to soil
classification and/or to practise classification.

- individual guest research of 3-12 months during which scientists may use ISRIC's collections for a
specific study.

Depending on the purpose of the study and the degree of staff involvement, a fee may be charged.
ISRIC provides staff for analytical services, consulting and training, against payment. Details of
tariffs will be provided on request.

Activities

Soil monolith collections and NASRECs

Assembling and analyzing representative profiles of the major soils of the world and displaying a
reference collection of soil monoliths at ISRIC. The present collection comprises more than 900
profiles from over 70 countries. Assembling a collection of laterite profiles and developing a
descriptive terminology and classification of laterites for interdisciplinary use (CORLAT).

Advising on the establishment of national soil reference collections and databases (NASRECs) for
training, research, land use planning and agricultural extension services in individual countries. A
bi-annual Unesco-ISRIC training course is given for this purpose. On-site support is given on a
project basis.

Laboratory

- Analyzing samples, representative of the soil collection, testing and improving methods and
procedures of soil analysis.

- Advising and instructing soil laboratories on organization, equipment and procedures with the aim
of improving their performance. Aspects are the introduction of Quality Management and the
development of systems for quality control: a Laboratory Information Management System for soil
and plant laboratories (SOILIMS).

- Seat of the Bureau of the Wageningen Soil, Plant and Water Analytical Laboratories (WaLab), a
cooperation of four Wageningen research laboratories to perform a wide range of quality analyses
for third parties.

Soil inventory and mapping

Assembling a collection of soil and related maps, geo-referenced databases and reports for
consultation and various uses. ISRIC's Soil Information System (ISIS) contains data of the collected
soil profiles. ISRIC has a library and an extensive map collection, mainly from developing countries.

- ISRIC is the World Data Centre for Soils of the International Council of Scientific Unions (ICSU).

- Participation in international soil mapping programmes, e.g. the World Soils and Terrain Digital
Database (SOTER, an ISSS initiative).

- Assessment of Global Soil Degradation (GLASOD, a UNEP project).

- World Inventory of Soil Emissions (WISE).

- Mapping of Soil and Terrain Vulnerability in central and eastern Europe (SOVEUR).

- World Overview of Conservation Approaches and Technologies (WOCAT).

- Southeast Asian Land Resources Information Systems (SALRIS).

Publications

Issuing publications on the soils collection, analytical methods and techniques, proceedings of
international workshops and conferences, procedure and training manuals, and preparation of
teaching materials.

Soil classification

Study and correlation of major soil classification systems; assistance in the elaboration of new
classification systems (World Reference base for soil classification, WRB).

Guest research

ISRIC accommodates visiting scientists, who study the soil monolith collection for comparison and
correlation, or participate in other ongoing activities. Recent studies have been on Podzols,
Andosols, Vertisols and Ferralsols.

Consulting and Training

Carrying out short-term and long-term consultancies in the fields of soil science and
agroclimatology.

Training both on-the-spot and at ISRIC in soil classification, data(base) handling and interpretation,
laboratory management and analytical procedures, and establishments of NASRECs.

ISRIC
P.O. Box 353
6700 AJ Wageningen
the Netherlands
Phone: (31)(0)317-471711
Fax: (31)(0)317-471700
E-mail: soil@isric.nl
Internet: http://www.isric.nl

Visiting address: 9 Duivendaal, 6701 AR Wageningen

SOPs

ORG 001 - Organigram


PERS 011 - Job Description Form
PERS 012 - Qualifications and skills of laboratory staff
PERS 013 - List of laboratory staff
PERS 014 - Proficiency list of laboratory staff
PERS 015 - Job allocation laboratory staff

ORG 001 - Organigram


LOGO GENERAL INFORMATION FORM Page: 1 of 1
Model: ORG 001 Version: 1 Date: 96-06-04
Title: Organigram of International Soil Reference and Information Centre, File:
Wageningen, the Netherlands

Figure

Author: Sign.:
QA Officer (sign.): Date of Expiry:
PERS 011 - Job Description Form
LOGO JOB DESCRIPTION FORM Page: 1 of 3
Model: PERS 011 Version: 1
Position: Senior technician P&O sign.: Date: 95-06-05
Institute: International Soil Reference and Information Centre, Duivendaal 9,
Wageningen.
Department: Laboratory
Section: Physical analysis
Position code: B
Salary scale(s): 6-9
Required education: Certificate Technical College, majoring in Chemistry or Physics, or: B.Sc. in
Chemistry, Physics, or Agriculture (soil science).
Who is direct chief?: Head of laboratory
In charge of how many people?: One (technician)
Give relevant section of Organigram and indicate (circle) position concerned:

Figure

Job Description (main aspects):

1. Execute physical analysis on soil samples


2. Interpret and report results of analysis (with PC-based programs)
3. Improve existing and develop new techniques of analysis
4. Give training to students and trainees
5. Execute consulting missions in laboratory assistance

Additional information related to main aspects:

1. Personally execute various types of physical analysis on soil samples, e.g. water
retention characteristics, bulk density, particle density, specific surface area, structure
stability. Delegate part of the work to technician.

2. Calculate results of analyses, interpret data and report them in a publishable manner to
Head of Laboratory. For this, use is made of personal computers (mainly Lotus 123,
dBASE, and Excel programs).

3. Etc., etc.

4.

5.

Is work executed according to instructions, manuals, prescriptions, schemes and the
like?: Yes, where it concerns analytical work. Interpretation of results using tables,
reports etc., as well as experience. Improvement of new techniques according to
approved workplan. Reports on this in writing. Give training according to schemes,
using lecture notes and other training material, as well as experience. Consulting
missions ditto.
What technical equipment is used?: Usual laboratory equipment. In addition: soil
moisture equipment, AAS, autoanalyzer, flame photometer, colorimeter, sieving and
grinding equipment. PC with specialized software.
How are orders usually received? (orally, in writing, extensive, brief): In writing and
orally, brief. Making workplans for projects, training and consulting, interactive with
direct chief (HoL).
To what extent can execution of work be influenced? (planning, choice of procedure
and materials, own ideas): Etc., etc.
What is the procedure in case of problems in the execution of the job? (consultation
with chief, client, colleagues, or literature):
If job requires writing of letters, reports, operating procedures or other writing work,
what is the nature of this?:
Are there certain well-defined mandates for signing, giving clearance, advising on
purchasing or other financial commitments?:
Other information relevant for the job description:
PERS 012 - Qualifications and skills of laboratory staff
LOGO STAFF RECORD FORM Page: 1 of 1
Model: PERS 012 Version: 1 Date: 95-06-19
Title: Qualifications and skills of laboratory staff
Position code(s):..............
Name:
Address:
Date of birth:
Education/qualifications/certificates/diplomas: (with date when obtained):
Previous positions/experience:
Specialist in (analysis, techniques):
Specialist in (equipment):
Knowledge of (equipment):
Other relevant information:
Author: Sign.:
QA Officer (sign.): Date of Expiry:
PERS 013 - List of laboratory staff
LOGO STAFF RECORD FORM Page: 1 of 1
Model: PERS 013 Version: 1 Date: 96-06-11
Title: List of laboratory staff

(Position code corresponds with codes on forms PERS 011, PERS 012 and PERS 014)

Position code(s) Name Position


A Peters, Martin Head
B Williams, John J. Senior technician
C Farr, Susan Technician
D Johnson, Frederick Senior technician
E Carlson, Elisabeth Technician
F Pedro, Manuel Technician
G James, Hugh Junior technician
H Jackson, Michael M. Junior technician
I O'Brien, Patrick Senior technician
Date: Revised: Revised: Revised
P&O sign.: P&O sign.: P&O sign.: P&O sign.:
PERS 014 - Proficiency list of laboratory staff
LOGO STAFF RECORD FORM Page: 1 of 1
Model: PERS 014 Version: 1 Date: 96-06-16
Title: Proficiency list of laboratory staff HoL Sign.:
Name Technique
Prep pH PSA CEC XRD OC N-Kj Caeq Crop
Peters, M. Q Q Q Q Q Q Q Q Q
Williams, J.J. Q Q Q S S S S Q Q
Farr, S. Q Q S Q
Johnson, F. Q Q Q S Q
Carlson, E. S Q Q Q
Pedro, M. Q Q Q S S S Q
James, H. S Q Q Q
Jackson, M.M. Q Q Q Q Q
O'Brien, P. S Q S Q

Abbreviation of techniques:

Prep sample preparation;
pH pH determination;
PSA particle-size analysis;
CEC cation exchange capacity and exchangeable bases;
XRD X-ray diffraction;
OC organic carbon (Walkley-B);
N-Kj nitrogen with Kjeldahl method;
Caeq calcium carbonate equivalent;
Crop crop analysis.
S specialist;
Q qualified.

(Position codes correspond with codes on forms PERS 011, PERS 012, and PERS 013)

NON-ANALYTICAL TASKS

1. Checking First Aid kit, eye washers and other safety facilities: Carlson (E). Subst.:
Pedro (F)
2. Preparing lists for ordering supplies: Johnson (D). Subst.: Williams (B)
3. Checking and registering supplies: Johnson (D). Subst.: Williams (B)
4. Receiving samples: Jackson (H). Subst.: James (G)
5. Preparing analytical programme for work order: Peters (A). Subst.: Williams (B)
6. Registration and labelling samples, preparing work orders: Williams (B). Subst.:
Johnson (D)
7. Preparing work list: Williams (B). Subst.: Johnson (D)
8. Etc.
9. Etc.

Date: Revised: Revised: Revised


P&O sign.: P&O sign.: P&O sign.: P&O sign.:
PERS 015 - Job allocation laboratory staff
LOGO JOB ALLOCATION FORM Page: 1 of 1
Model: PERS 015 Version: 1 Date: 96-06-14
Title: Job allocation laboratory staff (with substitutes) HoL Sign.:

(Position codes correspond with codes on forms PERS 011, PERS 012, PERS 013 and
PERS 014)

ANALYTICAL TASKS

1. Sample preparation: James (G). Subst.: Jackson (H); Farr (C)
2. Moisture determination: James (G). Subst.: Jackson (H)
3. Particle-size analysis: Farr (C). Subst.: O'Brien (I)
4. pH and EC: Carlson (E). Subst.: Jackson (H)
5. Organic carbon (Walkley-B): Johnson (D). Subst.: Pedro (F)
6. Kjeldahl-nitrogen: Williams (B). Subst.: Pedro (F)
7. Calcium carbonate equivalent: Pedro (F). Subst.: Johnson (D)
8. Etc.
9. Etc.

4 FACILITIES AND SAFETY

4.1 Housing facilities
4.2 Safety
4.3 Admittance to the laboratory
SOPs

If an institute or organization establishes a laboratory and expects (or demands) quality
analytical data then the directorate should provide the necessary means to achieve this
goal.

The most important requirements that should be fulfilled, in addition to the skilled staff
discussed in the previous chapter, are the supply of adequate equipment and working
materials, the presence of suitable housing, and the enforcement of proper safety
measures. The present chapter focusses on housing facilities and safety.

4.1 Housing facilities

4.1.1 The scientific block
4.1.2 The storage block
4.1.3 Climate

Often, the laboratory has to be housed in an existing building or sometimes in a few rooms
or a shed. On the other hand, even when a laboratory is planned in a new prospective
building not all wishes or requirements can be fulfilled. Whatever the case, conditions
should be made optimal so that the desired quality can be assured.

In the out-of-print FAO Soils Bulletin no. 10, Dewis and Freitas (1970) give an extensive
and useful account of the requirements which should be met by laboratories for soil and
water. As their recommendations are to a large extent still valid, also for plant analysis,
they can with some adaptations be followed here. These recommendations can be
modified for smaller laboratories but the main principles involved should not be ignored.

The general building lay-out should preferably consist of two separate blocks:

1. A Scientific Block, for analytical determinations, staff training and administration.

2. A Storage Block, for receipt, preparation and storage of samples, which, both in the case of
soil and plant material, inevitably involves the danger of causing contamination. Also, some
dusty analytical work, e.g. the sieving of the sand fraction as part of the particle-size
analysis, should be done in the storage block, as should the storage of bulk chemicals and waste.

Transport of prepared samples from the storage block to the scientific block should be
through a passage or buffer room or, if the blocks are on two levels, by means of an
elevator. There should be no direct connection (e.g. simply a door) between a room in
which samples are crushed or milled and a room in which analyses are being done,
because of contamination by dust.

4.1.1 The scientific block

The scientific block may take various forms but ideally the building would contain separate
groups of laboratory rooms as follows:
1. Rooms for preliminary operations such as:
a. weighing of samples for analysis, including sub-sampling and fine-grinding when
necessary,

b. extraction, oxidation and freeze-drying for some analyses.

2. Rooms for physical analysis of soils, such as soil moisture retention, specific surface
area and particle-size analysis (sieving should be done in the storage block or in a room
for preliminary operations (Type 1a)).

3. Rooms for general chemical processes involving the use of concentrated acids, alkalies
or ammonia, where fumes may be evolved, even if these operations have to be conducted
in fume cupboards and the room is air-conditioned.

4. "Clean" rooms where instruments can be used without danger of being affected by
fumes or adverse atmospheric conditions. This includes the traditional "balance room" and
rooms for specialized purposes such as atomic absorption (with fume exhaust),
autoanalyzer, optical mineral analysis, and particularly X-ray analysis (diffraction and
fluorescence spectroscopy).

A particular requirement for these rooms is a stable, uninterruptible power supply (UPS). In many places voltage stabilizers are no luxury. Interruptions in the electricity supply are very annoying and costly: analyses and calibration procedures may have to be repeated and computer files may be lost (make back-ups frequently!). Also, some safety and warning devices may become ineffective. When the interruption is prolonged, no work can be done at all except for some tidying up and paper work.

5. Storage room for chemicals and maintenance supplies for apparatus, with special
precautions usually demanded by law for poisons and inflammable material (see also
Section 4.2, Safety). Large amounts of inflammable liquids such as alcohol and acetone
should be stored in separate sheds.

6. Workshop or service rooms for the central preparation and storage of distilled and/or
deionized water, for general washing and drying of laboratory ware, for construction and
repair of instruments and for glass-blowing.

7. Rooms for office administration, filing of records, staff meetings, seminars, reception of
visitors, etc. These days, most analysts have or share a personal computer which should
be placed in an office and not in Type 4 areas. Also the central lab computer (which may
be a PC) should be situated in a separate (Type 7) room.

The rooms of Types 3 and 4 should be so arranged and equipped that no samples need to
be taken into them, except those already weighed for analysis and contained in covered
vessels. Although it may seem convenient to carry out all stages of an individual analysis
in one room, this often conflicts with the need to keep delicate instruments away from dust,
fumes and vibration and would frequently lead to unnecessary duplication of equipment.
4.1.2 The storage block

The storage block should consist of at least three rooms:

1. Room for receipt and registration of all samples, with sufficient bench and shelf space to
cope with the input.

2. Room for drying, crushing, grinding/milling and sieving of samples, with measures to
exhaust dust from the air. If both soil and plant samples are being milled, this should be
done in separate rooms.

3. Room for storage of samples, both before and after analysis, with adequate shelf space.
Quality assurance requires that samples should be kept for a minimum period after
analysis (at least a year, but often longer, unless they are of a perishable nature such as
moist soil samples or water samples). This could imply that very soon the storage room is
filled to capacity. In that case additional room need to be found, if necessary in another
building. Proper registration of sample location in the storage place (room, shelf) is very
useful. A laboratory notebook can be formatted for this and its use described (and
prescribed) in a Protocol.
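Where this register is kept on a PC rather than in a notebook, a minimal sketch of such a register might look as follows (a sketch only: the file name, field names and location codes are illustrative assumptions, not part of any prescribed Model):

```python
import csv
import os
from datetime import date, timedelta

REGISTER = "sample_storage_register.csv"   # hypothetical file name
FIELDS = ["lab_number", "work_order", "room", "shelf", "date_stored", "disposal_due"]

def register_sample(lab_number, work_order, room, shelf, keep_years=1):
    """Append one sample's storage location to the register.

    The one-year default retention follows the minimum period mentioned
    above; use a longer keep_years where local rules demand it.
    """
    stored = date.today()
    disposal_due = stored + timedelta(days=365 * keep_years)   # approximate
    new_file = not os.path.exists(REGISTER)
    with open(REGISTER, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:                      # write a header for a new register
            writer.writeheader()
        writer.writerow({"lab_number": lab_number, "work_order": work_order,
                         "room": room, "shelf": shelf,
                         "date_stored": stored.isoformat(),
                         "disposal_due": disposal_due.isoformat()})

register_sample("S-96-0421", "WO-118", room="ST-3", shelf="B2")
```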

4.1.3 Climate

The air temperature of the laboratory and working rooms should ideally be maintained at a
constant level (preferably between 18 and 25°C) and the humidity should also be kept
reasonably steady at about 50%. In many tropical countries air conditioning of the whole
building is virtually as essential as central heating in cold and temperate countries, while in
countries having a continental climate with hot summers and cold winters, both air cooling
and central heating are necessary.

The importance of supplying clean air, at a constant favourable temperature and humidity
to all parts of a scientific laboratory building is too often neglected for financial reasons,
particularly in tropical countries where air conditioning on a large scale during the hot
seasons may be very expensive. However, if some form of air conditioning is not provided,
the efficiency of the work done is bound to be reduced and other expenses incurred
through a number of factors:

1. Analytical processes normally carried out at room temperature can be affected by differences in temperature, so that an analysis performed in a "cold" room can give a different result from one performed in a "hot" room. The temperature of distilled or deionized water may be very different from that in the laboratory. The extraction of phosphate, for example, may be influenced by temperature. Control of temperature is possible on a small scale by the use of thermostatic waterbaths or immersion coolers, but this is impracticable for shaking machines or other large-scale routine operations. Temperature correction factors can, of course, be applied in some cases but these have to be established first and may be inaccurate for wide temperature variations.
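Purely as an illustration of such a correction, the sketch below applies a linear factor; the coefficient used is a made-up placeholder, since, as stated above, real correction factors must first be established experimentally for each procedure:

```python
def temperature_corrected(result, temp_c, ref_temp_c=20.0, coeff_per_deg=0.01):
    """Correct a measured result to a reference temperature.

    Assumes an empirically established *linear* relation: a relative
    change of coeff_per_deg per degree Celsius. The default coefficient
    (1% per degree) is purely illustrative, not an established value.
    """
    return result / (1.0 + coeff_per_deg * (temp_c - ref_temp_c))

# A hypothetical extraction result read at 28 degrees, corrected to 20 degrees:
print(temperature_corrected(12.4, temp_c=28.0))   # -> about 11.5
```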

2. Many chemicals are affected by the temperature and humidity conditions under which
they are stored, particularly if these conditions fluctuate. Thus, a substance may absorb
water from humid air or effloresce in dry air or decompose at high temperatures, becoming
either useless or needing purification.

3. Modern scientific instruments can be quickly and permanently damaged by changes in temperature and humidity, which often cause condensation, tarnishing and short-circuits.

4. The efficiency of all laboratory personnel is undoubtedly reduced by abnormally high or low temperatures or high humidity and by the presence of even moderate amounts of dust or chemical fumes in the air, thus affecting output both in quantity and quality.

5. Central air conditioning is preferred to the use of obviously cheaper alternatives such as
individual cooling units or heaters in each room. Almost inevitably, corridors, store rooms
and, often, sample preparation rooms are ignored and this may lead to undesirably wide
differences in temperature and humidity between such places and analytical laboratories.
For instance the moisture condition of a sample kept in a hot and humid store room (or a
very cold one) may change significantly when taken to an air-conditioned laboratory. The
effects of storage on the results of analysis of soil samples, as often noted in the literature,
may vary with temperature and humidity.

4.2 Safety

4.2.1 Equipment
4.2.2 Chemicals, reagents, and gases
4.2.3 Waste disposal
4.2.4 General rules to observe
4.2.5 First Aid
4.2.6 Fire fighting

4.2.1 Equipment

Most accidents in laboratories occur as a result of casual behaviour and neglect, not only actively in the operations but also passively in the maintenance of appliances (old electricity cables, plugs, manifolds, tubing, clamps, etc.). Therefore, for each apparatus and installation such as water distillers, deionized water systems and gas cylinders, there should be a maintenance logbook in which all particulars are recorded: maintenance, calibrations, malfunctioning and the actions taken to rectify it, and any other remarks relevant for optimal functioning (budget should not be felt as a limiting factor here). If complicated sensitive equipment such as atomic absorption spectrophotometers and autoanalyzers is used by more than one operator, each user should record the operation in the journal, which makes him or her accountable for proper use. Details of this are laid down in SOPs which need to be made for each apparatus (see Chapter 5).
4.2.2 Chemicals, reagents, and gases

The proper handling and storage of chemicals, reagents and gases, particularly the toxic
and inflammable ones should also be laid down in SOPs. An example of such a SOP, for
changing gas cylinders, is given (PROT 051). Such simple SOPs or instructions should
also be written for the storage of chemicals. These may differ according to institute and
country as the laws and regulations differ. In some countries, for instance, acetylene and
nitrous oxide cylinders may not be situated in the laboratory and should be stored in a
special ventilated cupboard or outside the building. Bottles with inflammable substances
need to be stored in stainless steel containers. Working supplies of acids and ammonia
can best be stored under fume cupboards with ventilated storage. Quantities of
inflammable material such as acetone and alcohol in excess of 5 or 10 litre should be kept
outside the building in a separate shed.

Somebody should be responsible for checking and keeping in order the special safety
equipment such as first-aid kits, chemical-spill kits, eye-wash bottles (unless special eye-
wash fountains are present), the functioning of safety showers, the presence and
maintenance of fire extinguishers (the latter will usually be done for the whole institute).
For the instruction of new personnel and to facilitate inspection, a floor-plan indicating all safety appliances and emergency exits should be available. A record of all inspection actions should be kept, held at least by the head of laboratory. One way of doing this is to prepare a Safety Logbook with at least one page for each item to be inspected regularly. An example of a page of this logbook is given as Model SAF 011, which has the same lay-out as Model APP 041, the Maintenance Logbook for laboratory apparatus (see Chapter 5).

Storing chemicals in alphabetical order is convenient but can only be done to a limited
extent as several chemicals should not be stored together. This must be carefully
considered in each case. For instance, oxidizing and reducing agents should not be stored
together. Acids should not be stored with organic liquids. The chemical properties and
hazards of each chemical in stock can be looked up in relevant handbooks. In addition,
suppliers of chemicals have Material Safety Data Sheets available for their hazardous
products. If a chemical has particular hazardous properties this is indicated on the label by
a hazard symbol. Although these symbols are almost self-descriptive, the most important
ones are reproduced here (see Fig. 3-1). Absence of a hazard symbol does not
necessarily imply safety!

Fig. 3-1: Hazard symbols on labels of chemical containers.

Each laboratory has its own specific range of chemicals. Once a proper partition into
categories is made, this can be laid down in a Standard Registration Form which should
be verified by a qualified chemist.

Both for efficient working and for inspection purposes a list of chemicals in stock and the place they are stored should be prepared and kept up-to-date. Copies of this list should be situated in or near all storage places so that any container or bottle removed can be tallied, allowing easy stock management (timely ordering of new stock!).

An example of the first page of such a list is given at the end of this chapter (RF 031). A separate list should be made of the suppliers from which each of the chemicals can be ordered.
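If such a stock list is maintained on a PC, a minimal sketch might look as follows; the column names mirror the stock record form RF 031 reproduced at the end of this chapter, while the file name, grades and quantities are illustrative assumptions:

```python
import csv

STOCK_FILE = "chemical_stock.csv"   # hypothetical file name

# Columns as in Model RF 031: lab no., order no., chemical, grade, location, stock, removed
with open(STOCK_FILE, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["lab_no", "order_no", "chemical", "grade", "locat", "stock", "removed"])
    writer.writerow([1, "M1084", "Aluminium chloride, hexahydrate", "p.a.", 1, 2, 0])
    writer.writerow([2, "M1063", "Aluminium nitrate, nonahydrate", "p.a.", 1, 1, 0])

def low_stock(path, minimum=1):
    """List chemicals whose remaining stock has fallen to the re-order level."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if int(row["stock"]) - int(row["removed"]) <= minimum:
                yield row["chemical"]

print(list(low_stock(STOCK_FILE)))   # -> chemicals to re-order in time
```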
4.2.3 Waste disposal

An important item to observe is waste disposal. In many countries the regulations as to waste disposal are very strict. Sometimes a record of incoming and outgoing chemicals is required. Some chemicals in use in soil and plant laboratories such as common acids, bases and salts may be disposed of in dilute form and need not necessarily pose a problem, but local regulations vary and tend to become stricter. Care should be taken when a laboratory drain outlet "disappears" somewhere in the ground to some obscure destination or into a cesspit. Unless there is no other option, observe the rule not to dilute concentrated solutions in order to make them disposable: 'dilution is no solution to pollution'!

A number of chemicals deserve special attention as they may never be disposed of via the
sink, such as all toxic compounds (e.g., cyanides), persistent mineral oils, chromates,
molybdates, vanadates, selenium, arsenic, cobalt and several other metals and metalloids
and their compounds. All these materials have to be collected in proper containers to be
disposed of in a way prescribed by the local authorities. These have to be contacted about
the appropriate actions to be taken and regulations to be obeyed.

Make an inventory of toxic compounds in the laboratory and prepare a Protocol for their
collection and disposal. Usually a technician is charged with the responsibility for this.

Waste sample remains should never be disposed of by washing down a drain. Use proper
receptacles for this purpose. Nevertheless, sinks and gullies should be fitted with
removable silt traps which should be emptied regularly. In certain cases heavily polluted
soil samples may have to be treated as toxic chemical waste.

4.2.4 General rules to observe

The "Methods manual for forest soil and plant analysis" (Kalra and Maynard, 1991) gives a
useful list of various points to improve safety in a laboratory. With some modifications, this
list is reproduced here (with permission). It is suggested that each laboratory adapts and
moulds this list into a SOP called "Good Laboratory Behaviour" or "General Laboratory
Rules".

1. All employees must receive and understand the locally applicable Workplace Hazardous
Materials information guide or equivalent (if such a guide exists). In any case, the
management is responsible for proper instruction.

2. Develop a positive attitude toward laboratory safety: prevention is better than cure.

3. Observe normal laboratory safety practices.

4. Good housekeeping is extremely important. Maintain a safe, clean work environment.

5. You may work hard, but never in haste.

6. Follow the safety precautions provided by the manufacturer when operating instruments.
7. Monitor instruments while they are operating.

8. Avoid working alone. If you must work alone, have someone contact you periodically.

9. Learn what to do in case of emergencies (e.g., fire, chemical spill, see 4.2.6).

10. Learn emergency first aid (see 4.2.5.2).

11. Seek medical attention immediately if affected by chemicals and use first aid until
medical aid is available.

12. Report all accidents and near-misses to the management.

13. Access to emergency exits, eye-wash fountains and safety showers must not be
blocked. Fountains and showers should be checked periodically for proper operation.
(Safety showers are used for chemical spills and fire victims.)

14. Wash hands immediately after contact with potentially hazardous or toxic chemicals.

15. Clean up any spillage immediately. Use appropriate materials for each spillage.

16. Dispose of chipped or broken glassware in specially marked containers.

17. Use forceps, tongs, or heat-resistant gloves to remove containers from hot plates,
ovens or muffle furnaces.

18. Do not eat, drink or smoke in the laboratory. In many countries smoking in common
rooms is prohibited by law.

19. Do not use laboratory glassware for eating or drinking.

20. Do not store food in the laboratory.

21. Telephone calls to a laboratory should be regarded as improper disturbance and therefore be restricted to urgent cases.

22. Unauthorized persons should be kept out of a laboratory. Visitors should always be
accompanied by authorized personnel.

23. All electrical, plumbing, and instrument maintenance work should be done by qualified
personnel.

24. Routinely check for radiation leaks from microwave ovens using an electromagnetic
monitor.

25. When working with X-ray equipment, routinely check (once a week) for radiation leaks
from X-ray tubes with appropriate X-radiation detectors. In some countries wearing a film
badge is obligatory. However, this is no protection!
26. Use fume hoods when handling concentrated acids, bases, and other hazardous
chemicals. Fume hoods should be checked routinely for operating efficiency. Do not use
them for storage (except the cupboards underneath, which preferably have a tube
connection with the fume cupboard above for ventilation).

27. Muffle furnaces must be vented to the atmosphere (e.g. via a fume cupboard).

28. Atomic absorption spectrophotometers must be vented to the atmosphere (if necessary via a fume cupboard). Ensure that the drain trap is filled with water prior to igniting the burner.

29. Use personal safety equipment as described below.

a. Body protection: laboratory coat and chemical-resistant apron.

b. Hand protection: gloves, particularly when handling concentrated acids, bases, and
other hazardous chemicals.

c. Dust mask: when crushing or milling/grinding samples, etc.

d. Eye protection: safety glasses with side shields. Persons wearing contact lenses should
always wear safety glasses in experiments involving corrosive chemicals.

e. Full-face shields: wear face shields over safety glasses in experiments involving
corrosive chemicals.

f. Foot protection: proper footwear should be used. Do not wear sandals in the laboratory.

30. Avoid unnecessary noise in the laboratory. Noise producing apparatus such as
centrifuges, or continuously running vacuum pumps should be placed outside the working
area.

31. Cylinders of compressed gases should be secured at all times.

32. Never open a centrifuge cover until the machine has stopped completely.

33. Acids, hydroxides, and other hazardous liquid reagents should be kept in plastic or
plastic coated bottles.

34. Do not pipet by mouth.

35. When diluting, always add acid to water, not water to acid.

36. For chemicals cited for waste disposal, write down contents on the label.

37. Always label bottles, vessels, wash bottles, etc., containing reagents, solutions, samples, etc., including those containing water and also those you use only for a short while (this short while may turn into days!).
38. Extreme care is required when using perchloric acid, otherwise fires or explosions
may occur. Work must be performed in special fume cupboards, certified as perchloric
acid safe, with a duct washdown system and no exposed organic coating, sealing
compound, or lubricant. Safety glasses, face shield, and gloves must be used. When wet-
digesting soil or plant samples, treat the sample first with nitric acid to destroy easily
oxidizable matter. Oxidizable substances (e.g. tissue, filter paper) should never be
allowed to come into contact with hot perchloric acid without pre-oxidation with
nitric acid. Do not wipe spillage with flammable material. Do not store on wooden shelves.
Do not let perchloric acid come into contact with rubber.

39. Read labels before opening a chemical container. Use workplace labels for all prepared reagents indicating kind of reagent and concentration, date of preparation, date of expiry and the name of the person who prepared it. Good Laboratory Practice prescribes that all these particulars, including the amounts of components used, are recorded in the Reagents and Solutions Book.

Useful information can also be found on the Internet, e.g., http://www.safety.ubc.ca/manual/safema12.htm.

4.2.5 First Aid

Every employee of a laboratory should have knowledge of emergency first aid and roughly
one out of every ten employees of a whole institute should have a valid First Aid certificate
including an endorsement for resuscitation. These qualifications should be mentioned on
the Staff Record Form (model PERS 012). The management should encourage first aid training and the essential refresher courses by allowing time off and a periodical bonus.

Since no paragraph, nor even a chapter, can take the place of proper first aid training, only some major practical aspects will be mentioned here to provide the basics of emergency first aid. These may be summarized in a SOP or Instruction.

4.2.5.1 Essential Items and Equipment

1. Names and internal phone numbers of employees with a First Aid certificate.
2. Telephone numbers of physicians and hospitals as well as the general emergency number.
3. First Aid kit.
4. Eye-wash fountains or bottles.
5. Safety showers (at least one per laboratory).

It is the (delegatable) responsibility of the head of laboratory that these items are in order.
A check-list for regular inspection of these points should be made (and kept, for instance,
with the First Aid kit).

Items 1 and 2 could be taken care of by issuing a sticker with this information to each
employee (to be stuck onto or next to his/her telephone).

The First Aid kit should be the responsibility of one person who keeps a logbook of regular contents checks and purchased supplements. Tallying materials used from the First Aid kit appears to be illusory in practice. Also, eye-wash equipment and safety showers need to be inspected regularly. When an eye-wash bottle has been used, it should be replaced or refilled and the expiration date revised.

4.2.5.2 Emergency First Aid

Sometimes, in case of an accident, there is no time or possibility to await qualified help. In that case, the necessary help needs to be given by others. The most important general points to observe are listed here:

1. Stay calm, try to oversee the situation and watch out for danger.

2. Try to find out what is wrong with the casualty.

3. Take care that the casualty keeps breathing. If breathing stops, try to apply artificial respiration by mouth-to-mouth or mouth-to-nose insufflation. When the casualty is unconscious, turn him/her on the side with the face tilted to the floor (support the head with some kind of cushion).

4. Staunch serious bleeding. If necessary, arterial bleeding may be stopped by pressing a thumb in the wound.

5. Do not move the casualty unless he/she is in a dangerous position (e.g., in case of gas,
smoke, fire or electricity), then carefully move casualty to a safe place.

6. Put the casualty's mind at rest.

7. Call qualified help as soon as possible: medical service, a physician and/or an ambulance, and if necessary, the police. Do not leave the casualty unattended.

A few specific accidents that may occur in the laboratory are the following:

Burns: Hold affected parts of the skin for at least 10 minutes in cold water. Try to keep the burn sterile and do not apply ointment.

Corrosive burns (e.g. by hydrogen peroxide): Wash the affected part of the skin thoroughly with water.

Eye (corrosive) burns: Wash the eye thoroughly with tap water: use an eye fountain or eye-wash bottle or a tubing connected to a tap.

Hydrofluoric acid burns: Wash the affected part with dilute ammonia (1-2%) or sodium bicarbonate solution.

Poisoning by swallowing:

1. Corrosive solutions (acids, bases): Let the casualty drink one or two glasses of water to dilute the poison. Vomiting should not be induced.

2. Petroleum products: Do not induce vomiting (the products may get into the bronchial tubes).

3. Non-corrosive solutions (e.g. herbicides, fungicides): Try to induce vomiting. Swallow activated charcoal.

In all these cases the casualty must immediately be taken to a physician or hospital. Try to bring the original container (with or without some of the poison).
4.2.6 Fire fighting

As in the case of First Aid, a number of employees should be properly trained in fire fighting; this goes especially for laboratory personnel. Therefore, at this point only general instructions will be given, to be applied when no qualified person can help in time. These instructions can be moulded into a Standard Instruction to be issued to each and every employee.

4.2.6.1 Necessary items and equipment

1. Fire-proof blanket.

2. Safety shower (at least one per laboratory).

3. Buckets with sand.

4. Portable fire extinguishers of essentially two types: CO2 or b.c.f. (halon, halogenated hydrocarbons), since these can be used without causing damage to electrical equipment. The extinguishing power of halon is about 6 times that of CO2! Water has the disadvantage that it conducts electricity; powder extinguishers (containing salts) cause damage to instruments.

4.2.6.2 Actions

When fire is detected stay calm, try to oversee the situation and watch out for danger.
Then the following actions should be taken in this order:

1. Close windows and doors.


2. Give fire alarm (shouting, telephone, fire alarm).
3. Rescue people (and animals if present).
4. Switch off electricity and/or gas supply.
5. Fight fire, if possible with at least two persons.

Persons with burning clothing should be wrapped in a blanket on the floor, sprayed with
water or be pulled under a safety shower. A CO2 fire extinguisher can also be used, but do
not spray in the face.

When using fire extinguishers it is important that the fire is fought at its seat, i.e. at the base of the flames, not in the middle of the flames.

If gas cylinders are present there is the danger of explosion by overheating. If they cannot be removed, take cover and try to cool them with a fire-hose. When the situation looks hopeless, evacuate the building. Let everybody assemble outside and check that no one is missing. To practise this, a regular fire drill (once a year) should be held.

The management should have a calamity scenario drawn up for the whole institute as a
Standard Instruction which is issued to each and every employee.
4.3 Admittance to the laboratory
In connection with safety and quality, only authorized persons have admittance to the
laboratory blocks. These persons are: all laboratory staff, the Quality Assurance Officer
and, usually, other professional officers employed by the institute. Others may only enter
the laboratory after permission. This permission can be given by the head of laboratory or
his/her deputy. The entrances should be marked with a sign "no admittance for
unauthorized persons". In case of trainees, students, visitors etc., at least one laboratory
staff member must be charged with their supervision or responsibility.

SOPs

PROT 051 - The replacement of a gas cylinder
SAF 011 - Safety Logbook (Laboratory)
RF 031 - Stock record of chemicals

PROT 051 - The replacement of a gas cylinder


LOGO STANDARD OPERATING PROCEDURE Page: 1 # 1
Model: PROT 051 Version: 2 Date: 95-03-14
Title: The replacement of a gas cylinder

1 PURPOSE

To properly replace an empty pressure gas cylinder by a new one.

2 RELATED SOPs

- PROT Acceptance of delivery of goods
- PROT Storage of gases
- RF Logbook: Stock record of gases

3 REQUIREMENTS

Large spanner of correct size or shifting spanner. Detergent/soap solution with small paint
brush.

4 PROCEDURE

4.1 General

1. A cylinder may only be changed by well-instructed, qualified personnel.

2. Make sure of the identity of the gas.


3. Ascertain that the cylinder was properly labelled upon receipt (with date and initials). Add the date of opening and initials to the label.

4. Take note of the particular properties and dangers of the gas.

5. Take note of applicable instructions of supplier.

4.2 Procedure

1. Make sure all connected equipment is switched off.

2. Close secondary valve in instrument room.

3. Close valve on cylinder.

4. Remove manifold from cylinder with (shifting) spanner of the correct size (do not use
monkey wrench!).

5. Replace cylinder.

6. Connect manifold with (shifting) spanner of correct size (do not use monkey wrench!).

7. Open valve on cylinder and make sure connection is gas-tight. In case of any doubt, apply detergent solution to the connection with a brush: bubbling indicates a leak. Warning: never search for a leak with a naked flame! If a leak is suspected, immediately close the main valve on the cylinder and notify the management, which should decide what action should be taken to solve the problem (e.g., replace manifold or cylinder or both).

8. Check that the pressure indicated by the manifold conforms to the specification of the supplier.

9. Close valve on cylinder when gas is not to be used for some time.

10. Enter replacement in gas/supply logbook.

11. Add to label of empty cylinder date of replacement and initial. Add label "EMPTY".

12. Notify the person in charge of gas stock (and of ordering new cylinders).

13. Notify any worker who might be waiting for the cylinder change.

Author: Sign.:
QA Officer (sign.): Expiry date:
SAF 011 - Safety Logbook (Laboratory)
LOGO STANDARD OPERATING PROCEDURE Page: 1 # ...
Model: SAF 011 Version: 1 Date: 96-02-27
Title: Safety Logbook (Laboratory)
Date Inspection / Problem / Action taken / Remarks Sign. Sign. HoL
RF 031 - Stock record of chemicals
LOGO STANDARD REGISTRATION FORM Page: 1 # 8
Model: RF 031 Version: 2 Updated: 96-07-01 Sign.:
Title: Stock record of chemicals
copies (locat.): [ ] central store (1) [ ] fume cupboard (2)/(3) [ ] steel boxes (4)/(5) [ ] shed (6)
lab. no. order no. chemical grade locat. stock removed
1 M1084 Aluminium chloride, hexahydrate
2 M1063 Aluminium nitrate, nonahydrate
3 M1095 Aluminium oxide
4 M0099 1-Amino-2-hydroxy-4-naphthalene-sulfonic acid
5 M1115 Ammonium acetate
6 M1136 Ammonium carbonate
7 M1145 Ammonium chloride
8 M1164 Ammonium fluoride
9 M1188 Ammonium nitrate
10 M1182 Ammonium heptamolybdate, tetrahydrate
11 M3792 Ammonium iron(II)sulfate, hexahydrate
12 M3776 Ammonium iron(III)sulfate, dodecahydrate
13 M1206 Ammonium monohydrogen phosphate
14 M1226 Ammonium monovanadate
15 M1192 Ammonium oxalate, monohydrate
16 M1217 Ammonium sulfate
17 M4282 Gum Arabic
18 M8127 Ascorbic acid
19 M1703 Barium acetate
20 M1714 Barium carbonate
21 M1717 Barium chloride, dihydrate
22 M0255 Diphenylamine-4-sulfonic acid barium salt
23 M1737 Barium hydroxide, octahydrate
24 K7375 Bolus alba (kaolin)
25 M0165 Boric acid
26 M8121 Bromocresol green

5 MATERIALS: APPARATUS, REAGENTS, SAMPLES

5.1 Introduction
5.2 Apparatus
5.3 Reagents
5.4 Samples
SOPs

5.1 Introduction
Quality analytical work can only be performed if all materials used are suitable for the job,
properly organized and well cared for. This means that the tools are adequate and in good
condition, and that sample material receives attention with respect to proper handling,
storing and disposal.

The tools used for analysis may be subdivided into four categories:

1. Primary measuring equipment (pipettes, diluters, burettes, balances, thermometers, flow meters, etc.).

2. Analytical apparatus or instruments.

3. Miscellaneous equipment and materials (ovens, furnaces, fridges, stills, glassware, etc.)

4. Reagents.

The saying that a chain is as strong as its weakest link applies particularly to these items. An analyst may have gone out of his/her way (as he/she should) to prepare extracts, but if the cuvette of the spectrophotometer is dirty, or if the wavelength dial does not indicate the correct wavelength, the measurements are in jeopardy. Both the blank and the control sample (and a possible "blind" sample or spike) will most likely reveal that something is wrong, but the harm is already done: the problem has to be found and resolved, and the batch might have to be repeated. This is a costly affair and has to be minimized (it is an illusion to think that it can be totally prevented) by proper handling and maintenance of the equipment.

The quality and condition of a number of other working materials also have to be watched closely. Calibration of thermometers, burettes and pipettes, particularly the adjustable types, may reveal deviations exceeding the acceptable tolerance, in which case they must be put out of use. New glassware may look clean but always needs to be washed. Glassware may give off unwanted elements (boron, silicon, sodium). The same goes for milling and grinding equipment (pestles and mortars, tungsten carbide grinders, brass or steel sieves). For virtually all analyses glassware needs to be rinsed with deionized water after washing. Therefore, if glassware, such as volumetric flasks, is shared by analysts, they should be able to rely on the loyalty and good laboratory practice of their colleagues.

A similar reasoning applies to reagents. One of the most prominent sources of error in a laboratory is the use of wrongly prepared or old reagents. Therefore, reagents have to be prepared very carefully, exactly following the prescriptions; they have to be well labelled, and expiry dates have to be observed closely. Filtering a pH buffer solution in which fungi are flourishing may save time and reagent but is penny-wise and pound-foolish.

Of equal importance for the quality of the work is the proper handling of the sample material. This concerns not only the technical aspects such as sample preparation, but particularly the safeguarding of the identity and integrity of the samples, as well as their final storage or disposal (chain of custody).
As part of the overall quality assurance, in this chapter a number of instructions and
suggestions are presented to ensure the analytical reliability of the main tools and proper
organization of sample handling.

5.2 Apparatus

5.2.1 Registration
5.2.2 Operation

For quality assurance, with respect to instruments and other equipment the following
requirements should be met:

1. Apparatus used for generation of data, and for controlling environmental factors relevant
to the study should be suitably located and of appropriate design and adequate capacity.

2. The apparatus used should be periodically inspected, cleaned, maintained, and calibrated according to Standard Operating Procedures. Records of procedures should be maintained.

In practice, therefore, a number of record forms and instructions need to be prepared.

5.2.1 Registration

5.2.1.1 Instrument Identification List
5.2.1.2 Instrument Maintenance List. Instrument Calibration List

5.2.1.1 Instrument Identification List

For proper management a complete list of all available apparatus is indispensable.


This Instrument Identification List should contain all information relevant for ensuring
reliable and continuous functioning of the apparatus. A model page for such an instrument
list is given as Model APP 003.

Such a record should, in addition to the description and registration/identification number, contain information about the supplier (to contact in case of inspection, repair or
replacement), the date the apparatus was installed, and the person to whom the
responsibility for the instrument was assigned. This list can be compiled by any laboratory
staff member on behalf of the head of laboratory. A copy can be issued to all laboratory
staff (as well as the Quality Assurance Officer if applicable) or the list is deposited in a
central place accessible to all staff. The latter option allows a card-box system (physical
and/or on computer) where cards can easily be inserted and removed. When new
apparatus is acquired the list must be revised (or a new list or page may be issued)
including the deletion of old apparatus when replaced.

5.2.1.2 Instrument Maintenance List. Instrument Calibration List

For apparatus that needs maintenance and calibration at regular intervals an Instrument
Maintenance List and an Instrument Calibration List (or card-box system) must be
prepared. These lists, which may be combined, include columns for instrument
identification, reference to the logbook concerned and fixed dates or intervals for actions.
They are an aide-mémoire for the laboratory management, the actions themselves being
recorded in the logbooks. A model for these lists is given (Model APP 004).

5.2.2 Operation

5.2.2.1 Operation Instruction Manual
5.2.2.2 Instrument Maintenance Logbook
5.2.2.3 User Logbook
5.2.2.4 SOPs for use of equipment

5.2.2.1 Operation Instruction Manual

For all apparatus an Operation Instruction Manual should be available. Usually this is the
instruction manual issued by the supplier. Should this instruction not be satisfactory,
incomplete, or in a language in which the user is not proficient, then a proper instruction
manual should be made. Most commonly, the technician using an instrument writes this as
a SOP. Examples are given at the end of this chapter (see 5.2.2.4). Often, laboratories
have the instruction manual and maintenance logbook (see next) combined into one
volume.

5.2.2.2 Instrument Maintenance Logbook

In addition to the instruction manual, for each apparatus a Maintenance Logbook should be prepared. All relevant actions taken with respect to the apparatus should be recorded in
this logbook, e.g., problems encountered and repairs made, periodic inspections, and
calibrations (other than normal calibrations with standard curves as part of an analysis). A
model for the pages of such a logbook is given as Model APP 041.

When initiating these logbooks for apparatus that have been in use for some time, the
present condition of the apparatus is the starting point and must be assessed and
recorded in the logbook together with any other information which happens to be known
and which might be relevant for future functioning (age, past problems, defects, repairs,
etc.). This is preferably compiled by the technician-in-charge of each instrument
concerned. If this venture is taken up from scratch it is advisable to start with analytical
instruments that generate data, and subsequently deal with the auxiliary equipment.
Maintenance should always be carried out by qualified technicians, either from inside or outside the institute. Many laboratories have maintenance contracts with suppliers. Such contracts are generally quite expensive and should be critically reviewed at regular intervals for usefulness and service frequency. Depending on the intensity and skill with which equipment is used, maintenance intervals can often be extended (unless accreditation bodies require strict adherence to maintenance schedules). For other equipment, regular maintenance may be changed into if-and-when-needed maintenance, particularly for equipment which is checked or calibrated before each use (which may reveal a gradual decline in response, e.g. AAS) and equipment for which back-up instruments are available (e.g. electronic balances). Often, however, such policies remain theory because for some reason qualified service may not be readily available, for instance when the supplier has no local office in the country. This makes in-house maintenance facilities all the more important.
A sensible measure is to build up a stock of essential parts (e.g. hollow cathode lamps for
AAS), necessary tools, blueprints and, if possible, back-up equipment. Keep records of all
these items. Also, arrangements with other laboratories for mutual assistance can be
useful.

Note. An example of an organization with this aim is SPALNA, the Soil and Plant Analytical Laboratory Network of Africa. Secretariat at: IITA, Oyo Rd., PMB 5320, Ibadan, Nigeria.
5.2.2.3 User Logbook

Finally, for all apparatus sensitive to use and particularly to misuse, such as flame photometers, AAS, ICPs, chromatographs, autoanalyzers, X-ray equipment, spectrophotometers, etc., a User Logbook should be prepared in which users identify themselves and report particulars of the use: date, duration, elements measured, matrix in which was measured, and whether any problems were encountered (serious problems should then be recorded in the Maintenance Logbook as well and reported to the head of laboratory). A suggestion for a page of this logbook is given as Model APP 051.
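Where such a user journal is kept electronically, one record per use might be structured as in the sketch below; the field names simply mirror the columns of Model APP 051 and are an assumption, not a prescribed format:

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class UserLogEntry:
    """One line of a user logbook for a shared instrument (cf. Model APP 051)."""
    entry_date: str
    user: str
    duration: str
    elements: str          # e.g. "Ca, Mg"
    matrix: str            # e.g. an extract type
    problems: bool         # serious problems also go to the Maintenance Logbook
    remarks: str = ""

entry = UserLogEntry(date.today().isoformat(), "A. Technician",
                     "2 h", "Ca, Mg", "soil extract", problems=False)
print(asdict(entry))   # ready to append to a file or database
```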

To provide the user (and the HoL or QA officer) with easy and rapid information, in some laboratories the status of maintenance and calibration is indicated on a label on the instrument.

5.2.2.4 SOPs for use of equipment

As indicated above, a SOP should be made for the use of each apparatus. Although much
freedom exists as to the format of such SOPs, a minimum of essential information should
be included. As a guide, a few examples are given at the end of this chapter. These
comprise:

1. A standard instruction for writing these SOPs (Model F 011),

2. Two SOPs for primary measuring equipment: an adjustable pipette and an electronic
balance (Models APP 061, and APP 062).

3. A SOP for a common analytical instrument: a pH meter (Model APP 071).


One of the most important aspects of the SOPs for this equipment is the calibration and
adjustment (standardization). A number of tools belonging to the category of primary
measuring equipment cannot be adjusted e.g., volumetric flasks, standard glass burettes,
volumetric pipettes. This type of equipment is not normally calibrated unless there is
reason to suspect inaccuracy or when it is to be used for very accurate analytical work.
Sometimes different qualities are available e.g., volumetric flasks Class A and Class B
(tolerance ±0.1% and ±0.2% respectively).

Note: When calibrating volumetric glassware the weight of the displaced air may not be neglected: air buoyancy amounts to a correction of about 0.1%. Combined with the density of water at 25°C (0.99705 g/ml), this means that filling a volumetric flask with 100.00 ml of water of 25°C should result in a balance reading of approximately 99.60 g, not 100.00 g.
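The figures in this note can be checked with a short calculation. The sketch below uses round textbook densities (assumptions on our part: water at 25°C 0.99705 g/ml, air 0.0012 g/ml, balance calibration weights 8.0 g/cm3) and the standard buoyancy formula:

```python
RHO_WATER = 0.99705    # g/ml, water at 25 degrees C
RHO_AIR = 0.0012       # g/ml, ambient air (approximate)
RHO_WEIGHTS = 8.0      # g/cm3, conventional density of calibration weights

def balance_reading(volume_ml):
    """Expected balance reading when weighing volume_ml of water at 25 C in air."""
    true_mass = volume_ml * RHO_WATER
    return true_mass * (1 - RHO_AIR / RHO_WATER) / (1 - RHO_AIR / RHO_WEIGHTS)

print(round(balance_reading(100.00), 2))   # -> about 99.60 g for a 100 ml flask
```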

5.3 Reagents

5.3.1 Reagent chemicals
5.3.2 Standard and Reagent solutions

5.3.1 Reagent chemicals

It is advisable to use analytical grade chemicals throughout the laboratory. Nevertheless, in soil analysis analytical grade is often not really necessary and the chemically pure grade is satisfactory. It is then a matter of balancing the saving of money against disadvantages such as needing more space (some chemicals will be in stock in two grades), more book-keeping and the risk of making mistakes. The minimum requisite purity of chemicals (including that of water and other solvents) should be stated in the description of the analytical procedure (see Chapter 7).

When chemicals (and gases) arrive in the laboratory, the containers need to be labelled. On the label should be recorded the date it was received, the date it was first opened and, in some cases, the expiry date. Such labels can conveniently be home-made with the PC. A suggestion for a model is given in Figure 5-1.

Fig. 5-1 - Label for reagent chemical containers


When taking chemicals from a bottle there are three basic rules to obey:

1. Use a clean spoon or spatula (do not use one which happens to lie around, unless it is
cleaned).

2. Do not return chemicals to the bottle.

3. Close the bottle tightly after use.

5.3.2 Standard and Reagent solutions

There are basically two types of standards needed for chemical analysis:

1. Standard reagents for standard reagent solutions, e.g., the popular standard reagent
ampoules, or pure chemicals. These are needed for first-line control (calibration of
measuring instruments).

2. Standard sample material, to be divided in primary (certified) standard material and the
"home-made" control samples. These are needed for second-line and third-line control.

The present discussion is restricted to Type 1, the standard reagents. The standard sample material is discussed in Chapters 7, 8, and 9.

In addition to standard solutions for calibration, reagent solutions need to be prepared for extractions and analytical reactions.

Much of the success of an analysis depends on the reliability of the standard and other reagent solutions used. These should be prepared with great care and only by experienced personnel. In larger routine laboratories extracting solutions are often made by one person or unit and centrally stored. Each preparation of a solution should be recorded in a Reagents Book (or a separate Standards Book). A model for a page of such a book is given (RF 032). When an ampoule is used rather than a reagent chemical, this can be entered in the column "Amount weighed in". (If titrations are used to determine titres, details can be recorded on the Worksheets belonging to the analytical procedure involved.)

The lay-out of a reagent book may be one of two kinds:

1. All reagents prepared are recorded chronologically.


2. For each reagent a page (or set of pages) is reserved.

In the latter case the first row of the form (RF 032) may be pre-printed or pre-filled in for convenience.

When a standard or any other reagent solution is prepared the bottle in which it is stored
should be properly labelled. A suggestion for the model of such a label is given in Figure 5-
2.

Fig. 5-2. Label for reagent solutions.

For bottles containing standard solutions for calibration of instruments, labels of a slightly different model can be used, as shown in Figure 5-3. In the upper empty section the analyte and concentration can be written with a felt marker in large characters for easy recognition, which is convenient during the calibration procedure, e.g. "Ca 10" for AAS. For handling reagent solutions similar rules apply as for reagent chemicals: do not return unused solution to the bottle (contamination!) and close
Even so, reagent solutions should generally not be kept for longer than six months after
preparation (while some may only be used for few days, and some should be prepared
freshly each time they are used!). In some cases, expiry dates are provided by the
supplier, but in most cases only experience can teach what the shelf-life of a reagent
solution is. Sometimes, reagent solutions can be re-standardized to extend the shelf-life
(place new or additional label or sticker!). To avoid mistakes, coloured labels (or coloured
dots) may be used. For example, red labels could be used for reagents with a short life
(e.g. buffer solutions), blue labels for reagents that should be stored in the refrigerator, and
yellow for reagents that are to be kept in the dark.

Fig. 5-3. Label for standard solutions.

The preparation of each (standard) reagent solution, including information about labelling, storing and disposal, should be written up as a SOP.
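Since wrongly labelled or expired reagents are such a prominent source of error, the labels of Figures 5-2 and 5-3 can also be generated on the PC. A minimal sketch is given below; the lay-out and the six-month default shelf-life merely echo the text above, and all names are illustrative:

```python
from datetime import date, timedelta

def reagent_label(name, concentration, prepared_by, shelf_life_days=180):
    """Return the text of a reagent-solution label (cf. Figures 5-2 and 5-3).

    The 180-day default reflects the general six-month maximum mentioned
    above; many reagents need a much shorter shelf-life.
    """
    prepared = date.today()
    expires = prepared + timedelta(days=shelf_life_days)
    return (f"{name}  {concentration}\n"
            f"prepared: {prepared.isoformat()}  by: {prepared_by}\n"
            f"expiry:   {expires.isoformat()}")

print(reagent_label("Ca standard", "10 mg/l", "A.T.", shelf_life_days=90))
```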

5.4 Samples
Test samples of soil, plant and water vary widely in nature and condition. The sampling
itself will not be discussed here as the responsibility for this usually lies outside the
laboratory.

Note. This does not mean that the sampling procedure has no influence on the analytical
results. Several factors such as moisture content, packing, time lapse between sampling
and analysis, etc. may be of influence. In addition, the technique and pattern of sampling
(grid density) can have a strong bearing on the interpretation of the results.

The laboratory should demand proper packaging, labelling and administration of samples
before they reach the laboratory. Specific information as to the character of the sample
may be very useful for further processing. In fact, SOPs, protocols, registration forms and
labels (made of plastic, not of paper) can be prepared and issued to clients or project
managers prior to sampling or delivery to the laboratory. This greatly facilitates the
administration and processing in the laboratory and reduces the risk of mistakes and
confusion. There is probably no laboratory which has never experienced a problem in this
field.

Good Laboratory Practice aims at proper administration and a continuous scrutiny of the
identity of samples and an unbroken chain of custody. Every effort should be made to
prevent samples being accidentally interchanged, being contaminated (broken bags),
losing their identity (i.e. their label or number) or getting lost. A system can never be foolproof: there may always be circumstances beyond one's control (e.g. fire), and malevolence and sabotage in particular are virtually impossible to prevent.

Chain of custody

From the moment samples arrive at the laboratory (or institute) their identity, integrity
(spilling and contamination!), and knowledge of their whereabouts must be safeguarded.
This implies the following actions:

1. Inspection of packaging, condition of samples (dry, moist), identification (labels still readable?).

2. Registration by an authorized person who takes care of further routing the samples
according to a protocol.

This will usually imply handing over the samples to someone charged with the preparation
(drying, sieving etc.) and transfer to the laboratory. Each institute needs a protocol
describing the formal procedure for handling samples. An example of a simple draft
version is given as Model PROT 011.

The registration procedure includes entering particulars of the samples into a Sample
Logbook with forms of a design fitted for the purpose. An example is given here as Model
RF 011. The sample numbers can be entered into the logbook or, perhaps more
conveniently, attached to the registration form when a proper sample list is accompanying
the samples. A copy of this list is kept in the (physical) file which is made for each work
order. The registration procedure also includes assigning a programme of analyses and
defining a target date of completion. An example of a form for this purpose is given as
Model RF 021.

It is emphasized that a computerized laboratory information and management system (LIMS) is a powerful tool in the organization and quality control of the laboratory (see
Chapter 8). In the chain of custody a LIMS is useful as it facilitates the registration of
samples and analytical programme and produces ready-to-use printed sticker-labels with
all relevant information for the sample containers.
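As a much reduced illustration of this registration step, the sketch below assigns laboratory numbers to the samples of an incoming work order and produces sticker texts. A real LIMS does far more (routing, analytical programmes, target dates), and every name and number format here is an assumption:

```python
from datetime import date
from itertools import count

_lab_numbers = count(1)   # in practice this counter lives in the LIMS database

def register_work_order(order_id, field_labels):
    """Assign a unique lab number to each incoming sample and return sticker texts."""
    received = date.today().isoformat()
    stickers = []
    for field_label in field_labels:
        lab_no = f"S-{date.today().year}-{next(_lab_numbers):04d}"
        stickers.append(f"{lab_no} | order {order_id} | field: {field_label} | recd {received}")
    return stickers

for sticker in register_work_order("WO-118", ["plot 1, 0-20 cm", "plot 1, 20-40 cm"]):
    print(sticker)
```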

Finally, the ways samples are stored and disposed of also have to be described in
protocols. As mentioned in Section 4.2.3, sometimes contaminated samples may have to
be treated as chemical waste.

SOPs

APP 003 - Instrument Identification List
APP 004 - Instrument Maintenance/Calibration List
APP 041 - Logbook of AAS
APP 051 - User Logbook of AAS
F 011 - Standard Instructions for drafting apparatus SOPs
APP 061 - Operation of Eppendorf Varipette 4810
APP 062 - Operations of electronic balance Sartorius
APP 071 - Operation of pH meter Metrohm E 632
RF 032 - Page of Reagents Book
PROT 011 - Protocol for custody chain of samples
RF 011 - Protocol for accepting delivery of samples
RF 021 - Form for accepting order for analysis

APP 003 - Instrument Identification List


LOGO STANDARD OPERATING PROCEDURE Page: 1 # ...
Model: APP 003 Version: 2 Date: 96-05-15 File:
Title: Instrument Identification List
Instr. Description Serial Install. Supplier Person-in-charge Location Logbook
no. no. date (deputy) no.
APP 004 - Instrument Maintenance/Calibration List
LOGO STANDARD OPERATING PROCEDURE Page: 1 # ...
Model: APP 004 Version: 2 Date: 96-05-15 File:
Title: Instrument Maintenance/Calibration List
Instr. no. Description Serial no. Maintenance / Calibration schedule Logbook no.
APP 041 - Logbook of AAS
LOGO STANDARD OPERATING PROCEDURE Page: 1 # ...
Model: APP 041 Version: 2 Date: 97-03-17
Title: Maintenance Logbook of AAS Perkin Elmer AAnalyst 100
Serial no.:_____________ Location: _____________
Date Inspection / Problem / Action taken / Remarks Sign. Sign. HoL
APP 051 - User Logbook of AAS
LOGO STANDARD OPERATING PROCEDURE Page: 1 # ...
Model: APP 051 Version: 2 Date: 97-02-26
Title: User Logbook of AAS Perkin-Elmer AAnalyst 100
Serial no.:_____________ Location: _____________
Date Name user Duration Elements Matrix Problems Y/N Other Particulars Sign.
F 011 - Standard Instructions for drafting apparatus SOPs
LOGO STANDARD OPERATING PROCEDURE Page: 1 # ...
Model: F 011 Version: 2 Date: 95-12-11
Title: Standard instructions for drafting apparatus SOPs
CONTENTS
1 PURPOSE
2 PRINCIPLE
3 SPECIFICATIONS
4 DEFINITIONS
5 RELATED SOPs
6 SAFETY INSTRUCTIONS
7 DOCUMENTATION
8 OPERATION
9 MAINTENANCE
10 CALIBRATION
11 WITHDRAWAL FROM SERVICE
12 ARCHIVING RAW DATA
13 AUTOMATED (COMPUTERIZED) INFORMATION SYSTEMS
14 REFERENCES
Author: Sign.:
QA Officer (sign.): Date of expiry:

The first page is composed according to Standard Instruction F 001 (General Instructions)

0 TITLE

Give the title of the SOP, e.g. "Operation of Philips/Pye Unicam SP3-200 Infrared Spectrophotometer".

1 PURPOSE

State briefly the purpose of the apparatus. If applicable, mention the analytes and matrices that can be handled. If the apparatus is part of a larger system, specify this.

2 PRINCIPLE

Describe briefly the principle of the technique used.

3 SPECIFICATIONS

Give all data relevant for the identification, location, etc., as well as for its proper use:
conditions under which the apparatus can be used, relevant specifications and/or
limitations.

3.1 General information

Give the following general data. This can also be given in an Appendix. Alternatively, make reference to the corresponding Maintenance Logbook, where all this information must also be given:

- name and description of apparatus; type and serial numbers; own identification number
- name of manufacturer and/or supplier
- dates of receipt and implementation
- location of apparatus
- name of person responsible for apparatus

3.2 Functional information

Specify working ranges, limitations and other information relevant for the application of the
apparatus, e.g.:

- working range(s) for applicable analytes (usually given by manufacturer)


- lower limit(s) of detection
- sensitivity
- signal/noise ratio
- temperature range
- other limitations (e.g. matrix concentrations)

4 DEFINITIONS

Give definitions of the terms used.

5 RELATED SOPs

Where necessary refer to other relevant SOPs e.g.:

- F 011 Standard instruction for drafting apparatus SOPs

6 SAFETY INSTRUCTIONS

Describe the precautions to take for safe operation, e.g. checking of gas and pressure valves; use of fume exhaust; fire or explosion danger; wearing gloves or safety goggles; use of a pipette balloon; etc.

7 DOCUMENTATION

A number of relevant documents accompany the apparatus facilitating control and optimal
use with minimum trouble or failure.

7.1 Logbook(s)

For each apparatus a logbook should be made. All relevant events concerning the use and maintenance of the apparatus should be recorded in this book. It may be divided into two parts or consist of two volumes: an Instrument Maintenance Logbook and a User Logbook. The logbook(s) should have the following features:

- Front page with title, SOP number, serial number, and date of issue.
- Second page with the general information mentioned under 3.1.

7.1.1 For maintenance:


(instructions for maintenance are given under 8 below)

- In case of trouble:
date of mishap
description of problem
cause
solution
date of restoring to service

- In case of maintenance:
date of latest and next service (maintenance status)
kind of maintenance
what parts were used and have to be ordered
particulars for next service
particulars of calibration (instructions for calibration are given under 10)

7.1.2 For user registration:

- date of use
- name of user
- particulars of determination (analyte, matrix)
- duration of use
- relevant observations (problems, note for next user)

7.2 Manufacturer or supplier documents

These are the original manuals delivered with the apparatus. Often, photocopies are kept with the apparatus (particularly smaller manuals may get lost). If the original is not kept with the apparatus, make clear where it is to be found: with a sticker, on the photocopy, in the logbook(s), centrally in a manual file, or otherwise.

7.3 Internal documents

In the laboratory there should be a list of persons who are authorized to

- use the apparatus


- perform the maintenance
- perform the calibration

8 OPERATION

8.1 General

Give here all actions necessary to prepare the apparatus for use. Include instructions for
proper environment such as:

- correct gases, lamps, cooling system


- climate control in working room
- safety measures and pollution control

8.2 Operation instruction

Divide this paragraph into as many numbered parts as there are separate steps to be performed. Describe these steps accurately and in chronological order, using the imperative.
When the operation instruction is extensive or already exists as a separate document, it
need not be integrated with the apparatus SOP. Its existence as an annex to the SOP is
then mentioned in Section 3.

8.3 Malfunctioning, Interferences

Describe all interferences and malfunctions that are known to occur and that may disturb the proper functioning of the apparatus. Give information about

- kind of problem
- appearance or how it can be recognized
- how it can be solved
- who to turn to for assistance to solve the problem

9 MAINTENANCE

9.1 Definition/description

If applicable, state the maximum period between services or give service scheme of kind
and frequency of service. As an appendix, there must be a list of essential spare parts that
should be in stock. The final part of each service should be a test of the relevant
specifications of the apparatus.

9.2 Maintenance by own personnel

Describe the technical operations that need to be performed for maintenance of the apparatus. If necessary, distinguish between different kinds of service. Use a stepwise description of the operation (as for the operation instruction, 8.2). Record particulars in the Maintenance Logbook.

9.3 Maintenance by third party

Give information about

- name, address and telephone number of company (or individual) involved
- name of contact person
- specifications of warranty and of the contract (e.g., frequency) for service

The serviceman must report his findings in writing; these reports must be filed (e.g.
pasted in the Maintenance Logbook, or in a special Maintenance Report File cross-referenced
with a remark in the Maintenance Logbook).

10 CALIBRATION

10.1 Definition

- Calibration: determining the value of the deviation(s) of an instrument from an
applicable standard.
- Adjustment: operation to make an instrument sufficiently accurate for the measurement
(this operation is also called standardization).

Give here all operations or manipulations necessary to calibrate the described apparatus.
In certain cases this may involve adjustment. Use a stepwise description of the operation.
The operation for adjustment should (also) be described in the Operation Instruction (8.2).

Note: This calibration is to be distinguished from the calibration of an instrument for
each measurement. The latter is usually done batchwise using the standard series specified
in the various analytical methods.

Relevant calibration items are:

- pre-set values and tolerated deviation(s)
- frequency of calibration
- calibration standards used
- relation to national or international standards
- environmental conditions during calibration
- measures to minimize shifts in adjustment

10.2 Calibration by own personnel

Record in logbook:

- calibration status of instrument
- calibration result

10.3 Calibration by third party

Give information about contract for calibration:

- Name, address and phone number of company
- Name of contact person
- Contract number
- Frequency of service

Note: Calibration of modern instruments by a third party is becoming less and less
necessary; usually it is included in maintenance contracts.

11 WITHDRAWAL FROM SERVICE

When an instrument is not functioning properly or is defective, it may no longer be used.
Describe whether there are other reasons for putting the instrument out of action, e.g.
when calibration or maintenance dates have expired. (In non-accredited laboratories the
latter two rules are often not observed; quality may then be in jeopardy.) State how the
instrument is to be labelled if it is not to be used.

Record in logbook:
- Date of putting out of action
- Reason
- Suggestions for solving the problem
- Date of restoring into action

12 ARCHIVING ROUGH DATA

Rough data are the measuring results of the instrument. These may be a figure on a display,
a chromatogram, a printer list with data, etc.

State which rough data are to be archived and how this is done.

13 AUTOMATED (COMPUTERIZED) INFORMATION SYSTEMS

Instruments may be connected to a computer which records (and interprets) the rough
data. These data may, in turn, (semi)automatically or manually be transferred to a PC for
calculation and/or a laboratory information management system (LIMS).

13.1 Description

Describe clearly the situation and procedures involved. Include the procedure for archiving
and retrieving data and for how long data will be kept.

13.2 Software

The use of software inherently implies the occurrence of problems which may necessitate
help-desk service. Therefore, the following information should be documented:

- Name of program
- Date and/or version number
- Author or supplier. Address and phone number of contact person

14 REFERENCES

State which references were used for drafting the SOP (if not mentioned earlier). Also give
references which may contribute to knowledge and skill of the user of the equipment.

Source: Institute for Inland Water Management and Waste Water Treatment (RIZA),
Lelystad.

APP 061 - Operation of Eppendorf Varipette 4810


LOGO STANDARD OPERATING PROCEDURE Page: 1 # 7
Model: APP 061 Version: 1 Date: 95-11-27
Title: Operation of Eppendorf Varipette 4810
CONTENTS
1 PURPOSE
2 PRINCIPLE
3 SPECIFICATIONS
4 DEFINITIONS
5 RELATED SOPs
6 SAFETY MEASURES
7 OPERATION
7.1 Volume adjustment
7.2. Pipetting
7.3 Calibration
7.3.1 Calculation of mean volume
7.3.2 Calculation of accuracy (trueness)
7.3.3 Calculation of precision
8 MALFUNCTIONING
9 MAINTENANCE
9.1 Maintenance by user
9.2 Maintenance by supplier
10 LOGBOOK
11 REFERENCES
APPENDIX: CALIBRATION WORKSHEET
Author: Sign.:
Head (sign.): Date of Expiry:

1 PURPOSE

Pipetting small volumes of solutions.

2 PRINCIPLE

The pipetting mechanism is based on displacement of liquid by means of manual


displacement of air above the liquid.

3 SPECIFICATIONS

Volume range: 20 µl - 2500 µl. Suitable for aqueous and organic solutions.

4 DEFINITIONS

Accuracy: The closeness of the measured value to the true value.
(Note: this may also be expressed as bias; see for definition the Guidelines for Quality
Management.)
Precision: The closeness with which replicate measured values agree.

5 RELATED SOPs

F 001 Administration of SOPs


F 011 Standard instruction for drafting apparatus SOPs
APP 041 Maintenance Logbook
APP 003 Instrument Identification List
APP 004 Instrument Maintenance List

6 SAFETY MEASURES
Not applicable.

7 OPERATION

Prevent liquid from entering the pipette body at all times.

7.1 Volume adjustment

See Figure 1

1. Pull control knob until a click is heard (or felt).
2. Turn control knob until desired volume is shown on display.
3. Press control knob until a click is heard.

Fig. 1. Procedure for volume adjustment Varipette (Eppendorf, 1992).

7.2 Pipetting

See Figure 2.

Fig. 2. Liquid charge, liquid discharge, tip ejection.

7.3 Calibration
Calibrate once a month with a cleaned pipette (see Section 9.1) at both minimum and
maximum volume of working range and at an intermediate volume (see Table 2).

Procedure

1. Pipette and weigh 10 times a chosen pipette volume of demineralized water (boiled for
15 mins. and cooled) using an analytical balance (resolution 0.1 mg). Record data on
worksheet (for model see Appendix of this SOP).

2. Calculate mean volume of the pipette with Equation 1 of Section 7.3.1 of this SOP.

3. Verify if accuracy (trueness) and precision are within specifications of manufacturer.

7.3.1 Calculation of mean volume

The mean volume is calculated with Equation 1:

v = 1000 · g/d     (1)

where:

v = mean pipette volume (µl)
g = mean of 10 calibration weighings (g)
d = density of used water (g/ml) at temperature of this water (see Table 1)

Table 1. Density of water at different temperatures

Water temp. (°C) Density (g/ml) Water temp. (°C) Density (g/ml)
15 0.99913 21 0.99802
16 0.99897 22 0.99780
17 0.99880 23 0.99756
18 0.99862 24 0.99732
19 0.99843 25 0.99707
20 0.99823 30 0.99567

7.3.2 Calculation of accuracy (trueness)

The accuracy is calculated with Equation 2:

a = 100 · v/b     (2)

where:

a = accuracy (% of setting)
b = pipette setting (µl)
v = mean pipette volume (µl)

7.3.3 Calculation of precision

The precision is calculated with Equation 3:

p = 100 · s/g     (3)

where:

p = precision (%)
s = standard deviation of 10 calibration weighings (g)
g = mean weight of 10 calibration weighings (g)
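Where these worksheet calculations are carried out on a PC, Equations 1-3 can be combined
in a few lines. The following Python sketch is an illustration only; the ten weighings, the
pipette setting and the water temperature are invented example values, not prescribed data:

```python
# Minimal sketch of the calibration calculations of Sections 7.3.1-7.3.3.
# The ten weighings, the setting and the water density are hypothetical
# example values, not prescribed data.
from statistics import mean, stdev

weighings = [0.4978, 0.4981, 0.4975, 0.4983, 0.4979,
             0.4980, 0.4976, 0.4982, 0.4977, 0.4981]   # g
b = 500.0        # pipette setting (µl)
d = 0.99823      # density of water at 20 °C (g/ml), from Table 1

g = mean(weighings)          # mean weight (g)
s = stdev(weighings)         # standard deviation of weighings (g)

v = 1000 * g / d             # Eq. (1): mean volume (µl)
a = 100 * v / b              # Eq. (2): accuracy (% of setting)
p = 100 * s / g              # Eq. (3): precision (CV, %)

print(f"v = {v:.1f} µl, a = {a:.2f} %, p = {p:.2f} %")
# Compare a and p with the factory specifications of Table 2
# (for the 500 µl setting: 100 ± 0.6 % and <= 0.2 %).
```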

When the calibration results do not meet the specifications of the manufacturer (see Table
2), the pipette should not be used. If the pipette cannot be fixed by proper maintenance
(see 9.1 of this SOP), then it should be fixed by the supplier or be withdrawn from service
(or be used for other purposes where lower accuracy is permitted; the pipette should then
be clearly marked).

Specifications of manufacturer:

Table 2. Factory specifications of the Varipette 4810

Varipette range   Setting (µl)   Accuracy       Precision
200 - 1000 µl     200            100 ± 0.8 %    ≤ 0.3 %
                  500            100 ± 0.6 %    ≤ 0.2 %
                  1000           100 ± 0.6 %    ≤ 0.2 %
500 - 2500 µl     500            100 ± 0.7 %    ≤ 0.3 %
                  1000           100 ± 0.6 %    ≤ 0.2 %
                  2500           100 ± 0.6 %    ≤ 0.2 %

8 MALFUNCTIONING

- Problem: drops of liquid inside the pipette tip.


Cause: tip has come loose.
Solution: fix tip.
- Problem: liquid dripping from tip.
Cause: wrong tip, or leak in pipette.
Solution: replace tip, or clean pipette (see 9.1).

9 MAINTENANCE
9.1 Maintenance by user

Calibrate according to the instruction in Section 7.3. Prior to calibration, inspect
whether the pipette is dirty. If so, the pipette should be cleaned by a qualified person.
The parts of the Varipette are shown in Figure 3.

Fig. 3. Parts of the Varipette.

9.2 Maintenance by supplier

When a problem cannot be solved by own qualified personnel, the pipette has to be sent
to the supplier for repair. (Recalibrate when returned.)

10 LOGBOOK

Dates and particulars of repairs, cleaning services, and calibrations must be recorded in
the logbook for automatic pipettes.

11 REFERENCE

Eppendorf 4810. Operation Manual, IS 92.

Source: Winand Staring Centre for Integrated Land, Soil and Water Research (SC-DLO),
Wageningen.

APPENDIX. CALIBRATION WORKSHEET FOR AUTOMATIC PIPETTES

Calibration performed by: Sign.:
Date:
Pipette type + Identification no.:
Pipette setting (µl): Min.: Max.: Intermediate:
Temperature water (°C):
Density water (g/ml):
Weight (g)
Weighing no. | Min. volume | Max. volume | Intermediate volume
1
2
3
4
5
6
7
8
9
10
Mean

Final results (see 7.3 this SOP):


Calibration Factory Specification
Setting Min Max Min Max Sign. Head
Volume (µl)
Accuracy (%)
Precision (%)
APP 062 - Operations of electronic balance Sartorius
LOGO STANDARD OPERATING PROCEDURE Page: 1 # 5
Model: APP 062 Version: 1 Date: 95-02-02
Title: Operations of electronic balance Sartorius 3708 MP 1
CONTENTS
1 PURPOSE
2 PRINCIPLE
3 SPECIFICATIONS
4 DEFINITIONS
5 RELATED SOPs
6 SAFETY INSTRUCTIONS
7 OPERATION
7.1 Preparation
7.2. Checking
7.3 Adjustment
7.4 Weighing
8 MALFUNCTIONING
9 MAINTENANCE
10 LOGBOOK
11 REFERENCE
Author: Sign:
Head (sign.) Date of Expiry

1 PURPOSE

To measure the mass of substances or objects.

2 PRINCIPLE

Electronic mass compensation.

3 SPECIFICATIONS

3.1 General

Serial no. 2709013. For particulars see appropriate section in Balance Maintenance
Logbook.

3.2 Functional

Weighing range 0 - 320 g
Readability 0.001 g
Precision (standard deviation) ± 0.0005 g
Linearity deviation (max.) 0.001 g
Taring range (by subtraction) 320 g
Taring time 10 ms
Measuring time (approx.) 2 s

The balance should be level and protected from

- vibrations
- large temperature fluctuations
- direct sunlight
- draught

4 DEFINITIONS

Not applicable.

5 RELATED SOPs

F 001 Administration of SOPs


F 011 Standard instruction for drafting apparatus SOPs
APP Balance Maintenance Logbook
APP 003 Instrument Identification List
APP 004 Instrument Maintenance List

6 SAFETY INSTRUCTIONS

Not applicable.

7 OPERATION

Fig. 1. Electronic balance Sartorius 3708 MP 1.

A Balance pan        D Spirit level     G Sensitivity adjustment
B Power switch       E Weight display   H Screws for metal housing
C Levelling screws   F Tare sensor      I Data output

The sensitivity of the balance depends on the local gravitational acceleration, which
varies with location in the world (latitude, altitude), and must therefore be checked and
adjusted.

7.1 Preparation

1. Check if balance is level.
2. Turn on switch "B". Allow balance to warm up for at least 20 mins.

7.2 Checking

1. Press tare sensor "F" to zero balance.


2. Place calibration weight (e.g., 300 g) on the balance pan.
3. The weight should be 300.000 ± 0.001 g.
4. If this not the case, adjust sensitivity according to Section 7.3 below.

7.3 Adjustment

1. Remove plate "G". The sequence of the counters of the 6-digit switch corresponds with
the sequence of the weight display, i.e., the right counter corresponds to the right digit of
the weight display.

2. Place calibration weight on the weighing pan (only if weight had been removed).

3. In case of a lower weight indication: increase value on switch until weight indication
equals that of the calibration weight.

4. In case of a higher weight indication: reduce value of switch until weight indication
equals that of the calibration weight.

5. Permissible tolerance in all cases: ± 0.001 g (1 in final digit).

6. Press sensor "F" to zero balance and repeat sensitivity adjustment.

7. Fasten plate "G" again.

If calibration is unsuccessful, the balance should not be used until it has been repaired.

7.4 Weighing

7.4.1 Direct weighing

1. Press tare sensor to zero balance.

2. Place sample on balance pan. Read weight indication on display after illumination of
stability indicator "g".

7.4.2 Weighing-in

1. Place tare container on balance pan. Press tare sensor to zero balance.
2. Transfer sample material into tare container. Read net weight on display.
Note: This procedure can be repeated as often as necessary up to the maximum capacity
of the balance.

7.4.3 Weighing to a pre-set value

1. Example: pre-set value: 50 g.
2. While transferring sample to container, observe the 10 g digit until "4" appears.
3. Proceed adding sample material until a weight of "49" appears.
4. Continue the adding procedure, watching the other digits accordingly.
8 MALFUNCTIONING

- Problem: weight indication does not light up, decimal point does not light up.
  Cause: power supply; supply voltage; balance not switched on; fuse defective.
  (Warning: when changing fuse, pull plug from socket!)
- Problem: weight indication does not light up, decimal point does light up.
  Cause: overload.
- Problem: weight indication is changing continuously.
  Cause: balance not switched on long enough, operating temperature not yet reached;
  unsatisfactory installation conditions (draught, vibrations).
- Problem: weighing results incorrect.
  Cause: unsatisfactory installation conditions; balance not levelled; sensitivity setting
  incorrect (solution: adjust balance).

If balance cannot be made to function properly, call qualified assistance.

9 MAINTENANCE

9.1 Maintenance by user

- Keep balance clean.
- Calibrate and adjust balance weekly and after each relocation.
- Relocating the balance:

1. Pull plug from socket

2. Move balance

3. Connect plug with socket

4. Level balance

5. Switch on balance

6. Wait for 20 minutes (or less if balance was warm) and adjust balance as described in
Sections 7.2 and 7.3 of this SOP.

9.2 Maintenance by supplier

Have balance serviced, calibrated and adjusted once a year.

10 LOGBOOK

Record in Maintenance (and/or Calibration) Logbook:


- All malfunctions encountered
- All actions taken to solve problems
- All calibrations

11 REFERENCE

Instruction for Installation and Operation of 3708 MP 1 (no date). Sartorius-Werke,
Göttingen, Germany.

Source: Winand Staring Centre for Integrated Land, Soil and Water Research (SC-DLO),
Wageningen.

APP 071 - Operation of pH meter Metrohm E 632


LOGO STANDARD OPERATING PROCEDURE Page: 1 # 5
Model: APP 071 Version: 1 Date: 94-11-22
Title: Operation of pH meter Metrohm E 632
CONTENTS
1 PURPOSE
2 PRINCIPLE
3 SPECIFICATIONS
4 RELATED SOPs
5 SAFETY INSTRUCTIONS
6 OPERATION
6.1 Principle
6.2 Materials
6.3 Reagents
6.4 Precautions
6.5 Accuracy
6.6 Starting
6.7 Calibration and adjustment
6.8 Measurement
7 CHECKING AND MAINTENANCE
8 REFERENCES
Author: Sign.:
Head (sign.): Date of Expiry:

1 PURPOSE

To measure the pH of soil pastes, extracts, solutions and waters.

2 PRINCIPLE

The potentiometric pH measurement is based on measuring the difference in electrical
potential between solution and electrode. It is a relative measurement dependent on
electrode and temperature. Therefore, the pH meter must be calibrated and adjusted
(standardized) with standard buffers of known pH.

3 SPECIFICATIONS
With glass electrodes the pH range is 0 - 12.
Readability: 0.01 unit.
Temperature range: 0 - 100°C.
Electrode: combination glass electrode, e.g. Metrohm 6.0203.100

4 RELATED SOPs

F 002 Administration of SOPs


F 011 Standard instruction for drafting apparatus SOPs
APP 041 Maintenance Logbook
APP 042 User Logbook
APP 003 Instrument Identification List
APP 004 Instrument Maintenance List
APP ... Inspection and maintenance of pH meter Metrohm E 632
APP ... Inspection and maintenance of combination glass electrodes

5 SAFETY INSTRUCTIONS

Not applicable.

6 OPERATION

6.1 Principle

The standardization of the pH meter consists of two adjustment steps. The deviation of the
preset ("true") value of buffer solutions is electronically compensated.

The first step is always executed with a pH 7 buffer, whereas the second step can be done
with a lower (e.g. pH 4) or higher (pH 9 or 10) buffer depending on the range in which the
sample measurements are made (in exceptional cases a buffer of very low pH may be
required, e.g., pH 2).

6.2 Materials

Thermometer, -10 to 100 °C, accuracy 0.5 °C.

6.3 Reagents

Buffer solutions pH 4.00, 7.00 and 9.00 or 10.00 (25 °C): Dilute standard analytical
concentrate ampoules according to instruction.
Note: Standard buffer solutions of which the pH values deviate slightly from these values
can also be used.

Water: Deionized or distilled water, with electrical conductivity < 2 µS/cm and pH > 5.6
(Grade 2 water according to ISO 3696).

Note: If no standard ampoules are used, buffer solutions can be prepared as follows (these
solutions can also be prepared to act as "independent" standards):

Buffer solution pH 4: Dissolve 10.21 g potassium hydrogen phthalate, C8H5KO4, in water in a
1 L volumetric flask and make to volume with water. (First dry the potassium hydrogen
phthalate at 110 °C for at least 2 hrs.) The pH of this 0.05 M phthalate solution is 4.00
at 20 °C and 4.01 at 25 °C.

Buffer solution pH 7: Dissolve 3.40 g potassium dihydrogen phosphate, KH2PO4, and 3.55 g
disodium hydrogen phosphate, Na2HPO4, in water in a 1 L volumetric flask and make to volume
with water. (Both phosphates should first be dried at 110 °C for at least 2 hrs.) The pH
of this 0.025 M (of each phosphate) solution is 6.88 at 20 °C and 6.86 at 25 °C.

Buffer solution pH 9: Dissolve 3.80 g disodium tetraborate decahydrate, Na2B4O7·10H2O
(borax), in water in a 1 L volumetric flask and make to volume with water. (Note: Observe
the expiry date of borax: this may lose crystal water upon aging.) The pH of this 0.01 M
borax solution is 9.22 at 20 °C and 9.18 at 25 °C.

6.4 Precautions

- The electrode must be stored in a 3 M KCl solution.
- The diaphragm of the electrode must be submerged in the solution during measurement.
- The electrolyte level inside the electrode must be above the level of the solution being
measured.

6.5 Accuracy (bias)

The pH is readable in 2 decimals. For standardization procedures and the preparation of
reagents the second decimal has significance and can be used. For the measurement of soil
suspensions and extracts the second decimal usually has no meaning and the result should
be rounded off to one decimal. (For rules of decimal significance and rounding off see
Chapter 7 of these Guidelines for QM.)

6.6 Starting

- Connect electrode with socket on the back of the instrument.
- Switch on mains with push button 7 (see Figure 1). The instrument is now ready for use.
- If necessary, push button 3 (stand-by) and button 5 (pH), and set switch 13 (slope) to
1.00.

6.7 Calibration and adjustment

These should always be performed after:

- switching on the pH meter

- replacement of electrode

- a check of the calibration showing that the deviation of the pH from the theoretical
value of the standard buffer exceeds 0.05 unit.

When the pH meter is on and already adjusted, only a check of the adjustment is needed
(described in Section 7.1 of this SOP).

6.7.1 Calibration step 1

- Transfer sufficient standard buffer solution pH 7.00 to a 50 ml or 100 ml beaker.

- Measure temperature of buffer and set switch 14 (temp. compensation) to this temperature.

- Immerse electrode in buffer solution and push button 4 (measure).

- With button 6 (Ucomp) adjust value on display (8) to the theoretical pH value of the
buffer at the measured temperature. (Note: this value can be read from a table enclosed
with the standard ampoule.)

- Push button 3 (stand-by). Rinse electrode with water. The setting of button 6 (Ucomp)
should now not be changed any more.

6.7.2 Calibration step 2

- Transfer a sufficient volume of one of the two other buffer solutions (pH 4 or 9) to a
50 ml or 100 ml beaker. (Note: this second buffer is chosen such that the pH of the
solution to be measured falls in between the first and second calibration buffer.)

- Measure temperature of buffer and adjust switch 14 (temp. compensation) to this
temperature.

- Immerse electrode in buffer solution and push button 4 (measure).

- With switch 13 (slope) adjust the value on the display to the theoretical pH value of
this buffer. (Note: this value can be read from a table enclosed with the standard
ampoule.)

The setting of switch 13 may not be lower than 0.95. If this condition is not met, the
electrode may not be used for the measurement and must be exchanged for another one which
does meet the condition.

- Push button 3 (stand-by) and rinse electrode with water.

- As a check, repeat readings of buffers (pH 7 first) and readjust according to Steps 1
and 2 if necessary.

Fig. 1. Front panel of Metrohm pH meter E 632.

6.8 Measurement

- Measure temperature of solution (or suspension) to be measured and adjust switch 14
(temp. compensation) to this temperature.

- Immerse electrode in solution (or suspension) to be measured.

- Push button 4 (measure) and read pH value.


Note: For Quality Control it is essential to include measurement of an independent buffer
solution of known pH (as a check on calibration) and of a control sample (in each batch, to
check the system under measuring conditions).

- Push button 3 (stand-by), rinse electrode with water and place in electrode holder filled
with 3 M KCl solution.

- Enter use in User Logbook.

7 CHECKING AND MAINTENANCE

7.1 Checking of adjustment

Checking of the adjustment of previously adjusted pH meters (verification) is needed:

- Prior to each new use of the instrument.

- During batch measurement. The frequency is indicated in the procedure of the
investigation (e.g., after every 50 or 100 measurements, or once every hour).

This verification is done with at least one of the calibration buffers indicated in Section 6.3.
If the deviation exceeds 0.05 unit from the preset value, the instrument must be
recalibrated and adjusted as described in Section 6.7 above.

7.2 Inspection and maintenance of electrodes

Periodical inspection of the pH electrodes, as well as inspection after complaints about


malfunctioning must be carried out by a qualified technician and is described in SOP
Model APP ...

7.3 Inspection and maintenance of pH meter

Periodical inspection of the pH meter, as well as inspection after complaints about


malfunctioning must be carried out by a qualified technician and is described in SOP
Model APP ...

8 REFERENCES

Metrohm, Instructions for use, digital pH-meter E632.


Metrohm, Application Bulletin 188/1e.
Bates, R.G. (1973). Determination of pH, theory and practice. John Wiley & Sons, New
York.
DIN 19266, pH-Messung, Standardpufferlösungen.
ISO 3696. Water for analytical laboratory use. Specification and test methods.

Source: Delft Geotechnics, Delft


RF 032 - Page of Reagents Book
Columns: Date | Reagent | Concentration | Analysis code | Bottle no. used | Amount weighed
in | Final volume | Label no. | Sign.
Verified by: Date: Sign.:
PROT 011 - Protocol for custody chain of samples
LOGO STANDARD OPERATING PROCEDURE Page: 1 # 1
Model: PROT 011 Version: Draft Date: 96-03-26
Title: Protocol for custody chain of samples

1 PURPOSE

To organize the pathway of samples through the institute.

2 PRINCIPLE

From the arrival at the institute until the discarding or final storage, samples usually go
through several hands and are processed at several places. To ensure their integrity and
traceability, and to prevent their getting lost, the pathway of the samples and the
responsible personnel involved ("chain of custody") must be documented.

3 RELATED SOPs

- RF 011 Form for accepting delivery of samples


- RF 001 Sample List
- RF 021 Form for accepting order for analysis
- RF ... Sample Storage Logbook
- RF ... Sample Location Logbook
- PROT ... Storage of samples
- PROT ... Disposal of sample material

4 PROCEDURE

4.1 Upon arrival of samples at the institute an authorized officer fills out form RF 011 (protocol for
accepting delivery of samples).
4.2 If there is a regular custodian, the samples are handed over to him/her. (The custodian can be
the officer who received the samples).
Document RF 011 is taken to the person responsible for further processing (e.g. Project Officer,
Head of Laboratory). This person signs for acceptance and keeps a copy of the form. Another
copy is made for the Work Order File prepared for the corresponding work order (This file
contains hard copies of all relevant information and documents concerning the work order). The
original is kept at a designated place (e.g. book of forms RF 011).
Note. If samples can be received by more than one person or at more than one
location/department, more than one book or file of forms RF 011 may be kept. The forms RF
011 could then be differentiated with a suffix (e.g. A, B, etc.).
4.4 The whereabouts of samples are recorded by the custodian in a Sample Location Logbook. If
samples are stored behind lock and key, anybody taking out (sub)samples has to sign for this
in a Sample Storage Logbook.
4.5 After completion of the analytical work, the sample is (re)stored for possible later use. The
duration of storage is indicated in the Sample Storage Logbook. It is useful to record the
location also in the Work Order File (e.g. on the Order Form RF 021).
(Duration of storage may be determined by agreement with customer or by usual procedure of
the Institute, e.g. 1 year or indefinitely. This is also recorded on the order form RF 021.)
Author: Sign.:
QA Officer (sign.): Date of Expiry:
RF 011 - Protocol for accepting delivery of samples
LOGO STANDARD OPERATING PROCEDURE Page: A...
Model: RF 011-A Version: 2 Date: 96-01-22
Title: Form for accepting delivery of samples
Work order no.:

Date of arrival:

Name Client/Project:

Address:

Carrier:

Origin of samples:

Number & kind of samples:

a. ......... soil / plant / water samples*


b. ......... ring or core samples (or: ...... boxes with core samples)
c. ......... other (specify):
Condition of samples*: moist / dry / unknown
Sample list enclosed*: yes / no (if list is missing, make one for Work Order File)
Other information enclosed:
Order for analysis enclosed*: yes / no
Type of packaging*: crate / cardboard box / bag / other: ................
Number of packages: ........
Condition of package*: undamaged / damaged (specify)
Samples received by: ........................................ sign.:
* Circle as appropriate.
Samples placed in custody of: .......................................... sign.:
This document passed to: Project officer (name):.................................. sign.:
: Laboratory (name): .................................... sign.:
: Other (name): ............................................. sign.:

Remarks:

RF 021 - Form for accepting order for analysis


LOGO STANDARD OPERATING PROCEDURE Page: A...
Model: RF 021 Version: 3 Date: 96-12-06
Title: Form for accepting order for analysis
Work order no.:
Date of arrival:

Name Client/Project:

Address:

Carrier:

Origin of samples:

Number & kind of samples:

a. ......... soil / plant / water samples*


b. ......... ring or core samples (or: ...... boxes with core samples)
c. ......... other (specify):
Kind or particulars of material relevant for analytical approach:
Sample list correct?*: yes / no (without proper list, order cannot be
processed)
Condition of samples*: moist / dry
Analytical programme submitted*: yes / no (tick requested analyses overleaf)
All samples same programme?*: yes / no (if "no", describe
under Remarks overleaf)
Requested date of completion:
Sample residue*: discard / store indefinitely / store until
(date): ...................
* Circle as appropriate.
Order accepted by (on behalf of lab): .................................. sign.:
Entered into SOILIMS: date: sign.:
Order Confirmation sent to client: date: sign.:
Change in Registration by: .................................. date: sign.:
Entered into SOILIMS: date: sign.:
Order Confirmation to client: date: sign.:

Remarks:

Tick requested analyses:

Procedure Code(s)
□ Preparation
□ pH-H2O
□ pH-KCl
□ EC2.5
□ Particle-size analysis (specify fractions below)
□ Water-dispersible clay
□ CEC
□ Exchangeable bases
□ Exchangeable acidity
□ Exchangeable Al
□ Organic carbon
□ Total carbon
□ Carbonate equivalent
□ Available phosphate
□ Gypsum
□ Dithionite extraction
□ Acid oxalate extraction
□ Na pyrophosphate extraction
□ P-retention
□ pH-NaF
□ ODOE
□ Melanic index
□ DTPA extr. (Cu, Fe, Zn, Mn)
□ Boron (hot water)
□ Saturation extract
□ 1:5 extract
□ pF* 0 1 1.5 2 2.3 2.7 3.4 4.2

□ Bulk density
□ Specific surface area

□ X-ray diffraction*: clay / whole sample / other fractions: ............
treatments:
□ Guinier photo*: clay / whole sample / other fractions: ...........
treatments:

□ Plant analysis (specify below)

□ Water analysis (specify below)

* Circle as appropriate.

Remarks:

6 BASIC STATISTICAL TOOLS


There are lies, damned lies, and statistics...
(Anon.)

6.1 Introduction
6.2 Definitions
6.3 Basic Statistics
6.4 Statistical tests

6.1 Introduction
In the preceding chapters basic elements for the proper execution of analytical work such
as personnel, laboratory facilities, equipment, and reagents were discussed. Before
embarking upon the actual analytical work, however, one more tool for the quality
assurance of the work must be dealt with: the statistical operations necessary to control
and verify the analytical procedures (Chapter 7) as well as the resulting data (Chapter 8).

It was stated before that making mistakes in analytical work is unavoidable. This is the
reason why a complex system of precautions to prevent errors and traps to detect them
has to be set up. An important aspect of the quality control is the detection of both random
and systematic errors. This can be done by critically looking at the performance of the
analysis as a whole and also of the instruments and operators involved in the job. For the
detection itself as well as for the quantification of the errors, statistical treatment of data is
indispensable.

A multitude of different statistical tools is available, some of them simple, some
complicated, and often very specific for certain purposes. In analytical work, the most
important common operation is the comparison of data, or sets of data, to quantify
accuracy (bias) and precision. Fortunately, with a few simple and convenient statistical
tools most of the information needed in regular laboratory work can be obtained: the
"t-test", the "F-test", and regression analysis. Therefore, examples of these will be
given in the ensuing pages.

Clearly, statistics are a tool, not an aim. Simple inspection of data, without statistical
treatment, by an experienced and dedicated analyst may be just as useful as statistical
figures on the desk of the disinterested. The value of statistics lies in organizing and
simplifying data, to permit some objective estimate showing that an analysis is under
control or that a change has occurred. Equally important is that the results of these
statistical procedures are recorded and can be retrieved.

6.2 Definitions

6.2.1 Error
6.2.2 Accuracy
6.2.3 Precision
6.2.4 Bias
Discussing Quality Control implies the use of several terms and concepts with a specific
(and sometimes confusing) meaning. Therefore, some of the most important concepts will
be defined first.

6.2.1 Error

Error is the collective noun for any departure of the result from the "true" value*. Analytical
errors can be:

1. Random or unpredictable deviations between replicates, quantified with the "standard
deviation".

2. Systematic or predictable regular deviation from the "true" value, quantified as "mean
difference" (i.e. the difference between the true value and the mean of replicate
determinations).

3. Constant, unrelated to the concentration of the substance analyzed (the analyte).

4. Proportional, i.e. related to the concentration of the analyte.

* The "true" value of an attribute is by nature indeterminate and often has only a very
relative meaning. Particularly in soil science for several attributes there is no such thing as
the true value as any value obtained is method-dependent (e.g. cation exchange capacity).
Obviously, this does not mean that no adequate analysis serving a purpose is possible. It
does, however, emphasize the need for the establishment of standard reference methods
and the importance of external QC (see Chapter 9).

6.2.2 Accuracy

The "trueness" or the closeness of the analytical result to the "true" value. It is constituted
by a combination of random and systematic errors (precision and bias) and cannot be
quantified directly. The test result may be a mean of several values. An accurate
determination produces a "true" quantitative value, i.e. it is precise and free of bias.

6.2.3 Precision

The closeness with which results of replicate analyses of a sample agree. It is a measure
of dispersion or scattering around the mean value and usually expressed in terms
of standard deviation, standard error or a range (difference between the highest and the
lowest result).

6.2.4 Bias

The consistent deviation of analytical results from the "true" value caused by systematic
errors in a procedure. Bias is the inverse, but most commonly used, measure of "trueness",
which is the agreement of the mean of analytical results with the true value, i.e.
excluding the contribution of randomness represented in precision. There are several
components contributing to bias:
1. Method bias

The difference between the (mean) test result obtained from a number of laboratories
using the same method and an accepted reference value. The method bias may depend
on the analyte level.

2. Laboratory bias

The difference between the (mean) test result from a particular laboratory and the
accepted reference value.

3. Sample bias

The difference between the mean of replicate test results of a sample and the ("true")
value of the target population from which the sample was taken. In practice, for a
laboratory this refers mainly to sample preparation, subsampling and weighing techniques.
Whether a sample is representative for the population in the field is an extremely important
aspect but usually falls outside the responsibility of the laboratory (in some cases
laboratories have their own field sampling personnel).

The relationship between these concepts can be expressed in the following equation:

total bias = method bias + laboratory bias + sample bias

The types of errors are illustrated in Fig. 6-1.

Fig. 6-1. Accuracy and precision in laboratory measurements. (Note that the
qualifications apply to the mean of results: in c the mean is accurate but some
individual results are inaccurate.)
6.3 Basic Statistics

6.3.1 Mean
6.3.2 Standard deviation
6.3.3 Relative standard deviation. Coefficient of variation
6.3.4 Confidence limits of a measurement
6.3.5 Propagation of errors

In the discussions of Chapters 7 and 8 basic statistical treatment of data will be
considered. Therefore, some understanding of these statistics is essential and they will
briefly be discussed here.

The basic assumption to be made is that a set of data, obtained by repeated analysis of
the same analyte in the same sample under the same conditions, has
a normal or Gaussian distribution. (When the distribution is skewed statistical treatment is
more complicated). The primary parameters used are the mean (or average) and
the standard deviation (see Fig. 6-2) and the main tools the F-test, the t-test, and
regression and correlation analysis.

Fig. 6-2. A Gaussian or normal distribution. The figure shows that (approx.) 68% of the
data fall in the range x̄ ± s, 95% in the range x̄ ± 2s, and 99.7% in the range x̄ ± 3s.

6.3.1 Mean

The average of a set of n data xi:

x̄ = (Σ xi) / n     (6.1)
6.3.2 Standard deviation

This is the most commonly used measure of the spread or dispersion of data around the
mean. The standard deviation is defined as the square root of the variance (V). The
variance is defined as the sum of the squared deviations from the mean, divided by n − 1.
Operationally, there are several ways of calculation:

s = √[ Σ(xi − x̄)² / (n − 1) ]     (6.2)

or

s = √[ (Σxi² − (Σxi)²/n) / (n − 1) ]     (6.3)

or

s = √[ (Σxi² − n·x̄²) / (n − 1) ]     (6.4)

The calculation of the mean and the standard deviation can easily be done on a calculator
but most conveniently on a PC with computer programs such as dBASE, Lotus 123,
Quattro-Pro, Excel, and others, which have simple ready-to-use functions. (Warning: some
programs use n rather than n- 1!).

6.3.3 Relative standard deviation. Coefficient of variation

Although the standard deviation of analytical data may not vary much over limited ranges
of such data, it usually depends on the magnitude of such data: the larger the figures, the
larger s. Therefore, for comparison of variations (e.g. precision) it is often more convenient
to use the relative standard deviation (RSD) than the standard deviation itself. The RSD is
expressed as a fraction, but more usually as a percentage and is then called coefficient of
variation (CV). Often, however, these terms are confused.

RSD = s / x̄     (6.5)

CV = 100 · s / x̄ (%)     (6.6)

Note. When needed (e.g. for the F-test, see Eq. 6.11) the variance can, of course, be
calculated by squaring the standard deviation:

V = s²     (6.7)
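As an illustration, the sketch below (Python is used for this and the other code
illustrations in this chapter) computes these parameters for the ten pipette volumes of
the calibration example in Section 6.3.4 below; the standard library's statistics module
applies the n − 1 convention of Equation (6.2):

```python
# Sketch: mean, standard deviation, variance and CV (Eqs. 6.1-6.7).
# Data: the ten pipette volumes (mL) of the example in Section 6.3.4.
from statistics import mean, stdev

data = [19.941, 19.812, 19.829, 19.828, 19.742,
        19.797, 19.937, 19.847, 19.885, 19.804]

x_bar = mean(data)            # Eq. (6.1)
s = stdev(data)               # Eq. (6.2): uses n - 1, as required
V = s ** 2                    # Eq. (6.7)
cv = 100 * s / x_bar          # Eq. (6.6), in %

print(f"mean = {x_bar:.3f}, s = {s:.4f}, V = {V:.6f}, CV = {cv:.2f} %")
# mean = 19.842 and s = 0.0627 (cf. Section 6.3.4)
```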
6.3.4 Confidence limits of a measurement

The more an analysis or measurement is replicated, the closer the mean x̄ of the results
will approach the "true" value µ of the analyte content (assuming absence of bias).

A single analysis of a test sample can be regarded as literally sampling the imaginary set
of a multitude of results obtained for that test sample. The uncertainty of such subsampling
is expressed by

µ = x̄ ± t · s/√n     (6.8)

where

µ = "true" value (mean of large set of replicates)
x̄ = mean of subsamples
t = a statistical value which depends on the number of data and the required confidence
(usually 95%)
s = standard deviation of the subsamples
n = number of subsamples

(The term s/√n is also known as the standard error of the mean.)

The critical values for t are tabulated in Appendix 1 (they are, therefore, here referred
to as ttab). To find the applicable value, the number of degrees of freedom has to be
established by: df = n − 1 (see also Section 6.4.2).

Example

For the determination of the clay content in the particle-size analysis, a semi-automatic
pipette installation is used with a 20 mL pipette. This volume is approximate and the
operation involves the opening and closing of taps. Therefore, the pipette has to be
calibrated, i.e. both the accuracy (trueness) and precision have to be established.

A tenfold measurement of the volume yielded the following set of data (in mL):
19.941 19.812 19.829 19.828 19.742
19.797 19.937 19.847 19.885 19.804

The mean is 19.842 mL and the standard deviation 0.0627 mL. According to Appendix 1, for
n = 10, ttab = 2.26 (df = 9), and using Eq. (6.8) this calibration yields:

pipette volume = 19.842 ± 2.26 × (0.0627/√10) = 19.84 ± 0.04 mL

(Note that the pipette has a systematic deviation from 20 mL, as this value lies outside
the found confidence interval. See also bias.)
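The same calibration calculation can be reproduced in a few lines; in this sketch scipy is
assumed to be available for the tabulated t value (on paper, Appendix 1 serves that
purpose):

```python
# Sketch: 95% confidence limits of a mean (Eq. 6.8), applied to the
# pipette calibration example above.
from math import sqrt
from statistics import mean, stdev
from scipy.stats import t

volumes = [19.941, 19.812, 19.829, 19.828, 19.742,
           19.797, 19.937, 19.847, 19.885, 19.804]   # mL

n = len(volumes)
x_bar = mean(volumes)
s = stdev(volumes)
t_tab = t.ppf(0.975, df=n - 1)       # two-sided 95%: 2.26 for df = 9

half_width = t_tab * s / sqrt(n)     # Eq. (6.8)
print(f"pipette volume = {x_bar:.2f} ± {half_width:.2f} mL")  # 19.84 ± 0.04 mL
```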

In routine analytical work, results are usually single values obtained in batches of several
test samples. No laboratory will analyze a test sample 50 times to be confident that the
result is reliable. Therefore, the statistical parameters have to be obtained in another way.
Most usually this is done by method validation (see Chapter 7) and/or by keeping control
charts, which is basically the collection of analytical results from one or more control
samples in each batch (see Chapter 8). Equation (6.8) is then reduced to

µ = x ± t · s     (6.9)

where

µ = "true" value
x = single measurement
t = applicable ttab (Appendix 1)
s = standard deviation of set of previous measurements.

In Appendix 1 it can be seen that if the set of replicated measurements is large (say
> 30), t is close to 2. Therefore, the (95%) confidence of the result x of a single test
sample (n = 1 in Eq. 6.8) is approximated by the commonly used and well known expression:

µ = x ± 2s     (6.10)

where s is the previously determined standard deviation of the large set of replicates
(see also Fig. 6-2).

Note: This "method-s" or s of a control sample is not a constant and may vary for different
test materials, analyte levels, and with analytical conditions.

Running duplicates will, according to Equation (6.8), increase the confidence of the
(mean) result by a factor √2:

µ = x̄ ± t · s/√2

where

x̄ = mean of duplicates
s = known standard deviation of large set

Similarly, triplicate analysis will increase the confidence by a factor √3, etc.
Duplicates are further discussed in Section 8.3.3.

Thus, in summary, Equation (6.8) can be applied in various ways to determine the size of
errors (confidence) in analytical work or measurements: single determinations in routine
work, determinations for which no previous data exist, certain calibrations, etc.

6.3.5 Propagation of errors

6.3.5.1. Propagation of random errors


6.3.5.2 Propagation of systematic errors

The final result of an analysis is often calculated from several measurements performed
during the procedure (weighing, calibration, dilution, titration, instrument readings,
moisture correction, etc.). As was indicated in Section 6.2, the total error in an analytical
result is an adding-up of the sub-errors made in the various steps. For daily practice, the
bias and precision of the whole method are usually the most relevant parameters
(obtained from validation, Chapter 7; or from control charts, Chapter 8). However,
sometimes it is useful to get an insight into the contributions of the subprocedures (and
then these have to be determined separately), for instance if one wants to change (part
of) the method.

Because the "adding-up" of errors is usually not a simple summation, this will be
discussed. The main distinction to be made is between random errors (precision) and
systematic errors (bias).

6.3.5.1. Propagation of random errors

In estimating the total random error from factors in a final calculation, the treatment of
summation or subtraction of factors is different from that of multiplication or division.

1. Summation calculations

If the final result x is obtained from the sum (or difference) of (sub)measurements a, b,
c, etc.:

x = a + b + c + ...

then the total precision is expressed by the standard deviation obtained by taking the
square root of the sum of the individual variances (squares of the standard deviations):

sx = √(sa² + sb² + sc² + ...)

If a (sub)measurement has a constant multiplication factor or coefficient (such as an
extra dilution), then this is included when calculating the effect of the variance
concerned, e.g. (2·sb)².

Example

The Effective Cation Exchange Capacity of soils (ECEC) is obtained by summation of the
exchangeable cations:

ECEC = Exch. (Ca + Mg + Na + K + H + Al)

Standard deviations experimentally obtained for exchangeable Ca, Mg, Na, K and (H + Al)
on a certain sample, e.g. a control sample, are: 0.30, 0.25, 0.15, 0.15, and 0.60 cmolc/kg
respectively. The total precision is:

s(ECEC) = √(0.30² + 0.25² + 0.15² + 0.15² + 0.60²) = 0.75 cmolc/kg

It can be seen that the total standard deviation is larger than the highest individual
standard deviation, but (much) less than their sum. It is also clear that if one wants to
reduce the total standard deviation, qualitatively the best result can be expected from
reducing the largest individual contribution, in this case the exchangeable acidity.

2. Multiplication calculations

If the final result x is obtained from multiplication (or division) of (sub)measurements
according to

x = (a · b) / c ...

then the total error is expressed by the relative standard deviation obtained by taking
the square root of the sum of the squared individual relative standard deviations (RSD or
CV, as a fraction or as a percentage, see Eqs. 6.5 and 6.6):

RSDx = √(RSDa² + RSDb² + RSDc² + ...)

If a (sub)measurement carries an exponent (e.g. b²), then this exponent is included when
calculating the effect of the RSD concerned, e.g. (2·RSDb)². (A constant multiplication
factor does not affect the relative standard deviation.)

Example

The calculation of Kjeldahl-nitrogen may be as follows:

N (%) = ((a − b) × M × 1.4 × mcf) / s

where

a = ml HCl required for titration of sample
b = ml HCl required for titration of blank
s = air-dry sample weight in gram
M = molarity of HCl
1.4 = 14 × 10⁻³ × 100% (14 = atomic weight of N)
mcf = moisture correction factor

Note that in addition to multiplications, this calculation also contains a subtraction
(often, calculations contain both summations and multiplications).

Firstly, the standard deviation of the titration (a − b) is determined as indicated under
summation calculations above. This is then transformed to an RSD using Equations (6.5) or
(6.6). Then the RSDs of the other individual parameters have to be determined
experimentally. The found RSDs are, for instance:

distillation: 0.8%,
titration: 0.5%,
molarity: 0.2%,
sample weight: 0.2%,
mcf: 0.2%.

The total calculated precision is:

RSDtotal = √(0.8² + 0.5² + 0.2² + 0.2² + 0.2²) = 1.0%

Here again, the highest RSD (that of the distillation) dominates the total precision. In
practice, the precision of the Kjeldahl method is usually considerably worse (≈ 2.5%),
probably mainly as a result of the heterogeneity of the sample. The present example does
not take that into account. Since random errors add as variances, this would imply that
√(2.5² − 1.0²) ≈ 2.3% of the total random error, by far the largest part, is due to sample
heterogeneity (or other overlooked causes). This implies that painstaking efforts to
improve subprocedures such as the titration or the preparation of standard solutions may
not be very rewarding. It would, however, pay to improve the homogeneity of the sample,
e.g. by careful grinding and mixing in the preparatory stage.

Note. Sample heterogeneity is also represented in the moisture correction factor.
However, the influence of this factor on the final result is usually very small.
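The corresponding calculation for this multiplicative case is sketched below; the squared
RSDs are summed just as the variances were in the summation case:

```python
# Sketch: propagation of random errors in a multiplicative calculation
# (the Kjeldahl example): the squared RSDs add.
from math import sqrt

rsds = {"distillation": 0.8, "titration": 0.5, "molarity": 0.2,
        "sample weight": 0.2, "mcf": 0.2}             # in %

rsd_total = sqrt(sum(v ** 2 for v in rsds.values()))
print(f"total RSD = {rsd_total:.1f} %")               # approx. 1.0 %
```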
6.3.5.2 Propagation of systematic errors

Systematic errors of (sub)measurements contribute directly to the total bias of the result
since the individual parameters in the calculation of the final result each carry their own
bias. For instance, the systematic error in a balance will cause a systematic error in the
sample weight (as well as in the moisture determination). Note that some systematic errors
may cancel out, e.g. weighings by difference may not be affected by a biased balance.
The only way to detect or avoid systematic errors is by comparison (calibration) with
independent standards and outside reference or control samples.

6.4 Statistical tests

6.4.1 Two-sided vs. one-sided test


6.4.2 F-test for precision
6.4.3 t-Tests for bias
6.4.4 Linear correlation and regression
6.4.5 Analysis of variance (ANOVA)

In analytical work a frequently recurring operation is the verification of performance by
comparison of data. Some examples of comparisons in practice are:

- performance of two instruments,

- performance of two methods,

- performance of a procedure in different periods,

- performance of two analysts or laboratories,

- results obtained for a reference or control sample with the "true", "target" or "assigned"
value of this sample.

Some of the most common and convenient statistical tools to quantify such comparisons
are the F-test, the t-tests, and regression analysis.

Because the F-test and the t-tests are the most basic tests they will be discussed first.
These tests examine if two sets of normally distributed data are similar or dissimilar
(belong or not belong to the same "population") by comparing their standard
deviations and means respectively. This is illustrated in Fig. 6-3.

Fig. 6-3. Three possible cases when comparing two sets of data (n1  = n2). A. Different
mean (bias), same precision; B. Same mean (no bias), different precision; C. Both
mean and precision are different. (The fourth case, identical sets, has not been
drawn).
6.4.1 Two-sided vs. one-sided test

These tests for comparison, for instance between methods A and B, are based on the
assumption that there is no significant difference (the "null hypothesis"). In other words,
when the difference is so small that a tabulated critical value of F or t is not exceeded, we
can be confident (usually at 95% level) that A and B are not different. Two fundamentally
different questions can be asked concerning both the comparison of the standard
deviations s1 and s2 with the F-test, and of the means x̄1 and x̄2 with the t-test:

1. are A and B different? (two-sided test)
2. is A higher (or lower) than B? (one-sided test).

This distinction has an important practical implication as statistically the probabilities for
the two situations are different: the chance that A and B are only different ("it can go two
ways") is twice as large as the chance that A is higher (or lower) than B ("it can go only
one way"). The most common case is the two-sided (also called two-tailed) test: there are
no particular reasons to expect that the means or the standard deviations of two data sets
are different. An example is the routine comparison of a control chart with the previous one
(see 8.3). However, when it is expected or suspected that the mean and/or the standard
deviation will go only one way, e.g. after a change in an analytical procedure, the one-
sided (or one-tailed) test is appropriate. In this case the probability that it goes the other
way than expected is assumed to be zero and, therefore, the probability that it goes the
expected way is doubled. Or, more correctly, the uncertainty in the two-way test of 5% (or
the probability of 5% that the critical value is exceeded) is divided over the two tails of the
Gaussian curve (see Fig. 6-2), i.e. 2.5% at the end of each tail beyond 2s. If we perform
the one-sided test with 5% uncertainty, we actually increase this 2.5% to 5% at the end of
one tail. (Note that for the whole gaussian curve, which is symmetrical, this is then
equivalent to an uncertainty of 10% in two ways!)

This difference in probability in the tests is expressed in the use of two tables of critical
values for both F and t. In fact, the one-sided table at 95% confidence level is equivalent to
the two-sided table at 90% confidence level.

It is emphasized that the one-sided test is only appropriate when a difference in one
direction is expected or aimed at. Of course it is tempting to perform this test after the
results show a clear (unexpected) effect. In fact, however, then a two times higher
probability level was used in retrospect. This is underscored by the observation that in this
way even contradictory conclusions may arise: if in an experiment calculated values of F
and t are found within the range between the two-sided and one-sided values of Ftab, and
ttab, the two-sided test indicates no significant difference, whereas the one-sided test says
that the result of A is significantly higher (or lower) than that of B. What actually happens is
that in the first case the 2.5% boundary in the tail was just not exceeded, and then,
subsequently, this 2.5% boundary is relaxed to 5% which is then obviously more easily
exceeded. This illustrates that statistical tests differ in strictness and that for proper
interpretation of results in reports, the statistical techniques used, including the confidence
limits or probability, should always be specified.
6.4.2 F-test for precision

Because the result of the F-test may be needed to choose between the Student's t-test
and the Cochran variant (see next section), the F-test is discussed first.

The F-test (or Fisher's test) is a comparison of the spread of two sets of data to test if the
sets belong to the same population, in other words if the precisions are similar or
dissimilar.

The test makes use of the ratio of the two variances:

F = s1² / s2²     (6.11)

where the larger s² must be the numerator by convention. If the performances are not very
different, then the estimates s1 and s2 do not differ much and their ratio (and that of
their squares) should not deviate much from unity. In practice, the calculated F is
compared with the applicable F value in the F-table (also called the critical value, see
Appendix 2). To read the table it is necessary to know the applicable number of degrees
of freedom for s1 and s2. These are calculated by:

df1 = n1 − 1
df2 = n2 − 1

If Fcal ≤ Ftab one can conclude with 95% confidence that there is no significant difference
in precision (the "null hypothesis" s1 = s2 is accepted). Thus, there is still a 5% chance
that we draw the wrong conclusion. In certain cases more confidence may be needed; then a
99% confidence table can be used, which can be found in statistical textbooks.

Example I (two-sided test)

Table 6-1 gives the data sets obtained by two analysts for the cation exchange capacity
(CEC) of a control sample. Using Equation (6.11) the calculated F value is 1.62. As we
had no particular reason to expect that the analysts would perform differently, we use the
F-table for the two-sided test and find Ftab = 4.03 (Appendix 2, df1 = df2 = 9). This exceeds
the calculated value and the null hypothesis (no difference) is accepted. It can be
concluded with 95% confidence that there is no significant difference in precision between
the work of Analyst 1 and 2.

Table 6-1. CEC values (in cmolc/kg) of a control sample determined by two analysts.

1 2
10.2 9.7
10.7 9.0
10.5 10.2
9.9 10.3
9.0 10.8
11.2 11.1
11.5 9.4
10.9 9.2
8.9 9.8
10.6 10.2
x̄: 10.34 9.97
s: 0.819 0.644
n: 10 10
Fcal = 1.62 tcal = 1.12
Ftab = 4.03 ttab = 2.10
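The test of Example 1 can also be verified with a short script; scipy is assumed for the
critical value (Appendix 2 serves the same purpose on paper). Note that Fcal is unaffected
by whether the standard deviations are computed with n or n − 1, since that factor cancels
in the ratio:

```python
# Sketch: two-sided F-test (Eq. 6.11) on the raw data of Table 6-1.
from statistics import variance
from scipy.stats import f

analyst1 = [10.2, 10.7, 10.5, 9.9, 9.0, 11.2, 11.5, 10.9, 8.9, 10.6]
analyst2 = [9.7, 9.0, 10.2, 10.3, 10.8, 11.1, 9.4, 9.2, 9.8, 10.2]

v1, v2 = variance(analyst1), variance(analyst2)
F_cal = max(v1, v2) / min(v1, v2)             # larger variance on top, by convention
F_tab = f.ppf(0.975, len(analyst1) - 1, len(analyst2) - 1)   # two-sided, 95%

print(f"F_cal = {F_cal:.2f}, F_tab = {F_tab:.2f}")           # 1.62 vs 4.03
print("no significant difference" if F_cal <= F_tab else "significant difference")
```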

Example 2 (one-sided test)

The determination of the calcium carbonate content with the Scheibler standard method is
compared with the simple and more rapid "acid-neutralization" method using one and the
same sample. The results are given in Table 6-2. Because of the nature of the rapid
method we suspect it to produce a lower precision than obtained with the Scheibler
method and we can, therefore, perform the one-sided F-test. The applicable Ftab = 3.07
(App. 2, df1 = 12, df2 = 9), which is lower than Fcal (= 18.3), and the null hypothesis (no
difference) is rejected. It can be concluded (with 95% confidence) that for this one
sample the precision of the rapid titration method is significantly worse than that of the
Scheibler method.

Table 6-2. Contents of CaCO3 (in mass/mass %) in a soil sample determined with the


Scheibler method (A) and the rapid titration method (B).

A B
2.5 1.7
2.4 1.9
2.5 2.3
2.6 2.3
2.5 2.8
2.5 2.5
2.4 1.6
2.6 1.9
2.7 2.6
2.4 1.7
- 2.4
- 2.2
- 2.6
x̄: 2.51 2.13
s: 0.099 0.424
n: 10 13
Fcal  = 18.3 tcal = 3.12
Ftab = 3.07 ttab* = 2.18

(ttab* = Cochran's "alternative" ttab)

6.4.3 t-Tests for bias


6.4.3.1. Student's t-test
6.4.3.2 Cochran's t-test
6.4.3.3 t-Test for large data sets (n ≥ 30)
6.4.3.4 Paired t-test

Depending on the nature of two sets of data (n, s, sampling nature), the means of the sets
can be compared for bias by several variants of the t-test. The following most common
types will be discussed:

1. Student's t-test for comparison of two independent sets of data with very similar
standard deviations;

2. the Cochran variant of the t-test when the standard deviations of the independent sets
differ significantly;

3. the paired t-test for comparison of strongly dependent sets of data.

Basically, for the t-tests Equation (6.8) is used but written in a different way:

t = |x̄ − µ| · √n / s     (6.12)

where

x̄ = mean of test results of a sample
µ = "true" or reference value
s = standard deviation of test results
n = number of test results of the sample.

To compare the mean of a data set with a reference value normally the "two-sided t-table
of critical values" is used (Appendix 1). The applicable number of degrees of freedom here
is:

df = n-1

If a value for t calculated with Equation (6.12) does not exceed the critical value in the
table, the data are taken to belong to the same population: there is no difference and the
"null hypothesis" is accepted (with the applicable probability, usually 95%).

As with the F-test, when it is expected or suspected that the obtained results are higher or
lower than that of the reference value, the one-sided t-test can be performed: if tcal >
ttab, then the results are significantly higher (or lower) than the reference value.

More commonly, however, the "true" value of proper reference samples is accompanied by
the associated standard deviation and number of replicates used to determine these
parameters. We can then apply the more general case of comparing the means of two
data sets: the "true" value in Equation (6.12) is then replaced by the mean of a second
data set. As is shown in Fig. 6-3, to test if two data sets belong to the same population
it is tested whether the two Gauss curves sufficiently overlap. In other words, whether
the difference between the means x̄1 − x̄2 is small. This is discussed next.

Similarity or non-similarity of standard deviations

When using the t-test for two small sets of data (n1 and/or n2 < 30), a choice of the type of
test must be made depending on the similarity (or non-similarity) of the standard deviations
of the two sets. If the standard deviations are sufficiently similar they can be "pooled" and
the Student t-test can be used. When the standard deviations are not sufficiently similar an
alternative procedure for the t-test must be followed in which the standard deviations are
not pooled. A convenient alternative is the Cochran variant of the t-test. The criterion for
the choice is the passing or non-passing of the F-test (see 6.4.2), that is, whether or not
the variances differ significantly. Therefore, for small data sets, the F-test should precede
the t-test.

For dealing with large data sets (n1, n2 ≥ 30) the "normal" t-test is used (see Section
6.4.3.3 and App. 3).

6.4.3.1 Student's t-test

(To be applied to small data sets (n1, n2 < 30) where s1 and s2 are similar according to the F-test.)

When comparing two sets of data, Equation (6.12) is rewritten as:

t = |¯x1 - ¯x2| / (sp·√(1/n1 + 1/n2)) (6.13)

where

¯x1 = mean of data set 1
¯x2 = mean of data set 2
sp = "pooled" standard deviation of the sets
n1 = number of data in set 1
n2 = number of data in set 2.

The pooled standard deviation sp is calculated by:

sp = √[((n1 - 1)·s1² + (n2 - 1)·s2²) / (n1 + n2 - 2)] (6.14)

where

s1 = standard deviation of data set 1
s2 = standard deviation of data set 2
n1 = number of data in set 1
n2 = number of data in set 2.

To perform the t-test, the critical ttab has to be found in the table (Appendix 1); the
applicable number of degrees of freedom df is here calculated by:

df = n1 + n2 - 2

Example

The two data sets of Table 6-1 can be used: with Equations (6.13) and (6.14) tcal is
calculated as 1.12, which is lower than the critical value ttab of 2.10 (App. 1, df = 18, two-
sided), hence the null hypothesis (no difference) is accepted and the two data sets are
assumed to belong to the same population: there is no significant difference between the
mean results of the two analysts (with 95% confidence).

Note. Another illustrative way to perform this test for bias is to calculate if the difference
between the means falls within or outside the range where this difference is still not
significantly large. In other words, if this difference is less than the least significant
difference (lsd). This can be derived from Equation (6.13):

lsd = ttab · sp · √(1/n1 + 1/n2) (6.15)

In the present example of Table 6-1, the calculation yields lsd = 0.69. The measured
difference between the means is 10.34 -9.97 = 0.37 which is smaller than the lsd indicating
that there is no significant difference between the performance of the analysts.

In addition, in this approach the 95% confidence limits of the difference between the
means can be calculated (cf. Equation 6.8):

confidence limits = 0.37 ± 0.69 = -0.32 and 1.06

Note that the value 0 for the difference is situated within this confidence interval which
agrees with the null hypothesis of ¯x1 = ¯x2 (no difference) having been accepted.
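The whole procedure is easily scripted. A minimal Python sketch of Equations (6.13) to (6.15), assuming SciPy; set1 and set2 are placeholders to be replaced by the actual data (e.g. those of Table 6-1):

    # Sketch: Student's t-test for two independent sets with similar s.
    # set1 and set2 are placeholder data for illustration only.
    import math
    from scipy import stats

    set1 = [10.4, 10.1, 10.3, 10.5, 10.2, 10.6, 10.3, 10.4, 10.2, 10.4]
    set2 = [10.0, 9.9, 10.2, 10.1, 9.8, 10.1, 10.0, 10.2, 9.9, 10.1]

    n1, n2 = len(set1), len(set2)
    x1, x2 = sum(set1) / n1, sum(set2) / n2
    s1, s2 = stats.tstd(set1), stats.tstd(set2)

    # pooled standard deviation, Eq. (6.14)
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    t_cal = abs(x1 - x2) / (sp * math.sqrt(1 / n1 + 1 / n2))   # Eq. (6.13)
    t_tab = stats.t.ppf(0.975, df=n1 + n2 - 2)                 # two-sided, 95%
    lsd = t_tab * sp * math.sqrt(1 / n1 + 1 / n2)              # Eq. (6.15)

    print(f"t_cal = {t_cal:.2f}, t_tab = {t_tab:.2f}, lsd = {lsd:.2f}")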

6.4.3.2 Cochran's t-test

(To be applied to small data sets (n1, n2 < 30) where s1 and s2 are dissimilar according to the F-test.)

Calculate t with:

t = |¯x1 - ¯x2| / √(s1²/n1 + s2²/n2) (6.16)

Then determine an "alternative" critical t-value:

ttab* = (t1·s1²/n1 + t2·s2²/n2) / (s1²/n1 + s2²/n2) (6.17)

where

t1 = ttab at n1 - 1 degrees of freedom
t2 = ttab at n2 - 1 degrees of freedom

Now the t-test can be performed as usual: if tcal < ttab* then the null hypothesis that the
means do not significantly differ is accepted.

Example

The two data sets of Table 6-2 can be used.

According to the F-test, the standard deviations differ significantly so that the Cochran
variant must be used. Furthermore, in contrast to our expectation that the precision of the
rapid test would be inferior, we have no idea about the bias and therefore the two-sided
test is appropriate. The calculations yield tcal = 3.12 and ttab* = 2.18, meaning that tcal exceeds
ttab* which implies that the null hypothesis (no difference) is rejected and that the mean of
the rapid analysis deviates significantly from that of the standard analysis (with 95%
confidence, and for this sample only). Further investigation of the rapid method would have
to include the use of more different samples and then comparison with the one-sided t-test
would be justified (see 6.4.3.4, Example 1).
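For verification, the Cochran variant can be coded directly from Equations (6.16) and (6.17). The Python sketch below (assuming SciPy) reproduces the figures of Table 6-2; note that SciPy's built-in ttest_ind(..., equal_var=False) implements the closely related Welch test rather than Cochran's criterion, so the equations are written out here:

    # Sketch: Cochran's t-test (Eqs. 6.16 and 6.17) on the data of Table 6-2.
    import math
    from scipy import stats

    A = [2.5, 2.4, 2.5, 2.6, 2.5, 2.5, 2.4, 2.6, 2.7, 2.4]                 # Scheibler
    B = [1.7, 1.9, 2.3, 2.3, 2.8, 2.5, 1.6, 1.9, 2.6, 1.7, 2.4, 2.2, 2.6]  # rapid titration

    n1, n2 = len(A), len(B)
    x1, x2 = sum(A) / n1, sum(B) / n2
    v1, v2 = stats.tstd(A)**2 / n1, stats.tstd(B)**2 / n2    # s^2/n terms

    t_cal = abs(x1 - x2) / math.sqrt(v1 + v2)                # Eq. (6.16)
    t1 = stats.t.ppf(0.975, df=n1 - 1)                       # two-sided, 95%
    t2 = stats.t.ppf(0.975, df=n2 - 1)
    t_tab_star = (t1 * v1 + t2 * v2) / (v1 + v2)             # Eq. (6.17)

    print(f"t_cal = {t_cal:.2f}, t_tab* = {t_tab_star:.2f}")  # 3.12 and 2.18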

6.4.3.3 t-Test for large data sets (n ≥ 30)

In the example above (6.4.3.2) the conclusion would have been the same if the Student's
t-test with pooled standard deviations had been used. This is because the difference in
result between the Student and Cochran variants of the t-test is largest when small sets of
data are compared, and decreases with increasing number of data. Namely, with an
increasing number of data a better estimate of the real distribution of the population is
obtained (the flatter t-distribution then converges to the standardized normal distribution).
When n ≥ 30 for both sets, e.g. when comparing Control Charts (see 8.3), for all practical
purposes the difference between the Student and Cochran variants is negligible. The
procedure is then reduced to the "normal" t-test by simply calculating tcal with Eq. (6.16)
and comparing this with ttab at df = n1 + n2 - 2. (Note in App. 1 that the two-sided ttab is
now close to 2.)

The proper choice of the t-test as discussed above is summarized in a flow diagram in Appendix 3.
6.4.3.4 Paired t-test

When two data sets are not independent, the paired t-test can be a better tool for
comparison than the "normal" t-test described in the previous sections. This is for instance
the case when two methods are compared by the same analyst using the same sample(s).
It could, in fact, also be applied to the example of Table 6-1 if the two analysts used the
same analytical method at (about) the same time.

As stated previously, comparison of two methods using different levels of analyte gives
more validation information about the methods than using only one level. Comparison of
results at each level could be done by the F and t-tests as described above. The paired t-
test, however, allows for different levels provided the concentration range is not too wide.
As a rule of thumb, the range of results should be within the same order of magnitude. If the analysis
covers a longer range, i.e. several powers of ten, regression analysis must be considered
(see Section 6.4.4). In intermediate cases, either technique may be chosen.

The null hypothesis is that there is no difference between the data sets, so the test is to
see if the mean of the differences between the data deviates significantly from zero or not
(two-sided test). If it is expected that one set is systematically higher (or lower) than the
other set, then the one-sided test is appropriate.

Example 1

The "promising" rapid single-extraction method for the determination of the cation
exchange capacity of soils using the silver thiourea complex (AgTU, buffered at pH 7) was
compared with the traditional ammonium acetate method (NH4OAc, pH 7). Although for
certain soil types the difference in results appeared insignificant, for other types
differences seemed larger. Such a suspect group were soils with ferralic (oxic) properties
(i.e. highly weathered sesquioxide-rich soils). In Table 6-3 the results of ten soils with these
properties are grouped to test if the CEC methods give different results. The
difference d within each pair and the parameters needed for the paired t-test are given
also.

Table 6-3. CEC values (in cmolc/kg) obtained by the NH4OAc and AgTU methods (both at
pH 7) for ten soils with ferralic properties.

Sample NH4OAc AgTU d
1 7.1 6.5 -0.6
2 4.6 5.6 +1.0
3 10.6 14.5 +3.9
4 2.3 5.6 +3.3
5 25.2 23.8 -1.4
6 4.4 10.4 +6.0
7 7.8 8.4 +0.6
8 2.7 5.5 +2.8
9 14.3 19.2 +4.9
10 13.6 15.0 +1.4
¯d = +2.19 tcal = 2.89
sd = 2.395 ttab = 2.26
Using Equation (6.12) and noting that μd = 0 (hypothesis value of the differences, i.e. no
difference), the t-value can be calculated as:

tcal = |¯d|·√n / sd = 2.19 × √10 / 2.395 = 2.89

where

¯d = mean of differences within each pair of data
sd = standard deviation of the differences
n = number of pairs of data

The calculated t-value (= 2.89) exceeds the critical value of 1.83 (App. 1, df = n - 1 = 9, one-
sided), hence the null hypothesis that the methods do not differ is rejected and it is
concluded that the silver thiourea method gives significantly higher results as compared
with the ammonium acetate method when applied to such highly weathered soils.

Note. Since such data sets do not have a normal distribution, the "normal" t-test which
compares means of sets cannot be used here (the means do not constitute a fair
representation of the sets). For the same reason no information about the precision of the
two methods can be obtained, nor can the F-test be applied. For information about
precision, replicate determinations are needed.
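As a check on the arithmetic, a Python sketch of the paired test on the data of Table 6-3 (assuming SciPy; scipy.stats.ttest_rel gives essentially the same tcal):

    # Sketch: paired t-test on the CEC data of Table 6-3 (one-sided, 95%).
    import math
    from scipy import stats

    nh4oac = [7.1, 4.6, 10.6, 2.3, 25.2, 4.4, 7.8, 2.7, 14.3, 13.6]
    agtu   = [6.5, 5.6, 14.5, 5.6, 23.8, 10.4, 8.4, 5.5, 19.2, 15.0]

    d = [b - a for a, b in zip(nh4oac, agtu)]   # differences within each pair
    n = len(d)
    d_mean = sum(d) / n
    s_d = stats.tstd(d)

    t_cal = abs(d_mean) * math.sqrt(n) / s_d    # Eq. (6.12) with mu_d = 0
    t_tab = stats.t.ppf(0.95, df=n - 1)         # one-sided, 95%, df = 9

    print(f"d_mean = {d_mean:+.2f}, s_d = {s_d:.3f}")   # +2.19 and 2.395
    print(f"t_cal = {t_cal:.2f}, t_tab = {t_tab:.2f}")  # 2.89 and 1.83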

Example 2

Table 6-4 shows the data of total-P in four plant tissue samples obtained by a
laboratory L and the median values obtained by 123 laboratories in a proficiency (round-
robin) test.

Table 6-4. Total-P contents (in mmol/kg) of plant tissue as determined by 123 laboratories (Median) and Laboratory L.

Sample Median Lab L d
1 93.0 85.2 -7.8
2 201 224 23
3 78.9 84.5 5.6
4 175 185 10
¯d = 7.70 tcal =1.21
sd = 12.702 ttab = 3.18

To verify the performance of the laboratory a paired t-test can be performed. Using Eq.
(6.12) and noting that μd = 0 (hypothesis value of the differences, i.e. no difference), the
t-value can be calculated as:

tcal = |¯d|·√n / sd = 7.70 × √4 / 12.702 = 1.21

The calculated t-value is below the critical value of 3.18 (Appendix 1, df = n - 1 = 3, two-
sided), hence the null hypothesis that the laboratory does not significantly differ from the
group of laboratories is accepted, and the results of Laboratory L seem to agree with those
of "the rest of the world" (this is a so-called third-line control).

6.4.4 Linear correlation and regression

6.4.4.1 Construction of calibration graph


6.4.4.2 Comparing two sets of data using many samples at different analyte levels

These are also among the most common and useful statistical tools for comparing two
effects or performances, X and Y. Although the technique is in principle the same for both,
there is a fundamental difference in concept: correlation analysis is applied to mutually
independent factors: if X increases, what will Y do (increase, decrease, or perhaps not
change at all)? In regression analysis a unilateral response is assumed: changes in X
result in changes in Y, but changes in Y do not result in changes in X.

For example, in analytical work, correlation analysis can be used for comparing methods
or laboratories, whereas regression analysis can be used to construct calibration graphs.
In practice, however, comparison of laboratories or methods is usually also done by
regression analysis. The calculations can be performed on a (programmed) calculator or
more conveniently on a PC using a home-made program. Even more convenient are the
regression programs included in statistical packages such as Statistix, Mathcad, Eureka,
Genstat, Statcal, SPSS, and others. Also, most spreadsheet programs such as Lotus 123,
Excel, and Quattro-Pro have functions for this.

Laboratories or methods are in fact independent factors. However, for regression analysis
one factor has to be the independent or "constant" factor (e.g. the reference method, or
the factor with the smallest standard deviation). This factor is by convention
designated X, whereas the other factor is then the dependent factor Y (thus, we speak of
"regression of Y on X").

As was discussed in Section 6.4.3, such comparisons can often be done with the
Student/Cochran or paired t-tests. However, correlation analysis is indicated:

1. When the concentration range is so wide that the errors, both random and systematic,
are not independent (which is the assumption for the t-tests). This is often the case where
concentration ranges of several orders of magnitude are involved.

2. When pairing is inappropriate for other reasons, notably a long time span between the
two analyses (sample aging, change in laboratory conditions, etc.).
The principle is to establish a statistical linear relationship between two sets of
corresponding data by fitting the data to a straight line by means of the "least squares"
technique. Such data are, for example, analytical results of two methods applied to the
same samples (correlation), or the response of an instrument to a series of standard
solutions (regression).

Note: Naturally, non-linear higher-order relationships are also possible, but since these are
less common in analytical work and more complex to handle mathematically, they will not
be discussed here. Nevertheless, to avoid misinterpretation, always inspect the kind of
relationship by plotting the data, either on paper or on the computer monitor.

The resulting line takes the general form:

y = bx + a (6.18)

where

a = intercept of the line with the y-axis
b = slope (tangent)

In laboratory work ideally, when there is perfect positive correlation without bias, the
intercept a = 0 and the slope = 1. This is the so-called "1:1 line" passing through the origin
(dashed line in Fig. 6-5).

If the intercept a ≠ 0 then there is a systematic discrepancy (bias, error) between X and Y;
when b ≠ 1 then there is a proportional response or difference between X and Y.

The correlation between X and Y is expressed by the correlation coefficient r which can be
calculated with the following equation:

r = Σ(xi - ¯x)(yi - ¯y) / √[Σ(xi - ¯x)² · Σ(yi - ¯y)²] (6.19)

where

xi = data X
¯x = mean of data X
yi = data Y
¯y = mean of data Y

It can be shown that r can vary from 1 to -1:

r = 1 perfect positive linear correlation
r = 0 no linear correlation (maybe other correlation)
r = -1 perfect negative linear correlation

Often, the correlation coefficient r is expressed as r²: the coefficient of determination. The
advantage of r² is that, when multiplied by 100, it indicates the percentage of variation
in Y associated with variation in X. Thus, for example, when r = 0.71 about 50% (r² = 0.504)
of the variation in Y is associated with the variation in X.

The line parameters b and a are calculated with the following equations:

b = Σ(xi - ¯x)(yi - ¯y) / Σ(xi - ¯x)² (6.20)

and

a = ¯y - b·¯x (6.21)

It is worth noting that r is independent of the choice of which factor is the independent
factor X and which is the dependent factor Y. However, the regression parameters a and b
do depend on this choice as the regression lines will be different (except when there is
ideal 1:1 correlation).
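A minimal Python sketch of Equations (6.19) to (6.21); the calibration data of Table 6-5 (next section) are used here as a numerical check, but any two paired data sets could be substituted:

    # Sketch: correlation coefficient and least-squares line (Eqs. 6.19-6.21).
    import math

    def linear_fit(x, y):
        n = len(x)
        xm, ym = sum(x) / n, sum(y) / n
        sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
        sxx = sum((xi - xm) ** 2 for xi in x)
        syy = sum((yi - ym) ** 2 for yi in y)
        r = sxy / math.sqrt(sxx * syy)   # Eq. (6.19)
        b = sxy / sxx                    # Eq. (6.20)
        a = ym - b * xm                  # Eq. (6.21)
        return r, b, a

    x = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]           # standards (Table 6-5)
    y = [0.05, 0.14, 0.29, 0.43, 0.52, 0.67]     # absorbance readings
    r, b, a = linear_fit(x, y)
    print(f"r = {r:.3f}; y = {b:.3f}x + {a:.3f}")  # r = 0.997; y = 0.626x + 0.037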

6.4.4.1 Construction of calibration graph

As an example, we take a standard series of P (0-1.0 mg/L) for the spectrophotometric
determination of phosphate in a Bray-I extract ("available P"), reading in absorbance units.
The data and calculated terms needed to determine the parameters of the calibration
graph are given in Table 6-5. The line itself is plotted in Fig. 6-4.

Table 6-5 is presented here to give insight into the steps and terms involved. The
calculation of the correlation coefficient r with Equation (6.19) yields a value of 0.997
(r2 = 0.995). Such high values are common for calibration graphs. When the value is not
close to 1 (say, below 0.98) this must be taken as a warning and it might then be advisable
to repeat or review the procedure. Errors may have been made (e.g. in pipetting) or the
used range of the graph may not be linear. On the other hand, a high r may be misleading
as it does not necessarily indicate linearity. Therefore, to verify this, the calibration graph
should always be plotted, either on paper or on computer monitor.

Using Equations (6.20) and (6.21) we obtain:

b = 0.438 / 0.70 = 0.626

and

a = 0.350 - 0.626 × 0.5 = 0.350 - 0.313 = 0.037

Thus, the equation of the calibration line is:

y = 0.626x + 0.037 (6.22)


Table 6-5. Parameters of calibration graph in Fig. 6-4.

xi yi xi-¯x (xi-¯x)² yi-¯y (yi-¯y)² (xi-¯x)(yi-¯y)
0.0 0.05 -0.5 0.25 -0.30 0.090 0.150
0.2 0.14 -0.3 0.09 -0.21 0.044 0.063
0.4 0.29 -0.1 0.01 -0.06 0.004 0.006
0.6 0.43 0.1 0.01 0.08 0.006 0.008
0.8 0.52 0.3 0.09 0.17 0.029 0.051
1.0 0.67 0.5 0.25 0.32 0.102 0.160
Σ: 3.0 2.10 0 0.70 0 0.2754 0.438
¯x = 0.5 ¯y = 0.35

Fig. 6-4. Calibration graph plotted from data of Table 6-5. The dashed lines delineate
the 95% confidence area of the graph. Note that the confidence is highest at the
centroid of the graph.

During calculation, the maximum number of decimals is used; rounding off to the last
significant figure is done at the end (see instructions for rounding off in Section 8.2).
Once the calibration graph is established, its use is simple: for each y value measured the
corresponding concentration x can be determined either by direct reading or by calculation
using Equation (6.22). The use of calibration graphs is further discussed in Section 7.2.2.

Note. A treatise of the error or uncertainty in the regression line is given below (see "Error or uncertainty in the regression line", following Section 6.4.5).


6.4.4.2 Comparing two sets of data using many samples at different analyte levels

Although regression analysis assumes that one factor (on the x-axis) is constant, when
certain conditions are met the technique can also successfully be applied to comparing
two variables such as laboratories or methods. These conditions are:

- The most precise data set is plotted on the x-axis;
- At least 6, but preferably more than 10 different samples are analyzed;
- The samples should rather uniformly cover the analyte level range of interest.

To decide which laboratory or method is the most precise, multi-replicate results have to
be used to calculate standard deviations (see 6.4.2). If these are not available then the
standard deviations of the present sets could be compared (note that we are now not
dealing with normally distributed sets of replicate results). Another convenient way is to run
the regression analysis on the computer, reverse the variables and run the analysis again.
Observe which variable has the lowest standard deviation (or standard error of the
intercept a, both given by the computer) and then use the results of the regression
analysis where this variable was plotted on the x-axis.

If the analyte level range is incomplete, one might have to resort to spiking or standard
additions, with the inherent drawback that the original analyte-sample combination may not
adequately be reflected.

Example

In the framework of a performance verification programme, a large number of soil samples
were analyzed by two laboratories X and Y (a form of "third-line control", see Chapter 9)
and the data compared by regression. (In this particular case, the paired t-test might have
and the data compared by regression. (In this particular case, the paired t-test might have
been considered also). The regression line of a common attribute, the pH, is shown here
as an illustration. Figure 6-5 shows the so-called "scatter plot" of 124 soil pH-H2O
determinations by the two laboratories. The correlation coefficient r is 0.97 which is very
satisfactory. The slope (= 1.03) indicates that the regression line is only slightly steeper
than the 1:1 ideal regression line. Very disturbing, however, is the intercept a of -1.18. This
implies that laboratory Y measures the pH more than a whole unit lower than
laboratory X at the low end of the pH range (the intercept -1.18 is at pHx = 0), a
difference which decreases to about 0.8 unit at the high end.

Fig. 6-5. Scatter plot of pH data of two laboratories. Drawn line: regression line; dashed line: 1:1 ideal regression line.

The t-test for significance is as follows:

For intercept a: μa = 0 (null hypothesis: no bias; the ideal intercept is then zero), standard
error = 0.14 (calculated by the computer), and using Equation (6.12) we obtain:

tcal = |-1.18 - 0| / 0.14 = 8.4
Here, ttab = 1.98 (App. 1, two-sided, df = n - 2 = 122; n - 2 because an extra degree of
freedom is lost as the data are used for both a and b), hence the laboratories have a
significant mutual bias.

For slope: μb = 1 (ideal slope: null hypothesis is no difference), standard error = 0.02
(given by computer), and again using Equation (6.12) we obtain:

tcal = |1.03 - 1| / 0.02 = 1.5

Again, ttab = 1.98 (App. 1; two-sided, df = 122), hence, there is no significant proportional
difference between the laboratories (or: the laboratories do not have a significant
difference in sensitivity). These results suggest that in spite of the good correlation, the two
laboratories would have to look into the cause of the bias.
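In Python, the two significance tests take only a few lines (the intercept, slope and standard errors are the figures quoted above; SciPy supplies the critical t-value):

    # Sketch: testing intercept and slope against their ideal values (a = 0, b = 1).
    from scipy import stats

    a, se_a = -1.18, 0.14     # intercept and its standard error
    b, se_b = 1.03, 0.02      # slope and its standard error
    df = 124 - 2              # n - 2 = 122

    t_a = abs(a - 0) / se_a            # = 8.4 -> significant mutual bias
    t_b = abs(b - 1) / se_b            # = 1.5 -> no significant proportional difference
    t_tab = stats.t.ppf(0.975, df=df)  # about 1.98 (two-sided, 95%)

    print(f"t_a = {t_a:.1f}, t_b = {t_b:.1f}, t_tab = {t_tab:.2f}")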

Note. In the present example, the scattering of the points around the regression line does
not seem to change much over the whole range. This indicates that the precision of
laboratory Y does not change very much over the range with respect to laboratory X. This
is not always the case. In such cases, weighted regression (not discussed here) is more
appropriate than the unweighted regression as used here.

Validation of a method (see Section 7.5) may reveal that precision can change significantly
with the level of analyte (and with other factors such as sample matrix).

6.4.5 Analysis of variance (ANOVA)

When results of laboratories or methods are compared where more than one factor can be
of influence and must be distinguished from random effects, then ANOVA is a powerful
statistical tool to be used. Examples of such factors are: different analysts, samples with
different pre-treatments, different analyte levels, and different methods within one of the
laboratories. Most statistical packages for the PC can perform this analysis.

As a treatise of ANOVA is beyond the scope of the present Guidelines, for further
discussion the reader is referred to statistical textbooks, some of which are given in the list
of Literature.

Error or uncertainty in the regression line

The "fitting" of the calibration graph is necessary because the response points yi
composing the line do not fall exactly on the line. Hence, random errors are implied. This
is expressed by an uncertainty about the slope and intercept b and a defining the line. A
quantification can be found in the standard deviation of these parameters. Most computer
programmes for regression will automatically produce figures for these. To illustrate the
procedure, the example of the calibration graph in Section 6.4.4.1 is elaborated here.

A practical quantification of the uncertainty is obtained by calculating the standard
deviation of the points on the line: the "residual standard deviation" or "standard error of
the y-estimate", which we assume to be constant (but which is only approximately so, see
Fig. 6-4):

sy = √[Σ(yi - ŷi)² / (n - 2)] (6.23)

where

ŷi = "fitted" y-value for each xi (read from graph or calculated with Eq. 6.22); thus, yi - ŷi is the (vertical) deviation of the found y-values from the line
n = number of calibration points.

Note: Only the y-deviations of the points from the line are considered. It is assumed that
deviations in the x-direction are negligible. This is, of course, only the case if the standards
are very accurately prepared.

Now the standard deviations for the intercept a and slope b can be calculated with:

sa = sy · √[Σxi² / (n·Σ(xi - ¯x)²)] (6.24)

and

sb = sy / √[Σ(xi - ¯x)²] (6.25)

To make this procedure clear, the parameters involved are listed in Table 6-6.

The uncertainty about the regression line is expressed by the confidence limits of a
and b according to Eq. (6.9): a ± t·sa and b ± t·sb

Table 6-6. Parameters for calculating errors due to calibration graph (use also figures of
Table 6-5).

xi yi ŷi yi-ŷi (yi-ŷi)²
0.0 0.05 0.037 0.013 0.0002
0.2 0.14 0.162 -0.022 0.0005
0.4 0.29 0.287 0.003 0.0000
0.6 0.43 0.413 0.017 0.0003
0.8 0.52 0.538 -0.018 0.0003
1.0 0.67 0.663 0.007 0.0001
Σ(yi-ŷi)² = 0.001364

In the present example, using Eq. (6.23), we calculate:

sy = √(0.001364 / 4) = 0.0185

and, using Eq. (6.24) and Table 6-5: sa = 0.0132

and, using Eq. (6.25) and Table 6-5: sb = 0.0219

The applicable ttab is 2.78 (App. 1, two-sided, df = n - 2 = 4) hence, using Eq. (6.9):

a = 0.037 ± 2.78 × 0.0132 = 0.037 ± 0.037

and

b = 0.626 ± 2.78 × 0.0219 = 0.626 ± 0.061

Note that if sa is large enough, a negative value for a is possible, i.e. a negative reading for
the blank or zero-standard. (For a discussion about the error in x resulting from a reading
in y, which is particularly relevant for reading a calibration graph, see Section 7.2.3)

The uncertainty about the line is somewhat decreased by using more calibration points
(assuming sy has not increased): one more point reduces ttab from 2.78 to 2.57 (see
Appendix 1).
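The whole error calculation can be reproduced with a short Python sketch from the data of Tables 6-5 and 6-6 (the rounded line parameters of Eq. 6.22 are used, so the last decimals may differ slightly from the values above):

    # Sketch: residual standard deviation and standard errors of a and b
    # (Eqs. 6.23-6.25) for the calibration example.
    import math

    x = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
    y = [0.05, 0.14, 0.29, 0.43, 0.52, 0.67]
    b, a = 0.626, 0.037                     # line parameters of Eq. (6.22)

    n = len(x)
    xm = sum(x) / n
    resid2 = sum((yi - (b * xi + a)) ** 2 for xi, yi in zip(x, y))
    sxx = sum((xi - xm) ** 2 for xi in x)

    s_y = math.sqrt(resid2 / (n - 2))                          # Eq. (6.23)
    s_a = s_y * math.sqrt(sum(xi**2 for xi in x) / (n * sxx))  # Eq. (6.24)
    s_b = s_y / math.sqrt(sxx)                                 # Eq. (6.25)

    t_tab = 2.78                                               # df = n - 2 = 4
    print(f"s_y = {s_y:.4f}")
    print(f"a = {a} +/- {t_tab * s_a:.3f}")   # 0.037 +/- 0.037
    print(f"b = {b} +/- {t_tab * s_b:.3f}")   # 0.626 +/- 0.060 (0.061 at full precision)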

7 QUALITY OF ANALYTICAL PROCEDURES

7.1 Introduction
7.2 Calibration graphs
7.3 Blanks and Detection limit
7.4 Types of sample material
7.5 Validation of own procedures
7.6 Drafting an analytical procedure
7.7 Research plan

7.1 Introduction
In this chapter the actual execution of the jobs for which the laboratory is intended is dealt
with. The most important part of this work is of course the analytical procedures,
meticulously performed according to the corresponding SOPs. Relevant aspects include
calibration, use of blanks, performance characteristics of the procedure, and reporting of
results. An aspect of utmost importance for quality management, the quality control by
inspection of the results, is discussed separately in Chapter 8.
All activities associated with these aspects are aimed at one target: the production of
reliable data with a minimum of errors. In addition, it must be ensured that reliable data are
produced consistently. To achieve this an appropriate programme of quality
control (QC) must be implemented. Quality control is the term used to describe the
practical steps undertaken to ensure that errors in the analytical data are of a magnitude
appropriate for the use to which the data will be put. This implies that the errors (which are
unavoidably made) have to be quantified to enable a decision whether they are of an
acceptable magnitude, and that unacceptable errors are discovered so that corrective
action can be taken. Clearly, quality control must detect both random and systematic
errors. The procedures for QC primarily monitor the accuracy of the work by checking the
bias of data with the help of (certified) reference samples and control samples and the
precision by means of replicate analyses of test samples as well as of reference and/or
control samples.

7.2 Calibration graphs

7.2.1 Principle
7.2.2 Construction and use
7.2.3 Error due to the regression line
7.2.4 Independent standards
7.2.5 Measuring a batch

7.2.1 Principle

Here, the construction and use of calibration graphs or curves in the daily practice of a
laboratory will be discussed. Calibration of instruments (including adjustment) in the
present context is also referred to as standardization. The confusion about these terms is
mainly semantic and the terms calibration curve and standard curve are generally used
interchangeably. The term "curve" implies that the line is not straight. However, the best
(parts of) calibration lines are linear and, therefore, the general term "graph" is preferred.

For many measuring techniques calibration graphs have to be constructed. The technique
is simple and consists of plotting the instrument response against a series of samples with
known concentrations of the analyte (standards). In practice, these standards are usually
pure chemicals dispersed in a matrix corresponding with that of the test samples (the
"unknowns"). By convention, the calibration graph is always plotted with the concentration
of the standards on the x-axis and the reading of the instrument response on the y-
axis. The unknowns are determined by interpolation, not by extrapolation, so that a
suitable working range for the standards must be selected. In addition, in the present
discussion it is assumed that the working range is limited to the linear range of the
calibration graphs and that the standard deviation does not change over the range (neither
of which is always the case*), and that data are normally distributed. Non-linear graphs can
sometimes be linearized in a simple way, e.g. by using a log scale (in potentiometry), but
usually imply statistical problems (polynomial regression) for which the reader is referred
to the relevant literature. It should be mentioned, however, that in modern instruments
which make and use calibration graphs automatically these aspects sometimes go
unnoticed.

* This is the so-called "unweighted" regression line. Because normally the standard
deviation is not constant over the concentration range (it is usually least in the middle
range), this difference in error should be taken into account. This would then yield a
"weighted regression line". The calculation of this is more complicated and information
about the standard deviation of the y-readings has to be obtained. The gain in precision is
usually very limited, but sometimes the extra information about the error may be useful.

Some common practices to obtain calibration graphs are:

1. The standards are made in a solution with the same composition as the extractant used
for the samples (with the same dilution factor) so that all measurements are done in the
same matrix. This technique is often practised when analyzing many batches where the
same standards are used for some time. In this way an incorrectly prepared extractant or
matrix may be detected (in blank or control sample).

2. The standards are made in the blank extract. A disadvantage of this technique is that for
each batch the standards have to be pipetted. Therefore, this type of calibration is
sometimes favoured when only one or few batches are analyzed or when the extractant is
unstable. A seeming advantage is that the blank can be forced to zero. However, an
incorrect extractant would then more easily go by undetected. The disadvantage of
pipetting does not apply in case of automatic dispensing of reagents when equal volumes
of different concentration are added (e.g. with flow-injection).

3. Less common, but useful in special cases is the so-called standard additions technique.
This can be practised when a matrix mismatch between samples and standards needs to
be avoided: the standards are prepared from actual samples. The general procedure is to
take a number of aliquots of sample or extract, add different quantities of the analyte to
each aliquot (spiking) and dilute to the final volume. One aliquot is used without the
addition of the analyte (blank). Thus, a standard series is obtained.

If calibration is involved in an analytical procedure, the SOP for this should include a
description of the calibration sub-procedure, if applicable including an optimization
procedure (usually given in the instruction manual).

7.2.2 Construction and use

In several laboratories calibration graphs for some analyses are still adequately plotted
manually and the straight line (or sometimes a curved line) is drawn with a visual "best fit",
e.g. for flame atomic emission spectrometry, or colorimetry. However, this practice is only
legitimate when the random errors in the measurements of the standards are small: when
the scattering is appreciable the line-fitting becomes subjective and unreliable. Therefore,
if a calibration graph is not made automatically by a microprocessor of the instrument, the
following more objective and also quantitatively more informative procedure is generally
favoured.
The proper way of constructing the graph is essentially the performance of a regression
analysis, i.e. the statistical establishment of a linear relationship between concentration of
the analyte and the instrument response using at least six points. This regression analysis
(of reading y on concentration x) yields a correlation coefficient r as a measure for the fit of
the points to a straight line (by means of Least Squares).

Warning. Some instruments can be calibrated with only one or two standards. Linearity is
then implied but may not necessarily be true. It is useful to check this with more standards.

Regression analysis was introduced in Section 6.4.4 and the construction of a calibration
graph was given as an example. The same example is taken up here (and repeated in
part) but focused somewhat more on the application.

We saw that a linear calibration graph takes the general form:

y = bx + a (6.18; 7.1)

where:

a = intercept of the line with the y-axis
b = slope (tangent)

Ideally, the intercept a is zero. Namely, when the analyte is absent no response of the
instrument is to be expected. However, because of interactions, interferences, noise,
contaminations and other sources of bias, this is seldom the case. Therefore, a can be
considered as the signal of the blank of the standard series.

The slope b is a measure for the sensitivity of the procedure; the steeper the slope, the
more sensitive the procedure, or: the stronger the instrument response y to a
concentration change in x (see also Section 7.5.3).

The correlation coefficient r can be calculated by:

r = Σ(xi - ¯x)(yi - ¯y) / √[Σ(xi - ¯x)² · Σ(yi - ¯y)²] (6.19; 7.2)

where

xi = concentrations of standards
¯x = mean of concentrations of standards
yi = instrument responses to standards
¯y = mean of instrument responses to standards

The line parameters b and a are calculated with the following equations:

b = Σ(xi - ¯x)(yi - ¯y) / Σ(xi - ¯x)² (6.20; 7.3)

and

a = ¯y - b·¯x (6.21; 7.4)

Example of calibration graph

As an example, we take the same calibration graph as discussed in Section 6.4.4.1, (Fig.
6-4): a standard series of P (0-1.0 mg/L) for the spectrophotometric determination of
phosphate in a Bray-I extract ("available P"), reading in absorbance units. The data and
calculated terms needed to determine the parameters of the calibration graph were given
in Table 6-5. The calculations can be done on a (programmed) calculator or more
conveniently on a PC using a home-made program or, even more conveniently, using an
available regression program. The calculations yield the equation of the calibration line
(plotted in Fig. 7-1):

y = 0.626x + 0.037 (6.22; 7.5)

with a correlation coefficient r = 0.997. As stated previously (6.4.4.1), such high values are
common for calibration graphs. When the value is not close to 1 (say, below 0.98) this
must be taken as a warning and it might then be advisable to repeat or review the
procedure. Errors may have been made (e.g. in pipetting) or the used range of the graph
may not be linear. Therefore, to make sure, the calibration graph should always be
plotted, either on paper or on the computer monitor.

Fig. 7-1. Calibration graph plotted from data of Table 6-5.


If linearity is in doubt the following test may be applied. Determine for two or three of
the highest calibration points the relative deviation of the measured y-value from the
calculated line:

deviation (%) = 100 × (yi - ŷi) / ŷi (7.6)

- If the deviations are < 5% the curve can be accepted as linear.
- If a deviation is > 5% then the range is decreased by dropping the highest concentration.
- Recalculate the calibration line by linear regression.
- Repeat this test procedure until all deviations are < 5%.

When, as an exercise, this test is applied to the calibration curve of Fig. 7-1 (data in Table
6-5) it appears that the deviations of the three highest points are < 5%, hence the line is
sufficiently linear.
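A short Python sketch of this check, applied to the three highest points of the example graph:

    # Sketch: linearity check (Eq. 7.6) of the highest calibration points
    # of the line y = 0.626x + 0.037 (Fig. 7-1).
    b, a = 0.626, 0.037
    points = [(0.6, 0.43), (0.8, 0.52), (1.0, 0.67)]   # (x, measured y)

    for xi, yi in points:
        y_fit = b * xi + a
        dev = 100 * (yi - y_fit) / y_fit               # relative deviation in %
        print(f"x = {xi}: deviation = {dev:+.1f}%")
    # all deviations are below 5%, so the line is accepted as linear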

During calculation of the line, the maximum number of decimals is used; rounding off to
the last significant figure is done at the end (see instructions for rounding off in Section 8.2).

Once the calibration graph is established, its use is simple: for each y value measured for
a test sample (the "unknown") the corresponding concentration x can be determined either
by reading from the graph or by calculation using Equation (7.1), or x is automatically
produced by the instrument.
7.2.3 Error due to the regression line

The "fitting" of the calibration graph is necessary because the actual response
points yi, composing the line usually do not fall exactly on the line. Hence, random errors
are implied. This is expressed by an uncertainty about the slope and
intercept b and a defining the graph. A discussion of this uncertainty is given. It was
explained there that the error is expressed by sy, the "standard error of the y-
estimate" (see Eq. 6.23, a parameter automatically calculated by most regression
computer programs.

This uncertainty about the ŷ-values (the fitted y-values) is transferred to the
corresponding concentrations of the unknowns on the x-axis by the calculation using Eq.
(7.1) and can be expressed by the standard deviation of the obtained x-value. The exact
calculation is rather complex but a workable approximation can be calculated with:

sx ≈ sy / b (7.7)

Example

For each value of the standards x the corresponding ŷ is calculated with Eq. (7.5):

ŷi = 0.626·xi + 0.037 (7.8)

Then, sy is calculated using Eq. (6.23) or by computer:

sy = 0.0185

Then, using Eq. (7.7):

sx = 0.0185 / 0.626 = 0.0296

Now, the confidence limits of the found results xf can be calculated with Eq. (6.9):

xf ± t·sx (7.9)

For a two-sided interval and 95% confidence: ttab = 2.78 (see Appendix 1, df = n - 2 = 4).
Hence all results in this example can be expressed as:

xf ± 0.08 mg/L

Thus, for instance, the result of a reading y = 0.22 and using Eq. (7.5) to calculate xf =
0.29, can be reported as 0.29 ± 0.08 mg/L. (See also Note 2 below.)
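In code, converting a reading to a concentration with its approximate confidence interval is a one-liner around Eq. (7.1); a Python sketch with the figures of this example:

    # Sketch: reading back a concentration with its approximate 95% interval.
    b, a = 0.626, 0.037     # calibration line (Eq. 7.5)
    s_x = 0.0296            # approximate sx from Eq. (7.7)
    t_tab = 2.78            # two-sided, 95%, df = 4

    y_reading = 0.22
    x_f = (y_reading - a) / b
    print(f"x = {x_f:.2f} +/- {t_tab * s_x:.2f} mg/L")   # 0.29 +/- 0.08 mg/L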
The sx value used can only be approximate as it is taken as constant here whereas in reality
this is usually not the case. Yet, in practice, such an approximate estimation of the error
may suffice. The general rule is that the measured signal is most precise (least standard
deviation) near the centroid of the calibration graph (see Fig. 6-4). The confidence limits
can be narrowed by increasing the number of calibration points. Therefore, the reverse is
also true: with fewer calibration points the confidence limits of the measurements become
wider. Sometimes only two or three points are used. This then usually concerns the
checking and restoring of previously established calibration graphs including those in the
microprocessor or computer of instruments. In such cases it is advisable to check the
graph regularly with more standards. Make a record of this in the file or journal of the
method.

Note 1. Where the determination of the analyte is part of a procedure with several steps,
the error in precision due to this reading is added to the errors of the other steps and as
such included in the total precision error of the whole procedure. The latter is the most
useful practical estimate of confidence when reporting results. As discussed in Section
6.3.4 a convenient way to do this is by using Equations (6.8) or (6.9) with the mean and
standard deviation obtained from several replicate determinations (n > 10) carried out on
control samples or, if available, taken from the control charts (see 8.3.2: Control Chart of
the Mean). Most generally, the 95% confidence for single values x of test samples is
expressed by Equation (6.10):

x ± 2s (6.10; 7.10)
where s is the standard deviation of the mentioned large number of replicate
determinations.

Note 2. The confidence interval of ± 0.08 mg/L in the present example is clearly not
satisfactory and calls for inspection of the procedure. Particularly the blank seems to be
(much) too high. This illustrates the usefulness of plotting the graph and calculating the
parameters. Other traps to catch this error are the Control Chart of the Blank and, of
course, the technician's experience.

7.2.4 Independent standards

It cannot be overemphasized that for QC a calibration should always include measurement
of an independent standard or calibration verification standard at about the middle of the
calibration range. If the result of this measurement deviates alarmingly from the correct or
expected value (say > 5%), then inspection is indicated.

Such an independent standard can be obtained in several ways. Most usually it is
prepared from pure chemicals by another person than the one who prepared the actual
standards. Obviously, it should never be derived from the same stock or source as the
actual standards. If necessary, a bottle from another laboratory could be borrowed.

In addition, when new standards are prepared, the remainder of the old ones always has
to be measured as a mutual check (include this in the SOP for the preparation of
standards!).
7.2.5 Measuring a batch

After calibration of the instrument for the analyte, a batch of test samples is measured.
Ideally, the response of the instrument should not change during
measurement (drift or shift). In practice this is usually the case for only a limited period of
time or number of measurements and regular recalibration is necessary. The frequency of
recalibration during measurement varies widely depending on technique, instrument,
analyte, solvent, temperature and humidity. In general, emission and atomizing techniques
(AAS, ICP) are more sensitive to drift (or even sudden shift: by clogging) than colorimetric
techniques. Also, the techniques of recalibration and possible subsequent action vary
widely. The following two types are commonly practised.

1. Step-wise correction or interval correction

After calibration, at fixed places or intervals (after every 10, 15, 20, or more, test samples)
a standard is measured. For this, often a standard near the middle of the working range is
used (continuing calibration standard). When the drift is within acceptable limits, the
measurement is continued. If the drift is unacceptable, the instrument is recalibrated
("resloped") and the previous interval of samples remeasured before continuing with the
next interval. The extent of the "acceptable" drift depends on the kind of analysis but in soil
and plant analysis usually does not exceed 5%. This procedure is very suitable for manual
operation of measurements. When automatic sample changers are used, various options
for recalibration and repeating intervals or whole batches are possible.

2. Linear correction or correction by interpolation

Here, too, standards are measured at intervals, usually together with a blank ("drift and
wash") and possible changes are processed by the computer software which converts the
past readings of the batch to the original calibration. Only in case of serious mishap are
batches or intervals repeated. A disadvantage of this procedure is that drift is taken to be
linear whereas this may not be so. Autoanalyzers, ICP and AAS with automatic sample
changers often employ variants of this type of procedure.
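As an illustration of the principle only (real instrument software differs per manufacturer), a Python sketch of such an interpolation correction, assuming the drift is linear and proportional and that a standard of known value is measured before and after the interval:

    # Sketch: linear drift correction by interpolation over one interval.
    # Assumes proportional drift between two readings of the same standard.
    def correct_interval(readings, std_true, std_before, std_after):
        n = len(readings)
        corrected = []
        for i, r in enumerate(readings, start=1):
            # standard reading interpolated at the position of sample i
            std_at_i = std_before + (std_after - std_before) * i / (n + 1)
            corrected.append(r * std_true / std_at_i)
        return corrected

    batch = [0.42, 0.31, 0.55, 0.48]   # hypothetical sample readings
    print(correct_interval(batch, std_true=0.50, std_before=0.50, std_after=0.46))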

At present, the development of instrument software is mushrooming. Many new features
with respect to resloping, correction of carryover, post-batch dilution and repeating are
being introduced by manufacturers. Running ahead of this, many laboratories have
developed their own interface software programs meeting their individual demands.

7.3 Blanks and Detection limit

7.3.1 Blanks
7.3.2 Detection limit

7.3.1 Blanks

A blank or blank determination is an analysis of a sample without the analyte or attribute,
or an analysis without a sample, i.e. going through all steps of the procedure with the
reagents only. The latter type is the most common as samples without the analyte or
attribute are often not available or do not exist.

Another type of blank is the one used for calibration of instruments as discussed in the
previous sections. Thus, we may have two types of blank within one analytical method or
system:

- a blank for the whole method or system and


- a blank for analytical subprocedures (measurements) as part of the whole procedure or
system.

For instance, in the cation exchange capacity (CEC) determination of soils with the
percolation method, two method or system blanks are included in each batch: two
percolation tubes with cotton wool or filter pulp and sand or celite, but without sample. For
the determination of the index cation (NH4 by colorimetry or Na by flame emission
spectroscopy) a blank is included in the determination of the calibration graph. If NH4 is
determined by distillation and subsequent titration, a blank titration is carried out for
correction of test sample readings.

The proper analysis of blanks is very important because:

1. In many analyses sample results are calculated by subtracting blank readings from
sample readings.

2. Blank readings can be excellent monitors in quality control of reagents, analytical
processes, and proficiency.

3. They can be used to estimate several types of method detection limits.

For blanks the same rule applies as for replicate analyses: the larger the number, the
greater the confidence in the mean. The widely accepted rule in routine analysis is that
each batch should include at least two blanks. For special studies where individual results
are critical, more blanks per batch may be required (up to eight).

For quality control, Control Charts are made of blank readings identically to those of
control samples. The between-batch variability of the blank is expressed by the standard
deviation calculated from the Control Chart of the Mean of Blanks; the precision can be
estimated from the Control Chart of the Range of Duplicates of Blanks. The construction
and use of control charts are discussed in detail in 8.3. One of the main control rules of the
control charts, for instance, prescribes that a blank value beyond the mean blank value
plus 3× the standard deviation of this mean (i.e. beyond the Action Limit) must be rejected
and the batch be repeated, possibly with fresh reagents.

In many laboratories, no control charts are made for blanks. Sometimes, analysts argue
that 'there is never a problem with my blank, the reading is always close to zero'.
Admittedly, some analyses are more prone to blank errors than others. This, however, is
not a valid argument for not keeping control charts. They are made to monitor procedures
and to alarm when these are out of control (shift) or tend to become out of control (drift).
This can happen in any procedure in any laboratory at any time.

From the foregoing discussion it will be clear that signals of blank analyses generally are
not zero. In fact, blanks may be found to be negative. This may point to an error in the
procedure: e.g. for the zeroing of the instrument an incorrect or a contaminated solution
was used or the calibration graph was not linear. It may also be due to the matrix of the
solution (e.g. extractant), and is then often unavoidable. For convenience, some analysts
practice "forcing the blank to zero" by adjusting the instrument. Some instruments even
invite or compel analysts to do so. This is equivalent to subtracting the blank value from
the values of the standards before plotting the calibration graph. From the standpoint of
Quality Control this practice must be discouraged. If zeroing of the instrument is
necessary, the use of pure water for this is preferred. However, such general
considerations may be overruled by specific instrument or method instructions. This is
becoming more and more common practice with modern sophisticated hi-tech instruments.
Whatever the case, a decision on how to deal with blanks must be made for each
procedure and laid down in the SOP concerned.

7.3.2 Detection limit

In environmental analysis and in the analysis of trace elements there is a tendency to
accurately measure low contents of analytes. Modern equipment offers excellent
possibilities for this. For proper judgement (validation) and selection of a procedure or
instrument it is important to have information about the lower limits at which analytes can
be detected or determined with sufficient confidence. Several concepts and terms are
used, e.g. detection limit, lower limit of detection (LLD), method detection limit (MDL). The
latter applies to a whole method or system, whereas the two former apply to
measurements as part of a method.

Note: In analytical chemistry, "lower limit of detection" is often confused with "sensitivity"
(see 7.5.3).

Although various definitions can be found, the most widely accepted definition of the
detection limit seems to be: 'the concentration of the analyte giving a signal equal to the
blank plus 3× the standard deviation of the blank'. Because in the calculation of analytical
results the value of the blank is subtracted (or the blank is forced to zero) the detection
limit can be written as:

LLD, MDL = 3 × sbl (7.11)

At this limit it is 93% certain that the signal is not due to the blank but that the method has
detected the presence of the analyte (this does not mean that below this limit the analyte is
absent!).

Obviously, although generally accepted, this is an arbitrary limit and in some cases the 7%
uncertainty may be too high (for 5% uncertainty the LLD =3.3 × sbl). Moreover, the
precision in that concentration range is often relatively low and the LLD must be regarded
as a qualitative limit. For some purposes, therefore, a more elevated "limit of
determination" or "limit of quantification" (LLQ) is defined as

LLQ = 2 × LLD = 6 × sbl (7.12)

or sometimes as

LLQ = 10 × sbl (7.13)

Thus, if one needs to know or report these limits of the analysis as quality characteristics,
the mean of the blanks and the corresponding standard deviation must be determined
(validation). The sbl can be obtained by running a statistically sufficient number of blank
determinations (usually a minimum of 10, and not excluding outliers). In fact, this is an
assessment of the "noise" of a determination.

Note: Noise is defined as the 'difference between the maximum and minimum values of
the signal in the absence of the analyte measured during two minutes' (or otherwise
according to instrument instructions). The noise of several instrumental measurements can
be displayed by using a recorder (e.g. FES, AAS, ICP, IR, GC, HPLC, XRFS). Although
this is not often used to actually determine the detection limit, it is used to determine
the signal-to-noise ratio (a validation parameter not discussed here) and is particularly
useful to monitor noise in case of trouble shooting (e.g. suspected power fluctuations).

If the analysis concerns a one-batch exercise, 4 to 8 blanks are run in this batch. If it
concerns an MDL as a validation characteristic of a test procedure used for multiple
batches in the laboratory, such as a routine analysis, the blank data are collected from
different batches, e.g. the means of duplicates from the control charts.

For the determination of the LLD of measurements where a calibration graph is used, such
replicate blank determinations are not necessary since the value of the blank as well as
the standard deviation result directly from the regression analysis (see Section 7.2.3 and
Example 2 below).

Examples

1. Determination of the Method Detection Limit (MDL) of a Kjeldahl-N determination in soils

Table 7-1 gives the data obtained for the blanks (means of duplicates) in 15 successive
batches of a micro-Kjeldahl N determination in soil samples. Reported are the millilitres of
0.01 M HCl necessary to titrate the ammonia distillate and the conversion to results in mg
N by: reading × 0.01 × 14.

Table 7-1. Blank data of 15 batches of a Kjeldahl-N determination in soils for the calculation of the Method Detection Limit.

ml HCl mg N
0.12 0.0161
0.16 0.0217
0.11 0.0154
0.15 0.0203
0.09 0.0126
0.14 0.0189
0.12 0.0161
0.17 0.0238
0.14 0.0189
0.20 0.0273
0.16 0.0217
0.22 0.0308
0.14 0.0189
0.11 0.0154
0.15 0.0203
Mean blank: 0.0199
sbl: 0.0048

MDL = 3 × sbl =0.014 mg N

The MDL reported in this way is an absolute value. Results are usually reported as relative
figures such as % or mg/kg (ppm). In the present case, if 1 g of sample is routinely used,
then the MDL would be 0.014 mg/g or 14 mg/kg or 0.0014%.
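The calculation is easily reproduced; a Python sketch (assuming SciPy) on the mg N column of Table 7-1:

    # Sketch: Method Detection Limit from the blank data of Table 7-1 (Eq. 7.11).
    from scipy import stats

    mg_n = [0.0161, 0.0217, 0.0154, 0.0203, 0.0126, 0.0189, 0.0161, 0.0238,
            0.0189, 0.0273, 0.0217, 0.0308, 0.0189, 0.0154, 0.0203]

    mean_bl = sum(mg_n) / len(mg_n)
    s_bl = stats.tstd(mg_n)                  # standard deviation of the blanks
    mdl = 3 * s_bl                           # Eq. (7.11)

    print(f"mean blank = {mean_bl:.4f} mg N, s_bl = {s_bl:.4f}")  # 0.0199, 0.0048
    print(f"MDL = {mdl:.3f} mg N (14 mg/kg for a 1 g sample)")    # 0.014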

Note that if one would use only 0.5 g of sample (e.g. because of a high N content)
the MDL as a relative figure is doubled!

When results are obtained below the MDL of this example they must be reported as: '< 14
mg/kg' or '< 0.0014%'. Reporting '0%' or '0.0%' may be acceptable for practical purposes,
but may be interpreted as the element being absent, which is not justified.

Note 1. There are no strict rules for reporting figures below the LLD or LLQ. Most
important is that data can be correctly interpreted and used. For this reason uncertainties
(confidence limits) and detection limits should be known and reported to clients or users (if
only upon request).

The advantage of using the "<" sign for values below the LLD or LLQ is that the value 0
(zero) and negative values can be avoided as they are usually either impossible or
improbable. A disadvantage of the "<" sign is that it is a non-numerical character and not
suitable in spreadsheet programs for further calculation and manipulation. In such cases
the actually found value will be required, but then the inherent confidence restrictions
should be known to the user.

Note 2. Because a normal distribution of data is assumed it can statistically be expected
that zero and negative values for analytical results occur when blank values are subtracted
from test values equal to or lower than the blank. Clearly, only in few cases are negative
values possible (e.g. for adsorption) but for concentrations such values should normally
not be reported. Exceptions to this rule are studies involving surveys of attributes or
effects. Then it might be necessary to report the actually obtained low results as otherwise
the mean of the survey would be biased.

2. Lower Limit of Detection derived from a calibration graph

We use the calibration graph of Figure 7-1. Then, noting that sbl = sx = 0.6097 and using
Equation (7.11) we obtain: LLD = 3×0.6097 = 1.829 mg/L.

It is noteworthy that "forcing the blank to zero" does not affect the Lower Limit of Detection.
Although a (= yb, see Fig. 7-1) may become zero, the uncertainty sy of the calibration
graph, and thus of sx and sbl, is not changed by this: the only change is that the "forced"
calibration line has moved up and now runs through the intersection of the axes (parallel to
the "original" line).

7.4 Types of sample material

7.4.1 Certified reference material (CRM)
7.4.2 Reference material (RM)
7.4.3 Control sample
7.4.4 Test sample
7.4.5 Spiked sample
7.4.6 Blind sample
7.4.7 Sequence-control sample

Although several terms for different sample types have already freely been used in the
previous sections, it seems appropriate to define the various types before the major
Quality Control operations are discussed.

7.4.1 Certified reference material (CRM)

A primary reference material or substance, accompanied by a certificate, one or more of
whose property values are accurately determined by a number of selected laboratories
(with a stated method), and for which each certified value is accompanied by an
uncertainty at a stated level of confidence.

These are usually very expensive materials and, particularly for soils, hard to come by or
not available. For availability, a computerized databank containing information on about
10,000 reference materials can be consulted (COMAR, see Appendix 4).

7.4.2 Reference material (RM)

A secondary reference material or substance, one or more of whose property values are
accurately determined by a number of laboratories (with a stated method), and whose
values are accompanied by an uncertainty at a stated level of confidence. The origin of the
material and the data should be traceable.
In soil and plant analysis RMs are very important since for many analytes and attributes
certified reference materials (CRMs) are not (yet) available. For certain properties a "true"
value cannot even be established as the result is always method-dependent, e.g. CEC,
and particle-size distribution of soil material. A very useful source of RMs is
interlaboratory (round-robin) sample and data exchange programmes. The material sent
around is analyzed by a number of laboratories and the resulting data offer an excellent
reference base, particularly if somehow there is a link with a primary reference material.
Since this is often not the case, the data must be handled with care: it may well be that the
mean or median value of 50 or more laboratories is "wrong" (e.g. because most use a
method with an inadequate digestion step).

In some cases different levels of analyte may be imitated by spiking a sample with the
analyte (see 7.4.5). However, this is certainly not always possible (e.g. CEC,
exchangeable cations, pH, particle-size distribution).

7.4.3 Control sample

An in-house reference sample for which one or more property values have been
established by the user laboratory, possibly in collaboration with other laboratories.

This is the material a laboratory needs to prepare for second-line (internal) control in each
batch and the obtained results of which are plotted on Control Charts. The sample should
be sufficiently stable and homogeneous for the properties concerned. The preparation of
control samples is discussed in Chapter 8.

7.4.4 Test sample

The material to be analyzed, the "unknown".

7.4.5 Spiked sample

A test material with a known addition of analyte.

The sample is analyzed with and without the spike to test recovery (see 7.5.6). It should be
a realistic surrogate with respect to matrix and concentration. The mixture should be well
homogenized.

The requirement "realistic surrogate" is the main problem with spikes. Often the analyte
cannot be integrated in the sample in the same manner as the original analyte, and then
treatments such as digestion or extraction may not necessarily reflect the behaviour of real
samples.

7.4.6 Blind sample

A sample with known content of the analyte. This sample is inserted by the Head of Laboratory or the Quality Officer in batches at places and times unknown to the analyst. The frequency may vary; as an indication, one blind sample in every ten batches may be used. Various types of sample material may serve as blind samples, such as control samples or sufficiently large leftovers of test samples (analyzed several times). In the case of water analysis a solution of the pure analyte, or a combination of analytes, may do. It is essential that the analyst is aware of the possible presence of a blind sample but does not recognize the material as such.

Insertion of blind samples requires some attention regarding the administration and
camouflaging. The protocol will depend on the organization of the sample and data stream
in the laboratory.

7.4.7 Sequence-control sample

A sample with an extreme content of the analyte (but falling within the working range of
the method). It is inserted at random in a batch to verify the correct order of samples. This
is particularly useful for long batches in automated analyses. Very effective is the
combination of two such samples: one with a high and one with a low analyte content.

7.5 Validation of own procedures

7.5.1 Trueness (accuracy), bias
7.5.2 Precision
7.5.3 Sensitivity
7.5.4 Working range
7.5.5 Selectivity and specificity
7.5.6 Recovery
7.5.7 Ruggedness, robustness
7.5.8 Interferences
7.5.9 Practicability
7.5.10 Validation report

Validation is the process of determining the performance characteristics of a method/procedure or process. It is a prerequisite for judgement of the suitability of
produced analytical data for the intended use. This implies that a method may be valid in
one situation and invalid in another. Consequently, the requirements for data may, or
rather must, decide which method is to be used. When this is ill-considered, the analysis
can be unnecessarily accurate (and expensive), inadequate if the method is less accurate
than required, or useless if the accuracy is unknown.

Two main types of validation may be distinguished:

1. Validation of standard procedures. The validation of new or existing methods or procedures intended to be used in many laboratories, including procedures (to be) accepted by national or international standardization organizations.
2. Validation of own procedures. The in-house validation of methods or procedures by
individual user-laboratories.

The first involves an interlaboratory programme of testing the method by a number (≥ 8) of selected, renowned laboratories according to a protocol issued to all participants. The second
involves an in-house testing of a procedure to establish its performance characteristics or
more specifically its suitability for a purpose. Since the former is a specialist task, usually
(but not exclusively) performed by standardization organizations, the present discussion
will be restricted to the second type of validation which concerns every laboratory.

Validation is not only relevant when non-standard procedures are used but just as well
when validated standard procedures are used (to what extent does the laboratory meet the
standard validation?) and even more so when variants of standard procedures are
introduced. Many laboratories use their own versions of well-established methods or
change a procedure for reasons of efficiency or convenience.

Fundamentally, any change in a procedure (e.g. sample size, liquid:solid ratio in extractions, shaking time) may affect the performance characteristics and should be
validated. For instance, in Section 7.3.2 we noticed that halving the sample size results in
doubling the Lower Limit of Detection.

Thus, inherent in generating quality analytical data is supporting them with a quantification of the parameters of confidence. As such, validation is part of quality control.

To specify the performance characteristics of a procedure, a selection (so not necessarily all) of the following basic parameters is determined:

- Trueness (accuracy), Bias
- Precision
- Recovery
- Sensitivity
- Specificity and selectivity
- Working range (including MDL)
- Interferences
- Ruggedness or robustness
- Practicability

Before validation can be carried out it is essential that the detailed procedure is available
as a SOP.

7.5.1 Trueness (accuracy), bias

One of the first characteristics one would like to know about a method is whether the
results reflect the "true" value for the analyte or property. And, if not, can the (un)trueness
or bias be quantified and possibly corrected for?

There are several ways to find this out but essentially they are all based on the same principle, which is the use of an outside reference, directly or indirectly.

The direct method is by carrying out replicate analyses (n ≥ 10) with the method on a (certified) reference sample with a known content of the analyte.

The indirect method is by comparing the results of the method with those of a reference method (or otherwise generally accepted method), both applied to the same sample(s).
Another indirect way to verify bias is by having (some) samples analyzed by another
laboratory and by participation in interlaboratory exchange programmes. This will be
discussed in Chapter 9.

It should be noted that the trueness of an analytical result may be sensitive to varying
conditions (level of analyte, matrix, extract, temperature, etc.). If a method is applied to a
wide range of materials, for proper validation different samples at different levels of analyte
should be used.

Statistical comparison of results can be done in several ways some of which were
described in Section 6.4.

Numerically, the trueness (often less appropriately referred to as accuracy) can be expressed using the equation:

trueness (%) = (x̄ / μ) × 100%   (7.14)

where

x̄ = mean of test results obtained for reference sample
μ = "true" value given for reference sample

Thus, the best trueness we can get is 100%.

Bias, more commonly used than trueness, can be expressed as an absolute value by:

bias = x̄ - μ   (7.15)

or as a relative value by:

bias (%) = ((x̄ - μ) / μ) × 100%   (7.16)

Thus, the best bias we can get is 0 (in units of the analyte) or 0%, respectively.

Example

The Cu content of a reference sample is 34.0 ± 2.7 mg/kg (2.7 = s, n=12). The results of
15 replicates with the laboratory's own method are the following: 38.0; 34.6; 29.1; 27.8;
40.4; 33.1; 40.9; 28.5; 36.1; 26.8; 30.6; 24.3; 31.6; 22.3; 29.9 mg/kg.
With Equation (6.1) we calculate x̄ = 31.6. Using Equation (7.14) the trueness is (31.6/34.0) × 100% = 93%. Using Equation (7.16), the bias is (31.6 - 34.0) × 100% / 34.0 = -7%.

These calculations suggest a systematic error. To see if this error is statistically significant, a t-test can be done. For this, with Equation (6.2) we first calculate s = 5.6.
The F-test (see 6.4.2 and 7.5.2) indicates a significant difference in standard deviation and
we have to use the Cochran variant of the t-test (see 6.4.3). Using Equation (6.16) we
find tcal = 1.46, and with Eq. (6.17) the critical value ttab* = 2.16 indicating that the results
obtained by the laboratory are not significantly different from the reference value (with 95%
confidence).

Although a laboratory could be satisfied with this result, the fact remains that the mean of
the test results is not equal to the "true" value but somewhat lower. As discussed in
Sections 6.4.1 and 6.4.3 the one-sided t-test can be used to test if this result is statistically
on one side (lower or higher) of the reference value. In the present case the one-sided
critical value is 1.77 (see Appendix 1) which also exceeds the calculated value of 1.46
indicating that the laboratory mean is not systematically lower than the reference value
(with 95% confidence).

At first sight a bias of -7% does not seem insignificant. In this case, however, the wide spread of the own data causes the uncertainty about this. If the standard deviation of the results had been the same as that of the reference sample then, using Equations (6.13) and (6.14), tcal would have been 2.58 and, with ttab = 2.06 (App. 1), the difference would have been significant according to the two-sided t-test, and with ttab = 1.71 significantly lower according to the one-sided t-test (at 95% confidence).
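
The above example is easily scripted. The following Python sketch (assuming numpy and scipy are available) reproduces the trueness, bias and t-test calculations; scipy's Welch t-test is used here to play the role of the Cochran variant of the t-test referred to above (both allow unequal variances).

import numpy as np
from scipy import stats

own = np.array([38.0, 34.6, 29.1, 27.8, 40.4, 33.1, 40.9, 28.5,
                36.1, 26.8, 30.6, 24.3, 31.6, 22.3, 29.9])   # mg/kg
mu, s_ref, n_ref = 34.0, 2.7, 12          # reference value of the sample

x_bar = own.mean()
s_own = own.std(ddof=1)                   # sample standard deviation, cf. Eq. (6.2)

trueness = x_bar / mu * 100               # Eq. (7.14)
bias_rel = (x_bar - mu) / mu * 100        # Eq. (7.16)

# Unequal variances: Welch's t-test (comparable to the Cochran variant)
t, p = stats.ttest_ind_from_stats(x_bar, s_own, own.size,
                                  mu, s_ref, n_ref, equal_var=False)
print(f"mean = {x_bar:.1f}, trueness = {trueness:.0f}%, bias = {bias_rel:.0f}%")
print(f"t = {t:.2f}, p = {p:.3f}")        # |t| = 1.46, as in the example above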

7.5.2 Precision

7.5.2.1 Reproducibility
7.5.2.2 Repeatability
7.5.2.3 Within-laboratory reproducibility

Replicate analyses performed on a reference sample, yielding a mean to determine trueness or bias as described above, also yield a standard deviation as a measure for precision. For precision alone, however, control samples and even test samples can be used as well. The statistical test for comparison is done with the F-test, which compares the obtained standard deviation with the standard deviation given for the reference sample (in fact, the variances are compared: Eq. 6.11).

Numerically, precision is either expressed by the absolute value of the standard deviation or, more universally, by the relative standard deviation (RSD) or coefficient of variation (CV) (see Equations 6.5 and 6.6):

CV = RSD = (s / x̄) × 100%   (7.17)

where

x̄ = mean of test results obtained for reference sample
s = standard deviation of these results

If the attained precision is worse than that given for the reference sample, it can still be decided that the performance is acceptable for the purpose (which has to be reported as such); otherwise it has to be investigated how the performance can be improved.

Like the bias, precision will not necessarily be the same at different concentrations of the analyte or in different kinds of materials. Comparison of precision at different levels of analyte can be done with the F-test: if the variances at a few different levels are similar, then precision is assumed to be constant over the range.

Example

The same example as above for bias is used. The standard deviation of the laboratory is
5.6 mg/kg which, according to Eq. (7.17), corresponds with a precision of (5.6/31.6)×100%
= 18%. (The precision of the reference sample can similarly be calculated as about 8%).

According to Equation (6.11) the calculated F-value is:

F = 5.6² / 2.7² = 4.3

The critical value is 2.47 (App. 2, two-sided, df1 = 14, df2 = 11); hence, the null hypothesis that the two standard deviations belong to the same population is rejected: there is a significant difference in precision (at 95% confidence level).
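
A minimal sketch of this F-test in Python (assuming scipy; note that the critical value obtained from scipy's F distribution may differ somewhat from the tabulated value of Appendix 2, depending on the convention of that table):

from scipy import stats

s_lab, n_lab = 5.6, 15       # own results (example above)
s_ref, n_ref = 2.7, 12       # reference sample

F = (s_lab / s_ref) ** 2                              # Eq. (6.11)
F_crit = stats.f.ppf(0.975, n_lab - 1, n_ref - 1)     # two-sided, 95% confidence
print(f"F = {F:.2f}, critical value = {F_crit:.2f}")
if F > F_crit:
    print("significant difference in precision")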

Types of precision

The above description of precision leaves some uncertainty about the actual execution of its determination. Because precision in particular is sensitive to the way it is determined, some specific types of precision are distinguished; therefore, the type involved should always be reported.

7.5.2.1 Reproducibility

The measure of agreement between results obtained with the same method on identical
test or reference material under different conditions (execution by different persons, in
different laboratories, with different equipment and at different times). The measure of
reproducibility R is the standard deviation of these results sR, and for a not too small number of data (n ≥ 8) R is defined by (with 95% confidence):

R = 2.8 × sR   (7.18)

(where 2.8 = 2√2, derived from the normal or Gaussian distribution; ISO 5725).

Thus, reproducibility is a measure of the spread of results when a sample is analyzed by different laboratories. If a method is sensitive to different ways of execution or conditions (low robustness, see 7.5.7), then the reproducibility will reflect this.
(low robustness, see 7.5.7), then the reproducibility will reflect this.

This parameter can obviously not be verified in daily practice. For that purpose the next
two parameters are used (repeatability and within-laboratory reproducibility).

7.5.2.2 Repeatability

The measure of agreement between results obtained with the same method on identical
test or reference material under the same conditions (job done by one person, in the same
laboratory, with the same equipment, at the same time or with only a short time interval).
Thus, this is the best precision a laboratory can obtain: the within-batch precision.

The measure for the repeatability r is the standard deviation of these results sr, and for a not too small number of data (n ≥ 10) r is defined by (with 95% confidence):

r = 2.8 × sr (7.19)


7.5.2.3 Within-laboratory reproducibility

The measure of agreement between results obtained with the same method on identical
test material under different conditions (execution by different persons, with the same or
different equipment, in the same laboratory, at different times). This is a more realistic type
of precision for a method over a longer span of time when conditions are more variable
than defined for repeatability.

The measure is the standard deviation of these results sL (also called between-batch precision). The within-laboratory reproducibility RL is calculated by:

RL = 2.8 × sL (7.20)

The between-batch precision can be estimated in three different ways:

1. As the standard deviation derived from a large number (n ≥ 50) of duplicate determinations carried out by two analysts:

sL = √( Σsᵢ² / k ),  with  sᵢ = dᵢ / √2   (7.21)

where

sᵢ = the standard deviation of each pair of duplicates
k = number of pairs of duplicates
dᵢ = difference between duplicates within each pair
2. Empirically as 1.6 × sr. Then:

RL = 2.8 × 1.6 × sr

or:

RL = 1.6 × r (7.22)

where r is the repeatability as defined above.

3. The most practical and realistic expression of the within-laboratory reproducibility is the
one based on the standard deviation obtained for control samples during routine work. The
advantage is that no extra work is involved: control samples are analyzed in each batch,
and the within-laboratory standard deviation is calculated each time a control chart is
completed (or sooner if desired, say after 10 batches). The calculation is here:

RL = 2.8 × scc (7.23)

where scc is the standard deviation obtained from a Control Chart (see 8.3.2).

Clearly, the above three RL values are not identical and thus, whenever the within-
laboratory reproducibility is reported, the way by which it is obtained should always be
stated.
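
As an illustration, the following Python sketch computes RL in the three ways described above. The duplicate results and the values for sr and scc are hypothetical, and for brevity far fewer duplicate pairs are used than the n ≥ 50 recommended.

import numpy as np

# Hypothetical duplicate determinations (pairs) from routine batches
dup = np.array([[7.2, 7.5], [6.9, 6.6], [7.8, 7.4], [7.1, 7.2],
                [6.5, 6.9], [7.6, 7.3], [7.0, 7.4], [6.8, 6.5]])
d = dup[:, 0] - dup[:, 1]                  # difference within each pair
k = len(dup)                               # number of pairs
s_L = np.sqrt((d ** 2).sum() / (2 * k))    # Eq. (7.21), since si = di/sqrt(2)
print(f"RL from duplicates    : {2.8 * s_L:.2f}")         # Eq. (7.20)

s_r = 0.20                                 # hypothetical repeatability s
print(f"RL empirical (1.6 x r): {1.6 * 2.8 * s_r:.2f}")   # Eq. (7.22)

s_cc = 0.30                                # hypothetical control chart s
print(f"RL from control chart : {2.8 * s_cc:.2f}")        # Eq. (7.23)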

Note: Naturally, instead of reporting the derived validation parameters for precision R, r, or RL, one may prefer to report their primary measure: the standard deviation concerned.

7.5.3 Sensitivity

This is a measure for the response y of the instrument or of a whole method to the
concentration C of the analyte or property, e.g. the slope of the analytical calibration graph
(see Section 7.2.2). It is the value that is required to quantify the analyte on the basis of the
analytical signal. The sensitivity for the analyte in the final sample extract may not
necessarily be equal to the sensitivity for the analyte in a simple standard solution. Matrix
effects may cause improper calibration of the measuring step of the analytical method. As
observed earlier for calibration graphs, the sensitivity may not be constant over a long
range. It usually decreases at higher concentrations by saturation of the signal. This limits
the working range (see next Section 7.5.4). Some of the most typical situations are
exemplified in Figure 7-2.

Fig. 7-2. Examples of some typical response graphs. 1. Constant sensitivity. 2. Sensitivity constant over lower range, then decreasing. 3. Sensitivity decreasing over whole range. (See also 7.5.4.)
In general, at every point of the response graph the sensitivity can be expressed by:

S = Δy / ΔC   (7.24)

The dimension of S depends on the dimensions of y and C. In atomic absorption, for example, y is expressed in absorbance units and C in mg/L. For pH and ion-selective electrodes the response of the electrode is expressed in mV and the concentration in mg/L or moles (plotted on a log scale). Often, for convenience, the signal is converted and
amplified to a direct reading in arbitrary units, e.g. concentration. However, for proper
expression of the sensitivity, this derived response should be converted back to the direct
response. In practice, for instance, this is simply done by making a calibration graph in the
absorbance mode of the instrument as exemplified in Figure 7-1, where slope b is the
sensitivity of the P measurement on the spectrophotometer. If measured in the absorption
(or transmission) mode, plotting should be done with a logarithmic y-axis.
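
As a sketch, the sensitivity of a (linear) response graph can be estimated as the slope of the least-squares line through the calibration readings. The following Python lines use numpy with hypothetical concentration/absorbance pairs:

import numpy as np

C = np.array([0.0, 1.0, 2.0, 3.0, 4.0])            # concentration, mg/L
y = np.array([0.002, 0.105, 0.207, 0.304, 0.398])  # absorbance readings

b, a = np.polyfit(C, y, 1)    # least-squares line y = b*C + a
print(f"sensitivity S = {b:.4f} absorbance units per mg/L")   # cf. Eq. (7.24)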

7.5.4 Working range

For most analytical methods the working range is known from previous experience. When
introducing a new method or measuring technique this range may have to be determined.
This range can be determined during validation by attempting to span a (too) wide range.
This can for instance be done by using several sample sizes, liquid:sample ratios, or by
spiking samples (see 7.5.6, Recovery). This practice is particularly important to determine
the upper limit of the working range (the lower limit of a working range corresponds with
the Method Detection Limit and was discussed in Section 7.3.2). The upper limit is often
determined by such factors as saturation of the extract (e.g. the "free" iron or gypsum
determinations) or by depletion of a solution in case of adsorption procedures (e.g.
phosphate adsorption; cobaltihexamine or silver thiourea adsorption in single-extraction
CEC methods). In such cases the liquid:sample ratio has to be adapted.

To determine the measuring range of solutions the following procedure can be applied:

- Prepare a standard solution of the analyte in the relevant matrix (e.g. extractant) at a
concentration beyond the highest expected concentration.

- Measure this solution and determine the instrument response.

- Dilute this standard solution 10× with the matrix solution and measure again.

- Repeat dilution and measuring until the instrument gives no response.

- Plot the response vs. the concentration.

- Estimate the useful part of the response graph.

(If the dilution steps are too large to obtain a reliable graph, they need to be reduced, e.g.
5×).

In Figure 7-2 the useful parts of graphs 1 and 2 are obviously the linear parts (and for
graph 2 perhaps to concentration 8 if necessary). Sometimes a built-in curve corrector for
the linearization of curved calibration plots can extend the range of application (e.g. in
AAS). Graph 3 has no linear part but can, and sometimes must, still be used. Logarithmic plotting may be considered, and in some cases an equation may be fitted by non-linear (polynomial) regression. It has to be decided on practical grounds what concentration can be accepted before the decreasing sensitivity renders the method inappropriate (with the knowledge that flat or even downward-bending ranges are useless in any case).

7.5.5 Selectivity and specificity

The measurement of an analyte may be disturbed by the presence of other components. The measurement is then non-specific for the analyte under investigation. An analytical
method is "fully specific" when it gives an analytical signal exclusively for one particular
component, but is "dead" for all other components in the sample, e.g. when a reagent
forms a coloured complex with only one analyte. A method is "fully selective" when it
produces correct analytical results for various components of a mixture without any mutual
interaction of the components, e.g. when a reagent forms several coloured complexes with
components in the matrix but with a different colour for each component. A selective
method is composed of a series of specific measurements.

Mutual influences are common in analytical techniques but can often easily be overcome.
An example is ionization interference reducing the specificity in flame spectrometric
techniques (FES, AAS). The selectivity is no problem as the useful spectral lines can be
selected exactly with a monochromator or filters. The mutual interference can be
suppressed by adding an excess of an easily ionizable element, such as cesium, which
maintains the electron concentration in the flame constant. In chromatographic techniques
(GC, HPLC) specificity is sometimes a problem in the analysis of complex compounds.

In the validation report, selectivity and specificity are usually described rather than
quantitatively expressed.

7.5.6 Recovery

To determine the effectiveness of a method (and also of the working range), recovery
experiments can be carried out. Recovery can be defined as the 'fraction of the analyte
determined after addition of a known amount of the analyte to a sample'. In practice,
control samples are most commonly used for spiking. The sample as well as the spikes
are analyzed at least 10 times, the results averaged and the relative standard deviation
(RSD) calculated. For in-house validation the repeatability (replicates in one batch, see
7.5.2.2) is determined, whereas for quality control the within-laboratory
reproducibility (replicates in different batches, see 7.5.2.3) is determined and the data
recorded on Control Charts. The concentration level of the spikes depends on the purpose: for routine control work the level(s) will largely correspond with those of the test samples (recoveries at different levels may differ); a concentration midway the working range is a convenient choice. For the determination of a working range, a wide range of spike levels may be necessary, at least to start with (see 7.5.4). An example is the addition of ammonium
sulphate in the Kjeldahl nitrogen determination. Recovery tests may reveal a significant
bias in the method used and may prompt a correction factor to be applied to the analytical
results.

The recovery is calculated with:

Recovery (%) = ((x̄s - x̄) / xadd) × 100%   (7.25)

where

x̄s = mean result of spiked samples
x̄ = mean result of unspiked samples
xadd = amount of added analyte

If a blank (sample) is used for spiking then the mean result of the unspiked sample will
generally be close to zero. In fact, such replicate analyses could be used to determine or
verify the method detection limit (MDL, see 7.3.2).
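
A minimal Python sketch of Equation (7.25), using hypothetical results for ten replicates of an unspiked and a spiked control sample:

import numpy as np

x_spiked = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 11.7, 12.1, 12.0])
x_plain  = np.array([ 7.0,  6.8,  7.2,  7.1,  6.9,  7.0,  7.1,  6.8,  7.0,  7.1])
x_add = 5.0                                   # amount of analyte added

recovery = (x_spiked.mean() - x_plain.mean()) / x_add * 100   # Eq. (7.25)
rsd = x_spiked.std(ddof=1) / x_spiked.mean() * 100
print(f"recovery = {recovery:.1f}%, RSD = {rsd:.1f}%")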

As has been mentioned before (Section 7.4.5) the recovery obtained with a spike may not
be the same as that obtained with real samples since the analyte may not be integrated in
the spiked sample in the same manner as in real samples. Also, the form of the analyte
with which the spike is made may present a problem as different compounds and grain
sizes representing the analyte may behave differently in an analysis.

7.5.7 Ruggedness, robustness

An analytical method is rugged or robust if results are not (very) sensitive to variations in
the experimental conditions. Such conditions can be temperature, extraction or shaking
time, shaking technique, pH, purity of reagents, moisture content of sample, sample size,
etc. Usually, when a new method is proposed, the ruggedness is first tested by the
initiating laboratory and subsequently in an interlaboratory trial. The ruggedness test is
conveniently done with the so-called "Youden and Steiner partial factorial design" where in
only eight replicate analyses seven factors can be varied and analyzed. This efficient
technique can also be used for within-laboratory validation. As an example the ammonium
acetate CEC determination of soil will be taken. The seven factors could be for instance:

A: With (+) and without (-) addition of 125 mg CaCO3 to the sample (corresponding with
5% CaCO3 content)
B: Concentration of saturation solution: 1 M (+) and 0.5 M (-) NH4OAc
C: Extraction time: 4 hours (-) and 8 hours (+)
D: Admixture of sea-sand (or celite): with (+) and without (-) 1 teaspoon of sand
E: Washing procedure: 2× (-) or 3×(+) with ethanol 80%
F: Concentration of washing ethanol: 70% (-) or 80% (+)
G: Purity of NH4OAc: technical grade (-) and analytical grade (+)

The matrix of the design looks as shown in Table 7-2. The eight subsamples are analyzed
basically according to the SOP of the method. The variations in the SOP are indicated by
the + or - signs denoting the high or low level, presence or absence of a factor or
otherwise stated conditions to be investigated. The eight analytical results obtained are Yi. Thus, sample (experiment) no. 1 receives all treatments A to G indicated with (+), sample no. 2 receives treatments A, B and D indicated by (+) and C, E, F and G indicated by (-), etc.

Table 7-2. The partial factorial design (seven factors) for testing ruggedness of an
analytical method

Experiment  A  B  C  D  E  F  G  Results
1           +  +  +  +  +  +  +  Y1
2           +  +  -  +  -  -  -  Y2
3           +  -  +  -  +  -  -  Y3
4           +  -  -  -  -  +  +  Y4
5           -  +  +  -  -  +  -  Y5
6           -  +  -  -  +  -  +  Y6
7           -  -  +  +  -  -  +  Y7
8           -  -  -  +  +  +  -  Y8

The absolute effect (bias) of each factor A to G can be calculated as follows:

Effect A = (ΣYA+ - ΣYA-) / 4   (7.26)

where

ΣYA+ = sum of results Yi where factor A has + sign (i.e. Y1 + Y2 + Y3 + Y4; n = 4)
ΣYA- = sum of results Yi where factor A has - sign (i.e. Y5 + Y6 + Y7 + Y8; n = 4)

The test for significance of the effect can be done in two ways:

1. With a t-test (6.4.3) using in principle the table with "two-sided" critical t values (App. 1,
n=4). When clearly an effect in one direction is to be expected, the one-sided test is
applicable.

2. By checking if the effect exceeds the precision of the original procedure (i.e. if the effect exceeds the noise of the procedure). Most realistic and practical in this case would be to use scc, the within-laboratory standard deviation taken from a control chart (see Sections 7.5.2.3 and 8.3.2). The standard deviation of the mean of four measurements can be taken as scc/√4 = scc/2 (see 6.3.4), and the standard deviation of the difference between two such means (i.e. the standard deviation of the effect calculated with Eq. 7.26) as √(scc²/4 + scc²/4) = scc/√2 ≈ 0.7 × scc. The effect of a factor can be considered significant if it exceeds 2× this standard deviation, i.e. 2 × 0.7 × scc = 1.4 × scc.

Therefore, the effect is significant when:

Effect > 1.4 × scc   (7.27)

where scc is the standard deviation of the original procedure taken from the last complete
control chart.

Note. Obviously, when this standard deviation is not available, such as in the case of a new method, another type of precision has to be used, preferably the within-laboratory reproducibility (see 7.5.2).

It is not always possible or desirable to vary seven factors. However, the discussed partial
factorial design does not allow a reduction of factors. At most, one (imaginary) factor can
be considered in advance to have a zero effect (e.g. the position of the moon). In that
case, the design is the same as given in Table 7-2 but omitting factor G.
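
The effect calculation of Equation (7.26) and the significance check of Equation (7.27) lend themselves to a short script. The following Python sketch uses the design of Table 7-2; the results Yi and the control-chart standard deviation scc are hypothetical.

import numpy as np

# Partial factorial design of Table 7-2: +1 = high level, -1 = low level
design = np.array([
    [+1, +1, +1, +1, +1, +1, +1],
    [+1, +1, -1, +1, -1, -1, -1],
    [+1, -1, +1, -1, +1, -1, -1],
    [+1, -1, -1, -1, -1, +1, +1],
    [-1, +1, +1, -1, -1, +1, -1],
    [-1, +1, -1, -1, +1, -1, +1],
    [-1, -1, +1, +1, -1, -1, +1],
    [-1, -1, -1, +1, +1, +1, -1],
])
Y = np.array([20.3, 20.1, 19.8, 20.5, 19.6, 20.0, 19.9, 20.2])  # hypothetical results
s_cc = 0.25                                # hypothetical control chart s

effects = design.T @ Y / 4                 # Eq. (7.26) for each factor A..G
for name, eff in zip("ABCDEFG", effects):
    flag = "significant" if abs(eff) > 1.4 * s_cc else "not significant"  # Eq. (7.27)
    print(f"factor {name}: effect = {eff:+.2f} ({flag})")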

For studying only three factors a design is also available. This is given in Table 7-3.

Table 7-3. The partial factorial design (three factors) for testing ruggedness of an
analytical method

Experiment  A  B  C  Results
1           +  +  +  Y1
2           -  +  -  Y2
3           +  -  +  Y3
4           -  -  -  Y4

The absolute effect of the factors A, B, and C can be calculated as follows:

Effect A = (ΣYA+ - ΣYA-) / 2   (7.28)

where

ΣYA+ = sum of results Yi where factor A has + sign (i.e. Y1 + Y3; n = 2)
ΣYA- = sum of results Yi where factor A has - sign (i.e. Y2 + Y4; n = 2)

The test for significance of the effect can be done similarly as described above for the
seven-factor design, with the difference that here n = 2.

If the relative effect has to be calculated (for instance for use as a correction factor), this must be done relative to the result of the original factor. Thus, in the above example of the CEC determination, if one is interested in the effect of reducing the concentration of the saturating solution (Factor B), the "reference" values are those obtained with the 1 M solution (denoted with + in column B) and the relative effect can be calculated with:

Effect B (%) = ((ΣYB+ - ΣYB-) / ΣYB+) × 100%   (7.29)

The confidence of the results of partial factorial experiments can be increased by running
duplicates or triplicates as discussed in Section 6.3.4. This is particularly useful here since
possible outliers may erroneously be interpreted as a "strong effect".

Often a laboratory wants to check the influence of one factor only. Temperature is a factor
which is particularly difficult to control in some laboratories or sometimes needlessly
controlled at high costs simply because it is prescribed in the original method (but perhaps
never properly validated). The very recently published standard procedure for determining
the particle-size distribution (ISO 11277) has not been validated in an interlaboratory trial.
The procedure prescribes the use of an end-over-end shaker for dispersion. If up to now a reciprocating shaker has been used and the laboratory decides to adopt the end-over-end shaker, then in-house validation is indicated and a comparison between the two shaking techniques must be made and documented. If it is decided, after all, to continue with the reciprocating shaking technique (e.g. for practical reasons), then the laboratory must be able to show the influence of this step to users of the data. Such validation must include all soil types to which the method is applied.

The effect of a single factor can simply be determined by conducting a number of replicate analyses (n ≥ 10) with and without the factor, or at two levels of the factor, and comparing the results with the F-test and t-test (see 6.4). Such a single effect may thus be expressed in terms of bias and precision.

7.5.8 Interferences

Many analytical methods are to a greater or lesser extent susceptible to interferences of various kinds. Proper validation should include documentation of such influences. Most
prominent are matrix effects which may either reduce or enhance analytical results (and
are thus a form of reduced selectivity). Ideally, such interferences are quantified as bias
and corrected for, but often this is a tedious affair or even impossible. Matrix effects can be
quantified by conducting replicate analyses at various levels and with various compositions
of (spiked) samples or they can be nullified by imitating the test sample matrix in the
standards, e.g. in X-ray fluorescence spectroscopy. However, the matrix of test samples is
often unknown beforehand. A practical qualitative check in such a case is to measure the
analyte at two levels of dilution: usually the signal of the analyte and of the interference are
not proportional.

Other well-known interferences are, for example, the dark colour of extracts in the colorimetric determination of phosphate and, in the CEC determination, the presence of salts, lime, or gypsum. A colour interference may be avoided by measuring at another wavelength (in the case of phosphate: try 880 nm). Sometimes the only way to avoid interference is to use another method of analysis.

If it is thought that an interference can be singled out and determined, it can be quantified
as indicated for ruggedness in the previous section.

7.5.9 Practicability

When a new method is proposed or when there is a choice of methods for a determination,
it may be useful if an indication or description of the ease or tediousness of the application
is available. Usually the practicability can be derived from the detailed description of the
procedure. The problems are in most cases related to the availability and maintenance of
certain equipment and the required staff or skills. Also, the supply of required parts and
reagents is not always assured, nor the uninterrupted supply of stable power. In some
countries, for instance, high purity grades cannot always be obtained, some chemicals
cannot be kept (e.g. sodium pyrophosphate in a hot climate) and even the supply of a
seemingly common reagent such as ethanol can be a problem. If such limitations are
known, it is useful if they are mentioned in the relevant SOPs or validation report.

7.5.10 Validation report

The results of validation tests should be recorded in a validation report from which the
suitability of a method for a certain purpose can be deduced. If (legal) requirements for
specific analyses are known (e.g. in the case of toxic compounds) then such information
may be included.
Since validation is a kind of research project the report should have a comparable format.
A plan is usually initiated by the head of laboratory, drafted by the technician involved and
verified by the head. The general layout of the report should include:

- Parameters to be validated
- Description of the procedures (with reference to relevant SOPs)
- Results

A model for a validation SOP is given (VAL 09-2).

7.6 Drafting an analytical procedure

For drafting an analytical procedure the general instructions for drafting SOPs as given in
Chapter 2 apply. An example of an analytical procedure as it can be written in the form of
a SOP is METH 006. A laboratory manual of procedures, the "cookery book", can be made
by simply collecting the SOPs for all procedures in a ring binder. Because analytical
procedures, more than any other type of SOP, directly determine the product of a
laboratory, some specific aspects relating to them are discussed here.

As was outlined in Chapter 2, instructions in SOPs should be written in such a way that no
misunderstanding or ambiguity exists as to the execution of the procedure. Thus, much of
the responsibility (not all) lies with the author of the procedure. Even if the author and user
are one and the same person, which should normally be the case (see 2.2), such
misunderstanding may be propagated since the author usually draws on the literature or
documents written by someone else. Therefore, although instructions should be as brief as
possible, they should at the same time be as extensive as necessary.

As an example we take the weighing of a sample, a common instruction in many analytical procedures. Such an instruction could read:

1. Weigh 5.0 g of sample into a 250 ml bottle.
2. Add 100 ml of extracting solution and close bottle.
3. Shake overnight.
4. Etc., etc.

Comment 1

According to general analytical practice the amount of 5.0 g means "an amount between and including 4.95 g and 5.05 g" (4.95 ≤ weight ≤ 5.05), since less than 4.95 would round to 4.9 and more than 5.05 would round to 5.1 (note that 5.05 rounds to 5.0 and not to 5.1).

Some analysts, particularly students and trainees, take the amount of 5.0 g too literally and
set out on a lengthy process of adding and subtracting sample material until the balance
reads "5.0" or perhaps even "5.00". Not only is this procedure tedious, the sample may
become biased as particles of different size tend to segregate during this process. To
prevent such an interpretation, often the prefixes "approximately", "approx." or
"ca." (circa) are used, e.g. "approx. 5.0 g". As this, in turn, introduces a seeming
contradiction between "5.0" (with a decimal, so quite accurate) and "approx." ('it doesn't
matter all that much'), the desired accuracy must be stated: "weigh approx. 5.0 g
(accuracy 0.01 g) into a 250 ml bottle".

The notation 5.0 g can be replaced by 5 g when the sample size is less critical (in the present case, for instance, if the sample:liquid ratio is not very critical). Sometimes it may
even be possible to use "weigh 3 - 5 g of sample (accuracy 0.1 g)". The accuracy needs to
be stated when the actual sample weight is used in the calculation of the final result,
otherwise it may be omitted.

Comment 2

The "sample" needs to be specified. A convenient and correct way is to make reference to
a SOP where the preparation of the sample material is described. This is the more formal version of the common practice in many laboratories of implicitly using the sample whose preparation is described elsewhere in the laboratory manual of analytical procedures. In any case, there should be no doubt about the sample material to
be used. When other material than the usual "laboratory sample" or "test sample" is used,
the preparation must be described and the nature indicated e.g., "field-moist fine earth" or
"fraction > 2 mm" or "nodules".

When drafting a new procedure or one's own version of a standard procedure, it must be considered whether the moisture content of the sample used is relevant for the final result. If so, a moisture correction factor should be part of the calculation step. In certain cases where the sample contains a considerable amount of water (moist highly humic samples; andic material) this water will influence the soil:liquid ratio in certain extraction or equilibration procedures. Validation of such procedures is then indicated.

Comment 3

The "250 ml bottle" needs to be specified also. This is usually done in the section
"Apparatus and glassware" of the SOP. If, in general, materials are not specified, then it is
implied that the type is unimportant for the procedure. However, in shaking procedures,
the kind, size and shape of bottles may have a significant influence on the results. In
addition the kind (composition) of glass is sometimes critical e.g., for the boron
determination.

Comment 4

The same considerations as discussed for the sample weighing apply to the instruction "Add 100 ml of extracting solution". The accuracy needs to be specified, particularly when automatic dispensers are used. The accuracy may be implicit if the equipment to be used is stated, e.g. "add 100 ml solution by graduated pipette" or "volumetric pipette" or "with a 100 ml measuring cylinder". If another means of adding the solution is preferred, its accuracy should equal or exceed that of the stated equipment.

Comment 5

The instruction "shake overnight" is ambiguous. It must be known that "overnight" is equivalent to "approximately 16 hrs", namely from 5 p.m. till 9 a.m. the next morning. It is implied that this time-span is not critical, but generally the deviation should not be more than, say, two hours. In case of doubt, this should be validated with a ruggedness test.
More critical in many cases is the term "shake" as this can be done in many different ways.
In the section "Apparatus" of the SOP the type of shaking machine is stated e.g.,
reciprocating shaker or end-over-end shaker. For the reciprocating shaker the instruction
should include the shaking frequency (in strokes per minute), the amplitude (in mm or cm)
and the position of the bottles (standing up, lying length-wise or perpendicular to the
shaking direction). For an end-over-end shaker usually only the frequency or speed (in
rpm) is relevant.

7.7 Research plan

All laboratories, including those destined for routine work, carry out research in some form.
For many laboratories it constitutes the main activity. Research may range from a simple
test of an instrument or a change in procedure, to large projects involving many aspects,
several departments of an institute, much staff and money, often carried out by
commission of third parties (contract research, sponsors).

For any project of appreciable size, according to GLP the management of the institute
must appoint a study director before the study is initiated. This person is responsible for
the planning and execution of the job. He/she is responsible to a higher Inspecting
Authority (IA) which may be the institute's management, the Quality Assurance Unit, the
Head of Research or the like as established by the management.

A study project can be subdivided into four phases: preparation, execution, reporting,
filing/archiving.

1. Preparation

In this phase the purpose and plan are formulated and approved by the IA. Any
subsequent changes are documented and communicated to the IA. The plan must include:

- Descriptive title, purpose, and identification details
- Study director and further personnel
- Sponsor or client
- Work plan with starting date and duration
- Materials and methods to be used
- Study protocol and SOPs (including statistical treatment of data)
- Protocols for interim reporting and inspection
- Way of reporting and filing of results
- Authorization by the management (i.e. signature)

A work plan or subroutines can often be clarified by means of a flow diagram. Some of the most used symbols in flow diagrams for procedures in general, including analytical procedures, are given in Figure 7-3. An example of a flow sheet for a research plan is given in Fig. 7-4.

Fig. 7-3. Some common symbols for flow diagrams.


2. Execution of the work

The work must be carried out according to the plan, protocols and SOPs. All observations
must be recorded including errors and irregularities. Changes of plan have to be reported
to the IA and if there are budgetary implications also to the management. The study leader
must have control of and be informed about the progress of the work and, particularly in
larger projects, be prepared for inspection by the IA.

Fig. 7-4. Design of flow diagram for study project.

3. Reporting

As soon as possible after completion of the experimental work and verification of the
quality control data the results are calculated. Together with a verification statement of
the IA, possibly after corrections have been made, the results can be reported. The
copyright and authorship of a possible publication should have been arranged in the plan.

The report should contain all information relevant for the correct interpretation of the
results. To keep a report digestible, used procedures may be given in abbreviated form
with reference to the original protocols or SOPs. Sometimes, relevant information turns up
afterwards (e.g. calculation errors). Naturally, this should be reported, even if the results
have already been used.

It is useful and often rewarding if, after completion of a study project, an evaluation is carried out by the study team. In this way a next job may be performed better.

SOPs

VAL 09-2 - Validation of CEC determination with NH4OAc
METH 006 - Determination of nitrogen in soil with micro-Kjeldahl

VAL 09-2 - Validation of CEC determination with NH4OAc

LOGO STANDARD OPERATING PROCEDURE Page: 1 # 2
No.: VAL 09-2 Version: 1 Date: 96-09-19
Title: Validation of CEC determination with NH4OAc (pH 7) File:

1 PURPOSE

To determine the performance characteristics of the CEC determination with ammonium acetate (pH 7) using the mechanical extractor.

The following parameters have been considered: Bias, precision, working range,
ruggedness, interferences, practicability.

2 REQUIREMENTS

See SOP METH 09-2 (Cation Exchange Capacity and Exchangeable Bases with
ammonium acetate and mechanical extractor).

3 PROCEDURES

3.1 Analytical procedure

The basic procedure followed is described in SOP METH 09-2 with variations and number
of replicates as indicated below. Two Control Samples have been used: LABEX 6, a Nitisol
(clay 65%, CEC 20 cmolc/kg) and LABEX 2, an Acrisol (clay 25%; CEC 7 cmolc/kg);
further details of these control samples in SOP RF 031 (List of Control Samples).

3.2 Bias

The CEC was determined 10× on both control samples. Reference is the mean value for
the CEC obtained on these samples by 19 laboratories in an interlaboratory study.

3.3 Precision

Obtained from the replicates of 3.2.

3.4 Working range

The Method Detection Limit (MDL) was calculated from 10 blank determinations. Determination of the Upper Limit is not relevant (percolates beyond calibration range are rare and can be brought within range by dilution).

3.5 Ruggedness

A partial factorial design with seven factors was used. The experiments were carried out in
duplicate and the factors varied are as follows:

A: With (+) and without (-) addition of 125 mg CaCO3 (corresponding with 5% CaCO3 content)
B: Concentration of saturating solution: 1 M (+) and 0.5 M (-) NH4OAc
C: Extraction time: 4 hours (-) and 8 hours (+)
D: Admixture of seasand (or celite): with (+) and without (-) 1 teaspoon of sand
E: Washing procedure: 2× (-) or 3× (+) with ethanol 80%
F: Concentration of ethanol for washing free of salt: 70% (-) or 80% (+)
G: Purity of NH4OAc: technical grade (-) and analytical grade (+)

3.6 Interferences

Two factors particularly interfere in this determination: 1. high clay content (problems with efficiency of percolation) and 2. presence of CaCO3 (competing with saturating index cation). The first was addressed by the difference in clay content of the two samples as well as by factor D in the ruggedness test, the second by factor A of the ruggedness test.

3.7 Practicability

The method is famous for its wide application and ill-famed for its limitations. Some of the
most prominent aspects in this respect are considered.

4 RESULTS

As results may have to be produced as a document accompanying analytical results (e.g. on request of clients), they are presented here in a model format suiting this purpose.

In the present example, where two different samples have been used, the results for both samples may be given on one form, or for each sample on a separate form.

For practical reasons, abbreviated reports may be released omitting irrelevant information. (The full report should always be kept!)

LOGO METHOD VALIDATION FORM Page: 1 # 1
No.: VAL RES 09-2 Version: 1 Date: 96-11-23
Title: Validation data CEC-NH4OAc (METH 09-2) File:

1 TITLE or DESCRIPTION

Validation of cation exchange capacity determination with NH4OAc pH 7 method as described in VAL 09-2 dd. 96-09-19.

2 RESULTS

2.1 Bias (Accuracy): Result of calculation with Eq. (7.14) or (7.16) of Guidelines.
2.2 Precision
    Repeatability: Result of calculation with Eq. (7.17) or (7.19).
    Within-lab reproducibility: Result of calculation with Eq. (7.23) (if Control Charts are available).
2.3 Working range: Result of calculation as exemplified by Table 7-1 in Section 7.3.2 of Guidelines.
2.4 Ruggedness: Results of calculations with Eq. (7.26) or (7.29).
2.5 Interferences: In this case mainly drawn from the ruggedness test.
2.6 Practicability: Special equipment necessary: mechanical extractor; substantial amounts of ethanol required; washing procedure not always complete, particularly in high-clay samples, requiring thorough check.
2.7 General observations:
Author: Sign.:
QA Officer (sign.): Date of Expiry:
METH 006 - Determination of nitrogen in soil with micro-Kjeldahl
LOGO METHOD VALIDATION FORM Page: 1 # 1
No.: METH 006 Version: 2 Date: 96-03-01
Title: Determination of nitrogen in soil with micro-Kjeldahl File:

1. SCOPE

This procedure describes the determination of nitrogen with the micro-Kjeldahl technique.
It is supposed to include all soil nitrogen (including adsorbed NH4+) except that in nitrates.

2. RELATED DOCUMENTS

2.1 Normative references

The following standards contain provisions referred to in the text.

ISO 3696 Water for analytical laboratory use. Specification and test methods.
ISO 11464 Soil quality - Pretreatment of samples for physico-chemical analysis.

2.2 Related SOPs

F 001 Administration of SOPs
APP 066 Operation of Kjeltec 1009 digester
APP 067 Operation of ammonia distillation unit
APP 072 Operation of Autoburette ABU 13 and Titrator TTT 60 (facultative)
RF 008 Reagent Book
METH 002 Moisture content determination

3. PRINCIPLE

The micro-Kjeldahl procedure is followed. The sample is digested in sulphuric acid and hydrogen peroxide with selenium as catalyst, whereby organic nitrogen is converted to ammonium sulphate. The solution is then made alkaline and ammonia is distilled. The evolved ammonia is trapped in boric acid and titrated with standard acid.

4. APPARATUS AND GLASSWARE

4.1 Digester (Kjeldahl digestion tubes in heating block)
4.2 Steam-distillation unit (fitted to accept digestion tubes)
4.3 Burette 25 ml

5. REAGENTS

Use only reagents of analytical grade and deionized or distilled water (ISO 3696).

5.1 Sulphuric acid - selenium digestion mixture. Dissolve 3.5 g selenium powder in 1 L concentrated (96%, density 1.84 g/ml) sulphuric acid by mixing and heating at approx. 350°C on a hot plate. The dark colour of the suspension turns into clear light-yellow. When this is reached, continue heating for 2 hours.

5.2 Hydrogen peroxide, 30%.

5.3 Sodium hydroxide solution, 38%. Dissolve 1.90 kg NaOH pellets in 2 L water in a heavy-walled 5 L flask. Cool the solution with the flask stoppered to prevent absorption of atmospheric CO2. Make up the volume to 5 L with freshly boiled and cooled deionized water. Mix well.

5.4 Mixed indicator solution. Dissolve 0.13 g methyl red and 0.20 g bromocresol green in
200 ml ethanol.

5.5 Boric acid-indicator solution, 1%. Dissolve 10 g H3BO3 in 900 ml hot water, cool and
add 20 ml mixed indicator solution. Make to 1 L with water and mix thoroughly.
5.6 Hydrochloric acid, 0.010 M standard. Dilute standard analytical concentrate ampoule
according to instruction.

Author: Sign.:
QA Officer (sign.): Date of Expiry:

6. SAMPLE

Air-dry fine earth (<2 mm) obtained according to ISO 11464 (or refer to own
procedure). Mill approx. 15 g of this material to pass a 0.25 mm sieve. Use part of this
material for a moisture determination according to ISO 11465 and PROC 002.

7. PROCEDURE

7.1 Digestion

1. Weigh 1 g of sample (accuracy 0.01 g) into a digestion tube. Of soils rich in organic matter (>10%), 0.5 g is weighed in (see Remark 1). In each batch, include two blanks and a control sample.

2. Add 2.5 ml digestion mixture.

3. Add successively 3 aliquots of 1 ml hydrogen peroxide. The next aliquot can be added
when frothing has subsided. If frothing is excessive, cool the tube in water.
Note: In Steps 2 and 3 use a measuring pipette with balloon or a dispensing pipette.

4. Place the tubes on the heater and heat for about 1 hour at moderate temperature
(200°C).

5. Turn up the temperature to approx. 330°C (just below boiling temp.) and continue
heating until mixture is transparent (this should take about two hours).

6. Remove tubes from heater, allow to cool and add approx. 10 ml water with a wash bottle while swirling.

7.2 Distillation

1. Add 20 ml boric acid-indicator solution with measuring cylinder to a 250 ml beaker and
place beaker on stand beneath the condenser tip.

2. Add 20 ml NaOH 38% with measuring cylinder to digestion tube and distil for about 7
minutes during which approx. 75 ml distillate is produced.

Note: the distillation time and amount of distillate may need to be increased for complete
distillation (see Remark 2).

3. Remove beaker from distiller, rinse condenser tip, and titrate distillate with 0.01 M HCl
until colour changes from green to pink.
Note: When using automatic titrator: set end-point pH at 4.60.

Remarks

1. The described procedure is suitable for soil samples with a nitrogen content of up to 10
mg N. This corresponds with a carbon content of roughly 10% C. Of soils with higher
contents, less sample material is weighed in. Sample sizes of less than 250 mg should not
be used because of sample bias.

2. The capacity of the procedure with respect to the amount of N that can be determined
depends to a large extent on the efficiency of the distillation assembly. This efficiency can
be checked, for instance, with a series of increasing amounts of (NH4)2SO4 or NH4Cl
containing 0-50 mg N.

8. CALCULATION

%N = ((a - b) × M × 1.4 / s) × mcf

where

a = ml HCl required for titration of sample
b = ml HCl required for titration of blank
s = air-dry sample weight in gram
M = molarity of HCl
1.4 = 14 × 10⁻³ × 100% (14 = atomic weight of nitrogen)
mcf = moisture correction factor
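
By way of illustration (not part of the original SOP), the calculation can be coded as follows; all figures are hypothetical:

def percent_n(a, b, molarity, sample_g, mcf):
    # %N by micro-Kjeldahl, following the calculation in clause 8
    return (a - b) * molarity * 1.4 / sample_g * mcf

# e.g. 6.30 ml titrant for the sample, 0.10 ml for the blank, 0.010 M HCl,
# 1.000 g air-dry sample, moisture correction factor 1.02:
print(f"{percent_n(6.30, 0.10, 0.010, 1.000, 1.02):.3f} %N")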

9. VALIDATION PARAMETERS

9.1 Bias: -3.1% rel. (sample ISE 921, x̄ = 2.80 g/kg N, n=5)
9.2 Within-lab reproducibility: RL = 2.8 × scc = 2.5% rel. (sample LABEX 38, x̄ = 2.59 g/kg N, n=30)
9.3 Method Detection Limit: 0.014 mg N or 0.0014% N

10. TEST REPORT

The report of analytical results shall contain the following information:

- the result(s) of the determination with identification of the corresponding sample(s);
- a reference to this SOP (if requested, a brief outline such as given under clause 3: Principle);
- possible peculiarities observed during the test;
- all operations not mentioned in the SOP that can have affected the results.

11. REFERENCES

Hesse, P.R. (1971) A textbook of soil chemical analysis. John Murray, London.
Bremner, J.M. and Mulvaney, C.S. (1982) Nitrogen - Total. In: Page, A.L., Miller, R.H. and Keeney, D.R. (eds.) Methods of soil analysis. Part 2. Chemical and microbiological properties, 2nd ed. Agronomy Series 9, ASA, SSSA, Madison.
ISO 11261 Soil quality - Determination of total nitrogen - Modified Kjeldahl method.

8 INTERNAL QUALITY CONTROL OF DATA

8.1 Introduction
8.2 Rounding and Significant figures
8.3 Control charts
8.4 Preparation of a Control Sample
8.5 Complaints
8.6 Trouble-shooting
8.7 LIMS
SOPs

8.1 Introduction

In the preceding chapters basic elements of quality assurance were discussed. All
activities associated with these aspects have one aim: the production of reliable data with
a minimum of errors. The present discussion is concerned with activities to verify that a
laboratory produces such reliable data consistently. To this end an appropriate programme
of quality control (QC) must be implemented. Quality control is the term used to describe
the practical steps undertaken to ensure that errors in the analytical data are of a
magnitude appropriate for the use to which the data will be put. This means that the
(unavoidable) errors made are quantified to enable a decision whether they are of an
acceptable magnitude and that unacceptable errors are discovered so that corrective
action can be taken and erroneous data are not released. In short, quality control must
detect both random and systematic errors.

In principle, quality control for analytical performance consists of two complementary activities: internal QC and external QC.

The internal QC involves the in-house procedures for continuous monitoring of operations
and systematic day-to-day checking of the produced data to decide whether these are
reliable enough to be released. The procedures primarily monitor the bias of data with the
help of control samples and the precision by means of duplicate analyses of test samples
and/or of control samples. These activities take place at batch level (second-line control).

The external QC involves reference help from other laboratories and participation in
national and/or international interlaboratory sample and data exchange
programmes (proficiency testing; third-line control).

The present chapter focuses mainly on the internal QC as this has to be organised by the laboratory itself. External QC, just as indispensable as the internal QC, is dealt with in Chapter 9.

8.2 Rounding and Significant figures

8.2.1 Rounding
8.2.2 Significant figures

At this point, before entering into the actual treatment of data, it might be useful to consider the data themselves as they are treated and reported. Analytical data, either direct
readings (e.g. pH) or results of one or more calculation steps associated with most
analytical methods, are often reported with several numbers after the decimal point. In
many cases this suggests a higher significance than is warranted by the combination of
procedure and test materials. Since clear rules for rounding and for determining the
number of significant decimals are available these will be given here.

8.2.1 Rounding

To allow a better overview and interpretation, to conserve paper (more columns per page),
and to simplify subsequent calculations, figures should be rounded up or down leaving out
insignificant numbers.

To produce minimal bias, by convention rounding is done as follows:

- If the last number is 4 or less, retain the preceding number;
- if it is 6 or more, increase the preceding number by 1;
- if the last number is 5, the preceding number is made even.

Examples:

pH = 5.72 rounds to 5.7
pH = 5.76 rounds to 5.8
pH = 5.75 rounds to 5.8
pH = 5.85 rounds to 5.8

When calculations and statistics have to be performed, rounding must be done at the end.

Remark: Traditionally, and by most computer calculation programs, when the last number
is 5, the preceding number is raised by 1. There is no objection to this practice as long as
it causes no disturbing bias, e.g. in surveys of attributes or effects.
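
This convention (round half to even, "banker's rounding") is what Python's decimal module implements; a small sketch reproducing the examples above:

from decimal import Decimal, ROUND_HALF_EVEN

for v in ["5.72", "5.76", "5.75", "5.85"]:
    r = Decimal(v).quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN)
    print(v, "->", r)    # 5.7, 5.8, 5.8, 5.8

(The values are passed as strings because binary floating-point numbers cannot represent most decimals exactly.)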

8.2.2 Significant figures


8.2.2.1 Rounding of test results
8.2.2.2 Rounding of means and standard deviations

8.2.2.1 Rounding of test results

The significance of the figures of results is a function of the precision of the analytical method. The most practical figures for precision are obtained from the laboratory's own validation of the procedure, whereby the within-laboratory standard deviation sL (between-batch precision) for control samples is the most realistic parameter for routine procedures (see 7.5.2). For non-routine studies, the sr (within-batch precision) might have to be determined.

To determine which number is still significant, the following rule is applied:

Calculate the upper boundary bt of the rounding interval a using the standard deviation s of the results (n ≥ 10):

bt = ½ s   (8.1)

Then choose a equal to the largest decimal unit (...; 100; 10; 1; 0.1; 0.01; etc.) which does not exceed the calculated bt.

After having done this for each type of analysis at different concentration or intensity levels
it will become apparent what the last significant figure or decimal is which may be
reported. This exercise has to be repeated regularly but is certainly indicated when a new
technique is introduced or when analyses are performed in a nonroutine way or on non-
routine test materials.

Example

Table 8-1. A series of repeated CEC determinations (in cmolc/kg) on a control sample,
each in a different batch.

Data Rounded
6.55 6.6
7.01 7.0
7.25 7.2
7.83 7.8
6.95 7.0
7.16 7.2
7.83 7.8
7.05 7.0
6.83 6.8
7.63 7.6

The standard deviation of this set of (unrounded) data is:

s = 0.4298
hence: bt = ½ × 0.4298 = 0.2149
and: a = 0.1
Therefore, these data should be reported with a maximum of one decimal.
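The rule of Eq. (8.1) is easily automated. A minimal Python sketch using the Table 8-1 data; the floor-of-log10 step is one way, chosen here, to pick the largest decimal unit not exceeding bt:

    import math
    import statistics

    data = [6.55, 7.01, 7.25, 7.83, 6.95, 7.16, 7.83, 7.05, 6.83, 7.63]

    s = statistics.stdev(data)               # sample standard deviation (Eq. 6.2)
    bt = 0.5 * s                             # Eq. (8.1)
    a = 10.0 ** math.floor(math.log10(bt))   # largest decimal unit <= bt
    print(round(s, 4), round(bt, 4), a)      # 0.4298 0.2149 0.1
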

8.2.2.2 Rounding of means and standard deviations

When values for means, standard deviations, and relative standard deviations (RSD and
CV) are to be rounded, b is calculated in a different way. Consistent with Eq. (8.1), b is
taken as half the standard deviation of the statistic concerned:

for ¯x:    b¯x = s/(2√n)    (8.2)

for s:    bs = s/(2√(2(n−1)))    (8.3)

for RSD:    bRSD = RSD/(2√(2(n−1)))    (8.4)

where

¯x = mean of set of n results
s = standard deviation of set of results
RSD = relative standard deviation.

8.3 Control charts

8.3.1 Introduction
8.3.2 Control Chart of the Mean (Mean Chart)
8.3.3 Control Chart of the Range of Duplicates (Range Chart)
8.3.4 Automatic preparation of control charts

8.3.1 Introduction

As stated in Section 8.1, an internal system for quality control is needed to ensure that
valid data continue to be produced. This implies that systematic checks, e.g. per day or
per batch, must show that the test results remain reproducible and that the methodology is
actually measuring the analyte or attribute in each sample. An excellent and widely used
system of such quality control is the application of (Quality) Control Charts. In analytical
laboratories such as soil, plant and water laboratories separate control charts can be used
for analytical attributes, for instruments and for analysts. Although several types of control
charts can be applied, the present discussion will be restricted to the two most usual types:

1. Control Chart of the Mean for the control of bias;


2. Control Chart of the Range of Duplicates for the control of precision.

For the application of quality control charts it is essential that at least Control Samples are
available and preferably also (certified) Reference Samples. As the latter are very
expensive and, particularly in the case of soil samples, still hard to obtain, laboratories
usually have to rely largely on (home-made) control samples. The preparation of control
samples is dealt with in Section 8.4.

8.3.2 Control Chart of the Mean (Mean Chart)

8.3.2.1 Principle
8.3.2.2 Starting with Mean Charts
8.3.2.3 Using a Mean Chart

8.3.2.1 Principle

In each batch of test samples at least one control sample is analyzed and the result is
plotted on the control chart of the attribute and the control sample concerned. The basic
construction of this Control Chart of the Mean is presented in Fig. 8-1. (Other names
are Mean Chart, x-Chart, Levey-Jennings, or Shewhart Control Chart). This shows the
(assumed) relation with the normal distribution of the data around the mean. The
interpretation and practical use of control charts is based on a number of rules derived
from the probability statistics of the normal distribution. These rules are discussed in
8.3.2.3 below. The basic assumption is that when a control result falls within a distance
of 2s from the mean, the system was under control and the results of the batch as a whole
can be accepted. A control result beyond the distance of 2s from the mean (the "Warning
Limit") signals that something may be wrong or tends to go wrong, while a control result
beyond 3s (the "Control Limit" or "Action Limit") indicates that the system was statistically
out of control and that the results have to be rejected: the batch has to be repeated after
sorting out what went wrong and after correcting the system.

Fig. 8-1. The principle of a Control Chart of the Mean. UCL = Upper Control Limit
(or Upper Action Limit). LCL = Lower Control Limit (or Lower Action
Limit). UWL = Upper Warning Limit. LWL = Lower Warning Limit.
Apart from test results of control samples, control charts can be used for quite a number of
other types of data that need to be controlled on a regular basis, e.g. blanks, recoveries,
standard deviations, instrument response. A model for a Mean Chart is given at the end of this chapter.

Note. The limits at 2s and 3s may be too strict or not strict enough for particular analyses
used for particular purposes. A laboratory is free to choose other limits for analyses.
Whatever the choice, this should always be identifiable on the control chart (and stated in
the SOP or protocol for the use of control charts and consequent actions).

Fig. 8-2. A filled-out control chart of the mean of a control sample.

8.3.2.2 Starting with Mean Charts

A control chart can be started when a sufficient number of data of an attribute of the
control sample is available (or data of the performance of an analyst in analyzing an
attribute, or of the performance of an instrument on an analyte). Since we want the control
chart to reflect the actual analytical practice, the data should be collected in the same
manner. This is usually done by analyzing a control sample in each batch. Statistically, a
sufficient number of data would be 7, but the more data available the better. It is generally
recommended to start with at least 10 replicates.

Note: If duplicate determinations of the control sample are used in each batch to
control within-batch precision (see 8.3.3), the mean of the duplicates can be used as entry.
Although the principle of such a Mean Chart (called ¯x-Chart, as opposed to x-Chart) is
the same as for single values, the statistical background of the parameters obviously is
not. These two systems may, therefore, not be mixed.

Example
In ten consecutive batches of test samples the CEC of a control sample is determined.
The results are: 10.4; 11.6; 10.8; 9.6; 11.2; 11.9; 9.1; 10.4; 10.3; 11.6 cmolc/kg
respectively. Using the equations the following parameters for this set of data are
obtained: Mean ¯x = 10.7 cmolc/kg, and standard deviation s = 0.91. These are the initial
parameters for a new control chart (see Fig. 8-2) and are recorded in the second upper
right box of this chart ("data previous chart"). The Mean is drawn as a dashed (nearly)
central line. The Warning and Action Limits are calculated in the left lower box, and the
corresponding lines drawn as dashed and continuous lines respectively (the Action Line
may be drawn in red). The vertical scale is chosen such that the range ¯x ± 3s is roughly
2.5 to 4 cm.
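
These start-up calculations are easily scripted. A minimal Python sketch with the same ten CEC results (warning and action limits at 2s and 3s, as in Fig. 8-1):

    import statistics

    cec = [10.4, 11.6, 10.8, 9.6, 11.2, 11.9, 9.1, 10.4, 10.3, 11.6]

    mean = statistics.mean(cec)   # 10.69, reported as 10.7 cmolc/kg
    s = statistics.stdev(cec)     # 0.91

    print("Mean:", round(mean, 2))
    print("Warning Limits:", round(mean - 2 * s, 2), "to", round(mean + 2 * s, 2))
    print("Action Limits: ", round(mean - 3 * s, 2), "to", round(mean + 3 * s, 2))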

It may turn out, in retrospect, that one (or more) of the initial data lies beyond an initial
Action Limit. This result should not have been used for the initial calculations. The
calculations then have to be repeated without this result. Therefore, it is advisable to have
a few more than ten initial data.

The procedure for starting a control chart should be laid down in a SOP.

8.3.2.3 Using a Mean Chart

After calculating the mean and the standard deviation of the previous chart (or of the initial
data set) five lines are drawn on the next control chart: one for the Mean, two Warning
Limits and two Action Limits (see Fig. 8-2). Each time a result for the control sample is
obtained in a batch of test samples, this result is recorded on the control chart of the
attribute concerned. No rules are laid down for the size of a "batch" as this usually
depends on the methods and equipment used. Some laboratories use one control sample
in every 20 test samples, others use a minimum of 1 in 50.

Note. The level of the analyte in the control sample should as much as possible match the
level in the test samples. For this reason it is often necessary to have more than one
control sample available for an attribute. To cope with the (expected) variation of the
concentration of the analyte in the test samples the use of more than one control sample in
a batch must be considered. This would indeed increase the reliability of the obtained
results but at a price: an extra analysis is carried out and the chance of false rejection of a
batch is increased also.

Quality control rules have been developed to detect excess bias and imprecision as well
as shift and drift in the analysis. These rules are used to determine whether or not results
of a batch are to be accepted.

Ideally, the quality control rules chosen should provide a high rate of error detection with a
low rate of false rejection. The rules for quality control are not uniform: they may vary from
laboratory to laboratory, and even within laboratories from analysis to analysis. The rules
for the interpretation of quality control charts are not uniform either. Very detailed rules are
sometimes applied, particularly when more than one control sample per batch is used.
However, it should be realized that stricter rules generally result in (s)lower output of data
and higher costs of analysis. The most convenient and commonly applied main rules are
the following:
Warning rule (if occurring, then data require further inspection):

- One control result beyond Warning Limit.

Rejection rules (if occurring, then data are rejected):

- 1. One control result beyond Action Limit.

- 2. Two successive control results beyond same Warning Limit.

- 3. Ten successive control results are on the same side of the Mean. (Some laboratories
apply six results.)

- 4. Whenever results seem unlikely (plausibility check).

The Warning Rule is exceeded by mere chance in less than 5% of the cases. The chance
that the Rejection Rules are violated on purely statistical grounds can be calculated as
follows:

Rule 1: 0.3%
Rule 2: 0.5 × (0.05)² × 100% ≈ 0.1%
Rule 3: (0.5)¹⁰ × 100% ≈ 0.1%

Thus, less than 0.5% of the results will be rejected by mere chance. (This increases
to about 2% if in Rule 3 'six results on the same side of the mean' is applied.)
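
The three statistical rejection rules can also be screened automatically when results are entered. A minimal Python sketch; the function name and the parametrized run length are choices made here, not prescribed by the rules above:

    def violations(results, mean, s, run=10):
        # Scan a series of control results against the three statistical rules.
        found = []
        for i, x in enumerate(results):
            if abs(x - mean) > 3 * s:                          # Rejection rule 1
                found.append((i, "beyond Action Limit"))
            if i > 0:                                          # Rejection rule 2
                prev = results[i - 1]
                if (x - mean > 2 * s and prev - mean > 2 * s) or \
                   (mean - x > 2 * s and mean - prev > 2 * s):
                    found.append((i, "two successive beyond same Warning Limit"))
            window = results[max(0, i - run + 1):i + 1]        # Rejection rule 3
            if len(window) == run and (all(r > mean for r in window)
                                       or all(r < mean for r in window)):
                found.append((i, "%d successive on same side of Mean" % run))
        return found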

If any of the four rejection rules is violated the following actions should be taken:

- Repeat the analysis; if the next point is satisfactory, continue the analysis. If not, then:

- Investigate the cause of the violation.

- Do not use the results of the batch, run, day or period concerned until the cause is
traced. Only use the results if rectification is justified (e.g. when a calculation error was
made).

- If no rectification is possible, after elimination of the source of the error, repeat the
analysis of the batch(es) concerned. If this next point is satisfactory, the analysis can be
continued.

Commonly, outliers are caused by simple errors such as calculation or dilution errors, use
of wrong standard solutions or dirty glassware. If there is evidence of such a cause, then
this outlier can be put on the chart but may not be used in calculating the statistical
parameters of the control chart. These events should be recorded on the chart in the box
"Remarks". If the parameters are calculated automatically, the outlier value is not entered.

Rejection Rule 3 may pose a particular problem. If after the 10th successive result on one
side of the mean it appears that a systematic error has entered the process, the
acceptance of the previous batches has to be reconsidered. If they cannot be corrected
they may have to be repeated (if this is still possible: samples may have deteriorated).
Also, the customer(s) may have to be informed. Most probably, however, problems of this
type are discovered at an earlier stage by other Quality Control tools such as excessive
blank readings, the use of independent standard solutions, instrument calibrations, etc. In
addition, by consistent inspection of the control chart three or four consecutive control
results at the same side of the mean will attract attention and a shift (see below) may
already then be suspected.

Rejection Rule 4 is a special case. Unlike the other rules this is a subjective rule based on
personal judgement of the analyst and the officer charged with the final screening of the
results before release to the customer. Both general and specific knowledge about a
sample and the attribute(s) may ring a bell when certain test results are thought to be
unexpectedly or impossibly high or low. Also, results may be contradictory, sometimes
noticed only by a complaining client. Obviously, much of the success of the application of this
rule depends on the available expertise.

Note. A very useful aspect of Quality Control of Data falling under Rejection Rule 4 is the
cross-checking of analytical results obtained for one sample (or, sometimes, for a
sequence or a group of samples belonging together, e.g. a soil profile or parts of one
plant). Certain combinations of data can be considered impossible or highly suspect. For
instance, a pH value of 8 and a carbonate content of zero is a highly unlikely combination
in soils and should arouse enough suspicion for closer examination and possibly for
rejection of either or both results. A number of such contradictions or improbabilities can
be built into computer programs and used in automatic cross-checking routines after
results are entered into a database. Ideally, these cross-checks are built into a LIMS
(Laboratory Information Management System) used by the laboratory. While all LIMSes
have options to set ranges within which results of attributes are acceptable, cross-checking
of attributes is not a common feature. An example of a LIMS with cross-checks for soil
attributes is SOILIMS.
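
As an illustration of such a cross-check, a minimal sketch follows; the attribute names and the pH threshold are illustrative assumptions made here, not taken from SOILIMS:

    def cross_check(record):
        # Flag internally inconsistent attribute combinations in one sample.
        flags = []
        pH = record.get("pH")
        carbonate = record.get("CaCO3_percent")
        if pH is not None and carbonate is not None:
            if pH > 7.5 and carbonate == 0:    # illustrative threshold
                flags.append("high pH with zero carbonate: suspect")
        return flags

    print(cross_check({"pH": 8.0, "CaCO3_percent": 0.0}))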

Most models of control charts accommodate 30 entries. When a chart is full, a new chart
must be started. On the new chart the parameters of the just completed old chart need to
be filled in. This is shown in Fig. 8-2. Calculate the "Data this chart" of the old chart and fill
these in on the old chart. Perform the two-sided F-test and t-test to check if the
completed chart agrees with the previous data. If this is the case, calculate "Data all
charts" by adding the "Data this chart" to the "Data previous charts". These newly
calculated "Data all charts" of the completed old chart are the "Data previous charts" of the
new chart. Using these data, the new chart can now be initiated by drawing the new
control lines as described in 8.3.2.2.
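
A minimal sketch of this chart-closing check, assuming the common pooled-variance t-test and using scipy for the critical values (the function and its interface are choices made here):

    import math
    from scipy import stats

    def charts_agree(m1, s1, n1, m2, s2, n2, alpha=0.05):
        # Two-sided F-test on the standard deviations, then t-test on the means.
        F = max(s1, s2) ** 2 / min(s1, s2) ** 2
        df_num = (n1 if s1 >= s2 else n2) - 1
        df_den = (n2 if s1 >= s2 else n1) - 1
        if F > stats.f.ppf(1 - alpha / 2, df_num, df_den):
            return False                       # precision has changed
        sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        t = abs(m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
        return t <= stats.t.ppf(1 - alpha / 2, n1 + n2 - 2)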

Shift

In the rare case that the F-test and/or the t-test will not allow the data of a completed
control chart to be incorporated in the set of previous data, there is a problem. This has to
be resolved before the analysis of the attribute in question can be continued. As indicated
above, such a change or shift may have various causes, e.g. introduction of new
equipment, instability of the control sample, use of a wrong standard, wrong execution of
the method by a substitute analyst. Also, when there is a considerable time interval
between batches such a shift may occur (mind the expiry date of reagents!). However,
when the control chart is inspected in a proper and consistent manner, usually such errors
are discovered before they are revealed by the F and t-test.

Drift

A less conspicuous and therefore perhaps greater danger than incidental errors or shifts is
a gradual change in accuracy or precision of the results. An upward or downward trend
or drift of the mean or a gradual increase in the standard deviation may be too small to be
revealed by the F or t-test but may be substantial over time. Such a drift could be
discovered if a control chart were much longer, say some hundreds of observations. A way
to imitate this extension of the horizontal scale is to make a "master" control chart with the
values of x and s of the normal control charts. Such a compressed control chart could be
referred to as "Control Chart of the Trend" and is particularly suitable for a visual
inspection of the trend. An upward trend can be suspected in Figure 8-2. Indeed, the mean
of the first fifteen entries is 10.59 vs. 10.97 cmolc/kg for the last fifteen entries, implying a
relative increase of about 3.5%. This indicates that the further trend has to be watched
closely.

The main cause of drift is often instability of the control sample, but other causes, such as
deterioration of reagents and equipment, must be taken into account. Whatever the cause,
when discovered it should be traced and rectified. And here too, if necessary, already
released results may have to be corrected.

New Control Sample

When a control sample is about to run out, or must be replaced because of instability, or
for any other reason, a new control sample must be prepared in good time so that it can be
run concurrently with the old control sample for some time. This allows a smooth start
without interrupting the analytical programme. As indicated previously, the more initial data
are obtained the better (with a minimum of 10), but ideally a complete control chart should
be made.

8.3.3 Control Chart of the Range of Duplicates (Range Chart)

8.3.3.1 Principle
8.3.3.2 Range chart of Control Sample
8.3.3.3 Starting the first chart
8.3.3.4 R-chart of Test Samples

Between-batch precision (within-laboratory reproducibility, see 7.5.2.3) can be inspected
visually on the Control Chart of the Mean; a "noisy" graph with frequent and large
fluctuations indicates a lower precision than a smooth graph.

Information about the within-batch precision (repeatability, see 7.5.2.2) can only be
obtained by running duplicate analyses in the same batch. For this purpose both test
samples and control samples can be used, but the latter are somewhat more convenient.
The obtained data are plotted on a Control Chart of the Range of Duplicates (also
called Range Chart or R-chart).

8.3.3.1 Principle

In each batch of test samples at least one sample is analyzed in duplicate and the
difference between the results is plotted on the control chart of the attribute concerned.
The basic construction of such a Control Chart of the Range of Duplicates is given in
Figure 8-3. It shows similarities with the Control Chart of the Mean in that now a mean of
differences is calculated with a corresponding standard deviation. The warning line and
control line can be drawn at 2s and 3s distance from the mean of differences. The graph is
single-sided as the lowest observable value of the difference is zero.

Fig. 8-3. Control Chart of the Range of Duplicates. ¯R = mean of the range of duplicates.
WL = Warning Limit. CL = Control Limit (or Action Limit).

8.3.3.2 Range chart of Control Sample

The simplest way of controlling precision is by running duplicates of a control sample in
each batch. The advantage is that this can be directly connected to the use of single
values as applied for the Control Chart of the Mean, by simply running two subsamples of
the same control sample simultaneously. A disadvantage is that precision is measured at
one concentration level only (unless more than one control sample is used). The
duplicates should be placed at random positions in the batch, not adjacent to each other.
The necessary statistical parameters for the Range Chart, ¯R and sR, can be determined
as follows:

¯R = ΣRi/m    (8.5)

where

¯R = mean difference between duplicates
ΣRi = sum of (absolute) differences between duplicates
m = number of pairs of duplicates

and

sR = √(ΣRi²/2m)    (8.6)

where

sR = standard deviation of the range of all pairs of duplicates.

Fig. 8-4. A filled-out control chart of the range of duplicates of a control sample.

Note 1. Equation (8.6) is equivalent to Equation (7.21). This standard deviation is
somewhat different from the common standard deviation of a set of data (Eq. 6.2): it
results from pooling the standard deviations of the pairs, on the assumption that the
duplicates of all pairs have the same population standard deviation.

Note 2. If it is decided to routinely run the control sample in duplicate in each batch as
described here, a different situation arises with respect to the Mean Chart since now two
values for the control sample are obtained instead of one. These values are of equal
weight and, therefore, their mean must be used as an entry. It is important to note that the
parameters of the thus obtained Mean Chart, particularly the standard deviation, are not
the same as those obtained using single values. Hence, these two types should not be
mixed up, nor be compared by means of the F-test!

8.3.3.3 Starting the first chart

Initiating a Control Chart of the Range of Duplicates is identical to initiating a Control Chart
of the Mean as discussed in Section 8.3.2.2. Also the model of the chart is virtually
identical with only x replaced by ¯R. The parameters ¯R and sR are determined for at least
10 initial pairs of duplicates as given in Table 8-2 as an example. A control chart with these
initial parameters is given in Fig. 8-4.

The interpretation rules of the Range Chart are very similar to those of the Mean Chart:

Warning rule:

- One control observation beyond Warning Limit

Rejection rules:
- One control observation beyond Control (or Action) Limit
- Two successive control observations beyond Warning Limit
- Ten successive control observations beyond ¯R. (Some apply six.)

The response to violation of the rejection rules is also similar: repeat the analysis and
investigate the problem if the repeat is not satisfactory.

The procedure to initiate a new chart when the present one is full is identical to that
described for the Control Chart of the Mean.

Example

Table 8-2. CEC values (in cmolc/kg) of a control sample determined in duplicate to
calculate initial values of ¯R and sR of the control chart of duplicates.

Duplicate 1  Duplicate 2  R
10.1   9.7   0.4
10.7   10.2  0.5
10.5   11.1  0.6
9.8    10.3  0.5
9.0    10.1  1.1
11.0   10.6  0.4
11.5   10.7  0.8
10.9   9.5   1.4
8.9    9.4   0.5
10.0   9.6   0.4
Mean:  10.24  10.13  ¯R: 0.66
s:     0.85   0.74   sR: 0.52
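
A minimal Python sketch of Eqs. (8.5) and (8.6) applied to the Table 8-2 pairs:

    import math

    pairs = [(10.1, 9.7), (10.7, 10.2), (10.5, 11.1), (9.8, 10.3), (9.0, 10.1),
             (11.0, 10.6), (11.5, 10.7), (10.9, 9.5), (8.9, 9.4), (10.0, 9.6)]

    R = [abs(a - b) for a, b in pairs]
    m = len(pairs)

    R_mean = sum(R) / m                                # Eq. (8.5)
    s_R = math.sqrt(sum(r * r for r in R) / (2 * m))   # Eq. (8.6)
    print(round(R_mean, 2), round(s_R, 2))             # 0.66 0.52
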
8.3.3.4 R-chart of Test Samples

A limitation of the use of duplicates of a control sample to verify precision is that this may
not fully reflect the precision of the analysis of test samples as these may appreciably
deviate from the control sample both in matrix and in concentration or capacity of the
attribute concerned. The most convenient way to meet this problem is to use more than
one control sample with different concentrations of the attribute, each with their own
control chart as described above. Another way is to use test samples instead of control
samples. However, also in this case duplicates may be chosen at non-representative
analyte levels unless the level per batch is rather uniform. Alternatively, all samples are
run in duplicate but this is not commonly done in routine analysis and is usually only
affordable in special research cases.

When test sample duplicates are preferred two situations can be distinguished:

1. Analyses with a (near-)constant relative standard deviation;


2. Analyses with a non-constant relative standard deviation.

Although commonly occurring, the second case is rather complicated for routine work and
will therefore not be treated here.
Constant Relative Standard Deviation

If a constant relative standard deviation (CV or RSD) can be assumed, which may often be
the case over certain limited working ranges of concentration, one (or more) test sample(s)
in a batch can be analyzed in duplicate instead of a control sample. A constant RSD would
result in a control chart as schematically given in Figure 8-5, which is very similar to Fig. 8-
3. Because the standard deviation is assumed proportional to the analytical result, this
applies to the difference between duplicates as well. Therefore, the vertical scale must be
normalized, i.e. the (absolute) value found for R of each pair of duplicates has to be
divided by the mean of the two duplicates (and multiplied by 100% if a % scale is used
rather than a fraction scale). The interpretation rules and calculations of parameters when
a chart is full are again identical to those discussed above for the Control Chart of the
Mean.
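
A minimal sketch of the normalization step (the pair of values shown is hypothetical):

    def normalized_range(x1, x2):
        # Range of a duplicate pair divided by the pair mean, expressed in %.
        return 100.0 * abs(x1 - x2) / ((x1 + x2) / 2.0)

    print(round(normalized_range(10.5, 10.1), 1))   # 3.9 (%)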

Fig. 8-5. Control Chart of the Normalized Range of Duplicates. CV = coeff. of variation;
other symbols as in Fig. 8-3.

8.3.4 Automatic preparation of control charts

Obviously, in large laboratories with hundreds of analyses per day, much (if not all) of the
above discussed control work is usually done automatically by computer. This can be
programmed by the laboratory personnel but commercial programs are available which are
usually connected to or incorporated in the LIMS (Laboratory Information Management
System, see 8.7). For small and medium-sized laboratories (and also for large laboratories
starting with control work of new tests or analyses), the manual use of charts, where
possible with computerized calculations, is recommended.
8.4 Preparation of a Control Sample

8.4.1 Collection and treatment of soil material
8.4.2 Collection and treatment of plant material
8.4.3 Stability
8.4.4 Homogeneity

In the previous sections reference was frequently made to the "Control Sample". It was
defined as:

"An in-house reference sample for which one or more property values have been
established by the user laboratory, possibly in collaboration with other laboratories."

This is the material a laboratory needs to prepare for second-line (internal) control in each
batch and the obtained results of which are plotted on Control Charts. The sample should
be sufficiently stable and homogeneous for the properties concerned.

From the foregoing it must have become clear that the control sample has a crucial
function in quality control activities. For most analyses a control sample is indispensable.
In principle, its place can be taken by a (certified) reference sample, but these are
expensive and for many soil and plant analyses not even available. Therefore, laboratories
have to prepare control samples themselves or obtain them from other laboratories.

Because the quality control systems rely so heavily on these control samples their
preparation should be done with great care so that the samples meet a number of criteria.
The main criteria are:

1. The sample is homogeneous

2. The material is stable

3. The material has the correct particle size (i.e. passed a prescribed sieve)

4. The relevant information on properties and composition of the matrix, and the
concentration of the analyte or attribute concerned is available.

The preparation of a control sample is usually fairly easy and straightforward. As an
example it will be described here for a "normal" soil sample (so-called "fine earth") and for
a ground plant sample.

8.4.1 Collection and treatment of soil material

Select a location for the collection of suitable and sufficient material. The amount of
material to be collected depends on the turn-over of the sample material, the expected
stability and the amount that can be handled during preparation. Thus, amounts may
range from a few kilos to a hundred kilos or more.

The material is collected in plastic bags and spread out on plastic foil or in large plastic
trays in the institute for air-drying (do not expose to direct sunlight; forced drying up to
40°C is permitted). Remove large plant residues. After drying, pass the sample through a 2
mm sieve. Clods not passing through the sieve are carefully crushed (not ground!) with a
pestle and mortar or in a mechanical breaker. Gravel, rock fragments, etc. not passing
through the sieve are removed and discarded. The material passing through the sieve is
collected in a bin or vessel for mechanical homogenization. If the whole sample has to be
ground to a finer particle size this can be done at this stage. If only a part has to be ground
finer, this should be done after homogenization. Homogenization may be done with a
shovel or any other instrument suitable for this purpose. Some laboratories use a concrete
mixer. Mixing should be intensive and complete. After that, the bulk sample is divided into
subsamples of 0.5 to 1 kg to be used in the laboratory. For this, riffle samplers and sample
splitters may be used.

The subsamples can be kept in glass or plastic containers. The latter have the advantage
that they are unbreakable. Both have the disadvantage that fine particles may be
electrostatically attracted to the container walls, thus causing segregation. The rule about
labelling is that it should preferably be done on both the container and the lid. If only one
label is used it should always be stuck on the container and not on the lid!

Note. As suggested in Chapter 9 (see 9.3.1), a useful control sample may also be obtained
by having a bulk sample prepared and characterized by an interlaboratory sample
exchange organization.

8.4.2 Collection and treatment of plant material

Select plant material with the desired or expected composition. Realize that the
composition of different parts of a plant (leaf, stem, flower, fruit) may differ considerably
and that, in general, the control sample should match the test samples as much as
possible.

If the fresh material is contaminated (e.g. by soil, salts, dust) it needs to be washed with
tap water or dilute (0.1 M) hydrochloric acid followed by deionized water. For test samples,
to minimize the change of concentration of components, this washing should be done in a
minimum of time, say within half a minute. For the preparation of a control sample this is
less critical.

The sample is dried at 70°C in a ventilated drying oven for 24 hours. The sample is then
cut and ground to pass a 1 mm sieve. Storage can be done as described for soil samples.

Note. During the pretreatment (drying, milling, sieving) both soil and plant material may be
contaminated by the tools used. In this way the concentration of certain elements (Cu, Fe,
Al, etc., see 9.4) may be increased. Like the washing procedure, this problem is less
critical for control samples than for test samples (unless the contamination is present as
large particles).
8.4.3 Stability

No general statement can be given about the stability of the material. Although dried soil
and plant material can be kept for a very long time or even, in practice, indefinitely under
favourable conditions, it must be realized that some natural attributes may still (slowly)
change, that samples for certain analyses may not be dried and that certainly many
"foreign" components such as petroleum products, pesticides or other pollutants change
with time or disappear at varying unknown rates. Each sample and attribute has to be
judged on this aspect individually. Control charts may give useful information about
possible changes during storage (trends, shifts).

8.4.4 Homogeneity

For quality control it is essential that a control sample is homogeneous so that subsamples
used in the batches are "identical". In practice this is impossible (except for solutions), and
the requirement can be reduced to the condition that the (sub)samples statistically belong
to the same population. This implies a test for homogeneity to prove that the daily-use
sample containers (the laboratory control samples) into which the bulk sample was split up
represent one and the same sample. This can be done in various ways. A relatively simple
procedure is described here.

Check for homogeneity by duplicate analysis

For the check for homogeneity the statistical principles of the two control charts discussed
in Section 8.3, i.e. for the Mean and for the Range of Duplicates, are used. The laboratory
control samples, prepared by splitting the bulk sample, are analyzed in duplicate in one
batch. The analysis used is arbitrary. Usually a rapid, easy and/or cheap analysis suffices.
Suitable analyses for soil material are, for example, carbon content, total nitrogen, and
loss-on-ignition. For plant samples total nitrogen, phosphorus, or a metal (e.g. Zn) can be
used.

The organization of the test is schematically given in Fig. 8-6. As stated before, statistically
this test only makes sense when a sufficient number of sample containers is involved
(n ≥ 7). Do not use too small a sample for the analysis, as this will adversely affect the
representativeness, resulting in an unnecessarily high standard deviation.

Note. A sample may prove to be homogeneous for one attribute but not for another.
Therefore, fundamentally, homogeneity of control samples should be tested with an
analysis for each attribute for which the control sample is used. This is done for certified
reference samples but is often considered too cumbersome for laboratory control samples.
On the other hand, such an effort would have the additional advantage that useful
information about the procedure and laboratory performance is obtained (repeatability).
Also, such values can be used as initial values of control charts.

Check on the Mean (sample bias)

This is a check to establish if all samples belong to the same population. The means of the
duplicates are calculated and treated as single values (xi) for the samples 1 to n. Then,
using Equations (6.1) and (6.2), calculate ¯x and s of the data set consisting of the means
of duplicates (include all data, i.e. do not exclude outliers).

Fig. 8-6. Scheme for the preparation and homogeneity test of control samples.

The rules for interpretation may vary from one laboratory to another and from one attribute
to another. In general, values beyond ± 2s from the mean are considered outliers and
rejected. The sample container concerned may be discarded or analyzed again, after which
the result may well fall within ¯x ± 2s and be accepted or, otherwise, the subsample may
now definitely be discarded.

Check on the Range (sample homogeneity)

This is a check to establish if all samples are homogeneous. The differences R between
duplicates of each pair are calculated (include all data, i.e. do not exclude outliers). Then
calculate ¯R and sR of the data set using Equations (8.5) and (8.6) respectively. The
interpretation is identical to that for the Check on the Mean as given in the previous
paragraph.

Thus, a laboratory control sample container may have to be discarded on two grounds:

1. because it does not sufficiently represent the level of the attribute in the control sample
and
2. because it is internally too heterogeneous.
The preparation of a control sample including a test for homogeneity should be laid down
in a SOP.

Example

In Table 8-3 an example is given of a check for homogeneity of a soil control sample of 5
kg which was split into ten equal laboratory control samples of which the loss-on-ignition
was determined in duplicate.

The loss-on-ignition can be determined as follows:

1. Weigh approx. 5 g sample into a tared 30 mL porcelain crucible and dry overnight at
105°C.

2. Transfer crucible to desiccator to cool; then weigh crucible (accuracy 0.001 g).

3. Place crucibles in furnace and heat at 900°C for 4 hours.

4. Allow furnace to cool to about 100°C, transfer crucible to desiccator to cool, then weigh
crucible with residue (accuracy 0.001 g).

Now, the weight loss between 105 and 900°C can be calculated and expressed in mass %
or in g/kg (weight basis: material dried at 105°C).
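
A minimal sketch of this calculation (the weighings shown are hypothetical):

    def loss_on_ignition(tare, dried, ignited):
        # Mass % lost between 105 and 900 degrees C, on a 105 degrees C dry-matter basis.
        # tare: empty crucible; dried: crucible + sample after 105 degrees C;
        # ignited: crucible + residue after 900 degrees C (all in grams).
        return 100.0 * (dried - ignited) / (dried - tare)

    print(round(loss_on_ignition(20.000, 25.000, 24.545), 2))   # 9.10 (%)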

Table 8-3. Results (in mass/mass %) of duplicate Loss-on-Ignition determinations (A and
B) on representative subsamples of ten 500 g laboratory samples of a soil control sample.

Sample  A     B     Mean   R

1   9.10  8.42  8.760  0.68
2   9.65  8.66  9.155  0.99
3   9.63  9.18  9.405  0.45
4   8.65  8.89  8.770  0.24
5   8.71  9.19  8.950  0.48
6   9.15  8.93  9.040  0.22
7   8.71  8.97  8.840  0.26
8   8.59  8.78  8.685  0.19
9   8.86  9.12  8.990  0.26
10  9.04  8.75  8.895  0.29
Mean:       8.949   ¯R: 0.406
s:          0.214*  sR: 0.334**

(* using Eq. 6.2; ** using Eq. 8.6)

Tolerance range for mean of duplicates (¯x ± 2s):

8.949 ± 2 × 0.214 = 8.52-9.38%

Tolerance range for difference R between duplicates (¯R ± 2sR):

0.406 ± 2 × 0.334 = 0 − 1.07% (the negative lower bound is truncated to zero)
In this example it appears that only the mean result of sample no. 3 (= 9.405%) falls
outside the permissible range. However, since this is only marginally so (less than 0.3%
relative) we may still decide to accept the sample without repeating the analysis.

The measure R for internal homogeneity falls for all samples within the permissible range.
(Should an R be found beyond the range we may opt for repeating the duplicate analysis
before deciding to discard that sample.)
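
Both checks can be reproduced with a short script. A minimal Python sketch using the Table 8-3 duplicates (the mean and standard deviation via the statistics module, Eqs. 8.5 and 8.6 as before):

    import math
    import statistics

    dups = [(9.10, 8.42), (9.65, 8.66), (9.63, 9.18), (8.65, 8.89), (8.71, 9.19),
            (9.15, 8.93), (8.71, 8.97), (8.59, 8.78), (8.86, 9.12), (9.04, 8.75)]

    means = [(a + b) / 2 for a, b in dups]
    R = [abs(a - b) for a, b in dups]
    m = len(dups)

    x_bar = statistics.mean(means)                     # 8.949
    s = statistics.stdev(means)                        # 0.214
    R_bar = sum(R) / m                                 # 0.406 (Eq. 8.5)
    s_R = math.sqrt(sum(r * r for r in R) / (2 * m))   # 0.334 (Eq. 8.6)

    bias = [i + 1 for i, v in enumerate(means) if abs(v - x_bar) > 2 * s]
    heterog = [i + 1 for i, r in enumerate(R) if abs(r - R_bar) > 2 * s_R]
    print(bias, heterog)                               # [3] [] : only sample 3 flagged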

8.5 Complaints
Errors that escaped detection by the laboratory may be detected or suspected by the
customer. Although this particular type of quality control may not be popular, it should in no
case be ignored and can sometimes even be useful. For dealing with complaints a
protocol must be made, with an accompanying Registration Form containing at least the
following items:

- name of client and date the complaint was received


- work order number
- description of complaint
- name of person who received the complaint (usually the head of laboratory)
- person charged with investigation
- result of investigation
- name of person(s) who dealt with the complaint
- an evaluation and possible action
- date when report was sent to client

A record of complaints should be kept; the documents involved may be kept in the work
order file. The tracing of events (audit trailing) may sometimes not be easy, and particularly
in such cases the proper registration of all laboratory procedures involved will prove to be
of great value.

Note. Registration of procedures formally also applies to work that has been put out to
contract to other laboratories. When work is put out, the quality standards of the
subcontractor should be (demonstrably) satisfactory since the final responsibility towards
the client lies with the laboratory that put out the work. If the credibility needs to be verified
this is usually done by inserting duplicate and blind samples.

8.6 Trouble-shooting
Whenever the quality control detects an error, corrective measures must be taken. As
mentioned earlier, the error may be readily recognized as a simple calculation or typing
error (decimal point!) which can easily be corrected. If this is not the case, then a
systematic investigation must take place. This includes the checking of sample
identification, standards, chemicals, pipettes, dispensers, glassware, calibration
procedure, and equipment. Standards may be old or wrongly prepared, adjustable pipettes
may indicate a wrong volume, glassware may not be cleaned properly, equipment may be
dirty (e.g. clogged burner in AAS), or faulty. Electrodes in particular can be a source of
error: they may be dirty, and their life-time must be watched closely. A pH electrode may
seemingly respond well to calibration buffer solutions but still be faulty.

Clearly, every analytical procedure and instrument has its own characteristic weaknesses;
these become known by experience, and it is useful to make a list of such relevant check
points for each procedure and append it to the corresponding SOP or, if it concerns an
instrument, to the maintenance logbook. Update this list when a new flaw is discovered.

Trouble-shooting is further discussed in Section 9.4.

8.7 LIMS

8.7.1 Introduction
8.7.2 What is a LIMS?
8.7.3 How to select a LIMS

8.7.1 Introduction

The various activities in a laboratory produce a large number of data streams which have
to be recorded and processed. Some of the main streams are:

- Sample registration
- Desired analytical programme
- Work planning and progress monitoring
- Calibration
- Raw data
- Data processing
- Data quality control
- Reporting
- Invoicing
- Archiving

Each of these aspects requires its own typical paperwork most of which is done with the
help of computers. As discussed in previous chapters, it is the responsibility of the
laboratory manager to keep track of all aspects and tie them up for proper functioning of
the laboratory as a whole. To assist him in this task, the manager will have to develop a
working system of records and journals. In laboratories of any appreciable size, even those
with only a few analysts, this can be a tedious and error-prone job. Consequently,
from about 1980, computer programs appeared on the market that could take over much
of this work. Subsequently, the capability of Laboratory Information Management
Systems (LIMS) has been further developed, and their price has increased likewise.
The main benefit of a LIMS is a drastic reduction of the paperwork and improved data
recording, leading to higher efficiency and increased quality of reported analytical results.
Thus, a LIMS can be a very important tool in Quality Management.

8.7.2 What is a LIMS?

The essential element of a LIMS is a relational database in which laboratory data are
logically organized for rapid storage and retrieval. In principle, a LIMS plans, guides and
records the passage of a sample through the laboratory, from its registration, through the
programme of analyses, the validation of data (acceptance or rejection), before the
presentation and/or filing of the analytical results.

Hardware

Originally, LIMSes were installed on mainframe and minicomputers in combination with
terminals. However, with the advent of more powerful PCs, programs were developed that
could run on a single PC (single-user system) or on several PCs with a central one acting
as server (network, multi-user system). The more expensive systems allow advanced
automation of a laboratory by direct coupling of analytical instruments to the system.
Printers are essential parts of the system for label and bar code printing as well as for
graphs and reports.

Software

The LIMS software consists of two elements: the routines for the functional parts, and the
database. For the latter usually a standard database program is used (e.g. dBase, Oracle),
which can also be done for certain functional parts such as the production of graphs and
report generation.

The database is subdivided into a static and a dynamic part. The static part comprises the
elements that change only little with time, such as the definition of analytical methods,
whereas the dynamic part relates to clients, samples, planning, and results.

Function features

A number of common main features of a LIMS are the following:

- Registration of samples and assigned jobs with unique numbers and automatic label
production.

- Production of work lists for daily and long-term planning.

- Allows rapid insight into the status of work (pending jobs, backlog).

- Informs about laboratory productivity (per analysis, whole laboratory).

- Production of control charts and signalling of violation of control rules (results beyond
Action Limit, etc.).
- Flagging results beyond preset specifications.

- Generates reports and invoices.

- Archiving facility.

- Allows audit trailing (search for data, errors, etc.).

Data collection and subsequent calculations are usually done "outside" the LIMS, either
with a pocket calculator or, more commonly, on a PC with a standard spreadsheet
program (such as Lotus 123) or with one supplied with the analytical instrument. The data
are then transferred manually or, preferably, by wire or diskette to the LIMS. The larger
LIM systems usually have an internal module for this processing.

A major problem with the application of a LIMS is the installation and the associated
customization to the specific needs of a laboratory. One of the first questions asked (after
asking for the price) is: 'can I directly connect my equipment to the LIMS?'. Invariably the
answer of the vendor is positive, but the problems involved are usually concealed or
unjustly trivialized. It is not uncommon that installations take more than a year before the
systems are operational (not to speak of complete failures), and sometimes the
performance falls short of the expectations because the operational complexity was
underestimated.

Mentioning these problems is certainly not meant to discourage the purchase of a LIMS.
On the contrary, the use of a LIMS in general can be very rewarding. It is rather intended
as a warning that the choice for a system must be very carefully considered.

8.7.3 How to select a LIMS

When it is considered that a computerized system might improve the management of the
laboratory information data flow, a plan for its procurement must be made. The most
important activities prior to the introduction of a LIMS are the following:

- Set up LIMS project team. Include a senior laboratory technician, the future system
manager and someone from the computer department.

- Review present procedures and workload.

- Consider if a LIMS can be useful.

- Define what the system must do and what it may cost (make a cost/benefit assessment).
The cost/benefit assessment is not always straightforward, as certain benefits are difficult
to assess or to express in money (e.g. improved data quality; changing work attitude).
Also, a LIMS may be needed as a training facility for students.

When a decision is made that a LIMS project is viable, the team must define the
requirements and consider the two ways to acquire a LIMS: either by building a system
in-house or by purchasing one.

Many in-house systems are not premeditated but result from a gradual build-up of small
programs written for specific laboratory tasks such as the preparation of work lists or data
reports. The advantage is that these programs are fully customized. The disadvantage is
that, lacking an initial master plan, they are often not coupled or integrated into an overall
system, which then takes extra effort. Yet, many laboratories employ such "systems". The
general rule is that if a suitable commercial package can be found, it is not economical to
build a system from scratch, as this is both a complicated and time-consuming process.

The purchase of a commercial LIMS should be a well-structured exercise, particularly if a
large and expensive system is considered. Depending on the capabilities, prices for
commercial systems range from roughly USD 25,000 to 100,000 or even higher. The next
steps to be taken are:
steps to be taken are:

- Identify LIMS vendors.

- Compare requirements with available systems.

- Identify suitable systems and make shortlist of vendors.

- Ask vendors for demonstrations and discuss requirements, possible customization,
installation problems, training, and after-sales support.

- If possible, contact user(s) of candidate-systems.

After comparing the systems on the shortlist the choice can be made. By way of
precaution, it may be wise to start with a "pilot" LIMS, a relatively cheap single-user
system in part of the laboratory in order to gain experience and make a more considered
decision for a larger system later.

It is essential that all laboratory staff are involved and informed right from the start as a
LIMS may be considered meddlesome ('big brother is watching me') possibly arousing a
negative attitude. Like Quality Management, the success of a LIMS depends to a large
extent on the acceptance by the technical staff.

Remark: Useful information can be obtained from discussions by an active LIMS working
group of several hundreds of members on the Internet. To subscribe to the (free) mailing list
send an e-mail message to: listproc@govonca2.gov.on.ca stating after Subject: subscribe
lims ............ (fill in your name).

Another site one may try is: http://www.limsource.com.

SOILIMS

An example of a low-budget and simple stand-alone LIMS specially built for small to medium-sized
soil, plant and water laboratories is SOILIMS. It is a user-friendly system which is easily installed
and learned (the manual contains a Tutor) and can be used immediately after installation. Although
the system has about 100 analyses in the standard configuration, it can be further customized (and
re-customized later) by the supplier. A unique feature is that more than a dozen different cross-
checks can automatically be performed in order to screen soil data for internal inconsistencies:
when "anomalies" occur, the data concerned are flagged for closer inspection before they are
released (anomalies do not necessarily imply errors in all cases). An attractive feature is its price,
which is comparable to that of a bench-top pH meter. The main features are the following, while the
system's main menu is given below*.

- Unambiguous registration by automatic assignment of unique work order and laboratory sample
numbers.

- Possibility of priority assignments by deadline definition.

- Flexibility to alter work order requests and deadlines.

- Time-saving routine for sample label production.

- Protection of data against non-authorized users.

- Backlog reporting.

- Detailed information regarding the status of pending work orders.

- Production of work lists provides the manager with complete and accurate information for fast
decision making.

- Allows for many control samples.

- Manual or automatic data input (direct ASCII file reading).

- Second-line control by automatic verification of control sample results in Control Charts,

- Unique capabilities for cross-checking data ("artificial intelligence").

- Increased efficiency by easy production of reports and invoices.

- Data export facilities to LOTUS 123 or text editors.

- Easy-to-use automatic archival procedures.

- Audit trail capabilities for specified samples, clients, work orders, or laboratory personnel.

- Stand-alone and single-user network version.

- Option for plant and water analysis included.

- Millennium proof.
* For more information contact: ISRIC, P.O. Box 353, 6700 AJ Wageningen, the Netherlands.
E-mail: laboratory@isric.nl

Minimum required hardware: IBM PC (or compatible) 386 SX with 4Mb RAM.

Figure

SOPs

Model: Mean Chart
Model: Range Chart
Model: Combined Mean Chart and Range Chart

9 EXTERNAL QUALITY CONTROL OF DATA

by L.P. van Reeuwijk and V.J.G. Houba*

* Part of the information in this chapter was drawn from: V.J.G. Houba and J.J. van der
Lee (1995) and Houba et al. (1996).

9.1 Introduction
9.2 Check-analyses by another laboratory
9.3 Interlaboratory sample and data exchange programmes
9.4 Trouble-shooting
9.5 Organization of interlaboratory test programmes
9.6 Quality audit
9.1 Introduction
The quality control of data discussed in the preceding chapter is restricted to internal
control. The processes should be monitored closely to see if any unacceptable deviations
occur with respect to the situation in the previous period(s) where everything was
considered to be under control. However, this control is only relative to the laboratory's
own data and may therefore leave a serious bias of the analytical results undetected.

There are several ways to avoid or to discover systematic errors:

1. Use of spikes or pure analytes, e.g. calcium carbonate, gypsum, solutions of pure
chemicals (see 7.5.6).

2. Use of independent standards or standard solutions (see 7.2.4).

3. Analyzing (certified) reference samples (see 7.5.1).

4. Exchange of samples with another laboratory or having some own samples analyzed by
another laboratory.

5. Participation in interlaboratory sample exchange programmes (round robin tests).

The first three items have been discussed in Chapter 7, and in the ensuing paragraphs
attention will be focused on the latter two means of quality control.

9.2 Check-analyses by another laboratory

9.2.1 Single value - single value check
9.2.2 Replicate data - single value check
9.2.3 Replicate data - replicate data check

If an error in a procedure is suspected and the uncertainty cannot readily be resolved, it is
not uncommon to have one or more samples analyzed by another laboratory for
comparison. This is usually a related laboratory in the neighbourhood ("neighbourly help")
or one belonging to the same umbrella organization as the laboratory itself. Sometimes,
reputable laboratories elsewhere need to be consulted.

An inherent disadvantage of this procedure is that the results of the other laboratory may
themselves be biased. To eliminate this, the check may have to be extended to more
laboratories and has then, in fact, become a comparative interlaboratory study (see 9.3).

Three types of data comparison may be distinguished:

1. A single value of a laboratory is compared with a single value of another laboratory.
2. Replicate values of a laboratory are compared with a single value of another.
3. Replicate values of a laboratory are compared with replicate values of another.

9.2.1 Single value - single value check

If the test entails a simple comparison of two single values then the bias can easily be
calculated with Equation (7.15) or (7.16). However, it should be realized that each single
value carries a confidence range of ±2s (s = standard deviation of the method; see 6.3.4)
so that there is a considerable chance of a false conclusion both in a positive and a
negative way. Thus, although such a test may be informative in some cases, it can hardly
be qualified as GLP.

9.2.2 Replicate data - single value check

This is a situation where replicate results are compared with a single value from another
laboratory or with a target value not accompanied by a (meaningful) standard deviation,
e.g. a median value from different labs with different methods ("consensus value") derived
from a proficiency test. For the test for significance, the two-sided t-test can be used as
expressed in Equation (6.12):

t = |¯x − μ|·√n / s    (6.12; 9.1)

where

¯x = mean of own results of a sample
μ = target value
s = standard deviation of own results
n = number of own results

Example:

We use a variant of the example previously given to calculate bias with Eq. (7.16).

The target value of the Cu content of a sample is 34.0 mg/kg (the standard deviation
of μ is unknown here, otherwise 9.2.3.1 below is applicable). The results from 15
replicates with the laboratory's own method are: ¯x = 31.6 mg/kg, and s = 5.6. Using
Equation (6.12) we calculate t = 1.66, which is less than the critical t-value (2.14, two-
sided, df = 14; see App. 1), so we accept the null hypothesis and conclude that no
significant difference is found between the target value and the results obtained by the
laboratory (at the 95% confidence level and with the number of replicates used).
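
A minimal sketch of this check (scipy is used here to supply the critical t-value):

    import math
    from scipy import stats

    x_bar, mu, s, n = 31.6, 34.0, 5.6, 15       # own mean vs. target value

    t = abs(x_bar - mu) * math.sqrt(n) / s      # Eq. (9.1)
    t_crit = stats.t.ppf(0.975, df=n - 1)       # two-sided, 95% confidence
    print(round(t, 2), round(t_crit, 2))        # 1.66 2.14
    print("bias significant" if t > t_crit else "no significant bias")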

9.2.3 Replicate data - replicate data check


9.2.3.1 Comparison of replicate results on one sample
9.2.3.2 Comparison of replicate results on multiple samples

Statistically, the most reliable comparison for bias is made between data resulting from
replicate determinations. Now, two different kinds of check can be distinguished:

1. Comparison of replicate results on one sample.
2. Comparison of replicate results on multiple samples.

9.2.3.1 Comparison of replicate results on one sample

The message from the previous sections is clearly that if another laboratory is asked to
perform a bias check, one should preferably ask for at least a duplicate determination.
More replicates would further increase the confidence but to a decreasing extent (see
6.3.4). The test for significance of the bias is again a two-sided t-test as discussed with
examples in Section 6.4.3.

9.2.3.2 Comparison of replicate results on multiple samples

This kind of data comparison cannot be considered a "quick check" as considerable work
in both laboratories is involved. If the check is limited to a determination on two or three
samples, for comparison the two-sided t-test can be used for each sample individually (as
above in 9.2.3.1). If more than three samples are involved, the paired t-test can be
considered (for examples see 6.4.3.4) and for more than six samples linear regression is
indicated (for example see 6.4.4.2).

If, less commonly, precision of an analysis needs to be checked with another laboratory, at
least seven replicates by both laboratories are recommended to allow for a reliable F-test
(see 6.4.2).

9.3 Interlaboratory sample and data exchange programmes

9.3.1 Types of interlaboratory programmes
9.3.2 Proficiency testing
9.3.3 Examples: ISE and IPE

A laboratory which claims that it produces quality data should participate in at least one
interlaboratory exchange programme. Accredited laboratories have to provide evidence
that they are successfully participating in such a scheme of good national or international
repute (these schemes themselves may be accredited).
9.3.1 Types of interlaboratory programmes

Various types of programmes are in operation among laboratories locally, regionally,
nationally and internationally, as well as within umbrella organizations. Before joining a
scheme the purpose of participation must be clear in order to make a sound choice. The
following operational types can be distinguished:

1. Method-performance studies
1.1 Collaborative study: establishing the performance characteristics of an analytical
method.
1.2 Comparative study: comparing analytical methods by comparing the results they yield.

2. Laboratory-performance studies

2.1 Proficiency test (one method): comparing the performance of laboratories on the basis
of the same analytical method.

2.2 Proficiency test (different methods): comparing the performance of laboratories by
comparing the results of their own methods.

3. Material-certification studies

3.1 Certification study: establishing benchmark values for components or properties of a
material.

3.2 Consensus study: establishing characteristic values for components or properties of a
material, for quality control.

The most common type in which laboratories participate for quality control is Type 2.2, the
proficiency test where laboratories receive samples to be analyzed according to their
normal procedures. Type 2.1 can run concurrently with Type 2.2 if sufficient participants
employ the same analytical method. The same applies to Type 3.2, where a sample, after having been analyzed by a large number of laboratories, may be used as a "reference" sample. Such material is particularly valuable for attributes for which no certified reference material (CRM) exists.

Note: This aspect may offer an attractive opportunity for laboratories to obtain a useful
control sample: Arrange with the organizers of a round robin test programme to have a
laboratory's own bulk sample used in a proficiency test. Part of the sample is used and the
remainder is returned. This opportunity is offered by the WEPAL programmes (see 9.3.3).

Most other study types are usually executed by invitation: the organizing body selects a number of laboratories to participate in a study, the results of which are made available to the whole laboratory community. For instance, Type 1.1 is aimed at validation of a method and may form the basis of an official national or international standard procedure. Type 3.1 is aimed at the preparation of CRMs.
9.3.2 Proficiency testing

Participation in interlaboratory exchange programmes allows an evaluation of the analytical performance of a laboratory by comparison with the results of other laboratories.
Both accuracy and precision can be tested with statistical parameters such as means,
standard deviations, repeatability and reproducibility emanating from the collected data. In
addition, these schemes can be a useful source of reference samples which can be put to
good use internally by participating laboratories.

The usual procedure is that subsamples of a large sample are sent to participating
laboratories at regular intervals. Often, subsamples of certain large samples are sent
repeatedly without the participants knowing this.

Depending on the material to be analyzed, the laboratories can follow their own analytical
procedures (Type 3.2) or can perform the analyses according to a detailed
extraction/destruction and measuring technique (Type 3.1). For example, to determine the
inorganic chemical composition of dried, ground crop material, one is interested in total
contents of components. In that case the laboratory results should tally, regardless of the
preprocessing and/or measuring techniques. If that is not the case, the analytical
procedures are incorrect. By contrast, determining total contents is rarely important when
analyzing the inorganic chemical composition of soils and sediments, except for geological
studies. For environmental and agronomic research one is much more interested in certain
fractions of these total contents. For most elements, for example, aqua regia digestion
yields only a part of the total contents. The magnitude of this part depends on the nature of
the samples and on the form in which the elements occur (adsorbed, occluded, in
minerals, etc.). In addition, there is a large choice of extractants which range from strong
acids to unbuffered weak electrolyte solutions or just water. Accordingly, one can find very
divergent values for the content of an element in the soil or sediment, depending on the
extraction potential of the solution used. The conditions for digestion and extraction
procedures must therefore always be stipulated in detail in a SOP.

When subsamples have been analyzed by participants for one or more attributes the
results are sent to the scheme's bureau. Here the data are processed and reports of each
round are sent to participants. After a number of rounds usually a more extensive report is
made since more data allow more and better statistical conclusions. Participants can
inspect their results and, when significant and/or systematic deviations are noticed, they
may take corrective action in the laboratory.

Although the samples usually are analyzed by a large number of laboratories, the results
should still be interpreted with caution. The analytical procedures used by participants may
differ considerably which may lead to bias and imprecision (also in the consensus value).
Even "true" values of certified reference samples, which were determined by a number of
selected renowned laboratories, have occasionally been proven to be wrong. It may even
be that some outliers are right and all other laboratories wrong. However, these are
exceptional cases which, in fact, only underscore the usefulness of interlaboratory
exchange programmes.

9.3.3 Examples: ISE and IPE

9.3.3.1 Data processing
9.3.3.2 Rating with t-value
9.3.3.3 Proficiency control chart
9.3.3.4 Rating with Z-score

As an example of schemes with a good international reputation we mention the International Plant Analytical Exchange (IPE) and the International Soil Analytical Exchange (ISE) programmes. These are the oldest parts of WEPAL, the Wageningen Evaluating Programmes for Analytical Laboratories of the Wageningen Agricultural University. IPE, with over 250 participants from some 80 countries, has been in operation since 1956. ISE, with more than 300 participants, was started in 1988. The operational procedures of WEPAL are outlined in the box at the end of this chapter.

9.3.3.1 Data processing

For each round, data are collected for attributes analyzed by participants. The "normal"
way of data treatment would be to calculate the mean and standard deviation and to
repeat this leaving out the data beyond ±2s. However, in proficiency tests and consensus
studies there is a preference for using the median value rather than the mean. The median
is the middle observation of the sorted array of observations in the case of an odd number
of observations. In case of an even number it is the mean of the two middle observations.
Using the median rather than the mean reduces the influence of extreme data.

9.3.3.2 Rating with t-value

For each attribute the median value (m1) and the median of absolute deviations (MAD1) are calculated. The MAD (like the standard deviation, a measure for the spread of the data) is the median of the absolute differences between each observation and the median.
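As a minimal sketch of these two statistics (in Python with NumPy; the values are the first seven results of the White Cabbage column of Table 9-1):

```python
# Median and MAD as used in the rating procedure below.
import numpy as np

x = np.array([32.8, 67.0, 69.1, 31.3, 9.0, 34.6, 36.8])  # reported results
m1 = np.median(x)                    # middle value of the sorted array
mad1 = np.median(np.abs(x - m1))     # median of absolute deviations from m1
print(f"median = {m1}, MAD = {mad1}")
```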

When more than seven observations for a certain attribute are reported by participants, the following rating procedure can be performed. All values x for which:

|x - m1| > f × MAD1    (9.2)

are flagged with a double asterisk (**). The factor f is aimed at flagging 5% of the data and, assuming a normal distribution, is approximated by (0.7722 + 1.604/n) × t, where t is the t-value in the two-sided 95% probability table with df = n - 1 (see Section 6.4.1, Fig. 6-2, and App. 1). This procedure is then repeated leaving out the data flagged with **, which yields a second median (m2) and a second MAD (MAD2). These values are substituted in Equation (9.2) and all results x now satisfying that equation (and not already flagged) are flagged with a single asterisk (*). An example of such a data set is given in Table 9-1.
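A sketch of the full two-pass flagging procedure could look as follows (assuming, since the text does not state it, that f is not recalculated for the second pass):

```python
# Two-pass outlier flagging per Equation (9.2).
import numpy as np
from scipy import stats

def flag(values):
    """Return '**', '*' or '' for each value, following Equation (9.2)."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    t = stats.t.ppf(0.975, df=n - 1)      # two-sided 95% t-value, df = n - 1
    f = (0.7722 + 1.604 / n) * t          # factor aimed at flagging 5% of the data

    m1 = np.median(x)
    mad1 = np.median(np.abs(x - m1))
    double = np.abs(x - m1) > f * mad1    # first pass: ** flags

    kept = x[~double]
    m2 = np.median(kept)                  # second median
    mad2 = np.median(np.abs(kept - m2))   # second MAD
    single = ~double & (np.abs(x - m2) > f * mad2)  # second pass: * flags

    return ["**" if d else "*" if s else "" for d, s in zip(double, single)]
```

Applied to a column of Table 9-1, this should broadly reproduce the ** and * flags shown there, apart from rounding.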

In this table there is a column for the MIC, the Method Indicating Code. With a maximum of
four characters, the analytical procedures used by the individual participants are indicated
to allow a better evaluation of the results. In this way, for instance, bias resulting from a
particular digestion procedure may be revealed. Also, the reproducibility (see 7.5.2.1) of a
particular method used by different participants can be calculated.

9.3.3.3 Proficiency control chart

When results do not significantly differ from the consensus mean, this does not necessarily imply that the analytical process is perfect: the observations may still systematically lie above or below the mean or median. A kind of "proficiency control chart" can be constructed to reveal this, using the relative deviation of the results from the median (or mean), i.e. the difference between the observation and the median expressed as a percentage of this median (cf. CV, RSD). Plotting this against time, and drawing the values of ±2 × relative MAD in the graph (in Fig. 9-1: lengths of vertical bars; comparable to the Warning Limits of the Control Chart of the Mean, see 8.3.2), allows a laboratory to see the position of its own values and whether there is a trend. In fact, the same quality control rules as used for the control chart of the mean can be applied.
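A sketch of the underlying computation (Python; the laboratory's own results are hypothetical, while the medians and MADs are taken from the second-pass values of Table 9-1):

```python
# Relative deviations for a proficiency control chart.
import numpy as np

own = np.array([34.0, 37.2, 210.0, 265.0, 36.5, 220.0])      # own results (hypothetical)
median = np.array([36.8, 35.6, 196.0, 288.0, 34.0, 205.0])   # consensus medians (Table 9-1)
mad = np.array([6.59, 5.00, 62.5, 35.0, 5.00, 55.0])         # MADs (Table 9-1)

rel_dev = 100 * (own - median) / median     # deviation as % of the median
warning = 2 * 100 * mad / median            # ±2 x relative MAD (warning limit)
for d, w in zip(rel_dev, warning):
    print(f"{d:+6.1f} %  (limit ±{w:.1f} %)  {'BEYOND' if abs(d) > w else 'ok'}")
```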

Fig. 9-1. Proficiency control chart for the determination of boron in a crop as found by a participant in IPE during 1994 (six samples per two-months' round). The length of the vertical bars equals 2 × MAD (as % of median) and can be considered the Warning Limit. Two values appear to be beyond this Limit.

Table 9-1. Example of data presentation: results for the Al content in crop samples
from IPE in Round 5, 1994 (in mg/kg).

Laboratory  White    White    Amaryllis  Maize  Potato  Broad    MIC
            Cabbage  Cabbage  (bulb)                    beans
A 32.8 30.6 293 351 29.1 278 AA|E
B 67.0* 76.0** 678** 1051** 38.0 544** DE|CB
C 69.1** 71.0** 441** 776** 39.0 343 AC|CB
D 31.3 27.5 196 311 24.7 260 EE|BF
E 9.0 41.0 284 352 34.0 306 EE|CB
F 34.6 36.4 290 336 30.3 309 DG|CB
G 36.8 36.0 176 262 32.0 166 -
H 86.5** 101.0** 354 353 43.0 353 DE|AB
I - 30.2 - - 47.7 - DA|CB
J 42.0 36.9 178 288 32.2 202 AA|CB
K 41.3 45.5 208 319 44.9 227 -
L 33.0 35.0 160 220 32.0 166 EE|CB
M 172.7** 154.2** 274 254 80.1** 206 G|CB
N 76.5** 152.0** 190 281 57.4* 192 DG|CB
O 45.0 36.0 172 293 24.0 204 DB|CB
P 36.2 38.1 133 291 34.5 - AA|CB
Q 47.3 45.8 252 293 46.5 221 -
R 37.0 36.0 195 220 77.9** 203 AB|AE
S 46.1 49.4 270 340 51.4* 294 AA|CB
T 26.1 26.7 274 332 21.9 274 -
U 89.0** 123.0** 382 459** 119.0** 406* DA|CB
V 35.3 35.6 119 284 45.8 150 G|AE
W 49.5 45.6 326 346 41.6 322 DG|CB
X 67.0* 70.0** 683** 993** 56.0* 500** -
Y 48.5 36.5 158 267 36.6 178 DB|CB
Z 22.1 21.3 112 184 27.9 123 AA|AA
AA 30.2 28.6 142 234 32.7 142 -
BB 67.1* - 593** 713** - 549** G|L
CC 23.0 32.0 134 243 30.0 138 DB|CB
DD 34.0 35.0 210 280 31.0 165 DC|CB
EE 46.4 27.4 280 253 37.9 213 -
FF 24.4 23.4 106 184 26.8 109 G|CB
GG 32.3 31.8 196 295 31.3 203 DB|CB
Median: (1) 40.2 36.2 209 293 35.5 213
(2) 36.8 35.6 196 288 34.0 205
MAD: (1) 8.10 8.15 69.0 45.0 6.95 63.0
(2) 6.59 5.00 62.5 35.0 5.00 55.0
9.3.3.4 Rating with Z-score

Individual rating of the proficiency of a laboratory can also be done with the normal deviate or so-called "Z-score", which is based on the bias relative to the mean of all laboratories:

Z = (x - x̄) / s    (9.3)

where

x = individual result
x̄ = mean of all results
s = standard deviation of all results

Before the mean is calculated, outliers flagged with ** and * as described above are
removed.

For easy visualization of Z, Figure 6-2 (p. 74) can be used: assuming a normally distributed collection of data, 5% of the Z-scores would fall outside the range -2 < Z < 2 (where x is more than 2s off from x̄) and only 0.3% outside the range -3 < Z < 3 (see also Note 2 below).

Hence, the following rating is usually employed:

|Z| ≤ 2 : satisfactory
2 < |Z| ≤ 3 : questionable
|Z| > 3 : unsatisfactory
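A minimal sketch of this rating (Python; the result set is hypothetical and assumed to have its ** and * outliers already removed):

```python
# Z-score and rating per Equation (9.3).
import numpy as np

results = np.array([32.8, 31.3, 34.6, 36.8, 42.0, 33.0, 45.0, 36.2, 41.3, 35.3])
own = 45.0                                    # this laboratory's result

z = (own - results.mean()) / results.std(ddof=1)

if abs(z) <= 2:
    rating = "satisfactory"
elif abs(z) <= 3:
    rating = "questionable"
else:
    rating = "unsatisfactory"
print(f"Z = {z:.2f}: {rating}")
```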

This property of Z allows the Z-score for each attribute to be recorded on a kind of control chart derived from the Control Chart of the Mean as discussed in Section 8.3.2. A model is given in Figure 9-2.

Note 1. Here, again, individual ratings should be used cautiously as the system is relative
to a consensus mean, outliers are not considered, and the data collection may not be
normally distributed.

Note 2. The value of Z equals the value of ttab when n is large, and is approx. 2 at 95%
confidence (two-sided).

Fig. 9-2. Model for a Z-score control chart for one attribute in six interlaboratory
control samples per round. The value with the arrow indicates an outlier off the
scale.
9.4 Trouble-shooting

Action must be taken when statistically significant deviations are scored, or when results
are consistently above or below the mean (see Rejection Rules). This holds both for the
internal control with control charts and for external control with round robin tests. The
difference is that the results of the round robin tests always come with a time-lag: you
cannot immediately repeat a batch or correct a problem. Clearly, corrective action must be
taken as soon as problems are spotted, be it by internal or external control. Therefore, the
ensuing discussion is not limited to problems emanating from third-line control only, but
applies to all cases where problems are encountered.

One of the first actions must be to inspect whether the deviation occurs for only one
control sample or round-robin sample, or whether several samples in one
batch/round/period deviate (possibly without exceeding the Action Line or scoring
asterisks). Earlier reports must be consulted to see if there have been problems previously
with that specific attribute. If an extreme value is scored only once for a certain sample,
this may indicate that this one measurement is wrong or that there is an unexpected matrix
interference. It may be necessary to go back to the measurements in the archives to check
this (audit trailing). This will include a re-check of the second-line (batch) control: was the
result of the control sample correct? If no mistakes are found, the sample in question must
be reanalyzed and in this analysis, for instance, the sample:liquid ratio may be varied. If
anomalies in an attribute occur in several samples, the entire analytical procedure should
be scrutinized critically. The following should then be inspected:

1. The results of the first-line check (calibration of equipment, etc., see also Chapter 5).
2. The results of measurements: these should be checked on the basis of the original
signals, counts, absorbances, etc. (and not on the basis of the final results of software
procedures).

3. The standard solutions used. This involves checking whether the manufacturer's values
for standard solutions are correct, or whether the salts used are indeed primary standards
and have indeed been pretreated correctly. These salts can lose or attract water. Standard
solutions that have been kept for too long or in unsuitable bottles can change in
concentration, e.g. because the bottle was not stoppered properly, allowing water to
evaporate (see also 5.3).

4. The correctness of the pipettes and other volumetric glassware used. It is known that
sometimes the volume of the adjustable pipettes, commonly used in laboratories, deviates
from the guaranteed volume. Therefore all such pipettes should be tested regularly (see
also 5.2.2.4).

5. The automatic pipetting of measuring equipment. Table 9-2 gives an example of deviations in the automatic pipetting equipment of a flameless atomic absorption spectrometer. This deviation from the given value may have great consequences for the standard series which is prepared by dilution of a standard solution with an injector and by standard addition.

6. In round robin programmes: the digestion and detection techniques followed. Information on this can be found in the MIC.

Table 9-2. Volume of the automatic injector of a sample changer of a flameless AAS (in µL).

Pump setting   Volume measured   Difference
                                 absolute  relative
Old pump
5              6.3               +1.3      +26 %
10             13.0              +3.0      +30 %
20             20.7              +0.7      +3.5 %
25             25.6              +0.6      +2.5 %
New pump
5              6.5               +1.5      +30 %
10             11.1              +1.1      +11 %
20             20.6              +0.6      +3 %
25             26.2              +1.2      +5 %
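The differences of Table 9-2 follow directly from the nominal and measured volumes; a short sketch for the old pump (values from the table; small rounding differences with the table are possible):

```python
# Absolute and relative deviation of injector volumes (old pump, Table 9-2).
settings = [5, 10, 20, 25]            # nominal volumes (pump setting)
measured = [6.3, 13.0, 20.7, 25.6]    # measured volumes

for nominal, actual in zip(settings, measured):
    diff = actual - nominal
    print(f"{nominal:>3} -> {actual:5.1f}  {diff:+5.1f}  {100 * diff / nominal:+5.1f} %")
```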

Some other sources of error are:

1. General

- Filter paper washed in acid can cause secondary reactions when soil suspensions prepared with unbuffered salt solutions are being filtered. This is particularly likely if the first portion of the filtrate is not removed.

- Old hollow cathode lamps can impair calibration graphs.

- Voltage fluctuations in the electricity mains.

- Portable telephones can disturb the functioning of sampling machines, causing them, for example, to skip a sample.

2. Contaminations

- Filter paper that has been taken out of its wrapping can absorb substances, particularly
ammonia.

- NH3 can be produced in demineralized water by the slow breakdown of the resins used.

- Boron contamination can arise from laundered laboratory coats, through the release of
perborate from the detergent.

- The paint from logos on certain new glassware may dissolve in the acid used for cleaning
(and enter the glassware).

- The cooling circuit of GF-AAS can become clogged with rust after being washed out with
tap water under high pressure.

- Zinc contamination may arise from dandruff of persons using anti-dandruff shampoo.

- Grinding can lead to contamination from the grinding apparatus. An example of this
source of error is given in Table 9-3. Mill A has a cast iron casing; in mill B the casing is
made of aluminium.

- Sieves may give off unwanted elements (e.g. brass: copper, zinc).

- Glassware may be contaminated by inadequate cleaning and rinsing. This may particularly occur when glassware is used for different analyses. Blank determinations may reveal such problems.

Table 9-3. Influence of grinding on the results of analyzing barley (in mg/kg). Mill A: cast iron casing; Mill B: aluminium casing.

Mill  Run  Al   Cu    Fe   Pb    Zn
A     1    12   5.32  420  0.03  25.4
A     2    11   5.36  454  0.01  25.8
A     3    24   5.31  487  0.08  26.1
B     1    102  7.13  94   0.14  26.1
B     2    112  6.45  91   0.19  26.3
B     3    104  6.46  74   0.14  25.7

9.5 Organization of interlaboratory test programmes


Although a full treatment is beyond the scope of the present discussion, a few general remarks may be useful for those who contemplate organizing an interlaboratory test programme, be it locally or on a wider scale.

It was mentioned in the Preface that more and more governments are requiring accreditation from laboratories that carry out, for instance, environmental and ecological analyses for particular studies or for the establishment of databases. This implies that accreditation bodies have been set up or are to be set up. Furthermore, it may be strategically useful for such accreditation bodies to be recognized internationally, to facilitate cross-border acceptance of the analytical results of an accredited laboratory should the occasion arise, e.g. by foreign or international organizations.

Note. Information on this aspect can be obtained from the International Laboratory Accreditation Conference (ILAC), P.O. Box 29152, 3001 GD Rotterdam, the Netherlands.

An accreditation body may delegate to a renowned laboratory the organization of a regional, national, or even international interlaboratory test programme as part of a larger external quality assurance programme. This could also be executed by a cooperative organization of laboratories such as SPALNA (see note in Section 5.2.2.2). Numerous papers describe the organization of interlaboratory tests, e.g. International Standard ISO 5725 (latest ed.), Horwitz (1988), Funk et al. (1995), and Houba et al. (1996), and the reader is referred to such papers for further information.

Meanwhile, the assistance of such a cooperative organization or network need not be limited to round robin tests but can be extended to other essential aspects of quality assurance such as:

- Preparation of control samples

- Testing of methods

- Organization of Training Workshops (SPALNA does this, among others, for Equipment Maintenance, Analytical Methodology, and Quality Management and Data Handling, in both English and French).

- Making available consultants for trouble-shooting and quality audits.

Particularly for individual laboratories, but also for groups of laboratories, there are clear organizational and budgetary advantages in joining an existing laboratory network with these aims. If the need for a local or regional network is still felt, a laboratory (or group of laboratories) interested in improving the quality of its output could take the initiative to set one up. Some kind of cross-link with an established scheme elsewhere would be beneficial, particularly in the initial stage.

9.6 Quality audit


As stated in Chapter 1, when the desired quality level of the output of the laboratory is
reached, it must be maintained and, where necessary, improved. To achieve this, the
Quality Manual should contain a plan for regular checking of all quality assurance
measures as they have been discussed so far. Such a plan would include a regular
reporting to the management of the institute or company.

This is usually done by the head of the laboratory and/or, if applicable, by the quality assurance officer.

In addition to such continuous internal inspection, particularly for larger laboratories it is very useful to have the quality system reviewed by an independent external auditor. For accreditation this is even an inherent part of the process.

An external audit can assist the organization to recognize bottlenecks and flaws. Such
shortcomings often result from insufficient and inefficient measures and activities which
remain unnoticed or are ignored.

An audit can be requested by the laboratory itself or by the management of the institute and basically involves the inspection of the Quality Manual, i.e. all the protocols, procedures, registration forms, logbooks, control charts, and other documents related to the laboratory work. Attention is not only given to the contents of the documents, but also
to the practical implementation ('say what you do, do what you say, and be able to show
what you have done'). Laboratory staff sometimes see these audits as a sign of suspicion
about their performance, and sometimes audits may be (mis)used to get things organized
or changed under the guise of quality. Yet, the auditor should not be seen as a policeman
but as someone who was asked to help. Therefore, good cooperation with the auditor is
essential for the effectiveness of the audit. Conversely, the auditor should be selected
carefully for the same reason.

In large laboratories it may be advisable to have the audit done by more than one person,
for instance an organization specialist and an analytical expert.

The audit should result in a report of findings and recommendations to remedy possible shortcomings. Subsequently, the management will have to decide to what extent the report will remain confidential, and if and what actions will have to be taken.

Wageningen Evaluating Programmes for Analytical Laboratories (WEPAL)

The world's largest laboratory-performance study schemes for the analysis of soils, sediments, crops, manures and refuse materials are included in the Wageningen Evaluating Programmes for Analytical Laboratories (WEPAL), organized by the Department of Soil Science and Plant Nutrition of the Wageningen Agricultural University, the Netherlands. These programmes are:

International Plant-analytical Exchange (IPE)

A laboratory-performance study on the inorganic chemical analysis of crop material. Every two months the participants receive six dried, ground, crop samples in coded plastic sample bags. The participants analyze these crop samples according to their own usual techniques (extraction and/or destruction, measurements). At the end of the test period the results are sent to Wageningen on pre-printed forms supplied by WEPAL, where they are processed. The participants are informed of the outcome within three weeks of the end of the test period. The results are accompanied by information about the digestion and detection technique, given via a four-letter code. The programme was initiated some 40 years ago (in 1956) and currently has about 250 participants from 80 countries.

International Soil-analytical Exchange (ISE)

A laboratory-performance study on (mainly) chemical analysis of soils. Initiated in 1988, this programme has at present almost 300 participants from 80 countries, who receive four dried, ground, soil samples every three months. These samples can be analyzed to determine the total content of many elements, but can also be submitted to a variety of extraction procedures, as well as to the determination of soil properties such as pH, conductivity, cation exchange capacity, and clay content. The further organization and processing of data, including the denotation of the digestion and detection techniques followed, are similar to those of IPE.

International Sediment Exchange for Tests on Organic Contaminants (SETOC)

A laboratory-performance study dealing with organic substances in soils and sediments. This study started in 1992 and currently has 90 participants. The organization, frequency of rounds, and reporting are as for ISE. The participants in this programme can report contents of 16 PAHs, 12 PCBs, 27 organochlorine pesticides and several heavy metals. This test is organized jointly with the Institute for Environmental Studies at the Free University of Amsterdam, the Netherlands.

International Manure and Refuse Sample Exchange Programme (MARSEP)

A laboratory-performance study on the chemical composition of manures, composts and sludges. This programme was started in 1994 and currently has 75 participants. The samples can be analyzed for real total and "total" contents of many elements. The organization, frequency of rounds, reporting, as well as the coding of the digestion and detection techniques followed, are similar to those of ISE.

For reasons of confidentiality of the results, participants may opt for a code name in the reports. The organization has equipment which can automatically divide large amounts of sample material into representative subsamples for all programmes.

Reference materials

WEPAL offers participants the opportunity to send dry bulk samples (50 kg of soil or 6 kg of plant material) for use as a sample in a test round. The remainder of the material (about 1/4) will be returned to the sender and can then act as a valuable internal reference sample with consensus values.

For more information contact:

WEPAL, Dept. of Soil Science and Plant Nutrition
Wageningen Agricultural University
P.O. Box 8005
6700 EC Wageningen, the Netherlands.
E-mail: wepal@mail.benp.wau.nl
Internet: http://www.benp.wau.nl/wepal
