DNA profiling
DNA profiling (also called DNA testing, DNA typing, or genetic fingerprinting) is a technique
employed by forensic scientists to assist in the identification of individuals by their respective
DNA profiles. DNA profiles are encrypted sets of letters that reflect a person's DNA makeup,
which can also be used as the person's identifier. DNA profiling should not be confused with full
genome sequencing.[1] DNA profiling is used in, for example, parental testing and criminal
investigation.
Although 99.9% of human DNA sequences are the same in every person, enough of the DNA is
different that it is possible to distinguish one individual from another, unless they are
monozygotic twins.[2] DNA profiling uses repetitive ("repeat") sequences that are highly variable,
[2]
called variable number tandem repeats (VNTRs), in particular short tandem repeats (STRs).
VNTR loci are very similar between closely related humans, but are so variable that unrelated
individuals are extremely unlikely to have the same VNTRs.
The DNA profiling technique was first reported in 1986[3] by Sir Alec Jeffreys at the University
of Leicester in England, United Kingdom,[4] and is now the basis of several national DNA
databases. Dr. Jeffreys's genetic fingerprinting was made commercially available in 1987, when a
chemical company, Imperial Chemical Industries (ICI), started a blood-testing center in the U.K.
[5]
Profiling processes
A reference sample is first collected and then analyzed to create the individual's DNA profile using one of a number of techniques, discussed below. The DNA profile is then compared against another sample to determine whether there is a genetic match.
RFLP analysis
Main article: Restriction fragment length polymorphism
The first method developed for DNA profiling involved RFLP analysis.
DNA is collected from cells, such as a blood sample, and cut into small pieces using a restriction
enzyme (a restriction digest). This generates thousands of DNA fragments of differing sizes as a
consequence of variations between DNA sequences of different individuals. The fragments are
then separated on the basis of size using gel electrophoresis.
The separated fragments are then transferred to a nitrocellulose or nylon filter; this procedure is
called a Southern blot. The DNA fragments within the blot are permanently fixed to the filter,
and the DNA strands are denatured. Radiolabeled probe molecules are then added that are
complementary to sequences in the genome that contain repeat sequences. These repeat
sequences tend to vary in length among different individuals and are called variable number
tandem repeat sequences or VNTRs. The probe molecules hybridize to DNA fragments
containing the repeat sequences and excess probe molecules are washed away. The blot is then
exposed to an X-ray film. Fragments of DNA that have bound the probe appear as dark bands on
the film.
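The central idea of RFLP, that sequence differences between individuals change where a restriction enzyme cuts and therefore change fragment sizes, can be sketched in a few lines of code. This is an illustrative toy only: the enzyme site is EcoRI-style, and the two sequences are invented examples, not real profiling data.

```python
# Toy sketch of a restriction digest: cut a sequence at every
# occurrence of a recognition site and report fragment lengths.
# Enzyme site and sequences are hypothetical illustrations.

def digest(dna, site="GAATTC", cut_offset=1):
    """Cut `dna` at each occurrence of `site`, `cut_offset` bases
    into the site (EcoRI-style G^AATTC); return the fragments."""
    fragments, start = [], 0
    pos = dna.find(site)
    while pos != -1:
        fragments.append(dna[start:pos + cut_offset])
        start = pos + cut_offset
        pos = dna.find(site, pos + 1)
    fragments.append(dna[start:])
    return fragments

# Two individuals differing near a repeat region yield different
# fragment-length patterns -- the pattern the blot makes visible.
person_a = "ATTTGAATTCCGGCGGCGGCGGGAATTCAT"
person_b = "ATTTGAATTCCGGCGGGAATTCAT"
print([len(f) for f in digest(person_a)])  # [5, 18, 7]
print([len(f) for f in digest(person_b)])  # [5, 12, 7]
```

The differing middle-fragment lengths (18 vs. 12) stand in for the size differences that gel electrophoresis separates and the radiolabeled probe reveals.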
However, the Southern blot technique is laborious and requires large amounts of undegraded sample DNA. Also, Jeffreys's original technique looked at many minisatellite loci at the same time, increasing the observed variability, but making it hard to discern individual alleles (and thereby precluding parental testing). These early techniques have been supplanted by PCR-based assays.
PCR analysis
Main article: polymerase chain reaction
In 1983, Kary Mullis developed a process by which specific portions of the
sample DNA can be amplified almost indefinitely (Saiki et al. 1985, 1988). This has
revolutionized the whole field of DNA study. The process, the polymerase chain reaction (PCR),
mimics the biological process of DNA replication, but confines it to specific DNA sequences of
interest. With the invention of the PCR technique, DNA profiling took huge strides forward in
both discriminating power and the ability to recover information from very small (or degraded)
starting samples.
PCR greatly amplifies the amounts of a specific region of DNA. In the PCR process, the DNA
sample is denatured into the separate individual polynucleotide strands through heating. Two
oligonucleotide DNA primers are used to hybridize to two corresponding nearby sites on
opposite DNA strands in such a fashion that the normal enzymatic extension of the active
terminal of each primer (that is, the 3′ end) leads toward the other primer. PCR uses replication
enzymes that are tolerant of high temperatures, such as the thermostable Taq polymerase. In this
fashion, two new copies of the sequence of interest are generated. Repeated denaturation,
hybridization, and extension in this fashion produce an exponentially growing number of copies
of the DNA of interest. Instruments that perform thermal cycling are now readily available from
commercial sources. This process can produce a million-fold or greater amplification of the
desired region in 2 hours or less.
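The exponential growth described above is easy to quantify: each ideal cycle doubles the number of target copies. A minimal sketch, assuming perfectly efficient doubling (real reactions only approximate this):

```python
# Back-of-the-envelope model of PCR amplification. `efficiency`
# is the fraction of templates duplicated per cycle (1 = ideal).

def copies_after(cycles, starting_copies=1, efficiency=1):
    """Target copies after `cycles` thermal cycles."""
    return starting_copies * (1 + efficiency) ** cycles

# About 20 ideal cycles already give a million-fold amplification,
# consistent with the "million-fold or greater" figure above.
print(copies_after(20))  # 1048576
print(copies_after(30))  # 1073741824, from a single template
```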
Early assays such as the HLA-DQ alpha reverse dot blot strips grew to be very popular due to
their ease of use, and the speed with which a result could be obtained. However, they were not as
discriminating as RFLP analysis. It was also difficult to determine a DNA profile for mixed
samples, such as a vaginal swab from a sexual assault victim.
However, the PCR method was readily adaptable for analyzing VNTR loci, in particular STR loci. In recent years, research in human DNA quantitation has focused on new "real-time" quantitative PCR (qPCR) techniques. Quantitative PCR methods enable automated, precise, and high-throughput measurements. Interlaboratory studies have demonstrated the importance of human DNA quantitation in achieving reliable interpretation of STR typing and obtaining consistent results across laboratories.
STR analysis
Main article: Short tandem repeats
The system of DNA profiling used today is based on PCR and uses short tandem repeats (STR).
This method uses highly polymorphic regions that have short repeated sequences of DNA (the
most common is 4 bases repeated, but there are other lengths in use, including 3 and 5 bases).
Because unrelated people almost certainly have different numbers of repeat units, STRs can be
used to discriminate between unrelated individuals. These STR loci (locations on a chromosome)
are targeted with sequence-specific primers and amplified using PCR. The DNA fragments that
result are then separated and detected using electrophoresis. There are two common methods of
separation and detection, capillary electrophoresis (CE) and gel electrophoresis.
Each STR is polymorphic, but the number of alleles is very small. Typically each STR allele will be shared by around 5% to 20% of individuals. The power of STR analysis comes from looking at
multiple STR loci simultaneously. The pattern of alleles can identify an individual quite
accurately. Thus STR analysis provides an excellent identification tool. The more STR regions
that are tested in an individual the more discriminating the test becomes.
From country to country, different STR-based DNA-profiling systems are in use. In North
America, systems that amplify the CODIS 13 core loci are almost universal, whereas in the
United Kingdom the SGM+ 11 loci system (which is compatible with The National DNA
Database) is in use. Whichever system is used, many of the STR regions used are the same.
These DNA-profiling systems are based on multiplex reactions, whereby many STR regions will
be tested at the same time.
The true power of STR analysis is in its statistical power of discrimination. Because the 13 loci
that are currently used for discrimination in CODIS are independently assorted (having a certain
number of repeats at one locus does not change the likelihood of having any number of repeats at
any other locus), the product rule for probabilities can be applied. This means that, if someone
has the DNA type of ABC, where the three loci were independent, we can say that the probability
of having that DNA type is the probability of having type A times the probability of having type
B times the probability of having type C. This has resulted in the ability to generate match
probabilities of 1 in a quintillion (1×10^18) or more. However, DNA database searches have shown false profile matches far more frequently than expected.[6] Moreover, since there are about 12 million monozygotic twins on Earth, the theoretical probability is not accurate.
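The product rule for independent loci can be shown in a few lines. The genotype frequencies below are invented illustrative values, not real population data:

```python
# Sketch of the product rule for independently assorting STR loci.
# Per-locus genotype frequencies here are made-up examples.
from math import prod

genotype_freqs = {"locus_A": 0.08, "locus_B": 0.10, "locus_C": 0.05}

# Independence lets us multiply the per-locus frequencies to get
# the frequency of the whole profile in the population.
profile_freq = prod(genotype_freqs.values())
print(f"random-match probability: 1 in {1 / profile_freq:,.0f}")
# -> random-match probability: 1 in 2,500
```

Adding more loci multiplies in more small factors, which is why full 13-locus CODIS profiles reach such extreme rarity figures.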
In practice, the risk of a match caused by contamination is much greater than that of matching a distant relative: a sample can be contaminated by nearby objects or by left-over cells transferred from a prior test. The risk is greatest for the most common person in the samples; everything collected from, or in contact with, a victim is a major source of contamination for any other samples brought into a lab. For that reason, multiple control samples are typically tested to ensure that they stayed clean when prepared during the same period as the actual test samples. Unexpected matches (or variations) in several control samples indicate a high probability of contamination of the actual test samples. In a relationship test, the full DNA profiles should differ (except for twins), to prove that a person was not actually matched against their own DNA in another sample.
AmpFLP
Main article: Amplified fragment length polymorphism
Another technique, AmpFLP, or amplified fragment length polymorphism was also put into
practice during the early 1990s. This technique was also faster than RFLP analysis and used PCR
to amplify DNA samples. It relied on variable number tandem repeat (VNTR) polymorphisms to
distinguish various alleles, which were separated on a polyacrylamide gel using an allelic ladder
(as opposed to a molecular weight ladder). Bands could be visualized by silver staining the gel.
One popular locus for fingerprinting was the D1S80 locus. As with all PCR-based methods, highly degraded DNA or very small amounts of DNA may cause allelic dropout (causing a heterozygote to be mistaken for a homozygote) or other stochastic effects. In addition, because the analysis is done on a gel, alleles with very high repeat numbers may bunch together at the top of the gel, making them difficult to resolve. AmpFLP analysis can be highly automated, and allows for
easy creation of phylogenetic trees based on comparing individual samples of DNA. Due to its
relatively low cost and ease of set-up and operation, AmpFLP remains popular in lower income
countries.
DNA family relationship analysis
During conception, the father's sperm cell and the mother's egg cell, each containing half the amount of DNA found in other body cells, meet and fuse to form a fertilized egg, called a zygote.
The zygote contains a complete set of DNA molecules, a unique combination of DNA from both
parents. This zygote divides and multiplies into an embryo and later, a full human being.
At each stage of development, all the cells forming the body contain the same DNA: half from the father and half from the mother. This fact allows relationship testing to use all types of samples, including loose cells from the cheeks collected using buccal swabs, blood, or other types of samples.
While much of DNA encodes information for cellular functions, some noncoding regions, historically called junk DNA, are currently used for human identification. At some special locations (called loci) in this noncoding DNA, predictable inheritance patterns were found to be useful in determining biological relationships. These locations contain specific DNA markers that scientists use to identify individuals. In a routine DNA paternity test, the markers used are short tandem repeats (STRs), short pieces of DNA that occur in highly differential repeat patterns among individuals.
Each person's DNA contains two copies of these markers: one copy inherited from the father and one from the mother. Within a population, the markers at each person's DNA location could differ in length and sometimes sequence, depending on the markers inherited from the parents. The combination of marker sizes found in each person makes up his or her unique genetic profile.
When determining the relationship between two individuals, their genetic profiles are compared
to see if they share the same inheritance patterns at a statistically conclusive rate.
For example, the following sample report from a commercial DNA paternity testing laboratory, Universal Genetics, shows how relatedness between parents and child is identified on these special markers:
DNA Marker    Mother      Child       Alleged father
D21S11        28, 30      28, 31      29, 31
D7S820        9, 10       10, 11      11, 12
TH01          14, 15      14, 16      15, 16
D13S317       7, 8        7, 9        8, 9
D19S433       14, 16.2    14, 15      15, 17
The partial results indicate that the child's and the alleged father's DNA match among these five markers. The complete test results show this correlation on 16 markers between the child and the tested man to conclude whether or not the man is the biological father.
Each marker is assigned with a Paternity Index (PI), which is a statistical measure of how
powerfully a match at a particular marker indicates paternity. The PI of each marker is multiplied
with each other to generate the Combined Paternity Index (CPI), which indicates the overall
probability of an individual being the biological father of the tested child relative to any random
man from the entire population of the same race. The CPI is then converted into a Probability of
Paternity showing the degree of relatedness between the alleged father and child.
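The combination of per-marker indices into a CPI and a probability can be sketched directly. The PI values below are invented for illustration; real values come from population allele-frequency tables, and the conversion shown assumes the conventional neutral 0.5 prior:

```python
# Sketch of combining per-marker Paternity Index (PI) values into a
# Combined Paternity Index (CPI). PI values are hypothetical.
from math import prod

marker_pis = {"D21S11": 2.1, "D7S820": 1.8, "TH01": 3.4, "D13S317": 2.6}

cpi = prod(marker_pis.values())         # multiply the per-marker PIs
# With a neutral 0.5 prior, probability of paternity = CPI / (CPI + 1).
prob_paternity = cpi / (cpi + 1)
print(f"CPI = {cpi:.1f}, probability of paternity = {prob_paternity:.2%}")
```

Because the PIs multiply, even modest per-marker values compound quickly: a full 16-marker panel typically yields a CPI in the thousands or more.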
The DNA test report in other family relationship tests, such as grandparentage and siblingship
tests, is similar to a paternity test report. Instead of the Combined Paternity Index, a different
value, such as a Siblingship Index, is reported.
The report shows the genetic profiles of each tested person. If there are markers shared among
the tested individuals, the probability of biological relationship is calculated to determine how
likely the tested individuals share the same markers due to a blood relationship.
Y-chromosome analysis
Recent innovations have included the creation of primers targeting polymorphic regions on the
Y-chromosome (Y-STR), which allows resolution of a mixed DNA sample from a male and
female or cases in which a differential extraction is not possible. Y-chromosomes are paternally
inherited, so Y-STR analysis can help in the identification of paternally related males. Y-STR
analysis was performed in the Sally Hemings controversy to determine if Thomas Jefferson had
sired a son with one of his slaves. The analysis of the Y-chromosome yields weaker results than autosomal chromosome analysis: because the male sex-determining Y-chromosome is inherited only by males from their fathers, it is almost identical along the patrilineal line. This leads to a less precise analysis than if autosomal chromosomes were tested, because of the random matching that occurs between pairs of chromosomes as zygotes are being made.[7]
Mitochondrial analysis
Main article: Mitochondrial DNA
For highly degraded samples, it is sometimes impossible to get a complete profile of the 13
CODIS STRs. In these situations, mitochondrial DNA (mtDNA) is sometimes typed due to there
being many copies of mtDNA in a cell, while there may only be 1-2 copies of the nuclear DNA.
Forensic scientists amplify the HV1 and HV2 regions of the mtDNA, and then sequence each
region and compare single-nucleotide differences to a reference. Because mtDNA is maternally
inherited, directly linked maternal relatives can be used as match references, such as one's
maternal grandmother's daughter's son. In general, a difference of two or more nucleotides is
considered to be an exclusion. Heteroplasmy and poly-C differences may throw off straight
sequence comparisons, so some expertise on the part of the analyst is required. mtDNA is useful
in determining clear identities, such as those of missing people when a maternally linked relative
can be found. mtDNA testing was used in determining that Anna Anderson was not the Russian
princess she had claimed to be, Anastasia Romanov.
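The comparison step, counting single-nucleotide differences against a maternal reference and applying the "two or more differences" exclusion guideline, can be sketched as follows. The sequences are invented toy examples, far shorter than real HV1/HV2 regions, and the sketch ignores the heteroplasmy and poly-C complications noted above:

```python
# Sketch of mtDNA sequence comparison against a maternal reference.
# Sequences are invented examples, assumed already aligned.

def count_differences(seq1, seq2):
    """Number of positions at which two aligned sequences differ."""
    assert len(seq1) == len(seq2), "sequences must be aligned"
    return sum(a != b for a, b in zip(seq1, seq2))

questioned = "ACGTTACGGA"
reference  = "ACGTCACGGA"

diffs = count_differences(questioned, reference)
print("differences:", diffs)
# Guideline from the text: two or more differences -> exclusion.
print("excluded" if diffs >= 2 else "cannot exclude")
```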
mtDNA can be obtained from material such as hair shafts and old bones and teeth.
DNA databases
Main article: National DNA database
When using RFLP, the theoretical risk of a coincidental match is 1 in 100 billion
(100,000,000,000), although the practical risk is actually 1 in 1000 because monozygotic twins
are 0.2% of the human population.[citation needed] Moreover, the rate of laboratory error is almost
certainly higher than this, and often actual laboratory procedures do not reflect the theory under
which the coincidence probabilities were computed. For example, the coincidence probabilities
may be calculated based on the probabilities that markers in two samples have bands in precisely
the same location, but a laboratory worker may conclude that similar, but not precisely identical,
band patterns result from identical genetic samples with some imperfection in the agarose gel.
However, in this case, the laboratory worker increases the coincidence risk by expanding the
criteria for declaring a match. Recent studies have quoted relatively high error rates, which may
be cause for concern.[14] In the early days of genetic fingerprinting, the necessary population data
to accurately compute a match probability was sometimes unavailable. Between 1992 and 1996,
arbitrary low ceilings were controversially put on match probabilities used in RFLP analysis
rather than the higher theoretically computed ones.[15] Today, RFLP has become widely disused
due to the advent of more discriminating, sensitive and easier technologies.
Since 1998, the DNA profiling system supported by The National DNA Database in the UK has been the SGM+ DNA profiling system, which includes 10 STR regions and a sex-indicating test. STRs do not suffer from such subjectivity and provide similar power of discrimination (1 in 10^13 for unrelated individuals if using a full SGM+ profile). Figures of this magnitude are not considered statistically supportable by scientists in the UK; for unrelated individuals with full matching DNA profiles, a match probability of 1 in a billion is considered statistically supportable. However, with any DNA technique, the cautious juror should not
convict on genetic fingerprint evidence alone if other factors raise doubt. Contamination with
other evidence (secondary transfer) is a key source of incorrect DNA profiles and raising doubts
as to whether a sample has been adulterated is a favorite defense technique. More rarely,
chimerism is one such instance where the lack of a genetic match may unfairly exclude a
suspect.
The value of DNA evidence has to be seen in light of recent cases where criminals planted fake
DNA samples at crime scenes. In one case, a criminal even planted fake DNA evidence in his
own body: Dr. John Schneeberger raped one of his sedated patients in 1992 and left semen on her
underwear. Police drew what they believed to be Schneeberger's blood and compared its DNA
against the crime scene semen DNA on three occasions, never showing a match. It turned out
that he had surgically inserted a Penrose drain into his arm and filled it with foreign blood and
anticoagulants.
In a study conducted by the life science company Nucleix and published
in the journal Forensic Science International, scientists found that an in vitro synthesized sample
of DNA matching any desired genetic profile can be constructed using standard molecular
biology techniques without obtaining any actual tissue from that person.
In the case of the Phantom of Heilbronn, police detectives found DNA traces from the same
woman at various crime scenes in Austria, Germany, and France, among them murders, burglaries, and robberies. Only after the DNA of the "woman" matched DNA sampled from the burned body of a male asylum seeker in France did detectives begin to have serious doubts about the DNA evidence. In that case, DNA traces were already present on the cotton swabs used
to collect the samples at the crime scene, and the swabs had all been produced at the same
factory in Austria. The company's product specification said that the swabs were guaranteed to
be sterile, but not DNA-free.
Familial DNA searching
When other leads have been exhausted, investigators may use specially developed software to compare the forensic profile to all profiles taken from a state's DNA database to generate a list of those
offenders already in the database who are most likely to be a very close relative of the individual
whose DNA is in the forensic profile.[20] To eliminate the majority of this list when the forensic
DNA is a man's, crime lab technicians conduct Y-STR analysis. Using standard investigative
techniques, authorities are then able to build a family tree. The family tree is populated from
information gathered from public records and criminal justice records. Investigators rule out family members' involvement in the crime by finding excluding factors such as sex, living out of state, or being incarcerated when the crime was committed. They may also use other leads from
the case, such as witness or victim statements, to identify a suspect. Once a suspect has been
identified, investigators seek to legally obtain a DNA sample from the suspect. This suspect
DNA profile is then compared to the sample found at the crime scene to definitively identify the
suspect as the source of the crime scene DNA.
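The elimination step described above is essentially a filter over the candidate family tree. A minimal sketch, with hypothetical names and fields standing in for public-records data:

```python
# Sketch of eliminating family-tree candidates by excluding factors
# (sex, incarceration, residence). All records are hypothetical.

candidates = [
    {"name": "relative_1", "sex": "M", "incarcerated": True,  "in_state": True},
    {"name": "relative_2", "sex": "F", "incarcerated": False, "in_state": True},
    {"name": "relative_3", "sex": "M", "incarcerated": False, "in_state": False},
    {"name": "relative_4", "sex": "M", "incarcerated": False, "in_state": True},
]

# Assume the crime-scene profile is male and the offense was local:
# exclude females, anyone incarcerated at the time, and anyone
# living out of state when the crime was committed.
remaining = [
    c for c in candidates
    if c["sex"] == "M" and not c["incarcerated"] and c["in_state"]
]
print([c["name"] for c in remaining])  # ['relative_4']
```

Whoever survives the filter is still only an investigative lead; as the text notes, a direct DNA sample is then obtained and compared before any identification is made.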
Familial DNA database searching was first used in an investigation leading to the conviction of
Craig Harman of manslaughter in the United Kingdom on April 19, 2004.[21] Harman was convicted using familial DNA after partial matches to his brother's profile. When the police questioned Harman's brother, they noticed that Harman lived very close to the original crime scene. Harman confessed when his DNA matched the DNA isolated from the brick.[22] Currently, familial DNA database searching is not conducted on a national level in
the United States. States determine their own policies and decision making processes for how
and when to conduct familial searches. The first familial DNA search and subsequent conviction
in the United States was conducted in Denver, Colorado in 2008 using software developed under
the leadership of Denver District Attorney Mitch Morrissey and Denver Police Department
Crime Lab Director Gregg LaBerge.[23] California was the first state to implement a policy for
familial searching under then Attorney General, now Governor, Jerry Brown.[24] In his role as
consultant to the Familial Search Working Group of the California Department of Justice, former
Alameda County Prosecutor Rock Harmon is widely considered to have been the catalyst in the
adoption of familial search technology in California. The technique was used to catch the Los
Angeles serial killer known as the Grim Sleeper in 2010.[25] It was not a witness or informant that tipped off law enforcement to the identity of the "Grim Sleeper" serial killer, who had eluded police for more than two decades, but DNA from the suspect's own son. The suspect's son was
arrested and convicted on a felony weapons charge and swabbed for DNA. When his
DNA was entered into the database of convicted felons, detectives were alerted to a partial match
to evidence found at the "Grim Sleeper" crime scenes. David Franklin Jr., also known as the
Grim Sleeper, was charged with ten counts of murder and one count of attempted murder.[26]
More recently, familial DNA led to the arrest of 21-year-old Elvis Garcia on charges of sexual
assault and false imprisonment of a woman in Santa Cruz in 2008.[27] In March 2011 Virginia
Governor Bob McDonnell announced that Virginia would begin using familial DNA searches.[28]
Other states are expected to follow.
At a press conference in Virginia on March 7, 2011 regarding the East Coast Rapist, Prince
William County prosecutor Paul Ebert and Fairfax County Police Detective John Kelly said the
case would have been solved years ago if Virginia had used familial DNA searching. Aaron
Thomas, the suspected East Coast Rapist, was arrested in connection with the rape of 17 women
from Virginia to Rhode Island, but familial DNA was not used in the case.[29]
Critics of familial DNA database searches argue that the technique is an invasion of an individual's Fourth Amendment rights.[30] Privacy advocates are petitioning for DNA database
restrictions, arguing that the only fair way to search for possible DNA matches to relatives of
offenders or arrestees would be to have a population-wide DNA database.[13] Some scholars have
pointed out that the privacy concerns surrounding familial searching are similar in some respects
to other police search techniques,[31] and most have concluded that the practice is constitutional.
[32]
The Ninth Circuit Court of Appeals in United States v. Pool (vacated as moot) suggested that
this practice is somewhat analogous to a witness looking at a photograph of one person and
stating that it looked like the perpetrator, which leads law enforcement to show the witness
photos of similar looking individuals, one of whom is identified as the perpetrator.[33] Regardless
of whether familial DNA searching was the method used to identify the suspect, authorities
always conduct a normal DNA test to match the suspect's DNA with that of the DNA left at the
crime scene.
Critics also claim that racial profiling could occur on account of familial DNA testing. In the
United States, the conviction rates of racial minorities are much higher than that of the overall
population. It is unclear whether this is due to discrimination from police officers and the courts,
as opposed to a simple higher rate of offence among minorities. Arrest-based databases, which
are found in the majority of the United States, lead to an even greater level of racial
discrimination. An arrest, as opposed to conviction, relies much more heavily on police
discretion.[13]
For instance, investigators with the Denver District Attorney's Office successfully identified a suspect in a property theft case using a familial DNA search. In this example, the suspect's blood left at the scene of the crime strongly resembled that of a current Colorado Department of Corrections prisoner.[34] Using publicly available records, the investigators created a family tree. They then eliminated all the family members who were incarcerated at the time of the offense, as well as all of the females (the crime scene DNA profile was that of a male). Investigators obtained a court order to collect the suspect's DNA, but the suspect actually volunteered to come
to a police station and give a DNA sample. After providing the sample, the suspect walked free
without further interrogation or detainment. Later confronted with an exact match to the forensic
profile, the suspect pled guilty to criminal trespass at the first court date and was sentenced to
two years probation.
Partial matches
Partial DNA matches are not searches themselves, but are the result of moderate stringency
CODIS searches that produce a potential match that shares at least one allele at every locus.[35]
Partial matching does not involve the use of familial search software, such as those used in the
UK and US, or additional Y-STR analysis, and therefore often misses sibling relationships.
Partial matching has been used to identify suspects in several cases in the UK and US[36] and has
also been used as a tool to exonerate the falsely accused. Darryl Hunt was wrongly convicted in
connection with the rape and murder of a young woman in 1984 in North Carolina.[37] Hunt was
exonerated in 2004 when a DNA database search produced a remarkably close match between a
convicted felon and the forensic profile from the case. The partial match led investigators to the
felon's brother, Willard E. Brown, who confessed to the crime when confronted by police. A
judge then signed an order to dismiss the case against Hunt.
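The moderate-stringency criterion above, that the profiles share at least one allele at every locus, is simple to express in code. The profiles below are invented toy examples with three loci rather than a full CODIS panel:

```python
# Sketch of the moderate-stringency partial-match test: the two
# profiles must share at least one allele at every common locus.
# Profiles are hypothetical (allele values = repeat counts).

def is_partial_match(profile1, profile2):
    """True if the profiles share >= 1 allele at every shared locus."""
    return all(
        set(profile1[locus]) & set(profile2[locus])
        for locus in profile1.keys() & profile2.keys()
    )

forensic = {"D21S11": (28, 31), "TH01": (14, 16), "D13S317": (7, 9)}
offender = {"D21S11": (28, 30), "TH01": (14, 15), "D13S317": (7, 8)}
stranger = {"D21S11": (29, 32), "TH01": (14, 15), "D13S317": (7, 8)}

print(is_partial_match(forensic, offender))  # True: one shared allele per locus
print(is_partial_match(forensic, stranger))  # False: no shared allele at D21S11
```

Sharing one allele per locus is exactly the pattern expected of a parent or child, which is why such hits point investigators toward close relatives of the database profile.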
The New York Times quoted the lead author on the paper, Dr. Daniel Frumkin, saying, "You can
just engineer a crime scene... any biology undergraduate could perform this."[50]
Dr. Frumkin developed a test that can differentiate real DNA samples from fake ones. His test
detects epigenetic modifications, in particular, DNA methylation. Seventy percent of the DNA in
any human genome is methylated, meaning it contains methyl group modifications within a CpG
dinucleotide context. Methylation at the promoter region is associated with gene silencing. The
synthetic DNA lacks this epigenetic modification, which allows the test to distinguish
manufactured DNA from original, genuine, DNA.[50]
It is unknown how many police departments, if any, currently use the test. No police lab has
publicly announced that it is using the new test to verify DNA results.[51]
Cases
In 1986, Richard Buckland was exonerated, despite having admitted to the rape and murder of a teenager near Leicester, the city where DNA profiling was first developed. This was the first use of DNA fingerprinting in a criminal investigation.[52]
o In 1987, in the same case as Buckland, British baker Colin Pitchfork was the first
criminal caught and convicted using DNA fingerprinting.[53]
In 1987, genetic fingerprinting was used in criminal court for the first time in the trial of a man accused of unlawful intercourse with a mentally handicapped 14-year-old girl who gave birth to a baby.[54]
In 1987, Florida rapist Tommie Lee Andrews was the first person in the United States to
be convicted as a result of DNA evidence, for raping a woman during a burglary; he was
convicted on November 6, 1987, and sentenced to 22 years in prison.[55][56]
In 1988, Timothy Wilson Spencer was the first man in Virginia to be sentenced to death
through DNA testing, for several rape and murder charges. He was dubbed "The South
Side Strangler" because he killed victims on the south side of Richmond, Virginia. He
was later charged with rape and first-degree murder and was sentenced to death. He was
executed on April 27, 1994. David Vasquez, initially convicted of one of Spencer's
crimes, became the first man in America exonerated based on DNA evidence.
In 1989, Chicago man Gary Dotson was the first person whose conviction was overturned
using DNA evidence.
In 1991, Allan Legere was the first Canadian to be convicted as a result of DNA
evidence, for four murders he had committed while an escaped prisoner in 1989. During
his trial, his defense argued that the relatively shallow gene pool of the region could lead
to false positives.
In 1992, DNA evidence was used to prove that Nazi doctor Josef Mengele was buried in
Brazil under the name Wolfgang Gerhard.
In 1992, DNA from a palo verde tree was used to convict Mark Alan Bogan of murder.
DNA from seed pods of a tree at the crime scene was found to match that of seed pods
found in Bogan's truck. This is the first instance of plant DNA admitted in a criminal
case.[57][58][59]
In 1993, Kirk Bloodsworth was the first person to have been convicted of murder and
sentenced to death, whose conviction was overturned using DNA evidence.
The 1993 rape and murder of Mia Zapata, lead singer of the Seattle punk band The Gits,
remained unsolved nine years later. A database search in 2001 failed, but the killer's
DNA was collected when he was arrested in Florida for burglary and domestic abuse in
2002.
The science was made famous in the United States in 1994 when prosecutors heavily
relied on DNA evidence allegedly linking O. J. Simpson to a double murder. The case
also brought to light the laboratory difficulties and handling procedure mishaps that can
cause such evidence to be significantly doubted.
In 1994, Royal Canadian Mounted Police (RCMP) detectives successfully tested hairs
from a cat known as Snowball, and used the test to link a man to the murder of his wife,
thus marking the first use of non-human animal DNA in forensic history to identify a
criminal (plant DNA had already been used in the 1992 Bogan case above).
In 1994, the claim that Anna Anderson was Grand Duchess Anastasia Nikolaevna of
Russia was tested after her death using samples of her tissue that had been stored at a
Charlottesville, Virginia hospital following a medical procedure. The tissue was tested
using DNA fingerprinting, and showed that she bore no relation to the Romanovs.[60]
In 1995, the British Forensic Science Service carried out its first mass intelligence DNA
screening in the investigation of the Naomi Smith murder case.
In 1998, Dr. Richard J. Schmidt was convicted of attempted second-degree murder when
it was shown that there was a link between the viral DNA of the human
immunodeficiency virus (HIV) he had been accused of injecting in his girlfriend and viral
DNA from one of his patients with AIDS. This was the first time viral DNA
fingerprinting had been used as evidence in a criminal trial.
In 1999, Raymond Easton, a disabled man from Swindon, England, was arrested and
detained for seven hours in connection with a burglary. He was released due to an
inaccurate DNA match. His DNA had been retained on file after an unrelated domestic
incident some time previously.[61]
In May 2000 Gordon Graham murdered Paul Gault at his home in Lisburn, Northern
Ireland. Graham was convicted of the murder when his DNA was found on a sports bag
left in the house as part of an elaborate ploy to suggest the murder occurred after a
burglary had gone wrong. Graham was having an affair with the victim's wife at the time
of the murder. It was the first time Low Copy Number DNA was used in Northern
Ireland.[62]
In 2001, Wayne Butler was convicted for the murder of Celia Douty. It was the first
murder in Australia to be solved using DNA profiling.[63][64]
In 2002, the body of James Hanratty, hanged in 1962 for the "A6 murder", was exhumed
and DNA samples from the body and members of his family were analysed. The results
convinced Court of Appeal judges that Hanratty's guilt, which had been strenuously
disputed by campaigners, was proved "beyond doubt".[65] Paul Foot and some other
campaigners continued to believe in Hanratty's innocence and argued that the DNA
evidence could have been contaminated, noting that the small DNA samples from items
of clothing, kept in a police laboratory for over 40 years "in conditions that do not satisfy
modern evidential standards", had had to be subjected to very new amplification
techniques in order to yield any genetic profile.[66] However, no DNA other than
Hanratty's was found on the evidence tested, contrary to what would have been expected
had the evidence indeed been contaminated.[67]
In 2002, DNA testing was used to exonerate Douglas Echols, a man who was wrongfully
convicted in a 1986 rape case. Echols was the 114th person to be exonerated through
post-conviction DNA testing.
In August 2002, Annalisa Vincenzi was shot dead in Tuscany. Bartender Peter Hamkin,
23, was arrested, in Merseyside, in March 2003 on an extradition warrant heard at Bow
Street Magistrates' Court in London to establish whether he should be taken to Italy to
face a murder charge. DNA "proved" he shot her, but he was cleared on other evidence.[68]
In 2003, Welshman Jeffrey Gafoor was convicted of the 1988 murder of Lynette White,
when crime scene evidence collected 12 years earlier was re-examined using STR
techniques, resulting in a match with his nephew.[69] This may be the first known example
of the DNA of an innocent yet related individual being used to identify the actual
criminal, via "familial searching".
In March 2003, Josiah Sutton was released from prison after serving four years of a
twelve-year sentence for a sexual assault charge. Questionable DNA samples taken from
Sutton were retested in the wake of the Houston Police Department's crime lab scandal of
mishandling DNA evidence.
In June 2003, because of new DNA evidence, Dennis Halstead, John Kogut and John
Restivo won a re-trial on their murder conviction. The three men had already served
eighteen years of their thirty-plus-year sentences.
The trial of Robert Pickton (convicted in December 2003) is notable in that DNA
evidence was used primarily to identify the victims and, in many cases, to prove their
existence.
In 2004, DNA testing shed new light into the mysterious 1912 disappearance of Bobby
Dunbar, a four-year-old boy who vanished during a fishing trip. He was allegedly found
alive eight months later in the custody of William Cantwell Walters, but another woman
claimed that the boy was her son, Bruce Anderson, whom she had entrusted in Walters'
custody. The courts disbelieved her claim and convicted Walters for the kidnapping. The
boy was raised and known as Bobby Dunbar throughout the rest of his life. However,
DNA tests on Dunbar's son and nephew revealed the two were not related, thus
establishing that the boy found in 1912 was not Bobby Dunbar, whose real fate remains
unknown.[70]
In 2005, Gary Leiterman was convicted of the 1969 murder of Jane Mixer, a law student
at the University of Michigan, after DNA found on Mixer's pantyhose was matched to
Leiterman. DNA in a drop of blood on Mixer's hand was matched to John Ruelas, who
was only four years old in 1969 and was never successfully connected to the case in any
other way. Leiterman's defense unsuccessfully argued that the unexplained match of the
blood spot to Ruelas pointed to cross-contamination and raised doubts about the
reliability of the lab's identification of Leiterman.[71][72][73]
In December 2005, Evan Simmons was proven innocent of a 1981 attack on an Atlanta
woman after serving twenty-four years in prison. He is the 164th person in the
United States and the fifth in Georgia to be freed using post-conviction DNA testing.
In March 2009, Sean Hodgson, who had spent 27 years in jail after being convicted of
killing 22-year-old Teresa De Simone in her car in Southampton, was released by senior
judges after tests proved the DNA from the scene was not his. British police have since
reopened the case.
See also
DNA barcoding
DNA database
Forensic identification
Gene mapping
Harvey v. Horan
Identification (biology)
Kinship analysis
Maryland v. King
Parental testing
Phantom of Heilbronn
Innocence Project
Ribotyping
References
1.
2.
3. Joseph Wambaugh, The Blooding (New York: A Perigord Press Book, 1989), 83.
4. Jeffreys A.J., Wilson V., Thein S.W. (1984). "Hypervariable 'minisatellite' regions in human DNA". Nature 314: 67–73. doi:10.1038/314067a0.
5. Joseph Wambaugh, The Blooding (New York: A Perigord Press Book, 1989), 202.
6. Felch, Jason; et al. (July 20, 2008). "FBI resists scrutiny of 'matches'". Los Angeles Times. p. P8.
7.
8.
9.
10.
11.
12.
13.
14.
15. Nick Paton Walsh, "False result fear over DNA tests", The Observer, 27 January 2002.
The Evaluation of Forensic DNA Evidence, 1996.
16.
17. James L. Hartley, Gary F. Temple, and Michael A. Brasch (2000). "DNA Cloning Using In Vitro Site-Specific Recombination". Cold Spring Harbor Laboratory Press.
18. Diamond, Diane. "Searching the Family DNA Tree to Solve Crime". The Huffington Post. Accessed April 17, 2011.
19.
20.
21.
22.
23. Pankratz, Howard. "Denver Uses Familial DNA Evidence to Solve Car Break-Ins". The Denver Post. Accessed April 17, 2011.
24. Steinhauer, Jennifer. "'Grim Sleeper' Arrest Fans Debate on DNA Use". The New York Times. Accessed April 17, 2011.
25. Dolan, Maura. "A New Track in DNA Search". LA Times. Accessed April 17, 2011.
26. "New DNA Technique Led Police to 'Grim Sleeper' Serial Killer and Will 'Change Policing in America'". ABC News.
27. Dolan, Maura. "Familial DNA Search Used in Grim Sleeper Case Leads to Arrest of Santa Cruz Sex Offender". LA Times. Accessed April 17, 2011.
28.
29. Christoffersen, John and Barakat, Matthew. "Other victims of East Coast Rapist suspect sought". Associated Press. Accessed May 25, 2011.
30.
31. Suter, Sonia. "All in The Family: Privacy and DNA Familial Searching" (2010). Harvard Journal of Law and Technology, Vol. 23, 328, 2010.
32.
33.
34. Pankratz, Howard. "Denver Uses Familial DNA Evidence to Solve Car Break-Ins". The Denver Post. Accessed April 17, 2011.
35.
36.
37.
38. Amy Harmon, "Lawyers Fight DNA Samples Gained on Sly", New York Times, April 3, 2008.
39.
40. "U.S. Supreme Court allows DNA sampling of prisoners". UPI. Retrieved 3 June 2013.
41. http://www.supremecourt.gov/opinions/12pdf/12-207_d18e.pdf
42. Samuels, J.E., E.H. Davies, and D.B. Pope (2013). Collecting DNA at Arrest: Policies, Practices, and Implications, Final Technical Report. Washington, D.C.: Urban Institute, Justice Policy Center.
43.
44.
45. R v. Adams [1997] EWCA Crim 2474 (16 October 1997), Court of Appeal.
46.
47.
48.
49. Donna Lyons. "State Laws on DNA Data Banks". Ncsl.org. Retrieved 2010-04-03.
50.
51.
52.
53. Joseph Wambaugh, The Blooding (New York: A Perigord Press Book, 1989), 369.
54. Joseph Wambaugh, The Blooding (New York: A Perigord Press Book, 1989), 316.
55.
56. "Frontline: The Case for Innocence: The DNA Revolution: State and Federal DNA Database Laws Examined". Pbs.org. Retrieved 2010-04-03.
57.
58.
59.
60.
61.
62.
63. Dutter, Barbie (2001-06-19). "18 years on, man is jailed for murder of Briton in 'paradise'". London: The Telegraph. Retrieved 2008-06-17.
64.
65.
66. John Steele, "Hanratty lawyers reject DNA 'guilt'", Daily Telegraph, London, 23 June 2001.
67. "Hanratty: The damning DNA". BBC News. 10 May 2002. Retrieved 2011-08-22.
68. "Mistaken identity claim over murder". BBC News. February 15, 2003. Retrieved April 1, 2010.
69. Satish Sekar. "Lynette White Case: How Forensics Caught the Cellophane Man". Lifeloom.com. Retrieved 2010-04-03.
70. "DNA clears man of 1914 kidnapping conviction". USA Today, May 5, 2004, by Allen G. Breed, Associated Press.
71. CBS News story on the Jane Mixer murder case; March 24, 2007.
72. Another CBS News story on the Mixer case; July 17, 2007.
73.
Further reading
Anne Hart (July 2003). The Beginner's Guide to Interpreting Ethnic DNA Origins for
Family History: How Ashkenazi, Sephardi, Mizrahi & Europeans Are Related to
Everyone Else. iUniverse. ISBN 978-0-595-28306-4.
External links
Making Sense of DNA Backlogs, 2012: Myths vs. Reality United States Department of
Justice
Categories:
Applied genetics
Biometrics
DNA
Molecular biology
Capillary electrophoresis
From Wikipedia, the free encyclopedia
Acronym: CE
Classification: Electrophoresis
Analytes: Biomolecules, chiral molecules
Related techniques: gel electrophoresis, two-dimensional gel electrophoresis
Hyphenated techniques: capillary electrophoresis–mass spectrometry
Capillary electrophoresis (CE) is a family of electrokinetic separation methods performed in
submillimeter capillaries and in micro- and nanofluidic channels. Very often, CE refers to
capillary zone electrophoresis (CZE), but other electrophoretic techniques including capillary gel
electrophoresis (CGE), capillary isoelectric focusing (CIEF), capillary isotachophoresis and
micellar electrokinetic chromatography (MEKC) belong also to this class of methods.[1] In CE
methods, analytes migrate through electrolyte solutions under the influence of an electric field.
Analytes can be separated according to ionic mobility, additionally they may be concentrated by
means of gradients in conductivity and pH.
Instrumentation
The instrumentation needed to perform capillary electrophoresis is relatively simple. A basic
schematic of a capillary electrophoresis system is shown in figure 1. The system's main
components are a sample vial, source and destination vials, a capillary, electrodes, a high-voltage
power supply, a detector, and a data output and handling device. The source vial, destination vial
and capillary are filled with an electrolyte such as an aqueous buffer solution. To introduce the
sample, the capillary inlet is placed into a vial containing the sample. Sample is introduced into
the capillary via capillary action, pressure, siphoning, or electrokinetically, and the capillary is
then returned to the source vial. The migration of the analytes is initiated by an electric field that
is applied between the source and destination vials and is supplied to the electrodes by the high-voltage power supply. In the most common mode of CE, all ions, positive or negative, are pulled
through the capillary in the same direction by electroosmotic flow, as will be explained. The
analytes separate as they migrate due to their electrophoretic mobility, as will be explained, and
are detected near the outlet end of the capillary. The output of the detector is sent to a data output
and handling device such as an integrator or computer. The data is then displayed as an
electropherogram, which reports detector response as a function of time. Separated chemical
compounds appear as peaks with different retention times in an electropherogram.[2] Capillary
electrophoresis was first combined with mass spectrometry by Richard D. Smith and coworkers,
and provides extremely high sensitivity for the analysis of very small sample sizes. Despite the
very small sample sizes (typically only a few nanoliters of liquid are introduced into the
capillary), high sensitivity and sharp peaks are achieved in part due to injection strategies that
result in concentration of analytes into a narrow zone near the inlet of the capillary. This is
achieved in either pressure or electrokinetic injections simply by suspending the sample in a
buffer of lower conductivity (e.g. lower salt concentration) than the running buffer. A process
called field-amplified sample stacking (a form of isotachophoresis) results in concentration of
analyte in a narrow zone at the boundary between the low-conductivity sample and the higher-conductivity running buffer.
Detection
Separation by capillary electrophoresis can be detected by several detection devices. The
majority of commercial systems use UV or UV-Vis absorbance as their primary mode of
detection. In these systems, a section of the capillary itself is used as the detection cell. The use
of on-tube detection enables detection of separated analytes with no loss of resolution. In
general, capillaries used in capillary electrophoresis are coated with a polymer (frequently
polyimide or Teflon) for increased flexibility. The portion of the capillary used for UV detection,
however, must be optically transparent. For polyimide-coated capillaries, a segment of the
coating is typically burned or scraped off to provide a bare window several millimeters long.
This bare section of capillary can break easily, and capillaries with transparent coatings are
available to increase the stability of the cell window. The path length of the detection cell in
capillary electrophoresis (~ 50 micrometers) is far less than that of a traditional UV cell (~ 1 cm).
According to the Beer-Lambert law, the sensitivity of the detector is proportional to the path
length of the cell. To improve the sensitivity, the path length can be increased, though this results
in a loss of resolution. The capillary tube itself can be expanded at the detection point, creating a
"bubble cell" with a longer path length or additional tubing can be added at the detection point as
shown in figure 2. Both of these methods, however, will decrease the resolution of the separation.
[3]
Post-column detection utilizing a sheath flow configuration has also been described.
Figure 2: Techniques for increasing the pathlength of the capillary: a.) a bubble cell and b.) a z-cell (additional tubing).[2]
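The Beer-Lambert trade-off described above can be put in numbers. The molar absorptivity, concentration, and the 150 µm bubble-cell path below are illustrative assumptions; the point is only that absorbance, and hence detector sensitivity, scales linearly with path length:

```python
# Beer-Lambert law: A = epsilon * c * l. A bubble cell that triples the
# optical path triples the absorbance signal (illustrative values).

def absorbance(molar_absorptivity, concentration_M, path_length_cm):
    return molar_absorptivity * concentration_M * path_length_cm

a_capillary = absorbance(5000, 1e-5, 50e-4)    # 50 micrometer bore
a_bubble    = absorbance(5000, 1e-5, 150e-4)   # assumed 150 micrometer bubble cell
print(a_bubble / a_capillary)  # ~3x signal gain
```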
Fluorescence detection can also be used in capillary electrophoresis for samples that naturally
fluoresce or are chemically modified to contain fluorescent tags. This mode of detection offers
high sensitivity and improved selectivity for these samples, but cannot be utilized for samples
that do not fluoresce. Numerous labeling strategies are used to create fluorescent derivatives or
conjugates of non-fluorescent molecules, including proteins and DNA. The set-up for
fluorescence detection in a capillary electrophoresis system can be complicated. The method
requires that the light beam be focused on the capillary, which can be difficult for many light
sources.[3] Laser-induced fluorescence has been used in CE systems with detection limits as low
as 10⁻¹⁸ to 10⁻²¹ mol. The sensitivity of the technique is attributed to the high intensity of the
incident light and the ability to accurately focus the light on the capillary.[2] Multi-color
fluorescence detection can be achieved by including multiple dichroic mirrors and bandpass
filters to separate the fluorescence emission amongst multiple detectors (e.g., photomultiplier
tubes), or by using a prism or grating to project spectrally resolved fluorescence emission onto a
position-sensitive detector such as a CCD array. CE systems with 4- and 5-color LIF detection
systems are used routinely for capillary DNA sequencing and genotyping ("DNA fingerprinting")
applications.[4][5]
In order to obtain the identity of sample components, capillary electrophoresis can be directly
coupled with mass spectrometers or Surface Enhanced Raman Spectroscopy (SERS). In most
systems, the capillary outlet is introduced into an ion source that utilizes electrospray ionization
(ESI). The resulting ions are then analyzed by the mass spectrometer. This set-up requires
volatile buffer solutions, which will affect the range of separation modes that can be employed
and the degree of resolution that can be achieved.[3] The measurement and analysis are mostly
done with specialized gel analysis software.
For CE-SERS, capillary electrophoresis eluants can be deposited onto a SERS-active substrate.
Analyte retention times can be translated into spatial distance by moving the SERS-active
substrate at a constant rate during capillary electrophoresis. This allows the subsequent
spectroscopic technique to be applied to specific eluants for identification with high sensitivity.
SERS-active substrates can be chosen that do not interfere with the spectrum of the analytes.[6]
Modes of separation
The separation of compounds by capillary electrophoresis is dependent on the differential migration of analytes in an applied electric field. The electrophoretic migration velocity (u_p) of an analyte toward the electrode of opposite charge is:

    u_p = μ_p E

where μ_p is the electrophoretic mobility of the analyte and E is the applied electric field strength.
The electrophoretic mobility can be determined experimentally from the migration time and the field strength:

    μ_p = (L_d / t_m) / (V / L_t)

where L_d is the distance from the inlet to the detection point, t_m is the time required for the analyte to reach the detection point (migration time), V is the applied voltage, and L_t is the total length of the capillary.[3] Since only charged ions are affected by the electric field, neutral analytes are poorly separated by capillary electrophoresis.
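The apparent mobility μ = (L_d / t_m) / (V / L_t) can be computed directly from an experiment. The capillary dimensions, voltage, and migration time below are made-up but typical values:

```python
# Apparent mobility from migration data: mu = (Ld * Lt) / (t * V),
# in cm^2 V^-1 s^-1. All numeric inputs are illustrative assumptions.

def apparent_mobility(dist_to_detector_cm, total_length_cm,
                      migration_time_s, voltage_V):
    field = voltage_V / total_length_cm            # E = V / Lt, in V/cm
    velocity = dist_to_detector_cm / migration_time_s
    return velocity / field

# 60 cm capillary, 50 cm to the detector, 25 kV, 120 s migration time
mu = apparent_mobility(50.0, 60.0, 120.0, 25000.0)
print(f"{mu:.2e} cm^2/(V*s)")
```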
The velocity of migration of an analyte in capillary electrophoresis will also depend upon the
rate of electroosmotic flow (EOF) of the buffer solution. In a typical system, the electroosmotic
flow is directed toward the negatively charged cathode so that the buffer flows through the
capillary from the source vial to the destination vial. Separated by differing electrophoretic
mobilities, analytes migrate toward the electrode of opposite charge.[2] As a result, negatively
charged analytes are attracted to the positively charged anode, counter to the EOF, while
positively charged analytes are attracted to the cathode, in agreement with the EOF as depicted in
figure 3.
Figure 3: Diagram of the separation of charged and neutral analytes (A) according to their
respective electrophoretic and electroosmotic flow mobilities
The velocity of the electroosmotic flow, u_o, is given by:

    u_o = μ_o E

where μ_o is the electroosmotic mobility, defined as:

    μ_o = ε ζ / η

where ζ is the zeta potential of the capillary wall, ε is the relative permittivity of the buffer solution, and η is the viscosity of the buffer. Experimentally, the electroosmotic mobility can be determined by measuring the retention time of a neutral analyte.[3] The velocity (u) of an analyte in an electric field can then be defined as:

    u = u_p + u_o = (μ_p + μ_o) E
Since the electroosmotic flow of the buffer solution is generally greater than that of the
electrophoretic mobility of the analytes, all analytes are carried along with the buffer solution
toward the cathode. Even small, triply charged anions can be redirected to the cathode by the
relatively powerful EOF of the buffer solution. Negatively charged analytes are retained longer
in the capillary due to their conflicting electrophoretic mobilities.[2] The order of migration seen
by the detector is shown in figure 3: small multiply charged cations migrate quickly and small
multiply charged anions are retained strongly.[3]
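This co-migration picture follows from the net velocity v = (μ_p + μ_o)E. The mobilities and field strength below are illustrative assumptions; note that even the anion moves toward the cathode because the EOF term dominates:

```python
# Net migration velocity in CE: v = (mu_ep + mu_eof) * E.
# Mobilities (cm^2/V/s) and field strength are illustrative assumptions;
# cations have positive mu_ep, anions negative.

E = 400.0        # field strength, V/cm
mu_eof = 6.0e-4  # electroosmotic mobility, directed toward the cathode

analytes = {
    "small cation": +3.0e-4,
    "neutral":       0.0,
    "small anion":  -3.0e-4,  # still swept to the cathode: |mu_ep| < mu_eof
}
for name, mu_ep in analytes.items():
    v = (mu_ep + mu_eof) * E
    print(f"{name:12s} v = {v:+.3f} cm/s")
```

Cations arrive first, then neutrals, then anions, matching the migration order described above.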
Electroosmotic flow is observed when an electric field is applied to a solution in a capillary that
has fixed charges on its interior wall. Charge is accumulated on the inner surface of a capillary
when a buffer solution is placed inside the capillary. In a fused-silica capillary, silanol (Si-OH)
groups attached to the interior wall of the capillary are ionized to negatively charged silanoate
(Si-O-) groups at pH values greater than three. The ionization of the capillary wall can be
enhanced by first running a basic solution, such as NaOH or KOH through the capillary prior to
introducing the buffer solution. Attracted to the negatively charged silanoate groups, the
positively charged cations of the buffer solution will form two inner layers of cations (called the
diffuse double layer or the electrical double layer) on the capillary wall as shown in figure 4. The
first layer is referred to as the fixed layer because it is held tightly to the silanoate groups. The
outer layer, called the mobile layer, is farther from the silanoate groups. The mobile cation layer
is pulled in the direction of the negatively charged cathode when an electric field is applied.
Since these cations are solvated, the bulk buffer solution migrates with the mobile layer, causing
the electroosmotic flow of the buffer solution. Other capillaries including Teflon capillaries also
exhibit electroosmotic flow. The EOF of these capillaries is probably the result of adsorption of
the electrically charged ions of the buffer onto the capillary walls.[2] The rate of EOF is
dependent on the field strength and the charge density of the capillary wall. The wall's charge
density increases with the pH of the buffer solution, so the electroosmotic flow increases
with pH until all of the available silanols lining the wall of the capillary are fully ionized.[3]
Figure 4: Depiction of the interior of a fused-silica gel capillary in the presence of a buffer
solution.
In certain situations where strong electroosomotic flow toward the cathode is undesirable, the
inner surface of the capillary can be coated with polymers, surfactants, or small molecules to
reduce electroosmosis to very low levels, restoring the normal direction of migration (anions
toward the anode, cations toward the cathode). CE instrumentation typically includes power
supplies with reversible polarity, allowing the same instrument to be used in "normal" mode
(with EOF and detection near the cathodic end of the capillary) and "reverse" mode (with EOF
suppressed or reversed, and detection near the anodic end of the capillary). One of the most
common approaches to suppressing EOF, reported by Stellan Hjertén in 1985, is to create a
covalently attached layer of linear polyacrylamide.[7] The silica surface of the capillary is first
modified with a silane reagent bearing a polymerizable vinyl group (e.g. 3-methacryloxypropyltrimethoxysilane), followed by introduction of acrylamide monomer and a
free radical initiator. The acrylamide is polymerized in situ, forming long linear chains, some of
which are covalently attached to the wall-bound silane reagent. Numerous other strategies for
covalent modification of capillary surfaces exist. Dynamic or adsorbed coatings (which can
include polymers or small molecules) are also common.[8] For example, in capillary sequencing
of DNA, the sieving polymer (typically polydimethylacrylamide) suppresses electroosmotic flow
to very low levels.[9] A variety of dynamic capillary coating agents are commercially available to
modify, suppress, or reverse the direction of electroosmotic flow.[10] Besides modulating
electroosmotic flow, capillary wall coatings can also serve the purpose of reducing interactions
between "sticky" analytes (such as proteins) and the capillary wall. Such wall-analyte
interactions, if severe, manifest as reduced peak efficiency, asymmetric (tailing) peaks, or even
complete loss of analyte to the capillary wall.
The number of theoretical plates, N, achievable in capillary electrophoresis is:

    N = μ V / (2 D)

where N is the number of theoretical plates, μ is the apparent mobility in the separation medium, and D is the diffusion coefficient of the analyte. According to this equation, the efficiency of
separation is only limited by diffusion and is proportional to the strength of the electric field,
although practical considerations limit the strength of the electric field to several hundred volts
per centimeter. Application of very high potentials (>20-30 kV) may lead to arcing or breakdown
of the capillary. Further, application of strong electric fields leads to resistive heating (Joule
heating) of the buffer in the capillary. At sufficiently high field strengths, this heating is strong
enough that radial temperature gradients can develop within the capillary. Since electrophoretic
mobility of ions is generally temperature-dependent (due to both temperature-dependent
ionization and solvent viscosity effects), a non-uniform temperature profile results in variation of
electrophoretic mobility across the capillary, and a loss of resolution. The onset of significant
Joule heating can be determined by constructing an "Ohm's Law plot", wherein the current
through the capillary is measured as a function of applied potential. At low fields, the current is
proportional to the applied potential (Ohm's Law), whereas at higher fields the current deviates
from the straight line as heating results in decreased resistance of the buffer. The best resolution
is typically obtained at the maximum field strength for which Joule heating is insignificant (i.e.
near the boundary between the linear and nonlinear regimes of the Ohm's Law plot). Generally
capillaries of smaller inner diameter support use of higher field strengths, due to improved heat
dissipation and smaller thermal gradients relative to larger capillaries, but with the drawbacks of
lower sensitivity in absorbance detection due to shorter path length, and greater difficulty in
introducing buffer and sample into the capillary (small capillaries require greater pressure and/or
longer times to force fluids through the capillary).
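The plate-count relation N = μV/(2D) makes the diffusion-limited efficiency concrete. The mobility and diffusion coefficient below are illustrative values typical of a small ion in CZE:

```python
# Diffusion-limited plate count in CE: N = mu * V / (2 * D).
# Inputs are illustrative assumptions for a small ion at 25 kV.

def theoretical_plates(mobility_cm2_Vs, voltage_V, diffusion_cm2_s):
    return mobility_cm2_Vs * voltage_V / (2.0 * diffusion_cm2_s)

N = theoretical_plates(5.0e-4, 25000.0, 1.0e-5)
print(f"N = {N:.0f}")  # on the order of several hundred thousand plates
```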
The efficiency of capillary electrophoresis separations is typically much higher than the
efficiency of other separation techniques like HPLC. Unlike HPLC, in capillary electrophoresis
there is no mass transfer between phases.[3] In addition, the flow profile in EOF-driven systems is
flat, rather than the rounded laminar flow profile characteristic of the pressure-driven flow in
chromatography columns as shown in figure 5. As a result, EOF does not significantly contribute
to band broadening as in pressure-driven chromatography. Capillary electrophoresis separations
can have several hundred thousand theoretical plates.[11]
The resolution (R_s) of two analytes can be written as:

    R_s = (1 / (4√2)) Δμ_p √( V / ( D (μ̄_p + μ_o) ) )

where Δμ_p is the difference in electrophoretic mobility between the two analytes and μ̄_p is their average electrophoretic mobility. According to this equation, maximum resolution is reached when the electrophoretic and electroosmotic mobilities are similar in magnitude and opposite in sign. In addition, it can be seen that high resolution requires lower velocity and, correspondingly, increased analysis time.[3]
Besides diffusion and Joule heating (discussed above), factors that may decrease the resolution in
capillary electrophoresis from the theoretical limits in the above equation include, but are not
limited to, the finite widths of the injection plug and detection window; interactions between the
analyte and the capillary wall; instrumental non-idealities such as a slight difference in height of
the fluid reservoirs leading to siphoning; irregularities in the electric field due to, e.g.,
imperfectly cut capillary ends; depletion of buffering capacity in the reservoirs; and
electrodispersion (when an analyte has higher conductivity than the background electrolyte).[12]
Identifying and minimizing the numerous sources of band broadening is key to successful
method development in capillary electrophoresis, with the objective of approaching as close as
possible to the ideal of diffusion-limited resolution.
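The resolution behavior can be sketched numerically with the Jorgenson–Lukacs form R = (1/(4√2)) Δμ √(V / (D(μ̄ + μ_o))). All mobilities, the voltage, and the diffusion coefficient below are illustrative assumptions; the comparison shows that slowing the net migration (EOF opposing the electrophoretic mobility) raises resolution:

```python
import math

# Jorgenson-Lukacs resolution estimate:
# R = (d_mu / (4*sqrt(2))) * sqrt(V / (D * (mu_avg + mu_eof))).
# All numeric inputs are illustrative assumptions.

def resolution(d_mu, mu_avg, mu_eof, voltage_V, diffusion_cm2_s):
    return (d_mu / (4.0 * math.sqrt(2.0))) * math.sqrt(
        voltage_V / (diffusion_cm2_s * (mu_avg + mu_eof)))

# Co-directional EOF (fast net migration) vs. counter-directional EOF
# (slow net migration): the slower separation resolves better.
fast = resolution(1.0e-5, 5.0e-4, +6.0e-4, 25000.0, 1.0e-5)
slow = resolution(1.0e-5, 5.0e-4, -4.0e-4, 25000.0, 1.0e-5)
print(f"fast net migration: R = {fast:.2f}")
print(f"slow net migration: R = {slow:.2f}")
```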
References
1.
2. Skoog, D.A.; Holler, F.J.; Crouch, S.R. Principles of Instrumental Analysis, 6th ed. Thomson Brooks/Cole Publishing: Belmont, CA, 2007.
3. Skoog, D.A.; Holler, F.J.; Crouch, S.R. Principles of Instrumental Analysis, 6th ed., Chapter 30. Thomson Brooks/Cole Publishing: Belmont, CA, 2007.
4.
5. Butler, John (2004). "Forensic DNA typing by capillary electrophoresis using the ABI Prism 310 and 3100 genetic analyzers for STR analysis". Electrophoresis 25: 1397–1412. doi:10.1002/elps.200305822. PMID 15188225. Retrieved 2014-04-09.
6. Lin, H.; Natan, M.; Keating, C. Anal. Chem. 2000, 72, 5348–5355.
7.
8.
9.
10.
11. Skoog, D.A.; Holler, F.J.; Nieman, T.A. Principles of Instrumental Analysis, 5th ed. Saunders College Publishing: Philadelphia, 1998.
12.
Bibliography
Terabe, S.; Otsuka, K.; Ichikawa, K.; Tsuchiya, A.; Ando, T. Anal. Chem. 1984, 56, 111.
Terabe, S.; Otsuka, K.; Ichikawa, K.; Tsuchiya, A.; Ando, T. Anal. Chem. 1984, 56, 113.
Carretero, A.S.; Cruces-Blanco, C.; Ramirez, S.C.; Pancorbo, A.C.; Gutierrez, A.F. J.
Agric. Food. Chem. 2004, 52, 5791.
Cavazza, A.; Corradini, C.; Lauria, A.; Nicoletti, I. J. Agric. Food Chem. 2000, 48, 3324.
Rodrigues, M.R.A.; Caramao, E.B.; Arce, L.; Rios, A.; Valcarcel, M. J. Agric. Food
Chem. 2002, 50, 4215.
External links
CE animations [1]
Categories:
Chromatography
Electrophoresis
Polymerase chain reaction
The polymerase chain reaction (PCR) is a technique in molecular biology used
to amplify a single copy or a few copies of a piece of DNA across several orders of magnitude,
generating thousands to millions of copies of a particular DNA sequence.
Developed in 1983 by Kary Mullis,[1][2] PCR is now a common and often indispensable technique
used in medical and biological research labs for a variety of applications.[3][4] These include DNA
cloning for sequencing, DNA-based phylogeny, or functional analysis of genes; the diagnosis of
hereditary diseases; the identification of genetic fingerprints (used in forensic sciences and
paternity testing); and the detection and diagnosis of infectious diseases. In 1993, Mullis was
awarded the Nobel Prize in Chemistry along with Michael Smith for his work on PCR.[5]
The method relies on thermal cycling, consisting of cycles of repeated heating and cooling of the
reaction for DNA melting and enzymatic replication of the DNA. Primers (short DNA fragments)
containing sequences complementary to the target region along with a DNA polymerase (after
which the method is named) are key components to enable selective and repeated amplification.
As PCR progresses, the DNA generated is itself used as a template for replication, setting in
motion a chain reaction in which the DNA template is exponentially amplified. PCR can be
extensively modified to perform a wide array of genetic manipulations.
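The chain-reaction arithmetic described above is simple to sketch; the function below assumes a per-cycle efficiency between 0 and 1, with 1.0 modeling the idealized perfect doubling (real reactions fall short of this):

```python
def amplified_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Idealized PCR yield: each cycle multiplies the template by (1 + efficiency).

    efficiency = 1.0 models perfect doubling; real reactions are less efficient.
    """
    return initial_copies * (1 + efficiency) ** cycles

# A single template molecule after 30 ideal cycles: 2**30, roughly a billion copies.
print(f"{amplified_copies(1, 30):.3e}")

# The same 30 cycles at 90% per-cycle efficiency yield markedly less:
print(f"{amplified_copies(1, 30, efficiency=0.9):.3e}")
```

The exponential form is why even a few template molecules suffice as starting material.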
Almost all PCR applications employ a heat-stable DNA polymerase, such as Taq polymerase (an
enzyme originally isolated from the bacterium Thermus aquaticus). This DNA polymerase
enzymatically assembles a new DNA strand from DNA building-blocks, the nucleotides, by
using single-stranded DNA as a template and DNA oligonucleotides (also called DNA primers),
which are required for initiation of DNA synthesis. The vast majority of PCR methods use
thermal cycling, i.e., alternately heating and cooling the PCR sample through a defined series of
temperature steps.
In the first step, the two strands of the DNA double helix are physically separated at a high
temperature in a process called DNA melting. In the second step, the temperature is lowered and
the two DNA strands become templates for DNA polymerase to selectively amplify the target
DNA. The selectivity of PCR results from the use of primers that are complementary to the DNA
region targeted for amplification under specific thermal cycling conditions.
Placing a strip of eight PCR tubes, each containing a 100 µl reaction mixture, into the thermal cycler
A basic PCR set-up requires several components and reagents, including:
Two primers that are complementary to the 3' (three prime) ends of each of the sense and anti-sense strands of the DNA target.
Buffer solution, providing a suitable chemical environment for optimum activity and
stability of the DNA polymerase.
Bivalent cations, magnesium or manganese ions; generally Mg2+ is used, but Mn2+ can be utilized for PCR-mediated DNA mutagenesis, as a higher Mn2+ concentration increases the error rate during DNA synthesis.[9]
The PCR is commonly carried out in a reaction volume of 10–200 µl in small reaction tubes (0.2–0.5 ml volumes) in a thermal cycler. The thermal cycler heats and cools the reaction tubes
to achieve the temperatures required at each step of the reaction (see below). Many modern
thermal cyclers make use of the Peltier effect, which permits both heating and cooling of the
block holding the PCR tubes simply by reversing the electric current. Thin-walled reaction tubes
permit favorable thermal conductivity to allow for rapid thermal equilibration. Most thermal
cyclers have heated lids to prevent condensation at the top of the reaction tube. Older
thermocyclers lacking a heated lid require a layer of oil on top of the reaction mixture or a ball of
wax inside the tube.
Procedure
Typically, PCR consists of a series of 20–40 repeated temperature changes, called cycles, with each cycle commonly consisting of 2–3 discrete temperature steps, usually three (Figure below). The cycling is often preceded by a single temperature step at a high temperature (>90 °C), and
followed by one hold at the end for final product extension or brief storage. The temperatures
used and the length of time they are applied in each cycle depend on a variety of parameters.
These include the enzyme used for DNA synthesis, the concentration of divalent ions and dNTPs
in the reaction, and the melting temperature (Tm) of the primers.[10]
Initialization step: This step consists of heating the reaction to a temperature of 94–96 °C (or 98 °C if extremely thermostable polymerases are used), which is held for 1–9 minutes. It is only required for DNA polymerases that require heat activation by hot-start PCR.[11]
Denaturation step: This step is the first regular cycling event and consists of heating the reaction to 94–98 °C for 20–30 seconds. It causes melting of the DNA template by disrupting the hydrogen bonds between complementary bases, yielding single-stranded DNA molecules.
Annealing step: The reaction temperature is lowered to 50–65 °C for 20–40 seconds, allowing annealing of the primers to the single-stranded DNA template. Typically the annealing temperature is about 3–5 °C below the Tm of the primers used. Stable DNA–DNA hydrogen bonds are only formed when the primer sequence very closely matches the template sequence. The polymerase binds to the primer-template hybrid and begins DNA formation.
Extension/elongation step: The temperature at this step depends on the DNA polymerase used; Taq polymerase has its optimum activity temperature at 75–80 °C,[12][13] and commonly a temperature of 72 °C is used with this enzyme. At this step the DNA
polymerase synthesizes a new DNA strand complementary to the DNA template strand
by adding dNTPs that are complementary to the template in 5' to 3' direction, condensing
the 5'-phosphate group of the dNTPs with the 3'-hydroxyl group at the end of the nascent
(extending) DNA strand. The extension time depends both on the DNA polymerase used
and on the length of the DNA fragment to be amplified. As a rule-of-thumb, at its
optimum temperature, the DNA polymerase will polymerize a thousand bases per minute.
Under optimum conditions, i.e., if there are no limitations due to limiting substrates or
reagents, at each extension step, the amount of DNA target is doubled, leading to
exponential (geometric) amplification of the specific DNA fragment.
Final hold: This step at 4–15 °C for an indefinite time may be employed for short-term storage of the reaction.
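The cycling protocol above can be written down as a simple program table. The sketch below uses the Wallace rule (Tm ≈ 2·(A+T) + 4·(G+C) °C, a rough estimate valid only for short oligonucleotides) to pick an annealing temperature about 5 °C below the primer Tm; the primer sequence and step durations are illustrative assumptions, not a validated protocol:

```python
def wallace_tm(primer: str) -> int:
    """Rough primer melting temperature (°C) via the Wallace rule: 2*(A+T) + 4*(G+C)."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def cycle_program(primer_tm: int, cycles: int = 35):
    """Build a (step name, °C, seconds) list mirroring the steps described above."""
    anneal = primer_tm - 5  # about 3-5 °C below the primer Tm
    steps = [("initial denaturation", 95, 300)]
    for _ in range(cycles):
        steps += [("denature", 95, 25), ("anneal", anneal, 30), ("extend", 72, 60)]
    steps.append(("final hold", 4, None))  # indefinite hold, duration left open
    return steps

tm = wallace_tm("AGCTTCGGATCGATCGTAGC")  # hypothetical 20-mer primer
program = cycle_program(tm)
print(f"primer Tm ~{tm} °C, annealing at {tm - 5} °C, {len(program)} program steps")
```

A real protocol would instead take the Tm from the primer manufacturer or a nearest-neighbor calculation, but the program structure is the same.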
Figure 3: Ethidium bromide-stained PCR products after gel electrophoresis. Two sets of primers
were used to amplify a target sequence from three different tissue samples. No amplification is
present in sample #1; DNA bands in sample #2 and #3 indicate successful amplification of the
target sequence. The gel also shows a positive control, and a DNA ladder containing DNA
fragments of defined length for sizing the bands in the experimental PCRs.
To check whether the PCR generated the anticipated DNA fragment (also sometimes referred to
as the amplimer or amplicon), agarose gel electrophoresis is employed for size separation of the
PCR products. The size(s) of PCR products is determined by comparison with a DNA ladder (a
molecular weight marker), which contains DNA fragments of known size, run on the gel
alongside the PCR products (see Fig. 3).
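Band sizing against the ladder relies on migration distance being roughly linear in log10(fragment size) over the resolving range of the gel; a small sketch with an invented ladder (not real gel data):

```python
import math

# Hypothetical ladder: (migration distance in mm, fragment size in bp).
# On agarose gels, migration distance is roughly linear in log10(size).
ladder = [(10.0, 3000), (18.0, 1000), (26.0, 300), (34.0, 100)]

def estimate_size(distance_mm: float) -> float:
    """Linearly interpolate log10(bp) between the two bracketing ladder bands."""
    for (d1, s1), (d2, s2) in zip(ladder, ladder[1:]):
        if d1 <= distance_mm <= d2:
            frac = (distance_mm - d1) / (d2 - d1)
            log_size = math.log10(s1) + frac * (math.log10(s2) - math.log10(s1))
            return 10 ** log_size
    raise ValueError("distance outside ladder range")

# A band migrating 22 mm sits midway between the 1000 bp and 300 bp bands,
# so the estimate is their geometric mean, about 548 bp.
print(f"~{estimate_size(22.0):.0f} bp")
```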
PCR stages
The PCR process can be divided into three stages:
Exponential amplification: At every cycle, the amount of product is doubled (assuming 100%
reaction efficiency). The reaction is very sensitive: only minute quantities of DNA need to be
present.[14]
Leveling off stage: The reaction slows as the DNA polymerase loses activity and as consumption
of reagents such as dNTPs and primers causes them to become limiting.
Plateau: No more product accumulates due to exhaustion of reagents and enzyme.
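These three stages fall out of a toy model in which per-cycle efficiency drops as product approaches a reagent-limited ceiling; the linear decay law here is an illustrative assumption, not a kinetic model from the literature:

```python
def simulate_pcr(initial: float, cycles: int, capacity: float) -> list:
    """Toy PCR curve: doubling throttled as product nears a reagent capacity.

    Per-cycle efficiency = 1 - product/capacity, so growth is exponential at
    first (efficiency ~1), then levels off, then plateaus near capacity.
    """
    product = initial
    trajectory = [product]
    for _ in range(cycles):
        efficiency = max(0.0, 1.0 - product / capacity)
        product += product * efficiency
        trajectory.append(product)
    return trajectory

curve = simulate_pcr(initial=1.0, cycles=40, capacity=1e9)
# Early cycles double almost perfectly; late cycles barely move (plateau).
print(f"cycle 10: {curve[10]:.3g}, cycle 40: {curve[40]:.3g}")
```

This is the same shape exploited by real-time PCR instruments, which quantify template from the cycle at which the curve leaves the exponential stage.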
PCR optimization
Main article: PCR optimization
In practice, PCR can fail for various reasons, in part due to its sensitivity to contamination
causing amplification of spurious DNA products. Because of this, a number of techniques and
procedures have been developed for optimizing PCR conditions.[15][16] Contamination with
extraneous DNA is addressed with lab protocols and procedures that separate pre-PCR mixtures
from potential DNA contaminants.[8] This usually involves spatial separation of PCR-setup areas
from areas for analysis or purification of PCR products, use of disposable plasticware, and
thoroughly cleaning the work surface between reaction setups. Primer-design techniques are
important in improving PCR product yield and in avoiding the formation of spurious products,
and the usage of alternate buffer components or polymerase enzymes can help with amplification
of long or otherwise problematic regions of DNA. Addition of reagents, such as formamide, in
buffer systems may increase the specificity and yield of PCR.[17] Computer simulations of
theoretical PCR results (Electronic PCR) may be performed to assist in primer design.[18]
Application of PCR
Main article: Applications of PCR
Figure 4: Electrophoresis of PCR-amplified DNA fragments. (1) Father. (2) Child. (3) Mother.
The child has inherited some, but not all of the fingerprint of each of its parents, giving it a new,
unique fingerprint.
PCR allows for rapid and highly specific diagnosis of infectious diseases, including those caused
by bacteria or viruses.[21] PCR also permits identification of non-cultivatable or slow-growing
microorganisms such as mycobacteria, anaerobic bacteria, or viruses from tissue culture assays
and animal models. The basis for PCR diagnostic applications in microbiology is the detection of
infectious agents and the discrimination of non-pathogenic from pathogenic strains by virtue of
specific genes.[21][22]
Viral DNA can likewise be detected by PCR. The primers used need to be specific to the targeted
sequences in the DNA of a virus, and the PCR can be used for diagnostic analyses or DNA
sequencing of the viral genome. The high sensitivity of PCR permits virus detection soon after
infection and even before the onset of disease.[21] Such early detection may give physicians a
significant lead time in treatment. The amount of virus ("viral load") in a patient can also be
quantified by PCR-based DNA quantitation techniques (see below).
Assembly PCR or Polymerase Cycling Assembly (PCA): artificial synthesis of long DNA
sequences by performing PCR on a pool of long oligonucleotides with short overlapping
segments. The oligonucleotides alternate between sense and antisense directions, and the
overlapping segments determine the order of the PCR fragments, thereby selectively
producing the final long DNA product.[24]
Dial-out PCR: a highly parallel method for retrieving accurate DNA molecules for gene
synthesis. A complex library of DNA molecules is modified with unique flanking tags
before massively parallel sequencing. Tag-directed primers then enable the retrieval of
molecules with desired sequences by PCR.[27]
Digital PCR (dPCR): used to measure the quantity of a target DNA sequence in a DNA
sample. The DNA sample is highly diluted so that after running many PCRs in parallel,
some of them will not receive a single molecule of the target DNA. The target DNA
concentration is calculated using the proportion of negative outcomes. Hence the name
'digital PCR'.
Hot start PCR: a technique that reduces non-specific amplification during the initial set
up stages of the PCR. It may be performed manually by heating the reaction components
to the denaturation temperature (e.g., 95 °C) before adding the polymerase.[29] Specialized
enzyme systems have been developed that inhibit the polymerase's activity at ambient
temperature, either by the binding of an antibody[11][30] or by the presence of covalently bound inhibitors that dissociate only after a high-temperature activation step. Hot-start/cold-finish PCR is achieved with new hybrid polymerases that are inactive at ambient temperature and are instantly activated at elongation temperature.
In silico PCR (virtual PCR, electronic PCR, e-PCR) refers to computational
tools used to calculate theoretical polymerase chain reaction results using a given set of
primers (probes) to amplify DNA sequences from a sequenced genome or transcriptome.
In silico PCR was proposed as an educational tool for molecular biology.[31]
Intersequence-specific PCR (ISSR): a PCR method for DNA fingerprinting that amplifies
regions between simple sequence repeats to produce a unique fingerprint of amplified
fragment lengths.[32]
Inverse PCR: is commonly used to identify the flanking sequences around genomic
inserts. It involves a series of DNA digestions and self ligation, resulting in known
sequences at either end of the unknown sequence.[33]
Ligation-mediated PCR: uses small DNA linkers ligated to the DNA of interest and
multiple primers annealing to the DNA linkers; it has been used for DNA sequencing,
genome walking, and DNA footprinting.[34]
Methylation-specific PCR (MSP): developed by Stephen Baylin and Jim Herman at the
Johns Hopkins School of Medicine,[35] and is used to detect methylation of CpG islands in
genomic DNA. DNA is first treated with sodium bisulfite, which converts unmethylated
cytosine bases to uracil, which is recognized by PCR primers as thymine. Two PCRs are
then carried out on the modified DNA, using primer sets identical except at any CpG
islands within the primer sequences. At these points, one primer set recognizes DNA with
cytosines to amplify methylated DNA, and one set recognizes DNA with uracil or
thymine to amplify unmethylated DNA. MSP using qPCR can also be performed to
obtain quantitative rather than qualitative information about methylation.
Miniprimer PCR: uses a thermostable polymerase (S-Tbr) that can extend from short
primers ("smalligos") as short as 9 or 10 nucleotides. This method permits PCR targeting
to smaller primer binding regions, and is used to amplify conserved DNA sequences,
such as the 16S (or eukaryotic 18S) rRNA gene.[36]
Multiplex-PCR: consists of multiple primer sets within a single PCR mixture to produce
amplicons of varying sizes that are specific to different DNA sequences. By targeting
multiple genes at once, additional information may be gained from a single test-run that
otherwise would require several times the reagents and more time to perform. Annealing
temperatures for each of the primer sets must be optimized to work correctly within a single reaction, and amplicon sizes should differ enough in base-pair length to form distinct bands when visualized by gel electrophoresis.
Nanoparticle-Assisted PCR (nanoPCR): In recent years, it has been reported that some
nanoparticles (NPs) can enhance the efficiency of PCR (thus being called nanoPCR), and
some even perform better than the original PCR enhancers. It was also found that
quantum dots (QDs) can improve PCR specificity and efficiency. Single-walled carbon
nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs) are efficient in
enhancing the amplification of long PCR. Carbon nanopowder (CNP) has been reported to improve the efficiency of repeated PCR and long PCR, and ZnO, TiO2, and Ag NPs were also found to increase PCR yield. Importantly, available data indicate that non-metallic NPs retain acceptable amplification fidelity. Given that many NPs can enhance PCR efficiency, nanoPCR holds considerable potential for further technical improvement and product development.[37][38]
Nested PCR: increases the specificity of DNA amplification, by reducing background due
to non-specific amplification of DNA. Two sets of primers are used in two successive
PCRs. In the first reaction, one pair of primers is used to generate DNA products, which
besides the intended target, may still consist of non-specifically amplified DNA
fragments. The product(s) are then used in a second PCR with a set of primers whose
binding sites are completely or partially different from and located 3' of each of the
primers used in the first reaction. Nested PCR is often more successful in specifically
amplifying long DNA fragments than conventional PCR, but it requires more detailed
knowledge of the target sequences.
Overlap-extension PCR (gene splicing by overlap extension): a genetic engineering technique used to splice together two or more DNA fragments that contain genes, regulatory sequences, or mutations; the technique enables creation of specific and long DNA constructs. It can also introduce deletions, insertions, or point mutations into a DNA sequence.[39][40]
PAN-AC: uses isothermal conditions for amplification, and may be used in living cells.[41]
[42]
Quantitative PCR (qPCR): used to measure the quantity of a target sequence (commonly in real-time). It quantitatively measures starting amounts of DNA, cDNA, or RNA. Quantitative PCR is commonly used to determine whether a DNA sequence is present in a sample and the number of its copies in the sample, and it has a very high degree of precision. Quantitative PCR methods use fluorescent dyes, such as SYBR Green or EvaGreen, or fluorophore-containing DNA probes, such as TaqMan, to measure the amount of amplified product in real time. It is also sometimes abbreviated to RT-PCR (real-time PCR), but this abbreviation should be used only for reverse transcription PCR; qPCR is the appropriate contraction for quantitative (real-time) PCR.
Reverse Transcription PCR (RT-PCR): for amplifying DNA from RNA. Reverse transcriptase reverse-transcribes RNA into cDNA, which is then amplified by PCR. RT-PCR is widely used in expression profiling, to determine the expression of a gene or to
identify the sequence of an RNA transcript, including transcription start and termination
sites. If the genomic DNA sequence of a gene is known, RT-PCR can be used to map the
location of exons and introns in the gene. The 5' end of a gene (corresponding to the
transcription start site) is typically identified by RACE-PCR (Rapid Amplification of
cDNA Ends).
Suicide PCR: typically used in paleogenetics or other studies where avoiding false
positives and ensuring the specificity of the amplified fragment is the highest priority. It
was originally described in a study to verify the presence of the microbe Yersinia pestis in
dental samples obtained from 14th Century graves of people supposedly killed by plague
during the medieval Black Death epidemic.[45] The method prescribes the use of any
primer combination only once in a PCR (hence the term "suicide"), which should never
have been used in any positive control PCR reaction, and the primers should always
target a genomic region never amplified before in the lab using this or any other set of
primers. This ensures that no contaminating DNA from previous PCR reactions is present
in the lab, which could otherwise generate false positives.
Touchdown PCR (Step-down PCR): a variant of PCR that aims to reduce nonspecific
background by gradually lowering the annealing temperature as PCR cycling progresses.
The annealing temperature at the initial cycles is usually a few degrees (3–5 °C) above the Tm of the primers used, while at the later cycles, it is a few degrees (3–5 °C) below the
primer Tm. The higher temperatures give greater specificity for primer binding, and the
lower temperatures permit more efficient amplification from the specific products formed
during the initial cycles.[47]
Universal Fast Walking: for genome walking and genetic fingerprinting using a more specific 'two-sided' PCR than conventional 'one-sided' approaches (using only one gene-specific primer and one general primer, which can lead to artefactual 'noise')[48] by virtue of a mechanism involving lariat structure formation. Streamlined derivatives of
UFW are LaNe RAGE (lariat-dependent nested PCR for rapid amplification of genomic
DNA ends),[49] 5'RACE LaNe[50] and 3'RACE LaNe.[51]
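The digital PCR estimate described above follows from Poisson statistics: if target molecules distribute randomly across the diluted partitions, the probability that a partition receives none is e^(-λ), so λ = -ln(fraction negative) targets per partition. A minimal sketch with made-up partition counts:

```python
import math

def targets_per_partition(negative: int, total: int) -> float:
    """Poisson estimate of mean targets per partition from the negative fraction.

    P(partition contains zero targets) = exp(-lam)  =>  lam = -ln(negative/total)
    """
    if not 0 < negative < total:
        raise ValueError("need some, but not all, partitions to be negative")
    return -math.log(negative / total)

# Hypothetical run: 20,000 partitions, 12,000 of them show no amplification.
lam = targets_per_partition(12_000, 20_000)
total_targets = lam * 20_000
print(f"mean occupancy: {lam:.3f} targets/partition (~{total_targets:.0f} targets loaded)")
```

The Poisson correction matters because a positive partition may have started with more than one target molecule; simply counting positives would undercount.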
History
Diagrammatic representation of an example primer pair. The use of primers in an in vitro assay
to allow DNA synthesis was a major innovation which allowed the development of PCR.
Main article: History of polymerase chain reaction
A 1971 paper in the Journal of Molecular Biology by Kleppe and co-workers first described a
method using an enzymatic assay to replicate a short DNA template with primers in vitro.[52]
However, this early manifestation of the basic PCR principle did not receive much attention, and
the invention of the polymerase chain reaction in 1983 is generally credited to Kary Mullis.[53]
When Mullis developed the PCR in 1983, he was working in Emeryville, California for Cetus
Corporation, one of the first biotechnology companies. There, he was responsible for
synthesizing short chains of DNA. Mullis has written that he conceived of PCR while cruising
along the Pacific Coast Highway one night in his car.[54] He was playing in his mind with a new
way of analyzing changes (mutations) in DNA when he realized that he had instead invented a
method of amplifying any DNA region through repeated cycles of duplication driven by DNA
polymerase. In Scientific American, Mullis summarized the procedure: "Beginning with a single
molecule of the genetic material DNA, the PCR can generate 100 billion similar molecules in an
afternoon. The reaction is easy to execute. It requires no more than a test tube, a few simple
reagents, and a source of heat."[55] He was awarded the Nobel Prize in Chemistry in 1993 for his
invention,[5] seven years after he and his colleagues at Cetus first put his proposal to practice.
However, some controversies have remained about the intellectual and practical contributions of
other scientists to Mullis' work, and whether he had been the sole inventor of the PCR principle
(see below).
At the core of the PCR method is the use of a suitable DNA polymerase able to withstand the high temperatures of >90 °C (194 °F) required for separation of the two DNA strands in the DNA double helix after each replication cycle. The DNA polymerases initially employed for in
vitro experiments presaging PCR were unable to withstand these high temperatures.[3] So the
early procedures for DNA replication were very inefficient and time consuming, and required
large amounts of DNA polymerase and continuous handling throughout the process.
The discovery in 1976 of Taq polymerase, a DNA polymerase purified from the thermophilic bacterium Thermus aquaticus, which naturally lives in hot (50 to 80 °C; 122 to 176 °F) environments[12] such as hot springs, paved the way for dramatic improvements of the PCR method. The DNA polymerase isolated from T. aquaticus is stable at high temperatures, remaining active even after DNA denaturation,[13] thus obviating the need to add new DNA
polymerase after each cycle.[4] This allowed an automated thermocycler-based process for DNA
amplification.
Patent disputes
The PCR technique was patented by Kary Mullis and assigned to Cetus Corporation, where
Mullis worked when he invented the technique in 1983. The Taq polymerase enzyme was also
covered by patents. There have been several high-profile lawsuits related to the technique, including an unsuccessful lawsuit brought by DuPont. The pharmaceutical company Hoffmann-La Roche purchased the rights to the patents in 1992 and currently holds those that are still protected.
A related patent battle over the Taq polymerase enzyme is still ongoing in several jurisdictions
around the world between Roche and Promega. The legal arguments have extended beyond the
lives of the original PCR and Taq polymerase patents, which expired on March 28, 2005.[56]
References
1.
Bartlett, J. M. S.; Stirling, D. (2003). "A Short History of the Polymerase Chain Reaction". PCR Protocols 226. pp. 3–6. doi:10.1385/1-59259-384-4:3. ISBN 1-59259-384-4.
2.
US4683195
3.
Saiki, R.; Scharf, S.; Faloona, F.; Mullis, K.; Horn, G.; Erlich, H.; Arnheim, N. (1985). "Enzymatic amplification of beta-globin genomic sequences and restriction site analysis for diagnosis of sickle cell anemia". Science 230 (4732): 1350–1354. doi:10.1126/science.2999980. PMID 2999980.
4.
Saiki, R.; Gelfand, D.; Stoffel, S.; Scharf, S.; Higuchi, R.; Horn, G.; Mullis, K.; Erlich, H. (1988). "Primer-directed enzymatic amplification of DNA with a thermostable DNA polymerase". Science 239 (4839): 487–491. doi:10.1126/science.2448875. PMID 2448875.
5.
6.
7.
8.
9.
10.
11.
Sharkey, D. J.; Scalice, E. R.; Christy, K. G.; Atwood, S. M.; Daiss, J. L. (1994). "Antibodies as Thermolabile Switches: High Temperature Triggering for the Polymerase Chain Reaction". Bio/Technology 12 (5): 506–509. doi:10.1038/nbt0594-506.
12.
13.
Lawyer, F.; Stoffel, S.; Saiki, R.; Chang, S.; Landre, P.; Abramson, R.; Gelfand, D. (1993). "High-level expression, purification, and enzymatic characterization of full-length Thermus aquaticus DNA polymerase and a truncated form deficient in 5' to 3' exonuclease activity". PCR Methods and Applications 2 (4): 275–287. doi:10.1101/gr.2.4.275. PMID 8324500.
14.
15.
16.
17.
18.
19.
Pavlov, A.R.; Pavlova, N.V.; Kozyavkin, S.A.; Slesarev, A.I. (2006). "Thermostable DNA Polymerases for a Wide Spectrum of Applications: Comparison of a Robust Hybrid TopoTaq to Other Enzymes". In Kieleczawa, J. DNA Sequencing II: Optimizing Preparation and Cleanup. Jones and Bartlett. pp. 241–257. ISBN 0-7637-3383-0.
20.
21.
22.
23.
24.
Stemmer, W.P.; Crameri, A.; Ha, K.D.; Brennan, T.M.; Heyneker, H.L. (1995). "Single-step assembly of a gene and entire plasmid from large numbers of oligodeoxyribonucleotides". Gene 164 (1): 49–53. doi:10.1016/0378-1119(95)00511-4. PMID 7590320.
25.
Innis, M.A.; Myambo, K.B.; Gelfand, D.H.; Brow, M.A. (1988). "DNA sequencing with Thermus aquaticus DNA polymerase and direct sequencing of polymerase chain reaction-amplified DNA". Proc Natl Acad Sci USA 85 (24): 9436–9440. doi:10.1073/pnas.85.24.9436. PMC 282767. PMID 3200828.
26.
27.
Schwartz, J.J.; Lee, C.; Shendure, J. (2012). "Accurate gene synthesis with tag-directed retrieval of sequence-verified DNA molecules". Nature Methods 9 (9): 913–915. doi:10.1038/nmeth.2137. PMC 3433648. PMID 22886093.
28.
29.
30.
31.
32.
33.
34.
Mueller, P.R.; Wold, B. (1989). "In vivo footprinting of a muscle specific enhancer by ligation mediated PCR". Science 246 (4931): 780–786. doi:10.1126/science.2814500. PMID 2814500.
35.
Herman, J.G.; Graff, J.R.; Myöhänen, S.; Nelkin, B.D.; Baylin, S.B. (1996). "Methylation-specific PCR: a novel PCR assay for methylation status of CpG islands". Proc Natl Acad Sci USA 93 (18): 9821–9826. doi:10.1073/pnas.93.18.9821. PMC 38513. PMID 8790415.
36.
37.
Shen, C.; Yang, W.; Ji, Q.; Maki, H.; Dong, A.; Zhang, Z. (2009). "NanoPCR observation: different levels of DNA replication fidelity in nanoparticle-enhanced polymerase chain reactions". Nanotechnology 20: 455103. doi:10.1088/0957-4484/20/45/455103.
38.
39.
Horton, R.M.; Hunt, H.D.; Ho, S.N.; Pullen, J.K.; Pease, L.R. (1989). "Engineering hybrid genes without the use of restriction enzymes: gene splicing by overlap extension". Gene 77 (1): 61–68. doi:10.1016/0378-1119(89)90359-4. PMID 2744488.
40.
Moller, Simon (2006). PCR (The Basics). Taylor & Francis Group. p. 144.
41.
42.
43.
44.
45.
Raoult, D.; Aboudharam, G.; Crubezy, E.; Larrouy, G.; Ludes, B.; Drancourt, M. (2000-11-07). "Molecular identification by "suicide PCR" of Yersinia pestis as the agent of medieval black death". Proc. Natl. Acad. Sci. U.S.A. 97 (23): 12800–12803. doi:10.1073/pnas.220225197. ISSN 0027-8424. PMC 18844. PMID 11058154.
46.
47.
Don, R.H.; Cox, P.T.; Wainwright, B.J.; Baker, K.; Mattick, J.S. (1991). "'Touchdown' PCR to circumvent spurious priming during gene amplification". Nucl Acids Res 19 (14): 4008. doi:10.1093/nar/19.14.4008. PMC 328507. PMID 1861999.
48.
Myrick, K.V.; Gelbart, W.M. (2002). "Universal Fast Walking for direct and versatile determination of flanking sequence". Gene 284 (1–2): 125–131. doi:10.1016/S0378-1119(02)00384-0. PMID 11891053.
49.
50.
Park, D.J. (2005). "A new 5' terminal murine GAPDH exon identified using 5'RACE LaNe". Molecular Biotechnology 29 (1): 39–46. doi:10.1385/MB:29:1:39. PMID 15668518.
51.
Park, D.J. (2004). "3'RACE LaNe: a simple and rapid fully nested PCR method to determine 3'-terminal cDNA sequence". Biotechniques 36 (4): 586–588, 590. PMID 15088375.
52.
53.
54.
Mullis, Kary (1998). Dancing Naked in the Mind Field. New York: Pantheon
Books. ISBN 0-679-44255-3.
55.
Mullis, Kary (1990). "The unusual origin of the polymerase chain reaction". Scientific American 262 (4): 56–61, 64–65. doi:10.1038/scientificamerican0490-56. PMID 2315679.
56.
Advice on How to Survive the Taq Wars 2: GEN Genetic Engineering News
Biobusiness Channel: Article. May 1, 2006 (Vol. 26, No. 9).
External links
History of the Polymerase Chain Reaction from the Smithsonian Institution Archives