Nonlinear Decision-Making With Enzymatic Neural Networks - Nature 2022
Synthetic DNA has emerged as a versatile polymer to store and process information at the molecular scale. It has powered a rich library of computational molecular devices ranging from logic circuits5–8 to self-assembling automata9. Departing from the biological model of computation, most DNA computing devices imitate the Boolean paradigm of electronics. However, their computing power has fallen short of the exponential growth of Moore's law: their size has been plateauing at around 5–10 logic gates for a decade6. In parallel, various groups have started looking at the brain, rather than the central processing unit (CPU), as an inspiration for computing with molecules1–4,10. This is because neuronal and chemical networks share striking similarities: massively parallel and recurrent architectures, analog and asynchronous operation, and fault-tolerant and redundant computations (Supplementary Fig. 13).

In 2018, Lopez et al. reported a DNA-based linear classifier4 that performs all of its computations with a non-enzymatic mechanism: toehold-mediated strand displacement. Using similar DNA-only mechanisms on many more inputs, and taking inspiration from competitive neural networks1,2, Cherry and Qian reported in a tour de force a DNA classifier for the MNIST database3. Together, these molecular classifiers showcased the benefits of neuromorphic networks over Boolean circuits: massive parallelism, handling of analog inputs and tolerance of corrupted patterns. However, these non-enzymatic classifiers had limited decision margins, that is, they could not discriminate between two similar inputs belonging to different classes. They also suffered from leaks that made the composition of layers delicate. Overall, fully molecular classification was only demonstrated on datasets that could be linearly separated by a wide margin.

DNA-processing enzymes are the workhorses of biotechnology, synthetic biology and molecular biology. They perform an astounding variety of transactions on DNA: production, degradation, cleavage, ligation, scouting, cutting and pasting, and editing. In addition, enzymes are fast and processive, and their kinetic control is tight, making them prime candidates for powering DNA computing devices. Previous reports showcased the power of enzymatic networks for running advanced spatio-temporal dynamics like logic computation11, switches12–14, clocks15–17, predator–prey oscillators18, quorum sensing19, spatial waves20, maze pathfinders21 and artificial morphogenesis22,23, many of which still resist implementation with strand displacement only. We set out to explore the potential of neuromorphic architectures combining the programmability of DNA with the efficiency of enzymatic processing.

Linear classifier

Our neuromorphic networks are built around a generic enzymatic neuron (Fig. 1) that emulates the perceptron proposed by Rosenblatt in 195824. The neuron takes DNA or RNA strands as input. The state of the neuron is encoded by the concentration of a short DNA strand α (the signal sent by the neuron). The neuron computes a weighted sum of its
1LIMMS, CNRS-Institute of Industrial Science, University of Tokyo, Tokyo, Japan. 2Laboratoire Gulliver, PSL Research University, Paris, France. ✉e-mail: genot@iis.u-tokyo.ac.jp
Fig. 1 | Architecture of DNA-encoded enzymatic neural networks. a, Multilayer neural networks can classify nonlinearly separable regions. b, Our individual neuron computes a weighted sum on its inputs and generates an output if the sum exceeds a threshold c (linear classification). c, Chemical architecture of the neuron. The autocatalytic amplification of the output strand α (red arrow) is triggered when the weighted activation (blue and orange) by input strands Xi and Xj overcomes the thresholding mechanism (purple). fluo., fluorescence. d, The chemical neuron is powered by three enzymes producing (polymerase), cutting (nickase) and degrading (exonuclease) DNA. e, Building blocks of the enzymatic neural networks. Positive and negative weights are computed by converter templates cTp and cTn. They produce species α or dTn whose steady-state concentration is proportional to the input Xi. Weights can be independently tuned with fake templates (fT) that compete with cT for the inputs. The activation function—a step function—is encoded in a bistable switch composed of an amplification template (aT, which replicates the species α) and a drain template (dT, which deactivates α and controls the bias). α concentration is monitored using a reporter template (rT). f, Experimental validation of the basic components: weight adjustment on a single input (left), weighted summation on two inputs (w1 = 0.5, w2 = 1) (middle) and application of the step function on α (right). Full traces are available in Extended Data Fig. 1.
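The neuron described in panels b and c behaves as a textbook perceptron with a step activation. A minimal sketch of that computation (the numeric weights and threshold here are illustrative; chemically, weights are set by converter/fake template ratios and the bias by the drain concentration):

```python
def chemical_neuron(inputs, weights, threshold):
    """Perceptron emulated by the enzymatic neuron: returns ON (True)
    when the weighted sum of input concentrations exceeds the bias
    threshold, and OFF (False) otherwise (step activation function)."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return weighted_sum > threshold

# Two inputs with w1 = 0.5, w2 = 1 (as in Fig. 1f) and an
# illustrative 5 nM threshold.
print(chemical_neuron([10.0, 2.5], [0.5, 1.0], 5.0))  # True (ON)
print(chemical_neuron([2.0, 1.0], [0.5, 1.0], 5.0))   # False (OFF)
```

In the molecular system, the comparison against the threshold is not an explicit test but emerges from the bistable amplification/drain switch.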
inputs thanks to converter templates. They act like programmable-gain amplifiers in analog electronics, the gain being tuned by the composition of the templates (Fig. 1f). The neuron then takes an ON state (yielding a high concentration of α) if the weighted sum of inputs exceeds a concentration threshold, and remains OFF otherwise (low concentration of α). In modern terms, this mimics a perceptron with a step function as the nonlinear activation function. The inputs here are two DNA analogues of the miR-21 and miR-31 microRNAs (miRNAs), which are involved in cancer25. The network is modular and can easily be rewired to accept different inputs or produce new outputs.

We tested the neuron in bulk (approximately 10 μl) (Fig. 2a). It is sensitive (working with subnanomolar inputs), fast (classifying in a
Fig. 2 | Operation and tuning of linear classifiers in bulk and microdroplets. Here the inputs X1 and X2 are DNA analogues of miR-21 and miR-31. a, Positive-weighted classification with two positive converters and a bistable switch (left). This classifier partitions the concentration space with a negative-sloped line (right). The matrix of plots shows the dynamics of the classifier in bulk (around 10 μl), measured by following the fluorescence of the reporter over 6 h at 45 °C, for varying combinations of inputs. For clarity, the background is green when the classifier finishes in the ON state and grey otherwise. b, Computation of a negative-weighted classifier for the same concentrations of inputs. This classifier partitions the concentration space with a positive-sloped line (right). c, Tuning the weights and bias of a positive-weighted classifier (the activity of an input is modulated by changing the composition of its converter and fake templates). d, Effect of temperature on a negative-weighted classifier. e,f, Computation in droplets. The smoothed plots show the fluorescence of the α reporter in droplets prepared with varying concentrations of inputs (measured after 6 h). The red line is a linear fit of the OFF/ON boundary. g,h, Slicing of the fluorescence (black/green curve) and its derivative (grey curve) along one of the diagonals (dashed white line in the smoothed plot). The full-width at half-maximum (FWHM) of the derivative is a proxy for the decision margin of the classifier (that is, the distance between unambiguously OFF and ON regions). i, The confusion matrices show the number of ON and OFF droplets in each plot, based on their actual (act.) fluorescence and predicted (pred.) value according to the linear fit. The corresponding accuracy (accu.), precision (prec.), sensitivity (sens.), specificity (spe.) and negative predictive value (NPV) are indicated on the sides of the matrices. a.u., arbitrary units.
few hours) and sharp (OFF and ON states are clearly demarcated). We then shrunk volumes by a factor of 10⁵ with droplet microfluidics26. We immobilized a layer of droplets with the classifier in a silicon chamber—a material with excellent optical, thermal and mechanical properties27—and incubated the chamber in a thermal platform (Extended Data Fig. 2).

Overall, this microfluidic setup offers superb control of concentrations and temperatures, and enables precise visualization of the decision boundary (Fig. 2c,d). End-point analysis confirms the exquisite sensitivity and robustness of the classifier in these approximately 100 pl compartments: OFF and ON regions remain clearly delineated
Fig. 3 | Synthesis of a parametric family of rectangular functions with a multilayer perceptron. a, A microRNA input activates two neurons (α and β) in the hidden layer. The output neuron γ is activated by α and inhibited by β. As a result of these opposing actions, γ is only active when α is active and β inactive—producing a rectangular function on the input. b, Droplet microfluidic scanning of the concentration of input and a parameter of the network (the concentration of drain for β). The steady-state fluorescence of α, β and γ (after 14 h) are shown against the concentrations of let7a and β drain. c, Smoothed horizontal slices in the γ plot (grey arrows) reveal the tuneable rectangular function.
by a linear and sharp boundary (Hill coefficient of around 16, Extended Data Fig. 5). This endows the classifier with a narrow decision margin (Fig. 2c): it can discriminate between inputs whose concentrations differ by only about 10–20% (such being the case in majority voting over 10 bits, see below). The statistical metrics of performance (such as accuracy) are in the range of 80–99% (Fig. 2i).

Versatile linear classification requires negative weights. We opted for a strategy of induced inhibition, in which an input produces a drain template13, which in turn suppresses the replication of the signal strand—implementing a negative weight (model in Supplementary Information Section 2). This strategy was successful in bulk and droplets (Fig. 2b,d), also producing a sharp demarcation between OFF and ON, although the boundary is slightly less linear than for positive weights (Hill coefficient of around 63, Extended Data Fig. 5 and Supplementary Information Section 2).

The chemical neuron is analog and its computation varies continuously with its parameters. On the one hand, this can be used to program the parameters of the neuron, for instance changing the bias by tuning the drain or the weights by tuning the converters (Fig. 2c). On the other hand, it makes the neuron sensitive to uncertainties in experimental parameters. However, we find that the deviations are likely to be minimal for a typical operation. More precisely, we analysed the sensitivity of the bias of a single-input neuron to the drain and temperature. A pipetting error of around 3% in the concentration of the drain (typical for a calibrated pipette) translates into an approximately 10% error in the bias, and an error of around 0.1 °C in temperature (typical reproducibility for a thermocycler) translates into an approximately 2% error in the bias (both measured for drain = 20 nM and T = 41.5 °C, Extended Data Fig. 4). In addition, we used the power of the chemical neuron to implement majority voting, a central Boolean function, on a 10-bit vector (Supplementary Information Section 3).

function. Taking a single input at concentration x, this function is constant inside the interval cmin < x < cmax and null outside. Such window functions are widely used in electronics to filter signals, but biological systems also use them to produce a response when the input is neither too big nor too small28. To demonstrate biological relevance, we selected a human microRNA as the input (let7a, involved in development, cancer, aging and metabolism29).

To instantiate the two thresholds cmin and cmax, we use a hidden layer with two neurons: α is activated by low concentrations of input and activates the neuron γ in the second layer, whereas neuron β is activated at high concentrations of input, but inactivates the neuron γ (Fig. 3a). This architecture—in which two neurons have opposite actions on the output—mimics an incoherent feedforward loop (a ubiquitous motif in gene regulatory networks28). This three-neuron network defines a family of functions on let7a, which are parametrized by the weights and biases of the neurons. We expect the bias of β—that is, its drain concentration—to linearly control cmax, independently of cmin. We thus fixed the concentration of all species but two, the drain for β and the input let7a, scanned them with microfluidics and read out the three neurons at steady state (Fig. 3b,c). As expected, the concentration cmin is independent of the drain for β (as shown by the vertical black strip in the α plot); the concentration cmax varies linearly with the drain for β; and the activated region of the output γ is correctly computed as the intersection of the activated (green) region for α and the inactivated (black) region for β. Taking smoothed horizontal slices of these two-dimensional plots, we extracted the profiles of individual functions in the family (Fig. 3c). Those plots confirm that the output rises and then falls sharply with the input, although the rise is steeper than the fall (this asymmetry is due to the asymmetry of the bistable switch, which is easier to turn ON than OFF). The width of the rectangular function is linear with the drain and varies between around 35 pM and (at least) 90 pM of let7a.
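The logic of the three-neuron window can be sketched with idealized step activations (a toy model; the threshold values are illustrative, chosen in the pM range reported for let7a):

```python
def step(x, threshold):
    """Idealized activation of a bistable neuron."""
    return x > threshold

def window(x, c_min=35.0, c_max=90.0):
    """Incoherent feedforward loop: gamma is ON only when alpha
    (low threshold c_min) is ON and beta (high threshold c_max,
    which inhibits gamma) is OFF."""
    alpha = step(x, c_min)
    beta = step(x, c_max)
    return alpha and not beta

# Rectangular response: OFF below c_min, ON inside, OFF above c_max.
print([window(x) for x in (20, 60, 120)])  # [False, True, False]
```

Raising c_max (chemically, increasing the drain for β) widens the window without moving c_min, which is the independence observed in Fig. 3b.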
Fig. 4 | A hybrid network computing recursive space partitioning. a, A space partitioning tree takes a point X = (X1, X2) in the concentration plane, and at each of its nodes, tests if X is a member of the corresponding half-plane. Computation finishes when a leaf is reached. The tree can be traversed in three ways, partitioning the plane into three convex regions. b, Architecture of a hybrid network computing the partitioning tree (Supplementary Fig. 5). The inputs are two strands encoding a position (X1, X2) in the concentration plane. The hidden layer is neural and decides membership of the α and β regions with two linear classifiers that are indirectly coupled by competitive inhibition (membership is read out by fluorescent reporters). The output layer is logical and decides membership of the γ region with a NOR gate (which turns its fluorescence OFF if X is a member of either the α or β region). c, Fluorescence levels of α, β and γ, measured in approximately 25,000 droplets after 16 h. d, Merged fluorescence plots. e, Fit of d by a single-layer perceptron (SLP, top) and a two-layer perceptron (bottom). MLP, multi-layer perceptron. The hatched filling indicates erroneous areas in which two classes are outputted.
Decision trees are often used for making a diagnosis based on a clinical presentation, and they are gaining traction in molecular diagnosis, for instance for classifying tumours based on the expression levels of miRNAs. Taking as input a point X = (X1, X2) in the concentration plane (Fig. 4a), the algorithm starts from the root node of the tree and gradually moves toward the leaves. At each node, the algorithm queries the membership of the input to the corresponding half-plane and moves to either child based on this membership (YES/NO). The algorithm finishes when it reaches a leaf, giving the result of the classification.

Here, we partition the two-dimensional concentration plane into three nonlinearly separable regions (α, β and γ). The network is a hybrid and comprises two computational layers (Fig. 4b): a hidden neural layer deciding membership of the α and β regions with linear classifiers, and a logical layer deciding membership of the γ region with a NOR gate: its fluorescence is high only when both α and β strands are absent (Extended Data Fig. 7). By tuning the working temperature (Extended Data Fig. 8), we found conditions for which the two linear classifiers become indirectly coupled18 and α represses β. The network then correctly partitions the concentration space into three nonlinearly separable regions (Fig. 4c,d). We trained two artificial neural networks on the experimental data: a single-layer perceptron and a two-layer perceptron.
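The hybrid computation reduces to two half-plane tests plus a NOR. A sketch with hypothetical weights and thresholds (the real network implements the classifiers chemically and the NOR gate with a molecular beacon; the repression of β by α is modelled here by short-circuiting β when α fires):

```python
def half_plane(x1, x2, w1, w2, c):
    """Linear classifier: membership of the half-plane w1*X1 + w2*X2 > c."""
    return w1 * x1 + w2 * x2 > c

def partition(x1, x2):
    """Recursive space partitioning into three regions (concentrations in pM).
    alpha represses beta, so beta can only win where alpha is OFF;
    the NOR gate assigns gamma when neither alpha nor beta fires."""
    alpha = half_plane(x1, x2, 1.0, 1.0, 800.0)                  # hypothetical classifier 1
    beta = (not alpha) and half_plane(x1, x2, 1.0, -1.0, 100.0)  # hypothetical classifier 2
    gamma = not (alpha or beta)                                  # NOR gate
    return "alpha" if alpha else ("beta" if beta else "gamma")

print(partition(700, 600))  # alpha: 700 + 600 > 800
print(partition(400, 100))  # beta: alpha is OFF and 400 - 100 > 100
print(partition(100, 300))  # gamma: neither classifier fires
```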
1. Kim, J., Hopfield, J. J. & Winfree, E. Neural network computation by in vitro transcriptional circuits. Adv. Neural Inf. Process. Syst. 17, 681–688 (2004).
2. Genot, A. J., Fujii, T. & Rondelez, Y. Scaling down DNA circuits with competitive neural networks. J. R. Soc. Interface 10, 20130212 (2013).
3. Cherry, K. M. & Qian, L. Scaling up molecular pattern recognition with DNA-based winner-take-all neural networks. Nature 559, 370–376 (2018).

© The Author(s), under exclusive licence to Springer Nature Limited 2022
Extended Data Fig. 1 | Experimental validation of the basic components of a chemical neuron. To demonstrate the weighted summation mechanism (a, weight adjustment and b, summation), we used high concentrations of input strands, which allows for the direct visualization of the linear production of the α strand (that is, in the absence of the amplification reaction that composes the activation function). The threshold activation function and the possibility to control the bias (c) are estimated by measuring the concentration of α required to trigger the amplification reaction at given drain template (dTα) concentrations. a, Weight adjustment: w1 corresponds to the ratio of X1toα (cT) and X1tof (fT) (10 nM total). The production of α from various concentrations of X1 is directly monitored using 25 nM of rTα. b, Summation of X1 and X2: all samples contain 5 nM X1toα, 5 nM X1tof (w1 = 0.5) and 10 nM X2toα (w2 = 1). The production of α from different concentrations of X1 and X2 is directly monitored using 25 nM of rTα. c, Activation function: the amplification reaction of samples containing various initial concentrations of α is monitored in real time. The bias (that is, the amplification threshold, noted t) is tuned according to the concentration of dTα.
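The weight-setting rule in panel a can be written out explicitly (a sketch; it assumes the input partitions between the two templates in proportion to their concentrations):

```python
def weight(cT_nM, fT_nM):
    """Effective weight of an input: the fraction of the input captured
    by the converter template (cT, which produces alpha) rather than
    the fake template (fT, which produces an inert strand)."""
    return cT_nM / (cT_nM + fT_nM)

# 5 nM X1toalpha + 5 nM X1tof (10 nM total) gives w1 = 0.5, as in panel b.
print(weight(5, 5))   # 0.5
print(weight(10, 0))  # 1.0 (no fake template, full weight)
```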
Extended Data Fig. 2 | Thermal setup for droplet incubation and imaging. a, After generation, an emulsion is spread into a monolayer of droplets inside a silicon chamber. Silicon offers ideal conditions for imaging and incubation: high thermal conductivity, mechanical rigidity and optical reflectivity. The inset shows a monolayer of droplets imaged by fluorescence. Scale bar, 500 μm. b, Bottom view of the setup. The chamber is fixed against a copper plate by capillarity, by sandwiching a drop of mineral oil between the plate and the chamber. Two Peltier elements, separated by ~7.5 cm, impose a thermal gradient across the copper plate. Pt100 sensors report the local temperature near each Peltier element to a Peltier controller. The whole setup is encased in a 3D-printed frame that is fitted into the microscope stage. c, Side view of the setup. Heat is extracted from the Peltier elements with CPU cooling fans.
Article
Extended Data Fig. 3 | Microfluidic workflow for measuring the dependence of the separatrix on the concentration of reagents. a, We determined the equation of the separatrix in the (X1, X2) plane for 3 conditions in a multiplexed experiment. We first performed 3 rounds of droplet generation. During each round, we scanned the (X1, X2) plane by mixing 3 tubes (which all derive from the same master mix) and collected the emulsion in a separate tube. Between each round, we changed the set of 3 tubes, thus changing their common master mix. This allowed us to vary the concentrations of converter templates or drain templates in the master mix. After generation, we simultaneously imaged the three subpopulations together. To that end, we sequentially and gently filled a chamber with each population: the subpopulations remained spatially separated. We then incubated and imaged the chamber. During image processing, we separated the subpopulations by selecting 3 distinct regions (shown as red boxes in the right picture).
Extended Data Fig. 4 | Design and kinetics of the linear classifier. a, Full architecture of the linear classifier for one input X1. The classifier comprises three templates: a converter template (which produces α when bound to its input X1), an autocatalytic template (which autocatalytically replicates α) and a drain template (which deactivates α). The strand α is continuously degraded by the exonuclease. b, Production and removal curves showing the rate of production of α by the autocatalytic template (left), and the rate of removal of α by the drain template and the exonuclease (middle). Inactivation by the drain template is fast but quickly saturated, whereas degradation by the exonuclease is slow but linear. This interplay creates a kink in the removal curve, resulting in the existence of 3 intersection points between the production curve and the removal curve. The upper and lower intersection points are the two stable steady states (OFF and ON), whereas the middle point is an unstable steady state, a threshold which controls the crossover between autocatalytic production and removal. If α is over the threshold, the drain is saturated, production overcomes the removal by the exonuclease and the drain, and α is amplified up to the ON state. Otherwise, α is removed down to the OFF state. The existence of an unstable steady state is controlled by the shape of the production and removal curves, which must intersect at three points. The shape of the removal curve is controlled by the concentrations of drain templates, exonuclease and polymerase (for the inactivation step in the drain). The shape of the production curve is controlled by the concentrations of autocatalytic templates, polymerase and nickase. c, Microfluidic mapping of the dependence of the bias on X1 to the drain. We prepared droplets with varying concentrations of drain for α and input X1. We incubated the droplets in a temperature gradient, and imaged their content after 6 h. The top plots show the fluorescence of droplets in the space (drain, X1), the colour indicating the level of α. The bottom plots show the fluorescence of droplets in the space (temperature, X1). The red lines are linear fits of the boundary, with the equation indicated above each plot. Concentrations are in nM and temperatures in °C.
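The three-intersection picture in panel b can be reproduced with toy rate laws (all parameters hypothetical): a saturating autocatalytic production term, plus a removal term combining a fast-but-saturable drain and a slow linear exonuclease.

```python
import numpy as np

def production(a, vmax=10.0, K=5.0):
    """Autocatalytic production of alpha, saturating at high alpha."""
    return vmax * a / (K + a)

def removal(a, drain_v=3.0, drain_K=0.5, exo_k=0.2):
    """Fast but saturable drain plus slow, linear exonuclease degradation."""
    return drain_v * a / (drain_K + a) + exo_k * a

# Steady states are where production and removal intersect:
# sign changes of the net rate along a fine concentration grid.
a = np.linspace(0.0, 40.0, 400001)
net = production(a) - removal(a)
steady = a[np.where(np.diff(np.sign(net)) != 0)[0]]
print(len(steady))  # 3: stable OFF (~0), unstable threshold, stable ON
```

Below the middle intersection removal dominates (the system relaxes to OFF); above it, the drain saturates and production wins, which is the bistability exploited for the step activation.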
Extended Data Fig. 5 | Enzymatic classifiers have stronger nonlinearities and higher sensitivities than a state-of-the-art nonenzymatic classifier. a, Steady-state fluorescence of a reported negative-weighted nonenzymatic linear classifier4. The fitted Hill equation cn/(cn + xn) is shown as a plain line; the interpolation from the 8 points along the diagonal of Fig. 3d of ref. 4 is shown dashed. b, Steady-state fluorescence of our negative-weighted enzymatic linear classifier (Fig. 2d, left), and its fit by a Hill equation cn/(cn + xn). c, Steady-state fluorescence of our positive-weighted enzymatic linear classifier (Fig. 2c, middle) and its fit by a Hill equation xn/(cn + xn). Hill equations were fitted with a prefactor to improve the goodness of fit.
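The sharpness gap can be quantified directly from the fitted Hill form. A sketch using the Hill coefficients reported in the main text for the enzymatic classifiers (n ≈ 16 and n ≈ 63), compared against a hypothetical shallow response (n = 2) standing in for a weakly nonlinear classifier:

```python
def hill_off(x, c=1.0, n=2):
    """Decreasing Hill response c**n / (c**n + x**n), as in panels a and b."""
    return c**n / (c**n + x**n)

def margin_drop(n):
    """Fall in normalized response across a +/-10% band around the
    threshold c: a proxy for how sharply the classifier decides."""
    return hill_off(0.9, n=n) - hill_off(1.1, n=n)

# n = 2 is a hypothetical shallow reference; n = 16 and n = 63 are the
# Hill coefficients reported for the enzymatic classifiers.
for n in (2, 16, 63):
    print(n, margin_drop(n))
```

With n = 2 the response drops by only ~0.1 across the ±10% band, whereas with n = 16 or 63 almost the entire ON/OFF swing happens inside it, consistent with the 10–20% decision margin reported above.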
Article
Extended Data Fig. 6 | Majority voting weight adjustment. a,b, Trigger production by 10 converter templates at 1 nM concentration (a) or adjusted proportionally to the production rate, arbitrarily choosing X3 as reference (b). c, Comparison of the production rates from panels a and b. At constant cT concentration, we observed large rate discrepancies depending on the input sequence (mean production rate = 39 ± 18 pM/min). A factor of 4.6 was computed between the fastest and the slowest converter templates. Balancing the cT concentrations reduces the coefficient of variation from 47% to 14% (29 ± 4 pM/min). d,e, Majority voting with balanced cT concentrations. In a first attempt, we performed a majority voting experiment on 28 samples. Concentrations of the 1X cT bundle were 0.8, 0.7, 1.2, 1.9, 3, 0.6, 1.2, 0.5, 1.4 and 1.4 nM, respectively, from X1 to X10. Enzyme concentrations were set to 70 u/mL Vent(exo-), 300 u/mL Nb.BsmI and 23 nM ttRecJ. d, Amplification curves for the 28 samples spiked with 0 to 10 inputs (5 nM each, various combinations). e, Bar chart of the amplification times (Cq). As expected, Cq values are negatively correlated with the number of spiked inputs (the more inputs, the faster the amplification). Interestingly, all ten inputs exert a consistent activation force on the switch, triggering the amplification between 112 and 148 min when spiked individually (123 ± 11 min on average, hence a coefficient of variation of less than 10%). In these conditions, all samples amplify within 150 min (except the negative control, NC), suggesting that the production rate for each input is too high to unambiguously classify samples with fewer than 5 inputs from samples with more than 5 inputs. f, Effect of the cT bundle concentration on sample classification. To further reduce the weight of all inputs, we decreased the concentration of all converter templates from 1X to 0.04X. For the lowest concentration of the cT bundle, no amplification is observed within 1,000 min for any sample (from 0 to 6 inputs). The 1X cT bundle reproducibly results in the amplification of all samples (except the negative control, NC), with poor discrimination between 4-, 5- and 6-input samples. Interestingly, we observed a sharp threshold between ≤2 and >2 inputs with a 3-fold dilution of the cT bundle concentration. This demonstrates that the amplification threshold can be finely tuned by adjusting the production rates of all inputs, allowing the number of input voters required to return a positive answer to be set arbitrarily. Finally, 0.15X of the cT bundle allows us to compute a majority voting algorithm, with a clear discrimination between 4 and 6 inputs.
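The voting scheme reduces to a thresholded sum. A sketch (the quorum argument models the tunable amplification threshold, which is set chemically by diluting the cT bundle):

```python
def majority_vote(bits, quorum=5):
    """Returns YES (True) when more than `quorum` of the 10 input voters
    are present. Chemically, the summed production rate of the spiked
    inputs must cross the amplification threshold of the switch."""
    return sum(bits) > quorum

print(majority_vote([1]*6 + [0]*4))            # True: 6 of 10 voters
print(majority_vote([1]*5 + [0]*5))            # False: no strict majority
print(majority_vote([1]*3 + [0]*7, quorum=2))  # True: diluted cT lowers the quorum
```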
Extended Data Fig. 7 | Operation of the NOR gate. a, A NOR gate is formed by hybridizing a template strand (accepting α or β as primers) with a molecular beacon γ. In the absence of α and β, the fluorescence of the molecular beacon is high because its dye and quencher are far apart. This codes for the ON state of the gate. If α or β is present, it induces polymerase-mediated displacement of the molecular beacon from the template, resulting in a low level of fluorescence. b, Verification of the assembly of the beacon with the template. The fluorescence of the beacon is monitored as the solution is cooled in a thermocycler, with and without the template. The presence of the template induces a large fluorescence increase, due to the hybridization of the beacon. c, A two-layer network with the NOR gate as its output (left) and its truth table (middle). The network consists of two linear classifiers (α and β), each accepting a distinct input (X1 or X2). The correct operation of the network is verified by fluorescence: the presence of either input drives the polymerization of the gate and the displacement of the beacon, leading to a drop in fluorescence.
Extended Data Fig. 8 | Thermal dependence of the space partitioning network (Fig. 4). Droplets with the network and varying inputs were incubated in a graded temperature field to reveal how the quality of classification varies with temperature. At low temperature (~43 °C), α and β coexist, as indicated by the yellow region. When the temperature increases, the yellow region shrinks and disappears, indicating that α and β no longer coexist.