Proceedings of
International
Conference on
Technology and
Instrumentation in
Particle Physics 2017
Volume 1
Springer Proceedings in Physics
Volume 212
The series Springer Proceedings in Physics, founded in 1984, is devoted to timely
reports of state-of-the-art developments in physics and related sciences. Typically
based on material presented at conferences, workshops and similar scientific
meetings, volumes published in this series will constitute a comprehensive
up-to-date source of reference on a field or subfield of relevance in contemporary
physics. Proposals must include the following:
– name, place and date of the scientific meeting
– a link to the committees (local organization, international advisors etc.)
– scientific description of the meeting
– list of invited/plenary speakers
– an estimate of the planned proceedings book parameters (number of pages/
articles, requested number of bulk copies, submission deadline).
Editor
Zhen-An Liu
Institute of High Energy Physics
Chinese Academy of Sciences
Beijing, China
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Contents
Particle Identification
Assembly of a Silica Aerogel Radiator Module for the Belle II
ARICH System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Makoto Tabata, Ichiro Adachi, Hideyuki Kawai, Shohei Nishida,
and Takayuki Sumiyoshi, for the Belle II ARICH Group
Track Finding for the Level-1 Trigger of the CMS Experiment . . . . . . . 296
Tom James, on behalf of the TMTT group
A Multi-chip Data Acquisition System Based on a Heterogeneous
System-on-Chip Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Adrian Fiergolski, on behalf of the CLIC detector
and physics (CLICdp) collaboration
Acceleration of a Particle Identification Algorithm Used
for the LHCb Upgrade with the New Intel® Xeon®-FPGA . . . . . . . . . . 309
Christian Färber, Rainer Schwemmer, Niko Neufeld, and Jonathan Machen
The ATLAS Level-1 Trigger System with 13 TeV Nominal
LHC Collisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Louis Helary, on behalf of the ATLAS Collaboration
Common Software for Controlling and Monitoring the Upgraded
CMS Level-1 Trigger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Giuseppe Codispoti, Simone Bologna, Glenn Dirkx, Christos Lazaridis,
Alessandro Thea, and Tom Williams
A Prototype of an ATCA-Based System for Readout Electronics
in Particle and Nuclear Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
Min Li, Zhe Cao, Shubin Liu, and Qi An
Commissioning and Integration Testing of the DAQ System
for the CMS GEM Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
Alfredo Castaneda, On behalf of the CMS Muon group
MiniDAQ1: A Compact Data Acquisition System for GBT
Readout over 10G Ethernet at LHCb . . . . . . . . . . . . . . . . . . . . . . . . . . 332
Paolo Durante, Jean-Pierre Cachemiche, Guillaume Vouters,
Federico Alessio, Luis Granado Cardoso, Joao Vitor Viana Barbosa,
and Niko Neufeld
Challenges and Performance of the Frontier Technology Applied
to an ATLAS Phase-I Calorimeter Trigger Board Dedicated
to the Jet Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
B. Bauss, A. Brogna, V. Buescher, R. Degele, H. Herr, C. Kahra, S. Rave,
E. Rocco, U. Schaefer, J. Souza, S. Tapprogge, and M. Weirich
The Phase-1 Upgrade of the ATLAS Level-1 Endcap Muon Trigger . . . 341
Shunichi Akatsuka, on behalf of the ATLAS Collaboration
Modeling Resource Utilization of a Large Data Acquisition System . . . . 346
Alejandro Santos, Pedro Javier García, Wainer Vandelli,
and Holger Fröning
The Phase-I Upgrade of the ATLAS First Level Calorimeter Trigger . . . 350
Victor Andrei, on behalf of the ATLAS Collaboration
The CMS Level-1 Calorimeter Trigger Upgrade for LHC Run II . . . . . 355
Alessandro Thea, on behalf of the CMS Level-1 Trigger group
The ATLAS Muon-to-Central-Trigger-Processor-Interface
(MUCTPI) Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
R. Spiwoks, A. Armbruster, G. Carrillo-Montoya, M. Chelstowska,
P. Czodrowski, P.-O. Deviveiros, T. Eifert, N. Ellis, G. Galster, S. Haas,
L. Helary, O. Lagkas Nikolos, A. Marzin, T. Pauly, V. Ryjov,
K. Schmieden, M. Silva Oliveira, J. Stelzer, P. Vichoudis, and T. Wengler
Automated Load Balancing in the ATLAS High-Performance
Storage Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
Fabrice Le Goff and Wainer Vandelli, On behalf of the ATLAS
Collaboration
Study of the Calorimeter Trigger Simulation at Belle II Experiment . . . 371
Insoo Lee, SungHyun Kim, CheolHoon Kim, HanEol Cho, Yuji Unno,
and B. G. Cheon
RDMA Optimizations on Top of 100 Gbps Ethernet for the
Upgraded Data Acquisition System of LHCb . . . . . . . . . . . . . . . . . . . . . 376
Balázs Vőneki, Sébastien Valat, and Niko Neufeld
Integration of Data Acquisition Systems of Belle II Outer-Detectors
for Cosmic Ray Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
S. Yamada, R. Itoh, T. Konno, Z. Liu, M. Nakao, S. Y. Suzuki, and J. Zhao
Study of Radiation-Induced Soft-Errors in FPGAs for Applications
at High-Luminosity e+e− Colliders . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
Raffaele Giordano, Gennaro Tortone, and Alberto Aloisio
Design of High Performance Compute Node for Belle II Pixel
Detector Data Acquisition System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
Jingzhou Zhao, Zhen-An Liu, Wolfgang Kühn, Jens Sören Lange,
Thomas Geßler, and Wenxuan Gong
A Reconfigurable Virtual Nuclear Pulse Generator via the
Inversion Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
Weigang Yin, Lian Chen, Feng Li, Baochen Wang, Zhou He, and Ge Jin
Design of Wireless Data Acquisition System in Nuclear
Physics Experiment Based on ZigBee . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Zhou He, Lian Chen, Feng Li, Futian Liang, and Ge Jin
A Lightweight Framework for DAQ System in Small-Scaled High
Energy Physics Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Yang Li, Wei Shen, Si Ma, and XiaoLu Ji
1 Introduction
2 Detector Design
2.1 The LaBr3 Crystal and SiPM
The LaBr3 crystal offers top performance: its decay time is 16 ns, it is bright,
and it has a good linear response, all of which lead to a high energy resolution [3–7].
SiPMs, novel silicon-based photodetectors, are widely used in
high-energy physics. They show excellent performance, including single-photon
sensitivity, low bias voltage, a large dynamic range, and high photon detection
efficiency, and large-area SiPMs are now available [8]. Each GRD consists of a
3 in. LaBr3 crystal and a 2 in. (50.44 × 50.44 mm2) SiPM array.
3 Detector Performance
Fig. 2. The energy spectra of the internal radioactivity (left) and the 55Fe source (right)
Low-energy sensitivity is an important parameter for the GRD. The crystal
was placed on the SiPM, first alone and then with the source. Figure 2 shows the
energy spectra of the internal radioactivity and the 55Fe source; the Gaussian
peaks at 5.6 keV and 5.9 keV are clearly visible. This result satisfies the GECAM
requirements well.
Fig. 4. The relation between ADC value and energy (left) and the waveform of a high-energy γ-ray (right)
information will be lost. Therefore, the next step is to design a dedicated circuit
that supplies a high gain to low-energy signals and a low gain to high-energy signals.
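The dual-gain readout just described can be sketched as follows. This is an illustrative Python sketch, not the GECAM circuit design: the two gain values, the 12-bit ADC full scale, and the function names are assumptions chosen only to show the channel-selection logic.

```python
# Illustrative sketch of a dual-gain readout: digitize each pulse through a
# high-gain and a low-gain channel, then keep the high-gain sample unless it
# saturates the ADC. Gain values and the 12-bit ADC range are assumptions.
ADC_FULL_SCALE = 4095    # 12-bit ADC full scale (assumed)
HIGH_GAIN = 50.0         # ADC counts per keV, high-gain chain (assumed)
LOW_GAIN = 2.0           # ADC counts per keV, low-gain chain (assumed)

def digitize(energy_kev, gain):
    """Clip the amplified signal to the ADC range."""
    return min(int(energy_kev * gain), ADC_FULL_SCALE)

def read_energy(energy_kev):
    """Reconstruct the energy, preferring the high-gain channel."""
    high = digitize(energy_kev, HIGH_GAIN)
    low = digitize(energy_kev, LOW_GAIN)
    if high < ADC_FULL_SCALE:      # high-gain channel not saturated
        return high / HIGH_GAIN
    return low / LOW_GAIN          # fall back to the low-gain channel
```

The point is only that the two gains together cover both the keV calibration peaks and the high-energy waveforms discussed above; a 5.9 keV X-ray is served by the high-gain channel, while an MeV-scale γ-ray falls back to the low-gain channel.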
Energy resolution is another important parameter; it must be better than 10% at
662 keV. The resolution improves as the energy increases. For this GRD, the
resolution reaches 6.5% at 662 keV.
3.3 Uniformity
A collimated 241Am source was placed on the crystal; the locations are shown in Fig. 5
(left). Taking the result at point (1) as the reference, the relative response is shown in
Fig. 5 (right). The uniformity clearly worsens as the position moves from the center
to the edge, but the difference between center and edge is less than 10%.
Fig. 5. The location of the sources (left) and the uniformity of the GRD (right)
4 Conclusion
Acknowledgement. This work was supported by the Key Research Program of Frontier
Sciences, CAS, Grant No. Y6292690K1.
A Novel Gamma-Ray Detector for GECAM
References
1. Abbott, B.P.: Phys. Rev. Lett. 116, 061102 (2016)
2. Abbott, B.P.: Phys. Rev. Lett. 116, 131103 (2016)
3. Quarati, F.: Nucl. Instr. Meth. A 574, 115–120 (2007)
4. van Loef, E.V.D.: Nucl. Instr. Meth. A 486, 254–258 (2002)
5. Iltis, A.: Nucl. Instr. Meth. A 563, 359–363 (2006)
6. Dorenbos, P.: IEEE Trans. Nucl. Sci. NS 51, 1289 (2004)
7. Bizarri, G.: IEEE Trans. Nucl. Sci. NS 53, 615 (2006)
8. SensL Homepage. http://sensl.com/. Accessed 20 Sep 2016
Spin-Off Application of Silica Aerogel
in Space: Capturing Intact Cosmic Dust
in Low-Earth Orbits and Beyond
Makoto Tabata(B)
on behalf of the Tanpopo Team
1 Introduction
Since the 1970s [1], silica aerogel has been widely used as a Cherenkov radiator in
accelerator-based particle- and nuclear-physics experiments, as well as in cosmic
ray experiments. For this major application, the adjustable refractive index and
optical transparency of the aerogel are very important. We have been developing
high-quality aerogel tiles for use in a super-B factory experiment (Belle II) to
be conducted at the High Energy Accelerator Research Organization (KEK),
Japan [2], and for various particle- and nuclear-physics experiments conducted
(or to be conducted) at Japan Proton Accelerator Complex (J-PARC) (e.g., [3])
since the year 2004. Our recent production technology has enabled us to obtain
a hydrophobic aerogel [4] with a wide range of refractive indices (n = 1.0026–
1.26) [5] and with an approximately doubled transmission length (measured at
a wavelength of 400 nm) in various refractive index regions [6].
Silica aerogel is also useful as a cosmic dust-capture medium (see [7] as a
review). Low-density aerogels can capture almost-intact micron-size dust grains
with hypervelocities of the order of several kilometers per second in space, which
was first recognized in the 1980s [8]. For this interesting application, high porosity
(i.e., low bulk density below 0.1 g/cm3 ; n < 1.026) and optical transparency of
the aerogel are vitally important. The latter characteristic enables us to easily
find a cavity under an optical microscope, which is produced in an aerogel by
the hypervelocity impact of a dust particle.
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 8–11, 2018.
https://doi.org/10.1007/978-981-13-1313-4_2
Material samples acquired from space are crucial in planetary science,
astrochemistry, astrobiology, and space debris research. This is because
ground-based state-of-the-art analysis instruments can be used for biochemical,
mineralogical, and other related analyses of cosmic samples. The first space
missions that used
aerogel as a dust-capture medium were conducted in low-Earth orbits (LEO) in
the 1990s. These include space shuttle missions (using the shuttle’s cargo bay)
by the U.S. National Aeronautics and Space Administration (NASA) [8] and
the European retrievable carrier (Eureca) mission (free-flying spacecraft) by the
European Space Agency [11]. Similarly, the series of Micro-Particles Capturer
(MPAC) experiments conducted by JAXA was a LEO mission aboard the Inter-
national Space Station (ISS) in the 2000s [12]. In addition, the Large Area Debris
Collector (LAD-C) on the ISS was meant to be used for exploring near-Earth
orbital debris by the U.S. Naval Research Laboratory (however, it was canceled
in 2007) [13]. The Stardust spacecraft, a deep-space comet flyby mission by
NASA, successfully returned cometary dust (from comet 81P/Wild 2) to Earth
in 2006 (e.g., [14]). Recently, an Enceladus (Saturn’s moon) flyby plume sample
return mission has been proposed to search for a signature of chemical evolution
and possible extraterrestrial life [15,16].
3 Summary
Since its first use was reported approximately 25 years ago, silica aerogel has
been used as a cosmic dust-capture medium in many space missions in LEO
and beyond. The aerogel can provide fruitful, almost-intact cosmic materials for
detailed analyses on the ground. A high-performance ultralow-density aerogel
was developed for the ongoing astrobiology mission Tanpopo in LEO. This tech-
nique for capturing intact dust particles will be applied in future missions to the
moons of outer planets to search for possible extraterrestrial life.
Acknowledgments. The author is grateful to the members of the Tanpopo team for
their contributions to CP development. Additionally, the author is grateful to Prof. H.
Kawai of Chiba University and Prof. I. Adachi of KEK for their assistance in aerogel
production. Furthermore, the author is thankful to the JEM Mission Operations and
Integration Center, Human Spaceflight Technology Directorate, JAXA. This study was
partially supported by the Hypervelocity Impact Facility (former name: Space Plasma
Laboratory) at ISAS, JAXA, the Venture Business Laboratory at Chiba University, a
Grant-in-Aid for Scientific Research (B) (No. 6H04823), and a Grant-in-Aid for JSPS
Fellows (No. 07J02691) from the Japan Society for the Promotion of Science (JSPS).
References
1. Cantin, M., et al.: Silica aerogels used as Cherenkov radiators. Nucl. Instrum.
Meth. 118, 177–182 (1974)
2. Adachi, I., et al.: Construction of silica aerogel radiator system for Belle II RICH
counter. Nucl. Instrum. Meth. Phys. Res. A 876, 129–132 (2017). https://doi.org/
10.1016/j.nima.2017.02.036
3. Tabata, M., et al.: Fabrication of silica aerogel with n = 1.08 for e+ /µ+ separation
in a threshold Cherenkov counter of the J-PARC TREK/E36 experiment. Nucl.
Instrum. Meth. Phys. Res. A 795, 206–212 (2015)
1 Introduction
Astroparticle physics and high energy astrophysics are experiencing a “golden”
era thanks to very successful and long-running space- and ground-based exper-
iments (e.g. PAMELA, Fermi, AMS-02, H.E.S.S., Auger, IceCube). The multi-
messenger/multi-wavelength/multi-platform approach is opening up new possi-
bilities in discovery and observation. Hot topics still remaining are the origin of
cosmic rays, the spectrum of anti-matter and the observation of dark matter
particles. The future of ground-based astroparticle experiments is very bright with
approved new projects (CTA, LHAASO, KM3NeT) and proposed ones (IceCube-
Gen2). In this scenario, the complementarity of space missions is needed to get
to the “knee” of the cosmic ray spectrum (HERD), to close the gamma-ray gap
in the MeV region (PANGU/e-ASTROGAM) and search for dark matter with
anti-particles at energies > TeV (ALADINO).
The Department of Nuclear and Particle Physics (DPNC) of the University of
Geneva has a long experience in the development and assembly of silicon trackers
used in space (AMS-01, AMS-02, DAMPE). At present, new technologies to
replace silicon strip detectors (SSDs) are being evaluated. In particular, the
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 12–16, 2018.
https://doi.org/10.1007/978-981-13-1313-4_3
Research and Development of a Scintillating Fiber Tracker
Fig. 1. Top left: a layer of the AMS-02 silicon tracker made of 22 modules of 12 to
15 SSDs. Top right: 11 layers of the DAMPE silicon tracker. Bottom left: schematic of
a DAMPE silicon layer, each module consists of 4 SSDs and a front-end board with
6 VA140 ASICs used for the signal amplification. Bottom right: schematic of a future
space experiment tracking layer, each module consists of one fiber ribbon with three
SiPM arrays and one front-end board on each extremity.
The FE board has two IDEAS VATA64 HDR16 ASICs to read out one SiPM array. The
SiPM array is mounted on a printed circuit board (PCB), which is connected to
the FE board through four flex cables. One prototype module is shown in Fig. 2
(bottom); it consists of a 70 cm long fiber ribbon with one SiPM array mounted on
a PCB and connected to a FE board at each extremity.
Fig. 2. Top left: sketch of the scintillating fiber tracker for the HERD mission. It is a
4-sided detector, each side is composed of stacked layers. A layer is made of modules
placed side by side. A module consists of a fiber ribbon with 3 SiPM arrays and a
FE board at each extremity. Top right: Hamamatsu SiPM array with 128 channels.
Bottom: prototype of a detector module made of a 70 cm long fiber ribbon with one
SiPM array mounted on a PCB and connected to a FE board at each extremity.
Fig. 3. Preliminary beam test results. Left: signal distribution integrated over the 128
channels of a SiPM array with no clusterization performed. Right: signal of each peak
as a function of the peak number; from the linear fit it is possible to compute the signal
of one pixel.
integrated over the 128 channels of a SiPM array with no clusterization
performed (left). By plotting the position of each peak as a function of the
peak number, it is possible to compute the signal of one pixel, 1 pixel = 119 ADC
counts (right), calibrating in this way the average gain of the SiPM array.
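The linear-fit calibration described above can be sketched in a few lines. The peak positions below are synthetic, generated with a gain of 119 ADC counts and an assumed pedestal; they stand in for the measured photoelectron-peak positions, which are not reproduced here.

```python
import numpy as np

# Sketch of the single-pixel gain calibration: the n-th photoelectron peak sits
# at pedestal + n * gain in ADC counts, so a linear fit of peak position vs.
# peak number yields the gain (ADC counts per fired pixel).
# Synthetic peak positions, generated with gain = 119 ADC and pedestal = 40 ADC
# (both assumed for illustration).
peak_number = np.arange(1, 8)
peak_position_adc = 40.0 + 119.0 * peak_number

# polyfit of degree 1 returns [slope, intercept]
gain_adc, pedestal_adc = np.polyfit(peak_number, peak_position_adc, 1)
# gain_adc recovers the 119 ADC-counts-per-pixel value quoted in the text
```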
Fig. 4. Top left: leakage current of a SiPM array channel as a function of bias voltage
for different temperatures. Top right: VBD as a function of the temperature. Bottom
left: leakage current of all the 128 channels of a SiPM array as a function of the bias
voltage. Bottom right: VBD as a function of the channel id.
Since the kind of detector under study (fiber + SiPM) has never been used in
space, a program of tests for space qualification is needed. Thermal/vacuum and
vibration tests are ongoing. To evaluate the success of a test, some characteristics
of the module have to be measured before and after the test. One important
characteristic, simple to measure, is the breakdown voltage (VBD ) of the SiPM
channels of the array. Figure 4 (bottom right) shows the VBD for all the 128
channels of a SiPM array, computed from the leakage current vs. bias voltage
curves (bottom left), as described in [5], once corrected with respect to the
temperature. In fact, as measured and shown in Fig. 4 (top), the VBD of a SiPM
varies with temperature.
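A minimal sketch of the temperature correction mentioned above, assuming a linear V_BD(T) dependence; the temperature coefficient, reference temperature, and function name are illustrative assumptions, not the measured values from this study.

```python
# Sketch of the temperature correction applied to breakdown-voltage
# measurements: SiPM V_BD rises roughly linearly with temperature, so a
# value measured at temperature T can be referred to a reference temperature.
DVBD_DT = 0.054   # V/degC, assumed temperature coefficient for illustration
T_REF = 25.0      # degC, assumed reference temperature

def vbd_at_reference(vbd_measured, temperature_c):
    """Translate a V_BD measured at `temperature_c` to the reference temperature."""
    return vbd_measured - DVBD_DT * (temperature_c - T_REF)
```

With such a correction, measurements of the same channel taken before and after a thermal/vacuum or vibration test can be compared even if the lab temperature differed between the two runs.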
Acknowledgements. We would like to thank the LHCb group of EPFL for the pro-
curement and contribution to the preparation of the fiber ribbons used at the beam
test.
References
1. Alpat, B., et al.: The internal alignment and position resolution of the AMS-02
silicon tracker determined with cosmic-ray muons. Nucl. Instr. Methods Phys. Res.
Sect. A Accel. Spectrom. Detect. Assoc. Equip. 613, 207–217 (2010)
2. Wu, X., et al.: The silicon-tungsten tracker of the DAMPE mission. In: PoS (ICRC
2015) 1192 (2016). http://inspirehep.net/record/1483465
3. Zhang, S.N., et al.: The high energy cosmic-radiation detection (HERD) facility
onboard China’s space station. Proc. SPIE Int. Soc. Opt. Eng. 9144 (2014). 91440X.
http://inspirehep.net/record/1306880
4. The LHCb Scintillating Fibre Collaboration: LHCb Scintillating Fibre Tracker Engi-
neering Design Review Report: Fibres, Mats and Modules. LHCb-PUB-2015-008
(2015)
5. Garutti, E., et al.: Characterization and x-ray damage of silicon photomultipliers.
In: PoS (TIPP 2014) 070 (2014)
6. Meier, D., et al.: SIPHRA 16-channel silicon photomultiplier readout ASIC. In:
Proceedings of AMICSA&DSP 2016 (2016)
SiPM-Based Camera Design
and Development for the Image Air
Cherenkov Telescope of LHAASO
1 Introduction
Silicon photomultipliers (SiPMs) have many advantages, such as no aging under
strong light exposure, insensitivity to magnetic fields, a single-photon counting
response, high photon detection efficiency, and high gain at low bias voltage.
A SiPM-based camera can be operated on moonlit nights, and its duty cycle is
about 30%, while that of a PMT-based camera is about 10%. Therefore, the SiPM
is the photosensor of choice for the next generation of imaging air Cherenkov
telescopes. The SiPM technology has
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 17–21, 2018.
https://doi.org/10.1007/978-981-13-1313-4_4
S. S. Zhang et al.
been used in the First G-APD Cherenkov Telescope [1] and the single-mirror
Small Size Telescopes (SST-1M) proposed for the Cherenkov Telescope Array
(CTA) project [2]. The SiPM technology is also used in the Wide Field of View
Cherenkov Telescope Array (WFCTA) of the Large High Altitude Air Shower
Observatory (LHAASO) [3,4]. WFCTA has 16 wide-field-of-view imaging air
Cherenkov telescopes. Each telescope has a field of view (FOV) of 14° × 16°
with a pixel size of approximately 0.5° × 0.5°. The main scientific goal of WFCTA
is to measure the ultra-high-energy cosmic-ray energy spectrum and composition
from 30 TeV to a couple of EeV. To cover about five orders of magnitude in the
energy spectrum measurement, the telescopes are designed to be portable, enabling
easy switching between array configurations for measurements in different energy
ranges. WFCTA observes primary cosmic-ray energies over more than 2.5 orders of
magnitude in each observation mode, which requires a dynamic range for each SiPM
from 10 photoelectrons (pes) to 32,000 pes. The design and development of the
SiPM-based camera are described in detail in this paper.
2 SiPM Candidates
The SiPM is made up of an avalanche photodiode (APD) array in which each APD
operates in Geiger mode. The dimension of a single APD can vary from a few µm
to a hundred µm; the density is 1600 APDs/mm² for a 25 µm APD size. All the
APDs in the SiPM are read out in parallel, making it possible to generate signals
with a dynamic range from 1 pe to a few hundred pes per mm². Saturation occurs
when more than one photon hits the same APD at the same time. The dynamic range
of a SiPM is proportional to the total number of APDs, and it is also influenced
by the uniformity of the light hitting the SiPM. A larger dynamic range requires
a larger total number of APDs; however, a smaller APD size has a smaller fill
factor and hence a smaller PDE, and a larger-area SiPM has a higher dark count
rate (DCR). A square SiPM with a 15 mm × 15 mm photosensitive area and a 25 µm
APD size was selected for WFCTA, after taking the PDE, dark count rate and price
into account; it contains a total of 360,000 APDs. The SiPM candidates from
Hamamatsu, FBK and SensL have been evaluated in [6]. The pes measured under a
uniform light condition can be fitted very well by the function that describes
the relationship between the total number of fired APDs and the total number of
all APDs in the SiPM. The 15 mm × 15 mm SiPM with a 25 µm APD size can reach
the dynamic range from 10 pes to 32,000 pes. The additional deviation from the
non-uniform light distribution caused by the light concentrator and the
spherical mirror is less than 2% at 32,000 pes.
3 SiPM-Based Camera
Each SiPM-based camera consists of a 32 × 32 SiPM array, a 32 × 32 light
concentrator (Winston cone) array, 1024 channels of temperature and voltage
compensation loops, and 1024 channels of readout electronics. The SiPM signal is
fed to pre-amplifiers through DC coupling (Fig. 1(a)). A typical signal from a
pre-amplifier is shown in Fig. 1(b); the signal width is about 42 ns. Signals
coming out of the pre-amplifiers are split into two channels and amplified by
two amplifier chains, high gain and low gain, to obtain a good linearity over a
wide dynamic range of 3.5 orders of magnitude. The signals are then digitized by
50 MHz, 12-bit flash analog-to-digital converters (FADCs). The digital signals
are collected by FPGAs for further processing: single-channel trigger, event
trigger, signal transmission and storage, etc. The SiPM gain, or breakdown
voltage, is sensitive to the temperature. The gain temperature coefficient is
about 1.5%/°C, and the breakdown voltage temperature coefficient is about
26 mV/°C for the FBK SiPM, 21.5 mV/°C for the SensL SiPM, and 54 mV/°C for the
Hamamatsu SiPM. A temperature sensor is embedded in each SiPM (Fig. 1(a)), and a
temperature and voltage compensation loop is used for each SiPM to keep the
SiPM gain stable.
Fig. 1. (a) A pre-amplifier schematic (bottom) and SiPM front and back photos (top). (b) A
typical SiPM signal from a pre-amplifier with a pulse width of about 42 ns.
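The compensation loop described above can be sketched as follows, under the assumption that the loop simply tracks the breakdown-voltage drift so that the overvoltage, and hence the gain, stays constant. The 54 mV/°C coefficient is the Hamamatsu value quoted in the text; the nominal bias, reference temperature, and function name are assumptions.

```python
# Sketch of the temperature-voltage compensation loop: the SiPM gain is set by
# the overvoltage V_bias - V_BD, and V_BD drifts with temperature, so the loop
# raises the bias by the same amount to keep the gain stable.
DVBD_DT = 0.054        # V/degC, Hamamatsu coefficient quoted in the text
V_BIAS_NOMINAL = 57.0  # V at the reference temperature (assumed)
T_REF = 25.0           # degC, reference temperature (assumed)

def compensated_bias(temperature_c):
    """Bias voltage that keeps the overvoltage (and hence the gain) constant."""
    return V_BIAS_NOMINAL + DVBD_DT * (temperature_c - T_REF)
```

In the real camera this adjustment runs per channel, driven by the temperature sensor embedded in each SiPM.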
The PDE of the SiPM is about twice the PMT's quantum efficiency (Qe) times
collection efficiency (ε). The dark count rate (DCR) of the SiPM is about 13 MHz,
while the DCR of the PMT is less than 10 kHz at a 0.5 pe threshold. The sky
background noise at the YangBaJing Cosmic Ray Observatory [5] is about 38 MHz
for the SiPM and 19 MHz for the PMT on moonless nights. Compared with the
PMT-based camera of the WFCTA prototype [5], the SiPM has the same or even
higher signal-to-noise ratio, after taking the DCR and PDE into account. The
energy threshold of the telescope increases as the sky background noise
increases; e.g. the energy threshold is about 30 TeV on moonless nights and
about 300 TeV on half-moon nights.
Fig. 2. (a) A sub-cluster picture without Winston cone. (b) A SiPM array camera
design diagram.
References
1. Anderhub, H., et al.: Design and operation of FACT - the first G-APD Cherenkov
telescope. JINST 8, P06008 (2013). arXiv:1304.1710
2. Schioppa, E.J., et al.: The SST-1M camera for the Cherenkov telescope array.
arXiv:1508.06453v1 [astro-ph.IM] (2015)
1 Introduction
The Large High Altitude Air Shower Observatory (LHAASO) is a hybrid exper-
iment designed for γ-ray astronomy and cosmic rays studies [1,2]. The Wide
Field of View Cherenkov Telescope Array (WFCTA), one of its three main com-
ponent detectors, will be operated in two observation modes. The Cherenkov
mode requires a photosensor dynamic range from 10 to 32,000 pes, and the
fluorescence mode requires that the gain of the sensor be stable for
long-duration light pulses up to 3 µs. The SiPM has developed rapidly since the
1990s; its gain
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 22–26, 2018.
https://doi.org/10.1007/978-981-13-1313-4_5
Silicon Photomultiplier Performance Study and Preamplifier Design
is around 10^6 at a bias voltage of less than 100 V. SiPM-based cameras can be
operated with moonlight and achieve a larger duty cycle than PMT-based cameras.
The First G-APD Cherenkov Telescope has been exploring the use of SiPM
technology [3], and the CTA project will use SiPMs on the single-mirror Small
Size Telescopes (SST-1M) and dual-mirror SSTs [4]. In this paper, the design of
the preamplifier is illustrated, the test results are shown, and additional
non-linearities are simulated.
Model                    PDE          Fill factor  Dark count rate  Cross talk  Gain (10^6)
S13361-5488 (Hamamatsu)  25% @ 400nm  47%          45 kHz/mm^2      1%          0.70
FBK-25 (FBK)             38% @ 400nm  72%          80 kHz/mm^2      15%         1.38
MicroJ-30020 (SensL)     33% @ 400nm  62%          80 kHz/mm^2      5%          1.70
3 Performance of SiPMs
The response of the SiPM under uniform illumination is expressed by Eq. (1).
The APD works in Geiger mode, which means that saturation happens when more than
one photon hits the same APD during the same readout window [5]. The expected
number of photoelectrons (Npe) can be extracted by inverting Eq. (1) and is
expressed by Eq. (2).
Nfired = Ncell (1 − e^(−PDE·Nph/Ncell)) = Ncell (1 − e^(−Npe/Ncell))   (1)

Npe = Ncell ln(1 / (1 − Nfired/Ncell))   (2)
B. Y. Bi et al.
Fig. 1. (a) The scheme of the preamplifier for the SiPM. (b) The pulses for different
values of R2 under a fixed intensity of light. The amplitudes are normalized to 1.
where Nfired is the number of fired APDs and Ncell is the total number of APDs.
Nph is the number of photons hitting the SiPM, and PDE · Nph is equal to
Npe.
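Equations (1) and (2) can be checked numerically with a short sketch. Here N_cell = 360,000 is the APD count quoted earlier in this volume for the selected 15 mm × 15 mm, 25 µm-cell device; the function names are mine.

```python
import math

# Sketch of the SiPM saturation model of Eqs. (1) and (2): the number of fired
# APDs saturates toward N_cell, and inverting Eq. (1) recovers the expected
# number of photoelectrons.
N_CELL = 360_000  # total number of APDs in the 15 mm x 15 mm, 25 um device

def fired(n_pe):
    """Eq. (1): expected number of fired APDs for n_pe photoelectrons."""
    return N_CELL * (1.0 - math.exp(-n_pe / N_CELL))

def corrected_pe(n_fired):
    """Eq. (2): invert the saturation to recover the photoelectron count."""
    return N_CELL * math.log(1.0 / (1.0 - n_fired / N_CELL))
```

A round trip through both functions reproduces the input, and at the 32,000 pes end of the required dynamic range the raw response already falls noticeably below the photoelectron count, which is why the correction of Eq. (2) is needed.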
As shown in Fig. 2(a), the nonlinearities follow the expectation predicted
by Eq. (1) very well. The dynamic range of a SiPM is proportional to the total
number of APDs. After correction with Eq. (2), the dynamic range is extended
to 32,000 pes for the Hamamatsu and FBK SiPM samples. Because their Ncell is
smaller, the SensL SiPM samples can reach only about 6,000 pes after correction.
The resolutions of the SiPM are the same before and after correction, as shown
in Fig. 2(b).
To investigate the performance of the SiPM under long-duration pulses, we
compared the response of the SiPM with that of the PMT, which satisfies our
requirements [6]. According to the test result illustrated in Fig. 2(c), the
value of C1 influences the stability of the SiPM as the light pulse duration
increases. With C1 = 0.1 µF, the SiPM performs poorly; with C1 = 1 µF, the
deviation of the gain of the SiPM is less than 2% from 20 ns to 3 µs.
circle in Fig. 2(d)). If the distribution of light on the surface is non-uniform,
there is some deviation after correction (see the black dots in Fig. 2(d)). The
additional deviation caused by the non-uniform photon distribution is less than
2% at 32,000 pes.
References
1. Zhen, C.: A future project at Tibet: the large high altitude air shower observatory
(LHAASO). Chin. Phys. C 34(2), 249–252 (2010). https://doi.org/10.1088/1674-
1137/34/2/018
2. He, H.: LHAASO Project: detector design and prototype. In: Proceedings of the
31st ICRC, pp. 2–5 (2009)
3. Anderhub, H., Backes, M., Biland, A., Boccone, V., Braun, I., Bretz, T., Zänglein,
M.: Design and operation of FACT-the first G-APD Cherenkov telescope. J.
Instrum. 8(6), P06008–P06008 (2013). https://doi.org/10.1088/1748-0221/8/06/
P06008
4. Heller, M., Schioppa Jr., E., Porcelli, A., et al.: An innovative silicon photomultiplier
digitizing camera for gamma-ray astronomy. Eur. Phys. J. C. 77, 47 (2017)
5. van Dam, H.T., Seifert, S., Vinke, R., Dendooven, P., Lohner, H., Beekman, F.J.,
Schaart, D.R.: A comprehensive model of the response of silicon photomultipli-
ers. IEEE Trans. Nucl. Sci. 57(4), 2254–2266 (2010). https://doi.org/10.1109/TNS.
2010.2053048
6. Ge, M., Zhang, L., Chen, Y., Cao, Z., Zhang, S., Wang, C., Bi, B.: Photomultiplier
tube selection for the wide field of view cherenkov/fluorescence telescope array of the
large high altitude air shower observatory. Nucl. Instrum. Methods Phys. Res., Sect.
A: Accelerators Spectrometers Detectors Assoc. Equipment, 819, 175–181 (2016).
https://doi.org/10.1016/j.nima.2016.02.093
7. Heck, D., Knapp, J., Capdevielle, J.N., Schatz, G., Thouw, T.: CORSIKA: a Monte
Carlo code to simulate extensive air showers. Forschungszentrum Karlsruhe FZKA
6019, 1–90 (1998). http://www.ikp.kit.edu/
A Comprehensive Analysis of Polarized
γ-ray Beam Data with the HARPO
Demonstrator
1 Introduction
HARPO [1] is a design concept for a gaseous TPC aiming at a high-precision
telescope and polarimeter for cosmic γ-rays, especially in the energy range from
the pair-production threshold up to the order of 1 GeV, where current γ-ray
telescopes show a sensitivity drop (Fig. 1 in [2]) and where polarimetry becomes
difficult due to multiple scattering (Fig. 4 in [1]). We present results from
the beam data taken with HARPO at NewSUBARU [3] in Japan in 2014 (see [4] for a
detailed version). We also report a pre-amplifier saturation found in our
analysis.
M. Frotin—Now at GEPI, Observatoire de Paris, CNRS, Univ. Paris Diderot, Place
Jules Janssen, 92190 Meudon, France.
S. Wang—Now at INPAC and Department of Physics and Astronomy, Shanghai Jiao
Tong University, Shanghai Laboratory for Particle Physics and Cosmology, Shanghai
200240, China.
c Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 27–30, 2018.
https://doi.org/10.1007/978-981-13-1313-4_6
28 R. Yonamine et al.
2 Experimental Setup
HARPO is a 30 cm cubic TPC equipped with two GEMs and a “bulk”
Micromegas mesh [5], a readout pitch of 1 mm in two perpendicular directions,
and a sampling frequency of 33 MHz provided by the AFTER electronics [6,7].
The point resolution is approximately 1 mm in x, y, and z.
We took data at photon energies from 1.7 MeV to 74 MeV; in this paper,
however, we concentrate on the subset from 4 MeV to 20 MeV. Below 4 MeV the
beam repetition frequency was too high to distinguish one event from another,
whereas above 20 MeV we suffered from the pre-amplifier saturation described
in Sect. 3, because e+ e− tracks were more likely to be perpendicular to the
readout plane. More details about the detector, the data taking and the beam
configurations can be found in [8].
To better understand the data, we have developed a simulation framework
(Sect. 3 in [2], Sect. 5 in [4]). It also plays an important role in cancelling
out the systematic bias arising from the detector acceptance when measuring
the polarization. Our simulation was first validated with cosmic rays, and its
parameters were calibrated with the beam-test data.
We estimate the photon direction by taking the bisector of the e+ /e− momentum
directions in the pair-production process. The polarization asymmetry A appears
in Eq. (1) of [9], and can thus be extracted by measuring the distribution of
the azimuthal angle φ := (φ+ + φ−)/2 (see Fig. 3 of [9] for the definitions of
φ+ and φ−). A can be written as the product AQED × D, with AQED being the
theoretical polarization asymmetry and D the dilution factor (D = e^(−2σφ²),
where σφ is the azimuthal angle resolution; Eq. (1) in [10]). A complete
description of our analysis method can be found in Sects. 4 and 5 of [2].
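As an illustration only (not the collaboration's analysis code), the relation between the observable asymmetry, the theoretical asymmetry and the resolution-induced dilution can be sketched in a few lines of Python; the function names are ours:

```python
import numpy as np

def bisector_phi(phi_plus, phi_minus):
    """Event azimuthal angle: phi = (phi+ + phi-) / 2."""
    return 0.5 * (np.asarray(phi_plus) + np.asarray(phi_minus))

def dilution_factor(sigma_phi):
    """D = exp(-2 * sigma_phi**2) for azimuthal resolution sigma_phi [rad]."""
    return np.exp(-2.0 * sigma_phi**2)

def measured_asymmetry(a_qed, sigma_phi):
    """A = A_QED * D: the asymmetry observed after resolution smearing."""
    return a_qed * dilution_factor(sigma_phi)
```

A perfect detector (σφ = 0) measures the full AQED; a finite resolution exponentially suppresses the measurable modulation, which is why the angular resolution drives the polarimetry performance.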
3 Preamplifier Saturation
We observed two types of pre-amplifier saturation (Fig. 1), in which the
response to the input charge becomes non-linear. The first type happens on a
relatively short time scale and does not affect subsequent events. This effect
is reproduced well in our simulation and can in principle be corrected. Its
signature is extra entries at charges just below the “saturation edge”, i.e. a
drop-off point close to the maximum charge. The second type shows up as a
correlation between the saturation edge and the beam intensity (Fig. 2), at
intensities for which we expect no event pile-up in the sensitive volume but
may nevertheless have a kind of pile-up in the pre-amplifier. This effect is
unpredictable and cannot be corrected, because the data can be affected even
by non-triggered previous events.
Finally, it should be noted that this pre-amplifier saturation is not a
fundamental problem for future data taking, because tracks in space conditions
will be isotropic and are hardly ever aligned with a single readout strip.
A backup solution is to reduce the gain by a factor of ∼2, which is expected to
[Figure residue removed. Recoverable content: charge spectra at E = 52.1 MeV
for beam intensities I = 100, 70 and 40 nA (Runs 1290, 1294, 1669), showing the
intensity-dependent saturation edge; the 68%-containment angular resolution
σθ,68% [rad] versus Eγ compared with Fermi-LAT (front and back), for data and
simulation; and the polarization asymmetry A [%] versus Eγ between 4 and
20 MeV for the combinations sim(P=1)/data(P=0), sim(P=1)/sim(P=0),
data(P=1)/data(P=0) and data(P=1)/sim(P=0).]
Acknowledgement. This work was funded by the French National Research Agency
(ANR-13-BS05-0002) and was performed by using NewSUBARU-GACKO (Gamma
Collaboration Hutch of Konan University).
References
1. Bernard, D., et al.: HARPO: a TPC as a gamma-ray telescope and polarimeter.
Proc. SPIE Int. Soc. Opt. Eng. 9144, 91441M (2014)
2. Gros, P., et al.: First measurement of the polarisation asymmetry of a gamma-ray
beam between 1.7 to 74 MeV with the HARPO TPC. Proc. SPIE Int. Soc. Opt.
Eng. 9905, 99052R (2016)
3. Horikawa, K., et al.: Measurements for the energy and flux of laser Compton scat-
tering. Nucl. Instrum. Meth. A618, 209–215 (2010)
4. Gros, P., et al.: Performance measurement of HARPO: a Time Projection Chamber
as a gamma-ray telescope and polarimeter (2017). arXiv:1706.06483
5. Gros, P.: HARPO - TPC for High Energy Astrophysics and Polarimetry from the
MeV to the GeV. In: PoS. TIPP 2014, p. 133 (2014)
6. Baron, P., et al.: AFTER, an ASIC for the readout of the large T2K time projection
chambers. IEEE Trans. Nucl. Sci. NS 55, 1744–1752 (2008)
7. Abgrall, N., et al.: Time projection chambers for the T2K near detectors. Nucl.
Instrum. Meth. A637, 25–46 (2011)
8. Delbart, A.: HARPO, TPC as a gamma telescope and polarimeter: First measure-
ment in a polarised photon beam between 1.7 and 74 MeV. In: PoS. ICRC 2015,
1016 (2015)
9. Bernard, D.: Polarimetry of cosmic gamma-ray sources above e+ e− pair creation
threshold. Nucl. Instrum. Meth. A729, 765–780 (2013)
10. Mattox, J.R., et al.: Astrophys. J. 363 (1990)
Timing Calibration
of the LHAASO-KM2A Electromagnetic
Particle Detectors Using Charged
Particles Within the Extensive
Air Showers
1 Introduction
LHAASO is a new-generation EAS experiment located on Haizi Mountain
(4410 m, Sichuan province, China). The experiment aims to explore
gamma-ray sources with a sensitivity of 1% I_Crab at energies above 50 TeV [1,2].
In its 1.3 km² array (KM2A) (Fig. 1), 5242 electromagnetic particle detectors
(EDs) are designed to detect the arrival times and number densities of the EAS
charged particles produced by the primary particles, from which the primary
direction and energy can be reconstructed [3].
A reliable reconstruction of the primary gamma-ray direction requires an
accurate determination of the arrival times of the EAS particles at each ED.
One of the critical requirements is keeping all the detectors time-synchronized.
Hardware time calibration is usually performed using a probe detector manually
moved over all the detector units as a reference. However, this becomes
infeasible for an EAS array with numerous detectors covering more than a square
kilometer. This paper presents an automatic detector-time self-calibration
technique that relies on the measurement of charged particles within the EASs,
focusing on its applicability to the upcoming LHAASO-KM2A.
32 H. Lv et al.
2 Timing Calibration
The ED is a scintillation detector with an active area of 1 m². It consists
of four plastic scintillation tiles of 100 cm × 25 cm × 2.5 cm each (Fig. 1), several
wavelength-shifting (WLS) fibers and a 1.5 in. photomultiplier tube (PMT). The
ED front-end electronics (FEE) is a very compact device deployed just behind the
PMT of each ED [4]. All FEE time-to-digital converters (TDCs) are synchronized
to sub-nanosecond level via an advanced timing system named White Rabbit [5].
The main uncertainty on the measured arrival time of EAS particles comes
from the spread of time offsets among the EDs. For each ED, the detector time
offset arises from the time elapsed between the incidence of an EAS particle
on the scintillation tiles and the time stamping of the associated signal in the
FEE. It is the cumulative effect of the photon transmission time in
the WLS fibers and the electron transit time in the PMTs. The relative time-offset
differences must be calibrated with a precision better than 1 ns and
periodically corrected in the data, to guarantee the optimal angular resolution
and ensure the pointing accuracy.
Fig. 1. Left: layout of the LHAASO experiment. Right: schematic of an ED,
showing four scintillation tiles coupled with wavelength-shifting fibers.
Fig. 2. Schematic of a shower front and the CP introduced by detector time
offset. Fig. 3. Comparison of the time offsets of the 4883 EDs determined with
the self-calibration procedure against the preassigned values.
To verify the applicability of this method and estimate its precision, a complete
timing calibration was performed using simulated showers before the actual
experiment. About 2 × 10⁶ EAS events were generated with the CORSIKA software
and used for this calibration. A preset time offset for each ED, ranging
from 1.5 ns to 8.2 ns, was artificially added to the simulated detector response
to distort the detector timing.
A clear correlation is observed between the preset detector time offsets and
the time offsets measured with this calibration method (Fig. 3). A small bias is
found for EDs located at the edge of the array, because the shower front is more
likely a curved surface than a conical one. Nevertheless, this does not
adversely affect the overall calibration. The differences between the measured
time offsets and the preset ones indicate a precision of 0.46 ns. To reach this
precision, approximately 0.5 h of exposure is needed to collect enough event
statistics.
1 l = sin θ cos φ, m = sin θ sin φ (θ and φ are the reconstructed zenith and
azimuth angles, respectively).
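The self-calibration idea, fitting a planar shower front and absorbing each ED's mean timing residual into its offset estimate, can be sketched as a toy in Python. This is an illustration only, not the published KM2A algorithm: the planar (rather than conical or curved) front, the function names and the unweighted fit are simplifying assumptions.

```python
import numpy as np

def fit_plane_front(x, y, t):
    """Least-squares planar shower front t = t0 + a*x + b*y; a and b encode
    the direction cosines l, m of the footnote (divided by the light speed)."""
    A = np.column_stack([np.ones_like(x), x, y])
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    return coef  # (t0, a, b)

def update_offsets(events, n_det, offsets=None):
    """One calibration iteration: for every shower, fit the front and
    accumulate each ED's timing residual; the per-ED mean residual is then
    absorbed into its offset estimate."""
    offsets = np.zeros(n_det) if offsets is None else offsets.copy()
    resid_sum = np.zeros(n_det)
    resid_cnt = np.zeros(n_det)
    for det_id, x, y, t in events:      # one entry per shower
        tc = t - offsets[det_id]        # apply the current correction
        t0, a, b = fit_plane_front(x, y, tc)
        r = tc - (t0 + a * x + b * y)   # residuals w.r.t. the fitted front
        np.add.at(resid_sum, det_id, r)
        np.add.at(resid_cnt, det_id, 1)
    hit = resid_cnt > 0
    offsets[hit] += resid_sum[hit] / resid_cnt[hit]
    return offsets
```

Iterating this update over many showers averages out the shower-to-shower fluctuations, which is why roughly half an hour of exposure suffices to pin the offsets down at the sub-nanosecond level.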
[Figure: distribution of the difference between the ED time offsets obtained
with the CP method and those measured with a probe telescope (39 entries,
mean −0.33 ns, RMS 0.43 ns).]
3 Conclusion
Preliminary results show that EAS events are very useful for the timing
calibration of large EAS arrays covering square-kilometer areas. Detector time
offsets can be determined at the sub-nanosecond level on hour timescales,
which meets the KM2A requirements.
References
1. Cao, Z.: Nucl. Instr. Meth. A 742, 95–98 (2014)
2. Cui, S., et al.: Astropart. Phys. 54, 86–92 (2014)
3. Zhang, Z., et al.: Nucl. Instr. Meth. A 845, 429–433 (2017)
4. Liu, X., et al.: Chin. Phys. C 40(7), 076101 (2016)
5. Du, Q., et al.: Nucl. Instr. Meth. A 732, 488–492 (2013)
6. He, H.H., et al.: Astropart. Phys. 27, 528–532 (2007)
MoBiKID - Kinetic Inductance Detectors
for Upcoming B-Mode Satellite Missions
1 Introduction
How did our universe begin? This is the main question cosmologists will try to
answer in the next 20 years. During the last two decades, the study of the
anisotropies of the Cosmic Microwave Background (CMB) was the main driver of the
evidence that the early universe underwent a period of accelerated expansion,
the so-called cosmic inflation [1]. The model, however, still needs confirmation.
The detection of primordial B-modes in the CMB polarization anisotropies would
be an independent and strong proof of the inflation scenario. Many current
and future ground- and balloon-based CMB experiments are devoted to the search
for B-modes. These experiments will contribute significantly, but it is highly
likely that a space mission will be needed to obtain the ultimate proof and a
precise measurement of B-modes.
The scientific interest in this field is very high. Recently the European CMB
community answered an ESA call for a medium-size mission, proposing CORE+,
an instrument devoted to the measurement of B-modes. Despite the very high
scientific interest, the proposal was not selected: the maturity of some
technologies (e.g. detectors and the dilution refrigerator) was judged
insufficient.
The technological readiness for a new space mission must therefore be
demonstrated more convincingly to increase the chances of success of future
proposals. In this context, the main issue to be addressed is the choice and
demonstration of the detector technology.
The detectors needed for a future space mission devoted to B-modes should meet
these requirements:
– Sensitivity: the intrinsic noise in CMB detectors between 100 and 200 GHz should
be below the photon noise from the 2.7 K cosmological blackbody radiation. This
means a Noise Equivalent Power (NEP) in the range 5–8 × 10⁻¹⁸ W/√Hz.
– Number of detectors: the future space mission will need about 2000 pixels, divided
into arrays of 100–300 pixels, depending on the operating frequency band.
– Radiation sensitivity: Cosmic Rays (CRs) can interact with the detectors, causing
glitches that imply data loss. The sensors are deposited on a substrate (typically
300 µm silicon) or a membrane (2 µm silicon nitride), and CRs ionize this support.
The energy deposit (about 200 keV) generates ballistic phonons able to cause both
thermal and athermal signals in the detector. CRs at the second Sun–Earth Lagrange
point (L2), the ideal operating position for a future satellite, are composed mainly
of galactic protons, with the peak of the energy distribution at about 200 MeV and a
rate of 5 cm⁻² s⁻¹. The Planck mission faced an unexpectedly high rate of CRs on the
detectors (about 2/s) [2]. This had two main consequences: a net data loss of 15%
and some concerns about the Gaussianity of the noise due to the possible presence of
small pulses immersed in the noise.
The NTD bolometers used in the Planck mission had very good sensitivity, but
their implementation in larger arrays is not possible due to multiplexing issues. Two
detector technologies are considered more promising for a future space
implementation: Transition Edge Sensors (TESs) [3] and Kinetic Inductance Detectors (KIDs) [4].
TESs are bolometers based on superconducting thermistors. Their low impedance
allows multiplexing, using readout electronics with complex cold amplification stages
based on Superconducting Quantum Interference Devices (SQUIDs). These devices have
demonstrated very high sensitivity in large arrays. However, the complexity of their
fabrication and of their readout electronics strongly limits their suitability for
space applications.
KIDs are a relatively young technology, invented at Caltech in 2003. They work
thanks to a peculiar feature of superconductors, the kinetic inductance. In a
superconductor, Cooper pairs can move without scattering off the lattice, which
causes the well-known zero DC resistance. The pairs nevertheless present a complex
impedance: they react to an applied RF field by changing their motion with an
inertia due to the stored kinetic energy; this inertia corresponds to an inductance,
the so-called kinetic inductance. The kinetic inductance depends on the density of
Cooper pairs and can therefore be modified by an energy release, due to photons or
phonons, able to break pairs into free electrons (quasi-particles). Variations of
the kinetic inductance can be measured by building a high-quality-factor resonator
(Q > 10³) out of the superconductor and monitoring its transfer function.
The main advantage of KIDs is that they can be easily multiplexed: many detectors
(up to 300) can be read out with the same RF electronics by slightly offsetting
their resonant frequencies.
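The readout principle above can be sketched numerically. The snippet below, a toy model with made-up parameter values and a textbook notch-resonator transmission (not the MoBiKID readout code), shows how a rise in kinetic inductance lowers the resonant frequency, and how several resonators with staggered frequencies share one feedline:

```python
import numpy as np

def s21(f, f0, Q, Qc):
    """Forward transmission of a notch-type resonator coupled to a feedline:
    S21 = 1 - (Q/Qc) / (1 + 2jQ (f - f0)/f0)."""
    return 1.0 - (Q / Qc) / (1.0 + 2j * Q * (f - f0) / f0)

def resonant_frequency(L_geom, L_kin, C):
    """f0 = 1 / (2*pi*sqrt((Lg + Lk) C)): pair breaking by photons or phonons
    raises Lk and lowers f0, which is what the RF electronics monitors."""
    return 1.0 / (2.0 * np.pi * np.sqrt((L_geom + L_kin) * C))

# Frequency-domain multiplexing: each pixel gets a slightly different f0,
# and the whole comb is probed through a single RF line.
freqs = np.linspace(1.99e9, 2.01e9, 2001)
f0s = 2.0e9 + 1.0e6 * np.arange(5)            # 5 pixels, 1 MHz apart (toy values)
comb = np.prod([s21(freqs, f0, Q=2.0e4, Qc=4.0e4) for f0 in f0s], axis=0)
```

On resonance the transmission dips to |1 − Q/Qc|, so tracking the depth and position of each dip in the comb reads out every pixel simultaneously.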
KIDs were successfully implemented in a demonstrator camera, called NIKA, from
2010 [5], with an NEP of ∼5 × 10⁻¹⁷ W/√Hz. The background at a ground-based
telescope is around 20 pW, at least 10 times more than expected in space. The
NIKA NEP therefore does not represent the real limit of KID detectors:
measurements in lower-background environments are needed.
The interaction of KIDs with CRs has been observed [6, 7] and found to be
relevant: KIDs are sensitive to athermal phonons. Unlike the Planck bolometers,
which are separated into single pixels, the resonators of a KID array are
usually deposited on a single 300 µm thick silicon substrate, allowing phonon
propagation between all the pixels. A deep study of CR interactions and the
identification of reliable solutions are needed to strengthen the technology
readiness level. Studies are in progress [8].
3 MoBiKID
Fig. 1. Top left: a 132-pixel array installed in its holder. Bottom left: typical
frequency transmission of an array. Right: background simulator coupled with the array.
References
1. Guth, A.H.: PRD 23, 347 (1981)
2. Catalano, A., et al.: A&A 569, A88 (2014)
3. Irwin, K., et al.: APL 66, 1998 (1995)
4. Day, P., et al.: Nature 425, 817–821 (2003)
5. Monfardini, A., et al.: ApJS 194, 24 (2011)
6. Swenson, L., et al.: APL 96, 263511 (2010)
7. Cruciani, A., et al.: JLTP 167, 311 (2012)
8. Catalano, A., et al.: A&A 592, A26 (2016)
9. Doyle, S., et al.: JLTP 151, 530 (2008)
Backend Readout Structures
and Embedded Systems
The Detector Control System Safety
Interlocks of the Diamond Beam Monitor
Grygorii Sokhrannyi,
on behalf of the ATLAS DBM Collaboration
The DBM is mounted on the Pixel Detector support structure and is part
of the Inner Detector (see Fig. 2). It consists of four telescopes on each side of
the interaction point; three of them are diamond telescopes and the fourth is
silicon. Each telescope has three diamond or silicon sensors with FE-I4
readout chips.
The FE-I4 used by the DBM is a pixel readout chip with a pixel size of 250 µm ×
50 µm; it processes data from 26880 channels with a data-link speed of 160 Mbit/s.
Its normal operating temperature in ATLAS is between −5 °C and 10 °C. All
DBM telescopes are tilted to face the interaction point, covering the
pseudorapidity range from 3.2 to 3.5.
To ensure that the operating temperature stays in the normal working range,
active CO2 cooling is provided by the IBL [2].
Because the DBM is mounted in the high-radiation forward region, operates in
the 7 T magnetic field and requires permanent cooling, safe and reliable
operation has the highest priority. For this reason a set of safety interlocks
has been implemented in the DCS to provide real-time processing of the hardware
operational parameters and an immediate reaction to hardware dangers.
The Detector Control System (DCS) is responsible for the supervision of the
detector equipment, the reading of operational parameters, the propagation of
alarms and the archiving of operational data [3]. Along with a set of commands
used for detector operation, a list of software interlocks is implemented in
the DBM DCS, which react to hardware parameter changes in an automated way.
Figure 3 shows the general view of the ATLAS DBM FSM currently implemented
and used. The DBM FSM contains the commands, the state and status definitions
and all safety interlocks. All software safety checks, being the most crucial
part of DBM normal operation, are duplicated in a WatchDog script which, like
the FSM, is part of the DCS. A communication check based on heartbeat
generation is established between the FSM and the WatchDog: if one of the
systems stops working, the detector operator immediately sees an alarm, and if
there is no reaction within 10 min, all hardware is switched off.
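The heartbeat scheme described above is a standard pattern; a minimal sketch in Python (our own toy model, not the actual WSCC/PVSS implementation of the DBM DCS) makes the two timeouts explicit: one for declaring the partner's heartbeat stale, and a grace period for the operator before the hardware is switched off.

```python
import time

class HeartbeatMonitor:
    """Toy model of the FSM/WatchDog mutual check: each side timestamps a
    heartbeat; if the partner's beat goes stale and the operator does not
    react within a grace period, the hardware is switched off."""

    def __init__(self, stale_after_s=5.0, grace_s=600.0, now=time.monotonic):
        self.now = now
        self.stale_after_s = stale_after_s
        self.grace_s = grace_s          # e.g. the 10 min reaction window
        self.last_beat = now()
        self.alarm_since = None

    def beat(self):
        """Called by the partner system to signal it is alive."""
        self.last_beat = self.now()
        self.alarm_since = None

    def check(self):
        """Return 'OK', 'ALARM' (operator notified) or 'SWITCH_OFF'."""
        t = self.now()
        if t - self.last_beat < self.stale_after_s:
            return "OK"
        if self.alarm_since is None:
            self.alarm_since = t        # first stale check raises the alarm
        if t - self.alarm_since >= self.grace_s:
            return "SWITCH_OFF"
        return "ALARM"
```

The injectable clock (`now`) is only there to make the timeout logic testable; in a real DCS both the heartbeat and the check run as periodic tasks.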
Fig. 4. The LV current fluctuations of an FE-I4 telescope that has lost its configuration.
3 Conclusions
The DBM has been designed to complement the existing luminosity detectors of
the ATLAS experiment. Mounted very close to the interaction point, it constantly
endures radiation, high temperatures and the magnetic field, which is why safe
hardware operation has high priority. For this reason two different procedures,
the FSM and the WatchDog, have been implemented to provide real-time processing
of the hardware operational parameters and an immediate reaction to hardware
dangers. They constantly monitor around 90 different hardware channels and
ensure safe, correct and efficient operation of the DBM.
References
1. Polini, A., et al.: Design of the ATLAS IBL readout system. Phys. Procedia 37,
1948–1955 (2012)
2. Verlaat, B., et al.: The ATLAS IBL CO2 cooling system. J. Instrum. 12 (2017)
3. Kersten, S., et al.: First experiences with the ATLAS pixel detector control sys-
tem at the combined test beam 2004. Nucl. Instrum. Meth. A 565, 97–101 (2006).
arXiv:physics/0510262
4. Feito, D.A., Honma, A., Mandelli, B.: Studies of IBL wire bonds operation in a
ATLAS-like magnetic field. IEEE, October 2016. https://doi.org/10.1109/NSSMIC.
2015.7581879
Development of Slow Control System
for the Belle II ARICH Counter
1 Introduction
The Aerogel Ring Imaging Cherenkov (ARICH) counter is a particle-identification
device that discriminates between charged pions and kaons [1,2] based on
the angular distribution of Cherenkov photons emitted in aerogel tiles [3,4]. The
ARICH counter is required to separate charged pions and kaons up to 3.5 GeV
at 4 σ in the endcap region of the Belle II detector.
A total of 420 Hybrid Avalanche Photo Detectors (HAPDs) [5] are used in the
ARICH counter to detect the emitted photons, and the management of the power
supplies and readout electronics of the HAPDs is critical to the operation of ARICH.
The HAPD employs two amplification mechanisms, bombardment gain and
avalanche gain. The bombardment gain is about 1500 and the avalanche gain about
40, so the total gain of the HAPD is around 60000. Three kinds of power-supply
inputs are needed to drive an HAPD module: a negative high voltage for
photo-electron acceleration (HV×1: −7 to −8 kV), reverse bias voltages
(Bias×4: ∼350 V), and a guard voltage (Guard×1: 175 V). The power-supply
control system for the ARICH counter is therefore designed to scale up to 2520
input channels in total.
An HAPD, in turn, has 144 channels to be read out via the front-end
electronics and the Belle2Link, the common readout scheme of the Belle II data
acquisition (DAQ) system [6]. An ARICH readout control system was developed to
manage the readout of the ARICH counter by configuring and monitoring the
readout electronics.
Both the power-supply and readout control systems must have interfaces to the
network, to a database containing configurations, and to a graphical user
interface. The ARICH slow control is developed with the common frameworks of
the Belle II DAQ software.
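The scale figures quoted above follow directly from the per-module counts; as a quick worked check (our own bookkeeping snippet, illustrative only):

```python
# Per-HAPD counts quoted in the text.
N_HAPD = 420
PS_CHANNELS_PER_HAPD = 1 + 4 + 1          # HV x1, bias x4, guard x1
READOUT_CHANNELS_PER_HAPD = 144

ps_channels = N_HAPD * PS_CHANNELS_PER_HAPD            # power-supply inputs
readout_channels = N_HAPD * READOUT_CHANNELS_PER_HAPD  # readout pixels
total_gain = 1500 * 40                                 # bombardment x avalanche
```

The products reproduce the 2520 power-supply channels, the total HAPD gain of about 60000, and give the roughly 6 × 10⁴ readout channels the slow control must configure and monitor.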
of the FEB and the MB have FPGAs (Spartan-6/Virtex-5) for the data processing
described above and for slow control of the ASICs. Data from the MBs are sent
to the Belle II common readout modules via Belle2Link, which is used not only
for readout but also for slow control, such as setting parameters. In practical
operation, data from the entire ARICH system are expected to be processed by
the 420 FEBs, the 72 MBs and the 18 readout modules. We implemented a
parameter-configuration scheme using the Belle2Link: parameters such as voltage,
current limit, threshold voltage and trigger timing can be set and read via the
Belle2Link, and monitored values such as voltage and current can also be read
and recorded.
A graphical user interface (GUI) has also been developed, based on Control
System Studio [11]. The power-supply and parameter-control GUIs are shown in
Figs. 1 and 2, respectively. The GUIs provide functions to set and get all
parameters and to monitor values such as voltage and current [12].
Fig. 1. A screenshot of the ARICH power supply control GUI. Fig. 2. A
device-oriented view of the GUI, which controls the FEB and MB configuration.
were taken from cosmic rays. The slow control system thus worked stably for one
day without any trouble, and we conclude that it can be operated in practical
conditions.
4 Summary
The ARICH slow control system, based on the Belle II DAQ slow control
framework, was developed for both the power-supply and the readout systems. The
readout control system was implemented to control the parameters of the FEBs
and the MBs through the Belle2Link. The slow control system was operated in an
integration test with cosmic rays and worked for about one day without any
trouble; many clear ring images from Cherenkov photons were obtained. We
conclude that the slow control system can be operated in practical conditions.
References
1. Abe, T., et al.: Belle II Technical Design Report, arXiv:1011.0352 [physics.ins-det],
KEK Report 2010-1 (2010)
2. Pestotnik, R., et al.: The Aerogel Ring Imaging Cherenkov system at the Belle II
spectrometer. Nucl. Instrum. Methods Phys. Res. A (in press). https://doi.org/10.
1016/j.nima.2017.04.043
3. Tabata, M., et al.: Large-area silica aerogel for use as Cherenkov radiators with high
refractive index, developed by supercritical carbon dioxide drying. J. Supercrit.
Fluids 110, 183–192 (2016)
4. Adachi, I., et al.: Construction of silica aerogel radiator system for Belle II RICH
counter. Nucl. Instrum. Methods Phys. Res. A (in press). https://doi.org/10.1016/
j.nima.2017.02.036
5. Yusa, Y., et al.: Test of the HAPD light sensor for the Belle II Aerogel RICH.
Nucl. Instrum. Methods Phys. Res. A (in press). https://doi.org/10.1016/j.nima.
2017.02.046
6. Yamada, S., et al.: Data acquisition system for the Belle II experiment. IEEE
Trans. Nucl. Sci. 62, 1175–1180 (2015)
7. CAEN. http://www.caen.it. Accessed 30 June 2017
8. Konno, Y., et al.: The slow control and data quality monitoring systems for the
Belle II experiment. IEEE Trans. Nucl. Sci. 62(3), 897–902 (2015)
9. Nishida, S., et al.: Readout ASICs and electronics for the 144-channel HAPDs for
the Aerogel RICH at Belle II. Phys. Procedia 37, 1730–1735 (2012)
10. XILINX. https://www.xilinx.com. Accessed 30 June 2017
11. Control System Studio. http://controlsystemstudio.org. Accessed 30 June 2017
12. Yonenaga, M., et al.: Development of slow control system for the Belle II ARICH
counter. Nucl. Instrum. Methods Phys. Res. A (in press). https://doi.org/10.1016/
j.nima.2017.03.037
Phase-I Trigger Readout Electronics
Upgrade for the ATLAS Liquid-Argon
Calorimeters
Alessandra Camplani 1,2,
on behalf of the ATLAS Liquid Argon Calorimeter group
1 Dipartimento di Fisica, Università degli Studi di Milano, Milan, Italy
2 I.N.F.N., Milan, Italy
alessandra.camplani@mi.infn.it
Abstract. The upgrade of the Large Hadron Collider scheduled for the
Long Shutdown period of 2019–2020, referred to as the Phase-I upgrade, will
increase the instantaneous luminosity to about three times the design
value. Since the current ATLAS trigger system does not allow a sufficient
increase of the trigger rate, an improvement of the trigger system is
required. The Liquid Argon (LAr) Calorimeter readout will therefore be
modified to use digital trigger signals with a higher spatial granularity,
in order to improve the identification efficiencies of electrons, photons,
taus, jets and missing energy at high background-rejection rates at the
Level-1 trigger.
1 Introduction
The Large Hadron Collider (LHC) showed very good performance and broke its own
records during Run 1 (2009–2013) and Run 2 (2015–2018). In particular, in June
2016 the LHC exceeded its design peak instantaneous luminosity of 10³⁴ cm⁻² s⁻¹.
The luminosity will increase further in the coming years: during Run 3
(2021–2023) the LHC parameters should allow an ultimate peak instantaneous
luminosity of 3 × 10³⁴ cm⁻² s⁻¹, while during Run 4 (after 2025) an
instantaneous luminosity of 5 × 10³⁴ cm⁻² s⁻¹ will be delivered. Since the
ATLAS trigger system does not allow a sufficient increase of the trigger rate,
an electronics upgrade is required [1].
To face the Run 3 luminosity, the LAr calorimeter trigger electronics will be
modified in order to maintain a low-pT lepton threshold while keeping the same
trigger bandwidth as in Run 2. The new trigger readout electronics will be
installed during the second Long Shutdown (LS2). The aim is to provide
higher-granularity, higher-resolution and longitudinal shower information from
the calorimeter to the Level-1 trigger processors. Figure 1 compares the electron
Fig. 1. Trigger signal granularity improvement from Trigger Towers (Δη × Δφ = 0.1 ×
0.1) to Super Cells (Δη × Δφ = 0.025 × 0.1 in front and middle layers).
energy deposition in the present system with that in the newly proposed system,
which has a ten times finer granularity in the trigger readout from the
calorimeter. The existing calorimeter trigger readout unit, the so-called
Trigger Tower, will evolve into the new finer-granularity scheme, called Super
Cells (SC). In total there will be 34000 SCs [2].
A system test with the final prototypes is being prepared. The first tests
were done at the beginning of 2017, and more are planned for the coming months.
The purpose is to confirm all functionalities and the stability of the system
before mass production.
3 Demonstrator
A demonstrator of the Phase-I upgrade was installed in ATLAS during the summer
of 2014, in the ElectroMagnetic (EM) barrel calorimeter, covering 1/32 of the
barrel region, to show the feasibility of the Phase-I upgrade (Fig. 4).
The demonstrator reads data from the Super Cells with the aim of validating
the energy reconstruction and collecting real collision data for the
development of the filtering algorithms. This also allows gaining experience
in the installation and operation of such equipment in the ATLAS environment.
In the FE, two LTDBs are installed to digitize the calorimeter data from
284 Super Cells in the EM barrel and transmit them over 48 optical links
at 4.8 Gbps to three BE boards called ABBA (ATCA test Board for Baseline
Acquisition), with a total throughput of about 200 Gbps per LTDB. Each ATCA
board mounts three Intel Stratix-IV FPGAs. Two FPGAs store the SC data in
circular buffers and wait for a Level-1 trigger to select the interesting
events. The third FPGA takes care of the readout via the IPbus protocol over
UDP on a 10 GbE network.
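The circular-buffer-plus-trigger pattern implemented in the ABBA FPGAs can be illustrated with a toy software model. This is our own Python analogy of the latency pipeline (the real logic is FPGA firmware, and the class and method names are invented for illustration): samples circulate in a fixed-depth buffer, and only a Level-1 accept for a matching bunch crossing pulls data out before it is overwritten.

```python
from collections import deque

class SuperCellBuffer:
    """Toy model of a trigger latency buffer: sampled Super Cell data wait in
    a fixed-depth circular buffer for a Level-1 accept; untriggered samples
    are silently overwritten by newer ones."""

    def __init__(self, depth):
        # (bcid, samples) pairs; when full, the oldest entry is dropped.
        self.buf = deque(maxlen=depth)

    def store(self, bcid, samples):
        """Append the samples for one bunch crossing (bcid)."""
        self.buf.append((bcid, samples))

    def read_on_l1(self, bcid):
        """Return the samples for the accepted bunch crossing, or None if the
        L1 decision arrived after the data were overwritten."""
        for stored_bcid, samples in self.buf:
            if stored_bcid == bcid:
                return samples
        return None
```

The buffer depth must cover the full Level-1 latency: if the accept arrives after the corresponding bunch crossing has been pushed out, the event is lost, which is exactly the constraint a fixed trigger latency places on the hardware.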
4 Conclusion
The ATLAS LAr calorimeter electronics will be upgraded for Phase-I after LS2
(2019–2020). The calorimeter trigger path will be digitized at the FE level, to
exploit an improved granularity in the trigger decision. The new FE and BE
systems are currently being developed and produced: the LTDB prototype is being
assembled, and the radiation-tolerant ASICs have been designed and are under
test, as are the LDPB ATCA carrier and LATOME AMC boards. In the meantime, a
demonstrator installed on the detector during the summer of 2014 collected data
in parallel with ATLAS during 2015 and 2016, for both proton-proton and
heavy-ion collisions, and is now being prepared for 2017 data. This
demonstrator provides valuable experience in preparation for the final
prototype installation and runs in 2018.
References
1. The ATLAS Collaboration: The ATLAS experiment at the CERN Large Hadron Collider.
J. Instrum. 3(08), S08003 (2008)
2. Aleksa, M.C., et al.: ATLAS Liquid Argon Calorimeter Phase-I Upgrade Technical
Design Report. Technical report CERN-LHCC, ATLAS-TDR-022, September 2013
3. Buchanan, N.J., et al.: Radiation qualification of the front-end electronics for the
readout of the ATLAS liquid argon calorimeters. JINST 3, 10005 (2008)
4. Xu, H.: The Trigger Readout Electronics for the Phase-I Upgrade of the ATLAS
Liquid Argon Calorimeters. Technical report, ATL-LARG-PROC-2016-003, CERN,
Geneva, November 2016
5. Xiao, L., et al.: LOCx2, a low-latency, low-overhead, 2 × 5.12-Gbps transmitter ASIC
for the ATLAS Liquid Argon Calorimeter trigger upgrade. J. Instr. 11(02), C02013
(2016)
6. Liang, F., et al.: The design of 8-Gbps VCSEL drivers for ATLAS liquid Argon
calorimeter upgrade. J. Instr. 8(01), C01031 (2013)
A Service-Oriented Platform for Embedded
Monitoring Systems in Belle II Experiment
Abstract. uSOP is a general-purpose single-board computer designed for deeply
embedded applications in the control and monitoring of detectors, sensors, and
complex laboratory equipment. In this paper, we present its deployment in the
monitoring system framework of the ECL endcap calorimeter of the Belle II
experiment, presently under construction at the KEK laboratory (Tsukuba,
Japan). We discuss the main aspects of the hardware and software architectures,
tailored to the needs of a detector designed around CsI scintillators.
1 Introduction
The Belle II [6] experiment has been designed to investigate the CP-violating
asymmetries in rare B meson decays and the elements of the CKM matrix, and to
perform dedicated searches for new physics in the dark sector.
Belle II will operate at the SuperKEKB [7] electron-positron asymmetric collider
(KEK, Tsukuba, Japan). The new collider will deliver an instantaneous luminosity
(of the order of 10³⁵ cm⁻² s⁻¹) a factor of 40 higher than that of the former KEKB.
The applications of uSOP system in Belle II environment will be described in the
following.
The measurement sequence implemented on the controller first finds the best
excitation current for the thermistors, then the optimal ADC dynamic range and,
eventually, subtracts the parasitic thermocouple effects. Two controllers have
been connected to the uSOP through SPI, each of them monitoring two calorimeter sectors.
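The thermistor readout described above ultimately converts a measured resistance into a temperature. A minimal sketch of the standard NTC beta-model conversion is shown below; the R25 and beta values are generic NTC parameters, not those of the AT thermistors actually used.

```python
import math

def ntc_temperature_c(r_ohm, r25=10_000.0, beta=3435.0):
    """Convert an NTC thermistor resistance to temperature (deg C)
    with the beta model: 1/T = 1/T25 + (1/beta) * ln(R/R25)."""
    t25 = 298.15  # 25 deg C in kelvin
    inv_t = 1.0 / t25 + math.log(r_ohm / r25) / beta
    return 1.0 / inv_t - 273.15

# At R = R25 the model returns 25 deg C by construction;
# a lower resistance means a higher temperature for an NTC device.
t_nominal = ntc_temperature_c(10_000.0)
t_warm = ntc_temperature_c(5_000.0)
```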
During the Belle shutdown in 2010, the two endcap wheels were dismounted and
set aside for the electronics upgrade. In order to monitor the CsI crystals
during the long shutdown, their environmental parameters were acquired by a
uSOP system reading 4 of the 32 sectors of the two endcaps. During this period,
the system was fully tested and debugged. For about two years it operated
continuously, running unattended acquisition tasks and exporting data samples
and plots to the web.
At the beginning of 2017, the backward ECL endcap was installed in the Belle II
detector. In the slow-control framework, the temperature and relative humidity of
the two endcaps (forward and backward) are monitored by a uSOP-based system:
96 thermistors and 32 humidity probes, placed in the forward and backward
sectors, are sampled.
A uSOP board can control up to four calorimeter sectors. Therefore, in order to
monitor a full ECL sector, four uSOP boards are housed in a 19-in. 6U Eurocard crate
(Fig. 2).
Fig. 2. uSOP in the rack of the Electronic Hut in Tsukuba experimental hall at KEK.
The monitored parameters are transferred by uSOP via Ethernet, according to the
EPICS protocol, and they are archived and plotted by CS Studio.
interaction point. A real-time measurement of the dose rate was performed by PIN
diodes positioned in several experimental locations; for the measurement of the
neutron fluxes, a detection system based on TPCs was deployed; the high dose rate
close to the interaction point was measured by diamond sensors. ³He tubes were
used for the detection of thermal neutrons. In addition, some calorimeter modules
made of BGO crystals and plastic scintillators were also deployed.
BEAST also contained six calorimeter modules based on three scintillating
crystals: thallium-doped caesium iodide (CsI(Tl)), pure caesium iodide (CsI),
and cerium-doped lutetium yttrium orthosilicate (LYSO). In order to monitor
the temperature and humidity of these modules, a uSOP system was used to
acquire and publish the data.
3 Conclusions
uSOP is a service-oriented platform currently in use in the Belle II experiment
for monitoring the environmental temperature and humidity of the CsI crystal
scintillators. uSOP can be fully managed remotely, including critical operations
like bootloader and operating system uploads. The platform has proven to be a
resilient and reliable solution for high-energy physics experiments.
References
1. Sitara™ AM335x Processors, Texas Instruments. http://www.ti.com/lsds/ti/processors/sitara/
arm_cortexa8/am335x/overview.page
2. BeagleBoard Black. https://beagleboard.org/black
3. About BeagleBoard.org and the BeagleBoard.org Foundation. http://beagleboard.org/about
4. Aloisio, A., et al.: uSOP: a microprocessor-based service oriented platform for control and
monitoring. IEEE TNS 99, 1 (2017)
5. XPort Pro Embedded Device Server User Guide, Lantronix. https://www.lantronix.com/wp-
content/uploads/pdf/900-560e_XPort_Pro_UG_release.pdf
6. Abe, T., et al.: Belle II Technical Design Report. arXiv:1011.0352 [physics.ins-det]. https://
arxiv.org/abs/1011.0352
7. Ohnishi, Y., et al.: Accelerator design at SuperKEKB. PTEP 2013, 03A011 (2013)
8. Abashian, A., et al.: The Belle detector. NIM A 479, 117–232 (2002)
9. Zhu, R.-Y.: Precision crystal calorimetry in high energy physics. Nucl. Phys. B (Proc.
Suppl.) 78, 203 (1999)
10. Chen, R.-F., et al.: Property measurements of the CsI(Tl) crystal prepared at IMP. Chin.
Phys. C 32, 2 (2008)
11. Semitec Corp., High Precision Thermistor, AT Thermistor
12. Vaisala: User's Guide, Vaisala Humidity and Temperature Probes HMP60 and HMP110
Series
13. LTC2983 - Multi-Sensor High Accuracy Digital Temperature Measurement System. http://
www.linear.com/product/LTC2983
14. BEAST II Technical Design Report DRAFT For use in US Belle II Project TDR. https://
indico.phys.hawaii.edu/getFile.py/access?contribId=1&resId=0&materialId=slides&confId=
469
Integration of Readout of Vertex
Detector in Belle II DAQ System
1 Introduction
The Belle II experiment [1], a next-generation B factory experiment, starts
physics data taking in 2018 to search for New Physics beyond the Standard
Model based on precision measurements of flavor systems. The experiment aims
to achieve these precision measurements by operating at a peak luminosity of
8 × 10³⁵ cm⁻² s⁻¹ from the SuperKEKB accelerator, which is 40 times higher
than that of the previous Belle experiment. The data acquisition (DAQ) system [2]
is one of the biggest challenges of the Belle II experiment, since it must handle
a trigger rate of up to 30 kHz and a data rate of up to 30 GB/s at the level-1
trigger. The Belle2Link [3], a common detector readout scheme using
c Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 58–62, 2018.
https://doi.org/10.1007/978-981-13-1313-4_13
COPPER (common pipeline electronics readout) [4] and HSLB (high-speed link
board) boards, was developed to handle data from all subdetectors except
the PXD (pixel vertex detector) [5] and to merge them into the HLT (high-level
trigger) [6] PC farm, while the DHH [7], a dedicated readout system for the PXD,
has been developed to handle the huge event size from the DEPFET sensors.
A common trigger and system clock distribution system based on the FTSW
(First Timing Switch) [8] manages the trigger signals of all readout electronics
in the Belle II detector. Two levels of event builders [9] are introduced: one
merges the data of the 6 subdetectors as HLT input; the other merges the outputs
of the PXD and HLT to be recorded. A reduction scheme for the pixel event size,
based on the selection of RoIs (regions of interest) in the pixels, is performed
by an FPGA-based processor called ONSEN [10], using tracks reconstructed in the
HLT. Dedicated flash-ADC readout electronics were developed for the SVD APV
chips, and a bridge board, the FTB, transfers the waveforms of the flash ADC to
the Belle2Link in the same way as the other 5 subdetectors. In parallel with the
integration of the outer detectors using cosmic rays, a beam test was performed
in March 2017 at the DESY test beam facility as a big step in the readout
integration of the VXD systems, to establish the full data chain of the Belle II
DAQ system with a VXD prototype.
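The event-building scheme described above, which merges per-subdetector fragments by event number, can be sketched as follows; the fragment format, the `build_events` helper, and the stream names are illustrative, not the actual Belle II software.

```python
def build_events(streams):
    """Merge per-subdetector fragment streams into complete events.

    streams: dict mapping subdetector name -> list of (event_number, payload).
    Returns (events, mismatches): events is a list of dicts keyed by
    subdetector; mismatches lists event numbers for which some subdetector
    had no fragment, i.e. the desynchronisation condition discussed below.
    """
    numbers = sorted({n for frags in streams.values() for n, _ in frags})
    lookup = {det: dict(frags) for det, frags in streams.items()}
    events, mismatches = [], []
    for n in numbers:
        if all(n in lookup[det] for det in streams):
            events.append({det: lookup[det][n] for det in streams})
        else:
            mismatches.append(n)  # flag an event mismatch
    return events, mismatches

streams = {
    "SVD": [(1, "s1"), (2, "s2"), (3, "s3")],
    "PXD": [(1, "p1"), (3, "p3")],   # event 2 lost: mismatch
}
events, bad = build_events(streams)
```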
3 Operation Results
Event mismatches were observed among the readout electronics of both the SVD
and PXD in past beam tests of the VXD prototypes, due to bugs in the firmware
and electrical instabilities of the signal cables. Once an event mismatch was
observed, it took more than ten minutes to recover the whole system due to
60 T. Konno et al.
Fig. 1. A drawing of the event display. The blue line shows the reconstructed
track and two green squares represent the RoI.
Fig. 2. A screen shot of the run control GUI panel.
As a result of the beam test, we concluded that the challenges of the DAQ
integration with the VXD system were successfully met, while limitations remain
in the readout performance due to the minimum interval of the trigger time
difference, along with concerns about event mismatches at run start.
References
1. Abe, T., et al.: Belle II Technical Design Report, arXiv:1011.0352 (2010)
2. Yamada, S., et al.: Data acquisition system for the Belle II experiment. IEEE
Trans. Nucl. Sci. 62, 1175–1180 (2015)
3. Sun, D., et al.: Belle2Link: a global data readout and transmission for Belle II
experiment at KEK. Phys. Procedia 37, 1933–1939 (2012)
4. Higuchi, T., et al.: Modular pipeline readout electronics for the SuperBelle drift
chamber. IEEE Trans. Nucl. Sci. 52, 1912–1917 (2005)
5. Moser, H., et al.: The Belle II DEPFET pixel detector. Nucl. Instrum. Methods
Phys. Res. A 831, 85–87 (2016)
6. Itoh, R., et al.: Data flow and high level trigger of Belle II DAQ system. IEEE
Trans. Nucl. Sci. 60, 3720–3724 (2013)
7. Levit, D., et al.: FPGA based data read-out system of the Belle II pixel detector.
IEEE Trans. Nucl. Sci. 62, 1033–1039 (2015)
8. Nakao, M., et al.: Minimizing dead time of the Belle II data acquisition system
with pipelined trigger flow control. IEEE Trans. Nucl. Sci. 60, 3729–3734 (2013)
9. Suzuki, S.Y.: The three-level event building system for the Belle II experiment.
IEEE Trans. Nucl. Sci. 62, 1162–1168 (2015)
10. Gessler, T., et al.: The ONSEN data reduction system for the Belle II pixel detector,
arXiv:1406.4028 (2014)
11. Thalmeier, R., et al.: The Belle II SVD data readout system. Nucl. Instrum.
Methods A845, 633–638 (2017)
12. Konno, T., et al.: The slow control and data quality monitoring system for the
Belle II experiment. IEEE Trans. Nucl. Sci. 62, 897–902 (2015)
13. EPICS-Experimental Physics and Industrial Control System. http://www.aps.anl.
gov/epics/
The Weighting Resistive Matrix for Real
Time Data Filtering in Large Detectors
Among the proposals for hardware-based regression calculators for trigger
purposes, we mention Associative Memories (AM) [1] and the Weighting Resistive
Matrix (WRM) [2], the subject of this paper.
f_ik = eᵀ p_k(S_{*, i..i+n}) e,
where p_k : M_{n,n}(ℝ) → M_{n,n}(ℝ) takes an n × n matrix and returns its
Hadamard product (element-wise product) with the k-th n × n bitmap matrix, which
encodes how the nodes of the circuit are connected together.
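A minimal NumPy sketch of this scoring, with e the all-ones vector and each bitmap standing in for one node-connection pattern; the matrices below are toy examples, not an actual WRM layout.

```python
import numpy as np

def wrm_score(S, bitmaps):
    """Score each connection bitmap B_k against the hit matrix S:
    f_k = e^T (S * B_k) e, i.e. the sum of S over the nodes B_k selects."""
    e = np.ones(S.shape[0])
    return [e @ (S * B) @ e for B in bitmaps]

# A diagonal track pattern scores highest on the diagonal bitmap
S = np.eye(4)                                  # toy hit matrix
bitmaps = [np.eye(4), np.fliplr(np.eye(4))]    # two candidate patterns
scores = wrm_score(S, bitmaps)                 # -> [4.0, 0.0]
```

The best-fit nature of the technique shows here: a noisy S would still give the diagonal bitmap the highest score, without requiring an exact match.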
The main advantages of the WRM technique are that it is extremely fast
(basically no computation is required and the desired response is given within
the signal propagation delay through the network) and that it is robust to
noise, since it looks not for an exact match but for the best-fit correlation.
The WRM algorithm recalls the Radon and Hough transforms, which are also based
on parametric-space scoring, but differs from them in its intrinsic robustness
against noise and in an execution time independent of the input pattern.
Fig. 1. Simplified unidimensional WRM wiring diagram: the signal injected in each
node is diffused through the horizontal lines and summed by the columns.
Fig. 2. Left: Monte Carlo (FLUKA [5]) νe CC event. Original ν energy 2.4 GeV. The
image spans approx. 300 cm by 60 cm. Right: Monte Carlo (FLUKA) Supernova ν event.
Original ν energy 19 MeV. The image spans ≈ 15 cm in the horizontal and 25 cm in
the vertical direction.
such as the number of tracks within the detector, or the number of different
event topologies seen in a given time period, and prioritize the analysis over a
pre-selection of interesting event topologies.
Fig. 3. Left: WRM software simulation. Middle: Input data example. Right: The
corresponding detected signal mask.
This simulation is being used on simulated DUNE data sets to optimize the
WRM design for detecting signals in DUNE data (Fig. 3). The proposed system
must be suitable for integration with an already existing, advanced DAQ
architecture design. On one side, access to the TPC analog data, needed by
the native WRM design, is not possible. Therefore we are studying an extension
of the WRM network functioning as an analog-adder-based DAC input stage,
transforming the byte stream into voltage pulses. Another problem is the
interface between the DUNE DAQ and the WRM system, which can be solved by
exploiting the FELIX architecture [6] (one of the proposed readout
architectures), a new device conceived to interface data sources carried by
many Gigabit Transceiver (GBT) links to the rest of the DAQ. The FELIX manages
the data streams on a high-performance network, to which the WRM can be
transparently interfaced using an FPGA-based interface needed to prepare the
data stream and collect the WRM results. We aim to implement such a device in
the ProtoDUNE run, expected in 2018, where the FPGA will be used to emulate a
reduced version of the WRM algorithm, as a proof of principle and demonstrator.
The results can be used both online and offline to validate the algorithm's
selectivity and efficiency on real data.
References
1. Dell’Orso, M., Ristori, L.: VLSI structures for track finding. NIM-A 278, 436–440
(1988)
2. Cardarelli, R., Chiostri, V., Di Stante, L., Reali, E., Santonico, R., Travaglini, M.,
Tusi, E.: On a very fast topological trigger. NIM-A 324(1–2), 253–259 (1993)
68 A. Abdallah et al.
3. Abdallah, A., Cardarelli, R., Aielli, G.: On a fast discrete straight line segment
detection (2014)
4. http://www.dunescience.org/
5. Boehlen, T.T., et al.: The FLUKA code: developments and challenges for high
energy and medical applications. Nucl. Data Sheets 120, 211–214 (2014)
6. Anderson, J.T., et al.: FELIX: a high-throughput network approach for interfacing to
front end electronics for ATLAS upgrades. ATL-DAQ-PROC-2015-014. https://
cds.cern.ch/record/2016626
Experimental Detector Systems
Thermal Mockup Studies of Belle II
Vertex Detector
The Belle II vertex detector [1] is made up of a 2-layer DEPFET¹ pixel detector
(PXD) and a 4-layer double-sided silicon strip detector (DSSD). The PXD [3]
consists of 40 sensors with in total 7.68 million pixels. The sensitive area of
each sensor is thinned down to 75 µm, with a size of 61.44(44.80) × 12.50 mm² for
layer 2 (1). The material budget is determined to be 0.21% X₀ per PXD layer.
The DEPFET sensor is operated by 3 types of ASICs: the Switchers, which perform
row control; the analog front-end, named Drain Current Digitizer (DCD); and the
Data Handling Processor (DHP), which performs the pedestal subtraction. The
power consumption is dominated by the DCD/DHP, which can be placed at the end
of the sensor, outside the physics acceptance of the Belle II detector. Active
cooling is required there; meanwhile, the matrix and Switchers contribute little
power consumption and can thus be sufficiently cooled with a forced air flow.
The so-called PXD ladder is formed from 2 DEPFET sensors glued together at the
butt face, with ceramic mini-rods embedded in the thick rim of the
¹ Abbreviation of “DEPleted Field Effect Transistor” [2].
72 H. Ye and C. Niebuhr
sensor. Such a ladder design makes the sensors self-supporting. The ladder size
is 170.0(136.0) × 15.4 mm² for layer 2 (1).
Both PXD layers are mounted onto common support and cooling blocks (SCBs).
The steel SCBs are manufactured using 3D-printing technology, with enclosed CO2
and open N2 channels integrated. 8 silver-coated carbon fiber tubes connect the
forward (FWD) and backward (BWD) SCBs to provide grounding and to inject N2
towards the Switchers of the inner layer. The open N2 holes on the SCBs provide
N2 to cool the sensitive area. The ladders are fixed on the SCBs using M1.2
screws with a plastic washer and an O-ring to prevent electrical contact
between screw and silicon. Elongated holes are adopted on the FWD side, which
allows compensation for thermal expansion. The ladder supporting scheme is
shown in Fig. 1.
Fig. 1. (a) The mechanical design of the Belle II PXD. The PXD ladders are fixed
on two pairs of SCBs. Forward and backward SCBs are connected with 8 carbon fiber
tubes. (b) 3D printed SCB with integrated cooling channels for CO2 circulation and
open channels providing forced N2 flow.
The 4 layers of DSSDs are called the silicon vertex detector (SVD) [4], which
inherits the name from the Belle SVD; the layers are numbered from 3 to 6. The
SVD is composed of 172 DSSD modules with a sensitive area of 1.13 m² and a polar
angle coverage of 17–150°. The modules are read out by the APV25 chips, which
are thinned down to 100 µm. The SVD ladders are formed from up to 5 DSSD
modules in a row, supported by a sandwich of two ribs with an Airex foam core.
A slanted angle with trapezoidal modules is implemented on the FWD side of
layers 4–6. For layer 3 and the FWD/BWD modules of layers 4–6, the APV25s are
mounted at the end of the ladder and get support and cooling from the endrings,
in which CO2 cooling circuits are integrated. The barrel modules are cooled by
CO2 pipes. The so-called “origami” chip-on-sensor concept makes it possible to
read out the bottom-side strips using chips on the top side. In this way the
readout chips can be arranged in a row on one side of the modules and cooled
with a single CO2 cooling pipe. Such a design was developed to minimize the
distance between the DSSDs and the APV25s as well as the material budget [5].
2 VXD Cooling
Adequate cooling is required in such a dense VXD volume for detector operation.
The VXD consumes about 1 kW, of which the PXD contributes about 360 W and the
SVD about 700 W; together with the heat load through the 9 m of vacuum-insulated
flex lines from the cooling plant to the detector, a cooling capacity of 2–3 kW
is required. Meanwhile, the VXD needs to be thermally isolated from the CDC
and the beam pipe. Room temperature is required at the inner surface of the
central drift chamber (CDC) for stable calibration and dE/dx performance. In
total, 12 cooling circuits handle the cooling tasks, as listed in Table 1.
Table 1. The VXD cooling pipe line system and the design power consumption.
Circuit Half Layers Cooling method Side Power per circuit [W]
1,2 +y 1,2 SCB BWD,FWD 90
3,4 −y 1,2 BWD,FWD 90
5,6 +x,−x 3–6 Endring BWD 93
7,8 +x,−x 3–6 FWD 93
9,10 +x,−x 4,5 Cooling pipe BWD 68
11,12 +x,−x 6 BWD 96
The VXD cooling system adopts the 2-phase CO2 cooling method [6], an efficient
concept for low-mass detectors. The pumped cooling system uses a 2-phase
accumulator, which can be both heated and cooled, to control the pressure and
hence the temperature inside the detector volume. All control can be done in a
distant, accessible cooling plant. A large temperature range, typically from 20
to −40 °C, can be achieved. The finite element analysis indicates that the CO2
temperature needs to be lower than −20 °C.
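The accumulator principle, fixing the evaporation temperature by setting the saturation pressure, can be illustrated with a few rounded CO2 saturation points; the tabulated values are approximate textbook figures used only for illustration.

```python
# Approximate CO2 saturation points (temperature in deg C, pressure in bar);
# rounded textbook values, for illustration only.
SAT = [(-40.0, 10.0), (-20.0, 19.7), (0.0, 34.9), (20.0, 57.3)]

def sat_temperature(p_bar):
    """Linearly interpolate the CO2 saturation temperature for a given
    accumulator pressure; setting the accumulator pressure in the plant
    thus fixes the evaporation temperature inside the detector."""
    for (t0, p0), (t1, p1) in zip(SAT, SAT[1:]):
        if p0 <= p_bar <= p1:
            return t0 + (t1 - t0) * (p_bar - p0) / (p1 - p0)
    raise ValueError("pressure outside tabulated range")
```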
in the middle of the foils to probe the local temperature. An aluminium shield
simulates the inner cover of CDC and forms the dry volume between the final
focusing magnets (QCR). The heat intake arising from the SuperKEKB beam
pipe is not taken into consideration. The MARCO (Multipurpose Apparatus for
Research on CO2 ) system serves as the cooling plant for the thermal mockup.
Fig. 2. The full-sized thermal mockup of the Belle II vertex detector. The left
picture shows the CAD design; the right picture shows the constructed PXD and SVD mockup.
Fig. 3. The pressure drops as a function of injected CO2 mass flow in different
cooling circuits, with no heat load from the detector. The contribution of the
flex lines is included in the left plot and excluded in the right one. The solid
curves are 2nd-order polynomial fits with the intercept constrained to 0.
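The fit described in the caption, a second-order polynomial with the intercept constrained to zero, amounts to a least-squares fit in the two remaining coefficients; a sketch with synthetic, noise-free data follows (the function name and values are illustrative).

```python
import numpy as np

def fit_quadratic_through_origin(flow, dp):
    """Fit dp = a*flow + b*flow**2 (no constant term) by least squares;
    omitting the column of ones enforces a zero intercept."""
    A = np.column_stack([flow, flow**2])
    (a, b), *_ = np.linalg.lstsq(A, dp, rcond=None)
    return a, b

flow = np.array([1.0, 2.0, 3.0, 4.0])   # mass flow (arbitrary units)
dp = 0.5 * flow + 0.1 * flow**2         # synthetic pressure drops
a, b = fit_quadratic_through_origin(flow, dp)
```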
Fig. 4. Vibration at the center of the layer 2 PXD ladder at different injected
N2 flow rates. The measurement on the SCB screw is taken as reference.
5 Summary
The operating environments of the Belle II PXD and SVD are thermally coupled,
and they also influence the surrounding CDC. Evaporative 2-phase CO2 cooling
and airflow injection perform the VXD cooling. A full-size thermal mockup was
built at DESY to verify and optimise the cooling concept of the Belle II VXD.
The thermal and mechanical results are summarised below:
With CO2 set at −25 °C,
Acknowledgement. We wish to thank the MPI für Physik, München group for preparing
the SCBs and the MPG-Halbleiterlabor (HLL) group for producing the PXD dummy
sensors. The Belle II VXD cooling frame was developed based on the experience of
ATLAS-IBL; we would like to acknowledge the support from CERN experts.
References
1. Abe, T., et al.: KEK Report 2010-1 (2010). arXiv: 1011.0352v1
2. Kemmer, J., Lutz, G.: Nucl. Instrum. Methods 253, 356 (1987)
3. Marinas, C.: Nucl. Instrum. Methods 731, 31 (2013)
4. Friedl, M., et al.: Nucl. Instrum. Methods A 732, 83 (2013)
5. Irmler, C., et al.: Nucl. Instrum. Methods 732, 109 (2013)
6. Verlaat, B., Colijn, A.P.: CO2 cooling developments for HEP detectors. In: VERTEX
2009, Veluwe, Netherlands (2009)
7. Andricek, L., Lutz, G., Richter, R.H., Reiche, M.: IEEE Trans. Nucl. Sci. 51, 1117
(2004)
Integration and Characterization
of the Vertex Detector in SuperKEKB
Commissioning Phase 2
H. Ye
(On behalf of the BEAST2 Collaboration)
1 Introduction
As an upgrade of KEKB, SuperKEKB [1] at KEK aims at increasing the peak
luminosity to 8 × 10³⁵ cm⁻² s⁻¹. The Belle II [2] experiment aims to explore
new physics beyond the Standard Model with an extensively upgraded detector.
The SuperKEKB commissioning campaign is scheduled in 3 phases. Phase 1 was
successfully completed in June 2016, achieving circulating beams in both rings.
A dedicated array of sensors was installed around the interaction point (IP) to
monitor and study beam-related backgrounds [3]. The project is now proceeding
to Phase 2, during which the beam background will be further investigated with
collisions, using a dedicated vertex detector system. Besides machine
commissioning, the accelerator aims to achieve a peak luminosity of
1 × 10³⁴ cm⁻² s⁻¹, which is the design peak luminosity of KEKB. In April 2017,
the partial Belle II detector, without the vertex detector (VXD), was rolled
in, and the final focusing magnets were also moved into place. The first
collision is expected in February 2018. The physics run (Phase 3) with the full
Belle II detector is scheduled for late 2018. The experiment is expected to
accumulate an integrated luminosity of about 50 ab⁻¹ well within the next
decade.
Fig. 1. The 2-layer PXD modules and 4-layer SVD modules tested in the test beam at
DESY.
Fig. 2. The vertex detector dedicated to SuperKEKB commissioning Phase 2. The
integration was tested at DESY; the thermal interference was quantified and the
integration sequence was decided. During the test, the SVD, FANGS, CLAWS and
PLUME were involved.
Fig. 3. An event display from the test beam at DESY. The blue curve is the
reconstructed track, the yellow lines indicate the fired SVD strips, and the
green regions are the determined ROIs on the PXD.
above 98%, the residual RMS for single-hit clusters is determined to be 14.3 µm,
which is consistent with the digital resolution of pitch/√12 [9]. The residuals
of the SVD sensors were determined to be ∼11 µm on the r–φ side and ∼30 µm on
the z side. All SVD sensors under test achieved an average efficiency above 99.5%
per strip on both sides [10]. The deposited charge in FANGS was quantified in
TB17. One advantage of the FE-I4 chip is that the length of the signal carries
the deposited-charge information. In FANGS, the HitOr signal of each sensor is
sampled with an external 640 MHz FPGA clock, resulting in a 12-bit resolution
for the charge measurement. The measured mean value from a Landau fit is 17.3 ke,
which is compatible within a 5% error with the expected value of 18 ke for a
250 µm thick silicon sensor [11].
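The 640 MHz sampling of the HitOr signal translates a 12-bit sample count into a signal length; a minimal sketch of that arithmetic (the function name is illustrative, and no FANGS calibration constants are implied):

```python
CLOCK_HZ = 640e6             # external FPGA sampling clock
TICK_NS = 1e9 / CLOCK_HZ     # 1.5625 ns per sample

def hitor_length_ns(n_samples):
    """Length of the HitOr signal from the number of 640 MHz samples;
    the counter described in the text is 12 bit, hence the range check."""
    if not 0 <= n_samples < 4096:
        raise ValueError("counter is 12 bit")
    return n_samples * TICK_NS

length = hitor_length_ns(64)   # 64 samples -> 100.0 ns
```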
References
1. Ohnishi, Y., et al.: PTEP 2013, 03A011 (2013)
2. Abe, T., et al.: KEK Report 2010-1 (2010). arXiv: 1011.0352v1
3. Gabriel, M.: The Belle II/SuperKEKB Commissioning Detector - Results from
the First Commissioning Phase. TIPP2017 talk
4. Marinas, C.: Nucl. Instrum. Methods 731, 31 (2013)
5. Friedl, M., et al.: Nucl. Instrum. Methods A 732, 83 (2013)
6. Kemmer, J., Lutz, G.: Nucl. Instrum. Methods 253, 356 (1987)
7. Geßler, T., et al.: IEEE Trans. Nucl. Sci. 62, 1149 (2015)
8. Konno, T.: Integration of readout of the vertex detector in the Belle II DAQ system.
TIPP2017 talk
9. Schwenker, B., et al.: PoS(Vertex 2016)011
10. Lück, T., et al.: PoS(Vertex 2016)057
11. Khetan, N.: Master thesis, Development and integration of the FANGS detector for
the BEAST II experiment of Belle II, Rheinischen Friedrich-Wilhelms-Universität
Bonn
Radiative Decay Counter for Active
Background Identification in MEG II
Experiment
1 Introduction
The size of the RDC detector has to be compact (∼20 cm) because it is installed
inside the superconducting magnet bore. Meanwhile, it has to be operational in a
high-hit-rate environment (∼MHz) because many Michel positrons also hit the
detector. For these reasons, the detector consists of finely segmented
scintillators read out with SiPMs. The detector is installed on a moving arm,
in order to allow the insertion of a calibration target for the MEG II photon
detector from the downstream side.
Fig. 1. Example of the dominant background event. The red dashed line and blue
solid line represent the background photon and positron, respectively. The red
solid line represents the time-coincident positron detected by the RDC.
Figure 2 shows the constructed detector. The arrival times of positrons are
measured by 12 fast plastic scintillator bars, 5 mm thick, in the front part of
the RDC. Their widths (lengths) vary in the range 1–2 cm (7–19 cm). The bars
with the smaller widths are used in the higher-hit-rate region. Scintillation
light is collected at the two ends with two or three 3 × 3 mm² SiPMs (Hamamatsu
S13360-3050PE). In order to reduce the number of channels, the SiPMs are
connected in series. A timing resolution of ∼90 ps was obtained for each
counter in a laboratory test.
Behind the plastic scintillators, there are 76 LYSO crystals with a size of
2 × 2 × 2 cm³. The energy of the positrons is measured to distinguish pile-up
energetic Michel positrons from RMD positrons. Each crystal is read out with a
single 3 × 3 mm² SiPM (Hamamatsu S12572-025P), which is mounted on a PCB with
Fig. 2. Top left: Plastic scintillators. Bottom left: LYSO crystals. Right: RDC detector
mounted on the moving arm.
84 R. Iwai et al.
a flexible circuit. The SiPM is fixed on the backside of the crystal with a
spring. Thanks to the high light yield of the crystals, a sufficiently good
energy resolution was obtained (∼6% for 1 MeV positrons). Moreover, due to the
contained radioisotope ¹⁷⁶Lu, the crystal has ∼2 kHz of intrinsic radioactivity,
which produces an energy peak around 600 keV. This is used for the energy-scale
calibration of each channel.
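The per-channel energy-scale calibration from the ¹⁷⁶Lu peak can be sketched as locating the peak in the raw spectrum and deriving a keV-per-count gain; the helper name and synthetic data below are illustrative, not the actual RDC calibration code.

```python
import numpy as np

def gain_from_lu_peak(adc_values, peak_kev=600.0, bins=200):
    """Estimate the keV-per-ADC-count gain of one channel by locating the
    176Lu intrinsic-activity peak in its raw spectrum."""
    counts, edges = np.histogram(adc_values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    peak_adc = centers[np.argmax(counts)]   # position of the 600 keV peak
    return peak_kev / peak_adc

# Synthetic spectrum: a fake peak at 1200 ADC counts -> gain near 0.5 keV/count
rng = np.random.default_rng(0)
adc = rng.normal(1200.0, 30.0, size=10_000)
gain = gain_from_lu_peak(adc)
```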
3 Commissioning
The commissioning of the RDC detector was completed in 2016 using a
high-intensity muon beam. The trigger system and the MEG II DAQ electronics
(WaveDREAM [3]) were also tested. As a substitute for the MEG II photon
detector, 16 BGO crystals with PMTs were used to detect photons from RMD.
Before the data taking, a series of calibrations was performed. To optimize the
bias voltage of each SiPM, data from Michel-decay positrons and from the
intrinsic radioactivity of the LYSO crystals were triggered by the timing
counter and the calorimeter, respectively. The absolute energy scales of the
BGO crystals were calibrated using the 1.8 MeV gamma rays of ⁸⁸Y.
The data were acquired over a few days by triggering on a hit in any BGO
crystal. Because most of the events were triggered by cosmic rays, we first
selected RMD events using the hit position and the total energy deposit in the
BGO crystals. Figure 3 shows the timing difference between the RDC and the BGO
detector after the event selection. A clear timing peak of RMD events was
successfully observed. The flat regions correspond to pile-up Michel positrons,
which are mostly triggered by photons from positron annihilation in flight. The
pile-up rejection was demonstrated by cutting events with a large energy
deposit in the LYSO crystals (>4 MeV, Fig. 4). The detection efficiency of the
RDC will be measured using the MEG II photon detector in 2017.
Fig. 3. Hit time difference of BGO crystals and RDC after event selection.
Fig. 4. Energy deposit in the LYSO crystals.
Fig. 5. CG of the upstream detector.
Fig. 6. Bundled fibers (64 fibers × 2).
5 Conclusion
In MEG II, the RDC will be newly installed to improve the sensitivity by 16%.
It identifies the dominant background photons from RMD by detecting the
time-coincident low-momentum positrons. We constructed a detector with a
compact design and good performance in a high-rate environment. The capability
of background identification was successfully demonstrated in the beam test. A
series of studies for the upstream detector is in progress. The sensitivity
improvement will reach 22–28% with both RDC detectors installed.
References
1. Baldini, A.M., et al.: Search for the lepton flavour violating decay μ+ → e+ γ with
the full dataset of the MEG experiment. Eur. Phys. J. C. 76, 434 (2016)
2. Baldini, A.M., et al.: MEG Upgrade Proposal. arXiv:1301.7225 (2013)
3. Baldini, A.M., et al.: An FPGA-based trigger for the phase II of the MEG experi-
ment. Nucl. Instrum. Meth. A 824, 326 (2016)
Belle II iTOP Optics: Design,
Construction and Performance
1 Introduction
The Belle II [1]/SuperKEKB [2] experiment is an upgrade of the Belle/KEKB
experiment to search for New Physics (NP), i.e. physics beyond the Standard
Model (SM). The upgraded detector plans to take ∼50 ab⁻¹ of e⁺e⁻ collision
data, with a design luminosity of 8 × 10³⁵ cm⁻² s⁻¹, about 40 times larger
than that of the KEKB collider. To achieve such a high luminosity, the
so-called nano-beam technology [3] is used to squeeze the beam bunches
significantly.
Many sub-detectors of Belle will be upgraded for Belle II. This includes the
newly designed iTOP (imaging-Time-Of-Propagation) counter [4–7], the particle
identification counter in the barrel region. It consists of 2.7 m long quartz
optics for the radiation and propagation of Cherenkov light, an array of
micro-channel-plate photomultiplier tubes (MCP-PMTs) [8] for photon detection,
and wave-sampling front-end readout electronics [9,10]. This article describes
the design, construction and performance of the optics for the iTOP counter.
2 Detector Design
As shown in Fig. 1, one iTOP module consists of two bars with dimensions
of 1250 × 450 × 20 mm. At one end of the bars is the reflection mirror with a
spherical surface. At the other end is the expansion block, called the prism. All optics
components are made of Corning 7980 synthetic fused silica, which has high
purity and no internal striae.
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 87–90, 2018.
https://doi.org/10.1007/978-981-13-1313-4_18
88 B. Wang et al.
When a charged track traverses the quartz radiator, it emits Cherenkov
photons. The Cherenkov angle depends on the mass of the particle for a given
momentum, the latter being measured by the central drift chamber (CDC). The
photons are reflected by the bar surfaces and the reflection mirror, then collected
by the MCP-PMTs at the prism end. The time resolution of the photon sensors
and the front-end electronics is required to be better than 50 ps, which is needed
to distinguish the difference in time of propagation between Cherenkov photons
from π ± and K ± tracks.
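The mass dependence underlying this separation, cos θc = 1/(nβ), can be illustrated with a short numerical sketch. This is not Belle II software; the refractive index n ≈ 1.47 for fused silica and the 2 GeV/c momentum are assumed round values for illustration only:

```python
import math

# Charged particle masses in GeV/c^2 (PDG values, rounded)
M_PION = 0.1396
M_KAON = 0.4937

def cherenkov_angle(p, m, n=1.47):
    """Cherenkov angle (rad) for momentum p [GeV/c] and mass m [GeV/c^2]
    in a medium of refractive index n, from cos(theta_c) = 1 / (n * beta)."""
    beta = p / math.sqrt(p * p + m * m)
    return math.acos(1.0 / (n * beta))

# At 2 GeV/c the pion is faster than the kaon, so its Cherenkov angle is
# larger; the differing angles translate into different photon paths in the
# bar and hence a measurable difference in the time of propagation.
theta_pi = cherenkov_angle(2.0, M_PION)
theta_k = cherenkov_angle(2.0, M_KAON)
```

The angle difference is only of order 25 mrad here, which is why the readout must resolve propagation-time differences at the tens-of-picoseconds level.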
3 Module Construction
The construction of the iTOP modules started at the end of 2014, and
all 17 modules, including one spare, were finished by April 2016. After testing
with laser and cosmic rays, these modules were installed on the Belle II detector by
May 2016.
Two laser displacement sensors and an autocollimator were used for the alignment.
A laser displacement sensor measures the distance to a surface; it was
used to align the position, both horizontal and vertical, of two optics surfaces.
The autocollimator injects a laser beam onto a mirror mounted on the surface of the
optics and obtains the angle information by measuring the angle of the reflected
beam relative to the original one. It was used to align the angle between two
optics surfaces.
After the alignment, the optics were moved together with a 50–100 µm gap.
The joints between the optics were taped with Teflon tape to make a “dam” that
prevents the epoxy from flowing outside. The epoxy used for gluing was EPO-TEK
301-2. It consists of two parts, which need to be mixed before gluing. The
mixture was centrifuged to remove the air bubbles inside. The adhesive was then
applied from a syringe to the glue joint using high-pressure dry air. After
application, it took 3–4 days to fully cure.
When the curing process was finished, the excess glue was removed using
acetone. The alignment may change during curing, so it needed to be measured
again. The achieved horizontal and vertical angles between the two optics near
the glue joint were within ±40 arcsec and ±20 arcsec, respectively.
The completed iTOP optics was moved into a Quartz Bar Box (QBB). The
QBB is a light-tight supporting structure made of aluminum honeycomb plates,
combining high rigidity with a low material budget. The optics was supported by
PEEK buttons glued to the inner surfaces of the QBB.
When the QBB assembly was completed, the MCP-PMT modules with front-
end electronics were installed at the prism end of the module.
3.3 Installation
After the iTOP module assembly was completed, each module was transferred
by truck to the experimental hall for installation. Each module was installed using
movable stages. A module was mounted on a guide pipe supported by the stages,
so that it could move along and rotate around the guide pipe. The module
deflection during the installation process was monitored by deflection
sensors and was required to be less than 0.5 mm. The installation of all the
modules was completed in May 2016. More details can be found in Ref. [11].
5 Summary
The iTOP counter is a novel particle identification device in the barrel region
of the Belle II detector. In this article we described the design, construction and
performance of the iTOP optics. The last iTOP module was finished and
installed in May 2016, and the Belle II detector was moved to the beam
line in April 2017. Global cosmic ray data taking is currently ongoing for
the purpose of testing, calibration and integration of the sub-detectors, including
the iTOP counters.
References
1. Abe, T., et al.: KEK-REPORT-2010-1 (2010)
2. Ohnishi, Y., et al.: Prog. Theor. Exp. Phys. 2013(3), 03A011 (2013)
3. Raimondi, P.: Talk Given at the 2nd SuperB Workshop, Frascati (2006). http://
www.lnf.infn.it/conference/superb06/talks/raimondi1.ppt
4. Ohshima, T.: ICFA Instrum. Bull. 20, 10 (2000)
5. Akatsu, M., et al.: Nucl. Instr. Meth. A 440, 124 (2000)
6. Ohshima, T.: Nucl. Instr. Meth. A 453, 331 (2000)
7. Enari, Y., et al.: Nucl. Instr. Meth. A 494, 430 (2002)
8. Matsuoka, K.: For the Belle II PID group, PoS(TIPP2014)093
9. Andrew, M.: IEEE Realtime Conf. Rec. 1–5 (2012)
10. Andrew, M.: PoS (TIPP2014) 171 (2014)
11. Suzuki, K., et al.: Nucl. Instr. Meth. A 876, 252 (2017)
Gas Systems for Particle Detectors at the LHC
Experiments: Overview and Perspectives
Abstract. Across the five experiments (ALICE, ATLAS, CMS, LHCb and
TOTEM) taking data at the CERN Large Hadron Collider (LHC), 30 gas systems
deliver the proper gas mixture to the corresponding detectors. They are
complex systems that extend over several hundred meters and have to ensure
extremely high reliability in terms of stability and quality of the gas mixture
delivered to the detectors. In fact, the gas mixture is the sensitive medium, and a
correct and stable composition is a basic requirement for good and safe long-term
operation. The present contribution describes the design philosophy,
focusing on the main functional modules. The reliability over the
past years is also discussed.
1 Introduction
The basic function of a gas system is to mix the different gas components in the
appropriate proportions and to distribute the mixture to the individual chambers. Across
the five experiments (ALICE, ATLAS, CMS, LHCb and TOTEM) taking data at the
CERN Large Hadron Collider (LHC), 30 gas systems (corresponding to about 300 gas
racks) deliver the proper gas mixture to the corresponding detectors. The gas
mixture is the sensitive medium in which the charge multiplication produces the
signal. A correct and stable mixture composition is a basic requirement for good and
stable long-term operation of all detectors.
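As a minimal illustration of the mixing function, the per-component flow setpoints follow directly from the total flow and the desired mixture fractions. The sketch below uses hypothetical component names and flow values, not an actual LHC mixture recipe or the real control-system interface:

```python
def mfc_setpoints(total_flow, fractions):
    """Split a total mixture flow into per-component mass-flow-controller
    (MFC) setpoints, given the desired volume fraction of each component.
    Flows are in arbitrary consistent units (e.g. l/h)."""
    if abs(sum(fractions.values()) - 1.0) > 1e-9:
        raise ValueError("mixture fractions must sum to 1")
    return {gas: total_flow * frac for gas, frac in fractions.items()}

# Hypothetical three-component mixture at 1000 l/h total flow
setpoints = mfc_setpoints(1000.0, {"Ar": 0.40, "CO2": 0.50, "CF4": 0.10})
```

In the real systems the mixer module closes this loop continuously, correcting the MFC setpoints against the measured flows to keep the delivered composition stable.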
The gas systems were built according to a common standard, allowing the manpower
and costs for maintenance and operation to be minimized.
The construction started at the beginning of the 2000s. The first systems were
put into operation between 2005 and 2006.
The gas systems for the LHC experiments are the result of a common effort
between the CERN Gas Service Team (nowadays part of the CERN/EP department) and
the CERN/EN and BE departments. The CERN Gas Service Team is responsible for
designing, building, operating and maintaining the gas systems. CERN/BE is
responsible for the development of the software controls (following the requirements
provided by the CERN Gas Service Team), and the CERN/EN department for the
procurement and monitoring of the primary gas supply (Table 1).
The gas systems extend from the surface building, where the primary gas supply point is
located, to the service balcony on the experiment, following a route a few hundred meters
long.
The primary gas supply point is located in the surface building (SG). However, a
typical gas system is distributed over three levels (Fig. 1 shows the typical layout of a
gas recirculation system): the surface room (SG), the underground service room
(UGC) and the experimental cavern (UXC). The final gas distribution to the detectors is
located in UXC, where the experiment is also installed.
Given the large detector volumes and the use of relatively expensive gas components,
for technical and economic reasons most of the systems are operated in closed-loop
gas circulation with a recirculation fraction higher than 90–95%.
Fig. 1. Typical layout of a gas recirculation system for the LHC experiments
In order to facilitate construction and maintenance, the gas systems were designed
starting from functional modules with similar functionalities (i.e. elementary building
blocks). Functional modules are, for example: mixer, pre-distribution, distribution,
circulation pump, purifier, humidifier, membrane, liquefier, gas analysis, etc. The
mixer, for instance, is basically identical across all systems, but it can be configured to
satisfy the specific needs of each detector. This module-oriented design is reflected in
the implementation: each system has a control rack where the PLCs¹ and all the other
electronics crates corresponding to the functional modules are located (Fig. 2). The
control software for the gas system runs in the PLCs, while the crates collect all the I/O
information from the corresponding modules and are connected to the
PLC through Profibus². This approach also facilitated the installation work and
the commissioning, especially when not all modules of a particular system were ready
for installation at the same time.
Fig. 2. Typical gas systems control module where the main PLC is visible together with all the
control crates related to the functional modules.
The operation of all gas systems is continuously followed by the CERN Gas
Service Team. Alarms produced during operation are propagated via email
and SMS to the CERN Gas Service Team as well as to the detector teams. A
round-the-clock (24/7) service is available for interventions outside working hours.
In the following, the gas recuperation modules are used as an example.
More details about the gas systems and a description of the other functional modules
can be found in [1].
¹ A PLC (Programmable Logic Controller) is an industrial computer used for the automated
control of processes.
² PROFIBUS (Process Field Bus) is a standard for fieldbus communication in automation
technology; it was first promoted in 1989 by the BMBF (the German federal ministry of
education and research) and then used by Siemens.
94 R. Guida et al.
Fig. 3. Schematic of the CF4 recuperation plant illustrating the working principle and the
different operational phases
Fig. 4. View of the CF4 recuperation plant installed in the SGX5 building at the CMS experiment.
3 Operational Experience
Starting from 2010, all gas systems have been operated continuously. The data collected
during these years (2010–2017) made it possible to compile a first reliability study and to
spot systems where extra consolidation work might be needed. Figure 5 shows
the average reliability of the gas systems. It is always greater than 99.98%, corresponding
to less than 1.5 h of down-time per year per system (power cuts and external
sources excluded).
Fig. 5. LHC gas system reliability during the last three years: during all periods the availability
was always greater than 99.95% (power-cuts and external sources excluded).
Most interventions were triggered by alarms received via SMS or calls from the
experiments’ control rooms; however, a significant number was also started as a result
of routine checks performed by the CERN Gas Service Team.
An extensive maintenance program has been developed for the LHC shutdown
periods. In addition to the standard yearly maintenance (circulation pumps, safety
valves, power supplies, …), during Long Shutdown (LS) periods all analysis devices,
mass-flow controllers (MFCs) and flow-cells used in the final distribution modules are
verified and re-calibrated. Depending on specific needs, LS periods are also used to
upgrade or consolidate specific gas systems.
4 Conclusions
30 gas systems are delivering the required gas mixtures to the particle detectors at the
LHC experiments.
The gas systems were designed and built according to functional modules in order
to simplify the maintenance and operational activities.
All systems are fully automated: locally controlled by an industrial PLC and
remotely accessible through a PVSS interface. A few examples of functional modules
were discussed in the present contribution.
The operational experience over the last six years (2010–2016) has demonstrated an
impressive reliability level: greater than 99.98%, corresponding to less than 1.5 h of
down-time per year (power cuts and external problems excluded). An intensive
maintenance and consolidation program has been developed to maintain and possibly
improve this exceptional reliability in the years to come.
References
1. Guida, R., et al.: The gas systems for the LHC experiments. In: IEEE Nuclear Science
Symposium Conference Record (2013)
2. Guida, R., et al.: Results from the first operational period of the CF4 recuperation plant for the
Cathode Strip Chambers detector at the CERN Compact Muon Solenoid experiment. In: IEEE
Nuclear Science Symposium Conference Record, pp. 1141–1145 (2012)
3. Guida, R., et al.: Commissioning of the CF4 recuperation plant for the Cathode Strip
Chambers detector at the CERN Compact Muon Solenoid experiment. In: IEEE Nuclear
Science Symposium Conference Record, pp. 1814–1821 (2011)
4. Guida, R., et al.: Development of a CF4 recuperation plant for the Cathode Strip Chambers
detector at the CERN Compact Muon Solenoid experiment. In: IEEE Nuclear Science
Symposium Conference Record, pp. 1433–1438 (2010)
Gas Mixture Monitoring Techniques
for the LHC Detector Muon Systems
Abstract. At the LHC experiments the Muon Systems are equipped with
different types of gaseous detectors that will need to ensure high performance until
the end of the LHC run. One of the key parameters for good and safe long-term
detector operation is the gas mixture composition and quality. Indeed, a wrong
gas mixture composition can degrade the detector performance or cause aging
effects and irreparable damage. It is therefore a fundamental requirement to
verify and monitor the detector gas mixture quality.
In recent years several gas monitoring techniques have been studied and
developed at CERN to automatically monitor the detector gas mixture composition
as well as the impact of gas quality on detector performance. In all LHC
experiments, a gas analysis module allows continuous monitoring of the O2 and
H2O concentrations in several zones of the gas systems for all muon detectors.
More sophisticated and precise gas analyses are performed with gas chromatograph
and mass spectrometer devices, which have sensitivities at the ppm level
and allow verification of the correctness of the gas mixture composition. In
parallel to the standard gas analysis techniques, a gas monitoring system based on
single-wire proportional chambers has been implemented: these detectors are
very well suited to detecting possible aging contaminants thanks to their high
sensitivity.
At the LHC experiments, 30 gas systems deliver the proper gas mixture to the
corresponding detectors [1]. They are complex apparatuses that must ensure extremely
high reliability in terms of stability and quality of the gas mixture delivered to the detectors.
Indeed, the gas mixture is the detector’s sensitive medium, and a correct and stable
composition is a basic requirement for good and safe long-term operation.
A modular design is adopted for the construction of the LHC gas systems. Every
module fulfills a specific function and can be configured to satisfy the needs of different
detectors. The gas systems can be operated in two different modes:
– “open mode”: the mixture is exhausted to atmosphere after being passed through the
detector
– “recirculation mode”: the mixture is collected after being used in the detector and is
continuously re-injected into the supply lines.
About 50% of the LHC gas systems are operated in gas recirculation mode, which is
mandatory in the case of large gas volumes or expensive gas mixtures. The recirculation
fraction varies between 90% and 100% depending on the detector and on gas system
constraints. Since the renewal period of the gas volume is longer, the quality of the gas
mixture and the accumulation of impurities become a typical issue, which needs to be
kept under control.
Furthermore, the gas systems and the gas mixture can affect detector operation because of:
– wrong gas mixture composition
– bad quality of the supplied gases
– accumulation of impurities in the recirculation system
– gas parameters (pressure, gas flow, cleaning agents, etc.)
These variations can have an impact on the detectors’ dark current, gas gain,
operating voltage, etc. Monitoring the gas system parameters and the gas mixture
quality is therefore fundamental.
Fig. 1. Gas chromatograms of the CMS CSC gas mixture at two analysis points: “CSC mixer”
and “CSC supply to the detector”. (a) Gas chromatogram of the CMS CSC gas mixture
composition. (b) Gas chromatogram of the CMS CSC impurity (O2 and N2) concentrations.
The regular monitoring of the gas mixture using µGC stations allows identifying
the origin of potential problems such as low quality of the gas supply, drifts in the
calibration of the MFCs or faults in gas system components.
For each experiment there are at least 20 analysis streams, whose GC analyses
require a remarkable investment of resources and time. A complex system has been
installed in the underground service cavern of the CMS experiment to automatically
sample the different regions of the muon system [2]. The system is equipped with a
three-column µGC station connected to three 16-position multi-way valves, allowing a
total of 48 gas samples coming from the outputs of the CSC, Drift Tube (DT) and
Resistive Plate Chamber (RPC) detectors to be analyzed.
Standard gas analyzers and gas chromatography techniques are fundamental for
monitoring the detector gas mixture quality. Nevertheless, in some cases their
sensitivity or specifications are not sufficient to address specific analysis requirements. For
this reason, alternative gas analysis techniques are used in specific cases.
Fig. 2. Trend of SWPC normalized gain and O2 concentration in the gas mixture “Supply to the
Detector”. The purifier cycles are indicated with the different colours.
4 Conclusions
About 30 gas systems are delivering the proper gas mixture to the gaseous detectors at
the CERN LHC experiments. The quality and composition of the gas mixtures are essential to
avoid temporary or unrecoverable degradation of detector performance. The verification
and monitoring of the gas mixture quality is therefore crucial. Several standard and
alternative gas analysis techniques have been established and are in use at CERN for all
the LHC muon systems.
References
1. Guida, R., Capeans, M., Hahn, F., Haider, S., Mandelli, B.: The gas systems for the LHC
experiments. IEEE Trans. Nucl. Sci. (2013)
2. Capeans, M., Focardi, E., Hahn, F., Haider, S., Guida, R., Mandelli, B.: A common analysis
station for the gas systems of the CMS experiment at the CERN-LHC. In: IEEE Conference
Record (2011)
3. Capeans, M., Hahn, F., Haider, S., Guida, R., Mandelli, B.: RPC performance and gas quality
in a closed loop gas system for the new purifier configuration at LHC experiments. JINST 8,
T08003 (2013)
4. Mandelli, B.: Detector and system developments for LHC detector upgrades. CERN-THESIS-
2015-044 (2015). http://cds.cern.ch/record/2016792
Design of the New ATLAS Inner
Tracker (ITk) for the High
Luminosity LHC
Jike Wang
Abstract. In the high luminosity era of the Large Hadron Collider (HL-
LHC), the instantaneous luminosity is expected to reach unprecedented
values, resulting in about 200 proton-proton interactions in a typical
bunch crossing. To cope with this high rate, the ATLAS Inner Detector
is being completely redesigned, and will be replaced by an all-silicon sys-
tem, the Inner Tracker (ITk).
This new tracker will have both silicon pixel and silicon strip sub-
systems. The components of the Inner Tracker will have to be resistant
to the large radiation dose from the particles produced in HL-LHC col-
lisions, and have low mass and sufficient sensor granularity to ensure a
good tracking performance over the pseudorapidity range |η| < 4. In this
contribution, the challenges are discussed first, followed by possible solutions,
i.e. the designs under consideration for the pixel and strip
modules, and the mechanics of the local supports in the barrel and endcaps.
1 Introduction
The ITk is a full upgrade of the current ATLAS Inner Detector (ID) as part
of the Phase-II upgrade. It will be an “all-silicon” detector, consisting of
new Pixel and Strip detectors. Due to radiation damage, the ID cannot survive
the future integrated luminosity of 3000 fb−1 , so it has to be replaced. The
TRT (Transition Radiation Tracker) in the current ID cannot operate at the
HL-LHC multiplicities, so it has to be removed. Figure 1 shows the structure of the
ITk.
The HL-LHC upgrade will unlock a much larger physics potential, for example
rare channels such as VBF h → ZZ → 4l and BSM hh → 4b, and the Higgs self-coupling.
The ITk will be the most important detector component, crucial
for many performance aspects such as lepton measurements, b-tagging and pileup-jet
rejection over a wide kinematic and pseudorapidity range. The ITk also faces a huge
challenge from the enormous pileup of up to μ = 200 (see Fig. 2).
Hence the tracker needs to be carefully redesigned.
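As a reminder of what the quoted coverage means geometrically, pseudorapidity relates to the polar angle by η = −ln tan(θ/2). The short sketch below (our own illustration, not ATLAS software) shows how close to the beam line |η| = 4 reaches:

```python
import math

def theta_from_eta(eta):
    """Polar angle (rad) corresponding to pseudorapidity eta,
    inverting eta = -ln(tan(theta/2))."""
    return 2.0 * math.atan(math.exp(-eta))

# |eta| = 4 corresponds to a polar angle of only about 2 degrees
# from the beam axis, i.e. very forward tracking coverage.
theta_deg = math.degrees(theta_from_eta(4.0))
```

At η = 0 the same formula gives θ = 90°, i.e. tracks perpendicular to the beam axis.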
The arrangement of the sensors may not be optimized at first: we need to optimize
how the sensors are arranged to minimize the amount of material, and we need to
ensure the hermeticity of the detector. During this design process we also need to
coordinate tightly with the engineering teams. For example, we get information from
them about which sensors and layouts are of interest; conversely, our optimized
layouts may be impossible in terms of engineering or may cost too much, and we need
to consult with them on these points. The conclusions are also written into reports and
documents to guide the construction. The design is a very important step towards the
final construction of the ITk.
The evolution of the ITk layout went through the following important steps:
• The Letter of Intent (LoI) layout, which was studied around 2012.
This layout features a so-called “stub” layer, a short barrel layer between
the 4th and 5th Strip layers, to give robustness in the barrel to end-cap
transition region. The η coverage extends to 2.7. There are 4 pixel + 5 strip layers.
• The Letter of Intent very forward (LoI-VF) layout, which was studied around 2013–2014.
Many studies showed that performance and physics at the HL-LHC
can benefit from a larger tracker coverage. For example, Fig. 3 shows that
the larger the η coverage, the better the ETmiss resolution.
• There are two main layout concepts for the Pixel detector: one is the
Inclined layout, the other the Extended layout (the details of these two concepts
are discussed further below). Many studies comparing the Inclined and the
Extended layouts were performed around 2015–2016.
Fig. 3. The ETmiss resolution comparison for different η coverage scenarios.
• In 2016, studies demonstrated that the Inclined layout would have better
performance than the Extended one, so the Inclined layout was chosen for the Pixel
detector. A baseline layout (for both the Pixel and the Strip detector) was converged
on in 2016. This layout has been used to produce the samples for the Strip
TDR.
• Some optimization studies of the Pixel detector layout are still ongoing
and will converge soon. After that, the layout will be used to
produce the samples for the Pixel TDR, which is planned to be released at the
end of 2017.
Fig. 4. The r − Z view of the Extended (left) and the Inclined (right) layout.
The preliminary estimate of the material budget for the two layouts is
shown in Fig. 5. As expected, the Inclined layout has less material; this is
because the tracks are more perpendicular to the sensors when passing through. The
Extended layout also has another intrinsic problem: the long clusters and the
corresponding poor-quality space points make the seeding problematic. The Inclined
layout is therefore preferred.
Fig. 5. The X0 distribution for the Extended (left) and the Inclined (right) layout.
Completely new endcaps have been implemented in this framework. The
complexity comes from the following items: the complicated new sensor shape (stereo
annulus); the petals overlapping with each other; and the different sensor
geometries for each petal. The sensor, petal and endcap geometry shapes are
shown in Fig. 7.
For this layout there are 5 Pixel layers and 4 Strip layers; the short stub layer
has been removed; and the Inclined layout has been chosen for the Pixel detector.
7 Summary
The extremely challenging conditions at the HL-LHC make it very hard to design
the new tracker. Our several years of work successfully converged on the Strip
TDR layout, which was used for the results in the Strip TDR. We are now trying
to converge on the Pixel TDR layout, which will then be used for the results in the
Pixel TDR.
A Standalone Muon Tracking Detector
Based on the Use of Silicon
Photomultipliers
1 Introduction
Portable radiation detectors are commonly used in many fields of science and
industry: radiation surveys (based on alpha-, beta- and gamma-sensitive detectors),
radon level assessment, medical applications and cultural heritage. Many new
applications are based on the detection of highly penetrating particles such
as muons: homeland security, tomography of large infrastructures and budgeting of
spent/waste nuclear fuel are only a few of them [1]. In this work we present the
characterization and performance of a muon tracker system capable of
tridimensional mapping and of performing fine measurements of the muon flux intensity.
The main features of the detector are:
– fairly high granularity, enabling the 3D reconstruction of tracks;
– easy to transport and deploy;
– insensitive to magnetic field;
– single photon counting capability;
– embedded data acquisition;
– remote operability.
In the following sections we briefly describe the detector hardware, the detection
efficiency and preliminary results.
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 109–112, 2018.
https://doi.org/10.1007/978-981-13-1313-4_22
110 F. Arneodo et al.
2 Experimental Setup
The detection technique is based on the use of plastic scintillator bars and Silicon
Photomultipliers (SiPMs): a detailed description of a single detection channel is reported
in [2]. The muon tracker is equipped with 200 channels organized in two views
(XZ and YZ). The hundred channels of each view are further subdivided into groups
of ten to form detection planes. Two corresponding planes (i.e. at the same height)
of different views form a detection layer. Orthogonal bars in a layer allow for the
X-Y reconstruction, while the Z coordinate is given by the height of the layer.
The layout of the muon tracker is reported in Fig. 1. Each detection channel is
equipped with a preamplifier and a discrimination stage, while the SiPMs are
biased in groups of five (two different voltages can be applied per plane). The
electronics readout of each view consists of a single 40 cm × 70 cm board. The
trigger logic can be defined locally through a switch or remotely by means of
a computer interface. Data acquisition is embedded in the electronics, and the
data stream is stored in real time by means of a serial connection between the
board and a laptop computer.
Fig. 1. Left: the layout of the muon tracker. In blue, the scintillator bars forming a
detection layer. In green, the monolithic printed circuit board (PCB) used for the readout
and control of each view. Right: one of the two muon trackers currently operating at
New York University Abu Dhabi (NYUAD).
Fig. 2. Typical muon event reconstruction. The XZ and YZ views are considered uncorrelated.
The red (blue) bars represent the positions of the planes in the XZ (YZ) view,
while the dots are the SiPMs fired in the event. The blue line is the best fit obtained by
following the fitting procedure from steps 1 to 4, while the final fit (in red) is obtained
by adding steps 5 and 6.
Fig. 3. Efficiency matrices of the two views of the muon tracker. Each pixel represents
the efficiency of a single detection channel. Low efficiency channels have been replaced
on the basis of previous iterations of this map.
Figure 2 shows a typical muon hit pattern along with the reconstructed
track (red line).
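The core of such a per-view reconstruction is a straight-line fit to the fired channels. Below is a minimal sketch using plain least squares; the actual procedure's steps 1–6 (e.g. hit selection and refinement) are not reproduced here, and the hit coordinates are illustrative:

```python
def fit_track(hits):
    """Least-squares straight-line fit x = a*z + b to a list of (z, x)
    hit positions from one view (XZ or YZ) of the tracker.
    Returns the slope a and intercept b."""
    n = len(hits)
    sz = sum(z for z, _ in hits)
    sx = sum(x for _, x in hits)
    szz = sum(z * z for z, _ in hits)
    szx = sum(z * x for z, x in hits)
    a = (n * szx - sz * sx) / (n * szz - sz * sz)
    b = (sx - a * sz) / n
    return a, b

# Illustrative hits from one view: x increases by 0.5 per unit z
slope, intercept = fit_track([(0, 1.0), (1, 1.5), (2, 2.0), (3, 2.5)])
```

Fitting each view independently, as the text describes, yields the two projected track lines whose combination gives the 3D trajectory.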
The detection efficiency has been measured by using an 8-fold trigger configuration
(8 layers in coincidence) and by constraining the residual between
the fit-predicted SiPM and the one actually fired. This technique allowed the
efficiency map of the detector to be constructed. The results are reported in Fig. 3.
Figure 4 shows the angular distribution of the muon flux measured with the detector
operating inside the ERB building at NYUAD (left) and outside (right). The
effect of the building is clearly visible in the left histogram, while the distribution
Fig. 4. Muon flux distributions in φ taken at sea level in Abu Dhabi. Left: muon
distribution profile obtained by operating the detector inside the laboratory. Right:
muon distribution profile obtained by operating the detector outside. A 2% east-west
effect is also visible.
Fig. 5. Detector operating inside the experimental building. Left: angular distribution
at reference position. Right: angular distribution with the detector rotated by 45◦ .
in the right histogram is almost flat (we have estimated an east-west effect [3]
of 2%). The pronounced spikes found in both histograms are due to the
finite granularity of the detector and are driven by particles travelling parallel
to the detector walls. This condition worsens the angular resolution. To reduce
the impact of this effect, every data sample is taken in two steps: at a reference
position and with the detector rotated by 45◦ . As shown in Fig. 5, by comparing
the two profiles, it is possible to uncover regions otherwise hidden by
the detector systematics.
This work has been supported by New York University Abu Dhabi.
References
1. Checchia, P.: Review of possible applications of cosmic muon tomography. JINST
11, C12072 (2016)
2. Arneodo, F., Benabderrahmane, M.L., et al.: Muon tracking system with Silicon
Photomultipliers. Nucl. Instr. Meth. A 799(1), 166–171 (2015)
3. Johnson, T.H.: The azimuthal asymmetry of the cosmic radiation. Phys. Rev. 43,
834 (1933)
Spherical Measuring Device of Secondary
Electron Emission Coefficient Based on Pulsed
Electron Beam
1 Introduction
The secondary electron emission characteristics, as basic properties of a material,
have important applications in various vacuum test instruments and photomultiplier
devices. In order to further enhance the performance of the relevant devices (such as
microchannel plates), accurate measurement of the secondary electron emission
characteristics of different materials is necessary and significant. To accomplish this
task, we designed and built a device that measures the secondary electron emission
coefficient (SEEC) of a material at different primary electron incident energies and
incident angles in a high vacuum environment; it can also be used to measure the
energy distribution of the generated secondary electrons.
The structure of the device, as shown in Fig. 1, consists of the vacuum system, electron
gun, main chamber, sample stage, test system and test software. When the device is in
operation, the electron beam generated by the upper electron gun irradiates
the sample; the secondary electrons emitted from the sample diverge to the
surroundings and are collected by the spherical collector, and the secondary electron
current is measured. The incident electron energy generated by the electron
gun is continuously adjustable from 100 eV to 10 keV. The electron gun can be
operated in pulsed mode with a pulse width ranging from 2 µs to 200 µs. By grounding the
inner grid and the sample stage, and connecting the collector to +40 V, a uniform
electric field can be generated internally; therefore, the generation of secondary
electrons is not affected by the electric field. The sample stage can be moved and
rotated to obtain different primary electron incident angles; the incident angle is
continuously adjustable from 0° to 85°. The baking lamp can degas the surface of the
sample, and the achievable baking temperature is 250 °C. The vacuum system has a
working vacuum of 10−6 Pa and an ultimate vacuum of 10−7 Pa, which is sufficient to meet
the measurement requirements of the SEEC.
As the device structure shows, the secondary electrons produced by the sample must
pass through the two grid layers before reaching the collector, so the secondary
current measured at the collector alone would be smaller than the actual value.
We therefore provided extraction electrodes for the collector and for the inner
and outer grids and measure the current on each of them; the sum of these currents
is the actual secondary current generated by the sample.
Spherical Measuring Device of Secondary Electron Emission 115
At the same time, by charge conservation, electrons emitted by the gun can only
flow to the inner grid, the outer grid, the collector, or the sample, so the sum
of these four currents equals the electron-gun emission current, i.e. the primary
electron current. The SEEC of the sample is therefore

δ = (I_collector + I_inner + I_outer) / (I_collector + I_inner + I_outer + I_sample)
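This current-sum bookkeeping can be sketched in a few lines (a minimal illustration with hypothetical currents; the variable names are ours, not those of the actual test software):

```python
def seec(i_collector, i_inner_grid, i_outer_grid, i_sample):
    """Secondary electron emission coefficient (SEEC).

    By charge conservation, the primary (electron-gun) current splits
    between the collector, the two grids, and the sample; the SEEC is
    the fraction of the primary current carried by secondary electrons.
    """
    i_secondary = i_collector + i_inner_grid + i_outer_grid
    i_primary = i_secondary + i_sample
    return i_secondary / i_primary

# Hypothetical currents in nanoamperes:
delta = seec(i_collector=3.0, i_inner_grid=0.5, i_outer_grid=0.5, i_sample=1.0)
```

With these illustrative numbers the four currents sum to a 5 nA primary current, of which 4 nA is secondary emission, giving δ = 0.8.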
4 Measurement Results
It can be seen that the experimental results are in good agreement with the theo-
retical expectations.
116 K. Wen et al.
5 Conclusion
We have measured the SEEC of a nanometre-thick oxide sample in pulsed mode. By
calculating the integral of the pulse, we can measure the incident electron
current and the SEEC in real time, providing rapid feedback on the working
condition of the experimental equipment. The dependence of the SEEC on incident
electron energy and incident angle was also measured successfully, and the
results are in good agreement with existing data and theoretical expectations.
The driver and measurement procedures were written in LabVIEW, so the above
measurement process is fully automated.
References
1. Jokela, S.J., Veryovkin, I.V., Zinovev, A.V.: Secondary electron yield of emissive materials
for large-area micro-channel plate detectors: surface composition and film thickness
dependencies. Phys. Procedia 37, 740–747 (2012)
2. Shih, A., Hor, C.: Secondary emission properties as a function of the electron incidence angle.
IEEE Trans. Electron Devices 40(4) (1993)
3. Kirby, R.E., King, F.K.: Secondary electron emission yields from PEP-II accelerator
materials. Nucl. Instrum. Methods Phys. Res. A 469, 1–12 (2001)
4. Suharyanto, Yamano, Y., Kobayashi, S., Michizono, S., Saito, Y.: Effect of mechanical
finishes on secondary electron emission of alumina ceramics. IEEE Trans. Dielectr. Electr.
Insul. 14(3) (2007)
5. Yang, W.J., Li, Y.D., Liu, C.L.: Model of secondary electron emission at high incident
electron energy for metal. Acta Phys. Sin. 62(8), 087901 (2013)
A Vertex and Tracking Detector System
for CLIC
1 Introduction
CLIC is a proposed future high-energy linear e+e− collider [1]. The high-
precision physics aims pose challenging demands on the performance of the
detector systems, including the vertex and tracking detector [2]. In particular, a
precise determination of displaced vertices for efficient flavor tagging requires an
impact parameter resolution of σ(d0) = 5 ⊕ 15/(p[GeV] sin^(3/2) θ) μm for the vertex
detector, whereas the main requirement for the tracker is a transverse momen-
tum resolution of σ(pT)/pT² = 2 × 10−5 GeV−1 for high-pT tracks above 100 GeV
in the central detector. At the same time, the material budget and power con-
sumption have to be kept at a minimum. Further, background particles from
beam-beam interactions can reach the detector and thus small cell sizes and
precise hit timing in the vertex and tracking detector are necessary.
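As an illustrative check of these two requirements (a sketch, assuming ⊕ denotes addition in quadrature, as is conventional):

```python
import math

def sigma_d0_um(p_gev, theta_rad, a_um=5.0, b_um=15.0):
    """Vertex-detector target: sigma(d0) = a (+) b / (p sin^{3/2} theta),
    with (+) the quadratic sum, in micrometres."""
    return math.hypot(a_um, b_um / (p_gev * math.sin(theta_rad) ** 1.5))

def sigma_pt_gev(pt_gev, k=2e-5):
    """Tracker target: sigma(pT)/pT^2 = 2e-5 GeV^-1, i.e. sigma(pT) = k*pT^2."""
    return k * pt_gev ** 2

# At theta = 90 deg the momentum-dependent term equals the constant term
# for p = 3 GeV, giving sqrt(2)*5 um:
print(sigma_d0_um(3.0, math.pi / 2))   # ~7.07 um
print(sigma_pt_gev(100.0))             # 0.2 GeV for a 100 GeV track
```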
The discs are arranged in a spiral geometry, to allow for better air-flow. To meet
the required impact parameter resolution and flavor tagging efficiency, 3 μm sin-
gle point resolution has to be achieved throughout the vertex detector. Currently,
planar and capacitively coupled hybrid pixel detectors are under consideration,
e.g. [4,5].
The main silicon tracker consists of 6 barrel layers, 7 inner discs and 4 outer
flat discs, with material content per detection layer corresponding to 1–2% X0 [3].
The tracking detector is divided into an inner and an outer part by the support
cylinder for the vacuum tube. The tracker radius is 1450 mm. In total, the active
area covered by the tracker is of the order of 100 m². For this large area,
monolithic solutions are considered, e.g. [6].
(a) Vertex detector (b) Tracking detector
Fig. 1. Rendering of the vertex and tracking detectors as implemented in the CLICdet
detector model [3]. From [7], published with permission by CERN.
The low duty cycle of the CLIC machine (312 bunches in 156 ns long bunch
trains every 20 ms) allows for pulsed power operation of the vertex and tracking
detectors. This helps in reducing the average power consumption and enables
the vertex detector to be air-cooled. For the large tracker volume, air-cooling
cannot easily be implemented; therefore liquid cooling is currently foreseen.
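The gain from power pulsing can be estimated from the beam duty factor alone (a simplified sketch; the achievable average-power reduction also depends on how quickly the front-end can be powered down and back up):

```python
def duty_factor(train_length_s, repetition_period_s):
    """Fraction of time the beam is present at the detector."""
    return train_length_s / repetition_period_s

# CLIC bunch trains: 156 ns of beam every 20 ms.
d = duty_factor(156e-9, 20e-3)
print(d)  # 7.8e-06: the beam is present for less than a millionth of the time
```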
3 Detector Performance
The main goal of the vertex detector is the tagging of heavy quarks by the recon-
struction of displaced decay vertices. Beauty- and charm-tagging performance
has been chosen as benchmark for the detector design. Full simulation studies
A Vertex and Tracking Detector System for CLIC 119
based on Geant4 [8,9] as well as multivariate analyses using the LCFIPlus pack-
age [10] have been performed on various implementations of the detector. The
variations have been guided by engineering constraints. Results from this detec-
tor optimization are illustrated in Fig. 2, while a more complete description can
be found in [11,12].
The impact of varying the material content per detector layer from 0.1% X0 to
0.2% X0 on the misidentification of b and light-flavor backgrounds as a function
of the c-tagging efficiency is shown in Fig. 2a, and reveals an increase of the
fake rate by 5% to 35%. This demonstrates the importance of limiting the material
budget of the detector. 0.2% X0 per layer is considered realistically achievable
taking technological and engineering constraints into account.
The relative performance of the spiral geometry in comparison to flat discs
is illustrated in Fig. 2b, and shows only slightly reduced performance in some
regions with lower coverage.
The slight benefit of 3 double layers in the barrel compared to 5 single layers,
as depicted in Fig. 2c, can be explained by the small reduction in material per
layer due to shared support structures in the double-layer arrangement, together
with the additional track measurement point.
Fig. 2. Flavor tagging performance [11]. From [7], published with permission by
CERN.
120 A. Nürnberg
Fig. 3. Transverse momentum resolution for single muons. From [7], published with
permission by CERN.
Fig. 4. Bunch train occupancies in the vertex and tracking detectors due to beam
induced background hits from incoherent pair production and γγ → hadron background
processes [14], assuming 25 × 25 μm² pixel size for the vertex detector and 1–10 mm ×
50 μm for the tracker. From [7], published with permission by CERN.
4 Summary
The design of the vertex and tracking detector for CLIC is driven by the strin-
gent requirements on measurement precision, the limited material and power
budget and the challenging background conditions. Simulation and engineering
studies have demonstrated that a light-weight air-cooled vertex detector gives
excellent flavor tagging performance, and that a large silicon tracker provides
excellent track momentum measurement. Both are essential ingredients for the
physics goals at CLIC. An integrated R&D program addressing the technological
challenges is progressing in the areas of ultra-thin sensors and readout ASICs,
interconnect technology, mechanical integration and cooling, to show the feasi-
bility of the proposed vertex and tracker detector concept.
References
1. Aicheler, M., et al.: A multi-TeV linear collider based on CLIC technology: CLIC
conceptual design report (2012). https://cds.cern.ch/record/1500095. CERN-2012-
007
2. CLIC Conceptual Design Report: Physics and Detectors at CLIC (2012). CERN-
2012-003
3. Alipour Tehrani, N., et al.: CLICdet: the post-CDR CLIC detector model (2017).
http://cds.cern.ch/record/2254048. CLICdp-Note-2017-001
4. Alipour Tehrani, N.: Test-beam measurements and simulation studies of thin-
pixel sensors for the CLIC vertex detector. Ph.D. thesis, ETH Zurich (2017),
https://www.research-collection.ethz.ch/handle/20.500.11850/164813. Diss. ETH
No. 24216
5. Buckland, M.: Analysis and simulation of HV-CMOS assemblies for the CLIC
vertex detector (2017). These Proceedings
6. Munker, M.: Integrated CMOS sensor technologies for the CLIC tracker (2017).
These Proceedings
7. Nurnberg, A.: A vertex and tracking detector system for CLIC (2017). http://cds.
cern.ch/record/2272079. CLICdp-Conf-2017-013
8. Agostinelli, S., et al.: Geant4 - a simulation toolkit. Nucl. Instrum. Methods Phys.
Res. Sect. A Accel. Spectrom. Detect. Assoc. Equip. 506(3), 250–303 (2003)
9. Allison, J., et al.: Geant4 developments and applications. IEEE Trans. Nucl. Sci.
53(1), 270–278 (2006)
10. Suehara, T., Tanabe, T.: LCFIPlus: a framework for jet analysis in linear collider
studies. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrom. Detect. Assoc.
Equip. 808, 109–116 (2016). http://www.sciencedirect.com/science/article/pii/
S0168900215014199
11. Alipour Tehrani, N., Roloff, P.: Optimisation studies for the CLIC vertex-detector
geometry (2014). https://cds.cern.ch/record/1742993. CLICdp-Note-2014-002
12. Alipour Tehrani, N.: Optimisation studies for the CLIC vertex-detector geome-
try. J. Instrum. 10(07) (2015). C07001. http://stacks.iop.org/1748-0221/10/i=07/
a=C07001
13. Regler, M., Valentan, M., Fruhwirth, R.: LiC detector Toy 2.0 (Vienna fast simula-
tion tool for charged tracks), users guide. http://www.hephy.at/project/ilc/lictoy/.
HEPHY-PUB-863/08
14. Nurnberg, A., Dannheim, D.: Requirements for the CLIC tracker readout (2017).
https://cds.cern.ch/record/2261066. CLICdp-Note-2017-002
15. Dannheim, D., Sailer, A.: Beam-induced backgrounds in the CLIC detectors (2012).
http://cds.cern.ch/record/1443516. LCD-Note-2011-021
The Barrel DIRC Detector
for the PANDA Experiment at FAIR
1 Introduction
The PANDA experiment [1] is designed to shed light on fundamental aspects of
QCD by performing hadron spectroscopy. A sophisticated detector system with
4π acceptance, precise tracking, calorimetry, and particle identification (PID) is
designed to accomplish that goal [2]. Hadronic PID in the target region of the
PANDA detector will be delivered by a DIRC (Detection of Internally Reflected
Cherenkov light) counter, see Fig. 1 (left). It is designed to cover the polar angle
range from 22◦ to 140◦ and to provide at least 3 s.d. π/K separation power up to
3.5 GeV/c, matching the expected upper limit of the kaon momentum distribution
shown in Fig. 1 (right).
Fig. 1. Left: The PANDA target spectrometer with the Barrel DIRC highlighted. Right:
Phase space distribution as a function of kaon momentum and polar angle, combined
for eight benchmark channels at an antiproton momentum of 7 GeV/c. The Barrel DIRC
coverage is marked by the dashed rectangle.
The design of the PANDA Barrel DIRC [3] is shown in Fig. 2 (left). Its basic
concept is inspired by the BaBar DIRC counter [4]. It is constructed in the form
of a barrel from 16 optically isolated sectors, each made of a radiator box and a
compact, prism-shaped expansion volume (EV). The radiator box contains three
synthetic fused silica bars of 17 × 53 × 2400 mm³ size, positioned side-by-side
with a small air gap between them. A flat mirror at the forward end of each
bar reflects Cherenkov photons to the read-out end, where a 3-layer spherical
lens images them on an array of 11 Microchannel-Plate Photomultiplier Tubes
(MCP-PMTs) [5]. Each MCP-PMT consists of 64 pixels of 6.5 × 6.5 mm² size and
is able to detect single photons with a timing precision of about 100 ps. A
modernized version of the HADES readout board [6] and front-end electronics
[7] is used for signal readout.
Fig. 2. Left: CAD drawing of the Barrel DIRC. Only half of the sectors are shown.
Right: π/K separation power as a function of particle momentum and polar angle in
GEANT4 simulation.
The Barrel DIRC Detector for the PANDA Experiment at FAIR 125
3 Prototype Tests
Multiple aspects of the Barrel DIRC’s design were tested in hadronic particle
beams during the 2011–2016 period. Several design options, such as a monolithic
expansion volume and a traditional spherical lens with an air gap, were excluded
due to insufficient performance. The final design configuration with narrow bars
was verified at the CERN proton synchrotron in 2015 [9]. Additional tests were
carried out in 2016 to evaluate the performance of an alternative design with
wide plate radiators instead of narrow bars. Such a design would significantly
reduce the cost of the detector.
The full-scale prototype included all relevant parts of the PANDA Barrel
DIRC sector. Both radiator geometries, a narrow fused silica bar (17.1 × 35.9
× 1200.0 mm3 ) and a wide fused silica plate (17.1 × 174.8 × 1224.9 mm3 ) were
tested. One end of the radiator was coupled to a flat mirror and the other end
to a focusing lens and expansion volume. An array of 3 × 5 MCP-PMTs was
attached to the back side of the EV and used to detect Cherenkov photons.
A wide range of data was taken with the π/p beam at different polar angles and
momenta. External PID for the charged particles was provided by a time-of-flight
system. Figure 3 shows the result of the time-based imaging reconstruction for a
25◦ polar angle and a 7 GeV/c momentum. The resulting
Fig. 3. Proton-pion log-likelihood difference distributions from proton data (red) and
pion data (blue) at 7 GeV/c beam momentum and 25◦ polar angle as a result of the
time-based imaging reconstruction. The distributions are for the narrow (left) and wide
(right) radiator with the 3-layer spherical and 2-layer cylindrical lens, respectively.
126 R. Dzhygadlo et al.
separation powers of 3.6 ± 0.1 s.d. for the bar (left) and 3.1 ± 0.1 s.d. for
the plate (right) at 7 GeV/c π/p momentum correspond to 3.8 ± 0.1 s.d. and
3.3 ± 0.1 s.d. at 3.5 GeV/c π/K momentum. While the result with a nar-
row bar is clearly superior to the result with the plate, both performances sat-
isfy the PANDA Barrel DIRC requirement and are in a good agreement with
simulations.
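The separation power quoted above is conventionally computed from Gaussian fits to the two log-likelihood-difference distributions, as the distance between the means in units of the average width; a minimal sketch (the fit means and widths below are illustrative, not the measured values):

```python
def separation_power(mu_a, sigma_a, mu_b, sigma_b):
    """Distance between two approximately Gaussian distributions,
    in units of their average standard deviation."""
    return abs(mu_a - mu_b) / (0.5 * (sigma_a + sigma_b))

# Illustrative proton/pion log-likelihood-difference fit results:
s = separation_power(mu_a=-18.0, sigma_a=10.0, mu_b=18.0, sigma_b=10.0)
print(s)  # 3.6
```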
4 Conclusions
The PANDA Barrel DIRC will deliver hadronic PID, in particular π/K separa-
tion better than 3 s.d. up to 3.5 GeV/c. The final design features narrow radiators
made of synthetic fused silica, focusing optics with 3-layer spherical lenses and
a compact prism-shaped expansion volume instrumented with MCP-PMTs.
The latest prototype tests with particle beams at CERN validated this design.
The configuration with a wide plate radiator and a cylindrical lens also proved
to give sufficient PID performance. Nevertheless, the narrow-bar design was
selected because of its superior PID performance and because the plate requires
significantly better timing precision, which may not be available at the beginning
of the PANDA physics run.
The production of the optical components, photon sensors and electronics
for the PANDA Barrel DIRC is scheduled for the 2019–2022 period. The final
assembly and installation should take place in 2023.
Acknowledgement. This work was supported by HGS-HIRe, HIC for FAIR, BNL
eRD14 and U.S. National Science Foundation PHY-125782. We thank GSI and CERN
staff for the opportunity to use the beam facilities and for their on-site support.
References
1. PANDA Collaboration: Physics performance report for PANDA: strong interaction
studies with antiprotons. arXiv:0903.3905
2. PANDA Collaboration: Technical Progress Report, FAIR-ESAC/Pbar (2005)
3. PANDA Collaboration, Singh, B., et al.: Technical design report for the PANDA
barrel DIRC detector. arXiv:1710.00684
4. Adam, I., et al.: The DIRC particle identification system for the BaBar experiment.
Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrom. Detect. Assoc. Equip.
538, 281 (2005)
5. Lehmann, A., et al.: Recent developments with microchannel-plate PMTs. Nucl.
Instrum. Methods Phys. Res. Sect. A Accel. Spectrom. Detect. Assoc. Equip. 876,
42 (2017)
6. Ugur, C., et al.: 264 channel TDC platform applying 65 channel high precision (7.2
ps RMS) FPGA based TDCs. https://doi.org/10.1109/NoMeTDC.2013.6658234
7. Michel, J., et al.: Electronics for the RICH detectors of the HADES and CBM
experiments. JINST 12 (2017). C01072
8. Dzhygadlo, R., et al.: Simulation and reconstruction of the PANDA barrel DIRC.
Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrom. Detect. Assoc. Equip.
766, 263 (2014)
9. Dzhygadlo, R., et al.: The PANDA barrel DIRC. JINST 11 (2016). C05013
The Belle II/SuperKEKB Commissioning
Detector - Results from the First
Commissioning Phase
Miroslav Gabriel(B)
on behalf of the BEAST II Collaboration
1 Introduction
On February 10th, 2016, the first circulating bunches were observed in the new
accelerator [2].
2 Commissioning of SuperKEKB
Table 1. Overview of the detector subsystems used in the BEAST experiment and
their specific measurements.

System             | PIN diodes          | ³He tubes       | Micro TPCs       | Diamonds
Unique measurement | Neutral vs. charged | Thermal neutron | Directional fast | Ionizing
                   | radiation dose      | rate            | neutron flux     | radiation dose
Quantity           | 32 × 2              | 4               | 4                | 4

System             | BGO             | CLAWS      | Crystals          | Scintillator
Unique measurement | Electromagnetic | Injection  | EM particle rate/ | Electromagnetic
                   | dose rate       | background | injection bkg     | particle rate
Quantity           | 8               | 8          | 6 × 3             | 4
The Belle II/SuperKEKB Commissioning Detector 129
squared to account for current dependencies, is plotted over the total accumu-
lated current and time.
Towards the end of Phase 1 the improvement in vacuum purity, and therefore in
beam-gas background, was large enough to allow a combined study of beam-gas and
Touschek backgrounds. In a so-called size sweep scan the background rate is probed
while artificially changing the size and the current of the circulating beams to
determine their influence. A total of 15 measurements for five different beam
sizes and three different beam currents is shown in Fig. 2. The observed
background rate in the BGO system is normalized to the beam-gas contribution,
which depends on the beam current I, the pressure P in the ring, and an effective
atomic number Ze representing the mixture of residual gas species inside the
vacuum system, and can be modeled as I·P·Ze². The normalized rate is then a
function of the beam current divided by the pressure, the effective atomic number
squared, and the beam size. The agreement between the data and the linear fit
validates the mathematical model used to predict the background rates from these
two processes under changing beam conditions.
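This scaling can be written as a toy model (a sketch with illustrative coefficients, not the fitted BEAST values): the total rate is a beam-gas term proportional to I·P·Ze² plus a Touschek term proportional to I²/σy, so the rate normalized to the beam-gas factor is linear in x = I/(P·Ze²·σy).

```python
def total_rate(i_beam, pressure, z_eff, sigma_y, b=1.0, t=1.0):
    """Toy background model: beam-gas + Touschek terms (b, t illustrative)."""
    return b * i_beam * pressure * z_eff**2 + t * i_beam**2 / sigma_y

def normalized_rate(i_beam, pressure, z_eff, sigma_y, b=1.0, t=1.0):
    """Rate divided by the beam-gas factor I*P*Ze^2; linear in
    x = I / (P * Ze^2 * sigma_y)."""
    return total_rate(i_beam, pressure, z_eff, sigma_y, b, t) / (
        i_beam * pressure * z_eff**2)
```

With b = t = 1 the normalized rate is 1 + x, so doubling the beam current at fixed pressure and beam size doubles the Touschek part while the beam-gas part stays flat, which is what the linear fit in the size sweep scan tests.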
To compensate for background losses and collisions, new particles are injected
directly into the circulating bunches during normal operations. However, for the
first few turns the bunches which received new particles will show significantly
higher background rates when passing by the IP. This will have a great impact
on the operation of the Belle II pixel detector, since the increased backgrounds
from injections lead to overflow, making it essentially blind for real physics until
it is read out. With precise knowledge of the bunch-by-bunch structure and the
time evolution of injection noise the charge collection in the pixel detector can
be disabled during transit of particular bunches, ensuring optimal efficiency of
the innermost detector.
In Fig. 3 the background from charged particles observed by the CLAWS detectors
in an injection into the electron ring is shown. The phase shift of the injected
particles has been changed from the nominal settings to investigate its influence
on the time development. The signals are characterized by substantially higher
background levels immediately after injection. The intensity decreases to the
base level after several turns, corresponding to a few hundred microseconds.
Certain timing patterns can be observed, and a correlation with accelerator
features such as betatron oscillation is currently being studied.

Fig. 3. Reconstructed signal observed by CLAWS in the three innermost forward
sensors in an injection into the electron ring with altered phase shift parameter.
5 Summary
The SuperKEKB collider is currently undergoing an extensive commissioning
campaign divided into three phases. Due to the unprecedented luminosity, beam
backgrounds will represent a significant challenge for the operation of Belle II.
During the first phase the Belle II detector was substituted by the BEAST
experiment, a combination of several detector systems specifically designed to
measure and understand non-collision beam backgrounds. Important measurements
included the quantification of the improvement in vacuum conditions due to
scrubbing, the development and verification of a model describing Touschek and
beam-gas backgrounds, and the characterization of the time development of
injection backgrounds.
The second phase of the commissioning campaign will include the Belle II
detector, with a modified inner detector incorporating some of the BEAST systems
for further background studies, as well as the final focusing system, and it is
planned to observe the first collisions in the new accelerator. Data taking will
start in February 2018.
References
1. Abe, T., et al.: Belle-II Collaboration. arXiv:1011.0352 [physics.ins-det]
2. ‘First turns’ for SuperKEKB. http://cerncourier.com/cws/article/cern/64345.
Accessed 04 Aug 2017
The CMS ECAL Upgrade for Precision
Crystals Calorimetry at the HL-LHC
Patrizia Barria(B)
on behalf of the CMS Collaboration
1 Introduction
The physics goals for the HL-LHC phase (Phase II) [1,2] foresee precise mea-
surements of the Higgs boson couplings and studies of rare SM processes, crucial
for searches for new physics. To successfully exploit these data which will be
collected during the HL-LHC phase, it is necessary to reduce the effects of the
increased simultaneous interactions per bunch crossing (pileup (PU)). At the
same time the calorimeters should provide performance similar to that delivered
so far but with beam intensities that will result in 200 PU arising from a peak
instantaneous luminosity of 5×1034 cm−2 s−1 . This will be a particularly difficult
challenge for the endcap region (EE), due to the fact that the radiation levels
will change by a factor of 100 between |η| = 1.48 and |η| = 3.0. The dose and
fluence levels will result in significant loss to the crystal light transmission and
VPT (Vacuum Photo Triode) performance. In the barrel region (EB) we will
also have to cope with the increased PU, along with increasing APD (Avalanche
Photo Diode) noise (see Fig. 2(c)) resulting from increased dark current, the
dominant effect for Lint > 1000 fb−1 (see Fig. 2(b)). These effects require that
the CMS detector [3] be upgraded, including a replacement of EE and an upgrade
of EB.
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 132–136, 2018.
https://doi.org/10.1007/978-981-13-1313-4_27
Fig. 2. (a) Schematic of CMS ECAL readout electronics (crystals (at the bottom), the
motherboard, 5 VFE boards, and 1 FE board). (b) Expected APD dark current (Idark )
level in EB versus integrated luminosity at both |η| = 0 (blue curves) and |η| = 1.45
(purple curves) for operating temperatures of 18 ◦ C or at 8 ◦ C. (c) Expected noise
level in EB versus integrated luminosity at |η| = 1.45 for operating temperatures of
18 ◦ C (red curves) or at 8 ◦ C (blue curves), with the present electronics (continuous
line, shaping time t = 43 ns), or the upgraded electronics (dotted line, t = 20 ns).
Copyright 2017 CERN for the benefit of the CMS Collaboration. CC-BY-4.0 license.
VFE cards. The VFEs contain ASICs that perform pulse amplification, shaping
and digitization functions. Digital signals from the 5 VFE cards are sent to a
single FE card where the trigger primitive (TP) is formed. This TP is essentially
the summed energy from the 5 × 5 crystals, together with some basic information
on the shape of the shower in these crystals. The TP is transmitted via a Gbit
optical link at 40 MHz to the off-detector trigger cards. The FE card also features
a memory, to store the individual crystal energies until reception of an external
level-1 (L1) trigger signal. At this point a second Gbit optical link multiplexes
the individual crystal energies to the off-detector readout boards. Figure 2(a)
shows that the PbWO4 crystals, the APDs, the motherboards, and the overall
mechanical structure will not be upgraded. Both the FE and VFE readout electronics
will be replaced to satisfy the increased Phase II L1 trigger latency (12.5 µs cf.
4.8 µs) and accept-rate (750 kHz cf. 100 kHz) requirements and to cope with the
increased HL-LHC performance. The VFEs will serve a similar purpose, but with a
shorter shaping time (20 ns cf. 43 ns) and faster digitisation to reduce
out-of-time PU contamination, electronics noise, and anomalous APD signals
(spikes) [5]. The FE card will read out individual crystal energies at 40 MHz,
moving most processing off-detector, so the off-detector electronics needs to be
upgraded to accommodate the higher transfer rates and to generate the TPs.
3.1 Motivations
not yet available at L1. Currently the spikes are rejected at L1 using an
algorithm whose performance will degrade significantly under HL-LHC conditions.
With the upgraded electronics, however, we will be able to apply more
sophisticated filtering algorithms in the VFE. Decreasing the shaping time and
increasing the digitization frequency will improve the discrimination between
“normal” signals and those from the (faster) spikes (see Fig. 3(b)). Furthermore,
the increasing APD noise would significantly degrade the electromagnetic
resolution (see Fig. 2(c)); we will mitigate this effect by cooling the crystals,
and therefore the APDs, and by optimising the pulse shaping with the new VFEs.
The APD dark current depends strongly on temperature, so reducing the EB
temperature from 18 ◦C to 8 ◦C lowers the dark current by a factor of 2.5,
resulting in a noise decrease of 35%.
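These numbers are roughly consistent with shot-noise scaling, where the dark-current noise term goes as √I_dark (a back-of-the-envelope sketch, assuming that term dominates):

```python
import math

def noise_reduction(dark_current_factor):
    """Fractional decrease of a noise term scaling as sqrt(I_dark)
    when the dark current drops by the given factor."""
    return 1.0 - 1.0 / math.sqrt(dark_current_factor)

print(round(100 * noise_reduction(2.5)))  # ~37%, close to the quoted 35%
```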
Fig. 3. (a) New VFE boards with Trans-impedance Amplifier (TIA) pulse
shaper/preamplifier as re-designed ASICs. (b) Comparison of APD pulse shape for
spike and scintillation events. The pulse shape for spike (scintillation) events is shown
in red (blue). (c) Timing resolution as a function of normalized amplitude for different
sampling frequencies. Copyright 2017 CERN for the benefit of the CMS Collaboration.
CC-BY-4.0 license.
4 Timing Performance
Precision timing will improve the vertex localisation for high energy photons,
and in particular the vertex resolution for H → γγ decays will benefit from
this. The current efficiency of localising the vertex is ∼70–80% but with the
current EB timing precision it would be reduced to less then 30% at 200 PU.
An improvement up to ∼70% can be achieved for photons with |Δη| > 0.8, but
to get this important increase the VFE ASIC design, the sampling rate, and the
clock distribution should be upgraded to approach the 30 ps timing precision.
As the pulse shaper/preamplifier ASIC option, a trans-impedance amplifier (TIA)
(see Fig. 3(a)) has been chosen and tested. The TIA architecture is mainly digital
and does not apply shaping to the APD pulse; it measures the APD signal with high
bandwidth and is optimized for sampling/digitization up to 160 MHz. Its
performance has been confirmed during 2016 beam tests at the H4 beam line of the
CERN SPS with high-energy electrons (20 < Ee < 250 GeV). The timing
136 P. Barria
performance of the new electronics has been evaluated for a single crystal, at
the centre of a 5 × 5 PbWO4 crystal matrix, read out by a prototype VFE with a
discrete-component TIA. Different sampling frequencies have been emulated, and
the APD timing has been extracted through a template fit to the pulse shape.
Figure 3(c) reports the very promising results achieved at 160 MHz: σ ∼ 30 ps at
a normalized amplitude (A/σ) of 250, corresponding to a 25 GeV photon with
100 MeV noise (at HL-LHC start) and a 60 GeV photon with 240 MeV noise (at
HL-LHC end).
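The normalized amplitude in these two benchmark points is the same by construction, as a quick check of the arithmetic shows:

```python
def normalized_amplitude(e_photon_gev, noise_mev):
    """A/sigma: signal amplitude (photon energy) over the noise level."""
    return e_photon_gev * 1000.0 / noise_mev

print(normalized_amplitude(25.0, 100.0))  # 250.0 (HL-LHC start)
print(normalized_amplitude(60.0, 240.0))  # 250.0 (HL-LHC end)
```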
5 Conclusions
The ECAL performance at 13 TeV is excellent, but the harsh and challenging
conditions of the HL-LHC necessitate a complete replacement of EE and a partial
upgrade of EB to keep this performance comparable to that of Run II. To mitigate
the increased APD noise due to radiation damage, the EB operating temperature
will be lowered from 18 ◦C to 8 ◦C. Reading single-crystal information at 40 MHz
through trans-impedance amplifiers will provide much more information to the
off-detector electronics. This will be used in the L1 trigger to mitigate
anomalous signals in the APDs and reduce the effects of pileup. Higher-precision
timing information than presently available will mitigate pileup even further
and result in an overall EB performance comparable to that of present CMS
operation. Moreover, a more precise time-of-flight measurement of photons
(σ ∼ 30 ps) will play a key role during the HL-LHC in obtaining the same angular
resolution in the H → γγ analysis as in Run II.
References
1. Rossi, L., Bruning, O.: High luminosity large Hadron collider: a description for the
European strategy preparatory group, CERN-ATS-2012-236. https://cds.cern.ch/
record/1471000
2. CMS Collaboration: Technical proposal for the phase-II upgrade of the CMS detec-
tor, CERN-LHCC-2015-010; LHCC-P-008 (2015)
3. The CMS Collaboration: The CMS experiment at the CERN LHC. JINST 3 (2008).
S08004. https://doi.org/10.1088/1748-0221/3/08/S08004
4. Chatrchyan, S., et al., CMS Collaboration: Performance and operation of the CMS
electromagnetic calorimeter. JINST 5 (2010). T03010. arXiv:0910.3423v3
5. Petyt, D.A., CMS Collaboration: Mitigation of anomalous APD signals in the CMS
electromagnetic calorimeter. J. Phys. Conf. Ser. 404 (2012). 012043. http://stacks.
iop.org/1742-6596/404/i=1/a=012043
6. The CMS Collaboration: CMS Phase II upgrade scope document, CERN-LHCC-
2015-019. https://cds.cern.ch/record/2055167
The Tracking System at LHCb in Run 2:
Hardware Alignment Systems, Online
Calibration, Radiation Tolerance and 4D
Tracking with Timing
Artur Ukleja(B)
1 Introduction
The LHCb experiment [1] is designed to study B and D meson decays at the
LHC. It is constructed as a forward spectrometer with an acceptance in the
pseudorapidity range 2 < η < 5. The tracking system of the LHCb detector is
composed of a silicon-strip vertex detector, close to the proton-proton interaction
region, and five tracking stations: two upstream and three downstream of a dipole
magnet. The outer part of the downstream stations is covered by the Outer
Tracker (OT) detector [1]. The OT is a straw-tube gaseous detector covering an
area of 5 × 6 m². The OT detector modules are arranged in three stations (T1, T2
and T3). Each station consists of four module layers, arranged in an x − u − v − x
geometry. The modules in the x layers are oriented vertically. The y coordinate
is vertical, and the z axis is defined along the beam pipe, pointing from the
interaction region towards the forward end of the detector. Below, the monitoring
and evaluation of the OT performance in Run 2 are presented, as well as the
stability of the detector and studies of its possible contribution to the
identification of protons and pions.
t_drift(r) = (21.3 |r|/R + 14.4 |r|²/R²) ns,
where R = 2.45 mm is the radius of a straw. The parameter values obtained from
Run 2 data are consistent with the Run 1 results [2,3]. The maximum drift time
extracted from the parametrization is 35 ns. The dependence of the resolution on
the distance from the wire (Fig. 1(b)) is also extracted from the fit:
σ_t_drift(r) = (2.25 + 0.3 |r|/R) ns.
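As an illustrative check, the two parametrizations above can be evaluated numerically; the values below are simply read off the fitted formulas, not additional measurements:

```python
R = 2.45  # straw radius in mm

def t_drift(r):
    """Drift-time parametrization from the fit, in ns."""
    return 21.3 * abs(r) / R + 14.4 * (r / R) ** 2

def sigma_t(r):
    """Drift-time resolution from the fit, in ns."""
    return 2.25 + 0.3 * abs(r) / R

# At the straw wall (r = R) the parametrization reaches its maximum
print(t_drift(R))   # 35.7 ns
print(sigma_t(R))   # 2.55 ns
```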
Due to a large background contribution coming mainly from secondary hits, the
residual distributions are fitted with a Gaussian function in a ±1σ range. The drift
time residual distribution has a width of 2.4 ns and the spatial resolution
(Fig. 1(c)) is 171 µm.
Fig. 1. (a) The drift time versus the unbiased distance with the overlaid fit (black line),
(b) the drift time residual distribution and (c) the hit distance residual distribution.
In (b) and (c) the cores of the distributions (within ±1σ) are fitted with a Gaussian
function and the results are indicated in the figures.
Fig. 2. (a) The average number of recorded hits as a function of the activity in the
previous bunch crossing, expressed as the scalar sum of transverse energy in all
calorimeter clusters (in GeV). (b) The recorded drift time spectrum for all hits in the
OT (black line) and keeping only events with ΣET(prev) < 1000 GeV (blue
histogram).
4 Mechanical Stability
To monitor the position of the OT, and thereby control its performance, the
detector was instrumented with the Relative Alignment System of NIKHEF
(RASNIK) [4–6]. The idea of this system is to project a finely detailed image
through a lens onto a CCD camera (a RASNIK line). The RASNIK lines are
mounted on the four corners of each frame of the OT and measure the
displacement of these four points with respect to corresponding reference points.
Horizontal lines measure mainly the x and y LHCb coordinates. The resolution of
the RASNIK system is better than 1 µm. Example x and y variations of points
close to the frames' corners are shown in Fig. 3. The positions vary within ∼200 µm
in both x and y coordinates. At the beginning of the run, in May and June, the
changes are relatively large (100–200 µm), until the intervention at the end of
June when the detector was opened and closed. After the intervention, in July
and August, the OT slowly reaches an equilibrium state. The same effect is seen
after a second intervention in September, this time over a shorter period. In
October and November, the changes evolve with the opposite trend to the one
observed in May.
Fig. 4. (a) The difference in time-of-flight between protons and pions (Δt) as a function
of their momentum (p), at the center of the OT, about 8.5 m from the interaction
point. (b) The distribution of track times for protons and pions with p < 7 GeV in
data acquired in 2016.
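The separation shown in Fig. 4(a) follows from relativistic kinematics. A minimal numerical sketch, assuming a straight 8.5 m flight path and standard PDG masses, with the momentum value chosen purely for illustration:

```python
import math

C_M_PER_NS = 0.29979  # speed of light in m/ns

def tof_ns(p_gev, m_gev, path_m=8.5):
    """Time of flight t = L/(beta*c), with beta = p/E for momentum p and mass m."""
    energy = math.sqrt(p_gev**2 + m_gev**2)
    return path_m * energy / (p_gev * C_M_PER_NS)

# Proton-pion time-of-flight difference at an illustrative momentum of 5 GeV
dt = tof_ns(5.0, 0.93827) - tof_ns(5.0, 0.13957)
print(dt)  # ~0.48 ns
```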
6 Summary
The performance of the LHCb Outer Tracker was stable throughout Run 2.
Efficiencies and availability have been kept at high standards for the whole
data-taking period. The maximum drift time extracted from data is 35 ns with a
resolution of 2.4 ns, and the spatial resolution is 171 µm. The position monitoring
results show that, to a very good approximation, the construction supporting the
Outer Tracker is mechanically rigid. The high accuracy of this system nevertheless
allows even small deformations of the Outer Tracker, no larger than 200 µm and
connected with magnetic field configurations, mechanical interventions etc., to be
tracked.
References
1. Alves Jr., A.A., et al.: The LHCb detector at the LHC. JINST 3, S08005 (2008)
2. Arink, R., et al.: Performance of the LHCb outer tracker. J. Instrum. 9, P01002
(2013)
3. Arink, R., et al.: Improved performance of the LHCb Outer Tracker in LHC Run-2.
(in preparation)
4. Dekker, H., et al.: The RASNIK/CCD 3-dimensional alignment system, 017, eConf
C930928 (1993)
5. Adamus, M., et al.: Test results of the RASNIK optical alignment monitoring system
for the LHCb Outer Tracker Detector, LHCb-Note-2001-004
6. Adamus, M., et al.: First Results from a Prototype of the RASNIK alignment system
for the Outer Tracker detector in LHCb experiment, LHCb-Note-2002-016
Design of a High-Count-Rate Photomultiplier
Base Board on PGNAA Application
Baochen Wang, Lian Chen, Yuzhe Liu, Weigang Yin, Zhou He,
and Ge Jin
1 Introduction
2 Detector System
The detection system is shown in Fig. 1. A NaI(Tl) detector of size 6 × 7 inches is
used to achieve high detection efficiency and energy resolution. A photomultiplier
tube (PMT) and the sodium iodide crystal are enclosed in the detector. A PMT base
board connects the detector to the electronic system. The PMT output signal flows
through the main amplifier into a multi-channel analyzer (MCA). A high-voltage
supply powers the detector.
This simple design, however, cannot keep the voltages between electrodes fixed,
because the PMT electrode currents flow through the divider, which has a
relatively high output impedance. The electrode current depends on the light
intensity, so the inter-electrode voltages, and consequently the PMT gain, change
with light intensity, causing non-linear effects and signal distortion [2]. The output
current of the NaI detector is very high, since the NaI crystal has a very high
luminous efficiency. Signal distortion will therefore occur if the biasing circuit
(the base board) cannot supply enough current to the "high current" dynodes,
i.e. the last several dynodes.
Dynodes Dy1 to Dy5 are "low current" dynodes. These dynodes should operate at
appropriate voltages; Zener diodes are used to limit their working voltage if it
becomes too high.
Dynodes Dy6 to Dy10 are "high current" dynodes. Increasing the current through
these dynodes is very helpful for operation in a high count rate environment.
Following Kerns's design [3] with small changes, transistors, diodes and Zener
diodes are used in the developed design, shown in Fig. 3. The transistors (Q in
Fig. 3) are configured as current sources (marked in Fig. 3 by a red circle),
providing current to the dynodes. The current flows through the emitter, and from
the collector into the emitter of the next transistor. The diode (D in Fig. 3) near
each transistor provides the biasing voltage V_BE (voltage between base and
emitter). Zener diodes across the transistors provide the biasing voltage V_CE
(voltage between collector and emitter) and protect the transistors from high
voltage. These components keep the transistors in the active (amplification)
region.
The current delivered by the current source (marked in Fig. 3) can be calculated
with the following formula:

I_EQ = (V_BQ − V_BEQ) / (R1 ∥ R2),    (2)

where V_BQ is the biasing voltage of the base and V_BEQ the base–emitter
voltage. The currents flowing through the transistors are equal to each other.
Resistors R1 and R2 (in Fig. 3) are connected in parallel.
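The behaviour of Eq. (2) can be sketched numerically. The component values below are purely hypothetical and only illustrate the formula; they are not taken from the actual board:

```python
def parallel(r1, r2):
    """Equivalent resistance of R1 and R2 in parallel."""
    return r1 * r2 / (r1 + r2)

def emitter_current(v_bq, v_beq, r1, r2):
    """Eq. (2): I_EQ = (V_BQ - V_BEQ) / (R1 || R2)."""
    return (v_bq - v_beq) / parallel(r1, r2)

# Hypothetical values: 5 V base bias, 0.7 V base-emitter drop, two 1 kOhm resistors
i_eq = emitter_current(5.0, 0.7, 1e3, 1e3)
print(i_eq)  # 0.0086 A (8.6 mA) of dynode bias current
```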
4 Test Result
The test result of the "simple" resistive base board design is shown in Fig. 4. The
signal is distorted under high count rate conditions. The upper limit of the energy
spectrum is 5 MeV and the upper limit of the count rate is 100 kps.
Fig. 4. Test result of base board. Up: Signal output from base board, left: high energy
characteristic gamma; right: low energy characteristic gamma. Down: Signal output from main
amplifier, left: low count rate condition; right: high count rate condition.
The base board of the developed design does not show this distortion. Testing the
new base board under high count rate conditions, the hydrogen peak (2.2 MeV) is
resolved with a resolution of 7.8%, as shown in Fig. 5. The upper limit of the
count rate has been improved to 300 kps, and the upper limit of the energy
spectrum to 10 MeV. This result meets our requirements for the PGNAA
application.
5 Conclusions
The developed design with a current driver works properly under high count rate
conditions and shows great improvement in all key figures. The upper limit of the
energy spectrum has been improved to 10 MeV and the upper limit of the count
rate to 300 kps.
This work is supported by the National Natural Science Foundation of China under
Grant No. 11375179.
References
1. Liu, Y., Chen, L., Liang, F. et al.: High counting-rate data acquisition system for the
applications of PGNAA. In: Real Time Conference (RT), 2016 IEEE-NPSS, pp. 1–4. IEEE
(2016)
2. Heifets, M., Margulis, P.: Fully active voltage divider for PMT photo-detector. In: Nuclear
Science Symposium and Medical Imaging Conference (NSS/MIC), 2012 IEEE, pp. 807–814.
IEEE (2012)
3. Kerns, C.R.: A high-rate phototube base. IEEE Trans. Nucl. Sci. 24(1), 353–355 (1977)
Front-end Electronics and Fast Data
Transmission
Electronics and Triggering Challenges
for the CMS High Granularity
Calorimeter for HL-LHC
Johan Borg(B)
on behalf of the CMS Collaboration
1 Introduction
By the start of the third long shutdown of the LHC in late 2023, the accumulated
radiation damage to the current endcap calorimeters of the CMS detector is
expected to be so severe that new endcap calorimeters must be installed. In
addition,
to cope with the increase in luminosity proposed by the high-luminosity LHC
(HL-LHC) project, and the resulting large number of simultaneous interactions
at each bunch-crossing (pile-up) and radiation damage, the High Granularity
Calorimeter (HGCAL) project [1] is developing a sampling calorimeter based
on the CALICE/ILC concepts [2], but tailored to the CMS/LHC conditions.
Compared to the CALICE design, the HGCAL will include a high-precision time-
of-arrival (TOA) measurement capability as a means of reducing the impact of
the high pile-up, as well as circuits for generating reduced-resolution energy
measurement data for the trigger, and it must respect a tight power budget to
keep the cooling requirements of the detector manageable. Although the HGCAL
will use both silicon sensors and scintillator + silicon-photomultiplier sensors,
both with very similar front-end electronics, this paper focuses on the electronics
requirements for the more challenging silicon sensors.
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 149–153, 2018.
https://doi.org/10.1007/978-981-13-1313-4_30
2 Front-End Electronics
A sketch of the currently envisioned architecture for the HGCAL frontend is
shown in Fig. 1. To enable calibration using minimum ionizing particles (MIP),
which deposit about 3.6 fC in a 300 µm sensor, an equivalent input charge noise
of less than 0.32 fC (2000 e-) is specified. To reach the 15 bit dynamic range
required to cover the specified 10 pC full-scale range, a time over threshold
(TOT) circuit will extend the measurement range of a charge-preamplifier and
Analog to Digital Converter (ADC) path beyond the 11 bit range of the ADC.
To simplify the logic for summing (nominally 4) sensor cells to form the reduced-
resolution trigger data, as well as to simplify the offline (but still time-critical)
high-level trigger analysis, the linear regions of the ADC and TOT paths should
exhibit some overlap. A high dynamic-range charge injection circuit with high
relative accuracy will be used for transferring the MIP-based calibration to the
TOT range.
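The numbers quoted in this paragraph are mutually consistent, as a quick back-of-the-envelope check shows (the elementary charge is a physical constant; everything else comes from the text):

```python
E_CHARGE = 1.602e-19  # elementary charge in coulombs

noise_fc = 2000 * E_CHARGE * 1e15   # 2000 electrons expressed in fC
mip_snr = 3.6 / noise_fc            # MIP signal (3.6 fC) over the noise floor
lsb_fc = 10e-12 / 2**15 * 1e15      # LSB of a 15-bit range spanning 10 pC

print(noise_fc)  # ~0.32 fC, matching the quoted noise specification
print(mip_snr)   # ~11: a MIP sits comfortably above the noise
print(lsb_fc)    # ~0.31 fC: the 15-bit LSB lands right at the noise level
```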
To enable the required 20 ps TOA resolution the front-end must have a
large gain-bandwidth product, and a phase-stable reference clock signal must be
distributed to each front-end ASIC. The shaping-time of the front-end is a trade-
off between electrical noise and measurement corruption due to a slow decay of
charge deposited in preceding bunch-crossings. As shown in Fig. 2 digital filtering
[3] can mitigate the latter, and would thus allow the shaper time-constant to
be increased from 10–15 ns to 30 ns. Finally, the whole analog front-end must
respect a power budget of only 10 mW per channel.
Fig. 1. The current concept for the HGCAL frontend. Charge deposited in the sensor
cell on the left is converted to a voltage by the charge-sensitive preamplifier, before
being digitized either using the ADC or the TOT circuit. At the same time the time of
arrival is measured by a TOA circuit. Not shown: Circuits for sensor leakage current
compensation, and circuits that enable sensors of both polarities to be used. From
CMS-CR-2017-156. Published with permission by CERN.
An overview of the current system design concept for the HGCAL detector is
shown in Fig. 3. At the full 40 MHz acquisition rate, with 6 million channels and
about 34 bits per channel, the raw data generated in the detector will be just
Fig. 2. Left: An illustration of the equivalent input charge noise as a function of
shaping time for CR-RC shaping with equal time-constants, based on a simulation
with an input voltage noise of 0.5 nV/√Hz and a total sensor+PCB+frontend
capacitance of 80 pF. 10-bit digitization after the first-stage amplification is
assumed. Right: the system impulse response before and after digital (FIR)
filtering using 3 coefficients for τ = 30 ns. From CMS-CR-2017-156. Published
with permission by CERN.
above 8 Pbit/s, which would require some 800000 10 Gbit optical links to read
out. To keep the cost of the system acceptable, this number has to be reduced by
almost two orders of magnitude. This will be achieved using on-detector buffer
memory capable of storing 12.5 µs worth of data, to be read out in response to
the CMS trigger at a rate of up to 750 kHz. These data are also zero-suppressed
by omitting charge deposits close to the noise level, and the TOT and TOA data
fields are transmitted only when they contain meaningful results. These measures
are expected to reduce triggered data bandwidth enough to fit into about 6000
10 Gbit lpGBT [4] links.
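The raw-rate and link-count figures above follow directly from the quoted channel count and word size:

```python
channels = 6e6          # sensor channels in the detector
bits_per_sample = 34    # bits per channel per bunch crossing
bx_rate = 40e6          # bunch-crossing rate in Hz
link_rate = 10e9        # capacity of one 10 Gbit/s optical link

raw_bps = channels * bits_per_sample * bx_rate
print(raw_bps / 1e15)        # ~8.16 Pbit/s of raw data
print(raw_bps / link_rate)   # ~816,000 links if read out untriggered
```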
At the same time, the HGCAL detector must contribute real-time informa-
tion to the L1 trigger decision. As mentioned above, the spatial resolution will
be reduced by a factor of four within the front-end ASICs by summing adjacent
cells into trigger cells, and by only using half of the layers for generating trigger
data. Additionally, only trigger cells, or possibly regions of trigger cells, where
substantial deposits have been detected will be transmitted. All in all, we expect
to use on the order of 8000 optical links for bringing the trigger information out
of the detector.
Once out of the detector, the energy measurements from the individual trig-
ger cells must be merged into 3-dimensional clusters. The current approach is
based on finding connected regions around “seed” hits (cells exceeding a compar-
atively high energy threshold) on each layer to form 2D clusters, and in a second
processing step merging these into 3-dimensional clusters that correspond to
incoming particles. To meet throughput and latency requirements these opera-
tions will be implemented using a bank of FPGAs likely housed on ATCA boards
about 70 m from the detector.
Fig. 3. The signals deposited in each of the approximately 6M sensor cells, distributed
over about 600 m2 of sensor wafers, are digitized by frontend ASICs located on sensor
PCBs that connect to the sensor wafers using through-hole wire bonds. The digital data
streams from the frontend ASICs will be sent over electrical links to the panel PCBs
where streams from multiple sensors are aggregated in concentrator ASICs before being
sent off-detector using lpGBT-based optical links. From CMS-CR-2017-156. Published
with permission by CERN.
References
1. Technical Proposal for the Phase-II Upgrade of the CMS Detector, CMS collabora-
tion, Technical report (2015)
2. The CALICE collaboration: Construction and commissioning of the CALICE analog
hadron calorimeter prototype. J. Instrum. 5, P05004 (2010)
3. Gadomski, S., et al.: The deconvolution method of fast pulse shaping at hadron
colliders. Nucl. Instrum. Methods Phys. Res. A 320, 217–227 (1992)
4. Moreira, P.: The LpGBT Project Status and Overview. https://indico.cern.ch/
event/468486/contributions/1144369/attachments/1239839/1822836/aces.2016.03.
08.pdf. Accessed 20 Jun 2017
5. Jain, S.: Construction and first beam-tests of silicon-tungsten prototype modules
for the CMS High Granularity Calorimeter for HL-LHC. J. Instrum. 12, C03011
(2017)
Readout Electronics for CASCA in XTP
Detector
1 Introduction
Its valid sampling rate is designed to be 40 Msps and the readout frequency is
15 MHz. The analog output from CASCA needs to be digitized and transported by
the readout system. The detailed design of the readout system is presented
in this paper.
2.1 Architecture
The Adapter card, edge-mounted to the front-end card, is located near the
detector and samples the waveform. The Main card, an FMC-based card responsible
for digital data acquisition and processing, connects to the Adapter card
through a Mezzanine card. In a "several-in-one" arrangement, several Adapter cards
are connected to one Main card through a Mezzanine card with several HDMI 2.0
cables. After processing the digitized data, the Main card transmits them to a server
or PC through the Gigabit Ethernet protocol. A schematic block diagram is
shown in Fig. 1.
The CASCA front-end card is designed to configure the CASCA chip and to
transfer its output signal. An ultralow-noise, high-performance differential
amplifier is adopted on the CASCA card to provide a rail-to-rail output for
the high-precision ADC (Analog-to-Digital Converter) on the Adapter card.
The Adapter card, edge-mounted to the CASCA front-end card, is in charge
of digitizing the output of the CASCA card and driving the ASIC. An FPGA
chip and a multi-output clock-generation chip are mounted on the Adapter card.
The FPGA chip provides the necessary logic signals for the CASCA ASIC and is
responsible for packaging the digital data from the 32 channels. The clock generator
AD9517 is configured to meet the requirements of the sampling and readout
clocks. The Adapter card is shown at the bottom left of Fig. 2.
To meet the bandwidth of the data flow between the Adapter card and the Main
card, we selected a Xilinx Artix-7 FPGA, which provides a quad of GTP
transceivers. With Serial RapidIO technology and an HDMI 2.0 cable, the
bandwidth can reach a maximum of 6.25 Gbps, which exceeds the requirement of
the Adapter card. The data are transferred through the Serial RapidIO protocol,
implemented with the Serial RapidIO Gen2 Endpoint Solution logic IP core in the
FPGA.
3 Firmware Architecture
To meet the requirements of the CASCA, two FPGAs are used in this system. All
command control and data transfer depend on the FPGA chips, which act as the
control center. An efficient firmware improves the performance of the system. A
modular architecture is followed in the design of the
firmware. The Serial RapidIO protocol and the Gigabit Ethernet protocol for data
transfer are realized by logic IP cores in the FPGA, and a dedicated module, the
control interface, interprets commands from the PC. Each module has its own
task and collaborates with the others. A detailed schematic block diagram is
shown in Fig. 3.
4 Summary
A new readout system has been proposed for the CASCA in the XTP detector. The
hardware and firmware have been developed successfully, and the transfer
capacity exceeds the requirement of the CASCA. It is also applicable to future
upgrades of the CASCA. In the future, the firmware design needs to be optimized,
and a system-level test of the readout system will be performed.
References
1. Black, J.K., Baker, R.G., Deines-Jones, P., Hill, J.E., Jahoda, K.: Xray polarimetry
with a micropattern time projection chamber. Nucl. Instrum. Methods Phys. Res.
A 581, 755–760 (2007)
2. Zhang, H.Y., Deng, Z., He, L., Li, H. Feng, H., Liu, Y.N.: CASCA: a readout ASIC
for a TPC based X-ray polarimeter. In: IEEE Nuclear Science Symposium and
Medical Imaging Conference (NSS/MIC), San Diego, CA, pp. 1–4 (2015)
A High-Resolution Clock Phase-Shifter
in a 65 nm CMOS Technology
1 Introduction
With the development of the LHC upgrade program, there is a great demand for
high-speed data transmission systems to transfer data between detectors and
off-detector electronics. The LpGBT has thus been proposed to upload data at up
to 10.24 Gb/s.
According to the design plan [1], shown in Fig. 1, a phase shifter is included in
the chip to provide multiple clock outputs for the LHC front-end electronics. All
of these clocks are synchronized with a reference clock at 40 MHz and must be
phase adjustable. The frequencies of the output clocks are 40, 80, 160, 320, 640
and 1280 MHz, while the shifting resolution is fixed at 48.8 ps.
2 Circuit Design
A straightforward way to generate clock intervals is to use a DLL. However, the
number of delay cells needed at 40 MHz is 512, an unduly long chain that results
in a bulky area, large transient noise and higher power consumption.
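The figure of 512 cells comes directly from the clock period and the target resolution:

```python
f_ref = 40e6                  # reference clock in Hz
period_ps = 1e12 / f_ref      # 25,000 ps period at 40 MHz
resolution_ps = 48.8          # target phase-shifting resolution

cells = period_ps / resolution_ps
print(round(cells))  # 512 delay cells for a single-DLL implementation
```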
In order to obtain a wide tuning range and high resolution, a better method is to
combine a coarse phase-shifter and a fine phase-shifter [2]. The coarse phase-
shifter rotates the clocks over a whole period with a coarse resolution of 781.25 ps,
while the fine phase-shifter interpolates within the 781.25 ps interval down to
48.8 ps. The coarse phase-shifter is a fully digital implementation, while the fine
phase-shifter is fully custom and based on a DLL. In this paper we focus on the
fine phase-shifter. The
structure of the DLL-based phase-shifter is shown in Fig. 2.
The three D flip-flops, driven by a clock at 2.56 GHz, sample the input data (in
this scenario, the required output clocks) and output two clocks with a fixed phase
difference of 781.25 ps.
The lagging clock is directly fed into the phase detector. The leading clock
propagates through a 16-stage VCDL within the DLL (a dummy cell is in fact
added after the last active delay cell), and the output of the final stage is used as
the second input of the phase detector. When the DLL is locked, the two clocks
are in phase and the VCDL outputs 16 clock signals with equally spaced phases.
As the total phase difference is 781.25 ps, the delay of one cell is
781.25/16 = 48.8 ps. By selecting one of the VCDL outputs, the 48.8 ps resolution
is achieved.
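The two-level resolution hierarchy described above can be checked with the same arithmetic (32 coarse steps per 25 ns period is implied by the 781.25 ps coarse step):

```python
period_ps = 25e3                 # 25 ns reference period, in ps
coarse_step = period_ps / 32     # coarse phase-shifter step
fine_step = coarse_step / 16     # one of the 16 equally spaced VCDL taps

print(coarse_step)  # 781.25 ps
print(fine_step)    # 48.828125 ps, quoted in the text as 48.8 ps
```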
The delay cell is made of two current-starved inverters using large nMOS and
pMOS transistors to provide a delay margin of at least 21% in the worst case. The
two-stage output buffer consists of a NAND gate and an inverter. The NAND gate
can be disabled to save power when the circuit is not used. The delay cell is a
symmetrical design: in the VCDL every output node of the current-starved
inverter is identically loaded, because it is always connected to the next inverter
and an output buffer. This design is proven beneficial for reducing duty-cycle
distortion, which is critical in some double-data-rate applications. In comparison,
the delay cell in [2] is also a starved inverter, but it is loaded with transmission
gates after odd-stage delay cells and inverters after even-stage delay cells. No
matter how the transmission gate is optimized, it cannot make the starved
inverter identically loaded in all corners.
The 16:1 MUX consists of 15 identical 2:1 MUXs arranged in four stages as
16:8:4:2:1. Each stage shares a common control signal. The 2:1 MUX is composed
of two inverters and is carefully designed to minimize coupling between the two
branches. Usually a 2:1 MUX needs an inverter as an output buffer so that its
output is not inverted; here, with the compact structure and an even number of
stages, the output of the 16:1 MUX has the same phase as its input.
The structures of the delay cell and the 2:1 MUX are shown in Fig. 3. Figure 4
shows the layout of the full fine phase-shifter. The VCDL and 16:1 MUX lie at the
top, the large block at the bottom right is the low-pass filter (LPF), and the phase
detector and charge pump are located at the bottom left. The total size is
543 µm × 101 µm.
Fig. 3. Structure of the delay cell (left) and 2:1 MUX (right).
3 Results
The post-layout simulations show that the phase delays of two adjacent delay cells
are quite close to 48.8 ps in all simulation cases. Figure 5 depicts the phase
shifting in the nominal corner at the highest and lowest speeds.
The peak-to-peak values of the INL and DNL are 0.1 and 0.06 LSB (48.8 ps)
respectively at 1.28 GHz in the nominal corner, while at 40 MHz the values are
0.06 and 0.05 LSB respectively. The periodic jitter is less than 3 ps at both
1.28 GHz and 40 MHz. The typical power dissipation of the fine phase-shifter at
the lowest and highest frequencies is 1.1 mW and 9.1 mW respectively at a 1.2 V
supply voltage.
4 Conclusions
References
1. P. Moreira On Behalf of the GBT Collaborations: The LpGBT Project Status and Overview.
https://indico.cern.ch/event/468486/contributions/1144369/attachments/1239839/1822836/
aces.2016.03.08.pdf
2. Wu, G., Yu, B., Gui, P., Moreira, P.: Wide-range (25 ns) and high-resolution (48.8 ps) clock
phase shifter. Electron. Lett. 49(10), 642–644 (2013)
Development of Fast Readout System
for Counting-Type SOI Detector ‘CNPIX’
1 Introduction
X-ray diffraction is a useful technique for structural analysis. Recently,
two-dimensional detectors have been used for such measurements. For users'
convenience, the detector should have a large detection area, small pixel size and
a fast frame rate (more than 1 kHz). A photon-counting detector is especially
useful in this measurement because of its superior signal-to-noise ratio. However,
the pixel size of such detectors is rather large (e.g. Medipix3RX has a 55 µm
square pixel [1]), and charge sharing between pixels becomes severe when the
pixel size is reduced. We have therefore started to develop a new photon-counting
SOI detector, 'CNPIX', using Silicon-On-Insulator (SOI) technology [2], which
has a charge-sharing handling circuit while keeping a small pixel size (less than
50 µm pitch). To make this detector practical, it is also important to develop its
Data Acquisition (DAQ) system. In this paper, we present the scheme of the new
DAQ system and the recent status of its development.
The SEABAS board and the PC are connected through Gigabit Ethernet and
communicate with the TCP/UDP protocols. The transferred data are processed by
software running on the PC. On the DAQ mainboard side, the TCP/UDP
protocols are handled by the SiTCP firmware module [4].
Table 1. Difference of specification between the SEABAS2 and the KC705 boards.

                             SEABAS2               KC705
FPGA                         Virtex-5              Kintex-7
  - Slices                   7,200                 50,950
  - Block RAM                2,160 Kb              16,020 Kb
External memory              N/A                   DDR3 SO-DIMM (default: 1 GB)
Connection for SOI detector  IEEE P-1386 CMC,      FMC (VITA 57.1),
                             64 pin                HPC ×1 / LPC ×1
ADC/DAC/NIM                  On board              N/A (can be extended)
Gigabit Ethernet             1                     1
SiTCP                        On board              Can be implemented
                             (additional FPGA)     (mixed load with user circuit)
Figure 2 shows the structure of the new firmware implemented in the KC705.
SiTCP is included as part of the firmware. The DDR_IO module was developed
as a data buffer using the DDR3 memory. As a first version, we developed the
DAQ system without the DDR_IO module; the frame rate of this system is about
200 Hz. A second version, including the DDR_IO module, is now being
developed, with an expected frame rate of more than 1 kHz.
Fig. 3. The result of readout test. (Blue region means Initial value pixels, and Green region
means Invalid value pixels.)
5 Conclusion
A fast readout system for the CNPIX has been developed and tested. We adopted
the KC705 FPGA evaluation board for the new DAQ system instead of the
existing SEABAS DAQ board. The first version of the proposed DAQ system has
been developed and reaches a frame rate of 200 Hz. Although the prototype
CNPIX detector has some bugs, we could confirm the DAQ behavior by checking
the output pattern of the pixel counter.
References
1. Ballabriga, R., et al.: The Medipix3RX: a high resolution, zero dead-time pixel detector
readout chip allowing spectroscopic imaging. JINST 8, C02016 (2013)
2. Arai, Y., et al.: Development of SOI pixel process technology. Nucl. Instrum. Methods Phys.
Res. A 636, S31–36 (2011)
3. KEK Detector Technology Project SOI Pixel Detector R&D, SOI Pixel Detector R&D. http://
rd.kek.jp/project/soi. Accessed 26 June 2017
4. Uchida, T.: Hardware-based TCP processor for gigabit ethernet. IEEE Trans. Nucl. Sci. 55(3),
1631–1637 (2008)
5. Nishimura, R., et al.: DAQ Development for silicon-on-insulator pixel detectors. In:
Proceedings of International Workshop on SOI Pixel Detector (SOIPIX2015) (2015). arXiv:
1507.04946
6. Xilinx Inc.: KC705 evaluation board for the Kintex-7 FPGA user guide. In: Xilinx
Documents UG810 (v1.7). Xilinx (2016)
7. Nishimura, R., et al.: Development of high-speed X-ray imaging system for SOI pixel
detector. In: Proceedings of the 20th International Workshop on Advanced Image Technology
2017 (IWAIT 2017)
CATIROC, a Multichannel Front-End ASIC
to Read Out the 3″ PMTs (SPMT) System
of the JUNO Experiment
Abstract. The ASIC CATIROC (Charge And Time Integrated Read Out Chip)
is a complete read-out chip designed to read arrays of 16 photomultipliers
(PMTs). It finds a valuable application in the context of the JUNO (Jiangmen
Underground Neutrino Observatory) experiment [1], a liquid-scintillator
antineutrino detector with a double calorimetry system combining about 17k 20″
PMTs (the large-PMT system) and around 25k 3″ PMTs (the small-PMT system).
Front-end electronics based on the ASIC CATIROC match the 3″ PMT system
specifications well, as explained in this paper. CATIROC is a SoC (System on
Chip) that processes the analog signals up to digitization, in order to reduce the
cost and the number of cables. The ASIC is composed of 16 independent channels
that work in triggerless mode, auto-triggering on the single photo-electron (PE).
It provides a charge measurement with a resolution of 15 fC and timing
information with a precision of 200 ps rms.
1 Introduction
the LPMTs [2]. The size of these PMTs is chosen to operate in photon-counting mode
for all events inside the detector. The large number of SPMTs increases the density and
the complexity of the detector system. A customized multichannel Application Specific
Integrated Circuit (ASIC) with the integration of all the analog and digital components
into a single chip will simplify the electronic system, decreasing the total power
consumption and increasing the reliability.
The SPMTs are expected to detect about 50 PE per event at 1 MeV, spread over
25,000 PMTs, thus working in the photon-counting regime with a time resolution
within a few ns, typical of PMTs. For the installation, the SPMTs will be connected
in groups of 128 to autonomous front-end electronics located in underwater
boxes near the SPMTs.
This is possible thanks to the integration of 8 multichannel CATIROC ASICs on
each readout board. The CATIROC processes the analog signals up to digitization
and sends out only zero-suppressed digital data to the central processing and
storage unit.
CATIROC has been designed in AMS SiGe 0.35 µm technology and integrates 16
identical channels to provide charge and time information [3].
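The photon-counting claim above can be made concrete with the occupancy implied by the quoted numbers:

```python
pe_per_event = 50   # PE seen by the whole SPMT system for a 1 MeV event
n_spmt = 25000      # number of 3-inch PMTs

occupancy = pe_per_event / n_spmt
print(occupancy)  # 0.002 PE per PMT per event: firmly in photon-counting mode
```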
In each channel a trigger path allows auto-triggering on the single photo-electron,
thanks to a fast shaper (5 ns) followed by a low-offset discriminator whose
threshold is set by an internal 10-bit DAC common to the 16 channels.
A charge path, made of a preamplifier, a slow shaper, a switched-capacitor array
and an internal 10-bit Wilkinson ADC, provides a charge measurement over a
dynamic range from 160 fC (1 PE, assuming a PMT gain of 10⁶) up to 70 pC
(∼400 PE).
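The 160 fC single-PE figure is simply the PMT gain times the elementary charge, and the 70 pC full scale then corresponds to roughly 400 PE:

```python
E_CHARGE = 1.602e-19   # elementary charge in coulombs
gain = 1e6             # PMT gain assumed in the text

q_pe = gain * E_CHARGE           # charge of one photo-electron
print(q_pe * 1e15)               # ~160 fC per PE
print(70e-12 / q_pe)             # ~437 PE full scale, quoted as ~400 PE
```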
The time measurement is obtained by two paths: a "time stamp" performed by a
26-bit counter at 40 MHz, and a "fine time" obtained from a TDC ramp per channel
converted by another 10-bit Wilkinson ADC.
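The way the two paths combine into a single timestamp can be sketched as follows; this illustrates the scheme described above, not CATIROC's actual firmware (the 27 ps LSB is the value measured in Sect. 3):

```python
# Illustrative combination of the two CATIROC time paths: a 26-bit coarse
# counter clocked at 40 MHz plus a per-channel fine TDC.
COARSE_PERIOD_NS = 25.0   # 1 / 40 MHz
TDC_LSB_NS = 0.027        # ~27 ps per TDC unit (measured in Sect. 3)

def event_time_ns(coarse_count, fine_tdc):
    """Combine the coarse time stamp and the fine TDC code into nanoseconds."""
    coarse_count &= (1 << 26) - 1   # the counter is 26 bits wide (~1.7 s wrap)
    return coarse_count * COARSE_PERIOD_NS + fine_tdc * TDC_LSB_NS

print(event_time_ns(1000, 512))  # 25 us of coarse time plus ~13.8 ns fine time
```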
3 Measurements
CATIROC has been tested in the laboratory with a dedicated test board [3] and the
measurements have been performed with a pulse generator to simulate a PMT input
charge. Preliminary measurements with a 3″ PMT have been performed.
A first figure of merit is the detection efficiency for the single-PE signal. It is measured
by scanning the discriminator threshold at a given injected charge (up to 2 PE = 320 fC)
and monitoring the discriminator response. Figure 1 (left) shows the trigger efficiency
for given injected charges as a function of the threshold. The 50% trigger efficiency as a
function of the input charge is shown in the same figure (right panel). These
measurements show a trigger-channel sensitivity of about 100 DACu/PE (i.e., a DAC
resolution of 0.6 DACu/fC) and a noise (σ) of 3.5 DACu (5.6 fC) from a fit to the
pedestal curve (black curve in Fig. 1, left). The minimum threshold can be calculated
as the baseline mean value plus 5σ of noise (Fig. 1, right). Given the measured noise
of 3.5 DACu, there is a very comfortable range in which to set the auto-trigger
threshold at a fraction of a photo-electron (usually 1/4 to 1/3).
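The threshold bookkeeping above can be reproduced in a few lines; the numbers are the measured values quoted in the text and the function name is ours:

```python
# Minimum-threshold estimate for auto-triggering, using the measured values.
SENSITIVITY_DACU_PER_PE = 100.0  # trigger sensitivity
NOISE_SIGMA_DACU = 3.5           # pedestal noise (sigma)

def min_threshold_dacu(baseline_dacu, n_sigma=5):
    """Minimum usable threshold: baseline mean plus n_sigma of noise."""
    return baseline_dacu + n_sigma * NOISE_SIGMA_DACU

# The 5-sigma point sits this far above baseline, expressed in PE:
margin_pe = 5 * NOISE_SIGMA_DACU / SENSITIVITY_DACU_PER_PE
print(margin_pe)  # 0.175 PE, comfortably below a 1/4 PE threshold
```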
170 S. Conforti et al.
Fig. 1. Trigger efficiency at a given injected charge as a function of the threshold (left) and the
50% trigger efficiency as a function of the input charge up to 2 PE (right).
The SPMT system will typically collect single PEs up to a few PEs, so a good
charge resolution for single-PE detection is mandatory. The charge distributions for
various input signals (from 1 to 10 PE equivalent) are shown in Fig. 2 (left). The
sensitivity is 16 ADCu/PE and the noise is nicely Gaussian, with an RMS of 1.5 ADCu
(15 fC).
Fig. 2. Charge measurements with the full chain for different input charges (160 fC to 1.6 pC)
(left). TDC ramp reconstruction for one channel by a pulse generator signal delayed by step of
100 ps (right).
Another crucial feature is the time resolution of the ASIC, which is required to be
smaller than 1 ns so that the electronics resolution is negligible compared to that of
the PMTs. The ASIC provides the signal "time of arrival" operating in self-triggered
mode. The TDC ramp has been reconstructed and is displayed in Fig. 2 (right).
A periodic pulse signal is injected into one channel and delayed in steps of 100 ps. The
linear fit provides the slope, which gives an LSB (or TDCu) of LSB = 1/slope = 27
ps/TDCu. The residuals are within ±15 TDCu, corresponding to ±450 ps, with an
RMS value of 167 ps. The residuals exhibit a periodic shape due to a coupling of the
160 MHz clock, most likely through the substrate.
CATIROC, a Multichannel Front-End ASIC to Read Out the 3″ PMTs 171
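The LSB extraction described above is an ordinary linear fit; a sketch on synthetic data standing in for the measured ramp (the 27 ps/TDCu value is from the text, the noise level is illustrative):

```python
# Sketch of the TDC-ramp calibration: fit TDC code vs. injected delay,
# then LSB = 1/slope.
import numpy as np

np.random.seed(0)
true_lsb_ps = 27.0                          # illustrative, matches the text
delays_ps = np.arange(0.0, 5000.0, 100.0)   # pulse delayed in 100 ps steps
codes = delays_ps / true_lsb_ps + np.random.normal(0.0, 0.5, delays_ps.size)

slope, intercept = np.polyfit(delays_ps, codes, 1)   # slope in codes per ps
lsb_ps = 1.0 / slope
residuals_ps = (codes - (slope * delays_ps + intercept)) * lsb_ps
print(lsb_ps)              # ~27 ps/TDCu
print(residuals_ps.std())  # residual RMS, below one LSB for this noise level
```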
The performance of CATIROC has also been evaluated, in a very preliminary test, with
signals from the PMT planned to be used for the JUNO SPMT system (3″ HZC PMT).
The ASIC test board is connected to the PMT placed in a light-tight box. The dedicated
SPMT acquisition code developed for JUNO is used to collect the CATIROC output
data (Fig. 3 left). An example of the signal spectrum measured by the ASIC for two
channels is shown in Fig. 3 right, with a clear evidence of the single-photoelectron
peak. The PMT high voltage is set to yield a gain of ~10^6. This measurement indicates
that the ASIC has the ability to detect a single photoelectron with a charge resolution of
30%. The two channel distributions are very similar. The wiggles observed in the curves
are due to a digitization artefact caused by the ASIC clock coupling. Their impact on the
determination of the single-PE position and resolution has been evaluated with Monte
Carlo simulations and found to be negligible for JUNO. Further testing is ongoing for
full characterization of the performance of the CATIROC with JUNO PMT’s.
Fig. 3. Test bench for PMT measurements. The PMT is placed in a dark box, connected to a
high-voltage supply and to the CATIROC ASIC (left). Single photoelectron spectrum, in dark-
noise configuration, measured by the ASIC (right)
4 Conclusion
The JUNO experiment is the largest liquid scintillator antineutrino detector ever built,
currently under construction in the south of China. A double calorimetry system will be
used for the first time, combining about 17k 20″ PMTs (large-PMT system) and
around 25k 3″ PMTs (small-PMT system). The CATIROC ASIC has been tested and
evaluated against the requirements of the JUNO SPMT system. The results show that
CATIROC fulfils the JUNO requirements. Preliminary tests with a JUNO PMT have
been performed to measure the single-PE with CATIROC. In the near future,
CATIROC will be installed on the first prototype of the front-end board to study the
performances in real conditions.
References
1. An, F., et al. (JUNO Collaboration): Neutrino physics with JUNO. J. Phys. G 43(3), 030401
(2016). https://doi.org/10.1088/0954-3899/43/3/030401
2. Miao, H.E.: Double calorimetry system in JUNO experiment. In: The Technology and
Instrumentation in Particle Physics 2017 (TIPP2017) conference (2017)
3. Conforti, S., et al.: Performance of CATIROC: ASIC for smart readout of large
photomultiplier arrays. JINST 12, C03041 (2017)
First Prototype of the Muon Frontend Control
Electronics for the LHCb Upgrade: Hardware
Realization and Test
Abstract. The muon detector plays a key role in the trigger of the LHCb
experiment at CERN. The upgrade of its electronics is required in order to be
compliant with the new 40 MHz readout system, designed to cope with future
LHC runs at between five and ten times the initial design luminosity. The
Service Board System upgrade aims to replace the system in charge of
monitoring and tuning the 120,000 readout channels of the muon chambers.
The aim is to provide a more reliable, flexible and fast means of control,
migrating from the current distributed local control to a centralized
architecture based on a custom high-speed serial link and a remote software
controller. In this paper, we present in detail the new Service Board System
hardware prototypes, from the initial architectural description to board
connections, highlighting the main functionalities of the designed devices
together with preliminary test results.
To comply with the new common LHCb readout architecture, this infrastructure was
redesigned at system level while the on-detector electronics was left unchanged. Taking
advantage of the high-speed serial link specifically designed for operation under
radiation, a centralized remote server has full control of the CARDIAC chipset. This
significantly increases the flexibility and reliability of the system as well as the
communication speed. The integration of the GBT link in the current Service Board
facility, next to the muon stations, requires the hardware development of the new
Service Board System (nSBS) architecture [7]. This paper presents the new Pulse
Distribution Module (nPDM),
the new Service Board (nSB) and the new Custom Back-plane (nCB) first prototypes
release (rev.01), focusing on the most important sections of the design process with
additional early test results.
Fig. 1. Quick physical and functional overview of the new Service Board System.
Fig. 2. Screenshot of two successful I2C transactions (W and R) between a commercial I2C
driver (SCL/SDA) and the muon on-detector electronics custom serial link (SCNx/SDNx/
SDBn).
This very early series of tests also demonstrates that the developed digital translator
speeds up the data exchange from the current 100 kbps to 1 Mbps, reducing the access
time to the DIALOG registers by a factor of 10. These translators have separate FSMs
that run independently and act on the fly. Distributing the data load of one e-link over
12 drivers allows the GBT link to control all the serial channels of the new Service
Board simultaneously.
4 Conclusions
4.1 Summary
The increased flexibility and independence of every single muon control channel,
associated with the higher data transfer rate, allows the development of new algorithms
for fine noise measurements at any moment, without changing any part of the new
Service Board crate. Once the whole facility is ready and equipped, more tests and
a full characterization will take place. As far as the design of the new Service Board
System prototype is concerned, these results are a proof of concept that motivated
the production of a limited number of preliminary boards (see Figs. 4 and 5).
Fig. 4. The new Pulse Distribution Module (nPDM) prototype rev.01, produced in March 2017
Fig. 5. The new Service Board (nSB) prototype rev.01, produced in July 2016
Acknowledgements. The authors express a special thanks to the Electronics Laboratory staff of
the INFN - Rome department for their help and support during these months of test and
development. To Riccardo Lunadei for his support during the PCB development and to Daniele
Ruggieri for his support during the rework of prototypes. To Manlio Capodiferro for his support
during the very first test setup installation and to Fabrizio Ameli for coordinating all the tasks
carried out by the Laboratory staff as well as for supporting the whole PCB development with his
specific experience in high-speed signal design.
References
1. Wyllie, K., et al.: Electronics architecture of the LHCb upgrade. LHCb Technical Note,
LHCb-PUB-2011-011 (2011)
2. Alessio, F. et al.: Readout control specifications for the Front-End and Back-End of the
LHCb upgrade. LHCb Technical Note, LHCb-INT-2012-018 (2012)
3. Moreira, P. et al.: The GBT project. In: Topical Workshop On Electronics For Particle
Physics, pp. 342–346 (CERN-2009-006), Paris (2009)
4. Cadeddu, S., et al.: The DIALOG chip in the Front-End electronics of the LHCb muon
Detector. IEEE Trans. Nucl. Sci. 52(6), 2726–2732 (2005)
5. Moraes, D., et al.: CARIOCA - a 0.25 µm CMOS fast binary front-end for sensor
interface using a novel current-mode feedback technique. In: IEEE International Symposium
on Circuits and Systems (2001)
6. Bocci, V., et al.: The muon Front-End control electronics of the LHCb experiment. IEEE
Trans. Nucl. Sci. 57(6), 3807–3814 (2010)
7. Bocci, V.: Architecture of the LHCb muon Front-End control system upgrade. In: IEEE
Nuclear Science Symposium, San Diego (2015)
8. TR0020: SmartFusion2 and IGLOO2 Neutron Single Event Effects (SEE) Test Report.
http://www.microsemi.com/document-portal/doc_download/135249-tr0020-smartfusion2-
and-igloo2-neutron-single-event-effects-see-test-report. Accessed 27 Apr 2016
9. Caratelli, A., et al.: The GBT-SCA, a radiation tolerant ASIC for detector. In: Topical
Workshop on Electronics for Particle Physics, Aix En Provence (2014)
10. Bonacini, S., et al.: E-link: A radiation-hard low-power electrical link for chip-to-chip
communication. In: Topical Workshop on Electronics for Particle Physics, Paris (2009)
High-Speed/Radiation-Hard Optical Engine
for HL-LHC
1 Introduction
A parallel optical engine is a compact device for high-speed data transmission. The
compact design is enabled by readily available commercial high-speed VCSEL arrays.
These modern VCSELs are humidity tolerant and hence no hermetic packaging is
needed¹. With the use of a 12-channel array operating at 10 Gb/s per channel, a parallel
optical engine can deliver an aggregate bandwidth of 120 Gb/s. With a standard
spacing of 250 µm between two adjacent VCSELs, the width of a 12-channel array is
only slightly over 3 mm. This allows the fabrication of a rather compact parallel optical
engine for installation in locations where space is at a premium. The use of a fiber
ribbon also reduces the number of fibers to handle and moreover a fiber ribbon is less
fragile than a single-channel fiber. These advantages greatly simplify the production,
testing, and installation of optical links.
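The aggregate figures quoted above follow from simple arithmetic on the array geometry; a minimal sketch using the per-channel rate and pitch from the text:

```python
# Aggregate bandwidth and approximate width of a 12-channel VCSEL array.
CHANNELS = 12
RATE_PER_CHANNEL_GBPS = 10.0
VCSEL_PITCH_UM = 250.0

aggregate_gbps = CHANNELS * RATE_PER_CHANNEL_GBPS
array_width_mm = CHANNELS * VCSEL_PITCH_UM / 1000.0
print(aggregate_gbps)   # 120.0 Gb/s
print(array_width_mm)   # 3.0 mm ("slightly over 3 mm" once die margins are added)
```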
VCSEL arrays are widely used in off-detector data transmission in high-energy
physics [1]. The first implementation [2] of VCSEL arrays for on-detector application
is in the optical links of the ATLAS pixel detector. The experience from the operation
of this first generation of array-based links has been quite positive. The ATLAS
experiment therefore continued to use VCSEL arrays in the second-generation optical
¹ See for example, http://www.photonics.philips.com/.
links [3] for a new layer of the pixel detector, the insertable barrel layer (IBL), installed
in early 2014 during the long shutdown (LS1) to prepare the Large Hadron Collider
(LHC) for collisions at the center-of-mass energy of 13 TeV. In addition, ATLAS also
decided to move the optical links of the original pixel detector to a more accessible
location. The replacement optical links are also array based.
Based on this extensive and positive experience, it is logical for the ATLAS pixel
detector of the high-luminosity LHC (HL-LHC) to continue to use optical links based
on the opto-board (optical engine) concept. In these proceedings, we report the result of
an R&D project on the next generation optical engine operating at high speed.
The opto-board is a miniature printed circuit board (PCB), as shown in Fig. 1. A VCSEL
array driver ASIC is mounted on the opto-board next to an opto-pack that houses a
VCSEL array. This keeps the length of the wire bonds between the ASIC and the
VCSEL array to a minimum to diminish the parasitic capacitance and inductance of the
wire bonds. This allows the ASIC to drive the VCSELs at high speed. The PCB has a
thick copper back plane (1.0 mm) for thermal management. An MTP² barrel attached
to an aluminum brace is secured to the opto-board via a screw. A fiber ribbon termi-
nated with a MTP connector can be inserted into the MTP barrel to receive the optical
signal from the VCSEL array. An electrical connector³ is attached to the PCB to
transmit high-speed data from a pixel module to the VCSEL array driver ASIC. The
high-speed electrical signals from the connector to the ASIC are transmitted using
controlled impedance differential pair transmission lines on the PCB.
Fig. 1. (a) Schematic drawing of an opto-board together with a MTP barrel fastened to the opto-
board for the insertion of a fiber ribbon terminated with MTP connector to receive the optical
signal from the VCSEL array. (b) A three-dimensional rendition of the setup.
² MTP connector, US Conec Ltd.
³ LSHM connector, Samtec Inc.
182 K. K. Gan et al.
The VCSEL array driver ASIC was developed under the US Collider Detector R&D
(CDRD) program of DOE. We have prototyped the ASIC in two runs, both in
4-channel versions, using the 65 nm CMOS process of TSMC⁴. We only use the core
transistors of the process in order to achieve maximum radiation tolerance. Both ASICs
include an 8-bit DAC to set the VCSEL modulation and bias currents. The DAC
settings are stored in SEU (single event upset) tolerant registers. Several improvements
were implemented in the second prototype ASIC:
• Eliminated all external biases. All biases are now programmable via DACs. The
bias that is distributed across the ASIC is set via a current and then is converted into
a voltage at the point of use. This allows faster recovery from the signal switching
as there is no large RC constant between the generator of the bias voltage and the
point of use as in the previous scheme.
• Added more on-chip decoupling capacitance, ~200 pF for the whole ASIC.
• Eliminated the output feedback amplifier used to set the output level. One of the
modes in the circuit of the first prototype ASIC had large impedance and was
virtually an open circuit instead of driving the bias voltage, causing large jitter.
• Added pre-emphasis to the signal at the output of the receiver that received the
electrical input signals. The location of the added pre-emphasis is programmable via
a delay line.
• Added pre-emphasis and feed through capacitors on the driver block of the ASIC to
increase the speed, thereby improving the timing/amplitude control.
• All power lines are tied together with a better power plane. In the first prototype
ASIC, each channel has two power pads. In the new ASIC, there are a total of 13
power pads, including some large pads for multiple wire bonds.
In the first prototype ASIC [4], all four channels are operational and the bit error rate is
less than 1.3 × 10^-15 with all channels active using pseudo-random bit strings (PRBS)
as input. Figure 2a shows the optical eye diagram at 10 Gb/s. The eye is open but
improvements are needed to reduce the jitter.
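For context, an upper limit like "BER < 1.3 × 10^-15" is what an error-free run of a given length supports statistically; a sketch of the standard zero-error Poisson estimate (the 95% confidence level and the run-length arithmetic are ours, not stated in the text):

```python
# After an error-free run of N bits, the Poisson upper limit on the bit error
# rate at confidence level cl is -ln(1 - cl) / N, i.e. about 3/N at 95% C.L.
import math

def ber_upper_limit(n_bits, cl=0.95):
    """Upper limit on the BER after observing zero errors in n_bits."""
    return -math.log(1.0 - cl) / n_bits

# Bits (and time at 4 x 10 Gb/s) needed to support BER < 1.3e-15 at 95% C.L.:
n_needed = -math.log(0.05) / 1.3e-15
print(n_needed)           # ~2.3e15 bits
print(n_needed / 40e9)    # ~5.8e4 s, roughly 16 hours of error-free running
```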
The expected radiation level for the optical links depends on the location. For
example, if the opto-boards are installed near the outer radius of the endcaps of the
silicon tracker ("ID endplates"), the ionizing dose is 10.2 Mrad and the non-ionizing
fluence is 5.2 × 10^14 1-MeV neq/cm². In October 2015, we irradiated eight opto-boards
with prototype ASICs using 24 GeV/c protons at the CERN PS irradiation facility. In
four opto-boards, each ASIC drove a resistive load while in the other four opto-boards,
each ASIC drove a VCSEL array⁵. The opto-boards with VCSEL arrays attached were
⁴ Taiwan Semiconductor Manufacturing Company, Limited.
⁵ The VCSEL array used is V850-2174-002, fabricated by Finisar Corporation.
Fig. 2. Optical eye diagram of a VCSEL operating at 10 Gb/s before (a) and after
(b) irradiation.
Fig. 3. Optical eye diagram of a VCSEL operating at 5 Gb/s before (a) and after (b) irradiation.
The second prototype ASIC is much easier to tune for operation at 10 Gb/s because of
the various improvements listed in Sect. 4. The supply voltage of the ASIC is 1.2 V
and the current consumption is 150 mA with all four channels operating at 10 Gb/s.
The common cathode voltage is set at −1.3 V in order to provide enough headroom to
drive the VCSEL. The current consumption of the common cathode voltage is 25 mA.
All channels have excellent coupled optical power, higher than 2 mW. The optical eye
diagram is shown in Fig. 4a for 10 Gb/s operation. In comparison with Fig. 2a, the eye
is more open but there is significant jitter, which is being investigated. The BER is
< 5 × 10^-14 on all channels with every channel active. Figure 4b shows the optical eye
diagram operating at 5 Gb/s, the target data transmission speed of the ATLAS pixel
detector at HL-LHC. The eye is wide open, indicating satisfactory performance.
Fig. 4. Optical eye diagram of a VCSEL in the second prototype ASIC operating at 10 (a) and 5
(b) Gb/s.
6 Conclusions
We have designed and fabricated a new opto-board including an array driver ASIC and
optical packaging to allow 10 Gb/s optical data transmission. The ASIC can operate at
10 Gb/s after irradiation (>10 Mrad). The plan is to further improve the design of the
ASIC for application in the ATLAS pixel detector at HL-LHC.
Acknowledgments. The authors are indebted to Maurice Glaser/Federico Ravotti for their help
in using the irradiation facility at CERN. This work was supported in part by the U.S. DOE under
contract Nos. DE-SC0011726 and DE-FG-02-91ER-40690, by the NSF under Grant Number
1338024, and by the German BMBF under contract No. 056Si74.
References
1. Aad, G., et al.: The ATLAS experiment at the CERN Large Hadron Collider. JINST 3,
S08003 (2008)
2. Arms, K., et al.: ATLAS pixel opto-electronics. Nucl. Instrum. Methods A 554, 458 (2005)
3. Gan, K.K., et al.: Design, production, and reliability of the new ATLAS pixel opto-boards.
JINST 10, C02018 (2015)
4. Gan, K.K.: Radiation-hard/high-speed parallel optical links. Nucl. Instrum. Methods A 831,
246 (2016)
5. Van Ginneken, A.: Nonionizing energy deposition in silicon for radiation damage studies.
FERMILAB-FN-0522 (1989)
6. Chilingarov, A., Meyer, J.S., Sloan, T.: Radiation damage due to NIEL in GaAs particle
detectors. Nucl. Instrum. Meth. A 395, 35 (1997)
The Global Control Unit for the JUNO
Front-End Electronics
The purpose of JUNO [1] is to determine the neutrino mass hierarchy and pre-
cisely measure oscillation parameters by detecting reactor neutrinos, supernova
neutrinos as well as atmospheric, solar neutrinos and geo-neutrinos. The data
readout architecture and the desired resolution better than 0.1 photoelectron
(pe) in the low energy signal range (1 pe to 100 pe) are a challenge of primary
importance for the success of the experiment [2]. The baseline structure of the
data readout architecture states that each of the 20000 PMTs embeds the High
Voltage (HV) unit together with the readout electronics, in a standalone manner,
inside a water-tight box communicating with the external world by means of a
100 m long Ethernet cable [3]. The intelligent PMT (iPMT) concept guarantees
the best performance, reduces the cost of cabling and waterproof connectors, lowers
the data readout throughput, facilitates local data storage in case of a supernova,
and is a very modular and flexible solution, since complex calibration and
synchronization tasks can be done at PMT level before event readout. The GCU is
the brain of these smart PMTs thanks to its on-board FPGA, whose capabilities are
exploited for fast data buffering and processing as well as for peripheral control,
bridging the Ethernet network to several different buses (SPI, I2C, UART) linked
respectively to a Time Delay Counter, a Clock Data Recovery chip, temperature
sensors and the HV unit controller. The inaccessibility of the front-end after
installation highlights the importance of designing high-reliability hardware and of
adopting strategies for recovering from stalls due to firmware bugs or firmware
corrupted during the reprogramming phase itself.
A small Spartan-6 and a virtual JTAG cable over IPbus [4] make it possible to
remotely reconfigure the main FPGA, a Kintex-7. The GCU hosts an ASIC that
digitizes the signal received from the PMT; the GCU must be able to issue trigger-
primitive requests, store data while waiting for trigger validation and, upon receiving
a validation, send the event fragments to the remote event builder unit via the
Ethernet link. The worst-case scenario in terms of event
readout bandwidth requirement comes from triggers due to dark current. Assuming a
dark-noise event rate of 1 kHz and 1000 samples per event, the readout throughput is
about 16 Mb/s, well within the range of Fast Ethernet. During the first instants after a
supernova explosion, the event rate may burst to about 1 MHz, and the GCU then
switches to auto-trigger mode. In this operation mode each interesting event is
packaged and stored in the on-board 2 GB DDR3 memory. The adoption of the Fast
Ethernet standard
for data readout and slow control opens the possibility to use the two unused
twisted pairs of the cat-5e cable as synchronous and fixed latency links. One
of these two links is used to communicate with the Central Trigger Processor
(CTP) that collects trigger primitives generated by GCUs. The second is used
to distribute the 62.5 MHz system clock to the 20000 channels; each GCU’s local
time must match the global time to within 16 ns, i.e. one clock period. The problem
of GCU synchronization is related to the distributed nature of data readout.
Figure 1 shows a block diagram summarizing the main features integrated in the
custom hardware platform.
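The dark-noise bandwidth estimate given above can be reproduced directly; the 16 bits per stored sample is our assumption, chosen because it is consistent with the quoted 16 Mb/s (the actual packet format is not specified here):

```python
# Back-of-envelope check of the dark-noise readout bandwidth quoted above.
EVENT_RATE_HZ = 1000       # dark-noise trigger rate
SAMPLES_PER_EVENT = 1000
BITS_PER_SAMPLE = 16       # assumed sample width

throughput_mbps = EVENT_RATE_HZ * SAMPLES_PER_EVENT * BITS_PER_SAMPLE / 1e6
print(throughput_mbps)  # 16.0 Mb/s, well within Fast Ethernet (100 Mb/s)
```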
IPbus has proven to be an ideal solution for the control and monitoring of system
parameters, and the readout speed for large blocks of events reaches 90 Mbps,
almost saturating the Fast Ethernet bandwidth. The system clock is recovered
by the on-board CDR thanks to the DC-balanced encoding adopted to transmit
the trigger information over the synchronous links. The maximum throughput
observed with the 16 Gb TwinDie DDR3 memory is 1333 megatransfers per second,
in agreement with the Kintex-7 memory controller specifications. The GCU
power consumption, upon loading a very basic firmware in the FPGA, is about
10 W, digitizer ASIC excluded. There are still improvements to be made, especially
concerning the power consumption and the reliability. The main ongoing activity
focuses on the remote reprogramming and synchronization between channels.
The JUNO collaboration is also evaluating new possible readout schemes in
which one GCU serves more channels. The first run of the experiment is foreseen
for 2020.
References
1. http://juno.ihep.cas.cn/. Accessed 26 June 2017
2. An, F., et al.: Neutrino Physics with JUNO. J. Phys. G 43, 030401 (2016)
3. Bellato, M., et al.: JUNO proposal for PMT readout - GCU, Padova, December
2015. Internal Note
4. Frazier, R., et al.: The IPbus Protocol. An IP-based control protocol for
ATCA/uTCA. Version 2.0, 22 March 2013. https://svnweb.cern.ch/trac/cactus.
Accessed 15 June 2017
Design of a Data Acquisition Module Based
on PXI for Waveform Digitization
Zhe Cao¹,², Jiadong Hu¹,², Cheng Li¹,², Siyuan Ma¹,², Shubin Liu¹,², and Qi An¹,²
¹ State Key Laboratory of Particle Detection and Electronics,
University of Science and Technology of China, Hefei 230026, China
anqi@ustc.edu.cn
² Department of Modern Physics, University of Science and Technology
of China, Hefei 230026, China
Abstract. Waveform digitization is becoming increasingly popular for readout
electronics in particle and nuclear physics experiments. A data acquisition
module for waveform digitization is investigated in this paper. The module is
designed for a 3U PXI (PCI eXtensions for Instrumentation) shelf and can
manage two channels of waveform digitization for detector signals. It is
equipped with a two-channel analog-to-digital converter (ADC) with 12-bit
resolution and a sampling rate of up to 1.8 gigasamples per second (GSPS), and
a field-programmable gate array (FPGA) for control and data buffering.
Meanwhile, a complex programmable logic device (CPLD) is employed to
implement the PXI interface communication via the PXI bus. The performance
of this module was tested. The results show that it can be successfully used in
particle and nuclear physics experiments.
1 Introduction
In the readout electronics of modern particle and nuclear physics experiments,
waveform digitization is becoming more and more popular. In the traditional
approach, the particle signal is measured by time-to-digital and voltage-to-digital
conversion following a preamplifier, charge-integration and shaping circuit. Compared
with the traditional method, waveform digitization, which digitizes the entire waveform
directly, can significantly reduce the circuit complexity. Not only can the amplitude
and arrival time of the detector signal be acquired, but waveform recognition and
signal screening of the particle event can also be performed by processing the discrete
digital sequence of the original analog waveform [1].
There are two major methods to achieve waveform digitization: a fast, high-
resolution ADC, or a switched capacitor array (SCA). In the SCA method, an
application-specific integrated circuit (ASIC) samples and stores the waveform at
ultra-high speed, and then an ADC with a low sampling rate digitizes the storage
cells [2]. Fast sampling followed by slow digitization makes dead time unavoidable.
Thanks to the development of ADCs, devices with both high resolution and high
sampling rate have become a reality: pipeline and folding-interpolating techniques can
achieve 12 bits at GHz-level sampling rates. The benefit of this method is that the
circuit is extremely simple and, in theory, has no dead time.
In this paper, a waveform digitization data acquisition module is described. The
digitizer provides 12 bits at up to 1.8 GSPS, which covers the requirements of as
many nuclear and high-energy physics experiments as possible. With a view to broad
application, the module integrates a highly reliable interface based on the PXI
architecture.
In order to evaluate the performance of the module, a series of tests was carried out.
The test bench consisted of an oscilloscope, a vector signal generator and bandpass
filters. The module was plugged into the PXI crate and the analysis software ran in
the crate controller.
Because of the differential input, a balun module was developed to convert the
single-ended signal to differential. Two baluns are employed: a conventional type with
a bandwidth from 0.4 to 450 MHz for the lower-frequency tests, and a transmission-
line type with a bandwidth from 30 to 1800 MHz for the higher-frequency tests.
Fig. 2. The bandwidth of two channels, red line shows channel 1 and blue line shows channel 2
For input frequencies from 2.4 MHz to 800 MHz, the ENOB was measured, as shown
in Fig. 3. The test results indicate that the ENOB of both channels is above 8 bits for
an input signal of up to 350 MHz.
Fig. 3. The ENOB of two channels, red line shows channel 1 and blue line shows channel 2
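For reference, ENOB is conventionally derived from the measured SINAD, as defined in IEEE Std 1241 [5]; a minimal sketch:

```python
# ENOB from the measured SINAD, per the usual ADC convention:
# ENOB = (SINAD_dB - 1.76) / 6.02.
def enob(sinad_db):
    """Effective number of bits for a given SINAD in dB."""
    return (sinad_db - 1.76) / 6.02

# The 8-bit ENOB measured here up to 350 MHz corresponds to a SINAD of:
sinad_for_8_bits = 8 * 6.02 + 1.76
print(sinad_for_8_bits)        # ~49.92 dB
print(enob(sinad_for_8_bits))  # ~8.0
```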
4 Conclusion
This paper presents a data acquisition module based on PXI for waveform digitization.
This dual-channel module has a sampling rate of 1.8 GSPS. Systematic measurements
reveal that the bandwidth of the module is about 2 GHz and the ENOB is above
8 bits for an input signal of up to 350 MHz. The module also has a compatible design,
including a trigger-signal receiver, a global clock receiver and the PXI interface. As a
result, it can be considered a prototype of waveform-digitization readout electronics
for particle and nuclear physics experiments.
Acknowledgments. This project is supported by the Young Fund Projects of the National
Natural Science Foundation of China (Grant No. 11505182). It is also supported by the Young
Fund Projects of the Anhui Provincial Natural Science Foundation (Grant No. 1608085QA21).
194 Z. Cao et al.
References
1. Esposito, B., Kaschuck, Y., Rizzo, A., et al.: Digital pulse shape discrimination in organic
scintillators for fusion applications. Nucl. Instrum. Methods Phys. Res. 518(1–2), 626–628
(2004)
2. Kleinfelder, S.A.: Development of a switched capacitor based multi-channel transient
waveform recording integrated circuit. IEEE Trans. Nucl. Sci. 35(1), 151–154 (1988)
3. http://www.ti.com.cn/product/cn/ADC12D1800/datasheet
4. PXI Hardware Specification Revision 2.2, PXI Systems Alliance (2004)
5. IEEE Standard for Terminology and Test Methods for Analog-to-Digital Converters, IEEE
Standard 1241-2010 (2011)
Readout Electronics for the TPC Detector
in the MPD/NICA Project
1 Introduction
The new research complex NICA, aimed at studying hot and dense baryonic matter, is
under active construction at JINR (Dubna) [1]. It will operate with heavy-ion collisions
(up to Au) at centre-of-mass energies of 11 GeV per nucleon. The MPD detector will
operate at one of the two interaction points of the collider [2, 3].
The Time Projection Chamber (TPC) is the main tracker of MPD. It has been
designed for tracking and identification of charged particles [4–7].
The TPC detector produces the most complex events among the MPD subdetectors.
The NICA collider will provide a trigger rate of up to 7 kHz, with a mean multiplicity
of 300 tracks and a maximum multiplicity of up to 1000 tracks per event. The data
readout system has to receive the data from all 95 232 detector pads and then transmit
them to the MPD DAQ.
The TPC data is zero dominated. For this reason, zero-suppression modes should be
implemented in the TPC front-end electronics (FEE). The mean data stream from the
TPC detector is expected to be about 7 Gbps (with zero suppression).
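The quoted trigger rate and mean data stream imply an average event size; a back-of-envelope sketch (pairing the mean stream with the maximum trigger rate is our simplification):

```python
# Average event size implied by the quoted mean data stream and trigger rate.
MEAN_STREAM_GBPS = 7.0     # zero-suppressed mean TPC data stream
TRIGGER_RATE_KHZ = 7.0     # maximum trigger rate

bits_per_event = MEAN_STREAM_GBPS * 1e9 / (TRIGGER_RATE_KHZ * 1e3)
print(bits_per_event / 8 / 1e3)  # 125.0 kB per event on average
```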
Another significant requirement for the data readout system is end-cap transparency.
The TPC electronics will be located at the end-caps of the TPC barrel, where it
shadows other MPD subdetectors. For this reason the FEE should be as small as
possible, and it should dissipate as little power as possible to simplify the cooling
system.
The TPC data acquisition system consists of 24 identical parts. Each part serves one readout chamber, and the electronics of each chamber is an independent system. There are two main building blocks in the data readout system: the Front-End Cards (FECs) and the Readout Control Units (RCUs). The whole data readout system contains 95 232 registration channels, 1488 64-channel FECs, and 24 RCUs.
At the moment we are considering two versions of the TPC FEE. The first is based on the PASA and ALTRO chip set [8, 9]. These two ASICs were designed for the ALICE TPC; they also suit our design well and allow us to meet all requirements except end-cap transparency. This system has already been designed, and its functionality is sufficient for the startup period of operation. The second version of the TPC FEE is based on the new SAMPA ASIC [10, 11]. The SAMPA chip has several advantages over the PASA and ALTRO chip set, namely dual input signal polarity, switchable preamplifier gain and peaking time, and a continuous readout mode. The FEE design based on the SAMPA chip is now under development.
A block diagram of the readout system for one chamber is shown in Fig. 1. The FEC64S is an advanced version of the prototype card FEC64 [4–7, 12, 13]. The card is based on 4 PASA and 4 ALTRO chips, which provide 64 independent registration channels in total. A view of the FEC64S is shown in Fig. 2, and its main parameters are summarized in Table 1.
Fig. 2. Top view of the FEC64S: (1) PASA chips – low-noise amplification of the signal; (2) ALTRO chips – digitization and signal processing; (3) ALTERA FPGA; (4) Serializer/Deserializer chip.
The PASA chip receives analogue signals through a short kapton cable directly from the detector pads. It amplifies the detector signals with a conversion gain of 12 mV/fC over an input charge range of up to 150 fC. Each of the 64 PASA channels has a single-ended input and a differential output, which is connected directly to the ALTRO chip inputs.
After the analogue-to-digital conversion in the ALTRO, the signal processing is performed in five steps: (a) the first correction and subtraction of the signal baseline, (b) the cancellation of long-term components of the signal tail, (c) the second baseline
correction, (d) the suppression of the samples that contain no useful information (zero
suppression), and (e) formatting.
Another essential part of the card is the ALTERA FPGA. Its main function is multiplexing the parallel ALTRO words from all 64 independent channels into one stream. The Serializer/Deserializer chip TLK2711 was chosen as the readout interface; it provides bidirectional access to the ALTRO and the FPGA at speeds of up to 2.5 Gbps. The FPGA implements the connection between the 40-bit ALTRO bus and the 16-bit parallel interface of the Ser/Des chip and vice versa.
The multiplexing of data on the card is carried out in four steps. First, the 16 channels of each ALTRO are multiplexed inside the chip. The parallel outputs of pairs of chips are then multiplexed onto one bus, forming two groups of two ALTROs each. The power regulators of the PASA and ALTRO chips are also combined into the same two groups, which allows half of the card to be disabled if problems occur. Finally, the parallel outputs of the two groups are combined into one bus that is directly connected to the FPGA.
The FPGA slices the 40-bit ALTRO words into 10-bit words, encodes them with a Hamming code, and finally forms 16-bit words for the parallel input of the Ser/Des chip. The Ser/Des chip implements the last multiplexing step in the FEC64S. Outgoing data are transmitted to the RCU as a serial stream with a data rate of 1.6 to 2.5 Gbps.
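As a rough software model of this step: a 10-bit payload fits a standard single-error-correcting, double-error-detecting (SEC-DED) Hamming layout in at most 16 bits (4 parity bits at the power-of-two positions plus one overall parity bit). The bit assignment below is an illustrative assumption; the actual FEC64S encoding is not specified in the text.

```python
def hamming_encode(data, n_data=10):
    """SEC-DED Hamming encoder: data bits go to the non-power-of-two
    positions (1-indexed), parity bits to positions 1, 2, 4, 8, and
    an overall parity bit to position 0 for double-error detection."""
    bits = {}
    pos, i = 1, 0
    while i < n_data:
        if pos & (pos - 1):            # not a power of two -> data bit
            bits[pos] = (data >> i) & 1
            i += 1
        pos += 1
    n_total = pos - 1                  # highest position used (14 for 10 data bits)
    p = 1
    while p <= n_total:                # parity p covers positions q with q & p set
        parity = 0
        for q, b in bits.items():
            if q & p:
                parity ^= b
        bits[p] = parity
        p <<= 1
    word = 0
    for q, b in bits.items():
        word |= b << q
    overall = bin(word).count("1") & 1  # make the total parity even
    return word | overall
```

Flipping any single bit of the returned word produces a non-zero syndrome identifying that bit's position, the standard single-error-correction property of Hamming codes.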
The FEC64S can operate in two acquisition modes: the Individual Readout Mode (IRM) and the Automatic Scan Mode (ASM). In IRM the RCU must send a Channel Readout command (CHRDO) to each of the 64 channels on the card individually. This acquisition mode does not achieve the fastest data readout from all FECs connected to the RCU.
After ASM is activated, the FPGA takes over some functions of the RCU. The ASM algorithm is implemented as a state machine in the FPGA. An ASM cycle starts after receiving a level-2 trigger (LVL2), which confirms that valid data are present in the ALTRO multi-event buffer memory. After receiving LVL2, the state machine starts to read the Address and Event Length (ADEVL) registers of all ALTRO channels, which contain information about the data available in the buffers. The contents of these registers are stored in a dedicated FIFO buffer in the FPGA. When the information from all ADEVL registers has been received, the state machine sends readout commands only to those channels that contain data. The flow graph of the ASM is shown in Fig. 3.
Receiving the event-length information has a higher priority in the state machine than the channel readout. This is because the event-length register contains information only about the latest event stored in the ALTRO buffers, whereas the ADC samples reside in multi-event buffer memory. The data readout cycle is therefore interrupted when another LVL2 is received, and it is resumed only after the new event-length information has been stored in the dedicated FIFO memory in the FPGA.
The ASM significantly accelerates the data readout process: in this mode all FECs operate simultaneously, and the RCU only needs to receive the data.
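The priority rule above can be modelled in a few lines of Python. This is a hypothetical software sketch: the function name, data layout, and batching are illustrative, not the actual FPGA state machine.

```python
from collections import deque

def automatic_scan(altro_buffers, n_triggers):
    """Model of the ASM priority: ADEVL (event-length) reads outrank
    channel readout, because ADEVL holds only the latest event while
    the samples sit in multi-event buffers.

    altro_buffers: {channel: [event_length, ...]} multi-event buffers.
    Returns the (channel, length) readout sequence for n_triggers LVL2s."""
    adevl_fifo = deque()
    readouts = []
    for _ in range(n_triggers):
        # On each LVL2, latch every channel's ADEVL register first and
        # store it in the dedicated FIFO (interrupting any readout).
        for ch, lengths in altro_buffers.items():
            if lengths:
                adevl_fifo.append((ch, lengths.pop(0)))
    # Resume readout: issue CHRDO only to channels that reported data.
    while adevl_fifo:
        readouts.append(adevl_fifo.popleft())
    return readouts
```

For example, a single trigger with channel 1 empty reads out only channels 0 and 2; channels that reported no data are never issued a CHRDO.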
The RCU is a key unit of the data readout system. Its main functions are control of, and data collection from, 62 FECs. The whole data acquisition system will be equipped with 24 RCUs, one per readout chamber. Each RCU is based on 8 input FPGAs and one main FPGA. Each input FPGA is connected to 8 FECs through high-speed transceivers. The multiplexing of data in the RCU is implemented in two steps.
Each input FPGA multiplexes the data from its 8 FECs, and the main FPGA multiplexes the data from the 8 input FPGAs. After multiplexing, the data are transmitted to the first-level processor system through an optical link.
Other key functions of the RCU are monitoring the FEC conditions and fanning out synchronization pulses. The RCU is equipped with an independent Ethernet line to the slow-control system.
The SAMPA chip has been under development during the last few years for the ALICE TPC and MCH upgrades at CERN. It has 32 channels and is composed of an analogue front-end part, an ADC, a digital processor, and serial links. The SAMPA chip is more integrated than the PASA and ALTRO chip set: its area is only 225 mm2 per 32 channels, while the areas of the PASA and ALTRO are 484 mm2 and 708 mm2, respectively, per 16 channels. Estimates show that the size of the FEC PCB can be reduced by a factor of 3. The amount of background material in front of the corresponding parts of the CPC Tracker, the Straw EC Tracker, and the Time-of-Flight system located next to the TPC end-caps will be reduced proportionally. Moreover, in this case the FEC planes can be oriented parallel to the TPC end-cap plane, which is an essential factor affecting the quality of track reconstruction in the MPD.
Another significant advantage of the SAMPA chip is its ability to operate with multi-wire proportional chambers as well as with GEM chambers, which is necessary for a future TPC upgrade.
A few pilot chips were tested, and the measurement results show the feasibility of a SAMPA-based FEC for the TPC of MPD/NICA. The idea of adopting the SAMPA chip in the FECs by modifying the existing design looks very productive.
200 G. Cheremukhina et al.
5 Conclusion
A FEE system based on the PASA and ALTRO chip set has been designed. The present system provides readout of physics events at trigger rates of up to 7 kHz, which meets the NICA project requirements for the startup period of operation.
The use of the SAMPA chip in the TPC readout system will satisfy all requirements; in addition, such a system opens an opportunity for a future TPC upgrade. The SAMPA-based FEE is now under design.
Acknowledgments. The authors wish to express their gratitude to L. Musa and C. Lippmann for their interest in our work and for their support. We also thank A. Kluge and M. Bregant for the opportunity to participate in testing the SAMPA chip at VBLHEP, JINR.
References
1. Kekelidze, V., et al.: Status of the NICA project at JINR. In: EPJ Web of Conference 95,
01014 (2015)
2. Abraamyan, K., et al.: The MPD detector at the NICA heavy-ion collider at JINR. Nucl.
Instrum. Method A 628, 99 (2011)
3. Abraamyan, K., et al.: MPD collaboration, the multipurpose detector (MPD). Conceptual
Design Report, JINR, Dubna, Russia (2010)
4. Aver’yanov, A.V., et al.: Time-projection chamber of the MPD detector at NICA collider
complex. Yadernaya fizika i inzhiniring [in Russian] 4(9), 867 (2013)
5. Aver’yanov, A.V.: Time-projection chamber for the MPD NICA project. J. Instrum. 9,
C09036 (2014)
6. Aver’yanov, A.V. et al.: Readout system of the TPC/MPD NICA project. Yadernaya fizika i
inzhiniring [in Russian] 5(11–12), 916 (2014)
7. Aver’yanov, A.V. et al.: Readout system of TPC/MPD NICA project. Phys. At. Nuclei 78
(13), 1556–1562 (2015)
8. ALICE TPC Electronics.: Charge sensitive shaping amplifier (PASA). Technical Specifi-
cations, CERN (2003)
9. ALICE TPC Readout Chip: User Manual, Draft 0.2. CERN, June 2002. ALTRO chip web page: http://ep-ed-alice-tpc.web.cern.ch/ep-ed-alice-tpc
10. Barboza, S.H.I., et al.: SAMPA chip: a new ASIC for the ALICE TPC and MCH upgrades.
J. Instrum. 11, C02088 (2016)
11. Adolfsson, J., et al.: SAMPA chip: the new 32 channels ASIC for the ALICE TPC and MCH
upgrades. J. Instrum. 12, C04008 (2017)
12. Bazhazhin, A., et al.: Front end electronics for TPC MPD/NICA. In: Proceeding of XXIII
International Symposium on Nuclear Electronics & Computing Varna. JINR. E 10(11), 133
(2011)
13. Averyanov, A., et al.: R&D readout electronics of the TPC MPD/NICA. In: Proceeding of
XXIV International Symposium on Nuclear Electronics & Computing. Varna. JINR. E 10
(11), 136, 265 (2013)
TDC Based on FPGA of Boron-Coated MWPC
for Thermal Neutron Detection
1 Introduction
The neutron is an ideal probe for studying the structure and dynamical properties of matter [2]. Traditionally, most neutron scattering spectrometers have adopted 3He gas neutron detectors. In recent years, however, the shortage of 3He gas has made it difficult to use this kind of detector in new applications. 10B, in contrast, can readily absorb neutrons; the reaction products are relatively easy to measure, and the natural abundance of 10B reaches 19.9% [3, 4].
In order to measure the neutron hit position with the boron-coated MWPC, a prototype readout system was proposed and developed. By recording the time difference between the two pulses from the two ends of the delay-line module, the hit position can be reconstructed, so the time-measurement resolution directly affects the position resolution. To verify the position-resolution performance, several experiments were performed.
Fig. 2. Structure of the readout electronics system (left) and the structure of FEE (right)
Figure 2 (right) shows the structure of the FEE, which is implemented as a removable module plugged into a motherboard; a DAC module on the motherboard sets the discriminator threshold. In order to achieve high gain, high bandwidth, and low noise, the FEE adopts two amplifier stages. The amplified signal is fed into a discriminator and output as an LVDS-level signal.
After simulation and calculation of the delay-line module in the lab, the following design guidelines were obtained (d denotes the nominal tap-to-tap delay increment of the delay-line module, and Z0 its equivalent characteristic impedance):
• Choose the value of Z0 as large as possible.
• Choose an appropriate value of d; d = 5 ns gives a better position resolution for the MWPC detector mentioned above.
To digitize the time interval, the coarse-and-fine time measurement method [5] is utilized and implemented in a Xilinx FPGA. The anode signal is used as a trigger to enable the TDM. The time interval between the two signals from the same delay-line chain is then measured and digitized. The digitized data are packed and transmitted to a computer over a Gigabit Ethernet port, which is implemented with an SoC FPGA in the form of a daughter module on the TDM. The structure and the lab-test results of the TDM are shown in Fig. 3; they show that its time resolution is better than 35 ps (RMS).
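The coarse-and-fine combination reduces to one subtraction. The numbers below (a 250 MHz clock, i.e. a 4 ns period, and a 128-bin interpolator) are assumptions chosen for illustration; the actual TDM parameters are not given in the text.

```python
def tdc_time_ps(coarse, fine, clk_ps=4000, bins=128):
    """Coarse counter counts whole clock periods since the enable;
    the fine code measures the fraction from the hit to the next
    clock edge, so it is subtracted from the coarse timestamp."""
    return coarse * clk_ps - fine * clk_ps / bins

print(tdc_time_ps(3, 64))  # 10000.0 ps: three periods minus half a period
```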
204 L. Yu et al.
Fig. 3. The architecture of the TDM and picture of TDM and gigabit Ethernet board
To evaluate the time-measurement resolution of the electronics, lab tests were carried out. The result is shown in Fig. 4. The time-difference resolution is better than 550 ps, the total time difference is about 2 × 286 ns for this MWPC detector (200 mm × 200 mm), and, according to expression (1), the position resolution is better than 0.2 mm.
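Expression (1) is not reproduced in this excerpt, but the quoted figures are consistent with the usual delay-line relation: the position resolution is the time-difference resolution scaled by L / T_total. A minimal check, assuming that relation:

```python
def position_resolution_mm(sigma_dt_ns, total_delay_ns, length_mm):
    """Delay-line readout: the hit position maps linearly onto the
    time difference between the two line ends, so
    sigma_x = sigma_dt * L / T_total."""
    return sigma_dt_ns / total_delay_ns * length_mm

# sigma(dt) < 550 ps, total time difference 2 * 286 ns, L = 200 mm
print(position_resolution_mm(0.55, 2 * 286, 200))  # ~0.19 mm
```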
Fig. 4. Results for the delay-line module with 5 ns tap value: the resolution of the time difference (left); the linear relation between the number of delay units and the delay time (right)
A joint test with the MWPC detector was also performed. Figure 5 (left) shows the joint-test platform and pictures of the 241Am source placed on the detector to simulate neutron hits. The result is shown in Fig. 5 (right): the position resolution is better than 1 mm, from which we conclude that this result is limited by the cathode-wire spacing.
TDC Based on FPGA of Boron-Coated MWPC for Thermal Neutron Detection 205
Fig. 5. Joint-test platform with the 241Am source at three different locations on the MWPC detector (left); result of the joint test with the MWPC detector (right)
References
1. Charpak, G., Sauli, F.: Multiwire proportional chambers and drift chambers. Nucl. Instrum.
Meth. 162, 405 (1979)
2. Viret, M., Ott, F., Renard, J.P., et al.: Magnetic filaments in resistive manganites. Phys. Rev. Lett. 93, 217402 (2004)
3. Shea, D.A.: Congressional Research Service (2010)
4. Knoll, G.F.: Radiation Detection and Measurement, 3rd edn. Wiley, New York (2000)
5. Fan, H.-H., et al.: A high-density time-to-digital converter prototype module for BES III end-cap TOF upgrade. IEEE Trans. Nucl. Sci. 60, 3563 (2013)
A Portable Readout System for Micro-pattern
Gas Detectors and Scintillation Detectors
1 Introduction
With the development of high-energy physics (HEP), Micro-Pattern Gas Detectors (MPGDs) such as Micromegas [1] and Gas Electron Multipliers (GEMs) [2], as well as scintillation detectors, are widely used in particle detection and space astrophysics [3]. To meet the requirements of early-stage tests, a portable readout electronics system was implemented and verified by the authors.
The system is based on the VATA160 chip [4], a large-dynamic-range charge-measurement readout Application-Specific Integrated Circuit (ASIC) with a self-trigger function, designed by IDEAS (Norway). It has 32 charge-sensitive channels and is intended for scintillation detectors and MPGDs. The system, which can acquire 128 channels of charge inputs, has been developed and can be used to study the performance of MPGDs as well as scintillation detectors. With an integration time of 1.8 µs, the dynamic range extends from −1 pC to +12 pC, and the noise level is better than 2.5 fC RMS. The system is compact and portable: it communicates with the PC via a single USB bus, and since its total power dissipation is below 2.5 W, it can be powered from the USB bus. The system can generate its trigger internally or accept an external trigger.
Fig. 1. Block diagram of electronic board. Fig. 2. Photo of the readout electronic board.
We present the results of the performance tests of the readout system. The electronic noise of all 128 channels has been tested, and the result is shown in Fig. 3. The figure indicates that the noise of every channel is below 2.5 Least Significant Bits (LSB), corresponding to an equivalent input charge of 2.5 fC; one ADC channel stands for 1 fC. Every channel of the system has been calibrated. As shown in Fig. 4, the typical integral nonlinearity (INL) between −500 fC and +2.5 pC is better than 0.5%. The range from −500 fC to +2.5 pC covers most of the application requirements.
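The INL figure can be reproduced from the calibration data by fitting a straight line to the code-versus-charge curve and taking the maximum deviation as a fraction of full scale. A generic sketch (the least-squares fit and full-scale normalization are common conventions, assumed here; the authors' exact procedure is not stated):

```python
def inl_percent(charges_fc, codes):
    """INL: maximum deviation of the measured transfer curve from its
    least-squares straight-line fit, as a percentage of full scale."""
    n = len(codes)
    mx = sum(charges_fc) / n
    my = sum(codes) / n
    # Least-squares slope and intercept of code vs. injected charge.
    b = sum((x - mx) * (y - my) for x, y in zip(charges_fc, codes)) \
        / sum((x - mx) ** 2 for x in charges_fc)
    a = my - b * mx
    fs = max(codes) - min(codes)
    dev = max(abs(y - (a + b * x)) for x, y in zip(charges_fc, codes))
    return 100.0 * dev / fs
```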
The system was coupled to a Micromegas detector [5] to measure the energy spectrum of 55Fe. The result is shown in Fig. 5. The full-energy peak and the escape peak are clearly visible, which shows that the readout system is capable of reading out a Micromegas detector.
208 S. Ma et al.
The encoded multiplexing readout method for the Thick Gas Electron Multiplier (THGEM) is a novel method that can significantly reduce the number of readout channels [6, 7]. Here, the readout system was connected to a THGEM detector with a two-dimensional direct-coding readout of 100 × 100 anode bars to perform an imaging test. A copper plate with letter-shaped slits was placed between the detector and the X-ray generator. After collecting the X-ray signals that entered the detector through the slits of the copper plate, the two-dimensional image was obtained by decoding the hit positions of the incident signals. As shown in Fig. 6, the letter slits are clearly visible when the threshold is set to three times the noise level.
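The encoded multiplexing idea of refs. [6, 7] can be sketched as follows, under the assumption that each anode bar is wired to a unique unordered pair of readout channels, so a single hit is decoded from the pair of channels that fire together; 15 channels give C(15, 2) = 105 codes, enough for the 100 bars of one dimension. Function names here are illustrative.

```python
from itertools import combinations

def build_code(n_channels):
    """Map each unique unordered channel pair to one anode-bar index."""
    return {pair: bar
            for bar, pair in enumerate(combinations(range(n_channels), 2))}

def decode(fired, code):
    """Decode the bar index from the set of fired channels (single hit).
    Returns None for an invalid or ambiguous pattern."""
    pair = tuple(sorted(fired))
    return code.get(pair)

code = build_code(15)          # 105 codes >= 100 bars per dimension
print(decode({0, 1}, code))    # bar 0
```

The design trade-off is that multiple simultaneous hits can produce ambiguous channel patterns, which is why the method suits the low-occupancy imaging test described above.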
4 Conclusion
A portable readout electronics system for MPGDs and scintillation detectors has been presented in this paper. The readout system features low noise (less than 2.5 fC), a high dynamic range (−1 to +12 pC), low power dissipation (less than 2.5 W), and high integration (128 channels). The system is portable, with a single USB bus providing its power supply, commands, and data transmission, and it operates with different types of MPGDs and scintillation detectors.
References
1. Giomataris, Y., et al.: MICROMEGAS: a high-granularity position-sensitive gaseous detector
for high particle-flux environments. Nucl. Instrum. Methods Phys. Res., Sect. A 376(1), 29–
35 (1996)
2. Sauli, F.: GEM: a new concept for electron amplification in gas detectors. Nucl. Instrum.
Methods Phys. Res., Sect. A 386(2–3), 531–534 (1997)
3. Jin, C.: Dark matter particle explorer: the first Chinese cosmic ray and hard γ-ray detector in space. Chin. J. Space Sci. 34(5), 550–557 (2014)
4. IDEAS Homepage. http://ideas.no/products/ide3160-2/. Accessed 14 June 2017
5. Yunlong, Z., et al.: Manufacture and performance of the thermal-bonding Micromegas
prototype. J. Instrum. 9(10), C10028 (2014)
6. Qian, L., et al.: A successful application of thinner-THGEMs. J. Instrum. 8(11), C11008
(2013)
7. Binxiang, Q., et al.: A novel method of encoded multiplexing readout for micro-pattern gas detectors. Chin. Phys. C 40(5), 58–62 (2016)
8. HAMAMATSU Homepage. http://www.hamamatsu.com/us/en/S13360-1350PE.html. Acces-
sed 14 June 2017
Quality Evaluation System for CBM-TOF
Super Module
1 Introduction
The Time-of-Flight (TOF) system of the Compressed Baryonic Matter (CBM) experiment is composed of six different types of super-module (SM) detectors, named M1 to M6, each of which contains several high-resolution Multi-gap Resistive Plate Chambers (MRPCs). For M5 and M6, each SM contains 5 MRPCs, corresponding to up to 320 electronics channels for high-precision time measurement. In order to meet the requirement of 80 ps global time resolution, the resolution of the time-to-digital converter (TDC) board should be 25 ps or better [1–3].
At present, the MRPC detectors for CBM-TOF are still under development. During MRPC mass production, a quality evaluation system is required to ensure that the detectors achieve the expected performance. For the purpose of quality evaluation of the CBM-TOF SM, a 320-channel time-digitizing and readout electronics system for high-density, high-resolution time measurement has been designed.
2 System Architecture
Fig. 1. System architecture: TDC boards (TDC#1–TDC#10) on the SM detector front-end connect to the back-end readout electronics (DRM1–DRM10, CTM, STS) housed in PXI crates and read out over Gigabit Ethernet.
As shown in Fig. 2, a prototype of the DRM has been designed for data readout and for clock and trigger distribution. The DRM receives data from the TRM through an optical link and distributes the clock and trigger back to the TRM. To handle the massive data received from the TRMs through four optical links, 20 DRMs, divided into four groups and residing in two PXI-6U crates, are introduced into the system. Among the 5 DRMs of each group, a master module receives data from the TRM at a rate of 2.5 Gbps through one optical link and forwards them to all the slave modules along a daisy chain of differential cables, as shown in Fig. 3. As a result, each DRM needs to process data at a rate of 0.5 Gbps. Once data arrive at a DRM, they are relayed to the DAQ system concurrently through the Gigabit Ethernet port on each DRM.
Fig. 3. Daisy-chain readout within one DRM group: data for the 320 channels arrive over a 2.5 Gbps optical link (SFP), are relayed along the chain, and each DRM forwards its share to the DAQ over Gigabit Ethernet.
To confirm the performance of the quality evaluation system, the cable-delay method was used. Two hit signals generated by a pulse generator (AFG3252) with a fixed time delay are connected to two TDC channels. Assuming that the two channels are uncorrelated, the time resolution of a single channel is equal to the RMS value of the measured time difference divided by √2. As shown in Fig. 4, the time resolution of the leading-edge time is better than 20 ps, which exceeds the requirement.
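The cable-delay analysis is a one-liner: the measured time-difference spread is the quadrature sum of two (assumed equal and uncorrelated) channel resolutions, so dividing the RMS by √2 yields the single-channel value.

```python
import math
import statistics

def single_channel_resolution(dt_samples):
    """Cable-delay method: RMS of the channel-to-channel time
    difference divided by sqrt(2), assuming two equal,
    uncorrelated channels."""
    return statistics.pstdev(dt_samples) / math.sqrt(2)
```

The quoted sub-20 ps single-channel result therefore corresponds to a raw time-difference RMS below about 28 ps.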
Fig. 4. Left: time measurement results of two channels from the same TDC. Right: time measurement of two channels from two separate TDC boards.
4 Conclusion
A 320-channel time digitizing and readout electronic system is designed for quality
evaluation of CBM-TOF SM. The system has a distributed and extensible architecture,
mainly including FEE, BEE, and DAQ software. The laboratory test results show that
quality evaluation system can work correctly and overall system has time resolution of
20 ps. The evaluation system can be subsequently used for quality control of CBM-
TOF SM.
Acknowledgment. This work was supported by the National Basic Research Program (973
Program) of China under Grant 2015CB856906.
References
1. Senger, P.: The compressed baryonic matter experiment. J. Phys. G: Nucl. Part. Phys. 28(7),
1869 (2002)
2. Herrmann, N.: Technical design report for the CBM time-of-flight system (TOF). Technical
report, GSI. http://repository.gsi.de/record/109024/files/tof-tdr-final_rev6036.pdf. Accessed
10 June 2017
214 C. Li et al.
3. Loizeau, P.-A.: Development and test of a free-streaming readout chain for the CBM time of
flight wall (2014)
4. Fan, H.-H., et al.: TOT measurement implemented in FPGA TDC. Chin. Phys. C 39, 11
(2015)
5. Zheng, J., Cao, P., Jiang, D., An, Q.: Low-cost FPGA TDC with high resolution and density.
IEEE Trans. Nucl. Sci. PP(99), 1–1
Research of Front-End Signal Conditioning
for BaF2 Detector at CSNS-WNS
1 Introduction
This work was supported by National Research and Development plan and NSAF (U1530111).
the data acquisition system. The detector array will have 46 channels at first, and up to 92 channels after the planned upgrade.
To avoid the influence of the alpha particles usually introduced by the crystals themselves, the pulse-shape discrimination (PSD) method [3] is implemented in the readout electronics. The field digitization modules (FDMs) use full waveform digitization based on 1 GSps ADCs [4] to realize the PSD method. This imposes new performance requirements, such as low distortion and high bandwidth, on the front-end signal conditioning. Moreover, the FDMs are located in the back-end PXIe crates, which means the analog signals must be transmitted to the back-end crates over a distance of about 20 m. In addition, the Sub-Trigger Module (STM), a part of the trigger system that is also located in the back-end PXIe crates, generates pre-trigger information from the analog signals. Therefore, the analog signals from the front end need to feed both the FDMs and the STMs. To meet the requirements above, the front-end signal conditioning must be able to amplify the fast signals from the detectors and fan them out for the corresponding processing, with high bandwidth, low distortion, low noise, and long-range driving capability.
The typical signal from the detector when hit by a gamma ray is shown in Fig. 1. The characteristics of the pulse in the 50 Ω system are as follows: the fall time is about 2 to 3 ns; the rise time of the slow component is about 1 to 2 µs; the amplitude of the signals of interest ranges from −4 mV to −2 V. Thus, the front-end signal conditioning needs a bandwidth of up to 200 MHz and a large linear range extending to −2 V.
Fig. 1. Typical pulse from the BaF2 detector: amplitude (V, from 0.0 down to about −0.4 V) versus time (ns), showing the fast and slow scintillation components.
In this paper, the design of the front-end signal conditioning for the BaF2 detector is proposed. First, the background is introduced; then the design details are discussed. In Sect. 3 some test results are presented, and at the end the paper is summarized.
As illustrated in Sect. 1, a fast preamplifier (FPA) has been designed to transmit the analog signals from the PMTs with low distortion and low noise, and an analog fan-out module (AFM) has been designed to provide analog signals to the FDM and the STM respectively.
Fig. 2. Front-end signal chain: BaF2 + PMT (with −HV) → FPA → twisted-pair cable → AFM, whose outputs (Ch1 … Ch12) feed the FDM and the STM.
In order to verify the design of the FEE and evaluate its performance, several tests have been carried out. The platform and a photo of the front-end signal-conditioning test circuits are shown in Fig. 3. The test results are shown in Fig. 4; from these results we draw the conclusions of the next section.
Fig. 3. The diagram and photo of the test platform for the front-end signal-conditioning circuits: a signal source drives the FPA input, the differential FPA outputs feed the AFM, and the AFM outputs are observed on an oscilloscope; a power supply feeds both modules.
Fig. 4. The input–output (linearity) curve and the frequency-response (normalized gain) curve of the DUT
4 Conclusion
The prototype of the front-end electronics including two modules has been designed
with low-noise, high-speed and low-distortion performance to meet the requirements of
the STM and the FDM with a 12 bit, 1 GSps ADC for the BaF2 detector array at CSNS-
WNS. Test results show that the front-end signal conditioning circuits have the
bandwidth up to 245 MHz and pretty good performance of linearity that is suitable for
BaF2 detector application.
References
1. Tang, J.Y., et al.: Proposal for muon and white neutron sources at CSNS. Chin. Phys. C 34(1),
121–125 (2010)
2. He, B., et al.: Clock distribution for BaF2 readout electronics at CSNS-WNS. Chin. Phys.
C 41(1), 162–166 (2017)
3. Combes, C.M., et al.: A thermal-neutron scintillator with n/γ discrimination: LiBaF3:Ce,Rb. Nucl. Instrum. Methods Phys. Res., Sect. A 416(5), 364–370 (1998)
4. Zhang, D.L., et al.: System design for precise digitization and readout of the CSNS-WNS
BaF2 spectrometer. Chin. Phys. C 41(2), 159–165 (2017)
5. Liu, S.B., et al.: BES III time-of-flight readout system. IEEE Trans. Nucl. Sci. 57(2), 419–427
(2010)
6. Texas Instruments: LMH6552 Datasheet, Dallas, TX (2007)
7. Yuan, J.L.: GTAF system’s front-end electronics design and performance testing and system
of the target switch’s development (in Chinese). Master thesis, Department of Modern
Physics, Lanzhou University, Lanzhou, Gansu (2008)
8. Texas Instruments: LMH6550 Datasheet, Dallas, TX (2004)
Generalized Signal Conditioning Module
for Spectrometers at CSNS-WNS
1 Introduction
The Chinese Spallation Neutron Source (CSNS) is a large scientific facility under construction; it will be the first spallation neutron source in a developing country and will rank fourth in the world after completion [1, 2]. At CSNS, the back-streaming neutrons from the proton beam line are suitable for constructing a White Neutron Source (WNS) [3]. The WNS is an extremely useful tool for measuring nuclear data, and spectrometers are one of the important guarantees for nuclear-data measurement research.
CSNS-WNS BaF2 detector readout electronics [4]. Because the detector systems and the physics targets differ, the logic algorithms in the FPGA also differ.
[Block diagram: detector and preamplifier (special design) → Signal Conditioning Module → ADC → FPGA (with DDR) → PXIe platform; T0 trigger and timing distribution.]
The detector and preamplifier use detector-specific designs, while the digital circuit belongs to the generalized design. The SCM completes the adaptation between the special designs and the generalized digital circuit: it transforms the single-ended signal into a differential signal and adjusts the signal amplitude and level to meet the requirements of the ADC input.
Fig. 4. The result of the frequency-domain test (left) and the linearity test (right). The frequency response shows a −3 dB point at 115 MHz; the linear fit y = a + b·x of Vout (mV) versus Vin (mV) gives slope b = 0.25037 and intercept a = −0.7085, with adjusted R² = 0.99995.
Fig. 5. Joint test result of C6D6 detector (left) and test platform (right)
Fig. 6. Joint test result of the multilayer fission chamber (left) and the test platform (right): 14 MeV neutrons irradiate the fission chamber, which is read out through a 142PC preamplifier (HV 200 V), a 20 m coaxial cable, an ORTEC 428, and the SCM2.
5 Conclusion
SCM has good frequency response, linear and faster time response, and is suitable for a
variety of detectors by parameter adjustment. By debugging with a variety of detectors,
it is shown that SCM can match the corresponding detector, which verifies the design
rationality. So the generalized SCM simplifies design of various spectrometers for
CSNS-WNS and the development cycle is shortened.
Acknowledgements. This work was supported by National Research and Development plan
(2016YF-A0401602).
References
1. He, B., Cao, P., Zhang, D.-L., et al.: Clock distribution for BaF2 readout electronics at CSNS-WNS (2016)
2. Tian, H.L., Zhang, J.R., Yan, L.L., et al.: Distributed data processing and analysis
environment for neutron scattering experiments at CSNS. Nucl. Instrum. Methods Phys. Res.
834, 24–29 (2016)
3. Jing, H.T., Tang, J.Y., Tang, H.Q., et al.: Studies of back-streaming white neutrons at CSNS.
Nucl. Instrum. Methods Phys. Res. 621(1–3), 91–96 (2010)
4. Zhang, D., Cao, P., Wang, Q., et al.: Proposal of readout electronics for CSNS-WNS BaF2
detector (2016)
5. Diaz-Diaz, I.A., Cervantes, I.: Design of a flexible analog signal conditioning circuit for DSP-
based systems. Procedia Tech. 7(4), 231–237 (2013)
Study of Front-End High Speed Readout Based
on JESD204B
Abstract. This paper describes a high-speed data readout method for large-scale front-end electronics, using a JESD204B-like transmission protocol implemented in an FPGA, in addition to the readout of a commercial ADC. A prototype board including analog signal processing, digitization, digital processing and control in an FPGA, and data transmission has been designed; together with a lab-designed data receiver board, it forms a demo system for the study of this new method. The JESD204B protocol implemented in the FPGA is compared with and verified against the commercial ADC output, and the test results are satisfactory.
1 Background
With the emergence of high-speed digitization technology in front-end electronics, massive data readout becomes an inevitable requirement and a technical bottleneck in high-energy physics experiments and elsewhere. Large-scale data readout is limited by the single-channel bandwidth and by overall bandwidth constraints, so multi-lane synchronized data transmission is needed, especially in large-scale ASICs. RocketIO technology, the JESD204B standard, and other protocols provide a reference for the development of high-speed front-end data readout protocols and their hardware implementation. Since RocketIO is a proprietary high-speed serial technology and JESD204B is a commercial protocol embedded in commercial chips, they are not directly suitable for ASIC implementation. Therefore, independent research and development of an ASIC design for high-speed, large-scale data readout is both necessary and urgent. A Xilinx FPGA with RocketIO transceivers and an ADI ADC chip with the JESD204B protocol are used to carry out the research on a high-speed multi-channel synchronous transmission protocol. This work masters the relevant protocol and its chip implementation through the design of a prototype, software programming, and functional testing. With FPGA emulation, the technology is realized without relying on commercial RocketIO and JESD204B modules, preparing for an ASIC high-speed readout design for the future CEPC or other experiments.
First, following the JESD204B transmission scheme, a single-width, double-layer AMC with four 500 Msps, 14 bit resolution ADC channels, called the digitization and FEE readout board, was designed. The entire system hardware and protocol were then studied, with the board cooperating with the back-end data processing board to achieve large-scale front-end data readout. The GTH transceivers in the FPGA were then used to emulate the JESD204B protocol, and finally the protocol was emulated without using the GTH, as the principle of an ASIC prototype.
To achieve high-speed front-end data readout, the block diagram shown in Fig. 1 is used.
The signal is digitized by the ADC after being processed by the amplifier. The ADC transmits the data to a miniPOD following the JESD204B protocol; the miniPOD converts the electrical signal into an optical signal and transmits it through optical fiber to the back-end processing board, whose FPGA receives and displays the data. The content of this paper is the design of the digitization and FEE readout board, the FPGA program of the back-end processing board, and the emulation of the JESD204B protocol.
The digitization and FEE readout board consists of two parts: the digitization and FEE readout mother board (Tongue1) and the configuration daughter board (Tongue2). The mother board uses two AD9680-500 ADCs, each with a 500 Msps sampling rate and 14 bit resolution. The ADC data are converted by a miniPOD into optical signals and sent via fiber to the back-end.
226 Z. Liu et al.
The ADC sampling clock is generated by the AD9528 PLL chip from the 40 MHz on-board or external reference clock, and the on-board power supplies are derived from the 12 V input voltage by converter chips. A SYNC button is added because the JESD204B protocol requires the receiver to assert the SYNC signal to the transmitter in order to request and acknowledge the K28.5 synchronization code. In this design no signal line could be routed between the receiver and the transmitter, so the SYNC button mimics the receiver-to-transmitter SYNC and allows the ADC to work properly. The pins of the AD9680 series are compatible, so the ADC maximum sampling rate can be 500 Msps, 820 Msps, 1 Gsps or 1.25 Gsps; this experiment uses a 500 MHz sampling frequency for data-path debugging, corresponding to a fiber line rate of 5 Gbps (Fig. 2).
The ADC and clock chips on Tongue1 can be configured either by the FPGA on the Tongue2 configuration daughter board or by the MMC microcontroller on Tongue1. In addition, Tongue2 provides panel space for a reference signal or reference clock source input from the panel. The on-board flash guarantees self-loading of the program after power-on, and Gigabit Ethernet can be used for host-computer control through IPbus configuration. The functional assignment of Tongue1 and Tongue2 ensures that the entire system can work normally without Tongue2: the configuration can be done by the MMC microcontroller on Tongue1, the reference signal can also be input from the Tongue1 backplane, and Tongue2 only provides additional auxiliary functions (Fig. 3).
The ADC sampling clock of the digitization and FEE readout board is generated by the AD9528 clock chip, which is programmed by EDK embedded software and configured through the I2C bus (Fig. 4).
The AD9680 ADC chip is configured by the ISE software via the SPI bus (Fig. 5).
The optical signal from the digitization and FEE readout board is received by the back-end data processing board; the ADC transmission data format is shown in Fig. 6. The multiframe is a concept of the JESD204B protocol [1], representing a group of frames: each multiframe begins with the K code 0x1C and ends with the K code 0x7C. The FPGA on the back-end data processing board has to perform synchronization, alignment and user-data recovery. Synchronization uses the 0xBC comma code to define the 10 bit boundary of a byte when converting the data from 10 bit back to 8 bit. Alignment uses the fact that the 0x7C characters arrive at the same time on all lanes to eliminate the different path delays of the transmission. User-data recovery is needed because the ADC replaces repeated user data with the K codes 0xFC and 0x7C, so the received 0xFC and 0x7C characters must be restored to their previous values.
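As an illustration of this character-replacement scheme, a simplified software model (not the actual FPGA code; it covers only the frame-end 0xFC substitution, ignoring the multiframe 0x7C case and the 8b/10b control-character signalling) might look like:

```python
F = 8  # bytes per frame (the F parameter used in this design)

def encode_frames(frames):
    """Transmitter side: if the last byte of a frame repeats the last byte
    of the previous frame, replace it with the K code 0xFC."""
    out, prev_last = [], None
    for frame in frames:
        frame = list(frame)
        if prev_last is not None and frame[-1] == prev_last:
            frame[-1] = 0xFC          # marks "same as previous frame"
        else:
            prev_last = frame[-1]
        out.append(frame)
    return out

def decode_frames(frames):
    """Receiver side: restore each 0xFC to the previous frame's last byte."""
    out, prev_last = [], None
    for frame in frames:
        frame = list(frame)
        if frame[-1] == 0xFC:
            frame[-1] = prev_last     # user-data recovery
        prev_last = frame[-1]
        out.append(frame)
    return out

ramp = [[1, 2, 3, 4, 5, 6, 7, 9]] * 3  # last byte repeats across frames
assert all(len(frame) == F for frame in ramp)
assert decode_frames(encode_frames(ramp)) == ramp
```

The round trip recovers the original data because the receiver tracks the same "previous last byte" state as the transmitter.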
When the GTH is used to emulate the JESD204B protocol, the transmitter first sends the synchronization code 0xBC, then four multiframes of incremental data, followed by the user data. The parameter K is set to 32, i.e. a multiframe contains 32 frames; F is set to 8, i.e. a frame contains 8 bytes. The receiver is implemented as four 64 bit wide 10 Gbps channels, so that each frame corresponds to one converter, with the eight bytes of four samples carried on 64 parallel lines.
The parameters CF = 0 and CS = 0 are selected, i.e. a tail bit is appended to each sample. The tail bits are set to 00, so the frame format is as shown in Fig. 7.
In the transmitter ILA it can be observed that, after the BC ··· BC sync code, the program sends four multiframes and then starts sending the user data (Fig. 8). The incremental data of each multiframe start with 0x1C and end with 0x7C. If the last byte of a frame equals the last byte of the previous frame, it is replaced with 0xFC; if the last byte of a multiframe equals the last byte of the previous frame, it is replaced with 0x7C.
The data, after 8b/10b encoding and serialization, are sent through a general-purpose IO port and received by a RocketIO port. The receiver checks the received data: if they follow the ramp pattern the error count stays unchanged, otherwise it is incremented by one. Tile2_time_count increases by one every userclk2 cycle, so multiplying it by 20 gives the number of bits checked. As shown in Fig. 11, the error count remains zero, indicating that the bit error rate is less than 1/(20 × 2 × 10^12) = 2.5 × 10^−14. This experiment demonstrates the reliability of the transmitter; the 8b/10b conversion and serialization can serve as the ASIC prototype.
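The quoted bit-error-rate bound follows directly from the counter readings; as a quick sanity check (with the counter value from the figure):

```python
# Each userclk2 tick corresponds to one 20 bit word being checked.
BITS_PER_TICK = 20
tile2_time_count = 2e12          # counter value reached during the test

bits_checked = BITS_PER_TICK * tile2_time_count
ber_upper_bound = 1.0 / bits_checked   # no errors seen -> BER below this
print(ber_upper_bound)                 # 2.5e-14
```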
The data processing board receives data from the AD9680 on the digitization and FEE readout board, and its receiver is configured using an IBERT core. The IBERT is set to a 5 Gbps line rate with a 125 MHz reference clock, to match the 5 Gbps serdes output of the AD9680. Even though the IBERT link is not established, since the sender is not an IBERT, the eye scan of the receiver GTH can still be checked, which is the eye scan of one lane of the AD9680. The eye scan is shown in Fig. 12; the black box is the receive eye mask for LV-OIF-6G-SR [1], the mask specified for JESD204B line rates from 312.5 Mbps to 6.375 Gbps. The signals at the receiver stay outside the pre-defined mask area, so the eye of the AD9680 is good.
The ADC is configured into a test mode, sending incremental (ramp) data. The data processing board receives the data; if a word does not increase with time, error_count is incremented by one. Time_count increases by one every userclk2 cycle, so multiplying it by 20 gives the number of bits checked. The program can check two lanes at a time, because the receiver board has only two SFP cages, so four runs are needed to check the eight lanes of the two AD9680s. Figure 13 shows the result of one test.
Fig. 13. The data processing board captures the ramp data.
Over the four runs the error_count stays zero after 20 × 1 × 10^13 bits have been checked, so the bit error rate of the eight lanes is less than 1/(20 × 1 × 10^13) = 5 × 10^−15.
A signal generator is configured to output a 50 MHz, 2 Vpp, 0 V offset sinusoidal signal; the sine wave can be read out on the back-end processing board, as shown in Fig. 14. The x axis represents the sample number and the y axis the ADC output amplitude. The ADC output is set to offset-binary format, so the center point is located at 0x1FFF, i.e. 8191. The 50 MHz sinusoidal signal is sampled with the 500 MHz sampling clock, so one cycle contains 10 points, consistent with Fig. 14.
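A quick model of this readout check (assuming, for illustration only, that the 2 Vpp signal spans the full ADC scale):

```python
import math

F_SIG, F_SAMP = 50e6, 500e6      # signal and sampling frequencies
MID = 0x1FFF                     # offset-binary mid-scale code (8191)
AMP = 8191                       # full-scale amplitude assumed for illustration

# Ideal 14 bit offset-binary samples of the sine wave.
samples = [int(round(MID + AMP * math.sin(2 * math.pi * F_SIG * n / F_SAMP)))
           for n in range(20)]

points_per_cycle = F_SAMP / F_SIG
print(points_per_cycle)          # 10.0 -> one cycle contains 10 samples
```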
The captured ADC data are imported into the VisualAnalog program (Fig. 15) to calculate the SNR. As shown in Fig. 16, the SNR of the whole system is 46.112 dB [2].
8 Conclusion
The JESD204B protocol was emulated both with and without the use of the GTH transceivers. The results of this study can serve as a basis for an ASIC implementation.
Acknowledgments. This project was supported by the National Natural Science Foundation of China (Grant No. 11435013) and the Ministry of Science and Technology of the People's Republic of China (Grant No. 2016YFA0400104).
References
1. JEDEC Standard: Serial Interface for Data Converters, JESD204B.01
2. Lin, H.-C.: Research on self-trigger front-end unit for low frequency radio detection. Ph.D. thesis
Interface and Beam Instrumentation
Development of the AWAKE Stripline
BPM Electronics
1 System Overview
A Xilinx FPGA (SPARTAN6) processes the waveforms and performs the necessary
calculation to get the position and intensity information, and pack them into an event
data structure (event packet). The event packets are sent to the DAQ computer when
requested from the Ethernet interface. See Fig. 1 for the system diagram. Three boards:
AFE, Digitizer and FPGA board are integrated into a 1U crate (the so-called DSP
module) to be stacked on the standard 19 in. rack. Besides the DSP module, there’s one
Local/Calibration source module to provide 434 MHz Lo signal and 404 MHz CW
calibration signal to each DSP module. About 20 DSP modules have been assembled
and are undergoing bench test and calibration at TRIUMF. Among them 16 will be
installed and calibrated on the AWAKE electron and proton beam line in the summer
of 2017 (Fig. 2).
The single-bunch electron beam at AWAKE has a bunch length of 0.3–4 ps (1 sigma). The signals induced by such a beam on the stripline BPM electrodes are therefore very narrow pulses and need to be stretched to a much larger width (∼1 µs) for further processing. This is normally achieved with a very high-Q (i.e. narrow pass-band) band-pass filter at a selected frequency, which becomes the operating frequency of the stripline BPM: 404 MHz in our case. Its selection is a balance between the BPM sensitivity and the inter-electrode coupling gain.
The design of the AFE board started from two MATLAB SIMULINK models, one for the under-sampling method and one for the heterodyne method. The simulation allowed the gain/attenuation parameters of the RF components to be varied, facilitating the optimization of the analog signal processing chain; the S/N ratio at each stage was also estimated and optimized within the availability of the critical RF parts. The heterodyne method was finally chosen, to make use of existing hardware resources inherited from TRIUMF's E-linac BPM system. The input signal formation on the BPM electrodes was modelled with the analytical method of Shafer [2], and the cable loss was accounted for. After the chirping filter, the very narrow signal pulse is stretched to about 1 µs or longer at a frequency of 404 MHz, and then down-converted to 434 − 404 = 30 MHz with a bandwidth of about 10 MHz. The signal is then converted from single-ended to differential and subjected to further gain or attenuation to accommodate a dynamic input range of more than 30 dB. The digital attenuator at the front addresses the non-linearity for large input signals (1 nC charge). See Fig. 3 for the diagram of the analog front-end board. Beam tests at the CERN CALIFES facility helped finalize the gain parameters of the AFE board (Fig. 4).
240 S. Liu et al.
Fig. 4. Analog Front End (AFE) board diagram; one of the four channels is shown
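The down-conversion step relies on the standard mixer identity: multiplying the 404 MHz signal by the 434 MHz LO produces a 30 MHz difference term plus an 838 MHz sum term that the IF filter rejects. A numerical sanity check of the identity (illustrative, not part of the actual design flow):

```python
import math

f_sig, f_lo = 404e6, 434e6   # BPM operating frequency and LO frequency

for n in range(200):
    t = n * 1e-10            # sample the identity over 20 ns
    mixed = math.cos(2 * math.pi * f_sig * t) * math.cos(2 * math.pi * f_lo * t)
    sum_and_diff = 0.5 * (math.cos(2 * math.pi * (f_lo - f_sig) * t)
                          + math.cos(2 * math.pi * (f_lo + f_sig) * t))
    assert abs(mixed - sum_and_diff) < 1e-9  # product = sum + difference terms

print((f_lo - f_sig) / 1e6)  # 30.0 -> intermediate frequency in MHz
```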
The on-line calibration is a dynamic procedure, performed event by event by the on-line calibration event scheduler when enabled by the mode control register. Immediately after each real event, the calibration signal is turned on and a calibration event goes through the same processing chain. Assuming the calibration signal is stable, the gain drift of the electronics system can be measured from this calibration event, and correction factors can be produced and applied to the real event.
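In pseudocode terms (illustrative numbers only; the actual register-level implementation differs), the correction amounts to:

```python
CAL_REFERENCE = 1000.0   # calibration amplitude recorded at a reference time

def gain_corrected(real_amplitude, cal_amplitude):
    """Scale a real event by the gain drift measured from the calibration
    event that follows it (assumes the calibration signal itself is stable)."""
    gain_drift = cal_amplitude / CAL_REFERENCE
    return real_amplitude / gain_drift

# Example: the electronics gain has drifted up by 2 percent.
corrected = gain_corrected(real_amplitude=510.0, cal_amplitude=1020.0)
assert abs(corrected - 500.0) < 1e-9
```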
Almost all registers in the FPGA can be accessed by the host computer through Ethernet, mainly for debugging purposes; the same holds for the fast FIFO holding the raw ADC waveform. All necessary parameters, with default values, are stored in the flash memory and loaded at system start-up. Once start-up completes, the DSP module automatically performs the beam position and intensity measurements based on the pre-loaded default parameters (Fig. 5).
Fig. 5. AWAKE BPM single pulse processing FPGA function block diagram
The 40 mm BPM prototype and the readout electronics were tested with the electron beam of CERN's CALIFES facility in November 2016, with about 25 m of RF cable between the BPM monitor and the electronics. The electron beam had the following parameters: energy 196 MeV, single bunch, ∼5 ps FWHM bunch length, charge 50–380 pC, bunch interval 0.8 s. The transverse beam size at the BPM position was varied.
Figure 6 shows the system transfer gain, i.e. the maximum ADC waveform amplitude versus the single-bunch charge, compared with the simulation result; the linear fit is based on the measured points. The transfer gain measured in the range 50–360 pC agrees with the simulation to within 3 dB. About 150 measured position points are shown, and they include the beam jitter at the BPM. The horizontal position resolution is about 4.3 µm RMS, while the vertical one is about 8.7 µm RMS. Since the bench tests of the DSP module show no difference between the two planes, the worse resolution in the vertical plane should come from the beam jitter.
Fig. 6. Top: transfer gain from the electron beam test at CERN CALIFES; bottom: position measurement at CERN CALIFES, beam charge 150 pC (run #1544).
5 Summary
So far 19 of AWAKE BPM reading out electronics module have been assembled,
programmed and gone through bench test. The beam test at CERN CALIFES has
confirmed that the most critical requirement, the RMS resolution for a quite low beam
charge reached to around 4 um which including the beam jitter, this is much better than
specified.
Some firmware features like static calibration, auto-range, etc. will be added. The
electronics module will be installed in AWAKE tunnel during the summer of 2017, and
the commissioning will start in this fall.
References
1. Gschwendtner, E., et al.: AWAKE, the advanced proton driven plasma wakefield acceleration experiment at CERN. Nucl. Instrum. Methods Phys. Res. Sect. A 829, 76–82 (2016)
2. Shafer, R.: Beam position monitoring. In: AIP Conference Proceedings on Accelerator
Instrumentation, Upton, NY (1989)
Scattering Studies with the DATURA
Beam Telescope
1 Introduction
Understanding the scattering of charged particles off nuclei in different materials has been of interest for many decades. Molière [1] postulated a theory without empirical parameters to describe multiple scattering in arbitrary materials. Later, Gaussian approximations to the involved calculations of his theory were developed, e.g. by Highland [2], in order to simplify predictions.
Today, precise tracking detectors allow for the characterisation of unknown materials based on their scattering properties. In this contribution, measurements with the DATURA beam telescope, a high-precision tracking device consisting of silicon pixel sensors, are described. The scattering behaviour of GeV electrons traversing aluminium targets with precisely known thicknesses between 13 µm and 104 µm at the DESY test beam facility is studied. A track reconstruction is performed, enabling the extraction of the particle scattering angles at the target arising from the multiple scattering therein.
2 Experimental Set-Up
The DATURA beam telescope [3] consists of six Mimosa26 [4] monolithic active
pixel sensors (MAPS), a so-called trigger logic unit (TLU) [5], four scintillators
c Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 243–250, 2018.
https://doi.org/10.1007/978-981-13-1313-4_47
244 H. Jansen et al.
$$\Theta_0 = \frac{13.6\ \mathrm{MeV}}{\beta c p}\, z \sqrt{\varepsilon}\, \left(1 + 0.038 \ln \varepsilon \right) \tag{1}$$
where p, βc and z are the momentum, velocity and charge number of the incident particle. For a composite scatterer, the individual contributions to the material budget are summed linearly, giving the total material budget $\varepsilon = \sum_i \varepsilon_i$. The width induced by the i-th scatterer therefore reads
$$\Theta_{0,i} = \sqrt{\frac{\varepsilon_i}{\varepsilon}}\, \Theta_0 = \frac{13.6\ \mathrm{MeV}}{\beta c p}\, z \sqrt{\varepsilon_i}\, \left(1 + 0.038 \ln \varepsilon \right) \tag{2}$$
with the correction term still containing the full material budget and not the fraction represented by the individual scatterer.
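Equations (1) and (2) translate directly into code; the sketch below also verifies the property implied above, namely that the individual widths add in quadrature to the total (units: p in MeV/c, ε the fractional material budget):

```python
import math

def theta0(p_mev, beta, z, eps):
    """Highland width, Eq. (1), in radians."""
    return 13.6 / (beta * p_mev) * z * math.sqrt(eps) * (1 + 0.038 * math.log(eps))

def theta0_i(p_mev, beta, z, eps_i, eps_total):
    """Contribution of the i-th scatterer, Eq. (2); the logarithmic
    correction term keeps the *total* material budget."""
    return 13.6 / (beta * p_mev) * z * math.sqrt(eps_i) * (1 + 0.038 * math.log(eps_total))

# Consistency check: quadratic sum of individual widths equals the total.
eps_parts = [1e-3, 2e-3]
total = sum(eps_parts)
quad = math.sqrt(sum(theta0_i(3000.0, 1.0, 1, e, total) ** 2 for e in eps_parts))
assert abs(quad - theta0(3000.0, 1.0, 1, total)) < 1e-12
```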
Fig. 2. The GBL track model with two unbiased kinks at the SUT.
local derivatives are included at the measurements behind the SUT, which, when
appropriately scaled, reflect two unbiased kinks at the position of the scattering
target, as is shown in Fig. 2. This yields an unbiased value for the kink angle at
the SUT along two directions, which are chosen along x and y.
Fig. 3. The kink angle distribution measured at 3 GeV/c for (A) only air and (B) a
0.1 mm thick aluminium target. A normal distribution is fitted to the centre 98% of
the data yielding the width θmeas of the measured angle distribution.
Figure 4 presents the width of the kink angle distributions for the different material thicknesses and particle energies. All measurements are corrected for air by quadratically subtracting the measurement performed without scattering target at the respective energy: $\theta_{\mathrm{Al}} = \sqrt{\theta_{\mathrm{meas}}^2 - \theta_{\mathrm{meas,air}}^2}$. In this first analysis a constant systematic uncertainty of 3% is estimated on the values of $\theta_{\mathrm{meas}}$ and $\theta_{\mathrm{meas,air}}$ and propagated to $\theta_{\mathrm{Al}}$. Figure 4 (A) shows $\theta_{\mathrm{Al}}$ as a function of the energy, and we observe a monotonically decreasing dependence. The data points are
Fig. 4. θAl as (A) a function of beam momentum and (B) a function of the target
thickness together with Highland prediction. The lower plots show the relative deviation
between our measurement and the prediction.
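The air correction and the propagated 3% systematic described above can be sketched as follows (the input widths here are made-up examples, not the paper's data):

```python
import math

def subtract_air(theta_meas, theta_air, rel_sys=0.03):
    """Quadratic air subtraction with a relative systematic uncertainty
    on both inputs propagated through the difference."""
    theta_al = math.sqrt(theta_meas ** 2 - theta_air ** 2)
    # d(theta_al)/d(theta_meas) = theta_meas / theta_al, and similarly for air
    sigma = math.sqrt((theta_meas * rel_sys * theta_meas) ** 2
                      + (theta_air * rel_sys * theta_air) ** 2) / theta_al
    return theta_al, sigma

theta_al, sigma_al = subtract_air(1.0e-3, 0.4e-3)  # radians, illustrative values
assert theta_al < 1.0e-3   # subtracting the air contribution reduces the width
```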
Fig. 5. (A) The 2D distribution of the kink angle widths and the binned projections in x- and y-direction for an aluminium target of 0.1 mm thickness at 1 GeV/c beam momentum. (B) As in (A), for a coaxial connector.
For Fig. 5 (B), a coaxial connector was placed between the two telescope arms, and the material budget was reconstructed from the multiple-scattering kink angles at the different positions within the material. The structures of the connector are well resolved. Measurements similar to the one presented here have the potential to produce full tomographic images by rotating the object and repeating the measurement for different angles and particle energies [14].
References
1. Moliere, G.: Theorie der Streuung schneller geladener Teilchen I. Einzelstreuung
am abgeschirmten Coulomb-Feld. Z. Naturforsch. A2, 133 (1947)
2. Highland, V.: Some practical remarks on multiple scattering. Nucl. Instrum. Methods 129(2), 497–499 (1975)
3. Jansen, H., et al.: Performance of the EUDET-type beam telescopes. EPJ Tech.
Instrum. 3(1), 7 (2016)
4. Baudot, J., et al.: First test results of MIMOSA-26, a fast CMOS sensor with
integrated zero suppression and digitized output. In: Nuclear Science Symposium
Conference Record 2009, pp. 1169–1173. IEEE (2009)
5. Cussans, D.G.: Description of the JRA1 Trigger Logic Unit (TLU), v0.2c. Technical
report (2009). Accessed 21 Apr 2015
6. Perrey, H.: EUDAQ and EUTelescope – software frameworks for testbeam data
acquisition and analysis. In: Technology and Instrumentation in Particle Physics,
PoS(TIPP2014), p. 353 (2014)
7. EUDAQ Software Developers. EUDAQ Website. http://eudaq.github.io. Accessed
22 June 2017
8. Diener, R., Meyners, N., Potylitsina-Kube, N., Stanitzki, M.: Test Beams at DESY.
http://testbeam.desy.de. Accessed 26 July 2016
9. Bulgheroni, A., et al.: EUTelescope, the JRA1 Tracking and Reconstruction Soft-
ware: A Status Report (Milestone). Technical report (2008). Accessed 22 June
2017
10. EUTelescope Software Developers. EUTelescope Website. http://eutelescope.desy.
de. Accessed 22 June 2017
11. Patrignani, C., Particle Data Group: Review of particle physics. Chin. Phys. C
40(10), 100001 (2016)
12. Blobel, V.: A new fast track-fit algorithm based on broken lines. Nucl. Instr. Meth.
Phys. A 566(1), 14–17 (2006)
13. Kleinwort, C.: General broken lines as advanced track fitting method. Nucl. Instr.
Meth. Phys. A 673, 107–110 (2012)
14. Schütze, P., Jansen, H.: Feasibility study of a track-based multiple scattering
tomography. In: These proceedings (2018)
15. Bisanz, T., Morton, A., Rubinskiy, I.: EUTelescope 1.0: Reconstruction Software for
the AIDA Testbeam Telescope. AIDA-NOTE-2015-009 (2015). https://cds.cern.
ch/record/2000969
Particle Identification
Assembly of a Silica Aerogel Radiator
Module for the Belle II ARICH System
1 Introduction
Our research group has been developing the aerogel-based ring-imaging Cherenkov (ARICH) counter [1]. This device is used to identify charged π and K mesons at momenta between 0.5 and 3.5 GeV/c in the super-B-factory experiment Belle II, which uses the SuperKEKB electron–positron collider at the High Energy Accelerator Research Organization (KEK), Japan. The ARICH system is a proximity-focusing ring-imaging Cherenkov counter that uses silica aerogel as a radiator and hybrid avalanche photo-detectors [2] as position-sensitive photo-sensors; it will be installed as a particle identification subsystem at the forward end cap of the Belle II spectrometer. This system is an upgraded version of the threshold-type aerogel Cherenkov counters used in the previous Belle spectrometer. The design objective is a π/K separation capability exceeding 4σ at a momentum of 4 GeV/c.
The particle identification performance of the ARICH counter is determined by the Cherenkov angular resolution and the number of detected photoelectrons. A scheme for focusing the propagation path of emitted Cherenkov photons
c Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 253–256, 2018.
https://doi.org/10.1007/978-981-13-1313-4_48
254 M. Tabata et al.
2 Results
2.1 Optical Characterization of Mass-Produced Aerogel Tiles
The yield of undamaged tiles was 344 out of 448 (77%). In addition to the required 248 tiles, 96 spare tiles were delivered. Tile damage was classified into physical/mechanical damage (tile cracking, chipping and related phenomena) and chemical/optical damage (e.g., milky tiles due to problems in the sol–gel process). The numbers of physically and chemically damaged tiles were 77 (17%) and 27 (6%), respectively.
Deviations of the refractive indices from the target values were within our expectations. Figure 1 shows the relation between the refractive index and the transmission length. The acceptable deviation from the design refractive indices of 1.045 and 1.055 was ±0.002 for both layers. The measured refractive indices were between 1.0435 and 1.0463 for the upstream tiles and between 1.0532 and 1.0558 for the downstream tiles.
The transmission lengths were sufficiently long to fulfill our requirements. The minimum measured transmission lengths were 40.9 and 32.6 mm for the upstream and downstream tiles, respectively, both longer than the required limits of 40 and 30 mm. The refractive index was measured at the tile corners with the minimum-deviation method using a 405 nm laser [6]. The transmission length at 400 nm was calculated from the transmittance measured along the tile thickness direction with a spectrophotometer [6].
Fig. 1. Relation between the refractive index and transmission length measured at
wavelengths of 405 nm and 400 nm, respectively.
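The acceptance criteria quoted above can be summarised in a small check (tolerances and limits from the text; the example tile values are hypothetical):

```python
# layer: (design index, index tolerance, minimum transmission length / mm)
SPECS = {
    "upstream":   (1.045, 0.002, 40.0),
    "downstream": (1.055, 0.002, 30.0),
}

def tile_accepted(layer, n_measured, trans_len_mm):
    """A tile passes if its index is within tolerance of the design value
    and its transmission length exceeds the required minimum."""
    n_design, n_tol, tl_min = SPECS[layer]
    return abs(n_measured - n_design) <= n_tol and trans_len_mm >= tl_min

assert tile_accepted("upstream", 1.0463, 40.9)        # worst delivered values pass
assert not tile_accepted("downstream", 1.0532, 25.0)  # transmission length too short
```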
3 Conclusion
Progress in the development of the dual-layer silica aerogel radiator module
for the Belle II ARICH counter was reported here. Approximately 450 large-
area (18 cm × 18 cm × 2 cm) hydrophobic aerogel tiles with refractive indices
of either 1.045 or 1.055 were manufactured. The optical characteristics (i.e.,
refractive index and transmission length) of the produced tiles were confirmed
to be suitable for the actual ARICH system. Each aerogel tile was cut into fan
shapes using a water-jet cutter to fit the cylindrical support structure. A total
of 248 aerogel tiles were successfully installed in the 124 segmented containers of
the support structure. The installation of the whole ARICH system within the
Belle II spectrometer is scheduled for late 2017.
Acknowledgments. The authors are grateful to the members of the Belle II ARICH
group for their assistance. We are also grateful to the Japan Fine Ceramics Center,
Mohri Oil Mill Co., Ltd. and Tatsumi Kakou Co., Ltd. for their contributions to mass
producing the aerogel tiles and water jet machining. This study was partially sup-
ported by a Grant-in-Aid for Scientific Research (A) (No. 24244035) from the Japan
Society for the Promotion of Science (JSPS). M. Tabata was supported in part by the
Hypervelocity Impact Facility (former name: Space Plasma Laboratory) at the Insti-
tute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency
(JAXA).
References
1. Pestotnik, R., et al.: The aerogel ring imaging Cherenkov system at the Belle II
spectrometer. Nucl. Instrum. Methods Phys. Res. A 876, 265–268 (2017). https://
doi.org/10.1016/j.nima.2017.04.043
2. Yusa, Y., et al.: Test of the HAPD light sensor for the Belle II Aerogel RICH. Nucl.
Instrum. Methods Phys. Res. A 876, 149–152 (2017). https://doi.org/10.1016/j.
nima.2017.02.046
3. Iijima, T., et al.: A novel type of proximity focusing RICH counter with multiple
refractive index aerogel radiator. Nucl. Instrum. Methods Phys. Res. A 548, 383–390
(2005)
4. Tabata, M., et al.: Silica aerogel radiator for use in the A-RICH system utilized in
the Belle II experiment. Nucl. Instrum. Methods Phys. Res. A 766, 212–216 (2014)
5. Yokogawa, H., Yokoyama, M.: Hydrophobic silica aerogels. J. Non Cryst. Solids 186,
23–29 (1995)
6. Tabata, M., et al.: Hydrophobic silica aerogel production at KEK. Nucl. Instrum.
Methods Phys. Res. A 668, 64–70 (2012)
7. Tabata, M., et al.: Large-area silica aerogel for use as Cherenkov radiators with
high refractive index, developed by supercritical carbon dioxide drying. J. Supercrit.
Fluids 110, 183–192 (2016)
8. Adachi, I., et al.: Construction of silica aerogel radiator system for Belle II RICH
counter. Nucl. Instrum. Methods Phys. Res. A 876, 129–132 (2017). https://doi.
org/10.1016/j.nima.2017.02.036
TORCH: A Large-Area Detector for High
Resolution Time-of-flight
with the distribution that is actually observed. Fast photon detectors are used: micro-channel plate photomultipliers (MCP-PMTs). The target for their intrinsic timing resolution is 50 ps per detected photon. The focusing scheme requires a linear array of detectors, with fine pixellization in one direction and coarse pixellization in the other. For 2-inch tubes (60 mm pitch) the pixellization should be 128 × 8, i.e. 0.4 × 6.6 mm² pixels, to give an angular resolution of ∼1 mrad in both projections. This gives a contribution to the resolution of 50 ps, leading to a total resolution per detected photon of 50 ps (intrinsic) ⊕ 50 ps (pixel size) = 70 ps. For 30 detected photons per track (under the assumption that they are uncorrelated) this would provide 70/√30 ≈ 15 ps resolution on the arrival time of the track at the TORCH detector.
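The resolution budget in this paragraph is a quadrature sum followed by photon-statistics scaling, which can be checked directly:

```python
import math

def quad_sum(*terms_ps):
    """Combine independent timing contributions in quadrature."""
    return math.sqrt(sum(t * t for t in terms_ps))

per_photon = quad_sum(50.0, 50.0)        # intrinsic (+) pixel-size terms
per_track = per_photon / math.sqrt(30)   # 30 uncorrelated photons per track

assert round(per_photon) == 71           # i.e. the ~70 ps quoted above
assert per_track < 15.0                  # meets the ~15 ps per-track target
```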
Fig. 1. (a) Cross-section through the focusing block attached to the edge of the radiator
plate, illustrating the focusing of photons emitted from the plate. (b) Schematic view
of a TORCH module.
2 Application in LHCb
LHCb is the dedicated flavour physics experiment at the LHC, studying CP
violation and rare decays of beauty and charm hadrons. It is a forward spec-
trometer, although operated in pp collider mode. An upgrade in preparation for
2019–20, to move to a fully software trigger, reading out the detector at the
bunch-crossing rate of 40 MHz, with luminosity levelled at 2 × 1033 cm−2 s−1 .
A further “Phase-II” upgrade is now under discussion, to push the luminosity
further towards what is available from the LHC in the HL-LHC era (from 2024
onwards) [2].
Particle identification (in particular, distinguishing the charged hadrons p, K and π) is crucial for much of the hadronic physics of LHCb, and is currently provided by a RICH system. Low-momentum particle ID was previously provided by an aerogel radiator, but this was not suitable for the higher occupancy expected in the upgrade and has been removed. There is therefore currently no positive ID below the kaon threshold in the C4F10 gas radiator of the RICH, at ∼10 GeV/c. The difference in time-of-flight (TOF) between π and K over 10 m is ∼40 ps at 10 GeV/c, so a 15 ps resolution would provide clear (∼3σ) separation.
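The quoted ∼40 ps can be reproduced from relativistic kinematics, using t = L·E/(p·c) for each mass hypothesis (a quick cross-check with PDG masses, not a calculation from the paper):

```python
import math

C = 299792458.0                  # speed of light, m/s
M_PI, M_K = 0.13957, 0.493677    # PDG masses, GeV/c^2
p, L = 10.0, 10.0                # momentum (GeV/c) and flight path (m)

def tof(mass):
    """Time of flight in seconds for momentum p over path L."""
    energy = math.sqrt(p * p + mass * mass)
    return L * energy / (p * C)

delta_t_ps = (tof(M_K) - tof(M_PI)) * 1e12
print(round(delta_t_ps, 1))      # ~37 ps, consistent with the quoted ~40 ps
```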
A “start” time is needed for the TOF measurement: it could be provided by the accelerator clock, but would need to be corrected for the timing spread of the beam bunches. An alternative is to use signals in the TORCH detector itself from the other tracks in the event that come from the primary vertex. Typically most of these are pions, so the reconstruction logic can be reversed: the start time is determined assuming they are all π, after removing outliers from other particle species. In this way a few-ps resolution can be achieved on the start time.
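One minimal way to sketch this idea (a hypothetical outlier-trimmed average, not the actual LHCb reconstruction code; times and cut are made-up, in ps):

```python
def start_time(pion_hypothesis_times, cut_ps=100.0):
    """Estimate the event start time assuming all tracks are pions,
    discarding outliers (non-pions) far from the median."""
    med = sorted(pion_hypothesis_times)[len(pion_hypothesis_times) // 2]
    kept = [t for t in pion_hypothesis_times if abs(t - med) < cut_ps]
    return sum(kept) / len(kept)

times = [0.0, 5.0, -3.0, 2.0, 600.0]   # last entry: a kaon mis-assigned as a pion
t0 = start_time(times)
assert abs(t0 - 1.0) < 1e-9            # outlier removed, mean of the rest
```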
Fig. 2. (a) Layout of the LHCb spectrometer along the beam axis, as proposed for
the Phase-II upgrade [2]; the TORCH detector is sited just after the main tracker.
(b) Transverse layout of TORCH modules in LHCb, to cover the full acceptance.
At the foreseen location in LHCb (10 m from the interaction point) an area of 5 × 6 m² has to be covered, which is not feasible with a single plate; in any case, an aperture is required for the beam pipe. It is proposed to tile the surface using 18 identical modules (each 66 × 250 cm²), as shown in Fig. 2. This will require 198 photon detector tubes, with ∼100k channels in total. Reflections from the transverse edges of modules will lead to ambiguities in the reconstruction, but at a level that can be resolved by the pattern recognition. At the luminosity expected in the upgrade of LHCb there will be a high track multiplicity of over 100 charged tracks per event, but the performance of TORCH in these conditions has been studied with simulation and remains excellent. Fast timing will also be very useful for pile-up suppression at high luminosity, as is being explored by the other experiments at the LHC [3].
260 R. Forty et al.
4 Test-Beam Studies
A small prototype module has been constructed for beam tests at the CERN
PS–T9 area. Optical components were delivered by Schott, with a second-phase
prototype MCP-PMT from Photek.
TORCH: A Large-Area Detector for High Resolution Time-of-flight
Fig. 3. Data from the test beam: hit pattern in the photon detector for selected pions
(a) and protons (b); reconstructed time distribution versus vertical position in one of
the columns of pixels in the photon detector for pions (c) and protons (d).
The radiator plate is 35 × 12 × 1 cm3 and was coupled to the focusing block
using silicone (Pactan 8030). The time-of-flight
could be independently determined using dedicated timing stations 10 m apart,
which allowed the p/π components of the beam to be separated. The detected
hits seen in the MCP-PMT match the expected pattern (taking into account
reflections from edges), as shown in Fig. 3. The small difference in Cherenkov
angle for π and p at 5 GeV/c is visible comparing (a) and (b). The time measured
for each cluster is plotted versus vertical position along one column of pixels in
(c) and (d), and the reflections are clearly separated. The p–π time-of-flight
difference of about 600 ps is cleanly resolved.
Projecting along the timing axis relative to the prediction for the earliest pion
signal, for each column of pixels (using the nearest timing station as reference)
gives a core distribution with σ ≈ 110 ps. This is before subtraction of the
contribution from the timing reference itself, so we are approaching the target
resolution of 70 ps/photon. Small tails seen in the timing distribution are due
to imperfections in the calibration and back-scattering effects.
A large prototype of a TORCH module on the scale that would be
required for LHCb is now under construction, with full width and half height:
125 × 66 × 1 cm3 . It will be equipped with 10 MCP-PMTs, for a total of over
5000 channels. Optical components have been delivered by Nikon, see Fig. 4.
Detailed measurements provided by the supplier match the specifications. This
final deliverable of the R&D project will be ready for testing in beam over the
next year.
Fig. 4. (a) Radiator plate for the large prototype, after delivery to CERN. (b) Detail
of the design for the large prototype, indicating the various components mounted at
the upper edge of the radiator plate.
5 Conclusions
The TORCH detector concept adds precise angular and timing information to
a DIRC, to provide high-precision time-of-flight over large areas. It is included
in the plans for a future upgrade of the LHCb experiment. Suitable fast photon
detectors have been developed with industry, with final prototypes expected to
be delivered imminently. Test-beam studies have achieved close to the nominal
performance, and a full-scale prototype module is under construction for testing
over the next year: its success should provide a solid foundation for proposing
the full detector in LHCb. It is an exciting time for the project!
References
1. Varner, G., (Belle II), Schmidt, M., (PANDA): presentations at this conference
2. Expression of Interest for a Phase-II upgrade of LHCb, CERN-LHCC-2017-003
3. Lenzi, B., (ATLAS), Bornheim, A., (CMS): presentations at this conference
4. Milnes, J., (Photek): presentation at this conference
5. Gys, T., et al.: NIM A766, 171 (2014)
6. Craven, C., (Incom): presentation at this conference
7. Gys, T.: presentation at RICH 2016, Bled
8. Anghinolfi, F., et al.: NIM A533, 183 (2004)
9. Gao, R., et al.: JINST 10, C02028 (2015)
10. Castillo García, L.: JINST 12, C02005 (2017)
High Rate Time of Flight System
for FAIR-CBM
1 Introduction
1.1 Introduction to FAIR-CBM
The exploration of the QCD phase diagram at high net-baryon densities is currently
one of the central topics in nuclear physics. Several experimental programs, such as
RHIC-STAR [1], CERN-SPS [2] and NICA [3], are devoted to the search for the QCD
critical endpoint in the phase diagram. Because of the limited luminosity of these
experiments, rare probes with very low production cross sections remain inaccessible.
In the future, the Compressed Baryonic Matter (CBM) experiment at the Facility for
Antiproton and Ion Research (FAIR) will be a high-rate fixed-target experiment operated
at ion beam intensities up to 10⁹/s, which is sufficient to acquire data for rare probes
such as charmed hadrons, multi-strange baryons, di-electrons and di-muons [4].
The international Facility for Antiproton and Ion Research (FAIR), under construction
at the existing GSI site in Darmstadt, Germany, will become a research platform in the
fields of nuclear, hadron, atomic and plasma physics [5]. It consists of two synchrotrons
(SIS100/SIS300) with magnetic rigidities of 100 Tm and 300 Tm, delivering primary Au
beams of up to 11A GeV from SIS100 and 35A GeV from SIS300. The minimum available
ion beam energy is about 2A GeV. CBM is located just behind SIS100/SIS300,
and the extracted beam will reach intensities of up to 10⁹ Au ions per second.
The CBM experimental setup is shown in Fig. 1. It is a multi-purpose detector able
to measure hadrons, electrons and muons in heavy-ion collisions. It consists of a
series of subsystems, as is common for detector systems in nuclear and high energy
physics; the individual detectors are required to be highly granular, fast and
radiation-hard. CBM also places high demands on the read-out electronics, which
will run in a free-streaming mode instead of a triggered mode: all signals above
threshold, together with their time stamps, are pushed to the data acquisition system.
Fig. 2. Front view of the ToF-wall. Modules are marked by dark lines, the red crossed boxes
denote the non-overlapping active areas of the single MRPC detectors inside. The yellow frames
represent the overlap of the MRPCs [4].
In order to improve the rate capability of a normal MRPC, which can only operate
below several hundred Hz/cm2, two main approaches are available: reducing the
bulk resistivity of the electrode material, or reducing the average avalanche charge.
A low-resistive silicate glass (TUYK-LRG10) with a bulk resistivity on the order of
10¹⁰ Ω cm was produced at Tsinghua University [6]. This glass, characterized by an
ohmic behavior and stability with transported charge, contains oxides of transition
elements. It has a black color and is opaque to visible light, properties commonly
attributed to glasses exhibiting a form of electron conductivity. As shown in Table 1,
the surface of this glass is very smooth, with a roughness of less than 10 nm. To study
the long-term stability of the glass resistivity with increasing charge density
transported across it, a 34-day test was performed. The accumulated charge was 1 C/cm2,
roughly corresponding to the CBM lifetime of 5 years of operation at the maximum
particle flux of about 20 kHz/cm2. Although the conduction is expected to be mainly
electronic, the bulk resistivity did increase with time/charge, but by no more than a
(tolerable) factor of 2, even for such a large transported charge density.
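The quoted correspondence between accumulated charge and detector lifetime can be checked with a back-of-the-envelope estimate; the average charge per hit below is an assumed illustrative value, and continuous operation at the maximum flux is assumed:

```python
SECONDS_PER_YEAR = 3.156e7
flux_hz_cm2 = 20e3   # maximum particle flux [Hz/cm^2]
years = 5.0
q_avg_c = 0.32e-12   # assumed average transported charge per hit [C]

# accumulated charge density = flux * charge per hit * operating time
accumulated_c_cm2 = flux_hz_cm2 * q_avg_c * years * SECONDS_PER_YEAR
print(round(accumulated_c_cm2, 2))  # ~1 C/cm^2, as used in the ageing test
```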
Two MRPC prototypes based on this low-resistive glass were produced and tested
in a beam test at ELBE, Dresden-Rossendorf, Germany, to examine their performance
at high rates. As shown in Fig. 3, the efficiency remains above 90% and the time
resolution is about 80 ps, even at a flux of 70 kHz/cm2.
266 P. Lyu and Y. Wang
Fig. 3. Measured efficiencies and time resolutions for different runs as a function of the average
particle flux determined with reference scintillators [6].
Based on the low-resistive glass technique, two main types of MRPC prototypes were
designed, aimed at the TOF-wall regions with fluxes above 1 kHz/cm2. For the outer
zone, Tsinghua University has developed a double-ended-readout strip MRPC, named
MRPC3a in the TOF wall [7]. It consists of two mirrored stacks of resistive plates
fitted between three parallel readout PCBs. In each stack, 0.25 mm nylon monofilament
spacers divide the five low-resistive glass plates into four homogeneous gas gaps.
The outer surfaces of the top and bottom plates of each stack are coated with
colloidal graphite spray to form the high-voltage electrodes. There are 24 strips
on each readout PCB; each strip is 270 mm long and 7 mm wide, with a 3 mm interval.
A ground plane is placed on the MRPC's electrode, and the feedthrough is carefully
designed to match the 100 Ω impedance of the PADI front-end electronics, minimizing
noise caused by reflections. A sectional sketch of this prototype is shown in Fig. 4(a).
Another prototype, called MRPC2, was developed by the NIPNE group for the inner
zone [8]. Like the MRPC3a, this counter has a fully differential, symmetric,
double-stack architecture. Five 140 μm thick gas gaps are formed in each stack
with the low-resistive glass plates and fishing lines.
Fig. 4. (a) Photo of the MRPC3a prototype designed by Tsinghua University. (b) Photo
of the MRPC2 prototype designed by the NIPNE group.
The signals are read out from both strip ends on the readout PCBs, which feature
64 electrode strips with a pitch of 4.72 mm (2.18 mm width/2.54 mm gap), defining
an active length of 302 mm.
Both prototypes were tested in the October 2014 GSI beam test with a 1.1A GeV
152Sm beam [9]. The detailed layout of the beam test is shown in Fig. 5. The tested
MRPC modules were divided into two setups: the MRPC3a module was in the lower
setup and the MRPC2 in the upper one. The beam impinged on a 0.3 mm/4 mm/5 mm
Pb target, and a flux rate of several hundred Hz/cm2 was available.
Fig. 5. Beam-time setup at GSI, October 2014. MRPC2 and BUC-Ref are in the upper
setup, while HD-P2 and MRPC3a are in the lower setup. PMTs are used for counting-rate
calibration. A diamond detector is placed in front of the Pb target to record the
start time of each event [7].
In order to extract the performance of the MRPCs from the raw data, a calibration
macro based on CBM-ROOT, developed by the CBM-TOF group, was applied. The
calibration applies a series of corrections, including time-walk, gain and velocity
corrections. After iterating over all of these corrections, fully calibrated data
for the analyzed detector are obtained.
The performance of the MRPC3a is shown in Fig. 6 [10]. The efficiency is estimated
as the ratio of the number of matched hits between MRPC3a, HD-P2 and the diamond
counter to the number of matched hits between HD-P2 and the diamond counter. It
stays at 97%, and the cluster size remains at a low value of about 1.6. Assuming
equal performance of the reference counter, the time resolution of the MRPC3a is
about 50 ps.
Fig. 6. Time resolution (around 50 ps), efficiency (97%) and clustersize (1.6 to 1.7) of MRPC3a
under different FEE (PADI) electronics threshold [9].
As for the MRPC2 [11], an efficiency above 98% is still observed at the largest
threshold used in these measurements (240 mV), where the mean cluster size is 3.
The system time resolution of MRPC2 improves slightly with increasing FEE threshold;
the best value is 74 ps. Assuming that MRPC2 and BUC-Ref have the same time
resolution, an intrinsic timing resolution of 52 ps is obtained (Fig. 7).
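The step from the measured system resolution to the intrinsic single-counter value is the usual quadrature split between two counters of equal resolution, sketched here:

```python
import math

# sigma_sys^2 = sigma_a^2 + sigma_b^2; with sigma_a = sigma_b,
# the single-counter resolution is sigma_sys / sqrt(2)
sigma_sys_ps = 74.0  # measured counter-plus-reference system resolution
sigma_single_ps = sigma_sys_ps / math.sqrt(2.0)
print(round(sigma_single_ps))  # 52
```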
Fig. 7. System time resolution (around 75 ps), efficiency (98%) and clustersize (3 to 4) of
MRPC2 under different FEE (PADI) electronics threshold [10].
5 Conclusion
To cope with an unprecedented flux rate, CBM-TOF places stringent requirements on
the MRPC counters: the performance must be maintained (efficiency above 95%, system
time resolution better than 80 ps) at flux rates up to 30 kHz/cm2. With the help of
the newly developed low-resistive glass, these demands were fully met. Prototypes
were designed for the different regions of the TOF wall and were proven to satisfy
the CBM-TOF requirements. In the near future, some of these prototypes will be
shipped to RHIC to form the eTOF of STAR, which will use about 10% of the CBM-TOF
modules, including the read-out chains, and participate in the RHIC Beam Energy
Scan II (BES-II) starting in 2019.
References
1. Schmach, A., et al.: Highlights of the beam energy scan from STAR. arXiv:1202.2389v1
(2011)
2. Aduszkiewicz, A., et al.: NA61/SHINE at the CERN SPS: plans, status and first results. Acta
Phys. Pol. B 43, 635 (2012)
3. NICA White Paper (2011). http://nica.jinr.ru/files/WhitePaper.pdf
4. The CBM collaboration: Technical Design Report for the CBM Time-of-Flight System
(TOF) (2014)
5. FAIR Baseline Technical Report (2006). http://www.gsi.de/fair/reports/btr.html
6. Wang, J., et al.: Development of high-rate MRPCs for high resolution time-of-flight systems.
Nucl. Instr. Methods A 713, 40 (2013)
7. Wang, Y., et al.: Development and test of a real-size MRPC for CBM-TOF. JINST 11,
C08007 (2016)
8. Petris, M., et al.: Cosmic-ray and in-beam tests of 100 Ohm transmission line MGMSRPC
prototype developed for the inner zone of CBM-TOF. CBM Progress Report 2014, p. 89
(2015)
9. Deppner, I., et al.: Results from a heavy ion beam test @ GSI. CBM Progress Report 2014,
p. 86 (2015)
10. Lyu, P., et al.: Performance of Strip-MRPC for CBM-TOF in beam test. CBM Progress
Report 2015, p. 92 (2016)
11. Petris, M., et al.: Performance of MGMSRPC for the inner zone of the CBM-TOF wall in
heavy ion beam tests. CBM Progress Report 2015, p. 95 (2016)
The Aerogel Ring Image Cherenkov
Counter for Particle Identification
in the Belle II Experiment
1 Introduction
The Belle II experiment [1] will start observing e+e− collisions from the SuperKEKB
collider in 2018 to search for New Physics beyond the Standard Model using
high-precision measurements of flavor systems. The Aerogel
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 270–274, 2018.
https://doi.org/10.1007/978-981-13-1313-4_51
Two layers of aerogel radiator with different refractive indices [5] are adopted
to focus the Cherenkov ring on the surface of the photon sensors. A method to
produce silica aerogel tiles with high transparency and flexible refractive index
was newly developed for the ARICH counter. The thickness and refractive indices of
the radiators are optimized to 20 mm and to 1.045 for the upstream and 1.055 for
the downstream layer, respectively. 248 aerogel tiles with an approximate size of
18 × 18 cm were produced to cover the effective region of the ARICH counter.
Installation of the aerogel tiles into the support structure of the counter was
completed in 2016.
The Hybrid Avalanche Photo Detector (HAPD) [6], co-developed with
Hamamatsu Photonics, is adopted as the position-sensitive photon sensor. The
HAPD consists of a Super Bialkali photocathode with 28% quantum efficiency, a
vacuum tube that accelerates the photoelectrons to provide a bombardment gain, and
pixelated APD pads providing an avalanche gain. The total amplification gain is
estimated to be about 45000. The APD sensor is divided into 144 readout channels
of 5 × 5 mm2 pixels. The HAPD was confirmed to be operational in the magnetic
field, and its position resolution was improved by suppressing crosstalk due to
photoelectron back-scattering and by canceling the non-uniformity of the electric
field at the barrel of the vacuum tube. In addition, the structure of the APD sensor
pad was modified to withstand the neutron and gamma radiation environment.
Finally, all 420 HAPDs were confirmed to fulfill the requirements and were assembled
with the readout electronics for installation in June 2017.
The power supply system [7] for the HAPDs is also important for a stable operation
of the ARICH counter. We adopted the CAEN A1590N for the high voltage and the
A7042P for the bias and guard voltages, respectively. A software power-supply
control system with a control GUI was developed based on a software framework
of the Belle II data acquisition system.
A two-stage readout electronics chain [8] for the HAPDs is introduced in order
to process the signals, merge the data and reduce the number of cables to the
272 T. Konno et al.
Fig. 1. Results of the cosmic ray test. The left plot shows an event display of
Cherenkov ring images on the HAPD surface; the right plot shows the distribution
of reconstructed Cherenkov angles.
In addition, stable operation of the power supply system and of the readout slow
control was also established. The mean number of detected photoelectrons and the
mean Cherenkov angle are measured to be about 13 p.e. and 300 mrad, respectively,
as shown in the right plot of Fig. 1, consistent with cosmic muons at momenta
between 0.5 and 4 GeV/c. We therefore conclude that successful operation of
the ARICH system has been established with the test setup.
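The measured angle is consistent with the radiator indices given above. A quick check using cos θc = 1/(nβ), with the muon momentum taken as an assumed representative value from the quoted range:

```python
import math

M_MU = 0.10566  # muon mass [GeV/c^2]

def cherenkov_angle_mrad(n, p_gev, m_gev):
    """Cherenkov angle from cos(theta_c) = 1/(n*beta), in mrad."""
    beta = p_gev / math.sqrt(p_gev**2 + m_gev**2)
    return 1e3 * math.acos(1.0 / (n * beta))

# a 4 GeV/c muon in the n = 1.045 upstream aerogel layer
theta = cherenkov_angle_mrad(1.045, 4.0, M_MU)
print(round(theta))  # ~293 mrad, close to the measured ~300 mrad
```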
References
1. Abe, T., et al.: Belle II Technical Design Report (2010), arXiv:1011.0352
[physics.ins-det], KEK Report 2010-1
2. Abashian, A., et al.: The Belle detector. Nucl. Instrum. Meth. A479, 117–232
(2002)
3. Pestotnik, R., et al.: The aerogel Ring Imaging Cherenkov system at the Belle II
spectrometer. Nucl. Instrum. Methods Phys. Res. A 876, 265–268 (2017). https://
doi.org/10.1016/j.nima.2017.04.043. (in press)
4. Iijima, T., et al.: A novel type of proximity focusing RICH counter with multiple
refractive index aerogel radiator. Nucl. Instrum. Methods Phys. Res. A 548, 383–
390 (2005)
5. Tabata, M., et al.: Silica aerogel radiator for use in the A-RICH system utilized
in the Belle II experiment. Nucl. Instrum. Methods Phys. Res. A 766, 212–216
(2014)
6. Yusa, Y., et al.: Test of the HAPD light sensor for the Belle II Aerogel RICH. Nucl.
Instrum. Methods Phys. Res. A 876, 149–152 (2017). https://doi.org/10.1016/j.
nima.2017.02.046. (in press)
7. Yonenaga, M., et al.: Development of slow control system for the Belle II ARICH
counter. Nucl. Instrum. Methods Phys. Res. A 876, 241–245 (2017). https://doi.
org/10.1016/j.nima.2017.03.037. (in press)
8. Nishida, S., et al.: Readout ASICs and electronics for the 144-channel HAPDs for
the Aerogel RICH at Belle II. Phys. Procedia 37, 1730–1735 (2012)
9. Yamada, S., et al.: Data acquisition system for the Belle II experiment. IEEE
Trans. Nucl. Sci. 62, 1175–1180 (2015)
10. Hataya, K., et al.: Development of the ARICH monitor system for the Belle II
experiment. Nucl. Instrum. Methods Phys. Res. A 876, 176–180 (2017). https://
doi.org/10.1016/j.nima.2017.02.070. (in press)
Endcap Disc DIRC for PANDA at FAIR
Abstract. The Endcap Disc DIRC (EDD) has been developed to provide
excellent particle identification in the future PANDA experiment
by separating pions and kaons up to a momentum of 4 GeV/c with a
separation power of 3 s.d. The detector is placed in the forward endcap of
the PANDA target spectrometer. It consists of a fused silica plate and
focusing elements placed at the outer rim, which focus the Cherenkov
light onto the photocathodes of the attached MCP-PMTs. A compact
and fast readout of the signals is realized with dedicated ASICs. The
performance has been studied and validated with different prototype setups
at various testbeam facilities.
1 Detector Design
The future Disc DIRC detector for the PANDA [1] experiment has a compact
and modular design consisting of four independent quadrants of fused silica
Cherenkov radiator, 20 mm thick with a surface roughness of less than 1 nm. It
is designed to separate pions and kaons in the momentum range of 1–4 GeV/c
with a separation power of 3 s.d., covering polar angles between 5◦ and 22◦ .
The detector is shown in Fig. 1. Cherenkov light created inside the radiator
disk is internally reflected to the outer rim of each quadrant, where 96
focusing elements (FELs) are attached. Every FEL is bonded to a bar connected
to the radiator disk and has a cylindrical mirror coating on its backside. The
Cherenkov photons captured in the bars are focused onto a focal plane formed by
the photocathode of the MCP-PMT. Each MCP-PMT contains a segmented
anode with 3 × 100 pixels for acquiring the Cherenkov photon hit pattern of the
276 M. Schmidt et al.
Fig. 1. Concept of the Disc DIRC for PANDA showing one quadrant with radiator
and FELs (left), the functionality of the focusing elements (center), and a sketch of the
attached MCP-PMTs plus readout (right).
traversing particle. From the measured hit pattern the mean Cherenkov angle
and the likelihood values for different particle hypotheses are reconstructed.
A long-pass color filter in front of the MCP-PMT entry window, which filters
out photons below a specific wavelength, increases the detector resolution, which
largely depends on the chromatic error inside the fused silica radiator and on the
number of measured photon hits. For the signal readout, TOFPET ASICs [2]
with a time resolution of 25 ps are used.
2 Performance Analysis
The detector performance has been simulated in the PandaRoot framework [3]
including all wavelength dependent parameters. The important parameters are
the transmission values for fused silica, the reflectivity of the mirrors and the
MCP-PMT detection efficiency with an assumed collection efficiency of 65%.
Two candidate photocathodes with a maximum quantum efficiency of 30%
have been studied with Monte Carlo simulations: a blue photocathode with
its maximum between 250 nm and 400 nm, and a green photocathode with
enhanced sensitivity between 400 nm and 500 nm, rising between 330 nm and
370 nm. One result of the simulation is that the best resolution is obtained
with a long-pass filter cut-off wavelength of around 360 nm.
For this filter value the separation power for pions and kaons has been calculated
for all combinations of azimuthal angle φ and polar angle θ, as shown in the left
panel of Fig. 2. The average separation power is 4.4 s.d. A one-dimensional
projection is presented in the right panel of Fig. 2, showing the separation power
as a function of the polar angle θ for several particle momenta and two choices of
MCP-PMT photocathode. For very large angles above θ = 21◦ the
separation power drops slightly below 3 s.d. at the highest momentum, due to
larger geometrical errors affecting the reconstruction algorithm.
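The order of magnitude of the separation power can be reproduced from the Cherenkov angles alone. In this rough sketch, n = 1.47 is an assumed effective index for fused silica, and 1.8 mrad is taken as the per-event resolution on the mean Cherenkov angle:

```python
import math

M_PI, M_K = 0.13957, 0.49368  # particle masses [GeV/c^2]
N_SIO2 = 1.47                 # assumed effective index of fused silica

def theta_c(p_gev, m_gev, n=N_SIO2):
    """Cherenkov angle in the radiator, cos(theta_c) = 1/(n*beta)."""
    beta = p_gev / math.sqrt(p_gev**2 + m_gev**2)
    return math.acos(1.0 / (n * beta))

p = 4.0  # GeV/c, the upper end of the PID range
dtheta_mrad = 1e3 * (theta_c(p, M_PI) - theta_c(p, M_K))
separation_sd = dtheta_mrad / 1.8  # assumed resolution on the mean angle
print(round(dtheta_mrad, 1), round(separation_sd, 1))  # about 6.5 mrad, ~3.6 s.d.
```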
3 Testbeam Results
The first particle identification with a Disc DIRC prototype was achieved in 2012
at the T9 testbeam at CERN [4]. In 2015 an upgraded prototype, consisting
of a 500 mm square radiator plate with fused silica components and a TRB3
readout, was tested in the same beam line. The measured single-photon resolution
compared well with the Monte Carlo data [5]. For the following testbeam in 2016
at DESY, with a 3 GeV/c electron beam, the prototype design was comparable to
that of the final detector regarding optical precision and the TOFPET-based
readout electronics.
Fig. 3. Comparison between the simulated and measured photon yield (left) and single
photon resolution (right) as a function of the polar angle.
The left side of Fig. 3 shows the comparison of the single-photon resolution
between the testbeam and Monte Carlo data for an angle scan. The
distance from the particle punch-through point to the FEL was 450 mm. The
resolution changes as a function of the polar angle due to an increasing number
of reflections inside the FELs. On the right side of Fig. 3 the comparison of the
photon yield is presented. The simulation output incorporates the results of an
independent charge-sharing analysis, which studied the number of hits from the
detected charge cloud of a single photoelectron in the MCP-PMT.
A position scan perpendicular to the FELs was performed and used for an
event-mixing study simulating a 30-FEL readout with 30 equidistant positions.
The large background in the testbeam data could be handled by applying a
cut on the reconstructed coarse time and a truncated-mean method to derive a
mean Cherenkov angle for each mixed event.
Figure 4 compares the single-photon resolution with the distribution of the
truncated means of the mixed events. The resolution of the mean value scales
approximately as 1/√N, as expected. The obtained resolution of σ = 2.5 mrad is
within a factor of 2 of the anticipated performance of the fully equipped final
Disc DIRC, which has a resolution of σ = 1.8 mrad at the same momentum. The
remaining discrepancy can be attributed to the absence of a chromatic filter, the
smaller number of FELs, and the larger multiple scattering of the electrons inside
the radiator disk. The resolution of the testbeam data agrees with the Monte Carlo
simulations. The photon yields of 18 (14) hits per event from the measured
(Monte Carlo) data also agree within the precision of the simulation.
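The 1/√N scaling of the mean can be illustrated with a toy Monte Carlo; the numbers below are illustrative, and a plain mean is used in place of the truncated mean, with no background:

```python
import math
import random

random.seed(42)

SIGMA_SPR = 8.0  # assumed single-photon resolution [mrad] (illustrative)
N_PHOTONS = 30   # one photon per FEL position, as in the event-mixing study
N_EVENTS = 20000

means = []
for _ in range(N_EVENTS):
    # smear the true Cherenkov angle (taken as 0) photon by photon
    hits = [random.gauss(0.0, SIGMA_SPR) for _ in range(N_PHOTONS)]
    means.append(sum(hits) / N_PHOTONS)

mu = sum(means) / N_EVENTS
sigma_mean = math.sqrt(sum((m - mu) ** 2 for m in means) / N_EVENTS)
# the width of the per-event mean approaches SIGMA_SPR / sqrt(N_PHOTONS)
print(round(sigma_mean, 2), round(SIGMA_SPR / math.sqrt(N_PHOTONS), 2))
```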
Fig. 4. Single photon resolution (left) and the average Cherenkov angle (right) from
the event combination of 30 equidistant positions on the radiator disk.
References
1. PANDA Collaboration, Technical Progress Report, FAIR-ESAC/Pbar (2005)
2. Rolo, M.D., et al.: TOFPET ASIC for PET applications. J. Instrum. 8, C02050
(2013)
3. Spataro, S.: Event Reconstruction in the PandaRoot framework. J. Phys. Conf. Ser.
396, 022048 (2012)
4. Föhl, K., et al.: First particle identification with a Disc-DIRC detector. Nucl.
Instrum. Methods Phys. Res. Sect. A 732, 346–351 (2013). https://doi.org/10.1016/
j.nima.2013.08.023
5. Etzelmüller, E., et al.: Tests and developments of the PANDA Endcap Disc DIRC.
J. Instrum. 11, C04014 (2016)
The NA62 RICH Detector
Construction and Performance
Andrea Bizzeti 1,2
1 Department of Physics, Informatics and Mathematics,
University of Modena and Reggio Emilia, Modena, Italy
2 Istituto Nazionale di Fisica Nucleare – Sezione di Firenze,
Sesto Fiorentino (FI), Italy
andrea.bizzeti@fi.infn.it
Fig. 1. Schematic view of the RICH detector. The hadron beam enters from the left
and travels throughout the length of the detector in an evacuated beam pipe. A zoom
of one of the two disks hosting the photomultipliers is shown on the left; the mirror
mosaic is made visible through the vessel on the right. From [2].
Fig. 2. (left) Scheme of the mirror orientation system: two ribbons connected to
piezoelectric motors pull the mirror micrometrically, while a third, vertical ribbon
prevents on-axis rotation. (centre and right) Difference between the centre of the
Čerenkov ring reconstructed by the RICH PMTs and its expected position based on the
track direction reconstructed by the spectrometer, after tuning the mirror orientation.
“+X side” and “–X side” indicate the two locations of the PMTs, to the left and to the
right of the beam pipe. Each point represents a single mirror.
diameter active region and are packed in a hexagonal lattice with 18 mm pixel
size. Each PMT has a bialkali cathode, sensitive between 185 and 650 nm,
with about 20% peak quantum efficiency at 420 nm. Its 8-dynode system
provides a gain of 1.5 × 10⁶ at 900 V supply voltage, with a time jitter of 280 ps
FWHM. The PMTs are located in air outside the vessel and are separated from the
neon by a quartz window; an aluminized-mylar Winston cone [7] is used to reflect
incoming light onto the active area of each PMT. The front-end electronics
consists of 64 custom-made boards, each of them equipped with four 8-channel
Time-over-Threshold NINO discriminator chips [8]. The readout is provided by
4 TEL62 boards, each of them equipped with sixteen 32-channel HPTDCs [9];
a fifth TEL62 board receives a multiplicity output (logic OR of the 8 channels)
from each NINO discriminator and is used for triggering. The time resolution
of Čerenkov rings has been measured by comparing the average times of two
subsets of the PMT signals, resulting in σt (ring) = 70 ps.
2 Particle Identification
In order to assess the RICH performance, the Čerenkov ring radius (which
depends on particle velocity) measured by the RICH is related to the track
momentum measured by the magnetic spectrometer. Figure 3 (left) shows a clear
separation between different particles in the momentum range 15–35 GeV/c.
Pion-muon separation is achieved by cutting on the particle mass, calculated
from the measured particle velocity (from the Čerenkov ring radius) and
momentum. The charged-pion identification efficiency επ and the muon
mis-identification probability εμ are plotted in Fig. 3 (right) for several values of
the mass cut. At επ = 90% the muon mis-identification probability is εμ ≈ 1%.
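The mass calculation proceeds from the ring radius and momentum as sketched below; the focal length and refractive index are assumed round numbers for illustration, not the values from the paper:

```python
import math

F_MM = 17000.0      # assumed effective mirror focal length [mm]
N_NEON = 1.000062   # assumed refractive index of the neon radiator

def mass_gev(ring_radius_mm, p_gev):
    """Particle mass from Cherenkov ring radius and spectrometer momentum."""
    theta_c = ring_radius_mm / F_MM            # small-angle approximation
    beta = 1.0 / (N_NEON * math.cos(theta_c))  # from cos(theta_c) = 1/(n*beta)
    if beta >= 1.0:
        raise ValueError("radius too large for this refractive index")
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

# with these assumed optics, a 25 GeV/c track with a ~164 mm ring
# comes out near the charged-pion mass (~0.140 GeV/c^2)
print(round(mass_gev(163.8, 25.0), 3))
```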
Fig. 3. (left) Čerenkov ring radius versus particle momentum. Vertical lines delimit
the momentum fiducial region 15–35 GeV/c. Electrons, muons, charged pions and scat-
tered beam kaons are clearly visible. Particles with momentum higher than 75 GeV/c
correspond to halo muons. (right) Pion identification efficiency versus muon mis-
identification probability.
3 Conclusions
The NA62 RICH detector was installed in 2014 and commissioned in autumn
2014 and 2015; it is fully operational since the 2016 run. First performance stud-
ies with collected data show that the RICH fulfilled the expectations, achieving
a time resolution of 70 ps and a factor ∼ 100 in muon suppression.
Acknowledgements. The construction of the RICH detector would not have been
possible without the enthusiastic work of many technicians from the University and
INFN of Perugia and Firenze, the staff of CERN, and the collaboration with Vito
Carassiti from INFN Ferrara. Special thanks go to the NA62 collaboration for its full
dedication to the construction, commissioning and running of the experiment.
References
1. Anelli, G., et al.: Proposal to measure the rare decay K + → π + ν ν̄ at the CERN
SPS, CERN-SPSC-2005-013, CERN-SPSC-P-326 (2005)
2. Cortina Gil, E., et al.: NA62 Collaboration. J. Instr. 12, P05025 (2017). https://
doi.org/10.1088/1748-0221/12/05/P05025
3. Buras, A.J., et al.: JHEP 1511, 166 (2015)
4. Artamonov, A.V., et al.: E949 Collaboration. Phys. Rev. D 79, 092004 (2009)
5. Anzivino, G., et al.: Nucl. Instr. Meth. Phys. Res. A 538, 314 (2008)
6. Angelucci, B., et al.: Nucl. Instr. Meth. Phys. Res. A 621, 205 (2010)
7. Hinterberger, H., Winston, R.: Rev. Sci. Instr. 37, 1094 (1966)
8. Anghinolfi, F., et al.: Nucl. Instr. Meth. Phys. Res. A 533, 183 (2004)
9. Christiansen, J.: High Performance Time to Digital Converter, CERN/EP-MIC
(2004). https://cds.cern.ch/record/1067476/files/cer-002723234.pdf
Barrel Time-of-Flight (TOF) Detector
for the P̄ANDA Experiment at FAIR
1 Introduction
The P̄ANDA experiment [1] at the new international accelerator complex, the
Facility for Antiproton and Ion Research (FAIR), will perform high-precision
experiments in the strange and charm quark sector. For this purpose, a cooled
antiproton beam with momenta from 1.5 GeV/c to 15 GeV/c will be collided with a
fixed proton or nuclear target, allowing hadron production and formation experiments
at a luminosity of up to 2 · 10³² cm⁻²s⁻¹. The scientific program includes:
Fig. 1. (left) For each detected signal the track creation times according to the
different mass assumptions are calculated. The best conformity corresponds to the
most probable mass configuration [7]. (right) Calculated separation power of the
P̄ANDA Barrel TOF counter as a function of the transverse momentum of the particle.
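The conformity test in the left panel can be illustrated with a toy sketch; the brute-force search and the example tracks below are illustrative, not the P̄ANDA algorithm:

```python
import itertools
import math

C = 0.299792458  # speed of light [m/ns]
MASSES = {"e": 0.000511, "pi": 0.13957, "K": 0.49368, "p": 0.93827}  # GeV/c^2

def creation_time(t_hit_ns, path_m, p_gev, m_gev):
    """Track creation time t0 = hit time minus the time of flight
    expected under the assumed mass."""
    beta = p_gev / math.sqrt(p_gev**2 + m_gev**2)
    return t_hit_ns - path_m / (beta * C)

def best_hypotheses(tracks):
    """Brute-force the mass assignment whose creation times agree best:
    tracks is a list of (hit time [ns], path length [m], momentum [GeV/c])."""
    best, best_spread = None, float("inf")
    for combo in itertools.product(MASSES, repeat=len(tracks)):
        t0s = [creation_time(t, L, p, MASSES[h])
               for (t, L, p), h in zip(tracks, combo)]
        spread = max(t0s) - min(t0s)
        if spread < best_spread:
            best, best_spread = combo, spread
    return best

# two tracks from the same vertex: a 1 GeV/c pion and a 1 GeV/c proton
t_pi = -creation_time(0.0, 1.0, 1.0, MASSES["pi"])  # hit time of the pion
t_p = -creation_time(0.0, 1.0, 1.0, MASSES["p"])    # hit time of the proton
print(best_hypotheses([(t_pi, 1.0, 1.0), (t_p, 1.0, 1.0)]))
```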
3 Design
The Barrel TOF consists of 16 independent segments (super modules) located
around the beam axis, as sketched in Fig. 2, covering polar angles from 22.5◦
to 140◦ . The sensitive volume consists of scintillator tiles, each of which is
read out with four silicon photomultipliers (SiPMs) at each end. A super module
comprises 120 scintillator tiles and 960 SiPMs, as well as signal transmission
lines embedded in a multilayer PCB. The front-end readout electronics (FEE)
amplifies and digitises the signals from the SiPMs and transfers the data to the
P̄ANDA computing node. It is located at the back end of the segment, where the
hit rate is low.
Fig. 2. (left) Drawing of the whole Barrel TOF with the sub-structure of a pair of scintillator tiles. (right) Sketch of the circuit design of the super module (top); photo of a half-length prototype with one pair of scintillator tiles (bottom).
Fig. 3. (left) Schematic of different possibilities of connecting SiPMs to a single readout channel: (a) serial, (b) parallel. (right) Signal improvement with serial connection for higher numbers of SiPMs.
In order to verify the laboratory test results, several test-beam campaigns were carried out. The best result with the current design was achieved in November 2016 with a beam of 7 GeV/c momentum containing protons, pions, electrons and kaons, giving a time resolution of σt = 58 ps.
5 Conclusion
The Barrel TOF detector provides a robust tool for particle identification at low momentum. With the current design, including wrapping and four SiPMs on each end, an intrinsic time resolution of σt = 54 ps was achieved in the laboratory.
References
1. PANDA Collaboration. Technical Progress Report (2005)
2. PANDA Collaboration. Physics Performance Report for PANDA, Strong Interac-
tion Studies with Antiprotons. https://arxiv.org/abs/0903.3905v1
3. Akindinov, A., Alici, A., Agostinelli, A., et al.: Eur. Phys. J. Plus 128, 44 (2013).
https://doi.org/10.1140/epjp/i2013-13044-x
4. Adam, J., Adamová, D., et al. (ALICE Collaboration): Eur. Phys. J. Plus 132, 99 (2017). https://doi.org/10.1140/epjp/i2017-11279-1
5. Eur. Phys. J. Plus, 128:44 (2013). https://doi.org/10.1140/epjp/i2013-13044-x
6. Nuclear Instruments and Methods 179, 477–485 (1981)
7. Sanchez-Lorente, A., Schmitt, L., Schmitt, C., Goetzen, K., Kisselev, A.: Motiva-
tion of the barrel time-of-flight detector for PANDA. PANDA Note (2011)
8. Schwarz, C., Britting, A., Bühler, P., Cowie, E., Dodokhov, V.K., Düren, M., et al.: The Barrel DIRC of PANDA. J. Instrum. 7(02), C02008 (2012). https://doi.org/10.1088/1748-0221/7/02/C02008
9. PANDA Collaboration. Barrel DIRC Technical Design Report (2016)
10. Cattaneo, P.W., De Gerone, M., Gatti, F.: Development of high precision timing counter based on plastic scintillator with SiPM readout. IEEE Trans. Nucl. Sci. 61, 2657 (2014)
11. Gruber, L.: Studies of SiPM photosensors for time-of-flight detectors within
P̄ANDA at FAIR. Vienna University of Technology (2014)
12. Böhm, M., Lehmann, A., Motz, S., Uhlig, F.: Fast SiPM readout of the P̄ANDA TOF detector. J. Instrum. 11(05), C05018 (2016)
13. Gruber, L., Brunner, S.E., Marton, J., Orth, H., Suzuki, K.: Barrel time-of-flight
detector for the PANDA experiment at FAIR. Nucl. Instr. Meth. A 824, 104–105
(2016)
14. PANDA Collaboration: Barrel Time-of-Flight Technical Design Report (2017)
Trigger and Data Acquisition Systems
Electronics, Trigger and Data Acquisition
Systems for the INO ICAL Experiment
1 Introduction
On receiving the physics trigger signal, ICAL's data acquisition (DAQ) system records the pattern of RPC pick-up strips hit by the charged particles produced in neutrino interactions, as well as their precise times of crossing the active detector elements. In addition, the DAQ system performs a number of slow-control and monitoring functions in the background. The architecture of ICAL's electronics and DAQ systems (Fig. 1) is based on designating the RPC as the minimum standalone unit.
2 Front-End Electronics
3 Trigger System
The multi-level trigger system generates the global trigger signal based solely on event-topology information. The trigger logic is defined as m-p/n: a trigger is generated when, out of a group of n consecutive layers, at least p layers each have m channels with simultaneous signals. The pre-trigger signals from the DFEs are fed to Signal Router Boards (SRBs), which bunch the signals and redistribute them to the Trigger Logic Boards (TLBs). The second-level trigger logic is implemented in the TLBs, and boundary coincidences are resolved by the Global Trigger Logic Boards (GTLBs). The overall control of the trigger system, monitoring of the various signal rates, etc. is handled by the Trigger Control and Monitor (TCAM) module. Further, the Control and Monitoring (CAM) module provides the interface between the trigger and the back-end data concentrator units.
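The m-p/n topology condition above can be sketched as a sliding-window check over the layers. The parameter values below are illustrative, not ICAL trigger settings.

```python
# Sketch of the m-p/n topology trigger described above: fire when, within some
# window of n consecutive layers, at least p layers each see >= m channels hit
# simultaneously. Parameter values are illustrative, not ICAL settings.

def topology_trigger(hits_per_layer, m, p, n):
    """hits_per_layer: list of channel-hit counts, one entry per RPC layer."""
    active = [count >= m for count in hits_per_layer]  # layers with >= m channels
    for start in range(len(active) - n + 1):
        if sum(active[start:start + n]) >= p:          # p of n consecutive layers
            return True
    return False

# A through-going track hitting 1-2 strips in five consecutive layers:
hits = [0, 0, 1, 2, 1, 1, 1, 0, 0, 0]
fired = topology_trigger(hits, m=1, p=4, n=5)
```

In hardware the same condition is evaluated in parallel for all windows in the TLB FPGAs, rather than in a sequential loop.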
The Calibration and Auxiliary Unit (CAU) services sub-system mainly interfaces with the trigger system and the DFE boards; it distributes the global clocks and trigger signals, and measures the time offsets due to disparate signal path lengths. The local TOF in each DFE is then translated to a common timing reference by adding the respective offsets, for the reconstruction of particle trajectories. The CAU unit was tested extensively on the RPC stacks and was found to provide offset corrections to better than 100 ps. The Real Time Clocks (RTCs) of all the DFEs are pre-loaded with the epoch time and synchronized to within a microsecond using the PPS signal and the global clock. The events are built in the back end using these RTC time stamps.
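The offset correction and timestamp-based event building described above can be sketched as follows. The per-DFE offsets and the coincidence window are illustrative values, not measured ICAL constants.

```python
# Sketch of timestamp-based event building: each DFE hit time is moved to the
# common reference by adding that DFE's measured path-length offset, then hits
# within a coincidence window are grouped into one event. Offsets and window
# size are illustrative.

def build_events(hits, offsets_ns, window_ns=100.0):
    """hits: list of (dfe_id, local_time_ns); returns events as lists of hits."""
    corrected = sorted(
        (t + offsets_ns[dfe], dfe) for dfe, t in hits)  # common time reference
    events, current = [], []
    for t, dfe in corrected:
        if current and t - current[-1][0] > window_ns:   # gap -> close the event
            events.append(current)
            current = []
        current.append((t, dfe))
    if current:
        events.append(current)
    return events

offsets = {0: 0.0, 1: 12.5, 2: 7.0}                      # per-DFE offsets (ns)
hits = [(0, 1000.0), (1, 990.0), (2, 995.5), (0, 5000.0)]
events = build_events(hits, offsets)
```

Here the first three hits fall into one event after correction, while the late hit starts a second event.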
The data acquired by the DFE on receipt of a trigger signal are dispatched to the back-end data concentrator hosts via the DFE network interface, passing through segmented layers of data-network hubs and switches. For communication and data transfer between the DFE and the back end, the former is configured as a network element with a unique IP address. Thus, the entire ICAL detector becomes a large, suitably segmented Ethernet LAN, with the RPC units as LAN hosts together with the back-end DAQ computers.
The back-end DAQ hardware, involving multiple data concentrator servers, receives event and monitor data from the DFE modules. The event data is compiled by the event builder. Finally, the back-end system performs various quick quality checks on the data, in addition to providing user interfaces, slow control and monitoring, event display, data archival and so on.
5 Power Supplies
The low-voltage power supplies required for the analog and digital front-end boards, as well as for the HV DC-DC module onboard the RPC unit, are individually supplied, controlled and monitored through a central low-voltage power-supply distribution and monitoring sub-system.
The entire design of the baseline ICAL detector electronics was completed and reviewed. The ASICs, most of the circuit boards, and the firmware and software have already been produced. Prototyping, benchmarking and limited production of various components and modules were also completed, and the relevant technologies and vendors identified. The electronics is already being used to read out many ICAL prototype detector stacks, including the mini-ICAL, which is currently under construction. Integration of the electronics, especially the analog and digital front ends onboard the RPC detector module, posed a big challenge. This, along with the extensive cable routing spreading over the 48 m × 16 m × 16 m detector volume, is being carefully addressed.
References
1. INO Homepage. http://www.ino.tifr.res.in. Accessed 26 June 2017
2. Kumar, A., et al.: Physics potential of the ICAL detector at the India-based neutrino
observatory (INO). Pramana J. Phys. 88(5), 1–72 (2017)
Track Finding for the Level-1 Trigger
of the CMS Experiment
Tom James(B)
on behalf of the TMTT group
The Compact Muon Solenoid (CMS) detector [1] at the CERN Large Hadron Collider (LHC) [2] is a general-purpose detector, designed to investigate a wide range of physics. Beginning in 2026, the LHC will operate at an upgraded luminosity of 5–7.5 × 10³⁴ cm⁻² s⁻¹, corresponding to 140–200 simultaneous collisions (pileup) at 40 MHz [3]. This will enable the collection of 3000 fb⁻¹ of data by 2035.
Due to accumulated radiation damage, it will be necessary to completely replace the CMS outer silicon tracker during the long LHC shutdown preceding this upgrade. Studies have shown that the use of tracker information in the level-1 (L1) trigger will be required to keep the accept rate below the target 750 kHz without significant degradation in physics sensitivity. A new silicon tracker has therefore been designed to allow the readout of a subset of its data at 40 MHz [4]. It will exploit the knowledge that tracks with high transverse momentum (pT) are often signs of interesting physics. Novel tracking modules (pT-modules) are being developed that utilise two closely spaced silicon sensors to select, for level-1 triggering, only pairs of hits (stubs) compatible with a transverse momentum greater than 2 GeV.
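The pT-module selection works because the displacement ("bend") between the hits in the two closely spaced sensors shrinks as pT grows, so a window cut on the bend keeps only high-pT candidates. The sketch below uses the standard bending-radius relation R = pT/(0.3·B); the sensor spacing, radius and cut values are illustrative, not the CMS design values.

```python
# Sketch of the pT-module stub selection: the bend between the hits in the two
# sensors falls as 1/pT, so a window cut keeps only high-pT hit pairs.
# Geometry values (sensor spacing, radius) are illustrative.
B_TESLA = 3.8  # CMS solenoid field

def bend_mm(pt_gev, r_m, spacing_mm):
    """Transverse hit displacement between the two sensors of a pT-module."""
    # Track bending radius R = pT / (0.3 B)  (R in m, pT in GeV/c, B in T);
    # the crossing angle at radius r is ~ r / (2R), so bend ~ spacing * r / (2R).
    radius_m = pt_gev / (0.3 * B_TESLA)
    return spacing_mm * r_m / (2.0 * radius_m)

def accept_stub(pt_gev, r_m=0.6, spacing_mm=1.6, pt_cut_gev=2.0):
    """Keep the hit pair only if its bend is within the pT > pt_cut window."""
    return bend_mm(pt_gev, r_m, spacing_mm) <= bend_mm(pt_cut_gev, r_m, spacing_mm)
```

Because the bend scales as 1/pT, a single window per module radius implements the 2 GeV threshold.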
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 296–302, 2018.
https://doi.org/10.1007/978-981-13-1313-4_56
The Hough transform (HT) is a widely used feature-extraction technique [6]. It can be used to find imperfect instances of objects within a space, for example tracks within the map of tracker hits. FPGA firmware has been developed that uses a two-dimensional HT to search for primary tracks in the r–φ plane [7]. The parametrisation (q/pT, φ0) is used, where φ0 is the φ coordinate of the stub at r = 0, q is the electronic charge, and q/pT is the free parameter. Within the TFP, the stubs are sorted into sub-sectors in both η and φ. A Hough array is constructed for each sub-sector. Within each array, stubs are binned at each q/pT column into a corresponding φ0 row, based on the straight-line formula φ0 = φstub + 0.0015 · r · B · q/pT, where B is the magnetic field strength. A set of stubs that intercept at a point can be considered a track candidate (after passing some additional checks). An optional feature of the algorithm is the ability to use the bend of the stub between the two sensors of the pT-modules
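The accumulator-filling step described above can be sketched in a few lines. The array granularity and the stub values (generated from one simulated track) are illustrative and far coarser than the firmware implementation.

```python
# Minimal sketch of the r-phi Hough transform described above: each stub (r, phi)
# votes, in every q/pT column, for the phi0 row given by
# phi0 = phi_stub + 0.0015 * r * B * q/pT. Granularity is illustrative.
B = 3.8                        # magnetic field in tesla
N_QPT, N_PHI0 = 32, 1024       # accumulator granularity
QPT_MAX, PHI0_MAX = 0.5, 0.15  # |q/pT| < 0.5 (GeV/c)^-1, |phi0| < 0.15 rad

def fill_hough(stubs):
    """stubs: list of (r_m, phi_rad); returns {(col, row): [stub indices]}."""
    cells = {}
    for i, (r, phi) in enumerate(stubs):
        for col in range(N_QPT):
            qpt = -QPT_MAX + (col + 0.5) * 2 * QPT_MAX / N_QPT
            phi0 = phi + 0.0015 * r * B * qpt
            row = int((phi0 + PHI0_MAX) / (2 * PHI0_MAX / N_PHI0))
            if 0 <= row < N_PHI0:
                cells.setdefault((col, row), []).append(i)
    return cells

# Stubs from one simulated track with q/pT = 0.25, phi0 = 0.1:
stubs = [(r, 0.1 - 0.0015 * r * B * 0.25) for r in (0.3, 0.5, 0.7, 0.9, 1.1)]
cells = fill_hough(stubs)
# Track candidates are cells collecting stubs from all layers; here they sit in
# the q/pT column closest to the generated 0.25 (GeV/c)^-1.
```

In the firmware this fill runs in parallel across columns, and cells reaching the layer-count threshold are forwarded as candidates.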
The Kalman filter (KF) is a commonly used iterative algorithm that turns a series of measurements containing inaccuracies and noise into estimates of unknown variables [8]. In this scenario, the measurements correspond to the stubs associated with the track candidates produced by the HT, and the unknown variables to the final track helix parameters. A combinatorial KF has been implemented in FPGA firmware. The initial estimate (seed) of the track parameters is taken from the corresponding HT cell. The Kalman state (the current estimate of the unknown parameters) is then updated one stub at a time, in increasing radius, using a weighted average of the prior state and the new measurement. The χ² of the track parameters is also calculated at each stage, and this information, along with the number of layers missing hits, is used both to reject false track candidates and to remove incorrect stubs from tracks. This process is repeated until all stubs on the track are added, or a configurable time-out is reached. Following the KF, a simple duplicate removal (DR) algorithm removes HT candidates with duplicated helix parameters.
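The update rule above (weighted average plus a χ² increment per stub) can be shown with a toy scalar version; the real firmware tracks the full helix parameter vector in fixed-point arithmetic, and all numbers below are illustrative.

```python
# Toy scalar version of the Kalman update described above: the state (here a
# single track parameter) is combined with one stub at a time via a variance-
# weighted average, and the chi2 increment flags incompatible stubs.

def kf_update(state, var, meas, meas_var):
    """One Kalman update: weighted average of prior state and new measurement."""
    gain = var / (var + meas_var)
    new_state = state + gain * (meas - state)
    new_var = (1.0 - gain) * var
    chi2_inc = (meas - state) ** 2 / (var + meas_var)
    return new_state, new_var, chi2_inc

# Seed from the HT cell, then add stubs in increasing radius:
state, var = 0.20, 0.10          # seed estimate and its variance
for meas in (0.25, 0.24, 0.26):  # stub measurements (variance 0.01 each)
    state, var, chi2 = kf_update(state, var, meas, 0.01)
```

Each update shrinks the state variance; a stub producing a large χ² increment would be dropped rather than averaged in.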
Fig. 2. The TFP demonstrator consists of five MP7 boards, each indicated by a sep-
arate block in the diagram. Source and Sink boards represent the DTCs, and the L1
trigger, respectively. The Geometric Processor (GP) assigns stubs to the 36 sub-sectors
per octant (two in φ and 18 in η), routing those associated with a given sub-sector to
dedicated output links.
The building blocks of the hardware demonstrator are Master Processor 7 (MP7) [10] boards: FPGA-based, data-stream-processing, double-width AMC cards, each equipped with a Xilinx Virtex-7 690 [11] FPGA and 72 optical transmitters/receivers running at 10 Gb/s each way. Eleven MP7 cards are installed in a MicroTCA crate at CERN. The MP7 boards are optically cabled to match the requirements shown in Fig. 2, where each MP7 board is represented by a block.
The demonstrator software package allows for the direct comparison of per-
formance between the hardware demonstrator output and the emulator (which
is based on the CMS software framework (CMSSW)), using Monte Carlo physics
samples that emulate the conditions at the HL-LHC. Although the demonstrator
processes one tracker octant at a time, the firmware is agnostic to the choice of
octant, making it possible to take data for all octants sequentially.
Table 1 shows the latency for each step of the demonstrator chain. The processing latency of each stage is fixed, and is therefore independent of pileup or event occupancy.
Table 1. Processing latency for each of the demonstrator components, including the
four stages of serialisation/de-serialisation (SerDes), and the optical transmission delays
between each board.
             SerDes   GP    HT    KF   DR   Total
Latency [ns]    545  251  1025  1620   38    3479
Table 2. Resource usage for each component of the TFP demonstrator, alongside
available resources for some compatible Xilinx FPGAs [12].
6 Summary
References
1. CMS Collaboration: The CMS experiment at the CERN LHC. JINST 3, S08004 (2008). https://doi.org/10.1088/1748-0221/3/08/S08004
2. Evans, L., Bryant, P. (eds.): LHC Machine. JINST 3, S08001 (2008). https://doi.org/10.1088/1748-0221/3/08/S08001
3. Apollinari, G., et al.: High-Luminosity Large Hadron Collider (HL-LHC): Preliminary Design Report. CERN, Geneva (2015). https://doi.org/10.5170/CERN-2015-005
4. CMS Collaboration: CMS Technical Design Report for the Phase-2 Tracker Upgrade. Technical report CERN-LHCC-2017-xxx. CMS-TDR-xxx, Geneva (2017)
5. CMS Collaboration: CMS Technical Design Report for the Level-1 Trigger
Upgrade. Technical report CERN-LHCC-2013-011. CMS-TDR-12, June 2013
6. Hough, P.V.C.: Method and means for recognizing complex patterns. US Patent
3,069,654, December 1962
7. Amstutz, C., et al.: An FPGA-based track finder for the L1 trigger of the CMS
experiment at the high luminosity LHC. In: 2016 IEEE-NPSS Real Time Confer-
ence (RT), pp. 1–9, June 2016. https://doi.org/10.1109/RTC.2016.7543102
8. Frühwirth, R.: Application of Kalman filtering to track and vertex fitting. Nucl. Instrum. Meth. A 262, 444–450 (1987). https://doi.org/10.1016/0168-9002(87)90887-4
9. Aggleton, R., et al.: An FPGA based track finder for the L1 trigger of the CMS experiment at the high luminosity LHC. CMS-NOTE-2017-XXX; CERN-CMS-NOTE-2017-XXX, Geneva: CERN (to be published)
10. Compton, K., et al.: The MP7 and CTP-6: multi-hundred Gbps processing boards
for calorimeter trigger upgrades at CMS. JINST 7, C12024 (2012). https://doi.
org/10.1088/1748-0221/7/12/C12024
11. Xilinx: 7 Series FPGAs Overview, Product Specification. DS180 (v1.17), May 2015
12. Xilinx: UltraScale Architecture and Product Data Sheet: Overview. DS890 (v2.11), February 2017. https://www.xilinx.com/support/documentation/data_sheets/ds890-ultrascale-overview.pdf
A Multi-chip Data Acquisition System
Based on a Heterogeneous
System-on-Chip Platform
Adrian Fiergolski(B)
on behalf of the CLIC detector and physics (CLICdp) collaboration
1 Motivation
The development of pixel detectors for future high-energy-physics experiments often implies the use of a custom data acquisition (DAQ) system for a given device. As a consequence, characterising a new chip often involves extra effort associated with commissioning and debugging the new hardware, firmware and software of the accompanying DAQ system. Although the DAQ systems are very similar from a functional point of view, different implementations and the lack of cross-compatibility imply a learning stage for the pixel-detector user. Moreover, each new DAQ system often requires some integration effort with a test-beam infrastructure. All these aspects delay the pixel-detector studies.
The Control and Readout Inner tracking BOard (CaRIBOu) addresses this issue. It is a versatile modular readout system supporting by design a wide range of current and future devices. Integration of new devices requires minimal effort. Since CaRIBOu targets laboratory and high-rate test-beam measurements, the system combines flexibility with high-performance requirements. The project is
2 CaRIBOu
2.1 Hardware
Fig. 1. The hardware architecture of the CaRIBOu DAQ system. From Ref. [2]. Pub-
lished with permission by CERN.
2.2 DAQ
Fig. 2. Scheme of the CaRIBOu DAQ. From Ref. [2]. Published with permission by
CERN.
bias voltages and currents, and will set the operational conditions (clock, reset, etc.). The HAL supports the operation of several devices in parallel. Integration of new devices requires minimal code development and is mainly limited to data processing before storage. Peary's device manager enables dynamic linking of this code based on a device name specified in the configuration files. The Peary framework implements common modules such as a logging engine. It also provides a command-line interface (CLI) enabling sequential, step-by-step control of the chip. This feature proves useful at the commissioning stage of a new device. Finally, by supporting DAQ clients, Peary facilitates integration with a top-level DAQ run control for combined runs with a test-beam telescope.
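The device-manager pattern described above (instantiating device support by the name given in a configuration file) can be illustrated as follows. This is a Python sketch of the general registry pattern, not Peary's actual C++ API, and the device name and fields are hypothetical.

```python
# Illustrative sketch (not Peary's actual API) of a device manager: device
# support is registered under a name, and the manager instantiates whichever
# device the configuration file names.
DEVICE_REGISTRY = {}

def register(name):
    """Class decorator adding a device implementation to the registry."""
    def wrap(cls):
        DEVICE_REGISTRY[name] = cls
        return cls
    return wrap

@register("ExampleChip")            # hypothetical device name
class ExampleChip:
    def __init__(self, config):
        self.config = config
    def power_on(self):
        return "powered"            # real code would set voltages, clocks, ...

def from_config(config):
    """Instantiate the device named in a parsed configuration file."""
    return DEVICE_REGISTRY[config["device"]](config)

dev = from_config({"device": "ExampleChip", "bias_v": 1.2})
```

Adding support for a new chip then amounts to registering one new class, which matches the "minimal code development" goal stated above.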
In order to facilitate the creation of a custom embedded Linux distribution, CaRIBOu provides a Meta-caribou layer for the Yocto project [11]. Yocto is a Linux build framework supported by a large community of open-source and industrial developers. The Meta-caribou customisations define a console-only image with full-featured Linux system functionality. The image comes with popular packages (python, ssh, gdb, etc.) pre-installed. Part of Meta-caribou configures a Secondary Program Loader (SPL) which, at the boot stage, loads the FPGA firmware (the bitfile from Peary-firmware) and sets the ARM CPUs in the desired state. Moreover, the CaRIBOu layer provides the CaR-specific hardware description to the Linux kernel through the device-tree configuration. With a single command, a user launches a build process which fetches the required sources, cross-compiles them and generates an image of the operating system. Then, using a script provided by Meta-caribou, the user prepares an SD card which is eventually inserted into the SD socket of the ZC-706 evaluation kit.
The last component of the CaRIBOu DAQ is the Peary-firmware, which produces an FPGA image file. It is the only part of the CaRIBOu project utilising proprietary tools (Xilinx Vivado [9]). The Peary-firmware design is handled by
the Xilinx IP Integrator [10]. As all autonomous firmware blocks are described
according to the IP-XACT standard [12], the tool is aware of their interfaces
and can facilitate their integration. Moreover, using the tool, the Peary-firmware
handles the configuration of the SoC by defining processor periphery settings,
address space and clock frequencies. The user has access to a library of Vivado
intellectual property (IP) cores (i.e. DMA, SPI, I2C, etc.) which often come with
Linux device drivers distributed through a Yocto layer and maintained by the
Xilinx community users. In addition, the Peary-firmware comes with a repository
of custom sub-modules (like a serial receiver), which facilitate the development of
application-specific blocks providing access to the chip. Some of the sub-modules
make use of the SystemVerilog language supported in Vivado. In case processor interrupts are not required, the custom blocks can be accessed through the generic /dev/mem Linux device driver. The repository contains software examples of this feature. Finally, the Xilinx tools are also used by the Peary-firmware to create the Hardware Definition File (HDF), which is required by Meta-caribou for the Linux device tree and SPL generation.
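Register access through the generic /dev/mem driver, as mentioned above, amounts to memory-mapping the block's physical address window and reading/writing words at register offsets. The sketch below illustrates this pattern; the base address would be block-specific (and /dev/mem access needs root), so a plain file stands in for /dev/mem in the demo.

```python
# Sketch of accessing a custom firmware block through the generic /dev/mem
# driver: the block's register window is mmap'ed and 32-bit registers are
# read/written at word offsets. The base address is hypothetical; a temporary
# file stands in for /dev/mem here, since the real device needs root.
import mmap, os, struct, tempfile

class RegisterMap:
    def __init__(self, path, base, span=4096):
        # On a real system: path="/dev/mem", base=page-aligned physical address.
        self.fd = os.open(path, os.O_RDWR | os.O_SYNC)
        self.mem = mmap.mmap(self.fd, span, offset=base)

    def read32(self, offset):
        return struct.unpack_from("<I", self.mem, offset)[0]

    def write32(self, offset, value):
        struct.pack_into("<I", self.mem, offset, value)

# Demo against a temporary file instead of /dev/mem:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 4096)
regs = RegisterMap(f.name, base=0)
regs.write32(0x10, 0xDEADBEEF)
value = regs.read32(0x10)
```

This keeps the software side free of custom kernel drivers, at the cost of forgoing interrupts, exactly the trade-off noted in the text.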
4 Summary
References
1. CaRIBOu project webpage. https://gitlab.cern.ch/Caribou/
2. Fiergolski, A.: A multi-chip data acquisition system based on a heterogeneous system-on-chip platform. CLICdp-Conf-2017-012. http://cds.cern.ch/record/2272077
3. Kremastiotis, I., Ballabriga, R., Campbell, M., Dannheim, D., Fiergolski, A., Hynds, D., Kulis, S., Peric, I.: Design and characterisation of a capacitively coupled HV-CMOS sensor chip for the CLIC vertex detector. Submitted to JINST. arXiv:1706.04470
4. Liu, H., et al.: Development of CaRIBOu: a modular readout system for pixel sensor R&D. In these proceedings
5. Bugiel, S., Dasgupta, R., Glab, S., Idzik, M., Moron, J., Kapusta, P.J., Kucewicz,
W., Turala, M.: Development of SOI pixel detector in Cracow. In: SOIPIX 2015.
arXiv:1507.00864
6. SEAF connector. https://www.samtec.com/products/seaf
7. Specification of the CaR board. https://gitlab.cern.ch/Caribou/Caribou-HW/
blob/master/README.md
8. Xilinx Zynq-7000 All Programmable SoC ZC-706 Evaluation Kit. https://www.
xilinx.com/products/boards-and-kits/ek-z7-zc706-g.html
9. Vivado Design Suite User Guide: Getting Started. https://www.xilinx.com/
support/documentation/sw manuals/xilinx2017 1/ug910-vivado-getting-started.
pdf
10. Vivado Design Suite User Guide: Designing IP Subsystems Using IP Inte-
grator. https://www.xilinx.com/support/documentation/sw manuals/xilinx2017
1/ug994-vivado-ip-subsystems.pdf
11. Yocto Project. https://www.yoctoproject.org/
12. 1685–2014 IEEE Standard for IP-XACT, Standard Structure for Packaging, Inte-
grating, and Reusing IP Within Tool Flows. https://standards.ieee.org/findstds/
standard/1685-2014.html
13. Pierpaolo, V., Augusto, N., Rafael, B.: Electronic Systems for Radiation Detec-
tion in Space and High Energy Physics Applications, CERN-THESIS-2013-156,
September 2013. http://cds.cern.ch/record/1610583
Acceleration of a Particle Identification
Algorithm Used for the LHCb Upgrade
with the New Intel® Xeon®-FPGA
Abstract. The LHCb experiment at the LHC will upgrade its detector by 2018/2019 to a 'triggerless' readout scheme, in which all the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector to the event filter farm to 40 Tb/s, which also has to be processed to select the interesting proton-proton collisions for later storage. The architecture of a computing farm that can process this amount of data as efficiently as possible is a challenging task, and several compute-accelerator technologies are being considered for use inside the new event filter farm.
In the high-performance computing sector, more and more FPGA compute accelerators are used to improve the compute performance and reduce the power consumption (e.g. in the Microsoft Catapult project and the Bing search engine). For the LHCb upgrade too, the usage of an experimental FPGA-accelerated computing platform in the event building or in the event filter farm (trigger) is being considered and therefore tested. This platform from Intel® hosts a general Xeon® CPU and a high-performance Arria® 10 FPGA inside a multi-chip package, linked via a high-speed, low-latency link. An accelerator is implemented on the FPGA. The FPGA has cache-coherent access to the main memory of the server and can collaborate with the CPU.
A computing-intensive algorithm to reconstruct Cherenkov angles for the LHCb RICH particle identification was successfully ported to the Intel® Xeon®-FPGA platform and accelerated. The results show that the Intel® Xeon®-FPGA platforms, which are built in general for high-performance computing, are also very interesting for the high-energy-physics community.
1 The Intel® Xeon®-FPGA
Fig. 1. Intel® Xeon®-FPGA
optimization is shown in Table 1. The fast interface already uses 18% of the FPGA ALMs, and after the optimization the whole design takes 32% of all the ALMs. Only 15% of the DSP blocks are used to implement all the floating-point calculation blocks needed for one photon pipeline, and in total 12% of all registers are used. The design runs at 200 MHz, which makes a calculation for a single photon within 5 ns possible if the pipeline is completely filled.

FPGA resource type   FPGA resource used [%]   For interface used [%]
ALMs                 32                       18
DSPs                 15                       0
Registers            12                       5
The results are shown in Fig. 3. For a small number of photons the Xeon® CPU is faster, due to the large latency of the photon pipeline on the FPGA. The break-even point is reached at roughly 200 photons. The time for the CPU version rises linearly, while the time for the FPGA version stays constant up to 8,000 photons; for more photons the time for the FPGA version also rises linearly. For larger numbers of photons the time ratio between CPU and FPGA is a factor of 20 to 35. On average, the photon pipeline processes a new photon only every second clock cycle. The bottleneck of the acceleration is the bandwidth between CPU and FPGA. In theory, using a higher bandwidth and using all FPGA resources for 5 photon pipelines, an acceleration factor of roughly 300 would be possible. This would be the limit for the Arria® 10 FPGA used. To reduce the difference between the theoretical and measured acceleration, caching is currently being tested to increase the reuse of data on the FPGA. This is possible for the Cherenkov-angle algorithm because many photon hits are combined with many particle tracks, and the same calculation is processed for all combinations.
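The shape of the curves described above follows from a simple timing model: the FPGA pays a fixed pipeline and transfer latency but then accepts a new photon every second 200 MHz clock cycle, while the CPU time grows linearly from the start. In the sketch below, the clock and initiation interval come from the text; the pipeline latency and CPU cost per photon are assumed values chosen only to illustrate the behaviour, not measured numbers.

```python
# Back-of-the-envelope model of the CPU/FPGA break-even behaviour described
# above. Clock (200 MHz) and initiation interval (one photon every second
# cycle) are from the text; the latency and CPU cost per photon are assumed.
CLOCK_NS = 5.0            # 200 MHz clock
II_CYCLES = 2             # a new photon every second cycle
PIPE_LATENCY_NS = 40_000  # assumed fixed pipeline + transfer latency
CPU_NS_PER_PHOTON = 350   # assumed single-thread CPU cost per photon

def t_fpga(n):
    """FPGA time: fixed latency plus pipelined throughput."""
    return PIPE_LATENCY_NS + n * II_CYCLES * CLOCK_NS

def t_cpu(n):
    """CPU time: linear in the number of photons."""
    return n * CPU_NS_PER_PHOTON

def break_even():
    """Smallest photon count for which the FPGA is faster."""
    n = 1
    while t_fpga(n) >= t_cpu(n):
        n += 1
    return n
```

With these assumed numbers the break-even lies near the ~200 photons quoted in the text, and the asymptotic CPU/FPGA time ratio approaches CPU_NS_PER_PHOTON / (II_CYCLES · CLOCK_NS), i.e. the 20–35 range reported.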
4 Summary
The LHCb experiment will be upgraded in 2018–2019 to make a much more flexible 40 MHz detector readout possible. Afterwards no hardware trigger will be used anymore; a completely software-based trigger will select the interesting proton-proton collisions. The triggering will happen on a large event filter farm, which has to process and filter an input bandwidth of 40 Tb/s. This is a challenging task, and several compute-accelerator technologies are being considered for use inside the new event filter farm.
Therefore, a study was carried out to investigate the possible usage of the new Intel® Xeon®-FPGA to accelerate the Cherenkov-angle reconstruction for the particle identification. A Verilog version was implemented on the Intel® Arria® 10 GX 1150 FPGA, and an encouraging acceleration of 35× was measured for large numbers of photons (above 8000), compared to a single-threaded Intel® Xeon® E5-2600 v4 CPU.
These results are very motivating, and the high-energy-physics community could probably benefit from these new devices in a similar way to the high-performance-computing community. It is expected that, compared to other accelerators such as GPUs, the performance per joule should be very interesting; this will be verified.
References
1. LHCb Collaboration: Framework TDR for the LHCb Upgrade: Technical Design
Report. CERN, Geneva (2012)
2. LHCb Collaboration: LHCb Trigger and Online Upgrade Technical Design Report.
CERN, Geneva (2014)
3. Forty, R., Schneider, O.: RICH pattern recognition. CERN, Geneva (1998)
4. Sridharan, S., et al.: Accelerating particle identification for high-speed data-filtering
using OpenCL on FPGAs and other architectures. In: IEEE FPL 2016, Lausanne,
Switzerland, 29 Aug–2 Sept 2016. https://doi.org/10.1109/FPL.2016.7577351
5. Färber, C., et al.: Particle identification on a FPGA accelerated compute platform
for the LHCb Upgrade. IEEE Trans. Nucl. Sci. (2017). https://doi.org/10.1109/
TNS.2017.2715900
The ATLAS Level-1 Trigger System
with 13 TeV Nominal LHC Collisions
Louis Helary(B)
on behalf of the ATLAS Collaboration
1 Introduction
In ATLAS, muon triggers are obtained from two technologies: Resistive Plate Chambers (RPCs) in the barrel, and Thin Gap Chambers (TGCs) in the end-caps. Together they form the L1-Muon trigger. Additional RPCs were installed during the Long Shutdown of the LHC (LS1), in 2013 and 2014, to recover acceptance in the ATLAS feet region. The installation and commissioning of these chambers was finished during 2015, and the system was fully enabled in data taking in 2016. Figure 2 (left) shows the efficiency of the L1 muon triggers with pT > 11 GeV as a function of the muon φ coordinate for the 2015 and 2016 data-taking. The acceptance increase can be seen in the range −2.2 < φ < −0.8, where the feet are located. In the end-caps, where the rate of the L1 muon triggers is dominated by protons from the beam faking real muons, it is crucial to decrease the rate while keeping a high signal efficiency. In Run1, only the big wheels, which contain up to three TGC stations and are located after the end-cap toroid magnet, were used to trigger the events. In Run2 the rate is reduced in the range 1.0 < |η| < 1.3 by requiring a coincidence between the big wheel and the hadronic tile calorimeter, and in the range 1.3 < |η| < 1.9 by requiring a coincidence between the big wheels and the muon small wheels, which contain one TGC station and are located in front of the end-cap toroid magnet. The rate of end-cap L1 muon triggers with pT > 15 GeV measured in the data, with and without the coincidence of the small and big wheels, is shown in Fig. 2 (right).
Fig. 2. Left: Efficiency of the barrel L1 muon triggers with pT > 11 GeV for 2015 data
(blue) and for 2016 data (red) [4, 5]. Right: Rates of the end-cap L1 muon triggers with
pT > 15 GeV with (blue) and without (red) the small wheel coincidence enabled [4, 5].
Fig. 3. Left: Efficiency of the isolated L1 electron triggers with pT > 24 GeV as a function of the average number of interactions per bunch crossing, for the old isolation algorithm (in black) and for the new one (in blue) [4, 6]. Right: Rate as a function of the lumi-block for the L1 di-muon triggers with pT > 6 GeV (in red) and for the L1-Topo di-muon triggers with pT > 6 GeV (in blue) [4, 7].
of the ATLAS physics program would need to be left out. The L1 Topological Trigger (L1-Topo) system consists of two modules, each containing two FPGAs, that allow the execution of the 110 new topological trigger algorithms within a maximum of 75 ns. The whole L1 chain had to be redesigned in order to provide the energy and position of each trigger object to the L1-Topo system. This is a significant improvement compared to Run1, where only the multiplicity of each trigger item was available. The commissioning of the system is ongoing, using data from Run2. Figure 3 (right) shows the rate of the L1 di-muon triggers with pT > 6 GeV with and without L1-Topo requirements on the invariant mass (2 < mµµ < 9 GeV) and the separation of the two muons (0.2 < ΔRµµ < 1.5).
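The di-muon requirements quoted above combine an invariant-mass window with an angular-separation window, both computable from the (pT, η, φ) of the two trigger objects. The sketch below shows the standard massless two-body formulas with the cut values from the text; the example muon kinematics are illustrative.

```python
# Sketch of the L1-Topo di-muon selection quoted above: invariant mass and
# angular separation are computed from (pT, eta, phi) and the windows
# 2 < m_mumu < 9 GeV, 0.2 < dR < 1.5 applied. Muon masses are neglected.
import math

def inv_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Massless two-body invariant mass from transverse momenta and angles."""
    return math.sqrt(2 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

def delta_r(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return math.hypot(eta1 - eta2, dphi)

def l1topo_accept(mu1, mu2):
    """mu1, mu2: (pT [GeV], eta, phi) of the two muon candidates."""
    m = inv_mass(*mu1, *mu2)
    dr = delta_r(mu1[1], mu1[2], mu2[1], mu2[2])
    return 2.0 < m < 9.0 and 0.2 < dr < 1.5

# Two 7 GeV muons, moderately separated:
accept = l1topo_accept((7.0, 0.1, 0.3), (7.0, 0.5, 1.0))
```

In the real system these quantities are evaluated in FPGA look-up tables within the 75 ns budget, rather than with floating-point functions.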
6 Conclusion
After the LHC and ATLAS L1 upgrades, and despite the harsher LHC beam conditions, ATLAS has taken data with a very high efficiency (>92% in 2016). Data-taking restarted in 2017 and will continue until the end of Run2 in 2018, by which time about 100 fb−1 of pp collision data at 13 TeV are expected.
References
1. ATLAS Collaboration: JINST 3, S08003 (2008)
2. ATLAS Collaboration: Eur. Phys. J. C 72, 1849 (2012). [arXiv:1110.1530 [hep-ex]]
3. ATLAS Collaboration: Eur. Phys. J. C 77, 317 (2017). [arXiv:1611.09661 [hep-ex]]
4. From ATL-DAQ-PROC-2017-014. Published with permission by CERN
5. ATLAS Collaboration. https://twiki.cern.ch/twiki/bin/view/AtlasPublic/L1Muon
TriggerPublicResults
6. ATLAS Collaboration. https://twiki.cern.ch/twiki/bin/view/AtlasPublic/Egamma
TriggerPublicResults
7. ATLAS Collaboration. https://twiki.cern.ch/twiki/bin/view/AtlasPublic/Trigger
OperationPublicResults
Common Software for Controlling
and Monitoring the Upgraded CMS
Level-1 Trigger
Abstract. The CMS Level-1 Trigger has been replaced during the first phase of the CMS upgrades in order to cope with the increased centre-of-mass energy and instantaneous luminosity at which the LHC presently operates. Profiting from the experience gathered in operating the legacy system, an effort has been made to identify the common aspects of the hardware structures and firmware blocks across the several components (subsystems).
A common framework has been designed in order to ensure homogene-
ity in the control and monitoring software of the subsystems, and thus
increase their reliability and operational efficiency. The framework archi-
tecture provides uniform high-level abstract description of the different
subsystems, while providing a high degree of flexibility in the specific
implementation of hardware configuration routines and monitoring data
structures.
The unique hardware composition and configuration parameters of
each subsystem are stored in a database that has a common structure
across subsystems. A custom editor has been implemented in order to
simplify the creation of new hardware configuration instances. The over-
all monitoring information gathered from all the subsystems is finally
exposed through a single access point to experts and operators.
We present here the design and implementation of the online software
for the Level-1 Trigger upgrade.
1 Introduction
The Compact Muon Solenoid (CMS) experiment Level-1 Trigger (L1T) selects
the most interesting 100 kHz of events out of 40 MHz collisions delivered by the
CERN Large Hadron Collider (LHC). The LHC restarted in 2015 with centre-
of-mass energy of 13 TeV and increasing instantaneous luminosity.
In order to cope with the increasing event rate, CMS L1T underwent a
major upgrade [1] in 2015 and early 2016. The VME-based system has been
replaced by custom-designed processors based on the µTCA specification, con-
taining FPGAs and larger memories for the trigger logic and high-bandwidth
optical links connections. The final system, composed of 9 main components
(subsystems), accounts for a total of about 100 boards and 3000 optical links
and is only partially standardized: six different designs of processor boards are
used, and each subsystem comprises a different number of processors and
implements different algorithms. Approximately 90% of the software has been
rewritten in order to control and monitor the new system: to mitigate
code duplication, the Online Software was redesigned to exploit the partial
standardisation and impose a common hardware abstraction layer for all the
subsystems.
We present here the design of the control software for the upgraded L1T and
the tools developed for controlling and monitoring the subsystems.
Fig. 1. System (left) and Processor (right) views of a subsystem in the SWATCH web
interface. From CMS CR-2017/188. Published with permission by CERN.
The system description and the input parameters for Commands and FSM tran-
sitions are stored in an Oracle database, ensuring that the hardware configura-
tion that was used for any given test or data taking run can be identified, and
re-used if required. The database schema is based on a tree-like structure of for-
eign keys that mimics the structure of the L1T. Subsystems’ configurations are
then split into logical blocks and stored in the form of XML chunks.
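The tree-of-foreign-keys idea can be sketched with an in-memory SQLite schema. All table and column names below are illustrative assumptions, not the actual Oracle L1T schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Illustrative tree of foreign keys mimicking the L1T structure:
-- a top-level configuration points to subsystem rows, each of which
-- stores its logical blocks as XML chunks.
CREATE TABLE l1t_config (
    id   INTEGER PRIMARY KEY,
    name TEXT UNIQUE                -- versioned, human-readable key
);
CREATE TABLE subsystem_config (
    id     INTEGER PRIMARY KEY,
    l1t_id INTEGER REFERENCES l1t_config(id),
    name   TEXT
);
CREATE TABLE config_block (
    id           INTEGER PRIMARY KEY,
    subsystem_id INTEGER REFERENCES subsystem_config(id),
    kind         TEXT,              -- e.g. 'command-params', 'fsm-transition'
    xml_chunk    TEXT               -- the logical block, stored as XML
);
""")

# Store one configuration: top-level key -> subsystem -> XML chunk.
conn.execute("INSERT INTO l1t_config VALUES (1, 'collisions/v42')")
conn.execute("INSERT INTO subsystem_config VALUES (1, 1, 'calo-layer2')")
conn.execute(
    "INSERT INTO config_block VALUES (1, 1, 'command-params', '<run mode=\"collisions\"/>')")

# Walking the foreign keys reproduces the exact configuration used for a run.
row = conn.execute("""
    SELECT c.name, s.name, b.xml_chunk
    FROM config_block b
    JOIN subsystem_config s ON b.subsystem_id = s.id
    JOIN l1t_config c ON s.l1t_id = c.id
    WHERE c.name = 'collisions/v42'
""").fetchone()
print(row)
```

The point of the tree structure is exactly this query: given only the versioned top-level key, every XML chunk used for a given run can be retrieved and re-used.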
To simplify the editing of the L1T configuration, a custom database
Configuration Editor (L1CE) has been developed. The L1CE was designed as a
client-server application to enable multi-user access whilst keeping safe, atomic
database sessions attached to the user session. Both client and server are devel-
oped using web technologies: the server runs a Node.js application, while the
client is a web page based on Google-Polymer. This choice minimises the num-
ber of technologies involved, thus reducing development and maintenance effort
and keeping a native, efficient communication between the two parts.
The L1CE implements unambiguous bookkeeping of the configurations by
imposing naming conventions and versioning at all levels, author identification
through the CERN Single-Sign-On mechanism and, in general, explicit and auto-
matic metadata insertion. It also implements in-browser XML editing and com-
parison between different configurations at all levels.
322 G. Codispoti et al.
Fig. 2. The architecture of the L1CE (left) and a view of the Level-1 Page (right).
From CMS CR-2017/188. Published with permission by CERN.
5 Conclusions
The CMS L1T has been completely replaced in 2015 and early 2016 and commis-
sioned in a very short time window. The Online Software has been rewritten to
accommodate its new structure, exploiting hardware standardization and imposing
a common abstraction model. In this way, the design of the new Online Software
increased the fraction of common code and reduced the development time and
maintenance effort, playing a vital role in the commissioning of the new system
and enhancing data-taking reliability. Moreover, the design choices will sim-
plify the implementation of new features, continuously adapting to the needs of the
next years of data taking.
References
1. The CMS collaboration: CMS Technical Design Report for the Level-1 Trigger
Upgrade, CERN, Geneva. CMS-TDR-12 (2013)
2. Bologna, S., et al.: SWATCH: common software for controlling and monitoring the
upgraded level-1 Trigger of the CMS experiment. In: Proceedings of 20th IEEE-
NPSS Real Time Conference (RT2016) (2016). https://doi.org/10.1109/RTC.2016.
7543077
3. XDAQ. https://svnweb.cern.ch/trac/cmsos. Accessed 25 June 2017
A Prototype of an ATCA-Based System
for Readout Electronics in Particle
and Nuclear Physics
Abstract. The thousands of channels and large data volumes of modern high-
energy and nuclear physics experiments pose many challenges for readout electronics:
a system with high bandwidth, low latency and flexible real-time data sharing is
required. Owing to the limits of their architecture, classical parallel back-
planes cannot meet these requirements. A prototype readout electronics system
based on Advanced Telecom Computing Architecture (ATCA) is designed,
which is composed of a hub blade and node blades. The blades interconnect
with each other via the high-speed serial links of the ATCA backplane. A high-sampling-
rate Analog-to-Digital Converter (ADC), which reads out the signal from the
detectors, is implemented to produce high-speed data for transmission, and a
Field Programmable Gate Array (FPGA) is responsible for configuration, control
and data transmission. The system tests conducted in the laboratory indicate that
the prototype system functioned well.
1 Introduction
The new-generation high-energy and nuclear physics experiments run with more
channels and larger data volumes [1]; thus the traditional architectures
based on parallel backplanes, such as Compact Peripheral Component Interconnect
(PCI) and Versa Module Eurocard (VME), are no longer sufficient for the required
data throughput and low latency [2].
ATCA is developed by the PCI Industrial Computer Manufacturers Group
(PICMG), and PICMG 3.0 document mainly describes its features, such as mechanical,
power distribution, thermal as well as data transport. The key features of ATCA are
listed as below [3]:
• A high throughput capacity (up to 2.5 Tb/s).
• High availability up to 99.999%.
• Two redundant −48 V power supplies, reducing single points of failure.
2 System Description
The structure diagram of the prototype readout electronics is shown in Fig. 1. In the
node blade there is an ADC whose sampling clock is provided by a Phase-
Locked Loop (PLL), with an amplifier for signal coupling. The parallel output data streams are
then transferred to an FPGA for buffering and processing. There are 16 high-speed
serial transceivers (GTP) integrated in the FPGA, of which eight connect to the fabric
interface and two connect to the base interface, each supporting data rates up to 6.6 Gb/s
[4]. In addition, a Universal Serial Bus (USB) port is supplied for sending commands in
test mode.
The hub blade collects all of the data transferred from the node blades through the ATCA
backplane. With 16 GTP transceivers integrated in its FPGA, high-bandwidth
326 M. Li et al.
data transmission is achieved, and event selection, data packing and real-time cor-
rection may be implemented in the FPGA thanks to its abundant internal connections and
logic resources. A DDR3 memory is supplied for data buffering, and a flash memory is
used for the FPGA configuration bitstream. A Gigabit Ethernet link connected to the PC is
used to transmit the processed data. To synchronize the
hub blade and the node blades, a high-precision global clock fanned out from the hub
blade is shared among all node blades.
In both the hub and node blades, a microcontroller is used as an Intelligent
Platform Management Controller (IPMC), which is responsible for monitoring the
temperature and voltage of the board, managing activation and power up or down of
board and communicating with the Shelf Management Controller (ShMC) via two I2C
buses [5].
3 Performance Test
To evaluate the performance of the prototype readout electronics, system tests were
conducted in the laboratory, including dynamic performance of the ADC, stability of
high speed serial links between hub blade and node blade, as well as Ethernet and USB
function test. The test platform is shown in Fig. 2. The ADC performance test and
transmission stability test will be described in detail.
In the ADC performance test, the signal generator produced a sine wave and sent
it to the node blade via band-pass filters. The signal was conditioned by the amplifier and
digitized by the ADC. The frequency response curve shows that the bandwidth is about
200 MHz, consistent with the bandwidth of the amplifier. Figure 3 shows the
typical frequency-domain spectrum of the ADC with the amplifier at 85 MHz (−1 dBFS),
for which the Effective Number of Bits (ENOB) is 8.20 bits.
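A quoted ENOB follows from the standard relation ENOB = (SINAD − 1.76 dB)/6.02 dB for a full-scale sine input. As a quick cross-check, the SINAD value below is back-computed for illustration, not taken from the paper's measurement:

```python
def enob_from_sinad(sinad_db: float) -> float:
    """Effective Number of Bits from the measured SINAD (in dB),
    using the standard relation for a full-scale sine-wave input."""
    return (sinad_db - 1.76) / 6.02

# Illustrative: a SINAD of ~51.1 dB corresponds to the quoted 8.20 bits.
print(round(enob_from_sinad(51.12), 2))  # → 8.2
```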
In the high-speed serial link test, a pseudo-random data generator was implemented
in the FPGA of a node blade to generate 16-bit parallel data. The data were encoded and
serialized by a GTP transceiver and then sent to the hub board via the x4 fabric channels
of the backplane. In the hub blade, an identical pseudo-random generator ran, and its
output was compared with the data received from the node blade; any mismatch was
flagged. Table 1 shows the Bit Error Rate (BER) test results, indicating that the data
transmission is reliable. The speed of one port can reach 3.125 Gb/s, so the x4 fabric
channel can achieve 12.5 Gb/s.
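The test scheme described above, identical pseudo-random generators at each end of the link with a bit-by-bit compare, can be sketched in software. The PRBS-7 polynomial and seed here are illustrative assumptions; the paper does not name the sequence used:

```python
def prbs7(state: int):
    """PRBS-7 generator (x^7 + x^6 + 1), the kind of pseudo-random
    pattern typically used for serial-link BER tests. Yields one bit
    per step, updating the 7-bit LFSR state. Polynomial and seed are
    illustrative; the actual generator in the FPGA is not specified."""
    while True:
        bit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | bit) & 0x7F
        yield bit

# Transmitter and receiver run identical generators from the same seed;
# the receiver counts mismatches against the incoming bit stream.
tx, rx = prbs7(0x5A), prbs7(0x5A)
received = [next(tx) for _ in range(10_000)]   # ideal, error-free link
errors = sum(b != next(rx) for b in received)
ber_observed = errors / len(received)
print(errors, ber_observed)  # 0 errors on an ideal link
```

In hardware, the comparison runs on the deserialized parallel words rather than single bits, but the synchronization principle is the same.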
4 Conclusion
A prototype readout electronics system based on the ATCA architecture has been designed.
Data transmission at 12.5 Gb/s through the backplane is achieved, meeting the expected
requirement. The laboratory tests conducted prove that the prototype system functions
well, and using ATCA technology for future high-energy and nuclear physics experi-
ments is very promising.
Acknowledgements. This project is supported by the Young Fund Projects of the National
Natural Science Foundation of China (Grant No. 11505182) and by the Young Fund
Projects of the Anhui Provincial Natural Science Foundation (Grant No. 1608085QA21).
References
1. Larsen, R.S.: Advances in developing next-generation electronics standards for physics. In:
Real Time Conference, 2009, RT 2009, IEEE-NPSS, pp. 7–15. IEEE (2009)
2. Jezynski, T., Larsen, R., Du, P.L.: ATCA/µTCA for physics. Nucl. Instrum. Methods Phys.
Res. A 623(1), 510–512 (2010)
3. PICMG Homepage. http://www.picmg.org. Accessed 15 June 2017
4. Xilinx Homepage. http://www.xilinx.com. Accessed 15 June 2017
5. Lang, J., et al.: TCA-5 paper. In: Proceedings of the IEEE NPSS 15th Real Time Conference,
Beijing
Commissioning and Integration
Testing of the DAQ System for the CMS
GEM Upgrade
Texas A&M University at Qatar, P.O. Box 23874 Education City, Doha, Qatar
castaned@cern.ch
1 GEM Technology
A Gas Electron Multiplier [1] is a thin metal-clad polymer foil, chemically perforated
with a high density of microscopic holes. In the case of the GE1/1 upgrade project, a triple-
GEM configuration proved to be optimal in terms of signal amplification and time reso-
lution, while respecting the volume constraints of the CMS detector. It consists of a stack
of three GEM foils separated by a relative distance of a few millimeters; a gas mixture
fills the space in between the foils. Charged particles crossing the detector interact with
the molecules in the gas producing electron-ion pairs (primary ionization). Released
electrons drift towards the GEM foils and are further multiplied
(secondary ionization) by the intense electric field created in the holes, producing an
avalanche effect. Electrons reaching the anode induce a charge on the readout strips from
where properties such as position and arrival time of primary particles can be inferred.
Performance of large scale triple-GEM detectors has been extensively studied in the past
using simulation and measurements from test-beam exercises [2]. Experimental mea-
surements indicate an excellent performance of the detector with an efficiency >97%,
particle rate capability >10 kHz/cm2, timing resolution <10 ns, angular resolution of
300 µrad and a gain uniformity of 15% or better across the entire chamber. In addition,
the detectors and electronic components undergo radiation tolerance tests to ensure the
correct operation during the lifetime of the LHC project.
The GEM data acquisition (GEM-DAQ) system is the collection of hardware and
software components for signal readout, communication between front-end electronics
and off-detector components including: GEM back-end electronics, central CMS DAQ
and the Cathode Strip Chamber (CSC) muon system, the latter to improve CMS muon
triggering in the forward region. Each trapezoidal triple-GEM chamber is divided into
twenty-four sectors (eight columns and three rows). Each sector contains 128 readout
strips with inputs connected to a 128-channel front-end chip (VFAT) [3]. Charge
collected in the strips is amplified, analog-to-digital converted and noise suppressed;
data from the twenty-four chips are packed and transmitted via electronic links (E-
links) running through a flat printed circuit board known as GEM Electronic Board
(GEB) to an opto-hybrid device for further processing. The opto-hybrid is also plugged
in the GEB and contains a Giga-Bit Transceiver (GBT) chipset, a Field Programmable
Gate Array (FPGA) as well as optical receivers and transmitters to provide the link with
the off-detector region as shown in Fig. 1.
Fig. 1. A sketch showing the main components of the GEM-DAQ system including the front-
end electronics and elements for communication with the off-detector region.
(AMC13) is used for communication with central CMS-DAQ to provide the trigger and
timing control (TTC). A unidirectional path sends fixed latency trigger data from the
GEM to the CSC system. A complete description of the electronic components can be
found in [4].
Triple-GEM chambers are subjected to rigorous Quality Control (QC) tests [5] before
being declared ready for installation and commissioning; these include tests of gain
uniformity and checks for gas leaks. Once the QC tests are passed, super-
chambers are fabricated by coupling two GE1/1 chambers, and parameters such as detector
gain, noise levels and cluster size are measured with the final detector electronics.
Five GEM super-chambers were integrated into CMS in early 2017. With the GEM-
DAQ system fully operational and the services (gas, cooling, cabling, low and high
voltage) in place, the system can be operated locally, allowing scan routines to be run
on specific GEM devices and configurations, or globally, with the system integrated
into the central CMS DAQ infrastructure. GEM local calibration routines are used to
check data integrity and the response of the VFAT chips to injected signals; results for one
of the VFAT chips used in the commissioning are presented in Fig. 2.
Fig. 2. Response of a VFAT chip (installed in one of the super-chambers integrated in CMS) to
the injection of internal calibration pulses (charge). The magnitude of the charge is controlled
internally and a configurable number of pulses are injected. The number of times the
comparator fired is recorded and normalized to the total number of injected pulses. The plot
on the right shows the response after adjusting individual channel registers to trim the 99%
response point to the same reference value, in order to reduce slight differences between
channels due to statistical fluctuations in fabrication.
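The trimming step in the caption, finding each channel's 99%-response point and adjusting it to a common reference, can be illustrated with an idealized S-curve model. The error-function shape, channel parameters and reference value below are all assumptions for the sketch; the actual VFAT response is measured, not assumed:

```python
import math

def scurve(charge, threshold, noise):
    """Idealised comparator response: fraction of injected pulses that
    fire, modelled as an error-function S-curve in the injected charge."""
    return 0.5 * (1 + math.erf((charge - threshold) / (noise * math.sqrt(2))))

def point99(threshold, noise):
    """Charge at which the channel reaches 99% response
    (the 0.99 quantile of the S-curve, ~2.326 sigma above threshold)."""
    return threshold + 2.326 * noise

# Hypothetical channels with slightly different thresholds (fabrication spread).
channels = {"ch0": (20.0, 2.0), "ch1": (21.5, 2.0), "ch2": (19.2, 2.0)}
reference = 25.0  # common 99%-response reference value (arbitrary units)

# Per-channel register adjustment that moves each 99% point to the reference.
trims = {name: reference - point99(th, no) for name, (th, no) in channels.items()}
print(trims)
```

After applying the trims, all channels fire at 99% for the same injected charge, which is what equalizes the channel-to-channel response in the right-hand plot.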
The successful installation of five GEM super-chambers into CMS in early 2017
provided invaluable experience in mechanical installation, service integration
and DAQ setup that will reduce the time needed to install the full
GE1/1 system (2019). GEM local calibrations indicate good system performance and
provide valuable data for monitoring the system and the GEM-DAQ components. The
commissioning work will continue in parallel with regular CMS data taking; this
will allow the GEM system to interact with the rest of the CMS subsystems for the
first time.
References
1. Sauli, F.: GEM: a new concept for electron amplification in gas detectors. Nucl. Instrum.
Methods A 386, 531–534 (1997)
2. Abbaneo, D., et al.: Performance of a large-area GEM detector prototype for the upgrade of
the CMS muon endcap system. In: Proceedings of 2014 IEEE Nuclear Science Symposium,
Seattle, WA, USA (2014)
3. Aspell, P., et al.: VFAT2: a front-end “system on chip” providing fast trigger information and
digitized data storage and formatting for the charge sensitive readout of multi-channel silicon
and gas particle detectors. In: 2008 IEEE Nuclear Science Symposium Conference Record,
pp. 1489–1494 (2008)
4. Colaleo, A., Safonov, A., Sharma, A., Tytgat, M., et al.: CMS technical design report for the
Muon endcap GEM upgrade. CERN-LHCC-2015-12. CMS-TDR-013, June 2015
5. Tytgat, M., et al.: Quality control for the first large areas of the triple-GEM chambers for the
CMS endcaps. CMS-CR-2015-347, December 2015
MiniDAQ1: A Compact Data Acquisition
System for GBT Readout over 10G
Ethernet at LHCb
1 Hardware Specifications
1.1 AMC40
MiniDAQ1 hardware is composed of two main parts. The first, called AMC40, is based
on the Advanced Mezzanine Card form factor and interfaces directly with the front-end
optical links. Optical links are implemented with Avago MiniPOD™ modules, 3
transmitters (AFBR-811VxyZ) and 3 receivers (AFBR-821VxyZ) for a total of 36
bidirectional links. 6 MPO12 connectors are available on the front panel to accept front-
end optical fibers. The AMC40 also mounts the FPGA implementing all data multi-
plexing and aggregation; an Altera Stratix V device is used for this purpose (Fig. 1).
1.2 AMCTP
The second half of the MiniDAQ1 hardware resides on the so-called AMCTP, a utility
module that mates with the AMC40 via an AMC B+ connector. The AMCTP hosts an
embedded microcomputer where part of the control system software is executed.
The FPGA and the control system communicate via a single-lane PCI Express Gen1 link.
The control system can configure and monitor the hardware through an on-board gigabit
Ethernet link. The AMCTP also provides the main reference clock to the mezzanine
board, either from a local oscillator or an external source. A dedicated connector can fan
out this clock to synchronize with another MiniDAQ1. Additional connectors provide
inputs for external triggers and outputs for test signals from the FPGA.
2 MiniDAQ1 Firmware
current packet to minimize network dead-time. The stack supports data transmission up
to the specified link line-rate of 10 Gb/s. Fragments are encapsulated in UDP data-
grams with a simple header identifying the origin and type of traffic. The stack also
implements ARP and ICMP replies to simplify network monitoring and configuration
through the control system.
An upcoming version of the network stack allows multiplexing of the data
acquisition stream over multiple 10 GbE links in order to linearly increase the available
output bandwidth.
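The encapsulation step can be sketched in a few lines. The paper says only that a simple header identifies the origin and type of traffic; the field widths and ordering below are assumptions for the sketch, not the actual MiniDAQ1 format:

```python
import struct

# Illustrative fragment header: origin id (16 bit), traffic type, flags.
# The real header layout used by the firmware is not specified here.
HEADER = struct.Struct("!HBB")

def encapsulate(origin: int, traffic_type: int, fragment: bytes) -> bytes:
    """Prepend the identification header to an event fragment,
    producing the UDP datagram payload."""
    return HEADER.pack(origin, traffic_type, 0) + fragment

def decapsulate(payload: bytes):
    """Split a received UDP payload back into header fields and fragment."""
    origin, traffic_type, _flags = HEADER.unpack_from(payload)
    return origin, traffic_type, payload[HEADER.size:]

payload = encapsulate(origin=0x0A40, traffic_type=1, fragment=b"\xde\xad\xbe\xef")
print(decapsulate(payload))  # → (2624, 1, b'\xde\xad\xbe\xef')
```

Using UDP rather than TCP keeps the firmware stack simple and stateless; the header is what lets the receiving farm attribute each datagram to its source link and traffic class.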
3 MiniDAQ1 Software
4 Transition to MiniDAQ2
5 Conclusion
References
1. Caratelli, A., et al.: The GBT-SCA, a radiation tolerant ASIC for detector control and
monitoring applications in HEP experiments. J. Instrum. 10(03), C03034 (2015)
2. Federico, A., et al.: LHCb: clock and timing distribution in the LHCb upgraded detector and
readout system. Conference poster. No. Poster-2014-461 (2014)
3. Durante, P., et al.: 100 Gbps PCI-Express readout for the LHCb upgrade. IEEE Trans. Nucl.
Sci. 62(4), 1752–1757 (2015)
Challenges and Performance of the
Frontier Technology Applied to an
ATLAS Phase-I Calorimeter Trigger
Board Dedicated to the Jet Identification
1 Introduction
The Large Hadron Collider (LHC) at CERN will stop operation in 2019 to be
upgraded to an instantaneous luminosity (L) of about L ≈ 2.5 × 1034 cm−2 s−1
during Long Shutdown 2 (LS2). When operation restarts for Run 3 (planned for
2021), an average of ≈60 interactions per bunch crossing is expected. To cope
with these new conditions, a 'Phase-I' upgrade programme for the trigger and data
acquisition system (TDAQ) of the ATLAS experiment [1] has been planned [2]. As
part of this, the upgrade of the Level-1
Calorimeter trigger system (L1Calo)[3] will include the new sub-system jet Fea-
ture EXtractor (jFEX), which is the focus of this contribution. In Fig. 1 an
overview of the Phase-I L1Calo system is given.
Fig. 1. Left: Overview of the planned Level-1 Calorimeter trigger system for LHC Run
3 (Blue and green: the legacy system; yellow: new components added as part of the
Phase-I upgrade); Right: photograph of the first jFEX prototype (assembled with only
one processor FPGA) [5]
The jFEX PCB layout has in total more than 16000 connections, including a
large number of high-speed data lines from the optical receivers to the processors,
and from each processor to its neighbours, which have to be routed with attention to
signal integrity (avoiding cross-talk). For the stack-up of the 24-layer PCB
the high-speed material MEGTRON6 was used for the signal layers, which are
alternated with ground layers. The processors need to share the data that they
receive directly with their neighbours. To avoid passive splitting, which would
affect the signal quality, a feature of the MGTs of the processor FPGAs is used.
The incoming serial data is digitized in the analogue front-end of the receiver
and a connection to the transmitter of the MGT channel allows the received
data to be re-transmitted to the neighbouring processor before it is decoded.
This mechanism for data duplication was proven to work reliably by performing a bit
error rate test (BERT) with the Xilinx iBERT IP core on a Xilinx UltraScale
evaluation board (VCU110). The test reached a bit error rate of less than
2.15 × 10−16 at a line rate of 28 Gb/s without a single error being observed.
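A limit quoted from a zero-error run is a confidence bound rather than a measurement: if N bits are transmitted without error, the BER is below −ln(1 − CL)/N at confidence level CL. The 95% confidence level and run duration below are inferred for illustration, not stated in the paper:

```python
import math

def ber_upper_limit(n_bits: float, confidence: float = 0.95) -> float:
    """Upper limit on the bit error rate when zero errors are observed
    in n_bits transmitted: BER < -ln(1 - CL) / N (the usual zero-error
    confidence bound)."""
    return -math.log(1.0 - confidence) / n_bits

# Illustrative: a 95% CL limit of ~2.15e-16 requires on the order of
# 1.4e16 error-free bits, i.e. roughly 5.8 days at 28 Gb/s.
line_rate = 28e9                     # bits per second
n = -math.log(0.05) / 2.15e-16      # bits needed for that limit
print(ber_upper_limit(n), n / line_rate / 86400)
```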
Fig. 2. Results of signal integrity simulations. Left: S11 parameter (channel return);
Right: S21 parameter (channel transfer) [5].
340 B. Bauss et al.
References
1. ATLAS Collaboration: The ATLAS detector. JINST 3 S08003 (2008). https://cds.
cern.ch/record/1129811
2. ATLAS Collaboration: Technical Design Report for the Phase-I Upgrade of
the ATLAS TDAQ System, CERN-LHCC-2013-018. https://cds.cern.ch/record/
1602235
3. Andrei, V.: The Phase-I upgrade of the ATLAS first level calorimeter trigger. In:
Proceedings of TIPP (2017)
4. SFF Committee: SFF-8431 Specifications for Enhanced Small Form Factor Plug-
gable Module SFP+. ftp://ftp.seagate.com/sff
5. From ATL-DAQ-PROC-2017-018. Published with permission by CERN
The Phase-1 Upgrade of the ATLAS
Level-1 Endcap Muon Trigger
Shunichi Akatsuka(B)
on behalf of the ATLAS Collaboration
Abstract. The Level-1 muon trigger of the ATLAS experiment identi-
fies muons with high transverse momentum. Under the high-luminosity
conditions of LHC Run 3, a more powerful trigger system is needed to
reject the backgrounds. A new trigger processor board for Run 3 has been
produced to achieve the required performance by combining information
from five different detectors. The performance of the new board and the
readout system has been confirmed in a beam test. New algorithms that
make use of the new detectors have been developed and tested in
MC simulation. With the new algorithms, the trigger rate is estimated
to fall below the required rate for Run 3.
1 Introduction
LHC Run 3 is planned to start in 2021, with an instantaneous luminosity of
3.0 × 1034 cm−2 s−1, twice the luminosity of the current run
(Run 2). Despite the higher event rate, the data recording rate will not increase
significantly, so the requirements on the trigger system will become
more severe. The ATLAS detector [1] needs an upgrade before LHC Run 3
to enhance its performance under these high-luminosity conditions
(collectively this effort is known as the Phase-1 Upgrade).
The Level-1 trigger is at the first level in the ATLAS trigger system. It
consists of dedicated trigger processor hardware, performing fast selection for all
the collision events at 40 MHz to reduce the event rate down to 100 kHz within
a fixed latency of ∼2.5 µs. This paper focuses on the Phase-1 Upgrade of the
Level-1 muon endcap trigger system.
20 GeV and a rate below 15 kHz at Run 3 peak luminosity, as well as high
trigger efficiency for muons with pT over the threshold. Without an upgrade of
the trigger system, the rate for this trigger is expected to exceed 28 kHz. Thus a
more powerful trigger strategy is needed to achieve a ∼50% rate reduction while
keeping the trigger threshold and efficiency.
From previous studies on Run 1 muon trigger performance, it is known that
more than 90% of the muon trigger candidates in the endcap region are due to
background events [2]. The major part of these background triggers are due to
events with no associated reconstructed muons. These background triggers are
known as “fake” triggers, caused by charged particles emerging from the beam
pipe. Other background triggers are due to muons with pT below the threshold.
The main strategy of the upgrade is to eliminate the fake triggers and the low-pT
muons by implementing a powerful trigger algorithm using several new detectors
introduced in Run 3.
Fig. 1. The ATLAS detector in the y − z plane. The interaction point (I.P.) is at the
origin, and the beam pipe corresponds to the z−axis. The four detectors that can be
used for coincidence triggering with the TGC Big Wheel are shown by the green boxes:
TGC EI [3], new RPC [3] (BIS7/8), Tile Calorimeter [5] and the New Small Wheel.
The red line shows a track made by a muon from the I.P., and the blue line shows a
fake track. Fake tracks do not have coincident hits in the detectors inside
the magnetic field. (From ATL-DAQ-PROC-2017-016. Published with permission by
CERN.)
The detectors for the endcap muon trigger in Run 3 are shown in Fig. 1. The
toroidal magnetic field bends the muon tracks, so that the pT can be calculated
from the track angles in the Thin-Gap Chambers (TGCs) [3] installed outside
the magnetic field (TGC Big Wheel). As shown in Fig. 1, the fake triggers cre-
ate muon-like tracks in the TGCs, but do not correspond with any hits in the
detectors inside the field. Thus the fake triggers can be eliminated by requiring
a coincidence between hits in the TGC Big Wheel and the detectors inside the
field. The largest area in the endcap region is covered by the New Small Wheel
(NSW) [4], which has high position and angle resolution. By utilising this high-
resolution information, low-pT muon candidates can also be rejected very
effectively.
TDAQ TDR [2], the trigger rate for muons with pT > 20 GeV at the luminosity
of 3.0 × 1034 cm−2 s−1 is estimated to become smaller than 13 kHz, which meets
the Run 3 requirement of 15 kHz.
[Figure: pT distribution of L1_MU20 candidates. ATLAS Preliminary, Phase-I upgrade
study; data at √s = 13 TeV, 25 ns; 1.3 < |ηRoI| < 2.4.]
Fig. 2. pT distribution of the muons that passed the Level-1 muon trigger [8]. The
dashed line shows the distribution of the Level-1 trigger candidate muons in the Run
2 trigger system. The red (blue) histogram shows the distribution after the position
(position + angle) matching algorithm. Low-pT candidates are rejected effectively, while
keeping 96% efficiency for events associated with muons with pT > 20 GeV. (From
ATL-DAQ-PROC-2017-016. Published with permission by CERN.)
5 Conclusion
A new trigger processor board for the Level-1 endcap muon trigger in Run 3 has
been produced to combine information from five different detectors. Together
with the new readout system, the performance of the board has been verified
by a beam test. New trigger algorithms for Run 3 have been developed: the
position matching and the angle matching algorithms. By applying both of the
new algorithms, the trigger rate of the primary muon trigger will be 13 kHz at
Run-3 peak luminosity, which meets the ATLAS trigger requirement.
References
1. ATLAS Collaboration: The ATLAS experiment at the CERN Large Hadron Col-
lider. JINST 3, S08003 (2008)
2. ATLAS Collaboration: Technical Design Report for the Phase-I Upgrade of the
ATLAS TDAQ System, CERN-LHCC-2013-018
3. ATLAS Collaboration: ATLAS muon spectrometer: Technical Design Report,
CERN-LHCC-97-022
4. Kawamoto, T., et al.: New Small Wheel Technical Design Report, CERN-LHCC-
2013-006
5. ATLAS Collaboration: ATLAS tile calorimeter: Technical Design Report, CERN-
LHCC-96-042
1 Introduction
Data-acquisition systems for high-energy physics experiments have demanding
computing resource requirements. They are complex systems, needing to process
data in real time. The ATLAS experiment [1] at CERN will be facing new
requirements in terms of data throughput for the upgrade starting in 2024.
From ATL-DAQ-PROC-2017-019. Published with permission by CERN.
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 346–349, 2018.
https://doi.org/10.1007/978-981-13-1313-4_66
Modeling Resource Utilization of a Large Data Acquisition System 347
Fig. 4. Comparison of the simulation results and real data for the average number of
fragments in the ROS and the average output bandwidth of the ROS.
5 Simulation Results
Simulations are executed on a Xeon E5645 2.4 GHz machine with 24 GB of RAM.
Each simulation is executed independently for 60 simulated seconds and takes
∼6 h to complete. In total, 24 simulations are executed over 2 h of consecutive
data. Figure 4 shows some of the simulation results, with an outlier at minute
∼70: the real system stopped due to external conditions, and the simulation
does not reproduce this behavior. The fragment-count results differ because of
∼10 ms delays missing from the model, and the output-bandwidth results differ due
to the low time resolution of the real data and to network retransmissions.
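The kind of model evaluated here, queues and service delays advanced by discrete events, can be illustrated with a toy single-queue sketch. The arrival rate, service time and horizon are arbitrary; the real model is a full OMNeT++ [3] description of the TDAQ system, not this simplification:

```python
import heapq
import random

def simulate_buffer(arrival_rate, service_time, horizon, seed=1):
    """Minimal discrete-event model of a readout buffer: fragments
    arrive as a Poisson stream and are drained one at a time with a
    fixed service delay. Returns the time-averaged buffer occupancy.
    This is a toy single-queue sketch, not the actual TDAQ model."""
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrive")]
    queue, busy_until, t_prev, area = 0, 0.0, 0.0, 0.0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        area += queue * (t - t_prev)       # integrate occupancy over time
        t_prev = t
        if kind == "arrive":
            queue += 1
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrive"))
            if t >= busy_until:            # server idle: start service now
                busy_until = t + service_time
                heapq.heappush(events, (busy_until, "depart"))
        else:
            queue -= 1
            if queue > 0:                  # more fragments waiting
                busy_until = t + service_time
                heapq.heappush(events, (busy_until, "depart"))
    return area / t_prev if t_prev else 0.0

print(simulate_buffer(arrival_rate=50.0, service_time=0.01, horizon=60.0))
```

Scaling this pattern up to many interconnected queues with realistic latencies is what dominates the ∼6 h run time of each 60-second simulation.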
6 Conclusion
A simulation model has been developed to study the behavior of the current
ATLAS TDAQ system. Results show a good and stable agreement between sim-
ulation and real data, with a relative error below 5%. Simulation results can
be further improved by adding more accurate simulation of components of the
TDAQ system and network latencies to the model. It can then be used as the
basis for studying the behavior of candidate architectures for the new system.
References
1. ATLAS Collaboration: Performance of the ATLAS detector using first collision data.
JHEP 09, 056 (2010)
2. Pozo Astigarraga, M.E., (on behalf of the ATLAS Collaboration): Evolution of the
ATLAS trigger and data acquisition system. In: Journal of Physics: Conference
Series, vol. 608, p. 012006. IOP Publishing (2015)
3. Varga, A.: Omnet++. In: Modeling and Tools for Network Simulation, pp. 35–59.
Springer, Heidelberg (2010)
4. Seo, S.: A review and comparison of methods for detecting outliers in univariate
data sets. Master’s thesis, University of Pittsburgh (2006)
The Phase-I Upgrade of the ATLAS First
Level Calorimeter Trigger
Victor Andrei
on behalf of the ATLAS Collaboration
1 Introduction
During the Run 3 data-taking period (starting in 2021), the Large Hadron Col-
lider (LHC) will increase the current instantaneous luminosity by almost a factor
of two (i.e. to ∼2.5 × 10^34 cm^−2 s^−1), to substantially enhance its physics
potential. The luminosity upgrade will lead to a higher number of interactions per
bunch-crossing at the ATLAS detector [1] than the design values of the cur-
rent trigger system. In order to maintain a high event selection efficiency in
the increased luminosity environment, the ATLAS Level-1 calorimeter trigger
(L1Calo) [2] will be upgraded with new object-finding processors. These will run
more efficient identification algorithms on finer granularity calorimeter informa-
tion than is currently available, preserving the low energy trigger thresholds of
the current Run 2 system [3]. This paper presents an overview of the Phase-I
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 350–354, 2018.
https://doi.org/10.1007/978-981-13-1313-4_67
upgrade of the L1Calo system for LHC Run 3 and the development status of the
new L1Calo hardware components.
ATCA shelves and 7 jFEX modules in one ATCA shelf will be needed for the
complete system. The gFEX [7] will receive 0.2 × 0.2 ET -sum data from the
entire calorimeter in a single ATCA module, to identify large-radius jets and
compute various global event observables. All of the FEXes will send results in
the form of Trigger Objects (TOBs) to the Level-1 Topological Trigger Proces-
sor (L1Topo), and read out data to the Data Acquisition (DAQ) system, via
optical links running at 12.8 Gb/s and 9.6 Gb/s, respectively. On the eFEX
and jFEX, the transfer of TOBs and readout information is realised via custom
ATCA Hub-ROD boards (see Fig. 1).
Prototype modules for each FEX processor have been manufactured and assem-
bled (see Fig. 2), and their functionality has been verified. Each module is a
highly-dense ATCA board design, hosting a large number of high-speed devices
that have to handle and process an input bandwidth of up to several TB/s.
The eFEX prototype is a 22-layer board with four processing FPGAs (Xilinx
Virtex-7 XC7VX550T), one readout FPGA (Xilinx Virtex-7 XC7VX330T), 17
MiniPOD optical transceivers, and up to 450 on-board multi-Gb/s differential
signals. In addition, 94 high-speed electrical buffers duplicate the input calorime-
ter data between the processing FPGAs, as required by the eFEX sliding-window
algorithms [3]. Three eFEX prototypes have been manufactured and tested. The
high-speed optical links were tested at up to 11.2 Gb/s, both at CERN using
a LAr Digital Processing System (DPS) prototype and a FOX demonstrator,
to emulate the Run 3 configuration, and in the laboratory environment using
custom FEX Test Modules (FTMs) as data sources. On 99% of the input links
the observed bit error rate was less than 10−14 . For the other links, the errors
were traced to a few broken high-speed buffers, to sensitive fibre connections
and to poor routing on one input. Additional functionality such as the read-
out and the Timing, Trigger and Control (TTC) distribution, the IPBus and
IPMC operation or the simultaneous transmission over ∼360 on-board differen-
tial pairs was successfully tested. The module power consumption was measured
to be ∼280 W, with all of the FPGA Multi-Gigabit Transceivers (MGTs) active.
The Phase-I Upgrade of the ATLAS First Level Calorimeter Trigger 353
The maximum recorded module temperature was ∼67 °C, with all three prototypes
fully powered and in adjacent slots.
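The quoted limit of 10^−14 on the bit error rate implies long error-free soak tests. A small helper, hypothetical but based on standard Poisson counting statistics, estimates how long a link must run error-free at a given line rate to establish such a limit:

```python
import math

# Sketch: with zero errors observed in N transmitted bits, the 95% CL
# upper limit on the bit error rate is -ln(0.05)/N ≈ 3/N (Poisson
# statistics). Illustrative helper, not taken from the paper.

def seconds_for_ber_limit(ber_limit, line_rate_bps, cl=0.95):
    n_bits = -math.log(1.0 - cl) / ber_limit  # bits needed error-free
    return n_bits / line_rate_bps

t = seconds_for_ber_limit(1e-14, 11.2e9)
print(f"{t / 3600:.1f} h")  # roughly 7.4 h of error-free running per link
```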
The jFEX prototype is a 24-layer board that hosts four processing FPGAs
(Xilinx Virtex UltraScale XCVU190), 12 MiniPODs, and up to 540 on-board
multi-Gb/s differential tracks. The board control is implemented via an
extension mezzanine, which hosts, among other components, a Xilinx Zynq-7000
based card. Input data duplication is realised within each processing FPGA using
the Physical Medium Attachment (PMA) loopback. One jFEX prototype has been
manufactured and so far only partially assembled and tested. Preliminary link
tests at up to 12.8 Gb/s, with the LAr DPS and the FTMs or in loopback mode,
showed very good and stable operation for all the tested inputs.
Two gFEX prototype versions have been manufactured. The latest version,
shown in Fig. 2, is a 26-layer board with three processing FPGAs (Xilinx
Virtex UltraScale XCVU160), one control and monitoring FPGA (Xilinx Zynq
XC7Z100), 28 MiniPODs and a large number of on-board high-speed
interconnections. All of the optical I/O links of the prototype were successfully
tested at the maximum specified speeds, with both the LAr DPS and in loopback
mode. The module's power consumption was measured to be ∼300 W (with all
MGTs active), while the maximum recorded FPGA temperature was ∼67 °C.
The design of the next gFEX hardware iteration, the pre-production module,
has been recently completed. This features an increased number of PCB layers
(30) and MiniPODs (35), and newer generation FPGAs (UltraScale+).
The TREX prototype is currently being manufactured. This will be an 18-
layer VME rear transition module, mainly hosting one Xilinx Kintex UltraScale
FPGA (KU115) and four Samtec FireFly optical transmitters for sending data
to the FEXes.
4 Outlook
The prototyping and testing of the L1Calo modules for Phase-I will continue
during 2017, to guide preparation of the final designs. The final modules will be
installed in the experiment during the LHC long shutdown starting in 2019, with
the aim of being fully commissioned before the start of Run 3 in 2021.
References
1. ATLAS Collaboration: The ATLAS experiment at the CERN Large Hadron Col-
lider. JINST 3, S08003 (2008)
2. Achenbach, R., et al.: The ATLAS level-1 calorimeter trigger. JINST 3, P03001
(2008)
3. ATLAS Collaboration: Technical Design Report for the Phase-I Upgrade of the
ATLAS TDAQ System. CERN-LHCC-2013-018 (2013)
4. From ATL-DAQ-PROC-2017-017. Published with permission by CERN
5. ATLAS Collaboration: ATLAS Liquid Argon Calorimeter Phase-I Upgrade Techni-
cal Design Report. CERN-LHCC-2013-017 (2013)
6. Andrei, V., et al.: Tile Rear Extension module for the Phase-I upgrade of the ATLAS
L1Calo PreProcessor system. JINST 12, C03034 (2017)
7. Chen, H., et al.: The Prototype Design of gFEX - A Component of the L1Calo
Trigger for the ATLAS Phase-I Upgrade. ATL-DAQ-PROC-2016-046
The CMS Level-1 Calorimeter Trigger
Upgrade for LHC Run II
Alessandro Thea
on behalf of the CMS Level-1 Trigger group
1 Introduction
Run II of the Large Hadron Collider (LHC) started in spring 2015 after a two-year
shutdown period. In October 2016 the LHC instantaneous luminosity reached the
record value of 1.5 × 10^34 cm^−2 s^−1, with up to 50 simultaneous inelastic
collisions per crossing (pile-up). The CMS experiment implements a two-level
trigger system to select the potentially interesting events among the millions of
collisions occurring every second: a hardware-based Level-1 (L1) trigger that
reduces the rate from 40 MHz to about 100 kHz, followed by a software-based
High Level Trigger (HLT). The overall reduction factor achieved is O(10^5).
The Level-1 CMS Calorimeter trigger receives coarse information on the trans-
verse energy (ET ) of the collision products from the electromagnetic (ECAL), the
hadronic (HCAL) and the forward hadronic calorimeters (HF) in the form of trig-
ger primitives. The Level-1 Trigger uses the primitives to build electron/photon,
τ, jet and energy sum trigger candidates. An upgrade of the L1 trigger system [2]
has been completed in view of a further increase of the LHC luminosity, which is
expected to approach 2 × 10^34 cm^−2 s^−1 in 2017.
The Time Multiplexed (TM) [3] design, shown in Fig. 1, is one of the main
novelties of the upgraded calorimeter trigger. The system is divided into two
consecutive processing layers: the first layer, composed of 18 Calorimeter Trigger
Processor boards (CTP7) [4], is responsible for tower-level pre-processing and
data formatting, e.g. ECAL and HCAL tower energy sum calculation, energy
calibration and calculation of the ratio between HCAL and ECAL deposits (H/E).
In the second layer 9 Master Processor cards (MP7) [5] run the sophisticated
algorithms to find particle candidates and calculate the global energy sums.
Each MP7 receives the whole event from the layer-1 cards at trigger tower (TT)
granularity with no boundaries. The algorithms have fixed latency and are fully
pipelined: the processing starts as soon as the minimum amount of data is avail-
able. The results are sent to the demultiplexer board, also an MP7, for final
formatting before being forwarded to the μGT.
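The time-multiplexed distribution described above can be sketched as a simple round-robin dispatch; the model below is illustrative only and ignores the fixed-latency pipelining of the real firmware.

```python
# Sketch of time-multiplexed dispatch: layer-1 sends each complete event
# (all trigger towers) to one of the 9 layer-2 MP7 cards in round-robin
# order, so every card sees whole events at 1/9 of the event rate.

N_MP7 = 9

def dispatch(events):
    """Map each event (by bunch-crossing index) to a layer-2 node."""
    assignments = {}
    for bx, event in enumerate(events):
        assignments.setdefault(bx % N_MP7, []).append(event)
    return assignments

nodes = dispatch([f"event{i}" for i in range(18)])
print(len(nodes[0]))  # each node processes 2 of the 18 events
```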
The algorithms of the upgraded trigger system exploit the full trigger tower
granularity and the access to the full event information. A dynamic clustering
has been designed to reconstruct lepton signatures precisely in the calorimeter
instead of using a sliding window with a fixed size. The advantage of a dynamic
technique is the construction of basic clusters that are combined to reconstruct
hadronic τ lepton candidates. An optimum-sized window is used to build particle
jet candidates directly from trigger towers. Another challenge addressed by the
new Level-1 system is the online determination of the pile-up energy without
information from the tracker. The pile-up mitigation scheme is optimized so that
the trigger objects remain robust against changing conditions.
The firmware implementation of the algorithms was particularly challenging,
as all the object-finder algorithms had to fit in a single Xilinx Virtex 7 FPGA
together with the control logic and the testing infrastructure. In the TM approach
the data from the calorimeters are reorganized into consecutive rings of 72 TTs in φ
by layer-1, then transmitted to layer-2 in pairs of positive- and negative-eta rings,
every bunch crossing. For the 32 bits received on each link, the internal processing
frequency achieved is 240 MHz. The structure of the firmware is organized so
that consecutive algorithm steps converge in the core of the chip, where the
sorting of the trigger candidates takes place. The resulting firmware is compact
and easily maintainable: since the start of the Run II period, it has been
successfully rebuilt more than 50 times. The total latency of the upgraded system
remains under 1.2 µs.
The upgraded trigger electronics was installed in 2013–2015, during the first long
shutdown of the LHC, in parallel to the existing trigger system. The electronics
and the algorithms were used to record collision data in parasitic mode in autumn
2015. The algorithm validation was performed by comparing the collected data
with a bit-level software emulation. The commissioning of the system as the
primary CMS trigger was completed in early 2016 during the CMS cosmics
data-taking campaign. The first collisions in April 2016 were successfully acquired
by the upgraded trigger. The calibration settings and the trigger thresholds were
updated during the year to retain optimal selection performance with the steady
increase of the LHC instantaneous luminosity. By the end of the 2016 data-taking
period, in October 2016, more than 40 fb−1 had been successfully recorded with
the upgraded trigger. Thanks to the upgrade, at an instantaneous luminosity of
1.5 × 10^34 cm^−2 s^−1 the trigger thresholds were kept under ∼35 GeV for the
single electron trigger, and under ∼25 GeV and ∼12 GeV for the two double
electron trigger legs. The new double τ lepton trigger threshold remained below 32 GeV.
6 Conclusions
The new CMS Level-1 Calorimeter trigger has successfully completed the first
year of operations. Development, installation and commissioning of the hard-
ware and the selection algorithms were conducted under a very tight schedule.
The performance of the new system throughout the 2016 LHC proton run was
excellent: despite the higher luminosity and the harsher environment, the
thresholds for single-object triggers were kept low enough for a successful CMS
physics programme. The experience with the upgraded trigger will provide lessons
for the design of the future upgrade planned for the LHC high-luminosity run in
the next decade.
References
1. Chatrchyan, S., et al.: CMS TDR CERN/LHCC 2000-38, CMS-TDR-006-1
2. Chatrchyan, S., et al.: CERN-LHCC-2013-011, CMS-TDR-12 (2013)
3. Baber, M., et al.: JINST 9(01), C01006 (2014)
4. Svetek, A., et al.: JINST 11(02), C02011 (2016)
5. Imperial College London, MP7 homepage. http://www.hep.ph.ic.ac.uk/mp7
6. Wittmann, J., et al.: JINST 12(01), C01046 (2017)
7. Hazen, E., et al.: JINST 8, C12036 (2013)
8. Larrea, C.G., et al.: JINST 10(02), C02019 (2015)
The ATLAS Muon-to-Central-Trigger-
Processor-Interface (MUCTPI) Upgrade
1 Introduction
Fig. 2. One half-octant of the current MUCTPI with 4 barrel, 6 endcap, and 3 forward sectors
indicating the possible overlap zones
2 ATLAS Upgrade
Fig. 3. The new MUCTPI with two Muon sector processors (Xilinx Virtex Ultrascale), one
trigger and readout processor (Xilinx Kintex Ultrascale), and a control processor (Xilinx Zynq)
Every RemoteBus Client (thread) has its own TCP socket and its own RemoteBus
Server thread, see Fig. 4. The RemoteBus Server reads from and writes to the other
processor FPGAs, using the Xilinx AXI Chip2Chip protocol [4] for communication
between FPGAs, and executes functions for auxiliary hardware on the server. Some
requests are pre-defined in base classes implemented for communication between any
two computers, e.g. READ(N), WRITE(N). Additional requests are added depending
on the hardware of the server (i.e. the MUCTPI). All parameters, request and response,
are 32-bit data words, and are added into the message or retrieved from the message in
a stack-like way. Additional request types can be added as functions to the server and
client, using C++ inheritance. The Yocto/OpenEmbedded development framework [5]
is used for creating the Linux operating system, for compiling the application software
(RemoteBus) and for providing all files necessary to boot and run the SoC.
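The request/response message format described above can be sketched as follows; the class name, the request code, and the little-endian wire format are assumptions made for illustration, since the actual RemoteBus software is written in C++ and its encoding is not detailed here.

```python
import struct

# Sketch of the message format described above: all request/response
# parameters are 32-bit words, appended to and retrieved from the message
# in a stack-like way. Names and word layout are illustrative.

class RemoteBusMessage:
    def __init__(self):
        self.words = []

    def push(self, word):            # append a 32-bit parameter
        self.words.append(word & 0xFFFFFFFF)

    def pop(self):                   # retrieve in stack-like (LIFO) order
        return self.words.pop()

    def serialize(self):             # wire format sent over the TCP socket
        return struct.pack(f"<{len(self.words)}I", *self.words)

# A READ(N) request: request code followed by address and word count.
msg = RemoteBusMessage()
msg.push(0x1)         # hypothetical request code for READ
msg.push(0x80000000)  # register address
msg.push(4)           # N = number of words to read
print(len(msg.serialize()))  # 12 bytes: three 32-bit words
```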
Fig. 4. RemoteBus software for run control using the SoC (Zynq)
Two derived classes “ZC706Client” and “ZC706Server” were implemented for the
Xilinx ZC706 (Zynq) evaluation board. Requests were added for the hardware of the
evaluation board, in particular for DC/DC converters, clock configuration, and
temperature/voltage monitors. The minimal latency for a request-response transaction
is around 75 µs. The bandwidth is limited by the Ethernet throughput and reaches about
50 Mbyte/s for 10-kword blocks, which is about 10 times more than the throughput of
the previous MUCTPI using VME. No particular effort was made to optimize the
network. Running multiple clients or client threads is safe and increases performance.
RemoteBus is currently being applied for testing the MUCTPI prototype.
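These two numbers fit a simple latency/bandwidth model: every request pays the full round-trip latency, so small transfers are latency-dominated. The sketch below is an illustrative model with an assumed saturation throughput, not a measurement.

```python
# Sketch: effective throughput vs. block size when each request pays a
# fixed ~75 us round-trip latency. The saturation throughput is an
# assumed parameter, not a quoted figure.

LATENCY_S = 75e-6         # request-response latency
PEAK_BPS = 60e6           # assumed saturation throughput, bytes/s

def effective_throughput(block_bytes):
    transfer_time = block_bytes / PEAK_BPS
    return block_bytes / (LATENCY_S + transfer_time)

for words in (100, 10_000, 1_000_000):
    block = words * 4     # 32-bit words
    print(f"{words:>9} words: {effective_throughput(block) / 1e6:5.1f} MB/s")
```

With these assumptions the model reproduces the observed ∼50 MB/s for 10-kword (40 kB) blocks, and shows why larger transfers approach the link limit.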
4 Conclusions
The new MUCTPI prototype became available at the start of May 2017 and is currently
being tested. The run control path has been tested with Xilinx Zynq evaluation boards.
RemoteBus software was developed with functions for accessing memory in the pro-
cessor FPGAs, as well as for auxiliary hardware. A port of the ATLAS TDAQ software
to Xilinx Zynq with embedded Linux is under way. The Yocto/OpenEmbedded
development framework is used for building the Linux operating system and the
RemoteBus software. In conclusion, trigger electronics are not only becoming fully
optical, much denser, and more intelligent for processing, but also more intelligent to
control.
References
1. The ATLAS Collaboration: The ATLAS experiment at the CERN Large Hadron Collider.
JINST 3, S08003 (2008)
2. Haas, S., et al.: The ATLAS Muon to central trigger processor interface. In: Proceedings of
Topical Workshop on Electronics for Particle Physics, CERN-2007-007 (2007)
3. Akatsuka, S., et al.: The phase-1 upgrade of the ATLAS Level-1 endcap muon trigger. In:
Proceedings of Topical Workshop on Electronics for Particle Physics, Springer Proceedings
in Physics 212 (2018)
4. Xilinx AXI Chip2Chip Protocol. https://www.xilinx.com/products/intellectual-property/axi-
chip2chip.html. Accessed 13 June 2017
5. Yocto Project. https://www.yoctoproject.org. Accessed 13 June 2017
Automated Load Balancing in the ATLAS
High-Performance Storage Software
The purpose of the Data Logger is to decouple the online from the offline oper-
ations. It enables ATLAS to cope with disruptions of the permanent storage
service. Its tasks are to write selected event data to non-volatile storage and to
transfer them to permanent storage outside of the ATLAS facility.
In terms of hardware the Data Logger is a scale-out system currently consist-
ing of four local-attached high-performance storage solutions sporting two head
servers each. It can easily be upgraded to provide more storage space or more
bandwidth. The system comprises almost five hundred drives with a total usable
space of 430 TB. It is able to provide a total of 8 GB/s of concurrent read and
write operations.
These servers execute a distributed multi-threaded in-house application that
receives selected event data over two 10GbE network links and writes them to
disks in an organized file scheme. It also computes a file-by-file Adler32 [4] check-
sum. The application is completely data-driven, therefore its workload is entirely
determined by the data composition and indirectly by the trigger configuration.
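The file-by-file Adler-32 checksum mentioned above maps directly onto Python's zlib module, which implements the checksum defined in [4]; a minimal sketch of a chunked computation, not the Data Logger's actual code:

```python
import zlib

# Sketch: file-by-file Adler-32 checksum, computed incrementally over
# chunks as data streams in. Chunked updates give the same result as a
# single pass over the whole file.

def adler32_of(chunks):
    checksum = 1  # Adler-32 initial value
    for chunk in chunks:
        checksum = zlib.adler32(chunk, checksum)
    return checksum & 0xFFFFFFFF

data = [b"event data, ", b"streamed in chunks"]
assert adler32_of(data) == zlib.adler32(b"".join(data))
print(hex(adler32_of(data)))
```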
The trigger system classifies the events into classes called streams. Each event
can be assigned to one or several streams. Each event is also associated with a
luminosity block (lumiblock), a time interval during which the detector's operating
conditions are considered constant. The typical duration of a lumiblock is 60 s.
The application writes all events of the same stream and lumiblock to a dedicated
Fig. 1. Comparison of 2015 and 2016 stream throughput distribution for typical oper-
ations. Only the six major streams out of 25+ are shown. It shows the intrinsically
non-uniform nature of the application workload and its evolution between 2015 and
2016.
Fig. 2. Overview of the threading model of the Data Logger application. Input threads
handle network communications. The middle components dispatch the events among
the output threads. Output threads compute the data checksum and write data to
disks.
Between 2015 and 2016 the peak system writing throughput more than doubled,
going from ≈1.4 GB/s to ≈3.2 GB/s. The application was therefore running
closer to its saturation point. Figure 1 shows the evolution of the stream
throughput distribution for typical ATLAS operations between 2015 and 2016.
As one can see, the relative difference between the major streams is smaller in 2016.
For these two reasons the random assignment of several major streams to the
same thread would degrade the application performance. Synthetic tests confirmed
the performance degradation upon occurrence of such conjoint assignments: −8%
for the two major streams together, and −12% for the three major streams
together. In order to re-establish the application performance, a smarter
load-balancing algorithm, sensitive to the application conditions, was needed.
A weighted policy was designed to make assignment decisions based on a per-thread
load: a new file is assigned to the thread with the lowest load. The thread load
represents the thread's current activity, which allows the application workload to
be distributed optimally among the threads. The thread load is computed from the
amount of data processed during a configurable time window. The choice of the
operational value for this period is a trade-off between the desired sensitivity to
condition changes and the accuracy of the decisions. In the same manner a load is
computed for each stream. Upon assignment, the load of the stream is added to
the load of the thread, so that the thread load immediately reflects the assignment
without waiting for real-time data to accumulate. This ensures that streams with
significant throughput will not be handled by the same thread.
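A minimal sketch of this weighted policy is given below; it keeps only the up-front charging of the stream load and omits the sliding time window, and all names are illustrative rather than taken from the TDAQ software.

```python
# Sketch of the weighted assignment policy described above: each thread
# carries a load; a new file goes to the least-loaded thread, and the
# stream's own load is charged immediately so two heavy streams cannot
# land on the same thread. Illustrative only (no sliding-window decay).

class WeightedBalancer:
    def __init__(self, n_threads):
        self.thread_load = [0.0] * n_threads

    def record(self, thread, nbytes):
        """Real-time accounting of data actually processed."""
        self.thread_load[thread] += nbytes

    def assign(self, stream_load):
        """Pick the least-loaded thread and charge the stream load up front."""
        thread = min(range(len(self.thread_load)),
                     key=lambda t: self.thread_load[t])
        self.thread_load[thread] += stream_load
        return thread

balancer = WeightedBalancer(n_threads=4)
# Three major streams arrive: each lands on a different thread.
print([balancer.assign(s) for s in (3.0e9, 2.5e9, 2.0e9)])  # [0, 1, 2]
```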
Figure 3 shows the algorithm behavior by plotting the thread loads computed
for a 5-s period and showing an example of assignments of the three major
streams to different threads. The thread loads evolve according to real-time data
Automated Load Balancing in the ATLAS High-Performance 369
Fig. 3. Thread workloads as a function of time (left) and zoom around the assignment
decisions (right). Each line represents the load for a different thread computed for a
5-s period. Annotations mark the assignment of the three major streams to threads.
3 Conclusion
The Data Logger system of the ATLAS TDAQ system is a key component
enabling the decoupling of the online and offline operations. Its workload is
intrinsically non-uniform and cannot be distributed naively. In 2016 new operating
conditions required a new workload distribution strategy. A weighted policy was
designed to be sensitive and self-adaptive to evolving operational conditions. It
has been validated on both test and production systems, and proved to restore the
predictability of the application performance. This development is now part of the
TDAQ system for the 2017 data-taking period.
370 F. Le Goff and W. Vandelli
References
1. ATLAS Collaboration: Performance of the ATLAS detector using first collision data.
JHEP 09, 056 (2010)
2. Evans, L., Bryant, P.: LHC machine. J. Instrum. 3(08), S08001 (2008)
3. The ATLAS TDAQ Collaboration: The ATLAS data acquisition and high level trigger
system. J. Instrum. 11(06), P06008 (2016)
4. Deutsch, P., Gailly, J.-L.: ZLIB compressed data format specification version 3.3.
Aladdin Enterprises (1996)
Study of the Calorimeter Trigger Simulation
at Belle II Experiment
1 Introduction
The Belle experiment at the KEKB collider at the High Energy Accelerator Research
Organization (KEK) in Japan was performed to measure the large mixing-induced
charge-parity (CP) violation in the B meson system [1, 2]. Most of the results are in
good agreement with the Standard Model (SM) predictions of the Cabibbo-Kobayashi-
Maskawa structure of quark flavor mixing, for CP violation in B decays [2], D0–D̄0
mixing [3] and so on. However, the experiments also showed several hints of
discrepancies between the SM predictions and the experimental measurements [4, 5].
Thus, a much larger data sample is required to investigate possible New Physics
effects further. Therefore the upgraded experiment, called Belle II, has been
constructed at the SuperKEKB collider [6]. A schematic of the Belle II detector is
shown in Fig. 1.
Since the anticipated beam background level is ∼50 times higher than in the case of
Belle, a robust and flexible trigger system is indispensable for operating Belle II. The
requirements of the level 1 hardware trigger for the Belle II operation are high trigger
The basic framework and idea of the Belle II ECL trigger (ECL-TRG) are the same
as in the case of Belle [1, 8]. To handle the higher trigger rate due to the high
luminosity and beam background level, we have adopted a new trigger scheme that
makes the trigger performance more flexible, using a readout electronics architecture
with Flash Analog-to-Digital Converters (FADC), Field Programmable Gate Array
(FPGA) components, and high-speed serial data links running at 127 Mbps (Fig. 3).
Fig. 3. Software and hardware configuration of the ECL trigger system for the Belle II
experiment. The numbers in parentheses are the numbers of each module.
Fig. 4. Fitting result of trigger timing using the fastest TC timing (left), the highest energy
deposit TC timing (middle) and energy weighted timing of TCs (right).
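The three timing estimators compared in Fig. 4 can be written down compactly; the formulas below are the natural definitions and are meant as an illustration, not as the TSim-ecl implementation.

```python
# Sketch of the three trigger-timing estimators compared in Fig. 4,
# applied to a list of (energy, time) trigger cells (TCs).

def fastest_tc(tcs):
    """Earliest TC time, regardless of deposited energy."""
    return min(t for _, t in tcs)

def highest_energy_tc(tcs):
    """Time of the TC with the largest energy deposit."""
    return max(tcs, key=lambda et: et[0])[1]

def energy_weighted(tcs):
    """Energy-weighted mean time: sum(E_i * t_i) / sum(E_i)."""
    total = sum(e for e, _ in tcs)
    return sum(e * t for e, t in tcs) / total

tcs = [(0.1, 12.0), (2.0, 10.0), (0.4, 11.0)]  # (GeV, ns)
print(energy_weighted(tcs))
```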
374 I. Lee et al.
Table 1. Physics trigger efficiency (%) with 2-D and 3-D Bhabha veto logic

Sample                    2-D logic       3-D logic
                          99.99 ± 0.01    99.97 ± 0.02
Bhabha (θlab 17)          1.37 ± 0.12     7.83 ± 0.26
ISR (e+e− → μ+μ−)         14.04 ± 0.38    17.14 ± 0.41
ISR (e+e− → π+π−)         20.45 ± 0.67    35.26 ± 0.82
τ → generic               79.09 ± 0.21    78.19 ± 0.40
τ → μγ                    82.56 ± 0.38    85.48 ± 0.35
τ → eγ                    78.41 ± 0.41    89.29 ± 0.30
5 Conclusion
The SuperKEKB collider and the Belle II detector have been constructed to search for
New Physics phenomena. For optimal Belle II operation, we have CDC, ECL, BPID
and KLM sub-triggers and a global trigger system, consisting of the GRL and GDL, to
make the final trigger decision. The ECL trigger system is robust and flexible thanks
to its FPGA firmware architecture. TSim-ecl is being developed in order to test
appropriate trigger algorithms in the Belle II environment. The TSim-ecl study shows
that the energy-weighted timing of TCs gives the best trigger timing resolution. In the
comparison of the 2-D and 3-D Bhabha veto logic, the 3-D logic provides better
selection of low-multiplicity events. The optimization of the cluster energy cut will
be performed in a further study.
References
1. Abashian, A., et al. (Belle Collaboration): The Belle detector. Nucl. Instrum. Methods A 479,
117 (2002)
2. Kurokawa, S., Kikutani, E.: Overview of the KEKB accelerators. Nucl. Instrum. Methods A 499,
1 (2003)
3. Abe, K., et al. (Belle Collaboration): Improved measurement of CP-violation parameters
sin 2φ1 and |λ|, B meson lifetimes, and B0–B̄0 mixing parameter. Phys. Rev. D 71, 072003 (2005)
4. Starič, M., et al. (Belle Collaboration): Evidence for D0–D̄0 mixing. Phys. Rev. Lett. 98,
211803 (2007)
5. Wei, J.-T., et al. (Belle Collaboration): Phys. Rev. Lett. 103, 171801 (2009)
6. Lin, S.-W., et al. (Belle Collaboration): Nature 452, 332 (2008)
7. Funakoshi, Y.: SuperKEKB project at KEK. Beam Dyn. Newslett. 67, 28 (2015)
8. Iwasaki, Y., et al.: Level 1 trigger system for the Belle II experiment. IEEE Trans. Nucl. Sci.
58, 1807–1815 (2011)
RDMA Optimizations on Top of 100 Gbps
Ethernet for the Upgraded Data Acquisition
System of LHCb
1 Introduction
The Large Hadron Collider (LHC) has four major experiments, one of which is LHCb.
Its underground detector gathers information from particle collisions taking place at
high energies. The design collision rate is at most 40 MHz, if all available bunch slots
are filled. In order to deal with the large quantities of data generated at the collider,
one needs to reject the irrelevant events and keep only the interesting ones. We call
this selection procedure triggering.
In the current system, LHCb operates two levels of triggers, the first of which is
performed by FPGA-based custom hardware, reducing the 40 MHz input rate to
1 MHz.
The LHCb experiment will undergo a major upgrade during the LHC Long Shutdown 2,
from 2018 until 2019 [2]. One of the key goals of this upgrade is the removal of the
low-level hardware trigger. As a result, the event selection will be fully software-driven,
which gives more flexibility to configure it for various needs. The LHCb upgrade
therefore means reading out every bunch crossing. There will be approximately 30–40
million bunch crossings every second, each producing a data chunk of about 100 kB [3],
so the aggregate data volume is very large, and it all has to go through the network.
That is the key challenge from the technology point of view.
In this paper we briefly describe the upgraded network communication scheme, which
will run on the 500 nodes, and then discuss the benchmarking results we obtained,
going from simple off-the-shelf benchmarks to more sophisticated ones. The analysis
ends with a full test running on 4 nodes equipped with 100 Gb/s Ethernet network
adapter cards.
For the LHCb upgrade, we will set up an event-building cluster of 500 nodes to
read and aggregate the 40 Tb/s of data coming out of the detector. The data will arrive
from the sub-detectors to the surface. Using standard servers to host the Readout
Units (RU) makes it easy to manage the buffering and to handle the event building
on a 100 Gb/s standard fabric from the HPC field. Once the data have been
aggregated by the Builder Units (BU), they have to be sent to one of the 3500 Filter
Units over a second network. Those filter units will apply the software triggering rules.
The Readout and Builder Units will be on the same host, which will require an internal
memory throughput of up to 400 Gb/s for each node. The event-building process will
also be required to handle up to 100 Gb/s bidirectionally on the event-building network
(which requires some experimentation). The LHCb experiment is considering 3
different 100 Gb/s technologies for the upgrade: 100 Gb/s Ethernet, Intel® Omni-Path,
and EDR InfiniBand. In this paper, we benchmark some of the Ethernet solutions.
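As a back-of-the-envelope check of these figures (illustrative arithmetic only; the quoted 40 Tb/s presumably includes protocol overhead and headroom on top of this raw estimate):

```python
# Sketch: 30-40 million bunch crossings per second at ~100 kB each,
# spread over 500 event-builder nodes. Illustrative arithmetic only.

CROSSING_RATE = 40e6        # crossings per second (upper estimate)
EVENT_SIZE = 100e3 * 8      # 100 kB in bits
N_NODES = 500

aggregate_bps = CROSSING_RATE * EVENT_SIZE   # total detector output
per_node_in = aggregate_bps / N_NODES        # receive side per node

print(f"aggregate: {aggregate_bps / 1e12:.0f} Tb/s")   # ~32 Tb/s raw
print(f"per node:  {per_node_in / 1e9:.0f} Gb/s in, "
      f"{per_node_in / 1e9:.0f} Gb/s out")             # ~64 Gb/s each way
```

Each node both receives and re-sends event fragments; without zero-copy networking every bit also crosses the memory bus several times, which is consistent with the quoted 400 Gb/s internal memory throughput per node.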
Ethernet is by far the most widely deployed technology, and is therefore worth
studying for this specific use case. Using the standard Linux network stack, data
arriving at a network port undergoes two copy operations: from the device memory to
kernel memory, then from kernel memory to application memory.
Network-intensive high-performance computing needs a network infrastructure
with high bandwidth and low latency. RDMA stands for remote direct memory access.
It makes it possible to access the memory of a computer directly from another one
without involving either one's operating system or CPU. This allows high-throughput,
low-latency networking.
The key benefit of RDMA is its support for zero-copy data transmission: there
are no intermediate data copies to temporary buffers. Instead, the network interface
controller (NIC) is capable of accessing the application-level buffer and reading
from or writing to it directly. No work is required from the CPU, and no cache
pollution or context switches occur. In addition, the software can perform data
transfers directly from user space without any kernel involvement. This is called
kernel bypass. An RDMA transfer can continue to run in parallel with other system
operations.
In this paper, we will study two RDMA solutions, which are designed to run on
Ethernet networks. They are iWARP (internet Wide Area RDMA Protocol) and RoCE
378 B. Vőneki et al.
(RDMA over Converged Ethernet). The key difference between them is how their flow
control is implemented.
RoCE provides a seamless, low-overhead, scalable way to solve the TCP/IP I/O
bottleneck with minimal extra infrastructure. RoCE requires custom settings on both
the endpoint nodes and the network [4]. It uses Priority Flow Control (PFC, IEEE
802.1Qbb), which defines 8 traffic classes. PFC uses the priority bits within the VLAN
tag (IEEE 802.1p) to differentiate up to 8 flows, whose flow control can be managed
independently. In our test setup, we configured two priorities: 0 and 4 (where the
greater number represents the higher priority). Priority 0 was used for lossy traffic,
where an upper-layer protocol guarantees lossless behaviour at the application level.
iWARP is another alternative RDMA offering. It does not need custom switch
settings, because its implementation is based on TCP [5].
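The priority bits used by PFC sit in the 16-bit tag control field of the 802.1Q VLAN header (3 PCP bits, 1 DEI bit, 12 VLAN ID bits). A small encoder, illustrative rather than part of the test-bench tooling, shows where priority 4 ends up:

```python
import struct

# Sketch: the 3 priority (PCP) bits of IEEE 802.1p sit at the top of the
# 16-bit 802.1Q tag control field, which PFC uses to distinguish up to
# 8 flows. The VLAN ID used here is an arbitrary example.

def vlan_tci(priority, vlan_id, dei=0):
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 0xFFF
    return (priority << 13) | (dei << 12) | vlan_id

# Priority 4 (the lossless class in the setup above) on VLAN 100:
tci = vlan_tci(priority=4, vlan_id=100)
print(struct.pack("!H", tci).hex())  # '8064'
```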
4 Simple Benchmarks
Fig. 1. Iperf TCP tests between 2 nodes with Chelsio T62100-LP-CR and Mellanox CX455A
The iWARP test bench consists of 4 Dell PowerEdge C6220 nodes with the following specification: 2x Intel® Xeon® CPU E5-2670 at 2.60 GHz (8 cores, 16 threads), 32 GB DDR3 memory at 1333 MHz, a Chelsio 100G T62100-LP-CR NIC, and a Mellanox SN2700 100G Ethernet switch.
The RoCE test bench consists of 4 Intel® S2600KP nodes with: 2x Intel®
Xeon® CPU E5-2650 v3 at 2.30 GHz (10 cores, 20 threads), 64 GB DDR4 memory at
2134 MHz, Mellanox CX455A ConnectX-4 100G NIC, Mellanox SN2700 100G
Ethernet switch. We connected all nodes via 2-m-long direct attached copper
(DAC) cables to the switch (Fig. 1).
These simple TCP bandwidth tests show that a single core is not enough to saturate the available bandwidth. With the Chelsio card we needed 4 threads to reach a reasonable speed, while the Mellanox card needed 16 threads (Fig. 2).
Fig. 2. Iperf UDP tests between 2 nodes with Chelsio T62100-LP-CR and Mellanox CX455A
The previous TCP tests use congestion and flow control, which clearly impose some penalty on the performance observed here. For a cleaner, simpler benchmark we can use UDP instead, a connectionless transport protocol. The UDP tests show that the speed can get much closer to the theoretical maximum, at the cost of much higher CPU utilization. However, the higher CPU utilization may also be an artefact of the benchmark software; this must be verified with alternative benchmarks in the future.
5 RDMA Benchmarks
In the previous tests, all network traffic went through the CPU for processing. The following test series applies benchmarks in which data are written directly to, or read from, main memory via RDMA (Fig. 3).
Fig. 3. ib_write_bw between 2 nodes with Chelsio (iWARP) and Mellanox (RoCE)
First we ran ib_write_bw peer-to-peer tests, which use the RDMA technology supported by the card (either RoCE or iWARP). The test peaked at a maximum bandwidth of 87.44 Gbps using 4096-byte messages. Over iWARP, a single thread gave much better results than with TCP: 87.44 Gbps compared to 48.2 Gbps. This benchmark uses one thread by default; in order to saturate the link better, the same benchmark was run in multiple parallel instances with iWARP. Two threads were enough to reach 95 Gbps, and 8 threads gave 98 Gbps of throughput.
In order to build large systems for HPC, one needs a centrally managed launcher on top of this, for example MPI (Message Passing Interface). Another RDMA bandwidth benchmark, the OSU benchmark, which can be launched via MPI, was also tried for RoCE. The unidirectional speed peaked at 96 Gbps; the bidirectional test reached 139.24 Gbps.
6 Conclusions
We have seen that using Ethernet natively, through the kernel-driven TCP/IP stack, is not efficient: the CPU and memory are too heavily involved, hence a zero-copy approach is needed. The heterogeneous speeds in the bidirectional heat map need to be (and will be) analysed further.
The presented Ethernet RDMA results are promising and may be a good solution for high-speed interconnects in HPC.
References
1. Cámpora Pérez, D.H., Schwemmer, R., Neufeld, N.: Protocol-independent event building
evaluator for the LHCb DAQ system. IEEE Trans. Nucl. Sci. 62(3), 1110–1114 (2015)
2. The LHCb Collaboration et al.: LHCb Trigger and Online Upgrade Technical Design Report,
CERN-LHCC-2014-016; LHCB-TDR-016 (2014)
3. Otto, A., Cámpora Pérez, D.H., Neufeld, N., Schwemmer, R., Pisani, F.: A first look at 100
Gbps LAN technologies, with an emphasis on future DAQ applications. In: 21st International
Conference on Computing in High Energy and Nuclear Physics (2015)
4. Mellanox Homepage: Network Considerations for Global Pause, PFC and QoS with Mellanox Switches and Adapters. https://community.mellanox.com/docs/DOC-2022. Accessed 16 June 2017
5. Wikipedia Article Homepage: RDMA over Converged Ethernet. https://en.wikipedia.org/
wiki/RDMA_over_Converged_Ethernet. Accessed 16 June 2017
Integration of Data Acquisition
Systems of Belle II Outer-Detectors
for Cosmic Ray Test
Abstract. The Belle II experiment is scheduled to start its physics run in 2018, and the development of the data acquisition (DAQ) system, as well as of the detector itself, is ongoing. Currently, most of the outer sub-detectors have already been installed in the Belle II detector, and their performance will be tested in cosmic-ray measurements before beam collisions start. The integration of the DAQ system for the cosmic ray test was first done with a small-sized readout system, and then we moved on to the full-scale system. The system is modularized for each sub-detector so that stand-alone and combined data-taking can be switched easily. We measured the performance of the readout system after the integration and confirmed that the integrated DAQ system for the installed sub-detectors actually worked at the designed trigger rate of the Belle II experiment, 30 kHz. We also observed cosmic ray events with the integrated DAQ system.
1 Introduction
The Belle II experiment [1] aims at searching for new physics beyond the standard
model of particle physics by precisely measuring decays of heavy-flavor mesons. The
target luminosity of SuperKEKB, an asymmetric electron-positron collider, is
8 × 10^35 cm^-2 s^-1, which is 40 times larger than that of its predecessor, KEKB. Therefore, the
construction of an online DAQ system to handle a large data flow from the detector is
challenging. Recently, some of the outer sub-detectors, such as Central Drift Chamber
(CDC), Time-of-Propagation counter (TOP), Electromagnetic Calorimeter (ECL) and
KLong and Muon detector (KLM), were installed in the Belle II detector. After each
sub-detector is installed in the Belle II detector, each system needs to be integrated to
the Belle II DAQ system [2, 3] so that the performance of sub-detectors can be checked
using cosmic-ray events.
How data are handled by the Belle II DAQ system is as follows: First, trigger and
clock signals are fed to Front-End Electronics (FEE) boards of each sub-detector via
the Trigger Timing Distribution (TTD) network which consists of an array of Fast
Timing Switch (FTSW) modules [4]. Analog signals from the sub-detectors are all
digitized on the FEE boards. Only the triggered events are then sent downstream and processed by the readout system, which consists of readout boards and PC farms.
After the readout system, the events are built by event-builders [5] and go through the
high-level trigger [6], which performs reconstruction and reduces the event rate by
rejecting background events. After the selection, data are recorded on storage.
In this paper, we report how we integrated each sub-detector into the Belle II DAQ system and the results of the performance test of the integrated DAQ system.
In the DAQ integration, the interface between the sub-detector FEE and back-end DAQ
system should be as common as possible for different sub-detectors to minimize the
development cost and share the experience with developers of different sub-detectors.
For the common interface of the outer sub-detectors on the back-end DAQ side, we employ a readout board called the “COmmon Pipelined Platform for Electronics Readout (COPPER)” [7]. The COPPER board can be equipped with four receiver cards, called HSLBs (High Speed Link Boards) [8]. One HSLB can accept one optical-fiber input from an FEE board.
The interface part of the back-end DAQ side can be constructed with minimal equipment for a readout test; this setup is called PocketDAQ. It consists of an FTSW module to provide clock and trigger signals, one COPPER board for receiving data from the FEE boards and a PC server for recording data. After each sub-detector's FEE developers have confirmed the readout capability with PocketDAQ, the sub-detector system is integrated with the Belle II DAQ system. The integration itself should therefore be rather straightforward, because the actual interface is unchanged between PocketDAQ and the Belle II DAQ system.
During operation in a cosmic-ray test, data-taking is sometimes performed with combined sub-detectors and sometimes by each sub-detector in parallel. Therefore, the TTD network and the data-flow paths in the back-end DAQ system need to be partitioned to some extent. A schematic view of this partitioned DAQ system is shown in Fig. 1. As for the TTD network, each sub-detector has its own TTD network, which is connected to a global master FTSW module. In data-taking with combined sub-detectors, one common trigger source provides trigger signals to the global master FTSW, and they are then distributed over the sub-detector TTD networks via the sub-detector master FTSWs. Alternatively, each sub-detector master FTSW module can accept trigger signals from its own trigger source; in this case, the FTSW module distributes the trigger signals over its FEE boards for standalone data-taking. On the back-end DAQ side, we duplicate the online DAQ processes for each sub-detector with different network ports to avoid interference. This virtually partitioned back-end DAQ system is controlled by slow-control daemons [9], and switching between standalone and combined data-taking can easily be done via a run-control GUI.
384 S. Yamada et al.
Fig. 1. Schematic view of the partitioned DAQ system for different sub-detectors.
After the DAQ integration, we first performed a stress test of the CDC and ECL DAQ systems using a high-rate dummy trigger to check their performance. In this test, 75 COPPER boards and 9 readout PCs, and 26 COPPERs and 10 readout PCs were used for the CDC and ECL readout systems, respectively. We used a dummy trigger with a
constant rate as well as a random trigger with a pseudo-Poisson distribution. The result of the CDC high-rate test with the pseudo-Poisson trigger is shown in Fig. 2(a). In the constant-rate case, both DAQ systems could process more than 99% of the input triggers. In the random-trigger test, the value was about 98%. The decrease of the efficiency in the pseudo-Poisson trigger test came from the limit on the number of triggers within a certain time window, which is imposed by the limited buffer size of the FEEs used in the Belle II experiment.
Fig. 2. (a) The rate of event processing by the CDC DAQ system with a dummy 30 kHz pseudo-Poisson trigger. (b) Event-display screenshot of a cosmic-ray event observed by the installed outer sub-detectors.
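The size of such an effect can be estimated with elementary Poisson statistics: for a mean trigger rate, the probability that a fixed time window contains more triggers than a buffer can hold is the Poisson tail probability. A sketch with purely illustrative numbers (the actual Belle II window lengths and buffer depths are not quoted here):

```python
import math

def poisson_overflow(rate_hz: float, window_s: float, buffer_depth: int) -> float:
    """Probability that more than `buffer_depth` triggers arrive within one
    time window, for Poisson-distributed triggers at mean rate `rate_hz`."""
    lam = rate_hz * window_s
    cdf = sum(math.exp(-lam) * lam**k / math.factorial(k)
              for k in range(buffer_depth + 1))
    return 1.0 - cdf

# Illustrative only: 30 kHz mean rate, a 1 ms window, room for 40 triggers.
p_overflow = poisson_overflow(30e3, 1e-3, 40)
```

A deeper buffer (larger `buffer_depth`) lowers the overflow probability, which is why the buffer limit shows up only with the random trigger, not the constant-rate one.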
Data-taking of actual cosmic-ray events was also performed with the CDC, ECL, TOP and barrel KLM. The back-end DAQ system for the test included 118 COPPERs and 23 readout PCs, nearly 60% of the total number used in the Belle II experiment. The DAQ system worked in standalone mode as well as in a combined manner. One of the cosmic ray events which fired all the installed outer sub-detectors is shown in Fig. 2(b).
4 Summary
The DAQ integration of the Belle II outer sub-detectors for a cosmic ray test is reported in this paper. With semi-separated TTD networks and multiple data streams of different sub-detectors handled by the slow control system, we can switch between standalone and combined data-taking easily. After the integration of the DAQ system, the commissioning of the readout system was performed with a high-rate dummy trigger. In this high-rate test, a DAQ efficiency of more than 98% was achieved with a 30 kHz pseudo-Poisson trigger. We also succeeded in observing clear cosmic ray events from the combined outer sub-detectors.
References
1. Abe, T., et al.: Belle II Technical Design Report. arXiv:1011.0352 (2010)
2. Nakao, M., et al.: Data acquisition system for Belle II. J. Instrum. 5, C12004 (2010)
3. Yamada, S., et al.: Data acquisition system for the Belle II experiment. IEEE Trans. Nucl. Sci.
62, 1175–1180 (2015)
4. Nakao, M., et al.: Timing distribution for the Belle II data acquisition system. J. Instrum. 7,
C01028 (2012)
5. Itoh, R., et al.: Data flow and high level trigger of Belle II DAQ system. IEEE Trans. Nucl.
Sci. 60, 3720–3724 (2013)
6. Suzuki, S.Y., et al.: The three-level event building system for the Belle II experiment. IEEE
Trans. Nucl. Sci. 62, 1162–1168 (2015)
7. Higuchi, T., et al.: Modular pipeline readout electronics for the SuperBelle drift chamber.
IEEE Trans. Nucl. Sci. 52, 1912–1917 (2005)
8. Suna, D., et al.: Belle2Link: a global data readout and transmission for Belle II experiment at
KEK. Phys. Procedia 37, 1933–1939 (2012)
9. Konno, T., et al.: The slow control and data quality monitoring system for the Belle II
experiment. IEEE Trans. Nucl. Sci. 62, 897–902 (2015)
Study of Radiation-Induced Soft-Errors
in FPGAs for Applications
at High-Luminosity e+ e− Colliders
1 Introduction
Digital electronics in Trigger and Data Acquisition (TDAQ) systems of High-
Energy Physics (HEP) experiments is mostly implemented by means of static
RAM-based Field Programmable Gate Arrays (SRAM-based FPGAs) [1,2].
These devices offer advantages in terms of re-configurability and high-speed
processing and support embedded high-speed serial IOs. Unfortunately SRAM-
based FPGAs are sensitive to single event effects in the configuration memory.
In fact, single event upsets (SEUs) and multiple bit upsets (MBUs) may alter
the design configured into the device. The use of such devices is limited on-detector, where radiation is present, and the search for solutions for mitigating radiation-induced soft errors in SRAM-based FPGAs is presently a hot topic. Normally, triple modular redundancy techniques coupled with configuration correction, i.e. scrubbing, are used to reduce such effects. Moreover, error-correcting-code circuitry has been integrated into latest-generation devices to reduce configuration errors, though with some limitations.
c Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 386–390, 2018.
https://doi.org/10.1007/978-981-13-1313-4_74
In order to choose which strategies to adopt for protecting the design functionality, the designer needs an estimate of the expected configuration-bit upset rate. Usually, in order to determine the particle-to-bit-error cross section, irradiation experiments at dedicated facilities are carried out with proton, neutron and heavy-ion beams [3,4]. Knowledge of the cross section as a function of the particle energy spectra and fluxes is paramount for obtaining a reliable prediction of the upset rate. In situ (or in-flight, for space applications) measurement of the upset rate is highly recommended when the above-mentioned information is not available with sufficient precision. Indeed, measurements of this kind have been carried out at the Large Hadron Collider (CERN, Geneva), where upsets in readout and control FPGAs were monitored during HEP data taking [5], and experiments have been launched into space in order to compare upset rate predictions with actual measurements [6]. Moreover, experiments aimed at measuring the effect of atmospheric neutrons on the device configuration are carried out by FPGA vendors [7].
the intensity frontier towards the search for New Physics. The beams collide at a single point, where the Belle II detector is installed (Fig. 1). Table 1 reports the main design parameters of the machine.
This work focuses on measurements of configuration soft errors induced by ionizing radiation in an SRAM-based FPGA device installed at a distance of ∼1 m from the beam interaction point (IP).
3 Test Setup
We designed a dedicated test board hosting a Xilinx Kintex-7 325T FPGA. In order to distinguish FPGA failures from those of other devices, our board hosts only passive components apart from the device under test. Power and configuration are fed to the board at the IP over dedicated cabling from a remote DAQ room. A single-board computer [9] manages configuration and read-back via a JTAG connection (Fig. 2). A 4-channel power supply [10] feeds the FPGA power domains and, by means of a dedicated sensing scheme, an analog-to-digital converter (ADC) [11] reads the actual voltages at the load.
4 Test Results
During the SuperKEKB operation, our setup allowed us to monitor the FPGA
configuration memory as well as its power consumption continuously. The beam
currents of the SuperKEKB collider spanned a range between 50 and 500 mA
for both the e− and e+ rings, therefore we could test the FPGA in different
radiation conditions.
We measured an upset probability of 2.0 · 10−7 averaged over the phase-1
duration, nearly 120 days. This probability is defined as the ratio of the measured
upsets, 18 in total, over the 91.5 · 106 FPGA configuration bits. The mean time
between upsets (MTBU) was 6.7 days. The collected statistics include 1 multiple cell upset (MCU) and 16 single event upsets. In the MCU event, two
configuration bits were flipped. Results from PIN-diode detectors, also installed at a distance of nearly 1 m from the beam pipe, suggest that the total dose was smaller than 300 krad. We did not measure significant variations (>1 mA) in the FPGA currents, which suggests that there have been no, or negligible, total-dose effects. At the end of Phase 1, the FPGA showed no permanent damage and was operating correctly.
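The quoted numbers are mutually consistent, as a quick cross-check shows:

```python
# Cross-check of the Phase-1 figures quoted above.
upsets = 18            # observed configuration upsets
config_bits = 91.5e6   # Kintex-7 325T configuration bits
days = 120             # approximate Phase-1 duration

upset_prob = upsets / config_bits   # per-bit upset probability: ~2.0e-7
mtbu_days = days / upsets           # mean time between upsets: ~6.7 days
```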
5 Conclusions
We installed a 7-series FPGA device within 1 m of one of the SuperKEKB beam pipes during the machine commissioning. Our results show an MTBU of 6.7 days averaged over the commissioning time frame. We have evidence neither of a radiation impact on the FPGA power consumption nor of permanent damage after irradiation. In the next operation phase of SuperKEKB, expected in 2018, beam currents will increase and there will be collisions. The background radiation might increase, and we are therefore continuing this upset-monitoring activity in order to gather new, updated information.
Acknowledgments. This work is part of the ROAL SIR project funded by the Italian
Ministry of Research (MIUR), grant no. RBSI14JOUV. “Accesso Aperto MIUR”. The
institutions which contributed to the results reported in this work are listed below
as affiliations of the authors. We also wish to thank all the members of the BEAST2
community for supporting this activity.
References
1. Xilinx Inc.: Virtex UltraScale FPGAs Data Sheet: DC and AC Switching Charac-
teristics, DS893 (v1.7.1), 4 April 2016
2. Altera Corp.: Stratix 10 Device Overview, S10-OVERVIEW, 04 December 2015
3. Hiemstra, D.M., Kirischian, V.: Single event upset characterization of the kintex-7
field programmable gate array using proton irradiation. In: Proceedings of 2014
IEEE Radiation Effects Data Workshop (REDW), Paris (2014). https://doi.org/
10.1109/REDW.2014.7004593
4. Higuchi, T., Nakao, M., Nakano, E.: Radiation tolerance of readout electronics for
Belle II. In: Proceedings of Topical Workshop on Electronics for Particle Physics,
Vienna (2011)
5. Røed, K., Alme, J., Fehlker, D., Lippmann, C., Rehman, A.: First measurement
of single event upsets in the readout control FPGA of the ALICE TPC detector.
In: Proceedings of Topical Workshop on Electronics for Particle Physics, Vienna
(2011)
6. Samaras, A., Varotsou, A., Chatry, N., Lorfevre, E., Bezerra, F., Ecoffet, R.: CAR-
MEN1 and CARMEN2 experiment: comparison between in-flight measured SEE
rates and predictions. In: Proceedings of the 15th European Conference on Radi-
ation and Its Effects on Components and Systems (RADECS), Moscow (2015).
https://doi.org/10.1109/RADECS.2015.7365590
7. Xilinx Inc.: Continuing Experiments of Atmospheric Neutron Effects on Deep Sub-
micron Integrated Circuits, WP286 (v2.0), 22 March 2016
390 R. Giordano et al.
8. Adachi, I.: Status of Belle II and SuperKEKB. JINST 64(6), 1185–1190 (2017)
9. Aloisio, A., Ameli, F., Anastasio, A., Branchini, P., Di Capua, F., Giordano, R., Izzo, V., Tortone, G.: uSOP: a microprocessor-based service oriented platform for control and monitoring. IEEE Trans. Nucl. Sci. PP(99) (2017)
10. GW-Instek: DC Power Supply, GPD-X303S Series, User Manual Gw-Instek part
no. 82PD-433S0M01 (2014)
11. Linear Technology: 24-Bit 8-/16-Channel DS ADC with Easy Drive Input Current
Cancellation and I2C Interface (2006)
Design of High Performance Compute
Node for Belle II Pixel Detector Data
Acquisition System
1 Introduction
event is about 1 MB [2]. The whole PXD data rate thus amounts to about 240 Gbps, and how to deal with this huge data volume is the task of the DAQ system (Fig. 1).
Fig. 1. Structure of PXD and r-u view of PXD and SVD detector. (a) is PXD detector. Light
grey part is the pixel ladder. Each ladder consists of two half ladders. (b) is r-u view of PXD and
SVD detectors.
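The 240 Gbps figure follows directly from the design trigger rate and the event size quoted above:

```python
trigger_rate_hz = 30e3     # Belle II design trigger rate
event_size_bytes = 1e6     # ~1 MB of PXD data per event [2]

# 30 kHz x 1 MB x 8 bit/byte = 240 Gbps raw PXD data rate
rate_gbps = trigger_rate_hz * event_size_bytes * 8 / 1e9
```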
2 Data Reduction
There are two ways to reduce the PXD data. One is to reduce the size of each event. The PXD event data contain many background hits in addition to the hits associated with real tracks. The real hit data on the PXD, called Regions of Interest (ROIs), occupy only a small part of the PXD. Finding the ROIs on the PXD and sending only them to the event builder is one way to reduce the PXD data; the ROIs can be found with the help of tracks reconstructed by the SVD. The other way is to reduce the number of events sent to storage, by sending the PXD data only after the HLT selection. It is estimated to take about 5 s from the particle collision until the high-level trigger decision is generated (Fig. 2).
3 ONSEN System
Each half ladder is read out by a Front-End Electronics (FEE) circuit, and the data are sent to the Data Handling Hybrid (DHH). Data from 40 DHH channels are concentrated into 32 channels by the DHHC and sent to the ONSEN (Online data Selection) system for data-reduction processing. In the data-handling process, one DHH receives the data of one half ladder, with a maximum data rate of up to 511.4 MB/s [2]. Taking the 8b/10b encoding of the transmission into account, the data rate sent out by the DHHC is about 6 Gbps per channel. The DATCON system reconstructs particle tracks and generates the SVD ROI coordinates; HLT ROIs are generated from the SVD together with the CDC and other detectors. The SVD ROIs and HLT ROIs are sent to the ONSEN system for the extraction of real PXD hits. The structure of the PXD DAQ system is shown in Fig. 3.
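The per-channel figure can be cross-checked: with 8b/10b encoding every payload byte occupies 10 bits on the line, so the 511.4 MB/s payload corresponds to roughly 5.1 Gbps of line rate, comfortably within a 6 Gbps link. A rough sketch, taking 1 MB = 10^6 bytes (an assumption, not the authors' exact accounting):

```python
payload_mbps = 511.4   # maximum per-DHH payload rate in MB/s [2]

# 8b/10b encoding maps each 8-bit byte onto 10 line bits.
line_rate_gbps = payload_mbps * 1e6 * 10 / 1e9   # ~5.11 Gbps on the wire
```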
Based on the data-reduction requirement, the ONSEN system should be capable of high-speed data transmission, high-performance data processing and high-capacity data buffering. An ATCA-based system was designed for the PXD DAQ system. It consists of a full-mesh ATCA shelf, 8 Compute Nodes and firmware for data reduction. The ONSEN system supports 32 optical channels for receiving the DHHC data. The throughput of the system reaches the order of 200 Gbps. Ethernet ports are provided for receiving the SVD ROIs and HLT ROIs and for the output of the reduced data to the EVB2 PC farm. 128 GB of DDR memory is provided for buffering 5 s of PXD data, and a Virtex-5 FPGA is used for high-performance data processing. An intelligent platform management control system is used for system stability.
As the AdvancedTCA specification [3] describes, the ATCA architecture supports full-mesh and star backplane topologies. In a full-mesh topology, there is a point-to-point data path to/from each Compute Node, as shown in Fig. 4. In the ONSEN system, one bidirectional channel is provided for each point-to-point channel of the full mesh, and 14 backplane channels are implemented on each Compute Node.
394 J. Zhao et al.
Fig. 4. Full mesh topology of ATCA backplane used in Belle II PXD DAQ system.
The Compute Node (CN) [4] is designed as the central module of the ONSEN system. It consists mainly of 4 Advanced Mezzanine Cards (AMCs, called xFP cards), 1 AMC carrier ATCA board and 1 Rear Transition I/O Board (RTM), as shown in Fig. 5. Large Field-Programmable Gate Arrays (FPGAs) are used for parallel data processing; RocketIO technology is used for high-speed data transmission between the data-processing nodes; Gigabit Ethernet is used for data transmission between the ONSEN system and the HLT; DDR2 memory is used for mass data buffering. The connection between the CN carrier board and the four xFP boards is via RocketIO ports and general LVDS I/O pairs. 8 optical links on 4 xFP cards (each with two 6 Gbps optical IOs) provide an input bandwidth of about 50 Gbps. 5 Gigabit Ethernet links are provided for output to the higher-level trigger or to storage. 14 backplane channels are provided for data sharing between boards; each channel supports up to 3.125 Gbps.
Fig. 5. Full suite of the Compute Node. It consists of four AMC cards, one carrier board, one power supply board and one RTM board.
Design of High Performance Compute Node 395
The Compute Node carrier board consists of a Xilinx Virtex-4 series FX60 FPGA, one 2 GB DDR2 memory, four AMC slots, one Gigabit Ethernet port, the IPMC part and the power supply part. It supports a full-mesh connection between the AMC slots (Fig. 6).
Fig. 6. (a) is structure of Compute Node Carrier board; (b) is structure of xFP card [4]
The xFP (Processing unit based on FPGA and xTCA) card consists of one Xilinx Virtex-5 series FX70T FPGA, two 2 GB DDR2 memories, one platform flash for configuring the FPGA, one Gigabit Ethernet port, one UART for board testing, two SFP+ connectors with data line rates up to 6.25 Gbps, and one MMC module for power management, status detection and communication with the IPMC, as shown in Fig. 6(b). The physical appearance of the xFP board can be seen in Fig. 5. The RTM in the PXD system is used only as an I/O extension for the CN carrier board, providing JTAG and UART ports for it.
396 J. Zhao et al.
5 Beam Test
In January 2017, a beam test was held at DESY with PXD and SVD modules and the related front-end electronics and DAQ system. The structure of the VXD beam-test DAQ is shown in Fig. 7. The PXD signals are digitized and sent to the DHE. The PXD data are concentrated by the DHC and sent to ONSEN via optical links. The SVD signals are digitized by the FADC and fanned out to the FTB and DATCON. The FTB sends the SVD data via Belle2Link [5] to COPPER and then on to the HLT, which generates the HLT ROI information. DATCON receives the SVD data and generates the SVD ROI information. ONSEN receives and remaps the PXD data and extracts the hit data with the help of the ROI coordinates. NIM modules and the FTSW are used for timing and trigger distribution. In this beam test, ONSEN ran stably for about 10^9 events, and each run operated stably for up to about 18 h [6].
6 Conclusion
The PXD is a newly designed silicon pixel detector with a huge data output. An ATCA-based ONSEN system was designed for the PXD DAQ system. It is capable of high-speed data transmission, mass data buffering and high-performance data processing. The VXD beam test was held at DESY in January 2017, and its successful outcome demonstrates the soundness of the Compute Node design for the Belle II PXD DAQ system.
Acknowledgment. This project has been supported by National Natural Science Foundation of
China (11435013, 11461141011, 11405196).
References
1. Doležal, Z., et al.: Belle II Technical Design Report. High Energy Accelerator Research
Organization, Tsukuba (2010)
2. Doležal, Z., Kiesling, C., et al.: The PXD Whitebook, July 2012
3. PICMG 3.0 R2.0 AdvancedTCA Base Specification ECN-002 May 26, 2006
Design of High Performance Compute Node 397
4. Zhao, J., et al.: A general xTCA compliant and FPGA based data processing building blocks
for trigger and data acquisition system. Presented at the 19th IEEE-NPSS Real Time
Conference, Nara, Japan, May 2014
5. Sun, D., Liu, Z., Zhao, J., Xu, H.: Belle2Link: a global data readout and transmission for Belle II experiment at KEK. Phys. Procedia 37, 1933–1939 (2012). https://doi.org/10.1016/j.phpro.2012.01.036
6. Lange, S.: ONSEN phase 2 readiness. In: 27th B2GM, 19–23 June 2017. KEK, Tsukuba
A Reconfigurable Virtual Nuclear Pulse
Generator via the Inversion Method
Weigang Yin, Lian Chen(&), Feng Li, Baochen Wang, Zhou He,
and Ge Jin
1 Introduction
The randomness of nuclear signals can lead to serious systematic errors when a commercial periodic signal generator is used to test the performance of nuclear spectrometers. A random pulse generator which can simulate nuclear pulse signals is useful for evaluating nuclear data acquisition systems, so that the risk of using radioactive sources can be reduced. But in order to be useful, the generator must statistically conform to the behavior of real experimental data [1, 2].
The main characteristic of nuclear signals is that they obey a specific energy distribution in amplitude and a Poisson distribution in time. In different applications, the energy distribution and the count rate vary. In the following, we introduce a reconfigurable virtual nuclear pulse generator based on an FPGA, using the inversion method.
In this design, random number sequences from specific distributions are used to indicate
the statistical characteristics of the real nuclear pulses in amplitude and time. The
inversion method is a very efficient algorithm to generate arbitrary distributed random
numbers from uniform random numbers which can be produced via linear feedback shift
registers (LFSR) in FPGA easily [3].
According to the inversion method, if F(X) is the cumulative distribution function (CDF) of a variable X and Y is a uniform random number within [0, 1), then X′ = F⁻¹(Y) is a random number that satisfies the CDF F(X). When used in an FPGA, complex inversion calculations must be avoided to achieve high speed. Thus, we use lookup tables (LUTs) to implement the inversion. Considering that the CDF F(X) is monotonically increasing, for any Y there always exists a discretized N such that F(N) ≤ Y < F(N + 1). In this case, N ≈ F⁻¹(Y) is a discretized random number that meets the CDF F(X).
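For a distribution with a closed-form inverse, the method needs no LUT at all. The exponential inter-arrival times that make the pulse train Poissonian in time are the classic case, sketched here in Python (the 10 kHz rate is an arbitrary illustration):

```python
import math
import random

def exp_interval(rate_hz: float, u: float) -> float:
    """Inversion method for an exponential inter-arrival time:
    F(x) = 1 - exp(-rate*x), hence F^-1(u) = -ln(1 - u) / rate.
    Exponential intervals yield a Poisson-distributed event count in time."""
    return -math.log(1.0 - u) / rate_hz

random.seed(1)
rate = 1e4                                   # illustrative 10 kHz count rate
intervals = [exp_interval(rate, random.random()) for _ in range(100_000)]
mean = sum(intervals) / len(intervals)       # expect ~1/rate = 100 us
```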
3 System Design
Figure 1 shows the structure of the generator. In this design, we generate random number sequences via the inversion method. Then, the digital pulses are synthesized from these random numbers in the FPGA. Finally, the analog pulse signal is output through the DAC circuit. The entire system can be configured from a computer.
The amplitudes and time intervals are generated from their respective CDFs via the inversion method. Two RAMs are built as LUTs in the FPGA (Cyclone III, EP3C55) to replace the calculation of the inverse. The digital pulses are then synthesized from these amplitudes and time intervals. To meet the needs of different applications, the count rate can be adjusted and the amplitudes of the emulated pulses can be set to an arbitrary spectrum by updating the memory contents of the LUTs.
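In software, the RAM-based LUT lookup amounts to a search for the CDF bin that brackets the uniform number. A sketch with a hypothetical 5-bin amplitude spectrum (the real LUT contents would be downloaded from the configuring computer; the indexing convention is one common choice):

```python
import bisect
import random

# Hypothetical 5-bin amplitude spectrum; the FPGA RAM would hold its CDF.
pmf = [0.1, 0.2, 0.4, 0.2, 0.1]
cdf = []
s = 0.0
for p in pmf:
    s += p
    cdf.append(s)                  # cdf[n] = F(n), monotonically increasing

def lut_invert(u: float) -> int:
    """LUT-based inversion: return the bin N whose CDF interval contains u,
    replacing the analytic inverse with a table search."""
    return bisect.bisect_right(cdf, u)

random.seed(2)
samples = [lut_invert(random.random()) for _ in range(50_000)]
frac_bin2 = samples.count(2) / len(samples)   # should approach pmf[2] = 0.4
```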
5 Conclusion
In this paper, we have designed a random pulse generator to simulate nuclear pulses. Through the inversion method, the pulses output by the generator follow a Poisson distribution in time and obey a specific spectrum in amplitude. With a reconfigurable count rate and reference spectrum, the virtual nuclear pulse generator can replace radioactive sources in many applications, so that radiation risks are greatly reduced.
References
1. Wiernik, M.: Normal and random pulse generators for the correction of dead-time losses in
nuclear spectrometry. Nucl. Instrum. Methods 96, 325–329 (1971)
2. Veiga, A., Spinelli, E.: A pulse generator with poisson-exponential distribution for emulation
of radioactive decay events. In: VII Latin American Symposium on Circuits and Systems
(LASCAS), pp. 31–34 (2016)
3. Cheung, R.C.C., Lee, D.U., Luk, W., Villasenor, J.D.: Hardware generation of arbitrary
random number distributions from uniform distributions via the inversion method. IEEE
Trans. Very Large Scale Integr. Syst. 15, 952–962 (2007)
Design of Wireless Data Acquisition System
in Nuclear Physics Experiment Based
on ZigBee
Zhou He, Lian Chen(&), Feng Li, Futian Liang, and Ge Jin
1 Introduction
In ICF, the intense radiation environment requires the experimenters to stay away from the experimental site. In order to achieve remote control and data acquisition, the detector signals have to be transmitted to a safe area through signal cables a few dozen meters long. Long cables not only aggravate signal attenuation, reduce the SNR and affect the measurement precision, but also add a high cost to the experimental system. Especially in the case of hundreds of signal channels, the cost of the high-fidelity signal cables and connectors alone tends to exceed a third of the cost of the measurement system. Therefore, we designed a data acquisition system based on ZigBee wireless communication technology.
Wireless communication technologies such as ZigBee, Wi-Fi, Bluetooth, and Infrared
Data Association (IrDA) are now widely applied. Compared with the other techniques,
ZigBee is a short-range wireless network communication technology with the advantages
of low cost, low power consumption, and reliable data transmission. It works in the
license-free wireless communication frequency bands, so no additional licensing cost is
incurred. It is mainly applied to remote control and automatic control [1, 2].
2 System Design
The wireless data acquisition system consists of two parts: the wireless front-end
electronics system and the data processing center. The front-end electronics is mainly
made up of the filter and amplifier circuit, the analog-to-digital converter (ADC), the
data cache unit, and the wireless communication unit. The data processing center is
mainly composed of the ZigBee coordinator and the upper computer (see Fig. 1).
3 Experimental Results
To test the system, we designed a waveform readout program. With specific commands
sent by the PC program, we can modify the trigger mode, trigger threshold, data length,
etc. We used a photomultiplier to detect cosmic rays and acquired the signal through
the wireless DAQ (Fig. 4, right). Compared with the waveform read by an oscilloscope
(Fig. 4, left), the waveform read by the wireless system is in excellent agreement, so it
can replace the oscilloscope for waveform readout in a severe radiation environment.
The effective number of bits (ENOB) of the ADC is 10.9 bits, which means the
measurement accuracy is higher than that of a general oscilloscope. Without signal
input, we obtained the noise distribution shown in Fig. 5. The standard deviation of the
noise is 0.525 mV.
In fact, ZigBee is a low-rate, short-range wireless communication technology; the
transmission rate can reach up to 11.5 kbps. However, the ICF experiment is special:
the target shot lasts a very short time and the interval between shots is several hours,
so there is enough time to transmit the data and the transmission rate is not a
bottleneck. With a proper external antenna, the wireless communication distance can
reach up to 50 m in a laboratory environment.
4 Conclusions
In this paper, we designed a wireless data acquisition system based on ZigBee. Compared
with long signal cables, the wireless DAQ has the advantages of low power consumption,
convenient deployment, and flexible networking. In addition, its highly reliable data
transmission provides the necessary technical guarantee.
References
1. Farahani, S.: ZigBee Wireless Networks and Transceivers. Elsevier Pte. Ltd., North Holland
(2008)
2. Huo, L., Liu, S., Hu, X.: ZigBee Technology and Application. Beihang University Press,
Beijing (2007)
3. Luo, Q., Qin, L., Li, X., Wu, G.: The implementation of wireless sensor and control system in
greenhouse based on ZigBee. In: 35th Chinese Control Conference (2016)
4. Li, W., Duan, Z.: ZigBee2007/PRO Protocols Stack Experiment and Practice. Beihang
University Press, Beijing (2009)
A Lightweight Framework for DAQ System
in Small-Scaled High Energy Physics
Experiments
Abstract. A Data Acquisition (DAQ) system is essential for high energy physics
experiments. For large-scale experiments, distributed DAQ frameworks offering
powerful online features are preferred. However, some small-scale experiments
have less complicated requirements for online data processing, so a lightweight
DAQ framework saves development time and manpower. This paper presents the
design and implementation of a lightweight DAQ framework on a standalone
server. The framework provides run control, configuration, online data transmission,
online event building, lossless data compression, data storage, and real-time data
quality monitoring. It is flexible and easy to use: each component has an
independent interface, so users can easily customize the experiment-related
functions. To date, the framework has been successfully tested in different
experiments, demonstrating good capability and high reliability.
1 Introduction
The DAQ system plays a key role in high energy physics experiments. Its main tasks
are collecting data fragments from the electronics modules, building them into events,
and saving them to disk. Some online processes also need to be implemented in the
DAQ, such as data compression and event monitoring. A functional, reliable DAQ
system with high capability ensures the running efficiency of the physics experiments.
Several good distributed DAQ frameworks are in wide use, such as ATLAS TDAQ [2].
TDAQ is a mature, powerful, fully functional DAQ framework, but it is too complicated
for some small-scale experiments. On the other hand, with the growing computing
capacity of hardware, a single server is becoming increasingly powerful. We therefore
developed a lightweight DAQ framework based on a single server, with which users can
quickly build a new DAQ system for their experiments with minimal development time
and manpower costs.
The following features have been carefully designed when developing this light-
weight DAQ framework:
• Small volume and easy to carry
• Lightweight integrated architecture
• Concise and clear data-flow structure
• Rich DAQ features
• High capability of data processing
• Independent internal interface
• Friendly user interface
Based on this framework, users can easily build their own DAQ systems, customizing
the DAQ functions according to their system requirements.
The framework design can be divided into two layers: the data flow layer and the
interactive layer (Fig. 1).
1. The data flow layer is responsible for data receiving, processing and saving.
2. The interactive layer is responsible for all controls and operations during data-
taking. It also handles information transmission and provides the interface between
the users and the DAQ system.
• The ROS establishes socket connections with the front-end modules, reads data
from the electronics, and then transfers them to the back-end processing modules
over the TCP/IP protocol. It is designed on a client-server architecture.
• The DPS component is responsible for data processing, such as event sorting, event
building, and data compression.
• The DSS takes charge of data storage and provides files for offline analysis.
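As a sketch of the ROS readout primitive (assuming POSIX sockets; the helper name is ours, not the framework's): TCP delivers a byte stream, so reading one data fragment means looping until the requested number of bytes has arrived.

```cpp
#include <cstddef>
#include <sys/socket.h>
#include <unistd.h>

// Read exactly n bytes from a connected TCP socket. TCP may deliver a
// fragment in several pieces, so the loop continues until the buffer is
// full or the peer closes the connection.
static bool read_exact(int fd, char* buf, std::size_t n) {
    std::size_t got = 0;
    while (got < n) {
        ssize_t r = ::recv(fd, buf + got, n - got, 0);
        if (r <= 0) return false;  // connection closed or error
        got += static_cast<std::size_t>(r);
    }
    return true;
}
```

A fragment header carrying the payload length would typically be read with the same helper before the payload itself.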
The data flow layer is easy to maintain because of its simple and clear structure. Its
components work in parallel, which greatly improves the efficiency of the system.
Data transfer is the most important task in a DAQ, so the data flow layer is the key
to the whole DAQ system. To ensure the capability and reliability of the system, the
following methods are used in the data flow layer.
Thread-Safe Queue. The DPS component uses a producer/consumer model between
threads (Fig. 3). The framework uses queues as data buffers and encapsulates them
with a read-write lock to ensure stability and reliability.
Zero-Copy Technology. Data transmission is a crucial process in the data flow layer.
Traditional memory copies introduce resource overhead; passing the data by pointer
is a good way to reduce this overhead and improve performance (Fig. 3).
The control module is in charge of sending commands and receiving log messages.
The display module is responsible for receiving sample data and plotting them in real-
time.
The modules work independently. Each module receives information over its own
connection, which reduces resource contention and thus improves the robustness
of the system.
The interactive layer provides the GUI between the users and the DAQ system. The GUI
is a standalone program built with the Qt framework. The GUI software can be run
on the local DAQ server for the local control, or on any other PC for the remote control
(Fig. 6). GUI provides buttons for users to control the running status of the system, and
also provides running information and real-time image display. The framework users
can customize the functions in the GUI as needed by the experiment.
3 An Application Instance
Based on the framework, we developed the DAQ system for a silicon pixel detector
system to be used for the detection of synchrotron radiation. We chose this framework
because of the small volume, compact structure, and simple data flow of the silicon
pixel detector system. The DAQ system aims to provide run control, data readout,
event building, graphics display, and other basic DAQ functions.
The hardware structure of the silicon pixel DAQ system is shown in Fig. 7. There are
twelve silicon sensors at the front of the detector, and each sensor corresponds to one
electronics readout board. All data read from the front-end electronics are transferred
to the DAQ system over Gigabit Ethernet through a network switch. The DAQ system
runs on a Lenovo X3750 server, and a separate computer is used for remote control
and display.
As described in the framework design, the software architecture includes two parts,
the data flow layer and the interactive layer. Its design is shown in Fig. 8. The data flow
layer is responsible for data readout, event building, online compressing and data
storage. Two main tasks are assigned to the interactive layer: the information
interaction between the two layers and the real-time display.
The DAQ system has now been successfully developed based on the framework. The
required DAQ functions have been achieved (Fig. 9), and the performance satisfies
the requirements of the whole system.
4 Performance Evaluation
The DAQ system for the silicon pixel detector is used as an example to study the
framework performance. To better characterize the DAQ itself, we used software to
emulate the front-end electronics modules.
Analysis of the test results showed that, in the current system environment, disk I/O is
the performance bottleneck. The LZ4 [5] compression algorithm is therefore used in the
online data processing, and the storage bandwidth has been reduced to about 40% of
the readout data bandwidth.
With the current system settings, the maximum event rate for stable data-taking reaches
about 1300 Hz, and the corresponding readout data bandwidth is about 2.5 GB/s
(Fig. 10). The performance studies have so far shown very good results, and we are
still investigating possible ways to improve the performance further.
5 Summary
The core design of the framework was achieved. The framework can be used for DAQ
system development in some small-scaled experiments with minimal time and man-
power costs. Framework users can quickly build their own DAQ systems due to the
modular design. And the framework provides independent processing interfaces, so
users can easily integrate the functions depending on the experiments related
requirements.
A DAQ system for silicon pixel detectors has been successfully developed based on
the framework. Preliminary test results show that the framework performs well and that
the architecture design meets the requirements of the experiment. We are now using
this framework to develop DAQ systems for other experiments.
The framework is still under development. More detailed design and development are
ongoing, and we will keep optimizing the framework based on feedback from the
experiments.
References
1. Li, F., Ji, X., Li, X., et al.: DAQ architecture design of Daya Bay reactor neutrino experiment.
Nucl. Sci. IEEE Trans. 58(4), 1723–1727 (2011)
2. Atlas, C., Kesson, T., Eerola, P., et al.: ATLAS high-level trigger, data acquisition and
controls technical design report. ATLAS Technical Design Reports (2003)
3. Ma, S., Li, F., Shen, W., et al.: The DAQ system for a beam detection system based on TPC-
THGEM. In: 2016 IEEE-NPSS Real Time Conference (RT), pp. 1–4, 6 June 2016
4. Gu, M., Zhu, K., Li, F., et al.: TaskRouter: a newly designed online data processing
framework. In: 2016 IEEE-NPSS Real Time Conference (RT), pp. 1–4, 6 June 2016
5. LZ4 homepage. https://github.com/lz4/lz4
Data Transmission System for 2D-SND
at CSNS
1 Introduction
construct digitized signals into raw events, and finally send the raw events to the DAQ
system. The front-end electronics system consists of 36 boards, corresponding to the
36 modules of the detector. Each electronics board receives signals, processes them,
and sends event data independently. The electronics system uses SiTCP to send event
data via Gigabit Ethernet [1]. The DAQ system reads the raw event data from the
electronics system and saves the raw events. The data transmission system gets the raw
events from the DAQ system, checks them for good events, and transfers the good
events to the data analysis system. The data analysis system receives the good events,
reconstructs events, analyzes the reconstructed events [2], and displays the results as
charts [3]. The data transmission system is thus an important link between the DAQ
system and the data analysis system. It can be divided into three parts: data processing,
the interface between the DAQ system and the data transmission system, and the
interface between the data transmission system and the data analysis system.
Good events selected from each electronics board are stored in its own private buffer.
A public buffer is used to collect good events from all the private buffers, so that
events from all electronics boards can be sent to the data analysis system. When the
private buffer of any electronics board is filled with a good event, the event is copied
to the public buffer immediately.
418 D. Zhao et al.
When the public buffer is nearly full and cannot accept a newly arrived good event,
the good events in the public buffer are sent out and the public buffer is cleared. A
mutex implements the mutually exclusive access to this shared resource; using the
mutex together with the public buffer ensures that every event stored in the public
buffer is complete and correct. The mutex and the public buffer are created in the data
transmission procedure before any of the event-processing threads are created.
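A minimal sketch of the mutex-guarded public buffer described above (the capacity, class name, and flush callback are illustrative assumptions, not the actual implementation):

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// Shared public buffer guarded by a mutex. Each board thread appends
// complete events; when the buffer cannot hold the next event it is
// flushed (sent to the analysis system) and cleared before appending.
class PublicBuffer {
public:
    explicit PublicBuffer(std::size_t capacity) : cap_(capacity) {}

    template <typename FlushFn>
    void add_event(const std::vector<char>& ev, FlushFn flush) {
        std::lock_guard<std::mutex> lk(m_);    // mutually exclusive access
        if (buf_.size() + ev.size() > cap_) {  // close to full: send & clear
            flush(buf_);
            buf_.clear();
        }
        buf_.insert(buf_.end(), ev.begin(), ev.end());
    }
private:
    std::size_t cap_;
    std::mutex m_;
    std::vector<char> buf_;
};
```

Because the whole append-or-flush decision runs under the lock, an event can never be split across two flushes, which is the completeness guarantee the text describes.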
Fig. 2. Diagram of calling DIM server
Fig. 3. Diagram of calling DIM client
3 An Application
The data transmission system, together with the 2D-SND and other related systems, has
been successfully applied in a neutron beam experiment. In this experiment, the beam
intensity was 10⁶–10⁷ c/s and the event rate of each module of the 2D-SND was 25 Hz.
The results showed that no event was lost in the data transmission process. Neutron
imaging from one module of the 2D-SND is shown in Fig. 4.
4 Conclusions
The data transmission system for the 2D-SND is a stable and efficient mechanism for
reliable data transfer. Thanks to its common framework, it can easily be expanded and
adapted to other CSNS instruments. A distributed environment will be introduced to
improve efficiency in the next stage.
References
1. Uchida, T.: Hardware-based TCP processor for gigabit ethernet. IEEE Trans. Nucl. Sci. 55(3),
1631–1637 (2008)
2. Du, R., Tian, H., Zuo, T., Tang, M., Yan, L., Zhang, J.: Data reduction for time-of-flight
small-angle neutron scattering with virtual neutrons. Instrum. Sci. Technol. 45(5), 541–557
(2017)
3. Tian, H.L., Zhang, J.R., Yan, L.L., Tang, M., Hu, L., Zhao, D.X., Qiu, Y.X., Zhang, H.Y.,
Zhuang, J., Du, R.: Distributed data processing and analysis environment for neutron
scattering experiments at CSNS. Nucl. Instrum. Methods Phys. Res. 834, 24–29 (2016)
4. DIM Homepage. http://dim.web.cern.ch/dim/. Accessed 20 May 2017
Design of DAQ Software for CSNS General
Purpose Powder Diffractometer
Abstract. This paper presents the design of the data acquisition (DAQ) software
for the China Spallation Neutron Source (CSNS) General Purpose Powder
Diffractometer (GPPD). The GPPD is made up of 36 MA-PMT detector modules
and 6192 electronics channels; its total hit rate is about 300 kHz. The DAQ
software is composed of a readout module, a detector and electronics configuration
parameter management module, a distributed communication module, an EPICS
interface module, a data analysis module for the electronics, an interface to the
offline data analysis software, etc. Raw data from the front-end electronics are read
out via SiTCP Gigabit Ethernet. C/C++ is used as the programming language. Qt is
used as the development tool on Linux, while LabVIEW is used for DAQ
prototyping and the GUI on the Windows platform. The DAQ software has been
tested in a cobalt neutron source experiment and a reactor neutron beam line
experiment together with the detector and electronics system. The results show that
the software is stable and reliable and meets the requirements of engineering
deployment. Improvement, optimization, and function expansion are still ongoing
according to the new experimental results.
1 Introduction
Fig. 1. China Spallation Neutron Source Fig. 2. Structure of the DAQ system
The 36 electronics and detector modules are connected to the Gigabit network switches
through 1 Gb optical fiber. Four computer servers are used to read out the electronics
data from the switches. One computer is used for the GUI for electronics configuration
and run control, and one server is used for data checking and data analysis. A 10 Gb
Ethernet switch connects the readout servers, the data storage server, the GUI computer,
the online analysis servers, and the slow control system. The architecture of the whole
system is shown in Fig. 3.
SiTCP [1] Gigabit network transmission is used in this system: the RBCP protocol is
used to configure the electronics registers and send commands, while the TCP protocol
is used for front-end electronics data readout. C/C++ is used as the programming
language for readout performance reasons.
The CSNS GPPD DAQ software is composed of many functional modules, which are
shown in Fig. 4. There are four electronics readout programs; each program connects to
9 detector modules for data acquisition. The distributed management module sends
messages to the four readout programs, which are deployed on different servers. The
electronics configuration module configures the 6192 electronics channels with threshold
and compensation values, which are used to calibrate the electronics channels and check
for bad and dead channels. The detector parameter management and configuration
module adjusts the 6192 detector channel thresholds, which is very important for the
consistency and efficiency of the detector.
Qt [2] is used as the development tool on Linux. The GUI shown in Fig. 5 allows the
detector and electronics developers to modify and configure parameters, enable
modules, and select run modes. The EPICS interface and the data analysis program are
also integrated in the DAQ GUI.
Many software tests have been performed with the electronics modules in the lab, in
both calibration mode and online data-taking mode. In these tests the functions of the
hardware and software were verified to be correct. The detector modules, together with
the electronics and DAQ software, were also tested with a radioactive neutron source in
the lab, as shown in Fig. 6. In addition, three detector modules and their associated
electronics and DAQ were put on a reactor neutron beam line for joint experiments,
as shown in Fig. 7.
The raw data obtained in the reactor neutron beam experiments were analyzed offline.
The electronics raw data are packed every 40 ms according to the T0 count numbers.
The GPPD data format is shown in Fig. 8. The header of the data includes flags,
detector information, and the run mode, which helps the DAQ software record the
details of the experiment. The main part of the data is the electronics channel numbers
hit by neutrons and the corresponding time information of each event.
The GPPD DAQ data analysis software is developed in C++, LabVIEW, and ROOT. A
channel hit histogram is used to check the detector and electronics channels; bad
channels and dead channels are clearly displayed in this LabVIEW program, as shown
in Fig. 9. An X-Y event hit display was developed in ROOT to show hit images in
neutron scattering experiments. Figure 10 shows an image of a neutron slit experiment,
and Fig. 11 shows the test result for a standard Al2O3 sample.
Fig. 10. Image of a neutron slit experiment Fig. 11. Test result of Al2O3
4 Conclusions
The GPPD DAQ software has been tested in the lab and on a reactor neutron beam line.
The results show that the software is stable and reliable and meets the requirements of
engineering deployment. Further improvement, optimization, and function expansion
are ongoing according to the newest experimental results.
References
1. Uchida, T.: Hardware-based TCP processor for gigabit ethernet. IEEE Trans. Nucl. Sci. 55(3),
1631–1637 (2008)
2. https://www.qt.io/qt-for-application-development/
Design of Data Acquisition Software for CSNS
Angle Neutron Spectrometer
1 Introduction
China Spallation Neutron Source (CSNS) is the first spallation neutron source in China.
The Small Angle Neutron Spectrometer (SANS) is one of the three spectrometers
currently being built. Its neutron detector system consists of 120 ³He tubes, which
output analog signals from both sides of each tube. Its electronics system is composed
of 20 preamplifier/main-amplifier circuits and 10 readout modules; each module
includes 24 electronics channels and handles the readout of 12 ³He tubes.
histogram display, and communicating with the Detector Control System and the Online
Data Analysis System. The function modules of the SANS DAQ are shown in Fig. 1.
According to the design of the SANS detector, the hit rate of a single ³He tube will be
less than 100 kHz, and the overall occupancy will be less than 40%. The front-end
electronics data are packaged every 40 ms, and 7.68 MB/s of data will be produced by
each module, which means the DAQ software needs to read out and process 76.8 MB
of data per second.
Readout Control Programs run on each Readout Server. According to the commands
received, a Readout Control Program creates and configures the corresponding Data
Readout Objects, which are responsible for the readout of several electronics modules.
The Readout Control Program sends status information and statistical data back to the
Run Control GUI if its electronics module is chosen to be monitored. The Run Control
GUI also updates status logs and presents waveforms and histograms. The software
architecture is shown in Fig. 3.
A Data Readout Object is instantiated from the Data Readout Class, whose function is
to read out and process the data from an electronics module. As shown in Figs. 4 and 5,
the work of a Data Readout Object is divided into two threads: one receives data from
the electronics modules via the network, checks the data format, and saves the data into
a ring buffer; the other reads data from the ring buffer and processes it online.
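The two-thread hand-off through the ring buffer can be sketched as a single-producer/single-consumer ring. This is a generic illustration under stated assumptions (the class name, size, and element type are ours, not the SANS DAQ code): one thread only writes the head index, the other only writes the tail index, so two atomics suffice.

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Single-producer/single-consumer ring buffer: the receive thread pushes,
// the processing thread pops. One slot is left empty to distinguish the
// full state from the empty state, so capacity is N - 1.
template <typename T, std::size_t N>
class SpscRing {
public:
    bool push(const T& v) {  // called only by the receive thread
        auto h = head_.load(std::memory_order_relaxed);
        auto next = (h + 1) % N;
        if (next == tail_.load(std::memory_order_acquire)) return false;  // full
        buf_[h] = v;
        head_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(T& v) {         // called only by the processing thread
        auto t = tail_.load(std::memory_order_relaxed);
        if (t == head_.load(std::memory_order_acquire)) return false;     // empty
        v = buf_[t];
        tail_.store((t + 1) % N, std::memory_order_release);
        return true;
    }
private:
    std::array<T, N> buf_{};
    std::atomic<std::size_t> head_{0}, tail_{0};
};
```

The release/acquire pairing on the indices guarantees that a popped element is fully written before the consumer reads it, without a lock on the data path.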
To reduce coupling, the SANS DAQ uses DIM (Distributed Information Management
System) to distribute the electronics raw data to the Online Data Analysis System [2].
The principle of DIM is shown in Fig. 7.
During the experiments, SANS DAQ will need to exchange status information with
Detector Control System. Since the control system adopts EPICS (Experimental
Physics and Industrial Control System) to control and monitor spectrometer, sample
environment and target station, SANS DAQ uses APIs offered by EZCA (Easy
Channel Access) to communicate with the control system.
3 Test Results
The performance and stability of the SANS DAQ have been tested using simulated data.
As shown in Figs. 9 and 10, the data readout, processing, and storage rate for each
front-end module can reach 111 MB/s. Long-term data-taking of the whole system has
been completed at the neutron source, and a satisfying position resolution of the ³He
tubes is shown in Fig. 11.
430 H. Zhao et al.
Fig. 9. Performance test
Fig. 10. Stability test
Fig. 11. Position resolution
4 Summary
The SANS DAQ software has been completed, and all functions have been tested and
verified. It will be deployed for the commissioning run at CSNS.
References
1. Uchida, T.: Hardware-based TCP processor for gigabit ethernet. IEEE Trans. Nucl. Sci. 55(3),
1631–1637 (2008)
2. DIM. http://dim.web.cern.ch/dim/
Design of Data Acquisition System for CSNS
RCS Beam Position Monitor
Abstract. The proton beam position monitor (BPM) system is one of the most
important diagnostic elements set up around the Rapid Cycling Synchrotron
(RCS) of the China Spallation Neutron Source (CSNS). This paper presents the
design of the data acquisition (DAQ) system for the CSNS RCS BPM. The beam
position data are published to the network layer through the Experimental
Physics and Industrial Control System (EPICS), which allows other systems to
process the data. The data acquisition software has passed preliminary tests; the
results show that the software has good practicality and scalability.
1 Introduction
The proton Beam Position Monitor (BPM) system has been set up around the Rapid
Cycling Synchrotron (RCS) of the China Spallation Neutron Source (CSNS). Figure 1
shows the schematic layout of the CSNS facilities. The detectors of the proton beam
position monitor are distributed at 32 measuring points along the RCS ring. Figure 2
shows the distribution of the beam position measurement elements on the RCS. The
data acquisition system divides the 32 beam measurement elements into 4 groups
according to quadrant; each group collects data from 8 beam measurement elements.
Each element has two pairs of beam measuring probes. The functions of the BPM DAQ
system include: configuring the work mode of the front-end electronics, reading out
data from the front-end electronics via the VME bus, processing and monitoring online
data, and storing and distributing the experimental data to the Accelerator Control System.
The single-crate system consists of a VME crate controller and a variety of front-end
electronics modules. The crate controller is a VP B14/433-42 single-board computer.
The electronics modules include charge measurement modules (BPME), readout control
modules (BROC), and T0 signal fan-out modules (BFAN). Figure 3 shows the
architecture of a single crate.
server. Each VME crate collects the position data of eight beam measuring elements,
and each front-end electronics module corresponds to one beam measuring element.
The architecture of the DAQ system is shown in Fig. 4.
Careful analysis shows that the position resolution of the measured data is better than
1 mm. The maximum frequency component represents the operating point of the beam;
the beam operating point refers to the frequency of the oscillation about the center of
the beam.
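As a generic illustration of how the operating point can be extracted (this is a sketch, not the system's actual algorithm; the function name is ours): the fractional tune is the frequency bin with the largest DFT magnitude of the turn-by-turn position data.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Find the dominant frequency of turn-by-turn position data, in units of
// the revolution frequency. A naive O(N^2) DFT is enough to show the idea;
// a real system would use an FFT.
static double dominant_tune(const std::vector<double>& x) {
    const std::size_t n = x.size();
    const double pi = std::acos(-1.0);
    double best_mag = -1.0;
    std::size_t best_k = 0;
    for (std::size_t k = 1; k < n / 2; ++k) {  // skip DC; tunes lie in (0, 0.5)
        double re = 0.0, im = 0.0;
        for (std::size_t t = 0; t < n; ++t) {
            double ph = 2.0 * pi * double(k) * double(t) / double(n);
            re += x[t] * std::cos(ph);
            im -= x[t] * std::sin(ph);
        }
        double mag = re * re + im * im;
        if (mag > best_mag) { best_mag = mag; best_k = k; }
    }
    return double(best_k) / double(n);  // fractional tune
}
```

With turn-by-turn (TBT) data from one probe pair, the peak of this spectrum gives the fractional tune directly; the frequency resolution improves with the number of recorded turns.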
3 Conclusions
The DAQ system has been used in the CSNS RCS BPM System for more than 6
months. Both TBT and COD work modes have been successfully tested and the DAQ
system was proved to meet the expected functional requirements. The fixed frequency
of proton beam on RCS has been successfully tested; the next step is to change the
beam frequency.
References
1. Guan, X., Zhao, Y., Xu, T., Zhuang, B., Lu, W., Li, H., Zhao, J.: The design of BPM readout
electronics for the CSNS RCS. Nucl. Electron. Detect. Tech. 31(9) (2011)
2. Hu, J.: CSNS beam position monitor DAQ software research and implementation. Master
thesis, College of Physics, Zhengzhou University, Zhengzhou (2011)
3. Gu, M., Zhu, K., Jian, Z., Chu, Y.: Data acquisition software of LHAASO prototype system.
Nucl. Electron. Detect. Tech. 33(5) (2013)
Author Index
A
Abdallah, A., 63
Abinaya, P., 291
Achenbach, P., 123, 275
Achrekar, S., 291
Adachi, Ichiro, 46, 253, 270
Aielli, G., 63
Akatsuka, Shunichi, 341
Alessio, Federico, 332
Ali, A., 123, 275
Aloisio, Alberto, 54, 386
Amano, S., 27
Ameli, F., 54
An, Qi, 190, 201, 210, 215, 324
Anastasio, A., 54
Andrei, Victor, 350
Aniruddhan, S., 291
Arai, Yasuo, 163
Armbruster, A., 360
Arneodo, F., 109
Attié, D., 27
Ayyagiri, N., 291
Azzarello, Philipp, 12

B
Barbosa, Joao Vitor Viana, 332
Baron, P., 27
Barria, Patrizia, 132
Baudin, D., 27
Bauss, B., 337
Behere, A., 291
Belias, A., 123, 275
Bellato, Marco, 186
Benabderrahmane, L. M., 109
Bergnoli, Antonio, 186
Bernard, D., 27
Bi, B. Y., 17, 22
Bizzeti, Andrea, 279
Bocci, Valerio, 173
Böhm, M., 123, 275, 283
Bologna, Simone, 319
Borg, Johan, 149
Branchini, P., 54
Brinkmann, K., 283
Britting, A., 123
Brogna, A., 337
Brook, N., 257
Bruel, P., 27
Brugnera, Riccardo, 186
Buchholz, P., 180
Buescher, V., 337

C
Cabrera, A., 168
Cachemiche, Jean-Pierre, 332
Cadoux, Franck, 12
Calvet, D., 27
Camplani, Alessandra, 50
Candela, A., 109
Cao, Peng-cheng, 224
Cao, Ping, 201, 210, 215
Cao, Zhe, 17, 22, 190, 324
Capeans, M., 91, 97
Cardani, L., 35
Cardarelli, R., 63
Cardinali, M., 123, 275
Cardoso, Luis Granado, 332
Carrillo-Montoya, G., 360
Casali, N., 35
Castaneda, Alfredo, 328