
Springer Proceedings in Physics 212

Zhen-An Liu Editor

Proceedings of
International
Conference on
Technology and
Instrumentation in
Particle Physics 2017
Volume 1
Springer Proceedings in Physics

Volume 212
The series Springer Proceedings in Physics, founded in 1984, is devoted to timely
reports of state-of-the-art developments in physics and related sciences. Typically
based on material presented at conferences, workshops and similar scientific
meetings, volumes published in this series will constitute a comprehensive
up-to-date source of reference on a field or subfield of relevance in contemporary
physics. Proposals must include the following:
– name, place and date of the scientific meeting
– a link to the committees (local organization, international advisors etc.)
– scientific description of the meeting
– list of invited/plenary speakers
– an estimate of the planned proceedings book parameters (number of pages/
articles, requested number of bulk copies, submission deadline).

More information about this series at http://www.springer.com/series/361


Zhen-An Liu
Editor

Proceedings of International
Conference on Technology
and Instrumentation
in Particle Physics 2017
Volume 1

Editor
Zhen-An Liu
Institute of High Energy Physics
Chinese Academy of Sciences
Beijing, China

ISSN 0930-8989 ISSN 1867-4941 (electronic)


Springer Proceedings in Physics
ISBN 978-981-13-1312-7 ISBN 978-981-13-1313-4 (eBook)
https://doi.org/10.1007/978-981-13-1313-4

Library of Congress Control Number: 2018947450

© Springer Nature Singapore Pte Ltd. 2018


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Contents

Astrophysics and Space Instrumentation


A Novel Gamma-Ray Detector for GECAM . . . . . . . . . . . . . . . . . . . . . 3
Pin Lv, Shao-Lin Xiong, Xi-Lei Sun, and Jun-Guang Lv
Spin-Off Application of Silica Aerogel in Space: Capturing
Intact Cosmic Dust in Low-Earth Orbits and Beyond . . . . . . . . . . . . . . 8
Makoto Tabata, on behalf of the Tanpopo Team
Research and Development of a Scintillating Fiber Tracker
with SiPM Array Read-Out for Application in Space . . . . . . . . . . . . . . 12
Chiara Perrina, Philipp Azzarello, Franck Cadoux,
Daniel La Marra, and Xin Wu
SiPM-Based Camera Design and Development for the Image
Air Cherenkov Telescope of LHAASO . . . . . . . . . . . . . . . . . . . . . . . . . . 17
S. S. Zhang, B. Y. Bi, C. Wang, Z. Cao, L. Q. Yin, T. Montaruli,
D. della Volpe, and M. Heller, for the LHAASO Collaboration
Silicon Photomultiplier Performance Study and Preamplifier Design
for the Wide Field of View Cherenkov Telescope Array of LHAASO . . . 22
B. Y. Bi, S. S. Zhang, C. Wang, Z. Cao, L. Q. Yin, T. Montaruli,
D. della Volpe, and M. Heller, for the LHAASO Collaboration
A Comprehensive Analysis of Polarized γ-ray Beam Data
with the HARPO Demonstrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
R. Yonamine, S. Amano, D. Attié, P. Baron, D. Baudin, D. Bernard,
P. Bruel, D. Calvet, P. Colas, S. Daté, A. Delbart, M. Frotin,
Y. Geerebaert, B. Giebels, D. Götz, P. Gros, S. Hashimoto, D. Horan,
T. Kotaka, M. Louzir, Y. Minamiyama, S. Miyamoto, H. Ohkuma,
P. Poilleux, I. Semeniouk, P. Sizun, A. Takemoto,
M. Yamaguchi, and S. Wang


Timing Calibration of the LHAASO-KM2A Electromagnetic Particle Detectors Using Charged Particles Within the Extensive Air Showers . . . 31
Hongkui Lv, Huihai He, Xiangdong Sheng, and Jia Liu
MoBiKID - Kinetic Inductance Detectors for Upcoming
B-Mode Satellite Missions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
A. Cruciani, L. Cardani, N. Casali, M. G. Castellano, I. Colantoni,
A. Coppolecchia, P. de Bernardis, M. Martinez, S. Masi, and M. Vignati

Backend Readout Structures and Embedded Systems


The Detector Control System Safety Interlocks of the Diamond
Beam Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Grygorii Sokhrannyi, on behalf of ATLAS DBM collaboration
Development of Slow Control System for the Belle II
ARICH Counter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
M. Yonenaga, I. Adachi, R. Dolenec, K. Hataya, H. Kakuno, H. Kawai,
H. Kindo, T. Konno, S. Korpar, P. Križan, T. Kumita, M. Machida,
M. Mrvar, S. Nishida, K. Noguchi, K. Ogawa, S. Ogawa, R. Pestotnik,
L. Šantelj, T. Sumiyoshi, M. Tabata, M. Yoshizawa, and Y. Yusa
Phase-I Trigger Readout Electronics Upgrade for the ATLAS
Liquid-Argon Calorimeters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Alessandra Camplani,
on behalf of the ATLAS Liquid Argon Calorimeter group
A Service-Oriented Platform for Embedded Monitoring Systems
in Belle II Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
F. Di Capua, A. Aloisio, F. Ameli, A. Anastasio, P. Branchini,
R. Giordano, V. Izzo, and G. Tortone
Integration of Readout of Vertex Detector in Belle II DAQ System . . . . 58
Tomoyuki Konno, Thomas Gessler, Getzkow Dennis, Hao Yin,
Itoh Ryosuke, Konorov Igor, Kühn Wolfgang, Lange Soeren,
Lautenbach Klemens, Liu Zhen-an, Nakamura Katsuro,
Nakao Mikihiko, P. Reiter Simon, and Suzuki Soh High
The Weighting Resistive Matrix for Real Time Data Filtering
in Large Detectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
A. Abdallah, G. Aielli, R. Cardarelli, M. Manca, M. Nessi,
P. Sala, and L. H. Whitehead

Experimental Detector Systems


Thermal Mockup Studies of Belle II Vertex Detector . . . . . . . . . . . . . . . 71
H. Ye and C. Niebuhr

Integration and Characterization of the Vertex Detector in SuperKEKB Commissioning Phase 2 . . . 77
H. Ye, (On behalf of the BEAST2 Collaboration)
Radiative Decay Counter for Active Background Identification
in MEG II Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Ryoto Iwai, Kei Ieki, and Ryu Sawada
Belle II iTOP Optics: Design, Construction and Performance . . . . . . . . 87
Boqun Wang, Saurabh Sandilya, Bilas Pal, and Alan Schwartz
Gas Systems for Particle Detectors at the LHC Experiments:
Overview and Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
R. Guida, M. Capeans, and B. Mandelli
Gas Mixture Monitoring Techniques for the LHC Detector
Muon Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
M. Capeans, Roberto Guida, and Beatrice Mandelli
Design of the New ATLAS Inner Tracker (ITk) for the High
Luminosity LHC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Jike Wang
A Standalone Muon Tracking Detector Based on the Use
of Silicon Photomultipliers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
F. Arneodo, L. M. Benabderrahmane, A. Candela, V. Conicella,
A. Di Giovanni, O. Fawwaz, and G. Franchi
Spherical Measuring Device of Secondary Electron Emission
Coefficient Based on Pulsed Electron Beam . . . . . . . . . . . . . . . . . . . . . . 113
Kaile Wen, Shulin Liu, Baojun Yan, Yang Yu, and Yuzhen Yang
A Vertex and Tracking Detector System for CLIC . . . . . . . . . . . . . . . . 117
A. Nürnberg, on behalf of the CLICdp collaboration
The Barrel DIRC Detector for the PANDA Experiment at FAIR . . . . . 123
R. Dzhygadlo, A. Ali, A. Belias, A. Gerhardt, K. Götzen, G. Kalicy,
M. Krebs, D. Lehmann, F. Nerling, M. Patsyuk, K. Peters, G. Schepers,
L. Schmitt, C. Schwarz, J. Schwiening, M. Traxler, M. Zühlsdorf,
M. Böhm, A. Britting, W. Eyrich, A. Lehman, M. Pfaffinger, F. Uhlig,
M. Düren, E. Etzelmüller, K. Föhl, A. Hayrapetyan, K. Kreutzfeld,
B. Kröck, O. Merle, J. Rieke, M. Schmidt, T. Wasem, P. Achenbach,
M. Cardinali, M. Hoek, W. Lauth, S. Schlimme, C. Sfienti, and M. Thiel
The Belle II/SuperKEKB Commissioning Detector - Results
from the First Commissioning Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Miroslav Gabriel, on behalf of the BEAST II Collaboration

The CMS ECAL Upgrade for Precision Crystals Calorimetry at the HL-LHC . . . 132
Patrizia Barria, on behalf of the CMS Collaboration
The Tracking System at LHCb in Run 2: Hardware Alignment
Systems, Online Calibration, Radiation Tolerance and 4D
Tracking with Timing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Artur Ukleja
Design of a High-Count-Rate Photomultiplier Base Board
on PGNAA Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Baochen Wang, Lian Chen, Yuzhe Liu, Weigang Yin, Zhou He,
and Ge Jin

Front-end Electronics and Fast Data Transmission


Electronics and Triggering Challenges for the CMS High
Granularity Calorimeter for HL-LHC . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Johan Borg, on behalf of the CMS Collaboration
Readout Electronics for CASCA in XTP Detector . . . . . . . . . . . . . . . . . 154
Hengshuang Liu and Dong Wang
A High-Resolution Clock Phase-Shifter in a 65 nm
CMOS Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Dongxu Yang, Szymon Kulis, Datao Gong, Jingbo Ye,
Paulo Moreira, and Jian Wang
Development of Fast Readout System for Counting-Type SOI
Detector ‘CNPIX’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Ryutaro Nishimura, Yasuo Arai, Toshinobu Miyoshi, Shunji Kishimoto,
Ryo Hashimoro, Longlong Song, Yunpeng Lu, and Qun Ouyang
CATIROC, a Multichannel Front-End ASIC to Read
Out the 3″ PMTs (SPMT) System of the JUNO Experiment . . . . . . . . . 168
S. Conforti, A. Cabrera, C. De La Taille, F. Dulucq, M. Grassi,
G. Martin-Chassard, A. Noury, C. Santos, N. Seguin-Moreau,
and M. Settimo
First Prototype of the Muon Frontend Control Electronics
for the LHCb Upgrade: Hardware Realization and Test . . . . . . . . . . . . 173
Paolo Fresch, Giacomo Chiodi, Francesco Iacoangeli, and Valerio Bocci
High-Speed/Radiation-Hard Optical Engine for HL-LHC . . . . . . . . . . . 180
K. K. Gan, P. Buchholz, S. Heidbrink, H. P. Kagan, R. D. Kass,
J. Moore, D. S. Smith, M. Vogt, and M. Ziolkowski
The Global Control Unit for the JUNO Front-End Electronics . . . . . . . 186
Davide Pedretti, Marco Bellato, Antonio Bergnoli, Riccardo Brugnera,
Daniele Corti, Flavio Dal Corso, Alberto Garfagnini, Agnese Giaz, Jun Hu,
Roberto Isocrate, and Ivano Lippi, On Behalf of the JUNO Collaboration

Design of a Data Acquisition Module Based on PXI for Waveform Digitization . . . 190
Zhe Cao, Jiadong Hu, Cheng Li, Siyuan Ma, Shubin Liu, and Qi An
Readout Electronics for the TPC Detector in the MPD/NICA Project . . . 195
G. Cheremukhina, S. Movchan, S. Vereschagin, and S. Zaporozhets
TDC Based on FPGA of Boron-Coated MWPC for Thermal
Neutron Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Li Yu, Ping Cao, WeiJia Sun, ManYu Zheng, Ying Zhang, and Qi An
A Portable Readout System for Micro-pattern Gas Detectors
and Scintillation Detectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Siyuan Ma, Changqing Feng, Laifu Luo, and Shubin Liu
Quality Evaluation System for CBM-TOF Super Module . . . . . . . . . . . 210
Chao Li, Xiru Huang, Ping Cao, Jiajun Zheng, and Qi An
Research of Front-End Signal Conditioning for BaF2 Detector
at CSNS-WNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Xincheng Qi, Xiru Huang, Ping Cao, Qi Wang, Yanli Chen,
Xuyang Ji, and Qi An
Generalized Signal Conditioning Module for Spectrometers
at CSNS-WNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Chen Yanli, Cao Ping, Xincheng Qi, Wang Qi, and An Qi
Study of Front-End High Speed Readout Based on JESD204B . . . . . . . 224
Zhao Liu, Zhen-An Liu, Jing-zhou Zhao, Wen-xuan Gong,
Li-bo Cheng, Peng-cheng Cao, Jia Tao, and Han-jun Kou

Interface and Beam Instrumentation


Development of the AWAKE Stripline BPM Electronics . . . . . . . . . . . . 237
Shengli Liu, Victor Verzilov, Wilfrid Farabolini, Steffen Doebert,
and Janet S. Schmidt
Scattering Studies with the DATURA Beam Telescope . . . . . . . . . . . . . 243
Hendrik Jansen, Jan Dreyling-Eschweiler, Paul Schütze,
and Simon Spannagel

Particle Identification
Assembly of a Silica Aerogel Radiator Module for the Belle II
ARICH System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Makoto Tabata, Ichiro Adachi, Hideyuki Kawai, Shohei Nishida,
and Takayuki Sumiyoshi, for the Belle II ARICH Group

TORCH: A Large-Area Detector for High Resolution Time-of-flight . . . 257


R. Forty, N. Brook, L. Castillo García, D. Cussans, K. Föhl, C. Frei,
R. Gao, T. Gys, N. Harnew, D. Piedigrossi, J. Rademacker,
A. Ros García, and M. van Dijk
High Rate Time of Flight System for FAIR-CBM . . . . . . . . . . . . . . . . . 263
Pengfei Lyu and Yi Wang, for CBM-TOF group
The Aerogel Ring Image Cherenkov Counter for Particle
Identification in the Belle II Experiment . . . . . . . . . . . . . . . . . . . . . . . . 270
Tomoyuki Konno, Ichiro Adachi, Rok Dolenec, Hidekazu Kakuno,
Hideyuki Kawai, Haruki Kindo, Samo Korpar, Peter Križan,
Tetsuro Kumita, Masahiro Machida, Manca Mrvar, Shohei Nishida,
Kouta Noguchi, Kazuya Ogawa, Satoru Ogawa, Rok Pestotnik,
Luka Šantelj, Takayuki Sumiyoshi, Makoto Tabata, Masanobu Yonenaga,
Morihito Yoshizawa, and Yosuke Yusa
Endcap Disc DIRC for PANDA at FAIR . . . . . . . . . . . . . . . . . . . . . . . . 275
M. Schmidt, M. Düren, E. Etzelmüller, K. Föhl, A. Hayrapetyan,
K. Kreutzfeldt, O. Merle, J. Rieke, T. Wasem, M. Böhm, W. Eyrich,
A. Lehmann, M. Pfaffinger, F. Uhlig, A. Ali, A. Belias, R. Dzhygadlo,
A. Gerhardt, K. Götzen, G. Kalicy, M. Krebs, D. Lehmann, F. Nerling,
M. Patsyuk, K. Peters, G. Schepers, L. Schmitt, C. Schwarz,
J. Schwiening, M. Traxler, P. Achenbach, M. Cardinali, M. Hoek,
W. Lauth, S. Schlimme, W. Lauth, C. Sfienti, and M. Thiel
The NA62 RICH Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Andrea Bizzeti
Barrel Time-of-Flight (TOF) Detector for the PANDA  Experiment
at FAIR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
N. Kratochwil, M. Böhm, K. Brinkmann, M. Chirita, K. Dutta, K. Götzen,
L. Gruber, K. Kalita, A. Lehmann, H. Orth, L. Schmitt, C. Schwarz,
D. Steinschaden, S. Zimmermann, and K. Suzuki

Trigger and Data Acquisition Systems


Electronics, Trigger and Data Acquisition Systems for the INO
ICAL Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
S. Achrekar, S. Aniruddhan, N. Ayyagiri, A. Behere, N. Chandrachoodan,
V. B. Chandratre, Chitra, D. Das, S. Dasgupta, V. M. Datar, U. Gokhale,
A. Jain, S. R. Joshi, S. D. Kalmani, N. Kamble, S. Karmakar, T. Kasbekar,
P. Kaur, H. Kolla, N. Krishnapura, P. Kumar, T. K. Kundu, A. Lokapure,
M. Maity, G. Majumder, A. Manna, S. Mohanan, S. Moitra, N. K. Mondal,
P. M. Nair, P. Abinaya, S. Padmini, N. Panyam, Pathaleswar,
A. Prabhakar, M. Punna, M. Rahaman, S. M. Raut, K. C. Ravindran,
S. Roy, S. Prafulla, M. N. Saraf, B. Satyanarayana, R. R. Shinde, S. Sikder,
D. Sil, M. Sukhwani, M. Thomas, S. S. Upadhya, P. Verma,
and E. Yuvaraj

Track Finding for the Level-1 Trigger of the CMS Experiment . . . . . . . 296
Tom James, on behalf of the TMTT group
A Multi-chip Data Acquisition System Based on a Heterogeneous
System-on-Chip Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Adrian Fiergolski, on behalf of the CLIC detector
and physics (CLICdp) collaboration
Acceleration of a Particle Identification Algorithm Used
for the LHCb Upgrade with the New Intel® Xeon®-FPGA . . . . . . . . . . 309
Christian Färber, Rainer Schwemmer, Niko Neufeld, and Jonathan Machen
The ATLAS Level-1 Trigger System with 13 TeV Nominal
LHC Collisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Louis Helary, on behalf of the ATLAS Collaboration
Common Software for Controlling and Monitoring the Upgraded
CMS Level-1 Trigger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Giuseppe Codispoti, Simone Bologna, Glenn Dirkx, Christos Lazaridis,
Alessandro Thea, and Tom Williams
A Prototype of an ATCA-Based System for Readout Electronics
in Particle and Nuclear Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
Min Li, Zhe Cao, Shubin Liu, and Qi An
Commissioning and Integration Testing of the DAQ System
for the CMS GEM Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
Alfredo Castaneda, On behalf of the CMS Muon group
MiniDAQ1: A Compact Data Acquisition System for GBT
Readout over 10G Ethernet at LHCb . . . . . . . . . . . . . . . . . . . . . . . . . . 332
Paolo Durante, Jean-Pierre Cachemiche, Guillaume Vouters,
Federico Alessio, Luis Granado Cardoso, Joao Vitor Viana Barbosa,
and Niko Neufeld
Challenges and Performance of the Frontier Technology Applied
to an ATLAS Phase-I Calorimeter Trigger Board Dedicated
to the Jet Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
B. Bauss, A. Brogna, V. Buescher, R. Degele, H. Herr, C. Kahra, S. Rave,
E. Rocco, U. Schaefer, J. Souza, S. Tapprogge, and M. Weirich
The Phase-1 Upgrade of the ATLAS Level-1 Endcap Muon Trigger . . . 341
Shunichi Akatsuka, on behalf of the ATLAS Collaboration
Modeling Resource Utilization of a Large Data Acquisition System . . . . 346
Alejandro Santos, Pedro Javier García, Wainer Vandelli,
and Holger Fröning

The Phase-I Upgrade of the ATLAS First Level Calorimeter Trigger . . . 350
Victor Andrei, on behalf of the ATLAS Collaboration
The CMS Level-1 Calorimeter Trigger Upgrade for LHC Run II . . . . . 355
Alessandro Thea, on behalf of the CMS Level-1 Trigger group
The ATLAS Muon-to-Central-Trigger-Processor-Interface
(MUCTPI) Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
R. Spiwoks, A. Armbruster, G. Carrillo-Montoya, M. Chelstowska,
P. Czodrowski, P.-O. Deviveiros, T. Eifert, N. Ellis, G. Galster, S. Haas,
L. Helary, O. Lagkas Nikolos, A. Marzin, T. Pauly, V. Ryjov,
K. Schmieden, M. Silva Oliveira, J. Stelzer, P. Vichoudis, and T. Wengler
Automated Load Balancing in the ATLAS High-Performance
Storage Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
Fabrice Le Goff and Wainer Vandelli, On behalf of the ATLAS
Collaboration
Study of the Calorimeter Trigger Simulation at Belle II Experiment . . . 371
Insoo Lee, SungHyun Kim, CheolHoon Kim, HanEol Cho, Yuji Unno,
and B. G. Cheon
RDMA Optimizations on Top of 100 Gbps Ethernet for the
Upgraded Data Acquisition System of LHCb . . . . . . . . . . . . . . . . . . . . . 376
Balázs Vőneki, Sébastien Valat, and Niko Neufeld
Integration of Data Acquisition Systems of Belle II Outer-Detectors
for Cosmic Ray Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
S. Yamada, R. Itoh, T. Konno, Z. Liu, M. Nakao, S. Y. Suzuki, and J. Zhao
Study of Radiation-Induced Soft-Errors in FPGAs for Applications
at High-Luminosity e+e− Colliders . . . . . . . . . . . . . . . . . . . . . . 386
Raffaele Giordano, Gennaro Tortone, and Alberto Aloisio
Design of High Performance Compute Node for Belle II Pixel
Detector Data Acquisition System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
Jingzhou Zhao, Zhen-An Liu, Wolfgang Kühn, Jens Sören Lange,
Thomas Geßler, and Wenxuan Gong
A Reconfigurable Virtual Nuclear Pulse Generator via the
Inversion Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
Weigang Yin, Lian Chen, Feng Li, Baochen Wang, Zhou He, and Ge Jin
Design of Wireless Data Acquisition System in Nuclear
Physics Experiment Based on ZigBee . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Zhou He, Lian Chen, Feng Li, Futian Liang, and Ge Jin
A Lightweight Framework for DAQ System in Small-Scaled High
Energy Physics Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Yang Li, Wei Shen, Si Ma, and XiaoLu Ji

Data Transmission System for 2D-SND at CSNS . . . . . . . . . . . . . . . . . . 416


Dongxu Zhao, Hongyu Zhang, Xiuku Wang, Haolai Tian,
and Junrong Zhang
Design of DAQ Software for CSNS General Purpose Powder
Diffractometer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
Xiuku Wang, Hongyu Zhang, Yubin Zhao, Mali Chen, Dongxu Zhao,
Bin Tang, Liang Xiao, Shaojia Chen, and Haolai Tian
Design of Data Acquisition Software for CSNS Angle Neutron
Spectrometer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
Han Zhao, Hongyu Zhang, Mali Chen, Dongxu Zhao, Liang Xiao,
Xiuku Wang, Jinfan Chang, Yubin Zhao, and Hong Luo
Design of Data Acquisition System for CSNS RCS Beam Position
Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Liang Xiao, Dongxu Zhao, Hongyu Zhang, Yubin Zhao,
Xingcheng Tian, and Xiuku Wang
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
Astrophysics and Space Instrumentation
A Novel Gamma-Ray Detector for GECAM

Pin Lv1,2, Shao-Lin Xiong1, Xi-Lei Sun1,2(✉), and Jun-Guang Lv1,2

1 State Key Laboratory of Particle Detection and Electronics, Beijing, China
sunxl@ihep.ac.cn
2 Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China

Abstract. The Gravitational wave burst high-energy Electromagnetic Counterpart All-sky Monitor (GECAM) is specially designed to detect the high-energy electromagnetic counterparts generated by gravitational wave sources. GECAM consists of two micro-satellites which, together with the Earth, lie along a straight line, enlarging the combined field of view to the full sky. The electromagnetic radiation may be distributed over a wide range from radio waves to high-energy γ-rays. A novel gamma-ray detector, made up of a 3 in. LaBr3 crystal and a large-area SiPM array, is designed to detect rays from 6 keV to 2000 keV. The detector performance, including dynamic range, linear response, energy resolution, and uniformity, is discussed in this paper.

Keywords: Electromagnetic radiation · LaBr3 · SiPM · Dynamic range · Linear response · Energy-resolution

1 Introduction

In 2015, both Advanced Laser Interferometer Gravitational-Wave Observatory (aLIGO) detectors succeeded in making the first direct measurement of gravitational waves [1, 2]. The counterparts are the electromagnetic radiation produced by the gravitational wave sources. They contain comprehensive information about the astrophysical source. There are several key factors for a detector searching for counterparts. First, the field of view should be as wide as possible. Second, the sensitivity of the detector must be high enough. In addition, localization capability is indispensable, since it helps to confirm the low-energy counterparts, which may arrive later than the high-energy counterparts. However, the existing high-energy telescopes are not specially designed to catch the electromagnetic counterparts. Given these considerations, GECAM was proposed. There are two kinds of detectors for GECAM: one is the Gamma-Ray Detector (GRD), and the other is the Charged-Particle Detector (CPD). Both kinds of detectors will be installed in the micro-satellites, so the space is limited and the detectors should be as small as possible. Given these requirements, a LaBr3 crystal with a SiPM as read-out device is a good candidate.


2 Detector Design
2.1 The LaBr3 Crystal and SiPM
The LaBr3 crystal offers top-level performance. Its decay time is 16 ns, it is bright, and it has a good linear response; these characteristics lead to a high energy resolution [3–7]. SiPMs, as novel silicon-based photodetectors, have widespread applications in high-energy physics. They show excellent performance, including single-photon sensitivity, low bias voltage, large dynamic range, and high photon detection efficiency, and large-area SiPMs are now available [8]. Each GRD consists of a 3 in. LaBr3 crystal and a 2 in. (50.44 × 50.44 mm2) SiPM array.

2.2 Preamp and Prototype


The target dynamic range for the GRD is 6 keV to 2000 keV. In view of the low-energy limit, the preamp should be low-noise and high-gain to distinguish low-energy signals from noise. Therefore, two low-noise ADA4895-1 amplifiers are cascaded to amplify the signals by a factor of 25. For a SiPM, the noise increases as the area enlarges because the pixel capacitances are connected in parallel. Hence, it is difficult to achieve single-channel readout for a large array. However, for the GRD, the high light yield of the LaBr3 crystal improves the signal-to-noise ratio, so only one channel, formed by 64 pads connected in parallel, is needed.
Figure 1 shows the prototype. Incident photons pass through the beryllium window and then hit the crystal, which emits scintillation light at a wavelength of 380 nm. The light is converted into electrical signals by the SiPM. The SiPM and the read-out circuit are attached to the two sides of the same Printed Circuit Board (PCB) to reduce noise.

Fig. 1. The prototype of GRD

3 Detector Performance

3.1 Dynamic Range


A series of radioactive sources is used to characterize the response of the detector over the range from 5.9 keV to 1332 keV, and the internal radioactivity of the LaBr3 crystal emits two low-energy X-rays at 5.6 keV and 37.5 keV. All of the rays are listed in Table 1.

Table 1. The list of radioactive sources and rays

Source    Energy               Rays
La decay  5.6 keV, 37.4 keV    X-ray
55Fe      5.9 keV              X-ray
241Am     59.5 keV             γ-ray
57Co      122 keV, 136 keV     γ-ray
133Ba     81 keV, 356 keV      γ-ray
137Cs     662 keV              γ-ray
60Co      1173 keV, 1332 keV   γ-ray

Fig. 2. The energy spectra of internal radioactivity (left) and 55Fe source (right)

The low-energy sensitivity is an important parameter for the GRD. The bare crystal, and then the crystal with a source, were placed on the SiPM. Figure 2 shows the energy spectra of the internal radioactivity and of the 55Fe source; the Gaussian peaks at 5.6 keV and 5.9 keV are clear. This result satisfies the GECAM requirements well.

3.2 Linear Response and Energy-Resolution


The rays listed in Table 1 are used to obtain the relation between ADC value and energy. Figure 3 shows the energy spectra. In Fig. 4 (left), the X-axis is energy and the Y-axis is the ADC value. The relation changes when the energy is higher than 700 keV. Figure 4 (right) shows the waveform of a high-energy γ-ray; the amplitude exceeds the ADC range and some
Fig. 3. The energy spectra of several gamma-sources



Fig. 4. The relation of ADC and energy (left) and the waveform of a high-energy γ-ray (right)

information is lost. Therefore, the next step is to design a special circuit that supplies a high gain for low energies and a low gain for high energies.
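To illustrate how the calibration lines of Table 1 translate into an ADC–energy relation, here is a minimal sketch in Python; the peak ADC values are hypothetical placeholders (not measurements from this paper), and only the points below 700 keV enter the linear fit, mirroring the departure from linearity observed above that energy.

```python
import numpy as np

# Calibration lines from Table 1 (keV) and hypothetical fitted peak positions (ADC).
energies = np.array([5.9, 59.5, 122.0, 356.0, 662.0, 1173.0, 1332.0])
adc_peaks = np.array([55.0, 540.0, 1110.0, 3240.0, 6020.0, 9800.0, 10600.0])  # placeholders

# Fit only the region where the response is expected to be linear (< 700 keV).
linear = energies < 700.0
slope, offset = np.polyfit(energies[linear], adc_peaks[linear], 1)
print(f"gain = {slope:.2f} ADC/keV, offset = {offset:.1f} ADC")

# Residuals of the high-energy points reveal the departure from linearity.
for e, a in zip(energies[~linear], adc_peaks[~linear]):
    expected = slope * e + offset
    print(f"{e:7.1f} keV: measured {a:.0f} ADC, linear expectation {expected:.0f} ADC")
```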
Energy resolution is another important parameter and must be better than 10% at 662 keV. The resolution improves as the energy increases. For this GRD, the resolution reaches 6.5% at 662 keV.

3.3 Uniformity
A collimated 241Am source was placed at several positions on the crystal, as shown in Fig. 5 (left). Taking the result at point (1) as the reference, the relative response is obtained in Fig. 5 (right). It is clear that the uniformity gets worse as the position moves from the center to the edge, but the difference between center and edge is less than 10%.

Fig. 5. The location of the sources (left) and the uniformity of the GRD (right)

4 Conclusion

GECAM, a high-energy telescope specially designed to search for the high-energy counterparts of gravitational wave sources, is a necessary and important project. The performance of the GRD is quite good: the low-energy limit reaches 5.6 keV, the energy resolution is 6.5% at 662 keV, and the uniformity and linearity are acceptable. All the results meet the requirements of GECAM well. GECAM is scheduled to be launched and to start taking data in 2020.

Acknowledgement. This work is supported by the Key Research Program of Frontier Sciences, CAS, Grant No. Y6292690K1.

References
1. Abbott, B.P.: Phys. Rev. Lett. 116, 061102 (2016)
2. Abbott, B.P.: Phys. Rev. Lett. 116, 131103 (2016)
3. Quarati, F.: Nucl. Instr. Meth. A 574, 115–120 (2007)
4. van Loef, E.V.D.: Nucl. Instr. Meth. A 486, 254–258 (2002)
5. Iltis, A.: Nucl. Instr. Meth. A 563, 359–363 (2006)
6. Dorenbos, P.: IEEE Trans. Nucl. Sci. NS 51, 1289 (2004)
7. Bizarri, G.: IEEE Trans. Nucl. Sci. NS 53, 615 (2006)
8. SensL Homepage. http://sensl.com/. Accessed 20 Sep 2016
Spin-Off Application of Silica Aerogel
in Space: Capturing Intact Cosmic Dust
in Low-Earth Orbits and Beyond

Makoto Tabata(B)
on behalf of the Tanpopo Team

Chiba University, Chiba, Japan


makoto@hepburn.s.chiba-u.ac.jp

Abstract. A spin-off application of transparent, low-density silica aerogel as a dust-capture medium in space is described. We provide an overview of the physics behind the hypervelocity capture of dust using aerogels and chronicle their history of use as dust collectors. In addition, recent developments regarding the high-performance aerogel used in the Tanpopo mission are presented.

Keywords: Silica aerogel · Low-density material · Cosmic dust · Low-Earth orbit · Astrobiology · Tanpopo

1 Introduction
Since the 1970s [1], silica aerogel has been widely used as a Cherenkov radiator in
accelerator-based particle- and nuclear-physics experiments, as well as in cosmic
ray experiments. For this major application, the adjustable refractive index and
optical transparency of the aerogel are very important. We have been developing
high-quality aerogel tiles for use in a super-B factory experiment (Belle II) to
be conducted at the High Energy Accelerator Research Organization (KEK),
Japan [2], and for various particle- and nuclear-physics experiments conducted
(or to be conducted) at Japan Proton Accelerator Complex (J-PARC) (e.g., [3])
since the year 2004. Our recent production technology has enabled us to obtain
a hydrophobic aerogel [4] with a wide range of refractive indices (n = 1.0026–
1.26) [5] and with an approximately doubled transmission length (measured at
a wavelength of 400 nm) in various refractive index regions [6].
Silica aerogel is also useful as a cosmic dust-capture medium (see [7] as a
review). Low-density aerogels can capture almost-intact micron-size dust grains
with hypervelocities of the order of several kilometers per second in space, which
was first recognized in the 1980s [8]. For this interesting application, high porosity
(i.e., low bulk density below 0.1 g/cm3 ; n < 1.026) and optical transparency of
the aerogel are vitally important. The latter characteristic enables us to easily
find a cavity under an optical microscope, which is produced in an aerogel by
the hypervelocity impact of a dust particle.
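The densities and refractive indices quoted in this paper are consistent with the linear approximation commonly used for silica aerogel, n ≈ 1 + kρ; the short check below infers k from the quoted values (the coefficient itself is not stated in the text).

```python
# Density (g/cm^3) -> refractive index pairs quoted in the text:
# 0.1 g/cm^3 corresponds to n ~ 1.026; the Tanpopo aerogel (Sect. 2.3) is 0.01 g/cm^3 with n = 1.0026.
k = (1.026 - 1.0) / 0.1          # inferred proportionality constant, ~0.26 cm^3/g
for rho in (0.01, 0.03, 0.1):    # densities mentioned for the Tanpopo capture media
    print(f"rho = {rho:.2f} g/cm^3  ->  n ~ {1.0 + k * rho:.4f}")
```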

2 Spin-Off Application of Aerogel in Space: A Dust-Capture Medium

2.1 Dust Impact Physics

High-energy physics researchers frequently conduct test beam experiments to evaluate the performance of particle detectors. Similarly, a gas gun experiment
is often performed in ground-based laboratories to study hypervelocity impact
phenomena that can arise in space. Gas gun experiments enable us to simulate
dust capture in an aerogel (e.g., [9]). For example, a two-stage light-gas gun
is installed at the Institute of Space and Astronautical Science (ISAS), Japan
Aerospace Exploration Agency (JAXA), which can fire a projectile of diameter
7 mm at a nominal maximum velocity of 7 km/s. Dust particles as small as
∼10 µm can also be shot using a separable bullet container referred to as a
sabot. The gas gun is the accelerator in the space science field.
The impact cavity (referred to as a track) created inside the aerogel is mor-
phologically analyzed under an optical microscope (e.g., [10]). The track length,
width of entrance hole on the aerogel surface, maximum track width, and track
volume can be associated empirically with the impact energy, which involves
impact velocity, size, and density of dust particles. Of course, the density of the
aerogel influences the track morphology significantly. Researchers consider that
lower aerogel density is vital for capturing more intact dust grains [8]. The track
shape is also affected by the state of aggregation of the dust grains. The analysis
provides physical information about cosmic dust.
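As a rough illustration of the impact energies involved, the sketch below evaluates the kinetic energy of a single spherical grain at the gas gun's nominal velocity; the grain density of 2 g/cm³ is an assumed value chosen for illustration, not a number from the paper.

```python
import math

def grain_kinetic_energy(diameter_m, velocity_m_s, density_kg_m3):
    """Kinetic energy E = 1/2 m v^2 of a spherical dust grain."""
    radius = diameter_m / 2.0
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius**3
    return 0.5 * mass * velocity_m_s**2

# ~10 um grain (smallest size shot with the sabot) at the gas gun's nominal 7 km/s,
# assuming a density of 2 g/cm^3 (2000 kg/m^3) for illustration.
e_j = grain_kinetic_energy(10e-6, 7e3, 2000.0)
print(f"kinetic energy ~ {e_j * 1e6:.1f} microjoules")
```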

2.2 Low-Earth Orbits and Deep-Space Missions

Material samples acquired from space are crucial in planetary science, astro-
chemistry, astrobiology, and space debris research. This is because ground-based
state-of-the-art analysis instruments can be used for biochemical, mineralogical, and
other related analysis of cosmic samples. The first space missions that used
aerogel as a dust-capture medium were conducted in low-Earth orbits (LEO) in
the 1990s. These include space shuttle missions (using the shuttle’s cargo bay)
by the U.S. National Aeronautics and Space Administration (NASA) [8] and
the European retrievable carrier (Eureca) mission (free-flying spacecraft) by the
European Space Agency [11]. Similarly, the series of Micro-Particles Capturer
(MPAC) experiments conducted by JAXA was a LEO mission aboard the Inter-
national Space Station (ISS) in the 2000s [12]. In addition, the Large Area Debris
Collector (LAD-C) on the ISS was meant to be used for exploring near-Earth
orbital debris by the U.S. Naval Research Laboratory (however, it was canceled
in 2007) [13]. The Stardust spacecraft, a deep-space comet flyby mission by
NASA, retrieved cometary dust (from 81P/Wild 2) back to Earth successfully
in 2006 (e.g., [14]). Recently, an Enceladus (Saturn’s moon) flyby plume sample
return mission has been proposed to search for a signature of chemical evolution
and possible extraterrestrial life [15,16].

2.3 Tanpopo Mission


The Tanpopo mission proposed in 2007 is Japan’s first astrobiology experiment
in space to investigate possible interplanetary transfer of life [17,18]. This lat-
est, ongoing mission is a multifaceted experiment involving cosmic dust capture
and determining exposure to terrestrial microbes/organic compounds in LEO
aboard the ISS. In support of present-day endeavors, we developed the world’s
lowest density (0.01 g/cm3 ; n = 1.0026) aerogel (within the dust-capture appli-
cation) for the Tanpopo capture experiment [19]. To strengthen the mechanical
properties of the entire capture media and to withstand rocket launch vibra-
tions, a 0.01-g/cm3 ultralow-density aerogel layer was combined chemically with
a relatively robust 0.03-g/cm3 density layer [20,21]. In addition, a dedicated cas-
ing comprising a capture panel (CP) together with the double-layer aerogel was
designed for interfacing an exposure mechanism [22]. A total of 36 CP units for
three years were launched in 2015. The first-year samples were retrieved in 2016,
and the analysis is in progress.

3 Summary
Since its first use was reported approximately 25 years ago, silica aerogel has
been used as a cosmic dust-capture medium in many space missions in LEO
and beyond. The aerogel can provide fruitful, almost-intact cosmic materials for
detailed analyses on the ground. A high-performance ultralow-density aerogel
was developed for the ongoing astrobiology mission Tanpopo in LEO. This tech-
nique for capturing intact dust particles will be applied in future missions to the
moons of outer planets to search for possible extraterrestrial life.

Acknowledgments. The author is grateful to the members of the Tanpopo team for
their contributions to CP development. Additionally, the author is grateful to Prof. H.
Kawai of Chiba University and Prof. I. Adachi of KEK for their assistance in aerogel
production. Furthermore, the author is thankful to the JEM Mission Operations and
Integration Center, Human Spaceflight Technology Directorate, JAXA. This study was
partially supported by the Hypervelocity Impact Facility (former name: Space Plasma
Laboratory) at ISAS, JAXA, the Venture Business Laboratory at Chiba University, a
Grant-in-Aid for Scientific Research (B) (No. 6H04823), and a Grant-in-Aid for JSPS
Fellows (No. 07J02691) from the Japan Society for the Promotion of Science (JSPS).

References
1. Cantin, M., et al.: Silica aerogels used as Cherenkov radiators. Nucl. Instrum.
Meth. 118, 177–182 (1974)
2. Adachi, I., et al.: Construction of silica aerogel radiator system for Belle II RICH
counter. Nucl. Instrum. Meth. Phys. Res. A 876, 129–132 (2017). https://doi.org/
10.1016/j.nima.2017.02.036
3. Tabata, M., et al.: Fabrication of silica aerogel with n = 1.08 for e+ /µ+ separation
in a threshold Cherenkov counter of the J-PARC TREK/E36 experiment. Nucl.
Instrum. Meth. Phys. Res. A 795, 206–212 (2015)

4. Yokogawa, H., Yokoyama, M.: Hydrophobic silica aerogels. J. Non-Cryst. Solids


186, 23–29 (1995)
5. Tabata, M., et al.: Development of transparent silica aerogel over a wide range of
densities. Nucl. Instrum. Meth. Phys. Res. A 623(1), 339–341 (2010)
6. Tabata, M., et al.: Hydrophobic silica aerogel production at KEK. Nucl. Instrum.
Meth. Phys. Res. A 668, 64–70 (2012)
7. Burchell, M.J., et al.: Cosmic dust collection in aerogel. Annu. Rev. Earth Planet.
Sci. 34, 385–418 (2006)
8. Tsou, P.: Silica aerogel captures cosmic dust intact. J. Non-Cryst. Solids 186,
415–427 (1995)
9. Kitazawa, Y., et al.: Hypervelocity impact experiments on aerogel dust collector.
J. Geophys. Res. 104(E9), 22035–22052 (1999)
10. Niimi, R., et al.: Size and density estimation from impact track morphology in
silica aerogel: application to dust from comet 81P/Wild 2. Astrophys. J. 744(1),
18 (2012). (5 pages)
11. Brownlee, D.E., et al.: Eureka!! Aerogel capture of meteoroids in space. In: 25th
Lunar and Planetary Science Conference, Abstract #1092 (1994)
12. Noguchi, T., et al.: A chondrule-like object captured by space-exposed aerogel on
the international space station. Earth Planet. Sci. Lett. 309(3–4), 198–206 (2011)
13. Liou, J.-C., et al.: Improving the near-Earth meteoroid and orbital debris environ-
ment definition with LAD-C. In: Proceedings of 57th International Astronautical
Congress, IAC-06-B6.3.10, 7p., Valencia, Spain (2006)
14. Brownlee, D., et al.: Comet 81P/Wild 2 under a microscope. Science 314, 1711–
1716 (2006)
15. Tsou, P., et al.: LIFE: life investigation for Enceladus A sample return mission
concept in search for evidence of life. Astrobiology 12(8), 730–742 (2012)
16. Fujishima, K., et al.: A fly-through mission strategy targeting peptide as a signature
of chemical evolution and possible life in Enceladus plumes. Enceladus and the Icy
Moons of Saturn, Abstract #3085 (2016)
17. Yamagishi, A., et al.: TANPOPO: astrobiology exposure and micrometeoroid cap-
ture experiments. Biol. Sci. Space 21(3), 67–75 (2007). (in Japanese)
18. Kawaguchi, Y., et al.: Investigation of the interplanetary transfer of microbes in
the Tanpopo mission at the exposed facility of the international space station.
Astrobiology 16(5), 363–376 (2016)
19. Tabata, M., et al.: Tanpopo cosmic dust collector: silica aerogel production and
bacterial DNA contamination analysis. Biol. Sci. Space 25(1), 7–12 (2011)
20. Tabata, M., et al.: Silica aerogel for capturing intact interplanetary dust particles
for the Tanpopo experiment. Orig. Life Evol. Biosph. 45(1–2), 225–229 (2015)
21. Tabata, M., et al.: Ultralow-density double-layer silica aerogel fabrication for the
intact capture of cosmic dust in low-Earth orbits. J. Sol-Gel Sci. Technol. 77(2),
325–334 (2016)
22. Tabata, M., et al.: Design of a silica-aerogel-based cosmic dust collector for the
Tanpopo mission aboard the international space station. Trans. JSASS Aerosp.
Technol. Jpn. 12(ists29), Pk 29–PK 34 (2014).
Research and Development
of a Scintillating Fiber Tracker with SiPM
Array Read-Out for Application in Space

Chiara Perrina(B) , Philipp Azzarello, Franck Cadoux, Daniel La Marra,


and Xin Wu

DPNC, Université de Genève, Quai Ernest-Ansermet 24, 1211 Genève 4, Switzerland


chiara.perrina@unige.ch

Abstract. Scintillating fibers read out by arrays of silicon photomultipliers can be complementary to silicon strip detectors for particle trackers
in space or represent a viable alternative. Less fragile, more flexible, with
no need of wire bonds, they can be used in high resolution charged par-
ticle tracking detectors for spaceborne experiments. Two prototypes, a
1 m long and a 70 cm long ribbon, made of six layers of 250 µm diameter
fibers, coupled to Hamamatsu silicon photomultiplier arrays and read-
out by VATA ASICs have been tested. Preliminary results of a beam test
carried out at CERN and the status of the ongoing space qualification
process are presented in this contribution.

Keywords: Scintillating fiber ribbon · Silicon photomultiplier array · Particle tracking detector · Application in space

1 Introduction
Astroparticle physics and high energy astrophysics are experiencing a “golden”
era thanks to very successful and long-running space- and ground-based exper-
iments (e.g. PAMELA, Fermi, AMS-02, H.E.S.S., Auger, IceCube). The multi-
messenger/multi-wavelength/multi-platform approach is opening up new possi-
bilities in discovery and observation. Hot topics still remaining are the origin of
cosmic rays, the spectrum of anti-matter and the observation of dark matter par-
ticles. The future of ground-based astroparticle experiments is very bright with
approved new projects (CTA, LHAASO, KM3NeT) and proposed ones (IceCube-
Gen2). In this scenario, the complementarity of space missions is needed to get
to the “knee” of the cosmic ray spectrum (HERD), to close the gamma-ray gap
in the MeV region (PANGU/e-ASTROGAM) and search for dark matter with
anti-particles at energies > TeV (ALADINO).
The Department of Nuclear and Particle Physics (DPNC) of the University of
Geneva has a long experience in the development and assembly of silicon trackers
used in space (AMS-01, AMS-02, DAMPE). At present, new technologies to
replace silicon strip detectors (SSDs) are being evaluated. In particular, the
possibility to use scintillating fiber ribbons read out by silicon photomultiplier (SiPM) arrays is under consideration. Figure 1 shows the picture of one layer of
the AMS-02 silicon tracker (top left) [1] and the picture of 11 of the 12 layers of
the DAMPE silicon tracker (top right) [2], both developed and assembled at the
DPNC. Figure 1 shows the sketch of a DAMPE silicon layer (bottom left) and the
sketch of a fiber tracker layer for a future experiment in space (bottom right),
in both the elementary module is highlighted. In accordance with the design
that will be presented here, the DAMPE module made of 4 SSDs and a front-
end (FE) board will be replaced with a module consisting of one fiber ribbon
together with three SiPM arrays and one FE board, placed at each extremity of
the ribbon.


Fig. 1. Top left: a layer of the AMS-02 silicon tracker made of 22 modules of 12 to
15 SSDs. Top right: 11 layers of the DAMPE silicon tracker. Bottom left: schematic of
a DAMPE silicon layer, each module consists of 4 SSDs and a front-end board with
6 VA140 ASICs used for the signal amplification. Bottom right: schematic of a future
space experiment tracking layer, each module consists of one fiber ribbon with three
SiPM arrays and one front-end board on each extremity.

2 Design of a Scintillating Fiber Tracker for Space


The DPNC is working on the concept of a fiber tracker for the HERD mission [3] that consists of a 4-sided detector, as shown in Fig. 2 (top left). Each side is composed of stacked layers placed alternately at 90° to each other to measure the x and y
coordinates of particle tracks. A layer is made of modules, as described in the
previous section, placed side by side. The fiber ribbon is made of six layers of
fibers with 250 µm of diameter, made by Kuraray, similar to the LHCb ribbon
[4]. Figure 2 (top right) shows the so far used S10943-3183(X) SiPM array made
by Hamamatsu: 128 channels per array, 96 pixels per channel, a pixel size of
57.5 µm × 62.5 µm and a channel size of 230 µm × 1500 µm. The project plans
the use of ribbons of two different lengths (1 m and 70 cm) and 9.8 cm width,
and the mounting of SiPM arrays on each end of the fiber ribbon to detect
particles with Z = 1 on one side and Z ≤ 20 on the other. For the moment, the
FE board has two IDEAS VATA 64 HDR 16, to read-out one SiPM array. The
SiPM array is mounted on a printed circuit board (PCB) which is connected to
the FE board through 4 flex cables. One prototype module is shown in Fig. 2
(bottom), it consists of a 70 cm long fiber ribbon, one SiPM array mounted on
a PCB and connected to a FE board at each extremity.
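As a quick arithmetic cross-check of the quoted SiPM-array geometry (assuming, as an idealization, that the pixels exactly tile each channel):

```python
# Hamamatsu S10943-3183(X) figures quoted in the text.
pixel_w, pixel_h = 57.5, 62.5         # um
channel_w, channel_h = 230.0, 1500.0  # um

cols = channel_w / pixel_w            # 4 pixel columns across the channel
rows = channel_h / pixel_h            # 24 pixel rows along the channel
print(f"{cols:.0f} x {rows:.0f} = {cols * rows:.0f} pixels per channel")  # 96, as quoted
```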


Fig. 2. Top left: sketch of the scintillating fiber tracker for the HERD mission. It is a
4-sided detector, each side is composed of stacked layers. A layer is made of modules
placed side by side. A module consists of a fiber ribbon with 3 SiPM arrays and a
FE board at each extremity. Top right: Hamamatsu SiPM array with 128 channels.
Bottom: prototype of a detector module made of a 70 cm long fiber ribbon with one
SiPM array mounted on a PCB and connected to a FE board at each extremity.

3 Characterization and Space Qualification of Modules


Two modules have been tested during a beam test (May 15–19, 2017) at CERN
with a hadron beam of 100 GeV/c in which four million events have been col-
lected. The data analysis is in progress. Figure 3 shows the signal distribution

Fig. 3. Preliminary beam test results. Left: signal distribution integrated over the 128
channels of a SiPM array with no clusterization performed. Right: signal of each peak
as a function of the peak number, from the linear fit it is possible to compute the signal
of one pixel.

integrated over the 128 channels of a SiPM array with no clusterization performed (left). By plotting the position of each peak as a function of the peak number, it is possible to compute the signal for one pixel, 1 pixel = 119 ADC (right), thereby calibrating the average gain of the SiPM array.
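A minimal sketch of the gain calibration described above: the peak positions of the spectrum are fitted against the peak number, and the slope gives the single-pixel signal. The peak positions below are hypothetical placeholders chosen only to reproduce the quoted 119 ADC per pixel.

```python
import numpy as np

# Hypothetical peak positions (ADC) of consecutive fired-pixel peaks.
peak_number = np.arange(1, 7)
peak_adc = np.array([131.0, 250.0, 368.0, 488.0, 607.0, 725.0])  # placeholders

slope, intercept = np.polyfit(peak_number, peak_adc, 1)
print(f"single-pixel signal ~ {slope:.0f} ADC (pedestal offset ~ {intercept:.0f} ADC)")
```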

0 C 5 C
103 10 C 15 C 56.0
Output current (nA)

Breakdown voltage (V)


20 C 25 C
10 2 30 C 35 C 55.5
40 C

10 55.0

1 54.5

1 54.0
10

2 53.5
10

3
10 53.0
52.5 53.0 53.5 54.0 54.5 55.0 55.5 56.0 56.5 57.0 0 5 10 15 20 25 30 35 40
Reverse voltage (V) Temperature (C)
Output current (nA)

56.0
Breakdown voltage (V)

102
55.8

55.6
10
55.4

55.2
1
55.0

54.8
1
10
54.6

54.4
2
10
54.2

53 54 55 56 57 58 59 60 54.0
0 32 64 96 128
Reverse voltage (V)
Channel id

Fig. 4. Top left: leakage current of a SiPM array channel as a function of bias voltage
for different temperatures. Top right: VBD as a function of the temperature. Bottom
left: leakage current of all the 128 channels of a SiPM array as a function of the bias
voltage. Bottom right: VBD as a function of the channel id.

Since the kind of detector under study (fiber + SiPM) has never been used in
space, a program of tests for space qualification is needed. Thermal/vacuum and vibration tests are ongoing. To evaluate the success of a test, some characteristics
of the module have to be measured before and after the test. One important
characteristic, simple to measure, is the breakdown voltage (VBD ) of the SiPM
channels of the array. Figure 4 (bottom right) shows the VBD for all the 128
channels of a SiPM array, computed from the leakage current vs. bias voltage
curves (bottom left), as described in [5], once corrected with respect to the
temperature. In fact, as measured and shown in Fig. 4 (top), the VBD of a SiPM varies with the temperature.
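A minimal sketch of the temperature correction mentioned above: a measured VBD is referred to a common reference temperature using the linear coefficient obtained from the VBD-versus-temperature fit. The numerical coefficient and set points used here are placeholders for illustration, not values from the paper.

```python
def vbd_at_reference(vbd_measured, temp_measured_c, temp_ref_c=25.0, dvbd_dt=0.0215):
    """Refer a measured breakdown voltage to a reference temperature.

    dvbd_dt is the linear temperature coefficient in V/degC (placeholder value);
    in practice it is taken from the slope of the VBD-vs-temperature fit.
    """
    return vbd_measured - dvbd_dt * (temp_measured_c - temp_ref_c)

# Example: a channel measured at 32 degC before a thermal/vacuum test (illustrative numbers).
print(f"VBD(25 degC) ~ {vbd_at_reference(55.12, 32.0):.2f} V")
```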

4 Conclusions and Perspectives


A concept of a fiber tracker for a spaceborne astroparticle physics experiment
has been presented. Further tests will be carried out to check the validity of
the mechanical structure as well as the stability of the modules in vacuum.


In addition, front-end electronics prototypes using an ASIC including an ADC
(SIPHRA [6]) are under development. A first beam test has been done to study
the performance of the 1 m and 70 cm long modules with the simultaneous read-
out on both extremities. More beam tests will be organized, in particular to
examine the SIPHRA performances.

Acknowledgements. We would like to thank the LHCb group of EPFL for the pro-
curement and contribution to the preparation of the fiber ribbons used at the beam
test.

References
1. Alpat, B., et al.: The internal alignment and position resolution of the AMS-02
silicon tracker determined with cosmic-ray muons. Nucl. Instr. Methods Phys. Res.
Sect. A Accel. Spectrom. Detect. Assoc. Equip. 613, 207–217 (2010)
2. Wu, X., et al.: The silicon-tungsten tracker of the DAMPE mission. In: PoS (ICRC
2015) 1192 (2016). http://inspirehep.net/record/1483465
3. Zhang, S.N., et al.: The high energy cosmic-radiation detection (HERD) facility
onboard China’s space station. Proc. SPIE Int. Soc. Opt. Eng. 9144 (2014). 91440X.
http://inspirehep.net/record/1306880
4. The LHCb Scintillating Fibre Collaboration: LHCb Scintillating Fibre Tracker Engi-
neering Design Review Report: Fibres, Mats and Modules. LHCb-PUB-2015-008
(2015)
5. Garutti, E., et al.: Characterization and x-ray damage of silicon photomultipliers.
In: PoS (TIPP 2014) 070 (2014)
6. Meier, D., et al.: SIPHRA 16-channel silicon photomultiplier readout ASIC. In:
Proceedings of AMICSA&DSP 2016 (2016)
SiPM-Based Camera Design
and Development for the Image Air
Cherenkov Telescope of LHAASO

S. S. Zhang1(✉), B. Y. Bi1,2, C. Wang1, Z. Cao1, L. Q. Yin1,2, T. Montaruli3, D. della Volpe3, and M. Heller3
for the LHAASO Collaboration

1 Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China
zhangss@ihep.ac.cn
2 University of Chinese Academy of Sciences, Beijing, China
3 University of Geneva, Geneva, Switzerland

Abstract. The Wide Field of View Cherenkov Telescope Array (WFCTA) has 16 image air Cherenkov telescopes and is one of the main detectors of LHAASO. The main scientific goal of WFCTA is to measure the ultra-high energy cosmic ray energy spectrum and composition from 30 TeV to a couple of EeV. Each Cherenkov telescope has an array of 32 × 32 SiPMs and covers a field of view of 14° × 16° with a pixel size of 0.5°. Because SiPMs do not age under strong light exposure, a SiPM-based camera can be operated on moonlit nights and achieves a longer duty cycle than a PMT-based camera, e.g. the duty cycle of the SiPM-based camera is about 30%, while that of the PMT-based camera is about 10%. A square SiPM with a photosensitive area of 15 mm × 15 mm and a Geiger-APD size of 25 µm is used in the SiPM-based camera. Each SiPM has 360,000 APDs, which can meet the dynamic range from 10 photoelectrons (pes) to 32000 pes required by WFCTA. The SiPM has a higher photon detection efficiency than the quantum efficiency × collection efficiency of a PMT, so the SiPM-based camera has the same signal-to-noise ratio as a PMT-based camera, although the SiPM has a higher dark count rate.

Keywords: SiPM · WFCTA · Image air Cherenkov telescope · SiPM-based camera

1 Introduction
Silicon photomultipliers (SiPMs) have many advantages, such as no aging under strong light exposure, no sensitivity to magnetic fields, single-photon counting response, high photon detection efficiency, and high gain at low bias voltage. A SiPM-based camera can be operated on moonlit nights, and its duty cycle is about 30%, while that of a PMT-based camera is about 10%. Therefore, the SiPM is the photosensor of choice for the next generation of image air Cherenkov telescopes. The SiPM technology has
been used in the First G-APD Cherenkov Telescope [1] and the single-mirror
Small Size Telescopes (SST-1M) proposed for the Cherenkov Telescope Array
(CTA) project [2]. The SiPM technology is also used in the Wide Field of View
Cherenkov Telescope Array (WFCTA) of the Large High Altitude Air Shower
Observatory (LHAASO) [3,4]. WFCTA has 16 image air wide field of view
Cherenkov telescopes. Each Cherenkov telescope has a field of view (FOV) of 14° × 16° with a pixel size of approximately 0.5° × 0.5°. The main scientific goal of WFCTA
is to measure the ultra-high energy cosmic ray energy spectrum and composition
from 30 TeV to a couple of EeV. In order to achieve about 5 orders of magnitude
of energy spectrum measurement, the portable design of the telescope is dedi-
cated to enable an easy switching between configurations of the array in different
energy range measurements. WFCTA observes the primary cosmic ray energy over more than 2.5 orders of magnitude in each energy range observation mode, which requires a dynamic range for each SiPM from 10 photoelectrons (pes) to 32000 pes. The design and development of the SiPM-based camera are described in detail in this paper.

2 SiPM Candidates
The SiPM is made up of an avalanche photodiode (APD) array and each APD
operates in Geiger mode. The dimension of each single APD can vary from a few µm to a hundred µm. The density of the SiPM is 1600 APDs/mm2 for a 25 µm APD size. All the APDs in the SiPM are read out in parallel, making it possible to generate signals with a dynamic range from 1 pe to a few hundred pes per 1 mm2. Saturation happens when more than one photon hits the same APD at the same time. The dynamic range of a SiPM is proportional to its total number of APDs. The SiPM dynamic range is also influenced by the uniformity of the light hitting the SiPM. A larger dynamic range requires a larger total number of APDs. A smaller APD size has a smaller fill factor and thus a smaller PDE. A larger-area SiPM has a higher dark count rate (DCR). The square SiPM
with 15 mm × 15 mm photosensitive area and 25 µm APD size is selected for
WFCTA, after taking PDE, dark count rate and price into account. The total
number of APDs is 360,000 in the square SiPM. The SiPM candidates from
Hamamatsu, FBK and SensL have been evaluated in [6]. The measured number of pes under uniform illumination can be fitted very well by the function that describes the relationship between the total number of fired APDs and the total number of APDs in the SiPM. The 15 mm × 15 mm SiPM with an APD size of 25 µm can
reach the dynamic range from 10 pes to 32,000 pes. The additional deviation
from the non-uniform light distribution caused by the light concentrator and the
spherical mirror is less than 2% at 32000 pes.

3 SiPM-Based Camera
Each SiPM-based camera consists of a 32 × 32 SiPM array, a 32 × 32 light concentrator (Winston cone) array, 1024 channels of temperature and voltage compensation loops, and 1024 channels of readout electronics. The SiPM signal is
fed to pre-amplifiers through a DC coupling (Fig. 1(a)). A typical signal from pre-
amplifier is shown in Fig. 1(b). The signal width is about 42 ns. Signals coming
out of the pre-amplifiers are split into two channels and are amplified by two
chains of amplifiers, high gain and low gain, to obtain good linearity over a wide dynamic range of 3.5 orders of magnitude. The signals are then digitized by 50-MHz, 12-bit flash analog-to-digital converters (FADCs). The digital signals are collected by FPGAs for further processing: single-channel trigger, event trigger, signal transmission, storage, etc. The SiPM gain, or breakdown voltage, is sensitive to the temperature. The SiPM gain temperature coefficient is about 1.5%/°C. The breakdown voltage temperature coefficient is about 26 mV/°C for the FBK SiPM, 21.5 mV/°C for the SensL SiPM and 54 mV/°C for the
Hamamatsu SiPM. A temperature sensor is embedded in each SiPM (Fig. 1(a)).
A temperature and voltage compensation loop is used in each SiPM to keep the
SiPM gain stable.
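As an illustration of how such a compensation loop can work, the following minimal sketch adjusts the bias of one pixel using its embedded temperature sensor. Only the temperature coefficients are taken from the text; the helper functions, the reference temperature, the nominal bias and the update period are assumptions.

V_BD_COEFF = 0.054      # breakdown-voltage shift in V/°C (Hamamatsu value from the text;
                        # FBK ~0.026 V/°C, SensL ~0.0215 V/°C)
T_REF = 25.0            # reference temperature at which the nominal bias was set (assumed)
V_BIAS_NOMINAL = 56.0   # nominal bias voltage at T_REF (illustrative value, not a WFCTA setting)

def read_temperature(pixel):
    """Placeholder for reading the temperature sensor embedded in the SiPM."""
    raise NotImplementedError

def set_bias_voltage(pixel, volts):
    """Placeholder for the slow-control call that updates the SiPM bias."""
    raise NotImplementedError

def compensate(pixel):
    # Shift the bias by the same amount as the breakdown voltage shifts,
    # so the over-voltage, and therefore the gain, stays constant.
    t = read_temperature(pixel)
    set_bias_voltage(pixel, V_BIAS_NOMINAL + V_BD_COEFF * (t - T_REF))

# In operation, one such compensation step would be repeated periodically for
# each of the 1024 pixels of the camera.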

Fig. 1. (a) A pre-amplifier schematic (bottom) and SiPM front and back photos (top). (b) A typical SiPM signal from a pre-amplifier, with a pulse width of about 42 ns.

Each SiPM-based camera is made up of 64 sub-clusters. Each sub-cluster (Fig. 2(a)) consists of 16 SiPMs, 16 Winston cones, a pre-amplifier board, 16 temperature and high-voltage compensation loops, two analogue boards (each with 8 channels of high-gain and low-gain amplifier chains) and a digital board (with 32 channels of 50 MHz, 12-bit FADCs and an FPGA). Each pixel has a SiPM with a photosensitive area of 15 mm × 15 mm. A Winston cone guides the photons hitting the non-photosensitive area into the photosensitive area: its inlet matches the pixel size of 25.8 mm × 25.8 mm and its outlet matches the SiPM photosensitive area of 15 mm × 15 mm. The photon collection efficiency is about 33.8% without the Winston cone, i.e. the geometric ratio (15/25.8)² of the photosensitive area to the pixel area, and increases to 85.0% with the Winston cone. The sky background light on moonless nights is dominated by wavelengths above 550 nm [7]. An optical filter window with a cutoff wavelength of 550 nm is therefore mounted in front of the SiPM-based camera (see Fig. 2(b)) to improve the signal-to-noise ratio of the camera. The whole system is sealed to prevent outside dust from entering.

The PDE of the SiPM is about twice the PMT's quantum efficiency (Qe) × collection efficiency (ε). The dark count rate (DCR) of the SiPM is about 13 MHz, while the DCR of the PMT is less than 10 kHz at a 0.5 pe threshold. On moonless nights the sky background noise at the YangBaJing Cosmic Ray Observatory [5] is about 38 MHz for the SiPM and 19 MHz for the PMT. Compared with the PMT-based camera of the WFCTA prototype [5], the SiPM therefore reaches the same or even higher signal-to-noise ratio once the DCR and PDE are taken into account. The energy threshold of the telescope increases when the sky background noise increases; e.g. the threshold is about 30 TeV on moonless nights and about 300 TeV on half-moon nights.
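This comparison can be made quantitative with a rough estimate, sketched below under the assumption of Poisson-limited background fluctuations, using the PDE ratio and night-sky rates quoted above; the reference signal size and the integration window are illustrative.

import math

pde_ratio = 2.0      # SiPM PDE is about twice the PMT Qe x collection efficiency (from the text)
nsb_sipm = 38e6      # night-sky background rate for the SiPM on a moonless night [Hz]
nsb_pmt = 19e6       # night-sky background rate for the PMT on a moonless night [Hz]
window = 42e-9       # integration window of the order of the pulse width [s] (assumed)

def snr(signal_pe, noise_rate_hz):
    """Signal over the Poisson fluctuation of the background accumulated in the window."""
    return signal_pe / math.sqrt(noise_rate_hz * window)

signal_pmt = 100.0                     # arbitrary reference signal in PMT photoelectrons
signal_sipm = pde_ratio * signal_pmt   # the same light gives about twice as many pe in the SiPM

print(snr(signal_sipm, nsb_sipm) / snr(signal_pmt, nsb_pmt))
# ~1.4: the SiPM pixel matches or slightly exceeds the PMT signal-to-noise ratio.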

Fig. 2. (a) A sub-cluster picture without Winston cone. (b) A SiPM array camera
design diagram.

4 Discussion and Conclusion


SiPM-based cameras have been designed and developed to meet the requirements of LHAASO-WFCTA. A SiPM-based camera can be operated on moonlit nights and achieves a longer duty cycle than a PMT-based camera: about 30% compared with about 10%. The signal-to-noise ratio of the SiPM-based camera is almost the same as that of the PMT-based camera of the WFCTA prototype [5]. The first prototype of the SiPM-based camera will be built before June 2018 and six SiPM-based telescopes will run at the LHAASO site at the end of 2018.

Acknowledgements. This work is supported in China by the Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, CAS. Projects No. 11475190 and No. 11675204 of NSFC also provide support to this study.

References
1. Anderhub, H., et al.: Design and operation of FACT - the first G-APD Cherenkov
telescope. JINST 8 (2013). P06008 arXiv:1304.1710
2. Schioppa, E.J., et al.: The SST-1M camera for the Cherenkov telescope array.
arXiv:1508.06453v1 [astro-ph.IM] (2015)
3. Cao, Z., et al.: Chin. Phys. C 34, 249 (2010)


4. He, H.H., et al.: LHAASO project: detector design and prototype. In: 31st ICRC,
LODZ (2009)
5. Zhang, S.S., et al.: Nucl. Instrum. Methods A 629, 57 (2011)
6. Bi, B.Y., et al.: Silicon photomultiplier performance study and readout design for
the wide field of view Cherenkov telescope array of LHAASO. In: Proceedings of
Technology and Instrumentation in Particle Physics (2017)
7. Benn, C.R., Ellison, S.L.: New Astron. Rev. 42, 503 (1998)
Silicon Photomultiplier Performance
Study and Preamplifier Design for the
Wide Field of View Cherenkov Telescope
Array of LHAASO

B. Y. Bi1,2(B) , S. S. Zhang1 , C. Wang1 , Z. Cao1 , L. Q. Yin1,2 , T. Montaruli3 ,


D. della Volpe3 , and M. Heller3
for the LHAASO Collaboration
1 Key Laboratory of Particle Astrophysics, IHEP, CAS, Beijing, China
biby@ihep.ac.cn
2 University of Chinese Academy of Sciences, Beijing, China
3 University of Geneva, Geneva, Switzerland

Abstract. The Wide Field of View Cherenkov Telescope Array (WFCTA), a main component of LHAASO, requires a dynamic range between 10 and 32000 photoelectrons (pes) and a stable gain of the photosensors. Silicon photomultipliers (SiPMs) are a relatively new kind of device compared with photomultiplier tubes (PMTs), and their performance has been improving rapidly since the 1990s. SiPMs suffer negligible ageing even under strong light exposure, so SiPM-based cameras can operate under high-moon conditions and their duty cycle is larger than that of PMT-based cameras. The design of the preamplifier for the WFCTA camera is described in this paper. Moreover, properties of the SiPMs are studied, such as their linearity at high numbers of photoelectrons. An analytical function is derived relating the number of fired cells to the total number of cells in the SiPM. We also compare the performance of SiPMs and PMTs under long light pulses of up to 3 µs. Furthermore, the additional non-linearity due to non-uniformities in the light distribution is also evaluated.

Keywords: SiPM · Dynamic range · Long duration pulse · LHAASO · WFCTA

1 Introduction
The Large High Altitude Air Shower Observatory (LHAASO) is a hybrid experiment designed for γ-ray astronomy and cosmic ray studies [1,2]. The Wide Field of View Cherenkov Telescope Array (WFCTA), one of its three main component detectors, will be operated in two observation modes. The Cherenkov mode requires a photosensor dynamic range from 10 to 32,000 pes, and the fluorescence mode requires that the gain of the sensor be stable for long-duration light pulses of up to 3 µs. SiPMs have developed rapidly since the 1990s; their gain
is around 10^6 at a bias voltage of less than 100 V. SiPM-based cameras can be operated with moonlight and achieve a larger duty cycle than PMT-based cameras. The First G-APD Cherenkov Telescope has been exploring the use of the SiPM technology [3], and the CTA project will use SiPMs on the single-mirror Small Size Telescopes (SST-1M) and the dual-mirror SSTs [4]. In this paper, the design of the preamplifier is illustrated, the test results are shown, and the additional non-linearities are simulated.

2 The SiPMs and Preamplifier


Figure 1(a) illustrates the preamplifier for the SiPM. The resistor R2 converts the output from a current to a voltage and influences the output pulse shape (Fig. 1(b)). The pulse width is about 50 ns when R2 = 3 Ω; when R2 is increased to 10 Ω, the pulse width increases to about 78 ns. A pulse width of 50 ns is suitable for the 50 MHz FADC of the WFCTA electronics system. The OPA846 preamplifier has a very high gain bandwidth and large-signal performance with very low input voltage noise, and it performs well at high gain, e.g. +10. The capacitor C1 keeps the operating voltage of the SiPM stable, especially during long light pulses. The parameters of the SiPM samples from Hamamatsu, FBK and SensL are listed in Table 1. The avalanche photodiode microcell (APD) size of the SiPM samples is 25 µm. The breakdown voltage of SiPMs is sensitive to the temperature: it varies by about 54 mV/°C for Hamamatsu's SiPM candidates, about 26 mV/°C for FBK's and about 21.5 mV/°C for SensL's. The fill factor of the FBK and SensL candidates is higher than Hamamatsu's, so their photon detection efficiency (PDE) is higher.

Table 1. Detailed information on the SiPMs studied.

Models                   | PDE        | Fill factor | Dark count rate | Cross talk | Gain (10^6)
S13361-5488 (Hamamatsu)  | 25%@400nm  | 47%         | 45 kHz/mm²      | 1%         | 0.70
FBK-25 (FBK)             | 38%@400nm  | 72%         | 80 kHz/mm²      | 15%        | 1.38
MicroJ-30020 (SensL)     | 33%@400nm  | 62%         | 80 kHz/mm²      | 5%         | 1.70

3 Performance of SiPMs
The response of the SiPM under uniform illumination is expressed by Eq. (1). The APD works in Geiger mode, which means saturation happens when more than one photon hits the same APD during the same readout window [5]. The expected number of photoelectrons (N_pe) can be extracted by inverting Eq. (1) and is expressed as Eq. (2).
    N_fired = N_cell (1 − e^(−PDE·N_ph/N_cell)) = N_cell (1 − e^(−N_pe/N_cell))    (1)

    N_pe = N_cell ln( 1 / (1 − N_fired/N_cell) )    (2)

Fig. 1. (a) The scheme of the preamplifier for the SiPM. (b) The pulses for different values of R2 under a fixed intensity of light. The amplitudes are normalized to 1.

where N_fired is the number of fired APDs, N_cell is the total number of APDs, N_ph is the number of photons hitting the SiPM, and PDE·N_ph is equal to N_pe.
As shown in Fig. 2(a), the measured non-linearities follow the expectation of Eq. (1) very well. The dynamic range of a SiPM is proportional to its total number of APDs. After correction with Eq. (2), the dynamic range is extended to 32,000 pes for the Hamamatsu and FBK SiPM samples. Because its N_cell is smaller, the dynamic range of the SensL SiPM sample reaches only about 6,000 pes after correction. The resolution of the SiPM is the same before and after correction, as shown in Fig. 2(b).
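A minimal numerical sketch of this correction, transcribing Eqs. (1) and (2) and using the cell count quoted for the 15 mm × 15 mm, 25 µm-cell device, is shown below.

import math

def fired_cells(n_pe, n_cell):
    """Expected number of fired cells for n_pe photoelectrons spread uniformly
    over a SiPM with n_cell cells, Eq. (1)."""
    return n_cell * (1.0 - math.exp(-n_pe / n_cell))

def corrected_pe(n_fired, n_cell):
    """Invert the saturation to recover the true photoelectron number, Eq. (2)."""
    return n_cell * math.log(1.0 / (1.0 - n_fired / n_cell))

n_cell = 360_000                          # 15 mm x 15 mm SiPM with 25 um cells
true_pe = 32_000
measured = fired_cells(true_pe, n_cell)   # ~30,600 fired cells, i.e. a few per cent saturation
recovered = corrected_pe(measured, n_cell)
print(measured, recovered)                # the correction recovers ~32,000 pe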
To investigate the performance of the SiPM under long-duration pulses, we compared its response with that of the PMT, which satisfies our requirements [6]. According to the test results illustrated in Fig. 2(c), the value of C1 influences the stability of the SiPM as the light pulse gets longer. With C1 = 0.1 µF the SiPM performs poorly, whereas with C1 = 1 µF the deviation of the SiPM gain is less than 2% from 20 ns to 3 µs.

4 Additional Non-linearity Due to Non-uniform Light Distribution
Equation (2) is based on the assumption that the distribution of photons on the surface of the SiPM is uniform. However, the photons are collected by spherical reflective mirrors and a light concentrator, which makes the distribution on the detector plane non-uniform, and this introduces an additional non-linearity. We investigated this situation with Monte-Carlo methods. The primary cosmic ray events are generated with the program CORSIKA-v7.4005 [7], and a simulation program has been developed for LHAASO-WFCTA that includes the ray-tracing of photons, the response of the SiPM, the electronics and the concentrator. Figure 2(d) shows the results of the simulation. The total number of cells of the simulated SiPM is 230,400. If the distribution of light on the surface is uniform, the output of the SiPM is corrected perfectly with Eq. (2) (see the open

Fig. 2. (a) Linearity of the Hamamatsu, SensL and FBK SiPMs, without correction, with correction, and compared with the theoretical lines. (b) Resolution of the Hamamatsu, SensL and FBK SiPMs; the solid line is the resolution of a Poisson distribution. (c) Stability of the gain for C1 = 1 µF and C1 = 0.1 µF. (d) Simulation of the additional non-linearity with the Monte-Carlo program of WFCTA, including the mirrors and the concentrator.

circle in Fig. 2(d)). If the distribution of light on the surface is non-uniform, some deviation remains after correction (see the black dots in Fig. 2(d)). The additional deviation caused by the non-uniform photon distribution is less than 2% at 32,000 pes.
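The origin of this residual deviation can be illustrated with a toy calculation, sketched below; it is not the CORSIKA-based WFCTA simulation, and the assumed intensity pattern is arbitrary. The light is split over a few regions of the SiPM, each region saturates according to Eq. (1), and the summed output is then corrected with the global Eq. (2).

import math

N_CELL = 230_400       # total number of cells of the simulated SiPM (from the text)

def fired(n_pe, n_cell):
    return n_cell * (1.0 - math.exp(-n_pe / n_cell))          # Eq. (1)

def corrected(n_fired, n_cell):
    return n_cell * math.log(1.0 / (1.0 - n_fired / n_cell))  # Eq. (2)

def response(total_pe, weights):
    """Distribute the light over equal-size regions with relative intensities
    `weights`, saturate each region separately and sum the fired cells."""
    cells_per_region = N_CELL / len(weights)
    total_w = sum(weights)
    return sum(fired(total_pe * w / total_w, cells_per_region) for w in weights)

true_pe = 32_000
uniform = corrected(response(true_pe, [1, 1, 1, 1]), N_CELL)
focused = corrected(response(true_pe, [4, 2, 1, 1]), N_CELL)   # assumed non-uniform pattern
print(uniform / true_pe, focused / true_pe)
# The uniform case is recovered exactly; the non-uniform one shows a few per cent deficit.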

5 Discussion and Conclusion

We have developed a preamplifier for the application of SiPMs in LHAASO-WFCTA. The dynamic range of the SiPMs follows the theoretical line very well and can be extended considerably after correction with Eq. (2). The additional deviation caused by the non-uniform photon distribution is less than 2% at 32,000 pes.

Acknowledgements. This work is supported in China by the Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, CAS. Projects No. 11475190 and No. 11675204 of NSFC also provide support to this study.

References
1. Zhen, C.: A future project at Tibet: the large high altitude air shower observatory (LHAASO). Chin. Phys. C 34(2), 249–252 (2010). https://doi.org/10.1088/1674-1137/34/2/018
2. He, H.: LHAASO Project: detector design and prototype. In: Proceedings of the
31st ICRC, pp. 2–5 (2009)
3. Anderhub, H., Backes, M., Biland, A., Boccone, V., Braun, I., Bretz, T., Zänglein,
M.: Design and operation of FACT-the first G-APD Cherenkov telescope. J.
Instrum. 8(6), P06008–P06008 (2013). https://doi.org/10.1088/1748-0221/8/06/
P06008
4. Heller, M., Schioppa Jr., E., Porcelli, A., et al.: An innovative silicon photomultiplier
digitizing camera for gamma-ray astronomy. Eur. Phys. J. C. 77, 47 (2017)
5. van Dam, H.T., Seifert, S., Vinke, R., Dendooven, P., Lohner, H., Beekman, F.J.,
Schaart, D.R.: A comprehensive model of the response of silicon photomultipli-
ers. IEEE Trans. Nucl. Sci. 57(4), 2254–2266 (2010). https://doi.org/10.1109/TNS.
2010.2053048
6. Ge, M., Zhang, L., Chen, Y., Cao, Z., Zhang, S., Wang, C., Bi, B.: Photomultiplier tube selection for the wide field of view Cherenkov/fluorescence telescope array of the Large High Altitude Air Shower Observatory. Nucl. Instrum. Methods Phys. Res. A 819, 175–181 (2016). https://doi.org/10.1016/j.nima.2016.02.093
7. Heck, D., Knapp, J., Capdevielle, J.N., Schatz, G., Thouw, T.: CORSIKA: a Monte
Carlo code to simulate extensive air showers. Forschungszentrum Karlsruhe FZKA
6019, 1–90 (1998). http://www.ikp.kit.edu/
A Comprehensive Analysis of Polarized
γ-ray Beam Data with the HARPO
Demonstrator

R. Yonamine1(B) , S. Amano2 , D. Attié1 , P. Baron1 , D. Baudin1 , D. Bernard3 ,


P. Bruel3 , D. Calvet1 , P. Colas1 , S. Daté4 , A. Delbart1 , M. Frotin3 ,
Y. Geerebaert3 , B. Giebels3 , D. Götz5 , P. Gros3 , S. Hashimoto2 , D. Horan3 ,
T. Kotaka2 , M. Louzir3 , Y. Minamiyama2 , S. Miyamoto2 , H. Ohkuma4 ,
P. Poilleux3 , I. Semeniouk3 , P. Sizun1 , A. Takemoto2 , M. Yamaguchi2 ,
and S. Wang3
1 IRFU, CEA Saclay, 91191 Gif-sur-Yvette, France
ryo.yonamine@cea.fr
2 LASTI, University of Hyōgo, 3-1-2 Koto, Kamigori-cho, Ako-gun, Hyōgo 678-1205, Japan
3 LLR, Ecole Polytechnique, CNRS/IN2P3, 91128 Palaiseau, France
4 JASRI, 1-1-1, Kouto, Sayo-cho, Sayo-gun, Hyōgo 679-5198, Japan
5 AIM, CEA/DSM-CNRS-Université Paris Diderot, IRFU/Service d'Astrophysique, CEA Saclay, 91191 Gif-sur-Yvette, France

Abstract. We investigate the feasibility of a gaseous TPC as a telescope and polarimeter for cosmic gamma-rays, focussing on the energy range from 1 to 100 MeV. Our beam results show that the angular resolution can be a factor of 2 better than that of the Fermi LAT. We also demonstrate polarimetry with high significance and an excellent dilution factor, for the first time above the pair production threshold and below 1 GeV.

Keywords: TPC · Polarimetry · Gamma-ray · Saturation

1 Introduction
HARPO [1] is a design concept for a gaseous TPC aiming at a high-precision telescope and polarimeter for cosmic γ-rays, especially in the energy range from the pair-production threshold up to the order of 1 GeV, where current γ-ray telescopes have a sensitivity drop (Fig. 1 in [2]) and where polarimetry becomes difficult due to multiple scattering (Fig. 4 in [1]). We present results from the beam data taken with HARPO at NewSUBARU [3] in Japan in 2014 (see [4] for a detailed version). In addition, a pre-amplifier saturation found in our analysis is reported.
M. Frotin—Now at GEPI, Observatoire de Paris, CNRS, Univ. Paris Diderot, Place
Jules Janssen, 92190 Meudon, France.
S. Wang—Now at INPAC and Department of Physics and Astronomy, Shanghai Jiao
Tong University, Shanghai Laboratory for Particle Physics and Cosmology, Shanghai
200240, China.

2 Experimental Setup
HARPO is a 30 cm cubic TPC equipped with two GEMs and a "bulk" Micromegas mesh [5], a readout pitch of 1 mm in two perpendicular directions, and a sampling frequency of 33 MHz provided by the AFTER electronics [6,7]. The point resolution is approximately 1 mm in x, y, and z.
We took data at photon energies from 1.7 MeV to 74 MeV; however, the subset from 4 MeV to 20 MeV is the main target of this paper. At energies below 4 MeV the beam repetition frequency was too high to distinguish one event from another, whereas at energies above 20 MeV we suffered from the pre-amplifier saturation described in Sect. 3, because the e+e− tracks were more likely to be perpendicular to the readout plane. More details about the detector, the data taking and the beam configurations can be found in [8].
To obtain a better understanding of the data taken, we have developed a simulation framework (Sect. 3 in [2], Sect. 5 in [4]). It also plays an important role in cancelling the systematic bias coming from the detector acceptance when measuring the polarization. Our simulation was first validated with cosmic rays and its parameters were calibrated with the beam test data.
We estimate the photon direction by taking the bisector of the e+/e− momentum directions in the pair production process. The polarization asymmetry A appears in Eq. (1) of [9], and it can therefore be extracted by measuring the distribution of the azimuthal angle φ := (φ+ + φ−)/2 (see Fig. 3 of [9] for the definitions of φ+ and φ−). A can be written as the product AQED × D, with AQED the theoretical polarization asymmetry and D the dilution factor (D = e^(−2σφ²), σφ being the azimuthal angle resolution) (Eq. (1) in [10]). A complete description of our analysis method can be found in Sects. 4 and 5 of [2].
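As a rough illustration of the extraction, the sketch below fits the standard cos 2φ modulation used in pair polarimetry to a histogram of the measured azimuthal angles; the binning and fit details are illustrative, and the actual HARPO procedure, which fits the ratio of polarized over unpolarized data, is described in [2].

import numpy as np
from scipy.optimize import curve_fit

def modulation(phi, norm, asym, phi0):
    # dN/dphi ~ 1 + A * P * cos(2 * (phi - phi0)); assuming a fully polarized beam
    # (P = 1), the fitted `asym` is the effective asymmetry A = A_QED * D.
    return norm * (1.0 + asym * np.cos(2.0 * (phi - phi0)))

def fit_asymmetry(phi_values, n_bins=36):
    counts, edges = np.histogram(phi_values, bins=n_bins, range=(0.0, 2.0 * np.pi))
    centers = 0.5 * (edges[:-1] + edges[1:])
    popt, _ = curve_fit(modulation, centers, counts,
                        p0=[counts.mean(), 0.1, 0.0],
                        sigma=np.sqrt(np.maximum(counts, 1.0)))
    return popt[1]

# In the real analysis the histogram of polarized data is divided by the one from
# unpolarized data (or simulation), which cancels the detector acceptance.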

3 Preamplifier Saturation
We observed two types of pre-amplifier saturation (Fig. 1), in which the response to the input charge becomes non-linear. The first type happens on a relatively short time scale and does not affect the following events. This effect is well reproduced in our simulation and can in principle be corrected. Its signature is the excess of entries at charges just below the "saturation edge", i.e. a dropping point close to the maximum charge. The second type shows up as a correlation between the saturation edge and the beam intensity (Fig. 2): at these intensities we expect no event pile-up in the sensitive volume, but there may be a pile-up-like effect in the pre-amplifier. This effect is unpredictable and cannot be corrected, because the data can be affected even by previous non-triggered events.
Finally, it should be noted that this pre-amplifier saturation is not a fundamental problem for future data taking, because tracks in space conditions are isotropic and are hardly ever aligned with a single readout strip. A backup solution is to reduce the gain by a factor of ∼2, which is expected to
Fig. 1. Normalized charge distribution (charge per readout strip) for three different runs with the same photon energy (52.1 MeV) and different Micromegas currents (40, 70, 100 nA), illustrating the short and long time scale saturation effects.

Fig. 2. Saturation edges, estimated by a fit for several runs, versus the current on the Micromegas, which depends on the beam intensity. A clear dependency can be seen.

be achieved without much affecting the performance of the event reconstruction used in this analysis.

4 Results, Discussion and Conclusion

Regarding the telescope performance, the angular resolutions (68% containment angle of the residual distribution of the photon direction) as a function of the photon energy are summarized in Fig. 3. HARPO already achieves an angular resolution better than that of the Fermi-LAT by a factor of two. The angular resolution in the high-energy region (∼50 MeV) is expected to improve with a larger detector. The current reconstruction (Sect. 4 in [2]) is based only on the information around the photon conversion vertex, which is well suited to low energies where the multiple-scattering effect is non-negligible. At high energies, on the other hand, the trajectories are nearly straight and the opening angle becomes smaller, so we would gain by also using hit points that are distant from the vertex.
To demonstrate the feasibility of polarimetry, we extract A for different energies and plot them in Fig. 4. Note that the key to minimizing the systematic bias is to divide the polarized (P = 1) by the unpolarized (P = 0) data; the ratio is denoted by, e.g., "data/sim" for the ratio of polarized measurement data over unpolarized simulation data. The remaining issue is the discrepancy between "sim/sim" (≈"sim/data") and "data/sim" (≈"data/data") in Fig. 4. Given the consistency between "data/sim" and "data/data", and between "sim/sim" and "sim/data", our simulation agrees with the measured data at least for P = 0, while there must be unknown factors not implemented in our simulation for P = 1. We emphasize, however, that we obtain non-zero values of A, which shows the feasibility of polarimetry from 4 MeV up to 20 MeV.
We conclude that a gaseous TPC can be a good candidate both as a telescope and as a polarimeter for low-energy (1 MeV ∼ 100 MeV) γ-ray astronomy.
Fig. 3. Angular resolution σθ,68% as a function of the photon energy, for data and simulation, compared with the Fermi-LAT (front and back). Data and simulation are consistent, with good performance up to ∼50 MeV.

Fig. 4. Polarization asymmetry A as a function of the photon energy, for the ratios sim(P=1)/data(P=0), sim(P=1)/sim(P=0), data(P=1)/data(P=0) and data(P=1)/sim(P=0). The non-zero A demonstrates the polarimetry.

Acknowledgement. This work was funded by the French National Research Agency
(ANR-13-BS05-0002) and was performed by using NewSUBARU-GACKO (Gamma
Collaboration Hutch of Konan University).

References
1. Bernard, D., et al.: HARPO: a TPC as a gamma-ray telescope and polarimeter.
Proc. SPIE Int. Soc. Opt. Eng. 9144, 91441M (2014)
2. Gros, P., et al.: First measurement of the polarisation asymmetry of a gamma-ray
beam between 1.7 to 74 MeV with the HARPO TPC. Proc. SPIE Int. Soc. Opt.
Eng. 9905, 99052R (2016)
3. Horikawa, K., et al.: Measurements for the energy and flux of laser Compton scat-
tering. Nucl. Instrum. Meth. A618, 209–215 (2010)
4. Gros, P., et al.: Performance measurement of HARPO: a Time Projection Chamber
as a gamma-ray telescope and polarimeter (2017). arXiv:1706.06483
5. Gros, P.: HARPO - TPC for High Energy Astrophysics and Polarimetry from the
MeV to the GeV. In: PoS. TIPP 2014, p. 133 (2014)
6. Baron, P., et al.: AFTER, an ASIC for the readout of the large T2K time projection
chambers. IEEE Trans. Nucl. Sci. NS 55, 1744–1752 (2008)
7. Abgrall, N., et al.: Time projection chambers for the T2K near detectors. Nucl.
Instrum. Meth. A637, 25–46 (2011)
8. Delbart, A.: HARPO, TPC as a gamma telescope and polarimeter: First measure-
ment in a polarised photon beam between 1.7 and 74 MeV. In: PoS. ICRC 2015,
1016 (2015)
9. Bernard, D.: Polarimetry of cosmic gamma-ray sources above e+ e− pair creation
threshold. Nucl. Instrum. Meth. A729, 765–780 (2013)
10. Mattox, J.R., et al.: Astrophys. J. 363 (1990)
Timing Calibration
of the LHAASO-KM2A Electromagnetic
Particle Detectors Using Charged
Particles Within the Extensive
Air Showers

Hongkui Lv(B) , Huihai He, Xiangdong Sheng, and Jia Liu

Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
lvhk@ihep.ac.cn

Abstract. The Large High Altitude Air Shower Observatory (LHAASO) is a new generation extensive air shower (EAS) experiment focusing on high energy gamma ray astronomy and cosmic ray physics. In LHAASO, 5242 electromagnetic particle detectors (EDs) and 1171 muon detectors (MDs), covering an area of 1.3 km², are designed to measure the number density and arrival time of the EAS secondary particles. The remoteness of the site and the large number of detectors demand a robust, automatic calibration method. In this work, a self-calibration method which uses the charged particles within the EASs as the calibration beam is developed. The method is implemented in the Monte Carlo simulation and initially applied in a prototype array experiment, from which the precision and efficiency are estimated.

1 Introduction
LHAASO is a new generation EAS experiment located at Haizi mountain (4410 m, Sichuan province, China). The experiment aims to explore gamma-ray sources with a sensitivity of 1% Icrab at energies above 50 TeV [1,2]. In its 1.3 km² array (KM2A) (Fig. 1), 5242 electromagnetic particle detectors (EDs) are designed to detect the arrival times and number densities of the EAS charged particles produced by the primary particles, from which the primary direction and energy can be reconstructed [3].
A reliable reconstruction of the primary gamma ray direction requires an accurate determination of the arrival times of the EAS particles on each ED. One of the critical requirements is keeping all the detectors time-synchronized. Hardware time calibration is usually performed by manually moving a probe detector over all the detector units as a reference; however, this becomes infeasible for an EAS array with an area of square-kilometer scale and numerous detectors. This paper presents an automatic detector time self-calibration technique which relies on the measurement of the charged particles within the EASs, focusing on its applicability to the upcoming LHAASO-KM2A.

2 Timing Calibration
The ED is a type of scintillation detector with an active area of 1 m². It consists of four plastic scintillation tiles of 100 cm × 25 cm × 2.5 cm each (Fig. 1), several wavelength-shifting (WLS) fibers and a 1.5 in. photomultiplier tube (PMT). The ED front-end electronics (FEE) is a very compact device deployed just behind the PMT of each ED [4]. All FEE time-to-digital converters (TDCs) are synchronized at the sub-nanosecond level via an advanced timing system named White Rabbit [5].
The main uncertainty on the measured arrival time of the EAS particles comes from the spread of the time offsets among the EDs. For each ED, the time offset arises from the time elapsed between the incidence of an EAS particle on the scintillation tiles and the time stamping of the associated signal in the FEE; it is the cumulative effect of the photon transmission time in the WLS fibers and the electron transit time in the PMT. The relative time offset differences must be calibrated with a precision better than 1 ns and periodically corrected in the data, to guarantee the optimal angular resolution and ensure the pointing accuracy.
Fig. 1. Left: the layout of the LHAASO experiment (WCDA, WFCTA, and the KM2A array of EDs and MDs). Right: schematic of an ED, illustrating the four scintillation tiles coupled with wavelength-shifting fibers.

2.1 Calibration Principle


The secondary particles within the EAS front can provide a common standard timing signal to calibrate the EDs, since the EAS front approximately sustains a conical shape (Fig. 2). The relative arrival times of the secondary particles within the EAS front can be accurately determined if the shower front shape and the primary direction are well known. For each EAS event, the time offset Δti of the i-th ED, located at position coordinates (xi, yi, zi), is determined as follows:

    Δti = ti − ti^real = ti − [ (l − l̄)·xi/c + (m − m̄)·yi/c + √(1 − (l − l̄)² − (m − m̄)²)·zi/c + α·ri + t0 ]    (1)

where ti and ti^real are the measured arrival time of the EAS particles for the i-th ED and the expected "real" one, respectively; l and m are two components of the
reconstructed direction vector1. The reconstructed direction is corrected using the correction parameters (l̄, m̄) of the Characteristic Plane, as presented in [6], which correspond to the mean values of the direction cosines for a set of EAS events. α is the conicity coefficient describing the EAS front, ri is the transverse distance of the i-th ED from the shower core, c is the speed of light, and t0 is a fitting parameter of the direction reconstruction which corresponds to the arrival time of the EAS plane at the coordinates (0,0,0).
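A direct transcription of Eq. (1) is sketched below for a single ED and a single event; the detector position, the fitted shower parameters and the measured hit time are taken as inputs.

import math

C = 0.2998  # speed of light in m/ns

def time_offset(x_i, y_i, z_i, r_i, t_i, l, m, l_mean, m_mean, alpha, t0):
    """Delta t_i = t_i - t_i^real for one ED, following Eq. (1).

    (l, m): reconstructed direction cosines of the shower;
    (l_mean, m_mean): Characteristic-Plane correction (mean direction cosines);
    alpha: conicity coefficient of the shower front; r_i: transverse distance
    of the ED from the shower core; t0: fitted arrival time at the origin.
    """
    lc = l - l_mean
    mc = m - m_mean
    n = math.sqrt(max(0.0, 1.0 - lc * lc - mc * mc))
    t_real = (lc * x_i + mc * y_i + n * z_i) / C + alpha * r_i + t0
    return t_i - t_real

# The calibration accumulates these offsets over many EAS events and takes the
# mean Delta t_i of each ED as its timing correction.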

Fig. 2. Schematic of a shower front and of the Characteristic Plane (CP) introduced by the detector time offsets, showing the shower core location, the true and reconstructed air shower directions and the direction of the CP.

Fig. 3. Comparison of the time offsets of 4883 EDs determined with the self-calibration procedure (measured time offsets) with the preassigned values (preset time offsets).

2.2 Monte Carlo Studies

To verify the applicability of this method and estimate its precision, a complete timing calibration was performed using simulated showers before the actual experiment. About 2 × 10^6 EAS events were generated with the CORSIKA software and used for this calibration. A preset time offset for each ED, ranging from 1.5 ns to 8.2 ns, was artificially added to the simulated detector response to distort the detector timing.
A clear correlation is observed between the preset detector time offsets and the time offsets measured with this calibration method (Fig. 3). A small bias is found for EDs located at the edge of the array, because the shower front is more like a curved surface than a conical one; nevertheless, this does not adversely affect the overall calibration. The differences between the measured time offsets and the preset ones indicate a precision of 0.46 ns. To achieve this precision, approximately 0.5 h of exposure time is needed to collect enough event statistics.

1 l = sin θ cos φ, m = sin θ sin φ (θ and φ are the reconstructed zenith and azimuth angles, respectively).

2.3 Experimental Calibration

The self-calibration method has also been applied to a ∼1% scale KM2A prototype array experiment. The prototype array is composed of 39 EDs covering an area of approximately 110 × 50 m².
To cross-check the time offsets of the 39 EDs determined with the self-calibration method, a muon telescope system (Fig. 4) made of three EDs was set up. When background muons pass through the three EDs simultaneously, the difference in response time between the detector under calibration and the reference one can be determined. An obvious correlation is also observed between the detector time offsets measured with the CP method and those determined by the muon telescope. The root mean square of the differences between the detector time offsets measured with the two calibration techniques is 0.43 ns (Fig. 5).

Fig. 4. A muon telescope system composed of three EDs.

Fig. 5. Distribution of the differences between the time offsets obtained from the two independent measurements (39 entries, mean −0.33 ns, RMS 0.43 ns).

3 Conclusion

Preliminary results show that EAS events are very useful for the timing calibration of a large EAS array covering an area of square-kilometer scale. Detector time offsets can be determined at the sub-nanosecond level on hour timescales, which is adequate to meet the KM2A requirements.

References
1. Cao, Z.: Nucl. Instr. Meth. A 742, 95–98 (2014)
2. Cui, S., et al.: Astropart. Phys. 54, 86–92 (2014)
3. Zhang, Z., et al.: Nucl. Instr. Meth. A 845, 429–433 (2017)
4. Liu, X., et al.: Chin. Phys. C 40(7), 076101 (2016)
5. Du, Q., et al.: Nucl. Instr. Meth. A 732, 488–492 (2013)
6. He, H.H., et al.: Astropart. Phys. 27, 528–532 (2007)
MoBiKID - Kinetic Inductance Detectors
for Upcoming B-Mode Satellite Missions

A. Cruciani1(&), L. Cardani1, N. Casali1, M. G. Castellano2,


I. Colantoni2, A. Coppolecchia1,3, P. de Bernardis1,3, M. Martinez1,3,
S. Masi1,3, and M. Vignati1
1 INFN, Sezione di Roma, P.le Aldo Moro 2, 00185 Rome, Italy
angelo.cruciani@roma1.infn.it
2 IFN/CNR, Via Cineto Romano, 42, 00156 Rome, Italy
3 Sapienza Università di Roma, P.le Aldo Moro 2, 00185 Rome, Italy

Abstract. Our understanding of the dawn of the universe has grown tremendously in recent years, pointing to the existence of cosmic inflation. The primordial B-mode polarization of the Cosmic Microwave Background (CMB) represents a unique probe to confirm this hypothesis. The detection of such small perturbations of the CMB is a challenge that will be faced in the near future by a new dedicated satellite mission.
MoBiKID is a project, supported by INFN, to develop an array of Kinetic Inductance Detectors able to match the requirements of a next-generation experiment. The detectors will feature a Noise Equivalent Power better than 5 aW/Hz^0.5 and will be designed to minimize the background induced by cosmic rays, which could be the main limit to the sensitivity.
In this paper we present the current status of the detector development and the next planned steps to reach the goal of this project.

Keywords: CMB polarization · Kinetic Inductance Detectors · Cosmic Rays

1 Introduction

How did our universe begin? This is the main question cosmologists will try to answer in the next 20 years. During the last two decades the study of the anisotropies of the Cosmic Microwave Background (CMB) was the main driver of the evidence that the early universe underwent a period of accelerated expansion, the so-called cosmic inflation [1]. The model, however, still needs confirmation.
The detection of primordial B-modes in the CMB polarization anisotropies would be an independent and strong proof of the inflation scenario. Many current and future CMB experiments from the ground and from balloons are devoted to the search for B-modes. These experiments will be able to contribute significantly, but it is highly likely that a space mission will be needed to obtain the ultimate proof and a precise measurement of the B-modes.
The scientific interest in this field is very high. Recently the European CMB community answered the call of ESA for a medium-size mission, proposing CORE+, an instrument devoted to the measurement of B-modes. Despite the very


high scientific interest, the proposal was not selected: the maturity of some technologies (e.g. the detectors and the dilution refrigerator) was judged insufficient. The technological readiness for a new space mission therefore has to be demonstrated much more strongly to increase the chances of success of future proposals. In this context the main topic to be addressed is the choice and demonstration of the detector technology.

2 State of the Art

The detectors needed for a future space mission devoted to B-modes should meet these requirements:
– Sensitivity: the intrinsic noise of CMB detectors between 100 and 200 GHz should be below the photon noise from the 2.7 K cosmological blackbody radiation. This means a Noise Equivalent Power (NEP) in the range 5–8 × 10^−18 W Hz^−0.5.
– Number of detectors: the future space mission will need about 2000 pixels, divided into arrays of 100–300 pixels, depending on the operating frequency band.
– Radiation sensitivity: Cosmic Rays (CRs) can interact with the detectors, causing glitches that imply a data loss. The sensors are deposited on a substrate (typically 300 µm silicon) or a membrane (2 µm silicon nitride). CRs ionize the support, and this energy deposit (about 200 keV) generates ballistic phonons able to cause both thermal and athermal signals in the detector. CRs at the second Sun-Earth Lagrange point (L2), the ideal operation position for a future satellite, consist mainly of galactic protons with the peak of the energy distribution at about 200 MeV and a rate of 5 cm^−2 s^−1. The Planck mission faced an unexpectedly high rate of CRs on the detectors (about 2/s) [2]. This had two main consequences: a net data loss of 15% and some concerns about the Gaussianity of the noise due to the possible presence of small pulses immersed in the noise.
The NTD bolometers used in the Planck mission had very good sensitivity, but their implementation in larger arrays is not possible due to multiplexing issues. Two detector technologies are considered more promising for a future space implementation: Transition Edge Sensors (TESs) [3] and Kinetic Inductance Detectors (KIDs) [4].
TESs are bolometers based on a superconducting thermistor. Their low impedance allows multiplexing, using readout electronics with complex cold amplification stages based on Superconducting Quantum Interference Devices (SQUIDs). These devices have demonstrated very high sensitivity in large arrays; however, the complexity of their fabrication and of the readout electronics strongly limits their suitability for space applications.
KIDs are a relatively young technology, invented at Caltech in 2003. They work thanks to a peculiar feature of superconductors, the kinetic inductance. In a superconductor, Cooper pairs move without scattering on the lattice, which causes the well-known zero DC resistance. The pairs, however, show a complex impedance: they react to an applied RF field by changing their motion with an inertia due to the stored kinetic energy; this inertia corresponds to an inductance, the so-called kinetic inductance. The kinetic inductance depends on the density of Cooper pairs and can be
therefore be modified by an energy release from photons or phonons able to break the pairs into free electrons (quasi-particles). Variations of the kinetic inductance can be measured by building a high quality factor resonator (Q > 10^3) out of the superconductor and monitoring the transfer function of the resonator itself.
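The readout principle can be sketched with the standard model of a notch-type resonator coupled to a feedline, shown below; the resonant frequency and the coupling quality factor are illustrative assumptions, and a change of kinetic inductance enters as a small shift of the resonant frequency.

import numpy as np

def s21(f, f0, q_total, q_coupling):
    """Standard forward transmission of a notch-type resonator on a feedline."""
    x = (f - f0) / f0
    return 1.0 - (q_total / q_coupling) / (1.0 + 2j * q_total * x)

f = np.linspace(1.9998e9, 2.0002e9, 4001)   # probe frequencies [Hz], illustrative
f0 = 2.0e9                                   # resonant frequency, illustrative
q_total, q_coupling = 7e4, 1.4e5             # Q ~ 7e4 as measured for the best array; Qc assumed

baseline = np.abs(s21(f, f0, q_total, q_coupling))
shifted = np.abs(s21(f, f0 * (1.0 - 1e-6), q_total, q_coupling))  # tiny fractional shift of f0
# Monitoring the complex transmission at a fixed tone near f0 converts the frequency
# shift (i.e. the kinetic-inductance change) into the detector signal.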
The main advantage of KIDs is that they can be easily multiplexed: many detectors (up to 300) can be read out with the same RF electronics by slightly changing the resonant frequencies.
KIDs have been successfully operated on a demonstrator camera, called NIKA, since 2010 [5], with NEP ∼5 × 10^−17 W/Hz^0.5. The background at a ground-based telescope is around 20 pW, at least 10 times more than what is expected in space, so the NIKA NEP does not represent the real limit of KID detectors: measurements in lower background environments are needed.
The interaction of KIDs with CRs has been observed [6,7] and found to be relevant: KIDs are sensitive to athermal phonons. In contrast to the Planck bolometers, which are divided into single pixels, the resonators of a KID array are usually deposited on a single 300 µm thick silicon substrate, allowing phonon propagation between all the pixels. A deep study of CR interactions and the identification of reliable solutions are needed to strengthen the technology readiness level; studies are in progress [8].

3 MoBiKID

MoBiKID is a project, funded by the Istituto Nazionale di Fisica Nucleare (INFN), to develop an array of about 150 Aluminum KIDs operating at 150 GHz, able to meet all the requirements of the future CMB mission. In particular, the project is focused on demonstrating the sensitivity needed for a space mission (NEP < 5 × 10^−18 W/Hz^0.5) and on characterizing and minimizing the effects of CRs on the devices.
At this stage we have realized the first detector arrays (Fig. 1 top left), using a device geometry called Lumped Element KID (LEKID), first proposed by Doyle et al. [9].
The array of 132 resonators is made of a 25 nm aluminum film deposited on a high resistivity silicon substrate, patterned using electron beam lithography. The detectors were cooled down to a temperature of 10 mK by means of a dry dilution refrigerator. The signal from the sample was amplified using a cryogenic low noise amplifier with Tn = 5 K.
The best array has an average Q-value of 7 × 10^4 with 20% dead pixels (Fig. 1 bottom left). The dispersion of the Q value is about 2 × 10^4, due to electrical cross-talk. A major improvement should be obtained by adding bond wires along the feedline to connect the different ground planes.
The detectors were illuminated using a simulator of a low background environment (Fig. 1 right), composed of a Winston cone and an IR bolometer, cooled down to 800 mK. The system generates a plane wave able to illuminate the whole array. The temperature can be varied between 0.8 K and 5 K. The typical thermal time constant of the bolometer is a few ms.

Fig. 1. Top left, 132 pixel array installed in its holder. Bottom left, typical transmission in
frequency of an array. Right, Background simulator coupled with the array.

The detectors showed an average NEP of about 2 × 10^−17 W/Hz^0.5. The performance is limited by a residual TLS noise, which is currently under investigation.
During the next months we will study how to remove the residual noise. At the same time, the detector performance is already more than sufficient to test the sensitivity to Cosmic Rays. The detectors will be exposed to CRs, but also to a 57Co source and to a fast 400 nm LED coupled to an optical fiber. The output of these tests will be essential to identify efficient solutions to diminish the array's sensitivity to CRs.

References
1. Guth, A.H.: PRD 23, 347 (1981)
2. Catalano, A., et al.: A&A 569, A88 (2014)
3. Irwin, K., et al.: APL 66, 1998 (1995)
4. Day, P., et al.: Nature 425, 817–821 (2003)
5. Monfardini, A., et al.: ApJS 194, 24 (2011)
6. Swenson, L., et al.: APL 96, 263511 (2010)
7. Cruciani, A., et al.: JLTP 167, 311 (2012)
8. Catalano, A., et al.: A&A 592, A26 (2016)
9. Doyle, S., et al.: JLTP 151, 530 (2008)
Backend Readout Structures
and Embedded Systems
The Detector Control System Safety
Interlocks of the Diamond Beam Monitor

Grygorii Sokhrannyi(B)
on behalf of ATLAS DBM collaboration

Institute "Jozef Stefan", 1000 Ljubljana, Slovenia
grygorii.sokhrannyi@ijs.si

Abstract. The Diamond Beam Monitor (DBM) is one of the ATLAS sub-detectors; it is designed to provide luminosity measurements and is part of the Pixel Detector. In the ATLAS experiment, the Detector Control System (DCS) is used to oversee the hardware conditions and ensures safe, correct and efficient operation of the experiment. The safety interlocks implemented in the DCS are of major importance for the DBM operation, as they provide real-time processing of the hardware operational parameters and an immediate reaction to hardware state and status changes. The safety interlocks developed and enhanced during two years of DBM operation are presented in some detail.

Keywords: DCS · ATLAS DBM · Siemens WinCC · SCADA · FSM

1 Diamond Beam Monitor

The DBM is a charged particle detector able to measure the luminosity with an event-based (or bunch-by-bunch) method. This provides not only the total integrated luminosity, but also the instantaneous luminosity and the bunch position. The DBM is mounted as a sub-detector in the ATLAS experiment and is placed on the support structure of the Pixel Detector.
ATLAS is a cylinder-shaped general purpose detector with a forward-backward symmetry with respect to the interaction point. The Inner Detector is one of the main ATLAS sub-detectors and consists of three different sensor systems: the Transition Radiation Tracker, the Semiconductor Tracker and the Pixel Detector. All three systems are immersed in the high-radiation forward region and in a magnetic field parallel to the beam. The DBM shares the Pixel Detector support structure in the Inner Detector. Installed very close to the interaction point, the DBM is exposed to a large amount of radiation from the proton-proton collisions at a centre-of-mass energy of up to 14 TeV. The DBM is placed 90 cm away from the interaction point and at a radial distance of 4 cm from the beam pipe. A schematic view of the Inner Detector is shown in Fig. 1.


Fig. 1. The schematic view of the Inner Detector.

The DBM is mounted on the Pixel Detector support structure and is a part of the Inner Detector (see Fig. 2). It consists of four telescopes on each side of the interaction point: three of them are diamond telescopes and the remaining one is silicon. Each telescope has 3 diamond or silicon sensors read out by FE-I4 chips.
The FE-I4 used by the DBM is a pixel readout chip with a pixel size of 250 µm × 50 µm; it processes data from 26880 channels with a data link speed of 160 Mbit/s. Its normal operating temperature in ATLAS is between −5 °C and 10 °C. All DBM telescopes are tilted to face the interaction point, covering the pseudorapidity range from 3.2 to 3.5.

Fig. 2. Picture of the DBM telescopes on “A-side” of the ATLAS detector.

The readout and configuration connection of the FE-I4 is established through the optoboard, which serves as an optical-electrical interface and sends the signals to the Back Of Crate (BOC) and Read-Out Driver (ROD) cards. The configuration scheme is similar to that of the ATLAS IBL, which is a sub-detector of the Pixel Detector [1].
To ensure that the operating temperature stays in the normal working range, active CO2 cooling is provided by the IBL [2].
Because the DBM is mounted in the high-radiation forward region, operates in the 7 T magnetic field and requires permanent cooling, safe and reliable operation has the highest priority. For this reason a set of safety interlocks has been implemented in the DCS to provide real-time processing of the hardware operational parameters and an immediate reaction to hardware danger.

2 DCS Safety Interlocks

The Detector Control System (DCS) is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of alarms and the archiving of operational data [3]. Along with the set of commands used for the detector operation, a list of software interlocks is implemented in the DBM DCS, which reacts to hardware parameter changes in an automated way.
Figure 3 shows the general view of the ATLAS DBM FSM which is currently implemented and used. The DBM FSM contains the commands, the state and status definitions and all of their safety interlocks. All software safety checks, being the most crucial part of normal DBM operation, are duplicated in the WatchDog script, which, like the FSM, is part of the DCS. A communication check based on a heartbeat is established between the FSM and the WatchDog: if one of the systems stops working, the detector operator sees an alarm immediately, and if there is no reaction within 10 min, all hardware is switched off.
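A minimal sketch of this heartbeat cross-check is given below; the class, the polling period and the callback names are assumptions, as the real implementation lives in the WinCC-based DCS and the WatchDog script.

import time

ALARM_AFTER = 5.0            # seconds of silence before the operator alarm (assumed value)
SWITCH_OFF_AFTER = 10 * 60   # no reaction within 10 minutes: hardware is switched off

class HeartbeatMonitor:
    """Run by one partner (FSM or WatchDog) to supervise the other."""

    def __init__(self, raise_alarm, switch_off_hardware):
        self.last_beat = time.monotonic()
        self.raise_alarm = raise_alarm
        self.switch_off_hardware = switch_off_hardware
        self.alarm_raised = False

    def beat(self):
        # Called whenever the partner process publishes its heartbeat.
        self.last_beat = time.monotonic()
        self.alarm_raised = False

    def poll(self):
        # Called periodically by the surviving process.
        silent_for = time.monotonic() - self.last_beat
        if silent_for > ALARM_AFTER and not self.alarm_raised:
            self.raise_alarm()              # the operator sees the alarm immediately
            self.alarm_raised = True
        if silent_for > SWITCH_OFF_AFTER:
            self.switch_off_hardware()      # safe state after 10 minutes without reaction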

Fig. 3. Picture of the DBM finite state machine “A-side”.



In addition to monitoring and analyzing the main telescope parameters such as Temperature, Low Voltage (LV) and High Voltage (HV), around 90 different channels, the reading of the optoboard and BOC channels was also implemented in the safety interlocks. This is done to ensure that the optical data transfer between the DBM modules and the BOC works properly. Losing the BOC laser or its power is one of the main concerns, as it ends in a loss of the configuration on the DBM telescopes. In that case the telescopes suffer low-voltage current fluctuations (see Fig. 4). Because the DBM operates in the magnetic field, each change of the LV current generates an oscillation of the wires; as a result, the telescopes could lose wire-bond connections [4].

Fig. 4. The LV current fluctuations of an FE-I4 telescope which has lost its configuration.

3 Conclusions

The DBM has been designed to complement the existing luminosity detectors in the ATLAS experiment. It constantly suffers from radiation, high temperatures and the magnetic field, as it is mounted very close to the interaction point, which is why safe hardware operation has high priority. For this reason two different procedures, the FSM and the WatchDog, have been implemented to provide real-time processing of the hardware operational parameters and an immediate reaction to hardware danger. They constantly monitor around 90 different hardware channels and ensure safe, correct and efficient operation of the DBM.

References
1. Polini, A., et al.: Design of the ATLAS IBL readout system. Phys. Procedia 37,
1948–1955 (2012)
2. Verlaat, B., et al.: The ATLAS IBL CO2 cooling system. J. Instrum. 12 (2017)
3. Kersten, S., et al.: First experiences with the ATLAS pixel detector control sys-
tem at the combined test beam 2004. Nucl. Instrum. Meth. A 565, 97–101 (2006).
arXiv:physics/0510262
4. Feito, D.A., Honma, A., Mandelli, B.: Studies of IBL wire bonds operation in a
ATLAS-like magnetic field. IEEE, October 2016. https://doi.org/10.1109/NSSMIC.
2015.7581879
Development of Slow Control System
for the Belle II ARICH Counter

M. Yonenaga1(B) , I. Adachi2,3 , R. Dolenec4 , K. Hataya1 , H. Kakuno1 ,


H. Kawai5 , H. Kindo3 , T. Konno2 , S. Korpar6,7 , P. Križan4,7 , T. Kumita1 ,
M. Machida8 , M. Mrvar7 , S. Nishida2,3 , K. Noguchi1 , K. Ogawa9 , S. Ogawa10 ,
R. Pestotnik7 , L. Šantelj2 , T. Sumiyoshi1 , M. Tabata5 , M. Yoshizawa9 ,
and Y. Yusa9
1 Tokyo Metropolitan University, Hachioji, Japan
yonenaga@hepmail.phys.se.tmu.ac.jp
2 High Energy Accelerator Research Organization (KEK), Tsukuba, Japan
3 SOKENDAI (The Graduate University of Advanced Science), Tsukuba, Japan
4 University of Ljubljana, Ljubljana, Slovenia
5 Chiba University, Chiba, Japan
6 University of Maribor, Maribor, Slovenia
7 Jožef Stefan Institute, Ljubljana, Slovenia
8 Tokyo University of Science, Noda, Japan
9 Niigata University, Niigata, Japan
10 Toho University, Funabashi, Japan

Abstract. A slow control system for the Aerogel Ring Imaging Cherenkov (ARICH) counter in the Belle II experiment was newly developed based on the development framework of the Belle II DAQ software. The ARICH detects Cherenkov photons using Hybrid Avalanche Photo Detectors (HAPDs). Each HAPD has 144 pixels to be read out and requires 6 power supply (PS) channels; therefore a total of 2520 PS channels and 60480 pixels have to be configured and controlled. Graphical User Interfaces (GUIs) are also implemented to ease the detector operation. The slow control system was used in an integration test with cosmic rays and we confirmed that it works in practical operation.

Keywords: Cherenkov detector · Data acquisition · Particle identification · Slow control · High voltage supply

1 Introduction
The Aerogel Ring Imaging Cherenkov (ARICH) counter is a particle identification device to discriminate between charged pions and kaons [1,2], based on the angular distribution of the Cherenkov photons emitted in the aerogel tiles [3,4]. The ARICH counter is required to separate charged pions and kaons up to 3.5 GeV with 4 σ significance in the endcap region of the Belle II detector.
A total of 420 Hybrid Avalanche Photo Detectors (HAPDs) [5] are used in the ARICH counter to detect the emitted photons, and the management of the power supplies and readout electronics of the HAPDs is critical for the operation of the ARICH.

The HAPD employs two amplification mechanisms, bombardment gain and avalanche gain. The bombardment gain is about 1500 and the avalanche gain is about 40, so the total gain of the HAPD is around 60000. Three kinds of power supply inputs are needed to drive an HAPD module: a negative high voltage of 7–8 kV for photo-electron acceleration (HV ×1), reverse bias voltages (Bias ×4, ∼350 V), and a guard voltage (Guard ×1, 175 V). The power supply control system for the ARICH counter is therefore developed to scale up to 2520 input channels in total.
In addition, each HAPD has 144 channels to be read out via the front-end electronics and the Belle2Link, the common readout scheme of the Belle II data acquisition (DAQ) system [6]. An ARICH readout control is developed to manage the readout of the ARICH counter by configuring and monitoring the readout electronics.
Both the power supply and readout control systems must have interfaces to the network, to a database containing the configurations, and to a graphical user interface. The ARICH slow control is developed with the common frameworks of the Belle II DAQ software.

2 Slow Control System

The power supply control system is implemented as a network-based system, so the power supplies for the ARICH system are required to have a network connection. We use the SY4527 power supply crate manufactured by CAEN [7], which has an Ethernet connection and communicates via TCP/IP. A1590 modules, which can supply −9000 V to 16 channels, take care of the high voltage, while A7042A modules, which can supply +500 V to 48 channels, take care of the guard and bias voltages. The ARICH therefore makes use of 27 A1590 modules and 45 A7042A modules installed in 7 crates. The power supply control system is implemented using the Belle II DAQ software library so that it works with the other subsystems of the Belle II DAQ, such as the solenoid magnet, the SuperKEKB accelerator and the run control system [8]. We tested a one-sixth scale version of the ARICH power supply system for two weeks without any problems, and conclude that the control system is usable for the ARICH counter. In addition, extra functions are implemented to protect the HAPDs in the magnetic field. Ramp-up and ramp-down procedures are pre-defined: the three types of voltages, guard, HV and bias, are turned on and off in a fixed order, and the system is checked by the software during the process. When a high-voltage trip occurs, the voltages are ramped down immediately in the pre-defined order. These functions were tested with a minimal setup using two HV modules and 3 guard-bias modules.
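The sequencing logic can be sketched as below; the particular order of the three voltage types is an assumption (the text only states that a fixed order is used), and the switching and checking functions stand in for the actual power-supply channel calls.

RAMP_UP_ORDER = ["guard", "bias", "hv"]           # assumed ordering of the three voltage types
RAMP_DOWN_ORDER = list(reversed(RAMP_UP_ORDER))

def ramp_down(channels, switch_off):
    """Ramp down all voltages of one HAPD immediately, in the pre-defined order."""
    for vtype in RAMP_DOWN_ORDER:
        switch_off(channels[vtype])

def ramp_up(channels, switch_on, switch_off, check_ok):
    """Turn on guard, bias and HV of one HAPD in order, verifying each step;
    any failure (as for a later HV trip) triggers an immediate ramp down."""
    for vtype in RAMP_UP_ORDER:
        switch_on(channels[vtype])
        if not check_ok(channels[vtype]):
            ramp_down(channels, switch_off)
            return False
    return True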
The readout electronics of the HAPD consists of two parts: front-end boards (FEBs) and merger boards (MBs). The FEB is directly connected to each HAPD and digitizes the hit patterns of the photons. The FEB has four ASICs, called SA03, to process the signals [9]. Data from up to six FEBs are transferred to an MB to merge the data and reduce the event size by zero-hit suppression. Both the FEB and the MB have an FPGA (Spartan-6/Virtex-5) for the data processing described above and for the slow control of the ASICs. Data from the MBs are sent to the Belle II common readout module via the Belle2Link, which is used not only for readout but also for slow control, such as setting parameters. In the practical operation, data from the entire ARICH system are expected to be processed by the 420 FEBs, the 72 MBs, and the 18 readout modules. We implemented a parameter configuration scheme using the Belle2Link. Parameters such as voltage, current limit, threshold voltage and trigger timing can be set and read via the Belle2Link. Monitored values such as voltages and currents can also be read and recorded.
The graphical user interface (GUI) is also developed based on Control System Studio [11]. The power supply and the parameter control GUIs are shown in Figs. 1 and 2, respectively. The GUIs provide functions to set and get all parameters and to monitor values such as voltage and current [12].

Fig. 1. A screen shot of the ARICH power supply control GUI.
Fig. 2. A device-oriented view of the GUI, which controls the FEB and MB configuration.

3 Integration Test with Cosmic Ray


Using the slow control system, we performed an integration test with cosmic rays. Almost all of the practical components, such as the power supply, the readout modules and the mechanical structure, were used in the integration test.
We used 23 HAPDs for readout in the integration test. High voltage and bias voltages were applied to 16 of the 23 HAPDs. Data from the 23 HAPDs were read out by a readout module using the 23 FEBs and the 4 MBs.
Trigger signals for data taking were generated from cosmic rays using plastic scintillators and photomultiplier tubes, and were distributed to the merger boards and the readout module. Data from the readout module were recorded on the disk of a PC.
We took several runs during the integration test. The longest run was 22 h 54 min and was stopped intentionally. The trigger rate was about 0.1 Hz. Clear ring images from cosmic rays were obtained. The slow control system thus worked stably for one day without any trouble, and we conclude that it can be operated in the practical operation.

4 Summary
The ARICH slow control system, based on the Belle II DAQ slow control framework, was developed for both the power supply and the readout systems. The readout control system was implemented to control the parameters of the FEBs and the MBs through the Belle2Link. The slow control system was operated in the integration test with cosmic rays and worked for about one day without any trouble. Many clear ring images from Cherenkov photons were obtained. We conclude that the slow control system can be operated in the practical operation.

References
1. Abe, T., et al.: Belle II Technical Design Report, arXiv:1011.0352 [physics.ins-det],
KEK Report 2010-1 (2010)
2. Pestotnik, R., et al.: The Aerogel Ring Imaging Cherenkov system at the Belle II
spectrometer. Nucl. Instrum. Methods Phys. Res. A (in press). https://doi.org/10.
1016/j.nima.2017.04.043
3. Tabata, M., et al.: Large-area silica aerogel for use as Cherenkov radiators with high
refractive index, developed by supercritical carbon dioxide drying. J. Supercrit.
Fluids 110, 183–192 (2016)
4. Adachi, I., et al.: Construction of silica aerogel radiator system for Belle II RICH
counter. Nucl. Instrum. Methods Phys. Res. A (in press). https://doi.org/10.1016/
j.nima.2017.02.036
5. Yusa, Y., et al.: Test of the HAPD light sensor for the Belle II Aerogel RICH.
Nucl. Instrum. Methods Phys. Res. A (in press). https://doi.org/10.1016/j.nima.
2017.02.046
6. Yamada, S., et al.: Data acquisition system for the Belle II experiment. IEEE
Trans. Nucl. Sci. 62, 1175–1180 (2015)
7. CAEN. http://www.caen.it. Accessed 30 June 2017
8. Konno, Y., et al.: The slow control and data quality monitoring systems for the
Belle II experiment. IEEE Trans. Nucl. Sci. 62(3), 897–902 (2015)
9. Nishida, S., et al.: Readout ASICs and electronics for the 144-channel HAPDs for
the Aerogel RICH at Belle II. Phys. Procedia 37, 1730–1735 (2012)
10. XILINX. https://www.xilinx.com. Accessed 30 June 2017
11. Control System Studio. http://controlsystemstudio.org. Accessed 30 June 2017
12. Yonenaga, M., et al.: Development of slow control system for the Belle II ARICH
counter. Nucl. Instrum. Methods Phys. Res. A (in press). https://doi.org/10.1016/
j.nima.2017.03.037
Phase-I Trigger Readout Electronics
Upgrade for the ATLAS Liquid-Argon
Calorimeters

Alessandra Camplani1,2(B), on behalf of the ATLAS Liquid Argon Calorimeter group
1 Dipartimento di Fisica, Università degli Studi di Milano, Milan, Italy
2 I.N.F.N., Milan, Italy
alessandra.camplani@mi.infn.it

Abstract. The upgrade of the Large Hadron Collider scheduled for the
Long Shut-down period of 2019–2020, referred to as Phase-I upgrade, will
increase the instantaneous luminosity to about three times the design
value. Since the current ATLAS trigger system does not allow sufficient
increase of the trigger rate, an improvement of the trigger system is
required. The Liquid Argon (LAr) Calorimeter read-out will therefore be
modified to use digital trigger signals with a higher spatial granularity
in order to improve the identification efficiencies of electrons, photons,
tau, jets and missing energy, at high background rejection rates at the
Level-1 trigger.

1 Introduction

The Large Hadron Collider (LHC) has shown very good performance and has broken its own records during Run 1 (2009–2013) and Run 2 (2015–2018). In particular, in June 2016, the LHC exceeded the design peak instantaneous luminosity of 10³⁴ cm⁻² s⁻¹. The luminosity will increase further in the coming years: during Run 3 (2021–2023) the LHC design parameters should allow for an ultimate peak instantaneous luminosity of 3 × 10³⁴ cm⁻² s⁻¹, while during Run 4 (after 2025) an instantaneous luminosity of 5 × 10³⁴ cm⁻² s⁻¹ will be delivered. Since the ATLAS trigger system does not allow a sufficient increase of the trigger rate, an electronics upgrade is required [1].

2 ATLAS Liquid Argon (LAr) Calorimeter Upgrade

To face the Run 3 luminosity, the LAr calorimeter trigger electronics will be
modified, in order to maintain a low pT lepton threshold and keep the same
trigger bandwidth with respect to Run 2. The new trigger readout electronics
will be installed during the second Long Shut-down (LS2). The aim is to provide
higher-granularity, higher-resolution and longitudinal shower information from
the calorimeter to the Level-1 trigger processors. Figure 1 compares the electron

Fig. 1. Trigger signal granularity improvement from Trigger Towers (Δη × Δφ = 0.1 ×
0.1) to Super Cells (Δη × Δφ = 0.025 × 0.1 in front and middle layers).

energy deposition in the present system to the newly proposed system, which has a ten times finer granularity in the trigger readout from the calorimeter. The existing calorimeter trigger readout unit, the so-called Trigger Tower, will evolve into the new finer-granularity scheme, called Super Cells (SC). In total there will be 34000 SCs [2].
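As a rough illustration of the factor-of-ten figure, one can assume the commonly quoted Phase-I segmentation in which one Trigger Tower maps onto 1 presampler, 4 front, 4 middle and 1 back Super Cell (an assumption not spelled out in the text above):

# Illustrative Super-Cell count per Trigger Tower, under the assumed
# Phase-I segmentation of 1 + 4 + 4 + 1 Super Cells per tower.
sc_per_layer = {"presampler": 1, "front": 4, "middle": 4, "back": 1}
print(sum(sc_per_layer.values()))   # 10 -> the "ten times finer" trigger granularity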

2.1 LAr Front-End (FE) Electronics


The LAr FE electronics will be improved and extended during the Phase-I upgrade. A new Layer Sum Board (LSB) will produce the finer-granularity SC signals in the front and middle layers. A new baseplane will keep compatibility with the existing setup and will route the new signals. A new LAr Trigger Digitizer Board (LTDB) will receive, digitize and send the SC signals to the BE electronics. The LTDB, shown in Fig. 2, is the key board of the Phase-I upgrade. There will be 124 LTDB boards and each of them will process up to 320 SC analog signals. As the LTDB will be exposed to a high radiation level, all components, especially the newly developed ASICs, have to be subjected to extensive radiation qualification tests [3].
The custom ADC is a quad-channel 12-bit 40 MS/s pipeline SAR ADC, designed in 130 nm technology [4]. Recently, the radiation tolerance of the prototype ADC chip has been tested and confirmed up to 10 MRad (the requirement is 100 kRad).
LOCx2 is a dual 8 × 12 bit serializer with an output rate of 5.12 Gbps [5]. LOCld is a VCSEL (Vertical Cavity Surface-Emitting Laser) driver, designed for the optical interface [6]. Both are fabricated in 250 nm Silicon-on-Sapphire CMOS technology and have been irradiated up to 200 kRad; no changes in the output eye diagrams have been observed.

Fig. 2. LTDB board under test.

2.2 LAr Back-End (BE) Electronics


The LAr calorimeter BE electronics system, called the LAr Digital Processing System (LDPS), will receive the digital SC data from the LTDBs, reconstruct ET (the transverse energy of each SC) and transmit the results to the Level-1 calorimeter trigger system every 25 ns. The LDPS consists of 32 ATCA carrier boards, each one equipped with four Advanced Mezzanine Cards (AMC) called LATOME, both shown in Fig. 3. The LATOME is built around a high-performance FPGA, an Arria 10 from Intel. The main data path passes through the LATOME, where the transverse energy of each SC is computed.
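As a purely illustrative sketch of what a per-Super-Cell ET computation can look like, a weighted sum of pedestal-subtracted ADC samples is shown below; the weights, pedestal and calibration constant are made-up placeholders, not the ATLAS filtering coefficients or calibration.

# Toy transverse-energy estimate for one Super Cell from its 25 ns samples.
# All numbers are placeholders; the real LDPS uses calibrated filter weights.
samples = [1002, 1051, 1240, 1180, 1090]     # ADC counts (placeholder pulse)
pedestal = 1000.0
weights = [0.0, 0.3, 0.5, 0.3, 0.1]          # placeholder filter weights
adc_to_mev = 12.0                             # placeholder calibration constant

et = adc_to_mev * sum(w * (s - pedestal) for w, s in zip(weights, samples))
print(f"ET estimate: {et:.0f} MeV")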

Fig. 3. ATCA Carrier and LATOME board.

A system test with the final prototype is being prepared. The first tests were done at the beginning of 2017, and more are planned for the coming months. The purpose is to confirm all functionalities and the stability before the mass production.

3 Demonstrator
A demonstrator of the Phase-I upgrade was installed in ATLAS during summer 2014, in the ElectroMagnetic (EM) barrel calorimeter, covering 1/32 of the barrel region, to show the feasibility of the Phase-I upgrade (Fig. 4).
The demonstrator reads data from the Super Cells with the aim of validating the energy reconstruction and collecting real collision data for the development of the filtering algorithms. This also allows gaining experience in the installation and operation of such equipment in the ATLAS environment.
In the FE, two LTDBs are installed to digitize the calorimeter data from 284 Super Cells in the EM barrel and to transmit them along 48 optical links at 4.8 Gbps to three BE boards called ABBA (ATCA test Board for Baseline Acquisition), with a total throughput of about 200 Gbps per LTDB. Each ATCA board mounts three Stratix-IV Intel FPGAs. Two FPGAs store the SC data inside circular buffers and wait for a Level-1 trigger to select the interesting events. The third FPGA takes care of the readout via the IPbus protocol over UDP on a 10 GbE network.

The demonstrator system had successful data taking during 2015 and 2016, collecting data in parallel with ATLAS for both proton-proton and heavy-ion collisions. This was possible also thanks to a special Level-1 topological trigger item provided by the Level-1 community, which allows comparing the demonstrator readout with the ATLAS main readout. The next step is to have a final Phase-I prototype (LTDB and LATOME) installed and running at the end of this year and operated until the end of Run 2.

Fig. 4. ABBA boards.

4 Conclusion
The ATLAS LAr calorimeter electronics will be upgraded for Phase-I after LS2 (2019–2020). The calorimeter trigger path will be digitized at the FE level, to make use of an improved granularity for the trigger decision. Presently, the new FE and BE systems are being developed and produced. The LTDB prototype is being assembled; the radiation-tolerant ASICs have been designed and are under test, as are the LDPB ATCA carrier and LATOME AMC boards. In the meantime, a demonstrator was installed on the detector during summer 2014. During 2015 and 2016 it collected data in parallel with ATLAS for both proton-proton and heavy-ion collisions, and it is now getting ready also for the 2017 data. This demonstrator gives the possibility to gain experience in preparation for the final prototype installation and runs in 2018.

References
1. The ATLAS Collaboration: The ATLAS experiment at the CERN Large Hadron Collider. J. Instr. 3(08), S08003 (2008)
2. Aleksa, M.C., et al.: ATLAS Liquid Argon Calorimeter Phase-I Upgrade Technical
Design Report. Technical report CERN-LHCC. ATLAS-TDR-022, September 2013
3. Buchanan, N.J., et al.: Radiation qualification of the front-end electronics for the
readout of the ATLAS liquid argon calorimeters. JINST 3, 10005 (2008)
4. Xu, H.: The Trigger Readout Electronics for the Phase-I Upgrade of the ATLAS
Liquid Argon Calorimeters. Technical report, ATL-LARG-PROC-2016-003, CERN,
Geneva, November 2016
5. Xiao, L., et al.: LOCx2, a low-latency, low-overhead, 2 × 5.12-Gbps transmitter ASIC for the ATLAS Liquid Argon Calorimeter trigger upgrade. J. Instr. 11(02), C02013 (2016)
6. Liang, F., et al.: The design of 8-Gbps VCSEL drivers for ATLAS liquid Argon
calorimeter upgrade. J. Instr. 8(01), C01031 (2013)
A Service-Oriented Platform for Embedded
Monitoring Systems in Belle II Experiment

F. Di Capua1,2(&), A. Aloisio1,2, F. Ameli3, A. Anastasio2, P. Branchini4, R. Giordano1,2, V. Izzo2, and G. Tortone2
1 University of Naples “Federico II”, Via Cinthia, 80126 Naples, Italy
2 INFN, Sezione di Napoli, Via Cinthia, 80126 Naples, Italy
dicapua@na.infn.it
3 INFN, Sezione di Roma, Piazz.le A. Moro, 00100 Rome, Italy
4 INFN, Sezione di Roma Tre, Via della Vasca Navale, 00100 Rome, Italy

Abstract. uSOP is a general purpose single board computer designed for deep
embedded applications in control and monitoring of detectors, sensors, and
complex laboratory equipment. In this paper, we present its deployment in the
monitoring system framework of the ECL endcap calorimeter of the Belle2
experiment, presently under construction at the KEK Laboratory (Tsukuba,
Japan). We discuss the main aspects of the hardware and software architectures
tailored on the needs of a detector designed around CsI scintillators.

Keywords: Embedded control system · Microprocessor · Detector monitoring

1 Introduction

uSOP is a Service Oriented Platform designed for embedded applications, including the monitoring of complex experiments. The system is based on the high-performance ARM Sitara AM335x [1] family of Cortex-A8 SoCs produced by Texas Instruments.
The uSOP design is a derivative of the BeagleBone [2], designed by the BeagleBoard.org Foundation [3], which promotes the open-source development of ARM-based single-board computers for embedded applications.
The uSOP board [4] adopts a Sitara AM3358 microprocessor with up to 512 Mbyte of RAM and 4 Gbyte of Flash. Host and device USB ports are foreseen for an easy connection of peripherals and host units. The most commonly used serial busses, like SPI, I2C and JTAG, are galvanically isolated. A 10/100 Ethernet port is available for networking, complemented by an independent network module, a Lantronix XPort Pro [5]. This last element allows the user to flash the operating system and to upload the bootloader. As operating system, we have chosen the Debian Linux distribution, version 7, with a kernel release 3.x (Fig. 1).


Fig. 1. The uSOP board in a 3-slot minicrate

2 uSOP Operations in Belle II Experiment

The Belle II [6] experiment has been designed to investigate the CP-violating asymmetries in rare B meson decays and the elements of the CKM matrix, and to perform dedicated searches for new physics in the dark sector.
Belle II will operate at the SuperKEKB [7] electron-positron asymmetric collider (KEK, Tsukuba, Japan). The new collider will deliver an instantaneous luminosity (of the order of 10³⁵ cm⁻² s⁻¹) a factor of 40 higher than the former KEKB.
The applications of the uSOP system in the Belle II environment are described in the following.

2.1 Belle II Electromagnetic Calorimeter Monitoring


The Belle II Electromagnetic Calorimeter (ECL), inherited from Belle [8], consists of a 3 m long barrel with an inner radius of 1.25 m and two annular endcaps, named forward and backward.
The ECL barrel contains 8736 CsI(Tl) scintillating crystals, whose main characteristics are a high light yield and a short radiation length. The two endcaps are made of 2112 CsI(Tl) crystals, distributed in 32 sectors.
The CsI(Tl) crystal light yield variation as a function of temperature has been found to be 0.3%/°C at 20 °C [9]; in addition, CsI crystals are hygroscopic and will not tolerate exposure to high humidity levels [10]. In order to measure the environmental parameters, each sector is equipped with three Semitec AT-2 thermistors [11] and an active Vaisala HMP60 humidity probe [12].
The uSOP system has been used to acquire and process data from the environmental monitors. For this application, a specific temperature and humidity controller based on the LTC2983, a SoC with 24-bit ADCs [13], has been developed.

The measurement sequence has been implemented on the controller in order to find
the best excitation current for the thermistors, then the optimal ADC dynamic range
and, eventually, to subtract the parasitic thermocouple effects. Two controllers have
been connected to uSOP through SPI, each of them monitoring two calorimeter sectors.
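On the uSOP side, reading such a controller over the isolated SPI bus can be sketched with the standard Linux spidev Python module; the bus/device numbers, instruction byte, register address and word layout below are placeholders for illustration only, not the verified LTC2983 register map.

# Hedged sketch of a raw SPI read from the temperature controller on a Linux
# single-board computer such as uSOP, using the standard spidev module.
import spidev

spi = spidev.SpiDev()
spi.open(1, 0)                  # bus 1, chip-select 0 (board-specific assumption)
spi.max_speed_hz = 1_000_000

READ_CMD = 0x03                 # placeholder instruction byte
RESULT_ADDR = 0x0010            # placeholder 16-bit result address

rx = spi.xfer2([READ_CMD, RESULT_ADDR >> 8, RESULT_ADDR & 0xFF, 0, 0, 0, 0])
raw = int.from_bytes(bytes(rx[3:]), "big")
print(f"raw conversion word: 0x{raw:08x}")
spi.close()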
During the Belle shutdown in 2010, the two endcap wheels were dismounted and placed at rest for the electronics upgrade. In order to monitor the CsI crystals during the long shutdown, their environmental parameters have been acquired by a uSOP system reading four of the 32 sectors of the two endcaps. During this period, the system has been fully tested and debugged. For about two years, it operated continuously, running unattended acquisition tasks and exporting data samples and plots on the web.
At the beginning of 2017, the backward ECL endcap was installed in the Belle II detector. In the slow-control framework, the temperature and relative humidity of the two endcaps (forward and backward) are monitored by a uSOP-based system: 96 thermistors and 32 humidity probes, placed in the forward and backward sectors, are sampled.
A uSOP board can control up to four calorimeter sectors. Therefore, in order to monitor the full ECL endcap system, four uSOP boards are housed in a 19-in. 6U Eurocard crate (Fig. 2).

Fig. 2. uSOP in the rack of the Electronic Hut in Tsukuba experimental hall at KEK.

The monitored parameters are transferred by uSOP via Ethernet, according to the
EPICS protocol, and they are archived and plotted by CS Studio.
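For completeness, the EPICS side of such a transfer can be sketched with the pyepics bindings; the process variable name below is a hypothetical placeholder and not the actual Belle II ECL naming scheme.

# Minimal EPICS client sketch using pyepics; the PV name is hypothetical.
from epics import PV

temperature = PV("ECL:BWD:SEC01:TEMP")   # placeholder PV name
print("sector temperature:", temperature.get(), "degC")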

2.2 uSOP Application in Beast II


The successful operation of the Belle II experiment, given the increased design luminosity of the SuperKEKB collider, depends critically on the amount of background processes. Aiming at a better knowledge of the principal background contributions, a dedicated experiment, called BEAST II [14], has been carried out. In Phase 1 of BEAST II, a suite of beam-background detector systems was installed around the

interaction point. A measurement of the dose rate in real time was performed by PIN diodes positioned in several experimental locations; for the measurement of the neutron fluxes, a detection system based on TPCs was deployed; the high dose rate close to the interaction point was measured by diamond sensors. He-3 tubes were used for the detection of thermal neutrons. In addition, some calorimeter modules made of BGO crystals and plastic scintillators were also deployed.
BEAST II also contained six calorimeter modules based on three types of scintillating crystals: Thallium-doped Caesium Iodide (CsI(Tl)), pure Caesium Iodide (CsI) and Cerium-doped Lutetium Yttrium Orthosilicate (LYSO). In order to monitor the temperature and humidity of these modules, a uSOP system was used to acquire and publish the data.

3 Conclusions

uSOP is a Service Oriented Platform currently in use in the Belle II experiment for the monitoring of the environmental temperature and humidity of the CsI crystal scintillators. uSOP can be fully managed remotely, including critical operations like bootloader and operating system uploads. The platform has shown itself to be a resilient and reliable solution for high-energy physics experiments.

References
1. Sitara™ AM335x Processors, Texas Instruments. http://www.ti.com/lsds/ti/processors/sitara/
arm_cortexa8/am335x/overview.page
2. BeagleBoard Black. https://beagleboard.org/black
3. About BeagleBoard.org and the BeagleBoard.org Foundation. http://beagleboard.org/about
4. Aloisio, A., et al.: uSOP: a microprocessor-based service oriented platform for control and
monitoring. IEEE TNS 99, 1 (2017)
5. XPort Pro Embedded Device Server User Guide, Lantronix. https://www.lantronix.com/wp-
content/uploads/pdf/900-560e_XPort_Pro_UG_release.pdf
6. Abe, T., et al.: Belle II Technical Design Report. arXiv:1011.0352 [physics.ins-det]. https://
arxiv.org/abs/1011.0352
7. Ohnishi, Y., et al.: Accelerator design at SuperKEKB. PTEP 2013, 03A011 (2013)
8. Abashian, A., et al.: The Belle detector. NIM A 479, 117–232 (2002)
9. Zhu, R.-Y.: Precision crystal calorimetry in high energy physics. Nucl. Phys. B (Proc.
Suppl.) 78, 203 (1999)
10. Chen, R.-F., et al.: Property measurements of the CsI(Tl) crystal prepared at IMP. Chin.
Phys. C 32, 2 (2008)
11. Semitec Corp., High Precision Thermistor, AT Thermistor
12. Vaisala, Users’s Guide Vaisala Humidity and Temperature Probes HMP60 and HMP110
Series
13. LTC2983 - Multi-Sensor High Accuracy Digital Temperature Measurement System. http://
www.linear.com/product/LTC2983
14. BEAST II Technical Design Report DRAFT For use in US Belle II Project TDR. https://
indico.phys.hawaii.edu/getFile.py/access?contribId=1&resId=0&materialId=slides&confId=
469
Integration of Readout of Vertex
Detector in Belle II DAQ System

Tomoyuki Konno1(B), Thomas Gessler2, Getzkow Dennis2, Hao Yin3, Itoh Ryosuke1, Konorov Igor4, Kühn Wolfgang2, Lange Soeren2, Lautenbach Klemens2, Liu Zhen-an5, Nakamura Katsuro1, Nakao Mikihiko1, P. Reiter Simon2, and Suzuki Soh1
1 High Energy Accelerator Research Organization (KEK), Tsukuba, Japan
konno@hepmail.phys.se.tmu.ac.jp
2 Physikalisches Institut, Justus Liebig University Giessen, Giessen, Germany
3 Institute for High Energy Physics (HEPHY), Vienna, Austria
4 Technische Universität München, Munich, Germany
5 Institute of High Energy Physics, Chinese Academy of Sciences (IHEP), Beijing, China

Abstract. The data acquisition system is a challenge in the Belle II experiment, a new-generation B factory experiment starting data taking in 2018, since a much higher trigger rate and a huge data size must be handled to collect data at the peak luminosity of 8 × 10³⁵ cm⁻² s⁻¹ from the SuperKEKB accelerator. A beam test of the VXD prototype was carried out in March 2017 at the DESY electron test beam facility as a step of the readout integration of the VXD system. During the beam period, we tested three major tasks from the DAQ point of view: event synchronization between the detectors, stability of the high-rate performance with online tracking, and establishment of the slow control scheme. In this letter, details of the DESY beam test are discussed, with prospects toward the phase II beam commissioning run in 2018 and the phase III physics run.

Keywords: Data acquisition system · Slow control · Vertex detector · Silicon detector · Pixel detector

1 Introduction
The Belle II experiment [1], a next-generation B factory experiment, starts physics data taking in 2018 to search for New Physics beyond the Standard Model based on precision measurements of flavor systems. The experiment aims to achieve these precision measurements by collecting data at a peak luminosity of 8 × 10³⁵ cm⁻² s⁻¹ from the SuperKEKB accelerator, which is 40 times higher than that in the previous Belle experiment. The data acquisition (DAQ) system [2] is one of the biggest challenges in the Belle II experiment since the experiment collects data at a high trigger rate and with a large data size, up to 30 kHz and 30 GB/s at the level-1 trigger, respectively. The Belle2Link [3], a common detector readout scheme using

COPPER (common pipeline electronics readout) [4] and HSLB (high speed link board) boards, was developed to handle data from all the subdetectors except for the PXD (pixel vertex detector) [5] and to merge them into the HLT (high level trigger) [6] PC farm, while the DHH [7], a dedicated readout system for the PXD, has been developed to handle the huge event size from the DEPFET sensors. A common trigger and system clock distribution system based on the FTSW (frontend timing switch) [8] manages the trigger signals for all readout electronics in the Belle II detector. Two levels of event builders [9] are introduced: a merger of the 6 subdetectors as HLT inputs, and a merger of the outputs from the PXD and HLT to be recorded. A reduction of the pixel event size using the selection of RoIs (regions of interest) in the pixels is performed by an FPGA-based processor, called ONSEN [10], using the tracks reconstructed in the HLT. Dedicated flash-ADC readout electronics is developed for the SVD APV chips, and a bridge board, the FTB, transfers the waveforms of the flash ADC to the Belle2Link in the same way as for the other 5 subdetectors. In parallel to the integration of the outer detectors using cosmic rays, a beam test was performed in March 2017 at the DESY test beam facility as a big step of the readout integration of the VXD systems, to establish the full data chain of the Belle II DAQ system with a VXD prototype.

2 Data Acquisition System in the Beam Test


During the beam test, 2 PXD modules and 2 SVD modules with their readout electronics were used as a prototype of the phase II VXD system, observing the electron test beam with an energy of 1–3 GeV at a trigger rate of about 4 kHz.
We constructed a copy of the Belle II DAQ system. 2 COPPER boards were connected to 2 FTBs to read out data from the SVD modules, while 2 DHH systems were connected to handle the PXD sensor modules. 2 FTSW boards were used: one for the DHH system and the other for the FTBs and COPPERs. 3 PCs with 24 hyper-threads in total were prepared by the DESY-IT group as the HLT worker nodes to reconstruct tracks and extract RoIs. 2 ONSEN modules were connected to the DHHs in parallel; they received the RoI feedback and sent the selected data to a storage PC. In addition to the hardware setup, we prepared a run control software to manage the whole test system based on the slow control software architecture [12] of the Belle II DAQ system. There were three major challenges in the test setup: event synchronization between the PXD and SVD over the data-taking period, high-rate performance of the RoI feedback with real tracking, and demonstration of the slow control scheme in data-taking operation by non-DAQ experts.

3 Operation Results
Event mismatches had been observed in past beam tests of the VXD prototypes among the readout electronics of both SVD and PXD, due to bugs in the firmware and electrical instabilities of the signal cables. Once event mismatches were observed, it took more than ten minutes to recover the whole system due to

reloads of the firmware. After modifications of the firmware and the replacement of the cables from copper to optical, no event mismatches were observed during the test period once data taking had successfully started. Finally, the event synchronization between the PXD and SVD was confirmed not only by the event numbers recorded in the data but also by the clear signatures of reconstructed tracks shown in Fig. 1.
We also performed an overnight operation to test the long-term stability with beam trigger and a 0.5 T solenoid magnetic field. The track reconstruction software is based on basf2, the Belle II analysis software framework, and was operated in the HLT worker nodes. Although it took a few days to optimize the HLT framework and the tracking software, the data-taking rate was finally stabilized around 1.6 kHz against a 4 kHz trigger input rate. We confirmed that the trigger output rate was still limited by the 200 µs minimum interval of the trigger time difference in the DHH modules, and that the tracking process itself was stable with sufficient performance.
A combined operation of the Belle II run control system with the SVD and PXD dedicated control systems is an important task because there is a difference in software architecture between the global control and these local controls: the global run control is based on NSM2, an upgrade of the slow control protocol of the Belle experiment, while the SVD and PXD control systems rely on EPICS [13]. We established a unified operation including the SVD and PXD systems by introducing programs to bridge the NSM2 and EPICS architectures. In addition, Control System Studio (CSS), a control GUI framework widely used in accelerator operations, was introduced and a plugin for NSM2 was newly developed. A unification of the user interface was thereby finally established. The whole control of the DAQ system was successfully handed over to non-DAQ experts during the beam time, and they managed the data taking using the control GUI shown in Fig. 2.

Fig. 1. A drawing of the event display. The blue line shows the reconstructed track and the two green squares represent the RoI.
Fig. 2. A screen shot of the run control GUI panel.

From the results of the beam test, we concluded that the challenges of the DAQ integration with the VXD system were successfully met, while there are still limitations in the readout performance due to the minimum interval of the trigger time difference and concerns about event mismatches at run start.

4 Summary and Prospects

The data acquisition system is a major challenge in the Belle II experiment, a new-generation B factory experiment starting data taking in 2018, since a trigger rate 40 times higher than that in the Belle experiment must be handled in order to perform precision measurements of flavor systems. A beam test of the VXD prototype was carried out in March 2017 at the DESY electron test beam facility as an important step of the readout integration of the VXD system. There were three major tasks from the DAQ point of view: event synchronization between SVD and PXD, stability of the high-rate performance with online tracking of the beam, and establishment of the slow control scheme. We finally satisfied these requirements during the beam time, while a limitation of the readout electronics and concerns about failures at run start still remain. The test setup including the sensor modules is therefore kept at the DESY test site for further debugging and improvement of the firmware until the summer of 2017. The whole DAQ system will be integrated before the phase II beam commissioning run in 2018 and will be fully operational in the phase III physics run.

References
1. Abe, T., et al.: Belle II Technical Design Report, arXiv:1011.0352 (2010)
2. Yamada, S., et al.: Data acquisition system for the Belle II experiment. IEEE
Trans. Nucl. Sci. 62, 1175–1180 (2015)
3. Suna, D., et al.: Belle2Link: a global data readout and transmission for Belle II
experiment at KEK. Phys. Procedia 37, 1933–1939 (2012)
4. Higuchi, T., et al.: Modular pipeline readout electronics for the SuperBelle drift
chamber. IEEE Trans. Nucl. Sci. 52, 1912–1917 (2005)
5. Moser, H., et al.: The Belle II DEPFET pixel detector. Nucl. Instrum. Methods Phys. Res. A 831, 85–87 (2016)
6. Itoh, R., et al.: Data flow and high level trigger of Belle II DAQ system. IEEE
Trans. Nucl. Sci. 60, 3720–3724 (2013)
7. Levit, D., et al.: FPGA based data read-out system of the Belle II pixel detector.
IEEE Trans. Nucl. Sci. 62, 1033–1039 (2015)
8. Nakao, M., et al.: Minimizing dead time of the Belle II data acquisition system
with pipelined trigger flow control. IEEE Trans. Nucl. Sci. 60, 3729–3734 (2013)
9. Suzuki, S.Y.: The three-level event building system for the Belle II experiment.
IEEE Trans. Nucl. Sci. 62, 1162–1168 (2015)
10. Gessler, T., et al.: The ONSEN data reduction system for the Belle II pixel detector,
arXiv:1406.4028 (2014)
11. Thalmeier, R., et al.: The Belle II SVD data readout system. Nucl. Instrum.
Methods A845, 633–638 (2017)

12. Konno, T., et al.: The slow control and data quality monitoring system for the
Belle II experiment. IEEE Trans. Nucl. Sci. 62, 897–902 (2015)
13. EPICS-Experimental Physics and Industrial Control System. http://www.aps.anl.
gov/epics/
The Weighting Resistive Matrix for Real
Time Data Filtering in Large Detectors

A. Abdallah1, G. Aielli1(B), R. Cardarelli1, M. Manca2, M. Nessi4, P. Sala3, and L. H. Whitehead4
1 University and INFN Sez. Roma Tor Vergata, Rome, Italy
giulio.aielli@cern.ch
2 Scimpulse Foundation, Geleen, The Netherlands
3 CERN and INFN Sez. Milano, Milan, Italy
4 CERN, Geneva, Switzerland

Abstract. Experimental High Energy Physics pioneered in facing the problem of managing large data flows smartly and in real time. Very large volume experiments searching for rare events, such as DUNE (Deep Underground Neutrino Experiment), may produce an extremely high data flow with a complex data model. In this paper, we propose to overcome the real-time computing limitations by introducing a novel technology, the WRM (Weighting Resistive Matrix).

Keywords: Fast pattern recognition · Analog processing · Tracking trigger

1 Big Data Problems in High Energy Physics


How can one handle very fast, large dataflows from local sensors and control systems, with limited to no buffering as their relevance is only hic et nunc, be able to quickly identify the salient data, and feed the proper actuators so as to adjust the course of action in real time? And how can these goals be achieved without imposing unrealistic energy or cooling requirements on the infrastructure? Technically speaking, data-intensive application performance is typically limited by storage, transmission bandwidth and computing resources for data selection. In a classic experimental setup, sensors produce raw data that is usually large with respect to the ultimate amount of salient information, affected by noise, redundant and largely populated by non-significant events. These problems can largely, if not completely, be reduced to large correlation or model-fitting problems, where the signal does not have enough local significance and depends on the space-time correlation of individual measurements (tracking), which in general requires a regression or a pattern matching on the data. The combinatorial nature of these algorithms implies a high computational complexity, typically faced through dedicated hardware devices.


Among the proposals for hardware-based regression calculators for trigger purposes, we mention the Associative Memories (AM) [1] and the Weighting Resistive Matrix (WRM) [2], the subject of this paper.

2 WRM: An Analog Processor for ns Scale Fit


The WRM was originally developed and tested as a fast topological trigger processor for high-energy physics experiments. It is an analog processor, able to estimate the best fit by scoring the parametric space describing a given hypothesis on the data. The device is essentially passive, being based on a special resistive network, which introduces the concept of error distribution through an electric potential diffusion, and the parametric-space scoring through analog adders, programmed according to the parametric space to be explored. The WRM does not perform an exact match (as the AM does) but rather provides the best possible description of the data according to the given hypothesis.
The simplified WRM schematic reported in Fig. 1 illustrates its working principle. For every input signal on a specific layer, the network diffuses a corresponding potential which permits correlating points at different distances. The output of this process can be read on each layer. By adding the outputs of the different layers along various directions (called ‘roads’), one can score the likelihood of the correlations. The diffusion can be represented in compact mathematical notation [3] by the kernel
$$\left(\cdots, \tfrac{1}{2^{n}}, \cdots, \tfrac{1}{2}, 1, \tfrac{1}{2}, \cdots, \tfrac{1}{2^{n}}, \cdots\right).$$
Let $S = (s_{i,j})_{1 \le i \le n,\ 1 \le j \le m}$ be the $n \times m$ matrix that corresponds to the diffusion inside the circuit of a certain input. The road summation is then computed as
$$f_i^k = e^{T}\, p_k\!\left(S_{*,\ i \le j < i+n}\right) e,$$
where $p_k : M_{n,n}(\mathbb{R}) \to M_{n,n}(\mathbb{R})$ takes an $n \times n$ matrix and returns its Hadamard product (element-wise product) with the $k$-th bitmap $n \times n$ matrix, which corresponds to how the nodes of the circuit are connected together.
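To make the scoring concrete, the following is a minimal numpy sketch of the two steps under simplifying assumptions (a digital toy model, not the analog circuit or its actual simulation code): each hit diffuses a potential that halves at every node, and a road score is the sum of the diffused matrix masked by the road's bitmap.

# Toy numpy model of WRM diffusion and road scoring; purely illustrative.
import numpy as np

def diffuse(hits):
    """hits: (n_layers, n_nodes) binary hit matrix; returns diffused potentials,
    with the potential halving at every node away from the injection point."""
    n_nodes = hits.shape[1]
    idx = np.arange(n_nodes)
    kernel = 0.5 ** np.abs(idx[:, None] - idx[None, :])   # 1, 1/2, 1/4, ...
    return hits @ kernel

def road_score(S, road_mask):
    """Score of one road: sum of the Hadamard product of the diffused matrix
    with the road's bitmap (e^T (mask * S) e in the notation above)."""
    return float(np.sum(S * road_mask))

# Toy example: a straight track crossing 4 layers of 8 nodes.
hits = np.zeros((4, 8))
for layer in range(4):
    hits[layer, 2 + layer] = 1.0            # hits along a diagonal

S = diffuse(hits)
true_road = np.eye(4, 8, k=2)               # bitmap following the true track
wrong_road = np.eye(4, 8, k=-1)             # bitmap away from the track
print(road_score(S, true_road), road_score(S, wrong_road))

The road aligned with the track scores the full 4.0, while the misaligned road only collects the diffused tails, illustrating how the best-fit road stands out without any exact pattern matching.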
The main advantages of the WRM technique are: it is extremely fast, as basically no computation is required and the desired response is given within the signal propagation delay through the network; and it is robust against noise, since it does not look for an exact match but for the best-fit correlation. The WRM algorithm recalls the Radon and Hough transforms, which are also based on parametric-space scoring, but it differs from them in its intrinsic robustness against noise and in an execution time that is independent of the input pattern.

3 The Big Data Challenges for DUNE


The Deep Underground Neutrino Experiment (DUNE) [4] is a leading-edge,
international experiment for neutrino science and proton decay studies. It will

Fig. 1. Simplified one-dimensional WRM wiring diagram: the signal is injected at each node, diffused through the horizontal lines and added along the columns.

consist of a near detector, placed at Fermilab, and a huge far detector to be installed at the Sanford Underground Research Facility in Lead, South Dakota. The DUNE experiment will search for various rare processes, including beam, atmospheric and supernova neutrino interactions.
The far detector, consisting of 4 modules based on the LAr TPC technology, will be read out by ≈1.5 × 10⁶ channels sampled at 2 MHz. In the absence of any noise or background reduction, the total data volume will amount to up to 4.6 TB/s, or 150 EB/y. The vast majority of the data size comes from electronics noise and from the beta decays of the ³⁹Ar isotope. This amount of data represents a huge challenge that requires drastic reduction techniques. Standard triggering techniques will be extremely difficult to implement, because of the absence of external references (except for beam events) and because of the necessity to identify very low energy deposits.
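A quick back-of-the-envelope cross-check of the quoted raw rate, assuming 12-bit (1.5-byte) samples per channel (the sample size is an assumption, not stated above):

# Rough consistency check of the DUNE far-detector raw data rate.
channels = 1.5e6
sampling_rate_hz = 2e6
bytes_per_sample = 1.5             # assumed 12-bit ADC words

rate = channels * sampling_rate_hz * bytes_per_sample        # bytes per second
per_year = rate * 3.15e7                                     # ~seconds per year
print(f"{rate / 1e12:.1f} TB/s, {per_year / 1e18:.0f} EB/y")  # ~4.5 TB/s, ~140 EB/y

This reproduces, within rounding, the ≈4.6 TB/s and 150 EB/y figures quoted above.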
Both on-line monitoring and off-line processing will also be affected by the data size and by the huge variability of the physics events. A single event, recorded over ≈5 ms, represents tens of GB of data, and its full reconstruction can take hours.
In addition to the processes being rare, the interaction vertices will also be randomly distributed within the detector, so there is no a priori known region of interest. The size and topology of the interactions can vary greatly both within each category and especially between categories. Figure 2, left, shows a typical beam electron-neutrino interaction, spanning approximately 1.5 m in the beam direction and 0.3 m in the transverse directions. Figure 2, right, however, shows a typical interaction arising from a supernova neutrino, with length scales of the order of tens of centimetres. The energy deposited within the detector can vary from a few MeV up to many GeV.
Due to the rare nature of the signal processes, it is vital that the interactions
can be selected from the background with very high efficiency, whilst maintain-
ing a very significant reduction in the data rate. Furthermore, hardware level
pattern recognition enables detailed on-line monitoring of high-level quantities

Fig. 2. Left: Monte Carlo (FLUKA [5]) νe CC event. Original ν energy 2.4 GeV. The
image spans approx. 300 cm by 60 cm. Right: Monte Carlo (FLUKA) Supernova ν event.
Original ν energy 19 MeV. The image spans ≈ 15 cm in the horizontal and 25 cm in
the vertical direction.

such as the number of tracks within the detector or the number of different event topologies seen in a given time period, and it allows prioritizing the analysis over a pre-selection of interesting event topologies.

4 Implementation of the WRM Technique for the DUNE Prototype
According to the description above, DUNE is a good example of a big data problem of the hardest type, having at the same time a huge data throughput and a very complex data model. We therefore propose to study how to use the WRM technique to extend the detector performance. A complete software simulation has been designed in order to reproduce the logic of a WRM chip that will be developed for DUNE. The simulation is flexible and allows fine tuning of the internal working parameters of the WRM, like road length and shape, sampling width, etc.

Fig. 3. Left: WRM software simulation. Middle: Input data example. Right: The corresponding detected signal mask.

This simulation is being used on simulated DUNE data sets to optimize the WRM design for detecting signals in DUNE data (Fig. 3). The proposed system must be suitable for integration with the already existing, advanced DAQ architecture design. On one side, access to the TPC analog data, needed by the native WRM design, is not possible. Therefore we are studying an extension

of the WRM network functioning as an analog-adder-based DAC input stage, transforming the byte stream into voltage pulses. Another problem is the interface between the DUNE DAQ and the WRM system, which can be solved by exploiting the FELIX architecture [6] (one of the proposed readout architectures), a new device conceived to interface data sources carried by many Gigabit Transceiver (GBT) links to the rest of the DAQ. The FELIX manages the data streams on a high-performance network to which the WRM can be transparently interfaced using an FPGA-based interface, needed to prepare the data stream and collect the WRM results. We aim to implement such a device in the ProtoDUNE run, expected in 2018, where the FPGA will be used to emulate a reduced version of the WRM algorithm, as a proof of principle and demonstrator. The results can be used both on-line and off-line to validate the algorithm selectivity and efficiency on real data.

5 Further Applications and Impact

The DUNE case is a prototypical example of the big-data challenges also found in medical applications. As in DUNE, in medical imaging for example, a very early data reduction is highly desirable, not because of sheer technological barriers to data-flow handling, but to comply with the governance and economic logic underlying healthcare provision.
Furthermore, much of medical image reconstruction is based on the inverse Radon transform. This is a well-known ill-posed problem, which should be treated by iterative reconstruction methods, whose application is nevertheless limited by their computational burden. Medical images are thus most of the time reconstructed off-line, and the calibration of machines and algorithms may vary significantly between models and vendors, setting a strong barrier to the useful adoption of machine learning and other scalable approaches to complementing medical professional expertise. One problem that would seem apparently trivial, but is arguably computationally hard, is segmentation, i.e. the distinction of different objects (organs/tissues) in a single image where many elements coexist and can be juxtaposed. Solutions to this problem may prove useful in automatic radiotherapy tracking, robot surgery guidance, etc.
Other applications beyond high-energy physics could be envisioned in robotics and mechatronics. Highly interactive and/or energy-footprint-sensitive applications could be ideal testing fields for our electronics. For example, the growing interest in exoskeletons and smart prosthetics has led the pioneers of the field to hit the wall of data saliency.

References
1. Dell’Orso, M., Ristori, L.: VLSI structures for track finding. NIM-A 278, 436–440
(1988)
2. Cardarelli, R., Chiostri, V., Di Stante, L., Reali, E., Santonico, R., Travaglini, M.,
Tusi, E.: On a very fast topological trigger. NIM-A 324(1–2), 253–259 (1993)

3. Abdallah, A., Cardarelli, R., Aielli, G.: On a fast discrete straight line segment
detection (2014)
4. http://www.dunescience.org/
5. Boehlen, T.T., et al.: The FLUKA code: developments and challenges for high
energy and medical applications. Nucl. Data Sheets 120, 211–214 (2014)
6. J.T. Anderson, et al.: FELIX: a high-throughput network approach for interfacing to
front end electronics for ATLAS upgrades. In: ATL-DAQ-PROC-2015-014. https://
cds.cern.ch/record/2016626
Experimental Detector Systems
Thermal Mockup Studies of Belle II
Vertex Detector

H. Ye(B) and C. Niebuhr

DESY, Notkestrasse 85, 22607 Hamburg, Germany


hua.ye@desy.de

Abstract. As the upgrade of the former KEKB collider, SuperKEKB aims to increase the peak luminosity by a factor of 40 to 8 × 10³⁵ cm⁻² s⁻¹. The Belle II experiment is expected to accumulate a data set of 50 ab⁻¹ around 10 GeV in the next decade, to explore new physics beyond the Standard Model at the intensity frontier. The Belle II vertex detector (VXD) is upgraded with a new 2-layer DEPFET pixel detector (PXD) in the innermost part, surrounded by a 4-layer double-sided silicon strip detector. To achieve an accurate determination of the decay vertices, the material budget in the detector acceptance is highly optimised. Evaporative 2-phase CO2 cooling is a newly developed low-mass cooling concept which is used in the dense VXD volume. The cooling system must be capable of removing a heat load of about 1 kW from the detectors. To verify and optimise the performance of the 2-phase CO2 cooling system, a full-sized VXD thermal mockup has been built at DESY. In this talk some aspects of the mechanical design of the Belle II VXD are presented, as well as the thermal and mechanical measurements.

1 Belle II Vertex Detector Layout

The Belle II vertex detector [1] is made up of a 2-layer DEPFET¹ pixel detector (PXD) and a 4-layer double-sided silicon strip detector (DSSD). The PXD [3] consists of 40 sensors with in total 7.68 million pixels. The sensitive area of each sensor is thinned down to 75 µm, with a size of 61.44 (44.80) × 12.50 mm² for layer 2 (1). The material budget is determined to be 0.21% X₀ per PXD layer. The DEPFET sensor is operated by 3 types of ASICs: the Switchers, which perform the row control, the analog front-end named Drain Current Digitizer (DCD), and the Data Handling Processor (DHP), which performs the pedestal subtraction. The power consumption is dominated by the DCD/DHP, which can be placed at the end of the sensor, outside of the physics acceptance of the Belle II detector. Active cooling is required there; meanwhile, the matrix and the Switchers contribute a low power consumption and thus can be sufficiently cooled with a forced air flow. The so-called PXD ladder is formed from 2 DEPFET sensors glued together at their butt faces, with ceramic mini-rods embedded in the thick rim of the

¹ Abbreviation of “DEPleted Field Effect Transistor” [2].

sensor. Such a ladder design makes the sensors self-supporting. The ladder size is 170.0 (136.0) × 15.4 mm² for layer 2 (1).
Both PXD layers are mounted onto common support and cooling blocks (SCBs). The steel SCBs are manufactured using 3D printing technology, with enclosed CO2 channels and open N2 channels integrated. 8 silver-coated carbon fiber tubes connect the forward (FWD) and backward (BWD) SCBs, to provide grounding and to inject N2 towards the Switchers of the inner layer. The open N2 holes on the SCBs provide N2 to cool the sensitive area. The ladders are fixed on the SCBs using M1.2 screws with a plastic washer and an o-ring to prevent electrical contact between screw and silicon. Elongated holes are adopted on the FWD side, which allows compensating for thermal expansion. The ladder supporting scheme is shown in Fig. 1.

Fig. 1. (a) The mechanical design of the Belle II PXD. The PXD ladders are fixed
on two pairs of SCBs. Forward and backward SCBs are connected with 8 carbon fiber
tubes. (b) 3D printed SCB with integrated cooling channels for CO2 circulation and
open channels providing forced N2 flow.

The 4 layers of DSSDs are called the silicon vertex detector (SVD) [4], which follows the naming of the Belle SVD, and are numbered from 3 to 6. The SVD is composed of 172 DSSD modules with a sensitive area of 1.13 m² and a polar angle coverage of 17°–150°. The modules are read out by APV25 chips, which are thinned down to 100 µm. The SVD ladders are formed from up to 5 DSSD modules in a row, supported by two ribs with an Airex foam core sandwich. A slanted angle with trapezoidal modules is implemented on the FWD side of Layer.4–6. For Layer.3 and the FWD/BWD modules of Layer.4–6, the APV25s are mounted at the end of the ladder and get support and cooling from the endrings, in which CO2 cooling circuits are integrated. The barrel modules are cooled by CO2 pipes. The so-called “origami” chip-on-sensor concept allows reading out the bottom-side strips using chips on the top side. In this way the readout chips can be arranged in a row on one side of the modules and cooled with one CO2 cooling pipe. Such a design was developed to minimize the distance between the DSSDs and the APV25s as well as the material budget [5].

2 VXD Cooling
Adequate cooling is required in such a dense VXD volume for the detector operation. The VXD consumes about 1 kW, of which the PXD contributes about 360 W and the SVD about 700 W; together with the heat load through the 9 m of vacuum-insulated flex lines from the cooling plant to the detector, a cooling capacity of 2–3 kW is required. Meanwhile, the VXD needs to be thermally insulated against the CDC and the beam pipe. Room temperature is required at the inner surface of the central drift chamber (CDC) for a stable calibration and dE/dx performance. In total, 12 cooling circuits are dedicated to the cooling task, as listed in Table 1.

Table 1. The VXD cooling pipe line system and the design power consumption.

Circuit | Half  | Layers | Cooling method | Side     | Power per circuit [W]
1,2     | +y    | 1,2    | SCB            | BWD, FWD | 90
3,4     | −y    | 1,2    | SCB            | BWD, FWD | 90
5,6     | +x,−x | 3–6    | Endring        | BWD      | 93
7,8     | +x,−x | 3–6    | Endring        | FWD      | 93
9,10    | +x,−x | 4,5    | Cooling pipe   | BWD      | 68
11,12   | +x,−x | 6      | Cooling pipe   | BWD      | 96
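A quick consistency check of Table 1 against the power figures quoted above (about 360 W for the PXD and 700 W for the SVD); each table row covers two circuits:

# Sum of the design powers in Table 1 (two circuits per row).
pxd_rows = [90, 90]                 # SCB circuits 1-4
svd_rows = [93, 93, 68, 96]         # endring and cooling-pipe circuits 5-12

pxd = 2 * sum(pxd_rows)
svd = 2 * sum(svd_rows)
print(pxd, svd, pxd + svd)          # 360 700 1060 -> roughly the 1 kW quoted above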

The VXD cooling system adopts the 2-phase CO2 cooling method [6], an efficient concept for low-mass detectors. The pumped cooling system uses a 2-phase accumulator, which can be both heated and cooled, to control the pressure and hence the temperature inside the detector volume. All control can be done in a distant, accessible cooling plant. A large temperature range, typically from 20 to −40 °C, can be achieved. Finite element analysis indicates that the CO2 temperature needs to be lower than −20 °C.

3 VXD Thermal Mockup


In order to verify and optimise the performance of the 2-phase CO2 cooling system, a full-sized mockup has been built at DESY (Fig. 2). Dummy PXD ladders are manufactured in exactly the same way as for the final detector [7]. Instead of real ASICs, resistive dummy loads are used. Nominal power loads of 0.5 W, 0.5 W and 8 W are applied to the “matrix”, “Switcher” and “DCD/DHP”-like resistors in each half ladder². The M1.2 screws are fastened with a torque of 7 mN·m. 20 Pt100 resistance thermometers are glued onto the PXD to monitor the temperature. The SVD dummy ladders use resistive foils to simulate the APV25 chips, and the DSSD silicon is simulated with glass. NTC thermistors are glued
² The power dissipations are based on the initial numbers for the first versions of the DEPFET chips.

in the middle of the foils to probe the local temperature. An aluminium shield simulates the inner cover of the CDC and forms the dry volume between the final focusing magnets (QCR). The heat intake arising from the SuperKEKB beam pipe is not taken into consideration. The MARCO (Multipurpose Apparatus for Research on CO2) system serves as the cooling plant for the thermal mockup.

Fig. 2. The full-sized thermal mockup of the Belle II vertex detector. The left picture
is the CAD design, the right picture is the constructed PXD and SVD mockup.

4 Thermal and Mechanical Studies


To balance the CO2 mass flow among the different cooling circuits, the inlet of each flex line adopts a small diameter so as to contribute a relatively large pressure drop. The measured pressure drops (shown in Fig. 3) result in a temperature gradient of about 1–2 °C in each cooling circuit. The heat load in the detector will introduce additional pressure drops: for instance, the intense heat load at the DCD/DHP will increase the local temperature by 40 °C, causing a pressure drop of about 1 bar in the SCB cooling channels.
Concerning the temperature distribution, with the CO2 set at −25 °C and an N2 flow of 20 L/min, the temperature on the PXD was kept below 25 °C. A gradient of 7 °C was observed along the PXD ladder; the top-bottom gradient was about 5 °C, caused by the high density of the cold N2. The vibration introduced by the N2 flow was measured with a non-contact laser displacement sensor. A vibration at 175 Hz was observed in the ladder center with an amplitude of <1 µm (Fig. 4). The vibration was traced back to the N2 injection by changing the flow rate and comparing with the vibration at the M1.2 screws. The magnitude of the vibration is negligible with respect to the spatial resolution of the detector.
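How a spectrum like the one in Fig. 4 can be obtained from the laser displacement trace is sketched below with a discrete Fourier transform; the sampling rate and the synthetic 175 Hz, sub-micrometre signal are placeholders that only mimic the observed peak, not the measured data.

# Hedged sketch of extracting a vibration spectrum from a displacement trace.
import numpy as np

fs = 5000.0                                   # assumed sampling rate [Hz]
t = np.arange(0, 2.0, 1.0 / fs)               # 2 s of data
x_um = 0.8 * np.sin(2 * np.pi * 175.0 * t)    # synthetic displacement [um]

amp = np.abs(np.fft.rfft(x_um)) * 2.0 / len(x_um)   # single-sided amplitude [um]
freq = np.fft.rfftfreq(len(x_um), 1.0 / fs)
peak = np.argmax(amp)
print(f"peak at {freq[peak]:.0f} Hz with amplitude {amp[peak]:.2f} um")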
The temperature on the SVD is well controlled below 30 °C except for Layer.3, where local high temperatures of 50 (>60) °C were observed in the BWD (FWD) foil heaters. FE simulation indicates that most of the gradient (∼45 °C in FWD) occurs in the endring finger, which is made of stainless steel and connected to Layer.3. The modification proposed by the DESY group is to add a copper insert between the endring finger and the Layer.3 ladders. By doing this the gradient is

expected to improve by 20 °C, as indicated by FE simulation. The modification is under testing at Melbourne. The temperature in the Layer.3 modules was found to be influenced by the PXD and therefore relies on the N2 injection. The thermal interference with the PXD is negligible.
Temperature gradients on the top/bottom of the VXD CDRP shield and the CDC inner surface were also studied. The temperature on the CDC inner surface ranged from 15 °C on top to 8 °C on the bottom; the gradient is thus determined to be about 7 °C. The thermal transfer through the electronic cables is determined to be negligible.
With the CO2 at −25 °C and an N2 flow of 20 L/min, the dew point in the VXD volume was stabilised at −35 °C. The N2 flow proved to be necessary in the dry volume (formed by the CDC inner cover and the VXD/QCR) because the dew point might briefly rise above the CO2 temperature during the thermal cycles of switching MARCO on/off.

[Fig. 3 plots: pressure drop (bar) versus CO2 mass flow (g/s) for the SVD Layer.6, Endring and PXD SCB circuits, with and without the flex lines.]

Fig. 3. The pressure drops as a function of the injected CO2 mass flow in the different cooling circuits, without heat load from the detector. The contribution of the flex lines is included in the left plot and excluded in the right one. The solid curves are fits with a 2nd-order polynomial whose intercept is constrained to 0.
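The zero-intercept fit mentioned in the caption can be reproduced with ordinary linear least squares on the two monomials ṁ and ṁ²; the data points below are made-up placeholders, not the measured values.

# Sketch of the Fig. 3 fit: dp = a*mdot + b*mdot**2 with the intercept fixed to 0.
import numpy as np

mdot = np.array([0.8, 1.0, 1.2, 1.4])        # CO2 mass flow [g/s] (placeholder)
dp = np.array([0.45, 0.70, 1.00, 1.35])      # pressure drop [bar] (placeholder)

A = np.column_stack([mdot, mdot ** 2])        # no constant column -> zero intercept
coeffs, *_ = np.linalg.lstsq(A, dp, rcond=None)
a, b = coeffs
print(f"dp ≈ {a:.3f}·mdot + {b:.3f}·mdot²")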

[Fig. 4 plots: vibration amplitude (µm) versus frequency (Hz), measured on the Layer.2 sensor for N2 flows of 0, 9, 20 and 30 L/min and on the SCB screw at 30 L/min.]

Fig. 4. Vibration in the center of a Layer.2 PXD ladder at different injected N2 flow rates. The measurement on the SCB screw is taken as a reference.

5 Summary
The operating environments of the Belle II PXD and SVD are thermally coupled and also influence the surrounding CDC. The VXD is cooled by evaporative two-phase CO2 and by airflow injection. A full-size thermal mock-up was built at DESY to verify and optimise the cooling concept of the Belle II VXD. The thermal and mechanical results are summarised below.
With the CO2 set at −25 °C:

• The temperature on the PXD ladders stays below 25 °C with a N2 flow of 20 L/min. A gradient of 7 °C is observed along the ladder and 5 °C between top and bottom. The N2 injection induces negligible vibration.
• The SVD ASIC temperatures are controlled below 30 °C. The Layer 3 ASICs suffer from high temperatures; a modification is under way.
• The temperature on the inner surface of the CDC cylinder ranges from 15 °C to 8 °C.

Acknowledgement. We wish to thank the MPI für Physik, München group for preparing the SCBs and the MPG-Halbleiterlabor (HLL) group for producing the PXD dummy sensors. The Belle II VXD cooling frame was developed based on the experience of ATLAS-IBL, and we would like to acknowledge the support from the CERN experts.

References
1. Abe, T., et al.: KEK Report 2010-1 (2010). arXiv: 1011.0352v1
2. Kemmer, J., Lutz, G.: Nucl. Instrum. Methods 253, 356 (1987)
3. Marinas, C.: Nucl. Instrum. Methods 731, 31 (2013)
4. Friedl, M., et al.: Nucl. Instrum. Methods A 732, 83 (2013)
5. Irmler, C., et al.: Nucl. Instrum. Methods 732, 109 (2013)
6. Verlaat, B., Colijn, A.P.: CO2 cooling developments for HEP detectors. In: VERTEX
2009, Veluwe, Netherlands (2009)
7. Andricek, L., Lutz, G., Richter, R.H., Reiche, M.: IEEE Trans. Nucl. Sci. 51, 1117
(2004)
Integration and Characterization
of the Vertex Detector in SuperKEKB
Commissioning Phase 2

H. Ye(B)
(On behalf of the BEAST2 Collaboration)

DESY, Notkestrasse 85, 22607 Hamburg, Germany


hua.ye@desy.de

Abstract. As an upgrade of the asymmetric e+e− collider KEKB, SuperKEKB aims to increase the peak luminosity by a factor of 40, to 8 × 10^35 cm^-2 s^-1. The SuperKEKB commissioning proceeds in three phases. Phase 1 was successfully finished in June 2016, and the commissioning is now working towards Phase 2, which targets a luminosity of 1 × 10^34 cm^-2 s^-1. In Phase 2 the beam-induced background will be studied further as a function of luminosity and beam current, to ensure a radiation-safe operating environment for the Belle II vertex detector during the physics data taking in Phase 3. Close to the beam pipe, 2 pixel and 4 double-sided strip detector layers will be installed, together with the dedicated radiation monitors FANGS, CLAWS and PLUME, which aim at investigating the backgrounds near the interaction point. The Phase 2 vertex detector integration was practised and combined beam tests were carried out at DESY. In this talk, the status of the Phase 2 vertex detector and the beam test results are presented.

1 Introduction
As an upgrade of KEKB, SuperKEKB [1] at KEK aims at increasing the peak luminosity to 8 × 10^35 cm^-2 s^-1. The Belle II experiment [2] targets new physics beyond the Standard Model with an extensively upgraded detector. The SuperKEKB commissioning campaign is scheduled in three phases. Phase 1 was successfully finished in June 2016, with beams circulated in both rings; a dedicated array of sensors was installed around the interaction point (IP) to monitor and study beam-related backgrounds [3]. The project is now moving towards Phase 2, during which the beam background will be further investigated with collisions, using a dedicated vertex detector system. Besides machine commissioning, the accelerator aims to reach a peak luminosity of 1 × 10^34 cm^-2 s^-1, the design luminosity of KEKB. In April 2017 the partial Belle II detector, without the vertex detector (VXD), was rolled in, and the final focusing magnets were put in place. The first collisions are expected in February 2018. The physics run (Phase 3)


with the full Belle II detector is scheduled for late 2018. The experiment is expected to accumulate an integrated luminosity of about 50 ab^-1 well within the next decade.

2 Vertex Detector in Phase 2


2.1 Belle II VXD
The Belle II VXD consists of a 2-layer DEPFET pixel detector (PXD) [4] and 4 layers of double-sided silicon strips [5]. The DEPFET [6] concept combines signal detection and amplification in one device: in a DEPFET pixel, the electrons ionised in the depleted n-type silicon bulk accumulate in a deep internal n+ gate, which forms a potential minimum, and the drain current is modulated proportionally to the accumulated charge. The internal gain for the Belle II PXD is expected to be about 700 pA/e−. A clear signal removes the accumulated charge and resets the pixel after readout. The DEPFET sensor is operated by three types of ASICs: the Switchers, which perform the row control; the analog front-end, named Drain Current Digitiser (DCD); and the Data Handling Processor (DHP), which performs the pedestal subtraction. The PXD is read out continuously with a 50 kHz frame rate, which, together with the small pixel pitch (50 × 55–80 µm^2), keeps the occupancy below 3%. The sensors are thinned down to 75 µm to achieve an ultra-low material budget of about 0.2% X0 per layer. The 2-layer PXD is expected to significantly improve the resolution of the track parameter determination.
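As a rough illustration of the quoted internal gain (assuming, only for the sake of this estimate, about 80 electron-hole pairs per µm created by a minimum-ionising particle in the 75 µm thick sensor), the drain-current modulation for a through-going track would be of order

$$\Delta I_{\mathrm{drain}} \approx g_q\, N_{e} \approx 700\ \mathrm{pA}/e^- \times \left(75\ \mu\mathrm{m} \times 80\ e^-/\mu\mathrm{m}\right) \approx 4.2\ \mu\mathrm{A}.$$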
The silicon strip vertex detector (SVD) has twice as many channels as its predecessor. The sensors are about 300 µm thick; the p-strips run along the beam axis with a pitch of 50 (75) µm, and the n-strips run tangentially to the barrel with a pitch of 160 (240) µm. The smaller pitches are used in the innermost SVD layer to improve the resolution. For the outer three SVD layers, trapezoidal sensors with a slant angle are adopted in the forward part to improve the spatial precision and minimise the material budget. The fast-shaping (50 ns), radiation-hard (>1 MGy) front-end (FE) chip APV25, originally developed for CMS, is adopted and thinned down to 100 µm.

2.2 Vertex Detector in Phase 2


The 40 times larger instantaneous luminosity is expected to induce significantly higher background levels in all Belle II subdetectors. According to simulation, the QED background as well as synchrotron radiation and the Touschek effect dominate in the VXD region. The dedicated vertex detector for Phase 2 (shown in Fig. 2) comprises 2 PXD and 4 SVD ladders in the +x sector, where the highest background level is expected. Three additional dedicated radiation monitors are arranged around the IP for further investigation of the background:
FANGS (FE-I4 ATLAS Near Gamma Sensors): planar pixel sensors with ATLAS IBL readout chips (FE-I4), to investigate the synchrotron radiation and the deposited energy spectrum of the background.

Fig. 1. The 2-layer PXD modules and 4-layer SVD modules tested in the test beam at
DESY.

CLAWS (sCintillation Light And Waveform Sensors): Plastic scintillators


with SiPM readout, to study the time evolution of background and its decay
constant.
PLUME (Pixelated Ladder with Ultra-Light Material Embedding): double-
layer CMOS pixels, to study the spatial distribution and direction information
of the background.

Fig. 2. The vertex detector dedicated to SuperKEKB commissioning Phase 2. The integration was tested at DESY, where the thermal interference was quantified and the integration sequence was decided. During the test, the SVD, FANGS, CLAWS and PLUME were involved.

The setup investigated at DESY is shown in Figs. 1 and 2. The integration of the Phase 2 VXD was practised to study the thermal interference and to define the integration sequence. The interference between subdetectors was found to be negligible.

3 VXD Beam Tests at DESY


The performance of the VXD system was investigated at the DESY test beam facility. The complete VXD readout chain, including the high-level trigger (HLT), region-of-interest (ROI) determination, event building, CO2 cooling, slow control and environmental monitoring, was exercised in the tests. During the 2016 test beam (TB16), 2 PXD modules and 4 SVD ladders were tested; in TB17, up to 4 PXD modules, the SVD, as well as FANGS and CLAWS took part. The VXD system was illuminated with an e− beam of up to 6 GeV inside a solenoid magnetic field of up to 1 T.
The huge PXD data rate after pedestal subtraction cannot be handled by the software event builder, and most of the PXD hits come from background: the readout chain needs to reduce the data by a factor of 30 [7]. Distinguishing background from signal becomes possible when the information from the outer trackers is taken into account, since the tracks of most physics-relevant events have enough momentum to reach the SVD. By performing online track reconstruction and extrapolating back to the PXD, the regions containing the corresponding PXD hits, referred to as Regions of Interest (ROIs), are predicted [8]. The Online Selection Nodes (ONSEN) buffer the output data and record only the pixels inside the ROIs. The ROIs are determined from two sources: the FPGA-based, SVD-only track finder Data Concentrator (DATCON) and the software-based high-level trigger (HLT), which extracts ROIs using SVD and drift chamber information. Figure 3 shows a typical event in the beam test.

Fig. 3. An event display from the test beam at DESY. The blue curve is the reconstructed track, the yellow lines indicate the fired SVD strips, and the green regions are the ROIs determined on the PXD.
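A minimal sketch of the ROI selection logic described above is given below; the function names and the rectangular-ROI representation are illustrative and not the actual ONSEN implementation: only the PXD pixel hits lying inside at least one ROI extrapolated from SVD/CDC tracks are kept.

def inside(hit, roi):
    """hit = (row, col); roi = (row_min, row_max, col_min, col_max)."""
    r, c = hit
    r0, r1, c0, c1 = roi
    return r0 <= r <= r1 and c0 <= c <= c1

def filter_pxd_hits(pxd_hits, rois):
    """Return only the PXD hits lying inside at least one ROI."""
    return [h for h in pxd_hits if any(inside(h, roi) for roi in rois)]

# Example: two ROIs from extrapolated tracks and a handful of raw hits.
rois = [(100, 120, 40, 60), (300, 320, 200, 220)]
raw_hits = [(110, 50), (10, 5), (305, 210), (400, 400)]
print(filter_pxd_hits(raw_hits, rois))   # -> [(110, 50), (305, 210)]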

In TB16, six EUDET telescope planes were placed upstream and downstream of the six VXD layers, and the hit efficiency and spatial resolution of the VXD sensors were determined. For the PXD modules, most matrix columns achieved an efficiency
above 98%; the residual RMS for single-hit clusters was determined to be 14.3 µm, consistent with the digital resolution of pitch/√12 [9]. The residuals of the SVD sensors were determined to be ∼11 µm on the r−φ side and ∼30 µm on the z side. All SVD sensors under test achieved an average efficiency above 99.5% per strip on both sides [10]. The charge deposited in FANGS was quantified in TB17. One advantage of the FE-I4 chip is that the signal length carries the deposited-charge information: in FANGS, the HitOr signal of each sensor is sampled with an external 640 MHz FPGA clock, resulting in a 12-bit resolution for the charge measurement. The measured mean value from a Landau fit is 17.3 ke, comparable within a 5% error to the expected value of 18 ke for a 250 µm thick silicon sensor [11].
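As a consistency check of the quoted single-hit resolution, the digital (binary-readout) resolution for the 50 µm pitch quoted in Sect. 2.1 is

$$\sigma_{\mathrm{digital}} = \frac{\mathrm{pitch}}{\sqrt{12}} = \frac{50\ \mu\mathrm{m}}{\sqrt{12}} \approx 14.4\ \mu\mathrm{m},$$

in good agreement with the measured residual RMS of 14.3 µm.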

4 Summary and Outlook

SuperKEKB commissioning Phase 2 will start in February 2018 to further investigate the beam-induced background; the partial Belle II detector has been rolled in and the final focusing magnets are in place. The dedicated vertex detector, including one sector of PXD and SVD as well as the additional radiation monitors FANGS, CLAWS and PLUME, aims to study the background near the IP and to ensure a radiation-safe environment for VXD operation. The integration of the Phase 2 VXD was practised at DESY, and the detector performance was characterised at the DESY test beam with the full VXD readout chain integrated in the tests.
In the summer of 2017, all subdetectors studied at DESY will be shipped to KEK for Phase 2. In parallel, the final PXD integration for the Belle II physics run is under preparation at DESY.

References
1. Ohnisi, Y., et al.: PTEP 2013, 03A011
2. Abe, T., et al.: KEK Report 2010-1 (2010). arXiv: 1011.0352v1
3. Miroslav, G.: The Belle II / SuperKEKB Commissioning Detector - Results from the First Commissioning Phase. TIPP2017 talk
4. Marinas, C.: Nucl. Instrum. Methods 731, 31 (2013)
5. Friedl, M., et al.: Nucl. Instrum. Methods A 732, 83 (2013)
6. Kemmer, J., Lutz, G.: Nucl. Instrum. Methods 253, 356 (1987)
7. Geßler, T., et al.: IEEE Trans. Nucl. Sci. 62, 1149 (2015)
8. Konno, T.: Integration of readout of the vertex detector in the Belle II DAQ system.
TIPP2017 talk
9. Schwenker, B., et al.: PoS(Vertex 2016)011
10. Lück, T., et al.: PoS(Vertex 2016)057
11. Khetan, N.: Master thesis, Development and integration of the FANGS detector for
the BEAST II experiment of Belle II, Rheinischen Friedrich-Wilhelms-Universität
Bonn
Radiative Decay Counter for Active
Background Identification in MEG II
Experiment

Ryoto Iwai(B) , Kei Ieki, and Ryu Sawada

ICEPP, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan


iwai@icepp.s.u-tokyo.ac.jp

Abstract. The Radiative Decay Counter (RDC) is a key ingredient in


the MEG II experiment. The sensitivity of the μ+ → e+ γ branching
ratio will be improved by 16% by identifying a photon from a radiative
muon decay, which is the dominant background source of the search.

Keyword: Experimental detector systems

1 Introduction

The lepton flavor violating μ+ → e+γ process would be clear evidence of new physics models. The upper limit on the branching ratio, B < 4.2 × 10^-13 (90% C.L.), was set by the MEG experiment [1]. Currently, the upgraded experiment (MEG II) is being prepared to reach an order of magnitude better sensitivity [2]. One of the challenges is to reduce the background rate in the higher muon decay rate environment (70 MHz). The dominant background is the accidental coincidence of a high-momentum positron from a Michel decay (μ+ → e+ νe ν̄μ) and a high-energy photon from a radiative muon decay (RMD; μ+ → e+ νe ν̄μ γ). Such a background event is accompanied by the emission of a low-momentum (<5 MeV) positron from the RMD, which is quickly swept along the beam axis by the magnetic field. By newly installing the RDC downstream of the muon stopping target and detecting these time-coincident positrons, the sensitivity can be improved by 16% (Fig. 1).

2 Detector Design, R&D

The RDC detector has to be compact (∼20 cm) because it is installed inside the superconducting magnet bore. At the same time, it has to operate in a high hit rate environment (∼MHz), because many Michel positrons also hit the detector. For these reasons, the detector consists of finely segmented scintillators read out with SiPMs. The detector is installed on a moving arm, to allow the insertion of a calibration target for the MEG II photon detector from the downstream side.


Fig. 1. Example of the dominant background event. Red dashed line and blue solid line
represent the background photon and positron respectively. Red solid line represents
the time coincident positron detected by the RDC.

Figure 2 shows the constructed detector. The arrival times of the positrons are measured by twelve 5 mm thick fast plastic scintillator bars in the front part of the RDC. Their widths (lengths) range from 1 to 2 cm (7 to 19 cm); the narrower bars are used in the higher hit rate region. The scintillation light is collected at the two ends with two or three 3 × 3 mm^2 SiPMs (Hamamatsu S13360-3050PE). To reduce the number of channels, the SiPMs are connected in series. A timing resolution of ∼90 ps was obtained for each counter in laboratory tests. Behind the plastic scintillators there are 76 LYSO crystals with a size of 2 × 2 × 2 cm^3. The positron energy is measured to distinguish pile-up of energetic Michel positrons from RMD positrons. Each crystal is read out with a single 3 × 3 mm^2 SiPM (Hamamatsu S12572-025P), mounted on a PCB with a flexible circuit.

Fig. 2. Top left: Plastic scintillators. Bottom left: LYSO crystals. Right: RDC detector
mounted on the moving arm.

The SiPM is fixed on the back side of the crystal with a spring. Thanks to the high light yield of the crystals, a sufficiently good energy resolution was obtained (∼6% for 1 MeV positrons). Moreover, due to the contained radioisotope 176Lu, each crystal has ∼2 kHz of intrinsic radioactivity, which produces an energy peak around 600 keV; this is used for the energy scale calibration of each channel.
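A minimal sketch of such a per-channel calibration (illustrative code and toy numbers, not the actual MEG II analysis) locates the intrinsic-radioactivity peak in an ADC spectrum and converts its position into an energy scale.

import numpy as np

LU176_PEAK_KEV = 600.0   # approximate position of the intrinsic 176Lu peak

def energy_scale(adc_spectrum, bin_centers_adc):
    """Return keV per ADC count from the position of the 176Lu peak."""
    peak_adc = bin_centers_adc[np.argmax(adc_spectrum)]
    return LU176_PEAK_KEV / peak_adc

# Example: a toy spectrum peaking near 1200 ADC counts.
bins = np.linspace(0, 4000, 400)
toy = np.exp(-0.5 * ((bins - 1200.0) / 120.0) ** 2)
print(f"{energy_scale(toy, bins):.3f} keV/ADC")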

3 Commissioning
The commissioning of the RDC detector was completed in 2016 using a high-intensity muon beam. The trigger system and the MEG II DAQ electronics (WaveDREAM [3]) were also tested. As a substitute for the MEG II photon detector, 16 BGO crystals with PMTs were used to detect the photons from RMD. Before the data taking, a series of calibrations was performed. To optimise the bias voltage of each SiPM, Michel-decay positron data and LYSO intrinsic-radioactivity data were triggered by the timing counters and the calorimeter, respectively. The absolute energy scales of the BGO crystals were calibrated using the 1.8 MeV gamma rays from 88Y.
Data were acquired for a few days by triggering on a hit in any BGO crystal. Because most of the events were triggered by cosmic rays, RMD events were first selected using the hit position and the total energy deposit in the BGO crystals. Figure 3 shows the time difference between the RDC and the BGO detector after the event selection: a clear timing peak of RMD events is observed. The flat regions correspond to pile-up Michel positrons, mostly triggered by photons from positron annihilation in flight. The pile-up rejection was demonstrated by cutting events with a large energy deposit in the LYSO crystals (>4 MeV, Fig. 4). The detection efficiency of the RDC will be measured using the MEG II photon detector in 2017.
[Figure 3 shows the distribution of T_RDC − T_BGO (in units of 10^-9 s) with and without the LYSO cut; Fig. 4 shows the energy deposit in the LYSO crystals in MeV.]

Fig. 3. Hit time difference between the BGO crystals and the RDC after event selection.
Fig. 4. Energy deposit in the LYSO crystals.

4 Upstream Detector Option


By installing an RDC detector also upstream of the muon stopping target, a further reduction of the background is possible. However, installing the upstream detector is more difficult because it has to operate in the high-intensity muon beam (∼10^8 µ+/s). In the provisional design, the timing of the positrons is measured with a layer of thin scintillating fibers (Fig. 5). About 780 square fibers with a 250 µm cross section will be used. To reduce the number of readout channels, the fibers will be grouped into a few tens of bundles, and each bundle end will be read out with a single SiPM (Fig. 6).
The influence of the detector on the muon beam transport is sufficiently small: although the beam spot size at the target position increased by 7.5% in a mock-up test, the impact on the physics analysis is small according to simulation, and the change in the muon stopping rate at the target is only −0.5%. The sensitivity improvement with the upstream detector largely depends on the pile-up rejection capability; with both the downstream and the upstream detectors installed, the improvement is expected to be 22–28%. On the other hand, the performance degradation of the scintillating fibers under the large total integrated dose (∼MGy) is still unclear, and an irradiation test is planned at the Paul Scherrer Institute irradiation facility.

Fig. 5. CG of the upstream detector. Fig. 6. Bundled fibers (64 fibers × 2).

5 Conclusion

In MEG II, the RDC will be newly installed to improve the sensitivity by 16%. It identifies the dominant background photons from RMD by detecting the time-coincident low-momentum positrons. We constructed a detector with a compact design and good performance in a high-rate environment, and the background identification capability was successfully demonstrated in the beam test. A series of studies for the upstream detector is in progress; with both RDC detectors installed, the sensitivity improvement will be 22–28%.

References
1. Baldini, A.M., et al.: Search for the lepton flavour violating decay μ+ → e+ γ with
the full dataset of the MEG experiment. Eur. Phys. J. C. 76, 434 (2016)
2. Baldini, A.M., et al.: MEG Upgrade Proposal. arXiv:1301.7225 (2013)
3. Baldini, A.M., et al.: An FPGA-based trigger for the phase II of the MEG experi-
ment. Nucl. Instrum. Meth. A 824, 326 (2016)
Belle II iTOP Optics: Design,
Construction and Performance

Boqun Wang(B) , Saurabh Sandilya, Bilas Pal, and Alan Schwartz

University of Cincinnati, Cincinnati, OH 45221, USA


boqunwg@ucmail.uc.edu

Abstract. The imaging-Time-of-Propagation (iTOP) counter is a new type of ring-imaging Cherenkov counter developed for particle identification at the Belle II experiment. It consists of 16 modules arranged azimuthally around the beam line, each consisting of one mirror, one prism and two quartz bar radiators. Here we describe the design, acceptance tests, alignment, gluing and assembly of the optical components. All iTOP modules were successfully assembled and installed in the Belle II detector by the middle of 2016. After installation, laser and cosmic ray data were taken to test the performance of the modules. First results from these tests are presented.

Keywords: Cherenkov detector · iTOP counter · Belle II

1 Introduction
The Belle II [1]/SuperKEKB [2] experiment is an upgrade of the Belle/KEKB experiment to search for New Physics (NP) beyond the Standard Model (SM). The upgraded experiment plans to collect ∼50 ab^-1 of e+e− collision data, with a design luminosity of 8 × 10^35 cm^-2 s^-1, about 40 times that of the KEKB collider. To achieve such a high luminosity, the so-called nano-beam technology [3] is used to squeeze the beam bunches significantly.
Many sub-detectors of Belle are upgraded for Belle II. This includes the newly designed iTOP (imaging-Time-Of-Propagation) counter [4–7], the particle identification counter in the barrel region. It consists of 2.7 m long quartz optics for the radiation and propagation of the Cherenkov light, an array of micro-channel-plate photomultiplier tubes (MCP-PMTs) [8] for photon detection, and waveform-sampling front-end readout electronics [9,10]. This article describes the design, construction and performance of the optics for the iTOP counter.

2 Detector Design
As shown in Fig. 1, one iTOP module consists of two quartz bars, each with dimensions of 1250 × 450 × 20 mm. At one end of the bars is a reflection mirror with a

spherical surface; at the other end is an expansion block called the prism. All optical components are made of Corning 7980 synthetic fused silica, which has high purity and no internal striae.
When a charged track goes through the quartz radiator, it emits Cherenkov photons. The Cherenkov angle depends on the mass of the particle for a given momentum; the latter is measured by the central drift chamber (CDC). The photons are reflected by the bar surfaces and the reflection mirror, and are then collected by the MCP-PMTs at the prism end. The combined time resolution of the photon sensors and front-end electronics is required to be better than 50 ps, which is needed to distinguish the time-of-propagation difference between the Cherenkov photons from π± and K± tracks.
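To illustrate why such a tight timing requirement is needed, the short sketch below (assuming a refractive index of about 1.47 for fused silica and a 2 GeV/c track; the numbers are illustrative only) computes the Cherenkov-angle difference between pions and kaons, which translates into the small time-of-propagation difference the counter must resolve.

import math

N_SILICA = 1.47             # assumed refractive index of fused silica
M_PI, M_K = 0.1396, 0.4937  # particle masses in GeV/c^2

def cherenkov_angle(p, m, n=N_SILICA):
    """Cherenkov angle (rad) for momentum p (GeV/c) and mass m (GeV/c^2)."""
    beta = p / math.hypot(p, m)
    return math.acos(1.0 / (n * beta))

p = 2.0  # GeV/c
d_theta = cherenkov_angle(p, M_PI) - cherenkov_angle(p, M_K)
print(f"pi/K Cherenkov angle difference at {p} GeV/c: {math.degrees(d_theta):.2f} deg")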

3 Module Construction
The construction of the production iTOP modules started at the end of 2014, and all 17 modules, including one spare, were finished by April 2016. After tests with laser and cosmic rays, the modules were installed in the Belle II detector by May 2016.

3.1 QA of Quartz Optics


To achieve the high K/π separation capability of the iTOP counter, the optics need to be of very high quality. The Cherenkov photons can reflect hundreds of times inside the quartz radiator, so the surfaces of the quartz bars need to be highly polished: the requirement on the surface roughness is <5 Å r.m.s. and on the flatness <6.3 µm. Of the 34 bars needed, 30 were produced by Zygo Corporation (USA) and 4 by Okamoto Optics Works (Japan).
After receiving the quartz bars from the vendors, they were mounted on a measurement stage for the QA tests. By injecting a laser beam either perpendicular to one surface of the bar or at an angle to it, the bulk transmittance and the internal reflectivity of the bar were determined from the laser intensity measured before and after passing through the bar. The requirements for the bulk transmittance and the internal reflectivity were >98.5%/m and >99.9%, respectively. As shown in Fig. 2, all received quartz bars met the requirements.
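A minimal sketch of how such intensity ratios can be turned into the two QA figures of merit is given below; the measurement model is simplified and the numerical values are illustrative, not the actual analysis procedure.

import math

def bulk_transmittance(ratio_perp, path_m):
    """T per metre from the straight-through (no-reflection) intensity ratio."""
    return ratio_perp ** (1.0 / path_m)

def internal_reflectivity(ratio_angled, path_m, n_bounces, t_per_m):
    """R per bounce once the bulk attenuation over path_m is divided out."""
    bulk = t_per_m ** path_m
    return (ratio_angled / bulk) ** (1.0 / n_bounces)

t = bulk_transmittance(0.9862, 1.25)          # e.g. ratio 0.9862 over 1.25 m
r = internal_reflectivity(0.90, 2.5, 100, t)  # e.g. 100 bounces over 2.5 m
print(f"T = {t:.4f}/m, R = {r:.5f}")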
For the QA of the other optics, the most important tests were the angle of the tilted surface of the prism and the radius of curvature of the mirror's spherical surface. They were measured by injecting a laser into the optics and measuring the beam direction after it passed through or was reflected by the optics. All optics met the requirements.

3.2 Gluing and Assembly


After the QA process was finished and all the quartz optics passed the require-
ments, two quartz bars, one prism and one mirror were mounted on a gluing
stage for precision alignment and gluing.

Fig. 1. Optical overview of the iTOP detector.
Fig. 2. Summary of quartz bar QA results.

Two laser displacement sensors and an autocollimator were used for the alignment. A laser displacement sensor measures the distance to a surface; it was used to align the positions, both horizontal and vertical, of the surfaces of two optical components. The autocollimator injects a laser onto a mirror mounted on the surface of the optics and measures the angle of the reflected beam relative to the incident one; it was used to align the angle between the surfaces of two optical components.
After the alignment, the optics were brought together with a 50–100 µm gap. The joints between the optics were taped with Teflon tape to form a "dam" preventing the epoxy from flowing out. The epoxy used for gluing was EPOTEK 301-2. It consists of two parts, which need to be mixed before gluing; the mixture was centrifuged to remove air bubbles. The adhesive was then applied from a syringe to the glue joint using high-pressure dry air, and it took 3–4 days to cure fully.
When the curing process was finished, the excess glue was removed with acetone. Since the alignment may change during curing, it was measured again. The achieved horizontal and vertical angles between the two optics near the glue joint were within ±40 arcsec and ±20 arcsec, respectively.
The completed iTOP optics were moved into a Quartz Bar Box (QBB), a light-tight supporting structure made of aluminum honeycomb plates that provides high rigidity with little material. The optics are supported by PEEK buttons glued to the inner surfaces of the QBB. Once the QBB assembly was completed, the MCP-PMT modules with their front-end electronics were installed at the prism end of the module.

3.3 Installation

After its assembly was completed, each iTOP module was transferred by truck to the experimental hall for installation. Each module was installed using movable stages: the module was mounted on a guide pipe supported by the stages, so that it could move along and rotate around the guide pipe. The module deflection during the installation was monitored by deflection
sensors and was required to be less than 0.5 mm. The installation of all the modules was completed in May 2016. More details can be found in Ref. [11].

4 Performance with Cosmic Ray


After installation, cosmic ray data were taken to validate the detector performance with and without the magnetic field. Six cosmic ray triggers were prepared, each consisting of a plastic scintillator bar. Their positions along the beam axis coincided with the collision point, and their positions in the x-y plane could be changed as needed.
For this performance test with cosmic rays, tracking information was not available, so the number of observed photon hits in each iTOP module was compared between data and MC simulation. Without magnetic field, the number of photon hits in data was consistent with the MC simulation within 15%; with the 1.5 T magnetic field, the discrepancy is at the 20–30% level. There are several possible reasons for this, including the angle and momentum distributions of the cosmic ray muon flux, the hit identification efficiency of the MCP-PMTs, etc. More detailed studies are planned using tracking information from the CDC.

5 Summary
The iTOP counter is a novel particle identification device in the barrel region of the Belle II detector. In this article we described the design, construction and performance of the iTOP counter. The last iTOP module was finished and installed in May 2016, and the Belle II detector was moved to the beam line in April 2017. The global cosmic ray data taking is currently ongoing, for the purpose of testing, calibrating and integrating the sub-detectors, including the iTOP counters.

Acknowledgments. This research is supported under DOE Award DE-SC0011784.


I’d like to thank the organizers of the TIPP 2017 conference for allowing me to give
this talk.

References
1. Abe, T., et al.: KEK-REPORT-2010-1 (2010)
2. Ohnishi, Y., et al.: Prog. Theor. Exp. Phys. 2013(3), 03A011 (2013)
3. Raimondi, P.: Talk Given at the 2nd SuperB Workshop, Frascati (2006). http://
www.lnf.infn.it/conference/superb06/talks/raimondi1.ppt
4. Ohshima, T.: ICFA Instrum. Bull. 20, 10 (2000)
5. Akatsu, M., et al.: Nucl. Instr. Meth. A 440, 124 (2000)
6. Ohshima, T.: Nucl. Instr. Meth. A 453, 331 (2000)
7. Enari, Y., et al.: Nucl. Instr. Meth. A 494, 430 (2002)
8. Matsuoka, K.: For the Belle II PID group, PoS(TIPP2014)093
9. Andrew, M.: IEEE Realtime Conf. Rec. 1–5 (2012)
10. Andrew, M.: PoS (TIPP2014) 171 (2014)
11. Suzuki, K., et al.: Nucl. Instr. Meth. A 876, 252 (2017)
Gas Systems for Particle Detectors at the LHC
Experiments: Overview and Perspectives

R. Guida(&), M. Capeans, and B. Mandelli

CERN, Geneva, Switzerland


roberto.guida@cern.ch

Abstract. Across the five experiments (ALICE, ATLAS, CMS, LHCb and TOTEM) taking data at the CERN Large Hadron Collider (LHC), 30 gas systems deliver the proper gas mixture to the corresponding detectors. They are complex systems that extend over several hundred meters and have to ensure extremely high reliability in terms of the stability and quality of the gas mixture delivered to the detectors: the gas mixture is the sensitive medium, and a correct and stable composition is a basic requirement for good and safe long-term operation. The present contribution describes the design philosophy, focusing on the main functional modules, and discusses the reliability over the past years.

Keywords: Gas system  Gaseous detector  LHC

1 Introduction

The basic function of a gas system is to mix the different gas components in the appropriate proportions and to distribute the mixture to the individual chambers. Across the five experiments (ALICE, ATLAS, CMS, LHCb and TOTEM) taking data at the CERN Large Hadron Collider (LHC), 30 gas systems (corresponding to about 300 gas racks) deliver the proper gas mixture to the corresponding detectors. The gas mixture is the sensitive medium in which the charge multiplication producing the signal takes place, and a correct and stable mixture composition is a basic requirement for good and stable long-term operation of all detectors.
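As a minimal illustration of the mixing function (the gas names, fractions and total flow below are purely illustrative and do not correspond to any actual LHC mixture), the per-component mass-flow setpoints simply follow from the target fractions and the total flow:

def mfc_setpoints(fractions, total_flow_lph):
    """fractions: dict of component -> volume fraction (must sum to 1).
    Returns the flow (l/h) to request from each mass-flow controller."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-6
    return {gas: f * total_flow_lph for gas, f in fractions.items()}

# Example with invented numbers: a three-component mixture at 1000 l/h.
print(mfc_setpoints({"Ar": 0.40, "CO2": 0.55, "CF4": 0.05}, 1000.0))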
The gas systems were built according to a common standard, minimising the manpower and costs for maintenance and operation. Their construction started in the early 2000s, and the first systems were put into operation between 2005 and 2006.
The gas systems for the LHC experiments are the result of a common effort between the CERN Gas Service Team (nowadays part of the CERN/EP department) and the CERN/EN and BE departments. The CERN Gas Service Team is responsible for designing, building, operating and maintaining the gas systems; CERN/BE is responsible for the development of the software controls (following the requirements provided by the CERN Gas Service Team); and the CERN/EN department is responsible for the procurement and monitoring of the primary gas supply (Table 1).


Table 1. Summary of gas systems at the LHC experiments

2 The Gas Systems

The gas systems extend from the surface building, where the primary gas supply point is located, to the service balcony on the experiment, following a route a few hundred meters long. A typical gas system is distributed over three levels (Fig. 1 shows the typical layout of a gas recirculation system): the surface room (SG), the underground service room (UGC) and the experimental cavern (UXC). The final gas distribution to the detectors is located in UXC, where the experiment itself is installed.
Given the large detector volumes and the use of relatively expensive gas components, for technical and economic reasons most of the systems are operated in closed-loop gas circulation with a recirculation fraction higher than 90–95%.

Fig. 1. Typical layout of a gas recirculation system for the LHC experiments

In order to facilitate construction and maintenance, the gas systems were designed from functional modules (i.e. elementary building blocks). Functional modules are, for example: mixer, pre-distribution, distribution, circulation pump, purifier, humidifier, membrane, liquefier, gas analysis, etc. The mixer, for instance, is basically identical in every system, but it can be configured to satisfy the specific needs of each detector. This module-oriented design is reflected in the implementation: each system has a control rack where the PLCs¹ and all the electronics crates corresponding to the functional modules are located (Fig. 2). The control software for the gas system runs in the PLCs, while the crates collect the I/O information from the corresponding modules and are connected to the PLC through Profibus². This approach also facilitated the installation and commissioning work, especially when not all modules of a particular system were ready for installation at the same time.

Fig. 2. Typical gas systems control module where the main PLC is visible together with all the
control crates related to the functional modules.

The operation of all gas systems is continuously followed by the CERN Gas Service Team. Alarms produced during operation are propagated via email and SMS to the CERN Gas Service Team as well as to the detector teams, and a 24/7 on-call service is available for interventions outside working hours.
In the following, the gas recuperation modules are used as an example; more details about the gas systems and a description of the other functional modules can be found in [1].

¹ A PLC (Programmable Logic Controller) is an industrial PC with basic functionalities.
² PROFIBUS (Process Field Bus) is a standard for field-bus communication in automation technology, first promoted in 1989 by the BMBF (German federal ministry of education and research) and then used by Siemens.

Gas recuperation systems are used to minimise operational costs, in particular during the emptying of detector volumes containing expensive gases, or to reduce the level of impurities without increasing the fresh mixture injection. Several recuperation plants have been developed for the LHC gas systems; they are used to recuperate Xe, C4F10, nC5H12 and CF4. In most of the recuperation plants, the mixture returning from the detector is cooled down to the liquefaction point of the expensive gas, and the liquid is then recuperated and stored either in the liquid or in the gas phase. A different method was developed for the CF4 recuperation [2]: the plant makes use of gas separation membranes and selective adsorption in different molecular sieves in order to achieve a good CF4 recuperation efficiency (Figs. 3 and 4). The CF4 recuperation plant was fully commissioned during 2011, and CF4 recuperation started in 2012. Recuperated CF4 has already been used for detector operation on many occasions without any problem.
All plants are fully automated and controlled by PLCs.

Fig. 3. Schematic of the CF4 recuperation plant illustrating the working principle and the
different operational phases

Fig. 4. View of the CF4 recuperation plant installed in the SGX5 building at CMS experiment.

3 Operational Experience

Starting from 2010, all gas systems have been operated continuously. The data collected during these years (2010–2017) allowed a first reliability study to be compiled and systems where extra consolidation work might be needed to be identified. Figure 5 shows the average reliability of the gas systems: it is always greater than 99.98%, corresponding to less than 1.5 h of down-time per year per system (power cuts and external sources excluded).
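As a simple cross-check of these numbers (taking one year as 8760 h), the yearly downtime follows directly from the availability A:

$$t_{\mathrm{down}} = (1 - A) \times 8760\ \mathrm{h}, \qquad A = 99.983\% \ \Rightarrow\ t_{\mathrm{down}} \approx 1.5\ \mathrm{h}.$$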

Fig. 5. LHC gas system reliability during the last three years: during all periods the availability
was always greater than 99.95% (power-cuts and external sources excluded).

Most interventions were triggered by alarms received via SMS or by calls from the experiments' control rooms; however, a significant number also originated from routine checks performed by the CERN Gas Service Team.
An extensive maintenance program has been developed for the LHC shutdown periods. In addition to the standard yearly maintenance (circulation pumps, safety valves, power supplies, …), during Long Shutdown (LS) periods all analysis devices, mass-flow controllers (MFCs) and flow cells used in the final distribution modules are verified and re-calibrated. Depending on specific needs, LS periods are also used to upgrade or consolidate specific gas systems.

4 Conclusions

30 gas systems are delivering the required gas mixtures to the particle detectors at the LHC experiments. The gas systems were designed and built from functional modules in order to simplify the maintenance and operational activities. All systems are fully automated: they are locally controlled by an industrial PLC and remotely accessible through a PVSS interface. A few examples of functional modules were discussed in the present contribution.
The operational experience over the last six years (2010–2016) has demonstrated an impressive reliability: greater than 99.98%, corresponding to less than 1.5 h of down-time per year (power cuts and external problems excluded). An intensive maintenance and consolidation program has been developed to maintain and possibly improve this exceptional reliability in the years to come.

References
1. Guida, R., et al.: the gas systems for the LHC experiments. In: IEEE Nuclear Science
Symposium Conference Record (2013)
2. Guida, R., et al.: Results from the first operational period of the CF4 recuperation plant for the
Cathode Strip Chambers detector at the CERN Compact Muon Solenoid experiment. In: IEEE
Nuclear Science Symposium Conference Record, pp. 1141–1145 (2012)
3. Guida, R., et al.: Commissioning of the CF4 recuperation plant for the Cathode Strip
Chambers detector at the CERN Compact Muon Solenoid experiment. In: IEEE Nuclear
Science Symposium Conference Record, pp. 1814–1821 (2011)
4. Guida, R., et al.: Development of a CF4 recuperation plant for the Cathode Strip Chambers
detector at the CERN Compact Muon Solenoid experiment. In: IEEE Nuclear Science
Symposium Conference Record, pp. 1433–1438 (2010)
Gas Mixture Monitoring Techniques
for the LHC Detector Muon Systems

M. Capeans, Roberto Guida, and Beatrice Mandelli(&)

CERN, Geneva, Switzerland


beatrice.mandelli@cern.ch

Abstract. At the LHC experiments, the muon systems are equipped with different types of gaseous detectors that need to maintain high performance until the end of the LHC run. One of the key parameters for good and safe long-term detector operation is the composition and quality of the gas mixture: a wrong gas mixture composition can degrade the detector performance or cause aging effects and irreparable damage. It is therefore a fundamental requirement to verify and monitor the quality of the detector gas mixture.
In recent years several gas monitoring techniques have been studied and developed at CERN to automatically monitor the detector gas mixture composition as well as the impact of the gas quality on detector performance. In all LHC experiments, a gas analysis module allows continuous monitoring of the O2 and H2O concentrations in several zones of the gas systems for all muon detectors. More sophisticated and precise gas analyses are performed with gas chromatographs and mass spectrometers, which have sensitivities at the ppm level and allow the correctness of the gas mixture composition to be verified. In parallel to the standard gas analysis techniques, a gas monitoring system based on single-wire proportional chambers has been implemented: thanks to their high sensitivity, these detectors are well suited to detecting possible aging contaminants.

Keywords: LHC experiments  Gas analysis  Gas systems

1 The Gas Systems for the LHC Muon Detectors

At the LHC experiments, 30 gas systems deliver the proper gas mixture to the corresponding detectors [1]. They are complex installations that have to ensure extremely high reliability in terms of the stability and quality of the gas mixture delivered to the detectors: the gas mixture is the detector's sensitive medium, and a correct and stable composition is a basic requirement for good and safe long-term operation.
A modular design is adopted for the construction of the LHC gas systems. Every module fulfils a specific function and can be configured to satisfy the needs of different detectors. The gas systems can be operated in two different modes:
– “open mode”: the mixture is exhausted to atmosphere after being passed through the
detector
– “recirculation mode”: the mixture is collected after being used in the detector and is
continuously re-injected into the supply lines.


About 50% of the LHC gas systems are operated in gas recirculation mode, which is mandatory in the case of large gas volumes or expensive gas mixtures. The recirculation fraction varies between 90% and 100% depending on the detector and on gas system constraints. Since the renewal period of the gas volume is longer, the quality of the gas mixture and the accumulation of impurities become typical issues that need to be kept under control.
– wrong gas mixture composition
– bad quality of the supplied gases
– accumulation of impurities in the recirculation system
– gas parameters (pressure, gas flow, cleaning agents, etc.)
These variations can have an impact on the detectors' dark currents, gas gain, operating voltage, etc. Monitoring of the gas system parameters and of the gas mixture quality is therefore fundamental.

2 Standard Gas Analysis Techniques for the LHC


Experiments

2.1 O2, H2O and Infrared Analyzers


Each experiment is equipped with a standardised analysis rack for O2 and H2O measurements of several gas sampling points distributed over different regions of the detector gas systems. Control software regularly scans the gas sampling points of the different detectors, and the data are made available to the experiments. If other gases need to be detected (for example flammable gases), the analysis rack can be equipped with dedicated infrared-light analysers.
The O2 scan of all gas sampling points is crucial for detecting anomalies or problems in the gas system: an O2 concentration deviating from the standard values can be a symptom of bad gas supply quality, developing leaks or air intake in the detector.

2.2 Gas Chromatograph Stations


The correct composition of the gas mixture delivered to the detectors is fundamental for
good and safe long-term operation. The exact concentration of each gas component as
well as the detection of different types of gas impurities can be measured through
dedicated analysis instruments, like micro gas chromatograph (uGC). At CERN six
uGC stations are installed in different test areas or in the LHC experiments. In par-
ticular CMS and LHCb are already equipped with permanent uGC stations connected
to the selection manifold of the standard analysis rack to measure the gas composition
for all gas analysis streams. Each uGC station allows the identification of several gas
components with sensitivity of the order of ppm. Figure 1 shows an example of gas
chromatograph analysis for the CMS Cathode Strip Chamber (CSC) detector: the three
gas mixture components are separated and quantified with the PPU uGC column while
O2 and N2 contaminations are detected with the MS5A uGC column.

Fig. 1. Gas chromatograms of the CMS CSC gas mixture at two analysis points, "CSC mixer" and "CSC supply to the detector": (a) gas chromatogram of the CMS CSC gas mixture composition; (b) gas chromatogram of the CMS CSC impurity (O2 and N2) concentrations.

The regular monitoring of the gas mixture with the uGC stations allows the origin of potential problems, such as a low-quality gas supply, drifts in the calibration of the MFCs or faults in gas system components, to be identified.
For each experiment there are at least 20 analysis streams, whose GC analysis requires a considerable investment of resources and time. A dedicated system has therefore been installed in the underground service cavern of the CMS experiment to automatically sample the different regions of the muon system [2]. The system is equipped with a three-column uGC station connected to three 16-way valves, allowing a total of 48 gas samples coming from the outputs of the CSC, Drift Tube (DT) and Resistive Plate Chamber (RPC) detectors to be analysed.

3 Alternative Gas Analysis Techniques

Standard gas analysers and gas chromatograph techniques are fundamental for monitoring the quality of the detector gas mixture. Nevertheless, in some cases their sensitivity or specifications are not sufficient for specific analysis requirements, and alternative gas analysis techniques are used instead.

3.1 Detection of Impurities in Gas Recirculation Systems


The quality of the gas mixture can deteriorate in a recirculation system; furthermore, under the effect of high electric fields and radiation, complex gas molecules can break up inside the detector volume, creating new chemical species that can accumulate in the gas system.
These impurities can be detected and identified only with a mass spectrometer (MS) coupled to a uGC. Systematic studies have been conducted on the impurities that can be created inside the RPC detector volume, where the main gas component is C2H2F4 [3]: under the effect of high electric fields and radiation, C2H2F4 can break into different ions, creating new molecules. The addition of purifiers with different cleaning agents helps to reduce these impurities to a minimum level.

As a complementary analysis tool for the study of fluorinated species, an ion-selective electrode (ISE) station is used to measure free fluoride ions in the gas mixture. A set-up has been installed in the ALICE Muon Trigger (MTR) system to quantify the F− pollution present in the gas mixture exiting the detector and after the gas cleaning process.

3.2 Gaseous Detectors as Tools for Gas Monitoring


GC analysis techniques may not be sensitive enough if the impurity concentration is below the ppm level, or in the case of unknown impurities that cannot be detected by the uGC columns. Furthermore, the GC stations sample several gas lines and cannot work continuously on a single gas stream. For these reasons, an alternative gas analysis technique has been developed using Single Wire Proportional Chambers (SWPCs), which, thanks to their large drift volume, are extremely sensitive to gas mixture variations, impurities (below the ppm level) and material outgassing [4]. SWPCs can easily be installed in the gas line, providing online and continuous gas monitoring. The first application of an SWPC as a monitoring tool has been implemented in the CMS CSC gas system, for monitoring the gas mixture in two different regions of the system. Figure 2 shows an example of gas monitoring with the SWPCs: an increase of the O2 concentration in the system was readily detected as a variation of the SWPC gain.
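A minimal sketch of the kind of automatic check such a monitoring chamber enables (the threshold and function names are hypothetical, not the actual CMS monitoring code): compare the measured gain history with a baseline and flag relative deviations that may indicate impurity build-up such as the O2 increase mentioned above.

def flag_gain_drift(gains, baseline, tolerance=0.05):
    """Return the indices where the gain deviates from the baseline
    by more than the given relative tolerance."""
    return [i for i, g in enumerate(gains)
            if abs(g / baseline - 1.0) > tolerance]

# Example: a slow downward drift crossing the 5% threshold.
history = [1.00, 0.99, 0.97, 0.94, 0.92]
print(flag_gain_drift(history, baseline=1.00))   # -> [3, 4]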

Fig. 2. Trend of SWPC normalized gain and O2 concentration in the gas mixture “Supply to the
Detector”. The purifier cycles are indicated with the different colours.

4 Conclusions

About 30 gas systems deliver the proper gas mixtures to the gaseous detectors at the CERN LHC experiments. The quality and composition of the gas mixtures are essential to avoid temporary or unrecoverable degradation of the detector performance, so their verification and monitoring is crucial. Several standard and alternative gas analysis techniques have been established and are in use at CERN for all
experiments. Thanks to these gas monitoring systems it is possible to spot any gas variation and to correlate the detector performance with gas-mixture-related problems.

References
1. Guida, R., Capeans, M., Hahn, F., Haider, S., Mandelli, B.: The gas systems for the LHC
experiments. IEEE Trans. Nucl. Sci. (2013)
2. Capeans, M., Focardi, E., Hahn, F., Haider, S., Guida, R., Mandelli, B.: A common analysis
station for the gas systems of the CMS experiment at the CERN-LHC. In: IEEE Conference
Record (2011)
3. Capeans, M., Hahn, F., Haider, S., Guida, R., Mandelli, B.: RPC performance and gas quality
in a closed loop gas system for the new purifier configuration at LHC experiments. JINST 8,
T08003 (2013)
4. Mandelli, B.: Detector and system developments for LHC detector upgrades. CERN-THESIS-
2015-044 (2015). http://cds.cern.ch/record/2016792
Design of the New ATLAS Inner
Tracker (ITk) for the High
Luminosity LHC

Jike Wang(B)

DESY, Hamburg, Germany


jikewang.de@gmail.com

Abstract. In the high-luminosity era of the Large Hadron Collider (HL-LHC), the instantaneous luminosity is expected to reach unprecedented values, resulting in about 200 proton-proton interactions in a typical bunch crossing. To cope with this high rate, the ATLAS Inner Detector is being completely redesigned and will be replaced by an all-silicon system, the Inner Tracker (ITk). This new tracker will have both silicon pixel and silicon strip subsystems. The components of the Inner Tracker will have to withstand the large radiation dose from the particles produced in HL-LHC collisions, and have low mass and sufficient sensor granularity to ensure good tracking performance over the pseudorapidity range |η| < 4. In this talk, the challenges and possible solutions are discussed, i.e. the designs under consideration for the pixel and strip modules, and the mechanics of the local supports in the barrel and endcaps.

1 Introduction
The ITk is a full replacement of the current ATLAS Inner Detector (ID) as part of the Phase-II upgrade. It will be an "all-silicon" detector consisting of new Pixel and Strip detectors. Because of radiation damage, the current ID cannot survive the future integrated luminosity of 3000 fb^-1 and has to be replaced; in addition, the TRT (Transition Radiation Tracker) of the current ID cannot operate at the HL-LHC multiplicity and has to be removed. Figure 1 shows the structure of the ITk.
The HL-LHC upgrade would unlock a much larger physics potential, for example rare channels like VBF h → ZZ → 4l, BSM hh → 4b, the Higgs self-coupling, etc. The ITk will be the most important detector component for many of these measurements and will be crucial for lepton measurements, b-tagging and pile-up jet rejection over a wide kinematic and pseudorapidity range. At the same time, the enormous pile-up of up to μ = 200 (see Fig. 2) poses a huge challenge, and the Tracker therefore has to be carefully re-designed.


Fig. 1. Overview of the ITk.

Fig. 2. Display of the busy situation of μ = 200.

2 How to Design the ITk


The basic idea is to build the whole detector, mimic the HL-LHC collision sit-
uation, implement the particle-detector interaction etc., using simulation; and
then use the simulated samples to look and compare the performances.
Only detecting technique or sensor is good doesn’t mean the built detector
is good. There are still many things should be considered: like the shape of the
sensors may not be optimal; the arrangement of the sensors has to be optimised to minimise the amount of material; the hermeticity of the detector has to be ensured; etc.
During the design process we also coordinate closely with the engineers. We obtain from them information on which sensors and layouts are of interest and study those; conversely, our optimised layouts may turn out to be impossible from an engineering point of view, or too expensive, and have to be discussed with them. The conclusions are written up in reports and documents to guide the construction. The design is thus a very important step towards the final construction of the ITk.

3 The ITk Layout Evolution

The evolution of the ITk layouts went through the following important steps:

• The Letter of Intent (LoI) layout, studied around 2012. This layout features a so-called "stub" layer, a short barrel layer between the 4th and 5th Strip layers, to give robustness in the barrel to end-cap transition region. The η coverage extends to 2.7, with 4 pixel + 5 strip layers.
• The Letter of Intent very forward (LoI-VF) layout, studied around 2013–2014. Many studies showed that the performance and physics reach at the HL-LHC benefit from a larger tracker coverage; for example, Fig. 3 shows that the larger the η coverage, the better the ETmiss resolution.
• There are two main layout concepts for the Pixel detector, the Inclined and the Extended (discussed in more detail below). The comparison between the Inclined and the Extended layouts was studied extensively around 2015–2016.

Fig. 3. The ETmiss resolution comparison for different η coverage scenarios.

• In 2016, studies demonstrated that the Inclined layout performs better than the Extended one, and the Inclined option was chosen for the Pixel detector. A baseline layout (for both the Pixel and the Strip detector) was converged on in 2016 and was used to produce the samples for the Strip TDR.
• Some optimisation studies of the Pixel detector layout are still ongoing and will converge soon. The resulting layout will be used to produce the samples for the Pixel TDR, which is planned to be released at the end of 2017.

4 Pixel Detector Layout Optimizations


Both the Extended and the Inclined concepts push the material of the end-of-barrel services and support region to large Z. The difference between the two lies in the treatment of the forward part of the barrel layers: the Extended layout uses a long barrel, while the Inclined layout uses rings. Figure 4 shows the r−Z view of the two layouts.
resolution.

Fig. 4. The r−Z view of the Extended (left) and the Inclined (right) layouts.

A preliminary estimate of the material budget for the two layouts is shown in Fig. 5. As expected, the Inclined layout has less material, because the tracks cross the sensors more perpendicularly. The Extended layout also has another intrinsic problem: the long clusters and the correspondingly poor-quality space points make the track seeding problematic. The Inclined layout is therefore preferred.

Fig. 5. The X0 distribution for the Extended (left) and the Inclined layout (right).

5 Strip Detector Layout Studies


For the Strip detector, as illustrated in Fig. 6, a new XML-based detector description
framework has been developed. The framework is easy to understand and maintain, and
offers high flexibility for geometry building and detector description.

Fig. 6. The detailed algorithm flow of the Strip description framework
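As an illustration of how such an XML-based description might be consumed, the following minimal Python sketch parses a simplified strip-endcap description with the standard xml.etree.ElementTree module; the element and attribute names are invented for illustration and do not correspond to the actual ITk schema.

import xml.etree.ElementTree as ET

# Hypothetical geometry snippet; the real ITk description is far more detailed.
GEOMETRY_XML = """
<StripEndcap>
  <Petal id="0">
    <Sensor type="StereoAnnulus" rInner_mm="385" rOuter_mm="489" stereoAngle_mrad="20"/>
    <Sensor type="StereoAnnulus" rInner_mm="489" rOuter_mm="575" stereoAngle_mrad="20"/>
  </Petal>
</StripEndcap>
"""

def load_petals(xml_text):
    """Return a list of petals, each holding the parameters of its sensors."""
    root = ET.fromstring(xml_text)
    petals = []
    for petal in root.findall("Petal"):
        sensors = [dict(sensor.attrib) for sensor in petal.findall("Sensor")]
        petals.append({"id": petal.get("id"), "sensors": sensors})
    return petals

for petal in load_petals(GEOMETRY_XML):
    print("petal", petal["id"], "->", [s["type"] for s in petal["sensors"]])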

Completely new endcaps have been implemented in this framework. The complexity arises
from the following items: the complicated new sensor shape (the stereo annulus); the
petals overlapping with each other; and the different sensor geometries within each
petal. The sensor, petal and endcap geometry shapes are shown in Fig. 7.

Fig. 7. The sensor, petal and endcap geometry shapes

6 The Milestone: Strip TDR Layout


The most important layout achieved is the Strip TDR layout (as already briefly discussed
in Sect. 3). The r−Z view of this layout is shown in Fig. 8. In conclusion, for this
layout there are 5 Pixel layers and 4 Strip layers; the short stub layer has been
removed; and the Inclined layout is adopted for the Pixel detector.

Fig. 8. The Strip TDR layout

7 Summary

The extremely challenging conditions at the HL-LHC make the design of the new Tracker
very difficult. Our several years of work successfully converged on the Strip TDR layout,
which was used for the results in the Strip TDR. We are now working to converge on the
Pixel TDR layout, which will then be used for the results in the Pixel TDR.
A Standalone Muon Tracking Detector
Based on the Use of Silicon
Photomultipliers

F. Arneodo1, L. M. Benabderrahmane1, A. Candela2, V. Conicella1, A. Di Giovanni1(B), O. Fawwaz1, and G. Franchi3

1 New York University Abu Dhabi, Abu Dhabi, UAE
adriano.digiovanni@nyu.edu
2 Gran Sasso National Laboratory, Assergi (AQ), Italy
3 AGE Scientific SRL, Capezzano Pianore (LU), Italy

Abstract. We present the characterization and performances of a muon


tracking detector developed by New York University Abu Dhabi and
Gran Sasso National Laboratory (Italy) and based on the use of Sili-
con Photomultipliers and plastic scintillating bars. The remote control
of the instrument allows for operation in locations with limited access.
The detector has been designed and built with the aim of providing 3D
particle reconstruction for measurements of muon angular distributions,
with possible uses in cultural heritage and archaeological studies and in
the tomography of large buildings.

Keywords: SiPM · Muon tomography · Cosmic rays

1 Introduction
Portable radiation detectors are commonly used in many fields of science and
industry: radiation surveys (based on alpha-, beta- and gamma-sensitive detectors),
radon level assessment, medical applications and cultural heritage. Many new
applications are based on the detection of highly penetrating particles such
as muons: homeland security, tomography of large infrastructures and accounting of
spent/waste nuclear fuel are only a few of them [1]. In this work we present the
characterization and performance of a muon tracker system capable of three-dimensional
mapping and of performing fine measurements of the muon flux intensity.
The main features of the detector are:
– fairly high granularity, enabling 3D reconstruction of tracks;
– easy to transport and deploy;
– insensitive to magnetic field;
– single photon counting capability;
– embedded data acquisition;
– remote operability.
In the following section we briefly describe the detector hardware, the detection
efficiency and preliminary results.

2 Experimental Setup
The detection technique is based on the use of plastic scintillator bars and Silicon
Photomultipliers; a detailed description of a single detection channel is given in [2].
The muon tracker is equipped with 200 channels organized in two views (XZ and YZ).
The hundred channels of each view are further subdivided into groups of ten, each
forming a detection plane. Two corresponding planes (i.e. at the same height) of
different views form a detection layer. Orthogonal bars in a layer allow for the
X-Y reconstruction, while the Z coordinate is given by the height of the layer.
The layout of the muon tracker is shown in Fig. 1. Each detection channel is
equipped with a preamplifier and a discrimination stage, while the SiPMs are
biased in groups of five (two different voltages can be applied per plane). The
electronics readout of each view consists of a single 40 cm × 70 cm board. The
trigger logic can be defined locally through a switch or remotely by means of
a computer interface. Data acquisition is embedded in the electronics and the
data stream is stored in real time via a serial connection between the
board and a laptop computer.

Fig. 1. Left: the layout of the muon tracker. In blue, the scintillator bars forming a
detection layer. In green, the monolithic printed circuit board (PCB) used to read out
and control each view. Right: one of the two muon trackers currently operating at
New York University Abu Dhabi (NYUAD).

3 Tracks Reconstruction and Efficiency Measurements


The detection thresholds acting on each plane have been optimized in order to
keep the random coincidences occurring in a layer (i.e. two orthogonal planes at
a given height) below 0.1%. The trigger level can be varied from the softest
(one layer only) up to the hardest (all 10 layers in coincidence) configuration.
For geometrical reasons, the softest one corresponds to an angular acceptance
of 170◦, while the hardest corresponds to 60◦. The fitting procedure, summarized below,
considers the two views as completely uncorrelated:
1. hit pattern reconstruction;
2. preliminary cut on the centre of gravity of the event to get rid of the most
external hits;
3. cut of multiple hits in a single plane;
4. preliminary linear fit;
5. analysis of residuals to re-include multiple hits;
6. final linear fit.

Fig. 2. Typical muon event reconstruction. The XZ and YZ views are considered uncorrelated.
The red (blue) bars represent the position of the planes in the XZ (YZ) view, while the
dots are the fired SiPMs in the event. The blue line is the best fit obtained by following
the fitting procedure from step 1 to 4, while the final fit (in red) is obtained by adding
steps 5 and 6.

Fig. 3. Efficiency matrices of the two views of the muon tracker. Each pixel represents
the efficiency of a single detection channel. Low-efficiency channels have been replaced
on the basis of previous iterations of this map.

Figure 2 shows a typical muon hit pattern along with the reconstructed track (red line);
a simplified sketch of the fitting procedure in code is given below.
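A minimal sketch of this per-view procedure (in Python with NumPy; the hit format, cut values and the example event are illustrative assumptions, not the collaboration's code) could look as follows.

import numpy as np

def fit_view(hits, cog_cut=3.0, residual_cut=1.5):
    """Fit a straight track x = a*z + b in one view (XZ or YZ).

    hits: (z, x) positions of fired SiPM channels, in channel units.
    The steps loosely follow the procedure described in the text; the
    cut values are illustrative only.
    """
    hits = np.asarray(hits, dtype=float)

    # Step 2: drop hits far from the centre of gravity of the event.
    cog = hits[:, 1].mean()
    kept = hits[np.abs(hits[:, 1] - cog) < cog_cut]

    # Step 3: keep at most one hit per plane (the one closest to the c.o.g.).
    single = {}
    for z, x in kept:
        if z not in single or abs(x - cog) < abs(single[z] - cog):
            single[z] = x
    zs = np.array(sorted(single))
    xs = np.array([single[z] for z in zs])

    # Step 4: preliminary linear fit.
    a, b = np.polyfit(zs, xs, 1)

    # Step 5: re-include any original hit compatible with the preliminary fit.
    residuals = np.abs(hits[:, 1] - (a * hits[:, 0] + b))
    refit = hits[residuals < residual_cut]

    # Step 6: final linear fit.
    return np.polyfit(refit[:, 0], refit[:, 1], 1)

# Example: a tilted track with one noise hit in the plane at z = 2.
print(fit_view([(0, 1.0), (1, 1.4), (2, 1.9), (2, 7.0), (3, 2.5)]))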
The detection efficiency has been measured by using an 8-fold trigger configuration
(8 layers in coincidence) and by constraining the residual between the fit-predicted
SiPM and the one actually fired. This technique allowed us to construct the efficiency
map of the detector. The results are reported in Fig. 3.

4 Preliminary Angular Distributions Measurements

Figure 4 shows the angular distribution of the muon flux measured with the detector
operating inside the ERB building at NYUAD (left) and outside (right). The effect of
the building is clearly visible in the left histogram, while the distribution in the
right histogram is almost flat (we have estimated an east-west effect [3] of 2%).
The pronounced spikes found in both histograms are due to the finite granularity of
the detector and are driven by particles travelling parallel to the detector walls.
This condition worsens the angular resolution. To reduce the impact of this effect,
every data sample is taken in two steps: at a reference position and with the detector
rotated by 45◦. As shown in Fig. 5, by comparing the two different profiles it is
possible to uncover regions otherwise hidden by the detector systematics.

Fig. 4. Muon flux distributions in φ taken at sea level in Abu Dhabi. Left: muon
distribution profile obtained by operating the detector inside the laboratory. Right:
muon distribution profile obtained by operating the detector outside. A 2% east-west
effect is also visible.

Fig. 5. Detector operating inside the experimental building. Left: angular distribution
at reference position. Right: angular distribution with the detector rotated by 45◦.
This work has been supported by New York University Abu Dhabi.

References
1. Checchia, P.: Review of possible applications of cosmic muon tomography. JINST
11, C12072 (2016)
2. Arneodo, F., Benabderrahmane, M.L., et al.: Muon tracking system with Silicon
Photomultipliers, 799(1), 166–171 (2015)
3. Johnson, T.H.: The Azimuthal Asymmetry of the Cosmic Radiation. Phys. Rev. 43,
834 (1933). Published on 15 May 1933
Spherical Measuring Device of Secondary
Electron Emission Coefficient Based on Pulsed
Electron Beam

Kaile Wen1,2, Shulin Liu1(&), Baojun Yan1(&), Yang Yu3, and Yuzhen Yang4

1 State Key Laboratory of Particle Detection and Electronics, Institute of High Energy Physics of Chinese Academy of Sciences, Beijing 100049, China
{liusl,yanbj}@ihep.ac.cn
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Department of Applied Physics, School of Science, Xi’an University of Technology, Xi’an 710048, China
4 School of Physics, Nanjing University, Nanjing 210023, China

Abstract. In order to improve the performance of the microchannel plate, a
material with a high secondary electron emission coefficient (SEEC) is
required, and the SEEC of this material needs to be accurately measured. For
this purpose, a SEEC measuring device with a spherical collector structure was
designed. The device consists of a vacuum system, electron gun, main chamber,
sample stage, test system and test software. The SEEC is measured over a wide
incident energy range (100 eV–10 keV) and a large range of incident angles
(0°–85°) by using a pulsed electron beam as the source of incident electrons.
The energy distribution of the secondary electrons is measured with a multi-layer
grid structure. The SEEC of a metallic material was measured with this device,
demonstrating that the device operates stably and reliably.

Keywords: Secondary electron emission coefficient · Measuring device · Sample charging

1 Introduction

The secondary electron emission characteristics, being basic properties of a material,
have important applications in various vacuum test instruments and photomultiplier
devices. In order to further enhance the performance of the relevant devices (such as
the microchannel plate), accurate measurement of the secondary electron emission
characteristics of different materials is necessary and significant. To accomplish this
task, we designed and built a device that measures the secondary electron emission
coefficient (SEEC) of a material at different primary electron incident energies and
incident angles in a high-vacuum environment, and which can also be used to measure
the energy distribution of the generated secondary electrons.


2 Equipment Structure and Working Principle

The structure of the device, shown in Fig. 1, consists of a vacuum system, electron
gun, main chamber, sample stage, test system and test software. When the device is in
operation, the electron beam generated by the upper electron gun is directed onto the
sample; the secondary electrons emitted from the sample diverge to the surroundings,
are collected by the spherical collector, and the secondary electron current is measured.
The energy of the incident electrons generated by the electron gun is continuously
adjustable from 100 eV to 10 keV. The electron gun can be operated in pulsed mode with
a pulse width ranging from 2 µs to 200 µs. By grounding the inner grid and the sample
stage and connecting the collector to +40 V, a uniform electric field is generated
internally, so that the generation of secondary electrons is not affected by the electric
field. The sample stage can be moved and rotated to obtain different primary electron
incidence angles; the incident angle is continuously adjustable from 0° to 85°. A baking
lamp can degas the surface of the sample, with an achievable baking temperature of 250 °C.
The vacuum system has a working vacuum of 10−6 Pa and an ultimate vacuum of 10−7 Pa,
which is sufficient to meet the measurement requirements of the SEEC.

Fig. 1. Equipment structure

3 The Principle of Measurement SEEC

It can be seen from the device structure that the secondary electrons produced by the
sample must penetrate the two grid layers before they are collected by the collector,
so the secondary current measured by the collector alone is less than the actual value.
Therefore, current-extraction electrodes have been provided for the collector and for
the inner and outer grids, and the current on each of them is measured. The sum of these
currents is the actual secondary electron current generated by the sample.

At the same time, by charge conservation, the electrons generated by the electron gun
can only flow to the inner grid, the outer grid, the collector and the sample. The sum
of these four currents is therefore equal to the electron-gun emission current, i.e.
the primary electron current.
It follows that the SEEC of the sample is:

δ = (I_innergrid + I_outergrid + I_collector) / (I_innergrid + I_outergrid + I_collector + I_samplestage)    (1)
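Transcribed directly into code, Eq. (1) gives the SEEC from the four measured currents; the sketch below is only an illustration, and the example current values are invented.

def seec(i_inner_grid, i_outer_grid, i_collector, i_sample_stage):
    """Secondary electron emission coefficient from Eq. (1).

    The numerator is the total collected secondary-electron current; adding
    the sample-stage current in the denominator recovers, by charge
    conservation, the primary (electron-gun) current.
    """
    secondary = i_inner_grid + i_outer_grid + i_collector
    primary = secondary + i_sample_stage
    return secondary / primary

# Invented example currents in nA.
print(seec(i_inner_grid=12.0, i_outer_grid=8.0, i_collector=150.0, i_sample_stage=30.0))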

4 Measurement Results

4.1 Distribution of SEEC with Incident Energy


Figure 2 shows one of the measurement results of the device. The expected initial rise
and subsequent fall of the SEEC with increasing incident energy is clearly visible, and
the SEEC values at each incident energy are in good agreement with the previous
literature [1].

Fig. 2. Distribution of the SEEC with the incident energy.

4.2 Distribution of SEEC with Incident Angle


Figure 3 shows the measured and fitted SEEC data as a function of the incident angle.
It is known that the dependence of the SEEC on the incident angle follows the
formula [2–5]:

δ(θ) = δ(0) · exp(c (1 − cos θ))    (2)

It can be seen that the experimental results are in good agreement with the theo-
retical expectations.
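The angular dependence of Eq. (2) can be fitted to the measured points with a standard least-squares routine; the short sketch below uses NumPy and SciPy, with made-up data points, to extract δ(0) and c.

import numpy as np
from scipy.optimize import curve_fit

def delta_of_theta(theta_deg, delta0, c):
    """Eq. (2): SEEC versus incidence angle (theta given in degrees)."""
    theta = np.radians(theta_deg)
    return delta0 * np.exp(c * (1.0 - np.cos(theta)))

# Illustrative measurements (angle in degrees, SEEC); not the paper's data.
angles = np.array([0, 15, 30, 45, 60, 75, 85])
deltas = np.array([1.80, 1.83, 1.95, 2.15, 2.50, 3.05, 3.40])

popt, pcov = curve_fit(delta_of_theta, angles, deltas, p0=(1.8, 1.0))
print("delta(0) = %.2f, c = %.2f" % tuple(popt))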

Fig. 3. Distribution of the SEEC with the incident angle

5 Conclusion

We have measured the SEEC of a nanometre-thick oxide sample in pulsed mode. By
integrating the pulse, we are able to measure the incident electron current and the
SEEC in real time, which provides rapid feedback for monitoring the operating condition
of the experimental equipment. The dependence of the SEEC on the incident electron
energy and on the incident angle was also measured successfully, and the results are
in good agreement with existing data and theoretical expectations. The driver and
measurement procedures were written in LabVIEW, so the above measurement process is
fully automated.

Acknowledgement. This work is supported by the National Natural Science Foundation of


China (Grant Nos. 11675278 and 11535014), Beijing Municipal Science and Technology Project
(Grant No. Z171100002817004) and the Equipment Development Program of the Chinese
Academy of Sciences.

References
1. Jokela, S.J., Veryovkin, I.V., Zinovev, A.V.: Secondary electron yield of emissive materials
for large-area micro-channel plate detectors: surface composition and film thickness
dependencies. Phys. Procedia 37, 740–747 (2012)
2. Shih, A., Hor, C.: Secondary emission properties as a function of the electron incidence angle.
IEEE Trans. Electron Devices 40(4) (1993)
3. Kirby, R.E., King, F.K.: Secondary electron emission yields from PEP-II accelerator
materials. Nucl. Instrum. Methods Phys. Res. A 469, 1–12 (2001)
4. Suharyanto, Yamano, Y., Kobayashi, S., Michizono, S., Saito, Y.: Effect of mechanical
finishes on secondary electron emission of alumina ceramics. IEEE Trans. Dielectr. Electr.
Insul. 14(3) (2007)
5. Yang, W.J., Li, Y.D., Liu, C.L.: Model of secondary electron emission at high incident
electron energy for metal. Acta Phys. Sin. 62(8), 087901 (2013)
A Vertex and Tracking Detector System
for CLIC

A. Nürnberg(B) on behalf of the CLICdp collaboration

CERN, Geneva, Switzerland


andreas.nurnberg@cern.ch

Abstract. The physics aims of the proposed future CLIC high-energy
linear e+e− collider pose challenging demands on the performance of
the detector system. In particular the vertex and tracking detectors have
to combine precision measurements with robustness against the expected
high rates of beam-induced backgrounds. The requirements include ultra-
low mass, facilitated by power pulsing and air cooling in the vertex-
detector region, small cell sizes and precision hit timing at the few-ns
level. A detector concept meeting these requirements has been developed
and an integrated R&D program addressing the challenges is progress-
ing in the areas of ultra-thin sensors and readout ASICs, interconnect
technology, mechanical integration and cooling.

1 Introduction
CLIC is a proposed future high-energy linear e+e− collider [1]. The high-precision
physics aims pose challenging demands on the performance of the detector systems,
including the vertex and tracking detector [2]. In particular, a precise determination
of displaced vertices for efficient flavor tagging requires an impact parameter
resolution of σ(d0) = 5 ⊕ 15/(p[GeV] sin^{3/2}θ) μm for the vertex detector, whereas
the main requirement for the tracker is a transverse momentum resolution of
σ(pT)/pT^2 = 2 × 10−5 GeV−1 for high-pT tracks above 100 GeV in the central detector.
At the same time, the material budget and power consumption have to be kept at a
minimum. Furthermore, background particles from beam-beam interactions can reach the
detector, and thus small cell sizes and precise hit timing in the vertex and tracking
detector are necessary.
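To make these requirements concrete, the short sketch below evaluates the impact-parameter target for a few momenta, reading ⊕ as a sum in quadrature (the usual convention); it only illustrates the formula and is not CLIC software.

import math

def sigma_d0_um(p_gev, theta_deg, a_um=5.0, b_um=15.0):
    """Impact-parameter resolution target sigma(d0) = a (+) b/(p sin^{3/2} theta),
    with (+) denoting a quadrature sum; result in micrometres."""
    sin_theta = math.sin(math.radians(theta_deg))
    scattering_term = b_um / (p_gev * sin_theta ** 1.5)
    return math.hypot(a_um, scattering_term)

for p in (1.0, 10.0, 100.0):   # track momentum in GeV
    print("p = %5.1f GeV: sigma(d0) = %5.2f um at theta = 90 deg" % (p, sigma_d0_um(p, 90.0)))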

2 Vertex and Tracking Detector Concepts


To fulfill the requirements outlined in Sect. 1, silicon vertex and tracking detec-
tors are foreseen for CLIC [3]. Both sub-detectors are illustrated in Fig. 1.
The vertex detector consists of three double layers in the barrel region, rang-
ing from 31 mm to 70 mm in radius, and discs on each side of the detector. The
material budget per detection layer of only 0.2%X0 does not allow for liquid cool-
ing of the detector. To extract the dissipated heat, forced air-flow is foreseen.

The discs are arranged in a spiral geometry, to allow for better air-flow. To meet
the required impact parameter resolution and flavor tagging efficiency, 3 μm sin-
gle point resolution has to be achieved throughout the vertex detector. Currently,
planar and capacitively coupled hybrid pixel detectors are under consideration,
e.g. [4,5].
The main silicon tracker consists of 6 barrel layers, 7 inner discs and 4 outer
flat discs, with material content per detection layer corresponding to 1–2%X0 [3].
The tracking detector is divided into an inner and outer part by the support
cylinder for the vacuum tube. The tracker radius is 1450 mm. In total, the active
area covered by the tracker is in the order of 100 m2 . For this large area, mono-
lithic solutions are considered, e.g. [6].

(a) Vertex detector (b) Tracking detector

Fig. 1. Rendering of the vertex and tracking detectors as implemented in the CLICdet
detector model [3]. From [7], published with permission by CERN.

The low duty cycle of the CLIC machine (312 bunches in 156 ns long bunch
trains every 20 ms) allows for pulsed power operation of the vertex and tracking
detectors. This helps in reducing the average power consumption and enables
the vertex detector to be air-cooled. For the large tracker volume, air-cooling
can not easily be implemented, therefore liquid cooling is currently foreseen.

3 Detector Performance

Extensive simulation and engineering studies have been performed in order to


optimize the detector layout and to demonstrate the technical feasibility of the
proposed solutions.

3.1 Flavor Tagging

The main goal of the vertex detector is the tagging of heavy quarks by the recon-
struction of displaced decay vertices. Beauty- and charm-tagging performance
has been chosen as benchmark for the detector design. Full simulation studies

based on Geant4 [8,9] as well as multivariate analyses using the LCFIPlus pack-
age [10] have been performed on various implementations of the detector. The
variations have been guided by engineering constraints. Results from this detec-
tor optimization are illustrated in Fig. 2, while a more complete description can
be found in [11,12].
The impact of the variation of the material content per detector layer from
0.1%X0 to 0.2%X0 on the misidentification of b and light-flavor backgrounds as a
function of the c-tagging efficiency is shown in Fig. 2a, and reveals an increase of
the fake rate by 5% to 35%. This demonstrates the importance of limiting the
material budget of the detector. 0.2%X0 per layer is considered to be realistically
achievable taking technological and engineering constraints into account.
The relative performance of the spiral geometry in comparison to flat discs
is illustrated in Fig. 2b, and shows only slightly reduced performance in some
regions with lower coverage.
The slight benefit of 3 double layers in the barrel compared to 5 single layers
as depicted in Fig. 2c can be explained by the small reduction of the material per
layer due to shared support structures in the case of the double layer arrangement
and the additional track measurement point.

Fig. 2. Flavor tagging performance [11]. From [7], published with permission by
CERN.

3.2 Transverse Momentum Resolution


The track momentum measurement is based on the measurement of the cur-
vature of charged particle trajectories in a magnetic field. The achievable res-
olution is an interplay between the strength of the magnetic field, the tracker
radius and the single point resolution of the detection layers. Optimization stud-
ies discussed in [3] lead to the choice of B = 4 T, R = 1.5 m and a single point
resolution σrϕ = 7 μm in the rϕ-plane. Both, fast [13] and full simulation stud-
ies presented in Fig. 3 confirm that the transverse momentum resolution goal
of σpT /p2T = 2 × 10−5 GeV−1 can be reached for high momentum tracks in the
central detector.
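As a rough cross-check of how these parameters combine (the actual optimization relies on full simulation), the Gluckstern approximation relates the curvature term of the momentum resolution to the point resolution, the field, the lever arm and the number of measurement points; the lever arm and number of hits used below are assumptions.

import math

def gluckstern_sigma_pt_over_pt2(sigma_rphi_m, b_tesla, lever_arm_m, n_points):
    """Curvature term of sigma(pT)/pT^2 in 1/GeV (Gluckstern approximation),
    neglecting multiple scattering."""
    return (sigma_rphi_m / (0.3 * b_tesla * lever_arm_m ** 2)
            * math.sqrt(720.0 / (n_points + 4)))

# Assumed inputs: 7 um point resolution, 4 T field, ~1.4 m lever arm, ~14 hits.
print(gluckstern_sigma_pt_over_pt2(7e-6, 4.0, 1.4, 14))   # ~2e-5 per GeV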

Fig. 3. Transverse momentum resolution for single muons. From [7], published with
permission by CERN.

3.3 Background Occupancy


To achieve a high luminosity of the CLIC machine, a strong focusing of the beams
to very small beam sizes is needed. This leads to beam-beam interactions which
result in background particles being created [2,15]. Particles from incoherent
e+e− pair production as well as from γγ → hadron events can reach the detector
acceptance and create background hits in the tracking detectors. The cell sizes have
been chosen such that the occupancy of the detectors is limited to 3% integrated over
the full bunch train. This value is assumed to be tolerated by
the track reconstruction algorithms. For the vertex detector, 25 × 25 μm2 pixels
are assumed, and short strips/long pixels in the range of 1–10 mm × 30–50 μm
are envisaged for the tracker [3]. Figure 4 summarizes the occupancy of the ver-
tex barrel layers and tracking detector discs due to beam-induced background
particles obtained in a Geant4 based full detector simulation. It demonstrates
that the occupancy can be limited to 3%. Safety factors for the uncertainties
related to the production and simulation uncertainties of the individual back-
ground processes have been applied [14,15].

Fig. 4. Bunch train occupancies in the vertex and tracking detectors due to beam-induced
background hits from incoherent pair production and γγ → hadron background processes [14],
assuming 25 × 25 μm2 pixel size for the vertex detector and 1–10 mm × 50 μm for the
tracker. From [7], published with permission by CERN.

4 Summary
The design of the vertex and tracking detector for CLIC is driven by the strin-
gent requirements on measurement precision, the limited material and power
budget and the challenging background conditions. Simulation and engineering
studies have demonstrated that a light-weight air-cooled vertex detector gives
excellent flavor tagging performance, and that a large silicon tracker provides
excellent track momentum measurement. Both are essential ingredients for the
physics goals at CLIC. An integrated R&D program addressing the technological
challenges is progressing in the areas of ultra-thin sensors and readout ASICs,
interconnect technology, mechanical integration and cooling, to show the feasi-
bility of the proposed vertex and tracker detector concept.

References
1. Aicheler, M., et al.: A multi-TeV linear collider based on CLIC technology: CLIC
conceptual design report (2012). https://cds.cern.ch/record/1500095. CERN-2012-
007
2. CLIC Conceptual Design Report: Physics and Detectors at CLIC (2012). CERN-
2012-003
3. Alipour Tehrani, N., et al.: CLICdet: the post-CDR CLIC detector model (2017).
http://cds.cern.ch/record/2254048. CLICdp-Note-2017-001
4. Alipour Tehrani, N.: Test-beam measurements and simulation studies of thin-
pixel sensors for the CLIC vertex detector. Ph.D. thesis, ETH Zurich (2017),
https://www.research-collection.ethz.ch/handle/20.500.11850/164813. Diss. ETH
No. 24216
5. Buckland, M.: Analysis and simulation of HV-CMOS assemblies for the CLIC
vertex detector (2017). These Proceedings

6. Munker, M.: Integrated CMOS sensor technologies for the CLIC tracker (2017).
These Proceedings
7. Nurnberg, A.: A vertex and tracking detector system for CLIC (2017). http://cds.
cern.ch/record/2272079. CLICdp-Conf-2017-013
8. Agostinelli, S., et al.: Geant4 - a simulation toolkit. Nucl. Instrum. Methods Phys.
Res. Sect. A Accel. Spectrom. Detect. Assoc. Equip. 506(3), 250–303 (2003)
9. Allison, J., et al.: Geant4 developments and applications. IEEE Trans. Nucl. Sci.
53(1), 270–278 (2006)
10. Suehara, T., Tanabe, T.: LCFIPlus: a framework for jet analysis in linear collider
studies. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrom. Detect. Assoc.
Equip. 808, 109–116 (2016). http://www.sciencedirect.com/science/article/pii/
S0168900215014199
11. Alipour Tehrani, N., Roloff, P.: Optimisation studies for the CLIC vertex-detector
geometry (2014). https://cds.cern.ch/record/1742993. CLICdp-Note-2014-002
12. Alipour Tehrani, N.: Optimisation studies for the CLIC vertex-detector geome-
try. J. Instrum. 10(07) (2015). C07001. http://stacks.iop.org/1748-0221/10/i=07/
a=C07001
13. Regler, M., Valentan, M., Fruhwirth, R.: LiC detector Toy 2.0 (Vienna fast simula-
tion tool for charged tracks), users guide. http://www.hephy.at/project/ilc/lictoy/.
HEPHY-PUB-863/08
14. Nurnberg, A., Dannheim, D.: Requirements for the CLIC tracker readout (2017).
https://cds.cern.ch/record/2261066. CLICdp-Note-2017-002
15. Dannheim, D., Sailer, A.: Beam-induced backgrounds in the CLIC detectors (2012).
http://cds.cern.ch/record/1443516. LCD-Note-2011-021
The Barrel DIRC Detector
for the PANDA Experiment at FAIR

R. Dzhygadlo1(B) , A. Ali1,2 , A. Belias1 , A. Gerhardt1 , K. Götzen1 , G. Kalicy1 ,


M. Krebs1,2 , D. Lehmann1 , F. Nerling1,2 , M. Patsyuk1 , K. Peters1,2 ,
G. Schepers1 , L. Schmitt1 , C. Schwarz1 , J. Schwiening1 , M. Traxler1 ,
M. Zühlsdorf1 , M. Böhm3 , A. Britting3 , W. Eyrich3 , A. Lehman3 ,
M. Pfaffinger3 , F. Uhlig3 , M. Düren4 , E. Etzelmüller4 , K. Föhl4 ,
A. Hayrapetyan4 , K. Kreutzfeld4 , B. Kröck4 , O. Merle4 , J. Rieke4 ,
M. Schmidt4 , T. Wasem4 , P. Achenbach5 , M. Cardinali5 , M. Hoek5 ,
W. Lauth5 , S. Schlimme5 , C. Sfienti5 , and M. Thiel5
1 GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt, Germany
r.dzhygadlo@gsi.de
2 Goethe University, Frankfurt a.M., Germany
3 Friedrich Alexander-University of Erlangen-Nuremberg, Erlangen, Germany
4 II. Physikalisches Institut, Justus Liebig-University of Giessen, Giessen, Germany
5 Institut für Kernphysik, Johannes Gutenberg-University of Mainz, Mainz, Germany

Abstract. Charged particle identification in the barrel region of the


PANDA target spectrometer will be delivered by a Barrel DIRC detec-
tor. The design of the Barrel DIRC has been developed using Monte-
Carlo simulation and validated with a full-scaled prototype in particle
beams. It features narrow radiators made of synthetic fused silica,
focusing optics with 3-layer spherical lenses, and a compact prism-shaped
expansion volume instrumented with MCP-PMTs.

Keywords: Particle identification · Cherenkov detector · DIRC

1 Introduction
The PANDA experiment [1] is designed to shed light on fundamental aspects of
QCD by performing hadron spectroscopy. A sophisticated detector system with
4π acceptance, precise tracking, calorimetry, and particle identification (PID) is
designed to accomplish that goal [2]. Hadronic PID for the target region of the
PANDA detector will be delivered by a DIRC (Detection of Internally Reflected
Cherenkov light) counter, see Fig. 1 (left). It is designed to cover the polar angle
range from 22◦ to 140◦ and provide at least 3 s.d. π/K separation power up to 3.5
GeV/c, matching the expected upper limit of the kaon momentum distribution,
shown in Fig. 1 (right).


Fig. 1. Left: The PANDA target spectrometer with highlighted Barrel DIRC. Right:
Phase space distribution as a function of kaon momentum and polar angle combined
for eight benchmark channels at pp̄ = 7 GeV/c. The Barrel DIRC coverage is marked
with the dashed rectangle.

2 Barrel DIRC Design and Performance

The design of the PANDA Barrel DIRC [3] is shown in Fig. 2 (left). Its base concept is
inspired by the BaBar DIRC counter [4]. It is constructed in the form of a barrel using
16 optically isolated sectors, each made of a radiator box and a compact, prism-shaped
expansion volume (EV). The radiator box contains three synthetic fused silica bars of
17 × 53 × 2400 mm3 size, positioned side-by-side with a small air gap between them.
A flat mirror at the forward end of each bar reflects Cherenkov photons to the read-out
end, where a 3-layer spherical lens images them onto an array of 11 Microchannel-Plate
Photomultiplier Tubes (MCP-PMTs) [5]. Each MCP-PMT consists of 64 pixels of 6.5 × 6.5 mm2
size and is able to detect single photons with a time precision of about 100 ps.
A modernized version of the HADES readout board [6] and front-end electronics [7] is
used for the signal readout.

Fig. 2. Left: CAD drawing of the Barrel DIRC. Only half of the sectors are shown.
Right: π/K separation power as a function of particle momentum and polar angle in
GEANT4 simulation.

Two reconstruction methods have been developed to determine the perfor-


mance of the detector [8]. The geometrical reconstruction performs PID by recon-
structing the value of the Cherenkov angle and using it in a track-by-track max-
imum likelihood fit. This method relies mostly on the position of the detected
photons in the reconstruction, while the time imaging utilizes both position and
time information, and directly performs the maximum likelihood fit. The results
of the time imaging reconstruction of the GEANT4 simulations are shown in
Fig. 2 (right), where the π/K separation power is shown for different momenta
and polar angle of the particles. With a separation power of 4–16 s.d., the design
exceeds the PANDA PID requirement for the entire charged kaon phase space,
indicated by the area below the black line.

3 Prototype Tests
Multiple aspects of the Barrel DIRC’s design were tested in hadronic particle
beams during the 2011–2016 period. Several design options, such as a monolithic
expansion volume and a traditional spherical lens with an air gap, were excluded
due to insufficient performance. The final design configuration with narrow bars
was verified at the CERN proton synchrotron in 2015 [9]. Additional tests were
carried out in 2016 to evaluate the performance of an alternative design with
wide plate radiators instead of narrow bars. Such a design would significantly
reduce the cost of the detector.
The full-scale prototype included all relevant parts of the PANDA Barrel
DIRC sector. Both radiator geometries, a narrow fused silica bar (17.1 × 35.9
× 1200.0 mm3 ) and a wide fused silica plate (17.1 × 174.8 × 1224.9 mm3 ) were
tested. One end of the radiator was coupled to a flat mirror and the other end
to a focusing lens and expansion volume. An array of 3 × 5 MCP-PMTs was
attached to the back side of the EV and used to detect Cherenkov photons.
A wide range of data measurements was taken with the π/p beam at different polar angles
and momenta. The external PID for charged particles was provided by a time-of-flight
system. Figure 3 shows the result of the time imaging reconstruction for a 25◦ polar
angle and a 7 GeV/c momentum. The resulting separation powers of 3.6 ± 0.1 s.d. for the
bar (left) and 3.1 ± 0.1 s.d. for the plate (right) at 7 GeV/c π/p momentum correspond
to 3.8 ± 0.1 s.d. and 3.3 ± 0.1 s.d. at 3.5 GeV/c π/K momentum. While the result with
the narrow bar is clearly superior to the result with the plate, both satisfy the PANDA
Barrel DIRC requirement and are in good agreement with simulations.

Fig. 3. Proton-pion log-likelihood difference distributions from proton data (red) and
pion data (blue) at 7 GeV/c beam momentum and 25◦ polar angle as a result of the
time-based imaging reconstruction. The distributions are for the narrow (left) and wide
(right) radiator with the 3-layer spherical and 2-layer cylindrical lens, respectively.
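The separation power quoted here is conventionally obtained from the means and widths of the two log-likelihood-difference distributions, assuming they are approximately Gaussian; the sketch below uses that common definition with invented numbers, and is not the analysis code.

def separation_power(mean_a, sigma_a, mean_b, sigma_b):
    """Separation in standard deviations between two roughly Gaussian
    log-likelihood-difference distributions (a common convention)."""
    return abs(mean_a - mean_b) / (0.5 * (sigma_a + sigma_b))

# Invented proton/pion distributions (arbitrary log-likelihood units).
print(separation_power(mean_a=20.0, sigma_a=11.0, mean_b=-20.0, sigma_b=11.0))   # ~3.6 s.d.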

4 Conclusions
The PANDA Barrel DIRC will deliver hadronic PID, in particular π/K separa-
tion better than 3 s.d. up to 3.5 GeV/c. The final design features narrow radiators
made of synthetic fused silica, focusing optics with 3-layer spherical lenses and
a compact prism-shaped expansion volume instrumented with MCP-PMTs.
The latest prototype tests with particle beams at CERN validated this design.
In addition, the configuration with a wide plate radiator and a cylindrical lens also
proved to give sufficient PID performance. Nevertheless, the narrow bar design
was selected due to the superior PID performance and because the plate requires
significantly better timing precision, which may not be available at the beginning
of the PANDA physics run.
The production of the optical components, photon sensors and electronics
for the PANDA Barrel DIRC is scheduled for the 2019–2022 period. The final
assembly and installation should take place in 2023.

Acknowledgement. This work was supported by HGS-HIRe, HIC for FAIR, BNL
eRD14 and U.S. National Science Foundation PHY-125782. We thank GSI and CERN
staff for the opportunity to use the beam facilities and for their on-site support.

References
1. PANDA Collaboration: Physics performance report for PANDA: strong interaction
studies with antiprotons. arXiv:0903.3905
2. PANDA Collaboration: Technical Progress Report, FAIR-ESAC/Pbar (2005)
3. PANDA Collaboration, Singh, B., et al.: Technical design report for the PANDA
barrel DIRC detector. arXiv:1710.00684
4. Adam, I., et al.: The DIRC particle identification system for the BaBar experiment.
Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrom. Detect. Assoc. Equip.
538, 281 (2005)
5. Lehmann, A., et al.: Recent developments with microchannel-plate PMTs. Nucl.
Instrum. Methods Phys. Res. Sect. A Accel. Spectrom. Detect. Assoc. Equip. 876,
42 (2017)
6. Ugur, C., et al.: 264 channel TDC platform applying 65 channel high precision (7.2
psRMS) FPGA based TDCs. https://doi.org/10.1109/NoMeTDC.2013.6658234
7. Michel, J., et al.: Electronics for the RICH detectors of the HADES and CBM
experiments. JINST 12 (2017). C01072
8. Dzhygadlo, R., et al.: Simulation and reconstruction of the PANDA barrel DIRC.
Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrom. Detect. Assoc. Equip.
766, 263 (2014)
9. Dzhygadlo, R., et al.: The PANDA barrel DIRC. JINST 11 (2016). C05013
The Belle II/SuperKEKB Commissioning
Detector - Results from the First
Commissioning Phase

Miroslav Gabriel(B)
on behalf of the BEAST II Collaboration

Max Planck Institute for Physics, Munich, Germany


mgabriel@mpp.mpg.de

Abstract. The SuperKEKB energy-asymmetric e+ e− collider has now


started commissioning and is working towards its design luminosity of
8 × 1035 cm−2 s−1 . In spring 2016, SuperKEKB circulated beams in both
rings during the first phase of commissioning, with the Belle II detec-
tor at the roll-out position. A dedicated array of sensors collectively
called BEAST II was installed specifically to monitor and study beam
background conditions. This contribution discusses the detector setup
and three selected results on the improvement of the vacuum conditions
due to vacuum scrubbing, a combined measurement of beam-gas and
Touschek backgrounds and measurements of backgrounds related to the
injection of new particles.

1 Introduction

SuperKEKB is an energy-asymmetric circular e+ e− collider with a nominal


center-of-mass energy of 10.58 GeV, slightly above the Υ(4S) resonance. As a
second generation B factory it is searching for hints of new physics by precisely
probing the heavy flavor sector of the Standard Model [1].
The collider is an upgrade from its direct predecessor KEKB with the
main purpose of further increasing the luminosity up to a design goal of
8 × 1035 cm−2 s−1 . The improvements are primarily achieved by doubling the beam
currents and focusing the particle bunches at the interaction point (IP) down to a
size of several tens of nanometers (nano-beam scheme).
However, the greatly increased luminosity will result in dramatically increased beam
backgrounds, representing a significant challenge for the later operation of the Belle II
detector. These backgrounds limit the beam lifetimes, increase the hit occupancy in the
inner detectors and lead to irreducible analysis backgrounds. Furthermore, they can
reduce the survival rate of the installed hardware and even pose a threat to it due to
possible excessive instantaneous radiation damage.


On February 10th, 2016 the first circulating bunches were observed in the new
accelerator [2].

2 Commissioning of SuperKEKB

Currently the SuperKEKB collider is undergoing an extensive commissioning


campaign split into three phases and covering several years. Data taking for
the first phase took place from February until June 2016. For this phase, the
Belle II detector, as well as the final focusing system, was not yet installed at
the IP. The goals for this first commissioning step included circulating both the
electron and positron beams, improving the conditions of the vacuum system and
guaranteeing a radiation-safe environment for the later detector. An important
part of the latter is the measurement and the real-time monitoring of particle
loss rates and background radiation from the circulating beams, especially with
respect to accelerator conditions. The processes under investigation in the first
phase are beam-gas scattering, Touschek scattering, synchrotron radiation and
the backgrounds caused by injection. BEAST data is used to develop and validate
mathematical models and simulations of beam backgrounds to determine their
possible influence on the operation of Belle II and their impact on the physics
program.

Table 1. Overview of the detector subsystems used in the BEAST experiment and
their specific measurements.

System: PIN diodes | 3He tubes | Micro TPCs | Diamonds
Unique measurement: Neutral vs. charged radiation dose rate | Thermal neutron rate | Directional fast neutron flux | Ionizing radiation dose
Quantity: 32 × 2 | 4 | 4 | 4

System: BGO | CLAWS | Crystals | Scintillator
Unique measurement: Electromagnetic dose rate | Injection background | EM particle rate / injection bkg | Electromagnetic particle rate
Quantity: 8 | 8 | 6 × 3 | 4

3 The BEAST Experiment


Since the Belle II detector was
at the roll-out position, a dedi-
cated system called the BEAST
experiment was installed at the IP.
Its measurements determine parti-
cle loss rates contributing to the
beam life time, expected dose rates
and thus possible effects on the
survival time of the inner detec-
tors, and both beam and physics
background-induced particle rates,
which impact detector operation
Fig. 1. Observed normalized background rate
and physics analysis. It consists of as a function of the accumulated current,
several individual detector systems, demonstrating the improvement in vacuum
each targeting a specific type of conditions due to vacuum scrubbing.
measurement. An overview of the
detectors is given in Table 1. The BEAST experiment is fully integrated into
the accelerator slow control network via the EPICS framework. The measure-
ment results are transmitted to the control room of the accelerator for real time
feedback to changing accelerator conditions.
The experiment constantly monitored beam backgrounds during the full five months
of the first commissioning phase. Within this phase, a dedicated
two-week program to study various aspects of beam-related backgrounds was
carried out.

4 Selected Results from the BEAST Experiment


Covering all results of the BEAST
experiment would be beyond the
scope of this contribution, hence,
only three selected results from vac-
uum scrubbing, Touschek and beam-
gas background measurements and
injection background measurements
are discussed in the following.
The improvement in vacuum quality and therefore beam-gas scattering backgrounds due to
vacuum scrubbing over the full run time of the first commissioning phase is shown in
Fig. 1. The background rate measured by several BEAST subsystems, normalized to the
beam current squared to account for current dependencies, is plotted over the total
accumulated current and time.

Fig. 2. Combined measurement of the beam-gas and Touschek backgrounds in the BGO system
normalized to the beam-gas contribution over corresponding beam sizes and currents.
Towards the end of Phase 1 the improvement in vacuum purity and therefore in beam-gas
background was large enough to allow a combined study of beam-gas and Touschek
backgrounds. For a so-called size sweep scan the background rate is probed while
artificially changing the size and the current of the circulating beams to determine
their influence. A total of 15 measurements for five different beam sizes and three
different beam currents is shown in Fig. 2. The observed background rate in the BGO
system is normalized to the beam-gas contribution, which depends on the beam current I,
the pressure P in the ring and an effective atomic number Z_e representing the mixture
of residual gas species inside the vacuum system, and can be modeled as proportional to
I·P·Z_e². The normalized rate is then a function of the beam current divided by the
pressure, the effective atomic number squared and the beam size. The agreement between
the data and the linear fit, and therefore our model, validates the mathematical
description used to predict the background rates from these two processes in terms of
changing beam conditions.
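Schematically, the model behind Fig. 2 can be written as Rate = B·I·P·Z_e² + T·I²/σ_y, so that after dividing by the beam-gas term the measurements should lie on a straight line in the variable I/(P·Z_e²·σ_y). The sketch below performs that straight-line fit with NumPy; the data points and coefficients are invented for illustration.

import numpy as np

# Invented size-sweep points: (current [mA], pressure [Pa], Z_eff, vertical beam size [um]).
points = [(300, 1e-6, 3.0, 40), (300, 1e-6, 3.0, 60), (600, 1e-6, 3.0, 40),
          (600, 1e-6, 3.0, 60), (900, 1e-6, 3.0, 40), (900, 1e-6, 3.0, 60)]
true_beam_gas, true_touschek = 1.0, 2e-6       # invented model coefficients

xs, ys = [], []
for current, pressure, z_eff, sigma_y in points:
    x = current / (pressure * z_eff ** 2 * sigma_y)     # Touschek-to-beam-gas variable
    ys.append(true_beam_gas + true_touschek * x)        # rate normalized to the beam-gas term
    xs.append(x)

# Straight-line fit: the intercept is the beam-gas term, the slope the Touschek term.
slope, intercept = np.polyfit(xs, ys, 1)
print("Touschek coefficient ~ %.2e, beam-gas coefficient ~ %.2f" % (slope, intercept))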
To compensate for background losses and collisions, new particles are injected
directly into the circulating bunches during normal operations. However, for the
first few turns the bunches which received new particles will show significantly
higher background rates when passing by the IP. This will have a great impact
on the operation of the Belle II pixel detector, since the increased backgrounds
from injections lead to overflow, making it essentially blind to real physics until
it is read out. With precise knowledge of the bunch-by-bunch structure and the
time evolution of injection noise the charge collection in the pixel detector can
be disabled during transit of particular bunches, ensuring optimal efficiency of
the innermost detector.
Figure 3 shows the background from charged particles observed by the CLAWS detectors
during an injection into the electron ring. The phase shift of the injected particles
has been changed from the nominal settings to investigate its influence on the time
development. The signals are characterized by substantially higher background levels
immediately after injection. The intensity decreases to the base level after several
turns, corresponding to a few hundred microseconds. Certain timing patterns can be
observed, and a correlation with accelerator features such as betatron oscillations
is currently being studied.

Fig. 3. Reconstructed signal observed by CLAWS in the three innermost forward sensors
in an injection into the electron ring with altered phase shift parameter.

5 Summary
The SuperKEKB collider is currently undergoing an extensive commissioning
campaign divided into three phases. Due to the unprecedented luminosity, beam
backgrounds will represent a significant challenge for the operation of Belle II.
During the first phase the Belle II detector was substituted by the BEAST
experiment, a combination of several detector systems specifically designed to
measure and understand non-collision beam backgrounds. Important measure-
ments included the quantification of the improvement in vacuum conditions due
to scrubbing, the development and verification of a model to describe Touschek
and beam-gas backgrounds and the characterization of the time development of
injection backgrounds.
The second phase of the commissioning campaign will include the Belle II
detector, with a modified inner detector incorporating some of the BEAST sys-
tems for further background studies, as well as the final focusing system, and it
is planned to observe the first collisions in the new accelerator. Data taking will
start in February 2018.

References
1. Abe, T., et al.: Belle-II Collaboration. arXiv:1011.0352 [physics.ins-det]
2. ‘First turns’ for SuperKEKB. http://cerncourier.com/cws/article/cern/64345.
Accessed 04 Aug 2017
The CMS ECAL Upgrade for Precision
Crystals Calorimetry at the HL-LHC

Patrizia Barria(B)
on behalf of the CMS Collaboration

University of Virginia, Charlottesville, VA 22904-4714, USA


patrizia.barria@cern.ch

Abstract. The electromagnetic calorimeter (ECAL) of the Compact


Muon Solenoid Experiment (CMS) is operating at the Large Hadron
Collider (LHC) in 2016 with proton-proton collisions at 13 TeV center-
of-mass energy and at a bunch spacing of 25 ns. Challenging running
conditions for CMS are expected after the High-Luminosity upgrade of
the LHC (HL-LHC). We review the design and R&D studies for the CMS
ECAL crystal calorimeter upgrade and present first test beam studies.
Particular challenges at HL-LHC are the harsh radiation environment,
the increasing data rates and the extreme level of pile-up events, with
up to 200 simultaneous proton-proton (p-p) collisions. We also report
on the R&D for the new readout and trigger electronics, which must be
upgraded due to the increased trigger and latency requirements at the
HL-LHC.

Keywords: Calorimeter · Radiation-hard detectors · Scintillators · Front-end electronics for detector readout

1 Introduction
The physics goals for the HL-LHC phase (Phase II) [1,2] foresee precise mea-
surements of the Higgs boson couplings and studies of rare SM processes, crucial
for searches for new physics. To successfully exploit these data which will be
collected during the HL-LHC phase, it is necessary to reduce the effects of the
increased simultaneous interactions per bunch crossing (pileup (PU)). At the
same time the calorimeters should provide performance similar to that delivered
so far but with beam intensities that will result in 200 PU arising from a peak
instantaneous luminosity of 5×1034 cm−2 s−1 . This will be a particularly difficult
challenge for the endcap region (EE), due to the fact that the radiation levels
will change by a factor of 100 between |η| = 1.48 and |η| = 3.0. The dose and
fluence levels will result in significant loss to the crystal light transmission and
VPT (Vacuum Photo Triode) performance. In the barrel region (EB) we will
also have to cope with the increased PU, along with increasing APD (Avalanche
Photo Diode) noise (see Fig. 2(c)) resulting from increased dark current, the

dominant effect for Lint > 1000 fb−1 (see Fig. 2(b)). These effects require that
the CMS detector [3] be upgraded, including a replacement of the EE and an
upgrade of the EB.

2 The CMS Electromagnetic Calorimeter (ECAL)


The CMS ECAL is a compact homogeneous calorimeter (see Fig. 1) containing
75848 lead-tungstate (PbWO4 ) scintillating crystals, located inside the CMS
superconducting solenoid. The EB covers the pseudorapidity range |η| < 1.48,
while the two EE cover 1.48 < |η| < 3.0. The scintillation light is detected by
APDs in the EB and by VPTs in the two EE. A preshower detector (ES), based
on lead absorbers equipped with silicon strip sensors, is placed in front of the EE
crystals and covers the 1.65 < |η| < 2.6 range. The ECAL energy resolution is
crucial not only for the reconstruction and measurement of Higgs-boson decays,
for example H → γγ, but also for many other CMS analyses. At the same time,
as already mentioned, the performance is affected by PU. ECAL has excellent
performance at 13 TeV [4] and in particular the photon energy resolution is 1–3%
in EB and 2.5–4.5% in EE+ES.

Fig. 1. The CMS Electromagnetic Calorimeter (ECAL). EB section comprises 61200


crystals and EE section comprises 14648 crystals. Copyright 2017 CERN for the benefit
of the CMS Collaboration. CC-BY-4.0 license.

3 ECAL Barrel Upgrade


The existing EB on-detector electronics (see Fig. 2(a)) is organized into groups of
5 × 5 crystals read out by APDs, which form a trigger tower (TT). The energy from
the APDs of all 25 crystals is read out at 40 MHz through a motherboard and 5 VFE cards.

Fig. 2. (a) Schematic of the CMS ECAL readout electronics (crystals (at the bottom), the
motherboard, 5 VFE boards, and 1 FE board). (b) Expected APD dark current (Idark) level
in EB versus integrated luminosity at both |η| = 0 (blue curves) and |η| = 1.45 (purple
curves) for operating temperatures of 18 ◦C or 8 ◦C. (c) Expected noise level in EB versus
integrated luminosity at |η| = 1.45 for operating temperatures of 18 ◦C (red curves) or
8 ◦C (blue curves), with the present electronics (continuous line, shaping time t = 43 ns),
or the upgraded electronics (dotted line, t = 20 ns). Copyright 2017 CERN for the benefit
of the CMS Collaboration. CC-BY-4.0 license.

The VFEs contain ASICs that perform pulse amplification, shaping
and digitization functions. Digital signals from the 5 VFE cards are sent to a
single FE card where the trigger primitive (TP) is formed. This TP is essentially
the summed energy from the 5 × 5 crystals, together with some basic information
on the shape of the shower in these crystals. The TP is transmitted via a Gbit
optical link at 40 MHz to the off-detector trigger cards. The FE card also features
a memory, to store the individual crystal energies until reception of an external
level-1 (L1) trigger signal. At this point a second Gbit optical link multiplexes
the individual crystal energies to the off-detector readout boards. Figure 2(a)
shows that the PbWO4 crystals, the APDs, the motherboards, and the overall
mechanical structure will not be upgraded. Both FE and VFE electronics readout
will be replaced to satisfy the increased Phase II L1 trigger latency (12.5 µs c.f.
4.8 µs) and accept rate (750 kHz c.f. 100 kHz) requirements and to cope with
the increased HL-LHC performance. The VFEs will serve a similar purpose, but with a
decreased shaping time (20 ns c.f. 43 ns) and a faster digitisation, to reduce
out-of-time PU contamination, electronics noise and anomalous APD signals (spikes) [5].
The FE card will read out individual crystal energies at 40 MHz, moving most processing
off-detector, so the off-detector electronics needs to be upgraded to accommodate higher
transfer rates and to generate the TPs.

3.1 Motivations

The ECAL EB upgrade [6] is driven by the Phase II L1 trigger requirements, as well as
by the spike mitigation, which can be improved using a single-crystal granularity not
yet available at L1. Currently the spikes are rejected at L1 using an algorithm
whose performance will degrade significantly during the HL-LHC scenario. How-
ever, with upgraded electronics we will be able to apply more sophisticated
filtering algorithms in the VFE. Decreasing the shaping time and increasing the
digitization frequency will improve the discrimination between “normal” signals
and those from the (faster) spikes (see Fig. 3(b)). Furthermore, because of the
increasing APD noise that will significantly degrade the electromagnetic reso-
lution (see Fig. 2(c)), we will mitigate this effect by cooling the crystals, and
therefore the APDs, and by optimising pulse shaping with new VFEs. The APD
dark current is strongly dependent on temperature so by reducing the temper-
ature of the EB from 18 ◦ C to 8 ◦ C the dark current will reduce by a factor of
2.5, resulting in a noise decrease of 35%.
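The quoted ~35% decrease follows if the APD leakage-current contribution to the noise is assumed to scale with the square root of the dark current (shot-noise-like behaviour); the two-line check below is only this back-of-the-envelope estimate, not the full noise model.

import math

dark_current_reduction = 2.5                       # from cooling the EB from 18 C to 8 C
noise_ratio = 1.0 / math.sqrt(dark_current_reduction)
print("noise decrease: %.0f%%" % ((1.0 - noise_ratio) * 100))   # ~37%, close to the quoted ~35%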


Fig. 3. (a) New VFE boards with Trans-impedance Amplifier (TIA) pulse
shaper/preamplifier as re-designed ASICs. (b) Comparison of APD pulse shape for
spike and scintillation events. The pulse shape for spike (scintillation) events is shown
in red (blue). (c) Timing resolution as a function of normalized amplitude for different
sampling frequencies. Copyright 2017 CERN for the benefit of the CMS Collaboration.
CC-BY-4.0 license.

4 Timing Performance
Precision timing will improve the vertex localisation for high energy photons,
and in particular the vertex resolution for H → γγ decays will benefit from
this. The current efficiency of localising the vertex is ∼70–80% but with the
current EB timing precision it would be reduced to less than 30% at 200 PU.
An improvement up to ∼70% can be achieved for photons with |Δη| > 0.8, but
to get this important increase the VFE ASIC design, the sampling rate, and the
clock distribution should be upgraded to approach the 30 ps timing precision.
As pulse shaper/preamplifier ASIC option a trans-impedance amplifier (TIA)
(see Fig. 3(a)) has been chosen and tested. TIA architecture is mainly digital
and does not apply a shaping to the APD pulse. It measures the APD signal
with high bandwidth and is optimized for sampling/digitization up to 160 MHz.
Its performance has been confirmed during 2016 beam tests at the H4 beam line
at CERN SPS with high energy electrons (20 < Ee < 250 GeV). The timing

performance of the new electronics has been evaluated for a single crystal, at the
centre of a 5 × 5 PbWO4 crystal matrix, that has been read out by a prototype
VFE with a discrete-component TIA. Different sampling frequencies have been
emulated and finally the APD timing has been extracted through a template fit
to pulse shape. Figure 3(c) reports the very promising results achieved at 160
MHz: σ ∼ 30 ps at a normalized amplitude (A/σ) of 250, corresponding to a 25 GeV
photon with 100 MeV noise (at the HL-LHC start) or a 60 GeV photon with 240 MeV
noise (at the HL-LHC end).
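Timing resolutions of this kind are commonly parametrized as a noise-like term scaling with the inverse of the normalized amplitude, added in quadrature to a constant term; the sketch below evaluates such a parametrization with illustrative coefficients (not the measured ones) chosen to give about 30 ps at A/σ = 250.

import math

def timing_resolution_ps(a_over_sigma, noise_term_ps=6500.0, constant_term_ps=15.0):
    """sigma_t = N/(A/sigma) (+) C, with (+) a quadrature sum; coefficients are illustrative."""
    return math.hypot(noise_term_ps / a_over_sigma, constant_term_ps)

print(timing_resolution_ps(250.0))   # ~30 ps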

5 Conclusions

The ECAL performance at 13 TeV is excellent, but the harsh and challenging conditions
of the HL-LHC necessitate a complete replacement of the EE and a partial upgrade of the
EB to keep this performance comparable to that of Run II. To mitigate the increased APD
noise due to radiation damage, the EB operating temperature will be lowered from 18 ◦C
to 8 ◦C. Reading single-crystal information at 40 MHz through transimpedance amplifiers
will provide much more
information to the off-detector electronics. This will be used in the L1 trigger to
mitigate anomalous signals in the APDs and reduce the effects of pileup. Higher-
precision timing information than presently available will mitigate pileup even
further and result in an overall EB performance that is comparable to that in
present CMS operation. However a more precise time-of-flight measurement of
photons (σ ∼ 30 ps) will play a key role during HL-LHC to get the same angular
resolution in H → γγ analysis as in Run II.

References
1. Rossi, L., Bruning, O.: High luminosity large Hadron collider: a description for the
European strategy preparatory group, CERN-ATS-2012-236. https://cds.cern.ch/
record/1471000
2. CMS Collaboration: Technical proposal for the phase-II upgrade of the CMS detec-
tor, CERN-LHCC-2015-010; LHCC-P-008 (2015)
3. The CMS Collaboration: The CMS experiment at the CERN LHC. JINST 3 (2008).
S08004. https://doi.org/10.1088/1748-0221/3/08/S08004
4. Chatrchyan, S., et al., CMS Collaboration: Performance and operation of the CMS
electromagnetic calorimeter. JINST 5 (2010). T03010. arXiv:0910.3423v3
5. Petyt, D.A., CMS Collaboration: Mitigation of anomalous APD signals in the CMS
electromagnetic calorimeter. J. Phys. Conf. Ser. 404 (2012). 012043. http://stacks.
iop.org/1742-6596/404/i=1/a=012043
6. The CMS Collaboration: CMS Phase II upgrade scope document, CERN-LHCC-
2015-019. https://cds.cern.ch/record/2055167
The Tracking System at LHCb in Run 2:
Hardware Alignment Systems, Online
Calibration, Radiation Tolerance and 4D
Tracking with Timing

Artur Ukleja(B)

National Centre for Nuclear Research, Warsaw, Poland


artur.ukleja@ncbj.gov.pl

Abstract. The Outer Tracker detector of the LHCb experiment is a gaseous
straw-tube tracker that measures the drift time with a resolution of
2.4 ns and tracks with a spatial resolution of 171 µm. The maximum drift
time extracted from data is 35 ns. The dedicated optical alignment system
RASNIK shows that the construction supporting the Outer Tracker
detector is mechanically rigid. Its possible assistance in the particle
identification of protons and pions is also presented.

1 Introduction
The LHCb experiment [1] is designed to study B and D meson decays at the
LHC. It is constructed as a forward spectrometer with an acceptance in the
pseudorapidity range 2 < η < 5. The tracking system of the LHCb detector is
composed of a silicon-strip vertex detector, close to the proton-proton interaction
region, and five tracking stations: two upstream and three downstream of a dipole
magnet. The outer part of the downstream stations is covered by the Outer
Tracker (OT) detector [1]. The OT is a straw-tube gaseous detector covering an
area of 5 × 6 m². The OT detector modules are arranged in three stations (T1, T2
and T3). Each station consists of four module layers, arranged in an x − u − v − x
geometry. The modules in the x layers are oriented vertically, so that they measure
the horizontal x coordinate; the y coordinate is vertical and the z axis is defined
along the beam pipe, pointing from the interaction region towards the downstream
end of the detector. Below, the monitoring and evaluation of the OT performance
in Run 2 are presented, together with the mechanical stability of the detector and
studies of its possible contribution to the identification of protons and pions.

2 Drift Time and Hit Resolution


The position of the hits in the OT is determined by measuring the drift time of the
ionisation clusters to the wire. The drift time measured by the detector is presented
in Fig. 1(a). The drift time (t_drift) and the position information are related by
means of a drift-time–distance relation. This relation is calibrated on data by

fitting the distribution of drift time as a function of the reconstructed distance of
closest approach between the track and the wire (r):

t_drift(r) = (21.3 · |r|/R + 14.4 · |r|²/R²) ns,
where R = 2.45 mm is the radius of a straw. The parameter values obtained from
Run 2 data are consistent with the Run 1 results [2,3]. The maximum drift time
extracted from the parametrization is 35 ns. The resolution dependence on the
distance from the wire (Fig. 1(b)) is also extracted from the fit:

σ_tdrift(r) = (2.25 + 0.3 · |r|/R) ns.
Due to a large background contribution coming mainly from secondary hits, the
residual distributions are fitted with a Gaussian function in a ±1σ range. The drift-time
residual distribution has a width of 2.4 ns and the spatial resolution (Fig. 1(c))
is 171 µm.
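A minimal sketch of the quoted parametrization, useful for checking that the maximum drift time at the straw wall is indeed about 35 ns:

```python
R = 2.45  # straw radius in mm

def drift_time_ns(r_mm):
    """r-t relation with the calibrated parameters quoted in the text."""
    x = abs(r_mm) / R
    return 21.3 * x + 14.4 * x**2

def drift_time_sigma_ns(r_mm):
    """Drift-time resolution as a function of the distance from the wire."""
    return 2.25 + 0.3 * abs(r_mm) / R

print(drift_time_ns(R))        # ~35.7 ns, the maximum drift time at the straw wall
print(drift_time_sigma_ns(R))  # ~2.55 ns at the wall
```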

Fig. 1. (a) The drift time versus the unbiased distance with the overlaid fit (black line),
(b) the drift time residual distribution and (c) the hit distance residual distribution.
In (b) and (c) the core of the distribution (within ±1σ) is fitted with a Gaussian
function and the result is indicated in the figures.

3 Occupancies and Hits Efficiency


To reduce the occupancy in good events, events preceded by busy bunch crossings are
vetoed. Figure 2(a) shows the average number of hits recorded as a function of the sum
of the transverse energy of all hadronic calorimeter clusters in the previous bunch
crossing (ΣET(Prev)). It can be seen that the occupancy in the OT is about 5000 hits
when the previous event is empty. This is caused by late hits from the previous bunch
crossing which are recorded in the OT as early hits of the triggered bunch crossing
(so-called spill-over). If the previous event was busy (ΣET(Prev) close to 1 TeV), the
number of hits increases to ∼7500, which is a limit for the data acquisition.
An upper limit of ΣET(Prev) < 1 TeV is foreseen, which would reject 7.2% of
events. This does not bias the topology of the physical event since there is no
correlation between the calorimeter multiplicity in consecutive bunch crossings.
The effect of this selection on the drift-time spectrum is illustrated in Fig. 2(b).

Fig. 2. (a) The average number of recorded hits as a function of the activity in the pre-
vious bunch crossing, expressed as the scalar sum of transverse energy in all calorimeter
clusters (expressed in GeV). (b) The recorded drift time spectrum for all hits in the
OT (black line) and when keeping only events with ΣET(Prev) < 1000 GeV (blue
histogram).

4 Mechanical Stability
To improve the performance and to control the OT position by monitoring it, a
system based on the Relative Alignment System of NIKHEF (RASNIK) [4–6] was
constructed. The idea of this system is to project a finely detailed image through a
lens onto a CCD camera (RASNIK line). The RASNIK lines are mounted on the four
corners of each frame of the OT and they measure the displacement of four points on
a frame with respect to corresponding reference points. Horizontal lines measure
mainly the x and y LHCb coordinates. The resolution of the RASNIK system is better
than 1 µm. Example x and y variations of points close to the frame corners are shown
in Fig. 3. The positions vary within ∼200 µm in both the x and y coordinates. At the
beginning of the run, in May and June, the changes are relatively large (100–200 µm)
until the intervention at the end of June, when the detector was opened and closed.
After the intervention, in July and August, the OT slowly attains an equilibrium state.
This effect is seen once again after the second intervention in September, now over a
shorter time. In October and November, the changes start to evolve with the opposite
trend to the one observed in May.

Fig. 3. An example of the movements of a corner of the XU C-frame of the T2 station of the
OT, shown as the x (a) and y (b) coordinates as a function of time for the data acquired in
2016.

5 Physics Object Exploiting the OT


The velocity of particles created in the pp collisions can mostly be approximated
by the speed of light, but heavy, low-momentum particles can have a velocity
significantly lower than the speed of light. This can cause a significant shift in the
arrival time. For protons the shift in the arrival time is about 0.5 ns at 5 GeV
(Fig. 4(a)). The track time distributions of pions and protons obtained in data are
shown in Fig. 4(b). The difference in the track time distributions between low-momentum
(p < 7 GeV) pions and protons is visible. Although most of the particle
identification performance of LHCb comes from the ring-imaging Cherenkov
detectors, the OT track time can assist in the identification of these particles,
especially for analyses involving protons, Σ hyperons, deuterons and searches for
new long-lived heavy particles.
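A short sketch of the underlying kinematics reproduces the quoted ∼0.5 ns proton–pion time-of-flight difference at 5 GeV over the ∼8.5 m flight path:

```python
import math

C = 299_792_458.0                    # m/s
L = 8.5                              # m, interaction point to the OT centre
M_PROTON, M_PION = 0.9383, 0.1396    # GeV

def time_of_flight_ns(p_gev, mass_gev):
    beta = p_gev / math.hypot(p_gev, mass_gev)
    return L / (beta * C) * 1e9

dt = time_of_flight_ns(5.0, M_PROTON) - time_of_flight_ns(5.0, M_PION)
print(f"proton-pion TOF difference at 5 GeV: {dt:.2f} ns")   # ~0.48 ns, i.e. about 0.5 ns
```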

Fig. 4. (a) The difference in time-of-flight between protons and pions (Δt) as a function
of their momentum (p), at the center of the OT, about 8.5 m from the interaction
point. (b) The distribution of track times for protons and pions with p < 7 GeV in
data acquired in 2016.

6 Summary
The performance of the LHCb Outer Tracker detector was stable during the entire
Run 2. Efficiencies and availability have been kept at high standards for the whole
data taking. The maximum drift time extracted from data is 35 ns with a resolution
of 2.4 ns, and the spatial resolution is 171 µm. The position monitoring system
results show that, to a very good approximation, the construction supporting the
Outer Tracker detector is mechanically rigid. The high-accuracy data of this system
nevertheless allow even small deformations of the Outer Tracker detector, connected
with magnetic field configurations, mechanical interventions, etc., and no larger than
200 µm, to be tracked.

References
1. Alves Jr., A.A., et al.: The LHCb detector at LHC. JINST 3, S08005 (2008)
2. Arink, R., et al.: Performance of the LHCb outer tracker. J. Instrum. 9, P01002
(2013)

3. Arink, R., et al.: Improved performance of the LHCb Outer Tracker in LHC Run-2.
(in preparation)
4. Dekker, H., et al.: The RASNIK/CCD 3-dimensional alignment system, 017, eConf
C930928 (1993)
5. Adamus, M., et al.: Test results of the RASNIK optical alignment monitoring system
for the LHCb Outer Tracker Detector, LHCb-Note-2001-004
6. Adamus, M., et al.: First Results from a Prototype of the RASNIK alignment system
for the Outer Tracker detector in LHCb experiment, LHCb-Note-2002-016
Design of a High-Count-Rate Photomultiplier
Base Board on PGNAA Application

Baochen Wang, Lian Chen(&), Yuzhe Liu, Weigang Yin, Zhou He,
and Ge Jin

State Key Laboratory of Particle Detection and Electronics,


University of Science and Technology of China, Hefei 230026, China
wbc1992@mail.ustc.edu.cn

Abstract. Prompt Gamma Neutron Activation Analysis (PGNAA) technology


has become the best choice to meet the needs of detecting the composition of
industrial materials. A high count rate is an important requirement of PGNAA, and a
large-size NaI(Tl) detector is used. Two types of base board design are used to
provide the biasing voltages. The “simple” resistive base board design cannot work
properly under high-count-rate conditions. The developed design with current
sources, implemented with transistors, works properly under high-count-rate
conditions and shows a large improvement in various indicators. This paper
describes PGNAA, the detector system, the two types of base board design and the
test results.

Keywords: PGNAA · High count rate · Base board

1 Introduction

The PGNAA technique is a non-destructive nuclear analysis technique for the
determination of elements. The method is used intensively for on-line and in situ
analysis in various fields, and it is becoming the best choice to meet the needs of
detecting the composition of industrial materials [1].
As an on-line and in situ inspection method, the measurement must be done in a
short time, usually only about 120 s. In order to ensure the measurement accuracy
and obtain better statistics, the measurement system requires a high count rate,
which is an important indicator of PGNAA.

2 Detector System

The detection system is shown in Fig. 1. A NaI(Tl) detector with a size of 6 × 7 inches
is used to achieve high detection efficiency and energy resolution. A photomultiplier
tube (PMT) and the sodium iodide crystal are enclosed in the detector. A PMT base
board connects the detector to the electronics system. The PMT output signal flows
through the main amplifier into the multi-channel analyzer (MCA). A high-voltage
module provides the power supply to the detector.


Fig. 1. Block diagram of detector system

3 Base Board Design

3.1 “Simple” Resistive Base Board Design


Its schematic diagram is shown in Fig. 2. The voltage between the cathode K and
dynode Dy1 is nearly a quarter of the total voltage, which helps to improve the
signal-to-noise ratio and the energy resolution. The voltages across the middle
electrodes are equal. Increasing the voltage across the last several electrodes, with
capacitors connected in parallel, avoids space-charge effects.

Fig. 2. Schematic diagram of “Simple” resistive base board design

However, this design is not able to fix the voltages between the electrodes, because the
PMT electrode currents flow through the divider, which has a relatively high output
impedance. The electrode current depends on the light intensity, so the voltages
between the electrodes, and consequently the PMT gain, will change with the light
intensity, causing non-linear effects and signal distortion [2]. The output current of the
NaI detector is very high since the NaI crystal has a very high luminous efficiency.
Therefore signal distortion will occur if the biasing circuit, i.e. the base board, cannot
provide enough current to the “high current” dynodes, i.e. the last several dynodes.

3.2 Development Design


The energy of the electrons arriving at dynode Dy1 is determined by the voltage
between the cathode K and the dynode Dy1. Each of these electrons causes secondary
emission from the material of the dynode, and the number of emitted electrons is
usually higher than one. The secondary dynode current IDy_out is higher than the
primary dynode current IDy_in, resulting in a total dynode current
IDy = IDy_out − IDy_in.
From an electrical point of view, a PMT may be treated as a passive node (with no
energy sources inside it) and therefore Kirchhoff's Current Law (KCL) may be applied
to it, resulting in

I_load = I_P + I_K + Σ_{i=1}^{K} I_Dy,i    (1)

Dynodes Dy1 to Dy5 are “low current” dynodes. These dynodes should work at an
appropriate voltage; Zener diodes are used to limit their working voltage if it is too
high.
Dynodes Dy6 to Dy10 are “high current” dynodes. Adding current through these
dynodes is very helpful for operating in a high-count-rate environment. Following
Kerns' design [3], with a few changes, transistors, diodes and Zener diodes are used in
the developed design, as shown in Fig. 3. The transistors (Q in Fig. 3) are configured
as current sources (marked in Fig. 3 by a red circle), providing current to the dynodes.
The current flows through the emitter and from the collector into the emitter of the
next transistor. The diode (D in Fig. 3) near each transistor provides the biasing
voltage VBE (the base–emitter voltage). Zener diodes across the transistors provide
the biasing voltage VCE (the collector–emitter voltage) and protect the transistors
from high voltage. These components keep the transistors in the active (amplification)
region.

Fig. 3. Schematic diagram of developed design circuit of last few dynodes



The current delivered by the current source (marked in Fig. 3) can be calculated by
the following formula:

I_EQ = (V_BQ − V_BEQ) / (R1 ∥ R2)    (2)

V_BQ is the biasing voltage of the base. The currents flowing through the transistors
are equal. Resistors R1 and R2 (Fig. 3) are connected in parallel.
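A minimal sketch of Eq. (2); the component values below are purely illustrative and are not taken from the paper:

```python
def emitter_current(v_bq, v_be, r1, r2):
    """Eq. (2): current delivered by one transistor current source,
    with R1 and R2 forming a parallel combination."""
    r_parallel = r1 * r2 / (r1 + r2)
    return (v_bq - v_be) / r_parallel

# Hypothetical values for illustration only:
print(emitter_current(v_bq=5.0, v_be=0.7, r1=1e3, r2=1e3))  # 8.6 mA for these values
```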

4 Test Result

The test result of the “simple” resistive base board design is shown in Fig. 4. The
signal is distorted under high-count-rate conditions. The upper limit of the energy
spectrum is 5 MeV and the upper limit of the count rate is 100 kcps.

Fig. 4. Test result of base board. Up: Signal output from base board, left: high energy
characteristic gamma; right: low energy characteristic gamma. Down: Signal output from main
amplifier, left: low count rate condition; right: high count rate condition.

The test result of the developed base board design does not show this distortion.
Testing with the new base board under high-count-rate conditions, the hydrogen peak
(2.2 MeV) resolution is 7.8%, as shown in Fig. 5. The upper limit of the count rate has
been improved to 300 kcps, and the upper limit of the energy spectrum has been
improved to 10 MeV. This result meets our requirements for the PGNAA application.

Fig. 5. Test result of hydrogen energy spectrum

5 Conclusions

The developed design with transistor current sources works properly under
high-count-rate conditions and shows a large improvement in various indicators. The
upper limit of the energy spectrum has been improved to 10 MeV and the upper limit
of the count rate has been improved to 300 kcps.
This work is supported by the National Natural Science Foundation of China under
Grant No. 11375179.

References
1. Liu, Y., Chen, L., Liang, F. et al.: High counting-rate data acquisition system for the
applications of PGNAA. In: Real Time Conference (RT), 2016 IEEE-NPSS, pp. 1–4. IEEE
(2016)
2. Heifets, M., Margulis, P.: Fully active voltage divider for PMT photo-detector. In: Nuclear
Science Symposium and Medical Imaging Conference (NSS/MIC), 2012 IEEE, pp. 807–814.
IEEE (2012)
3. Kerns, C.R.: A high-rate phototube base. IEEE Trans. Nucl. Sci. 24(1), 353–355 (1977)
Front-end Electronics and Fast Data
Transmission
Electronics and Triggering Challenges
for the CMS High Granularity
Calorimeter for HL-LHC

Johan Borg(B)
on behalf of the CMS Collaboration

Imperial College London, SW7 2BW London, UK


j.borg@imperial.ac.uk

Abstract. The High Granularity Calorimeter (HGCAL) is presently


being designed to replace the CMS endcap calorimeters for the High
Luminosity phase at LHC. It will feature six million silicon sensor chan-
nels and 52 longitudinal layers. The requirements for the frontend elec-
tronics include a 0.3 fC-10 pC dynamic range, low noise (2000 e-) and low
power consumption (10 mW/channel). In addition, the HGCAL will per-
form 50 ps resolution time of arrival measurements to combat the effect
of the large number of interactions taking place at each bunch crossing,
and will transmit both triggered readout from on-detector buffer memory
and reduced resolution real-time trigger data. We present the challenges
related to the frontend electronics, data transmission and off-detector
trigger preprocessing that must be overcome, and the design concepts
currently being pursued.

1 Introduction
By the start of the third long shutdown of the LHC in late 2023, the accumulated
radiation damage to the current endcap calorimeters of the CMS detector is expected
to be so severe that new endcap calorimeters must be installed. In addition,
to cope with the increase in luminosity proposed by the high-luminosity LHC
(HL-LHC) project, and the resulting large number of simultaneous interactions
at each bunch crossing (pile-up) and radiation damage, the High Granularity
Calorimeter (HGCAL) project [1] is developing a sampling calorimeter based
on the CALICE/ILC concepts [2], but tailored to the CMS/LHC conditions.
Compared to the CALICE design, the HGCAL will include high-precision time-
of-arrival (TOA) measurement capability as a means of reducing the impact of
the high pile-up, circuits for generating reduced-resolution energy measurement
data for the trigger, and must respect a tight power budget to make the cooling
requirements of the detector manageable. Although the HGCAL will use both
silicon sensors and scintillator+silicon photomultiplier sensors, both using very
similar front-end electronics, this paper will focus on the electronics requirements
for the more challenging silicon sensors.

2 Front-End Electronics
A sketch of the currently envisioned architecture for the HGCAL frontend is
shown in Fig. 1. To enable calibration using minimum ionizing particles (MIP),
which deposit about 3.6 fC in a 300 µm sensor, an equivalent input charge noise
of less than 0.32 fC (2000 e-) is specified. To reach the 15 bit dynamic range
required to cover the specified 10 pC full-scale range, a time over threshold
(TOT) circuit will extend the measurement range of a charge-preamplifier and
Analog to Digital Converter (ADC) path beyond the 11 bit range of the ADC.
To simplify the logic for summing (nominally 4) sensor cells to form the reduced-
resolution trigger data, as well as to simplify the offline (but still time-critical)
high-level trigger analysis, the linear regions of the ADC and TOT paths should
exhibit some overlap. A high dynamic-range charge injection circuit with high
relative accuracy will be used for transferring the MIP-based calibration to the
TOT range.
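The quoted figures can be cross-checked with a small calculation (a sketch, using only the numbers given in the text):

```python
import math

full_scale_fc = 10_000.0   # 10 pC full-scale range
low_end_fc    = 0.3        # lower end of the quoted 0.3 fC - 10 pC dynamic range
mip_fc, noise_fc = 3.6, 0.32

print(f"dynamic range: {math.log2(full_scale_fc / low_end_fc):.1f} bits")  # ~15 bits
print(f"MIP signal-to-noise: {mip_fc / noise_fc:.1f}")                     # ~11
```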
To enable the required 20 ps TOA resolution the front-end must have a
large gain-bandwidth product, and a phase-stable reference clock signal must be
distributed to each front-end ASIC. The shaping-time of the front-end is a trade-
off between electrical noise and measurement corruption due to a slow decay of
charge deposited in preceding bunch-crossings. As shown in Fig. 2 digital filtering
[3] can mitigate the latter, and would thus allow the shaper time-constant to
be increased from 10–15 ns to 30 ns. Finally, the whole analog front-end must
respect a power budget of only 10 mW per channel.
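To illustrate the idea behind the digital filtering of [3], the sketch below applies the standard three-weight deconvolution for an ideal, peak-normalized CR-RC pulse sampled every bunch crossing; the weights and pulse model are the textbook case, not the actual HGCAL filter coefficients, which are not given here:

```python
import math

def deconvolution_weights(dt_ns, tau_ns):
    """Three-tap deconvolution weights for an ideal CR-RC pulse
    h(t) = (t/tau) * exp(1 - t/tau) (peak-normalized), sampled every dt."""
    x = dt_ns / tau_ns
    return (math.exp(x - 1) / x, -2 * math.exp(-1) / x, math.exp(-x - 1) / x)

dt, tau, amp = 25.0, 30.0, 100.0   # bunch spacing, shaper time constant, pulse amplitude
w1, w2, w3 = deconvolution_weights(dt, tau)

# Sample the shaped pulse and apply the FIR filter: the slow tail collapses
# back into a single sample, which is what suppresses out-of-time pile-up.
samples = [0.0, 0.0] + [amp * (k * dt / tau) * math.exp(1 - k * dt / tau) for k in range(8)]
deconvolved = [w1 * samples[k] + w2 * samples[k - 1] + w3 * samples[k - 2]
               for k in range(2, len(samples))]
print([round(s, 3) for s in deconvolved])   # ~[0, 100, 0, 0, ...]: the impulse is recovered
```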

Fig. 1. The current concept for the HGCAL frontend. Charge deposited in the sensor
cell on the left is converted to a voltage by the charge-sensitive preamplifier, before
being digitized either using the ADC or the TOT circuit. At the same time the time of
arrival is measured by a TOA circuit. Not shown: Circuits for sensor leakage current
compensation, and circuits that enable sensors of both polarities to be used. From
CMS-CR-2017-156. Published with permission by CERN.

3 Data Readout and Processing

An overview of the current system design concept for the HGCAL detector is
shown in Fig. 3. At the full 40 MHz acquisition rate, with 6 million channels and
about 34 bits per channel, the raw data generated in the detector will be just
above 8 Pbit/s, which would require some 800,000 10 Gbit optical links to read
out. To keep the cost of the system acceptable, this number has to be reduced by
almost two orders of magnitude. This will be achieved using on-detector buffer
memory capable of storing 12.5 µs worth of data, to be read out in response to
the CMS trigger at a rate of up to 750 kHz. These data are also zero-suppressed
by omitting charge deposits close to the noise level and by transmitting the TOT
and TOA data fields only when they contain meaningful results. These measures
are expected to reduce the triggered data bandwidth enough to fit into about 6000
10 Gbit lpGBT [4] links.

Fig. 2. Left: An illustration of the equivalent input charge noise as a function of shaping
time for CR-RC shaping with equal time-constants, based on a simulation with an
input voltage noise of 0.5 nV/√Hz and a total sensor+PCB+frontend capacitance of
80 pF. 10-bit digitization after the first-stage amplification is assumed. Right:
the system impulse response before and after digital (FIR) filtering using 3 coefficients
for τ = 30 ns. From CMS-CR-2017-156. Published with permission by CERN.
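A quick cross-check of the raw-data-rate and link-count figures quoted above:

```python
channels    = 6_000_000
bits_per_ch = 34
bx_rate_hz  = 40_000_000
link_gbps   = 10

raw_tbps = channels * bits_per_ch * bx_rate_hz / 1e12               # Tbit/s
print(f"raw data rate: {raw_tbps / 1000:.1f} Pbit/s")                 # ~8.2 Pbit/s
print(f"links for full 40 MHz readout: {raw_tbps * 1000 / link_gbps:,.0f}")  # ~816,000
```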
At the same time, the HGCAL detector must contribute real-time informa-
tion to the L1 trigger decision. As mentioned above, the spatial resolution will
be reduced by a factor of four within the front-end ASICs by summing adjacent
cells into trigger cells, and by only using half of the layers for generating trigger
data. Additionally, only trigger cells, or possibly regions of trigger cells, where
substantial deposits have been detected will be transmitted. All in all, we expect
to use on the order of 8000 optical links for bringing the trigger information out
of the detector.
Once out of the detector, the energy measurements from the individual trig-
ger cells must be merged into 3-dimensional clusters. The current approach is
based on finding connected regions around “seed” hits (cells exceeding a compar-
atively high energy threshold) on each layer to form 2D clusters, and in a second
processing step merging these into 3-dimensional clusters that correspond to
incoming particles. To meet throughput and latency requirements these opera-
tions will be implemented using a bank of FPGAs likely housed on ATCA boards
about 70 m from the detector.
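As a toy illustration of the seed-based 2D step described above (thresholds, the neighbour definition and the grid geometry are placeholders, not the actual HGCAL trigger algorithm):

```python
from collections import deque

def two_d_clusters(energies, seed_thr=5.0, cell_thr=0.5):
    """Grow 4-connected regions of cells above cell_thr around seeds above
    seed_thr on one layer. Purely illustrative thresholds and connectivity."""
    rows, cols = len(energies), len(energies[0])
    seen, clusters = set(), []
    for r in range(rows):
        for c in range(cols):
            if energies[r][c] < seed_thr or (r, c) in seen:
                continue
            cluster, queue = [], deque([(r, c)])
            seen.add((r, c))
            while queue:
                i, j = queue.popleft()
                cluster.append((i, j))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols and (ni, nj) not in seen
                            and energies[ni][nj] >= cell_thr):
                        seen.add((ni, nj))
                        queue.append((ni, nj))
            clusters.append(cluster)
    return clusters

layer = [[0.0, 0.2, 0.0],
         [0.6, 7.5, 1.1],
         [0.0, 0.9, 0.0]]
print(two_d_clusters(layer))   # one cluster of cells around the central seed
```

In the real system the 2D clusters from each layer would then be linked longitudinally into 3D clusters, as described in the text.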

Fig. 3. The signals deposited in each of the approximately 6M sensors cells, distributed
over about 600 m2 of sensor wafers, are digitized by frontend ASICs located on sensor
PCBs that connect to the sensor wafers using through-hole wire bonds. The digital data
streams from the frontend ASICs will be sent over electrical links to the panel PCBs
where streams from multiple sensors are aggregated in concentrator ASICs before being
sent off-detector using lpGBT-based optical links. From CMS-CR-2017-156. Published
with permission by CERN.

4 Conclusions and Future Work


While the high luminosity planned for HL-LHC will speed up the accumula-
tion of statistics substantially, it has significant implications for the new endcap
calorimeters of CMS. To date, rapid progress has been made by leveraging experi-
ence and R&D from the CALICE project. Nevertheless, significant work remains
to develop a detector that can manage the requirements at CMS during the HL-
LHC era in terms of pile-up rejection, power consumption and data throughput. A
first testbeam experiment in 2016 [5] delivered promising results and a second
testbeam during the summer of 2017 will explore TOT and TOA measurements
using a front-end ASIC made specifically for this test. Two sets of prototype cir-
cuits for the HGCAL frontend have been manufactured, with a third prototype
due to be submitted for manufacturing in June 2017.
Looking ahead, the HGCAL project is due to deliver a Technical Design
Report by November 2017, with full-architecture system tests planned for 2018.
This rapid pace is necessary as production needs to start in 2020 in order to
have the final detector ready for installation in 2024–2025.

Acknowledgments. This work was funded in part by: H2020-ERC-2014-ADG


670406 - Novel Calorimetry.

References
1. Technical Proposal for the Phase-II Upgrade of the CMS Detector, CMS collabora-
tion, Technical report (2015)
2. The CALICE collaboration: Construction and commissioning of the CALICE analog
hadron calorimeter prototype. J. Instrum. 5, P05004 (2010)

3. Gadomski, S., et al.: The deconvolution method of fast pulse shaping at hadron
colliders. Nucl. Instrum. Methods Phys. Res. A 320, 217–227 (1992)
4. Moreira, P.: The LpGBT Project Status and Overview. https://indico.cern.ch/
event/468486/contributions/1144369/attachments/1239839/1822836/aces.2016.03.
08.pdf. Accessed 20 Jun 2017
5. Jain, S.: Construction and first beam-tests of silicon-tungsten prototype modules
for the CMS High Granularity Calorimeter for HL-LHC. J. Instrum. 12, C03011
(2017)
Readout Electronics for CASCA in XTP
Detector

Hengshuang Liu and Dong Wang(B)

Central China Normal University, Wuhan, China


hengshuangliu@mails.ccnu.edu.cn, dongwang@mail.ccnu.edu.cn

Abstract. CASCA (Charge Amplifier with Switched Capacitor Array)


is a 32-channel readout ASIC in 0.18 µm CMOS technology for a TPC
based X-ray Polarimeter (XTP). It measures two dimensional photoelec-
tron tracks generated by the incident X-ray photons with one dimen-
sional strip readout. The other dimension is calculated by the drift time
from the signal waveform. This paper presents a readout electronics sys-
tem based on the CASCA chip. The system mainly consists of three
kinds of modules: the CASCA Front-end card, the Adapter card and
the Main DAQ card. The Front-end card, mounted with CASCA chips,
is designed to receive the detector signals. The Adapter card, edge-mounted
with the Front-end card, is in charge of digitizing the output
signals from the CASCA chip. The Main card collects digitized data from
the Adapter cards and then transfers the data to a server through the
Gigabit Ethernet protocol. One Main card is able to handle more than
128 electronics channels.

Keywords: CASCA · Readout electronics · FPGA · Trigger · Ethernet

1 Introduction

A micro-pattern TPC (Time Projection Chamber) based X-ray polarimeter (XTP)
has been demonstrated in recent years; its main advantage is that it relaxes the
trade-off between absorption depth and drift distance [1]. It measures the
two-dimensional photoelectron tracks generated by the incident X-ray photons with
a one-dimensional strip readout. The other dimension is calculated from the drift
time extracted from the signal waveform. The polarimeter focuses on X-ray energies
of 2–10 keV, for which the track length is only several millimetres, so a 100 µm track
resolution and a fast sampling rate (20 MSPS) are required. All these characteristics
lead to high-density, low-power readout electronics.
A new dedicated ASIC named CASCA, which integrates 32 channels in
0.18 µm CMOS technology, has been developed by the Department of Engineering
Physics at Tsinghua University. Each channel of CASCA integrates the front-end
part and the SCA, the former for amplification and shaping and the latter for
sampling [2]. The ASIC finally outputs a pair of differential signals from the 32 channels.

Its effective sampling rate is designed to be 40 MSPS and the readout frequency is
15 MHz. The analog output from CASCA needs to be digitized and transported by
the readout system. The detailed design of the readout system is presented in this
paper.

2 System Architecture and Specifications

2.1 Architecture

The Adapter card, edge-mounted with the Front-end card, is close to the detector and
samples the waveform. The Main card, an FMC-based card which is responsible for
digital data acquisition and data processing, connects to the Adapter card through a
Mezzanine card. In a “several-in-one” arrangement, several Adapter cards are
connected to one Main card through a Mezzanine card with several HDMI 2.0 cables.
After processing the digitized data, the Main card transmits the data to a server or PC
through the Gigabit Ethernet protocol. A schematic block diagram is shown in Fig. 1.

Fig. 1. Simplified schematic block diagram for readout system architecture

2.2 CASCA Card

The CASCA Front-end card is designed to configure the CASCA chip and to transfer
the output signal. An ultralow-noise, high-performance differential amplifier is
adopted on the CASCA card to provide a rail-to-rail output for the high-precision
ADC (Analog-to-Digital Converter) on the Adapter card.

2.3 Adapter Card

The Adapter card, edge-mounted with the CASCA Front-end card, is in charge of
digitizing the output from the CASCA card and driving the ASIC. An FPGA chip
and a multi-output clock-generation chip are mounted on the Adapter card.

The FPGA chip provides the necessary logic signals for the CASCA ASIC and is
responsible for packaging the digital data from the 32 channels. The clock generator
AD9517 is configured to meet the requirements of the sampling clock and the readout
clock. The Adapter card is shown in the bottom left of Fig. 2.
To meet the bandwidth of the data flow between the Adapter card and the Main card,
we selected a Xilinx Artix-7 FPGA which provides a quad of GTP transceivers. With
the help of Serial RapidIO technology and HDMI 2.0 cables, the bandwidth can reach
a maximum of 6.25 Gbps, which exceeds the requirement of the Adapter card. The
data are transferred through the Serial RapidIO protocol, realized by the Serial
RapidIO Gen2 Endpoint Solution logic IP core in the FPGA.

Fig. 2. The photograph of the Adapter card and Main card

2.4 Main Card


The Main card has a high-performance FPGA chip providing the logic resources and
I/O interfaces, and 500 MB of DDR3 memory as a data buffer. Both the Front-end
card and the Adapter card work under the Main card, which provides the power
supply and the source clock. The Main card is shown in the top right of Fig. 2.

3 Firmware Architecture
To meet the requirements of CASCA, two FPGAs have been adopted in this system.
All command control and data transfers depend on the FPGA chips, which play the
role of control centre; efficient firmware therefore improves the performance of the
system. A modular architecture is followed in the design of the firmware. The Serial
RapidIO protocol and the Gigabit Ethernet protocol for data transfer are realized by
logic IP cores in the FPGA, and a special module named the control interface
interprets the commands from the PC. Each module has its own task and the modules
collaborate with each other. A detailed schematic block diagram is shown in Fig. 3.

Fig. 3. Simplified schematic block diagram for firmware architecture

4 Summary

A new readout system has been proposed for CASCA in the XTP detector. The
hardware and firmware have been developed successfully, and the transfer capacity
exceeds the requirement of CASCA; it is also suitable for future upgrades of the chip.
In the future, the firmware design needs to be optimized and a system-level test of
this readout system will be performed.

Acknowledgement. Finally, we acknowledge the support from the NSFC of China


(Grant Number 11505074).

References
1. Black, J.K., Baker, R.G., Deines-Jones, P., Hill, J.E., Jahoda, K.: X-ray polarimetry
with a micropattern time projection chamber. Nucl. Instrum. Methods Phys. Res.
A 581, 755–760 (2007)
2. Zhang, H.Y., Deng, Z., He, L., Li, H. Feng, H., Liu, Y.N.: CASCA: a readout ASIC
for a TPC based X-ray polarimeter. In: IEEE Nuclear Science Symposium and
Medical Imaging Conference (NSS/MIC), San Diego, CA, pp. 1–4 (2015)
A High-Resolution Clock Phase-Shifter
in a 65 nm CMOS Technology

Dongxu Yang1, Szymon Kulis3, Datao Gong2, Jingbo Ye2,


Paulo Moreira3(&), and Jian Wang1(&)
1 Department of Modern Physics, University of Science and Technology
of China, Hefei 230026, Anhui, People’s Republic of China
wangjian@ustc.edu.cn
2 Department of Physics, Southern Methodist University,
Dallas, TX 75275, USA
3 CERN, CH-1211 Geneva 23, Switzerland
Paulo.Moreira@cern.ch

Abstract. The design of a high-resolution phase-shifter which is part of the


LpGBT, a low power upgrade of the gigabit transceiver (GBTX) for the LHC
upgrade program, is presented. The phase-shifter circuit aims at producing a
programmable phase rotation (up to 360°) with a time resolution of 48.8 ps for
several input clock frequencies: 40, 80, 160, 320, 640 or 1280 MHz. The circuit
is implemented as two functional blocks: a coarse phase-shifter, with a fully
digital implementation, and fine phase-shifter, based on a Delay-Locked Loop
(DLL). The post-layout simulations show that the peak-to-peak values of INL
and DNL are 0.1 and 0.06 LSB (48.8 ps) respectively at 1.28 GHz in the
nominal corner while at 40 MHz the values are 0.06 and 0.05 LSB respectively.
The phase-shifter has been designed as a radiation-tolerant circuit by means of
enclosed layout transistors (ELT) in a 65 nm CMOS technology to achieve high
resolution and reduced power dissipation. The typical power dissipation of the
fine phase-shifter at the lowest and the highest frequencies are 1.1 mW and 9.1
mW respectively at 1.2 V supply voltage.

Keywords: LpGBT · Phase-shifter · DLL

1 Introduction

With the development of the LHC upgrade program, there is a great demand for
high-speed data transmission systems to transfer data between the detectors and the
off-detector electronics. The LpGBT is thus proposed to upload data at up to
10.24 Gb/s.
According to the design plan [1], shown in Fig. 1, a phase-shifter is included in the
chip to provide multiple clock outputs for the LHC front-end electronics. All of these
clocks are synchronized with a reference clock at 40 MHz and must be phase
adjustable. The frequencies of the output clocks are 40, 80, 160, 320, 640 and
1280 MHz, while the shifting resolution is fixed at 48.8 ps.


Fig. 1. Block diagram of the LpGBT

2 Circuit Design

A straightforward way to generate clock intervals is to use a DLL. However, the
number of delay cells needed at 40 MHz is 512, which is unduly long, resulting in a
bulky area, large transient noise and higher power consumption.
In order to obtain a wide tuning range and a high resolution, a better method is to
combine a coarse phase-shifter and a fine phase-shifter [2]. The coarse phase-shifter
rotates the clocks over a whole period with a coarse resolution of 781.25 ps, while the
fine phase-shifter interpolates within the 781.25 ps interval down to 48.8 ps. The
coarse phase-shifter is a fully digital implementation, while the fine phase-shifter is
fully custom and based on a DLL. In this paper we focus on the fine phase-shifter.
The structure of the DLL-based phase-shifter is shown in Fig. 2.
The three D Flip-Flops driven by a clock at 2.56 GHz sample the input data (in this
scenario, they are the required output clocks) and output two clocks with a fixed phase
difference of 781.25 ps.
The lagging clock is directly fed into the Phase Detector. The leading clock
propagates through a 16-stage VCDL (actually a dummy cell is added after the last
active delay cell) within the DLL, and the output of the final stage is used as the
second input of the Phase Detector. When the DLL is locked, the two clocks are in
phase and the VCDL outputs 16 clock signals with equally spaced phases. As the
total delay of the VCDL is 781.25 ps, the delay of one delay cell is thus
781.25/16 ≈ 48.8 ps. By selecting one of the VCDL outputs the 48.8 ps resolution is
achieved.
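A short numeric check of the timing relations quoted above, assuming (as a sketch) that the 781.25 ps interval corresponds to two periods of the 2.56 GHz sampling clock:

```python
f_dff_clock = 2.56e9                       # Hz, clock driving the D flip-flops
coarse_step = 2 / f_dff_clock              # assumed: two clock periods = 781.25 ps
fine_step   = coarse_step / 16             # 16-stage VCDL

print(f"coarse interval: {coarse_step * 1e12:.2f} ps")   # 781.25 ps
print(f"fine step:       {fine_step * 1e12:.2f} ps")     # ~48.8 ps
print(f"steps over a 40 MHz period: {round((1 / 40e6) / fine_step)}")  # 512, the single-DLL cell count
```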
The delay cell is made of two current-starved inverters using big nMOS and pMOS
to have a delay margin of at least 21% in the worst case. The two-stage output buffer
consists of a NAND and an inverter. The NAND can be disabled when the circuit is not
used to save power. The delay cell has a symmetrical design, which means that in the
VCDL every output node of the current-starved inverters is loaded identically, because
it is always connected to the next inverter and an output buffer. This design has proven
beneficial in reducing duty-cycle distortion, which is critical in some double-data-rate
applications. As a comparison, the delay cell in [2] is also a current-starved inverter, but
it is loaded with transmission gates after the odd-stage delay cells and with inverters
after the even-stage delay cells. No matter how the transmission gate is optimized, it
will not make the starved inverters identically loaded in all corners.

Fig. 2. Structure of the fine phase-shifter

The 16:1 MUX consists of 15 identical 2:1 MUXs arranged in four stages as
16:8:4:2:1. Each stage shares a common control signal. The 2:1 MUX is composed of
two inverters and is carefully designed to minimize the coupling between the two
branches. Usually a 2:1 MUX needs an inverter as an output buffer so that its output
is not inverted; here, with the compact structure and the even number of stages, the
output of the 16:1 MUX still has the same phase as its input.
The structures of the delay cell and the 2:1 MUX are shown in Fig. 3. Figure 4 shows
the layout of the full fine phase-shifter. The VCDL and the 16:1 MUX lie at the top,
the large block in the bottom-right part is the Low-Pass Filter (LPF), and the Phase
Detector and the Charge Pump are located in the bottom-left area. The total size is
543 µm × 101 µm.

Fig. 3. Structure of the delay cell (left) and 2:1 MUX (right).

Fig. 4. Floorplan of the fine phase-shifter

3 Results

The post-layout simulations show that the phase delay between two adjacent delay
cells is quite close to 48.8 ps in all simulation cases. Figure 5 depicts the phase
shifting in the nominal corner at the highest and lowest speeds.

Fig. 5. Phase shifting in typical case at 1.28 GHz and 40 MHz



The peak-to-peak values of INL and DNL are 0.1 and 0.06 LSB (48.8 ps)
respectively at 1.28 GHz in the nominal corner while at 40 MHz the values are 0.06
and 0.05 LSB respectively. The periodic jitters are less than 3 ps both at 1.28 GHz and
40 MHz. The typical power dissipation of the fine phase-shifter at the lowest and the
highest frequencies are 1.1 mW and 9.1 mW respectively at 1.2 V supply voltage.

4 Conclusions

We present a DLL-based high-resolution phase-shifter that provides a fixed
phase-shifting resolution for several different clock frequencies. The design features
good linearity and jitter performance. It works at a 1.2 V supply voltage and will be
fabricated in a 65 nm TSMC technology.

References
1. P. Moreira On Behalf of the GBT Collaborations: The LpGBT Project Status and Overview.
https://indico.cern.ch/event/468486/contributions/1144369/attachments/1239839/1822836/
aces.2016.03.08.pdf
2. Wu, G., Yu, B., Gui, P., Moreira, P.: Wide-range (25 ns) and high-resolution (48.8 ps) clock
phase shifter. Electron. Lett. 49(10), 642–644 (2013)
Development of Fast Readout System
for Counting-Type SOI Detector ‘CNPIX’

Ryutaro Nishimura1(&), Yasuo Arai2, Toshinobu Miyoshi2,


Shunji Kishimoto3, Ryo Hashimoro3, Longlong Song4,5,
Yunpeng Lu5, and Qun Ouyang5
1 School of High Energy Accelerator Science,
SOKENDAI, Tsukuba, Ibaraki 305-0801, Japan
ryunishi@post.kek.jp,
nishimura_ryutaro@anet.soken.ac.jp
2 Institute of Particle and Nuclear Studies, KEK,
1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan
3 Institute of Materials Structure Science, KEK,
1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan
4 University of Chinese Academy of Sciences, Beijing 100049, China
5 State Key Laboratory of Particle Detection and Electronics (IHEP, CAS),
Beijing 100049, China

Abstract. We are developing a data readout system for a new photon-counting


type SOI detector ‘CNPIX’. The CNPIX detector will contain ∼100 k hexagonal
pixels, and frame rates of more than 1 kHz are demanded to observe the dynamic
structure of the samples. We adopt the KC705 evaluation board, equipped with
a Kintex-7 FPGA, DDR3 memory, and a Gigabit Ethernet interface, as the base of
our new system to meet this demand. The CNPIX detector signals are connected
through an FPGA Mezzanine Card (FMC) interface.

Keywords: SOI · Photon-counting · Pixel detector

1 Introduction

X-ray diffraction measurement is a useful technique for structural analysis. Recently,
two-dimensional detectors have been used for this measurement. For the user's
convenience, the detector should have a large detection area, a small pixel size and
fast frame rates (more than 1 kHz). Photon-counting type detectors are especially
useful in this measurement because of their superior signal-to-noise ratio. However,
the pixel size of such detectors is rather large (e.g. Medipix3RX has a 55 µm square
pixel [1]), and charge sharing between pixels becomes severe when the pixel size is
reduced. Thus we have started to develop a new photon-counting type SOI detector
‘CNPIX’ using Silicon-On-Insulator (SOI) technology [2], which includes a
charge-sharing handling circuit while keeping a small pixel size (less than 50 µm
pitch). To make this detector practical, it is also important to develop its Data
Acquisition (DAQ) system. In this paper, we present the scheme of the new DAQ
system and the recent status of its development.


2 SOI Pixel Detector


2.1 Overview of the SOI Pixel Detector
SOI pixel detectors are being developed by the SOIPIX collaboration [3] led by the
High Energy Accelerator Research Organization, KEK. They are based on a 0.2 µm
CMOS fully depleted SOI process of Lapis Semiconductor Co., Ltd [2]. The SOI
detector consists of a thick (50–500 µm), high-resistivity (more than 2 kΩ·cm for
Floating Zone type wafers) Si substrate for the sensing part, and a thin Si layer for
the CMOS circuits. Furthermore, a detector which uses a double SOI wafer, with an
additional Si layer in the middle acting as a dedicated shielding layer, has also been
developed. A bird's-eye view of the double SOI pixel detector is shown in Fig. 1.

Fig. 1. The structure of the SOI detector. (Double SOI wafer)

2.2 Overview of CNPIX


CNPIX is being developed and will have ∼100 k hexagonal pixels. The targeted
frame rate is more than 1 kHz, so a readout speed of more than 100 Mpixel/s is required.
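A quick arithmetic check of this requirement (the per-pixel data width below is an assumed placeholder, not a value from the text):

```python
pixels     = 100_000      # ~100 k hexagonal pixels
frame_rate = 1_000        # Hz, targeted frame rate
bits_per_pixel = 12       # assumed counter width, for illustration only

print(f"required readout speed: {pixels * frame_rate / 1e6:.0f} Mpixel/s")        # 100 Mpixel/s
print(f"raw data rate at that width: {pixels * frame_rate * bits_per_pixel / 1e9:.1f} Gbit/s")
```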

3 DAQ System for CNPIX

3.1 Basic Configuration of DAQ System for SOI Detector


Currently, we are using the SEABAS (Soi EvAluation BoArd with SiTCP [4]) board
and its DAQ system [5] to read out SOI sensors. This system uses the SEABAS board
as the DAQ main board. The board has 16 ADC channels, 4 DAC channels, an FPGA
(Field-Programmable Gate Array), a Gigabit Ethernet interface, etc. The DAQ PC
and the SEABAS board are connected through Gigabit Ethernet and communicate
using the TCP/UDP protocols. The transferred data are processed by software that
runs on the PC. On the DAQ main board side, the TCP/UDP protocols are handled by
the SiTCP firmware module [4].

3.2 Proposed New DAQ System


The SEABAS DAQ system is useful for most SOI detector experiments. However,
the performance of the implemented FPGA is not sufficient for the requirements of
the CNPIX DAQ system. For example, the Virtex-5 FPGA implemented on SEABAS2
(the second version of SEABAS) has 2,160 Kb of Block RAM. This capacity is less
than the structured data size of one CNPIX frame. Thus the DAQ main board must be
replaced with a new one. On the other hand, our SOIPIX group has accumulated many
resources over recent years, such as operating software and extension sub-boards, so
for compatibility the new system must keep the SiTCP interface. We selected the
KC705 evaluation board [6] as the new DAQ board. The KC705 is equipped with a
Kintex-7 FPGA, DDR3 memory, and a Gigabit Ethernet interface, which meet our
demands. Table 1 shows the major specifications of the SEABAS2 and the KC705
boards.

Table 1. Difference of specification between the SEABAS2 and the KC705 boards.

                              SEABAS2                        KC705
FPGA                          Virtex-5                       Kintex-7
 - Slice                      7,200                          50,950
 - Block RAM                  2,160 Kb                       16,020 Kb
External memory               N/A                            DDR3 SO-DIMM (default: 1 GB)
Connection for SOI detector   IEEE P-1386 CMC, 64 pin × 4    FMC (VITA 57.1), HPC × 1 / LPC × 1
ADC/DAC/NIM                   On board                       N/A (can be extended)
Gigabit Ethernet              1                              1
SiTCP                         On board (additional FPGA)     Can be implemented (mixed with user circuit)
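The Block RAM comparison in Table 1 can be made concrete with a small estimate; the per-pixel data width below is an assumption for illustration, since the exact structured frame format is not given here:

```python
pixels       = 100_000
counter_bits = 24           # assumed per-pixel data width (placeholder)

frame_kb   = pixels * counter_bits / 1024    # structured frame size in Kb
block_ram  = 2_160                            # Kb, Virtex-5 on SEABAS2
kintex_ram = 16_020                           # Kb, Kintex-7 on KC705

print(f"one frame: ~{frame_kb:.0f} Kb")                          # ~2344 Kb for 24-bit pixels
print(f"fits in SEABAS2 Block RAM: {frame_kb <= block_ram}")     # False
print(f"fits in KC705 Block RAM:   {frame_kb <= kintex_ram}")    # True
```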

Figure 2 shows the structure of the new firmware implemented in the KC705. SiTCP
is included as a part of the firmware. The DDR_IO module is developed as a data
buffer using the DDR3 memory. As a first version, we developed the DAQ system
without the DDR_IO module; the frame rate of this system is about 200 Hz. A DAQ
system including the DDR_IO module is being developed as a second version, in
which the frame rate is expected to be more than 1 kHz.

Fig. 2. Block diagram of the new firmware implemented in the KC705

3.3 DAQ Framework for Practical Purpose


We also have been developing the DAQ framework for practical X-ray measurements.
Basic framework [7] was developed for the SEABAS DAQ system and we extend this
to the KC750 board. The framework consists of modules which provide simple
functions and text-based command connection path. SOI sensor control software,
developed in our previous work, also works as one of modules. Each modules runs
independently, thus, user can customize DAQ system for their purpose by rearranging
modules. In addition, each module doesn’t have to have many functions, has to have
only a few functions for command connection path. This is effective to reduce source
code and to make easier to maintain. This framework can support Windows (Vista or
later) and Linux (Cent OS 6 or later), and can be used on mixed environment, for
example, some of DAQ PCs using Windows and others using Linux.
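As a hypothetical sketch of what a "text-based command connection path" between such modules could look like (the command names, the port and the single-connection handling are invented for this illustration and are not the framework's actual interface):

```python
import socket

def serve_commands(port=9000):
    """Toy line-oriented command server for one DAQ module (illustrative only)."""
    handlers = {"STATUS": lambda: "OK",
                "START": lambda: "RUN STARTED",
                "STOP": lambda: "RUN STOPPED"}
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()                       # blocks until a peer module connects
        with conn, conn.makefile("rw") as stream:
            for line in stream:                      # one text command per line
                reply = handlers.get(line.strip().upper(), lambda: "UNKNOWN COMMAND")()
                stream.write(reply + "\n")
                stream.flush()
```

Because each module only needs to understand a few such text commands, modules can be rearranged or replaced independently, which is the maintainability argument made above.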

4 CNPIX Test Status

We show the test results for the prototype CNPIX detector below.

4.1 Readout Test (After Reset Process)


The prototype CNPIX has several bugs, some of which are critical for the readout.
However, when we read out after the normal reset process, we can see that the
behaviour changes depending on the voltage of the middle SOI layer (VSOI2) and on
the temperature. This means that the new DAQ can detect changes in the output.
Figure 3 shows the result of this test.

Fig. 3. The result of the readout test. (Blue regions indicate pixels at their initial value, and
green regions indicate pixels with invalid values.)

5 Conclusion

A fast readout system for the CNPIX has been developed and tested. We adopted the
KC705 FPGA evaluation board for the new DAQ system instead of the existing
SEABAS DAQ board. The first version of the proposed DAQ system has been
developed and its frame rate reaches 200 Hz. Although the prototype CNPIX detector
has some bugs, we could confirm the DAQ behaviour by checking the output pattern
of the pixel counters.

Acknowledgement. This study is supported in part by JSPS KAKENHI Grant Number


25109008, and this study is performed as a part of the SOIPIX group [3] research activity.

References
1. Ballabriga, R., et al.: The Medipix3RX: a high resolution, zero dead-time pixel detector
readout chip allowing spectroscopic imaging. JINST 8 (2013) (C02016, IOP)
2. Arai, Y., et al.: Development of SOI pixel process technology. Nucl. Instrum. Methods Phys.
Res. A 636, S31–36 (2011)
3. KEK Detector Technology Project SOI Pixel Detector R&D, SOI Pixel Detector R&D. http://
rd.kek.jp/project/soi. Accessed 26 June 2017
4. Uchida, T.: Hardware-based TCP processor for gigabit ethernet. IEEE Trans. Nucl. Sci. 55(3),
1631–1637 (2008)
5. Nishimura, R., et al.: DAQ Development for silicon-on-insulator pixel detectors. In:
Proceedings of International Workshop on SOI Pixel Detector (SOIPIX2015) (2015). arXiv:
1507.04946
6. Xilinx Inc.: KC705 evaluation board for the Kintex-7 FPGA user guide. In: Xilinx
Documents UG810 (v1.7). Xilinx (2016)
7. Nishimura, R., et al.: Development of high-speed X-ray imaging system for SOI pixel
detector. In: Proceedings of the 20th International Workshop on Advanced Image Technology
2017 (IWAIT 2017)
CATIROC, a Multichannel Front-End ASIC
to Read Out the 3″ PMTs (SPMT) System
of the JUNO Experiment

S. Conforti2(&), A. Cabrera1, C. De La Taille2, F. Dulucq2,


M. Grassi1, G. Martin-Chassard2, A. Noury1, C. Santos1,
N. Seguin-Moreau2, and M. Settimo3
1 Laboratoire d’Astroparticule et Cosmologie (APC), Paris, France
2 Organization for Micro-Electronics Design and Applications (OMEGA),
Ecole Polytechnique, Palaiseau, France
conforti@omega.in2p3.fr
3 Laboratoire de physique subatomique et des technologies associées
(SUBATECH), Nantes, France

Abstract. The ASIC CATIROC (Charge And Time Integrated Read Out Chip)
is a complete read-out chip designed to read arrays of 16 photomultipliers
(PMTs). It finds a valuable application in the content of the JUNO (Jiangmen
Underground Neutrino Observatory) experiment [1], a liquid scintillator
antineutrino detector with a double calorimetry system combining about 17k 20″
PMTs (Large PMTs system) and around 25k 3″ PMTs (Small PMTs system).
A front-end electronics based on the ASIC CATIROC matches well within the
3″ PMTs system specifications as explained in this paper. CATIROC is a SoC
(System on Chip) that processes analog signals up to the digitization to reduce
the cost and cables number. The ASIC is composed of 16 independent channels
that work in triggerless mode, auto-triggering on the single photo-electron (PE).
It provides a charge measurement with a charge resolution of 15 fC and a timing
information with a precision of 200 ps rms.

Keywords: Front-end electronics for photomultiplier read-out · Multichannel ASIC · Analog electronic circuits

1 Introduction

The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino
experiment designed to determine the neutrino mass hierarchy with a 20-kiloton
liquid scintillator spherical detector located about 700 m underground. The detection
principle of JUNO is based on the observation, by an array of large photomultipliers
(LPMTs), of the light emitted by Inverse Beta Decay events in the liquid scintillator
volume. To reach the expected sensitivity to the mass hierarchy, the energy resolution
has to be better than 3% at 1 MeV. In order to achieve this target the LPMTs will have
to cover approximately 75% of the inner surface of the sphere, raising the number of
necessary LPMTs to 18,000. In order to improve the calorimetry control of the LPMT
system, 25,000 small 3-inch photomultipliers (SPMTs) will be installed between


the LPMTs [2]. The size of these PMTs is chosen to operate in photon-counting mode
for all events inside the detector. The large number of SPMTs increases the density and
the complexity of the detector system. A customized multichannel Application Specific
Integrated Circuit (ASIC) with the integration of all the analog and digital components
into a single chip will simplify the electronic system, decreasing the total power
consumption and increasing the reliability.

2 CATIROC for the Small PMT System

The SPMTs are expected to detect about 50 PE per event at 1 MeV, spread over 25,000
PMTs, thus working in the photon-counting regime with a time resolution within a few
ns, typical of PMTs. For the installation, the SPMTs will be connected in groups of 128
to autonomous front-end electronics located in underwater boxes near the SPMTs.
This is possible thanks to the integration of 8 CATIROC multichannel ASICs in each
readout board. The CATIROC processes the analog signals up to digitization and sends
out only zero-suppressed digital data to the central processing and storage unit.
CATIROC has been designed in AMS SiGe 0.35 µm technology and integrates 16
identical channels to provide charge and time information [3].
In each channel a trigger path allows auto-triggering on the single photo-electron,
thanks to a fast shaper (5 ns) followed by a low-offset discriminator whose threshold is
set by an internal 10-bit DAC common to the 16 channels.
A charge path, made of a preamplifier, a slow shaper, a switched-capacitor array
and an internal 10-bit Wilkinson ADC, provides a charge measurement over a dynamic
range from 160 fC (1 PE, assuming a PMT gain of 10^6) up to 70 pC (∼400 PE).
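The quoted charge range follows directly from the PMT gain, as a short check shows:

```python
E_CHARGE = 1.602e-19        # C, elementary charge

gain = 1e6
q_1pe_fc = gain * E_CHARGE * 1e15
print(f"1 PE at gain 1e6: {q_1pe_fc:.0f} fC")               # ~160 fC
print(f"70 pC corresponds to ~{70_000 / q_1pe_fc:.0f} PE")  # ~437, i.e. ~400 PE
```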
The time measurement is obtained by two paths: a “time stamp″ performed by a 26-
bit counter at 40 MHz and a “fine time″ obtained thanks to a TDC ramp per channel
converted by another 10-bit Wilkinson ADC.

3 Measurements

CATIROC has been tested in the laboratory with a dedicated test board [3] and the
measurements have been performed with a pulse generator to simulate a PMT input
charge. Preliminary measurements with a 3″ PMT have been performed.
A first piece of information is the detection efficiency for the single-PE signal. It is
measured by scanning the discriminator threshold at a given injected charge (up to
2 PE = 320 fC) and monitoring the discriminator response. Figure 1 (left) shows the
trigger efficiency for given injected charges as a function of the threshold. The 50%
trigger efficiency as a function of the input charge is shown in the same figure (right
panel). These measurements show a trigger channel sensitivity of about 100 DACu/PE
(i.e., a DAC resolution of 0.6 DACu/fC) and a noise (σ) of 3.5 DACu (5.6 fC), obtained
by fitting the pedestal curve (black curve in Fig. 1, left). The minimum threshold can be
calculated from the baseline mean value plus 5σ of noise (Fig. 1, right). Considering the
measured noise of 3.5 DACu, there is a very comfortable margin to set the auto-trigger
threshold at a fraction of a photo-electron (usually 1/4 to 1/3).
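The margin can be quantified with the numbers quoted above:

```python
sensitivity_dacu_per_pe = 100.0     # ~100 DACu per photo-electron
noise_sigma_dacu        = 3.5

min_threshold = 5 * noise_sigma_dacu                        # baseline + 5 sigma
print(f"5-sigma threshold: {min_threshold:.1f} DACu "
      f"(~{min_threshold / sensitivity_dacu_per_pe:.2f} PE)")     # ~0.18 PE
print(f"1/4 PE threshold:  {sensitivity_dacu_per_pe / 4:.1f} DACu")  # 25 DACu, comfortably above
```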

Fig. 1. Trigger efficiency at a given injected charge as a function of the threshold (left) and the
50% trigger efficiency as a function of the input charge up to 2 PE (right).

The SPMT system will typically collect signals of a single PE up to a few PE, so good
charge resolution for single-PE detection is mandatory. The charge distributions for
various input signals (from 1 to 10 PE equivalent) are shown in Fig. 2 (left). The
sensitivity is 16 ADCu/PE and the noise is well described by a Gaussian with an RMS of 1.5 ADCu
(15 fC).

Fig. 2. Charge measurements with the full chain for different input charges (160 fC to 1.6 pC)
(left). TDC ramp reconstruction for one channel using a pulse-generator signal delayed in steps of
100 ps (right).

Another crucial feature is the time resolution of the ASIC, which is required to be
smaller than 1 ns so that the electronics resolution is negligible compared to that of
the PMTs. The ASIC provides the signal "time of arrival" while operating in self-triggered
mode. The TDC ramp has been reconstructed and is displayed in Fig. 2 (right).
A periodic pulse signal is injected in one channel and delayed in steps of 100 ps. A
linear fit provides the slope, which gives an LSB (or TDCu) of LSB = 1/slope = 27 ps/TDCu.
The residuals are within ±15 TDCu, corresponding to ±450 ps, with an RMS value of 167 ps.
The residuals exhibit a periodic shape due to a coupling of the 160 MHz clock, most likely through the substrate.
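Such a ramp calibration amounts to a straight-line fit of TDC code versus injected delay. The sketch below (Python with NumPy, on synthetic data standing in for the pulse-generator scan) illustrates how the 27 ps LSB and the residual RMS are extracted; it is not the authors' analysis code.

```python
import numpy as np

# Toy TDC ramp scan: the real measurement injects a pulse delayed in 100 ps steps.
LSB_TRUE = 27e-12                                  # s per TDC unit, as measured above
delays = np.arange(0.0, 20e-9, 100e-12)            # injected delays, 100 ps steps
codes = delays / LSB_TRUE + np.random.normal(0, 6, delays.size)   # synthetic residuals

slope, intercept = np.polyfit(delays, codes, 1)    # linear fit: code = slope*delay + b
lsb = 1.0 / slope                                  # LSB = 1/slope  [s per TDCu]
residuals = codes - (slope * delays + intercept)   # in TDC units

print(f"LSB          : {lsb * 1e12:.1f} ps/TDCu")
print(f"residual RMS : {residuals.std() * lsb * 1e12:.0f} ps")
```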
The performance of CATIROC has also been evaluated (in a very preliminary test) with
the signal from the PMT planned for the JUNO SPMT system (a 3″ HZC PMT).
The ASIC test board is connected to the PMT placed in a light-tight box. The dedicated
SPMT acquisition code developed for JUNO is used to collect the CATIROC output
data (Fig. 3, left). An example of the signal spectrum measured by the ASIC for two
channels is shown in Fig. 3 (right), with clear evidence of the single-photoelectron
peak. The PMT high voltage is set to yield a gain of ~10⁶. This measurement indicates that
the ASIC has the ability to detect a single photoelectron with a charge resolution of
30%. The distributions of the two channels are very similar. The wiggles observed in the curves
are due to a digitization artefact caused by the ASIC clock coupling. Their impact on the
determination of the single-PE position and resolution has been evaluated with Monte
Carlo simulations and found to be negligible for JUNO. Further testing is ongoing for a
full characterization of the performance of CATIROC with the JUNO PMTs.

Fig. 3. Test bench for PMT measurements. The PMT is placed in a dark box, connected to a
high-voltage supply and to the CATIROC ASIC (left). Single-photoelectron spectrum, in dark-noise
configuration, measured by the ASIC (right)

4 Conclusion

The JUNO experiment will be the largest liquid-scintillator antineutrino detector ever built
and is currently under construction in the south of China. For the first time, a double calorimetry system will be
used, combining about 17k 20″ PMTs (large-PMT system) and
around 25k 3″ PMTs (small-PMT system). The CATIROC ASIC has been tested and
evaluated against the requirements of the JUNO SPMT system. The results show that
CATIROC fulfils the JUNO requirements. Preliminary tests with a JUNO PMT have
been performed to measure the single PE with CATIROC. In the near future,
CATIROC will be installed on the first prototype of the front-end board to study its
performance in real conditions.

References
1. An, F., et al. (JUNO Collaboration): Neutrino physics with JUNO. J. Phys. G 43(3), 030401
(2016). https://doi.org/10.1088/0954-3899/43/3/030401
2. Miao, H.E.: Double calorimetry system in JUNO experiment. In: The Technology and
Instrumentation in Particle Physics 2017 (TIPP2017) conference (2017)
3. Conforti, S., et al.: Performance of CATIROC: ASIC for smart readout of large
photomultiplier arrays. JINST 12, C03041 (2017)
First Prototype of the Muon Frontend Control
Electronics for the LHCb Upgrade: Hardware
Realization and Test

Paolo Fresch(&), Giacomo Chiodi, Francesco Iacoangeli,


and Valerio Bocci

INFN Sezione di Roma, piazzale A. Moro 2, 00185 Rome, Italy


paolo.fresch@roma1.infn.it

Abstract. The muon detector plays a key role in the trigger of the LHCb
experiment at CERN. The upgrade of its electronics is required in order to be
compliant with the new 40 MHz readout system, designed to cope with future
LHC runs at between five and ten times the initial design luminosity. The
Service Board System upgrade aims to replace the system in
charge of monitoring and tuning the 120,000 readout channels of the muon
chambers. The aim is to provide a more reliable, flexible and faster means of
control by migrating from the current distributed local control to a centralized
architecture based on a custom high-speed serial link and a remote software
controller. In this paper, we present in detail the new Service Board System
hardware prototypes, from the initial architectural description to the board connections, highlighting the main functionalities of the designed devices together with preliminary test results.

Keywords: LHCb · Muon · Service Board System · Upgrade

1 Background of the System Upgrade


1.1 LHCb Upgrade
In the future, the performance of the Large Hadron Collider at CERN (the European Organization for
Nuclear Research) will increase in terms of luminosity and collision energy
compared to the initial design parameters. This requires an upgrade of many
crucial parts of the experiments. The LHCb readout electronics has been completely
redesigned [1] to cope with the new values. It is vital that all the Front-End (FE) and the
Back-End (BE) systems in the sub-detectors comply with a common set of specifications
related to the new readout scheme and control [2] based on the GBT-link [3].

1.2 Muon Detector Upgrade


The LHCb muon On-Detector electronics is based on the CARDIAC boards that host one
DIALOG [4] and two CARIOCA [5] chips. At the moment, the control and monitoring system
of this electronics, the so-called Service Board System (SBS), is based on a local network of
microcontrollers that independently communicate with a central supervisor [6].


To comply with the new common LHCb readout architecture, this infrastructure was
redesigned at the system level while the On-Detector electronics was left unchanged. Taking
advantage of a high-speed serial link specifically designed for operation under
radiation, a centralized remote server takes full control of the CARDIAC chipset. This
significantly increases the flexibility and reliability of the system as well as the communication speed. Integrating the GBT link into the existing Service Board facility, next to
the muon stations, requires the development of the new Service Board System (nSBS) architecture [7]
hardware. This paper presents the first prototype release (rev.01) of the new Pulse Distribution Module (nPDM),
the new Service Board (nSB) and the new Custom Backplane (nCB), focusing on the most important parts of the design process together with early test results.

2 Hardware Design Description

2.1 Interfacing the ECS Serial Streams


A core item of the design (see a quick system overview in Fig. 1) is the interface
between the GBT-SCA [9] chip and the custom bidirectional serial link of the DIALOG
slow control. The entire proposed implementation relies on this section. This link
complies with the standard I2C-Bus protocol at the electrical level, so the I2C-Bus peripheral
of the GBT-SCA drives the channel. A custom translator, residing in the nSB on-board FPGA, merges the two protocols at the physical layer. The FPGA also generates the
pulses used by the DIALOG chips for other tuning purposes. Using an FPGA thus
provides a solution for both control and communication of the muon On-Detector
electronics, compliant with the new LHCb readout architecture. A finite-state machine translates and maps the I2C data frame onto the custom bidirectional serial link.
This link uses the LVDS standard at the physical layer, so integrated multi-standard
differential drivers are preferred. The Microsemi IGLOO2®, a flash-based FPGA built
on a 65 nm process that performs very comfortably in radiation environments [8], provides the resources that satisfy these needs. It also incorporates
M-LVDS drivers, used to fan out the Timing and Fast Control (TFC) signals delivered
by the nPDM to the whole crate, removing the need for additional components in the
design and on the boards.
On the nPDM, the GBTx extracts the TFC signals from the high-speed optical link. The
IGLOO2 on the nPDM generates the signals used by the FPGAs on the nSBs to
produce the dedicated pulses used during tuning of the different readout channels. Given
the purpose of the information carried by these signals, all the traces are routed with phase-control
constraints in order to minimize the skew, on both the nPDM and the nSB. The
realized wiring satisfies the requirements given in [5] and also adds a certain degree of
flexibility that reduces the complexity of the required logic.

Fig. 1. Quick physical and functional overview of the new Service Board System.

2.2 E-Link Distribution


The use of the SLVS standard at the physical level of the e-link [10] requires studying the
compatibility of the e-port with the standard LVDS transceivers present on the IGLOO2.
We proved that the communication works, but we have no information yet
on the degradation of the noise margin and the bit error rate. A full characterization would be advantageous in the future and is one of the purposes of the prototype
realization.
Distributing the GBT link across the new Service Board crate also required
the design of a new Custom Backplane (nCB). In order to preserve the mechanical
infrastructure and to reduce the time required to replace components during the
commissioning step, the mechanical design is based on the VME standard.
The backplane routes 20 serial e-links from the GBTx to the GBT-SCAs [9] on
the nSB boards. Each e-link is composed of three differential pairs, allowing synchronous point-to-point full-duplex communication between the two devices. In this configuration, it
carries slow-control information (ECS path). Specific timing-driven constraints
drive the routing of these lanes in order to keep a low skew between data output and
clock, which is vital for proper operation of the e-link driver on the GBT-SCA.

2.3 Additional Features


Because of the particular application, the design includes dedicated hardware recovery
features. A back-up I2C bus equips the nSBS crate: it connects all the boards
and can be used in case of e-link failure (e.g. a GBT-SCA malfunction), providing
redundancy to the crate control link. Soft and hard resets are available for all the
chips, allowing different degrees of recovery from critical situations. In addition, the logic
is able to cut a single power rail at a time, performing targeted power cycles. Access to the reset and power-cycle functions can be either local or remote (i.e.
either another board or the server can send the recovery command), which is useful
in case the primary power-cycle link does not respond.

3 Early Test Results

3.1 I2C-Bus Bridge


A very first release of the I2C protocol converter (the I2C-Bus bridge), implemented in
the nSB prototype and on a preliminary evaluation setup, proved that a stable communication can be established between the muon On-Detector electronics and a
commercial I2C driver (see Fig. 2). Repeating the same test at CERN on the muon
detector established a successful continuous communication for more than 13 h,
exchanging about 1.2 GB of data with zero errors. This corresponds to 480 write and read
operations on all of the 160,000 configurable registers of the system. Finally, additional
tests proved that the communication is equally stable and successful using the GBT link
and the GBT-SCA chip (at that time available only on their evaluation boards).

Fig. 2. Screenshot of two successful I2C transactions (W and R) between a commercial I2C
driver (SCL/SDA) and the muon On-Detector Electronics custom serial link (SCNx/SDNx/
SDBn).

This very early series of tests also demonstrates that the digital translator
speeds up the data exchange from the current 100 kbps to 1 Mbps, reducing the access
time to the DIALOG registers by a factor of 10. The translators have separate FSMs
that run independently and act on the fly. Distributing the data load of one e-link over
12 drivers allows the GBT link to control all the serial channels of the new Service
Board simultaneously.
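As a consistency check on these figures, the short sketch below (plain Python, using only the numbers quoted in this section) relates the 13-hour error-free run to the amount of data exchanged and to the number of register accesses performed.

```python
# Back-of-the-envelope check of the 13 h communication test.
hours      = 13
data_bytes = 1.2e9          # ~1.2 GB exchanged with zero errors
registers  = 160_000        # configurable registers in the system
full_scans = 480            # complete write+read passes over all registers

accesses = full_scans * registers                 # total register accesses
avg_rate = data_bytes * 8 / (hours * 3600)        # average link usage [bit/s]

print(f"register accesses : {accesses / 1e6:.1f} million")
print(f"bytes per access  : {data_bytes / accesses:.0f}")
print(f"average throughput: {avg_rate / 1e3:.0f} kbit/s on the 1 Mbit/s link")
```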

3.2 SLVS Serial Channels


Characterizing standard DIN 41612 connectors proved that they can be used with the
differential SLVS-400 (Scalable Low-Voltage Signaling) standard adopted at the physical layer of
the e-link protocol. Figure 3 shows the eye diagram taken at 80 Mbps (a) and at
400 Mbps (b). The test uses an evaluation trace with a length comparable to those
on the designed nCB board (with a similar stack-up and FR4 PCB dielectric).
The results show no significant attenuation of the signal amplitude and
acceptable levels of jitter and distortion.

Fig. 3. Eye pattern taken with an SLVS-400-compliant pseudo-random sequence at 80 Mbps in
(a) and 400 Mbps in (b). The channel includes two male DIN 41612 connectors mated with the two female
connectors soldered on the PCB.

4 Conclusions
4.1 Summary
The increased flexibility and independence of every single muon control channel,
together with the higher data transfer rate, allows new algorithms for fine noise
measurements to be deployed at any moment without changing any part of the new
Service Board crate. When the whole facility is ready and equipped, more tests and
a full characterization will take place. As far as the design of the new
Service Board System prototype is concerned, these results are a proof of concept that
motivated the production of a limited number of preliminary boards (see Figs. 4 and 5).

Fig. 4. The new Pulse Distribution Module (nPDM) prototype rev.01, produced in March 2017

Fig. 5. The new Service Board (nSB) prototype rev.01, produced in July 2016

Acknowledgements. The authors express a special thanks to the Electronics Laboratory staff of
the INFN - Rome department for their help and support during these months of test and
development. To Riccardo Lunadei for his support during the PCB development and to Daniele
Ruggieri for his support during the rework of prototypes. To Manlio Capodiferro for his support
during the very first test setup installation and to Fabrizio Ameli for coordinating all the tasks
carried out by the Laboratory staff as well as for supporting the whole PCB development with his
specific experience in high-speed signal design.

References
1. Wyllie, K., et al.: Electronics architecture of the LHCb upgrade. LHCb Technical Note,
LHCb-PUB-2011-011 (2011)
2. Alessio, F. et al.: Readout control specifications for the Front-End and Back-End of the
LHCb upgrade. LHCb Technical Note, LHCb-INT-2012-018 (2012)
3. Moreira, P. et al.: The GBT project. In: Topical Workshop On Electronics For Particle
Physics, pp. 342–346 (CERN-2009-006), Paris (2009)
4. Cadeddu, S., et al.: The DIALOG chip in the Front-End electronics of the LHCb muon
Detector. IEEE Trans. Nucl. Sci. 52(6), 2726–2732 (2005)
5. Moraes, D., et al.: CARIOCA-0.25/spl mu/m CMOS fast binary Front-End for sensor
interface using a novel current-mode feedback technique. In: IEEE International Symposium
on Circuits and Systems (2001)
6. Bocci, V., et al.: The muon Front-End control electronics of the LHCb experiment. IEEE
Trans. Nucl. Sci. 57(6), 3807–3814 (2010)
7. Bocci, V.: Architecture of the LHCb muon Front-End control system upgrade. In: IEEE
Nuclear Science Symposium, San Diego (2015)
8. TR0020: SmartFusion2 and IGLOO2 Neutron Single Event Effects (SEE) Test Report.
http://www.microsemi.com/document-portal/doc_download/135249-tr0020-smartfusion2-
and-igloo2-neutron-single-event-effects-see-test-report. Accessed 27 Apr 2016
9. Caratelli, A., et al.: The GBT-SCA, a radiation tolerant ASIC for detector. In: Topical
Workshop on Electronics for Particle Physics, Aix En Provence (2014)
10. Bonacini, S., et al.: E-link: A radiation-hard low-power electrical link for chip-to-chip
communication. In: Topical Workshop on Electronics for Particle Physics, Paris (2009)
High-Speed/Radiation-Hard Optical Engine
for HL-LHC

K. K. Gan1(&), P. Buchholz2, S. Heidbrink2, H. P. Kagan1,


R. D. Kass1, J. Moore1, D. S. Smith1, M. Vogt2, and M. Ziolkowski2
1
Department of Physics, The Ohio State University,
Columbus, OH 43210, USA
gan@mps.ohio-state.edu
2
Fachbereich Physik, Universität Siegen, 57068 Siegen, Germany

Abstract. We have designed and fabricated a compact array-based optical


engine for transmitting data at 10 Gb/s. The device consists of a 4-channel ASIC
driving a VCSEL (Vertical Cavity Surface Emitting Laser) array in an optical
package. The ASIC is designed using only core transistors in a 65 nm CMOS
process to enhance the radiation-hardness. The ASIC contains an 8-bit DAC to
control the bias and modulation currents of the individual channels in the
VCSEL array. The DAC settings are stored in SEU (single event upset) tolerant
registers. Several devices were irradiated with 24 GeV/c protons and the per-
formance of the devices is satisfactory after the irradiation.

Keywords: Radiation-hard optical link · High-speed optical link · VCSEL array driver ASIC

1 Introduction

A parallel optical engine is a compact device for high-speed data transmission. The
compact design is enabled by readily available commercial high-speed VCSEL arrays.
These modern VCSELs are humidity tolerant and hence no hermetic packaging is
needed (see, for example, http://www.photonics.philips.com/). With the use of a 12-channel array operating at 10 Gb/s per channel, a parallel
optical engine can deliver an aggregate bandwidth of 120 Gb/s. With a standard
spacing of 250 µm between two adjacent VCSELs, the width of a 12-channel array is
only slightly over 3 mm. This allows the fabrication of a rather compact parallel optical
engine for installation in locations where space is at a premium. The use of a fiber
ribbon also reduces the number of fibers to handle and moreover a fiber ribbon is less
fragile than a single-channel fiber. These advantages greatly simplify the production,
testing, and installation of optical links.
VCSEL arrays are widely used in off-detector data transmission in high-energy
physics [1]. The first implementation [2] of VCSEL arrays for on-detector application
is in the optical links of the ATLAS pixel detector. The experience from the operation
of this first generation of array-based links has been quite positive. The ATLAS
experiment therefore continued to use VCSEL arrays in the second-generation optical
links [3] for a new layer of the pixel detector, the insertable barrel layer (IBL), installed
in early 2014 during the long shutdown (LS1) to prepare the Large Hadron Collider
(LHC) for collisions at the center-of-mass energy of 13 TeV. In addition, ATLAS also
decided to move the optical links of the original pixel detector to a more accessible
location. The replacement optical links are also array based.
Based on this extensive and positive experience, it is logical for the ATLAS pixel
detector of the high-luminosity LHC (HL-LHC) to continue to use optical links based
on the opto-board (optical engine) concept. In these proceedings, we report the result of
an R&D project on the next generation optical engine operating at high speed.

2 Design of the Opto-Board

The opto-board is a miniature printed circuit board (PCB), as shown in Fig. 1. A VCSEL
array driver ASIC is mounted on the opto-board next to an opto-pack that houses a
VCSEL array. This keeps the length of the wire bonds between the ASIC and the
VCSEL array to a minimum to diminish the parasitic capacitance and inductance of the
wire bonds. This allows the ASIC to drive the VCSELs at high speed. The PCB has a
thick copper back plane (1.0 mm) for thermal management. An MTP barrel (MTP connector, US Conec Ltd.) attached
to an aluminum brace is secured to the opto-board via a screw. A fiber ribbon terminated with an MTP connector can be inserted into the MTP barrel to receive the optical
signal from the VCSEL array. An electrical connector (LSHM connector, Samtec Inc.) is attached to the PCB to
transmit high-speed data from a pixel module to the VCSEL array driver ASIC. The
high-speed electrical signals from the connector to the ASIC are transmitted using
controlled impedance differential pair transmission lines on the PCB.

Fig. 1. (a) Schematic drawing of an opto-board together with an MTP barrel fastened to the opto-
board for the insertion of a fiber ribbon terminated with an MTP connector to receive the optical
signal from the VCSEL array. (b) A three-dimensional rendition of the setup.


3 VCSEL Array Driver ASIC

The VCSEL array driver ASIC was developed under the US Collider Detector R&D
(CDRD) program of DOE. We have prototyped the ASIC in two runs, both in 4-
channel versions, using the 65 nm CMOS process of TSMC (Taiwan Semiconductor Manufacturing Company, Limited). We only use the core
transistors of the process in order to achieve maximum radiation tolerance. Both ASICs
include an 8-bit DAC to set the VCSEL modulation and bias currents. The DAC
settings are stored in SEU (single event upset) tolerant registers. Several improvements
were implemented in the second prototype ASIC:
• Eliminated all external biases. All biases are now programmable via DACs. The
bias that is distributed across the ASIC is set via a current and then is converted into
a voltage at the point of use. This allows faster recovery from the signal switching
as there is no large RC constant between the generator of the bias voltage and the
point of use as in the previous scheme.
• Added more on-chip decoupling capacitance of ~200 pF for the whole ASIC.
• Eliminated output feedback amplifier to set output level. One of the modes in the
circuit of the first prototype ASIC had large impedance and was virtually an open
circuit, causing large jitter, instead of driving the bias voltage.
• Added pre-emphasis to the signal at the output of the receiver that received the
electrical input signals. The location of the added pre-emphasis is programmable via
a delay line.
• Added pre-emphasis and feed through capacitors on the driver block of the ASIC to
increase the speed, thereby improving the timing/amplitude control.
• All power lines are tied together with a better power plane. In the first prototype
ASIC, each channel has two power pads. In the new ASIC, there are a total of 13
power pads, including some large pads for multiple wire bonds.

4 Results from the First Prototype ASIC

In the first prototype ASIC [4], all four channels are operational and the bit error rate is
less than 1.3 × 10⁻¹⁵ with all channels active, using pseudo-random bit strings (PRBS)
as input. Figure 2a shows the optical eye diagram at 10 Gb/s. The eye is open but
improvements are needed to reduce the jitter.
The expected radiation level for the optical links depends on the location. For
example, if the opto-boards are installed near the outer radius of the endcaps of the
silicon tracker ("ID endplates"), the ionizing dose is 10.2 Mrads and the non-ionizing
dose is 5.2 × 10¹⁴ 1-MeV neq/cm². In October 2015, we irradiated eight opto-boards
with prototype ASICs using 24 GeV/c protons at the CERN PS irradiation facility. In
four opto-boards, each ASIC drove a resistive load while in the other four opto-boards,
each ASIC drove a VCSEL array (V850-2174-002, fabricated by Finisar Corporation). The opto-boards with VCSEL arrays attached were
exposed to a fluence of 4.58 × 10¹⁴ protons/cm², corresponding to an ionizing dose of
12.2 Mrads and a non-ionizing dose of 2.69 × 10¹⁴ 1-MeV neq/cm², assuming that the
radiation damage in the VCSEL scales with the non-ionizing energy loss (NIEL) in
GaAs [5, 6]. The opto-boards containing no VCSELs were exposed to a fluence of
2.78 × 10¹⁵ protons/cm², corresponding to an ionizing dose of 74.0 Mrads.
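The two exposures quoted above imply a single ionizing-dose-per-fluence factor for 24 GeV/c protons; the sketch below (plain Python, numbers taken from the text) makes that scaling explicit and shows that the two irradiations are mutually consistent.

```python
# Ionizing dose per unit 24 GeV/c proton fluence implied by the two irradiations.
exposures = [
    (4.58e14, 12.2),   # (fluence [protons/cm^2], ionizing dose [Mrad]) -- boards with VCSELs
    (2.78e15, 74.0),   # boards without VCSELs
]
for fluence, dose in exposures:
    factor = dose / fluence * 1e14
    print(f"{fluence:.2e} p/cm^2 -> {dose:5.1f} Mrad "
          f"({factor:.2f} Mrad per 1e14 p/cm^2)")   # both give ~2.7 Mrad per 1e14 p/cm^2
```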
All ASICs were powered and monitored during the irradiation but at reduced
speeds because it was not practical to install high-speed cables at the irradiation facility.
The opto-boards with VCSEL arrays were periodically removed from the proton beam
to allow the annealing of the VCSELs that occurred naturally when powered. All
channels were operational at the end of the irradiation. The optical eye diagram of one
channel after irradiation is compared to that before irradiation in Fig. 2 for 10 Gb/s
operation. The optical amplitude decreases from 2.07 to 1.19 mW. The opening of
the optical eye diagram is smaller but the device still operates error-free for more than
30 min, corresponding to a bit error rate BER < 5 × 10⁻¹⁴ with all channels active.
Figure 3 shows a comparison of the corresponding optical eye diagrams for 5 Gb/s
operation, the expected data transmission speed of the ATLAS pixel detector at HL-
LHC. The eyes are open both before and after irradiation. This is the first demonstration
of the radiation hardness of an array driver/VCSEL combination operating at 10 Gb/s
with a dose of greater than 10 Mrads.
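An error-free run of a given length translates into an upper limit on the BER. The sketch below (plain Python) assumes the conventional ~95% CL limit of roughly 3/N for N error-free bits, which may differ from the authors' exact convention, and shows that 30 min at 4 × 10 Gb/s indeed lands in the 10⁻¹⁴ range.

```python
# Upper limit on the bit error rate from a zero-error run.
channels = 4
rate_bps = 10e9            # 10 Gb/s per channel
seconds  = 30 * 60         # 30 minutes of error-free operation

n_bits = channels * rate_bps * seconds
ber_limit = 3.0 / n_bits   # ~95% CL upper limit for zero observed errors (assumed convention)

print(f"bits transmitted: {n_bits:.1e}")     # 7.2e13
print(f"BER upper limit : {ber_limit:.1e}")  # ~4e-14, consistent with the quoted < 5e-14
```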

Fig. 2. Optical eye diagram of a VCSEL operating at 10 Gb/s before (a) and after
(b) irradiation.

Fig. 3. Optical eye diagram of a VCSEL operating at 5 Gb/s before (a) and after (b) irradiation.

5 Results from the Second Prototype ASIC

The second prototype ASIC is much easier to tune for operation at 10 Gb/s because of
the various improvements listed in Sect. 3. The supply voltage of the ASIC is 1.2 V
and the current consumption is 150 mA with all four channels operating at 10 Gb/s.
The common cathode voltage is set at −1.3 V in order to provide enough headroom to
drive the VCSEL. The current consumption of the common cathode voltage is 25 mA.
All channels have excellent coupled optical power, higher than 2 mW. The optical eye
diagram is shown in Fig. 4a for 10 Gb/s operation. In comparison with Fig. 2a, the eye
is more open but there is significant jitter and this is being investigated. The BER is
< 5 × 10⁻¹⁴ on all channels with every channel active. Figure 4b shows the optical eye
diagram operating at 5 Gb/s, the target data transmission speed of the ATLAS pixel
detector at HL-LHC. The eye is wide open, indicating satisfactory performance.

Fig. 4. Optical eye diagram of a VCSEL in the second prototype ASIC operating at 10 (a) and 5
(b) Gb/s.

6 Conclusions

We have designed and fabricated a new opto-board including an array driver ASIC and
optical packaging to allow 10 Gb/s optical data transmission. The ASIC can operate at
10 Gb/s after irradiation (> 10 Mrads). The plan is to further improve the design of the
ASIC for application in the ATLAS pixel detector at HL-LHC.

Acknowledgments. The authors are indebted to Maurice Glaser/Federico Ravotti for their help
in using the irradiation facility at CERN. This work was supported in part by the U.S. DOE under
contract Nos. DE-SC0011726 and DE-FG-02-91ER-40690, by the NSF under Grant Number
1338024, and by the German BMBF under contract No. 056Si74.

References
1. Aad, G., et al.: The ATLAS experiment at the CERN Large Hadron Collider. JINST 3,
S08003 (2008)
2. Arms, K., et al.: ATLAS pixel opto-electronics. Nucl. Instrum. Methods A 554, 458 (2005)

3. Gan, K.K., et al.: Design, production, and reliability of the new ATLAS pixel opto-boards.
JINST 10, C02018 (2015)
4. Gan, K.K.: Radiation-hard/high-speed parallel optical links. Nucl. Instrum. Methods A 831,
246 (2016)
5. Van Ginneken, A.: Nonionizing energy deposition in silicon for radiation damage studies.
FERMILAB-FN-0522 (1989)
6. Chilingarov, A., Meyer, J.S., Sloan, T.: Radiation damage due to NIEL in GaAs particle
detectors. Nucl. Instrum. Meth. A 395, 35 (1997)
The Global Control Unit for the JUNO
Front-End Electronics

Davide Pedretti1(B) , Marco Bellato2 , Antonio Bergnoli2 , Riccardo Brugnera3 ,


Daniele Corti2 , Flavio Dal Corso2 , Alberto Garfagnini3 , Agnese Giaz3 ,
Jun Hu4 , Roberto Isocrate2 , and Ivano Lippi2
On Behalf of the JUNO Collaboration
1
Department of Information Engineering, INFN Laboratori Nazionali di Legnaro,
University of Padova, Padova, Italy
davide.pedretti@lnl.infn.it
2
INFN Sezione di Padova, Padova, Italy
marco.bellato@pd.infn.it
3
Department of Physics and Astronomy, INFN Sezione di Padova,
University of Padova, Padova, Italy
alberto.garfagnini@pd.infn.it
4
IHEP - Institute of High Energy Physics, Beijing, China

Abstract. At the core of the Jiangmen Underground Neutrino Obser-


vatory (JUNO) front-end and readout electronics is the Global Control
Unit (GCU), a custom, low-power hardware platform with on-board
intelligence that is able to perform several different tasks, ranging
from selective readout and data transmission to remote peripheral
control. The hardware inaccessibility after installation, the timing reso-
lution and synchronization among channels, the trigger generation and
data buffering, the supernova events data storage, the data readout band-
width requirements are all key factors that are reflected in the GCU
architecture. The main logic of the GCU is in an FPGA that interfaces
with a custom made ASIC that continuously digitizes the signal from the
photomultiplier tube (PMT). The paper gives an overview of the main
GCU functionalities and of the state of the art of the project.

Keywords: GCU · Readout electronics · Frontend · FPGA

1 GCU Conceptual Design and Main Features

The purpose of JUNO [1] is to determine the neutrino mass hierarchy and pre-
cisely measure oscillation parameters by detecting reactor neutrinos, supernova
neutrinos as well as atmospheric, solar neutrinos and geo-neutrinos. The data
readout architecture and the desired resolution better than 0.1 photoelectron
(pe) in the low energy signal range (1 pe to 100 pe) are a challenge of primary
importance for the success of the experiment [2]. The baseline structure of the
data readout architecture states that each of the 20000 PMTs embeds the High
Voltage (HV) unit together with the readout electronics in a standalone manner
inside a water-tight box, communicating with the external world by means of a
100 m long Ethernet cable [3]. The intelligent PMT (iPMT) concept guarantees
the best performance, reduces the cost of cabling and waterproof connectors,
lowers the data readout throughput, facilitates local data storage in case of a
supernova and is a very modular and flexible solution, since complex calibration
and synchronization tasks can be done at the PMT level before event readout.
The GCU is the brain of these smart PMTs thanks to its on-board FPGA, which
provides fast data buffering and processing as well as peripheral control, bridging
the Ethernet network to several different buses (SPI, I2C, UART) linked respectively
to a time-delay counter, a clock-data-recovery chip, temperature sensors and the
HV unit controller. The front-end inaccessibility after installation highlights the
importance of designing high-reliability hardware and of adopting strategies for
recovering from stall situations caused by firmware bugs or by firmware corrupted
during the reprogramming phase itself. A small Spartan-6 and a virtual JTAG
cable over IPbus [4] make it possible to remotely reconfigure the main FPGA, a
Kintex-7. The GCU hosts an ASIC that digitizes the signal received from the PMT;
it must be able to issue trigger-primitive requests, to store data while waiting for
the trigger validation and, upon receiving a trigger validation, to send the event
fragments to the remote event-builder unit via the Ethernet link. The worst-case
scenario in terms of event readout bandwidth comes from triggers due to dark
current. Assuming a 1 kHz dark-noise event rate and that each event lasts for
1000 samples, the readout throughput is about 16 Mb/s, well within the range of
Fast Ethernet. During the first instants after a supernova explosion, the event rate
may burst to about 1 MHz and the GCU switches to auto-trigger mode. In this
operation mode each interesting event is packaged and stored in the on-board
2 GB DDR3 memory. The adoption of the Fast Ethernet standard for data readout
and slow control opens the possibility of using the two unused twisted pairs of the
Cat-5e cable as synchronous, fixed-latency links. One of these two links is used to
communicate with the Central Trigger Processor (CTP) that collects the trigger
primitives generated by the GCUs. The second is used to distribute the 62.5 MHz
system clock to the 20,000 channels; each GCU's local time must match the global
time within a resolution of 16 ns. The problem of GCU synchronization is related
to the distributed nature of the data readout.
Figure 1 shows a block diagram summarizing the main features integrated in the
custom hardware platform.
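The worst-case bandwidth figure above can be reproduced with a back-of-the-envelope calculation. The sketch below (plain Python) assumes that 16 bits are transferred per waveform sample, a value not stated explicitly in the text but one that reproduces the quoted 16 Mb/s.

```python
# Worst-case (dark-noise dominated) readout throughput of one GCU channel.
event_rate      = 1e3     # Hz, dark-noise trigger rate (from the text)
samples         = 1000    # samples per event (from the text)
bits_per_sample = 16      # assumed bits transferred per sample (not stated in the text)

throughput = event_rate * samples * bits_per_sample
print(f"throughput: {throughput / 1e6:.0f} Mbit/s")   # ~16 Mbit/s, within Fast Ethernet
```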

2 GCU State of the Art


Figure 2 depicts one of the three available prototypes together with the PCB layout,
designed using Cadence Allegro SPB 16.6. The main functionalities have been
validated alongside the firmware development, and the GCU has been successfully
used to read out the current signal generated by a Hamamatsu PMT in
response to an LED light pulse.

Fig. 1. GCU block diagram

Fig. 2. GCU Prototype and Layout



IPbus has proven to be an ideal solution for controlling and monitoring the system
parameters, and the readout speed for large blocks of events reaches 90 Mbps,
almost saturating the Fast Ethernet bandwidth. The system clock is recovered
by the on-board CDR thanks to the DC-balanced encoding adopted to transmit
the trigger information over the synchronous links. The maximum throughput
observed for the 16 Gb TwinDie DDR3 memory is 1333 mega-transactions per
second, in agreement with the Kintex-7 memory controller specifications. The GCU
power consumption, with a very basic firmware loaded in the FPGA, is about
10 W, digitizer ASIC excluded. There are still improvements to be made, especially
concerning the power consumption and the reliability. The main ongoing activity
focuses on remote reprogramming and on the synchronization between channels.
The JUNO collaboration is also evaluating new possible readout schemes in
which one GCU serves several channels. The first run of the experiment is foreseen
for 2020.

References
1. http://juno.ihep.cas.cn/. Accessed 26 June 2017
2. An, F., et al.: Neutrino Physics with JUNO. J. Phys. G 43, 030401 (2016)
3. Bellato, M., et al.: JUNO proposal for PMT readout - GCU, Padova, December
2015. Internal Note
4. Frazier, R., et al.: The IPbus Protocol. An IP-based control protocol for
ATCA/uTCA. Version 2.0, 22 March 2013. https://svnweb.cern.ch/trac/cactus.
Accessed 15 June 2017
Design of a Data Acquisition Module Based
on PXI for Waveform Digitization

Zhe Cao1,2, Jiadong Hu1,2, Cheng Li1,2, Siyuan Ma1,2, Shubin Liu1,2,
and Qi An1,2(&)
1
State Key Laboratory of Particle Detection and Electronics,
University of Science and Technology of China, Hefei 230026, China
anqi@ustc.edu.cn
2
Department of Modern Physics, University of Science and Technology
of China, Hefei 230026, China

Abstract. Waveform digitization is becoming more and more popular for readout
electronics in particle and nuclear physics experiments. A data acquisition
module for waveform digitization is investigated in this paper. The module is
designed in the 3U PXI (PCI eXtensions for Instrumentation) form factor and can
handle two channels of waveform digitization for detector
signals. It is equipped with a two-channel analog-to-digital converter (ADC) with
12-bit resolution and a sampling rate of up to 1.8 gigasamples per second (GSPS), and a
field-programmable gate array (FPGA) for control and data buffering.
Meanwhile, a complex programmable logic device (CPLD) is employed to
implement the PXI interface communication via the PXI bus. The performance of
this module has been tested. The results show that it can be successfully used in
particle and nuclear physics experiments.

Keywords: Waveform digitization · ADC · PXI

1 Introduction

In the readout electronics of modern particle and nuclear physics experiments, waveform
digitization is becoming more and more popular. In the traditional approach, the
particle signal is measured by time-to-digital and voltage-to-digital conversion, following a
pre-amplifier, a charge-integration stage and a shaping circuit. Compared with the traditional
method, waveform digitization, which digitizes the entire waveform directly, can significantly reduce the circuit complexity. Not only can the amplitude and arrival time
of the detector signal be acquired, but waveform recognition and signal
screening of the particle event can also be performed by processing the discrete digital
sequence of the original analog waveform [1].
There are two major methods to achieve waveform digitization: a fast, high-resolution
ADC, or a switched-capacitor array (SCA). In the SCA method, an
application-specific integrated circuit (ASIC) chip samples and stores the
waveform at ultra-high speed, and an ADC with a low sampling rate then digitizes the
storage cells [2]. Fast sampling with slow digitization means that dead time cannot be
avoided. Thanks to the development of ADCs, an ADC with both high resolution and
a high sampling rate has become a reality. Pipeline and folding-interpolating
techniques can achieve 12-bit resolution at GHz-level sampling rates. The benefit of this
method is that the circuit is extremely simple and, in theory, there is no dead time.
In this paper, a waveform-digitization data acquisition module is described. The
digitizer provides 12 bits at up to 1.8 GSPS, which covers the
requirements of nuclear and high-energy physics experiments as far as possible.
With a view to wide applicability, the module integrates a highly reliable
interface based on the PXI architecture.

2 Description of the Module

The system is based on a standard 3U eurocard, as shown in Fig. 1.

Fig. 1. Block diagram of the waveform digitization data acquisition module.

According to the requirements of nuclear and particle physics experiments, the
ADC12D1800 is chosen as the ADC. This folding-interpolating, dual-channel
ADC has a resolution of 12 bits and a sampling rate from 300 MSPS to 1.8 GSPS [3]. In
order to cover the dynamic range and bandwidth of as many kinds of detector
signals as possible, differential connectors receive the waveforms from the preamplifiers.
Generally, there is only one common clock in a readout system and all other
clocks should be synchronized to it. In this module, a clock receiver accepts the
system clock of the experiment, and a local crystal oscillator is also provided for
independent operation. A phase-locked loop (PLL) with an integrated voltage-controlled
oscillator (VCO), the LMX2581, is employed to generate a high-frequency clock of up to
1.8 GHz from the system clock, which drives the sampling of the ADC.

In physics experiments, the trigger is an important signal telling the readout electronics to
transfer detector data. Two kinds of trigger receiver are considered here. In the
PXI architecture, a dedicated star trigger is routed from slot 2 to all peripheral slots via the
backplane [4]; this module can receive the star-trigger signal if the trigger source sits in
slot 2 of the chassis. In addition, the trigger signal can be received via a coaxial
cable on the front panel. The trigger input is discriminated by a comparator in order to be
compatible with different digital signal levels; a digital-to-analog converter (DAC)
controlled by the FPGA sets the discrimination level.
Two programmable logic devices handle data processing and the
interface. An Artix-7, with the benefit of low power consumption and low price, is chosen as
the FPGA. It configures and controls all components in the module, reports the status of
the board and selects the triggered data. A ring buffer stores the data before the
trigger, and the relevant data are matched according to the trigger latency.
A MAX II CPLD implements the PXI interface, with the PCI Compiler Intellectual Property (IP)
core providing the PXI bus. The advantage of this design is that the CPLD
and its firmware can be reused no matter which chip is chosen as the main FPGA.
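The pre-trigger ring buffer and the trigger-latency matching described above can be illustrated with a minimal software model. The sketch below (Python; the buffer depth, latency and window values are illustrative placeholders, not the board's actual parameters) shows how a fixed trigger latency selects the samples belonging to the triggered event.

```python
from collections import deque

# Minimal model of a pre-trigger ring buffer with fixed-latency event selection.
DEPTH   = 4096      # ring-buffer depth in samples (illustrative)
LATENCY = 1200      # trigger latency in sample clocks (illustrative)
WINDOW  = 64        # samples to read out per trigger (illustrative)

ring = deque(maxlen=DEPTH)          # oldest samples are dropped automatically

def on_sample(adc_value):
    """Store every ADC sample; the buffer always holds the most recent DEPTH samples."""
    ring.append(adc_value)

def on_trigger():
    """Return the WINDOW samples taken LATENCY clocks before the trigger arrived."""
    data = list(ring)
    start = len(data) - LATENCY      # sample coincident with the physical event
    return data[start:start + WINDOW]

# Toy usage: fill the buffer with a running sample index, then fire one trigger.
for t in range(10_000):
    on_sample(t)
event = on_trigger()
print(event[0], event[-1])           # samples 8800 .. 8863
```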

3 Performance of the Module

In order to evaluate the performance of the module, a series of tests were carried out.
The test bench comprised an oscilloscope, a vector signal generator and
bandpass filters. The module was plugged into the PXI crate and the analysis software
ran on the crate controller.
Because of the differential input, a balun module was developed to convert the single-ended
signal to a differential one. Two baluns are employed: a conventional type with a
bandwidth from 0.4 to 450 MHz for the lower-frequency tests and a transmission-line
type with a bandwidth from 30 to 1800 MHz for the higher-frequency tests.

3.1 Frequency Response Test


The signal from the vector signal generator was calibrated by the oscilloscope first, and
then digitized by the module. The amplitude of the sinusoid was calculated by the
software. The frequency responses of the two channels of the module are depicted in
Fig. 2. The curves show that the bandwidth of the module is about 2 GHz, which is
limited by the bandwidth of the balun.

Fig. 2. The bandwidth of the two channels; the red line shows channel 1 and the blue line shows channel 2

3.2 ENOB Test

The ENOB is an important figure of merit for evaluating the dynamic performance of the ADC
and its associated circuitry. The sine wave from the vector signal generator was sent to the
module via a bandpass filter, which removed the harmonics and out-of-band
noise of the generator in order to improve the quality of the source. Software based on
IEEE Std. 1241-2010 [5] calculated the ENOB of the two channels. Tuning the input
frequency from 2.4 MHz to 800 MHz, the ENOB was measured, as shown in
Fig. 3. The results indicate that the ENOB of both channels is above 8 bits for
input signals up to 350 MHz.

Fig. 3. The ENOB of the two channels; the red line shows channel 1 and the blue line shows channel 2
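The ENOB extraction prescribed by IEEE Std. 1241 amounts to fitting a sine wave to the record and converting the residual noise-and-distortion into effective bits. The sketch below (Python with NumPy, on a synthetic record in place of real ADC data) is a minimal illustration of the three-parameter fit at a known input frequency, not the authors' analysis software; the noise level is chosen so that the toy record yields roughly 8 effective bits.

```python
import numpy as np

# Three-parameter sine fit at a known frequency and ENOB, in the spirit of IEEE Std. 1241.
fs, fin, N, bits = 1.8e9, 350e6, 16384, 12           # sample rate, test tone, record length
t = np.arange(N) / fs

# Synthetic ADC record: quantised near-full-scale sine plus noise (noise set for ~8 ENOB).
wave = 0.49 * np.sin(2 * np.pi * fin * t) + 0.5 + np.random.normal(0, 1.1e-3, N)
code = np.round(wave * (2**bits - 1))

# Least-squares fit of A*cos + B*sin + C at the known input frequency.
M = np.column_stack([np.cos(2 * np.pi * fin * t), np.sin(2 * np.pi * fin * t), np.ones(N)])
(A, B, C), *_ = np.linalg.lstsq(M, code, rcond=None)

residual = code - M @ np.array([A, B, C])
sinad = 20 * np.log10(np.hypot(A, B) / np.sqrt(2) / residual.std())
enob = (sinad - 1.76) / 6.02
print(f"SINAD = {sinad:.1f} dB, ENOB = {enob:.2f} bits")
```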

4 Conclusion

This paper presents a PXI-based data acquisition module for waveform digitization.
This dual-channel module has a sampling rate of 1.8 GSPS. Systematic measurements
show that the bandwidth of the module is about 2 GHz and that the ENOB is above
8 bits for input signals up to 350 MHz. The module has a compatible design, including a
trigger signal receiver, a global clock receiver and the PXI interface. As a result, it can be
considered a prototype of waveform-digitization readout electronics for particle and
nuclear physics experiments.

Acknowledgments. This project is supported by the Young Fund Projects of the National Nat-
ural Science Foundation of China (Grant No. 11505182). And it is also supported by the Young
Fund Projects of the Anhui Provincial Natural Science Foundation (Grant No. 1608085QA21).

References
1. Esposito, B., Kaschuck, Y., Rizzo, A., et al.: Digital pulse shape discrimination in organic
scintillators for fusion applications. Nucl. Instrum. Methods Phys. Res. 518(1–2), 626–628
(2004)
2. Kleinfelder, S.A.: Development of a switched capacitor based multi-channel transient
waveform recording integrated circuit. IEEE Trans. Nucl. Sci. 35(1), 151–154 (1988)
3. http://www.ti.com.cn/product/cn/ADC12D1800/datasheet
4. PXI Hardware Specification Revision 2.2, PXI Systems Alliance (2004)
5. IEEE Standard for Terminology and Test Methods for Analog-to-Digital Converters, IEEE
Standard 1241-2010 (2011)
Readout Electronics for the TPC Detector
in the MPD/NICA Project

G. Cheremukhina, S. Movchan, S. Vereschagin(&),


and S. Zaporozhets

Joint Institute for Nuclear Research,


Joliot-Curie 6, Dubna, Moscow Region, Russia
vereschagin@jinr.ru

Abstract. The article is aimed at describing the development status, design


features and advanced version of the TPC data readout system.
The TPC is placed in the middle of the Multi-Purpose Detector and provides
tracking and identification of charged particles in the pseudorapidity range
|η| < 1.2.
Tracks in the TPC are registered by 24 readout chambers that are placed at
both end-caps of the sensitive volume. The readout system of each chamber
consists of the front-end cards (FECs, 62 per chamber) and a readout control
unit (RCU). FECs collect signals directly from the chamber pads, amplify, digitize
and process them, and transfer the data to the RCU.
The design concept and its improvements have been motivated by the NICA project
requirements of a trigger rate up to 7 kHz with a mean multiplicity of 300
tracks (maximum multiplicity up to 1000 tracks) and by the requirement on the
TPC end-cap transparency of about X/X0 = (15–20)%.

Keywords: Time projection chambers (TPC) · Front-end electronics for detector readout

1 Introduction

The new research complex NICA, aimed at studying hot and dense baryonic matter, is
under active construction at JINR (Dubna) [1]. It will operate with heavy-ion collisions
(up to Au) at centre-of-mass energies of up to 11 GeV per nucleon. The MPD detector will
operate at one of the two interaction points of the collider [2, 3].
The Time Projection Chamber (TPC) is the main tracker of MPD. It has been
designed for tracking and identification of charged particles [4–7].

2 Design Requirements and Concept

The TPC produces the most complex events among the MPD subdetectors.
The NICA collider will provide a trigger rate of up to 7 kHz with a mean multiplicity of
300 tracks and a maximum multiplicity of up to 1000 tracks per event. The data readout
system has to receive the data from all 95,232 detector pads and then transmit them to the
MPD DAQ.


The TPC data are zero dominated; for this reason, zero-suppression modes should be
implemented in the TPC Front-End Electronics (FEE). The mean data stream from the
TPC is expected to be about 7 Gbps (with zero suppression).
Another significant requirement for the data readout system is the end-cap transparency. The TPC electronics will be located at the end-caps of the TPC barrel
and will shadow other MPD subdetectors. For this reason the FEE should be as
small as possible, and it should dissipate as little power as possible to simplify the
cooling system.
The TPC data acquisition system consists of 24 identical parts. Each part serves
one readout chamber, and the electronics of each chamber is an independent system. There
are two main blocks in the data readout system: the FECs and the RCUs. The whole
data readout system contains 95,232 registration channels, 1488 64-channel FECs and
24 RCUs.
At the moment we are considering two versions of the TPC FEE. The first one is based
on the PASA and ALTRO chip set [8, 9]. These two ASICs were designed for the
ALICE TPC; they also suit our design well and allow us to meet all
requirements except the end-cap transparency requirement. This system has already been
designed and its functionality is sufficient for the startup period of operation. The second
version of the TPC FEE is based on the new SAMPA ASIC [10, 11]. The SAMPA chip
has a few advantages over the PASA and ALTRO chip set, namely selectable input signal
polarity, switchable preamplifier gain and peaking time, and a continuous readout mode.
The FEE design based on the SAMPA chip is now being realized.
A block diagram of the readout system for one chamber is shown in Fig. 1.

Fig. 1. Block diagram of one chamber readout system.



3 PASA and ALTRO Chip Set Based Design

The FEC64S is an advanced version of the prototype card FEC64 [4–7, 12, 13]. The card
is based on 4 PASA and 4 ALTRO chips, which provide 64 independent registration
channels in total. A view of the FEC64S is shown in Fig. 2 and its main parameters
are summarized in Table 1.

Fig. 2. Top view of the FEC64S. (1) PASA chips – low noise amplification of the signal;
(2) ALTRO chips – digitization and signal processing; (3) ALTERA FPGA; (4) Serializer/
Deserializer chip.

Table 1. FEC64S main parameters.


Parameter Value
Signal to noise ratio at MIP 30:1
ENC at detector capacitance 10–20 pF, electrons <1000
Shaping time, ns 180–190
Shaped signal tail cancellation after 1 µs <1%
ADC resolution, bits 10
Sampling rate, MHz 10
Buffering, events 4–8
Effective serial interface speed, Gbps 1.6–2.5
Power consumption, mW per channel <100

The PASA chip receives analogue signals directly from the detector pads through a short
kapton cable. It amplifies the detector signals with a conversion gain of 12 mV/fC over an
input-charge range of up to 150 fC. Each of the 64 PASA channels has a single-ended
input and a differential output, which are directly connected to the ALTRO chip inputs.
After the analogue-to-digital conversion in the ALTRO, the signal processing is
performed in 5 steps: (a) a first correction and subtraction of the signal baseline,
(b) the cancellation of long-term components of the signal tail, (c) a second baseline

correction, (d) the suppression of the samples that contain no useful information (zero
suppression), and (e) formatting.
Another essential part of the card is the ALTERA FPGA. Its main function is to multiplex
the parallel ALTRO words from all 64 independent channels into one stream.
The Serializer/Deserializer chip TLK2711 was chosen as the readout interface; it provides
bidirectional access to the ALTRO and FPGA at speeds up to 2.5 Gbps.
The FPGA implements the connection of the 40-bit ALTRO bus to the 16-bit
parallel input of the Ser/Des chip and vice versa.
The data multiplexing in the card is carried out in 4 steps. The 16 ALTRO
channels are first multiplexed inside each chip. The parallel outputs of two chips are then
multiplexed onto one bus, forming two groups of two ALTROs each. The power
regulators of the PASA and ALTRO chips are also combined into two groups, which allows
half of the card to be disabled if a problem occurs. Finally, the parallel outputs of the two
groups are combined into one bus that is directly connected to the FPGA.
The FPGA slices the 40-bit ALTRO words into 10-bit words, encodes them with a
Hamming code and finally forms 16-bit words for the parallel input of the
Ser/Des chip. The Ser/Des chip implements the last multiplexing step in the
FEC64S. The outgoing data are transmitted to the RCU as a serial stream with a data rate
from 1.6 to 2.5 Gbps.
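The slicing-and-encoding step can be sketched in software. The example below (Python) splits a 40-bit ALTRO bus word into four 10-bit words and protects each with a generic single-error-correcting, double-error-detecting Hamming code padded to 16 bits; the bit ordering and the exact code layout used in the real FEC64S firmware are not specified in the text, so both are assumptions made purely for illustration.

```python
def hamming_encode_10(data10):
    """Encode a 10-bit value into 16 bits with a generic Hamming SEC-DED construction.
    Illustrative only -- the actual FEC64S bit mapping is not documented here."""
    code = [0] * 16                                            # bit 0 = overall parity, bits 1..15
    data_pos = [p for p in range(1, 16) if p & (p - 1)][:10]   # non-power-of-two positions
    for i, p in enumerate(data_pos):                           # place the 10 data bits
        code[p] = (data10 >> i) & 1
    for r in (1, 2, 4, 8):                                     # compute the Hamming parity bits
        code[r] = sum(code[p] for p in range(1, 16) if p & r) % 2
    code[0] = sum(code) % 2                                    # overall parity (double-error detect)
    return sum(bit << i for i, bit in enumerate(code))

def slice_altro_word(word40):
    """Split one 40-bit ALTRO word into four 10-bit words (LSB-first, assumed ordering)."""
    return [(word40 >> (10 * i)) & 0x3FF for i in range(4)]

frames = [hamming_encode_10(w) for w in slice_altro_word(0x123456789A)]
print([f"0x{f:04X}" for f in frames])
```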
The FEC64S can operate in two acquisition modes: the Individual Readout Mode
(IRM) and the Automatic Scan Mode (ASM). In IRM the RCU has to send the Channel
Readout command (CHRDO) individually to all 64 channels on the card. This
acquisition mode does not allow the fastest data readout speed to be achieved from all the
FECs connected to the RCU.
After activating ASM, the FPGA takes over some functions of the RCU. The ASM
algorithm is implemented in the FPGA state machine. An ASM cycle starts after
receiving the level-2 trigger (LVL2), which confirms that valid data are present in the ALTRO
multi-event buffer memory. After receiving LVL2, the state machine starts to read the Address and
Event Length (ADEVL) registers of all ALTRO channels, which contain information
about the data available in the buffers. The contents of these registers are stored in a
dedicated FIFO buffer in the FPGA. When the ADEVL information of all registers has been received,
the state machine starts to send the readout command only to those channels that contain
data. The flowgraph of the ASM is shown in Fig. 3.
In the state machine, receiving the event-length information has a higher priority than the channel
readout. This is because the event-length register contains information only about the latest event stored in the ALTRO buffers, whereas the
ADC samples are held in a multi-event buffer memory. The data readout cycle from the channels
is interrupted when another LVL2 is received, and it is resumed only after the
new event-length information has been stored in the dedicated FIFO memory in the FPGA.
The ASM significantly accelerates the data readout process: in this mode all FECs
operate simultaneously and the RCU only needs to receive the data.
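The Automatic Scan Mode logic maps naturally onto a small event loop. The sketch below is a behavioural Python model (channel count, register contents and command names are simplified stand-ins for the real ALTRO interface), intended only to illustrate the priority of ADEVL collection over channel readout described above.

```python
from collections import deque

class AsmModel:
    """Simplified behavioural model of the FEC64S Automatic Scan Mode."""

    def __init__(self):
        self.adevl_fifo = deque()      # per-event lists of (channel, length) with data
        self.pending_lvl2 = deque()    # LVL2 triggers whose ADEVL registers are still unread

    def on_lvl2(self, event_lengths):
        """LVL2 confirms that a validated event sits in the ALTRO multi-event buffers."""
        self.pending_lvl2.append(event_lengths)        # {channel: samples} for this event

    def run(self, chrdo):
        """Scan loop: ADEVL collection always has priority over channel readout."""
        while self.pending_lvl2 or self.adevl_fifo:
            if self.pending_lvl2:                      # 1) read all ADEVL registers first
                lengths = self.pending_lvl2.popleft()
                self.adevl_fifo.append(
                    [(ch, n) for ch, n in sorted(lengths.items()) if n > 0])
            else:                                      # 2) CHRDO only where data exist
                event = self.adevl_fifo.popleft()
                for i, (ch, n) in enumerate(event):
                    chrdo(ch, n)
                    if self.pending_lvl2:              # a new LVL2 interrupts the readout
                        rest = event[i + 1:]
                        if rest:
                            self.adevl_fifo.appendleft(rest)   # resume this event later
                        break

# Toy usage: two events, only a few channels contain samples.
asm = AsmModel()
asm.on_lvl2({3: 120, 17: 95})
asm.on_lvl2({3: 40})
asm.run(lambda ch, n: print(f"CHRDO ch{ch:02d}: {n} samples"))
```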
The RCU is a key unit of the data readout system. Its main function is the control of, and
data collection from, 62 FECs. The whole data acquisition system will be equipped with
24 RCUs, one for each readout chamber. Every RCU is based on 8 input
FPGAs and one main FPGA. Each input FPGA is connected to 8 FECs through high-speed
transceivers. The data multiplexing in the RCU is implemented in two steps.

Fig. 3. Flowgraph of the ASM.

The input FPGAs multiplex the data from 8 FECs each, and the main FPGA multiplexes the data from
the 8 input FPGAs. After multiplexing, the data are transmitted to the first-level processor
system through an optical link.
Other key functions of the RCU are monitoring the FEC conditions and fanning out
the synchronization pulses. The RCU is equipped with an independent Ethernet line to the
slow-control system.

4 SAMPA Based Design Issues

The SAMPA chip has been under development during the last few years for the ALICE TPC
and MCH upgrades at CERN. It has 32 channels and is composed of an analog front-end
part, an ADC, a digital processor and serial links. The SAMPA chip is more highly integrated
than the PASA and ALTRO chip set: its area is only 225 mm² per 32 channels, while the
areas of the PASA and the ALTRO are 484 mm² and 708 mm², respectively, per 16 channels. Estimates show that the size of the FEC PCB can be reduced by a factor of 3. The
amount of material in front of the corresponding parts of the CPC tracker, the straw end-cap tracker and the Time-of-Flight
system located next to the TPC end-caps will be reduced proportionally. Moreover, in this case the FEC planes can
be oriented parallel to the TPC end-cap plane, which is an essential factor affecting the
quality of track reconstruction in the MPD.
Another significant advantage of the SAMPA chip is its ability to operate with
multi-wire proportional chambers as well as with GEM chambers, which is necessary for a
future TPC upgrade.
A few pilot chips were tested and the measurement results show the feasibility of a SAMPA-based FEC for the TPC of MPD/NICA. The idea of using the SAMPA
chip in the FECs by modifying the already designed card seems very productive.

5 Conclusion

The FEE system based on the PASA and ALTRO chip set has been designed. The present
system provides the readout of physics events at trigger rates up to
7 kHz, which meets the NICA project requirements for the startup period of operation.
The use of the SAMPA chip for the TPC readout system will satisfy all
requirements and, in addition, gives an opportunity for a future TPC upgrade.
The SAMPA-based FEE is currently being designed.

Acknowledgments. The authors wish to express their gratitude to L. Musa and C. Lippmann for
their interest towards our work and support. We also thank A. Kluge and M. Bregant for the
opportunity to participate in testing the SAMPA chip also at the VBLHEP, JINR.

References
1. Kekelidze, V., et al.: Status of the NICA project at JINR. In: EPJ Web of Conference 95,
01014 (2015)
2. Abraamyan, K., et al.: The MPD detector at the NICA heavy-ion collider at JINR. Nucl.
Instrum. Method A 628, 99 (2011)
3. Abraamyan, K., et al.: MPD collaboration, the multipurpose detector (MPD). Conceptual
Design Report, JINR, Dubna, Russia (2010)
4. Aver’yanov, A.V., et al.: Time-projection chamber of the MPD detector at NICA collider
complex. Yadernaya fizika i inzhiniring [in Russian] 4(9), 867 (2013)
5. Aver’yanov, A.V.: Time-projection chamber for the MPD NICA project. J. Instrum. 9,
C09036 (2014)
6. Aver’yanov, A.V. et al.: Readout system of the TPC/MPD NICA project. Yadernaya fizika i
inzhiniring [in Russian] 5(11–12), 916 (2014)
7. Aver’yanov, A.V. et al.: Readout system of TPC/MPD NICA project. Phys. At. Nuclei 78
(13), 1556–1562 (2015)
8. ALICE TPC Electronics.: Charge sensitive shaping amplifier (PASA). Technical Specifi-
cations, CERN (2003)
9. ALICE TPC Readout Chip: User Manual. CERN. Draft 0.2. ALTRO chip web page. http://
ep-ed-alice-tpc.web.cern.ch/ep-ed-alice-tpc, June 2002
10. Barboza, S.H.I., et al.: SAMPA chip: a new ASIC for the ALICE TPC and MCH upgrades.
J. Instrum. 11, C02088 (2016)
11. Adolfsson, J., et al.: SAMPA chip: the new 32 channels ASIC for the ALICE TPC and MCH
upgrades. J. Instrum. 12, C04008 (2017)
12. Bazhazhin, A., et al.: Front end electronics for TPC MPD/NICA. In: Proceeding of XXIII
International Symposium on Nuclear Electronics & Computing Varna. JINR. E 10(11), 133
(2011)
13. Averyanov, A., et al.: R&D readout electronics of the TPC MPD/NICA. In: Proceeding of
XXIV International Symposium on Nuclear Electronics & Computing. Varna. JINR. E 10
(11), 136, 265 (2013)
TDC Based on FPGA of Boron-Coated MWPC
for Thermal Neutron Detection

Li Yu1,2, Ping Cao1,3(✉), WeiJia Sun1,2, ManYu Zheng1,3, Ying Zhang3,4, and Qi An1,2,3
1 State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei 230026, China
caoping@ustc.edu.cn
2 Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China
3 School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026, China
4 Key Laboratory of Neutron Physics of China Academy of Engineering Physics, Institute of Nuclear Physics and Chemistry, Mianyang 621900, China

Abstract. In order to deal with the worldwide shortage of 3He, a new type of boron-coated Multi-Wire Proportional Chamber (MWPC) thermal neutron detector has been developed. The readout electronics for this MWPC detector, which achieve good position resolution, are introduced in this paper. A delay-line readout method [1] is used for precisely measuring the hit location of the neutron. The time difference between the two ends of the delay line is precisely measured with a Time-to-Digital Converter (TDC) based on a Field-Programmable Gate Array (FPGA) and read out through an Ethernet port. Lab tests show that the position resolution contributed by the readout electronics is better than 0.2 mm with a dead time of about 550 ns. A joint test with the MWPC detector shows that the integral position resolution is better than 1 mm.

Keywords: Boron-coated MWPC · Thermal neutron detection · Readout electronics · Time difference measurement

1 Introduction

Neutrons are an ideal probe for studying dynamical properties and the structure of matter [2]. Traditionally, most neutron scattering spectrometers adopt 3He gas neutron detectors. However, in recent years the shortage of 3He gas has made it challenging to use this kind of detector in new applications. Fortunately, 10B can also readily absorb neutrons through the reaction shown below; the reaction products are relatively easy to measure and the natural abundance of 10B reaches 19.9% [3, 4].

$$n + {}^{10}\mathrm{B} \rightarrow \alpha + {}^{7}\mathrm{Li} + 2.79\ \mathrm{MeV}\ (7\%),\qquad E_{\alpha}=1.78\ \mathrm{MeV},\ E_{\mathrm{Li}}=1.0\ \mathrm{MeV}$$
$$n + {}^{10}\mathrm{B} \rightarrow \alpha + {}^{7}\mathrm{Li} + \gamma + 2.31\ \mathrm{MeV}\ (93\%),\qquad E_{\alpha}=1.47\ \mathrm{MeV},\ E_{\mathrm{Li}}=0.84\ \mathrm{MeV}$$


A two-dimensional multi-wire proportional chamber based on a boron-coated conversion layer has been developed at the Institute of High Energy Physics (IHEP); it has a large sensitive area of 200 mm × 200 mm, as shown in Fig. 1. In the detector, the incident window is covered by a 5 µm thick boron converter, the spacing of the anode wires is 2 mm and the spacing of the cathode wires is 1 mm. To guarantee the avalanche process in the working gas (90% Ar mixed with 10% CO2), the anode plane and the boron film are supplied with a positive high voltage (~2.2 kV) and a negative high voltage (~800 V), respectively.

Fig. 1. The structure of boron-coated MWPC

In order to measure the hit position of neutrons in the boron-coated MWPC, a prototype readout system was proposed and developed. By recording the time difference between the two pulses arriving at the two ends of the delay-line module, the hit coordinate can be reconstructed, so the resolution of the time measurement directly affects the position resolution. To verify the positioning resolution, several experiments were performed.

2 System Architecture of the Readout Electronics

The architecture of the readout electronics for the boron-coated MWPC is presented in Fig. 2 (left). The hit signal from the MWPC detector is fed into the delay-line module and propagates towards both ends. The signals are then conditioned into differential Low-Voltage Differential Signaling (LVDS) pulses by the front-end electronics (FEE), which perform matching, amplification and discrimination. The leading edge of the LVDS pulse carries the timing information. This timing signal is sent to a Nuclear Instrument Module (NIM) time digitizing module (TDM) for time measurement.
For precision, simplicity and flexibility, the TDM is implemented in an FPGA. In the TDM, the anode signal is used as the trigger for measuring the time interval between the leading edges of the two cathode signals. This time interval conveys the neutron impact position. The time is digitized in the FPGA and transmitted to a computer over Gigabit Ethernet implemented with a system-on-a-chip (SoC) FPGA.

Fig. 2. Structure of the readout electronics system (left) and the structure of FEE (right)

3 Design of FEE Module and Delay Line Module

Figure 2 (right) shows the structure of the FEE, which is implemented as a removable module plugged into a motherboard on which a DAC can be used to set the discriminator threshold. In order to achieve high gain, high bandwidth and low noise, the FEE adopts two amplifier stages. The amplified signal is fed into a discriminator and output at LVDS level.
From simulation and calculation of the delay-line module in the lab, the following design guidelines were obtained (d represents the nominal tap-to-tap delay increment of the delay-line module, and Z0 is its equivalent characteristic impedance):
• Choose the value of Z0 as large as possible.
• Choose an appropriate value of d; d = 5 ns gives a better position resolution for the MWPC detector described above.

4 Structure of Time Digitizing Module

To digitize the time interval, the coarse and fine time measurement method [5] is used and implemented in a Xilinx FPGA. The anode signal is used as a trigger to enable the TDM. The time interval between the two signals from the same delay-line chain is then measured and digitized. The digitized data are packed and transmitted to a computer over a Gigabit Ethernet port, which is implemented with SoC FPGA technology as a daughter module on the TDM. The structure of the TDM and the lab test results are shown in Fig. 3; they show that its time resolution is better than 35 ps (RMS).
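As a rough illustration of the coarse/fine method referenced above, the sketch below combines a coarse clock count with a calibrated delay-line (fine) interpolation. The clock period and bin widths are assumed values for illustration only, not those of the actual TDM firmware.

```python
# Illustrative coarse/fine TDC combination (assumed parameters, not the real design).
def timestamp_ps(coarse_count, fine_bin, bin_widths_ps, clk_period_ps=4000.0):
    """Timestamp in ps from a coarse clock count and a fine delay-line bin index."""
    coarse_ps = coarse_count * clk_period_ps          # full clock periods elapsed
    fine_ps = sum(bin_widths_ps[:fine_bin])           # calibrated widths of traversed bins
    # The fine interpolator measures how far before the next clock edge the hit
    # arrived, so it is subtracted from the coarse timestamp.
    return coarse_ps - fine_ps

# Example with an assumed 128-bin delay line of ~15 ps average bin width.
bins = [15.0] * 128
print(timestamp_ps(1200, 37, bins))  # ~4.799e6 ps
```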

Fig. 3. The architecture of the TDM and picture of TDM and gigabit Ethernet board

5 Experiment and Verification

To evaluate the time measurement resolution of the electronics, lab tests were performed. The results are shown in Fig. 4. The resolution of the time difference is better than 550 ps, and the total time difference is about 2 × 286 ns for this MWPC detector (200 mm × 200 mm); according to Eq. (1), the position resolution is therefore better than 0.2 mm.

Fig. 4. The result of delay line module with 5 ns tap value: the linear curve of the number of
delay units and the delay time (right); the resolution of time difference (left)

$$\frac{550\ \mathrm{ps}}{(286 \times 2)\ \mathrm{ns}} = \frac{\text{position resolution}}{200\ \mathrm{mm}} \qquad (1)$$
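A quick numerical check of Eq. (1) with the quoted values, and the corresponding mapping from a measured time difference to a hit coordinate (a sketch, not the DAQ code):

```python
# Check of Eq. (1) using the values quoted in the text.
sigma_t_ps = 550.0          # time-difference resolution
total_delay_ns = 2 * 286.0  # full-scale time difference across the delay line
length_mm = 200.0           # sensitive length of the MWPC

position_resolution_mm = sigma_t_ps / (total_delay_ns * 1e3) * length_mm
print(position_resolution_mm)  # ~0.19 mm, i.e. better than 0.2 mm

def hit_position_mm(dt_ns):
    """Hit coordinate (relative to the chamber centre) from a measured time difference."""
    return dt_ns / total_delay_ns * length_mm
```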

A joint test with the MWPC detector was also performed. Figure 5 (left) shows the joint test platform and a 241Am source placed on the detector to simulate neutron hits. The result is shown in Fig. 5 (right): the position resolution is better than 1 mm, from which we conclude that this result is limited by the spacing of the cathode wires.

Fig. 5. Joint test platform with the 241Am source at three different locations on the MWPC detector (left), and the result of the joint test with the MWPC detector (right)

References
1. Charpak, G., Sauli, F.: Multiwire proportional chambers and drift chambers. Nucl. Instrum.
Meth. 162, 405 (1979)
2. Viret, M., Ott, F., Renard, J.P., et al.: Magnetic filaments in resistive manganites. Phys. Rev. Lett. 93, 217402 (2004)
3. Shea, D.A.: Congressional Research Service (2010)
4. Knoll, G.F.: Radiation Detection and Measurement, 3rd edn. Wiley, New York (2000)
5. Huan-Huan, F.A.N., et al.: A high-density time-to-digital converter prototype module for
BES III end-cap TOF upgrade. IEEE Trans. Nucl. Sci. 60, 3563 (2013)
A Portable Readout System for Micro-pattern
Gas Detectors and Scintillation Detectors

Siyuan Ma1,2, Changqing Feng1,2(✉), Laifu Luo1,2, and Shubin Liu1,2
1 State Key Laboratory of Particle Detection and Electronics, No. 96, Jinzhai Road, Hefei, China
fengcq@ustc.edu.cn
2 University of Science and Technology of China, No. 96, Jinzhai Road, Hefei, China

Abstract. A readout electronics system used for both Micro-Pattern Gas Detectors and scintillation detectors is introduced in this paper. The system is intended as a general-purpose multi-channel readout solution for a wide range of detector types and complexities. A 32-channel charge measurement ASIC, the VATA160 from IDEAS, is adopted. With features of high integration, low noise and a large dynamic range, the system handles up to 128 electronic channels. With an integration time of 1.8 µs, each channel's dynamic range is from −1 pC to +12 pC, with a noise better than 2.5 fC and a nonlinearity better than 0.5%. As a portable system, it can generate its own trigger or accept an external trigger. The system transfers data to a PC host and is controlled by the PC via a single Universal Serial Bus connection.

Keywords: VATA160 · Readout system · MPGD · Scintillation detector

1 Introduction

With the development of high energy physics (HEP), Micro-Pattern Gas Detectors (MPGDs) such as Micromegas [1] and Gas Electron Multipliers (GEMs) [2], as well as scintillation detectors, are widely used in particle detection and space astrophysics [3]. In order to meet the requirements of early-stage tests, a portable readout electronics system was implemented and verified by the authors.
This system is based on the VATA160 chip [4], a large-dynamic-range charge measurement Application-Specific Integrated Circuit (ASIC) with a self-trigger function designed by IDEAS (Norway). It has 32 charge-sensitive channels and is designed for scintillation detectors and MPGDs. The system, which can acquire 128 channels of charge inputs, has been developed and can be used to study the performance of MPGDs as well as scintillation detectors. With an integration time of 1.8 µs, the dynamic range is from −1 pC to +12 pC, and the noise level is better than 2.5 fC RMS. The system is compact and portable; it communicates with the PC via a single USB connection. Since its total power dissipation is lower than 2.5 W, it can be powered over the USB bus. The system can generate its own trigger or accept an external trigger.


2 Design of the Readout System

The system is mainly composed of a readout electronics board and data-acquisition software running on a PC. The block diagram of the electronics board is given in Fig. 1. It mainly consists of 4 VATA160 chips, 4 Analog-to-Digital Converters (ADCs), a Field-Programmable Gate Array (FPGA), 2 Digital-to-Analog Converters (DACs) and a USB interface chip. Each VATA chip has a 2 × 32, 50 mil double-row pin connector as the interface to the detectors. Since sparks in MPGDs may damage the measurement channels, an NUP4114 chip is adopted in every input channel as Electro-Static Discharge (ESD) protection. The AD7944 ADC is used to convert the analog output of the ASIC into digital values, and a DAC is used to set the trigger threshold. An automatic calibration function is implemented on board; it is mainly composed of a DAC, an analog switch and a capacitance. To simulate the charge from a detector, a controllable step voltage is generated by the DAC and analog switch; the calibration capacitance then converts the step voltage into a current pulse, which is injected into the ASIC's input channel. This portable readout system has been implemented as shown in Fig. 2.
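The calibration principle described above amounts to injecting a charge Q = C·ΔV per DAC step. The sketch below illustrates it with an assumed 1 pF calibration capacitance; the actual board values are not given here.

```python
# Hedged sketch of the on-board charge-injection calibration: Q = C * dV.
CAL_CAP_F = 1e-12  # assumed calibration capacitance (1 pF), not the actual board value

def injected_charge_fc(dac_step_mv):
    """Charge injected by a DAC step of dac_step_mv millivolts, in fC."""
    return CAL_CAP_F * (dac_step_mv * 1e-3) * 1e15

# Sweeping the DAC step gives the points of a charge-calibration curve.
for step_mv in (100, 500, 1000, 2000):
    print(step_mv, "mV ->", injected_charge_fc(step_mv), "fC")
```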

Fig. 1. Block diagram of electronic board. Fig. 2. Photo of the readout electronic board.

3 Performance of Readout System

We present the results of the performance tests of the readout system. The electronic noise of all 128 channels has been tested and the result is shown in Fig. 3. The figure indicates that the noise of every channel is better than 2.5 Least Significant Bits (LSB), which corresponds to an equivalent input charge of 2.5 fC. Every channel of the system has been calibrated. As shown in Fig. 4, the typical integral nonlinearity (INL) between −500 fC and +2.5 pC is better than 0.5%; one ADC channel corresponds to 1 fC. The range of −500 fC to +2.5 pC covers most of the application requirements.
The system was coupled with a Micromegas detector [5] to measure the energy spectrum of 55Fe. The result is shown in Fig. 5. The full-energy peak and the escape peak are clearly visible, which shows that the readout system is capable of reading out the Micromegas detector.
The encoded multiplexing readout method for Thick Gas Electron Multipliers (THGEM) is a novel method which can significantly reduce the number of readout channels [6, 7].

Fig. 3. RMS of all channels. Fig. 4. Calibration of one channel.

In this test, the readout system was connected to a THGEM detector with two-dimensional direct-coding readout of 100 × 100 anode bars to perform an imaging test. A copper plate with letter-shaped slits was placed between the detector and the X-ray generator. After collecting the X-ray signals that entered the detector through the slits of the copper plate, a two-dimensional image was obtained by decoding the hit positions of the incident signals. As shown in Fig. 6, the letter gaps are clearly visible when the threshold is set to three times the noise.

Fig. 5. The spectrum of 55Fe.
Fig. 6. The results of the decoded image.

A Silicon Photomultiplier (SiPM) coupled to a plastic scintillation detector was used to collect photons. The readout system was connected to the SiPM (Hamamatsu S13360-1350PE) [8], biased at 56 V, to measure the photoelectron peaks. As shown in Fig. 7, the difference between the 1-photon and 2-photon peaks is 280 fC, which corresponds to a single-photon gain of 1.75 × 10^6. This gain is consistent with the datasheet, which shows that the readout system works well with the SiPM. Figure 8 shows that the cosmic-ray spectrum of the detector can be observed, despite a large amount of single-photon noise and some pile-up above 12 pC.
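The single-photon gain quoted above follows directly from the peak spacing divided by the elementary charge, as in this small check:

```python
# Single-photon gain from the spacing between consecutive photoelectron peaks.
E_CHARGE_C = 1.602e-19  # elementary charge

def sipm_gain(peak_spacing_fc):
    return peak_spacing_fc * 1e-15 / E_CHARGE_C

print(sipm_gain(280.0))  # ~1.75e6, consistent with the value in the text
```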

Fig. 7. Single-photon peaks of the SiPM.
Fig. 8. Cosmic-ray spectrum of the plastic scintillator.

4 Conclusion

A portable readout electronics system for MPGDs and scintillation detectors has been presented in this paper. The readout system features low noise (less than 2.5 fC), a high dynamic range (−1 pC to +12 pC), low power dissipation (less than 2.5 W) and high integration (128 channels). The system is portable, with a single USB connection used for power, commands and data transmission, and it operates with different types of MPGDs and scintillation detectors.

Acknowledgements. This work was supported by National Natural Science Foundation of


China (Grant No. 11222552 and 11635007) and National Key R&D Program of China (Grant
No. 2016YFA0400303).

References
1. Giomataris, Y., et al.: MICROMEGAS: a high-granularity position-sensitive gaseous detector
for high particle-flux environments. Nucl. Instrum. Methods Phys. Res., Sect. A 376(1), 29–
35 (1996)
2. Sauli, F.: GEM: a new concept for electron amplification in gas detectors. Nucl. Instrum.
Methods Phys. Res., Sect. A 386(2–3), 531–534 (1997)
3. Jin, C.: Dark matter particle explorer: the first Chinese cosmic ray and hard γ-ray detector in space. Chin. J. Space Sci. 34(5), 550–557 (2014)
4. IDEAS Homepage. http://ideas.no/products/ide3160-2/. Accessed 14 June 2017
5. Yunlong, Z., et al.: Manufacture and performance of the thermal-bonding Micromegas
prototype. J. Instrum. 9(10), C10028 (2014)
6. Qian, L., et al.: A successful application of thinner-THGEMs. J. Instrum. 8(11), C11008
(2013)
7. Binxiang, Q., et al.: A novel method of encoded multiplexing readout for micro-pattern gas
detectors. Chin. phys. C 40(5), 58–62 (2016)
8. HAMAMATSU Homepage. http://www.hamamatsu.com/us/en/S13360-1350PE.html. Acces-
sed 14 June 2017
Quality Evaluation System for CBM-TOF
Super Module

Chao Li1,2, Xiru Huang1,2(✉), Ping Cao1,3, Jiajun Zheng1,2, and Qi An1,2
1 State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei 230026, China
xiru@ustc.edu.cn
2 Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China
3 School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026, China

Abstract. The Time-of-Flight (TOF) system in the Compressed Baryonic Matter (CBM) experiment is composed of super module (SM) detectors, each of which contains several Multi-gap Resistive Plate Chambers (MRPCs). For the quality evaluation of the CBM-TOF SM, a readout electronics system is proposed in this paper. It consists of three main parts: front-end electronics (FEE), back-end electronics (BEE) and data acquisition (DAQ) software. The FEE comprises 10 TDC boards with 20 ps time digitizing precision for 320-channel time digitizing and 1 TRM for reading out the TDC boards. The BEE is composed of 20 data readout modules (DRMs), divided into four groups residing in two PXI-6U crates, and 1 CTM for clock and trigger distribution. The DAQ software receives the data from the DRMs and distributes commands to them through Gigabit Ethernet ports. Preliminary test results show that the evaluation system can be used for the quality control of the CBM-TOF SM.

Keywords: CBM-TOF SM · Readout electronics · Quality evaluation

1 Introduction

The Time-of-Flight (TOF) system in the Compressed Baryonic Matter (CBM) experiment is composed of 6 different types of super module (SM) detectors, named M1 to M6, each of which contains several high-resolution Multi-gap Resistive Plate Chambers (MRPCs). For M5 and M6, each SM contains 5 MRPCs, which corresponds to up to 320 electronic channels for high-precision time measurement. In order to meet the minimal requirement of 80 ps global time resolution, the resolution of the time-to-digital converter (TDC) board should be 25 ps or better [1–3].
At present, the MRPC detectors for CBM-TOF are still under development. During MRPC mass production, a quality evaluation system is required to ensure that the detectors achieve the expected performance. For the quality evaluation of the CBM-TOF SM, a 320-channel time digitizing and readout electronics system for high-density and high-resolution time measurement has been designed.


2 System Architecture

The quality evaluation system has a distributed architecture, as shown in Fig. 1, including SM detectors, front-end electronics (FEE), back-end electronics (BEE) and data acquisition (DAQ) software.

[Diagram: SM detector (MRPCs with PADI front-ends, 320 channels) → front-end electronics (STS: TDC boards #1–#10 and TRM) → optical fibres → back-end readout electronics (DRM1–DRM10 and CTM in PXI crates) → Gigabit Ethernet and network switch → DAQ]
Fig. 1. Architecture of quality evaluation system

2.1 Front End Electronics


The SM TDC station (STS) is designed as an integrated and independent structure, close to the SM detectors, serving as the FEE. Each STS mainly contains 10 TDC boards for 320-channel time digitizing and 1 TDC Readout Motherboard (TRM) for reading out the 10 TDC boards. The 10 TDC boards are plugged into the TRM through golden-finger connectors.
A prototype high-resolution, high-density FPGA TDC board has been designed, which supports up to 32 electronic channels with a time resolution better than 20 ps and Time-over-Threshold (TOT) measurement [4, 5]. Gigabit transceivers (GTP) are used to transmit the time measurement data to the TRM and to receive commands from the TRM through the golden-finger connector, through which the clock and trigger for the TDC board are also buffered in from the TRM.
The main functions of the TRM are early data aggregation, control of the TDC boards, clock and trigger synchronization, and power supply. The TRM has two Xilinx Kintex-7 series FPGAs (XC7K70T-1FFG676), each of which aggregates time measurement data from 5 TDC boards concurrently and transmits the data to the back-end readout electronics over 2 optical uplinks of 2.5 Gbps each. Two optical transceivers (SFP or SFP+) on the TRM are used to receive the high-precision synchronous clock and trigger from the DRM through two optical downlinks.

2.2 Back-End Electronics


The BEE acts as the interface between the FEE and the DAQ software. It is capable of receiving data from the STS through four optical uplinks and delivering them to the DAQ system via Ethernet ports at a rate of 9 Gbps. In addition, the BEE is responsible for transferring commands to the FEE through one optical downlink and for distributing the synchronous clock and trigger to the FEE through two optical downlinks.

As shown in Fig. 2, a prototype DRM has been designed for data readout and for clock and trigger distribution. The DRM receives data from the TRM through an optical link and distributes the clock and trigger to the TRM in the reverse direction. To deal with the massive data received from the TRM through the four optical links, 20 DRMs, divided into four groups residing in two PXI-6U crates, are introduced into the system. Among the 5 DRMs of a group, a master module receives data from the TRM at 2.5 Gbps through one optical link and sends them to all slave modules along a daisy chain of differential cables, as shown in Fig. 3. As a result, each DRM needs to process data at 0.5 Gbps. Once data arrive at a DRM, they are relayed to the DAQ system through the Gigabit Ethernet port on each DRM concurrently.

Fig. 2. Photograph of DRM

[Diagram: STS (320 channels) → 2.5 Gbps optical link (SFP) → DRM#1 (master), daisy-chained via 2.5 Gbps GXB links to DRM#2–#5; each DRM forwards data to the DAQ over Gigabit Ethernet]
Fig. 3. Data transmission flow of one DRM group

2.3 Data Acquisition Software


The DAQ software runs on a back-end server for data aggregation and storage, monitoring of the system status, and distribution of commands from the graphical user interface (GUI). It receives TDC data from each DRM and saves the assembled data to disk for further offline analysis. For convenient operation and monitoring, a GUI implemented with Qt5 has been designed.

3 Tests and Results

To confirm the performance of the quality evaluation system, a cable-delay test was conducted. Two hit signals generated by a pulse generator (AFG3252) with a fixed time delay were connected to two TDC channels. Assuming that the two channels are uncorrelated, the time resolution of a single channel is equal to the RMS of the measured time difference divided by √2. As shown in Fig. 4, the time resolution of the leading-edge time is better than 20 ps, which exceeds the requirement.

Fig. 4. Left: time measurement results of two channels from the same TDC. Right: time measurement of two channels from two separate TDC boards.
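A sketch of the cable-delay analysis (synthetic numbers, only to illustrate the √2 division used above):

```python
# Illustration of extracting the single-channel resolution from a two-channel
# cable-delay measurement.  The sample data are synthetic, not measured values.
import math
import random

random.seed(0)
true_delay_ps = 5000.0
# Simulated time-difference measurements with an assumed 28 ps combined jitter.
measured = [true_delay_ps + random.gauss(0.0, 28.0) for _ in range(10000)]

mean = sum(measured) / len(measured)
rms = math.sqrt(sum((x - mean) ** 2 for x in measured) / len(measured))
single_channel_resolution = rms / math.sqrt(2)   # uncorrelated channels assumed
print(round(single_channel_resolution, 1), "ps")  # ~20 ps for a ~28 ps combined RMS
```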

4 Conclusion

A 320-channel time digitizing and readout electronics system has been designed for the quality evaluation of the CBM-TOF SM. The system has a distributed and extensible architecture, mainly including the FEE, the BEE and the DAQ software. Laboratory tests show that the quality evaluation system works correctly and that the overall system has a time resolution of 20 ps. The evaluation system can subsequently be used for the quality control of the CBM-TOF SM.

Acknowledgment. This work was supported by the National Basic Research Program (973
Program) of China under Grant 2015CB856906.

References
1. Senger, P.: The compressed baryonic matter experiment. J. Phys. G: Nucl. Part. Phys. 28(7),
1869 (2002)
2. Herrmann, N.: Technical design report for the CBM time-of-flight system (TOF). Technical
report, GSI. http://repository.gsi.de/record/109024/files/tof-tdr-final_rev6036.pdf. Accessed
10 June 2017

3. Loizeau, P.-A.: Development and test of a free-streaming readout chain for the CBM time of
flight wall (2014)
4. Fan, H.-H., et al.: TOT measurement implemented in FPGA TDC. Chin. Phys. C 39, 11
(2015)
5. Zheng, J., Cao, P., Jiang, D., An, Q.: Low-cost FPGA TDC with high resolution and density.
IEEE Trans. Nucl. Sci. PP(99), 1–1
Research of Front-End Signal Conditioning
for BaF2 Detector at CSNS-WNS

Xincheng Qi1,2, Xiru Huang1,3(✉), Ping Cao1,2, Qi Wang1,3, Yanli Chen1,2, Xuyang Ji1,2, and Qi An1,3
1 State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei 230026, China
xiru@ustc.edu.cn
2 School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026, China
3 Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China

Abstract. The BaF2 detector at CSNS-WNS (the White Neutron Source at the China Spallation Neutron Source) is a segmented 4π detector array for measuring (n, γ) cross sections, which is planned to have up to 92 channels. The detector system consists of the BaF2 crystals and photomultiplier tubes, the readout electronics, and the data acquisition system. To retain as much information about the physical events as possible, full waveform digitization is used in the readout electronics. To meet the requirements of waveform digitization, the front-end signal conditioning must provide low noise, high bandwidth, low power consumption, etc. In this paper, the basic requirements for the front-end signal conditioning are discussed and the front-end signal conditioning method for the BaF2 detector is proposed. Test results are also presented, which show that the proposed method is suitable for signal conditioning in the BaF2 detector application.

Keywords: Front-end · Signal conditioning · BaF2 detector · WNS · CSNS

1 Introduction

The China Spallation Neutron Source (CSNS) is a scientific facility located in Dongguan. The White Neutron Source (WNS), an experiment planned at CSNS, is chiefly dedicated to accurate measurements of reaction cross sections that are important for advanced fission energy [1]. The BaF2 detector array, one of the six main detectors at CSNS-WNS, mainly aims at measuring (n, γ) cross sections [2].
The BaF2 detector system consists of an array of crystals and photomultiplier tubes (PMTs) with a segmented 4π geometry, the front-end and readout electronics, and the data acquisition system.

This work was supported by National Research and Development plan and NSAF (U1530111).


The detector array will have 46 channels at first and up to 92 channels after the planned upgrade.
To avoid the influence of the alpha particles typically emitted by the crystals, the pulse shape discrimination (PSD) method [3] is implemented in the readout electronics. The field digitization modules (FDMs) use full waveform digitization based on 1 GSps ADCs [4] to realize the PSD method, which places new requirements, such as low distortion and high bandwidth, on the front-end signal conditioning. Moreover, the FDMs are located in the back-end PXIe crates, so the analog signals have to be transmitted to the back-end crates over a distance of about 20 m. In addition, the Sub Trigger Module (STM), a part of the trigger system also located in the back-end PXIe crates, generates pre-trigger information from the analog signals. Therefore, the analog signals from the front-end need to feed both the FDMs and the STMs. To meet these requirements, the front-end signal conditioning must amplify the fast signals from the detectors and fan them out for the corresponding processing, with high bandwidth, low distortion, low noise and long-distance driving capability.
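As an illustration of how the digitized waveforms can be used for PSD, the following sketch computes a fast-to-total charge ratio from 1 GSps samples. The integration windows and the discrimination cut are assumptions for illustration only, not the parameters of the actual CSNS-WNS firmware.

```python
# Hedged sketch of charge-ratio PSD on a 1 GSps digitized BaF2 pulse (1 sample = 1 ns).
# Window lengths and the cut value are assumed, not the real design parameters.
def psd_ratio(samples, t0, fast_ns=20, total_ns=600):
    """Fast-to-total charge ratio for a negative pulse starting at sample index t0."""
    baseline = sum(samples[:t0]) / max(t0, 1)
    fast = sum(baseline - s for s in samples[t0:t0 + fast_ns])
    total = sum(baseline - s for s in samples[t0:t0 + total_ns])
    return fast / total if total > 0 else 0.0

def looks_like_gamma(samples, t0, cut=0.15):
    # Gamma rays in BaF2 have a stronger fast component than alpha particles,
    # so a larger fast fraction is taken as gamma-like (cut value assumed).
    return psd_ratio(samples, t0) > cut
```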
The typical signal from the detector when hit by a gamma ray is shown in Fig. 1. The characteristics of the pulse in a 50 Ω system are as follows: the fall time is about 2 to 3 ns; the rise time of the slow component is about 1 to 2 µs; the amplitude of the signals of interest is about −4 mV to −2 V. Thus, the front-end signal conditioning needs a bandwidth of up to 200 MHz and a large linear range extending to −2 V.

[Waveform plot: amplitude (V) versus time (ns), showing the fast and slow scintillation components]

Fig. 1. Typical gamma-ray signal from single-channel BaF2 detector

In this paper, the design of the front-end signal conditioning for the BaF2 detector is presented. First, the background is introduced, then the design details are discussed. In Sect. 3, test results are presented, and the paper is summarized at the end.

2 Architecture of Signal Conditioning

As explained in Sect. 1, to transmit the analog signals from the PMTs with low distortion and low noise, a fast preamplifier (FPA) has been designed. To provide analog signals to the FDM and the STM respectively, an analog fan-out module (AFM) is introduced. To minimize the distortion caused by the long-distance transmission and to increase the signal-to-noise ratio, twisted-pair cables are used to link the FPAs and the AFMs [5]. The structure of the front-end signal conditioning circuits is shown in Fig. 2.

[Diagram: BaF2 + PMT (−HV) → FPA per channel (Ch01–Ch12) → 20 m twisted-pair cables → AFM → outputs to FDM and STM]
Fig. 2. Structure of the front-end signal conditioning circuits

The procedure of the front-end signal conditioning is as follows:
1. The initial analog signals from the PMTs are amplified linearly by the FPAs and converted to differential signals at the same time.
2. The differential signals are transmitted from the FPAs to the AFMs via 20-m twisted-pair cables.
3. The AFMs receive the differential signals and drive the FDMs and the STMs respectively.
To decrease the distortion possibly caused by noise or other interference during the transmission from the FPA to the AFM, the FPA needs to amplify the signals linearly and convert them to differential form. The size of this module, however, has to be limited because of the restricted space near the crystal array where the FPA is located. As a balance between performance and size, the fully differential amplifier LMH6552 [6] is used. To maintain a high SNR, the gain is set to 2, which fully uses the output range of the amplifier. In addition, to prevent large signals from damaging the IC, a clamp protection circuit is also included in the FPA.
To drive the signals to both the FDM and the STM, a fully differential amplifier is needed to implement the analog fan-out operation [7], which takes the FEE of GTAF as a reference. One AFM contains 12 channels, i.e. it works with 12 FPAs. Furthermore, since the AFM is designed as a NIM module, LDOs are used to convert the power from the NIM crate. For the amplifier, the LMH6550 [8] is a good choice; its gain is set to 1, and an attenuation circuit is also employed to further adjust the signal amplitude.

3 Test Platform and Test Results

In order to verify the design of the FEE and evaluate its performance, several tests were carried out. The test platform and a photo of the front-end signal conditioning circuits are shown in Fig. 3, and the test results in Fig. 4. From these results we draw the conclusions given in the next section.

[Test setup: signal source → FPA channels (CH1–CH8, IN, OUT±) → AFM channels (CH1–CH12, IN±, OUT±) → oscilloscope, with external power supply]

Fig. 3. The diagram and photo of test platform for front-end signal conditioning circuits

[Plots: input-output linearity with linear fit (slope ≈ 1.02, R² ≈ 0.9997) and frequency response with −3 dB bandwidth at 245 MHz]

Fig. 4. The input-output curve and the frequency response curve of the DUT

4 Conclusion

A prototype of the front-end electronics, comprising two modules, has been designed with low noise, high speed and low distortion to meet the requirements of the STM and of the FDM with its 12-bit, 1 GSps ADC for the BaF2 detector array at CSNS-WNS. Test results show that the front-end signal conditioning circuits have a bandwidth of up to 245 MHz and very good linearity, which makes them suitable for the BaF2 detector application.

References
1. Tang, J.Y., et al.: Proposal for muon and white neutron sources at CSNS. Chin. Phys. C 34(1),
121–125 (2010)
2. He, B., et al.: Clock distribution for BaF2 readout electronics at CSNS-WNS. Chin. Phys.
C 41(1), 162–166 (2017)
3. Combes, C.M., et al.: A thermal-neutron scintillator with n/γ discrimination LiBaF3: Ce, Rb. Nucl. Instrum. Methods Phys. Res., Sect. A 416(5), 364–370 (1998)
4. Zhang, D.L., et al.: System design for precise digitization and readout of the CSNS-WNS
BaF2 spectrometer. Chin. Phys. C 41(2), 159–165 (2017)
5. Liu, S.B., et al.: BES III time-of-flight readout system. IEEE Trans. Nucl. Sci. 57(2), 419–427
(2010)
6. Texas Instrument: LMH6552 Datasheet, Dallas, TX (2007)
7. Yuan, J.L.: GTAF system’s front-end electronics design and performance testing and system
of the target switch’s development (in Chinese). Master thesis, Department of Modern
Physics, Lanzhou University, Lanzhou, Gansu (2008)
8. Texas Instrument: LMH6550 Datasheet, Dallas, TX (2004)
Generalized Signal Conditioning Module
for Spectrometers at CSNS-WNS

Chen Yanli1,2,3, Cao Ping1,2(✉), Xincheng Qi1,2, Wang Qi1, and An Qi1,2
1 State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei 230026, China
cping@ustc.edu.cn
2 School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026, China
3 Northwest Institute of Nuclear Technology, Xi’an 710024, China

Abstract. There are several spectrometers at the White Neutron Source of the China Spallation Neutron Source (CSNS-WNS), including C6D6, silicon, fission chamber and light charged particle detectors, etc. Matching the signals from these various detectors is a design challenge for the readout electronics. In this paper, a generalized readout structure based on the PXIe platform is proposed. A signal conditioning module named SCM plays an important role in connecting the signals from the various detectors to a uniform readout system. The SCM can be adapted to a variety of detectors by parameter adjustment. Test results show that the SCM has good performance, which meets the requirements of the detectors at CSNS-WNS.

Keywords: CSNS-WNS · Signal conditioning · Generalized module

1 Introduction

The Chinese Spallation Neutron Source (CSNS) is a large scientific facility under construction; it will be the first spallation neutron source in a developing country and will rank fourth in the world after completion [1, 2]. At CSNS, back-streaming neutrons from the proton beam line are suitable for constructing a White Neutron Source (WNS) [3]. The WNS is an extremely useful tool for measuring nuclear data, and the spectrometers are an important guarantee for the nuclear data measurement programme.

2 Generalized Readout Structure

The CSNS-WNS experimental station has 7 sets of spectrometers, adding up to 62 readout electronics channels. If these spectrometers were designed separately, the cost would be high and the development cycle long; therefore, a generalized structural design is put forward, as shown in Fig. 1. The generalized design has two aspects: a generalized signal conditioning module and a generalized digital module. The generalized digital module is based on the


CSNS-WNS BaF2 detector readout electronics [4]. Because the detector systems and the physics targets are different, the logic algorithms in the FPGA also differ.

[Diagram: detector and preamplifier (special design) → signal conditioning module → ADC → FPGA with DDR, T0 trigger and timing → PXIe platform (generalized digital design)]

Fig. 1. Structure of the generalized readout electronics

The detector and preamplifier are detector-specific (special) designs, while the digital circuit belongs to the generalized design. The SCM adapts the special design to the generalized digital circuit: it transforms the single-ended signal into a differential signal and adjusts the signal amplitude and level to meet the requirements of the ADC input.

3 Signal Conditioning Module (SCM)


3.1 SCM Structure
The generalized SCM is designed as a PXIe crate plug-in, which consists of four stages: (i) interface unit, (ii) signal conditioner and filter, (iii) power conversion, (iv) PXIe interface. The structure of the SCM is illustrated in Fig. 2.

Fig. 2. Structure of SCM

3.2 SCM Design


The scheme of the single-channel SCM circuit is shown in Fig. 3.

Fig. 3. The scheme of the single channel SCM circuit



To protect the LMH6552 chip, an overvoltage protection circuit is used. Gain adjustment and anti-aliasing filtering are implemented simultaneously; the anti-aliasing filter prevents spectrum aliasing by limiting the bandwidth, so that the signal can be properly sampled [5]. Usually, when the amplifier gain is less than 1, the system becomes unstable. With the gain set to its minimum, the output signal amplitude still cannot meet the requirements of the ADC input range, so the output signal amplitude is further attenuated using a resistor divider.
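The following short sketch illustrates the amplitude budget implied above: the output after the minimum stable amplifier gain is compared with an assumed ADC full-scale input, and the required resistor-divider attenuation is derived. All numerical values are assumptions for illustration, not the actual SCM component values.

```python
# Illustrative amplitude budget (assumed values, not the actual SCM design):
# choose a resistor divider so the largest conditioned signal fits the ADC input range.
v_in_max = 2.0        # assumed maximum preamplifier output amplitude (V)
gain_min = 1.0        # assumed minimum stable gain of the differential amplifier
adc_full_scale = 1.0  # assumed differential ADC full-scale input (V)

required_attenuation = adc_full_scale / (v_in_max * gain_min)   # 0.5 in this example

# Realize the attenuation with a divider R2 / (R1 + R2); pick R2 and solve for R1.
r2 = 50.0
r1 = r2 * (1.0 / required_attenuation - 1.0)
print(f"attenuation = {required_attenuation:.2f}, R1 = {r1:.1f} ohm, R2 = {r2:.1f} ohm")
```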

4 Experiments and Verification

4.1 Experiment Test


We carried out linearity and frequency-response tests. The test results are shown in Fig. 4. The integral nonlinearity is 0.56%, and the −3 dB bandwidth is 115 MHz.

[Plots: frequency response with −3 dB point at 115 MHz (left) and input-output linearity with linear fit, slope ≈ 0.25 (right)]

Fig. 4. The result of frequency domain test (left) and linear test (right)

4.2 Joint Test with Detectors


Joint debugging experiments of the SCM with a C6D6 detector, a multilayer fission chamber, a silicon detector and a gas detector have been carried out. Partial joint test results are shown in Figs. 5 and 6.

Fig. 5. Joint test result of C6D6 detector (left) and test platform (right)

[Setup: accelerator producing 14 MeV neutrons, multilayer fission chamber (HV 200 V) at 1 m from the beam, 142PC preamplifier, 0.3 m coaxial cables, 20 m coaxial cable to ORTEC 428 and SCM, with multilayer copper-mesh shielding]

Fig. 6. Joint test result of multilayer fission chamber (left) and test platform (right)

5 Conclusion

The SCM has good frequency response, good linearity and fast time response, and it can be adapted to a variety of detectors by parameter adjustment. Joint debugging with several detectors shows that the SCM can match the corresponding detector, which verifies the rationality of the design. The generalized SCM thus simplifies the design of the various spectrometers for CSNS-WNS and shortens the development cycle.

Acknowledgements. This work was supported by the National Research and Development plan (2016YFA0401602).

References
1. He, B., Cao, P., Zhang, D.-L., et al.: Clock distributing for BaF2 readout electronics at CSNS-
WNS (2016)
2. Tian, H.L., Zhang, J.R., Yan, L.L., et al.: Distributed data processing and analysis
environment for neutron scattering experiments at CSNS. Nucl. Instrum. Methods Phys. Res.
834, 24–29 (2016)
3. Jing, H.T., Tang, J.Y., Tang, H.Q., et al.: Studies of back-streaming white neutrons at CSNS.
Nucl. Instrum. Methods Phys. Res. 621(1–3), 91–96 (2010)
4. Zhang, D., Cao, P., Wang, Q., et al.: Proposal of readout electronics for CSNS-WNS BaF2
detector (2016)
5. Diaz-Diaz, I.A., Cervantes, I.: Design of a flexible analog signal conditioning circuit for DSP-
based systems. Procedia Tech. 7(4), 231–237 (2013)
Study of Front-End High Speed Readout Based
on JESD204B

Zhao Liu1,2(✉), Zhen-An Liu1, Jing-zhou Zhao1, Wen-xuan Gong1, Li-bo Cheng1,2, Peng-cheng Cao1,2, Jia Tao1,2, and Han-jun Kou1,2
1 State Key Laboratory of Particle Detection and Electronics, Institute of High Energy Physics, CAS and University of Science and Technology of China, Beijing 100049, China
liuzhao@ihep.ac.cn
2 University of Chinese Academy of Sciences, Beijing 100049, China

Abstract. This paper describes a high-speed data readout method for large-scale front-end electronics using a JESD204B-like transmission protocol implemented in an FPGA, in addition to the readout of a commercial ADC. A prototype board including analog signal processing, digitization, digital processing and control in an FPGA, and data transmission has been designed; together with a lab-designed data receiver board, a demonstration system has been set up for this study. The JESD204B protocol is implemented in the FPGA, compared and verified against the commercial ADC output, and the test results are satisfactory.

Keywords: ADC · Data readout · JESD204B

1 Backgrounds

With the emergence of high-speed digitization technology in front-end electronics, massive data readout becomes an inevitable requirement and a technical bottleneck in high-energy physics experiments and elsewhere. Large-scale data readout is limited by the single-channel bandwidth and by overall bandwidth constraints, so multi-lane synchronized data transmission is needed, especially in large-scale ASICs. RocketIO technology, JESD204B and other protocols provide a reference for high-speed front-end data readout protocols and their hardware implementation. Since RocketIO is a commercial high-speed serial technology, and the JESD204B protocol is embedded in commercial chips, they are not directly suitable for ASIC implementation. Therefore, independent research and development of an ASIC design for high-speed, large-scale data readout is necessary and urgent. Xilinx FPGA RocketIO and an ADI ADC chip with the JESD204B protocol are used to carry out this research on high-speed multi-lane synchronous transmission. Through the design of a prototype, software programming and functional testing, this work masters the relevant protocol and its chip implementation. By emulating the protocol in an FPGA, the technique is realized without relying on commercial RocketIO and JESD modules, in preparation for an ASIC high-speed readout design for the future CEPC or other experiments.


First, following the JESD204B transmission scheme, a single-width, double-layer AMC with four 500 Msps, 14-bit ADC channels, called the digitization and FEE readout board, was designed. The entire system hardware and protocol were studied, and the board works together with the back-end data processing board to achieve large-scale front-end data readout. The JESD204B protocol was then emulated using the GTH transceivers in the FPGA, and finally without using the GTH, as a principle prototype for an ASIC.

2 Data Readout Block Diagram

In order to achieve high-speed front-end data readout, the block diagram shown below is used (Fig. 1).

Fig. 1. Data readout block diagram.

The signal is processed by an amplifier and then acquired by the ADC. The ADC transmits the sampled data to the miniPOD following the JESD204B protocol. The miniPOD converts the electrical signals into optical signals and transmits them to the back-end processing board through optical fibres. The FPGA on the back-end processing board receives and displays the data. The content of this paper is the design of the digitization and FEE readout board, the FPGA firmware of the back-end processing board, and the emulation of the JESD204B protocol.

3 Digitization and FEE Readout Board

The digitization and FEE readout board consists of two parts: the digitization and FEE readout mother board (Tongue1) and the configuration daughter board (Tongue2). The mother board uses two AD9680-500 ADCs, each with a 500 Msps sampling rate and 14-bit resolution. The ADC data pass through a miniPOD and are converted into optical signals sent via fibre to the back-end. The ADC sampling clock is generated by the AD9528 PLL chip from the on-board 40 MHz oscillator or from an external reference clock, and the on-board supplies are derived from the 12 V input by power conversion chips. A SYNC button is added because the JESD204B protocol requires a SYNC signal from the receiver to the transmitter to request and acknowledge the K28.5 synchronization code. In this design no signal line could be added between the receiver and the transmitter, so the SYNC button mimics the receiver-to-transmitter SYNC and allows the ADC to work properly. The AD9680 series is pin-compatible, so the ADC can have a maximum sampling rate of 500 Msps, 820 Msps, 1 Gsps or 1.25 Gsps; this experiment uses a 500 MHz sampling frequency for data-path debugging, corresponding to a fibre line rate of 5 Gbps (Fig. 2).

Fig. 2. Digitization and FEE readout mother board.
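For reference, the standard JESD204B lane-rate relation reproduces the 5 Gbps figure quoted above. The link parameters below (M = 2 converters, L = 4 lanes, N' = 16 bits per sample word) are assumptions corresponding to a typical full-bandwidth AD9680 configuration, not necessarily the exact register settings used here.

```python
# JESD204B lane rate: lane_rate = M * N' * (10/8) * Fs / L
M = 2            # converters per link (assumed)
N_prime = 16     # bits per sample word including control/tail bits (assumed)
L = 4            # lanes per link (assumed)
Fs = 500e6       # sampling rate used in this test (samples/s)

lane_rate_bps = M * N_prime * (10 / 8) * Fs / L
print(lane_rate_bps / 1e9, "Gbps")  # 5.0 Gbps, matching the fibre line rate above
```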

4 Configuration Daughter Board

The configuration of the ADC and the clock on Tongue1 can be done either by the FPGA on the Tongue2 configuration daughter board or by the MMC microcontroller on Tongue1. In addition, Tongue2 provides panel space for a reference signal or reference clock input from the front panel. An on-board flash guarantees self-loading of the FPGA program after power-on, and Gigabit Ethernet can be used for host-computer control through IPbus configuration. The functional assignment of Tongue1 and Tongue2 ensures that the entire system can work normally without Tongue2: the configuration can also be done by the MMC microcontroller on Tongue1, the reference signal can be input from the Tongue1 backplane, and Tongue2 only provides additional auxiliary functions (Fig. 3).

Fig. 3. Configuration daughter board.

Fig. 4. AD9528 PLL register configuration.

The ADC sampling clock of the digitization and FEE readout board is generated by the AD9528 clock chip, which is programmed by EDK embedded software and configured through the IIC bus (Fig. 4).

Fig. 5. AD9680 register configuration.

The configuration of the AD9680 ADC chip is done by the ISE software via the SPI
bus (Fig. 5).

5 Data Processing Board

The optical signal from the digitization and FEE readout board is received by the back-end data processing board; the ADC transmission data format is shown in Fig. 6. The multiframe is a concept of the JESD204B protocol [1] representing multiple frames; each multiframe begins with the K code 1C and ends with the K code 7C. The FPGA on the back-end data processing board needs to perform synchronization, alignment and user-data recovery. Synchronization means using the BC code to define the 10-bit boundary of a byte when converting the data from 10 bits to 8 bits. Alignment means using the fact that the 7C characters are transmitted at the same time on all lanes to eliminate the different path delays of the transmission. Recovery of the user data is needed because the ADC replaces repeated user data with the K codes FC and 7C, so the received FC and 7C characters have to be restored to their previous values.

Fig. 6. Data format the ADC transfers to the back-end.

6 JESD204B Protocol Simulation

Using the GTH to emulate the JESD204B protocol, the transmitter first sends the sync code BC, then four multiframes of incremental data, followed by the user data. The K value is set to 32, i.e. a multiframe contains 32 frames; the F value is set to 8, i.e. a frame contains 8 bytes. The receiver is designed with four 64-bit-wide 10G channels, so that each frame can be mapped to a converter, with the eight bytes of the four samples carried on 64 parallel lines.

Fig. 7. Data format of one frame.



The parameters CF = 0 and CS = 0 are selected, i.e. tail bits are added behind each sample. The tail bits are simulated with 00, so the frame format is as shown in Fig. 7.

Fig. 8. ILA of transmitter data.

From the transmitter ILA it can be observed that, after the BC ... BC sync code, the program sends four multiframes and then starts sending the user data (Fig. 8).
The incremental data of each multiframe start with 1C and end with 7C. If the last byte of a frame is the same as the last byte of the previous frame, it is replaced with FC. If the last byte of a multiframe is the same as the last byte of the previous frame, it is replaced with 7C.
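As a minimal sketch (not the actual FPGA code) of the replacement rule just described and of its recovery at the receiver, the following Python snippet applies the rule to one multiframe; a 4-frame example is used for brevity, whereas K = 32 and F = 8 in the test above.

```python
# Alignment-character insertion/recovery following the rule described in the text.
# K codes: /F/ = 0xFC (end of frame), /A/ = 0x7C (end of multiframe).
def insert_alignment_chars(frames):
    """frames: list of byte lists for one multiframe; returns the encoded copy."""
    encoded = [list(f) for f in frames]
    prev_last = None
    for i, frame in enumerate(encoded):
        last = frame[-1]
        if prev_last is not None and last == prev_last:
            frame[-1] = 0x7C if i == len(encoded) - 1 else 0xFC
        prev_last = last  # compare against the original (pre-replacement) value
    return encoded

def recover_alignment_chars(frames):
    """Receiver side: restore FC/7C to the last byte of the previous frame."""
    decoded = [list(f) for f in frames]
    prev_last = None
    for frame in decoded:
        if frame[-1] in (0xFC, 0x7C) and prev_last is not None:
            frame[-1] = prev_last
        prev_last = frame[-1]
    return decoded

mf = [[i] * 7 + [0xAA] for i in range(4)]   # 4 frames whose last byte repeats
enc = insert_alignment_chars(mf)            # last bytes become AA, FC, FC, 7C
dec = recover_alignment_chars(enc)          # restored to AA, AA, AA, AA
```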

Fig. 9. ILA of receiver data.

The receiver uses byte_swap to adjust the 64-bit boundary. The 7C character serves as the alignment code, using RAM-based alignment, and the receiver restores FC and 7C to the values they had before the transmitter's replacement (Fig. 9).
The previous program uses the GTH IP core. Considering that the design is to be implemented in an ASIC in the future, where the GTH IP can no longer be used, the 8b/10b conversion and serialization need to be implemented in Verilog. The 8b/10b implementation is based on a lookup table: according to the input 8-bit data, combined with the running disparity of the previous cycle, the output 10-bit data are determined. Serialization is realized by a register fetching bits from different registers at different times.
As shown in Fig. 10, 1B, 1C and 1D are the original 8-bit incremental data; converted to 10 bits they become 09B, 35C and 362, serialized as 1101100100, 0011101011 and 0100011011, with the low bit transmitted first.

Fig. 10. 8b/10b and serialization.

The data after 8b/10b encoding and serialization are sent through a general IO port and received by a RocketIO port. The receiver checks the received data: if they follow the ramp pattern, the error count does not increase; otherwise it is incremented by one. Tile2_time_count is a counter that increments by one every userclk2 cycle, so multiplying it by 20 gives the number of bits checked. As shown in Fig. 11, the error count remains zero, indicating that the bit error rate is less than 1/(20 × 2 × 10^12) = 2.5 × 10^−14. This experiment shows the reliability of the transmitter; the 8b/10b conversion and serialization can serve as the ASIC prototype.

Fig. 11. General IO send RocketIO receive.

7 Performance Test and Discussion

The data processing board receives data from the AD9680 on the digitization and FEE readout board and is configured using the IBERT core. The IBERT is set to a 5 Gbps line rate and a 125 MHz reference clock, to match the 5 Gbps SERDES output of the AD9680. Even though the IBERT link is not established, because the sender is not an IBERT, the eye scan of the receiver GTH, i.e. the eye scan of one lane of the AD9680, can still be checked. The eye scan is shown in Fig. 12; the black box is the receive eye mask for LV-OIF-6G-SR [1], which is the eye mask specified for line rates from 312.5 Mbps to 6.375 Gbps in JESD204B. The signals at the receiver stay outside the pre-defined mask area, so the eye scan of the AD9680 is good.
The ADC is configured into test mode (sending incremental data). The data processing board receives the data; if the data do not increase with time, error_count is incremented by one. Time_count is a counter that increments by one every userclk2 cycle, so multiplying it by 20 gives the number of bits checked. The program can check 2 lanes at a time because the receiver board has only 2 SFPs, so 4 runs are needed to check the 8 lanes of the 2 AD9680s. Figure 13 shows the result of one test. Over the 4 runs, the error_count remains zero

Fig. 12. Eye scan of AD9680 output serdes.

Fig. 13. The data processing board captures the ramp data.

Fig. 14. Data readout by back-end processing board.



after 20 × 1 × 10^13 bits have been checked, so the bit error rate of the 8 lanes is less than 1/(20 × 1 × 10^13) = 5 × 10^−15.
A signal generator is configured to output a 50 MHz, 2 Vp-p, 0 V offset sinusoidal signal; the sine signal can be read out on the back-end processing board, as shown in Fig. 14. The X axis represents the sample number and the Y axis the ADC output amplitude. Here the ADC output is set to offset binary format, so the centre point is located at 1FFF, i.e. 8191. The 50 MHz sinusoidal signal sampled with the 500 MHz sampling clock gives 10 points per cycle, consistent with Fig. 14.

Fig. 15. VisualAnalog analyse SNR program.

The ADC capture data are fed into the VisualAnalog program shown in Fig. 15 to calculate the SNR. As shown in Fig. 16, the SNR of the whole system is 46.112 dB [2].

Fig. 16. SNR analysis result.



8 Conclusion

The JESD204B protocol was emulated both with and without the GTH transceivers. The results of this study can be used as the basis for an ASIC implementation.

Acknowledgments. This project has been supported by the National Natural Science Foundation of China (Grant No. 11435013) and the Ministry of Science and Technology of the People’s Republic of China (Grant No. 2016YFA0400104).

References
1. JEDEC Standard: Serial Interface for Data Converters JESD204B.01
2. Lin, H.-C.: Research on self-trigger front-end unit for low frequency radio detection. Ph.D. thesis
Interface and Beam Instrumentation
Development of the AWAKE Stripline
BPM Electronics

Shengli Liu1(✉), Victor Verzilov1, Wilfrid Farabolini2, Steffen Doebert2, and Janet S. Schmidt2
1 TRIUMF, Vancouver, Canada
sliu@triumf.ca
2 CERN, Geneva, Switzerland

Abstract. The stripline BPMs of AWAKE (the Advanced Proton Driven Plasma Wakefield Acceleration Experiment at CERN) are required to measure the position of a single electron bunch with a resolution of 10 µm rms for bunch charges of 100 pC to 1 nC. This paper presents the AFE-digital processing system developed at TRIUMF (Canada), which achieves this performance. The design of the electronics readout system is reviewed, and the beam test results at CALIFES at CERN are also described.

Keywords: AWAKE · Stripline BPM · Electronics

1 System Overview

The Advanced Proton Driven Plasma Wakefield Acceleration Experiment (AWAKE)


aims at studying plasma Wakefield generation and electron acceleration driven by
proton bunches, and is currently being built at CERN [1]. The electron beam that is to
be measured by the BPM has the parameters listed in Table 1.
The AWAKE stripline BPMs come in two aperture sizes, 40 mm and 60 mm; the electrodes
of both have a coverage angle of 38 degrees, with longitudinal lengths of
120 mm and 124 mm, respectively. Unless specifically indicated, the simulation and
measurement results in this paper are based on the 60 mm version. Fig. 1 shows the BPM's structure.
For each single beam pulse event, with a repetition frequency of less than 10 Hz, induced
signals are produced on the four electrodes. The horizontal/vertical position and beam
intensity can be calculated from the individual signal strengths. To first order
approximation, the position X or Y can be calculated as X = S * Diff/Sum, where
Diff/Sum is the ratio of the signal difference over the signal sum from the two opposite
electrodes, and S is the sensitivity constant [2]. The four signals are filtered, frequency
down-converted and amplified, and then digitized by a 16 bit fast ADC at 100 MSPS.
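As a toy illustration of this first-order relation (not the FPGA firmware; the sensitivity constant and amplitudes below are assumed values):

def bpm_position(a_plus, a_minus, sensitivity=15.0):
    """Position along one axis from two opposite electrode amplitudes, X = S * Diff/Sum."""
    diff = a_plus - a_minus
    total = a_plus + a_minus
    return sensitivity * diff / total

# A 2% amplitude imbalance maps to a small offset (in the units of S).
print(bpm_position(1.02, 0.98), bpm_position(0.95, 1.05))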

S. Liu and V. Verzilov—Supported by NSERC and CNRC.

© Springer Nature Singapore Pte Ltd. 2018


Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 237–242, 2018.
https://doi.org/10.1007/978-981-13-1313-4_46

Table 1. Electron beam parameters and BPM specifications


Electron beam (single bunch):
  Charge, single bunch       0.1–1 nC
  Energy                     10–20 MeV
  Rep. rate (max)            10 Hz
  Bunch length (1 sigma)     0.3–4 ps
  Beam size (1 sigma, typ.)  0.5 mm
BPM specifications:
  Range (aperture)           0–80%
  P-P resolution (0.1 nC)    50 um
  Accuracy                   250 um
  Aperture                   40 mm/60 mm
  Charge measurement         Percent range

Fig. 1. AWAKE stripline BPM, cross-section

A Xilinx FPGA (SPARTAN6) processes the waveforms, performs the necessary
calculations to get the position and intensity information, and packs them into an event
data structure (event packet). The event packets are sent to the DAQ computer when
requested over the Ethernet interface. See Fig. 2 for the system diagram. Three boards
(AFE, digitizer and FPGA board) are integrated into a 1U crate (the so-called DSP
module) to be stacked in a standard 19 in. rack. Besides the DSP modules, there is one
local-oscillator/calibration source module that provides the 434 MHz LO signal and a 404 MHz CW
calibration signal to each DSP module. About 20 DSP modules have been assembled
and are undergoing bench test and calibration at TRIUMF. Among them 16 will be
installed and calibrated on the AWAKE electron and proton beam line in the summer
of 2017 (Fig. 2).

Fig. 2. System diagram: AFE, Digitizer, FPGA board

2 Analog Signal Processing

The single bunch electron beam at AWAKE has a bunch length of 0.3–4 ps (1 sigma).
The signals induced by such beam on the stripline BPM electrodes will also have a very
narrow pulse and need to be stretched to be much wider (~1 us) for further processing.
This is normally achieved by a very high Q band pass filter, i.e. narrow pass band, at a
selected frequency which is the operating frequency of the stripline BPM. This fre-
quency is 404 MHz for our case. Its selection is a balance of the BPM sensitivity and
the inter-electrode coupling gain.
The design of the AFE board started with two MATLAB SIMULINK models, one for the
undersampling method and the other for the heterodyne method. This simulation allowed
manipulating the gain/attenuation parameters of the RF components to optimize the analog
signal processing chain, and the S/N ratio at each stage was estimated and optimized
taking into account the availability of the critical RF parts. Finally the latter method was
chosen to make use of the existing
hardware resources inherited from TRIUMF's E-linac BPM system. The input
signal formation on the BPM electrodes was based on the analytical method by Shafer
[2], the cable loss was accounted for, and after the chirping filter the very narrow signal
pulse is stretched to about 1 us or longer at frequency of 404 MHz, and then down
converted to be at 434 − 404 = 30 MHz with BW of about 10 MHz. The signal is then
converted from single ended to differential signal and subjected to further gain or
attenuation to accommodate the dynamic input range of more than 30 dB. The digital
attenuator at the front is to address the non-linearity effect for the large input signal (1
nC charge). See Fig. 4 for the diagram of the analog front end board; the assembled DSP
module is shown in Fig. 3. Beam tests at the CERN CALIFES facility helped to finalize
the gain parameters of the AFE board.
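A toy model of this heterodyne step (the frequencies are those quoted above; the pulse envelope, filter and decimation below are purely illustrative and not the AFE design):

import numpy as np

fs = 10e9                                        # simulation rate, well above RF
t = np.arange(0, 2e-6, 1 / fs)
envelope = np.exp(-t / 0.5e-6)                   # stands in for the ~1 us stretched pulse
rf = envelope * np.cos(2 * np.pi * 404e6 * t)    # 404 MHz band-limited BPM signal
lo = np.cos(2 * np.pi * 434e6 * t)               # 434 MHz local oscillator

mixed = rf * lo                                  # products at 30 MHz and 838 MHz

spec = np.fft.rfft(mixed)                        # crude low-pass: keep only < 50 MHz
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
spec[freqs > 50e6] = 0.0
if_signal = np.fft.irfft(spec, n=len(mixed))

adc_samples = if_signal[::int(fs / 100e6)]       # decimate to the 100 MSPS ADC rate
print(len(adc_samples), "IF samples at 100 MSPS")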

Fig. 3. AWAKE BPM DSP 1030 module

Fig. 4. Analog Front End (AFE) board diagram, one of the four channels is shown

3 Digital Signal Processing

The digitizer board has an Analog Devices AD9653BCPZ-125, a 16 bit, 4
channel, simultaneous-sampling 125 MSPS ADC; here it runs at 100 MSPS with a low-jitter
ASVMX-100-5ABA clock and provides performance similar to the demo board
running at 125 MSPS. A 1024-deep circular buffer keeps storing the latest signal
samples at the 100 MHz clock speed until a trigger decision is made, and then continues to
run for an event tail length to make sure a whole event is saved. The event processing uses
a 10 MHz clock, which makes the FPGA place-and-route implementation much less demanding.
Baseline restoration removes the DC offset of the signal waveform,
which mainly comes from ADC imperfections. After that, the signal waveforms from all
four channels are sent to the so-called fast FIFO. A floating point algorithm is used to
calculate the signal power and apply channel gain adjustment, calibration gain adjust-
ment, and the digital attenuator mismatch adjustment. All these three adjustments are to
apply small factors around 1 to fine tune the gain of each channel so there’s no offset in
position measurement. Compared to the on-line calibration procedure, the above
adjustments are regarded as static calibration, since they are supposed to be constant.
Since the event frequency is below 10 Hz and the event processing time is on the ms scale,
a so-called on-line calibration was introduced to compensate for drifts of the electronics
due to ultra-low-frequency disturbances, for example ambient temperature
fluctuations. Such drift had been observed during bench tests in a lab environment, and
its magnitude exceeded the BPM's specification requirement.
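A minimal software sketch of the capture scheme described above (the buffer depth comes from the text; the tail length, trigger handling and everything else are assumed, and the real implementation is FPGA logic):

from collections import deque

class CircularCapture:
    """1024-deep circular buffer that freezes pre-trigger history plus a fixed tail."""

    def __init__(self, depth=1024, tail=256):
        self.buffer = deque(maxlen=depth)        # oldest samples drop out automatically
        self.tail = tail
        self.remaining = None                    # samples still to record after a trigger

    def clock(self, sample, trigger=False):
        """Call once per ADC clock; returns the frozen event once it is complete."""
        self.buffer.append(sample)
        if self.remaining is None:
            if trigger:
                self.remaining = self.tail
        else:
            self.remaining -= 1
            if self.remaining == 0:
                self.remaining = None
                return list(self.buffer)         # pre-trigger samples followed by the tail
        return None

cap = CircularCapture()
for i in range(5000):
    event = cap.clock(i, trigger=(i == 3000))
    if event:
        print("captured", len(event), "samples around the trigger")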

The on-line calibration is a dynamic procedure performed event by event by the
on-line calibration event scheduler when turned on by the mode control register.
Immediately following each real event, the calibration signal is turned on, and a cal-
ibration event goes through the same processing chain. Assuming the calibration signal
is stable, the gain drift of the electronics system can be measured through this
calibration event, and the correction factors can be produced and then applied to the
real event.
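The idea can be sketched as follows (variable names and numbers are assumed; the real logic runs in the FPGA on the calibration-event amplitudes):

def drift_corrections(cal_amplitudes, cal_reference):
    """Per-channel factors that undo slow gain drift of the electronics."""
    return [ref / amp for amp, ref in zip(cal_amplitudes, cal_reference)]

def apply_corrections(raw_amplitudes, corrections):
    """Apply the correction factors to the amplitudes of the real beam event."""
    return [a * c for a, c in zip(raw_amplitudes, corrections)]

# Example: channel 2 has drifted 1% low compared with the stored reference.
reference = [1.000, 1.000, 1.000, 1.000]
calibration_event = [1.000, 0.990, 1.001, 0.999]
beam_event = [0.512, 0.505, 0.498, 0.501]        # the four electrode amplitudes
print(apply_corrections(beam_event, drift_corrections(calibration_event, reference)))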
Almost all registers in the FPGA, as well as the fast FIFO holding the raw ADC waveforms,
can be accessed by the host computer through Ethernet, mainly for debugging purposes.
All necessary parameters with default values are stored
in the flash memory and loaded at system start-up. Once the start-up
completes, the DSP module automatically performs the beam position and intensity
measurement based on the pre-loaded default parameters (Fig. 5).

Fig. 5. AWAKE BPM single pulse processing FPGA function block diagram

4 Beam Test Result

The 40 mm BPM prototype and the readout electronics were tested with the CERN
CALIFES electron beam in November 2016. Between the BPM monitor and the electronics
there was about 25 m of RF cable. The electron beam had the following parameters:
energy 196 MeV, single bunch, ~5 ps FWHM bunch length, charge 50 pC–380 pC,
bunch interval 0.8 s. The transverse beam size varied at the BPM position.
Figure 6 shows the system transfer gain, i.e. the ADC waveform maximum amplitude vs.
the single bunch charge, compared with the simulation result. The
linear fit is based on the measurement points. The transfer gain measured in the
range of 50–360 pC agrees with the simulation within 3 dB. About 150 measured
position points are shown, and these include the beam jitter at
the BPM. The horizontal position resolution is about 4.3 um RMS, while the vertical
one is about 8.7 um RMS. Since the bench test of the DSP module shows no difference
between the two planes, the worse resolution of the vertical plane should come from the beam jitter.

Fig. 6. Top: Transfer gain of electron beam test at CERN CALIFES; Bottom: Position
measurement at CERN CALIFES, beam charge 150 pc (run#1544).

5 Summary

So far 19 AWAKE BPM readout electronics modules have been assembled,
programmed and bench tested. The beam test at CERN CALIFES has
confirmed the most critical requirement: the RMS resolution for a rather low beam
charge reached about 4 um, including the beam jitter, which is much better than
specified.
Some firmware features such as static calibration and auto-range will be added. The
electronics modules will be installed in the AWAKE tunnel during the summer of 2017, and
commissioning will start in the fall.

References
1. Gschwendtner, E., et al.: AWAKE the advanced proton driven plasma Wakefield acceleration
experiment at CERN. NIM Phys. Res. Sect. A: Accel. Spectrom. Detect. Assoc. Equip. 829,
76–82 (2016)
2. Shafer, R.: Beam position monitoring. In: AIP Conference Proceedings on Accelerator
Instrumentation, Upton, NY (1989)
Scattering Studies with the DATURA
Beam Telescope

Hendrik Jansen1, Jan Dreyling-Eschweiler1, Paul Schütze1, and Simon Spannagel2
1 Deutsches Elektronen-Synchrotron DESY, Hamburg, Germany
hendrik.jansen@desy.de
2 CERN, Geneva, Switzerland

Abstract. High-precision particle tracking devices allow for two-


dimensional analyses of the material budget distribution of particle
detectors and their periphery. In this contribution, the material bud-
get of different targets is reconstructed from the width of the angular
distribution of scattered beam particles at a sample under test. Positrons
in the GeV-range serve as beam particles carrying enough momentum to
traverse few millimetre thick targets whilst offering sufficient deflection
for precise measurement. Reference measurements of the scattering angle
distribution of targets of known thicknesses are presented that serve as
calibration techniques required for tomographic reconstructions of inho-
mogeneous objects.

1 Introduction
Understanding the scattering of charged particles off nuclei in different mate-
rials has been of interest for many decades. Molière [1] postulated a theory
without empirical parameters to describe multiple scattering in arbitrary mate-
rials. Later, Gaussian approximations to the involved calculations of his theory
have been developed e.g. by Highland [2] in order to simplify predictions.
Today, precise tracking detectors allow for the characterisation of unknown
materials based on their scattering properties. In this contribution, measure-
ments with the DATURA beam telescope, a high-precision tracking device con-
sisting of silicon pixel sensors, are described. The scattering behaviour of GeV
electrons traversing aluminium targets with precisely known thicknesses between
13 µm and 10⁴ µm at the DESY test beam facility are studied. A track recon-
struction is performed, enabling the extraction of the particle scattering angles
at the target arising from the multiple scattering therein.

2 Experimental Set-Up

The DATURA beam telescope [3] consists of six Mimosa26 [4] monolithic active
pixel sensors (MAPS), a so-called trigger logic unit (TLU) [5], four scintillators
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 243–250, 2018.
https://doi.org/10.1007/978-981-13-1313-4_47

for triggering purposes, as well as additional infrastructure such as moving stages,


and the EUDAQ data acquisition system [6,7]. The MIMOSA 26 sensors feature
a pitch of 18.4 µm × 18.4 µm and are thinned down to a thickness of about
50 µm. Together with 50 µm thin protective Kapton foil, the total material of
the six telescope planes amounts to ε = x/X0 = 4.8 × 10⁻³, expressed as the
fraction of radiation lengths.

Fig. 1. The measurement set-up with its important parameters.

For the measurements presented, the telescope is operated with equidistant


planes, each 20 mm apart, see Fig. 1. The distance between the last plane of the
upstream telescope arm and the upstream surface of sample under test (SUT)
amounts to 12.5 mm, and from the downstream surface of the SUT to the first
downstream plane 2.5 mm. The intrinsic resolution of the sensors building the
DATURA beam telescope allows for precise tracking of beam particles and thus
measurements of both the track position and the kink angle in the scattering
material. This renders a detailed study of the material budget as a function of
the track impact position, and therefore resolved material budget measurements,
possible.
In the following, measurements of different targets are presented. The mea-
surements have been performed using positrons with energies between 1 GeV and
5 GeV provided by the DESY-II accelerator [8]. The data recorded are analysed
using the EUTelescope software [9,10,15].

3 Multiple Scattering Studies


The width of the angular distribution predicted by Highland's approximation of
the Molière theory for single scatterers evaluates to [11]

  Θ0 = (13.6 MeV / (βcp)) · z · √ε · (1 + 0.038 ln ε)   (1)

where p, βc and z are the momentum, velocity and charge number of the incident
particle. For a composite scatterer, the individual contributions to the material
budget are summed linearly, representing the total material budget ε = Σi εi.
The width induced by the i-th scatterer therefore reads

  Θ0,i = √(εi/ε) · Θ0 = (13.6 MeV / (βcp)) · z · √εi · (1 + 0.038 ln ε)   (2)

with the correction term still containing the full material budget and not the
fraction represented by the individual scatterer.
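A small numerical sketch of Eqs. (1) and (2) and of the air subtraction used below, under stated assumptions (aluminium with X0 = 88.97 mm as quoted later, β ≈ 1, z = 1):

import math

def highland_theta0(thickness_mm, momentum_gev, x0_mm=88.97, z=1, beta=1.0):
    """Highland width (rad) of the scattering angle distribution, Eq. (1)."""
    eps = thickness_mm / x0_mm                   # material budget x/X0
    return (0.0136 / (beta * momentum_gev)) * z * math.sqrt(eps) * (1 + 0.038 * math.log(eps))

def subtract_air(theta_meas, theta_air):
    """Quadratic subtraction of the no-target (air) measurement."""
    return math.sqrt(theta_meas ** 2 - theta_air ** 2)

# Example: 0.1 mm of aluminium at 3 GeV/c.
print("theta0 = %.3f mrad" % (1e3 * highland_theta0(0.1, 3.0)))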

3.1 Track Model


The six hits, one in each of the sensor planes, that stem from the same traversing
particle are identified using the triplet method, which is described in detail in
reference [3]. The General Broken Lines track model [12,13] is used to describe
the particle trajectories based on the six-tuple found by the triplet method. This
model includes the notion of scatterers and thus allows for kinks in the trajectory
at these positions. The model neglects bremsstrahlung effects, non-Gaussian dis-
tributed tails as well as non-linear effects and describes most accurately the case
of the narrow scattering angle approximation.

Fig. 2. The GBL track model with two unbiased kinks at the SUT.

For the determination of the material budget of an unknown target new


parameters are introduced in the track model. For the studies presented, two

local derivatives are included at the measurements behind the SUT, which, when
appropriately scaled, reflect two unbiased kinks at the position of the scattering
target, as is shown in Fig. 2. This yields an unbiased value for the kink angle at
the SUT along two directions, which are chosen along x and y.

3.2 Homogeneous Targets


Different homogeneous aluminium targets with thicknesses of 13, 25, 50, 100,
200, 1000 and 10⁴ µm have been measured at various energies between 1 GeV/c
and 5 GeV/c. In addition, a measurement without scattering target has been
performed in order to subtract the impact from scattering in air from the results.
Measurements are performed for different beam energies, and the kink angle
distributions for different material thicknesses and particle energies are produced
with high statistics. An example at 3 GeV/c is shown in Fig. 3 for (A) air and
(B) 100 µm aluminium. The distribution is symmetric, centred around zero, but
shows clear deviations from a normal distribution.

Fig. 3. The kink angle distribution measured at 3 GeV/c for (A) only air and (B) a
0.1 mm thick aluminium target. A normal distribution is fitted to the centre 98% of
the data yielding the width θmeas of the measured angle distribution.

Figure 4 presents the width of the kink angle distributions for the different
material thicknesses and particle energies. All measurements are corrected for
air by quadratically subtracting the measurement performed without scattering
target for the respective energy: θAl = √(θmeas² − θmeas,air²). In this first analysis
a constant systematic uncertainty of 3% is estimated on the values of θmeas and
θmeas,air and propagated to θAl . Figure 4 (A) shows θAl as a function of the energy
and we observe a monotonically decreasing dependence. The data points are

Fig. 4. θAl as (A) a function of beam momentum and (B) a function of the target
thickness together with Highland prediction. The lower plots show the relative deviation
between our measurement and the prediction.

presented together with the Highland predictions (dashed) assuming a literature


value of X0 (Al) = 88.97 mm. In Fig. 4 (B), θAl is plotted as a function of the SUT
thickness, following a linear dependence on a double-logarithmic scale. The ratio
plots display the relative deviation from the Highland prediction, cf. eq. (2). For
the material budget range and the energy range studied, most of the data points
lie within an 11% margin [11], with the thinnest target coming with a sizeable
uncertainty. Note that the shape of each distribution depends on the particle
energy and the amount of material budget, and hence differ more or less from
a normal distribution. The systematic dependence of the measurements on the
particle energy is under investigation.
Using the position information of the reconstructed telescope tracks, the
width of the kink angle distributions is calculated as a function of the position
on the scattering target, cf. Fig. 5 (A). Each 40 × 40 µm2 cell contains (80 ± 20)
tracks and the corresponding width is the standard deviation of the mean. We
observe a positive trend over the horizontal axis. Most likely this is an artefact
of the generation of the test beam from the DESY-II primary beam. A carbon
fibre target is used to produce bremsstrahlung photons from the primary par-
ticles, which undergo pair production on a copper target. The final momentum
selection is performed by a dipole magnet in the horizontal direction and a beam
collimator. Thus, a slight energy dependence in the horizontal axis is expected.
A detailed simulation and a corresponding correction are under study.
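A sketch of how such a map can be built (array names are assumed: the track impact positions on the SUT and the corresponding unbiased kink angles from the track fit):

import numpy as np

def width_map(x_mm, y_mm, kink_rad, cell_mm=0.04, extent_mm=2.0):
    """2D map of kink-angle widths: standard deviation of the kink angles in each cell."""
    x, y, kink = np.asarray(x_mm), np.asarray(y_mm), np.asarray(kink_rad)
    n = int(extent_mm / cell_mm)
    edges = np.linspace(0.0, extent_mm, n + 1)
    ix = np.clip(np.digitize(x, edges) - 1, 0, n - 1)
    iy = np.clip(np.digitize(y, edges) - 1, 0, n - 1)
    widths = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            sel = (ix == i) & (iy == j)
            if np.count_nonzero(sel) > 1:
                widths[i, j] = np.std(kink[sel])
    return widths

# widths = width_map(track_x, track_y, kink_angles)   # hypothetical input arrays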

3.3 Inhomogeneous Targets


The excellent position resolution provided by the beam telescope renders the
measurement of the material budget distribution of arbitrary objects possible.

Fig. 5. (A) The 2D distribution of the kink angle widths and the binned projections
in x- and y-direction for an aluminium target of 0.1 mm thickness at 1 GeV/c beam
momentum. (B) As in (A) for a coaxial connector.

For Fig. 5 (B), a coaxial connector has been placed in between the two telescope
arms, and the material budget has been reconstructed from the multiple scat-
tering kink angles at the different positions within the material. The structures
of the connector are well-resolved. Measurements similar to the one presented
here have the potential to be used to produce full tomographic images by rotat-
ing the object and repeating the measurement for different angles and particle
energies [14].

4 Summary and Outlook

This contribution presents a measurement of the scattering angle distribution of


various scatterers using charged particle trajectories reconstructed with a pre-
cise beam telescope. This allows for the calculation of the material budget under
the assumption of a model, e.g. Highland. The procedure can be calibrated by
measuring the material budget of targets of precisely known thickness. Using
this method, the position-resolved material budget of arbitrary targets can be
measured in a plane parallel to the sensor planes. This method could be used to
e.g. characterise the material budget of future particle detector modules, where a
precise knowledge of the material budget is of concern. Furthermore, the method-
ology could be extended by recording scattering images of targets from different
angles in order to reconstruct tomographic images.

Acknowledgement. The measurements leading to these results have been performed


at the Test Beam Facility at DESY Hamburg (Germany), a member of the Helmholtz
Association (HGF).

References
1. Moliere, G.: Theorie der Streuung schneller geladener Teilchen I. Einzelstreuung
am abgeschirmten Coulomb-Feld. Z. Naturforsch. A2, 133 (1947)
2. Highland, V.: Some practical remarks on multiple scattering. Nucl. Instrum. Meth-
ods Phys. Rev. A 129(2), 497–499 (1975)
3. Jansen, H., et al.: Performance of the EUDET-type beam telescopes. EPJ Tech.
Instrum. 3(1), 7 (2016)
4. Baudot, J., et al.: First test results of MIMOSA-26, a fast CMOS sensor with
integrated zero suppression and digitized output. In: Nuclear Science Symposium
Conference Record 2009, pp. 1169–1173. IEEE (2009)
5. Cussans, D.G.: Description of the JRA1 Trigger Logic Unit (TLU), v0.2c. Technical
report (2009). Accessed 21 Apr 2015
6. Perrey, H.: EUDAQ and EUTelescope – software frameworks for testbeam data
acquisition and analysis. In: Technology and Instrumentation in Particle Physics,
PoS(TIPP2014), p. 353 (2014)
7. EUDAQ Software Developers. EUDAQ Website. http://eudaq.github.io. Accessed
22 June 2017
8. Diener, R., Meyners, N., Potylitsina-Kube, N., Stanitzki, M.: Test Beams at DESY.
http://testbeam.desy.de. Accessed 26 July 2016

9. Bulgheroni, A., et al.: EUTelescope, the JRA1 Tracking and Reconstruction Soft-
ware: A Status Report (Milestone). Technical report (2008). Accessed 22 June
2017
10. EUTelescope Software Developers. EUTelescope Website. http://eutelescope.desy.
de. Accessed 22 June 2017
11. Patrignani, C., Particle Data Group: Review of particle physics. Chin. Phys. C
40(10), 100001 (2016)
12. Blobel, V.: A new fast track-fit algorithm based on broken lines. Nucl. Instr. Meth.
Phys. A 566(1), 14–17 (2006)
13. Kleinwort, C.: General broken lines as advanced track fitting method. Nucl. Instr.
Meth. Phys. A 673, 107–110 (2012)
14. Schütze, P., Jansen, H.: Feasibility study of a track-based multiple scattering
tomography. In: These proceedings (2018)
15. Bisanz, T., Morton, A., Rubinskiy, I.: EUTelescope 1.0: Reconstruction Software for
the AIDA Testbeam Telescope. AIDA-NOTE-2015-009 (2015). https://cds.cern.
ch/record/2000969
Particle Identification
Assembly of a Silica Aerogel Radiator
Module for the Belle II ARICH System

Makoto Tabata1, Ichiro Adachi2, Hideyuki Kawai1, Shohei Nishida2,
and Takayuki Sumiyoshi3 for the Belle II ARICH Group
1 Chiba University, Chiba, Japan
makoto@hepburn.s.chiba-u.ac.jp
2 High Energy Accelerator Research Organization (KEK), Tsukuba, Japan
3 Tokyo Metropolitan University, Hachioji, Japan

Abstract. We report recent progress in the development of a silica aero-


gel radiator module for its application in the aerogel-based ring-imaging
Cherenkov (ARICH) counter that is to be installed in the forward end
cap of the Belle II spectrometer, which is currently being upgraded at the
High Energy Accelerator Research Organization (KEK), Japan. We pro-
duced approximately 450 large-area (18 cm × 18 cm × 2 cm) hydrophobic
aerogel tiles with refractive indices of either 1.045 or 1.055 and charac-
terized their optical performance. Installation of 248 water-jet-trimmed
aerogel tiles into a support structure segmented into 124 containers was
finally completed.

Keywords: Silica aerogel · Optical material · Cherenkov radiator


Particle identification · Cherenkov ring imaging · Belle II

1 Introduction

Our research group has been undertaking the development of the aerogel-based
ring-imaging Cherenkov (ARICH) counter [1]. This device is used for identi-
fying charged π and K mesons at momenta between 0.5 and 3.5 GeV/c in the
super-B factory experiment Belle II, which uses the SuperKEKB electron–positron
collider at the High Energy Accelerator Research Organization (KEK), Japan.
The ARICH system is a proximity-focusing ring-imaging Cherenkov counter that
uses silica aerogel as a radiator and hybrid avalanche photo-detectors [2] as
position-sensitive photo-sensors, which will be installed as a particle identifica-
tion subsystem at the forward end cap of the Belle II spectrometer. This system
is an upgraded version of the threshold-type aerogel Cherenkov counters used
in the previous Belle spectrometer. The design objective is a π/K separation
capability exceeding 4σ at a momentum of 4 GeV/c.
The particle identification performance of the ARICH counter is determined
using the Cherenkov angular resolution and the number of detected photoelec-
trons. A scheme for focusing the propagation path of emitted Cherenkov photons
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 253–256, 2018.
https://doi.org/10.1007/978-981-13-1313-4_48

on the photo-detectors is introduced using dual layers of 2-cm-thick aerogel tiles


(i.e., total radiator thickness = 4 cm), wherein each layer has a different refractive
index (n) [3]. To achieve high angular resolution at momenta below 4 GeV/c, we
chose n = 1.045 and 1.055 for the upstream and downstream layers, respectively
[4]. To increase the number of detected photons, the aerogel needs to be highly
transparent. A cylindrical aluminum support structure for installing the aerogel
tiles was used, with outer and inner radii of 1.11 and 0.44 m, respectively (i.e., total
radiator area ∼ 3.3 m2). It is important to reduce the number of the
aerogel tiles used to cover the large radiator area (i.e., to reduce adjacent bound-
aries between the aerogel tiles) because particles cannot be clearly identified in
these gaps. Therefore, large, crack-free aerogel tiles are preferred. The tiles are
trimmed with a water-jet cutter so that they can be installed onto the module, and the
aerogel must be highly hydrophobic to avoid optical degradation via moisture adsorption
during the long-term experiment [5].
By 2013, we had established a method for producing a high yield of large-
area aerogel tiles (18 cm × 18 cm × 2 cm; approximately triple the area of previous
tiles [6]) with n ∼ 1.05 that fulfilled the requirements of ARICH application
(transmission length ∼ 40 mm at 400-nm wavelength) [7]. This enabled us to
divide the module into 124 fan-shaped segments (comprising four concentric
rings) to install the 248 trimmed aerogel tiles. After specific production technol-
ogy transfer to the Japan Fine Ceramics Center, mass production of the aerogel
for the actual ARICH counter began in September 2013 and was completed in
May 2014. Approximately 450 tiles (16 lots) were manufactured and delivered
to KEK [8].

2 Results
2.1 Optical Characterization of Mass-Produced Aerogel Tiles
The yield of tiles with no damage was 344 out of 448 tiles (77%). In addition to
the required 248 tiles, 96 spare tiles were delivered. Tile damage was classified
into physical/mechanical (tile cracking, chipping, and other related phenomena)
and chemical/optical (e.g., milky tile due to problems associated with the sol–gel
process) damage. The numbers of physically and chemically damaged tiles were
77 (17%) and 27 (6%), respectively.
Deviations in the refractive indices from the target values were within our
expectations. Figure 1 shows the relation between the refractive index and the
transmission length. The acceptable deviation from the designed refractive-index
values of 1.045 and 1.055 was ±0.002 for both the layers. The refractive indices
measured were between 1.0435 and 1.0463 and between 1.0532 and
1.0558 for the upstream and downstream tiles, respectively.
The transmission lengths were sufficiently long to fulfill our requirements.
The minimum transmission lengths measured were 40.9 and 32.6 mm for the
upstream and downstream tiles, respectively, which were both longer than the
required limits of 40 and 30 mm for the upstream and downstream tiles, respec-
tively. At the tile corners, the refractive index was measured with the minimum

deviation method using a laser with a wavelength of 405 nm [6]. The transmis-
sion length at 400 nm was calculated using the transmittance measured along
the tile thickness direction using a spectrophotometer [6].

Fig. 1. Relation between the refractive index and transmission length measured at
wavelengths of 405 nm and 400 nm, respectively.

2.2 Water-Jet Machining of Aerogel Tiles


Square aerogel tiles were cut into fan shapes using a water-jet cutting device at
Tatsumi Kakou Co., Ltd., Japan. The edges of the tiles were trimmed to form
four different shapes depending on those of the four different concentric rings
while maintaining the optical characteristics. A total of 283 tiles were water-jet
machined. The success rate of water-jet machining was 90%, counting tiles with no or
only acceptable volume loss (chipping at the corners); this yielded the 248 required tiles and several spares. A
total of 161 tiles (57%) had no volume loss. The number of tiles with significant
but acceptable volume loss, ≤1 cm2 or approximately ≤0.4% of the area of the
trimmed tile, was 94 (33%).

2.3 Aerogel Tile Installation


Prior to the installation of the aerogel tiles on the support structure, pairs of
upstream and downstream tiles were matched to maximize the photon focusing
effect in the dual-layer radiator scheme. The optimum value of the refractive-
index difference (Δn) between the upstream and downstream tiles is 0.01, while
an acceptable range is between 0.008 and 0.012. The Δn values of 124 installed
tile pairs ranged from 0.0095 to 0.0104.
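A simple greedy selection illustrates this kind of pairing (the actual selection procedure used for the module is not detailed here, so this is only a plausible sketch):

def pair_tiles(n_upstream, n_downstream, target=0.010, window=(0.008, 0.012)):
    """Greedily pair tiles so each refractive-index difference stays close to the optimum."""
    pairs = []
    remaining = sorted(n_downstream)
    for n_up in sorted(n_upstream):
        if not remaining:
            break
        best = min(remaining, key=lambda n_down: abs((n_down - n_up) - target))
        if window[0] <= best - n_up <= window[1]:
            pairs.append((n_up, best))
            remaining.remove(best)
    return pairs

print(pair_tiles([1.0450, 1.0441], [1.0549, 1.0553]))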
The aerogel installation was completed in December 2016. Each segmented
aerogel container was lined with black paper to absorb background Cherenkov
photons scattered inside the aerogel. Two aerogel tiles were then installed into
the individual containers after removing dust on the tile surface. Two black glass
fibers per container were finally glued to the container septum to fix the aerogel
tiles. This procedure was repeated for the 124 segments.

3 Conclusion
Progress in the development of the dual-layer silica aerogel radiator module
for the Belle II ARICH counter was reported here. Approximately 450 large-
area (18 cm × 18 cm × 2 cm) hydrophobic aerogel tiles with refractive indices
of either 1.045 or 1.055 were manufactured. The optical characteristics (i.e.,
refractive index and transmission length) of the produced tiles were confirmed
to be suitable for the actual ARICH system. Each aerogel tile was cut into fan
shapes using a water-jet cutter to fit the cylindrical support structure. A total
of 248 aerogel tiles were successfully installed in the 124 segmented containers of
the support structure. The installation of the whole ARICH system within the
Belle II spectrometer is scheduled for late 2017.

Acknowledgments. The authors are grateful to the members of the Belle II ARICH
group for their assistance. We are also grateful to the Japan Fine Ceramics Center,
Mohri Oil Mill Co., Ltd. and Tatsumi Kakou Co., Ltd. for their contributions to mass
producing the aerogel tiles and water jet machining. This study was partially sup-
ported by a Grant-in-Aid for Scientific Research (A) (No. 24244035) from the Japan
Society for the Promotion of Science (JSPS). M. Tabata was supported in part by the
Hypervelocity Impact Facility (former name: Space Plasma Laboratory) at the Insti-
tute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency
(JAXA).

References
1. Pestotnik, R., et al.: The aerogel ring imaging Cherenkov system at the Belle II
spectrometer. Nucl. Instrum. Methods Phys. Res. A 876, 265–268 (2017). https://
doi.org/10.1016/j.nima.2017.04.043
2. Yusa, Y., et al.: Test of the HAPD light sensor for the Belle II Aerogel RICH. Nucl.
Instrum. Methods Phys. Res. A 876, 149–152 (2017). https://doi.org/10.1016/j.
nima.2017.02.046
3. Iijima, T., et al.: A novel type of proximity focusing RICH counter with multiple
refractive index aerogel radiator. Nucl. Instrum. Methods Phys. Res. A 548, 383–390
(2005)
4. Tabata, M., et al.: Silica aerogel radiator for use in the A-RICH system utilized in
the Belle II experiment. Nucl. Instrum. Methods Phys. Res. A 766, 212–216 (2014)
5. Yokogawa, H., Yokoyama, M.: Hydrophobic silica aerogels. J. Non Cryst. Solids 186,
23–29 (1995)
6. Tabata, M., et al.: Hydrophobic silica aerogel production at KEK. Nucl. Instrum.
Methods Phys. Res. A 668, 64–70 (2012)
7. Tabata, M., et al.: Large-area silica aerogel for use as Cherenkov radiators with
high refractive index, developed by supercritical carbon dioxide drying. J. Supercrit.
Fluids 110, 183–192 (2016)
8. Adachi, I., et al.: Construction of silica aerogel radiator system for Belle II RICH
counter. Nucl. Instrum. Methods Phys. Res. A 876, 129–132 (2017). https://doi.
org/10.1016/j.nima.2017.02.036
TORCH: A Large-Area Detector for High
Resolution Time-of-flight

R. Forty1, N. Brook2, L. Castillo García3, D. Cussans4, K. Föhl1, C. Frei1,
R. Gao3, T. Gys1, N. Harnew3, D. Piedigrossi1, J. Rademacker4,
A. Ros García4, and M. van Dijk1
1 CERN, 1211 Geneva, Switzerland
Roger.Forty@cern.ch
2 University of Bath, Bath BA2 7AY, UK
3 University of Oxford, Oxford OX1 3RH, UK
4 University of Bristol, Bristol BS8 1TL, UK

Abstract. TORCH is a novel detector concept for high resolution time-


of-flight measurement over large areas, which has been developed for
application in a future upgrade of the LHCb experiment. The status of
the R&D project is presented, including the development of suitable fast
photon detectors, and test-beam studies of prototypes.

Keywords: Particle identification · Time-of-flight · Photon detectors


LHCb

1 TORCH Detector Concept

TORCH (Timing Of internally Reflected CHerenkov light) is an evolution of


the DIRC technique, adding precision timing and angular information. It uses
a highly polished plate of synthetic quartz as Cherenkov radiator (1-cm thick,
∼ 8%X0 ). When traversed by a charged particle, promptly produced Cherenkov
photons are trapped in the plate and propagate to its edge by total internal
reflection. The innovation is to measure, along with the detected photon hit
positions, also their direction of propagation. This is achieved by coupling a
focusing block to the edge of the radiator plate, as shown in Fig. 1, and using
an extended plate (rather than bar) of quartz. Related developments have been
pursued for the Belle II and PANDA experiments [1].
When a detected photon is correctly matched to the charged particle track
that emitted it, the distance of propagation can then be determined. By measur-
ing the Cherenkov angle at emission, the wavelength of the photon can also be cal-
culated, to correct for dispersion in the quartz. This calculation requires an esti-
mate of the particle velocity, so is performed separately for each mass-hypothesis
of the charged particle (π, K, etc.) and the velocity is determined from the
measured momentum. The power of the TORCH technique comes from com-
paring the distribution of photons that is expected for each mass hypothesis,
© Springer Nature Singapore Pte Ltd. 2018
Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 257–262, 2018.
https://doi.org/10.1007/978-981-13-1313-4_49

with the distribution that is actually observed. Fast photon detectors are used:
micro-channel plate photomultipliers (MCP-PMT). The target for their intrinsic
timing resolution is 50 ps per detected photon. The focusing scheme requires a
linear array of detectors, with fine pixellization in one direction, coarse in the
other. For 2-inch tubes (60 mm pitch) the pixellization should be 128 × 8, i.e.
0.4 × 6.6 mm2 pixels, to give an angular resolution of ∼ 1 mrad in both pro-
jections. This gives a contribution to the resolution of 50 ps, leading to a total
resolution per detected photon of 50 (intrinsic) ⊕ 50 (pixel size) ps = 70 ps. For
30 detected photons per√track (under the assumption that they are uncorrelated)
this would provide 70/ 30 = 15 ps resolution on the arrival time of the track at
the TORCH detector.

Fig. 1. (a) Cross-section through the focusing block attached to the edge of the radiator
plate, illustrating the focusing of photons emitted from the plate. (b) Schematic view
of a TORCH module.

2 Application in LHCb
LHCb is the dedicated flavour physics experiment at the LHC, studying CP
violation and rare decays of beauty and charm hadrons. It is a forward spec-
trometer, although operated in pp collider mode. An upgrade is in preparation for
2019–20, to move to a fully software trigger, reading out the detector at the
bunch-crossing rate of 40 MHz, with the luminosity levelled at 2 × 10³³ cm⁻² s⁻¹.
A further “Phase-II” upgrade is now under discussion, to push the luminosity
further towards what is available from the LHC in the HL-LHC era (from 2024
onwards) [2].
Particle identification (in particular, distinguishing the charged hadrons p, K
and π) is crucial for much of hadronic physics of LHCb, and is currently provided
by a RICH system. Low-momentum particle ID was previously provided by an

aerogel radiator, but this was not suitable for the higher occupancy expected in
the upgrade, and so has been removed. There is therefore currently no positive ID
below the kaon threshold in the C4F10 gas radiator of the RICH, at ∼ 10 GeV/c.
The difference in time-of-flight (TOF) between π and K over 10 m is ∼ 40 ps at
10 GeV/c, so 15 ps resolution would provide clear (∼ 3 σ) separation.
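The quoted time-of-flight difference follows from relativistic kinematics; a back-of-envelope check (standard particle masses assumed):

import math

C = 0.299792458                          # speed of light in m/ns
M_PION, M_KAON = 0.13957, 0.49368        # masses in GeV/c^2

def tof_ns(p_gev, mass_gev, path_m=10.0):
    """Time of flight over path_m for a particle of given momentum and mass."""
    beta = p_gev / math.hypot(p_gev, mass_gev)
    return path_m / (beta * C)

dt_ps = 1e3 * (tof_ns(10.0, M_KAON) - tof_ns(10.0, M_PION))
print("pi/K TOF difference at 10 GeV/c over 10 m: about %.0f ps" % dt_ps)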
A “start” time is needed for the TOF measurement: it could be provided
by the accelerator clock, but would need to be corrected for the timing spread
in the beam bunches. An alternative is to use signals from other tracks in the
event from the primary vertex, in the TORCH detector itself. Typically most
are pions, so the reconstruction logic can be reversed, and the start time is
determined assuming they are all π, after removing outliers from other particles.
In this way a few-ps resolution can be achieved on the start time.

Fig. 2. (a) Layout of the LHCb spectrometer along the beam axis, as proposed for
the Phase-II upgrade [2]; the TORCH detector is sited just after the main tracker.
(b) Transverse layout of TORCH modules in LHCb, to cover the full acceptance.

At the foreseen location in LHCb (10 m from the interaction point) an area of
5 × 6 m2 has to be covered, which is not feasible with a single plate, and anyway
an aperture is required for the beam pipe. It is proposed to tile the surface using
18 identical modules (each 66 × 250 cm2 ), as shown in Fig. 2. This will require
198 photon detector tubes, with ∼ 100 k channels in total. Reflections from the
transverse edges of modules will lead to ambiguities in the reconstruction, but
at a level that can be resolved by the pattern recognition. At the luminosity
expected in the upgrade of LHCb there will be a high track multiplicity, of over
100 charged tracks per event, but the performance of TORCH has been studied
with simulation in these conditions and is excellent. Fast timing will also be
very useful for pile-up suppression at high luminosity, as is being explored by
the other experiments at the LHC [3].

3 Status of the R&D Project


An EU-funded R&D project for TORCH has been running for five years, to
develop suitable photon detectors and provide proof-of-principle with a pro-
totype module. The project is a collaboration with industrial partner Photek
(UK).
At the start of the project, commercial tubes were not available that satisfied
all the requirements of TORCH:
1. fast timing (<50 ps per detected photon);
2. high active area (>80% for the linear array);
3. fine pixellization (128 × 8 rectangular pixels in a 60 × 60 mm2 tube);
4. long lifetime (up to 5 C/cm2 charge density at the anode).
A three-phase R&D program has been followed, to develop these characteristics
separately, and then bring them together in a final prototype tube [4].
The intrinsic timing performance of the first prototype tubes was measured
with a fast laser and single-channel commercial readout electronics. The tubes
have a dual-MCP in chevron configuration, with 10 μm pores. A resolution
of 23 ps has been achieved, with a small tail from laser and back-scattering
effects [5]. The lifetime issue was addressed by ALD (atomic layer deposition)
treatment of the MCP, as pioneered at Argonne/LAPPD [6], and tubes have
been successfully tested up to an integrated charge density at the anode of
4 C/cm2 [7].
Custom readout electronics have been developed, based on a chip set origi-
nally developed for the ALICE TOF detector [8]. The 32-channel NINO chip pro-
vides fast amplification and time-over-threshold as an estimate of input charge,
and the HPTDC chip performs digitization (with 100 ps bins) [9]. An effective
resolution equivalent to 0.4 mm can be achieved with two-times larger pixels
by making use of charge-sharing between neighbouring pixels. The point-spread
function was adjusted in the second phase of prototype tubes to share charge over
2–3 pixels. Calibration of the relationship between pulse width and charge was
performed, and the spatial resolution measured with laser illumination (using
the charge-weighted cluster centroid), to be better than 100 μm [10].
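The centroid itself is straightforward; a small sketch (the pixel positions and charges below are illustrative, and the time-over-threshold to charge calibration is assumed to have been applied already):

def cluster_centroid(pixel_centres_mm, charges):
    """Charge-weighted mean position of a cluster of neighbouring pixels."""
    total = sum(charges)
    return sum(x * q for x, q in zip(pixel_centres_mm, charges)) / total

# Charge shared over two neighbouring pixels interpolates the hit position between them.
print(cluster_centroid([0.0, 0.8], [0.7, 0.3]))    # -> 0.24 (mm)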
The final prototype tube integrates the features that have been developed in
the earlier phases, in a square format with 53 × 53 mm2 active area. It features
a quartz window and AC-coupled anode, so the window can be kept at ground
potential. Readout connectors are mounted on a PCB, 64 × 8 pixels per tube,
which is attached to tube using ACF (anisotropic conductive film). The delivery
of final tubes from Photek is scheduled during Summer 2017.

4 Test-Beam Studies
A small prototype module has been constructed for beam tests at the CERN
PS–T9 area. Optical components were delivered by Schott, with a second-phase
prototype MCP-PMT from Photek. The radiator plate is 35 × 12 × 1 cm3 , and

Fig. 3. Data from the test beam: hit pattern in the photon detector for selected pions
(a) and protons (b); reconstructed time distribution versus vertical position in one of
the columns of pixels in the photon detector for pions (c) and protons (d).

was coupled to the focusing block using silicone (Pactan 8030). The time-of-flight
could be independently determined using dedicated timing stations 10 m apart,
which allowed the p/π components of the beam to be separated. The detected
hits seen in the MCP-PMT match the expected pattern (taking into account
reflections from edges), as shown in Fig. 3. The small difference in Cherenkov
angle for π and p at 5 GeV/c is visible comparing (a) and (b). The time measured
for each cluster is plotted versus vertical position along one column of pixels in
(c) and (d), and the reflections are clearly separated. The p–π time-of-flight
difference of about 600 ps is cleanly resolved.
Projecting along the timing axis relative to the prediction for the earliest pion
signal, for each column of pixels (using the nearest timing station as reference)
gives a core distribution with σ ≈ 110 ps. This is before subtraction of the
contribution from the timing reference itself, so we are approaching the target
resolution of 70 ps/photon. Small tails seen in the timing distribution are due
to imperfections in the calibration and back-scattering effects.
A large prototype of a TORCH module on the scale that would be
required for LHCb is now under construction, with full width and half height:
125 × 66 × 1 cm3 . It will be equipped with 10 MCP-PMTs, for a total of over
5000 channels. Optical components have been delivered by Nikon, see Fig. 4.
Detailed measurements provided by the supplier match the specifications. This
final deliverable of the R&D project will be ready for testing in beam over the
next year.

Fig. 4. (a) Radiator plate for the large prototype, after delivery to CERN. (b) Detail
of the design for the large prototype, indicating the various components mounted at
the upper edge of the radiator plate.

5 Conclusions

The TORCH detector concept adds precise angular and timing information to
a DIRC, to provide high-precision time-of-flight over large areas. It is included
in the plans for a future upgrade of the LHCb experiment. Suitable fast photon
detectors have been developed with industry, with final prototypes expected to
be delivered imminently. Test-beam studies have achieved close to the nominal
performance, and a full-scale prototype module is under construction for testing
over the next year: its success should provide a solid foundation for proposing
the full detector in LHCb. It is an exciting time for the project!

Acknowledgments. Support from the European Research Council in funding the


TORCH R&D project is gratefully acknowledged (ERC-2011-AdG 291175–TORCH).
The authors also thank the industrial partner Photek Ltd.

References
1. Varner, G., (Belle II), Schmidt, M., (PANDA): presentations at this conference
2. Expression of Interest for a Phase-II upgrade of LHCb, CERN-LHCC-2017-003
3. Lenzi, B., (ATLAS), Bornheim, A., (CMS): presentations at this conference
4. Milnes, J., (Photek): presentation at this conference
5. Gys, T., et al.: NIM A766, 171 (2014)
6. Craven, C., (Incom): presentation at this conference
7. Gys, T.: presentation at RICH 2016, Bled
8. Anghinolfi, F., et al.: NIM A533, 183 (2004)
9. Gao, R., et al.: JINST 10, C02028 (2015)
10. Castillo García, L.: JINST 12, C02005 (2017)
High Rate Time of Flight System
for FAIR-CBM

Pengfei Lyu and Yi Wang
for CBM-TOF group
Key Laboratory of Particle and Radiation Imaging, Department of Engineering
Physics, Tsinghua University, Beijing 100084, China
yiwang@mail.tsinghua.edu.cn

Abstract. The Compressed Baryonic Matter experiment (CBM) is one of the


big experiments of the international Facility for Antiproton and Ion Research
(FAIR) in Darmstadt, Germany. CBM aims to investigate rare probes such as
charmed hadrons, multiple strange baryons, di-electrons and di-muons as
messengers of the dense phase of strongly interacting matter with unprecedented
accuracy. This is achieved by designing all components of the experiment for an
interaction rate of 10 MHz for the largest reaction systems. Charged hadron
identification in the system is realized via the Time-of-Flight (TOF) method. For
this purpose the CBM-TOF collaboration designed a TOF wall composed of
Multi-gap Resistive Plate Chambers (MRPC). Due to the high interaction rate
the key challenge is the development of high rate MRPCs above 25 kHz/cm2
which becomes possible due to the development of low resistive glass with
extremely good quality. Based on the low resistive glass, we designed several
high rate MRPCs of different structure and readout electronics. A couple of
beam test have been performed and excellent results were obtained. The TDR of
TOF has been approved and the production of low resistive glass, MRPC
modules and electronics proceeds smoothly. In this article we present the actual
design of the TOF-wall. The design of high rate MRPC, thin glass MRPC,
readout chain and beam test results are also discussed in detail.

Keywords: CBM-TOF  High rate  MRPC  Low-resistive glass

1 Introduction
1.1 Introduction to FAIR-CBM
Nowadays, exploration of the QCD phase diagram at high net-baryon densities is one of
the top concerns in nuclear physics. Several experimental programs, such as RHIC-
STAR [1], CERN-SPS [2] and NICA [3], are devoted to the search for the QCD critical
endpoint in the phase diagram. Because of the luminosity limitations of these
experiments, some rare probes, i.e. particles with very low production cross sections, are
lost. In the future, the Compressed Baryonic Matter (CBM) experiment at the Facility for
Antiproton and Ion Research (FAIR) will be a high-rate fixed-target experiment operated
at ion beam intensities up to 10⁹/s, which is sufficient to acquire data for rare probes such
as charmed hadrons, multiple strange baryons, di-electrons and di-muons [4].

© Springer Nature Singapore Pte Ltd. 2018


Z.-A. Liu (Ed.): TIPP 2017, SPPHY 212, pp. 263–269, 2018.
https://doi.org/10.1007/978-981-13-1313-4_50

The international Facility for Antiproton and Ion Research (FAIR) constructed at the
existing GSI facility in Darmstadt, Germany, will become a research platform in the field
of nuclear, hadron, atomic and plasma physics [5]. It consists of two synchrotrons
(SIS100/SIS300) with magnetic rigidities of 100 Tm and 300 Tm, delivering primary Au
beams up to 11A GeV from SIS100 and 35A GeV from SIS300. The minimum available
ion beam energy is about 2 A GeV. The CBM is located just behind the SIS100/SIS300,
and the extracted beam will reach intensities up to 10⁹ Au ions per second.
The CBM experimental setup is shown in Fig. 1. It is a multi-purpose detector able
to measure hadrons, electrons and muons in heavy-ion collisions. It consists of a
series of components just like other detection systems in nuclear and high energy
physics. The detectors composing the system are required to be highly
granular, fast and radiation-hard. In addition, CBM places high demands on
the fast read-out electronics, which will run in autonomous mode
instead of triggered mode. All signals above threshold are pushed, with time stamps, to
the data acquisition system.

Fig. 1. Connectional design of the CBM experimental facility [4].

1.2 CBM-TOF Design and Requirement


The hadron identification in CBM is achieved by measuring the momentum and time-
of-flight of these particles. In order to achieve a precise separation of p and K at 4 GeV/c
over a flight distance of 10 m, a time resolution of the whole system better than 80 ps is
required. As shown in Fig. 2, the 120 m2 CBM-TOF is designed to be composed of
multi-gap resistive plate chambers (MRPCs). In order to acquire sufficient data for
rare probes, CBM has to be operated at a target interaction rate of 10 MHz [4]. In a
simulation of Au + Au collisions at such an interaction rate, the particle flux on
the TOF wall reaches up to 30 kHz/cm2. According to the flux distribution, the
TOF wall is divided into several regions equipped with different types of MRPCs.
The TOF system puts strict requirements on the MRPCs: the system time reso-
lution should be better than 80 ps, the efficiency above 95%, the rate capability up to
30 kHz/cm2, the occupancy less than 5%, and all counters should work in
free streaming data acquisition mode.

Fig. 2. Front view of the ToF-wall. Modules are marked by dark lines, the red crossed boxes
denote the non-overlapping active areas of the single MRPC detectors inside. The yellow frames
represent the overlap of the MRPCs [4].

2 Development of Low-Resistive Glass

In order to improve the rate capability of a normal MRPC, which is only capable of
working below several hundred Hz/cm2, two main approaches are available: reducing the
bulk resistivity of the electrode material, or reducing the average charge. A kind
of low-resistive silicate glass (TUYK-LRG10) with a bulk resistivity on the order of
10¹⁰ Ω cm was produced at Tsinghua University [6]. This glass, characterized by ohmic
behavior and stability with transported charge, contains oxides of transition elements. It
has a black color and is opaque to visible light, properties commonly attributed to
glasses exhibiting a form of electron conductivity. As shown in Table 1, the surface of
this glass is very smooth, with a surface roughness of less than 10 nm. In order to
study the long-term stability of the glass resistivity with increasing charge density trans-
ported across it, a 34-day test was done. The accumulated charge was 1 C/cm2,
roughly corresponding to the CBM lifetime of 5 years of operation at the maximum
particle flux of about 20 kHz/cm2. Although the conductivity is expected to be mainly
electronic, the increase of the bulk resistivity with time/charge did not exceed a
(tolerable) factor of 2, even for such a large transported charge density.
Two MRPC prototypes based on such low-resistive glass were produced and
tested in an ELBE beam test at Dresden-Rossendorf, Germany, to examine their perfor-
mance under high rate. As shown in Fig. 3, the efficiency is still higher than 90% and
the time resolution is about 80 ps, even at a rate of 70 kHz/cm2.

Table 1. Performance of the low-resistive glass.


Parameter              Typical value
Maximal dimension      32 cm × 30 cm
Bulk resistivity       10¹⁰ Ω cm
Standard thickness     0.7, 1.1 mm
Thickness uniformity   20 µm
Surface roughness      <10 nm
Dielectric constant    7.5–9.5
DC measurement         Ohmic behavior, stable up to 1 C/cm2

Fig. 3. Measured efficiencies and time resolutions for different runs as a function of the average
particle flux determined with reference scintillators [6].

3 MRPC Prototype Design for CBM-TOF

Based on the technique of the low-resistive glass, two main types of MRPC prototypes
are designed aiming at TOF wall regions above 1 kHz/cm2. Towards the outer zone,
the Tsinghua University has developed a double-ended readout strip MRPC, named
MRPC3a in the TOF wall [7]. It consists of two mirrored stacks of resistive plates,
which fit into the three parallel readout PCBs. In each stack, the 0.25 mm nylon
monofilament spacers divide the five low-resistive glass plates into four homogeneous
gas gaps. The top and bottom plates among these five glasses are covered with the
colloidal graphite spray as the high voltage electrode. There are 24 strips on each
readout PCB. Each of them is 270 mm long, 7 mm wide and the interval is 3 mm.
Ground is placed onto the MRPC’s electrode. Feed through is carefully designed to
match the 100 Ω impedance of the PADI electronics. This method aims to minimize the
noise caused by reflection. The sectional sketch of this prototype is shown in Fig. 4(a).
Another prototype, called MRPC2, was developed by the NIPNE group for the inner zone
[8]. Just like the MRPC3a, this counter has a fully differential, symmetric, double-stack
architecture. Five 140 µm thick gas gaps are formed with the low-resistive glass plates

Fig. 4. (a) Photo of the MRPC3a prototype designed by Tsinghua University. (b) Photo of the
MRPC2 designed by NIPNE Group.

and fishing lines in each stack. The signals are designed to be read out from both strip
sides on the readout PCBs. They feature 64 electrode strips with a pitch of 4.72 mm
(2.18 mm width/2.54 mm gap) which define an active length of 302 mm.

4 Beam Test Results

Both prototypes were tested in the October 2014 GSI beam test with a 1.1A
GeV 152Sm beam [9]. The detailed layout of the beam test is shown in Fig. 5. The
tested MRPC modules are divided into two parts: the MRPC3a module is in
the lower setup, and the MRPC2 is in the upper setup. In this beam test, the
beam impinged on a 0.3 mm/4 mm/5 mm Pb target. A flux rate of several hundred Hz/cm2 was
available.

Fig. 5. Beam time setup of GSI Oct 2014. MRPC2 and BUC-Ref are in upper setup, while HD-
P2 and MRPC3a are among lower setup. PMTs are applied for counting rate calibration. The
diamond detector is placed before Pb target to record starting time of each event [7].

In order to obtain the performance of the MRPCs from the raw data, a calibration
macro based on CBM-ROOT, developed by the CBM-TOF group, was applied. This cal-
ibration runs different correction modes to remove a series of influencing factors,

including walk correction, gain correction and velocity correction. After iterating over
all these corrections, we obtain the fully calibrated data of the analyzed detector.
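One step of such a calibration loop can be sketched as follows (a stand-in for the CBM-ROOT macro, with assumed array names: tot holds time-over-threshold values and dt the time residuals against the reference counter):

import numpy as np

def walk_correction(tot, dt, n_bins=20):
    """Return a function mapping ToT to the mean time offset to subtract (walk correction)."""
    tot, dt = np.asarray(tot), np.asarray(dt)
    edges = np.linspace(tot.min(), tot.max(), n_bins + 1)
    idx = np.clip(np.digitize(tot, edges) - 1, 0, n_bins - 1)
    profile = np.array([dt[idx == i].mean() if np.any(idx == i) else 0.0
                        for i in range(n_bins)])
    centres = 0.5 * (edges[:-1] + edges[1:])
    return lambda t: np.interp(t, centres, profile)

# corrected_time = hit_time - walk_correction(tot_sample, dt_sample)(hit_tot)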
The performance of the MRPC3a is shown in Fig. 6 [10]. The efficiency is estimated
as the ratio of the number of matched hits between MRPC3a, HD-P2 and the diamond
counter to the number of matched hits between HD-P2 and the diamond counter; it stays
at 97%. The cluster size maintains a low value of about 1.6. Assuming equal
performance with the reference counter, the time resolution of the MRPC3a is
about 50 ps.

Fig. 6. Time resolution (around 50 ps), efficiency (97%) and cluster size (1.6 to 1.7) of MRPC3a under different FEE (PADI) electronics thresholds [9].

For the MRPC2 [11], an efficiency above 98% is still observed at the largest threshold used in these measurements (240 mV), where the mean cluster size is 3. The system time resolution of MRPC2 improves slightly with increasing FEE threshold; the best value is 74 ps. Assuming that MRPC2 and the BUC-Ref counter have the same time resolution, an intrinsic timing resolution of 52 ps is obtained (Fig. 7).
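The quoted intrinsic resolutions follow from deconvolving the measured two-counter time difference under the assumption of equally performing counters; a one-line numerical check (Python):

import math
sigma_system = 74.0                            # measured system resolution in ps
sigma_single = sigma_system / math.sqrt(2.0)   # equal-contribution assumption: sigma_sys^2 = 2 sigma_single^2
print(sigma_single)                            # ~52 ps, the quoted intrinsic value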

Fig. 7. System time resolution (around 75 ps), efficiency (98%) and cluster size (3 to 4) of MRPC2 under different FEE (PADI) electronics thresholds [10].

5 Conclusion

To cope with unprecedented flux rates, CBM-TOF places stringent requirements on its MRPC counters: the performance (efficiency above 95%, system time resolution better than 80 ps) must be maintained at flux rates up to 30 kHz/cm². With the help of a newly developed low-resistive glass, these demands are fully met. Prototypes were designed for the different regions of the TOF wall and were shown to satisfy the CBM-TOF requirements. In the near future, part of these prototypes will be shipped to RHIC to form the eTOF of STAR. It will use 10% of the CBM-TOF modules, including the read-out chains, and participate in the RHIC Beam Energy Scan II (BES-II) in 2019.

References
1. Schmach, A., et al.: Highlights of the beam energy scan from STAR. arXiv:1202.2389v1
(2011)
2. Aduszkiewicz, A., et al.: NA61/SHINE at the CERN SPS: plans, status and first results. Acta
Phys. Pol., B 43, 635 (2012)
3. Nica white paper (2011). http://nica.jinr.ru/files/WhitePaper.pdf
4. The CBM collaboration: Technical Design Report for the CBM Time-of-Flight System
(TOF) (2014)
5. FAIR Baseline Technical Report (2006). http://www.gsi.de/fair/reports/btr.html
6. Wang, J., et al.: Development of high-rate MRPCs for high resolution time-of-flight systems.
Nucl. Instr. Methods A 713, 40 (2013)
7. Wang, Y., et al.: Development and test of a real-size MRPC for CBM-TOF. JINST 11,
C08007 (2016)
8. Petris, M., et al.: Cosmic-ray and in-beam tests of 100 Ohm transmission line MGMSRPC
prototype developed for the inner zone of CBM-TOF. CBM Progress Report 2014, p. 89
(2015)
9. Deppner, I., et al.: Results from a heavy ion beam test @ GSI. CBM Progress Report 2014,
p. 86 (2015)
10. Lyu, P., et al.: Performance of Strip-MRPC for CBM-TOF in beam test. CBM Progress
Report 2015, p. 92 (2016)
11. Petris, M., et al.: Performance of MGMSRPC for the inner zone of the CBM-TOF wall in
heavy ion beam tests. CBM Progress Report 2015, p. 95 (2016)
The Aerogel Ring Image Cherenkov
Counter for Particle Identification
in the Belle II Experiment

Tomoyuki Konno1,6(B), Ichiro Adachi1,2,6, Rok Dolenec3,6, Hidekazu Kakuno4,6, Hideyuki Kawai5,6, Haruki Kindo2,6, Samo Korpar3,6, Peter Križan3,6,7, Tetsuro Kumita4,6, Masahiro Machida6,8, Manca Mrvar6,7, Shohei Nishida1,2,6, Kouta Noguchi4,6, Kazuya Ogawa6,9, Satoru Ogawa6,10, Rok Pestotnik6,7, Luka Šantelj1,6, Takayuki Sumiyoshi4,6, Makoto Tabata5,6, Masanobu Yonenaga4,6, Morihito Yoshizawa6,9, and Yosuke Yusa6,9

1 High Energy Accelerator Research Organization (KEK), Tsukuba, Japan
tomoyuki.konno@kek.jp
2 SOKENDAI (The Graduate University for Advanced Studies), Tsukuba, Japan
3 University of Ljubljana, Ljubljana, Slovenia
4 Tokyo Metropolitan University, Hachioji, Japan
5 Chiba University, Chiba, Japan
6 University of Maribor, Maribor, Slovenia
7 Jožef Stefan Institute, Ljubljana, Slovenia
8 Tokyo University of Science, Noda, Japan
9 Niigata University, Niigata, Japan
10 Toho University, Funabashi, Japan

Abstract. The Aerogel Ring Imaging Cherenkov (ARICH) counter is an upgrade of the forward endcap particle identification device for the upcoming Belle II experiment, designed to provide 4σ separation of charged kaons and pions up to momenta of 3.5 GeV/c. Development and production of the silica aerogel radiator, the Hybrid Avalanche Photo Detector (HAPD) and its readout electronics, the key components of the ARICH counter, have been completed, while the surrounding subsystems, namely the power supply, the readout slow control and the LED light injection system, are in operation. Construction of the ARICH counter is ongoing in parallel with cosmic ray tests. Details of the detector components and the construction status are described, and the test operation is discussed in this paper.

Keywords: Particle identification · Proximity focusing RICH · Silica aerogel radiator · Hybrid avalanche photo detector

1 Introduction
The Belle II experiment [1] will start observing e+e− collisions at the SuperKEKB collider in 2018, searching for New Physics beyond the Standard Model using high-precision measurements of flavor systems. The Aerogel

Cherenkov Counter (ACC), a threshold-type Cherenkov counter in the Belle detector [2], is completely replaced by two ring imaging Cherenkov (RICH) counters: the Time-of-Propagation counter (TOP) for the barrel and the Aerogel RICH counter (ARICH) [3] for the forward endcap. The goal of the ARICH counter is to provide excellent PID information, securing 4σ separation of charged kaons and pions at momenta between 0.5 and 3.5 GeV/c, the full kinematic range of the experiment.
The ARICH counter is a new proximity-focusing RICH counter with an aerogel radiator [4], consisting of the silica aerogel radiator, Hybrid Avalanche Photo Detectors (HAPDs), and their readout electronics. The counter is designed to meet strict requirements in the Belle II detector: a compact size to fit the very limited space of about 28 cm in the forward endcap region, the capability to operate in a high magnetic field of 1.5 T, radiation hardness at levels 20 times higher than in the Belle experiment, and readout electronics performance up to a 30 kHz trigger rate.

2 Detector Components and Construction

Two layers of aerogel radiators with different refractive indices [5] are adopted to focus the Cherenkov ring onto the surface of the photon sensors. A method to produce silica aerogel tiles with high transparency and flexible refractive indices was newly developed for the ARICH counter. The thickness of each layer and the refractive indices of the radiators are optimized to be 20 mm, and 1.045 (upstream) and 1.055 (downstream), respectively. 248 aerogel tiles with an approximate size of 18 × 18 cm² were produced to cover the effective region of the ARICH counter. Installation of the aerogel tiles into the support structure of the counter was completed in 2016.
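The focusing effect of the two refractive indices can be illustrated with a short calculation (Python). The expansion distance of roughly 160 mm used below is an assumption for illustration only, and refraction at the aerogel exit surface is neglected; the point is that the larger index of the downstream layer compensates its shorter lever arm, so the two Cherenkov rings nearly coincide on the photon-detector plane.

import math

def ring_radius(n, distance_mm, beta=1.0):
    # Radius of the Cherenkov ring on the photon detector for light emitted
    # at the given distance upstream of the detector plane (rough sketch).
    theta_c = math.acos(1.0 / (n * beta))
    return distance_mm * math.tan(theta_c)

expansion = 160.0                                   # assumed aerogel-to-HAPD distance in mm
t = 20.0                                            # thickness of each aerogel layer in mm
r_up = ring_radius(1.045, expansion + 1.5 * t)      # emission at centre of upstream layer
r_down = ring_radius(1.055, expansion + 0.5 * t)    # emission at centre of downstream layer
print(r_up, r_down)                                 # similar radii: overlapping rings, sharper image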
The Hybrid Avalanche Photo Detector (HAPD) [6], co-developed with Hamamatsu Photonics, is adopted as the position-sensitive photon sensor. The HAPD consists of a super-bialkali photocathode with 28% quantum efficiency, a vacuum tube that accelerates the photoelectrons to provide a bombardment gain, and pixelated APD pads providing an avalanche gain. The total amplification gain is estimated to be about 45,000. The APD sensor is divided into 144 readout channels of 5 × 5 mm² pixels. The HAPD was confirmed to be operational in the magnetic field, which also improves its position resolution by suppressing crosstalk due to photoelectron back-scattering and by cancelling the non-uniformity of the electric field at the barrel of the vacuum tube. In addition, the structure of the APD sensor pads was modified to endure the neutron and gamma radiation environment. Finally, all 420 HAPDs were confirmed to fulfill the requirements and were then assembled with the readout electronics for installation in June 2017.
A power supply system [7] for the HAPDs is also important for stable operation of the ARICH counter. We adopted the CAEN A1590N for the high voltage and the A7042P for the bias and guard voltages, respectively. A software power-supply control system with a control GUI was developed based on a software framework of the Belle II data acquisition system.
Two stages of readout electronics [8] for the HAPDs are introduced in order to process the signals, merge the data and reduce the number of cables to the

outside of the detector. A frontend board attached to the HAPD consists of 4 dedicated ASICs (SA01) to extract hit patterns of the APD channels and an FPGA (Xilinx Spartan-6) to digitize the hit information; a merger board based on an FPGA (Xilinx Virtex-5) collects the digitized data from up to 6 frontend boards, merges them and sends them to the Belle II global data acquisition (DAQ) system [9]. A performance test was carried out with one merger board and 6 frontend boards, confirming sufficient performance for random (Poisson-distributed) trigger rates up to 50 kHz. A slow control software with a graphical user interface (GUI) is also prepared to configure and monitor both the frontend and the merger boards. 420 frontend and 72 merger boards are prepared for installation. Assembly of the frontend boards onto the HAPDs and installation into the ARICH support structure are completed, while all of the merger boards will be installed by the end of July 2017.
An LED light injection system [10] injects LED light into the ARICH counter via optical fibers. 90 injection points, including backups, are prepared to illuminate the surface of the aerogel radiators and diffuse the light to the photon sensors with uniform intensity over the detector. The system is confirmed to provide a uniform photon intensity in synchronization with the triggers.

3 Test Operation with Cosmic Ray


A test operation using the HAPD modules installed in the structure was carried out to observe Cherenkov ring images from cosmic rays. 16 HAPDs connected to 4 merger boards were located under a temporary aerogel tile with about 18 × 18 cm² coverage, and a pair of plastic scintillators was placed above the radiator as a trigger source for cosmic rays. This cosmic ray test was the first combined operation with a copy of the Belle II DAQ system; the merger boards were synchronized and read out by the common DAQ hardware. As a result of the test operation, stable running over 12 h at a trigger rate of 0.1 Hz was achieved, and event synchronization among the merger boards was confirmed by observing a number of clear ring images from cosmic rays, as shown in the left plot of Fig. 1.

Fig. 1. Results of the cosmic ray test. The left plot shows an event display of Cherenkov ring images on the HAPD surface; the right plot shows the distribution of reconstructed Cherenkov angles.

In addition, stable operation of the power supply system and of the readout slow control was also established. The mean number of detected photoelectrons and the mean Cherenkov angle are measured to be about 13 p.e. and 300 mrad, respectively, as shown in the right plot of Fig. 1, consistent with cosmic muons at momenta between 0.5 and 4 GeV/c. We therefore conclude that successful operation of the ARICH system was established with the test setup.
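The measured mean Cherenkov angle can be cross-checked against the expectation for the upstream radiator index; the short sketch below (Python, assuming n = 1.045 and neglecting the second layer and chromatic dispersion) shows that muons in the quoted momentum range give angles approaching roughly 300 mrad.

import math

M_MU = 0.10566                                       # muon mass in GeV/c^2

def cherenkov_angle_mrad(p_gev, n=1.045, mass=M_MU):
    beta = p_gev / math.sqrt(p_gev**2 + mass**2)
    if n * beta <= 1.0:
        return 0.0                                   # below Cherenkov threshold
    return 1000.0 * math.acos(1.0 / (n * beta))

for p in (0.5, 1.0, 2.0, 4.0):
    print(p, round(cherenkov_angle_mrad(p), 1))      # rises towards ~293 mrad at high momentum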

4 Summary and Schedule


The Aerogel RICH (ARICH) counter is placed in the forward endcap region of the Belle II detector to provide good separation between pions and kaons over the full kinematic range of the experiment, up to 3.5 GeV/c. The ARICH counter is developed to fit the limited space in the Belle II detector, to operate in a high magnetic field and to provide high radiation tolerance. The silica aerogel radiator tiles and the HAPD modules with their frontend boards are fully installed in the support structure, while installation of the merger boards will be finished in July 2017. In addition to these components, the subsystems for the HAPD power supply, the slow control of the readout electronics and the LED light injection are operational during detector tests. We performed a test operation with the partially installed modules to observe Cherenkov ring images from cosmic rays, confirming sufficient stability and event synchronization with clear ring images. The observables measured in the cosmic ray test are consistent with cosmic muons. The ARICH counter will be fully constructed and installed into the Belle II detector in autumn 2017.

References
1. Abe, T., et al.: Belle II Technical Design Report (2010), arXiv:1011.0352
[physics.ins-det], KEK Report 2010-1
2. Abashian, A., et al.: The Belle detector. Nucl. Instrum. Meth. A479, 117–232
(2002)
3. Pestotnik, R., et al.: The aerogel Ring Imaging Cherenkov system at the Belle II
spectrometer. Nucl. Instrum. Methods Phys. Res. A 876, 265–268 (2017). https://
doi.org/10.1016/j.nima.2017.04.043. (in press)
4. Iijima, T., et al.: A novel type of proximity focusing RICH counter with multiple
refractive index aerogel radiator. Nucl. Instrum. Methods Phys. Res. A 548, 383–
390 (2005)
5. Tabata, M., et al.: Silica aerogel radiator for use in the A-RICH system utilized
in the Belle II experiment. Nucl. Instrum. Methods Phys. Res. A 766, 212–216
(2014)
6. Yusa, Y., et al.: Test of the HAPD light sensor for the Belle II Aerogel RICH. Nucl.
Instrum. Methods Phys. Res. A 876, 149–152 (2017). https://doi.org/10.1016/j.
nima.2017.02.046. (in press)
7. Yonenaga, M., et al.: Development of slow control system for the Belle II ARICH
counter. Nucl. Instrum. Methods Phys. Res. A 876, 241–245 (2017). https://doi.
org/10.1016/j.nima.2017.03.037. (in press)
8. Nishida, S., et al.: Readout ASICs and electronics for the 144-channel HAPDs for
the Aerogel RICH at Belle II. Phys. Procedia 37, 1730–1735 (2012)

9. Yamada, S., et al.: Data acquisition system for the Belle II experiment. IEEE
Trans. Nucl. Sci. 62, 1175–1180 (2015)
10. Hataya, K., et al.: Development of the ARICH monitor system for the Belle II
experiment. Nucl. Instrum. Methods Phys. Res. A 876, 176–180 (2017). https://
doi.org/10.1016/j.nima.2017.02.070. (in press)
Endcap Disc DIRC for PANDA at FAIR

M. Schmidt1(B), M. Düren1, E. Etzelmüller1, K. Föhl1, A. Hayrapetyan1, K. Kreutzfeldt1, O. Merle1, J. Rieke1, T. Wasem1, M. Böhm2, W. Eyrich2, A. Lehmann2, M. Pfaffinger2, F. Uhlig2, A. Ali3, A. Belias3, R. Dzhygadlo3, A. Gerhardt3, K. Götzen3, G. Kalicy3, M. Krebs3, D. Lehmann3, F. Nerling3, M. Patsyuk3, K. Peters3, G. Schepers3, L. Schmitt3, C. Schwarz3, J. Schwiening3, M. Traxler3, P. Achenbach4, M. Cardinali4, M. Hoek4, W. Lauth4, S. Schlimme4, C. Sfienti4, and M. Thiel4

1 Justus-Liebig-University of Giessen, Giessen, Germany
mustafa.a.schmidt@physik.uni-giessen.de
2 Friedrich-Alexander-University of Erlangen-Nuremberg, Erlangen, Germany
3 GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt, Germany
4 Institut für Kernphysik, Johannes-Gutenberg-University of Mainz, Mainz, Germany

Abstract. The Endcap Disc DIRC (EDD) has been developed to provide excellent particle identification for the future PANDA experiment by separating pions and kaons up to a momentum of 4 GeV/c with a separation power of 3 standard deviations (s.d.). The detector is placed in the forward endcap of the PANDA target spectrometer. It consists of a fused silica plate and focusing elements placed at the outer rim, which focus the Cherenkov light onto the photocathodes of the attached MCP-PMTs. A compact and fast readout of the signals is realized with dedicated ASICs. The performance has been studied and validated with different prototype setups at various testbeam facilities.

Keywords: Particle identification · Cherenkov detector · DIRC

1 Detector Design

The future Disc DIRC detector for the PANDA experiment [1] has a compact and modular design consisting of four independent quadrants of fused silica Cherenkov radiator, 20 mm thick with a surface roughness of less than 1 nm. It is designed to separate pions and kaons in the momentum range of 1–4 GeV/c with a separation power of 3 s.d., covering polar angles between 5° and 22°.
The detector is shown in Fig. 1. The Cherenkov light created inside the radiator disk is internally reflected to the outer rim of each quadrant, where 96 focusing elements (FELs) are attached. Every FEL is bonded to a bar connected to the radiator disk and has a cylindrical mirror coating on its backside. The Cherenkov photons captured in the bars are focused onto a focal plane formed by the photocathode of an MCP-PMT. Each MCP-PMT contains a segmented anode with 3 × 100 pixels for acquiring the Cherenkov photon hit pattern of the

Fig. 1. Concept of the Disc DIRC for PANDA showing one quadrant with radiator
and FELs (left), the functionality of the focusing elements (center), and a sketch of the
attached MCP-PMTs plus readout (right).

traversing particle. From the measured hit pattern, the mean Cherenkov angle and the likelihood values for different particle hypotheses are reconstructed.
A long-pass color filter in front of the MCP-PMT entry window, which removes photons below a specific wavelength, improves the detector resolution, which largely depends on the chromatic error inside the fused silica radiator and on the number of measured photon hits. For the signal readout, TOFPET ASICs [2] with a time resolution of 25 ps are used.

2 Performance Analysis

The detector performance has been simulated in the PandaRoot framework [3], including all wavelength-dependent parameters. The important parameters are the transmission of fused silica, the reflectivity of the mirrors and the MCP-PMT detection efficiency, with an assumed collection efficiency of 65%. Two candidate photocathodes with a maximum quantum efficiency of 30% have been studied with Monte Carlo simulations: a blue photocathode with its maximum between 250 nm and 400 nm, and a green photocathode with enhanced sensitivity between 400 nm and 500 nm that rises between 330 nm and 370 nm. One result of the simulation is that the best resolution is obtained for a long-pass filter cut-off wavelength of around 360 nm.
For this filter value, the separation power for pions and kaons has been calculated for all combinations of azimuthal angle φ and polar angle θ, as shown in the left panel of Fig. 2. The average value is 4.4 s.d. A one-dimensional projection is presented in the right panel of Fig. 2, showing the separation power as a function of the polar angle θ for several particle momenta and for the two photocathode options of the MCP-PMTs. For very large angles above θ = 21°, the separation power drops slightly below 3 s.d. at the highest momentum, due to larger geometrical errors affecting the reconstruction algorithm.
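The order of magnitude of these separation powers can be reproduced with a simple estimate (Python). The refractive index of fused silica and the per-track Cherenkov angle resolution used below are illustrative assumptions, not the simulated values.

import math

def cherenkov_angle(p, mass, n):
    beta = p / math.sqrt(p**2 + mass**2)
    return math.acos(1.0 / (n * beta))

def separation_sd(p, n=1.47, sigma_track_mrad=1.8):
    # Pion-kaon separation in standard deviations, assuming a Gaussian
    # per-track Cherenkov angle resolution (illustrative value).
    m_pi, m_k = 0.13957, 0.49368                   # GeV/c^2
    dtheta = abs(cherenkov_angle(p, m_pi, n) - cherenkov_angle(p, m_k, n))
    return 1000.0 * dtheta / sigma_track_mrad

print(round(separation_sd(4.0), 1))                # ~3.6 s.d. at 4 GeV/c with these assumptions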

Fig. 2. Separation power of the final detector as a function of θ and φ for the blue photocathode (left), and as a function of the polar angle θ (averaged over φ) for several momenta and both photocathodes (right).

3 Testbeam Results
The first particle identification with a Disc DIRC prototype was achieved in 2012 at the T9 testbeam at CERN [4]. In 2015, an upgraded prototype consisting of a 500 mm square radiator plate with fused silica components and a TRB3 readout was tested in the same beam line. The measured single-photon resolution compared well with the Monte Carlo data [5]. For the following testbeam in 2016 at DESY, with a 3 GeV/c electron beam, the prototype design was comparable to that of the final detector in terms of optical precision and TOFPET-based readout electronics.

Fig. 3. Comparison between the simulated and measured photon yield (left) and single
photon resolution (right) as a function of the polar angle.

The left side of Fig. 3 shows the comparison of the single-photon resolution between the testbeam and Monte Carlo data for an angle scan. The distance from the particle punch-through point to the FEL was 450 mm. The resolution changes as a function of the polar angle due to an increasing number of reflections inside the FELs. On the right side of Fig. 3, the comparison of the photon yield is presented. The simulation output incorporates the results of an independent charge-sharing analysis, which studied the number of hits produced by the detected charge cloud of a single photoelectron in the MCP-PMT.

A position scan perpendicular to the FELs was performed and used for an event-mixing study simulating a 30-FEL readout with 30 equidistant positions. The large background in the testbeam data could be handled by applying a cut on the reconstructed coarse time and a truncated-mean method to derive a mean Cherenkov angle value for each mixed event.
Figure 4 compares the single-photon resolution with the distribution of the truncated means of the mixed events. The resolution of the mean value scales approximately with the factor 1/√N, as expected. The obtained resolution of σ = 2.5 mrad is within a factor of less than 2 of the anticipated performance of the fully equipped final Disc DIRC, which has a resolution of σ = 1.8 mrad at the same momentum. The remaining discrepancy is explained by the absence of a chromatic filter, the smaller number of FELs and the larger effect of multiple scattering of the electrons inside the radiator disk. The resolution of the testbeam data agrees with the Monte Carlo simulations. The photon yields of 18 (14) hits per event for the measured (Monte Carlo) data also agree within the precision of the simulation.
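The 1/√N scaling and the role of the truncated mean can be checked with a toy study (Python; the single-photon resolution, photon count and background level below are illustrative, not the testbeam values).

import numpy as np

rng = np.random.default_rng(0)
sigma_single, n_hits = 10.0, 16                  # assumed single-photon resolution (mrad) and signal hits/event

def truncated_mean(v, frac=0.2):
    # Mean after discarding the fraction of hits furthest from the median,
    # which suppresses background hits before averaging.
    v = np.asarray(v)
    keep = np.argsort(np.abs(v - np.median(v)))[: int(len(v) * (1 - frac))]
    return v[keep].mean()

signal = rng.normal(0.0, sigma_single, (20000, n_hits))
background = rng.uniform(-200.0, 200.0, (20000, 2))   # two out-of-ring hits per event
events = np.concatenate([signal, background], axis=1)

print(np.std(signal.mean(axis=1)), sigma_single / np.sqrt(n_hits))  # plain mean follows 1/sqrt(N)
print(np.std(events.mean(axis=1)))                                  # strongly degraded by background
print(np.std([truncated_mean(ev) for ev in events]))                # truncation largely restores the resolution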

Fig. 4. Single photon resolution (left) and the average Cherenkov angle (right) from
the event combination of 30 equidistant positions on the radiator disk.

References
1. PANDA Collaboration, Technical Progress Report, FAIR-ESAC/Pbar (2005)
2. Rolo, M.D., et al.: TOFPET ASIC for PET applications. J. Instrum. 8, C02050
(2013)
3. Spataro, S.: Event Reconstruction in the PandaRoot framework. J. Phys. Conf. Ser.
396, 022048 (2012)
4. Föhl, K., et al.: First particle identification with a Disc-DIRC detector. Nucl.
Instrum. Methods Phys. Res. Sect. A 732, 346–351 (2013). https://doi.org/10.1016/
j.nima.2013.08.023
5. Etzelmüller, E., et al.: Tests and developments of the PANDA Endcap Disc DIRC.
J. Instrum. 11, C04014 (2016)
The NA62 RICH Detector
Construction and Performance

Andrea Bizzeti1,2(B)
1 Department of Physics, Informatics and Mathematics, University of Modena and Reggio Emilia, Modena, Italy
2 Istituto Nazionale di Fisica Nucleare – Sezione di Firenze, Sesto Fiorentino (FI), Italy
andrea.bizzeti@fi.infn.it

Abstract. The RICH detector of the NA62 experiment at the CERN SPS is required to suppress the μ+ contamination in K+ → π+ν ν̄ candidate events by a factor of at least 100 in the 15–35 GeV/c momentum range, to measure the pion arrival time with ∼100 ps resolution, and to produce a trigger for a charged track. It consists of a 17 m long tank filled with neon gas at atmospheric pressure. Čerenkov light is reflected by a mosaic of 20 spherical mirrors placed at the downstream end of the vessel and is collected by 1952 photomultipliers placed at the upstream end. The construction of the detector is described and the performance reached during the first runs is discussed.

Keywords: Čerenkov detectors · Particle identification

1 The NA62 RICH Detector


The NA62 experiment [1] at the CERN SPS North Area has been designed to study charged kaon decays and in particular to measure the branching ratio (≈ 10⁻¹⁰) of the very rare decay K+ → π+ν ν̄ with 10% precision. The NA62 experimental apparatus is described in detail in [2].
The largest background to K+ → π+ν ν̄ comes from the K+ → μ+νμ decay, which is ten orders of magnitude more abundant. This huge background is mainly suppressed using kinematic methods and the very different response of the calorimeters to muons and charged pions. An additional factor of 100 in muon rejection is needed in the momentum range between 15 and 35 GeV/c. A dedicated Čerenkov detector, the RICH, has been designed and built for this purpose. Neon gas at atmospheric pressure is used as radiator, with refractive index n = 1 + 62.8 × 10⁻⁶ at a wavelength λ = 300 nm, corresponding to a Čerenkov threshold for charged pions of 12.5 GeV/c. Two full-length prototypes were built and tested with hadron beams to study the performance of the proposed layout [5,6].
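The quoted pion threshold follows directly from the refractive index; a quick check (Python) also gives the thresholds for the other relevant particle species.

import math

n_neon = 1.0 + 62.8e-6                               # refractive index quoted at 300 nm

def threshold_momentum(mass_gev, n=n_neon):
    # Cherenkov emission requires beta > 1/n, i.e. p > m / sqrt(n^2 - 1)
    return mass_gev / math.sqrt(n * n - 1.0)

for name, m in (("e", 0.000511), ("mu", 0.10566), ("pi", 0.13957), ("K", 0.49368)):
    print(name, round(threshold_momentum(m), 2), "GeV/c")   # pion threshold ~12.5 GeV/c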
On behalf of the NA62 RICH Working Group: A. Bizzeti, F. Bucci, R. Ciaranfi,
E. Iacopini, G. Latino, M. Lenti, R. Volpe, G. Anzivino, M. Barbanera, E.
Imbergamo, R. Lollini, P. Cenci, V. Duk, M. Pepe, M. Piccini.

The RICH radiator container ("vessel") is a 17 m long vacuum-proof steel tank, composed of 4 cylindrical sections with diameters up to 4 m, closed by two thin aluminium windows to minimize the material budget crossed by particles. A sketch of the RICH detector is shown in Fig. 1. Fresh neon at atmospheric pressure is injected into the 200 m³ vessel volume after it has been completely evacuated. No purification or recirculation system is used.

Fig. 1. Schematic view of the RICH detector. The hadron beam enters from the left
and travels throughout the length of the detector in an evacuated beam pipe. A zoom
of one of the two disks hosting the photomultipliers is shown on the left; the mirror
mosaic is made visible through the vessel on the right. From [2].

A mosaic of spherical mirrors with 17.0 m focal length, 18 of hexagonal shape (350 mm side) and two semi-hexagonal ones with a circular opening for the beam pipe, is placed at the downstream end of the vessel to reflect and focus the Čerenkov light towards the two regions equipped with photomultipliers (PMTs) at the upstream end of the vessel (see Fig. 1).
Each mirror is supported by a dowel inserted in a 12 mm diameter cylindrical hole drilled in the rear face of the mirror close to its barycentre. The dowels are connected to the RICH vessel by means of a vertical support panel, made of an aluminium honeycomb structure to minimize the material budget. Two thin aluminium ribbons, each pulled by a micrometric piezo-electric motor, keep the mirror in equilibrium and allow its orientation to be adjusted. A third, vertical ribbon without a motor prevents rotation about the mirror axis. The mirror orientation is measured by comparing the position of the centre of the Čerenkov ring reconstructed by the RICH PMTs with its expected position based on the track direction reconstructed by the spectrometer, and it can be finely tuned using the piezomotors (see Fig. 2).
Two arrays of 976 Hamamatsu R7400-U03 PMTs are located at the upstream
end of the vessel, left and right of the beam pipe. The PMTs have an 8 mm

Fig. 2. (left) Scheme of the mirror orientation system: two ribbons connected to piezo-electric motors pull the mirror micrometrically, while a third vertical ribbon prevents on-axis rotation. (centre and right) Position difference between the centre of the Čerenkov ring reconstructed by the RICH PMTs and its expected position based on the track direction reconstructed by the spectrometer, after tuning the mirror orientations. "+X side" and "–X side" indicate the two locations of the PMTs, on the left and on the right of the beam pipe. Each point represents a single mirror.

diameter active region and are packed in a hexagonal lattice with 18 mm pixel size. Each PMT has a bialkali cathode, sensitive between 185 and 650 nm wavelength, with about 20% peak quantum efficiency at 420 nm. Its 8-dynode system provides a gain of 1.5 × 10⁶ at 900 V supply voltage, with a time jitter of 280 ps FWHM. The PMTs are located in air outside the vessel and are separated from the neon by a quartz window; an aluminized mylar Winston cone [7] is used to reflect the incoming light onto the active area of each PMT. The front-end electronics consists of 64 custom-made boards, each equipped with four 8-channel Time-over-Threshold NINO discriminator chips [8]. The readout is provided by 4 TEL62 boards, each equipped with sixteen 32-channel HPTDCs [9]; a fifth TEL62 board receives a multiplicity output (the logic OR of the 8 channels) from each NINO discriminator and is used for triggering. The time resolution for Čerenkov rings has been measured by comparing the average times of two subsets of the PMT signals, resulting in σt(ring) = 70 ps.

2 Particle Identification
In order to assess the RICH performance, the Čerenkov ring radius measured by the RICH (which depends on the particle velocity) is related to the track momentum measured by the magnetic spectrometer. Figure 3 (left) shows a clear separation between different particle species in the momentum range 15–35 GeV/c. Pion-muon separation is achieved by cutting on the particle mass, calculated from the measured particle velocity (from the Čerenkov ring radius) and momentum. The charged-pion identification efficiency επ and the muon mis-identification probability εμ are plotted in Fig. 3 (right) for several values of the mass cut. At επ = 90% the muon mis-identification probability is εμ ≈ 1%.
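The mass variable used for the cut can be written explicitly; a minimal sketch (Python, using the 17 m mirror focal length and the neon index from Sect. 1, in a small-angle approximation) reconstructs the mass from the measured ring radius and the spectrometer momentum.

import math

FOCAL = 17.0                                        # mirror focal length in m
N_NEON = 1.0 + 62.8e-6

def mass_from_ring(radius_m, p_gev):
    # theta_c ~ r/f for small angles; beta = 1/(n cos theta_c); m = p sqrt(1/beta^2 - 1)
    theta_c = radius_m / FOCAL
    beta = 1.0 / (N_NEON * math.cos(theta_c))
    if beta >= 1.0:
        return 0.0                                  # at or above the saturated ring radius
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

# Example: expected ring radius of a 25 GeV/c pion, then the mass recovered from it
beta_pi = 25.0 / math.sqrt(25.0**2 + 0.13957**2)
r_pi = FOCAL * math.acos(1.0 / (N_NEON * beta_pi))
print(r_pi, mass_from_ring(r_pi, 25.0))             # ring radius ~0.165 m, recovered mass ~0.14 GeV/c^2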

Fig. 3. (left) Čerenkov ring radius versus particle momentum. Vertical lines delimit
the momentum fiducial region 15–35 GeV/c. Electrons, muons, charged pions and scat-
tered beam kaons are clearly visible. Particles with momentum higher than 75 GeV/c
correspond to halo muons. (right) Pion identification efficiency versus muon mis-
identification probability.

3 Conclusions

The NA62 RICH detector was installed in 2014 and commissioned in autumn 2014 and 2015; it has been fully operational since the 2016 run. First performance studies with the collected data show that the RICH fulfils the expectations, achieving a time resolution of 70 ps and a factor of ∼100 in muon suppression.

Acknowledgements. The construction of the RICH detector would not have been possible without the enthusiastic work of many technicians from the University and INFN of Perugia and Firenze, the staff of the CERN laboratory, and the collaboration with Vito Carassiti from INFN Ferrara. Special thanks to the NA62 collaboration for its full dedication to the construction, commissioning and running of the experiment.

References
1. Anelli, G., et al.: Proposal to measure the rare decay K + → π + ν ν̄ at the CERN
SPS, CERN-SPSC-2005-013, CERN-SPSC-P-326 (2005)
2. Cortina Gil, E., et al.: NA62 Collaboration. J. Instr. 12, P05025 (2017). https://
doi.org/10.1088/1748-0221/12/05/P05025
3. Buras, A.J., et al.: JHEP 1511, 166 (2015)
4. Artamonov, A.V., et al.: E949 Collaboration. Phys. Rev. D 79, 092004 (2009)
5. Anzivino, G., et al.: Nucl. Instr. Meth. Phys. Res. A 538, 314 (2008)
6. Angelucci, B., et al.: Nucl. Instr. Meth. Phys. Res. A 621, 205 (2010)
7. Hinterberger, H., Winston, R.: Rev. Sci. Instr. 37, 1094 (1966)
8. Aghinolfi, F., et al.: Nucl. Instr. Meth. Phys. Res. A 533, 183 (2004)
9. Christiansen, J.: High Performance Time to Digital Converter, CERN/EP-MIC
(2004). https://cds.cern.ch/record/1067476/files/cer-002723234.pdf
Barrel Time-of-Flight (TOF) Detector
for the P̄ANDA Experiment at FAIR

N. Kratochwil1(B), M. Böhm6, K. Brinkmann3, M. Chirita1, K. Dutta7, K. Götzen4, L. Gruber1,2, K. Kalita7, A. Lehmann6, H. Orth4, L. Schmitt5, C. Schwarz4, D. Steinschaden1, S. Zimmermann1,3, and K. Suzuki1

1 Stefan Meyer Institute, Vienna, Austria
nicolaus.kratochwil@oeaw.ac.at
2 CERN, Geneva, Switzerland
3 Justus-Liebig Universität, Gießen, Germany
4 GSI, Darmstadt, Germany
5 FAIR, Darmstadt, Germany
6 Friedrich Alexander Universität, Erlangen, Germany
7 Gauhati University, Guwahati, India

Abstract. The P̄ANDA experiment at FAIR in Darmstadt will perform fixed-target experiments using a cooled beam of antiprotons in the momentum range from 1.5 to 15 GeV/c to study open questions in hadron physics. The core program comprises charmonium spectroscopy with precision measurements of masses, widths and decay branches, the investigation of possible exotic states, the search for modifications of charmed hadrons in nuclear matter, and γ-ray spectroscopy of hypernuclei.
The barrel TOF counter is located at 50 cm radial distance from the beam axis, covering polar angles from 22.5° to 150°. The detector is designed to achieve a time resolution below 100 ps (sigma), which allows good event separation as well as particle identification below the Cherenkov threshold. With the current prototype, a single-detector time resolution of < 60 ps was achieved.

Keywords: Scintillation counter · Semiconductor detector · Time-of-flight · Particle identification · Photomultiplier

1 Introduction

The P̄ANDA experiment [1] at the new international accelerator complex, the Facility for Antiproton and Ion Research (FAIR), will perform high-precision experiments in the strange and charm quark sector. To this end, a cooled beam of antiprotons with momenta from 1.5 GeV/c to 15 GeV/c is collided with a fixed proton or nuclear target, allowing hadron production and formation experiments with a luminosity of up to 2 × 10³² cm⁻² s⁻¹. The scientific program includes:




charmonium spectroscopy; the investigation of exotic configurations such as multiquark states, charmed hybrids and glueballs; the search for medium modifications of charmed hadrons in nuclear matter; and γ-ray spectroscopy of hypernuclear states [2].

2 Barrel TOF Detector


The Barrel TOF detector is a cylindrical detector at a radial distance of around 50 cm from the beam axis and serves as a precise timing (<100 ps) detector. The detector is placed in a high magnetic field (up to 2 T), so the use of non-magnetic materials is mandatory. The materials have to withstand the radiation accumulated during 10 years of operation, which amounts to Φeq ≈ 9 × 10¹⁰ neq/cm² in total.
The installation into P̄ANDA will take place in 2021, and the P̄ANDA experiment will start taking data with antiprotons in 2025.
P̄ANDA adopts a continuous readout without a hardware trigger. The event selection is done by an online software framework, in which the Barrel TOF delivers event sorting, online event-time calculation (t0) and particle identification for low-momentum charged particles [3–6]. Since there is no start-stop detection for a time-of-flight measurement, the time stamps of multiple tracks from the same event are used to determine the event time. With the momentum information from the tracking system, the event time is calculated for all possible particle species; the best conformity of the event times obtained from the individual time stamps gives the most probable value of t0, as schematically shown in Fig. 1 (left) [7]. For particles with low momentum an excellent separation can be achieved, while the identification of high-momentum particles will be done mainly by the Barrel DIRC detector [8,9].
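A toy version of this start-time extraction (Python; the variable names, numbers and the spread-based conformity measure are illustrative, not the PANDA software) scans the possible mass assignments and keeps the combination whose back-extrapolated creation times agree best.

import itertools, math

C = 0.299792458                                     # speed of light in m/ns
MASSES = {"e": 0.000511, "pi": 0.13957, "K": 0.49368, "p": 0.93827}

def creation_time(t_hit, path_m, p_gev, mass):
    beta = p_gev / math.sqrt(p_gev**2 + mass**2)
    return t_hit - path_m / (beta * C)              # back-extrapolate to the interaction point

def best_event_time(tracks):
    # tracks: list of (hit time [ns], path length [m], momentum [GeV/c]).
    # Try every mass assignment and keep the one with the smallest spread of creation times.
    best = None
    for combo in itertools.product(MASSES.values(), repeat=len(tracks)):
        t0s = [creation_time(t, l, p, m) for (t, l, p), m in zip(tracks, combo)]
        mean = sum(t0s) / len(t0s)
        spread = sum((t - mean) ** 2 for t in t0s)
        if best is None or spread < best[0]:
            best = (spread, mean, combo)
    return best[1], best[2]

# Example: three tracks from the same event (hypothetical numbers)
tracks = [(5.60, 1.6, 0.8), (5.95, 1.7, 0.6), (5.75, 1.65, 1.5)]
print(best_event_time(tracks))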

Fig. 1. (left) For each detected signal, the track creation times according to the different mass assumptions are calculated; the best conformity corresponds to the most probable mass configuration [7]. (right) Calculated separation power of the P̄ANDA Barrel TOF counter as a function of the transverse momentum of the particle.

3 Design
The Barrel TOF consists of 16 independent segments (super modules) arranged around the beam axis, as sketched in Fig. 2, covering polar angles from 22.5° to 140°. The sensitive volume consists of scintillator tiles, each of which is read out with four silicon photomultipliers (SiPMs) at each end. A super module comprises 120 scintillator tiles and 960 SiPMs as well as signal transmission lines embedded in a multilayer PCB. The front-end readout electronics (FEE) amplifies and digitises the signals from the SiPMs and transfers the data to the P̄ANDA computing nodes. It is located at the back end of the segment, where the hit rate is low.

Fig. 2. (left) Drawing of the whole Barrel TOF with the sub-structure of a pair of scintillator tiles. (right) Sketch of the circuit design of the super module (top) and a photo of a half-length prototype with one pair of scintillator tiles (bottom).

A single tile has dimensions of 87 × 29.4 × 5 mm³, with 4 SiPMs on each end for readout. A fast-timing plastic scintillator is used as the tile material. Photons are detected using 4 SiPMs with a 3 × 3 mm² sensitive area each. They are combined into a single channel to increase the sensitive area without increasing the number of readout channels [10]. There are different ways to connect the SiPMs: serial, parallel or hybrid, as illustrated in Fig. 3. With the parallel connection the bias voltage is common to all SiPMs, but the signal response is slower. With the serial connection the bias voltage increases linearly with the number of SiPMs, but the signal response is faster and gives more precise timing [11–13].

4 Performance Evaluation of Single Tile


To achieve the best time resolution, careful optimisation studies of material and geometry have been carried out, resulting in the current design. Wrapping with aluminised Mylar foil gives the best time resolution. For this design, using the EJ-232 plastic scintillator, a position-dependence measurement was carried out, yielding a mean time resolution of σt = 53.9 ps and a position resolution along the tile (x-direction) of 5.5 mm (sigma) [14].
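One way to relate the quoted time and position resolutions is via the arrival-time difference of the two tile ends; the sketch below (Python) assumes an effective light propagation speed in the tile, which is not a number given in the text, and equal, uncorrelated contributions from the two ends.

import math

V_EFF = 100.0   # assumed effective light propagation speed in the tile, mm/ns (illustrative, roughly c/3)

def hit_position_mm(t_left_ns, t_right_ns, v_eff=V_EFF):
    # Position relative to the tile centre from the arrival-time difference of the two ends
    return 0.5 * v_eff * (t_right_ns - t_left_ns)

def position_resolution_mm(sigma_mean_time_ps, v_eff=V_EFF):
    # With equal, independent ends: sigma(single end) = sqrt(2) * sigma(mean time),
    # sigma(time difference) = 2 * sigma(mean time), hence sigma(x) = v_eff * sigma(mean time)
    return v_eff * sigma_mean_time_ps * 1e-3

print(position_resolution_mm(53.9))   # ~5.4 mm with the assumed v_eff, the order of the quoted 5.5 mm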

Fig. 3. (left) Schematic of different ways of connecting the SiPMs of a single readout channel: (a) serial, (b) parallel. (right) Signal improvement with the serial connection for a larger number of SiPMs.

In order to verify the laboratory results, several beam-time campaigns were carried out. The best result with the current design was achieved in November 2016 with a 7 GeV/c beam containing protons, pions, electrons and kaons, giving a time resolution of σt = 58 ps.

5 Conclusion

The Barrel TOF detector provides a robust tool for particle identification at low momentum. With the current design, including the wrapping and four SiPMs on each end, an intrinsic time resolution of σt = 54 ps was achieved in the laboratory.

References
1. PANDA Collaboration. Technical Progress Report (2005)
2. PANDA Collaboration. Physics Performance Report for PANDA, Strong Interac-
tion Studies with Antiprotons. https://arxiv.org/abs/0903.3905v1
3. Akindinov, A., Alici, A., Agostinelli, A., et al.: Eur. Phys. J. Plus 128, 44 (2013).
https://doi.org/10.1140/epjp/i2013-13044-x
4. Adam, J., Adamov, D. et al.: ALICE Collaboration, Eur. Phys. J. Plus, 132: 99
(2017). https://doi.org/10.1140/epjp/i2017-11279-1
5. Eur. Phys. J. Plus, 128:44 (2013). https://doi.org/10.1140/epjp/i2013-13044-x
6. Nuclear Instruments and Methods 179, 477–485 (1981)
7. Sanchez-Lorente, A., Schmitt, L., Schmitt, C., Goetzen, K., Kisselev, A.: Motiva-
tion of the barrel time-of-flight detector for PANDA. PANDA Note (2011)
8. Schwarz, C., Britting, A., Bhler, P., Cowie, E., Dodokhov, V.K., Dren, M., et al.:
The Barrel DIRC of PANDA. J. Instrum. 7(02), C02008–C02008 (2012). https://
doi.org/10.1088/1748-0221/7/02/C02008
9. PANDA Collaboration. Barrel DIRC Technical Design Report (2016)
10. Cattaneo, P.W., Gerone, M.D., Gatti, F.: Development of high precision timing
counter based on plastic scintillator with SiPM readout. IEEE Trans. Nucl. Sci.
61, 2657 (2014)

11. Gruber, L.: Studies of SiPM photosensors for time-of-flight detectors within
P̄ANDA at FAIR. Vienna University of Technology (2014)
12. Böhm, M., Lehmann, A., Motz, S., Uhling, F.: Fast SiPM readout of the P̄ANDA
TOF detector. J. Instrum. 11(05), C05018 (2016)
13. Gruber, L., Brunner, S.E., Marton, J., Orth, H., Suzuki, K.: Barrel time-of-flight
detector for the PANDA experiment at FAIR. Nucl. Instr. Meth. A 824, 104–105
(2016)
14. PANDA Collaboration: Barrel Time-of-Flight Technical Design Report (2017)
Trigger and Data Acquisition Systems
Electronics, Trigger and Data Acquisition
Systems for the INO ICAL Experiment

S. Achrekar1, S. Aniruddhan3, N. Ayyagiri1, A. Behere1,


N. Chandrachoodan3, V. B. Chandratre1, Chitra3, D. Das1,
S. Dasgupta5, V. M. Datar5, U. Gokhale5, A. Jain1, S. R. Joshi5,
S. D. Kalmani5, N. Kamble1, S. Karmakar6, T. Kasbekar1, P. Kaur5,
H. Kolla1, N. Krishnapura3, P. Kumar3, T. K. Kundu6, A. Lokapure5,
M. Maity6, G. Majumder5, A. Manna1, S. Mohanan1, S. Moitra1,
N. K. Mondal4, P. M. Nair1, P. Abinaya3, S. Padmini1, N. Panyam5,
Pathaleswar5, A. Prabhakar3, M. Punna1, M. Rahaman6, S. M. Raut1,
K. C. Ravindran5, S. Roy6, S. Prafulla1, M. N. Saraf5,
B. Satyanarayana5(&), R. R. Shinde5, S. Sikder1, D. Sil5,
M. Sukhwani1, M. Thomas2, S. S. Upadhya5, P. Verma5,
and E. Yuvaraj5
1 Bhabha Atomic Research Centre, Trombay, Mumbai 400085, India
2 Electronics Corporation of India Limited, Veer Savarkar Marg, Mumbai 400028, India
3 Indian Institute of Technology Madras, IIT P.O., Chennai 600036, India
4 Saha Institute of Nuclear Physics, Bidhan Nagar, Kolkata 700064, India
5 Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005, India
bsn@tifr.res.in
6 Visva-Bharati, Santiniketan 731235, India

Abstract. The India-based Neutrino Observatory (INO) [1] has proposed the construction of a 50 kton magnetised Iron Calorimeter (ICAL) in an underground laboratory located in South India. The main aims of this now-funded project are to precisely study the atmospheric neutrino oscillation parameters and to determine the ordering of the neutrino masses [2]. The detector will deploy about 28,800 glass Resistive Plate Chambers (RPCs) of approximately 2 m × 2 m in area. An ICAL RPC unit with a signal pick-up strip pitch of 30 mm has 128 analog signals to be read out and processed: 64 each of positive and negative polarity. In total, about 3.6 million detector channels have to be instrumented. In this paper we present the design of the electronics, trigger and data acquisition systems of this ambitious and indigenous experiment, as well as the current status of their deployment.

Keywords: INO · ICAL · RPCs


1 Introduction

On receiving the physics trigger signal, ICAL's Data Acquisition (DAQ) system records the pattern of RPC pick-up strips hit by the charged particles produced in neutrino interactions, as well as their precise time of crossing the active detector elements. In addition, the DAQ system performs a number of slow control and monitoring functions in the background. The architecture of the ICAL electronics and DAQ systems (Fig. 1) is based on designating the RPC as the minimum standalone unit.

Fig. 1. RPC unit and architecture of ICAL electronics

2 Front-End Electronics

Analog front-end (AFE) boards, using indigenously designed 4-channel voltage amplifier and 8-channel leading-edge discriminator ASICs, are mounted on two orthogonal edges of the RPC unit. The boards were extensively tested and are being mass produced. In the meantime, boards using the NINO ASIC were also produced; single-ended signals from the RPC are converted into differential signals and fed to the NINO ASIC. The NINO-based AFE boards are being used in the RPC detector stacks at present.
The common processing electronics, called the digital front-end (DFE) module, is located at one corner of the RPC unit. The DFE comprises several functional blocks, such as a Time-to-Digital Converter (TDC), a strip-hit latch, a rate monitor, a pre-trigger generator, an ambient parameter monitor and the front-end control. A soft-core processor takes care of all the DAQ needs, the configuration of the front-end components as well as data transfers between the RPC unit and the back-end servers. A considerable part of the DFE module hardware, including the soft processor, is implemented in a high-end FPGA. The 125 ps TDC ASIC was developed by the collaboration. Another version of the DFE with an optical data interface has also been produced and is being tested (Fig. 2).

Fig. 2. AFE and DFE modules

3 Trigger System

The multi-level trigger system generates the global trigger signal based solely on event topology information. The trigger logic is defined as m × p/n, i.e. a trigger is generated when, out of a group of n consecutive layers, at least p layers have at least m channels each with simultaneous signals, as illustrated by the sketch below. The pre-trigger signals from the DFEs are fed to Signal Router Boards (SRBs), which bunch the signals and redistribute them to the Trigger Logic Boards (TLBs). The second-level trigger logic is implemented in the TLBs, and the boundary coincidences are resolved by the Global Trigger Logic Boards (GTLBs). The entire control of the trigger system, the monitoring of various signal rates, etc. is handled by the Trigger Control and Monitor (TCAM) module. Further, the Control and Monitoring (CAM) module provides the interface between the trigger and the backend data concentrator units.
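A software model of this m × p/n condition (Python; the hypothetical data layout uses one hit count per layer) makes the definition concrete.

def mpn_trigger(hits_per_layer, m, p, n):
    # hits_per_layer: list of simultaneous strip hits in each successive RPC layer.
    # Returns True if, within any window of n consecutive layers, at least p layers
    # have at least m hits each (the m x p/n topology condition).
    satisfied = [h >= m for h in hits_per_layer]
    for start in range(len(satisfied) - n + 1):
        if sum(satisfied[start:start + n]) >= p:
            return True
    return False

# Example: a 1 x 3/4 trigger on a 10-layer slice of the detector
print(mpn_trigger([0, 0, 1, 2, 1, 0, 1, 0, 0, 0], m=1, p=3, n=4))   # True (layers 3 to 6)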
The calibration and auxiliary (CAU) services sub-system interfaces mainly with the trigger system and the DFE boards; it distributes the global clocks and trigger signals and measures the time offsets caused by the disparate signal path lengths. The local time of flight measured in each DFE is then translated to a common timing reference by adding the respective offsets for the reconstruction of particle trajectories. The CAU unit was tested extensively on the RPC stacks and was found to provide offset corrections to better than 100 ps. The Real Time Clocks (RTCs) of all the DFEs are pre-loaded with the epoch time and synchronized to within a microsecond using the PPS signal and the global clock. The events are built in the backend using these RTC time stamps.

4 Back-End Data Acquisition

The data acquired by the DFE on receipt of a trigger signal are dispatched to the backend data concentrator hosts using the DFE network interface, passing through segmented layers of data network hubs and switches. For the purpose of communication and data transfer between the DFE and the back-end, the former is configured as a network element with a unique IP address. Thus, the entire ICAL detector becomes a large, suitably segmented Ethernet LAN, with the RPC units as LAN hosts together with the back-end DAQ computers.
The back-end DAQ hardware, consisting of multiple data concentrator servers, receives event and monitoring data from the DFE modules. The event data are compiled by the event builder. Finally, the back-end system performs various quick quality checks on the

data in addition to providing user interfaces, slow control and monitoring, event dis-
play, data archival and so on.
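The event building can be pictured as grouping DFE data packets by their RTC time stamps within a coincidence window; a simplified sketch (Python, with an illustrative window and data format, not the actual backend software) is given below.

def build_events(packets, window_ns=200):
    # packets: list of (rtc_timestamp_ns, rpc_id, payload) from the data concentrators.
    # Sort by time stamp and group packets that fall within window_ns of the first
    # packet of the group; each group becomes one event.
    events, current = [], []
    for pkt in sorted(packets, key=lambda x: x[0]):
        if current and pkt[0] - current[0][0] > window_ns:
            events.append(current)
            current = []
        current.append(pkt)
    if current:
        events.append(current)
    return events

packets = [(1000, "RPC_A", None), (1050, "RPC_B", None), (5000, "RPC_C", None)]
print(len(build_events(packets)))   # 2 events with a 200 ns window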

5 Power Supplies

RPCs require a variable high-voltage supply of up to 12 kV to generate a uniform electric field across their gas volume. It was decided to generate the HV locally on each RPC, as this eliminates the need for high-voltage cables and connectors, thereby easing space constraints in the cable trays and reducing the overall cost. However, the modules have to be very compact and must not generate electromagnetic interference in the RPC or in the other signal-processing electronics.
The HV DC-DC converter, shown in Fig. 3, is based on a current-fed resonant Royer circuit. The stepped-up secondary output is further raised by a multi-stage Cockcroft-Walton multiplier. The output voltage is sensed through a high-input-impedance buffer for regulation and monitoring purposes. An error amplifier adjusts the drain current of the power MOSFET to regulate the inverter output swing according to the feedback demand.

Fig. 3. Block diagram of the HV DC-DC module and its prototype

The low-voltage power supplies for the analog and digital front-end boards, as well as for the HV DC-DC module on board the RPC unit, are individually supplied, controlled and monitored through a central low-voltage power distribution and monitoring sub-system.

6 Status and Outlook

The entire design of the baseline ICAL detector electronics has been completed and reviewed. The ASICs, most of the circuit boards, as well as the firmware and software have already been produced. Prototyping, benchmarking and limited production of the various components and modules have also been completed, and the relevant technologies and vendors have been identified. The electronics is already being used to read out several ICAL prototype detector stacks, including the mini-ICAL which is currently under construction. Integration of the

electronics, especially the analog and digital front-end onboard the RPC detector
module posed a big challenge. This along with extensive cable routing, spreading over
the detector of 48 m  16 m  16 m in volume is being carefully addressed.

References
1. INO Homepage. http://www.ino.tifr.res.in. Accessed 26 June 2017
2. Kumar, A., et al.: Physics potential of the ICAL detector at the India-based neutrino
observatory (INO). Pramana J. Phys. 88(5), 1–72 (2017)
Track Finding for the Level-1 Trigger
of the CMS Experiment

Tom James(B)
on behalf of the TMTT group

Imperial College London, London, England


tom.james@cern.ch

Abstract. A new tracking system is under development for the CMS


experiment at the High Luminosity LHC (HL-LHC), located at CERN. It
includes a silicon tracker that will correlate clusters in two closely spaced
sensor layers, for the rejection of hits from low transverse momentum
tracks. This will allow tracker data to be read out to the Level-1 trigger
at 40 MHz. The Level-1 track-finder must be able to identify tracks with
transverse momentum above 2–3 GeV/c within latency constraints. A
concept for an FPGA-based track finder using a fully time-multiplexed
architecture is presented, where track candidates are identified using a
Hough Transform, and then refined with a Kalman Filter. Both steps
are fully implemented in FPGA firmware. A hardware system built from
MP7 MicroTCA processing cards has been assembled, which demon-
strates a realistic slice of the track finder in order to help gauge the
performance and requirements for a final system.

1 The High-Luminosity LHC and the New CMS Tracker

The Compact Muon Solenoid (CMS) detector [1] at the CERN Large Hadron Collider (LHC) [2] is a general-purpose detector, designed to investigate a wide range of physics. Beginning in 2026, the LHC will operate at an upgraded luminosity of 5–7.5 × 10³⁴ cm⁻² s⁻¹, corresponding to 140–200 simultaneous collisions (pileup) at 40 MHz [3]. This will enable the collection of 3000 fb⁻¹ of data by 2035.
Due to accumulated radiation damage, it will be necessary to completely replace the CMS outer silicon tracker during the long LHC shutdown preceding this upgrade. Studies have shown that the use of tracker information in the Level-1 (L1) trigger will be required to keep the accept rate below the target of 750 kHz without significant degradation in physics sensitivity. A new silicon tracker has therefore been designed to allow a sub-set of its data to be read out at 40 MHz [4]. It exploits the fact that tracks with high transverse momentum (pT) are often signs of interesting physics. Novel tracking modules (pT-modules) are being developed that use two closely spaced silicon sensors to select, for Level-1 triggering, only pairs of hits (stubs) compatible with a pT greater than 2 GeV/c.

2 The Track Finding Processor


Tracks must be reconstructed from the approximately 20,000 stubs per event (at 200 pileup) within 4–5 µs if they are to be used in the Level-1 trigger decision. We propose a highly time-multiplexed track finder, using commercially available FPGAs (rather than custom ASICs) mounted on high-bandwidth processing boards. In this design, the tracker back-end consists of two layers, with the Track Finding Processor (TFP) cards separated from the Data, Trigger and Control (DTC) cards, which are responsible for the time-multiplexing of the incoming data. The design builds on the success of the CMS Phase I calorimeter trigger [5], which uses a similar time-multiplexed approach. The time-multiplex period, T, is dictated by the data rates and the available bandwidth out of the detector and into the TFP processing board. As depicted in Fig. 1, we propose that each TFP processes only one nonant of the tracker in φ (motivated by a realistic tracker cabling scenario), |η| < 2.4 (where η is the pseudorapidity), and one out of every 20 events. The coordinate system is described in [1].

Fig. 1. Preliminary system architecture, whereby DTCs in two neighbouring detector


nonants time-multiplex and duplicate stub data across processing nonant boundaries
before transmission to the TFP boards.

2.1 The Hough Transform

The Hough transform (HT) is a widely used feature-extraction technique [6]. It can be used to find imperfect instances of objects within a space, for example tracks within the map of tracker hits. FPGA firmware has been developed that uses a two-dimensional HT to search for primary tracks in the r–φ plane [7]. The parametrisation (q/pT, φ0) is used, where φ0 is the azimuthal coordinate of the track at r = 0, q is the electric charge, and q/pT is the free parameter. Within the TFP, the stubs are sorted into sub-sectors in both η and φ, and a Hough array is constructed for each sub-sector. Within each array, stubs are binned, at each q/pT column, into the corresponding φ0 row, based on the straight-line formula φ0 = φstub + 0.0015 r B q/pT, where r is the stub radius and B is the magnetic field strength. A set of stubs that intercept at a point can be considered a track candidate (after passing some additional checks). An optional feature of the algorithm is the ability to use the bend of the stub between the two sensors of the pT-modules to calculate a coarse pT estimate, which can be used to constrain the allowed q/pT range for each stub within the array. This bend filter has a configurable compatibility window, currently set to three standard deviations of the pT-estimate resolution. The firmware can also be configured to apply no requirement on the bend information, the consequence being a four-fold increase in the number of found track candidates, which must then be filtered out by the more precise downstream track fitting.
In firmware, each Hough array (a fully pipelined firmware block, which receives one stub per clock cycle) consists of a number of columns connected to a single bookkeeper. The bookkeeper receives the stubs and propagates them to each column in turn. Within each column, the corresponding φ0 of the stub is calculated, and a register is set to mark a hit in the relevant tracker layer in the corresponding array cell. After each time-multiplexed period, all stubs have been recorded in the Hough array, and the track candidates within cells marked with the minimum required number of tracker layers (configurable between 4 and 5 layers for each array) are read out.
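A compact software model of the Hough-array fill (Python; the array granularity, field value and layer bookkeeping are illustrative and much simplified with respect to the firmware) is shown below: each stub marks, in every q/pT column, the φ0 rows its line crosses, and cells with stubs from enough distinct layers become track candidates.

import numpy as np

N_QPT, N_PHI = 32, 64                                           # illustrative array granularity
QPT_EDGES = np.linspace(-1.0 / 3.0, 1.0 / 3.0, N_QPT + 1)       # q/pT range for pT > 3 GeV
PHI_EDGES = np.linspace(-0.05, 0.05, N_PHI + 1)                 # phi0 range of one sub-sector (rad)
B = 3.8                                                         # magnetic field in tesla

def fill_hough(stubs, min_layers=5):
    # stubs: list of (r_cm, phi_rad, layer)
    layers_hit = [[set() for _ in range(N_PHI)] for _ in range(N_QPT)]
    for r, phi, layer in stubs:
        for iq in range(N_QPT):
            # phi0 of the stub line at the two q/pT edges of this column
            lo = phi + 0.0015 * r * B * QPT_EDGES[iq]
            hi = phi + 0.0015 * r * B * QPT_EDGES[iq + 1]
            lo, hi = min(lo, hi), max(lo, hi)
            ip_lo = np.searchsorted(PHI_EDGES, lo, side="right") - 1
            ip_hi = np.searchsorted(PHI_EDGES, hi, side="right") - 1
            for ip in range(max(ip_lo, 0), min(ip_hi, N_PHI - 1) + 1):
                layers_hit[iq][ip].add(layer)
    return [(iq, ip) for iq in range(N_QPT) for ip in range(N_PHI)
            if len(layers_hit[iq][ip]) >= min_layers]

# Toy check: stubs of a single pT = 5 GeV track on six layers
qpt_true, phi0_true = 1.0 / 5.0, 0.01
stubs = [(r, phi0_true - 0.0015 * r * B * qpt_true, layer)
         for layer, r in enumerate([25.0, 37.0, 52.0, 69.0, 89.0, 108.0])]
print(fill_hough(stubs))            # prints the cells (duplicates included) compatible with the track

Recording the set of distinct layers per cell, rather than a simple stub count, mirrors the minimum-layer requirement described above.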

2.2 The Kalman Filter

The Kalman Filter (KF) is a commonly used iterative algorithm that can turn
a series of measurements containing inaccuracies and noise into estimates of
unknown variables [8]. In this scenario, the measurements correspond to the stubs
associated with the track-candidates produced by the HT, and the unknown vari-
ables to the final track helix parameters. A combinatorial KF has been imple-
mented in FPGA firmware. The initial estimates (seed) of the track parameters
are taken from the corresponding HT cell. The Kalman state (current estimate
of the unknown parameters) is then updated, one stub at a time in increasing
radius, after each cycle using a weighted average of the prior state and the new
measurement. The χ2 of the track parameters are also calculated at each stage,
and this information, along with the number of layers missing hits, is used to
both reject false track candidates, and remove incorrect stubs from tracks. This
process is repeated until all stubs on the track are added, or a configurable time-
out is reached. Following the KF, a simple Duplicate Removal (DR) algorithm
removes HT candidates with duplicated helix parameters.
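The weighted-average update at the heart of the combinatorial KF can be illustrated with a deliberately simplified two-parameter track model. The firmware fits the full helix parameters; the measurement model, noise values and names below are assumptions made only for this sketch.

    // Simplified track model phi(r) = phi0 + k*r, seeded from the HT cell.
    struct KalmanState {
      double phi0 = 0.0, k = 0.0;         // track parameters
      double p00 = 1.0, p01 = 0.0,        // covariance matrix P (kept symmetric)
             p10 = 0.0, p11 = 1.0;
      double chi2 = 0.0;
    };

    // One Kalman update with a single stub: measurement m = phi at radius r, variance sigma2.
    // The measurement model is m = H * x with H = (1, r).
    void kalmanUpdate(KalmanState& s, double r, double m, double sigma2) {
      const double residual = m - (s.phi0 + s.k * r);
      // P * H^T and innovation variance S = H P H^T + sigma2
      const double ph0 = s.p00 + s.p01 * r;
      const double ph1 = s.p10 + s.p11 * r;
      const double S   = ph0 + ph1 * r + sigma2;
      // Kalman gain K = P H^T / S; the new state is a weighted average of prior and measurement.
      const double k0 = ph0 / S, k1 = ph1 / S;
      s.phi0 += k0 * residual;
      s.k    += k1 * residual;
      // Covariance update P <- (I - K H) P  (form valid for the symmetric P used here)
      s.p00 -= k0 * ph0;  s.p01 -= k0 * ph1;
      s.p10 -= k1 * ph0;  s.p11 -= k1 * ph1;
      // chi2 contribution of this stub, used to reject false candidates or drop bad stubs.
      s.chi2 += residual * residual / S;
    }

Repeating this update stub by stub in increasing radius, and monitoring the accumulated χ², mirrors the iterative procedure described above.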

3 Demonstrator Hardware and Software

A hardware demonstrator has been constructed to prove the feasibility of the proposed TFP and the functionality of the firmware described above. Rather
than a nonant segmentation in φ, the demonstrator uses an octant segmentation,
as it was built based on a previous proposal. It is capable of processing the data
from one octant in φ, |η| < 2.4, and one out of every thirty-six 40 MHz events
(equivalent to T = 36 events). More details on the demonstrator firmware can
be found in [9].

Fig. 2. The TFP demonstrator consists of five MP7 boards, each indicated by a sep-
arate block in the diagram. Source and Sink boards represent the DTCs, and the L1
trigger, respectively. The Geometric Processor (GP) assigns stubs to the 36 sub-sectors
per octant (two in φ and 18 in η), routing those associated with a given sub-sector to
dedicated output links.

The building blocks of the hardware demonstrator are the Master Proces-
sor 7 (MP7) [10], FPGA-based, data-stream processing double-width AMC
cards, equipped with a Xilinx Virtex-7 690 [11] FPGA, and 72 optical trans-
mitters/receivers running at 10 Gbps each way. Eleven MP7 cards are installed
in a MicroTCA crate at CERN. The MP7 boards are optically cabled to match
the requirements shown in Fig. 2, where each MP7 board is represented by a
block.
The demonstrator software package allows for the direct comparison of per-
formance between the hardware demonstrator output and the emulator (which
is based on the CMS software framework (CMSSW)), using Monte Carlo physics
samples that emulate the conditions at the HL-LHC. Although the demonstrator
processes one tracker octant at a time, the firmware is agnostic to the choice of
octant, making it possible to take data for all octants sequentially.

4 Demonstrator Results and Performance


The hardware demonstrator has been shown to be highly efficient at track finding
in simulated events containing a tt̄ decay and 200 pileup events. Figure 3 shows
the track finding efficiency in this scenario (defined in relation to generator-level
primary tracks with pT ≥ 3 GeV, generating stubs in at least four tracker layers
within |η| < 2.4) as a function of the simulated track η and pT , for both the
emulation software and the hardware demonstrator output. The average tracking
efficiency for all particles generated in the tt̄ decay is 94.4%, and 97.1% when
selecting on muons from the decay. The resolution of the track parameters, as
shown in Fig. 3, is approximately 1% in relative pT and 1 mm in z0 , in the barrel.
At the end of the track finding and duplicate removal chain, an average of 70
tracks per event remain. On average, 20 % of these are false tracks arising from
combinatorics, and 4 % are duplicates of genuine tracks.
In addition to tracking performance, an important goal of the demonstrator
was to prove the feasibility of track finding within the expected 4 µs latency
budget. Table 1 shows the latency for each step of the demonstrator chain. The
processing latency of each stage is fixed, and is therefore independent of pileup
or event occupancy.

Fig. 3. Tracking efficiency as measured in both the hardware demonstrator and in emulation, determined using 2000 tt̄ events each with 200 pileup events superimposed (top). Relative pT and z0 resolution, as measured in the emulation software, for single isolated muons, for three different ranges of pT (bottom).

Table 1. Processing latency for each of the demonstrator components, including the
four stages of serialisation/de-serialisation (SerDes), and the optical transmission delays
between each board.

SerDes GP HT KF DR Total
Latency [ns] 545 251 1025 1620 38 3479

5 FPGA Resources and Next Generation Hardware


The next generation of generic FPGA processing boards for CMS is in development. An example is the MPUltra PCIe card, which was developed in part to test the capabilities of the Kintex Ultrascale FPGA family [12], which supports transceivers running at up to 16.3 Gb/s. The overall resource usage of the demonstrator TFP, as shown in Table 2, suggests that the functionality could feasibly fit within two Kintex Ultrascale 115 FPGAs.

Table 2. Resource usage for each component of the TFP demonstrator, alongside
available resources for some compatible Xilinx FPGAs [12].

LUTs [10³] DSPs FFs [10³] BRAM (36 Kb)
GP & HT 365 3360 504 1410
KF & DR 398 5112 316 1776
Demonstrator total 763 8472 820 3186
Virtex-7 690 433 3600 866 1470
Kintex Ultrascale 115 633 5520 1266 2160

6 Summary

A hardware demonstrator, based on currently available FPGA processing boards, has been developed to prove the feasibility of track finding for CMS at the HL-
LHC. It is able to find tracks with around 95 % mean efficiency in challenging
high-occupancy events (such as tt̄ events with 200 pileup superimposed) within
4 µs. The results shown prove that an L1 track-finder using only FPGA process-
ing is a feasible and safe solution for CMS. They also show that new advances in
technology are not required to achieve the desired performance, but whenever
possible they will be exploited to improve system cost and latency.

References
1. Collaboration, C.M.S.: The CMS experiment at the CERN LHC. JINST 3, S08004
(2008). https://doi.org/10.1088/1748-0221/3/08/S08004
2. The CERN Large Hadron Collider: Accelerator and Experiments Collaboration.
LHC Machine, JINST 3, S08001 (2008). https://doi.org/10.1088/1748-0221/3/08/
S08001
3. The CERN Large Hadron Collider: Accelerator and Experiments Collaboration,
Apollinari, G., et al.: High-Luminosity Large Hadron Collider (HL-LHC): Prelimi-
nary Design Report. CERN, Geneva (2015). https://doi.org/10.5170/CERN-2015-
005
4. Collaboration, C.M.S.: CMS Technical Design Report for the Phase-2 Tracker
Upgrade. Technical report CERN-LHCC-2017-xxx. CMS-TDR-xxx, Geneva (2017)
5. CMS Collaboration: CMS Technical Design Report for the Level-1 Trigger
Upgrade. Technical report CERN-LHCC-2013-011. CMS-TDR-12, June 2013
6. Hough, P.V.C.: Method and means for recognizing complex patterns. US Patent
3,069,654, December 1962
7. Amstutz, C., et al.: An FPGA-based track finder for the L1 trigger of the CMS
experiment at the high luminosity LHC. In: 2016 IEEE-NPSS Real Time Confer-
ence (RT), pp. 1–9, June 2016. https://doi.org/10.1109/RTC.2016.7543102
8. Fruhwirth, R.: Application of Kalman filtering to track and vertex fitting.
Nucl. Instrum. Meth. A262, 444–450 (1987). https://doi.org/10.1016/0168-
9002(87)90887-4

9. Aggleton, R., et al.: An FPGA based track finder for the L1 trigger of the CMS
experiment at the high luminosity LHC. CMS-NOTE-2017-XXX; CERN-CMS-
NOTE-2017-XXX- Geneva: CERN (To be published)
10. Compton, K., et al.: The MP7 and CTP-6: multi-hundred Gbps processing boards
for calorimeter trigger upgrades at CMS. JINST 7, C12024 (2012). https://doi.
org/10.1088/1748-0221/7/12/C12024
11. Xilinx: 7 Series FPGAs Overview, Product Specification. DS180 (v1.17), May 2015
12. Xilinx: UltraScale Architecture and Product Data Sheet: Overview, DS890 (v2.11),
February 2017. https://www.xilinx.com/support/documentation/data sheets/
ds890-ultrascale-overview.pdf
A Multi-chip Data Acquisition System
Based on a Heterogeneous
System-on-Chip Platform

Adrian Fiergolski(B)
on behalf of the CLIC detector and physics (CLICdp) collaboration

The European Organization for Nuclear Research, Geneva, Switzerland


Adrian.Fiergolski@cern.ch

Abstract. The Control and Readout Inner tracking BOard (CaRIBOu)


is a versatile readout system targeting a multitude of detector prototypes.
It profits from the heterogeneous platform of the Zynq System-on-Chip
(SoC) and integrates in a monolithic device front-end FPGA resources
with a back-end software running on a hard-core ARM-based processor.
The user-friendly Linux terminal with the pre-installed DAQ software
is combined with the efficiency and throughput of a system fully imple-
mented in the FPGA fabric. The paper presents the design of the SoC-
based DAQ system and its building blocks. It also shows examples of the
achieved functionality for the CLICpix2 readout ASIC.

Keywords: Electronics · DAQ · SoC · Zynq

1 Motivation
The development of pixel detectors for future high-energy physics experiments
often implies the use of a custom data acquisition (DAQ) system for a given
device. As a consequence, the process of a new chip characterisation often
involves an extra effort associated with commissioning and debugging of new
hardware, firmware and software of the accompanying DAQ system. Although
from a functional point of view the DAQ systems are very similar, different
implementations and the lack of cross-compatibility imply some learning stage
for the pixel detector user. Moreover, each new DAQ system often requires some
integration effort with a test-beam infrastructure. All those aspects delay the
pixel detector studies.
The Control and Readout Inner tracking BOard (CaRIBOu) addresses this
issue. It is a versatile modular readout system supporting by design a wide range
of current and future devices. Integration of new devices requires minimal effort.
Since CaRIBOu targets laboratory and high-rate test-beam measurements, the
system combines flexibility and high-performance requirements. The project is maintained by a collective effort of a user community. All hardware, firmware and
software design files are open source and shared through a public repository [1].

2 CaRIBOu

2.1 Hardware

The hardware architecture of the CaRIBOu DAQ system [4] is presented in Fig. 1. The detector devices are placed on application specific chipboards. Cur-
rently, CaRIBOu supports CLICpix2, C3PD [3], FEI4 [4] and H35Demo [4] chip-
boards. In addition, there is ongoing work on the integration of the SOI-Cracow
[5] device. The chipboards provide minimal functionality like routing between
the SEAF 320 Pin connector [6] and the chip, and convenient test points. If
needed, the chipboards are equipped with specific buffers, e.g. LVDS-CML con-
verters. It makes their design straightforward and cost effective in case of a small
volume production.
The chipboard is directly connected to the interface Control and Readout
(CaR) board. CaR is a suitable solution for various target chips providing a
hardware environment for operating them. The CaR board supports many volt-
age levels and communication standards, including full-duplex serial links. More-
over, as it is equipped with a set of Analogue-to-Digital Converters (ADCs), the
CaR board provides monitoring capabilities, thus enabling measurements with-
out the use of dedicated laboratory equipment. The direct connection with the
chipboard allows for all required voltage regulators, ADCs, bias sources and
the clock generator to be placed close to the chip. The full specification of the
CaR board is provided elsewhere [7].
The core of the system is a Xilinx Zynq System-on-Chip (SoC) device hosted
by the ZC-706 evaluation kit [8]. It combines a dual-core ARM Cortex-A9 pro-
cessor and a Kintex-7 Field Programmable Gate Array (FPGA) fabric connected
through a silicon interposer. In order to prevent radiation damage from sources
or particle beams and facilitate mounting, the CaRIBOu hardware architecture
(Fig. 1) enables placement of the evaluation kit at a safe distance from the CaR
board through a ∼50 cm long FMC cable. The ZC-706 board supports up to two

Fig. 1. The hardware architecture of the CaRIBOu DAQ system. From Ref. [2]. Pub-
lished with permission by CERN.

interface boards. Being a commercial evaluation kit makes it an easily available, cost-effective and rapid solution for small volumes.
In the CaRIBOu application, the ARM processor runs the Linux operating
system and the actual DAQ software. The board is accessed through a Secure
Shell (ssh) connection (1 Gbps Ethernet) or UART. The processor is efficient
enough to perform a prompt local data analysis enabling data quality monitor-
ing, calibration, etc. Data are then transferred through 1 Gbps (RJ45 connector).
In future versions a 10 Gbps (SFP+) Ethernet link can be supported. Moreover,
the user has the possibility to use other interfaces of the evaluation kit (USB,
SD card, PCIe), which are supported by the Linux kernel out of the box.

2.2 DAQ

The non-hardware part of the CaRIBOu DAQ consists of 3 components: a DAQ


software framework (Peary), a custom Linux distribution (Meta-caribou) and an
FPGA processing image (Peary-firmware). Its scheme is presented in Fig. 2. The
user application gains unified access to hardware using a hardware abstraction
layer (HAL) of Peary. The HAL translates its requests to various Linux device
driver calls which enable control of different interface controllers and hardware
components of the SoC platform (available in the ARM processor periphery or in
the FPGA fabric). Data from the application specific chip is transferred through
the FPGA fabric to the main memory from where it can be accessed by the
processor or pushed further to the Ethernet interface.
The main advantage of Peary is its user-friendly HAL enabling control of the
CaR board and the chip. The user can initialize its device with a single command.
Using human readable configuration files, the HAL will provide all necessary

Fig. 2. Scheme of the CaRIBOu DAQ. From Ref. [2]. Published with permission by
CERN.

bias voltages, currents and will set the operational conditions (clock, reset, etc.).
The HAL supports the operation of several devices in parallel. Integration of
new devices requires minimum code development and is mainly limited to data
processing before storing. A device manager of Peary enables dynamic linking
of this code based on a device name specified in the configuration files. The
Peary framework implements common modules such as a logging engine. It also
provides a command line interface (CLI) enabling sequential step-by-step control
of the chip. This feature proves to be useful at the commissioning stage of a new
device. Finally, by supporting DAQ clients, Peary facilitates integration with a
top level DAQ run control for combined runs with a test-beam telescope.
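As a rough illustration of the device abstraction just described, the sketch below shows how such a HAL-style interface could look in C++. The class and method names are invented for the example and are not the real Peary API; the comments indicate where the actual CaR-board and chip accesses would sit.

    #include <map>
    #include <memory>
    #include <string>

    // Illustrative device abstraction in the spirit of the Peary HAL (names are assumptions).
    class Device {
    public:
      virtual ~Device() = default;
      virtual void powerOn()   = 0;   // set voltages/currents from the configuration file
      virtual void configure() = 0;   // program clocks, resets and chip registers
      virtual void daqStart()  = 0;
      virtual void daqStop()   = 0;
    };

    class CLICpix2Device : public Device {      // hypothetical device-specific implementation
    public:
      explicit CLICpix2Device(const std::map<std::string, std::string>& config) : cfg_(config) {}
      void powerOn()   override { /* e.g. drive CaR-board regulators and DACs via I2C/SPI */ }
      void configure() override { /* e.g. write the pixel-matrix configuration over SPI */ }
      void daqStart()  override { /* enable the serial readout path in the FPGA fabric */ }
      void daqStop()   override {}
    private:
      std::map<std::string, std::string> cfg_;
    };

    // Simple factory mirroring the dynamic linking of device code by device name.
    std::unique_ptr<Device> makeDevice(const std::string& name,
                                       const std::map<std::string, std::string>& config) {
      if (name == "CLICpix2") return std::make_unique<CLICpix2Device>(config);
      return nullptr;                           // unknown device
    }
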
In order to facilitate the creation of a custom embedded Linux distribu-
tion, CaRIBOu provides a Meta-caribou layer to the Yocto project [11]. Yocto
is a Linux build framework supported by a large community of open-source
and industrial developers. Meta-caribou customizations set a console-only image
with a full-featured Linux system functionality. The image comes with popu-
lar packages (python, ssh, gdb, etc.) already pre-installed. Part of Meta-caribou
configures a Secondary Program Loader (SPL) which, at the boot stage, loads
FPGA firmware (bitfile from Peary-firmware) and sets the ARM CPUs in the
desired state. Moreover, the CaRIBOu layer provides the CaR specific hardware
description to the Linux kernel through device tree configuration. With a single
command, a user launches a build process which fetches the required sources,
cross-compiles them and generates an image of the operating system. Further,
using a script provided by Meta-caribou, the user prepares an SD card which is
eventually inserted into the SD socket of the ZC-706 evaluation kit.
The last component of the CaRIBOu DAQ is the Peary-firmware produc-
ing an FPGA image file. It is the only part of the CaRIBOu project utilising
proprietary tools (Xilinx Vivado [9]). The Peary-firmware design is handled by
the Xilinx IP Integrator [10]. As all autonomous firmware blocks are described
according to the IP-XACT standard [12], the tool is aware of their interfaces
and can facilitate their integration. Moreover, using the tool, the Peary-firmware
handles the configuration of the SoC by defining processor periphery settings,
address space and clock frequencies. The user has access to a library of Vivado
intellectual property (IP) cores (i.e. DMA, SPI, I2C, etc.) which often come with
Linux device drivers distributed through a Yocto layer and maintained by the
Xilinx community users. In addition, the Peary-firmware comes with a repository
of custom sub-modules (like a serial receiver), which facilitate the development of
application-specific blocks providing access to the chip. Some of the sub-modules
make use of the System Verilog language supported in Vivado. In case processor
interrupts are not required, the custom blocks can be accessed through a generic
/dev/mem Linux device driver. The repository contains software examples of
this feature. Finally, the Xilinx tools are used by the Peary-firmware also to
create the Hardware Description File (HDL) which is required by Meta-caribou
for Linux device tree and SPL generation.
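A minimal sketch of the generic /dev/mem access mentioned above is shown below. It mirrors the idea of the repository examples rather than reproducing them; the base address is a placeholder, error handling is reduced to a minimum, and the register is assumed to lie within a single page.

    #include <cstddef>
    #include <cstdint>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    // Read one 32-bit register of a custom FPGA block exposed through /dev/mem.
    uint32_t readRegister(off_t baseAddr, std::size_t byteOffset) {
      int fd = open("/dev/mem", O_RDWR | O_SYNC);
      if (fd < 0) return 0;                                  // error handling elided
      const std::size_t pageSize = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
      const off_t pageBase   = baseAddr & ~static_cast<off_t>(pageSize - 1);
      const off_t pageOffset = baseAddr - pageBase;
      void* map = mmap(nullptr, pageSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, pageBase);
      uint32_t value = 0;
      if (map != MAP_FAILED) {
        volatile uint32_t* regs =
            reinterpret_cast<volatile uint32_t*>(static_cast<char*>(map) + pageOffset);
        value = regs[byteOffset / sizeof(uint32_t)];         // memory-mapped read
        munmap(map, pageSize);
      }
      close(fd);
      return value;
    }
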

3 Example Use Case: CLICpix2 Readout


The CaRIBOu DAQ system was used to commission the CLICpix2 readout chip.
CLICpix2 is the successor of the CLICpix device [13]. It is produced in a 65 nm
process technology and comes with a larger pixel matrix (128 × 128) as well as
extended Time over Threshold (5 bits) and Time of Arrival (8 bits) counters
in every pixel. The readout protocol of CLICpix2 is based on an Ethernet-like
640 Mbps serial stream. The device is configured over the Serial Peripheral Inter-
face (SPI) bus running at 100 MHz. The advanced features of CLICpix2, like
data compression, frame encoding, test pulsing and power pulsing, put addi-
tional requirements on its DAQ system.
The CaRIBOu DAQ system was used in order to obtain preliminary
CLICpix2 results. The framework enabled readout of the chip using Zynq serial
transceivers and was efficient enough to perform software-based frame decoding
and decompression. The chip was properly controlled over the fast SPI proto-
col. CaRIBOu enabled adjustment and monitoring of the power provided by the
CaR board. The card resources were also used to generate clock signals used by
the chip. The properly operating Digital-to-Analogue Converters (DACs) of the CaR board allowed bias voltage and current source scans to be performed. Moreover,
local ADCs enabled voltage measurements without involving external laboratory
tools.

4 Summary

CaRIBOu integrates a front-end and back-end DAQ system. It is a modular solution targeting laboratory and high-rate test beam measurements. CaRIBOu
comes with versatile hardware, firmware and software. The system provides the
unique user experience of a regular fully-functional Linux terminal. Due to its
heterogeneous nature utilising the Zynq SoC, the framework enables rapid devel-
opment. The system remains fully flexible leaving the choice of programming
language to the user and providing out of the box access to all interfaces sup-
ported by the Linux kernel (Ethernet, USB, SD card, etc.). By allowing the user to focus on the application-specific features, CaRIBOu enables easy integration of new
devices. The system was successfully tested during the CLICpix2 commissioning.
The open-source nature of the project hosted by a set of user-friendly GitLab
repositories enables collaborative system development by a growing developer
community.

References
1. CaRIBOu project webpage. https://gitlab.cern.ch/Caribou/
2. Fiergolski, A.: A multi-chip data acquisition system based on a heterogeneous system-
on-chip platform. In: CLICdp-Conf-2017–2012. http://cds.cern.ch/record/2272077

3. Kremastiotis, I., Ballabriga, R., Campbell, M., Dannheim, D., Fiergolski, A.,
Hynds, D., Kulis, S., Peric, I.: Design and characterisation of a capacitively coupled
HV-CMOS sensor chip for the CLIC vertex detector, submitted to JINST journal.
arXiv:1706.04470
4. Liu, H., et al.: Development of CaRIBOu: a modular readout system for pixel
sensor R&D, in this proceedings
5. Bugiel, S., Dasgupta, R., Glab, S., Idzik, M., Moron, J., Kapusta, P.J., Kucewicz,
W., Turala, M.: Development of SOI pixel detector in Cracow. In: SOIPIX 2015.
arXiv:1507.00864
6. SEAF connector. https://www.samtec.com/products/seaf
7. Specification of the CaR board. https://gitlab.cern.ch/Caribou/Caribou-HW/
blob/master/README.md
8. Xilinx Zynq-7000 All Programmable SoC ZC-706 Evaluation Kit. https://www.
xilinx.com/products/boards-and-kits/ek-z7-zc706-g.html
9. Vivado Design Suite User Guide: Getting Started. https://www.xilinx.com/
support/documentation/sw manuals/xilinx2017 1/ug910-vivado-getting-started.
pdf
10. Vivado Design Suite User Guide: Designing IP Subsystems Using IP Inte-
grator. https://www.xilinx.com/support/documentation/sw manuals/xilinx2017
1/ug994-vivado-ip-subsystems.pdf
11. Yocto Project. https://www.yoctoproject.org/
12. 1685–2014 IEEE Standard for IP-XACT, Standard Structure for Packaging, Inte-
grating, and Reusing IP Within Tool Flows. https://standards.ieee.org/findstds/
standard/1685-2014.html
13. Pierpaolo, V., Augusto, N., Rafael, B.: Electronic Systems for Radiation Detec-
tion in Space and High Energy Physics Applications, CERN-THESIS-2013-156,
September 2013. http://cds.cern.ch/record/1610583
Acceleration of a Particle Identification
Algorithm Used for the LHCb Upgrade
with the New Intel® Xeon®-FPGA

Christian Färber1(B) , Rainer Schwemmer1 , Niko Neufeld1 ,


and Jonathan Machen2
1
Experimental Physics Department, CERN, 1211 Geneva 23, Switzerland
{Christian.Faerber,Rainer.Schwemmer,Niko.Neufeld}@cern.ch
2
DCG IPAG, EU Exascale Labs, Intel® Corporation, Badenerstrasse 549,
8048 Zürich, Switzerland
Jonathan.Machen@intel.com

Abstract. The LHCb experiment at the LHC will upgrade its detec-
tor by 2018/2019 to a ‘triggerless’ readout scheme, where all the read-
out electronics and several sub-detector parts will be replaced. The new
readout electronics will be able to read out the detector at 40 MHz. This
increases the data bandwidth from the detector down to the event filter
farm to 40 Tb/s, which also has to be processed to select the interesting
proton-proton collisions for later storage. The architecture of such a
computing farm, which can process this amount of data as efficiently as
possible, is a challenging task and several compute accelerator technolo-
gies are being considered for use inside the new event filter farm.
In the high performance computing sector more and more FPGA com-
pute accelerators are used to improve the compute performance and
reduce the power consumption (e.g. in the Microsoft Catapult project
and Bing search engine). Also for the LHCb upgrade, the usage of an
experimental FPGA accelerated computing platform in the event build-
ing or in the event filter farm (trigger) is being considered and therefore
tested. This platform from Intel® hosts a general Xeon® CPU and a
high performance Arria® 10 FPGA inside a multi-chip package linked
via a high speed and low latency link. On the FPGA an accelerator is
implemented. The FPGA has cache-coherent memory access to the main
memory of the server and can collaborate with the CPU.
A computing intensive algorithm to reconstruct Cherenkov angles for
the LHCb RICH particle identification was successfully ported to the
Intel® Xeon®-FPGA platform and accelerated. The results show that
the Intel® Xeon®-FPGA platforms, which are built in general for high
performance computing, are also very interesting for the High Energy
Physics community.

This work is done in collaboration with Intel® Corporation in the High-Throughput Computing Collaboration (HTCC), and the authors would like to thank Intel® Corporation for the support which made this work possible.

1 The Intel® Xeon®-FPGA

The Intel® Xeon®-FPGA prototype is a server machine which hosts an Intel® Xeon® E5-2600 v4 CPU and an Intel® Arria® 10 GX 1150 FPGA (see Fig. 1).
In the FPGA, 427,200 Adaptive Logic Modules (ALMs) are available, and the FPGA hosts 1,708,800 registers and 1,518 DSP blocks. These digital signal processor (DSP) blocks are crucial for any algorithm using floating-point calculations. They are hardened floating-point add/multiply blocks, which can add or multiply two floats in one clock cycle, and their implementation needs no additional ALMs, unlike for the Intel® Stratix® V FPGA. Between the CPU and the FPGA, a cache-coherent QPI bus connects the two chips with a high-bandwidth and low-latency interconnect. This makes tight collaboration of FPGA and CPU on the same data in main memory possible.
This is a real advantage of the Intel® Xeon®-FPGA versus standard PCIe FPGA accelerator cards, because a PCIe FPGA card has to transfer the data from the main memory to its local on-board memory first, process the data with the FPGA, and copy the results from the local memory back to the main memory. This reduces the filling of the FPGA pipelines dramatically [4], and makes such cards sensible only for iteration-heavy or caching-heavy algorithms.
GPUs and CPUs have a higher power consumption than FPGA accelerators [4], which makes FPGA acceleration very interesting in the metric of performance per Joule, especially for data centres with a large number of servers, where the cost of energy and cooling makes a significant contribution to the total costs. For example, the future LHCb computing farm is limited to 2 MW by the available cooling capacity, so not only the raw performance but also the performance per Joule is important.

Fig. 1. Intel® Xeon®-FPGA

The algorithms running on the FPGA can be written in Verilog and soon also in OpenCL.

2 Test Case: Sorting


The first test case for the new Intel® Xeon®-FPGA was the implementation of a sort algorithm for arrays of integer numbers on the FPGA. Sorting on an FPGA is interesting because, on a CPU, sorting an n-element array scales with n × log(n), whereas on a pipelined FPGA design the time depends only on the clock frequency for a large number of arrays. The bottleneck is that the amount of resources on the FPGA scales with n² [5]. The block was tested with arrays filled with random numbers, and the number of arrays was varied from 1 up to 32,000.

Fig. 2. Results of INT array sorting.

The achieved speed-up for sorting on the Intel® Xeon®-FPGA compared to the single-threaded CPU, using "Quicksort", is a factor of 117 for a large number of arrays (>4000) (see Fig. 2).
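A software model can illustrate why the pipelined approach behaves this way. This sketch is illustrative only and is not how the firmware block is described in the text: an odd-even transposition network sorts an n-element array in n fixed compare-exchange stages, so the latency per array is constant while the number of comparator cells, and hence the FPGA resource usage, grows roughly as n².

    #include <utility>
    #include <vector>

    // Odd-even transposition network: n stages of fixed compare-exchange operations.
    // In hardware each stage is one pipeline step, so one sorted array emerges per clock
    // cycle once the pipeline is full, at the price of roughly n^2/2 comparator cells.
    void oddEvenTranspositionSort(std::vector<int>& a) {
      const std::size_t n = a.size();
      for (std::size_t stage = 0; stage < n; ++stage) {
        for (std::size_t i = stage % 2; i + 1 < n; i += 2) {
          if (a[i] > a[i + 1]) std::swap(a[i], a[i + 1]);   // one comparator cell
        }
      }
    }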

3 Cherenkov Angle Reconstruction


The RICH photon reconstruction is used for particle identification and is crucial for
the LHCb physics program. In the current version the RICH part of the trigger
software takes too much time to be processed for every proton-proton collision
(event). So it is a good candidate to be accelerated with the new Intel® Xeon®-FPGA. For each event hundreds of particle tracks are measured, each of them
creates several Cherenkov photons. To find the corresponding Cherenkov photon
cone to every particle track the same calculation has to be done for every photon
found.
Charged particles travelling faster than the speed of light in a medium emit Cherenkov radiation. The photons are emitted in a cone of a certain angle
around the particle momentum vector, this angle is called Cherenkov angle. The
angle is correlated with the particle speed β and if the energy of the particle is
also known the particle can be identified. To calculate the Cherenkov angle a
quartic equation has to be solved, a cube root, a rotation matrix, and several
cross and scalar products have to be calculated [3].
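For context, the physics relation that links the reconstructed angle to the particle velocity is simple, even though the geometrical reconstruction itself requires the quartic solution and rotations mentioned above. The sketch below only spells out this relation; the refractive index is a free parameter here, not a value taken from the paper.

    #include <cmath>

    // Cherenkov condition: photons are emitted at cos(theta_c) = 1 / (n * beta),
    // so measuring theta_c (together with the particle momentum) identifies the particle.
    double cherenkovAngle(double beta, double refractiveIndex) {
      const double cosTheta = 1.0 / (refractiveIndex * beta);
      if (cosTheta > 1.0) return -1.0;     // below threshold: no Cherenkov light emitted
      return std::acos(cosTheta);          // emission angle in radians
    }
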
The algorithm was implemented in a 259 clock cycle long pipeline, written in
Verilog. The resource usage of the Arria® 10 FPGA after the Quartus synthesis optimization is shown in Table 1. The fast interface already uses 18% of the
FPGA ALMs, and after the optimization the whole design takes 32% of all
the ALMs. Only 15% of the DSP blocks are used to implement all floating point
calculation blocks needed for one photon pipeline and in total 12% of all registers
are used. The design runs at 200 MHz, which makes a calculation for a single
photon within 5 ns possible if the pipeline is completely filled.

Table 1. FPGA resource usage for Cherenkov Angle reconstruction

FPGA resource type FPGA resource used [%] For interface used [%]
ALMs 32 18
DSPs 15 0
Registers 12 5

The results are shown in Fig. 3. For a small number of photons the Xeon®
CPU is faster due to the large latency of the photon pipeline on the FPGA. The
break even is reached at roughly 200 photons. The time for the CPU version
rises linearly, while the time for the FPGA version stays constant up to 8,000 photons; for more photons the time for the FPGA version also rises linearly. For larger numbers of photons the time ratio between CPU and FPGA is a factor of 20 up to 35. On average, the photon pipeline processes a new photon only every
second clock cycle. The bottleneck of the acceleration is the bandwidth between
CPU and FPGA. In theory, using a higher bandwidth and using all FPGA
resources for 5 photon pipelines, an acceleration factor of roughly 300 would
be possible. This would be the limit for the Arria® 10 FPGA used. To reduce the difference between theory and measured acceleration, caching is currently being tested to increase the reuse of the data on the FPGA. This is possible for the Cherenkov angle algorithm because many photon hits are combined with many particle tracks, and for all the combinations the same calculation is processed.

Fig. 3. Results of Cherenkov angle reconstruction.



4 Summary
The LHCb experiment will be upgraded in 2018–2019 to make a much more flexible 40 MHz detector readout possible. Afterwards no hardware trigger will be used anymore; only a completely software-based trigger will be implemented to select the interesting proton-proton collisions. The triggering will happen on a large Event Filter farm, which has to process and filter an input bandwidth of 40 Tbit/s. This is a challenging task, and several compute accelerator technologies are being considered for use inside the new Event Filter farm.
Therefore, a study was carried out here specifically to investigate the possible usage of the new Intel® Xeon®-FPGA to accelerate the Cherenkov angle reconstruction for particle identification. A Verilog version was implemented on the Intel® Arria® 10 GX 1150 FPGA, and an encouraging acceleration of 35x was measured for numbers of photons larger than 8000, compared to a single-threaded Intel® Xeon® E5-2600 v4 CPU.
These results are very motivating, and the High Energy Physics community could probably benefit from these new devices in a similar way to the High Performance Computing community. It is expected that, compared to other accelerators like GPUs, the performance per Joule should be very interesting; this will be verified.

References
1. LHCb Collaboration: Framework TDR for the LHCb Upgrade: Technical Design
Report. CERN, Geneva (2012)
2. LHCb Collaboration: LHCb Trigger and Online Upgrade Technical Design Report.
CERN, Geneva (2014)
3. Forty, R., Schneider, O.: RICH pattern recognition. CERN, Geneva (1998)
4. Sridharan, S., et al.: Accelerating particle identification for high-speed data-filtering
using OpenCL on FPGAs and other architectures. In: IEEE FPL 2016, Lausanne,
Switzerland, 29 Aug–2 Sept 2016. https://doi.org/10.1109/FPL.2016.7577351
5. Färber, C., et al.: Particle identification on a FPGA accelerated compute platform
for the LHCb Upgrade. IEEE Trans. Nucl. Sci. (2017). https://doi.org/10.1109/
TNS.2017.2715900
The ATLAS Level-1 Trigger System
with 13 TeV Nominal LHC Collisions

Louis Helary(B)
on behalf of the ATLAS Collaboration

CERN, Geneva, Switzerland


louis.helary@cern.ch

Abstract. The Level-1 (L1) Trigger system of the ATLAS experiment at CERN's Large Hadron Collider (LHC) plays a key role in the ATLAS
detector data-taking. It is a hardware system that selects in real time
events containing physics-motivated signatures. Selection is purely based
on calorimetry energy depositions and hits in the muon chambers consis-
tent with muon candidates. The L1 Trigger system has been upgraded to
cope with the more challenging Run2 LHC beam conditions, including
increased centre-of-mass energy, increased instantaneous luminosity and
higher levels of pileup. This talk summarises the improvements, commis-
sioning and performance of the L1 ATLAS Trigger for the LHC Run2
data period.

Keywords: ATLAS experiment · Trigger · LHC run2

1 Introduction

The ATLAS experiment [1] is located on the LHC at CERN. It is studying hadron-hadron collisions (protons or heavy ions such as lead) at a frequency of up to 40 MHz and at a centre-of-mass energy of up to √s = 13 TeV.
The conditions at which the LHC has been operated changed drastically between Run1 and Run2, leading to a significant increase of the interaction rate, which cannot be covered by the storage and CPU resources available for the physics analyses.
In order to maintain the physics program intact, the selection of the events had
to be significantly revised. The ATLAS trigger system [2,3] is divided into a fast
hardware system called L1 that collects the input from the detectors and reduces
the rate of collisions from 40 MHz to about 100 kHz; and a High Level Trigger
system based on commercial computers which reduces the rate of interesting
events to 1 kHz that are recorded on disk for future analyses. A diagram of the
Level-1 system is shown in Fig. 1. This document will discuss the large fraction
of the Level-1 constituents that were upgraded before Run2 to cope with the new
LHC conditions.


Fig. 1. Sketch of the ATLAS Level-1 systems [4].

2 L1 Muon Trigger Upgrade

In ATLAS, muon triggers are obtained from two technologies, Resistive Plate
Chamber (RPC) in the barrel, and Thin Gap Chamber (TGC) in the end-caps.
Together they form the L1-Muon. Additional RPCs were installed during the
Long Shutdown of the LHC (LS1) in 2013 and 2014, to recover acceptance in the
ATLAS feet region. The installation and commissioning of these chambers was
finished during 2015, and the system was fully enabled in the data taking in 2016.
Figure 2 (left) shows the efficiency of the L1 muon triggers with pT > 11 GeV as a
function of the muon φ coordinate for 2015 and 2016 data-taking. The acceptance
increase can be seen in the range −2.2 < φ < −0.8, where the feet are located.
In the end-caps where the rate of the L1 muon triggers is dominated by protons
from the beam faking real muons, it is crucial to decrease the rate while keeping
a high signal efficiency. In Run1 only the big wheels, which contain up to 3 TGC
stations and are located after the end-cap toroid magnet, were used to trigger
the events, while in Run2 the rate is reduced in the range 1.0 < |η| < 1.3 by
requiring a coincidence between the big wheel and the hadronic tile calorimeter
and in the range 1.3 < |η| < 1.9 by requiring a coincidence between the big
wheels and the muon small wheels, which contain 1 TGC station and are located in front of the end-cap toroid magnet. The rate of end-cap L1 muon triggers with
pT > 15 GeV measured in the data with and without the coincidence of the
small and big wheel is shown in Fig. 2 (right).

Fig. 2. Left: Efficiency of the barrel L1 muon triggers with pT > 11 GeV for 2015 data
(blue) and for 2016 data (red) [4, 5]. Right: Rates of the end-cap L1 muon triggers with
pT > 15 GeV with (blue) and without (red) the small wheel coincidence enabled [4, 5].

3 L1 Calorimeter Trigger Upgrade


The L1 Calorimeter (L1-Calo) system is a hardware-based system that receives
signals from both the Electromagnetic and Hadronic calorimeters with a coarse
granularity based on trigger towers of typical size Δη × Δφ = 0.1 × 0.1 and
identifies trigger objects such as electrons, taus and jets and also computes the
missing transverse energy of each event. In Run2 the system underwent various
major upgrades. The previous ASIC-based modules used at the pre-processing
stage were replaced by almost 2000 FPGA-based daughter boards to allow a
refined extraction of the energy. The new modules enable significantly improved
pile-up control, such as dynamic bunch-by-bunch pedestal correction as well as
an enhanced signal-to-noise ratio by the use of digital autocorrelation Finite
Impulse Response filters. In the 3.2 fb−1 of data collected in 2015 at the begin-
ning of Run2, about 100 events were found, which were mistimed due to jets
with saturated energy that were reconstructed by the L1-Calo in the wrong
bunch crossing. In the new pre-processing modules, the digitisation speed was
doubled to 80 MHz which allows for improved treatment of saturated signals and
refined input timing. A new algorithm taking advantage of this was deployed at
the end of the 2016 data-taking, resulting in no further mistimed events in the last 7 fb−1 of data recorded. The firmware of the object identifi-
cation hardware components was updated to allow for extra selectivity, such
as energy-dependent electromagnetic isolation criteria in the cluster processor.
This improves the QCD background rejection for electron triggers while keeping
a good signal identification. The refined isolation algorithm allows a decrease
in the rate of the L1 electron triggers with pT > 24 GeV by about 10% with a
signal efficiency loss of about 1%. This new algorithm was tested to check for the
introduction of any biases. Figure 3 (left) shows the efficiency of the new isolation
algorithm (denoted VHIM) compared to the old isolation algorithm (VHI) as a
function of the number of interactions per bunch crossing. No strong dependence
is observed. As in Run1, the L1-Calo system sends the multiplicities of trigger
objects as well as the energy sums to the L1 Central Trigger (L1-CT) system
in order to select the events. However, a brand-new system was introduced in
Run2: the L1-Topo system to allow topological cuts between these objects. The
bandwidths between the different L1-Calo modules was therefore increased, as
well as new merger modules introduced to provide the flexibility required by this
new system.

4 L1 Topological Trigger Upgrade

The introduction of topological cuts between trigger objects at L1 is mostly motivated by the aim of significantly reducing the rate of trigger items that are dominated by physics backgrounds, such as for instance the ones
needed for the study of the production of hadronically decaying Higgs bosons.
It also reduces the rate for low pT trigger items such as those used in the study
of B-mesons. Without the introduction of this new logic a significant fraction

Fig. 3. Left: Efficiency of the isolated L1 electron triggers with pT > 24 GeV as a
function of the average number of interactions per bunch crossing, for the old isolation
algorithm (in black) and for the new one (in blue) [4, 6]. Right: Rate as a function of
the Lumi-Block for the L1 di-muon triggers with pT > 6 GeV (in red) and for the
L1-Topo di-muon triggers with pT > 6 GeV (in blue) [4, 7].

of the ATLAS physics program would need to be left out. The L1 Topological
Trigger (L1-Topo) system consists of 2 modules each containing 2 FPGAs that
allow the execution of the 110 new topological trigger algorithms in a maximum
of 75ns. The whole L1 chain had to be redesigned in order to provide the energy
and position of each trigger object to the L1-Topo system. This is a significant
improvement compared to Run1 where only the multiplicity of each trigger item
was available. The commissioning of the system is ongoing using the data from
Run2. Figure 3 (right) shows the rate of the L1 di-muon triggers with pT > 6 GeV
with and without L1-Topo requirements on the invariant mass (2 < mµµ <
9 GeV) and the separation of the two muons (0.2 < ΔRµµ < 1.5).
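The topological selection quoted here can be written compactly. The sketch below uses a massless-muon approximation for the invariant mass and is meant only to spell out the cuts, not to reproduce the actual L1-Topo firmware.

    #include <algorithm>
    #include <cmath>

    struct L1Muon { double pt, eta, phi; };             // pT in GeV

    bool passDiMuonTopo(const L1Muon& m1, const L1Muon& m2) {
      const double kPi  = 3.14159265358979323846;
      const double dEta = m1.eta - m2.eta;
      const double dPhi = std::remainder(m1.phi - m2.phi, 2.0 * kPi);
      const double dR   = std::sqrt(dEta * dEta + dPhi * dPhi);
      // Invariant mass of two (approximately) massless particles:
      // m^2 = 2 * pT1 * pT2 * (cosh(dEta) - cos(dPhi))
      const double mSq  = 2.0 * m1.pt * m2.pt * (std::cosh(dEta) - std::cos(dPhi));
      const double mass = std::sqrt(std::max(0.0, mSq));
      return (mass > 2.0 && mass < 9.0) && (dR > 0.2 && dR < 1.5);
    }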

5 L1 Central Trigger Upgrade


The L1-CT is a hardware system which collects inputs from the other L1 systems
and forms the L1-Accept. The signal from the L1-Muon is processed by the Muon
to CTP Interface (MUCTPI), and is sent to the Central Trigger Processor (CTP)
and to L1-Topo (via the MUCTPI-To-Topo interface, which was introduced for
Run2). The MUCTPI system has received a firmware upgrade since Run1 to
allow the propagation of the (η, φ) coordinates as well as the pT of the highest
two energetic muons in a Δη × Δφ = 0.3 × 0.1 region to feed the L1-Topo
system. The CTP was redesigned to double the number of trigger inputs and
outputs, in order to cope with the new L1-Topo system. A new multi-partition
mode, allowing up to three partitions to run simultaneously, was added to allow
a more efficient commissioning and calibration of the subdetectors. The L1-CTP
software was completely redesigned to be modular, more stable and more robust.

6 Conclusion
After the LHC and ATLAS L1 upgrades, and despite the harsher LHC beam conditions, ATLAS has taken data with a very high efficiency (>92% in
2016). The data-taking has already restarted in 2017 and will continue until the
end of Run2 in 2018, where about 100 fb−1 of pp collisions data at 13 TeV are
expected.

References
1. ATLAS Collaboration: JINST 3, S08003 (2008)
2. ATLAS Collaboration: Eur. Phys. J. C 72, 1849 (2012). [arXiv:1110.1530 [hep-ex]]
3. ATLAS Collaboration: Eur. Phys. J. C 77, 317 (2017). [arXiv:1611.09661 [hep-ex]]
4. From ATL-DAQ-PROC-2017-014. Published with permission by CERN
5. ATLAS Collaboration. https://twiki.cern.ch/twiki/bin/view/AtlasPublic/L1Muon
TriggerPublicResults
6. ATLAS Collaboration. https://twiki.cern.ch/twiki/bin/view/AtlasPublic/Egamma
TriggerPublicResults
7. ATLAS Collaboration. https://twiki.cern.ch/twiki/bin/view/AtlasPublic/Trigger
OperationPublicResults
Common Software for Controlling
and Monitoring the Upgraded CMS
Level-1 Trigger

Giuseppe Codispoti1,2(B) , Simone Bologna3 , Glenn Dirkx2 ,


Christos Lazaridis4 , Alessandro Thea5 , and Tom Williams5
1
University of Bologna and INFN, Bologna, Italy
giuseppe.codispoti@cern.ch
2
CERN, Geneva, Switzerland
3
University of Milan-Bicocca and INFN, Milano, Italy
4
Department of Physics, University of Wisconsin-Madison, Madison, USA
5
STFC Rutherford Appleton Laboratory, Harwell Oxford, UK

Abstract. The CMS Level-1 Trigger has been replaced during the first
phase of CMS upgrades in order to cope with the increase of centre-of-
mass energy and instantaneous luminosity at which the LHC presently
operates. Profiting from the experience gathered in operating the legacy
system, effort has been made to identify the common aspects of the
hardware structures and firmware blocks across the several components
(subsystems).
A common framework has been designed in order to ensure homogene-
ity in the control and monitoring software of the subsystems, and thus
increase their reliability and operational efficiency. The framework archi-
tecture provides uniform high-level abstract description of the different
subsystems, while providing a high degree of flexibility in the specific
implementation of hardware configuration routines and monitoring data
structures.
The unique hardware composition and configuration parameters of
each subsystem are stored in a database that has a common structure
across subsystems. A custom editor has been implemented in order to
simplify the creation of new hardware configuration instances. The over-
all monitoring information gathered from all the subsystems is finally
exposed through a single access point to experts and operators.
We present here the design and implementation of the online software
for the Level-1 Trigger upgrade.

1 Introduction

The Compact Muon Solenoid (CMS) experiment Level-1 Trigger (L1T) selects
the most interesting 100 kHz of events out of 40 MHz collisions delivered by the
CERN Large Hadron Collider (LHC). The LHC restarted in 2015 with centre-
of-mass energy of 13 TeV and increasing instantaneous luminosity.


In order to cope with the increasing event rate, CMS L1T underwent a
major upgrade [1] in 2015 and early 2016. The VME-based system has been
replaced by custom-designed processors based on the µTCA specification, con-
taining FPGAs and larger memories for the trigger logic and high-bandwidth
optical links connections. The final system, composed of 9 main components
(subsystems), accounts for a total of about 100 boards and 3000 optical links
and is only partially standardized: 6 different design of processor boards are
used and each subsystem is composed of a different number of processors and
implements different algorithms. Approximately 90% of the software has been
rewritten in order to control and monitor the new system: to mitigate code duplication, the Online Software was redesigned to exploit the partial
standardisation and impose a common hardware abstraction layer for all the
subsystems.
We present here the design of the control software for the upgraded L1T and
the tools developed for controlling and monitoring the subsystems.

2 The Hardware Abstraction Layer: SWATCH


A CMS L1T subsystem is composed of one or more processor boards hosted in
µTCA crates, each equipped with a module providing clock, controls and data
acquisition services. Across different board designs and subsystems, the processor
boards have the same logical blocks: input/output ports; an algorithmic block
for data processing; a readout block sending data from the input/output buffers
to the CMS Data Acquisition system; a trigger Timing and Control block receiv-
ing clock and zero-latency commands. This common abstract description of the
subsystems and the processor boards is the founding concept of SWATCH [2],
SoftWare for Automating the conTrol of Common Hardware, a common software
framework for controlling and monitoring the L1T upgrade.
SWATCH models the generic structure of a subsystem through a set of
abstract C++ classes. Subsystem-specific functionalities are implemented in
classes that inherit from them. The objects that represent the subsystem are
built using factory patterns and an XML-formatted description of the sys-
tem. Hardware access for both test and control purposes is generalized through
the concept of Command : a stateless action, represented by an abstract base
class, customised through subtype polymorphism. Commands can be chained
in Sequences. A Finite State Machine (FSM) defines the possible state of the
subsystem and is connected with the high-level CMS run control states: Com-
mands and Sequences are the building blocks for the transitions between the
FSM states. The parameters used by the commands are provided through the
Gatekeeper, a generic interface providing uniform access to the SWATCH appli-
cation both from files and database. Similarly, items of monitoring data are rep-
resented through the Metric class that provides a standard interface for accessing
their values, registering error and warning conditions, and persistency. Metrics
can be organized in tree-like structures within Monitorable Objects, whose sta-
tus is the cumulative status of all constituent Metrics. Values stored in Metrics
are automatically published to an external service and then stored in an Oracle
database through a dedicated process. Metrics considered crucial for the data
taking are stored directly through a separate monitoring thread.
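As an illustration of this Command/Sequence abstraction, the sketch below shows how such a design could look. The class names are schematic and do not reproduce the actual SWATCH interfaces.

    #include <memory>
    #include <string>
    #include <vector>

    // A stateless Command, customised through subtype polymorphism.
    class Command {
    public:
      virtual ~Command() = default;
      virtual std::string name() const = 0;
      virtual bool execute() = 0;                       // returns true on success
    };

    class ConfigureLinks : public Command {             // hypothetical subsystem-specific command
    public:
      std::string name() const override { return "ConfigureLinks"; }
      bool execute() override { /* e.g. program optical-link registers of the board */ return true; }
    };

    // Commands chained into a Sequence, used as the building block of an FSM transition.
    class Sequence {
    public:
      void add(std::unique_ptr<Command> cmd) { commands_.push_back(std::move(cmd)); }
      bool run() {
        for (auto& cmd : commands_) {
          if (!cmd->execute()) return false;            // abort the transition on failure
        }
        return true;
      }
    private:
      std::vector<std::unique_ptr<Command>> commands_;
    };
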
Common monitoring and control interfaces are provided through web pages
based on Google-Polymer and modern web technologies (ES6 JavaScript,
SCSS, HTML5). The XDAQ [3] framework, the CMS platform for distributed
data acquisition systems, provides the networking layer to drive the FSM from
central CMS Run Control and publish monitoring information.

Fig. 1. System (left) and Processor (right) views of a subsystem in the SWATCH web
interface. From CMS CR-2017/188. Published with permission by CERN.

3 Definition and Persistency of Hardware Configuration

The system description and the input parameters for Commands and FSM tran-
sitions are stored in an Oracle database, ensuring that the hardware configura-
tion that was used for any given test or data taking run can be identified, and
re-used if required. The database schema is based on a tree-like structure of for-
eign keys that mimics the structure of the L1T. Subsystems’ configurations are
then split in logical blocks and stored in form of XML chunks.
To simplify the process of editing of the L1T configuration a custom database
Configuration Editor (L1CE ) has been developed. The L1CE was designed as a
client-server application to enable multi-user access whilst keeping safe, atomic
database sessions attached to the user session. Both client and server are devel-
oped using web technologies: the server runs a Node.js application, while the
client is a web page based on Google-Polymer. This choice minimises the num-
ber of technologies involved, thus reducing development and maintenance effort
and keeping a native, efficient communication between the two parts.
The L1CE implements unambiguous bookkeeping of the configurations, by
imposing naming conventions and versioning at all levels, author identification
through the CERN Single-Sign-On mechanism and in general explicit and auto-
matic metadata insertion. It also implements in-browser XML editing and com-
parison between different configurations at all levels.

4 The Level-1 Page


A web application called Level-1 page provides a single access point for shifters
and experts to all of the L1T monitoring information, including the running
conditions and the warnings and alarms from all trigger subsystems. Through a
concise visual representation of the system, including the interconnections among
trigger subsystems and with CMS detectors, it allows non-experts (e.g. control
room shift personnel) to identify the source of problems more easily. It also
provides access and control for the trigger processes, links to the documentation
and relevant contacts, and a list of reminders that the user can filter based on
the type of the ongoing run (e.g. collisions, cosmics, tests, etc.). The Level-1 page
is based on the same architecture as the L1CE in which the server is responsible
for aggregating the information from heterogeneous sources.

Fig. 2. The architecture of the L1CE (left) and a view of the Level-1 Page (right).
From CMS CR-2017/188. Published with permission by CERN.

5 Conclusions
The CMS L1T has been completely replaced in 2015 and early 2016 and commis-
sioned in a very small time window. The Online Software has been rewritten to
accommodate its new structure, exploiting hardware standardization and imposing
a common abstraction model. In this way, the design of the new Online Software
increased the fraction of common code and reduced the development time and
maintenance effort, playing a vital role in the commissioning of the new system
and enhancing the data taking reliability. Moreover the design choices will sim-
plify implementation of new features, continuously adapting to the needs of the
next years of data taking.

References
1. The CMS collaboration: CMS Technical Design Report for the Level-1 Trigger
Upgrade, CERN, Geneva. CMS-TDR-12 (2013)
2. Bologna, S., et al.: SWATCH: common software for controlling and monitoring the
upgraded level-1 Trigger of the CMS experiment. In: Proceedings of 20th IEEE-
NPSS Real Time Conference (RT2016) (2016). https://doi.org/10.1109/RTC.2016.
7543077
3. XDAQ. https://svnweb.cern.ch/trac/cmsos. Accessed 25 June 2017
A Prototype of an ATCA-Based System
for Readout Electronics in Particle
and Nuclear Physics

Min Li1,2, Zhe Cao1,2(&), Shubin Liu1,2, and Qi An1,2


1
State Key Laboratory of Particle Detection and Electronics (IHEP-USTC),
University of Science and Technology of China, Hefei 230026, China
caozhe@ustc.edu.cn
2
Department of Modern Physics, University of Science and Technology
of China, Hefei 230026, China

Abstract. The thousands of channels and the large amounts of data in modern high-energy and nuclear physics pose many challenges for readout electronics, so a system with high bandwidth, low latency and flexible real-time data sharing is necessary. Because of the limits of its architecture, the classical parallel backplane cannot meet these requirements. A prototype readout electronics system based on the Advanced Telecom Computing Architecture (ATCA) is designed, which is composed of a hub blade and node blades. The blades interconnect with each other via the high-speed serial links of the ATCA backplane. A high-sample-rate Analog-to-Digital Converter (ADC), which can read out the signal from the detectors, is implemented to produce high-speed data for transmission, and a Field Programmable Gate Array (FPGA) is responsible for configuration, control and data transmission. The system tests conducted in the laboratory indicate that the prototype system functioned well.

Keywords: ATCA  FPGA  High bandwidth transmission

1 Introduction

New-generation high-energy and nuclear physics experiments run with more channels and larger amounts of transmitted data [1]; thus the traditional architectures based on parallel backplanes, such as Compact Peripheral Component Interconnect (PCI) and Versa Module Eurocard (VME), are not sufficient for the required data throughput and latency [2].
ATCA was developed by the PCI Industrial Computer Manufacturers Group (PICMG), and the PICMG 3.0 document mainly describes its features, such as mechanics, power distribution and thermal management, as well as data transport. The key features of ATCA are listed below [3]:
• A high throughput capacity (up to 2.5 Terabit/s).
• High availability up to 99.999%.
• Two redundant −48 V power supplies, reducing single points of failure.

• A highly scalable, switched fabric architecture, based on Gigabit Ethernet, PCI Express, RocketIO and Fibre, with each serial connection providing 2.5 Gb/s data transmission.
• Support for up to 200 W of power per board and 14 boards per crate.
ATCA is already widely used in industry, and it will become increasingly popular in high-energy and nuclear physics. In this paper, an ATCA-based prototype readout electronics system is presented.
Based on the advantages mentioned above, a prototype readout electronics for particle and nuclear physics, which can provide a possible solution for high-speed data acquisition systems, is designed. The system is implemented in a 14-slot ATCA crate, which is equipped with a backplane of dual-dual-star topology for the fabric interface and dual-star topology for the base interface. There are four hub slots and ten node slots; each hub slot is interconnected with the other hub slots and with all node slots through x1, x2, or x4 high-speed serial links.

2 System Description

The structure diagram of the prototype readout electronics is shown in Fig. 1. In the node blade there is an ADC, whose sampling clock is provided by a Phase-Locked Loop (PLL), with an amplifier for signal coupling. The parallel output data streams are then transferred to an FPGA for buffering and processing. There are 16 high-speed serial transceivers (GTP) integrated in the FPGA, of which eight connect to the fabric interface and two connect to the base interface, each supporting data rates up to 6.6 Gb/s [4]. In addition, a Universal Serial Bus (USB) port is supplied for sending commands in test mode.

Fig. 1. Structure diagram of the prototype readout electronics

The hub blade collects all of the data transferred from the node blades through the ATCA backplane. With 16 GTP transceivers integrated in its FPGA, high-bandwidth data transmission is achieved, and event selection, data packing and real-time correction may be implemented in the FPGA thanks to its abundant internal interconnections and logic resources. A DDR3 memory is supplied for data buffering and a flash memory is used for the FPGA configuration bitstream. A Gigabit Ethernet link connected to the PC is provided for transmission of the processed data. In order to synchronize the hub blade and the node blades, a high-precision global clock fanned out from the hub blade is shared among all node blades.
Both in hub blade and node blades, a microcontroller is used as an Intelligent
Platform Management Controller (IPMC), which is responsible for monitoring the
temperature and voltage of the board, managing activation and power up or down of
board and communicating with the Shelf Management Controller (ShMC) via two I2C
buses [5].

3 Performance Test

To evaluate the performance of the prototype readout electronics, system tests
were conducted in the laboratory, including the dynamic performance of the ADC,
the stability of the high-speed serial links between the hub blade and a node
blade, as well as Ethernet and USB function tests. The test platform is shown in
Fig. 2. The ADC performance test and the transmission stability test are
described in detail below.

Fig. 2. Test platform in the laboratory

In the ADC performance test, the signal generator produced a sine wave, which
was sent to the node blade through band-pass filters. The signal was conditioned
by the amplifier and digitized by the ADC. The frequency response curve shows a
bandwidth of about 200 MHz, consistent with the bandwidth of the amplifier.
Figure 3 shows a typical frequency-domain spectrum of the ADC with the amplifier
at 85 MHz (−1 dBFS); the Effective Number of Bits (ENOB) is 8.20 bits.
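
For reference, the ENOB is conventionally derived from the measured SINAD as
ENOB = (SINAD − 1.76 dB)/6.02. The sketch below shows one way such a figure could
be extracted offline from a captured sine-wave record; the windowing, guard-bin
choice and the example sampling parameters are assumptions made for illustration,
not the settings used by the authors.

```python
import numpy as np

def enob_from_samples(samples, fs, f0):
    """Estimate the ENOB of an ADC from a captured sine-wave record.

    samples : 1-D array of ADC codes (sine wave)
    fs      : sampling frequency in Hz
    f0      : frequency of the injected sine wave in Hz
    """
    n = len(samples)
    windowed = samples * np.hanning(n)            # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    fund_bin = np.argmin(np.abs(freqs - f0))
    guard = 3                                     # bins kept around the tone
    fund = slice(max(fund_bin - guard, 1), fund_bin + guard + 1)

    signal_power = spectrum[fund].sum()
    noise_power = spectrum[1:].sum() - signal_power   # exclude the DC bin
    sinad = 10.0 * np.log10(signal_power / noise_power)
    return (sinad - 1.76) / 6.02                  # standard ENOB relation

# Hypothetical usage: a record taken with an 85 MHz tone
# enob = enob_from_samples(adc_record, fs=1e9, f0=85e6)
```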
In the high-speed serial link test, a pseudo-random data generator was
implemented in the FPGA of the node blade to generate 16-bit parallel data. The
data were encoded and serialized by a GTP transceiver and then sent to the hub
blade via x4 fabric channels of the backplane. In the hub blade, an identical
pseudo-random data generator ran in parallel and its output was compared with
the data received from the node blade; any mismatch was flagged. Table 1 shows
the Bit Error Rate (BER) test result, indicating that the data transmission is
reliable. The speed of one port can reach 3.125 Gb/s, so that a x4 fabric
channel can achieve 12.5 Gb/s.

Fig. 3. Frequency-domain spectrum of the ADC with the amplifier at 85 MHz

Table 1. BER test result of backplane data transmission.

Speed (Gb/s)   Test data (bits)   BER
1.25           5.4 × 10^13        < 1 × 10^−13
3.125          9 × 10^13          < 1 × 10^−13
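
The limits quoted in Table 1 are consistent with observing no errors over the
stated numbers of transmitted bits. A common way to turn an error-free run into a
quantitative limit is a confidence-level bound, sketched below as a generic
statistical note (this is not part of the authors' procedure, and the second
branch is only a rough approximation):

```python
import math

def ber_upper_limit(n_bits, n_errors=0, cl=0.95):
    """Upper limit on the bit error rate from a link test.

    For zero observed errors the limit reduces to -ln(1-CL)/N; for a handful
    of errors a simple conservative estimate (errors + spread)/N is returned.
    """
    if n_errors == 0:
        return -math.log(1.0 - cl) / n_bits
    return (n_errors + math.sqrt(n_errors)) / n_bits   # rough estimate only

# 5.4e13 error-free bits -> BER < ~5.5e-14 at 95% CL,
# consistent with the < 1e-13 figure quoted in Table 1
print(ber_upper_limit(5.4e13))
```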

4 Conclusion

A prototype readout electronics system based on the ATCA architecture has been
designed. A data transmission rate of 12.5 Gbps through the backplane is
achieved, meeting the expected requirement. The laboratory tests show that the
prototype system functions well, and the use of ATCA technology is very
promising for future high-energy and nuclear physics experiments.

Acknowledgements. This project is supported by the Young Fund Projects of the
National Natural Science Foundation of China (Grant No. 11505182). It is also
supported by the Young Fund Projects of the Anhui Provincial Natural Science
Foundation (Grant No. 1608085QA21).

References
1. Larsen, R.S.: Advances in developing next-generation electronics standards for physics. In:
Real Time Conference, 2009, RT 2009, IEEE-NPSS, pp. 7–15. IEEE (2009)
2. Jezynski, T., Larsen, R., Du, P.L.: ATCA/µTCA for physics. Nucl. Instrum. Methods Phys. Res.
A 623(1), 510–512 (2010)
3. PICMG Homepage. http://www.picmg.org. Accessed 15 June 2017
4. Xilinx Homepage. http://www.xilinx.com. Accessed 15 June 2017
5. Lang, J., et al.: TCA-5 paper. In: Proceedings of the IEEE NPSS 15th Real Time Conference,
Beijing
Commissioning and Integration
Testing of the DAQ System for the CMS
GEM Upgrade

Alfredo Castaneda(&), On behalf of the CMS Muon group

Texas A&M University at Qatar, P.O. Box 23874 Education City, Doha, Qatar
castaned@cern.ch

Abstract. The international CMS collaboration at the CERN LHC will conduct a
major upgrade of its muon detection system to cope with the intense particle
flux and radiation levels foreseen during the high-luminosity LHC era. The first
approved CMS muon upgrade project (GE1/1) consists of the installation of new
detectors in the CMS forward region based on Gas Electron Multiplier (GEM)
technology, to maintain excellent muon reconstruction and particle
identification in that region. Extensive simulation and test-beam exercises
performed during the past few years ensure optimal detector operation. The
integration of five GEM modules into CMS in early 2017 allowed scientists to
gain experience with the mechanical installation, services and data acquisition
(DAQ) infrastructures. This letter presents an overall description of the
project, with emphasis on the GEM data acquisition (GEM-DAQ) system, and results
of GEM local calibrations with the detector integrated into CMS for the first
time.

Keywords: CMS · Muon · GEM · DAQ

1 GEM Technology

A Gas Electron Multiplier [1] is a thin metal-clad polymer foil, chemically
perforated with a high density of microscopic holes. In the case of the GE1/1
upgrade project, a triple-GEM configuration proved to be optimal in terms of
signal amplification and time resolution, also considering the volume
constraints in the CMS detector. It consists of a stack of three GEM foils
separated by a relative distance of a few millimeters; a gas mixture fills the
space between the foils. Charged particles crossing the detector interact with
the molecules in the gas, producing electron-ion pairs (primary ionization). The
released electrons drift towards the GEM foils and are further multiplied
(secondary ionization) by the intense electric field created in the holes,
producing an avalanche effect. Electrons reaching the anode induce a charge in
the readout strips, from which properties such as the position and arrival time
of the primary particles can be inferred. The performance of large-scale
triple-GEM detectors has been extensively studied in the past using simulation
and measurements from test-beam exercises [2]. Experimental measurements
indicate an excellent performance of the detector, with an efficiency >97%, a
particle rate capability >10 kHz/cm², a timing resolution <10 ns, an angular
resolution of 300 µrad and a gain uniformity of 15% or better across the entire
chamber. In addition,
the detectors and electronic components undergo radiation tolerance tests to ensure the
correct operation during the lifetime of the LHC project.

2 Overview of the GEM-DAQ System

The GEM data acquisition (GEM-DAQ) system is the collection of hardware and
software components for signal readout and for communication between the
front-end electronics and the off-detector components, including the GEM
back-end electronics, the central CMS DAQ and the Cathode Strip Chamber (CSC)
muon system, the latter to improve CMS muon triggering in the forward region.
Each trapezoidal triple-GEM chamber is divided into twenty-four sectors (eight
columns and three rows). Each sector contains 128 readout strips with inputs
connected to a 128-channel front-end chip (VFAT) [3]. The charge collected in
the strips is amplified, converted from analog to digital and noise-suppressed;
the data from the twenty-four chips are packed and transmitted via electronic
links (E-links), running through a flat printed circuit board known as the GEM
Electronic Board (GEB), to an opto-hybrid device for further processing. The
opto-hybrid is also plugged into the GEB and contains a Giga-Bit Transceiver
(GBT) chipset, a Field Programmable Gate Array (FPGA), as well as optical
receivers and transmitters to provide the link with the off-detector region, as
shown in Fig. 1.

Fig. 1. A sketch showing the main components of the GEM-DAQ system including the front-
end electronics and elements for communication with the off-detector region.

A bidirectional optical path runs between the Micro Telecommunications Computing
Architecture (µTCA) crate located in the counting room and the opto-hybrid; this
path is used to send setup and control signals to the VFAT chips and to receive
tracking and triggering information. The information from the opto-hybrid is
received by CTP7 Advanced Mezzanine Cards (AMCs) for further processing; a
custom AMC card (AMC13) is used for communication with the central CMS DAQ to
provide the trigger and timing control (TTC). A unidirectional path sends
fixed-latency trigger data from the GEM to the CSC system. A complete
description of the electronic components can be found in [4].

3 GEM Integration in CMS and Local Calibration Results

Triple-GEM chambers are subjected to rigorous Quality Control (QC) tests [5]
before being declared ready for installation and commissioning; these tests
include the uniformity of the gain and checks for gas leaks. Once the QC tests
are passed, super-chambers are fabricated by coupling two GE1/1 chambers, and
parameters such as the detector gain, noise levels and cluster size are measured
with the final detector electronics.
Five GEM super-chambers were integrated into CMS in early 2017. With the GEM-DAQ
system fully operational and the services (gas, cooling, cabling, low and high
voltage) in place, the system can be operated locally, allowing scan routines to
be performed on specific GEM devices and configurations, or globally, with the
system integrated into the central CMS DAQ infrastructure. GEM local calibration
routines are used to check the data integrity and the response of the VFAT chips
to injected signals; results for one of the VFAT chips used in the commissioning
are presented in Fig. 2.

Fig. 2. Response of a VFAT chip (installed in one of the super-chambers
integrated in CMS) to the injection of internal calibration pulses (charge). The
magnitude of the charge is controlled internally and a configurable number of
pulses is injected. The total number of times the comparator fired is recorded
and normalized to the total number of injected pulses (ε). The plot on the right
shows the response after adjusting individual channel registers to trim the 99%
response point to the same reference value, in order to reduce slight
channel-to-channel differences due to statistical fluctuations in fabrication.
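
A minimal offline sketch of how such a scan could be analysed, assuming the
standard error-function ("S-curve") model for the comparator response; the
function names and the trimming recipe below are illustrative assumptions, not
the collaboration's calibration software:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def s_curve(q, q_half, sigma):
    """Fraction of fired comparisons versus injected charge q."""
    return 0.5 * (1.0 + erf((q - q_half) / (np.sqrt(2.0) * sigma)))

def fit_channel(charges, fired_fraction):
    """Fit one channel's scan; returns (threshold, noise) in DAC units."""
    p0 = (np.median(charges), 1.0)
    popt, _ = curve_fit(s_curve, charges, fired_fraction, p0=p0)
    return popt  # q_half, sigma

# Hypothetical trimming step: align the 99% response points of all channels,
# where q99 = q_half + 2.326 * sigma (99th percentile of the fitted curve).
```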

4 Summary and Future Perspectives

The successful installation of five GEM super-chambers into CMS was completed in
early 2017, and invaluable experience was gained on the mechanical installation,
service integration and DAQ setup, which will help reduce the time needed for
the installation of the full GE1/1 system (2019). The GEM local calibrations
indicate a good system performance and provide valuable data for monitoring the
system and the GEM-DAQ components. The commissioning work will continue in
parallel with the regular CMS data taking; this will allow the GEM system to
interact with the rest of the CMS subsystems for the first time.

References
1. Sauli, F.: GEM: a new concept for electron amplification in gas detectors. Nucl. Instrum.
Methods A 386, 531–534 (1997)
2. Abbaneo, D., et al.: Performance of a large-area GEM detector prototype for the upgrade of
the CMS muon endcap system. In: Proceedings of 2014 IEEE Nuclear Science Symposium,
Seattle, WA, USA (2014)
3. Aspell, P., et al.: VFAT2: a front-end “system on chip” providing fast trigger information and
digitized data storage and formatting for the charge sensitive readout of multi-channel silicon
and gas particle detectors. In: 2008 IEEE Nuclear Science Symposium Conference Record,
pp. 1489–1494 (2008)
4. Colaleo, A., Safonov, A., Sharma, A., Tytgat, M., et al.: CMS technical design report for the
Muon endcap GEM upgrade. CERN-LHCC-2015-12. CMS-TDR-013, June 2015
5. Tytgat, M., et al.: Quality control for the first large areas of the triple-GEM chambers for the
CMS endcaps. CMS-CR-2015-347, December 2015
MiniDAQ1: A Compact Data Acquisition
System for GBT Readout over 10G
Ethernet at LHCb

Paolo Durante1(&), Jean-Pierre Cachemiche2, Guillaume Vouters3,


Federico Alessio1, Luis Granado Cardoso1,
Joao Vitor Viana Barbosa1, and Niko Neufeld1
1
EP Department, European Organization for Nuclear Research,
Geneva 23, Switzerland
paolo.durante@cern.ch
2
Centre de Physique des Particules de Marseille,
163 Avenue de Luminy, Marseille, France
3
Laboratoire d’Annecy le Vieux de Physique des Particules,
9 Chemin de Bellevue, Annecy-le-Vieux, France

Abstract. The LHCb experiment at CERN is undergoing a significant upgrade in
anticipation of the increased luminosity that will be delivered by the LHC
during Run 3 (starting in 2021). In order to allow efficient event selection in
the new operating regime, the upgraded LHCb experiment will have to operate in
continuous readout mode and deliver all 40 MHz of particle collisions directly
to the software trigger. In addition to a completely new readout system, the
front-end electronics for most sub-detectors are also to be redesigned in order
to meet the necessary performance. Most front-end communication is based on a
common ∼5 Gb/s radiation-hard protocol developed at CERN, called GBT. MiniDAQ1
is a complete data-acquisition platform developed by the LHCb collaboration for
reduced-scale tests of the new front-end electronics. The hardware includes 36
bidirectional optical links and a powerful FPGA in a small AMC form factor. The
FPGA implements data acquisition and synchronization, slow control and fast
commands on all available GBT links, using a very flexible architecture that
allows front-end designers to experiment with various configurations. The FPGA
also implements a bidirectional 10G Ethernet network stack, in order to deliver
the data produced by the front-ends to a computer network for final storage and
analysis. An integrated single-board computer runs the new control system that
is also being developed for the upgrade; this allows MiniDAQ1 users to
interactively configure and monitor the status of the entire readout chain, from
the front-end up to the final output.

Keywords: Data acquisition systems · FPGA · GBT · 10G Ethernet · Advanced Mezzanine Card · LHCb experiment


1 Hardware Specifications
1.1 AMC40
MiniDAQ1 hardware is composed of two main parts. The first, called the AMC40, is
based on the Advanced Mezzanine Card form factor and interfaces directly with
the front-end optical links. The optical links are implemented with Avago
MiniPOD™ modules, 3 transmitters (AFBR-811VxyZ) and 3 receivers (AFBR-821VxyZ),
for a total of 36 bidirectional links. Six MPO12 connectors are available on the
front panel to accept the front-end optical fibers. The AMC40 also mounts the
FPGA implementing all data multiplexing and aggregation; an Altera Stratix V
device is used for this purpose (Fig. 1).

Fig. 1. AMC40 (left) and AMCTP (right)

1.2 AMCTP
The second half of the MiniDAQ1 hardware resides on the so-called AMCTP, a
utility module that mates with the AMC40 via an AMC B+ connector. The AMCTP
hosts an embedded microcomputer where part of the control system software is
executed.
The FPGA and the control system communicate via a single-lane PCI Express Gen1 link.
The control system can configure and monitor the hardware through an on-board gigabit
Ethernet link. The AMCTP also provides the main reference clock to the mezzanine
board, either from a local oscillator or an external source. A dedicated connector can fan
out this clock to synchronize with another MiniDAQ1. Additional connectors provide
inputs for external triggers and outputs for test signals from the FPGA.

2 MiniDAQ1 Firmware

2.1 Slow Control


“Experiment Slow Control” (or ECS) refers to all communication used to configure
and monitor the readout board and all front-ends connected to it. On the
front-end side, the GBT-SCA [1] ASIC is used to multiplex configuration and
monitoring requests from a single bidirectional optical link onto multiple
on-board peripherals. On the readout board, the MiniDAQ1 firmware already
implements a translation module (called SOL40) between the ECS and the SCA for
all the available protocols, including JTAG, with support for remote
reprogramming of front-end FPGAs used in low-radiation areas. This module is
highly configurable in order to accommodate every possible SCA topology
implemented on the front-end side.

2.2 Fast Commands


In addition to slow commands (which are asynchronous), the firmware also has to
implement so-called “Fast Commands”, which propagate throughout the entire system
synchronously with the collider bunch clock. For each collision, the fast commands
instruct each front-end and each readout board on how to handle the new data. This
facility is used, for example, for synchronization, calibration and throughput regulation
across a given setup. All these functions which together make up the “readout
supervisor” are already implemented in the MiniDAQ1 firmware. The corresponding
firmware module is internally referred to as SODIN [2]. Since each subdetector can
implement a different set of fast commands, this module can be configured to be
compatible with any existing implementation.

2.3 Data Aggregation


While slow and fast commands require a bidirectional optical connection with the
front-ends, data acquisition links are unidirectional and require much higher bandwidth
(up to 5.12 Gb/s per link).
In MiniDAQ1, up to 24 links can be configured in simplex data-acquisition mode;
several protocols offering 3.36, 4.48 or 5.12 Gb/s of usable DAQ bandwidth are
available. Within a given protocol, each front-end type uses a specific encoding
designed to optimally exploit the available bandwidth.
Within the MiniDAQ1 firmware, a dedicated module is responsible for
synchronizing, aligning, decoding and finally aggregating the data received from
all available data-acquisition links. Internally, this module is referred to as
TELL40.
A difference between the firmware of MiniDAQ1 and the final system is that in
the former case the SODIN, SOL40 and TELL40 modules are all instantiated in the
same FPGA, whereas in the final architecture each constitutes a dedicated
firmware for a specific “flavor” of readout board; the respective
functionalities remain, however, unchanged.

2.4 Ethernet Stack


Once the data from all links corresponding to a given collision have been
aggregated into a single “fragment” and marked with a unique sequential event
number, they are ready to be delivered outside the FPGA across a dedicated
10GBASE-R network. The MiniDAQ1 firmware implements a bidirectional 10G Ethernet
stack that has been optimized for data acquisition. The stack buffers a finite
number of fragments within the maximum transmission unit (MTU) threshold of the
network (9000 bytes). For a given network link, buffering of a new packet occurs
in parallel with the transmission of the current packet to minimize network
dead-time. The stack supports data transmission up to the specified link line
rate of 10 Gb/s. Fragments are encapsulated in UDP datagrams with a simple
header identifying the origin and type of traffic. The stack also implements ARP
and ICMP replies to simplify network monitoring and configuration through the
control system.
An upcoming version of the network stack allows multiplexing of the data
acquisition stream over multiple 10 GbE links in order to linearly increase the available
output bandwidth.
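
To make the encapsulation concrete, the following hypothetical Python sketch
packs event fragments into UDP datagrams within a 9000-byte MTU. The header
fields, sizes and addresses are invented for illustration; the actual MiniDAQ1
header layout is not detailed here.

```python
import socket
import struct

# Hypothetical 12-byte header: source ID, traffic type, reserved, event number.
HEADER = struct.Struct("!IHHI")
MTU_PAYLOAD = 9000 - 28            # jumbo-frame MTU minus IP (20) and UDP (8) headers

def send_fragments(fragments, src_id, dest=("10.0.0.1", 50000)):
    """Pack each aggregated event fragment into one UDP datagram and send it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for event_number, payload in enumerate(fragments):
        if len(payload) > MTU_PAYLOAD - HEADER.size:
            raise ValueError("fragment exceeds the jumbo-frame MTU budget")
        datagram = HEADER.pack(src_id, 0x01, 0, event_number) + payload
        sock.sendto(datagram, dest)
```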

3 MiniDAQ1 Software

3.1 Control System


The MiniDAQ1 control system is based on the WinCC Open Architecture. Each entity
(sensor, chip, bus, board…) in the field is represented by an abstract finite state
machine. All FSMs are organized hierarchically to easily monitor state and propagate
commands throughout the tree. A single WinCC project is run as a decentralized
system, with different components running on separate hosts, all communicating
with each other over a LAN using the DIM protocol.
on the AMCTP carrier board executes dedicated software to interface the readout board
(and all downstream entities optically connected behind it) with the rest of the control
system. This software is internally called the GbtServ. A separate piece of middleware
interfaces the control system with the storage host where data is received over 10 GbE.

3.2 Data-Quality Monitor


In addition to persisting events to disk, the storage component submits a
fraction of the collision fragments, on a best-effort basis, to a process
dedicated to live data-quality monitoring. This facility is used to display
real-time histograms to the experiment operators.

4 Transition to MiniDAQ2

The successor hardware of MiniDAQ1 implements a radically different readout
architecture based on a PCI Express form factor and a modern Arria 10 FPGA,
offering an order of magnitude increase in available output bandwidth [3].
Nevertheless, most of the firmware and software modules described so far have
been designed to maintain forward compatibility, with the main exception being
the Ethernet stack, which is replaced with a high-performance PCIe transport
based on DMA.

5 Conclusion

The MiniDAQ1 hardware is now finalized and successfully used by several
sub-detector groups within the LHCb collaboration, both for detector R&D and in
irradiation facilities such as the CERN SPS. Work is currently well underway on
MiniDAQ2, which features a high-throughput readout protocol based on PCI Express
in place of Ethernet. Firmware and software have already been designed so as to
minimize the effort required to transition from MiniDAQ1 to its successor, which
implements the final design that will be commissioned in 2019. MiniDAQ2 hardware
has now entered production readiness and the first devices have already been
delivered to members of the LHCb collaboration.

References
1. Caratelli, A., et al.: The GBT-SCA, a radiation tolerant ASIC for detector control and
monitoring applications in HEP experiments. J. Instrum. 10(03), C03034 (2015)
2. Alessio, F., et al.: LHCb: clock and timing distribution in the LHCb upgraded detector and
readout system. Conference poster, No. Poster-2014-461 (2014)
3. Durante, P., et al.: 100 Gbps PCI-Express readout for the LHCb upgrade. IEEE Trans. Nucl.
Sci. 62(4), 1752–1757 (2015)
Challenges and Performance of the
Frontier Technology Applied to an
ATLAS Phase-I Calorimeter Trigger
Board Dedicated to the Jet Identification

B. Bauss1 , A. Brogna2 , V. Buescher1 , R. Degele1 , H. Herr1 , C. Kahra2(B) ,


S. Rave1 , E. Rocco1 , U. Schaefer1 , J. Souza1 , S. Tapprogge1 , and M. Weirich1
1
Institut fuer Physik, Johannes Gutenberg - Universitaet, Mainz, Germany
2
PRISMA Detektorlabor, Johannes Gutenberg - Universitaet, Mainz, Germany
ckahra@uni-mainz.de

Abstract. The ‘Phase-I’ upgrade of the Large Hadron Collider (LHC), scheduled to
be completed in 2021, will lead to an enhanced collision luminosity and an
increased number of interactions per LHC bunch crossing. To cope with the new
and challenging accelerator conditions, all the CERN experiments have planned a
major detector upgrade to be installed during the associated experimental
shutdown period. One of the physics goals of the ATLAS experiment is to maintain
sensitivity to electroweak processes despite the increased event rate. To this
end, the first-level hardware trigger based on calorimeter data will be upgraded
to exploit fine-granularity readout using a new system of Feature EXtractors
(FEXs), each of which uses different physics objects for trigger selection.
There will be three FEX systems in total: the electron, the jet and the global
Feature Extractor. This contribution focuses on the first prototype of the jet
FEX (jFEX) and presents the hardware design challenges and the adopted solutions
to preserve signal integrity within a densely populated, high-signal-speed ATCA
board.

1 Introduction

The Large Hadron Collider (LHC) at CERN will stop operation in 2019 to be
upgraded to an instantaneous luminosity (L) of about L ≈ 2.5 × 10^34 cm^−2 s^−1
during Long Shutdown 2 (LS2). When operation restarts for Run 3 (planned for
2021), an average number of interactions per bunch crossing of ≈60 is expected.
To cope with these new conditions, an upgrade programme (the ‘Phase-I’ upgrade)
for the trigger and data acquisition system (TDAQ) of the ATLAS experiment [1]
has been planned [2]. As part of this, the upgrade of the Level-1 Calorimeter
trigger system (L1Calo) [3] will include the new jet Feature EXtractor (jFEX)
sub-system, which is the focus of this contribution. Figure 1 gives an overview
of the Phase-I L1Calo system.


Fig. 1. Left: Overview of the planned Level-1 Calorimeter trigger system for LHC Run
3 (Blue and green: the legacy system; yellow: new components added as part of the
Phase-I upgrade); Right: photograph of the first jFEX prototype (assembled with only
one processor FPGA) [5]

2 The Jet Feature Extractor


2.1 Requirements
The jFEX is intended to receive data from the calorimeters and to identify jet
and large-area tau candidates in real time. It also calculates global variables
such as the total transverse energy ET and the missing transverse energy ETmiss.
To cover the full η range (pseudorapidity η ≡ −ln tan(θ/2), θ: polar angle),
there will be 6–7 jFEX modules (the handling of η = 0 is still under
discussion), each with four processors. Each module receives, for its η slice,
the full φ range (azimuthal angle). The input bandwidth of the jFEX needs to be
high enough to receive the calorimeter data with the highest possible ET
resolution and to have the capability to identify “fat jets” (jets with an area
up to 1.7 × 1.7 in η × φ). To identify a fat jet, a jFEX processor needs to
receive data from half of the φ range processed by a jFEX. However, each
processor receives only one quarter of the ring directly, so data duplication at
board level between a processor FPGA and its neighbours is required. In the jFEX
processors several algorithms will run, with the “Sliding window with Gaussian
weighting” algorithm [2] as the baseline algorithm for jet identification. The
latency budget for the jFEX is ≈387.5 ns.
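
To illustrate the principle of a Gaussian-weighted sliding window, here is a
simplified software sketch; the window size, Gaussian width and threshold are
placeholders, and the real jFEX algorithm runs in firmware on the trigger-tower
grid rather than in Python.

```python
import numpy as np

def gaussian_weighted_jets(et, cell=0.1, window=0.9, sigma=0.3, threshold=25.0):
    """Toy sliding-window jet finder on an eta-phi grid of tower ET values (GeV)."""
    half = int(round(window / cell)) // 2
    offs = np.arange(-half, half + 1)
    dEta, dPhi = np.meshgrid(offs * cell, offs * cell, indexing="ij")
    weights = np.exp(-(dEta**2 + dPhi**2) / (2.0 * sigma**2))  # Gaussian weighting

    n_eta, n_phi = et.shape
    sums = np.zeros_like(et, dtype=float)
    for i in range(half, n_eta - half):          # eta does not wrap
        for j in range(n_phi):                   # phi is periodic
            cols = (j + offs) % n_phi
            patch = et[i - half:i + half + 1][:, cols]
            sums[i, j] = float(np.sum(weights * patch))

    jets = []
    for i in range(half, n_eta - half):
        for j in range(n_phi):
            cols = (j + offs) % n_phi
            neighbourhood = sums[i - half:i + half + 1][:, cols]
            if sums[i, j] > threshold and sums[i, j] >= neighbourhood.max():
                jets.append((i, j, sums[i, j]))  # (eta index, phi index, weighted ET)
    return jets
```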

2.2 Features and Challenges


The jFEX board is designed as an ATCA board populated with four Xilinx
UltraScale XCVU190FLGA2577 FPGAs as processors. Each of these FPGAs provides 120
Multi-Gigabit Transceivers (MGTs, high-speed receiver-transmitter pairs for
serial data). The line rate of the input data is still under discussion, but the
jFEX is designed to support line rates of 12.8 Gbps. At this line rate the total
input data bandwidth per jFEX board would be 3.1 Tbps.

The jFEX PCB layout has in total more than 16000 connections, where a
high number of high-speed data lines from the optical receivers to the processors,
and from each processor to its neighbours, have to be routed paying attention to
signal integrity issues (avoiding cross-talk). For the stack-up of the 24-layer PCB
the high-speed material MEGTRON6 was used for the signal layers, which are
alternated with ground layers. The processors need to share the data that they
receive directly with their neighbours. To avoid passive splitting, which would
affect the signal quality, a feature of the MGTs of the processor FPGAs is used.
The incoming serial data is digitized in the analogue front-end of the receiver
and a connection to the transmitter of the MGT channel allows the received
data to be re-transmitted to the neighbouring processor before it is decoded.
This mechanism for data duplication was proven to work reliably with a bit error
rate test (BERT), using the Xilinx “iBERT” IP Core on a Xilinx UltraScale
VCU110 evaluation board. The test was run down to a bit error rate of less than
2.15 × 10^−16 at a line rate of 28 Gbps without seeing a single error.

2.3 Signal Integrity Simulations


The layout of the jFEX PCB was accompanied by signal integrity simulations for
the high-speed data links. To evaluate the simulation results, the
recommendations of the SFP+ specification [4] were taken for comparison, because
it is an industrial standard with similar line rates and use cases. The left
plot in Fig. 2 shows the simulation results for the S11 parameters. Only a few
traces exceed the recommendation for maximum channel return by less than 2 dB.
The simulation results for the S21 parameter are shown in the right plot of
Fig. 2. The jFEX board was designed to have very low attenuation, e.g. by
choosing the material MEGTRON6 for its low dielectric constant, but the
specification recommends a minimum attenuation to damp occurring reflections.
The channel transfer is about 5 dB “too good”, as its attenuation is lower than
the recommendation, which should be no problem as long as the reflections are
low.

Fig. 2. Results of signal integrity simulations. Left: S11 parameter (channel return);
Right: S21 parameter (channel transfer)[5].

2.4 Link-Speed Tests


The first jFEX prototype (see Fig. 1) was delivered in December 2016. Link-speed
tests with other modules, the LATOME (LAr Trigger prOcessing MEzzanine)
and the FTM (FEX Test Module), were performed in March at CERN. These
tests were done with two different data sets, pseudo-random data (Xilinx iBERT
IP Core with PRBS31) and a simple custom protocol (encoded with 8B10B),
at the two line rates of 11.2 Gbps and 12.8 Gbps. The iBERT tests were run on
48 links with the LATOME and 60 links with the FTM, down to a bit error rate of
better than 1 × 10^−15, without seeing a single error.

2.5 Power Integrity Simulations


Because of the high power consumption of the processor FPGAs, the PCB design was
also accompanied by power integrity simulations. The main focus here was on the
voltage drop on the power planes of the processor FPGA supply voltages, which
are specified to have a tolerance of 3%. By optimizing the PCB layout with
larger power planes and a higher copper thickness (up to 105 µm), the voltage
drop on the power planes was reduced, e.g. for the VCCINT voltage to 12 mV,
which is less than half of the specification (30 mV).

2.6 Conclusions and Outlook


The challenging luminosity conditions expected for Run 3 of the LHC require
an upgrade of the ATLAS Level-1 Calorimeter trigger system. The jFEX, one of
the new sub-systems, is designed to have an input bandwidth of up to 3.1 Tbps.
Each jFEX module is equipped with four Xilinx Ultrascale processor FPGAs.
The PCB layout was accompanied and checked by power, thermal and signal
integrity simulations. The first prototype was produced in December 2016 and
the power and thermal measurements of the board itself and the link-speed tests
with other modules as data sources have been very successful so far. The final
production of the full system is scheduled for July 2018 and the installation and
commissioning in ATLAS will take place before the LHC restart in 2021.

References
1. ATLAS Collaboration: The ATLAS detector. JINST 3 S08003 (2008). https://cds.
cern.ch/record/1129811
2. ATLAS Collaboration: Technical Design Report for the Phase-I Upgrade of
the ATLAS TDAQ System, CERN-LHCC-2013-018. https://cds.cern.ch/record/
1602235
3. Andrei, V.: The Phase-I upgrade of the ATLAS first level calorimeter trigger. In:
Proceedings of TIPP (2017)
4. SFF Committee: SFF-8431 Specifications for Enhanced Small Form Factor Plug-
gable Module SFP+. ftp://ftp.seagate.com/sff
5. From ATL-DAQ-PROC-2017-018. Published with permission by CERN
The Phase-1 Upgrade of the ATLAS
Level-1 Endcap Muon Trigger

Shunichi Akatsuka(B)
on behalf of the ATLAS Collaboration

Kyoto University, Kyoto, Japan


shunichi.akatsuka@cern.ch

Abstract. The Level-1 muon trigger for the ATLAS experiment identifies muons
with high transverse momentum. Under the high-luminosity conditions of LHC
Run 3, a more powerful trigger system is needed to reject the backgrounds. A new
trigger processor board for Run 3 has been produced to achieve the required
performance by combining information from five different detectors. The
performance of the new board and the readout system has been confirmed in a beam
test. New algorithms that make use of the new detectors have been developed and
are tested with MC simulation. With the new algorithms, the trigger rate is
estimated to become lower than the required rate in Run 3.

1 Introduction
LHC Run 3 is planned to start in 2021, with an instantaneous luminosity of
3.0 × 10^34 cm^−2 s^−1, which is twice as much as the luminosity in the current run
(Run 2). Despite the higher event rate, the data recording rate will not increase
significantly. In this situation, the requirements on the trigger system will be
more severe. The ATLAS detector [1] needs an upgrade before LHC Run 3,
to enhance its performance to cope with these high luminosity conditions
(collectively this effort is known as the Phase-1 Upgrade).
The Level-1 trigger is at the first level in the ATLAS trigger system. It
consists of dedicated trigger processor hardware, performing fast selection for all
the collision events at 40 MHz to reduce the event rate down to 100 kHz within
a fixed latency of ∼2.5 µs. This paper focuses on the Phase-1 Upgrade of the
Level-1 muon endcap trigger system.

2 Phase-1 Upgrade of the Level-1 Muon Trigger


The total Level-1 trigger rate assigned for muons in Run 3 has been defined
to be 19 kHz, considering other triggers and physics requirements [2]. A large
fraction of the muon trigger rate will be occupied by the lowest-threshold un-
prescaled Level-1 single muon trigger (primary muon trigger). In Run 3, the
primary muon trigger should have a transverse momentum (pT ) threshold of
20 GeV and a rate below 15 kHz at Run 3 peak luminosity, as well as high
trigger efficiency for muons with pT over the threshold. Without an upgrade of
the trigger system, the rate for this trigger is expected to exceed 28 kHz. Thus a
more powerful trigger strategy is needed to achieve a ∼50% rate reduction while
keeping the trigger threshold and efficiency.
From previous studies of the Run 1 muon trigger performance, it is known that
more than 90% of the muon trigger candidates in the endcap region are due to
background events [2]. The major part of these background triggers is due to
events with no associated reconstructed muons. These background triggers are
known as “fake” triggers, caused by charged particles emerging from the beam
pipe. Other background triggers are due to muons with pT below the threshold.
The main strategy of the upgrade is to eliminate the fake triggers and the
low-pT muons by implementing a powerful trigger algorithm using several new
detectors introduced in Run 3.


Fig. 1. The ATLAS detector in the y − z plane. The interaction point (I.P.) is at the
origin, and the beam pipe corresponds to the z−axis. The four detectors that can be
used for coincidence triggering with the TGC Big Wheel are shown by the green boxes:
TGC EI [3], new RPC [3] (BIS7/8), Tile Calorimeter [5] and the New Small Wheel.
The red line shows a track made by a muon from the I.P., and the blue line shows a
track by a fake. The fake tracks do not coincide with hits in the detectors inside
the magnetic field. (From ATL-DAQ-PROC-2017-016. Published with permission by
CERN.)

The detectors for the endcap muon trigger in Run 3 are shown in Fig. 1. The
toroidal magnetic field bends the muon tracks, so that the pT can be calculated
from the track angles in the Thin-Gap Chambers (TGCs) [3] installed outside the
magnetic field (TGC Big Wheel). As shown in Fig. 1, the fake triggers create
muon-like tracks in the TGCs, but do not correspond to any hits in the detectors
inside the field. Thus the fake triggers can be eliminated by requiring a
coincidence between hits in the TGC Big Wheel and the detectors inside the
field. The largest area in the endcap region is covered by the New Small Wheel
(NSW) [4], which has high position and angle resolution. By exploiting this
high-resolution information, low-pT muon candidates can also be rejected very
effectively.

3 Development of the Trigger Processor/Readout System


A new trigger processor board, called the Sector Logic (SL), has been developed
to combine hit information from the detectors shown in Fig. 1. A Xilinx Kintex-7
XC7K325T has been chosen as the main processor FPGA, which has 16 multi-
gigabit GTX transceivers [6]. The new SL has 12 optical inputs and outputs for
the GTX transceivers, each operated at a transfer rate of 6.4 Gbps. These I/Os
for the GTX add up to 76.8 Gbps input data rate in total, which is fast enough
to receive information from the NSW and other detectors inside the magnetic
field. The new SL also has 14 input ports for G-Link connections, which are
mainly used to receive information from the TGCs.
The trigger decision information, as well as the raw information received by
the SL, are read out to debug and improve the trigger algorithm. The SROD, a
software-readout processor implemented on a commercial PC, collects data with
TCP/IP from 12 new SL boards. In the new system, another new board called
a TTC Fan-out provides the event ID information to the 12 SLs and the SROD.
The SROD checks the event IDs in the data received from the SLs, packs them
into a certain format, then sends them to the ATLAS Readout System (ROS).
A new PCI-express card has also been produced to send data to the ROS via the
S-LINK [7] protocol. An integration test of the readout system has been done
using the TGC detectors and the muon beam at the SPS beam facility. Stable
readout and data recording have been successfully demonstrated.

4 Development of the Trigger Algorithm


Two types of matching algorithm have been considered: the position matching
algorithm and the angle matching algorithm. The position matching algorithm
requires hits in both the TGC and the NSW to reject the fake triggers. With the
significantly high position resolution of the NSW (∼0.05 in η), low-pT
candidates can also be rejected. The angle matching algorithm makes use of the
high angular resolution of the NSW (∼1 mrad). This information can be used to
estimate the angles inside the magnetic field more accurately, which leads to a
better pT resolution.
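
A schematic software sketch of the matching idea follows; the numerical windows
below are invented placeholders, and the actual coincidence logic runs in the
Sector Logic FPGA with selections optimised from simulation.

```python
from dataclasses import dataclass
import math

# Illustrative coincidence windows only (not ATLAS values).
DETA_MAX, DPHI_MAX, DTHETA_MAX = 0.05, 0.05, 0.005

@dataclass
class Track:
    eta: float
    phi: float
    theta: float   # polar angle of the NSW segment or inferred TGC track

def delta_phi(a, b):
    d = a - b
    while d > math.pi:  d -= 2 * math.pi
    while d < -math.pi: d += 2 * math.pi
    return d

def matched(tgc: Track, nsw: Track, use_angle=True) -> bool:
    """Position (and optionally angle) matching between a TGC Big Wheel
    candidate and an NSW segment; fakes have no matching NSW segment."""
    pos_ok = (abs(tgc.eta - nsw.eta) < DETA_MAX and
              abs(delta_phi(tgc.phi, nsw.phi)) < DPHI_MAX)
    if not use_angle:
        return pos_ok
    return pos_ok and abs(tgc.theta - nsw.theta) < DTHETA_MAX
```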
The performance of the trigger logic has been tested using MC simulation
samples, and is validated with the data taken in 2016 at √s = 13 TeV. When both
the position and angle matching algorithms are applied, the low-pT candidates
are rejected to a large degree while keeping the efficiency for the events with
muons with pT > 20 GeV (Fig. 2). Note that the fake triggers are not included in
Fig. 2, as they do not have associated muon candidates and so the pT cannot be
defined. Together with the study on fake trigger rejection in the
TDAQ TDR [2], the trigger rate for muons with pT > 20 GeV at the luminosity of
3.0 × 10^34 cm^−2 s^−1 is estimated to become smaller than 13 kHz, which meets
the Run 3 requirement of 15 kHz.

[Figure 2 shows the number of L1_MU20 candidates versus offline muon pT
(0–40 GeV) for data at √s = 13 TeV with 25 ns bunch spacing and
1.3 < |ηRoI| < 2.4, comparing the Run-2 selection (BW + FI) with
BW + NSW(dη:dφ) and BW + NSW(dη:dφ & dη:dθ); ATLAS Preliminary, Phase-I
upgrade study.]

Fig. 2. pT distribution of the muons that passed the Level-1 muon trigger [8]. The
dashed line shows the distribution of the Level-1 trigger candidate muons in the Run
2 trigger system. The red (blue) histogram shows the distribution after the position
(position + angle) matching algorithm. Low-pT candidates are rejected effectively, while
keeping 96% efficiency for events associated with muons with pT > 20 GeV. (From
ATL-DAQ-PROC-2017-016. Published with permission by CERN.)

5 Conclusion
A new trigger processor board for the Level-1 endcap muon trigger in Run 3 has
been produced to combine information from five different detectors. Together
with the new readout system, the performance of the board has been verified
by a beam test. New trigger algorithms for Run 3 have been developed: the
position matching and the angle matching algorithms. By applying both of the
new algorithms, the trigger rate of the primary muon trigger will be 13 kHz at
Run-3 peak luminosity, which meets the ATLAS trigger requirement.

References
1. ATLAS Collaboration: The ATLAS experiment at the CERN Large Hadron Col-
lider. JINST 3, S08003 (2008)
2. ATLAS Collaboration: Technical Design Report for the Phase-I Upgrade of the
ATLAS TDAQ System, CERN-LHCC-2013-018
3. ATLAS Collaboration: ATLAS muon spectrometer: Technical Design Report,
CERN-LHCC-97-022
4. Kawamoto, T., et al.: New Small Wheel Technical Design Report, CERN-LHCC-
2013-006
5. ATLAS Collaboration: ATLAS tile calorimeter: Technical Design Report, CERN-
LHCC-96-042
6. Xilinx Inc.: 7 Series FPGAs GTX/GTH Transceivers User Guide. https://www.xilinx.com/support/documentation/user_guides/ug476_7Series_Transceivers.pdf
7. van der Bij, H.C., et al.: S-LINK, a data link interface specification for the LHC
era. IEEE Trans. Nucl. Sci. 44, 398 (1997)
8. ATLAS Collaboration: L1 Muon Trigger Public Results. https://twiki.cern.ch/twiki/bin/view/AtlasPublic/L1MuonTriggerPublicResults
Modeling Resource Utilization of a Large
Data Acquisition System

Alejandro Santos1,2(B) , Pedro Javier Garcı́a3 , Wainer Vandelli1 ,


and Holger Fröning2
1
Physics Department, CERN, Geneva, Switzerland
{alejandro.santos,wainer.vandelli}@cern.ch
2
Institute of Computer Engineering, Ruprecht-Karls University of Heidelberg,
Heidelberg, Germany
holger.froening@ziti.uni-heidelberg.de
3
Computing Systems Department, University of Castilla-La Mancha,
Albacete, Spain
PedroJavier.Garcia@uclm.es

Abstract. The ATLAS ‘Phase-II’ upgrade, scheduled to start in


2024, will significantly change the requirements under which the data-
acquisition system operates. The input data rate, currently fixed around
150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the chal-
lenging conditions, and exploit the capabilities of newer technologies, a
number of architectural changes are under consideration. Of particular
interest is a new component, known as the Storage Handler, which will
provide a large buffer area decoupling real-time data taking from event
filtering. Dynamic operational models of the upgraded system can be
used to identify the required resources and to select optimal techniques.
In order to achieve a robust and dependable model, the current data-
acquisition architecture has been used as a test case. This makes it pos-
sible to verify and calibrate the model against real operation data. Such
a model can then be evolved toward the future ATLAS Phase-II architec-
ture. In this paper we introduce the current and upgraded ATLAS data-
acquisition system architectures. We discuss the modeling techniques in
use and their implementation. We will show that our model reproduces
the current data-acquisition system’s operational behavior and present
the plans and initial results for Phase-II system model evolution.

Keywords: Simulation model · OMNeT++ · Data acquisition

1 Introduction
Data-acquisition systems for high-energy physics experiments have demanding
computing resource requirements. They are complex systems, needing to process
data in real time. The ATLAS experiment [1] at CERN will be facing new
requirements in terms of data throughput for the upgrade starting in 2024.
From ATL-DAQ-PROC-2017-019. Published with permission by CERN.

There is still a significant uncertainty with respect to the technologies to be


used for the new system implementation and their availability at the time of
upgrade. The existing data-acquisition system has proven to be adequate for the
current experiment conditions, but it will undergo major changes to fulfill the
new requirements. The work presented in this paper explores the use of discrete
simulations to model and study data-acquisition systems. Ultimately, the aim is
modeling the future system to explore architecture, provisioning and advanced
techniques such as compression and storage under different technology scenarios.
In order to develop a trustworthy model the current data acquisition system is
analyzed first. The results of simulating the current TDAQ system with that
model will be presented.

2 Current ATLAS Data Acquisition System


The existing ATLAS Trigger and Data Acquisition system (TDAQ) [2] selects
relevant data for the experiment's goals in real time. The architecture of the
current TDAQ system is shown in Fig. 1. Data are transferred from the detector
in the form of an event through serial links. An event is composed of many
fragments, one for each serial link.

Fig. 1. Current ATLAS Trigger and Data Acquisition architecture.

The TDAQ system has a two-level trigger system. The first level is implemented
with custom electronics, reducing the 40 MHz event rate to 100 kHz. The second
level, called the High-Level Trigger (HLT), is implemented by a farm of
ond level, called the High-Level Trigger (HLT), is implemented by a farm of
commodity servers connected via Ethernet network. It reduces the event rate
from 100 kHz to 1 kHz by selecting interesting events which are then sent to
permanent storage.
The HLT is coordinated by the High-Level Trigger Supervisor (HLTSV),
a dedicated software process which assigns each arriving event to an available
Processing Unit (PU). Each PU is executed on a dedicated HLT CPU core and
analyzes events by reading their fragments from the Readout System (ROS).
The ROS system provides an interface between the ATLAS detector custom
electronics and the commodity Ethernet network. The ROS also provides buffer-
ing of unprocessed fragments, implemented as a distributed computer system of
many machines. Specific physics analysis algorithms executed by the PU pro-
cess request individual fragments from individual ROS machines, but not all
fragments are required to process an event. Dataflow to and from PU processes
on each HLT server is coordinated by a Data Collection Manager (DCM) process.
Each DCM provides the arbitration of the network access for PU processes.
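
As a back-of-the-envelope illustration of this dataflow (not the paper's
OMNeT++ model, which is described later), the toy Python sketch below assigns
Level-1 accepted events to the earliest-available Processing Unit; the farm
size, rates and processing times are placeholders rather than ATLAS numbers.

```python
import heapq
import random

def simulate_hlt(n_events=100_000, n_pus=2000, l1_rate_hz=100e3,
                 mean_proc_s=0.015):
    """Toy model of the HLT farm: Level-1 accepted events arrive as a Poisson
    process and each is assigned to the earliest-available Processing Unit.
    Returns the mean time an event spends waiting for a free PU."""
    random.seed(1)
    free_times = [0.0] * n_pus
    heapq.heapify(free_times)                 # min-heap of PU next-free times
    t, total_wait = 0.0, 0.0
    for _ in range(n_events):
        t += random.expovariate(l1_rate_hz)             # next Level-1 accept
        earliest = heapq.heappop(free_times)            # earliest-free PU
        start = max(t, earliest)                        # may have to queue
        total_wait += start - t
        proc = random.expovariate(1.0 / mean_proc_s)    # processing delay
        heapq.heappush(free_times, start + proc)
    return total_wait / n_events

# print(simulate_hlt())
```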

3 The Future ATLAS Data Acquisition System


The baseline architecture for the Phase-II ATLAS data-acquisition system is
shown in Fig. 2. The Level-0 trigger reduces the event rate from 40 MHz to 1 MHz
and the Event Filter (EF) reduces the event rate from 1 MHz to 10 kHz. Data
acquisition challenges in the upgraded system include the higher data throughput
of 5 TB/s and the greater processing power required for the EF. One key
component of the future ATLAS system is the Storage Handler, which will provide
temporary buffering for event data. The equivalent component in the current
system is the ROS, and the study of a ROS model will bring us better
understanding of the Storage Handler.

Fig. 2. Future ATLAS Trigger and Data Acquisition architecture.

4 The Model for the Current Data Acquisition System


Figure 3 shows the implementation of the simulation model for the current ATLAS
data-acquisition system in OMNeT++ [3]. The model represents a simplified
version of the current TDAQ system. The network is assumed to be ideal, with
infinite capacity and no packet loss. The simulation assumes that, for small
intervals of time, the conditions of the experiment remain constant. The only
modeled delay is the event processing delay for the PU processes.

Fig. 3. OMNeT++ simulation model for the simplified ATLAS TDAQ system.
Input values for the simulation are extracted from real ATLAS operational
monitoring data, and include the incoming data rates, fragment sizes, and
processing delays for each PU. Another input value is the ROS request size,
i.e. the distribution of the number of requested fragments per ROS. These
operational data are processed in several steps. First, outliers in the data are
removed by discarding values outside the outer fences [4]. Next, values are
averaged in five-minute intervals. Processing times are already averaged over
smaller time intervals and have to be normalized before being re-averaged.
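
A minimal sketch of this preprocessing, assuming Tukey's outer fences
(Q1 − 3·IQR, Q3 + 3·IQR) and a pandas time series; the resampling rule and the
hint about weighting are illustrative assumptions:

```python
import pandas as pd

def clean_and_average(series, rule="5min"):
    """Remove outer-fence outliers, then average in fixed time intervals.

    series : pandas Series of a monitored quantity, indexed by timestamp
    rule   : resampling interval (five minutes, as in the text)
    """
    q1, q3 = series.quantile([0.25, 0.75])
    iqr = q3 - q1
    lower, upper = q1 - 3.0 * iqr, q3 + 3.0 * iqr      # outer fences
    kept = series[(series >= lower) & (series <= upper)]
    return kept.resample(rule).mean()

# Hypothetical usage: processing times already averaged over short windows
# would be weighted by the number of processed events before re-averaging.
# avg = clean_and_average(rate_series)
```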

Fig. 4. Comparison of the simulation results and real data for the average number of
fragments in the ROS and the average output bandwidth of the ROS.

5 Simulation Results
Simulations are executed on a Xeon E5645 2.4 GHz machine with 24 GB of RAM.
Each simulation is executed independently for 60 simulated seconds and takes
∼6 h to complete. In total, 24 simulations are executed over 2 h of consecutive
data. Figure 4 shows some of the simulation results, with an outlier at minute
∼70: the real system stopped due to external conditions and the simulation does
not reproduce this behavior. The number-of-fragments results differ because of
delays of ∼10 ms that are missing in the model, and the output-bandwidth results
differ due to the low resolution of the real data and network retransmissions.

6 Conclusion
A simulation model has been developed to study the behavior of the current
ATLAS TDAQ system. Results show a good and stable agreement between sim-
ulation and real data, with a relative error below 5%. Simulation results can
be further improved by adding more accurate simulation of components of the
TDAQ system and network latencies to the model. It can then be used as the
basis for studying the behavior of the candidate architectures for the new
system.

References
1. ATLAS Collaboration: Performance of the ATLAS detector using first collision data.
JHEP 09, 056 (2010)
2. Pozo Astigarraga, M.E., (on behalf of the ATLAS Collaboration): Evolution of the
ATLAS trigger and data acquisition system. In: Journal of Physics: Conference
Series, vol. 608, p. 012006. IOP Publishing (2015)
3. Varga, A.: OMNeT++. In: Modeling and Tools for Network Simulation, pp. 35–59.
Springer, Heidelberg (2010)
4. Seo, S.: A review and comparison of methods for detecting outliers in univariate
data sets. Master’s thesis, University of Pittsburgh (2006)
The Phase-I Upgrade of the ATLAS First
Level Calorimeter Trigger

Victor Andrei(B)
on behalf of the ATLAS Collaboration

Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany


andrei@kip.uni-heidelberg.de

Abstract. The ATLAS Level-1 calorimeter trigger is planning a series of


upgrades in order to face the challenges posed by the upcoming increase
of the LHC luminosity. The trigger upgrade will benefit from new front-
end electronics for parts of the calorimeter that provide the trigger sys-
tem with digital data with a tenfold increase in granularity. This makes
possible the implementation of more efficient algorithms than currently
used to maintain the low trigger thresholds at much harsher LHC colli-
sion conditions. The Level-1 calorimeter system upgrade consists of an
active and a passive system for digital data distribution, and three differ-
ent Feature Extractor systems which run complex algorithms to identify
various physics object candidates. The algorithms are implemented in
firmware on custom electronics boards with up to four high speed pro-
cessing FPGAs. The main characteristics of the electronic boards are
a high input bandwidth, up to several TB/s per module, implemented
through optical receivers, and a large number of on-board tracks provid-
ing up to 12.8 Gb/s high speed connections between the receivers and
the FPGAs as well as between the FPGAs for data sharing. Prototypes
have been built and extensively tested, to prepare for the final design
steps and the production of the modules. The contribution will give an
overview of the system and present the module designs and results from
tests with the prototypes.

1 Introduction
During the Run 3 data-taking period (starting in 2021), the Large Hadron
Collider (LHC) will increase the current instantaneous luminosity by almost a
factor of two (i.e. to ∼2.5 × 10^34 cm^−2 s^−1), to substantially enhance its
physics potential. The luminosity upgrade will lead to a higher number of
interactions per
bunch-crossing at the ATLAS detector [1] than the design values of the cur-
rent trigger system. In order to maintain a high event selection efficiency in
the increased luminosity environment, the ATLAS Level-1 calorimeter trigger
(L1Calo) [2] will be upgraded with new object-finding processors. These will run
more efficient identification algorithms on finer granularity calorimeter informa-
tion than is currently available, preserving the low energy trigger thresholds of
the current Run 2 system [3]. This paper presents an overview of the Phase-I
upgrade of the L1Calo system for LHC Run 3 and the development status of the
new L1Calo hardware components.

2 L1Calo Phase-I Upgrade

L1Calo is a hardware-based, pipelined system that identifies various physics
object candidates in the Liquid Argon (LAr) and Tile calorimeters.
In Run 2, the object identification is based on coarse-granularity analogue
trigger-tower input, which describes transverse energy (ET) deposits in
calorimeter areas of typically Δη × Δφ = 0.1 × 0.1. The system consists of a
PreProcessor, which digitises the trigger-tower signals and extracts the ET
value from each pulse, a Cluster Processor (CP) and a Jet/Energy-Sum Processor
(JEP), which respectively use 0.1 × 0.1 and 0.2 × 0.2 ET input to identify e/γ,
τ's and jets and to compute various global energy sums and the missing ET.

Fig. 1. L1Calo architecture in Run 3. Components shown in yellow and orange are
part of the Phase-I upgrade [4].
After Run 2, the L1Calo system will be upgraded with three new Feature
Extractor (FEX) systems, to increase the discriminatory power of the trigger
at the higher LHC luminosity: the electromagnetic Feature Extractor (eFEX),
the jet Feature Extractor (jFEX) and the global Feature Extractor (gFEX) (see
Fig. 1). The input to the FEXes will be entirely digital. The front-end electronics
of the LAr calorimeter will also be upgraded during Phase-I, providing L1Calo
with finer granularity digital information. For each trigger-tower, ten so-called
SuperCells will provide ET information from the four longitudinal calorimeter
layers [5]. The Tile front-end electronics will not be upgraded to send digital data
until Phase-II. To accommodate these plans, Tile Rear Extension (TREX) [6]
modules will be developed and installed in the PreProcessor system, providing
the FEXes with Tile digitised results. In total, the FEXes will receive the digital
calorimeter data via ∼7100 high-speed optical links running at up to 12.8 Gb/s.
A Fibre-Optic Exchange (FOX) plant will have the task of reorganising the
optical fibre input, from the mapping employed by the source systems to the
one required by each destination FEX.
The eFEX and jFEX will perform similar investigations to the CP and JEP,
but with more complex sliding-window algorithms that make use of higher gran-
ularity input. The eFEX will use the SuperCell data from |η| < 2.5 to better
isolate and analyse the electromagnetic shower shapes, while the jFEX will run
Gaussian-weighted algorithms to improve jet identification using mostly 0.1 × 0.1
digital trigger-tower data from |η| < 4.9. In total, 24 eFEX modules in two

Fig. 2. The FEX prototype modules [4].

ATCA shelves and 7 jFEX modules in one ATCA shelf will be needed for the
complete system. The gFEX [7] will receive 0.2 × 0.2 ET -sum data from the
entire calorimeter in a single ATCA module, to identify large-radius jets and
compute various global event observables. All of the FEXes will send results in
the form of Trigger Objects (TOBs) to the Level-1 Topological Trigger Proces-
sor (L1Topo), and read out data to the Data Acquisition (DAQ) system, via
optical links running at 12.8 Gb/s and 9.6 Gb/s, respectively. On the eFEX
and jFEX, the transfer of TOBs and readout information is realised via custom
ATCA Hub-ROD boards (see Fig. 1).
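The sliding-window idea behind these algorithms can be sketched in a few lines. The C++ fragment below is only a toy illustration under assumed parameters (the grid size, the 3 × 3 window, the 10 GeV threshold and the tie-breaking rule are arbitrary choices), not the eFEX or jFEX firmware: a tower is accepted as a seed when it is a local maximum of its window and the windowed ET sum passes a threshold.

```cpp
// Toy sliding-window seed finder on an eta-phi grid of transverse energies.
// All dimensions and thresholds are illustrative assumptions.
#include <array>
#include <cstdio>

constexpr int NETA = 10, NPHI = 8;
using EtMap = std::array<std::array<double, NPHI>, NETA>;

// A tower is a seed if it is a local maximum of its 3x3 window and the
// windowed ET sum exceeds the threshold (10 GeV here, arbitrary).
bool isSeed(const EtMap& et, int ie, int ip, double threshold = 10.0) {
    double sum = 0.0;
    for (int de = -1; de <= 1; ++de) {
        for (int dp = -1; dp <= 1; ++dp) {
            int je = ie + de;
            int jp = (ip + dp + NPHI) % NPHI;           // phi wraps around
            if (je < 0 || je >= NETA) continue;          // eta edge
            if (et[je][jp] > et[ie][ip]) return false;   // not a local maximum
            sum += et[je][jp];
        }
    }
    return sum > threshold;
}

int main() {
    EtMap et{};                  // all towers at zero
    et[4][3] = 12.0;             // one energetic deposit
    et[4][4] = 3.0;
    for (int ie = 0; ie < NETA; ++ie)
        for (int ip = 0; ip < NPHI; ++ip)
            if (isSeed(et, ie, ip))
                std::printf("seed at eta index %d, phi index %d\n", ie, ip);
}
```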

3 Prototype Modules and Tests

Prototype modules for each FEX processor have been manufactured and assem-
bled (see Fig. 2), and their functionality has been verified. Each module is a
highly-dense ATCA board design, hosting a large number of high-speed devices
that have to handle and process an input bandwidth of up to several TB/s.
The eFEX prototype is a 22-layer board with four processing FPGAs (Xilinx
Virtex-7 XC7VX550T), one readout FPGA (Xilinx Virtex-7 XC7VX330T), 17
MiniPOD optical transceivers, and up to 450 on-board multi-Gb/s differential
signals. In addition, 94 high-speed electrical buffers duplicate the input calorime-
ter data between the processing FPGAs, as required by the eFEX sliding-window
algorithms [3]. Three eFEX prototypes have been manufactured and tested. The
high-speed optical links were tested at up to 11.2 Gb/s, both at CERN using
a LAr Digital Processing System (DPS) prototype and a FOX demonstrator,
to emulate the Run 3 configuration, and in the laboratory environment using
custom FEX Test Modules (FTMs) as data sources. On 99% of the input links
the observed bit error rate was less than 10^−14. For the other links, the errors
were traced to a few broken high-speed buffers, to sensitive fibre connections
and to poor routing on one input. Additional functionality such as the read-
out and the Timing, Trigger and Control (TTC) distribution, the IPBus and
IPMC operation or the simultaneous transmission over ∼360 on-board differen-
tial pairs was successfully tested. The module power consumption was measured
to be ∼280 W, with all of the FPGA Multi-Gigabit Transceivers (MGTs) active.

The maximum recorded module temperature was ∼67 °C, with all three proto-
types fully powered and in adjacent slots.
The jFEX prototype is a 24-layer board that hosts four processing FPGAs
(Xilinx Virtex UltraScale XCVU190), 12 MiniPODs, and up to 540 on-board
multi-Gb/s differential tracks. The board control is implemented via an exten-
sion mezzanine, which hosts among others a Xilinx Zynq-7000 FPGA based
card. Input data duplication is realised within each processing FPGA using the
Physical Medium Attached (PMA) loopback. One jFEX prototype has been
manufactured and so far only partially assembled and tested. Preliminary link
tests at up to 12.8 Gb/s, with the LAr DPS and the FTMs or in loopback mode,
showed very good and stable operation for all the tested inputs.
Two gFEX prototype versions have been manufactured. The last version,
shown in Fig. 2, is a 26-layer board with three processing FPGAs (Xilinx Vir-
tex UltraScale XCVU160), one control and monitoring FPGA (Xilinx Zynq
XC7Z100), 28 MiniPODs and a large number of on-board high-speed intercon-
nections. All of the optical I/O links of the prototype were successfully tested
at the maximum specified speeds with both the LAr DPS and in loopback
mode. The module’s power consumption was measured to be ∼300 W (with all
MGTs active), while the maximum recorded FPGA temperature was ∼67 °C.
The design of the next gFEX hardware iteration, the pre-production module,
has been recently completed. This features an increased number of PCB layers
(30) and MiniPODs (35), and newer generation FPGAs (UltraScale+).
The TREX prototype is currently being manufactured. This will be an 18-
layer VME rear transition module, mainly hosting one Xilinx Kintex UltraScale
FPGA (KU115) and four Samtec FireFly optical transmitters for sending data
to the FEXes.

4 Outlook
The prototyping and testing of the L1Calo modules for Phase-I will continue
during 2017, to guide preparation of the final designs. The installation of the final
modules in the experiment will take place during the LHC long shutdown starting in
2019, with the aim of being fully commissioned before the start of Run 3 in 2021.

References
1. ATLAS Collaboration: The ATLAS experiment at the CERN Large Hadron Col-
lider. JINST 3, S08003 (2008)
2. Achenbach, R., et al.: The ATLAS level-1 calorimeter trigger. JINST 3, P03001
(2008)
3. ATLAS Collaboration: Technical Design Report for the Phase-I Upgrade of the
ATLAS TDAQ System. CERN-LHCC-2013-018 (2013)
4. From ATL-DAQ-PROC-2017-017. Published with permission by CERN
5. ATLAS Collaboration: ATLAS Liquid Argon Calorimeter Phase-I Upgrade Techni-
cal Design Report. CERN-LHCC-2013-017 (2013)

6. Andrei, V., et al.: Tile Rear Extension module for the Phase-I upgrade of the ATLAS
L1Calo PreProcessor system. JINST 12, C03034 (2017)
7. Chen, H., et al.: The Prototype Design of gFEX - A Component of the L1Calo
Trigger for the ATLAS Phase-I Upgrade. ATL-DAQ-PROC-2016-046
The CMS Level-1 Calorimeter Trigger
Upgrade for LHC Run II

Alessandro Thea(B)
on behalf of the CMS Level-1 Trigger group

Rutherford Appleton Laboratory, Didcot, UK


alessandro.thea@cern.ch

Abstract. The CMS Level-1 Calorimeter Trigger was successfully


upgraded, commissioned and employed in the recording of LHC collisions
in 2016. The upgraded trigger is conceived to maximise the selection
performance in conditions of high luminosity and a large multiplicity of
simultaneous inelastic collisions per crossing. This is achieved through a
Time-Multiplexed architecture which enables the calorimeter data at full
spatial granularity of a single event to be processed by a single trigger
processor over multiple bunch crossings. The modular hardware design is
based on the µTCA standard. The calorimeter trigger processor boards
are equipped with Xilinx Virtex7 FPGAs and 10 Gbps optical links.
Sophisticated and innovative algorithms exploit the full event informa-
tion to reconstruct lepton and jet candidates. The commissioning and
running of the upgraded trigger will be presented with a summary of the
performance in 2016.

1 Introduction

Run II of the Large Hadron Collider (LHC) started in spring 2015 after a two-
year shutdown period. In October 2016 the LHC instantaneous luminosity
reached the record value of 1.5 × 10^34 cm−2 s−1, with a number of simultaneous
inelastic collisions per crossing (pile-up) of 50. The CMS experiment implements
a two-level trigger system to select the potentially interesting events among the
millions of collisions occurring every second: a hardware-based Level-1 (L1) trigger
that reduces the rate from 40 MHz to about 100 kHz, followed by a software-
based High Level Trigger (HLT). The overall reduction factor achieved is O(105 ).
The Level-1 CMS Calorimeter trigger receives coarse information on the trans-
verse energy (ET ) of the collision products from the electromagnetic (ECAL), the
hadronic (HCAL) and the forward hadronic calorimeters (HF) in the form of trig-
ger primitives. The Level-1 Trigger uses the primitives to build electron/photon,
τ , jet and energy sum trigger candidates. An upgrade of the L1 trigger system [2]
has been completed in view of further increase of the LHC luminosity, which is
expected to approach 2 × 10^34 cm−2 s−1 in 2017.


2 Architecture of the Upgraded Level-1 Trigger System


The performance requirements imposed by the high-luminosity high-pileup envi-
ronment of the LHC Run II led to the following conceptual choices: (a) Improve
the signal to background ratio for single physics object triggers by exploiting full
tower granularity through sophisticated trigger algorithms. Large FPGAs such
as Xilinx Virtex-7 have been chosen to provide enough computational power.
(b) Precise estimation of global quantities (e.g. missing transverse energy) by
streaming data from a single event into one FPGA. The calorimeter data collec-
tion is achieved via 1152 10 Gbps high-speed optical links. (c) Increase selectivity,
in the form of an expandable and flexible micro Global Trigger (μGT) system [6].
The μGT performs the final trigger decision based on the 12 electron and photon
candidates, 12 τ lepton candidates and 12 jets and energy sums.

Fig. 1. The Time-Multiplexed trigger architecture. From CMS CR-2017/198. Published with permission by CERN.

The original Time Multiplexed (TM) [3] design, shown in Fig. 1, is one of the
main novelties of the upgraded calorimeter trigger. The system is divided into two
consecutive processing layers: the first layer, composed of 18 Calorimeter Trigger
Processor boards (CTP7) [4], is responsible for tower-level pre-processing and
data formatting e.g. ECAL and HCAL tower energy sum calculation, energy cal-
ibration and calculation of the ratio between HCAL and ECAL deposits (H/E).
In the second layer 9 Master Processor cards (MP7) [5] run the sophisticated
algorithms to find particle candidates and calculate the global energy sums.
Each MP7 receives the whole event from the layer-1 cards at trigger tower (TT)
granularity with no boundaries. The algorithms have fixed latency and are fully
pipelined: the processing starts as soon as the minimum amount of data is avail-
able. The results are sent to the demultiplexer board, also an MP7, for final
formatting before being forwarded to the μGT.
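The routing principle of the time-multiplexed design can be pictured with a minimal sketch, assuming a simple round-robin assignment over the nine main processors; the struct layout and constants below are invented for the example and are not the CMS implementation.

```cpp
// Minimal sketch of time multiplexing: every bunch crossing, the complete
// event is steered to one of N layer-2 processors in turn, so each processor
// receives the full event but only every Nth crossing.
#include <cstdint>
#include <vector>
#include <iostream>

struct Event { std::uint64_t bx; std::vector<std::uint32_t> towerEt; };

constexpr unsigned kLayer2Nodes = 9;  // number of MP7 main processors

// Destination layer-2 node for a given bunch crossing (round-robin).
unsigned destinationNode(std::uint64_t bx) { return bx % kLayer2Nodes; }

int main() {
    for (std::uint64_t bx = 0; bx < 20; ++bx) {
        Event ev{bx, std::vector<std::uint32_t>(72, 0)};  // one ring of 72 TTs
        std::cout << "bx " << ev.bx << " -> layer-2 node "
                  << destinationNode(ev.bx) << '\n';
    }
}
```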

3 Advanced Processor Boards


The μTCA telecom standard has been adopted trigger-wide for the proces-
sor board design. The CTP7 and MP7 cards have been designed as generic-
processing engines equipped with multiple Gbps transceivers. The CTP7 board
is built around a Xilinx Virtex-7 XC7VX690T FPGA with a total of 67 input
and 48 output 10 Gbps optical links (Avago miniPods and microPods). Commu-
nications and support functions are managed by embedded Linux OS running on
the additional Xilinx Zynq SoC XC7Z045 FPGA (Dual ARM Cortex-A9 CPU).
The MP7 also features a Xilinx Virtex-7 XC7VX690T FPGA. A total of 72 Rx
and 72 Tx 10.3 Gbps optical links provided by two sets of Avago miniPODs
connected through 4 MPO connectors are placed on the front panel. An Atmel
32-bit MMC with microSDHC support handles firmware upload. These AMC
boards are housed in μTCA crates, where an AMC13 card [7] provides clock
and timing signals, data readout including monitoring as well as the L1 trigger
decision via LVDS. Data to configure the lookup tables (LUTs) and registers
are sent via Ethernet to a NAT-MCH μTCA Carrier Hub using the IPbus [8]
protocol standard (MP7) or a custom client-server protocol (CTP7).
Although the calorimeter data format did not significantly change from the
Run 1 system, the communication was upgraded from electrical to optical. The
ECAL Trigger Concentrator Cards were retrofitted with 576 Optical Synchro-
nization and Link Boards (oSLBs) transmitting data over 4.8 Gbps links. The
HCAL and HF electronics responsible for the trigger primitive generation were
replaced by a completely new μTCA-based system, for a total of 576 6.4 Gbps
links. The time multiplexing route from layer-1 to layer-2 is provided by a Molex
FlexPlane patch panel. 72-to-72 12-fiber MPO cables are routed in three enclo-
sures. This novel technology massively simplifies and shrinks fiber installations
at no extra cost.

4 Improved Calorimeter Trigger Algorithms

The algorithms of the upgraded trigger system exploit the full trigger tower
granularity and the access to the full event information. A dynamic clustering
has been designed to reconstruct lepton signatures precisely in the calorimeter
instead of using a sliding window with a fixed size. The advantage of a dynamic
technique is the construction of basic clusters that are combined to reconstruct
hadronic τ lepton candidates. An optimum-sized window is used to build particle
jet candidates directly from trigger towers. Another challenge addressed by the
new Level-1 system is the online determination of the pile-up energy without
the information from the tracking detectors. The pile-up mitigation scheme is optimized
for the trigger objects to remain robust against changing conditions.
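To make the difference with a fixed-size window concrete, the sketch below shows one possible dynamic clustering step: towers above a threshold are attached around a seed, so the cluster footprint follows the deposit. It is a hedged toy example with invented thresholds and grid size, not the algorithm implemented in the trigger firmware.

```cpp
// Toy dynamic clustering: attach neighbouring towers above threshold to a
// seed, so the cluster shape adapts to the energy deposit.
#include <array>
#include <vector>
#include <cstdio>

constexpr int NETA = 8, NPHI = 8;
using Grid = std::array<std::array<double, NPHI>, NETA>;
struct Tower { int ie, ip; double et; };

std::vector<Tower> buildCluster(const Grid& et, int seedEta, int seedPhi,
                                double towerThreshold = 0.5) {
    std::vector<Tower> cluster{{seedEta, seedPhi, et[seedEta][seedPhi]}};
    for (int de = -1; de <= 1; ++de) {
        for (int dp = -1; dp <= 1; ++dp) {
            if (de == 0 && dp == 0) continue;
            int ie = seedEta + de, ip = (seedPhi + dp + NPHI) % NPHI;
            if (ie < 0 || ie >= NETA) continue;
            if (et[ie][ip] > towerThreshold)            // keep only active towers
                cluster.push_back({ie, ip, et[ie][ip]});
        }
    }
    return cluster;
}

int main() {
    Grid et{};
    et[3][3] = 20.0; et[3][4] = 4.0; et[2][3] = 0.3;    // toy deposit
    auto cluster = buildCluster(et, 3, 3);
    double sum = 0.0;
    for (const auto& t : cluster) sum += t.et;
    std::printf("cluster of %zu towers, ET sum %.1f GeV\n", cluster.size(), sum);
}
```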
The firmware implementation of the algorithms was particularly challenging
as all the finder algorithms were to fit in a single Xilinx Virtex 7 FPGA together
with the control logic and the infrastructure for testing. In the TM approach the
data from the calorimeters are reorganized into consecutive rings of 72 TTs in φ

by layer-1, then transmitted to layer-2 in pairs of positive and negative eta, every
bunch crossing. For the 32 bits received on each link, the internal computing
frequency achieved is 240 MHz. The structure of the firmware is organized so
that consecutive algorithm steps converge in the core of the chip where the
sorting of the trigger candidates takes place. The firmware obtained is compact
and easily maintainable. Since the start of the Run II period, the firmware was
rebuilt more than 50 times successfully. The total latency of the upgraded system
remains under 1.2 µs.

5 Commissioning, Operation and Performance


of the Calorimeter Trigger

The upgraded trigger electronics was installed in 2013–2015, during the first long
shutdown of the LHC, in parallel to the existing trigger system. The electronics
and the algorithms were used to record data during collisions in autumn 2015 in
parasitic mode. The validation of the algorithms was performed by comparing the col-
lected data with bit-level software emulation. The commissioning of the system
as primary CMS trigger was completed in early 2016 during the CMS cosmics
data-taking campaign. The first collisions in April 2016 were successfully acquired
by the upgraded trigger. The calibration settings and the trigger thresholds were
updated during the year to retain optimal selection performance with the steady
increase of the LHC instantaneous luminosity. At the end of the 2016 data-taking
period, in October 2016, more than 40 fb−1 were successfully recorded with the
upgraded trigger. As a consequence of the upgrade the trigger thresholds, at
1.5 × 10^34 cm−2 s−1 instantaneous LHC luminosity, were kept under ∼35 GeV
for the single electron trigger, ∼25 GeV and ∼12 GeV for the double electron
trigger legs. The new double τ lepton trigger threshold remained below 32 GeV.

6 Conclusions

The new CMS Level-1 Calorimeter trigger has successfully completed the first
year of operations. Development, installation and commissioning of the hard-
ware and the selection algorithms were conducted under a very tight schedule.
The performance of the new system throughout the 2016 LHC proton run was
excellent: despite the higher luminosity and the harsher environment the thresh-
olds for single object triggers were maintained low enough for a successful CMS
physics programme. The upgraded trigger experience will provide lessons for the
design of the future upgrade planned for the LHC high luminosity run in the
next decade.

References
1. Chatrchyan, S., et al.: CMS TDR CERN/LHCC 2000-38, CMS-TDR-006-1
2. Chatrchyan, S., et al.: CERN-LHCC-2013-011, CMS-TDR-12 (2013)
3. Baber, M., et al.: JINST 9(01), C01006 (2014)
4. Svetek, A., et al.: JINST 11(02), C02011 (2016)
5. Imperial College London, MP7 homepage. http://www.hep.ph.ic.ac.uk/mp7
6. Wittmann, J., et al.: JINST 12(01), C01046 (2017)
7. Hazen, E., et al.: JINST 8, C12036 (2013)
8. Larrea, C.G., et al.: JINST 10(02), C02019 (2015)
The ATLAS Muon-to-Central-Trigger-
Processor-Interface (MUCTPI) Upgrade

R. Spiwoks1(&), A. Armbruster1, G. Carrillo-Montoya1,


M. Chelstowska1, P. Czodrowski1, P.-O. Deviveiros1, T. Eifert1,
N. Ellis1, G. Galster1,2, S. Haas1, L. Helary1, O. Lagkas Nikolos1,
A. Marzin1, T. Pauly1, V. Ryjov1, K. Schmieden1, M. Silva Oliveira1,
J. Stelzer1, P. Vichoudis1, and T. Wengler1
1 CERN, Geneva, Switzerland
Ralf.Spiwoks@cern.ch
2 Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark

Abstract. The Muon-to-Central-Trigger-Processor Interface is part of the


Level-1 trigger system of the ATLAS experiment at the Large Hadron Collider
at CERN. The upgrade of the Muon-to-Central Trigger Processor Interface will
be described. It will use optical input and provide full precision region-of-
interest information for muon candidates to the topological trigger processor of
the Level-1 trigger system. The new Muon-to-Central-Trigger-Processor Inter-
face will be implemented as an ATCA blade receiving 208 optical serial links
from the ATLAS muon trigger detectors. Two high-end processing FPGAs will
eliminate double counting of identical muon candidates in overlapping regions
and send candidate information to the topological trigger and multiplicities to a
third FPGA which will combine the candidate information, send muon multi-
plicities to the Central Trigger Processor and provide readout data to the ATLAS
data acquisition system. A System-on-Chip module will provide communication
with the ATLAS run control system for control, configuration and monitoring of
the new Muon-to-Central-Trigger-Processor Interface.

Keywords: Trigger · Data acquisition · Control · Embedded Linux

1 Introduction

1.1 The ATLAS Trigger System


The ATLAS experiment [1] is a general-purpose experiment at the Large Hadron
Collider (LHC) at CERN. It observes proton-proton collisions at a center-of-mass
energy of 13 TeV. With about 25 interactions in every bunch crossing (BC) of the LHC
beams every 25 ns, there are 10^9 interactions per second potentially producing inter-
esting physics. Therefore a trigger system is needed in order to select those events with
interesting physics content, which can then be recorded to permanent storage at a rea-
sonable rate. The ATLAS trigger system consists of a first-level trigger based on custom
electronics and firmware which reduces the event rate to a maximum of 100 kHz, and a


higher-level trigger system based on commercial-off-the-shelf computers and network


components and processing software which reduces the event rate to around 1 kHz.
The first level trigger, see Fig. 1, uses reduced-granularity information from the
calorimeters and from dedicated muon trigger detectors. The trigger information is
based on multiplicities and topologies of trigger candidate objects. The muon trigger is
based on Resistive Plate Chambers (RPC) in the barrel region and Thin-Gap Chambers
(TGC) in the endcap and forward region. The Muon-to-Central-Trigger-Processor
Interface (MUCTPI) [2] combines the muon candidate counts from RPC and TGC. The
Central Trigger Processor (CTP) combines all trigger object multiplicities from the
calorimeter and muon trigger, and the topology flags from the topological trigger, and
makes the final Level-1 decision based on rules described in a trigger menu. The Level-
1 trigger decision is sent back to the detector front-end electronics using the timing,
trigger, and control (TTC) system.

Fig. 1. The ATLAS level-1 trigger system

1.2 The MUCTPI


For each BC the MUCTPI receives up to two muon candidates from each of the 208
muon sectors, 64 in the barrel region and 144 in the endcap and forward region.
The MUCTPI counts muon candidates for six different pT-thresholds. It avoids double
counting of single muons that are detected in more than one muon sector due to
geometrical overlap of the muon chambers and the trajectory of the muon in the
magnetic field; this is called “overlap handling”. Figure 2 shows the geometrical
coverage of one of the 16 boards of the current MUCTPI with 4 barrel, 6 endcap and 3
forward sectors.
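The counting task of the MUCTPI can be illustrated with a simplified sketch: candidates resolved as duplicates in an overlap zone are skipped, and the rest contribute to the multiplicity of every threshold they pass. The data layout and the overlap flag below are assumptions made for the example; the real firmware works on the encoded sector words.

```cpp
// Simplified multiplicity counting with overlap handling (illustration only).
#include <array>
#include <vector>
#include <cstdio>

struct MuonCandidate {
    int  sector;       // 0..207
    int  ptThreshold;  // 0..5, highest pT threshold passed
    bool overlapVeto;  // true if flagged as a duplicate in an overlap zone
};

// Count candidates passing each of the six pT thresholds (inclusive).
std::array<unsigned, 6> countMultiplicities(const std::vector<MuonCandidate>& cands) {
    std::array<unsigned, 6> mult{};
    for (const auto& c : cands) {
        if (c.overlapVeto) continue;              // overlap handling
        for (int t = 0; t <= c.ptThreshold; ++t)  // thresholds are inclusive
            ++mult[t];
    }
    return mult;
}

int main() {
    std::vector<MuonCandidate> cands = {
        {12, 3, false},
        {13, 3, true},    // same muon seen by the neighbouring sector
        {150, 5, false},
    };
    auto mult = countMultiplicities(cands);
    for (int t = 0; t < 6; ++t)
        std::printf("threshold %d: multiplicity %u\n", t, mult[t]);
}
```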

Fig. 2. One half-octant of the current MUCTPI with 4 barrel, 6 endcap, and 3 forward sectors
indicating the possible overlap zones

2 ATLAS Upgrade

2.1 Upgrade Plans


The MUCTPI upgrade is part of the overall trigger upgrade of ATLAS on the road to
the high-luminosity LHC (HL-LHC), starting around 2025. The upgrade is in line with
the development of the New Small Wheel (NSW) of the muon trigger system [3]. The
required improvements to the MUCTPI are the following:
• Send full-precision information on muon candidates to the topological trigger
processor;
• Replace the electrical connections between the muon sector logics and the
MUCTPI by optical links, with the goal of removing bulky and difficult-to-maintain
cables and of increasing the bandwidth in order to send more candidates, more precise position
information and additional flags from muon identification algorithms;
• Allow the overlap handling to be improved by taking into account possible overlap
between octants, which is currently not possible;
• Fit within the current tight latency requirement of eight BC (200 ns);
• Be compatible with the ATLAS upgrades for the HL-LHC.

2.2 The New MUCTPI


The new MUCTPI, see Fig. 3, will be built as a single ATCA blade, compared to 18
VME modules in the current system. The new MUCTPI will receive 208 optical links
using fibre ribbons and optical receiver modules (Avago minipods). It will use two
state-of-the-art FPGAs (Xilinx Virtex Ultrascale) as Muon Sector Processors for the
overlap handling, counting of muon candidates, and providing muon candidates to the
topological processor. The counts are also passed to a third FPGA (Xilinx Kintex
Ultrascale) which will act as Trigger and Readout Processor and provide the total
count of muon candidates to the CTP and readout data to the data acquisition system.

Fig. 3. The new MUCTPI with two Muon sector processors (Xilinx Virtex Ultrascale), one
trigger and readout processor (Xilinx Kintex Ultrascale), and a control processor (Xilinx Zynq)

A Control Processor implemented by a Xilinx Zynq System-On-Chip (SoC) will


integrate the MUCTPI into the ATLAS run control system for sending control com-
mands, e.g. start, stop, pause, run calibration etc., loading configuration data, e.g.
lookup-table files, algorithm parameters, etc., and collecting monitoring data, e.g.
counters, selected event data, etc.

3 MUCTPI Run Control


3.1 RemoteBus Software
The processor part of the SoC is being used in order to communicate with the ATLAS
Trigger and Data Acquisition (TDAQ) run control system. A reliable protocol is
adopted for communication: TCP/IP. A client-server and request-response approach is
implemented with the client being a TDAQ controller running on a PC, and sending
requests, and the server being a process on the SoC, receiving the requests, processing
them, and sending responses. A synchronous approach is followed as with the previous
MUCTPI, and multiple clients and multi-threaded servers are allowed. This newly
designed software, RemoteBus, provides several modes of working:
• Single reads and writes from and to memory on the Muon Sector Processor and
Trigger&Readout FPGAs, as well as block read and write functions (as with the
previous MUCTPI using VME);
• Provide extensibility for user-defined functions, typically for more complex serial
protocols for auxiliary hardware, e.g. I2C, SPI, JTAG, etc.;
• Allow queuing of requests: bundle several requests before sending them together in
order to mitigate latency overhead due to network transport.
This software was named “RemoteBus” because it implements functions similar to
remote procedure call (RPC) and because it is similar to read and write operations as
with the previous MUCTPI using VME bus.

Every RemoteBus Client (thread) has its own TCP socket and its own RemoteBus
Server thread, see Fig. 4. The RemoteBus Server reads/writes from/to the other pro-
cessor FPGAs using the Xilinx AXI Chip2chip protocol [4] for communication
between FPGAs and executes functions for auxiliary hardware on the server. Some
requests are pre-defined in base classes implemented for communication between any
two computers, e.g. READ(N), WRITE(N). Additional requests are added depending
on the hardware of the server (i.e. the MUCTPI). All parameters, request and response,
are 32-bit data words, and are added into the message or retrieved from the message in
a stack-like way. Additional request types can be added as functions to the server and
client, using C++ inheritance. The Yocto/OpenEmbedded development framework [5]
is used for creating the Linux operating system, for compiling the application software
(RemoteBus) and for providing all files necessary to boot and run the SoC.
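The request format and the queuing of requests can be pictured with the following sketch. The class and method names are hypothetical and the TCP transport is stubbed out so the example stays self-contained; it only mirrors the ideas described above (32-bit words handled in a stack-like way, bundling of requests to amortise the network latency), not the actual ATLAS code.

```cpp
// Hypothetical RemoteBus-style client sketch: messages carry 32-bit words
// pushed/popped in a stack-like way, and several requests can be queued and
// sent together. The socket layer is intentionally omitted.
#include <cstdint>
#include <vector>
#include <cstdio>

enum class RequestType : std::uint32_t { Read = 1, Write = 2 };

struct Message {
    std::vector<std::uint32_t> words;
    void push(std::uint32_t w) { words.push_back(w); }
    std::uint32_t pop() { std::uint32_t w = words.back(); words.pop_back(); return w; }
};

class RemoteBusClientSketch {
public:
    // Queue a block-read request: type, address, word count.
    void queueRead(std::uint32_t address, std::uint32_t nWords) {
        Message m;
        m.push(static_cast<std::uint32_t>(RequestType::Read));
        m.push(address);
        m.push(nWords);
        queue_.push_back(m);
    }
    // "Send" the queued requests in one go; a real client would write the
    // bundled messages to its TCP socket and collect the responses.
    std::size_t flush() {
        std::size_t n = queue_.size();
        queue_.clear();
        return n;
    }
private:
    std::vector<Message> queue_;
};

int main() {
    RemoteBusClientSketch client;
    client.queueRead(0x0000, 16);   // bundle two reads to mitigate latency
    client.queueRead(0x1000, 16);
    std::printf("sent %zu bundled requests\n", client.flush());
}
```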

Fig. 4. RemoteBus software for run control using the SoC (Zynq)

Two derived classes “ZC706Client” and “ZC706Server” were implemented for the
Xilinx ZC706 (Zynq) evaluation board. Requests were added for the hardware of the
evaluation board, in particular for DC/DC converters, clock configuration, and
temperature/voltage monitors. The minimal latency for a request-response transaction
is around 75 µs. The bandwidth is limited by the Ethernet throughput and reaches about
50 Mbyte/s for 10 kword blocks, which is about 10 times more than the throughput of the
previous MUCTPI using VME. No particular effort at optimizing the network was
done. Running multiple clients or client threads is safe and increases performance.
RemoteBus is currently being used for testing the MUCTPI prototype.

3.2 Port of TDAQ Software on Embedded Linux


As an alternative approach, the ATLAS Level-1 Central Trigger team has started to
evaluate porting the ATLAS TDAQ run control software to embedded Linux. In that
case, the TDAQ controller would run directly on the processor part of the SoC. This
evaluation is using the Yocto/OpenEmbedded framework and is currently under way.

4 Conclusions

The new MUCTPI prototype became available at the start of May 2017 and is currently
being tested. The run control path has been tested with Xilinx Zynq evaluation boards.
RemoteBus software was developed with functions for accessing memory in the pro-
cessor FPGAs, as well as for auxiliary hardware. A port of the ATLAS TDAQ software
to Xilinx Zynq with embedded Linux is under way. The Yocto/OpenEmbedded
development framework is used for building the Linux operating system and the
RemoteBus software. In conclusion, trigger electronics are not only becoming fully
optical, much denser, and more intelligent for processing, but also more intelligent to
control.

References
1. The ATLAS Collaboration: The ATLAS experiment at the CERN large hadron collider.
J. Instrum. JINST 3, S08003 (2008)
2. Haas, S., et al.: The ATLAS Muon to central trigger processor interface. In: Proceedings of
Topical Workshop on Electronics for Particle Physics, CERN-2007-007 (2007)
3. Akatsuka, S., et al.: The phase-1 upgrade of the ATLAS Level-1 endcap muon trigger. In:
Proceedings of Topical Workshop on Electronics for Particle Physics, Springer Proceedings
in Physics 212 (2018)
4. Xilinx AXI Chip2Chip Protocol. https://www.xilinx.com/products/intellectual-property/axi-
chip2chip.html. Accessed 13 June 2017
5. Yocto Project. https://www.yoctoproject.org. Accessed 13 June 2017
Automated Load Balancing in the ATLAS
High-Performance Storage Software

Fabrice Le Goff(B) and Wainer Vandelli


On behalf of the ATLAS Collaboration

CERN, Geneva, Switzerland


fabrice.le.goff@cern.ch

Abstract. ATLAS [1] is one of the general purpose detectors observing


proton-proton collisions provided by the LHC [2] at CERN. The ATLAS
Trigger and Data Acquisition (TDAQ) system [3] is responsible for con-
veying the event data from the detector up to a permanent mass-storage
system provided by CERN. This work focuses on the Data Logger system
which lies at the end of the data flow path in the TDAQ system. The
Data Logger is a transient storage system recording the selected event
data on hard drives before transferring them to permanent storage where
they are available for offline analysis.

1 Data Logger and Workload Distribution

The purpose of the Data Logger is to decouple the online from the offline oper-
ations. It enables ATLAS to cope with disruptions of the permanent storage
service. Its tasks are to write selected event data to non-volatile storage and to
transfer them to permanent storage outside of the ATLAS facility.
In terms of hardware the Data Logger is a scale-out system currently consist-
ing of four local-attached high-performance storage solutions sporting two head
servers each. It can easily be upgraded to provide more storage space or more
bandwidth. The system comprises almost five hundred drives with a total usable
space of 430 TB. It is able to provide a total of 8 GB/s of concurrent read and
write operations.
These servers execute a distributed multi-threaded in-house application that
receives selected event data over two 10GbE network links and writes them to
disks in an organized file scheme. It also computes a file-by-file Adler32 [4] check-
sum. The application is completely data-driven, therefore its workload is entirely
determined by the data composition and indirectly by the trigger configuration.
The trigger system classifies the events in classes called streams. Each event
can be assigned to one or several streams. Each event is also associated with a
luminosity block (lumiblock), a time interval for which the detector’s operation
conditions are considered constant. The typical duration of a lumiblock is 60 s.
The application writes all events of the same stream and lumiblock to a dedicated

https://doi.org/10.1007/978-981-13-1313-4_70

Fig. 1. Comparison of 2015 and 2016 stream throughput distribution for typical oper-
ations. Only the six major streams out of 25+ are shown. It shows the intrinsically
non-uniform nature of the application workload and its evolution between 2015 and
2016.

file. Figure 1 shows a typical stream throughput distribution. As shown, the


application workload is very unbalanced.
Figure 2 shows relevant parts of the application threading model. Each
instance of the application spawns a configurable set of input threads to han-
dle the network communications and receive the event data. Another number
of output threads compute the checksum and write the data to file. An exter-
nal library provides both the file format and the checksum computation. The file
format requires the application to write files sequentially. Between these two sets
of threads several components dispatch the incoming events among the available
output threads. In order to enforce sequential writing, these components assign
each file to one thread upon file creation. This association cannot be changed
and every event intended for this file must be processed by this thread. There-
fore the streams cannot be divided into smaller chunks processed in parallel to
distribute the stream processing.
The current file-to-thread assignment policy is based on a trivial round-robin
scheduling: each new file is assigned to the next thread in a circular fashion. This
policy is robust and has very little overhead. Since at any particular moment in
time there are more active streams than available output threads, one thread
handles several streams. Moreover the events reach the application in no spe-
cific order. As a consequence different files are assigned to the same thread
randomly.
As the application is processing real-time data, one single overloaded thread
will limit the whole application performance. Specifically the assignment of
major streams together to the same thread could overload this thread and then
degrade the whole application throughput. As a consequence with the round-
robin assignment policy the instantaneous application performance is not pre-
dictable.

Fig. 2. Overview of the threading model of the Data Logger application. Input threads
handle network communications. The middle components dispatch the events among
the output threads. Output threads compute the data checksum and write data to
disks.

2 New Assignment Policy

Between 2015 and 2016 the peak system writing throughput more than doubled
going from ≈1.4 GB/s to ≈3.2 GB/s. The application was, therefore, running
closer to its saturation point. Figure 1 shows the evolution of the stream
throughput distribution for typical ATLAS operations between 2015 and 2016.
As one can see, the relative difference between the major streams is smaller in 2016.
For these two reasons the random assignment of major streams together to a
thread would actually degrade the application performance. Synthetic tests con-
firmed the performance degradation upon occurrence of these conjoint assign-
ments: −8% for the two major streams together, and −12% for the three major
streams together. In order to re-establish the application performance, a smarter
load-balancing algorithm, sensitive to the application conditions, was needed.
A weighted policy was designed to take assignment decisions based on a per-thread
load: a new file is assigned to the thread with the lowest load. The thread
load shall represent its current activity. This will allow optimally distributing the
application workload among the threads. The thread load is computed from the
amount of data processed during a configurable time window. The determination
of the operational value for this period will be a trade-off between the desired
sensitivity to condition changes and the accuracy of the decisions. In the same
manner a load is computed for each stream. Upon assignment the load of the
stream is added to the load of the thread. Therefore the thread load reflects
immediately the assignment without waiting for real-time data to accumulate.
This will ensure that streams with significant throughput will not be handled
by the same thread.
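A minimal sketch of this weighted policy is given below, assuming a simple per-thread load counter; the names and numbers are illustrative and this is not the production Data Logger code.

```cpp
// Least-loaded assignment with an immediate stream-load boost, so that
// back-to-back decisions do not pile major streams onto the same thread.
#include <vector>
#include <algorithm>
#include <cstdio>

struct OutputThread { double load = 0.0; };  // data processed in the time window

// Assign a new file (with its stream's estimated load) to the least-loaded thread.
std::size_t assignFile(std::vector<OutputThread>& threads, double streamLoad) {
    auto it = std::min_element(threads.begin(), threads.end(),
        [](const OutputThread& a, const OutputThread& b) { return a.load < b.load; });
    it->load += streamLoad;  // boost the chosen thread right away
    return static_cast<std::size_t>(it - threads.begin());
}

int main() {
    std::vector<OutputThread> threads(4);
    const double streamLoads[] = {900.0, 450.0, 400.0, 60.0};  // MB per window
    for (double s : streamLoads)
        std::printf("stream with load %.0f -> thread %zu\n", s,
                    assignFile(threads, s));
}
```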
Figure 3 shows the algorithm behavior by plotting the thread loads computed
for a 5-s period and showing an example of assignments of the three major
streams to different threads. The thread loads evolve according to real-time data


Fig. 3. Thread workloads as a function of time (left) and zoom around the assignment
decisions (right). Each line represents the load for a different thread computed for a
5-s period. Annotations mark the assignment of the three major streams to threads.

processing. Upon assignment the thread load receives a boost corresponding to


the assigned stream load. The probability of the chosen thread to be selected
again for the next new stream is then inversely proportional to the load of the
assigned stream, no matter how close the next decision is taken.
The solution was first tested in a controlled environment using an emulated
data flow. Emulation parameters were extracted from actual 2016 monitoring
data. No wrong assignment decisions were taken during operating periods of
more than 40 h. It was checked that the overhead added by the new algorithm
does not impair the performance of the application. Results showed a slight
improvement of 2% even excluding wrong assignments. The application was then
tested on the production infrastructure and has since been included in all ATLAS
commissioning periods.

3 Conclusion
The Data Logger system of the ATLAS TDAQ system is a key component
enabling the decoupling of the online and offline operations. Its workload is
essentially unbalanced and cannot be fairly distributed. In 2016 new operation
conditions required a new workload distribution strategy. A weighted policy was
designed to be sensitive and self-adaptive to evolving operational conditions. It
has been validated on both test and production systems. It proved to restore
the application performance predictability. This development is now part of the
TDAQ system for the 2017 data-taking period.

References
1. ATLAS Collaboration: Performance of the ATLAS detector using first collision data.
JHEP 09, 056 (2010)
2. Evans, L., Bryant, P.: LHC machine. J. Instrum. 3(08), S08001 (2008)
3. The ATLAS TDAQ Collaboration: The ATLAS data acquisition and high level trigger
system. J. Instrum. 11(06), P06008 (2016)
4. Deutsch, P., Gailly, J-L.: ZLIB compressed data format specification version 3.3.
Aladdin Enterprises (1996)
Study of the Calorimeter Trigger Simulation
at Belle II Experiment

Insoo Lee1(&), SungHyun Kim1, CheolHoon Kim1, HanEol Cho1,


Yuji Unno1,2, and B. G. Cheon1,2
1 Department of Physics, Hanyang University, Seoul 04763, Republic of Korea
insoo.lee@belle2.org, bgcheon@hanyang.ac.kr
2 Research Institute for Natural Sciences, Hanyang University, Seoul 04763, Republic of Korea

Abstract. The Belle II experiment at KEK in Japan is in the final stage of


construction to probe New Physics beyond the Standard Model by measuring
CP violation phenomena and rare decays of beauty, charm quark and tau lepton.
The experiment is being performed at the SuperKEKB e+e− collider with
80 × 10^34 cm−2 s−1 as the ultimate instantaneous luminosity. As a severe beam
background environment is highly anticipated in e+e− collisions, a simulation
study of the Belle II calorimeter trigger system is indispensable to develop an
appropriate trigger algorithm for high trigger efficiency of physics events and an
online luminosity measurement. We report preliminary results on various trigger logics and
efficiencies of physics and Bhabha events using the Belle II Geant4-based
analysis framework called Basf2.

Keywords: Belle II · ECL trigger · Trigger simulation

1 Introduction

The Belle experiment at the KEKB collider at the High Energy Accelerator Research
Organization (KEK) in Japan was performed to measure large mixing-induced charge-
parity (CP) violations in the B meson system [1, 2]. Most of the results are in good agreement
with the Standard Model predictions of the Cabibbo-Kobayashi-Maskawa structure of
quark flavor mixing and CP violation in B decays [2], D–D̄ mixing [3] and so on.
However, the experiments indicated several hints of discrepancies between the SM
predictions and the experimental measurements [4, 5]. Thus, a much larger data sample
is required to further investigate possible New Physics effects. Therefore the Belle experiment
has been upgraded to Belle II at the SuperKEKB collider [6]. A schematic
of the Belle II detector is shown in Fig. 1.

2 Belle II Trigger System

Since the anticipated beam background level is ~50 times higher than in the case of
Belle, a robust and flexible trigger system is indispensable to operate Belle II. The
requirements of the Level-1 hardware trigger for Belle II operation are: high trigger

https://doi.org/10.1007/978-981-13-1313-4_71

efficiency of close to 100%, a maximum trigger rate of 30 kHz, about 5 µs
trigger latency, a timing resolution of less than 10 ns, and a minimum event separation of
200 ns, while the trigger efficiencies of peculiar physics processes including missing
neutrinos should be kept as high as possible. Bhabha and γγ processes are
pre-scaled by a factor of 1/100 [7].
All of the Belle II trigger and readout systems have been upgraded. Figure 2 shows the
schematic overview of the Belle II hardware trigger system. The Central Drift Chamber
(CDC) charged trigger provides momentum, position, charge, and number-of-tracks
information for charged tracks. The Electromagnetic Calorimeter (ECL) neutral trigger
provides total energy, isolated clusters, and Bhabha tagging information for electro-
magnetic particles. The Barrel Particle Identification Detector (BPID) trigger provides pre-
cise timing and hit topology information. The K0L and μ (KLM) trigger provides K0L and μ
track information. The Global Decision Logic (GDL) makes the final trigger decision, which is delivered to
the Belle II Data Acquisition system (DAQ) [8].

Fig. 1. Belle II detector.
Fig. 2. The schematic overview of the Belle II hardware trigger system.

3 ECL Trigger System

The basic framework and idea of the Belle II ECL trigger (ECL-TRG) are the same as for
Belle [1, 8]. To handle the higher trigger rate due to the high luminosity and beam
background level, we have adopted a new trigger scheme that makes the trigger per-
formance more flexible, using a readout electronics architecture with Flash Analog-to-
Digital Converter (FADC) and Field Programmable Gate Array (FPGA) components, and
high-speed serial link data transfer at 127 Mbps link speed (Fig. 3).

Fig. 3. Software and hardware configuration of the ECL trigger system for Belle II experiment.
The numbers in brackets are the number of each module.

The signal from each crystal is sent to a Shaper Digital Signal Processing (ShaperDSP) board through a photo-diode
(PD) attached to the crystal and a pre-amplifier. The ShaperDSP produces a shaped signal
with a 0.2 µs shaping time. The 16 shaped signals from a neighboring 4 × 4 array of crystals, called a
Trigger Cell (TC), are used in the ECL trigger. The TC is the basic unit of the ECL trigger
system and a total of 576 TCs are used. All TC analog signals are sent to the FADC
Analysis Module (FAM). The FAM digitizes the analog TC signal with the FADC and
measures the energy and timing of each TC using a minimum χ² fit. The TC timing and energy
are sent to the Trigger Merger Module (TMM). The TMM merges the incoming TC data and sends
them to the ECL Trigger Master (ETM). The ETM analyzes all TC data and generates the
trigger signals and the trigger timing.
The following are the main physics trigger conditions. (1) Total energy trigger (Etot):
the total energy sum of the barrel and forward endcap, excluding the innermost layer, is greater
than 1 GeV. (2) The number of isolated clusters (ICN), which counts the number of
particle clusters that deposit energy in the ECL, is greater than 3. When an event which
is not tagged by the Bhabha trigger condition satisfies either (1) or (2), the ETM
generates a physics trigger signal.
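As a compact illustration of this decision logic (a sketch only, not the ETM firmware; the structure below is an assumption made for the example):

```cpp
// Physics trigger decision: fire if the event is not Bhabha-tagged and
// either the total-energy or the isolated-cluster condition holds.
#include <cstdio>

struct EclTriggerInput {
    double totalEnergyGeV;    // Etot over barrel + forward endcap (see text)
    int    isolatedClusters;  // ICN
    bool   bhabhaTagged;      // from the Bhabha veto logic
};

bool physicsTrigger(const EclTriggerInput& in) {
    const bool etot = in.totalEnergyGeV > 1.0;   // condition (1)
    const bool icn  = in.isolatedClusters > 3;   // condition (2)
    return !in.bhabhaTagged && (etot || icn);
}

int main() {
    EclTriggerInput ev{2.3, 2, false};
    std::printf("physics trigger: %s\n", physicsTrigger(ev) ? "yes" : "no");
}
```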

4 ECL Trigger Simulation

ECL Trigger Simulation (TSim-ecl) is a C++ based program implemented in the


Belle II Geant4-based analysis framework called Basf2. In order to develop and test an
appropriate trigger algorithm for high trigger efficiency of physics events, TSim-ecl has
the same structure and functions as the actual ECL trigger system.

4.1 Trigger Timing Algorithm Study


The ECL trigger system is one of the sources that provide a trigger timing for deciding the timing
of a physics event. In the Belle experiment, the trigger timing from the ECL trigger was defined as
the timing of the first TC hit. In order to improve the resolution of the trigger timing, the energy
deposit of the TC can be considered. We test two kinds of methods: the timing of the
highest-energy-deposit TC, and the energy-weighted timing of high-energy-deposit TCs.
Figure 4 shows the measured trigger timing of the Belle method and the other two methods
using a simulation sample of events with beam background. Both new methods
show better resolution than the Belle method. We confirm that the energy-weighted timing
shows the best resolution, 3.65 ns, satisfying the trigger/DAQ requirement.
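The energy-weighted timing estimate compared above can be written down in a few lines; the following is only an illustrative sketch with invented numbers, not the TSim-ecl implementation.

```cpp
// Energy-weighted trigger time: the mean of the TC hit times weighted by
// the TC energy deposits.
#include <vector>
#include <cstdio>

struct TCHit { double timeNs; double energyGeV; };

double energyWeightedTime(const std::vector<TCHit>& hits) {
    double num = 0.0, den = 0.0;
    for (const auto& h : hits) { num += h.energyGeV * h.timeNs; den += h.energyGeV; }
    return den > 0.0 ? num / den : 0.0;
}

int main() {
    std::vector<TCHit> hits = {{5.0, 0.8}, {7.0, 0.3}, {4.0, 1.5}};
    std::printf("energy-weighted trigger time: %.2f ns\n", energyWeightedTime(hits));
}
```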

Fig. 4. Fitting result of trigger timing using the fastest TC timing (left), the highest energy
deposit TC timing (middle) and energy weighted timing of TCs (right).

4.2 3-D Bhabha Veto Logic Study


In an e+e− collider, the Bhabha process is a main background, having not only the highest
event rate but also an event topology similar to low-multiplicity processes such as tau pairs and
initial state radiation (ISR). In order to suppress the Bhabha process in Belle, a 2-D
(r–φ) back-to-back topology was required for two clusters, with an energy threshold
on the sum of the two clusters. This method identifies some of the low-multiplicity processes
as Bhabha triggers. In order to avoid such misidentification, a 3-D (r–θ–φ) back-to-back
topology should be used for a tighter Bhabha veto trigger in Belle II. To determine the back-
to-back topology in 3-D, we first find particle clusters. Then we identify the clusters
forming a back-to-back topology in both the theta and phi directions. After finding two back-
to-back clusters, we apply energy cuts not only on the energy sum of the two clusters
(>4 GeV) but also on the energy of each cluster (>1 GeV). Table 1 shows the comparison of
the physics trigger efficiency of various physics processes using the 2-D logic and the 3-D
logic. With the 3-D logic, more low-multiplicity processes survive than with the 2-D logic, even though
the efficiency for the Bhabha process is also increased. In order to maximize the trigger
efficiency for low-multiplicity events while minimizing the Bhabha event efficiency, an
optimization of the cluster energy cuts will be performed in a further study.
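The 3-D back-to-back tag described above can be sketched as follows. The angular tolerances are arbitrary placeholders and the cluster structure is an assumption for the example; only the energy cuts (sum above 4 GeV, each cluster above 1 GeV) come from the text, and this is not the actual ETM logic.

```cpp
// Toy 3-D Bhabha tag: two clusters back-to-back in both theta and phi
// (within a tolerance) that pass the energy cuts quoted in the text.
#include <cmath>
#include <vector>
#include <cstdio>

struct Cluster { double thetaDeg, phiDeg, energyGeV; };

bool isBhabhaPair(const Cluster& a, const Cluster& b,
                  double dThetaTol = 10.0, double dPhiTol = 10.0) {
    const double dTheta = std::fabs((a.thetaDeg + b.thetaDeg) - 180.0);
    const double dPhi   = std::fabs(std::fabs(a.phiDeg - b.phiDeg) - 180.0);
    const bool backToBack = dTheta < dThetaTol && dPhi < dPhiTol;
    const bool energyCuts = (a.energyGeV + b.energyGeV > 4.0)
                         && a.energyGeV > 1.0 && b.energyGeV > 1.0;
    return backToBack && energyCuts;
}

int main() {
    std::vector<Cluster> clusters = {{35.0, 20.0, 3.5}, {144.0, 199.0, 3.2}};
    bool tag = isBhabhaPair(clusters[0], clusters[1]);
    std::printf("3-D Bhabha tag: %s\n", tag ? "yes" : "no");
}
```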

Table 1. Physics trigger efficiency (in %) with 2-D and 3-D Bhabha veto logic

Sample                  | 2-D Logic     | 3-D Logic
                        | 99.99 ± 0.01  | 99.97 ± 0.02
Bhabha (θlab ≥ 17°)     | 1.37 ± 0.12   | 7.83 ± 0.26
ISR(e+e− → μ+μ−)        | 14.04 ± 0.38  | 17.14 ± 0.41
ISR(e+e− → π+π−)        | 20.45 ± 0.67  | 35.26 ± 0.82
τ → generic             | 79.09 ± 0.21  | 78.19 ± 0.40
τ → μγ                  | 82.56 ± 0.38  | 85.48 ± 0.35
τ → eγ                  | 78.41 ± 0.41  | 89.29 ± 0.30

5 Conclusion

The SuperKEKB collider and the Belle II detector have been constructed to search for
New Physics phenomena. For optimal Belle II operation, we have CDC, ECL, BPID
and KLM sub-triggers and a global trigger system, such as the GDL and GRL, to make a
final trigger decision. The ECL trigger system is robust and flexible thanks to its FPGA
firmware architecture. TSim-ecl is being developed in order to test appropriate trigger
algorithms in the Belle II environment. In the TSim-ecl study, the energy-weighted timing of TCs
shows the best resolution for the trigger timing. In the comparison of the 2-D and 3-D Bhabha
veto logic, the 3-D logic provides a better selection of low-multiplicity events. The
optimization of the cluster energy cuts will be performed in a further study.

References
1. Abashian, A., et al. (Belle Collaboration): The Belle detector. Nucl. Instrum. Methods A 479,
117 (2002)
2. Kurokawa, S., Kiktani, E.: Overview of the KEKB accelerators. Nucl. Instr. Methods A 499,
1 (2003)
3. Abe, K., et al. (Belle Collaboration): Improved measurement of CP-violation parameters
sin 2φ1 and |λ|, B meson lifetimes, and B0–B̄0 mixing
4. Starič, M., et al. (Belle Collaboration): Evidence for D0–D̄0 mixing. Phys. Rev. Lett. 98,
211803 (2007)
5. Wei, J.-T., et al. (Belle Collaboration): Phys. Rev. Lett. 103, 171801 (2009)
6. Lin, S.-W., et al. (Belle Collaboration): Nature 452, 332 (2008)
7. Funakoshi, Y.: SuperKEKB project at KEK. Beam Dyn. Newslett. 67, 28 (2015)
8. Iwasaki, Y., et al.: Level 1 trigger system for the Belle II experiment. IEEE Trans. Nucl. Sci
58, 1807–1815 (2011)
RDMA Optimizations on Top of 100 Gbps
Ethernet for the Upgraded Data Acquisition
System of LHCb

Balázs Vőneki(&), Sébastien Valat, and Niko Neufeld

CERN, Geneva, Switzerland


{balazs.voneki,sebastien.valat,Niko.Neufeld}@cern.ch

Abstract. The LHCb experiment [1] will be upgraded in 2018–2019 to change


its operation to a triggerless full-software readout scheme from Run 3. This
results in increasing the load of the event building and filtering farm by a factor
of 40. The farm will need to be able to handle the full 40 MHz rate of particle
collisions. The network of the data acquisition system faces a target
throughput of 40 Tb/s, aggregated by 500 nodes. This requires links capable of
delivering data at speeds of at least 100 Gbps per direction.
Three solutions are being evaluated: Intel® Omni-Path Architecture, 100G
Ethernet and EDR InfiniBand. Intel® OPA and EDR IB natively use Remote Direct
Memory Access. Ethernet uses TCP/IP or UDP/IP by default, which involves
significant CPU load. However, there are solutions to implement RDMA-enabled
data transfer via Ethernet as well. These technologies are called RoCE (RDMA
over Converged Ethernet) and iWARP. We present first measurements with such
technologies on 100 Gbps equipment with respect to the data acquisition use-case.

Keywords: Networking · Data acquisition · Remote direct memory access

1 Introduction

The Large Hadron Collider (LHC) has 4 major experiments, one of which is LHCb. It
has an underground detector which gathers information from particle collisions
taking place at high energies. The designed rate of the collisions is 40 MHz at
most, if all available bunch slots are filled. In order to be able to deal with the large
quantities of data generated at the collider, one needs to reject the irrelevant events and
keep only those which are interesting. We call this selection procedure triggering.
In the current system, LHCb operates by applying two levels of triggers, where
the first is performed by an FPGA-based custom hardware, reducing the 40 MHz input
rate to 1 MHz.
The LHCb experiment will undergo a major upgrade during the LHC Long Shut-
down 2 starting from 2018 until 2019 [2]. One of the key goals of this upgrade is the
removal of the low-level hardware trigger. As a result, the event selection will be fully
software-driven, which gives more flexibility to configure it to various needs. So the
LHCb Upgrade means: reading every bunch crossing. There will be approximately 30–
40 million bunch crossings every second, where the size of one chunk is 100 kB [3],

https://doi.org/10.1007/978-981-13-1313-4_72

so the aggregate is very large, and they all have to go through the network. That is the
key challenge from the technology point of view.
In this paper we will briefly describe the upgraded network communication scheme
which will run on the 500 nodes, then we will discuss the benchmarking results we obtained,
going from simple available benchmarks to more sophisticated ones. This
analysis will end up with a full running test on 4 nodes equipped with 100 Gb/s
Ethernet network adapter cards.

2 Event Building and Data Flow

For the next LHCb upgrade, we will set up an event building cluster of 500 nodes to
read and aggregate the 40 Tb/s of data going out from the detector. The data will arrive
from the sub-detectors to the surface. Using standard servers to host the Readout
Units (RU) makes it easy to manage the buffering and to handle the event building
on a 100 Gb/s standard fabric from the HPC field. Once the data have been aggre-
gated by the Builder Units (BU), they have to be sent to one of the 3500 Filter Units
over a second network. Those filter units will apply the software triggering rules.
The Readout and Builder Units will be on the same host, which will require an internal
memory throughput of up to 400 Gb/s for each node. This event building process will also
be required to handle up to 100 Gb/s bidirectionally on the event building network
(which requires some experimentation). The LHCb experiment is considering 3 dif-
ferent 100 Gb/s technologies for the upgrade: 100 Gb/s Ethernet, Intel® Omni-Path,
EDR InfiniBand. In this paper, we benchmarked some of the Ethernet solutions.

3 Ethernet with and Without RDMA

Ethernet is by far the most widely spread technology. It is worth studying for this
specific use-case. Using the standard Linux network stack, data arriving at a network
port undergoes two copy operations: from the device memory to kernel memory,
then from kernel memory to application memory.
Network-intensive high performance computing needs a network infrastructure
with high bandwidth and low latency. RDMA stands for remote
direct memory access. It makes it possible to directly access the memory of a computer
from another one without involving either one’s operating system or CPU. This allows
high-throughput, low-latency networking.
The key benefit of RDMA is its support for zero-copy data transmission.
Thus, there are no intermediate data copies to temporary buffers. Instead, the
network interface controller (NIC) is capable of accessing the application-level buffer and
reading from or writing to it directly. No work is required from the CPU, and no cache pollution or
context switches are incurred. In addition, the software can perform data transfers directly from
user-space without any kernel involvement. This is called kernel bypass. The RDMA
transfer can continue to run in parallel with other system operations.
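For concreteness, the fragment below shows how a zero-copy RDMA write is posted with the libibverbs API used by both RoCE and iWARP adapters. It is a sketch under stated assumptions rather than a complete program: device discovery, queue-pair setup, the registration of the buffer with ibv_reg_mr (which provides the lkey used below) and the exchange of the remote address and rkey with the peer are all omitted, and the function name is our own.

```cpp
// Post a zero-copy RDMA write of an already-registered local buffer to
// remote memory over an already-connected queue pair. Compile with -libverbs.
#include <infiniband/verbs.h>
#include <cstdint>

bool postRdmaWrite(ibv_qp* qp, ibv_mr* mr, void* localBuf, std::uint32_t length,
                   std::uint64_t remoteAddr, std::uint32_t rkey) {
    ibv_sge sge{};
    sge.addr   = reinterpret_cast<std::uint64_t>(localBuf);
    sge.length = length;
    sge.lkey   = mr->lkey;                        // from ibv_reg_mr()

    ibv_send_wr wr{};
    wr.wr_id               = 1;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.opcode              = IBV_WR_RDMA_WRITE;   // data moves NIC to NIC
    wr.send_flags          = IBV_SEND_SIGNALED;   // ask for a completion entry
    wr.wr.rdma.remote_addr = remoteAddr;          // obtained from the peer
    wr.wr.rdma.rkey        = rkey;

    ibv_send_wr* bad = nullptr;
    // The CPU only posts this descriptor; the NIC then reads the user buffer
    // directly, which is the zero-copy, kernel-bypass path described above.
    return ibv_post_send(qp, &wr, &bad) == 0;
}
```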
In this paper, we will study two RDMA solutions, which are designed to run on
Ethernet networks. They are iWARP (internet Wide Area RDMA Protocol) and RoCE

(RDMA over Converged Ethernet). The key difference between them is how their flow
control is implemented.
RoCE provides a seamless, low overhead, scalable way to solve the TCP/IP I/O
bottleneck with minimal extra infrastructure. RoCE expects custom settings on both the
endpoint nodes and the network [4]. It uses Priority Flow Control (IEEE
802.1Qbb), which defines 8 classes. PFC uses the priority bits within the VLAN tag
(IEEE 802.1p) to differentiate up to 8 flows. The flow control of these 8 priority classes
can be managed independently. In our test setup, we configured two priorities: 0 and 4
(where the greater number represents higher priority). Priority 0 was used for lossy traffic,
where an upper-layer protocol guarantees lossless behaviour at the application level.
iWARP is another alternative RDMA offering. It does not need custom switch
settings, because its implementation uses TCP [5].

4 Simple Benchmarks

In the beginning we were interested in the evaluation of these available network


technologies with some simple benchmarks. We have performed bandwidth tests with
iperf (version 2), then ib_write_bw (version 5.6) and osu_bw (version 5.3.2).

Fig. 1. Iperf TCP tests between 2 nodes with Chelsio T62100-LP-CR and Mellanox CX455A

The iWARP test bench consists of 4 Dell PowerEdge C6220 nodes with the fol-
lowing specification: 2x Intel® Xeon® CPU E5-2670 at 2.60 GHz (8 cores, 16
threads), 32 GB DDR3 memory at 1333 MHz, Chelsio 100G T62100-LP-CR NIC,
Mellanox SN2700 100G Ethernet switch.
The RoCE test bench consists of 4 Intel® S2600KP nodes with: 2x Intel®
Xeon® CPU E5-2650 v3 at 2.30 GHz (10 cores, 20 threads), 64 GB DDR4 memory at
2134 MHz, Mellanox CX455A ConnectX-4 100G NIC, Mellanox SN2700 100G
Ethernet switch. We connected all nodes via 2-m-long direct attached copper
(DAC) cables to the switch (Fig. 1).
These simple TCP bandwidth tests show that a single core is not enough to saturate
the available bandwidth. With the Chelsio card we needed 4 threads to get a reasonable
speed, while the other card needed 16 threads (Fig. 2).

Fig. 2. Iperf UDP tests between 2 nodes with Chelsio T62100-LP-CR and Mellanox CX455A

The previous TCP tests use congestion and flow control, which obviously incur
some penalty on the performance we have measured here. If we want to see the result
of a cleaner/simpler benchmark, we can use UDP instead. UDP is a connectionless
transport protocol. The UDP tests show that the speed can go much closer to the
theoretical maximum at the cost of much higher CPU utilization. However, the higher

CPU load can also be an artefact of the benchmark software. This must be verified with
alternative benchmarks in the future.

5 RDMA Benchmarks

In the previous tests, all the network traffic went through the CPU for processing.
The following test series applies benchmarks where data is directly written to or
read from main memory via RDMA (Fig. 3).

Fig. 3. ib_write_bw between 2 nodes with Chelsio (iWARP) and Mellanox (RoCE)

First we ran ib_write_bw peer-to-peer tests, which use the RDMA technology
supported by the card (either RoCE or iWARP). The test peaked at a maximum
bandwidth of 87.44 Gbps using 4096-byte messages. Running via iWARP gave much
better single-thread results than TCP: 87.44 Gbps compared to 48.2 Gbps.
This benchmark uses one thread by default. In order to saturate the link better, the
same benchmark was run with multiple instances in parallel over iWARP: two
threads were enough to reach 95 Gbps, and 8 threads gave 98 Gbps throughput.

In order to build large systems for HPC, one needs to use a centrally managed
launcher on top of this, for example MPI (Message Passing Interface). Another RDMA
bandwidth benchmark, the OSU benchmark, which can be launched via MPI, was also
tried for RoCE. The unidirectional speed peaked at 96 Gbps, and the bidirectional test
reached 139.24 Gbps.
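To make the measurement pattern concrete, the sketch below shows the kind of unidirectional streaming loop such an MPI-launched bandwidth test performs. It is a minimal illustration, not the OSU benchmark source; the message size, iteration count and output format are arbitrary choices.

// Minimal MPI bandwidth sketch: rank 0 streams fixed-size messages to rank 1
// and reports the achieved unidirectional throughput in Gbit/s.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  const int msg_size = 4096;     // bytes, matching the peak message size quoted above
  const int iters    = 100000;
  std::vector<char> buf(msg_size, 0);

  MPI_Barrier(MPI_COMM_WORLD);
  double t0 = MPI_Wtime();
  if (rank == 0) {
    for (int i = 0; i < iters; ++i)
      MPI_Send(buf.data(), msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
  } else if (rank == 1) {
    for (int i = 0; i < iters; ++i)
      MPI_Recv(buf.data(), msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }
  MPI_Barrier(MPI_COMM_WORLD);
  double dt = MPI_Wtime() - t0;

  if (rank == 0)
    std::printf("unidirectional: %.2f Gbit/s\n", 8.0 * msg_size * iters / dt / 1e9);
  MPI_Finalize();
  return 0;
}

Launched with two ranks spread over two nodes, this reproduces the basic structure of the unidirectional test; a bidirectional variant simply runs the transfer in both directions at once.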

6 Conclusions

We see that using Ethernet natively through the kernel-driven TCP/IP stack is not
efficient. The CPU and the memory are too heavily involved, hence a zero-copy
approach is needed. The heterogeneous speeds in the bidirectional heat map need
to be (and will be) further analysed.
The presented Ethernet RDMA results are promising and might be a good solution
for high-speed interconnects in HPC.

References
1. Cámpora Pérez, D.H., Schwemmer, R., Neufeld, N.: Protocol-independent event building
evaluator for the LHCb DAQ system. IEEE Trans. Nucl. Sci. 62(3), 1110–1114 (2015)
2. The LHCb Collaboration et al.: LHCb Trigger and Online Upgrade Technical Design Report,
CERN-LHCC-2014-016; LHCB-TDR-016 (2014)
3. Otto, A., Cámpora Pérez, D.H., Neufeld, N., Schwemmer, R., Pisani, F.: A first look at 100
Gbps LAN technologies, with an emphasis on future DAQ applications. In: 21st International
Conference on Computing in High Energy and Nuclear Physics (2015)
4. Mellanox Homepage: Network Considerations for Global Pause, PFC and QoS with
Mellanox Switches and Adapters. https://community.mellanox.com/docs/DOC-2022.
Accessed 16 June 2017
5. Wikipedia Article Homepage: RDMA over Converged Ethernet. https://en.wikipedia.org/
wiki/RDMA_over_Converged_Ethernet. Accessed 16 June 2017
Integration of Data Acquisition
Systems of Belle II Outer-Detectors
for Cosmic Ray Test

S. Yamada1, R. Itoh1, T. Konno1, Z. Liu2, M. Nakao1, S. Y. Suzuki1, and J. Zhao2
1 KEK, 1-1 Oho, Tsukuba, Japan
satoru.yamada@kek.jp
2 IHEP, 19B YuquanLu, Shijingshan District, Beijing, China

Abstract. The Belle II experiment is scheduled to start the physics run in 2018
and the development of the data acquisition (DAQ) system as well as of the detector is
ongoing. Currently, most of the outer sub-detectors have already been installed in
the Belle II detector and their performance will be tested in cosmic-ray measurements
before the beam collisions start.
for the cosmic ray test was first done with a small-sized read-out system and
then we moved on to the full-scale system. The system is modularized for each
sub-detector so that stand-alone and combined data-taking can be switched
easily. We measured the performance of the readout system after the integration
and confirmed that the integrated DAQ system for installed sub-detectors
actually worked at the designed trigger rate of the Belle II experiment, 30 kHz.
We also observed cosmic ray events with the integrated DAQ system.

Keyword: Data acquisition

1 Introduction

The Belle II experiment [1] aims at searching for new physics beyond the standard
model of particle physics by precisely measuring decays of heavy-flavor mesons. The
target luminosity of SuperKEKB, an asymmetric electron-positron collider, is
8 × 10^35 cm−2 s−1, which is 40 times larger than that of its predecessor, KEKB. Therefore, the
construction of an online DAQ system to handle a large data flow from the detector is
challenging. Recently, some of the outer sub-detectors, such as Central Drift Chamber
(CDC), Time-of-Propagation counter (TOP), Electromagnetic Calorimeter (ECL) and
KLong and Muon detector (KLM), were installed in the Belle II detector. After each
sub-detector is installed in the Belle II detector, each system needs to be integrated to
the Belle II DAQ system [2, 3] so that the performance of sub-detectors can be checked
using cosmic-ray events.
How data are handled by the Belle II DAQ system is as follows: First, trigger and
clock signals are fed to Front-End Electronics (FEE) boards of each sub-detector via
the Trigger Timing Distribution (TTD) network which consists of an array of Fast
Timing Switch (FTSW) modules [4]. Analog signals from the sub-detectors are all
digitized on the FEE boards. Only the triggered events are then sent downstream and
processed by the readout system, which consists of read-out boards and PC farms.
After the readout system, the events are built by event-builders [5] and go through the
high-level trigger [6], which performs reconstruction and reduces the event rate by
rejecting background events. After the selection, data are recorded on storage.
In this paper, we report how we integrated each sub-detector into the Belle II DAQ
system and the results of the performance test of the integrated DAQ system.

2 Integration of DAQ System

In the DAQ integration, the interface between the sub-detector FEE and back-end DAQ
system should be as common as possible for different sub-detectors to minimize the
development cost and share the experience with developers of different sub-detectors.
For the common interface of the outer sub-detectors on the back-end DAQ side, we
employ a readout board called “COmmon Pipelined Platform for Electronics
Readout (COPPER)” [7]. The COPPER board can be equipped with four receiver
cards, called HSLBs (High Speed Link Boards) [8]. One HSLB accepts
one optical-fiber input from an FEE board.
The interface part of the back-end DAQ side can be constructed with minimal
equipment for a readout test; this setup is called PocketDAQ. It consists of an FTSW
module to provide clock and trigger signals, one COPPER board for receiving data
from FEE boards and a PC server for recording data.
developers confirm the readout capability by PocketDAQ, the sub-detector system is
integrated with the Belle II DAQ system. Therefore, the integration itself should be
rather straightforward, because the actual interface is unchanged between PocketDAQ
and the Belle II DAQ system.
During the operation in a cosmic-ray test, data-taking is sometimes performed with
combined sub-detectors and sometimes with each sub-detector separately in parallel. Therefore, the
TTD network and data-flow paths in the back-end DAQ system need to be partitioned
to some extent. The schematic view of this partitioned DAQ system is shown in Fig. 1.
As for the TTD network, each sub-detector has its own TTD network which is con-
nected with a global master FTSW module. In data-taking with combined sub-
detectors, one common trigger source provides trigger signals to the global master
FTSW and then they are distributed over sub-detector TTD networks via sub-detector
master FTSWs. On the other hand, each sub-detector master FTSW module can accept
trigger signals from its own trigger source. In this case, the FTSW module distributes
the trigger signals over its FEE boards for standalone data-taking. In the back-end DAQ
side, we duplicate online DAQ processes for each sub-detector with different network
ports to avoid interference. This virtually partitioned back-end DAQ system is
controlled by slow-control daemons [9], and switching between standalone and combined
data-taking can easily be done via a run-control GUI.

Fig. 1. Schematic view of the partitioned DAQ system for different sub-detectors.

3 High-Rate Readout Test and Cosmic-Ray Measurement

After the DAQ integration, we first performed a stress test for the CDC and ECL DAQ
systems using a high-rate dummy trigger to check their performance. In this test, 75
COPPER boards and 9 readout PCs, and 26 COPPERs and 10 readout PCs were used
for the CDC and ECL readout systems, respectively. We used a dummy trigger with a
constant rate as well as a random trigger with a pseudo-Poisson distribution. The result of
the CDC high-rate test with the pseudo Poisson trigger is shown in Fig. 2(a). In the
constant-rate case, both DAQ systems could process more than 99% of the input
triggers. In the random trigger test, the value was about 98%. The decrease of the
efficiency in the pseudo-Poisson trigger test comes from the limit on the number of
triggers allowed within a certain time window, which is imposed by the limited buffer
size of the FEEs used in the Belle II experiment.

Fig. 2. (a) The rate of event processing by the CDC DAQ system with a dummy 30 kHz
pseudo-Poisson trigger. (b) Event-display screenshot of a cosmic-ray event observed by the
installed outer sub-detectors.
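As a rough illustration of this effect (not the Belle II simulation; the window length and buffer depth below are placeholder values, not the real FEE limits), a toy Monte Carlo can reproduce a percent-level inefficiency by vetoing Poisson triggers that would overfill a sliding time window:

// Toy Monte Carlo: triggers arrive as a Poisson process at 30 kHz and are vetoed
// when more than max_in_window of them would fall inside a sliding time window,
// mimicking a finite front-end buffer.
#include <random>
#include <deque>
#include <cstdio>

int main() {
  const double rate = 30e3;            // 30 kHz average trigger rate
  const double window = 200e-6;        // placeholder window length [s]
  const int    max_in_window = 8;      // placeholder buffer depth
  std::mt19937 gen(42);
  std::exponential_distribution<double> dt(rate);

  std::deque<double> recent;           // accepted trigger times still inside the window
  long accepted = 0, total = 1000000;
  double t = 0;
  for (long i = 0; i < total; ++i) {
    t += dt(gen);
    while (!recent.empty() && t - recent.front() > window) recent.pop_front();
    if ((int)recent.size() < max_in_window) { recent.push_back(t); ++accepted; }
  }
  std::printf("accepted fraction: %.3f\n", double(accepted) / total);
  return 0;
}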
The data-taking of actual cosmic-ray events was also performed with CDC, ECL,
TOP and barrel KLM. The back-end DAQ system for the test included 118 COPPERs
and 23 readout PCs, which was nearly 60% of the total number used in the Belle II
experiment. The DAQ system could work in standalone mode as well as in a com-
bined manner. One of the cosmic ray events which fired all the installed outer sub-
detectors is shown in Fig. 2(b).

4 Summary

The DAQ integration of the Belle II outer sub-detectors for a cosmic-ray test is reported
in this paper. With semi-separated TTD networks and multiple data streams of the
different sub-detectors handled by the slow-control system, we can switch between
standalone and combined data-taking easily. After the integration of the DAQ system,
the commissioning of the readout system was performed with a high-rate dummy trigger.
In the high-rate test, a DAQ efficiency of more than 98% was achieved with a 30 kHz
pseudo-Poisson trigger. We also succeeded in observing clear cosmic-ray events from the
combined outer sub-detectors.

References
1. Abe, T., et al.: Belle II Technical Design Report. arXiv:1011.0352 (2010)
2. Nakao, M., et al.: Data acquisition system for Belle II. J. Instrum. 5, C12004 (2010)
3. Yamada, S., et al.: Data acquisition system for the Belle II experiment. IEEE Trans. Nucl. Sci.
62, 1175–1180 (2015)
4. Nakao, M., et al.: Timing distribution for the Belle II data acquisition system. J. Instrum. 7,
C01028 (2012)
5. Itoh, R., et al.: Data flow and high level trigger of Belle II DAQ system. IEEE Trans. Nucl.
Sci. 60, 3720–3724 (2013)
6. Suzuki, S.Y., et al.: The three-level event building system for the Belle II experiment. IEEE
Trans. Nucl. Sci. 62, 1162–1168 (2015)
7. Higuchi, T., et al.: Modular pipeline readout electronics for the SuperBelle drift chamber.
IEEE Trans. Nucl. Sci. 52, 1912–1917 (2005)
8. Sun, D., et al.: Belle2Link: a global data readout and transmission for Belle II experiment at
KEK. Phys. Procedia 37, 1933–1939 (2012)
9. Konno, T., et al.: The slow control and data quality monitoring system for the Belle II
experiment. IEEE Trans. Nucl. Sci. 62, 897–902 (2015)
Study of Radiation-Induced Soft-Errors
in FPGAs for Applications
at High-Luminosity e+ e− Colliders

Raffaele Giordano1,2, Gennaro Tortone2, and Alberto Aloisio1,2
1 University of Naples ‘Federico II’, Via Cinthia, snc, 80126 Naples, Italy
2 INFN Sezione di Napoli, Via Cinthia, snc, 80126 Naples, Italy
rgiordano@na.infn.it

Abstract. At the KEK laboratory (Tsukuba, Japan), the SuperKEKB e+ e− collider
was commissioned in February 2016 and operated until June 2016, completing the
so-called Phase-1.
In this work, we present measurements of configuration soft-errors
induced by radiation in a SRAM-based FPGA device installed within
1 m from one of the SuperKEKB beam pipes. During the SuperKEKB
operation, we continuously read back the FPGA configuration memory
in order to spot upsets and we logged power consumption at the dif-
ferent power rails of the device in order to search for total dose effects.
Since the operation current of the SuperKEKB collider spanned a range
between 50 and 500 mA for both the electron and positron rings, the
experimental scenario allowed us to perform measurements in different
radiation conditions.

Keywords: FPGA · Upset · Radiation · Collider



1 Introduction
Digital electronics in Trigger and Data Acquisition (TDAQ) systems of High-
Energy Physics (HEP) experiments is mostly implemented by means of static
RAM-based Field Programmable Gate Arrays (SRAM-based FPGAs) [1,2].
These devices offer advantages in terms of re-configurability and high-speed
processing and support embedded high-speed serial IOs. Unfortunately SRAM-
based FPGAs are sensitive to single event effects in the configuration memory.
In fact, single event upsets (SEUs) and multiple bit upsets (MBUs) may alter
the design configured into the device. The usage of such devices on the detector, where
radiation is present, is limited, and the search for solutions for mitigating
radiation-induced soft-errors in SRAM-based FPGAs is presently a hot topic.
Normally, triple modular redundancy techniques coupled to configuration cor-
rection, i.e. scrubbing, are used in order to reduce such effects. Moreover, error
correcting code circuitry has been integrated in latest generation devices for
reducing configuration errors, with some limitations.

In order to choose which strategies to adopt for protecting the design func-
tionality, the designer needs an estimate of the expected bit configuration upset
rate. Usually, in order to determine the particle to bit error cross section, irra-
diation experiments at dedicated facilities are carried out by means of proton,
neutron and heavy ions [3,4] beams. The knowledge of the cross section as a
function of the particles energy spectra and fluxes is paramount for obtaining a
reliable prediction of the upset rate. In situ (or in flight, for space applications)
measurement of the upset rate is highly recommended when the above-mentioned
information is not available with sufficient precision. In fact, such measurements
have been carried out at the Large Hadron Collider (CERN, Geneva),
where upsets in readout and control FPGAs have been monitored during HEP
experiments data taking [5], and experiments in space have been launched in
order to compare upset rate predictions to actual measurements [6]. Moreover,
experiments aimed at measuring the effect of atmospheric neutrons on the device
configuration are carried out by FPGA vendors [7].

2 The SuperKEKB Collider


The SuperKEKB [8] high-luminosity (8·10^35 cm−2 s−1) e+ e− collider of the KEK
laboratory (Tsukuba, Japan) was operated from February 14th to June 11th,
2016. SuperKEKB has been designed to produce B mesons, in order to follow

Table 1. Main design parameters of the SuperKEKB collider.

                      LER         HER        Unit
E                     4.000       7.007      GeV
I                     3.6         2.6        A
n. bunches            2,500
Bunch current         1.44        1.04       mA
Circumference         3,016.315              m

Fig. 1. Simplified scheme of the SuperKEKB collider.



the intensity frontier towards the search for New Physics. Beams collide in a
single point, where the Belle2 detector is installed (Fig. 1). Table 1 reports the
main design parameters of the machine.
This work focuses on measurements of configuration soft-errors induced by
ionizing radiation in a SRAM-based FPGA device installed at a distance of ∼1
m from the beam interaction point (IP).

3 Test Setup
We designed a dedicated test board hosting a Xilinx Kintex-7 325T FPGA. In
order to distinguish FPGA failures from those of other devices, our
board hosts only passive components other than the device under test. Power
and configuration are fed to the board at the IP over dedicated cabling from a
remote DAQ room. A single board computer [9] manages configuration and read
back via a JTAG connection (Fig. 2). A 4-channel power supply [10] feeds the
FPGA power domains and by means of a dedicated sensing scheme, an analog-
to-digital converter (ADC) [11] reads the actual voltages at the load.

Fig. 2. The test setup.

4 Test Results
During the SuperKEKB operation, our setup allowed us to monitor the FPGA
configuration memory as well as its power consumption continuously. The beam
currents of the SuperKEKB collider spanned a range between 50 and 500 mA
for both the e− and e+ rings, therefore we could test the FPGA in different
radiation conditions.
We measured an upset probability of 2.0 · 10−7 averaged over the phase-1
duration, nearly 120 days. This probability is defined as the ratio of the measured
upsets, 18 in total, over the 91.5 · 106 FPGA configuration bits. The mean time
between upsets (MTBU) turned out to be 6.7 days. The collected statistics includes
1 multiple cell upset (MCU) and 16 single event upsets. In the MCU event, two

configuration bits were flipped. Results from some PIN-diode detectors, also
installed at a distance of nearly 1 m from the beam pipe, suggest that the total
dose would be smaller than 300 krad. We did not measure significant variations
(>1 mA) in the FPGA currents, which suggests that there have been no, or
negligible, total dose effects. At the end of Phase-1, the FPGA showed no
permanent damage and was operating correctly.
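For reference, the two quoted figures follow directly from the raw counts (a simple cross-check, not an additional measurement):

P_upset = 18 upsets / (91.5 × 10^6 bits) ≈ 2.0 × 10^−7
MTBU ≈ 120 days / 18 upsets ≈ 6.7 days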

5 Conclusions
We installed a 7-series FPGA device within 1 m from one of the SuperKEKB
beam pipes during the machine commissioning. Our results show an MTBU of 6.7
days averaged over the commissioning time frame. We have neither evidence of
radiation impact on the FPGA power consumption nor of permanent damage
after irradiation. In the next operation phase of SuperKEKB, expected in 2018,
beam currents will increase and there will be collisions. The background radiation
might increase and in fact we are continuing this upset monitoring activity in
order to gather new, updated information.

Acknowledgments. This work is part of the ROAL SIR project funded by the Italian
Ministry of Research (MIUR), grant no. RBSI14JOUV. “Accesso Aperto MIUR”. The
institutions which contributed to the results reported in this work are listed below
as affiliations of the authors. We also wish to thank all the members of the BEAST2
community for supporting this activity.

References
1. Xilinx Inc.: Virtex UltraScale FPGAs Data Sheet: DC and AC Switching Charac-
teristics, DS893 (v1.7.1), 4 April 2016
2. Altera Corp.: Stratix 10 Device Overview, S10-OVERVIEW, 04 December 2015
3. Hiemstra, D.M., Kirischian, V.: Single event upset characterization of the kintex-7
field programmable gate array using proton irradiation. In: Proceedings of 2014
IEEE Radiation Effects Data Workshop (REDW), Paris (2014). https://doi.org/
10.1109/REDW.2014.7004593
4. Higuchi, T., Nakao, M., Nakano, E.: Radiation tolerance of readout electronics for
Belle II. In: Proceedings of Topical Workshop on Electronics for Particle Physics,
Vienna (2011)
5. Røed, K., Alme, J., Fehlker, D., Lippmann, C., Rehman, A.: First measurement
of single event upsets in the readout control FPGA of the ALICE TPC detector.
In: Proceedings of Topical Workshop on Electronics for Particle Physics, Vienna
(2011)
6. Samaras, A., Varotsou, A., Chatry, N., Lorfevre, E., Bezerra, F., Ecoffet, R.: CAR-
MEN1 and CARMEN2 experiment: comparison between in-flight measured SEE
rates and predictions. In: Proceedings of the 15th European Conference on Radi-
ation and Its Effects on Components and Systems (RADECS), Moscow (2015).
https://doi.org/10.1109/RADECS.2015.7365590
7. Xilinx Inc.: Continuing Experiments of Atmospheric Neutron Effects on Deep Sub-
micron Integrated Circuits, WP286 (v2.0), 22 March 2016

8. Adachi, I.: Status of Belle II and SuperKEKB. JINST 64(6), 1185–1190 (2017)
9. Aloisio, A., Ameli, F., Anastasio, A., Branchini, P., Di Capua, F., Giordano, R.,
Izzo, V., Tortone, G.: uSOP: a microprocessor-based service oriented platform for
control and monitoring. IEEE Trans. Nucl. Sci. PP(99) (2017). https://doi.org/
10.1088/1748-0221/9/07/C07017, C07017
10. GW-Instek: DC Power Supply, GPD-X303S Series, User Manual Gw-Instek part
no. 82PD-433S0M01 (2014)
11. Linear Technology: 24-Bit 8-/16-Channel DS ADC with Easy Drive Input Current
Cancellation and I2C Interface (2006)
Design of High Performance Compute
Node for Belle II Pixel Detector Data
Acquisition System

Jingzhou Zhao1, Zhen-An Liu1, Wolfgang Kühn2, Jens Sören Lange2, Thomas Geßler2, and Wenxuan Gong1
1 State Key Laboratory of Particle Detection and Electronics, Institute of High Energy Physics, CAS and University of Science and Technology of China, Beijing 100049, China
zhaojz@ihep.ac.cn
2 II. Physikalisches Institut, Justus Liebig University Giessen, 35392 Giessen, Germany

Abstract. The Belle II Pixel Detector (PXD) is a newly designed silicon pixel
detector for the Belle II upgrade. It generates up to 240 Gbit of data per second. With
the help of the Silicon Vertex Detector (SVD) and other detectors, the PXD data will
be reduced to 1/30. The High Performance Compute Node (CN) is used as the central
board of the PXD Data Acquisition (DAQ) system. An Intelligent Platform Management
Controller and Module Management Controllers (IPMC/MMC) are used to monitor
power consumption and temperature and to manage firmware download. The final
version of the Compute Node was finished in 2015 and successfully joined a beam test
with PXD, SVD, the front-end readout and the High Level Trigger (HLT) at DESY in
January 2017.

Keywords: PXD · Compute Node · IPMC/MMC

1 Introduction



SuperKEKB is the upgrade of the KEK B-factory. It is an asymmetric electron-positron
collider with an electron energy of 7 GeV and a positron energy of 4 GeV, and its design
luminosity is 8 × 10^35 cm−2 s−1 [1]. Belle II is the upgraded detector at SuperKEKB. It
consists of the Pixel Detector (PXD), Silicon Vertex Detector (SVD), Central Drift
Chamber (CDC), Particle Identification (barrel PID and end-cap PID), Calorimeter
(ECL) and KL and muon detector (KLM).
The PXD, located innermost in Belle II, is a newly designed silicon pixel detector
for high-precision vertex reconstruction in the Belle II upgrade. It consists of two layers
of sensors based on DEPFET (DEPleted Field Effect Transistor), with 8 ladders in the
inner layer and 12 ladders in the outer layer. Outside the PXD are the four layers of the
SVD. The main purpose of the Belle II SVD, together with the PXD and CDC, is to
measure the two B decay vertices for the measurement of mixing-induced CP asymmetry
[1]. Assuming a trigger rate of 30 kHz corresponding to the highest luminosity, each
event is about 1 MB [2]. The whole PXD data rate would then be up to 240 Gbps.
Dealing with this huge amount of data is the task of the DAQ system (Fig. 1).

Fig. 1. Structure of the PXD and r-φ view of the PXD and SVD detectors. (a) The PXD detector;
the light grey parts are the pixel ladders, and each ladder consists of two half-ladders. (b) r-φ view
of the PXD and SVD detectors.

2 Data Reduction

There are two ways to reduce the PXD data. One is to reduce the size of each event:
the PXD event data contain many background hits in addition to the hits associated
with real tracks. The regions containing real hits on the PXD, called Regions of
Interest (ROIs), cover only a small part of the PXD, so finding the ROIs and sending
only their contents to the event builder reduces the PXD data. The ROIs can be found
with the help of the tracks reconstructed by the SVD. The other way is to reduce the
number of events sent to storage by forwarding the PXD data only after the HLT
selection. It is estimated that about 5 s elapse from the particle collision until the
high-level trigger decision is generated (Fig. 2).

Fig. 2. Principle of ROI finding on the PXD sensor
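For illustration, the ROI-based hit selection can be sketched as follows. This is a hypothetical C++ rendering of the idea; the real selection runs in FPGA firmware, and the structure layouts here are invented for the example.

// Keep only PXD hits that fall inside at least one ROI rectangle reported for the event.
#include <vector>

struct Roi { int sensor; int rmin, rmax, cmin, cmax; };           // rectangle in pixel coordinates
struct Hit { int sensor; int row, col; unsigned char adc; };

std::vector<Hit> select_hits(const std::vector<Hit>& hits, const std::vector<Roi>& rois) {
  std::vector<Hit> out;
  for (const Hit& h : hits)
    for (const Roi& r : rois)
      if (h.sensor == r.sensor &&
          h.row >= r.rmin && h.row <= r.rmax &&
          h.col >= r.cmin && h.col <= r.cmax) { out.push_back(h); break; }
  return out;
}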



3 ONSEN System

Each half-ladder is read out by front-end electronics (FEE) and the data are sent to
the Data Handling Hybrid (DHH). The data of 40 DHH channels are concentrated into 32
channels by the DHHC and sent to the ONSEN (Online data Selection) system for data
reduction. In this data handling, one DHH receives the data of one half-ladder with a
maximum data rate of up to 511.4 MB/s [2]. Taking the 8b/10b encoding of the
transmission into account, the data rate sent out by the DHHC is about 6 Gbps per
channel. The DATCON system reconstructs particle tracks and generates the SVD ROI
coordinates, while HLT ROIs are generated from the SVD together with the CDC and
other detectors. Both SVD ROIs and HLT ROIs are sent to the ONSEN system for the
extraction of the real PXD hits. The structure of the PXD DAQ system is shown in Fig. 3.

Fig. 3. Simple structure of Belle II PXD DAQ system.
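As a rough cross-check of the quoted link speed (an approximate reconstruction, not taken from the paper): concentrating 40 DHH inputs onto 32 output channels and adding the 8b/10b line-code overhead gives

511.4 MB/s × (40/32) × 8 bit/byte × (10/8) ≈ 6.4 Gbit/s,

which is consistent with the roughly 6 Gbps per optical channel stated above.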

Based on the data reduction requirements, the ONSEN system must provide
high-speed data transmission, high-performance data processing and a large data
buffering capability. An ATCA-based system was therefore designed for the PXD DAQ.
It consists of a full-mesh ATCA shelf, 8 Compute Nodes and the firmware for data
reduction. The ONSEN system supports 32 optical channels for receiving the DHHC
data, and the throughput of the system reaches the order of 200 Gbps. Ethernet ports
are provided for receiving the SVD ROIs and HLT ROIs and for sending the reduced
data to the EVB2 PC farm. 128 GByte of DDR memory is provided for buffering 5 s of
PXD data, and Virtex-5 FPGAs are used for high-performance data processing. An
intelligent platform management control system is used to keep the system stable.

4 Compute Node Design

As described in the AdvancedTCA specification [3], the ATCA architecture supports
both full-mesh and star backplane topologies. In a full-mesh topology there is a
point-to-point data path to/from each Compute Node, as shown in Fig. 4. In the ONSEN
system, one bidirectional channel is designed for each point-to-point link of the full
mesh, and 14 backplane channels are implemented on each Compute Node.

Fig. 4. Full mesh topology of ATCA backplane used in Belle II PXD DAQ system.

The Compute Node (CN) [4] is designed as the central module of the ONSEN system. It
consists mainly of 4 Advanced Mezzanine Cards (AMCs, called xFP cards), 1 AMC
carrier ATCA board and 1 Rear Transition I/O Board (RTM), as shown in Fig. 5. Large
Field Programmable Gate Arrays (FPGAs) are used for parallel data processing;
RocketIO technology is used for high-speed data transmission between the processing
nodes; Gigabit Ethernet is used for data transmission between the ONSEN system and
the HLT; and DDR2 is used for mass data buffering. The CN carrier board and the four
xFP boards are connected by RocketIO ports and general LVDS I/O pairs. The 8 optical
links provided by the 4 xFP cards (each with two 6 Gbps optical IOs) give an input
bandwidth of about 50 Gbps, and 5 Gigabit Ethernet links are provided for output to the
high-level trigger or to storage. 14 backplane channels are designed for data sharing
between boards; each channel supports up to 3.125 Gbps.

Fig. 5. Full suite of the Compute Node. It consists of four AMC cards, one Carrier board, one
Power Supply board and one RTM board.

The Compute Node Carrier board consists of a Xilinx Virtex-4 series FX60 FPGA,
one 2 GB DDR2 memory, four AMC slots, one Gigabit Ethernet interface, an IPMC
section and a power supply section. It supports a full-mesh topology connection
between the AMC slots (Fig. 6).

Fig. 6. (a) Structure of the Compute Node Carrier board; (b) structure of the xFP card [4]

The xFP (FPGA- and xTCA-based processing unit) card consists of one Xilinx
Virtex-5 series FX70T, two 2 GB DDR2 memories, one platform flash for configuring
the FPGA, one Gigabit Ethernet interface, one UART for board testing, two SFP+
connectors with a data line rate of up to 6.25 Gbps, and one MMC module for power
management, station detection and communication with the IPMC, as shown in Fig. 4.
The physical view of the xFP board is shown in Fig. 5. The RTM in the PXD system is
used only as an I/O extension of the CN carrier board; JTAG and UART ports are
provided for the carrier board.

5 Beam Test

In January 2017, a beam test was held at DESY with PXD and SVD modules and the
related front-end electronics and DAQ system. The VXD beam-test DAQ structure is
shown in Fig. 7. The PXD signals are digitized and sent to the DHE. The PXD data are
concentrated by the DHC and sent to ONSEN via optical links. The SVD signals are
digitized by FADCs and fanned out to the FTB and DATCON. The FTB sends the SVD
data via Belle2Link [5] to COPPER and then to the HLT, which generates the HLT ROI
information. DATCON receives the SVD data and generates the SVD ROI information.
ONSEN receives and remaps the PXD data and extracts the hit data with the help of the
ROI coordinates. NIM modules and the FTSW are used for timing and trigger
distribution. In this beam test, ONSEN ran stably for about 10^9 events, and each run
operated stably for up to about 18 h [6].

Fig. 7. VXD beam test DAQ structure.

6 Conclusion

The PXD is a newly designed silicon pixel detector with a huge data output. An
ATCA-based ONSEN system was designed for the PXD DAQ. It provides high-speed
data transmission, mass data buffering and high-performance data processing. A VXD
beam test was held at DESY in January 2017, and its successful results demonstrate
that the Compute Node design meets the requirements of the Belle II PXD DAQ system.

Acknowledgment. This project has been supported by National Natural Science Foundation of
China (11435013, 11461141011, 11405196).

References
1. Doležal, Z., et al.: Belle II Technical Design Report. High Energy Accelerator Research
Organization, Tsukuba (2010)
2. Doležal, Z., Kiesling, C., et al.: The PXD Whitebook, July 2012
3. PICMG 3.0 R2.0 AdvancedTCA Base Specification ECN-002 May 26, 2006
4. Zhao, J., et al.: A general xTCA compliant and FPGA based data processing building blocks
for trigger and data acquisition system. Presented at the 19th IEEE-NPSS Real Time
Conference, Nara, Japan, May 2014
5. Sun, D., Liu, Z., Zhao, J., Xu, H.: Belle2Link: a global data readout and transmission for
Belle II experiment at KEK. Phys. Procedia 37(0), 1933–1939 (2012). https://doi.org/10.
1016/j.phpro.2012.01.036
6. Lange, S.: ONSEN phase 2 readiness. In: 27th B2GM, 19–23 June 2017. KEK, Tsukuba
A Reconfigurable Virtual Nuclear Pulse
Generator via the Inversion Method

Weigang Yin, Lian Chen, Feng Li, Baochen Wang, Zhou He, and Ge Jin
State Key Laboratory of Particle Detection and Electronics, USTC, Hefei 230026, China
yd1105@mai.ustc.edu.cn,
{chenlian,phonelee,goldjin}@ustc.edu.cn,
{wbc1992,hezhou}@mail.ustc.edu.cn

Abstract. In this paper, we present the design of a reconfigurable virtual nuclear
pulse generator that can simulate nuclear pulse signals. First, the inversion
method is used to generate random numbers (amplitudes of pulses and time
intervals between adjacent pulses) that reproduce the statistical characteristics of
real nuclear pulses in amplitude and time. Then, the digital pulses are
synthesized in an FPGA using these amplitudes and time intervals. Finally, a DAC
is used to output the emulated nuclear signal. With this design, the emulated
signal behaves like real nuclear pulses, obeying a specific energy distribution in
amplitude and a Poisson distribution in time. Compared with commercial periodic
signal generators, it can better test the performance of nuclear spectrometers,
especially the throughput, pile-up rejection and baseline recovery capability.
Experimental results show that the generator emulates a real radiation source and
detector very well, and it supports count rates beyond 2 M/s.

Keywords: Nuclear pulse generator · Inversion method · Random numbers

1 Introduction

The randomness of nuclear signals can lead to serious systematic errors when a
commercial periodic signal generator is used to test the performances of nuclear
spectrometers. A random pulse generator which can simulate the nuclear pulse signals
is useful to evaluate the nuclear data acquisition systems so that the risk of radioactive
source use can be reduced. But in order to be useful, the generator must statistically
conform to the behavior of real experimental data [1, 2].
The main characteristic of nuclear signals is that they obey a specific energy
distribution in amplitude and a Poisson distribution in time. In different applications,
the energy distribution and the count rate vary. In the following, we introduce a
reconfigurable virtual nuclear pulse generator based on an FPGA and the inversion
method.


2 Algorithm and Implement of the Inversion Method

In this design, random number sequences drawn from specific distributions are used to
reproduce the statistical characteristics of real nuclear pulses in amplitude and time. The
inversion method is a very efficient algorithm for generating arbitrarily distributed random
numbers from uniform random numbers, which can easily be produced via linear feedback
shift registers (LFSRs) in an FPGA [3].
According to the inversion method, if F(X) is the cumulative distribution function
(CDF) of a variable X and Y is a uniform random number within [0, 1), then X′ = F⁻¹(Y)
is a random number that satisfies the CDF F(X). When used in an FPGA, complex
inversion calculations must be avoided to achieve high speed. Thus, we use lookup tables
(LUTs) to implement the inversion. Considering that the CDF F(X) is monotonically
increasing, for any Y there always exists a discretized N such that F(N) ≤ Y < F(N + 1). In
this case, N ≈ F⁻¹(Y) is a discretized random number that follows the CDF F(X).
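The following C++ sketch illustrates the same LUT-based inversion in software; the spectrum values and the use of std::mt19937 are illustrative stand-ins for the real reference spectrum and for the LFSR and RAM contents of the FPGA design.

// Build a discretized CDF from an (un-normalised) amplitude spectrum, then
// sample amplitude bins by inversion: draw Y uniform in [0,1) and find the
// first bin whose cumulative probability exceeds Y.
#include <vector>
#include <random>
#include <algorithm>
#include <cstdio>

int main() {
  std::vector<double> pdf = {1, 3, 7, 12, 9, 4, 2, 1};   // placeholder spectrum
  std::vector<double> cdf(pdf.size());
  double sum = 0;
  for (size_t i = 0; i < pdf.size(); ++i) { sum += pdf[i]; cdf[i] = sum; }
  for (double& c : cdf) c /= sum;                         // normalise so cdf.back() == 1

  std::mt19937 gen(12345);                                // stands in for the LFSR
  std::uniform_real_distribution<double> uni(0.0, 1.0);
  for (int k = 0; k < 5; ++k) {
    double y = uni(gen);
    size_t n = std::upper_bound(cdf.begin(), cdf.end(), y) - cdf.begin();
    std::printf("Y = %.3f -> amplitude bin %zu\n", y, n);
  }
  return 0;
}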

3 System Design

Figure 1 shows the structure of the generator. In this design, we generate random
number sequences via the inversion method. Then, the digital pulses are synthesized
from these random numbers in the FPGA. Finally, the analog pulse signal is output
through the DAC circuit. In addition, the entire system can be configured from a computer.

Fig. 1. Block diagram of the virtual nuclear pulse generator

3.1 Digital Pulses Generator


The energy spectrum reflects the probability density function of the amplitudes. Thus, the
CDF of the amplitudes can be obtained by integrating the spectrum. Meanwhile, the
nuclear pulses follow a Poisson distribution in time. If the average count rate of the
nuclear pulses is λ per second, the CDF F(Δt) of the time intervals Δt is:

F(Δt) = 1 − e^(−λΔt)    (1)



The amplitudes and time intervals are generated from their respective CDFs via the
inversion method. Two RAMs are built as LUTs in the FPGA (Cyclone III, EP3C55) to
replace the inversion calculation. The digital pulses are then synthesized using these
amplitudes and time intervals. To meet the needs of different applications, the count
rate can be adjusted and the amplitudes of the emulated pulses can be set to an arbitrary
spectrum by updating the memory contents of the LUTs.
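For the time intervals, Eq. (1) can also be inverted analytically, which makes a convenient software cross-check of what the LUT implements in hardware. A minimal sketch, assuming an example rate of 1 M/s:

// Analytic inversion of Eq. (1): for uniform Y in [0,1), dt = -ln(1 - Y)/lambda
// is exponentially distributed, so successive pulses arrive as a Poisson process.
#include <random>
#include <cmath>
#include <cstdio>

int main() {
  const double lambda = 1.0e6;                       // example average count rate [1/s]
  std::mt19937 gen(2017);
  std::uniform_real_distribution<double> uni(0.0, 1.0);
  double t = 0;
  for (int i = 0; i < 5; ++i) {
    double dt = -std::log(1.0 - uni(gen)) / lambda;  // time interval in seconds
    t += dt;
    std::printf("pulse %d at t = %.3f us (dt = %.3f us)\n", i, t * 1e6, dt * 1e6);
  }
  return 0;
}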

3.2 DAC Circuit


A high-speed DAC (AD9755) is used to convert the digital signal to an analog signal.
After the DAC, an RC shaping circuit adjusts the shape of the output pulse while
simulating the pile-up effect of nuclear signals, so that the pile-up rejection capability of
a nuclear spectrometer can be tested. Figure 2 shows the digital-to-analog converter circuit.

Fig. 2. Diagram of the DAC circuit and the signal output

4 Experiments and Performances

4.1 Time Characteristics


In this design, the count rate is limited by the LUT search when generating random
numbers. When the clock cycle T is 10 ns, a LUT with a length of 2^10 requires about
350 ns (35T) per search. This means the count rate of the generator can exceed 2 M/s.
Figure 3 shows that the probability density distribution of the time intervals between
adjacent pulses obeys the expected exponential distribution and that the number of
pulses within a unit time follows the expected Poisson distribution well.

4.2 Amplitude Characteristics


A scope and a multi-channel analyzer (MCA) are used to measure the output signal of
the generator. The generator is configured to output the signal of the Cs-137 γ spectrum.
Figure 4 shows that the amplitudes of the emulated pulses conform to the reference
spectrum well. In order to reduce the systematic error caused by the nonlinearity of the
MCA, the MCA data in Fig. 4 have been corrected using a sliding pulse generator.

Fig. 3. Time characteristics of the emulated pulse signal @1 M/s

Fig. 4. The output signal of the generator and its spectrum

5 Conclusion

In this paper, we designed a random pulse generator to simulate nuclear pulses.
Through the inversion method, the pulses output by the generator follow a Poisson
distribution in time and obey a specific spectrum in amplitude. With a reconfigurable
count rate and reference spectrum, the virtual nuclear pulse generator can be used in
many applications to replace radioactive sources, so that radiation risks are greatly
reduced.

Acknowledgement. This work is supported by the National Natural Science Foundation of
China under Grant No. 11375179.

References
1. Wiernik, M.: Normal and random pulse generators for the correction of dead-time losses in
nuclear spectrometry. Nucl. Instrum. Methods 96, 325–329 (1971)
2. Veiga, A., Spinelli, E.: A pulse generator with poisson-exponential distribution for emulation
of radioactive decay events. In: VII Latin American Symposium on Circuits and Systems
(LASCAS), pp. 31–34 (2016)
3. Cheung, R.C.C., Lee, D.U., Luk, W., Villasenor, J.D.: Hardware generation of arbitrary
random number distributions from uniform distributions via the inversion method. IEEE
Trans. Very Large Scale Integr. Syst. 15, 952–962 (2007)
Design of Wireless Data Acquisition System
in Nuclear Physics Experiment Based
on ZigBee

Zhou He, Lian Chen, Feng Li, Futian Liang, and Ge Jin
State Key Laboratory of Particle Detection and Electronics, USTC, Hefei 230026, China
hezhou@mail.ustc.edu.cn, {chenlian,phonelee,ftliang,goldjin}@ustc.edu.cn

Abstract. In Inertial Confinement Fusion (ICF) experiments, the intense radiation
environment requires the experimenters to stay away from the experimental site.
The detector signal has to be transmitted to a safe area through long signal cables,
which not only aggravates signal attenuation, reduces the SNR and affects the
measurement precision, but also increases the cost of the experimental system.
Therefore, we designed a data acquisition system based on ZigBee wireless
communication technology. ZigBee has the advantages of low power consumption
(about 30 mA operating current and 1 µA sleep current), convenient deployment
and flexible networking. The wireless communication distance can reach up to
50 m and the wireless communication rate can reach about 11.5 kbps. Based on
the CC2530 wireless network chip, the system achieves wireless data acquisition
and remote control.

Keywords: ZigBee · ICF · Wireless network · Data acquisition system

1 Introduction

In ICF, the intense radiation environment requires the experimenters to stay away from
the experimental site. In order to achieve remote control and data acquisition, the
detector signal has to be transmitted to a safe area through signal cables several tens of
meters long. Long cables not only aggravate signal attenuation, reduce the SNR and
affect the measurement precision, but also increase the cost of the experimental system.
Especially in the case of hundreds of signal channels, the cost of high-fidelity signal
cables and connectors alone tends to be more than a third of the cost of the measurement
system. Therefore, we designed a data acquisition system based on ZigBee wireless
communication technology.
Nowadays, wireless communication technologies such as ZigBee, Wi-Fi, Bluetooth
and the Infrared Data Association (IrDA) standard are widely applied. Compared with
the other techniques, ZigBee is a short-range wireless network communication
technology with the advantages of low cost, low power consumption and reliable data
transmission. It works in a license-free wireless communication frequency band, so no
additional licensing cost is required. It is mainly applied to remote control and
automation [1, 2].


2 System Design

The wireless data acquisition system consists of two parts, the wireless front-end
electronics and the data processing center. The front-end electronics is mainly
made up of a filter and amplification circuit, an analog-to-digital converter (ADC),
a data cache unit and a wireless communication unit. The data processing center is
mainly composed of the ZigBee coordinator and a host computer (see Fig. 1).

Fig. 1. System structure diagram

2.1 Wireless Front End Electronics


The wireless front-end circuit board is shown in Fig. 2. Because of the digital
processing, digital noise will inevitably affect the SNR of the detector signals.
Therefore, a low-pass filter circuit is placed before the analog-to-digital conversion.
In order to achieve a higher accuracy of data acquisition, we need to obtain enough
sampling points within the sampling time, which demands a high-accuracy,
high-sample-rate ADC. This system uses a 12-bit high-speed FADC circuit with a
sample rate of up to 210 MSPS.

Fig. 2. Wireless front end electronics



In data acquisition and processing, we adopted an FPGA on account of its large
storage space and convenient algorithm implementation. The FPGA caches the data
and sends them after it receives a command from the data processing center.
Based on the CC2530 chip, the wireless communication unit obtains the data from the
FPGA and transmits them to the data processing center.

2.2 Data Processing Center


The ZigBee coordinator is connected to the PC via a USB port, which is convenient and
flexible (see Fig. 3). Its size is 18 × 50 mm and its operating current is only 40 mA.

Fig. 3. ZigBee coordinator

2.3 Wireless Network


The ZigBee coordinator is the center of the entire wireless network and is
responsible for organizing, managing and maintaining the network. After a series of
initialization steps, the ZigBee coordinator chooses an appropriate channel and a unique
network PANID to form a network. A terminal node is integrated in the front-end
electronics. Once the terminal node is powered on, it searches for the wireless network,
requests to join it and, if successful, obtains a unique 16-bit short address. After that, the
ZigBee wireless network is fully established and implements remote control and data
transfer [3]. Within the same network, the configuration parameters of all devices, such
as the PANID and the channel, must be set to the same values [4].

3 Experimental Results

In order to test the system, we designed a waveform readout program. With specific
commands sent by the PC program, we can modify the trigger mode, trigger threshold,
data length, etc. We used a photomultiplier to detect cosmic rays and acquired the signal
through the wireless DAQ (Fig. 4, right). Compared with the waveform read by an
oscilloscope (Fig. 4, left), the waveform read by the wireless system agrees very well, so
the system can replace the oscilloscope for waveform readout in a severe radiation
environment. The effective number of bits (ENOB) of the ADC is 10.9 bits, which means
the measurement accuracy is higher than that of a general-purpose oscilloscope. Without any signal

Fig. 4. Waveform read by oscilloscope (left) and by wireless DAQ (right)

Fig. 5. Noise test

input, we obtained the noise distribution shown in Fig. 5; the standard deviation of the
noise is 0.525 mV.
In fact, ZigBee is a low-rate, short-range wireless communication technology; the
transmission rate can reach up to about 11.5 kbps. However, the ICF experiment is a
special case: the shot itself is very short and the interval between shots is several hours,
so there is enough time to transmit the data and the transmission rate does not become a
bottleneck. With a proper external antenna, the wireless communication distance can
reach up to 50 m in a laboratory environment.

4 Conclusions

In this paper, we designed a wireless data acquisition system based on ZigBee.
Compared with long signal cables, the wireless DAQ has the advantages of low power
consumption, convenient deployment and flexible networking. In addition, its highly
reliable data transmission provides a solid technical basis for the measurements.

Acknowledgement. This work is supported by the National Natural Science Foundation of
China under Grant No. 11375179.

References
1. Farahani, S.: ZigBee Wireless Networks and Transceivers. Elsevier Pte. Ltd., North Holland
(2008)
2. Huo, L., Liu, S., Hu, X.: ZigBee Technology and Application. Beihang University Press,
Beijing (2007)
3. Luo, Q., Qin, L., Li, X., Wu, G.: The implementation of wireless sensor and control system in
greenhouse based on ZigBee. In: 35th Chinese Control Conference (2016)
4. Li, W., Duan, Z.: ZigBee2007/PRO Protocols Stack Experiment and Practice. Beihang
University Press, Beijing (2009)
A Lightweight Framework for DAQ System
in Small-Scaled High Energy Physics
Experiments

Yang Li1,2,3, Wei Shen1,2,3, Si Ma1,2, and XiaoLu Ji1,2
1 State Key Laboratory of Particle Detection and Electronics, Institute of High Energy Physics, CAS and University of Science and Technology of China, Beijing 100049, China
liyang616@ihep.ac.cn
2 Institute of High Energy Physics, CAS, Beijing 100049, China
3 University of the Chinese Academy of Sciences, Beijing 100049, China

Abstract. The Data Acquisition (DAQ) system is essential for high energy physics
experiments. For large-scale experiments, a distributed DAQ framework is preferred
because it offers powerful online features. However, some small-scale experiments
have less complicated requirements for online data processing, so a lightweight DAQ
framework saves development time and manpower. This paper presents the design
and implementation of a lightweight DAQ framework on a standalone server. The
framework provides run control, configuration, online data transmission, online event
building, lossless data compression, data storage and real-time data quality monitoring.
The framework is flexible and easy to use. Each component has an independent
interface, so users can easily customize experiment-related functions. So far, this
framework has been successfully tested in different experiments, which demonstrated
its good capability and high reliability.

Keywords: Data acquisition system · Lightweight framework · High energy physics experiments

1 Introduction

The DAQ system plays a key role in high energy physics experiments. Its main tasks
are collecting the data fragments from the electronics modules, building them into
events and then saving them to disk. There are also some online processes that need to
be implemented in the DAQ, such as data compression and event monitoring. A
functional, reliable DAQ system with high capability ensures the running efficiency of
the physics experiments.
Some good distributed DAQ frameworks have been widely used, such as the
ATLAS TDAQ [2]. TDAQ is a mature, powerful and fully functional DAQ framework,
but it is too complicated to be used in some small-scale experiments. On the other hand,
with the improvement of the computing capacity of hardware, a single server is
becoming increasingly powerful. So we want to develop a lightweight DAQ
framework based on one single server, with which users can quickly build a new DAQ
system for their experiments with minimum development time and manpower costs.
The following features have been carefully designed when developing this light-
weight DAQ framework:
• Small volume and easy to carry
• Lightweight integrated architecture
• Concise and clear data-flow structure
• Rich DAQ features
• High capability of data processing
• Independent internal interface
• Friendly user interface
Based on this framework, users can easily build their own DAQ systems,
customizing the DAQ functions according to their system requirements.

2 Design and Implementation

The framework design can be divided into two layers: the data flow layer and the
interactive layer (Fig. 1).

Fig. 1. Framework design

1. The data flow layer is responsible for data receiving, processing and saving.
2. The interactive layer is responsible for all the controls and operations during data-
taking. It is also used for information transmission and provides the interface
between the users and the DAQ system.

2.1 Data Flow Layer Design


The diagram of the data flow layer is shown in Fig. 2. This layer is built from three
major components: the Read Out System (ROS), the Data Processing System (DPS)
and the Data Storage System (DSS). It is implemented in C++ and runs on the Linux
platform.

Fig. 2. Data flow layer design

• The ROS establishes socket connections with the front-end modules, reads data
from the electronics and transfers them to the back-end processing modules over the
TCP/IP protocol. It is designed on a client-server architecture (a minimal socket
sketch is given after this list).
• The DPS component is responsible for data processing, such as event sorting, event
building and data compression.
• The DSS takes charge of data storage and provides files for offline analysis.
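The sketch referred to above is given here. It is a minimal, hypothetical illustration of the ROS client-server pattern, not the framework's actual code; the port number and buffer size are arbitrary and error handling is omitted.

// Accept one TCP connection from a front-end module and read raw data fragments.
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstdio>
#include <vector>

int main() {
  int srv = socket(AF_INET, SOCK_STREAM, 0);
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = INADDR_ANY;
  addr.sin_port = htons(9000);                 // placeholder port
  bind(srv, (sockaddr*)&addr, sizeof(addr));
  listen(srv, 1);

  int fee = accept(srv, nullptr, nullptr);     // one front-end electronics link
  std::vector<char> buf(64 * 1024);
  ssize_t n;
  while ((n = read(fee, buf.data(), buf.size())) > 0) {
    // hand the fragment over to the Data Processing System (omitted here)
    std::printf("received %zd bytes\n", n);
  }
  close(fee);
  close(srv);
  return 0;
}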
The data flow layer is easy to maintain because of its simple and clear structure. The
components work in parallel, which greatly improves the efficiency of the system.
Data transfer is the most important task in the DAQ, so the data flow layer is the key
to the whole DAQ system. In order to ensure the capability and reliability of the
system, the following methods have been used in the data flow layer.
Thread-Safe Queue. The DPS component uses a producer/consumer model between
threads (Fig. 3). The framework uses queues as data buffers and encapsulates the queues
with a thread-safe read-write lock to assure stability and reliability.

Fig. 3. Thread-safe queue and zero-copy technology

Zero-Copy Technology. Data transmission is a crucial process in the data flow
layer. Traditional memory copies introduce resource overhead, whereas passing the
data by pointer is a good way to reduce the overhead and improve the performance
(Fig. 3).
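A condensed C++ illustration of these two techniques, assuming a simple Event structure, might look as follows; this is a sketch of the pattern, not the framework's implementation.

// Lock-protected queue with blocking pop; producer and consumer exchange
// unique_ptr handles, so only the pointer moves between threads ("zero-copy").
#include <queue>
#include <mutex>
#include <condition_variable>
#include <memory>
#include <vector>

struct Event { std::vector<char> payload; };

class SafeQueue {
 public:
  void push(std::unique_ptr<Event> e) {
    { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(e)); }
    cv_.notify_one();
  }
  std::unique_ptr<Event> pop() {               // blocks until an event is available
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [this] { return !q_.empty(); });
    auto e = std::move(q_.front());
    q_.pop();
    return e;                                  // the event data itself is never copied
  }
 private:
  std::queue<std::unique_ptr<Event>> q_;
  std::mutex m_;
  std::condition_variable cv_;
};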

Resource Management. In the experiments, some resources, especially those related
to the system status, must be managed effectively. Based on the principle of a
finite-state machine, the framework has four running states (Fig. 4): Waiting, Ready,
Running and Stopped. Resources are created when the system receives the
“initialization” command in the Waiting state and are released safely when it receives
the “stop” command in the Running state. If any error occurs, the software
automatically performs the transition to the Stopped state and releases all the related
resources.

Fig. 4. Resource management
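A minimal C++ rendering of this state machine could look like the following; the state names follow the text, while the resource actions are placeholders for what the framework really allocates and releases.

// Simplified run-control finite-state machine (illustrative only).
enum class RunState { Waiting, Ready, Running, Stopped };

class RunControl {
 public:
  void initialize() {                          // "initialization" command
    if (state_ == RunState::Waiting) { /* allocate buffers, open sockets */ state_ = RunState::Ready; }
  }
  void start() { if (state_ == RunState::Ready) state_ = RunState::Running; }
  void stop()  {                               // "stop" command, or any fatal error
    if (state_ == RunState::Running) { /* release all resources safely */ state_ = RunState::Stopped; }
  }
  RunState state() const { return state_; }
 private:
  RunState state_ = RunState::Waiting;
};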

2.2 Interactive Layer Design


The interactive layer of the DAQ system is the global service layer responsible for
configuration, control, display and information sharing inside the DAQ system. It also
provides a GUI to communicate with users. The interactive layer and the data flow
layer share information and data through the network.
There are two functional modules in this layer, the display module and the control
module (Fig. 5). Each module has its own network connection to the data flow layer
for information transmission.

Fig. 5. Interactive layer design



The control module is in charge of sending commands and receiving log messages.
The display module is responsible for receiving sampled data and plotting them in
real time.
The modules work independently. Each module receives information over its own
connection, which reduces competition for resources and thus improves the robustness
of the system.
The interactive layer provides the GUI between the human users and the DAQ system.
The GUI is standalone software designed with the Qt framework. It can run on the local
DAQ server for local control, or on any other PC for remote control (Fig. 6). The GUI
provides buttons for users to control the running status of the system and also displays
running information and real-time plots. Framework users can customize the functions
in the GUI as needed by the experiment.

Fig. 6. Framework GUI

3 An Application Instance

Based on the framework, we developed the DAQ system for a silicon pixel detector
system, which will be used for the detection of synchrotron radiation. We chose this
framework because of the small volume, compact structure and simple data flow of the
silicon pixel detector system. This DAQ system aims to provide run control, data
readout, event building, graphical display and other basic DAQ functions.
The hardware structure of the silicon pixel DAQ system is shown in Fig. 7. There are
twelve silicon sensors at the front of the detector and each sensor corresponds to one
electronic readout board. All the data read from the front-end electronics are
transferred to the DAQ system over Gigabit Ethernet through a switch. The DAQ
system runs on a Lenovo X3750 server, and a separate computer is used for remote
control and display.
Fig. 7. DAQ system hardware structure

As described in the framework design, the software architecture includes two parts,
the data flow layer and the interactive layer. Its design is shown in Fig. 8. The data flow
layer is responsible for data readout, event building, online compression and data
storage. There are two main tasks assigned to the interactive layer: the information
interaction between the two layers and real-time display.

Fig. 8. DAQ system software structure

So far, the DAQ system has been successfully developed based on the framework.
The required DAQ functions have been achieved (Fig. 9), and the performance
satisfies the requirements of the whole system.

Fig. 9. System running diagram



4 Performance Evaluation

The DAQ system for the silicon pixel detector is used as an example to study the
framework performance. To have a better understanding of the DAQ itself, we use
software to emulate the front-end electronics modules.
Analysis of the test results showed that, in the current system environment, disk I/O
is the performance bottleneck. Therefore the LZ4 [5] compression algorithm is used in
the online data processing, and the data bandwidth for storage has been reduced to
about 40% of the readout data bandwidth.
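The compression step itself can be performed with the public LZ4 C API roughly as sketched below; the buffer handling is illustrative, and only LZ4_compressBound and LZ4_compress_default from lz4.h are used.

// Compress one event block with LZ4 before it is written to disk.
#include <lz4.h>
#include <vector>
#include <cstdio>

std::vector<char> compress_block(const std::vector<char>& raw) {
  const int bound = LZ4_compressBound(static_cast<int>(raw.size()));
  std::vector<char> out(bound);
  const int n = LZ4_compress_default(raw.data(), out.data(),
                                     static_cast<int>(raw.size()), bound);
  out.resize(n > 0 ? n : 0);                   // n == 0 signals a compression error
  return out;
}

int main() {
  std::vector<char> raw(1 << 20, 'x');         // 1 MiB dummy event block
  auto packed = compress_block(raw);
  std::printf("compressed %zu -> %zu bytes\n", raw.size(), packed.size());
  return 0;
}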
With the current system settings, the maximum event rate for stable data-taking can
reach about 1300 Hz, and the corresponding readout data bandwidth is about 2.5 GB/s
(Fig. 10). So far the performance study has shown very good results, and we are still
investigating possible solutions to improve the performance further.

Fig. 10. Data rate on software simulating environment

5 Summary

The core design of the framework has been achieved. The framework can be used for
DAQ system development in small-scale experiments with minimal time and manpower
costs. Framework users can quickly build their own DAQ systems thanks to the modular
design, and the framework provides independent processing interfaces, so users can
easily integrate functions according to experiment-related requirements.
A DAQ system for silicon pixel detectors has been successfully developed based on
the framework. Preliminary test results show that the framework performs well and that
the architecture design meets the requirements of the experiment. We are now using
this framework to develop DAQ systems for other experiments.
The framework is still under development; more detailed design and development
are ongoing, and we will keep optimizing the framework based on the feedback from
the experiments.

References
1. Li, F., Ji, X., Li, X., et al.: DAQ architecture design of Daya Bay reactor neutrino experiment.
Nucl. Sci. IEEE Trans. 58(4), 1723–1727 (2011)
2. ATLAS Collaboration, Åkesson, T., Eerola, P., et al.: ATLAS high-level trigger, data acquisition and
controls technical design report. ATLAS Technical Design Reports (2003)
3. Ma, S., Li, F., Shen, W., et al.: The DAQ system for a beam detection system based on TPC-
THGEM. In: 2016 IEEE-NPSS Real Time Conference (RT), pp. 1–4, 6 June 2016
4. Gu, M., Zhu, K., Li, F., et al.: TaskRouter: a newly designed online data processing
framework. In: 2016 IEEE-NPSS Real Time Conference (RT), pp. 1–4, 6 June 2016
5. LZ4: Extremely fast compression algorithm. https://github.com/lz4/lz4
Data Transmission System for 2D-SND
at CSNS

Dongxu Zhao1,3, Hongyu Zhang2,3(&), Xiuku Wang1,3, Haolai Tian1, and Junrong Zhang1

1 China Spallation Neutron Source, Institute of High Energy Physics,
Chinese Academy of Sciences, Dongguan 523803, China
2 Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
zhy@ihep.ac.cn
3 State Key Laboratory of Particle Detection and Electronics, Beijing 100049, China

Abstract. China Spallation Neutron Source (CSNS) is the first high-performance pulsed
neutron source in China. In phase one, there are three instruments at CSNS: the General
Purpose Powder Diffractometer (GPPD), the Small Angle Neutron Spectrometer (SANS)
and the Multi-purpose Reflectometer (MR). At present, the 2-dimension scintillator
neutron detector (2D-SND) for GPPD and its related sub-systems have been constructed.
As one of these sub-systems, the data transmission system provides a reliable and
effective parallel processing method for data processing and data transfer from the data
acquisition system (DAQ) to the data analysis system. It is developed in the C language.
In this system, multi-threading is used to implement data processing, shared memory is
used to communicate with the DAQ system, and the Distributed Information Management
(DIM) system is used to interact with the online data analysis system. Results of neutron
beam experiments show that the data transmission system is robust and stable and meets
the design requirements of GPPD. It can be widely applied to other instruments of CSNS
in the future.

Keywords: CSNS · GPPD · Data transmission · Parallel processing · DIM

1 Introduction

China Spallation Neutron Source (CSNS) is designed to host 18 instruments and to
provide pulsed neutron beams for industrial users to perform neutron scattering
experiments. Among these instruments, the General Purpose Powder Diffractometer
(GPPD) is under construction and will come into use in the summer of 2017. GPPD
consists of four banks of 2-dimension scintillator neutron detectors (2D-SND), an
electronics system, a data acquisition system (DAQ), a data transmission system and
an online data analysis system.
The 2D-SND is used to detect neutrons and produce electronic signals. It consists of 36
modules, and each module has 192 channels. The electronics system receives analog
signals from the detector, amplifies them, converts them into digital signals, builds raw
events from the digitized signals, and finally sends the raw events to the DAQ system.
The front-end electronics system consists of 36 boards, corresponding to the 36 detector
modules. Each electronics board receives signals, processes them and sends event data
independently. The electronics system uses SiTCP to send event data via Gigabit
Ethernet [1]. The DAQ system reads the raw event data from the electronics system and
saves the raw events. The data transmission system obtains the raw events from the DAQ
system, selects the good events and transfers them to the data analysis system. The data
analysis system receives the good events, reconstructs events, analyzes the reconstructed
events [2] and displays the results in the form of charts [3].

2 Data Transmission System Design and Implementation

The data transmission system is an important component that links the DAQ system and
the data analysis system. It can be divided into three parts: data processing, the interface
between the DAQ system and the data transmission system, and the interface between
the data transmission system and the data analysis system.

2.1 Data Processing Framework

The whole data transmission system is written in the C language. As a procedure-oriented
language, C offers an easy way to process data with flexibility and high efficiency. The
program is designed to be multi-threaded; multi-threading makes multi-tasking and
parallel processing more efficient. Events from each electronics board are received and
processed independently in a separate thread. Processing includes getting the events of
each electronics board, picking good events from the raw events and storing them in a
buffer. The data processing flow is shown in Fig. 1.

Fig. 1. Flow chart of data processing

Good events selected from each electronics board are stored in its own private buffer.
A public buffer is used to collect the good events from all private buffers, so that events
from all electronics boards can be sent together to the data analysis system. Whenever a
private buffer holds a good event, the event is copied to the public buffer immediately.
When the public buffer is close to full and cannot accept a newly arrived good event, the
events in the public buffer are sent out and the buffer is cleared. A mutex implements
mutually exclusive access to this shared resource; using the mutex together with the
public buffer ensures that every event stored in the public buffer is complete and correct.
The mutex and the public buffer are created in the main procedure of the data
transmission system before all event processing threads are created.
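A minimal sketch of this scheme is given below, using POSIX threads. The event type,
buffer limit and helper functions are placeholders for illustration only, and the sketch is
written in C++ for brevity even though the actual system is implemented in C.

// Illustrative per-board processing thread with a mutex-protected public buffer.
#include <pthread.h>
#include <vector>
#include <cstdint>

struct Event { std::vector<uint8_t> payload; };

static pthread_mutex_t g_mutex = PTHREAD_MUTEX_INITIALIZER;
static std::vector<Event> g_publicBuffer;          // shared by all board threads
static const size_t kPublicBufferLimit = 100000;   // assumed flush threshold

// Hypothetical helpers standing in for the real readout/selection/sending code.
Event read_raw_event(int boardId);
bool  is_good_event(const Event& e);
void  send_public_buffer(const std::vector<Event>& buf);

// One thread per electronics board: read, select, then copy into the public buffer.
void* board_thread(void* arg)
{
    int boardId = *static_cast<int*>(arg);
    for (;;) {
        Event ev = read_raw_event(boardId);         // private, per-board readout
        if (!is_good_event(ev)) continue;           // keep only good events

        pthread_mutex_lock(&g_mutex);               // exclusive access to shared buffer
        if (g_publicBuffer.size() >= kPublicBufferLimit) {
            send_public_buffer(g_publicBuffer);     // ship events to data analysis
            g_publicBuffer.clear();
        }
        g_publicBuffer.push_back(ev);
        pthread_mutex_unlock(&g_mutex);
    }
    return nullptr;
}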

2.2 Interface Between DAQ System and Data Transmission System

The interface between the DAQ system and the data transmission system is based on
shared memory, which provides a simple and fast way to share resources in a
multi-system architecture.
The data transmission system sets up a number of shared-memory segments, with a
one-to-one mapping between segments and electronics boards. The DAQ system saves
the events of each electronics board into its corresponding shared-memory segment, and
the data transmission system reads the events from these segments. This part is also
multi-threaded: reading the events of one electronics board from its segment and
processing those events are done in the same thread.
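A minimal sketch of attaching to one per-board shared-memory segment is shown below,
assuming POSIX shared memory; the segment name and size are placeholders, not the
actual interface, and the sketch is in C++ although the real system is written in C.

// Map the shared-memory segment written by the DAQ system for one board (read-only).
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

static const size_t kSegmentSize = 4 * 1024 * 1024;   // assumed 4 MB per board

void* attach_board_segment(int boardId)
{
    char name[32];
    std::snprintf(name, sizeof(name), "/snd_board_%02d", boardId);  // hypothetical name
    int fd = shm_open(name, O_RDONLY, 0666);
    if (fd < 0) { std::perror("shm_open"); return nullptr; }
    void* addr = mmap(nullptr, kSegmentSize, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                                        // mapping stays valid after close
    return (addr == MAP_FAILED) ? nullptr : addr;
}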

2.3 Interface Between Data Transmission System and Data Analysis System

DIM, developed by the European Organization for Nuclear Research (CERN), is adopted
as the interface between the data transmission system and the data analysis system [4].
DIM is based on the client/server paradigm. The basic concept in the DIM approach is the
"service". Servers "publish" their services by registering them with a name server. Clients
"subscribe" to services by asking the name server which server provides them and then
contacting that server directly. DIM provides a way to realize loose coupling while
remaining very efficient for data transmission.
The DIM server runs inside the data transmission system, and a name server (DNS)
runs alongside it. In the main procedure of the data transmission system, the DIM server
creates a service attached to the public buffer and starts serving. In every data processing
thread, the DIM server then updates the service to publish it and to send the events held
in the public buffer. When data transmission is stopped, the DIM server stops using the
service and removes it; these actions again take place in the main procedure. The entire
sequence of DIM server calls in the data transmission system is shown in Fig. 2. The DIM
client is an independent thread of the event receiving and reconstruction program in the
data analysis system. The DIM client subscribes to the service provided by the DIM
server and registers a callback routine to receive events from the service. Events received
by the routine are passed to the rest of the program in a thread-safe way. The sequence of
DIM client calls in the data analysis system is shown in Fig. 3.

Fig. 2. Diagram of calling DIM server Fig. 3. Diagram of calling DIM client
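A minimal sketch of the DIM server calls described above is given below, assuming the
standard C interface of the DIM library (dis.h). The service name, buffer size and task
name are illustrative only, and the sketch uses C++ syntax although the actual system is
written in C.

// Publish the public buffer as a DIM service and update it when events are flushed.
#include <dis.h>

static char g_publicBuffer[1024 * 1024];   // events collected from all boards
static unsigned g_serviceId = 0;

void start_dim_server()
{
    // Register the service with the DIM name server (DNS) and start serving.
    g_serviceId = dis_add_service(const_cast<char*>("SND/EVENT_BUFFER"),
                                  const_cast<char*>("C"),        // raw byte stream
                                  g_publicBuffer, sizeof(g_publicBuffer),
                                  nullptr, 0);
    dis_start_serving(const_cast<char*>("SND_TRANSMISSION"));
}

void publish_events()
{
    // Called from the processing threads whenever the public buffer is sent out.
    dis_update_service(g_serviceId);
}

void stop_dim_server()
{
    // Called in the main procedure when data transmission stops.
    dis_remove_service(g_serviceId);
}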

3 An Application

The data transmission system, together with the 2D-SND and the other related systems,
has been applied successfully in a neutron beam experiment. In this experiment the beam
intensity was 10^6–10^7 counts/s and the event rate of each 2D-SND module was 25 Hz.
The results showed that no events were lost during data transmission. A neutron image
taken with one module of the 2D-SND is shown in Fig. 4.

Fig. 4. Neutron imaging on one module of 2D-SND

4 Conclusions

The data transmission system for the 2D-SND is a stable and efficient mechanism for
reliable data transfer. Thanks to its common framework, it can easily be extended and
improved to fit other CSNS instruments. A distributed environment will be introduced in
the next stage to further improve efficiency.

Acknowledgments. This work is supported by the National Natural Science Foundation of China
(No. 11305191).

References
1. Uchida, T.: Hardware-based TCP processor for gigabit ethernet. IEEE Trans. Nucl. Sci. 55(3),
1631–1637 (2008)
2. Du, R., Tian, H., Zuo, T., Tang, M., Yan, L., Zhang, J.: Data reduction for time-of-flight
small-angle neutron scattering with virtual neutrons. Instrum. Sci. Technol. 45(5), 541–557
(2017)
3. Tian, H.L., Zhang, J.R., Yan, L.L., Tang, M., Hu, L., Zhao, D.X., Qiu, Y.X., Zhang, H.Y.,
Zhuang, J., Du, R.: Distributed data processing and analysis environment for neutron
scattering experiments at CSNS. Nucl. Instrum. Methods Phys. Res. 834, 24–29 (2016)
4. DIM Homepage. http://dim.web.cern.ch/dim/. Accessed 20 May 2017
Design of DAQ Software for CSNS General
Purpose Powder Diffractometer

Xiuku Wang1,3, Hongyu Zhang2,3(&), Yubin Zhao1,2,3, Mali Chen2,3,
Dongxu Zhao1,3, Bin Tang1,3, Liang Xiao1,3, Shaojia Chen1,3, and Haolai Tian1,3

1 China Spallation Neutron Source (CSNS), Institute of High Energy Physics (IHEP),
Chinese Academy of Sciences (CAS), Dongguan 523803, China
2 Institute of High Energy Physics (IHEP), Chinese Academy of Sciences (CAS),
Beijing 100049, China
zhy@ihep.ac.cn
3 State Key Laboratory of Particle Detection and Electronics, Beijing 100049, China

Abstract. This paper presents the design of the data acquisition (DAQ) software for the
China Spallation Neutron Source (CSNS) General Purpose Powder Diffractometer
(GPPD). GPPD is made up of 36 MA-PMT detector modules and 6912 electronics
channels. The total hit rate of GPPD is about 300 kHz. The DAQ software is composed of
a readout module, a detector and electronics configuration parameter management
module, a distributed communication module, an EPICS interface module, a data
analysis module for the electronics, an interface with the offline data analysis software,
etc. Raw data from the front-end electronics are read out via SiTCP Gigabit Ethernet.
C/C++ are used as the programming languages. Qt is used as the development tool on
Linux, while LabVIEW is used for DAQ prototype design and the GUI on the Windows
platform. The DAQ software has been tested in a cobalt neutron source experiment and
in a reactor neutron beam line experiment together with the detector and electronics
systems. The results show that the software is stable and reliable and meets the
requirements of engineering deployment. Improvement, optimization and function
expansion are still ongoing according to the new experimental results.

Keywords: CSNS · GPPD · DAQ software · Qt

1 Introduction

The General Purpose Powder Diffractometer (GPPD) is one of the three spectrometers of
the China Spallation Neutron Source (CSNS), shown in Fig. 1. The function of GPPD is
to measure the cross section of neutrons scattered from the sample under study and to
obtain the event position via data analysis. Figure 2 shows the structure of the system.
There are 36 MA-PMT detector modules and each module has 192 channels, so there
are 6912 channels in this system. The total hit rate is below 300 kHz and the
single-detector hit rate is below 30 kHz; electronics event data are packaged every 40 ms.


Fig. 1. China Spallation Neutron Source Fig. 2. Structure of the DAQ system

The 36 electronics and detector modules are connected to gigabit network switches
through 1 Gb optical fibers. Four computer servers are used to read out the electronics
data from the switches. One computer hosts the GUI for electronics configuration and
run control, and one server is used for data checking and data analysis. A 10 Gb Ethernet
switch connects the readout servers, the data storage server, the GUI computer, the online
analysis servers and the slow control system. The architecture of the whole system is
shown in Fig. 3.

Fig. 3. Architecture of the whole system

2 Design of GPPD DAQ System

SiTCP [1] gigabit network transmission is used in this system: the RBCP protocol is used
to configure electronics registers and send commands, while the TCP protocol is used for
front-end electronics data readout. C/C++ are used as the programming languages for
readout performance reasons.
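As an illustration of the TCP readout path, the sketch below opens one SiTCP data
connection and reads one fixed-size data package; the IP address, port and package size
used by a caller would be placeholders, not the actual GPPD values.

// Open a plain TCP connection to one electronics board and read one data package.
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstdio>
#include <cstdint>

int open_board_connection(const char* ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);
    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        std::perror("connect");
        close(fd);
        return -1;
    }
    return fd;                      // the board streams event data on this socket
}

// Read exactly 'len' bytes of one 40 ms data package from the board.
bool read_package(int fd, uint8_t* buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, buf + got, len - got, 0);
        if (n <= 0) return false;   // connection closed or error
        got += static_cast<size_t>(n);
    }
    return true;
}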
The CSNS GPPD DAQ software is composed of a number of function modules, which
are shown in Fig. 4. There are four electronics readout programs, and each program
connects to nine detector modules for data acquisition. A distributed management module
sends messages to the four readout programs, which are deployed on different servers.
The electronics configuration module configures the 6912 electronics channels with
threshold and compensation values; these values are used to calibrate the electronics
channels and to check for bad and dead channels. The detector parameter management
and configuration module adjusts the thresholds of the 6912 detector channels, which is
very important for improving the consistency and efficiency of the detector.

Fig. 4. Module of DAQ software Fig. 5. GUI for GPPD DAQ
Qt [2] is used as the development tool on Linux. The GUI shown in Fig. 5 allows the
detector and electronics developers to modify and configure parameters, enable modules
and select run modes. The EPICS interface and the data analysis program are also
integrated into the DAQ GUI.

3 Test in Lab and Neutron Source Experiments and Result Analysis

Many software tests have been carried out with the electronics modules in the laboratory,
in both calibration mode and online data-taking mode. In these tests the functions of the
hardware and the software were verified to be correct. The detector modules, together
with the electronics and the DAQ software, were also tested with a radioactive neutron
source in the laboratory, as shown in Fig. 6. In addition, three detector modules and their
associated electronics and DAQ were put on a reactor neutron beam line for joint
experiments, as shown in Fig. 7.

Fig. 6. Radioactive neutron source Fig. 7. Reactor neutron beam experiment

Raw data obtained in the reactor neutron beam experiments were analyzed offline.
Electronics raw data are packed every 40 ms according to the T0 count numbers. The
GPPD data format is shown in Fig. 8. The head of the data package includes flags,
detector information and the run mode, which helps the DAQ software record the details
of the experiment. The main part of the data consists of the electronics channel numbers
hit by neutrons and the corresponding time information of the events.

Fig. 8. Data format of GPPD Fig. 9. Histogram of channel checking
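For illustration, one possible in-memory representation of such a package is sketched
below; the field names and widths are assumptions based on the description above, not
the actual on-wire format.

// Illustrative in-memory layout of one 40 ms GPPD data package.
#include <cstdint>
#include <vector>

struct PackageHeader {
    uint32_t flags;        // package flags
    uint32_t detectorId;   // detector / module information
    uint32_t runMode;      // calibration or data-taking mode
    uint32_t t0Count;      // T0 count number used to align 40 ms packages
};

struct Hit {
    uint16_t channel;      // electronics channel number hit by a neutron
    uint32_t time;         // time of the hit relative to T0
};

struct Package {
    PackageHeader header;
    std::vector<Hit> hits; // main body: channel numbers and time information
};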

The GPPD DAQ data analysis software is developed in C++, LabVIEW and ROOT. A
channel hit histogram is used to check the detector and electronics channels; bad and dead
channels can be clearly identified in this LabVIEW program, as shown in Fig. 9. An X-Y
display of event hit positions was developed in ROOT to show hit images in neutron
scattering experiments. Figure 10 shows an image of a neutron slit experiment, and
Fig. 11 shows the test result of a standard Al2O3 sample.

Fig. 10. Image of a neutron slit experiment Fig. 11. Test result of Al2O3
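A minimal ROOT sketch of these two checks, a per-channel hit histogram and an X-Y hit
map, is given below; the binning and the channel-to-pixel mapping are placeholders, not
the mapping used by the actual analysis program.

// Fill and save monitoring histograms from a list of hit channel numbers.
#include <TH1F.h>
#include <TH2F.h>
#include <TCanvas.h>
#include <vector>
#include <cstdint>

void fill_monitor_histograms(const std::vector<uint16_t>& channels, TH1F& hChannel, TH2F& hXY)
{
    for (uint16_t ch : channels) {
        hChannel.Fill(ch);              // spot bad or dead channels
        hXY.Fill(ch % 8, ch / 8);       // assumed channel-to-pixel mapping
    }
}

void save_monitor_plots(TH1F& hChannel, TH2F& hXY)
{
    TCanvas c("c", "GPPD monitoring", 1200, 600);
    c.Divide(2, 1);
    c.cd(1); hChannel.Draw();
    c.cd(2); hXY.Draw("COLZ");
    c.SaveAs("gppd_monitor.png");
}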

4 Conclusions

The GPPD DAQ software has been tested in the laboratory and on a reactor neutron beam
line. The results show that the software is stable and reliable and meets the requirements
of engineering deployment. Further improvement, optimization and function expansion
are ongoing according to the newest experimental results.

Acknowledgement. This work is supported by the National Natural Science Foundation of China
(No. 11305191).

References
1. Uchida, T.: Hardware-based TCP processor for gigabit ethernet. IEEE Trans. Nucl. Sci. 55(3),
1631–1637 (2008)
2. Qt Homepage. https://www.qt.io/qt-for-application-development/
Design of Data Acquisition Software for CSNS Small Angle Neutron Spectrometer

Han Zhao1,2,3(&), Hongyu Zhang1,2, Mali Chen1,2, Dongxu Zhao1,4,
Liang Xiao1,4, Xiuku Wang1,4, Jinfan Chang1,2, Yubin Zhao1,2, and Hong Luo1,4

1 Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
zhaohan@ihep.ac.cn
2 State Key Laboratory of Particle Detection and Electronics, Beijing 100049, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 China Spallation Neutron Source, Institute of High Energy Physics,
Chinese Academy of Sciences, Dongguan 523803, China

Abstract. According to the data acquisition requirements of the China Spallation Neutron
Source (CSNS) Small Angle Neutron Spectrometer (SANS), this paper adopts Linux,
C++ and open-source technologies such as Qt and MySQL to develop the SANS DAQ
software. The software implements the basic DAQ functions, such as electronics
configuration, run control, data readout, online data processing and storage, status
monitoring and error alarms, as well as additional functions such as a Graphic User
Interface (GUI), an online database and waveform reconstruction. Test results show that
the software is stable, reliable and efficient, and meets the requirements of CSNS SANS.

Keywords: CSNS · SANS · DAQ

1 Introduction

China Spallation Neutron Source (CSNS) is the first spallation neutron source in China.
The Small Angle Neutron Spectrometer (SANS) is one of the three spectrometers
currently being built. Its neutron detector system consists of 120 3He tubes, each of which
outputs analog signals from both ends. Its electronics system is composed of 20
preamplifier/main-amplifier circuits and 10 readout modules; each module includes 24
electronics channels and is in charge of the readout of 12 3He tubes.

2 Design of SANS DAQ

2.1 Requirements for SANS DAQ

The SANS DAQ requires functions such as electronics configuration, run control, data
readout via Gigabit Ethernet, online data processing and storage, status monitoring and
error alarms, a Graphic User Interface (GUI), an online database, waveform
reconstruction and histogram display, and communication with the Detector Control
System and the Online Data Analysis System. The function modules of the SANS DAQ
are shown in Fig. 1.

Fig. 1. Functional design of SANS DAQ

According to the design of the SANS detector, the hit rate of a single 3He tube will be
less than 100 kHz, and the overall occupancy will be less than 40%. Front-end electronics
data are packaged every 40 ms, and 7.68 MB/s of data will be produced by each module.
This means the DAQ software needs to read out and process 76.8 MB of data per second.

2.2 Hardware Architecture of SANS DAQ

The hardware architecture of the SANS DAQ is shown in Fig. 2. Each electronics module
connects to a gigabit port of the online switch through an optical fiber. The readout
servers are connected to the switch through 10 Gb network ports, and each of them is
responsible for reading out raw data from several electronics modules. The experimenter
uses the run control computer to configure the electronics registers and to control the
readout servers for data acquisition.

Fig. 2. Hardware architecture of SANS DAQ

2.3 Software Design

The SANS DAQ software has a distributed architecture. The Run Control GUI sends
start/stop commands and the configuration information of the electronics modules to the
Readout Control Programs running on each readout server. According to the commands
received, the Readout Control Programs create and configure the corresponding Data
Readout Objects, which are responsible for the readout tasks of several electronics
modules. A Readout Control Program sends status information and statistical data back
to the Run Control GUI if its electronics module is chosen to be monitored. The Run
Control GUI also updates status logs and presents waveforms and histograms. The
software architecture is shown in Fig. 3.

Fig. 3. Software architecture of SANS DAQ

A Data Readout Object is instantiated from the Data Readout Class, whose function is to
read out and process the data from an electronics module. As shown in Figs. 4 and 5, the
work of a Data Readout Object is divided into two threads: one receives data from the
electronics modules via the network, checks the data format and then saves the data into
a ring buffer; the other reads data from the ring buffer and processes them online.

Fig. 4. Data readout class Fig. 5. Ring buffer class
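A minimal sketch of such a two-thread ring buffer is shown below; the buffer depth and
packet representation are assumptions for illustration, not the actual Data Readout Class.

// Bounded ring buffer shared by the network-receiving thread (push) and the
// online-processing thread (pop).
#include <array>
#include <vector>
#include <mutex>
#include <condition_variable>
#include <cstdint>

class RingBuffer {
public:
    bool push(std::vector<uint8_t> pkt) {
        std::unique_lock<std::mutex> lk(m_);
        if (count_ == slots_.size()) return false;     // buffer full, caller decides policy
        slots_[head_] = std::move(pkt);
        head_ = (head_ + 1) % slots_.size();
        ++count_;
        cv_.notify_one();
        return true;
    }
    std::vector<uint8_t> pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return count_ > 0; });    // block until data arrives
        std::vector<uint8_t> pkt = std::move(slots_[tail_]);
        tail_ = (tail_ + 1) % slots_.size();
        --count_;
        return pkt;
    }
private:
    std::array<std::vector<uint8_t>, 1024> slots_;      // assumed buffer depth
    size_t head_ = 0, tail_ = 0, count_ = 0;
    std::mutex m_;
    std::condition_variable cv_;
};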

2.4 Interfaces Between SANS DAQ and Other Subsystems

The Run Control GUI needs to communicate with other subsystems, such as the front-end
electronics, the Online Data Analysis System and the Detector Control System.
The electronics adopts the SiTCP gigabit network transmission scheme [1]. As a node
in the network, each electronics module is assigned a unique IP address. The DAQ
software can configure the electronics modules via the UDP protocol and acquire event
data from the electronics via the TCP protocol. SiTCP also defines the RBCP protocol to
enhance the reliability of the UDP communication. The principle of SiTCP is shown in
Fig. 6.

Fig. 6. SiTCP principle Fig. 7. DIM principle

In order to reduce the coupling between systems, the SANS DAQ uses DIM (Distributed
Information Management System) to distribute the electronics raw data to the Online
Data Analysis System [2]. The principle of DIM is shown in Fig. 7.
During the experiments, the SANS DAQ needs to exchange status information with
the Detector Control System. Since the control system adopts EPICS (Experimental
Physics and Industrial Control System) to control and monitor the spectrometer, the
sample environment and the target station, the SANS DAQ uses the APIs offered by
EZCA (Easy Channel Access) to communicate with the control system.
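A minimal sketch of this status exchange through EZCA is given below, assuming the
ezcaGet/ezcaPut calls of the EZCA library; the process variable names are placeholders,
not the real CSNS PVs.

// Write a DAQ status value to, and read a slow-control value from, the control system.
#include <ezca.h>
#include <cstdio>

void publish_event_rate(double eventRate)
{
    // Hypothetical PV name; ezcaDouble selects the double-precision transfer type.
    if (ezcaPut(const_cast<char*>("SANS:DAQ:EventRate"), ezcaDouble, 1, &eventRate) != EZCA_OK)
        std::fprintf(stderr, "failed to write event rate PV\n");
}

double read_sample_temperature()
{
    double temperature = 0.0;
    ezcaGet(const_cast<char*>("SANS:ENV:SampleTemp"), ezcaDouble, 1, &temperature);
    return temperature;
}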

2.5 SANS DAQ Graphic User Interface

As shown in Fig. 8, the GUI of the SANS DAQ consists of several sub-panels, such as
electronics configuration, run control, the online database, and waveform and histogram
presentation.

Fig. 8. Graphic User Interface of SANS DAQ

3 Test Results

The performance and stability of the SANS DAQ have been tested using simulated data.
As shown in Figs. 9 and 10, the data readout, processing and storage rate for each
front-end module can reach 111 MB/s. Long-term data taking of the whole system has
been carried out at the neutron source. A satisfying position resolution of the 3He tubes
is shown in Fig. 11.

Fig. 9. Performance test Fig. 10. Stability test Fig. 11. Position resolution

4 Summary

The SANS DAQ software has been completed and all functions have been tested and
verified. It will be deployed and start its commissioning run at CSNS.

References
1. Uchida, T.: Hardware-based TCP processor for gigabit ethernet. IEEE Trans. Nucl. Sci. 55(3),
1631–1637 (2008)
2. DIM. http://dim.web.cern.ch/dim/
Design of Data Acquisition System for CSNS
RCS Beam Position Monitor

Liang Xiao1,3,4, Dongxu Zhao1,3,4, Hongyu Zhang1,2(&), Yubin Zhao1,2,
Xingcheng Tian1,3,4, and Xiuku Wang1,3,4

1 State Key Laboratory of Particle Detection and Electronics, Beijing 100049, China
zhy@ihep.ac.cn
2 Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
3 China Spallation Neutron Source (CSNS), Institute of High Energy Physics (IHEP),
Chinese Academy of Sciences (CAS), Dongguan 523803, China
4 Dongguan Institute of Neutron Science (DINS), Dongguan 523808, China

Abstract. The proton beam position monitor (BPM) system is one of the most important
diagnostic elements set up around the Rapid Cycling Synchrotron (RCS) of the China
Spallation Neutron Source (CSNS). This paper presents the design of the data acquisition
(DAQ) system for the CSNS RCS BPM. The beam position data are published to the
network layer through the Experimental Physics and Industrial Control System (EPICS),
which allows other systems to process the data. The data acquisition software has passed
preliminary tests; the results show that the software has good practicality and scalability.

Keywords: CSNS · RCS · Beam Position Monitor · Data acquisition

1 Introduction

The proton Beam Position Monitor (BPM) system has been set up around the Rapid
Cycling Synchrotron (RCS) of the China Spallation Neutron Source (CSNS). Figure 1 is
the schematic layout of the CSNS facilities. The detectors of the proton beam position
monitor are distributed over 32 measuring points along the RCS ring. Figure 2 shows the
distribution of the beam position measurement elements on the RCS. The data acquisition
system divides the 32 beam measurement elements into 4 groups according to quadrant;
each group collects data from 8 beam measurement elements, and each element has two
pairs of beam measuring probes. The functions of the BPM DAQ system include:
configuring the work mode of the front-end electronics, reading out data from the
front-end electronics via the VME bus, processing and monitoring online data, and
storing and distributing the experimental data to the Accelerator Control System.


Fig. 1. Schematic layout of CSNS facilities Fig. 2. Distribution of beam element

2 Single Crate Electronics System

The single crate system consists of a VME crate controller and a variety of front-end
electronics modules. The crate controller is a VP B14/433-42 single board computer. The
electronics modules include charge measurement modules (BPME), readout control
modules (BROC) and T0 signal fan-out modules (BFAN). Figure 3 shows the architecture
of a single crate.

Fig. 3. Single crate architecture

2.1 Architecture of BPM DAQ System

The BPM DAQ system consists of 4 VME crate controllers located at the RCS local
stations, a host server used for online data processing and event building, a PV conversion
and upload server, and a run control and experiment monitoring server. Each VME crate
collects the position data of eight beam measuring elements, and each front-end
electronics module corresponds to one beam measuring element. The architecture of the
DAQ system is shown in Fig. 4.

Fig. 4. Architecture of DAQ system

2.2 Design of Two Work Modes

The beam parameters vary during every RF cycle. Both the Turn-By-Turn (TBT) and the
Closed-Orbit-Distortion (COD) modes are required for CSNS accelerator commissioning.
In the bunch cycle, COD mode requires the average position value of every 1024 bunches,
and the position data need to be uploaded to the network layer in a specified manner. In
TBT mode, the system needs to process and store the position information of each beam
bunch. The maximum amount of data per second is 0.5 MB for COD and 135 MB for
TBT.
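For illustration, the sketch below shows the COD-mode reduction of bunch-by-bunch
positions into averages over blocks of 1024 bunches; the input array is assumed to hold
positions already computed by the front-end electronics.

// Average the bunch-by-bunch positions in blocks of 1024 bunches (COD mode).
#include <vector>
#include <cstddef>

std::vector<double> cod_average(const std::vector<double>& bunchPositions)
{
    const std::size_t kBlock = 1024;               // bunches averaged per COD value
    std::vector<double> averages;
    for (std::size_t i = 0; i + kBlock <= bunchPositions.size(); i += kBlock) {
        double sum = 0.0;
        for (std::size_t j = 0; j < kBlock; ++j) sum += bunchPositions[i + j];
        averages.push_back(sum / kBlock);
    }
    return averages;
}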

2.3 Processing of Data Streams

The electronics modules pack the digitized data individually at a fixed frequency. The
crate controller then collects the data from each electronics module, packs them into a
pre-defined format and sends them to the host server. The PV conversion server converts
the data into PV values and uploads them to the accelerator control system. In the
meantime, the formatted raw data are stored permanently in an NFS file system for
further accelerator physics analysis.
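A minimal sketch of uploading one converted position value as an EPICS process
variable is shown below, assuming the standard Channel Access C client (cadef.h); the
PV name is a placeholder, not one of the actual CSNS records.

// Connect to a beam-position PV and write one value through Channel Access.
#include <cadef.h>

void upload_position(double positionMM)
{
    chid channel;
    ca_context_create(ca_disable_preemptive_callback);
    ca_create_channel("RCS:BPM01:X", nullptr, nullptr, 0, &channel);
    ca_pend_io(1.0);                                   // wait for the connection
    ca_put(DBR_DOUBLE, channel, &positionMM);          // write the PV value
    ca_pend_io(1.0);                                   // flush the put request
    ca_clear_channel(channel);
    ca_context_destroy();
}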

2.4 Testing Results of DAQ System

Figure 5 shows the high-frequency trigger and the beam waveform. The high-frequency
trigger window completely contains the beam response signal. The analog signal is
observed with an oscilloscope.

Fig. 5. High-frequency trigger and beam waveform

Careful analysis shows that the position resolution of the measured data is better than
1 mm. The maximum frequency component represents the operating point of the beam;
the beam operating point refers to the frequency of the oscillation around the beam
center.

3 Conclusions

The DAQ system has been used in the CSNS RCS BPM system for more than 6 months.
Both the TBT and COD work modes have been successfully tested, and the DAQ system
has been proved to meet the expected functional requirements. Operation with a fixed
proton beam frequency on the RCS has been successfully tested; the next step is to work
with a changing beam frequency.

References
1. Guan, X., Zhao, Y., Xu, T., Zhuang, B., Lu, W., Li, H., Zhao, J.: The design of BPM readout
electronics for the CSNS RCS. Nucl. Electron. Detect. Tech. 31(9) (2011)
2. Hu, J.: CSNS beam position monitor DAQ software research and implementation. Master
thesis, College of Physics, Zhengzhou University, Zhengzhou (2011)
3. Gu, M., Zhu, K., Jian, Z., Chu, Y.: Data acquisition software of LHAASO prototype system.
Nucl. Electron. Detect. Tech. 33(5) (2013)
Author Index

A Bernard, D., 27
Abdallah, A., 63 Bi, B. Y., 17, 22
Abinaya, P., 291 Bizzeti, Andrea, 279
Achenbach, P., 123, 275 Bocci, Valerio, 173
Achrekar, S., 291 Böhm, M., 123, 275, 283
Adachi, Ichiro, 46, 253, 270 Bologna, Simone, 319
Aielli, G., 63 Borg, Johan, 149
Akatsuka, Shunichi, 341 Branchini, P., 54
Alessio, Federico, 332 Brinkmann, K., 283
Ali, A., 123, 275 Britting, A., 123
Aloisio, Alberto, 54, 386 Brogna, A., 337
Amano, S., 27 Brook, N., 257
Ameli, F., 54 Bruel, P., 27
An, Qi, 190, 201, 210, 215, 324 Brugnera, Riccardo, 186
Anastasio, A., 54 Buchholz, P., 180
Andrei, Victor, 350 Buescher, V., 337
Aniruddhan, S., 291
Arai, Yasuo, 163 C
Armbruster, A., 360 Cabrera, A., 168
Arneodo, F., 109 Cachemiche, Jean-Pierre, 332
Attié, D., 27 Cadoux, Franck, 12
Ayyagiri, N., 291 Calvet, D., 27
Azzarello, Philipp, 12 Camplani, Alessandra, 50
Candela, A., 109
B Cao, Peng-cheng, 224
Barbosa, Joao Vitor Viana, 332 Cao, Ping, 201, 210, 215
Baron, P., 27 Cao, Zhe, 17, 22, 190, 324
Barria, Patrizia, 132 Capeans, M., 91, 97
Baudin, D., 27 Cardani, L., 35
Bauss, B., 337 Cardarelli, R., 63
Behere, A., 291 Cardinali, M., 123, 275
Belias, A., 123, 275 Cardoso, Luis Granado, 332
Bellato, Marco, 186 Carrillo-Montoya, G., 360
Benabderrahmane, L. M., 109 Casali, N., 35
Bergnoli, Antonio, 186 Castaneda, Alfredo, 328


Castellano, M. G., 35 Etzelmüller, E., 123, 275


Chandrachoodan, N., 291 Eyrich, W., 123, 275
Chandratre, V. B., 291
Chang, Jinfan, 426 F
Chelstowska, M., 360 Farabolini, Wilfrid, 237
Chen, Lian, 142, 398, 403 Färber, Christian, 309
Chen, Mali, 421, 426 Fawwaz, O., 109
Chen, Shaojia, 421 Feng, Changqing, 206
Chen, Yanli, 215 Fiergolski, Adrian, 303
Cheng, Li-bo, 224 Föhl, K., 123, 257, 275
Cheon, B. G., 371 Forty, R., 257
Cheremukhina, G., 195 Franchi, G., 109
Chiodi, Giacomo, 173 Frei, C., 257
Chirita, M., 283 Fresch, Paolo, 173
Chitra, 291 Fröning, Holger, 346
Cho, HanEol, 371 Frotin, M., 27
Codispoti, Giuseppe, 319
Colantoni, I., 35 G
Colas, P., 27 Gabriel, Miroslav, 127
Conforti, S., 168 Galster, G., 360
Conicella, V., 109 Gan, K. K., 180
Coppolecchia, A., 35 Gao, R., 257
Corso, Flavio Dal, 186 García, A. Ros, 257
Corti, Daniele, 186 García, L. Castillo, 257
Cruciani, A., 35 García, Pedro Javier, 346
Cussans, D., 257 Garfagnini, Alberto, 186
Czodrowski, P., 360 Geerebaert, Y., 27
Gerhardt, A., 123, 275
D
Geßler, Thomas, 391
Das, D., 291
Gessler, Thomas, 58
Dasgupta, S., 291
Giaz, Agnese, 186
Datar, V. M., 291
Giebels, B., 27
Daté, S., 27
Giordano, R., 54
de Bernardis, P., 35
Giordano, Raffaele, 386
De La Taille, C., 168
Gokhale, U., 291
Degele, R., 337
Gong, Datao, 158
Delbart, A., 27
Gong, Wen-xuan, 224
della Volpe, D., 17, 22
Gong, Wenxuan, 391
Dennis, Getzkow, 58
Götz, D., 27
Deviveiros, P.-O., 360
Götzen, K., 123, 275, 283
Di Capua, F., 54
Grassi, M., 168
Di Giovanni, A., 109
Gros, P., 27
Dirkx, Glenn, 319
Gruber, L., 283
Doebert, Steffen, 237
Guida, Roberto, 91, 97
Dolenec, Rok, 46, 270
Gys, T., 257
Dreyling-Eschweiler, Jan, 243
Dulucq, F., 168
H
Durante, Paolo, 332
Haas, S., 360
Düren, M., 123, 275
Harnew, N., 257
Dutta, K., 283
Hashimoro, Ryo, 163
Dzhygadlo, R., 123, 275
Hashimoto, S., 27
E Hataya, K., 46
Eifert, T., 360 Hayrapetyan, A., 123, 275
Ellis, N., 360 He, Huihai, 31

He, Zhou, 142, 398, 403 Kou, Han-jun, 224


Heidbrink, S., 180 Kratochwil, N., 283
Helary, Louis, 314, 360 Krebs, M., 123, 275
Heller, M., 17, 22 Kreutzfeld, K., 123
Herr, H., 337 Kreutzfeldt, K., 275
High, Suzuki Soh, 58 Krishnapura, N., 291
Hoek, M., 123, 275 Križan, P., 46
Horan, D., 27 Križan, Peter, 270
Hu, Jiadong, 190 Kröck, B., 123
Hu, Jun, 186 Kühn, Wolfgang, 391
Huang, Xiru, 210, 215 Kulis, Szymon, 158
Kumar, P., 291
I Kumita, Tetsuro, 46, 270
Iacoangeli, Francesco, 173 Kundu, T. K., 291
Ieki, Kei, 82
Igor, Konorov, 58 L
Isocrate, Roberto, 186 La Marra, Daniel, 12
Itoh, R., 382 Lagkas Nikolos, O., 360
Iwai, Ryoto, 82 Lange, Jens Sören, 391
Izzo, V., 54 Lauth, W., 123, 275
Lazaridis, Christos, 319
J Le Goff, Fabrice, 366
Jain, A., 291 Lee, Insoo, 371
James, Tom, 296 Lehman, A., 123
Jansen, Hendrik, 243 Lehmann, A., 275, 283
Ji, XiaoLu, 408 Lehmann, D., 123, 275
Ji, Xuyang, 215 Li, Chao, 210
Jin, Ge, 142, 398, 403 Li, Cheng, 190
Joshi, S. R., 291 Li, Feng, 398, 403
Li, Min, 324
K Li, Yang, 408
Kagan, H. P., 180 Liang, Futian, 403
Kahra, C., 337 Lippi, Ivano, 186
Kakuno, Hidekazu, 46, 270 Liu, Hengshuang, 154
Kalicy, G., 123, 275 Liu, Jia, 31
Kalita, K., 283 Liu, Shengli, 237
Kalmani, S. D., 291 Liu, Shubin, 190, 206, 324
Kamble, N., 291 Liu, Shulin, 113
Karmakar, S., 291 Liu, Yuzhe, 142
Kasbekar, T., 291 Liu, Zhao, 224, 382
Kass, R. D., 180 Liu, Zhen-An, 224, 391
Katsuro, Nakamura, 58 Lokapure, A., 291
Kaur, P., 291 Louzir, M., 27
Kawai, H., 46 Lu, Yunpeng, 163
Kawai, Hideyuki, 253, 270 Luo, Hong, 426
Kim, CheolHoon, 371 Luo, Laifu, 206
Kim, SungHyun, 371 Lv, Hongkui, 31
Kindo, Haruki, 46, 270 Lv, Jun-Guang, 3
Kishimoto, Shunji, 163 Lv, Pin, 3
Klemens, Lautenbach, 58 Lyu, Pengfei, 263
Kolla, H., 291
Konno, Tomoyuki, 46, 58, 270, 382 M
Korpar, Samo, 46, 270 Ma, Si, 408
Kotaka, T., 27 Ma, Siyuan, 190, 206

Machen, Jonathan, 309 Pestotnik, Rok, 46, 270


Machida, Masahiro, 46, 270 Peters, K., 123, 275
Maity, M., 291 Pfaffinger, M., 123, 275
Majumder, G., 291 Piedigrossi, D., 257
Manca, M., 63 Ping, Cao, 220
Mandelli, Beatrice, 91, 97 Poilleux, P., 27
Manna, A., 291 Prabhakar, A., 291
Martin-Chassard, G., 168 Prafulla, S., 291
Martinez, M., 35 Punna, M., 291
Marzin, A., 360
Masi, S., 35 Q
Merle, O., 123, 275 Qi, An, 220
Mikihiko, Nakao, 58 Qi, Wang, 220
Minamiyama, Y., 27 Qi, Xincheng, 215, 220
Miyamoto, S., 27
Miyoshi, Toshinobu, 163 R
Mohanan, S., 291 Rademacker, J., 257
Moitra, S., 291 Rahaman, M., 291
Mondal, N. K., 291 Raut, S. M., 291
Montaruli, T., 17, 22 Rave, S., 337
Moore, J., 180 Ravindran, K. C., 291
Moreira, Paulo, 158 Reiter Simon, P., 58
Movchan, S., 195 Rieke, J., 123, 275
Mrvar, Manca, 46, 270 Rocco, E., 337
Roy, S., 291
N Ryjov, V., 360
Nair, P. M., 291 Ryosuke, Itoh, 58
Nakao, M., 382
Nerling, F., 123, 275 S
Nessi, M., 63 Sala, P., 63
Neufeld, Niko, 309, 332, 376 Sandilya, Saurabh, 87
Niebuhr, C., 71 Šantelj, Luka, 46, 270
Nishida, Shohei, 46, 253, 270 Santos, Alejandro, 346
Nishimura, Ryutaro, 163 Santos, C., 168
Noguchi, Kouta, 46, 270 Saraf, M. N., 291
Noury, A., 168 Satyanarayana, B., 291
Nürnberg, A., 117 Sawada, Ryu, 82
Schaefer, U., 337
O Schepers, G., 123, 275
Ogawa, Kazuya, 46, 270 Schlimme, S., 123, 275
Ogawa, Satoru, 46, 270 Schmidt, Janet S., 237
Ohkuma, H., 27 Schmidt, M., 123, 275
Orth, H., 283 Schmieden, K., 360
Ouyang, Qun, 163 Schmitt, L., 123, 275, 283
Schütze, Paul, 243
P Schwartz, Alan, 87
Padmini, S., 291 Schwarz, C., 123, 275, 283
Pal, Bilas, 87 Schwemmer, Rainer, 309
Panyam, N., 291 Schwiening, J., 123, 275
Pathaleswar, 291 Seguin-Moreau, N., 168
Patsyuk, M., 123, 275 Semeniouk, I., 27
Pauly, T., 360 Settimo, M., 168
Pedretti, Davide, 186 Sfienti, C., 123, 275
Perrina, Chiara, 12 Shen, Wei, 408

Sheng, Xiangdong, 31 Vogt, M., 180


Shinde, R. R., 291 Vőneki, Balázs, 376
Sikder, S., 291 Vouters, Guillaume, 332
Sil, D., 291
Silva Oliveira, M., 360 W
Sizun, P., 27 Wang, Baochen, 142, 398
Smith, D. S., 180 Wang, Boqun, 87
Soeren, Lange, 58 Wang, C., 17, 22
Sokhrannyi, Grygorii, 41 Wang, Dong, 154
Song, Longlong, 163 Wang, Jian, 158
Souza, J., 337 Wang, Jike, 102
Spannagel, Simon, 243 Wang, Qi, 215
Spiwoks, R., 360 Wang, S., 27
Steinschaden, D., 283 Wang, Xiuku, 416, 421, 426, 431
Stelzer, J., 360 Wang, Yi, 263
Sukhwani, M., 291 Wasem, T., 123, 275
Sumiyoshi, T., 46 Weirich, M., 337
Sumiyoshi, Takayuki, 253, 270 Wen, Kaile, 113
Sun, WeiJia, 201 Wengler, T., 360
Sun, Xi-Lei, 3 Whitehead, L. H., 63
Suzuki, K., 283 Williams, Tom, 319
Suzuki, S. Y., 382 Wolfgang, Kühn, 58
Wu, Xin, 12
T
Tabata, Makoto, 8, 46, 253, 270 X
Takemoto, A., 27 Xiao, Liang, 421, 426, 431
Tang, Bin, 421 Xiong, Shao-Lin, 3
Tao, Jia, 224
Tapprogge, S., 337 Y
Thea, Alessandro, 319, 355 Yamada, S., 382
Thiel, M., 123, 275 Yamaguchi, M., 27
Thomas, M., 291 Yan, Baojun, 113
Tian, Haolai, 416, 421 Yang, Dongxu, 158
Tian, Xingcheng, 431 Yang, Yuzhen, 113
Tortone, Gennaro, 54, 386 Yanli, Chen, 220
Traxler, M., 123, 275 Ye, H., 71, 77
Ye, Jingbo, 158
U Yin, Hao, 58
Uhlig, F., 123, 275 Yin, L. Q., 17, 22
Ukleja, Artur, 137 Yin, Weigang, 142, 398
Unno, Yuji, 371 Yonamine, R., 27
Upadhya, S. S., 291 Yonenaga, Masanobu, 46, 270
Yoshizawa, Morihito, 46, 270
V Yu, Li, 201
Valat, Sébastien, 376 Yu, Yang, 113
van Dijk, M., 257 Yusa, Yosuke, 46, 270
Vandelli, Wainer, 346, 366 Yuvaraj, E., 291
Vereschagin, S., 195
Verma, P., 291 Z
Verzilov, Victor, 237 Zaporozhets, S., 195
Vichoudis, P., 360 Zhang, Hongyu, 416, 421, 426, 431
Vignati, M., 35 Zhang, Junrong, 416

Zhang, S. S., 17, 22 Zhao, Yubin, 421, 426, 431


Zhang, Ying, 201 Zhen-an, Liu, 58
Zhao, Dongxu, 416, 421, 426, 431 Zheng, Jiajun, 210
Zhao, Han, 426 Zheng, ManYu, 201
Zhao, J., 382 Zimmermann, S., 283
Zhao, Jing-zhou, 224 Ziolkowski, M., 180
Zhao, Jingzhou, 391 Zühlsdorf, M., 123
