
FOURTH EDITION

KHAN’S
Treatment Planning in Radiation
Oncology
EDITORS

Faiz M. Khan, PhD


Professor Emeritus
Department of Radiation Oncology
University of Minnesota Medical School
Minneapolis, Minnesota

John P. Gibbons, PhD


Chief Medical Physicist
Department of Radiation Oncology
Ochsner Health System
New Orleans, Louisiana

Paul W. Sperduto, MD, MPP, FASTRO


Radiation Oncologist
Minneapolis Radiation Oncology
Minneapolis, Minnesota

Acquisitions Editor: Julie Goolsby
Senior Product Development Editor: Emilie Moyer
Editorial Assistant: Brian Convery
Production Project Manager: Alicia Jackson
Design Coordinator: Joan Wendt, Elaine Kasmer
Illustration Coordinator: Jennifer Clements
Manufacturing Coordinator: Beth Welsh
Marketing Manager: Rachel Mante Leung
Prepress Vendor: Aptara, Inc.

4th edition

Copyright © 2016 Wolters Kluwer

Copyright © 2012 Lippincott Williams & Wilkins, a Wolters Kluwer business. Copyright © 2006
Lippincott Williams & Wilkins, a Wolters Kluwer business. Copyright © 1998 by Lippincott
Williams & Wilkins. All rights reserved. This book is protected by copyright. No part of this book
may be reproduced or transmitted in any form or by any means, including as photocopies or
scanned-in or other electronic copies, or utilized by any information storage and retrieval system
without written permission from the copyright owner, except for brief quotations embodied in
critical articles and reviews. Materials appearing in this book prepared by individuals as part of
their official duties as U.S. government employees are not covered by the above-mentioned
copyright. To request permission, please contact Wolters Kluwer at Two Commerce Square, 2001
Market Street, Philadelphia, PA 19103, via email at permissions@lww.com, or via our website at
lww.com (products and services).

987654321

Printed in China

Library of Congress Cataloging-in-Publication Data


Names: Khan, Faiz M., editor. | Gibbons, John P., Jr., editor. | Sperduto,
Paul W., editor.
Title: Khan’s treatment planning in radiation oncology / editors, Faiz M. Khan,
John P. Gibbons, Paul W. Sperduto.
Other titles: Treatment planning in radiation oncology.
Description: Fourth edition. | Philadelphia : Wolters Kluwer [2016] |
Preceded by Treatment planning in radiation oncology / editors, Faiz M. Khan,
Bruce J. Gerbi. 3rd ed. c2012. | Includes bibliographical references and
index.
Identifiers: LCCN 2016004611 | ISBN 9781469889979
Subjects: | MESH: Neoplasms—radiotherapy | Radiotherapy Planning,
Computer-Assisted | Radiation Oncology—methods
Classification: LCC RC271.R3 | NLM QZ 269 | DDC 616.99/40642—dc23
LC record available at http://lccn.loc.gov/2016004611

This work is provided “as is,” and the publisher disclaims any and all warranties, express or
implied, including any warranties as to accuracy, comprehensiveness, or currency of the content
of this work.

This work is no substitute for individual patient assessment based upon healthcare professionals’
examination of each patient and consideration of, among other things, age, weight, gender,
current or prior medical conditions, medication history, laboratory data and other factors unique
to the patient. The publisher does not provide medical advice or guidance and this work is
merely a reference tool. Healthcare professionals, and not the publisher, are solely responsible
for the use of this work including all medical judgments and for any resulting diagnosis and
treatments.

Given continuous, rapid advances in medical science and health information, independent
professional verification of medical diagnoses, indications, appropriate pharmaceutical selections
and dosages, and treatment options should be made and healthcare professionals should consult
a variety of sources. When prescribing medication, healthcare professionals are advised to
consult the product information sheet (the manufacturer’s package insert) accompanying each
drug to verify, among other things, conditions of use, warnings and side effects and identify any
changes in dosage schedule or contraindications, particularly if the medication to be
administered is new, infrequently used or has a narrow therapeutic range. To the maximum
extent permitted under applicable law, no responsibility is assumed by the publisher for any
injury and/or damage to persons or property, as a matter of products liability, negligence law or
otherwise, or from any reference to or use by any person of this work.

LWW.com
To Kathy, my wife and companion of fifty years:
Happy Anniversary, My Love.
—Faiz M. Khan

To my wife Nicole, for her continued patience and support


—John P. Gibbons

To Jody, Luke, Maria, and Will for their love and laughter and
my patients who provide me the privilege of caring for them in
times of greatest need.
—Paul W. Sperduto
Contributors

Judy A. Adams, CMD


Director of Dosimetry
Department of Radiation Oncology
Massachusetts General Hospital Cancer Center
Boston, Massachusetts

Fiori Alite, MD
Department of Radiation Oncology
Stritch School of Medicine
Loyola University
Maywood, Illinois

John A. Antolak, PhD


Associate Professor & Consultant
Department of Radiation Oncology
Mayo Clinic
Rochester, Minnesota

James M. Balter, PhD


Professor
Radiation Oncology Department
University of Michigan
Ann Arbor, Michigan

Christopher Beltran, PhD


Mayo Clinic
Department of Radiation Oncology
Rochester, Minnesota

Rachel C. Blitzblau, MD, PhD


Assistant Professor
Department of Radiation Oncology
Duke University School of Medicine
Attending Physician, Radiation Oncology
Duke University Medical Center
Durham, North Carolina

Stefan Both, PhD


Associate Attending
Medical Physics Department
Memorial Sloan Kettering Cancer Center
New York, New York

Frank J. Bova, PhD, FACR, FAAPM


Albert E. and Birdie W. Professor of Computer-Assisted Stereotactic
Neurosurgery
University of Florida College of Medicine
Gainesville, Florida

Jason Chan, MD
Resident in Training
Department of Radiation Oncology
University of California San Francisco
San Francisco, California

Albert Chang, MD, PhD


Assistant Professor in Residence
Departments of Radiation Oncology and Urology
University of California, San Francisco
San Francisco, California

George T. Y. Chen, PhD


Department of Radiation Oncology
Massachusetts General Hospital and Harvard Medical School
Boston, Massachusetts
Zhe (Jay) Chen, PhD
Professor
Department of Therapeutic Radiology
Yale University School of Medicine
Yale-New Haven Hospital
New Haven, Connecticut

Yen-Lin Evelyn Chen, MD


Instructor
Radiation Oncology
Massachusetts General Hospital
Boston, Massachusetts

James C. L. Chow, PhD


Assistant Professor
Department of Radiation Oncology
University of Toronto
Radiation Physicist
Radiation Medicine Program
Princess Margaret Cancer Center
Toronto, Ontario, Canada

Benjamin M. Clasie, MD
Resident Physician
Radiation Oncology
Rush University Medical Center
Chicago, Illinois

Brian G. Czito, MD
Gary Hock and Lyn Proctor Associate Professor
Department of Radiation Oncology
Duke University Medical Center
Durham, North Carolina

Thomas F. DeLaney, MD, FASTRO


Andres Soriano Professor of Radiation Oncology
Harvard Medical School
Radiation Oncologist
Department of Radiation Oncology
Medical Director, Francis H. Burr Proton Therapy Center
Co-Director, Center for Sarcoma and Connective Tissue Oncology
Massachusetts General Hospital
Boston, Massachusetts

Lei Dong, PhD


Director, Chief of Medical Physics
Scripps Proton Therapy Center
Professor, Department of Radiation Medicine and Applied Sciences
University of California—San Diego
San Diego, California

Bahman Emami, MD, FACR, FASTRO


Professor
Department of Radiation Oncology
Loyola University Medical Center
Chicago, Illinois

Robert L. Foote, MD, FASTRO


Professor of Radiation Oncology
Mayo Medical School
Chair
Department of Radiation Oncology
Mayo Clinic
Rochester, Minnesota

Ryan D. Foster, PhD


Medical Physicist
Department of Radiation Oncology
Carolinas HealthCare System/Levine Cancer Institute
Concord, North Carolina
William A. Friedman, MD
Professor and Chair
Department of Neurosurgery
University of Florida College of Medicine
Gainesville, Florida

Yolanda I. Garces, MD, MS


Consultant
Assistant Professor of Radiation Oncology
Mayo Clinic
Rochester, Minnesota

John P. Gibbons Jr, PhD


Chief Medical Physicist
Department of Radiation Oncology
Ochsner Health System
New Orleans, Louisiana

Eli Glatstein, MD, FASTRO


Emeritus Professor
Department of Radiation Oncology
University of Pennsylvania
Philadelphia, Pennsylvania

Vinai Gondi, MD
Co-Director, Brain & Spine Tumor Center
Northwestern Medicine Cancer Center, Warrenville
Warrenville, Illinois

Aditya N. Halthore, MD
Resident
Department of Radiation Medicine
Hofstra North Shore-LIJ School of Medicine
North Shore-LIJ Health System
Lake Success, New York
Andrew Jackson, PhD
Associate Attending Physicist
Medical Physics Computer Service
Memorial Sloan-Kettering Cancer Center
New York, New York

Julian Johnson, MD
Resident in Training
Department of Radiation Oncology
University of California San Francisco
San Francisco, California

James A. Kavanaugh, MS
Department of Radiation Oncology
Washington University in St. Louis
Director of Satellite Services
Siteman Cancer Center
St. Louis, Missouri

Paul J. Keall, PhD


Professor and NHMRC Australian Fellow
Director, Radiation Physics Laboratory
Central Clinical School
Sydney Medical School
The University of Sydney
New South Wales, Australia

Faiz M. Khan, PhD


Professor Emeritus
Department of Radiation Oncology
University of Minnesota Medical School
Minneapolis, Minnesota

Eric E. Klein, PhD


Professor
Department of Radiation Medicine
Northwell Health
Lake Success, New York

Jonathan P. S. Knisely, MD
Associate Professor
Department of Radiation Medicine
Northwell Health
Hofstra University School of Medicine
Lake Success, New York

Hanne M. Kooy, PhD


Associate Professor
Department of Radiation Oncology
Massachusetts General Hospital
Harvard Medical School
Boston, Massachusetts

Rupesh Kotecha, MD
Resident
Department of Radiation Oncology
Taussig Cancer Institute
Cleveland Clinic
Cleveland, Ohio

Gerald Kutcher, PhD


Professor
History Department
Binghamton University
State University of New York
Binghamton, New York

Guang Li, PhD, DABR


Associate Member and Associate Attending Physicist
Department of Medical Physics
Memorial Sloan Kettering Cancer Center
New York, New York

Tony Lomax, PhD


Professor
Centre for Proton Therapy
Paul Scherrer Institute
Switzerland

Shannon M. MacDonald, MD
Associate Professor of Radiation Oncology
Massachusetts General Hospital/Harvard Medical School
Francis H. Burr Proton Therapy Center
Boston, Massachusetts

Gig S. Mageras, PhD


Member, Memorial Hospital
Memorial Sloan Kettering Cancer Center
Attending Physicist
Department of Medical Physics
Memorial Hospital
New York, New York

Amit Maity, MD, PhD


Professor
Department of Radiation Oncology
University of Pennsylvania
Philadelphia, Pennsylvania

Charles Mayo, PhD


Department of Radiation Oncology
University of Michigan
Ann Arbor, Michigan

Minesh P. Mehta, MD, FASTRO


Professor
Department of Radiation Oncology
University of Maryland School of Medicine
Baltimore, Maryland

Loren K. Mell, MD
Associate Professor
Department of Radiation Medicine and Applied Sciences
University of California San Diego
La Jolla, California

Dimitris N. Mihailidis, PhD, FAAPM, FACMP


Chief Medical Physicist
CAMC Cancer Center-Alliance Oncology
West Virginia University School of Medicine
Clinical Professor
Radiation Oncology
Charleston, West Virginia

Radhe Mohan, PhD


Professor
Department of Radiation Physics
University of Texas MD Anderson Cancer Center
Houston, Texas

Yvonne M. Mowery, MD, PhD


Resident Physician
Department of Radiation Oncology
Duke University School of Medicine
Resident Physician
Radiation Oncology
Duke University Medical Center
Durham, North Carolina

Arno J. Mundt, MD, FACRO, FASTRO


Professor and Chair
Department of Radiation Medicine & Applied Sciences
University of California San Diego
President, Radiating Hope
Society Chair, American College of Radiation Oncology
San Diego, California

Sasa Mutic, PhD, FAAPM


Professor and Director of Medical Physics
Department of Radiation Oncology
Washington University School of Medicine
St. Louis, Missouri

Colin G. Orton, PhD


Professor Emeritus
Wayne State University
Detroit, Michigan

Manisha Palta, MD
Assistant Professor
Duke Cancer Institute
Department of Radiation Oncology
Duke University Medical Center
Durham, North Carolina

Niko Papanikolaou, PhD


Professor and Chief
Division of Medical Physics
University of Texas Health Science Center at San Antonio
San Antonio, Texas

Anthony Paravati, MD, MBA


Resident in Radiation Oncology
Department of Radiation Medicine and Applied Sciences
Moores Cancer Center
University of California, San Diego
La Jolla, California

Charles A. Pelizzari, PhD


Associate Professor
Department of Radiation and Cellular Oncology
The University of Chicago Medical Center
Chicago, Illinois

Bradford A. Perez, MD
Resident
Duke Cancer Institute
Department of Radiation Oncology
Duke University Medical Center
Durham, North Carolina

John P. Plastaras, MD, PhD


Associate Professor
Department of Radiation Oncology
University of Pennsylvania
Philadelphia, Pennsylvania

Ezequiel Ramirez, MSRS, CMD, RT(R)(T)


Department of Radiation Oncology
University of Texas Southwestern Medical Center
Dallas, Texas

Mark J. Rivard, PhD, FAAPM


Professor of Radiation Oncology
Tufts University School of Medicine
Boston, Massachusetts

Gregory C. Sharp, PhD


Assistant Professor
Department of Radiation Oncology
Massachusetts General Hospital
Boston, Massachusetts
Daniel R. Simpson, MD
Assistant Professor
Department of Radiation Medicine and Applied Sciences
Moores Cancer Center
UCSD School of Medicine
La Jolla, California

Chang W. Song, PhD


Professor Emeritus
Department of Radiation Oncology
University of Minnesota Medical School
Minneapolis, Minnesota

Paul W. Sperduto, MD, MPP, FASTRO


Medical Director, Minneapolis Radiation Oncology
Co-Director, University of Minnesota Gamma Knife Center
Minneapolis, Minnesota

Kevin L. Stephans, MD
Assistant Professor
Taussig Cancer Institute
Department of Radiation Oncology
Cleveland Clinic
Cleveland, Ohio

Kenneth R. Stevens Jr, MD


Professor Emeritus and Former Department Chair
Radiation Medicine Department
Oregon Health & Sciences University
Portland, Oregon

Alexander Sun, MD, FRCPC


Associate Professor
Department of Radiation Oncology
University of Toronto
Staff Radiation Oncologist
Princess Margaret Cancer Centre
Toronto, Canada

Nancy J. Tarbell, MD, FASTRO


CC Wang Professor of Radiation Oncology
Dean for Academic and Clinical Affairs
Harvard Medical School
Boston, Massachusetts

Bruce R. Thomadsen, PhD


Professor
Medical Physics
University of Wisconsin
Madison, Wisconsin

Robert Timmerman, MD
Professor, Vice Chair
Department of Radiation Oncology
University of Texas Southwestern Medical Center
Dallas, Texas

Wolfgang A. Tomé, PhD, FAAPM


Professor and Director of Medical Physics
Institute for Onco-Physics
Department of Radiation Oncology
Albert Einstein College of Medicine
Professor and Director
Division of Therapeutic Medical Physics
Department of Radiation Oncology
Montefiore Medical Center
Bronx, New York

Jordan A. Torok, MD
Resident Physician
Department of Radiation Oncology
Duke University School of Medicine
Duke Cancer Institute
Durham, North Carolina

Jan Unkelbach, PhD


Assistant Professor of Radiation Oncology
Harvard Medical School
Assistant Radiation Physicist
Department of Radiation Oncology
Massachusetts General Hospital
Boston, Massachusetts

Jacob (Jake) Van Dyk, BSc, MSc, FCCPM, FAAPM, FCOMP, DSc(hon)
Professor Emeritus
Oncology and Medical Biophysics
Western University
Former Manager/Head
Physics and Engineering
London Regional Cancer Program
London Health Sciences Centre
London, Ontario, Canada

Gregory M. M. Videtic, MD, CM, FRCPC


Professor of Medicine
Cleveland Clinic Lerner College of Medicine
Staff Physician
Department of Radiation Oncology
Taussig Cancer Center
Cleveland Clinic
Cleveland, Ohio

Yi Wang, PhD
Instructor
Department of Radiation Oncology
Harvard Medical School
Medical Physicist
Radiation Oncology
Massachusetts General Hospital
Boston, Massachusetts

Kenneth J. Weeks, PhD


Medical Physicist
Federal Medical Center
Butner, North Carolina

Christopher G. Willett, MD, FASTRO


Professor and Chairman
Department of Radiation Oncology
Duke University
Durham, North Carolina

John A. Wolfgang, PhD


Physicist
Department of Radiation Oncology
Harvard Medical School
Massachusetts General Hospital
Boston, Massachusetts

Neil M. Woody, MD, MS


Resident
Department of Radiation Oncology
Taussig Cancer Institute
Cleveland Clinic
Cleveland, Ohio

Catheryn M. Yashar, MD, FACRO


Professor
Department of Radiation Medicine and Applied Sciences
University of California San Diego
San Diego, California

Fang-Fang Yin, PhD


Professor
Department of Radiation Oncology
Duke Clinics
Durham, North Carolina

Darwin Yip, MD
Clinical Fellow
Department of Radiation Oncology
Massachusetts General Hospital
Harvard Medical School
Boston, Massachusetts

Sua Yoo, PhD


Associate Professor
Department of Radiation Oncology
Duke University Medical Center
Durham, North Carolina

Ellen D. Yorke, PhD


Attending Physicist and Member
Memorial Sloan Kettering Cancer Center
Department of Medical Physics
New York, New York
Preface

The field of radiation oncology has advanced considerably since the advent of Intensity-Modulated Radiation Therapy (IMRT) and related
technologies such as Image-Guided Radiation Therapy (IGRT),
Volumetric-Modulated Arc Therapy (VMAT), and Stereotactic Body
Radiotherapy (SBRT). As a result of maturation of these techniques in
the past decade or so, their application in the treatment of various
cancers has accelerated at a rapid pace. Also, parallel to these
developments, proton beam therapy has gained widespread acceptance
as an effective modality, especially in the treatment of pediatric tumors
and other malignancies where greater conformity of dose distribution is
required than is possible with photons. Consequently, we invited some
leading experts to write about these latest developments in the field. It is
our hope that the fourth edition will bring our readers up-to-date with
the state of the art in the physics, biology, and clinical practice of
radiation oncology.
This book provides a comprehensive discussion of the physical,
biologic, and clinical aspects of treatment planning. Because of its
primary focus on treatment planning, it covers this subject at a much
greater depth than is the case with other books on medical physics or
radiation oncology. Like the previous editions, it is written for the
benefit of the entire treatment planning team—namely the radiation
oncologist, medical physicist, dosimetrist, and radiation therapist. A
distinctive feature of this edition is the inclusion of Key Points and Study
Questions at the end of each chapter. This is intended to make the book
useful not only for practitioners but also for residents preparing for their
board examinations.
We acknowledge Julie Goolsby, Acquisitions Editor, Emilie Moyer,
Senior Product Development Editor, and other editorial staff of Wolters
Kluwer for their support in the development and production of this
book.
Last but not least, we wish to acknowledge the contributing
authors whose expertise and efforts are greatly appreciated. Their
valuable contributions have made this publication possible.

Faiz M. Khan
John P. Gibbons
Paul W. Sperduto
Preface to First Edition

Traditionally, treatment planning has been thought of as a way of devising beam arrangements that will result in an acceptable isodose
pattern within a patient’s external contour. With the advent of computer
technology and medical imaging, treatment planning has developed into
a sophisticated process whereby imaging scanners are used to define
target volume, simulators are used to outline treatment volume, and
computers are used to select optimal beam arrangements for treatment.
The results are displayed as isodose curves overlaid on multiple body
cross-sections or as isodose surfaces in three dimensions. The intent of
the book is to review these methodologies and present a modern version
of the treatment planning process. The emphasis is not on what is new
and glamorous, but rather on techniques and procedures that are
considered to be the state of the art in providing the best possible care
for cancer patients.
Treatment Planning in Radiation Oncology provides a comprehensive
discussion of the clinical, physical, and technical aspects of treatment
planning. We focus on the application of physical and clinical concepts
of treatment planning to solve treatment planning problems routinely
encountered in the clinic. Since basic physics and basic radiation
oncology are covered adequately in other textbooks, they are not
included in this book.
This book is written for radiation oncologists, physicists, and
dosimetrists and will be useful to both the novice and those experienced
in the practice of radiation oncology. Ample references are provided for
those who would like to explore the subject in greater detail.
We greatly appreciate the assistance of Sally Humphreys in managing
this lengthy project. She has been responsible for keeping the
communication channels open among the editors, the contributors, and
the publisher.

Faiz M. Khan
Roger A. Potish
Contents

Contributors
Preface
Preface to First Edition

SECTION I
Physics and Biology of Treatment Planning
CHAPTER 1 Introduction: Process, Equipment, and Personnel
Faiz M. Khan
CHAPTER 2 Imaging in Radiotherapy
George T.Y. Chen, Gregory C. Sharp, John A. Wolfgang, and
Charles A. Pelizzari
CHAPTER 3 Treatment Simulation
Dimitris N. Mihailidis and Niko Papanikolaou
CHAPTER 4 Treatment Planning Algorithms: Photon Dose Calculations
John P. Gibbons
CHAPTER 5 Treatment Planning Algorithms: Brachytherapy
Kenneth J. Weeks
CHAPTER 6 Treatment Planning Algorithms: Electron Beams
Faiz M. Khan
CHAPTER 7 Treatment Planning Algorithms: Proton Therapy
Hanne M. Kooy and Benjamin M. Clasie
CHAPTER 8 Commissioning and Quality Assurance
James A. Kavanaugh, Eric E. Klein, Sasa Mutic, and Jacob
(Jake) Van Dyk

CHAPTER 9 Intensity-Modulated Radiation Therapy: Photons
Jan Unkelbach
CHAPTER 10 Intensity-Modulated Proton Therapy
Tony Lomax
CHAPTER 11 Patient and Organ Movement
Paul J. Keall and James M. Balter
CHAPTER 12 Image-Guided Radiation Therapy
Guang Li, Gig S. Mageras, Lei Dong, and Radhe Mohan
CHAPTER 13 Linac Radiosurgery: System Requirements, Procedures, and
Testing
Frank J. Bova and William A. Friedman
CHAPTER 14 Stereotactic Ablative Radiotherapy
Ryan D. Foster, Ezequiel Ramirez, and Robert D.
Timmerman
CHAPTER 15 Low Dose-Rate Brachytherapy
Mark J. Rivard
CHAPTER 16 High Dose-Rate Brachytherapy Treatment Planning
Bruce R. Thomadsen
CHAPTER 17 Electron Beam Treatment Planning
John A. Antolak
CHAPTER 18 Proton Beam Therapy
Hanne M. Kooy and Judy A. Adams
CHAPTER 19 Role of Protons Versus Photons in Modern Radiotherapy:
Clinical Perspective
Darwin Yip, Yi Wang, and Thomas F. DeLaney
CHAPTER 20 Fractionation: Radiobiologic Principles and Clinical Practice
Colin G. Orton
CHAPTER 21 Radiobiology of Stereotactic Radiosurgery and Stereotactic
Ablative Radiotherapy
Paul W. Sperduto and Chang W. Song
CHAPTER 22 Tolerance of Normal Tissue to Therapeutic Radiation
Bahman Emami and Fiori Alite
CHAPTER 23 Treatment Plan Evaluation
Ellen D. Yorke, Andrew Jackson, and Gerald J. Kutcher

SECTION II
Treatment Planning for Specific Cancers
CHAPTER 24 Cancers of the Gastrointestinal Tract
Jordan A. Torok, Bradford A. Perez, Brian G. Czito,
Christopher G. Willett, Fang-Fang Yin, and Manisha Palta
CHAPTER 25 Gynecologic Malignancies
Anthony Paravati, Daniel R. Simpson, Loren K. Mell,
Catheryn M. Yashar, and Arno J. Mundt
CHAPTER 26A Cancer of the Genitourinary Tract: Prostate Cancer
Jason Chan and Albert Chang
CHAPTER 26B Genitourinary Cancers: Bladder Cancer
Julian Johnson and Albert Chang
CHAPTER 26C Cancers of the Genitourinary Tract: Testis
Julian Johnson and Albert Chang
CHAPTER 27 The Lymphomas
John P. Plastaras, Stefan Both, Amit Maity, and Eli
Glatstein
CHAPTER 28 Cancers of the Head and Neck
Yolanda I. Garces, Charles Mayo, Christopher Beltran, and
Robert L. Foote
CHAPTER 29 Cancers of the Skin, Including Mycosis Fungoides
Aditya N. Halthore, Kenneth R. Stevens Jr., James C. L.
Chow, Zhe (Jay) Chen, Alexander Sun, and Jonathan P. S.
Knisely
CHAPTER 30 Breast Cancer
Yvonne M. Mowery, Sua Yoo, and Rachel C. Blitzblau
CHAPTER 31 Cancers of the Central Nervous System
Vinai Gondi, Wolfgang A. Tome, and Minesh P. Mehta
CHAPTER 32 Pediatric Malignancies
Shannon M. MacDonald and Nancy J. Tarbell
CHAPTER 33 Cancers of the Thorax/Lung
Gregory M. M. Videtic, Rupesh Kotecha, Neil M. Woody,
and Kevin L. Stephans
CHAPTER 34 Soft Tissue and Bone Sarcomas
Yen-Lin Chen and Thomas F. DeLaney

Index
SECTION I

Physics and Biology of


Treatment Planning
1 Introduction: Process,
Equipment, and Personnel

Faiz M. Khan

Every patient with cancer must have access to the best possible care regardless
of constraints such as geographic separation from adequate facilities and
professional competence, economic restrictions, cultural barriers, or methods
of healthcare delivery. Suboptimal care is likely to result in an unfavorable
outcome for the patient, at greater expense for the patient and for society.
—Blue Book (1)

INTRODUCTION
Radiotherapy procedure in itself does not guarantee any favorable
outcome. It is through meticulous planning and careful implementation
of the needed treatment that the potential benefits of radiotherapy can
be realized. The ideas presented in this book pertain to the clinical,
physical, and technical aspects of procedures used in radiotherapy
treatment planning. Optimal planning and attention to details will make
it possible to fulfill the goal of the Blue Book, namely, to provide the
best possible care for every patient with cancer.

TREATMENT PLANNING PROCESS


Treatment planning is a process that involves the determination of
treatment parameters considered optimal in the management of a
patient’s disease. In radiotherapy, these parameters include target
volume, dose-limiting structures, treatment volume, dose prescription,
dose fractionation, dose distribution, patient positioning, treatment
machine settings, online patient monitoring, and adjuvant therapies. The
final product of this activity is a blueprint for the treatment, to be
followed meticulously and precisely over several weeks.

TARGET VOLUME ASSESSMENT


Treatment planning starts right after the therapy decision is made and
radiotherapy is chosen as the treatment modality. The first step is to
determine the tumor location and its extent. The target volume, as it is
called, consists of a volume that includes the tumor (demonstrated
through imaging or other means) and its occult spread to the
surrounding tissues or lymphatics. The determination of this volume and
its precise location is of paramount importance. Considering that
radiotherapy is basically an agent for local or regional tumor control, it
is logical to believe that errors in target volume assessment or its
localization will cause radiotherapy failures.
Modern imaging modalities such as computed tomography (CT),
magnetic resonance imaging (MRI), ultrasound, single photon emission
computed tomography (SPECT), and positron emission tomography
(PET) assist the radiation oncologist in the localization of target volume.
However, what is discernible in an image may not be the entire extent of
the tumor. Sufficient margins must be added to the demonstrable tumor
to allow for uncertainty in the imaging as well as microscopic spread,
depending upon the invasive characteristics of the tumor.
Next in importance to localization of the target volume is the
localization of critical structures. Again, modern imaging is greatly
helpful in providing detailed anatomic information. Although such
information is available from standard anatomy atlases, its extrapolation
to a given patient is fraught with errors that are unacceptable in
precision radiotherapy.
Assessment of the target volume for radiotherapy is not as easy as it
may sound. The first and foremost difficulty is the fact that no imaging
modality at the present time is capable of revealing the entire extent of
the tumor with its microscopic spread. The visible tumor, usually seen
through imaging, represents only a part of the tumor, called the gross
tumor volume (GTV). The volume that includes the entire tumor, namely,
GTV, and the invisible microscopic disease can be estimated only
clinically and is therefore called the clinical target volume (CTV).
The estimate of CTV is usually made by giving a suitable margin
around the GTV to include the occult disease. This process of assessing
CTV is not precise because it is subjective and depends entirely on one’s
clinical judgment. Because it is an educated guess at best, one should not
be overly tight in assigning these margins around the GTV. The assigned
margins must be wide enough to ensure that the CTV thus designed
includes the entire tumor, including both the gross and the microscopic
disease. If in doubt, it is better to be more generous than too tight
because missing a part of the disease, however tiny, would certainly
result in treatment failure.
Added to the inherent uncertainty of CTV are the uncertainties of
target volume localization in space and time. An image-based GTV, or
the inferred CTV, does not have static boundaries or shape. Its extent
and location can change as a function of time because of variations in
patient setup, physiologic motion of internal organs, patient breathing,
and positioning instability. A planning target volume (PTV) is therefore
required, which should include the CTV plus suitable margins to account
for the above uncertainties. PTV, therefore, is the ultimate target volume
—the primary focus of the treatment planning and delivery. Adequate
dose delivered to PTV at each treatment session presumably assures
adequate treatment of the entire disease-bearing volume, the CTV.
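To make the margin arithmetic concrete, the following is a minimal sketch, not taken from the text, of how a planning system might grow a GTV mask into a CTV and then a PTV by isotropic expansion. The grid, voxel spacing, and margin values are purely illustrative; clinical margins are disease-, technique-, and institution-specific.

import numpy as np
from scipy.ndimage import binary_dilation

def expand_mask(mask, margin_mm, voxel_mm):
    """Dilate a 3D binary mask by an approximately ellipsoidal margin given in mm."""
    radii = [max(int(round(margin_mm / s)), 1) for s in voxel_mm]
    zz, yy, xx = np.ogrid[-radii[0]:radii[0] + 1,
                          -radii[1]:radii[1] + 1,
                          -radii[2]:radii[2] + 1]
    struct = ((zz / radii[0]) ** 2 + (yy / radii[1]) ** 2 + (xx / radii[2]) ** 2) <= 1.0
    return binary_dilation(mask, structure=struct)

voxel_mm = (2.5, 1.0, 1.0)                 # slice, row, column spacing in mm (illustrative)
gtv = np.zeros((60, 128, 128), dtype=bool)
gtv[28:33, 60:70, 60:70] = True            # a small block standing in for a contoured GTV

ctv = expand_mask(gtv, 8.0, voxel_mm)      # hypothetical margin for occult microscopic spread
ptv = expand_mask(ctv, 5.0, voxel_mm)      # hypothetical margin for setup error and organ motion
print(gtv.sum(), ctv.sum(), ptv.sum())     # voxel counts grow: GTV < CTV < PTV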
Because of the importance of accurate determination of PTV and its
localization, the International Commission on Radiation Units and
Measurements (ICRU) has come up with a systematic approach to the
whole process, as illustrated in Figures 1.1 and 1.2. The reader is
referred to ICRU Reports 50, 62, and 71 for the underlying concepts and
details of the system (2–4).
FIGURE 1.1 Schematic illustration of ICRU volumes. (From ICRU. Prescribing, Recording, and
Reporting Photon Beam Therapy. ICRU Report 50. Bethesda, MD: International Commission of
Radiation Units and Measurements; 1993.)
FIGURE 1.2 Schematic representation of ICRU volumes and margins. (From ICRU. Prescribing,
Recording, and Reporting Photon Beam Therapy [Supplement to ICRU Report 50]. ICRU Report
62. Bethesda, MD: International Commission on Radiation Units and Measurements; 1999.)

Although sophisticated treatment techniques such as intensity-modulated radiation therapy (IMRT) and image-guided radiation therapy
(IGRT) are now available, which account for organ motion and
positional uncertainties as a function of time, the basic problem still
remains: How accurate is the CTV? Unless the CTV can be relied upon
with a high degree of certainty, various protocols to design PTV from it
and the technical advances to localize it precisely in space and time
would seem rather arbitrary, illusory, or even make-believe. Therefore,
the need for technologic sophistication (with its added cost and
complexity) must be balanced with the inherent uncertainty of CTV for a
given disease.
However, the above, seemingly pessimistic view of the process should
not discourage the development or the use of these technologies. It
should rather be taken as a cautionary note for those who may pursue
such technologies with a blind eye to their limitations. Technologic
advances must ultimately be evaluated in the context of biologic
advances. “Smart bombs” are not smart if they miss the target or, worse
yet, produce unacceptable collateral damage.
Treating the right target volume conformally with the right dose
distribution and fractionation is the primary goal of radiotherapy. It does
not matter if this objective is achieved with open beams or uniform-
intensity wedged beams, compensators, IMRT, or IGRT. As will be
discussed in the following chapters, various technologies and
methodologies are currently available, which should be selected on the
basis of their ability to achieve the above radiotherapy goal for the given
disease to be treated. In some cases, simple arrangements such as a
single beam, parallel-opposed beams, or multiple beams, with or without
wedges, are adequate, while in others IMRT or IGRT is the treatment of
choice.

EQUIPMENT
Treatment planning is a process essentially of optimization of
therapeutic choices and treatment techniques. This is all done in the
context of available equipment. In the absence of adequate or versatile
equipment, optimization of treatment plans is difficult, if not impossible.
For example, if the best equipment in an institution is a cobalt unit or a
traditional low-energy (4 to 6 MV) linear accelerator, the choice of beam
energy for different patients and tumor sites cannot be optimized. If a
good-quality simulator (conventional or CT) is not available, accurate
design of treatment fields, beam positioning, and portal localization are
not possible. Without modern imaging equipment, high accuracy is not
possible in the determination of target volumes and critical structures, so
that techniques that require conformal dose distributions in three
dimensions cannot be optimized. Accessibility to a reasonably
sophisticated computerized treatment planning system is essential to
plan isodose distributions for different techniques so as to select the one
that is best suited for a given patient. Therefore, the quality of treatment
planning and the treatment itself depend on how well equipped the
facility is with regard to treatment units, imaging equipment, and
treatment planning computers.

External Beam Units


Low-Energy Megavoltage X-ray Beams
Low-energy megavoltage beams without IMRT capability (e.g., cobalt-60
and/or 4–6-MV x-rays) are principally used for relatively shallow or
moderately deep tumors such as in the head and neck, breast, and
extremities. For treatments using parallel-opposed beams, the body
thickness in the path of these beams should not exceed approximately 17
cm. This is dictated by the ratio of maximum peripheral dose to the
midline dose (5).
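A rough, illustrative calculation (not from the text, and not clinical data) shows how this ratio behaves. The percent-depth-dose model below is a crude exponential stand-in, roughly 3.5% per cm, for a 6-MV beam; in practice measured depth-dose tables are used.

d_max = 1.5  # cm, approximate depth of dose maximum for a 6-MV beam

def pdd(depth_cm):
    # Crude stand-in: 100% at d_max, ~3.5%/cm exponential falloff beyond it (assumption).
    return 100.0 if depth_cm <= d_max else 100.0 * 0.965 ** (depth_cm - d_max)

def peripheral_to_midline(separation_cm):
    midline = 2.0 * pdd(separation_cm / 2.0)              # both opposed beams contribute at mid-depth
    peripheral = pdd(d_max) + pdd(separation_cm - d_max)  # peak of one beam plus exit dose of the other
    return peripheral / midline

for separation in (17.0, 25.0):
    print(f"{separation:.0f} cm: peripheral/midline ≈ {peripheral_to_midline(separation):.2f}")
# Prints roughly 1.03 at 17 cm and 1.08 at 25 cm with this toy model: the ratio stays close
# to 1.0 up to about 17 cm but grows with patient thickness, which is the effect the guideline limits.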
In addition to the beam energy, it is also important to have machine
specifications that improve beam characteristics as well as accuracy of
treatment delivery. Some of the major specifications, for example, are
isocentric capability with source-to-axis distance of 100 cm (not less
than 80 cm for cobalt-60), field size of at least 40 × 40 cm, versatile
and rigid treatment couch, asymmetrical collimators, multileaf collimators (MLCs), and other
features that allow optimization of treatment techniques.
For IMRT or IGRT techniques, a 6-MV x-ray beam is sufficient so far as
the energy is concerned. However, the unit must be equipped with a
special collimator having dynamic MLC or apertures suitable for these
techniques. Its operation must be computer controlled to allow for
intensity-modulated beam delivery in accordance with the IMRT or IGRT
treatment plans.

Medium- or High-Energy Megavoltage X-ray Beams


X-ray beams in the energy range of 10 to 25 MV allow treatment
techniques for deep-seated tumors in the thorax, abdomen, or pelvis. For
parallel-opposed beam techniques, the deeper the tumor, the higher the
energy required to maximize the dose to the tumor, relative to the
normal tissue. Again, the ratio of the maximum peripheral dose to the
midline dose is an important consideration (5). In addition, the dose
buildup characteristics of these beams allow substantial sparing of
normal subcutaneous tissue in the path of the beams.
One may argue that the degree of normal tissue sparing achieved by x-
ray beams of 10 MV energy can also be achieved by lower megavoltage
beams using more than two beam directions, as in multiple isocentric
fields, rotation therapy, or IMRT. However, for deep-seated tumors,
high-energy beams offer greater tissue sparing for all techniques,
including IMRT.

Charged-Particle Beams
1. Electrons. Electron beams in the range of 6 to 20 MeV are useful for
treating superficial tumors at depths of less than about 5 cm. They are often used in
conjunction with x-ray beams, either as a boost or a mixed-beam
treatment, to provide a particular isodose distribution. The principal
clinical applications include the treatment of skin and lip cancers, chest
wall irradiation, boost therapy for lymph nodes, and the treatment of
head and neck cancers.
Depth–dose characteristics of electron beams have unique features
that allow effective irradiation of relatively superficial cancers and
almost complete sparing of normal tissues beyond them. The availability
of this modality is essential for optimizing treatments of approximately
10% to 15% of cancers managed with radiotherapy.

2. Protons. Proton beam therapy has been used to treat almost all
cancers that are traditionally treated with x-rays and electrons (e.g.,
tumors of the brain, spine, head and neck, breast, lung, gastrointestinal
malignancies, prostate, and gynecologic cancers). Because of the ability
to obtain a high degree of conformity of dose distribution to the target
volume with practically no exit dose to the normal tissues, the proton
radiotherapy is an excellent option for tumors in close proximity of
critical structures such as tumors of the brain, eye, and spine. Also,
protons give significantly less integral dose than photons and, therefore,
should be a preferred modality in the treatment of pediatric tumors
where there is always a concern for a possible development of secondary
malignancies during the lifetime of the patient.

3. Carbon ions. Efficacy of charged particles heavier than protons such as nuclei of helium, carbon, nitrogen, neon, silicon, and argon has also
been explored. Although carbon ions or heavier charged particles have
the potential to be just as good as protons, if not better, it is debatable
whether the benefits justify the high cost of such machines. As it stands,
for most institutions, even the acquisition of protons is hard to justify
over the far less expensive but very versatile megavoltage x-ray and
electron accelerators.
Protons and heavier charged particles no doubt have unique biologic
and physical properties, but “Are they clinically superior to x-rays and
electrons with IMRT and IGRT capabilities?” The answer awaits more
experience. Clinical superiority of heavy charged particles needs to be
demonstrated by carefully conducted clinical trials.

Patient Load Versus Treatment Units


The number of patients treated on a given unit can be an important
determinant of the quality of care. Overloaded machines and
overworked staff often give rise to suboptimal techniques, inadequate
care in patient setup, and a greater possibility of treatment errors. As in
any other human activity, rushed jobs do not yield the best results. In
radiotherapy, in which the name of the game is accuracy and precision,
there is simply no room for sloppiness, which can easily creep in if the
technologist’s primary concern is to keep up with the treatment
schedule. An assembly line type of atmosphere should never be allowed
in a radiotherapy facility because it deprives the patients of their right to
receive the best possible care that radiotherapy has to offer.
A report like the Blue Book is the best forum for setting up guidelines
for equipment use. The recommendation of this document is that the
load for a given megavoltage unit should not exceed 6,000 standard
treatments (single patient visit equivalent) per year. Depending upon the
complexity of procedures performed on a machine, its calibration
checks, and quality assurance, the patient load per megavoltage machine
for full use can vary from 20 to 30 patients treated per day. Details of
calculating realistic load for a megavoltage unit and criteria for
replacing or acquiring additional equipment are given in the Blue Book
(1).
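As a rough consistency check between these two figures (assuming approximately 250 treatment days per year, an assumption rather than a number taken from the Blue Book):

6,000 standard treatments per year ÷ 250 treatment days per year ≈ 24 patients per day,

which falls within the 20-to-30-patient range quoted above; the more complex and physics-intensive the procedures, the closer the sustainable load moves to the lower end of that range.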
Brachytherapy Equipment
Brachytherapy is an important integral part of a radiotherapy program.
Some tumors are best treated with brachytherapy, alone or in
conjunction with an external beam. It is therefore important to have this
modality available if optimal treatment planning is the goal. Although
electrons are sometimes used as an alternative, brachytherapy continues
to have an important role in treating certain tumors such as gynecologic
malignancies, oral cancers, sarcomas, prostate cancer, and brain tumors.
Currently, the sources most often being used are cesium-137 tubes,
iridium-192 seeds contained in ribbons, iodine-125 seeds, and
palladium-103 seeds. These isotopes can be used in after-loading
techniques for interstitial as well as intracavitary implantation.
Numerous applicators and templates have been designed for
conventional low-dose-rate (LDR) brachytherapy. The institution must follow a
particular system consistently with all its hardware, rules of
implantation, and dose specification schemes. Remote after-loading
units, LDR as well as high-dose-rate (HDR), are becoming increasingly popular, especially
among institutions with large patient loads for brachytherapy.
Brachytherapy hardware, software, and techniques are discussed in later
chapters.

Imaging Equipment
Modern treatment planning is intimately tied to imaging. Although all
diagnostic imaging equipment has some role in defining and localizing
target volumes, the most useful modalities currently are the CT, MRI,
and PET.
Most radiotherapy institutions have access to these machines through
diagnostic departments. The only problem with this kind of arrangement
is that the fidelity of imaging data obtained under diagnostic conditions
is quite poor when used for treatment planning. This is caused primarily
by the lack of reproducibility in patient positioning. Besides appropriate
modifications in the scanner equipment (e.g., flat tabletop, patient
positioning aids), the patient setup should be supervised by a member of
the treatment planning staff. With the growing demand for CT, 4-
dimensional (4D) CT (respiration-correlated), and MRI in radiotherapy
and the large number of scans that 3-dimensional (3D) treatment
planning requires, dedicated scanners in radiotherapy departments are
becoming the norm.

Simulator
There is still a role for conventional simulators in a radiation therapy
department although their presence is becoming less common. It is
important that the simulator has the same geometric accuracy as the
treatment machine. In addition, it should allow the simulation of various
treatment techniques that are possible with modern treatment machines.
With the advent of 3D treatment planning, conformal field shaping,
MLCs, 4D CTs, and electronic portal imaging, it is logical to move into
CT simulation. A conventional simulator may be useful for final
verification of the field placement, but with the availability of good-quality
digitally reconstructed radiographs (DRRs) and special software for CT simulation, this need no longer
exists. Final field verification before treatment can be obtained with the
portal imaging system available on modern linacs.
CT scanners have been used for treatment planning for many years
because of their ability to image patient anatomy and gross tumor, slice
by slice. These data can be processed to view images in any plane or in
three dimensions. In addition, CT numbers can be correlated with tissue
density, pixel by pixel, thereby allowing heterogeneity corrections in
treatment planning. The only drawback of diagnostic CT scans is that of
geometric accuracy of localization needed in radiotherapy. Diagnostic
CT units, with typically narrow apertures and curved tabletops, cannot
reproduce patient positions that would be used for treatment. Although
variations due to positioning can be minimized by using flat tabletops
and units with wide aperture (e.g., 70 cm or larger diameter), the
personnel operating diagnostic equipment are not trained to set up
patients accurately to reproduce radiation therapy conditions. In
addition, diagnostic simulation units are usually too busy to allow
sufficient time for therapy simulations. Because of these technical and
logistic problems, a dedicated CT scanner for radiation therapy has
gained wide acceptance.
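As a concrete illustration of the CT-number-to-density correlation mentioned above, here is a minimal sketch, with entirely hypothetical calibration points, of the kind of piecewise-linear lookup a planning system might apply voxel by voxel; in practice the calibration curve is measured by scanning a density phantom on the department's own scanner.

import numpy as np

# (HU, relative electron density) pairs from a hypothetical phantom measurement
calibration = np.array([
    [-1000.0, 0.00],   # air
    [ -700.0, 0.29],   # lung-like insert
    [    0.0, 1.00],   # water
    [ 1000.0, 1.56],   # dense bone-like insert
])

def hu_to_relative_electron_density(hu_values):
    """Piecewise-linear lookup from CT number to relative electron density, voxel by voxel."""
    return np.interp(hu_values, calibration[:, 0], calibration[:, 1])

ct_slice = np.array([[-1000.0, -700.0],
                     [    0.0,   400.0]])
print(hu_to_relative_electron_density(ct_slice))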
A dedicated radiation therapy CT scanner, with accessories (e.g., flat
table identical with those of the treatment units, lasers for positioning,
immobilization, and image registration devices, etc.) to accurately
reproduce treatment conditions, is called a CT-simulator. Many types of
such units are commercially available. Some of them are designed
specifically for radiation therapy with wide apertures (e.g., 85 cm
diameter) to provide flexibility in patient positioning for a variety of
treatment setups. The CT image data set thereby obtained, with precise
localization of patient anatomy and tissue density information, is useful
not only in generating an accurate treatment plan, but also in providing
a reference for setting up treatment plan parameters. This process is
sometimes called virtual simulation.

Positron Emission Tomography/Computed Tomography


The physics of PET is based on the positron–electron annihilation into
photons. For example, a radiolabeled compound such as
fluorodeoxyglucose (FDG) incorporates 18F as the positron-emitting
isotope. FDG is an analog of glucose that accumulates in metabolically
active cells. Because tumor cells are generally more active metabolically
than normal cells, an increased uptake of FDG is positively correlated
with the presence of tumor cells and their metabolic activity. When the
positron is emitted by 18F, it annihilates with a nearby electron, emitting
two 0.511-MeV photons in opposite directions. These
photons are detected by ring detectors placed in a circular gantry
surrounding the patient. From the detection of these photons, computer
software (e.g., filtered back projection algorithm) reconstructs the site of
the annihilation events and the intervening anatomy. The site of
increased FDG accumulation, with the surrounding anatomy, is thereby
imaged with a resolution of about 4 mm.
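For reference, the 0.511-MeV photon energy follows directly from the rest mass of the electron (and of the positron); each annihilation converts the rest energy of the pair into two photons:

E_\gamma = m_e c^2 = \frac{(9.109 \times 10^{-31}\,\mathrm{kg})\,(2.998 \times 10^{8}\,\mathrm{m/s})^2}{1.602 \times 10^{-13}\,\mathrm{J/MeV}} \approx 0.511\ \mathrm{MeV},

for a total of about 1.022 MeV per annihilation. Because the positron–electron pair is essentially at rest, momentum conservation requires the two photons to emerge in nearly opposite directions, which is what the ring detectors exploit when registering coincident events.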
Combining PET with CT scanning has several advantages:
1. Superior quality CT images with their geometric accuracy in defining
anatomy and tissue density differences are combined with PET images
to provide physiologic imaging, thereby differentiating malignant
tumors from the normal tissue on the basis of their metabolic
differences.
2. PET images may allow differentiation between benign and malignant
lesions well enough in some cases to permit tumor staging.
3. PET scanning may be used to follow changes in tumors that occur
over time and with therapy.
4. By using the same treatment table for a PET/CT scan, the patient is
scanned by both modalities without moving (only the table is moved
between scanners). This minimizes positioning errors in the scanned
data sets from both units.
5. By fusing PET and CT images, the two modalities become
complementary.

Although PET provides physiologic information about the tumor, it lacks correlative anatomy and is inherently limited in resolution. CT, on
the other hand, lacks physiologic information but provides superior
images of anatomy and localization. Therefore, PET/CT provides
combined images that are superior to either PET or CT images alone.

Accelerator-Mounted Imaging Systems


After the treatment planning and simulation comes the critical step of
accurate treatment delivery of the planned treatment. Traditionally,
patients are set up on the treatment couch with the help of localization
lasers and various identification marks on the patient, for example, ink
marks, tattoos, or palpable bony landmarks. Sometimes identification
marks are drawn on the body casts worn by the patient for
immobilization. These procedures would be considered reasonable, if
only the patient would not move within the cast and the ink or tattoo
marks did not shift with the stretch of the skin. Bony landmarks are
relatively more reliable, but their location by palpation cannot be
pinpointed to better than a few millimeters. Good immobilization
devices are critical in minimizing setup variations and are discussed later
in the book.
With the introduction of 3D conformal radiation therapy (CRT),
including IMRT and IGRT, it has become increasingly apparent that the
benefit of these technologies cannot be fully realized if the patient setup
and anatomy do not match the precision of the treatment plan within
acceptable limits at every treatment session. As the treatment fields are
made more conformal, the accuracy requirements of patient setup and
the PTV coverage during each treatment accordingly have to be made
more stringent. These requirements have propelled advances in the area
of patient immobilization and dynamic targeting of PTV through
imaging systems mounted on the accelerators themselves. Thus began
the era of IGRT.
Each of the three major linear accelerator manufacturers, Varian,
Elekta, and Siemens, provides accelerator-mounted imaging systems
allowing online treatment plan verification and correction (adaptive
radiation therapy) and dynamic targeting, synchronized with the
patient’s respiratory cycles (gating). The commercial names for the
systems are Trilogy (www.varian.com), Synergy (www.elekta.com), and
ONCOR (www.siemens.com). These products come with various options,
some of which may be works in progress or currently not FDA approved.
The reader can get the updated information by visiting the
corresponding Web sites.
The important consideration in acquiring any of these systems is
dictated by the desire to provide state-of-the-art radiation therapy. Such
a system is expected to have the following capabilities:

1. 3D CRT with linac-based megavoltage photon beam(s) of appropriate energy (e.g., 6 to 18 MV)
2. Electron beam therapy with five or six different energies in the range
of 6 to 20 MeV
3. IMRT, IGRT, and gated radiation therapy capabilities
4. Accelerator-mounted imaging equipment to allow the treatment
techniques mentioned earlier (such as IMRT and IGRT)

Typically, such a system consists of an electronic portal imaging device (EPID), a kVp source for radiographic verification of setup, an
online fluoroscopic mode to permit overlaying of treatment field
aperture on to the fluoroscopy image, and cone-beam CT capability for
treatment plan verification. Many of these devices and their use in
modern radiotherapy such as IGRT are discussed in the following
chapters.

Treatment Planning Computers


Commercial treatment planning computers became available in the early
1970s. Some of the early ones such as the Spear PC, the Artronix PC-12,
Rad-8, Theratronics Theraplan, and ADAC were instant hits and
provided a quantum jump from manual to computerized treatment
planning. They served their purpose well in providing fast and
reasonably accurate 2-dimensional (2D) treatment plans. Typically, they
allowed the input (through the digitizer) of external patient contours,
anatomic landmarks, and outlines of the target volume and of the critical
structures in a specified plane (usually central). Beams were modeled
semiempirically from the stored beam data obtained in a water
phantom. Various corrections were used to apply the water phantom
data to the patient situation, presenting irregular surfaces, tissue
inhomogeneities, and multiple beam angles. However, from today’s
standards, the old systems would be considered very limited in
capability and rudimentary in the context of modern 3D treatment
planning.
With the explosion of computer and imaging technologies in the last
20 years or so, the treatment planning computers and their algorithms
have accordingly become more powerful and sophisticated. Systems that
are currently available allow 3D treatment planning in which patient
data obtained from CT scanning, MRI, PET, and so on, are to be input
electronically. Beams are modeled with sophisticated computational
algorithms, for example, pencil beam, convolution–superposition, semi
Monte Carlo, or full Monte Carlo. These algorithms for photons,
electrons, and brachytherapy sources are discussed in later chapters.
Besides major improvements in dose computational methods, there
have been revolutionary advances in software, which allow planning of
complex treatments such as 3D CRT, IMRT, IGRT, and HDR
brachytherapy. One of the most powerful treatment planning algorithms
is called inverse planning, which allows the planner to specify the desired
dose distribution and let the computer generate a plan as close to the
input specifications as possible. Again, these techniques and algorithms
are topics of discussion later in the book.
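As a toy illustration of the idea (not any vendor's algorithm), inverse planning can be cast as an optimization over beamlet weights. The sketch below, using a made-up dose-influence matrix, finds non-negative weights that bring the computed dose as close as possible, in a least-squares sense, to a prescribed dose; clinical systems add organ-specific objectives, dose-volume constraints, and deliverability constraints on top of this core idea.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 200, 40
D = rng.random((n_voxels, n_beamlets))     # hypothetical dose-influence matrix (dose per unit beamlet weight)
d_presc = np.full(n_voxels, 2.0)           # toy prescription: 2 Gy per fraction to every target voxel

w, residual = nnls(D, d_presc)             # non-negative beamlet weights minimizing ||D w - d_presc||
achieved = D @ w
print(f"mean achieved dose {achieved.mean():.2f} Gy, residual norm {residual:.2f}")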
Major 3D treatment planning systems that are commercially available
are Pinnacle (www.medical.philips.com), Eclipse (www.varian.com),
and Computerized Medical Systems (CMS; www.cms.stl.com). As these
systems are constantly evolving and undergoing revisions, the reader
should be mindful of the fact that an older version of any given system
may not carry much resemblance to the newest version. Therefore,
anyone in the market for such a system needs to do some researching
and check out each system with its most current version. Also, because
these systems and their software are frequently revised and updated, the
user is advised to carry a service contract for maintenance as well as the
option of receiving future updates as they come along.

STAFFING
The 1991 Blue Book has been updated by ASTRO to a new document,
entitled Safety Is No Accident: A Framework for Quality Radiation Oncology
and Care (6). This book was published in 2012 and is available online at
https://www.astro.org/Clinical-Practice/Patient-Safety/Blue-
Book/bfp/index.html#/60. The new document provides a blueprint for
modern radiation oncology facilities in terms of structure, process, and
personnel requirements.
The basis for these recommendations is the fundamental principle that
radiation oncology practice requires a team of personnel with
appropriate educational and training background. Besides the physician
specialists, the radiation oncologists, radiotherapy requires the services
of medical physicists, dosimetrists, therapists, and nurses. The minimum
level of staffing recommended is shown in Table 1.1. In the specific
areas of treatment planning, the key personnel are radiation oncologists,
medical physicists, and dosimetrists. The quality of treatment planning
largely depends on the strength of this team.

TABLE 1.1 Minimum Personnel Requirements for Clinical Radiation Therapy
Radiation Oncologist
The radiation oncologist, who has the ultimate responsibility for the care
of the patient, heads the treatment planning team. It is his or her
responsibility to formulate the overall plan for the treatment, including
dose prescription to tumor-bearing sites of the body. Details of the actual
treatment technique, beam energies, beam directions, and other specific
details of the treatment are finalized after a number of isodose plans
have been calculated and an optimal plan has been selected. The final
plan must meet the approval of the radiation oncologist in charge of the
patient.
The ACR standards require that the radiation oncologist be board-
certified to practice radiation oncology. In addition, the number of
radiation oncologists in a given institution must be in proportion to the
patient load (Table 1.1). No more than 25 to 30 patients should be
treated by a single physician. It is important to ensure that each patient
receives adequate care and attention from the physician and that the
treatments are not compromised because of the physician’s lack of time.

Medical Physicist
No other medical specialty draws as much from physics as radiation
oncology. The science of ionizing radiation is the province of physics,
and its application to medicine requires the services of a physics
specialist, the medical physicist. It is the collaboration between the
radiation oncologist and the medical physicist that makes radiotherapy
an effective treatment modality for cancer. Ralston Paterson (7),
emphasizing this relationship, stated in 1963: “In radiotherapy the
physicist who has given special study to this field is full partner with the
therapist, not only in the development of the science, but in the day-to-
day treatment of patients. The unit team, therefore, even for the smallest
department, consists of a radiotherapist and a physicist.”
The unit team of radiation oncologist and medical physicist must have
a supporting cast to provide radiotherapy service effectively to all
patients referred to the department. Dosimetrists, radiation therapists
(previously called technologists), nurses, and service engineers are the
other members of the team. It must be recognized by all concerned that
without this infrastructure and adequate staffing in each area of
responsibility, radiotherapy is reduced to an ineffective, if not unsafe,
modality of treatment.
Adequacy of the support of physics has been spelled out in the ASTRO
document (6). The number of physicists required in a radiotherapy
institution depends not only on the number of patients treated per year
but also on the complexity of the radiotherapy services offered. For
example, special procedures such as stereotactic radiotherapy, HDR
brachytherapy, total-body irradiation for bone marrow transplantation,
3D CRT, IMRT, IGRT, SBRT, respiratory gating, TomoTherapy,
CyberKnife treatments, and intraoperative radiotherapy are all physics-
intensive procedures and therefore require more physicists as
recommended by ASTRO.
According to the American Association of Physicists in Medicine
(AAPM), a medical physicist involved with clinical services must have a
PhD or MS degree and be board certified in the relevant specialty; in this
case, radiation oncology physics. Also, most physicists in an academic
setting teach and do research, and therefore a doctorate degree is more
desirable for them. Such research plays a key role in the development of
new techniques and in bringing about new advances to radiation
oncology. Paterson (7) emphasized this role by stating “While the
physicist has a day-to-day routine task in this working out or checking of
cases, it is important that he has time for study of special problems.
These may include the development of new x-ray techniques, the
devising of special applicators to simplify or assist treatment, the critical
analysis of existing techniques, or research work of a more fundamental
nature.”

TABLE 1.2 Roles and Responsibilities of Physicists

A medical physicist’s role in radiotherapy is summarized in Table 1.2.
Specifically in treatment planning, the physicist has the overall
responsibility of ensuring that the treatment plan is accurate and
scientifically valid. That means that the physicist is responsible for
testing the computer software and commissioning it for clinical use. He
or she is also responsible for proper interpretation of the treatment plan
as it relates to the dose distribution and calculation of treatment
duration or monitor units.
One important role of a medical physicist that is often overlooked is
that of a consultant to radiation oncologists in the design of the
treatment plan. It is not uncommon to see physicians working directly with
dosimetrists to generate a treatment plan without any significant input from the
physicist. This process may be operationally smooth
and less costly but can be risky if serious errors go undetected and the
final plan is not optimal. It must be recognized that a qualified medical
physicist, by virtue of education and training, is the only professional on
the radiotherapy team who is familiar with the treatment planning
algorithm and can authenticate the scientific validity of a computer
treatment plan. It is important that he or she be actively involved with
the treatment planning process and that the final plan receives his or her
careful review. Because of the tendency of some physicians to bypass the
physicist, some institutions have developed the policy of having the
physicist present during simulation and doing the treatment planning
either personally or closely working with the dosimetrist in the
generation and optimization of the treatment plan.

Dosimetrist
Historically, dosimetrists were classified as physics personnel with a
Bachelor of Science degree in the physical sciences. They assisted
physicists in routine clinical work such as treatment planning, exposure
time calculations, dosimetry, and quality assurance. They could be called
physicist assistants, analogous to physician assistants.
Today the dosimetrist’s role is not much different, but the educational
requirements have been formalized to include certification by the
Medical Dosimetrist Certification Board (MDCB), in addition to a
Bachelor’s Degree and graduation from an accredited Medical Dosimetry
training program.
As discussed earlier, the role of a dosimetrist is traditionally to assist
the physicist in all aspects of physics service. However, in some
institutions, dosimetrists substitute for physicists, and/or the treatment
planning procedure is made the sole responsibility of the dosimetrist
with no supervision from the physicist. Whether it is done for economic
or practical reasons, leaving out the physicist from the treatment
planning process is not appropriate and definitely not in the best interest
of the patient. The dosimetrist’s role is to assist the physicist, not to
replace him or her. The radiation oncologist must understand that a
computer treatment plan necessitates the physicist’s input and review
just as much as it necessitates consultation with other medical specialists in
the diagnosis and treatment of a patient.

REFERENCES
1. ISCRO. Radiation Oncology in Integrated Cancer Management: Report of
the Inter-Society Council for Radiation Oncology (Blue book). Reston,
VA: American College of Radiology; 1991.
2. ICRU. Prescribing, Recording, and Reporting Photon Beam Therapy.
ICRU Report 50. Bethesda, MD: International Commission on
Radiation Units and Measurements; 1993.
3. ICRU. Prescribing, Recording, and Reporting Photon Beam Therapy
(Supplement to ICRU Report 50). ICRU Report 62. Bethesda, MD:
International Commission on Radiation Units and Measurements;
1999.
4. ICRU. Prescribing, Recording, and Reporting Electron Beam Therapy.
ICRU Report 71. Bethesda, MD: International Commission on
Radiation Units and Measurements; 2004.
5. Khan FM, Gibbons JP. The Physics of Radiation Therapy. 5th ed.
Philadelphia, PA: Lippincott Williams & Wilkins; 2014:179–180.
6. American Society for Radiation Oncology (ASTRO). Safety is No
Accident: A Framework for Quality Radiation Oncology and Care.
Fairfax, VA: American Society for Radiation Oncology; 2012.
https://www.astro.org/clinical-practice/patient-safety/safety-
book/safety-is-no-accident.aspx
7. Paterson R. The Treatment of Malignant Disease by Radiotherapy. 2nd
ed. Baltimore, MD: Williams & Wilkins; 1963:527.
2 Imaging in Radiotherapy

George T.Y. Chen, Gregory C. Sharp, John A. Wolfgang, and Charles A. Pelizzari

INTRODUCTION
Imaging is the basis of modern radiotherapy; it plays a major role in
disease localization, treatment planning, guiding radiation delivery, and
monitoring response. While projection radiography was the backbone of
medical imaging in the first 75 years of its existence, transformative
imaging advances in the 1970s led to visualizing patient anatomy
through computer assisted tomography. Soft tissue tumors can alter the
spatial relationships of normal organs, and with transverse 3-
dimensional (3D) maps radiation oncologists could assess target extent
and proximity to sensitive organs at risk for collateral damage. Shortly
after the introduction of computed tomography (CT) scanning in
diagnostic radiology departments, radiation oncologists explored its
potential use in therapeutic radiology. Initial studies found tumor
coverage was marginal or inadequate in nearly one-half of patients
studied (1). Advances in volumetric imaging have continued to evolve,
and today multimodality imaging provides insight into tumor
biochemistry and microenvironment, as well as normal organ function and
structure.
In parallel, advances in radiation delivery such as intensity modulated,
charged-particle-beam, and stereotactic body radiotherapy provided the
capability to deliver highly conformal doses to the 3D target using
personalized anatomical maps (2–4). Such delivery advances in turn
increased the interest in the development of more accurate methods to
image and treat moving targets. Advances in treatment-room imaging
have further provided the capability of image-guided radiotherapy,
where images obtained on a daily basis before or during treatment can
be used to correct for variations in setup and organ motion. With
advances in imaging and radiation treatment, dose conformation has
provided an opportunity to safely increase tumor dose, thereby
increasing the probability of tumor control, while minimizing dose to
normal radiation sensitive organs below thresholds of serious
complications.
The general principles of medical image formation and clinical
oncologic imaging are described elsewhere (5,6). Observance of the
centennial anniversary of the discovery of x-rays has resulted in
historical reviews (7,8). This chapter provides an overview of the
imaging modalities and image processing relevant to conformal external
beam radiotherapy.

IMAGE ACQUISITION
Volumetric Imaging
Imaging in radiotherapy is broadly categorized into acquired and
processed images. Figure 2.1 diagrammatically summarizes imaging
modalities used in radiotherapy. Imaging in radiotherapy is dominated
by volumetric data sets from multiple modalities. The modalities probe
the body noninvasively, utilizing different physical interactions of tissues
and the probe modality. Each modality has its strengths and
applications, and complements the strengths of other imaging
techniques. Volumetric image acquisition is emphasized, although
projection radiography has an important role in the clinic. Use of
advanced imaging technologies was surveyed and reported in 2009 (9).

Computed Tomography
CT is the primary imaging modality used in radiation oncology. The
history and basic principles of CT image formation are discussed
elsewhere (5,10). Briefly, CT measures the linear attenuation coefficient
of each pixel in the transverse imaging plane. A fan beam of diagnostic
energy x-rays passes through the patient, and the transmitted radiation is
measured. Multiple projection views are acquired as the x-ray source
rotates around the patient. From these projections, image reconstruction
algorithms generate a transverse digital image. Each pixel value is a
measurement of μx, the linear attenuation coefficient (relative to water
μw) at an effective diagnostic x-ray energy. At diagnostic energies, the
dominant photon/tissue interactions are the photoelectric and Compton
effects. Pixel values are quantified in Hounsfield units (HU):

HU = 1,000 × (μx − μw)/μw

For a single energy scan, the HUs associated with various body
components are: air, −1,000 HU; water, 0 HU; fat, ∼−80 HU; muscle,
∼30 HU; and bone, variable up to or greater than 1,000 HU. HUs of
different tissues at diagnostic energies can be approximately
extrapolated to electron density values used for dose calculations (11).
Tissue characterization to separately unfold the atomic composition and
electron density per pixel can be performed with dual energy scanning
(12), although most radiotherapy planning scans are performed at a
single x-ray potential. One potential application of dual energy scanning is the
calculation of charged particle stopping powers used in proton and
heavy ion radiotherapy.
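As a concrete illustration of the HU-to-electron-density conversion mentioned above, the following minimal Python sketch performs a piecewise-linear lookup. The calibration points shown are hypothetical placeholders; in practice the curve is measured for each scanner with a tissue-characterization phantom and entered into the planning system during commissioning.

```python
import numpy as np

# Hypothetical calibration points: (HU, relative electron density).
# Real values are measured per scanner with a tissue-characterization phantom.
CALIBRATION = np.array([
    [-1000.0, 0.001],  # air
    [  -80.0, 0.95],   # fat (illustrative)
    [    0.0, 1.00],   # water
    [   30.0, 1.04],   # muscle (illustrative)
    [ 1500.0, 1.85],   # dense bone (illustrative)
])

def hu_to_relative_electron_density(hu):
    """Piecewise-linear lookup of relative electron density from CT numbers."""
    return np.interp(np.asarray(hu, dtype=float),
                     CALIBRATION[:, 0], CALIBRATION[:, 1])

# Example: convert a few representative voxel values from a planning scan.
print(hu_to_relative_electron_density([-1000, -80, 30, 1200]))
```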
FIGURE 2.1 A: Imaging modalities acquired in radiation therapy; B: Image processing performed
on acquired images to extract and integrate information needed for treatment.

Figure 2.2 shows a human abdominal cadaver section compared to the
corresponding CT image (13). While delineation of an organ on CT will
not exactly correspond to photographic ground truth (14,15), CT can be
geometrically very accurate. Early CT scanners produced a single
transaxial slice; 3D volumes were constructed by stacking images.
Multiplanar reconstruction of CT data provides the user with sagittal,
coronal, and oblique views.
Volumetric CT studies are acquired on CT-simulators within the
department. These devices are essentially diagnostic quality scanners
with a slightly larger gantry opening to accommodate treatment
accessories. The resulting images are nearly the same quality as those
used in diagnosis, and provide images usually more than adequate for
radiation therapy planning.

CT Scan Acquisition for Treatment Planning


The volumetric CT scans are input into treatment planning, with the goal
of calculating the dose delivered at treatment. Thus, care is taken to
position the patient as he/she will be treated on the linear accelerator.
Flat treatment couches are used rather than the diagnostic scanner
curved table-tops to ensure that internal organs closely approximate
their shape and location at treatment. Treatment immobilization devices
are used during the scan. High-speed scanning reduces motion artifacts
and can capture a bolus of contrast before dissipation.

FIGURE 2.2 A: Photographic transaxial section of a human abdomen from a cadaver. B: Corresponding CT scan (courtesy Visible Human Project).

Three-dimensional CT acquisition modes include (a) axial and (b)
helical mode. In axial mode, the patient support assembly is static; an
image slice is acquired, the x-ray source is then gated off, and the couch
is advanced to the next longitudinal position. The procedure is repeated
to build a 3D image volume. In helical mode, the couch is continuously
advanced while the x-ray tube continuously rotates, leading to faster
volumetric scan acquisitions. The time to complete one rotation is ∼0.5
seconds. Tagging the x-ray longitudinal coordinate with the image
projection data and resorting as the patient advances into the gantry
provides the information needed for helical scan image reconstruction.
A scan field of view (FOV) is selected to permit visualization of the
external skin contour, data needed for dose calculations. For sites such
as head and neck, often two FOVs are used: a smaller FOV for the neck,
and a larger FOV to fully image the shoulders. Longitudinal scan limits
are chosen to capture both the tumor extent and longitudinal extent of
organs at risk. Slice thickness of 3 mm and a total of 200 slices per
scanning study are typical in planning scans.
Convention dictates that cross-sectional images are displayed as
viewed from below; for a patient in the supine position on the table,
head first into the scanner gantry, the image left is the patient’s right
side. Icons and alphanumeric information imprinted on the scan image
provide details of pixel size, slice thickness, and radiographic parameters
used during imaging.

CT Artifacts
Artifacts can degrade CT planning studies (16). Artifacts may originate
within the patient. Beam hardening results in streaks when the photon
beam crosses particularly opaque regions, such as the bone in the
posterior fossa of the brain, or metallic fillings in teeth. Physiologic
motion can also cause streak artifacts. Intravenous or oral contract can
artificially elevate HUs. These artifacts can perturb the calculation of
radiographic path length leading to inaccurate dose calculations.
Artifacts can also originate within the scanner hardware or by choice of
imaging parameters. Partial volume sampling is an artifact resulting
from choice of slice thickness; too thick a slice influences the
detectability of small lesions. Other artifacts are introduced in helical or
cone-beam reconstructions.

Limitations of 3D Imaging of Moving Anatomy


Imaging organ motion is essential in conformally irradiating moving
targets. Quantifying motion can ensure adequate tumor coverage and
unnecessary irradiation of adjacent normal tissues. Conventional 3D
imaging of moving anatomy may result in an inaccurate depiction of
organ shape and location. A motion artifact can frequently be seen in a
thoracic CT scan of a patient breathing lightly during a scan (17). Figure
2.3 is an example of such an artifact, where the lung/diaphragm
interface is distorted.
Phantom studies under controlled conditions illustrate this temporal
aliasing artifact (18). The first column in Figure 2.4 is a photograph of
test objects embedded in a foam block. An initial scan is taken with the
phantom stationary. Surface rendering of the scan shows life-like
realism, as seen in the second column. The phantom is then set into
motion simulating respiration, and scans are acquired in standard scan
mode. The resulting images of test objects are strikingly distorted, as
shown in the next three columns. For scale, the largest ball is 6 cm in
diameter. With a motion amplitude of 1 cm (2 cm peak to peak), this
sphere may be imaged with an inaccurate longitudinal axis dimension as
small as 4 cm. The distortion visualized is dependent on both scan and
object motion parameters as well as the respiratory phase at the instant
the imaging planes intersect the test object, which explains why
distortions vary.

FIGURE 2.3 Temporal aliasing artifacts when scanning a patient during respiration. Note distortion
at the lung/diaphragm interface, indicated by yellow.
FIGURE 2.4 Imaging test objects in a phantom. The objects are surface rendered when the
phantom is static and undergoing respiration simulated motion. Note the geometric distortions of
the pear and balls.

4D CT Scanning
Four-dimensional (4D) CT scanning here is defined as CT acquisition at a
respiratory time scale. The objective of 4D CT scanning is to capture the
shape and trajectory of moving organs during breathing. The motion
data can then be used to design an aperture to encompass the observed
motion, or to apply motion mitigation strategies such as beam gating.
Proof of principle of 4D scanning was initially prototyped on single-slice
scanners in 2003 (19–21). A multislice 4D CT simulator system became
commercially available shortly thereafter and rapidly became the
scanner of choice for radiation therapy (22).

FIGURE 2.5 Coronal MPRs of 4D liver tumor scan. Organs move craniocaudally (yellow reference
line) at exhale (A); at inhale (B); residual artifacts due to irregular breathing (C).

Respiration-correlated CT uses the motion of the abdominal surface,
volume of air measured by spirometry, or other respiratory surrogate
signal to correlate the respiratory state during CT acquisition of each
slice. The signals are used to re-sort the approximately 1,500 slices of
reconstructed image data into a set of coherent spatiotemporal CT data sets
corresponding to specific time points of the respiratory cycle. A 4D CT
acquisition requires a few minutes, and produces 10 static CT volumes,
each with a temporal separation of ∼one-tenth of the respiratory period
(∼0.4 seconds). Details of the 4D CT acquisition methods and
applications are described elsewhere (22–24). Dose during a 4D CT scan
is approximately five times that of a conventional treatment planning
scan, although studies have shown that this may be reduced by altering
the radiographic technique without significant reduction of motion
information (25).
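The re-sorting step can be sketched in a few lines of Python. This is a simplified, phase-based binning given a respiratory surrogate signal; the function and variable names are hypothetical, and commercial 4D CT systems use considerably more elaborate, vendor-specific sorting and artifact handling.

```python
import numpy as np

def phase_bin_slices(slice_times, signal_times, signal_values, n_bins=10):
    """Assign each reconstructed slice to a respiratory phase bin (0..n_bins-1).

    slice_times   : acquisition time of each slice (s)
    signal_times  : sample times of the respiratory surrogate (s)
    signal_values : surrogate amplitude (e.g., abdominal surface height)
    Phase is measured from peak inhale to peak inhale, a simplification of
    clinical phase-sorting algorithms.
    """
    signal_values = np.asarray(signal_values, dtype=float)
    # Peak-inhale indices: local maxima of the surrogate signal.
    peaks = np.where((signal_values[1:-1] > signal_values[:-2]) &
                     (signal_values[1:-1] >= signal_values[2:]))[0] + 1
    peak_times = np.asarray(signal_times)[peaks]

    bins = []
    for t in slice_times:
        # Find the breathing cycle containing this slice.
        k = np.searchsorted(peak_times, t) - 1
        if k < 0 or k + 1 >= len(peak_times):
            bins.append(None)               # outside a complete cycle
            continue
        phase = (t - peak_times[k]) / (peak_times[k + 1] - peak_times[k])
        bins.append(int(phase * n_bins) % n_bins)
    return bins
```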
Figure 2.5 displays coronal images of a 4D CT scanned for a liver
tumor. The first two images represent anatomy at exhale and inhale,
showing a caudal movement of the dome of the liver by several
centimeters.
A 4D capable CT-simulator is suited for assessing organ motion at a ∼1-second
time scale, but has insufficient temporal resolution to image
cardiac motion. Ultrahigh-speed cardiac scanners have shown that
vessels and tumors near the heart can move by >1 cm (26) due to
cardiac pulsation.
While an important advance, 4D CT acquisition can still have residual
artifacts. Phase resorting neglects variability of lung tidal volume during
free breathing. Figure 2.5C shows an example of an artifact from phase-
based 4D CT resorting where breathing amplitude was irregular.
Respiratory variations in amplitude, periodicity, and trajectory can
perturb 4D scans. Strategies to coach breathing during 4D scanning have
included voice prompts and visual feedback but with variable success.
Physical breathing control has been attempted through abdominal
compression or active breathing control, where the patient breathes
through a regulated valve and is forced to hold his or her breath at a specific
respiratory phase. This interventional approach reduces the 4D problem
to a static scenario, where both imaging and treatment are performed
with minimal motion (27,28).

Volumetric Imaging in the Treatment Room


The interest in precision radiotherapy spurred x-ray volumetric image
acquisition in the treatment room. As this topic is covered in detail in
the chapter on Image Guided Radiotherapy, we briefly cover selected
highlights of image-guided therapy.
CT scanning in the treatment room was extensively used to study
prostate and seminal vesicle volume over a protracted treatment regimen
(29). The study acquired over 360 CT scans in 15 patients at a 3-scan-
per-week rate. The patient is initially set up on the treatment couch and
CT scanned in room (with the scanner advancing along the patient’s
longitudinal axis by translating on rails). After tumor localization, the
patient support assembly is rotated into treatment position. This solution
provided a stopgap volumetric imaging capability in the treatment room
while other imaging solutions were developed and refined.
The most common implementation in use today involves adding
tomographic imaging capability to the treatment machine. Much of the
interest was spurred by investigators at William Beaumont Hospital (30).
An x-ray tube and flat panel detector are mounted orthogonal to the
treatment axis. The imaging approach is known as cone-beam computed
tomography (CBCT) because the imaging beam is divergent along the
longitudinal axis. Projection images for volumetric reconstruction are
acquired at multiple angles as the gantry is rotated around the patient in
∼1 minute. Reconstruction algorithms optimized for CBCT are used to
generate volumetric anatomical maps. While diagnostic CT scanners still
set the image quality standard, CBCT images provide clinically useful
information even in the presence of motion during acquisition, radiation
scatter, and mechanical isocenter wobble during scans. An advance in in-
room CBCT imaging was the development of 4D CBCT (31). Time-
resolved data can be obtained by sorting the acquired projection data
into different temporal bins according to respiratory motion phase in an
approach similar to standard 4D CT.
A third approach to treatment room tomographic imaging is to use the
MV treatment beam itself. The therapeutic beam serves as the radiation
source, and a flat panel radiation detector acquires the image projections
(32). Although soft tissue contrast is reduced in the megavoltage CBCT
images, there is adequate information for patient positioning. MV CT is
less susceptible to imaging artifacts due to high-density objects such as
metallic hip implants or dental fillings. As with other tomography
imaging, CBCT is affected by motion during data acquisition.
Innovative integrated imaging/treatment hardware has also been
developed. Tomotherapy (33) incorporates fundamental principles of CT
imaging into a megavoltage treatment system. IMRT is delivered slice by
slice as radiation intensity across the slice is modulated by shutters.
Rotation around the patient similar to CT acquisition produces highly
conformal MV dose distributions. An unmodulated fan beam can
generate an MVCT image before treatment for image guidance. The
CyberKnife (34) is another approach; a compact linear accelerator is
mounted on an industrial robotic arm. The robotic arm covers more
patient solid angle than a conventional linac, and enables irradiation
from noncoplanar angles. Image guidance is provided by room-mounted
x-ray tubes and digital image receptors, providing real-time 2D
fluoroscopic imaging. The system has been used to “track” motion of
tumors in the lung and abdomen by following radio-opaque markers.

Magnetic Resonance Imaging


Magnetic resonance imaging probes the body with magnetic and RF
radiation, providing a spatial map of the bulk magnetization of a voxel.
Acquisition sequences can be tailored to vary image contrast and
content. The most commonly acquired MRI scans are proton density, T1-,
and T2-weighted images. The T1 and T2 notation refers to the spin–lattice and
spin–spin relaxation times, the time constants associated with nuclear magnetic
moment reorientation after excitation to an elevated energy state.
Factors that influence image appearance include the proton density
within the voxel, relaxation times, blood flow, and magnetic
susceptibility (5,6). Spin density and relaxation times of soft tissues vary
considerably depending on microenvironment, leading to images with
superior contrast resolution in comparison to CT. Figure 2.6 shows CT
and MRI scans of a patient's abdomen at the same anatomical level.
MRI contrast agents are paramagnetic materials that have unpaired
electrons, resulting in magnetic susceptibility. These materials alter the
T1 and T2 relaxation rates. Gadolinium chelates are used as contrast
media for brain imaging when a breakdown of the blood–brain barrier is
suspected.

FIGURE 2.6 CT and proton density weighted MRI scan of patient with liver tumor, at
approximately the same anatomical level. Lesion is well visualized in the MRI due to superior soft
tissue contrast. Bone in CT (white) appears dark on MRI. Fat, due to its high hydrogen content,
appears brighter on MRI.

MRI Artifacts
Magnetic field inhomogeneity, RF field spatial distribution, and effects
associated with rapidly changing magnetic field gradients can cause
artifacts in MRI images. Artifacts due to magnetic field inhomogeneity
can result in geometric distortion, which leads to slight differences in
geometric scale along image axes. Foreign materials also result in a local
geometric distortion. On MRI, surgical clip artifacts result in loss of
signal and spatial distortion, even when the metallic fragments are
small. With patient motion, multiple ghosts in the image may appear.
Treatment planning scans using MRI must therefore pay particular
attention to geometric distortions (35,36). Phantoms may be useful to
calibrate MR scanners, but in vivo variations are difficult to correct.
Often, a CT scan is also available, and direct comparison provides a
measure of determining geometric integrity.

Target Delineation with MRI


MRI is useful in delineation of tumors and normal structures of the brain
(35,37). In comparing CT and MRI with stereotactic biopsies,
investigators found that both CT and MRI defined gross tumor by its
contrast-enhanced ring, but T2-weighted MRI detected a larger
edematous region containing infiltrating tumor.
Isotropic margins for high-grade gliomas (1 to 2 cm) are commonly
used. Diffusion tensor imaging (DTI) is an MR technique sensitive to
disruptions of white matter tracts resulting from tumor infiltration. A
current hypothesis is that such data can predict areas of potential tumor
involvement along structural pathways (37,38), increasing CTV
delineation accuracy.
Cine MRI has been used to study organ motion (39–41). Patients are
scanned every few seconds in the selected anatomical plane for up to 1
hour, during which physiologic motion is visualized. Advantages
of MRI over CT in studying motion include no radiation dose and direct
imaging in the plane of interest. These studies revealed information on
the range of organ motion and distortion possible over a treatment time
interval.

Functional Magnetic Resonance Imaging


Functional MRI (fMRI) reveals physiologic and neurologic activity in
contrast to structural information (42). When a task is performed,
oxygen demand in the region of the brain controlling the activity rises,
resulting in a net increase in oxyhemoglobin. Since deoxyhemoglobin is
paramagnetic, changes from increased blood flow result in changes in
the signal. Functional imaging of the brain has been used to map the
human visual system during visual stimulation, language processing
areas, and sensory and motor cortex (43,44).
If the neuropsychologic activation sites of a specific patient are
localized before treatment planning, it may be possible to avoid
irradiating these eloquent areas during focal radiotherapy. Figure 2.7A
shows an image of a patient with an astrocytoma in the left parietal
region. fMRI was used to identify areas activated during silent word generation,
finger movement, tactile stimulation, and other motor functions. Dose
calculations performed showed dose to certain functional areas could be
reduced (45) by shifting dose to another less critical area.

Magnetic Resonance Spectroscopic Imaging


The dominant MR signals result from the 1H nuclei of water. Suppression
of water signals permits the measurement and analysis of other
compounds. Spectroscopy was initially limited to small regions of
interest, but the technology has evolved to image multiple voxels. For each
voxel, the MR spectrum then reveals the chemical composition.
Differences in metabolite concentrations, observed either as peak
heights or as ratios of different peaks, can be used to characterize the
tissue. Applications of MRSI in radiation therapy include
characterization of brain and prostate tumors (46,47).

FIGURE 2.7 A: fMRI scan showing proximity of brain tumor to eloquent areas of the brain. The
functional areas (silent word generation, motor cortex controlling finger/hand) were labeled OAR
areas and dose was optimized to spare regions. B: Metastatic lymph nodes (yellow) are mapped
on to pelvic vessels using a lymphotropic iron nanoparticle contrast agent. The prostate (green)
and seminal vesicles (gold) are also shown. Left – AP view; Right – LPO view.

An example of combining functional and spectroscopic imaging in the
treatment of brain tumors has been reported (48). This feasibility study
involved patients whose target volume was defined in part by MRSI (by
detecting voxels with a choline:creatine ratio associated with tumors)
and functional areas of the brain. This general approach was proposed
by Ling et al. (49) decades ago, when the technology for such studies was
evolving. The authors reported that the combination of these special
imaging procedures complemented conventional CT and MRI studies
with the potential of dose painting.

Lymph Node Visualization in the Pelvis


Nodal metastases from prostate cancer are primarily located along the
major pelvic vasculature. Defining nodal irradiation treatment portals
based on vascular rather than bony anatomy could decrease the
irradiation of normal pelvic tissues, thereby reducing toxicity.
An experimental MR imaging technique is visualization of lymph
nodes with iron nanoparticles. Lymphotropic nanoparticle-enhanced
magnetic resonance imaging has been shown to identify the location and
extent of the lymphatic system involved with cancer. Nanoparticles
increase contrast of the lymphatic system during optimized MRI pulse
sequences. Figure 2.7B shows lymph node distributions for prostate
cancer (50).

MRI Imaging in the Treatment Room


With MRI’s superior soft tissue imaging, it is natural to explore its
integration with treatment machines. Two devices under development
have been reported. The ViewRay System (51) employs a low field
strength MRI system to provide volumetric images. Treatment is
delivered via a cobalt 60 triple headed system, capable of IMRT. Co-60
provides a nonferrous solution to avoid complexities associated with
operating a linear accelerator near a magnetic field. An alternative
approach is the union of a diagnostic MRI unit and a linear accelerator
(52).

Emission Tomography
Emission tomography probes tumor biochemistry by measuring the
biodistribution of biologically active radiolabeled pharmaceuticals.
Specific tracers can probe tumors for hypoxia, the degree of cell
proliferation, angiogenesis, apoptosis, and response to therapy. These
data complement anatomical and structural information from CT and
MRI, and may provide the data needed to identify tumor sub-volumes
for dose painting. The current status of functional and molecular
imaging for radiotherapy is described in a 2014 review (53).
Emission imaging is based on the detection of gamma rays emitted by
intravenously injected radiopharmaceuticals. Scintillators detect the γ-
rays emitted and the resulting visible light is amplified by
photomultiplier tubes. Reconstruction of these signals provides
volumetric information on the 3D and 4D biodistribution of the
administered radiopharmaceutical. In SPECT imaging, single γ-emitting
isotopes are administered, while for PET, positron emitters are used. The
positrons annihilate with nearby electrons, emitting a pair of opposed γ-rays.
PET scanners include a ring of detectors to allow simultaneous detection
of the emitted γ-rays in the transverse plane. Volumetric data sets are
acquired at several couch positions, requiring several minutes at each
position.

PET in Radiation Planning


PET is useful in staging disease and in defining the target volume. Its use
in target volume delineation in non-small cell lung cancer (54) has been
prospectively studied. Target volumes were drawn with and without the
use of PET. The most common tracer used is 18F-FDG, which identifies
regions of high glucose metabolism, often associated with tumors. In this
study, use of PET altered the treatment volume in ∼60% of cases. PET
distinguished tumor from atelectasis (CT cannot), identified nodal
disease in one-third of patients, and identified additional tumor foci.
Applications of PET in treatment planning have been studied (55,56).
Cu-ATSM is a radiopharmaceutical labeled with a positron-emitting
isotope of copper that has been shown to identify areas of tumor
hypoxia. Identifying such regions could be used to focus an additional
boost radiation dose to control these historically radio-resistant cells.
Chao et al. (57) reported the feasibility of using Cu-ATSM in treatment
planning for head and neck tumors. By simulating hypoxic regions with
a distribution of the radiopharmaceutical localized by PET, they showed
that additional dose could be delivered by IMRT accurately after fusing
the PET data with a planning CT.

PET/CT
The role of PET in radiation oncology has increased with the
development of PET/CT scanners. A PET/CT scanner is a mechanical
union of a multislice CT scanner with a PET scanner (58). By mechanical
integration, technical image registration issues are minimized, although
data acquisition periods for the modalities differ. This significant
difference can result in ambiguities in bio-distributions that arise from
different patient positioning during scanning.
As with other tomographic imaging modalities, respiratory motion
poses a challenge to PET imaging. During the ∼20 minutes of data
acquisition, periodic motion leads to blurring of the reconstructed
isotope distributions. Four-dimensional PET acquisition has been
reported (59,60). Data acquisition is performed in temporal correlation
to respiratory motion by labeling each detected event with the actual
motion state. Following encoded temporal data acquisition,
reconstructions are performed using temporal bins of detected events.

Imaging Response
The importance of imaging response to therapy is clear. For the
individual patient, early evidence of treatment ineffectiveness can
prompt adjustments in treatment. In clinical trials, evidence of the
effectiveness of a new agent or technique can speed its approval by
regulatory agencies. Because overall survival from a therapeutic
modality may require decades to quantify, biomarkers that indicate
response can indicate which therapeutic approaches are most promising.
Imaging both anatomical and biologic response to treatment provides a
noninvasive method to measure response. A current standard for
quantifying response is Response Evaluation Criteria in Solid Tumors
(RECIST) (61), which has been refined since its introduction 15 years
ago. RECIST applies unidimensional measurements to objectify response.
Newer quantitative methods for response evaluation (61,62) (PET,
volumetric measurements) are considered to have promise, but await
standardization and validation before widespread adoption.

Other Imaging Techniques


Projection Imaging
Thus far, the focus has been on volumetric 3D/4D image acquisition.
Two-dimensional projection imaging still has an important role in
radiotherapy. While limited in visualizing soft tissues and 3D
localization, it is useful in patient setup and quality assurance.

Simulation Images
Conventional radiographic simulators generate 2D projection images and
fluoroscopic sequences. Historically, radiation oncologists outlined the
region to be irradiated on radiographs; cerrobend blocks were fabricated
to shield uninvolved tissues. Today, with 3D simulation, 2D simulation
workload has been reduced, and limited to simple setups (e.g., palliative
cases). A right lateral simulation radiograph for a metastatic brain lesion
is shown in Figure 2.8A. Bony anatomy and low-density areas, for
example, airways, are well visualized in projection radiography.
Typically AP and lateral images are acquired for field placement and
treatment. If more complex oblique or noncoplanar beam angles are
used, reference films still provide data useful for confirming isocenter
placement. Fluoroscopic imaging on a digital simulator produces
dynamic planar images that can be used to estimate lung tumor motion
in 2D. Conventional simulators also are used to simulate the first
treatment setup, known as a verification sim, to free linear accelerators
from lengthy first-day setups.

FIGURE 2.8 A: Simulator radiograph of left lateral treatment field. B: Corresponding MV port film.
FIGURE 2.9 A: Ultrasound image to align prostate to isocenter. Contours from the planning scan
are projected onto daily ultrasound image. B: Surface imaging is used to align breast. The region
of interest (purple) on treatment day is aligned to the reference surface (acquired on first day or
from simulation). The system computes couch translation and rotation moves required to bring
surface du jour to reference surface.

Quality Assurance Images


Treatment verification images are required once per week to verify
setup. Historically, a majority of verification films were acquired
through MV planar imaging. Figure 2.8B is such an image for
verification of the port shown in the simulator film in Figure 2.8A.
Today, linacs have the flexibility to acquire verification images by 2D kV
or MV imaging or by CBCT. Critical setup verification (IMRT, SRS, SRT)
is often performed using CBCT scans on the treatment machine. Choice
of modality for QA images primarily depends on the department's
preference.

Ultrasound
Ultrasound imaging has been used in prostate localization for image-
guided therapy. High-frequency sound waves, in the range of 1 to 10
MHz, are generated by a piezoelectric crystal. A handheld ultrasound
probe is positioned manually over the suprapubic region. The US waves
are reflected at tissue interfaces and used to generate an anatomic image
(63). Since the position of the US probe is known in the machine coordinate
system, the required correction of the target to isocenter can be
determined. A typical radiotherapy ultrasound image for prostate
alignment during external beam conformal therapy is shown in Figure
2.9A. Use of ultrasound to image the prostate prior to IMRT was a
milestone in the utilization of routine IGRT. The technique was an
improvement over laser setup and bony landmark imaging. However, as
other IGRT approaches matured, studies showed systematic differences
between US-guided setup compared to radio-opaque implanted markers
(64), leading to a shift away from its use. The skill level needed to
reproducibly identify and localize the prostate with US was also a factor.
In a 2009 study of image-guided therapy (65), approximately 20% of
radiation oncologists reported use of US in IGRT, although use over the
years has fallen.
Surface Imaging
Video imaging to aid patient repositioning was proposed as early as
the 1970s (66). With digital video imaging, a 3D map of the patient surface
topology at simulation can be captured. A corresponding 3D surface
image is acquired on the treatment machine. In 3D video-guided setup,
the patient surface du jour is brought into congruence with the 3D
reference image in real time to provide setup correction. The key
assumption in surface-guided target setup is that reproducible
repositioning of external surface leads to precise subsurface target
alignment.
A 3D video-generated image is shown in Figure 2.9B. In this
technology, stereo cameras view the patient during a flash exposure of a
projected speckle pattern. The system reconstructs the surface topology
to an accuracy of <1 mm (67). The surface at treatment is fitted to a
reference surface to determine the transformation needed to bring the
two surfaces into congruence. The system has been applied to the setup
of partial breast irradiation (68,69). Video also provides a nonionizing
means of patient position surveillance.
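To illustrate the alignment step described above, the sketch below computes a least-squares rigid transform (rotation and translation) between two point clouds using the SVD-based Kabsch method. It assumes, as an idealization, that the daily and reference surface points are already in one-to-one correspondence; real surface-guidance systems first establish correspondence (e.g., with iterative-closest-point matching) and report the result as couch shifts and rotations.

```python
import numpy as np

def rigid_fit(daily_points, reference_points):
    """Least-squares rigid transform (R, t) mapping daily points onto reference points.

    daily_points, reference_points : (N, 3) arrays in point-to-point correspondence.
    Uses the SVD-based Kabsch algorithm; returns rotation matrix R and translation t
    such that reference ≈ R @ daily + t.
    """
    P = np.asarray(daily_points, dtype=float)
    Q = np.asarray(reference_points, dtype=float)
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)           # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t                                  # couch correction: rotate by R, translate by t
```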

Image Processing
Acquired tomographic images undergo extensive processing to generate
new information. The data extracted from the acquired studies are
utilized to determine the size, shape, location, and motion of the target
volume relative to adjacent normal tissues. One of the most critical tasks
of image processing is image segmentation.

Segmentation Nomenclature
Image segmentation involves the process of classifying regions by
defining their boundaries. The boundaries define organs or volumes
associated with treatment planning objectives. Nomenclature is
important to ensure clear communication between members of the
treatment planning team within a department, and to facilitate
interinstitutional comparison and collaboration in clinical trials.
Nomenclature continues to evolve as more precise documentation of
delivered dose is sought (70–74).
Table 2.1 lists basic abbreviations of target and organs at risk
originating from ICRU reports. The Gross Tumor Volume (GTV) localizes
the visible tumor. The Clinical Target Volume (CTV) includes a margin
around the GTV to encompass microscopic tumor extension. Planning
Target Volumes (PTV) include geometric margins to account for setup
error, tumor motion, physiologic variability (e.g., variable bladder
filling), and other uncertainties. Organs at Risk (OARs) identify radiation
sensitive structures to which dose should be minimized to avoid
collateral radiation injury. The Planning Organ at Risk Volume (PRV) adds a
margin around an OAR and is to be protected in dose optimization.
In an effort to more completely define volumes and sub-volumes,
groups have proposed additional nomenclature for radiation planning
(75,76). One proposal recommends additional descriptors to basic
nomenclature to simultaneously identify target and dose. For example,
PTVp1_5000 would be used to identify the planning target volume
associated with primary tumor p1, to be treated to 5000 cGy. A Level 2
node clinical target volume treated to 4,000 cGy would be denoted as
CTVn2_4000. For OARs with additional geometric expansion margins (in
case of setup uncertainty), planning organs at risk volumes (PRVs) could
be defined. For example, a left kidney could be identified as:
Kidney_L_10, denoting a 10-mm safety margin.
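A small helper, sketched below, shows how such labels could be composed programmatically when scripting structure names; the function names are hypothetical, and the exact label format varies between institutions and trial groups.

```python
def target_label(kind, site, dose_cgy):
    """Compose a target-volume label such as 'PTVp1_5000' or 'CTVn2_4000'.

    kind     : 'GTV', 'CTV', or 'PTV'
    site     : descriptor, e.g., 'p1' for primary tumor 1 or 'n2' for a level 2 node
    dose_cgy : prescribed dose in cGy
    """
    return f"{kind}{site}_{dose_cgy}"

def prv_label(organ, margin_mm):
    """Compose a PRV-style label such as 'Kidney_L_10' (left kidney, 10-mm margin)."""
    return f"{organ}_{margin_mm}"

print(target_label("PTV", "p1", 5000))   # -> PTVp1_5000
print(target_label("CTV", "n2", 4000))   # -> CTVn2_4000
print(prv_label("Kidney_L", 10))         # -> Kidney_L_10
```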

TABLE 2.1 Target and Normal Organ Volume Definitions

Segmentation
Volumes of interest can be segmented manually or automatically. In
manual segmentation, points are digitized by mouse on each transaxial
CT scan and connected by a closed contour. A 3D surface is created by
tessellation. When a steep HU gradient clearly identifies a boundary,
simple threshold edge detection is used; the skin and lung/chest wall
interface are typically outlined by this method. However, threshold edge
detection fails when organ boundaries are fuzzy and indistinct, or when
an organ abuts another soft tissue. Since image segmentation is
important to many medical specialties, algorithms to perform accurate
automated segmentation have been an area of continued development in
radiology and radiation oncology as well as in computer science (77).
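A minimal sketch of the simple threshold approach mentioned above, applied to the external (skin) contour, is shown below. It assumes a NumPy CT volume in HU and uses SciPy morphology utilities; the threshold value and post-processing steps are illustrative, and clinical systems expose them as adjustable parameters.

```python
import numpy as np
from scipy import ndimage

def segment_external_contour(ct_volume_hu, threshold_hu=-300):
    """Threshold-based segmentation of the patient's external surface.

    ct_volume_hu : 3D array of Hounsfield units (slices, rows, cols).
    Voxels above the threshold are treated as patient; the largest connected
    component is kept and holes are filled slice by slice.
    """
    body = ct_volume_hu > threshold_hu
    labels, n = ndimage.label(body)
    if n == 0:
        return body
    sizes = ndimage.sum(body, labels, index=range(1, n + 1))
    body = labels == (np.argmax(sizes) + 1)          # keep largest component
    return np.stack([ndimage.binary_fill_holes(s) for s in body])
```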
Newer algorithms apply more sophisticated approaches to image
segmentation. Information from anatomical atlases and databases has
been used as well as methods that apply statistical models of shape and
appearance. Studies have evaluated machine learning to teach
computers to segment. Multimodality imaging combined with image
registration can sometimes be used to enhance the performance of
automated segmentation schemes. In evaluating the efficacy and
accuracy of various automated schemes, metrics have been proposed to
compare performance with “ground truth” (78).
Segmentation variability can also be traced to the radiation oncologist,
who uses his or her personal experience, clinical knowledge, findings
documented in surgical notes, and other sources in addition to the
digital image data set to define the target. In one study (79), 12
physicians were asked to delineate a tumor and target volume on five
brain tumor cases; all were given the same image data. Variations of a
factor of 2 in volume and of 2 to 3 cm in location were observed.

Beam’s Eye View and Digitally Reconstructed Radiographs


The segmented target and normal tissues are used to build an interactive
3D model of the patient’s anatomy. The concept of an interactive BEV
was initially proposed by McShan (80), who used planar tomography to
extract skin, lung, and internal contours. By adjusting the beam angle, it
is often possible to geometrically avoid irradiation of a critical organ.
Goitein (81) advanced the concept to contiguous volumetric CT-based
treatment planning and further proposed using digitally reconstructed
radiographs (DRRs). A DRR is generated by 3D ray tracing from the
radiation source through the volume of CT data, projecting the resulting
image onto a plane. Like a radiograph, the DRR shows bony anatomy
and low-density anatomy (airways, lung), providing an internal anatomy
backdrop for segmented organ contours. The DRR is the reference image
against which the IGRT radiograph is compared to make final
adjustments in treatment setup. A BEV of a thoracic tumor is shown in
Figure 2.10.
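A true DRR traces divergent rays from the source point through the CT volume; the sketch below uses a parallel-beam simplification (summing attenuation along one volume axis) purely to show how a radiograph-like image arises from line integrals of attenuation. The attenuation coefficient of water used here is an illustrative value, not a calibrated one.

```python
import numpy as np

def simple_drr(ct_hu, axis=1, mu_water=0.02):
    """Very simplified DRR: parallel-ray line integrals of attenuation.

    ct_hu    : 3D CT volume in Hounsfield units
    axis     : volume axis along which the rays travel (parallel-beam
               approximation; a BEV DRR traces divergent rays instead)
    mu_water : assumed linear attenuation coefficient of water (1/mm)
    Returns a 2D image in which higher values mean more attenuation.
    """
    mu = mu_water * (1.0 + np.asarray(ct_hu, dtype=float) / 1000.0)
    mu = np.clip(mu, 0.0, None)        # air and below contribute ~0
    return mu.sum(axis=axis)            # line integral along each ray
```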

FIGURE 2.10 BEV image of a thoracic tumor. Structure contours are segmented from a CT scan.
Red: primary tumor; violet: nodal areas. OARs include spinal cord (green), esophagus (yellow)
and heart (light blue). Isocenter: green crosshairs. Multileaf collimator vanes are step stair lines
within the rectangular primary jaw opening. The DRR shows bony structures and lung.

Image Registration
Image registration is the process that aligns different image data sets into
a common coordinate system. Reviews of medical image registration
have been published (82,83). We describe common approaches to image
registration and their application to radiation oncology.
Registration can be performed manually, semi-automatically, or fully
automatically. In manual registration, the user visually inspects and
adjusts images interactively, by sliding images over each other, or using
a landmark tool to define matching points. Interactive alignment in its
basic form provides image alignment with three degrees of freedom
(translation). Semi-automatic tools allow for a limited degree of
feedback from the user, such as a starting guess or an incomplete set of
matching points. Fully automated methods can generate a
transformation matrix without operator feedback, but still require
inspection of the final alignment to validate its correctness. Ideally,
image registration defines a one-to-one mapping between the
coordinates of points in one space with a corresponding point in the
second space.
An important use of image registration is in multimodality imaging for
target delineation. Tumor extent may be more visible on MRI, but needs
to be accurately mapped into the CT coordinate system for treatment
planning. Anatomic and fMRI or PET volumes can also be defined and
mapped onto CT scans for planning. Strategies for daily IGRT also rely
on image registration to bring the planned treatment and daily setup
into alignment. Correlation of multiple serial scans of a patient provides
data on movement of internal organs needed to determine appropriate
portal margins. Determining the variation of prostate and seminal vesicle position
during the course of fractionated external beam therapy (84–86) from
serial CT data sets first requires the serial CT coordinate systems to be
aligned relative to each other.

Automated Image Registration


In fully automatic image registration approaches, the user does not
digitize landmarks or contours, or interact with data. The images
themselves provide the necessary information. Automatic image
registration methods are characterized by a transform and cost function.
The transform defines the nature of the mapping between spaces, and is
either linear (i.e., pure translation, rigid, and affine), or nonlinear (B-
spline or free-form). The cost function is a score that defines the relative
quality of a match. The best transform for a given cost function is
selected using an optimizer that searches for the best match.
Automated image registration in radiation oncology often utilizes
maximization of mutual information (or the converse—minimization of
entropy). The utility of mutual information is based on the intuitive
assumption that while corresponding anatomical regions may have
different distributions of intensities in different imaging modalities, the
relationship between those intensities is predictable and consistent. For
example, bony anatomy is bright in a CT scan but dark in MRI; therefore
one could map very bright regions in a CT scan onto very dark regions in
MRI. Mutual information implicitly extends this idea to every location in
the image through its mathematical definition in terms of the joint
probability density function of the matched images.
To illustrate how the joint probability distribution can characterize
image matching, consider the simplified illustration shown in Figure
2.11A, where a volumetric CT scan and MRI scan of the same patient’s
brain are to be matched. In this example, we assume there are only four
tissue types: air, bone, brain, and fat. Air is dark on both MRI and CT;
bone is bright on CT scan and dark on MRI; brain and fat are of
intermediate intensity in both modalities but their relative intensities are
reversed. Brain is brighter on CT and fat is brighter in MRI. Assume
further that there is only one intensity value associated with each tissue
type. If the entire 3D image volumes were perfectly registered, then
every bone CT voxel would map onto a bone MR voxel, brain onto brain,
and so forth. We construct a joint histogram of the CT and MR intensities
in Figure 2.11B, where each point on the plane represents a single
combination of CT and MR intensity. The histogram would contain zeros
everywhere except at the four points corresponding to (CT bone, MR
bone); (CT brain, MR brain); (CT air, MR air); and (CT fat, MR fat).
If the image volumes were to become slightly misaligned, some CT
bone voxels would map onto MR brain voxels, introducing another peak
in the joint histogram. Some MR bone voxels would likewise map onto
CT brain voxels, introducing yet another peak. In fact, some voxels from
each of the four CT tissue classes could map onto all four MR voxel
classes, leading to potentially 16 different peaks. Thus the effect of
misalignment is to reduce the sharpness of the joint histogram. The
sharpness of such a distribution can be expressed in terms of its entropy.
The details of the mathematics are covered elsewhere (87–89). Here we
state that when the images are perfectly registered, knowing that a voxel
has the bone value in the CT scan means that the corresponding voxel in
the MR scan will have the MR bone value, with no uncertainty. In this
case, the joint entropy would be low and the mutual information would be
high. The search for the optimal transformation thus involves maximizing the
mutual information (or, equivalently, minimizing the joint entropy).
Mutual information and its variants have been used to register CT,
MRI, PET, SPECT, and various 2D images (anatomic and functional,
radiographic, and photographic) with themselves and with each other.
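The joint-histogram formulation above translates directly into a short calculation. The sketch below estimates mutual information for two already-resampled images of identical shape; a registration optimizer would evaluate this score repeatedly while varying the transform applied to one image and keep the transform that maximizes it. The bin count and absence of smoothing are simplifications.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two aligned images from their joint histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist / hist.sum()                       # joint probability
    p_a = p_ab.sum(axis=1, keepdims=True)          # marginal of image a
    p_b = p_ab.sum(axis=0, keepdims=True)          # marginal of image b
    nz = p_ab > 0                                  # avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```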
FIGURE 2.11 A: CT and MRI brain scan to be automatically registered. B: Joint intensity
histogram of simplified CT and MRI volumes when perfectly registered. Image intensities for four
tissues considered: air, bone, fat, brain. Misregistration of the scans broadens peaks in histogram.

Deformable Image Registration


An advance of image registration is the development of deformable
image registration (DIR). Applications of DIR include registration of
interfractional patient scans (where scans on different days may show
organ shift and deformation), intrafractional registration (at respiratory
time scale), and multimodality image registration. An overview of DIR
for radiotherapy applications can be found elsewhere (90). Recently a
test data set representing ground truth was used to evaluate and validate
the accuracy of various approaches to DIR (91). The best performing
algorithms mapped landmarks to within a few millimeters of ground
truth.
A useful application of DIR is the segmentation of organs in 4D CT
data, which is impractical without significant computer assistance. With
DIR, it is feasible to propagate contours of organs from one respiratory
phase to another. Figure 2.12 shows such an example. In this example,
the liver was segmented by the physician on the T30 respiratory phase.
To study organ deformation and perform 4D dose calculations, it is
necessary to contour the liver on all 10 phases, a tedious task if
manually performed. With DIR, the vector mapping of each voxel from
one phase to all other phases is calculated. This transformation may then
be applied to the voxels that define the contour, to map the organ
outline to other phases. There are some small discrepancies, but most
contours are acceptable. Editing can be performed to the propagated
contours as needed. A full set of 3D contours at each phase permits
calculation and quantitative assessment of organ trajectories.
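The contour-propagation step can be sketched as sampling the deformation vector field (DVF) at each contour point and adding the displacement, as below. The array layout, coordinate ordering, and the DVF itself are assumptions for illustration; in practice the DVF is produced by the DIR algorithm, and the planning system handles inverse mapping, resampling, and contour clean-up.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def propagate_contour(points_mm, dvf, voxel_size_mm):
    """Map contour points from a reference phase to another phase with a DVF.

    points_mm     : (N, 3) contour points in mm, ordered (z, y, x)
    dvf           : (3, nz, ny, nx) displacement field in mm from reference
                    phase to target phase
    voxel_size_mm : (3,) voxel spacing in mm of the DVF grid
    Displacements are sampled at each point by trilinear interpolation and
    added to the point coordinates.
    """
    pts = np.asarray(points_mm, dtype=float)
    idx = (pts / np.asarray(voxel_size_mm)).T          # voxel-index coordinates, shape (3, N)
    disp = np.stack([map_coordinates(dvf[c], idx, order=1) for c in range(3)], axis=1)
    return pts + disp
```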
FIGURE 2.12 Four-dimensional liver segmented by deformable registration. The liver was
manually contoured in 3D at the T30 respiratory phase; deformable registration was applied to map
the segmented contours to all other respiratory phases. Accuracy is best near T30, with a few
millimeter inaccuracies at phases farther from reference phase.

Volume Visualization
An alternative to ring-stack or surface display of anatomy is volume
visualization, initially described in the computer science literature (92).
In volume rendering, the opacity and hue of a voxel of the 3D image
data set can be interactively set to be a function of its CT number.
Volume-rendered displays have been used in treatment planning for
radiotherapy (93) and in radiology (94).
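As a simple illustration of mapping CT number to opacity, the sketch below implements a linear ramp transfer function; real volume renderers use richer, user-editable transfer functions that also assign hue and often emphasize gradients (interfaces), as described in the text. The window values are illustrative.

```python
import numpy as np

def opacity_transfer(hu, window=(-500, 300)):
    """Map CT numbers to opacity with a linear ramp transfer function.

    hu     : array of Hounsfield units
    window : (low, high) HU range over which opacity ramps from 0 to 1;
             e.g., rendering lung parenchyma transparent while bone stays opaque.
    """
    lo, hi = window
    return np.clip((np.asarray(hu, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
```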
Figure 2.13 is a volume rendering of CT data. The image is of a patient
with a lung tumor, and the data rendered are from a high-quality
treatment planning scan (0.5-mm slice thickness, 256 slice scanner).
Low-density lung parenchyma is rendered transparent, and the
tracheobronchial tree is visualized along with bony anatomy. The
visualization software used displays regions of high gradients in HU that
essentially display interfaces. Therefore, the contents of organs (e.g.,
heart) are not visible.
An advantage of volumetric visualization in radiotherapy planning is
that this technique can display anatomic detail not normally segmented.
Nerves, vessels, and lymph nodes are difficult to identify and laborious
to segment on axial cuts. Yet these structures often may be directly seen
in a volumetric rendering from a selected BEV. The hypothesis is that
visualization of these structures may help in aperture design of the
clinical target volume.
FIGURE 2.13 Volume visualization of a lung tumor: Lung parenchyma is rendered transparent.
The tracheobronchial tree is visible as are surface interfaces of bony anatomy and mediastinum.
Scan: 1-mm slice thickness performed on a 256 slice CT scanner.

FIGURE 2.14 Prototype visualization combines 4D volume rendered CT data with additional
quantitative parameters overlaid. Key: brown, longer RPL; green, shorter RPL. A: The skin
surface with overlaid radiological path length color map from skin to distal PTV. The skin
surface/air interface is selected for display in grey level. The image can be animated to show RPL
variation during breathing. B: Left anterior oblique view indicates internal RPL between the chest
wall/lung interface and the proximal surface of the PTV. The brown region indicates the beam
grazes heart, leading to an undesirable steep compensator gradient. C: Changing to a more
oblique angle avoids this, resulting in more uniform compensator geometry. D: Overshoot image,
the amount by which each ray at chosen beam angle overshoots the distal PTV during respiration.
Green indicates overshoot is <3 mm. Dark brown areas indicate greater overshoot (>1 cm) as the
tumor moves during respiration. PTV includes upper airways, which also move during respiration,
resulting in beam overshoot. Visualizations help quantify variation of beam penetration in dynamic
situations.
A difficulty of volume visualization is that a great deal of anatomy, including overlying
tissue, is displayed, much of it not relevant to the planning task. Methods to display only
the relevant anatomy from a given beam perspective are needed. Interactive tools capable of
selectively peeling away the tissues obscuring the volume of interest must be incorporated
into these techniques.

Scientific Visualization Challenges


We can easily be overwhelmed by the gigabytes of multimodality
image data generated by 4D volumetric imaging. Additional data arise
from dosimetry and imaging during treatment planning (4D dose),
image-guided treatment, and follow-up. The challenge we face
is articulated by Johnson (95), who addressed the major unsolved
problems in scientific visualization. Several of these unsolved problems are
relevant to the visualization of radiotherapy image data, including
(a) visualization of uncertainty, (b) displaying time-dependent data sets,
(c) visualizing multidimensional data (beyond 4D anatomy, dose, etc.),
and (d) quantitatively establishing the value of scientific visualization to
decision making.
We provide an example of how advanced visualization techniques
might be used in radiation oncology. One possibility is interactive
display of 4D anatomy overlaid with quantitative data. Because charged
particle beam therapy is particularly sensitive to inhomogeneities, we use
a 4D CT data set to illustrate features of a quantitative n-dimensional
visualization problem (96). Figure 2.14 shows a series of images that
extract hidden information and display it in an intuitive way. Consider
the need to understand the radiological path length (RPL) associated with
proton irradiation of a lung tumor. As detailed in the caption, we use
volume rendering on the fly to quantify the RPL changes during 4D
motion of a lung tumor, and explore how graphics and visualization can
aid in extending the BEV concept beyond geometric relationships. The
quantities we ultimately wish to display include not only the RPL as a
function of BEV but also treatment uncertainty.
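As a minimal numerical illustration of the RPL concept (not of the visualization system described above), the sketch below integrates assumed relative densities along a single beam's-eye-view ray; a proton planning system would instead sample relative stopping powers from the 4D CT with proper interpolation.

```python
import numpy as np

def radiological_path_length(rel_density_along_ray, step_mm):
    """Water-equivalent (radiological) path length along one BEV ray.

    rel_density_along_ray : relative densities (water = 1.0) sampled at
                            equally spaced points along the ray.
    step_mm               : sampling step in mm.
    RPL = sum(rho_rel * step). Values here are illustrative only.
    """
    return float(np.sum(rel_density_along_ray) * step_mm)

# Illustrative ray: 20 mm chest wall (rho ~ 1.0), 60 mm lung (rho ~ 0.25),
# then 30 mm of tumor (rho ~ 1.0), sampled every 1 mm.
samples = np.concatenate([np.full(20, 1.0), np.full(60, 0.25), np.full(30, 1.0)])
print(f"RPL = {radiological_path_length(samples, 1.0):.0f} mm water-equivalent")
```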

KEY POINTS
• Multimodality volumetric imaging is the basis of modern
radiotherapy. Imaging modalities provide not only
geometric/anatomic information for treatment planning and delivery,
but important information on organ function and tumor physiology
that may be useful in designing the target and avoiding critical
structures.
• Data sets undergo extensive image processing, including
segmentation, to build 3D/4D models of the patient. The
anatomical representations provide insight into beam portal design
to optimize dose delivery.
• Images are becoming an integral part of target alignment,
especially for conformal treatments that promise high conformality
of dose (IMRT, charged particle therapy, stereotactic radiosurgery).
Daily target confirmation is becoming more common.
• With vast amounts of image data available to the radiation
oncologist, new approaches to segmentation and display are
needed. Some possibilities include volume rendering in real time
and powerful graphics for scientific visualization.

QUESTIONS
1. Which of the following provides information to estimate the electron
density of each voxel needed for dose calculations?
A. MRI
B. Ultrasound
C. CT
D. SPECT
E. None of the above.
2. Functional MRI is used to locate cognitive and eloquent areas of
the brain. The detection mechanism involves measuring which of
the following?
A. Proton density
B. Reflectivity of RF from interfaces
C. Attenuation coefficient
D. Oxygen in hemoglobin
E. SUV
3. PET has been used to determine tumor hypoxic regions through
which of the following?
A. Measurement of SUV
B. Using 18F-FDG
C. Cu-ATSM
D. Electron spin resonance.
4. Organ motion over a 1-hour interval is best measured through
which of the following types of imaging?
A. MRI
B. CT
C. PET/CT
D. Cone-beam CT
E. Fluoroscopy
5. Which of the following best completes this statement: Image
registration by maximization of mutual information
A. involves defining corresponding points and surfaces to be
matched.
B. involves interactive image alignment.
C. minimizes entropy.
D. is only applicable to deformed organs.
E. is not clinically used.

ANSWERS
1. C
2. D
3. C
4. A
5. C

REFERENCES
1. Munzenrider JE, Pilepich M, Rene-Ferrero JB. Use of a body scanner
in radiotherapy treatment planning. Cancer. 1977;40:170–179.
2. Boyer AL, Butler EB, DiPetrillo TA, et al. Intensity-modulated
radiotherapy: Current status and issues of interest. Int J Radiat Oncol
Biol Phys. 2001;51(4):880–914.
3. Chang BK, Timmerman RD. Stereotactic body radiation therapy: a
comprehensive review. Am J Clin Oncol. 2007;30(6):637–644.
doi:10.1097/COC.0b013e3180ca7cb1.
4. Loeffler JS, Smith AR, Suit HD. The potential role of proton beams in
radiation oncology. Semin Oncol. 1997;24(6):686–695.
5. Wolbarst AB, Capasso P, Wyant AR. Medical Imaging: Essentials for
Physicians. Wiley Online Library; 2013.
6. Bushong SC. Radiologic Science for Technologists: Physics, Biology and
Protection. St. Louis, MO: Mosby; 1993.
7. Rosenow UF. Notes on the legacy of the Roentgen rays. Med Phys.
1995;22:1855–1868.
8. Siebert JA. One hundred years of medical technology. Health Phys.
1995;69:695–720.
9. Simpson DR, Lawson JD, Nath SK, et al. Utilization of advanced
imaging technologies for target delineation in radiation oncology. J
Am Coll Radiol JACR. 2009;6(12):876–883.
doi:10.1016/j.jacr.2009.08.006.
10. Pan X, Siewerdsen J, La Riviere P, et al. Anniversary Paper:
Development of x-ray computed tomography: The role of Medical
Physics and AAPM from the 1970s to present. Med Phys.
2008;35(8):3728–3739. doi:10.1118/1.2952653.
11. Schneider W, Bortfeld T, Schlegel W. Correlation between CT
numbers and tissue parameters needed for Monte Carlo simulations
of clinical dose distributions. Phys Med Biol. 2000;45(2):459–478.
12. Hünemohr N, Paganetti H, Greilich S, et al. Tissue decomposition
from dual energy CT data for MC based dose calculation in particle
therapy. Med Phys. 2014;41(6):061714. doi:10.1118/1.4875976.
13. Spitzer V, Ackerman MJ, Scherzinger AL, et al. The visible human
male: A technical report. J Am Med Inform Assoc. 1996;3(2):118–
130.
14. Olsen DR, Thwaites DI. Now you see it… Imaging in radiotherapy
treatment planning and delivery. Radiother Oncol. 2007;85(2):173–
175. doi:10.1016/j.radonc.2007.11.001.
15. Gao Z, Wilkins D, Eapen L, et al. A study of prostate delineation
referenced against a gold standard created from the visible human
data. Radiother Oncol. 2007;85(2):239–246.
doi:10.1016/j.radonc.2007.08.001.
16. Barrett JF, Keat N. Artifacts in CT: recognition and avoidance.
Radiographics. 2004;24(6):1679–1691. doi:10.1148/rg.246045065.
17. Balter JM, Ten Haken RK, Lawrence TS, et al. Uncertainties in CT-
based radiation therapy treatment planning associated with patient
breathing. Int J Radiat Oncol Biol Phys. 1996;36(1):167–174.
18. Chen GT, Kung JH, Beaudette KP. Artifacts in computed
tomography scanning of moving objects. Semin Radiat Oncol.
2004;14(1):19–26.
19. Vedam SS, Keall PJ, Kini VR, et al. Acquiring a four-dimensional
computed tomography dataset using an external respiratory signal.
Phys Med Biol. 2003;48(1):45–62. doi:10.1088/0031–
9155/48/1/304.
20. Ford EC, Mageras GS, Yorke E, et al. Respiration-correlated spiral
CT: a method of measuring respiratory-induced anatomic motion for
radiation treatment planning. Med Phys. 2003;30(1):88–97.
21. Low DA, Nystrom M, Kalinin E, et al. A method for the
reconstruction of four-dimensional synchronized CT scans acquired
during free breathing. Med Phys. 2003;30(6):1254–1263.
22. Pan T, Lee TY, Rietzel E, et al. 4D-CT imaging of a volume
influenced by respiratory motion on multi-slice CT. Med Phys.
2004;31(2):333–340.
23. Rietzel E, Pan T, Chen GT. Four-dimensional computed tomography:
image formation and clinical protocol. Med Phys. 2005;32(4):874–
889.
24. Rietzel E, Chen GT, Choi NC, et al. Four-dimensional image-based
treatment planning: Target volume segmentation and dose
calculation in the presence of respiratory motion. Int J Radiat Oncol
Biol Phys. 2005;61(5):1535–1550.
25. Li T, Schreibmann E, Thorndyke B, et al. Radiation dose reduction
in four-dimensional computed tomography. Med Phys.
2005;32(12):3650–3660.
26. Ross CS, Hussy DH, Pennington EC. Analysis of movement in
intrathoracic neoplasm using ultrafast computerized tomography.
Int J Radiat Oncol Biol Phys. 1990;18:671–677.
27. Wong JW, Sharpe MB, Jaffray DA, et al. The use of active breathing
control (ABC) to reduce margin for breathing motion. Int J Radiat
Oncol Biol Phys. 1999;44(4):911–919.
28. Balter JM, McGinn CJ, Lawrence TS, et al. Improvement of CT-
based treatment-planning models of abdominal targets using static
exhale imaging. Int J Radiat Oncol Biol Phys. 1998;41(4):939–943.
29. Frank SJ, Kudchadker RJ, Kuban DA, et al. A volumetric trend
analysis of the prostate and seminal vesicles during a course of
intensity-modulated radiation therapy. Am J Clin Oncol.
2010;33(2):173–175. doi:10.1097/COC.0b013e3181a31c1a.
30. Jaffray DA, Siewerdsen H. Cone-beam computed tomography with
flat panel imager: initial performance characterization. Med Phys.
2000;27:1311–1323.
31. Sonke J-J, Zijp L, Remeijer P, et al. Respiratory correlated cone
beam CT. Med Phys. 2005;32(4):1176–1186.
32. Pouliot J, Bani-Hashemi A, Chen J, et al. Low-dose megavoltage
cone-beam CT for radiation therapy. Int J Radiat Oncol Biol Phys.
2005;61(2):552–560.
33. Mackie TR, Kapatoes J, Ruchala K, et al. Image guidance for precise
conformal radiotherapy. Int J Radiat Oncol Biol Phys. 2003;56(1):89–
105.
34. Adler JR, Chang SD, Murphy MJ, et al. The Cyberknife: a frameless
robotic system for radiosurgery. Stereotact Funct Neurosurg. 1997;
69(1–4 Pt 2):124–128.
35. Thornton AF, Sandler HM, Ten Haken RK, et al. The clinical utility
of MRI in 3-dimentional treatment planning of brain neoplasms. Int
J Radiat Oncol Biol Phys. 1992;24:767–775.
36. Korsholm, ME, Waring, LW, Edmund, JM. A criterion for the
reliable use of MRI-only radiotherapy. Radiat Oncol. 2014;9(16):1–7.
37. Jansen EPM, Dewit LGH, van Herk M, et al. Target volumes in
radiotherapy for high-grade malignant glioma of the brain.
Radiother Oncol. 2000;56(2):151–156. doi:10.1016/S0167–
8140(00)00216–4.
38. Berberat J, McNamara J, Remonda L, et al. Diffusion tensor imaging
for target volume definition in glioblastoma multiforme. Strahlenther
Onkol. 2014;190(10):939–943. doi:10.1007/s00066–014–0676–3.
39. Feng M, Balter JM, Normolle D, et al. Characterization of pancreatic
tumor motion using cine MRI: surrogates for tumor position should
be used with caution. Int J Radiat Oncol Biol Phys. 2009;74(3):884–
891. doi:10.1016/j.ijrobp.2009.02.003.
40. Padhani AR, Khoo VS, Suckling J, et al. Evaluating the effect of
rectal distension and rectal movement on prostate gland position
using cine MRI. Int J Radiat Oncol. 1999;44(3):525–533.
doi:10.1016/S0360–3016(99)00040–1.
41. Ghilezan MJ, Jaffray DA, Siewerdsen JH, et al. Prostate gland
motion assessed with cine-magnetic resonance imaging (cine-MRI).
Int J Radiat Oncol Biol Phys. 2005;62(2):406–417.
42. Orrison WW, Lewine JD, Sanders JA, et al. Functional Brain
Imaging. St. Louis, MO: Mosby-Year Book; 1995.
43. Nakajima T, Fujita M, Watanabe H, et al. Functional mapping of the
human visual system with near-infrared spectroscopy and BOLD
functional MRI. In: Society of Magnetic Resonance Medicine. San
Francisco; 1994.
44. Cao Y, Towel V, Levin D, et al. Functional mapping of human motor
cortical activation with conventional MR imaging at 1.5 T. J Magn
Reson Imag. 1993;3:869–871.
45. Hamilton RJ, Sweeney PJ, Pelizzari CA, et al. Functional imaging in
treatment planning of brain lesions. Int J Radiat Oncol Biol Phys.
1997;37(1):181–188.
46. Nelson SJ. Multivoxel magnetic resonance spectroscopy of brain
tumors. Mol Cancer Ther. 2003;2(5):497–507.
47. Arrayeh E, Westphalen AC, Kurhanewicz J, et al. Does local
recurrence of prostate cancer after radiation therapy occur at the
site of primary tumor? Results of a longitudinal MRI and MRSI
study. Int J Radiat Oncol Biol Phys. 2012;82(5):e787–e793.
doi:10.1016/j.ijrobp.2011.11.030.
48. Narayana A, Chang J, Thakur S, et al. Use of MR spectroscopy and
functional imaging in the treatment planning of gliomas. Br J Radiol.
2007;80(953):347–354. doi:10.1259/bjr/65349468.
49. Ling CC, Humm J, Larson S, et al. Towards multidimensional
radiotherapy (MD-CRT): biological imaging and biological
conformality. Int J Radiat Oncol Biol Phys. 2000;47(3):551–560.
50. Shih HA, Harisinghani M, Zietman AL, et al. Mapping of nodal
disease in locally advanced prostate cancer: rethinking the clinical
target volume for pelvic nodal irradiation based on vascular rather
than bony anatomy. Int J Radiat Oncol Biol Phys. 2005;63(4):1262–
1269. doi:10.1016/j.ijrobp.2005.07.952.
51. Mutic S, Dempsey JF. The ViewRay system: magnetic resonance-
guided and controlled radiotherapy. Semin Radiat Oncol.
2014;24(3):196–199. doi:10.1016/j.semradonc.2014.02.008.
52. Lagendijk JJ, Raaymakers BW, Raaijmakers AJ, et al. MRI/linac
integration. Radiother Oncol. 2008;86(1):25–29.
doi:10.1016/j.radonc.2007.10.034.
53. Das SK, Ten Haken RK. Functional and molecular image guidance in
radiotherapy treatment planning optimization. Semin Radiat Oncol.
2011;21(2):111–118. doi:10.1016/j.semradonc.2010.10.002.
54. Bradley J, Thorstad WL, Mutic S, et al. Impact of FDG-PET on
radiation therapy volume delineation in non-small-cell lung cancer.
Int J Radiat Oncol Biol Phys. 2004;59(1):78–86.
doi:10.1016/j.ijrobp.2003.10.044.
55. Grosu A-L, Piert M, Weber WA, et al. Positron emission tomography
for radiation treatment planning. Strahlenther Onkol. 2005;
181(8):483–499. doi:10.1007/s00066–005–1422–7.
56. MacManus M, Nestle U, Rosenzweig KE, et al. Use of PET and
PET/CT for radiation therapy planning: IAEA expert report 2006–
2007. Radiother Oncol. 2009;91(1):85–94.
doi:10.1016/j.radonc.2008.11.008.
57. Chao KS, Bosch WR, Mutic S, et al. A novel approach to overcome
hypoxic tumor resistance: Cu-ATSM-guided intensity-modulated
radiation therapy. Int J Radiat Oncol Biol Phys. 2001;49(4):1171–
1182.
58. Townsend DW, Beyer T, Blodgett TM. PET/CT scanners: a hardware
approach to image fusion. Semin Nucl Med. 2003;33(3):193–204.
59. Abdelnour AF, Nehmeh SA, Pan T, et al. Phase and amplitude
binning for 4D-CT imaging. Phys Med Biol. 2007;52(12):3515–3529.
doi:10.1088/0031–9155/52/12/012.
60. Nehmeh SA, Erdi YE, Pan T, et al. Four-dimensional PET/CT
imaging of the thorax. Med Phys. 2004;31(12):3179–3186.
61. Therasse P, Eisenhauer EA, Verweij J. RECIST revisited: a review of
validation studies on tumour assessment. Eur J Cancer.
2006;42(8):1031–1039. doi:10.1016/j.ejca.2006.01.026.
62. Shankar LK, Hoffman JM, Bacharach S, et al. Consensus
recommendations for the use of 18F-FDG PET as an indicator of
therapeutic response in patients in National Cancer Institute Trials. J
Nucl Med. 2006;47(6):1059–1066.
63. Lattanzi J, McNeeley S, Pinover W, et al. A comparison of daily CT
localization to a daily ultrasound-based system in prostate cancer.
Int J Radiat Oncol Biol Phys. 1999;43(4):719–725.
64. Langen KM, Pouliot J, Anezinos C, et al. Evaluation of ultrasound-
based prostate localization for image-guided radiotherapy. Int J
Radiat Oncol Biol Phys. 2003;57(3):635–644.
65. Simpson DR, Lawson JD, Nath SK, et al. A survey of image-guided
radiation therapy use in the United States. Cancer.
2010;116(16):3953–3960. doi:10.1002/cncr.25129.
66. Conner WG, Boone ML, Veomett R, et al. Patient repositioning and
motion detection using a video cancellation system. Int J Radiat
Oncol Biol Phys. 1975;1:147–153.
67. Bert C, Metheany KG, Doppke K, et al. A phantom evaluation of a
stereo-vision surface imaging system for radiotherapy patient setup.
Med Phys. 2005;32(9):2753–2762.
68. Bert C, Metheany KG, Doppke KP, et al. Clinical experience with a
3D surface patient setup system for alignment of partial-breast
irradiation patients. Int J Radiat Oncol Biol Phys. 2006;64(4):1265–
1274. doi:10.1016/j.ijrobp.2005.11.008.
69. Rong Y, Walston S, Welliver MX, et al. Improving intra-fractional
target position accuracy using a 3D surface surrogate for left breast
irradiation using the respiratory-gated deep-inspiration breath-hold
technique. PloS One. 2014;9(5):e97933.
doi:10.1371/journal.pone.0097933.
70. Berthelsen AK, Dobbs J, Kjellén E, et al. What’s new in target
volume definition for radiologists in ICRU Report 71? How can the
ICRU volume definitions be integrated in clinical practice? Cancer
Imaging. 2007;7:104–116. doi:10.1102/1470-7330.2007.0013.
71. ICRU 83 Prescribing, Recording and Reporting Intensity-Modulated
Photon-Beam Therapy (IMRT); 2010.
72. ICRU 78 Prescribing, Recording and Reporting Proton-Beam Therapy.
ICRU; 2007.
73. ICRU 50 Prescribing, Recording, and Reporting Photon Beam Therapy.
ICRU; 1993.
74. ICRU 62 Prescribing, Recording and Reporting Photon Beam Therapy
Supplement to ICRU Report 50; 1999.
75. Ontario Cancer Care. Contouring Nomenclature Recommendation
Report; 2014.
https://www.cancercare.on.ca/common/pages/UserFile.aspx?fileId=300038.
76. Santanam L, Hurkmans C, Mutic S, et al. Standardizing naming
conventions in radiation oncology. Int J Radiat Oncol Biol Phys.
2012;83(4):1344–1349.
77. Pham DL, Xu C, Prince JL. Current methods in medical image
segmentation. Annual Review of Biomedical Engineering. Vol. 2.
2000:315–337.
78. Sharp G, Fritscher KD, Pekar V, et al. Vision 20/20: perspectives on
automated image segmentation for radiotherapy. Med Phys.
2014;41(5):050902. doi:10.1118/1.4871620.
79. Leunens G, Menten J, Weltens C. Quality assessment of medical
decision making in radiation oncology: variability in target volume
delineation for brain tumors. Radiother Oncol. 1993;29:169–175.
80. McShan DL, Silverman A, Lanza DN, et al. A computerized three-
dimensional treatment planning system utilizing interactive color
graphics. Br J Radiol. 1979;52:478–481.
81. Goitein M, Abrams M, Rowell D, et al. Multidimensional treatment
planning: 2. Beam’s eye view, back projection, and projection
through CT sections. Int J Radiat Oncol Biol Phys. 1983;9:789–797.
82. Maurer CR, Fitzpatrick JM. A review of medical image registration.
In: Maciunas RJ, ed. Interactive Image Guided Neurosurgery. Park
Ridge, IL: AAN; 1993:17–44.
83. Maintz JB, Viergever MA. A survey of medical image registration.
Med Image Anal. 1998;2:1–36.
84. Beard CJ, Kijewski P, Bussière M, et al. Analysis of prostate and
seminal vesicle motion: implications for treatment planning. Int J
Radiat Oncol Biol Phys. 1993;27(Suppl 1):136.
85. Roeske JC, Forman JD, Messina CF, et al. Evaluation of changes in
the size and location of the prostate, seminal vesicle, bladder and
rectum during a course of external beam radiation therapy. Int J
Radiat Oncol Biol Phys. 1995;33:1321–1329.
86. Melian E, Mageras GS, Fuks Z, et al. Variation in prostate position
quantitation and implications for three-dimensional conformal
treatment planning. Int J Radiat Oncol Biol Phys. 1997;38(1):73–81.
87. Wells WM, Viola P, Kikinis R. Multi-modal volume registration by
maximization of mutual information. Second Annual Int Symposium
on Medical Robotics and Computer Assisted Surgery. John Wiley &
Sons; 1995:55–62.
88. Viola P. Multi-modality image registration by maximization of
mutual information. 1995.
89. Pluim JP, Maintz JB, Viergever MA. Mutual information based
registration of medical images: a survey. IEEE Trans Med Imaging.
2003;22(8):968–1003.
90. Sarrut D. Deformable registration for image-guided radiation
therapy. Z Für Med Phys. 2006;16(4):285–297.
91. Sharp G, Peroni M, Li R, et al. Evaluation of Plastimatch B-Spline
Registration on the EMPIRE10 Data Set. Massachusetts General
Hospital; 2010. http://empire10.isi.uu.nl/staticpdf/article_mgh.pdf.
92. Drebin R, Carpenter L, Hanrahan P. Volume rendering. Comput
Graph. 1988;22:65–74.
93. Pelizzari CA, Ryan MJ, Grzeszczuk R, et al. Volumetric visualization
of anatomy for treatment planning. Int J Radiat Oncol Biol Phys.
1995;34:205–212.
94. Kuszyk BS, Ney DR, Fishman EK. The current state of the art in
three dimensional oncologic imaging: an overview. Int J Radiat
Oncol Biol Phys. 1995;33:1029–1040.
95. Johnson C. Top scientific visualization research problems. IEEE
Comput Graph Appl. 2004;24(4):13–17.
96. Hasan M. Shadie: A Domain-Specific Language for Volume
Rendering. Boston, MA: Massachusetts General Hospital; 2010.
miloshasan.net/Shadie/shadie.pdf.
3 Treatment Simulation

Dimitris N. Mihailidis and Niko Papanikolaou

INTRODUCTION
Treatment planning is one of the most crucial processes of radiotherapy,
through which the most appropriate way to irradiate the patient is
determined. The process is composed of several important steps, such as:

1. Reproducible patient positioning and immobilization;


2. Accurate identification of the location and shape of the tumor and
neighboring critical organs;
3. Selecting the most appropriate beam arrangement;
4. Computing the doses to be delivered and evaluation of resulting dose
distributions;
5. Transfer of the treatment planning information to the treatment
delivery system.

In today's radiotherapy, 3-dimensional (3D) visualization of patient anatomy
and target definition enable planning that conforms the dose to the target
volume, delivering doses as high as possible while avoiding the critical
organs. To achieve this, a process called treatment simulation must be
performed. In essence, treatment simulation is a combination of, or requires
that, steps 1 to 3 (mentioned above) have been performed successfully. There
are several ways to perform a simulation, each with a different level of
complexity. The most common ones are:
1. When the clinical treatment volume can be determined via simple
radiographic and fluoroscopic images from a traditional radiotherapy
simulator (1), sometimes called the anatomical approach (clinical
setup).
2. When only a limited number of transverse computed tomography (CT)
images are used for target delineation, along with radiographic
planar images (as above), in order to complete the treatment planning,
sometimes called the traditional approach.
3. When simulation can be performed on a CT scanner via special
computer software that provides a full 3D patient representation in
the treatment planning position, a process called CT-simulation. A
complete treatment planning strategy can then be designed, a process
referred to as virtual simulation (1,2).
Radiotherapy simulation is a very important process on which
treatment planning and treatment delivery are based.
The accuracy of the entire radiotherapy treatment can be influenced by
the quality of treatment simulation on a patient-by-patient basis.

SIMULATION METHODS
Treatment simulation can also be thought of as a "feasibility study" of
the patient treatment strategy. Technologic advances in medical imaging
and computing have brought great improvements and vastly expanded
capabilities to the simulation process.
We will describe the three most common methods of radiotherapy
simulation today, which strongly depend on the treatment strategy that
will be followed for the patient.

Simple Simulation—Anatomical and Traditional Approach


When the patient must be prepared for treatment in a short
amount of time, or when a simple treatment is to be delivered, a
conventional simulation can be performed. In this case, a radiographic
simulator, which operates in both radiographic and fluoroscopic
modes, is used (Fig. 3.1A,B). It is an apparatus that uses a
diagnostic x-ray tube with an image intensifier (Fig. 3.1C) but duplicates
the radiotherapy treatment unit in terms of its geometric, mechanical,
and optical properties (3–5).
The patient is set up and immobilized on the simulator the same way
that she will be treated in the treatment unit. The clinical borders of the
treatment area are marked on the patient's skin by the physician, and
radio-opaque markers are placed on the skin at these borders. The selection
of the treatment isocenter is done via fluoroscopic imaging of the area,
typically with two orthogonal reference views, anterior (AP) and lateral
(LAT) (Fig. 3.2). Upon selection of the isocenter, two orthogonal
radiographic films (or digital images) are produced for further use and
comparison with the treatment setup, and documentation purposes.
The beams that will be used for treatment are then simulated so that they
can be geometrically optimized for the treatment site by
selecting gantry and collimator angles, treatment field sizes, and so
forth, all in relationship to externally placed markers and internal bony
landmarks. A crucial step is the proper marking of the patient, such as the
isocenter (as a "3-point" marking) and alignment skin marks, using the
simulator laser system in all planes. At the same time, other information
needed for setup and dosimetry is collected, such as
source-to-surface distances of the simulated treatment fields, patient
thickness, the time-set or monitor unit calculation point
relative to the isocenter (if half-blocked or heavily blocked fields are
used, as in Figure 3.2), simple contours of the patient surface (taken with
contour makers) through points of dosimetric interest, and evaluation
data for bolus or compensators.
FIGURE 3.1 Typical radiotherapy simulator. Patient setup represented by a phantom (A), room
view (B), and geometric diagram (C).

Some simulators have a tomographic attachment (simulator-CT) that


analyzes and reconstructs the images from the image intensifier using
either analog or digital processing (6). The quality of the reconstructed
image is inferior to the CT-based simulation. However, it is adequate for
acquiring patient contours and identifying bony landmarks. The
simulator-CT does provide a volumetric reconstruction of the patient’s
anatomy and as such could be used as a basic image data set in
treatment planning. The reduced spatial resolution of such volumetric
imaging renders this technique unsuitable for high precision conformal
radiotherapy planning where a series of many thin slices is required for
detailed volume reconstruction (7,8).

FIGURE 3.2 Typical simulation portal, lateral view for a head and neck treatment. The blocking is
represented by the black marker outlines on the film and the prescription point is denoted as “Calc
Point,” which is off-axis related to the half-blocked central-axis.

Interestingly, the concept of simulator-CT has recently re-emerged and


is referred to as cone beam CT (CBCT). Cone beam imaging is currently
used in the context of image-guided radiotherapy (IGRT) for daily
patient localization and setup verification prior to treatment. Those
images could also be used for patient re-planning in the context of
adaptive radiotherapy, although for the time being this is only a research
application. However, it is expected that once the image quality,
imaging dose to the patient, and the speed of re-planning improve, the
CBCT adaptive radiotherapy will become an integral part of advanced
radiotherapy.
CBCT imaging can be obtained using the kV imaging system of a linac
(Varian, Elekta) or the MV beam itself (Tomotherapy, Siemens).
Regardless of the implementation, CBCT is similar to the simulator-CT and
suffers from the same limitations.

Verification Simulation
This is a simulation approach “positioned” between the previously
described approach and the virtual simulation that will be described
next. This process starts by immobilizing the patient in the treatment
planning position with all the necessary devices, this time on the CT
scanner flat table-top. In this case, there is no laser localization system
available in the CT room. A standard treatment planning study of the
patient will be obtained throughout the clinical area, after radio-opaque
markers are placed by the physician on the patient’s skin. The simulation
team will need to place "3-point" tattoos or another type of long-lasting
marks on the patient's skin. For CT purposes, the "3-point" locations
and the treatment area borders will be visible on the CT images. Patient
scans are typically obtained in axial mode with a 3- to 5-mm slice thickness.
A smaller slice thickness can be used for small areas when higher
resolution and accuracy are necessary.
The CT images are reviewed and then imported to a treatment
planning system where a computer simulation will be done off-line. The
physician will define the volumes of interest and the isocenter might be
adjusted to accommodate the target volume extensions. The coordinates
of the treatment isocenter can be referenced to the original “3-point”
location marked in the CT room. Next, the remaining treatment planning
process is completed and the plan gets finalized. Two orthogonal (AP &
LAT) digitally reconstructed radiographs (DRRs) (9,10) will be produced
at the original CT point and the new isocenter (Fig. 3.3). DRRs of the
treatment fields will also be produced.
A verification simulation is scheduled in the conventional simulator,
where the patient is immobilized and set up again in the treatment
planning position. The patient is then simulated according to the
approved treatment plan. A sample simulation form is shown in Figure
3.4, where all the appropriate shifts from the original CT marks to the
final isocenter are implemented. An orthogonal set of setup ports, first at
the original CT point (“3-point” mark) and then at the treatment
isocenter, will assure the proper localization when compared to the
DRRs at the same locations. The patient will be marked appropriately to
ensure reproducibility of setup during treatment. Further on, additional
ports of the treatment fields can be obtained to increase the accuracy of
the simulation setup and for documentation purposes. The orthogonal
and treatment ports will be compared to portal images or portal films in
the treatment room, especially on the first day of treatment. A diagram
of the verification simulation process is shown in Figure 3.5A.

CT-Simulator and Virtual Simulation


This is an exciting development in the area of simulation because it
converts a CT (or other scanning modality) scanner into a simulator
(1,2,11–13). Both patient and treatment unit are virtual, the patient
being represented by CT images and the treatment unit by model beam
geometry and expected dose distributions. The simulation film is
replaced by the DRRs. The DRRs are generated from the CT scan data by
mapping average CT values computed along lines drawn from a “virtual
source” to the location of the “virtual film.” A DRR is essentially a
calculated port film that serves as a simulator film, which contains all
the useful anatomical information of the patient (Fig. 3.6). A dedicated
radiation therapy CT scanner, with the above described virtual
simulation software and simulation accessories (e.g., flat table-top,
immobilization devices, etc.), is called a CT-simulator (Fig. 3.7A). In
addition, CT-simulators are equipped with high-precision movable laser
systems to mark the isocenter location on the patient during the virtual
simulation process. The laser system is mounted on fixed pedestals on
the floor and ceiling as shown in Figure 3.7B.
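A drastically simplified sketch of DRR formation follows: parallel rays (rather than divergent rays from the virtual source to the virtual film), a crude HU-to-attenuation conversion, and a nominal attenuation coefficient are all assumptions made only to show the line-integral idea, not the algorithm of any particular system.

```python
import numpy as np

def simple_drr(ct_volume_hu, mu_water=0.02):
    """Very simplified DRR: parallel rays cast along one axis of a CT volume.

    ct_volume_hu : (nx, ny, nz) array of CT numbers; rays travel along axis 1.
    mu_water     : nominal linear attenuation coefficient of water per voxel
                   length (illustrative value).
    A clinical DRR traces divergent rays from the virtual source to each
    virtual-film pixel and interpolates the volume along each ray.
    """
    mu = mu_water * (1.0 + np.asarray(ct_volume_hu, dtype=float) / 1000.0)
    mu = np.clip(mu, 0.0, None)            # air and below contribute nothing
    line_integral = mu.sum(axis=1)         # accumulate along the ray direction
    return 1.0 - np.exp(-line_integral)    # brighter where attenuation is higher

# Example: a 64^3 water-equivalent cube (0 HU) with a 'bone' insert (+1000 HU).
vol = np.zeros((64, 64, 64))
vol[20:30, 20:44, 28:36] = 1000.0
image = simple_drr(vol)
```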
FIGURE 3.3 Breast patient CT image with two orthogonal reference fields at the treatment
isocenter (GREEN point) (A), 3D reconstruction of the imaged patient area with the reference fields
(B), LAT reference field to the treatment isocenter (C), and AP reference field to the isocenter (D).
The RED point is the original CT point.

Modern radiotherapy CT-simulators are based on the most recent CT
scanner technology, with multislice (multidetector) capability,
axial and helical scanning modes, rapid CT acquisition times,
high image quality, and a wide bore (>75 cm diameter)
to accommodate the patient immobilization devices. Further on, an
option called gating allows the scanner to perform “motion-correlated”
scanning, a process called 4-dimensional (4D) CT (the fourth being the
time information), useful for accurate treatment planning on moving
anatomy (e.g., respiratory motion in lung). The standard linear
accelerator (linac) requirements for large weight capability (up to 450 to
600 lb load), small sag (<2 mm), and hard table-top apply for CT-
simulators, too.
A diagram that describes the CT-simulation and virtual simulation
processes is shown in Figure 3.5B.

CT-Simulation Process
The patient is immobilized on the CT table and in the treatment
planning position. At this initial stage, all special immobilization devices
(e.g., head and neck masks, pelvic shells, breast boards, etc.) are
required to be constructed and/or utilized, in order to be included in the
CT image study of the patient. These devices can be indexed on the CT
table top, the same way that later on will be indexed on the linac
treatment table (Fig. 3.8A–C). Additional planning modifiers, such as
skin bolus, are also required to be included. The borders of the clinical
area marked by the physician on the patient can be outlined with CT
radio-opaque markers (Fig. 3.8C). Sometimes, initial reference skin
marks are placed in the middle of the clinical treatment area. The CT
movable lasers are used to define and mark the CT reference point on
the patient.
FIGURE 3.4 Sample in-house simulation form for breast setup. Note the setup instructions and
appropriate shift information from the “3-point” computed tomography (CT) mark to the treatment
isocenter. Detailed information on the treatment fields is entered in the table below. This form
can also be used for verification simulation.

A set of anterior and lateral topograms (“scout views”) will assist the
patient alignment on the CT table. The patient will be scanned based on
a preset protocol according to the disease site and the images will be
stored for virtual simulation, while the patient remains on the CT table.

FIGURE 3.5 Verification simulation process diagram (A) and computed tomography (CT)
simulation process (B).
FIGURE 3.6 Virtual simulation as part of computed tomography (CT) simulation for a lung patient.
The user can visualize the anatomical information that will assist appropriate placement of points,
such as the treatment isocenter and the fields. In addition, tumor and other critical volumes can be
outlined at this stage. This information will be eventually transferred to the treatment planning
system.

FIGURE 3.7 A: A large (wide) bore CT-simulator accommodates the majority of immobilization
devices to be included in patient setup. B: A room view of a CT-simulator with the localization
laser system.
FIGURE 3.8 Patient immobilization devices as integral part of computed tomography (CT)
simulation process for radiotherapy. A head and neck head holder and mask (A), indexing
grooves for the immobilization devices on the table top (B) and a breast patient on a breast board
with reference CT radio-opaque markers ready to be CT-simulated (C).

Virtual Simulation Process


There are three tasks that pertain to virtual simulation. First is the
treatment isocenter localization, which is typically placed at the
geometric center of the treatment volume; the second is the target and
critical structures volumes delineation; and the third is to determine the
treatment beam parameters via beam’s-eye-view (BEVs) (14) using
DRRs, including gantry, collimator and couch angles, field sizes,
shielding block, and so forth. This last part can also be performed at a
later time during the treatment planning and isodose computation
process.
The CT images will be utilized to render a 3D view of the patient
which will allow the more precise localization of the isocenter and later
on, more efficient placement of the treatment fields (Fig. 3.9). The
isocenter will be marked on the reconstructed patient anatomy and two
reference fields (typically an AP and a LAT) will be assigned at that
point. The DRRs (Fig. 3.10) of the reference fields will be compared with
the equivalent port films later on, the same way simulator films have
been used in the past. Having determined the isocenter, the patient is
marked with the assistance of the movable lasers, one anterior and two
lateral marks on each side of the “3-point.” Shifts between the original
CT reference marks and the isocenter marks should be logged in the
patient’s chart.
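The bookkeeping of these shifts is simple arithmetic, illustrated by the hypothetical sketch below. The coordinate convention (DICOM-like: +x toward the patient's left, +y posterior, +z superior) is an assumption and must always be checked against the local planning system's convention.

```python
def isocenter_shifts(ref_mark_mm, isocenter_mm):
    """Couch shifts from the reference '3-point' CT mark to the plan isocenter.

    Coordinates are (x, y, z) in mm in a DICOM-like patient system:
    +x = patient left, +y = posterior, +z = superior (assumed convention).
    Returns the magnitude of each shift labeled by its patient direction.
    """
    dx, dy, dz = (i - r for i, r in zip(isocenter_mm, ref_mark_mm))
    return {
        "left" if dx >= 0 else "right": abs(dx),
        "posterior" if dy >= 0 else "anterior": abs(dy),
        "superior" if dz >= 0 else "inferior": abs(dz),
    }

# Example: isocenter 3 mm left, 5 mm anterior, and 12 mm superior to the mark.
print(isocenter_shifts((0.0, 0.0, 0.0), (3.0, -5.0, 12.0)))
```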
At this stage, the patient can be removed from the CT table. The rest
of the virtual simulation process can be performed off-line via the special
simulation software or the treatment planning system software.
Connectivity between the CT scanner's computer system and the
treatment planning computer must be evaluated and tested by
the physicist on a frequent basis (13). Image format and transfer
standards such as DICOM (15) and DICOM-RT (16) (the latter developed
especially for radiation therapy) are the industry standard for data
exchange between imaging devices and the treatment planning computer.
One needs to keep in mind that the DICOM-RT transfer protocol can be
highly complex to implement and can vary in interpretation from one
manufacturer to another.
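As a small illustration of what such connectivity testing might involve at the scripting level, the sketch below uses the third-party pydicom library to open an exported RT Structure Set and list the structures it contains. The file name is hypothetical, and this is only a spot check, not a substitute for the formal transfer tests of Ref. 13.

```python
import pydicom  # third-party DICOM library

# Hypothetical file: an RT Structure Set exported from the CT-simulation.
ds = pydicom.dcmread("RS.example_patient.dcm")

# Confirm the object type and list the structure names it carries.
print(ds.Modality)  # expected: 'RTSTRUCT'
for roi in ds.StructureSetROISequence:
    print(roi.ROINumber, roi.ROIName)
```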
In the treatment planning system, the patient’s CT study and CT-
simulation information (reference marks, points, reference fields, etc.)
should be available for potential registration or fusion with other
imaging modalities (other CT studies, MRI, PET, etc.) that the patient
might have gone through (Fig. 3.11). The information provided by the
multiple imaging studies will allow the physician to outline the target and
other volumes more accurately.
Starting from the reference marks and setup ports, the treatment
isocenter is typically selected at the center of the treatment area.
Relative shifts of the isocenter from the reference CT marks are
monitored for subsequent patient setup, as described above, and shown
in Figure 3.12. The physician will outline the target areas and other
critical structures in a slice-by-slice process and will review the 3D
representation of these volumes in three major views (axial, sagittal, and
coronal). Delineation of the target and critical organs is an extremely
time-consuming process in most clinical cases. Progress has been made
toward computer-assisted automatic contouring, pattern recognition, and
auto segmentation (17). Figure 3.13 compares manually outlined and
auto-segmented heart volumes. However, the basic problem remains that
target delineation is inherently a manual process, since the extent of the
target depends on tumor grade, stage, and patterns of spread to adjacent
structures. Clinical evaluation of the contouring results by a radiation
oncologist provides the final judgment in defining the target volume.

FIGURE 3.9 Virtual simulation based on 3D reconstruction allows accurate placement of


treatment fields (top-right). In this brain treatment, for example, two lateral (top-left and bottom-
right) and one vertex (bottom-left) treatment fields shaped by multi-leaf collimators (MLCs) are
shown.
FIGURE 3.10 Isocenter placement during virtual simulation, based on reconstructed planes (top)
and orthogonal digitally reconstructed radiographs (DRRs) (bottom) for a head and neck
treatment.

With all the volumes (targets and critical structures) approved by the
physician, the treatment planning team can initiate the selection of the
appropriate treatment fields via BEVs and 3D reconstruction of internal
geometry of the patient (Fig. 3.9). Keeping in mind the clinical and setup
margins to the tumor volume, as defined by the ICRU (International
Commission on Radiation Units and Measurements) (18,19), appropriate
blocks with multileaf collimators (MLCs) can be used for 3D conformal
treatment planning. It is important to remember that each beam has
physical penumbra where the dose varies rapidly and that the dose at
the edge of the field is approximately 50% of the center dose. For this
reason, to achieve adequate dose coverage of the target volume, the field
penumbra should lie sufficiently beyond the target volume to offset any
uncertainties in PTV. Beam apertures can be designed automatically or
manually depending on the proximity of the critical structures and the
uncertainty involved in the allowed margins to the target volume (Fig.
3.14). Clinical judgment is frequently required between sparing of
critical structures and target coverage.

FIGURE 3.11 Multiimage registration for a brain patient. MRI and CT are aligned and fused in all
three major views: axial, sagittal, and coronal. This allows the user to outline volumes that are
visualized in MR images onto the CT images and proceed with treatment planning.
FIGURE 3.12 A diagram showing the relative shifts from the reference computed tomography
(CT) marks to the treatment isocenter. Visualization of internal body structures are essential in this
process.

All possible gantry, collimator, and couch angles can be evaluated


based on target coverage and avoidance of critical structures. Some
commercial planning systems provide software-assisted beam geometry
parameter optimization (20,21) which is important for highly conformal
treatment plans. Beam directions that create greater separation between
the target and critical structures are generally preferred unless other
constraints such as obstructions in the beam path or gantry collisions
with the treatment couch or patient preclude those choices.
Alternatively, dose–volume objectives for the target volume and critical
structures can be employed to produce an inversely optimized plan, with
the majority of commercial planning systems being capable of providing
inverse planning optimization algorithms (22). Final dose computations
take full advantage of CT-electron density information in order to
account for tissue inhomogeneities (23). The virtual simulation process
smoothly makes a transition into treatment planning and the treatment
evaluation stage, which is beyond the purpose of this chapter.
FIGURE 3.13 Consecutive axial CT images. Compare the heart volume that has been manually
outlined (BLUE line) with the result of auto-segmentation (GREEN line).
FIGURE 3.14 A field shaped around the prostate with specific margins using multi-leaf collimators
(MLCs). The BEV view (left) and the axial view (right).

A few points of precaution are in order when virtual simulation is


performed.

• Due to the precise visualization of internal organ and target volumes, one
might be misled into using arbitrarily small margins around the target volume
out of a false sense of confidence. Other important effects, such as
patient setup uncertainty and target motion, might then not be properly
taken into account, since patient and organ motion is not well visualized by
traditional 3D virtual simulation. In the absence of 4D CT imaging, an
additional fluoroscopic study in a traditional simulator might be of
great benefit to treatment planning, especially for moving targets such
as lung tumors.
• The spatial resolution is generally limited by the spacing of the axial
images. Within the target area a smaller scan spacing (typically 2 to 3 mm)
is therefore required, while a larger spacing (typically 1
to 2 cm) can be used farther from the area of interest. One needs to
keep in mind that this will affect the quality of the reconstructed DRRs.
• Limitations of CT imaging in visualizing all treatment sites can influence
the clinical target volume (CTV) design; in other words, CT imaging does
not always provide the best method to visualize microscopic disease.
Most commonly, in the case of brain tumors, CT imaging cannot
provide clear borders of the disease. The clinician needs to keep in
mind that combining the treatment planning CT with other modalities,
such as MR or PET, through the image registration process will allow a
more accurate delineation of the target volume. It is important to
remember that the ability to precisely conform to a target volume has
limited value if the target is not determined accurately.

4D CT-Simulation Process
Modern CT scanners are capable of providing a high-resolution
volumetric reconstruction of the patient’s anatomy. Each image voxel
has a characteristic CT number that is uniquely related to the electron
(or mass) density of that voxel. The density information is used in the
computation of dose and accounts for the effects of tissue inhomogeneity
in treatment planning. When the anatomy that is imaged is mobile
(tumor and organs move during the imaging study due to cardiac or
respiratory motion), the image data are subject to motion artifacts.
Consequently, the resulting volumetric reconstruction of the patient is a
blurred representation of the true patient anatomy. In addition, motion
artifacts will result in erroneous CT numbers and electron density values
in the vicinity of the mobile anatomy. It is therefore important to
minimize any motion artifact as it impacts not only the image quality
and the specificity by which we can resolve anatomical changes, but also
the accuracy of the calculated dose in treatment planning.
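The conversion from CT number to relative electron density is typically implemented as a piecewise-linear calibration curve measured with a density phantom at commissioning. The sketch below shows such a lookup with entirely hypothetical calibration points.

```python
import numpy as np

# Hypothetical CT-to-relative-electron-density calibration points, of the kind
# measured with a density phantom during CT-simulator commissioning.
hu_points  = np.array([-1000.0, 0.0, 1000.0, 3000.0])   # CT numbers
red_points = np.array([0.0, 1.0, 1.52, 2.5])             # electron density rel. to water

def hu_to_relative_electron_density(hu):
    """Piecewise-linear lookup of relative electron density from CT number."""
    return np.interp(hu, hu_points, red_points)

# Example: lung parenchyma, soft tissue, and bone-like voxels.
print(hu_to_relative_electron_density([-800, 50, 400]))
```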
There are three different types of motion artifacts that we can observe
during a CT acquisition (24):

• If the CT scanning speed in the superior–inferior direction is much less


than the tumor motion speed, then we observe a smeared image of the
tumor.
• If the CT scanning speed is much faster than the tumor motion speed,
then the tumor position and shape are captured on the image at an
arbitrary phase of the breathing cycle.
• If the CT scanning speed is similar to the tumor motion speed, then the
tumor position and shape can be significantly distorted.
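A toy decision function restating these three cases is shown below; the factor-of-3 threshold used to separate "much less" from "much faster" is an arbitrary assumption for illustration only.

```python
def motion_artifact_regime(scan_speed_si, tumor_speed_si, ratio=3.0):
    """Qualitative artifact regime from the three cases described above.

    scan_speed_si, tumor_speed_si : speeds in the superior-inferior direction,
    in the same units (e.g., mm/s). The factor-of-3 threshold is illustrative.
    """
    if scan_speed_si < tumor_speed_si / ratio:
        return "smeared tumor image"
    if scan_speed_si > tumor_speed_si * ratio:
        return "snapshot at an arbitrary breathing phase"
    return "significant distortion of tumor position and shape"

# Example: comparable scan and tumor speeds lead to shape distortion.
print(motion_artifact_regime(scan_speed_si=10.0, tumor_speed_si=12.0))
```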

It is evident that patient motion can cause significant artifacts during


3D CT imaging (25,26), which not only degrade the image quality and
our ability to delineate anatomical structures (27), but sometimes can
even simulate the presence of disease (28). Figure 3.15 illustrates image
artifacts that are caused by superior–inferior motion during conventional
CT imaging of a test sphere (29).

FIGURE 3.15 Illustration of image artifacts that are caused by superior–inferior (SI) motion during
3D CT imaging. A: CT coronal section of a static sphere. B: CT coronal section of the same
sphere in oscillatory motion (range, 2 cm; period, 4 seconds) (29).

Ritchie (30) proposed a high speed (fast) scan to avoid motion


artifacts that was of limited success with the third-generation CT
scanners. The use of multislice technology (31) has significantly reduced
motion artifacts in CT images when acquired in fast scanning mode.
However, fast scans, although less susceptible to motion artifacts, do not
portray the full extent of motion of the tumor and are therefore not of
clinical use in treatment planning of mobile tumors. Multislice helical
scanning, on the other hand, can be used with a 4D CT scanning
protocol and reduces the overall scanning time while achieving the goal
of capturing the temporal position of the tumor in the imaging study.
When we consider the organ motion, we have to choose an imaging
technique that will minimize motion artifacts during the CT simulation.
Several methods have been proposed to address this problem, including:

• A breathhold CT simulation, where the patient is instructed to
voluntarily hold his or her breath while the scanning beam is turned on. A
similar result can be achieved using the active breathing control (ABC)
technique proposed by Wong et al. (32).
• A slow CT scan where axial images are acquired at a speed of 4 seconds
or slower per slice. A slow scan will ensure that the envelope of motion
of any moving organ subject to respiratory motion will be captured in
the image (typical respiratory period is 4 to 6 seconds).
• A gated CT scan where the beam is turned on only when the patient’s
breathing is at a certain window of the respiratory cycle (typically 30%
to 35% of duty cycle). The respiratory-related motion is usually
monitored using an external marker. In one of the commercially
available implementations, this is accomplished by correlating the
respiration-related tumor motion to the displacement of an external
marker placed on the patient’s chest as measured using an infrared
camera. The user can specify which portion of the sinusoidal-shaped
signal obtained from the infrared camera is used to trigger the CT
scanner using the cardiac gating port. This method of imaging is also
known as prospective gated image acquisition.
• A 4D CT scan where multiple scans are obtained for each location
(oversampling) whereby the organ motion is captured at different
sampled phases of the respiratory cycle. At the end of the scan a very
large set of 3D images is produced corresponding to each of the phases
in which the breathing cycle was sampled (Fig. 3.16). The collection of
those 3D CT scans constitutes the 4D CT study for this patient. This
method is also known as retrospective image reconstruction.
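A simplified sketch of the retrospective sorting idea follows; it assumes a perfectly periodic breathing signal with a known period and a known time of peak inhale, whereas commercial systems derive the phase of each image from the recorded respiratory trace.

```python
import numpy as np

def sort_images_into_phase_bins(image_times_s, breathing_period_s, n_bins=10,
                                t_peak_inhale_s=0.0):
    """Retrospective 4D CT phase sorting (simplified illustration).

    Each reconstructed image is assigned a breathing-phase bin from its
    acquisition time, assuming a perfectly periodic breathing trace.
    Returns the phase-bin index (0 .. n_bins - 1) of every image.
    """
    phase = (((np.asarray(image_times_s) - t_peak_inhale_s) % breathing_period_s)
             / breathing_period_s)
    return np.floor(phase * n_bins).astype(int)

# Example: images acquired every 0.25 s, patient breathing with a 4-s period.
times = np.arange(0.0, 8.0, 0.25)
bins = sort_images_into_phase_bins(times, breathing_period_s=4.0)
# Images sharing a bin index are stacked into one of the ten 3D phase volumes.
```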

FIGURE 3.16 The 4D CT phase-sorting process: the CT images, breathing tracking signal, and
“X-ray ON” signal form the input data stream. The breathing cycle is divided into distinct bins (e.g.,
peak exhale, mid inhale, peak inhale, mid exhale). Images are sorted into those image bins
depending on the phase of the breathing cycle in which they were acquired (29).

Of all the imaging methods that aim to minimize respiration-related


motion artifacts, the 4D CT technique is the most comprehensive way to
perform the task because it not only reduces motion artifacts but also
captures the changing topography of the tumor during the respiratory
cycle. This information can be used during treatment planning to
optimally delineate treatment volumes and margins under the
assumption that the patient will be breathing the same way during
treatment as they did during the 4D CT simulation. Irregular breathing
(during imaging or treatment) is not desirable, and we often coach the
patient to breathe regularly and reproducibly. Coaching can be auditory,
where a computerized voice instructs the patient to breathe in and out, or
visual, where for example the patient looks at the superposition of their
baseline breathing curve and their real-time breathing curve and tries to
match them as they control and pace their breathing.
Ultimately, the goal of simulation is to uniquely and reliably identify,
in the patient’s treatment position anatomy, the exact target shape and
location that can be reproducibly localized and treated during the daily
treatments. Four-dimensional CT simulation allows us to segment the
target and organ volumes with high specificity, resulting in more
educated decisions on margin selection as well as improved dose
calculation during treatment planning.

ACCEPTANCE TESTING AND QUALITY ASSURANCE


When it comes to conventional simulators and CT-simulators, the initial
acceptance testing is performed to verify that the unit is operating as
specified by the manufacturer and to serve as a baseline data pool for
future comparisons with periodic quality assurance (QA) testing.

Conventional Simulator
Acceptance testing of a simulator may be divided into two parts: (a)
geometric and spatial accuracies verification and (b) performance
evaluation of the x-ray generator and the associated imaging system. The
first part is similar to the acceptance testing and evaluation of a linear
accelerator for mechanical performance. Because the simulators are
designed to mimic the treatment accelerators, their geometric accuracies
should be comparable with those of the accelerators. To minimize
differences between the simulator and the accelerator, it is desirable to use
the same table design and accessory holders as those on the treatment
machine.
The second part is a performance evaluation of a diagnostic
radiographic and fluoroscopic unit.
Several authors have discussed the technical specifications of
treatment simulators and the required testing procedures and have
presented comprehensive reviews on this subject (3,4,33–35). The
quality assurance for the x-ray generator and the imaging system has
been discussed by various groups (36,37). The most recent
recommendations on QA for conventional simulators are from AAPM
Task Group #40 (Table III in the report) (38). Of course a well-
established QA program requires daily and annual testing for simulators,
in addition to the monthly testing.

CT-Simulator
Acceptance testing for a CT-simulator requires the acceptance testing of
the CT scanner as an imaging device to be done first. This process is
described in detail by AAPM Report No. 39 (39). For the purpose of CT-
simulation, additional literature needs to be employed to cover the needs
of radiotherapy (see McGee and Das in Ref. 11). Due to the complexity
of new-technology scanners, the manufacturer's acceptance testing
procedure (ATP) manual provides a useful guide to recommended testing
tolerances for the particular scanner. We recommend that the AAPM Task Group #66
report is followed for all the QA needs of a CT-simulator as it applies to
radiotherapy procedures (13). Table I in Ref. 13 outlines the
electromechanical components testing (e.g., lasers, table, gantry, and
scan localization). Table II outlines test specifications for image
performance evaluation (e.g., CT number vs. electron density, image
noise, contrast, and spatial resolution). A simplified set of tests is shown
in Figure 3.17. Keep in mind that the CT-simulation process QA should
be performed along with the treatment planning process QA where
information and data are transferred between the CT scanner and the
treatment planning computers.
When 4D CT scans are used for simulation, the quality assurance is for
the most part the same as that for the CT simulator. In addition to the
tests described previously, one could include scans of test phantoms that
are placed on a moving platform. Such motorized platforms can be
programmed to a user-defined moving cycle that is typically 1-
dimensional (1D) or 2-dimensional (2D), which is adequate for QA
purposes. Since the physical size of the phantom and any objects
embedded inside it are known, a 4D CT scan would test the ability of the
scanner and the accompanying software to build the 4D model of the
phantom and to reproduce the true dimensions of the imaged objects.
Although there is currently not much information on QA for 4D CT, such
protocols can easily be developed and incorporated in routine quality
assurance programs for CT simulation.

RECENT DEVELOPMENTS IN RADIOTHERAPY


SIMULATION
During the last decade, positron emission tomography (PET) in
combination with CT in hybrid, cross-modality imaging systems
(PET/CT) has gained more and more importance as a part of the
treatment planning procedure in radiotherapy. The fusion of PET with
CT adds anatomical information to the physiologic information of PET,
allowing for improvement in spatial resolution. The high sensitivity and
specificity of PET/CT in identifying the areas of tumor involvement in
various disease sites (40,41) has attracted great interest in integrating
functional imaging with PET/CT into the radiotherapy planning process.
The aim is to better define and delineate the tumor’s extent and its
relationship to the surrounding radiosensitive vital structures and to
improve the therapeutic index (Fig. 3.18).
FIGURE 3.17 Set of monthly QA tests for CT-simulator. This set of tests is based on AAPM TG-66
(13).
FIGURE 3.18 PET/CT fusion of a patient with nasopharyngeal cancer with an upper left lobe lung
nodule (white arrow).

Magnetic resonance imaging (MRI) is becoming an increasingly


important tool in radiation oncology, as it can provide anatomical and
functional information regarding the tumor and normal tissues, which
may be complementary to the information from CT alone. For more than a
decade MRI has been successfully used in stereotactic radiosurgery
procedures. Thus, MRI has already been integrated into a CT-based RT
workflow, using image registration tools. Such tools are already an
inherent part of the RT workflow, for multimodality and multiphasic
image registration for radiation treatment planning (for MRI, PET, and
other imaging) and for image guidance at the treatment unit (42).

CONCLUSIONS
Treatment simulation is a crucial component of the entire treatment
planning process and underpins successful radiotherapy practice. The
advancements of today’s technology, both in hardware and software,
allow more accurate patient setup and representation with customization
of the treatment plans to the specific patient and site. However, stringent
QA procedures are necessary to maintain optimum and safe use of such
technologies. The introduction of multimodality imaging for RT
introduces the need for deformable image registration early in the RT
simulation process, which, as a whole, is an exciting topic to investigate.

KEY POINTS
• Treatment planning requires accurate patient data to be acquired
through the process of treatment simulation.
• Radiographic, CT, PET/CT, ultrasound, and MRI simulators are
essential in modern radiotherapy treatment planning.
• Image fusion between different simulation modalities is necessary
for complex modern radiotherapy techniques for mapping out
structural or functional anatomy of the targeted areas.
• Important components of CT-simulations and virtual simulation are
3D representation of the patient, DRRs, image registration and
segmentation, and tumor motion management.
• Treatment volume delineation, treatment portal placements, and
their directional optimization can be performed as part of the virtual
simulation process based on 3D visualization of the patient model.
• Process quality assurance and periodic testing of the radiotherapy
simulation equipment should be an integral part of the modern
radiotherapy simulation process in order to secure optimal and safe
implementation of all aspects of the simulation process.

QUESTIONS
1. What is the most common imaging modality used in radiotherapy
simulation?
A. MRI
B. Planar imaging
C. Ultrasound
D. CT
E. PET
2. When compared with CT, MRI provides better
A. spatial resolution
B. contrast resolution
C. patient setup
D. tissue density information
E. geometrical accuracy
3. When compared with MRI, PET provides better
A. spatial resolution
B. patient setup
C. tissue density information
D. malignancy differentiation from normal tissue
E. geometric accuracy
4. During a 4D CT process,
A. the CT beam is turned on only when the patient’s breathing is
at a certain window of the respiratory cycle.
B. a slow CT scan of axial slices is acquired.
C. multiple scans for each location are obtained and are sorted
via retrospective image reconstruction.
D. a breathhold technique is used during the image acquisition.

ANSWERS
1. D
2. B
3. D
4. C
REFERENCES
1. Sherouse GW, Bourland JD, Reynolds K, et al. Virtual simulation in
the clinical setting: some practical considerations. Int J Radiat Oncol
Biol Phys. 1990;19:1059–1065.
2. Sherouse GW. Radiotherapy simulation. In: Khan FM, Potish R, eds.
Treatment Planning in Radiation Oncology, Baltimore, MD: Williams &
Wilkins; 1998:39–53.
3. McCullough EC. Radiotherapy treatment simulators. AAPM
Monograph No. 19, 1990:491–499.
4. McCullough EC, Earl JD. The selection, acceptance testing, and
quality control of radiotherapy treatment simulators. Radiology.
1979; 131:221–230.
5. Khan FM. The Physics of Radiation Therapy. 4th ed. Baltimore, MD:
Lippincott Williams & Wilkins; 2010.
6. Galvin JM. The CT-simulator and the Simulator-CT. In: Smith AR, ed.
Radiation Therapy Physics. Berlin: Springer-Verlag; 1995:19–32.
7. Dahl O, Kardamakis D, Lind B, et al. Current status of conformal
radiotherapy. Acta Oncol. 1996;35(Suppl. 8):41–57.
8. Rosenwald JC, Gaboriaud G, Pontvert D. [Conformal radiotherapy:
principles and clarification]. Cancer Radiother. 1999;3:367–377.
9. Siddon RL. Solution to treatment planning problems using
coordinate transformations. Med Phys. 1981;8:766–774.
10. Siddon RL. Fast calculation of the exact radiological path for a
three-dimensional CT array. Med Phys. 1985;12:252–255.
11. Coia LR, Schultheiss TE, Hanks GE, eds. A Practical Guide to CT
Simulation. Madison, WI: Advanced Medical Publishing; 1995.
12. Aird EG, Conway J. CT simulation for radiotherapy treatment
planning. Br J Radiol. 2002;75:937–949.
13. Mutic S, Palta JR, Butker EK, et al. Quality assurance for computed-
tomography simulators and the computed-tomography-simulation
process: report of the AAPM Radiation Therapy Committee Task
Group No. 66. Med Phys. 2003;30:2762–2792.
14. Goitein M, Abrams M, Rowell D, et al. Multi-dimensional treatment
planning: II. Beam’s eye-view, back projection and projection
through CT sections. Int J Radiat Oncol Biol Phys. 1983;9:789–797.
15. NEMA. The DICOM Standard. 2006.
16. Bosh W. Integrating the management of patient treatment planning
and image data. In: Purdy JA, ed. Categorical course syllabus: 3-
dimensional radiation therapy treatment planning. Chicago, IL: RSNA;
1994;151–160.
17. Ragan D, et al. Semi-automated four-dimensional computed
tomography segmentation using deformable models. Med Phys.
2005;32:2254–2261.
18. ICRU Report No. 50. Prescribing, Recording and Reporting Photon Beam
Therapy. Bethesda, MD: ICRU; 1993.
19. ICRU Report No. 62. Prescribing, Recording and Reporting Photon Beam
Therapy (supplement to ICRU Report 50). Bethesda, MD: ICRU; 1999.
20. Rowbottom CG, Oldham M, Webb S. Constrained customization of
non-coplanar beam orientations in radiotherapy of brain tumors.
Phys Med Biol. 1999;44:383–399.
21. Bedford JL, Webb S. Elimination of importance factors for clinically
accurate selection of beam orientations, beam weights and wedge
angles in conformal radiation therapy. Med Phys. 2003;30:1788–
1804.
22. Purdy JA, Grant WH III, Palta JR, Butler BE, Perez CA, eds. 3D
Conformal and Intensity Modulated Radiation Therapy: Physics and
Applications. Madison, WI: Advanced Medical Publishing, Inc; 2001.
23. Papanikolaou N, Battista JJ, Boyer AL, et al. eds. Tissue
inhomogeneity corrections for megavoltage photon beams. AAPM Task
Group Report No. 65. 2004.
http://www.aapm.org/pubs/reports/RPT_85.pdf.
24. Jiang S. Management of Moving Targets in Radiotherapy: Integrating
New Technologies into the Clinic: Monte Carlo and Image-Guided
Radiation Therapy. AAPM Monograph No. 32; 2006.
25. Mayo JR, Muller NL, Henkelman RM. The double-fissure sign: a
motion artifact on thin-section CT scans. Radiology. 1987;165:580–
581.
26. Ritchie CJ, Hseih J, Gard MF, et al. Predictive respiratory gating: a
new method to reduce motion artifacts on CT scans. Radiology.
1994;190:847–852.
27. Keall PJ, Kini VR, Vedam SS, et al. Potential radiotherapy
improvements with respiratory gating. Australas Phys Eng Sci Med.
2002;25:1–6.
28. Tarver RD, Conces DJ Jr., Godwin JD. Motion artifacts on CT
simulate bronchiectasis. AJR Am J Roentgenol. 1988;151:1117–1119.
29. Vedam SS, Keall PJ, Kini VR, et al. Acquiring a four-dimensional
computed tomography dataset using an external respiratory signal.
Phys Med Biol. 2003;48:45–62.
30. Ritchie CJ, Godwin JD, Crawford CR, et al. Minimum scan speeds
for suppression of motion artifacts in CT. Radiology. 1992;185(1):
37–42.
31. Kachelriess M, Kalender WA. Electrocardiogram-correlated image
reconstruction from subsecond spiral computed tomography scans of
the heart. Med Phys. 1998;25:2417–2431.
32. Wong JW, Sharpe MB, Jaffray DA, et al. The use of active breathing
control (ABC) to reduce margin for breathing motion. Int J Radiat
Oncol Biol Phys. 1999;44(4):911–919.
33. Connors SG, Battista JJ, Bertin RJ. On technical specifications of
radiotherapy simulators. Med Phys. 1984;11:341–343.
34. International Electrotechnical Commission. Functional Performance
Characteristics of Radiotherapy Simulators. Draft Report. Geneva:
1990. IEC SubC 62C.
35. Bomford CK, et al. Treatment simulators. Br J Radiol. 1989;(Suppl.
23):1–49.
36. National Council on Radiation Protection and Measurements.
Quality Assurance for Diagnostic Imaging Equipment. Report No.
99;1988.
37. Boone JM, et al. AAPM Report No. 74. Quality Control in Diagnostic
Radiology. Report of Task Group #12;2002.
38. Kutcher GJ, et al. AAPM Report No. 46. Comprehensive QA for
Radiation Oncology. Report of Task Group #40;1994.
39. Lin PP-J, et al. AAPM report No. 39. Specification and Acceptance
Testing of Computed Tomography Scanners. Report of Task Group #2;
1993.
40. Öllers M, Bosmans G, van Baardwijk A, et al. The integration of
PET-CT scans from different hospitals into radiotherapy treatment
planning. Radiother Oncol. 2008;87:142–146.
41. Thorwarth D, Geets X, Paiusco M. Physical radiotherapy treatment
planning based on functional PET/CT data. Radiother Oncol.
2010;96:317–324.
42. Devic S. MRI simulation for radiotherapy treatment planning. Med
Phys. 2012;39:6701–6711.
4 Treatment Planning Algorithms:
Photon Dose Calculations

John P. Gibbons

INTRODUCTION
Computerized treatment planning systems have been utilized in
radiotherapy planning since the 1950s. The first computer algorithm
used has been attributed to Tsien (1) who used punch cards to store
isodose distributions to allow for the addition of multiple beams. Since
that time, advancements in computer speeds and algorithm development
have vastly improved our capability to predict photon dose distributions
in patients.
In an early attempt to classify computer planning algorithms, ICRU
Report 42 (2) divided photon dose calculation methods into two
categories: empirical and model-based algorithms. Early empirical
algorithms such as Bentley–Milan were developed using clinical beam
data measured on a flat water phantom as input. Corrections were then
made to incorporate various effects, such as changes in patient external
contour, blocking or physical wedges, and so forth. Eventually, patient
heterogeneity correction factors became incorporated, but these were
applied afterward, that is, after water-based calculations were performed
assuming a homogenous patient geometry. Most of this development
occurred prior to the advent of CT, or at least before the incorporation of
CT-images into the radiotherapy planning process.
However, eventually the commercial utilization of empirical
algorithms faded. In the early 1990s, 3-dimensional (3D) conformal
radiation therapy (3D CRT) began to use patient-specific CT-image data
in the planning process. Initially this was limited to virtual simulation.
At that time computer-based algorithms which could incorporate the
newly available volumetric density information and compute true 3D
dose distributions in a reasonable amount of time were not yet available.
In order to fully utilize this new information, it was necessary to develop
new algorithms which could more accurately incorporate variations in
individual patient anatomy. As a result, most, if not all, commercial
treatment planning systems have moved to model-based photon
calculation methods.
In this chapter, we will describe three photon calculation models
currently in use in radiotherapy clinics. Photon calculation models are
an area of continuous development and it is likely that each commercial
vendor’s implementation of one or more of these models will differ in
many respects. Nevertheless, the intent is to provide a basic
understanding of the principles behind these algorithms.
This work represents an update of the chapter of Mackie, Liu, and
McCullough from the previous edition (3). Their work contained
thorough coverage of the subject and much of their description and
analyses have been reproduced here. In particular, their expertise
regarding the convolution/superposition algorithm is without equal, and
the reader is encouraged to review their work for greater details.

THE REPRESENTATION OF THE PATIENT FOR DOSE PLANNING
Patient representation has evolved dramatically over the past 40 years.
Initially, patients were considered as a flat water phantom of a specific
SSD and depth for use in simple dose or monitor unit calculations.
Development of external contour tools aided the treatment planner in
determining patient-specific dose distributions. Such procedures resulted
in the patient being represented as a homogeneous composition (i.e.,
water) but did allow for the application of surface corrections to the
calculation. Patient heterogeneities could be represented in simple ways,
such as using internal contours with assigned densities. The electron
density to assign to the region could be inferred from CT atlases or, if
available, the mean patient-specific CT number within the contoured
structure (4). The problem with this approach was that tissues such as
lung and bone are not themselves homogeneous, and their density
variations would not be taken into account using this approach.
All modern radiotherapy systems use volumetric imaging data to
characterize the patient in a 3D voxel-by-voxel description. The most
common imaging dataset used for radiotherapy treatment planning is a
treatment-planning CT scan, obtained using a conventional CT-simulator.
A CT dataset of the treatment region constitutes the most accurate
representation of the patient applicable for dose computation, primarily
because of the one-to-one relationship between CT number and physical
and/or electron density (4). Dose algorithms that can use the density
representation on a point-by-point basis are easier for heterogeneous
calculations because contouring of the heterogeneities is typically not
required. An exception to this occurs when data is present within the CT
scan which will not be present for the treatment. One obvious example is
the CT-simulator couch, which is either manually or automatically
removed and, in some cases, replaced with a treatment couch by the
planning system. Also relevant are temporary contrast agents that can
produce a CT number that mimic a higher density material within the
body. Usually, the contrast agent is used to aid in the tissue
segmentation, and so only the additional step of providing a more
realistic CT number in the segmented region is required to correct for
the presence of the contrast agent. The CT numbers reported by CT scanners
are typically reliable to within 2%, which leads to dose uncertainties of ∼1% (4).
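
The conversion from CT number to relative electron density is typically performed with a piecewise-linear calibration curve measured on an electron-density phantom. The following sketch illustrates such a lookup; the calibration points shown are hypothetical, and a scanner-specific curve must be measured at commissioning.

"""Minimal sketch of a CT-number-to-relative-electron-density lookup."""
import numpy as np

# (HU, relative electron density) calibration points -- illustrative only.
CT_ED_TABLE = [
    (-1000.0, 0.00),   # air
    (-700.0, 0.30),    # lung
    (0.0, 1.00),       # water
    (1000.0, 1.55),    # dense bone
    (3000.0, 2.50),    # extrapolation point
]


def hu_to_relative_electron_density(hu):
    """Piecewise-linear interpolation of the CT-to-ED calibration curve."""
    hu_points, ed_points = zip(*CT_ED_TABLE)
    return np.interp(hu, hu_points, ed_points)


if __name__ == "__main__":
    ct_slice = np.array([[-1000.0, -700.0], [40.0, 1200.0]])  # toy 2 x 2 slice
    print(hu_to_relative_electron_density(ct_slice))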
Other imaging modalities provide information which will aid in the
location and delineation of structures, but is of less value in the
calculation of dose. For example, the advent of cone-beam CT within the
treatment room provides invaluable information regarding patient
alignment. However, the scatter contained within the images makes
accurate determination of density difficult. Although MRI is often able to
provide superior tissue contrast, the information in MRI is not strongly
related to electron density. Furthermore, MRI images are more prone to
artifacts during image formation, which will degrade the quality of the
calculated dose distributions.
In addition to electron density, it is also necessary to determine the
tissue composition for more modern calculation algorithms. In
convolution/superposition algorithms, fluence attenuation tables are
typically computed using mass-attenuation coefficient data, which are
somewhat weakly dependent on material. Often these coefficients are
determined for each voxel by linearly interpolating between published
results of two different materials (e.g., water and bone) based on the
density assigned to the voxel. For both Monte Carlo (MC) and Boltzmann
transport calculations, a full material assignment must be made to allow
for accurate cross-section determination for both photon and electron
transport throughout the patient volume.
Ideally, the size of the voxels in the treatment planning CT should be
close to the dose grid resolution used for calculation. A CT volume set
typically consists of 50 to 200 images with a voxel matrix dimension of
512 × 512 for each image. For a 50-cm field of view, this corresponds
to a voxel size of ∼1 mm in the transverse direction. The longitudinal
voxel size depends on the slice thickness, but is typically from 2 to 5
mm. In many planning systems, the CT slice thickness is chosen as the
voxel size of the dose grid. For these systems, it may be appropriate to
downsample the CT image set to 256 × 256. This makes the transverse
resolution more closely matched to that of the longitudinal direction,
with only a minor degradation in the image. Degrading the resolution
further from 256 × 256 may result in an unacceptable loss of detail.
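
A 2 × 2 averaging of the transverse pixels is one simple way to perform this downsampling, as sketched below (the array contents are arbitrary; a clinical system may use more sophisticated resampling).

"""Sketch of downsampling a 512 x 512 CT slice to 256 x 256 by 2 x 2 averaging."""
import numpy as np


def downsample_2x(slice_hu: np.ndarray) -> np.ndarray:
    """Average non-overlapping 2 x 2 blocks of HU values (even dimensions assumed)."""
    rows, cols = slice_hu.shape
    return slice_hu.reshape(rows // 2, 2, cols // 2, 2).mean(axis=(1, 3))


# Example: a synthetic 512 x 512 slice reduced to 256 x 256.
slice_hu = np.random.randint(-1000, 1500, size=(512, 512)).astype(float)
print(downsample_2x(slice_hu).shape)  # (256, 256)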

BASIC RADIATION PHYSICS FOR PHOTON BEAM DOSE CALCULATION
Here we present an introduction to the important aspects of x-ray
production and interaction to understand the capabilities and limitations
of model-based photon treatment planning algorithms.

Megavoltage Photon Production


Figure 4.1 displays a cross-sectional view of a linear accelerator
treatment head, which is shielded by a high-density material such
as lead, tungsten, or lead–tungsten alloy. The head contains an x-ray target,
flattening filter, ion chamber, and primary and movable collimators.
High-energy electrons are accelerated in the linac’s accelerating
structure and impinge on the x-ray target.
The production of Bremsstrahlung, or braking radiation, occurs when
the high-energy electrons strike a tungsten target located in the head of
the accelerator. The size of the focal spot of the electrons on the target is
small, typically on the order of a few millimeters (5). This finite size
contributes to the penumbra, or the blurring of the beam near the edges
of the field.
A first, or primary collimator, fabricated from a tungsten alloy, defines
the maximum field diameter that can be used for treatment.
At megavoltage energies, Bremsstrahlung is produced mainly in the
forward direction. In most conventional C-arm accelerators, to make the
beam intensity more uniform, a conical filter positioned in the beam
preferentially absorbs the photon fluence along the central axis. The
presence of the field-flattening filter alters the energy spectrum, since
the beam passing through the thicker central part of the filter has a
higher proportion of low-energy photons absorbed by the filter. This
may not be necessary for modern treatment deliveries where modulation
is used to vary the intensity of the beam. Indeed, many treatment units
now have the option of removing the filter for these treatments (e.g.,
Varian TrueBeam, Palo Alto, CA; Elekta Versa HD, Atlanta, GA), or have
removed the flattening filter entirely (e.g., TomoTherapy, Inc. Hi-Art,
Madison, WI) when a uniform field is not needed.

Compton Scatter
Photons can inelastically scatter via three main processes: photoelectric
absorption, incoherent (Compton) scattering with atomic electrons, and
pair production in the nuclear or electron electromagnetic field. In the
energy range used for radiation therapy, most interactions are Compton
scattering events, which are discussed in more detail here.
FIGURE 4.1 Components of the treatment head of a linear accelerator. A: A cross-sectional view
of the treatment head operating in x-ray therapy mode. B: A cut-away diagram of the linac. (From
Varian Medical Systems: www.varian.com, with permission.)

Compton scattered photons may originate in either the accelerator
treatment head or the patient (or phantom). Most of the scatter dose
generated by the accelerator head is produced within the primary
collimator and the field-flattening filter. These scattered photons and
electrons are sometimes referred to as “extrafocal radiation” which may
be added to the primary photon beam emitted from the source. As the
collimator jaws open, more scattered radiation is allowed to leave the
treatment head, which results in an increase in the machine output with
field size. This effect is known as collimator scatter (6), although the
collimator jaws themselves contribute little forward scatter. The photons
scattered in the primary collimator and field-flattening filter also add to
the fluence just outside the geometrical field boundary.
Like accelerator-produced scatter, phantom scatter primarily occurs in
the forward direction and increases with the size of the field. However,
for phantom-generated scatter, the penetration characteristics of the
beam are also altered. As the field size increases, the phantom scatter
causes the beam to be significantly more penetrating with depth. This
effect is significant enough that this energy difference must be included
in the dose computations.
The behavior of scatter from beam modifiers such as wedges must also
be considered within the photon model. When the field size is small, a
beam modifier mainly alters the transmission and does not contribute
much scatter that arrives at the patient. However, when the field is
large, beam modifiers begin to alter the penetration characteristics of the
beam, much as phantom scatter does. This effect is exemplified by the
increase in the wedge transmission factor with increasing field size and
depth (7,8).

Electron Transport
Photons are indirectly ionizing radiation. The dose is deposited by
charged particles (electrons and positrons) set in motion from the site of
the photon interaction. At megavoltage energies, the range of charged
particles can be several centimeters. The charged particles are mainly set
in forward motion but are scattered considerably as they slow down and
come to rest. Electrons lose energy by two processes: inelastic collisions
within the medium (primarily with atomic electrons), and radiative
interactions (primarily with atomic nuclei). Inelastic collisions which
ionize the target atom can lead to secondary electrons, known as delta
rays. Radiative interactions occur via Bremsstrahlung, which effectively
transfers the energy back to a photon. Equations which model these
coupled electron–photon interactions are described later on.
The indirect nature of photon dose deposition results in several
features in photon dose distributions. Initially, the superficial dose
increases, or “builds up” from the surface of the patient because of the
increased number of charged particles being set in motion. This results
in a low skin dose, the magnitude of which is inversely proportional to
the path length of the charged particles. The dose builds up to a
maximum at a depth, dmax, characteristic of the photon beam energy. At
a point in the patient with a depth equal to the penetration distance of
charged particles, charged particles coming to rest are being replenished
by charged particles set in motion, and charged particle equilibrium
(CPE) is said to be reached. In this case, the dose at a point is
proportional to the energy fluence of photons at the same point. The
main criterion for CPE is that the energy fluence of photons must be
constant out to the range of electrons set in motion in all directions. This
does not occur in general in heterogeneous media, near the beam
boundary, or for intensity-modulated beams.
Electrons produced in the head of the accelerator and in air between
the accelerator and the patient are called contamination electrons. The
interaction of these electrons in and just beyond the buildup region
contributes significantly to the dose, especially if the field is large.
Perturbation in electron transport can be exaggerated near
heterogeneities. For example, the range of electrons is three to five times
as long in lung as in water, and so beam boundaries passing through
lung have much larger penumbral regions. Bone is the only tissue with
an atomic composition significantly different from that of water. This
can lead to perturbations in dose of only a few percent (9), and so
perturbations in electron scattering or stopping power are rarely taken
into account. Bone can therefore be treated as “high-density water.”

SUPERPOSITION/CONVOLUTION ALGORITHM
The most common photon dose calculation in use for radiotherapy
planning today is the Superposition/Convolution algorithm (9–19). This
method incorporates a model-based approach in describing the
underlying physics of the interactions, while still being able to calculate
dose in a reasonable time.
The convolution–superposition method begins by modeling the
indirect nature of dose deposition from photon beams. Primary photon
interactions are dealt with separately from the transport of scattered
photons and electrons set in motion.

Dose Calculation under Conditions of Charged Particle Equilibrium
To begin, we consider the special case of dose determination under
conditions of CPE. In this case, the total energy absorbed by charged
particles at position r is the same as the total energy which escapes due
to photon interactions at the same location. Thus, the primary dose Dp
and the first-scattered dose from a parallel beam of monoenergetic
photons can be computed as (3):

DP(r) = (Kc(r))P = ΨP(r)(μen/ρ) = hνP φP(r = 0) e^(−μd) (μen/ρ)

DS(r) = ∫ φP(r′) [dPscat(θ,r′)/dV] hν′ (μen′/ρ) e^(−μ′|r − r′|) / |r − r′|² dV′   (4.1)

where ΨP(r) and (Kc(r))P are the primary energy fluence and collision
kerma, respectively, at point r, (μen/ρ) is the mass energy absorption
coefficient, φP(r = 0) is the primary photon fluence at the surface of the
phantom, hνP is the primary photon energy, μ is the attenuation
coefficient of primary photons, and d is the depth of point r. The total dose is the sum of the primary
and scatter components,

D(r) = DP(r) + DS(r)   (4.2)

where dPscat(θ,r′)/dV is the probability per unit volume of a primary
photon being scattered into a solid angle centered about angle θ, and hν′,
μ′, and (μen′/ρ) are the energy, attenuation coefficient, and mass energy
absorption coefficient of the scattered photon.
These equations are complicated enough, but they do not take into
account any secondary or higher-order photon scatter. They also neglect
beam divergence and do not take into account tissue heterogeneities.
They are valid only for CPE situations, so that the dose computation is
not valid in the buildup region or near the field boundaries, and the
scatter dose is perturbed by heterogeneities lying between the scatter
site at r′ and the point r, where the total dose is being computed.

Convolution/Superposition Method
Unfortunately, Equation (4.1) is simplistic because it does not take into
account the finite range of charged particles. In other words, the energy
fluence that was present at the point the charged particles were set in
motion upstream should replace the energy fluence in Equation (4.1).
We may think of this energy fluence as that originating upstream (i.e.,
assuming that the charged particles all moved linearly downstream), but
in reality, the particles may originate from any location around the
calculation point, as long as it is within the particles’ range. Thus, rather
than a single effective photon interaction site, this expression for dose
becomes a convolution integral about r:

D(r) = ∫ Ψ(r′)(μen/ρ) Ac(r − r′) dr′   (4.3)

where Ψ(r′) is the total (primary plus scattered) photon energy fluence and
Ac(r − r′) describes the contribution of charged particle energy
that gets absorbed per unit volume at r from interactions at r′ and the
integration is over all values of r′ that make up volume dr′. The charged
particle energy absorption kernel has a finite extent because the range of
charged particles set in motion is finite.
Equation (4.3) requires knowledge of the energy fluence due to both
primary and scattered photons at all points. Time-consuming transport
methods, such as the method of discrete ordinates or the MC method
would be needed to compute the scattered component accurately. A
simpler solution is to utilize a scatter kernel that includes the scattered
photon component along with the contribution from charged particles.
The kernel is no longer finite because photon scatter (which has no
range) is included. Now only primary photons are explicitly transported.
A convolution equation that separates primary photon transport and a
kernel that accounts for the scattered photon and electrons set in motion
away from the primary photon interaction site is as follows:

D(r) = ∫ (μ/ρ) ΨP(r′) A(r − r′) dr′ = ∫ TP(r′) A(r − r′) dr′   (4.4)

where (μ/ρ) is the mass attenuation coefficient, ΨP(r′) is the primary energy
fluence, and A(r − r′) includes the contribution of scatter. The product
of the mass attenuation coefficient and the primary energy fluence is the
primary terma (total energy released per unit mass) TP(r′). Terma, first
defined by Ahnesjo, Andreo, and Brahme (20), is analogous to kerma,
and has the same units as dose.
The convolution kernels can, in principle, be obtained by analytic
computation, deconvolution from dose distributions, or even by direct
measurement. Most often, the kernels are computed with the MC method
by interacting a large number of primary photons at one location and
determining from where energy is absorbed, that is, from primary-
generated charged particles, charged particles subsequently set in
motion from scattered photons, or both (12,13,20,21). Figure 4.2
illustrates isovalue lines for a 1.25-MeV kernel in water. As is evident
from the figure, the kernel is forward directed even at this low energy.
As the energy increases, the kernel becomes even more forward peaked.
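
The following toy calculation illustrates the superposition sum itself: a purely illustrative forward-peaked kernel is centered at each interaction voxel and weighted by the terma there. It ignores heterogeneities, beam divergence, and spectral effects and is not a clinical algorithm.

"""Toy 2D illustration of the convolution/superposition dose sum."""
import numpy as np


def superpose(terma: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Direct superposition: dose(r) = sum over r' of terma(r') * kernel(r - r')."""
    kz, kx = kernel.shape
    pad_z, pad_x = kz // 2, kx // 2
    padded = np.pad(np.zeros_like(terma), ((pad_z, pad_z), (pad_x, pad_x)))
    for (iz, ix), t in np.ndenumerate(terma):
        if t > 0.0:
            # place the kernel centered on the interaction voxel, weighted by terma
            padded[iz:iz + kz, ix:ix + kx] += t * kernel
    return padded[pad_z:-pad_z, pad_x:-pad_x]


# Hypothetical forward-peaked kernel (normalized) and an attenuating terma column.
kernel = np.array([[0.00, 0.01, 0.00],
                   [0.01, 0.05, 0.01],
                   [0.03, 0.40, 0.03],
                   [0.02, 0.25, 0.02],
                   [0.01, 0.10, 0.01]])
kernel /= kernel.sum()
depth = np.arange(20)
terma = np.zeros((20, 9))
terma[:, 4] = np.exp(-0.05 * depth)          # primary terma along a narrow beam
print(superpose(terma, kernel)[:, 4].round(3))  # central-axis "depth dose"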

Modeling Primary Photons Incident on the Phantom


The convolution equation is restricted to describing monoenergetic
parallel beams of primary photons interacting with homogeneous
phantoms. To model a clinical radiotherapy beam, the contribution for
each energy bin of the photon spectrum must be summed. At present,
the spectral information is derived from MC simulations benchmarked
by measurement. Using the EGS4 MC method, Mohan and Chui (22) first
quantified the spectrum of clinical accelerators using the MC method.
Since that time, several other authors have performed simulations to
calculate photon energy spectrum (18,23,24).
FIGURE 4.2 Cobalt-60 (more precisely, 1.25-MeV primary photons) kernels for water computed
using Monte Carlo simulation (MCS). The isovalue lines are in units of cGy MeV⁻¹ photon⁻¹. A:
The contribution due to electrons set in motion from primary photons (i.e., the primary
contribution). B: The first scatter contribution. C: The sum of the primary and all scatter
contributions. (Reprinted from Mackie TR, Bielajew AF, Rogers DW, et al. Generation of photon
energy deposition kernels using the EGS4 Monte Carlo code. Phys Med Biol. 1988;33:1–20, with
permission.)

The spectrum will also vary with off-axis position if a field-flattening
filter is used. Figure 4.3 shows that the mean energy of primary
radiation (directly from the target) for a Varian 2100C (flattened) 10-MV
photon beam decreases off-axis, but the extrafocal photons (primary
collimator and flattening filter) do not (18). This off-axis decrease is due
to differential hardening of the beam by the field-flattening filter. Since
the direct photon component dominates, the model must take into
account the change in the energy spectrum across the field.

FIGURE 4.3 The photon mean energy distribution in an open 40 × 40-cm field from a 10-MV
photon beam target, primary collimator, and field-flattening filter. Values are for in-air photons
arriving at the plane of the isocenter. (Reprinted from Liu HH, Mackie TR, McCullough EC. A dual
source photon beam model used in convolution/superposition dose calculations for clinical
megavoltage x-ray beams. Med Phys. 1997;24:1960–1974, with permission.)

Collimators and block field outlines are usually modeled with a
mathematical mask function, which consists of the fraction of the
incident fluence transmitted through the modifier. For a collimator, the
mask function inside the field is unity, and underneath the collimator it
is equal to the primary collimator transmission. For a block, the mask
function inside the field is the primary transmission through the block
tray, and underneath the block it is equal to the primary block
transmission. The mask function alone would not be able to model the
penumbral blurring of the field boundary. This has been modeled by an
aperture function. The mask function is convolved by a 2-dimensional
(2D) blurring kernel that represents the finite size of the source. The
blurring kernel is usually assumed to be a normal function with a
standard deviation equal to the projection of the source spot’s width
through the collimation system (thereby accounting for magnification of
the source at large distances from the collimator system). Finally, the
mask function is multiplied by the energy fluence distribution for the
largest open field.
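
A simple numerical illustration of the mask and aperture functions is given below: a rectangular mask (unity inside the field, collimator transmission outside) is blurred with a Gaussian whose width stands in for the projected source size. All dimensions and transmission values are hypothetical.

"""Sketch of an incident fluence map from a mask function plus source blur."""
import numpy as np
from scipy.ndimage import gaussian_filter

GRID_MM = 1.0                 # fluence grid resolution at isocenter
FIELD_MM = (100.0, 100.0)     # nominal field size (hypothetical)
COLLIMATOR_TRANSMISSION = 0.01
SOURCE_SIGMA_MM = 1.5         # projected source-spot width (hypothetical)


def fluence_map(shape=(201, 201)):
    """Mask function (1 inside field, transmission outside), then source blur."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    inside = (np.abs((y - cy) * GRID_MM) <= FIELD_MM[0] / 2.0) & \
             (np.abs((x - cx) * GRID_MM) <= FIELD_MM[1] / 2.0)
    mask = np.where(inside, 1.0, COLLIMATOR_TRANSMISSION)
    return gaussian_filter(mask, sigma=SOURCE_SIGMA_MM / GRID_MM)


print(fluence_map()[100, 145:156].round(3))  # profile across one field edge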
The energy fluence outside the field is greater than that which can be
accounted for by collimator transmission of the primary photons
generated in the target. It can be modeled by adding short, broad normal
distribution to the energy fluence. The source of this component is
mainly the extrafocal radiation produced from Compton scattering in the
field-flattening filter. The magnitude of the extrafocal radiation source
can also be used to account for the variation in the machine-generated
output factor, because the scatter outside the field and the increase in
machine-generated output are both due to scatter from the accelerator
head (18,25).
Conventional wedges and compensators cannot be accurately modeled
with primary attenuation only. These components produce scatter and
cause differential hardening of the beam. The hardening of the primary
beam can be accounted for according to the material of the wedge and
the beam spectrum as a function of radial position. Scatter from the
wedge is more difficult to account for. The increased scatter results in
the wedge factor increasing by a few percent as a function of field size.
This can be adequately modeled by a field size-dependent factor that
duplicates the effect. Alternatively, the wedge or compensator can be
included as part of the patient representation. This extended phantom
has a large heterogeneity in it, namely, the air gap between the device
and the patient. This method can predict the variation in the wedge
factor as a function of field size.

Ray-Tracing the Incident Energy Fluence Through the Phantom
The incident 2D energy fluence distribution is ray-traced through the
patient to create a 3D distribution of energy fluence (Fig. 4.4). The
density of the rays followed and the sampling of the rays along their
path must be sufficient to represent the attenuation behavior of the
phantom. Sufficient sampling density is especially important for head,
neck, and breast tangential fields. In general, the sampling density
required is higher than the dose resolution desired, so several rays are
traversing each calculation voxel.

FIGURE 4.4 The ray-tracing of a two-dimensional (2D) energy fluence distribution through the
patient to create a three-dimensional (3D) energy fluence distribution in the patient. SSD, source-
to-surface distance.

Terma is computed within the calculation matrix by multiplying the
primary fluence by the mass attenuation coefficient. The primary ray
attenuation coefficient, weighted to the appropriate beam spectrum, is
based on the voxel properties. The energy fluence at a sample point is
reduced from the previous sample along the ray. Hardening of the
primary energy fluence spectrum with depth and off-axis position is
accounted for by changing the attenuation coefficient with position. The
speed of the ray-tracing operation can be improved significantly by the
use of lookup tables to store precomputed results.
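
The sketch below illustrates the ray-trace for a single vertical fan-line through a water–lung–water column: the radiologic depth is accumulated voxel by voxel, the primary fluence is attenuated accordingly, and terma is formed as the product of the mass attenuation coefficient and the fluence. The mono-energetic coefficient and densities are illustrative only.

"""Sketch of terma computation along one fan-line using radiologic path length."""
import numpy as np

MU_OVER_RHO = 0.03      # mass attenuation coefficient (cm^2/g), illustrative
VOXEL_CM = 0.25


def terma_along_ray(psi0, density_column):
    """Attenuate the fluence voxel by voxel and accumulate terma."""
    terma = np.zeros_like(density_column)
    radiologic_depth = 0.0                       # in g/cm^2
    for i, rho in enumerate(density_column):
        radiologic_depth += rho * VOXEL_CM       # radiologic path to this voxel
        psi = psi0 * np.exp(-MU_OVER_RHO * radiologic_depth)
        terma[i] = MU_OVER_RHO * psi             # terma = (mu/rho) * fluence
    return terma


# Water-lung-water column (densities in g/cm^3), hypothetical.
column = np.concatenate([np.full(20, 1.0), np.full(40, 0.26), np.full(20, 1.0)])
print(terma_along_ray(psi0=1.0, density_column=column).round(4))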

Electron Contamination
The electron contamination of the beam is not accounted for in the
conventional convolution method, so an additional independent
component must be added to account for this dose. The surface dose
from megavoltage photon beams is almost entirely due to the electron
contamination component. Studies in which the electron contamination
has been removed by magnetically sweeping electrons from the field
reveal that dose from the contaminating electrons resembles an electron
beam with a practical range somewhat greater than the depth of
maximum dose. A reasonable agreement with measured depth–dose
curves can be obtained by scaling the contamination electron depth–dose
curve with the surface dose and adding this component to the
convolution-computed dose distribution.
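
A minimal example of this scaling approach is sketched below, assuming a hypothetical measured surface dose and an exponential contamination-electron depth–dose shape; clinical implementations fit these parameters to measured data.

"""Sketch of adding a contamination-electron component to a convolution PDD."""
import numpy as np

depth_cm = np.linspace(0.0, 10.0, 101)
# Toy convolution-computed percent depth dose (illustrative shape only).
pdd_convolution = 100.0 * (1.0 - np.exp(-2.0 * depth_cm)) * np.exp(-0.04 * depth_cm)
measured_surface_dose = 25.0   # percent of maximum, hypothetical

# Contamination electron depth-dose shape: falls off within a few centimeters.
contamination_shape = np.exp(-depth_cm / 1.2)

# Scale the contamination curve so the total surface dose matches measurement.
scale = measured_surface_dose - pdd_convolution[0]
pdd_total = pdd_convolution + scale * contamination_shape
print(pdd_total[:5].round(1))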

Kernel Spatial Variance and Phantom Heterogeneities


The convolution equation assumes that the kernel is spatially invariant
in that the kernel value depends only on the relative geometrical
relationship between the interaction and dose deposition sites and not
on their absolute position in the phantom. When this is true, the
convolution calculation can be done in Fourier space, saving much time.
Unfortunately, this is not the case as the kernel varies with position.
The effects of hardening and divergence of the beam are small and can
be calculated in a number of ways. A multiplicative correction to the
terma in the patient can be used to correct for hardening of the kernel
(17,26). Alternatively, several kernels valid for different depths in the
phantom can be used as a basis for interpolation to a specific depth
(17,19). Liu et al. showed that the correction as a function of depth is
nearly linear, and not employing any correction results in ∼4%
discrepancy at 30-cm depth. Tilting the kernel to match the beam
divergence results in only a minor improvement in accuracy for the
worst-case examples (19).
Phantom heterogeneities are a more serious problem. Modeling the
transport of electrons and scattered photons through a heterogeneous
phantom would require a unique kernel at each location. Each kernel
would be superimposed on the dose grid and weighted with respect to
the primary terma. What is required to make the calculation tractable is
to modify a kernel, computed in a homogeneous medium, to be
reasonably representative in a heterogeneous situation. If most of the
energy between the primary interaction site and the dose deposition site
is transported on the direct path between these sites, it is possible to
have a relatively simple correction to the convolution equation based on
ray-tracing between the interaction and dose deposition sites, and on
scaling the path length by density to get the radiologic path length
between these sites. The convolution equation modified for radiologic
path length is called the superposition equation:

D(r) = ∫ TP(ρr′ · r′) A(ρr−r′ · (r − r′)) dr′   (4.5)

where ρr−r′ · (r − r′) is the radiologic distance from the dose deposition
site to the primary photon interaction site and ρr′ · r′ is the radiologic
distance from the source to the photon interaction site.
Woo and Cunningham (15) compared the modified kernel using range
scaling for a complex heterogeneous phantom with a kernel computed de
novo for a particular interaction site inside the phantom. The results
shown in Figure 4.5 indicate that agreement is not perfect, but the
computational trends are clearly in evidence in that isovalue lines
contract in high-density regions and expand in low-density regions.
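
The essential operation of the superposition equation, namely evaluating the water kernel at the radiologic rather than the geometric distance between the interaction and dose deposition sites, can be sketched as follows (the kernel shape and densities are hypothetical):

"""Sketch of the density-scaled (superposition) kernel lookup."""
import numpy as np


def homogeneous_kernel(distance_cm):
    """Toy radially symmetric energy-deposition kernel in water."""
    return np.exp(-2.0 * distance_cm) / np.maximum(distance_cm, 0.1) ** 2


def density_scaled_kernel(path_densities, step_cm):
    """Evaluate the water kernel at the radiologic distance along the path."""
    radiologic_distance = np.sum(path_densities) * step_cm
    return homogeneous_kernel(radiologic_distance)


# Path from interaction site to dose voxel crossing lung (density 0.26 g/cm^3).
path = np.array([1.0, 1.0, 0.26, 0.26, 0.26, 1.0])
print(density_scaled_kernel(path, step_cm=0.5))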

FIGURE 4.5 Comparison of Monte Carlo–generated 6-MeV primary photon kernel in a water
phantom containing a ring of air. The dashed line is a kernel modified for the heterogeneous
situation using range scaling from one derived in a homogeneous phantom. The continuous line is
a kernel computed expressly for the heterogeneous situation. It is impractical to compute kernels
for every possible heterogeneous situation, and there is sufficient similarity to warrant the range
scaling approximation. (Reprinted from Woo MK, Cunningham JR. The validity of the density
scaling method in primary electron transport for photon and electron beams. Med Phys.
1990;17:187–194, with permission.)

MONTE CARLO
The Monte Carlo (MC) technique of radiation transport consists of using
well-established probability distributions governing the individual
interactions of electrons and photons to simulate their transport through
matter. MC methods are used to perform calculations in all areas of
physics and math for any problems which involve a probabilistic nature.
Several excellent reviews of MC calculations in radiation therapy exist
(27–31), as well as an AAPM Task Group Report which discusses its
clinical implementation (32).
Although the MC method had been proposed for some time, it was not
capable of being fully utilized until the development of the digital
computer in the 1940s. Radiation transport was one of the first uses for
this methodology at that time, and public codes, such as Monte Carlo N-
Particle Transport code (MCNP) began appearing as early as the 1950s.
In photon transport calculations, the Electron Transport (ETRAN) code,
developed by the National Bureau of Standards in the 1970s, was based
on the condensed history technique (discussed below) first introduced by
Berger in 1963. The Electron Gamma Shower (EGS4) code was originally
developed at the Stanford Linear Accelerator in the 1980s, and is now
maintained (as the modified EGSnrc) by the National Research Council
of Canada (33).

Analog Simulations
As pointed out by TG 105, an analog simulation is the random
propagation of a particle through the following four steps: (1)
determining the distance to the next interaction, (2) transporting the
particle to the interaction site, (3) selecting which interaction will take
place, and (4) simulating this interaction (32). The initial step is
performed based on the probability that the particle will interact within
the medium in question. For example, if the probability of interaction is
represented by an attenuation coefficient μ, a random interaction
distance r can be determined from a random number ε (between 0 and
1) by the following (30,32):

r = −ln(ε)/μ   (4.6)

The second step is relatively straightforward, but knowledge of the mass
density (and the corresponding changes in μ) is required for
heterogeneous materials. Another random choice will be made for step
3, weighted proportionally to the relative probabilities of interaction
choices (e.g., Compton scatter vs. photoelectric absorption vs. pair
production). Finally the results of the interaction must be randomly
simulated, including the particle's new energy (if not absorbed) and
trajectory.
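
The following toy loop makes these four steps concrete for photons in a homogeneous slab. The cross-sections, the crude energy-loss model, and the neglect of directional changes are purely illustrative; production MC codes such as EGSnrc or MCNP implement the full interaction physics.

"""Toy analog Monte Carlo loop for photons in a homogeneous slab."""
import math
import random

MU_TOTAL = 0.05          # total attenuation coefficient (1/cm), illustrative
P_COMPTON = 0.95         # relative probability of Compton vs photoelectric
SLAB_THICKNESS_CM = 30.0


def one_history():
    z, energy, alive = 0.0, 6.0, True          # depth (cm), energy (MeV)
    while alive:
        # Step 1: distance to next interaction from a uniform random number.
        step = -math.log(1.0 - random.random()) / MU_TOTAL
        # Step 2: transport the particle.
        z += step
        if z > SLAB_THICKNESS_CM:
            return "escaped"
        # Step 3: select the interaction type.
        if random.random() < P_COMPTON:
            # Step 4: crude Compton model -- lose a random fraction of energy.
            energy *= random.uniform(0.3, 0.9)
            if energy < 0.05:
                alive = False                  # follow the photon to a cutoff energy
        else:
            alive = False                      # photoelectric absorption
    return "absorbed"


results = [one_history() for _ in range(10000)]
print(results.count("absorbed"), "absorbed,", results.count("escaped"), "escaped")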

Condensed Histories
While analog simulations work well for photon interactions, a practical
problem arises for the transport of electrons. The mean free path for
electrons in the therapeutic energy range is of the order of 10⁻⁵ g/cm².
This means that a single electron of energy >1 MeV will have more than
10⁵ interactions before stopping. To perform an analog simulation of this
event is impractical.
The condensed history electron transport technique was first
introduced by Berger in 1963. Berger noted that most electron inelastic
interactions did not lose a great deal of energy or have a significant
directional change. These “soft” interactions could be separated by more
significant “catastrophic” events, where the electron had a significant
energy loss (e.g., delta ray production, Bremsstrahlung event). The soft
interactions could be separately simulated by combining these into
single virtual large-effect interactions, while the catastrophic events can
be analog simulated as described above (30). For electrons with energies
above an energy threshold, the mean free path for catastrophic
interactions is several orders of magnitude longer.
While this approach allows for faster computations, the step size
choice for the condensed histories has been shown to produce artifacts in
the results (34). However, these issues have led to improved, high-
accuracy condensed history methods (34–36).

Variance Reduction Techniques


Instead of simulating individual events as in an analog simulation, one
may employ techniques to improve the MC calculation efficiency in
obtaining a particular result. Variance reduction techniques are used to
reduce the variance of a given calculation result for a given number of
histories. For example, an analog simulation of a Bremsstrahlung event
would randomly select an energy and direction (proportional to their
respective probabilities) for the resulting photon emission. Alternatively,
one could simulate the emission of a large number of photons with lower
weights to better mimic the random directional emission of these events.
This particular technique is termed Bremsstrahlung splitting (37) and is
one of many different techniques which may be employed within a
particular MC code. These techniques are important in keeping
calculation times practical for most situations. However, care must be
taken that these techniques do not bias the underlying physics of the result.
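
The particle-splitting idea behind Bremsstrahlung splitting can be illustrated as follows: one sampled event is replaced by N photons, each carrying statistical weight 1/N, so the expectation value is preserved while the photon field is sampled more densely. The sampling routine below is a placeholder, not the algorithm of any particular code.

"""Sketch of particle splitting as a variance reduction technique."""
import random

N_SPLIT = 20   # splitting factor, illustrative


def sample_bremsstrahlung_photon():
    """Placeholder sampling of photon energy (MeV) and polar angle (rad)."""
    return {"energy": random.uniform(0.1, 6.0), "angle": random.uniform(0.0, 0.3)}


def split_bremsstrahlung_event():
    """Emit N_SPLIT photons, each with statistical weight 1/N_SPLIT."""
    return [dict(sample_bremsstrahlung_photon(), weight=1.0 / N_SPLIT)
            for _ in range(N_SPLIT)]


photons = split_bremsstrahlung_event()
print(len(photons), sum(p["weight"] for p in photons))   # 20 photons, total weight 1.0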

Monte Carlo Radiotherapy Dose Calculations


In principle it is possible to simulate histories for the entire radiation
therapy delivery, that is, from the initial accelerated electron’s impact
onto the target to the delivery dose. However, this would be a
tremendously inefficient process, as few of these histories would make it
beyond the accelerator head to the patient. Alternatively, it is possible to
transport the particles through the patient-independent structures (e.g.,
target, primary collimator, ion chamber), and store this information for
future use. This information is known as a phase-space file, and it
contains information including the position, energy, and direction of the
photons and electrons emitted from the accelerator.
FIGURE 4.6 Illustration of the components of a typical Varian linear accelerator treatment head in
photon beam mode. Phase space planes for simulating patient-dependent and patient-
independent structures are also represented. (Reprinted from Chetty IJ, Curran B, Cygler JE, et
al. Report of the AAPM Task Group No. 105: Issues associated with clinical implementation of
Monte Carlo-based photon and electron external beam treatment planning. Med Phys.
2007;34:4818–4853, with permission.)

Figure 4.6 shows a cross-sectional view of a Varian linac which
demonstrates a possible location of a phase-space plane located distal to
all the patient-independent components of the accelerator head. Once
initially computed, this information may be continually used to calculate
the dose to individual patients. It may also be advantageous to create
phase-space planes further down the beam path (cf. phase-space plane
2 in Fig. 4.6), particularly for standard collimator settings and/or beam
modifiers. Otherwise, these data can be projected directly onto the
patient for dose calculation.
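
Conceptually, a phase-space file is simply a list of particle records crossing the scoring plane. The sketch below writes and reads such records using an ad hoc binary layout; it is not the IAEA phase-space format or any vendor's file structure.

"""Sketch of writing and reading a simple phase-space file."""
import struct

# Record: particle type, energy, position (x, y, z), direction cosines (u, v, w).
RECORD = struct.Struct("<if fff fff")


def write_phase_space(path, particles):
    with open(path, "wb") as f:
        for p in particles:
            f.write(RECORD.pack(p["type"], p["energy"],
                                *p["position"], *p["direction"]))


def read_phase_space(path):
    particles = []
    with open(path, "rb") as f:
        while chunk := f.read(RECORD.size):
            ptype, energy, x, y, z, u, v, w = RECORD.unpack(chunk)
            particles.append({"type": ptype, "energy": energy,
                              "position": (x, y, z), "direction": (u, v, w)})
    return particles


# Example: one 6-MeV photon crossing the scoring plane heading straight down.
write_phase_space("demo.phsp", [{"type": 0, "energy": 6.0,
                                 "position": (0.0, 0.0, 10.0),
                                 "direction": (0.0, 0.0, -1.0)}])
print(read_phase_space("demo.phsp"))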

DISCRETE ORDINATES METHOD


More recently several authors have reported on a direct numerical
solution of the Boltzmann transport equations (BTEs). The approach has
been commercialized in the Varian Eclipse Treatment Planning System,
under the name Acuros. In particular, this methodology has been
proposed as an alternative to MC calculations, in order to produce
accurate dose distributions with a substantially reduced calculation time.

Derivation of the Transport Equations


The linear Boltzmann Transport Equation can be derived by assuming
particle conservation within a small volume element of phase space
(38–40). We define a quantity called the angular density of electrons,
Ne(r,Ω,E,t), which represents the probable number of electrons at
location r and direction Ω with energy E at time t per unit volume per
unit solid angle per unit energy. Ω represents the unit vector in the
direction of motion, that is, parallel to v. Thus, Ne(r,Ω,E,t) dV dΩ dE
represents the number of electrons at time t in a volume element dV
about r, in a narrow beam of solid angle dΩ about Ω, and energy range
dE about E.
After a time Δt, these electrons have moved to position r + v Δt, and
have been reduced due to collisions within the medium by an amount
e^(−σvΔt). Here σ is the macroscopic cross-section for electrons, and represents
1/λ, where λ is the mean free path. Although not strictly a cross-section,
σ is analogous to the photon attenuation coefficient and has units of
1/length. For very short times Δt, the number of electrons from this
packet which have reached r + v Δt is ≈ Ne (1 – σ v Δt) dV dΩ dE.
At the same time scattered electrons from elsewhere in the medium
may reach the same position (r + v Δt). This quantity may be
determined by integrating the angular density over phase space
multiplied by the probability for these interactions:

∫dE′ ∫dΩ′ σs(r, E′→E, Ω′→Ω) v′ Ne(r,Ω′,E′,t) Δt dV dΩ dE   (4.7)

where σs(r, E′→E, Ω′→Ω) represents the doubly differential cross-section
for electron scatter from energy E′ and direction Ω′ to energy E and
direction Ω.
In addition, any additional sources of electrons produced during time
Δt may also reach position r + v Δt. In this case, the number of
additional electrons at r + v Δt becomes Q(r,Ω,E,t) Δt, where Q(r,Ω,E,t)
represents the rate of electron production from other sources. The total
number of electrons at position r + v Δt is now given by the following
equation:

Ne(r + vΔt, Ω, E, t + Δt) dV dΩ dE = [Ne(1 − σvΔt) + (∫dE′ ∫dΩ′ σs v′ Ne(r,Ω′,E′,t)) Δt + QΔt] dV dΩ dE   (4.8)

Dividing the equation by Δt, and taking the limit Δt → 0, we obtain

lim(Δt→0) [Ne(r + vΔt, Ω, E, t + Δt) − Ne(r, Ω, E, t)]/Δt + σvNe = ∫dE′ ∫dΩ′ σs v′ Ne(r,Ω′,E′,t) + Q   (4.9)

The limit term represents the total time derivative of Ne for an observer
moving with the packet of electrons (i.e., from r to r + vΔt). It may be
rewritten to simplify the equation:

lim(Δt→0) [Ne(r + vΔt, Ω, E, t + Δt) − Ne(r, Ω, E, t)]/Δt = vΩ·∇Ne + ∂Ne/∂t   (4.10)

The first term in Equation (4.10) represents the velocity times the
directional derivative of Ne in the direction of Ω. It is known as the
streaming term, as it represents the difference in the time derivative
between the moving and rest frames, the latter of which also includes
the effects of electrons moving past r without any collisions.
Upon inserting (4.10) into (4.9), the resulting equation becomes:

vΩ·∇Ne + ∂Ne/∂t + σvNe = ∫dE′ ∫dΩ′ σs v′ Ne′ + Q   (4.11)

where we have removed the arguments for simplicity. This is the basic
form of the transport equation, which is often called the Boltzmann
equation because of its similarity to the expression derived by Boltzmann
involving the kinetic theory of gases (39). It is more often written in
terms of the angular flux, Ψe, where Ψe(r,Ω,E,t) = vNe(r,Ω,E,t):

Ω·∇Ψe + (1/v)∂Ψe/∂t + σΨe = ∫dE′ ∫dΩ′ σs Ψe′ + Q   (4.12)
Use of the Transport Equations for Photon Beam Calculations
In external beam radiotherapy, the time-independent form of Equation
(4.12) is used, since steady state is achieved in a time much shorter than
the beam-on time (41). Equation (4.12) is an integro-partial–
differential equation which can be solved numerically using either
stochastic or deterministic methods. Most reports have utilized the latter,
employing some form of grid-based numerical method in which phase
space is discretized in spatial, angular, and energy coordinates
(40,42,43), although there are some differences in the literature about
which techniques are used. Finite difference and finite element methods
are used for spatial discretization, and Boman et al. reported using the
finite-element method for all variables (40). Alternatively, the method of
discrete-ordinates has been employed for angular discretization in the
Attila solver (44,45), and in the subsequent Acuros XB algorithm
currently available in the Varian Eclipse treatment planning system
(Varian Assoc, Palo Alto, CA). Energy-dependent coupled photon–
electron cross-section data are available through CEPXS, which uses the
multigroup method to discretize the particle energy domain into energy
intervals or groups (46). This class of solvers is commonly known as the
discrete ordinates method, although technically the name only refers to
the method for numerically discretizing in angle.
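
To make the idea of discretizing in angle, space, and energy concrete, the following sketch solves a one-group, 1D slab transport problem with Gauss–Legendre discrete ordinates, a diamond-difference spatial sweep, and source iteration for isotropic scattering. It is a generic textbook S_N solver, not the Attila or Acuros XB algorithm, and all data are illustrative.

"""Minimal one-group, 1D discrete-ordinates (S_N) solver sketch."""
import numpy as np

n_cells, width = 100, 10.0            # slab geometry (cm)
dx = width / n_cells
sigma_t, sigma_s = 0.5, 0.3           # total and scattering cross-sections (1/cm)
mu, w = np.polynomial.legendre.leggauss(8)   # discrete ordinates and weights

phi = np.zeros(n_cells)               # scalar flux
for _ in range(200):                  # source iteration on the scattering source
    q = 0.5 * sigma_s * phi           # isotropic scattering source per unit direction cosine
    phi_new = np.zeros(n_cells)
    for mu_m, w_m in zip(mu, w):
        psi_edge = 1.0 if mu_m > 0 else 0.0   # unit incident flux on the left face only
        cells = range(n_cells) if mu_m > 0 else range(n_cells - 1, -1, -1)
        for i in cells:
            # diamond-difference relation for the cell-average angular flux
            denom = abs(mu_m) / dx + 0.5 * sigma_t
            psi_avg = (abs(mu_m) / dx * psi_edge + 0.5 * q[i]) / denom
            psi_edge = 2.0 * psi_avg - psi_edge   # outgoing edge flux
            phi_new[i] += w_m * psi_avg
    if np.max(np.abs(phi_new - phi)) < 1e-8:
        phi = phi_new
        break
    phi = phi_new

print(phi[:5].round(4), phi[-5:].round(4))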
Up to now, we have only discussed electron angular density (or
angular flux). However, in external beam calculations, collisions involve
photons, electrons, and positrons. In principle, Equation (4.12) then
becomes a set of coupled equations. For example, excluding positron
interactions, we have the following:

Ω·∇Ψγ + σγΨγ = ∫dE′ ∫dΩ′ σs^(γ→γ) Ψγ′ + Qγ
Ω·∇Ψe + σeΨe = ∫dE′ ∫dΩ′ [σs^(e→e) Ψe′ + σs^(γ→e) Ψγ′] + Qe   (4.13)

where σs^(1→2)(r, E′→E, Ω′→Ω) represents the differential cross-section for the
creation of particle 2 with energy E, direction Ω, from particle 1 of
energy E′, direction Ω′.

Acuros XB Implementation of the Linear Boltzmann


Transport Equations
Currently the only commercial implementation of the linear BTE is the
Acuros XB dose calculation algorithm available on the Varian Eclipse
treatment planning system. Acuros XB was developed using many of the
methods employed with a prototype BTE solver developed at Los Alamos
National Laboratory called Attila, which was co-authored by the
founders of Transpire, Inc. (Gig Harbor, WA) (47). Transpire, Inc.,
established a licensing agreement to commercialize Attila, for a broad
range of applications. Acuros XB has adapted and optimized the methods
within Attila for external photon beam calculations (48).
Within the Acuros algorithm, both charged pair-production particles
are assumed to be electrons, and the contribution of electron-produced
Bremsstrahlung within the patient is assumed to be deposited locally.
As already mentioned, energy discretization is performed using a
multigroup representation of the cross-section. However, this is difficult
for electrons where the inelastic cross-section increases rapidly when
energy losses become small. These “soft” interactions would require a
very large number of energy bins to accurately describe, which is
impractical for an efficient solution. As a result, electron interactions are
separated into large and small energy losses, the latter of which are
described by a continuous slowing-down (CSD) approximation. In this
case, the angular electron fluence is described by the Boltzmann–
Fokker–Planck transport equation:

FIGURE 4.7 Comparison of EGS4/PRESTA with Attila for a percent depth–dose calculation in a
heterogeneous phantom. (Reprinted from Gifford KA, Horton JL, Wareing TA, et al. Comparison of
a finite-element multigroup discrete-ordinates code with Monte Carlo for radiotherapy calculations.
Phys Med Biol. 2006;51:2253–2265, with permission.)

In this case σee represents larger, "catastrophic" interactions that are
represented by standard Boltzmann scattering (48).
Gifford et al. (44) first performed an evaluation of the prototype solver
Attila for radiation therapy dose calculations. Dose calculations
performed by Attila were directly compared with those calculated using
MC codes MCNPX for a brachytherapy calculation, and EGS4 for an
external photon beam calculation. Differences in doses were compared,
along with relative calculation speeds.
The photon dose calculation comparison was made comparing Attila
versus EGS4 for an 18-MV photon beam from a Varian 2100 accelerator.
A narrow beam geometry was used to highlight any differences in
regions of electron disequilibrium. In addition, a heterogeneous
multislab phantom was used, which consisted of water, lung, and
aluminum. The Attila calculation used 24 photon and 36 electron energy
groups. A comparison of depth doses between these two calculations is
shown in Figure 4.7. The agreement here was also good, with an RMS
difference of 0.7% of the maximum
dose.
Vassiliev et al. (45) extended these comparisons to include external
beam calculations of heterogeneous patient geometries. In their work,
Attila was compared with EGSnrc MC simulations for a 6-MV photon
beam from a Varian 2100 for a prostate and a head and neck case
previously treated within their department. For both the BTE and MC
calculations, CT datasets were converted into a material map of four
materials with fixed densities: air, adipose tissue, soft tissue, and bone.
In their comparison, calculations were performed with the same beam
geometries as those used clinically, with the exception of beam
modulation which was removed for this comparison. Dose calculation
differences were investigated along with the resolution of various
discretization variables required for accurate Attila calculations.
Figure 4.8 displays the material map for an axial slice through the
center of the PTV for the head and neck case. Also displayed is the
resulting dose distribution performed using the Attila dose calculation
engine. The black areas on each image are regions where the difference
between the MC and BTE calculations exceeded 5% of the maximum
dose. A more quantitative comparison can be seen by looking at a dose
profile through the center of the PTV (Line “L1” on Fig. 4.8) and off-axis
(Line “L2”). The dose profiles for both the Attila and EGSnrc codes for
these lines are displayed in Figure 4.9. The overall agreement between
these two methods was good, with over 98% of the calculation points
within a ±3%/±3 mm criterion.
Since the release of Acuros XB, there have been a number of planning
studies investigating the efficiency and accuracy of the discrete ordinates
method. Comparisons are made either with other planning algorithms or
with measured results for a variety of treatment sites including lung
(49,50), breast (51), and nasopharynx (52). Evaluations of Acuros for
calculations within heterogeneous media (53) or for radiosurgery
treatments (54) have also been reported. The interested reader is
referred to these works for additional information.

FIGURE 4.8 A: Dose field calculated by Attila for a head-and-neck case on the axial plane
through isocenter. Pixels, where the dose difference between Attila and Monte Carlo (EGS)
exceeds 3%/3 mm, are shown in black on A and B. B: Material map through the axial plane
containing the isocenter for the dose distribution calculated in A. (Reprinted from Vassiliev ON,
Wareing TA, Davis IM, et al. Feasibility of a multigroup deterministic solution method for three-
dimensional radiotherapy dose calculations. Int J Radiat Oncol Biol Phys. 2008;72:220–227, with
permission.)
FIGURE 4.9 Dose line plot comparisons between EGSnrc (red) and Attila (blue) along line L1 (A)
and L2 (B) in Figure 4.8. Sharp peaks and dips in the Attila solution correspond to material
heterogeneities, which are resolved at the CT image pixel level by Attila. (Reprinted from
Vassiliev ON, Wareing TA, Davis IM, et al. Feasibility of a multigroup deterministic solution method
for three-dimensional radiotherapy dose calculations. Int J Radiat Oncol Biol Phys. 2008;72:220–227,
with permission.)

KEY POINTS
• Modern radiation therapy planning systems have evolved
tremendously over the past few decades. A number of complex
model-based photon dose algorithms exist which calculate dose to
a 3D representation of the patient. These algorithms have been
developed in response to improvements in algorithm development,
computing power, and greater availability of volumetric imaging
data.
• Today, most commercial photon dose algorithms are a variation of
the convolution/superposition method. As algorithm development
and computing power improve, the use of MC and discrete-
ordinates methods which better incorporate nonequilibrium
dosimetry will likely increase.
• A convolution/superposition model should account for the following
characteristics:
• Off-axis energy variations
• Finite source size
• Extrafocal radiation
• Scatter and attenuation from beam modifying devices

• MC algorithms track histories of individual photons and electrons
that undergo hard (“catastrophic”) collisions. Soft electron collisions
are dealt with using condensed history methods.
• The discrete ordinates method represents a numerical solution to
the coupled Boltzmann transport equations. In this method, the
energy, position, and direction of the radiation quanta are
discretized for the numerical solution of the integro-differential
equations.
• Photon beam optimization presents additional challenges within the
planning process. The ability to calculate dose rapidly under
conditions of changing incident fluence is essential in a modern
radiotherapy clinic.
QUESTIONS
1. Which of the following algorithms is/are measurement based?
A. Bentley–Milan
B. Convolution/Superposition
C. Monte Carlo
D. Discrete ordinates
2. Which of the following is/are used to speed up a Monte Carlo
dose calculation? Choose all that apply.
A. Variance reduction technique
B. Condensed history method
C. Kernel tilting
D. Density scaling
3. Which of the following algorithms account for nonequilibrium
conditions (e.g., at tissue interfaces)? Choose all that apply.
A. Bentley–Milan
B. Convolution/Superposition
C. Monte Carlo
D. Discrete ordinates
4. For a typical 6-MV beam, an error in CT number of 2% leads to
an error in dose of around:
A. 0.1%
B. 1%
C. 5%
D. 10%
5. The inelastic photon scattering processes which must be
accounted for include
A. Rayleigh scattering
B. Moller scattering
C. Photoelectric absorption
D. Bremsstrahlung interactions
6. The convolution dose equation cannot be solved using Fourier
analysis primarily because
A. Scatter kernels are depth dependent
B. Patient heterogeneities
C. Beam hardening within the patient
D. Step-size artifacts

ANSWERS
1. A
2. A and B
3. C and D
4. B
5. C
6. B

REFERENCES
1. Tsien KC. The application of automatic computing machines to
radiation treatment planning. Br J Radiol. 1955;28(332):432–439.
2. Use of Computers in External Beam Radiotherapy Procedures With High-
Energy Photons and Electrons. ICRU Report 42. Bethesda, MD:
International Commission on Radiation Units and Measurements (ICRU); 1987.
3. Mackie TR, Liu HH, McCullough EC. Treatment Planning Algorithms:
Model-Based Photon Dose Calculations in Treatment Planning in
Radiation Oncology. Philadelphia, PA: Lippincott Williams & Wilkins;
2012:773.
4. Sontag MR, Battista JJ, Bronskill MJ, et al. Implications of computed
tomography for inhomogeneity corrections in photon beam dose
calculations. Radiology. 1977;124(1):143–149.
5. Jaffray DA, Battista JJ, Fenster A, et al. X-ray sources of medical
linear accelerators: focal and extra-focal radiation. Med Phys.
1993;20(5):1417–1427.
6. Khan FM, Gibbons JP. Khan’s the Physics of Radiation Therapy. 5th ed.
Philadelphia, PA: Lippincott, Williams & Wilkins; 2014:624.
7. McCullough EC, Gortney J, Blackwell CR. A depth dependence
determination of the wedge transmission factor for 4–10 MV photon
beams. Med Phys. 1988;15(4):621–623.
8. Palta JR, Daftari I, Suntharalingam N. Field size dependence of
wedge factors. Med Phys. 1988;15(4):624–626.
9. Sauer OA. Calculation of dose distributions in the vicinity of high-Z
interfaces for photon beams. Med Phys. 1995;22(10):1685–1690.
10. Battista JJ, Sharpe MB. True three-dimensional dose computations
for megavoltage x-ray therapy: a role for the superposition principle.
Australas Phys Eng Sci Med. 1992;15(4):159–178.
11. Mackie TR, Scrimger JW, Battista JJ. A convolution method of
calculating dose for 15-MV x-rays. Med Phys. 1985;12(2):188–196.
12. Boyer A, Mok E. A photon dose distribution model employing
convolution calculations. Med Phys. 1985;12(2):169–177.
13. Mohan R, Chui C, Lidofsky L. Differential pencil beam dose
computation model for photons. Med Phys. 1986; 13(1):64–73.
14. Kubsad SS, Mackie TR, Gehring MA, et al. Monte Carlo and
convolution dosimetry for stereotactic radiosurgery. Int J Radiat
Oncol Biol Phys. 1990; 19(4):1027–1035.
15. Woo MK, Cunningham JR. The validity of the density scaling
method in primary electron transport for photon and electron
beams. Med Phys. 1990;17(2):187–194.
16. Metcalfe PE, Hoban PW, Murray DC. Beam hardening of 10 MV
radiotherapy x-rays: analysis using a convolution/superposition
method. Phys Med Biol. 1990;35(11):1533–1549.
17. Papanikolaou N, Mackie TR, Meger-Wells C, et al. Investigation of
the convolution method for polyenergetic spectra. Med Phys.
1993;20(5):1327–1336.
18. Liu HH, Mackie TR, McCullough EC. A dual source photon beam
model used in convolution/superposition dose calculations for
clinical megavoltage x-ray beams. Med Phys. 1997;24(12):1960–
1974.
19. Liu HH, Mackie TR, McCullough EC. Correcting kernel tilting and
hardening in convolution/superposition dose calculations for
clinical divergent and polychromatic photon beams. Med Phys.
1997;24(11):1729–1741.
20. Ahnesjo A, Andreo P, Brahme A. Calculation and application of
point spread functions for treatment planning with high energy
photon beams. Acta Oncol. 1987;26(1):49–56.
21. Mackie TR, Bielajew AF, Rogers DW, et al. Generation of photon
energy deposition kernels using the EGS Monte Carlo code. Phys
Med Biol. 1988;33(1):1–20.
22. Mohan R, Chui C, Lidofsky L. Energy and angular distributions of
photons from medical linear accelerators. Med Phys.
1985;12(5):592–597.
23. Chaney EL, Cullip TJ, Gabriel TA. A Monte Carlo study of
accelerator head scatter. Med Phys. 1994;21(9):1383–1390.
24. Lovelock DM, Chui CS, Mohan R. A Monte Carlo model of photon
beams used in radiation therapy. Med Phys. 1995;22(9):1387–1394.
25. Sharpe MB, Jaffray DA, Battista JJ, et al. Extrafocal radiation: a
unified approach to the prediction of beam penumbra and output
factors for megavoltage x-ray beams. Med Phys. 1995;22(12):2065–
2074.
26. Hoban PW, Murray DC, Round WH. Photon beam convolution using
polyenergetic energy deposition kernels. Phys Med Biol.
1994;39(4):669–685.
27. Rogers DWO, Bielajew AF. Monte Carlo techniques of electron and
photon transport for radiation dosimetry. In: Kase K, Bjarngard BE,
Attix F, eds. The Dosimetry of Ionizing Radiation. New York, NY:
Academic Press; 1990:427–539.
28. Mackie TR. Applications of the Monte Carlo Method. In: Kase K,
Bjarngard BE, Attix F, eds. The Dosimetry of Ionizing Radiation. New
York, NY: Academic Press; 1990:541–620.
29. Andreo P. Monte Carlo techniques in medical radiation physics.
Phys Med Biol. 1991;36(7):861–920.
30. Bielajew AF. The Monte Carlo simulation of radiation transport. In:
Curran B, Balter J, Chetty I, eds. AAPM Monograph No. 32:
Integrating New Technologies into the Clinic: Monte Carlo and Image-
Guided Radiation Therapy. Madison, WI: Medical Physics Publishing;
2006:697.
31. Rogers DW. Fifty years of Monte Carlo simulations for medical
physics. Phys Med Biol. 2006;51(13):R287–R301.
32. Chetty IJ, Curran B, Cygler JE, et al. Report of the AAPM Task
Group No. 105: Issues associated with clinical implementation of
Monte Carlo-based photon and electron external beam treatment
planning. Med Phys. 2007;34(12):4818–4853.
33. Kawrakow I. Accurate condensed history Monte Carlo simulation of
electron transport. I. EGSnrc, the new EGS4 version. Med Phys.
2000;27(3):485–498.
34. Bielajew AF, Rogers DWO, Jenkins TW, et al. Monte Carlo Transport
of Electrons and Photons. New York, NY: Plenum; 1988:115–137.
35. Kawrakow I, Bielajew AF. On the condensed history technique for
electron transport. Nucl Instrum Methods Phys Res B. 1998;142:253–
280.
36. Seltzer SM. Electron-photon Monte Carlo calculations: The ETRAN
code. Intl J Appl Radiat Isot. 1991;42:917–941.
37. Kawrakow I, Rogers DW, Walters BR. Large efficiency
improvements in BEAMnrc using directional bremsstrahlung
splitting. Med Phys. 2004;31(10):2883–2898.
38. Case KM, Zweifel PF. Linear Transport Theory. Reading, MA:
Addison-Wesley; 1967.
39. Bell GI, Glasstone S. Nuclear Reactor Theory. New York, NY: Van
Nostrand Reinhold Company; 1970:619.
40. Boman E, Tervo J, Vauhkonen M. Modelling the transport of
ionizing radiation using the finite element method. Phys Med Biol.
2005;50(2):265–280.
41. Borgers C. Complexity of Monte Carlo and deterministic dose-
calculation methods. Phys Med Biol. 1998;43(3):517–528.
42. Lewis EE, Miller WFJ. Computational Methods of Neutron Transport.
New York, NY: Wiley; 1984.
43. Dautray R, Lions JL. Mathematical Analysis and Numerical Methods
for Science and Technology. Berlin: Springer; 1993.
44. Gifford KA, Horton JL, Wareing TA, et al. Comparison of a finite-
element multigroup discrete-ordinates code with Monte Carlo for
radiotherapy calculations. Phys Med Biol. 2006;51(9):2253–2265.
45. Vassiliev ON, Wareing TA, Davis IM, et al. Feasibility of a
multigroup deterministic solution method for three-dimensional
radiotherapy dose calculations. Int J Radiat Oncol Biol Phys.
2008;72(1):220–227.
46. Lorence L, Morel J, Valdez G. Physics Guide to CEPXS: a Multigroup
Coupled Electron-Photon Cross Section Generating Code. Albuquerque, NM:
Sandia National Laboratories; 1989:110.
47. Wareing TA, McGhee JM, Morel JE, et al. Discontinuous finite
element Sn methods on 3-D unstructured grids. Nucl Sci Eng.
2001;138:256–268.
48. Eclipse Photon and Electron Algorithms Reference Guide. Varian
Medical Systems, Inc., Palo Alto, CA 94304. P1008611-002-B.
December 2014.
49. Liu HW, Nugent Z, Clayton R, et al. Clinical impact of using the
deterministic patient dose calculation algorithm Acuros XB for lung
stereotactic body radiation therapy. Acta Oncol. 2014;53(3):324–
339.
50. Kroon PS, Hol S, Essers M. Dosimetric accuracy and clinical quality
of Acuros XB and AAA dose calculation algorithm for stereotactic
and conventional lung volumetric modulated arc therapy plans.
Radiat Oncol. 2013;8:149.
51. Fogliata A, Nicolini G, Clivio A, et al. On the dosimetric impact of
inhomogeneity management in the Acuros XB algorithm for breast
treatment. Radiat Oncol. 2011;6:103.
52. Kan MW, Leung LH, Yu PK. Dosimetric impact of using the Acuros
XB algorithm for intensity modulated radiation therapy and
RapidArc planning in nasopharyngeal carcinomas. Int J Radiat Oncol
Biol Phys. 2013;85(1):e73–e80.
53. Kan MW, Leung LH, So RW, et al. Experimental verification of the
Acuros XB and AAA dose calculation adjacent to heterogeneous
media for IMRT and RapidArc of nasopharyngeal carcinoma. Med
Phys. 2013;40(3):031714.
54. Fogliata A, Nicolini G, Clivio A, et al. Accuracy of Acuros XB and
AAA dose calculation for small fields with reference to RapidArc(®)
stereotactic treatments. Med Phys. 2011;38(11):6228–6237.
5 Treatment Planning Algorithms:
Brachytherapy

Kenneth J. Weeks

INTRODUCTION
Brachytherapy involves the treatment of cancer using photon, electron,
and positron emissions from radioisotopes. Brachytherapy was
developed using naturally occurring radioisotopes such as radium 226.
The history, applications, and emission details of radioisotopes are
described elsewhere (1–5). It is the goal of brachytherapy treatment
planning to determine the number of sources, their individual strengths,
and the location of each source relative to the treatment volume, so as to
treat a localized volume to a given minimum dose while respecting
tolerances of normal tissues. It is important to note that the original
brachytherapy clinical applications were developed realizing that
brachytherapy demanded 3-dimensional (3D) planning because the
sources were distributed in three dimensions. Because of this fact and
the absence of computers, these original treatment systems were
all-inclusive. They were systems with rules for distributing the sources,
rules for picking and arranging source strengths, and, given the latter,
precalculated dose-rate tables for determining the dose to a point. The
Manchester, Paris, Stockholm, Memorial, and Quimby systems (1–5) all
specified in alternate ways how to do this for interstitial and
intracavitary implants. See Chapter 15 for discussion of these systems.
From this history we can obtain knowledge of the range of the
radioactive source applications, which is important in devising dose
calculation algorithms. Thus, we summarize guidelines, which include
the following: When distributing lines of sources, attempt to keep them
spaced no closer than 8 mm (smaller volumes) and no farther than 2 cm
(larger volumes) apart. The periphery of the treatment volume is
generally not much farther than 5 cm from the center of gravity of the
source distribution. The very high doses close (less than 5 mm) to the
sources are not prescribed or evaluated as to clinical significance. At
distances greater than 10 cm from the center of the implant, the dose
delivered is low and the precise dose is not considered a treatment
objective. Therefore, we conclude that dose calculation algorithms that
are very accurate from 5 mm to 5 cm and generally accurate to 10 cm
are required. The availability of computers and advanced imaging
capabilities means precalculated dose tables for predetermined patterns
of multiple sources are no longer required. Calculation of the dose
distribution for the individual patient’s source distribution is possible.
Radioisotopes decay randomly with a time independent probability
(1,3,5,6). If there are N0 radioactive atoms at time t = 0, then at a later
time t we have N(t) atoms given by

N(t) = N0 e^(−λt)   (Equation 5.1)

where λ (= 0.693/T1/2) is the radioisotope’s decay constant and T1/2 is the
half-life (the time it takes for half of the sample to decay). The activity (A)
at time t is proportional to the number of radioactive atoms present
and is defined by

A(t) = λN(t) = A0 e^(−λt)   (Equation 5.2)

where A0 is the initial activity. Throughout the following we will
consider the calculation of the dose-rate, Ḋ (in cGy/h). The total dose D
delivered during an implant which lasts for time t is found from the
initial dose-rate, Ḋ0, at the start of the implant from

D = Ḋ0 (1 − e^(−λt))/λ   (Equation 5.3)

For the case t >> T1/2, for example, a permanent implant, the total dose
(D) is simply D = Ḋ0/λ = 1.44 T1/2 Ḋ0, whereas if t << T1/2, D = Ḋ0 t.
Throughout the following, we will calculate the dose-rate at the start of
the implant, Ḋ0. The total dose delivered in time t is then found from
Equation 5.3.
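As a concrete illustration of Equations 5.1 to 5.3, the short sketch below (not from the text; the half-life and initial dose-rate are illustrative values only) computes the decayed activity fraction and the cumulative dose for both a short temporary implant and the permanent-implant limit.

```python
import math

def decayed_fraction(t_hours, half_life_hours):
    """N(t)/N0 = A(t)/A0 = exp(-lambda*t), Equations 5.1 and 5.2."""
    lam = math.log(2) / half_life_hours          # decay constant (1/h)
    return math.exp(-lam * t_hours)

def total_dose(dose_rate_0, t_hours, half_life_hours):
    """Total dose of Equation 5.3: D = D0*(1 - exp(-lambda*t))/lambda."""
    lam = math.log(2) / half_life_hours
    return dose_rate_0 * (1.0 - math.exp(-lam * t_hours)) / lam

# Illustrative example: a permanent implant with T1/2 ~ 59.4 days (125I-like)
# and an initial dose-rate of 7 cGy/h at the prescription point.
half_life = 59.4 * 24.0                          # hours
d0 = 7.0                                         # cGy/h at t = 0

print(decayed_fraction(30 * 24.0, half_life))    # fraction of activity left after 30 days
print(total_dose(d0, 48.0, half_life))           # short duration: close to d0 * t
print(total_dose(d0, 1e6, half_life))            # t >> T1/2: approaches d0/lambda
print(1.44 * half_life * d0)                     # hand check of the permanent-implant limit
```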
The isotope emits energy (in the form of photons, electrons, and
sometimes positrons) in all directions and that energy is absorbed in the
mass (tissue) around the isotope, giving rise to absorbed dose (absorbed
energy/mass). The calculation of dose-rate depends on the number of
radioactive atoms, the types and energies of the emitted particles, the
time rate of emission of those particles, and finally the energy absorption
and scattering properties of the surrounding media and the radioactive
material itself. In this chapter, we will begin with the simplest case of a
point source. From there we will use the point source result to determine
the dose-rate for an ideal line source and then a real clinical cylindrical
source. Finally, we will obtain the dose-rate distribution for a 3D
source/shield/applicator via numerical integration of the point source
result. Various intermediate parameterized calculation methods are
discussed. This inevitably leads to systems which explicitly model the
flow of energy from the radioactive sources. These include Monte Carlo
and Boltzmann transport theory. These latter techniques owe their
existence to the extensive computation power now available. The
advantages and disadvantages of these methods will be discussed.

CALCULATION OF DOSE-RATES AROUND A POINT SOURCE
A point source is the simplest situation to calculate. The first
approximation, used in radiation oncology, is to ignore the charged
particle emissions and consider only the photons. The significance of this
approximation is best understood by reviewing the basic nuclear decay
data (7). Consider the well-known 192Ir which has a half-life of 73.8 days
and decays via β decay (95% of the decays) or electron capture (5%).
The decay of a single 192Ir nucleus produces on average 2.38 photons
(there are 44 possible photon energies ranging from 7.8 to 1,378 keV
which can be emitted in a single decay event) and 0.95 β-decay electrons
(with the β decay continuous energy spectrum ranging from 0 to 669
keV). In addition, atomic electrons can be emitted with various discrete
energies ranging from 11 to 1,378 keV. In a single decay, the average
total energy output from photons is 813 keV (note the average energy of
the photons is therefore 341 keV from 813 keV/2.38) and the average
electron energy output is 216 keV (7). Therefore, the total average
energy output per decay is 1,029 keV. Our decision to ignore the emitted
electrons means we are going to ignore around 21% of the total energy
output from the source in our dose-rate calculations. Why is this
justified? The reason is that practical commercial sources used for
Radiation Oncology will be encapsulated radionuclides and that
encapsulation will scatter and slow down the electrons such that most do
not leave the source capsule itself, and for those that do escape, their
range in tissue outside the source is much smaller than 5 mm and thus
will not contribute dose at a clinical prescription distance. This
encapsulation of the source is essential for the clinical use of most
isotopes used in radiation oncology. Historically, in the early days of
radiotherapy, in the United States, a clever technique to enclose radon
gas in glass capsules was developed. The high energy of the β particles
and the light filtration led to unfavorable clinical results (4). We end this
discussion with the observation that there can be a major clinical dose
distribution difference between an encapsulated and an insufficiently
encapsulated radionuclide.
Consider a sample of radioactive material whose largest dimension is
much smaller than 0.1 mm. This will be small enough so that all atoms
can be approximately considered as located at a single point. Restricting
ourselves to the photon emissions from the point source, we will first
think about the dose-rate produced in a small volume (dV) of tissue
located at a distance (r) from the source. For simplicity, consider the
source in vacuum (so no scatter) and that in each decay it emits exactly
one photon with energy E. The situation is shown in Figure 5.1. The
dose-rate in dV must be equal to the product of the following: time rate
of emission of the photons, that is, activity A, the probability of the
photon hitting dV (we will abbreviate that as P(r,dV)), the probability
that the photon of energy E which hits dV actually interacts with dV as
opposed to just passing through (P(E,dV)), and the average amount of
energy (dEabs) that is absorbed in dV whenever a photon interacts with
it, all divided by the mass (m) of dV. The units are energy/mass/time,
which is dose-rate. Explicitly,

Ḋ = A · P(r,dV) · P(E,dV) · dEabs/m   (Equation 5.4)

FIGURE 5.1 Point source (S) emission of photons (wavy lines) in vacuum. Direction is random.
Only the photon emitted straight at dV can deposit energy in dV and only if it randomly interacts
with the material in dV.

FIGURE 5.2 Point source (S) emission of photon radiation in vacuum. Cross-sectional area (da)
of small material volume dV faces the source. dV is moved from radius r to radius R causing a
reduced probability of being hit by the photons by the factor r2/R2.

The first question is, what happens to the dose-rate in dV if we simply
change its distance from r to R (Fig. 5.2)? The two things that do not
change, at all, are the activity of the source and the mass of the tissue
that we move around. P(E,dV) and dEabs should not change if the angles
with which the photons hit the volume are similar, that is, the solid
angle subtended by dV is small. So we are left to focus on the probability
of a photon hitting dV. When a nucleus decays and gives off a photon,
that photon is emitted isotropically, which means the photon is equally
likely to go in any direction. Let the cross-sectional entrance area of the
mass m be denoted as da (Fig. 5.2). Of course, the total surface area at a
distance r is 4πr². So the probability of a photon emitted in a random
direction hitting da after it has traveled a distance r is

P(r,dV) = da/(4πr²)   (Equation 5.5)

If we move dV to a larger distance R, the probability of hitting da now
equals da/(4πR²). Therefore the probability of hitting da has been
reduced by a factor of r²/R² in moving from r to R. This suggests that we
can try to approximate Equation 5.4 as simply

Ḋ(r) ≈ cA/r²   (Equation 5.6)

where we have defined c = P(E,dV)dEabs/m and we are hoping that c is
constant, under the assumption that the factors in c do not change much
as we move the small volume around in vacuum.
In Equation 5.6, it is understood that we should not move the little
volume of tissue someplace where it makes no sense to assume that c
remains constant. A counterexample best illustrates why it is not true
that c is a constant. Suppose that we had moved the volume dV and
centered it on r = 0, so that it completely surrounded the radioisotope.
The factor P(r,dV), where we got r2 from in the first place, is now P(r =
0, dV) = 1, that is, every photon emitted from the radioisotope hits the
volume. One can easily see from Equation 5.4 that the dose-rate does not
become infinite at r = 0 (as Equation 5.6 implies); in fact, depending on
the photon energy (E) and the size of dV, the dose-rate (from the
photons emitted) could be extremely small. This example clearly shows
that the algorithms we devise to calculate dose-rate have their regions of
validity.
Historically, radioisotope emissions were first measured in air. In
particular, the concept of exposure (1,3,5,6) (amount of ionization of air
per unit mass) was used extensively because charge collection in air-
filled cavities is the easiest measurement to make. The process was:
first, measurement of exposure rate in air; second, conversion of that
exposure rate in air to dose-rate to a small amount of tissue in air; and
finally, conversion to dose-rate to a point in the patient. The result of
this process (1,3) led one to define a dose-rate to a small amount of
water at a distance r surrounded by air as

Ḋ(r) = fmed A Γ / r²   (Equation 5.7)

where the single constant c in Equation 5.6 is split into two constants. Γ
is the exposure rate constant (1,3,5) (in units of R cm2/mCi/h) which
represented the conversion of photon energy to ionization of air for the
given isotope and fmed (in units of cGy/R) is the conversion constant
from exposure in air to dose to medium (water) at the average photon
energy given off by the radioisotope. Normally, people choose to express
dose to water since its radiation properties are similar to tissue and
measurements were/are made in water. Values of fmed (range 0.88–0.97
cGy/R) and Γ (range 1.45–13 R cm2/mCi/h) for various radioisotopes
are summarized in the literature (1,3,5).

FIGURE 5.3 Point source (S) emission of photon radiation in homogeneous water medium. Wavy
lines are photons, straight lines are electrons, and crosses (x) mark photon interaction points.
Three photons are followed. Photon a: Compton scatters above dV, the electron produced misses
dV, the Compton photon scatters again (just above dV). The Compton electron after slowing down
deposits its remaining energy in dV. Photon b: It was aimed right at dV coming out of S but
halfway there was scattered. Both the Compton photon and electron miss dV. Photon c: Compton
scatters below dV and the scattered photon heads right for dV and is photoelectrically captured
inside dV, its photoelectron (not shown) is absorbed in dV.

In Equation 5.7, we have the dose-rate to water in air, but what we
ultimately want is the dose-rate in water (i.e., to the patient). Let us look
again at a source radiating photons toward dV but this time in a full
water medium. Figure 5.3 shows three examples of photon histories.
First, a photon that was going to miss dV (a in Fig. 5.3) is scattered
several times; eventually, a secondarily scattered electron deposits a
fraction of the original energy into dV. Second, a photon (b) which is
emitted from the source aimed right at dV interacts on the way there and
no part of its energy reaches dV. Finally, a photon (c) which was going
to miss dV interacts and the scattered photon from that interaction is
completely absorbed in dV.
One thing we might guess is that inverse square is not going to be
valid anymore because how dV absorbs energy is much more
complicated. However, we remember that inverse square was not a law
anyway, and what we want is an approximation in a restricted region of
interest. In any event, we could start by describing the dose-rate to a
point r in water as

Ḋ(r) = (fmed A Γ / r²) e^(−μr) + Ḋscat(r)   (Equation 5.8)

In this equation the major effect of attenuation is represented in the first
term where we exponentially attenuate the in-air dose-rate of Equation
5.7 with the linear attenuation coefficient (μ) for water for the average
energy E emitted by the radionuclide (roughly 0.1 cm–1) (1,3,5). The
exponential attenuation factor takes care of one of these effects above
(photon [b] in Figure 5.3), scatter out of the path from the radionuclide
to dV. Dscat now represents the result of all the various scatter
possibilities and is far more complicated. Equation 5.8 has merely
organized the calculation into a primary part and a secondary scatter
part. Now we note in Figure 5.3 that the attenuation scattering events
[b] reduce the dose-rate in dV but the scatter events [a and c] increase
the dose-rate relative to the (Fig. 5.1) in-vacuum case. Maybe, if we get
lucky, these will cancel out. It turns out that scatter and attenuation
effects do not cancel out at all distances from the source, but close to the
source they almost do and their change with distance farther away can
be simply parameterized. Meisberger et al. (8) showed that the measured
variation in dose-rate in water as r changed from 1 to 10 cm was such as
to establish Equations 5.9 and 5.10 as a good approximation for the
dose-rate to water

Ḋ(r) = fmed A Γ T(r) / r²   (Equation 5.9)

T(r) = A + B·r + C·r² + D·r³   (Equation 5.10)

Application of this algorithm (Equation 5.9) has, in the past, been a
popular choice in commercial computerized treatment-planning systems.
Technically, fmed should now be a function of r to account for the lower
energy of the scattered photons with greater distance in water (9,10);
however, that detail is usually ignored. Comparing the in-air Equation
5.7 with the in-phantom Equation 5.9, the difference is simply the
inclusion of the parameterized factor T(r). T(r), the attenuation and
build up factor, is a polynomial in r (Equation 5.10) which Meisberger et
al. (8) used to represent the ratio of the exposure in water to the
exposure in air. The free parameters A,B,C, and D are determined by
least squares fit to the experimental data for each isotope. One notes
(1,3,5,8) that A is close to 1.0 and that B,C,D are on the order of 10–3 or
less. Because of this, the value of T(r) is very close to 1.0 up to a certain
distance. Attenuation and in-scatter are balanced at a distance rA where
T(rA) = 1.0. Hale (11) pointed out that in-scatter cancels out the
attenuation loss, which was not obvious. For instance, if we consider
137Cs (E = 662 keV), the distance is around 3 cm. If we estimate the
reduction in dose from attenuation of 3 cm of water (using μ = 0.086
cm–1) we would expect a 23% drop off. Clinically, the fact that a simple
dose calculation such as in Equation 5.9 can be used, instead of
something as in Equation 5.8, makes calculations easy both by hand and
by early computers and has been extremely useful. The mathematical
form, Equation 5.10, which was chosen by Meisberger et al. (8) for T(r)
is not a unique parameterization of the attenuation and scatter effects.
One could use the form proposed by Evans with equal ease (12)

Ḋ(r) = (fmed A Γ / r²) e^(−μr) [1 + ka(μr)^kb]   (Equation 5.11)

Kornelson and Young (13) fit the coefficients ka and kb to Monte Carlo
results (14). Venselaar et al. (15) extended the range of the fitted data to
60 cm. Other mathematical expressions (16–18) have been utilized;
there is little difference of clinical significance between them or
Equations 5.9 and 5.11. The reader should note that Equations 5.9 or
5.11 can be used to perform quick hand check verifications of clinical
implant plans. If one looks at a dose-rate at a point 10 cm from the
implant center, all the implanted sources can be considered
approximately as one point source located at the center of gravity of the
implant. Add all the activities together and calculate the cGy/h value
expected and compare it to your treatment planning system isodose line.
One cannot use this method to determine a small error in the computer
plan result but one can use it to uncover the presence of a major error.
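The hand-check described above is easy to script. The sketch below (illustrative only; the values of fmed, Γ, and the Meisberger coefficients are placeholders, not vetted clinical data) lumps all implanted sources into a single point source at the implant's center of gravity and evaluates Equation 5.9 at 10 cm.

```python
def meisberger_T(r_cm, A, B, C, D):
    """Attenuation/build-up factor of Equation 5.10: T(r) = A + B*r + C*r^2 + D*r^3."""
    return A + B * r_cm + C * r_cm**2 + D * r_cm**3

def point_source_dose_rate(activity_mCi, gamma, f_med, r_cm, coeffs):
    """Equation 5.9: dose-rate to water (cGy/h) at distance r from a point source."""
    return f_med * activity_mCi * gamma * meisberger_T(r_cm, *coeffs) / r_cm**2

# Placeholder physical constants (check against your own references before use).
gamma = 4.69          # exposure rate constant, R cm^2/(mCi h) -- illustrative
f_med = 0.96          # exposure-to-dose conversion, cGy/R -- illustrative
coeffs = (1.01, -1.0e-3, -1.0e-4, 1.0e-6)   # Meisberger A, B, C, D -- placeholders

# Sum the activities of all implanted sources and treat them as one point source.
source_activities = [10.0, 10.0, 15.0, 15.0]          # mCi, illustrative implant
total_activity = sum(source_activities)

print(point_source_dose_rate(total_activity, gamma, f_med, 10.0, coeffs))
```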
Comparing Equations 5.8 and 5.11, the first term is identical and is
the attenuated primary in-air dose-rate. Comparing the second terms,
one can see that Equation 5.11 assumes that Ḋscat is proportional to the
attenuated primary dose-rate. This is physically reasonable since scatter
comes from the attenuation that occurs in the out-of-path directions and
this should be similar to the in-path attenuation. To the extent that this
is not true, we make up for that by letting ka and kb be completely free
parameters for each different isotope. Fitting the parameters can be done
in two ways: fitting the free parameters to match measured data, or
fitting (13–15) to match a better calculation such as Monte Carlo. All the
parameters (A,B,C,D, and ki) have no direct physical meaning; they are
chosen to allow us to describe the dose-rate as accurately as possible.
Because of that, it is required to keep track over what range of data the
parameters were determined. For example, the best-fit value of D just
happens to be negative (1,3,5,8) for 192Ir, 198Au, and 137Cs. Hence, at
large distances (r > 25 cm), where the r3 term in Equation 5.10
dominates T(r), T(25 cm) is negative, and therefore Equation 5.9
predicts negative dose-rate for those isotopes at large distances. This
negative dose result arises because we have applied Equation 5.9 outside
the range of the fitted parameters and have obtained a nonphysical
(wrong) result.

CALCULATION OF DOSE-RATES AROUND A LINE SOURCE
In Equation 5.9, we now have an expression for the dose-rate in water
due to a simple point source. We can now apply this result (we could
have just as easily used Equation 5.11 instead) to help us calculate the
dose-rate from different source geometries. The next simplest case is a
line (length L) of radioactive material (Fig. 5.4). In the point source case,
we had spherical symmetry, which meant that the direction from the
source did not change the dose, only the distance (r) did. With a line
source we need to consider direction and distance relative to the center
of the line. There is still symmetry remaining, specifically rotational and
reflection symmetry, so if we can calculate the dose-rate to every point
in the shaded region of Figure 5.4, the dose-rate anywhere else in the
patient volume is determined without calculating. For example, the
dose-rate at P in Figure 5.4 is the same as at point B (reflection of P with
respect to Y–Z plane), C (reflection of P with respect to X–Y plane), or D
(reflection of B with respect to X–Y plane). Moreover, any point off the
plane (y ≠ 0) of Figure 5.4 that can be mapped to a point in the plane of
Figure 5.4 by a rotation about the z-axis will have the same dose-rate as
that point in the plane.

FIGURE 5.4 Line source geometry. Dose-rate calculation to point P depends on distance and
direction (r, θ) of P from the source center. The active length (L) defines angles (θ1 and θ2) from
the endpoints of the line source to point P. β = θ2 − θ1 is the angle from P to the endpoints of the
active source. Results need only be calculated for the shaded quadrant; dose to points B, C, and
D will be identical to point P by symmetry, likewise for all points in 3D space obtained by rotation
of the source about the z-axis.

The solution can be found in an analytic form by defining an activity
per unit length (A/L) of source and integrating the point source
expression of Equation 5.9 along the line of the source (dl) to obtain the
dose-rate at any point P(r, θ) in the plane (19). The final result is

Ḋ(r, θ) = fmed Γ (A/L) T(r) β / (r sin θ)   (Equation 5.12)

where L is the active length of source and β = θ2 − θ1 is the angle (in
radians) subtended by the line source when viewed from point P.
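Evaluating Equation 5.12 requires the subtended angle β for a point P given in polar coordinates (r, θ) about the source center. A minimal sketch (my own helper, not from the text) computes β from the two endpoint vectors and then the unfiltered line-source dose-rate; Γ, fmed, and T(r) are illustrative stand-ins.

```python
import math

def line_source_dose_rate(activity, length, gamma, f_med, T_of_r, r, theta):
    """Equation 5.12: dose-rate at P(r, theta) from an unfiltered line source.

    r and length in cm; theta in radians, measured from the source axis (z-axis).
    """
    # Point P in the plane containing the source (source along z, centered at origin).
    z, x = r * math.cos(theta), r * math.sin(theta)
    # Vectors from P to the two ends of the active length.
    v1 = (0.0 - x, -length / 2.0 - z)
    v2 = (0.0 - x, +length / 2.0 - z)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    beta = math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))   # angle subtended by the source
    h = r * math.sin(theta)                                        # perpendicular distance to the source axis
    return f_med * gamma * (activity / length) * T_of_r(r) * beta / h

# Illustrative use: 1.5-cm active length, point 2 cm away at 90 degrees.
print(line_source_dose_rate(activity=10.0, length=1.5, gamma=4.69, f_med=0.96,
                            T_of_r=lambda r: 1.0, r=2.0, theta=math.pi / 2))
```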

CALCULATION OF DOSE-RATES AROUND AN ENCAPSULATED CYLINDRICAL SOURCE
Most applications in radiation oncology entail the use of encapsulated
(e.g., stainless steel) cylindrical sources. Consider a cylindrical source S
(radioactive source radius rS, active length L) and enclose it (Fig. 5.5)
inside a cylinder of encapsulation material (radial wall thickness t and
end cap thickness te). Again, we will consider the active source region to
be divided up into many small point sources and determine the
contribution from each point source separately using Equation 5.9 and
add the results. Clearly the first-order effect of the encapsulation will be
to reduce the dose-rate by an amount that depends on the path lengths
through the encapsulation. As the path length through the encapsulation
from every small point source to a given dose point is different, the
reduction will be different in different directions. The solution to an
encapsulated line source was given by Sievert (20). He presented the
equivalent (T(r) was ignored in those days) of the following expression
for the dose-rate at a point P located at planar coordinates r and θ
(measured from the radioactive source center, Fig. 5.5)

Ḋ(r, θ) = [fmed Γ (A/L) T(r) / (r sin θ)] ∫θ1→θ2 e^(−μe t / cos θ′) dθ′   (Equation 5.13)

FIGURE 5.5 Cylindrical encapsulated source geometry, thickness of encapsulation is t, radius of
source is rS, and the end cap thickness is te. Radioactive material in the source region is
subdivided into N tiny regions. Photons from each of the regions are attenuated by their path
length through source material and encapsulation. Dose-rate at P (located at r, θ from center of
source) involves repeated application of point source calculation for all N cubes. For the ith cube,
its distance to P is ri, the path through the source material is dSi, and through the encapsulation is
dei.

where μe is the attenuation coefficient for the radioisotope’s photons
through the encapsulation material, t is the perpendicular wall thickness
of the encapsulation, and θ1 and θ2 are the angles from the point P to
the ends of the active length of the source. The integral expression in
Equation 5.13 is the Sievert integral, which can be numerically evaluated
with a computer. Young and Batho (21) later provided expressions for an
effective wall thickness accounting for source radius. As an aside, Γ
would be the unfiltered exposure rate constant in Equation 5.13 because
the attenuation of the source’s photons by encapsulation is explicitly
calculated. The differences between filtered and unfiltered exposure rate
constants are discussed in detail elsewhere (1,3,5,6,22,23).
Equation 5.13 is valid for points (such as P in Fig. 5.5) in the patient
where path lengths do not go through the ends of the source.
Expressions (19,20) that give the dose-rate in the other geometrically
distinct regions (through ends of the source capsule) will not be
presented here, as calculations involving numerical integration (21) can
be obtained with computers. We can present a single equation for
calculation anywhere in the region by numerical integration. We
subdivide the source volume into N equal parts. Each little source
volume element i contributes independently to the dose-rate at P. The
dose-rate at a point P located at r, θ from the center of the source is
given by adding all N exponentially attenuated point source
contributions

Ḋ(r, θ) = Σi=1..N [fmed Γ (A/N) T(ri) / ri²] e^(−μs dSi) e^(−μe dei)   (Equation 5.14)

where dsi (dei) (Fig. 5.5) are the individual path length distances through
the source (encapsulation) material from tiny source region i to point P.
For a clinical 137Cs source N = 100 is a fine enough subdivision. The
effects of the attenuation coefficient of the source (μs) and its path length
(dSi) are included. The numerical integration method in one form or
another has been used often in the past (21,24–28). In Equation 5.14,
the coefficients μ are either chosen to give the best fit of Equation 5.14
to experimental measurements or directly measured. If the coefficients
are directly measured, one sees that measurements (28) of attenuation
produced by materials relative to attenuation by water work better than
linear attenuation coefficients in Equation 5.14 because the material is a
perturbation of the water medium and not a perturbation of air medium.
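Equation 5.14 is straightforward to script once the source is subdivided. The sketch below is a simplified illustration, not clinical code: the active length is split into N point sources along the axis, and the path lengths through the active material and capsule wall are approximated with a Sievert-style oblique thickness; all coefficients and dimensions are placeholders.

```python
import math

def cylindrical_source_dose_rate(r, theta, *, activity, gamma, f_med, T_of_r,
                                 L, r_source, t_wall, mu_s, mu_e, N=100):
    """Approximate Equation 5.14 for a point P(r, theta) to the side of the source
    (rays not passing through the end caps).

    The active length L is subdivided into N point sources on the axis; the path
    through source material (d_si) and capsule wall (d_ei) for each ray is
    approximated by the oblique thickness t/sin(psi). Illustration only.
    """
    x, z = r * math.sin(theta), r * math.cos(theta)     # P in the source plane
    total = 0.0
    for i in range(N):
        zi = -L / 2.0 + (i + 0.5) * L / N               # position of sub-source i on the axis
        ri = math.hypot(x, z - zi)                      # distance from sub-source i to P
        psi = math.atan2(x, z - zi)                     # angle of the ray to the source axis
        d_ei = t_wall / abs(math.sin(psi))              # oblique path through the capsule wall
        d_si = r_source / abs(math.sin(psi))            # crude oblique path through active material
        atten = math.exp(-mu_s * d_si) * math.exp(-mu_e * d_ei)
        total += f_med * gamma * (activity / N) * T_of_r(ri) * atten / ri**2
    return total

# Illustrative tube: 1.4-cm active length, 0.5-mm wall; all constants are placeholders.
print(cylindrical_source_dose_rate(2.0, math.pi / 2, activity=20.0, gamma=3.3,
                                   f_med=0.96, T_of_r=lambda r: 1.0, L=1.4,
                                   r_source=0.05, t_wall=0.05, mu_s=0.3, mu_e=0.5))
```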

ENCAPSULATION AND THE LOW ENERGY PROBLEM


In our discussion up to now, we have avoided details such as where we
get the activity (A) of a given source, what the correct way of
calculating the exposure rate constant is, how one calibrates a source,
and how these procedures relate to the final step, the calculation of
dose-rate. To get an idea of the ambiguities, consider a
hypothetical encapsulated source where a manufacturer makes the
encapsulation container from lead and the radioactive material emits a
photon of energy 30 keV in half of the nuclear decays and a photon of
energy 1 MeV in the other half. If we were to measure the output of this
encapsulated source we would never detect the decays wherein photons
of 30 keV are produced because the lead would absorb them. The
activity we would infer from measurement would then be less than what
it really is because we would only be aware of half the decays. We
would be in a dilemma to call that measured activity the activity,
because the activity is essentially a measure of the actual number of
radioactive atoms (Equation 5.2). This example illustrates why there
arose a need for “apparent activity.” In short, the source manufacturer
would tell you the “apparent activity” under the assumption that you
would use the same value of Γ that he used to derive his apparent
activity. Then when you multiply A and Γ together, you will get the
correct result. Look at Equations 5.9 to 5.14; notice that what you need
to calculate the dose correctly is the value of AΓ. You do not need the
true activity A and the theoretical Γ. We emphasize that you have to
check that the value of Γ the source manufacturer used matches the
value in your treatment planning system in order to use the source
manufacturer’s reported activity. As an aside, it is no wonder that
the use of mg Ra eq (1,2,3,5) for quantifying activity lasted so long after
the abandonment of radium implants because everyone agreed that the
value of Γ = 8.25 (R cm2/mg Ra eq hour) for radium filtered by 0.5 mm
of Pt would be the value you would use for all isotopes.
The second problem (opposite in nature to the first) comes from the
details of calibration of the sources. It is easier to measure the emission
output of a radioisotope in air than in water. Suppose a low energy
photon is given off by the source and it gets out of the capsule or is
scattered from the capsule, giving rise to a photon even lower in energy,
and such a photon travels through air and its ionization is measured by
the detector. By everything said above, for determining the activity, that
seems like a good thing (a nucleus decayed and its effect was registered).
However, suppose that photon is so low in energy that it would be
absorbed very quickly by tissue (within a millimeter or so). Its effect is
included in the calibration of the source in air but clinically it is of no
significance in delivering a dose at a distance in a patient (29). Therefore
its effects should be excluded.

TWO-DIMENSIONAL DOSE CALCULATION FORMALISM


It should be noted that both these problems were present even from the
earliest days of brachytherapy using radium (1–3). The problems were
not a great enough danger to warrant rethinking the dose-rate
calculation formalism till low-energy sources such as 125I and 103Pd
came into clinical use. It has been recently decided to define a more
consistent method. Task Group 43 (TG43) of the American Association
of Physicists in Medicine (AAPM) recommended the adoption of a new
system (30–34) for calculating the dose-rate to water for low-energy
sources. This system is designed to be consistent from calibration of the
source by the accredited calibration laboratories to the final clinical
calculation for the patient. The 2-dimensional (2D) dose-rate equation
for cylindrically symmetric encapsulated sources in the TG43 formalism
(30) is given by

Ḋ(r, θ) = SK Λ [GX(r, θ) / GX(ro, θo)] gX(r) F(r, θ)   (Equation 5.15)

This dose-rate equation is a 2D calculation as points in the plane with
coordinates r and θ are calculated and all other points in 3D space are
then found by rotation about the z-axis. Equation 5.15 is really a choice
between two equations, where X = P signifies that you will use a point
source geometry factor and X = L indicates a line source is used. All
quantities in Equation 5.15 are referenced to a single reference position,
usually ro = 1 cm and θo = 90° (i.e., 1 cm from the source center in a
direction perpendicular to the symmetry axis of the source, e.g., 1 cm
along the x-axis in Figure 5.5). The product SKΛ is the dose-rate in water
at the reference position, ro, θo.
Two new quantities are defined in Equation 5.15. First, the air kerma
strength (1,30,33), SK, gives a measure of the absolute amount of
radionuclide available. Its source calibration unit U equals cGy cm2/h by
definition. The air kerma strength is the air kerma rate in vacuum times
d2 (due to photons of energy greater than a cutoff energy, >5 keV,
measured with a free air chamber centered at a distance d); d is usually
1 m. This energy cutoff (5 keV) is chosen so that the calibration effects
of low-energy photons (which ultimately would not contribute to tissue
dose at distances greater than 1 mm from the source) are subtracted
from the calibration result. Source manufacturers now provide both the
air kerma strength (referenced to the reference position) as well as
apparent activity (for historical comparison). Typical conversions from
air kerma strengths to “apparent activity” values (U/mCi) for isotopes
used clinically are 1.27, 1.29, 2.86, and 4.12 for 125I, 103Pd, 137Cs, and
192Ir, respectively.
The second new quantity in Equation 5.15 is the radionuclide’s dose-
rate constant, Λ (units = μGy/h/U at 1 m). The dose-rate constant is the
ratio of the reference dose-rate Ḋ(ro, θo) in water to SK. The dose-rate
constant is determined once and for all for each manufacturer’s source
via Monte Carlo modeling plus experimental measurements, usually with
thermoluminescent dosimeters (TLDs) (30,31,35–38). There are errors in
both methods, so results are averaged to produce a “consensus” value for
Λ (31,35).
G(r, θ) is a new symbol for the simple geometry dependence already
seen in Equations 5.9 and 5.12, namely, point (P) source and line (L)
source (30,31,39). G accounts for the main effects of distance and
direction of source from point of measurement. TG43 defines GP and GL,

GP(r, θ) = 1/r²   (Equation 5.16)

GL(r, θ) = β / (L r sin θ)   (Equation 5.17)

where β is the angle (in radians) subtended by the active length L as
viewed from the calculation point. The ratio of G(r, θ) to the reference
value G(ro, θo) is explicitly indicated in Equation 5.15.
In Equation 5.15, the radial dose function g(r) redefines the traditional
attenuation and scatter build-up factor. It accounts for photon
attenuation and scatter in water in the radial direction to the source
symmetry axis (qo = 90o). The radial dose function is essentially
fmed(r)T(r) renormalized so that g(ro) = 1.0. Since we know that the
parameters of Equation 5.10 can be fitted accurately, it is not surprising
that a polynomial expansion can be used to accurately represent g(r)

gX(r) = a0 + a1·r + a2·r² + a3·r³ + a4·r⁴ + a5·r⁵   (Equation 5.18)

where X = P or L means that one compares their dose-rate data (at
varying positions r with θo = 90°) to their calculations using Equation
5.15, and determines the six coefficients ai in Equation 5.18 using point
or line source formula Equations 5.16 or 5.17 for the geometry factor.
The dose-rate data required to determine the radial dose functions come
from Monte Carlo calculations and are verified by measurements. Each
radioisotope has its own set of ai determined (5,30,31,40,41).
The anisotropy function F(r, θ) is introduced to
account for differences in dose-rate as a function of angle from the
symmetry axis due to the specific geometry of the encapsulation of the
radionuclide source. In other words, it takes into account the different
path lengths through the source and encapsulation at various angles. If
we have information on the dose-rate in all directions (such as from
Monte Carlo modeling or numerical integration or the Sievert integral)
then the anisotropy function can be determined (5,30,42–44) from
Equation 5.15. The normalization for F is F(r, θo = 90°) = 1.0.
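Putting the pieces of Equation 5.15 together in code makes the bookkeeping explicit. The sketch below is a minimal illustration only: the dose-rate constant, g(r) polynomial coefficients, and anisotropy lookup are made-up placeholders, not consensus data for any real source, and the anisotropy table is replaced by a dummy function.

```python
import math

def G_line(r, theta, L):
    """Line-source geometry function G_L(r, theta) of Equation 5.17 (theta != 0)."""
    z, x = r * math.cos(theta), r * math.sin(theta)
    # beta is the angle subtended by the active length L as seen from the point.
    beta = math.atan2(z + L / 2.0, x) - math.atan2(z - L / 2.0, x)
    return beta / (L * r * math.sin(theta))

def g_radial(r, coeffs):
    """Radial dose function of Equation 5.18: fifth-order polynomial with g(r0) = 1."""
    return sum(a * r**k for k, a in enumerate(coeffs))

def tg43_2d_dose_rate(Sk, Lam, r, theta, L, g_coeffs, F_lookup,
                      r0=1.0, theta0=math.pi / 2):
    """Equation 5.15 evaluated with the line-source geometry factor (X = L)."""
    geo = G_line(r, theta, L) / G_line(r0, theta0, L)
    return Sk * Lam * geo * g_radial(r, g_coeffs) * F_lookup(r, theta)

# Placeholder data (NOT consensus values for any commercial source).
Sk = 5.0                                   # air kerma strength, U
Lam = 1.1                                  # dose-rate constant, cGy/(h U)
g_coeffs = (1.0, 5.0e-3, -6.0e-3, 5.0e-4, -2.0e-5, 3.0e-7)   # a0..a5 of Eq. 5.18

def F_lookup(r, theta):
    return 1.0                             # stand-in for the tabulated anisotropy function

print(tg43_2d_dose_rate(Sk, Lam, r=2.0, theta=math.radians(60), L=0.35,
                        g_coeffs=g_coeffs, F_lookup=F_lookup))
```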
In high dose-rate afterloader applications using 192Ir (Chapter 16),
there is a single high activity cylindrical source. Optimization techniques
(Chapter 16) are used to vary source dwell times at various positions in
fixed implanted catheters. Because you must localize the catheters to
plan the patient, you have the orientation of the source symmetry axis in
the patient at all possible dwell positions and therefore you can
determine the angle θ in Figure 5.4 for any position P. Therefore one
uses the 2D dose calculation (X = L) of Equation 5.15. Similarly 125I
seed sources loaded into a fixed geometry eye plaque for ocular
melanoma treatment can use the line source form.
There is a practical problem in using Equation 5.15 for cylindrical
sources that are not constrained to be in a definitive geometric
orientation by the applicator which holds them. Namely, it is not always
easy to determine orientation. Consider permanent prostate implants
which use 125I seed sources. Computed tomography (CT) scans and/or
radiographs cannot easily provide the necessary resolution to determine
the line direction in 3D space for all the sources, (though methods are
being developed to do that (45,46)). So we definitely have line sources
but we cannot determine each angle q (from each and every source to
each and every point P). Thus we are forced to use the point source
geometry factor. In this event, one can use a better approximation (31).
Although the anisotropy function is a function of r and q, one can
average F over the 4p geometry and F can be approximated by a simple
radial function, fan(r), called the 1-dimensional (1–D) anisotropy
function, e.g. this results in a roughly a 5% reduction for 192Ir (5,31).
Where a 2D calculation cannot be used for cylindrical sources, the
revised TG43 protocol (31) recommends the use of

Ḋ(r, θ) = SK Λ [GL(r, θo) / GL(ro, θo)] gL(r) fan(r)   (Equation 5.19)

Compare Equation 5.19 to Equation 5.15 (with X = L) with respect to
the geometry function. In Equation 5.19, the line source geometry
formula is used, but regardless of what the angle θ is on the left side of
the equation, we evaluate the right-hand side using θ = θo = 90°. The
advantage of Equation 5.19 is that it is more accurate for cylindrical
sources at distances less than 1 cm than if we use Equation 5.15 with the
point source approximation (X = P).
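The reason Equation 5.19 is preferred over the point-source form at short distances can be seen numerically: below about 1 cm, the line-source geometry factor departs noticeably from the 1/r² behavior. The short comparison below is only an illustration; the 0.35-cm active length is an example value, not data for any particular source.

```python
import math

def G_line(r, theta, L):
    """Line-source geometry function (Equation 5.17), theta in radians."""
    z, x = r * math.cos(theta), r * math.sin(theta)
    beta = math.atan2(z + L / 2.0, x) - math.atan2(z - L / 2.0, x)
    return beta / (L * r * math.sin(theta))

L = 0.35                                   # example active length (cm)
r0, theta0 = 1.0, math.pi / 2
for r in (0.25, 0.5, 1.0, 2.0, 5.0):
    line_ratio = G_line(r, theta0, L) / G_line(r0, theta0, L)   # G_L(r)/G_L(r0)
    point_ratio = (r0 / r) ** 2                                  # G_P(r)/G_P(r0), inverse square
    print(f"r = {r:4.2f} cm: line {line_ratio:8.3f}  point {point_ratio:8.3f}")
```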
It is important to understand the methodology behind the 2D dose
calculation algorithm provided by Equation 5.15. A point or line source
approximation for the geometry function is chosen (this choice is
determined by practical realities in most cases as almost all clinical
sources are cylinders). Experimental measurements and/or Monte Carlo
calculations provide the desired 3D dose distribution answer. The radial
dose-rate function in point source geometry is then determined by
choosing the parameters of Equation 5.18 so that multiplying all the
factors together at points where q = qo = 90o yields the correct dose-
rate (which you already know from the full 3D Monte Carlo result) at
those points. Once you have that, you can determine the anisotropy
factor, in the same way using the already known correct dose
distribution. One can either store a table of results (5,30,31) or create a
fitting function to reproduce the results. Furhang and Anderson (47)
proposed the functional form:

where a, b, c, and d are polynomials in r with a total of 12 free
parameters. Sloboda and Menon (48) fit those parameters to the results
of Monte Carlo calculations. Ling et al. (49) had also used a similar
parameterization to fit the dose-rate results for an 125I source.
Equations 5.9, 5.11, 5.12, and 5.14 can be converted to the more
modern formalism by substituting SKg(r) for fmed AΓT(r). For example,
Equation 5.14 for the dose-rate about a cylindrical encapsulated source
would be

Ḋ(r, θ) = Σi=1..N [(SK/N) g(ri) / ri²] e^(−μs(dSi − rS)) e^(−μe(dei − t))   (Equation 5.21)

where SK would have been determined for the encapsulated source;
since SK thus already accounts for source size and encapsulation, we subtract
the source radius, rS, and the encapsulation wall thickness, t, from the
path differences.
Finally, let us review the rationale of the change in Equation 5.15. In
the old system, calibration in air, calculation in air, and then conversion
to dose in medium was the calculation process. The new TG43 system is
more akin to external beam calculations. In Equation 5.15, SKΛ is the
cGy/h calibration in water at a reference position (the analog of linear
accelerator calibration at dmax). The product of G(r, θo) and g(r) is like a
depth dose correction along the x-axis in Figure 5.4 normalized at ro.
Finally, F(r, θ) is like external beam off-axis ratios. Looked at in this way,
the new formalism does not look so foreign.

THREE-DIMENSIONAL DOSE CALCULATIONS—ASYMMETRIC DOSE DISTRIBUTIONS REQUIRED BY APPLICATORS AND SHIELDS
Some treatments involve the use of asymmetric metal applicators to
introduce the radioactive sources into the body. In treatment of
carcinoma of the uterine cervix (1,2) some of the applicators have
tungsten shields attached, which provide a severe asymmetry in the
measured dose-rate distribution. The measurement (27,28,50,51) of
dose-rate in the presence of these asymmetries shows that the effects are
to reduce dose-rate in particular directions (geometric shadow of the
shields) by up to 40%. These applicators clearly have an effect on the
dose-rate distribution that is different in different directions; hence the
dose-rate algorithm must calculate dose-rate over the entire 3D grid
centered on the source (25,27,28,52–54). Now because clinical use
involves a source loaded into the same applicator with shields every
single time, the calculation can be done once for the
source/applicator/shield and the result for that dose distribution stored
on the computer. Figure 5.6 shows that the dose point P has paths from
the various source volume elements that miss or go through the shield.
Numerical integration can be extended to this case quite easily, thereby
accounting for primary attenuation for path length through source (S),
encapsulation (e), shield (Sh), and applicator housing (A). The
expression is the following:

Ḋ(r, θ, φ) = Σi=1..N [(SK/N) g(ri) / ri²] e^(−μs(dSi − rS)) e^(−μe(dei − t)) e^(−μSh dShi) e^(−μA dAi)   (Equation 5.22)

FIGURE 5.6 Asymmetrical source/applicator/shield geometry. Radioactive material in source
region is subdivided into N tiny regions, photons from each of the regions are attenuated by their
path length through source material, encapsulation, shield material, and applicator material. Dose-
rate at P (located at r, θ from center of source) involves repeated application of point source
calculation for all N regions. For the ith cube, its distance to P is ri, the path through the source
material is dSi, through the encapsulation is dei (the latter two are not shown here, see Fig. 5.5),
the path through the shield material is dShi, and the path through the applicator material is dA i.
For the jth cube, its distance to P is rj, its path misses the shield and the path through the
applicator material is dA j.

The path length intersection differences (d) have to be evaluated for
every direction and the μ are free parameters. The strength of Equation
5.22 is that it is simple and the 3D results can be obtained clinically fast.
The 3D calculation for a point P in Equation 5.22 involves numerous ray
line (the radial direction from each source subvolume piece to the point
P) calculations. The attenuation produced by the metal structure along
each path is accounted for, which is the greatest effect. However, the
metal structure off the ray line path to point P (secondary effect) is
ignored in Equation 5.22. For example, in Figure 5.6 the evaluation of
the contribution to dose at P from source subvolume j is the same
whether there is a tungsten shield present in the applicator or not. In
Equation 5.22, it is g(r) which represents scatter, determined from the
case of a homogeneous water medium. We are assuming in Equation
5.22 that g(r) is the same when a nonwater material is lateral to the ray
line. Equation 5.22 is a very fast calculation method and comparison to
Monte Carlo calculations (53) show it works very well for 137Cs. Lower
photon energies require a method to deal with scatter.
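The primary-attenuation ray trace at the heart of Equation 5.22 amounts to finding the length of each ray's intersection with the shield and applicator volumes. The fragment below is a geometric sketch only: it models the shield as an axis-aligned box, which is purely an assumption for illustration and not the shape of any particular applicator, and shows the standard slab-clipping method for the path length dSh along one ray.

```python
import math

def path_length_through_box(p0, p1, box_min, box_max):
    """Length of the segment p0->p1 that lies inside an axis-aligned box (cm).

    Standard slab clipping: intersect the parametric segment with each pair of
    planes and keep the overlapping parameter interval [t_near, t_far].
    """
    t_near, t_far = 0.0, 1.0
    length = sum((b - a) ** 2 for a, b in zip(p0, p1)) ** 0.5
    for a, b, lo, hi in zip(p0, p1, box_min, box_max):
        d = b - a
        if abs(d) < 1e-12:                       # ray parallel to this pair of planes
            if a < lo or a > hi:
                return 0.0                       # entirely outside this slab: no intersection
            continue
        t0, t1 = (lo - a) / d, (hi - a) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return 0.0                           # intervals do not overlap: ray misses the box
    return (t_far - t_near) * length

# Illustrative use in the Equation 5.22 sum: attenuate one sub-source's contribution.
mu_shield = 3.0                                  # placeholder attenuation coefficient (1/cm)
d_shield = path_length_through_box((0.0, 0.0, 0.0), (0.0, 3.0, 0.0),
                                   (-0.5, 1.0, -0.5), (0.5, 1.3, 0.5))
print(d_shield, math.exp(-mu_shield * d_shield))
```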
Reasonably fast 3D methods having a scattering component have
been developed by Williamson (52) and Russell and Ahnesjo (55). Monte
Carlo calculations of dose results for source/applicator and just source
are each separated into primary and scattered components. Monte Carlo
calculations are used to create precalculated distance-dependent scatter
ratios that are a function of distance and mean free path in the patient.
Surprisingly, scatter is shown to be fairly isotropic about each applicator,
so scatter dose can be treated as a distance-dependent but angle-
independent term. This approximation speeds up the calculations. As
Williamson (52) points out, this demonstration of the simple angle-
independent nature of the scatter also explains why formalisms such as
Equation 5.22 work very well for the high-energy photons of 137Cs. At
larger distances from the implant or for lower-energy isotopes such as
125I, the scatter separation method remains accurate.
Generalization of the TG43 2D dose calculation formalism Equation
5.15 to a form suitable for 3D dose calculation would look something
like this:

Ḋ(r, θ, φ) = SK Λ [GX(r, θ) / GX(ro, θo)] gX(r) F(r, θ, φ)   (Equation 5.23)

F(r, θ, φ) is extended to explicitly include the angle φ dependence, 0 < φ
< 2π, and includes all angular variations. F could be described by a
function such as Equation 5.20 with the 12 free parameters themselves a
function of φ. This would then involve hundreds of parameters.
At this point, historical review helps us understand that simple
equations such as Equations 5.9 or 5.11, among others, arose because
the developers needed some simple equation to calculate or check
treatments by hand. Today, many parameter fits (such as Equation 5.23
would require) can easily be done on a computer. However, it is also
true that characterizing the dose-rate calculation directly using Monte
Carlo over a full 3D volume 20 × 20 × 20 cm3 and storing such a 3D
matrix would not be a problem either. So one could follow the historical
development and precalculate the Monte Carlo result, then fit the
parameters of Equation 5.23 to the Monte Carlo, throw away the Monte
Carlo result and use Equation 5.23 with the large number of parameters
to calculate the dose distribution. Alternatively, one could just store and
use the Monte Carlo results directly (after smoothing to minimize
statistical errors). Therefore it is unlikely that the description of the
results of 3D dose calculations will follow TG43 in describing the dose-
rate as a product of several fitted functions whose parameters were
found by comparing to Monte Carlo results. Instead “consensus” 3D
Monte Carlo generated dose-rate matrices normalized to the dose at a
reference position could be provided.

THREE-DIMENSIONAL TREATMENT PLANNING WITH 3D DOSE DISTRIBUTIONS
If we have a precalculated 3D dose-rate distribution, how do we use it
clinically? It is necessary to determine the position and orientation of
each source’s 3D dose distribution in the patient. Figure 5.7 shows a
single source, which is arbitrarily angled in the patient. The patient
coordinate system (x, y, z) is defined by a 3D imaging device such as a
CT scanner. The 3D dose-rate distribution of the source/applicator
would have been (pre-)calculated in a coordinate system (x′, y′, z′)
centered on the source. This is the intrinsic coordinate system of the
source/applicator system. If there was cylindrical symmetry about the
intrinsic z′-axis, there would be no need to determine in what direction
the x′ and y′ axes are oriented in the CT scan coordinate system, but here
we are assuming there is no symmetry whatsoever. We define Ḋ′(r′) as the
calculated dose-rate distribution in its own intrinsic coordinate system
for a unit air kerma strength source. So in Figure 5.7, obtaining the dose-rate at
point P (located at rP in the CT scan) produced by the source (located at rS)
implies looking up the value of Ḋ′(r′P) from the precalculated 3D matrix.
We therefore need to determine the vector (distance and direction) r′P. In
practice, this requires that we determine which way the x′, y′, z′ axes
point in the CT scanner. Now rPS, the vector from S to P, is the same
vector as r′P, only expressed in the coordinates of the CT scanner. These are
shown in Figure 5.7 and related via

\vec{r}_{PS} = \mathbf{E}(\alpha,\beta,\gamma)\,\vec{r}\,'_{P} \qquad (5.24)

FIGURE 5.7 Relationship between patient coordinate system (as defined by a computed
tomography [CT] scan) and the internal dose-rate calculation coordinate system of a
precalculated 3D source, which is rotated relative to the CT system. The center of the source is at
rS (relative to the CT scan). Point P in the patient is located at rP in the CT scan but at r′p relative
to the internal source coordinate system. rPS and r′p are physically the same vector, expressed in
CT and intrinsic coordinates, respectively.

where E(α,β,γ) is the rotation matrix (56) for a solid body and α, β, γ are
the Euler rotation angles, which rotate the intrinsic coordinate system of
the source into correspondence with the CT coordinate system. Our
problem is to find the three degrees of rotational freedom, the Euler
angles. Finding both ends of the source defines a line in space and
decides the z′-axis orientation (equivalent to two degrees of freedom).
The last degree of freedom (rotation about that line) is found by
identifying a landmark in the CT scan not on the z′-axis. Methods and
equations for calculating the Euler angles based on this information have
been given for particular 3D source/shield/applicators (57). One last
problem is that CT scans do not determine absolute position with a
precision better than one-half the scan spacing. Three-dimensional
graphic positioning of the entire applicator can make the determination
more precise (57).
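A minimal sketch of this orientation step is given below, assuming (hypothetically) that the CT coordinates of the two source ends and of one off-axis landmark are already known; the rotation matrix built from them is then used, as in Equation 5.24 above, to express a CT-frame vector in the source's intrinsic frame.

```python
import numpy as np

def intrinsic_rotation(end1, end2, landmark):
    # Columns of E are the intrinsic x', y', z' axes expressed in CT coordinates.
    z_axis = end2 - end1
    z_axis = z_axis / np.linalg.norm(z_axis)           # source long axis -> z'
    v = landmark - end1
    x_axis = v - np.dot(v, z_axis) * z_axis            # landmark component
    x_axis = x_axis / np.linalg.norm(x_axis)           # perpendicular to z' -> x'
    y_axis = np.cross(z_axis, x_axis)                  # right-handed y'
    return np.column_stack((x_axis, y_axis, z_axis))

# Hypothetical CT coordinates (cm) of the source ends and a landmark.
end1, end2 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.3, 0.4])
landmark = np.array([1.0, 0.0, 0.0])
E = intrinsic_rotation(end1, end2, landmark)

r_S = 0.5 * (end1 + end2)              # source center in CT coordinates
r_P = np.array([2.0, 1.0, 0.5])        # point of interest in CT coordinates
r_PS = r_P - r_S                       # vector from S to P, CT frame
r_prime_P = E.T @ r_PS                 # same vector in the intrinsic frame
print(r_prime_P)                       # (E is orthogonal, so E^-1 = E^T)
```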
Continuing on to the case of multiple sources, in Figure 5.8, the dose
at point P from two sources requires that the orientations of both sources
be determined. In order to look up the values of Ḋ′1(r′P1) and Ḋ′2(r′P2), we have
to determine r′P1 and r′P2. So the Euler angles for two coordinate systems
must be found. For N sources we use Equation 5.24, and the total dose-rate
in the patient's 3D coordinate system for N sources is given by

\dot{D}(\vec{r}_{P}) = \sum_{i=1}^{N} S_{Ki}\,\dot{D}'_{i}\!\left[\mathbf{E}^{-1}(\alpha_i,\beta_i,\gamma_i)\,(\vec{r}_{P} - \vec{r}_{Si})\right] \qquad (5.25)

where SKi is the source strength and E⁻¹(αi,βi,γi) is the inverse of the Euler
rotation matrix for the ith source. The Euler angles (αi,βi,γi) for each
source must be determined. It is this latter task that constitutes the additional
work needed to implement 3D dose distributions in a clinical real-time
setting (57).
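The superposition sum of Equation 5.25 can be sketched as follows; the intrinsic dose-rate lookup is replaced here by a hypothetical point-source placeholder so the fragment is self-contained, and the source strengths, positions, and orientations are arbitrary example values.

```python
import numpy as np

def dose_rate_intrinsic(r_prime):
    # Placeholder for a lookup into a precalculated 3D matrix: dose-rate per
    # unit air kerma strength in the source's intrinsic frame.
    r2 = max(float(np.dot(r_prime, r_prime)), 1e-4)
    return 1.0 / r2

def total_dose_rate(r_P, sources):
    # Equation 5.25: add each source's contribution, first expressing the
    # vector from that source to the point in the source's intrinsic frame.
    total = 0.0
    for S_K, r_S, E in sources:          # E: intrinsic -> CT rotation matrix
        r_prime = E.T @ (r_P - r_S)      # E is orthogonal, so E^-1 = E^T
        total += S_K * dose_rate_intrinsic(r_prime)
    return total

sources = [(1.5, np.array([0.0, 0.0, 0.0]), np.eye(3)),   # hypothetical sources
           (1.5, np.array([2.0, 0.0, 0.0]), np.eye(3))]
print(total_dose_rate(np.array([1.0, 0.0, 0.0]), sources))
```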
Once a dose-rate calculation algorithm has been implemented, there
are two choices in calculating the total 3D dose-rate distribution in the
patient from a multitude of sources (enclosed in applicators with or
without metal shields). They are (a) dose superposition, that is, addition
of individual source/applicator dose distributions independent of the
presence of the other sources (Equation 5.25 is dose superposition), or
(b) direct dose calculation, that is, using the calculation algorithm with
all the sources and applicators accounted for in the calculation. The first
is the least computationally taxing because the dose-rate matrices Ḋ′i
may be precalculated. The second method is what would be used in
real-time clinical Monte Carlo or GBBS applications using CT data.
FIGURE 5.8 Superposition approximation. The dose-rate at P is found by adding the contribution
from source 1 (assuming source 2 is not present), to the contribution from source 2 (assuming
source 1 is not present). The internal coordinate system for each source is shown. Only two out of
three axes are shown.

Commercial treatment-planning systems currently use dose
superposition. How important is it to correct for interapplicator
shielding effects and patient inhomogeneities? The answer depends on
the number of sources, shields, or applicators and their positions relative
to one another. At this time, it is not clear how important these effects
are to clinical applications. Let us try to estimate this for one particular
example where we have a significant amount of metal in the form of
tungsten shields in an applicator. Consider Figure 5.9, which shows just
two sources. Let the point P be at a distance of 2 cm from the center of
source 2 and 4 cm from source 1 on the left. It is reasonable to assume
that the perturbation of having source/applicator 1 to the left of
source/applicator 2 cannot affect the contribution that source 2 makes to
the dose at P very much. Therefore, that part of the total dose stays the
same for dose superposition or direct dose calculation. However, it is
obvious that having source/applicator 2 in between point P and source 1
changes the dose contribution of source 1 to the dose at P. Crudely, we
will estimate the error using inverse square considerations alone. In dose
superposition, the contribution of source/applicator 1 is roughly (2/4)²
= 1/4 of the contribution that source/applicator 2 makes to the dose-rate
at point P. So if source/applicator 1's contribution is reduced by
33% (which would be the case only for points fully in the shadow of the shield),
the error in the dose-rate (caused by ignoring interapplicator
effects, as dose superposition does) would be roughly 1/3 × 1/4 = 1/12, or about 8%,
of the total dose. In treatment of carcinoma of the uterine cervix (2)
(Figure 5.9 is an example in a plane through the ovoids to a point lateral
to the ovoids) this 8% difference would be reduced even more by the
dose-rate contributions from the tandem sources, whose contribution to
P would be about the same in superposition or direct calculation.
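Written out, the rough estimate above is simply:

```latex
\frac{\dot{D}_1}{\dot{D}_2} \approx \left(\frac{2}{4}\right)^2 = \frac{1}{4},
\qquad
\Delta\dot{D} \approx \frac{1}{3}\,\dot{D}_1 \approx \frac{1}{3}\cdot\frac{1}{4}\,\dot{D}_2
= \frac{1}{12} \approx 8\%\ \text{of the total dose.}
```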

FIGURE 5.9 Direct calculation of dose from multiple sources. Two radioactive source calculation
including attenuation effect of all regions. Sources 1 and 2 are subdivided into N regions each.
Dose-rate at P involves repeated application of point source calculation for all 2N regions and
calculation of path lengths through each region from both sources. For the ith cube in source 1, its
distance to P is rS1i, the path goes through source 1’s source material, encapsulation and
applicator material, it then enters and leaves source 2 on the way to point P. It passes through
source 2, shield 2 and through the second applicator (entering and leaving). For the jth cube in
source 2, its distance to P is rS2j and the path does not pass through source 1 and only intersects
its own materials.

In low dose-rate permanent implants of the prostate, one may have up
to 100 125I stainless steel clad sources in a 40-cc volume. Potentially, the
overall inhomogeneity effect caused by seed-to-seed attenuation might
be severe, especially since the photon energy is low. Burns and Raeside
(58) in a Monte Carlo study of a model two plane implant showed that
the shadowing effect reduces the dose in the interior of the implant, but
this reduction was not enough to drop the dose below the reference
prescription dose. So, the significance is that very high doses here and
there inside the implant region are not really as high as one thought.
Meigooni et al. (59) also studied interseed effects for 125I and found
through measurements that variations were not only dependent on
direction but that overall, the dose at the periphery of the implant is
reduced by 6%. Chibani et al. (60) used Monte Carlo simulation and
found similar results for 125I and 103Pd. Based on these studies it is not
easy to see the need for any correction other than possibly a 5%
reduction of predicted overall dose to the periphery. We emphasize that
these studies justify a dose documentation correction, not a 5% boost of
the seed activity/strength. The latter is a completely different question
which is beyond the scope of this chapter. At the present time, it is
unclear (61) whether these corrections are needed in permanent seed
implants.
Finally, as regards patient heterogeneities, this problem is harder still
to evaluate since the variations can be endless. Das et al. (62) found that
the dose beyond bone is most affected and that the perturbations of the
dose distribution are too complex to be modeled by simple dose
calculation algorithms, so Monte Carlo calculations must be used.
Meigooni and Nath (63) used measurements plus Monte Carlo simulation
and found in their model that lower-energy sources such as 125I and
103Pd have significant changes in dose due to patient heterogeneity
whereas 192Ir does not. Calculation methods to produce simple
corrections have been proposed by Williamson et al. (64). They used
Monte Carlo generated primary and scatter components and
incorporated empiric parameters into a scatter subtraction method to
gain the advantages of Monte Carlo computation but with a large saving
in computation time. Agreement with full Monte Carlo simulations was
within a few percent in most examples considered. Furstoss et al. (65)
studied both interseed and breast tissue heterogeneity effects for 125I and
103Pd seed permanent breast implants using Monte Carlo simulation. At
these low energies, they found that breast tissue can change D90 by 10%
relative to the homogeneous water case, whereas interseed attenuation is not
important.

THREE-DIMENSIONAL DOSE CALCULATIONS—MONTE CARLO AND BOLTZMANN TRANSPORT TECHNIQUES
In this section, we move away from macroscopically parameterized
calculations and calculate the dose distribution using microscopically
parameterized functions. These calculations can handle all
inhomogeneities as straightforwardly as the homogeneous case. The
techniques require significant computer power, which is now available.
The techniques are feasible for brachytherapy applications because the
distances and volumes of interest are small. The method of Monte Carlo
(66–69) involves the concepts figuratively expressed in Equation 5.4 at
the start of this chapter. Namely, a photon leaves a radioisotope in a
random direction and when it hits something, there is a chance of
something happening, or not. In Monte Carlo, a photon is randomly
created, starts off in a random direction, and each step of the way has a
probability (cross-section) of photoelectric interaction, Compton
interaction, pair production, or coherent scattering. If it spits off an
electron at a certain location headed in a certain direction determined
by a “roll of the dice,” that electron loses energy (6) as it moves through
the medium. The medium is split into small volume regions (cells); as
electrons lose energy in those cells, that energy loss is tallied as
deposited in the corresponding cell. So after the one photon has left and all its
scattered photon and electron descendants have ended up as too low in
energy to escape any more cells and are absorbed, you say that one
history has been completed. After that first history, almost every cell
volume around the source has zero dose because most cells were either
not geometrically hit by anything or, if hit, no interaction event occurred
within them. In order to get useful results we will have to rerun this
“rolling of the dice” process over and over again. The computer is tailor
made to handle this task, many millions of times or more to finally get
smooth continuous results close to the source. We note that Monte Carlo
simulation tries to mimic what is happening in the patient. Monte Carlo
is sometimes thought (67) of as a simulated measurement process
carried out on the computer.
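To make the process concrete, the toy sketch below follows single-energy photons from a point source through a water-like medium, tallying the energy deposited in spherical shells. The attenuation coefficient, absorption probability, and per-scatter energy loss are invented numbers chosen only to illustrate the bookkeeping of histories and cells, not real cross-sections.

```python
import numpy as np

rng = np.random.default_rng(1)

MU = 0.1             # total attenuation coefficient, cm^-1 (assumed)
P_ABSORB = 0.2       # probability that an interaction is absorptive (assumed)
E0 = 1.0             # starting photon energy, arbitrary units
N_HISTORIES = 20000
shell_edges = np.linspace(0.0, 10.0, 21)       # 0.5-cm spherical shells
tally = np.zeros(len(shell_edges) - 1)         # energy deposited per shell

def random_direction():
    u = rng.uniform(-1.0, 1.0)                 # isotropic emission
    phi = rng.uniform(0.0, 2.0 * np.pi)
    s = np.sqrt(1.0 - u * u)
    return np.array([s * np.cos(phi), s * np.sin(phi), u])

for _ in range(N_HISTORIES):                   # one loop pass = one history
    pos, direction, energy = np.zeros(3), random_direction(), E0
    while energy > 0.05:
        pos = pos + direction * rng.exponential(1.0 / MU)   # sample free path
        r = np.linalg.norm(pos)
        if r >= shell_edges[-1]:
            break                              # photon leaves the tally region
        shell = np.searchsorted(shell_edges, r) - 1
        if rng.uniform() < P_ABSORB:
            tally[shell] += energy             # local absorption ends history
            break
        deposited = 0.3 * energy               # crude scatter energy loss
        tally[shell] += deposited
        energy -= deposited
        direction = random_direction()         # new, uncorrelated direction

print(tally / N_HISTORIES)                     # mean energy deposited per history
```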
The errors in Monte Carlo are of two types, systematic and statistical.
The systematic errors arise from the fact that the scattering cross-
sections are themselves parameterized approximations to the real atomic
scattering. When Monte Carlo is used in Radiation Oncology to predict
average macroscopic properties such as dose distributions, these errors
are insignificant. The random statistical errors arise because of the need
to stop the calculation in some finite time, before statistical variation in
each and every volume element of the patient is rendered insignificant.
The details can be misleading. Suppose we use 50 million histories in a
Monte Carlo model of a prostate HDR treatment; this seems like an
enormous number. In that case, a 10-Ci source can expose the patient for
300 seconds and, therefore, the actual treatment involves around 10^14
photon histories. So the seemingly huge Monte Carlo computation
models about 2 million times fewer decays than the real treatment. Looked at the
other way, the 50-million-history Monte Carlo calculation represents
what we would expect from an HDR treatment lasting less than a millisecond. In
Monte Carlo, the statistical uncertainty in the dose varies within the patient.
The uncertainty is larger the farther the point considered lies from the source
distribution. Near the source distribution, 50 million
histories are usually acceptable. If not, run more histories.
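Using 1 Ci = 3.7 × 10^10 disintegrations per second, the numbers behind this comparison are:

```latex
10\ \mathrm{Ci}\times 3.7\times 10^{10}\ \mathrm{s^{-1}\,Ci^{-1}}\times 300\ \mathrm{s}
\approx 1.1\times 10^{14}\ \text{decays},\qquad
\frac{1.1\times 10^{14}}{5\times 10^{7}} \approx 2\times 10^{6},\qquad
\frac{5\times 10^{7}}{3.7\times 10^{11}\ \mathrm{s^{-1}}} \approx 0.14\ \mathrm{ms}.
```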
Monte Carlo for brachytherapy investigations has a long history
starting with Berger’s work (66), which provided analysis and insight
into the results of Meisberger et al. (8). The encapsulated radioactive source
can be modeled and the relationship between calibration, measurement,
and calculation of 3D distributions determined. An unequaled advantage
of Monte Carlo is in handling any case regardless of complexity. All that
is required is to determine the radionuclide spatial distribution and the
positions of the inhomogeneities, subdivide each into regions, subdivide
the entire patient region of interest into small regions, and let the Monte
Carlo program run until the results stop changing significantly. This
advantage cannot be overstated. Consider that the Monte Carlo
N-Particle (MCNP) (67) input used for full 3D calculations of a
source/shield/applicator (53) required typing only 100 lines of text.
Meanwhile experimental measurements of 3D dose-rate distribution for
the same and similar applications (28,50,51) involved machining of
positioning devices to allow rotation of applicators with high precision
in water tanks. Measurements are made in different orientations and the
results have to be merged together. It is obvious which is easier.
Monte Carlo is also useful even when you are not going to use it
explicitly. All the approximations for dose-rate calculation in this
chapter involve parameters. If one fits the values of the parameters of
one's algorithm to best match the results of Monte Carlo simulations
(53), one obtains a parameterization in which one can have more
confidence than in one derived by the traditional method of fitting to experimental
results (28). The reasons for this were pointed out by Boyer (70)
regarding the limitations of brachytherapy measurement, for example,
finite size of detectors, energy response changes of detector with
distance, and so on. Monte Carlo does not have these limitations. Monte
Carlo does have the limitation, or more precisely the requirement, that
the source emission spectrum (7) must be precisely known and that the
exact specifications of the source/applicator construction be defined
correctly. Because of that, there is still a need for measurement, but only
to spot-check the results at a few points for confidence that the above
requirements are met. In summary, Monte Carlo simulation plays some
part in every brachytherapy dose calculation.
Recently a new method (71–73) has been proposed for brachytherapy
dose calculation. Suppose we return to our 50 million history Monte
Carlo example and rerun it again on the computer. We would obtain a
different (albeit similar) dose distribution result. If we could (we cannot)
run the actual 10^14 histories, get the results, and then rerun that
simulation once more and compare, we expect that we would find no
significant difference in the results. Therefore, there is an expectation that an
exact solution exists, that is, an infinite-history solution. To find that
solution we turn to methods for solving the linear Boltzmann transport
equation (74) that were developed at Los Alamos National Laboratory
(75,76). The Boltzmann transport equation governs the motion of a
particle in a fluid subject to random collisions. It has no analytic
solution. To make the problem computationally feasible, the scattering
energies and angles are discretized and the equations are solved on a
grid of discrete points. The technique is called lattice or grid-based
Boltzmann solvers (GBBS) (77). We can understand this method applied
to radiation as having the radioactive source creating the driving flux of
photons (Fig. 5.10). This photon flux flows out, suffers random
collisions, is modified by them, and creates electron flux. The solution
which GBBS finds is the resulting photon and electron final total flux at
every grid position throughout the patient. These fluxes are also
determined approximately in the Monte Carlo method, but GBBS finds them far
faster, by a factor of thousands. Error in GBBS is systematic, not
statistical. It is said that the solution is exact (77). However, the
solutions of the differential equations can be affected by the
aforementioned discretization approximations. Hence there is a danger
that the solution could be incorrect. As in Monte Carlo, this is a
technique wherein more and more computer processing power makes
the calculations more and more reliable in shorter times.

FIGURE 5.10 Point source S of single energy photons, photon flux ΨS in homogeneous medium.
Resulting identical photon flux Ψ at positions 1 to 3 and lower energy flux at position 4. Monte
Carlo or GBBS can do no better than equal the accuracy of simple parameterized point source
dose calculations.

FIGURE 5.11 Source distribution S and dose at five positions in the presence of bone (grey) and
air (blue). Only Monte Carlo or GBBS can accurately predict dose at all positions.

In Figure 5.10, we show the photon flux in the homogeneous case,
assuming a single-energy point source emitter. As one moves away from
the source, the intensity of the primary (orange) photons drops (the line gets shorter),
and lower-energy photons (shades of blue) appear at different angles.
Electron flux is not shown. The assumption with GBBS is that the
resulting flux at positions 1 to 4 cannot be random and must be definite
and related to each other via the transport equations. Let us compare the
use of Equation 5.15 with Monte Carlo and GBBS to illustrate the
advantages and disadvantages. In Figure 5.10, if we used Equation 5.15,
the dose which we would calculate at equally distant positions 1 to 3
would be exactly equal and correct. Assuming position 4 is 10 cm away,
the dose would be roughly accurate. In contrast, with Monte Carlo the
dose would be very close to correct for positions 1 to 3 but the doses
would not be exactly equal because of statistical error. With GBBS, the
dose would be exactly equal and correct for positions 1 to 3. Both Monte
Carlo and GBBS would be more accurate for position 4, especially if it
were much farther than 10 cm away. Therefore, for the homogeneous
case, really nothing is to be gained over any of the historical methods of
dose calculation by using Monte Carlo or GBBS. In Figure 5.11, we show
a source distribution S and dose points in the presence of
inhomogeneities (grey is bone, blue is air). If we use Equation 5.15
(assume we can ignore intersource attenuation effects), the calculated
doses at a prescription point such as D5 or D1 are very accurate, while
the dose at D2 is fairly accurate (unless position 2 is very close to the
bone), the dose at D3 is underestimated, and D4 is overestimated. On the
other hand, for Monte Carlo and GBBS, all doses are accurately
predicted; moreover, intersource attenuation effects need not be ignored.
There is no question that, if inhomogeneity is present, Monte Carlo
and GBBS are superior in dose calculation accuracy throughout the
patient.

FUTURE DIRECTIONS IN 3D DOSE-RATE CALCULATION ALGORITHMS
At the present time, commercial treatment planning systems do not
provide the 3D dose calculation methods described above and do not allow precalculated 3D dose-rate
matrices to be used for planning. The TG43 formalism is supported for all
sources. It is clear that only Monte Carlo and GBBS can handle all
difficulties in brachytherapy dose calculation. One reason is that
heterogeneities do not hamper the accuracy of these methods. If the
Monte Carlo calculation is deciding whether a photon will randomly
scatter while passing through bone or through water, it is all the
same to the computation procedure. A similar statement applies to
GBBS. There is not much extra computation power needed for the
interapplicator/patient heterogeneity calculation relative to a
homogeneous calculation. Of course, the calculation time, plus the time for
identification and accurate orientation of all sources and applicators in
the CT scan and their definition in the Monte Carlo or GBBS calculation
framework, means this is still not a real-time clinical reality except in special cases.
However, all these problems will eventually be addressed using
increased processing power both for calculations and image analysis. In
the future, we expect to be able to take all inhomogeneity effects into
account using full clinical Monte Carlo or GBBS calculations.
Recently, BrachyVision (Varian Medical Systems, Palo Alto, CA) has
introduced a GBBS option for their HDR treatment planning system. The
user generates an optimized plan using Equation 5.15 to calculate dose
and thereby determine all dwell times. After completion, he or she has
the option to run a GBBS calculation. This documents the dose
distribution in the patient accounting for all inhomogeneities. This
calculation takes less than 10 minutes. What the user does with that
information is not clear. That the information is now available is
noteworthy. However, it would be a mistake to think that the fact that
the inhomogeneity-corrected dose is different from the homogeneous
dose justifies changing the integral dose (1) to the patient. That is a
more complicated question.
The last paragraph indicates that (unexpectedly) GBBS has leap-
frogged Monte Carlo in regard to individual patient-specific clinical
utilization. A problem with commercial implementation of Monte Carlo
is deciding when to stop the calculation, that is, how many histories to run. That number
can be a function of the geometry of the case itself. Since even the
homogeneous case takes too long, designing a software system to run
more histories than one would ever expect to need is not feasible today. So the
reasons for GBBS success are increased speed and the fact that GBBS provides an
"exact" solution. The former is easy to understand; the latter should be
accepted with caution. Since the solution is exact (and hence, when the calculation
completes, you are finished), this is a huge software design advantage.
There is one potential problem. Namely, what happens to the “exact”
solution when there is artifact in the CT scan or misinterpretation of
the size or density of metal objects or calcifications? Since in GBBS the
dose results in all areas of the patient are linked by the differential
equations, this could result in the prescription dose result being made
inaccurate by other less important regions of the patient being
incorrectly specified. This could mean that the "exact" solution is
"exactly" wrong. Whether such an error could be clinically significant
is unclear at this time. Having said all that, one has to be excited by this
new development as much will be learned from it, and in the current
GBBS implementation for HDR, Equation 5.15 still determines the actual
clinical treatment.

SUMMARY
The history of algorithm development was that extensive use of
measurements led to simple calculation algorithms based on a point
source. Using numerical integration these were extended to cylindrical
geometries of varying complexity. These developments were then
improved by comparison to specialized Monte Carlo studies. The rise in
computer power made available more extensive Monte Carlo
investigations. These Monte Carlo studies more precisely determined the
best parameters of the calculation algorithms to give a better agreement
with experiment and later permitted a unification of source calibration
and clinical calculation of dose. The ease of these investigations has
made extensive 3D measurement projects a thing of the past. In fact, the
GBBS literature bypasses experimental measurements and compares to
Monte Carlo for justification. GBBS will see greater utilization in
clinical practice, as it is faster than Monte Carlo and gives definite
results. As computer power increases in the future, the power and
dominance of these two techniques in providing detailed dose
distribution results will increase. It will then seem as if all the simple
equations are no longer needed as advanced computer modeling has
superseded them. One notes that none of the governing equations for
Monte Carlo or GBBS were given in this chapter. There is no need as no
one is going to use them to check the dose in a patient. The vast
computational labor of these methods is such that only the computer can
produce the results. It is then that the historical methods reappear as the
only way to check that the computer results can be believed. One is
advised to always do exactly that.

ACKNOWLEDGMENTS
The author gratefully thanks Glenn Glasgow, Ph. D. for useful discussions,
Vania Arora, MS, for a careful reading of the manuscript and Mr. Paul
Weeks for producing the figures.
KEY POINTS
• Historical calculations for a source calibrated to specify its activity
use Equations 5.9 or 5.11.
• Modern TG43 calculations for a source calibrated to specify its
dose-rate in water at a reference position use Equation 5.15.
• Dose at a point near a cylindrical source should be calculated using
a line source approximation.
• Monte Carlo simulation has been essential to accurately determine
the basic parameters needed for Equations 5.9, 5.11, or 5.15.
• Increasing computer power will eventually lead to using Monte
Carlo and GBBS approaches for individual patient treatment
planning.

QUESTIONS
1. If the air kerma strength (SK) doubles and the distance doubles,
the dose-rate at a given point most nearly
A. Decreases by 50%
B. Doubles
C. Remains the same
D. Decreases by 25%
2. If a 0.5 mCi 192Ir source (Γ = 4.6 R cm²/mCi h) is replaced by a
0.5 mCi 125I source (Γ = 1.45 R cm²/mCi h), the initial dose rate
at 1 cm most nearly
A. Increases by a factor of 2
B. Decreases by a factor of 2
C. Remains the same
D. Decreases by a factor of 3
3. If an SK = 2.0 U 192Ir source (Λ = 1.12 cGy/hU) is replaced by an
SK = 2.0 U 125I source (Λ = 1.036 cGy/hU), the dose rate at
1 cm most nearly
A. Increases by a factor of 2
B. Decreases by a factor of 2
C. Remains the same
D. Decrease by a factor of 3
4. Consider a 192Ir source and an ion chamber separated by 10 cm
and fixated at the same height in an empty tank. As water is
poured into the tank, the ionization reading from the ion
chamber is observed in three regions. First region, water level
getting closer to the source–chamber height; second region,
water level covers the source–chamber; and third region water
level rising higher in the tank above the source and ion chamber.
Observation of the signal from the ion chamber as the tank fills
would show that the signal:
A. Remains the same, increases, increases
B. Increases, decreases, increases
C. Increases, stays the same, increases
D. Increases, decreases, decreases
5. Consider a 125I seed source with an air kerma strength of 0.6 U;
use the TG43 formalism to calculate the dose rate at r = 2 cm and θ
= 30° in Fig. 5.4. (Use the line source geometry factor, L = 4
mm, gL(2) = 0.819, Λ = 0.965 cGy/hU, and F(2,30) = 0.842.)
A. 0.02 cGy/h
B. 0.1 cGy/h
C. 0.2 cGy/h
D. 0.4 cGy/h

ANSWERS
1. A From Equation 5.15, air kerma strength change increases
dose by factor 2, distance change decreases dose by factor
4. Dose drops by a factor of 2.
2. D Exposure rate constant for 125I is more than three times
smaller than that for 192Ir.
3. C Dose rate constants for the two isotopes differ by less than
10%.
4. B In the first region, increasing scatter from rising water
increases the dose-rate; in region 2 as water covers the
path from source to the chamber, attenuation decreases
the dose-rate; in third region, increasing scatter from rising
water above the source increases the dose-rate.
5. B Note when you calculate the line source geometry ratio in
Equation 5.15, when L << r the result is very close (in
this case, within 2%) to simply using the point source
approximation. Far enough away, all source distributions
look like point sources.
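A quick numerical check of Question 5, using the point-source approximation to the geometry factor as the answer suggests (the values are those given in the question):

```python
# Point-source check of Question 5:
# dose rate = S_K * Lambda * [G(r)/G(r0)] * g_L(r) * F(r, theta)
S_K = 0.6          # air kerma strength, U
Lam = 0.965        # dose-rate constant, cGy/hU
g_L = 0.819        # radial dose function at 2 cm
F = 0.842          # anisotropy function F(2 cm, 30 degrees)
r, r0 = 2.0, 1.0   # cm

geometry_ratio = (1.0 / r**2) / (1.0 / r0**2)   # point-source geometry factor ratio
dose_rate = S_K * Lam * geometry_ratio * g_L * F
print(round(dose_rate, 3))                      # ~0.10 cGy/h, i.e., answer B
```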

REFERENCES
1. Khan F. The Physics of Radiation Therapy. 2nd ed. Baltimore, MD:
Williams & Wilkins; 1994.
2. Perez CA, Glasgow GP. Clinical applications of brachytherapy. In:
Perez CA, Brady LW, eds. Principles and Practice of Radiation
Oncology. Philadelphia, PA: JB Lippincott Co; 1987.
3. Johns HE, Cunningham JR. The Physics of Radiology. 4th ed.
Springfield, IL: Charles C Thomas Publisher; 1983.
4. Pierquin B, Chassagne DJ, Chahbazian CM, et al. Brachytherapy. St.
Louis, MO: Warren H. Green; 1978.
5. Glasgow GP. Low dose rate brachytherapy. In: Khan F, Gerbi BJ, eds.
Treatment Planning in Radiation Oncology 3rd Ed. Philadelphia, PA:
Lippincott Williams Wilkins; 2012.
6. Attix FH. Introduction To Radiological Physics And Radiation Dosimetry.
New York, NY: John Wiley & Sons; 1986.
7. Brown E, Firestone RB, Shirley VS. Table of Radioactive Isotopes. New
York, NY: John Wiley & Sons; 1986.
8. Meisberger LL, Keller RJ, Shalek RJ. The effective attenuation in
water of the gamma rays of gold 198, iridium 192, cesium 137,
radium 226, and cobalt 60. Radiology. 1968;90:953–957.
9. Meli JA, Meigooni AS, Nath R. On the choice of phantom material
for the dosimetry of 192Ir sources. Int J Radiat Oncol Biol Phys.
1988;14:587–594.
10. Dale RG. Some theoretical derivations relating to the tissue
dosimetry of brachytherapy nuclides, with particular reference to
iodine-125. Med Phys. 1983;10:176–183.
11. Hale J. The use of interstitial radium dose-rate tables for other
radioactive isotopes. Am J Roentgenol Radium Ther Nucl Med. 1958;
79:49–53.
12. Evans RD, Noyau A. The Atomic Nucleus. New York, NY: McGraw
Hill; 1955.
13. Kornelson RO, Young ME. Brachytherapy buildup factors. Br J
Radiol. 1981;54:136.
14. Webb S, Fox RA. The dose in water surrounding point isotropic
gamma ray emitters. Br J Radiol. 1979;52:482–484.
15. Venselaar JL, van der Giessen PH, Dries WJ. Measurement and
calculation of the dose at large distances from brachytherapy
sources: Cs-137, Ir-192, and Co-60. Med Phys. 1996;23:537–543.
16. van Kleffens HJ, Star WM. Application of stereo X-ray
photogrammetry (SRM) in the determination of absorbed dose
values during intracavitary radiation therapy. Int J Radiat Oncol Biol
Phys. 1979;5:557–563.
17. Dale RG. A Monte Carlo derivation of parameters for use in the
tissue dosimetry of medium and low energy nuclides. Br J Radiol.
1982;55:748–757.
18. Park HC, Almond PR. Evaluation of the buildup effect of an 192Ir
high dose-rate brachytherapy source. Med Phys. 1992;19:1293–
1297.
19. Weaver K. Brachytherapy dose calculations: calculational
algorithms. In: Thomadsen B, ed. Categorical Course in Brachytherapy
Physics. Oak Brook, IL: Radiological Society of North America;
1997:41–49.
20. Sievert RM. Die intensitatatverteilung der primaren-strehlung in der
nahe medinizinisher radium-praparate. Acta Radiol. 1921;1:89–128.
21. Young ME, Batho HF. Dose tables for linear radium sources
calculated by an electronic computer. Br J Radiol. 1964;37:38–44.
22. Glasgow GP, Dillman LT. Specific gamma-ray constant and exposure
rate constant of 192Ir. Med Phys. 1979;6:49–52.
23. Glasgow GP. Exposure rate constants for filtered 192Ir sources. Med
Phys. 1981;8:502–503.
24. Diffey BL, Klevenhagen SC. An experimental and calculated dose
distribution in water around CDC K-type Cesium-137 sources. Phys
Med Biol. 1975;20:446–454.
25. van der Laarse R, Meertens H. An algorithm for ovoid shielding of a
cervix applicator. In: Cunningham JR, Ragan D, Van Dyk J, eds. The
Proceedings 8th International Conference on The Use of Computers in
Radiation Therapy, Toronto, Canada, July 9–12. Los Angeles, CA:
IEEE Computer Society; 1984.
26. Williamson JF. Monte Carlo and analytic calculation of absorbed
dose near 137Cs intracavitary sources. Int J Radiat Oncol Biol Phys.
1988;15:227–237.
27. Meertens H, van der Laarse R. Screens in ovoids of a Selectron
cervix applicator. Radiother Oncol. 1985;3:69–80.
28. Weeks KJ, Dennett JC. Dose calculation and measurements for a CT
compatible version of the Fletcher applicator. Int J Radiat Oncol Biol
Phys. 1990;18:1191–1198.
29. Williamson JF. Monte Carlo evaluation of specific dose constants in
water for 125I seeds. Med Phys. 1988;15:686–694.
30. Nath R, Anderson LL, Luxton G, et al. Dosimetry of interstitial
brachytherapy sources: recommendations of the AAPM Radiation
Therapy Committee Task Group 43. Med Phys. 1995;22:209–234.
31. Rivard MJ, Coursey BM, DeWerd LA, et al. Update of AAPM Task
Group No. 43 Report: A revised AAPM protocol for brachytherapy
dose calculations. Med Phys. 2004;31:633–674.
32. Williamson JF, Butler W, DeWerd LA, et al. Recommendations of
the American Association of Physicists in medicine regarding the
impact of implementing the 2004 Task Group 43 report on dose
specification for 103Pd and 125I interstitial brachytherapy. Med
Phys. 2005;32:1424–1439.
33. DeWerd LA, Huq MS, Das IJ, et al. Procedures for establishing and
maintaining consistent air- kerma strength standards for low-energy,
photon-emitting brachytherapy sources: recommendations of the
Calibration Laboratory Accreditation Subcommittee of the American
Association of Physics in medicine. Med Phys. 2004;31:675–681.
34. Rivard MJ, Butler WM, DeWerd LA, et al. Supplement to the 2004
update of AAPM Task Group No. 43 Report. Med Phys.
2007;34:2187–2205.
35. Chan GH, Nath R, Williamson JF. On the development of consensus
values of reference dosimetry parameters for interstitial
brachytherapy sources. Med Phys. 2004;31:1040–1045.
36. Karaiskos P, Angelopoulos A, Sakellio L et al. Monte Carlo and TLD
dosimetry of an 192Ir high dose-rate brachytherapy source. Med
Phys. 1998;25:1975–1984.
37. Heintz BH, Wallace RE, Hevezi JM. Comparison of I-125 sources
used for permanent interstitial implants. Med Phys. 2001;28:671–
682.
38. Mainegra E, Capote R, Lopez E. Dose-rate constants for 125I, 103Pd,
192Ir and 169Yb brachytherapy sources: an EGS4 Monte Carlo
study. Phys Med Biol. 1998;43:1557–1566.
39. Williamson JF. The accuracy of the line and point source
approximations in Ir-192 dosimetry. Int J Radiat Oncol Biol Phys.
1990;12:409–414.
40. Thomason C, Mackie TR, Lindstrom MJ, et al. The dose distribution
surrounding 192Ir and 137Cs seed sources. Phys Med Biol.
1991;36:475–493.
41. Mainegra E, Capote R, Lopez E. Radial dose functions for 103Pd,
125I, 169Yb and 192Ir brachytherapy sources: an EGS4 Monte Carlo
study. Phys Med Biol. 2000;45:703–717.
42. Kirov AS, Williamson JF, Meigooni AS, et al. TLD, diode, and Monte
Carlo dosimetry of an 192Ir source for high dose-rate
brachytherapy. Phys Med Biol. 1995;40:2015–2036.
43. Nath R, Meigooni AS, Muench P, et al. Anisotropy functions for
103Pd, 125I, and 192Ir interstitial brachytherapy sources. Med Phys.
1993;20:1465–1473.
44. Capote R, Mainegra E, Lopez E. Anisotropy functions for low energy
interstitial brachytherapy sources: an EGS4 Monte Carlo Study. Phys
Med Biol. 2001;46:135–150.
45. Brunet-Benkhoucha M, Verhaegen F, Lassalle S, et al. Clinical
Implementation of a digital tomosynthesis-based seed reconstruction
algorithm for intraoperative postimplant dose evaluation in low
dose rate prostate brachytherapy. Med. Phys. 2009;36:5235–5244.
46. Corbett JF, Jezioranski JJ, Crook J, et al. The effect of seed
orientation deviations on the quality of 125I prostate implants. Phys
Med Biol. 2001;46:2785–2800.
47. Furhang EE, Anderson LL. Functional fitting of interstitial
brachytherapy dosimetry data recommended by the AAPM
Radiation Therapy Committee Task Group 43. Med Phys.
1999;26:153–160.
48. Sloboda RS, Menon GV. Experimental determination of the
anisotropy function and anisotropy factor for model 6711 I-125
seeds. Med Phys. 2000;27:1789–1799.
49. Ling CC, Schell MC, Yorke ED, et al. Two dimensional dose
distribution of 125I seeds. Med Phys. 1985;12:652–655.
50. Ling CC, Spiro IJ, Kubiatowicz DO, et al. Measurement of dose
distribution around Fletcher-Suit-Delcos colpostats using a Therados
radiation field analyzer (RFA-3). Med Phys. 1984;11:326–330.
51. Mohan R, Ding IY, Martel MK, et al. Measurements of radiation
dose distributions for shielded cervical applicators. Int J Radiat
Oncol Biol Phys. 1985;11:861–868.
52. Williamson JF. Dose calculations about shielded gynecological
colpostats. Int J Radiat Oncol Biol Phys. 1990;19:167–178.
53. Weeks KJ. Monte Carlo calculations for a new ovoid shield system
for carcinoma of the uterine cervix. Med Phys. 1998;25:2288–2292.
54. Mohan R, Ding IY, Toraskar J, et al. Computation of radiation dose
distributions for shielded cervical applicators. Int J Radiat Oncol Biol
Phys. 1985;11:823–830.
55. Russell KR, Ahnesjo A. Dose calculation in brachytherapy for a
192Ir source using a primary and scatter dose separation technique.
Phys Med Biol. 1996;1007–1024.
56. Rose ME. Elementary Theory of Angular Momentum. New York, NY:
Wiley; 1957.
57. Weeks KJ. Brachytherapy object oriented treatment planning using
three dimensional image guidance. In: Thomadsen B, ed. Categorical
course in brachytherapy physics. Oak Brook, IL: Radiological Society
of North America, 1997:79–86.
58. Burns GS, Raeside DE. The accuracy of single-seed dose
superposition for I-125 implants. Med Phys. 1989;16:627–631.
59. Meigooni AS, Meli JA, Nath R. Interseed effects on dose for 125I
brachytherapy implants. Med Phys. 1992;19:385–390.
60. Chibani O, Williamson JF, Todor D. Dosimetric effect of seed
anisotropy and interseed attenuation for 103Pd and 125I prostate
implants. Med Phys. 2005;32:2557–2566.
61. Yu Y, Anderson LL, Li Z, et al. Permanent prostate seed implant
brachytherapy: report of the American Association of Physicists in
Medicine in Medicine Task Group No. 64. Med Phys. 1999;26:2054–
2076.
62. Das RK, Keleti D, Zhu Y, et al. Validation of Monte Carlo dose
calculations near 125I sources in the presence of bounded
heterogeneities. Int J Radiat Oncol Biol Phys. 1997;38:843–853.
63. Meigooni AS, Nath R. Tissue inhomogeneity correction for
brachytherapy sources in a heterogeneous phantom with cylindrical
symmetry. Med Phys. 1992;19:401–407.
64. Williamson JF, Li Z, Wong JW. One-dimensional scatter-subtraction
method for brachytherapy dose calculation near bounded
heterogeneities. Med Phys. 1993;20:233–244.
65. Furstoss C, Reniers B, Bertrand MJ et al. Monte Carlo study of LDR
seed dosimetry with an application in a clinical brachytherapy
breast implant. Med. Phys. 2009;36:1848–1858.
66. Berger MJ. Energy Deposition in Water by Photons From Point Isotropic
Sources. MIRD Pamphlet 2. Washington, DC: National Bureau of
Standards; 1968.
67. Briesmeister JT. MCNP—A general Monte Carlo N-particle transport
code. Version 4a: Los Alamos National Laboratory Report, LA-
12625. 1993.
68. Nelson W, Hirayama H, Rogers D. The EGS4 Code System. SLAC
Report 265 Stanford University, 1985.
69. Williamson JF. Monte Carlo evaluation of kerma at a point for
photon transport problems. Med Phys. 1988;14:567–576.
70. Boyer AL. A fundamental accuracy limitation on measurements of
brachytherapy sources. Med Phys. 1979;6:454–456.
71. Gifford KA, Horton JL, Wareing TA, et al. Comparison of a finite-
element multigroup discrete-ordinates code with Monte Carlo for
Radiotherapy calculations. Phys Med Biol. 2006;51:2253–2265.
72. Daskalov GM, Baker RS, Rogers DW, et al. Dosimetric modeling of
the microselectron high-dose rate 192Ir source by the multigroup
discrete ordinates method. Med Phys. 2000;27:2307–2319.
73. Zhou C, Inanc F. Integral-transport-based deterministic
brachytherapy dose calculations. Phys Med Biol. 2003;48:73–93.
74. Lewis EE, Miller WF. Computational Methods of Neutron Transport.
New York, NY: Wiley; 1984.
75. Alcouffe RE, Baker RS, Brinkley FW, et al. DANTSYS: A Diffusion
Accelerated Neutral Particle Transport Code System. Los Alamos, NM:
Los Alamos National Laboratory LA-12969-M; 1995.
76. Wareing TA, McGhee JM, Morel JE. ATTILA: a three dimensional
unstructured tetrahedral mesh discrete-ordinates transport code.
Trans Amer Nucl Soc. 1996;75:146.
77. Vassiliev ON, Wareing TA, McGhee J, et al. Validation of a new
grid-based Boltzmann equation solver for dose calculation in
radiotherapy with photon beams. Phys Med Biol. 2010;55:581–598.
6 Treatment Planning Algorithms:
Electron Beams

Faiz M. Khan

Most commercial treatment planning systems incorporate electron beam
planning programs. However, not all programs have comparable
accuracy or limitations. In general, the electron beam algorithms are
more complex than those of the photon beams and require more careful
testing and commissioning for clinical use.
Early methods of computing dose distribution were based on empiric
data or functions that used ray line geometries assuming broad beam
depth dose distribution in homogeneous media. Inhomogeneity
corrections were determined with transmission data measured with large
slabs of heterogeneities. These earlier methods have been reviewed by
Sternick (1).
The major problem with the use of broad-beam distributions and slab
geometries is that this approach is inadequate in predicting the effects of
narrow beams, sudden changes in surface contours, small
inhomogeneities, oblique beam incidence, and so forth.
An improvement over the empiric methods came about with the
development of algorithms based on the age–diffusion equation by Kawachi
(2) and others in the late 1970s. These models have been reviewed by
Andreo (3). A semiempiric pencil beam model developed by Ayyangar
and Suntharalingam (4) and based on the age–diffusion equation was
adopted for the Theraplan treatment planning system (Theratronics,
Kanata, Ontario). This semiempiric algorithm, if properly implemented
for a given electron accelerator, allowed reasonably accurate calculation
of dose distribution in a homogeneous medium. Contour irregularity and
inhomogeneities were considered only in the plane of calculation, as for
example, a computed tomography (CT) slice, without regard to the
effects of the third dimension, such as adjacent CT slices. Whereas pencil
beams placed along the surface contour can predict effects of contour
irregularity and beam obliquity, inhomogeneity corrections were still
based on effective path length between the virtual source and the point
of calculations. In these cases, bulk density of the inhomogeneity in the
given CT slices was used to determine effective depth. The main
limitation of such algorithms was that the effects of the anatomy in three
dimensions were not fully accounted for, although empirically derived
correction factors could be used in simple geometric situations (5).

PENCIL BEAM MODELS BASED ON MULTIPLE SCATTERING THEORY
In the early 1980s, there was a significant surge in the development of
electron beam treatment planning algorithms (6–9). These models were
based on gaussian pencil beam distributions obtained with the
application of Fermi–Eyges multiple scattering theory (10). For a review
of these algorithms, see Brahme (11) and Hogstrom (12). A brief
discussion of these algorithms is presented here to familiarize the users
of these programs with the basic theory involved.
Assuming small-angle approximation for multiple scattering of
electrons, the spatial distribution of electron fluence or dose from an
elementary pencil beam penetrating a scattering medium is very nearly
gaussian at all depths. Large-angle scattering events could cause
deviations from a pure gaussian distribution, but their overall effect is
considered to be small. The spatial dose distribution for the gaussian
pencil beam can be represented thus:

d_p(r,z) = d_p(0,z)\,e^{-r^2/\sigma_r^2(z)} \qquad (6.1)

where dp(r,z) is the depth dose contributed by the pencil beam at a point
at a radial distance r from its central axis and depth z, dp(0,z) is the axial
dose, and σr²(z) is the mean square radial displacement of electrons as a
result of multiple Coulomb scattering. It can be shown that σr² = σx² + σy²,
where σx² and σy² are the mean square lateral displacements projected into
the X, Z and Y, Z planes. The exponential function in Equation 6.1
represents the off-axis ratio for the pencil beam, normalized to unity at r
= 0.
This is another useful form of Equation 6.1:

d_p(r,z) = D_\infty(0,z)\,\frac{1}{\pi\sigma_r^2(z)}\,e^{-r^2/\sigma_r^2(z)} \qquad (6.2)

where D∞(0,z) is the dose at depth z in an infinitely broad field with the
same incident fluence at the surface as the pencil beam. The gaussian
distribution function in Equation 6.2 is normalized so that the area
integral of this function over a transverse plane at depth z is unity.

FIGURE 6.1 A pencil beam coordinate system. Dose at point P is calculated by integrating
contributions from individual pencil beams.

In Cartesian coordinates, Equation 6.2 can be written thus:

d_p(x,y,z) = D_\infty(0,0,z)\,\frac{1}{2\pi\sigma_x(z)\sigma_y(z)}\,e^{-(x-x')^2/2\sigma_x^2(z)}\,e^{-(y-y')^2/2\sigma_y^2(z)} \qquad (6.3)

where dp(x,y,z) is the dose contributed to point (x,y,z) from a pencil
beam whose central axis passes through (x′,y′,z′) (Fig. 6.1).
The total dose distribution in a field of any shape can be calculated by
summing all the pencil beams:

D(x,y,z) = \iint_{\text{field}} d_p(x,y,z;\,x',y')\,dx'\,dy' \qquad (6.4)

The integration of a gaussian function within finite limits cannot be
performed analytically; its evaluation necessitates use of the error function.
Thus, convolution calculus shows that for an electron beam of rectangular
cross-section (2a × 2b), the spatial dose distribution is given thus:

D(x,y,z) = \frac{D_\infty(0,0,z)}{4}\left[\mathrm{erf}\!\left(\frac{a-x}{\sqrt{2}\,\sigma_x(z)}\right)+\mathrm{erf}\!\left(\frac{a+x}{\sqrt{2}\,\sigma_x(z)}\right)\right]\left[\mathrm{erf}\!\left(\frac{b-y}{\sqrt{2}\,\sigma_y(z)}\right)+\mathrm{erf}\!\left(\frac{b+y}{\sqrt{2}\,\sigma_y(z)}\right)\right] \qquad (6.5)

where the error function is defined thus:

\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt \qquad (6.6)

The error function is normalized so that erf(∞) = 1. (It is known that
the integral \int_0^\infty e^{-t^2}\,dt = \sqrt{\pi}/2.) Error function values for 0 < x < ∞ can
be obtained from tables published in mathematical handbooks (13).


Although D∞(0,0,z) in Equation 6.5 is given by the area integral of the
dose from pencil beams over an infinite transverse plane at depth z, this
term is usually determined from measured central axis depth dose data
of a broad electron field (e.g., 20 cm × 20 cm).
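A short numerical sketch of the rectangular-field expression reconstructed above as Equation 6.5 follows; the field size, σ values, and broad-beam dose are placeholder numbers chosen only for illustration.

```python
import math

def rect_field_dose(x, y, D_inf, a, b, sigma_x, sigma_y):
    # Dose at (x, y) for a 2a x 2b field, given the broad-beam central axis
    # dose D_inf and the pencil-beam spreads sigma_x, sigma_y at this depth.
    fx = math.erf((a - x) / (math.sqrt(2) * sigma_x)) + \
         math.erf((a + x) / (math.sqrt(2) * sigma_x))
    fy = math.erf((b - y) / (math.sqrt(2) * sigma_y)) + \
         math.erf((b + y) / (math.sqrt(2) * sigma_y))
    return 0.25 * D_inf * fx * fy

# Central axis of a 6 cm x 6 cm field (a = b = 3 cm) with sigma = 1 cm:
print(rect_field_dose(0.0, 0.0, 100.0, 3.0, 3.0, 1.0, 1.0))   # ~99.5
# At the geometric field edge the dose falls to about half the broad-beam value:
print(rect_field_dose(3.0, 0.0, 100.0, 3.0, 3.0, 1.0, 1.0))   # ~50
```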

Pencil Beam Characterization


Lateral Spread σ
As discussed earlier, the spatial dose distribution of an elementary pencil
electron beam can be represented by a gaussian function. This function
is characterized by its lateral spread parameter σ, which is similar to the
standard deviation parameter of the familiar normal frequency
distribution function:

p(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-x^2/2\sigma^2} \qquad (6.7)

Figure 6.2 is a plot of the normal distribution function given by Equation
6.7 for σ = 1. The function is normalized so that its integral between the
limits −∞ < x < +∞ is unity.
As a pencil electron beam is incident on a uniform phantom, its
isodose distribution looks like a teardrop or onion (Fig. 6.3). The lateral
spread (or σ) increases with depth until a maximum spread is achieved.
Beyond this depth there is a precipitous loss of electrons as their large
lateral excursion causes them to run out of energy.
The lateral spread parameter σ was theoretically predicted by Eyges
(10), who extended the small-angle multiple scattering theory of Fermi
to the slab geometry of any composition. Considering σx (z) in the X–Z
plane,

\sigma_x^2(z) = \frac{1}{2}\int_0^z \left(\frac{\overline{\theta^2}}{\rho l}\right)_{\!z'}\,\rho\,(z-z')^2\,dz' \qquad (6.8)

where θ²/ρl is the mass angular scattering power and ρ is the density of
the slab phantom.
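A minimal numerical evaluation of the Eyges integral written above as Equation 6.8 is sketched below for a uniform water slab, assuming (for illustration only) a depth-independent mass angular scattering power.

```python
import numpy as np

T_OVER_RHO = 0.005      # mass angular scattering power, rad^2 cm^2/g (assumed)
RHO = 1.0               # density of water, g/cm^3

def sigma_x(z, n=2000):
    # Rectangle-rule evaluation of
    # sigma_x^2(z) = (1/2) * integral_0^z (theta^2 / rho*l) * rho * (z - z')^2 dz'
    zp = np.linspace(0.0, z, n)
    dz = zp[1] - zp[0]
    integrand = 0.5 * T_OVER_RHO * RHO * (z - zp) ** 2
    return float(np.sqrt(np.sum(integrand) * dz))

for depth in (1.0, 3.0, 5.0):       # cm
    print(depth, round(sigma_x(depth), 3))
```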

FIGURE 6.2 Plot of a normal distribution function given by Equation 6.7 for σ = 1. The function is
normalized to unity for limits −∞ < x < +∞.
FIGURE 6.3 Pencil beam isodose distribution measured with a narrow electron beam of 22-MeV
energy incident on water phantom. (Reprinted with permission from ICRU Report 35. Radiation
Dosimetry: Electron Beams with Energies Between 1 and 50 MeV. Bethesda, MD: International
Commission on Radiation Units and Measurements. 1984:36.)

There are limitations to Eyges' equation. As pointed out by Werner
et al. (8), σ, given by Equation 6.8, increases with depth indefinitely,
which is contrary to what is observed experimentally in narrow-beam
dose distributions. The theory does not take into account the loss of
electrons when lateral excursions exceed the range of the electrons. Also,
Equation 6.8 is based on small-angle multiple Coulomb scattering;
hence, it underestimates the probability of large-angle scatter. This gives
rise to an underestimation of σ. Correction factors have been proposed to
counteract these problems (8,14,15).
Hogstrom et al. (7) correlated electron collision linear stopping power
and linear angular scattering power relative to that of water with CT
numbers so that effective depth and σ could be calculated for
inhomogeneous media using CT data. Effective depth calculation using
CT numbers also allows pixel-by-pixel calculation of heterogeneity
correction.
Experimental measurement of σ(z) is possible with the use of narrow
beams (1-mm to 2-mm diameter). The transverse dose distribution in a
narrow beam has a gaussian shape at each depth. The root mean square
(rms) radial displacement, σr(z), can be obtained from the profiles by a
mathematical deconvolution of the gaussian distributions.
Several investigators (15–17) used a narrow-beam depth dose
distribution to determine σr. Some (6) used the edge method, in which a
wide beam is blocked off at the center by a lead block; σ is evaluated
from the excursion of electrons into the block penumbra.
Werner et al. (8) used strip beams 2-mm wide to obtain transverse
profiles in homogeneous phantoms of various compositions. The strip
beam profiles were then fitted by gaussian distributions at various
depths. Figure 6.4 shows the results, comparing values of σ calculated by Eyges'
equation, with and without correction for the loss of electrons, against the measured data.
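One simple way to extract σ from a measured strip-beam profile, in the spirit of the fits just described, is a least-squares gaussian fit; the synthetic noisy profile below stands in for measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def gaussian(x, amp, sigma):
    return amp * np.exp(-x**2 / (2.0 * sigma**2))

x = np.linspace(-3.0, 3.0, 61)                 # off-axis distance, cm
true_sigma = 0.8
profile = gaussian(x, 1.0, true_sigma) + rng.normal(0.0, 0.01, x.size)

popt, _ = curve_fit(gaussian, x, profile, p0=[1.0, 0.5])
print("fitted sigma =", round(abs(popt[1]), 3))   # ~0.8 cm
```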

FIGURE 6.4 Spatial spread parameter, σx, plotted as a function of depth in water for a 13-MeV
electron beam. Comparison is shown between σx’s calculated by Eyges’ equation (dashed line)
and Eyges’ equation modified for loss of electrons (solid lines) with measured data. (Reprinted
with permission from Werner BL, Khan FM, Deibel FC. A model for calculating electron beam
scattering in treatment planning. Med Phys. 1982;9:180–187.)

Central Axis Distribution


As seen in Equation 6.5, the central axis depth dose for a rectangular
field of size (2a × 2b) can be derived from the measured broad-beam
central axis distribution. It is given thus:

D(0,0,z) = D_\infty(0,0,z)\,\mathrm{erf}\!\left(\frac{a}{\sqrt{2}\,\sigma_x(z)}\right)\mathrm{erf}\!\left(\frac{b}{\sqrt{2}\,\sigma_y(z)}\right) \qquad (6.9)

The normalization of the dose distribution function against D∞(0,0,z) is
useful because the central axis dose distribution for a broad beam can be
readily measured and includes Bremsstrahlung as well as the inverse
square law effect. Multiplication with the error functions provides the
required field size dependence factor due to lateral scatter.

SUCCESS AND LIMITATIONS OF PENCIL BEAM ALGORITHMS
Algorithms based on gaussian pencil beam distribution have solved
many of the problems that plagued the previous methods, which used
measured broad-beam distributions or empirically derived best fit
functions with separate correction factors for contour irregularity and
tissue heterogeneity. Analytic representation of pencil beam allows for
calculation of dose distribution for fields of any shape and size, irregular
or sloping surface contours, and tissue heterogeneities in three
dimensions. However, there are limitations to pencil beam algorithms,
and they have been discussed by several investigators (12,18–20). Most
of the inaccuracies occur at interfaces of different density tissues such as
tissue–lung, tissue–bone, and bone edges. Within homogeneous media
of any density, the algorithm has an accuracy of approximately 5% in
the central regions of the field and approximately 2-mm spatial accuracy
in the penumbra.

Central Axis Distribution


One of the essential requirements of a treatment planning algorithm is
that it calculates the central axis depth dose distribution in water with
acceptable accuracy. For broad beams, there is no problem because
measured data are input as part of the formalism (Equation 6.9). So, the
critical test of the algorithm is to reproduce depth dose distribution for
small and irregularly shaped fields. Figure 6.5 shows such a test of the
Hogstrom algorithm.

Isodose Distribution
The next step is to check isodose distributions, especially in the
penumbra region. Figure 6.6, from Hogstrom et al. (7), shows
reasonably good agreement. The success of the pencil beam algorithm in
this case is in part attributable to the measured broad-beam central axis
data, measured off-axis profiles at the surface to provide weighting
factors for the pencil beams, and an empirically derived multiplication
factor (1 to 1.3) to modify calculated values of σx(z) for a best agreement
in the penumbra region. This is true of most algorithms—that some
empiric factors are required to obtain a best fit of the algorithm with
measured data.

FIGURE 6.5 Comparison of measured depth dose distributions with those calculated from 10 cm
× 10 cm field size data. Electron energy, 17 MeV. (Reprinted with permission from Hogstrom KR,
Mills MD, Almond PR. Electron beam dose calculations. Phys Med Biol. 1981;26:445–459.)
FIGURE 6.6 Comparison of calculated and measured isodose distribution. (Reprinted with
permission from Hogstrom KR, Mills, MD, Almond PR. Electron beam dose calculations. Phys
Med Biol. 1981;26:445–459.)

Contour Irregularity
A pencil beam algorithm is ideally suited, at least in principle, to
calculate dose distribution in patients with irregular or sloping contour.
Pencil beams can be placed along rays emanating from the virtual
source, thus entering the patient at points defined by the surface
contours. The dose distribution in the X–Z plane from the individual
pencil beams depends on the gaussian spread parameter, σx(z), which is
properly computed as a function of depth along the ray line. The
composite dose profile therefore reflects the effect of the surface contour
shape by virtue of the individual pencil beams entering the contour
along ray lines, spreading in accordance with their individual depths, and
contributing dose laterally. Figure 6.7 shows a schematic representation
of the pencil beam algorithm used to calculate dose distribution in a
patient cross-section.

Tissue Heterogeneities
As discussed earlier, the Fermi–Eyges theory is strictly valid for slab
geometry. That is, a pencil beam traversing a slab of material is scattered
with a gaussian profile, and the spread, σ, of the transmitted beam
depends on the thickness of the slab and its linear angular scattering
power (Equation 6.8). Application of this theory to the human body with
inhomogeneities of different sizes and shapes becomes tenuous. Not only
do many different pencil beams pass through tissues of different
composition, but also each pencil with its increasing spread with depth
may not stay confined to one kind of tissue. Thus, the algorithm is bound
to fail where the cross-section of the inhomogeneity is smaller than the
pencil beam spread or at interfaces where parts of the pencil beam pass
through different inhomogeneities. Research in this area continues, but
no practical solution to this problem has yet been found.

FIGURE 6.7 Schematic representation of the Hogstrom algorithm for the calculation of dose
distribution in a patient cross-section. SSD, source-to-surface distance, SCD, source-to-collimator
distance. (Reprinted with permission from Hogstrom KR, Mills MD, Almond PR. Electron beam
dose calculations. Phys Med Biol. 1981;26:445–459.)

Figure 6.8 shows an example of how CT-based inhomogeneity
corrections have been applied with a pencil beam algorithm. A tissue-substitute
phantom simulating a nose was irradiated with a 13-MeV
electron beam, and detailed thermoluminescent dosimetry was done.
The agreement between the measured and calculated distributions was
approximately 13%. This is not an acid test for the algorithm, since
many sources of uncertainty can compound the errors in this case.
However, if the user understands its limitations of accuracy in complex
clinical situations, the pencil beam algorithm is capable of providing
clinically useful information about overall dose distribution. These
limitations must be considered in making therapeutic decisions in the
use of electrons when the target or critical structures are encountered in
the path of the beam.

FIGURE 6.8 Experimental verification of the Hogstrom algorithm. Calculated isodose contours are
compared with measured data using TLD in a tissue substitute phantom. Electron energy 13 MeV;
field size, 8 cm × 8 cm; source-to-surface distance (SSD), 100 cm. (Reprinted with permission
from Hogstrom KR, Almond PR. Comparison of experimental and calculated dose distributions.
Acta Radiol. 1983;364:89–99.)

COMPUTER ALGORITHM
Implementation of a pencil beam algorithm requires dose distribution
equations to be set up so that the dose to a point (x, y, z) in a given field
can be calculated as an integral of the doses contributed by gaussian
pencil beams. The points of calculation constitute a beam grid, usually
defined by the intersection of fan lines diverging from the virtual point
source and equally spaced depth planes perpendicular to the central axis
of the beam (X–Y planes). An irregularly shaped field is projected at the
depth plane of calculation and is divided into strip beams of width ∆X
and length extending from Ymax to Ymin (Fig. 6.9). The strip is also
divided into segments so that σ of the pencil beams and effective depths
can be calculated in three dimensions and integration can be carried out
over all strips and segments.
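To make the strip-integration step concrete, the following Python sketch (an illustration only, not any vendor's implementation) evaluates the lateral profile of a field built from gaussian strip beams: the contribution of each strip is the gaussian pencil-beam kernel integrated across the strip width, which reduces to a difference of error functions. The strip width, spread value, and evaluation points are arbitrary illustrative numbers.

import math

def strip_contribution(x, x_center, width, sigma):
    """Fraction of a unit-weight strip's dose reaching lateral position x.

    A gaussian pencil-beam kernel integrated across a strip of the given
    width reduces to a difference of error functions.
    """
    a = (x - (x_center - width / 2.0)) / (math.sqrt(2.0) * sigma)
    b = (x - (x_center + width / 2.0)) / (math.sqrt(2.0) * sigma)
    return 0.5 * (math.erf(a) - math.erf(b))

# Example: a 10-cm-wide field divided into 2-mm strips, sigma = 0.5 cm at depth.
width = 0.2                      # strip width in cm (hypothetical)
centers = [-5.0 + width / 2.0 + i * width for i in range(50)]
sigma_at_depth = 0.5             # gaussian spread at the depth of interest (cm)

for x in (0.0, 4.0, 5.0, 6.0):   # points across the field edge
    profile = sum(strip_contribution(x, c, width, sigma_at_depth) for c in centers)
    print(f"x = {x:5.1f} cm  relative dose = {profile:.3f}")

The printed profile is close to 1 inside the field, about 0.5 at the field edge, and falls off with the gaussian penumbra outside, which is the behavior the strip decomposition is intended to reproduce.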
Starkschall et al. (18) evaluated the Hogstrom algorithm (7) for one-,
two-, and three-dimensional heterogeneity corrections. The general
equation that they set up for three-dimensional (3D) dose computation is
reproduced here to illustrate the mathematical formulation of the pencil
beam algorithm:

FIGURE 6.9 Schematic representation of an irregularly shaped field divided into strips projected
at the plane of calculation (z). (Reprinted with permission from Starkschall G, Shiu AS, Bujnowski
SW, et al. Effect of dimensionality of heterogeneity corrections on the implementation of a three-
dimensional electron pencil-beam algorithm. Phys Med Biol. 1991;36:207–227.)
where De(x,y,z) is the electron dose at point (x,y,z); N is the number of
strips; M is the number of segments; W(xk,yl) is the beam weight along
the fan line at point (xk,yl); σmed is the pencil beam spread in the medium
at depth z (obtained by integrating the linear angular scattering power along
a fan line from the surface of the patient to the plane of calculation); σair
is the pencil beam spread in air at the plane of final collimation, projected
to the plane of calculation in the absence of the medium; the remaining
depth-dose factor is the measured broad-beam central axis depth dose;
SSD is the effective source-to-surface distance; and the integration limits
are the maximum and minimum limits of the jth segment of the ith strip.
Equation 6.10 does not include the Bremsstrahlung dose. Assuming
that the dose beyond the practical range, Rp, is all due to photons, one
can back-calculate the photon dose by using attenuation and inverse
square law corrections.

MONTE CARLO METHODS


There is active interest in adopting Monte Carlo (MC) methods for
treatment planning of photon and electron beams. The MC technique
consists of simulating transport of millions of particles through matter. It
uses the fundamental laws of physics to determine probability
distributions of individual particle interactions. Each particle is followed
as it travels through the medium and gives rise to energy deposition by
interaction with the atoms of the medium. The larger the number of
simulated particles (histories), the greater is the statistical accuracy of
predicting their distribution. As the number of simulated particles is
increased, the accuracy gets better but the computational time becomes
prohibitively long. So, the challenge lies in using a relatively small
sample of randomly selected particles to predict the average behavior of
the particles in the beam. The dose distribution is calculated by
accumulating (scoring) ionizing events in bins (voxels) that give rise to
energy deposition in the medium. It is estimated that the transport of a
few hundred million to a billion histories will be required for radiation
therapy treatment planning with adequate precision.
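The trade-off between the number of histories and statistical precision can be illustrated with a toy scoring exercise (this is not a particle-transport code): the relative uncertainty of the energy scored in a voxel falls roughly as 1/√N. The interaction probability and energy distribution below are invented purely for illustration.

import random
import statistics

def score_voxel(n_histories):
    """Score energy in one voxel; each history deposits a random energy
    (arbitrary units) with a 10% chance of interacting in the voxel."""
    deposits = [random.expovariate(1.0) if random.random() < 0.1 else 0.0
                for _ in range(n_histories)]
    total = sum(deposits)
    # Standard error of the total, estimated from the sample standard deviation.
    stderr = statistics.stdev(deposits) * (n_histories ** 0.5)
    return total, stderr / total if total > 0 else float("inf")

random.seed(1)
for n in (10_000, 100_000, 1_000_000):
    total, rel_unc = score_voxel(n)
    print(f"histories = {n:>9,d}  relative uncertainty ~ {rel_unc:.3%}")

Each tenfold increase in histories reduces the relative uncertainty by roughly a factor of three, which is the 1/√N behavior that makes full Monte Carlo calculations so computationally demanding.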
In order to improve computational efficiency and decrease calculation
time, a number of fast MC codes have been developed in the past 15
years or so. Examples include Voxel-based MC (VMC, VMC++)
(21–23), Dose Planning Method (DPM) (24), and MCDOSE (25). Some of
these codes have been implemented commercially, for example,
Nucletron OMTPS, Varian Eclipse, and CMS Xio eMC. Before clinical
implementation, the user is advised to build an appropriate program for
commissioning and routine quality assurance of the MC-based treatment
planning system. Report of the AAPM Task Group No. 105 (26) would be
helpful in this regard.
With the continuing advancement in computer technology and
computation algorithms, it now seems probable that full-fledged MC
code will be implemented for routine treatment planning in the not too
distant future.

KEY POINTS
• The most commonly used methods of electron beam dose
calculation include pencil beam (PB) algorithms and Monte Carlo
(MC)-based algorithms.
• The premise of a pencil beam algorithm is that the lateral spread of
an elementary pencil beam of electrons penetrating a scattering
medium can be represented approximately by a gaussian
distribution function.
• The lateral spread parameter, σ, can be theoretically predicted by
Fermi–Eyges multiple scattering theory.
• Practical implementation of pencil beam algorithm, based on
Fermi–Eyges theory, was carried out by Hogstrom et al. in 1981
and has been adopted by several commercial treatment planning
systems.
• PB algorithms have acceptable accuracy in homogeneous media
of any density (e.g., a dose accuracy of ∼5% in the central regions
of the field and a spatial accuracy of ∼2 mm in the penumbra).
• PB algorithms are not accurate at interfaces of different density
tissues such as tissue–lung, tissue–bone, and bone edges.
• MC methods consist of simulating transport of millions of particles
and statistically determining probability distributions of individual
particle interactions.
• Full-fledged MC codes for treatment planning require inordinate
amount of computational times but they are the most accurate
methods of calculating dose distribution.
• Fast MC codes have been developed to improve efficiency and
reduce computational time. Examples include Voxel-based Monte
Carlo (VMC, VMC++), Dose Planning Method (DPM), and
MCDOSE.

QUESTIONS
1. Spatial spread of a pencil beam of electrons traversing a medium:
A. Is caused predominantly by knock on collisions
B. Increases with increase in energy
C. Is mostly due to secondary electrons ejected laterally
D. Can be represented approximately by a gaussian distribution
function
2. Pencil beam algorithms for electron beam treatment planning:
A. Usually require input of measured depth dose data as well as
lateral beam profiles for broad fields as a function of beam
energy
B. Can accurately predict surface dose
C. Are useful in predicting hot and cold spots occurring at bone–
tissue interfaces
D. Are especially useful at tissue–lung interfaces
3. Fast Monte Carlo codes for electron beam treatment planning
simulate transport of:
A. Electron beam as energy kernels
B. Electrons using age-diffusion model
C. Millions of individual electrons and their interaction
probabilities in a medium
D. A single electron per pixel in a calculation grid

ANSWERS
1. D Lateral spread of an electron pencil beam is caused
predominantly by random small-angle Coulomb scattering
and can be represented by a gaussian normal distribution.
2. A Existing electron pencil beam algorithms cannot
accurately take into account the effect of beam collimation
on depth dose or surface dose. Input of measured central
axis depth data and lateral beam profiles takes those
effects into account and assures better accuracy.
3. C Monte Carlo is not a model-based algorithm. It simulates
individual electron interactions and statistically measures
interaction probabilities. The greater the number of electrons
simulated, the better the statistical accuracy.

REFERENCES
1. Sternick E. Algorithms for computerized treatment planning. In:
Orton CG, Bagne F, eds. Practical Aspects of Electron Beam Treatment
Planning. New York, NY: American Institute of Physics; 1978:52.
2. Kawachi K. Calculation of electron dose distribution for radiotherapy
treatment planning. Phys Med Biol. 1975;20:571–577.
3. Andreo P. Broad beam approaches to dose computation and their
limitations. In: Nahum AE, ed. The Computation of Dose Distributions
in Electron Beam Radiotherapy. Kungalv, Sweden: Miniab/gotab;
1985:128.
4. Ayyangar K, Suntharalingam N. Electron beam treatment planning
incorporating CT data. Med Phys. 1983;10:525 (abstract).
5. Pohlit W, Manegold KH. Electron beam dose distribution in
homogeneous media. In: Kramer S, Suntharalingam N, Zinniger GF,
eds. High Energy Photons and Electrons. New York, NY: Wiley;
1976:343.
6. Perry DJ, Holt JG. A model for calculating the effects of small
inhomogeneities on electron beam dose distributions. Med Phys.
1980;7:207–215.
7. Hogstrom KR, Mills MD, Almond PR. Electron beam dose
calculations. Phys Med Biol. 1981;26:445–459.
8. Werner BL, Khan FM, Deibel FC. A model for calculating electron
beam scattering in treatment planning. Med Phys. 1982;9:180–187.
9. Jette D, Pagnamenta A, Lanzl LH, et al. The application of multiple
scattering theory to therapeutic electron dosimetry. Med Phys.
1983;10:141–146.
10. Eyges L. Multiple scattering with energy loss. Phys Rev.
1948;74:1534.
11. Brahme A. Brief review of current algorithms for electron beam
dose planning. In: Nahum AE, ed. The Computation of Dose
Distributions in Electron Beam Radiotherapy. Kungalv, Sweden:
Miniab/gotab; 1985:271.
12. Hogstrom KR, Starkschall G, Shiu AS. Dose calculation algorithms
for electron beams. In: Purdy JA, ed. Advances in Radiation Oncology
Physics. American Institute of Physics Monograph 19. New York, NY:
American Institute of Physics; 1992:900.
13. Beyer WH. Standard Mathematical Tables. 25th ed. Boca Raton, FL:
CRC Press; 1978:524.
14. Lax I, Brahme A. Collimation of high energy electron beams. Acta
Radiol Oncol. 1980;19:199–207.
15. Lax I, Brahme A, Andreo P. Electron beam dose planning using
Gaussian beams. Acta Radiol. 1983;364:49–59.
16. Brahme A. Physics of electron beam penetration: fluence and
absorbed dose. In: Paliwal B, ed. Proceedings of the Symposium on
Electron Dosimetry and Arc Therapy. New York, NY: American
Institute of Physics; 1982:45.
17. Mandour MA, Nusslin F, Harder D. Characteristic function of point
monodirectional electron beams. Acta Radiol. 1983;364:43–48.
18. Starkschall G, Shiu AS, Bujnowski SW, et al. Effect of
dimensionality of heterogeneity corrections on the implementation
of a three-dimensional electron pencil-beam algorithm. Phys Med
Biol. 1991;36:207–227.
19. Hogstrom KR, Steadham RE. Electron beam dose computation. In:
Paltra JR, Mackie TR, eds. Teletherapy: Present and Future. Madison,
WI: Advanced Medical Publishing; 1996:137–174.
20. Hogstrom KR. Electron-beam therapy: dosimetry, planning and
techniques. In: Perez C, Brady L, Halperin E, Schmidt-Ullrich RK,
eds. Principles and Practice of Radiation Oncology. Baltimore, MD:
Lippincott Williams & Wilkins; 2003:252–282.
21. Neuenschwander H, Mackie TR, Reckwerdt PJ. MMC-a high
performance Monte Carlo code for electron beam treatment
planning. Phys Med Biol. 1995;40:543–574.
22. Kawrakow I, Fippel M, Friedrich K. 3D Electron Dose Calculation
using a Voxel based Monte Carlo Algorithm. Med Phys.
1996;23:445–457.
23. Kawrakow I, Fippel M. Investigation of variance reduction
techniques for Monte Carlo photon dose calculation using XVMC.
Phys Med Biol. 2000;45:2163–2184.
24. Sempau J, Wilderman SJ, Bielajew AF. DPM, a fast, accurate Monte
Carlo code optimized for photon and electron radiotherapy
treatment planning dose calculations. Phys Med Biol. 2000;45:2263–
2291.
25. Ma C, Li JS, Pawlicki T, et al. MCDOSE- A Monte Carlo dose
calculation tool for radiation therapy treatment planning. Phys Med
Biol. 2002;47:1671–1689.
26. Chetty IJ, Curran B, Cygler JE, et al. Report of the AAPM Task
Group No. 105: issues associated with clinical implementation of
Monte Carlo-based photon and electron external beam treatment
planning. Med Phys. 2007;34:4818–4853.
7 Treatment Planning Algorithms:
Proton Therapy

Hanne M. Kooy and Benjamin M. Clasie

INTRODUCTION
A clinical dose computation algorithm, a “dose algorithm,” must satisfy
requirements such as clinical accuracy in the patient, computational
performance, representations of patient devices and delivery equipment,
and specification of the treatment field in terms of equipment input
parameters. A dose algorithm has been invariably imbedded within a
larger treatment planning system (TPS) whose requirements and
behavior often affect the dose algorithm itself. Current clinical emphasis
on the patient workflow and advanced delivery technologies such as
adaptive radiotherapy will lead to different dose algorithm
implementations and deployments depending on the context. For
example, a treatment planner needs highly interactive dose
computations in a patient to allow rapid evaluation of a clinical
treatment plan. A quality assurance physicist, on the other hand, needs
an accurate dose algorithm whose requirements could be simplified
considering that QA measurements are typically done in simple
homogeneous phantoms.
A dose algorithm has two components: a geometry modeler and a
physics modeler. In practice, the choice of a physics model drives the
specification of the geometry modeler because the geometry modeler
must present the physics modeler with local geometry information to
model the local effect on the physics-model calculation. Physics models
come in two forms: Monte Carlo and phenomenologic. The latter models
the transport of radiation in medium with analytical forms and has been
the standard in the clinic because of the generally higher computational
performance compared to Monte Carlo. Monte Carlo, in general, models
the trajectories of a large number of individual radiation particles and
scores their randomly generated interactions in medium to yield energy
deposited in the medium. Monte Carlo is considered an absolute
benchmark and much effort is expended to produce high-performance
implementations. The accuracy of a Monte Carlo model is directly
related to the extent and complexity of the modeled interactions, and
improved performance can be achieved by limiting the model to meet a
particular clinical requirement. We describe, as an example, a basic
Monte Carlo, which outperforms a phenomenologic implementation
because of its simple model and implementation on a graphics processor
unit (GPU).
The geometry modeler creates a representation of the patient, which is
invariably derived from a CT volumetric data set and the CT voxel
coordinates are sometimes chosen as the coordinates for the points for
which to compute the dose. Dose points are typically associated to
individual organ volumes after the dose calculation proper for the
computation of, for example, dose-volume histograms. A more general
implementation allows the user to select arbitrary point distributions.
The geometry modeler also provides the physical characteristics of the
patient including, for example, electron density derived from CT
Hounsfield Units and, for protons, the stopping power ratio relative to
water (also derived from CT Hounsfield Units, albeit indirectly). The
geometry modeler also implements a model of the treatment field. For a
Monte Carlo implementation this is a two-step procedure. First, the
treatment field geometry and equipment parameters are used to create a
representative phase-space; that is, the distribution of particles in terms
of position and energy, and perhaps other parameters, of the radiation
particles that impinge on the patient. Second, the particles are
transported individually through the patient until they are absorbed in
or exit the patient volume. For a phenomenologic implementation, the
treatment field geometry forms a template whose geometric extents are
projected through the patient volume. This projection is typically
implemented by a ray tracer where individual rays populate the
treatment field to sufficiently sample the features within the patient and
where the physics model becomes a function of distance along the ray.
Figure 7.1 shows the various computational approaches.
Proton transport in medium has been exhaustively studied and the
references in this chapter are but a minimum.

PHYSICS OF PROTON TRANSPORT IN MEDIUM


Interactions
The physics of proton interactions for clinical energies, that is, between
50 and 350 MeV, is well understood. The publication “Passive Beam
Spreading in Proton Radiation Therapy” by Bernard Gottschalk (1)
provides an in-depth, theoretical, and practical, elaboration of this
physics and this section relies significantly on that material.

FIGURE 7.1 The left panel shows a Monte Carlo schematic while the right panel shows a
phenomenologic schematic. The beam includes an aperture and a range-compensator. The beam
for both is assumed subdivided in individual pencil-beam “spots.” These spots are physical in the
case of pencil-beam scanning (see text) while for a scattered field they are a means for
subdividing the field for computational purposes. The initial spot is characterized by a spread σ′,
an intensity G, and an energy R. In the Monte Carlo, the spot properties can be used as a
generator of individual protons. Thus, there will be more “red” protons than “blue” protons than
“green” protons. The total number of protons is on the order of 106 or more. Each proton is
transported through the patient and its energy loss is scored in individual voxels (a “yellow” one is
shown). In the phenomenologic model, the spot defines one or more pencil-beams that are traced
through the patient. Each ray-trace models the broadening of the protons in the pencil-beam as a
continuous function of density ρ along the ray-trace. Dose is scored to points P, where the
deposited dose depends on the geometric location of P with respect to the ray-trace axis and the
proton range R and pencil-beam spread σ. The latter may include the initial spot spread σ′ if the
ray-trace axis represents all the protons G in the spot. Otherwise, the spot can be subdivided into
multiple sub-spots such that the superposition of those spots equals the initial spot.

Protons lose energy in medium through interactions with orbital
electrons, scatter predominantly through electromagnetic Coulomb
interactions with the nuclear electric field, and have inelastic
interactions with the nucleus itself. Proton–electron interactions produce
delta electrons that deposit dose in proximity to the proton geometric
track. The mass of the proton is about 2,000 times that of the electron,
hence the collision is equivalent to a blue whale moving at near-
relativistic speed toward you: the proton emerges unscattered. Proton–
nucleus Coulomb interactions result in small angular deviations in the
proton direction and produce, to a very good approximation, a gaussian
diffusion profile in a narrow parallel beam of protons. The gaussian form
of this profile is a consequence of the very large number of random
scattering events: as described by the central limit theorem, the sum of
many such discrete random deflections approaches a continuous gaussian
distribution.
Protons also have inelastic interactions with the nucleus and produce
secondary particles including secondary protons, neutrons, and heavier
fragments. The secondary protons contribute up to 10% of the dose
within a proton depth-dose (2). The secondary neutron production is of
specific concern because of their long-range effect. These neutrons
deposit dose to healthy tissues throughout the patient, far from the
target region, and impact on the shielding requirements of a proton
therapy facility. The neutron contribution to the dose distribution is only
considered implicitly through their contribution in the physics
parameters such as a depth-dose distribution. The effect of heavier
secondary particles is small and can be ignored for dose calculation
purposes.

Relative Biologic Effectiveness


Dose deposited by protons is considered to be biologically equivalent to
Co60 dose. That is, cellular responses to proton and photon interactions
are considered equivalent. Clinically, though, the proton physical dose
(in Gy) is scaled by an RBE factor of 1.1 to yield the proton biologic dose
with units of Gy (RBE) (3). Thus, a photon fraction prescription of 1.8
Gy (which implicitly has an RBE of 1 as Co60 is the “standard” dose
reference) can be delivered with a proton fraction of 1.8 Gy (RBE),
which corresponds to 1.64 Gy physical proton dose. The latter is the
value one would measure in a radiation detector and to which one
would calibrate the field to deliver 1.8 Gy (RBE).
Clinically, the RBE is assumed 1.1 throughout. Of special
consideration is the change in RBE at the distal drop-off of the Bragg
peak. In this region, the change in RBE has the effect of differentially
shifting the distal edge deeper by about 1 mm compared to
measurements. That is, the biologic dose fall-off is shifted 1 mm deeper
compared to the physical dose fall-off. This inherent uncertainty in the
distal edge is one reason why the sharp distal edge is not used to
achieve, for example, a dose gradient between a target and a critical
structure. The other reason (see below) is the uncertainty in the relative
stopping power, which is estimated on the order of 2% to 3%. These
uncertainties could result in overdose to the critical structure or
underdose to the target (Fig. 7.2).

FIGURE 7.2 An SOBP depth-dose distribution with its constituent pristine Bragg peak depth-dose
distributions scaled relative to their contribution. The deepest pristine peak is at 160 mm, which
results in a lower SOBP range of 158.4 mm because of the other peak contributions. The 90–90
modulation width is 100.2 mm while the 90–98 is 83.1 mm. The range uncertainties create a band
of uncertainty both distally and proximally (see dashed distal fall-offs). The effect is most
pronounced at the distal fall-off and the error must be considered to ensure that the target
coverage is respected. The 80% depth, R80, of a pristine peak correlates accurately with the
proton energy in MeV. The clinical historical range is in reference to the 90% depth, which also
depends on the energy spread. The difference in practice is on the order of 1 mm. Care should be
practiced especially considering calibration and reference conditions.

The RBE, in general, depends on other factors such as dose fraction
size, number of fractions, clinical end point, and so on. These factors can
be ignored for proton radiation computations but are significant for
higher LET charged particles such as carbon. Computation algorithms for
such particles are, therefore, much more complicated and must
implement a model for the LET distributions in the patient to resolve
such dependencies.

Stopping Power
The various electromagnetic interactions transfer energy to electrons in
the medium and the proton energy is reduced as a consequence. The
proton energy loss, transferred to the medium, is quantified as the rate
of energy loss per unit length, dE/dx, where E (MeV) is the energy and x
(cm) is the distance along the proton path. The stopping power S is
defined in combination with the local material density ρ (g/cm³):

S/ρ = (1/ρ)(dE/dx)    (7.1)

and is called the mass stopping power.


The mass stopping power in a given medium is a function of the
proton energy E and increases significantly when the proton has lost
nearly most of its energy. The proton mass stopping power in a given
material can be calculated using the Bethe–Bloch equation (4) or directly
from tables (5). For example, the mass stopping power values for protons
of energies 100, 10, and 1 MeV in water are 7.3, 45.7, and 261.1 MeV
cm2/g, respectively. Thus, a proton of 1 MeV energy cannot travel more
than 0.004 cm in water. It is this rapid increase in energy loss per unit
length, and the very localized energy deposition, that leads to the
characteristic Bragg peak of the proton depth-dose distribution (Fig.
7.2). Note that a Bragg peak is a characteristic feature of all charged
particles, including electrons, of the energy loss along the particle track.
The feature disappears for electrons traversing a medium due to their
large scattering angle. In effect, the track becomes “tangled up” and the
Bragg peak becomes averaged out over all the entwined tracks. Thus, a
monoenergetic electron depth-dose distribution has a broad and flat
high-dose region followed by the distal fall-off.
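A quick check of the residual-range statement above: because the stopping power continues to rise as the proton slows, the energy divided by the stopping power at that energy is an upper bound on the remaining path length. The short sketch below uses the mass stopping power values quoted in the text.

rho_water = 1.0  # g/cm^3

mass_stopping_power = {  # proton mass stopping power in water, MeV cm^2/g (values quoted above)
    100.0: 7.3,
    10.0: 45.7,
    1.0: 261.1,
}

for energy_mev, s_over_rho in mass_stopping_power.items():
    # Upper bound: true residual range is shorter because S rises as E falls.
    max_path_cm = energy_mev / (s_over_rho * rho_water)
    print(f"E = {energy_mev:6.1f} MeV  remaining path < {max_path_cm:.4f} cm")

For 1 MeV the bound reproduces the 0.004-cm figure quoted above; for higher energies the bound is loose because the stopping power rises steeply only near the end of the range.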

FIGURE 7.3 Conversion from CT Hounsfeld units of the volume to relative stopping power, SWV,
for biologic tissues. Figure is adapted from Schneider et al. (7).

Water-Equivalent Depth and Proton Range, R


In a photon algorithm, the computational quantity is the radiologic path
length, τRPL, which gives the density (ρ, in units of g/cm³ or relative
to water) corrected depth along the ray up to the physical depth z in cm.
For a proton algorithm, the appropriate quantity is the water-equivalent
thickness, τwet in g/cm2, which is the thickness in water that produces
the same final proton energy as the given thickness in the medium. It is
calculated from relative stopping power, SWV, defined as the ratio of the
stopping power of protons in the medium over that in water. In practice,
SWV is known for the materials such as polyethylene used in range-
compensators or derived empirically from CT numbers (Fig. 7.3). The
proton water-equivalent thickness1 along a ray trace is given as

τWET(z) = ρw ∫0→z SWV(z′) dz′    (7.2)

where ρw is the density of water.
The mass stopping power ratio of medium to water is almost independent of
therapeutic proton energies for biologic materials (6). If this ratio is
unity, the radiologic path length and water-equivalent thickness are
mathematically equivalent. However, in treatment planning, the
differences between τRPL and τWET can be clinically significant.
Range, R, which is often called the mean proton range, is the average
water-equivalent depth where protons stop in the medium. This is a very
close approximation of the range in the continuous slowing down
approximation, RCSDA, and the depth at distal 80% of maximum dose,
R80, of a pristine proton Bragg peak (Fig. 7.2). These definitions of range
are often used interchangeably in proton therapy.
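A minimal sketch of the water-equivalent-thickness bookkeeping is given below, assuming the ray has already been traced and that each step carries a relative stopping power SWV taken from the CT conversion; the slab values are hypothetical.

rho_water = 1.0  # g/cm^3

def water_equivalent_thickness(steps):
    """steps: list of (geometric_step_cm, relative_stopping_power S_WV).

    Returns the accumulated water-equivalent thickness in g/cm^2."""
    return rho_water * sum(dz * s_wv for dz, s_wv in steps)

ray = [
    (3.0, 1.00),   # 3 cm soft tissue (~water), hypothetical values
    (2.0, 0.25),   # 2 cm lung
    (1.0, 1.70),   # 1 cm bone
    (4.0, 1.00),   # 4 cm soft tissue
]
print(f"tau_WET = {water_equivalent_thickness(ray):.2f} g/cm^2 "
      f"over {sum(dz for dz, _ in ray):.1f} cm geometric depth")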

Dose to Medium
The dose to a volume element, of thickness t along and area A
perpendicular to the proton direction, is given by the energy lost in the
voxel divided by the mass of the voxel

D = [N (S/ρ) ρ t] / (ρ A t) = f (S/ρ)    (7.3)

where f = N/A is the proton fluence (in cm⁻²). Consider a fluence of
10⁹ (1 Gigaproton [Gp]) per cm² of 100-MeV protons in water, where S/ρ
= 7.3 MeV cm²/g. The dose is

D = 10⁹ cm⁻² × 7.3 MeV cm² g⁻¹ × 0.1602 × 10⁻¹² J/MeV ÷ 10⁻³ kg/g ≈ 1.2 Gy    (7.4)

at the entrance of the water (where the proton energy is 100 MeV) and
where we used the factor 10⁻³ to convert from gram to kilogram and
the conversion factor is 0.1602 × 10⁻¹² J/MeV. Equation 7.4 shows
that the number of protons to deliver a clinical dose is of the order of
gigaprotons, a number less than 10⁻¹⁵ of the number of protons in a
gram of water. A convenient rule of thumb is that 1 Gp delivers about 1
cGy to a 1 L (i.e., 10 × 10 × 10 cm3) volume.
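The arithmetic above can be packaged into a short sketch that converts a proton fluence and mass stopping power into dose; the rule-of-thumb check at the end assumes, purely for illustration, that each proton of a 1-Gp field stops inside a 1-kg (1 L) water volume and carries roughly 115 MeV (a round number for a range of about 10 cm).

MEV_TO_JOULE = 1.602e-13

def dose_from_fluence(fluence_per_cm2, mass_stopping_power_mev_cm2_per_g):
    """Dose in Gy for a given proton fluence (cm^-2) and S/rho (MeV cm^2/g)."""
    dose_joule_per_gram = fluence_per_cm2 * mass_stopping_power_mev_cm2_per_g * MEV_TO_JOULE
    return dose_joule_per_gram * 1.0e3  # J/g -> J/kg, i.e., Gy

# Entrance dose for 1 Gp/cm^2 of 100-MeV protons in water (S/rho = 7.3 MeV cm^2/g).
print(f"entrance dose = {dose_from_fluence(1.0e9, 7.3):.2f} Gy")

# Rough order-of-magnitude check of the "1 Gp ~ 1 cGy to 1 L" rule of thumb:
# assume (hypothetically) each of the 10^9 protons deposits ~115 MeV in 1 kg of water.
energy_joule = 1.0e9 * 115.0 * MEV_TO_JOULE
print(f"1 Gp stopping in 1 kg deposits ~ {energy_joule * 100:.1f} cGy")

Both numbers are consistent with the magnitudes quoted above: slightly more than 1 Gy per Gp/cm² at the entrance, and on the order of 1 cGy when 1 Gp is spread over a litre.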
The mass stopping power only measures the energy loss from
electromagnetic interactions and does not include the energy deposited
from secondary particles. In practice, a measured depth-dose is a
surrogate for the mass stopping power and thus effectively includes all
primary and other interactions in calculational models.
A CT data representation of the patient does not provide the necessary
stopping power information; it only provides the electron density on a
voxel level. The conversion from this electron density to relative
stopping power is empirical and is based on an average for particular
organs over all patients (7). Thus, the conversion has inherent
uncertainties due to (1) the nonspecificity for a particular patient, and
(2) the lack of knowledge of precise stopping powers for particular
organs. One, therefore, assumes a 3% uncertainty in the relative
stopping power in patients (Fig. 7.2). This 3% uncertainty must be
considered by the treatment planner as is described elsewhere.
A dose algorithm may use longitudinal depth-dose distributions in
water for all available proton energies. Dose distributions in the patient
are calculated from the dose in water using the local density and relative
stopping power, both of which come from CT data. In the presence of
heterogeneities, the depth-dose distribution in the medium, TM(E, τWET),
is obtained from that in water by (following from Equation 7.3 and
assuming constant fluence)

TM(E, τWET) = [(S/ρ)M / (S/ρ)W] · TW∞(E, τWET)    (7.5a)

where E is the incident proton energy, z is the geometric depth in
centimeters, and the superscript ∞ indicates that broad beam depth-dose
distributions are at infinite source-to-axis distance (SAD). The effect of
inhomogeneities is twofold. First, the physical depth of the Bragg peak is
shifted relative to a water phantom based on the water-equivalent depth
in the patient. The shift for material M with geometric thickness zM and
water-equivalent thickness τWET,M is

Δz = zM − τWET,M/ρw = zM (1 − SWV,M)    (7.5b)
Second, the dose in the heterogeneity is scaled by the relative mass
stopping powers. The latter ratio of relative mass stopping powers is
close to 1 and the scaling of the dose is typically ignored in dose
algorithm implementations. These effects are illustrated in Figure 7.4.
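As a numeric illustration of the range shift (the bone example of Figure 7.4), the simple one-dimensional water-equivalent argument above can be evaluated in a few lines; the lung slab is a hypothetical second case.

def bragg_peak_shift_cm(slab_thickness_cm, relative_stopping_power):
    """Shift of the Bragg peak depth caused by a slab heterogeneity.

    Negative values mean the peak moves shallower (upstream)."""
    return slab_thickness_cm * (1.0 - relative_stopping_power)

# Worked bone example from Figure 7.4: 2.9 cm of bone, S_WV = 1.72.
print(f"bone slab:  dz = {bragg_peak_shift_cm(2.9, 1.72):+.1f} cm")
# A hypothetical 3-cm lung slab (S_WV ~ 0.25) pushes the peak deeper instead.
print(f"lung slab:  dz = {bragg_peak_shift_cm(3.0, 0.25):+.2f} cm")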
It was well recognized early on that protons could be used instead of
x-rays for tomography and thus measure the stopping power for a
patient directly (8). Such an image measures the energy loss, which is a
function of the integrated stopping power, along a particular proton ray
instead of an attenuation measurement along an x-ray, which is a
function of the integrated electron density. The resultant reconstructed
tomographic image is thus a stopping power distribution rather than an
electron density distribution. There is considerable interest in this
technology to achieve further precision in proton beam dosimetry and to
achieve accurate volumetric imaging at very low doses.

FIGURE 7.4 TW is the depth-dose distribution in a homogeneous water phantom and TW+B is the
depth-dose distribution in a water phantom with bone at 5 cm depth. The relative mass stopping
power of bone to water gives the ratio of the dose deposited in bone to that in water in the
shaded region. Bone has a larger relative stopping power compared to water and, for a 2.9 cm
geometric thickness of bone with a relative stopping power of 1.72, Δz is −2.1 cm by Equation
(7.5b).
FIGURE 7.5 Monte Carlo (GEANT4) generated proton and electron tracks in water with
comparable penetration range. Protons maintain near constant direction of motion while
continuously slowing down by losing energy in collisions with electrons. An electron track
becomes “tangled” up with its collision electron partner and undergoes many wide-angle
scattering events. The minimal scatter of the proton makes the dose distribution accurately
described by the gaussian pencil-beam model.

Proton Lateral Spread


The numerous multiple elastic scattering events produce, for an initial
parallel and infinitesimally narrow beam of protons, a gaussian lateral
distribution profile. The characteristic spread of this gaussian profile in
the Bragg peak region is about 2% of the range (in centimeters). Thus, a
proton beam of 20 cm penetration range in water has a spread of 4 mm
in the Bragg peak region. In comparison, an electron beam has a spread
of about 20% of its range (Fig. 7.5).
The scattering angle that underlies the widening gaussian profile was
originally described by Molière (9), who investigated single scattering in
the Coulomb field of the nucleus and multiple scattering as a
consequence of numerous interactions. A complete description of
scattering theory is beyond the scope of this chapter. In practice, the
Highland approximation (10,11), or similar readily computable variants,
are used to compute the lateral beam spread in matter. For dose
calculations in thick, heterogeneous matter, the volume is divided into N
homogeneous slabs that represent the material along the particle track.
The standard deviation of the gaussian spread σP,i at radiologic depth L
due to the ith slab is

where pv is the product of the proton momentum and speed (in MeV), LR
is the radiation length (in g/cm2), and the ith slab extends from
radiologic depth Li−1 to Li. The total gaussian spread, σP, is the
quadrature sum of the individual gaussian spreads from each slab

σP² = Σi σP,i²    (7.6b)
Solving Equation 7.6a at each dose calculation point is not feasible due
to computation time and two simplifications are made: the radiation
length in the patient is set to the radiation length of water (36.1 g/cm2)
and the density ρ is set to the density of water (1 g/cm3) in the
integration. The simplified equation for the gaussian spread

is only a function of the radiologic depth L in the media. The
simplifications are acceptable for most cases. Consider protons with 100
MeV energy incident on a homogeneous water phantom, then σP at the
maximum water-equivalent depth is 0.179 cm. If the same beam passes
through 1 g/cm² of compact bone (LR = 16.6 g/cm² and ρ = 1.85
g/cm³) followed by water, then σP at the maximum water-equivalent
depth is 0.169 cm, an acceptable difference compared to the
homogeneous phantom. However, in general, one should give special
attention to treatment plans that have materials with properties
significantly different from water.
Treatment planning algorithms use tables or parameterizations of σP in
water to improve the computation time. Numerical solutions of Equation
7.6c are described by Hong et al. (6) and Lee et al. (12). Figure 7.6
shows a similar calculation in water where the momentum and velocity
of protons at radiologic depth, L, and range, R, are calculated through
range-energy tables from J. Janni (5).

FIGURE 7.6 Left: The normalized gaussian spread σP(L)/σP(R) versus normalized radiologic
depth, L/R. This relation is essentially independent of R; the width of the line gives the variation
from 0.1 g/cm2 < R < 40 g/cm2. Right: The gaussian spread at the end of range of the pristine
proton beam as a function of range.

MODELS OF DOSE DISTRIBUTIONS


Phenomenologic Models
A phenomenologic model uses analytical forms to describe the physical
processes and, typically, represents the passage of the radiation through
the medium as geometric rays or pencil beams. Ray tracing, as applied in
radiotherapy algorithms, derives primarily from computer graphics
techniques to create images of three-dimensional (3D) objects as a
consequence of light rays interacting with those objects as described by
their physical properties. Monte Carlo methods in radiotherapy, in
essence, also implement ray tracing methods where individual rays are
the particles and their path is, in general, distorted as a consequence of
local interactions. We will refer to this technique as particle tracing in
distinction from ray tracing.
The performance of ray tracing algorithms through rectilinear volume
representations, such as a CT volume, is high and the computation time
for this part of an algorithm tends to be small compared to the
computation time needed for the physics model calculations. The ray
trace along a particular line, typically a line from the radiation source
through a point in the isocentric plane, is used to accumulate the volume
physical information along the trace.
The main computational burden in a ray-trace algorithm
implementation is the association of dose to points surrounding the ray
(Fig. 7.1). In general, at a given depth z, one has to find the points that
will receive a relevant contribution from the “particles” modeled by the
ray. In the case of a proton algorithm, this means that at a depth z, one
has to search an area of radius 3σ as this, for a gaussian distribution,
contains 99% of the laterally distributed profile. To efficiently find these
points, as opposed to explicitly checking each point at each depth for the
distance from that point to the ray axis, one can resort to sorting and
indexing algorithms before commencing the ray-trace calculation.
Original implementations of dose calculation algorithms used fan-beam
grids where the points on the calculation grid fan out along rays from
the source. Such methodologies are impractical when considering
requirements such as noncoplanar beams and arbitrary distributions of
points. A more general implementation relies on indexing and sorting
the dose points, in the beam coordinate system, in the isocentric plane,
and along the axis of the beam or ray (Fig. 7.7).
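A minimal sketch of the indexing idea just described (and shown in Figure 7.7) follows: dose points are expressed in the beam coordinate system, binned on a coarse lateral grid in the isocentric plane, and sorted in depth, so that each ray only inspects the few bins under its footprint at each depth step. The bin size and point grid are arbitrary illustrative choices.

from collections import defaultdict

BIN_CM = 1.0  # lateral bin size in the isocentric plane (hypothetical)

def build_bins(points):
    """points: list of (x, y, z) in beam coordinates, z along the beam axis."""
    bins = defaultdict(list)
    for p in points:
        key = (int(p[0] // BIN_CM), int(p[1] // BIN_CM))
        bins[key].append(p)
    for key in bins:                      # sort each bin along the beam axis
        bins[key].sort(key=lambda p: p[2])
    return bins

def points_near_axis(bins, axis_xy, radius_cm, z_lo, z_hi):
    """Collect points within `radius_cm` of the ray laterally and inside a depth slab."""
    cx, cy = int(axis_xy[0] // BIN_CM), int(axis_xy[1] // BIN_CM)
    reach = int(radius_cm // BIN_CM) + 1
    found = []
    for ix in range(cx - reach, cx + reach + 1):
        for iy in range(cy - reach, cy + reach + 1):
            for p in bins.get((ix, iy), []):
                if z_lo <= p[2] <= z_hi:
                    found.append(p)
    return found

grid = [(x * 0.5, y * 0.5, z * 0.2) for x in range(-20, 21)
        for y in range(-20, 21) for z in range(0, 50)]
bins = build_bins(grid)
near = points_near_axis(bins, axis_xy=(1.2, -0.7), radius_cm=1.5, z_lo=4.9, z_hi=5.1)
print(f"{len(near)} of {len(grid)} points inspected for this depth step")

Only a tiny fraction of the dose points is visited per depth step, which is the source of the order-of-magnitude speed gain mentioned in the figure caption.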

GAUSSIAN PROTON PENCIL-BEAM MODEL


The gaussian behavior of charged particle spread in medium was first
used in an electron pencil-beam algorithm described by Hogstrom et al.
(13). The use of gaussian functions has the convenient mathematical
property of yielding well-behaved integration and summation results to
describe the field profile and lateral penumbrae. In a pencil-beam model,
the radiation field is subdivided into narrow subfields, that is, pencil
beams. The central axis of the pencil beam, the “ray,” is traced through
the medium and only density variations along the axis are considered in
the calculation of the lateral spread of particles with respect to the
pencil-beam axis. The density distribution lateral to the pencil-beam axis
is considered to be that encountered on the axis and the calculation is
thus insensitive to heterogeneity variations comparable to the width of
the pencil beam. The gaussian description for electron beams works well
in homogeneous medium but less so in heterogeneous media as a
consequence of the large scattering angle of electrons (Fig. 7.5). The
gaussian model is a good mathematical and physical representation if
the spread is much smaller such as is the case for proton beams.

FIGURE 7.7 A pencil-beam traced through the volume needs to find the points that are within its
extent. Consider all dose calculation points Z projected on the isocentric plane (“dots”), binned in
rectangles (the grid shown above) in that plane, and sorted along the central axis of the overall
field. At a particular depth at Z(d) the pencil-beam has the extent shown above (blue ellipse)
which is projected on the isocentric plane (red). The pencil-beam only needs to consider the
points in those bins within its extent and only those points Z for which Z(d) − Δ < Z < Z(d) + Δ (i.e.,
those within the thickness of the ray trace step (2Δ)). Such an algorithm can improve performance
well over 10× compared to a brute force approach.

The gaussian pencil-beam model was used by Hong et al. (6) to
describe a scattered proton field produced by a general delivery device.
Such a device contains beam scatterers, an aperture, a range-
compensator, an air-gap between the range-compensator and the patient,
and the patient itself (Fig. 7.8). A physical narrow pencil beam of
protons can be mathematically modeled even in the presence of
heterogeneities while retaining good resolution. For example, a pure
pencil beam of 15-cm range still has better than 4-mm resolution at
depth.
The dose D to a point (x,y,z) from a pencil-beam field is

D(x,y,z) = Σi Y(xi,yi) · TW∞(Ri, τWET) · [1/(2π σT²)] exp{−[(x − xi)² + (y − yi)²]/(2σT²)}    (7.7)

where the sum is over all pencil beams i (each of range Ri) whose central
axis at depth z is at (xi,yi); Y(xi,yi) is the number of protons at (xi,yi);
σT is the total spread of the pencil beam; and TW∞(Ri, τWET) is the
broad-field depth-dose distribution in water with infinite SAD.
TW∞(Ri, τWET) is determined from measured depth-dose distributions
with finite SAD by correcting for inverse square.

where z is the longitudinal geometric coordinate of the measurement
with axis at z = 0.
Equation 7.7 is a general description of proton diffusion through
medium as a function of depth. Its basic form is used for describing a
scattered proton field, where a proton beam is passively scattered in the
lateral dimension and modulated in depth, and a pencil-beam scanning
field, where a narrow beam of proton pencils is scanned magnetically in
the lateral dimension and modulated in depth.
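The superposition form of Equation 7.7 can be evaluated with a few lines of code. In the sketch below the depth-dose lookup and spread model are crude stand-ins rather than measured data, and the spot positions and weights are arbitrary; only the structure of the sum over pencil beams is the point.

import math

def depth_dose(range_cm, wet_cm):
    """Stand-in for the measured broad-beam depth-dose T(R, tau_WET)."""
    if wet_cm > range_cm:
        return 0.0
    return 1.0 + 3.0 * math.exp(-((wet_cm - range_cm) ** 2) / 0.5)  # crude peak

def sigma_total(range_cm, wet_cm):
    """Stand-in total spread: grows toward ~2% of the range near the peak."""
    return 0.02 * range_cm * max(wet_cm / range_cm, 0.05)

def dose_at(x, y, wet_cm, pencil_beams):
    """pencil_beams: list of (x_i, y_i, range_cm, protons_Y)."""
    total = 0.0
    for xi, yi, ri, n_protons in pencil_beams:
        sig = sigma_total(ri, wet_cm)
        gauss = math.exp(-((x - xi) ** 2 + (y - yi) ** 2) / (2.0 * sig ** 2))
        gauss /= 2.0 * math.pi * sig ** 2
        total += n_protons * depth_dose(ri, wet_cm) * gauss
    return total

beams = [(xi * 0.2, 0.0, 15.0, 1.0) for xi in range(-25, 26)]  # a 10-cm line of spots
for x in (0.0, 4.0, 5.0, 6.0):
    print(f"x = {x:4.1f} cm  dose (arb.) = {dose_at(x, 0.0, 10.0, beams):8.3f}")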

Scattered Field Implementation


A scattered proton field is not dissimilar from an electron scattered field.
The scattered field is characterized by a virtual source from which the
protons appear to emanate and diverge and which has a spread σS. The
source position and size can be determined by measuring the field width
and the penumbral width of an aperture as a function of distance from
the aperture. The field width projects back to the source position, while
the penumbral width is given by the back-projection of the measured
penumbra through the aperture, where the penumbra is 0, to that
position (as described, e.g., in 13). The resultant “virtual” source size of
the proton beam is large with σS between 5 and 10 cm as a consequence
of the proton scatter in the scattering system (Fig. 7.8). This large source
size can only be mitigated by placing the aperture as close to the patient
as possible (as is the practice for an electron aperture) and to move the
source position as far as possible from the patient. The latter is one
reason for the size of proton gantries; the other being the bending radius
necessary to bend the proton beam toward the patient. The effective
source size (the projection of the source to Z = 0 through a pinhole at
the aperture) thus becomes

σeff = σS ZA/(SAD − ZA)    (7.9)

which for a typical beamline, with an SAD of 300 cm and an aperture
distance ZA to the patient of 10 cm, reduces the source size by a factor of
(300 − 10)/10 = 29 and the contribution of the virtual source is
reduced to 3 mm or less.
The proton beam passes through the aperture and subsequently is
modified by a range-compensator, which locally shifts the proton range
such that the distal surface of the field is beyond the distal target volume
surface (with respect to the beam axis). Passage through the range-
compensator introduces compensator thickness-dependent scatter, which
introduces a penumbral spread component in the patient at depth z
given by
FIGURE 7.8 A schematic double-scattering nozzle (top) with a range-modulator wheel (RM,
insert) and scatterer (SS), ionization chamber IC, and a snout that holds the aperture and range-
compensator (insert). The fixed scatterer (FS) inserts thin layers of lead and ensures that the
overall mean scattering angle remains constant as a function of energy, thus maintaining a flat
lateral profile. The protons in the model are assumed to spread as a consequence from the
effective source (green profile and eq. 7.9), from the range-compensator thickness (yellow and eq.
7.10) and the scatter in patient (red profile and eq. 7.6c). The bottom shows a scanning nozzle
with a pair of scanning magnets (SM). The absence (in general) of an aperture and range-
compensator requires that the incoming pencil-beam be as narrow as possible to minimize its
contribution (green) to the in-patient spread. The SAD is defined by the bending points in the X
and Y magnets (shown here in the center of the magnets). The SAD defines the origin of the
proton pencil-beam axis. The relative widths of the pencil-beam are for illustration purposes and
do not imply that a scanning beam is, by definition, narrower. The schematics represent the core
features of the IBA nozzle (IBA LTD, Louvain la Neuve).
σRC(z) = θ0(t) (z − ZR)    (7.10)

where θ0(t) is the scattering angle produced by the protons passing
through the local thickness t at (x,y) of the range-compensator at
position ZR.
The protons in the patient further scatter and introduce a third spread
factor, σP, to the total spread of the proton pencil beam. σP is computed,
for example, from the Highland formula (Equation 7.6c). The total spread
of the pencil beam is the quadrature sum of the
three spread factors in Equations 7.9, 7.10, and 7.6b (Fig. 7.8).
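The combination of the three spread components is a one-line quadrature sum; the values below are purely illustrative.

import math

def total_spread(sigma_source_eff, sigma_compensator, sigma_patient):
    """Quadrature sum of the effective-source, range-compensator, and in-patient spreads."""
    return math.sqrt(sigma_source_eff ** 2 + sigma_compensator ** 2 + sigma_patient ** 2)

sigma_t = total_spread(0.3, 0.15, 0.4)  # cm, hypothetical values at one depth
print(f"sigma_T = {sigma_t:.2f} cm")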
The complete algorithm thus subdivides the scattered field extent
circumscribed by the aperture into many small pencil beams, typically
spaced in a rectangular grid of 2-mm resolution. This high resolution
ensures that heterogeneities within the patient are sufficiently sampled
in the lateral extent. Each pencil beam axis is traced through the CT
volume from the virtual source through a grid point and through the CT.
The calculation points (x,y,z) are transformed to the beam coordinate
system and are sorted in z (Fig. 7.7). The ray trace evaluates the water-
equivalent depth (Equation 7.5) for the point whose z is first in this
sorted list and whose that is, those points are close
enough to the pencil-beam axis to receive sufficient dose. For those
points, the evaluation of Equation 7.7 yields the dose. The ray trace
continues and repeats the procedure for the next point z in the list.

Scattered Proton Field Composition


The general framework for the algorithm for scattered fields is the
evolution of the proton field in medium described by Equation 7.7 as a
composition of individual pencil beams i. The pencil beams for this
application should be considered mathematical constructs for the
decomposition of the field; the physical field is a laterally uniform flux
of protons at different energies. The mathematical decomposition in
depth is the weighted superposition of single Bragg peak proton fields,
typically uniform in the lateral dimension or weighted according to a
measured off-axis ratio profile, to produce a spread-out Bragg peak
(SOBP) field. The SOBP is thus a weighted sum of pristine Bragg peaks
Pi.
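The depth-wise decomposition can be illustrated with a toy spread-out Bragg peak assembled as a weighted sum of shifted pristine peaks; the analytic peak shape and the weights below are invented for illustration and are not a commissioning model.

import math

def pristine_peak(depth_cm, range_cm):
    """Crude pristine-peak shape: low entrance dose rising to a narrow peak."""
    if depth_cm > range_cm + 0.3:
        return 0.0
    entrance = 0.3
    peak = math.exp(-((depth_cm - range_cm) ** 2) / (2.0 * 0.3 ** 2))
    return entrance + peak

ranges = [16.0 - 0.5 * i for i in range(10)]           # peaks from 16 cm down to 11.5 cm
weights = [1.0] + [0.45 - 0.02 * i for i in range(9)]  # deepest peak weighted most

def sobp(depth_cm):
    return sum(w * pristine_peak(depth_cm, r) for w, r in zip(weights, ranges))

for d in (5.0, 11.0, 13.0, 15.5, 16.5):
    print(f"depth = {d:5.1f} cm   SOBP dose (arb.) = {sobp(d):.2f}")

The loop over weights and shifted peaks is the bookkeeping that, with properly optimized weights, produces the flat high-dose region of Figure 7.2.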
The superposition of Bragg peaks is achieved by mechanically inserting
range-shifting material in the monoenergetic proton beam with the
insertion time proportional to the required contribution of a peak to the
SOBP (Fig. 7.8). The common method is to use a rotating wheel with
increasing step thicknesses and where the angular extent of the step is
proportional to the weight. A rotating wheel can produce SOBPs of
varying modulation by turning the beam off before a full rotation has
been completed.
The modulation width traditionally is specified as the distance
between the distal and proximal 90%. However, this definition leads to
modulation values larger than the range value for some fields and
becomes hard to measure in the shallow entrance dose region. Thus, in
our practice at least, we specify the modulation width between the distal
90% and the proximal 98% (Fig. 7.2).
Absolute dose calculations for SOBP models have, in general, not been
implemented and output calculations rely on empirical models (14). The
complexity, specific construction details, and large number of
mechanical components in a “modern” scattering nozzle make such a
description very difficult. This complexity increases the burden on the
physicist to establish a practical quality assurance protocol for output
calibrations (15).

Pencil-Beam Scanning Field Implementation


For pencil-beam scanning fields, we have physical beams of narrow
protons whose lateral position, quantified in the isocentric plane, is
controlled by the scanning magnets (SMs) and whose penetration depth
is quantified by the energy. This energy is nearly uniform with an energy
spread dE/E on the order of 0.5% or less. The effect of the energy spread
is a broadening of the Bragg peak width.
In principle, one could use the physical pencil beam directly as a
representation for the gaussian form in Equation 7.7. However, this will
lead to an implementation that is insensitive to patient heterogeneities.
The physical pencil beam typically has a spread of 4 to 15 mm at the
entrance as a consequence of the beam transport system and scatter in
air and windows. Thus, considering Equation 7.7, the dose algorithm
implementation would be unable to discern inhomogeneities smaller
than that spread. Schaffner et al. (16) describe various techniques of
modeling dose distributions near lateral heterogeneities and one such
model is the Fluence-dose model. The physical pencil beam (PPB) is
decomposed at the entrance into constituent mathematical pencil beams
(MPB) to retain sufficient resolution of the proton transport inside the
patient. This can be represented by considering the spot spread σ0
separately from the in-patient spread σP in the gaussian function and
modifying Equation 7.7 to the mathematically equivalent form

where the first term (Equation 7.12a) is the number of protons GS (in
units of billions or Gigaprotons) in a PPB in the set S, the second term
(Equation 7.12b) is the apportionment of these GS protons, given the
intrinsic lateral spread σ0(RS, z) of the PPB, over the set of MPB’s K. This
set of MPB’s K in Equation 7.12c is defined at the highest resolution (∼2
mm) necessary to accurately represent the dose in the patient. Equation
7.12c models the diffusion of the number of protons, given by the
product of Equations 7.12a and 7.12b, in the patient given the scatter
spread σP(RS,τWET) in the patient due to multiple Coulomb scattering.
The intrinsic lateral spread σ0(RS, z) in Equation 7.12b is determined by
the scatter in air, magnetic steering, and focusing properties of the PBS
system (Fig. 7.8), and is a function of the spot range RS and position z
along the pencil-beam spot axis. The parameter ΔS, K denotes the
position of a point in the computational pencil beam area AK with
respect to the spot coordinate system. The final term (Equation 7.12c)
follows Pedroni et al. (17), where TW∞(RS, τWET) (in units of Gy cm² Gp⁻¹)
is the absolute measured depth-dose per gigaproton (Fig. 7.9)
integrated over an infinite plane at water-equivalent depth τWET;
σP(RS, τWET) is the total pencil-beam spread at τWET caused by multiple
Coulomb scatter in the patient (6); and the remaining term is the
displacement from the calculation point to the axis of pencil beam K. Equation 7.12 is a
phenomenologic description of the distribution in the patient of protons
delivered by the set of spots S.

A Monte Carlo for Proton Transport in the Patient


The limitation of the gaussian pencil-beam model is well known:
insensitivity to lateral inhomogeneities as only variations along the
pencil-beam axis are sampled (Equation 7.2). For clinical purposes, the
gaussian pencil-beam model is a very good representation (18), given
the relatively small dispersion of the MPB (Fig. 7.5). Thus, if lateral
scatter were accurately modeled, there would be little further
improvement necessary. We describe a simple Monte Carlo that only
models lateral scatter and is implemented on a GPU, generally available
on any personal computer.

FIGURE 7.9 A set of pristine peaks in absolute dose (Gy cm2) per gigaproton. Note the change in
peak width and height as a function of range.
The physics of proton transport is well described by considering
multiple Coulomb scatter using Molière’s theory and energy loss S along
the track. Individual protons are transported through a volume
represented by a rectangular 3D grid of voxels of dimension (dx, dy, dz)
and localized by their index (i,j,k). Each voxel has the relative (to water)
stopping power SWV. The dose to a voxel is the sum of energy
depositions along the individual proton tracks that traverse the voxel
and divided by the voxel mass.

FIGURE 7.10 Pseudocode for transporting a single proton through a voxel. The proton enters with
energy Eo and scatters by a mean polar angle θ and a random azimuthal (around the proton
direction) angle φ (green line). The energy loss (Ei − Eo) depends on the energy only and is
computed as a function of energy for a mean path length ⟨λ⟩. The dose d deposited is the
deposited energy divided by the mass of the voxel.

A proton, of energy ES, enters a voxel on one of its faces and exits on
another. We compute the (unscattered) exit point along the incoming
proton direction (u,v,w) and the distance λ between the entrance and
this projected exit point. We compute the mean polar scatter angle in the
voxel, and the azimuthal angle is sampled randomly, uniform between
0 and 2π. The mean scatter angle θ0(ES) is derived and quantified by
Gottschalk (10), analogous to the Highland formula (Equation 7.6c) but
more appropriate to traversal through thin layers. The azimuthal angle is
the only random variable. The proton direction (u,v,w) is adjusted,
given the scattering angles, and the actual exit point is computed.
The energy loss of the proton along the mean voxel geometric track
length λ is derived from either the measured depth-dose distribution,
TW∞(R, τWET), or from range-stopping power tables as in Fippel and
Soukup (19). The energy loss in a voxel in the former model is

In either model, the depth-dose in the Monte Carlo is tuned to match the
measured depth-dose by adjusting the energy spread at the entrance to
the phantom. The use of a mean track length ·λÒ solely serves to reduce
computational overhead. Figure 7.10 shows the pseudocode for this
simple algorithm and Figure 7.11 shows the ability of the algorithm to
model traversal through heterogeneous medium.
This algorithm is implemented on a GPU, which contains numerous
processors and where each processor can execute multiple threads of the
code in Figure 7.10. The implementation can transport on the order of
600,000 protons/s on an off-the-shelf graphics card. Such speed is
competitive to a “conventional” pencil-beam application and we,
therefore, expect that such implementations will replace the
conventional implementations.
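A simplified, CPU-only illustration of the per-voxel stepping loop described above is sketched below; it is not the authors' GPU code, and the scattering-angle and stopping-power expressions are crude stand-ins chosen only to make the loop self-contained and runnable. The scoring is one-dimensional (by voxel index along the beam), so only the structure of the transport step is illustrated.

import math
import random

VOXEL_CM = 0.2

def mean_scatter_angle(energy_mev, step_gcm2):
    """Crude stand-in for the thin-layer scattering angle (radians)."""
    pv = 2.0 * energy_mev            # rough proxy for p*v (MeV) at therapeutic energies
    return (14.1 / pv) * math.sqrt(step_gcm2 / 36.1)

def stopping_power(energy_mev):
    """Crude stand-in for S (MeV/cm) in water; rises as the proton slows."""
    return 7.3 * (100.0 / max(energy_mev, 1.0)) ** 0.8

def transport_one_proton(energy_mev, n_voxels, dose_scored):
    direction = (0.0, 0.0, 1.0)
    for k in range(n_voxels):
        if energy_mev <= 1.0:
            dose_scored[k] += energy_mev          # dump residual energy and stop
            return
        step = VOXEL_CM                           # mean geometric track length
        de = min(stopping_power(energy_mev) * step, energy_mev)
        dose_scored[k] += de                      # score energy lost in this voxel
        energy_mev -= de
        theta = mean_scatter_angle(energy_mev, step * 1.0)   # water density
        phi = random.uniform(0.0, 2.0 * math.pi)  # the only random variable here
        # Small-angle update of the direction cosines (valid for small theta).
        u, v, w = direction
        u += theta * math.cos(phi)
        v += theta * math.sin(phi)
        norm = math.sqrt(u * u + v * v + w * w)
        direction = (u / norm, v / norm, w / norm)

random.seed(0)
depth_dose = [0.0] * 60
for _ in range(2000):
    transport_one_proton(100.0, 60, depth_dose)
peak_voxel = max(range(60), key=lambda k: depth_dose[k])
print(f"peak energy deposition near depth {peak_voxel * VOXEL_CM:.1f} cm")

Even this toy model places the depth of maximum energy deposition at roughly the expected range of a 100-MeV proton; a realistic implementation replaces the stand-in stopping power and scattering models with tabulated or measured data and scores dose in a full three-dimensional voxel grid.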

FIGURE 7.11 Energy loss distribution of 2 physical pencil-beams traversing water with
inhomogeneous blocks (gray for “bone” and open for “air”). Note the differential lateral scatter,
as expected.
The clinical proton field is defined by a set of numerous protons
(∼10⁶ or 10⁷) with a mean energy E, spread ΔE/E, entry point (x, y, z)
on the surface of the voxel volume, and direction (u, v, w). This
computational set is derived from the properties of the decomposed
pencil beams as they are for either scattered or PBS field as described
above.

KEY POINTS
• Pencil-beam dose algorithms rely on the decomposition of the
radiation field into numerous smaller, pencil-beam, fields that in
aggregate represent the transport of the whole field through the
patient. Note that the term pencil-beam in proton pencil-beam
algorithm refers to the decomposition of the field—not the physical
delivery! Each decomposed pencil-beam is transported through the
patient, where the local energy (dose) is considered a function of
the radiologic depth along the pencil-beam axis and of the
convolution of the energy at that depth with the surrounding
volume.
• The accuracy of a gaussian pencil-beam algorithm is limited by the
width of the gaussian at the calculation depth. The algorithm
assumes that all volume contained by the gaussian experiences
the “same” physics and thus becomes insensitive to lateral
heterogeneities on the order of the width of the gaussian. Thus, for
protons, this limits the algorithm to about 5 mm at depth.
• Monte Carlo dose calculations calculate dose based on the energy
losses of individual particles in a computational (voxel) volume,
which is often the smallest geometric representation of the patient.
Thus, Monte Carlo dose calculations are accurate up to the size of
a voxel.
• The definition of dose–response is based on the empirical
experience gained (primarily) from photon irradiation and as
referenced to Cobalt-60. For photon irradiation, physical dose is
therefore equated to biologic dose and its radiobiologic
effectiveness (essentially in comparison to itself) is 1. Ions exhibit
different energy deposition along their tracks compared to
electrons, which in turn may create a difference between physical
dose and biologic dose. For example, clinical proton fields are
assumed to have an RBE = 1.1, which means that 2 Gy of Cobalt-
60 dose can be delivered by about 1.8 Gy of physical proton dose.

QUESTIONS
1. Given a prescription dose of 2 Gy (RBE) per fraction, what is the
physical dose to be delivered by the proton field?
A. 1.82 Gy
B. 2.20 Gy
C. 2.00 Gy
2. What is the thickness of a Lucite range shifter (ρ = 1.15 g/cm3)
to shift a proton beam range of 10 cm to 8 cm?
A. 2.30 cm
B. 2.00 cm
C. 2.00 g/cm2
3. What is the gaussian spread of an infinitesimal parallel proton
pencil-beam of range = 20 cm at 10 cm in water?
A. 4.5 mm
B. 1.5 mm
C. 3.0 mm
4. In a scattered proton field, the virtual source size is (typically)
very large and on the order of σ = 5 cm. A large SAD (say 300
cm) and a short aperture-to-isocenter distance (say 20) reduce
this source size to:
A. 3.3 mm
B. 3.6 mm
C. 5.3 mm
ANSWERS
1. A 1.82 Gy [2 Gy (RBE)/1.1 (RBE)]
2. C 2.00 g/cm2 [thickness expressed in “water” thickness]
3. B (from Figure 7.6) 4.8 mm × 0.3 = 1.5 mm
4. B 20/(300 − 20) × 50 mm = 3.6 mm

REFERENCES
1. Gottschalk B. Passive beam spreading in proton radiation therapy.
http://huhepl.harvard.edu/∼gottschalk/.
2. Paganetti H. Nuclear interactions in proton therapy: dose and
relative biological effect distributions originating from primary and
secondary particles. Phys Med Biol. 2002;47:747–764.
3. International Commission on Radiation Units and Measurements.
Prescribing, recording, and reporting proton-beam therapy (ICRU
Report 78). J ICRU. 2007;7(2):NP.
4. Amsler C, Doser M, Antonelli M, et al. (Particle Data Group). Phys Lett B. 2008;667:1.
5. Janni J. Proton range-energy tables, 1 keV–10 GeV. At Data Nucl
Data Tab. 1982;27:147–339.
6. Hong L, Goitein M, Bucciolini M, et al. A pencil beam algorithm for
proton dose calculations. Phys Med Biol. 1996;41:1305–1330.
7. Schneider U, Pedroni E, Lomax A. The calibration of CT Hounsfield
units for radiotherapy treatment planning. Phys Med Biol.
1996;41:111–124.
8. Cormack AM, Koehler AM. Quantitative proton tomography:
preliminary experiments. Phys Med Biol. 1976;21:560–569.
9. Molière G. Theorie der Streuung schneller geladener Teilchen II. Mehrfach- und Vielfachstreuung. Z Naturforsch. 1948;3A:78–97.
10. Gottschalk B. On the scattering power of radiotherapy protons. Med
Phys. 2010;37:352–367. arXiv:0908.1413v1 (physics.med-ph).
11. Lynch GR, Dahl OI. Approximations to multiple Coulomb scattering.
Nucl Instrum Meth. 1991;B58:6–10.
12. Lee M, Nahum A, Webb S. An empirical method to build up a model
of proton dose distribution for a radiotherapy treatment-planning
package. Phys Med Biol. 1993;38:989–998.
13. Hogstrom KR, Mills MD, Almond PR. Electron beam dose
calculations. Phys Med Biol. 1981;26:445–459.
14. Kooy HM, Schaefer M, Rosenthal S, et al. Monitor unit calculations
for range modulated spread-out Bragg peak fields. Phys Med Biol.
2003;48:2797–2808.
15. Engelsman M, Lu HM, Herrup D, et al. Commissioning a passive
scattering proton therapy nozzle for accurate SOBP delivery. Med
Phys. 2009;36:2172–2180.
16. Schaffner B, Pedroni E, Lomax A. Dose calculation models for
proton treatment planning using a dynamic beam delivery system:
an attempt to include density heterogeneity effects in the
analytical dose calculation. Phys Med Biol. 1999;44:27–41.
17. Pedroni E, Scheib S, Bohringer T, et al. Experimental
characterization and physical modelling of the dose distribution of
scanned proton pencil beams. Phys Med Biol. 2005;50:541–561.
18. Jiang H, Paganetti H. Adaptation of GEANT4 to Monte Carlo dose
calculations based on CT data. Med Phys. 2004;31:2811–2818.
19. Fippel M, Soukup M. A Monte Carlo dose calculation algorithm for
proton therapy. Med Phys. 2004;31:2263–2273.

1Proton path length is the thickness of material summed over the track
of the proton. Protons at clinical energies, however, undergo little
deviation from straight paths, so path length and thickness can be used
interchangeably in proton therapy. Depth is the thickness of material
summed along a ray from the entrance of the phantom to a given
endpoint.
8 Commissioning and Quality Assurance

James A. Kavanaugh, Eric E. Klein, Sasa Mutic, and Jacob Van Dyk

INTRODUCTION
Modern radiation therapy requires increasingly sophisticated
technologies to accurately deliver high doses of radiation to very specific
anatomic targets. The successful administration of a patient’s radiation
treatment is the final step of a complex process, an overview of which is
illustrated in Figure 8.1. Errors at any one of these steps can produce deviations between the delivered radiation treatment and the intended prescription; such deviations have the potential to produce markedly inferior clinical outcomes compared with those of patients whose treatment was protocol compliant from the start (1). On rare occasions, these deviations can
have catastrophic consequences to the patient’s health (2,3). To ensure
accuracy of the prescription and the fulfillment of treatment intent, a
rigorous quality assurance (QA) program is required at all stages of the
radiation treatment process. The purpose of a QA program is to provide
systematic evaluations and necessary corrective actions to maintain the
quality and safety of a radiation therapy program. These evaluations, or
quality controls (QC), measure a specific quality metric, compare it to an
existing standard baseline, and adjust the metric to conform to the
baseline as necessary. The baselines used within a QA program are
typically defined during the initial commissioning process (4).
Commissioning is the preparation of a new device, technique, or
procedure for clinical use.
The more darkly shaded region of Figure 8.1 highlights the stages that
relate specifically to treatment planning. Virtual simulation combined
with the use of a computerized treatment planning system (TPS) for dose
computation and technique optimization is standard practice in the
modern radiation therapy department. Developments in automated
optimization (5), beam intensity modulation (6), the use of a variety of
imaging modalities (7), and the inclusion of biologic parameters (8) to
calculate tumor control probabilities (TCPs) and normal tissue
complication probabilities (NTCPs) have added dramatically to the
complexity of the modern TPS. More recently, developments in auto
segmentation contouring algorithms, automated/knowledge-based
planning (KBP) techniques (9), and adaptive radiation therapy (ART)
(10) have further compounded this complexity.
The continually expanding complexity of the modern TPS has made
the commissioning process one of the most challenging and error-prone
steps in modern radiation therapy (11,12). Historically, measurements
and modeling of the basic radiation dataset needed to accurately
commission a TPS have been the responsibility of the local medical
physicist. Despite comprehensive guidance documents (13–16), the
variation of skill and experience of the local physicist has produced
drastic variations in the accuracy of the calculated dose distributions
across institutions utilizing the same linear accelerator technology (12).
As modern linear accelerator technology has been shown to be
consistently stable and uniform, there has been increased discussion
regarding the use of a vendor-specified, standardized basic radiation dataset for commissioning a TPS (17,18). The standardized dataset for a specific vendor (Varian Medical Systems), known as golden beam data (GBD), would minimize the variation in TPS commissioning but would still require local verification measurements to ensure that it matches the delivered dose data. Utilizing GBD instead of locally measured data is
quite controversial and is mentioned here to inform the reader of the
current disagreement in the field (19).
The overall need for QA in radiation therapy is well defined (20–22).
As mentioned previously, comprehensive reports on treatment planning
QA have been developed by the AAPM (13) and, more recently, by the
International Atomic Energy Agency (IAEA) (14–16). This chapter
addresses issues related to the use of treatment planning computers in
the context of acceptance testing, commissioning, and routine clinical
application. Although the emphasis is on commissioning and QC, there is
also some discussion on QA of the total treatment planning process. The
intent of this chapter is to be as generic as possible so that it applies to
both conventional and more sophisticated radiation treatment
techniques; however, because of page limitations, details for specialized
techniques are confined to references where indicated.

Historical Perspective, Current Status, and Future Possibilities
Publications on computer applications in radiation therapy date back to
1951, when Wheatley (23) produced an analog computer to perform
“dosage estimation of fields of any size and any shape.” The primary
purpose was to relieve the tedium associated with dosage calculations
(24). It appears that computers were used in radiation therapy as early
as in any other field of medicine. Tsien (25) has been credited with
being the first in the application of “automatic computing machines to
radiation treatment planning.” During the past 60 years, there has been
an enormous technologic evolution of computer technology, including
microcomputers, large time-sharing systems, minicomputers, graphics
workstations, and desktop personal computers, as well as laptops,
handhelds, and tablets. During each phase of computational
development, computers provided treatment planners with faster and
more sophisticated capabilities for dosage calculations, better image and
graphical displays, and improved automated optimization capabilities
(26). The evolution of these sophisticated technologic developments can
be readily followed by reviewing the proceedings of a series of
international meetings on the use of computers in radiation therapy, the
first and last of which occurred in 1966 and 2013 (27,28).
FIGURE 8.1 Schematic flowchart of the steps in the total radiation therapy process. The shaded
region emphasizes those steps specifically associated with treatment planning and the use of the
TPS. (Adapted from Van Dyk J, Rosenwald J-C, Fraass B, et al. Commissioning and quality
assurance of computerized planning systems for radiation treatment of cancer. Figure 1 of IAEA
TRS-430 (14), reproduced with permission.)

The advances in computer technology led to a revolution in diagnostic imaging during the 1970s that provided a further breakthrough for
radiation therapy planning. Until then, it was difficult both to localize
the tumor with any kind of accuracy and to provide accurate dose
calculations accounting for patient-specific tissue densities. With the
advent of computed tomography (CT) it became possible for the first
time to derive in vivo density information that could be incorporated
into the dose calculation process. The combination of developments in
computers and in diagnostic imaging resulted in much research,
improving dose calculation procedures that accounted for the actual
tissue composition of the patient.
Over the past two decades, the utilization of multiple imaging
modalities and the ever-increasing computational demands have
necessitated changes in the TPS hardware and storage capabilities.
Modern fluence optimization and dose calculations require significantly
more computational power, resulting in a migration from local desktop
PCs to large local server farms. Storing the vast amount of on-treatment
and historical treatment planning data often requires utilizing picture
archiving and communications systems (PACS). While the installation of
these systems is often completed by industry experts, it is typically the
responsibility of the local physics or information technology (IT) staff to
provide management for routine data backup/restoration,
software/hardware issues, and future upgrades. While larger institutions
may have the necessary resources to fully support all components of a
modern local server-based TPS, the increased hardware/software
components may be beyond the means and expertise available for
smaller clinics.
One possible solution currently being explored is the utilization of
cloud-computing (29). The ubiquity of modern high-speed fiber optic
communications may allow clinics to outsource the dose computation,
data archiving, and other TPS functionalities to professionally managed
server farms. Cloud-computing would allow all radiation oncology
clinics to install modern TPS functions without extensively investing in
hardware or specialized staff. Implementing cloud-computing could
provide several other advantages, including easy sharing of plan data
and a standardization of TPS parameters between institutions with
matched machines. Before being adopted clinically, several issues,
including data security/privacy and data transfer integrity, would need
to be fully resolved.
The topic of QA for TPSs has only recently received extensive
attention from national and international radiation oncology groups, as
demonstrated by the development of several reports formalizing TPS QA
(13–16,30–33). This is partly due to the nature of QA, which has not
historically been considered a major area for high-powered research and
partly in response to the increased complexity and sophistication of
technology used in the modern radiation oncology clinic. This chapter
summarizes recent developments and recommendations associated with
QA issues in radiation treatment planning, especially the implementation
and clinical use of computerized radiation TPSs.

Treatment Planning Process


In its broader sense, treatment planning includes all the steps from
therapeutic decision making to target volume and normal tissue
delineation, selection of treatment technique, determination of the
direction of radiation beams, simulation, fabrication of ancillary devices
and treatment aids, monitor unit (MU)–time calculations, treatment
verification, and finally, first treatment (Fig. 8.1). In its narrower sense,
treatment planning includes the outlining of target and critical volumes
(34); the determination of the number, directions, and modality/energy
of radiation beams; and the corresponding MU calculations. In this
narrower definition, treatment planning involves the use of image
information and the computer to perform the appropriate virtual
simulation and dose calculations. The QA considered in this chapter
primarily addresses the use of the TPS to generate appropriate beam
arrangements and dose calculations. The following specific issues are
associated with that part of the QA process.
Patient Data
Patient data can be derived in various ways, including simple methods of
external contour determination and various imaging modalities, most
commonly CT. Image registration techniques (rigid and deformable) are
used to accurately overlay the patient’s anatomy from various imaging
modalities so that each can be fully used during contour delineation. The
important issue at this stage of planning is to ensure that the patient is
positioned identically to the eventual treatment position and that the
derived data represents this position (35). It is important that the data
be transferred accurately to the TPS. Also, any conversions of digitized
data, such as the conversion of CT numbers to electron densities, must
be handled accurately by the system.
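
As one example of a conversion that must be verified, the sketch below implements a piecewise-linear CT-number-to-relative-electron-density lookup in Python. The calibration points are hypothetical placeholders; real values would come from a scan of the clinic's own CT-to-density phantom.

import numpy as np

# Hypothetical CT-to-density calibration points (HU, relative electron density).
# These are placeholders; actual values must be measured on the local CT scanner.
calibration_hu = np.array([-1000.0, 0.0, 60.0, 1000.0, 3000.0])
calibration_red = np.array([0.001, 1.000, 1.050, 1.550, 2.500])

def hu_to_relative_electron_density(hu_values):
    """Piecewise-linear interpolation of relative electron density from CT numbers."""
    return np.interp(hu_values, calibration_hu, calibration_red)

# Example: convert a few voxel CT numbers from an imported dataset.
print(hu_to_relative_electron_density([-800.0, -100.0, 40.0, 250.0]))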

Display of Patient and Beam-Related Information


Once the data have been transferred to the TPS, the treatment planner
can manipulate the information, look at the data on various slices or in
3 dimensions (3D), and allow the system to perform reconstructions of
images. With the patient data on the computer, the radiation oncologist
is able to outline target volumes and critical organs-at-risk (OAR) on the
appropriate slices (34). The correctness of the display of patient data is
not only important for target/OAR delineation but also for the treatment
planner to accurately design the placement and optimization of radiation
beams. Beam placement is often performed by entering parameters such
as field size, beam direction, and collimator rotation. At this stage,
various options can be used for the definition of the field shape. In each
case, the beam edges can be displayed either on a beam’s-eye-view
perspective or as perspectives of the beam edges intersecting any
specified plane. Associated with the beam edge display, information
demonstrating gantry angle, collimator angle, beam energy, and
collimator size should also be displayed on the screen.

Dose Calculation and Display


Once the beam geometry has been determined, the dose distribution can
be calculated and displayed. Displays vary from simple colored isodose
lines to color washes to individual point doses. The accuracy of the
geometric correctness of isodose lines on the display is very difficult to
assess, although very specific phantom geometries, whereby one can
assess the position of a specific isodose line, can help with this process.
Doses calculated at individual points can be correlated to the isodose
lines as well as measured data.

Development and Implementation of Treatment Planning Algorithms
The clinical implementation of treatment planning programs involves a
number of steps. Some are under the control of the user, and some are
not, because they depend on software developed by others. The typical
clinical implementation of treatment planning programs usually takes
the following steps.

Development of Calculation Algorithms


Dose calculation algorithms are based on the physics of radiation
interactions within tissue. Because of the complexity of the physics of
these types of interactions, the algorithms usually involve simplifications
allowing the calculations to be completed fast enough to be useful to the
treatment planner. These simplifications result in approximations to the
complicated physics, and therefore, the algorithms have inherent
uncertainties and generally work well only over a limited range of
conditions. Usually, the more complex algorithms handle the physics in
more detail, but also require longer calculation times. The extreme
example of this is Monte Carlo calculations, which can take hours to
days, depending on the mode of treatment and the complexity of the
plan, although recent commercial clinical versions for electron beams
can be calculated in minutes (36) and photon beams from minutes to
hours (37). To be practical, a clinical algorithm should generate dose
distributions nearly in real time, but usually in seconds. The details of
the algorithm implemented on a given commercial TPS are not in the
control of the user.

Development of the Computer Programs Implementing the Algorithms
Once a developer of a TPS has determined the nature of the algorithm,
the algorithm must be coded into software. This software must include
appropriate input–output routines, image display, and manipulation
routines, options that allow the user to define the treatment technique,
and plan optimization and evaluation routines. The development of the
software is not under the control of the user. It is the responsibility of
the developer of the software to ensure that the algorithms are properly
coded.
It is important for the user to have some knowledge of the nature of
the dose calculation algorithms to help understand their capabilities and
limitations. Furthermore, a basic knowledge will also help the user
diagnose specific TPS problems and can be of some help in developing a
QA process. A detailed description of different dose calculation
algorithms can be found in the preceding chapters. A detailed report on
external beam tissue inhomogeneity corrections for photon beams has
been produced by AAPM Task Group 65 (38). In recognizing that the
thorough description of algorithms was also beyond the scope of the
IAEA TRS-430 (14), the report provides a series of questions that users
may want to address either as part of the search process for a new TPS
or in attempting to understand the calculation algorithm(s) currently
available on their TPS. This information is provided in 14 tables
addressing different issues related to TPSs. To give an idea as to the
subjects covered, the titles of these tables are summarized in Table 8.1.
Table 8.2 is an example of one of the tables in IAEA TRS-430 (14) and
shows questions related to external beam dose calculation algorithms in
a water-like medium without any beam modifiers.
For brachytherapy, an interesting review of the evolution of treatment
planning has been provided by Rivard et al. (39).

Determination of the Radiation Database Required by the Algorithms
All algorithms, even sophisticated Monte Carlo procedures, require a
basic radiation dataset as input. As discussed earlier in this chapter, the
data used to create a basic radiation dataset originates from two possible
sources. Traditionally, the data are independently measured for each
energy on each therapy machine in every radiation therapy department.
The quality and accuracy of such data depend on the individuals
commissioning the TPS. Alternatively, one can commission the TPS using
GBD supplied by the system manufacturer and validate the supplied data
against a subset of beam measurements from the local radiation therapy
machines.

TABLE 8.1 Titles of Tables in IAEA TRS-430 (14) Addressing Questions Associated with TPS Algorithms

Such data are always determined over a limited range of conditions. Thus, calculations that extend beyond the range of the original
measurements may be subject to question, depending on the
extrapolation procedures used by the calculation algorithms.
Furthermore, measured data have their own inherent uncertainties and
depend on the type and size of detectors used and the care taken by the
experimentalist (18). The accuracy of the measured data also depends on
the stability of the radiation therapy machine and its ability to yield the
same kind of radiation characteristics from day to day and hour to hour.
The input data required by TPSs at minimum are relative, in the form
of dose ratios, with the denominator being the dose under some
reference condition. Any TPS capable of calculating MU or treatment
times also requires absolute information in the form of MUs per gray or
grays per minute. These are all part of the input dataset required by the
TPS.
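
To illustrate how the relative and absolute input data combine in practice, the sketch below shows a highly simplified monitor-unit calculation for a single open photon field. The factor names and numerical values are assumptions chosen for illustration and do not represent the formalism of any particular TPS.

def monitor_units(prescribed_dose_cgy, reference_dose_rate_cgy_per_mu,
                  tissue_maximum_ratio, output_factor, modifier_factor=1.0):
    """Simplified MU calculation: absolute calibration multiplied by relative dose ratios."""
    dose_per_mu = (reference_dose_rate_cgy_per_mu * tissue_maximum_ratio *
                   output_factor * modifier_factor)
    return prescribed_dose_cgy / dose_per_mu

# Hypothetical example: 200 cGy prescribed at depth, 1 cGy/MU reference calibration,
# TMR = 0.85 at the prescription depth, and an output factor of 1.02 for the field size.
print(round(monitor_units(200.0, 1.0, 0.85, 1.02)))   # ~231 MU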

TABLE 8.2 External Beam Dose Calculation Algorithm: Dose in Water-Like Medium without Beam Modifier
Clinical Application
Finally, the clinical use of the TPS requires patient-specific information
in the form of patient contours, usually generated with the aid of CT,
PET, and/or MRI. Appropriate parameters must be entered to determine
the treatment configuration. Dose calculations are usually performed for
each beam independently, with the summed doses displayed on a
monitor. This clinical application of treatment planning depends entirely
on the user and his or her knowledge of the capabilities and limitations
of the TPS. Admittedly, the newer inverse planning optimization
routines (5) used for IMRT are automated and leave little in the control
of the user other than entering the dose-volume constraints as required
by the objective function for the optimization.

TREATMENT PLANNING SYSTEM COMMISSIONING AND


QUALITY ASSURANCE
Terminology Associated with QA
Four major topics that are associated with the installation of any major
piece of apparatus are specifications, acceptance testing, commissioning, and
QC. In the context of TPS, the distinction between some of these terms is
not entirely clear, and therefore they warrant special discussion.

System Specifications
In the context of treatment planning computers, specifications define the
detailed functionality criteria of how the TPS will be utilized within the
clinical workflow. Until relatively recently, there have been numerous
commercial and home-grown TPSs with each offering unique
functionality. This broad selection made it extremely important for an
end user to carefully match system specifications to their current/future
clinical workflow. Currently there are only a few commercially available
systems, each offering a standard range of functions that facilitate all
applications used within most clinical workflows. Manufacturers of these
modern TPSs provide specifications that define the capabilities of their
equipment. For TPSs, the specifications tend to include necessary
hardware, system administration software, networking software, and
dose planning software. Software specifications include detailed
descriptions of what the software is capable of doing and how accurately the dose calculations can be made. Networking software specifications
should detail the ability for the TPS to be fully integrated with the
electronic medical record, information management, and record and
verify software systems.

Acceptance Testing
Upon the installation of any new device, the user should assess the
device to ensure that it behaves according to its vendor-defined
specifications. For a TPS, this takes at least two forms: assessment of the
hardware and the software. The latter can also be divided into several
components, including assessment of the integrity of the operating
system, dose calculations, image transfer, and image display. Acceptance
testing is typically conducted by the vendor's installation engineers, who must show that all specifications are met within preagreed-upon standards. Successful completion of acceptance testing represents the
final stage of the installation contract.

Commissioning
Commissioning is the process of putting the system into active clinical
service. This includes the production of a basic radiation database,
which is entered into the TPS, after which the user tests the system over
a range of clinically relevant conditions. Quality evaluations of the
programs’ outputs are then made. Such a process cannot test all the
system’s pathways or subroutines; however, it does provide the user with
a level of confidence over a wide variety of often-used treatment
conditions. In addition, it helps the user understand the degree of
uncertainty associated with these specific calculations. Finally, during
commissioning the user produces a baseline of performance standards to
be utilized for future QC (4).

Quality Control
As indicated earlier, QA and QC are closely related. QA is the total
process required to ensure that a certain level of quality is maintained
for a defined product or service. QC consists of systematic actions
necessary to ensure that the product or process performs according to
specification. QC contains three components: (a) the measurement of the
performance, (b) the comparison of this performance with existing
baselines or specifications defined during commissioning, and (c) the
appropriate actions necessary to keep or regain agreement with the
baseline.
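
A minimal sketch of these three QC components (measurement, comparison against the commissioning baseline, and corrective action) is given below; the metric, baseline, and tolerance are hypothetical.

def qc_check(measured_value, baseline_value, tolerance):
    """Compare a measured QC metric with its commissioning baseline."""
    deviation = measured_value - baseline_value
    return deviation, abs(deviation) <= tolerance

# Hypothetical example: a relative output check against a 2% tolerance.
deviation, within_tolerance = qc_check(measured_value=1.013, baseline_value=1.000, tolerance=0.02)
print(f"Deviation: {deviation:+.3f}; within tolerance: {within_tolerance}")
if not within_tolerance:
    print("Corrective action is required to regain agreement with the baseline.")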
In summary, QA and QC first necessitate defining a series of
specifications. Acceptance tests ensure that the system meets the basic
requirements as defined by the specifications. Commissioning makes the
computer ready for clinical use and provides a series of baselines that
can be used for ongoing QC to ensure that the system is maintaining the
required standards. Ongoing QC must be performed at intervals to
confirm that there have been no changes in the basic radiation and
machine parameter data files, in the input–output hardware, in the CT,
MRI, or other imaging-related software or hardware, or with the transfer
of data between clinical systems. Each of these four sections will be
described in detail throughout the remainder of the chapter.

System Specifications
Sources of Uncertainties
Specifications take various forms. One form is simply a statement of
whether the TPS is capable of doing a particular function or not. Another
form is quantitative, for example, calculation speed, number of images it
can hold, and so on. A third form is a statement of accuracy. This is
particularly relevant for dose calculations. To assess the accuracy of a
TPS and to define realistic accuracy specifications, it is necessary to
understand the sources of uncertainties.
The determination of uncertainties in dose calculations is complex
because dose calculation algorithms depend on input information, which
is usually generated by measurement. Thus, the uncertainty in the
calculation output depends on the uncertainties associated with the
measurements as well as the limitations of the calculation algorithms.
Measurements are of various types, including relative doses in water
phantoms, absolute dose calibrations for MU calculations, patient
anatomy using imaging techniques or contouring devices, and thickness
profiles of compensators or bolus.
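
As a simple illustration of how such contributions propagate, independent uncertainty components are often combined in quadrature; the component values below are hypothetical, and the quadrature rule itself is a standard assumption rather than a prescription from this chapter.

import math

# Hypothetical one-standard-deviation uncertainty components (percent).
measurement_uncertainty = 1.5    # relative dose measurements in the water phantom
calibration_uncertainty = 1.0    # absolute dose calibration
algorithm_uncertainty = 2.0      # dose calculation algorithm limitations

combined = math.sqrt(measurement_uncertainty**2 +
                     calibration_uncertainty**2 +
                     algorithm_uncertainty**2)
print(f"Combined uncertainty: {combined:.1f}%")   # ~2.7%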
Suggested Tolerances
Criteria of acceptability for dose calculations have been described by
various authors (21,40–42). Task Group 53 of the AAPM (13) and IAEA
TRS-430 (14) also include discussions on criteria of acceptability and
tolerances generally considered achievable. All tolerance values vary
dependent on the region of calculation. Greater accuracy can be
achieved on the central ray in a homogeneous phantom than in the
penumbra region at the beam edge. Generally, four regions can be
considered: (a) regions of low-dose gradients in the central portion of the
beam, (b) regions of large-dose gradients such as those occurring in the
penumbra or in the fall-off region for electron beams, (c) regions of low-
dose gradients in low-dose areas such as those occurring outside the
beam or under large shielded areas, and (d) doses in the buildup or
build-down regions at the entrance and exit surfaces of the patient.
Criteria of acceptability are generally quoted as a percent of the
reference dose except in regions of high-dose gradients, where a spatial
agreement in millimeters is a better descriptor, since the dose
uncertainties in such regions can be very large.
Criteria of acceptability should include a statement of confidence. For
example, we may state that the criterion of acceptability is an accuracy of 2% of the dose calculated on the central ray of the beam for a homogeneous phantom. However, it is not clear whether this statement expects all calculated doses to deviate by less than the criterion or only a certain percentage, such as one standard deviation (68%), to be within 2% of the measured values. This is an important consideration, since
ambiguities in these criteria can generate tremendous frustration from
the user’s perspective as well as some troublesome legal interactions
with manufacturers of TPSs. By way of example, Van Dyk et al. (41)
produced some tables of criteria of acceptability clearly outlining the
system’s general capabilities. Tables 8.3 and 8.4 are similar to the Van
Dyk data, but some adjustments have been made in the numerical values
to indicate a slightly tighter range of acceptability reflecting
improvements in dose calculation algorithms available with modern
TPSs. These criteria of acceptability represent one standard deviation
about the mean. Venselaar et al. (42) have used a somewhat more
complex, but more rigorous, definition of a “confidence limit” and this
has been discussed in IAEA TRS-430 (14). It should also be noted that
calculation accuracies depend not only on the input data and the
limitations of the algorithms, but also on the user’s performance of the
calculation, which includes issues such as the choice of grid spacing.
Grid spacing can have a large impact on the accuracy of dose
calculations, especially in regions of high-dose gradient.
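
The confidence statement can be made concrete with a small check such as the one sketched below, which reports what fraction of calculation points agrees with measurement within a stated dose tolerance; the dose values and the 2% tolerance are illustrative only.

import numpy as np

def fraction_within_tolerance(calculated, measured, tolerance_percent):
    """Fraction of points whose dose difference (percent of measured dose) is within tolerance."""
    calculated = np.asarray(calculated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    percent_diff = 100.0 * (calculated - measured) / measured
    return float(np.mean(np.abs(percent_diff) <= tolerance_percent))

# Hypothetical central-axis doses (arbitrary units) in a low-dose-gradient region.
measured = [100.0, 95.2, 90.8, 86.1, 82.0]
calculated = [100.5, 95.9, 90.1, 86.9, 81.2]
print(fraction_within_tolerance(calculated, measured, tolerance_percent=2.0))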

TABLE 8.3 Sample Criteria of Acceptability for Photon and Electron Dose Calculations
TABLE 8.4 Sample Criteria of Acceptability for Brachytherapy
Calculations

For brachytherapy calculations, uncertainty estimates are more difficult to determine because of the very short treatment distances and
the corresponding large-dose gradients. Furthermore, brachytherapy
calculations usually include absolute dose estimates, which require a
detailed understanding of absolute source output specifications. A recent
AAPM report suggests that when all uncertainties are combined, the two
standard deviation (or two sigma uncertainty, k = 2 in the report) of
dose rates used in treatment planning are approximately 9% and 7% for
low-energy and high-energy photon-emitting brachytherapy sources,
respectively (43). These stated uncertainties are for a single location, 1
cm from the source along the transverse plane. It should be noted that
the one standard deviation uncertainties (one sigma, k = 1 in the report) are less than 5% for both low-energy and high-energy sources. These findings are in agreement with the suggested criteria of acceptability for brachytherapy calculations, given with a confidence of one standard deviation in Table 8.4.
These criteria of acceptability are based on what may be realistically
achievable. Ideally, the recommendations of ICRU 42 (40) should be
used as a goal for developers of treatment planning computer software.
For external beam therapy, ICRU 42 recommends 2% accuracy in low-
dose gradients and 2 mm accuracy in high-dose gradients. For
brachytherapy, 5% to 10% accuracy (two-sigma uncertainty) at
distances of 0.5 cm or more has been suggested (43).
Acceptance Testing
The system specifications determine the acceptance tests that have to be
performed. As a first step upon completion of the installation of a new
system, the user should determine that all the components have been
delivered and are consistent with the specifications. This includes a
review of each component of the hardware. With rapidly changing
technologies, manufacturers often switch one piece of hardware with an
updated version or with a device from another manufacturer. It is up to
the user to ensure that the new hardware equals or surpasses the
specifications set out in the purchase document.
The next step is to ensure that each component of the hardware
functions at a simple level, that is, make an assessment of the monitor,
mouse, printer, scanner, storage media, network connection, and other
hardware items. A check that all the relevant manuals and schematics
have been delivered is also important. The next level of hardware testing
is to use the diagnostic programs provided with the system to ensure
that all the hardware components assessed by the diagnostic routines are
functioning properly. This provides both a test of the system and a
baseline for the user to understand any changes that may be observed in
the follow-up QA tests.
The next step is to run the system software and to ensure that each
component of the software listed in the specifications is actually
installed and functional. This includes third-party software (commercial
software purchased by the vendor and included in the TPS) in addition
to the treatment planning software.
The TPS software acceptance is very complex, primarily because the
testing of the software requires input data specific to the therapy
machines to allow direct comparisons of calculations with
measurements. A practical approach is to test the system’s input–output
hardware to ensure that the system is capable of providing the options as
defined in the specifications and then to assess the accuracy of the
calculations as part of the commissioning process. When signing the
acceptance document, the purchaser should indicate that the software
acceptance will be completed as part of the commissioning process.
It should be noted that acceptance testing for IMRT requires a special
emphasis on the system’s capability of handling the penumbra region of
small radiation fields as well as the leakage radiation outside the field.
Small differences between measured and calculated dose in these regions
can yield relatively large dose differences when many of these small
fields are summed together. Due to the difficulty of small field
dosimetric measurements, extreme care should be taken to ensure that
the input data accurately represent the radiation delivery system.
A major component of potential error relates to maintaining the
spatial integrity of imaging datasets during input–output with the TPS.
Analysis of the spatial integrity during input–output should include
validating the consistency of size, shape, distance, and orientation of the
dataset, and the uncertainty should be <1 mm. All contours, target
volumes, normal tissue structures, CT images, beam outlines, ancillary
devices such as wedges and blocks, and isodose lines should be accurate
and consistent between screen display and hard-copy output. Scales,
distance calipers, and any other measurement routines should be
assessed for both function and accuracy. This includes autocontouring,
automatic contour expansion for target volume margins, density
assessments, automatic field shaping, and other features used by the
planning software.
A more comprehensive process for acceptance testing has been
proposed by the IAEA (15). The IAEA process is based on an earlier
document specifically made for vendors of TPSs by the International
Electrotechnical Commission (IEC) (32), which developed a series of
requirements for manufacturers in the design and construction of a TPS.
The consultants group for the IAEA has proposed that vendors should
perform a series of “type” tests for their system, the results of which
should be provided by the vendor to the user as part of the purchase
documentation (15). Type tests refer to those tests that are to be done by
the manufacturer, normally at the factory, to establish compliance with
specified criteria. The type tests proposed by the IAEA are based on an
intercomparison of photon dose calculation data (44). In addition to the
6, 10, and 18 MV data in the Venselaar and Welleweerd report, cobalt-
60 data have also been added to the IAEA dataset. These data are
provided by the IAEA to all vendors. The vendors enter the basic
radiation beam data as though they were commissioning those beams for
their dose calculation algorithms. Then, the vendors should perform all
the tests as described in the IAEA document. Once the TPS is installed at
the user’s site, a select subset of tests should be performed to
demonstrate consistency with the vendor’s type tests. The vendors
should update the type test results, if necessary, whenever software
changes or upgrades are made, and again these should be documented
and provided to all purchasers of that software. At the time of software
updates by the user, another select subset of tests can be made to ensure
consistency of results between the vendor’s calculations in the factory
and the user’s calculations in the clinic.

Commissioning
As discussed in the introduction, there is an ongoing debate on the
utilization of GBD for commissioning a TPS (19). Supporters of the
historical customized method of using institution specific beam data
contend that manufacturing and installation variations between linear
accelerators require unique beam models developed from the beam data
produced by extensive commissioning measurements (18). In this
approach, it is the responsibility of the physicist, guided by numerous
formalized documents, to correctly commission the TPS and develop
ongoing QA processes (13,18). Proponents of using GBD contend that
variations between machines from the same vendor/series are minimal
(17) and reference several studies indicating that measurement and modeling errors during TPS commissioning and QA produce more
extensive dose calculation inaccuracies (11,12). Furthermore,
prepackaged TPSs modeled on extensively validated GBD have the
potential to drastically reduce gross systematic errors and standardize
delivery quality across the entire radiation oncology industry (19).
Several vendors of both generalized and specialized commercial TPSs
install units using manufacturer-measured GBD (45–47). Validation of
GBD to the measurements made on the local linear accelerator is still
needed to ensure an accurate match between the planned and delivered
dose. Regardless of which set of beam data is used during TPS
commissioning, the same general measurement and comparison
methodology should be followed, including comprehensive end-to-end
testing for the entire treatment planning and delivery process. The
commissioning process should also incorporate the development of
treatment planning procedures, training for staff, and designating expert users who will be responsible for the ongoing TPS QA program. The
remainder of this section provides examples and highlights of tests that
should be conducted during TPS commissioning.

Photon Beams
The clinical implementation of the TPS can be divided into several
components. These include entry of basic radiation data, entry of
machine-specific parameters, entry of data related to ancillary devices
such as wedges, assessment of image transfer capabilities, assessment of
the accuracy of the electron density conversion formula, and the
validation of data transfer between all systems in the clinical workflow.
Each of these components involves data entry and then tests of the
software. For each component it is important for the user to understand
the capabilities and limitations of the software.
For TPS commissioning using unique local beam data, the data entry
for dose calculations can have various forms, including a direct entry
from data stored in the water phantom computer. The software of both
the water phantom systems and TPSs evolves with time and therefore
comes in various versions. To ensure version compatibility, it is always
important to assess that the data have been read properly by the data
entry programs.
Basic radiation data can be entered in various forms, including tissue–
air ratios, tissue–phantom ratios, tissue–maximum ratios, percentage
depth doses (PDD), and cross-beam profiles. Cross-beam profiles may
also have to be measured under a variety of conditions, including
profiles for the machine collimators, shielding blocks, wedges, and
multileaf collimators. The quality and accuracy of the measured basic
radiation data should be evaluated prior to implementation in the TPS. A
standard set of procedures for acquiring radiation beam data, along with
examples of commonly identified errors, are provided by the AAPM (18).
Limitations of physical measurements, specifically for small fields, high
gradients such as the penumbra and buildup regions, and peripheral
dose regions should be carefully evaluated. In all situations it is
imperative that the user ensures the accuracy of the entered data by
looking at the numerical values on the screen or by plotting out the data
and comparing directly with what has been entered. An independent
verification of the beam data using either Monte Carlo generated data or
GBD should be considered.
Should GBD be used during commissioning to create dose calculation
models, the validation measurements taken on the linear accelerator
should be directly compared to the GBD values. Any discrepancies
should be evaluated and, if necessary, modifications can be made to the
delivery system.
The types of tests that should be used to assess the quality of the
algorithms are summarized through working group reports as seen in
references 13, 14, and 31. Table 8.5 provides a summary of the relevant
parameters and variables that should be included in the testing process.
This is a guide to the kinds of issues that should be considered when
assessing the calculation capabilities of the TPS.

Examples of Photon Commissioning Tests


Several examples of the types of tests that can be performed, possible
methods of evaluation, and some additional issues to consider are
provided for non-IMRT photon beam commissioning. These examples do
not represent extensive commissioning testing and primarily serve to
illustrate possible methods for completing the commissioning process.
Figure 8.2 illustrates an example of simple central axis PDD data for a
range of field sizes. A comparison is made between the calculated data
(lines) and the measured data (points). Figure 8.3 shows how the data of
Figure 8.2 can be analyzed more critically by taking the absolute
difference between the entered and the calculated data for a 10 cm × 10
cm field size. It is clear that beyond the buildup depth (40 mm) the
differences are minimal (<0.2%), but in the buildup region differences
can be as large as 9%. The reason for these differences is not clear, but
the trends are similar for all field sizes. Figure 8.4 shows a difference
comparison for cross-beam profiles and includes two different
calculation algorithms, one being a table look-up algorithm and the
other a convolution calculation. In these graphs, it is clear that there are
sizable differences in dose in the penumbra region, although spatially
these differences are quite small (1 to 2 mm).
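
A difference plot such as that of Figure 8.3 can be generated with only a few lines of code. The sketch below interpolates the calculated PDD onto the measured depths and reports the absolute differences; the data arrays are placeholders, not values from the figures.

import numpy as np

def pdd_differences(depths_meas_mm, pdd_meas, depths_calc_mm, pdd_calc):
    """Absolute difference (calculated minus measured PDD) evaluated at the measured depths."""
    pdd_calc_at_meas = np.interp(depths_meas_mm, depths_calc_mm, pdd_calc)
    return pdd_calc_at_meas - np.asarray(pdd_meas, dtype=float)

# Placeholder data: measured and TPS-calculated PDD for one field size.
depths_meas = [10.0, 40.0, 100.0, 200.0, 300.0]      # mm
pdd_meas    = [95.0, 100.0, 85.0, 62.0, 45.0]        # percent
depths_calc = np.arange(0.0, 301.0, 5.0)             # mm, calculation grid
pdd_calc    = np.interp(depths_calc, depths_meas, pdd_meas) + 0.3   # toy calculated curve

print(pdd_differences(depths_meas, pdd_meas, depths_calc, pdd_calc))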

TABLE 8.5 Initial Dose Calculation Tests: Variables to Consider


FIGURE 8.2 Central ray percentage depth doses (PDD) for fields of dimensions 4 × 4 cm, 10 × 10
cm, 25 × 25 cm, and 40 × 40 cm. The data points entered into the treatment planning system are
labeled Meas and the calculated data are shown by the lines. The data are for the 25 MV beam
from a linear accelerator and are normalized to 100% at a depth of 4 cm.

Figure 8.5 shows a comparison of measured and calculated dose profiles with the gantry angle at 40 degrees incident on a flat water
phantom. This comparison tests the integrity of the TPS to correct for
nonnormal incidence of the primary beam. Figure 8.6 is a demonstration
of the accuracy of the wedge calculations for a motorized wedge (a
physical wedge that is inserted by motor in the beam for a fraction of
the treatment and withdrawn for the remainder). This is a severe test of
the algorithms, since the wedge is very thick, with a wedge factor of
about 0.25. Figure 8.7 shows the difference between calculated and
measured dose for central ray PDD data under the motorized wedge for a
number of field sizes.
FIGURE 8.3 The absolute difference between calculated and entered (or measured) percentage
depth dose (PDD) data for a 10 × 10 cm field for the 25 MV photon beam from a linear
accelerator.

FIGURE 8.4 Absolute differences in relative doses comparing measurements with calculations for
cross-beam dose profiles for two dose calculation algorithms. The beam profiles were measured
at a depth of 4 cm for the 25 MV beam of a linear accelerator. The pencil beam (convolution)
algorithm shows the largest difference, about 10% in the penumbra region, although this
difference represents a spatial uncertainty of only 2 mm (not shown in figure).

Figure 8.8 shows cross-beam dose profiles calculated for a multileaf collimator with the three 1-cm leaves covering the central portion of the
beam to yield a central beam block. Note that one of the leaves is on one
side of the central ray and the other two are on the opposite side. At
depths of 4, 10, and 25 cm, the dose profiles agree reasonably well
except in the centrally blocked region. A more detailed comparison of
this is shown in Figure 8.9, where calculated central ray depth doses are
shown first for an open beam and then with the three central leaves
covering the beam. Also shown are measured central ray data under the
leaves. The differences between calculated and measured data under the
leaves are about 5% to 7% except in what is normally the buildup
region, where the differences are as large as 28%. These differences may
be due to this program’s inability to handle electron contamination
under the leaves or shields. This is a physics problem that occurs in older
algorithms but has been improved in most new commercial treatment
planning programs (48,49). Differences could also be originating from
the limitations of physical measurements within these regions.

FIGURE 8.5 Calculated and measured dose profiles with a gantry angle of 40 degrees incident on
a flat water phantom for a 10 cm × 10 cm field at depths of 4 and 10 cm for the 25 MV beam of a
linear accelerator.
FIGURE 8.6 Calculated and measured dose profiles under a motorized wedge. Profiles are
shown for the 25 cm × 25 cm at depths of 4 and 15 cm, as well as a 30 cm × 30 cm field at a
depth of 4 cm for a 25 MV beam.

FIGURE 8.7 Difference profiles comparing measurements with calculations for central ray doses
measured under a motorized wedge for 6, 10, 25, and 30 cm square fields for a 25 MV beam.
Most differences are within 0.5% except in the buildup region, where differences are as large as
3.5%.
FIGURE 8.8 Comparison of measured and calculated cross-beam profiles under a multileaf
collimator. Three leaves are in the center of the beam, with one leaf on the left side of the central
ray and the other two on the right. The largest differences occur under the centrally shielded
region of the multileaf collimator.

FIGURE 8.9 Doses along the central ray for the same geometry as Figure 8.8. The upper curve is
calculated with no multileaves in the center of the beam. The lower curve is calculated under the
leaves, and the individual data points are the measured data under the leaves. The differences
between measured and calculated doses in the buildup region under the leaves are due to the
inadequacies of the algorithm to handle electron scatter (contamination) under shields.

IMRT
There are some unique aspects to commissioning IMRT compared to 3D
conformal radiation therapy (CRT). IMRT uses automated inverse
planning routines, which use iterative algorithms to yield acceptable
plans based on specified dose–volume constraints. The resulting dose
distribution can have steep gradients between the target and the organs
at risk and the commissioning tests need to reflect this added
consideration. Because IMRT could involve the summation of very many
small fields or multiple field edges, it is extremely important to ensure
that the modeling of the penumbra and the low-dose region outside the
beam is handled accurately. Furthermore, the accurate calculation of the
leakage radiation through the body, side, and end of the leaves,
especially those with curved ends, is very important to yield an accurate
penumbra (50,51). Because of these small field considerations, ICRU
Report 83 (52) points out that the Van Dyk criteria of acceptability (41)
of 3% in the high-dose region and 4% in low-dose regions for 3D CRT
may be too restrictive for IMRT in the high-dose region and
insufficiently restrictive in the low-dose region.
It should also be noted that the delivered dose distribution is
dependent on the leaf sequencing algorithm that is used to convert the
TPS-derived intensity maps to a deliverable set of MLC sequences. The
results are dependent on leaf width, leaf-travel distance, interdigitation
of leaves, and maximum field size. Using smaller MLC steps and a larger
number of intensity levels can result in many segments with small field
sizes again compounding the need for accuracy in MLC positioning and
penumbra modeling. Furthermore, there may be accelerator constraints
on the delivery of many segments each with a small number of MUs.
Because of the difficulty in measuring doses in small fields and
potential accelerator constraints, the ICRU Report 83 (52) suggests that
the use of end-to-end testing is integral to the beam commissioning
process. End-to-end testing validates the entire treatment planning
process including data collection, beam modeling, treatment planning
and delivery, data transfer from the TPS to the record-and-verify system,
and QA of the delivered absorbed dose. A typical end-to-end test
involves scanning a QA phantom, creating an IMRT plan on the image
dataset, measuring the delivered dose within the phantom, and
comparing the measurements to the calculated dose distribution (33).
The AAPM Task Group 119 provides an end-to-end testing suite for
IMRT planning (33).
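
The comparison of measured and calculated dose distributions in such end-to-end tests is commonly summarized with a gamma analysis. The one-dimensional sketch below illustrates the idea; the 3%/3 mm criteria and the profiles themselves are assumptions chosen for the example, not values taken from the text.

import numpy as np

def gamma_1d(positions_mm, dose_ref, dose_eval, dose_crit_pct=3.0, dist_crit_mm=3.0):
    """Simple 1D global gamma index of an evaluated profile against a reference profile."""
    dose_norm = dose_crit_pct / 100.0 * np.max(dose_ref)
    gammas = []
    for x_ref, d_ref in zip(positions_mm, dose_ref):
        # Gamma is the minimum combined dose/distance deviation over all evaluated points.
        dist_term = ((positions_mm - x_ref) / dist_crit_mm) ** 2
        dose_term = ((dose_eval - d_ref) / dose_norm) ** 2
        gammas.append(np.sqrt(np.min(dist_term + dose_term)))
    return np.array(gammas)

# Placeholder profiles: "measured" phantom data versus a slightly shifted "calculated" curve.
x = np.linspace(-50.0, 50.0, 101)                           # mm
measured_profile = 100.0 * np.exp(-(x / 30.0) ** 2)
calculated_profile = 100.5 * np.exp(-((x - 0.5) / 30.0) ** 2)
gamma = gamma_1d(x, measured_profile, calculated_profile)
print(f"Gamma pass rate (gamma <= 1): {100.0 * np.mean(gamma <= 1.0):.1f}%")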

Automated and Knowledge-based Planning


Automated and KBP techniques have recently been developed to
improve both treatment planning efficiency and plan quality (53–57).
Automated techniques replace repetitive steps of the manual workflow
typically completed by a planner by automatically applying predefined
site-specific parameters (structure names, beam angles, weighted
optimization objectives) to a new patient dataset. These predefined
parameters provide a standardized basis to begin the optimization
process and help minimize the variations introduced by the planner. KBP is
an additional method that aims to reduce the variation in IMRT plan
quality and improve efficiency by providing achievable, patient-specific
optimization objectives derived from a model trained with a cohort of
previously treated site-specific plans (57–60). This database of existing
treatment plans is used to create a dose prediction model that correlates
patient-specific anatomic relationships (contours) with prior dose
distributions and can be applied to future patients being treated to a
similar anatomic site. Recent research for automated, knowledge-based
contour validation has also shown promise (61).
Commissioning automated treatment planning software requires a
thorough understanding of how the program takes input data and
creates treatment plans. Ideally, predefined planning parameters as
discussed above are evaluated to ensure they correlate to an institution’s
treatment planning practice. To ensure acceptable quality is achieved,
plans generated using the automated software should be evaluated at
specific dosimetric endpoints against manually generated plans for
several patients across all treatment sites for which the software will be
clinically used. If acceptable plan quality is not achieved, the predefined
planning parameters should be adjusted until the necessary quality is
met.
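
One straightforward way to structure such an evaluation is to tabulate a few dosimetric endpoints for the automated and manual plans side by side, as sketched below; the endpoint names and dose values are hypothetical.

# Hypothetical dosimetric endpoints (Gy) for an automated plan versus the manual clinical plan.
endpoints = {
    "PTV D95%":         {"manual": 60.1, "automated": 60.3},
    "Spinal cord Dmax":  {"manual": 42.5, "automated": 41.0},
    "Parotid Dmean":     {"manual": 25.8, "automated": 24.9},
}

for name, values in endpoints.items():
    difference = values["automated"] - values["manual"]
    print(f"{name:17s} manual={values['manual']:5.1f}  automated={values['automated']:5.1f}  diff={difference:+.1f}")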
Additional considerations are necessary when commissioning KBP
software. Users have the option to commission their own local KBP
model or to utilize existing global models created by another institution.
KBP models are intrinsically dependent on the anatomic relationships
(contours), clinical trade-offs, and dosimetric endpoints of the initial
IMRT plans selected to train the model. Global models are sometimes
provided by the vendor with a detailed description of the parameters
associated with these dependencies, and each should be carefully compared to the local clinical practice before being considered for
implementation. When creating a local model, the quality and diversity
of the plans included should reflect the intended scope for which the
model will be clinically applied. Both local and global models should be
extensively validated against an independent validation cohort of
manually created clinical plans to ensure the model will meet or exceed
expected target and OAR dosimetric goals (9). If plans created with the
model are not consistently comparable or superior to manually created
plans, the model should not be selected for use within the clinical
workflow.

Image Registration
The utilization of multimodality imaging in the definition of anatomic
OARs and targets has become increasingly common in the modern
radiation therapy clinical workflow (62,63). While CT imaging still
remains the primary modality for radiation treatment planning, it is
often inadequate when attempting to accurately delineate the tumor
(64). Tumors located in the central nervous system, abdomen, pelvis,
breast, or head and neck may require MRI or ultrasound to provide high
contrast between soft tissues (65). Other sites in the thorax, abdomen,
pelvis, and head and neck benefit from metabolic information provided
from PET and SPECT imaging (7,66,67). In addition, specialized CT
scans (4D CT) may be necessary to assist in the management of tumor
motion in the thorax or abdomen (68). In order for the physician to
accurately and efficiently use the information provided by all imaging
modalities, it is necessary for all imaging datasets to be geometrically
associated via a process called image registration.
Image registration creates a vector transformation that maps specific
anatomic structures in the secondary imaging dataset to the
corresponding anatomic structures in the primary imaging dataset
(typically the planning CT). This transformation may be a rigid
(consisting of a global shift/rotation) or deformable (incorporating
relative local modifications of the secondary dataset). Once the image
datasets are registered, it is possible to map information such as soft
tissue contrast, metabolic uptake, tissue boundaries, or previously
delivered dose from the secondary datasets onto the primary CT dataset.
This combination of information from various imaging modalities is
known as image fusion. Many modern TPSs have developed integrated
image registration and fusion software. Due to their role in delineating
radiation targets and healthy tissue, the accuracy and reliability of the
registration and fusion software necessitates a thorough commissioning
process.
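
A rigid registration can be written as a rotation plus a translation applied to points of the secondary dataset. The sketch below applies a hypothetical transform (a small in-plane rotation and a couch shift) to map secondary-image points into the primary planning-CT coordinate system.

import numpy as np

def apply_rigid_transform(points_mm, rotation_deg_z, translation_mm):
    """Map points from the secondary image frame into the primary frame using a
    rotation about the z axis followed by a translation."""
    theta = np.radians(rotation_deg_z)
    rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                         [np.sin(theta),  np.cos(theta), 0.0],
                         [0.0,            0.0,           1.0]])
    return (rotation @ np.asarray(points_mm, dtype=float).T).T + np.asarray(translation_mm)

# Hypothetical transform: a 2-degree rotation about z and a small three-axis shift.
secondary_points = [[10.0, 20.0, 0.0], [0.0, 0.0, 50.0]]
primary_points = apply_rigid_transform(secondary_points, rotation_deg_z=2.0,
                                       translation_mm=[1.5, -2.0, 0.5])
print(primary_points)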
The process of validating image registration and fusion software is
relatively new. To thoroughly evaluate the accuracy and uncertainties of
any registration software, it is necessary to quantitatively compare the
generated coordinate system changes to known true changes within
baseline image datasets. These tests should be completed across all
imaging modalities using virtual geometric and anatomic phantoms and
include both rigid and deformable registration techniques (69). Virtual
phantoms allow for predetermined changes to an imaging set against
which registration algorithms can be evaluated (52). AAPM Task Group
132 describes a series of virtual phantoms and related tests to be used
during commissioning and proposes making this standard set of virtual
phantoms available via download in order to standardize the
commission process (70). Additional end-to-end tests using physical
phantoms, such as those supplied by the Imaging and Radiation
Oncology Core (IROC) in Houston, should also be conducted. A series of
typical clinical images should also be evaluated qualitatively and an
ongoing patient-specific QA process should be developed to efficiently
evaluate image registration within the treatment planning workflow.
While no formalized guidance documents currently exist, the AAPM is in
the process of developing acceptance, commissioning, and QA guidelines
(70).
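
A minimal sketch of one such commissioning check is shown below: a known rigid shift is applied to a virtual phantom, the shift recovered by the registration software is compared against it, and the result is tested against a local tolerance. The shift values and the 1-mm tolerance are hypothetical and would be replaced by the institution's own phantom data and acceptance criteria.

```python
import numpy as np

# Known ("ground truth") shift applied to the virtual phantom, in mm.
true_shift = np.array([3.0, -5.0, 2.0])
# Shift reported by the registration software under test (hypothetical result).
recovered_shift = np.array([3.2, -4.8, 2.1])

residual = recovered_shift - true_shift      # per-axis registration error (mm)
error_3d = np.linalg.norm(residual)          # overall registration error (mm)
tolerance_mm = 1.0                           # example tolerance; set per local policy
print(f"Per-axis error (mm): {residual}, 3D error = {error_3d:.2f} mm")
print("PASS" if error_3d <= tolerance_mm else "FAIL")
```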

Autosegmentation
Clinical autosegmentation algorithms significantly improve the efficiency
of the contouring process but can produce contours with small to
moderate errors (71,72). Commissioning tests for autosegmentation
algorithms should identify the frequency and magnitude of consistently
occurring deviations from the clinically accepted manual contours for
each structure, and all treatment planning staff should be familiar with
these deviations. Continuing patient-specific contour QA should be
incorporated into the clinical treatment planning workflow.
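
One commonly used metric for quantifying the agreement between an auto-segmented and a manually drawn contour is the Dice similarity coefficient; the short sketch below (a toy two-dimensional example, not the output of any particular autosegmentation product) shows how it can be computed from binary structure masks.

```python
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    """Dice similarity coefficient between two binary structure masks."""
    auto, manual = auto_mask.astype(bool), manual_mask.astype(bool)
    intersection = np.logical_and(auto, manual).sum()
    total = auto.sum() + manual.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy example: an auto-segmented contour shifted by one voxel relative to the manual one.
manual = np.zeros((10, 10), dtype=bool); manual[3:7, 3:7] = True
auto = np.zeros((10, 10), dtype=bool);   auto[3:7, 4:8] = True
print(f"Dice = {dice_coefficient(auto, manual):.2f}")  # 0.75 for this toy case
```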

Adaptive Radiation Therapy


ART seeks to account for the dosimetric impact of daily anatomical
variations that occur during treatment to ensure the initial planned dose
distribution matches the final delivered dose distribution. Online ART
requires the rapid implementation of many of the typical treatment
planning steps, including image acquisition, image registration,
anatomic contouring, dose optimization, daily/cumulative plan
evaluation, and patient-specific QA (10). While each of the hardware
and software components for the ART process may have been
independently commissioned in the normal TPS, their specific
application in the ART workflow should be evaluated (73). In order to
minimize the time needed for online ART, many of the planning steps
are automated. The accuracy and functionality of any automated tools,
such as DVH evaluation scripts, cumulative dose analysis, and
OAR/target assessment, should be validated for all treatment sites.
Finally, due to the time sensitivity of online ART, all staff members
should be extensively trained in their specific roles. End-to-end tests
of both the system and workflow should be conducted with the staff to
ensure the online ART can be accurately and efficiently completed under
the necessary time constraints (73). A thorough evaluation of the
integrated ART workflow using modern process improvement tools has
been shown to be useful in identifying potential sources of error and
should be conducted for each unique ART workflow during the
commissioning process (74).
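
As an illustration of the kind of automated tool mentioned above, the following minimal sketch computes a few DVH metrics from a dose grid and a structure mask; the dose values, mask, and metric names are hypothetical placeholders for whatever the local ART workflow actually evaluates.

```python
import numpy as np

def dvh_metrics(dose, mask, prescription):
    """Simple DVH metrics for the voxels inside a structure mask (dose in Gy)."""
    d = dose[mask]
    d95 = np.percentile(d, 5)                  # dose covering 95% of the structure volume
    v100 = np.mean(d >= prescription) * 100.0  # % of volume receiving >= prescription
    return {"D95 (Gy)": round(float(d95), 2),
            "V100% (%)": round(float(v100), 1),
            "Dmax (Gy)": round(float(d.max()), 2)}

# Hypothetical dose grid and target mask for a quick automated plan check.
dose = np.random.normal(60.0, 1.0, size=(20, 20, 20))
mask = np.zeros_like(dose, dtype=bool); mask[5:15, 5:15, 5:15] = True
print(dvh_metrics(dose, mask, prescription=60.0))
```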

Electron Beams
Good examples of specific tests for electron beams can be found in
recent working group reports (13,14,21). Tests of specific concern to
electrons relate to changes in source-to-skin distance (SSD), output factor
calculations, oblique beam incidence, and variations in output for
shaped fields. Additional tests to validate the accuracy of the calculated
dose in heterogeneous media, such as bone, fat, and lung, should also
be conducted.

Brachytherapy
Verification of brachytherapy dose calculation should be approached
similarly to the external beam tests. In this situation, however, it
becomes much more difficult to compare measurements with
calculations because of the difficulty in performing measurements over
the short distances involved in brachytherapy. The user may have to
resort to comparing calculations with previously published source data.
Relevant information can be found in various reports (13,14,75,76,77).
One unique test for brachytherapy is the assessment of anisotropy
calculations if these are provided by the system. A recent report by
Rivard et al. (78) provides enhancements to commissioning techniques
and QA of brachytherapy TPSs that use model-based dose calculation
algorithms. Additional information regarding model-based dose
calculation algorithms is included in the report of AAPM Task Group
186 (79). As previously noted, a recent joint AAPM-ESTRO Working
Group addressed the uncertainty in dose calculation accuracy for
brachytherapy TPSs (43).

Proton Therapy
Commissioning proton beam models within a TPS is often accomplished
through a combination of simulated Monte Carlo data and measured
beam data (80). Requirements for specific commissioning tests depend
on the type of proton system (dual passive scatter vs. active spot
scanning) and examples of each have been described thoroughly
(81–84). For dual passive scattering systems, validation will typically
include measuring longitudinal fluence, virtual source position, effective
source position, source size, Bragg peaks, and lateral beam profiles.
Active spot scanning may require validation of spot size, in-air lateral
profiles, and integral depth dose data. During commissioning, the lateral
and range uncertainties associated with the accuracy of the model for
the full range of treatment conditions should be carefully evaluated and
accounted for within the clinical treatment planning process (85,86).
Care must be taken to accurately correlate proton stopping power
ratios with CT numbers within a patient, and differences between
phantom stopping powers and patient tissue stopping powers should be
evaluated (87).
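
In practice this correlation is typically implemented as a piecewise-linear lookup from CT number to relative stopping power. The sketch below illustrates the idea only; the calibration points shown are invented for illustration, and a clinical curve must be derived from a stoichiometric calibration of the local CT scanner.

```python
import numpy as np

# Illustrative (not clinical) calibration points: CT number (HU) -> relative stopping power.
hu_points  = np.array([-1000.0, -200.0,   0.0,  200.0, 1500.0])
rsp_points = np.array([   0.001,   0.80,  1.00,   1.10,   1.85])

def hu_to_rsp(hu):
    """Look up relative stopping power for a CT number by linear interpolation."""
    return np.interp(hu, hu_points, rsp_points)

print(hu_to_rsp(-50.0))   # soft tissue-like voxel
print(hu_to_rsp(900.0))   # bone-like voxel
```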

Commissioning of Other Components


Modern TPSs involve many commissioning aspects other than those
related to dose calculation. Examples of other types of issues that must
be considered and verified are shown in Table 8.6. The ability of the TPS
to accurately handle tissue heterogeneities when calculating dose is of
particular importance and validation requires the use of a specialized
phantom (16). Other nondosimetric parameters, such as image transfer
and contouring validation, can also be evaluated using specialized
phantoms to specifically address some commissioning and QA issues
(16,88).

TABLE 8.6 Commissioning of Other Components


TABLE 8.7 Techniques Requiring Special Workup
Special Techniques
Special and individualized techniques require their own unique
evaluation. Examples of special techniques that require additional
workup and commissioning are summarized in Table 8.7. In addition,
there are now a number of new technologies that have specialized TPSs
designed specifically for them. Examples include helical
tomotherapy (45), robotic radiation therapy (89), onboard MRI-guided
radiation therapy (46), and multiple-cobalt-source, small-field radiation
therapy used mostly for neurologic sites (47).

Quality Assurance and Quality Control


QC of a product or process involves three steps: (a) the measurement of
the performance, (b) the comparison of the performance with a given
standard, and (c) the actions necessary to keep or regain the standard.
The commissioning process of the TPS provides the standard for
comparison. Once the TPS is fully commissioned, a QA program should
be implemented to ensure the system is able to remain within the
standards determined during commissioning. However, the problems
associated with maintaining consistency and quality within a TPS are
quite different from the problems associated with QA of a CT simulator
or accelerator, which have electrical and mechanical components that
can wear and change with time.
Closely associated with QA is risk management. Risk management
consists of four components: (a) identifying the possible sources of risk
of failure or malfunction, (b) analyzing the frequency of incidents of
failure or malfunction, (c) taking corrective action to minimize such
failure, and (d) monitoring the outcome of such changes. Thus, to
develop an appropriate QA program for treatment planning computers,
an assessment of the likelihood of failure helps focus on the issues of
concern. IAEA TRS-430 (14) provides a good summary of reported errors
associated with radiation treatment planning. For the “accidents” (major
clinically significant errors) associated with TPSs, they determined that
the key contributory factors include the following:
a. A lack of understanding of the TPS
b. A lack of appropriate commissioning (no comprehensive tests)
c. A lack of independent calculation checks

The major issues related to treatment planning errors were


summarized by four key words: (a) education, (b) verification, (c)
documentation, and (d) communication.
The development of a thorough QA program is a compromise between
cost and benefit. An appropriate program has specific QCs to identify
and mitigate high-probability and high-impact errors without being
excessively burdensome on a facility’s resources. However, as new
technology is implemented in the clinic, it can be challenging to identify
the appropriate QC tests that provide the necessary balance. A careful
evaluation of a new process should be conducted before determining
changes and additions to any QA program.
AAPM Task Group 100 (90) is attempting to deal with the issue of
ever-increasing QA activity as new and more complex technologies
evolve. The central idea of TG-100 is to transition from traditional
device-centered QA to a more comprehensive risk-based, process-
centered approach. To implement a process-centered QA program, the
task group describes three techniques that have been historically used in
engineering circles: process tree mapping, failure mode and effects
analysis (FMEA), and fault tree analysis (FTA). Process tree mapping is a
visual illustration of all the relationships of each step for a specific
process. It tracks the physical and temporal flow of each step, from start
to finish, and is useful in easily identifying and tracking weaknesses and
error propagation. FMEA is a prospective approach to QA. When used in
conjunction with a process tree map, it assesses the potential risks
(failure modes), likelihood of errors, and impact of such errors for each
step defined within the process. For each step, there may be many
potential failure modes, and each one may have several potential causes
and outcomes. For each potential cause of failure, values are assigned in
three categories: O, the probability that a specific cause will result in a
failure mode; D, the probability that the failure mode resulting from the
specific cause will go undetected; and S, the severity of the effects
resulting from a specific failure mode should it go undetected
throughout treatment. Convention uses numbers between 1 and 10.
Category O ranges from 1 (unlikely failure, <1 in 104) to 10 (highly
likely, >5% of the time). Category D ranges from 1 (undetected only
<0.01% of the time) to 10 (undetected >20% of the time). Category S
ranges from 1 (no appreciable danger) to 10 (catastrophic if persisting
through treatment). The product of these three indices forms the risk
priority number (RPN = O × S × D). A complete FMEA applied to
an entire process helps develop a fault tree analysis, which is a
visualization of all errors at each step and associated root causes for each
error. By applying an FTA, it is possible to determine appropriate QC
measures that can be implemented at the necessary steps to accurately
mitigate identified root causes of failure. The prospective approach
described by TG-100 provides guidelines to determine an efficient
application of resources within a QA program that effectively minimizes
all major sources of error. Examples of FMEA analyses have been
published for the external beam radiation therapy process (91), dynamic
MLC tracking systems (92), and intraoperative radiation therapy (93).
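
A minimal sketch of the RPN scoring described above is given below; the failure modes and the O, S, and D scores are purely hypothetical and serve only to show how the ranking is produced.

```python
# Hypothetical failure modes in the planning workflow with O, S, and D scores (1-10 scale).
failure_modes = [
    # (description,                              O,  S,  D)
    ("Wrong CT dataset selected for planning",    3,  8,  4),
    ("Outdated beam data file used by the TPS",   2, 10,  7),
    ("Incorrect isocenter transferred to record", 4,  7,  3),
]

# Rank by risk priority number, RPN = O x S x D.
ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for desc, o, s, d in ranked:
    print(f"RPN = {o * s * d:3d}  (O={o}, S={s}, D={d})  {desc}")
```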
A necessary element in developing a prospective QA program is the
inclusion of an electronic incident event reporting system (94,95). A
department level reporting system provides a platform for all employees
to voluntarily and anonymously report events, the severity of which
ranges from minor miscommunication to near-miss to severe treatment
error. Submitted events can be analyzed and previously unidentified
patterns can be systematically addressed prior to the occurrence of
serious errors (96). Several major groups have recently begun
development of national/international reporting systems (ASTRO’s RO-
ILS and IAEA’s SAFRON), which will provide the possibility of shared
learning across all participating institutions.

Program and System Documentation and Training


At the most basic level of QA, the user must be aware of what the
computer programs are doing when any specific option is requested.
Even if the programs are perfectly accurate, any error in data entry
results in an error in the output. Thus, the user must have adequate
information in terms of manuals and online help to aid in the
commissioning and operational process of the TPS. The types of
documentation that should be available are listed by Van Dyk et al. (41)
and in Table A1-1 of AAPM TG53 (13).
A significant amount of documentation occurs during the
commissioning process and encompasses all systems and software that
were tested. Appropriate documentation should include detailed
descriptions of all tests run, origin of data used in the TPS, and baselines
for the QA program. Documentation detailing the treatment
planning procedures should also be developed during the commissioning
process, and all appropriate staff should be provided training to fulfill
their roles.

User Training
Closely associated with proper manuals and information is user training.
The user must be clearly aware of normalization procedures, dose
calculation algorithms, image display and reconstruction procedures,
and program calculation capabilities and limitations. This training can
be carried out on at least three levels: (a) vendor training courses, (b) in-
house staff training, and (c) special training courses set up by user
groups or third-party software vendors.
TPS training has traditionally been formatted for dosimetrists and
physicists and often only limited training is available to physicians.
Physicians should be able to effectively operate the simple tools of any
planning system (setting beam parameters, defining field sizes, contour
tools). Beyond the basic functionality, in order to accurately evaluate a
treatment plan, physicians need to be aware of inaccuracies and
limitations of the planning system. This includes the inherent
inaccuracies of the dose calculation algorithms (central axis, buildup
region, penumbra, heterogeneities) and clinical situations in which these
inaccuracies are a common factor. In addition, physicians should be
aware of the capabilities and limitations of IMRT optimization
algorithms to achieve organ/target specific dosimetric planning goals
and the common clinical trade-offs. Finally, physicians should be able to
evaluate the quality of image registration/fusion and understand the
processes of rigid and deformable image registration.

Reproducibility Tests
A normally functioning computer is unlikely to generate small changes
in output. Computer system hardware malfunctions are likely to be
obvious. A more probable issue of concern is inadvertent access by
treatment planners to the basic radiation or machine data files. This can
result in changes to the accuracy of calculations without the user being
aware that changes have taken place.
For inadvertent software or hardware changes, a binary comparison of
all the software and data files can test whether any changes have
occurred. If changes are found, the details of the changes must be
assessed and a partial system recommissioning may have to be
implemented. Alternatively, as described in the IAEA report (15), a
select subset of the vendor type tests should be performed to
demonstrate consistency with previous results.
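
One simple way to implement such a binary comparison is to record file checksums at commissioning and compare them at each QA interval; the sketch below illustrates the idea, with the directory paths being hypothetical placeholders for the local beam-data location.

```python
import hashlib
from pathlib import Path

def file_checksums(directory):
    """Return a SHA-256 checksum for every file in a data directory."""
    return {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(Path(directory).iterdir()) if p.is_file()}

# Compare current beam-data files against checksums recorded at commissioning.
baseline = file_checksums("/tps/beam_data_baseline")   # hypothetical paths
current = file_checksums("/tps/beam_data")
changed = [name for name, digest in current.items() if baseline.get(name) != digest]
print("Changed or new files:", changed or "none")
```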
From a risk management perspective, other possible sources of error
include intended or unintended changes in software or data files. These
can occur within the TPS or in the computers associated with data
generation, such as CT scanners and water phantom systems. Software
upgrades in these external computer systems can result in changes to the
data entered into the TPS.
To aid in the assessment of any software changes, a series of
reproducibility tests of the dose calculation algorithms, the image
display algorithms, and the plan evaluation tools should be undertaken
on a regular basis. Examples of such reproducibility tests can be found in
Van Dyk et al. (41) and in the reports from the AAPM (13) and the IAEA
(14). Users should develop their own tests based on their particular TPS
and what components of the hardware, software, and data files have any
likelihood of being changed.

Patient-Specific Tests
Since no system of computer programs is error-free, nor are users of such
programs perfect, routine inspection of each treatment plan is a
requirement for proper QA. Calculation of the external beam dose
usually consists of two components: (a) calculation of a relative dose
distribution, and (b) calculation of the machine output in terms of MUs.
Both of these components require a check by a participant independent
of the first calculation. For relative dose distributions, secondary checks,
either conducted manually or with third-party software, can be
performed by choosing a specific point, usually on the central ray, and
calculating a dose estimate for each of the beams using simplified tables
to generate the results. These checks should agree to about 2% to 3% of
the computer-calculated values in regions of uniform dose delivery and
relatively simple inhomogeneity corrections (97). More complicated
plans have to be evaluated on an individual basis to assess the trends of
the numerical values. Similarly, the machine setting calculation should
be checked independently of the first calculation.
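
A minimal sketch of how such an independent check might be flagged against the 2% to 3% agreement criterion is shown below; the dose values and the 3% tolerance are hypothetical, and the actual criterion should follow local policy and reports such as AAPM TG-114 (97).

```python
def check_point_dose(tps_dose_cgy, independent_dose_cgy, tolerance=0.03):
    """Compare an independent point-dose estimate with the TPS value."""
    diff = (independent_dose_cgy - tps_dose_cgy) / tps_dose_cgy
    return diff, ("PASS" if abs(diff) <= tolerance else "REVIEW")

# Hypothetical values: TPS reports 200.0 cGy at the check point; the independent check gives 196.5 cGy.
diff, status = check_point_dose(tps_dose_cgy=200.0, independent_dose_cgy=196.5)
print(f"Difference = {diff:+.1%} -> {status}")
```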
With the advent of more complex segmented or dynamic conformal
therapy and IMRT, such manual checks become very difficult if not
impossible. In these situations, the absolute dose is determined as part of
the planning process, with the MUs being defined for each component of
the treatment. QC checks must be developed for each individual
technique. Georg et al. (98) suggested action levels between ±3% and
±5%, depending on the treatment site and treatment technique. They
also conclude that independent calculations may be used to replace
experimental dose verification once the IMRT program is mature (99).
Similarly, ICRU Report 83 (52) describes the use of one or more of the
following methods for patient-specific QA:

• Measurement of the intensity pattern from individual beams for a


specific patient
• Measurement of absorbed dose in phantom of the beam-intensity
pattern planned for a specific patient
• Independent absorbed dose calculations for the patient-specific beam
intensity pattern
• In vivo dosimetry
For brachytherapy, manual single-point calculations are more difficult,
and therefore a check can be performed with one of the conventional
systems of dosage calculations, such as the Manchester system. This
approach can be used to make crude checks to an accuracy of about
10%. Again, assessing trends is crucial in evaluating the quality of the
calculation.

In vitro and in vivo Dosimetry Checks


As a final check of the quality of the overall treatment planning process,
it is useful to perform measurements using special-purpose or
anthropomorphic phantoms (in vitro dosimetry) or to perform
measurements on or in the patient with the patient in treatment position
(in vivo dosimetry). In vitro dosimetry is an important component of the
implementation of any new treatment technique or clinical procedure.
Generally, it is performed with thermoluminescent dosimetry (TLD) in a
phantom containing human-like tissue densities and composition, such
as an anthropomorphic phantom. More recently, optically stimulated
luminescence (OSL) is being used in place of TLD (100). Diodes and
metal-oxide semiconductor field-effect transistor (MOSFET) (101)
dosimetry systems are now readily available and provide instant readout
capability. This type of dosimetry ensures that the basic procedures
associated with a new treatment technique are in agreement within a
predetermined range of accuracy. A report by Dunscombe et al. (102)
gives a good overview of the use of an anthropomorphic phantom to
evaluate the quality of treatment planning computer systems. While such
measurements provided a good indication of the accuracy of the dose delivery
process near the center of the target volume, differences between measurements
and calculations away from the central region were difficult to interpret: it
was unclear whether the calculations were off, the measurements were off, or
the beam placement was inaccurate. Thus, in vitro dosimetry must be
established in such a manner that differences between measurements
and calculations can be readily interpreted.
Similar concerns of interpretation also apply to in vivo dosimetry
(103). There is a tendency by radiation oncologists to request in vivo
measurements to give them an assurance that the dose delivery process
is accurate, especially in regions where there is concern about critical
structures such as the eyes, gonads, or a fetus. Sometimes, these regions
are close to the edge of the radiation beams. Under such circumstances,
small changes in beam alignment can generate large changes in
measured dose, leaving ambiguity in the interpretation of the results.
These interpretation difficulties should be clearly explained to the
radiation oncologist requesting the measurements. Better comparisons of
calculations and measurements can be made in regions where doses are
not changing as rapidly—either on the entrance or exit surfaces or, if
possible, by placing dosimeters in body cavities such as the mouth,
trachea, esophagus, vagina, uterus, or rectum. In vivo dosimetry is a
recommended check under some treatment conditions and it may
provide an opportunity to mitigate treatment errors (104), but it should
not replace pretreatment in vitro phantom measurements for more
complex treatment techniques, such as IMRT. AAPM Task Group 158 has
been charged with assessing the current status of in vivo dosimetry for
nontarget, out-of-field exposures and with formulating recommendations for
methods to improve measurements and calculations for doses outside the
treatment volume (105). The IAEA has also produced a recent review on
in vivo dosimetry (106).

Quality Audits
It is always useful to review the QA activities of individual institutions.
Recent years have seen the public reporting of various errors or
“accidents” in radiation therapy. While such errors can have a
devastating effect on individual patients, the actual error rate in
radiation therapy is very low. However, it is the responsibility of
members of the radiation therapy team to ensure that proper procedures
are in place to minimize such errors. As a first approach, an institutional
self-auditing process is beneficial. This is best done in the context of a
QA committee that should exist in every radiation therapy department
(107). External audits have proven to be extremely beneficial for finding
inadvertent deviations from acceptable practice. The IROC in Houston,
Texas, has done this for years for institutions participating in clinical
trials involved with the Radiation Therapy Oncology Group (RTOG)
(12,108). Dosimetry intercomparisons are also useful, especially in the
development of new techniques such as IMRT, and provide a means to
standardize the quality of radiation treatment facilities (12,109).
The IAEA has developed an external audit process, which involves a
review of the total treatment process (110). They do this through the use
of a QA team in radiation oncology (QUATRO), which consists of a
radiation oncologist, medical physicist, radiation therapist, and
sometimes a specialist in radiation protection. A similar external quality
audit has been incorporated into the ASTRO APEx and ACR radiation
therapy accreditation processes (111). Dosimetric intercomparisons and
external audits provide a substantial benefit to improve the overall
quality of radiation therapy across all facilities.

QA of Total Radiation Therapy Planning Process


As indicated earlier, the total radiation therapy planning process consists
of many steps, of which computerized treatment planning is only one
component. For the computer plan to be implemented accurately, it is
important that the basic patient data and image information be derived
accurately. This requires QC of the imagers generating the data (87). It
also requires assurance that the patient is positioned precisely in a
manner that will be readily reproducible on subsequent simulation and
treatment setups.
Geometric accuracy, before planning, begins with proper
immobilization and localization of the patient in treatment position.
Accuracy during planning includes assessing uncertainties associated
with image transfer, image registration, target volume and normal tissue
localization, beam placement, and dose calculations. To ensure that the
plan can be implemented on the therapy unit, the planner must verify
the correct transfer of plan data to the electronic medical record system
and that geometric arrangements are physically achievable. QA after
planning may include the comparison of the radiographs with DRRs to
verify that the field shapes are correct, as well as confirmation that the
internal anatomy is located accurately within the fields. Finally,
accuracy can be ensured at treatment by verifying that the setup
parameters such as source-to-surface distances for each field are
consistent with the planned parameters. In addition, daily online
imaging, including kV CBCT, MVCT, and kV 2D imaging, can be used to
assure geometric accuracy of the beam positioning and provide a
confirmation of the location of the internal anatomy of the patient,
immediately before each dose fraction is delivered.
QA Administration
An important component of any QA program is its effective organization
and administration. Any QA program should be carried out according to
a predetermined schedule and ongoing records of the activities and the
results should be maintained. Proper administration requires that one
person, usually a qualified medical physicist, be responsible for the QA
program. Although this individual does not necessarily have to carry out
all the tests and their evaluations, he or she must ensure that there is
written documentation on the QA process, that the tests are carried out
according to their specified frequency, and that appropriate actions are
taken as needed.
As TPSs become networked into clusters with various planning and
target volume delineation stations, servers, and peripheral devices,
system management becomes an integral component of the entire QA of
the TPS. This management includes maintaining an adequate check on
system security and limiting user access not only to the system, but also
to specific software and data file modifications. It is important that the
radiation data files not be inadvertently changed and that patient
confidentiality be fully maintained.
To avoid the possibility of any undesired loss of information, a regular
schedule for system backup is essential (13). This may include daily
backups of the most recent patient additions and changes, weekly
backups of all patient information, and monthly backups of the entire
TPS. Backups are also warranted immediately after any major changes to
the software of the system.
In addition to standard backups, it may also be desirable to archive
specific patient information, especially if patients are to be grouped for
study purposes. In some cases, patient data may have to be forwarded to
clinical trial groups such as the RTOG, which accepts such information
through the internet. However, patient data must also be archived in
case the patient comes back for retreatment.
Proper QA of the modern 3D TPS is a time-consuming process.
Adequate staff resources must be allocated to ensure that the QA is
completed in an appropriate manner.

SUMMARY
QA programs for radiation therapy machines, especially with the clinical
implementation of high-energy accelerators, have been well defined for
many years. Formalized (CT) simulator QA is a more recent phenomenon
(88). While redundant checks for MU and time calculations have also
been standard practice, the formalization of a QA program for treatment
planning computers occurred more recently. This is partly due to the
tremendous variation in TPSs and their algorithms and partly to the
complexity of treatment planning QA, since it involves multiple facets
and is inherently centered on the entire process and not specific
equipment. Because of these complexities, it is clear that a
comprehensive program depends on institutional procedures, the type of
planning system in use, and the entire treatment planning workflow.
Treatment planning errors can be minimized with a good QA program.
As indicated earlier in this chapter, the major issues that relate to
treatment planning errors can be summarized by four key words: (a)
education, (b) verification, (c) documentation, and (d) communication
(14). Education is required not only at the technical and professional
level in terms of the use of the TPS, but also at the organizational level
with respect to institutional policies and procedures. A very important
component of education relates to understanding the software
capabilities and limitations. Secondary dose verification of TPS-produced
plans is also important, as many reported errors involved a lack of an
appropriate independent secondary check of the treatment plan or dose
calculation. Clear documentation is required both of each patient’s
individual treatment plan and of departmental policies and procedures.
Finally, communication among staff members is essential for all aspects
of treatment, since various individuals at various professional levels are
involved in the treatment process. Poor communication was a key factor
in a number of the errors reported that relate to treatment planning.
A carefully executed program of treatment planning computer
commissioning and ongoing QA assessment provides users with
confidence that their work is being carried out accurately. Furthermore,
it gives the user a clear understanding of the TPS’s capabilities and
limitations. Finally, the quality of the delivered radiation dose to the
patient depends on the quality of all the steps in the treatment planning
process, including patient imaging, simulation, target volume
delineation, treatment planning, treatment verification, and quality
factors associated with dose delivery and related to the radiation therapy
machine. Thus, it is imperative that the medical physicist, as well as all
other staff associated with the radiation therapy process, be actively
involved in the QA process at all stages. This provides both full
awareness of the capabilities and limitations of each step of the process
and a mechanism for decision making about any corrective action
deemed to be necessary.

ACKNOWLEDGMENTS
Contributions of Dr. Jacob Van Dyk to previous editions of this chapter are
still present throughout the content of the current edition, as his expertise in
TPS commissioning and QA is unparalleled. Editorial guidance provided by
Dr. Sasa Mutic and Dr. Eric Klein greatly aided in shaping the addition of
new content and the update of existing material. I would also like to thank
my wife, Kate Kavanaugh, my eternal ghost proof reader, for providing
suggestions of structural changes that make this chapter infinitely more
readable.

KEY POINTS
• Key contributing factors for major treatment planning system (TPS)
accidents typically involve at least one of the following:
• Lack of understanding of the TPS.
• Lack of appropriate commissioning.
• Lack of independent calculation checks.

• A rigorous quality assurance (QA) program for TPS is necessary to


ensure an accurate delivery of the treatment intent. Such a QA
program consists of the following components:
• System specifications: definitions of the capabilities of the
software and the accuracy of the dose calculations.
• Acceptance testing: assessment of the hardware and software
to ensure the accuracy of all system specifications.
• Commissioning: acquisition of all data needed to bring the
system into clinical service.
■ Radiation beam data can be acquired onsite for each
individual radiation producing device or be a validated
universal set of golden beam data (GBD).
■ Baselines of performance standards used for quality
controls are determined during commissioning.
• Quality controls: systematic actions to ensure specific
performance standards are maintained.

• The uncertainty of the calculated dose depends on the accuracy of


the beam data used during commissioning and the limitations of the
calculation algorithms.
• For external photon beam calculations in a homogeneous
phantom, this uncertainty is typically smallest on the central
axis and largest in the buildup region.
• Criteria of acceptability should be based on what is realistically
achievable and include statements of confidence.
• Commissioning of a TPS typically requires basic radiation data,
which may include tissue–air ratios, tissue–phantom ratios,
percentage depth doses, cross-beam profiles, and output factors.
• Additional measurements are needed for any ancillary devices
such as wedges, blocks, and MLCs.
• End-to-end validation of the TPS and treatment workflow should be
conducted for each unique treatment modality. End-to-end tests
typically include:
• Acquiring a CT simulation of a QA phantom.
• Creating a treatment plan on the image dataset.
• Validating the calculated dose against the measured dose.

• Modern TPSs include new functionalities such as image


registration, autosegmentation, automated planning/KBP, and
adaptive radiation therapy. While specific commissioning and
quality assurance tasks differ for each functionality, end-to-end
tests and validations against the existing manual workflow should
be included prior to clinical implementation.

• Quality assurance for the total radiation therapy planning process


incorporates all steps from the initial simulation to the treatment
delivery. Quality controls need to be developed for each aspect of
the treatment planning workflow. Several tools that can aid in
developing a strong QA program include:
• An FMEA analysis provides the framework to identify key steps
in the treatment planning workflow at which implementing
quality controls will provide the greatest utility.
• Reproducibility tests of the dose calculation, image display, and
plan evaluation tools are useful quality controls that easily
determine that no significant changes have occurred to the
planning system hardware/software.
• In vivo and in vitro patient-specific checks provide a useful final
validation of the dose distribution and can identify gross errors
that may result in harm to the patient.
• Quality audits of treatment planning process, either conducted
internally or by an external third party, can identify weaknesses
prior to implementing a new planning system, treatment
modality, or treatment technique. Such quality audits can
incorporate end-to-end tests to validate the entire treatment
planning workflow.
• A proper training program for all staff should be developed or
re-evaluated during the commissioning process. Annual
credentialing should be considered to ensure everyone is up-
to-date on any new changes that may have been implemented.

QUESTIONS
1. Dose calculations for an en face photon beam in a homogeneous
phantom exhibit the highest absolute uncertainty in which
region?
A. Central axis after a depth of maximum dose
B. Lateral penumbra
C. Buildup
D. Out of field
2. When upgrading a TPS, it is important to complete the following
tests except:
A. End-to-end tests
B. In-phantom patient-specific dose measurements
C. Third-party linear accelerator output audits
D. Reproducibility tests of basic dose calculations
3. Implementation of effective quality controls with the treatment
planning process is aided by a prospective quantitative technique
that assesses potential risks, likelihood of errors, and impact of
such errors. This technique is known as:
A. Process Tree Mapping
B. Fault Tree Analysis
C. Risk Management
D. Failure Modes and Effects Analysis
4. The advantages of using golden beam data when commissioning a
treatment planning system include all of the following except:
A. GBD accounts for the inherent differences that exist between
individual linear accelerators.
B. GBD allows for much of the TPS commissioning process to be
“pre-packaged” and minimizes the chance of gross systematic
errors arising from the input of incorrect data.
C. GBD standardizes the delivery quality of all linear accelerators
from a specific vendor.
D. GBD eliminates the possibility of poor quality commissioning
measurements being used for the basic radiation data needed
to define beam models.
ANSWERS
1. C Uncertainties in the dose calculation are dependent on the
accuracy of the measured data used to create the beam
model and the inherent accuracy of the dose calculation
algorithm. Both exhibit the lowest uncertainty on the
central axis. Measurements in the buildup region are
inherently challenging and vary greatly with detectors
typically available to the physicist acquiring the basic
radiation data. In addition, many dose calculation
algorithms do not handle the physics of electron
contamination very well.
2. C End-to-end tests, patient specific QA, and reproducibility
tests all validate either the integrity of the dose calculation
algorithms or the planning workflow. As a TPS
upgrade does not impact the linear accelerator output and
reproducibility tests will confirm minimal changes to the
basic beam data, third-party output audits are
unnecessary.
3. D While process tree mapping and fault tree analysis can be
useful in determining effective quality controls, FMEA
provides the quantitative framework to examine the
balance between risks, probability of occurrence, and
impact.
4. A Golden beam data are intended to standardize the basic
radiation data used during commissioning; thus, they do
not account for small local variations that may exist
between linear accelerators. Validation measurements of
the basic radiation data should be conducted and
compared to the GBD to determine if any differences exist.

REFERENCES
1. Peters LJ, O’Sullivan B, Giralt J, et al. Critical impact of radiotherapy
protocol compliance and quality in the treatment of advanced head
and neck cancer: results from TROG 02.02. J Clin Oncol.
2010;28(18):2996–3001.
2. Bogdanich W. The Radiation Boom. As Technology Surges, Radiation
Safeguards Lag. New York, NY: New York Times; 2010:A1, New York
edition.
3. Bogdanich W. The Radiation Boom. Radiation Offers New Cures, and
Ways to do Harm. New York Times; 2010:A1, New York edition.
4. Klein EE, Hanley J, Bayouth J, et al. Task Group 142 report: quality
assurance of medical accelerators. Med Phys. 2009;36(9):4197–4212.
5. Wu Q, Xing L, Ezzell G, et al. Inverse treatment planning. In: Van
Dyk J, ed. The Modern Technology of Radiation Oncology: A
Compendium For Medical Physicists And Radiation Oncologists. Volume
2. Madison, WI: Medical Physics Publishing; 2005:131–183.
6. Xia P, Verhey LJ. Intensity-modulated radiation therapy. In: Van Dyk
J, ed. The Modern Technology of Radiation Oncology: A Compendium
for Medical Physicists and Radiation Oncologists. Volume 2. Madison,
WI: Medical Physics Publishing; 2005:221–258.
7. Jeraj R, Bowen S, Jallow N, et al. Molecular imaging in radiation
oncology. In: Van Dyk J, ed. The Modern Technology of Radiation
Oncology: A Compendium for Medical Physicists and Radiation
Oncologists. Volume 3. Madison, WI: Medical Physics Publishing;
2013:25–58.
8. Yorke ED, Mechalakos JG, Rosenzweig KE. Dose-volume
considerations: an update for use in treatment planning. In: Van Dyk
J, ed. The Modern Technology of Radiation Oncology: A Compendium
for Medical Physicists And Radiation Oncologists. Volume 2.
Madison, WI: Medical Physics Publishing; 2013:59–90.
9. Tol JP, Delaney AR, Dahele M, et al. Evaluation of a knowledge-
based planning solution for head and neck cancer. Int J Radiat Oncol
Biol Phys. 2015;91(3):612–620.
10. Wu QJ, Li T, Wu Q, et al. Adaptive radiation therapy: technical
components and clinical applications. Cancer J. 2011;17(3):182–
189.
11. Shafiq J, Barton M, Noble D, et al. An international review of
patient safety measures in radiotherapy practice. Radiother Oncol.
2009;92(1):15–21.
12. Gershkevitsh E, Pesznyak C, Petrovic B, et al. Dosimetric inter-
institution comparison in European radiotherapy centres: results of
IAEA supported treatment planning system audit. Acta Oncol.
2014;53;628–636.
13. Fraass B, Doppke K, Hunt M, et al. American Association of
Physicists in Medicine Radiation Therapy Committee Task Group
53: quality assurance for clinical radiotherapy treatment planning.
Med Phys. 1998;25:1773–1829.
14. Van Dyk J, Rosenwald J-C, Fraass B, et al. Commissioning and
Quality Assurance of Computerized Planning Systems for Radiation
Treatment of Cancer. IAEA TRS-430. Vienna, Austria: International
Atomic Energy Agency; 2004.
15. International Atomic Energy Agency. IAEA-TECDOC-1540.
Specification and Acceptance Testing of Radiotherapy Treatment
Planning Systems. Vienna, Austria: International Atomic Energy
Agency; 2007.
16. International Atomic Energy Agency. IAEA-TECDOC-1583:
Commissioning of Radiotherapy Treatment Planning Systems: Testing for
Typical External Beam Treatment Techniques. Vienna, Austria:
International Atomic Energy Agency; 2008.
17. Cho SH, Vassiliev ON, Lee S, et al. Reference photon dosimetry data
and reference phase space data for the 6 MV photon beam from
Varian Clinac 2100 series linear accelerators. Med Phys. 2005;
32:137–148.
18. Das IJ, Cheng C-W, Watts, RJ, et al. Accelerator beam data
commissioning equipment and procedures: Report of the TG-106 of
the Therapy Physics Committee of the AAPM. Med Phys. 2008;
35(9):4186–4215.
19. Das IJ, Njeh CF. Point/Counterpoint: Vendor provided machine data
should never be used as a substitute for fully commissioning a linear
accelerator. Med Phys. 2012;39:569–573.
20. Kutcher GJ, Coia L, Gillin M, et al. Comprehensive QA for radiation
oncology: report of AAPM Radiation Therapy Committee Task
Group 40. Med Phys. 1994;21:581–618.
21. Brahme A, ed. Accuracy requirements and quality assurance of
external beam therapy with photons and electrons. Acta Oncol.
1988;27(Suppl 1):5–76.
22. Williamson JF, Thomadsen BR, eds. Quality assurance for radiation
therapy: the challenges of advanced technologies symposium. Int J
Radiat Oncol Biol Phys. 2008;72(Suppl 1):S1–S214.
23. Wheatley BM. An instrument for dosage estimation with fields of
any size and any shape. Brit J Radiol. 1951;24:388–391.
24. Cunningham JR. The Gordon Richards memorial lecture: the
stampede to compute: computers in radiotherapy. J Can Assoc
Radiol. 1971;22:242–251.
25. Tsien KC. The application of automatic computing machines to
radiation treatment planning. Brit J Radiol. 1955;28:432–439.
26. Power WE, Korba A, Purdy JA, et al. Dose profiles in treatment
planning. Radiology. 1976;121(3 Pt. 1):741–742.
27. The use of computers in therapeutic radiology. Special report no. 1.
Symposium Proceedings of First International Conference of
Computers in Radiotherapy, Cambridge, UK, 1966. London: British
Institute of Radiology; 1967.
28. Haworth A, Kron T, ed. Proceedings of the XVIIth International
Conference on the Use of Computers in Radiotherapy (ICCR 2013).
Melbourne, Australia. Journal of Physics: Conference Series 489;
2014.
29. Moore KL, Kagadis GC, McNutt TR, et al. Vision 20/20: Automation
and advanced computing in clinical radiation oncology. Med Phys.
2014;41(1):010901.
30. Mijnheer B, Olszewska A, Fiorino C, et al. Quality Assurance of
Treatment Planning Systems: Practical Examples of Non-IMRT Photon
Beams. Brussels: European Society of Therapeutic Radiation
Oncology (ESTRO); 2004.
31. Bruinvis IAD, Keus RB, Lenglet WJM, et al. Quality assurance of 3D
treatment planning systems for external photon and electron beams.
Report 15 of the Netherlands Commission on Radiation Dosimetry.
Delft, The Netherlands: Netherlands Commission on Radiation
Dosimetry (NCS); 2006.
32. International Electrotechnical Commission (IEC). Medical electrical
equipment—requirements for the safety of radiotherapy treatment
planning systems. IEC 62083 (2000–11). Geneva: International
Electrotechnical Commission; 2000.
33. Ezzell GA, Burmeister JW, Dogan N, et al. IMRT commissioning:
multiple institution planning and dosimetry comparisons, a report
from AAPM Task Group No. 119. Med Phys. 2009; 36(11):5359–
5373.
34. International Commission on Radiation Units and Measurements.
ICRU Report 62. Prescribing, Recording, and Reporting Photon Beam
Therapy (Supplement to ICRU Report 50). Bethesda, MD: ICRU; 1999.
35. Verhey L, Bentel G. Patient immobilization. In: Van Dyk J, ed. The
modern technology of radiation oncology: a compendium for medical
physicists and radiation oncologists. Madison, WI: Medical Physics
Publishing; 1999:53–94.
36. Siebers JV, Keall PJ, Kawrakow I. Monte Carlo dose calculations for
external beam radiation therapy. In: Van Dyk J, ed. The modern
technology of radiation oncology: a compendium for medical physicists
and radiation oncologists. Volume 2. Madison, WI: Medical Physics
Publishing; 2005:91–130.
37. Chetty IJ, Curran B, Cygler JE, et al. Report of the AAPM Task
Group No. 105: issues associated with clinical implementation of
Monte Carlo-based photon and electron external beam treatment
planning. Med Phys. 2007;34:4818–4853.
38. Papanikolaou N, Battista JJ, Boyer AL, et al. Tissue Inhomogeneity
Corrections for Megavoltage Photon Beams. Report by Task Group 65 of
the Radiation Therapy Committee of the American Association of
Physicists in Medicine. AAPM Report 85. Madison, WI: Medical
Physics Publishing; 2004.
39. Rivard MJ, Venselaar JL, Beaulieu L. The evolution of
brachytherapy treatment planning. Med Phys. 2009;36:2136–2153.
40. International Commission on Radiation Units and Measurements.
ICRU Report 42: Use of Computers in External Beam Radiotherapy
Procedures with High-Energy Photons and Electrons. Bethesda, MD:
International Commission On Radiation Units and Measurements;
1987.
41. Van Dyk J, Barnett RB, Cygler JE, et al. Commissioning and quality
assurance of treatment planning computers. Int J Radiat Oncol Biol
Phys. 1993;26:261–273.
42. Venselaar J, Welleweerd H, Mijnheer B. Tolerances for the accuracy
of photon beam dose calculations of treatment planning systems.
Radiother Oncol. 2001;60:191–201.
43. DeWerd LA, Ibbott GS, Meigooni AS, et al. A dosimetric uncertainty
analysis for photon-emitting brachytherapy sources: report of AAPM
Task Group No. 138 and GEC-ESTRO. Med Phys. 2011;38:782–801.
44. Venselaar J, Welleweerd H. Application of a test package in an
intercomparison of the photon dose calculation performance of
treatment planning systems used in a clinical setting. Radiother
Oncol. 2001;60:203–213.
45. Mackie TR. History of tomotherapy. Phys Med Biol. 2006;51:R427–
R453.
46. Mutic S, Dempsey JF. The ViewRay system: magnetic resonance-
guided and controlled radiotherapy. Semin Radiat Oncol.
2014;24:196–199.
47. Wowra B, Muacevic A, Jess-Hempen A, et al. Safety and efficacy of
outpatient gamma knife radiosurgery for multiple cerebral
metastases. Expert Rev Neurother. 2004;4:673–679.
48. Zhu TC, Palta JR. Electron contamination in 8 and 18 MV photon
beams. Med Phys. 1998;25:12–19.
49. Bedford JL, Childs PJ, Nordmark H, et al. Commissioning and
quality assurance of the Pinnacle(3) radiotherapy treatment
planning system for external beam photons. Br J Radiol.
2003;76:163–176.
50. Cadman P, McNutt T, Bzdusek K. Validation of physics
improvements for IMRT with a commercial treatment-planning
system. J Appl Clin Med Phys. 2005:6:74–86.
51. Cadman P, Bassalow R, Sidhu NP, et al. Dosimetric considerations
for validation of a sequential IMRT process with a commercial
treatment planning system. Phys Med Biol. 2002;47:3001–3010.
52. International Commission on Radiation Units and Measurements.
ICRU Report 83: Prescribing, Recording, and Reporting Photon-Beam
Intensity-Modulated Radiation Therapy (IMRT). Bethesda, MD:
International Commission On Radiation Units and Measurements;
2010.
53. Craft DL, Hong TS, Shih HA, et al. Improved planning time and plan
quality through multicriteria optimization for intensity-modulated
radiotherapy. Int J Radiat Oncol Biol Phys. 2012; 82(1):e83–e90.
54. Voet PWJ, Maarten DLP, Breedveld S, et al. Toward fully automated
multicriterial plan generation: a prospective clinical study. Int J
Radiat Oncol Biol Phys. 2012;85(3):866–872.
55. Moore KL, Brame RS, Low DA, et al. Experience-based quality
control of clinical intensity-modulated radiotherapy planning. Int J
Radiat Oncol Biol Phys. 2011;81(2):545–551.
56. Zhang X, Li X, Quan EM, et al. A methodology for automatic
intensity-modulated radiation treatment planning for lung cancer.
Phys Med Biol. 2011;56:3873–3893.
57. Wu B, Ricchetti F, Sanguineti G, et al. Patient geometry-driven
information retrieval for IMRT treatment plan quality control. Med
Phys. 2009;36(12):5497–5505.
58. Good D, Lo J, Lee WR, et al. A knowledge-based approach to
improving and homogenizing intensity modulated radiation therapy
planning quality among treatment centers: an example application
to prostate cancer planning. Int J Radiat Oncol Biol Phys.
2013;87(1):176–181.
59. Appenzoller LM, Michalski JM, Thorstad WL, et al. Predicting dose-
volume histograms for organs-at-risk in IMRT planning. Med Phys.
2012;39(12):7446–7461.
60. Zhu X, Ge Y, Thongphiew D, et al. A planning quality evaluation
tool for prostate adaptive IMRT based on machine learning. Med
Phys. 2011;38(2):719–726.
61. Altman MB, Kavanaugh JA, Green OL, et al. Addressing the issues
limiting rapid contour evaluation to facilitate On-line Adaptive
Radiation Therapy (OL-ART) with MR-IGRT. Int J Radiat Oncol Biol
Phys. 2014;90(1):S860.
62. Caldwell C, Mah K. Imaging for radiation therapy planning. In: Van
Dyk J, ed. The Modern Technology of Radiation Oncology: A
Compendium for Medical Physicists and Radiation Oncologists. Volume
2. Madison, WI: Medical Physics Publishing; 2005:31–89.
63. Kessler ML. Image registration and data fusion in radiation therapy.
Br J Radiol. 2006;70:S99–S108.
64. Njeh CF. Tumor delineation: the weakest link in the search for
accuracy in radiotherapy. J Med Phys. 2008;33(4):136–140. doi:
10.4103/0971–6203.44472.
65. Khoo VS, Joon DL. New developments in MRI for target volume
delineation in radiotherapy. Br J Radiol. 2006;79:S2–S15.
66. Price PM, Green MM. Positron emission tomography imaging
approaches for external beam radiation therapies: current status and
future developments. Br J Radiol. 2011;84:S19–S34.
67. Macmanus M, Nestle U, Rosenweig KE, et al. Use of PET and
PET/CT for Radiation Therapy Planning: IAEA expert report 2006–
2007. Radiother Oncol. 2008;91:85–94.
68. Li G, Citrin D, Camphausen K, et al. Advances in 4D medical
imaging and 4D radiation therapy. Technol Cancer Res Treat.
2008;7:67–81.
69. Brock KK. Deformable registration accuracy consortium. Results of a
multi-institution deformable registration accuracy study (MIDRAS).
Int J Radiat Oncol Biol Phys. 2010;76(2):583–596.
70. Brock KK, Kessler ML, Mutic S, et al. AAPM Task Group 132: use of
image registration and data fusion algorithms and techniques in
radiotherapy treatment planning.
71. Teguh DN, Levendag PC, Voet PW, et al. Clinical validation of atlas-
based auto-segmentation of multiple target volumes and normal
tissue (swallowing/mastication) structures in the head and neck. Int
J Radiat Oncol Biol Phys. 2011;81(4):950–957.
72. Gambacorta MA, Valentini C, Dinapoli N, et al. Clinical validation
of atlas-based auto-segmentation of pelvic volumes and normal
tissue in rectal tumors using auto-segmentation computed system.
Acta Oncologica. 2013;52:1676–1681.
73. Li T, Zhu X, Thongphlew D, et al. On-line adaptive radiation
therapy: feasibility and clinical study. J Oncol. 2010;2010:407236.
74. Noel CE, Santanam L, Parikh PJ, et al. Process-based quality
management for clinical implementation of adaptive radiotherapy.
Med Phys. 2014;41(8):081717.
75. Nath R, Anderson LL, Luxton G, et al. Dosimetry of interstitial
brachytherapy sources. Med Phys. 1995;22(2):209–234.
76. Rivard MJ, Coursey BM, DeWerd LA, et al. Update of AAPM Task
Group No. 43 Report: A revised AAPM protocol for brachytherapy
dose calculations. Med Phys. 2004;31(3):633–674.
77. Perez-Calatayud J, Ballester F, Das RK, et al. Report of the High
Energy Brachytherapy Source Dosimetry (HEBD) Working Group: Dose
Calculation for Photon-Emitting Brachytherapy Sources with Average
Energy Higher than 50 keV: Full Report of the AAPM and ESTRO.
College Park, MD: AAPM; 2012.
78. Rivard MJ, Beaulieu L, Mourtada F. Enhancements to
commissioning techniques and quality assurance of brachytherapy
treatment planning systems that use model-based dose calculation
algorithms. Med Phys. 2010;37:2645–2658.
79. Beaulieu L, Tedgren AC, Carrier JF, et al. Report of the Task Group
186 on model-based dose calculation methods in brachytherapy
beyond the TG-43 formalism: current status and recommendations
for clinical implementation. Med Phys. 2012;39(10):6208–6236.
80. Paganetti H, Jiang H, Lee SY, et al. Accurate Monte Carlo
simulations for nozzle design, commissioning and quality assurance
for a proton radiation therapy facility. Med Phys. 2004;31:2107–
2118.
81. Zhu XR, Poenisch F, Sawakurchi GO, et al. Commissioning dose
computation models for spot scanning proton beams in water for a
commercially available treatment planning system. Med Phys.
2013;40(4):041723.
82. Slopsema RL, Lin L, Flampouri S, et al. Development of a golden
beam data set for the commissioning of a proton double-scatter
system in a pencil-beam dose calculation algorithm. Med Phys.
2014;41(9):091710.
83. Paganetti H. Proton Therapy Physics, Series in Medical Physics and
Biomedical Engineering. Boca Raton, FL: CRC Press; 2012.
84. International Commission on Radiation Units and Measurements.
ICRU Report 78: Prescribing, Recording, and Reporting Proton-Beam
Therapy. Bethesda, MD: International Commission On Radiation
Units and Measurements; 2007.
85. Park PC, Zhu XR, Lee AK, et al. A beam-specific planning target
volume (PTV) design for proton therapy to account for setup and
range uncertainties. Int J Radiat Oncol Biol Phys. 2012;82(2):e329–
e336.
86. Paganetti H. Range Uncertainties in proton therapy and the role of
Monte Carlo simulations. Phys Med Biol. 2012;57:R99–R117.
87. Paganetti H. Dose to water versus dose to medium in proton beam
therapy. Phys Med Biol. 2009;54(14):4399–4421.
88. Mutic S, Palta JR, Butker EK, et al. Quality assurance for computed-
tomography simulators and the computed-tomography-simulation
process: report of the AAPM Radiation Therapy Committee Task
Group No. 66. Med Phys. 2003;30:2762–2792.
89. Calcerrada Diaz-Santos N, Blasco Amaro JA, Cardiel GA, et al. The
safety and efficacy of robotic image-guided radiosurgery system
treatment for intra- and extracranial lesions: a systematic review of
the literature. Radiother Oncol. 2008;89:245–253.
90. Huq MS, Fraass BA, Dunscombe PB, et al. A method for evaluating
quality assurance needs in radiation therapy. Int J Radiat Oncol Biol
Phys. 2008;71:S170–S173.
91. Ford EC, Smith K, Terezakis S, Croog V, et al. A streamlined failure
mode and effects analysis. Med Phys. 2014;41(6):061709.
92. Sawant A, Dieterich S, Svatos M, et al. Failure mode and effects
analysis-based quality assurance for dynamic MLC tracking systems.
Med Phys. 2010;37:6466–6479.
93. Ciocca M, et al. Application of failure mode and effects analysis to
intra-operative radiation therapy using mobile electron linear
accelerators. Int J Radiat Oncol Biol Phys. 2012;82:e305–e311.
94. Mutic S, Brame RS, Oddiraju S, et al. Event (error and near-miss)
reporting and learning for process improvement in radiation
oncology. Med Phys. 2010;37:5027–5036.
95. Ford EC, Fong de Los Santos L, Pawlicki T, et al. Consensus
recommendations for incident learning database structures in
radiation oncology. Med Phys. 2012;39:7272–7290.
96. Terezakis SA, Harris KM, Ford EC, et al. An evaluation of
departmental radiation oncology incident reports: Anticipating a
national reporting system. Int J Radiat Oncol Biol Phys. 2013;
85:919–923.
97. Stern RL, Heaton R, Fraser MW, et al. Verification of monitor unit
calculations for non-IMRT clinical radiotherapy: Report of AAPM
Task Group 114. Med Phys. 2011;38(1):504–530.
98. Georg D, Nyholm T, Olofsson J, et al. Clinical evaluation of monitor
unit software and the application of action levels. Radiother Oncol.
2007;85:306–315.
99. Georg D, Stock M, Kroupa B, et al. Patient-specific IMRT verification
using independent fluence-based dose calculation software:
experimental benchmarking and initial clinical experience. Phys Med
Biol. 2007;52:4981–4992.
100. Yukihara EG, McKeever SW. Optically stimulated luminescence
(OSL) dosimetry in medicine. Phys Med Biol. 2008;53:R351–R379.
101. Jornet N, Carrasco P, Jurado D, et al. Comparison study of MOSFET
detectors and diodes for entrance in vivo dosimetry in 18 MV x-ray
beams. Med Phys. 2004;31:2534–2542.
102. Dunscombe P, McGhee P, Lederer E. Anthropomorphic phantom
measurements for the validation of a treatment planning system.
Phys Med Biol. 1996;41:399–411.
103. Van Dam J, Marinello G. Methods for In Vivo Dosimetry in External
Radiotherapy. Brussels, Belgium: ESTRO; 2006.
104. World Health Organization (WHO). Radiotherapy Risk Profile:
Technical Manual. Geneva: World Health Organization; 2008.
105. Bednarz B, Kry SF, Klein EE, et al. AAPM Task Group 158:
Measurements and calculations of doses outside the treatment
volume from External Beam Radiation Therapy.
106. International Atomic Energy Agency (IAEA). Development of
Procedures for In Vivo Dosimetry in Radiotherapy. Vienna, Austria:
International Atomic Energy Agency; 2013.
107. Van Dyk J, Purdy JA. Clinical implementation of technology and
the quality assurance process. In: Van Dyk J, ed. The Modern
Technology of Radiation Oncology: A Compendium for Medical
Physicists and Radiation Oncologists. Madison, WI: Medical Physics
Publishing; 1999:19–51.
108. Ibbott G, Ma CM, Rogers DW, et al. Anniversary paper: fifty years
of AAPM involvement in radiation dosimetry. Med Phys.
2008;35:1418–1427.
109. Schiefer H, Fogliata A, Nicolini G, et al. The Swiss IMRT dosimetry
intercomparison using a thorax phantom. Med Phys. 2010; 37:4424–
4431.
110. International Atomic Energy Agency. Comprehensive Audits of
Radiotherapy Practices: A Tool for Quality Improvement, Quality
Assurance Team for Radiation Oncology (QUATRO). Vienna, Austria:
International Atomic Energy Agency; 2007.
111. American Society for Radiation Oncology. Guidance document for
ASTRO’s Accreditation Program for Excellence (APEx). Fairfax,
Virginia: APEx Guidance document; 2015.
9 Intensity-Modulated Radiation
Therapy: Photons

Jan Unkelbach

INTRODUCTION
The Rationale for IMRT: Concave Target Volumes
The development of intensity-modulated radiation therapy (IMRT) was
preceded by two important technologic developments: computed
tomography (CT) and multi-leaf collimators (MLCs). Before the
widespread availability of CT scanners, radiotherapy planning was based
on 2D x-ray images. In these images, the projection of the target volume
could be delineated, which led to the design of 2D treatment fields.
With the development of CT imaging, a 3D model of the patient became
available. The target volume as well as organs at risk could be
delineated in three dimensions and their spatial relation became known.
This led to the development of 3D conformal radiotherapy. Conforming
the radiation dose to the target volume required improved ways of
collimating the radiation field. The solution to this problem was the
MLC. 3D conformal radiotherapy is still the standard for many treatment
sites today. However, conforming the dose distribution to the tumor is
limited to round or convex shapes of the target volume. In 3D conformal
radiotherapy, the tumor is treated with one radiation field from each
incident beam direction, where the shape of the radiation field is the
projection of the target volume in beam’s eye view. The incident fluence
is homogeneous over the field. This makes it impossible to carve out
concavities in the target volume. The problem is illustrated in Figure 9.1,
which shows a patient treated for a spinal metastasis. The target volume
shown in red includes the entire vertebral body, which surrounds the
spinal cord. An emerging treatment paradigm for such cases consists in
delivering a single fraction dose of 18 to 24 Gy to the target volume.
This treatment approach requires that the dose to the spinal cord is
limited to approximately 10 Gy. With 3D conformal radiotherapy, it is
impossible to spare the spinal cord. As a first approach, the projection of
the spinal cord in beam’s eye view could be removed from the treatment
field, and the area to the right and to the left would be treated as two
separate fields. However, this strategy would yield an inhomogeneous
dose distribution to the target volume and would underdose the target
volume near the spinal cord. More specifically, in order to deliver the
prescribed dose to the target, the fluence at the edge of the spinal cord
has to be increased. Anders Brahme studied this phenomenon for a
stylized geometry in his 1982 paper (1). The work can be considered as
one of the first papers illustrating the need for inhomogeneous fluence
distributions across the treatment field when treating concave target
volumes. This eventually led to the development of IMRT.

Typical Applications of IMRT


The treatment of spinal metastasis in hypofractionated regimens is a
recent application of IMRT. However, over the past years, IMRT has
become the standard of care for a variety of treatment sites. Two of the
established applications are prostate cancer and head-and-neck tumors,
which are illustrated in Figures 9.2 and 9.3. The prostate lies in the mid-
sagittal plane between the bladder and the rectum. In radiotherapy
treatments of prostate cancer, the target volume contains the entire
prostate gland. The main dose-limiting normal tissue is the anterior
rectal wall. In many patients, the lateral lobes of the prostate partially
wrap around the rectum, as illustrated in Figure 9.2. Only millimeters
separate the prostate gland from the radiosensitive lining of the rectal
wall. Historically, the prescription dose was therefore limited by rectal
toxicity as the anterior rectal wall received the full prescription dose.
Today, in the era of IMRT, a commonly used prescription dose is 79 Gy
using standard fractionation, which is among the highest prescriptions
throughout radiotherapy. This necessitates that the high-dose region
carves out the concavity formed by the rectum, which became possible
with IMRT.
Tumors in the head-and-neck region represent a third example in
which IMRT has replaced 3D conformal techniques for the most part.
This includes tumors arising in the oral cavity, the nasopharynx, and the
oropharynx. These tumors are often inoperable and are close to a variety
of radiosensitive structures. These include the saliva secreting glands as
well as structures related to swallowing and speech. Thus, radiotherapy
to cancers of the head and neck is associated with acute and long-term
side effects that seriously impact quality of life. Examples of these side
effects are mouth dryness, dental decay, and swallowing dysfunction.
IMRT allows for sparing of the parotid glands and carefully distributing
dose in normal tissues. Furthermore, IMRT allows for complex dose
prescriptions that are standard of care today. Nowadays, treatment
protocols often use three dose levels, in which 70 Gy is delivered to the
gross tumor volume (GTV), 60 Gy to high-risk lymph nodes, and 54 Gy
to low-risk nodal stations. This type of dose painting approach would be
very difficult to mimic using forward planning techniques and 3D
conformal radiotherapy.

FIGURE 9.1 A: Geometry of a spinal metastasis treated with IMRT. The target volume (red)
entirely surrounds the spinal cord (dark green), which is to be spared. Additional organs at risk are
the kidneys (orange). B: IMRT provides the means to spare the spinal cord while delivering a high
dose to the target volume, as shown in the dose distribution.
FIGURE 9.2 A: Prostate cancer represents a typical application of IMRT. The prostate (red) abuts
the rectum (orange) and the bladder (yellow). B: IMRT has the ability to conform the high dose to
the prostate while carving out the concavity formed by the rectum.

FIGURE 9.3 A: A head-and-neck cancer patient treated with IMRT. The target consists of multiple
volumes prescribed to different doses: GTV (brown), high-risk clinical target volume (CTV)
(purple), and low-risk CTV (red). Radiosensitive structures including the parotid glands (blue), the
submandibular glands (yellow), and the spinal cord (light blue) are near the target volume. B:
IMRT allows for conformal dose distribution to complex-shaped target volumes.

The examples above illustrate problems in oncology in which the technical development of IMRT had a profound impact on the way
patients are treated. In the case of prostate cancer, the widespread
availability of IMRT has caused a shift from radical prostatectomy toward
radiotherapy as the mainstay of therapy. In the case of spinal metastasis,
IMRT offers the option of high single fraction doses with the intent of
local control, where the role of radiotherapy was limited to palliative
treatments before.

Scope and Organization of This Chapter


This chapter focuses on the concepts of IMRT, the treatment planning
process, and the mathematical methods used. It discusses steps in the
planning process and the user interface between treatment planner and
planning software, and it provides an understanding of the algorithms
used behind the scenes by modern treatment planning systems (TPS).
For a review of the history of IMRT, the reader is referred to the paper
by Bortfeld (2) and references therein. For a comprehensive review of
IMRT including the physics and technology aspects, we recommend the
book by Webb (3). The remainder of this chapter is organized as follows:

• The section IMRT–concepts and planning process introduces the main
concepts in IMRT and illustrates the planning process step-by-step
using a head-and-neck cancer example.
• The section Fluence map optimization discusses fluence map
optimization (FMO) in more detail, which represents the most
important concept in IMRT planning. In that context, the formulation
of IMRT treatment planning as a mathematical optimization problem is
discussed.
• The section Leaf sequencing describes the leaf sequencing problem, that
is, the method to deliver intensity-modulated radiation fields using
MLCs.

The above-mentioned sections reflect the historical development of
IMRT, and thereby the functionality and algorithmic foundation of the
first-generation IMRT planning systems. In recent years, IMRT TPS have
evolved to more advanced planning algorithms and support more
complex delivery techniques. To that end, the remaining sections
describe recent developments in IMRT planning.
• The section Direct aperture optimization describes methods for direct
aperture optimization (DAO), which aims to overcome problems of the
traditional two-step approach of FMO plus leaf sequencing.
• The section Arc therapy describes treatment planning for volumetric-
modulated arc therapy (VMAT), that is, a delivery technique where the
gantry continuously rotates around the patient while radiation is
delivered.
• The section Specialized topics in IMRT planning addresses current areas
of research and development in IMRT including new approaches to
deal with inherent tradeoffs between conflicting planning goals.
Pareto-surface navigation methods are introduced as a means for
interactive planning, which provide the treatment planner with a
graphical user interface to navigate in a database of treatment plans to
determine the treatment plan with the most desirable tradeoff.

IMRT—CONCEPTS AND PLANNING PROCESS


This section demonstrates the concepts of IMRT planning step-by-step for
an example case. We consider the head-and-neck cancer patient shown
in Figure 9.3, which represents a typical application of IMRT. The target
consists of multiple volumes, the GTV and adjacent lymph nodes, which
are prescribed to different dose levels. In addition, several radiosensitive
structures are located in proximity of the tumor. This includes the saliva-
producing parotid glands, the spinal cord, and the larynx. Radiotherapy
in the head-and-neck region is typically related to acute and long-term
side effects, such as swallowing dysfunction. Reducing dose to all normal
tissues is of great importance.

The Fluence Map


IMRT refers to radiotherapy delivery methods for which the fluence
distribution in the plane perpendicular to the incident beam direction is
modulated. To that end, the radiation beam is divided into small beam
segments, which are in principle deliverable by an MLC, as further
described in the leaf sequencing section. The lateral fluence distribution
of the beam is thereby discretized into small elements, which are
commonly referred to as beamlets or bixels. For an MLC with 1-cm leaf
width, the fluence distribution is represented by the intensities of 1 × 1-
cm beamlets. Nowadays, modern MLCs with a smaller leaf width often
allow for a finer discretization into 5 × 5-mm beamlets. The discrete
representation of the fluence is commonly referred to as the fluence map.
In IMRT planning, the goal is to find the fluence maps of all incident
beam directions that yield the best possible dose distribution in the
patient. This problem is referred to as FMO and is the topic of this
section and the following one.
The definition of the fluence map is illustrated in Figure 9.4. Similar to
3D conformal therapy, this starts with the definition of the isocenter. For
IMRT planning, we subsequently determine the set of all beamlets that
are potentially helpful in finding the most desirable treatment plan.
Loosely speaking, this corresponds to all beamlets that contribute a
significant dose to the target volume. A common method for initializing
the fluence map consists in including all beamlets for which the central
axis of the corresponding beam segment intersects the target volume.
FIGURE 9.4 Illustration of the fluence map definition for IMRT planning for a head-and-neck
cancer patient. The figure shows the digitally reconstructed radiograph (DRR) for one of the
incident beam directions. The contours show the projections of the three target volumes in
beam’s-eye-view. The fluence map consists of all beamlets that cover the projection of the target
volume.

The Dose-Deposition Matrix


The quality of a treatment plan is primarily judged based on the dose
distribution in the patient. Thus, we would like to determine the fluence
maps of the incident beams as to best approximate a desired dose
distribution. To that end, we have to relate the incident fluence to the
dose distribution in the patient. The dose-deposition matrix concept,
which is frequently used in IMRT planning, provides this link.
For dose calculation, the patient is discretized into small volume
elements referred to as voxels. A dose calculation algorithm is used to
calculate the dose distribution of every beamlet in the fluence map in
the patient. Let us denote the dose per unit intensity that beamlet j contributes to voxel i in the patient as Dij, and let us denote the intensity of beamlet j as xj.
The total dose di delivered to voxel i is then simply given by the
superposition of all beamlet contributions:
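$$d_i = \sum_j D_{ij}\, x_j$$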

Here, the matrix of dose contributions Dij of beamlets j to voxel i is referred to as the dose-deposition matrix. In practice, the fluence is commonly quantified in monitor units (MU). In this case, the natural unit of the dose-deposition matrix is Gy/MU, such that the resulting dose
distribution in the patient is obtained in Gy. The dose-deposition matrix
concept is convenient since it allows for a separation of the
mathematical optimization of beamlet intensities xj from the dose
calculation algorithm. In IMRT planning, the dose-deposition matrix is
often calculated up front and held in memory. Subsequently, the dose
distribution is obtained by a simple matrix multiplication d = Dx.

Formulation of IMRT Planning as an Optimization Problem


In order to determine the optimal fluence map for every incident beam
direction, we have to specify the desired dose distribution. In other
words, we have to characterize what a good treatment plan is. In the
example case in Figure 9.3, treatment planning aims at different goals
including:
1. A prescribed dose dpres should be delivered to all parts of the target
volume. In this case, the target volume consists of multiple parts. The
GTV is often prescribed to 70 Gy; adjacent lymph nodes, which are likely to contain microscopic tumor, are prescribed to an
intermediate dose of 60 Gy; and more distant lymph nodes that may
contain tumor cells but are less likely to do so are prescribed to 54 Gy.
The lymph node targets essentially consist of normal tissue such that
treatment planning aims at a homogeneous dose in the target,
avoiding both under- and overdosing.
2. The dose distribution should conform to the target volume. Outside
the target volume, a steep dose falloff is desired and unnecessary dose
to all healthy tissues should be avoided.
3. The dose delivered to the parotid glands is to be minimized to avoid
or reduce side effects such as xerostomia (mouth dryness).
4. The dose to the spinal cord has to be limited. The maximum dose
delivered to any part of the spinal cord has to stay below a maximum
tolerance dose dmax.
For IMRT planning, these goals have to be translated into
mathematical terms. This is done by defining functions, which represent
measures for how good a treatment plan is, and whether it is acceptable
at all. In this context, we distinguish objectives and constraints:

Constraints are conditions that are to be satisfied in any case. Every treatment plan that does not satisfy the constraint would be
unacceptable.
Objectives are functions that measure the quality of a treatment plan.
They may represent measures to quantify how close a treatment plan
is to the ideal or desired treatment plan.

In the above example, the first three goals can be formulated as
objectives; the fourth goal of enforcing a strict maximum on the spinal
cord dose represents a constraint. The goal of delivering a homogeneous
dose to the target volume can be formulated via a quadratic objective
function:
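$$f_T(d) = \frac{1}{|T|} \sum_{i \in T} \left( d_i - d^{pres} \right)^2 ,$$

where T denotes the set of voxels in the target volume.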

Ideally, every voxel that belongs to the target volume receives the
prescribed dose dpres, which corresponds to a value of zero for the
function fT . Otherwise, fT yields the averaged quadratic deviation from
the prescribed dose. The larger the objective value is, the more the dose
deviates from the prescription dose, corresponding to a worse treatment
plan.
Similarly, the goal of minimizing the dose to the parotid glands can be
formulated as an objective function. For example, we can define the
objective fP as
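$$f_P(d) = \frac{1}{|P|} \sum_{i \in P} d_i , \qquad P: \text{voxels of the parotid glands,}$$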

which aims at minimizing the mean dose to the parotid glands. The goal
of conforming the dose distribution to the target volume can for example
be described via a piecewise quadratic penalty function
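$$f_N(d) = \frac{1}{|N|} \sum_{i \in N} \left( d_i - d_i^{max} \right)_+^2 , \qquad N: \text{normal-tissue voxels outside the target,}$$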

where the + operator is defined through (di − dimax)+ = di − dimax if di ≥ dimax, and zero otherwise. Thus, dimax is a maximum dose that is
accepted in voxel i; dose values exceeding dimax are penalized
quadratically. Clearly, in normal tissue voxels directly adjacent to the
target volume, high doses are unavoidable, whereas at large distance
from the target volume, treatment planning should aim at avoiding
unnecessary dose. Therefore, dimax can be chosen based on the distance
of voxel i to the target volume. For example, dimax is set equal to the
prescribed dose in voxels directly adjacent to the target volume, and to
half the prescription at 1 cm distance.
Finally, we would like to ensure that the dose in all voxels that belong
to the spinal cord does not exceed a maximum tolerance dose dsmax. If
we do not accept any treatment plan that exceeds the maximum dose,
this can be implemented as a constraint, not an objective. In this case we
can formulate the constraint as
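$$d_i \leq d_s^{max} \quad \text{for all } i \in S,$$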

where S is the set of all voxels belonging to the spinal cord.


Treatment planning simultaneously aims at minimizing all of the
above objective functions, that is, ideally we would like each tumor
voxel to receive the prescribed dose while no dose is delivered to the
normal tissues. It is clear that the objectives associated with different
structures are inherently conflicting. Thus, the treatment planner will
have to weight these conflicting objectives relative to each other and
accept a compromise. The traditional approach in IMRT planning
consists in manually assigning importance weights w to each objective,
using a high weight for the most important objective, and a smaller
weight for less important goals. The best treatment plan is then defined
as the one that minimizes the weighted sum of objectives:
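$$f = w_T\, f_T + w_P\, f_P + w_N\, f_N$$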

IMRT planning uses mathematical optimization algorithms in order to determine the fluence map x, corresponding to the dose distribution d = Dx, which minimizes the weighted sum of objectives, subject to all constraints on the dose distribution, and under the condition that all
beamlet weights have to be positive. We will further discuss
optimization algorithms in the section Fluence map optimization. Below,
we first look at the result of such an optimization for a specific choice of
optimization parameters.

Solution to the IMRT Problem: The Optimal Treatment Plan


Figure 9.5 illustrates an IMRT treatment plan using 11 incident beam
directions. It shows the dose distribution overlaid on a coronal slice of
the patient’s CT. Also shown are the 11 beams together with the
effective fluence that is incident from each direction. One of the
radiation fields is illustrated in more detail in Figure 9.6, in which the
fluence is overlaid on the digitally reconstructed radiograph (DRR). The
figure illustrates the modulation of the intensity over the radiation field.
The resulting dose distribution of the IMRT plan is shown in Figure
9.7. The middle panel shows the cumulative dose distribution of all
beams. IMRT allows for dose distributions that conform to complex-
shaped, concave target volumes. A single IMRT plan allows for different
dose levels in high- and low-risk lymph node targets, as well as a
simultaneous integrated boost (SIB) to the GTV. The peripheral images
in Figure 9.7 show the dose contributions of 6 of the 11 incident beams.

Controlling Tradeoffs
Different objectives in IMRT planning are inherently conflicting. Clearly,
there is a tradeoff between delivering dose to the tumor and reducing
dose to healthy tissues. In the above example, the target volume is
directly adjacent to the parotid glands. Sparing the parotid glands from
radiation will lead to a dose reduction in the adjacent part of the target
volume. Ensuring coverage of the target will in turn lead to higher doses
to the parotid glands. In addition, there are tradeoffs between different
normal tissues. In order to deliver the prescribed dose to the target
volume, some dose to the normal tissues is unavoidable. However, using
intensity modulation and enough beam directions, the dose distribution
in the normal tissue can be shaped according to the physician’s
preference.

FIGURE 9.5 Illustration of an IMRT plan for the head-and-neck cancer patient generated in the
RayStation planning system, version 4.0. The dose distribution is shown on a coronal slice of the
patient’s CT scan. The blue circle indicates the isocenter. The 11 beam directions are displayed
with their respective fluence. Red color indicates a low fluence, white a high fluence.
FIGURE 9.6 Illustration of a single intensity-modulated field, overlaid on the DRR. The figure
shows the effective fluence that is incident on the patient surface for the final treatment plan. This
includes modification of the optimized fluence map through leaf sequencing (Leaf sequencing
section) and refinement of MLC leaf positions (Direct aperture optimization section). The same
applies to Figure 9.5.

In most TPS that are in use today, treatment planners control the
tradeoffs between different planning goals manually by manipulating the
relative weights w of objective functions. This can lead to a time-
consuming trial-and-error process. Different approaches have been
suggested to improve the interaction of the treatment planner with the
TPS, including interactive Pareto-surface navigation methods, which are
discussed in the section Specialized topics in IMRT planning.

Delivery of Intensity-Modulated Fields


In order to deliver an intensity-modulated field, the fluence map is
decomposed into a number of smaller radiation fields that can be
delivered using an MLC. This process is called sequencing and is
described in more detail in the Leaf sequencing section. As an outlook to
subsequent sections, Figure 9.8 illustrates how the fluence shown in
Figure 9.6 is delivered as a sequence of three MLC openings.

FLUENCE MAP OPTIMIZATION*


The previous section illustrated the main concepts in IMRT planning for
an example case. In this section we take a more formal look at IMRT
planning as a mathematical optimization problem. We first discuss some
of the frequently used objective and constraint functions, in particular
the handling of dose–volume effects (section on Dose–volume effects). The
subsequent section briefly outlines the use of outcome models in IMRT
planning and their limitations. Finally, the section on Optimization
algorithms introduces basic mathematical optimization algorithms used
in IMRT planning to solve IMRT problems.
FIGURE 9.7 IMRT dose distribution for the head-and-neck case example, demonstrating the
ability of IMRT to conform the dose distribution to complex target volumes (middle panel). Also
shown are the dose contributions of 6 (out of 11) beam directions (surrounding images).

In mathematical terms, a general FMO problem can be formulated as the following mathematical optimization problem:
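$$\begin{aligned}
\underset{x}{\text{minimize}} \quad & f(d) \\
\text{subject to} \quad & g_s(d) \leq c_s \quad \text{for all constraints } s, \\
& d = Dx, \qquad x_j \geq 0 \ \text{for all beamlets } j.
\end{aligned}$$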
FIGURE 9.8 Delivery of an intensity-modulated field through a sequence of MLC openings. Each
figure shows the incident total fluence overlaid on the DRR, together with the leaf positions of the
multi-leaf collimator that define the field opening. Also shown are the positions of the Y-jaws that
reduce transmission through closed MLC leaves (blue).

Treatment planning involves balancing different clinical objectives.


Therefore, the objective function f is a weighted sum of individual
objectives:
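$$f(d) = \sum_n w_n\, f_n(d)$$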

Here, wn are positive weighting factors, which are used to control the
relative importance of different terms in the composite objective
function.
The objective function that may be the most commonly used in
current TPS is a piecewise quadratic penalty function:
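$$f_n(d) = \frac{1}{|V_s|} \sum_{i \in V_s} \left( d_i - d^{max} \right)_+^2 , \qquad V_s: \text{voxels of structure } s.$$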
Here, dmax is a maximum tolerance dose for an organ, which is usually
specified by the treatment planner through the graphical user interface
in the TPS. Similarly, for target volumes dmin is a minimum dose that is
to be delivered to the target volume.
The functions gs(d) correspond to hard constraints on the dose
distribution. Common constraints are maximum dose values in organs at
risk and minimum doses in target volumes. For a maximum dose constraint, cs is the maximum dose allowed in the structure, the index s runs over all voxels in the structure, and gs(d) is simply the dose in voxel s. In the subsection Dose–
Volume effects, additional commonly used objectives and constraints are
discussed.

Dose–Volume Effects
An organ at risk will typically receive an inhomogeneous dose
distribution. The question arises whether it is better to irradiate a small
part of the organ to a large dose while sparing the remaining parts to a
large extent; or whether it is better to spread out the dose and avoid
large doses in all parts of the organ. In that context, one distinguishes
parallel organs and serial organs. For organs with a serial structure, the
function of the whole organ will fail if one part of the organ is damaged.
One prominent example for a serial organ is the spinal cord. For serial
organs it is therefore crucial to limit the maximum dose delivered to the
organ, rather than the mean dose. For a parallel organ, the function of
the organ as a whole is preserved even if a part of the organ is damaged.
The lungs are an example of a parallel organ. The dependence of a
clinical outcome on the irradiated volume of an organ is commonly
referred to as a volume effect or dose–volume effect. For IMRT planning,
clinical knowledge on dose–volume effects is to be translated into
appropriate objective functions. Today, mainly two types of
objective/constraint function are being applied: dose–volume histogram
(DVH) constraints and the concept of equivalent uniform dose (EUD).
Equivalent Uniform Dose
One approach to quantifying dose–volume effects consists of using
generalized mean values of the dose distribution:
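$$\mathrm{EUD}(d) = \left( \frac{1}{|V|} \sum_{i \in V} d_i^{\alpha} \right)^{1/\alpha} , \qquad V: \text{voxels of the organ,}$$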

where the exponent α is larger than 1 for OARs. For the special case α =
1, EUD(d) is equivalent to the mean dose in the organ. In the limit of
large α values, the value of EUD(d) approaches the maximum dose in the
organ. Thus, parallel organs are described via a small value of α close to
1, whereas serial organs are described via large values of α
(approximately 10). The generalized mean value is commonly referred to
as EUD. The generalized mean value can also be applied to target
volumes by using negative exponents. For a large negative value of α,
the EUD approaches the minimum dose in the target volume. In practice,
exponents in the range of α = –10 … –20 are considered. The EUD can
be used as both an objective function and a constraint function.

DVH Objectives and Constraints


The clinical evaluation of treatment plans often uses the DVH. A typical
evaluation criterion for the target volume is: at least 95% of the target
volume should receive a dose equal to or higher than the prescription
dose. Similarly, a criterion for an OAR could be: at most 20% of the
organ should receive more than 30 Gy. From an optimization
perspective, it is not straightforward to handle DVH constraints in a
rigorous way. In practice, DVH constraints are therefore handled
approximately using a quadratic penalty function. We consider the
example that no more than 20% of an organ should receive a dose
higher than dmax. Given an initial dose distribution, one can identify the
fraction of voxels that exceed the dose level dmax. If this fraction is
smaller than 20%, the DVH constraint is fulfilled. Otherwise, a quadratic
penalty function is introduced that aims at reducing the dose to those
voxels that exceed dmax by the least amount. For example, if 30% of the
organ receives a higher dose, the 20% of voxels receiving the highest
dose are ignored. For the remaining 10% of voxels, a quadratic penalty term as in
Equation 9.2 is added to the objective function.
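As a minimal illustration of this scheme, the following Python sketch evaluates such an approximate DVH penalty for a single structure. The function name, interface, and example numbers are chosen for illustration only and do not correspond to any particular treatment planning system.

```python
import numpy as np

def approximate_dvh_penalty(dose, d_max, allowed_fraction):
    """Quadratic penalty approximating the DVH goal that at most
    `allowed_fraction` of the structure receives more than `d_max`.
    `dose` is a 1D array of voxel doses for one structure (in Gy)."""
    n = dose.size
    n_allowed = int(allowed_fraction * n)       # voxels allowed to exceed d_max
    over = np.flatnonzero(dose > d_max)         # currently overdosed voxels
    if over.size <= n_allowed:
        return 0.0                              # DVH criterion already fulfilled
    # Sort the overdosed voxels by dose, hottest first; ignore the n_allowed
    # hottest ones and penalize the rest, i.e., the voxels that exceed
    # d_max by the least amount.
    over_sorted = over[np.argsort(dose[over])[::-1]]
    penalized = over_sorted[n_allowed:]
    return np.sum((dose[penalized] - d_max) ** 2) / n

# Example: 30% of the voxels exceed 30 Gy although only 20% are allowed to.
dose = np.array([10.0, 15.0, 20.0, 25.0, 28.0, 29.0, 31.0, 33.0, 36.0, 40.0])
print(approximate_dvh_penalty(dose, d_max=30.0, allowed_fraction=0.2))
```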

The Use of Clinical Outcome Models in IMRT Optimization


Since the beginning of the IMRT era, the question regarding the
adequate objective function has persisted. Intuitively, we would like to
translate the notion of “maximizing the tumor control probability (TCP)”
while “minimizing the normal tissue complication probability (NTCP)”
more directly into mathematical terms.
One of the most common methods for relating treatment outcome to
the dose distribution consists in performing logistic regression. As an
example, we consider NTCP models, however, the same methodology
can be applied to TCP models. The severity of a radiation side effect is
clinically assessed in discrete stages. Typically, one is interested in
avoiding severe complications. For example, in the treatment of lung
cancer, treatment planning may aim at minimizing the probability for
radiation pneumonitis of grade 2 or higher. This converts the observed
clinical outcome into a binary outcome label. NTCP modeling can thus
be considered as a classification problem, which aims at estimating the
probability of a complication given features of the dose distribution.
Standard statistical classification methods, such as logistic regression,
can be applied to this problem. In logistic regression, the NTCP model is
given by:
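$$\mathrm{NTCP}(d) = \frac{1}{1 + e^{-f(d,\, q)}}$$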

Here, f is a function of the dose distribution d and the model parameters q. The central problem in statistical analysis and modeling of patient outcome consists in determining the function f, that is, selecting
features of the dose distribution that are correlated with outcome. One
of the most commonly used representations of f is given by:
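$$f(d, q) = 4\gamma \left( \frac{\mathrm{EUD}(d)}{TD_{50}} - 1 \right)$$

(shown here in one common parameterization, in which the factor 4 makes γ the normalized slope of the dose–response curve at TD50).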

In this case, f is a linear function of a single feature of the dose distribution, namely the EUD. For EUD(d) = TD50, the value of NTCP
evaluates to 0.5, that is, TD50 corresponds to the effective dose that leads
to a complication probability of 50%. The parameter γ determines the
slope of the dose-response relation. The NTCP model has three
parameters (TD50, γ, and the EUD exponent α) which can be fitted to
outcome data, for example, through maximum likelihood methods. This
NTCP model is equivalent to the Lyman–Kutcher–Burman (LKB) model,
except that the LKB model traditionally uses a different functional form
of the sigmoid.
Although phenomenologic outcome models play an increasing role in
treatment plan evaluation, their capabilities from a treatment plan
optimization perspective have remained limited so far. One reason for
that is the uncertainty in outcome models. A second reason is that
currently used models are not more powerful than dose-based objective
functions. In particular, the NTCP model above represents an increasing
function of the EUD, that is, independent of the parameters TD50 and γ,
higher EUD always leads to higher NTCP. As a consequence, the dose
distribution that minimizes EUD is the same as the dose distribution that
minimizes NTCP. Hence, from an IMRT optimization perspective,
minimizing EUD and NTCP is equivalent (5).

Optimization Algorithms
Our goal in this chapter is to provide the reader with an understanding
of the most basic optimization algorithms, which do not require
advanced knowledge of optimization theory. We start with a geometric
visualization of the IMRT optimization problem. Subsequently, the
gradient descent algorithm is described, which is in principle sufficient
to optimize fluence maps. Afterward, extensions of gradient descent
methods toward quasi-Newton algorithms are outlined. Certainly, the
field of IMRT optimization has advanced significantly, and increasingly
complex algorithms for constrained optimization are being applied.
These algorithms require knowledge of optimization theory, which is
beyond the scope of this chapter. The interested reader is referred to the
optimization literature (6,7). Ehrgott et al. (8) provide a review of
radiotherapy planning from a mathematical optimization perspective.

Visualization of the Fluence Map Optimization Problem


Due to the large number of beamlets (optimization variables) it is not
possible to directly visualize the objective and constraint functions for a
full IMRT planning problem. Nevertheless, it is helpful to understand the
structure of the IMRT optimization problem. To that end, we consider a
simplified version of an IMRT planning problem in which only 2
beamlets and 4 voxels are considered. We consider the following dose-
deposition matrix:

where the first two columns correspond to the tumor voxels, and
columns 3 and 4 correspond to OAR voxels. We further assume that we
aim to deliver a dose of 2 to both of the tumor voxels, and we impose a
maximum dose constraint on the OAR voxels of 0.8 and 1.0, respectively.
The goal of delivering the prescribed dose to the tumor voxels is
expressed via a quadratic objective function. The optimization problem
for this illustrative example can be formulated as:
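$$\begin{aligned}
\underset{x_1,\, x_2 \,\geq\, 0}{\text{minimize}} \quad & (d_1 - 2)^2 + (d_2 - 2)^2 \\
\text{subject to} \quad & d_3 \leq 0.8, \qquad d_4 \leq 1.0, \qquad d = Dx.
\end{aligned}$$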

Since we only have two optimization variables, the objective and constraint functions can be visualized explicitly. This is done in Figure
9.9. The objective function is illustrated via isolines. Since we consider a
quadratic objective function, it represents a two-dimensional parabola.
The minimum of the objective function is located at beamlet intensities
x1 = 1 and x2 = 1. At this point, both tumor voxels receive the
prescribed dose and the objective function is zero.
We now consider the constraints on the OAR voxels. Since the dose in
each voxel is a linear function of the beamlet intensities, the constraints
represent hyperplanes in beamlet intensity space, that is, lines in two
dimensions. In Figure 9.9 we show the lines where the constraints d3 =
0.8 and d4 = 1.0 are met exactly. For all beamlet intensities beyond these
lines, the maximum dose to an OAR voxel is exceeded. All beamlet
intensity combinations below the lines form the feasible region. Thus,
the optimal solution to the IMRT planning problem is given by the point
within the feasible region that has the smallest value of the objective
function. In this example, this is given by x1 = 0.7 and x2 = 1.2 and is
indicated by the red dot in Figure 9.9. By multiplying this solution with
the dose-deposition matrix, we obtain the corresponding optimal dose
distribution.
In this case, the constraint for OAR voxel 3 is binding, that is, the OAR
voxel receives the maximum dose we allow for. We further note that the
minimum of the objective function is outside of the feasible region,
which means that, in order to fulfill the maximum OAR dose constraint,
we have to accept a compromise regarding target dose homogeneity.

FIGURE 9.9 Visualization of the IMRT optimization problem for two beamlets. The quadratic
objective function is shown via isolines; the linear maximum dose constraints of OAR voxels are
shown as thick black lines (reproduced with permission from (4)).
FIGURE 9.10 Visualization of the composite objective function containing quadratic penalty
functions to approximate maximum dose constraints. For increasing weights w for the penalty
function, the minimum of the composite objective function moves closer to the optimal solution of
the constrained problem (reproduced with permission from (4)).

Approximate Handling of Constraints Via Penalty Functions


In IMRT planning, maximum dose constraints in OARs are often
approximated via penalty functions. More specifically, we can consider
the composite objective function where a quadratic penalty function,
multiplied with a weight w is added to the original objective for target
dose homogeneity:
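$$f(x) = (d_1 - 2)^2 + (d_2 - 2)^2 + w \left[ (d_3 - 0.8)_+^2 + (d_4 - 1.0)_+^2 \right]$$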

Adding the penalty function does not change the objective function
within the feasible region, only the objective function values outside of
the feasible region are increased. This is shown in Figure 9.10 for
penalty weights of w = 5 and w = 20. As w is increased, the
unconstrained minimum of the function f moves closer to the optimal
solution of the constrained problem.

The Basic Gradient Descent Method


In this section we introduce the most generic optimization algorithm,
which can in principle be used to generate an IMRT treatment plan. To
that end, we assume that we want to minimize an objective function f,
subject to the constraint that all beamlet intensities are positive. We do
not consider additional constraints g on the dose distribution, that is, all
treatment goals are included in the objective function.
The gradient of the objective function is the vector of partial
derivatives of f with respect to the beamlet intensities xj:
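$$\nabla f(x) = \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n} \right)^{T}$$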

The gradient vector is oriented perpendicular to the isolines of the
objective function; it points in the direction of maximum slope in the
objective function landscape. Thus, taking a small step into the direction
of the negative gradient yields a fluence map x that corresponds to a
lower value of the objective function, that is, an improved plan. This
gives rise to the most basic nonlinear optimization algorithm: In each
iteration k, the current fluence map xk is updated according to:
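$$x^{k+1} = x^{k} - \alpha\, \nabla f(x^{k})$$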

where α is a step size parameter, which has to be sufficiently small in
order for the algorithm to converge.

Gradient Calculation. The gradient of the objective function with
respect to the beamlet intensities can be calculated by using the chain
rule in multiple dimensions: Given that the objective is a function of the
dose distribution we have
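$$\frac{\partial f}{\partial x_j} = \sum_i \frac{\partial f}{\partial d_i}\, \frac{\partial d_i}{\partial x_j}$$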

The partial derivative of the voxel dose di with respect to the beamlet
weight xj is simply given by the corresponding element of the dose-
deposition matrix:
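$$\frac{\partial d_i}{\partial x_j} = D_{ij}$$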
The partial derivative of the objective function with respect to dose in
voxel i describes by how much the objective function changes by varying
the dose in voxel i. For the quadratic objective function
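$$f(d) = \frac{1}{|T|} \sum_{i \in T} \left( d_i - d^{pres} \right)^2$$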

the components of the gradient vector are given by
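$$\frac{\partial f}{\partial x_j} = \frac{2}{|T|} \sum_{i \in T} \left( d_i - d^{pres} \right) D_{ij},$$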

which has an intuitive interpretation. The total change in the objective function value due to changing the intensity of beamlet j is obtained by summing over the contributions of all voxels. The contribution of a voxel is given by the dose error (di − dpres) multiplied by the influence Dij of beamlet j on voxel i. If the dose di exceeds the prescribed dose, the
voxel’s contribution is positive; voxels that are underdosed yield a
negative contribution to the gradient component. If the gradient
component is negative after summing over the contributions of all
voxels, the impact of the underdosed voxels dominates. A step in the
direction of the negative gradient corresponds to increasing the beamlet
weight xj, thus reducing the extent of underdosing.

Handling the Nonnegativity Constraint. So far, only the objective
function f was considered, not taking into account the nonnegativity
constraint on the beamlet intensities. Applying the gradient descent
algorithm without accounting for the nonnegativity constraint leads to
negative intensities for some of the beamlets, which is not meaningful.
Different extensions of the gradient descent algorithm exist in order to
ensure positive beamlet weights.
One method consists in simply setting all negative beamlet intensities
to zero after each gradient step. Formally, this corresponds to a
projection algorithm for handling bound constraints. An alternative
approach is based on a variable transformation. In this case, a new
optimization variable is introduced for every beamlet, which is defined
as the square root of the intensity. Thus, the beamlet intensity, given by
the squared value of the variable, is always positive, while the
optimization variable can take any value. This way, the constrained
optimization problem is converted into a fully unconstrained problem.
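To make these steps concrete, the following Python/NumPy sketch runs projected gradient descent on the quadratic target objective, handling the nonnegativity constraint by setting negative intensities to zero after each step. The dose-deposition matrix, prescription, step size, and function name are illustrative assumptions, not values from any clinical system.

```python
import numpy as np

def fmo_projected_gradient(D, d_pres, target_mask, n_iter=2000, step=0.05):
    """Minimal fluence map optimization sketch: quadratic target objective,
    nonnegativity enforced by projecting negative intensities to zero.
    D           : (n_voxels, n_beamlets) dose-deposition matrix [Gy/MU]
    d_pres      : prescribed target dose [Gy]
    target_mask : boolean array marking the target voxels
    """
    n_target = target_mask.sum()
    x = np.zeros(D.shape[1])                 # start with all beamlets off
    for _ in range(n_iter):
        d = D @ x                            # dose distribution d = Dx
        residual = np.where(target_mask, d - d_pres, 0.0)
        grad = (2.0 / n_target) * (D.T @ residual)   # gradient of the objective
        x = np.maximum(x - step * grad, 0.0)         # descent step + projection
    return x

# Toy example: 3 voxels (2 target, 1 normal tissue), 2 beamlets.
D = np.array([[1.0, 0.2],
              [0.3, 1.0],
              [0.4, 0.4]])
target = np.array([True, True, False])
x = fmo_projected_gradient(D, d_pres=2.0, target_mask=target)
print(np.round(x, 2), np.round(D @ x, 2))
```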

Improvements to Gradient Descent


The generic gradient descent algorithm shows slow convergence in
practical IMRT optimization problems. Improvements to the generic
gradient descent algorithm can be made mainly in three aspects:

1. Selecting an appropriate step size using line search algorithms.
2. Improving the descent direction by including second derivative information.
3. Improving the handling of constraints using more advanced algorithms for constrained optimization.

For the first and third aspects, the reader is referred to the advanced
optimization literature. The second aspect is outlined below.

Including Second Derivatives. The generic gradient descent algorithm
considers the first derivative of the objective function at the current
fluence map x. This can be interpreted as finding a hyperplane that is
tangential to the objective function at x. The convergence properties of
iterative optimization algorithms can be improved by including second
derivative (i.e., curvature) information. This can be interpreted as
finding a quadratic function that is tangential to the objective function at
x. The iterative optimization algorithm, known as the Newton method,
then performs a step toward the minimum of the quadratic
approximation.
To formalize this concept, we consider a second-order Taylor
expansion of the objective function f at the fluence map x:
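$$f(x + \Delta x) \approx f(x) + \sum_j \frac{\partial f}{\partial x_j}\, \Delta x_j + \frac{1}{2} \sum_{j,k} \frac{\partial^2 f}{\partial x_j\, \partial x_k}\, \Delta x_j\, \Delta x_k$$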

By defining the Hessian H as the matrix of second derivatives, this can be written as:
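$$f(x + \Delta x) \approx f(x) + \nabla f(x)^{T} \Delta x + \frac{1}{2}\, \Delta x^{T} H(x)\, \Delta x$$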

The idea of the Newton method consists in taking a step Δ x such that
we reach the minimum of the quadratic approximation. For the special
case that the original objective function f is a quadratic function, the
approximation is exact, and thus the Newton method finds the optimal
solution in a single step. Generally, f will not be a purely quadratic
function. However, it is assumed that a Newton step will approach the
optimum faster than a step along the gradient direction.
To calculate the Newton step Δx*, we set the gradient of f with respect
to Δx to zero, which yields the condition
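$$\nabla f(x) + H(x)\, \Delta x = 0$$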

Thus, the Newton step is given by
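$$\Delta x^{*} = -H(x)^{-1}\, \nabla f(x)$$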

This leads to a modified iterative optimization algorithm in which the
beamlet intensities are updated according to
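$$x^{k+1} = x^{k} - \alpha\, H(x^{k})^{-1}\, \nabla f(x^{k})$$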

We can further note that the Newton method has a natural step size α
= 1.
In practical IMRT optimization, the pure Newton method is not
applied. A naïve computation of the Newton step involves the
calculation of the Hessian matrix at point x, inverting the Hessian
matrix, and multiplying the inverse Hessian H(xk)−1 with the gradient
vector. In IMRT optimization, the size of the Hessian matrix is given by
the number of beamlets squared. Therefore, the explicit calculation and
inversion of the Hessian is often computationally prohibitive. Thus,
IMRT optimization employs the so-called quasi-Newton methods, which
rely on an approximation of the Newton step. One of the most popular
methods that is applied in IMRT planning is the limited-memory BFGS (L-BFGS) quasi-Newton algorithm. In this algorithm, the descent direction H(xk)−1∇f(xk) is approximated based on the fluence maps and gradients evaluated during the previous iterations of the algorithm, which avoids a costly matrix inversion. A comprehensive description of the L-BFGS algorithm can be found in the textbook by Nocedal and Wright (6).

Convexity
Many objective functions commonly applied in IMRT planning are
convex. This is in particular the case for the piecewise quadratic
objective, linear objectives, and the generalized EUD for exponents |α|
> 1. The convexity property of objective and constraint functions has
important implications for the optimization of fluence maps. An
optimization problem defined through a convex objective function f and
convex constraint functions gs has the property that every local minimum is also a global minimum. Thus,
gradient descent–based optimization algorithms do reliably find the
optimal fluence map. The only nonconvex objectives commonly applied
in practice are DVH constraints. However, practical experience suggests
that the nonconvexity of DVH constraints does not cause severe local
minima-related issues in IMRT planning.

LEAF SEQUENCING
In this section, we discuss ways to deliver intensity-modulated radiation
fields. The section is focused on IMRT delivery using conventional Linacs
equipped with an MLC. This represents, by far, the most widely used
IMRT technique, although it is not the only possible form of IMRT.
Historically, IMRT delivery with compensators has been performed in
many centers. In that technique, an intensity-modulated field is created
using an absorber placed in the beam path in the Linac head. The
absorber causes an exponential attenuation of the fluence. By varying
the thickness of the absorber across the beam profile, the desired
intensity-modulated field can be created. Compensators had to be
custom made for every patient and every field, and were typically cast
in lead. This required a machine shop connected to the radiotherapy
department. Nowadays, the use of computer-controlled MLCs, which
eliminates the need for patient-specific hardware, has replaced
compensator-based IMRT delivery.
FIGURE 9.11 Illustration of beam collimation for IMRT delivery using multi-leaf collimators and
jaws.

Beam Shaping and Multi-Leaf Collimators


This section briefly introduces MLCs. We focus on the main aspects of
MLCs that are relevant for IMRT planning and delivery as described in
the remainder of this chapter. For details on the mechanical and
technical aspects, the reader is referred to the literature (3). Figure 9.11
illustrates the main components used to collimate the radiation beam:
the jaws and the MLC. The MLC is the primary collimation device that
defines the shape of the beam. It consists of thin sheets of tungsten,
which are moved in and out of the beam using computer-controlled
electric motors. Each leaf has a considerable height (measured in beam
direction) of approximately 5 to 10 cm in order to keep the transmission
of radiation through closed leaves low. In contrast, each leaf is only a
few millimeters wide to yield a projected beamlet width of 5 mm or 10 mm at the
isocenter. The jaws represent rectangular field collimators upstream of
the MLC. Figure 9.12 illustrates the use of the MLC for beam collimation
in beam’s-eye-view. A variety of terms are used to refer to a radiation
field produced by an MLC. In this chapter, we use the term aperture;
other common terms are segment or MLC opening.
Depending on the MLC model, there are a number of constraints on
leaf motion and leaf positioning. This limits the set of apertures that can
be delivered by an MLC. With the latest generation of MLCs some of
these restrictions are eliminated or mediated, but especially for older
models some of the following restrictions may apply:

• Interdigitation: For some MLCs it is not possible for neighboring leaf
pairs to cross, that is, the tip of the left leaf cannot move past the tip of
a neighboring right leaf. The leaf configuration of leaf pairs 7 and 8 in
Figure 9.12 would be prohibited. The interdigitation constraint has
been eliminated for most modern MLCs.

FIGURE 9.12 Illustration of beam collimation in beam’s-eye-view using an MLC and jaws. The
MLC leaves (gray bars) are used primarily for beam shaping. The jaws (red and yellow blocks)
are typically placed in a post-processing step to irradiate the smallest rectangular field that covers
the MLC aperture. The jaws reduce transmission through closed MLC leaves. In addition, for
MLCs that require a finite gap between the left and right leaf tip, closed leaf pairs can be hidden
behind the jaws, as illustrated in rows 1 and 8.

• Maximum overtravel: Most linear accelerators have a 40 × 40-cm field of view at the isocenter. However, for some MLCs it is not possible for the left leaf to travel all the way to the right side of the field of view. The
maximum distance that the leaf tip can travel beyond the isocenter
projection is called the maximum overtravel. The constraint implies
that the MLC cannot deliver small apertures far away from the
isocenter.
• Minimum leaf gap: For some MLCs a leaf pair cannot fully close, that
is, a minimum gap between the right and left leaf tips has to remain.
• Maximum leaf speed: In addition to restrictions on leaf positioning,
MLCs have dynamic constraints. Leaves cannot move faster than a
maximum speed; typical values are 3 cm/s or 6 cm/s.

Aperture Decomposition of Fluence Maps


It is intuitively clear that the superposition of multiple distinct radiation
fields formed by an MLC may yield an intensity-modulated field. This is
schematically illustrated in Figure 9.13.
In IMRT planning, the inverse problem needs to be solved, that is,
given an optimized fluence map, we have to determine a set of apertures
that closely reproduce the fluence map. This is called the leaf sequencing
problem. We assume for now that the fluence map is discretized into
evenly spaced fluence levels. A closer look at Figure 9.13 demonstrates
that the leaf sequencing problem does not have a unique solution, that
is, a given fluence map can be decomposed into a set of apertures in
many different ways. We consider MLC rows 3 and 4 in Figure 9.13,
which yield the same fluence, created with a distinct sequence of leaf
openings. MLC row 3 uses a sliding window decomposition, in which the
right and left leaves move unidirectionally from left to right. In contrast,
MLC row 4 uses a close-in technique, in which case the first aperture
corresponds to the largest field opening. Subsequent apertures shrink the
field and deliver additional fluence at the beamlets that have higher
intensity. In Figure 9.13 and throughout this section, we assume that the
fluence over an open field is homogeneous, which is applicable to Linacs
with flattening filter. Although not discussed here, leaf sequencing
methods can be extended to flattening filter free (FFF) delivery of IMRT,
which has the advantage of higher dose rates.
FIGURE 9.13 Schematic illustration of the decomposition of a fluence map (right panel) into
apertures. The positions of MLC leaf ends are indicated by the green bars. It is assumed that
each aperture delivers one unit of fluence; the colors yellow, orange, and red indicate one, two,
and three units of fluence, respectively.

Sliding Window Sequencing


We consider the sliding window decomposition of fluence maps in more
detail (9). To that end, we consider a single leaf pair. The upper panel in
Figure 9.14 shows an example of one row of a fluence map for a single
leaf pair. The bottom panel shows the sliding window type aperture
decomposition. In the example, both leaves move unidirectionally from
left to right.
FIGURE 9.14 Illustration of sliding window sequencing. The upper panel shows the fluence map
(vertical bars) corresponding to a single leaf pair. The bottom panel shows the set of apertures
(horizontal bars) to realize the fluence map.

It is intuitive that the gradients in the fluence map, that is, changes in
the intensity between neighboring beamlets, determine the leaf
positions. In the example, the fluence increases by 4 units between
beamlet 1 and beamlet 2. This determines that, during the delivery of 4
units of fluence, beamlet 1 has to be blocked by the left leaf while
beamlet 2 is exposed. Likewise, the fluence decreases by 1 unit between
beamlets 2 and 3, which determines that during the delivery of 1 unit of
fluence, beamlet 3 has to be blocked by the right leaf while beamlet 2 is
exposed. We call an increase in the fluence from one beamlet to the next
higher numbered beamlet a positive gradient, and a decrease a negative
gradient. It is clear that the sum of positive gradients (SPG) equals the sum
of negative gradients. In sliding window sequencing, the positive
gradients uniquely determine the left leaf positions, while the negative
gradients uniquely determine the right leaf positions. In the first
aperture, the left leaf is positioned to the left of beamlet 1, the right leaf
is positioned where the first negative gradient occurs, which is between
beamlet 2 and 3. For the second aperture, the left leaf stays in the same
position while the right leaf moves to the next negative gradient
position, which is between beamlets 3 and 4.
It is intuitive that an irregularly shaped fluence map with several
peaks and valleys requires more apertures to deliver than smooth fluence
maps. It can be shown that the minimum total number of MU to deliver
a fluence map is given by the SPG. Sliding window sequencing is
therefore optimal regarding the total number of MU since it always
reproduces a fluence map with the shortest possible beam-on time.
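The construction above can be summarized in a short Python sketch for a single leaf pair; the function name and the fluence profile are hypothetical (chosen so that its gradients match the example discussed above), and MLC hardware constraints such as minimum gaps or leaf-speed limits are ignored.

```python
import numpy as np

def sliding_window_single_row(fluence):
    """Unidirectional (sliding window) sequencing for one MLC leaf pair.
    `fluence` is a 1D array of non-negative beamlet intensities in MU.
    Returns, per beamlet, the cumulative MU at which the right (leading)
    leaf uncovers it and the left (trailing) leaf covers it again."""
    f = np.asarray(fluence, dtype=float)
    f_prev = np.concatenate(([0.0], f[:-1]))   # intensity of the left neighbor
    pos_grad = np.maximum(f - f_prev, 0.0)     # positive fluence gradients
    neg_grad = np.maximum(f_prev - f, 0.0)     # negative fluence gradients
    close_mu = np.cumsum(pos_grad)             # left-leaf crossing times (MU)
    open_mu = np.cumsum(neg_grad)              # right-leaf crossing times (MU)
    assert np.allclose(close_mu - open_mu, f)  # each beamlet receives its fluence
    total_mu = close_mu[-1]                    # equals the sum of positive gradients
    return open_mu, close_mu, total_mu

# Hypothetical profile with a +4 gradient between beamlets 1 and 2 and a
# -1 gradient between beamlets 2 and 3, as in the example discussed above.
open_mu, close_mu, total_mu = sliding_window_single_row([1, 5, 4, 2, 3, 0])
print(open_mu, close_mu, total_mu)
```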

Sequencing as an Optimization Problem


As illustrated in Figure 9.13, the leaf sequencing problem does not have
a unique solution. The degeneracy of the problem provides some
freedom that can be exploited, that is, the sequencing step can aim at
determining apertures that have desirable features. Criteria for good
aperture sets are:

• The total number of MU to be delivered is small, which corresponds to
the total beam-on time.
• The total number of apertures is small.
• Leaf travel is minimized, that is, the MLC leaves move as little as
possible.
• Apertures should have regular shapes and very small apertures are
avoided.
• The fluence map is reproduced exactly or as closely as possible without
prior discretization.
• All MLC constraints are satisfied.
The sliding window sequencing method is optimal regarding the total
number of MU. However, other sequencing methods can potentially
reduce the total number of apertures needed. To that end, the
sequencing problem can be formulated as an optimization problem.
While the FMO problem is a continuous optimization problem for which
gradient-based optimization methods are applied, the sequencing
problem is discrete and requires different optimization techniques. The
interested reader is referred to the literature. For example, the work by
Engel (10) describes a sequencing algorithm that yields the minimum
number of MU and simultaneously approximately minimizes the number
of apertures.

Step-and-Shoot Delivery Versus Dynamic Delivery


In IMRT, two types of delivery are distinguished: step-and-shoot delivery
and dynamic delivery. In step-and-shoot IMRT, the fluence map is
sequenced into a set of apertures as described above. Each aperture is
irradiated individually, that is, the MLC leaves move to the desired
position, and the beam is turned on to deliver the specified number of
MU. Subsequently, the beam is turned off while the MLC leaves are
moved to shape the next aperture. In dynamic delivery, the MLC leaves
move while the treatment beam is on. In this case, the sequencing task
consists in determining the trajectories of MLC leaves, that is, the leaf
positions as a function of time.
FIGURE 9.15 Illustration of dynamic IMRT delivery using a sliding window technique. The red
lines in the upper panel show the positions of the left and right leaf as a function of time. The
green line in the bottom panel shows the corresponding fluence profile.

Dynamic delivery is frequently associated with a sliding window type
delivery, because this gives rise to a constructive method to determine
the leaf trajectories (11). To that end, we consider a generalization of
the sliding window aperture decomposition in Figure 9.14. Figure 9.15
schematically illustrates continuous trajectories of the left and right leaf
in one MLC row. The horizontal axis shows the position of the leaf end,
while the vertical axis shows the time when the leaf end traverses a
given position. At every beamlet position in the MLC row, the effective
fluence of the beamlet is given by the exposure time. Initially, both
leaves are positioned on the left side and the right leaf covers the
beamlet. At time t1 the right leaf end traverses the beamlet and opens
the radiation field at that position. At a later time point t2 the left leaf
traverses the beamlet and closes the radiation field. The fluence of the
beamlet is proportional to the time interval t2 – t1 during which the
beamlet is exposed. Thus, the distance of right and left leaf on the
vertical axis determines the corresponding fluence profile (green line in
the bottom panel).
In practice, the MLC leaves cannot move arbitrarily fast and have to
respect a maximum leaf speed. In Figure 9.15, the maximum leaf speed
constraint corresponds to a minimum slope of the leaf trajectory. To
allow a leaf to move a certain distance on the horizontal axis, a
minimum amount of time has to pass. This is indicated by the black
dashed line in the upper panel.
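A simple constructive rule, sketched below in Python under the assumption of a constant dose rate and a uniform sampling of the leaf-motion axis (the function and parameter names are illustrative, not the original text's algorithm statement), marches across the row and moves the leading (right) leaf at maximum speed wherever the fluence rises and the trailing (left) leaf at maximum speed wherever it falls; the other leaf is then placed so that the exposure time t2 – t1 reproduces the desired fluence.

```python
import numpy as np

def sliding_window_trajectory(fluence, dx=0.5, v_max=3.0, dose_rate=1.0):
    """Leaf arrival times for dynamic sliding window delivery.

    fluence   : desired fluence [MU] sampled at positions x_k = k*dx [cm]
    dx        : spatial sampling of the profile [cm]
    v_max     : maximum leaf speed [cm/s]
    dose_rate : constant dose rate [MU/s]

    Returns t_right (time the leading/right leaf uncovers x_k) and t_left
    (time the trailing/left leaf covers x_k); t_left - t_right reproduces the
    requested exposure time at every position, and both leaves always need at
    least dx / v_max to advance by one sample, so the speed limit is respected.
    """
    f = np.asarray(fluence, dtype=float) / dose_rate   # required exposure time [s]
    t_right = np.zeros_like(f)
    t_left = np.zeros_like(f)
    t_left[0] = f[0]
    for k in range(1, len(f)):
        dt_travel = dx / v_max
        if f[k] >= f[k - 1]:                  # rising fluence: leading leaf at v_max
            t_right[k] = t_right[k - 1] + dt_travel
            t_left[k] = t_right[k] + f[k]
        else:                                 # falling fluence: trailing leaf at v_max
            t_left[k] = t_left[k - 1] + dt_travel
            t_right[k] = t_left[k] - f[k]
    return t_right, t_left

x = np.arange(0, 10, 0.5)
profile = 20 * np.exp(-0.5 * ((x - 5) / 2) ** 2)    # smooth example profile [MU]
t_r, t_l = sliding_window_trajectory(profile)
print(f"total beam-on time: {t_l.max():.1f} s")
```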
Dynamic delivery is frequently associated with sliding window type
delivery, and the two terms are sometimes used synonymously. However, dynamic
delivery is not per se tied to sliding window trajectories. In principle,
other sequencing methods that allow for bidirectional leaf motion could
be developed and used. In fact, delivery of VMAT can be considered as
dynamic delivery with bidirectional leaf motion.

DIRECT APERTURE OPTIMIZATION


Historically, IMRT planning was developed as the two-step approach of
FMO plus sequencing. However, there are a number of disadvantages
related to the two-step approach. For example, in step-and-shoot IMRT, a
large number of apertures may be required to faithfully reproduce a
fluence map. Other problems relate to dose calculation accuracy during
the FMO step. In order to cope with the limitations of the two-step
approach, DAO methods are being developed and integrated into
commercial TPS. In this section, we first summarize some of the
limitations of the two-step approach. Subsequently, approaches to DAO
are discussed, which directly determine the shapes and intensities of
apertures.

Limitations of the FMO + Sequencing Approach


Limitations of the two-step approach can broadly be categorized into
three types:
1. The fluence map is not accurately reproduced by the set of apertures.
This is primarily a problem in step-and-shoot IMRT if the total number
of apertures is kept small.
2. In step-and-shoot IMRT, the leaf ends are positioned at the boundary
between two beamlets after the sequencing step. Especially for large 1
× 1-cm beamlets, the dose distribution can be improved by
positioning the leaves at intermediate positions.
3. Even if the leaf sequencing step reproduces the fluence map exactly,
there will still be a discrepancy between the FMO dose distribution
used during plan optimization and the dose that is actually delivered
by the set of apertures.

While the first two limitations are quite apparent, the third aspect is
more complex. There are multiple reasons for dose discrepancy. Some
are inherent to the dose-deposition matrix concept, which does not take
into account higher-order effects on the incident fluence that the MLC
causes. Others are related to compromises being made between accuracy
and computational performance in the FMO stage.
Dose calculation accuracy: The calculation of the dose-deposition
matrix often uses a simplified dose calculation algorithm to speed up the
computation. For example, the dose-deposition matrix may be based on
a pencil beam algorithm while the final dose distribution of the
apertures may be calculated with a convolution-superposition algorithm.
In addition, the dose-deposition matrix may not store small scatter dose
contributions far away from the central axis of the beamlet in order to
reduce memory requirements. This leads to dose discrepancy between
the sequenced FMO solution and the final dose distribution. This issue is
not an inherent limitation of the two-step approach, and using accurate
dose calculation methods for computing the dose-deposition matrix
could mitigate the problem. However, in practice fast treatment plan
optimization is desired, which requires compromises.
Tongue-and-groove effect: Other dose calculation problems are
inherent to the dose-deposition matrix concept. For mechanical reasons,
two neighboring MLC leaves cannot fit flush against each other, so a
small gap remains between them; such a gap would lead to radiation
leaking through. In order to avoid such inter-leaf leakage, many MLCs
adopt a tongue-and-groove design, which is schematically illustrated in
Figure 9.16. Let us consider an aperture consisting of two neighboring
beamlets in adjacent leaf pairs. The dose-deposition matrix concept used
in FMO assumes linearity. This means, if both beamlets are combined to
a single aperture, the resulting dose distribution is predicted to be the
same compared to the situation in which both beamlets are delivered
individually as separate apertures. However, for MLCs with tongue-and-
groove design, this is not true. The regions where both leaves overlap
are now blocked as soon as one of the two leaves is closed. This leads to
an underdosage of the region of the beamlet boundary if both beamlets
are delivered as separate apertures.

FIGURE 9.16 Illustration of MLC leaves with tongue-and-groove design.

Leaf transmission: During the FMO step, the fluence of beamlets may
be zero if the beamlet is not beneficial for the treatment plan. Also, the
sequencing method typically assumes that the fluence for closed leaves is
zero. In reality this is only approximately true as there is some
transmission of radiation through closed MLC leaves. The effect is
mitigated by the jaws and is small when considering a single aperture.
However, the leaf transmission effect can add up for treatment plans
consisting of a large number of irregularly shaped apertures.
Mitigation of dose discrepancies: There are approaches to mitigate
the effects that lead to discrepancies between the dose distributions at
the FMO stage and after sequencing. One approach consists in adding
regularization terms to the FMO problem to favor smooth fluence maps
that require fewer apertures to deliver. In this context, the L1-norm
regularization term is of particular interest, which has edge-preserving
properties and favors piece-wise constant fluence maps (12,13).
Furthermore, enhancements to the sequencing algorithm have been
devised, which for example, aim to reduce tongue-and-groove effects
(14).
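As a hedged illustration of the first idea, the snippet below adds a total-variation (L1-norm of neighboring-beamlet differences) penalty to an FMO-type objective; the weighting factor is a made-up value and would have to be tuned case by case.

```python
import numpy as np

def tv_regularized_objective(x, D, d_pres, weight_tv=0.05):
    """FMO-type quadratic objective plus an L1 (total-variation) penalty.

    x      : fluence map organized as (n_rows, n_beamlets_per_row)
    D      : dose-deposition matrix mapping the flattened fluence to voxel doses
    d_pres : prescription dose per voxel
    The TV term penalizes the summed gradients along each row and therefore
    favors piece-wise constant fluence maps that need fewer MU and apertures.
    """
    d = D @ x.ravel()
    fidelity = np.mean((d - d_pres) ** 2)        # dosimetric objective
    tv = np.abs(np.diff(x, axis=1)).sum()        # edge-preserving L1 regularizer
    return fidelity + weight_tv * tv
```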

Direct Aperture Optimization for Step-and-Shoot IMRT


In light of the above-mentioned limitations of the two-step approach of
FMO plus sequencing, it appears desirable to directly optimize the
shapes and intensities of apertures based on the dosimetric objective
function f(d). As a modification of the FMO problem in the section on
Fluence map optimization, the DAO problem can be stated as follows:
Determine the intensities and shapes of K apertures that minimize the
objective function f(d) subject to the constraints gs(d) ≤ cs. Optimizing
shapes of the apertures refers to optimizing the positions of all MLC leaves.
In order to appreciate the inherent difficulties in DAO, we recall the
favorable properties of the FMO problem. For FMO, the optimization
variables are the beamlet intensities, while the objective is a function of
the dose distribution. Given the dose-deposition matrix elements as fixed
parameters, the dose distribution is a linear function of the beamlet
intensities. A small change in the intensity of one beamlet leads to a
small linear change in the dose to a voxel. Thus, the objective function
can be written explicitly as a function of the optimization variable. In
addition, if the objective is a convex function of dose, the overall
optimization problem is convex and gradient-based algorithms can be
used to determine the global optimum reliably.
The DAO problem does not have this favorable property. The dose
distribution is a more complex and nonconvex function of the
optimization variables (the MLC leaf positions), which cannot easily be
stated in closed form. If we consider the dose in a voxel as a function of
the position of an MLC leaf, the dependence is given by a smoothed step
function. While both leaves are fully open, the leaf pair contributes its
maximum dose to the voxel. While one leaf moves to close the field, the
leaf pair’s dose contribution goes to zero. However, for the most part, a
change in the leaf position has little impact on the dose to a particular
voxel. Only in a small region that corresponds to the projection of the
voxel onto the MLC plane does a small change in the leaf position yield a
steep change in the voxel’s dose.
In addition, there is a combinatorial aspect to the DAO problem. An
IMRT plan typically consists of 5 to 11 beam directions. Some beams
contribute more dose than others, and not all beam directions require
the same amount of intensity modulation. Therefore, the best
distribution of a limited number of apertures over the incident beam
angles is unclear a priori. Assigning the same number of apertures to
every beam may not yield the best treatment plan.
All approaches to DAO have to cope with these intrinsic difficulties.
DAO approaches can broadly be categorized into three types:

1. Stochastic search methods
2. Aperture generation methods
3. Gradient-based leaf position optimization

Stochastic Search Methods


Stochastic search methods for DAO include simulated annealing (15) and
genetic algorithms (16,17). These approaches typically start with a
geometry-based initialization of aperture shapes, that is, apertures that
conform to the target volume, possibly excluding projections of the
OARs. Subsequently, random perturbations of leaf positions are
generated and dose distribution d and objective function f (d) are
evaluated. If the treatment plan improves, the modification of the
aperture set is accepted; otherwise it is accepted only with some probability.
Stochastic search methods have a number of advantages. First, random
perturbations of leaf positions can be restricted such that all MLC
constraints are fulfilled. In addition, these methods can in principle
escape from local optima of the objective function. Simulated annealing–
based DAO has been commercialized by PROWESS in the Panther TPS
for step-and-shoot IMRT. Furthermore, the method has been adapted to
VMAT (section on Arc therapy) and is used in the RapidArc module in
the Eclipse TPS marketed by Varian.

Aperture Generation Methods


The second class of methods refers to techniques that iteratively
generate new apertures that are added to a treatment plan. Such an
approach has been suggested by Romeijn et al. (18) and Carlsson (19).
Here, we illustrate the main idea behind the approach. We assume that
the current treatment plan consists of n – 1 apertures and we are
interested in generating the nth aperture that yields a large improvement
to the current treatment plan. To that end, we consider the partial
derivative ∂f/∂xj of the objective function f with respect to the intensity
of a beamlet j. If the derivative is negative, adding the beamlet with a
small positive intensity reduces the objective function, that is, improves
the treatment plan. Furthermore, if |∂f/∂xj | is large, the beamlet
promises a large improvement to the treatment plan. Therefore, a
plausible approach to identifying a valuable new aperture consists in
finding a deliverable aperture that contains many beamlets j for which
∂f/∂xj < 0 and |∂f/∂xj | is large. Romeijn et al. (18) describe an efficient
algorithm to identify the aperture A for which the summed gradient

Σj∈A ∂f/∂xj

is minimized. The aperture is added to the treatment plan and the
intensities of all n apertures are optimized. The problem of optimizing
the aperture intensities is formally identical to the FMO problem and can
be solved using the algorithms described in the section on Fluence map
optimization. The dose distribution of an aperture Ak in the patient is
obtained by summing the dose-deposition matrix elements over the
beamlets contained in the aperture, Σj∈Ak Dij, and the dose distribution
is simply given by summing the contributions of all apertures,

di = Σk yk Σj∈Ak Dij

where yk is the intensity of aperture k. The iterative generation of new
apertures can be stopped once a maximum number of apertures is
reached or the plan quality is sufficiently high.
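In the simplest setting, that is, ignoring coupling constraints between MLC rows, the search for the most promising new aperture decomposes row by row into finding the contiguous run of beamlets with the most negative summed gradient. The sketch below is a simplification of the published algorithms, with illustrative inputs; it uses a minimum-subarray scan per row.

```python
import numpy as np

def most_negative_segment(gradient_row):
    """Return (left, right, value) of the contiguous beamlet interval with the
    most negative sum of objective gradients (a minimum-subarray scan)."""
    best_val, best_l, best_r = 0.0, 0, 0        # empty interval allowed (row stays closed)
    run_val, run_l = 0.0, 0
    for j, g in enumerate(gradient_row):
        if run_val > 0.0:                        # restart the run once it becomes positive
            run_val, run_l = 0.0, j
        run_val += g
        if run_val < best_val:
            best_val, best_l, best_r = run_val, run_l, j + 1
    return best_l, best_r, best_val              # beamlets best_l .. best_r-1 are opened

def generate_aperture(grad):
    """grad[n, j] = df/dx for beamlet j in MLC row n; returns leaf openings per row."""
    return [most_negative_segment(row)[:2] for row in grad]

grad = np.array([[ 0.2, -0.5, -0.8, 0.1],
                 [-0.1, -0.2,  0.4, -0.3]])
print(generate_aperture(grad))     # -> [(1, 3), (0, 2)]
```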

Local Leaf Position Optimization*


In the remainder of this section, we describe the third approach of
gradient-based leaf position optimization in more detail. The reason is
that this approach is implemented in several of the widely used
commercial TPS including Pinnacle (Philips) (20), RayStation (Raysearch
Laboratories), and Monaco (Elekta).
In this approach to DAO, we assume that we are given an initial set of
apertures. This set of apertures can, for example, be obtained by
sequencing a fluence map solution, or from the aperture generation
method discussed above. Due to the nonconvex nature of the problem,
the initial set of apertures should represent a good starting point for leaf
position refinement, that is, ideally forms a decent treatment plan
already. The set of K apertures, indexed by k, is characterized by

• aperture intensities yk.
• leaf positions for the left and right leaf edges: Lkn and Rkn where n is
the index of the MLC leaf pair.

The goal of gradient-based leaf refinement is to optimize the objective
function f(d) with respect to the leaf positions and aperture weights. In
particular, we allow the leaf positions to change continuously, that is,
the leaf edge does not have to be positioned at a beamlet boundary.

Approximate Dose Calculation


We first formulate the dose distribution as a function of the optimization
variables, that is, leaf positions and aperture intensities. The dose in
voxel i is given by the sum of the contributions of the individual
apertures, weighted with their intensity yk. Furthermore, the dose
contribution of each aperture is given by the contributions of each
MLC leaf pair:

To proceed, we have to further characterize the dose contribution of an
individual MLC leaf pair as a function of its leaf positions. For
that purpose, we consider a particular MLC row n in aperture k. We first
imagine that the left leaf is located at the left-most position at the edge
of the field; and we consider the dose contribution of the MLC row as a
function of the right leaf position, which we denote by the function
φ. Let us further assume that the voxel i is within the beam’s-eye-
view of the MLC row such that the MLC row contributes a significant
dose to voxel i. We know that the function φ has the shape of a
smooth step function: If the right leaf is located at the left most position,
the MLC row is closed and the dose contribution is zero. While the right
leaf is moving to the right, the dose contribution increases
monotonically. This is illustrated in Figure 9.17.
We now consider the dose-deposition matrix representation of the
dose to further characterize the function φ. We note that we know
the function φ at discrete points, namely when the right leaf is
positioned at an edge of a beamlet. Let Δx denote the size of a beamlet,
and let j denote the beamlet index in leaf motion direction. At position
jΔx, the dose contribution is simply given by the sum over the exposed
beamlets, that is,

φ(jΔx) = Σj′=1,…,j Dinj′

where we introduce the dose-deposition matrix notation Dinj to denote the
dose contribution of beamlet j in MLC row n to voxel i. For a continuous
leaf position in between, we consider a linear interpolation (Figure
9.17). This corresponds to the assumption that the dose distribution of a
beamlet that is half exposed is given by the beamlet dose distribution
with half the intensity. This approximation will break down for large
beamlet size Δx; however, for practical beamlet sizes of 5 mm, the
approximation yields adequate results. Using the function φ we can
express the dose contribution of an MLC row as

φ(Rkn) − φ(Lkn)
The first term represents the beamlets that are exposed by the right
leaf; the second term subtracts the beamlets that are blocked by the left
leaf.
FIGURE 9.17 Illustration of the function φ, representing the dose contribution of an MLC row
to a voxel as a function of the right leaf position. The function is known at discrete positions where
the right leaf is positioned at a beamlet boundary and the dose contribution can be expressed as
a sum of dose-deposition matrix elements. In between, the dose contribution is interpolated
linearly (reproduced with permission from (4)).
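The sketch below implements this bookkeeping for one MLC row (the array names and dose-deposition values are made up for illustration): φ is tabulated at the beamlet boundaries via a cumulative sum, evaluated at continuous leaf positions by linear interpolation, and the row contribution is φ(R) − φ(L).

```python
import numpy as np

def phi(leaf_pos, D_row):
    """Dose contribution of one MLC row to a voxel as a function of the right
    leaf position (in units of beamlets, with the left leaf fully retracted).
    D_row[j] is the dose-deposition element of beamlet j for this voxel."""
    boundaries = np.arange(len(D_row) + 1)               # 0, 1, ..., J
    cumulative = np.concatenate(([0.0], np.cumsum(D_row)))
    return np.interp(leaf_pos, boundaries, cumulative)   # linear interpolation (Fig. 9.17)

def row_contribution(L, R, D_row):
    """Dose from the open interval between left leaf L and right leaf R."""
    return phi(R, D_row) - phi(L, D_row)

D_row = np.array([0.1, 0.4, 0.7, 0.3])   # per-beamlet dose to one voxel (made-up values)
print(row_contribution(L=0.5, R=2.5, D_row=D_row))   # half of beamlets 0 and 2, all of 1
```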

Optimizing Leaf Positions


To optimize leaf positions and aperture intensities we can utilize
gradient descent–based algorithms for nonlinear optimization. To apply
the generic gradient descent algorithm described in the section on
Fluence map optimization, we have to evaluate the gradient of the
objective function with respect to leaf positions and aperture intensities.
This can be achieved with the help of the function φ. Let us consider
the derivative with respect to one of the right leaves, Rkn:

∂f/∂Rkn = Σi (∂f/∂di)(∂di/∂Rkn) = Σi (∂f/∂di) yk (∂φ/∂Rkn)

The calculation of the partial derivatives ∂f/∂di is identical to the case
of FMO as described in the Fluence map optimization section. Using the
linear approximation illustrated in Figure 9.17, the derivative of the
dose contribution function φ only depends on the beamlet where the
leaf edge is currently located. If we further assume that the leaf position
is measured in units of beamlets (i.e., moving a leaf by the width of one
beamlet corresponds to a distance of 1), the derivative of φ is simply
given by

∂φ/∂Rkn = Dinj with j = jkn

where jkn is the index of the beamlet where the leaf edge is located. The
derivative of the voxel dose with respect to the aperture intensity is
simply given by the dose contribution of the aperture for unit intensity:

∂di/∂yk = Σn [φ(Rkn) − φ(Lkn)]

Evaluating the dose gradient of the objective function with respect to
the optimization variables provides the prerequisites for the use of a
gradient-based nonlinear optimization algorithm. In contrast to FMO,
DAO considers two types of optimization variables simultaneously, that
is, leaf positions and aperture intensities. Therefore, the use of second
derivatives in the optimization algorithm is important. In particular,
quasi-Newton methods like L-BFGS can be used. Variations of gradient-
based leaf position optimization are described by De Gersem et al. (21)
and Cassioli and Unkelbach (22).
DAO provides the opportunity to directly account for restrictions of
the MLC. These can be integrated into the optimization problem in the
form of bound constraints and linear constraints. In addition, DAO
provides better ways of mitigating dose calculation inaccuracies
compared to FMO. For example, at an intermediate stage of gradient-
based leaf position optimization, the dose distributions of the current
aperture set can be calculated using a convolution-superposition
algorithm. In subsequent iterations, changes can be approximated by a
pencil beam–based dose-deposition matrix. If leaf position changes
remain small, the error in the dose distribution is minor.
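The following sketch (hypothetical array shapes, building on the φ bookkeeping above) evaluates the two kinds of derivatives; in practice they would be assembled into a full gradient vector and passed, together with bound constraints such as L ≤ R, to a quasi-Newton routine.

```python
import numpy as np

def d_f_d_right_leaf(df_dd, y_k, D_rown, R_kn):
    """Derivative of the objective with respect to one right leaf position.

    df_dd  : array over voxels i of the partial derivatives df/d(d_i), as in FMO
    y_k    : intensity of aperture k
    D_rown : D_rown[i, j], dose of beamlet j in this MLC row to voxel i
    R_kn   : current right leaf position in units of beamlets
    Under the linear interpolation of Fig. 9.17, d(phi)/dR equals the
    dose-deposition element of the beamlet that contains the leaf edge.
    """
    j_edge = int(np.clip(np.floor(R_kn), 0, D_rown.shape[1] - 1))
    dphi_dR = D_rown[:, j_edge]                  # per-voxel derivative of phi
    return float(np.dot(df_dd, y_k * dphi_dR))

def d_f_d_intensity(df_dd, aperture_dose_unit):
    """Derivative with respect to an aperture intensity: the aperture's dose
    distribution for unit intensity, weighted by df/d(d_i)."""
    return float(np.dot(df_dd, aperture_dose_unit))
```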

ARC THERAPY
In IMRT, the patient is irradiated from discrete beam directions.
Typically between 5 and 9 beam directions are used. While the gantry
moves from one angle to the next one, the treatment beam is off. Arc
therapy refers to a radiotherapy delivery mode in which the treatment
beam is continuously on while the gantry rotates around the patient.
Conformal arc therapy has long been used as a delivery mode for
conformal therapy, especially for small spherical lesions that do not
require intensity modulation. In conformal arc therapy, the treatment
field is fixed during gantry rotation or conforms to the projection of the
target volume. VMAT refers to an extension of IMRT to a rotational
treatment mode, delivered at conventional Linacs equipped with an
MLC. The treatment field does not necessarily conform to the target at
every angle. Instead, an effectively intensity-modulated field is delivered
over an arc sector.
The motivation for VMAT has been twofold: First, the patient is
irradiated from all gantry angles rather than a relatively small number of
discrete angles. This bears the potential for better and more conformal
treatment plans. Second, VMAT bears the potential for shorter treatment
times because the treatment beam is continuously on. The idea of
delivering intensity-modulated fields through arc therapy was suggested
by Yu as early as 1995 (23). However, a clinical implementation of
VMAT was delayed in part by the lack of TPS that support this
technique. In 2008, Varian introduced the RapidArc planning module in
the Eclipse planning system and provided a commercial VMAT solution.
Around the same time, Philips Medical Systems provided the SmartArc
module in the Pinnacle planning system to support VMAT. Today, most
treatment systems including Monaco (Elekta) and RayStation (Raysearch
Laboratories) support VMAT planning.
Before the clinical adoption of VMAT, specialized hardware to
deliver intensity-modulated fields in a rotational mode was developed.
The device was proposed by Mackie (24) and was commercialized
as Tomotherapy, resulting in the first patient treatment in 2002. The
design of Tomotherapy machines resembles a serial CT scanner in which
the x-ray tube is replaced by a Linac that produces a therapeutic MV
treatment beam. The radiation source continuously rotates while the
patient is shifted through the device. The patient is irradiated slice-by-
slice using a fan beam, whose intensity is modulated using a customized
binary MLC. From a treatment planning perspective, Tomotherapy can
build on the FMO concepts described in the Fluence map optimization
section. The leaf sequencing problem is simple compared to MLCs at
conventional Linacs. In this section, we therefore focus on VMAT, which
poses new challenges for treatment planning. For further details on
Tomotherapy, we refer the interested reader to the review by Mackie
(25) and references therein. A more extended review of treatment plan
optimization approaches to VMAT is provided by Unkelbach (26). For
further information on the clinical implementation of VMAT, we suggest
the review by Yu (27).

The VMAT Treatment Planning Problem


VMAT can be thought of as a dynamic MLC technique to deliver IMRT,
with the modification that the gantry does not stand still at discrete
angles but continuously rotates while radiation is delivered. The dose
distribution delivered by a VMAT plan is determined through three types
of variables:

1. The MLC leaf trajectories, that is, the positions of the left leaves Ln
and the right leaves Rn as a function of time;
2. The gantry angle φ(t) as a function of time;
3. The dose rate δ(t) as a function of time.

In principle, the jaws, collimator and treatment couch also could
move. However, here we assume that the collimator and couch are at
fixed angles, and that the jaws can be positioned in a post-processing
step with minor impact on the treatment plan. For dose calculation, a
VMAT arc is discretized into small arc sectors. For example, a 360-
degree arc is divided into 180 arc sectors of 2-degree length. For
treatment planning, a dose-deposition matrix can be calculated at the
center of each arc sector.
FIGURE 9.18 Relation between leaf trajectories and effective fluence in VMAT delivery. The red
area corresponds to the exposure time of the beamlet in arc sector φ and is proportional to the
beamlet’s effective fluence.

Given the trajectories for leaves, gantry, and dose rate, a VMAT plan
delivers an effective fluence at any gantry angle φ. The relation
between effective fluence and leaf trajectories is illustrated in Figure
9.18 for a single leaf pair n. Let us for simplicity assume that the dose
rate is constant over the arc sector φ. Then the effective fluence is
determined by the time that beamlet j is exposed by the MLC leaves. In
Figure 9.18, the red-colored area enclosed by the leaf trajectories and
the beamlet boundaries corresponds to the effective exposure time. This
method to relate leaf positions to fluence involves the common
approximation made in Figure 9.17: If at time t, a beamlet is partially
exposed by the MLC leaves, the time point’s contribution to the
beamlet’s effective fluence is proportional to the exposed fraction of the
beamlet. The dose distribution is obtained by multiplying the effective
fluence with the dose-deposition matrix at each arc sector.
VMAT planning aims at determining short trajectories that lead to
high plan quality, that is, VMAT plans that only take a short amount of
time to deliver. Thereby, the trajectories have to satisfy a number of
machine constraints. In particular, the MLC leaves have to satisfy the
maximum leaf speed constraint. In addition, the gantry speed is limited
to one full rotation per minute. Limitations on the dose rate are highly
machine dependent. Some Linacs allow for continuously varying dose
rates while others allow for discrete values only. All machines have a
maximum dose rate.
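As an illustration of this relation, the sketch below (illustrative discretization and units, not a vendor implementation) integrates the exposed fraction of each beamlet over the time samples of one arc sector to obtain its effective fluence; the sector dose then follows by multiplying with the sector's dose-deposition matrix.

```python
import numpy as np

def effective_fluence(t, left, right, dose_rate, beamlet_edges):
    """Effective fluence of one MLC row over one arc sector.

    t             : time samples [s] covering the arc sector
    left, right   : left/right leaf positions [cm] at the time samples
    dose_rate     : dose rate [MU/s] at the time samples (may vary)
    beamlet_edges : boundaries [cm] of the beamlet grid (len = n_beamlets + 1)
    A partially exposed beamlet contributes in proportion to its exposed fraction.
    """
    lo, hi = beamlet_edges[:-1], beamlet_edges[1:]
    width = hi - lo
    fluence = np.zeros(len(lo))
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        # exposed length of each beamlet for the aperture at sample k
        exposed = np.clip(np.minimum(hi, right[k]) - np.maximum(lo, left[k]), 0.0, None)
        fluence += dose_rate[k] * dt * exposed / width
    return fluence   # MU per beamlet; sector dose = D_sector @ fluence

edges = np.arange(0.0, 5.5, 0.5)              # 10 beamlets of 5 mm
t = np.linspace(0.0, 1.0, 11)                 # one 1-second arc sector
left = np.linspace(0.0, 2.0, 11)              # trailing leaf trajectory
right = np.linspace(1.0, 4.0, 11)             # leading leaf trajectory
print(effective_fluence(t, left, right, np.full(11, 6.0), edges))
```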

DICOM Specification of a VMAT Plan


VMAT planning approaches differ in the way that the leaf trajectories
are parameterized. The most common representation is driven by the
DICOM specification of a treatment plan, which is used to communicate
the plan between the TPS and the treatment machine control system.
Using DICOM standard, the treatment plan is defined via a sequence of
control points. Each control point is defined through a set of leaf
positions, a gantry angle, and the total number of MU that is delivered
up to that control point. Thereby, the DICOM specification gives rise to
formulating VMAT planning as a DAO problem. For example, a prostate
patient may be treated with a single 360-degree arc, which is divided
into 90 arc sectors of 4-degree length. For VMAT planning, one control
point (i.e., one aperture) is assigned to the center of each arc sector.
Subsequently, the DAO methods discussed in the Direct aperture
optimization section can be applied for plan optimization.
The DAO algorithm will return the aperture shapes and aperture
intensities for each control point along the arc. It is apparent that a
given aperture intensity yφ, which corresponds to the number of MU
delivered over the arc sector, depends on the gantry speed sφ, the dose
rate δφ, and the length of the arc sector Δφ, and is given by

yφ = δφ Δφ / sφ

Large aperture intensities can be realized by a large dose rate or a
slow gantry speed. Since the motivation for VMAT treatments is in part
the reduction in treatment time, the machine controller should select the
largest possible gantry speed.
In principle, a DAO algorithm applied to VMAT planning yields a
deliverable plan if the gantry speed can vary between the maximum and
very small values. However, without modifications, the resulting
treatment plan may be inefficient regarding delivery time. As an
example, we assume that we want to deliver a 360-degree arc in 90
seconds at constant gantry speed. Then, the gantry sweeps over each 4-
degree arc sector in 1 second. If the maximum leaf speed of the MLC is 3
cm/s, the MLC leaves can only move by 3 cm between two neighboring
control points. Otherwise, the gantry speed has to be reduced, leading to
longer delivery times. Hence, VMAT planning typically aims at limiting
the leaf travel between adjacent control points—in the interest of
treatment time and also dose calculation accuracy. For gradient-based
leaf position optimization methods, constraints on leaf travel correspond
to linear constraints on the leaf positions.
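A back-of-the-envelope sketch of this coupling, with made-up machine limits: the time spent in an arc sector is dictated by whichever of gantry rotation, MU delivery, or leaf travel is slowest, so a large sector MU or large leaf travel forces a reduced gantry speed.

```python
def sector_time(delta_phi, mu, max_leaf_travel,
                gantry_speed_max=6.0, dose_rate_max=10.0, leaf_speed_max=3.0):
    """Minimum time [s] to traverse one arc sector.

    delta_phi       : sector length [deg]
    mu              : monitor units assigned to the sector (aperture intensity)
    max_leaf_travel : largest leaf displacement to the next control point [cm]
    The machine limits (deg/s, MU/s, cm/s) are illustrative values only;
    6 deg/s corresponds to one full rotation per minute.
    """
    return max(delta_phi / gantry_speed_max,
               mu / dose_rate_max,
               max_leaf_travel / leaf_speed_max)

# A 4-degree sector with 8 MU and 6 cm of leaf travel: the leaves are limiting.
print(sector_time(delta_phi=4.0, mu=8.0, max_leaf_travel=6.0))   # 2.0 s
```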

Illustration of a VMAT Plan


Figure 9.19 illustrates a VMAT plan for a prostate cancer patient treated
with a single 360-degree arc, delivered by moving the gantry counter-
clockwise from 180 to –180 degrees. The arc is evenly divided into 90
arc sectors that are assigned one aperture each. The circle around the
patient indicates the coplanar incident beam directions. The yellow bars
depict the number of MU that is delivered over each arc sector. In the
foreground, one of the apertures along the arc is shown.

FIGURE 9.19 Illustration of a VMAT plan for a prostate cancer patient generated in RayStation
4.0. The treatment plan consists of a single 360-degree arc divided into 90 sectors. The dose
distribution is shown on a coronal slice of the patient’s CT.

Approaches to VMAT Plan Optimization


The VMAT implementations in commercial systems heavily build on the
concepts that were previously developed for IMRT planning. Bzdusek et
al. (28) suggest a three-step approach to VMAT, using all three concepts
of IMRT planning.
1. In the first step, FMO is performed at discrete, equi-spaced beam
angles. In practice, 15 to 20 beam angles are used.
2. In the second step, the resulting fluence maps are converted into
apertures that are distributed over the corresponding arc sector.
Assuming 15 beam angles are considered in the FMO step, leading to
24-degree arc sectors, each fluence map can be segmented into six
apertures, which results in one control point every 4 degrees. In this
arc sequencing step, it is desired that neighboring apertures are similar.
For that reason, most arc sequencing methods use a sliding window
type decomposition, which leads to a natural ordering of the
apertures.
3. In the third step, DAO methods are applied to refine the leaf positions
and aperture intensities. Assuming that the first two steps yield a good
starting point, local gradient-based DAO methods can be used in the
third step.

Such a three-step approach is implemented in several commercial
systems including the SmartArc module in Pinnacle (Philips Medical
Systems), the RayArc module in RayStation (Raysearch Laboratories),
and in Monaco (Elekta). Planning systems differ in the exact
implementation of each step and not all details are disclosed.
The work by Otto (29) is the basis for the RapidArc module in the
Eclipse planning system (Varian). This approach primarily depends on
DAO using simulated annealing as described by Shepard et al. (15), and
uses a geometry-based initialization of aperture shapes. Other VMAT
optimization approaches proposed in the literature are based on sliding
window delivery of a fluence map over an arc sector (30–32). An
overview of VMAT planning approaches can be found in the review by
Unkelbach et al. (26).

SPECIALIZED TOPICS IN IMRT PLANNING


Multi-Criteria Planning Methods
IMRT treatment planning has to find a tradeoff between inherently
conflicting clinical goals. The traditional approach to explore these
tradeoffs consists in manually choosing relative weights for different
objectives. This can lead to a time-consuming trial-and-error process.
Several methods have been proposed to improve the planning process in
that regard.

Prioritized Optimization
One approach is referred to as prioritized optimization (33) or
lexicographic ordering (34). It is motivated by the assumption that the
clinical objectives can be ranked according to their priority. For
example, in the prostate cancer example shown in Figure 9.2, the main
planning goal may be to deliver the prescribed dose to the target
volume. The second planning goal is the sparing of the anterior rectal
wall. Additional objectives are related to bladder dose and conformity,
but are considered of lower priority.
A prioritized optimization scheme performs a sequence of IMRT
optimizations. In the first step, we obtain the treatment plan that yields
the best possible plan only considering the highest ranked objective. In
the prostate example, we may minimize a quadratic objective function
for the target volume:

minimize fT(d) subject to gs(d) ≤ cs

where gs(d) ≤ cs represents the hard constraints on the dose distribution.
This yields an optimal value fT* for the quadratic objective function for
the target volume. In the next step, the target objective is turned into a
constraint, while minimizing the objective with the second highest
priority, which could be the EUD in the rectal wall:

minimize fR(d) subject to fT(d) ≤ fT* + ε and gs(d) ≤ cs

In this formulation, the EUD in the rectal wall is minimized, subject to
the constraint that the target dose homogeneity deteriorates by at most ε
compared to the optimally achievable value fT*. Solving this
optimization problem yields the optimal rectal wall EUD fR* that is
achievable under the given constraints. In the third step, the objective
function for dose conformity can be minimized as the only objective,
subject to the constraints that the target and rectum objectives only
deteriorate by a small ε from their optimal values fT* and fR*.
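A toy version of this sequence is sketched below using scipy's SLSQP solver; the dose-deposition matrices, prescription, and ε are made up, and the EUD is approximated by a generalized mean.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_vox_T, n_vox_R, n_beamlets = 30, 20, 15
D_T = rng.random((n_vox_T, n_beamlets))          # made-up dose-deposition matrices
D_R = 0.5 * rng.random((n_vox_R, n_beamlets))
d_pres, a = 2.0, 8.0                             # prescription dose and EUD parameter

f_T = lambda x: np.mean((D_T @ x - d_pres) ** 2)           # target homogeneity objective
f_R = lambda x: np.mean((D_R @ x) ** a) ** (1.0 / a)       # rectal-wall EUD (generalized mean)
bounds = [(0.0, None)] * n_beamlets
x0 = np.full(n_beamlets, 0.5)

# Step 1: best achievable target objective f_T*
step1 = minimize(f_T, x0, bounds=bounds, method="SLSQP")
f_T_star, eps = step1.fun, 0.05

# Step 2: minimize the rectal EUD subject to f_T(x) <= f_T* + eps
cons = [{"type": "ineq", "fun": lambda x: f_T_star + eps - f_T(x)}]
step2 = minimize(f_R, step1.x, bounds=bounds, method="SLSQP", constraints=cons)
print("f_T* =", round(f_T_star, 4), " rectal EUD =", round(step2.fun, 3))
```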

Pareto-Optimality
Prioritized optimization schemes rely on a ranking of the objectives, and
make the assumption that higher ranked objectives are not compromised
to improve lower ranked objectives. This is a potential drawback in
situations where a large improvement in one objective can be achieved
by only a minor degradation of a higher ranked objective, or if the
ranking is unclear a priori.
For simplicity, we consider only two objectives below, for example,
target dose homogeneity and rectal wall EUD in a prostate case. By
varying the tolerance level ε in the prioritized optimization scheme, one
can generate a sequence of treatment plans as illustrated in Figure 9.20.
The plans obtained in this manner define the set of Pareto-optimal
treatment plans that form the Pareto surface. A treatment plan is Pareto-
optimal if it is not possible to improve the plan in one objective without
worsening at least one other objective.

Interactive Pareto-Surface Navigation Methods


For radiotherapy planning, we are interested in choosing a treatment
plan from the Pareto surface. However, which treatment plan to pick may
depend on the patient’s or physician’s preferences. Especially for
difficult cases such as the head-and-neck cancer example in Figure 9.3,
the physician or treatment planner may want to explore the tradeoffs
between different planning goals. Both tasks are straightforward in a
two-dimensional tradeoff as illustrated in Figure 9.20. In this case, the
Pareto surface can be approximated with a few treatment plans that are
evenly spaced on the one-dimensional Pareto surface in the clinically
relevant range. The treatment planner can then choose one of these pre-
computed treatments plans. However, IMRT planning typically involves
tradeoffs between more than two objectives (say, 5 to 10). It is apparent
that in higher dimensions the approximation of the Pareto surface is
more challenging due to the curse of dimensionality. In addition,
exploring the tradeoff space is nontrivial. The development of a
treatment planning framework has to address two problems:

1. Developing methods to efficiently represent the Pareto surface with a
small number of Pareto-optimal treatment plans, which are called
database plans. One method to achieve this is the so-called Sandwich
method described by Craft et al. (35).
2. Providing a graphical user interface and the underlying mathematical
methods that allow the treatment planner to interactively explore and
visualize the tradeoffs between conflicting planning goals.

FIGURE 9.20 Schematic illustration of the Pareto surface for the tradeoff between target dose
homogeneity and rectal wall EUD. All treatment plans below the Pareto surface are impossible to
achieve; treatment plans above the Pareto surface are undesirable because they can be
improved in one objective without worsening the second objective. Points on the Pareto surface
can be generated using the constrained method, that is, by minimizing the rectal wall EUD,
subject to different target homogeneity constraints (reproduced with permission from (4)).

One goal of Pareto-surface navigation methods is to allow for a
continuous exploration of treatment plans. To that end, not only the
discrete database plans are considered but also linear combinations of
plans. We assume that a treatment plan is defined through the fluence
map x. Given two treatment plans with fluence maps x1 and x2, we can
form a convex combination of the two treatment plans by considering
the averaged fluence map

x = q x1 + (1 − q) x2

which is obtained by averaging the beamlet intensities beamlet-by-
beamlet, using a mixing parameter q ∊ [0,1]. If x1 and x2 are Pareto-
optimal treatment plans, the convex combination of the two plans is
expected to also be a “good” treatment plan (although not strictly
Pareto-optimal).
In a TPS, the planner has to be provided with tools to navigate in the
space of convex combinations of database plans. In a practical scenario,
the planner may have evaluated a current treatment plan, and would
like to improve the treatment plan regarding one particular objective,
say the rectum dose. The TPS has to provide a user interface to express
this request. Figure 9.21 shows the Pareto-surface navigation interface in
the RayStation TPS (version 4.0), distributed by RaySearch Laboratories.
Each objective is associated with a slider. By moving the slider, the user
can request an improvement of the treatment plan with respect to the
chosen objective. In the background, the TPS translates the slider
movement into a new convex combination of database plans.
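The underlying operation is simple: the navigated plan is a convex combination of the database fluence maps, and its dose follows by linearity. A sketch with illustrative arrays:

```python
import numpy as np

def navigated_plan(weights, database_fluences, D):
    """Combine Pareto-optimal database plans into a navigated plan.

    weights           : convex combination weights (non-negative), updated
                        whenever the user drags a slider
    database_fluences : array (n_plans, n_beamlets) of database fluence maps
    D                 : dose-deposition matrix (n_voxels, n_beamlets)
    """
    q = np.asarray(weights, dtype=float)
    q = q / q.sum()                          # normalize onto the simplex
    x = q @ database_fluences                # beamlet-by-beamlet averaging
    return x, D @ x                          # navigated fluence map and its dose

# Two database plans, mixed with q = 0.5 each:
X = np.array([[1.0, 2.0, 0.0], [0.0, 2.0, 2.0]])
D = np.array([[0.3, 0.2, 0.1], [0.1, 0.4, 0.3]])
x, d = navigated_plan([0.5, 0.5], X, D)
print(x, d)
```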

Beam Angle Optimization


IMRT planning primarily refers to determining the fluence maps of
incident beam directions and their delivery with MLCs. In this context it
is assumed that the incident beam directions are given. In practice,
treatment planners often use a template of standardized beam directions
for a given treatment site. For example, for prostate treatments a set of 7
evenly spaced coplanar beam directions is used. For treatment sites that
exhibit relatively little geometric variation across patients, such
templates often provide satisfying plan quality. However, treatment
planning studies suggest that some treatment sites, for example
intracranial lesions, may benefit from individualized noncoplanar beam
directions. Despite this benefit, current TPS have very limited support
for automated selection of optimized beam directions. Typically, the
treatment planner selects beam angles manually based on experience
and the patient’s geometry.
Approaches to automated beam angle optimization (BAO) in the
literature can broadly be categorized in two types:

• Beam angle selection and FMO are considered jointly. That means, the
quality of a treatment plan is judged by an objective function f (d) as
used for FMO. The goal of BAO is then to simultaneously select a set of
beam angles and their associated fluence maps such that the objective
function f is minimized.
• Beam angle selection is separated from the FMO problem. In the first
step, beam angles are selected based on simplified measures to score
the quality of a beam direction. This is done mostly based on geometric
features. In the second step, FMO is performed for the fixed set of beam
angles.

Approaches in the first category are typically formulated as a
combinatorial optimization problem. In that setting, BAO aims to select
a small subset of n beams from a larger pool of N candidate beam
directions. This yields a very large number of possible beam ensembles,
given by the binomial coefficient (N choose n). No computationally efficient
algorithms exist to determine the optimal beam ensemble, and it is in
part this combinatorial nature of BAO that has prevented a practical
implementation in commercial TPS. Most approaches to combined FMO
and beam angle selection amount to solving a large number of FMO
problems for different beam ensembles. This includes stochastic search
methods like simulated annealing (36) as well as integer programming
methods (37). For a review of BAO from a methodology perspective we
suggest the paper by Ehrgott et al. (8). BAO research suffers from the
lack of shared patient data sets. Works that compare different BAO
algorithms on a common patient data set using the same objective
functions are scarce. The work by Bangert et al. (38) is an exception and
compares three stochastic search methods and an iterative beam
selection heuristic, observing similar performance of the investigated
methods.

FIGURE 9.21 Graphical user interface for multi-criteria IMRT planning in the RayStation treatment
planning system (version 4.0). Each objective is associated with a slider. The user can drag
sliders to improve the treatment plan regarding the corresponding objective. The user request is
translated into a new convex combination of database plans and the corresponding DVH and the
dose distribution are displayed. By locking sliders (visible as the check boxes to the left of each
slider) the user has additional control over the navigation process. For example, by locking the
slider for target dose homogeneity, the user can request that the navigation is restricted to
treatment plans for which the target homogeneity is no worse than indicated by the current slider
position.

Integrating Uncertainty and Organ Motion in IMRT Planning


Setup uncertainties and organ motion in radiotherapy are traditionally
handled via a safety margin approach. The clinical target volume (CTV)
is expanded by a margin to form the planning target volume (PTV).
Radiotherapy planning aims at delivering the prescribed dose to the
PTV. As long as the CTV stays within the PTV, it can be assumed that the
CTV receives the prescribed dose despite variations of the CTV location.
IMRT planning is formulated as a mathematical optimization problem,
which provides the possibility of incorporating a model of patient
motion directly into the IMRT treatment planning problem. In this case,
the manual definition of the PTV becomes obsolete and, in some cases,
better sparing of normal tissues can be achieved.

Accounting for Respiratory Motion


As an example, we consider the handling of respiratory motion in the
lung. Tumors located close to the diaphragm that are not attached to the
chest wall may move approximately 2 cm in the superior–inferior direction
between exhale and inhale. Nowadays, the magnitude of motion can be
assessed using 4D CT, which provides snapshots of the patient geometry
at 10 phases during the respiratory cycle. Traditionally, motion is
accounted for using the internal target volume (ITV) concept. The ITV
represents the union of the target volume defined on each individual
phase. Treatment planning aims at delivering the prescribed dose to the
entire ITV to ensure that the target volume receives the prescribed dose
in any phase of the breathing cycle.
The dose delivered to the surrounding lung tissue can be reduced via a
nonuniform dose distribution within the ITV. Rather than irradiating the
ITV homogeneously with the prescription dose, the dose can be higher in
regions that are covered by the tumor most of the time. Thereby, the
dose can be reduced in regions that are occupied by lung tissue most of
the time, and only rarely by the tumor. The dose inhomogeneities are to
be designed such that the tumor eventually receives the prescribed dose
while accumulating dose contributions from different phases in the
respiratory cycle. IMRT optimization provides the means to formalize
this idea.
We assume that a 4D CT provides a CT image for each respiratory
phase s. Typically, the exhale phase is used as a reference phase, which
is used to define OAR voxels and tumor voxels. IMRT planning can be
based on the cumulative dose that a voxel accumulates over the
breathing cycle, which can be approximated as

di = Σs ps dis with dis = Σj Dsij xj

Here, dis is the dose received by voxel i in phase s, Dsij is the dose-
deposition matrix in phase s, and ps is the relative amount of time that
the patient spends in phase s. The calculation of the dose-deposition
matrices in phase s represents a substantial practical difficulty. Dsij
represents the dose that the anatomical voxel i defined on the reference
phase receives from beamlet j in another phase s. Its calculation requires
a dose calculation on phase s, but also a deformable registration of the
dose distribution to the reference phase.
In the respiratory motion case as described so far, the motion is
assumed to be predictable in the sense that the cumulative dose
distribution can be calculated. Although this involves practical
challenges, it does not require conceptual changes in terms of the
optimization method used. IMRT planning can be performed by
minimizing the objective function f that is evaluated for the cumulative
dose.
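In code, this amounts to a probability-weighted sum of per-phase doses computed with the warped dose-deposition matrices; the sketch below (illustrative arrays, deformable registration assumed to be already applied) evaluates the cumulative dose and a quadratic objective on it.

```python
import numpy as np

def cumulative_dose(x, D_phases, p_phases):
    """Cumulative dose over the breathing cycle.

    x        : fluence map (n_beamlets,)
    D_phases : list of dose-deposition matrices, one per phase, each already
               mapped back to the reference (exhale) anatomy
    p_phases : fraction of time spent in each phase (sums to 1)
    """
    return sum(p * (D_s @ x) for p, D_s in zip(p_phases, D_phases))

def objective_4d(x, D_phases, p_phases, d_pres):
    """Quadratic objective evaluated on the accumulated (cumulative) dose."""
    d = cumulative_dose(x, D_phases, p_phases)
    return np.mean((d - d_pres) ** 2)
```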

Handling Uncertainties
The presence of uncertainty is different from the case of predictable
motion. For example, a systematic setup error implies that the dose
distribution delivered to the patient is not predictable and is inherently
uncertain. This requires conceptual changes regarding the formulation of
the treatment planning problem.
To illustrate the handling of uncertainty, we consider systematic setup
errors. For example, we can consider six patient shifts of ±5 mm in the
anterior–posterior, superior–inferior, and left–right directions. For each
patient shift a separate dose-deposition matrix Dsij can be calculated,
leading to different dose distributions ds (where s is now an index for
the error scenario). Since we consider a systematic error, only one of the
dose distributions ds can be realized, not an average. Generally, the goal
is to obtain a treatment plan that is good or acceptable for any error
scenario that is accounted for. There are mainly two approaches to
translate this notion into mathematical terms for IMRT planning: the
probabilistic approach and the worst-case approach.
In the probabilistic approach, a probability ps is assigned to each
scenario s. For example, a higher probability can be given to the nominal
scenario (i.e., no setup error occurs), and a lower probability is assigned
to setup error scenarios. IMRT treatment plan optimization is performed
by minimizing the expected value of the objective function f:

E[f] = Σs ps f(ds)

In words, the composite objective function is a weighted sum of
objectives evaluated for each error scenario, where a higher weight may
be given to likely scenarios, and a lower weight to less likely scenarios.
While the probabilistic approach can be seen as optimizing the average
plan quality, the worst-case approach aims at finding the treatment plan
that is as good as possible for the worst error scenario that is accounted
for. Formally, this can be formulated as minimizing the worst-case
objective maxs f(ds), where the maximum is taken over all error scenarios s.
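Both formulations reuse the per-scenario doses; a hedged sketch of the two composite objectives (the scenario matrices, probabilities, and objective are assumed inputs):

```python
import numpy as np

def expected_objective(x, D_scenarios, p_scenarios, f):
    """Probabilistic approach: probability-weighted sum of per-scenario objectives."""
    return sum(p * f(D_s @ x) for p, D_s in zip(p_scenarios, D_scenarios))

def worst_case_objective(x, D_scenarios, f):
    """Worst-case approach: the largest per-scenario objective value."""
    return max(f(D_s @ x) for D_s in D_scenarios)

# Example objective: quadratic deviation of the scenario dose from the prescription.
f = lambda d: float(np.mean((d - 2.0) ** 2))
```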

Methods to incorporate motion and uncertainty in IMRT planning


have been investigated in the literature. For example, Trofimov et al.
(39) consider respiratory motion for lung tumors, and Heath et al. (40)
extend this work to include uncertainties in the breathing trajectory.
Bohoslavsky et al. (41) consider random and systematic setup errors for
prostate treatments. In recent years, these methods have been applied to
intensity-modulated proton therapy (IMPT) to handle range
uncertainties. In IMPT, the PTV concept is fundamentally limited (42)
and cannot generally ensure robust treatment plans. The need for
methods to incorporate uncertainty directly into IMPT planning has led
to the first commercial implementations. The RayStation planning
system (version 4.5) implements a worst-case approach for handling
systematic setup errors (and range errors when applied to IMPT).

KEY POINTS
• Illustrate the need for intensity-modulation when treating concave
target volumes such as head-and-neck tumors, prostate cancer, or
spinal metastasis.

• Discuss fluence map optimization (FMO) and the formulation of
IMRT planning as a mathematical optimization problem.

• Review the traditional two-step approach of IMRT planning, that is,
FMO plus leaf sequencing.

• Understand gradient-based methods for direct aperture
optimization (DAO) to optimize MLC leaf positions.

• Provide an overview of treatment plan optimization approaches for
volumetric-modulated arc therapy (VMAT).

• Provide an introduction to advanced topics in IMRT planning
including multi-criteria optimization (MCO) and the incorporation of
uncertainty and organ motion.

QUESTIONS
1. How is a VMAT plan communicated between the treatment
planning system (TPS) and the Linac?
A. The TPS determines the optimal MLC leaf positions as a
function of time as well as the gantry speed and dose rate.
B. The TPS generates a sequence of control points defined
through leaf positions, gantry angle, and cumulative monitor
units.
C. The TPS optimizes incident fluence maps and the Linac control
system converts these into MLC apertures.
2. What are advantages of the sliding window leaf sequencing
method?
A. The conversion of a fluence map into a sliding window leaf
trajectory can be performed analytically without the need for
time-consuming optimization.
B. Sliding window sequencing yields the smallest number of
apertures.
C. Sliding window sequencing yields the smallest total number of
monitor units.
3. Which statement about direct aperture optimization (DAO) is
appropriate?
A. DAO is an important component of many VMAT planning
algorithms.
B. DAO has been developed for step-and-shoot IMRT and is
therefore not applicable to dynamic delivery techniques such
as VMAT.
C. DAO eliminates the need for mathematical optimization
methods in IMRT planning and hence makes IMRT planning
faster and better.
D. DAO can in part overcome the problem of dose degradation
that may occur in the leaf sequencing step following fluence
map optimization.
E. The DAO problem can be solved much more reliably compared
to the traditional fluence map optimization approach and
therefore leads to better treatment plans.
4. What is the motivation for multi-criteria optimization (MCO)?
A. MCO potentially provides better treatment plans because the
traditional planning approach does not yield Pareto-optimal
plans.
B. Allowing the treatment planner to assess tradeoffs is one of the
main motivations for MCO.
C. In MCO, direct aperture optimization or VMAT planning can
more easily be integrated compared to the traditional
planning approach.
D. MCO is a way to overcome the cumbersome tweaking of
objective weights, which can be time-consuming in traditional
planning.

ANSWERS
1. B VMAT plans are communicated via DICOM standard, that is,
via a sequence of control points. Gantry speed, dose rate, and
MLC leaf trajectories are determined by the machine controller
based on the sequence of apertures generated by the TPS.
2. A and C It is the main disadvantage of sliding window
sequencing that it may generate a large number of small
apertures.
3. A and D DAO addresses the shortcomings of the traditional
fluence map plus sequencing IMRT planning approach.
Although DAO was originally developed for step-and-shoot
IMRT, it is widely used in VMAT algorithms, in parts due to the
DICOM specification of a VMAT plan as a sequence of
apertures.
4. B and D The main motivation for MCO is to provide methods for
an efficient and interactive exploration of tradeoffs between
planning goals. This may in turn translate into improved
treatment plans. It is a common misconception that the
traditional planning method of assigning importance weights
does not yield Pareto-optimal plans. In fact, the same
optimization methods are used in MCO, except that importance
weights are determined by an algorithm rather than manually.
Incorporating DAO or VMAT into an MCO framework is
difficult and subject to ongoing research.

REFERENCES
1. Brahme A, Roos JE, Lax I. Solution of an integral equation
encountered in radiation therapy. Phys Med Biol. 1982;27:1221–
1229.
2. Bortfeld T. IMRT: a review and preview. Phys Med Biol. 2006;
51(13):R363–R379.
3. Webb S. Intensity-modulated Radiation Therapy. Boca Raton, FL: CRC
Press; 2001.
4. Boyer A, Unkelbach J. Intensity-modulated radiation therapy
planning. In: Brahme A, ed. Comprehensive Biomedical Physics, Vol 9,
Chapter 17, Elsevier; 2014.
5. Romeijn HE, Dempsey JF, Li JG. A unifying framework for multi-
criteria fluence map optimization models. Phys Med Biol.
2004;49(10):1991–2013.
6. Nocedal J, Wright SJ. Numerical Optimization. 2nd ed. Springer;
2006.
7. Bertsekas DP. Nonlinear Programming. 2nd ed. Belmont, MA: Athena
Scientific; 1999.
8. Ehrgott M, Güler C, Hamacher HW, et al. Mathematical optimization
in intensity-modulated radiation therapy. Ann Oper Res.
2010;175:309–365.
9. Bortfeld TR, Kahler DL, Waldron TJ, et al. X-ray field compensation
with multileaf collimators. Int J Radiat Oncol Biol Phys.
1994;28(3):723–730.
10. Engel K. A new algorithm for optimal multileaf collimator field
segmentation. Discrete Appl Math. 2005;152(1):35–51.
11. Stein J, Bortfeld T, Dorschel B, et al. Dynamic x-ray compensation
for conformal radiotherapy by means of multileaf collimation.
Radiother Oncol. 1994;32:163–173.
12. Li R, Xing L. Bridging the gap between IMRT and VMAT: Dense
angularly sampled and sparse intensity modulated radiation
therapy. Med Phys. 2011;38(9):4912–4919.
13. Kim H, Li R, Lee R, et al. Dose optimization with first-order total-
variation minimization for dense angularly sampled and sparse
intensity modulated radiation therapy (DASSIM-RT). Med Phys.
2012;39(7):4316–4327.
14. Kamath S, Sahni S, Palta J, et al. Optimal leaf sequencing with
elimination of tongue-and-groove underdosage. Phys Med Biol 2004;
49(3):N7–N19.
15. Shepard DM, Earl MA, Li XA, et al. Direct aperture optimization: a
turnkey solution for step-and-shoot IMRT. Med Phys.
2002;29(6):1007–1018.
16. Li Y, Yao J, Yao D. Genetic algorithm based deliverable segments
optimization for static intensity-modulated radiotherapy. Phys Med
Biol. 2003;48:3353–3374.
17. Cotrutz C, Xing L. Segment-based dose optimization using a genetic
algorithm. Phys Med Biol. 2003;48:2987–2998.
18. Romeijn HE, Ahuja RK, Dempsey JF, et al. A column generation
approach to radiation therapy treatment planning using aperture
modulation. SIAM J Optim. 2005;15:838–862.
19. Carlsson F. Combining segment generation with direct step-and-
shoot optimization in intensity-modulated radiation therapy. Med
Phys. 2008;35:3828–3838.
20. Hardemark A, Liander H, Rehbinder H, et al. Direct machine
parameter optimization with RayMachine in Pinnacle. Raysearch
Laboratories White Paper. 2003.
21. De Gersem W, Claus F, De Wagter C, et al. Leaf position
optimization for step-and-shoot IMRT. Int J Radiat Oncol Biol Phys.
2001;51:1371–1388.
22. Cassioli A, Unkelbach J. Aperture shape optimization for IMRT
treatment planning. Phys Med Biol. 2013;58(2):301–318.
23. Yu CX. Intensity-modulated arc therapy with dynamic multileaf
collimation: an alternative to tomotherapy. Phys Med Biol. 1995;
40(9):1435–1449.
24. Mackie TR, Holmes T, Swerdloff S, et al. Tomotherapy: a new
concept for the delivery of conformal radiotherapy. Med Phys. 1993;
20:1709–1719.
25. Mackie TR. History of tomotherapy. Phys Med Biol. 2006;51:R427–
R453.
26. Unkelbach J, Bortfeld T, Craft D, et al. Optimization approaches to
volumetric modulated arc therapy planning. Med Phys.
2015;42(3):1367–1377.
27. Yu CX, Tang G. Intensity-modulated arc therapy: principles,
technologies and clinical implementation. Phys Med Biol. 2011;
56(5):R31–R54.
28. Bzdusek K, Friberger H, Eriksson K, et al. Development and
evaluation of an efficient approach to volumetric arc therapy
planning. Med Phys. 2009;36:2328–2339.
29. Otto K. Volumetric modulated arc therapy: IMRT in a single gantry
arc. Med Phys. 2008;35:310–317.
30. Craft D, McQuaid D, Wala J, et al. Multicriteria VMAT optimization.
Med Phys. 2012;39:686–696.
31. Papp D, Unkelbach J. Direct leaf trajectory optimization for
volumetric modulated arc therapy planning with sliding window
delivery. Med Phys. 2014;41(1):011701.
32. Wang C, Luan S, Tang G, et al. Arc-modulated radiation therapy
(AMRT): a single-arc form of intensity-modulated arc therapy. Phys
Med Biol. 2008;53(22):6291–6303.
33. Wilkens JJ, Alaly JR, Zakarian K, et al. IMRT treatment planning
based on prioritizing prescription goals. Phys Med Biol.
2007;52:1675–1692.
34. Jee KW, McShan DL, Fraass BA. Lexicographic ordering: intuitive
multicriteria optimization for IMRT. Phys Med Biol. 2007;52:1845–
1861.
35. Craft D, Halabi TF, Shih HA, et al. Approximating convex Pareto
surfaces in multiobjective radiotherapy planning. Med Phys. 2006;
33(9):3399–3407.
36. Stein J, Mohan R, Wang XH. Number and orientations of beams in
intensity-modulated radiation treatments. Med Phys.
1997;24(2):149–160.
37. Lee EK, Fox T, Crocker I. Integer programming applied to intensity-
modulated radiation treatment planning. Annals of Operations
Research. 2003;119:165–181.
38. Bangert M, Ziegenhein P, Oelfke U. Comparison of beam angle
selection strategies for intracranial IMRT. Med Phys.
2013;40:011716.
39. Trofimov A, Rietzel E, Lu HM. Temporo-spatial IMRT optimization:
concepts, implementation and initial results. Phys Med Biol.
2005;50(12):2779–2798.
40. Heath E, Unkelbach J, Oelfke U. Incorporating uncertainties in
respiratory motion into 4D treatment plan optimization. Med Phys.
2009;36:3059–3071.
41. Bohoslavsky R, Witte MG, Janssen TM, et al. Probabilistic objective
functions for margin-less IMRT planning. Phys Med Biol.
2013;58(11):3563–3580.
42. Unkelbach J, Bortfeld T, Martin B, et al. Reducing the sensitivity of
IMPT treatment plans to setup errors and range uncertainties via
probabilistic treatment planning. Med Phys. 2009;36(1):149–163.

*This section was in parts adapted from the book chapter Boyer A, Unkelbach
J. Intensity-modulated radiation therapy planning. In: Brahme A, ed.
Comprehensive Biomedical Physics, Vol 9, Chapter 17, Elsevier; 2014.
10 Intensity-Modulated Proton
Therapy

Tony Lomax

INTRODUCTION
Proton therapy is becoming an increasingly relevant modality in
radiation therapy. In a recent review of proton therapy facilities
worldwide, it was reported that there are more than 50 particle therapy
facilities now in operation, with at least another 50 being built or in an
advanced stage of planning. Consequently, by the end of 2018 the number of particle therapy facilities will be approaching 100, and this trend is very likely to continue. In addition, by the
end of 2015, it is also predicted that, for the first time, the number of
treatment rooms having the capability of delivering pencil-beam
scanning (PBS) will overtake those delivering passive scattering (PS)
(1,2), with the number of PBS rooms predicted to expand exponentially in the coming years. Thus, it is becoming increasingly clear
that not only will proton therapy become a mainstream cancer
treatment, but that PBS will be the particle therapy modality of choice in
the future.
There are two main reasons for this paradigm change in proton
therapy. First is the inherent automation of the PBS approach in
comparison to PS, and second is the ability of PBS to deliver so-called intensity-modulated proton therapy (IMPT). The treatment
planning aspects of PBS and IMPT will be the focus of this chapter.

BASIC PHYSICS OF PROTON INTERACTIONS


Before looking at the treatment planning process for PBS, it is first
necessary to briefly review the physics of proton interactions with
matter and the basic characteristics of the PBS approach.

Energy Loss and Scattering


Typical depth dose curves for quasi-monoenergetic proton beams are shown as red curves in Figure 10.1. As can be seen, in contrast to the depth–dose curve for photons, these are characterized by a relatively low entrance dose, which gradually increases with penetration and culminates in the well-known Bragg peak. At the distal (far) end of this peak, the dose drops off very quickly. This characteristic is determined
by the inverse square relationship of dose deposition as a function of
particle velocity, which can be analytically described by the Bethe–Bloch
equation:

$$-\frac{dE}{dz} \propto \frac{1}{\beta^{2}} \qquad (10.1)$$

where dE/dz is the rate of change of energy of the proton (equivalent to the energy deposited to tissue) and β is the velocity of the particle expressed as a fraction of the speed of light (3). As such, in
comparison to photon beams, where the depth dose characteristics are
predominantly determined by photon absorption, the proton curve is
characterized by energy loss of the protons, which leads to the well-
defined range and shape of the Bragg peak in tissue. Thus, for a given medium, the higher the initial energy, the deeper the Bragg peak, and the lower the energy, the shallower its range.
However, the absolute range of the Bragg peak is also dependent on the
physical characteristics of the medium, and is determined by the
medium’s stopping power to protons. Typically, this is defined relative to
water (when it is then referred to as the relative proton stopping power).
Indeed, to a good approximation, this is a linear relationship, with the
geometric range in a medium with a relative stopping power of two
being half that of the range in water.
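As a simple worked example of this scaling (with round, illustrative numbers rather than data for any particular beam):

$$R_{\mathrm{medium}} = \frac{R_{\mathrm{water}}}{\mathrm{SP}_{\mathrm{rel}}} = \frac{20\ \mathrm{cm}}{1.25} = 16\ \mathrm{cm}$$

That is, a Bragg peak placed at 20 cm depth in water would reach only 16 cm in a hypothetical uniform medium with a relative stopping power of 1.25.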
In reality of course, it is impossible to deliver perfectly monoenergetic
proton beams, with most clinical beams having an energy spectrum (usually referred to as the momentum band) of about ±1%. This will
tend to broaden the Bragg peak above that of the pure energy loss curve
given by the Bethe–Bloch equation. In addition, as energy loss is a
statistical process, there is an additional broadening of the Bragg peak
due to statistical smearing, an effect referred to as range straggling.
Although a complex process, the amount of range straggling is
dependent on the range of the proton beam (and therefore the energy)
and can be approximated as broadening the width of the Bragg peak by
about 1% of its range (4).
In addition to energy loss, protons also undergo scattering processes,
mainly due to multiple Coulomb scattering (MCS) events (electrostatic
deflection of the protons as they pass close to the positively charged
nuclei of atoms). For instance, the additional broadening of a proton
pencil beam with an initial energy of 177 MeV (range in water ∼21 cm)
due to MCS will be about 4 mm (σ) at the Bragg peak. However, due to the
resulting divergence of the beam after such scattering, this broadening
can be much larger if the beam subsequently passes across air gaps (5)
or through regions of low density in the patient (6). This is an important
characteristic that needs to be taken into account during the treatment
planning process.

FIGURE 10.1 The concept of the “Spread-Out-Bragg-Peak,” showing how range (energy) shifted
Bragg peaks can be modulated in order to deliver a homogeneous depth–dose profile.

This angular-spatial divergence of a proton beam is generally represented using Fermi–Eyges transport theory. Although quite involved mathematically, in its simplest form the doubled spatial variance of a narrow proton beam can be represented in the form (5):

$$A'_{2,x}(z) = A_{2,x} + 2A_{1,x}\,z + A_{0,x}\,z^{2} + 2\int_{0}^{z} T(z')\,(z - z')^{2}\,dz' \qquad (10.2)$$

where A′2,x describes the doubled spatial variance of the beam width in the lateral x direction at position z (beam width in air plus MCS scattering in the medium), A0, A1, A2 are the moments of the angular-spatial distribution of the beam in air, and T(z′) is the scattering power of the medium. The last term (the integral over the scattering power) represents the additional scattering due to MCS in the medium. From these, the width (σ) of the beam in the medium, at distance z, is then given by:

$$\sigma_{x}(z) = \sqrt{\tfrac{1}{2}\,A'_{2,x}(z)} \qquad (10.3)$$

A similar equation is used for the y direction. For a detailed description of the scattering theory of protons, the reader is referred to Gottschalk et al. (7) and Safai et al. (5).
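For readers who prefer to see the book-keeping explicitly, the following Python sketch evaluates an Eyges-type width estimate numerically. The scattering-power function, the constant value used for it, and the assumption of zero initial spatial–angular covariance are illustrative choices, not a validated beam model.

```python
import numpy as np

def beam_sigma(z, sigma0, theta0, scattering_power, n_steps=200):
    """Schematic Eyges-type estimate of the lateral beam sigma at depth z (cm).

    sigma0           -- spatial sigma of the beam at the surface (cm)
    theta0           -- angular sigma of the beam at the surface (rad)
    scattering_power -- callable T(z') returning d<theta^2>/dz' in the medium
                        (rad^2/cm); supplied by the user, not a validated model
    """
    zp = np.linspace(0.0, z, n_steps)
    # Initial phase-space contribution, assuming zero spatial-angular covariance.
    var_initial = sigma0 ** 2 + (z * theta0) ** 2
    # Variance accumulated by MCS in the medium (Eyges integral).
    var_mcs = np.trapz(scattering_power(zp) * (z - zp) ** 2, zp)
    return np.sqrt(var_initial + var_mcs)

# Toy constant scattering power -- purely illustrative, not water data.
T_const = lambda zp: np.full_like(zp, 5.0e-5)   # rad^2/cm, assumed value
print(beam_sigma(z=10.0, sigma0=0.35, theta0=0.003, scattering_power=T_const))
```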

Nuclear Interactions
Whereas energy loss and scattering are the primary interactions of
protons with matter, there are also secondary processes that can have a
significant effect on calculated and delivered PBS fields. As protons
penetrate tissue, there is a small, but finite probability that they will
interact directly with an atomic nucleus, either in an elastic or inelastic
way. In the first, this will lead to a potentially wider deflection of the
proton, whereas in the second process, the proton will be absorbed from
the beam and secondary particles produced. This loss of proton fluence
as a function of depth is small (about 1% per centimeter of penetration
in water (4)), but nevertheless leads to a 20% loss of proton fluence at
the Bragg peak for an initial energy of 177 MeV. In addition, the
secondary particles and their trajectories can lead to an “extension” of
the lateral, low-dose tail of the beam's profile, together with a generally
forward projected background of secondary neutrons. For a more
detailed description of proton interactions with matter, the reader is
referred to the publications of Gottschalk (3) or Lomax (8).

LET and RBE


Before moving on to the technicalities of PBS, two more, somewhat
related properties of proton beams should be mentioned, namely linear
energy transfer (LET) and relative biological effectiveness (RBE). Let’s
first look at RBE.
There is plenty of evidence from in vitro cell experiments, that the
biological effectiveness of protons for the same applied physical dose is
somewhat higher than for photons. The reader is referred to the papers
by Paganetti et al. (9,10) for an excellent review of this. This enhanced
biological effect is defined as the RBE, and depending on the cell line
and end point can vary from 0.9 to 1.8 or even higher. It has also been
demonstrated that RBE tends to be highest for the lowest energies (i.e., in
the Bragg peak) and also increases as dose reduces. Despite this, most
proton centers still assume a global RBE of 1.1 to biologically “scale”
proton doses to allow for an average RBE effect.
In contrast to RBE, LET is a physical quantity that is, in principle, directly measurable. As such it can be more precisely defined and
related back to the underlying physics. In short, LET is a measure of the
local linear energy deposited to tissue and has the units of keV/μm. In
regions where protons are losing (depositing) energy rapidly, the LET
will be higher than where energy deposition is less intense. Thus, LET is
higher in the Bragg peak and distal fall-off region than in the plateau,
mainly due to the inverse relationship of energy deposition with particle
energy. In addition, although the relationship between LET and RBE is
complex, there is a clear correlation between higher LET and increased
RBE, and thus LET can be considered a good surrogate (but not a direct
predictor) for RBE.

PENCIL BEAM SCANNING


Pencil-beam scanned (PBS) proton therapy is a conceptually simple
technique (11,12), and is shown schematically in Figure 10.2. As protons
(and other particles used for therapy such as carbon and helium ions)
have a charge, they can be easily deflected by magnetic fields. Thus, if a
narrow beam of particles can be produced (a pencil beam), this can be
scanned laterally to the incident field direction such as to paint dose over
the target volume. If this is performed using a near monoenergetic beam,
this will deliver what is called a single energy layer, or a surface of Bragg
peaks, all with the same range in a uniform medium such as water.
However, as should be clear from Figure 10.1, the delivery of a single
monoenergetic layer will not necessarily ensure the delivery of a
homogeneous dose throughout the whole tumor. Thus, a full PBS field
will typically consist of a number of such layers, each modulated in
energy in order to “fill-in” the missing dose along the field direction.

FIGURE 10.2 Schematic representation of the pencil-beam scanning approach. Energy-modulated Bragg peaks are scanned across the tumor volume, with the delivered fluence at each BP also being modulated.

In a one-dimensional sense, such a combination of individual, energy-modulated Bragg peaks, combined in such a way as to deliver a
homogeneous dose in depth across the tumor volume, is called a
“Spread-Out-Bragg-Peak” (SOBP) (1), an example of which is shown as
the blue curve in Figure 10.1. Although a key aspect of passive scattered
proton therapy, the SOBP concept actually has little meaning in PBS
other than to indicate that, typically, the Bragg peak weights (fluences)
at the distal end of the field will tend to be higher than those at the
proximal end. Instead, it is much more meaningful to think of a PBS field
as a set of three-dimensionally distributed, individually weighted Bragg
peaks, the position and weight of which are free variables that can all be manipulated to define PBS treatment fields and plans.
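To make this way of thinking concrete, the toy Python sketch below combines range-shifted "Bragg peaks" into a flat 1D SOBP by solving for non-negative peak weights. The analytical peak shape is invented purely for illustration and is not a physical depth-dose curve.

```python
import numpy as np
from scipy.optimize import nnls

depth = np.linspace(0.0, 15.0, 301)                      # depth in water (cm)

def toy_bragg_peak(d, peak_depth):
    """Invented depth-dose shape: low entrance 'plateau' rising to a narrow
    peak at peak_depth, with a sharp distal cut-off. Not a physical curve."""
    plateau = 0.3 * (d < peak_depth)
    peak = np.exp(-0.5 * ((d - peak_depth) / 0.3) ** 2)
    return (plateau + peak) * (d < peak_depth + 1.0)

# One "energy layer" per Bragg peak range between 8 and 12 cm.
peak_depths = np.arange(8.0, 12.5, 0.5)
curves = np.stack([toy_bragg_peak(depth, p) for p in peak_depths], axis=1)

# Prescribe a flat dose of 1.0 between 8 and 12 cm and solve for non-negative
# peak weights (least squares with w >= 0), i.e. a 1D "SOBP" in miniature.
target = ((depth >= 8.0) & (depth <= 12.0)).astype(float)
weights, _ = nnls(curves, target)
sobp = curves @ weights
print(np.round(weights, 3))   # typically the distal peaks end up with the largest weights
```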
In the rest of this chapter we will use both pencil beam and Bragg peak
to describe the individual elements delivered as part of a PBS proton
field. However, Bragg peak (BP) will be used to specifically refer to the
“spot of high dose” in and around the Bragg peak, whereas pencil beam
will refer to the whole pencil beam, from the exit of the nozzle to the
Bragg peak.

THE TREATMENT PLANNING PROCESS


In common with other forms of radiotherapy, the goal of the treatment
planning process is to design clinically acceptable treatments by
predicting a three-dimensional distribution of the delivered dose in
relation to patient anatomy. As such, the overall planning process for
PBS and IMPT is identical to that of conventional therapy, and will be
briefly outlined here, if only to define the subsequent structure of this
chapter.
From the physics point of view, a fundamental basis for any treatment
planning system is a succinct and accurate description of the
characteristics of the treatment system to be modeled (beam model). This
will be used as input into a dose calculation engine, which can then
calculate a three-dimensional prediction of the delivered dose to the
patient. In addition, modern treatment planning is unimaginable without
a physical model of the patient in the form of volumetric x-ray CT data.
For reasons discussed below, this is even more important for proton
therapy. However, as with conventional treatments, it is more or less
standard of care in proton therapy to also use MRI and, if available, PET
to help with the delineation of the tumor and organs at risk (OAR). As
such, once all relevant imaging data has been imported into the planning
system, the task of target/organ delineation is essentially the same as for
conventional approaches, even if there may be some differences in the
definition of the PTV or ITV, as discussed in more detail below.
Once all structures have been defined, the next task is to define one or
(more usually) multiple fields such as to best “focus” the high-dose
volume on the target, while ensuring that dose constraints to critical
organs are likely to be met and that any physical limitations on field
directions are, where possible, observed. Once the fields have been
defined, then each field needs to be shaped and optimized. Finally, when
a full plan has been calculated, the quality of the plan needs to be
evaluated, both clinically and also from the point of view of its
robustness to potential delivery uncertainties. Each of these steps will be
discussed in more detail in the following sections.

Beam Modeling
Before any form of predicted dose in a phantom or patient can be
calculated, a parameterization of the characteristics of the delivery
machine and radiation modality must be performed. Typically, this is
called beam modeling and can be broken down into two main components: the definition of a set of energy-dependent depth dose
curves and a corresponding representation of the angular-spatial
distribution of the beam for each energy (see Equation 10.2).
For the dose calculation, the energy-dependent depth dose curves are
usually represented as integral depth dose curves. That is, the dose at any
depth is the integral dose deposited at that depth in an infinite plane
perpendicular to the incident direction of the beam. Although based on
measured data, such depth dose curves are generally converted into a
numeric representation (such as a depth-dependent look-up table) using
analytical or empirical fitting algorithms (13,14). In contrast, the beam
width is dependent on two components—a model for MCS in the
medium (represented by the integral part of Equation 10.2) and an
analytical representation of the beam width in air (the A0, A1, A2
parameters in Equation 10.2). Although beam broadening due to MCS is
determined solely by physics considerations, the beam width in air will
need to be measured and the A parameters then derived from these
measurements using data fitting techniques. However, in addition to
being dependent on energy, for scanning gantries, beam width in air
could well be also dependent on both the beam deflection and gantry
angle.
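A minimal sketch of how such a beam model might be organized in code is shown below; the class structure, parameter names, and the quadratic in-air width expansion are assumptions for illustration, not the data format of any particular planning system.

```python
import numpy as np

class SimpleBeamModel:
    """Minimal sketch of a PBS beam model: per-energy integral depth-dose
    look-up tables plus a parameterization of the in-air beam width.
    Names and structure are illustrative, not those of any planning system."""

    def __init__(self, depth_grid, idd_tables, width_params):
        self.depth_grid = depth_grid        # water-equivalent depths (cm)
        self.idd_tables = idd_tables        # {energy_MeV: integral depth-dose array}
        self.width_params = width_params    # {energy_MeV: (A0, A1, A2) moments in air}

    def integral_depth_dose(self, energy, wer):
        """Interpolate the integral depth-dose curve for this energy at depth wer."""
        return np.interp(wer, self.depth_grid, self.idd_tables[energy])

    def sigma_air(self, energy, z):
        """In-air beam sigma at distance z, from an assumed second-order
        expansion of the beam's spatial variance (A2 + 2*A1*z + A0*z**2)."""
        a0, a1, a2 = self.width_params[energy]
        return np.sqrt(a2 + 2.0 * a1 * z + a0 * z ** 2)
```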

Dose Calculations
Analytical Calculations—Primary Dose
Although it is likely that Monte Carlo (MC)–based dose calculations will
become more prevalent in the future, most dose calculations for
treatment planning of PBS proton therapy are analytically based.
Although such approaches are inherently limited in their accuracy, they
are also inherently fast. Even with today’s computer power, this is still a
big advantage over MC techniques, especially when being used for
optimization (see below).
All analytical dose calculations for the primary dose for PBS proton
therapy (we will return to approaches for incorporating the distribution
of secondary particles resulting from nuclear interactions later) are
based on a parameterization of a physical, or calculational, pencil beam
in the following form:

$$d(x,y,z) = \frac{D(\mathrm{WER}(0,0,z))}{2\pi\,\sigma_{x}(z)\,\sigma_{y}(z)}\exp\!\left(-\frac{x^{2}}{2\sigma_{x}^{2}(z)}\right)\exp\!\left(-\frac{y^{2}}{2\sigma_{y}^{2}(z)}\right) \qquad (10.4)$$

where d(x, y, z) is the dose at calculation point x, y, z, with x and y being the distances of the dose calculation point away from the central axis of the pencil beam, z is the coordinate of the dose calculation point along the beam direction, D(WER(0,0,z)) is the integral dose deposited at depth WER(0,0,z), WER(0,0,z) is the water-equivalent range (WER) at distance z along the central axis of the pencil beam, and σx(z) and σy(z) are the beam widths in the two directions orthogonal to the beam direction at distance z along the beam direction. σx(z) and σy(z) can be derived from Equations 10.2 and 10.3.
In essence, the first quotient gives the central axis dose at position z,
whereas the two exponents correct for the off-axis position of the point
away from the central axis (x and y), and can therefore be considered as
providing the off-axis ratios for the pencil-beam dose.
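A direct transcription of this pencil-beam kernel into code might look as follows; the callables passed in (integral depth dose, central-axis WER, and beam sigmas) are assumed to be supplied by the beam model and patient geometry, and only the combination of terms in Equation 10.4 is illustrated.

```python
import numpy as np

def pencil_beam_dose(x, y, z, integral_dose_at, wer_on_axis, sigma_x, sigma_y):
    """Evaluate the analytical pencil-beam kernel of Equation 10.4 at (x, y, z).

    integral_dose_at -- callable: integral depth dose at a given water-equivalent depth
    wer_on_axis      -- callable: water-equivalent depth at z along the central axis
    sigma_x, sigma_y -- callables: beam sigmas at depth z
    All four are assumed to be provided by the beam model and patient geometry;
    only the combination of the terms is shown here.
    """
    sx, sy = sigma_x(z), sigma_y(z)
    central_axis_term = integral_dose_at(wer_on_axis(z)) / (2.0 * np.pi * sx * sy)
    off_axis_ratio = np.exp(-x ** 2 / (2.0 * sx ** 2)) * np.exp(-y ** 2 / (2.0 * sy ** 2))
    return central_axis_term * off_axis_ratio
```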
Equation 10.4 is all that is really needed for calculating primary dose
in water and is remarkably accurate, as long as the beam model
parameters (depth dose curves, in-phantom MCS and phase space) are
correctly parameterized. However, when calculating in patient
geometries, methods of dealing with density heterogeneities also have to
be included.
Perhaps the simplest method of dealing with density heterogeneities is
the ray casting approach described in the paper by Schaffner et al. (15).
In this, Equation 10.4 is slightly modified such that D(WER(0,0,z))
becomes D(WER(x,y,z)), indicating that the WER of the dose calculation
point itself determines the depth at which the integral dose is extracted
from the depth dose curve, rather than that of the central axis of the
pencil beam. Thus, off-central-axis density heterogeneities can be
incorporated into the calculations, allowing for estimations of the
distorting effects of density heterogeneities on the pencil beam. In fact,
as this approach neglects the effects of proton scattering after the density
heterogeneities, it actually overestimates the effects of these.
Nevertheless, this algorithm is still the one used clinically at PSI, and all
dose distributions in this chapter (Fig. 10.3) have been calculated using
this approach.
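The water-equivalent range book-keeping at the heart of the ray casting approach can be sketched in a few lines; the stopping-power values below are illustrative placeholders, not calibrated CT data.

```python
import numpy as np

def wer_along_ray(rel_stopping_power, step_cm):
    """Cumulative water-equivalent range (WER) along one ray through the CT.

    rel_stopping_power -- 1D array of relative proton stopping powers sampled at
                          equal geometric steps along the ray (from the calibrated CT)
    step_cm            -- geometric step length between samples (cm)
    The WER used for a dose calculation point is then simply the cumulative
    value at the sample closest to that point.
    """
    return np.cumsum(rel_stopping_power) * step_cm

# Illustrative only: 2 cm of soft tissue (SP ~ 1.0) followed by 1 cm of bone-like
# material (SP ~ 1.6), sampled every millimeter.
sp = np.concatenate([np.full(20, 1.0), np.full(10, 1.6)])
print(wer_along_ray(sp, step_cm=0.1)[-1])   # ~3.6 cm WER for 3 cm geometric depth
```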
A more sophisticated approach (but not necessarily more accurate in
all circumstances) is to decompose the physical pencil beam into a set of
smaller, calculational pencil beams and then apply Equation 10.4 to
each individual calculational beam. This approach has been proposed by
both Schaffner et al. (15) and Soukup et al. (16). In the Schaffner
approach, the total fluence contribution from the sum of all pencil
beams for a given energy layer is first calculated, with this fluence
envelope then being decomposed into a set of narrow, calculational
beams for the calculation. This approach has the advantage of efficiency,
but is not convenient for the optimization, as the correlation between
calculational and physical pencil beams is lost. Soukup’s approach then,
although less efficient, is more appropriate for the optimization step. In
addition, Soukup et al. also proposed the calculation of the multiple
scattering angle on a voxel by voxel basis along the beam direction,
rather than using a precalculated look-up table of beam broadening in
water, in order to better model the effect of the position of density
heterogeneities along the beam direction. Similarly, Szymanowski and
Oelfke (17) have proposed to mitigate this problem using a material-
specific lateral scaling factor.

Analytical Calculations—Secondary Dose


As discussed above, a small but clinically relevant component of a
proton dose distribution is due to secondary particles resulting from
interactions of the primary protons with atomic nuclei. As such, a
number of corrections to the basic primary dose algorithms described
above have been proposed.
FIGURE 10.3 Example cases treated with PBS at PSI. (A) Skull base chordoma. (B)
Ependymoma. (C) Meningioma. (D) Sacral chordoma. Arrows indicate the field directions. Green
are coplanar, yellow noncoplanar.

The earliest of these is that of Pedroni et al. (13), who attempted to represent this secondary component using a two-Gaussian model of the following form:

$$d(x,y,w) = D(w)\left[\left(1 - f_{NI}(w)\right)G_{p}\!\left(x,y,\sigma_{p}(w)\right) + f_{NI}(w)\,G_{NI}\!\left(x,y,\sigma_{NI}(w)\right)\right] \qquad (10.5)$$

where d(x,y,w) is the dose at point x, y with depth w, D(w) is the integral dose at depth w, fNI is the fraction of the total integral dose at depth w resulting from secondary particles, Gp is the Gaussian distribution of the primary beam, σp is the beam width of the primary beam, GNI is the Gaussian distribution of the secondary particle distribution and σNI is the width of the secondary distribution. In this work, values for fNI and σNI
were determined experimentally using a series of “frame” fields of
different sizes with a thimble detector in the middle of each frame to
measure the resulting peripheral dose. From these, values for fNI and σNI
could be deduced. Soukup et al. (16) proposed a similar approach, but
based on MC simulations of the secondary dose in water, from which
they could then deduce an analytical model of this distribution.
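A simple evaluation of such a two-component lateral profile is sketched below; all parameter values are arbitrary stand-ins for the fitted quantities described above.

```python
import numpy as np

def two_gaussian_profile(r, integral_dose, f_ni, sigma_p, sigma_ni):
    """Radially symmetric two-Gaussian lateral profile at one depth, in the
    spirit of the Pedroni et al. parameterization; all values passed in are
    assumed/fitted quantities, not published data."""
    g_primary = np.exp(-r ** 2 / (2.0 * sigma_p ** 2)) / (2.0 * np.pi * sigma_p ** 2)
    g_halo = np.exp(-r ** 2 / (2.0 * sigma_ni ** 2)) / (2.0 * np.pi * sigma_ni ** 2)
    return integral_dose * ((1.0 - f_ni) * g_primary + f_ni * g_halo)

r = np.linspace(0.0, 5.0, 6)   # off-axis distance (cm)
print(two_gaussian_profile(r, integral_dose=1.0, f_ni=0.1, sigma_p=0.5, sigma_ni=2.0))
```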
Although the fraction of dose delivered by secondary particles is small,
their contribution is nevertheless important for correctly predicting
absolute dose (as reported in the paper by Pedroni et al.), and if
neglected, can lead to an overestimation of the dose at the edge of a
homogeneous field or an underestimation of the dose in “dose-valleys”
in highly modulated IMPT fields, as shown in Figure 10.4.

Monte Carlo Dose Calculations


It is undisputed that the most accurate dose calculations available are
MC methods. In short, these model the transport and interactions of
individual protons, that are tracked through the patient geometry (and
sometimes the whole-beam line and gantry), using knowledge about the
underlying physics, together with probabilities of the various
interactions that protons can undergo. As they track individual,
simulated protons, MC calculations do not have many of the limitations
of analytical calculations, and as such can be considered to be the gold
standard for the calculation of dose for PBS proton therapy.
FIGURE 10.4 The effects of secondary particles. Primary dose calculation (solid line) and the
corrected dose taking into account secondary particle distributions. Note the reduced dose at the
field edges (left) and increased doses in dose “holes” (right).

There are currently a number of different MC codes available that have been used for proton therapy calculations, the main ones of which
are FLUKA (18), GEANT4 (20), MCNPX (21), VMCPro (22), and Shield-
HIT (23). All have their advantages and disadvantages, with some
having more accurate modeling of various interactions than others.
Nevertheless, none of these codes are necessarily easy to use without
considerable knowledge of the code and underlying physics, and as such,
MC calculations have been slow to get into clinical use, except in
departments with specialized staff. This is changing somewhat now with
the development of MC tool kits specifically designed for RT applications
and with simplified interfaces for ease of programing and specification.
Perhaps the most widespread of these at the time of writing is the
TOPAS system (24), which is based on GEANT4 and has been widely
validated for proton therapy applications by the group at Massachusetts
General Hospital and elsewhere (25).
The drawback of MC calculations is still the time required to achieve accurate results, with full simulations typically taking many hours to run.
However, methods are being developed to improve efficiency through
the use of, for example, track repeating algorithms (26) or
implementation on graphical processing unit (GPU) hardware. With such
measures, full MC calculations are now being performed in a few
seconds with clinically acceptable statistics (18,27).

Imaging for Treatment Planning


Although MRI, PET, and other imaging modalities can all be of great use
for helping to define target volumes and other anatomical structures for
the treatment planning process, the core dataset for treatment planning
for PBS proton therapy is x-ray CT. This is currently the only imaging
modality that can be accurately converted into the relative proton
stopping powers required for accurate dose calculations in the patient.
As such, it is important that the CT data set to be used for dose
calculations is as good a quality as possible. This does not necessarily
mean that the CT has to have the best resolution, but more that the
Hounsfield units (HU) of the CT are correct.
X-ray CT gives a three-dimensional model of the patient in terms of x-
ray attenuation, and as such needs to be converted into relative proton
stopping power. This is a nontrivial and nonunique process. Although a
number of methods of performing this calibration have been reported
(28–30), the most widely used calibration procedure is that of Schneider
et al. (29), referred to as the stoichiometric calibration. As the name
implies, this approach is based on knowledge of the densities and
chemical composition of the media to be calibrated. Unfortunately,
there is no unique and one-to-one relationship between HU and proton
SP, with different materials with the same HU value having different
proton SP and vice versa (29). As such, clinically applied calibration
curves are defined based on biologically relevant tissues (29–31) and are
thus, strictly speaking, only valid when applied to patient tissues.
Although at first sight this may appear not to be a problem, it can be an issue when treating through nonbiological materials such as the treatment couch or metal implants (6,32). In such cases, it may be necessary to
override the HU or SP values in the planning CT in order to ensure that
the correct SP is used for the dose calculations. More details on this issue
can be found in Lomax et al. (6) and Kruse (32).
Thus, planning CTs for proton therapy should be acquired under strict
imaging protocols, such that the acquisition parameters for the images
are exactly those used for the calibration curve to be used in the
planning system. Under these conditions, the HU–SP calibration can be
accurate to about 1% in soft tissues and 2% to 2.5% in bones (31,33).
However, allowing for other imperfections in CT data (beam hardening,
partial volume effects, etc.), it is generally recommended that a range
uncertainty of 3% to 3.5% be assumed from the CT imaging/calibration
process alone.
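In code, applying such a calibration curve amounts to a simple interpolation, as sketched below; the node values are rough placeholders and must not be mistaken for a clinical, scanner-specific stoichiometric calibration.

```python
import numpy as np

# Illustrative piecewise-linear HU -> relative stopping power conversion.
# The node values are rough placeholders; a clinical curve must come from a
# stoichiometric calibration of the specific scanner and imaging protocol.
hu_nodes = np.array([-1000.0, -100.0, 0.0, 60.0, 1000.0, 2000.0])
sp_nodes = np.array([0.001, 0.93, 1.00, 1.05, 1.55, 2.00])

def hu_to_relative_sp(hu):
    """Convert CT numbers to relative proton stopping power by interpolation,
    clamping values outside the tabulated range to the end points."""
    return np.interp(hu, hu_nodes, sp_nodes)

print(hu_to_relative_sp(np.array([-1000.0, 0.0, 40.0, 1200.0])))
```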
Such range uncertainties are what can be assumed in the case of high-
quality CT data, that is, CT data that is well calibrated and “clean.” In
this context, “clean” refers to CT datasets that are free of substantial
reconstruction artifacts. For CTs in which high-density implants are
present, however (e.g., postsurgical implants, teeth fillings, implanted
fiducial markers), reconstruction artifacts can be substantial and thus
cannot be ignored (34,35). Simply put, artifacts resulting from
reconstruction problems provide the wrong HU values for the underlying
tissues, and will obviously then provide the wrong stopping powers for
proton range and dose calculations. As such, it is imperative that the
degrading effects of such artifacts are mitigated as much as possible for
treatment fields which pass through them.
There are two main ways of doing this. The first, and the
recommended approach, is to try to reduce such artifacts at the imaging
stage. This can be done for instance by using dual-energy CT, which has
been shown to reduce reconstruction artifacts around metal implants
substantially (36,37) or by using artifact reduction methods during the
image reconstruction process (38,39). The second approach is to
manually correct the artifacts in the planning CT, by outlining the most
obvious regions of corrupted HU values and manually setting these
regions to average HU or SP values appropriate for the presumed
underlying tissue. Although certainly not optimal, this approach has
recently been shown to significantly improve the agreement between
planned and delivered doses when measured in an anthropomorphic
phantom (40) and is considered as the absolute minimal approach to
artifact mitigation for proton therapy. Indeed, as artifact reduction
algorithms and dual energy acquisitions are not perfectly effective, it
may still be necessary to perform manual stopping power corrections to
correct for residual artifacts even when using these techniques.

Structure Definition
The definition of both target and normal tissue structures for PBS proton
therapy is an essential part of the treatment planning process and is, to a
large extent, the same across all external-beam radiotherapy techniques.
Certainly, there is no reason why GTVs or CTVs for a given indication, as
well as OAR and other anatomical structures, should be any different for
PBS proton therapy than for conventional therapies. However, given the
potentially different lateral penumbra of proton fields, together with the
additional uncertainty in range, there may be good reasons why PTVs
could be different for PBS proton therapy.
PTV margins should be determined based on the estimated magnitude
of random and systematic errors during the delivery. For proton therapy,
these need to be determined both from positioning errors (which will
mainly determine the lateral margin) and range uncertainties, which
should ideally determine the distal margin. Given that these
uncertainties will not necessarily have the same magnitude, and are
likely to be of a different nature (e.g., positioning errors will likely be
random, whereas the main sources of range uncertainty will be
systematic (41,33)) it can be argued that PTVs for PBS should be field
rather than target specific, with different margins being defined laterally
to those defined distally and proximally (42,43). In practice, however,
this may not be a practical approach, particularly if field-specific
margins are not easily supported in the treatment planning system.
Indeed, at our institute, conventional, isotropic PTV margins of 4 to 7
mm are routinely used. These have been calculated based on an analysis
of our positioning uncertainties for different sites (44), and using an
estimated uncertainty in range of 3% (see above). As it turns out, this
leads to lateral and distal/proximal margins of similar values, at least for
centrally located tumors in the brain. Nevertheless, field-specific PTVs
may become more prevalent in the future as more sophisticated and
automated tools become available.
Beam Selection and Plan Design
Due to its ability to deliver a more or less homogeneous distribution to
the target from a single direction (see Fig. 10.1), it is not absolutely
necessary to treat with multiple fields, even if this is highly
recommended. This characteristic of proton therapy has a number of
consequences. First, the number of fields for a typical proton plan can be
quite small, and second, there is a lot of flexibility in the choice of these
directions as all can, at least theoretically, deliver a homogeneous dose
to the target. In practice, however, there are a number of issues that
should be taken into consideration when selecting field directions, which
will be briefly outlined in this section.
The most obvious consideration is the avoidance of critical structures.
Although all normal tissues in a patient can be considered “critical” in
one way or another, some tissues are more critical than others, and one
of the easiest ways of ensuring that dosimetric constraints are met is to
avoid bringing treatment fields through these. This can be achieved by
avoiding the structures with the lateral edge of the field and, given the
stopping characteristics of protons, also by use of the distal dose fall-off.
Due to worries about range uncertainty and increased LET/RBE at the
distal end of the field however, it is currently considered bad practice to
directly stop a proton field against critical structures which may be
sensitive to small volumes of large dose, such as the spinal cord or
brainstem. On the other hand, if there is a reasonable distance between
the distal edge of the field and the critical structure (in this context a
“reasonable distance” is difficult to precisely define, but will depend on
the estimated range uncertainty for the field), using the distal field edge
to “shield” a critical organ is perfectly acceptable, and is often the best
way of realizing the power of proton therapy.
The second consideration is the avoidance of coarse and complex
density heterogeneities. These should ideally be avoided for the following three reasons: potential problems with dose calculation accuracy, limitations with dose homogeneity and conformity, and
sensitivity to potential set-up errors.
The Bragg peaks and SOBP shown in Figure 10.1 are those expected for proton beams delivered to a perfectly homogeneous medium such as water. However, if delivered to a medium with density inhomogeneities, then these curves can look quite different. For instance, when a pencil beam partially intersects a density heterogeneity, different portions of the beam will "see" different material densities, thus affecting the range of those portions of the beam. As such, the beautiful, sharp Bragg peaks shown in Figure 10.1 become degraded and distorted into a broader, more irregular shape, often with a much degraded distal fall-off.

FIGURE 10.5 Degradation of the distal fall-off for a PBS proton field passing through complex
density heterogeneities.

Such an effect is shown in Figure 10.5. For this field, one can observe
that the distal end of the field does not conform well to the distal end of
the target due to the distorting effect of the density heterogeneities. In
addition, as indicated by the dose-banding color bar on the right side, the maximum dose in this field is quite high (138% of the mean dose delivered to the target). This is also a consequence of the distortion of the Bragg peaks for this field, which the optimization process cannot fully compensate for. Indeed, the dose conformity and
homogeneity of this field should be compared to that of the single field
shown in Figure 10.6B. The field in the latter figure is traversing a
particularly homogeneous part of a patient’s anatomy (only skull and
brain), and thus the shape of the Bragg peaks are preserved, leading to a
much more homogeneous (a maximum dose in the field of only 106% of
the mean target dose) and distally conformal distribution. This distorting
effect of density heterogeneities on pencil beams is one reason why they
should ideally be avoided.
The second reason is the sensitivity of such field directions to potential
positioning and set-up errors. To explain this, once again consider Figure
10.5. Clearly, the position of the “tongues” of dose extending beyond the
distal edge of the target volume are correlated with the position of the
density heterogeneities and as such, their position in relation to the
patient’s anatomy will change due to any misalignments, particularly
rotational, of the patient in relation to the delivered beam.
In summary then, it is advisable to pick field directions that avoid complex density heterogeneities, while also picking directions that avoid critical structures. In practice however, following these guidelines religiously is simply not possible, and one or other (if not all) guidelines may have to be compromised depending on the case and anatomy.
Nevertheless, they are worth keeping in mind in plan design wherever
possible.

FIGURE 10.6 Optimization for PBS proton therapy. (A) Optimized Bragg peak positions and
fluences (colors) for a single field planned to an ependymoma case. (B) The corresponding,
optimized dose distribution.

Field Shaping and Optimization


In this section, we will delve a little deeper into the planning process for
PBS proton therapy, by discussing the details of how the individual fields
are defined and shaped such as to efficiently deliver a clinically relevant
dose to the target volume. This process is termed “field shaping,” and
has been recently described in detail by Zhu et al. (45).

Bragg Peak Placement


The basic process of field shaping for PBS proton therapy is shown in
Figure 10.7. This shows the stages of shaping a PBS field planned to a
central brain ependymoma using the in-house treatment planning system
developed at our institute (46,47). Figure 10.7A shows the originating
planning CT and defined structures for this case, with the PTV displayed
as the green contour, whereas Figure 10.7B shows a three-dimensionally
distributed set of BP positions (the red crosses) for a posterior field
applied to this patient before any field shaping has been applied, with
this distribution of BPs essentially providing the starting conditions for
the field-shaping process.
In order to get the distribution of BPs shown in Figure 10.7B, at least
two parameters for the field and/or delivery machine must be known or
defined. These are the spot spacing and energy resolution or spacing. For
the case shown in the figure, spot separation refers to the spacing
between BPs/pencil beams in the two directions orthogonal to the beam
direction—so along the horizontal (lateral) axis in this case, as well as in
and out of the plane of the image. The energy resolution then determines
Bragg peak separation along the beam direction. For the case shown, an energy resolution of 3.5 MeV and a spot spacing of 5 mm have been used. Such a lateral spacing is the standard spot spacing used for our
treatments at PSI and, although a field-specific parameter in our
planning system, is more-or-less a standard for all types of treatment at
our institute. This separation has been determined such that there is a
sufficient overlap between the narrowest spots deliverable by our
machine (about 3 mm σ) such as to deliver a homogeneous (ripple free)
lateral profile. Using the definition of spot spacing defined by Zhu et al.
(45), this corresponds to an α of about 0.7 for the narrowest beam
dimension (where the spot spacing, s, is related to the full-width-half-
maximum (FWHM) of the beam by the relationship s = α FWHM, see
Zhu et al.)
The next step in field shaping is to define the sub-set of BPs that will
deliver useful dose to the tumor volume, by selecting just those BPs that
are within the target volume, or a short distance outside. This sub-set of
BPs is shown in Figure 10.7C, where all BPs internal to the selected PTV
(green contour) have been selected, together with BPs within 5 mm of
the PTV surface. This reduced set of BPs is what can now be referred to
as the shaped field. As a note, the extra, external BPs are required in
order to ensure that the high-dose area of the final dose distribution
extends to the target volume surface. In addition, the sub-set of BPs
shown in Figure 10.7C has also been assigned an initial weight (fluence)
as indicated in the figure by the color coding of the BPs and the color
bar on the right-hand side. As we will see in the following section, this
initial set of weights will not typically provide a homogeneous dose across the target, necessitating an optimization step in the planning process.
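The geometric part of this selection step can be sketched as follows for a single energy layer; the grid-based approach, the use of a binary dilation for the margin, and all numerical values are simplifications for illustration only.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def place_spots(ptv_mask, voxel_mm, spot_spacing_mm, margin_mm):
    """Simplified spot-selection sketch for one energy layer.

    ptv_mask        -- 2D boolean mask of the target in the plane of the layer
    voxel_mm        -- grid resolution of the mask (mm)
    spot_spacing_mm -- requested lateral spot spacing (mm)
    margin_mm       -- extra rim of spots kept outside the target surface (mm)
    Returns (rows, cols) of the kept spots. Real systems work in 3D per energy
    layer and respect machine limits; this only illustrates the selection step.
    """
    # Expand the target by the requested margin (isotropic, in grid units).
    n_dilate = max(int(round(margin_mm / voxel_mm)), 1)
    expanded = binary_dilation(ptv_mask, iterations=n_dilate)
    # Candidate spots on a regular grid with the requested spacing.
    step = max(int(round(spot_spacing_mm / voxel_mm)), 1)
    rows, cols = np.meshgrid(np.arange(0, ptv_mask.shape[0], step),
                             np.arange(0, ptv_mask.shape[1], step), indexing="ij")
    keep = expanded[rows, cols]
    return rows[keep], cols[keep]

# Toy example: a circular "target" of radius 20 mm on a 1-mm grid.
yy, xx = np.mgrid[0:80, 0:80]
ptv = (yy - 40) ** 2 + (xx - 40) ** 2 < 20 ** 2
r, c = place_spots(ptv, voxel_mm=1.0, spot_spacing_mm=5.0, margin_mm=5.0)
print(len(r), "spots selected")
```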

Fluence Optimization
The BP fluences shown in Figure 10.7C have been assigned using a
precalculated, one-dimensional weighting scheme based on the SOBP
concept (see Fig. 10.1). Such fluences, if assigned to a PBS field planned
to a rectangular box in water, would then provide a homogeneous dose
across the target, and indeed such fields are a way by which a PBS
machine can emulate PS (48). However, when applied to an irregular
target in a nonhomogeneous patient with a nonflat skin surface, then
this simple fluence assignment approach is insufficient, as can be seen in
Figure 10.7D. This shows the dose distribution resulting from the set of
BP and fluences shown in Figure 10.7C, calculated using the ray-casting
analytical approach (see above). Although the 95% isodose covers most
of the PTV in the slice shown, there are clear areas of under-dosage
(<<95%) at the edge of the PTV. In order to improve this situation, an
optimization of the fluences is required, by which the fluence of each
selected BP in the field is iteratively modified, with the goal of making
the resultant dose distribution as homogeneous as possible across the
target volume.
FIGURE 10.7 Field shaping for PBS proton therapy. See main text for details.

The optimization process used for this step is essentially the same as
that used for other applications in radiotherapy (e.g., IMRT) and is based
on minimizing the following function:

$$F = \sum_{i=1}^{N}\left(P_{i} - D_{i}\right)^{2} \qquad (10.6)$$

where N is the number of dose calculation points and Pi and Di are the prescribed and calculated doses at point i.
There are numerous ways of going from this simple relationship to the
actual update functions applied to each pencil beam per iteration (49),
but the one we will show here is that used in our planning system at PSI
and that was originally derived for IMPT optimization (50). This has the
following form:

$$w_{j,k} = w_{j,k-1}\,\frac{\sum_{i} d_{i,j}\,P_{i}}{\sum_{i} d_{i,j}\,D_{i}} \qquad (10.7)$$

In this, wj,k and wj,k−1 are the weights of the jth pencil beam at
iterations k and k -1 respectively and di,j is the dose delivered by pencil
beam j at dose calculation grid point i. Pi and Di are as in Equation 10.6.
As noted by Lomax (50), this formulation has the advantage that pencil-
beam weights can never go negative as part of the optimization process.
For a detailed derivation of Equation 10.7, the reader is referred to
Albertini et al. (51).
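To illustrate how such an iterative scheme can be organized, the Python sketch below applies a multiplicative, non-negativity-preserving update of this general family to a small synthetic dose-influence matrix; the matrix, prescription, and iteration count are invented, and no claim is made that this reproduces the clinical PSI implementation.

```python
import numpy as np

def optimize_fluences(dose_matrix, prescription, n_iter=60):
    """Iterative fluence optimization sketch for a single SFUD-type field.

    dose_matrix  -- d[i, j]: dose at calculation point i per unit weight of pencil beam j
    prescription -- P[i]: prescribed dose at each calculation point
    The multiplicative update below keeps the weights non-negative by
    construction; it is a generic scheme of this family, not necessarily the
    exact clinical formulation.
    """
    weights = np.ones(dose_matrix.shape[1])
    for _ in range(n_iter):
        dose = dose_matrix @ weights                # D[i] for the current weights
        numerator = dose_matrix.T @ prescription    # sum_i d[i, j] * P[i]
        denominator = dose_matrix.T @ dose + 1e-12  # sum_i d[i, j] * D[i]
        weights *= numerator / denominator          # multiplicative, stays >= 0
    return weights

# Tiny synthetic example: 50 calculation points, 10 pencil beams, flat prescription.
rng = np.random.default_rng(0)
d_ij = rng.random((50, 10))
w = optimize_fluences(d_ij, np.ones(50))
print(np.round(d_ij @ w, 2))   # the resulting doses cluster around the prescription
```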
The result of applying Equation 10.7 to the case shown in Figure 10.7
is shown in Figure 10.6. Figure 10.6A shows the fluences, modified as a
result of the optimization process, after 60 iterations, while Figure 10.6B
shows the final dose distribution. The 95% dose now almost perfectly
encompasses the PTV, and there are no regions with doses above 106%.
When applied (as in this case) to a single field with the sole constraint
of obtaining a homogeneous dose across the target volume, this
approach is called “Single Field, Uniform Dose” or SFUD. This does not
however mean that this approach only ever uses one field for a plan, but
rather that multiple field plans can be designed by combining one or
more such fields, each optimized individually as described above. As an
example, a three-field, SFUD plan for the same case is shown in Figure
10.8. About 60% of all PBS treatments at our institute are planned and
delivered using such an SFUD approach.
An alternative approach is to perform the optimization for all pencil
beams of all fields simultaneously, a technique called IMPT or multiple
field optimization (MFO). In this case, additional constraints are also
typically added to the optimization, such as dose constraints to one or more critical structures, and Equation 10.6 above is then expanded a little to the following (see Unkelbach et al. (52)):

$$F = \frac{W_{PTV}}{N_{PTV}}\sum_{i \in PTV}\left(P_{PTV} - D_{i}\right)^{2} + \frac{W_{OAR}}{N'_{OAR}}\sum_{\substack{i \in OAR\\ D_{i} > P_{OAR}}}\left(D_{i} - P_{OAR}\right)^{2} \qquad (10.8)$$

Now, WPTV and WOAR are weighting factors defining the relative
importance of target coverage and critical structure sparing, and PPTV
and POAR are constraint doses for the target and critical structures,
respectively. Note that in this formulation, the last (OAR specific) sum is
only performed over the dose calculation points where the dose
constraints for the organ are exceeded (so over the sub-set of N′OAR
points only). An update function similar in form to that of Equation 10.7
can then be derived that will optimize all pencil beams of all fields
together, under the constraints of both maximizing target dose coverage
and homogeneity, and reducing doses to critical structures to below the
predefined dose constraints.
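The structure of such a composite objective (a quadratic target term plus a one-sided overdose penalty for a critical structure) can be sketched as follows; the weighting factors and constraint doses are user-chosen planning parameters, and the function is only a schematic of the idea.

```python
import numpy as np

def composite_objective(dose, ptv_idx, oar_idx, p_ptv, p_oar, w_ptv, w_oar):
    """Schematic composite objective: quadratic target term plus a one-sided
    (overdose-only) quadratic penalty for a single critical structure.
    The weights and constraint doses are user-chosen planning parameters."""
    target_term = w_ptv * np.mean((p_ptv - dose[ptv_idx]) ** 2)
    d_oar = dose[oar_idx]
    over = d_oar[d_oar > p_oar]                   # only points exceeding the constraint
    oar_term = w_oar * np.mean((over - p_oar) ** 2) if over.size else 0.0
    return target_term + oar_term

# Toy usage with made-up doses (arbitrary units).
dose = np.array([0.98, 1.02, 1.00, 0.55, 0.62])
print(composite_objective(dose, ptv_idx=[0, 1, 2], oar_idx=[3, 4],
                          p_ptv=1.0, p_oar=0.6, w_ptv=1.0, w_oar=0.5))
```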
An example of a four-field IMPT plan to a skull base chordoma is
shown in Figure 10.9, once again showing the dose distributions for the
individual fields, together with the full plan doses. Note the difference in
the form of the individual fields of Figure 10.9 to those in the SFUD
example above (Figure 10.8). The “single field, uniform dose” constraint
of SFUD has now been relaxed, with the result that the dose distributions
of the individual fields have become very inhomogeneous and complex,
but with the advantage that the dose can be selectively “carved-out” of
neighboring critical structures such as the brainstem and optic nerves. As
examples of the type of SFUD and IMPT plans that can be achieved using
PBS proton therapy, a selection of cases treated at our institute in the
last years are shown in Figure 10.3.
FIGURE 10.8 The individual fields (A–C) and full SFUD plan (D) for an example ependymoma case.

FIGURE 10.9 The individual fields (A-D) and full IMPT plan (E) for an example skull base
chordoma case.

Field-Modifying Devices
In all PBS treatment gantries, there is a lower limit on the transportable
beam energy. This is typically about 70 MeV, but can be as high as 100
MeV on some machines. The range of 70 MeV protons is roughly 4 cm in
water, and thus this determines the minimum range of BP that can be
delivered to a patient without additional modulation of the beam.
A minimum range of 4 cm is extremely limiting. For instance, for some
pediatric cases (e.g., orbital rhabdomyosarcomas), the maximum required
range may only be of the order of 4 to 5 cm, thus making such tumors
untreatable with PBS proton therapy without measures for reducing the
minimal deliverable energy. In addition, in an analysis of over 3,800
delivered fields at our institute, covering a whole range of treatment
sites and tumor types, it was found that over 30% of all BPs in these
fields were delivered with a range of 5 cm or less. Thus, the delivery of
low-energy/low-range pencil beams is an important issue. And it is for
this reason that all PBS proton treatment facilities have the ability to
insert a preabsorber into the beam.
Unfortunately, the use of such a preabsorber is not without its costs.
As with any medium through which protons pass, MCS will occur,
broadening the beam as it passes through the preabsorber. If the
preabsorber could be placed directly on the patient surface, this
wouldn’t be a major problem, as the broadening of the beam due to the
4 cm of preabsorber alone is relatively small. However, when the
preabsorber is mounted in the treatment nozzle, it is extremely difficult
to get it very close to the patient, and inevitably there will be a gap of a
few centimeters between it and the patient. Indeed, depending on the
geometry of the nozzle, the anatomy of the patient and the type of
fixation devices used (which in the worst case can limit how close the
nozzle can be brought to the patient), gaps of 20 cm or more are not
uncommon. As MCS in the preabsorber not only broadens the beam but also adds an angular divergence, the beam geometrically broadens across this gap, and thus, the larger the gap, the larger the pencil-beam size on entry into the patient. As an example, for the PSI
Gantry 2, the beam width (in air) at iso-center for a 70-MeV proton
pencil beam (the lowest energy that can be transported through the
gantry) is about 4.5 mm (σ). However, with a 4-cm preabsorber and a
distance of 31 cm from the exit of the preabsorber to the iso-center, this
increases to over 10 mm (σ) in air. As the lateral penumbra of pure PBS
plans can never be sharper than the lateral fall-off of a single pencil
beam, the consequences on plan quality are hopefully clear. A more
detailed discussion on the problem of treating superficial tumors, and
the use of preabsorbers, can be found in Titt et al. (53), Zhu et al. (45),
and Lomax et al. (6).
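The geometric growth of the spot size across an air gap can be estimated with a simple quadrature sum, as sketched below; the assumption of uncorrelated spatial and angular spreads at the preabsorber exit is a simplification, and the numbers used are illustrative rather than measured gantry data.

```python
import numpy as np

def sigma_at_isocenter(sigma_exit_mm, theta_exit_rad, air_gap_mm):
    """Rough drift of the in-air spot sigma across an air gap, assuming the
    spatial and angular spreads at the preabsorber exit are uncorrelated
    (a simplification) and therefore add in quadrature."""
    return np.sqrt(sigma_exit_mm ** 2 + (theta_exit_rad * air_gap_mm) ** 2)

# Purely illustrative numbers, not commissioning data for any gantry.
print(sigma_at_isocenter(sigma_exit_mm=6.0, theta_exit_rad=0.025, air_gap_mm=310.0))
```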

Advanced Optimization
Up to now, we have looked at two modes of optimization for PBS plans;
SFUD and IMPT. However, given the number of pencil beams available
to the optimizer (typically thousands to tens of thousands of pencil
beams per field), the optimization problem is inherently degenerate. That
is, there are many different sets of PBS fluences that could give quite
similar dosimetric results. This aspect of SFUD/IMPT optimization is
discussed in detail elsewhere (51,55) and won’t be elaborated on here.
However, the degenerate nature of the optimization process means that
other aspects of field definition and design may be something that can
be exploited.
In Figure 10.6A, one sees that the majority of BPs have generally very
low fluences after the optimization process. Based on this, the question
arises whether these BPs are required, or whether clinically acceptable plans could be delivered with fewer pencil beams per field. Indeed, this idea was proposed very early on in the work of Deasy et al. (56) and Lomax (50). In the original paper by Deasy et al., the concept of
distal edge tracking (DET) was proposed, in which BPs are only deposited
at the distal edge of the PTV. Although there is no way that such a
reduced number of BPs in a field can deliver a homogeneous dose from
one field, the use of multiple DET fields, together with IMPT type
optimization, have been shown to be able to deliver clinically acceptable
plans in which the integral dose to normal tissues can also be somewhat
reduced, at least for centrally located tumors (56,57). An expansion of
this work has also been reported by Albertini et al. (51), in which a so-
called “spot-reduction” method was incorporated into the optimization
loop that automatically switches low-weighted pencil beams off,
sequentially reducing the number in the plan as the optimization
progresses. This approach has been shown to be able to reduce the
delivered BPs for plans with a small number of fields (where the pure
DET approach has too few degrees of freedom) while approaching the
DET approach (and further) for plans with many fields. However, due to
fears about the robustness of such plans to delivery errors (55,58), at the
time of writing, such “spot-reduction” techniques are not used clinically.
Indeed, robustness itself is a parameter that can also be included into
the optimization process, either indirectly or directly. In the work by
Albertini et al. mentioned above, an indirect approach was taken in
which the starting conditions of the optimization “force” the optimizer
to a robust solution. Direct robust optimization methods, on the other
hand, use robustness criteria and measures as additional constraints in
the optimization procedure. More details on this approach can be found
in the literature (52,59–61).
As a last example, it has recently been proposed that LET could also be
an interesting parameter to include in the optimization process (62). In a
way, this is also a type of robust optimization, as the idea is to try to
mitigate the potential effects of enhanced RBE by modulating the LET
and thus make the resulting plan more robust to potential biological
effects.
So there are many additional criteria that could be optimized in
addition to target and critical structure doses, and the optimization
process in PBS proton therapy is clearly a multiple criteria problem. It
should be of no surprise, therefore, that one of the main areas of
optimization research in this area is into multiple criteria optimization
(MCO) techniques (60).

Plan Evaluation
The final stage of the treatment planning process is the clinical and
physics review of the plan, before being released for delivery to the
patient.
For the most part, the clinical review will be very similar to that for
conventional therapy. Certainly, target coverage and doses to critical
structures will need to be reviewed, through both a visual assessment of
the dose distribution in all relevant slices and DVHs. However, given the
potential for increased RBE values in some parts of the plan, then
perhaps an additional clinical assessment of the plan from the point of
view of RBE should be performed. For instance, are highly weighted
fields stopping directly against a critical structure? If so, is this
acceptable, or should the dose constraints to the structure be reduced to
allow for this? The differences are somewhat greater, however, when assessing a plan from the physical point of view.
One area, which we have not discussed in this chapter up to now, is
the effects of delivery uncertainties on PBS proton plans. This has been
an area of considerable research in recent years, and has recently been
reviewed by Mohan and Sahoo (63). We will not go into details here,
other than to outline the two main uncertainties; positional and range.
Positional uncertainties are of course present in any form of external
beam radiotherapy, but are a particular problem for fractionated
treatments. Inevitably, patient set-up over many days cannot be as accurate as that for a single treatment, despite image-guided techniques to improve this. In practice, therefore, positioning inaccuracies of a few millimeters day-to-day have to be expected.
Typically, such uncertainties are managed through the use of a PTV (see
above), with estimation of the potential effects of positional
uncertainties being restricted to evaluating dose coverage of the PTV.
However, more sophisticated approaches to this are now being
developed, allowing for more direct visualization of dosimetric
uncertainties in three dimensions and in relation to the patient
geometry.
Although some of the first work in analyzing treatment uncertainties
was for photon treatments (64), much of the more recent published work
has concentrated on proton therapy, either as a metric for robust
optimization (61,65–67) or for evaluating the robustness of plans outside
of the optimization process (68–71). Many of these provide uncertainty
distributions, estimated by recalculating dose on a number of instances
of the nominal geometry shifted in space to simulate potential treatment
set-up errors. The uncertainty distribution is then calculated by generating dose error bars at each dose calculation point through the combination of the multiple dose values into an uncertainty band, typically by displaying the difference between the maximum and minimum values of all calculated doses at each point (65,66,68,69).
Although this approach can be used for comparing the robustness of two
different plans, it is a very conservative approach which basically
assumes that positional errors will be systematic in nature. As such, Lowe
et al. (72) have recently modified this approach to also allow for
fractionation under the assumption that set-up errors will generally be
random, which has the effect of reducing the error bars considerably
(Fig. 10.10). In the author’s opinion, this approach provides a more
realistic approximation of likely dose uncertainties resulting from daily
set-up uncertainties and thus provides a more clinically relevant tool for
evaluating (and optimizing) PBS proton plans from the point of view of
robustness.
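The conservative "max-minus-min" band described above is straightforward to compute once the scenario doses are available, as the short sketch below shows; the scenario doses here are made-up numbers for illustration.

```python
import numpy as np

def dose_error_band(scenario_doses):
    """Per-voxel dose 'error bar' from a set of recalculated scenario doses.

    scenario_doses -- array of shape (n_scenarios, n_voxels), each row being the
                      dose recalculated on one shifted instance of the geometry
    Returns the max-minus-min band at every voxel, i.e. the conservative
    estimate described in the text.
    """
    return scenario_doses.max(axis=0) - scenario_doses.min(axis=0)

# Example with three shift scenarios for five voxels (made-up numbers).
doses = np.array([[1.00, 0.98, 0.95, 0.60, 0.10],
                  [1.01, 0.97, 0.90, 0.70, 0.12],
                  [0.99, 0.99, 0.93, 0.55, 0.08]])
print(dose_error_band(doses))   # the largest band appears in the high-gradient region
```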

FIGURE 10.10 Robustness analysis for positional uncertainties for an IMPT treatment to a sacral
chordoma (A). Robustness analysis without fractionation (B) and for 14 fractions (C). Note the
substantial reduction of uncertainty, particularly in the PTV and around the cauda equina.
(Courtesy of Matthew Lowe.)

In addition to positional uncertainties, there are many potential sources of uncertainty in the proton range in vivo, as reviewed by Lomax
et al. (6) and Paganetti (33). These include inherent uncertainties in the CT calibration, potential extensions of the Bragg peak range due to RBE enhancement, CT artifacts (e.g., from metal implants), and anatomical changes during the course of treatment. In contrast to positioning errors, all of these are systematic in nature and thus have a potentially larger impact on the quality of the treatment than set-up errors. As such, their incorporation into the optimization procedure (60), as well as into any robustness analysis tools, could be more important than that of set-up errors. Indeed, IMPT (see Section 8.2) was first used at our institute in order to make the plan more robust to range uncertainties (73). It is thus interesting to see that robust optimization can lead to similar results automatically (60). However, we have since found that our standard optimization algorithm (i.e., without robustness criteria) leads to range-robust results in many cases anyway, as discussed
by Albertini et al. (58,74). A good example of this is also shown in
Figure 10.9. Although each individual field of this complex plan is
highly inhomogeneous, there are no sharp gradients ranging out on the
brainstem (the main constraining organ in this case), with dose sparing
of this organ being achieved with lateral gradients only. In summary,
there is much still to be done in the understanding and quantification of
plan robustness for PBS proton therapy.
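As a concrete illustration of how such systematic range errors are often handled alongside set-up errors, robustness evaluations frequently recompute dose for a small set of scenarios combining a global stopping-power (CT calibration) scaling of a few percent with rigid iso-center shifts. The sketch below only assembles such a scenario set; the ±3.5% scaling, the shift values, and the dose_for_scenario placeholder are assumptions for illustration, not values or functions prescribed in this chapter.

```python
import itertools
import numpy as np

# Illustrative scenario set: a global range (stopping-power) scaling, which is
# systematic, combined with rigid iso-center shifts representing set-up error.
RANGE_SCALES = (1.0, 1.035, 0.965)            # nominal, +3.5%, -3.5% (assumed)
SHIFTS_MM = [(0, 0, 0), (3, 0, 0), (-3, 0, 0),
             (0, 3, 0), (0, -3, 0), (0, 0, 3), (0, 0, -3)]

def dose_for_scenario(plan, ct, scale, shift_mm):
    """Placeholder for a treatment planning system recalculation with the CT
    stopping powers scaled by 'scale' and the patient rigidly shifted by
    'shift_mm'. Returns a dummy dose array so the scenario loop runs."""
    perturbation = 10.0 * (scale - 1.0) + 0.1 * sum(shift_mm)
    return np.full(8, 60.0) + perturbation    # 8 dummy calculation points

def scenario_doses(plan, ct):
    """Dose for every (range scale, shift) combination, one row per scenario."""
    return np.stack([dose_for_scenario(plan, ct, s, sh)
                     for s, sh in itertools.product(RANGE_SCALES, SHIFTS_MM)])

doses = scenario_doses(plan=None, ct=None)
print("per-point worst-case band:", doses.max(axis=0) - doses.min(axis=0))
```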

SUMMARY
In this chapter, we have described the main principles of treatment
planning for PBS proton therapy. Given the three-dimensional nature of
the delivery system (i.e., that individual Bragg peaks can be delivered
anywhere in the three dimensions from any single-field direction), PBS is
an inherently flexible and automated method which requires treatment
planning tools that can best exploit its potential. The techniques and
examples in this chapter hopefully do justice to its power.
However, it should be remembered that PBS proton therapy is still very much in its infancy, with only a few thousand patients having been treated worldwide. As such, much experience still has to be gained in optimizing its planning and delivery, and many developments, both technical and clinical, remain to be made.
The main areas of such developments in the coming years are likely to
be in exploiting the degeneracy of the optimization problem. The very
fact that this is a degenerate problem implies that the problems we are
currently giving the optimizer to solve are maybe not demanding
enough, and thus there is scope for adding more and more criteria into
the optimization step.
A simple approach is to start using PBS proton therapy for “dose
painting,” the first analysis of which has just been published by our
group (75). Given the number of variables available to the optimization
process, there is a considerable potential for IMPT to more precisely
form deliberately nonhomogeneous dose distributions. Likewise, and as
we understand more about normal tissue responses, and/or have more
access to functional imaging, it could well be that the concept of
“conformal avoidance” becomes more important. That is, the idea of
more precisely “carving-out” dose not from complete organs but from
the main functional parts of the organs. The flexibility and power of
IMPT should have considerable potential for such approaches as well.
Finally, much work still needs to be done not just in optimizing pencil-beam fluences, but also in optimizing the field-shaping process, either to deliver treatments more efficiently or to use more sophisticated combinations of pencil beams, for example fan-beams or nonregular distributions of pencil beams that directly follow the contours of the target volume (76).
These are just some of the possibilities of PBS proton therapy that are
waiting to be discovered.

KEY POINTS
• Beam modeling and dose calculations.

• Field shaping and plan design.

• Optimization of single-field uniform dose and intensity-modulated proton therapy plans.
• Plan evaluation and robustness.

QUESTIONS
1. Please select the answer choice that best completes the sentence.
Pencil-beam scanning (PBS) . . .
A. Relies on collimators and compensators for conforming the
dose to the target
B. Delivers individually weighted proton Bragg peaks distributed
in three dimensions throughout the target volume
C. Is dependent on the SOBP concept
D. Is not capable of delivering individually homogeneous field
doses across the target
2. Please select the answer choice that best completes the sentence.
Dose calculations for PBS proton therapy . . .
A. Have to be performed using Monte Carlo techniques
B. Can be analytical, but only for the primary dose component
C. Can be analytical, but only for the secondary dose component
D. Can be analytical, but at the cost of accuracy
3. Please select the answer choice that best completes the sentence.
PBS plans…
A. Require many field directions to achieve a high level of
conformity
B. Can achieve good dose homogeneity across the target with just
a few treatment fields
C. Are inherently robust to delivery uncertainties
D. Allow for the same amount of normal tissue sparing as IMRT
4. Please select the answer choice that best completes the sentence.
Optimization in PBS treatment planning…
A. Can only be performed using MC dose calculations
B. Is far more complex than for IMRT planning
C. Is a degenerate problem
D. Is not required.
5. Please select the answer choice that best completes the sentence.
Accurate dose calculations for PBS proton therapy…
A. Can only be performed on MRI data
B. Cannot be performed based on CT data, as it provides the
wrong information
C. Should be performed on good quality CT data using a scanner-
specific calibration curve
D. Require PET data for calculating range.

ANSWERS
1. B Pencil-beam scanning magnetically scans individual proton
pencil beams across the target volume in combination with
energy changes. As only Bragg peaks within (or very close to)
the target volume are delivered, no collimators or
compensators are needed. In addition, as the fluences of the
Bragg peaks are individually optimized in order to best cover
the target with a homogeneous dose, then the concept of the
fixed, one-dimensional SOBP required for passive scattering is
no longer valid. Finally, as mentioned, in the so-called SFUD
(Single-Field Uniform Dose) mode, the optimization process
ensures that the dose across the target volume is homogeneous.
Therefore, the only correct answer is (B).
2. D All commercial (and noncommercial) treatment planning
systems use analytical calculations for both primary and
secondary dose components rather than Monte Carlo, due to
the substantial advantages in computational speed. Although there is no doubt that Monte Carlo calculations are more accurate, no proton therapy facility at the moment uses MC calculations for its routine treatment planning (other than a few centers that use them as an independent check of the calculated plan).
Therefore, the only correct answer is (D).
3. B For both passive scattering and PBS, it is possible to achieve
homogeneous doses across the target with a single field,
therefore answer (A) is wrong. Due to the problem of range
uncertainties, however, proton plans are not inherently robust
unless specifically designed to be, through careful selection of
beam angles and/or robust optimization. Thus answer (C) is
also wrong. Finally, on average, proton plans reduce the doses
to normal tissues by a factor of 2, and therefore allow for more
normal tissue sparing than for IMRT. Thus, answer (D) is also
wrong. This leaves answer (B) as the only correct answer.
4. C As with Question 2, Monte Carlo calculations are not standard
approaches to proton planning, and may be too slow for the
optimization process. Thus, answer (A) is wrong. Optimization
algorithms for proton therapy are the same as for IMRT, as are
the way in which target doses and OAR dose prescriptions are
defined. Therefore, answer (B) is also not correct. Finally,
optimization is essential for PBS proton therapy, thus answer
(D) is also incorrect. However, the optimization problem is
highly degenerate (due to the number of free variables (pencil
beams) per field that are available to the optimizer), and thus
answer (C) is the correct one.
5. C CT is the only imaging modality currently from which proton
range can be accurately calculated. Thus, the only correct
answer is (C).

REFERENCES
1. Koehler AM, Schneider RJ, Sisterson JM. Range modulators for
protons and heavy ions. Nucl Instrum Meth. 1975;131:437–440.
2. Koehler AM, Schneider RJ, Sisterson JM. Flattening of proton dose
distributions for large fields radiotherapy. Med Phys. 1977;4:297–
301.
3. Gottschalk B. Physics of proton interactions in matter. In: Paganetti
H, ed. Proton Therapy Physics. Boca Raton, FL: CRC Press; 2012:20–
59.
4. Goitein M. Radiation Oncology: A Physicist’s-Eye View. New York, NY:
Springer Science and Business Media; 2008.
5. Safai S, Bortfeld T, Engelsman M. Comparison between the lateral
penumbra of a collimated double-scattering beam and uncollimated
scanning beam in proton radiotherapy. Phys Med Biol.
2008;21:1729–1750.
6. Lomax AJ, Bolsi A, Albertini F, et al. Treatment planning for Pencil
Beam scanning. In: Das I, Paganetti H, eds. Principle and Practice of
Proton Beam Therapy. Madison, WI: Medical Physics Publishing Inc.;
2015:667–707.
7. Gottschalk B, Koehler AM, Schneider RJ, et al. Multiple Coulomb
scattering of 160 MeV protons. Nucl Instrum Methods B.
1993;74:467–490.
8. Lomax AJ. Charged particle therapy: The physics of interaction.
Cancer J. 2009;15:285–291.
9. Paganetti H, Niemierko A, Ancukiewicz M, et al. Relative biological
effectiveness (RBE) values for proton beam therapy. Int J Radiat
Oncol Biol Phys. 2002;53:407–421.
10. Paganetti H. Relative biological effectiveness (RBE) values for
proton beam therapy. Variations as a function of biological
endpoint, dose, and linear energy transfer. Phys Med Biol.
2014;59:R419–R472.
11. Kanai T, Kawachi K, Kumamoto Y, et al. Spot scanning system for
proton radiotherapy. Med Phys. 1980;7:365–369.
12. Pedroni E, Bacher R, Blattmann H, et al. The 200 MeV proton
therapy project at PSI: Conceptual design and practical realisation.
Med Phys. 1995;22:37–53.
13. Pedroni E, Scheib S, Boehringer T, et al. Experimental
characterization and physical modelling of the dose distribution of
scanned proton beams. Phys Med Biol. 2005;50:541–561.
14. Bortfeld T, Schlegel W. An analytical approximation of depth-dose
distributions for therapeutic proton beams. Phys Med Biol.
1996;41:1331–1339.
15. Schaffner B, Pedroni E, Lomax A. Dose calculation models for
proton treatment planning using a dynamic beam delivery system:
An attempt to include density heterogeneity effects in the analytical
dose calculation. Phys Med Biol. 1999;44:27–41.
16. Soukup M, Fippel M, Alber M. A Pencil beam algorithm for intensity
modulated proton therapy derived from Monte Carlo simulations.
Phys Med Biol. 2005;50:5089–5104.
17. Szymanowski H, Oelfke U. Two-dimensional pencil beam scaling:
An improved proton dose algorithm for heterogeneous media. Phys
Med Biol. 2002;47:3313–3330.
18. Paganetti H, Schuemann J, Mohan R. Dose calculations for proton
beam therapy: Monte Carlo. In: Das I, Paganetti H, eds. Principle and
Practice of Proton Beam Therapy. Madison, WI: Medical Physics
Publishing Inc.; 2015;571–594.
19. Ferrari A, Sala PR, Fasso A, et al. FLUKA: A multi-particle transport
code, CERN Yellow Report CERN 2005–10. Geneva: CERN; INFN/TC
05/11, SLAC-R-773;2005.
20. Agostinelli S, Allison J, Amako KA, et al. GEANT4 – a simulation
toolkit. Nucl Instrum Methods Phys Res A. 2003;506:250–303.
21. Pelowitz DB. MCNPX user’s manual, version 2.5.0. Los Alamos
National Laboratory; 2005;LA-CP-05–0369.
22. Fippel M, Soukup M. A Monte Carlo dose calculation algorithm for
proton therapy. Med Phys. 2004;31:2263–2273.
23. Dementiev AV, Sobolevsky NM. SHIELD-universal Monte Carlo
hadron transport code: Scope and applications. Radiation
Measurements. 1999;30:553–557.
24. Perl J, Shin J, Schumnn J, et al. TOPAS: An innovative proton
Monte Carlo platform for research and clinical applications. Med
Phys. 2012;39:6818–6837.
25. Grassberger C, Lomax A, Paganetti H. Characterizing a proton beam
scanning system for Monte Carlo dose calculations in patients. Phys
Med Biol. 2015;60:633–645.
26. Yepes P, Randeniya S, Taddei PJ, et al. A track repeating algorithm
for fast Monte Carlo dose calculations of proton radiotherapy. Nucl
Technol. 2009;168:736–740.
27. Jia X, Schumann J, Paganetti H, et al. GPU-based fast Monte Carlo
dose calculation for proton therapy. Phys Med Biol. 2012;57:7783–
7797.
28. Mustafa A, Jackson DF. The relation between x-ray CT numbers and
charged particle stopping powers and its significance for
radiotherapy treatment planning. Phys Med Biol. 1983;2:169–176.
29. Schneider U, Pedroni E, Lomax A. The calibration of CT-Hounsfield
units for radiotherapy treatment planning. Phys Med Biol.
1996;41:111–124.
30. Schneider W, Bortfeld T, Schlegel W. Correlation between CT
numbers and tissue parameters needed for Monte Carlo simulations
of clinical dose distributions. Phys Med Biol. 2000;45:459–478.
31. Schaffner B, Pedroni E. The precision of proton range calculations in
proton radiotherapy treatment planning: Experimental verification
of the relation between CT-HU and proton stopping power. Phys
Med Biol. 1998;43:1579–1592.
32. Kruse J. Immobilisation and simulation. In: Das I, Paganetti H, eds.
Principle and Practice of Proton Beam Therapy. Madison, WI: Medical
Physics Publishing Inc. 2015;521–540.
33. Paganetti H. Range uncertainties in proton therapy and the role of
Monte Carlo simulations. Phys Med Biol. 2012;57:99–117.
34. Newhauser WD, Giebeler A, Langen KM, et al. Can megavoltage
computed tomography reduce proton range uncertainties in
treatment plans for patients with large metal implants? Phys Med
Biol. 2008;53:2327–2344.
35. Jaekel O, Reiss P. The influence of metal artefacts on the range of
ion beams. Phys Med Biol 2007;52:635–644.
36. Yang M, Virshup G, Clayton J, et al. Theoretical variance analysis of
single- and dual-energy computed tomography methods for
calculating proton stopping power ratios of biological tissues. Phys
Med Biol. 2010;55:1343–1362.
37. Huenemohr N, Paganetti H, Greilich S, et al. Tissue decomposition
from dual energy CT data for MC based dose calculation in particle
therapy. Med Phys. 2014;41:061714
38. Wei J, Sandison GA, Hsi WC, et al. Dosimetric impact of a CT metal
artefact suppression algorithm for proton, electron and photon
therapies. Phys Med Biol. 2006;51:5183–5197.
39. Meyer E, Raupach R, Lell M, et al. Normalized metal artifact
reduction (NMAR) in computed tomography. Med Phys.
2010;37:5482–5493.
40. Dietlicher I, Casiraghi M, Ares C, et al. The effect of metal implants
in proton therapy: Experimental validation using an
anthropomorphic phantom. Phys Med Biol. 2014;59:7181–7194.
41. Lomax AJ. Intensity modulated proton therapy and its sensitivity to
treatment uncertainties 1: The potential effects of calculational
uncertainties. Phys Med Biol. 2008;53:1027–1042.
42. Cabal GA, Jäkel O. Dynamic Target Definition: A novel approach for
PTV definition in ion beam therapy. Radiother Oncol. 2013;
107:227–233.
43. Park PC, Zhu XR, Lee AK, et al. A beam-specific planning target
volume (PTV) design for proton therapy to account for setup and
range uncertainties. Int J Radiat Oncol Biol Phys. 2012;82:e329–
e336.
44. Bolsi A, Lomax AJ, Pedroni E, et al. Experiences at the Paul Scherrer
Institute with a remote patient positioning procedure for high-
throughput proton radiation therapy. Int J Radiat Oncol Biol Phys.
2008;71:1581–1590.
45. Zhu XR, Poenisch F, Heng Li, et al. Field shaping: Scanning beam.
In: Das I, Paganetti H, eds. Principle and Practice of Proton Beam
Therapy. Madison, WI: Medical Physics Publishing Inc: 2015; 667–
707.
46. Schieb S, Pedroni E. Dose calculation and optimization for 3D
conformal voxel scanning. Radiat Environ Biophys. 1992;31:251–256.
47. Lomax AJ, Pedroni E, Schaffner B, et al. 3D treatment planning for
conformal proton therapy by spot scanning. Quantitative Imaging in
Oncology. 1996. Proc. 19th L H Gray Conference, (1996 BIR
publishing London) 67–71.
48. Zenklusen SM, Pedroni E, Meer D, et al. Preliminary investigations
for the option to use fast uniform scanning with compensators on a
gantry designed for IMPT. Med Phys. 2011;38:5208–5216.
49. Bortfeld T, Buerkelbach J, Boesecke R, et al. Methods of image
reconstruction from projections applied to conformation
radiotherapy. Phys Med Biol. 1990;35:1423–1434.
50. Lomax A. Intensity modulated methods for proton therapy. Phys
Med Biol. 1999;44:185–205.
51. Albertini F, Gaignat S, Bosshard M, et al. Planning and Optimizing
Treatment Plans for Actively Scanned Proton Therapy. In: Censor Y,
Jiang M, Wang G, eds. Biomedical Mathematics: Promising Directions
in Imaging, Therapy Planning and Inverse Problems. Madison, WI:
Medical Physics Publishing; 2010:1–18.
52. Unkelbach J, Craft D, Gorissen BL, et al. Treatment plan
optimization in proton therapy. In: Das I, Paganetti H, eds. Principle
and Practice of Proton Beam Therapy. Madison, WI: Medical Physics
Publishing Inc; 2015;623–646
53. Titt U, Mirkovic D, Sawakuchi GO, et al. Adjustment of the lateral
and longitudinal size of scanned proton beam spots using a
preabsorber to optimize penumbrae and delivery efficiency. Phys
Med Biol. 2010;55:7097–7106
54. Bues M, Newhauser WD, Titt U, et al. Therapeutic step and shoot
proton beam spot scanning with a multi-leaf collimator: A Monte
Carlo study. Radiat Prot Dosimetry 2005;115:164–169.
55. Lomax AJ. Intensity modulated proton therapy. In: Delaney T, Kooy
H, eds. Proton and charged particle radiotherapy. Boston, MA:
Lippincott Williams and Wilkins; 2008.
56. Deasy JO, Shephard DM, Mackie TR. Distal edge tracking: A
proposed delivery method for conformal proton therapy using
intensity modulation. In: Leavitt DD, Starkschall GS, eds. Proc XIIth
ICCR, Salt Lake City. Madison, WI: Medical Physics Publishing; 406–
409.
57. Oelfke U, Bortfeld T. Intensity modulated radiotherapy with
charged particle beams: Studies of inverse treatment planning for
rotation therapy. Med Phys. 2000;27:1246–1257.
58. Albertini F, Hug EB, Lomax AJ. The influence of the optimization
starting conditions on the robustness of intensity-modulated proton
therapy plans. Phys Med Biol. 2010;55:2863–2878.
59. Unkelbach J, Bortfeld T, Martin BC, et al. Reducing the sensitivity
of IMPT treatment plans to setup errors and range uncertainties via
probabilistic treatment planning. Med. Phys. 2009;36:149–163.
60. Unkelbach J, Chan TC, Bortfeld T. Accounting for range
uncertainties in the optimization of intensity modulated proton
therapy. Phys Med Biol. 2007;52:2755–2773.
61. Chen W, Unkelbach J, Trofimov A, et al. Including robustness in
multi-criteria optimization for intensity-modulated proton therapy.
Phys Med Biol. 2012;57:591–608.
62. Grassberger C, Trofimov A, Lomax A, et al. Variations in linear
energy transfer within clinical proton therapy fields and the
potential for biological treatment planning. Int J Radiat Oncol Biol
Phys. 2011;80:1559–1566.
63. Mohan R, Sahoo N. Uncertainties in proton therapy: Their impact
and management. In: Das I, Paganetti H, eds. Principle and Practice of
Proton Beam Therapy. Madison, WI: Medical Physics Publishing Inc;
2015:595–622.
64. Goitein M. Calculation of the uncertainty in the dose delivered
during radiation therapy. Med Phys. 1985;12:608–612.
65. Pflugfelder D, Wilkens JJ, Oelfke U. Worst case optimization: A
method to account for uncertainties in the optimization of intensity
modulated proton therapy. Phys Med Biol. 2008;53:1689–1700.
66. Fredriksson A, Forsgren A, Hardemark B. Minimax optimization for
handling range and setup uncertainties in proton therapy. Med Phys.
2011;38:1672–1684.
67. Liu W, Zhang X, Li Y, et al. Robust optimization of intensity
modulated proton therapy. Med Phys. 2012;39:1079–1091.
68. Lomax AJ. Intensity modulated proton therapy and its sensitivity to
treatment uncertainties 2: The potential effects of inter-fraction and
inter-field motions. Phys Med Biol. 2008;53:1043–1056.
69. Albertini F, Hug EB, Lomax AJ. Is it necessary to plan with safety
margins for actively scanned proton therapy? Phys Med Biol.
2011;56:4399–4413.
70. Casiraghi M, Albertini F, Lomax AJ. Advantages and limitations of
the “worst case scenario” approach in IMPT treatment planning.
Phys Med Biol. 2013;58:1323–1339.
71. Park PC, Cheung J P, Zhu X R, et al. Statistical assessment of proton
treatment plans under setup and range uncertainties. Int J Radiat
Oncol Biol Phys. 2013;86:1007–1013.
72. Lowe M, Albertini F, Aitkenhead A, et al. Incorporating the effect of
fractionation in the evaluation of proton plan robustness to set-up
errors. Submitted to Phys Med Biol. July 2015.
73. Lomax AJ, Boehringer T, Coray A, et al. Intensity modulated proton
therapy: A clinical example. Med Phys. 2001;28:317–324.
74. Albertini F, Bolsi A, Lomax AJ. Sensitivity of intensity modulated
proton therapy plans to changes in patient weight. Radiother Oncol.
2008;86:187–194.
75. Madani I, Lomax AJ, Albertini F, et al. Dose-painting intensity-
modulated proton therapy using simultaneous integrated boost for
intermediate- and high-risk meningioma. Radiat Oncol. 2015; 30:72.
76. Meier G, Leiser D, Besson R, et al. Contour scanning for pencil beam
scanned proton therapy for skull-base tumors, PTCOG54, San Diego,
CA; 2015.
11 Patient and Organ Movement
Paul J. Keall and James M. Balter

INTRODUCTION
The driving tenet of external-beam radiotherapy is the precise delivery
of focal radiation doses to the target, so that an effective dose can be
delivered while limiting concomitant normal tissue irradiation and
related toxicity risk. Technical advancements, such as intensity-modulated radiation therapy (IMRT), volumetric-modulated arc therapy (VMAT), and image-guided radiotherapy (IGRT), have provided significant gains in the ability to specify and shape such dose distributions. Accurate delivery, so that intended and actual doses agree, is a more complicated matter.
The problems of patient positioning and motion have been studied
extensively. Although there are currently areas that need further
exploration, it is possible to consider the magnitude of various
uncertainties in dose delivery due to patient position variation and organ
movement, and to discuss rational strategies for dealing with these
uncertainties in the context of precision radiotherapy.

DESCRIPTION OF THE PROBLEM OF GEOMETRIC VARIATION
The International Commission on Radiation Units and Measurements (ICRU) has addressed the problem of geometric variations. In Reports 50 (1), 62 (2), and 83 (3), concepts have evolved that attempt to standardize the means of reporting doses. Some of the concepts presented in these reports have served as the basis for numerous investigations over the past few years, and have been adopted as standards for clinical trials. A brief discussion of the key concepts as they apply to geometric variation follows.
The key structures that are delineated are the gross tumor volume (GTV) and organs at risk (OARs). The GTV is generally defined as the “visible” target, that is, the extent of disease that can be delineated from imaging or related information. The OARs are tissue structures that are dose-limiting due to the risk of radiation-induced toxicity.
The next volume of interest is the clinical target volume (CTV). This target volume ideally expands about the GTV to include a reasonable expectation of the true target extent on a (static) patient model. The CTV expansion accounts for the expected extent of disease below the detection threshold of the imaging modality.
The planning target volume (PTV) adds a margin to the CTV to
account for organ motion or setup error. Margins, allowing for
positioning, motion, and anatomical changes, may also be required for
the OAR to arrive at the planning organ-at-risk volume (PRV). A margin
provides a buffer in the delineation of tissues to account for
uncertainties (3). These structures are used for the treatment planning
process.
When the patient is imaged to define the CTV and critical structures,
the position is sampled. In general, this sample occurs once, specifically
during the computed tomography (CT) scan for treatment planning. To
obtain this sample, the patient is immobilized and positioned with
typical reference marks placed on the skin and/or immobilization device
at the principal axes of the CT scanner for verification of position and
orientation. The sample of the patient serves as the model for treatment
planning, and all subsequent targeting and density modeling are based
on the information obtained during this session. With the advent of broadly available in-room imaging modalities, such as cone-beam CT (4–7) and the emerging integrated MRI-radiotherapy systems (8–10), adaptive strategies that can account for inter- and potentially intrafraction anatomic changes are emerging.
Multiple samples of patient position will form a distribution. If we set
the position at the initial (treatment planning) CT scan as the “true”
patient position, then a reasonable method of describing this distribution
of subsequent positioning is by the translation necessary to make the
patient position match that of the treatment planning CT scan.
Conventionally, the average coordinate of this distribution is considered
as the “systematic” error, in that it is the effective transformation that
persists throughout the samples (multiple CT scans in the above
example, or multiple patient positions over a course of treatment). The
spread of sampled positions about this average coordinate represents the
random setup variation. It is important to note that the average
coordinate may never be sampled. An excellent overview of margins in
radiotherapy is given by van Herk (11).
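The decomposition into systematic and random components, and its translation into a margin, can be made concrete with a short calculation. In the sketch below, each patient’s systematic error is taken as the mean of that patient’s daily displacements, Σ is the standard deviation of these means across patients, and σ is the pooled day-to-day standard deviation; the widely quoted population margin recipe of approximately 2.5Σ + 0.7σ from the margins literature is used only as an example, and the displacement values are invented.

```python
import numpy as np

# Daily setup displacements (mm) for several patients along one axis,
# measured relative to the planning CT position (illustrative numbers).
displacements = {
    "pt1": [2.1, 1.4, 3.0, 2.5, 1.9],
    "pt2": [-0.5, 0.8, -1.2, 0.3, -0.1],
    "pt3": [1.0, 0.2, 1.8, 0.9, 1.5],
}

patient_means = np.array([np.mean(v) for v in displacements.values()])
patient_sds = np.array([np.std(v, ddof=1) for v in displacements.values()])

Sigma = np.std(patient_means, ddof=1)      # SD of systematic errors across patients
sigma = np.sqrt(np.mean(patient_sds**2))   # pooled random (day-to-day) SD

# Commonly quoted population margin recipe (see the margins review cited above):
margin = 2.5 * Sigma + 0.7 * sigma
print(f"Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm, margin = {margin:.1f} mm")
```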

MINIMIZING THE IMPACT OF SETUP VARIATIONS ON TREATMENT
Obviously, these setup variations require margins to create the PTV to
ensure proper coverage of the CTV. Reducing the margins yields a
smaller volume of tissue irradiated to high doses, and can potentially
reduce the toxicity to normal tissues. As such, significant efforts have
been made to minimize the range of variations and their resulting
impact on treatment.

Positioning Systems
A significant variety of equipment is in use to aid in repeat setup of
patients. This equipment attempts to address a dual role: immobilization
and localization. These dual roles are not necessarily compatible for any
given piece of technology.

Immobilization
Quite simply, the process of immobilization involves limiting or
eliminating movement for the time period of imaging or treatment. The
primary objective is to limit target movement, although critical normal
tissue movement also needs to be considered. There is a large amount of
literature on immobilization; however, as the technology is evolving, it
is important to consider a number of key aspects in deciding on a
technology and strategy for use of an immobilization system.
The advantage of a given immobilization method may be
compromised by the complexity of use. If an immobilization system has
many degrees of freedom, improper configuration of the device may lead
to systematic errors in patient position or shape at treatment. Examples
of complex systems are multiuse boards for fixation, in which the angles
and positions of arm supports, angle of the upper thorax, shape of neck
support, and other components are adjustable. These devices are very cost-effective and can work well, but special care must be taken
to properly verify the patient configuration, including notation of all
configuration parameters and documented photographs of proper setup.
Some systems (e.g., alpha cradle and vacuum loc) form directly to the
patient’s shape. This can be beneficial in positioning, but it is important
to separate comfort from immobilization. Formed immobilization that
extends to distal regions from the target has been shown to be beneficial
in reproducing position (12). However, studies have shown that simple
or no immobilization, when used well, can be as effective as more
complex systems (13). Therefore, the training, use, protocols, documentation, and in-house expertise are as important as the systems themselves, and there is no substitute for qualified expert staff.

Localization Technology
A wealth of technology has been applied to localization in radiation
therapy. At present, the most prevalent technology includes in-room
lasers, gantry-mounted kilovoltage x-ray imaging systems, and electronic portal imaging devices (EPIDs), though in-room localization is a very fast-changing field.
In-room diagnostic radiography is, in fact, a very old concept. Film-
based radiographic systems have existed on linear accelerators for over
30 years (14). Room-based digital systems have been used for
radiographic (15–18) and fluoroscopic (19) procedures.
A number of different localization technologies have been used to
treat radiotherapy patients. These include dedicated systems such as the real-time tracking radiotherapy system (20), tomotherapy (21), CyberKnife (22), and Vero (23) linear accelerators. A number of additional localization methods have been developed based on markers, including Calypso (24), Navotek (25), and RayPilot (26). Emerging localization technologies include ultrasound (27), integrated MRI-radiotherapy systems (10,28–30), and kilovoltage intrafraction monitoring (KIM) (31,32). Given the reduction in the cost of camera-
based surface imaging for recreational (typically gaming) applications,
surface imaging (33) is anticipated to grow rapidly in use to assist with
both setup reproducibility and intra-treatment patient monitoring.

STRATEGIES FOR POSITION CORRECTION


Online Correction
Generally, online position correction refers to the processes of measuring
and correcting setup error at the start of each treatment fraction. This is
the area in which the vast majority of technical developments have
focused recently. The process of online correction includes three steps:
measurement, decision, and adjustment. A fourth step (verification) may
also be used.
Measurement systems include data collection and analysis. Data can
be from imaging (e.g., radiographs, CT scan images, ultrasound, and
video) or other markers (e.g., electromagnetic and external fiducial).
Analysis is the comparison of reference image or position information to
that gathered at treatment.
Decision is the process of choosing to act or not on information from
measurements. It is valuable to remember that the measurement systems, as well as the correction technology, are not perfect, and therefore their own errors may actually increase setup errors in certain circumstances. The use of thresholds for corrections allows a trade-off between the cost (frequency of adjustment) and benefit (actual reduction of errors). Figure 11.1 shows the cost versus threshold for setup adjustment in prostate patients. Of course, the definition of cost in setup adjustment is important here. If correction is integrated into delivery, for example through real-time adaptation, the cost of correction is very low. If the setup adjustment involves a manual procedure, including
treatment pause, reimaging, position shift, and verification, then the
time is long and the cost is high. Automation can substantially reduce
the cost of radiotherapy. Figure 11.2 shows the impact of positioning
strategy on margins under assumptions of systematic error versus none.
FIGURE 11.1 Cost (frequency of adjustment) versus threshold for online setup adjustment (based
on 6-mm σ for pelvic patients).
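The trade-off between correction frequency and residual error can be illustrated with a small simulation. The snippet below assumes normally distributed daily setup errors (the 6-mm σ of Figure 11.1 is used purely as an example) and a simple protocol that applies a full correction whenever the measured error exceeds a threshold; measurement and correction uncertainties, as in Figure 11.2, could be added as extra noise terms.

```python
import numpy as np

def correction_tradeoff(sigma_setup=6.0, threshold=3.0, n_fractions=10000, seed=0):
    """Fraction of days requiring a shift (cost) and residual error SD (benefit)
    for a simple online protocol: correct fully whenever |error| > threshold."""
    rng = np.random.default_rng(seed)
    errors = rng.normal(0.0, sigma_setup, n_fractions)
    corrected = np.where(np.abs(errors) > threshold, 0.0, errors)
    frequency = np.mean(np.abs(errors) > threshold)
    return frequency, corrected.std()

for thr in (0.0, 2.0, 4.0, 6.0, 8.0):
    freq, resid = correction_tradeoff(threshold=thr)
    print(f"threshold {thr:>3.0f} mm: adjust {freq:5.1%} of fractions, "
          f"residual SD {resid:4.2f} mm")
```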

Off-Line Correction
One of the earlier forms of position correction was off-line correction.
Studies of the dosimetric impact of setup error (34–37) demonstrate that
systematic error has the largest impact on margin needed to adequately
dose a target, and that the geometric expansion to account for random
error is generally small (less than one standard deviation). Given this
observation, it can be seen that, as long as random errors are not
exceedingly large, the most significant patient benefit comes from
strategies that rapidly reduce the magnitude of systematic setup
variation.
FIGURE 11.2 Benefit (margin) versus threshold for adjustment (4-mm σ setup, 1.5-mm σ
measurement uncertainty, and 1.0-mm σ setup correction uncertainty).

A number of strategies have been used to minimize systematic error.


Two strategies are the shrinking action level (SAL) and no action level
(NAL) methods (38–40). In the SAL protocol, setup is verified daily for
the first few fractions, and adjustments are made with tolerances that
reduce in magnitude as the fractions progress. This strategy has shown
promising results.
The NAL protocol has also been used. In this method, images from
setup are acquired for n (typically 3 to 5) fractions. These images are
analyzed off-line (thereby minimizing the delay needed to analyze and
act on images at the treatment unit), and the best prediction of the
systematic error (typically the average position of the fractions analyzed)
is corrected before the next fraction treated. This protocol has been
tested, and shown to dramatically reduce systematic errors.
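The NAL estimate itself can be expressed in a few lines. The sketch below assumes that the mean displacement of the first n imaged fractions is applied as a single correction to all subsequent fractions; the displacement values are invented for illustration.

```python
import numpy as np

def nal_correction(measured_shifts, n=3):
    """No-action-level (NAL) protocol: image the first n fractions without
    acting, estimate the systematic error as their mean displacement, and
    apply that single correction to all subsequent fractions."""
    measured = np.asarray(measured_shifts, dtype=float)
    estimate = measured[:n].mean(axis=0)       # best guess of systematic error
    residual = measured.copy()
    residual[n:] -= estimate                   # later fractions corrected
    return estimate, residual

# Example: 3D daily displacements (mm) with a true systematic error of ~(4, -2, 1)
rng = np.random.default_rng(1)
daily = np.array([4, -2, 1]) + rng.normal(0, 2, size=(20, 3))
est, resid = nal_correction(daily, n=3)
print("estimated systematic error:", np.round(est, 1))
print("mean residual after correction:", np.round(resid[3:].mean(axis=0), 1))
```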

ADAPTIVE RADIOTHERAPY STRATEGIES


Adaptive strategies for position adjustment were first proposed by Yan et
al. (41). The adaptive process extends the concept of off-line and online
strategies. Essentially, the patient position variability is assumed to follow a population model before patient-specific measurements are available. As
information about that patient’s variation is acquired (e.g., through
multiple CT scans or daily portal images), the model of variation is
refined, and predictions from this refined model can be used to adjust
position and margins. The frequency of further measurement can be
similarly adjusted as increased confidence in the patient variation is
gained, and similarly increased frequency of measurement can be
reinstated if, for example, an unexpected outlying measurement occurs
during the treatment course. Such strategies form a basis for plan
modification, which is a topic of active research and development in
radiation therapy.
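One simple way to make this refinement concrete is a conjugate (normal–normal) update, in which a population prior for a patient’s systematic error is combined with that patient’s accumulating measurements; the prior and noise values below are purely illustrative and do not represent any published adaptive protocol.

```python
import numpy as np

def update_systematic_estimate(measurements, prior_mean=0.0, prior_sd=3.0,
                               random_sd=2.0):
    """Posterior mean/SD of a patient's systematic setup error, combining a
    population prior N(prior_mean, prior_sd^2) with daily measurements that
    scatter about the true systematic error with SD 'random_sd'."""
    m = np.asarray(measurements, dtype=float)
    n = m.size
    post_var = 1.0 / (1.0 / prior_sd**2 + n / random_sd**2)
    post_mean = post_var * (prior_mean / prior_sd**2 + m.sum() / random_sd**2)
    return post_mean, np.sqrt(post_var)

daily = [4.1, 2.8, 3.5]                      # first three measured shifts (mm)
mean, sd = update_systematic_estimate(daily)
print(f"posterior systematic error: {mean:.1f} +/- {sd:.1f} mm")
```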

Organ Movement
Internal organ movement is a further, sometimes significant, factor in
dose-limiting geometric uncertainty. The most studied forms of organ
movement have been prostate movement and breathing-induced
movement in (primarily) the thorax and abdomen. Langen and Jones
have published an excellent review of the magnitude of organ movement
as studied by several investigators (42). With the availability of real-time
localization systems, rich datasets of tumor position are now available
(24,43,44).
Prostate position variability is a combination of pelvic setup variation
(mentioned above) with internal movement of the prostate within the
pelvis (45–54). The primary factors affecting prostate movement are
rectal and bladder filling, with differential influence of these forces in
prone versus supine patients. The vast majority of prostate patients are
positioned supine, both for patient comfort and owing to observed
improvements in setup variation of the pelvis. Prone positioning has
been reported advantageous due to a separation of the rectal wall from
the prostate, although both setup variation and (breathing-related)
internal movement have been observed to increase in these patients.
FIGURE 11.3 Graphic representation of the dominant modes of prostate movement (bladder
—yellow, rectum—brown, prostate—pink, intraprostatic implanted markers—white stars). The
major translation axes (black arrows) about the left–right and anterior–posterior axes have also
been significantly attributed to rotation about the left–right axis (white arrow).

Internal movement of the prostate has generally been observed in the anterior–posterior (AP) and cranial–caudal (CC) directions. Furthermore,
a significant component of this movement has been correlated to
rotations of the prostate about the left–right (LR) axis, with a pivot at or
near the prostatic apex (7) (Fig. 11.3). The magnitude of this movement
(of the prostate relative to the pelvic bones) is typically 1 cm or less in
the AP and CC directions, and 5 mm or less in the LR direction.
Although most prostate movement studies have examined
interfractional position changes (i.e., on the order of days), some
measurements have been made of intrafractional movement. Breathing
has been shown to impact prostate movement, most notably during deep
breathing and in prone patients (55,56). Peristalsis, gas in the rectum,
and bladder filling have a more significant influence on prostate position
and potentially short-term movement. The complexity of prostate motion
is shown in Figure 11.4. In most cases, there is little motion, but motion
of over 15 mm can be observed with a variety of motion types.
Prostate movement has been addressed by attempts at reducing
motion by diet as well as immobilization through a rectal balloon. Most
common attempts at managing prostate movement, however, have
focused on localization. Radiographic localization and tracking of
implanted markers, studied by several investigators, are routine
practices, with initial localization (before subsequent movement)
accuracy of better than 2 mm. Ultrasound and in-room CT scan have also
been used for prostate localization before treatment.

FIGURE 11.4 Prostate motion exhibits a variety of different motion characteristics. From Ng JA,
Booth JT, Poulsen PR, et al. Kilovoltage intrafraction monitoring for prostate intensity modulated
arc therapy: first clinical results. Int J Radiat Oncol Biol Phys. 2012;84(5):e655–e661.

Vast efforts have recently been focused on the problem of breathing-related movement in radiation therapy. An AAPM Task Group has been
dedicated to this topic (57). Breathing influences movement and shape
change primarily in the thorax and abdomen, although, as noted above,
breathing-related movements can also be seen in pelvic structures.
Breathing is a complex process. It is controlled both voluntarily and
automatically. Various combinations of thoracic and abdominal muscles
(including the diaphragm) can be used to control breathing, and
therefore the shape of a patient can vary for the same estimated “phase”
of breathing when evaluated sequentially.
A few general observations have been made about breathing in
population studies. A typical breathing cycle lasts around 4 seconds for
lung cancer patients (58). During normal breathing, patients tend to spend more time at (or near) exhale than at inhale. Tumors near the apices
of the lungs tend to move less than those near the diaphragm. Although
these general observations represent a reasonable population summary,
numerous studies have shown that individual patients may violate any of
the above observations. The need for patient-specific motion assessment
has been demonstrated (59,60). The advent of four-dimensional (4D) CT
techniques (61) provides data that help further elucidate patient-specific
movement. 4D CT is now a widely used method in radiotherapy.
A very thorough summary of the ventilatory movement patterns of
intrathoracic tumors was published by Seppenwoolde (43). In addition
to the above observations, this study further showed the influence of
heartbeat on some tumors, especially those near the mediastinum. An
observation of complex, elliptical movement (“hysteresis”) was also
noted in this study and observed by several other investigators. This
elliptical movement can be attributed to the complex elastic properties
of lung tissue, coupled with the different interactions of muscles and
force between the inhale and exhale portions of the breathing cycle.
Lung tumor motion induced by respiration changes with time. The daily
variation of lung tumor motion traces over 4 consecutive treatment days
is shown in Figure 11.5. A large variation in motion within and between
fractions is observed, which challenges the ability to determine suitable
treatment margins.
Motion has been studied in breast cancer as well. In general, the
breast and chest wall move <1 cm within a single treatment fraction.
Such small movements may not demand significant intrafraction
intervention for motion management. Larger interfraction variation of
chest wall position has been seen in portal imaging studies. Of note,
however, is the potentially significant advantage of deep-inspiration
breathhold (62–67), not only for immobilizing the chest wall
temporarily, but also, more importantly, for reducing lung density and
separating the heart from the medial high-dose region.
The abdomen has demonstrated significant breathing-related
movement with typical amplitudes of 1.5 cm or more. The superior
region of the liver moves with strong correlation to the diaphragm,
while more caudal regions of the liver may move differentially due to
deformation (68–71).

Technology to Manage Organ Motion


A number of technologies have been introduced to manage breathing
movement. The most common method currently employed involves
using larger margins to account for the expected inter- and intrafraction
motion variation, though as shown in Figure 11.5 this motion is variable
and difficult to estimate. Another system involves gating (turning on and
off) the treatment beam. The feedback for gating has generally been
from the monitoring of an externally placed reflective marker on the
patient’s abdomen, although fluoroscopic tracking systems have also
been used with tolerance windows for gating (72). Gating involves a
trade-off of residual motion versus efficiency. The narrower the
acceptance range for motion, the less frequently the beam is on. The
most significant concern with external gating is the relationship between
the external marker position and the tumor location. While targets near
the skin surface (e.g., breast) may have significant correlation with
external references, other targets, especially those in the thorax, have
been shown to vary in location at the same phase (as estimated from
external motion) over multiple breathing cycles (73–76).
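The duty-cycle trade-off can be quantified with the same kind of simple motion trace: narrowing an amplitude-based gating window around exhale reduces the residual motion while the beam is on, but also reduces the fraction of time the beam is on. All numbers in the sketch below are illustrative.

```python
import numpy as np

def gating_tradeoff(window_mm, amplitude=10.0, period=4.0, n=2, n_samples=4000):
    """Duty cycle and residual motion for amplitude gating around exhale,
    using the cos^(2n) breathing trace from the previous sketch."""
    t = np.linspace(0.0, period, n_samples, endpoint=False)
    z = -amplitude * np.cos(np.pi * t / period) ** (2 * n)   # exhale at 0 mm
    beam_on = z > -window_mm                 # gate: within 'window_mm' of exhale
    duty_cycle = beam_on.mean()
    residual = z[beam_on].max() - z[beam_on].min() if beam_on.any() else 0.0
    return duty_cycle, residual

for window in (2.0, 4.0, 6.0, 10.0):
    duty, resid = gating_tradeoff(window)
    print(f"window {window:4.1f} mm: duty cycle {duty:5.1%}, residual {resid:4.1f} mm")
```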

FIGURE 11.5 Calypso-measured lung tumor motion traces over 4 consecutive days. Note the
large changes within and between fractions. From Shah AP, Kupelian PA, Waghorn BJ, et al.
Real-time tumor tracking in the lung using an electromagnetic tracking system. Int J Radiat Oncol
Biol Phys. 2013;86(3):477–483.
FIGURE 11.6 Components of an active breathing control (ABC) system.

Another commonly used technology is active breathing control (ABC) (Fig. 11.6). First introduced by Wong (77), this concept involves using a
system that monitors breathing, and occludes breath at a given phase of
the breathing cycle and/or volume of air relative to exhalation. Various
studies have shown excellent short-term reproducibility of target
position in the thorax and abdomen using this technology (15,78–82).
Decreased accuracy in long-term reproducibility suggests the advantage
of image-guided localization at the start of a treatment fraction in
combination with ABC-aided ventilatory immobilization.
More complex technology for managing breathing movement involves
tracking. In this process, an estimate of the target’s trajectory is used to
adjust the couch, linear accelerator orientation, or field aperture. The
available systems to perform real-time adaptation, along with the year they were first implemented clinically, are shown in Figure 11.7. Of the four systems shown, the most widely available are the multileaf collimator (MLC) and the couch. The MLC is the lightest, and as each leaf can be controlled individually, higher-order corrections such as rotation (83) and deformation (84) can be performed.
Some breathing management systems rely on the relationship of a
surrogate to estimate tumor position at any given time or patient state.
Various surrogates have been employed, including implanted fiducial
markers, external fiducials (usually tracked in real time by video
systems), external surface monitoring, and lung volume and air flow.
The relationship between surrogate state/position and tumor position
may be variable, and the influence of this variability on the geometric accuracy of target position prediction should determine the extent of
additional verification needed or residual error expected. Two
commercial systems, the CyberKnife and Vero, currently employ a
hybrid approach to tracking, in which implanted radiopaque fiducials
are periodically localized using biplanar radiographs, and their position
is used to update a correlation with the constantly monitored external
surface of the patient.
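A toy version of such a hybrid correlation model is sketched below: an internal–external relationship is fit from a few paired fiducial/surrogate samples and can be refit whenever new radiographic localizations arrive. A simple linear fit is used purely for illustration; clinical systems use more elaborate, vendor-specific models.

```python
import numpy as np

class SurrogateModel:
    """Linear internal-external correlation: target = a * surrogate + b,
    refit whenever new paired (surrogate, fiducial) samples are available."""
    def __init__(self):
        self.a, self.b = 0.0, 0.0

    def refit(self, surrogate, target):
        # Least-squares fit of a first-degree polynomial (slope and intercept).
        self.a, self.b = np.polyfit(surrogate, target, deg=1)

    def predict(self, surrogate):
        return self.a * np.asarray(surrogate) + self.b

# Paired samples from periodic radiographic localizations (illustrative mm values).
surrogate_mm = np.array([1.0, 3.0, 5.0, 7.0, 9.0])     # external marker amplitude
target_mm = np.array([2.1, 5.9, 10.2, 13.8, 18.1])     # internal fiducial position

model = SurrogateModel()
model.refit(surrogate_mm, target_mm)
print("predicted target at surrogate = 6 mm:", round(float(model.predict(6.0)), 1))
```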

FIGURE 11.7 The four systems investigated to realign the radiation beam and tumor due to
intrafraction motion.

SUMMARY
The influence of geometric variations in radiation therapy increases in
significance with the conformality of the planned treatment. Our
understanding of motion and its effects is growing. Interventions to
better reduce these movement-related uncertainties are evolving rapidly.
A fundamental understanding of the limitations of any given monitoring
or tracking system, coupled with the impact of uncertainty in target
position on dose, will yield efficient strategies for implementing
technology to limit the impact of patient and organ movement on
treatment outcome.

KEY POINTS
• Organ structure nomenclature has been standardized by the ICRU
in reports 50, 62, and 83. The visible gross tumor volume (GTV)
may be expanded into a clinical target volume (CTV) to include
microscopic disease, and further expanded into a planning target
volume (PTV), to account for organ motion or setup error. Similarly,
the organs at risk (OARs) may be expanded to planning organs at
risk (PRVs) to account for patient position and organ motion.

• Patient positioning systems are designed to immobilize the patient and/or improve the localization of the treatment site. Immobilization devices should limit target movement, but should not be so complex that systematic setup errors could be introduced. A number of in-room
localization technologies have been introduced in the past few
years to assist with setup reproducibility and intrafractional target
monitoring.

• Patient positioning corrections can be made online or off-line. Online corrections include measurement, decision as to whether to
shift, adjustment, and in some cases verification. Off-line
corrections are important in minimizing systematic errors, which
have a significant impact on the margin necessary for adequate
treatments. Adaptive strategies can serve to modify the above
corrections, based on the variations observed with individual
patients.

• Organ motion within the patient represents another source of positional uncertainty. Several site-specific studies of interfractional
organ motion have been performed, many of which have focused
on the prostate gland. Intrafractional organ motion may also be
significant, especially for diseases within the lung, which has
motivated the development of a number of techniques to manage
treatment of this site.

QUESTIONS
1. ICRU 50 first introduced which of the following nomenclature?
A. Gross treatment volume (GTV)
B. Off-axis ratios (OARs)
C. Clinical target volume (CTV)
D. Normal tissue complication probability (NTCP)
2. Which of the following is/are used to immobilize patients?
A. Alpha cradle and/or vacuum loc
B. Calypso
C. Electronic portal imaging systems
D. ABC systems
3. Two strategies for performing off-line corrections to minimize
systematic error are:
A. Inter- and intrafractional motion management
B. Adaptive and non-adaptive margin adjustments
C. System gating and/or tracking
D. Shrinking action level and no action level
4. The magnitude of the interfractional movement of the prostate
relative to the pelvic bones is typically:
A. 1 cm or less in the anterior–posterior direction
B. 1 to 2 cm in the cranial–caudal direction
C. 5 mm or less in the left–right direction
D. <5 mm in any direction.
5. The following technologies was/were developed to help manage
breathing movement:
A. Ultrasound
B. In-room CT
C. Treatment beam gating
D. Tracking using multileaf collimator

ANSWERS
1. C
2. A
3. D
4. A and C
5. C and D

REFERENCES
1. International Commission on Radiation Units and Measurements.
Prescribing, recording and reporting photon beam therapy. ICRU
Report 50. 1993.
2. International Commission on Radiation Units and Measurements.
Prescribing, recording and reporting photon beam therapy
(Supplement to ICRU Report 50). ICRU Report 62. 1999.
3. International Commission on Radiation Units and Measurements.
Prescribing, recording, and reporting photon-beam intensity-
modulated radiation therapy (IMRT). ICRU Report 83. 2010.
4. Jaffray DA, Siewerdsen JH, Wong JW, et al. Flat-panel cone-beam
computed tomography for image-guided radiation therapy. Int J
Radiat Oncol Biol Phys. 2002;53(5):1337–1349.
5. Cho PS, Johnson RH, Griffin TW. Cone-beam CT for radiotherapy
applications. Phys Med Biol. 1995;40(11):1863–1883.
6. Pouliot J, Bani-Hashemi A, Chen J, et al. Low-dose megavoltage
cone-beam CT for radiation therapy. Int J Radiat Oncol Biol Phys.
2005;61(2):552–560.
7. Smitsmans MH, de Bois J, Sonke JJ, et al. Automatic prostate
localization on cone-beam CT scans for high precision image-guided
radiotherapy. Int J Radiat Oncol Biol Phys. 2005;63(4):975–984.
8. Fallone BG. The rotating biplanar linac–magnetic resonance imaging
system. Semin Radiat Oncol. 2014;24:200–202.
9. Lagendijk JJ, Raaymakers BW, van Vulpen M. The magnetic
resonance imaging–linac system. Semin Radiat Oncol. 2014;24:207–
209.
10. Mutic S, Dempsey JF. The ViewRay system: magnetic resonance-
guided and controlled radiotherapy. Semin Radiat Oncol.
2014;24:196–199.
11. Van Herk M. Errors and margins in radiotherapy. Semin Radiat
Oncol. 2004;14:52–64.
12. Bentel GC, Marks LB, Sherouse GW, et al. The effectiveness of
immobilization during prostate irradiation. Int J Radiat Oncol Biol
Phys. 1995;31(1):143–148.
13. Song PY, Washington M, Vaida F, et al. A comparison of four
patient immobilization devices in the treatment of prostate cancer
patients with three dimensional conformal radiotherapy. Int J Radiat
Oncol Biol Phys. 1996;34(1):213–219.
14. Biggs PJ, Goitein M, Russell MD. A diagnostic X ray field
verification device for a 10 MV linear accelerator. Int J Radiat Oncol
Biol Phys. 1985;11(3):635–643.
15. Balter JM, Brock KK, Litzenberg DW, et al. Daily targeting of
intrahepatic tumors for radiotherapy. Int J Radiat Oncol Biol Phys.
2002;52(1):266–271.
16. Litzenberg D, Dawson LA, Sandler H, et al. Daily prostate targeting
using implanted radiopaque markers. Int J Radiat Oncol Biol Phys.
2002;52(3):699–703.
17. Schewe JE, Lam KL, Balter JM, et al. A room-based diagnostic
imaging system for measurement of patient setup. Med Phys.
1998;25(12):2385–2387.
18. Murphy MJ. An automatic six-degree-of-freedom image registration
algorithm for image-guided frameless stereotaxic radiosurgery. Med
Phys. 1997;24(6):857–866.
19. Shirato H, Shimizu S, Kitamura K, et al. Four-dimensional treatment
planning and fluoroscopic real-time tumor tracking radiotherapy for
moving tumor. Int J Radiat Oncol Biol Phys. 2000;48(2):435–442.
20. Shimizu S, Shirato H, Kitamura K, et al. Use of an implanted marker
and real-time tracking of the marker for the positioning of prostate
and bladder cancers. Int J Radiat Oncol Biol Phys. 2000;48(5):1591–
1597.
21. Mackie TR, Holmes T, Swerdloff S, et al. Tomotherapy: a new
concept for the delivery of dynamic conformal radiotherapy. Med
Phys. 1993;20(6):1709–1719.
22. King CR, Brooks JD, Gill H, et al. Stereotactic body radiotherapy for
localized prostate cancer: interim results of a prospective phase II
clinical trial. Int J Radiat Oncol Biol Phys. 2009;73(4):1043–1048.
23. Kamino Y, Takayama K, Kokubo M, et al. Development of a four-
dimensional image-guided radiotherapy system with a gimbaled X-
ray head. Int J Radiat Oncol Biol Phys. 2006;66(1):271–278.
24. Kupelian P, Willoughby T, Mahadevan A, et al. Multi-institutional
clinical experience with the Calypso System in localization and
continuous, real-time monitoring of the prostate gland during
external radiotherapy. Int J Rad Onc Biol Phys. 2007;67(4):1088–
1098.
25. de Kruijf WJ, Verstraete J, Neustadter D, et al. Patient positioning
based on a radioactive tracer implanted in patients with localized
prostate cancer: a performance and safety evaluation. Int J Radiat
Oncol Biol Phys. 2013;85(2):555–560.
26. Castellanos E, Ericsson MH, Sorcini B, et al. RayPilot –
Electromagnetic real-time positioning in radiotherapy of prostate
cancer – Initial clinical results. Radiotherapy and Oncology.
2012;103, Supplement 1(0):S433.
27. Ballhausen H, Li M, Hegemann NS, et al. Intra-fraction motion of
the prostate is a random walk. Phys Med Biol. 2015;60(2):549–563.
28. Fallone B, Murray B, Rathee S, et al. First MR images obtained
during megavoltage photon irradiation from a prototype integrated
linac-MR system. Med Phys. 2009;36(6):2084–2088.
29. Raaymakers BW, Lagendijk JJ, Overweg J, et al. Integrating a 1.5 T
MRI scanner with a 6 MV accelerator: proof of concept. Phys Med
Biol. 2009;54(12):N229–N237.
30. Keall PJ, Barton M, Crozier S, et al. The Australian magnetic
resonance imaging–linac program. Semin Radiat Oncol.
2014;24:203–206.
31. Poulsen PR, Cho B, Langen K, et al. Three-dimensional prostate
position estimation with a single x-ray imager utilizing the spatial
probability density. Phys Med Biol. 2008;53(16):4331–4353.
32. Keall PJ, Aun Ng J, O’Brien R, et al. The first clinical treatment with
kilovoltage intrafraction monitoring (KIM): a real-time image
guidance method. Med Phys. 2015;42(1):354–358.
33. Bert C, Metheany KG, Doppke K, et al. A phantom evaluation of a
stereo-vision surface imaging system for radiotherapy patient setup.
Med Phys. 2005;32(9):2753–2762.
12 Image-Guided Radiation Therapy
Guang Li, Gig S. Mageras, Lei Dong, and Radhe Mohan
INTRODUCTION
The aim of external beam radiation therapy (EBRT) of cancer is to target
localized disease noninvasively with radiation that conforms to the
target while minimizing dose to surrounding organs at risk (OAR).
Radiation dose is often delivered with inadequate visualization of the
regions being irradiated. Therefore, imaging guidance is crucial at every
step of the process, including cancer diagnosis, staging and delineation;
treatment simulation and planning; patient setup, tumor localization,
and motion monitoring; and treatment response assessment, efficacy
evaluation and strategy refinement. In fact, most of the significant
advances in radiation oncology over the last three decades have been
made possible by advances in medical imaging. Using three-dimensional
(3D) images of patient anatomy from computed tomography (CT) and
magnetic resonance imaging (MRI), as well as visualization of viable
tumor extent from MR spectroscopic imaging (MRSI), positron emission
tomography (PET), and single photon emission computed tomography
(SPECT), treatment target and OARs can be delineated with precision,
thus reducing the likelihood of marginal misses in tumor and minimizing
the exposure of normal tissues to high radiation dose. Multimodality
imaging has become an integrated component throughout the treatment
process, providing the ability to localize and visualize the tumor in space
and time to ensure an accurate delivery of a highly conformal treatment
plan.
Image-guided radiation therapy (IGRT) is composed of a multitude of
major innovations in radiation oncology to address the problems arising
from inter- and intrafractional target variations. IGRT aims to deliver a
treatment as it is planned based on 3D images acquired at treatment
simulation. These images establish a 3D reference frame of patient
anatomy (with possible inclusion of motion) for both image-based
treatment planning and image-guided treatment delivery. The former
process follows the exact 3D patient anatomy (the tumor and nearby
OARs) for dosimetric planning, while the latter focuses mostly on tumor
alignment between the planning image and images at the treatment unit
prior to and during treatment, thereby aligning the tumor with the radiation fields.
With image guidance, variations of tumor position between treatment fractions (interfractional setup) and patient and organ motion within a treatment fraction (intrafractional) can be corrected for more accurate delivery. The variation of normal tissue positions is often
assessed in terms of proximity to the irradiated volume in the current
image-guided approach, but is also a focus of adaptive IGRT research to
assess dosimetric and clinical consequences under various clinical
scenarios. Examples of image guidance in various stages of radiation
therapy are illustrated in Figure 12.1.
Increasing evidence has shown that there are substantial inter- and
intrafractional variations, in contrast to the “snapshot” planning
anatomy of a patient. The causes of such variations include voluntary
motion (body shift, rotation, and deformation), involuntary motion
(respiratory, cardiac, and digestive), disease-related changes (tumor
growth and weight loss), and radiation-induced changes (tumor
shrinkage). The variation in respiratory-induced tumor motion during
treatment may substantially deviate from the one-cycle motion extent
quantified by 4DCT at simulation, owing to breathing irregularities.
These variations could have a significant impact on the outcome of
treatments, as they may result in underdosing the target or overdosing
the OAR (1–3). In the current practice of treatment planning and
delivery, it is assumed implicitly that the patient’s anatomy remains static
throughout the course of the radiation therapy. To account for statistical
variations, wide treatment margins derived from population-based
studies are used to ensure coverage of the disease at the expense of
exposing considerable OAR volumes to or near full prescribed radiation
dose. A large margin limits the ability to safely deliver higher tumor
doses because of increased risk of OAR toxicity, especially for
hypofractionated stereotactic body radiotherapy (SBRT), in which the
high dose per fraction exceeds the normal tissue’s capacity for sublethal
repair. Furthermore, the margin needed for some patients exhibiting
large target variations may exceed the population-based margin,
potentially leading to marginal misses, especially with the use of highly
conformal modalities, such as 3D conformal radiotherapy (3DCRT),
intensity-modulated radiotherapy (IMRT), volumetric-modulated arc
therapy (VMAT), and proton therapy (4–6). Treatment planning and
delivery techniques that do not correct for such daily volumetric
variations adequately may lead to suboptimal treatments. These factors
may, in part, be responsible for the poor outcome and high toxicity in
radiation therapy for some cancers (7). IGRT has the potential to target
gross and microscopic diseases accurately, to individualize treatments to
reduce margins, and to allow dose escalation to higher levels with the
expectation of improving local control and reducing toxicity (8–10).
Therefore, IGRT can help to improve the therapeutic ratio, namely the
ratio of tumor control probability (TCP) and normal tissue complication
probability (NTCP) (1,7). The recent efforts to introduce MRI, PET, and
optical surface imaging (OSI) into the treatment room can further
improve the ability to assess the accuracy of treatment delivery by direct
viewing of the target during treatment (11), imaging proton beam path
(12), and visualizing photon Cherenkov scattering (13), respectively.
FIGURE 12.1 Image guidance at various stages of the radiotherapy process.

This chapter focuses on IGRT technologies related to treatment planning and delivery of EBRT. The second and third sections introduce various imaging forms of IGRT technologies and their commercial implementations for interfractional patient setup and intrafractional motion monitoring, respectively. The fourth section reviews
requirements and considerations for IGRT, including quality assurance
(QA). Various possible IGRT strategies, margin assessment and
reduction, and clinical implications are described in the fifth section.
Finally, the sixth section looks into the future and speculates on new
processes coming into this field.
INTERFRACTIONAL IGRT IMAGING MODALITIES FOR PATIENT SETUP
In this section, we focus on in-room IGRT imaging modalities for daily
patient setup. Images acquired immediately prior to treatment are used
to reposition the patient so as to align the target or its surrogate (such as
implanted radiopaque fiducials in or near the tumor) with the planned
radiation isocenter. Digitally reconstructed radiograph (DRR) images
derived from the planning CT are used as the reference. A couch
positional adjustment is typically used to realign the patient. This is the
simplest form of IGRT without modification of the original treatment
plan.
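To make the role of the DRR concrete, the following minimal sketch computes a highly simplified DRR from a planning CT by summing linear attenuation along parallel rays. It only illustrates the line-integral principle; clinical systems cast divergent rays from the source through the isocenter and account for voxel size and beam spectrum. The function name and the nominal attenuation value are illustrative assumptions, not any vendor's implementation.

```python
import numpy as np

def simple_drr(ct_hu, axis=1, mu_water=0.02):
    """Very simplified DRR: parallel-ray line integrals through a CT volume.

    ct_hu    : 3D array of CT numbers (HU)
    axis     : axis along which the rays travel (e.g., anterior-posterior)
    mu_water : assumed linear attenuation coefficient of water in 1/mm
    Returns a 2D image proportional to the transmitted intensity.
    """
    mu = mu_water * (1.0 + ct_hu / 1000.0)   # convert HU to linear attenuation
    mu = np.clip(mu, 0.0, None)
    path_integral = mu.sum(axis=axis)        # voxel size omitted for brevity
    return np.exp(-path_integral)
```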
2D Radiographic Imaging
Two-dimensional (2D) radiographic (projection) imaging is typically
used in treatment rooms to align the patient relative to the radiation
beams. Megavoltage (MV) imaging uses therapy x-ray beams and an
amorphous-silicon (a-Si) flat-panel imager, known as electronic portal
imaging device (EPID), to verify the patient’s setup, defined as the position
of the skeletal anatomy (14). Other uses of MV imaging are to verify
treatment beam apertures prior to treatment and in vivo portal
dosimetry during treatment (15,16). Because imaging uses the therapy
beam, it provides direct in-field verification of treatment delivery, and
therefore serves as a “gold standard” for validating new IGRT
techniques. Disadvantages of MV imaging include higher radiation dose
to the patient (typically 1 to 5 cGy) and poorer image quality owing to a
large Compton scattering contribution from the higher x-ray energies
and high-energy electrons reaching the detector.
Two general categories of 2D kilovoltage (kV) x-ray imaging are
frequently used for IGRT. One is a gantry-mounted kV imaging system
on a linear accelerator (linac), orthogonal to the therapy MV x-ray beam.
The kV x-ray source and flat-panel imager are mounted on retractable
arms, providing near-diagnostic quality images. The second category of
kV imaging is room-mounted systems: the x-ray source and detector are
mounted on the ceiling or the floor. These systems provide an oblique
orthogonal image pair for stereoscopic imaging at a wide range of
treatment couch angles. Most kV x-ray imaging systems have a
companion fluoroscopic imaging mode, which is useful for observing
motion of the internal anatomy or implanted fiducials. Since kV imaging
systems are distinct from the MV beam line, the kV–MV isocenter
coincidence must be established within a clinical tolerance through
initial and periodic QA processes.
kV radiographs are often not sufficient for detecting soft tissue targets
but are more successful in aligning skeletal landmarks or implanted
radiopaque fiducials as target surrogates. In-room kV imaging represents
a major improvement over MV imaging due to its superior image quality
and its low imaging dose (0.01 to 0.1 cGy), facilitating its use for daily
image-guided patient setup (17). The different appearance of kV and MV
thoracic images is shown in Figure 12.2.
FIGURE 12.2 The appearance of anatomy in kV (top row) and MV (bottom row) radiographs can be
quite different. At kV x-ray energies, the bony structures are enhanced; at the therapeutic (MV)
energies, the air cavity is enhanced.
Tomographic Imaging
CT imaging inside the treatment room provides 3D anatomical
information and improved soft tissue visibility, thus providing
advantages over radiographic imaging with higher imaging dose (18).
In-room CT images are the standard for six degrees-of-freedom (DOF)
patient setup and can be used to estimate the delivered dose
distributions based on the anatomy captured at treatment. The planning
CT image is used as the reference for patient alignment on skeletal
anatomy, fiducials, or tumors in some disease sites.
kV Helical CT and kV Cone-Beam CT
Helical multislice CT systems have been widely used in diagnostic
imaging and radiation treatment planning for many years. The first
integrated CT-linac clinical system was designed for noninvasive,
frameless stereotactic radiotherapy of brain and lung cancers with
reduced uncertainty between fractions (19). Another integrated system
with a rail system to transport the patient between treatment and CT
couches was assembled at the Memorial Sloan Kettering Cancer Center
for treatment of paraspinal lesions and prostate cancer (20,21).
FIGURE 12.3 A: An Elekta Synergy unit (Elekta Inc., Sweden). B: A Varian TrueBeam unit (Varian
Oncology Systems, Palo Alto, CA) (Photograph courtesy of Yingli Yang, PhD). Both linear
accelerators have a kV imaging system orthogonal to the therapy beam direction. Both systems
provide 2D radiographic, fluoroscopic, and CBCT modes.
A commercial CT-linac system was introduced in the clinic in 2000 (22). It consists of a medical linac and a movable CT scanner that slides
along a pair of rails (“CT-on-Rails”). A similar “CT-on-Rails” commercial
system (EXaCT, Varian Oncology Systems, Palo Alto, CA) has a mechanical accuracy within 0.5 mm (23,24). The biggest advantage of
an in-room CT scanner for IGRT is the similarity of image quality and
field of view (FOV) with planning CT images.
Gantry-mounted kV imaging systems are capable of radiography,
fluoroscopy, and cone-beam CT (CBCT), providing a versatile solution
for IGRT applications (25,26). CBCT imaging involves acquisition of
projection images of the patient as the gantry rotates through an arc of
at least 180 degrees plus a so-called cone-beam angle subtended by the
imaging panel (∼200 degrees total). A filtered back-projection
algorithm is used to reconstruct the volumetric images. Geometric
calibration of the CBCT system is needed periodically to maintain image
quality and geometric accuracy. Corrections on the order of 2.0 mm may
be required to compensate for the gravity-induced flex in the support
arms of the source, detector, and gantry. Submillimeter spatial resolution
and accuracy have been demonstrated in phantom. The volumetric
image with nearly isotropic spatial resolution is useful in frameless
stereotactic radiosurgery (SRS) (27).
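The reconstruction step can be illustrated with a minimal parallel-beam filtered back-projection in Python. Clinical CBCT uses the cone-beam (FDK) variant with geometric weighting and calibration corrections, so this is only a sketch of the filter-then-backproject principle; the function name and the simple ramp filter are illustrative assumptions.

```python
import numpy as np

def filtered_backprojection(sinogram, angles_deg):
    """Minimal 2D filtered back-projection (parallel-beam approximation).

    sinogram   : array of shape (num_angles, num_detector_bins)
    angles_deg : projection angles in degrees
    Returns a reconstructed image of size (n, n), n = num_detector_bins.
    """
    n_angles, n_bins = sinogram.shape

    # Ram-Lak (ramp) filter applied to each projection in the Fourier domain.
    ramp = np.abs(np.fft.fftfreq(n_bins))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Back-project each filtered projection across the image grid.
    n = n_bins
    x = np.arange(n) - n / 2.0
    xx, yy = np.meshgrid(x, x)
    recon = np.zeros((n, n))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate seen by every pixel at this gantry angle.
        t = xx * np.cos(theta) + yy * np.sin(theta)
        idx = np.round(t + n / 2.0).astype(int)
        valid = (idx >= 0) & (idx < n_bins)
        recon[valid] += proj[idx[valid]]
    return recon * np.pi / (2.0 * n_angles)
```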
Since 2005, major manufacturers have offered CBCT capabilities
(Elekta Synergy and XVI, Elekta Inc., Sweden; Varian On-board Imager
[OBI] and TrueBeam Imaging, Palo Alto, CA), as shown in Figure 12.3.
Elekta’s system uses a slightly larger flat-panel detector (41 × 41 cm) than Varian’s (40 × 30 cm); the smaller panel limits the scan length to 15 cm when using the full-fan scan mode. A half-fan scan method displaces the detector laterally to capture half projection images and requires a 360-degree rotation to extend the axial FOV to at least 40 cm (28).
Limitations of CBCT image quality include elevated x-ray scatter,
which reduces image contrast and introduces cupping artifacts. Scatter
can be reduced by using both anti-scatter grids and post-processing
methods (29,30). To further improve the image quality, Kim et al. have
proposed to use orthogonal dual-source and dual-detector “in-line” with
MV beam to produce 2D and 3D images with tetrahedral collimation
(31). Because regulations limit the gantry rotation speed to 1 rpm, CBCT image quality is adversely affected by breathing motion. The IGRT
setup process may add 5 minutes (∼2 minutes acquisition and ∼3
minutes registration/approval) to the regular treatment schedule.
MV Helical CT and MV Cone-Beam CT
Tomotherapy (Accuray Inc., Sunnyvale, CA) is an integrated technology
that combines a helical megavoltage CT (MVCT) with a linear
accelerator (Fig. 12.4A) as x-ray source, which is specially designed for
delivering intensity-modulated radiation in a slit geometry (32–34).
Low-dose (1 to 2 cGy), pretreatment MVCT images are obtained from the
same treatment beam line but with a nominal energy of 4 MV. The CT
detector uses an array of 738 xenon ion chamber channels, and an FOV of 40 cm can be reconstructed.
MV CBCT uses the therapy MV x-ray and the EPID detector (35,36).
With the a-Si flat panel EPID (37), it has become possible to rapidly
acquire multiple, low-dose 2D projection images with treatment beams,
as shown in Figure 12.4B. There is no effective MV scatter-reduction
mechanism for EPID, which limits image quality. The amount of scatter
reaching the detector depends on the photon energy, field size, and
thickness of the imaged object; however, the imaging system can be
optimized by calibrating the system using site-specific phantoms (38).
The MVCT and MV CBCT images provide sufficient contrast to verify
patient position and to delineate many anatomic structures (38,39). It is
interesting to note that the MVCT numbers are linear with respect to the
electron density of material imaged, yielding accurate dose calculations
(40). Another advantage is the reduced influence of implanted metal
objects on image quality, in contrast to kV CT, which exhibits strong
artifacts when high-Z material is present (Fig. 12.5).
FIGURE 12.4 A: A picture of tomotherapy unit (TomoTherapy Inc., Madison, WI) (Photograph
courtesy of H. Ning, PhD). Tomotherapy is an integrated IGRT system, which combines a linear
accelerator with an MVCT image guidance system. B: A Siemens MV CBCT imaging system
using a conventional linac and a flat-panel EPID. Reprinted from Morin O, Gillis A, Chen J, et al.
Megavoltage cone-beam CT: system description and clinical applications. Med Dosim.
2006;31:51–61.
Hybrid Cone-Beam CT and Digital Tomosynthesis
A hybrid CBCT can be achieved by combining orthogonal kV and MV x-
ray projection images acquired over a partial gantry arc as small as 90 degrees (41), while the combined projections still span an arc of 180 degrees. Acquisition requires only 15 seconds, making it optimal for
breath-hold imaging (42).
FIGURE 12.5 Images showing the artifacts due to the presence of metal objects in the
conventional kV CT images (left panels). Artifact-free images were obtained with an MV CBCT
(right panels). Reprinted from Morin O, Gillis A, Chen J, et al. Megavoltage cone-beam CT:
system description and clinical applications. Med Dosim. 2006;31:51–61.
Digital tomosynthesis (DTS) is a special case of tomographic reconstruction that uses a limited arc (20 to 40 degrees) of projection images (43–45). A DTS scan is fast (<10 seconds) and delivers less radiation, but it sacrifices spatial resolution in the direction perpendicular to the x-ray
beam. When necessary, a second DTS can be added quasi-orthogonal to
the first.
Respiration-Correlated (4D) Computed Tomography Imaging
Respiration-induced motion is an important consideration in some
disease sites, in which tumor motion up to 4 cm has been observed (46).
CT scans acquired synchronously with the respiratory signal can be used
to reconstruct a set of CT scans, representing the 3D anatomy typically at
10 respiratory phases. This collection of 3DCT datasets is called
respiration-correlated CT (RCCT), or 4DCT, which provides snapshots of the patient’s 3D anatomy over one breathing cycle.
Respiration-Correlated 4DCT
Respiration-correlated 4DCT can be acquired in either cine or helical
mode. In cine mode, repeat CT projections are acquired over slightly
more than one respiratory cycle with the couch stationary while
recording patient respiration; the couch is then incremented and the
process repeated. Following acquisition, the images are sorted with
respect to the respiratory signal, leading to a set of volume images at
different respiration points in the cycle (47,48). Helical scan uses a low
pitch and adjusts the gantry rotation period such that all voxels are
viewed by the CT detectors for at least one respiratory cycle (49,50).
Both techniques have been widely characterized and applied clinically
for estimating the extent of moving tumors in lung and abdomen
(51,52).
The selection of the type of respiratory signal can vary, and
commercial systems commonly use one of the two types of breathing
monitors. One such monitor (Real-time Position Management, RPM,
Varian Oncology Systems, Palo Alto, CA) captures the anterior–posterior
motion of an infrared-reflective block placed on the patient’s abdomen
or chest using an infrared camera. The other is a “pneumo bellows”
system (Philips Medical Systems, Milpitas, CA) that records the digital
voltage signal from a differential pressure sensor wrapped around the
patient’s abdomen. Periodic motion is assumed in the binning approach
and breathing irregularity adversely affects the quality of 4DCT images,
leading to anatomical distortions (53,54). Phase-based binning assumes
repeatable breathing cycles and often produces 4DCT images with
greater motion artifacts than amplitude-based binning (55,56).
Reduction of motion artifacts in 4DCT is an active area of investigation
and numerous methods have been proposed (57–59).
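As a schematic of how phase-based sorting works, the sketch below assigns image acquisition times to respiratory phase bins from a recorded breathing trace. It assumes regular breathing and uses a deliberately crude peak detector; commercial 4DCT software uses more robust peak finding and also supports amplitude-based sorting. Function and variable names are illustrative assumptions.

```python
import numpy as np

def phase_bins(resp_signal, times, num_bins=10):
    """Assign each acquisition time to a respiratory phase bin (0..num_bins-1).

    Phase is defined as the fraction of the breathing cycle elapsed since the
    most recent end-inspiration peak, so regular breathing is assumed.
    Samples outside a complete cycle are left as -1.
    """
    signal = np.asarray(resp_signal, dtype=float)
    # Crude peak detection: a sample larger than both of its neighbours.
    peaks = np.where((signal[1:-1] > signal[:-2]) &
                     (signal[1:-1] > signal[2:]))[0] + 1
    peak_times = np.asarray(times)[peaks]

    bins = np.full(len(times), -1, dtype=int)
    for i, t in enumerate(times):
        prior = peak_times[peak_times <= t]
        later = peak_times[peak_times > t]
        if len(prior) and len(later):                         # inside a full cycle
            phase = (t - prior[-1]) / (later[0] - prior[-1])  # 0 .. 1
            bins[i] = min(int(phase * num_bins), num_bins - 1)
    return bins
```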
Respiration-Correlated 4D CBCT
As CBCT is acquired with limited gantry speed at 1 rpm, motion artifacts
are different and more pronounced in CBCT than in conventional CT. Respiration-
correlated 4D CBCT has been developed similar to 4DCT (60,61). A
slower gantry rotation is required to acquire sufficient projections in
each phase bin, resulting in scan times of 3 to 6 minutes. The limited
number of projections per phase reduces the contrast resolution and
introduces image artifacts; thus, the method is more suited to detecting
high-contrast objects such as tumor in parenchymal lung (60–62).
Respiratory-correlated DTS has also been reported (45). An alternative
approach is to process the CBCT images with motion correction using a
patient-specific motion model (63,64). Most of the methods use
deformable image registration (DIR) to deform the images to a common
motion state. Motion-corrected CBCT allows normal scanning time and
accurate tumor positioning (65,66).
Magnetic Resonance Imaging
MRI is well known for its use of nonionizing radiation, high soft tissue contrast, flexible image orientation, and versatile image
appearance. The appearance of the soft tissue can be manipulated with
different pulse sequences, such as T1-weighted or T2-weighted. The
tumor visibility can be further enhanced by administering a contrast
agent, such as a gadolinium-chelated compound. The magnetic field
strength is 0.2 Tesla (T) for open-field MRI and 1.5 T or 3 T for a closed-
field whole-body (≤70 cm bore) MRI scanner.
MRI may suffer from geometric distortion due to nonuniformity of the
magnetic field strength. This scanner-specific factor can be corrected by
imaging a large grid phantom (67). The geometric integrity is also
affected by susceptibility differences at tissue interfaces. MRI yields
limited visibility of bone owing to its fast relaxation time. Recently, MRI-
based treatment planning has been studied (68,69); an important area of
investigation is the conversion of MRI voxels to a CT number or electron
density for dosimetric calculation.
MRI-guided treatment delivery systems are an active area of
development. An integrated MRI-cobalt (60) machine (MRIdian System,
ViewRay, Inc., Gainesville, FL) was commissioned at several radiation
oncology clinics in the United States in 2014 (70). An integrated MRI-
linac system is under development by Philips and Elekta and a prototype
has been installed at the University Medical Center Utrecht in the
Netherlands (71). A third MRI-guided system, installed at the Princess Margaret Cancer Center and nearing clinical implementation, enables
a rail-mounted 1.5-T MR scanner to operate in three different rooms: MR
simulation, MR-guided brachytherapy, and MR-guided radiotherapy
linac (72). Such in-room systems offer both soft tissue–based 3D target
alignment and near real-time 2D tumor motion monitoring (73). The
goals are to provide online treatment guidance, adaptive replanning, and
monitoring of treatment response.
2D, 3D, and 4D MRI
MRI can produce 2D planar, 3D volumetric, and 4D temporal images
(74), which are scanned and reconstructed slice by slice. When a fast
scan pulse sequence is applied, such as TrueFISP (true fast imaging with
steady-state precession), cine 2DMR images, or 2D(t), can be acquired at
4 fps without parallel imaging. At this acquisition speed, respiration-
induced tumor motion can be monitored for respiratory gating or real-
time tumor tracking.
A 3D volumetric MR image is potentially useful for MRI-based
treatment planning (68,69,75,76) with high soft tissue contrast without
ionizing radiation. It is important to minimize MRI geometric distortions
(77), obtain the electronic density of MRI voxels for dose calculation
(69), and generate pseudo-DRRs as reference images to align with 2D
radiographs for patient setup (78).
Four-dimensional MRI provides time-resolved (TR) (79,80) or
respiration-correlated (RC) (76,81) volumetric images during respiration.
TR-4DMRI requires parallel imaging with multi-channel coils and
parallel computing to achieve a temporal resolution of 1 to 2 fps, while
RC-4DMRI is achieved based on respiratory correlation. Hu et al. (80)
introduced an amplitude-based triggering system to acquire prospective
T2-weighted 4DMRI for abdominal tumor tracking. Tryggestad et al.
(81,82) proposed a method with two-step reconstruction to produce
deblurred 4DMRI images and a method to track tumor centroid motion
using orthogonal cine 2DMRI to achieve local volumetric information
and sufficient patient-specific breathing statistics.
Positron Emission Tomography
PET is used increasingly for tumor delineation in treatment planning and
for assessment of tumor response to radiation treatment. Using a
positron-emitting biologic tracer, tumor metabolic, proliferating, and
hypoxic conditions can be probed. A well-established PET tracer is 18F-fluorodeoxyglucose (18F-FDG), a sugar-like molecule, which
accumulates in tumor cells owing to their high metabolic activities. In
the event of positron emission, the positron annihilates with an electron
to emit a pair of 511 keV photons in opposite directions. A PET scanner with a ring of scintillation detectors around the gantry detects the two photons in coincidence and estimates the annihilation location along the line of response from their times of flight. Similarly, SPECT uses gamma-emitting tracers
to image a tumor by detecting independent gamma-decay events.
Hybrid PET/CT, SPECT/CT, and PET/MRI scanners are commercially
available, which provide coregistration of viable lesions in a patient
anatomy. Recently, in-room PET and SPECT have been studied as a
direct means for tumor positioning and tracking for IGRT (12,83–85).
For proton therapy, PET has been applied to directly image positron-emitting by-products of proton irradiation, such as 15O, along the beam path and thereby assess the geometric accuracy of treatment delivery (84).
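A rough sense of the time-of-flight localization follows from the relation Δx = cΔt/2 along the line of response. The snippet below, with an illustrative function name, shows that even a half-nanosecond timing difference corresponds to several centimeters, which is why time-of-flight information refines rather than replaces coincidence detection.

```python
C_MM_PER_NS = 299.792  # speed of light in mm per nanosecond

def tof_offset_mm(dt_ns):
    """Displacement of the annihilation point from the midpoint of the line of
    response, given the arrival-time difference of the two 511 keV photons
    (positive toward the detector that fired first)."""
    return 0.5 * C_MM_PER_NS * dt_ns

# Example: a 0.5 ns timing difference places the event ~75 mm off-center.
print(tof_offset_mm(0.5))   # ~74.9 mm
```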
Ultrasound Imaging
Ultrasound is useful in soft tissue targeting in the abdomen for
radiotherapy. Fontanarosa et al. have recently reviewed ultrasound
guidance for external beam radiotherapy (86).
The ultrasound transducer is both a sound source and detector. It
transmits brief pulses that propagate into the tissues and receives the
echo that is bounced back at tissue interfaces where acoustic impedance
changes, owing to differences in tissue density or elasticity. The round-
trip time of the pulse-echo wave is used to determine the transducer-to-
object distances. A scan converter then builds a 3D image of the patient from scan lines acquired with a swept 1D or a 2D transducer array.
image quality, unfamiliar image appearance, and anatomy distortions
due to applied pressure have limited its utility for precise image
guidance. The inter- and intrauser variability is large for ultrasound-
guided setup (87) and more pronounced for fiducial alignment (88).
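The underlying pulse-echo range equation is simply depth = (speed of sound × round-trip time)/2. The small sketch below assumes the nominal soft-tissue sound speed of 1,540 m/s and is only a worked illustration of the principle.

```python
SPEED_OF_SOUND_MM_PER_US = 1.54  # assumed nominal soft-tissue value, mm per microsecond

def echo_depth_mm(round_trip_us):
    """Depth of a reflecting tissue interface from the pulse-echo round-trip time."""
    return 0.5 * SPEED_OF_SOUND_MM_PER_US * round_trip_us

# Example: an echo returning after 130 microseconds comes from ~100 mm deep.
print(echo_depth_mm(130.0))  # 100.1 mm
```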
Optical Surface Imaging
Stereoscopic OSI provides real-time imaging, primarily used for aligning superficial tumors (e.g., the breast) or immobile tumors (e.g., the brain). A
commercial OSI system (AlignRT, Vision RT, Ltd., London, UK) is
composed of two to three ceiling-mounted stereo-camera pods, each
having two cameras and a speckle projector. Triangulation between the two cameras and a skin point, identified in both camera views from the projected speckle pattern, is used to calculate the location of that point in space. A skin surface image is reconstructed from all visible surface points with an accuracy within 1.0 mm.
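The triangulation step can be sketched with the standard linear (direct linear transform) method: given the calibrated 3x4 projection matrices of the two cameras and the pixel coordinates of the same speckle point in each view, the 3D position is the least-squares solution of a small homogeneous system. This is a generic textbook formulation assumed here for illustration; commercial systems add camera calibration, speckle matching, and surface meshing around it.

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one surface point seen by two calibrated
    cameras.  P1, P2 are 3x4 projection matrices; uv1, uv2 are the pixel
    coordinates of the same projected speckle point in each camera."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution = right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]    # 3D point in the room coordinate system
```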
Validated against x-ray imaging, OSI provides a quick and nonradiographic
means for image-guided setup. Early studies of surface imaging in
radiotherapy were reported by Massachusetts General Hospital (89) and
Johns Hopkins University (90) in 2005. It has been applied in breast,
lung, brain, and head and neck. Patient setup requires registering the surface image to a reference region of interest (ROI) defined on the patient body surface delineated in the simulation CT. The discrepancy
between OSI and CBCT setup in brain cases is usually about 1 to 2 mm
(27).
A different type of OSI is Cherenkov video imaging to visualize
radiation delivery relative to patient anatomy, such as the breast
(13,91). The optical detection is gated with the radiation pulse from a
linac, providing direct evidence of radiation delivery.
Patient Position Correction
Rigid body position correction has 6 DOF, including 3 translational (3T)
and 3 rotational (3R) adjustments. Usually the alignment correction
based on rigid registration is performed in 3 DOF (3T), 4 DOF (3T + 1R
for couch rotation), or 6 DOF (3T + 3R) if a 6D couch or rotationally
adjustable couch extension is used. Translational correction is essential
to align the tumor or tumor surrogate, while correction for rotation may
not be necessary. Deformable/mobile target position may be corrected
using the centroid position and 3 DOF position correction. The centroid
of the visible GTV in the thorax could be used for patient setup
alignment (92,93), but may result in substantial uncertainty in CTV
alignment (94). This is still an area of investigation.
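A minimal sketch of the translation-only (3 DOF) correction described above: the couch shift is taken as the negative of the displacement of the target centroid between the planning and daily images, under the assumption that both point sets are already expressed in the same coordinate frame. Names are illustrative; a clinical system would add rotation handling and tolerance checks.

```python
import numpy as np

def couch_shift_3dof(gtv_points_plan, gtv_points_daily):
    """3 DOF (translation-only) couch correction from the shift of the GTV
    centroid between the planning image and the daily setup image.
    Each input is an (N, 3) array of point coordinates in mm, already
    expressed in the same (room or DICOM) coordinate frame."""
    shift = (np.mean(gtv_points_daily, axis=0) -
             np.mean(gtv_points_plan, axis=0))
    # The couch moves the patient back by the detected target displacement.
    return -shift
```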
INTRAFRACTIONAL REAL-TIME IMAGING AND MOTION COMPENSATION
The main goal of real-time tumor tracking is to minimize the effect of
target motion not only at setup, but also during a treatment fraction.
Tumor tracking usually requires real-time motion monitoring (detection)
and motion compensation (execution) with minimal time delay.
Although implanted markers are primarily used as surrogates for target
position using x-ray fluoroscopy, markerless approaches are emerging,
including MRI (82), EPID (95), fluoroscopy (96), CBCT projection images
(97), and OSI (27,98). In the following sections, we review different
approaches for real-time monitoring and tracking in photon
radiotherapy.
Fluoroscopic Imaging with Implanted Fiducials
Two commercially available room-mounted systems, as shown in Figure
12.6, are CyberKnife (Accuray Inc., Sunnyvale, CA) and ExacTrac
(BrainLAB AG, Feldkirchen, Germany). Both are integrated IGRT systems
for target localization, setup correction, and the delivery of high-
precision frameless SRS and SBRT. The image guidance uses two distinct
imaging subsystems: kV stereoscopic x-ray imaging and real-time
infrared (IR) marker tracking. CyberKnife provides fluoroscopy for target
tracking, external marker tracking, and can perform adaptive beam
gating or real-time target tracking (99). ExacTrac is designed for
intracranial SRS or extracranial SBRT and can acquire x-ray images at
nonzero couch angles for verifying patient position (100). The 6 DOF
patient position adjustment is possible using 2D/3D image registration
(section “Image Registration and Fusion”).
FIGURE 12.6 Two room-mounted kV image-guided real-time tracking systems. A: CyberKnife system (Photograph courtesy of Accuray Inc., Sunnyvale, CA). B: ExacTrac system, BrainLAB AG, Feldkirchen, Germany (Photograph courtesy of BrainLAB AG.)

Gantry-mounted kV imaging systems usually have only one kV x-ray
imager and acquire an orthogonal image pair by rotating the gantry,
including Varian’s OBI and TrueBeam Imaging systems and Elekta’s
Synergy and Infinity systems. The kV imaging beam lines are orthogonal
to the MV treatment beam line, sharing the same isocenter of gantry
rotation, as shown in Figure 12.3. The kV–MV configuration makes it possible to acquire images during treatment, interleaved with beam-on time (101–103). Studies have shown that EPID can capture at least one
fiducial marker 40% to 95% of the time in VMAT prostate treatment,
while kV imaging can be used as needed for the rest (104,105). VERO
(BrainLAB AG, Feldkirchen, Germany) is another gantry-mounted linac system (Fig. 12.7), equipped with two orthogonal kV imaging systems and one optical tracking system (106). It provides CBCT, simultaneous orthogonal 2D kV imaging, and fluoroscopic imaging. Fast tumor tracking by its gimbal-based gantry has a latency of less than 50 ms (107).
Poels et al. have reported tumor tracking using both orthogonal kV and
planar MV imaging to achieve 0.3 mm accuracy on phantom (108).
Clinical applications of VERO for SRS and SBRT have been reported
(106,108,109).
Optical Fiducial Motion Surrogates
Optical tracking determines the position of an IR-emitting or reflecting
marker via triangulation from two stereoscopic cameras. Owing to its
clean stereoscopic marker images and simple calculation, it is capable of
high spatial (0.1 mm) and temporal (<0.05 s) resolution in marker
tracking. Multiple markers can be tracked simultaneously in real-time
allowing continuous correction of patient position during treatment.
Meeks et al. have reviewed this technology in several implementations
for intracranial and extracranial SBRT (110). Markers can also serve as
fiducials, but variation in marker placement between simulation and treatment causes uncertainty.
FIGURE 12.7 A VERO system (BrainLAB AG, Feldkirchen, Germany, and Mitsubishi Heavy Industries, Tokyo, Japan) (Photograph courtesy of Dirk Verellen, PhD). The system offers quick gantry movement aiming at a moving tumor, guided by gantry-mounted stereoscopic x-ray imaging systems. Two rotations of the radiation beam and a 5 DOF couch are available in the system.
Video-Based Optical Surface Imaging
OSI utilizes the same principles as above to determine the position of a
surface point, which is identified with the assistance of a texture image
projected onto the patient’s skin. High spatial resolution is achievable
although temporal resolution is limited by the substantially increased
number of points to track. The speed of 3D surface image reconstruction
depends on the size of the ROI and image resolution: for facial area with
high resolution, 2 to 5 fps is achievable (27). AlignRT (Vision RT,
London, UK) is a commercial optical surface monitoring system (OSMS)
that has been integrated with the Varian Edge system (Fig. 12.8A).
For surface alignment, an ROI should be created with sufficient, reliable surface relief on a reference surface, which is either an OSI image acquired
at simulation or an external surface rendered from the planning CT
image imported via DICOM-RT. The image registration algorithm is
based on an iterative-closest-point method leading to an efficient and
robust surface alignment of the ROI. Clinical setup time can be less than
2 minutes with high accuracy and reproducibility (27).
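A bare-bones version of the iterative-closest-point idea is sketched below for two surface point clouds: match each ROI point to its nearest reference point, solve for the best rigid transform by SVD (the Kabsch solution), apply it, and repeat. This is a generic ICP outline assumed for illustration, not the vendor's algorithm, which adds outlier rejection, weighting, and convergence criteria.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(source, target, iterations=30):
    """Minimal iterative-closest-point alignment of a surface ROI (source) to a
    reference surface (target).  Both are (N, 3) point arrays in mm.  Returns a
    3x3 rotation R and translation t so that R @ p + t maps source onto target."""
    src = np.asarray(source, dtype=float).copy()
    target = np.asarray(target, dtype=float)
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)

    for _ in range(iterations):
        # 1. Match each source point to its closest target point.
        _, idx = tree.query(src)
        matched = target[idx]

        # 2. Best rigid transform for these correspondences (Kabsch / SVD).
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = tgt_c - R @ src_c

        # 3. Apply the incremental transform and accumulate it.
        src = (R @ src.T).T + t
        R_total, t_total = R @ R_total, R @ t_total + t

    return R_total, t_total
```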
The real-time surface matching capability has been applied to head
motion monitoring during frameless SRS (27,98). Using an OSI image
captured at treatment as reference, systematic errors of the OSI system
can be cancelled, yielding 0.2 mm accuracy for rigid motion detection.
For nonrigid anatomy, an OSI-based spirometry has been reported (111),
aiming to utilize all respiration-induced external motion to predict tumor
motion via physical relationships (112).
Real-Time Electromagnetic Localization and Tracking
Tracking of implanted fiducials without ionizing radiation imaging is
possible with a technology that uses radiofrequency (RF)
electromagnetic fields to induce and detect signals from implanted
“wireless” transponders (Calypso, Varian Medical Systems).
Electromagnetic tracking is now an integrated component of Varian’s
Edge treatment machine, as shown in Figure 12.8. The system consists of
a console, optical tracking system, and tracking station. The console is
situated near the treatment couch with a magnetic array panel extended
above and close to the patient surface. The array panel contains RF
source coils to excite the transponders and sensor coils to detect the
transponder response signals, each at a different resonant frequency for
unique identification at 10 Hz and submillimeter accuracy (113,114).
Calypso transponders can be implanted in soft tissue throughout the body; implantation in lung is currently under clinical study (115).
FIGURE 12.8 Two nonionizing motion-tracking systems equipped for the Edge System (Varian
Oncology Systems, Palo Alto, CA). A: Photograph of the system with the OSMS and Calypso
systems. B: A diagram showing the prototype AC electromagnetic field tracking system with the
detector array and the infrared cameras (Calypso, Varian Oncology Systems, Palo Alto, CA). The
Beacon transponder is shown in the inset.

FIGURE 12.9 Two integrated MRI-guided treatment units. A: A schematic of a prototype MRI-
guided real-time tracking system for IGRT (MRIdian, ViewRay, Inc., Gainesville, FL). The system
is designed to have a low-field open MRI for real-time imaging and three-headed Cobalt source
for intensity-modulated gamma ray irradiation (Photograph courtesy of James Dempsey, PhD). B:
A prototype of MRI-Linac system (by Philips Medical Systems, Milpitas, CA and Elekta Inc.,
Sweden) with a 1.5T split magnet and a 6MV Linac (Photograph courtesy of Jan Lagendijk, PhD).
MRI Real-Time Cine Imaging
An integrated MRI-guided cobalt machine (Fig. 12.9A) is commercially
available as an IGRT system (70), which consists of a low-field open MRI
system and three cobalt irradiation sources. Three computerized double-
focused multileaf collimator systems provide intensity-modulated
gamma-ray beams, which have lower energy and a larger penumbra than linac beams. The technology emphasizes the MRI-guided, near real-time (4
Hz) imaging system, which can track soft tissue targets and OARs
without interrupting treatment delivery.
Integrated MRI-linac systems are an active area of development
(71,116,117). One design is similar to the MRI-cobalt system in that the
radiation beam is perpendicular to the magnetic field between two split
magnets (Fig. 12.9B). The linac waveguide is shielded from the magnetic
field (1.5 T) and RF signal of the MRI unit. However, the magnetic field
interacts with secondary electrons generated in the patient, thereby
affecting the dose distribution (118). The effect is most pronounced at
tissue-air interfaces, where exiting electrons return to the tissue as a
result of the Lorentz force and locally deposit additional dose. On the
other hand, the beam modifiers, such as MLC, can affect the
homogeneity of the magnetic field (119).
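The scale of the electron return effect can be estimated from the gyration (Larmor) radius r = p/(qB) of a secondary electron in the transverse field. The snippet below is a back-of-the-envelope calculation with illustrative constants; it shows that a 1 MeV electron in a 1.5 T field curls back within a few millimeters, which is why the extra dose is deposited locally at tissue-air interfaces.

```python
import numpy as np

MEC2_MEV = 0.511          # electron rest energy, MeV
MEV_TO_J = 1.602e-13
E_CHARGE = 1.602e-19      # C
C = 2.998e8               # m/s

def gyration_radius_mm(kinetic_mev, b_tesla):
    """Radius of the circular path of a secondary electron whose velocity is
    perpendicular to the magnetic field (relativistic momentum used)."""
    total = kinetic_mev + MEC2_MEV
    pc_mev = np.sqrt(total**2 - MEC2_MEV**2)   # momentum times c, in MeV
    p = pc_mev * MEV_TO_J / C                  # kg m/s
    return 1e3 * p / (E_CHARGE * b_tesla)

# A 1 MeV electron in a 1.5 T field turns around within a few millimetres.
print(gyration_radius_mm(1.0, 1.5))   # ~3.2 mm
```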
A fast scan sequence, such as balanced steady-state free precession, is
required for cine 2DMRI (∼4 Hz) and TR 4DMRI (2 Hz), which use parallel imaging and approximations in image reconstruction
(79,120,121). Respiratory-correlated 4DMRI is also available (80,81).
MRI can sample more respiratory cycles without ionizing radiation for
treatment simulation and planning.
PET Real-Time Imaging
In PET/SPECT imaging, the radiation source is the viable tumor, which
is also the target of radiation therapy. In principle, in-room PET/SPECT
imaging could provide tumor position in real time. Fan et al. proposed
emission-guided radiotherapy (EGRT) that combines a PET scanner with
a linac (85). In this study, dose delivery was simulated using Monte
Carlo computation in a digital patient. Yang et al. investigated the
feasibility of using list-mode PET imaging to guide beam tracking of
tumor motion in a phantom study with 1D or 3D motion tracers (122).
The current method requires 10 seconds to determine tumor centroid
position and thus needs further improvement for motion tracking. Yan et al.
investigated the construction of an in-room SPECT system for functional
image guidance (83). The studies are in early stages of preclinical
research.
Real-Time Tumor Motion Compensation
Real-time tumor tracking refers to continuous adjustment of the
radiation beam or patient position during treatment so as to follow the
changing position of the tumor or its surrogate. In principle, real-time
tracking provides a combination of increased normal tissue sparing
relative to motion-encompassing methods by reducing the treatment
margin, and more efficient treatment with near 100% duty cycle relative
to gated treatment. In the following sections, we summarize three
strategies in various stages of development involving motion tracking of
a linac system.
Dynamic Multileaf Collimator Approach
Keall et al. have demonstrated motion tracking using dynamic MLC
(DMLC) (123). The Calypso system provides a near real-time motion
signal in prostate with better than 2 mm accuracy and 220 ms system
latency (114). One concern of this motion compensation method is the
anisotropic tracking resolution owing to MLC characteristics: Depending
on whether target motion is along or perpendicular to the leaf motion,
the spatial resolution for tracking is either <1 mm (along the direction of leaf travel) or 2.5 to 5 mm (limited by the leaf width). Different strategies to optimize leaf
trajectories have been studied (124,125). Zimmerman et al. have
demonstrated motion tracking with intensity-modulated arc therapy
(126). Motion-tracking radiation delivery has been demonstrated using
cine 2DMRI for image guidance in motion phantom experiments (127).
Keall et al. have reported the first clinical experience on DMLC tracking
of Calypso transponder for a prostate treatment (128).
Mobile Treatment Couch Approach
D’Souza et al. have proposed compensation of the tumor motion using a
robotic couch (129). Unlike the DMLC approach, this method can
compensate for 3D tumor motion with isotropic system responses;
however, there may be patient-related physical and medical concerns.
For instance, couch motion could induce a counter-reaction from the
patient, body shift, or tissue deformation when changing motion
directions, especially for obese patients. The Varian 6D couch is capable of motion tracking but has not been released for clinical use, while developments of mobile couches and couch extensions have been reported (130). Menten et al. have
illustrated comparable motion compensation between the mobile couch
tracking and DMLC tracking (131).
Movable Gantry Approach
The CyberKnife has a 6D robotic arm to position a lightweight linac
and a 6D robotic couch to align a patient and provides the first clinical
solution for tumor tracking (99). The robotic arm can move at speeds of
several centimeters per second, which makes it compatible with tracking
respiration-induced tumor motion. The VERO system is designed for
image-guided tumor tracking (Fig. 12.7). Based on a gimbaled design,
the beam can rotate transversely (panned) or longitudinally (tilted) to
track implanted fiducials in or near the tumor with the maximum
motion range of 4.4 cm (or 2.5 degrees) at the treatment isocenter and a
latency of 50 ms for 4DRT (107,132,133). The latency effect will be
discussed in “Management of Intrafractional Tumor Motion”.
IGRT REQUIREMENTS AND CONSIDERATIONS
IGRT Commissioning and Quality Assurance
Commissioning and QA of IGRT-enabled technologies are essential. The
American Association of Physicists in Medicine (AAPM) has issued
several task group (TG) reports, covering in-room kV x-ray imaging for
patient setup/target localization (TG#104) (134), QA for
nonradiographic imaging for patient setup/target localization (TG#147)
(135), QA for CT-based IGRT technologies (TG#179) (136), QA for
medical accelerators (TG#142) (137), SBRT procedures (TG#101) (138),
and management of respiratory motion (TG#76) (46). These reports
provide guidelines for clinical use and QA of the IGRT imaging systems
and procedures. In the following, we summarize three important aspects:
geometric accuracy, image quality, and motion detection.
Coincidence of Imaging and Treatment Isocenters
One of the most important tasks in commissioning an in-room imaging
modality is to compare the imaging isocenter with the treatment
isocenter. Conventionally, it is paramount to check the alignment of
radiation isocenter, mechanical isocenter, and laser isocenter. When
using IGRT, coincidence of imaging and treatment radiation isocenters
must be initially and periodically checked to ensure that discrepancies
are within clinically acceptable tolerances. The MV-EPID is used as the
gold standard as it provides direct reference to the treatment beam. For
stereotactic procedures, the discrepancy must be within 1.0 mm; for conventional treatments, it should be within 2.0 mm (137). A calibration procedure is required to correct mechanical
sagging for kV imaging and EPID detectors (38,139). Customized QA
phantoms have been developed for different IGRT systems, including kV
and MV imaging systems of C-arm linacs (137,140) and MVCT imaging
of tomotherapy units (141). To determine the geometric accuracy of
IGRT for SRS/SBRT, an end-to-end (from simulation to delivery) test is
recommended, by comparing the alignment of the center of the
delivered MV dose distribution with the planned isocenter (106,142).
Image Quality
Bissonnette et al. have established a QA program for CBCT image quality
with the Elekta Synergy and Varian OBI systems (29,30,74). The report
evaluates flat-panel detector stability, performance, and image quality of 10 linac imaging systems over a 3-year period. Details for correcting background (dark current) and pixel-by-pixel gain uniformity (flood-field image) of the flat-panel detector are also described. The CatPhan 500
phantom (The Phantom Laboratory, Salem, NY) is used to quantify
image quality (143). A comprehensive QA program by Yoo and Yin
describes safety, functionality, geometric accuracy, and image quality for
the Varian OBI system (144). Image quality characterization and QA
procedures for EPID (145), MV CBCT (146), and MVCT in helical
tomotherapy (141) have also been reported. A stereotactic head
phantom (Model 605 Radiosurgery Head Phantom; CIRS, Norfolk, VA)
or equivalent is used for imaging QA of the CyberKnife system (147).
Motion Detection
Clinical motion management guidelines have been published in AAPM
TG#76 (46), pertaining to respiration-induced motions of the target and
normal tissue. For fluoroscopic imaging, temporal resolution should be
100 ms or less, which corresponds to an uncertainty of <2 mm for an object
moving at speeds up to 2 cm/s. Jiang et al. have outlined major clinical
challenges in respiratory-related procedures, including respiratory
gating, breath hold, and 4DCT (148). As external surrogates are used in
many respiratory gating and breath-hold procedures, the biggest
challenge is to ensure treatment accuracy. Indeed, many have reported
the limited reliability of internal–external correlation using external
fiducials. When external monitoring is used, verification of internal–
external correlation using image guidance prior to each treatment
session is needed.
Image Registration and Fusion
In the IGRT context, image registration serves to align daily 3D or 2D
patient setup images with the planning CT or DRR images, respectively.
Uncertainty in image registration increases when the underlying
anatomy changes, including motion, deformation, or physical changes.
To minimize the uncertainty, image registration often focuses on the
vicinity of the tumor using surrogates, such as the bone and fiducials.
Visual verification with necessary manual adjustment is essential after
automatic image registration. The three orthogonal views of fused 3D
images are evaluated using color blending, checkerboard, or split
windows. Direct evaluation of 3D volumetric image rendered by GPU-
based, real-time computation is also possible (149,150). QA of the image
registration and fusion methods should be performed using an
appropriate phantom (151).
Rigid Image Registration
Most image registration tools used for IGRT perform rigid registration based on a rigid (translation plus rotation) transformation. In addition to deformation-related uncertainty in
rigid image registration, uncertainty may come from interobserver
variation during manual alignment. Registration accuracy, couch
adjustment accuracy, kV–MV isocenter discrepancy, couch walk for
noncoplanar beams, and patient motion after image acquisition
determine the overall setup accuracy for tumor localization.
Rigid image registration in a volume of interest (VOI) is more
clinically relevant in the presence of tissue or patient setup deformation.
Zhang et al. (152) and van Kranen et al. (153) have reported using
multiple VOIs for rigid registration to evaluate the local deformation
among three to nine different VOIs in the head-and-neck region.
Park et al. have developed a spatially weighted image registration
method to allow users to define the structure of interest (154).
Mencarelli et al. developed an automatic detection system with multiple
VOIs to account for posture variation during head-and-neck setup (155).
A 3D/3D (CBCT/CT) registration is the standard to achieve 6 DOF.
Registration of an orthogonal pair of 2D kV images to the CT (2D/3D) can also achieve 6 DOF by using multiple DRRs generated at small rotational increments. This technique has been employed in CyberKnife, BrainLab, and Varian’s OBI systems with an accuracy of 1 mm/1 degree or less in 6 DOF alignment of bony structures or fiducial markers (100,156).
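Automatic rigid registration typically maximizes an intensity-based similarity metric over the six transform parameters. One widely used metric, assumed here purely for illustration since the text does not specify which metric any particular system uses, is mutual information, sketched below from a joint intensity histogram.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two overlapping image volumes, a similarity
    metric commonly maximized during automatic rigid registration."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist_2d / hist_2d.sum()                 # joint probability
    p_a = p_ab.sum(axis=1, keepdims=True)          # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)          # marginal of image B
    nz = p_ab > 0                                  # avoid log(0)
    return np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz]))
```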
Deformable Image Registration
DIR may not be suitable for setup correction using couch adjustments, as
rigid transformation cannot effectively compensate for tissue
deformation. Nevertheless, DIR is essential for contour propagation and
delivered-dose estimation: it is a useful tool for adaptive IGRT. Since
2007, DIR has been intensively studied focusing on deformation
algorithms, physical constraints, self-consistency, and accuracy
assessment (157,158). The uncertainty of DIR is 2 to 3 mm on average.
DIR can track deformed anatomy voxel by voxel between two 3D
images, producing a deformation vector field (DVF) useful in the
following three IGRT areas (92,159–161).
First, the DVF provides a complete motion transformation between two motion states and is useful for motion modeling. To
simplify and expedite calculation, Zhang et al. have shown that the
combination of DIR with principal component analysis (PCA) provides a
patient-specific motion model (162). The 4DCT-derived DVF has been
applied to generate motion-compensated CT, CBCT, DTS, and PET with
reduced motion artifacts (58,66,163).
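A schematic of the DIR-plus-PCA motion model follows: the phase-to-reference DVFs of a 4DCT are flattened into vectors, and the leading principal components capture most of the patient-specific breathing variation. This is only an outline of the general approach, with assumed names and array shapes, not the specific implementation of Zhang et al.

```python
import numpy as np

def pca_motion_model(dvf_samples, num_modes=3):
    """Patient-specific respiratory motion model from a set of DVFs
    (e.g., the phase-to-reference DVFs of a 4DCT) via principal component
    analysis.

    dvf_samples : array of shape (num_phases, num_voxels * 3), each row a
                  flattened DVF
    Returns the mean DVF, the leading modes, and the per-phase scores so that
    dvf ~= mean + scores @ modes.
    """
    mean = dvf_samples.mean(axis=0)
    centered = dvf_samples - mean
    # Economy-size SVD is cheap here because num_phases << num_voxels.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    modes = Vt[:num_modes]                 # (num_modes, num_voxels*3)
    scores = centered @ modes.T            # (num_phases, num_modes)
    return mean, modes, scores
```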
Second, the DVF can be used for contour propagation, which is
essential for 4D and adaptive IGRT, as illustrated in Figure 12.10.
Wijesooriya et al. have studied the accuracy of automated segmentation
among different phase CT images in 4DCT by comparing 692 pairs of
automated and physician-drawn contours. The surface congruence of the
GTV and OARs was within 5 mm in >90% of cases (161). Wang et al. have
applied DVF to propagate planning contours to daily CBCT in lung and
head-and-neck cancer patients, and found the volume overlap index to
be 83% with reference to physician-drawn contours (164). Physician
evaluation of these propagated contours is highly recommended,
especially in the presence of motion and metal artifacts (160).
Third, the DVF can be applied to map a dose distribution. This method
has been applied to 4D planning using 4DCT (123) and for estimating
delivered dose using daily setup CBCT (165), which is essential for
adaptive IGRT (166,167). The mapped dose is only an estimate as tumor
shrinkage and weight loss make the mapping unreliable (168,169). Li et
al. compared energy/mass transfer and direct dose mapping in 10 lung
patients and found noticeable dose differences (11% in PTV) (169).
Calculation based on deformed anatomy should provide more accurate
dose distribution (170,171).
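The dose- or contour-mapping step can be sketched as resampling a volume through the DVF: for every voxel of the target grid, look up the displaced position on the reference grid and interpolate. The sketch below assumes a pull-back DVF stored in voxel units and uses simple trilinear interpolation; it corresponds to direct dose mapping only, so the energy/mass-transfer considerations noted above are outside its scope.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_dvf(volume, dvf, order=1):
    """Apply a deformation vector field to a 3D volume (e.g., to propagate a
    planning dose or a binary contour mask onto a daily image).

    volume : 3D array defined on the reference (planning) grid
    dvf    : array of shape (3, Z, Y, X), displacement in voxels from each
             target-grid voxel back to its location on the reference grid
    """
    zz, yy, xx = np.meshgrid(
        np.arange(volume.shape[0]),
        np.arange(volume.shape[1]),
        np.arange(volume.shape[2]),
        indexing="ij",
    )
    coords = np.array([zz + dvf[0], yy + dvf[1], xx + dvf[2]])
    # Trilinear (order=1) interpolation; use order=0 for binary contour masks.
    return map_coordinates(volume, coords, order=order, mode="nearest")
```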
IGRT Concerns in Simulation and Planning
Although IGRT focuses on target alignment at treatment delivery, it is strongly related to simulation and planning (172,173). As mentioned before, the planning CT is a snapshot of the patient anatomy and may not be representative of the anatomy at treatment. The variation can be mitigated
by preparatory procedures such as hydrogel spacer placement. Qi et al.
(172) have shown an adaptive multi-plan method, in which nine
treatment plans for each patient were produced to cover the most
probable prostate positions relative to the nodes. An alternative method
is to perform online (immediately prior to treatment) reoptimization of
the treatment plan (174).
FIGURE 12.10 Automated image segmentation of multiple repeat CT datasets. In this head-and-
neck example, contours drawn manually on the planning CT were deformed to obtain contours for
repeat CT scans obtained during the course of radiotherapy. Deformations were carried out using
transformation matrices based on deforming planning CT image to match each of the repeat CT
images. Such automatic segmentation tools, once validated by clinical studies, would make
adaptive replanning practical.

Information Technology Infrastructure for IGRT


Implementation of IGRT into routine clinical workflow requires tighter
integration of imaging and treatment systems and more efficient
information flow. IGRT represents a shift from a traditionally static
treatment planning process to a more dynamic, closed-loop process with
multiple feedback check/control points. To meet the technical and
logistical needs, the following infrastructure and software tools are
considered important to IGRT applications:

IGRT Data Management:

• Picture Archiving and Communication Systems designed specifically for radiotherapy (RT-PACS) are needed to integrate IGRT workflows, data management, user interfaces, and statistical tools across different imaging and treatment procedures.
• Integrating the Healthcare Enterprise in Radiation Oncology (IHE-RO) (175) endeavors to specify and address specific clinical problems and ambiguities, including those for IGRT. It aims to overcome the shortcomings of the Digital Imaging and Communications in Medicine (DICOM) standard in radiotherapy (DICOM-RT), which is the current industry standard (176).
• Treatment management systems play a central role in integrating
image guidance and treatment delivery systems. With more frequent
use of 2D, 3D, and 4D multimodal images and possible adaptive
replanning during the course of treatment, data storage requirements can increase by one to two orders of magnitude.

IGRT Facilitating Tools:

• Tools for both rigid registration and DIR are necessary for implementing various IGRT approaches (92,159).
• Automatic treatment planning and optimization are needed to perform
plan adaptation to changing anatomy or altered target volumes (177).
• Cross-platform treatment plan comparison tools are needed for multi-
institutional studies. The computational environment for radiotherapy
research (CERR) (178) and deformable image registration for adaptive
radiotherapy research (DIRART) (159) provide a common platform for
treatment plan database and tools for clinical outcome research and
analysis.

Selection of IGRT Technology


The selection of an appropriate image-guidance solution is a complex
process that may involve a compromise among clinical objective,
product availability, existing infrastructure, manpower, and resources
(9,179). The implementation of an IGRT technology in the clinic
requires a thorough understanding of the complete clinical process and
the necessary infrastructure to support data collection, analysis, and
intervention. The four considerations suggested by the AAPM Task Group 104 report (134), namely clinical, technical, resource, and administrative, may evolve with industry trends.

IGRT CORRECTION STRATEGIES AND APPLICATIONS


Online Versus Offline Corrections
The establishment of a particular clinical process for correcting patient
position based on the data from various clinical studies is referred to as a
correction strategy. Strategies are broadly divided into online and offline
approaches. The online approach makes an adjustment to the current treatment session based on the data just acquired. This may be as simple as a couch position adjustment or as complex as a full-plan reoptimization for adaptation. The offline approach intervenes in the treatment at a later time, such as through weekly physician review of portal images and replanning in response to patient changes (180). The online approach has a greater capacity to increase precision than offline strategies, but at the cost of a higher workload. A hybrid correction strategy, with different error thresholds and time allowances, is often used clinically (9,10,177,179).
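A hybrid strategy can be expressed as a simple decision rule: shifts above an online action level are corrected at the machine immediately, while all shifts are logged so that the running mean (an estimate of the systematic error) can trigger an offline correction or replanning once it exceeds its own, tighter threshold. The thresholds and logic below are illustrative assumptions, not a published protocol.

```python
import numpy as np

ONLINE_ACTION_LEVEL_MM  = 5.0  # assumed threshold for an immediate couch correction
OFFLINE_ACTION_LEVEL_MM = 2.0  # assumed threshold on the running systematic error

def correction_decision(daily_shift_mm, shift_history_mm):
    """daily_shift_mm: today's 3D setup error from image registration (mm).
    shift_history_mm: list of setup-error vectors from previous fractions."""
    decision = {"online_couch_correction": None, "offline_review": False}

    # Online: correct today's setup if any component exceeds the action level.
    shift = np.asarray(daily_shift_mm, dtype=float)
    if np.max(np.abs(shift)) > ONLINE_ACTION_LEVEL_MM:
        decision["online_couch_correction"] = -shift

    # Offline: the running mean estimates the systematic component; flag the
    # patient for offline correction or replanning if it drifts too far.
    history = np.asarray(shift_history_mm + [daily_shift_mm], dtype=float)
    if np.max(np.abs(history.mean(axis=0))) > OFFLINE_ACTION_LEVEL_MM:
        decision["offline_review"] = True
    return decision
```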
In an accuracy-demanding procedure, such as frameless SRS, IGRT
patient setup and motion management can take a large portion of
treatment time (27). The overhead associated with the alignment tools
and decision rules can be prohibitive unless properly integrated. The
adaptive radiotherapy program at William Beaumont Hospital
(166,167,181) was made possible only through in-house software
integration efforts. For 4D tumor tracking, additional automated tools are necessary to support online correction and intrafractional intervention.

Correction of Interfractional Setup Error


Various techniques have been developed for pretreatment setup
corrections. Without loss of generality, we consider an in-room CT-
guided IGRT system (Fig. 12.11). Following patient immobilization and
alignment of skin marks with room lasers, or using OSI for alignment, a
CBCT is acquired and aligned with the planning CT. The primary means
of intervention is correction of translational deviations, since rotational
corrections are small for single lesions enclosing the isocenter. For
multiple lesions with a single isocenter, rotational deviations may be
important and can be corrected using a 6 DOF couch with isocentric
rotation. The second level of intervention may be dose based. Ideally,
the treatment goal would be based on the delivered dose distribution.
The CBCT images of the patient’s treatment anatomy make it possible to
estimate the delivered dose distributions and to calculate accumulated
dose. Accumulated dose deviations can be corrected infrequently using
an offline adaptive correction scheme (177,181).
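The translational correction in this workflow is obtained from a rigid registration of the setup CBCT to the planning CT. A minimal sketch using SimpleITK's registration framework is shown below; the file names, metric, and optimizer settings are assumptions, and clinical systems use vendor-specific, validated implementations with their own sign conventions for the couch shift.

```python
import SimpleITK as sitk

planning_ct = sitk.ReadImage("planning_ct.nrrd", sitk.sitkFloat32)
setup_cbct  = sitk.ReadImage("setup_cbct.nrrd",  sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-3, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
initial = sitk.CenteredTransformInitializer(
    planning_ct, setup_cbct, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

# Fixed image: planning CT; moving image: setup CBCT.
transform = reg.Execute(planning_ct, setup_cbct)

# Euler3DTransform parameters are (rotX, rotY, rotZ, tx, ty, tz); the last
# three give the translational setup deviation in mm.  The couch correction
# is derived from these values (sign convention depends on the system).
rx, ry, rz, tx, ty, tz = transform.GetParameters()
print("Setup deviation (mm):", tx, ty, tz)
```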

FIGURE 12.11 An in-room volumetric CT-guided radiotherapy process. CT images of the patient's setup and anatomy are acquired and sent to an alignment workstation where the
images are compared and aligned to match with the planning CT. An interventional decision is
made based on the magnitude of anatomic variations to assess the need for an online or off-line
correction. If necessary, dose tracking may be enabled and used for replanning.

Management of Intrafractional Tumor Motion


In the presence of significant intrafractional motion, additional
geometric and dosimetric variations should be taken into account, which
will increase treatment complexity. Not surprisingly, most motion management efforts relate to the treatment of lung and abdominal cancers (46,74,177). In the setup of a patient with a mobile tumor, localization of implanted fiducials using fluoroscopy can be achieved by aligning the track of the implanted marker with the track discerned from the reference 4DCT. Bony landmarks should not be relied on for setup alignment, since the tumor motion trajectory relative to the bony landmarks may change (182). Soft tissue imaging with direct target alignment using 4D CBCT or cine 2DMRI is preferable.
Real-time monitoring of implanted fiducials, or of the tumor directly,
is needed for accurate gating of radiation treatments. Respiratory gating
used at simulation may be used to control dose delivery during the
quiescent period around end expiration. MLC motion is intermittent
during gated IMRT, thereby reducing possible interplay effects between
MLC and respiratory motions (183). Audiovisual feedback may improve
breathing regularity and breath-hold reproducibility (184,185), and can
be used in respiratory-gated treatment. In voluntary breath-hold, the
beam is enabled when the inspiration level is within tolerance of the
planned level (186). Reproducible involuntary breath hold may be
achieved using active breathing control developed by Wong et al. (187).
There will be a time delay from motion detection to the action of beam hold or motion tracking, usually 100 to 400 ms (188,189). This latency can cause a targeting error of 1 to 2 mm (104), which is critical for tumor tracking. A predictive model can be used to anticipate the tumor position, reducing the latency-induced error and achieving submillimeter accuracy (127,190). Tumor motion can also be predicted from external markers based on a motion correlation model; such a model requires initial calibration and periodic verification and updating through frequent imaging measurements (133,191).
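Latency compensation amounts to extrapolating the target (or surrogate) position forward by the system delay. A minimal sketch using a linear fit to the most recent samples is shown below; the 30-Hz sampling rate, 300-ms latency, and linear predictor are illustrative assumptions, and clinical systems use more sophisticated adaptive predictors.

```python
import numpy as np

SAMPLE_RATE_HZ = 30.0   # assumed monitoring rate
LATENCY_S      = 0.30   # assumed delay from detection to beam action

def predict_position(recent_positions_mm, n_fit=10):
    """Predict one motion component LATENCY_S ahead by fitting a line
    to the last n_fit samples (mm)."""
    y = np.asarray(recent_positions_mm[-n_fit:], dtype=float)
    t = np.arange(len(y)) / SAMPLE_RATE_HZ
    slope, intercept = np.polyfit(t, y, 1)      # simple linear trend
    return slope * (t[-1] + LATENCY_S) + intercept
```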

Population-Based and Individualized Margins


The relative importance of systematic and random errors in the
determination of PTV margins should be considered in the design of a
clinical strategy. Geometric errors in radiation field placement are
typically characterized by distributions of nonzero mean and variance.
The mean represents the systematic discrepancy while the variance
represents the random component. The relative importance of these two
categories of errors may vary in determining appropriate PTV margins
(192,193). The reported margin formulas may not be completely general, as the number of fractions is not taken into account; this is especially relevant for SBRT courses of five fractions or fewer.
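For reference, the most widely quoted population-based recipe, attributed to van Herk and colleagues, combines the standard deviation of the systematic errors (Σ, added in quadrature over all sources) with that of the random errors (σ, likewise combined in quadrature). In its commonly cited simplified form,

$$
M_{\mathrm{PTV}} \approx 2.5\,\Sigma + 0.7\,\sigma ,
$$

which is designed so that roughly 90% of patients receive at least 95% of the prescribed dose to the CTV. This form implicitly assumes a conventionally fractionated course over which random errors blur the dose distribution, which is precisely why its applicability to hypofractionated SBRT has been questioned.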
A treatment margin depends not only on the imaging modality chosen
and tumor surrogate used, but also on the type of patient immobilization
and motion management technique employed. For respiratory motion,
breath hold, abdominal compression, respiratory gating, or motion
tracking each manage tumor motion to a different degree. As a consequence, the margin added to form the internal target volume (ITV) can be reduced relative to that of a motion-encompassing ITV (166,194). The overall
margin is the sum of uncertainties from inter- and intrafractional
motions.
In the lung, breathing irregularities add another level of uncertainty and complexity to the treatment. Grills et al. proposed a
margin formula that contained components of both population-based
and patient-specific systematic and random errors (195). Such a margin
formula was applied to CT-guided setup of lung cancer cases, resulting in
a 65% to 75% margin reduction. To compensate for baseline drift, Pepin
et al. have reported using a dynamic-gating window (196). As the tumor
motion is patient specific and determined using 4DCT, an individualized ITV margin is often applied in lung cancer treatment (195). However, 4DCT-based motion simulation is based on a single respiratory cycle and thus could be statistically unreliable (197). Motion simulation for ITV derivation based on 4DMRI or cine 2DMRI has been proposed and investigated (81,82,198).
Prostate cases offer one of the most dramatic examples of IGRT margin reduction among all anatomic sites. With in-room CT guidance, a 3-mm margin was reported to be adequate for prostate dose coverage, although some seminal vesicle coverage may be lost because of daily variations in rectal and bladder filling that cause local deformation (199). A comparative study of four different IGRT setup methods (skin marks, 3D bony landmarks, 3D fiducial markers, and Calypso transponders) showed that the last two methods can meet 4-mm and 3-mm margin requirements, respectively (200). Another study compared four setup techniques using skin marks, 2D bony registration, ultrasound guidance, and in-room 3DCT (201). A recent Calypso study, based on 1,267 tracking sessions in 35 patients, showed that a margin of 2 mm would produce sufficient CTV dose coverage (202). Figure 12.12A demonstrates that the alignment accuracy increases with the complexity of the alignment technology: from skin to bone to ultrasound to CT (8). The dosimetric result for one patient exhibiting large organ motion is shown in Figure 12.12B.
FIGURE 12.12 A: Patient setup accuracy of prostate cancer using different in-room imaging
modalities. Generally, the accuracy increases with imaging frequency, dimensionality, and the use of fiducials. The skin-mark and ultrasound setups have the largest variation, as indicated by the error bars.
The margin could be reduced from 8 to 2 mm based on this finding. Reprinted with permission
from Mageras GS, Mechalakos J. Planning in the IGRT context: Closing the loop. Semin Radiat
Oncol. 2007;17:268–277. B: Target coverage based on various types of image-guided setups for
treatment. In this example, 24 treatment-time CT scans of a prostate cancer patient were used to
compare the effectiveness of four alignment techniques for patient setup using a fixed-margin
IMRT plan. The minimum target dose is lowest (59.3 Gy) for skin marks–based setup and highest
(76.0 Gy) for the CT-guided setup. The day-to-day variations in the minimum dose (represented
by the error bars) are smallest for CT-guided technique and largest for skin-mark and ultrasound-
guided techniques.

Anatomic Variations and Dosimetric Consequences


Inter- and Intrafractional Variations in Anatomy
Substantial inter- and intrafractional organ variations and setup uncertainties of the lung, liver, diaphragm, gynecologic organs, prostate, seminal vesicles, bladder, and rectum have been reviewed by Langen and Jones (203). Even with careful immobilization and alignment of the patient,
significant changes occur because of the nonrigidity of anatomy, bowel
gas movement, and variable fillings of the bladder (204). Li et al.
reported interfractional anatomic variations for all major sites based on
daily CT assessment (205). Target and OAR variations may follow
certain trends, including tumor volume shrinkage up to 12 months after
initial hormone treatment (206), radiation-induced tumor shrinkage, and
disease-related weight loss. Figure 12.13A shows a side-by-side
comparison of a head-and-neck target volume that has shrunk
significantly during the course of treatment. The skin contour no longer
matches well with the immobilization mask. Changes in target volume
and OAR position could have significant clinical consequences
(152,153). During a prostate IMRT treatment, changes in bladder filling
can cause prostate and OARs to move away from the planning position,
as demonstrated in Figure 12.14.

FIGURE 12.13 A: An example of setup error for a patient immobilized with a thermoplastic
facemask due to tumor shrinkage as treatment progresses. Approximately half-way through the
treatment course (right panel), the lower neck was not centered on the headrest, presumably due
to the relatively “roomier” mask. B: Dosimetric impact of interfractional variations in head-and-
neck anatomy. The solid lines show the volumes of the parotid glands (left and right) decreased
as the treatment progressed. At the same time, the centers of both parotid glands also moved
medially due to tumor shrinkage and weight loss. As a result, the percent of parotid volume
exceeding 26 Gy increased by at least 10% over the course of radiotherapy.

Dosimetric Effects due to Interfractional Motion


The common approach to evaluating a delivered dose is to use the daily
setup 3DCT images and actual dynamic leaf sequence from a treatment
log file for dose reconstruction (207). To generate a cumulative dose
distribution over multiple fractions, dose mapping based on DIR is
applied to a reference image for final dose evaluation (161).
In prostate cases, it has been reported that 25% (8/33) of patients would have a geometric or dosimetric miss without daily MVCT guidance to improve prostate localization (208). Obese patients and patients with large daily rectal motion would be most subject to such a marginal miss.
van Herk has pointed out that the systematic uncertainty is more
important and should be minimized (192). Langen et al. have
investigated the dosimetric consequences of prostate motion during
helical tomotherapy for 16 patients with 515 daily MVCT scans (209).
The study found that the mean change in target D95% was 1% ± 4% and that the average cumulative effect smeared out after five fractions. In individual fractions, however, the D95% may be off by up to 20%. For normal
tissues such as the rectum, Chen et al. have reported that daily dose
variation caused by rectal volume changes is significant and 27% of
treatments would benefit from adaptive replanning (210). Wen et al.
have calculated actual accumulated doses with three PTV margins of 10
mm (6 mm at anterior rectum), 5 mm (3 mm) and 3 mm (isotropic)
(211). Using calculated TCP and NTCP values in eight early-stage prostate patients, they suggested that margin reduction resulting from IGRT was an effective means to improve the therapeutic ratio. Clinically, Sveistrup et al. compared 388 IG-IMRT with 115 non-IG 3DCRT treatments and reported grade 2 or higher toxicity rates of 5.8% versus 57.3%, respectively (7).
Zelefsky et al. conducted a clinical study with 186 IGRT and 190 non-
IGRT treatments and demonstrated a significant improvement in biochemical control at 3 years among high-risk prostate patients with IGRT versus non-IGRT (97% vs. 77%, p = 0.05) (212).
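The coverage metric used in these studies, D95%, is the dose received by at least 95% of the structure volume; on a voxelized dose grid with a structure mask it is simply the 5th percentile of the in-structure dose, as in the sketch below (array names are assumptions, and a uniform dose-grid voxel volume is assumed).

```python
import numpy as np

def d95_gy(dose_grid_gy, structure_mask):
    """Dose (Gy) received by at least 95% of the structure, i.e., the
    5th percentile of the dose values inside the structure mask."""
    in_structure = dose_grid_gy[structure_mask.astype(bool)]
    return np.percentile(in_structure, 5)

# Example: per-fraction change in target D95% relative to the plan.
# delta_pct = 100.0 * (d95_gy(daily_dose, ctv_mask) /
#                      d95_gy(planned_dose, ctv_mask) - 1.0)
```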
FIGURE 12.14 Intrafractional variations of anatomy observed in a prostate patient in the span of 20 minutes. CT images were acquired just prior to and immediately after an IMRT treatment fraction. The contours of pelvic anatomy before treatment (left) are overlaid on the CT image of the patient acquired immediately after the treatment (right). The prostate target (red) was displaced anteriorly by >5 mm.

In head-and-neck treatments, it is desirable to reduce the dose to the parotid glands in order to minimize the incidence of late xerostomia
(213). Unfortunately, the parotid glands can decrease in volume and
move medially during the course of treatment (214). As a result, the parotid mean dose increased by 10% and exceeded 26 Gy (Fig. 12.13B). A single mid-course correction to adapt the treatment plan to the anatomic change can help reduce the dose to both parotid glands (215).

Dosimetric Effects of Intrafractional Motion


The dosimetric consequence of intrafractional breathing motion for lung
tumors can be demonstrated by 4DCT-based planning. Figure 12.15A
shows a case study that used a free-breathing CT image to design a
treatment plan with an inadequate 8-mm margin to cover the CTV
(shown in yellow). The actual dose distribution does not cover the entire
target volume in some of the breathing phases due to respiratory motion,
which is not detected in the free-breathing CT. Using DIR, the
cumulative dose distribution from the 10 individual phases is calculated
and mapped to a free-breathing fast CT scan (near phase 7). The
resultant cumulative dose distribution summed from the entire breathing
cycle shows a dose deficiency in the CTV (red arrow), as illustrated in
the bottom row of Figure 12.15B. In this case, the cumulative dose
distribution when using the ITV derived from the 10-phase 4DCT does
not underdose the target but results in treatment to a larger volume. Wu
et al. have compared three delivery techniques (3DCRT, IMAT, and IMRT) in five liver treatment cases, with tumor motion ranging from 0.5 to 1.75 cm. The variation in CTV D95% is largest (−8.3%) for IMRT
and smallest (<2%) for 3DCRT, with negligible dose–volume histogram
variations for normal tissues (216). Kuo et al. have found that with an
adequate margin for motion, D95% variation is <2% in the CTV (217).
However, the representation of respiratory motion during treatment
using the one-cycle 4DCT simulation has been questioned, and a longer
4DMRI simulation has been suggested (82,198).

Adaptive Approaches for Correcting Dosimetric Deviations


William Beaumont Hospital has pioneered the adaptive radiotherapy
strategy by using a purpose-built treatment planning system to facilitate
offline dosimetric evaluation and replanning (167,181). Without support from a more automated planning system, routine replanning is not feasible. Recently, studies from several groups have focused on implementing automated treatment planning systems (218,219) and an automatic CT simulation optimization strategy (220). Predictive treatment planning (221), which incorporates a tumor regression model from the start, represents a new approach to adaptive radiotherapy and may yield more flexibility in accounting for variations.
FIGURE 12.15 A: Potential consequence of respiratory motion on target coverage. An IMRT plan,
developed using conventional CT, was applied to the patient’s 4DCT. Dose distributions were
calculated in each of the 10 phases of the breathing cycle. A portion of the CTV, shown in thick
yellow line, was not covered by the 70 Gy prescription dose line (red) in phases 1 through 4. B:
Comparison of a treatment plan as perceived on a free-breathing CT (top row) and as realized
after accounting for breathing motion in all 10 phases (bottom row). The latter was obtained by
summing dose distributions computed on individual phases of the 4DCT image (A), and mapped
to a reference CT image using deformable image registration (DIR).

Mageras and Mechalakos have discussed treatment planning in the IGRT context and the various challenges to treatment-plan adaptation strategies in various disease sites (8). An alternative planning approach to evaluating a PTV is to simulate motion and other uncertainties directly in the dose calculation, resulting in dose distributions not only for the CTV but also for the OARs.
The adaptive concept as applied to radiotherapy practice derives from
modern informatics and control theory. Offline adaptation has been
implemented for various disease sites in various institutions (180,222),
and online replanning has been reported, with computation time within
5 to 8 minutes (174). Oh et al. have reported on a hybrid adaptive
approach (online MRI guidance and offline replanning) applied to 33
cervical cancer cases to overcome the substantial organ motion and
tumor shrinkage (180). The knowledge gained from geometric and
dosimetric variations via clinical IGRT research will be useful for guiding
the treatment planning in certain clinical scenarios. Four-dimensional
MRI should provide an advantage in this respect.

Future Directions
Image-guided radiotherapy is commonly considered in the context of
treatment delivery, but it is more appropriate to broaden its scope to include imaging at other stages of the radiotherapy process (10). We therefore briefly discuss future directions as they apply to this broader definition.
We believe that further advances in IGRT rely on the innovation and
integration of automated technologies to facilitate evaluation and
decision-making processes. Automation in treatment simulation,
planning, and delivery will be active areas of investigation, allowing
standardization of treatment planning based on a planning library containing optimal plans for all anatomic sites and planning techniques. Technologies such as graphics processing unit (GPU) computing (97,149) and cloud computing (223,224) allow online 3D/4D image reconstruction, online plan reoptimization, and real-time tumor tracking. A new clinical workflow for adaptive IGRT could then be implemented, with the focus shifting from the routine planning process to personalized plan tailoring, plan QA, and treatment assessment and adaptation.
Further employment of multimodal imaging in both simulation room
and treatment room will be an active area of development. Functional
imaging provides viable tumor volumes for planning and biologic image
guidance for treatment. In-room MRI has visualized a moving tumor and OAR during treatment for the first time and could help minimize the chance of a marginal miss. Treatment verification may be augmented with in-room PET, SPECT, MRI, or Cherenkov OSI. Cine 2DMRI and 4DMRI, with visualization of both tumor and OAR, are clinically desirable for tumor motion assessment at simulation and tumor motion monitoring during treatment. MRI-based treatment planning can change
the radiotherapy paradigm, especially in conjunction with the integrated
MRI-cobalt or MRI-linac units, realizing adaptive IGRT clinically.
Treatment response evaluation using multimodality imaging will
continue to be an active area of investigation. Due to the complexity of radiation response, a multilevel approach at the molecular, cellular, organ, and physiologic levels is more likely to yield useful information.
Different biologic tracers could be designed to probe proper biologic
attributes, such as DNA double-strand breaks or cellular membrane
rupture. Ideally, response assessment within the treatment course would be most beneficial for individualized treatments, but in reality an effective assessment index is lacking even after treatment. A response-driven, biologically adaptive radiotherapy is still distant from clinical practice. The dosimetric feedback loop, which is within reach, will help ensure that the treatment proceeds along the intended course and can provide more reliable clinical data for tuning prediction models of treatment outcome.
Radiation therapy has gone through a series of technologic revolutions
following several breakthroughs in imaging in the past three decades.
We have witnessed the growth of IGRT, which has provided improved
geometric and dosimetric accuracy in radiation therapy of localized
cancers. We expect that more technologic advances are forthcoming at
all levels of IGRT, and will further close the physical, biologic, and
clinical feedback loop for radiation therapy.

KEY POINTS
• In the application of IGRT to treatment delivery, we have a better
understanding of various uncertainties, correction strategies, and
technical limitations. Geometrically, a large body of evidence has
shown the improved accuracy of IGRT in patient setup and motion
management. Dosimetrically, IGRT improves treatment delivery in
treatment plans that contain sharp dose gradients or mobile
targets. Clinically, increasing evidence has revealed associations of
local failure with marginal miss and high-grade toxicity with organ
motion.

• In this chapter, we have discussed the importance of image-guided radiation therapy (IGRT) and many in-room imaging modalities,
which serve as visual and quantitative guidance for 3D/4D
treatment simulation, accurate treatment planning, and image-
guided treatment delivery. Using 2D radiologic imaging, 3D
tomographic imaging, 4D respiration-correlated imaging, or 4DOSI,
the setup image before treatment is aligned to the planning image
with reference to the radiation isocenter, so that the treatment can
be delivered as planned. Tumor-tracking images during treatment serve to align the target with the radiation beam so that the mobile tumor is irradiated continuously or within a gating window.

• Implementation of IGRT requires various tools, QA procedures, and resources to achieve clinical objectives for radiotherapy. IGRT has
been successfully implemented for all major anatomical sites and
has been demonstrated to improve treatment accuracy with
reduced uncertainty and margin requirements.

• In the past three decades, the evolution of IGRT has been punctuated with major technologic advancements. In the future,
IGRT will continue to evolve as emerging technologies and clinical
challenges motivate investigations into new areas.

QUESTIONS
1. On modern isocentric linacs, both kV and MV radiographic imaging are available commercially. Which of the following statements is incorrect with regard to kV and MV imaging?
A. The kV and MV images can be used together to produce a
hybrid CBCT.
B. The alignment of the kV and MV imaging isocenters should be
periodically checked.
C. The quality of kV imaging is better than that of MV imaging
due to a large component of Compton scatters in the MV
beam.
D. The kV image quality can be improved by using a metal grid in front of the imager and post-acquisition image processing; the same techniques can be applied to improve MV image quality.
E. Both kV and MV CBCT images can be used for evaluating
delivered dose, while MV CBCT has the advantage of less
metal artifacts and no need for CT number conversion.
2. A variety of in-room imaging modalities has been developed for
IGRT procedures. Which of following imaging modalities or
nonimaging tools can be used for intrafractional real-time motion
monitoring as a direct or indirect tumor motion surrogate?
(1) 4DCT or 4D CBCT imaging
(2) Gantry-mounted orthogonal 2DkV imaging
(3) Room-mounted orthogonal 2DkV imaging
(4) Infrared marker tracking system
(5) Calypso electromagnetic transponder system
A. (1) only
B. (2) + (3)
C. (2) + (4) + (5)
D. (2) + (3) + (4) + (5)
E. All of the above
3. A CBCT scan is acquired during a paraspinal SBRT procedure and
ready for approval. However, the attending physician, who is
with another patient, approves the image alignment 10 minutes
later. The physicist on duty requests that the therapists take a
verification orthogonal 2DkV image pair. Is the physicist’s action
correct and why?
(1) No; there is no need for 2DkV verification since 2DkV
alignment is inferior to CBCT.
(2) No; there is no need for 2DkV verification since physician has
approved the CBCT.
(3) No; there is no need for 2DkV verification since the patient is
immobilized in a mask.
(4) Yes; the 2DkV verification is necessary since the patient may have moved during the delay.
(5) Yes; the orthogonal 2DkV images can provide 6 DOF
registration via 2D/3D registration, so they can be used for
verification of CBCT alignment.
A. (1) + (2) + (3)
B. (2) only
C. (3) only
D. (4) only
E. (4) + (5)
4. In a radiation oncology clinic, a frameless SRS (fSRS) procedure is
implemented for clinical use. An end-to-end test is conducted
using an anthropomorphic head phantom with inserted
orthogonal films to deliver an fSRS plan and shows a 2-mm
difference between the center of the delivered spherical dose
distribution and the planning isocenter marked on the films.
Which of the following factors could be the major causes of the observed discrepancy?
(1) A misalignment of the isocenter of the gantry-mounted kV
and the MV beamlines.
(2) Couch walk that causes misalignment of the couch isocenter
and radiation isocenter.
(3) Image registration error between the CBCT and planning CT.
(4) Couch sag at the setup position, since a 100-lb object was
placed on the couch inferior to the phantom to mimic patient
body weight.
(5) The film placed inside the head phantom with a small angle
relative to the CT slices.
A. (1) + (2)
B. (1) + (3)
C. (1) + (2) + (3)
D. (1) + (2) + (3) + (5)
E. All of the above
5. When treating SBRT for a mobile tumor such as lung lesions, it is
important to first align the target during patient setup and then
to consider intrafractional motion management. Which of the
following methods would introduce the largest uncertainty in
tumor alignment?
(1) Using free-breathing CT for planning with the ITV delineated
based on 4DCT and free-breathing CBCT for setup.
(2) Using respiratory-gated CT at full exhalation for both
planning and CBCT setup.
(3) Using respiratory-gated CT at full inhalation for both
planning and CBCT setup.
(4) Using motion compensated mid-ventilation CT for planning
and motion-compensated mid-ventilation CBCT for setup.
A. (1)
B. (2)
C. (3)
D. (4)
E. (1) and (3)

ANSWERS
1. D The MV image quality cannot be improved using a metal (anti-scatter) grid, as MV photons are further scattered by the grid, producing more scatter.
2. D (1) 4DCT and 4D CBCT are retrospectively reconstructed and cannot be used in real time; (2) Varian and Elekta linacs can acquire orthogonal 2DkV images only one at a time, whereas VERO can acquire orthogonal fluoroscopic images simultaneously; (3) CyberKnife uses fluoroscopic imaging for tumor tracking; (4) and (5) can be used as external and internal tumor tracking systems, respectively.
3. E After CBCT imaging, the alignment approval should be done immediately to avoid patient motion, even with an immobilization device. The orthogonal 2DkV imaging qualifies as a verification means to confirm the correctness of the CBCT alignment.
4. C The kV and MV isocenter discrepancy and couch walk are transparent to image registration but will affect the setup accuracy. Couch sag and film angle are minor factors, since they cause only rotational misalignment, which is secondary to translational misalignment.
5. C The full-inhalation phase is known to be irreproducible and
therefore unreliable for tumor alignment.

REFERENCES
1. Park SS, Yan D, McGrath S, et al. Adaptive image-guided
radiotherapy (IGRT) eliminates the risk of biochemical failure
caused by the bias of rectal distension in prostate cancer treatment
planning: clinical evidence. Int J Radiat Oncol Biol Phys.
2012;83(3):947–952.
2. Eisbruch A, Harris J, Garden AS, et al. Multi-institutional trial of
accelerated hypofractionated intensity-modulated radiation therapy
for early-stage oropharyngeal cancer (RTOG 00–22). Int J Radiat
Oncol Biol Phys. 2010;76(5):1333–1338.
3. Tucker SL, Jin H, Wei X, et al. Impact of toxicity grade and scoring
system on the relationship between mean lung dose and risk of
radiation pneumonitis in a large cohort of patients with non-small
cell lung cancer. Int J Radiat Oncol Biol Phys. 2010;77(3):691–698.
4. Leibel SA, Fuks Z, Zelefsky MJ, et al. Intensity-modulated
radiotherapy. Cancer J. 2002;8(2):164–176.
5. Suit H. The Gray Lecture 2001: coming technical advances in
radiation oncology. Int J Radiat Oncol Biol Phys. 2002;53(4):798–
809.
6. Matuszak MM, Yan D, Grills I, et al. Clinical applications of
volumetric modulated arc therapy. Int J Radiat Oncol Biol Phys.
2010;77(2):608–616.
7. Sveistrup J, af Rosenschold PM, Deasy JO, et al. Improvement in
toxicity in high risk prostate cancer patients treated with image-
guided intensity-modulated radiotherapy compared to 3D conformal
radiotherapy without daily image guidance. Radiat Oncol. 2014;9:44.
8. Mageras GS, Mechalakos J. Planning in the IGRT context: closing the
loop. Semin Radiat Oncol. 2007;17(4):268–277.
9. van Herk M. Different styles of image-guided radiotherapy. Semin
Radiat Oncol. 2007;17(4):258–267.
10. Greco C, Ling CC. Broadening the scope of image-guided
radiotherapy (IGRT). Acta Oncol. 2008;47(7):1193–1200.
11. Lagendijk JJ, Raaymakers BW, Raaijmakers AJ, et al. MRI/linac
integration. Radiother Oncol. 2008;86(1):25–29.
12. Nishio T, Miyatake A, Ogino T, et al. The development and clinical
use of a beam ON-LINE PET system mounted on a rotating gantry
port in proton therapy. Int J Radiat Oncol Biol Phys. 2010;76(1):277–
286.
13. Jarvis LA, Zhang R, Gladstone DJ, et al. Cherenkov video imaging
allows for the first visualization of radiation therapy in real time. Int
J Radiat Oncol Biol Phys. 2014;89(3):615–622.
14. Boyer AL, Antonuk L, Fenster A, et al. A review of electronic portal
imaging devices (EPIDs). Med Phys. 1992;19(1):1–16.
15. van Elmpt W, McDermott L, Nijsten S, et al. A literature review of
electronic portal imaging for radiotherapy dosimetry. Radiother
Oncol. 2008;88(3):289–309.
16. Mans A, Remeijer P, Olaciregui-Ruiz I, et al. 3D Dosimetric
verification of volumetric-modulated arc therapy by portal
dosimetry. Radiother Oncol. 2010;94(2):181–187.
17. Russo GA, Qureshi MM, Truong MT, et al. Daily orthogonal
kilovoltage imaging using a gantry-mounted on-board imaging
system results in a reduction in radiation therapy delivery errors. Int
J Radiat Oncol Biol Phys. 2012;84(3):596–601.
18. Kalender WA. Dose in x-ray computed tomography. Phys Med Biol.
2014;59(3):R129–R150.
19. Uematsu M, Fukui T, Shioda A, et al. A dual computed tomography
