
COMPUTING IN

CIVIL ENGINEERING
PROCEEDINGS OF THE 2011 ASCE INTERNATIONAL
WORKSHOP ON COMPUTING IN CIVIL ENGINEERING

June 19–22, 2011


Miami, Florida

SPONSORED BY
Technical Council on Computing and Information Technology
of the American Society of Civil Engineers

EDITED BY
Yimin Zhu, Ph.D.
R. Raymond Issa, Ph.D., J.D., P.E., F.ASCE

1801 ALEXANDER BELL DRIVE


RESTON, VIRGINIA 20191–4400
Cataloging-in-Publication Data on file with the Library of Congress.

American Society of Civil Engineers


1801 Alexander Bell Drive
Reston, Virginia, 20191-4400

www.pubs.asce.org

Any statements expressed in these materials are those of the individual authors and do not
necessarily represent the views of ASCE, which takes no responsibility for any statement
made herein. No reference made in this publication to any specific method, product,
process, or service constitutes or implies an endorsement, recommendation, or warranty
thereof by ASCE. The materials are for general information only and do not represent a
standard of ASCE, nor are they intended as a reference in purchase specifications, contracts,
regulations, statutes, or any other legal document. ASCE makes no representation or
warranty of any kind, whether express or implied, concerning the accuracy, completeness,
suitability, or utility of any information, apparatus, product, or process discussed in this
publication, and assumes no liability therefore. This information should not be used without
first securing competent advice with respect to its suitability for any general or specific
application. Anyone utilizing this information assumes all liability arising from such use,
including but not limited to infringement of any patent or patents.

ASCE and American Society of Civil Engineers—Registered in U.S. Patent and Trademark
Office.

Photocopies and permissions. Permission to photocopy or reproduce material from ASCE publications can be obtained by sending an e-mail to permissions@asce.org or by locating a title in ASCE's online database (http://cedb.asce.org) and using the "Permission to Reuse" link. Bulk reprints. Information regarding reprints of 100 or more copies is available at http://www.asce.org/reprints.

Copyright © 2011 by the American Society of Civil Engineers.


All Rights Reserved.
ISBN 978-0-7844-1182-7
Manufactured in the United States of America.
Preface

Welcome to Miami! It is our pleasure to organize the 2011 International Workshop on Computing in Civil Engineering.

This year, we received many high-quality papers. The workshop accepted over 100 papers from 19 countries in four subject areas: 1) novel engineering, construction, and management technologies; 2) design, engineering, and analysis; 3) sustainable and resilient infrastructure; and 4) cutting-edge development. These papers are the result of a rigorous peer review process that started from the more than 200 abstracts we received. Each abstract and paper was assigned to at least two reviewers, and only the outstanding papers have been collected in the proceedings. These papers are also a genuine representation of the very best research being conducted in this community.

We would like to thank the Department of Construction Management at Florida International University, the Rinker School of Building Construction at the University of Florida, and the External Program of the College of Engineering and Computing at Florida International University for their support. The buildingSmart Alliance and Autodesk have generously sponsored the workshop. The ASCE Technical Council on Computing and Information Technology committees have provided guidance and assistance to make the workshop a success. In particular, we would like to thank the many members of the Visualization, Information Modeling and Simulation (VIMS) committee, the Data Sensing and Analysis (DSA) committee, and the Education committee for their contribution to the paper review and selection process. Also, many thanks go to Mr. Victor Ceron for his hard work serving as the secretary of the workshop.

Enjoy your stay in Miami! Don’t miss the beach and the sunshine!

Yimin Zhu, Ph.D. and Raymond Issa, Ph.D., P.E., J.D., FASCE
Workshop Co-Chairs
2011 ASCE International Workshop on Computing in Civil Engineering

Acknowledgments

Organizing Committee
Raymond Issa (co-chair) Yimin Zhu (co-chair)

Technical Committee
Amr Kandil Mani Golparvar-Fard Svetlana Olbina
Baabak Ashuri Mehmet Bayraktar Wallied Orabi
Huanqing Lu Pinchao Liao Zhigang Shen
Ioannis Brilakis Salman Azhar
John Messner SangHyun Lee

International Advisory/Scientific Committee


Carlos Caldas Ian Smith Syed Ahmed
Feniosky Peña-Mora Irtishad Ahmad Vineet Kamat
Hani Melhem Lucio Soibelman William O’Brien
Ian Flood Renate Fruchter

Reviewers
Amr Kandil Ioannis Brilakis Pinchao Liao
Baabak Ashuri Ivan Mutis Qingbin Cui
Benny Raphael Javier Irizarry R. Raymond Issa
Boong Yeol Ryoo Jerry Gao Renate Fruchter
Burcin Becerik-Gerber Jesus de la Garza Salman Azhar
Burcu Akinci Jie Gong SangHyun Lee
Carlos Caldas Jochen Teizer Semiha Kiziltas
Chimay Anumba John Haymaker Sergio Scher
Don Chen John Messner Svetlana Olbina
Esin Ergen Ken-Yu Lin Tarek Mahfouz
Esther Obonyo Kihong Ku Tomasz Arciszewski
Federico Boadilla Kincho Law Vineet Kamat
Fernanda Leite Lucio Soibelman Wallied Orabi
Ghang Lee Mani Golparvar-Fard Wassim Barham
Giovanni Migliaccio Mark Shaurette Wei Wu
Guillermo Salazar Mehmet Bayraktar William O'Brien
Hani Melhem Nashwan Dawood Yacine Rezgui
Hazar Dib Nora El-Gohary Yimin Zhu
Huanqing Lu Omar El-Anwar Yung-Ching Shen
Hyunjoo Kim Omar Tatari Zhigang Shen
Ian Flood Patricio Vela
Ian Smith Patrick Hsieh

Contents

Novel Engineering, Construction, and Management Technologies


A Study of Implementation of IP-S2 Mobile Mapping Technology for Highway
Asset Condition Assessment ................................................................................................... 1
J. M. De la Garza, C. G. Howerton, and D. Sideris
An Automated Stabilization Method for Spatial-to-Structural Design
Transformations ...................................................................................................................... 9
C. D. J. Smulders and H. Hofmeyer
Determination of the Optimal Positions of Window Blinds
through Multi-Criteria Search ............................................................................................. 17
B. Raphael
Performance of Two Model-Free Data Interpretation Methods
for Continuous Monitoring of Structures under Environmental
Variations ............................................................................................................................... 25
Irwanda Laory, Thanh N. Trinh, and Ian F. C. Smith
Condition-Based Maintenance in Facilities Management ................................................. 33
Joseph Neelamkavil
Toward Sustainable Financial Innovation Policies in Infrastructure:
A Framework for Ex-Ante Analysis .................................................................................... 41
Ali Mostafavi, Dulcy Abraham, and Daniel DeLaurentis
A Multi-Objective Genetic Algorithm Approach for Optimization
of Building Energy Performance ......................................................................................... 51
Don Chen and Zhili (Jerry) Gao
Comparison of Image-Based and Manual Field Survey Methods for Indoor
As-Built Documentation Assessment ................................................................................... 59
Laura Klein, Nan Li, and Burcin Becerik-Gerber
Image-Based 3D Reconstruction and Recognition for Enhanced Highway
Condition Assessment ........................................................................................................... 67
Berk Uslu, Mani Golparvar-Fard, and Jesus M. de la Garza
Design and Evaluation of Algorithm and Deployment Parameters
for an RFID-Based Indoor Location Sensing Solution ...................................................... 77
N. Li, S. Li, B. Becerik-Gerber, and G. Calis
Impact of Ambient Temperature, Tag/Antenna Orientation, and Distance
on the Performance of Radio Frequency Identification
in Construction Industry ...................................................................................................... 85
S. Li, N. Li, G. Calis, and B. B. Gerber
Multiobjective Optimization of Advanced Shoring Systems Used
in Bridge Construction ......................................................................................................... 94
Khaled Nassar, Mohamed El Masry, and Yasmine Sherif

Application of Dimension Reduction Techniques for Motion Recognition:
Construction Worker Behavior Monitoring ..................................................................... 102
SangUk Han, SangHyun Lee, and Feniosky Peña-Mora
Civil and Environmental Engineering Challenges for Data Sensing and Analysis ........110
Gauri M. Jog, Shuai Li, Burcin Becerik Gerber, and Ioannis Brilakis
Automated 3D Structure Inference of Civil Infrastructure Using a Stereo
Camera Set............................................................................................................................118
H. Fathi, I. Brilakis, and P. Vela
Unstructured Construction Document Classification Model
through Support Vector Machine (SVM) .......................................................................... 126
Tarek Mahfouz
Automatic Look-Ahead Schedule Generation System for the Finishing Phase
of Complex Projects for General Contractors .................................................................. 134
N. Dong, M. Fischer, and Z. Haddad
Sustainable Construction Ontology Development Using Information
Retrieval Techniques ........................................................................................................... 143
Yacine Rezgui and Adam Marks
Machine Vision Enhanced Post-Earthquake Inspection ................................................. 152
Zhenhua Zhu, Stephanie German, Sara Roberts, Ioannis Brilakis,
and Reginald DesRoches
Continuous Sensing of Occupant Perception of Indoor Ambient Factors ..................... 161
Farrokh Jazizadeh, Geoffrey Kavulya, Laura Klein,
and Burcin Becerik-Gerber
Effects of Color, Distance, and Incident Angle on Quality of 3D Point Clouds ............. 169
Geoffrey Kavulya, Farrokh Jazizadeh, and Burcin Becerik-Gerber
The Effective Acquisition and Processing of 3D Photogrammetric Data
from Digital Photogrammetry for Construction Progress
Measurement ....................................................................................................................... 178
C. Kim, H. Son, and C. Kim
Data Transmission Network for Greenhouse Gas Emission Inspection ......................... 186
Qinyi Ding, Xinyuan Zhu, and Qingbin Cui
Wearable Physiological Status Monitors for Measuring and Evaluating
Workers’ Physical Strain: Preliminary Validation ........................................................... 194
Umberto C. Gatti, Giovanni C. Migliaccio, and Suzanne Schneider
A Framework for Optimizing Detour Planning and Development
around Construction Zones ................................................................................................ 202
M. Jardaneh, A. Khalafallah, A. El-Nashar, and N. Elmitiny
A Multi-Objective Decision Support System for PPP Funding Decisions ...................... 210
Morteza Farajian and Qingbin Cui
Truck Weigh-in-Motion Using Reverse Modeling and Genetic Algorithms .................. 219
G. Vala, I. Flood, and E. Obonyo
The Application of Artificial Neural Network for the Prediction
of the Deformation Performance of Hot-Mix Asphalt ..................................................... 227
Ilseok Oh and Wasim Barham

An Approach for Occlusion Detection in Construction Site Point Cloud Data ............. 234
Dennis J. Bouvier, Chris Gordon, and Matthew McDonald
Applications of Machine Learning in Pipeline Monitoring ............................................. 242
Yujie Ying, Joel Harley, James H. Garrett, Jr., Yuanwei Jin,
Irving J. Oppenheim, Jun Shi, and Lucio Soibelman
Using Electimize to Solve the Time-Cost-Tradeoff Problem
in Construction Engineering .............................................................................................. 250
Mohamed Abdel-Raheem and Ahmed Khalafallah
Vision-Based Crane Tracking for Understanding Construction Activity ...................... 258
J. Yang, P. A. Vela, J. Teizer, and Z. K. Shi
Design of Optimization Model and Program to Generate Timetables
for a Single Two-Way High Speed Rail Line under Disturbances .................................. 266
T. W. Ho, C. Y. Lin, S. M. Tseng, and C. C. Chou
Learning and Classifying Motions of Construction Workers and Equipment
Using Bag of Video Feature Words and Bayesian Learning Methods............................ 274
Jie Gong and Carlos H. Caldas
Evolutionary Software Development to Support Ethnographic Action Research ........ 282
Timo Hartmann
Determining the Benefits of an RFID-Based System for Tracking
Pre-Fabricated Components in a Supply Chain ............................................................... 291
E. Ergen, G. Demiralp, and G. Guven
Coordination of Converging Construction Equipment in Disaster Response ............... 299
Albert Y. Chen and Feniosky Peña-Mora
A Management System of Roadside Trees Using RFID and Ontology ........................... 307
Nobuyoshi Yabuki, Yuki Kikushige, and Tomohiro Fukuda
Transforming IFC-Based Building Layout Information into a Geometric
Topology Network for Indoor Navigation Assistance ...................................................... 315
S. Taneja, B. Akinci, J. H. Garrett, L. Soibelman, and B. East
Business Models for Decentralised Facility Management Supported by Radio
Frequency Identification Technology ................................................................................ 323
Z. Cong, L. Allan, and K. Menzel
Requirements for Autonomous Crane Safety Monitoring............................................... 331
Xiaowei Luo, Fernanda Leite, and William J. O’Brien
A Knowledge-Directed Information Retrieval and Management Framework
for Energy Performance Building Regulations ................................................................ 339
Lewis John McGibbney and Bimal Kumar
Novel Sensor Network Architecture for Intelligent Building Environment
Monitoring and Management ............................................................................................ 347
Qian Huang, Xiaohang Li, Mark Shaurette, and Robert F. Cox
Planning of Wireless Networks with 4D Virtual Prototyping for Construction
Site Collaboration................................................................................................................ 355
O. Koseoglu

Comparison of Camera Motion Estimation Methods for 3D Reconstruction
of Infrastructure .................................................................................................................. 363
Abbas Rashidi, Fei Dai, Ioannis Brilakis, and Patricio Vela
Multi-Image Stitching and Scene Reconstruction for Evaluating Change
Evolution in Structures ....................................................................................................... 372
Mohammad R. Jahanshahi and Sami F. Masri
Computer Vision Techniques for Worker Motion Analysis to Reduce
Musculoskeletal Disorders in Construction ...................................................................... 380
Chunxia Li and SangHyun Lee
A Novel Crack Detection Approach for Condition Assessment of Structures ............... 388
Mohammad R. Jahanshahi and Sami F. Masri
Developing an Efficient Algorithm for Balancing Mass-Haul Diagram......................... 396
Khaled Nassar, Ossama Hosney, Ebrahim A. Aly, and Hesham Osman

Design, Engineering, and Analysis


Standardization of Structural BIM.................................................................................... 405
N. Nawari
Collaborative Design of Parametric Sustainable Architecture ....................................... 413
J. C. Hubers
Developing Common Product Property Sets (SPie) ......................................................... 421
E. William East, David T. McKay, Chris Bogen, and Mark Kalin
Integration of Geotechnical Design and Analysis Processes Using
a Parametric and 3D-Model Based Approach .................................................................. 430
M. Obergrießer, T. Euringer, A. Borrmann, and E. Rank
Aspects of Model Interaction in Mechanized Tunneling ................................................. 438
K. Lehner, K. Erlemann, F. Hegemann, C. Koch, D. Hartmann, and M. König
Robust Construction Scheduling Using Discrete-Event Simulation ............................... 446
M. König
The Development of the Virtual Construction Simulator 3: An Interactive
Simulation Environment for Construction Management Education ............................. 454
Sanghoon Lee, Dragana Nikolic, John I. Messner, and Chimay J. Anumba
Preparation of Constraints for Construction Simulation ................................................ 462
Arnim Marx and Markus König
Using IFC Models for User-Directed Visualization .......................................................... 470
A. Chris Bogen and E. William East
Understanding Building Structures Using BIM Tools ..................................................... 478
N. Nawari, L. Itani, and E. Gonzalez
Efficient and Effective Quality Assessment of As-Is Building Information
Models and 3D Laser-Scanned Data ................................................................................. 486
P. Tang, E. B. Anil, B. Akinci, and D. Huber

Occlusion Handling Method for Ubiquitous Augmented Reality Using
Reality Capture Technology and GLSL ............................................................................ 494
Suyang Dong, Chen Feng, and Vineet R. Kamat
A Visual Monitoring Framework for Integrated Productivity and Carbon
Footprint Control of Construction Operations ................................................................ 504
Arsalan Heydarian and Mani Golparvar-Fard
Building Information Modeling Implementation—Current and Desired Status .......... 512
Pavan Meadati, Javier Irizarry, and Amin Akhnoukh
Simulating the Effect of Access Road Route Selection on Wind
Farm Construction .............................................................................................................. 520
Mohamed El Masry, Khaled Nassar, and Hesham Osman
Toward the Exchange of Parametric Bridge Models Using a Neutral
Data Format......................................................................................................................... 528
Yang Ji, André Borrmann, and Mathias Obergrießer
An Agent-Based Approach to Model the Effect of Occupants’ Energy Use
Characteristics in Commercial Buildings ......................................................................... 536
Elie Azar and Carol Menassa
Incorporating Social Behaviors in Egress Simulation ..................................................... 544
Mei Ling Chu, Xiaoshan Pan, and Kincho Law
3D Thermal Modeling for Existing Buildings Using Hybrid LIDAR System................ 552
Y. Cho and C. Wang
A Generalized Time-Scale Network Simulation Using Chronographic
Dynamics Relations ............................................................................................................. 560
A. Francis and E. Miresco
Automating Codes Conformance in Structural Domain ................................................. 569
Nawari O. Nawari
Benefits of Implementing Building Information Modeling for Healthcare
Facility Commissioning ...................................................................................................... 578
C. Chen, H. Y. Dib, and G. C. Lasker
A Real Time Decision Support System for Enhanced Crane Operations
in Construction and Manufacturing .................................................................................. 586
Amir Zavichi and Amir H. Behzadan
The Competencies of BIM Specialists: A Comparative Analysis
of the Literature Review and Job Ad Descriptions .......................................................... 594
M. B. Barison and E. T. Santos
Adaptive Guidance for Emergency Evacuation for Complex
Building Geometries............................................................................................................ 603
Chih-Yuan Chu
Improving the Robustness of Model Exchanges Using Product Modeling
“Concepts” for IFC Schema ................................................................................................611
Manu Venugopal, Charles Eastman, Rafael Sacks, and Jochen Teizer
Framework for an IFC-Based Tool for Implementing Design for
Deconstruction (DfD) .......................................................................................................... 619
A. Khalili and D. K. H. Chua

Temporary Facility Planning of a Construction Project Using BIM (Building
Information Modeling) ....................................................................................................... 627
Hyunjoo Kim and Hongseob Ahn
Energy Simulation System Using BIM (Building Information Modeling)..................... 635
Hyunjoo Kim and Kyle Anderson
Semantic Modeling for Automated Compliance Checking ............................................. 641
D. M. Salama and N. M. El-Gohary
Ontology-Based Standardized Web Services for Context Aware Building
Information Exchange and Updating ................................................................................ 649
J. C. P. Cheng and M. Das
IFC-Based Construction Industry Ontology and Semantic Web Services
Framework........................................................................................................................... 657
L. Zhang and R. R. A. Issa
Using Laser Scanning to Access the Accuracy of As-Built BIM ...................................... 665
B. Giel and R. R. A. Issa
BIM Facilitated Web Service for LEED Automation ...................................................... 673
Wei Wu and Raja R. A. Issa
Optimization of Construction Schedules with Discrete-Event Simulation
Using an Optimization Framework ................................................................................... 682
M. Hamm, K. Szczesny, V. V. Nguyen, and M. König
Development of 5D CAD System for Visualizing Risk Degree and Progress
Schedule for Construction Project ..................................................................................... 690
Leen-Seok Kang, Hyoun-Seok Moon, Hyeon-Seung Kim, Gwang-Yeol Choi,
and Chang-Hak Kim
Integration of Safety in Design through the Use of Building
Information Modeling ......................................................................................................... 698
Jia Qi, R. R. A. Issa, J. Hinze, and S. Olbina
A Study of Sight Area Rate Analysis Algorithm on Theater Design ............................... 706
Yeonhee Kim and Ghang Lee
Algorithm for Efficiently Extracting IFC Building Elements from an IFC
Building Model .................................................................................................................... 713
Jongsung Won and Ghang Lee

Sustainable and Resilient Infrastructure


Evaluating the Role of Healthcare Facility Information on Health Information
Technology Initiatives from a Patient Safety Perspective ................................................ 720
J. Lucas, T. Bulbul, C. J. Anumba, and J. Messner
EVMS for Nuclear Power Plant Construction: Variables for Theory
and Implementation ............................................................................................................ 728
Y. Jung, B. S. Moon, and J. Y. Kim
Evaluating Eco-Efficiency of Construction Materials: A Frontier Approach ............... 736
O. Tatari and M. Kucukvar

Analysis of Critical Parameters in the ADR Implementation Insurance Model ........... 744
Xinyi Song, Carol C. Menassa, Carlos A. Arboleda, and Feniosky Peña-Mora
Application of Latent Semantic Analysis for Conceptual Cost Estimates:
Assessment in the Construction Industry ......................................................................... 752
Tarek Mahfouz
Dynamic Life Cycle Assessment of Building Design and Retrofit Processes ................. 760
Sarah Russell-Smith and Michael Lepech
A Real Options Approach to Evaluating Investment in Solar Ready Buildings ............ 768
B. Ashuri and H. Kashani
Agile IPD Production Plans As an Engine of Process Change ........................................ 776
Renate Fruchter and Plamen Ventsislavov Ivanov
An Automated Collaborative Framework to Develop Scenarios for Slums
Upgrading Projects According to Implementation Phases
and Construction Planning................................................................. 785
O. E. Anwar and T. A. Aziz
Preparing for a New Madrid Earthquake: Accelerating and Optimizing
Temporary Housing Decisions for Shelby County, TN .................................................... 794
Omar El-Anwar, Khaled El-Rayes, and Amr Elnashai
Requirements for an Integrated Framework of Self-Managing HVAC Systems .......... 802
Xuesong Liu, Burcu Akinci, James H. Garrett, Jr., and Mario Bergés
A Web-Based Resource Management System for Damaged
Transportation Networks ................................................................................................... 810
W. Orabi
Time, Cost, and Environmental Impact Analysis on Construction Operations ............ 818
Gulbin Ozcan-Deniz, Victor Ceron, and Yimin Zhu
Learning to Appropriate a Project Social Network System Technology ........................ 826
Ivan Mutis and R. R. A. Issa
Decision Support for Building Renovation Strategies...................................................... 834
H. Yin, P. Stack, and K. Menzel
Environmental Performance Analysis of a Single Family House Using BIM ................ 842
A. A. Raheem, R. R. A. Issa, and S. Olbina

Cutting-Edge Development
Enhancing Student Learning in Structures Courses with Building
Information Modeling ......................................................................................................... 850
Wasim Barham, Pavan Meadati, and Javier Irizarry
Using Applied Cognitive Work Analysis for a Superintendent to Examine
Technology-Supported Learning Objectives in Field
Supervision Education ........................................................................................................ 858
Fernando A. Mondragon Solis and William J. O’Brien
Developing and Testing a 3D Video Game for Construction Safety Education ............. 867
Jeong Wook Son, Ken-Yu Lin, and Eddy M. Rojas

Attention and Engagement of Remote Team Members in Collaborative
Multimedia Environments .................................................................................................. 875
R. Fruchter and H. Cavallin
Teaching Design Optioneering: A Method for Multidisciplinary Design
Optimization ........................................................................................................................ 883
David Jason Gerber and Forest Flager
Synectical Building of Representation Space: A Key to Computing Education ............ 891
Sebastian Koziolek and Tomasz Arciszewski
Enhancing Construction Engineering and Management Education Using
a COnstruction INdustry Simulation (COINS) ................................................................ 899
T. M. Korman and H. Johnston
Effectiveness of Ontology-Based Online Exam Platform for Programming
Language Education ........................................................................................................... 907
Chia-Ying Lin and Chien-Cheng Chou

Indexes
Author Index........................................................................................................................ 915
Subject Index ....................................................................................................................... 919

A Study of Implementation of IP-S2 Mobile Mapping Technology for Highway
Asset Condition Assessment

J. M. De la Garza1, C. G. Howerton2, D. Sideris2

1 Vecellio Professor, Department of Civil and Environmental Engineering, Virginia Tech, 200 Patton Hall, Blacksburg, Virginia, 24061; email: chema@vt.edu
2 Former Research Assistants, Center for Highway Asset Management Programs (CHAMPS), Virginia Tech.

ABSTRACT
The national highway infrastructure is continually deteriorating and in need of reconstruction and repair. This is reflected in the poor grades the nation's highways received in the 2005 and 2009 ASCE report cards. As major arteries for the flow of goods and people in the United States, poor highways can lead to fatalities, economic distress, and frustration among motorists. Prior to performing maintenance, state DOTs
and frustration among motorists. Prior to performing maintenance, state DOTs
need to assess damages and determine what highway assets need to be repaired.
Data collection techniques have not been standardized in the United States, but
most state DOTs make extensive use of manpowered collection crews.
Manpowered crews’ data collection efforts are time consuming, costly, and
potentially unsafe. Mobile mapping enables DOTs to determine the condition and
location of assets while increasing safety for surveyors. Positioning and visual
recognition of assets is an important aspect while inspecting numerous dispersed
assets along highways. This paper presents a preliminary study of Topcon’s IP-S2
Mobile Mapping system. Two separate but interrelated projects were conducted.
The first project’s primary objectives are: (1) to measure the time it takes to collect
data using the IP-S2 method versus the traditional method; and (2) to measure the
accuracy of the data using the IP-S2 method versus the traditional method. These tests were conducted at two vehicle speeds: slow speed and highway speed.
INTRODUCTION
Maintenance plays a critical role in the condition and operation of roads.
Given that road conditions in the U.S. are getting worse (ASCE 2005; ASCE 2009),
the government must allocate funds for highway maintenance to keep highways from
becoming unserviceable. Ultimately, proper maintenance will save money and
improve citizen satisfaction. Highways need to be maintained frequently for two
reasons: to ensure the safety of those who travel them and to mitigate economic
stress that can result from road deterioration (de la Garza et al., 1998).
The Federal government understands the criticality of maintaining the
nation’s arteries for the transport of people, goods, and services. Along with
nationwide awareness of bridge maintenance following the I-35W bridge collapse in
Minnesota, the government has imposed national mandates to improve critical
highway assets such as pavement markings and traffic signs (Rasdorf et al., 2009).


Moreover, different methods have been applied over the years by most of the
country’s state DOTs to prioritize maintenance depending on the visual condition of
the highways and their assets (Bandara and Gunaratne, 2001). Most of these
methods gather data concerning the condition of the highway pavement, bridge
decks, and other essential assets. This approach generally aims to allocate funds for
maintaining specific highway assets depending on their importance and safety
concerns (Bandara and Gunaratne, 2001).
BACKGROUND
Center for Highway Asset Management ProgramS (CHAMPS)
The Commonwealth of Virginia leads the way in highway asset management with the Virginia Department of Transportation's (VDOT) performance-based road maintenance. In 2001, Virginia Tech (VT) and VDOT established the
VT-VDOT Partnership for Highway Maintenance Monitoring Program (HMMP).
Under this partnership, Virginia Tech’s CHAMPS provides VDOT with
independent assessment and ratings of Virginia’s Highways (Piñero, 2003). These
results are published in a Maintenance Rating Program (MRP) report, which VDOT
uses to assess highway conditions and is the basis for the overall performance of
maintenance contractors. In this study, Virginia Tech’s CHAMPS collected asset
condition data with assistance from Topcon (Howerton and Sideris, 2010). The
Virginia Tech Transportation Institute (VTTI) Smart Road was used as the data
collection test-bed. VTTI is Virginia Tech’s largest university-level research center
and is mainly involved with research focused on the general transportation field.
Highway Asset Management Research
Current research has focused on three areas of asset management:
performance measurement, decision-making and data collection. Elements of asset
management are closely tied together, and data collection is the bridge between
performance measurement and decision-making. Decision-making prioritization
models cannot be implemented without properly assessing the condition of the
highway assets (Durango-Cohen and Sarutipand 2006; Vanier 2001).
Because data collection is time consuming and costly, and highway department budgets are limited, innovative new approaches are needed (Bandara and Gunaratne, 2001). Advanced technology will allow agencies
to continue maintaining highways during periods of budget shortfall. Considering
the importance of inventory and location data for low-cost capital assets, new
technology must be used.
Literature agrees that compiling an inventory of assets and assessing asset
performance are critical elements of highway asset management (Hassanain et al.
2003, Rasdorf et al. 2009). Collecting baseline data on assets of a section of
highway creates inventory in a DOT’s database. From this inventory, random sites
can be selected and assessed based on predefined criteria. Photographic
documentation is a critical element involved in creating an information technology
(IT) database (Rasdorf et al., 2009). VDOT has recently initiated a pilot project
for photographic documentation for all asset failures within the Stanton South
TAMS Project (Roca, 2009).

Standard asset management techniques do not currently exist throughout the U.S. Many state DOTs use manpowered crews. In contrast, New Mexico uses an
image-based Global Positioning System (GPS) data collection system. This system
allows image processing along with GPS coordinates throughout the New Mexico
highway system (Medina et al., 2009). Several DOTs already have geographic
information system (GIS) capabilities to complement support systems for asset
management and mapping.
With the possible exception of a few state DOTs, most of the current
condition assessment and measurement processes applied throughout the U.S. make
excessive use of manpowered crews. These traditional data collection methods have
proved to be time consuming and expensive (Mizusawa and McNeil 2006; Rasdorf
et al. 2009). Research is needed to determine optimal techniques for automating data collection and for acquiring positioning data for an asset management database in ways that improve safety and efficiency.
Mobile Mapping Background
Mobile mapping is defined as collecting spatially located data on a moving
platform to augment high quality data with geo-referenced data. The technology
enables the user to electronically collect data using multiple technologies at one
time to increase accuracy. The technologies usually include GPS and digital
photography but may also include laser scanners (Tao & Li 2007).
The combination of geo-referencing and digital imaging has allowed mobile
mapping technology to move from infancy in academic environments to
commercial applications in land-based, marine, and air-borne environments (Tao
and Li, 2007). The rapid development of technology over this period has allowed
these systems to provide economical, high quality imaging and positioning data.
Improvements to current mobile mapping applications are anticipated due to
continued research in mathematical algorithms and miniaturization of technological
components (Tao and Li, 2007). Furthermore, numerous manufactures of
positioning equipment and laser equipment have joined the mobile mapping field,
and are developing technology to use for highway inspections.
Several applications of mobile mapping technology have been studied; however, the bulk of the knowledge concerns the performance, precision, and accuracy of the positioning data (Tao and Li 2007; Karimi et al. 2000). Additionally, image-based testing on road signs proved to be inconclusive because a single-camera mobile mapping system was used (Karimi et al., 2000). The side angle made it
impossible to detect the type of sign and other important data that can be used for
highway asset management.
TECHNOLOGY
Topcon’s Integrated Positioning (IP-S2) Mobile Mapping System maps
linear features to a high level of accuracy. Vehicle positions are obtained using
three redundant technologies: (1) a geospatial positioning Global Navigation
Satellite System (GNSS) receiver; (2) a vehicle positioning Inertial Measurement
Unit (IMU); and (3) a vehicle odometric positioning tracked by a Controller Area
Network (CAN bus) and external wheel encoders.

These three technologies work together to sustain a 3D position for the vehicle even in locations where satellite signals can be blocked by obstructions such as buildings, bridges, or tree lines.
The IP-S2 system, shown in Figure 1, includes three high-resolution laser scanners that cover the vehicle path at ground level and sweep the adjacent areas to a distance of 30 meters on each side. A high-resolution digital camera provides 360-degree spherical images at a rate of 15 frames per second.
Vehicle position and sensor output are integrated seamlessly into one continuous 3D data stream that can be exported. GNSS data can be post-processed for higher accuracy.

Figure 1. IP-S2 unit.

In this study, an attempt is made to determine if the vehicle-mounted system can map data at normal travel speeds for roadside feature inventories and condition assessments. If successful, safety should be increased by removing man-powered crews from the travel lanes.
SCOPE & LIMITATIONS
This study is intended as a preliminary investigation of how the Topcon IP-S2 technology compares to traditional manpowered data collection efforts, in terms of the time to collect data and the accuracy of the data. The results are intended to simulate
current practice of VDOT MRP projects. Both of the projects conducted for this
study were limited to the assessment of traditional low capital roadside assets. After
reviewing the data and literature provided through CHAMPS, seven assets typically
found on a highway system were selected. These assets are: cross pipes, pipes and
culverts (<36 ft2), paved ditches, storm drains/drop inlets, guardrail, signs and
object markers.
MACHINE CONDITION ASSESSMENT VERSUS TRADITIONAL
CONDITION ASSESSMENT
Research Design
The main objective was to assess how the IP-S2 system compares with
traditional methods of data collection. The specific objectives are to assess the system's ability to provide accurate data and to determine how long it takes to collect and analyze these data. The accuracy of the data and the time required are tested at different vehicle speeds
and compared to a control group performing traditional man-powered inspections.
Sixty percent of the Smart Road’s tenth-mile segments were randomly
selected and evaluated for each trial. Three groups were formed; each group
evaluated the data using the traditional data collection method, as well as the IP-S2
method for highway speed and slow speed runs. This led to each group conducting
three trials. The control group was sent to perform a traditional inspection of
guardrail, signs and object markers. The inspection was timed from the time the

control group began working on the Smart Road to the time all data was collected.
Data was collected using the IP-S2 system at slow speed (15-20mph) and highway
speed (60-65mph) after the traditional inspections were completed. The slow speed
simulates vehicle inspections from the shoulder, while highway speed evaluation
simulates the vehicle driving normally along the road. Each data run was timed
from the time on the Smart Road to the time a data run was completed. The data
was then post-processed using the Topcon’s Geoclean software; this time was
recorded as well.
Research Results
The primary analysis considers the time to collect, process, and analyze the data, and whether varying the IP-S2 vehicle speed changes the results. As shown in Table 1, the average time to collect and analyze the data using the traditional inspection was 59 minutes, while the averages for the IP-S2 runs using interactive and batch processing were approximately 70 minutes and 53 minutes, respectively. The traditional inspections require travel time, stopping time, and walking time; the interactive processing option with the IP-S2 required 17 min of data processing, and the batch processing option of the IP-S2 did not. With the IP-S2, assets could be located and zoomed in on to assess their condition.
As shown in Table 1, Geoclean post-processing time accounted for a major portion of the total time. If the processing time is eliminated or automated, the IP-S2 inspections could be faster than traditional ones. Traditional data collection is
slightly faster than IP-S2 based inspections, if interactive Geoclean processing time
is included.
Table 1. Average Data Collection, Processing, and Analysis Times
Activity                     | Traditional | IP-S2 (10-15 mph) | IP-S2 (60-65 mph)
Data Collection              | n/a         | 6 min             | 1 min 30 sec
Geoclean Processing          | n/a         | 17 min            | 17 min
Data Analysis                | 59 min      | 38 min            | 41 min
GNSS Static Alignment        | n/a         | 10 min            | 10 min
Total Interactive Processing | n/a         | 71 min            | 70 min
Total Batch Processing       | n/a         | 54 min            | 53 min
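
The totals in Table 1 appear to decompose as follows (inferred from the table values rather than stated in the text): the interactive totals sum data collection, Geoclean processing, data analysis, and GNSS static alignment, while the batch totals omit the 17 min of interactive Geoclean processing. For the slow run, $6 + 17 + 38 + 10 = 71$ min and $6 + 38 + 10 = 54$ min; for the highway-speed run, $1.5 + 17 + 41 + 10 \approx 70$ min and $1.5 + 41 + 10 \approx 53$ min.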

The CHAMPS QA/QC process compared the statistical significance of the conditions assessed in the field using z-statistics. A z-score was calculated for each segment based on how the groups compared to one another in assessing the assets present. A score between -1.96 and 1.96 indicated that the assessments were statistically the same at a 95% confidence level, and a score of zero indicated a perfect match. There was no statistically significant difference in the condition assessment of the assets for 67 out of 69 sites between the IP-S2 and traditional man-powered data collection methods.
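
The paper does not give the exact form of the z-statistic, so the sketch below is only one plausible reading: a standard two-proportion z-test applied to the share of assets each crew rated as passing on a segment, with the same ±1.96 acceptance band. The function name and the example counts are illustrative assumptions, not values from the study.

```python
import math

def two_proportion_z(pass_a, n_a, pass_b, n_b):
    """Pooled two-proportion z-statistic comparing the share of assets two
    crews rated as passing on the same tenth-mile segment."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    p_pool = (pass_a + pass_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1.0 - p_pool) * (1.0 / n_a + 1.0 / n_b))
    return 0.0 if se == 0.0 else (p_a - p_b) / se

# Hypothetical example: crew A passes 18 of 20 assets, crew B passes 17 of 20.
z = two_proportion_z(18, 20, 17, 20)
statistically_same = -1.96 <= z <= 1.96   # 95% confidence band used in the paper
print(f"z = {z:.2f}, statistically the same: {statistically_same}")
```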
CLARITY OF IMAGES RECEIVED
Research Design
For this experimental study three parameters were defined for the evaluation
and categorization of the data. These parameters are presented in Table 2.

Table 2. Research Parameters for Implementation Plan
Parameter           | Definition
Speed               | The clarity of the assets when the vehicle speed varies (30 mph, 50 mph, and 65 mph)
Distance            | The clarity of the assets based on the distance between the asset item and the IP-S2
Lighting Conditions | The clarity of the assets under different lighting conditions (day and night)
The experimental runs were designed according to the above parameters. The data received were then placed into three categories based on the clarity of the information conveyed. These categories are presented in Table 3.
Table 3. Levels of Clarity for This Study
Category | Description
Red      | The asset is not visible
Yellow   | The asset is visible but not in its entirety and/or the resolution of the image is not detailed enough to provide an assessment of the condition of the asset
Green    | The asset is fully in view and the resolution of the image is high enough to provide a condition assessment of the asset

Research Results
In order to research the IP-S2 unit’s recording capabilities, three different
speeds were tested: 30, 45, and 60 mph. According to the data received from these
three runs, there is no significant difference based on the speed of the vehicle. This
is attributed to the fact that the IP-S2 unit records photographic data every few
milliseconds and as such, the highway speeds do not affect the quality of the data
received.
Distance was the second of the three parameters identified for this study.
This study shows that asset condition can be assessed if the IP-S2 unit is within twelve feet of the asset. Due to asset size, certain assets could not be assessed even
within twelve feet. Considering the frame rate of photographs taken by IP-S2, the
data collected will always contain at least one frame where the asset item is within
twelve feet.
Finally, lighting conditions affect the clarity of data. Data collected at night
with no lighting provided (apart from the vehicle’s headlights) conveyed little
information for the assets depicted, while the data collected during the day can be
readily assessed.
Data could not be assessed for three reasons: distance, obstruction, and size. Asset items situated more than 40 feet from the shoulder could not be
assessed due to the range of the camera. Certain asset items found behind another
asset item could not be assessed due to the obstruction (i.e. paved ditch located
behind guardrail). Finally, asset items smaller than 2 ft by 2ft could not be assessed
due to the small relative size. The findings per asset item are presented in Table 4.
Table 4. Overall Quality of Data per Asset Item
Asset Group | Asset Item                 | Overall Clarity Level | Total Number of Asset Items | Number of Asset Items Identified & Assessed
Drainage    | Cross Pipes                | 75%  | 8  | 6
            | Pipes & Culverts (<36 ft2) | 100% | 1  | 1
            | Paved Ditches              | 0%   | 8  | 0
            | Storm Drains/Drop Inlets   | 57%  | 14 | 8
            | Traffic Signals & Signs    | 79%  | 28 | 22
            | Guardrail                  | 100% | 30 | 30
            | Object Markers             | 0%   | 23 | 0

CONCLUSIONS
Mobile mapping is used in a variety of fields from terrain modeling to
emergency management. This research was performed to assess the capabilities of
Topcon’s IP-S2 mobile mapping technology for VDOT. In particular, two projects
were completed. The first compared the speed and accuracy of surveyors to assess
certain roadside assets, using the traditional manpowered crews versus the IP-S2
technology, whereas the second evaluated the quality of the data received through
the IP-S2 in regard to the clarity of specific assets.
Under the experimental design conditions and various processing
workflows, it was determined that a timed comparison between human-based and
IP-S2 based technology was directly dependent on the processing and analysis
methods employed. With batch-mode processing, the total time to collect and analyze the data is shorter than with the traditional approach.
From the experimental data, 96% of IP-S2 runs were within a 95%
confidence level of the manually collected data. The IP-S2 cannot assess certain
failure codes of the abovementioned assets, including missing guardrail bolts,
damage to the back of guardrail components, turned signs, and missing object
markers.
Under the experimental design conditions presented in the second project,
distance and lighting conditions greatly affected the ability to assess assets, whereas the speed of the collection vehicle did not. In particular, the condition of assets
found behind the guardrail was impossible to assess due to issues with distance and
obstructions.
Finally, the IP-S2 system offers the unique ability to simulate the actual course of the vehicle. With the software provided, inspectors have the ability to stop
or rewind the photographic data to search or inspect assets. This in itself offers
numerous benefits, since the data assessment can be conducted at any given time
and the assets can be inspected as many times as desired.

ACKNOWLEDGEMENTS
The research reported in this paper was conducted at the Center for Highway Asset Management Programs (CHAMPS) and funded by the Virginia Department of Transportation (VDOT). The opinions and findings presented in this paper are those of the authors and do not necessarily represent the views of VDOT or Topcon.

REFERENCES
ASCE, (2005). “The 2005 Report Card for America’s Infrastructure.”
http://www.asce.org/reportcard/2005 (May 25 2009).
ASCE, (2009). “The 2009 Report Card for America’s Infrastructure.”
http://www.asce.org/reportcard/2009 (May 25 2009).
Bandara, N., and Gunaratne, M. (2001). “Current and Future Pavement
Maintenance Prioritization Based on Rapid Visual Condition Evaluation.” J. of
Transportation Engr., 127(2), 116-123.
De la Garza, J.M., Drew, D.R., and Chasey, A.D. (1998). “Simulating Highway
Infrastructure Management Policies”. J. of Management in Engr., 14(5), 64-72.
Durango-Cohen, P.L., and Sarutipand, P. (2006). “Coordination of Maintenance
and Rehabilitation Policies for Transportation Infrastructure.” Applications of
Advanced Technology in Transportation 2006, 213, 34.
Hassanain, M., Froese, T., and Vanier, D. (2003). “Framework Model for Asset
Maintenance Management.” J. of Performance of Constructed Facilities, 17
(1), 51-64.
Howerton, C.G. and Sideris, D. (2010). “A Study of Implementation of IP-S2
Mobile Mapping Technology for Highway Asset Condition Assessment.”
Project & Report, presented to Virginia Polytechnic Institute and State
University VA, for fulfillment of the requirements for the degree of M.S. in Civil
and Environmental Engineering.
Karimi, H., Khattak, A.J. and Hummer, J. (2000). “Evaluation of Mobile Mapping
System for Roadway Data Collection.” J. of Computing in Civil Engineering,
14(3), 168-173.
Medina, R., Haghani, A., and Harris, N. (2009). “Sampling Protocol for Condition
Assessment of Selected Assets.” J. of Transportation Engr., 127(2), 116-123.
Mizusawa, D., and McNeil, S. (2006). “The Role of Advanced Technology in Asset
Management: International Experiences.” Applications of Advanced Technology
in Transportation 2006 (AATT 2006), 213, 33.
Piñero, J.C. (2003). “A Framework for Monitoring Performance-Based Road
Maintenance.” PhD Dissertation, presented to Virginia Polytechnic Institute and
State University VA, for fulfillment of the requirements for the degree of Doctor
of Philosophy in Industrial and Systems Engineering.
Rasdorf, W., Hummer, J., Harris, E., and Sitzabee, W. (2009). “IT Issues for the
Management of High-Quantity, Low-Cost Assets.” J. of Computing in Civil
Engineering, 135(4), 183-196.
Roca, I. (2009). “Visualization of Failed Highway Assets through Geo-Coded
Pictures in Google Earth and Google Maps.” Project & Report, presented to
Virginia Polytechnic Institute and State University VA, for fulfillment of the
requirements for the degree of M.S. in Civil and Environmental Engineering.
Tao, V. and Li, J. (2007). “Advances in Mobile Mapping Technology.” London:
Taylor & Francis Group.
Vanier, D. J. (2001). “Why Industry Needs Asset Management Tools.” J. of
Computing in Civil Engineering, 15(1).
An Automated Stabilization Method for Spatial-to-Structural Design
Transformations

C. D. J. Smulders1 and H. Hofmeyer2

1 M.Sc. student, Building and Planning (B), Structural Design Group (SD), Department of Architecture, Eindhoven University of Technology (TU/e); email: c.smulders@gmail.com
2 Corresponding author, Ph.D., Associate Professor in Applied Mechanics, TU/e, B, SD, Den Dolech 2, 5612 AZ Eindhoven, The Netherlands; PH (040) 247-2203; FAX (040) 245-0328; email: h.hofmeyer@tue.nl

ABSTRACT

A spatial-structural design process can be investigated via a so-called research engine, in which a spatial design is transformed into a structural design and
vice versa. During the transformation from a spatial into a structural design, it is
necessary to end up with a stable structural model, so that a (static or dynamic)
structural analysis can be carried out. This paper presents a method to automate the
(normally intuitively carried out) stabilization process, using data on a structural
design's geometry and its instability modes. The method uses the null space and
associated null vectors of the structural stiffness matrix. Each null vector is then resolved by connecting one of the key points associated with the vector's degrees of freedom to a surrounding key point. Examples illustrate the method and demonstrate that
predefined requirements have been met.

INTRODUCTION

Engineers in the field of Architecture, Engineering and Construction (AEC) are used to solutions achieved through a creative process: not by working from a
problem towards a solution, but by an exploration of problems and solutions
simultaneously, since a newly formulated issue itself must be studied and new issues
arise in the process of finding and evaluating possible responses to the matter (Maher
2000). As such, it is generally acknowledged that there is a need for tools that
support the designer to explore a solution space and evaluate the design process
outcomes (Austin et al 2000, Camelo and Mulet 2010, Chou et al 2010, Eilouti 2009,
Isikdag and Underwood 2010, Krish 2010, Nelson et al 2009, Rafiq et al 2003, Zang
and Wang 2010).
Recently the idea of a research engine has been proposed (Hofmeyer 2007).
It has the potential to be both reflective (on the process and outcomes of
multidisciplinary design) and innovative (by exploring new design solutions). A
research engine cycle consists of four phases as shown in Figure 1(a): (1) A
transformation of a spatial design into a structural design; (2) The optimization of the
structural design; (3) The transformation of the optimized structural design into a spatial design, and (4) Finally adjusting the spatial design to comply with the properties
of the initial spatial design. During the second transformation, the finite element
method is needed, for which the structural design should be stable. Because the first
transformation step will add structural elements to the spatial design without
knowing how to build a stable structural design, it is thus necessary to include a
method that automates the stabilization of the structural system. In this paper,
instability refers to the kinematically undetermined state of a structural system for
which, due to the lack of a sufficient number of constraints (see Figure 1(b) and
1(c)), mechanisms may occur. Mechanisms represent parts of the structural system
that are able to move freely with respect to other parts. The number of unique
mechanisms is a measure of the degree of instability of the system.

Figure 1. (a) Schematic research engine, (b) instability due to lack of support,
(c) instability due to lack of elements or support.

No procedure can be found in the literature for the automated stabilization of kinematically undetermined structures (Hofmeyer and Russell 2009). This may be
caused by the fact that stabilization is a design type problem, supposedly to be solved
by the structural engineer through a creative process. Nevertheless, it is worth
mentioning research that has been carried out to illustrate the possibility of stable
systems with (theoretically) not enough constraints, thus seemingly having
instabilities (Volokh and Vilnay 1997, Kuznetsov 1988). It was concluded that the
type of problems that are appropriate for this approach, which proposes the
assumption of pre-stress to stabilize unstable structures, is limited to systems that are
subject to large deformations without this pre-stress, such as tensile structures.
A method has been developed that yields detailed information on the state of
instability of a given structure (Hofmeyer and Russell 2009). It does so by
calculating the structural stiffness matrix's null space, which is a collection of null
vectors. Each null vector represents a unique mechanism, which cannot be expressed
as an arbitrary sum of other mechanisms. The null vector lists the key points that can
move freely and the direction in which they can move (a so-called degree of freedom
(DOF)). As such, the null space can be of help in the method presented in this paper.
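
As a rough illustration of this idea (a sketch under simplifying assumptions, not the authors' implementation), the Python fragment below assembles the stiffness matrix of a small system of hinge-connected rods and extracts its null space with SciPy. Each null-space column is one independent mechanism; its non-zero entries indicate which key-point DOFs can move freely. The geometry, member properties, and function names are illustrative.

```python
# Sketch of mechanism detection via the stiffness-matrix null space
# (cf. Hofmeyer and Russell 2009); geometry and names are illustrative.
import numpy as np
from scipy.linalg import null_space

def rod_stiffness_matrix(key_points, rods, EA=1.0):
    """Global stiffness matrix for hinge-connected rods, 3 DOFs per key point."""
    K = np.zeros((3 * len(key_points), 3 * len(key_points)))
    for a, b in rods:
        d = key_points[b] - key_points[a]
        length = np.linalg.norm(d)
        u = d / length                             # unit vector along the rod
        k_axial = (EA / length) * np.outer(u, u)   # axial stiffness only (hinged ends)
        idx = np.r_[3 * a:3 * a + 3, 3 * b:3 * b + 3]
        K[np.ix_(idx, idx)] += np.block([[k_axial, -k_axial], [-k_axial, k_axial]])
    return K

# Square panel without a diagonal; key points 0 and 1 are fully supported.
key_points = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
rods = [(0, 2), (1, 3), (2, 3)]                    # two verticals and a top chord
K = rod_stiffness_matrix(key_points, rods)
fixed = range(6)                                   # all DOFs of key points 0 and 1
free = [i for i in range(K.shape[0]) if i not in fixed]
mechanisms = null_space(K[np.ix_(free, free)])     # one column per mechanism
# Expect 3 mechanisms: coupled sway of points 2 and 3 in x, plus out-of-plane
# motion of each free key point in z.
print("number of mechanisms:", mechanisms.shape[1])
```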

METHOD REQUIREMENTS

The method for automated stabilization is restricted to three-dimensional structural systems that (1) span box-shaped volumes defined by eight key points each, as shown in Figure 2; (2) are orthogonally assembled; and (3) are built up out of rods. Rods
are defined here as linear elements that are hinge connected to each other. The
orthogonal assembly refers to the structural system key points only, which means
that the rods themselves do not necessarily need to be positioned axes-aligned with
the global axes.

Figure 2. Box-shaped volume defined by 8 key points and positioned orthogonally with respect to the coordinate system.

Based on the above restrictions, the method must be effective: it must be able
to generate a solution for any possible problem within the previously defined scope.
Secondly, the method must be efficient: to stabilize a system, a minimum of
adjustments must be made to avoid unnecessary elements that hamper further design
explorations.

METHOD DESCRIPTION

In this paper, a method is presented that enables automated stabilization by adding elements to the unstable system. Since every addition is expected to reduce the degree of instability, it can be ensured that only a minimum of elements is added, which satisfies the above-mentioned requirement regarding efficiency.
The method benefits from the fact that besides the absolute coordinates (x, y, and z) of the key points, grid coordinates are used (Gx, Gy, and Gz). A grid coordinate, being a natural number, is a relative value that helps to define the location of a key point with respect to others, as shown in Figure 3(a). They also allow a convenient formulation of two further restrictions: (a) rods are not allowed to span diagonally through space (Figure 3(b)), as defined by formula (1), and (b) a rod should not span more than one grid increment (Figure 3(c)), as defined by formula (2).

Figure 3. (a) Grid coordinates; restrictions: (b) spatial diagonal, (c) span along
more than a single grid increment.
$(\Delta G_x = 0) \;\lor\; (\Delta G_y = 0) \;\lor\; (\Delta G_z = 0)$  (1)

$(|\Delta G_x| \le 1) \;\land\; (|\Delta G_y| \le 1) \;\land\; (|\Delta G_z| \le 1)$  (2)

Step 1: Selection of DOF by Null Space


A mechanism can be described by its degrees of freedom (DOFs), as shown
in figure 4. A DOF consists of two parts: a key point identifier and the direction in
which the key point is able to displace.

Figure 4. (a) Rotational mechanism, (b) DOFs: keypoint 5(x), 6(x,y), 7(y).

All possible mechanisms, each defined by their set of DOFs, are given by
finding the null space of the structural design's stiffness matrix (Hofmeyer and
Russell 2009). Using this, the method starts with the first null vector (i.e.
mechanism) and its first DOF. If no effective addition can be found for this DOF, the
method selects the next DOF. When all DOFs of a null vector have been tried
without success, the method selects the next null vector, etc. Note that because the mathematical procedure yields a sequence of mechanisms that is not related to structural engineering logic, the method inevitably selects a practically random first mechanism and DOF to solve.

Step 2: Structural Element Addition


For the key point related to the selected DOF in step 1, the method
investigates the existence of surrounding key points. Then, the DOF’s key point is
connected with a rod to the first found surrounding key point if the following
requirements are fulfilled: (a) the key points are not yet mutually connected by an
existing rod; (b) the key points fulfill the requirements of formulae (1) and (2); (c)
the orientation of the key points with respect to each other is not perpendicular to the
direction of the DOF, formula (3). The last requirement is necessary since rods,
being hinge connected, cannot resist movement perpendicular to their axis, at least
not using a mechanically first-order approach.

$(\mathrm{DOF}_{axis} = x \Rightarrow \Delta G_x \neq 0) \;\land\; (\mathrm{DOF}_{axis} = y \Rightarrow \Delta G_y \neq 0) \;\land\; (\mathrm{DOF}_{axis} = z \Rightarrow \Delta G_z \neq 0)$  (3)

The sequence in which surrounding key points are found may influence the
final solution and thus needs explanation. The search for surrounding key points
starts with a search for axes-aligned key points as shown in Figure 5.

Figure 5. Axes-aligned key points for (a) x-axis, (b) y-axis, and (c) z-axis, selected
DOF key point is at the origin.

Assume the key point from the DOF has coordinates (i,j,k), then the existence of
surrounding key points is checked using the same sequence as used in formula (4).
Note that other possibilities are excluded due to the conditions in formula (2) and (3).

$\mathrm{DOF}_{axis} = x:\ \mathrm{I} = (i-1, j, k),\ \mathrm{II} = (i+1, j, k)$
$\mathrm{DOF}_{axis} = y:\ \mathrm{I} = (i, j-1, k),\ \mathrm{II} = (i, j+1, k)$  (4)
$\mathrm{DOF}_{axis} = z:\ \mathrm{I} = (i, j, k-1),\ \mathrm{II} = (i, j, k+1)$

Secondly, diagonally oriented surrounding key points are investigated, taking into account formulae (1) to (3) and the planes shown in Figure 6. For each specific
key point, only two out of three planes are part of the search, as shown by formula
(5), due to the condition of formula (3). The existence of diagonally surrounding key
points is checked using the same sequence as used in formula (6). Note that in this
formula, for a specific DOF-axis only 2 planes are applicable, see formula (5).

$\mathrm{DOF}_{axis} = x$: relevant planes $xz,\ xy$
$\mathrm{DOF}_{axis} = y$: relevant planes $yz,\ xy$  (5)
$\mathrm{DOF}_{axis} = z$: relevant planes $xz,\ yz$

Figure 6. Diagonally oriented key points for (a) xz-plane, (b) yz-planes, and (c)
xy-plane.

plane $xz$: $1 = (i-1, j, k-1),\ 2 = (i-1, j, k+1),\ 3 = (i+1, j, k-1),\ 4 = (i+1, j, k+1)$
plane $yz$: $1 = (i, j-1, k-1),\ 2 = (i, j-1, k+1),\ 3 = (i, j+1, k-1),\ 4 = (i, j+1, k+1)$  (6)
plane $xy$: $1 = (i-1, j-1, k),\ 2 = (i-1, j+1, k),\ 3 = (i+1, j-1, k),\ 4 = (i+1, j+1, k)$

Planes xz and yz are considered before xy because for regular designs, which
have their height defined in z-direction, a vertical connection is expected to yield the
highest chance of success. If no axes-aligned or diagonally surrounding key point can
be found that is suitable to be rod connected to the DOF-keypoint, as mentioned at
the start of this section, the next DOF-keypoint will be selected for which the
procedure is repeated.
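The ordered search described by formulae (4) to (6) can be sketched as follows (a Python illustration under the same grid-coordinate restrictions; the function name and data layout are assumptions, not the paper's implementation): axes-aligned neighbours along the DOF axis are tried first, followed by the diagonal neighbours of the two relevant planes, with xz and yz before xy.

```python
def candidate_keypoints(i, j, k, dof_axis):
    """Ordered candidate grid coordinates around the DOF key point (i, j, k)."""
    axis_aligned = {                       # formula (4): candidate I, then II
        'x': [(i - 1, j, k), (i + 1, j, k)],
        'y': [(i, j - 1, k), (i, j + 1, k)],
        'z': [(i, j, k - 1), (i, j, k + 1)],
    }
    diagonal = {                           # formula (6): candidates 1..4 per plane
        'xz': [(i - 1, j, k - 1), (i - 1, j, k + 1), (i + 1, j, k - 1), (i + 1, j, k + 1)],
        'yz': [(i, j - 1, k - 1), (i, j - 1, k + 1), (i, j + 1, k - 1), (i, j + 1, k + 1)],
        'xy': [(i - 1, j - 1, k), (i - 1, j + 1, k), (i + 1, j - 1, k), (i + 1, j + 1, k)],
    }
    relevant = {'x': ['xz', 'xy'], 'y': ['yz', 'xy'], 'z': ['xz', 'yz']}   # formula (5)
    candidates = list(axis_aligned[dof_axis])
    for plane in relevant[dof_axis]:       # planes xz and yz are tried before xy
        candidates += diagonal[plane]
    return candidates
```

For key point 5 at grid coordinates (1, 1, 2) and a DOF in x, `candidate_keypoints(1, 1, 2, 'x')` starts with (0,1,2), (2,1,2), (0,1,1), (0,1,3), (2,1,1), which matches the sequence checked in Table 1 of the demonstration below.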

Step 3: Addition Check


After each rod addition, the null space is calculated for the adjusted structural design to investigate whether the rod addition affects model instability. If the number of null vectors has been reduced, so has the degree of instability and thus the system has become more stable. Failed rod additions are discarded, after which the procedure returns to step 1 (as presented in this section). Note that the success of a rod addition can only be measured by the reduction of the number of null vectors, as a reduction of the number of mechanisms can yield completely different mechanisms as a result.
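Putting the three steps together, the overall loop could be sketched as below; the model-manipulation callbacks (`assemble_stiffness`, `grid_coords`, `dofs_of`, `passes_restrictions`, `add_rod`, `remove_rod`) are hypothetical placeholders to be supplied by the caller, and `mechanisms_from_stiffness` and `candidate_keypoints` refer to the earlier sketches, so this is an outline of the procedure rather than the cited implementation.

```python
def stabilize(model, assemble_stiffness, grid_coords, dofs_of,
              passes_restrictions, add_rod, remove_rod, max_rounds=1000):
    """Sketch of the overall loop: repeat steps 1-3 until no mechanisms remain."""
    for _ in range(max_rounds):
        null_vectors = mechanisms_from_stiffness(assemble_stiffness(model))
        if len(null_vectors) == 0:
            return model                                     # stable: no mechanisms left
        added = False
        for vector in null_vectors:                          # step 1: mechanisms ...
            for keypoint, axis in dofs_of(vector):           # ... and their DOFs, in order
                i, j, k = grid_coords(model, keypoint)
                for cand in candidate_keypoints(i, j, k, axis):   # step 2: candidates
                    if not passes_restrictions(model, keypoint, cand, axis):
                        continue                             # requirements (a)-(c)
                    add_rod(model, keypoint, cand)
                    new_count = len(mechanisms_from_stiffness(assemble_stiffness(model)))
                    if new_count < len(null_vectors):        # step 3: addition check
                        added = True
                        break
                    remove_rod(model, keypoint, cand)        # failed addition is discarded
                if added:
                    break
            if added:
                break
        if not added:
            raise RuntimeError("no effective rod addition could be found")
    return model
```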

DEMONSTRATION

A problem will be solved to demonstrate the above method for automated stabilization (Figure 7). This will show that the method performs using a minimum of
additions (two additions for two mechanisms) and that it does so in a minimum of
attempts (one attempt for each addition). In figure 7(a), the original structural design
is shown. Two mechanisms exist, both related to shear movements of the side
surfaces. Following the procedure in the previous section, steps are executed as
shown in detail in table 1, resulting in the structural design in figure 7(b).

Figure 7. System before and after adjustment (addition of rod between key
points 2 and 5): (a) 2 mechanisms, vector 1 (5 and 6 in x), vector 2 (5 and 8 in y),
(b) 1 mechanism, vector 1 (5 and 8 in y).

Table 1. First rod addition.


Step 1: First null vector, first DOF: key point 5 in x-direction
Step 2: Key point 5 has grid coordinates (1,1,2), start eq. (4), then eq. 5 to 6.
eq. (4) I (0,1,2) does not exist
(dof x) II (2,1,2) equals key point 6, already connected
eq. (6) 1 (0,1,1) does not exist
(plane xz) 2 (0,1,3) does not exist
3 (2,1,1) key point 2 is candidate, not yet connected to 5
Step 3: Number of mechanisms reduced from 2 to 1, accept rod addition

Using one more addition sequence, comparable to the previous one, the structural design becomes completely stable. The method presented here has been implemented in C++, visualized via OpenGL, and applied to a variety of academic and practical complex structural designs, as shown in Figure 8.

Figure 8. C++ / OpenGL program solving a real structural design.

DISCUSSION AND CONCLUSIONS

In the second section (method requirements), requirements were mentioned regarding the method's effectiveness and efficiency. It can be shown that the method
produces effective solutions as the test case in the fourth section (demonstration)
demonstrates that a box-shaped structural system can always be made stable with the
use of rods only. Since the method can find all axis-aligned and diagonally
surrounding key points in the first grid perimeter from every other key point, it will
always be able to find all necessary connections for a given connected set of key
points, i.e. a structural design for which for every key point at least one key point
exists at a grid distance equal to 1 (axis-aligned or diagonally). Because of the
addition check (in the third section (method description)), it can also be shown that
only efficient solutions are produced, as every rod added should be effective.
It can be concluded that a method has been developed that successfully
stabilizes an unstable structure by adding elements. It can be used for the mentioned
research engine that transforms a spatial design into a structural design
automatically. Also, it can be used by structural engineers to explore stabilization solutions that were not or could not be conceived by hand. The range of problems
that the presented method can solve is limited to orthogonal systems built up from
(hinge-connected) rods. Currently, the method is being extended to rigidly connected beams and flat shell elements, which can be members of the initial design as well as being used as added elements.

REFERENCES

Austin, S., Baldwin, A., Li, B. and Waskett, P. (2000). "Analytical Design Planning
Technique (ADePT): a dependency structure matrix tool to schedule the building
design process." Construction Management and Economics 18(2), 173-182.
Camelo, D. M., and Mulet, E. (2010). "A multi-relational and interactive model for
supporting the design process in the conceptual phase." Automation in Construction
19(7), 964-974.
Chou J. S., Chen, H. M., Hou, C. C., and Lin, C.W. (2010). "Visualized EVM system for
assessing project performance.", Automation in Construction 19(5), 596-607.
Eilouti, B. H. (2009). "Design knowledge recycling using precedent-based analysis and
synthesis models." Design Studies 30(4), 340-368.
Hofmeyer, H. (2007). "Cyclic application of transformations using scales for spatially or
structurally determined design." Automation in Construction 16(1), 664-673.
Hofmeyer, H., and Russell, P. (2009). "Interaction between spatial and structural building
design: a finite element based program for the analysis of kinematically
indeterminable structural topologies." CONVR2009, Proceedings of the 9th
international conference on construction applications of virtual reality, Sydney,
Australia (November 5-6), 247-256.
Isikdag, U., and Underwood, J. (2010). "Two design patterns for facilitating Building
Information Model-based synchronous collaboration." Automation in Construction
19(5), 544-553.
Krish, S. (2010) "A practical generative design method." Computer-Aided Design, accepted,
in press.
Kuznetsov, E. N. (1988) "Underconstrained Structural Systems." International Journal of
Solids and Structures 24(2), 153-163.
Maher, M. L. (2000). "A Model of Co-evolutionary Design." Engineering with Computers
16(3-4), 195-208.
Nelson, B. A., Wilson, J. O., Rosen, D., and Yen, J. (2009). "Refined metrics for measuring
ideation effectiveness" Design Studies 30(6), 737-743.
Rafiq, M. Y., Mathews, J. D., and Bullock, G. N. (2003). "Conceptual Building Design –
Evolutionary Approach" Journal of Computing in Civil Engineering 17(3), 150-158.
Volokh, K. Y., and Vilnay, O. (1997). "'Natural', 'Kinematic' and 'Elastic' Displacements of
Underconstrained Structures" International Journal of Solids and Structures 34(8),
911-930.
Zang, W., and Wang, G. (2010). "A generative concept design model based on parallel
evolutionary strategy." CSCWD, Proceedings of the 2010 14th International
Conference on Computer Supported Cooperative Work in Design, Shanghai, China
(April 14-16), 748 - 752.
Determination of The Optimal Positions of Window Blinds
Through Multi-Criteria Search
B. Raphael1
1
Assistant Professor, Department of Building, National University of Singapore.
Email: bdgbr@nus.edu.sg

ABSTRACT

Design of building systems should consider trade-offs between conflicting objectives. Multi-criteria search and optimization techniques have been developed for
this purpose and these have been successfully applied to many design tasks. One
popular approach is Pareto optimization which generates a population of efficient
solutions. However, it cannot be used for real time control tasks because the control
task requires the identification of a single solution completely autonomously, that is,
without human intervention. An algorithm for selecting the best solution that achieves
reasonable trade-offs among all the objectives is described in this paper. The
algorithm uses two pieces of information for the selection: ordering of objectives according to their importance, and grouping of solutions according to the sensitivity
of objective function values. All the solutions that lie within the sensitivity limits of a
particular objective are considered to be equivalent. This permits further filtering of
solutions according to the importance of objectives. The new algorithm has been
applied for the real time control of window blinds. Results from a case study are
presented and the advantages of this algorithm are evaluated.

INTRODUCTION

It is quite well known that parameters that influence the Indoor Environment
Quality strongly interact with each other (Gero et al 1983, Wright et al. 2002). For
example, increasing the natural daylight in a room might increase the amount of heat
transmitted. Design of building systems should consider trade-offs between such
conflicting objectives (Diakaki et al. 2008). Multi-criteria search and optimization
techniques have been developed for this purpose and these have been successfully
applied to many design tasks.

One popular approach is Pareto optimization (Grierson 2008, Raphael and Smith, 2003a). The Pareto front can be used by engineers for visual inspection and
evaluation of possible trade offs between conflicting objectives. However, this cannot
be used in real time control because the control task requires the identification of a
single solution completely autonomously, that is, without human intervention. Pareto
approach does not provide any direct information that is useful in selecting the best
solution. Additional domain knowledge and heuristics are sometimes used to select a
single solution from the Pareto set as demonstrated in Adam and Smith (2007).


This paper presents a new algorithm called Relaxed Restricted Pareto (RR-
Pareto) for selecting a single solution that achieves a reasonable trade off among
conflicting objectives in a multi-objective optimization task. The application of the
algorithm to window blind control is presented to illustrate potential advantages.

RR-PARETO ALGORITHM

In this algorithm, the solution with the best trade-offs among all the objectives is
chosen using two pieces of information:
- Ordering of the objectives according to their importance
- The sensitivity of each objective

The sensitivity of an objective refers to the threshold which determines whether the differences in the objective function values are significant. This is based
on the observation that small differences in the value of an objective do not matter in
practical situations. All the points lying within the specified sensitivity band are
considered to be equivalent with respect to that objective. These solutions are further
filtered using other objectives.

In order to illustrate the concept of sensitivity, consider the objective of minimizing the energy consumption. The user might specify that reduction in energy
below 10% is not significant, and therefore, the sensitivity of this objective is defined
as 10%.

The algorithm starts off with a set of solutions that are generated by any
search technique, for example, PGSL (Raphael and Smith, 2003b) or Genetic
algorithms. Each solution point contains the values for all the objectives as well as the
decision variables (optimization variables). The set of solutions is sequentially
filtered according to the order of importance of objectives. At each stage of filtering,
the solution point with the best value for the current objective from among all the
points in the current set is chosen. All the points that lie outside the sensitivity band
of the chosen point are eliminated from the set. At the end of the process, one or more
points might remain in the solution set. The user is asked to choose the preferred
solution from this set or in the automatic mode, the best solution according to the
most important criterion is selected.
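A minimal sketch of this sequential filtering, assuming minimization objectives stored per solution and relative sensitivity bands (the function and field names are illustrative, and details such as the treatment of zero or negative objective values are glossed over):

```python
def rr_pareto_select(solutions, objectives, sensitivities):
    """Filter a solution set objective by objective and return a single choice.

    solutions     : list of dicts mapping objective name -> value (to be minimized)
    objectives    : objective names ordered from most to least important
    sensitivities : relative sensitivity per objective, e.g. 0.02 for 2 %
    """
    remaining = list(solutions)
    for name, sens in zip(objectives, sensitivities):
        best = min(s[name] for s in remaining)
        band = abs(best) * sens
        # points outside the sensitivity band of the best point are eliminated
        remaining = [s for s in remaining if s[name] <= best + band]
    # automatic mode: best remaining point according to the most important objective
    return min(remaining, key=lambda s: s[objectives[0]])
```

Because the filtering needs only the sensitivity values and the ordering of objectives, it can be applied to whatever solution set a search technique such as PGSL or a genetic algorithm produces.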

The algorithm is a generalized and domain-independent form of the hierarchical selection used in Adam and Smith (2007). Domain specific rules have
been replaced with the sensitivity parameter for filtering. This makes it easier to
integrate the filtering process into the optimization process and the complete Pareto
set need not be generated. It is also emphasized that the set of solutions that are
obtained through RR-Pareto filtering are not necessarily Pareto optimal since small
increases in the objective function value are ignored by the RR-Pareto algorithm.
Solution points with even small increases in the objective function values are
eliminated by the traditional Pareto filtering (in a minimization problem). While these
points may not be attractive from the point of view of the objectives that can be quantified, they may have important characteristics and may represent unique and
distant solutions in the decision variable space.

In order to understand how the algorithm works, an example is shown in Figure 1. The figure shows the lighting and solar thermal loads for various positions
of a window blind. The first curve shows the lighting load. The second curve shows
the solar thermal load. The window blind position varies from 0.1 m to 1.8 m from
the window sill and is shown on the x-axis. The blind position 0 m represents
completely closed and the position 1.8 m represents completely open. As the blind
opens, the energy required for lighting decreases, while the energy required for
cooling increases. The Pareto front for this problem is shown in Figure 2. Here the
objectives are lighting load and the thermal load. Some of the points on the Pareto
front are shown in Table 1.

Figure 1. Lighting and thermal load for various positions of a window blind.

Figure 2. Pareto front for a window blind control problem.



Table 1. Points on the Pareto Front.


Blind Position (m) Lighting (W) Solar (W)
0.1 6400 7234
0.2 6360 7580
0.51 6198 8637
0.6 6168 8936
0.73 6121 9351
1 6023 10079
1.15 5979 10295
1.24 5973 10388
1.25 5968 10397
1.26 5956 10406
1.31 5948 10452
1.32 5933 10461
1.35 5932 10488
1.38 5926 10515
1.4 5915 10531
1.5 5897 10596
1.56 5876 10744

In this example, all the blind positions up to 1.56 m are on the Pareto front.
Above this value, the lighting levels are higher than the prescribed values and there is
no reduction in lighting energy. At the same time there is an increase in the cooling
load. With pure Pareto filtering, it is not possible to select a single best blind position.

In order to illustrate the working of the algorithm, two scenarios are considered here. In the first scenario, the sensitivity of both the objectives is specified
as 2% and the lighting load has higher priority over thermal load. The best point in
Table 1 according to the objective of lighting load has a value of 5876 W. All the
points above the window blind position of 1.13 m lie within 2% of this value, and
therefore, all these points shown above are initially added to the RR-Pareto Set. The
best point according to the objective of thermal load has a value of 7234 W. However,
this point gets filtered out using the primary objective. The best point according to the
second objective in the filtered set is 10274 W. All the current points are within 2%
of this value. Therefore, at the end of filtering all these points still remain in the RR-Pareto set. In this case, the best point according to the primary objective is chosen. This results in the blinds being opened to the maximum level (1.56 m) without
exceeding the limit on maximum brightness.

In the second scenario, the sensitivity of both objectives is specified as 5%. In this case, the chosen solution will be different. All the points above the blind position
of 0.6 m are within the sensitivity of the lighting energy objective. The best point
with respect to the thermal load objective from the resulting RR-Pareto set is 8936 W.
The maximum thermal load within 5% of this value is 9383 W. Therefore, all the
points above the blind position of 0.73 m get filtered out. In this case, the solution
selected will not be the best according to the primary objective, but it will be a
compromise solution that makes trade-offs among the two objectives.

The selection process can be interpreted as follows. All the points are
equivalent with respect to the objective of minimizing lighting energy since all the
solutions lie within the sensitivity limit. However, some of the points are not good
with respect to the second objective and therefore, these are removed from the set.
The best point with respect to the primary objective is selected from the remaining set.
This point represents a good trade off between the two conflicting objectives.

In this example, it is not possible to get an improvement in lighting beyond 10%. Better lighting performance is possible only with the use of day lighting
features such as adaptively controlled light shelves (Raphael 2010).

This example illustrates how the new algorithm is able to select good
solutions without the use of arbitrary weight factors. Users are able to control the
selection of the optimal point by specifying sensitivities of objectives. These
sensitivities represent important domain knowledge and reflect the priorities of the
organization. How much increase in energy is unacceptable and what level of
increase in lighting is significant for visual comfort are really the decisions of the
facilities manager.

EMPIRICAL EVALUATION OF THE ALGORITHM

In order to evaluate the algorithm in a practical situation, a case study of an office building is taken. Two control strategies are compared. The first one uses RR-Pareto filtering to select the best blind position. The second strategy is similar to
the one that is commonly used in traditional window blind control (Guillemin and
Morel, 2001). This is based on limiting the maximum brightness in the room in order
to eliminate glare. In this strategy, the window blind is left completely open as long
as the lux values in the room are below the recommended values. If the lux values are
higher than the limiting value, the window blind is completely closed. 600 lux is
taken as the limiting value in this study.
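For comparison, the conventional strategy amounts to a simple threshold rule; a sketch (using the 0-1.8 m blind range of the earlier example and the 600 lux limit, with illustrative names) is:

```python
def blind_position_conventional(room_lux, limit_lux=600.0, fully_open=1.8):
    """Glare-limiting control: keep the blind fully open unless the room is too bright."""
    return fully_open if room_lux <= limit_lux else 0.0   # 0.0 m = completely closed
```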

In the RR-Pareto control strategy, the optimal blind positions for all the hours
of a typical summer day from 8 am to 6 pm are computed, using total energy as the
primary objective function and the lighting energy as the secondary objective.
Thermal load is computed using the energy simulation software EnergyPlus (2005)
and lighting levels are computed using the lighting simulation software Radiance
(Ward and Shakespeare 1998). For each hour, the energy consumption of the optimal
control action is computed. This energy is compared with that of the second control
strategy. The difference between these two cases gives the energy savings that can be
achieved using the integrated control strategy.

The case study involves the first floor of an office building of size 36m x 18m
in Singapore, with the longer side oriented along the north-south direction. There are
windows with controllable blinds on the east and west facades, named W1 and W3
respectively. The window height is 2.4m and the ceiling is at a height of 2.7 m from
the floor. A 3D rendering of the building using Radiance is shown in Figure 3.

Table 2 summarizes the energy computations for all the hours of the day. The
second and third columns contain the optimal blind positions determined by the
control algorithm. The fourth column gives the lighting power and the fifth column
the cooling load. The sixth column gives the total energy for the optimal blind
positions. The last column contains the energy for the second control strategy.

Table 2. Summary of energy computations for the optimal blind positions.


Hour  Blind position W1  Blind position W3  Lighting (KW)  Cooling (KW)  Total energy, integrated control (KWH)  Total energy, control strategy 2 (KWH)
8 100 63 5.52 11.09 20.57 29.5
9 49 45 5.91 11.48 21.35 30.08
10 26 39 6.06 11.73 21.75 30.47
11 100 40 5.74 12.28 21.97 30.61
12 100 78 5.45 12.59 22 30.52
13 100 100 5.31 12.67 21.95 21.95
14 85 96 5.39 12.7 22.05 30.63
15 49 18 6.02 12.26 22.24 31
16 37 21 6.09 12.13 22.19 31.21
17 41 30 5.99 11.93 21.88 30.83
18 55 76 5.61 11.48 21.05 29.97

The total savings in energy for the whole day with respect to the second
control strategy is 26.87%. A plot of the total energy for the two control strategies is
given in Figure 4.

Figure 3. Case study of an office building rendered using Radiance Lighting Simulation software.

Figure 4. Comparison of energy for two different control strategies.

It can be seen that for every hour, the optimal control results in lower energy consumption. Under the second strategy, either the fully open blinds admit too much solar gain, which raises the cooling energy, or the excessive brightness forces the blinds to be closed completely, which raises the lighting energy consumption. The only exception is at 13:00
hours, when the shading is just adequate to prevent excessive heat and light, causing a
dip in the energy consumption of the second control strategy. Only at this hour, the
external shade of 0.6 m width prevents direct sunlight from entering the room.

It should be noted that the energy savings vary depending on a number of factors such as the building orientation, position and size of windows, types of
shading provided, etc. Therefore, these results cannot be directly applied to other
cases. Empirical tests have shown that significant savings can be achieved through
the RR-PARETO control strategy in most cases.

CONCLUDING REMARKS

A new algorithm for selecting a single solution that achieves reasonable trade-
offs among multiple objectives is presented in this paper. The algorithm has been
evaluated by applying it to the task of window blind control using the case study of an
office building. In the selected example, an energy savings of 26.87% is obtained
compared to a traditional control strategy. The control algorithm has already been
applied to a number of tasks such as personalized ventilation and light shelves. The results show great potential in the application to complex multi-objective optimization problems.

ACKNOWLEDGEMENTS:

This research is supported by the Singapore Ministry of Education's AcRF Tier 1 funding and the Office of Research (ORE), NUS, through the grant R-296-
000-102-112. The author wishes to thank his collaborators in this research work, Prof.
Tham Kwok Wai and Prof. Chandra Sekhar for fruitful discussions. Hardware
installation work in the laboratory done by Mr. Tan Cheow Beng is also gratefully
acknowledged.

REFERENCES:

Adam, B., and Smith, I.F.C. (2007). "Tensegrity Active Control: Multiobjective Approach." Journal of Computing in Civil Engineering, 21(1), 3-10.

Diakaki, C., Grigoroudis, E., and Kolokotsa, D. (2008). "Towards a multi-objective optimization approach for improving energy efficiency in buildings." Energy and Buildings, 40(9), 1747-1754.

EnergyPlus 1.2.3.023 (2005). A building energy simulation program, www.energyplus.gov.

Gero, J.S., Neville, D.C., and Radford, A.D. (1983). "Energy in context: a multicriteria model for building design." Building and Environment, 18(3), 99-107.

Grierson, D.E. (2008). "Pareto multi-criteria decision making." Advanced Engineering Informatics, 22(3), 371-384.

Guillemin, A., and Morel, N. (2001). "An innovative lighting controller integrated in a self-adaptive building control system." Energy and Buildings, 33, 477-487.

Raphael, B., and Smith, I.F.C. (2003). Fundamentals of computer aided engineering, John Wiley, UK.

Raphael, B. (2010). "Active Control of Daylighting Features in Buildings." Computer-Aided Civil and Infrastructure Engineering, doi: 10.1111/j.1467-8667.2010.00692.x.

Ward, L.G., and Shakespeare, R. (1998). Rendering with Radiance: the art and science of lighting visualization, Morgan Kaufmann, San Francisco.

Wright, J.A., Loosemore, H.A., and Farmani, R. (2002). "Optimization of building thermal design and control by multi-criterion genetic algorithm." Energy and Buildings, 34(9), 959-972.
Performance of two model-free data interpretation methods for continuous
monitoring of structures under environmental variations

Irwanda Laory1, Thanh N. Trinh2 and Ian F. C. Smith3


Applied Computing and Mechanics Laboratory (IMAC), Swiss Federal Institute of
Technology Lausanne (EPFL), Station 18, CH-1015 Lausanne, Switzerland.
1
irwanda.laory@epfl.ch; 2 ngocthanh.trinh@epfl.ch; 3 ian.Smith@epfl.ch

ABSTRACT

Interpreting measurement data from continuous monitoring of civil structures for structural health monitoring (SHM) is a challenging task. This task is even more
difficult when measurement data are influenced by environmental variations, such as
temperature, wind and humidity. This paper investigates for the first time the
performance of two model-free data interpretation methods: Moving Principal
Component Analysis (MPCA) and Robust Regression Analysis (RRA) for
monitoring civil structures that are influenced by temperature. The performance of
the two methods is evaluated through two criteria: (1) damage detectability and (2)
time to detection with respect to two factors: sensor-damage location and traffic
loading intensity. Furthermore, the performance is studied in situations with and
without filtering seasonal temperature variations through the use of a moving average
filter. The study demonstrates that MPCA has higher damage detectability than RRA.
RRA, on the other hand, detects damage faster than MPCA. Filtering seasonal
temperature variations may reduce the time to detection of MPCA while the benefits
are modest for RRA. MPCA and RRA should be considered as complementary
methods for continuous monitoring of civil structures.

Keywords: Moving principal component analysis; Robust regression analysis; Damage detectability; Time to detection; Seasonal temperature variations.

INTRODUCTION

With recent advances in sensor technology, data acquisition systems and computational power, the number of structures that are monitored is growing. Thus,
large quantities of measurement data are retrieved every day and much more will be
available in the future. Extracting useful information from this data to detect damage
is a challenge for SHM. This task is even more difficult when measurement data are
influenced by environmental variations, such as temperature, wind and humidity. For
example, the thermal effects on the performance of Tamar Bridge were shown to
dominate bridge behavior (Brownjohn et al. 2009). Another study found that the
peak-to-peak strain differential due to temperature over a one-year period is more
than ten times higher than the strain due to observed maximum daily traffic (Catbas


et al. 2008). In structural health monitoring, there are typically two classes of data
interpretation methods: model-based methods and model-free methods (ASCE 2011).
Model-based data interpretation methods typically utilize measurement data to
identify models that are able to reflect the real behavior of structures (Goulet et al.
2010; Koh and Thanh 2009; Koh and Thanh 2010; Robert-Nicoud et al. 2005). Thus,
such methods involve the development and use of behaviour (physical) models to
validate the results. Nevertheless, creating such models for civil infrastructure is
often difficult and expensive, and may not always reflect real behavior due to the
presence of uncertainties in complex civil-engineering structures (Goulet et al. 2010).
Model-free data interpretation methods involve analyzing data without
behavior models (i.e. without using geometrical and material information). These
methods identify changes in time-series signals statistically. They are thus well-
suited for interpreting measurement data during continuous monitoring of structures.
Many signal-processing methods have been proposed for the application in
continuous monitoring (Hou et al. 2000; Lanata and Grosso 2006; Omenzetter and
Brownjohn 2006; Omenzetter et al. 2004; Yan et al. 2005a; Yan et al. 2005b).
Posenato et al (2010; 2008) proposed two model-free data interpretation methods,
MPCA and RRA, to detect and localize anomalous behavior for the context of civil-
engineering structures. The performance of these two methods was compared with
that of eight other methods. The studies demonstrated that MPCA and RRA perform better than other methods for anomaly detection in the presence of noise, missing data and outliers. Both methods were also observed to require low computational resources to detect anomalies, even when there were large quantities of data. In addition, they were able to adapt to changes in the condition of structures for further damage detection.
This paper investigates the performance of MPCA and RRA in terms of
damage detectability and time to detection (i.e. the time from the moment that
damage occurs to the moment it is detected). These metrics are evaluated with
respect to changes in traffic loading and the proximity of sensors to damage locations.
This paper also studies the influence of removing seasonal temperature variations on
the reduction of time to detection. A railway truss bridge in Germany is used for this
study.

MOVING PRINCIPAL COMPONENT ANALYSIS (MPCA)

MPCA employs a fixed-size window that moves along the measurement time series to track changes in its principal components in order to detect anomalies in structures. The procedure for computing principal components inside a window is described in the following steps.
Step 1. Formulate a matrix U with each column containing a measurement time series and each row corresponding to a time step (observation) of all time series.
Step 2. Move a fixed-size window along the columns of U to extract datasets at each
time step k as
$$U_k = \begin{bmatrix} u_1(t_k) & u_2(t_k) & \cdots & u_{N_s}(t_k) \\ u_1(t_{k+1}) & u_2(t_{k+1}) & \cdots & u_{N_s}(t_{k+1}) \\ \vdots & \vdots & & \vdots \\ u_1(t_{k+N_w}) & u_2(t_{k+N_w}) & \cdots & u_{N_s}(t_{k+N_w}) \end{bmatrix} \quad \text{for } k = 1, \ldots, (N_m - N_w)$$

where $N_w$ is the total number of observations within the moving window, $N_m$ is the total number of observations of a measurement time series, and $N_s$ is the number of measurement time series (sensors).
Step 3. Normalize each time series inside the active window by subtracting its mean
value.
Step 4. Compute the covariance matrix $C_k$ for all measurements inside the window as

$$C_k = \sum_{j=k}^{k+N_w} u(t_j)\, u(t_j)^{T}$$

where $u(t_j)$ is the vector of normalized measurements at time step $t_j$.

Step 5. Compute the eigenvalues $\lambda_i$ and eigenvectors $\phi_i$, called principal components, of the covariance matrix as

$$(C_k - \lambda_i I)\,\phi_i = 0 \quad \text{for } i = 1, \ldots, N_s$$

Step 6. Sort the eigenvectors with respect to the decreasing order of the eigenvalues. The first few principal components contain most of the variance of the time series, while the remaining components are dominated by measurement noise. Eigenvectors that are related to the first few eigenvalues are used for anomaly detection.
The standard deviation, $\sigma$, of the principal components in the training phase is used to define the threshold bounds, $6\sigma$. In the monitoring phase, an anomaly is identified when the value of the principal component exceeds the defined threshold bounds.
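As an illustration only (a NumPy sketch, not the authors' implementation; window handling, eigenvector sign fixing and the exact definition of the bounds are simplifying assumptions), the moving-window computation of steps 1 to 6 and the training-phase bounds might look like:

```python
import numpy as np

def mpca_components(U, n_window, n_keep=1):
    """Leading principal components of a moving window over U (N_m x N_s)."""
    n_obs, n_sensors = U.shape
    history = []
    for k in range(n_obs - n_window):
        Uk = U[k:k + n_window, :]
        Uk = Uk - Uk.mean(axis=0)                # step 3: subtract window mean
        C = Uk.T @ Uk                            # step 4: covariance matrix
        eigvals, eigvecs = np.linalg.eigh(C)     # step 5: eigenvalues / eigenvectors
        order = np.argsort(eigvals)[::-1]        # step 6: sort in decreasing order
        V = eigvecs[:, order[:n_keep]]
        # resolve the arbitrary sign of each eigenvector so it can be tracked in time
        idx = np.argmax(np.abs(V), axis=0)
        V = V * np.sign(V[idx, np.arange(n_keep)])
        history.append(V)
    return np.asarray(history)                   # (windows) x N_s x n_keep

def training_bounds(components, factor=6.0):
    """Threshold bounds from the training phase (mean +/- factor * sigma)."""
    mu, sigma = components.mean(axis=0), components.std(axis=0)
    return mu - factor * sigma, mu + factor * sigma
```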

ROBUST REGRESSION ANALYSIS (RRA)

Use of RRA for continuous monitoring of structures involves finding sensor pairs that have a high correlation, and then focusing on the correlation of these couples to detect anomalies. The correlation coefficient between measurements from two sensors $s_i$ and $s_j$ is computed and compared with a given correlation coefficient. All sensor pairs having a correlation coefficient greater than the given coefficient are selected in order to compute the robust regression line. The linear relation between $s_i$ and $s_j$ is written as

$$s'_j = a\, s_i + b$$

where $s'_j$ represents the value of $s_j$ calculated according to the linear relation, and $a$ and $b$ are the coefficients of the robust regression line estimated from measurements. These coefficients are estimated using iteratively reweighted least squares. The employment of robust regression analysis for continuous structural monitoring is carried out by observing the difference between the measurement $s_j$ and the prediction $s'_j$. Similar to MPCA, the standard deviation of this difference in the training phase is used to define the threshold bounds, $6\sigma$. In the monitoring phase, an anomaly is identified when the difference between measurement and prediction exceeds the threshold bounds.
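A hand-rolled sketch of the pairing and robust fitting (in practice a library routine such as a Huber M-estimator would be used; the weighting scheme, correlation threshold and names below are assumptions for illustration):

```python
import numpy as np

def robust_line(x, y, n_iter=20, eps=1e-9):
    """Fit y ~ a*x + b by iteratively reweighted least squares with Huber-type weights."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.ones_like(x)
    a = b = 0.0
    for _ in range(n_iter):
        A = np.column_stack([x, np.ones_like(x)])
        a, b = np.linalg.lstsq(A * w[:, None], w * y, rcond=None)[0]
        r = y - (a * x + b)                       # residuals
        scale = 1.4826 * np.median(np.abs(r)) + eps
        w = np.minimum(1.0, 1.345 * scale / (np.abs(r) + eps))   # Huber weights
    return a, b

def rra_residuals(s_i, s_j, min_corr=0.9):
    """Difference between sensor s_j and its robust-regression prediction from s_i."""
    if abs(np.corrcoef(s_i, s_j)[0, 1]) < min_corr:
        return None                               # pair is not correlated enough
    a, b = robust_line(s_i, s_j)
    return s_j - (a * s_i + b)
```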

DATA PROCESSING FOR REMOVING SEASONAL TEMPERATURE


VARIATIONS

Temperature variation generally includes seasonal and daily variations. Daily variation is the periodic change in temperature over 24 hours while seasonal
variation is the periodic change according to the seasons. The frequency of daily
variation is thus much higher than that of seasonal variation. Thus, the seasonal
temperature variation can be filtered from measured temperature data by using a
moving average filter (Smith 1997).
After filtering, the seasonal variation of the temperature ($t_s$) and of the structural response ($\varepsilon_s$) in the undamaged state are obtained. A linear relation is assumed as

$$\varepsilon_s = \alpha\, t_s$$

Given the calculated coefficient $\alpha$ and the seasonal temperature variation in subsequent years, $t'_s$, the structural response $\varepsilon_d$ due to daily temperature variation is computed as

$$\varepsilon_d = \varepsilon - (\alpha\, t'_s)$$

where $\varepsilon$ is the measurement due to daily and seasonal temperature variations in the subsequent years.
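A sketch of this two-stage procedure (a moving-average low-pass filter for the seasonal component, followed by a slope calibrated on the undamaged training data; the window length and function names are assumptions) could be:

```python
import numpy as np

def seasonal_component(signal, n_window):
    """Moving-average (low-pass) filter that keeps the slow, seasonal variation."""
    kernel = np.ones(n_window) / n_window
    return np.convolve(signal, kernel, mode="same")

def fit_alpha(response_train, temperature_train, n_window):
    """Estimate alpha in eps_s = alpha * t_s from the undamaged training phase."""
    t_s = seasonal_component(temperature_train, n_window)
    eps_s = seasonal_component(response_train, n_window)
    return float(np.dot(t_s, eps_s) / np.dot(t_s, t_s))

def daily_response(response, temperature, alpha, n_window):
    """eps_d = eps - alpha * t'_s : response with the seasonal part removed."""
    return response - alpha * seasonal_component(temperature, n_window)
```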

NUMERICAL STUDIES

Two features of the data-interpretation methods: (1) damage detectability and (2) time to detection, are evaluated with respect to changes in sensor-damage
locations (the proximity of sensors to damage locations) and traffic loading. In the
context of this paper, damage detectability is the capability to interpret measurement
data in order to identify damage and is defined as follows:
Damage detectability (%) = 100% - minimum detectable damage level (%)
where minimum detectable damage level is the smallest percentage loss of stiffness
in a member that can be detected. The influence of data processing for removing
seasonal variations on these two features is also investigated.
The study is carried out for a railway truss bridge in Germany, as shown in
Figure 1. This 80-m steel bridge is composed of two parallel trusses each having 77
steel members with an elastic modulus of 200 GPa and a density of 7870 kg/m3. A
finite element analysis for one truss of the bridge under traffic loading and
temperature variation is performed to obtain “simulated” measurements (strains) at
15 members (marked by black bars in Figure 1). Traffic loading is simulated by
applying a randomly generated vertical load (0-19 tonnes) at each node in the bottom
chords. Damage is marked as black dots and is assumed to be a loss of axial stiffness
of each truss member. Damage scenarios are used to evaluate the effects of sensor-
damage location on the damage detectability and time to detection of the two
methods. Furthermore, varying traffic loading is simulated to evaluate the effects of
traffic loading. All effects are evaluated with and without data processing for
removing seasonal temperature effects.

Figure 1. A truss structure of an 80-m railway steel bridge with sensor locations marked as black bars and damage locations marked as black dots.

Figure 2 illustrates the effects of damage locations on damage detectability and time to detection of MPCA and RRA. It is concluded that when damage occurs close
to sensors, damage detectability is highest. Figure 2 (left) indicates that, without
seasonal-variation removal and when damage is away from sensor locations, the
damage detectability of MPCA is higher than that of RRA. A possible reason for this
observation is that MPCA performs the analysis of correlation for all measurements
while RRA does so only for highly correlated measurement pairs (as discussed in
RRA section). In addition, the seasonal-variation removal has a significant effect on
the damage detectability of MPCA when compared with RRA.
As for time to detection, damage scenarios with a stiffness reduction of 60%
and a traffic level of 50% are generated. Figure 2 (right) shows that damage taking
place close to sensors is detected faster than that away from sensors. For example,
the time to detect the damage at location 1 is 23 days, and increases to 42 days for
location 2. In contrast, RRA can detect damage away from a sensor as fast as damage
at a sensor. This is likely because MPCA evaluates eigenvectors within a moving
window, whereas RRA evaluates directly the difference between a measured value
and a robust regression line for each of the measurement points. Figure 2 (right) also
indicates that for MPCA, removing seasonal variations reduces significantly the time
to detection for damage at a sensor location. For RRA, nevertheless, the effect of
seasonal-variation removal on time to detection is negligible.
Figure 3 illustrates the effects of traffic loading on the damage detectability and
the time to detection of both methods with and without seasonal-variation removal.
As shown in Figure 3 (left), the damage detectability of MPCA and RRA decreases
as traffic loading increases. It also indicates that removal of seasonal variations
reduces the damage detectability of MPCA while it does not affect that of RRA. For
example, at traffic loading level of 40%, the damage detectability of MPCA
decreases from 95% to 75% corresponding to before and after removing seasonal
variations. At the same traffic loading, it changes only 2% for RRA.
To demonstrate the effects of traffic loading on time to detection, the analyses
are performed for 60% damage at location 2. Figure 3 (right) illustrates that as the
traffic loading level increases, time to detection of MPCA increases while that of
RRA does not change. The increase of time to detection of MPCA is 40 days and
160 days with and without seasonal variations, respectively. The time to detection of
RRA, on the other hand, remains at one day for traffic loading from 20% to 80%.
Also note that for MPCA, the influence of removing seasonal variations on time to
detection is not stable. This removal reduces time to detection only for low traffic
loading level (less than 40%).

Figure 2. Damage detectability (left) and time to detection (right) at three locations
using MPCA and RRA.


Figure 3. Damage detectability (left) and time to detection (right) when using MPCA
and RRA for damage at location 2 with traffic loading levels from 20% to 100%.

CONCLUSIONS

Data interpretation for continuous monitoring of structures is a challenging task. This paper investigates, for the first time, the performance of two model-free data
interpretation methods - (1) moving principal component analysis (MPCA) and (2)
robust regression analysis (RRA) - under environmental variations. MPCA has
shown higher performance than RRA in terms of damage detectability. RRA, on the
other hand, is better than MPCA in terms of time to detection and has the advantage
of being insensitive to seasonal variations. For MPCA, removing seasonal variations
results in a trade-off between damage detectability and time to damage detection.
Finally, since these two methods are most appropriate in different contexts, they
should be considered complementary. Good damage detection strategies for
structural health monitoring could result from synergies between MPCA and RRA.

ACKNOWLEDGEMENTS

This work was partially funded by the Swiss Commission for Technology and
Innovation and the Swiss National Science Foundation (contract 200020-12638). An
extended version of this paper has been accepted for publication in Advanced
Engineering Informatics (Laory et al. 2011).

REFERENCES

ASCE. (2011). "Structural Identification of Constructed Facilities." Structural Identification Committee, American Society of Civil Engineers.
Brownjohn, J. M. W., Worden, K., Cross, E., List, D., Cole, R., and Wood, T. (2009).
"Thermal effects on performance on Tamar Bridge." The Fourth International
Conference on Structural Health Monitoring of Intelligent Infrastructure, Zurich, Switzerland, 152.
Catbas, F. N., Susoy, M., and Frangopol, D. M. (2008). "Structural health monitoring and
reliability estimation: Long span truss bridge application with environmental
monitoring data." Engineering Structures, 30(9), 2347-2359.
Goulet, J.-A., Kripakaran, P., and Smith, I. F. C. (2010). "Multimodel Structural
Performance Monitoring." Journal of Structural Engineering, 136(10), 1309-1318.
Hou, Z., Noori, M., and St. Amand, R. (2000). "Wavelet-based approach for structural
damage detection." Journal of Engineering Mechanics, 126(7), 677-683.
Koh, C. G., and Thanh, T. N. (2009). "Challenges and Strategies in Using Genetic
Algorithms for Structural Identification." Soft Computing in Civil and Structural
Engineering, B. H. V. Topping and Y. Tsompanakis, eds., Saxe-Coburg
Publications, Stirlingshire, UK, 203-226.
Koh, C. G., and Thanh, T. N. (2010). "Output-only Substructural Identification for Local
Damage Detection." The Fifth International Conference on Bridge Maintenance,
Safety and Management, Philadelphia, Pennsylvania, USA.
Lanata, F., and Grosso, A. D. (2006). "Damage detection and localization for continuous
static monitoring of structures using a proper orthogonal decomposition of signals."
Smart Materials and Structures, 15(6), 1811-1829.
Laory, I., Trinh, N., and Smith, I. F. C. (2011). "Evaluating two model-free data
interpretation methods for measurements that are influenced by temperature."
Advanced Engineering Informatics, in Press.
Omenzetter, P., and Brownjohn, J. M. W. (2006). "Application of time series analysis for
bridge monitoring." Smart Materials and Structures, 15(1), 129-138.
Omenzetter, P., Brownjohn, J. M. W., and Moyo, P. (2004). "Identification of unusual events
in multi-channel bridge monitoring data." Mechanical Systems and Signal
Processing, 18(2), 409-430.
Posenato, D., Kripakaran, P., Inaudi, D., and Smith, I. F. C. (2010). "Methodologies for
model-free data interpretation of civil engineering structures." Computers &
Structures, 88(7-8), 467-482.
Posenato, D., Lanata, F., Inaudi, D., and Smith, I. F. C. (2008). "Model-free data
interpretation for continuous monitoring of complex structures." Advanced
Engineering Informatics, 22(1), 135-144.
Robert-Nicoud, Y., Raphael, B., Burdet, O., and Smith, I. F. C. (2005). "Model Identification
of Bridges Using Measurement Data." Computer-Aided Civil and Infrastructure
Engineering, 20(2), 118-131.

Smith, S. W. (1997). The Scientist and Engineer's Guide to Digital Signal Processing,
California Technical Pub.
Yan, A. M., Kerschen, G., De Boe, P., and Golinval, J. C. (2005a). "Structural damage
diagnosis under varying environmental conditions--Part I: A linear analysis."
Mechanical Systems and Signal Processing, 19(4), 847-864.
Yan, A. M., Kerschen, G., De Boe, P., and Golinval, J. C. (2005b). "Structural damage
diagnosis under varying environmental conditions--part II: local PCA for non-linear
cases." Mechanical Systems and Signal Processing, 19(4), 865-880.
Condition-based Maintenance in Facilities Management

Joseph Neelamkavil1
1
Centre for Computer-assisted Construction Technologies, National Research Council
Canada, London, Ontario Canada N6G 4X8; email: joseph.neelamkavil@nrc.gc.ca

ABSTRACT
A facility management strategy requires that an organization’s major operational
concerns are dealt with, such as: avoiding the risk of catastrophic failures, planning
for asset maintenance and reducing the quantity of spare parts and associated
inventory costs. To bring this into further perspective, it is a well known fact that
many systems suffer increasing wear with usage and age and are subject to random
failures that are linked to the deterioration of these assets. Some examples of such
affected items can be building components, hydraulic structures, turbine blades, and
rotating equipment. In these cases, various physical deterioration processes can be
observed, such as cumulative wear, crack growth, corrosion, fatigue, and so on.

The deterioration and failures of such systems might incur safety hazards, as well as
high operational costs (due to work stoppage, delays, unplanned intervention, etc.).
To cope with this, preventive maintenance strategies are often adopted, thereby
replacing the deteriorated system before it even fails. If the deterioration of the
system, or a parameter strongly correlated with the state of that system can be directly
measured (via corrosion assessment, wear monitoring, etc.), and if the system stops
functioning when it deteriorates beyond a given threshold, then it is appropriate to
base any maintenance decisions on the actual deterioration of the system rather than
on its age. And this leads to the choice of a condition-based maintenance (CBM)
policy. CBM techniques provide an assessment of the system’s condition, based on
data collected from the system through continuous monitoring and/or via inspections.
The main intent is to determine the required maintenance plan prior to any predicted
failure. Such a strategy will contribute by minimizing maintenance costs, improving
operational safety and reducing the number of in-service system failures. This paper
will address the merits of adopting CBM strategies in Facilities Management.

THE ROLE OF MAINTENANCE IN FACILITY MANAGEMENT

Modern maintenance management strategies have evolved over time; organizations are now trying to ensure high asset reliability and availability while trying to adhere
to a limited maintenance investment. Arriving at a maintenance strategy for
individual assets, to satisfy corporate objectives and with minimum investment
remains a challenge. A number of maintenance strategies are being practiced today:


Corrective Maintenance or 'Run-to-failure': For a long time, organizations have practiced a "run to failure" maintenance strategy, which means an asset is operated
until it fails. Maintenance action, which typically involves repair or replacement, is
taken with the intention of correcting the fault. For non-critical assets, this is still
considered a good operating strategy. But failures of many assets (such as critical ones)
have far-reaching consequences. These failures can shut down entire production lines,
make buildings unusable, or can even cause accidents. It is imperative that these
types of failures are prevented.

Preventive Maintenance: A different type of maintenance strategy widely known as preventive maintenance has evolved to address some of the above problems. It
involves looking at the asset failure history, and initiating maintenance to “fix” the
asset before there is a high probability of its failing. The preventive strategy advises
that maintenance be performed more often than is absolutely necessary. And since
maintenance incurs costs in both labour and parts, this strategy can result in “over-
maintenance”. Preventive maintenance often requires that assets be taken off-line for
servicing, which in turn can incur costs due to down time and lost production.

Predictive vs. Condition-based Maintenance (CBM): Since preventive maintenance has become expensive, organizations have developed a different approach to
maintenance, one that involves continuously (or frequently) monitoring an asset’s
condition until it begins to show evidence of deteriorating performance or failure.
Maintenance is then performed in-time to prevent total failure. Compared to what
preventive maintenance can offer, the new strategy (often known as predictive or
condition-based) results in overall cost reductions, while providing better asset
availability and performance. This strategy uses real-time information on the asset’s
condition to identify when maintenance will become necessary, which may also allow
it to be deferred until it is actually needed. The terms “predictive” and “condition-
based” are sometimes used interchangeably, yet there is a distinct difference between
the two. Predictive maintenance is activated by the analysis of equipment condition
data that is gathered periodically, often manually. This contrasts with the CBM
approach in which equipment condition data is collected in a continuous manner and
analyzed in real-time. The evolution of maintenance is depicted in Figure 1.

Figure 1: Evolution of Maintenance



Condition-based maintenance can be initiated according to the state of a degrading system that is monitored through various measures that typically describe the state of
the system. Once the degradation characteristic crosses a specified threshold, action
to perform the maintenance is triggered. This means that degradation measures must
be identified that effectively relate the state of the system (or asset) to its remaining
useful life, along with a decision on a failure threshold, with the feasibility of
implementing a condition monitoring technology. Rausch (2008) has listed the most common monitoring methods being practiced today, such as process monitoring, vibration monitoring, thermography, tribology and visual inspection.
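As a toy illustration of the idea (not one of the models reviewed below, and assuming a degradation measure that increases with deterioration), a monitored measure can be checked against a threshold and crudely extrapolated to a remaining useful life; the linear trend and names here are assumptions:

```python
import numpy as np

def remaining_useful_life(times, degradation, failure_threshold):
    """Rough linear extrapolation of when the degradation measure crosses the threshold."""
    slope, intercept = np.polyfit(times, degradation, 1)
    if slope <= 0:
        return float("inf")              # no measurable deterioration trend
    t_fail = (failure_threshold - intercept) / slope
    return max(0.0, t_fail - times[-1])

def maintenance_due(degradation, warning_threshold):
    """Trigger condition-based maintenance once the latest measure crosses the warning threshold."""
    return degradation[-1] >= warning_threshold
```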

DEGRADATION MODELS FOR CONDITION-BASED MAINTENANCE

Selecting a suitable model to be used in a CBM scheme is not a trivial task. It should
be based on the ability of the model to accurately describe the degradation process
and make effective extrapolations of the component state into smart decisions related
to the maintenance. The model must ensure that the degradation phenomenon is
captured by the most realistic and practical method available for implementation.
Degradation measurements traverse downward (or upward) toward a threshold, and
the system is said to have failed at the instant when the measured value crosses a
predetermined failure threshold. There are continuous time, discrete time, continuous
state, and discrete state degradation representations. Many of the discrete state/time
methods involve Markov methods, while some of the continuous degradation models
include polynomials, cumulative damage, Brownian motion and gamma processes.

Scarf (2007) suggested a structured approach to choose an appropriate model for condition-based maintenance with the aid of a number of models. It is logical to place
condition-based maintenance in the context of a general maintenance framework for a
large complex system, one that considers all elements of the system (machines, units,
components) that have failure characteristics. But, if it is not possible to define a
failure threshold, and no condition indicator data are available at failure, and further
that a warning threshold cannot be defined, then condition-based maintenance for the
item is not feasible. In such cases, other maintenance policies should be explored –
age-based, routine inspection, operate to failure, etc.

Grall et al. (2002) have described a system that undergoes random deterioration,
while being monitored through “perfect” inspections. When the system condition
exceeds its failure level, it enters into a failed state and a corrective replacement is
carried out. When the system state is found to be greater than a critical threshold
level, the still-functioning system is considered ‘worn-out’ and a preventive replacement is performed. A low critical threshold leads to frequent preventive maintenance operations and prevents the full exploitation of the residual life of the deteriorated (still functioning) system, whereas a high critical threshold tends to keep the device working even in an advanced deterioration state, with increased risk of failure.
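A minimal sketch of this two-threshold logic, assuming perfect inspections and hypothetical threshold values (not taken from Grall et al.), is shown below.

    FAILURE_LEVEL = 20.0        # assumed deterioration level at which the system is failed
    CRITICAL_THRESHOLD = 14.0   # assumed preventive-replacement threshold (< failure level)

    def maintenance_decision(observed_level):
        """Decision rule applied at each 'perfect' inspection of the system state."""
        if observed_level >= FAILURE_LEVEL:
            return "corrective replacement"        # the system has already failed
        if observed_level >= CRITICAL_THRESHOLD:
            return "preventive replacement"        # still functioning but considered worn-out
        return "no action until the next inspection"

    for level in (5.0, 15.5, 22.0):
        print(level, "->", maintenance_decision(level))

Lowering CRITICAL_THRESHOLD in such a rule triggers preventive replacements earlier, reproducing the trade-off described above between frequent preventive actions and the risk of running a badly deteriorated system.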

Goto et al. (2008) have proposed an on-line deterioration and residual life prediction method for rotating equipment. The equipment is inspected for vibration measures and a mathematical model is created to predict the future condition of the equipment. Prior to building the deterioration model, the ‘noise’ in the vibration data caused by measurement errors is eliminated, which also improves the accuracy of the model. An on-line deterioration data management scheme is included.

In most CBM modeling approaches, the deterioration measures are inspected and
compared with a predefined threshold for maintenance decisions. Departing from this
approach, Lu et al. (2007) describe what is called a predictive CBM (PCBM) to
foretell the deterioration condition in the future. In the PCBM model the degradation
states are modeled as continuous states using a state-space model in which the state
vector includes both the degradation level and the degrading rate, both of which
influence maintenance decisions.

Many researchers have provided approaches to optimize maintenance decisions by taking into account the various characteristics of the complete system. For example, environmental conditions can affect the deterioration rate of a system; an excessive humidity level favors corrosion. Conversely, excessive deterioration of the operating system can itself alter the environment. One way of capturing the effect of the environment on an item’s life span is to randomize its failure rate function and treat it as a stochastic process. In this context, a well-known approach is the proportional hazard rate approach, which models the effect of the environment by introducing covariates into the hazard function.

Deloux et al. (2008) have described a condition-based maintenance decision framework to tackle potential variations in system deterioration, especially in the rate of deterioration. In this work, the system condition at a specific time can be summarized by a scalar variable that increases as the system deteriorates and that can be the measure of a physical parameter linked to the resistance of a structure (for example, the length of a crack).

CBM modeling of deteriorating systems with multiple different units has received comparatively little attention. Wang et al. (2009) present a novel CBM approach for multi-unit systems in which the deterioration processes of the units are modeled using continuous-time Markov chains. Segmenting the system deterioration into several
discrete states is more practical than describing the deterioration condition by a single
scalar continuous variable.

Rao and Naikan (2006) propose a condition-based preventive maintenance (CBPM) scheme for deteriorating systems; the associated model considers deterioration and random failures with minimal and major maintenance strategies. Minimal repairs are carried out after every random failure, whereas the device is replaced after the occurrence of a deterioration failure. The system undergoes random inspections to assess its condition; based on the condition of the device, a signal is prompted to do nothing, do minimal maintenance, or do major maintenance.

An important assumption implicit in many of these research works is that after each maintenance action the state of the system returns to its initial state. Shahanaghi et al. (2008) relax this assumption: after each maintenance action the system state is not fully restored, and the amount of improvement in the system state depends on the current state of the system.

INTELLIGENT CBM AND ‘P-F’ INTERVAL

According to Yam et al. (2001), intelligent systems that are used for condition-based
fault diagnosis fall into three categories - rule-based diagnostic systems, model-based
diagnostic systems and case-based diagnostic systems. Rule-based systems detect and
identify equipment faults in accordance with the rules representing the relation of
each possible fault with the corresponding condition. A model-based system uses
various mathematical, neural network and logical methods and compares the real time
monitored condition with the model of the object in order to predict the fault
behavior. Case-based systems use historical records of maintenance cases to provide an interpretation of the actual monitored conditions of the item. A record of all previous incidents and system malfunctions, along with their maintenance solutions, is stored in a computer. If a fault similar to a stored case occurs, the case-based diagnostic system picks a suitable maintenance solution from the case library.
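The retrieval step of a case-based system can be pictured as a nearest-neighbour lookup over stored condition signatures, as in the sketch below; the vibration/temperature readings and the maintenance solutions are invented purely for illustration.

    # Each stored case pairs a condition signature (vibration mm/s, temperature °C)
    # with the maintenance solution that resolved it; all values are made up.
    case_library = [
        ((8.2, 65.0), "replace bearing"),
        ((3.1, 40.0), "re-balance rotor"),
        ((5.5, 90.0), "improve lubrication"),
    ]

    def retrieve_solution(observed):
        """Return the solution of the stored case closest to the observed condition."""
        def squared_distance(signature):
            return sum((a - b) ** 2 for a, b in zip(signature, observed))
        closest_case = min(case_library, key=lambda case: squared_distance(case[0]))
        return closest_case[1]

    print(retrieve_solution((7.9, 66.0)))   # matches the 'replace bearing' case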

Figure 2: P-F interval

In a CBM scenario, the maintenance frequency is determined based on the hypothesis that most failures do not occur instantaneously. If sufficient evidence can be gathered that something is in the final stages of a failure, then it is possible to take action to prevent it from failing completely, or at least to avoid the consequences. According to Sethiya (2008), and as illustrated in Figure 2, maintenance task intervals should be determined based on the expected P-F interval. The P-F interval curve shows how a failure starts and deteriorates to the point at which it can be detected (point of potential failure "P"). Thereafter, if it is not detected and no action is taken, it continues to deteriorate - usually at an accelerating rate - until it reaches the point of functional failure (point "F"). The amount of time that elapses between the points of potential and functional failure is known as the P-F interval. To detect the
potential failure, an inspection scheme must be instituted, the interval of which must
be significantly less than the P-F interval in order to avoid reaching the threshold of
functional failure. The P-F interval can be measured in units relating to exposure to
fatigue cycles (running time, units of output, stop-start cycles, etc).
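As a worked example of this rule, the sketch below assumes a hypothetical P-F interval of six months for an item and checks that a candidate inspection interval leaves at least two inspection opportunities between the potential and functional failure points (the factor of two is a common rule of thumb, not a figure from Sethiya).

    import math

    PF_INTERVAL = 6.0          # assumed P-F interval for the item, in months
    INSPECTION_INTERVAL = 2.0  # candidate inspection interval, in months

    # Minimum number of inspections falling strictly between points P and F,
    # whatever the offset of the inspection schedule relative to point P.
    guaranteed_opportunities = math.ceil(PF_INTERVAL / INSPECTION_INTERVAL) - 1
    print("guaranteed detection opportunities:", guaranteed_opportunities)   # -> 2

    if INSPECTION_INTERVAL > PF_INTERVAL / 2:
        print("interval too long: a potential failure could go undetected before point F")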

APPLICATION TO BRIDGES AND BUILDING FACILITIES

CBM approaches found early acceptance in hardware equipment. Yet the techniques are equally applicable to assets in the form of buildings and structures, provided “P-F” curves can be constructed for these assets. One should note, however, that geographical location plays an important role in the actual deterioration rate of buildings, since facilities are affected by environmental factors such as exposure to sunlight, humidity, rain, and snowfall. This means that the same building can exhibit different degradation phenomena depending upon its location.

Markovian approaches are finding broad applications in the field of building component maintenance. Many systems employing this approach have typically been developed in the bridge structure domain, but a few applications are emerging in the domain of building maintenance. As detailed in Lacasse et al. (2008) and Talon et al. (2008), the building façade maintenance management model developed at the National Research Council of Canada adapts the principles of bridge structures maintenance to the maintenance of building facades. It permits the optimization of maintenance planning and introduces a system that permits a user to initiate building maintenance actions. The maintenance of many components is considered in the context of long-term maintenance planning. The significance of various components with respect to others in the façade system is determined first by conducting a Failure Mode, Effect and Criticality Analysis on all façade components. This approach permits the development of a component criticality index from which the relative importance of the different façade components is assigned. Optimization of various maintenance actions is considered with respect to the cost of such actions. This includes the replacement of components based on a multi-objective index, where the index provides a means of relating competing maintenance objectives.
Lounis and Vanier (1998) have presented a systematic approach for bridge
maintenance management that combines a stochastic Markovian performance
prediction model with a multi-objective optimization procedure. The main purpose
was to determine the optimal allocation of funds and prioritization of bridges for
maintenance, repair and replacement. The performance of bridges is dependent on
many factors, like the materials involved, age, environmental conditions, traffic
loading, past maintenance, etc. The Markov chain model is utilized in this system, as
it can forecast the future condition of the bridge network, thus rationalizing the need
for allocating required funds for maintenance. The stochastic modeling of the bridge
performance via a discrete Markov chain model captures the time-dependence,
uncertainty and variability associated with condition ratings, resulting from
deterioration and maintenance, and the Markovian transition probability matrix is
estimated from the condition ratings data collected during bridge inspections.
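A minimal sketch of this forecasting step is given below; the five condition states and the transition probabilities are hypothetical placeholders for the matrix that would be estimated from bridge inspection condition ratings.

    import numpy as np

    # Hypothetical one-inspection-cycle transition matrix over five condition states
    # (state 1 = best, state 5 = worst); each row sums to 1.
    P = np.array([
        [0.80, 0.20, 0.00, 0.00, 0.00],
        [0.00, 0.75, 0.25, 0.00, 0.00],
        [0.00, 0.00, 0.70, 0.30, 0.00],
        [0.00, 0.00, 0.00, 0.65, 0.35],
        [0.00, 0.00, 0.00, 0.00, 1.00],
    ])

    state_dist = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # bridge assumed in the best state at t = 0

    for cycle in range(10):            # forecast ten inspection cycles ahead
        state_dist = state_dist @ P    # p(t+1) = p(t) * P

    print("forecast condition distribution:", np.round(state_dist, 3))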

Lounis and Vanier (2000) developed a multi-objective and stochastic optimization system for the maintenance management of roofing systems that integrates stochastic condition-assessment and performance-prediction models with a multi-objective optimization approach. Its embedded model was based on in-field performance data collected during roofing inspections, considering the system and material types, environmental conditions, age, workmanship, and maintenance level. The system’s main features are: i) condition assessment of roofing components using in-field visual inspection and non-destructive testing, ii) prediction of future performance and remaining service life of building elements using stochastic Markovian models, and iii) multi-objective optimization of roofing maintenance by considering multiple conflicting objectives, namely, (a) minimization of maintenance costs, (b) maximization of network performance, and (c) minimization of risk of failure.

CONCLUSION

The condition-based maintenance (CBM) approach ensures a reduction in maintenance uncertainty, based on the needs indicated by the asset’s condition. The monitoring process involves the collection and interpretation of the relevant asset (or component) parameters for identifying deviations and changes from their normal conditions. The parameters in this context represent a set of characteristics that indicate the asset’s actual condition. Any abnormality in these characteristics indicates the occurrence of some sort of functional failure. A built-in fault diagnosis scheme can be activated by the detection of such an abnormal condition; this would recognize and analyze the symptomatic information, identify the root causes of the failure and infer the fault development trend, as well as predict the remaining life of the asset. By monitoring the operating conditions of the defective item, future key symptoms associated with the deterioration of the component can be predicted. Equipped with this kind of monitoring system, an advance alarm can be raised when the predicted value falls within an alarm band. This helps system operators take adequate actions to check the condition of the item and repair the defects prior to total failure.

REFERENCES

Deloux, E., Castanier, B. and Bérenguer, C. (2008). “Maintenance policy for a deteriorating system evolving in a stressful environment”, Proc. IMechE, Vol. 222, Part O: J. Risk and Reliability.

Goto, S., Adachi, Y., Katafuchi, S., Furue, T., Uchida, Y., Sueyoshi, M., Hatazaki
H., and Nakamura, M. (2008). “On Line Deterioration Prediction and
Residual Life Evaluation of Rotating Equipment based on Vibration
Measurement”, SICE Annual Conference, Japan.

Grall, A., Bérenguer, C. and Dieulle, L. (2002). “A condition-based maintenance policy for stochastically deteriorating systems”, Reliability Engineering & System Safety, Volume 76, Issue 2, pp. 167-180.

Lacasse, M.A., Kyle, B.R., Talon, A., Boissier, D., Hilly, T., and Abdulghani, K. (2008). “Optimization of the building maintenance management process using a Markovian model”, NRC Canada Report NRCC-51170.

Lounis, Z. and Vanier, D. J. (2000). “A Multi-objective and Stochastic System for Building Maintenance Management”, Computer-Aided Civil and Infrastructure Engineering, 15, pp. 320-329.

Lounis, Z. and Vanier, D. J. (1998). “Optimization of Bridge Maintenance Management Using Markovian Models”, International Conference on Short and Medium Span Bridges, Calgary, Alberta, Vol. 2, pp. 1045-1053.

Lu, S., Tu, Y. C. and Lu, H. (2007). “Predictive Condition-based Maintenance for Continuously Deteriorating System”, Qual. Reliab. Engng. Int., 23, pp. 71-81.

Rao, P. N. S., and Naikan, V. N. A. (2006). “An Optimization Methodology for Condition Based Minimal and Major Preventive Maintenance”, Economic Quality Control, Vol. 21, No. 1, pp. 127-141.

Rausch, M. T. (2008). “Condition Based Maintenance of a Single System under Spare Part Inventory Constraints”, M.Sc. Thesis, Wichita State University.

Scarf, P. A. (2007). “A Framework for Condition Monitoring and Condition Based Maintenance”, Quality Technology & Quantitative Management, Vol. 4, No. 2, pp. 301-312.

Sethiya, S.K. (2008). “Condition Based Maintenance (CBM)”, Secy. to CME/WCR/JBP.

Shahanaghi, K., Babaei, H., Bakhsha, A., and Fard, N. S. (2008) “A new
condition based maintenance model with random improvements on the
system after maintenance actions: Optimizing by Monte Carlo simulation”,
World Journal of Modelling and Simulation, Vol. 4 No. 3, pp. 230-236.

Talon, A., Boissier, D., Hans, J., Lacasse, M.A., Chorier, J, (2008) “FMECA and
Management of Building Components”. National Research Council Canada
Report NRCC-51168.

Wang, L., Zheng, E., Li, Y., Wang, B., and Wu, J. (2009). “Maintenance
Optimization of Generating Equipment based on a Condition-based
Maintenance Policy for Multi-unit Systems”, CCDC – IEEE 2009.

Yam, R. C. M., Tse, P. W., Li, L., and Tu, P. (2001). “Intelligent Predictive Decision Support System for Condition-Based Maintenance”, Int. Journal of Advanced Manufacturing Technology, 17, pp. 383-391.
TOWARDS SUSTAINABLE FINANCIAL INNOVATION POLICIES IN
INFRASTRUCTURE: A FRAMEWORK FOR EX-ANTE ANALYSIS
Ali Mostafavi1, Dulcy Abraham2, and Daniel DeLaurentis3
1 Ph.D. Candidate and Research Assistant, School of Civil Engineering, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47909-2051, USA; Phone 765/543-4036; FAX 765/494-0644; amostafa@purdue.edu.
2 Professor, School of Civil Engineering, Purdue University, 550 Stadium Mall Dr., West Lafayette, IN 47907-2051, USA; Phone 765/494-2239; FAX 765/494-0644; dulcy@purdue.edu.
3 Associate Professor, School of Aeronautics and Astronautics, Purdue University, 701 W. Stadium Ave., West Lafayette, IN 47907-2045, USA; Phone 765/494-0694; ddelaure@purdue.edu.

ABSTRACT
Innovative financing emerged to complement traditional financing structures in
closing the gap for infrastructure financing. The key to sustainable financial
innovations is policy analysis. The objective of this paper is to create and test an ex-
ante policy assessment model as a part of a System of Systems analysis framework to
assist policy-makers in examining innovative financing alternatives. A hybrid Agent-Based/System Dynamics model is created to perform the following: 1) capture the emergent dynamics of private investment in infrastructure by simulating the activities and institutions of the players at a micro level, and 2) analyze the determinants of
financial innovation at a macro level. The significant parameters and variables for
policy-making are identified through Meta-modeling. The application of the
methodology and its implications are then discussed using a hypothetical case. This
study illustrates the potential for the methodology to be used and tested for ex-ante
analysis in innovative financing policy-making in infrastructure systems.

INTRODUCTION
Infrastructure is a key driver of economic development. According to Levine (1997),
the importance of infrastructure for economic growth and public welfare on one hand
and the centrality of financial systems in economic growth on the other hand, as well
as the ever changing local, political, economic, social, and technological environment
and emerging globalization, raises the importance of financial innovation in
infrastructure projects. The key to sustainable financial innovation is sound policy-
making. The questions to be answered in order to make effective policies include but
are not limited to the following (Mostafavi et al. 2010): 1) What are the organizations
engaged in innovative financing? 2) What are the activities and institutional rules
affecting the development and the diffusion of innovative financing tools? The
objective of this paper is to create and test a systemic methodology for assessment of
innovative financing policies. The analysis includes: 1) creation of a hybrid Agent-
Based/System Dynamics model to facilitate implementation of ex-ante simulation
experimentation regarding the dynamics of investment in infrastructure, and 2)

creation of a meta-model to identify significant factors affecting investment in infrastructure.

BACKGROUND INFORMATION
Policy analysis tools can be divided into two categories of techniques: ex-post and ex-
ante. Ex-post analysis tools consider the previously observed system behavior and
identify the significant underlying factors that trigger the search for a “best” solution
for a specific scenario. Despite their robustness in static policy analysis, ex-post
analysis tools, such as game theory and statistical decision theory have not been
successful for problems “where complexity and adaptation are central” (Bankes
2002).
The limitations of these methods in capturing the complexity of public policy analysis and management have been recognized (Bankes 2002). Lempert (2002), Bankes (2002), and Kim and Lee (2007) discuss the shortcomings of ex-post models (e.g., statistical models) for dynamic policy analysis. Such models do not capture the complexity of policy problems, competing values, emergent behaviors, interdependencies, and
uncertainties (Pfeffer and Salancik, 2003; Mostafavi et al., 2011a). These issues can
be addressed using complex systems simulation models which facilitate
understanding the probable macro patterns of a system based on the micro behaviors
of adaptive components. Such models (so-called ex-ante analysis) facilitate
considering various probabilities and possibilities to provide a set of “robust”
solutions across different parameter values, scenarios, and model representations
(Bankes, 2002). Table 1 summarizes the traits of ex-post versus ex-ante policy
analysis. Based on the traits of the policy problem, the appropriate analysis tool can
be selected.

Table 1 - Traits of ex-post versus ex-ante policy analysis

Trait                            | Ex-post Policy Analysis                           | Ex-ante Policy Analysis
Policy problem                   | Static                                            | Dynamic
Analysis goal                    | Best solution for a scenario                      | Set of robust solutions across different scenarios
Level of uncertainty in analysis | None                                              | High uncertainty
Traits of system                 | Non-adaptive                                      | Adaptive
Behavior of system               | Behavior of the system resides in its components  | Emergent behaviors exist (behavior of the system does not reside in its components)
Focus of analysis                | General law/causality                             | Probabilities and possibilities
Modeling objective               | To predict/to optimize                            | To understand
Examples of modeling tools       | Game theory and statistical tools                 | Agent-based modeling

METHODOLOGY USED IN THIS STUDY


Innovative financing systems for infrastructure include the activities of and
interactions between the various players such as the federal government, local
agencies, private organizations, and the public. These players are managerially and
operationally independent and adapt new behaviors as they learn from their
environment over time. Thus, the assessment of the activities and interactions of the
players for policy analysis requires complex system simulation (ex-ante analysis).
Analysis of policies (such as innovation policies) using complex systems simulation
requires a theoretical framework (DeLaurentis and Callaway 2004). Mostafavi et al.
(2011a) proposed a theoretical framework called the Innovation System of Systems (I-
SoS) for such analysis. The three dimensions of analysis in the I-SoS framework
(definition, abstraction, and implementation) are discussed in this paper to investigate
the dynamics of investment in infrastructure to be used for innovative financing
policy-making. Further details regarding the components of the I-SoS framework can
be found in Mostafavi et al. (2011a).
Definition: The analysis begins with the definition phase. The context of the analysis
includes assessment of private investment in infrastructure. The category of
innovative financing policy that is considered in this paper includes those policies
which facilitate private investment in infrastructure. The levels of analysis include
sub-national (local), national, and global levels, which means that the players,
interactions, and factors within and across these levels are considered. The barriers in
the analysis include the heterogeneity of the players and the activities within and
across different levels of analysis, which adds to the complexity of the analysis.
Abstraction: The abstraction phase includes identification of the players, institutions
(norms and practices), activities, networks, and resources within and across the
different levels of analysis (sub-national, national, and global). Mostafavi et al. (2011b) identified these elements using a case-based research approach. Constructs
regarding the activities and institutions of different players were explored to be used
as rules in creating the simulation model in the implementation phase. Please see
Mostafavi et al. (2011b) for further details.
Implementation: The implementation phase includes modeling methods, objects,
data, and classifications. The first step in the modeling phase is to identify the
appropriate modeling method. Application of modeling tools depends on the level of
abstraction and level of aggregation in the modeling problem at hand. In the case of
systemic assessment of private investment in infrastructure, the level of abstraction is
at the micro level. The level of abstraction is the level of complexity (detail) by
which a system is assessed. The level of aggregation is the level at which the
aggregate emergent outcome of the players' activities and interactions is considered.
The level of aggregation is national, which means that the aggregate outcome of the
activities and interactions among the different players and factors is considered at the
national level (e.g., the amount of infrastructure investment at the national level is the
result of activities and interactions among different players and factors within and
across different levels). The most appropriate modeling tools for such analysis are the
Agent-Based Model (ABM) and System Dynamics (SD). ABM is capable of micro-
modeling the emergent behavior of a system that consists of managerially and
operationally independent players (Bonabeau, 2002; Sanchez and Lucas, 2002; Macal
and North, 2005). ABM is gaining popularity as a standard tool for policy analysis
(Bankes, 2002; Macy and Willer, 2002; and Kim and Lee, 2007). System Dynamics
is useful for understanding the behavior of complex systems and the effects of causal
factors over time (Sterman, 2001). Concurrent use of ABM and SD takes advantage
of the capabilities of both modeling tools to simulate players' activities in conjunction with the important driving and inhibiting factors.
The hybrid ABM/SD computer model for simulating the dynamics of private
investments in infrastructure is created using ANYLOGIC 6.5.1. The agents in the
model include Traditional Investors, New Investors, and General Public. The reason
for defining two different classes of agents to represent private investors is that
Traditional Investors have different objectives, norms and practices, and activities
compared to New Investors. Each agent is simulated in the computer model as a Java
class. Traditional Investors, New Investors, and Public Java classes of objects are
defined in the model to represent the agents’ properties. The model also includes
another active object class called Infrastructure, which is not an agent. The purpose
of considering an Infrastructure object class is to facilitate the aggregation of the
outcomes of other object classes. This active object class has a SD model.
Traditional Investors Object Class: A preview of this class is shown in Figure 1a.
This active object class encompasses a state chart, the parameters, and the variables
structured to define the beliefs, knowledge, and information (BKI) of the object.
There are four states for this object class: Potential, Motivated, Active, and
Withdrawn. At the beginning, all objects of this class are in the Potential Investor
state. Some objects change their state directly to the Active Investor state, while others require a signal of successful investments to become a Motivated Investor. Objects
whose active state is Active Investor might experience unsuccessful investments and
might withdraw. In such situations, the object Investor, whose state is Withdrawn
from investing in infrastructure, sends a signal to Potential Investors not to invest in
infrastructure. The state of the objects changes based on the equations, parameters,
and variables defined in the transition between the states. Transitions between states
are triggered by the rate, the condition, or the message. Type 1 transitions are
triggered by the rate (e.g., Investment Rate), type II are triggered by the message (e.g.,
unsuccessful investment message sent to Potential Investors), and type III are
triggered by the condition (e.g., unsuccessful investment condition), as shown in
Figure 1a.

Figure 1 – (a) Preview of Traditional Investors object class in the model; (b) Preview of New
Investor active object class in the model
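A simplified Python stand-in for the TraditionalInvestors state chart is sketched below (in the actual model the agents are ANYLOGIC Java classes); the rate and success-probability values are placeholders, and the discouraging signal sent by Withdrawn investors is omitted for brevity. The sketch only illustrates how the rate-, message-, and condition-triggered transition types interact.

    import random

    class TraditionalInvestor:
        """Simplified stand-in for the Traditional Investors state chart."""

        def __init__(self):
            self.state = "Potential"

        def step(self, investment_rate, success_probability):
            """Advance one time step; returns a message to broadcast, if any."""
            if self.state in ("Potential", "Motivated"):
                if random.random() < investment_rate:       # type I: rate-triggered
                    self.state = "Active"
            elif self.state == "Active":
                if random.random() > success_probability:   # type III: condition-triggered
                    self.state = "Withdrawn"
                    return "unsuccessful"                    # type II message (not handled here)
                return "successful"
            return None

        def receive(self, message):
            if self.state == "Potential" and message == "successful":
                self.state = "Motivated"                     # type II: message-triggered

    investors = [TraditionalInvestor() for _ in range(100)]
    for year in range(10):
        messages = [m for m in (inv.step(0.1, 0.85) for inv in investors) if m]
        for inv in investors:
            for msg in messages:
                inv.receive(msg)
    print(sum(inv.state == "Active" for inv in investors), "active investors after 10 steps")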

NewInvestors Object Class: A preview of this object is shown in Figure 1b. This
active object class encompasses a state chart, parameters, and variables structured to
define the BKI of the object. There are three states for this object class:
GlobalInvestor, PotentialInvestor, and InfrastructureInvestor. At the beginning, all
the objects of this class are in the GlobalInvestor state, which indicates that the
investors are investing in sectors other than infrastructure. These agents start
considering investing in infrastructure upon receiving signals of successful
investments by TraditionalInvestors and NewInvestors who have already begun
investing in infrastructure. Then, the agents of this object class change their state to
PotentialInvestor state, at which time the active state of the agents changes to
InfrastructureInvestor with a rate that is equal to the InvestmentRate variable. The
InvestmentRate variable type is double, which is calculated in the Traditional
Investors active object class. In the state chart, type I transitions are triggered by the
rate (e.g., InvestmentRate) and type II are triggered by the message (e.g., successful
investment message sent to Global Investors) as shown in Figure 1b. The arrow
inside a state signifies sending a message.
Public Object Class: This active object class encompasses an action chart which
includes a decision and two actions. The decision determines which action will be
taken. The condition for the decision is whether the level of Need for infrastructure
investment is higher than a pre-set value. If the need is higher than the specific value,
the decision leads to Support action; otherwise, it leads to Object action. The effect of
public support or objection is reflected in the probability of successful investment
which affects the InvestmentRate variable.
Infrastructure Object Class: A preview of this class is shown in Figure 2. The
variables in the object include:
 FinancingCapacity variable (Eq.1): This variable determines the monetary value of
the annual capacity within the budget of a public agency to finance infrastructure
through either traditional systems of pay-as-you-go or borrowing (e.g., bonds) plus
the innovative pay-as-you-go capacity of a public agency.

FinancingCapacity = Financing capacity facilitated by traditional financing mechanisms + Financing capacity facilitated by innovative financing mechanisms (1)

 Flow variable (Eq.2): This variable determines the rate of flow between the
NeededInfrastructure and FinancedInfrastructure stock variables. The Flow and
Need (Eq.3) variables are calculated using the following formulas:

Flow = FinancingCapacity + Private * m + NewPrivate * m (2)

Need = NeededInfrastructure - FinancedInfrastructure (3)

In the Flow formula, m is embedded to calculate the currency value of infrastructure financed by private investors and also to account for the fact that projects financed in that manner are usually large, with higher monetary values than projects financed by public agencies.

Figure 2 - Preview of Infrastructure active object class in the model

 Private variable: This variable refers to the number of Traditional Investors whose
active state becomes ActiveInvestor at each time step. This variable is calculated by
counting the agents in the Traditional Investor object class.
 NewPrivate variable: This variable refers to the number of NewInvestors whose
active state becomes InfrastructureInvestor at each time step. This variable is
calculated by counting the agents in the New Investor object class.
 NeededInfrastructure stock variable: This variable refers to the stock of
infrastructure projects that need a financing source. This variable is calculated using
Eq. 4. The initial value of the stock variable is an input entered at the time the policy
analysis is implemented. Another component of this stock variable is the annual rate
of growth for the needed infrastructure, which is an input at the time of policy
analysis.

NeededInfrastructure = initial value + ∫ (annual growth rate of needed infrastructure) dt (4)

 FinancedInfrastructure stock variable: This variable represents the stock of financed infrastructure projects. The variable is calculated using Eq. 5, in which the initial value equals zero at the beginning of the analysis and the rate of financing projects is equal to the Flow variable (a numerical sketch of Eqs. (1)-(5) follows this list).

FinancedInfrastructure = initial value + ∫ Flow dt (5)
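The sketch below, in Python, steps Eqs. (1)-(5) through ten yearly periods using the illustrative inputs of Table 2; the innovative pay-as-you-go capacity, the annual growth of needed infrastructure, and the yearly investor counts (which the agent classes would supply) are assumed values used only to show the stock-flow bookkeeping.

    import random

    M = 10.0                  # $M value of each private investment (Table 2)
    ANNUAL_GROWTH = 800.0     # assumed annual growth of needed infrastructure, $M
    INNOVATIVE_PAYG = 50.0    # assumed innovative pay-as-you-go capacity, $M

    needed = 800.0            # NeededInfrastructure stock, initial value ($M)
    financed = 0.0            # FinancedInfrastructure stock, initial value ($M)

    for year in range(10):
        # Eq. (1): traditional capacity (Table 2) plus the innovative capacity
        financing_capacity = random.uniform(300.0, 350.0) + INNOVATIVE_PAYG
        private, new_private = 20, 10                               # placeholder agent counts
        flow = financing_capacity + private * M + new_private * M   # Eq. (2)
        needed += ANNUAL_GROWTH                                     # Eq. (4): stock accumulates growth
        financed += flow                                            # Eq. (5): stock accumulates the flow
        need = needed - financed                                    # Eq. (3)

    print("financed infrastructure after 10 years ($M):", round(financed, 1))
    print("remaining need ($M):", round(need, 1))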

Table 2 - Input variable values

Input                                     | Assumed Values        | Basis for Values
Needed Infrastructure                     | 800 (million dollars) | Annual need for infrastructure at the time of policy analysis
Financing capacity                        | Uniform (300, 350)    | Annual financing capacity of the sponsor agency (i.e., a gap of approximately 500 million)
Number of potential traditional investors | 100                   | Estimated potential traditional investors in the market
Number of potential new investors         | 100                   | Estimated potential new investors in the market
m (value of each private investment)      | 10 (million dollars)  | Estimated cost of projects financed privately

META-MODELING
Meta-modeling techniques are useful for identifying which model inputs drive the simulated outcomes. Classification and Regression Tree (CART) analysis is a technique that can select, from among a large number of variables, the most important variables (and their interactions) in determining the outcome variable to be explained (Breiman et al. 1984). In the infrastructure finance model, the CART technique is used to construct a regression tree from data obtained in different runs of Monte Carlo experiments with the simulation model, in order to identify the significant factors for policy-making.
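The sketch below illustrates this meta-modeling step with scikit-learn's CART implementation; the data-generating function stands in for the simulation model, and the two inputs, their ranges, and the 70/30 split are assumptions made only to show the workflow.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(seed=1)
    n_runs = 300                               # Monte Carlo runs of the simulation model

    # Inputs sampled per run: success probability and innovative pay-as-you-go capacity ($M)
    success_prob = rng.uniform(0.5, 1.0, n_runs)
    payg_capacity = rng.uniform(0.0, 150.0, n_runs)

    # Stand-in for the simulated output: total financed infrastructure ($B) plus noise
    financed = 5.0 + 8.0 * success_prob + 0.02 * payg_capacity + rng.normal(0.0, 0.5, n_runs)

    X = np.column_stack([success_prob, payg_capacity])
    X_train, X_test, y_train, y_test = train_test_split(X, financed, train_size=0.7, random_state=0)

    tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_train, y_train)
    print("R^2 on the held-out 30%:", round(tree.score(X_test, y_test), 3))
    print("relative importance [success_prob, payg_capacity]:", tree.feature_importances_)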

APPLICATION OF THE PROPOSED METHODOLOGY


The proposed methodology is used to assess the financial policies of Fininfra, a
hypothetical nation. The government of Fininfra is considering setting policies to
expand investment in transportation infrastructure over the next ten years. The model
inputs are estimated by the government as summarized in Table 2.
The model output variables include the number of private investments in
infrastructure and the currency value of the total financed infrastructure during the
policy analysis horizon (the former is the sum of the number of private investors,
either traditional or new, that invest in infrastructure at each time step (i.e., year),
while the latter is the sum of the total currency value of the financed infrastructure,
either through private investment or through pay-as-you-go, at each time step). It is assumed that all projects are of the same size and value.
The distribution of the total currency value of the financed infrastructure is simulated
through 300 runs of the Monte Carlo experiment. The Monte Carlo experiment results
are used for data mining analysis using Enterprise Miner, which is a part of SAS 9.1
software. For simplifying the interpretation of the results of the analysis, three levels
of infrastructure investment are considered over the policy horizon (10 years): high
(investment greater than $12 billion), medium (investment greater than $7 billion and
less than $12 billion) and low (investment less than $7 billion). The regression tree is
trained using 70% of the data and tested using the remaining 30% of the data, which
is a common approach for partitioning data for training and testing. The regression
tree accuracy is measured using the coefficient of determination R2, which is 0.457 for the regression tree and indicates reasonable accuracy for a statistical model of this kind.
The significant parameters are identified using the structure of the regression tree
which is shown in Figure 3. Based on the tree structure, success probability is the
most significant parameter affecting the total currency value of the financed
infrastructure. The next most significant parameter is the innovative pay-as-you-go
capacity of a public agency, which can be projected at the time of policy analysis.
Success probability, support effectiveness, and the innovative pay-as-you-go capacity of public agencies have positive effects on the output variable (the number of projects financed, assuming that each project has the same value and that only one investor is needed for each project), while the effect of unsuccessful investments is negative.
Therefore, in order to increase the level of financed infrastructure, policy-makers
should set policies which reduce the risk of unsuccessful investment by private
investors. An example of such policies includes but is not limited to establishment of
pre-specified processes (e.g., standardized procurement processes and contract provisions) to effectively facilitate institutional investors’ participation.
In addition, in cases when enhancing the probability of success for private investment
is not possible, the policies should expand the pay-as-you go financing capacity of
public agencies to achieve the desirable level of financed infrastructure. For instance,
as shown in Figure 3, in cases when the probability of success is less than 85%, there
is a need for policies that facilitate innovative pay-as-you-go financing of more than
70 million dollars in order to have at least a 30% chance of achieving the High level
(in this example, more than nine billion dollars in ten years) of financed
infrastructure. An example of such policies includes the flexible match mechanism
adopted by the U.S. Federal Highway Administration to enhance the pay-as-you-go
financing capacity of state Departments of Transportation.

Figure 3- Regression tree meta-model for the hypothetical case study


MODEL VALIDITY
Face validity (Xiang et al., 2005; Sargent, 1998) is used to assess four features of modeling quality: completeness, consistency, coherence, and correctness (Pace, 2000). The model is complete since the conceptual representation
is abstracted from subject matter experts (SMEs). The model is consistent since the
results of several replications of the simulation model using different random seeds
produce consistent outcomes. The model is coherent since all the elements in the
model have functions and there is no extraneous element. Finally, the model is
correct since it is shown to be appropriate for the intended application. As shown in
the meta-model, the model is suitable for identification of significant factors affecting
infrastructure investment to be used for policy-making purposes. Further validation
by subject matter experts is required in future work.

CONCLUSIONS
This paper presented a systemic approach for ex-ante analysis of innovative financing
policy-making. A hybrid Agent-Based/System Dynamics model was created to
simulate the dynamics of infrastructure financing to be used for policy analysis. The
model output variable, which is the total currency value of the financed infrastructure,
was simulated using a Monte Carlo experiment. Classification and Regression Tree
analysis was performed on the simulated data to identify significant factors affecting
the total level of financed infrastructure. Increased probability of successful
investment and enhanced pay-as-you-go capacity of public agencies were identified
as the most significant factors affecting the total level of financed infrastructure in the
case study. Policies which enhance these factors could be effective in enhancing the
total value of the financed infrastructure. To enhance the probability of success of
private investments, innovative risk mitigation and contract tools might be considered
by policy-makers and public agencies when an innovative financing system is
proposed. The methodological approach proposed herein could potentially assist
policy-makers in expanding infrastructure investment by providing recommendations
regarding the significance of different factors and the effects of different policies.

REFERENCES
Bankes, S. C. (2002). "Tools and techniques for developing policies for complex and
uncertain systems." Proceedings of the National Academy of Sciences, 99(3), pp. 7263-7266.
Breiman, L., Friedman, J. H., Olshen, R., and Stone, C. J. (1984). Classification and Regression Trees. Belmont, CA: Wadsworth.
Bonabeau, E. (2002). "Agent-based modeling: methods and techniques for simulating
human systems." Proceedings of the National Academy of Sciences, 99, pp. 7280-7287.
DeLaurentis, D., and Callaway, R.K.C.A. (2004) "System-of-Systems Perspective for
Public Policy decisions." Review of Policy Research, 21(6), pp. 829–837.
Lempert, R. (2002). "Agent-based modeling as organizational and public policy
simulators." Proceedings of the National Academy of Sciences, 99(3), pp. 7195-7196.
Levine, R. (1997). "Financial Development and Economic Growth: Views and Agenda.",
Journal of Economic Literature, American Economic Association, 35(2), pp. 688-726.
Kim, Y., and Lee, M. (2007). "Agent-based models as a modeling tool for complex
policy and managerial problems." Korea Journal of Public Administration, 45(2), pp. 25-50.
Macal, C.M., and North, M.J. (2005). "Tutorial on Agent-Based Modeling and
Simulation." Proceedings of the 2005 Winter Simulation Conference, Orlando, FL, Dec. 4-7,
pp. 2-15.
Macy, M. W., and Willer, R. (2002). "From factors to actors: Computational sociology
and agent-based modeling." Annual Review of Sociology, 28: pp. 143–166.
Mostafavi, A., Abraham, D.M., DeLaurentis, D., and Sinfield, J. (2011a). "Exploring the
Dimensions of Systems of Innovation Analysis: A System of Systems Framework.", IEEE
Systems Journal, Accepted for publication on February 17, 2011.
Mostafavi, A., Abraham, D.M., and Sullivan, C.A. (2011b)."Drivers of Innovation in
Financing Transportation Infrastructure: A Systemic Investigation." Electronic Proceedings
of the Second International Conference on Transportation Construction Management,
February 7 - 10, 2011, Orlando, FL.
Mostafavi, A., and Abraham, D.M. (2010). "Frameworks for Systemic and Structural
Analysis of Financial Innovations in Infrastructure." Working paper, Electronic Proceedings
of 2010 Engineering Project Organization Conference (EPOC 2010), November 4 - 6, 2010,
South Lake Tahoe, CA.
Nelson, R.R., editor. National Innovation Systems: A Comparative Analysis. New York:
Oxford University Press; 1993.
Pace, D.K. (2000) "Ideas about simulation conceptual model development." Johns
Hopkins Apl Technical Digest, 21 (3), pp. 327–336.
Pfeffer, J., and Salancik, G. R. (2003). The external control of organizations: A resource
dependence perspective. Stanford, CA: Stanford Business Books.
Sanchez, S. M., and Lucas, T. W. (2002). "Exploring the world of agent-based
simulations: Simple models, complex analyses." E. Yücesan, C.-H. Chen, J. L. Snowdon, J.
Charnes, eds. Proc. 2002 Winter Simulation Conf. Institute of Electrical and Electronics
Engineers, Piscataway, NJ, pp. 116–126.

Sargent, R.G. (1998). "Validation and Verification of Simulation Models," in Proceedings of the 1998 Winter Simulation Conference, pp. 121-130.
Sterman, J. D. (2001). "System Dynamics Modeling: Tools for Learning in a Complex
World." California Management Review, Vol. 43, No. 4, pp. 8-25.
Xiang, X., Kennedy, R., and Madey, G. (2005) "Verification and Validation of Agent-
based Scientific Simulation Models." Agent-Directed Simulation Conference, San Diego, CA.
A Multi-objective Genetic Algorithm Approach for Optimization of Building
Energy Performance
Don Chen1 and Zhili (Jerry) Gao2
1 Assistant Professor, Ph.D., LEED AP, Department of Engineering Technology & Construction Management, University of North Carolina at Charlotte, 9201 University City Blvd, Charlotte, NC 28223; PH (704) 687-6299; FAX (704) 687-6653; email: Dchen9@uncc.edu
2 Assistant Professor, Ph.D., C.P.C., Department of Construction Management & Engineering, North Dakota State University, Fargo, ND 58105; PH (701) 231-8857; FAX (701) 231-7431; email: Jerry.Gao@ndsu.edu

ABSTRACT

This paper presents the results of a pilot study conducted to optimize building
energy performance using a Multi-objective Genetic Algorithm (MOGA), an evolutionary adaptive approach. In this study, a Building Information Modeling
(BIM) model was built to provide design data, such as building form and space
layout, and site and building orientation to IES <VE>, a building energy simulation
software. Energy performance of design options was evaluated. The optimal settings
of the design parameters were then obtained using a MOGA approach. This study
indicates that the MOGA approach (1) enables continuous investigation of design
parameters over their entire spectrum, (2) accounts for that fact that design
parameters dynamically, not statically, impact energy performance, and (3) optimizes
multiple design criteria simultaneous. This study concluded that MOGA is an
appropriate approach that can better ensure a global optimal solution for design of
energy efficient buildings.

INTRODUCTION

To ensure the success of energy efficient buildings, energy performance analysis is a critically important first step. A wide variety of energy simulation software packages is currently available for assessing the energy consumption of different design options. This allows Architecture, Engineering, and Construction (A/E/C) professionals to fine-tune their designs and ultimately to ensure that the buildings perform at the optimal energy performance level. For instance, A/E/C professionals can evaluate their designs by simulating building energy consumption for differing building orientations and/or increased window wall ratios, and by calculating construction costs. A number of studies have been conducted on this approach (Caldas et al., 1999; Caldas et al., 2001; Wright et al., 2002; Caldas et al., 2003; Chaisuparasmikul, 2008; Pollock et al., 2009). However, this approach has

several drawbacks. First, design parameters are altered discretely, not continuously; for example, building energy consumption is usually simulated at several chosen angles from the project north at the designers’ discretion. The infinite number of building orientations besides the chosen ones is usually not considered, due to simulation time and cost constraints. Second, the dynamic impacts of design parameters are not accounted for in simulation. The use of total energy consumption in the evaluation process fails to consider the dynamic interactions between the two components of the total energy consumption: heating and cooling. With increased window wall area ratios, energy consumption for heating typically decreases and energy consumption for cooling increases, but the total energy consumption may fluctuate only slightly and thus appear nearly constant. Therefore, energy consumption for heating and cooling, instead of total energy consumption, should be evaluated. Third, not all design objectives can be optimized simultaneously. During a typical energy evaluation process, for instance, the professionals alter design parameters to minimize energy consumption, but it is likely that construction costs are not minimized with the same set of design parameters. Lastly, because of the abovementioned drawbacks, usually only a local, instead of a global, optimal solution is achieved, and the goal of designing a truly energy efficient building cannot be met.
The main question this study aims to answer is: to design a truly energy efficient building, what is an appropriate optimization technology that can search the entire spectrum of design parameters, optimize multiple building design objectives simultaneously, and achieve a global optimal solution? The main objective of this study is to use such an optimization technology to find an optimal design that achieves the following design criteria at an acceptable level:
 to minimize energy consumption that is required to meet heating and cooling
conditions, and meanwhile
 to minimize construction costs.

METHODOLOGY

In this study, the researchers proposed multi-objective genetic algorithms (MOGAs) as a promising optimization method for the design of energy efficient buildings. As a case study, a two-story academic building was simulated and data were collected and analyzed. Finally, conclusions and recommendations are provided.

Optimization Algorithm. A genetic algorithm (GA) is an evolutionary adaptive approach for finding an optimal solution to a problem, usually a solution that minimizes the magnitude of the objective. First mentioned in Nils Aall Barricelli’s work (1954, 1957) and then developed by John Holland (1975), the GA has gained momentum in many fields, including equipment design, manufacturing, controls, municipal utilities, robotics, signal processing, fault detection, and building design (Goldberg, 1989; Gero, 1978). A GA involves several steps: 1) a group of possible solutions to a problem, called a population, is generated; 2) genetic operators, e.g., reproduction, crossover and mutation, are applied to the initial population to generate a new
population, called a generation, which solves the problem better than the initial population; 3) the first two steps are repeated for a pre-defined number of generations to produce the optimal solution. In many real-life problems, more than one objective needs to be optimized. To this end, multi-objective genetic algorithms (MOGAs) can be used to find a solution that satisfies the objectives, often conflicting objectives, at an acceptable level. The application of the genetic operators in a GA/MOGA prevents the search from falling into a local optimum, and thus a global optimal solution can be ensured.
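The three steps above can be condensed into the minimal single-objective Python skeleton below, which minimizes an arbitrary toy objective over [0, 1]; the population size, operator choices, and mutation rate are illustrative, and a MOGA adds Pareto-based ranking of the conflicting objectives on top of this same loop.

    import random

    def objective(x):
        """Toy function to be minimized (a stand-in for a simulated energy result)."""
        return (x - 0.3) ** 2

    POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 50, 0.1

    # Step 1: generate an initial population of candidate solutions in [0, 1]
    population = [random.random() for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):
        def select():
            # Reproduction: binary tournament selection favouring the lower objective
            a, b = random.sample(population, 2)
            return a if objective(a) < objective(b) else b

        next_population = []
        while len(next_population) < POP_SIZE:
            parent1, parent2 = select(), select()
            child = 0.5 * (parent1 + parent2)            # crossover (arithmetic average)
            if random.random() < MUTATION_RATE:
                child += random.gauss(0.0, 0.05)         # mutation keeps the search global
            next_population.append(min(max(child, 0.0), 1.0))
        population = next_population                     # Step 3: iterate over generations

    print("best solution found:", round(min(population, key=objective), 3))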

Energy Simulation Software. The Integrated Environmental Solutions <Virtual


Environment> (IES <VE>) v6.2.0.1 (http://www.iesve.com/NAmerica) was selected
to perform the whole-building energy simulation because it conforms to
ANSI/ASHRAE Standard 140 and provides advanced and comprehensive thermal
simulation.

Building Information Modeling (BIM) Software. Autodesk Revit Architecture


(http://usa.autodesk.com/) was selected to develop the BIM model of the building
because of its superior modeling capacities and its seamless interoperability with IES
<VE>. A BIM model is a computer software generated digital representation of a
building which is examined virtually and then constructed (U.S. General Services
Administration, 2007). Besides construction cost information, most of the building
element specific information needed in energy analysis, such as site location and
orientation, material quantities, insulation values, lighting quality, and recyclability
are described in the BIM model.

Case Study Building. The building chosen for this study is a new 52,000 S.F., 2-
story academic building located in North Carolina. Figures 1, 2, and 3 show the first
and second floor plans, and the BIM model of the building.

Figure 1. First Floor Plan; Figure 2. Second Floor Plan; Figure 3. BIM Model

Simulation Inputs and Assumptions. The source of design weather used in this
study was the ASHRAE design weather database, and the weather design file was
Raleigh TM2.fwt.
To accurately perform thermal simulation, magnetic declination should be
considered. For the case study building, the declination is 9° 45' W. The BIM model
was adjusted to the “true” north in Revit using this declination.

Construction cost data was obtained from RSMeans CostWorks online database (http://www.meanscostworks.com/). Assembly B20101305420, “Brick
veneer wall, standard face, 16 ga x 3-5/8" LB @ 16" metal stud back-up, common
bond”, was selected as exterior walls. The total cost of building this type of wall is
$15.17 per S.F. Assembly B20201024300, “Windows, wood, casement, insulated
glass”, was selected as exterior windows. The average total cost of installing this type
of window is $41.164 per S.F.
Lights and the HVAC system were assumed to use the same default settings
for differing design options.

Simulation Procedure and Results. The BIM model of the academic building was
first developed using Revit Architecture. This model was exported as a gbXML file
which is then imported into IES <VE>. In IES <VE>, a module called Apache was
used to perform thermal calculations and simulations. One of the simulation outputs
is total energy consumption (MBtu) per year. This total energy consumption is split
into several sub-categories. They are heat, cool, fans/pumps, lights, and equipment.
This enables further studies of energy usage for heating and cooling individually.
In this study, two design parameters were altered to develop new design
options. Energy performance of these new design options was then simulated. The
two design parameters are the building orientation and the window wall area ratio.
The building was rotated counter-clockwise by 0°, 45°, 90°, 135°, 180°, 225°, 270°,
and 315° from the “true” north. Five window sizes were chosen for all the windows:
fixed (36”x48”), casement double with trim (48”x48”, 72”x36”), awning-triple
(60”x48”), and casement-quad (72”x48”). The window wall area ratio was calculated
by dividing the total window area by the total exterior wall area. Considering the
different combinations of these two design parameters produces a total of 40 design
options. Energy performance of these 40 design options was simulated. The
simulation results are shown in Table 1. Regression analyses of these results generated equations (a) and (b) in the following section.

Statistical Optimization Procedure. The Optimization Tool in Matlab R2009b


(http://www.mathworks.com/products/matlab/) was used to conduct multi-objective
optimization. The goal is to minimize energy consumption for heating and cooling,
and to minimize construction costs.
The objective functions are:
f(heating) = 1123.4145 - (17.9136)(x1)^0.5 - (6.888e-8)(x2)^3 (a)
f(cooling) = 271.5097 + (75.578)(x1)^0.5 + (1.3703e-8)(x2)^3 (b)
f(cost) = (15.17)(1 - x1)(33640) + (41.16)(x1)(33640) (c)
where,
x1: window wall area ratio;
x2: building orientation, in degrees;
f(heating): energy consumption for heating, MBtu per year;
f(cooling): energy consumption for cooling, MBtu per year; and

f(cost): construction costs for building exterior walls and windows, dollars.
Since all other building components (roof, interior walls, doors, floors, etc.)
remain unchanged, f(cost) is a good indicator of construction costs. $15.17
/SF is the cost for building exterior walls; $41.16 /SF is the cost for installing
exterior windows; and 33,640 SF is the total exterior wall surface area.
Equations (a) and (b) were obtained by regressing f(heating) and f(cooling)
against x1 and x2, respectively. For equation (a), R-square is 0.30, p-value is 0.0013;
for equation (b), R-square is 0.98, p-value is less than 0.0001.
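To make the optimization step concrete, the sketch below evaluates the three objective functions (using the coefficients of Eqs. (a)-(c) as reconstructed above) over random samples of (x1, x2) and keeps the Pareto-nondominated candidates; this is a simplified stand-in for the Matlab Optimization Tool run reported here, not the tool's actual algorithm, and the sampling ranges are chosen only to cover the values appearing in Tables 1 and 2.

    import random

    def objectives(x1, x2):
        heating = 1123.4145 - 17.9136 * x1 ** 0.5 - 6.888e-8 * x2 ** 3   # Eq. (a)
        cooling = 271.5097 + 75.578 * x1 ** 0.5 + 1.3703e-8 * x2 ** 3    # Eq. (b)
        cost = 15.17 * (1 - x1) * 33640 + 41.16 * x1 * 33640             # Eq. (c)
        return heating, cooling, cost

    def dominates(a, b):
        """True if objective vector a is at least as good as b everywhere and better somewhere."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    random.seed(0)
    candidates = [(random.uniform(0.19, 0.60), random.uniform(0.0, 360.0)) for _ in range(500)]
    scored = [(c, objectives(*c)) for c in candidates]
    pareto = [c for c, f in scored if not any(dominates(g, f) for _, g in scored)]
    print(len(pareto), "non-dominated designs; one example (x1, x2):", pareto[0])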

RESULTS AND DISCUSSION

The 30 best optimization solutions are listed in Table 2. In this table, for each set of x1 and x2, the corresponding f(heating), f(cooling), and f(cost) are predicted. A 3-D surface built from these solutions is shown in Figure 4. The following observations were made by carefully examining these solutions:
 Solutions #1 and #24 appear to be the same, and both can be considered the optimal solution. For this optimal solution, x1 and x2 are 0.25 and 1.67°, respectively. These two values are not among the pre-determined values for x1 and x2. Therefore, it has been shown that MOGA searched the entire spectrum of design parameters for the global optimal solution.
 For the optimal solution (#1 or #24), f(heating) was 1,114.37 MBtu, which is larger than the minimum value of 1,109.54 MBtu, but its f(cooling) was the smallest. The sum of f(heating) and f(cooling) was the smallest compared to other solutions. In addition, its f(cost) was the smallest. Therefore, overall the optimal solution minimizes the design objectives simultaneously at an acceptable level.

CONCLUSIONS AND RECOMMENDATIONS

Today’s A/E/C professionals are charged with designing more energy efficient buildings. The wide availability of energy analysis solutions has made it possible to
adopt these professionals’ energy performance related design inputs early in the
design phase. However, an understanding of how design parameters dynamically
impact energy performance, and how multiple design objectives can be effectively
optimized, is still lacking in the A/E/C industry. To address this issue, this study was
conducted to develop a BIM model of a case study building, conduct energy
performance simulation on various design options, collect performance data and
apply a MOGA as the optimization method to generate the optimal solution. The
results indicated that MOGA can effectively search the entire spectrum of design
parameters, optimize multiple building design objectives simultaneously, and achieve
a global optimal solution. Therefore, it is a promising optimization approach for
design of energy efficient buildings.

Recommendations for Future Studies. Window wall area ratios of North, West,
South, and East walls should be considered. In this study, the overall window wall

area ratio was studied. If the ratio of each individual exterior wall can be considered, the A/E/C professionals will gain a better understanding of exactly how many windows should be placed on each exterior wall. This will result in a more accurate final design.
More design parameters, for example, shading, construction materials, and
sources of renewable energy, can be included in future studies.
Building energy performance is largely dependent on the lifestyles of
residents. How to quantify the impacts of various lifestyles is an interesting topic for
future studies.

Table 1. Energy simulation results


            x1      x2 (°)    Heat (MBtu)    Cool (MBtu)    Fans & Pumps (MBtu)    Total Energy (MBtu)
Design 1 0.254 0 1,111.6 309.8 697.5 3,238.80
Design 2 0.254 45 1,114.2 309.7 697.4 3,241.30
Design 3 0.254 90 1,113.1 308.5 696.9 3,238.40
Design 4 0.254 135 1,116.2 309.5 697.3 3,242.90
Design 5 0.254 180 1,115.6 309.5 697.3 3,242.40
Design 6 0.254 225 1,116.0 309.7 697.4 3,243.10
Design 7 0.254 270 1,111.3 308.9 697.1 3,237.20
Design 8 0.254 315 1,112.1 310.1 697.6 3,239.70
Design 9 0.285 0 1,110.8 312.2 698.4 3,241.30
Design 10 0.285 45 1,113.7 312.0 698.4 3,243.90
Design 11 0.285 90 1,112.4 310.6 697.8 3,240.70
Design 12 0.285 135 1,115.8 311.7 698.2 3,245.70
Design 13 0.285 180 1,115.3 311.8 698.3 3,245.30
Design 14 0.285 225 1,115.8 312.0 698.4 3,246.10
Design 15 0.285 270 1,110.8 311.1 698.0 3,239.80
Design 16 0.285 315 1,111.4 312.5 698.5 3,242.20
Design 17 0.317 0 1,110.1 314.6 699.4 3,244.00
Design 18 0.317 45 1,113.1 314.3 699.3 3,246.70
Design 19 0.317 90 1,111.7 312.8 698.7 3,243.10
Design 20 0.317 135 1,115.5 314.0 699.2 3,248.50
Design 21 0.317 180 1,115.1 314.1 699.2 3,248.30
Design 22 0.317 225 1,115.6 314.3 699.3 3,249.10
Design 23 0.317 270 1,109.9 313.3 698.9 3,242.10
Design 24 0.317 315 1,110.6 314.8 699.5 3,244.80
Design 25 0.381 0 1,108.7 319.3 701.4 3,249.30
Design 26 0.381 45 1,112.1 319.0 701.2 3,252.20
Design 27 0.381 90 1,110.5 317.2 700.5 3,248.00
Design 28 0.381 135 1,114.8 318.5 701 3,254.10
Design 29 0.381 180 1,114.7 318.7 701.1 3,254.40
Design 30 0.381 225 1,115.3 318.9 701.2 3,255.30
Design 31 0.381 270 1,108.6 317.8 700.8 3,247.10
Design 32 0.381 315 1,109.3 319.5 701.5 3,250.20
Design 33 0.190 0 1,113.3 305.1 695.5 3,233.80
Design 34 0.190 45 1,115.5 305.1 695.5 3,236.00
Design 35 0.190 90 1,114.6 304.1 695.1 3,233.70
Design 36 0.190 135 1,117.1 305 695.5 3,237.40
Design 37 0.190 180 1,116.2 305 695.5 3,236.50
Design 38 0.190 225 1,116.5 305.2 395.5 3,237.10
Design 39 0.190 270 1,112.9 304.5 695.3 3,232.50
Design 40 0.190 315 1,113.8 305.4 695.6 3,234.70
Figure 4. 3-D surface of optimal solutions.

Table 2. MOGA optimization results

  x1      x2 (°)    f(heating) (MBtu)    f(cooling) (MBtu)    f(cost) (dollars)
0.25 1.67 1,114.37 309.65 $733,002.83
0.49 2.44 1,110.91 324.26 $936,228.34
0.60 9.65 1,109.54 330.05 $1,034,899.89
0.48 1.70 1,111.06 323.63 $926,098.68
0.42 2.89 1,111.77 320.62 $879,529.01
0.36 2.56 1,112.68 316.81 $824,454.77
0.39 5.38 1,112.18 318.92 $854,311.62
0.38 2.58 1,112.34 318.25 $844,768.50
0.40 4.06 1,112.10 319.23 $858,844.34
0.60 6.22 1,109.56 329.96 $1,033,184.12
0.60 9.65 1,109.54 330.05 $1,034,899.89
0.55 3.38 1,110.09 327.74 $994,344.92
0.47 6.92 1,111.20 323.05 $916,942.10
0.27 1.88 1,114.12 310.73 $745,823.50
0.35 2.90 1,112.82 316.21 $816,154.40
0.32 2.42 1,113.26 314.36 $791,416.00
0.51 5.86 1,110.62 325.48 $956,122.90
0.31 5.82 1,113.48 313.41 $779,073.43
0.50 1.76 1,110.81 324.69 $943,174.93
0.32 2.56 1,113.23 314.49 $793,036.18
0.30 3.10 1,113.61 312.89 $772,452.90
0.53 4.65 1,110.32 326.74 $977,264.56
0.45 3.08 1,111.42 322.12 $902,354.66
0.25 1.67 1,114.37 309.65 $733,002.83
0.43 2.06 1,111.62 321.27 $889,359.98
0.59 6.73 1,109.69 329.42 $1,023,701.79
0.55 2.18 1,110.14 327.50 $990,175.20
0.27 1.70 1,114.19 310.45 $742,394.77
0.60 9.62 1,109.54 330.05 $1,034,899.89
0.57 9.65 1,109.90 328.51 $1,007,577.90
Comparison of Image-Based and Manual Field Survey Methods for Indoor As-
Built Documentation Assessment
Laura Klein1, Nan Li2, Burcin Becerik-Gerber3
1,2,3 Sonny Astani Department of Civil and Environmental Engineering, University of
Southern California, Los Angeles, CA 90089;
Email: 1lauraakl@usc.edu, 2nanl@usc.edu, 3becerik@usc.edu

ABSTRACT
As-built models and drawings are essential documents used during the operations and
maintenance of buildings for managing facility spaces, equipment, and energy
systems. Inefficiencies in processing, communicating, and revising as-built
documents therefore result in high costs imposed on building owners. Facility
managers still rely heavily upon manual surveying procedures for developing and
verifying as-built drawings and models. To streamline this often time consuming
process, this paper addresses the advantages and limitations of photogrammetry for
remote sensing and verification of interior as-built conditions. Two classrooms are
captured using photogrammetric image processing software and image-based
dimensions are compared to dimensions gathered through a traditional manual survey
yielding an average percent error of approximately 2%. Both image-based and
manual dimensions are then compared to dimensions extracted from an existing as-
built BIM model of the interior spaces, and the proposed image-based verification
method successfully identifies the same gross errors in the as-built BIM model.
Keywords: image-based measurements; as-built verification; as-built documentation;
photogrammetry; facilities management
INTRODUCTION
As-built models and drawings are essential documents used during the operations and
maintenance of buildings for managing facility spaces, equipment, and energy
systems. While these documents are typically generated, developed, and used
throughout the design and construction phases of new buildings, they are of greatest
value to building owners and managers of existing facilities for assessing building
performance, managing building repairs and renovations, and assisting building
decommissioning (Akcamete et al., 2009; Eastman et al., 2008; Gallaher et al., 2004).
Inefficiencies in processing, communicating, and revising as-built documents
therefore result in high costs imposed on building owners. A 2004 NIST report found
that an estimated $1.5 billion is wasted every year as a result of unavailable and
inaccurate as-built documents causing information delays to facilities management
(FM) personnel. Changes that occur during construction are often reflected as redline
markups or partial drawings that are not transferred to complete as-built
documentation handed over to owners during building closeout or after major
renovations. An additional $4.8 billion is therefore spent annually on FM labor alone
to verify and validate existing as-built documentation (Gallaher et al., 2004).
Facility managers still rely heavily upon manual measurements by tape or laser
measuring devices to verify drawings and/or models, or to generate digital as-built
documents where they do not exist. Manual surveys of building interiors generally
include dimensioning room width, length, and height as well as sizes of doors and
windows. Dimensions collected by manual surveys are used by FM personnel as the
"ground truth" to verify the accuracy of existing as-built documentation. If the
difference between dimensions from the existing as-built documentation and the field
survey exceeds a pre-determined threshold (approximately 2%), correction of the as-
built documentation is required.
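
This verification rule can be expressed compactly. The following Python sketch uses illustrative function names and hypothetical values, not data from the study, to flag any as-built dimension whose deviation from the field-measured ground truth exceeds the threshold.

# Flag as-built dimensions whose deviation from field measurements exceeds a
# threshold (roughly 2% in current FM practice). Names and values are illustrative.

def percent_error(as_built, measured):
    return abs(as_built - measured) / measured * 100.0

def flag_for_correction(dimensions, threshold_pct=2.0):
    """dimensions: list of (label, as_built_value, measured_value) in metres."""
    return [(label, round(percent_error(ab, m), 2))
            for label, ab, m in dimensions
            if percent_error(ab, m) > threshold_pct]

checks = [("door width", 0.90, 1.00),        # hypothetical values
          ("ceiling height", 3.05, 3.04)]
print(flag_for_correction(checks))           # only the door width exceeds 2%
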
Attempts to automate the surveying and modeling of as-built conditions include
leveraging new remote sensing technologies, such as 3D laser scanning and
photogrammetry, which use sensors to capture 3D spatial information from a distance
in a non-disruptive fashion (Brilakis et al., 2010; Markley et al., 2008; Tang et al.,
2010). The availability of low cost and effective tools for automated verification and
modeling of as-built conditions would allow facility managers and building owners
more frequent and more comprehensive updates to as-built documents to improve
their daily operations. Today photogrammetry offers one of the most promising low
cost solutions, but it relies heavily on satisfactory environmental conditions. For the
application of photogrammetric image-processing for interior as-built document
assessment, it is therefore crucial to conduct tests in realistic field environments.
Recent research efforts have focused on testing the accuracy of photogrammetric
techniques for measuring as-built conditions including isolated objects, individual
building elements, historical building facades, and construction site progress (Dai and
Lu, 2010; El-Hakim, 2001; El-Omari and Moselhi, 2008; Ordonez et al., 2010;
Remondino et al., 2005). While each of these test subjects represents unique
circumstances for the remote sensing of spatial conditions, existing and occupied
buildings offer a complex set of obstacles that has yet to be fully investigated.
This study therefore addresses the advantages and limitations of commercial semi-
automated photogrammetric image-processing software for the verification of interior
as-built documentation. The image-based spatial data is used to assess the accuracy of
an existing as-built BIM model currently undergoing verification by the University of
Southern California (USC) after the design and construction phase handover. Image-
based dimensions are compared to dimensions gathered through a traditional manual
survey of two classrooms to assess the accuracy of the proposed method for capturing
the complex interior environments of occupied buildings.
REMOTE SENSING WITH PHOTOGRAMMETRY
The most common means by which facility spatial information is gathered remotely is
through 3D laser scanning and photogrammetry. While very different in terms of
equipment costs and sensing processes, both technologies use sensors to either
directly or indirectly compute relative distances between their locations and points in
the sensed scene. Choice of technology depends heavily on the size and complexity of
the scene or object, the required accuracy and level of detail, and budgetary
constraints. In comparison to 3D laser scanning, photogrammetry offers a low cost,
low skill, portable solution for remote sensing (Remondino and El-Hakim, 2006).
Photogrammetry traditionally refers to the process of deriving geometric information
(distances and dimensions) about an object through measurements made on
photographs. Photogrammetry can involve one photo or multiple photos, analogue or
digital images, still-frame or video images (videogrammetry), and manual or
automatic processing (Mikhail et al., 2001). Generally, photogrammetry includes
selecting common feature points in two or more images; calculating camera positions,
orientations, and distortions; and reconstructing 3D information by intersecting
feature point locations. Over the past decade, major developments in computer vision
and image processing have allowed increased automation in each of these steps,
thereby expanding the potential applications and the commercially available software
for photogrammetry (Nister, 2004; Pollefeys et al., 1999).
Automated detection and stitching of overlapping feature points requires a large
number of images taken closely together to provide sufficient overlap and repetition
of captured objects (El-Hakim, 2001; Shum and Kang, 2000). While automated
stitching reduces the need for human intervention, it is, at this time, more prone to
stitching errors and increased noise (Remondino and El-Hakim, 2006) caused by the
extraction of unwanted background feature points such as trees, surrounding
buildings, and sky. After feature points are defined and stitched between 2D images,
camera positions and orientations are calculated based on corresponding collections
of approximated 3D feature point locations. A method known as bundle adjustment is
often employed to simultaneously optimize calculated structure and camera poses
(Triggs et al., 2000). The final reconstructed scene includes the optimized camera
positions and their associated visual data in a 3D representation such as a sparse point
cloud. Once cameras are positioned and calibrated for each image, the 3D coordinate
of any point or image pixel can be calculated with a relatively high degree of
accuracy by defining the same point in two images taken from different perspectives.
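
As an illustration of the pipeline just described, the sketch below uses OpenCV to match feature points between two overlapping photographs, recover the relative camera pose, and triangulate the matches into 3D points. The image file names and the intrinsic matrix K are assumptions, and the commercial software used in this study automates these steps over many images with bundle adjustment.

# Two-view sketch: feature matching, essential matrix, triangulation (OpenCV).
import cv2
import numpy as np

K = np.array([[3000.0, 0, 1632], [0, 3000.0, 1224], [0, 0, 1]])  # assumed intrinsics

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # assumed overlapping photos
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and match feature points between the two photographs
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]   # ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 2. Recover relative camera pose from the essential matrix
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulate matched points into 3D (defined only up to scale)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T      # homogeneous -> Euclidean coordinates
print(pts3d.shape)
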
TEST BED DESCRIPTION
This paper assesses the accuracy of semi-automated photogrammetric image-
processing software in capturing and verifying interior as-built documents of an
operational building at USC. The School of Cinematic Arts “Student Services and
Media Arts” building, referred to as SCB, was selected to test the current and
proposed as-built verification methods on existing conditions. The test bed building is
of relatively recent construction and has been occupied since June 2010. As part of
construction closeout, a BIM model was delivered to the university as as-built
documentation. The existing BIM model is currently undergoing standard verification
processes executed by USC FM.
A research library (Room 206) and a classroom (Room 207) on the second floor of
SCB were selected for the interior case study as they represent typical spaces found
on the university campus (Figure 1). Room 207 is roughly twice the size of Room
206, with the two rooms covering approximately 53 and 25.5 m2, respectively. Both rooms
include one or more windows on their southern walls which allow in natural light. At
the time of the surveys, both rooms were heavily populated with equipment and
furniture, obstructing corners of the floor, windows and door. The walls of Room 207
were also covered with posters and other visually distinct graphics but the walls of
Room 206 were mostly clear.

Figure 1. Test bed floor plans (Rooms 206 and 207).


METHODOLOGY FOR AS-BUILT DOCUMENTATION ASSESSMENT
To replicate the manual survey verification procedure currently carried out by the
university FM personnel, measurements of the two interior rooms were gathered with
an off-the-shelf laser surveying device, which measures linear distances within a
range of 100 m to an accuracy of 1.6 mm. Building elevations and floor plans were
used to choose the dimensions and measuring sequence before the survey, as well as
to facilitate documenting the measurements recorded onsite. The surveyed
dimensions included the length, width, and height of each room; magnitudes of wall
protrusions and recessions; and sizes of doors and windows and their relative
distances to adjacent walls. Keeping consistent with current university FM practice,
dimensions collected through the manual field survey were compared to dimensions
extracted from the existing as-built BIM model to verify the accuracy of the model.
To gather the same building geometry data found by the manual field survey, an
image-based survey was executed for the two interior rooms. The major steps
involved in remote sensing through photogrammetry include image acquisition,
image processing (image stitching and 3D reconstruction), and geometry or
dimension extraction. For the study, pictures were taken with a fixed focal length
using an off-the-shelf 8.0 megapixel digital camera. Photographs were planned to
optimize views of all critical geometry and building elements such as wall openings
and floor and ceiling corners. A total of 130 images were acquired for the two interior
rooms in a single session. Unique visual markers were added to all four walls of
Room 206 to augment the number of feature points in each image predicted to be
insufficient as a result of the unornamented white walls (Figure 2).
Commercially available photogrammetric image-processing software was used to
automatically calculate camera distortion and execute feature stitching, camera
positioning, and 3D reconstruction. Bundle adjustment was also used to optimize
cameras and 3D structure before returning the stitched images, their associated camera
positions, and the generated sparse point cloud in a 3D model space. The resulting 3D
model was then scaled using a manually found measurement in each room. Once
image processing was completed for both interior scenes, points and polylines were
manually generated to model and extract all major room geometry (Figure 3). Where
critical points such as floor corners could not be modeled directly due to
environmental obstructions, geometric assumptions were made using planar and axial
constraints. Image-based measurements were compared to the manual measurements
to assess the accuracy of the proposed method. Finally the proposed image-based
method was assessed for direct verification of the existing as-built BIM model.
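
A minimal sketch of the scaling and dimension-extraction step follows. The coordinates and the reference measurement are hypothetical; they only show how one field-measured distance fixes the scale of the reconstructed scene so that any pair of modeled points yields a real-world dimension.

# Scale a reconstructed scene with one known measurement, then extract dimensions.
import numpy as np

def distance(p, q):
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

# Two reconstructed points whose true separation was measured on site
ref_a, ref_b = (0.0, 0.0, 0.0), (1.93, 0.05, 0.0)     # model units (hypothetical)
measured_ref = 8.20                                    # metres, field measured
scale = measured_ref / distance(ref_a, ref_b)

# Any other pair of modeled points can now be reported in metres
corner_1, corner_2 = (0.1, 0.0, 0.02), (0.1, 1.32, 0.02)
print(round(distance(corner_1, corner_2) * scale, 3), "m")
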

Figure 2 (left). Visual markers added to building interior to augment feature points.
Figure 3 (right). Automatically generated point cloud and manually modeled lines.

COMPARISON OF MANUAL AND IMAGE-BASED SURVEYS


Two separate 3D scenes were reconstructed for the two interior rooms captured in the
test bed. In the final 3D scene used to verify the as-built dimensions of Room 206, 60
of 67 photos were successfully stitched together, representing 89% of the attempted
reconstruction. Resulting from the automated stitching process, 9,055 3D points were
computed. An additional 18 3D points were modeled manually to aid in 3D
coordinate and dimension extraction. For Room 207, 60 of 63 photos were
successfully stitched, representing 95% of the attempted reconstruction. This high
percentage can be attributed to the existence of posters and other feature points on the
walls of Room 207 not present on the walls of Room 206. A total of 10,081 3D points
were automatically computed and 22 points were manually modeled for Room 207.
After reconstruction, 17 dimensions were extracted from the 3D scene of Room 206
and 23 dimensions were extracted from the 3D scene of Room 207. Dimensions
ranged from 0.5 m to 8.2 m. A total of 17 and 23 dimensions were also measured
manually in Rooms 206 and 207 respectively to match the image-based dimensions.
Comparing the image-based dimensions to the manual field verified dimensions
yielded absolute and percent errors for each room as summarized in Table 1. The
percent errors of 7 image-based dimensions in Room 206 and 14 image-based
dimensions in Room 207 exceeded 2% although the average percent errors in each
room were close to the 2% threshold. The maximum absolute errors and maximum
percent errors reported in Table 1 do not represent the same dimensions.
Table 1. Absolute and percent errors for image-based dimensions.
Room Number of Min. Error Max. Error Mean Error Std. Deviation
Dimensions cm % cm % cm % cm %
206 17 0.10 0.07 13.49 5.25 3.79 1.78 3.15 1.33
207 23 0.09 0.02 12.37 4.96 5.25 2.50 3.77 1.65
The largest errors seen in image-based dimensions in both rooms resulted from
dimensions partially or completely obstructed by furniture or by the limitations of the
room’s perimeter illustrating the potential difficulties in using line-of-sight sensing
tools to capture operational building conditions. In Room 207, two dimensions, the
bottom left corner of the door and the bottom right corner of one window, were
obstructed by furniture in all photos used for reconstruction. Similarly, in Room 206
the smallest dimension of the north wall was only visible in one photo due to the
limitations of the room for viewing the recessed corner. Together the occluded
dimensions represented three of the top five greatest percent errors averaging 4.62%
(circled in Figure 4). When these dimensions were removed from the data sets, the
average percent errors for image-based dimensions in Room 206 and Room 207
reduced to 1.54% and 2.33%, respectively.
Manually measured dimensions, considered as the “ground truth”, were then used to
verify the corresponding dimensions extracted from the as-built BIM model. The
absolute and percent errors of the as-built BIM model dimensions are summarized in
Table 2. The percent errors of 7 as-built BIM dimensions in Room 206 and 14 as-
built BIM dimensions in Room 207 exceeded the 2% threshold, requiring updating of
the as-built BIM model. These erroneous dimensions, however, were unrelated to the
erroneous image-based dimensions previously found. In each room, the door widths
saw the greatest discrepancies between as-built conditions represented in the existing
BIM model and the true as-built conditions with percent errors exceeding 10% and
absolute errors exceeding 10 cm. The manual survey also found the as-built BIM
dimensions of the windows in both rooms to differ by 2 to 4% or 4 to 6 cm.
Table 2. Absolute and percent errors for as-built BIM model dimensions.
Room Number of Min. Error Max. Error Mean Error Std. Deviation
Dimensions cm % cm % cm % cm %
206 17 0.00 0.00 10.72 11.65 4.02 2.74 3.24 3.61
207 23 0.52 0.10 9.72 10.68 4.44 2.68 2.48 2.75

Finally, the image-based dimensions were used in a direct assessment of the existing
as-built BIM model to parallel the as-built BIM assessment already performed with
manual measurements. Differences between as-built BIM dimensions and image-
based measurements were plotted against zero and directly compared in the same plot
to differences between as-built BIM dimensions and manual measurements (Figure
4). As visually observed, a relatively high level of agreement was found between the
manual field and image-based assessments especially with respect to those dimension
differences far outside the 2% error threshold. As the manual field assessment found
the greatest discrepancies in dimensions for door widths in both rooms, the image-
based survey similarly showed the as-built BIM model to under represent the actual
door widths in each room (dimensions 5, 6, 21, and 22 in Figure 4). In this way, the
image-based survey method achieved virtually the same identification of gross errors
in the existing as-built BIM model as the manual survey method.

Figure 4. Difference between as-built BIM model dimensions and manual and image-
based dimensions.
CONCLUSION
The results of the manual survey to verify the existing as-built BIM model revealed
that true as-built conditions can differ by more than 10% from interior as-built
documentation. This finding supports the need for improved methods for efficiently
and automatically verifying as-built drawing and models. While work must still be
done to improve image acquisition and image processing for complex environments
such as the interiors of operational buildings, the image-based reconstructions of both
interior rooms came close to the 2% standard threshold dictated by current FM
practices. Even more, the greatest geometric errors found in the existing as-built BIM
model through the manual field survey, the door widths in both classrooms, were also
detected through the image-based survey. The proposed image-based survey method
offers potential advantages to the currently employed manual survey method
including: less time and labor spent on-site, increased accessibility to building
geometry and features beyond the limits of traditional measuring devices, and the
simultaneous generation of both 2D dimensions and 3D spatial data. These
opportunities should motivate further research in remote sensing technologies,
including automated photogrammetry, for capturing and verifying operational
building exteriors and for automatically generating as-built documentation.
ACKNOWLEDGEMENTS
The authors would like to thank Autodesk IDEA Studio for their support of this project.
Any opinions, findings, conclusions, or recommendations presented in this paper are
those of the authors and do not necessarily reflect the views of Autodesk.
REFERENCES
Akcamete, A., Akinci, B., Garrett, J.H. (2009). “Motivation for computational support for
updating building information models (BIMs).” Proceedings of the 2009 ASCE
International Workshop on Computing in Civil Engineering, 346, 523-532.
Brilakis, I., Lourakis, M., Sacks, R., Savarese, S., Christodoulou, S., Teizer, J., Makhmalbaf, A.
(2010). “Toward automated generation of parametric BIMs based on hybrid video and
laser scanning data.” Advanced Engineering Informatics, 24, 456-465.
Dai, F. and Lu, M. (2010). “Assessing the accuracy of applying photogrammetry to take
geometric measurements on building products.” Journal of Construction Engineering and
Management, 136(2), 242-250.
Eastman, C., Teicholz, P., Sacks, R., Liston, K. (2008). BIM Handbook: A Guide to Building
Information Modeling for Owners, Managers, Designers, Engineers, and Contractors,
John Wiley and Sons.
El-Hakim, S. (2001). “3D modeling of complex environments.” Proceedings of SPIE – The
International Society for Optical Engineering. 4309, 162-173.
El-Omari, S., Moselhi, O. (2008). “Integrating 3D laser scanning and photogrammetry for
progress measurement of construction work.” Automation in Construction, 18(1), 1-9.
Gallaher, M. P., O'Connor, A. C., Dettbarn, J. L., Jr., Gilday, L. T. (2004). “Cost Analysis of
Inadequate Interoperability in the U.S. Capital Facilities Industry.” NIST GCR 04-867.
Markley, J.D., Stutzman, J.R., Harris, E.N. (2008). “Hybridization of photogrammetry and laser
scanning technology for as-built 3D CAD models.” 2008 IEEE Aerospace Conference,
1014(1).
Mikhail, E.M., Bethel, J.S., McGlone, J.C. (2001). Introduction to Modern Photogrammetry,
Wiley & Sons.
Nistér, D. (2004). “Automatic passive recovery of 3D from images and video.” Proceedings - 2nd
International Symposium on 3D Data Processing, Visualization, and Transmission,
3DPVT, 438-445.
Ordonez, C., Martinez, J., Arias, P., Armesto, J. (2010). “Measuring building façades with a low-
cost close-range photogrammetry system.” Automation in Construction, 19(6), 742-749.
Pollefeys, M., Koch, R., Van Gool, L. (1999). “Self-calibration and metric reconstruction inspite
of varying and unknown intrinsic camera parameters.” International Journal of Computer
Vision, 32(1), 7-25.
Remondino, F., Guarnieri, A., Vettore, A. (2005). “ 3D modeling of close-range objects:
photogrammetry or laser scanning?” Proceedings of the SPIE - The International Society
for Optical Engineering, 5665(1), 216-25.
Remondino, F., El-Hakim, S. (2006). "Image-Based 3D Modelling: A Review." The
Photogrammetric Record, 21(115), 269-291.
Shum, H.Y., Kang, S.B. (2000). “Review of image-based rendering techniques.” Proceedings of
SPIE-The International Society for Optical Engineering, 4067(1-3), 2-13.
Tang, P., Huber, D., Akinci, B., Lipman, R., Lytle, A. (2010). “Automatic reconstruction of as-
built building information models from laser-scanned point clouds: a review of related
techniques.” Automation in Construction, 19, 829-843.
Triggs, B., McLauchlan, P., Hartley, R., Fitzgibbon, A. (2000). “Bundle adjustment – A modern
synthesis.” Vision Algorithms: Theory and Practice, 1883, 298-375.
Image-based 3D reconstruction and Recognition for Enhanced Highway
Condition Assessment
Berk Uslu1, Mani Golparvar-Fard2, and Jesus M. de la Garza3
1 Graduate Student, Construction Engineering and Management Group. Via Dept. of Civil
and Environmental Engineering, Virginia Tech, Blacksburg, VA; PH (540) 905-8525; FAX
(540) 231-7532; email: berkuslu@vt.edu
2 Assistant Professor, Construction Engineering and Management Group. Via Dept. of Civil
and Environmental Engineering, and Myers-Lawson School of Construction, Virginia Tech,
Blacksburg, VA; PH (540) 231-7255; FAX (540) 231-7532; email: golparvar@vt.edu
3 Vecellio Professor, Construction Engineering and Management Group. Via Dept. of Civil
and Environmental Engineering, and Myers-Lawson School of Construction, Virginia Tech,
Blacksburg, VA; PH (540) 231-7255; FAX (540) 231-7532; email: chema@vt.edu

ABSTRACT
Frequent and accurate condition assessment is essential for an effective transportation
system operation and asset management. Despite the importance, current manual data
collection methods for highway assets are time consuming, subjective and sometimes
unsafe. There is a need for an automated and efficient data collection method that
does not have a significant cost impact and can achieve automation, accuracy, and
safety in condition assessment. Over the past few years, advances in technology such
as cheap and high-resolution digital cameras and availability of vast data storage has
allowed a number of computer vision models to be developed that can detect and
assess condition of some individual assets. However, none of these vision-based
methods recognize, locate, assess condition of the assets, and visualize their most
updated status in a 3D environment. This paper proposes a new approach, based on
3D image-based reconstruction and integrated recognition of color, shape, and texture
for highway assets, and presents preliminary results from the developed system on a
real world case study.

INTRODUCTION
Infrastructure systems are recognized as the fundamental foundation of societal and
economic functions such as transportation, communication, energy distribution,
wastewater collection, and water supply. Most of the infrastructure systems are both
geographically extensive and have a long service life. It is expensive to provide and
manage any physical infrastructure over spatially extensive areas and for longtime
spans. This spatial and temporal range of infrastructure systems causes a high degree
of uncertainty in setting numerical models for modeling deterioration rates. These
characteristics of the infrastructure systems complicate the planning for future
infrastructure maintenance, repair, and reconstruction of the existing facilities. High
costs, tight budgets, and previous decisions that were based on inaccurate predictions
of infrastructure performance are resulting in serious consequences (Maser 2005).
The American Society of Civil Engineers estimates that $2.2 trillion is needed over
five years to repair and retrofit the U.S. infrastructure to a good condition (ASCE
2009). This issue is not only limited to the U.S. as the infrastructure in other countries
is also aging and failing. Although managing and maintaining infrastructure is not a
new problem, in recent decades a significant expansion in the size and
complexity of infrastructure networks has posed several new engineering and
management problems on how existing infrastructure can be monitored, prioritized,
and maintained in a timely fashion. One of the grand challenges in restoring and
improving urban infrastructure, as identified by the National Academy of Engineering
(NAE 2010), is to devise techniques to efficiently create records of locations and up-
to-date status of the infrastructure.
The need for frequent tracking and condition assessment is not only specific to
existing infrastructure but it is also affecting new construction projects due to lack of
techniques to easily and quickly track, analyze and visualize the as-built status of a
project and monitor performance metrics (Golparvar-Fard et al. 2010, 2009a&b). To
address these inefficiencies in an all-inclusive manner, this research looks into
creating a new technique through application of infrastructure close range imagery,
and explores how current challenges of creating up-to-date records of new and
existing civil infrastructure (recognizing and locating them), in addition to assessing
their conditions can be proactively addressed. This paper proposes a new approach,
based on 3D image-based reconstruction and integrated recognition of color, shape,
and texture for highway assets, and presents preliminary results from the developed
system on a real world case study.

PROBLEM STATEMENT
In current practice, assessing asset conditions is still a predominantly manual and thus
a time consuming process. A certain amount of subjectivity and the experience of the
raters have an undoubted influence on the final assessment (Bianchini et al. 2010). In
addition, most maintenance decision-making approaches employ a discrete
representation of condition. For example, pavements are usually evaluated in five
different condition states varying from excellent to very poor (de la Garza and
Krueger 2008). Advances in continuous condition based decision-making are of
interest to the infrastructure management community, since infrastructure damage
variables are typically continuous in nature. Rapid advances in automated inspection
techniques are easily measuring these damage variables, and practical benefits from
considering this more natural representation of condition are increasingly possible.
These advances foster further research in formulating, solving, and implementing
infrastructure management methods using continuous representations of important
condition variables. Some research studies have already addressed the problem of
automated detection, classification, and assessment of assets in a discrete fashion
(Mashford et al. 2009, Meegoda et al. 2006). Current research efforts in devising a
computer vision model for highway asset detection are roughly divided into three
stages: segmentation, detection and condition assessment. Bascon (2010) presented a
Support Vector Machine to recognize road-signs. Krishnan (2007) has presented a
triangulation and bundle adjustment approach for identifying road signs. Hu and Tsai
(2010) and Wu and Tsai (2006) have created a nearest-neighbor assignment of feature
descriptors for an image recognition model for developing a sign inventory. Although
most of these techniques have achieved the goal of automation and accuracy to a
reasonable level, nonetheless none of these systems use the same visual information
to locate the assets and more importantly detect them in a continuous fashion.
The specific goal of this research is to create an automated condition assessment
tool that will be used for low-cost, accurate, frequent, and continuous data collection.
The newly created condition assessment system, contrary to the current systems in
use will not be solely focusing on one type of asset, but will be a comprehensive
system that can be employed to perform automated condition assessment for many
different assets (such as guardrail, signs, paved ditches and lighting fixtures). By
utilizing this newly created system, highway agencies would not only obtain low-
cost, accurate, and frequent condition data, but could also use this consistent data to set
the discrete representation of conditions of low-capital assets and formulate the
deterioration rates for these assets. Consequently, this would allow better investment
planning for low-capital assets.
developed 3D image based reconstruction technique (Golparvar-Fard et al. 2010 &
2009a) which enables assets to be located and visualized in a common 3D
environment, and integrates 3D reconstruction with 2D recognition of elements.

RESEARCH APPROACH
The newly proposed approach and the developed system will be able to exceed the
minimum requirements of standards on safety, efficiency, and consistency by
utilizing visual sensing techniques. The working principle of the system is
summarized in Figure 1. The steps that will be followed to create the proposed system
are as follows:
1. 3D image-based reconstruction of all objects using the D4AR reconstruction
approach (Golparvar-Fard 2010) which integrates structure-from-motion, multi-
view stereo and voxel coloring/labeling;
2. Utilizing the Semantic Texton Forest (STF) algorithm to independently segment
each image into proper asset categories;
3. Integrate camera parameters recognized through the reconstruction step with the
segmented areas to stitch relevant image parts into a panoramic image (necessary
for large assets, such as guardrails and pavement, that are present in more than
one frame);
4. Project and visualize the results into a common 3D environment, accessible
through ubiquitous devices in onsite and remote coordination centers.

Figure 1. The data and process in our developed system


3D Image-based Reconstruction
The state of the art in 3D reconstruction has undergone significant improvement over the
past few years. Availability of cheap and high-resolution imagery along with large
data storage capacity, in addition to advances in computing, has created a great
opportunity to run 3D image-based reconstruction at large scales. A few research
groups (Furukawa et al. 2010, Gallup et al. 2010) have already demonstrated high
density and accurate image based reconstruction results. Application of image-based
3D reconstruction in the construction industry is relatively new. These images are
traditionally unordered and uncalibrated, and usually include a significant amount of
occlusion, which makes the application of existing 3D reconstruction algorithms
difficult. Recently, Golparvar-Fard et al. (2010, 2009a) proposed a new dense
reconstruction algorithm which is based on Structure-from-Motion (SfM), Multi-
View Stereo (MVS), and a voxel coloring/labeling mechanism which results in dense
reconstruction. In this research, the 3D image-based reconstruction module builds
upon the newly proposed algorithm and is tested in the context of sequentially
captured images for highways.

Semantic Texton Forest (STF) for Recognition


Textons and visual words have proven powerful discrete image representations for
categorization and segmentation. In these approaches, filter bank responses (e.g.,
derivatives of Gaussians, wavelets) or invariant descriptors (e.g., SIFT) are computed
across a training set. The collections of these descriptors are clustered to produce a
codebook of visual words, typically with the simple but effective k-means, followed
by nearest-neighbor assignment. Unfortunately, this three-stage process is extremely
slow and often the most time consuming part of the whole system, even with
optimizations such as kd-trees, the triangle inequality, or hierarchical clusters, making
their application less attractive for highway asset management.
The STF algorithm (Shotton et al. 2008) is an efficient and powerful low-level
feature which can be effectively employed in the semantic segmentation of images.
Semantic texton forests do not need the expensive computation of filter-bank
responses or local descriptors. The STF algorithm is built upon a randomized decision
tree structure where the nodes in the trees provide: (i) Implicit hierarchical clustering
into semantic textons, and (ii) Explicit local classification estimate. Finally, these
features are used in a machine learning algorithm which performs segmentation and
detection with a semi-supervised technique (the algorithm trains itself with ground
truth images that are created by the user).

Randomized Decision Trees


As illustrated in Figure 2, a decision forest is a group of T decision trees. P(c|n) is the
learned class (c) probability distribution associated with each node (n) in the tree. A
decision tree works by branching down the tree according to a learned binary function
of the feature vector, until a leaf node l is reached. The whole forest achieves an
accurate and robust classification by averaging the class distributions over the leaf
nodes L = (l1, …, lT):
P(c|L) = (1/T) ∑_t P(c|l_t),  t = 1, …, T    (1)
A forest consists of T decision trees. A feature vector is classified by descending
each tree. This gives, for each tree, a path from root to leaf, and a class distribution at
the leaf. As an illustration, the root-to-leaf paths are highlighted in yellow and class
distributions in red for one input feature vector.

Figure 2. Decision forests.
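
A minimal Python sketch of the classification rule in equation (1) is given below: a feature vector descends each tree to a leaf, and the forest averages the stored leaf distributions. The toy tree structure and distributions are illustrative stand-ins, not the trained semantic texton forest.

# Average leaf class distributions over a forest of decision trees (equation 1).
import numpy as np

class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None,
                 class_dist=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right = left, right
        self.class_dist = class_dist          # P(c|l) stored at leaf nodes

def descend(node, v):
    """Follow the learned binary split function until a leaf is reached."""
    while node.class_dist is None:
        node = node.left if v[node.feature] < node.threshold else node.right
    return node.class_dist

def forest_predict(trees, v):
    """Equation (1): P(c|L) = (1/T) * sum over trees of P(c|l_t)."""
    return np.mean([descend(t, v) for t in trees], axis=0)

leaf = lambda d: Node(class_dist=np.array(d))      # toy 3-class leaves
t1 = Node(feature=0, threshold=0.5, left=leaf([0.8, 0.1, 0.1]), right=leaf([0.1, 0.7, 0.2]))
t2 = Node(feature=1, threshold=0.3, left=leaf([0.6, 0.3, 0.1]), right=leaf([0.2, 0.2, 0.6]))
print(forest_predict([t1, t2], v=np.array([0.2, 0.9])))
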

Randomized Learning
Each tree is trained separately on a small random subset of the training data I.
Learning proceeds recursively, splitting the training data In at node n into left and
right subsets Il and Ir according to a threshold t of some split function f of the feature
vector v.
Il = {i ∈ In | f(vi) < t}    (2)
Ir = In \ Il    (3)
At each split node, several candidates for function f and threshold t are generated
randomly, and the one that maximizes the expected gain in information about the
node categories is chosen.
ΔE = E(In) − (|Il| / |In|) E(Il) − (|Ir| / |In|) E(Ir)    (4)
where E(I) is the Shannon entropy of the classes in the set of examples I (Shotton
et al. 2008). The training continues to a maximum depth D or until no further information
can be acquired. The class distributions P(c|n) are estimated as a histogram of the class
labels ci of the training examples i that reached node n.
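
The randomized split selection of equations (2)-(4) can be sketched as follows. This simplified stand-in draws random (feature, threshold) candidates and keeps the one with the largest information gain; it is not the authors' training code.

# Random candidate splits scored by information gain (equations 2-4).
import math
import random
from collections import Counter

def entropy(labels):
    """Shannon entropy E(I) of the class labels in a set of examples."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values()) if n else 0.0

def info_gain(labels, left_labels, right_labels):
    """Equation (4): gain from splitting I_n into I_l and I_r."""
    n = len(labels)
    return (entropy(labels)
            - len(left_labels) / n * entropy(left_labels)
            - len(right_labels) / n * entropy(right_labels))

def best_random_split(examples, n_candidates=50):
    """examples: list of (feature_vector, class_label). Returns ((feature, t), gain)."""
    best, best_gain = None, -1.0
    labels = [c for _, c in examples]
    dim = len(examples[0][0])
    for _ in range(n_candidates):
        f = random.randrange(dim)                        # random split feature
        t = random.choice([v[f] for v, _ in examples])   # random threshold
        left = [c for v, c in examples if v[f] < t]      # I_l (equation 2)
        right = [c for v, c in examples if v[f] >= t]    # I_r = I_n \ I_l (equation 3)
        gain = info_gain(labels, left, right)
        if gain > best_gain:
            best, best_gain = (f, t), gain
    return best, best_gain
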

Bag of Semantic Textons


The bag of semantic textons combines a histogram of semantic textons over an image
region with a region prior category distribution. The bag of semantic textons is used
with a support vector machine (SVM) classifier which, by assuming an image-level prior
over categories, enables the segmentation to emphasize those categories that the SVM
believes to be present.
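
The following sketch illustrates, with toy node identifiers and distributions, how a bag of semantic textons for an image region could be assembled: a histogram over the tree nodes visited by the region's pixels together with a region prior averaged from the leaf class distributions.

# Toy bag of semantic textons: node-visit histogram plus region prior.
import numpy as np
from collections import Counter

def bag_of_semantic_textons(pixel_paths, leaf_dists):
    """
    pixel_paths: for each pixel in the region, the node ids visited (split nodes
                 and the final leaf) while descending one tree.
    leaf_dists:  dict mapping leaf node id -> class distribution P(c|l).
    """
    histogram = Counter(node for path in pixel_paths for node in path)
    region_prior = np.mean([leaf_dists[path[-1]] for path in pixel_paths], axis=0)
    return histogram, region_prior

leaf_dists = {3: np.array([0.9, 0.1]), 4: np.array([0.2, 0.8])}   # toy values
paths = [[0, 1, 3], [0, 1, 3], [0, 2, 4]]                         # three pixels
hist, prior = bag_of_semantic_textons(paths, leaf_dists)
print(hist, prior)     # node counts and averaged P(c|l) for the region
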

Figure 3. Bags of semantic textons.


Within a region r of image I, the semantic texton histogram and region prior are
generated. The histogram incorporates the implicit hierarchy of clusters in the STF,
containing both STF leaf nodes (green) and split nodes (yellow). The region prior is
computed as the average of the individual leaf node class distributions P(c|l).

RESEARCH EXPERIMENTS
The developed asset management system first performs a 3D image-based
reconstruction using the images that are collected in a sequential fashion; next, the
STF algorithm is implemented to perform segmentation and classification of the
images acquired from the highway. The performance of the semantic texton forest
algorithms for the segmentation and detection of the highway assets is evaluated in
the newly created automatic condition assessment system. The recognition algorithm
uses a dataset consisting of the images and the ground truths (same image labeled in a
supervised fashion) of these images that are used to create the decision trees.
There were two experiments performed to evaluate the performance of this
algorithm. The first experiment was performed with a new image dataset of fourteen
images consisting of four categories (i.e., guardrail, pavement, poles, and signs) plus
the void category, to investigate the performance of the algorithm for the
segmentation and detection of the highway asset images. These images were taken
from Virginia Tech’s Smart Road, a 2.1 mile long research facility used for
highway research located in Blacksburg, VA. An initial 3D reconstruction was
performed with this dataset. The results of this initial and controlled experiment
suggested that the number of categories used for the training should be increased in
order to have correct segmentation with minimal segmentation confusion (wrong
recognition of the category).
Subsequently, a second experiment was performed by extending the dataset that
was created for the first experiment. By adding background objects (such as sky,
grass, soil, or trees) as new categories for the algorithm to be trained on, the confusion
was reduced significantly. The dataset for this experiment consisted of twelve
different categories plus a void category to train the algorithm. Similar to the first
experiment, a 3D image-based reconstruction was performed with this dataset. Table
1 presents the results of evaluating performance of the 3D image-based reconstruction
algorithm with the state-of-the-art Structure from Motion algorithm (Snavely et al.
2007) on the dataset.

Table 1. Results of the 3D image-based reconstruction.

Experiment   # of images   SfM point cloud resolution   D4AR point cloud resolution   SfM computational time (1)   D4AR computational time   Recall (2)
#1           120           108,621                      1,437,001                     6 hr 13 min                  8 hr 25 min               0.93
#2           171           175,737                      2,076,887                     8 hr 54 min                  10 hr 17 min              0.98
(1) Computation times are benchmarked on an Intel i7 core with 12 GB of RAM.
(2) Recall: percentage of the images that are successfully registered to the point cloud.
Figure 4. 3D Image-based reconstruction results


Table 2 presents the segmentation categories, the number of images used per category
and the specific color that was assigned to each category for supervised training and
automated testing. For this purpose, the regions of interest were highlighted with
these colors in a supervised fashion and the rest of the images were highlighted in
black representing the void category.
Table 2. Thirteen segmentation categories for experiment #2.

Category Name        Images (#)   (R,G,B) Color
Void                 7            (0,0,0)
Asphalt Pavement     7            (0,128,0)
Concrete Pavement    7            (0,0,128)
Guardrail            7            (128,0,0)
Poles                7            (128,128,0)
Signs                7            (128,128,128)
Trees                7            (0,128,128)
Grass                7            (128,0,128)
Soil                 7            (255,0,0)
Sky                  7            (0,255,0)
Safety Cones         7            (0,0,255)
Traffic Lights       7            (255,128,255)
Pavement Markings    7            (128,255,255)


Figure 5. Supervised segmentation of the ground truth images.

Results and Discussion


Results of the second experiment were investigated further to evaluate the
performance of the STF algorithm. For each category, three images along with their
segmentations were randomly selected. Ten principal pixels were
selected per image from the asset of interest, and the segmentation result was
evaluated by acquiring the RGB values of these points. If the RGB value of a point
matched the specific color assigned to the asset of interest, it was considered to be a
True Positive (TP); if it did not match, it was considered to be a False Negative (FN).
The results of this analysis were plotted in a Receiver Operating Characteristic (ROC)
plot (Figure 6).
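
A small sketch of this pixel-sampling evaluation is given below. The image file name, pixel locations, and helper function are assumptions for illustration; the guardrail colour (128, 0, 0) is taken from Table 2.

# True positive rate from sampled pixel colours in a segmentation image.
import cv2
import numpy as np

def true_positive_rate(segmentation_bgr, sample_pixels, category_rgb):
    """sample_pixels: list of (row, col); category_rgb: (R, G, B) from Table 2."""
    target_bgr = np.array(category_rgb[::-1])        # OpenCV images are stored BGR
    hits = sum(np.array_equal(segmentation_bgr[r, c], target_bgr)
               for r, c in sample_pixels)
    return hits / len(sample_pixels)

seg = cv2.imread("segmentation_result.png")          # assumed segmentation output
pixels = [(120, 340), (122, 355), (130, 360)]        # hypothetical sample locations
print(true_positive_rate(seg, pixels, category_rgb=(128, 0, 0)))   # guardrail
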

[ROC plot of True Positive Rate (%) versus False Negative Rate (%) for the trained categories: Asphalt Pavement, Concrete Pavement, Guardrail, Poles, Signs, Trees, Grass, and Soil.]
Figure 6. ROC plot for trained categories.

As demonstrated in Figure 6, the results of this preliminary experiment were
mostly reasonable. All of the images except one have a true positive rate above the
50% line. Although minor segmentation confusions were present, as
demonstrated in Figure 7, most of the images were segmented successfully. The high
success rates in the segmentations are encouraging and suggest that the STF
algorithm can be implemented to perform the segmentations for the newly created
automated condition assessment system.


Figure 7. The segmentation and asset recognition results.

The results show that if distinct features of a highway asset are present, the success
rate in segmenting that asset is increased. As represented in Figure 6, the True
Positive rates for the signs are among the highest. This is caused by the distinct green
color of these signs. In contrast, the segmentation results for the poles are among the
lowest since the features of these asset items resemble those of other asset items such as the
guardrails. The computational time confirms that application of such a machine
learning algorithm is much faster and more convenient compared to other algorithms
used for segmentation. The machine learning kernel allows the thresholds for the
filter bank to be automatically trained through the ground truth data and dynamically
finds the threshold surface. This flexibility is an important attribute for the highway
asset condition assessment system, yet it also confirms that, for a more robust segmentation
and categorization of assets, more systematic collection of training data is required.

Conclusion
The automated and integrated image-based 3D reconstruction and recognition asset
management system presented in this paper demonstrates promising results. The low
cost and accuracy of this technology, along with the high safety associated with its
application, can replace the current manual and subjective data analysis and/or the
computer vision systems that are currently in use. The implementation of this
algorithm is the first step in creating this new condition assessment system. By using
this approach, there will be no need for application of filter-bank responses or local
descriptors which are computationally expensive. More experiments need to be
conducted by expanding the training dataset, and testing performance on different
datasets with different levels of visibility and occlusion. Since the 3D image-based
reconstruction algorithm geo-registers and associates images together, the
segmentation results in any of these paired images can help in boosting the
confidence in segmentation and recognition of any new training image. This
integration will also be tested and reported in the near future.
References
ASCE. (2009). The 2009 report card for America’s infrastructure.
http://www.asce.org/reportcard/2009. Accessed Jan. 10 2011.
Bascon S. M., Rodriguez J. A. , Arroyo S. L., Caballero A. F., and Lopez-Ferreras F. (2010). “An
optimization on pictogram identification for the road-sign recognition task using SVMs.” CVIU. 14
(3), 373-383.
Bianchini A., Bandini P., and Smith D.W. (2010). “Interrater reliability of manual pavement distress
evaluations.” ASCE J. of Transp.Eng., 136 (2), 165-172.
de la Garza J. M., and Krueger D. A. (2007). “Simulation of highway renewal asset management
strategies.” Proc., ASCE Conf. of Computing in Civil Eng., 527-541, 2007.
Furukawa Y., Curless B., Seitz S.M. and Szeliski R. (2010). “Towards internet-scale multi-view
stereo.” Proc., Computer Vision and Pattern Recognition Conf.
Gallup D., Frahm J.-M., Pollefeys M. (2010). “A heightmap model for efficient 3D reconstruction
from street-level video.” Proc., Int. Conf. on 3D Data Processing, Visualization and Transmission
(3DPVT2010).
Golparvar-Fard M., Peña-Mora F. and Savarese S. (2010). “D4AR – 4 dimensional augmented reality -
tools for automated remote progress tracking and support of decision-enabling tasks in the
AEC/FM industry.” Proc., The 6th Int. Conf. on Innovations in AEC.
Golparvar-Fard M., Peña-Mora F., and Savarese S. (2009a). “D4AR- a 4-dimensional augmented
reality model for automating construction progress data collection, processing and
communication.” Journal of Information Technology in Construction (ITcon), 14, 129-153.
Golparvar-Fard M., Peña-Mora F. Arboleda C. A., and Lee S. H. (2009b). “Visualization of
construction progress monitoring with 4D simulation model overlaid on time-lapsed photographs.”
ASCE J. of Computing in Civil Engineering, 23 (6), 391-404
Hu Z. and Tsai Y. (2010) “Image Recognition Model for Developing a Sign Inventory” ASCE J. of
Comp. in Civil Eng., in press.
Krishnan A. (2009). “Computer vision system for identifying road signs using triangulation and bundle
adjustment”. MS Thesis, Computer Engineering. Kansas State University, Manhattan, Kansas.
Maser K., J. (2005) “Automated systems for infrastructure condition assessment” ASCE J. Infrastruct.
Syst. 11, 153.
Mashford J., P. Davis P., Rahilly M. “Pixel-based colour image segmentation using support vector
machine for automatic pipe inspection,” Proc. the 20th Australian Joint Conf. on AI, vol.
4830,739–743.
Meegoda J. N., Juliano T. M., and Banerjee A., (2006). “A Framework for Automatic Condition
Assessment of Culverts,” Paper No. 06-2414, 85th Annual Meeting of the Transportation Research
Board, Washington, DC,
NAE, National Academy of Engineers (2010). Grand Challenges for Engineering. NAE of the
National Academies.
Shotton J., Johnson M., Cipolla R., (2008). “Semantic Texton Forests for Image Categorization and
Segmentation.” Proc. Int. Conf. Computer Vision and Pattern Recognition.
Snavely N., Steven M. Seitz, S. M., Szeliski, R. (2007). “Modeling the World from Internet Photo
Collections”. Int. J. of Comp.Vis., 2007.
Wu J. and Tsai Y. (2006). “Enhanced Roadway Inventory Using 2-D Sign Video Image recognition
Algorithm”, J. of Computer-Aided Civil & Infrastructure Eng., 21, 369-382.
Design and Evaluation of Algorithm and Deployment Parameters for an RFID-
Based Indoor Location Sensing Solution

N. Li1, S. Li2, B. Becerik-Gerber3, and G. Calis4


1 Ph.D. Student, Sonny Astani Department of Civil and Environmental Engineering,
University of Southern California, Los Angeles, CA 90089-2531; PH (213) 740-0578;
email: nanl@usc.edu
2 Ph.D. Student, Sonny Astani Department of Civil and Environmental Engineering,
University of Southern California, Los Angeles, CA 90089-2531; PH (213) 740-0578;
email: shuail@usc.edu
3 Assistant Professor, A.M.ASCE, Sonny Astani Department of Civil and
Environmental Engineering, University of Southern California, Los Angeles, CA
90089-2531; PH (213) 740-4383; Fax: (213) 744-1426; email: becerik@usc.edu
4 Postdoctoral Researcher, Sonny Astani Department of Civil and Environmental
Engineering, University of Southern California, Los Angeles, CA 90089-2531; PH
(213) 740-0560; Fax: (213) 744-1426; email: gulben.calis@usc.edu

ABSTRACT

Indoor location information is valuable to the building industry for a wide


range of purposes such as on-site personnel safety, asset security, facility
maintenance, and in-building emergency response. Despite the availability of
applicable technologies, there is no indoor location sensing solution that is cost
efficient and, therefore, widely adopted by the industry while providing highly
accurate location information. The authors have designed and tested an RFID-based
location sensing algorithm that uses virtual reference tags, which eliminates the need
for collecting prior localization data, increases the accuracy of the location
information, and potentially reduces the deployment costs. The paper summarizes a
series of 9 field tests, and presents findings on algorithm parameter optimization and
equipment deployment strategies. The test results show that k = 4 nearest
neighbors and arithmetic averaging yielded the best results, and that the performance of
the proposed solution was consistent for different reference tag layouts.
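
The general k-nearest-neighbor idea referenced above can be sketched as follows: the tracked tag's position is estimated as the arithmetic average of the positions of the k reference tags whose RSSI signatures are most similar. This LANDMARC-style illustration uses hypothetical readings and is not the authors' virtual-reference-tag algorithm.

# Generic kNN localization from RSSI similarity to reference tags (illustrative).
import numpy as np

def estimate_position(target_rssi, ref_rssi, ref_positions, k=4):
    """
    target_rssi:   (n_readers,) RSSI vector of the tracked tag.
    ref_rssi:      (n_refs, n_readers) RSSI vectors of the reference tags.
    ref_positions: (n_refs, 2) known x, y coordinates of the reference tags.
    """
    distances = np.linalg.norm(ref_rssi - target_rssi, axis=1)   # signal-space distance
    nearest = np.argsort(distances)[:k]                          # k most similar tags
    return ref_positions[nearest].mean(axis=0)                   # arithmetic average

refs = np.array([[-61, -70], [-64, -66], [-70, -60], [-75, -58]])     # hypothetical RSSI
positions = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
print(estimate_position(np.array([-63, -68]), refs, positions, k=2))
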

INTRODUCTION
Location information is of paramount value to the building industry. It is the
basis of context-awareness (Aziz et al. 2005), which relies on the automatic
recognition of both the user’s location and activity. Context-aware information
delivery can replace current manual processes with automated delivery of spatial
information to on-site mobile users. With its application, targets such as building
materials, equipment, construction tools, and people can be easily located and target-

77
78 COMPUTING IN CIVIL ENGINEERING

specific information can be accessed onsite, which increases the efficiency of


information search and supports important decision-making tasks in the field (Khoury
and Kamat 2009). For in-building use, facility management (FM) personnel could be
provided with locations of building components or equipment they need to maintain
or repair. Locations of tools and on-site FM personnel and the length of time they
spend at each location could be analyzed to optimize tool usage and improve
productivity. Occupants unfamiliar with a built environment could be provided with
location information to navigate around and find their destinations. Changes in
building occupancy could be detected in real time through location sensing, and
measures could be taken to reduce energy consumption, such as turning off lighting
and air conditioning in unoccupied rooms. Assets could also be monitored for anti-
theft purposes. During emergencies, rescuers could be guided to the shortest route
through a building and they could be supported by space-specific information.
The importance of location sensing has resulted in the assessment of various
technologies for indoor location sensing (ILS) purposes, including indoor GPS, motion
sensors, infrared, ultra wide band (UWB), ultrasound, wireless local area network
(WLAN), and radio frequency identification (RFID). A recent study compared these
ILS technologies (Li and Becerik-Gerber 2011), and illustrated RFID technology’s
advantages over competing ILS technologies, including its capability of providing
accurate and cost-efficient indoor location information, its lack of a line-of-sight
requirement, its on-board data storage capacity, which enables context-specific data
to be accessed onsite (Ergen et al. 2007; Motamedi and Hammad 2009), and its wide
adoption by the building industry (Domdouzis et al. 2007; Ergen and Akinci 2007; Wing 2006).
To realize the above benefits of indoor location information, this paper
proposes a new RFID-based location sensing algorithm, and builds and tests a
solution. Research objectives include evaluation of two algorithm parameters and one
deployment parameter. The algorithm was tested in a controlled environment, and
the findings are presented and discussed.
LITERATURE REVIEW AND MOTIVATION
Given the various benefits of indoor location information and the context-aware
services built on it, an ILS solution applicable to the building industry could bring
tremendous value to building owners, managers, and occupants. This has triggered a
number of research projects that focused on developing ILS solutions using the RFID
technology. The majority of the research has been done in the electrical engineering
and computer science fields, which either proposed algorithms that could locate
targets at the point level (Ni et al. 2004; Zhao et al. 2007) or at the zone level
(Hightower et al. 2000; Zhen et al. 2008), or improved existing algorithms by
calibrating estimated locations (Jin et al. 2006; Hsu et al. 2009; Wang et al. 2009) or
optimizing infrastructure layout (Li et al. 2009; Sue et al. 2006). The reported
accuracy from field tests or simulations in these research projects varied from 0.5 m
to 2 m. Recently, researchers in the building industry have also focused on this area,
developing and testing several ILS system prototypes (Rueppel and Stuebbe 2008;
Pradhan et al. 2009; Taneja et al. 2010; Razavi and Haas 2011) and comparing
multiple factors in algorithm design and deployment (Khoury and Kamat 2009; Luo
et al. 2010; Zhou and Shi 2011).

Despite the achievements of previous research, several issues remain to be
addressed. The first issue is the criteria used in designing and evaluating previous
solutions, which are not persuasive because of an unbalanced focus on accuracy and
the neglect of other criteria such as cost, robustness, and scalability. Secondly, the
adaptability of the previous solutions to the building industry is uncertain. The
majority of the solutions were tested either in simplified scenarios or in simulated test
beds, making the adaptability of the solutions in real-life deployments questionable.
For those tested at a zone or building level, more discussion is needed on how the
solutions should be deployed to optimize their effectiveness and achieve the best
performance.
MATHEMATICAL FRAMEWORK
The proposed algorithm has the following components: reference tags,
tracking tags, and virtual reference tags. Reference tags are active RFID tags that
have known IDs and locations, and they are deployed around the sensing area as
reference points. Tracking tags are also active RFID tags, but they are attached to
targets, which can be persons or objects (building materials, components, equipment,
tools), and either stationary or mobile. Virtual reference tags have the same attributes
and functions as reference tags except that they are imaginary, and the strengths of
signals they emit are estimated instead of measured.
The algorithm builds on the k-nearest neighbor (KNN) method, which locates a target
from the known locations of the target's k nearest neighboring reference tags. A
“virtual reference tag” method is used in this research to loosen the tradeoff between
accuracy and cost. Locations of the virtual tags are assigned and their RSSI values are
estimated, through which the impact of the complex and dynamic nature of built
environments is also considered.
The framework for the developed algorithm is established as follows:
Step 1: Determine the number and layout of virtual reference tags;
Step 2: Estimate the RSSI readings of virtual reference tags based on RSSI
readings of real reference tags.
Step 3: Establish the Euclidean distance between each target and each
reference or virtual tag, using RSSI readings gathered in tests and estimated in step 2;
Step 4: Identify the k nearest neighbors of a target based on a vector that
denotes the Euclidean distances of a tag to all antennae;
Step 5: Estimate the location of the target, using either the arithmetical or
weighted averages of the locations of the target’s k nearest neighbors. The location of
target $i$ is estimated as $(x_i, y_i) = \sum_{j=1}^{k} w_j (x_{ij}, y_{ij})$, where
$(x_i, y_i)$ is the estimated coordinate of target $i$, $(x_{ij}, y_{ij})$ is the actual
coordinate of the $j$-th nearest neighbor of target $i$, $j \in (1, k)$, and $w_j$ is the
weighting factor.
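For illustration, the following Python sketch (not the authors' implementation; the inverse-distance weighting scheme, variable names, and sample data are assumptions added here) shows how Steps 3 to 5 can be carried out once RSSI readings of real and virtual reference tags are available.

```python
import numpy as np

def localize(target_rssi, ref_rssi, ref_xy, k=4, weighted=False):
    """Estimate a target's (x, y) from its RSSI vector (one reading per antenna).

    target_rssi : (n_antennae,) RSSI readings of the target tag
    ref_rssi    : (n_refs, n_antennae) RSSI readings of real/virtual reference tags
    ref_xy      : (n_refs, 2) known (x, y) coordinates of the reference tags
    """
    # Step 3: Euclidean distance in RSSI space between the target and each reference tag
    d = np.linalg.norm(ref_rssi - target_rssi, axis=1)
    # Step 4: indices of the k nearest neighbors
    nn = np.argsort(d)[:k]
    if weighted:
        # Inverse-distance weights (an assumed weighting scheme), normalized to sum to 1
        w = 1.0 / (d[nn] + 1e-9)
        w /= w.sum()
    else:
        # Arithmetical average: equal weights
        w = np.full(k, 1.0 / k)
    # Step 5: weighted average of the neighbors' coordinates
    return w @ ref_xy[nn]

# Illustrative data only: 4 antennae, 6 reference tags in a 6 m x 7 m room
ref_xy = np.array([[1, 1], [1, 6], [3, 3.5], [5, 1], [5, 6], [3, 1]])
ref_rssi = np.array([[-60, -75, -80, -88], [-72, -62, -85, -80],
                     [-68, -70, -70, -72], [-82, -88, -63, -74],
                     [-85, -78, -70, -62], [-70, -84, -66, -80]])
target_rssi = np.array([-66, -72, -71, -75])
print(localize(target_rssi, ref_rssi, ref_xy, k=4))
```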

EXPERIMENT DESCRIPTION
Based on the mathematical framework, field tests were conducted using off-
the-shelf ultra high frequency (UHF) active RFID technology that runs at a frequency

of 915 MHz. The reader supports two antennae, which can be attached to the reader
directly or via data cables. The active tag is encapsulated in a plastic case, so that it
can be attached to a wider range of materials without significant interference to the
performance. Powered by an AA battery, a tag emits a non-directional signal every
1.5 seconds. A middleware is used to communicate with the reader and extract real-
time data, including tag ID, tag model, battery life, RSSI readings, last contact time,
and contact count.
To achieve the research objectives, a total of 9 field tests were completed in a
6 m by 7 m conference room in an educational building. A total of 2 readers, 4
antennae and 16 tags were used in the tests. The numbers and positions of the
reference tags and the targets varied in different tests. The antenna positions were
fixed throughout the tests: one antenna was in the corridor, another was in a room
next door, and the other two were inside the conference room. Reference tags were
attached to the ceiling to simulate the use scenario, where the mechanical, electrical
attached to the ceiling to simulate the use scenario, where the mechanical, electrical
and plumbing (MEP) equipment at the same height is tagged for maintenance
purposes. The target tags were placed either on the ground or above the ceiling.
Two algorithm parameters (number of k nearest neighbors and weighting
method) and one deployment parameter (reference tag layout - RTL) were tested, and
findings are summarized in the following section. Different RTL configurations
tested are illustrated in Figure 1. Accuracy is used as the evaluation criterion in this
paper, and the cost and robustness of the proposed solution will be evaluated in future
research. The accuracy is measured by the difference between targets' actual locations
and estimated locations.

Figure 1. Illustration of three RTLs.



FINDINGS
To optimize the algorithm parameters, different values of a parameter are
applied to each test, and the resulting accuracies are compared.
If the k value is too large, the selected neighbors may not necessarily be close to
the target, and being far away reduces their reliability as reference points; if the k value is
too small, the selected neighbors are less likely to be evenly distributed around the target,
leading to increased location error. Therefore, a desired k value should
balance these two effects. Both k=3 and k=4 values have been reported by different publications as
balanced. Both k=3 and k=4 values have been reported by different publications as
the optimal value (Huang et al. 2009; Ni et al. 2004), which indicates that the optimal
value may depend on the design of the specific solution. Table 1 illustrates the
optimal k value under the design of the proposed solution.
Table 1. Comparison of k values.
                          Arithmetical average      Weighted average
                          k=4        k=3            k=4        k=3
Mean error distance (m)   1.94       2.23           1.96       2.14
Max error distance (m)    2.15       2.60           2.21       2.47
Min error distance (m)    1.65       1.91           1.60       1.95
Standard deviation (m)    0.13       0.22           0.20       0.17

When the error distance was calculated using arithmetical averages, k=4 yielded
a higher accuracy than k=3 in all tests. When the error distance was calculated using
weighted averages, k=4 yielded a higher accuracy than k=3 in 88.9% of all tests. The
average improvements in accuracy in the two scenarios are 0.29 m (14.7%) and 0.18 m
(9.3%), respectively, with a maximum improvement of 0.55 m in test 3 using arithmetical
averages. In addition, when k=4, the error distance in most tests was within 2.2 m, with a
best value of 1.60 m, while when k=3, the error distance in about half of the tests
exceeded 2.1 m, with a best value of 1.91 m.
For the weighting method, either the arithmetical average or the weighted average
can be used. Table 2, extracted from Table 1, compares these two methods using
k=4. With arithmetical averages, the mean error distance across all 9 tests was 1.94 m,
slightly smaller than that obtained with weighted averages. The standard deviation of the
former was also smaller than that of the latter; although the difference is not significant, it
suggests a more stable performance of the algorithm.
Table 2. Comparison of weighting methods.
                          Arithmetical average    Weighted average
Mean error distance (m)   1.94                    1.96
Standard deviation (m)    0.13                    0.20

Based on the above analysis, optimal performance of the solution was
achieved when k=4 with the use of arithmetical averages. The following analysis is
based on these parameter settings.
To assess the impact of RTL, a grid layout, RTL1, and two random layouts,
RTL2 and RTL3, were compared using the 9 tests. All three RTLs covered the whole
sensing area. Table 3 shows that all three RTLs yielded similar mean error distances.

RTL2 had the largest max error distance, and RTL3 had the smallest min error
distance. Standard deviation did not vary significantly between different RTLs. In
general, the difference between RTLs did not lead to noticeable changes in accuracy.
To further validate the finding, a statistical t-test was performed using all the data gathered to
verify the following hypothesis: "the error distance of an individual target does not
change as the RTL switches from one of the three RTLs to another". No hypothesis
was rejected in the t-test at a 90% confidence level, which indicates that
changing RTLs did not cause a statistically significant change in accuracy.
Table 3. Comparison of RTLs.
                          RTL1    RTL2    RTL3
Mean error distance (m)   1.85    1.92    1.90
Max error distance (m)    2.89    3.95    3.49
Min error distance (m)    0.78    0.33    0.27
Standard deviation (m)    0.72    0.89    1.08
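For reference, the paired t-test described above could be reproduced along the following lines; the sketch uses SciPy, and the per-target error distances shown are hypothetical placeholders rather than the measured data.

```python
from scipy import stats

# Hypothetical per-target error distances (m) for the same targets under two layouts
rtl1_err = [1.2, 2.1, 0.8, 2.9, 1.7, 2.4, 1.5, 2.0]
rtl2_err = [1.4, 1.9, 0.9, 3.3, 1.6, 2.6, 1.3, 2.2]

# Paired t-test: H0 = switching RTLs does not change an individual target's error distance
t_stat, p_value = stats.ttest_rel(rtl1_err, rtl2_err)

# At a 90% confidence level, fail to reject H0 when p_value > 0.10
print(t_stat, p_value, "reject H0" if p_value < 0.10 else "fail to reject H0")
```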

DISCUSSION
Results from the field tests show that the proposed solution yielded
significantly better performance with k=4 than with k=3 under either
weighting method, with k=3 resulting in an increase of 14.7% and 9.3% in error
distance under arithmetical averages and weighted averages, respectively. This may
be caused by the fact that a smaller k value increases the chance that the identified
nearest neighbors are not evenly distributed around a target, leading to biased
estimated locations. In addition, an error in identifying one of the nearest neighbors
would lead to a larger error distance when fewer nearest neighbors are used in
calculation. On the other hand, using arithmetical averages yielded a smaller error distance
and standard deviation than using weighted averages, although the improvements
were less pronounced. Tests conducted to optimize the deployment parameter
indicate that the solution could keep its performance consistent under different RTLs,
and that a strict grid layout is not a must. Therefore, it is possible that RFID tags
attached to equipment and building components at the manufacturing stage, whose
layouts are likely to be random, could be used for ILS purposes. This would lead to
reduced costs and strengthen the argument that RFID-based solutions could be
implemented throughout a building’s life cycle.
With its optimal algorithm parameters, the solution demonstrated its ability to
adapt to different RTLs and its potential to share existing RFID equipment with other
applications. However, to better assess the capability of the solution, especially its
adaptability to building-scale implementations, the following issues need to be further
examined: performance of both stationary and mobile targets, optimization of the
number and layout of virtual reference tags, tradeoff between accuracy and cost,
robustness of the solution, and integration with various location-based services.
CONCLUSION
Indoor location information is of paramount value to the building industry and
can be used to facilitate FM practices, improve occupant experience and building

utilization, and ensure building safety and security. Currently no ILS solution has
been validated and widely used in the industry. This research proposed a new
approach for ILS. A solution was built and tested in a controlled environment for
validation. A series of 9 tests were conducted, most of which reported an accuracy
within 2 m. The use of k=4 and arithmetical averages yielded the best results. The
performance of the solution was consistent for different RTLs. Based on the results of
this research, the authors plan to further explore the effects of the following
deployment parameters: target type (stationary or mobile, ground or above ceiling),
number of readers, and number of reference tags. The accuracy/cost tradeoff and the
robustness of the proposed approach will also be assessed. Then the algorithm will be
implemented at a building level, and the solution’s technical viability, cost
implications, and potential value for supporting various location-based services will
be examined.
REFERENCES

Aziz, Z., Anumba, C. J., Ruikar, D., Carrillo, P. M., Bouchlaghem, N. M. (2005).
"Context aware information delivery for on-site construction operations." Proc.
22nd CIB-W78 Conference on Information Technology in Construction, 304,
321-327.
Domdouzis, K., Kumar, B., Anumba, C. (2007). "Radio-frequency identification
(RFID) applications: A brief introduction." Advanced Engineering Informatics,
21(4), 350-355.
Ergen, E., and Akinci, B. (2007). "An overview of approaches for utilizing RFID in
construction industry." RFID Eurasia, 2007 1st Annual, 1-5.
Ergen, E., Akinci, B., Sacks, R. (2007). "Life-cycle data management of engineered-
to-order components using radio frequency identification." Advanced
Engineering Informatics, 21(4), 356-366.
Hightower, J., Borriello, G., Want, R. (2000). SpotON: An Indoor 3D Location
Sensing Technology Based on RF Signal Strength, Department of Computer
Science and Engineering, University of Washington, Seattle, WA.
Huang, Y., Lv, S., Liu, Z., Jun, W., Jun, S. (2009). "The topology analysis of
reference tags of RFID indoor location system." Proc., 2009 3rd IEEE
International Conference on Digital Ecosystems and Technologies (DEST),
IEEE, 313-17.
Jin, G., Lu, X., Park, M. (2006). "An indoor localization mechanism using active
RFID tag." Proc., IEEE International Conference on Sensor Networks,
Ubiquitous, and Trustworthy Computing, June 5, 2006 - June 7, IEEE, 40-43.
Khoury, H. M., and Kamat, V. R. (2009). "Evaluation of position tracking
technologies for user localization in indoor construction environments." Autom.
Constr., 18(4), 444-457.
Li, N., and Becerik-Gerber, B. (2011). "Performance-based evaluation of RFID-based
indoor location sensing solutions for the built environment." Advanced
Engineering Informatics, (Article In Press).
Li, W., Wu, J., Wang, D. (2009). "A novel indoor positioning method based on key
reference RFID tags." Proc., 2009 IEEE Youth Conference on Information,
Computing and Telecommunication (YC-ICT 2009), IEEE, 42-5.

Luo, X., O'Brien, W. J., Julien, C. L. (2010). "Comparative evaluation of received


signal-strength index (RSSI) based indoor localization techniques for
construction jobsites." Advanced Engineering Informatics, (Article In Press).
Motamedi, A., and Hammad, A. (2009). "Lifecycle management of facilities components using
radio frequency identification and building information model." ITcon,
14(Special Issue Next Generation Construction IT: Technology Foresight,
Future Studies, Roadmapping, and Scenario Planning), 238-262.
Ni, L. M., Liu, Y., Yiu, C. L., Patil, A. P. (2004). "LANDMARC: Indoor location
sensing using active RFID." Wireless Networks, 10(6), 701-10.
Hsu, P.H., Lin, T. H., Chang, H. H., Chen, Y. T., Yen, C. Y., Tseng, Y. J., Chang, C.
T., Chiu, H. W., Hsiao, C. H., Chen, P. C., Lin, L. C., Yuan, H. S., Chu, W. C.
(2009). "Practicability study on the improvement of the indoor location tracking
accuracy with active RFID." Proc., CMC 2009, IEEE, 165-9.
Pradhan, A., Ergen, E., Akinci, B. (2009). "Technological assessment of radio
frequency identification technology for indoor localization." J. Comp. in Civ.
Engrg., 23(4), 230-238.
Razavi, S. N., and Haas, C. T. (2011). "Using reference RFID tags for calibrating the
estimated locations of construction materials." Automation in Construction,
(Article In Press).
Rueppel, U., and Stuebbe, K. M. (2008). "BIM-based indoor-emergency-navigation-
system for complex buildings." Tsinghua Science & Technology, 13(Supplement
1), 362-367.
Sue, K., Tsai, C., Lin, M. (2006). "FLEXOR: A flexible localization scheme based on
RFID." Proc., International Conference on Information Networking, ICOIN
2006, January 16, 2006 - January 19, Springer Verlag, 306-316.
Taneja, S., Akcamete, A., Akinci, B., Garrett, J., Soibelman, L., East, E. W. (2010).
"Analysis of three indoor localization technologies to support facility
management field activities." Proc. ICCCBE 2010, June 31-July 2.
Wang, X., Jiang, X., Liu, Y. (2009). "An enhanced approach of indoor location
sensing using active RFID." Proc., 2009 WASE International Conference on
Information Engineering (ICIE), IEEE, 169-72.
Wing, R. (2006). "RFID applications in construction and facilities management."
ITcon, 11, Special Issue IT in Facility Management, 711-721.
Zhao, Y., Liu, Y., Ni, L. M. (2007). "VIRE: Active RFID-based localization using
virtual reference elimination." Proc., 2007 International Conference on Parallel
Processing, IEEE, 8.
Zhen, Z., Jia, Q., Song, C., Guan, X. (2008). "An indoor localization algorithm for
lighting control using RFID." Proc., 2008 IEEE Energy 2030 Conference,
ENERGY 2008, November 17, 2008 - November 18, Inst. of Elec. and Elec. Eng.
Computer Society.
Zhou, J., and Shi, J. (2011). "A comprehensive multi-factor analysis on RFID
localization capability." Advanced Engineering Informatics, 25(1), 32-40.
Impact of Ambient Temperature, Tag/Antenna Orientation and Distance on the
Performance of Radio Frequency Identification in Construction Industry

S. Li1, N. Li2, G. Calis3, B. B. Gerber4


1 Ph.D. Student, Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, CA 90089-2531; PH (213) 810-0325; email: shuail@usc.edu
2 Ph.D. Student, Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, CA 90089-2531; PH (213) 300-6533; email: nanl@usc.edu
3 Postdoctoral Researcher, Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, CA 90089-2531; PH (213) 740-0560; Fax: (213) 744-1426; email: gulben.calis@usc.edu
4 Assistant Professor, A.M.ASCE, Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, CA 90089-2531; PH (213) 740-4383; Fax: (213) 744-1426; email: becerik@usc.edu

ABSTRACT

The construction industry has been utilizing RFID technology in various
applications, such as increasing productivity, enhancing safety, and improving quality,
mostly through the analysis of received signal-strength index (RSSI) readings.
However, RSSI readings are directly influenced by environmental factors, which can
decrease the effectiveness of the RFID technology. This study evaluates the effects of
environmental factors on RSSI readings. The evaluated environmental factors include: (1)
relative orientation between tags and antennae, (2) temperature of the environment,
and (3) distance between tags and antennae. A series of tests were conducted in an
educational building, and the RSSI readings collected from the tests were evaluated by
statistical analysis to quantify environmental effects on RFID performance. The
results show that orientation and distance between tags and antennae had significant
effects on RSSI readings. Moreover, only very weak effects on RSSI readings were
observed under varying indoor temperatures.

INTRODUCTION

Nowadays, among all RF-based technologies, RFID technology has
received significant interest from both academia and industry, with a total
global market value of $6.4 billion in 2010 (BCC Research, 2010). RFID applications
have been implemented in many industries including medicine, astronautics,
manufacturing, retailing and automotive manufacturing (Landt, 2005). The construction
industry has been utilizing RFID technology in various areas such as increasing
productivity, enhancing safety, and improving quality. Evaluation of RFID


performance in the construction industry has been based on read range, read rate, and read
time, which directly depend on received signal strength indication (RSSI). RSSI is the
indicator used to measure the strength of radio waves received by an antenna. Across
a wide range of RFID applications in the construction industry, RSSI serves as the
basis for calculations and analysis. When RFID is applied for tracking materials and
equipment in the construction industry, read rate and read range are frequently used to
represent RFID performance. RSSI evaluation of a tag enables researchers to assess
the read range and read rate (Clarke et al., 2006; Dziadak et al., 2008; Ergen et al.,
2007; Goodrum et al., 2006; Tzeng et al., 2008). An increase in read rate and read
range in association with higher RSSI readings represents better RFID performance.

However, various environmental factors have been reported to influence
RFID performance to different extents. Goodrum et al. (2006) discovered that low
temperature could lead to difficulty in detecting active tags with a short read range.
When the temperature was as low as -10°C, RSSI readings were much lower than
those of tags at 22°C and represented poor RFID performance. Clarke et al. (2006)
discovered that different tag orientations could cause varying read rates of passive tags.
Ergen et al. (2007) conducted research to explore RFID applications in facility
management by attaching active tags to fire valves. A highly metallic environment
proved to have negatively influenced the RFID performance with low read rates.
Tzeng et al. (2008) attached passive RFID tags to interior decorating materials and
investigated the read rate of passive RFID. It was reported that measured RSSI
readings were significantly affected by metallic materials. In addition, it was also
found that radio waves tended to fail when penetrating obstructions and that the read
range of passive tags was inversely related to the distance between the tags and
antennae. Dziadak et al. (2009) conducted experiments to test the read range of
passive tags in the presence of different soil types. Tags were attached to pipes and
were buried underground. Field experiments suggested a significant difference in
read ranges between gravel and sand.

Despite the above findings on the effects of environmental factors on RFID applications in
the construction industry, none of them provides a systematic approach to assess
the effect of environmental factors on RSSI readings in indoor environments.
Moreover, most of the research results above only cover the qualitative effects of
environmental factors for passive tags. The quantified effect of indoor environments
on RSSI still remains undefined. This paper aims to outline the effect of different
environmental factors on RSSI readings. The environmental factors studied in the context
of this paper include: (1) relative orientation between tags and antennae, (2)
temperature of the environment, and (3) distance between tags and antennae.

RFID TECHNOLOGY OVERVIEW

RFID technology is an automated data collection technology that enables


automatic identification of objects in a non-contact fashion, and enhances the
efficiency in data capture, storage and distribution. Tags, readers and antennae are
three major components of a typical RFID system. A tag contains a microchip that

stores the tag’s ID and other customized information, and it sends out radio waves
containing the on-board information, which is captured by an antenna. The antenna is
connected to a reader, and it establishes the communication between the reader and
the tag. The reader receives the information from the tag, processes it, and transfers it
to users for further analysis.

There are two kinds of RFID tags, which differ in their power source: passive
and active. Passive tags have no built-in power source and have to operate
on the electromagnetic energy radiated by the reader. An internal battery is installed
in an active tag, and it provides power for the tag to function. Passive tags are
inexpensive, small in size and thus easy to deploy, and have been widely used in
construction industry for materials and components identification and tracking.
However, the short read ranges, mostly within 1 m, exclude passive tags from
applications where a long read range is required. In addition, it has been shown that radio
waves from passive tags fail to penetrate common floor materials with a thickness of
1 cm to 2 cm (Tzeng et al., 2008). On the other hand, active tags have longer read
ranges and extendable on-board memory, but they are more expensive and have a
limited lifespan of up to 10 years. Examples of active tag applications are found in
RFID localization (Zhou and Shi, 2010), building maintenance (Ko, 2009), personnel
monitoring (Lin et al., 2010), and construction material management (Ren et al.,
2010).

The strength of radio waves received by the antenna in an RFID system is
measured by RSSI, which reflects the overall effect of various environmental
factors such as orientation, temperature, and distance between tags and antennae.
RSSI readings in an indoor environment are inversely related to the distance
between tags and antennae and have non-Gaussian noise, resulting from multipath and
environmental effects. This type of noise could be caused by building geometry,
network traffic, presence of people and atmospheric conditions (Ladd et al., 2004).
Moreover, there are some parameters that might affect the RSSI readings, such as tag
orientation and position, distance between tags and antenna. External factors, which
can influence the RSSI readings, include temperature, materials in the surrounding
environment and so on.

RESEARCH APPROACH

The authors designed several tests to assess the effect of each environmental
factor on RSSI readings. Tests were carried out in a typical educational building at
the University of Southern California. In the field tests, 4 antennae, 2 readers and 16
tags were deployed. Different quantities of equipment were used based on the
specific test design. For all field tests, tags emitted signals every 1.5 seconds. Omni
directional antennae were selected due to their uniform power radiation in space and
continuous message reception without disconnection.

Tests for relative orientation and temperature were carried out in a conference
room 7 m in width and 6 m in length (Figure 1). A total of eighteen tests were

conducted for the selected orientations. To assess the effect of orientation, tags were
placed in three different positions: facing the antenna, with their backs to the
antenna, and facing upward toward the ceiling (Figure 2). The antenna was either horizontally
or vertically placed. Tests to assess the effect of temperature settings were conducted
in the same room which was selected due to the availability of access to controlled
temperature settings. A total of 16 tags were attached to acoustical plaster ceiling tiles
in two parallel lines at an equal interval of 0.8 m from each other, facing downwards
to the floor (Figure 3).

The second test bed was designed to assess the effect of the distance between
tags and antennae. Tests were carried out in a corridor of 3 m in width and 60 m in
length. A total of 8 tags and 1 antenna were deployed in the test. Tags were attached
to a wooden table and remained fixed throughout the test. The table was moved along
a straight line away from the antenna. Tags were first placed 2.5 m away and moved
2.5 m further each time from the antenna until no signal could be detected.

Figure 1. Layout to evaluate the orientation and elevation effect.
Figure 2. Tag orientation (upward toward the ceiling; forward facing the antenna; with back to the antenna).
Figure 3. Layout to evaluate the temperature effect.

RESULTS

This section reports the effect of environmental factors on the RSSI readings.
Maximum, minimum, and mean RSSI readings were gathered to assess the effect. It is
assumed that an undetected tag has an RSSI reading of (-128), as defined by the
manufacturer’s specifications. In addition, trend lines were generated to characterize the
behavior of the RSSI readings under different environmental factors.

To assess the effect of relative orientation of tags and the antenna on RSSI
readings, tests were repeated three times. Among all combinations, the best RSSI

readings and tag read rates were obtained when tags were facing the vertical antenna.
20 tags were detected with a mean RSSI reading of (-65.9). In all combinations, the
minimum RSSI reading was (-128), which means that at least one tag was not
detected in all tests. The mean RSSI readings of different antenna orientations are
plotted in Figure 4.

Figure 4. Mean RSSI readings from different relative orientations (vertical antenna: mean RSSI between -65.92 and -75.38 across the facing, backing, and upward tag orientations; horizontal antenna: between -103.63 and -106.88).

To evaluate the effects of different indoor temperatures, RSSI readings were


obtained from 16 tags. Tag read rates were observed to be over 95% at all
temperature settings. In all tests, the mean RSSI readings fluctuated between (-45) and
(-47), which indicates that the RSSI readings were consistent. No significant effect was
reported in any of the temperature settings. Figure 5 shows the results when tags are
exposed to varying temperatures.

Figure 5. Mean RSSI readings under varying indoor temperatures (x-axis: temperature, 15 to 30 °C; y-axis: mean RSSI).

The estimated regression equation is given in Equation 1:

y = 0.1009x - 48.481          (Equation 1)

As can be seen from Equation 1, the coefficient between RSSI readings
and temperature is a positive value of 0.10, which indicates that a 1°C increase
in temperature leads to a 0.1009 improvement in the RSSI reading. The R-squared
statistic is 0.71, which indicates that the linear regression can reliably describe the
relationship between RSSI and temperature. The active tags in the test were powered
by lithium batteries, which tend to suffer from voltage decrease under low temperature
(Linden and Reddy, 2002). The drop of battery power in RFID tags can lead to a
decrease in RSSI readings.
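A least-squares fit of this kind can be sketched as follows; the temperature settings match the test range, but the mean-RSSI values below are assumed for illustration and are not the measured readings.

```python
import numpy as np

# Hypothetical mean RSSI readings at the four indoor temperature settings (deg C)
temperature = np.array([15, 20, 25, 30])
mean_rssi = np.array([-46.9, -46.4, -46.0, -45.4])

# First-order least-squares fit: mean_rssi ~ slope * temperature + intercept
slope, intercept = np.polyfit(temperature, mean_rssi, 1)

# Coefficient of determination (R-squared) of the fitted line
pred = slope * temperature + intercept
ss_res = np.sum((mean_rssi - pred) ** 2)
ss_tot = np.sum((mean_rssi - mean_rssi.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"y = {slope:.4f}x {intercept:+.3f}, R^2 = {r_squared:.2f}")
```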

The result of the tests for varying distances between the tags and antenna is
plotted in Figure 6. Overall, RSSI readings were better when the distance between
tags and antenna decreased. The radio signal transmitted by RFID tags could not be
detected when tags were placed over 25 m away from the antenna.

Figure 6. Mean RSSI readings under varying distances (x-axis: distance, 0 to 25 m; y-axis: mean RSSI; fitted trend line: y = -33.8 ln(x) - 9.1673, R² = 0.882).

A logarithmic trend line was generated with an R-squared of 0.882, which shows
that the trend line is relatively reliable for predicting the relationship between the mean RSSI
readings and the distance. This relationship complies with the classical signal
propagation model frequently used in the electrical engineering field, where the path
loss of signal strength is correlated with the logarithm of distance (Keenan and
Motley, 1990). Based on the data collected in the test, the regression equation can be
written as Equation 2:

y = -33.8 ln(x) - 9.1673          (Equation 2)

As seen from the equation, the constant term is (-9.1673), which can be interpreted
as the expected RSSI reading when tags are placed 1 m away from the antenna,
since ln(1) = 0.
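The logarithmic trend line corresponds to fitting a model of the form y = a·ln(x) + b. A minimal sketch using SciPy's curve_fit is shown below; the distance grid follows the 2.5 m test increments, while the RSSI values are illustrative assumptions, so the recovered coefficients will match Equation 2 only for the actual test data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Log-distance model: RSSI = a * ln(distance) + b
def log_model(x, a, b):
    return a * np.log(x) + b

# Hypothetical mean RSSI readings at 2.5 m increments (undetected beyond 25 m)
distance = np.arange(2.5, 27.5, 2.5)
mean_rssi = np.array([-41, -62, -74, -82, -88, -93, -98, -102, -107, -118])

(a, b), _ = curve_fit(log_model, distance, mean_rssi)
print(f"y = {a:.1f} ln(x) {b:+.2f}")
```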

DISCUSSIONS AND CONCLUSION

In this study, the effects of environmental factors are investigated based on
RSSI readings collected from the RFID system deployed in a typical educational
building at the University of Southern California. Tests were designed to assess the
effects of the following environmental factors: (1) relative orientation between tags and
antennae, (2) temperature of the environment, and (3) distance between tags and antennae.

Regardless of the tag orientation, the horizontal antenna orientation yielded worse results
than the vertical antenna. The RSSI readings were significantly better when the
antenna was vertically positioned than when it was horizontally positioned, at a 1%
significance level. No significant difference was recorded among different tag
orientations. This result can be interpreted as meaning that antenna orientation has a greater
effect on RSSI readings than tag orientation. Results show that RSSI readings can be
improved by 38.3% when tags are placed facing a vertical antenna compared to
tags positioned with their backs to a horizontal antenna. No significant effect was reported
in any of the temperature settings. It can be concluded that the effect of temperature
on RSSI readings in an indoor environment is negligible within a narrow temperature
range. Moreover, RSSI readings had an inverse relationship with the distance and the
radio signal transmitted by RFID tags could not be detected when tags were over 25
m away from the antenna.

Overall, this study fills a gap in research, namely the lack of
systematic approaches for assessing environmental effects on RSSI readings and
thus on RFID systems. This study can help RFID technology users and
researchers design and deploy systems in a more location- and performance-aware
manner, which will lead to higher RSSI readings and thus better RFID performance.
Future studies by the authors will focus on additional environmental factors, including
the materials tags are attached to, obstructions between tags and antennae, building
components, furniture layouts, and room occupancy. In addition, indoor localization
accuracy will be a potential evaluation criterion for assessing the importance of the
effects of the environmental factors on RSSI readings.

REFERENCES

Bahl, P., Padmanabhan, V.N., 2000, "RADAR: an in-building RF-based user location


and tracking system ," INFOCOM 2000. Nineteenth Annual Joint Conference of
the IEEE Computer and Communications Societies Proceedings IEEE , Vol.2,
775-784.
BCC Research, 2010, RFID: Technology, Application and Global Markets
Brilakis, I., Cordova, F. and Clark, P., 2008, Automated 3D vision tracking for
project control support, Proceedings of the Joint US-European Workshop on
Intelligent Computing in Civil Engineering, 2-4 July, Plymouth, UK, 487-496.
Clarke, R. H., Twede, D., Tazelaar, J. R. and Boyer, K. K. (2006). Radio frequency
identification (RFID) performance; the effect of tag orientation and package
contents. Packaging Technology and Science, 19(1), 45-54.
Cordova, F. and Brilakis, I., 2008, On site 3D vision tracking of construction
personnel, Proceedings of the 16th Annual Conference of the International
Group for Lean Construction, 16-18 July, Manchester, UK, 809-820.
Dziadak, K., Sommerville, J. and Kumar, B. (2008). RFID based 3D buried assets
location system. Electronic Journal of Information Technology in Construction,
13, 155-165.

Dziadak, K., Kumar, B. and Sommerville, J. (2009). Model for the 3D location of
buried assets based on RFID technology. Journal of Computing in Civil
Engineering, 23(3), 148-59.
Ergen, E., Akinci, B., East, B. and Kirby, J. (2007). Tracking components and
maintenance history within a facility utilizing radio frequency identification
technology. Journal of Computing in Civil Engineering, 21(1), 11-20.
Goodrum, P. M., McLaren, M. A. and Durfee, A. (2006). The application of active
radio frequency identification technology for tool tracking on construction job
sites. Automation in Construction, 15(3), 292-302.
Hightower, J. and Borriello, G., 2001a, Location systems for ubiquitous computing,
Computer, 34(8), 57-66.
Keenan, J. M. and Motley, A. J. (1990). Radio Coverage in Buildings. British
Telecom Technology Journal, 8(1), 19-24.
Ko, C. (2009) RFID-based building maintenance system, Autom.Constr. 18, 275-284.
Ladd, A.M., Bekris, K.E., Rudys, A.P., Wallach, D.S. and Kavraki, L.E., 2004, On
the feasibility of using wireless Ethernet for indoor localization, IEEE
Transactions on Robotics and Automation, 20(3), 555-559.
Landt, J. (2005). The history of RFID. IEEE Potentials, 24(4), 8-11.
Lehmann, E.L. and Romano, J.P. (2006).Testing statistical hypotheses, Springer,
New York.
Lin, C.J., Lee, T.L., Syu, S.L., Chen, B.W., 2010 Application of intelligent agent and
RFID technology for indoor position: Safety of kindergarten as example.
International Conference on Machine Learning and Cybernetics (ICMLC 2010),
5 2571-6.
Linden D. and Reddy T. (2002), Handbook of Batteries. Mcgraw-Hill, New York
Luo, X., O’Brien, W.J. and Julien, C.L., 2010, Comparative evaluation of Received
Signal-Strength Index (RSSI) based indoor localization techniques for
construction jobsites, Advanced Engineering Informatics, in press.
Mautz, R., 2009, Overview of current indoor positioning systems, Geodesy and
Cartography, 35(1), 18-22.
McCarthy, J. F., Nguyen, D. H., Al, M. R. and Soroczak, S. (2002). Proactive
displays the experience UbiComp project. SIGGROUP Bulletin, 23(3), 38-41.
Pradhan, A., Ergen, E. and Akinci, B., 2009, Technological assessment of radio
frequency identification for indoor localization, Journal of Computing in Civil
Engineering, 23, 230-238.
Ren, Z., Anumba C. J., Tah J. (2010) RFID-facilitated construction materials
management (RFID-CMM) - A case study of water-supply project, Advanced
Engineering Informatics. In Press.
Tzeng, C., Chiang, Y., Chiang, C. and Lai, C. (2008). Combination of radio
frequency identification (RFID) and field verification tests of interior
decorating materials. Automation in Construction, 18(1), 16-23.
Wang, L-C., Lin, Y-C, and Lin, P. H. (2007). Dynamic mobile RFID-based supply
chain control and management system in construction. Advanced Engineering
Informatics, 21(4), 377-90.

Zhou, J. and Shi, J. (2010) A comprehensive multi-factor analysis on RFID


localization capability, Advanced Engineering Informatics. In Press, Corrected
Proof.
Multiobjective Optimization of Advanced Shoring Systems Used in Bridge
Construction
Khaled Nassar1, Mohamed El Masry2 and Yasmine Sherif3
1 Associate Professor, Department of Construction Engineering, American University in Cairo, knassar@aucegypt.edu
2 Graduate Student and Research Assistant, Department of Construction and Architectural Engineering, American University in Cairo, m_elmasry@aucegypt.edu
3 Graduate Research Assistant and PhD Candidate in Construction Management, American University in Cairo, y_essawy@aucegypt.edu
ABSTRACT
Bridge construction can be considered a very complicated process that
contains interdisciplinary activities that have to interact efficiently and effectively.
The advanced shoring system is a technique recently used as an advanced
construction method in bridge construction. This technique requires a high initial cost
because of the heavy equipment and the number of crews used, so the number of
crews and the number of pieces of equipment may significantly affect both the cost and time.
Bearing in mind that a trade-off might exist in this industry, decisions should be
made carefully; therefore, simulation of the construction process of the advanced
shoring system was carried out using the STROBOSCOPE simulation tool to simulate
the effect of changing the number of equipment and crew members and to reach an
optimum solution. The variables to be used for optimization in the simulation were
determined through a survey conducted among different contractors in the industry.
INTRODUCTION
Bridge construction projects are classified as infrastructure and heavy construction
projects. Such infrastructure projects are characterized by their long service life,
large budget, and complexity. Bridge construction projects are performed under
different conditions, i.e., different locations, terrains, and environmental conditions,
hence raising uncertainties which influence the production rates of all resources.
This is due to conditions like unusual or complex works, equipment breakdown,
unfavorable weather conditions, unexpected site conditions, and many others. When
production rates are modified in such large projects, this might lead to a large
divergence from the original plan. Besides rapid construction, activities in the
advanced shoring system are transferred from the ground to the girder of the bridge,
which allows construction over important obstacles without hindering traffic. On the
other hand, the main drawback of bridge deck construction using launching girder
systems is the large required capital investment, compared to other techniques,
because of the intensive utilization of equipment. It can be hard for contractors to
use the advanced shoring system when it is required to minimize the cost and time of
construction; as a result, a time-cost trade-off exists. Such analysis is associated with
many decisions, such as changing the size and number of crews, equipment types,
and activity relationships.
Time-cost trade-off analysis has been extensively studied in the literature with different
application areas, including highways (El-Rayes and Kandil 2005) and earthwork
(Marzouk and Moselhi 2004). Evolutionary algorithms (EAs) have been extensively
utilized to solve time-cost trade-off problems (Li and Love 1997; Que 2002; Elbeltagi
et al. 2005). EAs are stochastic search methods that mimic the metaphor of natural
biological evolution and/or the social behavior of species. They include genetic
algorithms, memetic algorithms, particle swarm optimization, the shuffled frog leaping
algorithm, and ant colony optimization (Elbeltagi et al. 2005; Marzouk et al. 2009).

ADVANCED SHORING
The Flying Shuttering System is also called Advanced Shoring, Mobile or Moving
Scaffolding, or Self-Launching Erection Girder. The advanced shoring system was
initially used for pre-stressed cast-in-situ concrete bridges with spans of relatively
short length. If the ground conditions are poor, the ground level is variable, or the
bridge is high above the ground, movable casting girders supported off the
permanent sub- or superstructure can be a viable alternative solution. The system’s
main concept is that the formwork is supported on a moving gantry system, which
simulates a factory operation transported to the site.
While casting the piers, recesses (to support the brackets) and small-diameter
horizontal holes (to allow the insertion of steel bars for erecting the brackets) should
be made. Steel brackets are then mounted; this is mainly done through friction
between the concrete column and the steel plates of the brackets, provided by
tensioning six bars at each column. The system is assembled on the ground, with the
formwork supported on the moving gantry system, and is then lifted into position on
the brackets. Construction is done in stages, each of which ends at the point of zero
bending moment. The construction of a typical stage starts by lowering the formwork
to free it from the bottom slab and webs. The brackets are required to move forward
to support the next span. Accordingly, the main girders and formwork move forward
until the girders’ ends pass the next column in a manner that preserves system
stability. The main trusses are supported temporarily on the superstructure by means
of high-tensile bars. The brackets are dismantled from their current position and
travel along rails fixed on the bottom chord of the two trusses until they reach the
required position. The main girders are lowered to rest on the brackets and then travel
to their final position. Formwork levels are adjusted, as well as carpentry works.
Steel reinforcement and pre-stressing components are fixed for the bottom slab and
webs. Concreting is normally performed in two stages: first the bottom slab and webs,
and then the top slab. After the concrete of the bottom slab and webs gains sufficient
strength, the formwork of the inner sides of the webs is dismantled. The same
activities are repeated for the top slab, and the span is stressed after it gains the
required strength and the system reaches its final position. Then, the gantry is rolled
forward by means of outriggers on both sides of the gantry's deck, and the cycle
repeats for the next span (Essawy, 2007).

SIMULATION MODULE
Simulation can be considered a powerful tool because it imitates what happens
in reality to a certain level of accuracy and reliability without extra cost.
STROBOSCOPE (Martinez, 1996) is used as a simulation tool to represent the tasks
in reality using rectangular and chamfered rectangular shapes called "combi" and "normal"
activities, while the resources are represented by circular shapes named "queues".
Each combi activity must be supported by queues. Each activity
can take an argument called a semaphore to control the start and end of that activity; this
was used to make STROBOSCOPE start work at the beginning of the working day
and stop at the end of the day.
The basic advantage of using STROBOSCOPE is its ability to create multiple replications for the
various alternatives that could affect the simulation time. As such, to determine which
factors could affect the simulation, a while loop was used
to create multiple replications of the various alternatives.
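The STROBOSCOPE model itself is not reproduced here; as a rough analogue of the queue/activity structure and the replication loop described above, the following Python sketch uses the SimPy discrete-event library. The activity names, resource counts, and beta-distributed durations are illustrative assumptions, not the authors' model.

```python
import random
import simpy

def dur(lo, hi):
    """Beta-distributed duration between assumed min/max values (days)."""
    return lo + (hi - lo) * random.betavariate(2, 3)

def span_cycle(env, gantry, crane, finished):
    """One bridge-span cycle: seize the gantry, use the crane for positioning, pour and cure."""
    with gantry.request() as g:                  # queue feeding a 'combi'-like activity
        yield g
        with crane.request() as c:               # crane needed for lifting/positioning
            yield c
            yield env.timeout(dur(0.5, 1.0))     # position girder and adjust formwork
        yield env.timeout(dur(2.0, 3.5))         # reinforcement and concreting
        yield env.timeout(dur(3.0, 5.0))         # curing and prestressing
    finished.append(env.now)

def replicate(n_spans, n_gantries, n_cranes):
    """Run one replication and return the simulated completion time in days."""
    env = simpy.Environment()
    gantry = simpy.Resource(env, capacity=n_gantries)
    crane = simpy.Resource(env, capacity=n_cranes)
    finished = []
    for _ in range(n_spans):
        env.process(span_cycle(env, gantry, crane, finished))
    env.run()
    return max(finished)

# Compare alternatives: one vs. two gantry systems, as in Figures 2 and 3
for gantries in (1, 2):
    print(gantries, "gantry system(s):", round(replicate(20, gantries, 2), 1), "days")
```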
Table 1: Description of the activities and resources used in the STROBOSCOPE model
Model Entity    Description
GntryPrts       A queue that represents the parts of the gantry system used in supporting the formworks of the bridge deck
GrdrAssmbly     Combi activity of assembling the gantry system parts
Grdr            A queue that represents the finished assembled gantry system ready to get launched
PsntgGrdr       Combi activity of positioning the girder on the brackets supported on the piers of the bridge
PrsFrms         A queue that represents the parts of the piers forms
PrsCnstr        Combi activity representing the activity of constructing the piers of the bridge
GrdrPst         A dummy queue that has the gantry system positioned and ready for form works and steel reinforcement
AdjFrm          Combi activity for the adjustment of formworks
Crn             A queue that has the crane used in different tasks in the bridge construction
FrmCr           A queue for the form crew that is used in adjusting the forms
StlRf           A queue that represents the steel reinforcement used in reinforcement of the bridge deck
LftgStl         Combi activity representing the process of lifting the steel bars used in reinforcement
BtmFrms         A queue representing the bottom forms that were constructed by the form crews
Rebar           Combi activity for the reinforcement of the bridge deck
StlCrw          A queue representing the steel crew
Rnf1            A queue that represents the bottom reinforcement
CncrtArrvl      Normal activity that represents the arrival of concrete
Trk             Queue representing concrete trucks or containers
FllgPmps        Combi activity representing filling of the pumps before pouring starts
Cncrt1          Dummy queue that transfers the concrete resource
PrgCncrt1       Combi activity that represents the pouring of concrete in the web and bottom part of the girder (assuming it is a box section, which is usually the case)
PrdWb           A queue that represents the poured concrete section of the web of the bridge's girder
Curing1         Normal activity for curing of the poured section, consuming time to attain the required characteristic strength, and removing the formworks
CrtCr           A queue that represents the concrete crew
FnshdWb         A queue that represents the finished web section and the bottom of the girder
DsmntlgFrms     Combi activity of dismantling the forms to use in the top formworks of the girder
TpFrms          Dummy queue that holds the formworks until pouring concrete starts
Dck             A queue that represents the deck formworks ready for pouring concrete
PrdDck          Queue representing the poured deck
FshdSpn         Queue representing the finished spans
Prstrg          Combi activity representing the prestressing of the post-tensioned steel
LwrgFrmWk       Combi activity that represents the lowering of formworks to move to the next span
MvgToNxtPr      Normal activity to move the gantry system to the next span
MvgBrkts        Normal activity to move the brackets supporting the gantry system to the next span
LwgGrdr         Normal activity to lower the gantry system to start a new segment
Sgmnt           An empty queue that represents the finished number of spans

Table.2: Number and Type of alternatives to be optimized


Alternatives Number and Type of Alternatives
Concrete Crew 1 2 3
Steel Crew 1 2 3
Form Crew 1 2 3
Crane 1 2 3
Gantry System 1 2
Additives A B C
Figure.1: Stroboscope model used in simulation module

OPTIMIZATION MODULE
Creating multiple replications for the model with different alternatives for the
critical resources and running a simulation would help in determining which
resources had an effect on the construction time. Critical resources that could affect
the duration or cost of construction are those with the minimum average waiting
time in the resource queues.
The crane used in lifting machinery, equipment, and resources can affect the
simulation time; changing the rest of the available resources would also affect the
simulation, as discussed below.
Figure 2: Simulation time (days) versus number of cranes using 1 gantry system.
Figure 3: Simulation time (days) versus number of cranes using 2 gantry systems.
Figure 2 shows how the simulation time decreases as the number of cranes increases
while different resources are used; this curve was plotted by changing the number of form,
steel, and concrete crews along with the number of cranes. The results were identical no
matter how many resources were used, because no new segment can start
unless the previous segment is finished, since the gantry system is in use the whole time.
Therefore, when two gantry systems were used, the simulation time decreased by a
considerable value of 67 days, approximately 40% of the total duration. This
can be seen in Figure 3.
The above shows that many alternatives are involved when a decision is made, and these
alternatives may have an impact on the cost and simulation time. The total
number of alternatives can yield a huge number of combinations; as such,
optimization was used in an attempt to reduce processing time and improve the
quality of solutions.
Evolutionary algorithms have been introduced during the past 10 years. In addition
to various genetic algorithm improvements, recent developments in evolutionary
algorithms include several techniques inspired by different natural processes. To optimize
the different alternatives to use, the particle swarm optimization (PSO) algorithm is
used. Particle swarm optimization was found to perform better than other
evolutionary algorithms in terms of success rate and solution quality (Elbeltagi et al.,
2005).
In PSO, each solution is a ‘bird’ in the flock and is referred to as a ‘particle’. A
particle is analogous to a chromosome (population member) in GAs. As opposed to
GAs, the evolutionary process in the PSO does not create new birds from parent ones.
Rather, the birds in the population only evolve their social behavior and accordingly
their movement towards a destination (Elbeltagi et al., 2005).
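A minimal PSO loop of the kind described above is sketched below in Python. The inertia and acceleration coefficients, the discrete bounds on the resource alternatives, and the toy time and cost models inside the objective are assumptions made for illustration, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    """Toy objective: product of an assumed duration model and an assumed cost model.
    x = [concrete crews, steel crews, form crews, cranes, gantry systems]."""
    crews, cranes, gantries = x[:3].sum(), x[3], x[4]
    duration = 160.0 / (gantries * (0.6 + 0.1 * crews + 0.1 * cranes))   # days (assumed)
    cost = 1e6 * (2 * gantries + cranes + 0.5 * crews) + 5e4 * duration  # L.E. (assumed)
    return duration * cost

lower = np.array([1, 1, 1, 1, 1])   # minimum number of each alternative (Table 2)
upper = np.array([3, 3, 3, 3, 2])   # maximum number of each alternative (Table 2)

n_particles, n_iter, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
pos = rng.uniform(lower, upper, size=(n_particles, 5))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(np.rint(p)) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # Velocity update: inertia + cognitive pull toward pbest + social pull toward gbest
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lower, upper)
    vals = np.array([objective(np.rint(p)) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best alternative:", np.rint(gbest).astype(int), "objective:", pbest_val.min())
```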

The product of simulation time and total cost was set as the objective function to be
optimized. Different particles were initialized and optimized to find the optimum
solution; the numbers of alternatives that are optimized can be seen in Table 2. Each
particle represents a different combination of resource alternatives that can affect the
simulation time (steel, concrete, and form crew numbers and the number of cranes).
Systems of one and two gantries were used and the effect on cost and simulation
time was observed. In addition to using multiple gantry systems, additives were used
to increase the rate of hardening of the girder. Given all the previously mentioned
resources, cost and time were obtained and PSO was performed. Convergence was
achieved and the Pareto optimal front was drawn, as shown in Figure 4.

Figure 4: PSO output, total cost (L.E.) versus simulation time (days), with the Pareto optimal frontier.
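The Pareto frontier in Figure 4 can be extracted from the set of evaluated (simulation time, total cost) pairs by keeping only the non-dominated points; a small sketch with hypothetical data follows.

```python
def pareto_front(points):
    """Return the non-dominated (time, cost) pairs (minimizing both objectives)."""
    front = []
    for p in points:
        # p is dominated if some other point q is at least as good in both objectives
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return sorted(front)

# Hypothetical (simulation time in days, total cost in L.E.) pairs from the PSO runs
evaluated = [(170, 12e6), (150, 14e6), (120, 18e6), (95, 24e6), (80, 31e6),
             (130, 20e6), (110, 25e6)]
print(pareto_front(evaluated))
```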
Figure 4 shows that the Pareto set varied between different alternatives; it was found that
using 2 gantry systems is efficient when using 2 cranes, form crews, and concrete crews
in the presence of additives. Another solution was found to be effective when using only
one gantry system but with three of each of the form, steel, and concrete crews.

CONCLUSION
Flying shuttering is one of the newly introduced techniques in bridge construction
that could be advantageous due to its rapid construction. A
framework was introduced to help contractors perform time-cost trade-off
analysis to optimize resource utilization in the flying shuttering technique. A Pareto
optimal frontier is introduced that would help contractors decide how many
resources to use and assess the effect of this decision on cost and time. The multi-objective
optimization was performed to obtain the previously mentioned Pareto front by using
particle swarm optimization (PSO) with an objective function that is the product of
the cost and simulation time of construction. The PSO algorithm was plugged into the
STROBOSCOPE simulation tool, which imitates the processes in reality. To account for
the uncertainties in the durations of the tasks performed, the durations of some activities were
defined as beta distributions varying between certain maximum and
minimum values.
By performing multi-objective optimization, it was found that using additives and
more than one gantry system would increase the cost of construction by
approximately 65% while decreasing the construction duration by 55%.
References:
Elbeltagi, E. (2007). “Evolutionary Algorithms for Large Scale Optimization in
Construction Management.” The Future Trends in the Project Management,
Riyadh, KSA.
Elbeltagi, E., Hegazy, T., and Grierson, D. (2005). “Comparison among five
evolutionary-based optimization algorithms.” Adv. Eng. Inf., 19(1), 43–53.
El-Rayes, K., and Kandil, A. (2005). “Time-cost-quality trade-off analysis for highway
construction.” Journal of Construction Engineering and Management, 131(4), 477–486.
Essawy, Y. (2007). “Value Engineering in Bridge Deck Construction during the
Conceptual Design Phase.” Master of Science in Construction Management
thesis, The American University in Cairo.
Li, H., and Love, P. (1997). “Using improved genetic algorithms to facilitate time-
cost optimization.” J. Constr. Eng. Manage., 123(3), 233–237.
Martínez, J. C. (1996). STROBOSCOPE: State and Resource Based Simulation of
Construction Processes, Doctoral Dissertation, University of Michigan.
Marzouk, M., and Moselhi, O. (2004). “Multiobjective Optimization of Earthmoving
Operations.” Journal of Construction Engineering and Management, 130(1), 105–113.
Marzouk, M., Said, H., and El-Said, M. (2009). “Framework for Multiobjective
Optimization of Launching Girder Bridges.” Journal of Construction
Engineering and Management, 135(8), 791–800.
Que, B. C. (2002). “Incorporating practicality into genetic algorithms based time-cost
optimization.” J. Constr. Eng. Manage., 128(2), 139–143.
Application of Dimension Reduction Techniques for Motion
Recognition: Construction Worker Behavior Monitoring

SangUk Han1, SangHyun Lee2, and Feniosky Peña-Mora3


1 PhD student, Department of Civil Engineering and Engineering Mechanics, Columbia
University, 622A Southwest Mudd Building, 500 West 120th Street, New York, NY
10027; PH: (212) 854-3143; email: sh2928@columbia.edu
2 Assistant Professor, Department of Civil & Environmental Engineering, University of
Michigan, 2356 GG Brown, 2350 Hayward Street, Ann Arbor, MI 48109; PH: (734)
764-9420; email: shdpm@umich.edu
3 Dean of The Fu Foundation School of Engineering and Applied Science and Morris
A. and Alma Schapiro Professor of Civil Engineering and Engineering Mechanics,
Earth and Environmental Engineering, and Computer Science, Columbia University,
510 Southwest Mudd Building, 500 West 120th Street, New York, NY 10027; PH:
(212) 854-6574; email: feniosky@columbia.edu
ABSTRACT
In the construction industry, the unsafe actions and behavior of workers are the
most significant causes of accidents. Measurement of worker behavior thus can be
used as a positive indicator in assessing safety management and preventing accidents.
The monitoring of worker behavior, however, has not been applied to safety
management in practice due to the time-consuming and painstaking nature of this type
of monitoring. To address this problem, this paper utilizes a computer vision-based
approach that automatically monitors workers with video cameras installed on-site and
focuses on motion recognition methods. Templates predefined through experiments
are used to determine safe and unsafe poses. Using a dimension reduction technique
on a set of spatio-temporal motion segments, the human motion data obtained from
experiments are clustered and generalized to recognize motions. In this manner, the
unsafe behavior of workers is detected and analyzed through the shape of the human
skeleton and joints. The use of video cameras allows worker behavior to be monitored
automatically and constantly. The measured information then can be used to reduce
the frequency of unsafe behavior and potentially reduce the number of accidents.
INTRODUCTION
The fatality rate in construction is about 2.5 times higher than the average for
all other industries in the United States (Bureau of Labor Statistics 2010). Statistics
show that in 2005, 11.1 workers per 100,000 full-time workers were fatally injured
and 239.5 workers per 10,000 were nonfatally injured or contracted illnesses. These injuries and
illnesses resulted in days away from work (CPWR 2008). Construction has
characteristics unique to the industry (e.g., large forces are involved in complex
operations and placed in various worksites; jobsites are continually changing; products
are unique; etc.) (Hendrickson 1998) and the high fatality and incident rates may result
from these characteristics. However, the unsafe behavior of workers on a construction
site also leads to injuries (Hinze 1997); previous studies state that about 80 to 90
percent of accidents are caused by unsafe acts rooted in employee behavior (Heinrich
et al. 1980; Helen and Rowlinson 2005). Measurement of worker behavior thus is a
way to assess safety management and can be used as a positive indicator to prevent
accidents (Levitt and Samelson 1987). By identifying and reducing unsafe behavior,
major and minor injuries could be reduced; this is based on the theory that
approximately one serious and ten minor injuries occur among 600 near-miss incidents
(Phimister et al. 2003; Bird and Germain 1996). Despite the importance of monitoring
worker behavior, however, it has not been applied actively to practical safety
management for the following reasons: (1) field observation is a time-consuming and
painstaking task (Levitt and Samelson 1987); (2) there is a lack of safety experts on-
site for behavior observation (Han et al. 2010); (3) traditional reporting systems for
unsafe behavior require the active participation of workers; and (4) current methods
have systemic issues, including how the observed results are analyzed and applied to
safety practices.
To address these limitations, the automated monitoring, analysis, and
visualization of worker behavior is proposed. In our scenario, workers are monitored
with video cameras installed on-site. Safe and unsafe poses are pre-defined and
utilized as templates. Worker behavior thus can be detected, analyzed, and visualized
in the shape of a human skeleton and its joints in a Virtual Reality (VR) environment.
In this paper, we focus on motion recognition that captures the motions predefined in
the templates. A dimension reduction technique is applied to analyze high dimensional
motion data (e.g., 78 dimensions in this paper). Using a dimension reduction technique
on a set of spatio-temporal motion segments, the human motion data are clustered and
generalized to recognize the same motions. The entire dataset is obtained from
experiments and then separated into training and testing datasets. A training dataset is
used to learn human motions (e.g., in a brick laying activity) and label each action
(e.g., mixing mortar, lifting a brick, stacking a brick). A testing dataset is then
projected into the same low dimensional space so that the same motions can be recognized.
LITERATURE REVIEW
Human motion data are high dimensional. Dimension represents the number of
features (i.e., variables) in data. Motion datasets, with their high number of features,
thus contain underlying challenges regarding efficient and accurate data analysis.
Dimension reduction techniques, which identify important features, thus facilitate
efficient analysis, reducing computational time, decreasing the impact of noisy or
irrelevant features, and improving the resolution of similarity measures in lower
dimensions (Cunningham 2008). Data transformation thus converts high dimensional
data to lower dimensions. To understand this transformation, both linear (e.g.,
principal component analysis) and nonlinear (e.g., kernel principal component analysis,
semidefinite embedding, and minimum volume embedding) dimension reduction
techniques have been studied for this paper. Among linear techniques, principal
component analysis (PCA) is used widely and yields reasonably good results
(Carreira-Perpinan 1997). This technique maximizes the variance of original variables
in an interrelated dataset through linear mapping that identifies the uncorrelated and
ordered principal components (Jolliffe 2005). Linear principal components, however,
may not properly represent the nonlinear characteristics inherent in human motion data
(Jenkins and Mataric 2002). To address this limitation, nonlinear dimension reduction
techniques have been explored. Kernel PCA (Schölkopf et al. 1998) is a dominant
technique that uses kernel methods to reproduce data in a kernel induced feature space
through non-linear mapping (Cunningham 2008). In addition to kernel PCA, a number
of non-linear dimension reduction techniques have been suggested. For example,
semidefinite embedding (SDE) uses semidefinite programming for optimization to
learn a kernel matrix with preservation of the local distances (Weinberger et al. 2004).
Minimum volume embedding (MVE) uses semidefinite programming but optimizes
the eigenspectrum to maximize energy in lower dimensions (Shaw and Jebara 2007).
However, these techniques, which are based on an eigendecomposition, do not provide
a straightforward extension that can be applied to new testing samples. In this paper,
the kernel PCA thus is used as a preliminary study to reduce the dimensions of motion
data and recognize motions with new sample datasets.
DATA COLLECTION
In this study, human motion data was collected from the University of
Michigan (UM) 3D Lab using a Vicon motion capture system. Reflective markers
were attached to the joints of a human body and motions were recorded by eight
cameras that circled the performer. The resulting data includes three dimensional
locations for body joints moving over time and can be converted to the Biovision
hierarchical data (BVH) format. This format contains skeleton hierarchy information
and provides location and rotation information for body joints (e.g., joint rotation
angles and 3D joint positions); these are useful to define and analyze motions.
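As a small illustrative sketch of how such per-frame data can be organized for analysis, per-joint channels can be concatenated into one feature vector per frame; the joint names and channel layout below are hypothetical and do not reproduce the actual 78-dimension representation used in this study.

# Hypothetical sketch: flatten per-joint rotation angles into one feature
# vector per frame, producing a (frames x dimensions) motion matrix.
import numpy as np

def frames_to_matrix(frames):
    """frames: list of dicts mapping joint name -> (rx, ry, rz) rotation angles."""
    joints = sorted(frames[0].keys())      # fixed joint order across all frames
    rows = [np.concatenate([frame[j] for j in joints]) for frame in frames]
    return np.asarray(rows)                # shape: (n_frames, 3 * n_joints)

# toy example with two joints and two frames
frames = [{"Hips": np.array([0.0, 1.0, 2.0]), "Spine": np.array([3.0, 4.0, 5.0])},
          {"Hips": np.array([0.5, 1.5, 2.5]), "Spine": np.array([3.5, 4.5, 5.5])}]
print(frames_to_matrix(frames).shape)      # (2, 6)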
In the experiment, motions for a bricklaying activity were analyzed. Back
injuries are common in construction and the back injury rate for masonry workers is
the highest, about 1.6 times higher than the average for all construction workers
(CPWR 2008). Bricklaying typically consists of a sequence of seven actions (mixing
mortar, putting mortar on top of bricks, putting mortar on a side, lifting a brick,
carrying, stacking, and fastening) that are repetitive and require the lifting of heavy
objects—this is a major cause of back injuries. Figure 1 illustrates the activities in
order and shows snapshots of the data collected during the experiment. Out of about
23,000 frames of collected data, 1,877 and 6,000 frames were used as training and
testing datasets respectively. The training dataset was manually labeled according to
frame ranges for each action (e.g., mixing mortar, etc.) and then used to identify
specific actions within the testing dataset.
Figure 1: Human motion data for bricklaying with the sequence of actions

DATA ANALYSIS
The motion data used for training consists of 1,877 points with 78 dimensions.
Kernel PCA was applied to this data and the results were compared through cross
validation with various kernels and target dimensions to identify those that could be
easily visualized and be useful for motion recognition. As a result of the cross
validation, a polynomial kernel and two dimensions as target dimensions were
selected. This is shown in Figure 2. Using a kernel PCA technique, the training dataset
then was analyzed to learn the principal components and the coefficient for mapping
(left panel in Figure 2). Through this process, the eigenvectors with the largest
eigenvalue were selected and the data points were mapped into the eigenvector
coordinates (i.e., the x and y axes in Figure 2 represent the selected eigenvectors).
With the learnt information, manually labeled points for actions from the training data
could be mapped into the same space (right panel in Figure 2). This indicates that from
the first action (i.e., mixing mortar) to the last (i.e., fastening a brick), each point is
drawn consecutively over time. The training dataset with 78 dimensions also is
mapped into the two-dimensional space. In Figure 3, the first two eigenvectors from
the kernel (i.e., the x-axis) contain significantly high energy (i.e., large eigenvalues on
the y-axis) to represent most dimensions and to support the selection of two
dimensions as target dimensions. Around 20 eigenvectors may fully represent the
dataset based on the eigenvalues in the figure, yet two dimensions are sufficient to
visualize and recognize motions; this is particularly important, considering the purpose
of dimension reduction.

Figure 2. Results of the kernel PCA for training data: motions (left) and marked
actions (right)

Figure 3. The eigenvalues for features after kernel PCA
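A rough sketch of this training step is given below, assuming scikit-learn is available; the random matrix stands in for the actual 1,877 x 78 training data, and the kernel degree is a placeholder since the cross-validated kernel parameters are not reported here.

# Sketch: fit kernel PCA (polynomial kernel, low target dimension) on the
# training motions and inspect the eigenvalue spectrum of the kernel matrix.
import numpy as np
from sklearn.decomposition import KernelPCA

X_train = np.random.rand(1877, 78)      # placeholder for the 1,877 x 78 training matrix

kpca = KernelPCA(n_components=20, kernel="poly", degree=2)   # degree is a placeholder
Z_train = kpca.fit_transform(X_train)   # embedded coordinates of the training frames

# eigenvalues of the centered kernel matrix (attribute name in scikit-learn >= 1.0;
# older versions expose it as lambdas_); a sharp drop after the first two components
# supports keeping only two dimensions for visualization and recognition
print(kpca.eigenvalues_[:5])
print(Z_train[:, :2].shape)             # 2-D coordinates used for plotting/recognition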

RESULT
The objective of applying dimension reduction techniques for human motion
data in this paper is to recognize predefined motions from random samples.
Reconstruction of new sample points thus was conducted with testing datasets to
examine whether the new data could be properly projected in the space where training
datasets are mapped. As a testing dataset, 6,000 points were used and transformed into
two dimensions using the coefficient obtained from learning (left panel in Figure 4).
The right panel in Figure 4 clearly shows that the testing dataset can be accurately
projected onto the same space. The datasets contain the motions that the performer
repeatedly carried out for the bricklaying activity. The overall flow of points over time
thus takes place in similar areas. As marked in the figure, each action can be
recognized by identifying regions near the trajectory of testing datasets.
Figure 4: Result of kernel PCA for a testing dataset (left) and comparison with a
training dataset (right)
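One hedged way to implement the recognition step described above is a nearest-neighbor rule in the embedded space, as sketched below; the arrays and labels are placeholders, and the classifier choice is an illustrative reading of "identifying regions near the trajectory," not necessarily the authors' procedure.

# Sketch: embed training and testing frames with the same kernel PCA model,
# then label each test frame with the action of nearby labeled training frames.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier

X_train = np.random.rand(1877, 78)            # placeholder training motion matrix
y_train = np.random.randint(0, 7, size=1877)  # placeholder labels for the 7 actions
X_test = np.random.rand(6000, 78)             # placeholder testing motion matrix

kpca = KernelPCA(n_components=2, kernel="poly").fit(X_train)
Z_train, Z_test = kpca.transform(X_train), kpca.transform(X_test)

clf = KNeighborsClassifier(n_neighbors=5).fit(Z_train, y_train)
print(clf.predict(Z_test)[:10])               # predicted action index per test frame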
However, it is difficult to define standardized actions (e.g., mixing mortar,
lifting a brick, etc.) in practice. Actions vary from individual to individual and each
person’s motions can differ over time. In the experiment, the performer used similar
poses for the activity. However, the range of sample motions is distributed widely in
comparison to the training data. Furthermore, workers in the real world may utilize
actions that are not pre-defined (e.g., talking with a supervisor, shifting equipment,
etc.). It thus is not realistic that every possible action can be defined. To recognize
predefined motions accurately, more datasets therefore need to be applied in training
to identify potential areas of motion. As the regions of actions we want to monitor are
more accurately determined with data, many more sample datasets can be mapped into
the training coordinates and analyzed to recognize the motions systematically.
DISCUSSION
In this paper, the motion data representing masonry work was tested to
recognize motions during construction activities. Based on defined actions with
training datasets, similar motions in new testing datasets can be identified by mapping
into the same coordinates. The results show that unsafe actions can be detected
through the training data. For example, unsafe actions such as slip, loss of balance,
and bad posture while lifting heavy objects (e.g., bending one’s back rather than one’s
knees) can be defined and the poses captured. With the resulting information, it also is
possible to compute cycle times for an activity and the performing time for each pose
by calculating the time between actions. This assumes that workers may take
unsafe actions or make errors under production pressures, such as attempting to work
faster in order to increase productivity (Hollnagel and Woods 2006; Hinze 1997). This
information thus may prove useful in the investigation of the impact of such pressures
on safety. Moreover, the data provides useful information that can prevent injuries
related to the performance time of poses (e.g., back injuries are affected by carrying
time and trajectories which cause back strains). Motion data thus has a high potential
to provide fruitful information for safety management.
CONCLUSION
Dimension reduction techniques can be applied to monitor worker behavior on
construction sites. To analyze behavior, motions during an activity are divided into
specific actions and the actions are identified with training datasets using kernel PCA.
The results indicate that motion data can be used to recognize construction worker
motions with machine learning techniques. Testing data shows similar behaviors over
time in the space that training data is transformed. Training with more datasets thus
can be used to statistically determine the potential regions of individual actions and
eventually lead to an improvement in the accuracy of motion recognition. By defining
unsafe actions, this technique can be useful in detecting the unsafe actions of workers
during their activities. The use of video cameras allows worker behavior to be
monitored automatically and constantly. Safety experts thus will not need to undertake
time-consuming tasks and the measured information can be used to reduce the
frequency of unsafe behavior and potentially reduce the number of accidents.

FUTURE WORK
In this study, kernel PCA with a polynomial kernel was applied for motion
recognition. However, there are a number of non-linear dimension reduction
techniques (e.g. Gaussian Process Dynamical Model) and various kernels (e.g.,
probability product kernel) which may provide better visualization and embedding
results for motion data. Further investigations will be carried out to compare these
techniques and kernels to identify those most reliable and applicable to construction
worker motion data. Since this paper focuses on motion recognition, all the datasets
were collected from the UM 3D Lab. In future studies, however, we plan to collect
datasets from a construction site using a motion capture system. Our project is
ongoing and our intent is to develop a markerless system to extract 3D human skeleton
information from images taken by multiple video cameras. With these samples, it will
be possible to identify numerous actions taken by construction workers. Training with
sufficient data will improve monitoring accuracy and the detection of worker actions
on-site.
ACKNOWLEDGEMENT
We would like to thank Chunxia Li, a PhD student at the University of
Michigan, for her help in collecting motion data. The work presented in this paper was
supported financially by two National Science Foundation Awards (No. CMMI/ITR-
0427089 and CMMI-0800500).
REFERENCES
Bird, F. E., and Germain, G. L. (1996). Practical loss control leadership, Det Norske
Verita, Loganville, GA.
Bureau of Labor Statistics (2010). “Fatality injury rates, 2003–2008.” U.S.
Department of Labor, Washington, DC.
<http://www.bls.gov/iif/oshcfoi1.htm#rates> (Mar 2010).
Carreira-Perpinan, M. A. (1997). “A review of dimension reduction techniques.”
Technical report CS-96-09, Department of Computer Science, University of
Sheffield.
The Center for Construction Research and Training (CPWR) (2008). The Construction
Chart Book: The U.S. Construction Industry and Its Workers, The Center for
Construction Research and Training, Silver Spring, MD.
Cunningham, P. (2008). “Dimension reduction.” M. Cord and P. Cunningham
eds., Machine learning techniques for multimedia: case studies on organization
and retrieval, Springer, Berlin.
Han, S., Lee, S., and Peña-Mora, F. (2010). “Framework for a resilience system in
safety management: a simulation and visualization approach.” The International
Conference on Computing in Civil and Building Engineering (ICCCBE) 2010,
Nottingham, U.K., Jun 30 – July 2, 2010.
Heinrich, H. W., Petersen, D., and Roos, N. (1980). Industrial accident prevention,
McGraw-Hill, Inc., New York.
Helen, L., and Rowlinson, S. (2005). Occupational health and safety in construction
project management, Spon Press, London, pp. 157-158.
Hendrickson, C. (1998). Project management for construction: fundamental concepts
for owners, engineers, architects and builders, Prentice Hall, New Jersey.
Available from <http://pmbook.ce.cmu.edu>.
Hinze, J. (1997). Construction safety, Prentice Hall, Upper Saddle River, NJ, pp. 213–
215.
Hollnagel, E., and Woods, D. (2006). “Prologue: resilience engineering concepts.”
Hollnagel, E., Woods, D., and Leveson, N., eds., Resilience engineering: concepts
and precepts, Ashgate, Aldershot, United Kingdom, pp. 1-6.
Jenkins, O. C., and Mataric, M. J. (2002). “Deriving action and behavior primitives
from human motion data.” Proceedings of 2002 IEEE/RSJ international conference
on intelligent robots and systems (IROS-2002), Lausanne, Switzerland, Sept. 30 -
Oct. 4, 2002, 2551-2556.
Jolliffe, I.T. (2005). “Principal component analysis.” B. S. Everitt and D. C. Howell,
Encyclopedia of Statistics in Behavioral Science, Wiley, New York, 3, 1580-1584.
Levitt, R. E., and Samelson, N. M. (1987). Construction safety management,
McGraw-Hill, New York.
Phimister, J. R., Oktem, U., Kleindorfer, P. R., and Kunreuther, H. (2003). “Near-miss
incident management in the chemical process industry.” Risk Analysis, Vol. 23, No.
3.
Schölkopf, B., Smola, A., and Müller, K. (1998). “Nonlinear component analysis as a
kernel eigenvalue problem.” Neural Computation, 10, 1299-1319.
Shaw, B., and Jebara, T. (2007). “Minimum volume embedding.” JMLR W&P, 2, 460-
467.
Weinberger, K. Q., Sha, F., and Saul, L.K. (2004). “Learning a kernel matrix for
nonlinear dimensionality reduction.” Proceedings of the Twenty First International
Conference on Machine Learning (ICML-04), Banff, Canada, 839-846.
Civil and Environmental Engineering Challenges for Data Sensing and Analysis

Gauri M. Jog 1, Shuai Li2, Burcin Becerik Gerber3, Ioannis Brilakis4


1 Ph.D. Student, School of Civil and Environmental Engineering, Georgia Institute of
Technology, GA 30332; email: gmjog@gatech.edu
2 Ph.D. Student, Sonny Astani Department of Civil and Environmental Engineering,
University of Southern California, CA 90089; email: shuail@usc.edu
3 Assistant Professor, Sonny Astani Department of Civil and Environmental
Engineering, University of Southern California, CA 90089; email: becerik@usc.edu
4 Assistant Professor, School of Civil and Environmental Engineering, Georgia
Institute of Technology, GA 30332; email: brilakis@gatech.edu

ABSTRACT
The objective of this study was to identify challenges in civil and
environmental engineering that can potentially be solved using data sensing and
analysis research. The challenges were recognized through extensive literature review
in all disciplines of civil and environmental engineering. The literature review
included journal articles, reports, expert interviews, and magazine articles. The
challenges were ranked by comparing their impact on cost, time, quality, environment
and safety. The result of this literature review includes challenges such as improving
construction safety and productivity, improving roof safety, reducing building energy
consumption, solving traffic congestion, managing groundwater, mapping and
monitoring the underground, estimating sea conditions, and solving soil erosion
problems. These challenges suggest areas where researchers can apply data sensing
and analysis research.

INTRODUCTION
Though civil and environmental engineering is one of the oldest disciplines of
engineering, its adoption of technology to improve practices in the discipline has
been slow. With technological advances, readily available and cost efficient tools can
be used to alleviate and solve some of the most exigent challenges faced by the civil
and environmental engineering (CEE) community. This study identifies challenges
across different areas within civil and environmental engineering that can possibly be
solved using data sensing and analysis as a technological tool. Data sensing and
analysis (DSA) involves the use of sensors such as radio frequency sensors and
cameras to collect data from the real world. These data, such as spatiotemporal data, are
then processed to create meaningful information. The knowledge gained from the data
and information is then used to make various decisions.

METHODOLOGY
An exhaustive literature review including journal articles, conference
proceedings, magazine articles, expert interviews, and news articles was conducted to
find the challenges faced by the CEE community. The increase in required time, the
decrease in quality, injury and fatality statistics, environmental impacts, and the
increase in cost due to these challenges were used to first rank the challenges
based on their impact within each discipline of CEE and then they were ranked as a
CEE challenge in general. The number of metrics (cost, time, quality, safety, and
environment) that each challenge impacted and the magnitude of that impact were
considered to identify and rank the challenges. The challenges that impacted the highest
number of metrics were considered the most pressing and in need of being solved or
alleviated immediately. Based on these ranks and their pertinence to CEE solutions, challenges
were selected to be further researched. Their applicability for DSA solutions in CEE
was then verified based on discussions with faculty and experts. These challenges are
presented in the next section.

RESULTS
The number in parentheses before the name of the challenge indicates its rank.
(1) Outwit traffic congestion: In 2007, U.S. citizens wasted 2.8 billion gallons of
fuel, 4.2 billion hours and spent $87.2 billion for extra time and cost in 439 urban
areas (Schrank and Lomax, 2009). The National Academy of Engineering identified
“Improvement of Transportation Systems” as a grand challenge (National Academy
of Engineering, 2008). Traffic congestion is a problem that needs to be addressed
because it wastes resources and adds expenses that burden individuals and businesses.
To help resolve the traffic congestion problem in roads, railways, seaports, and
airports, it is necessary to gain an insight into the existing conditions, the behavior of
the public in congestion, and traffic flow. Traffic monitoring, traffic flow analysis,
traffic volume and speed data, driver behavior, and traffic management in general can
help to understand these issues. Optimization and improvement of transportation
modes require extensive research for modeling the system. DSA can
help to gather good quality data for transportation models for planning and analysis
of better transportation.
Data collected from sensors can be used to draw inferences about current
practices, needs and planning for the future. Traffic flow analysis requires trajectories
to analyze the flow of traffic. Data can be used to create trajectory information for
this purpose to formulate models with realistic data and to understand driver behavior
in congestion. Installation of sensors at traffic lights throughout a city can help to
determine the number of cars that cross a traffic light per day on average and origin-
destination data. This average can be taken into account by transportation planners
while developing and optimizing transportation. Overall, DSA can help in the
evaluation phase of decision-making and to measure the effectiveness of the solution.
(2) Enhance Construction Site Safety: Accidents on construction sites accounted for
a preliminary count of 816 deaths in 2009, or 19% (Bureau of Labor Statistics, U.S.
Department of Labor 2010) of the total work-related deaths, yet construction employs
only 7% (Bureau of Labor Statistics, U.S. Department of Labor 2009) of the total U.S.
workforce. Research in construction safety reported that construction-related injuries
cost $4 billion for fatal injuries and $7 billion for non-fatal accidents due to days away
from work in 2002 (Waehrer et al. 2007). These numbers
indicate that there is a great danger to life on a construction site. Accidents that result
in fatal or nonfatal incidents affect not only the person involved but also his/her
family and dependents. Therefore, enhancing construction safety to achieve an accident-free
jobsite is of utmost importance and is a top priority on construction sites.
DSA can help locate dangerous activities and then find the distance between a
piece of heavy equipment or a trench and the construction worker in order to alert personnel.
Examples of research in this area include the use of 3D range cameras, RFID
technology, laser sensors, ad hoc wireless network, and development of obstacle free
paths. However, most of these methods are in preliminary stages of development and
need future work such as improvement, validation, implementation on construction
sites, and cost-benefit analysis. DSA applications can be used to inform project
managers of impending accidents by analyzing site conditions to take appropriate
action. For example, if a worker is not wearing a hard hat and a safety vest or if the
worker is not tied off when working at a height, the project manager can immediately
make sure that the worker wears a hardhat and vest or is tied off. Other examples
include creating algorithms for crane safety and trench cave-in safety measures.
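As one hedged illustration of the proximity-alert idea (the coordinates, identifiers, and threshold below are made up, and positions are assumed to come from on-site sensing), a monitoring loop could flag any worker who comes within a set distance of tracked equipment:

# Sketch: flag workers closer than a safety threshold to tracked equipment.
import math

SAFETY_RADIUS_M = 5.0                                    # assumed threshold

workers = {"W1": (12.0, 3.5), "W2": (40.2, 18.0)}        # plan-view positions in meters
equipment = {"excavator": (10.5, 4.0), "crane": (80.0, 60.0)}

for wid, (wx, wy) in workers.items():
    for eid, (ex, ey) in equipment.items():
        if math.hypot(wx - ex, wy - ey) < SAFETY_RADIUS_M:
            print(f"ALERT: {wid} within {SAFETY_RADIUS_M} m of {eid}")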
(3) Improve Construction Productivity: According to the US Bureau of Economic
Analysis 2010, 4.3% of the US gross domestic product (GDP) ($519 billion in 2009)
is generated by the construction industry (Bureau of Economic Analysis, U.S.
Department of Commerce 2010). Construction productivity (a measure of output per unit of
input) directly influences construction industry output. Research shows that
construction productivity has been declining since the 1960s and has fallen behind
other industries such as manufacturing (Dyer and Goodrum, 2009). DSA can
potentially help to generate new knowledge by abstracting tacit knowledge into
representative data or by building an integrated database where construction
companies share the knowledge. DSA can help to preserve and transfer knowledge to
young workers. Current knowledge management research is working on creating
network structures to transfer explicit knowledge to new workers. Some AEC firms
have been successful at collecting and storing explicit information in enterprise
databases (Woo et al. 2003). These databases can provide a foundation where new
methods and technologies in CEE could be incorporated to generate new knowledge.
DSA can help to manage construction materials, which can also improve
construction productivity. Engineers and academia have been using RFID or GIS
to track materials, which has led to an increase in craft productivity. By
automatically collecting data with the sensing devices on equipment and post-
processing the data, workers can find and flag components more easily, and the time
to track components was reduced from 36.8 min to 4.56 min (Grau et al. 2009).
Another issue is transfer of documented data (paper and electronic) into useful
information that can be analyzed and used for site management. There is no well-
defined automated mechanism to extract, to preprocess, and to analyze data and
summarize the results so that the site managers could use it.
(4) Monitor the health of infrastructure: An average grade of ‘D’ in the 2009 report
card for America’s Infrastructure reveals the poor condition of the infrastructure
(American Society of Civil Engineers, 2009). Monitoring the condition of this failing
infrastructure to make appropriate decisions regarding improvement or replacement is
fundamental for the maintenance and enhancement of infrastructure.
DSA can help through the installation of wireless sensors on bridges and
roads to collect data that can be analyzed without any subjectivity and give a verdict
on the health of the structures. Research is being conducted to mimic nature by
replicating the crawling capabilities of a gecko to provide mobile sensor networks.
Quality assessment of concrete columns also has been studied. However, there still is
a wide scope for data sensing and analysis community to make the technology more
robust and readily available by (1) developing mobile sensors that can maneuver real
world structures and detect damage; and (2) applying data sensing and analysis to
existing infrastructure and embedding sensors into new infrastructure during
construction. Using machine-learning techniques, the data can be analyzed to provide
both qualitative and quantitative results so that the authorities can decide the required
plan of action (regular maintenance or replacement) for infrastructure systems.
(5) Map and Monitor the Sub-surface: Most of the infrastructure in the United
States including the internet, sewage, water lines, and electrical conduits is buried
underground. As noted by the Grand Challenges for Engineering developed by the
National Academy of Engineers, “one major challenge will be to devise methods for
mapping and labeling buried infrastructure, both to assist in improving it and to help
avoid damaging it” (National Academy of Engineering, 2008). The mining industry
and geotechnical engineering could also benefit from knowing the substructure to
avoid accidents/ deaths in mines and to have better geotechnical reports respectively.
Currently, underground assets can be mapped using geophysical techniques such as
Ground Penetrating Radar (GPR), which can only be applied to utilities, while
instruments such as geophones, pore pressure transducers, and accelerometers are
used for geotechnical measurements and traditionally rely on wired acquisition
systems.
DSA applications can include electromagnetic waves being reflected by
metallic surfaces. This idea is being used in the United Kingdom in an attempt to
locate buried metallic pipes. Sensors can be installed on buried infrastructure during
installation or maintenance. Radio frequency sensors and Geographic Information
System (GIS) can potentially be used to collect information regarding the subsurface.
Grain size is a fundamental property of a soil that governs its shear
strength, compressibility, and hydraulic conductivity. DSA tools can be applied to
“see” the soil and find its grain size in an easier and faster manner. DSA can be
applied inside mines to analyze safety conditions. Researchers believe robotics to be
a promising technology to replace humans in mines.
(6) Improve Building Energy Efficiency: Building energy consumption reached
38.9% of total U.S. energy consumption in 2006 and is expected to reach 42.4% by
2030. Energy consumed by residential buildings cost $225.6 billion, and that consumed
by commercial buildings cost $392.2 billion (D&R International, 2009). Improving
building energy efficiency is one of the most cost-effective ways to address the
challenges of the energy crisis, global warming, and air pollution, and to reduce
demand for fossil fuels and stabilize energy prices.
A common discrepancy exists between the intended and actual building
energy consumption. Currently, it is hard to monitor the distribution of energy
consumption within a building by the individual end-user. For example, the number
of free riders that exist in an energy system is still determined by surveying
participants or appliance retailers.
Most research on improving energy efficiency is limited to certain sizes and
types of buildings. To further improve building energy efficiency, there is an urgent need
to develop an integrated database and analysis framework that can cover a larger range
of building types, locations, and design details. The National Institute of Standards
and Technology is working on expanding the database by adding more detailed data
and specifying the framework by incorporating elements such as environmental flow
estimates and building temporal efficiency deterioration.
(7) Reduce Soil Erosion: According to the Global Assessment of Human-induced Soil
Degradation, around 15% of the Earth's ice-free land surface is affected by soil erosion.
Of the accelerated erosion, water is responsible for 56%, wind for 28%, and chemical
and physical deterioration for 16%.
DSA can help to inspect soil conditions, can potentially be used to analyze the
non-linear behavior of rainfall, and can help to predict the rates of water-induced
erosion in order to take appropriate actions. DSA can help to sense the three-
dimensional geometry of waves so that the land area affected by salt water can be
predicted. DSA was also used to quantitatively evaluate wave-induced erosion on the
flood side of the Mississippi River Gulf Outlet spoil bank (Storesund et al. 2010).
However, this technology is now mostly applied at a laboratory level and has not
been applied at a large scale.
More specific data, which would reflect the dynamic soil conditions in spatial
dimensions, are still needed but unavailable for this research. Current experimental
results are usually obtained at the laboratory level and represent soil conditions at a
certain point in time. It is still a challenge to build a systematic analysis framework for
inspecting soil conditions.
(8) Manage Ground Water: According to the U.S. Geological Survey (USGS), 50%
of the drinking water comes from ground water. In the high plains of the United
States such as Nebraska, Colorado, and Texas, the water level has been declining for
the past 30 years. Contamination of groundwater is another threat. Of the 33
drinking ground water samples tested by USGS, 15% exceeded nitrate limits
(Kolpin et al. 2002). Traditional municipal wastewater-treatment technology is not
designed to effectively remove pesticides, chemicals and pharmaceuticals entering
the system. Inspection and reduction of contamination in groundwater remain a
challenge.
DSA can potentially provide a broad picture of national water availability.
The general measurement of underground water, called the water budget, is based on
collecting and analyzing data related to the inflows, outflows, and changes in storage
in the whole ground-water system. Water recharge refers to the process in which water
moves downward from the surface into ground water. It is an important factor affecting
the water budget, and it happens randomly and continuously in space and time;
therefore, it is difficult to estimate the recharge rate accurately. With proper data
sensing and analysis, the water recharge amount can be determined as the residual term
of the water budget. Engineers are making efforts to solve this problem by measuring the
subtle changes in gravity detected by gravimeters; aquifer storage can be represented
by analysis of microgravity data.
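Loosely, that residual computation can be written as a simple balance; the sketch below assumes a basic budget of the form inflows minus outflows equals change in storage, with illustrative numbers only.

# Sketch: estimate recharge as the residual of a simple ground-water budget,
# assuming (recharge + other inflows) - outflows = change in storage,
# with all terms in the same volume units over the same period.
def recharge_residual(other_inflows, outflows, storage_change):
    return storage_change + outflows - other_inflows

print(recharge_residual(other_inflows=2.1, outflows=3.4, storage_change=-0.5))  # 0.8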
The lack of a nation-wide, comprehensive, consistent and up-to-date database
and an integrated analysis framework for water availability directly leads to a lack of
indicators representing the status and trends in storage volumes, flow rates, and uses
of water nationwide. Underground water systems have a long-term equilibrium
between inflows and outflows. Sensors used for underground inspections tend to be
damaged frequently due to environmental conditions, so the long-term performance of
data sensing technology in underground water remains unknown.
(9) Estimate Sea Level: It is important for engineers to know the sea level when
designing a coastal structure. Coastal infrastructure needs to be designed above the
highest sea level or with necessary prevention against waves and tides. Better
estimation of the sea level can help reach a balance between coastal infrastructure
safety and construction cost. Knowledge about sea levels is necessary for coastal
design and management and predicting flooding risks.
Data sensing and analysis can be used to analyze the temporal and spatial
variations of wave dynamics. Disasters such as landslides, rock falls, shore
instabilities, snow avalanches, and glacier calving all may generate waves in oceans,
lakes or reservoirs. By detecting the impulse of sea waves, seismically generated
tsunamis with potential global destructive effect can be predicted and prevented.
DSA can be used to compute the gravitational effects and predict tides. The
current approach focuses more on stationary sea level but DSA can potentially be
used to predict extreme sea levels. A case study in Richmond, B.C., Canada, based
on two years of data, successfully portrayed the relationship between sea levels and
wave run-up (Liu et al. 2010). Cluster methodology has been successfully used to
reflect the transition between the Scotian Shelf and the Gulf of Maine area on the
east coast of the U.S. (Scotto et al. 2010). Current DSA methods to detect the impulse
of sea waves and tsunamis are specific prototype studies: numerical simulations and
predictions based on field data, analytical calculations, or general model studies. A
wave framework that integrates current models with maximum wave and impact
zone estimates calls for further development.
(10) Maintain Roofs Efficiently: According to a survey of business owners
conducted by the U.S. Census Bureau in 2007, the roofing industry in the United States
accounted for production of over $30 billion in 2007, an increase of over 30% from
$23 billion in 2002. Several small hurricanes that caused little damage to structures
but severe damage to roofs (Miami-Dade County Building Code Compliance Office
2006) reveal that roofs are the weakest part of a structure. Water seepage, which
usually causes strains to the interior structure, is an issue. An estimated increase of
$1,232 in maintenance cost per leak is expected (Coffelt et al. 2010).
DSA was used to analyze the relationship between roof damage and roof tile
patterns under experimental hurricane conditions; in that analysis, a concrete tile roof
with mortar set proved to perform best in resisting a simulated hurricane (Huang et
al. 2009). Another DSA use is to analyze the dynamic rain and snow loads on roofs.
DSA can also be used to assess and inspect the condition of a roof system to help
identify the quantity and severity of roof defects against a set of benchmark conditions.
Due to a lack of standards and to labor turnover, it is hard for facility managers
to keep a consistent record of roof conditions before reaching the right decision for
roof maintenance and renovation. The U.S. Army Construction Research Laboratory
has developed a roof condition index, based on a wide range of data collected
nationwide, that can serve as a benchmarking system, but it is too general to be
applied to a specific roof system.
CONCLUSION
This paper reported challenges across all disciplines within CEE that can be
aided with the use of DSA methods. These challenges were identified based on their
impact on cost, quality, time, safety, and environment. The challenges included
improving construction productivity, enhancing construction site safety, outwitting
traffic congestion, monitoring the health of infrastructure, mapping and monitoring
the subsurface, improving building energy efficiency, maintaining roofs efficiently,
reducing soil erosion, estimating sea level, and managing ground water. Due to page
limits, complete evidence regarding the severity of the challenges could not be
presented. Different methods using DSA that can be employed to gather information
about the problem to make better decisions or to help solve/alleviate the problem
were also suggested. The challenges can possibly help researchers define their
research direction and the suggested methods to aid solving the challenges can
potentially be used by researchers to solve the problems faced by the CEE
community.

ACKNOWLEDGEMENTS
This work was financially supported by the American Society of Civil
Engineers through the Technical Council on Computing and Information Technology
Council’s Data Sensing and Analysis Committee.

REFERENCES
American Society of Civil Engineers (2009) “Report card for America’s
infrastructure” <http://www.infrastructurereportcard.org> (Nov. 21, 2010)
Bureau of Economic Analysis, U.S. Department of Commerce (2010) “National
Income and Product Accounts”
Bureau of Labor Statistics, U.S. Department of Labor (2009) “Household Data
Annual Averages- Employed persons by industry, sex, race, and
occupation”<http://www.bls.gov/cps/cpsaat17.pdf> (Nov. 21, 2010)
Bureau of Labor Statistics, U.S. Department of Labor. (2010). “National Consensus
of Fatal Occupational Injuries in 2009 (Preliminary
Results)”<http://www.bls.gov/news.release/pdf/cfoi.pdf> (Nov. 21, 2010)
Coffelt, D. P., Hendrickson, C. T., & Healey, S. T. (2010). “Inspection, condition
assessment, and management decisions for commercial roof systems.” Journal
of Architectural Engineering; American Society of Civil Engineers, 16(3), 94-
99. Retrieved from http://dx.doi.org/10.1061/(ASCE)AE.1943-5568.0000014
Dyer, B. D., & Goodrum, P. M. (2009). “Construction industry productivity: Omitted
quality characteristics in construction price indices”. Paper presented at the
2009 Construction Research Congress - Building a Sustainable Future, April 5,
2009 - April 7,121-130. Retrieved from http://dx.doi.org/10.1061/41020(339)13
D&R International, L. (2009). “2008 buildings energy data book” Retrieved from
http://buildingsdatabook.eren.doe.gov/docs%5CDataBooks%5C2008_BEDB_U
pdated.pdf
Grau, D., Caldas, C. H., Haas, C. T., Goodrum, P. M., & Gong, J. (2009). “Assessing
the impact of materials tracking technologies on construction craft
productivity.” Automation in Construction, 18(7), 903-911.
Huang, P., Mirmiran, A., Chowdhury, A. G., Abishdid, C., & Wang, T. (2009).
Performance of roof tiles under simulated hurricane impact. Journal of
Architectural Engineering, 15(1), 26-34.
Kolpin, D. W., Furlong, E. T., Meyer, M. T., Thurman, E. M., Zaugg, S. D., Barber,
L. B., et al. (2002). “Pharmaceuticals, hormones, and other organic wastewater
contaminants in U.S. streams, 1999-2000: A national reconnaissance”.
Environmental Science and Technology; American Chemical Society, 36(6),
1202-1211.
Liu, J. C., Lence, B. J., & Isaacson, M. (2010). “Direct joint probability method for
estimating extreme sea levels”. Journal of Waterway, Port, Coastal and Ocean
Engineering, 136(1), 66-76. Retrieved from
http://dx.doi.org/10.1061/(ASCE)0733-950X(2010)136:1(66)
Miami-Dade County Building Code Compliance Office (MDC-BCCO). (2006).
“Post hurricane Wilma progress assessment. Miami”
National Academy of Engineering (2008) “Grand Challenges for Engineering”
<http://www.engineeringchallenges.org/Object.File/Master/11/574/Grand%20C
hallenges%20final%20book.pdf> (Nov. 21, 2010)
Schrank, D. and Lomax, T. (2009). “2009 Urban Mobility Report”
<http://tti.tamu.edu/documents/mobility_report_2009_wappx.pdf> (Nov. 21,
2010)
Scotto, M. G., Alonso, A. M., & Barbosa, S. M. (2010). “Clustering time series of sea
levels: Extreme value approach.” Journal of Waterway, Port, Coastal, and
Ocean Engineering, 136(4), 215-225.
Storesund R., Bea R. G., & Huang Y. (2010). “Simulated wave-induced erosion of
the mississippi river-gulf outlet levees during hurricane Katrina”. Journal of
Waterway, Port, Coastal and Ocean Engineering, 136(3), 177-189. Retrieved
from http://dx.doi.org/10.1061/(ASCE)WW.1943-5460.0000033
Waehrer, G.M., Dong, X.S., Miller, T., Haile, E., Men, Y. (2007). “Costs of
occupational injuries in construction in the United States”, Accident Analysis &
Prevention, Vol. 39, Issue 6, Pg 1258-1266
Woo J.H., Clayton M.J., Johnson R.E., Flores B.E., Ellis C. (2003). “Dynamic
Knowledge Map: reusing experts' tacit knowledge in the AEC industry”,
Automation in Construction, Vol. 13, Issue 2, Pg 203-207
Automated 3D Structure Inference of Civil Infrastructure Using a Stereo
Camera Set
H. Fathi1, I. Brilakis2 and P. Vela3
1 Construction IT Lab, School of Civil and Environmental Engineering, Georgia Institute of
Technology; Phone: (404)713-3667; Email: ha_fathi@gatech.edu
2 Assistant Professor, School of Civil and Environmental Engineering, Georgia Institute of
Technology; Phone: (404)894-9881; Fax: (404)894-1641; Email: brilakis@gatech.edu
3 Assistant Professor, School of Electrical and Computer Engineering, Georgia Institute of
Technology; Phone: (404)894-8749; Fax: (404)894-5935; Email: pvela@gatech.edu
Keyword: Spatial data collection; Infrastructure; Stereo vision; Videogrammetry.
Abstract:
The commercial far-range (>10m) infrastructure spatial data collection methods are
not completely automated. They need a significant amount of manual post-processing
work and, in some cases, the equipment costs are significant.
method that is the first step of a stereo videogrammetric framework and holds the
promise to address these issues. Under this method, video streams are initially
collected from a calibrated set of two video cameras. For each pair of simultaneous
video frames, visual feature points are detected and their spatial coordinates are then
computed. The result, in the form of a sparse 3D point cloud, is the basis for the next
steps in the framework (i.e., camera motion estimation and dense 3D reconstruction).
A set of data, collected from an ongoing infrastructure project, is used to show the
merits of the method. Comparison with existing tools is also shown, to indicate the
performance differences of the proposed method in the level of automation and the
accuracy of results.
1. Introduction
Spatial data can be used to infer required information on the current state and/or
condition of civil infrastructure and make optimal decisions at various stages of the
infrastructure’s life cycle. It can assist constructors, facility managers and inspectors
to design the site layout more efficiently, assess on-site 3D status of the project and
collect information for health monitoring of the built structures. A number of
infrastructure’s spatial data collection techniques and 3D reconstruction
methodologies are commonly used today; however, current practice lacks a solution
that is accurate, automatic and cost efficient at the same time.
Videogrammetry, the process of measuring coordinates of object points from two or
more video frames captured by camcorders (Zhu and Brilakis, 2009), is a promising
area of research which is potentially able to address the limitations of the available
methods. A videogrammetric method needs little human intervention and can provide
a high degree of automation. Low equipment cost is another advantage since the
method only needs off-the-shelf cameras.


Considering a set of two calibrated cameras, this paper aims to present an automated
and robust method for the first step of progressive infrastructure modeling using
videogrammetry, which is ongoing research. 3D coordinates of visual feature
points of the infrastructure are calculated in this step and the outcome is presented in
the form of a 3D point cloud.
As the input data, the proposed method uses two video frames (i.e., left and right
view) captured at the same time by a set of two calibrated cameras. Speeded-Up
Robust Features (SURF) (Bay et al., 2006) are used to detect the location of the
distinctive features. SURF also encapsulates the descriptive information of each
feature in the form of vectors in a multi-dimensional space. The descriptor vectors are
then used to automatically match the feature points between two frames.
Mathematical point matching constraints and RANdom SAmple Consensus
(RANSAC) (Fischler and Bolles, 1981) algorithm are used to discard mismatches.
Given the point correspondences and the constraints, corrected correspondences are
calculated using an optimal correction algorithm (Kanatani et al., 2008) such that
geometric error is minimized. Finally, the structure information of the scene (i.e.,
sparse 3D point cloud) is calculated using triangulation.
The proposed method is implemented using Microsoft Visual C# and EmguCV (a
.Net wrapper to the Intel OpenCV library). A set of stereo video frames, collected
from an ongoing infrastructure project, is used to validate the accuracy of the results.
Spatial distance between randomly selected features is used for this purpose. In the
evaluation, tape measurements are considered the actual distance. Using the 95% limits
of agreement method (Bland and Altman, 1986), the results for the typical range values of
infrastructure mapping indicate that a point cloud-based measurement can differ from
its corresponding tape measurement by -44.1 to 53.5 mm.
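For reference, the 95% limits of agreement are simply the mean of the paired differences plus or minus 1.96 standard deviations of those differences; a minimal sketch with made-up measurements (not the study's data) is shown below.

# Sketch: Bland-Altman 95% limits of agreement between point-cloud and tape
# measurements; the sample values are illustrative only.
import numpy as np

point_cloud_mm = np.array([1520.0, 2310.0, 980.0, 4050.0, 3125.0])
tape_mm        = np.array([1502.0, 2335.0, 995.0, 4010.0, 3140.0])

diff = point_cloud_mm - tape_mm
mean_diff, sd_diff = diff.mean(), diff.std(ddof=1)
lower, upper = mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff
print(f"limits of agreement: {lower:.1f} to {upper:.1f} mm")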
2. Background
This section, first, reviews the existing technologies that are used in practice for
spatial data collection of infrastructure. State of the research is then presented to show
the latest efforts in this field.
2.1. Remote Spatial Sensing of Infrastructure
The current practice in the Architecture, Engineering, Construction and Facilities
Management (AEC/FM) industry is to use remote spatial sensing methods to collect
spatial data. Remote spatial sensors for far-range (>10m) spatial data acquisition are
generally categorized into two classes: active (e.g., terrestrial laser scanner) and
passive (e.g., photogrammetry and videogrammetry) sensors.
Terrestrial laser scanners can provide tens of thousands of measurements per second
with millimeter-level accuracy (Tang et al., 2009). They can maintain accuracy on the
order of a few millimeters even for objects at distances of hundreds of meters.
However, at spatial discontinuities (e.g., object edges), the scanned data contains
inaccurate data points, known as mixed-pixels (Tang et al., 2009). On the other hand,
in the range value required for infrastructure mapping, data collection has to be done
in several steps and individual point clouds should be merged together to create the
overall result. The main limitation of laser scanning, however, lies in its high
equipment cost. A laser scanner, appropriate for the measurement range required in
civil infrastructure, can cost tens of thousands of dollars.
In contrast, photogrammetry is the process of measuring the properties of real world
objects from digital images. Photogrammetry requires a two-step procedure to
provide spatial information. First, the site engineer has to shoot the right source
photos as the input data. After the collection of all photos, 3D object point
coordinates are calculated through some post-processing stages. At least two images
from different views of an object are needed to calculate the depth value. A number
of commercial photogrammetry software such as ImageModeler and PhotoModeler
are now available in the market. They provide the possibility to take measurements in
the image and create photo-realistic 3D models. However, they have some limitations
as well. The user needs to provide the information that the software requires to derive
3D position, orientation, focal length and distortion of the camera (ImageModeler,
2009). This information is 2D points in the images that correspond to the same point
in space. In general, a high level of human intervention is required in several steps
(Zhu and Brilakis, 2009).
2.2. Related Work
Remote spatial sensing and its applications related to infrastructure has been an active
research topic in recent years. Kim et al. (2005) acquire on-site spatial
information using a targeted laser range finder to generate a sparse 3D point cloud. This
cloud helps to create a 3D workspace model which is then used for various safety-
enhancement applications such as obstacle avoidance. Akinci et al. (2006) plan a
process for active quality control of construction sites (i.e., identifying defects early
in the construction) using laser scanners to collect the spatial data. The method uses a
number of commercially available software in the modeling process and needs human
intervention in some stages.
A number of vision-based technologies are also presented for 3D point cloud
generation of civil infrastructure. On-site digital images have been used in Memon et
al. (2005) to create 3D models of the structural elements presented in 3D CAD
drawings. In this semi-automated approach, the 3D model of a specific object is
generated using commercial photogrammetry software. Structure from Motion (SfM)
techniques were used by Golparvar-Fard et al. (2009) to extract sparse 3D data from
daily progress photographs of construction sites. It helps to compare as-built and as-
planned construction by superimposing the sparse 3D data over as-planned 4D
models. The accuracy of a photogrammetric method is evaluated in Dai and Lu
(2010) for generating 3D models of building components. First, each object’s spatial
data is acquired from a set of digital images using commercial photogrammetry
software. A high degree of human intervention is necessary in this step for point
matching and data smoothing. Then, the 3D model is generated only up to scale, and
hence the length of a reference line is required to obtain a metric
reconstruction. Son and Kim (2010) use video streams as the input for 3D data
acquisition and 3D structural component recognition. A trinocular stereo camera is
used in conjunction with its available software to acquire 3D data. This type of
camera generates rectified images and hence the search for corresponding points only
needs to be performed along the scanline which significantly simplifies the problem.
3. Methodology
The goal of the proposed method is to automatically generate a sparse 3D point cloud
of infrastructure scenes using a stereo set of video frames collected by a set of two
calibrated cameras. Fig. 1 shows an overview of the method. The output can be used
for camera ego-motion estimation and dense 3D reconstruction of the infrastructure
scene.
Fig. 1: Overview of the proposed method
Reconstructing a 3D model in which objects have their correct (i.e., Euclidean) shape
necessitates calibration of the cameras (Hartley and Zisserman, 2003). Camera
calibration is the process of determining intrinsic and extrinsic parameters of a
camera set. Therefore, the first step in the proposed method is to calibrate the stereo
camera set which is going to be used for data collection. Stereo camera set calibration
is a two-step procedure. Initially, each camera has to be calibrated separately. Then,
the rotation and translation vectors that map the two local coordinate systems, which
were assigned to each camera, have to be calculated.
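For illustration only, the same two-step calibration could be carried out with OpenCV roughly as sketched below (the experiments in Section 4 used Bouguet's calibration toolbox for this step; the checkerboard size and the grayscale image lists left_images and right_images are assumptions made here for the example):

import cv2
import numpy as np

# Assumed 9x6 checkerboard; object points defined in the checkerboard's own frame.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, left_pts, right_pts = [], [], []
for left_img, right_img in zip(left_images, right_images):  # hypothetical image lists
    ok_l, c_l = cv2.findChessboardCorners(left_img, pattern)
    ok_r, c_r = cv2.findChessboardCorners(right_img, pattern)
    if ok_l and ok_r:
        obj_pts.append(objp)
        left_pts.append(c_l)
        right_pts.append(c_r)

size = left_images[0].shape[::-1]
# Step 1: calibrate each camera separately (intrinsics and distortion).
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
# Step 2: recover the rotation R and translation T mapping the two camera frames.
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)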
In the next step, highly distinctive visual features have to be detected. This paper uses
the SURF algorithm to detect the features which are invariant to image rotation,
scaling, translation and illumination with different levels of robustness. Once the
location of the features is detected, SURF calculates the feature descriptor vector
from the local gradient orientation and magnitudes in the neighborhood around the
feature. The motivation behind the selection of SURF lies in the fact that SURF is
superior to other competing algorithms (e.g., the Scale Invariant Feature Transform
(SIFT) presented by Lowe (2004)) in terms of computational efficiency.
Moreover, the slightly smaller ratio of correct matches reported in Bauer et al. (2007)
can be compensated using the RANSAC algorithm to discard mismatches.
Having the feature set in each video frame, the reliable correspondences are found by
comparing individual features from one set with the features in the other set. The
matching is based on the distance between descriptors. Euclidean distance is used for
this purpose. The constraint described by Lowe (2004) is used to discard the
candidate matches that are not reliable. Although using this constraint significantly
increases the proportion of correct matches, remaining outliers are further discarded using the
RANSAC algorithm. The method employs the normalized 8-point algorithm (Hartley,
1997) to estimate the fundamental matrix as the mathematical model of the RANSAC
algorithm. The fundamental matrix describes epipolar geometry between two images
and provides the transformation by plotting a selected point in one image as an
epipolar line on another image, thus projecting a point onto a line. Once the
appropriate mathematical model is established, the consensus number is determined
according to the number of pairs in the data set which fit the model. The model
corresponding to the maximum consensus is finally used to discard mismatches.
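For illustration, this matching and outlier-rejection stage could be sketched with OpenCV as follows (a minimal sketch, assuming the contrib SURF implementation is available and that left_frame and right_frame are a grayscale stereo frame pair; the 0.6 ratio threshold is the value adopted in Section 4):

import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)   # requires opencv-contrib
kp1, des1 = surf.detectAndCompute(left_frame, None)
kp2, des2 = surf.detectAndCompute(right_frame, None)

# Nearest and second-nearest neighbours by Euclidean (L2) descriptor distance.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio constraint: keep a match only if it is clearly better than
# the second-best candidate.
good = [m for m, n in knn if m.distance < 0.6 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC with the fundamental matrix as the mathematical model; pairs that
# do not fit the epipolar geometry are discarded as mismatches.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
pts1, pts2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]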
The next step is to displace the location of the points in each matched pair in order to
satisfy the epipolar geometry constraint (i.e., x'T F x = 0, where x is a point in the first
view, F is the fundamental matrix, and x'T is the transpose of the corresponding point x'
in the second view). The optimal correction algorithm (Kanatani et al., 2008) is used
to find the minimum displacement based on the geometric error minimization.
Finally, spatial coordinates of the 2D points in stereo frames are estimated using
triangulation. For the given set of corresponding points, the output can be represented
in the form of a sparse 3D point cloud.
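A corresponding sketch of the triangulation step, assuming the calibration outputs K1, d1, K2, d2, R and T from the sketch above and the refined correspondences pts1 and pts2, could look as follows:

import cv2
import numpy as np

# Projection matrices of the two cameras, expressed in the left camera's frame.
P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K2 @ np.hstack([R, T.reshape(3, 1)])

# In practice pts1/pts2 would first be undistorted (e.g., with cv2.undistortPoints).
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # 4xN homogeneous points
cloud = (pts4d[:3] / pts4d[3]).T                        # N x 3 sparse point cloud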
4. Experimental Results
A set of stereo video streams were collected from the Clough Undergraduate
Learning Commons project of Turner Construction at the Georgia Tech campus using
a calibrated set of Microsoft LifeCam NX-6000 notebook web cameras (Fig. 2). The
resolution of the video streams was set to 1600 × 1200 pixels. Prior to data collection,
the camera set was calibrated using the Bouguet’s stereo camera calibration toolbox
(Bouguet, 2004). A set of 28 stereo video frames were selected randomly to verify the
accuracy of the method with adequate statistical significance of the results. SURF
features were then extracted and 64-dimensional feature descriptors were calculated
for each frame in the database. Fig. 3 demonstrates the result of feature extraction for
one of the frames.
Fig. 2. Sensor system
Fig. 3. SURF feature extraction and matching
The best candidate match for each feature point in the left view was found by
identifying its nearest neighbor in the right view’s feature set. Automatic
correspondence point matching was implemented using Euclidean distance and the
threshold value suggested by Snavely et al. (2007), i.e., 0.6, was applied to evaluate
the distance ratio of the closest neighbor to that of the second-closest neighbor and
discard features that do not have any good matches.
The fundamental matrix was considered as the mathematical model in the RANSAC
algorithm. Fig. 4 shows a sample of the stereo frames in the database and the result of
correspondence point matching. Not all of the point pairs exactly satisfy the epipolar
constraint resulting from the estimated fundamental matrix. The overall error for each
stereo frame in the database, according to the fundamental matrices used for
mismatch rejection, averaged 1.1707. The optimal correction algorithm,
therefore, was used to refine the location of features at a subpixel level. The algorithm
decreased the error to 2.946 × 10^-7.
Fig. 4. Sample matched feature points. (a) Sample correct matches; (b) Sample
incorrect matches discarded by RANSAC algorithm
Once the 2D locations of the point pairs are corrected using geometric error
minimization, their corresponding back-projected rays meet in space. The 3D
coordinates of the matched feature points can then be estimated via triangulation.
Triangulation generates a sparse 3D point cloud for each pair of the stereo frames.
In order to evaluate the accuracy of the point clouds, on-site tape measurements
between the spatial location of randomly selected feature points were compared with
the distance between the corresponding points in the generated point cloud. In this
comparison, the minimum sample size required for 95% confidence level and ±10%
confidence interval is equal to 96. Since there are 28 stereo frames in the database, 4
random samples were selected from each pair which led to 112 samples. The samples
were selected from those points having a depth value between 15 m and 20 m. The 95%
limits of agreement method was used to assess the agreement between the tape and point
cloud-based measurements. The obtained results indicate that the mean value of the
differences is 4.7 mm and the standard deviation is 24.9 mm. Therefore, the lower and
upper limits in the 95% limits of agreement method are calculated as -44.1 mm and
53.5 mm. This implies that, with 95% confidence, a point cloud-based measurement
would differ from the corresponding tape measurement by between -44.1 mm and
+53.5 mm at depths between 15 and 20 m.
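For clarity, these limits follow directly from the Bland and Altman (1986) formulation, i.e., the mean difference plus or minus 1.96 times the standard deviation of the differences: 4.7 ± 1.96 × 24.9 ≈ -44.1 mm and +53.5 mm.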
5. Conclusion
This paper presented a method for sparse 3D point cloud generation of an
infrastructure scene which is the first step of a general videogrammetric framework
for remote spatial sensing of civil infrastructure. 3D coordinates of the corresponding
SURF feature points in a stereo pair of video frames were calculated to form a point
cloud. This sparse point cloud will be used for camera motion recovery and dense 3D
reconstruction of the infrastructure. The general framework, upon success, can fully
automate the process of spatial data collection that is a necessary step for applications
such as infrastructure as-built modeling.
A database of stereo frames was considered to evaluate the validity and statistical
significance of the results. The distance between randomly selected points in a point
cloud was calculated using the estimated 3D coordinates of each point and was also
measured with a tape. The difference between these two measurement
sets was used to find the 95% limits of agreement.
Future work will focus on several areas to improve the accuracy of the presented
method. First, the effect of the distance between two cameras on the accuracy of the
generated point cloud needs to be investigated. Second, the proposed method was
applied only at the scene level, and the potential benefit of using video sequences as the input data
was not considered. Exploiting the interaction of the video frames in the sequence would
significantly increase the accuracy.
6. Acknowledgement
This material is based upon work supported by the National Science Foundation
under Grant #0904109. Any opinions, findings, and conclusions or recommendations
expressed in this material are those of the authors and do not necessarily reflect the
views of the National Science Foundation.
7. References
B. Akinci, F. Boukamp, C. Gordon, D. Huber, C. Lyons, K. Park, A formalism for
utilization of sensor systems and integrated project models for active construction
quality control, Aut. Const. 15(2) (2006) 124-138.
J. Bauer, N. Sunderhauf, P. Protzel, Comparing several implementations of two
recently published feature detectors, in: Proceedings of the Int. Conf. on Intelligent
and Autonomous Systems, IAV, Toulouse, France (2007).
H. Bay, A. Ess, T. Tuytelaars, L.V. Gool, Speeded-up robust features (SURF), in
Computer Vision-ECCV 2006, Springer, 3951 (2006) 404-417.
J.M. Bland, D.G. Altman, Statistical-methods for assessing agreement between 2
methods of clinical measurement, Lancet 1(8476) (1986) 307-310.
J.Y. Bouguet, Camera calibration toolbox for Matlab, <http://www.vision.caltech.edu/bouguetj/calib_doc/>.
F. Dai, M. Lu, Assessing the accuracy of applying photogrammetry to take geometric
measurement on building products, J. Constr. Eng. M., 136(2) (2010) 242-250.
M. Fischler, R. Bolles, Random sample consensus: a paradigm for model fitting with
applications to image analysis and automated cartography, Communications of the
ACM 24(6) (1981) 381-395.
M. Golparvar-Fard, F. Peña-Mora, S. Savarese, D4AR – a 4-dimensional augmented
reality model for automating construction progress monitoring data collection,
processing and communication, J. of Inf. Tech. in Constr. 14 (2009) 129-153.
R. Hartley, In defence of the 8-point algorithm, IEEE Transactions on Pattern
Analysis and Machine Intelligence 19 (1997) 580-593.
R. Hartley, A. Zisserman, Multiple view geometry in computer vision, second ed.,
Cambridge University Press, Cambridge, 2003.
ImageModeler, 5-step tutorial of a complete ImageModeler project (2009).
K. Kanatani, Statistical optimization for geometric computation: theory and practice,
Dover Publications, New York, USA, 2005.
C. Kim, C.T. Haas, K.A. Liapi, Rapid, on-site spatial information acquisition and its
use for infrastructure operation and maintenance, Aut. Con., 14 (2005) 666-684.
D. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. of
Computer Vision 60(2) (2004) 91-110.
Z.A. Memon, M.Z. Abd-Majid, M. Mustaffar, An automatic project progress
monitoring model by integrating Auto-CAD and digital images, in: Proceedings of
the ASCE Int. Conf. on Computing in Civil Eng., Mexico, July 12-15, 2005.
N. Snavely, S. Seitz, R. Szeliski, Modeling the world from internet photo
collections, Int. J. of Computer Vision 80(2) (2008) 189-210.
H. Son, C. Kim, 3D structural component recognition and modeling method using
color and 3D data for construction progress monitoring, Aut. Con. 19(7) (2010)
844-854.
P. Tang, B. Akinci, D. Huber, Quantification of edge loss of laser scanned data at
spatial discontinuities, Aut. Con., 18 (2009) 1070-1083.
Z. Zhu, I. Brilakis, Comparison of optical-sensor-based spatial data collection
techniques for civil infrastructure modeling, J. of Comp. in Civil Eng., 23(3)
(2009) 170-177.
Unstructured Construction Document Classification Model through Support
Vector Machine (SVM)
Tarek Mahfouz

Assistant Professor, Department of Technology, College of Applied Science and Technology, Ball State University, Muncie, Indiana 47306; tmahfouz@bsu.edu
ABSTRACT
The dynamic nature of the construction industry yields enormous amount of
documents that have to be stored, retrieved, and reused. Most of these documents are
generated in an unstructured format. Therefore, in an attempt to provide a robust
document classification methodology for the construction industry, the current
research proposes an automated classifier model through Support Vector Machines
(SVM). The adopted research methodology (1) gathered a corpus of documents
including 300 correspondences, 150 meeting minutes, 25 claims, and 300 Differing
Site Conditions (DSC) cases; (2) developed C++ algorithms which process
unstructured documents into a format readable by the SVM algorithm; (3) developed
16 SVM automated classification models; and (4) tested and validated the developed
models. The models developed in the current research attained higher accuracy,
and better precision and recall, than previous work reported in the literature.
The current research represents a continuation of previous work performed
within this realm.
INTRODUCTION
The construction industry is considered to be one of the cornerstones of any
growing economy. The total spending in this industry in 2007 was estimated to be
about $14 trillion (US Census, 2010). This level of expenditure is taking place in a
very dynamic and complex environment due to the current advancement rate in
technologies, materials, and construction methods. Considering the above,
construction projects are becoming more sophisticated and are requiring the
integration of different expertise that might not be available in one geographic
location. Consequently, one of the outcomes of any construction project is the
production of a massive amount of documents in diversified formats. These
documents represent the knowledge database of the industry. This characteristic of
the industry has initiated the need for methodologies that facilitate the storage and
retrieval of these documents for reuse of the stored knowledge.
Over the last few years, Artificial Intelligence (AI) has been used by researchers to
address the need for Knowledge Management (KM) in the construction industry
(Labidi 1997). This research has resulted in the development of automated and semi-
automated tools, to enable the utilization of textual data expressed in natural
language, through text mining, document clustering, controlled vocabularies, and
web-based models (Caldas et al. 2002, Caldas and Soibelman 2003, Ioannou and Liu
1993, and Ng et al. 2006). Although those studies made significant contributions,
none of them investigated the development of a generic automated model for
the classification of unstructured documents.
Therefore, in an attempt to provide a robust document classification
methodology for the construction industry, this paper developed automated classifiers
through Support Vector Machines (SVM). The paper focused on two groups of
construction documents. The first consists of documents with high word variation,
such as correspondences and meeting minutes. The second group relates to
documents with low word variation, such as construction claims and legal documents.
that end, the adopted research methodology (1) developed C++ algorithms to create
feature spaces for the utilized document sets and implement weighing schemes; (2)
developed 16 SVM automated classification models; and (3) tested and validated the
developed models. It is conjectured that this research stream will help in relieving the
negative consequences associated with the lengthy process of analyzing textual
documents in the construction industry. In addition, the achieved outcomes of this
research highlight the possibility of this technique to be adopted for automated
decision support in the construction industry.
LITERATURE REVIEW
Over the last decade, researchers focused on developing construction
information integration tools that are designed to work with structured data like CAD
models and scheduling databases. However, a major portion of knowledge is
produced in semi structured or unstructured formats like contract documents, change
orders, and meeting minutes, all of which are normally stored as text files (Caldas and
Soibelman 2003). Consequently, facilitating the use of these documents through
integrated methods has become a necessity to enhance project control, performance,
and data reuse. Ioannou and Liu (1993) proposed a computerized database for
classifying, documenting, storing and retrieving documents on emerging construction
technologies. Scherer and Reul (2000) utilized text mining techniques to classify
structured project documents. Caldas et al. (2002) and Caldas and Soibelman (2003)
used information retrieval via text mining techniques to facilitate information
management and permit knowledge discovery through automated categorization of
various construction documents according to their associated project component. In
addition, Caldas et al. (2005) proposed a methodology for incorporating
construction project documents into project management information systems using
semi-automated support integration to improve overall project control. Ng et al.
(2006) implemented Knowledge Discovery in Databases (KDD) through a text
mining algorithm to define the relationships between type and location of different
university facilities, and the nature of the required maintenance reported in the
Facility Condition Assessment database. Although these studies made significant
contributions to KM in the construction industry, they did not investigate the
development of a generic model that deals with different types of construction
documents.
Support Vector Machines (SVM)
The following is a descriptive background of the Support Vector Machines
concept. SVM classification aims to find a surface that best separates a set of training
data points into classes in a high dimensional space. In the current research, it aims at
defining the construction subject pertinent to each of the training documents based on
the word representation in its content. In its simplest linear form, a SVM finds a
hyper-plane that separates a set of positive examples (documents belonging to a
construction subject) from the set of negative examples (documents not belonging to
the same construction subject) with a maximum margin. Binary classification is
performed by using a real valued hypothesis function, equation 1, where input x
(document) is assigned to the positive class (Specific Subject) if ƒ(x)≥0; otherwise, it
is assigned to the negative class.
ƒ(x) = <w · x> + b (1)
For a binary linear separation problem a hyper-plane is assigned to be ƒ(x) = 0. With
respect to equation 1, the vector w (weight vector) and b (functional bias) are the
parameters that control the function of the separation hyper-plane (refer to Figure 1).
In addition, x is the feature vector which may have different representations based on
the nature of problem. Within the context of the current research, the input feature
space X consists of the training documents that are defined by the vectors x and o
in figure 1.
Figure 1. SVM Kernel Transformation and Classification
In the development of the proposed SVM models a problem emerges if the
data are not linearly separable. Assigning an unstructured document to specific class
cannot be represented by a simple linear combination of its content words.
Consequently, a more sophisticated higher dimension space is needed for the
representation of the current problem in order for it to be linearly separable. As the
literature in this field suggests, kernel representations provide a solution to this
problem by transforming the data into a higher dimensional feature space to enhance
the computational power of linear machine learning (Mangasarian, and Musicant,
1999). As shown in equation 1, the representation of a case in the feature space for
linear machine learning is achieved as a dot product of the feature vector (x) and the
weight vector (w). By introducing the appropriate Kernel function, cases are mapped
to higher feature space (equation 2 and figure 1) transforming the prediction problem
from a linearly inseparable to a linearly separable one. In this manner, the input space
X is mapped into a new, higher feature space F = {Φ(x) | x ∈ X}, where Φ is the kernel
transformation function.
x = (x1, …, xn) → Φ(x) = (Φ1x1, …, Φnxn) (2)
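As a small illustration (not the authors' implementation), the kernel families compared later in this paper can be written as simple functions of two document vectors; gamma and coef0 below are hypothetical hyper-parameter values:

import numpy as np

def polynomial_kernel(x, z, degree, coef0=1.0):
    # 1st, 2nd or 3rd degree polynomial kernel, depending on 'degree'.
    return (np.dot(x, z) + coef0) ** degree

def rbf_kernel(x, z, gamma=0.1):
    # Radial Base Function (RBF) kernel.
    return np.exp(-gamma * np.sum((x - z) ** 2))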
METHODOLOGY
The following sections of the paper describe the different steps of developing,
implementing, and validating the SVM models. The adopted research methodology
under the current task is composed of four main stages, as illustrated in figure 2.
These stages are defined as (1) Corpus Development; (2) Feature Space
Development; (3) Model Design and Implementation; and (4) Model Testing and
Validation.
Figure 2. SVM Research Methodology
Corpus Development
The current research task is concerned with two types of unstructured
documents of high and low word variation. To that end, the first group included two
subgroups: 300 correspondences and 150 meeting minutes. The second group
consisted of 25 claims and 300 DSC cases as its first and second subgroups,
respectively. The documents pertinent to
the correspondences, meeting minutes, and claims were gathered from a number of
projects that were performed around the world. However, the DSC cases were
gathered from the Federal Court in New York due to the abundance of available cases.
They were compiled using LexisNexis, a web legal retrieval system.
Feature Space Development
Under the current research, a feature space is developed for each subgroup of
the utilized documents. Although each document implicitly includes the required
knowledge, in the form of words and phrases, to perform the classification analysis, it
also includes textual representations that are not related to the topic. Including these
words in the analysis hinders the performance of the SVM classifier. As a
consequence, an initial preparation step is needed. This step will include data
cleaning, data integration, and data reduction (Ng et al. 2006). For illustration, the
textual representation of a document might include frequent words that carry no
meaning, misspelled words, and inconsistent data. While data processing is
performed on each textual case representation separately, data integration is
performed over the entire dataset. In this step, the entire processed dataset is stored in
a coherent manner that facilitates their use for further analysis. While the integrated
data might be very large, data reduction can decrease the data size by aggregating and
eliminating redundant features. To perform the aforementioned sub-steps, an
algorithm was developed and implemented in C++. The steps implemented by the
algorithm are as follows: (1) Extract all words in a document; (2) Eliminate non-
content-bearing words, also known as stopwords (Scherer and Reul, 2000); (3)
Reduce each word to its “root” or “stem” eliminating plurals, tenses, prefixes, and
suffixes; (4) For each document, count the number of occurrences of each word; and
(5) Eliminate low frequency words (Salton, and Buckley, 1991). Low frequency
words are those that were repeated fewer than 3 times in a document. The output of
this algorithm is w unique words remaining in d unique documents; a
unique identifier between 1 and w is assigned to each remaining word, and a unique
identifier between 1 and d to each document, resulting in a term-frequency (tf) matrix.
However, mere representation of significant words in the form of (tf) is not sufficient
to accurately extract the required knowledge from the document corpus. For example,
a word like “construction” might exist in all processed documents in high (tf).
However, a decision must be made about whether this word would help assign the
topic to a specific subject or not. Consequently, an appropriate weighting mechanism
must be implemented to create a representative matrix of these documents within the
entire dataset. Literature in the field of ML illustrated the effectiveness of alternate
term weighting schemes like logarithmic term frequency (ltf), augmented weighted
term frequency (atf), and term frequency inverse document frequency (tf.idf)
(equation 3) (Manning and Schütze, 1999). Earlier research performed by the
author illustrated the superiority of the tf.idf weighting scheme (Mahfouz and Kandil
2010a, b, and c). As a result, tf.idf was adopted for the current research. The
developed C++ algorithm implements the required calculations to formulate the final
matrix of each set of documents.
tf.idf(i, j) = tf(i, j) × log(d / df(i)) (3)
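Equation 3 is shown above in its standard form, with tf(i, j) the frequency of word i in document j, d the total number of documents, and df(i) the number of documents containing word i; the exact variant used in this study is not reproduced here. As an illustrative sketch only, the preprocessing and weighting steps could be written with scikit-learn and NLTK rather than the C++ implementation used in the research (documents is a hypothetical list of raw texts, and min_df only approximates the per-document frequency rule stated above):

from nltk.corpus import stopwords          # requires NLTK's stopword corpus
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer

stemmer = PorterStemmer()
stop = set(stopwords.words("english"))

def tokenize(text):
    # Steps 1-3: extract words, drop non-content-bearing words, reduce words to stems.
    return [stemmer.stem(t) for t in text.lower().split()
            if t.isalpha() and t not in stop]

# Steps 4-5 and the tf.idf weighting of equation 3.
vectorizer = TfidfVectorizer(tokenizer=tokenize, lowercase=False, min_df=3)
X = vectorizer.fit_transform(documents)    # term-document tf.idf matrix
terms = vectorizer.get_feature_names_out()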
Model Design and Implementation
The proposed research methodology developed and compared the outputs of
16 SVM models as follows (1) four 1st degree polynomial kernel SVM models; (2)
four 2nd degree polynomial kernel SVM models; (3) four 3rd degree polynomial kernel
SVM models; and (4) four Radial Base Function (RBF) SVM models. Validation of
the best developed model was based on prediction accuracy. Since the analysis is
aiming at automatically classifying each document to a specific topic, each model
was developed as a multi-class classifier. In other words, each document is tagged with
a known topic. In the training stage, the SVM classifier learns the latent relation
between the existing word matrix and the tagged topic. The learning process is
performed on a 10 fold cross validation mechanism. For more elaboration, the set of
training data is divided into 10% and 90% portions in each fold. The model is trained
on the 90% and tested on the other 10% cases. The process is done in an iterative
manner until the model is trained and tested over the whole set of cases. The
prediction accuracy of the model is computed as the average accuracy attained
among all folds and the Kappa as the measure of agreement between all folds.
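A compact sketch of this model design using scikit-learn (again only illustrative; X is the tf.idf matrix from the sketch above and y the manually tagged topics, both assumed available, and the kernel hyper-parameters are left at their defaults):

from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

models = {
    "1st degree polynomial": SVC(kernel="poly", degree=1),
    "2nd degree polynomial": SVC(kernel="poly", degree=2),
    "3rd degree polynomial": SVC(kernel="poly", degree=3),
    "RBF":                   SVC(kernel="rbf"),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)   # 10-fold cross validation
    print(name, "mean accuracy:", scores.mean())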
Model Testing and Validation
Each of the developed SVM models is tested with a previously unseen set of
documents from each subgroup. The testing and validation is performed in 3 steps.
First, the new documents are converted to tf.idf matrix as if they have been part of the
original training corpus. Such step is essential to allow for accurate representation of
the documents in the developed feature spaces. A C++ algorithm was developed to
perform this step. Second, the trained models are run utilizing the newly developed
matrices. Third, automated assignment of topics, or prediction, is reported by the
models. Similar to the LSA testing and validation, the outputs of the models are
compared to manual tagging of the newly introduced set of documents for each
group.
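In code terms (continuing the hypothetical sketches above), the unseen documents are mapped with the already-fitted vectorizer rather than re-fitted, so that they share the training feature space:

model.fit(X, y)                                # train on the full training corpus
X_new = vectorizer.transform(new_documents)    # new_documents: hypothetical held-out texts
predicted_topics = model.predict(X_new)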
RESULTS AND DISCUSSION
The outcomes of the implementation of the aforementioned methodology are
illustrated in table 1. The discussion of the attained results in the following sections
of the paper is twofold. The first relates to defining the complexity of the problem in
hand based on understanding human performance in similar situations. The second
compares the prediction accuracy of the developed SVM models to that of humans
and derives their strengths and weaknesses.
Golden Standard
The first step under this subtask was to establish a Golden Standard of human
agreement to which the performance of the developed model is to be compared. To
that end, a set of 8 volunteers comprised of Assistant Professors, graduate students,
and undergraduate students in construction engineering and management programs
were utilized to set the base level of human agreement. It was assumed that by virtue
of the occupations of the participating volunteers, they possess enough knowledge
about construction practices and documents to be valid selectors. Each volunteer was
provided with a set of documents from each subgroup and asked to classify them
according to similarities under related topics of his\her determination. A document is
considered to be classified correctly under a specific topic if three or more persons
agreed on the document’s topic (Mahfouz, 2010a). The average agreements between
participating members with regard to each set of documents are illustrated in the third
column of table 1. It is evident from the table that the lowest human agreements
were attained in relation to meeting minutes and claims. This is attributed to
the fact that these documents usually comprise a set of aspects that cannot
be defined under a specific title. A construction claim for example might include
different causes of disputes that might not be related in nature.
SVM Prediction Accuracy
The last four columns of table 1 illustrate the averages of the attained results
of the developed SVM models. A closer examination of the results shows that the
performance of the SVM models was consistent with average human agreement. The
highest predictions were achieved in relation to correspondences and the lowest in
relation to meeting minutes. This pattern could be attributed to the complexity of the
analyzed documents, as mentioned earlier. The overall performance of the models is attributed to the
computational capacity of SVM classifiers. Support Vector Machine (SVM) is a
state-of-the-art classification and regression algorithm, which implements strong
regularization techniques, that is, the optimization procedure maximizes predictive
accuracy while automatically avoiding over-fitting of the training data (Cannon et al.
2007). Furthermore, the transformation of the data into a higher dimension space
through Kernel estimation provides the strength of the SVM model in solving this
complex problem. On the other hand, the analysis utilizes sets of documents ranging
between 25 and 300 documents while considering a large number of features,
reaching more than 2,000 terms. The fact that the number of cases is less than twice
the number of features deteriorates the learning performance of the SVM.
Table 1: Golden Baseline of Human Agreement and SVM Results

                              Average          1st Degree    2nd Degree    3rd Degree    Radial Base
        Document Type         Agreement        Poly. Kernel  Poly. Kernel  Poly. Kernel  Function (RBF)
                              Between Humans   SVM           SVM           SVM           SVM
Group 1 Correspondence        84%              89%           91%           87%           87%
        Meeting Minutes       71%              72%           78%           72%           72%
Group 2 Claims                71%              74%           79%           72%           72%
        DSC Cases             80%              83%           83%           79%           79%
CONCLUSION
The paper proposed a methodology for automating document classification for the
construction industry through SVM. To that end, 16 SVM models were developed out
of which the models with the best prediction accuracy were adopted. The utilized set
of documents for model development, testing, and validation included 300
correspondences, 150 meeting minutes, 25 claims, and 300 DSC cases. The outcomes
of this research highlight the following:
• The task at hand is a complex research task. Human agreements about document
classification to specific topic ranged between 97% and 89%.
• The attained results of the SVM models were consistent with human agreements.
• SVM prediction accuracy ranged between 91% and 83%.
• Due to the complexity of the task, 3rd polynomial degree kernel SVM model is
the most suitable one.
The current research task has built on previously performed work using SVM.
However, it was able to develop a comprehensive methodology that addresses
different types of unstructured construction documents. When compared to human
agreement measures, the developed models achieved, for the first time in a similar
research task, comparable results. The outcomes discussed within the body of the paper
illustrate the potential of SVM techniques to be adopted for automated document
classification. It is conjectured that this research line will help in relieving the
negative consequences associated with the lengthy analysis and classification of
documents in the construction industry.
REFERENCES
Caldas, C. H., and Soibelman, L. (2003). “Automating hierarchical document
classification for construction management information systems.” Autom. in
Const., 12(4), 395-406.
Caldas, C. H., Soibelman, L., and Gasser, L. (2005). “Methodology for the integration
of project documents in model-based information systems.” J. of Comput. in Civ.
Eng., 19(1), 25-33.
Caldas, C. H., Soibelman, L., and Han, J. (2002). “Automated classification of
construction project documents.” J. of Comput. in Civ. Eng., 16(4), 234-243.
Cannon, E. O., Amini, A., Bender, A., Sternberg, M. J. E., Muggleton, S. H., Glen, R.
C., and Mitchel, J. B. O. (2007). “Support vector inductive logic programming
outperforms the Naïve Bayes classifier and inductive logic programming for the
classification of bioactive chemical compounds.” J. of Comput Aided Mol., 21,
269-280.
Ioannou, P. G., and Liu, L. Y. (1993). “Advanced construction technology system—
ACTS.” J. of Const. Eng. and Manag., 119(2), 288-306.
Labidi, S. (1997). “Managing multi-expertise design of effective cooperative
knowledge-based system.” Proc., 1997 IEEE Knowledge & Data Engineering
Exchange Workshop, IEEE, Piscataway, NJ, 10-18.
Mahfouz, T., and Kandil, A. (2010a). “Unstructured construction document
classification model through latent semantic analysis (LSA).” Proceeding of the
27th International Conference on Applications of IT in the AEC Industry (CIB-
W78 2010), Cairo, Egypt.
Mahfouz, T., and Kandil, A. (2010b). “Construction legal decision support using
support vector machine (SVM).” Proc. of the CRC 2010: Innovation for
Reshaping Construction), Banff, Canada.
Mahfouz, T., and Kandil, A. (2010c). “Automated outcome prediction model for
differing site conditions through support vector machines.” Proc. of the ICCCBE
2010, Nottingham, United Kingdom.
Mangasarian, L. and Musicant, D. (1999). “Massive support vector machine
regression.” NIPS*99 Workshop on Learning with Support Vectors: Theory and
Applications, December 3, 1999.
Manning, C. and Schütze, H. (1999). “Foundations of statistical natural language
processing.” Cambridge: MIT Press.
Ng, H. S., Toukourou, A., and Soibelman, L. (2006). “Knowledge discovery in a
facility condition assessment database using text clustering.” J. of Comput. in Civ.
Eng., 12(1), 50-59.
Salton, G., and Buckley, C. (1991). “Automatic text structuring and retrieval –
experiment in automatic encyclopedia searching.” Proc. of the 14th Annual
International ACM SIGIR Conference on Research and Development in
information Retrieval, 21-30.
Scherer, R. J., and Reul, S. (2000). “Retrieval of project knowledge from
heterogeneous AEC documents.” Proc. of the Eight International Conference on
Computer in Civil and Building Engineering, Palo Alto, Calif., 812-819.
US Census Bureau, < http://www.census.gov/const/www/c30index.html> (Accessed
2010).
Automatic Look-Ahead Schedule Generation System for the Finishing Phase of
Complex Projects for General Contractors
N. Dong1, M. Fischer2, Z. Haddad3
1 Ph.D. student, Center for Integrated Facility Engineering (CIFE), Dept. of Civil and Environmental Engineering, Stanford University, Y2E2 Building, 473 Via Ortega, Room 292, Stanford, CA 94305, United States of America, Phone: +1-650-391-5599, E-mail: ningdong@stanford.edu
2 Professor, Center for Integrated Facility Engineering (CIFE), Dept. of Civil and Environmental Engineering, Stanford University, Y2E2 Building, 473 Via Ortega, Room 292, Stanford, CA 94305, United States of America, Phone: +1-650-725-4649, Fax: +1-650-723-4806, E-mail: fischer@stanford.edu
3 VP, Corporate Affairs & CIO, Consolidated Contractors Company (CCC), 62B Kifissias Ave., Amaroussion, Athens, Greece 15125, Phone: +30-210-618-2162, E-mail: zuhair@ccc.gr
ABSTRACT
Generating a Look-Ahead Schedule (LAS) manually or semi-manually using
existing tools for complex projects in the finishing phase is very difficult, time-
consuming and error-prone. Moreover, because of the vast number of alternatives in the
crew assignment process, whether a given LAS is the best, in terms of project duration or
cost, is unknown. This paper proposes an LAS generation system to automate the
LAS generation process. Based on field research and investigations, this paper first
proposes an information model to integrate various sources of project data (e.g.,
product, process and organization related data) to get ready for LAS generation. An
automatic schedule generation method is then described to simulate the LAS
generation process and produce LAS outputs. Finally, a prototype is developed based
on the information model and the schema.
INTRODUCTION
Construction monitoring and control over complex projects in the finishing
phase is always challenging to construction managers, project planners and site
engineers, as multiple disciplines are involved and the general contractor (GC) needs to coordinate its own
crews as well as various subcontractors’ work. Look-Ahead Schedule (LAS) can
clarify which crew (who) is working on what activity at which location (where) on
which day (when), making everybody stand on the same ground. Therefore, it is a
useful tool to facilitate monitoring and control in complex projects in which multiple
crews need to work at multiple locations (rooms).
A “good” LAS for the finishing phase for a complex project should take into
account factors including work content and work sequence in different types of
rooms, priorities of rooms and activities, effective crew formation from sharable
skilled workers, their availability, actual progress from the job site, etc. It cannot be
used to guide field people’s work unless it considers the above factors.
Unfortunately, such LAS is not widely used in the finishing phase of complex
projects. The reasons are multi-fold. First, no existing tool helps site engineers
consider both spatial resources (i.e., rooms) and crew resources at the same
time to avoid work conflicts, in addition to the progress updates and activity dependencies they
need to track. A second challenge is considering crew resource availability
when dynamic crew formation from sharable skilled workers is allowed. For
example, two carpenters are needed to install door frames and panels in one room;
but when they do not have doors to install, they could be assigned to work on wood
skirting activity which requires 3 carpenters. This paper aims to create a solution to
fully address these challenges and get ready for schedule optimization.
SCHEDULE GENERATION METHOD
Critical path method (CPM) is the commonly used network technique for
scheduling both repetitive and non-repetitive projects (Clough and Sears 1991;
O’Brien and Plotnick 2005). However, using CPM to schedule a project with a large
number of activities while considering assigning crew resource to each individual
activity is time consuming and extremely difficult to maintain (Reda 1990). Line of
balance (LOB) is another popular network-based method for scheduling for repetitive
projects (Carr and Meyer 1974; Johnston 1981; Arditi and Albulak 1986). However,
its effectiveness in non-repetitive projects is not well documented and remains
unclear.
Through the use of fragnets, CPM tools (e.g., Primavera and Microsoft Project)
provide a level of automation to project planners to allow them to define activity sub-
network (i.e., activity template) to be reused over time. In the finishing phase, a
fragnet can represent a specific type of room with a unique work sequence. However,
fragnet itself does not address problems such as activity definition and its relation
with resource, activity dependencies in the finishing phase and automatic activity
duration calculation. Much prior literature has discussed automated schedule
generation methods considering activity definition, activity dependencies and crew
resource allocation in the schedule generation process as well (Darwiche et al., 1988;
Waugh 1990; Echeverry et al., 1991; Yau et al. 1991; Winstanley et al. 1993; Dzeng
and Tommelein 1995; Thabet and Beliveau 1997; Aalami 1998; Chevallier and
Russell 1998; Kanit et al. 2009), but none fully addressed problems such as the
consideration of spatial and crew resources at the same time, the multiple
perspectives of a room (i.e., a type of resource but also an instance of a fragnet from the
process perspective), dynamic crew formation from sharable skilled workers, and
schedule generation based on progress updates.
INFORMATION MODEL FOR LAS GENERATION (IMLASG)
The automation of schedule generation requires inputs from various project
databases. Therefore, we need an information model to effectively retrieve and
integrate data. In the finishing phase, room is a special element as it carries product,
process and organization (POP, Kam and Fischer, 2004) features, all of which need to
be taken into account. Table 1 lists these POP features related with a room.
Table 1. Product, Process and Organization (POP) features of a room when considering scheduling.

Product:
- Room: room components, BOQ, quantity, unit

Process:
- Fixed process (fragnet): operation
- Dynamic process (room profile): room available date (the date the first finishing operation can start), current operation, days left for the current operation, finished operation list

Organization:
- Foreman required
- Crew (productivity)
- Skilled workers required
In table 1, the term operation is used to describe activities related with a
specific room. The same naming convention applies to the rest of this paper. We view
process feature of room from two aspects – fixed aspect (i.e., fragnet) and dynamic
aspect (i.e., room profile). Fragnet corresponds to a certain type of room (with a
particular functionality). For example, a university mid-rise building can comprise
room types such as lecture hall, computer classroom, rest room, electrical room, IDF
room, plant room, and etc. Each type corresponds to a unique finishing work
sequence mainly because of the specific finishing materials and construction methods
used in the room. We use fragnet to define the work sequence inside a type of room
including operation and dependencies. Once defined, fragnet is fixed in a project and
can be used multiple times. However certain process related inputs are dynamic when
the project is started. That is why we need to have a dynamic process view (i.e., room
profile in table 1) to be able to simulate site engineers’ daily routine in checking a
room’s available date, current operation, finished operations, and etc.

Besides the above key features regarding room, resource is an important

predecessors are finished. From the spatial perspective, we need to know whether a
room is ready for an operation to start even if it has enough crew resource. In this
paper, we assume that all rooms are independent - operations going on in one room
have no effect on other rooms except for holding the desired crew resources. In practice,
certain operations in a particular type of room can affect operations in other rooms.
According to the discussions above, we summarize the inputs required for
schedule generation in the finishing phase in table 2.
Table 2. Inputs required for schedule generation in the finishing phase.

Input Type                      Specific Items                                            Source of Input
Product related inputs          BOQ; quantity; unit                                       Project surveying department
Fragnet related inputs          Operation; dependency                                     Construction manager, site engineers, planners
Room related inputs             Room ID; room finishing available date; room priority    Project engineering department, site engineers
Crew related inputs             Crew composition; required skilled worker type;          Foremen and site engineers
                                related operations; related BOQ; crew productivity rate
Skilled worker related inputs   Skilled worker type; worker ID                            Foremen
Duration input                  Operation duration that cannot be calculated directly    Foremen and site engineers
Table 2 only covers part of the dynamic process features of room listed in
table 1. The rest of the features are not scheduling inputs but must be recorded in the
schedule generation process to facilitate space and crew allocation. Such features
combined with the inputs from table 2 form the data framework of the scheduling
information model shown in Figure 1.
(Figure 1 links six entities: Quantity in Room (room ID, BOQ, quantity, unit); Room Profile (room ID, fragnet ID, room available date, room priority, current operation ID, days left, previous operation list); Fragnet (fragnet ID, operation ID, operation name, operation property, dependency); Progress Update (room ID, operation ID, assigned crew/worker IDs, expected operation duration, actual start date, actual finish date, days left); Crew Productivity (composition, required worker type, fragnet ID, operation ID, BOQ, productivity rate); and Skilled Worker Pool (worker type, worker ID, worker available date).)
Figure 1. Information model for LAS generation (Haddad et al., 2009).
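For illustration, the core entities of Figure 1 could be represented as simple record types; the sketch below uses Python dataclasses with field names following the figure, whereas the actual prototype described later is implemented in C#.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RoomProfile:
    room_id: str
    fragnet_id: str
    room_available_date: int                # day the first finishing operation can start
    room_priority: int
    current_operation_id: Optional[str] = None
    days_left: int = 0
    previous_operations: List[str] = field(default_factory=list)

@dataclass
class FragnetOperation:
    fragnet_id: str
    operation_id: str
    operation_name: str
    dependencies: List[str] = field(default_factory=list)   # successor operation IDs

@dataclass
class CrewProductivity:
    composition: List[str]                  # required skilled-worker types
    fragnet_id: str
    operation_id: str
    boq: str
    productivity_rate: float                # quantity completed per crew-day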
AUTOMATIC LAS GENERATION METHOD (ALASGM)
The proposed Automatic LAS Generation Method (ALASGM) aims to
generate schedules considering the availability of both spatial and crew resources.
It extends the ACP method (Waugh 1990), which only takes crew resources into
account. ALASGM simulates the site engineer’s daily routine in assigning certain
crew resource to certain rooms. For each date D, ALASGM checks all the available
rooms first. When a room is available, it checks whether any operation is ready (all
its predecessors are finished) in this room. If an operation is ready, it allocates crew
to this operation, calculates/retrieves the duration, and writes the operation to the LAS. If
any other operation can start at the same time (according to the operation property in
IMLASG), ALASGM repeats the same steps for such operation. When all ready
operations in this room are properly handled, ALASGM continues to check the next
room. When all rooms are handled, it releases the room and crew resources for all
operations already in the LAS which are to be finished by the end of Date D. When
ALASGM finishes this process on Date D, it rolls to Date D+1. It repeats such
processing and rolling until all operations for all rooms are finished, thereby
creating the LAS. This method can also be applied to the scenario where a GC needs to
manage multiple projects while allowing certain resources (e.g., engineers,
equipment, etc.) to be shared among these projects. Each project can be handled as a
room. IMLASG is queried throughout the whole ALASGM process whenever a link
is required between two data types. For example, the link between fragnet and crew
productivity and the link between crew productivity and skilled worker pool are used
in IMLASG when checking whether there are enough skilled workers to form a crew
for a specific operation in a room.
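The daily routine described above can be summarized in a simplified, self-contained sketch (hypothetical data structures; it assumes strictly sequential operations per room and a single skilled-worker type per operation, and it omits progress updates):

def generate_las(rooms, fragnets, durations, crews_needed, workers):
    """rooms: {room_id: fragnet_id}; fragnets: {fragnet_id: [operation, ...]} in sequence;
    durations: {(fragnet_id, op): days}; crews_needed: {(fragnet_id, op): (skill, count)};
    workers: {skill: available count}."""
    state = {r: {"next": 0, "busy_until": 0} for r in rooms}   # per-room progress
    busy = []                                                  # (finish_day, skill, count)
    schedule, day = [], 0
    while any(s["next"] < len(fragnets[rooms[r]]) for r, s in state.items()):
        # Release crews whose operations finish by the start of Date D.
        for finished in [b for b in busy if b[0] <= day]:
            workers[finished[1]] += finished[2]
            busy.remove(finished)
        for r, s in state.items():                             # check all rooms
            ops = fragnets[rooms[r]]
            if s["next"] >= len(ops) or s["busy_until"] > day:
                continue                                       # room finished or occupied
            op = ops[s["next"]]
            skill, count = crews_needed[(rooms[r], op)]
            if workers[skill] < count:
                continue                                       # not enough skilled workers
            workers[skill] -= count                            # allocate the crew
            dur = durations[(rooms[r], op)]
            busy.append((day + dur, skill, count))
            schedule.append((day, r, op, skill, count, dur))   # write operation to the LAS
            s["next"] += 1
            s["busy_until"] = day + dur
        day += 1                                               # roll to Date D+1
    return schedule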
SIX-ROOM EXAMPLE AND SIMULATION RESULT
This example represents a very small portion of a complex non-repetitive
project in the finishing phase. Despite its simplicity, it can represent the scheduling
problem for the finishing phase. There are three types of rooms in this example –
electrical room, IDF room and plant room, each room having a unique work sequence
requiring multiple types of skilled workers and crews. Therefore, there are three
fragnets in this example.
Six-Room Example In The Finishing Phase.
Table 3 summarizes the fragnets and related operations of this case study.
Although certain fragnets contain the same operation names, the work contents and
the related productivity rates are often different. For example, the “Electrical final
fix” in the ELE fragnet concentrates on finalizing the power boxes and panels while
in the IDF fragnet on data servers above the raised floor. The successor(s) of each
operation is also indicated in table 3.
Table 3. Three types of fragnets involved in the case study.

ELE Fragnet (room instances: ELE1, ELE2)
Self-performed operations:
E-1. Conduit & box → E-2
E-2. Plastering → E-3
E-3. Screed → E-4
E-4. Painting (first two coats) → E-5 & E-10
E-5. Electrical first fix → E-11
E-6. Painting (last coat) → E-7 & E-8
E-7. Epoxy floor → E-9
E-8. Electrical final fix → E-9
E-9. Doors & wood panels
Subcontracted operations:
E-10. Electrical second fix → E-6
E-11. HVAC & Firefighting → E-6

IDF Fragnet (room instances: IDF1, IDF2)
Self-performed operations:
I-1. Conduit & box → I-2
I-2. Plastering → I-3
I-3. Painting (first two coats) → I-4 & I-11
I-4. Electrical first fix → I-6
I-5. Painting (last coat) → I-7
I-6. Raised floor → I-9
I-7. Electrical final fix → I-8
I-8. Doors & wood panels
Subcontracted operations:
I-9. Electrical second fix → I-5
I-10. HVAC & Firefighting → I-6

PLT Fragnet (room instances: PLT1, PLT2)
Self-performed operations:
P-1. Conduit & box → P-2
P-2. Plastering → P-3
P-3. Screed → P-4
P-4. Painting (first three coats) → P-5 & P-12
P-5. Electrical first fix → P-11
P-6. Louvers → P-7
P-7. Painting (last coat) → P-8 & P-9
P-8. Epoxy floor → P-10
P-9. Electrical final fix → P-10
P-10. Doors
Subcontracted operations:
P-11. Electrical second fix → P-6
P-12. HVAC & Plumbing & Firefighting → P-6
Since most of the rooms are too small to allow multiple operations to proceed
at the same time, we only define finish-to-start relations between any two operations.
other words, the crew for the next operation cannot move into the same room until the
crew for the first operation finish their work. For two or more operations that can go
in parallel in a room, only one operation is allowed to proceed at a time.
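As an example of how such a fragnet can be captured as data, the ELE fragnet of Table 3 can be written as a simple successor map (an illustrative encoding, not the prototype's internal format):

# ELE fragnet of Table 3 as a successor map (operation -> successor operations).
ELE_FRAGNET = {
    "E-1": ["E-2"],            # Conduit & box
    "E-2": ["E-3"],            # Plastering
    "E-3": ["E-4"],            # Screed
    "E-4": ["E-5", "E-10"],    # Painting (first two coats)
    "E-5": ["E-11"],           # Electrical first fix
    "E-6": ["E-7", "E-8"],     # Painting (last coat)
    "E-7": ["E-9"],            # Epoxy floor
    "E-8": ["E-9"],            # Electrical final fix
    "E-9": [],                 # Doors & wood panels
    "E-10": ["E-6"],           # Electrical second fix (subcontracted)
    "E-11": ["E-6"],           # HVAC & Firefighting (subcontracted)
}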
The crew formation only focuses on sharable skilled workers considering they
are the most important part of the crew. The plastering and screed operations have
fixed crew formations in any fragnet. We treat these crews as single skilled workers
in scheduling. All subs’ crews are treated in the same way.
Prototype and Simulation Result.
A prototype is developed using C# to run simulations based on ALASGM
with project data organized using IMLASG. Figure 2 presents two different views of
a schedule created from the prototype after one simulation. Figure 2 (a) illustrates the
room centered view of a schedule with each row representing the work sequence in a
room. Each cell contains the skilled workers and the operation to take place in a room
on a specific day. For example, in room 1001 (where), on January 7, 2009 (when),
four electricians (who) are required to work on “Conduit & box” installation (what).
By rearranging the who-what-when-where elements, we get a worker-centered view
of a schedule, as demonstrated in Figure 2 (b), with each row representing the
operation and its location for a skilled worker to perform on a daily basis. A grey cell
in either of these schedules indicates a resource unit (room or skilled worker) is idle
on a specific day.
Figure 2. Two types of schedules created by the prototype.
Figure 3. Schedule distribution after 3,000 runs of the simulation.
Figure 3 shows the duration distribution when we run the simulation 3,000
times with the following assumptions: (1) no operation has started yet in any room,
(2) all rooms have the same priority, (3) all rooms only allow one operation to
proceed at a time and (4) the availability of skilled workers is only enough for one
operation in one room at a time. The computing time for this simulation is 136
seconds using a PC with Intel Core(TM)2 Duo CPU (2.53GHz).
CONCLUSION AND FUTURE STUDIES
The LAS generated by ALASGM can effectively guide site engineers' field
work and help avoid keeping everything in their heads, which causes work conflicts and rework.
They can also use the system to analyze resource utilization and determine when to
add/remove people from the job site to better achieve project goals such as shorter
project duration. Future work includes developing a user interface to allow the site
people to conveniently input progress updates, track field people’s work efficiency
and examine the scheduling result, using artificial intelligence to quickly discover
optimal solutions and consideration of project related constraints into the schedule
generation process.
REFERENCES
Aalami, F. (1998). “Using method models to generate 4D production models. ”
Doctoral dissertation, Dept. of Civil Engineering, Stanford Univ., Stanford,
Calif.
Arditi, D. and Albulak, M. Z. (1986). “Line-of-balance scheduling in pavement
construction.” J. Constr. Eng. Manage., 112(3), 411-24.
Carr, R. I., and Meyer, W. L. (1974). “Planning construction of repetitive building
units.” J. Constr. Div., Am. Soc. Civ. Eng., 100(3), 403–412.
Chevallier, N., and Russell, A. D. (1998). “Automated schedule generation.” Can. J.
Civ. Eng., 25, 1059–1077.
Clough, R. H., and Sears, G. A. (1991). “Construction project management.” John
Wiley & Sons, New York.
Darwiche, A., Levitt, R., and Hayes-Roth, B. (1988). "OARPLAN: generating project
plans by reasoning about objects, actions and resources." AI EDAM, 2(3), 169-
181.
Dzeng, R., and Tommelein, I. D. (1995). “Case-based scheduling using product
models.” Proc., 2nd Congress on Computers in Civil Engineering, ASCE,
Atlanta, Ga., 163–170.
Echeverry, D., Ibbs, C. W., and Kim, S. (1991). “Sequencing knowledge for
construction scheduling.” J. Constr. Eng. Manage., 117(1), 118–130.
Haddad, Z. Dong, N., and Fischer, M. (2009). Oral and written communications.
Johnston, D. W. (1981). ‘‘Linear scheduling method for highway construction.’’ J.
Constr. Div., Am. Soc. Civ. Eng., 107(2), 247–261.
Kam, C., and Fischer, M. (2004). “Capitalizing on early project decision-making
opportunities to improve facility design, construction, and life-cycle
performance—POP, PM4D, and decision dashboard approaches.” Autom.
Constr., 13(1), 53-65.
O’Brien, J.J., and Plotnick, F. L. (2005). “CPM in construction management.”, 6th
Edition, McGraw-Hill, Inc., New York.
Reda, R. (1990). “RPM: Repetitive project modeling.” Journal of Construction
Engineering and Management, 116(2), 316–330.
Thabet, W. Y., and Beliveau, Y. J. (1997). ‘‘SCaRC: space-constrained resource-
constrained scheduling system.’’ J. Comput. Civ. Eng., 11(1), 48–59.
Waugh, L. (1990). “A construction planner.” Doctoral dissertation, Department of
Civil and Environmental Engineering, Stanford University, Stanford, Calif.
Winstanley, G., Chacon, M. A., and Levitt, R. E. (1993). “Model-based planning:
scaled-up construction application”. J. Comput. Civ. Eng., 7(2), 199–217.
Yau, N., Garrett, J. H., and Kim, S. (1991). “Integrating the processes of design,
scheduling, and cost estimating within an object-oriented environment.” Proc.,
Construction Congress on Preparing for Construction in the 21st Century,
ASCE, Cambridge, Massachusetts, 342–347.
Sustainable Construction Ontology Development Using Information Retrieval
Techniques
Yacine Rezgui1 and Adam Marks2
1 Cardiff School of Engineering, Cardiff University, Cardiff CF24 3AA, UK; PH (44) 2920 875719; Fax (44) 2920 874716; email: RezguiY@cardiff.ac.uk
2 Embry-Riddle Aeronautical University, 600 S Clyde Morris BLVD, Daytona Beach, FL, 32114, USA; PH (407) 256 5156; Email: Marksa@erau.edu
ABSTRACT
The paper describes the “SCrIPt” methodology used to develop a sustainable
construction domain ontology, taking into account the wealth of existing semantic
resources in the construction sector. The latter range from construction taxonomies
(e.g. IFCs) to energy calculation tools’ internal data structures (i.e. conceptual
database schema). The paper argues that taxonomies provide an ideal backbone for
any ontology project. Equally, textual documents have an important role in sharing
and conveying knowledge and understanding. Therefore, a construction industry
standard taxonomy is used to provide the seeds of the ontology, enriched and
expanded with additional concepts extracted from large construction sustainability
and energy oriented document bases using information retrieval (tf-idf and Metric
Clusters) techniques. The SCrIPt ontology will be used as the semantic engine for a
Sustainability Construction Platform, commissioned by the Welsh Assembly
Government in the UK.

INTRODUCTION

The overwhelming volume of knowledge to which designers are exposed, coupled
with the increasing sophistication of buildings that have to conform to new
regulations, including those for sustainable construction, renders the design process
ever more complex, with designers having to adapt to, and solve, multi-disciplinary
and multi-dimensional design situations (Wetherill et al., 2007; Rezgui and Zarli, 2006).
of expertise already exists in detailing and constructing low-energy buildings, much
of this expertise is fragmented and exists in various forms, with no real systematic
means or mechanisms to assist designers in their sustainable construction decision-
making (Wetherill et al, 2007). In this context, construction stakeholders involved in
new or refurbishment projects are faced with: (a) complex legislation related to
sustainable construction, (b) a plethora of overlapping commercial tools supporting
the process of delivering sustainable buildings, (c) numerous guidelines and
documentation, (d) an increasingly rigorous energy certification process, and (e) lack
of clarity on types of financial assistance and eligibility criteria (Lowe and
Oreszczyna, 2008; Rezgui and Miles, 2010). Sustainable construction is multi-
disciplinary (i.e. concerns various specialities). It involves architectural, engineering,

construction, management, and social sciences applied to the lifecycle of a building
project from concept design to demolition / recycling.
The consultation led by one of the authors (Rezgui et al., 2010) reveals the
complexity of the subject, exacerbated by the existence of a variety of overlapping
and fragmented resources produced and maintained by users, ad-hoc communities
(e.g. through the use of Wikis), organisations, government authorities, and official
institutions such as BRE and the Carbon Trust in the UK.
The SCrIPt project aims to create a circle of impacts that binds building
professionals, energy administrations, and citizens (including home owners and
tenants) in a shared sustainable construction experience through a state-of-the-art
Sustainable Construction Service Platform (hereafter referred to as SCrIPt). The
platform assists and improves the capacity of building professionals to offer effective
energy and sustainable construction solutions and increases demand for such
solutions, while at the same time fostering the adoption of energy-reducing use
patterns in buildings.
One of the objectives of SCrIPt is to deliver a sustainable construction
ontology which will be used to develop a wide range of sustainability related services
(Rezgui et al., 2010). The paper describes the preliminary work aimed at the
development of the SCrIPt sustainable construction platform, with a focus on the
ontology that underpins the SCrIPt knowledge services. Following the introduction,
the paper gives an overview of the semantic resources that inform the development of
the sustainable construction ontology, followed by the proposed methodology for the
development of the ontology. The paper then summarizes the techniques used for
ontology development, including concept and relationship identification, using
information retrieval techniques. Finally, the paper provides concluding remarks and
directions for future work.

INFORMATION SOURCES FOR ONTOLOGY DEVELOPMENT

A consultation involving key stakeholders from the construction sector in
Wales (UK) was organized to capture key requirements for the SCrIPt platform and
identify potential semantic resources to serve the development of the ontology. The
consultation reinforced the need for a sustainable construction ontology to inform and
underpin the services of the SCrIPT platform. In fact, this was referred to by the
workshop participants as the “Map of Everything” emphasizing the need for a
common semantic referential in sustainable construction.
Also, the consultation helped identify sustainability knowledge sources that have
informed the development of the ontology, as summarized below:
• Sustainable construction Official Resources: this forms the overall
sustainable construction publicly available information and knowledge. It
includes administrative information (e.g. regulations, planning permission),
standards, technical rules, product databases, etc. This information is, in
principle, available to all companies, and is partly stored in electronic
databases.
• Sustainable construction Proprietary Resources: this is company specific,
and forms the intellectual capital of innovative construction firms. These
reside both formally in company records and informally through the skilled
processes of the firm. These also relate to knowledge about the personal skills,
sustainable construction project experience of the employees and cross-
organizational knowledge. The latter covers sustainable construction
knowledge nurtured through collaborative relationships with other partners,
including clients, architects, engineering companies, and contractors.
• Sustainable construction Practical Knowledge: this is knowledge acquired
by individuals through practice drawing from the two above categories of
knowledge. This exists in a tacit form and in several instances is codified but
mainly available from users’ computers, hence, not shared by others.
• Commercial Sustainable construction Knowledge: this knowledge is
formalized and conceptualized by software vendors through their commercial
software solutions. This can only be accessible through the functionality
exposed via their software.

As to existing services, the consultation revealed the co-existence of several
sustainable construction (related) solutions. These can be summarized into:
• Sustainable construction Third party services: these are commercial low
carbon and building energy calculation tools (including energy compliance
tools).
• Proprietary Portal Solutions: these are corporate portals developed by
construction companies with a view to maintaining their own corporate
knowledge.
The ontology development factors in the above sustainable construction
knowledge categories and third party services.

Figure 1. SCrIPt Conceptual Framework.


ONTOLOGY DEVELOPMENT METHODOLOGY

The ontology is being developed incrementally, in a collaborative way,
involving representatives from the various disciplines, in order to promote ontological
commitments and provide a mechanism whereby stakeholders share and exchange
their perspectives and expertise (Hahn and Schulz, 2003; Rezgui, 2007). Textual
documents have an important role in sharing and conveying knowledge and
understanding. The methodology, illustrated in Figure 2, comprises the following
stages: domain scoping, ontology architecture definition, candidate semantic
resources selection, ontology modules development, ontology testing and validation,
and ontology maintenance. For pragmatic reasons, these stages are grouped into four
main phases and described as such in the following sections: Phase 1 (Domain
scoping and architecture definition), Phase 2 (Candidate semantic resources
identification), Phase 3 (Ontology modules construction), and Phase 4 (Ontology
validation and maintenance). The ontology conforms to an underlying knowledge
model involving concepts, attributes and relations, defined using OWL.

Figure 2. The various stages of the Methodology (adapted from Rezgui, 2007).

Figure 3. The Ontology Architecture (adapted from Rezgui, 2007).


The ontology is structured into a set of discrete, core and discipline-oriented,
sub-ontologies referred to as modules (Figure 3). Each module features a high
cohesion between its internal concepts while ensuring a high degree of
interoperability between them. These are organized into a layered architecture with,
at a high level of abstraction, the core ontology that holds a common
conceptualization of the whole construction domain enabled by a set of inter-related
generic core concepts forming the seeds of the ontology. These generic concepts
enable interoperability between specialized discipline-oriented modules defined at a
lower level of abstraction. This middle layer of the architecture provides discipline-
oriented conceptualizations of the construction domain. Concepts from these sub-
ontologies are linked with the core concepts by generalization / specialization
(commonly known as IS-A) relationships. The third and lowest level of the
architecture (Figure 3) represents all semantic resources currently available, as
described in the earlier section (Figure 2). The purpose of the research is to enrich
these ontology modules with sustainability concepts and relationships as described
hereafter.
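As a concrete illustration of the IS-A linking between the layers described above, the following minimal sketch (in Python, using the rdflib library) declares a discipline-module concept as a subclass of a core concept. The namespace URIs and concept names are hypothetical illustrations, not the actual SCrIPt vocabulary.

```python
# Minimal sketch of linking a discipline-module concept to a core-ontology concept
# via an IS-A (subclass) relationship. All URIs and concept names are hypothetical.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

CORE = Namespace("http://example.org/script/core#")      # core ontology layer (hypothetical)
ENERGY = Namespace("http://example.org/script/energy#")  # discipline module (hypothetical)

g = Graph()
g.bind("core", CORE)
g.bind("energy", ENERGY)

# Generic core concept shared across disciplines
g.add((CORE.BuildingElement, RDF.type, OWL.Class))

# Discipline-specific concept, linked to the core by generalization / specialization
g.add((ENERGY.ThermalZoneBoundary, RDF.type, OWL.Class))
g.add((ENERGY.ThermalZoneBoundary, RDFS.subClassOf, CORE.BuildingElement))

print(g.serialize(format="turtle"))
```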

CONCEPT AND RELATIONSHIP ENRICHMENT USING INFORMATION RETRIEVAL TECHNIQUES

The Industry Foundation Classes (IFC, 2010) play a pivotal role in the
representation and conceptualisation of a building. However, in their present form, the
IFCs cannot support building thermal analysis and sustainable construction design. The IFCs
need to be enhanced to support features (concepts, facets, and semantic relationships)
required by existing energy calculation, simulation, and compliance checking tools.
The IFCs are therefore used as a basis to develop the sustainable construction
ontology. In a nutshell, an ontology provides a stronger semantic expressiveness and
representation richness of a domain while remaining closer to user needs and
perceptions of the domain. Advantages of an ontology compared to a product model
are elaborated in (Rezgui et al., 2009).
Therefore, we propose to enhance and extend the latest specification of the
IFCs with sustainable construction constructs (concepts and facets) while embedding
the lifecycle dimension necessary to provide total lifecycle accounts of energy
consumption and carbon emissions of a building design. The IFCs can be classed as a
taxonomy, which provides an ideal backbone and the seeds of the sustainable
construction ontology. It is proposed in this task to enrich and expand the IFCs with
additional concepts and facets extracted from (a) a sustainable construction document
repository that includes over 200 sustainability reference documents, and (b) from the
data structures that underpin industry energy calculation, simulation, and compliance
checking tools.
The proposed approach makes use of established term frequency-inverse
document frequency (tf-idf) and metric clusters (Baeza-Yates and Ribeiro-Neto,
1999) techniques to identify relevant ontological concepts from the document base
and their relationships with concepts from the sustainable construction ontology
under development (Rezgui, 2007). In fact, for each identified concept and facet, it is
important to quantify the degree of importance (in terms of semantics) it has over not
only the document but also the entire gathered sustainable construction documentary
corpus. Equally, in order to assess the relevance of relationships between concepts, an
approach that factors the number of co-occurrences of concepts with their proximity
in the text is adopted. This is known as the ‘metric clusters’ method. This proceeds by
factoring the distance between two terms in the computation of their correlation
factor. The overall process will necessitate knowledge expert validation.
Expanding an ontology from index terms extracted from documents requires the
following operations: (a) Document Cleansing (the objective is to reduce the
document to a textual description that contains nouns and associations between nouns
that carry most of the document semantics), (b) Keyword Extraction (the objective is
to provide a logical view of a document through summarization via a set of
semantically relevant key words, referred to as index terms). The purpose is to
gradually move from a full text representation of the document to a higher-level
representation. In order to reduce the complexity of the text, as well as the resulting
computational costs, the index terms to be retained are all the nouns from the
cleansed text. The approach used combines nouns co-occurring with a null syntactic
distance (i.e. the number of words between the two nouns is null) into a single
indexing component, regardless of their collocation frequency. These are referred to
as noun groups (non-elementary index terms). The result of this step is a set of
elementary and non-elementary key words that are representative of the discipline
being conceptualized.
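The noun-group step can be illustrated with a short sketch. The part-of-speech tag set and the example sentence below are hypothetical, and the input is assumed to be already cleansed and tagged; this is not the actual SCrIPt implementation.

```python
# Illustrative sketch of the noun-group step: adjacent nouns (null syntactic distance,
# i.e. no words between them) are merged into a single non-elementary index term.
# Input is assumed to be cleansed and POS-tagged; the tag "N" (noun) is a made-up tag set.
def extract_index_terms(tagged_tokens):
    """tagged_tokens: list of (word, tag) pairs; returns elementary and
    non-elementary index terms (nouns and noun groups)."""
    terms, current_group = [], []
    for word, tag in tagged_tokens:
        if tag == "N":                      # noun: extend the current group
            current_group.append(word.lower())
        else:                               # non-noun: close any open group
            if current_group:
                terms.append(" ".join(current_group))
                current_group = []
    if current_group:
        terms.append(" ".join(current_group))
    return terms

tagged = [("the", "DET"), ("separation", "N"), ("wall", "N"),
          ("reduces", "V"), ("heat", "N"), ("loss", "N")]
print(extract_index_terms(tagged))   # ['separation wall', 'heat loss']
```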
The research employed two types of index term integration into the ontology:
concept level integration and syntactic level integration. Concept level integration
requires inference over the domain ontology to make a decision about integration of a
particular pair of concepts. Syntactical integration defines the rules in terms of class
and attribute names to be integrated. Such integration rules are conceptually blind but
are easy to implement and develop (Omelayenko, 2001). As highlighted in (Baeza-
Yates and Ribeiro-Neto, 1999) a glossary or thesaurus can provide a controlled
vocabulary for the extension of the ontology based on the identified key words. A
controlled vocabulary presents the advantage of normalised terms, the reduction of
noise, and the possibility of turning key words into concepts with clear semantic
meaning. The construction BS6100 glossary, which is also structured as a taxonomy,
is used. Moreover, for each identified key word, it is important to quantify the degree
of importance (in terms of semantics) it has over not only the document but also the
entire documentary corpus selected for the given discipline. The following formula,
known as “Term frequency-inverse document frequency” (tf-idf) (Baeza-Yates and
Ribeiro-Neto, 1999; Salton and Buckley, 1988), is used:

W_{i,j} = f_{i,j} \times idf_i                    (1)

Where Wi,j represents the quantified weight that a term ti has over the document dj; fi,j
represents the normalised occurrence of a term ti in a document dj, and is calculated
using Equation (2):

f_{i,j} = \frac{freq_{i,j}}{\max_{l \in d_j} freq_{l,j}}                    (2)

Where freqi,j represents the number of times the term ti is mentioned in document dj;
the denominator of Equation (2) is the maximum frequency over all terms that are
mentioned in the text of document dj; idfi represents the inverse of the frequency of a
term ti among the documents in the entire knowledge base, and is expressed as shown
in Equation (3):

idf_i = \log \frac{N}{n_i}                    (3)

Where N is the total number of documents in the knowledge base, and ni the number
of documents in which the term ti appears. The intuition behind the measure of Wi,j
is motivated by the fact that the best terms for inclusion in the ontology are those
featured in certain individual documents, capable of distinguishing them from the
remainder of the collection. This implies that the best terms should have high term
frequencies but low overall collection frequencies. The term importance is therefore
obtained by using the product of the term frequency and the inverse document
frequency (Salton and Buckley, 1988).
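The following short Python sketch computes the weights of Equations (1)-(3) directly from their definitions. The toy documents are invented; in SCrIPt the input would be the discipline-oriented sustainability documentary corpus.

```python
# Sketch of the tf-idf weighting in Equations (1)-(3). The toy documents are hypothetical.
import math
from collections import Counter

def tfidf_weights(documents):
    """documents: list of token lists; returns one {term: weight} dict per document."""
    N = len(documents)
    doc_counts = [Counter(doc) for doc in documents]
    # n_i: number of documents in which term t_i appears
    n = Counter()
    for counts in doc_counts:
        n.update(counts.keys())
    weights = []
    for counts in doc_counts:
        max_freq = max(counts.values())                    # max frequency over terms in d_j
        w = {}
        for term, freq in counts.items():
            f_ij = freq / max_freq                         # Equation (2)
            idf_i = math.log(N / n[term])                  # Equation (3)
            w[term] = f_ij * idf_i                         # Equation (1)
        weights.append(w)
    return weights

docs = [["thermal", "insulation", "wall"], ["wall", "wall", "opening"]]
print(tfidf_weights(docs))
```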
The next step involves building the relationships connecting the concepts,
including those that have not been retained in the previous stage. Concept
relationships can be induced by patterns of co-occurrence within documents. We
distinguish three main types of relationships: (a) Generalization / Specialization
Relationship (e.g., Wall can be specialised into separation wall, structural wall,
Loadbearing Separation Wall), (b) Composition / Aggregation Relationship (e.g.,
Door is an aggregation of a Frame, a Handle, etc), (c) Semantic relationship between
concepts (e.g., a Beam supports a Slab, and a Beam is supported by a Column).
The last two categories above are addressed in this step. The process is semi-
automated in that relations are first identified automatically. Contributions from
knowledge specialists are then requested to qualify and define the identified relations.
In order to assess the relevance of relationships between concepts, an approach that
factors the number of co-occurrences of concepts with their proximity in the text is
adopted. This is known as the “Metric Clusters” method (Baeza-Yates and Ribeiro-
Neto, 1999) (Equation 4). This proceeds by factoring the distance between two terms
in the computation of their correlation factor. The assumption is that terms which
occur in the same sentence are more correlated than terms that appear far apart.

C_{u,v} = \sum_{t_i \in V(S_u)} \sum_{t_j \in V(S_v)} \frac{1}{r(t_i, t_j)}                    (4)

The distance r(ti, tj) between two key words ti and tj is given by the number of
words between them in the same document. V(Su) and V(Sv) represent the sets of
keywords which have Su and Sv as their respective stems. In order to simplify the
correlation factor given in Equation 4, it was decided not to take into account the
different syntactic variations of concepts within the text, and instead use Equation 5,
where r(tu, tv) represents the minimum distance (in terms of the number of separating
words) between concepts tu and tv in any single document:

C_{u,v} = \frac{1}{\min[ r(t_u, t_v) ]}                    (5)
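A minimal sketch of the simplified correlation factor of Equation (5) is given below. The example tokens are invented, and adjacent terms are given a floor distance of one to avoid division by zero; neither choice is prescribed by the paper.

```python
# Sketch of the simplified metric-cluster correlation in Equation (5): the correlation
# of two concepts is the inverse of the minimum number of words separating their
# occurrences within any single document. The example tokens are hypothetical.
def metric_cluster_correlation(documents, term_u, term_v):
    """documents: list of token lists; returns C_{u,v} = 1 / min r(t_u, t_v),
    or 0.0 if the two terms never co-occur in a document."""
    min_distance = None
    for tokens in documents:
        pos_u = [i for i, t in enumerate(tokens) if t == term_u]
        pos_v = [i for i, t in enumerate(tokens) if t == term_v]
        for i in pos_u:
            for j in pos_v:
                r = abs(i - j) - 1          # words strictly between the two terms
                if min_distance is None or r < min_distance:
                    min_distance = r
    if min_distance is None:
        return 0.0
    return 1.0 / max(min_distance, 1)        # floor of 1 guards against adjacent terms

doc = ["the", "beam", "directly", "supports", "the", "slab"]
print(metric_cluster_correlation([doc], "beam", "slab"))   # 3 separating words -> 0.333
```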

The domain knowledge experts drawn from the SCrIPt project stakeholders
have the responsibility of validating the newly integrated index terms as well as their
given names, and then defining all the concept associations that do not belong to the
generalization / specialization category. First, these relationships are established at a
high level within the Core Ontology, and then subsequent efforts will establish
relationships at lower levels within the discipline ontologies. The use of discipline
documents to identify ontological concepts and relationships was revealed to be the
right strategy to construct the discipline sub-ontologies (Rezgui, 2007).

CONCLUSION

The paper presented initial research aimed at the development of a sustainable
construction ontology. A layered and modular approach is adopted to structure and
develop the ontology. This is justified by the fragmented nature of the construction
sector, organized into a variety of disciplines. The uniqueness of the methodology is
illustrated by the combination of the following distinctive features: (a) the modular
structure of the ontology: The ontology takes into account the fragmented nature of
the construction sector and its organization into established disciplines; (b) the
support for the multiple interpretations of concepts across disciplines; (c) the
collaborative nature of the ontology development process; (d) the iterative nature of
the ontology development process; (e) the ontology development approach: this is
semi-automated and relies on discipline-oriented documentary corpuses to identify
concepts and relationships using tf-idf and metric clusters techniques, which are then
validated by human experts. At the time of writing the paper, the ontology is still
under development using the techniques described in the paper. Once completed, the
final version of the sustainable construction ontology will be reported in a follow-on
publication.

REFERENCES

Baeza-Yates, R. and Ribeiro-Neto, B. (1999). “Modern Information Retrieval.” Addison Wesley.
Boddy, S., Rezgui, Y., Cooper, G., and Wetherill, M. (2007). “Computer Integrated
Construction: A Review and Proposals for Future Directions.” Advances in
Engineering Software, 38(10).
Hahn, U, Schulz, S. (2003). “Towards a Broad-Coverage Biomedical
Ontology Based on Description Logics” Pacific Symposium on
Biocomputing (8) pp. 577-588.
IFC (2010). “buildingSMART web site.” http://www.buildingsmart.com.
Lowe, R. and Oreszczyna, T. (2008) “Regulatory standards and barriers to
improved performance for housing”, Energy Policy, 36(12): 4475-
4481.
Omelayenko, B. (2001). “Syntactic-level ontology integration rules for e-commerce.”
Proc., 14th FLAIRS Conference (FLAIRS-2001), Key West, FL, May 21-23,
AAAI Press.
Rezgui, Y, Zarli, A. (2006). “Paving the Way to the Vision of Digital Construction: A
Strategic Roadmap”, Journal of Construction Engineering and Management,
132(7) pp. 767-776.
Rezgui, Y. (2007) “Text Based Domain Ontology Building Using tf-idf and
Metric Clusters techniques”, Knowledge Engineering Review
(Cambridge Press), 22(4): 379-403.
Rezgui Y, Wilson I, Miles J C, Hopfe C J. (2010) “Federating information
portals through an ontology-centered approach: A feasibility study”,
Advanced Engineering Informatics, 24(3), pp. 340-354.
Rezgui, Y., Boddy, S., Wetherill, M., and Cooper, G. (2009). “Past, present
and future of information and knowledge sharing in the construction
industry: towards semantic service-based e-construction?” Computer-Aided
Design (in press), doi:10.1016/j.cad.2009.06.005.
Salton, G and Buckley, C. (1988). “Term weighting approaches in automatic
retrieval” Information Processing and Management, 24(5) pp. 513-
523.
Wetherill, M., Rezgui, Y., Boddy, S. , Cooper, G.S. (2007) “Intra- and inter-
organizational knowledge services to promote informed sustainability
practices”, Journal of Computing in Civil Engineering 21(2), pp. 78-89.
Machine Vision Enhanced Post-earthquake Inspection

Zhenhua Zhu1, Stephanie German1, Sara Roberts1, Ioannis Brilakis2 and Reginald
DesRoches3

1School of Civil and Environmental Engineering, Georgia Institute of Technology, Atlanta, GA 30332; email: {zhzhu, s.german, sroberts4}@gatech.edu
2School of Civil & Environmental Engineering, Georgia Institute of Technology, Atlanta, GA 30332; PH (404) 894-9881; email: brilakis@gatech.edu
3School of Civil & Environmental Engineering, Georgia Institute of Technology, Atlanta, GA 30332; PH (404) 385-0402; email: reginald.desroches@ce.gatech.edu

ABSTRACT
Manual inspection is required to determine the condition of damaged buildings after
an earthquake. The lack of available inspectors, when combined with the large
volume of inspection work, makes such inspection subjective and time-consuming.
The required inspection takes weeks to complete, which has adverse
economic and societal impacts on the affected population. This paper proposes an
automated framework for rapid post-earthquake building evaluation. Under the
framework, the visible damage (cracks and buckling) inflicted on concrete columns is
first detected. The damage properties are then measured in relation to the column’s
dimensions and orientation, so that the column’s load bearing capacity can be
approximated as a damage index. The column damage index supplemented with other
building information (e.g. structural type and columns arrangement) is then used to
query fragility curves of similar buildings, constructed from the analyses of existing
and on-going experimental data. The query estimates the probability of the building
being in different damage states. The framework is expected to automate the
collection of building damage data, to provide a quantitative assessment of the
building damage state, and to estimate the vulnerability of the building to collapse in
the event of an aftershock. Videos and manual assessments of structures after the
2010 earthquake in Haiti are used to test parts of the framework.
KEYWORDS: Post-earthquake inspection; Machine vision
INTRODUCTION
Post-earthquake inspection is performed by teams comprising licensed inspectors
and/or structural engineers. They follow the guidelines in the ATC-20 documents
(ATC-20, 1989; ATC-20-2, 1995) and classify a post-earthquake building as 1)
imminent threat to life-safety (red-tag), 2) risk from damage but not imminent threat
to life-safety (yellow-tag), or 3) safe for entry and occupancy as earthquake damage
has not significantly affected the safety of the building (green-tag). As suggested by
the definitions of the categories, the application of these guidelines requires

significant judgment and the inspection results are highly subjective. Also, mobilizing
post-earthquake reconnaissance teams and assessing damaged buildings often take
days to weeks to complete, even for a moderate earthquake.
According to a summary report of the October 15, 2006 Hawaii Earthquake,
several hundred buildings were requested to be assessed each day from October 15 to
the end of October in the County of Hawaii, while the inspection capacity was only
around 1000 buildings per week (Chock, 2007).
Prompted by the critical role of post-earthquake inspection in hazard mitigation
and the need for its fast performance in earthquake damaged areas, several efforts
towards automating building safety assessment have led to the creation of
sensing-based evaluation methods. For example, Kottapalli et al. (2003) showed that
sensor networks installed in new buildings can provide useful information for
evaluating structural damage. However, sensor networks are installed in a very small
percentage of existing structures in earthquake prone areas, and rarely in most
susceptible, old reinforced concrete (RC) buildings.
This paper proposes an automated framework for the evaluation of
post-earthquake RC buildings using machine vision techniques. Under the framework,
the visible damage inflicted on concrete columns is first detected. The spatial
properties of the damage are measured in relation to the column’s dimensions and
orientation to approximate the column’s load bearing capacity as a damage index. The
column damage index supplemented with other building information (structural type
and columns arrangement) is used to query fragility curves of similar buildings,
constructed from the analyses of existing and on-going experimental data. The query
estimates the probability of the building being in different damage states. The
framework is expected to provide the quantitative assessment of the damage state of
an RC frame, and its vulnerability to collapse in an aftershock.
RELATED WORK
In this section, the recent work of machine vision-based structural element detection
is introduced first. Following that, the assessment of the vulnerability of RC buildings
to collapse and the loss estimation for the buildings subjected to earthquake loading
are described. All of them are what the framework builds on.
Machine Vision-Based Structural Element and Damage Detection
Machine vision-based detection methods rely on: 1) scale/affine-invariant features, 2)
color/texture features, and 3) geometry features. Scale/affine-invariant feature-based
methods are powerful in detecting a specific object, but not appropriate for object
category detection (Zhu and Brilakis, 2010).
Color/texture based methods use the objects’ interior color/texture values to
perform detection. Neto et al. (2002) observed that the color/texture values for most
materials (e.g. concrete and steel) in an image do not change significantly. Based on
this observation, material regions in an image can be identified and the type of
structural element of one region is determined from the region dimensions (Brilakis
and Soibelman, 2008). However, when one element is connected to another structural
element with the same material, these methods regard them as one single
element instead of two separate elements.
Edge information is another type of detection indicator. Geometry-based
methods make use of this information. They start with edge detection using common
operators, and then form object boundaries by analyzing the distribution of edge
points through the Hough transform, covariance matrices, or principal component analysis
(Lee et al. 2006). The sole reliance on edge information renders these methods
inadequate for complex scenes.
As for automated damage detection, a lot of methods have been created using
image processing techniques, such as wavelet transforms, edge detection, and/or
region-based segmentation. Their effectiveness has been verified in inspecting
concrete structures such as bridges, underground pipes and tunnels. For example,
Abdel-Qader et al. (2006) proposed a principal component analysis (PCA) based
algorithm for unsupervised detection of bridge cracks. Sinha and Fieguth (2006)
introduced two crack detectors for identifying crack pieces in buried concrete pipes.
Yu et al. (2007) used Sobel and Laplacian operators to retrieve crack information
from captured concrete surface images. The error of measurement for the extracted
cracks in their system was below 10%. These successful efforts validated the ability
of machine vision technologies to detect damage, even when well-lit conditions
were not available.
Assessment of the Vulnerability of RC Buildings to Collapse
In earthquake engineering, models are required to link the component damage
visually identified on-site to the building performance and vulnerability to
aftershocks. These models are referred to as “fragility functions”. Researchers have
developed fragility functions that advance assessment of the post-earthquake
vulnerability of buildings beyond ATC-20 documents. One study was undertaken as
part of the Pacific Earthquake Engineering Research (PEER) Center Lifeline
Research Program (Bazzurro et al. 2004). The study developed the recommendations
to quantify the vulnerability of the building to collapse during an aftershock given
that the building had been red, yellow, or green-tagged following the main shock.
Maffei et al. (2008) applied these recommendations for the evaluation of utility
company buildings that required limited access after an earthquake so that personnel
could access equipment to restore power supply and thereby enable post-event
recovery.
Another primary component of the assessment methods is a pushover analysis to
determine the response of the structure under earthquake loading (Bazzurro et al.
2004). So far, the data from suites of analyses have been used to develop fragilities
for different types of buildings, including concrete frames (Haselton and Deierlein,
2008), concrete continua (Ji et al. 2009), and concrete wall buildings (Elnashai et al. 2002).
The pushover response history can be quickly estimated using a few basic parameters
characterizing the building system.
A third critical component of the assessment methods is the ability to link
analysis results with observed damage. For example, when a pushover response
history is used, it is necessary to identify the earthquake load–roof displacement
points on the history corresponding to development of specific observable damage
states, such as initiation of measurable residual concrete cracking, initiation of
concrete crushing, or buckling of longitudinal reinforcing steel. This introduces
additional effort into the model-building process (in defining hinge response for RC
elements) and additional uncertainty into the process; both can be reduced, with
relatively little impact on computational time, through the use of fiber-type response
models and associated damage prediction models.
Loss Estimation from Buildings Subjected to Earthquake Loading
Typically, the extent of federal and state funding provided for recovery efforts is
determined by the estimates of earthquake losses; making rapid, accurate estimation
of these losses critical to recovery. For most buildings, and especially under light to
moderate earthquake loading, the cost of repairing non-structural elements greatly
exceeds that for structural damage. However, economic losses must include the cost
of lost productivity during the time the structural system is repaired. This cost can be
quite significant.
Previous studies provided a basis for developing rapid, automated procedures for
using damage data to estimate repair costs and downtime for repair. For example,
Pagni and Lowes (2006) and Lowes and Li (2009) linked damage with the repair
methods required to return the structure to its original stiffness and strength. Pagni
(2003) demonstrated the use of these repair-specific fragility functions to compute the
cost and time for the repair of old concrete frames. The on-going ATC-58 project has
developed a framework for loss estimation on the basis of the required repair.
THE FRAMEWORK OF THE PROPOSED METHODOLOGY
This paper proposes a novel, automated framework for post-earthquake inspection
(Figure 1). The framework first collects video frames via a high-resolution video
camera and transmits the frames to a computer off-site for analysis. There, each frame
is searched for concrete columns and the damage inflicted on the columns. The spatial
damage properties are measured, so that the column’s load bearing capacity can be
approximated as a damage index. In parallel to this process, the building structural
type and the column arrangement per floor are recorded by the user while performing
the building safety evaluation. The collected information is used to query a fragility
database constructed from analyses of existing and on-going experimental data. The
database contains building fragility curves that report the probability of various levels
of structural damage. Consulting these curves gives the estimate of the probability of
being in different damage states. Specifically, the framework is composed of three
main steps.

Figure 1. Schematic representation of the proposed automated framework


Automated Detection of Concrete Columns and Damage On-Site
Concrete columns (typically rectangular or elliptical) are man-made solid objects that
have two distinguishing visual characteristics: 1) the shapes dominated by long
vertical boundary edges, and 2) the uniform texture and color pattern of concrete on
each of their surfaces. Based on that, the detection method for concrete columns is
designed to find long vertical line pairs using an edge detection operator and the
Hough transform and then determine whether the material contained in the line pair is
concrete.
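As an illustration of the line-pair search described above, the following sketch uses standard OpenCV edge detection and the probabilistic Hough transform to collect long, near-vertical line candidates. The threshold values and the subsequent concrete-material check are placeholders, not the authors' tuned implementation.

```python
# Rough sketch of the first detection stage: find long, near-vertical lines with Canny
# edge detection and the Hough transform. Thresholds are illustrative placeholders.
import math
import cv2
import numpy as np

def find_vertical_line_candidates(image_bgr, max_tilt_deg=10, min_length_ratio=0.3):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    min_len = int(image_bgr.shape[0] * min_length_ratio)      # "long" relative to image height
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=min_len, maxLineGap=10)
    vertical = []
    if lines is None:
        return vertical
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))
        if abs(angle - 90) <= max_tilt_deg:                    # keep near-vertical lines
            vertical.append((x1, y1, x2, y2))
    return vertical

# A column hypothesis would then pair two nearby vertical lines and test whether the
# region between them has concrete-like color/texture (not shown in this sketch).
```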
When concrete columns are detected, the damage on them is extracted. The
damage investigated focuses mainly on cracks and exposed reinforcement. Other
damage indicators such as column drift ratio are also considered. For each kind of
detected damage, its properties (e.g. length, width and orientation) are measured and
spatially correlated with the structural member on which it lies. The approach to
deriving these measurements differs on a case by case basis, but the common steps to
be followed are a) identify the measurements needed for each damage type, b) set a
relative scale standard (e.g. angle of crack direction in relevance to the column’s
vertical edges, width of crack in relevance to column width, etc.), and c) measure and
store results in a columns and damage properties database.
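A simple sketch of step (b), expressing crack properties relative to the host column, is shown below. The geometric inputs are assumed to come from the detection stage, and the numeric values are invented for illustration.

```python
# Illustrative sketch of step (b): express crack properties relative to the host column
# rather than in absolute pixels. Inputs (crack end points, column axis angle, widths)
# are assumed to come from the detection stage; the example numbers are made up.
import math

def relative_crack_properties(crack_p1, crack_p2, column_axis_angle_deg,
                              crack_width_px, column_width_px):
    dx = crack_p2[0] - crack_p1[0]
    dy = crack_p2[1] - crack_p1[1]
    crack_angle = math.degrees(math.atan2(dy, dx))
    # Angle of the crack relative to the column's vertical edges, folded to [0, 90]
    relative_angle = abs(crack_angle - column_axis_angle_deg) % 180
    relative_angle = min(relative_angle, 180 - relative_angle)
    # Crack width expressed as a fraction of the column width
    relative_width = crack_width_px / column_width_px
    return {"relative_angle_deg": relative_angle, "relative_width": relative_width}

print(relative_crack_properties((120, 40), (150, 200), 90.0, 4, 180))
```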
Linking Damage with Performance for RC Frame Members
The post-earthquake vulnerability of the structure to collapse can be estimated using
response-mechanism information. The results of previous research provide a basis for
establishing links between damage patterns and response mechanisms for RC frame
members, but they do not, unfortunately, provide explicit links between damage and
response mechanisms. In order to characterize the damage patterns associated
with specific response mechanisms and how these damage patterns evolve during
earthquake loading, data from on-going tests are used to classify frame
member response on the basis of observed damage patterns.
Beyond identifying the expected response mechanisms, it is necessary to use the
damage data to establish the post-earthquake structural performance state of the
building, and thereby provide a basis for determining the vulnerability of the building
to collapse in an aftershock. The results of several different types of previous research
efforts are combined to accomplish this task. The simplest and most basic approach is
to determine the reduction in stiffness associated with damage during the earthquake
main-shock. Also, the structural performance state of the building can be established
using nonlinear response models according to the recommendations of the ASCE/SEI
Standard 41-06.
Post-earthquake Fragility Analyses of Damaged RC Frames
This step is to provide reliable, rapid assessment of the performance state and
vulnerability of the structural system. Two different approaches are investigated for
accomplishing this task. The first employs parameterized pushover curves for the
damaged structure. The second employs a suite of pre-calculated fragility functions,
each of which defines the likelihood of structural collapse given the earthquake
aftershock and the damage state of the structural system following the earthquake
main-shock. Both approaches analyze the data characterizing the earthquake response
of damaged and undamaged concrete frames.
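For illustration only, fragility functions of the kind referred to here are commonly parameterized as a lognormal cumulative distribution function of an aftershock intensity measure. The sketch below uses that common convention with hypothetical median and dispersion values; it is not a statement of the fragility model developed in this work.

```python
# Illustration only: a lognormal fragility curve, a common convention in the fragility
# literature. The functional form and the median/dispersion values are assumptions
# for illustration, not the authors' model.
import math

def collapse_probability(im, median_im, beta):
    """P(collapse | IM = im) for a lognormal fragility with median `median_im`
    and logarithmic standard deviation `beta`."""
    z = (math.log(im) - math.log(median_im)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

# Hypothetical curve for a main-shock-damaged frame: median capacity 0.4 g, beta 0.5
for im in (0.1, 0.2, 0.4, 0.8):
    print(f"Sa = {im:.1f} g -> P(collapse) = {collapse_probability(im, 0.4, 0.5):.2f}")
```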
The activities associated with this phase are to 1) complete analyses of a
representative suite of concrete building frames and use analysis results to 2) develop
a method for defining frame performance on the basis of component performance, 3)
develop a suite of parameterized pushover curves (base shear versus roof drift history
for monotonic loading) for the undamaged and damaged building system, 4) develop
suites of fragility functions defining the likelihood that a damaged building will
collapse in an aftershock given the aftershock hazard and the structural performance
state of the lateral system following the earthquake main-shock, and 5) evaluate the
proposed methods (pushover- and direct fragility-based) for assessing vulnerability to
identify a preferred method.
IMPLEMENTATION AND RESULTS
The framework presented in this paper is still under development. Parts of
the framework have been implemented and integrated into the prototype developed
by the Construction Information Technology Laboratory at the Georgia Institute of
Technology. The prototype was written in Microsoft Visual Studio .NET, and it was
tested by the authors on the images/videos of structures that were damaged in
the January 2010 Haiti earthquake. The parts of the framework implemented so far include
1) concrete column recognition, 2) crack detection and properties retrieval, and 3)
exposed reinforcement detection.
Figure 2 shows the results of detecting concrete columns in images and the
visible damage (cracks and exposed reinforcement) inflicted on the concrete column
surfaces. The detection performance is measured by the precision and recall.
Precision is calculated as the percentage of elements (concrete columns, cracks, or
exposed reinforcement) correctly detected out of the total number of elements
detected, whether correctly or incorrectly. Recall is calculated as the percentage of
elements correctly detected out of the total number of elements actually present,
whether recognized or not. The average precision and recall for concrete columns,
cracks, and exposed reinforcement reach (89.7%, 84.3%), (64.2%, 91.8%), and
(83.2%, 82.2%), respectively.
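The computation behind these figures can be sketched as follows; the counts used are invented for illustration and are not the study's actual detection counts.

```python
# Small sketch of the precision/recall computation from counts of true positives
# (correct detections), false positives (incorrect detections), and false negatives
# (missed elements). The counts below are invented for illustration.
def precision_recall(true_positives, false_positives, false_negatives):
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

p, r = precision_recall(true_positives=87, false_positives=10, false_negatives=16)
print(f"precision = {p:.1%}, recall = {r:.1%}")   # 89.7% and 84.5% for these invented counts
```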

Figure 2. Automated detection of concrete columns and damage: (a) concrete column detection, (b) crack detection, and (c) exposed reinforcement detection.
CONCLUSIONS AND FUTURE WORK
Current post-earthquake inspection is performed by certified inspectors and/or
structural engineers, who follow the guidelines in the ATC-20 documents. The
whole process may take weeks or even months to complete, and the inspection results
are highly subjective. In order to overcome the limitations of current solutions, a
novel, machine vision enhanced framework is proposed in this paper.
The proposed framework is mainly composed of three parts. First, concrete
columns and the damage inflicted on these columns are detected. Then, the load
carrying capacity of concrete columns is assessed based on the detected damage. This
information is used to develop a reliable, rapid assessment of the performance state
and vulnerability of the RC frame structural system. The framework is expected to
provide the quantitative assessment of the damage state of an RC frame building, and
its vulnerability to collapse in an aftershock. So far, three parts of the framework, 1)
concrete column recognition, 2) crack detection and properties retrieval, and 3)
exposed reinforcement detection and properties retrieval have been implemented,
tested, and validated with the videos and assessments of structures collected from
Haiti.
ACKNOWLEDGEMENT
This material is based in part upon work supported by the National Science
Foundation under #1000700 and #1034845. Any opinions, findings, and conclusions
or recommendations expressed in this material are those of the author(s) and do not
necessarily reflect the views of the National Science Foundation.
REFERENCES
Abdel-Qader, I., Pashaie-Rad, S., Abudayyeh, O., & Yehia, S. (2006). “PCA-Based
Algorithm for Unsupervised Bridge Crack Detection.” Advances in Engineering
Software, 37 (12), 771-778
ATC-20 (1989). “Procedures for Postearthquake Safety Evaluations of Buildings.”
Report ATC-20, Redwood City, CA.
ATC-20-2 (1995). “Addendum to ATC-20, Procedures for Postearthquake Safety
Evaluations of Buildings.” Report ATC-20, Redwood City, California.
Bazzurro, P., Cornell, C.A., Menun, C., Luco, N., and Motahari, M. (2004). “Advanced
Seismic Assessment of Buildings.” Report for Pacific Gas & Electric Company &
the Pacific Earthquake Engineering Research Center.
Brilakis, I. and Soibelman, L. (2008). "Shape-Based Retrieval of Construction Site
Photographs." J. of Computing in Civil Engineering, 22(1): 14 – 20
Chock, G., (2007). “ATC-20 Post-Earthquake Building Safety Evaluations Performed
after the October 15, 2006 Hawaii Earthquakes Summary and Recommendations
for Improvements (updated).”
http://www.scd.state.hi.us/HazMitPlan/chapter_6_appM.pdf (Dec. 10, 2008)
Elnashai, A.S., Papanikolaou, V. and Lee, D.H. (2002). “Zeus-NL — A system for
inelastic analysis of structures.”
Haselton, C., Deierlein, G. (2008) “Assessing Seismic Collapse Safety of Modern
Reinforced Concrete Moment-Frame Buildings” PEER Report 2007/08.
Ji J, Elnashai, A.S., Kuchma, D.A. (2009). “Seismic Fragility Relationships of
Reinforced Concrete High-Rise Buildings.” Structural Design of Tall & Special
Buildings 18(3): 259-277.
Lee, Y., Koo, H., and Jeong, C. (2006). “A Straight Line Detection using Principal
Component Analysis.” Pattern Recognition Letters, 27 (14), 1744-1754.
Lowes, L.N., Li, J. (2009). “Fragility Functions for RC Moment Frames.” Report to
ATC-58 Structural Performance Products Review Panel.
Maffei, H., Telleen, K, and Nakayama, Y. (2008). “Probability-Based Seismic
Assessment of Buildings, Considering Post-Earthquake Safety.” Earthquake
Spectra 24(3)
Neto, J., Arditi, D., and Evens, M. (2002). “Using Colors to Detect Structural
Components in Digital Pictures.” Computer Aided Civil and Infrastructure
Engineering, 17(2002): 61-76
Kottapalli, V. A., Kiremidjian, A. S., Lynch, J. P., Carryer, E., Kenny, T. W., and
Law, K. H. (2003). “Two-tiered wireless sensor network architecture for
structural health monitoring.” SPIE’s 10th Annual International Symposium on Smart
Structures and Materials, San Diego, 8-19.
Pagni, C.A. and L.N. Lowes. (2006) “Fragility Functions for Older Reinforced
Concrete Beam-Column Joints.” Earthquake Spectra 22(1): 215-238
Sinha, S., & Fieguth, P. (2006). Automated detection of cracks in buried concrete
pipe images. Automation In Construction , 15 (1): 58-72.
Yu, S.-N., Jang, J.-H., & Han, C.-S. (2007). Auto Inspection System Using a Mobile
Robot for Detecting Concrete Cracks in a Tunnel. Automation in Construction ,
16 (3), 255-261.
Zhu, Z. and Brilakis, I. (2010). “Concrete Column Recognition in Images and
Videos.” J. of Computing in Civil Engineering, 24(6): 478 – 487.
Continuous Sensing of Occupant Perception of Indoor Ambient Factors
Farrokh Jazizadeh1, Geoffrey Kavulya 2, Laura Klein3, Burcin Becerik-Gerber4
1,2,3,4Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, CA 90089;
Email: 1jazizade@usc.edu, 2kavulya@usc.edu, 3lauraakl@usc.edu, 4becerik@usc.edu

ABSTRACT

Ambient factors such as temperature, lighting, and air quality influence occupants’
productivity and behavior. Although these factors are regulated by industry standards
and monitored by the facilities management groups, occupants’ perceptions vary from
actual values due to various factors such as building schedules and occupancy,
occupant activity and preferences, weather and climate, and the placement of sensors.
While occupant comfort surveys are sometimes conducted, they are generally limited
to one-time or periodic assessments that do not fully represent occupant experiences
throughout building operations. This study proposes a new methodology for gathering
real time data on a continuous basis through participatory sensing of occupant
ambient comfort in indoor environments based on a smart phone application. The
developed application is presented and validated by a pilot study in a university
building. Occupant perceptions of temperature are compared to actual temperature
records. No correlation is found between perceived and actual room temperatures
demonstrating the potential of a participatory sensing tool for adaptively controlling
building temperature ranges.

Keywords: participatory sensing, occupant satisfaction, indoor environments,


temperature, adaptive control, thermal comfort

INTRODUCTION

Ambient factors such as temperature, lighting, and air quality can greatly influence
occupants’ productivity and behaviour in indoor environments (Sensharma et al.,
1998). As a result, industry standards have been developed to define acceptable
ranges for these factors according to rational comfort indices, most notably the PMV
(predicted mean vote) index for thermal comfort (Fanger, 1982; ASHRAE, 2004;
CEN, 2005). Recent studies, however, have shown weak and context-dependent
correlations between code-defined comfort ranges and occupant-reported comfort
ranges (Barlow and Fiala, 2007; Corgnati et al., 2009). Oftentimes, occupant comfort
ranges are found to be larger and more forgiving than predicted ranges implying a
potential for reduced building energy consumption by allowing more flexible and
adaptive control of HVAC and lighting system set points (Hwang et al., 2006; Nicol
and Humphreys, 2009). In the United States, buildings account for 40% of national
energy consumption of which 33% is associated with heating and cooling and 18% is
associated with lighting (U.S. Department of Energy, 2009). Consequently, there is a

significant opportunity for improving occupant comfort levels and reducing building
energy demands by collecting and analysing occupant perceptions of indoor
environmental conditions.

Current practices for controlling and assessing indoor environments limit occupant
feedback to one-time or periodic occupant surveys and individual occupant
complaints (Nicol and Roaf, 2010). Such limitations reflect challenges to continuous
and large-scale acquisition of human-contributed data (Ari et al., 2008). To allow
frequent and real-time assessment of a large collection of indoor environments, this
research proposes a new methodology for gathering real time data on a continuous
basis through participatory sensing of occupant ambient comfort in indoor
environments. In recent years, smart phones have evolved from devices used solely
for voice and text communication to platforms that are able to capture and transmit a
range of data types, including image, audio, and location, and to use cloud services to
collect and analyze systematic data (Estrin, 2010). Participatory sensing involves and empowers
end-users in collecting and sharing these data types (Reddy et al., 2010; Payton and
Julien, 2010) by using mobile devices. Consequently, there is a shift towards
employing the widespread capabilities of smart phones for manual, automatic, and
context-aware data capture especially by incorporating participatory sensing
functionality (Trossen and Pavel, 2010; Sääskilahti et al, 2010), which includes
sensors and techniques for pervasive environmental monitoring.

The research develops and tests a new participatory sensing smart phone application
for building occupants that is intended to facilitate more customized and therefore
more efficient heating, cooling, ventilation, and lighting standards and protocols for
building facilities. The developed application is presented and the proposed
methodology for facilitating continuous sensing of occupant perceptions of indoor
ambient factors is explained. The application is tested and validated by a pilot study
in a university building. Occupant perceptions of temperature are compared to actual
temperature records and results are used to assess the value of participatory sensing
for indoor environmental control and for building energy management. The main
objectives for improving data collection for indoor environmental assessment include
monitoring multiple buildings and building zones, accepting input from multiple
occupants, and analysing and addressing input data in real time. Thorough literature
reviews of building occupant comfort and participatory sensing methods have been
completed but, due to page limitations, the authors were not able to include them
in this paper.

CONTINUOUS SENSING APPROACH

Participatory sensing using mobile devices, specifically smart phones, could be the
optimal approach to collecting data from large user groups and in large facilities such
as university campuses or urban regions. This method offers great potential for data
collection, as mobile devices are readily available to the majority of occupants of
large facilities owing to the popularity of this type of technology. According to the
results of a 2010 survey of college and university students in the United States, 53%
of students own smartphones (Digital Media Test Kitchen, 2010).

As part of the approach, a smart phone application was designed and shared with
building occupants for free, to allow them real time and continuous input of their
perceptions of indoor ambient factors. This participatory sensing application differs
from most traditional building occupant surveys in that it does not include a
comprehensive list of questions but rather a few questions designed to encourage fast
and frequent input. By developing the application for different types of smart phones
and operating systems, most building occupants are able to access a compatible
device. Such widespread opportunity supports the goals of this study to enable large
scale data acquisitions from a large population.

PARTICIPATORY SENSING APPLICATION

Survey Design. The application was designed for collecting, recording, and
analyzing spatiotemporal perceived ambient factors. In order to identify the most
important ambient factors and their associated effects on building environments, the
results of a study, conducted by the third and the fourth author, which explored the
relation between student’s learning and classroom features including ambient factors,
space layout, and classroom technology were analyzed. Based on the study results
and an extensive literature review, three important ambient factors were selected,
temperature, light intensity, and airflow. These factors also have the greatest impact
on building energy consumption as discussed in previous sections.

The survey question for perceived temperature was based on the thermal sensation
scale proposed by ASHRAE with the following choices: +2 Hot, +1 Warm, 0 Neutral,
-1 Cool and -2 Cold. The two options, slightly warm and slightly cool, were removed
from the original ASHRAE scale for this study as these options were judged to be
potentially ambiguous and difficult for participants to interpret in comparison to cool
and warm levels. Moreover, in participatory sensing, the brevity of questions and
answers plays an important role in encouraging high levels of participation.
Metabolic rate and clothing factors (ASHRAE, 2004) were not included since this
study focuses only on those factors that are under the control of the building systems
and are related to energy consumption. The adopted approach in this study is based
on continuous and real time data collection with large data samples which are
assumed to normalize impacting factors like clothing, gender, and so forth.

Perceived lighting was assessed by two survey questions: light source and light
intensity. Participants were asked to share whether their environment used natural
light, artificial light, or both to evaluate the contribution of lighting to energy
consumption and the contribution of light source to occupant perception. For light
intensity, a scale similar to that of temperature was adopted with the following five
levels: +2 Glaring, +1 Bright, 0 Neutral, -1 Dim, and -2 Dark. Ventilation or airflow,
which plays an important role in preserving acceptable air quality and acceptable air
speeds for occupant comfort, was assessed with the following scale: +2 Draughty, +1
Slightly Draughty, 0 Neutral, -1 Slightly Stuffy and -2 Stuffy. The final survey
question asked participants to share their mood to investigate correlations between
occupants' moods and ambient conditions. Mood is assessed with the following
discrete answers, not intended to represent a continuous scale: +2 Focused, +1 Calm,
0 None, -1 Distracted, and -2 Sleepy.

Application Design and Workflow. Location and time of participation are two
important parameters of this study. A GPS based locating algorithm running on the
application central server provides the three nearest buildings to a participant’s
location, from which they can navigate and scroll through floors and rooms. Storing a
list of campus buildings and rooms and running parts of the application on the server
reduce the computing load on the mobile devices. The location-sensing module of the
application runs as a service in the background to record the last available latitude
and longitude of the participant. This also reduces the computing time of location
sensing while participants use the application. Reducing manual data entry is an
important step in participatory sensing as it encourages participants to contribute
easily and also reduces faulty data. Once the building and room location is defined, a
participant completes the questions regarding temperature, light source, light
intensity, air quality, and mood. The entire process takes approximately 10 seconds.
The captured data is then sent to the server and recorded in a database.
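A minimal sketch of such a nearest-building lookup, using the great-circle (haversine) distance, is given below. The building names and coordinates are hypothetical; the actual server-side algorithm is not detailed in the paper.

```python
# Sketch of a server-side "three nearest buildings" lookup using the haversine distance.
# Building names and coordinates are hypothetical placeholders.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points in degrees."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def three_nearest(buildings, lat, lon):
    """buildings: {name: (lat, lon)}; returns the three closest building names."""
    ranked = sorted(buildings, key=lambda b: haversine_m(lat, lon, *buildings[b]))
    return ranked[:3]

campus = {"Building A": (34.0205, -118.2856), "Building B": (34.0193, -118.2890),
          "Building C": (34.0219, -118.2891), "Building D": (34.0241, -118.2800)}
print(three_nearest(campus, 34.0200, -118.2870))
```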

To implement the application, the Android operating system was selected as the test
platform. For the Android application, the Java programming language and the Eclipse
development platform were used. Screen shots of the mobile device interface are
presented in Figure.1.

Figure 1. Screen shots of the ambient factors application on Android (lists of nearest buildings, floors, and rooms; temperature; light source; light intensity; air flow)

Application Verification. In order to verify the performance of the application and
define barriers and challenges for implementation of the application on a large scale,
a pilot study was conducted. Eight rooms, on different floors of a university building,
were selected based on the availability of temperature sensors. The selected rooms
included four offices (130, 444, 444C, 444L), three classrooms (144, 163, 164), and
one conference room (209) for a representative sampling of typical building spaces
and typical building occupants. Selected occupants in these rooms were asked to
participate in the participatory data collection for a period of ten days. During this
period, about 200 data points were obtained, which are summarized in Table 1. While
perceptions of all ambient factors included in the application were gathered
(temperature, airflow, light source, light intensity, and mood), only perceived
temperatures were analyzed and summarized in this paper.

In addition to participatory perceived temperature data gathered over ten days by the
ambient factors application, actual temperature data for each of the surveyed rooms
was collected. Existing temperature sensors in each of the rooms allowed the record
of air temperatures every six minutes over the study period. The averages and
standard deviations for perceived and actual temperatures in addition to the
distribution of data points are summarized in Table 1. The actual temperatures were
matched by date, time, and location to the perceived temperature data. In Figure 2,
perceived temperature votes are plotted against actual temperature ranges in which
the perceived temperatures were reported. Over 80% of perceived temperature votes
fell under the “cool” and “neutral” categories and covered an actual temperature
range of 20 to 26 degrees Celsius. As expected, the median actual temperatures of
votes for “cool”, “neutral”, “warm”, and “hot” showed a positive correlation with
perceived increases in room temperatures. The median actual temperature of votes for
“cold”, however, was higher than the median actual temperatures for both “cool” and
“neutral” votes.
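The pairing of votes with sensor records can be sketched as a nearest-in-time lookup per room, given that readings are logged every six minutes; the data layout and function below are assumptions for illustration, not the authors' processing code.

from datetime import datetime
def match_vote_to_reading(vote, readings):
    """Return the sensor reading from the same room closest in time to a vote.
    vote: (room, timestamp, perceived_level); readings: list of (room, timestamp, temp_c)."""
    room, t_vote, _ = vote
    same_room = [r for r in readings if r[0] == room]
    return min(same_room, key=lambda r: abs((r[1] - t_vote).total_seconds()))
# Example: a "cool" (+1) vote in Room 163 matched to the closer of two readings.
readings = [("163", datetime(2011, 3, 1, 10, 0), 21.4),
            ("163", datetime(2011, 3, 1, 10, 6), 21.6)]
print(match_vote_to_reading(("163", datetime(2011, 3, 1, 10, 4), 1), readings))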

Table 1. Means and standard deviations for perceived and actual temperatures
(Counts are the number of data points at each perceived level.)
Room     Cold  Cool  Neutral  Warm  Hot   Perceived Mean  Perceived Std Dev   Actual Mean (°C)  Actual Std Dev
130        0     1       3      1    2        -0.57             1.13               23.33             0.14
144        7    20       8      1    0         0.92             0.73               24.70             0.54
163        2    23       7      1    0         0.79             0.60               21.49             1.23
164        3    15      10      0    0         0.75             0.65               20.94             0.41
209        7    19      26      4    0         0.52             0.81               21.93             0.39
444        0     3       3      1    0         0.29             0.76               23.89             0.28
444C       0     0       2      1    3        -1.17             0.98               24.19             0.43
444L       1     0       4      0    0         0.40             0.89               22.64             0.88
Total     20    81      63      9    5          -

Based on the analysis, the HVAC systems operating in the eight surveyed rooms were
found to maintain relatively uniform air temperatures in each space. The highest
standard deviation for actual temperatures was 1.23 degrees in Room 163. Six of the
rooms had standard deviations for actual temperatures less than or approximately
equal to 0.5 °C, implying that temperatures remained within a one degree range for at
least two thirds of the time in these rooms. While each room operated in a somewhat
narrow and regulated range, temperature ranges varied between rooms. Room 164
saw the lowest standard temperature range with the majority of recorded temperatures
falling between 20.53 and 21.35 degrees Celsius. In contrast, Room 144 saw the
highest temperatures with a standard range of 24.18 to 25.26 degrees Celsius. The
maximum temperature in Room 144 was 26.2 degrees which was almost five degrees
higher than the maximum temperature of 21.8 degrees recorded in Room 164. This
variance reveals that the regulated temperature set points in each room differ
somewhat substantially.

Figure 2. Box plot of perceived temperature votes against actual temperature ranges

Similar to the actual temperature findings, perceived temperatures were relatively uniform for each room. Seven of the eight rooms had standard deviations of less than
one scale point for perceived temperatures. Room 130 had a standard deviation of
1.13 scale points. Six of the surveyed rooms were perceived on average as neutral to
cool in temperature. Room 130 was perceived as neutral to warm and Room 444C
was perceived as warm to hot on average.

Surprisingly, no correlation was found between perceived and actual mean room
temperatures. While Room 444C, which saw the second highest actual temperatures,
was perceived as warm to hot, Room 144, which saw the highest actual temperatures,
was perceived as the coolest of the eight rooms. Rooms 444 and 444C belong to the
same HVAC zone and therefore shared one temperature sensor and one VAV box for
controlling air temperature. As a result, their average temperatures are almost
identical at 23.9 and 24.2 degrees Celsius respectively. Despite this relation, Room
444 was perceived as slightly cool (0.29) and Room 444C was perceived as very
warm (-1.17). This discrepancy in occupant perceptions could be explained by
numerous factors including locations of building spaces, vents, and windows, room
size, occupancy rate, occupant activity and clothing, and occupant preferences.

The absence of a correlation between perceived and actual room temperatures reveals
the potential value of the ambient factors participatory application for adjusting
ambient factors to optimize occupant comfort. The findings of this study demonstrate
that standardized temperature set points do not guarantee ideal thermal conditions in
all indoor environments. Continuous and real-time access to occupant perceptions of
thermal conditions would therefore provide more effective means for adjusting
heating and cooling ranges for different building spaces. At least four of the test-bed

rooms would benefit from slight increases in their set temperature ranges and two of
the rooms would benefit from slight decreases in their set temperature ranges.

CONCLUSION AND FUTURE RESEARCH

Heating, cooling and lighting systems together make buildings some of the greatest
consumers of energy in the United States. Much of their energy consumption results
from the requirement that these systems be highly regulated and controlled to meet
established standards for occupant thermal and lighting comfort. Accordingly, in this
study, participatory sensing was adopted to allow for real-time continuous assessment
of the ambient conditions of large facilities and urban regions. A smart phone
application was developed in order to provide concrete solutions to inherent gaps in
building operations and performance assessment methods. Conventional methods rely
on periodical measurement and verification surveys which do not address the full
operational life cycle of the building. The application is used to gather occupants’
perceptions of temperature, lighting, air quality, and mood. Performance verification
of the application over a period of ten days in eight rooms revealed no significant
correlation between perceived and actual temperatures in these rooms. However, as illustrated in the verification section, about 65 percent of occupant perceptions of temperature differed from the neutral condition. This finding indicates the proposed methodology's potential for improving building system efficiency through large-scale data collection. Using this approach, optimizing standards for ambient conditions to
improve both energy consumption efficiency and occupant comfort could also be
achieved.
Future work is focused on implementing the developed methodology at larger scales
for building level and campus level data collection. Moreover, a test bed is being developed
by the authors for sensing and measuring other ambient factors such as lighting
intensity and air quality. Conducting large scale data collection will provide a large
source of data points for analytical and statistical assessment of occupant satisfaction
with building system performance. The long term objective is the development of an
intelligent adaptive control system, which relies on continuous and real time occupant
perceptions to set optimal ambient conditions for occupant comfort and building
energy efficiency. To achieve these goals, several necessary infrastructural
developments such as expansion to other smart phone operating systems,
development of a visualization platform for energy literacy, and incorporation with
facilities management information systems, are part of the future work of the authors.

ACKNOWLEDGEMENTS

The authors would like to thank University of Southern California (USC) Integrated
Media Systems Center (IMSC). Any opinions, findings, conclusions, or
recommendations presented in this paper are those of authors and do not necessarily
reflect the views of USC IMSC.

EFFECTS OF COLOR, DISTANCE, AND INCIDENT ANGLE ON QUALITY
OF 3D POINT CLOUDS
Geoffrey Kavulya1, Farrokh Jazizadeh2, Burcin Becerik-Gerber3
1,2,3 Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, California
Email: 1kavulya@usc.edu, 2jazizade@usc.edu, 3becerik@usc.edu

ABSTRACT

In laser scanning, the precision of the point clouds (PC) acquisition is influenced by a
variety of factors such as environmental conditions, scanning tools and artifacts,
dynamic scan environments, and depth discontinuity. In addition, object color, object
texture, and scanning geometry are other factors that affect the quality of point
clouds. These factors can affect the overall quality of point clouds, which in turn
could result in a significant impact on the accuracy of as-built models. This study
investigates the effect of object color and texture on the PC quality using a time of
flight scanner. The effect of these factors has been investigated through an experiment
carried out on the Rosenblatt Stadium in Omaha, Nebraska. The outcomes of this
ongoing research will be used to further highlight the parameters that must be taken
into consideration in 3D laser scanning operations to avoid sources of errors that
result from laser sensor, object characteristics, and scanning geometry.

Keywords: 3D laser scanning, point cloud, quality, color, texture, incident angle,
distance

INTRODUCTION

3D laser scanning technology is being increasingly used in the architecture, engineering, and construction (AEC) industry for constructing three-dimensional
virtual representations of buildings and infrastructure. Today, this technology is used
for various types of applications in the AEC industry such as indoor mapping
(Tommaso et al., 2006), project control (Akinci et al., 2002), construction metrology
(Cheok et al., 2001), development of as-built 3D CAD models and building
information models of existing facilities (Arayici, 2007) and resource management
(Gong and Caldas, 2007). A 3D laser scanner emits a laser beam and calculates the
distance between the object and the scanner either by calculating the phase difference
between the emitted and returned signals (phase-based scanners) or calculating the
laser beam travel time (time-of-flight scanners).

There are various sources of errors that may contribute to undesired quality of point
clouds. Scanning errors may result from environmental conditions such as dust or
mist, instrument vibration, thermal expansion, surface reflectivity, and dynamic scan
scenes (Becerik-Gerber et al., 2010). The mixed-pixel phenomenon is another source


of error that causes inaccurate data acquisition. A mixed pixel forms when the laser beam hits surfaces on two or more planes, e.g., when the beam partially strikes a front surface and another surface behind it, so that two ranges are recorded for one point (Tang et al., 2007). Invalid data may also be generated during the scanning process because
of shadows or movement of objects in the scene, which are referred to as noise. The
process of aligning and merging different point clouds from one scene is called
registration. Artifacts known as targets are used to merge multiple point clouds.
Displacement of targets during scanning process, poor target layout design, and errors
in target acquisition algorithm are common sources of registration errors. Modeling
errors are inherently dependent on the scanning and registration errors. Errors or
missing points in the point clouds decrease the quality of the final model. Prior research has addressed different sources of error, including algorithms for detecting the mixed-pixel phenomenon and its effects on final products (Tang et al., 2007), noise filtering, coarse and fine registration (Huber and Hebert, 2003), methods for removing faulty data (Tuley et al., 2005), and identification of the reasons for edge loss in point clouds along with algorithms for its correction (Tang et al., 2009). In
addition, in a recent study the effects of different target types, scanner types, target
layout design, and scanning process on the registration accuracy were investigated
(Becerik-Gerber et al., 2010).

Object properties including surface properties and materials, laser sensor specifications, and relative location of the scanner and objects in the scan scene
(scanning geometry) are other important factors that affect the quality of the point
clouds (Vukašinovi´c et al, 2010). The quality of a point cloud, in the context of this
paper, is defined as a dense point cloud of geometrically detected objects, given the
scan resolution. Presence of different materials, characterized by typical texture
(Tang et al, 2007), surface roughness (Yong-hua et al, 2009 and Lichti and Harvey,
2002), slope of measured surface and the interference of objects (Yong-hua et al,
2009) and reflectivity (Tang et al, 2009), can affect the accuracy of laser distance-
measurement. The optical properties of the surface and the angle of incidence of the
laser beam dictate the amount of diffuse (rough surface) and specular (mirror-like)
reflected light. Geometrically, the relative position of a 3D laser scanner and a
measured surface also influence the measurement results (Vukašinović et al., 2010).
Although the 3D laser scanners send out a constrained pulse of light, the diameter of
the pulse expands (Stone et al, 2004) as it moves outward from the scanner. Accuracy
of point clouds depends on the type of the scanner, the laser ray angle of the
incidence, the optical design of the scanner and the distance itself (Ingensand, 2006).
The error of the single coordinate in a point cloud can be attributed to two types of
errors namely, systematic errors (e.g. calibration) and random errors (Yong-hua et al,
2009; Tang et al, 2009 and Tuley et al, 2005) (e.g. speckle noise, error from
coordinate measuring machines, distance).

Effects of distance and angle of incidence have been the subject of some previous research efforts (Kukko et al., 2008; Vukašinović et al., 2010), though there is still a lack of empirical research focusing on object color/texture and their correlation with scanning geometry. This paper reports findings from an investigation that

focuses on the effects of laser sensor specifications and object color, which are
correlated with the object texture, laser beam incident angle, and the distance between
the scanner and objects. The remaining sections of the paper are structured as
follows. First, object characteristics and scanning geometry that might affect the
noise level and the quality of point clouds are discussed. Then the test bed, the
experiment and its findings are presented.

OBJECTS CHARACTERISTICS AND SCANNING GEOMETRY

Color and Surface Reflection. A 3D laser scanner works with a signal reflected
back from the object surface to the receiving unit. The reflective abilities of the
surface (albedo) affect the signal strength (Ingensand, 2006; Tang et al., 2009) and, as reported by Boehler et al. (2003), white surfaces result in strong reflections, whereas reflection is weak on black surfaces. Accordingly, the detection of colored surfaces depends on the spectral characteristics of the laser beam (green, red, near
infrared) and shiny surfaces pose detection challenges (Boehler et al, 2003). Surfaces
and colors observed in a visible spectrum by naked eye may not necessarily be
detectable by the laser scanners (Becerik-Gerber et al, 2010). Therefore, surfaces of
different reflectivity may result in systematic errors (Boehler et al, 2003). This
reflection retraces the path of the transmitted beam that depends on the object
properties, such as its material and its shape dependent anisotropy, and the scanning
geometry (Soudarissanane, 2007).
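To give a feel for how these factors interact, the sketch below evaluates a deliberately simplified, Lambertian-style return model in which the returned signal falls with lower albedo, larger incidence angle, and greater range. This is a textbook approximation for intuition only, not the radiometric model of any particular scanner.

import math
def relative_return_intensity(albedo, incidence_deg, range_m):
    """Simplified diffuse-return model: I ~ albedo * cos(theta) / range^2.
    albedo in [0, 1]; incidence_deg measured from the surface normal."""
    if range_m <= 0 or not 0 <= incidence_deg < 90:
        return 0.0
    return albedo * math.cos(math.radians(incidence_deg)) / range_m ** 2
# A bright surface at 20 m and normal incidence versus a dark, obliquely hit surface at 45 m.
print(relative_return_intensity(0.90, 0, 20.0))    # comparatively strong return
print(relative_return_intensity(0.18, 80, 45.0))   # orders of magnitude weaker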

Distance (Range). A time of flight scanner calculates the distance by multiplying the speed of light by the time of travel, which means that in order to decrease the pulse
expansion, the velocity of the beam must be increased or the time of travel must be
decreased. During scanning, a laser scanner generates a triangle between the scanner
lens, laser, and object to gather accurate 3D data by the principle of laser
triangulation (Froehlich and Mettenleiter 2004). To obtain the x, y, z coordinates of
an object, the distance between the scanner lens and the laser, also known as the
parallax base, and the angle of the laser as provided by the galvanometer, must be
established. Time-of-flight scanners use two methods for distance measurement
(Bogue, 2010). The first method uses amplitude modulated light and measures the
phase difference between a reference signal and the returned signal to calculate the distance (Lange,
1999). The second method calculates the distance by means of direct measurement of
the runtime of a travelled light pulse using arrays of single-photon avalanche diodes
(Falie and Buzuloiu, 2007). To ensure a larger scan point density, studies (Becerik-
Gerber et al, 2010 and Boehler et al, 2003) show that fixed, paddle or sphere targets
may be used if their precise positions are surveyed with instruments and methods that
are more accurate than those of a laser scanner. While measuring distances, technical
specifications such as scanning speed and spatial resolution (Froehlich and
Mettenleiter,2004), field of view (Ingensand, 2006; Ryde, 2009 and Tuley, 2005) and
accuracies of range measurement (Lichti and Harvey, 2000) must be considered.
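The two ranging principles mentioned above reduce to simple relations, written out below with illustrative numbers (a pulsed round trip of 0.5 microseconds, a 10 MHz modulation frequency); these values are examples only.

import math
C = 299792458.0  # speed of light, m/s
def range_from_time_of_flight(round_trip_s):
    """Pulsed (time-of-flight) ranging: d = c * t / 2 for the round-trip time t."""
    return C * round_trip_s / 2.0
def range_from_phase(phase_rad, modulation_hz):
    """Phase-based ranging on amplitude-modulated light:
    d = (c / (2 * f_mod)) * (phase / (2 * pi)), unambiguous within half a modulation wavelength."""
    return (C / (2.0 * modulation_hz)) * (phase_rad / (2.0 * math.pi))
print(range_from_time_of_flight(0.5e-6))        # ~75 m for a 0.5 microsecond round trip
print(range_from_phase(math.pi / 2, 10e6))      # ~3.75 m for a quarter-cycle shift at 10 MHz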

Angle of Incidence. A laser scanner consists of a laser source and a charge-coupled device (CCD). The laser source sends a laser beam at a defined and incrementally changed angle from one end of a base onto an object, and a CCD camera at the other end of the base detects the laser spot on the object surface. In laser scanning operations,
the performance is affected and limited by the laws of retro-directive reflection (Tang, 2007; Yong-hua et al., 2009; Ingensand, 2006), where the laser pulse irradiates objects, and by the optical properties of materials (Tuley, 2005). When a laser sensor measures a scan scene, its beam rotates horizontally and/or vertically. This establishes a relationship between the incident and reflected beam, which in turn largely determines the accuracy. The different incidence angles of the laser beam on a surface
result in 3D points of varying quality.

EXPERIMENT DESCRIPTION

Experiments were carried out at the Johnny Rosenblatt Stadium as the test bed, which
is located in Omaha, Nebraska and was built in 1947 with a maximum capacity of
23,000 people. To conduct the experiments, the entire exterior stadium façade was
scanned with a time of flight scanner (Figure 1). Objects that had different
homogenous and heterogeneous materials with different colors including brick walls
and columns, red steel columns and trusses, blue steel columns and trusses, and silver
steel flagpoles were present in the scan scene. The curvilinear architecture of the
stadium façade required wide shot scanning, which provided a unique opportunity for
analyzing the effects of different angles and distances.

Figure 1: The Rosenblatt Stadium, exterior façade views

The exterior façade was scanned at four different locations in high-resolution. Each
scan shot, including the equipment setup, target acquisition, and scanning, took about 1 hour. Technical specifications of the time-of-flight scanner used in this study are listed in Table 1.

Table 1: Technical specifications of the scanner (Leica, 2011)
Laser scanning system:
  Type: Pulsed; proprietary microchip
  Color: Green
  Laser Class: 3R (IEC 60825-1)
  Range: 300 m @ 90% albedo; 134 m @ 18% albedo
  Scan rate: Up to 50,000 points/sec
Scan resolution:
  Spot size: From 0 - 50 m: 4 mm (FWHH-based); 6 mm (Gaussian-based)
  Point spacing: Fully selectable horizontal and vertical (360° H, 270° V)

Captured point clouds were extracted and the object detection statuses were defined.
Figure 2 illustrates four scanner locations and two types of steel columns and
flagpoles used in the analysis.

Figure 2: Four scanner locations (A, B, C, D), two types of columns (red steel columns C1-C10 and blue steel columns), and flagpoles

RESULTS

Most of the red steel columns were not detected when the façade was scanned from
scanner locations B and C. However, almost all columns were detected when the
façade was scanned from scanner locations A and D. In Figure 3, the screenshots of
the point clouds are presented. In general, red colored objects produce a very low laser
return intensity for any laser scanner that employs a visible green laser (Hiremagalur
et al., 2007). However, the results show that there is a correlation between the object
texture, distance and angle of incidence and detection of objects. In addition, other
objects with red color such as brick columns which were in the same relative position
as the red steel columns in terms of distance and angle were detected with no issues.
Moreover, blue steel columns and trusses, and flagpoles were also detected from all
scanner locations. In order to determine which factors are more important for
detecting objects with different colors, the analysis of distance and orientation was
carried out and discussed below in detail.

First, the façade was modeled in Autodesk Revit Architecture by using the imported
point cloud data. Then, distances and angles of incidence were calculated based on
the coordinates acquired. Both distances and angles were measured in horizontal
planes. The vertical angle is not important in this analysis because in most cases, the
vertical angle is constant for all the columns.
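A minimal sketch of that calculation, assuming plan (x, y) coordinates taken from the model for the scanner and a point on a column face, plus the face's horizontal unit normal; the function name and the sample numbers are illustrative. The angle is reported so that 90 degrees means the beam strikes the face head-on, matching the convention used below.

import math
def plan_distance_and_beam_angle(scanner_xy, column_xy, face_normal_xy):
    """Horizontal scanner-to-column distance (m) and the angle (deg) between the
    laser beam and the column face, where 90 deg means a head-on (perpendicular) hit."""
    dx, dy = column_xy[0] - scanner_xy[0], column_xy[1] - scanner_xy[1]
    dist = math.hypot(dx, dy)
    beam = (dx / dist, dy / dist)
    cos_from_normal = abs(beam[0] * face_normal_xy[0] + beam[1] * face_normal_xy[1])
    return dist, 90.0 - math.degrees(math.acos(min(1.0, cos_from_normal)))
# Scanner at the origin, column 40 m due north, face pointing back at the scanner.
print(plan_distance_and_beam_angle((0.0, 0.0), (0.0, 40.0), (0.0, -1.0)))  # (40.0, 90.0)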

Figure 3: Detection of red steel columns in the point cloud data from scanner locations A, B, C, and D (detected and not detected columns marked)

The numerical results along with the detection status of the columns are presented in
Table 2.

Table 2: Red column distances and angles

For scanner location A, based on the scanner setting (defined by the operator),
columns C1 to C6 were out of scanning range. However, the rest of the columns (C7 –
C10) were detected successfully. For scanner location D, columns C5 to C10 were
out of scanning range. The rest of the columns were detected except C1. For scanner
locations B and C, all columns were in line of sight and scan range. For scanner
location B, only C5 and C6 were detected. The two detected columns are at the center

of the façade and almost perpendicular to the scanner location axis. The distances for
these two columns were equal to 40 meters. For scanner location C, however, the closest
column is 43 meters from the scanner, yet no column was detected. Nonetheless, all
red brick columns at the same relative locations (under the red steel columns) were
detected from all scanner locations. The same method was carried out for blue steel
columns and silver flagpoles. While the scanner-to-column distances were larger for the blue columns and smaller for the flagpoles than for the red columns, all blue columns and flagpoles were detected. These results indicate that texture, which affects how the laser beam is reflected from the surface, is important for the return intensity of the laser beam. Moreover, for the red steel columns, the lack of detection could be correlated with
critical geometric characteristics in addition to the red color effect. In order to
determine the impact of distances and angles, the results are sorted in Figures 4 and 5.

Figure 4: Relative column distances and detection rates for red steel columns

Figure 5: Relative angles and detection rates for red steel columns

No column that was more than 40 meters from the scanner could be detected. However, angles do not play a significant role at large distances; even columns that were almost at 90 degrees to the scanner were not detected in some cases. It could be concluded that there is no clear relationship between angle alone and detection. Based on this analysis, besides the importance of the surface material/texture (as in the case of the red brick columns), the most impactful factor is the distance, and angles might become important when they are combined with critical distances such as those above 40 meters. At critical distances, angles close to 90 degrees could affect the detection of red objects when the laser source emits a green laser beam.

CONCLUSIONS
The paper presented a test bed that was developed to model how an object's color, scanning distance, and angle of incidence influence point cloud quality. This test bed, with its rich interplay of colors, materials, textures, and geometry, enabled the authors to explore additional factors that the AEC industry ought to pay special attention to
when constructing three-dimensional virtual representations of buildings and
infrastructure. Experimental results indicate that the distance between the laser sensor
and the object with very low laser return intensity is essential in object detection. This
study can inspire future research in defining the standard procedures in scanning
operations for object colors with low laser return intensity.

ACKNOWLEDGMENTS
The authors would like to thank Optira Inc., which provided funding and expertise for
this project. Any opinions, findings, conclusions, or recommendations presented in
this paper are those of the authors and do not necessarily reflect the views of Optira.

REFERENCES
Akinci, B., Garrett, J., Patton, M., (2002). "A vision for active project control using
advanced sensors and integrated project models." Specialty Conference on
FIAAP, ASCE, Virginia Tech, January 23–25 , 386–397.
Anderson, D., Herman, H., Kelly, A. (2005). “Experimental characterization of
commercial flash ladar devices.” In Proceedings of International Conference on
Sensing Technologies.
Arayici, Y., (2007). "An approach for real world data modeling with the 3D terrestrial
laser scanner for built environment." Automation in Construction, 16, 816-829.
Becerik-Gerber, B., Jazizadeh, F., Kavulya, G., Calis, G. (2010). “Assessment of
target types and layouts in 3D laser scanning for registration accuracy.”
Automation in Construction.
Boehler, W., Bordas-Vicent, M., Marbs, A.(2003). “Laser Scanner Accuracy.”
Proceedings of the 19th. CIPA Symposium, ISPRS/CIPA, 696-701.
Bogue, R.(2010).“ Three-dimensional measurements:a review of technologies and
applications.” Sensor Review,102-106.
Cheok, G. S., Stone, W. C., Bernal, J., (2001). "Laser scanning for construction
metrology, National Institute of Standards and Technology." American Nuclear
Society 9th International Topical Meeting on Robotics and Remote Systems,
Seattle, Washington, March 4-8.
Falie, D., Buzuloiu, V. (2007). "Noise characteristics of 3D Time-of-Flight cameras."
In Proceedings of IEEE Symposium on Signals Circuits & Systems (ISSCS),
Iasi, Romania, 229-232.
Franaszek, M., Cheok, G.S., Witzgall, C.(2009). “Fast automatic registration of range
images from 3D imaging systems using sphere targets.” Automation in
Construction, 265-274.
Froehlich, C., Mettenleiter, M, (2004). “Terrestrial Laser Scanning-New Perspectives
in 3D Surveying.” International Archives of Photogrammetry, Remote Sensing
and Spatial Information Sciences, 36, 7-13.
Gong, J., Caldas, C., (2007). "Processing of high frequency local area laser scans for
construction site resource management." Proceedings of the 2007 ASCE

International Workshop on Computing in Civil Engineering, Pittsburgh, PA, 665–6728, July 24–27.
Hiremagalur, J., Yen, K.S., Akin, K.,Bui,T.,Lasky, Y.A., Ravani, B.(2007). “Creating
Standards and Specifications for the Use of Laser Scanning in Caltrans Projects.”
AHMCT Research Report.
Kukko, A., Kaasalainen, S., Litkey, P.(2008). “Effect of incidence angle on laser
scanner intensity and surface data.” Applied Optics, 986-992.
Lange, R. (1999). “Time-of-Flight range imaging with a custom solid-state image
sensor.” In Proceedings of SPIE, Munich, Germany,180-191.
Leica, Spec Sheet, Available: http://hds.leica-geosystems.com/en/Leica-ScanStation-2_62189.htm (Accessed on 01/9/2011).
Li-Chee-Ming, J.G., D. Ciobanu, T. Armenakis, C.(2009). “Generation of three
dimensional photo-realistic models from Lidar and image data.” Science and
Technology for Humanity (TIC-STH), 445-450.
Lichti, D., Harvey B.R. (2002). “The effects of reflecting surface material properties
on Time of Flight laser scanner measurements”. Symposium on Geospatial
Theory, Processing and Applications.
Soudarissanane, S., Van Ree, J., Bucksch, A. and Lindenbergh, R.(2007). “Error
budget of Terrestrial Laser Scanning: influence of the incidence angle on the
scan quality.” Proc. in the 3D-NordOst.
Tommaso, G., Cicirelli, G., Attolico, G., Distante, A., (2006). “Automatic
construction of 2D and 3D models during robot inspection. ” Industrial Robot:
An International Journal, 33 (2006) 387-393.
Tang, P., Huber, D., Akinci, B., (2007). "A comparative analysis of depth-
discontinuity and mixed-pixel detection." Sixth International Conference on 3-D
Digital Imaging and Modeling, Montreal, Canada, Aug 21-23.
Tang, P., Akinci, B., Huber, D. (2009). “Quantification of edge loss of laser scanned
data at spatial discontinuities.” Automation in Construction, 1070-1083.
Tuley, J., Vandapel, N., Hebert, M.(2005). “Analysis and Removal of Artifacts in 3-
D LADAR.” International Conference on Robotics and Automation,Proceedings
of the 2005 IEEE.
Vukašinović, N., Bračun, D., Možina, J., Duhovnik, J. (2010). "The influence of incident angle, object colour and distance on CNC laser scanning." International Journal of Advanced Manufacturing Technology.
Yong-hua, X., Yuan-min, F.,Xiang-ying, Y., Jie,C., Xiao-qing, Z.(2009). “Error
analysis and calibration of 3D laser scanner in surveying in finished stopes.”
World Congress on Computer Science and Information Engineering
The Effective Acquisition and Processing of 3D Photogrammetric Data from
Digital Photogrammetry for Construction Progress Measurement

C. Kim1, H. Son2, and C. Kim3


1 Research Assistant, Dept. of Architectural Engineering, Chung-Ang University, 221 Heukseok-dong, Dongjak-gu, Seoul, Korea 156-756; 82-2-825-5726; 82-2-825-5726; changmin@wm.cau.ac.kr
2 Researcher, Dept. of Architectural Engineering, Chung-Ang University, 221 Heukseok-dong, Dongjak-gu, Seoul, Korea 156-756; 82-2-825-5726; 82-2-825-5726; hjson0908@wm.cau.ac.kr
3 Associate Professor, Dept. of Architectural Engineering, Chung-Ang University, 221 Heukseok-dong, Dongjak-gu, Seoul, Korea 156-756; 82-2-820-5726; 82-2-812-4150; changwan@cau.ac.kr (corresponding author)

ABSTRACT
With the development of digital imaging technology, digital photogrammetry
has found various engineering applications, such as architecture, automotive and
aerospace engineering. Although digital photogrammetry allows the generation of
3D photogrammetric data with high density and resolution, it has not been as popular
in construction as it has in other industries. In particular, acquisition and processing
of 3D photogrammetric data from digital photogrammetry for construction progress
measurement applications is at an early stage, and its feasibility has not been
evaluated. The objective of this research is to propose a method for acquiring and processing 3D as-built data for progress measurement applications using photogrammetry technology. For this purpose, a framework consisting of 3D
photogrammetric data acquisition, 3D photogrammetric data refinement, and 3D
structural components detection is presented. The effectiveness of the proposed
method is verified by evaluating the quality of the processed 3D photogrammetric
data with respect to the density. The preliminary experimental result shows that using
processed 3D photogrammetric data for advanced and automated construction
progress measurement applications is possible.

Keywords: 3D Object Detection; Digital Photogrammetry; Progress Measurement; Support Vector Machine; Tensor Voting


INTRODUCTION
With the development of digital imaging technology, photogrammetry offers
low cost, portable, and accurate methods of obtaining three-dimensional spatial
information. Based on these advantages, photogrammetry has been widely utilized in
various engineering applications such as architecture, automotive, and aerospace
engineering. Further, advances in digital photogrammetric systems have achieved
high density and resolution to the extent it can possibly detect the performance
deviations between 3D as-built data and an as-planned 3D CAD model.
Although photogrammetry offers new opportunities for users to obtain dense
and accurate 3D photogrammetric data, these data can contain noise. The cause of
noise is the mismatch between correspondence pixels in each image (Snavely and
Szeliski 2010). A construction site is an outdoor environment and is cluttered, so,
images obtained from construction sites contain illumination variations, sensor
noises and occluded pixels. These pixels cause a mismatch by existing in only one
image and not corresponding with pixels in other images (Niese et al. 2007). Thus, a
3D photogrammetric data obtained from a construction site can contain a large
amount of noise. Such a noisy data may have a negative effect on the accuracy of
progress measurement; for this reason, processing is needed.
Many studies have been conducted in progress measurement using LADAR
(Laser Detection and Ranging) that does not require complex processing (Shih and
Wang 2004; Bosche 2010). However, the LADAR is not only expensive and time
consuming during data collection but also has limitations of scanner placement
(Golparvar-Fard et al. 2009). These limitations are critical for a progress
measurement application that requires continued acquisition of as-built data.
The objective of this research is to propose a 3D photogrammetric data acquisition and processing method for progress measurement applications, using
photogrammetry technology. The proposed process consists of 3D photogrammetric
data generation, refinement on 3D photogrammetric data, and 3D structural
component detection. The results of the experiment show possibilities for applying
the proposed process to automatic construction progress measurement.

FRAMEWORK FOR ACQUISITION AND PROCESSING OF 3D PHOTOGRAMMETRIC DATA
In this section, the framework for acquisition and processing of 3D
photogrammetric data from digital photogrammetry for construction progress
measurement applications is described (see Figure 1).
The photogrammetric software package Photomodeler Scanner was used to
acquire 3D data of the construction sites. Photogrammetry is based on the processing
of a pair of images of the object to be modeled and can be used to generate a 3D
photogrammetric data. Each point of the 3D photogrammetric data obtained using
photogrammetry has both 3D coordinates (x, y, z) and color information represented
in RGB colorspace.
In 3D photogrammetric data generated using photogrammetry, varying degrees of noise are present because many points do not correspond between the pair of images; thus, photogrammetric techniques are difficult to use

directly and effectively in practical applications. The noise is reduced using tensor
voting algorithm to achieve better 3D photogrammetric data for construction sites.
The next step is 3D structural components detection based on the color model,
using a machine learning algorithm from 3D photogrammetric data for construction
sites. RGB colorspace of the acquired 3D photogrammetric data is converted to HSI
colorspace. Then, in order to detect the structural components based on their color
information, a support vector machine is used. After extracting the structural
components, the 3D photogrammetric data corresponding to them is acquired. The
obtained 3D as-built data of the structural components in progress can be utilized for
advanced and automated construction progress measurement applications by
comparing the obtained data with the as-planned data, such as that obtained using a
3D CAD model. The process is described in detail in the following section.

Figure 1. Acquisition and processing of 3D photogrammetric data.

METHODOLOGY
In this section, the process of acquisition and processing 3D photogrammetric
data from digital photogrammetry is explained with an outdoor experimental result
performed on a construction site where concrete buildings were under construction.
In the section entitled “3D Photogrammetric Data Acquisition for Construction Sites,”
image acquisition issues for acquisition of 3D photogrammetric data for construction
sites using photogrammetry are discussed and the process of generation of 3D
photogrammetric data from construction site images using the Photomodeler Scanner
is introduced. Then, a detailed method for refining 3D photogrammetric data of
construction sites using the tensor voting algorithm is utilized, as illustrated in the
section entitled “3D Photogrammetric Data Refinement.” The final step, described in

the section entitled “3D Structural Components Detection,” is to detect the structural
components in progress, based on the color information using a machine learning
algorithm from 3D photogrammetric data for construction sites.

3D Photogrammetric Data Acquisition for Construction Sites


In order to acquire 3D photogrammetric data for construction sites, the
Photomodeler Scanner, developed by Eos Systems, Inc., was adopted (Eos Systems,
Inc. 2010). The first task in acquisition of 3D photogrammetric data for construction
sites is to acquire a sequence of images using a calibrated camera. In order to obtain
reliable data, one must know how to take images in terms of camera position and
parameters. Positioning the camera involves moving in a circular pattern around the
construction site at a constant distance while ensuring each image has at least a 50% overlap (Arias et al. 2005). Camera parameters, such as focal length, image resolution, zoom, and brightness, should remain fixed while taking images
(Radisevic 2010). Figure 2 shows 45 camera positions acquired with a Nikon D90
with a fixed lens focal length (f = 18 mm). Since the greatest accuracy can be obtained using images with a resolution of over 11 megapixels, images were acquired with a resolution of 12 megapixels (4,288 × 2,848 pixels).

Figure 2. Image acquisition positions.
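The circular acquisition pattern can be sketched as evenly spaced stations aimed at the site center; the 45 stations follow the setup described above, while the center point and the 30 m radius below are illustrative placeholders.

import math
def circular_stations(center_xy, radius_m, n_stations=45):
    """Evenly spaced camera stations on a circle, each heading toward the center
    (an 8 degree step for 45 stations, which helps keep large image-to-image overlap)."""
    cx, cy = center_xy
    stations = []
    for k in range(n_stations):
        a = 2.0 * math.pi * k / n_stations
        x, y = cx + radius_m * math.cos(a), cy + radius_m * math.sin(a)
        heading_deg = math.degrees(math.atan2(cy - y, cx - x)) % 360.0
        stations.append((x, y, heading_deg))
    return stations
print(len(circular_stations((0.0, 0.0), 30.0)))  # 45 stations, 8 degrees apart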

The 3D photogrammetric data generation process in the Photomodeler Scanner consists of three steps: idealization, orientation, and matching. Camera lens
distortion was considered a cause of measurement error (Dai and Lu 2010). In order
to increase reconstruction accuracy, idealization was performed. In this step, the
image distortion caused by the lens was reduced, using information calculated during
the camera calibration process. After the idealization step, images need to be related
to reference points in order to show the software the same objects in two or more
images. In this step, called orientation, the position and angle of the camera were determined.
After determining the position and angle of the camera, 3D coordinates of the pixels
in nine pairs of images were calculated using triangulation in the matching step. The
resulting 3D photogrammetric data for the construction site is composed of
4,879,577 points and is shown in Figure 3.

Figure 3. (a) 3D photogrammetric data for the construction site;
(b) magnified portion of (a).

3D Photogrammetric Data Refinement


3D photogrammetric data contain noise resulting from the mismatching of
points. Thus, in order to obtain a reliable 3D photogrammetric data for construction
sites, tensor voting, proposed by Medioni et al. (2000), was applied. Tensor voting is
an algorithm that is able to estimate the strength and orientation of the normal at each point using spatial information and then segment the 3D photogrammetric data
into three different categories: surface, curve, and point (Reyes et al. 2010).
Tensor voting consists of two steps: tensor representation for data encoding
and tensor voting for communication with neighbors. Each point is represented by
second order symmetric tensors to encode the local orientation of features and their
saliency. The orientation information is propagated from each token to its neighbors
via pre-calculated voting fields through a voting process. Afterward, each point is
represented by a second order symmetric tensor that estimates its features and
saliency. Then, noise in the 3D photogrammetric data can be removed by eliminating
points with lower surface saliency. Through this process, about 22 % of the original
data points were eliminated and 3,794,062 points were preserved. Figure 4 shows the
refinement of the 3D photogrammetric data for the construction site, using the tensor
voting algorithm.

Figure 4. (a) Refined 3D photogrammetric data; (b) magnified portion of (a).
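Full tensor voting is beyond a short listing, but the "drop points with low surface saliency" idea can be approximated with a local-PCA planarity score, as in the rough stand-in below (assuming NumPy and SciPy are available; the neighborhood size and the 78% keep fraction, matching the roughly 22% removal reported above, are parameter choices, and this is not the Medioni et al. algorithm itself).

import numpy as np
from scipy.spatial import cKDTree
def surface_saliency_filter(points, k=16, keep_fraction=0.78):
    """Rough stand-in for saliency-based denoising: score each point by how planar its
    k-nearest neighborhood is, then keep the most surface-like fraction of the cloud.
    points: (N, 3) array of x, y, z coordinates."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    saliency = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        nb = points[nbrs] - points[nbrs].mean(axis=0)
        eig = np.linalg.eigvalsh(nb.T @ nb)               # ascending eigenvalues
        saliency[i] = 1.0 - eig[0] / (eig.sum() + 1e-12)  # close to 1 for flat patches
    keep = saliency >= np.quantile(saliency, 1.0 - keep_fraction)
    return points[keep]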

3D Structural Components Detection


After refinement of 3D photogrammetric data for construction sites, it is
necessary to detect the structural components desired for construction progress
measurement applications. However, detecting structural components using 3D data
alone is difficult, since 3D photogrammetric data for construction sites are complex
and unstructured. Therefore, a color model-based method using the machine learning
algorithm was employed to detect the 3D structural components, particularly those
composed of concrete.
The RGB colorspace of acquired 3D photogrammetric data was converted to
the HSI colorspace. A support vector machine was then applied as a classifier to
distinguish between structural components composed of concrete and other objects.
The extracted features were combined with corresponding 3D photogrammetric data.
Finally, 2,917,514 points were detected by the proposed method as the as-built concrete structural components. Figure 5 shows the 3D structural components
detection result.

Figure 5. (a) 3D structural components detection result;
(b) magnified portion of (a).
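A sketch of this color-based step is given below, using an RGB-to-HSI conversion and scikit-learn's support vector classifier; the training samples, labels, and parameter choices are assumptions for illustration and do not reproduce the authors' trained model.

import numpy as np
from sklearn.svm import SVC
def rgb_to_hsi(rgb):
    """Convert an (N, 3) array of RGB values in [0, 1] to hue, saturation, intensity."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - rgb.min(axis=1) / (intensity + 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    hue = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b > g, 2.0 * np.pi - hue, hue)
    return np.column_stack([hue, saturation, intensity])
def detect_concrete(points_xyz, points_rgb, train_rgb, train_labels):
    """Train an SVM on labeled HSI samples (1 = concrete, 0 = other) and return the
    3D coordinates of the points classified as concrete."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(rgb_to_hsi(train_rgb), train_labels)
    return points_xyz[clf.predict(rgb_to_hsi(points_rgb)) == 1]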

VERIFICATION
In progress measurement applications, in order to utilize the processed 3D
data, high-quality 3D as-built data are required to both ensure a high level of detail of
the structural components and to better interpret and compare the differences
between the as-built data and that obtained through 3D CAD models. In this section,
the quality of the processed 3D photogrammetric data is tested with respect to the
density to show the effectiveness of the proposed method.
Figure 6(a) shows the as-planned 3D CAD model and Figure 6(b) shows the
3D CAD model overlapping with the corresponding as-built 3D data of the structural
components. The density of the 3D processed data is evaluated by calculating the
number of points per m2 for ten parts obtained from the proposed method [marked by
the red boxes in Figure 6(b)].

Figure 6. (a) The as-planned 3D CAD model; (b) the 3D CAD model
overlapping with the corresponding 3D as-built data.

The density of the ten parts is depicted in Table 1. For the 3D data acquired
and processed using the proposed method, on average, approximately 1,590 points
per m2 were achieved at a range of about 50 m. Compared to a high density laser
scanner (e.g., a Trimble™ GX 3D laser scanner), which produces 3D data with
approximately 1,110 points per m2 at a range of about 50 m, the obtained data has an
acceptable level of quality.

Table 1. The density of 3D as-built data of the structural components.


Parts No. No. of Points/m2 Parts No. No. of Points/m2
1 1,181 6 2,052
2 931 7 1,725
3 3,216 8 2,176
4 1,562 9 984
5 528 10 1,580

CONCLUSIONS AND RECOMMENDATIONS


Digital photogrammetry seems to be a promising way to generate 3D photogrammetric data from images acquired using photogrammetric techniques. It could be a useful input to construction progress measurement applications. This paper presented a framework for the acquisition and processing of 3D photogrammetric data for construction progress measurement applications.
After acquisition of 3D photogrammetric data for construction sites using the
Photomodeler Scanner, the proposed method allows for the refinement of the
obtained 3D photogrammetric data. Then, one can effectively extract the 3D as-built
data of the structural components. The effectiveness of the proposed method for
acquisition and processing of 3D photogrammetric data for construction sites was
verified by calculating the point density of the processed 3D photogrammetric data.
The preliminary experimental result demonstrates the feasibility and potential
of 3D photogrammetric data acquisition and processing using close-range
photogrammetric techniques for construction progress measurement applications.
This experiment showed that the processed data can be utilized in the comparison of
3D as-built structural components data with that of the 3D CAD model for advanced

and automated construction progress measurement applications. Future research will be conducted to verify the quality of the processed data in terms of accuracy. In
addition, the method will be extended to allow for matching and comparing the
obtained 3D as-built data with the as-planned 3D CAD model to automatically assess
the construction progress.

ACKNOWLEDGEMENTS
This research was supported by Basic Science Research Program through the
National Research Foundation of Korea (NRF) funded by the Ministry of Education,
Science and Technology (2010-0023229).

REFERENCES
Arias, P., Herraez, J., Lorenzo, H., and Ordonez, C. (2005). “Control of structural
problems in cultural heritage monuments using close-range photogrammetry
and computer methods.” Computers & Structures, 83(21-22). 1754–1766.
Bosche, F. (2010). “Automated recognition of 3D CAD model objects in laser scans
and calculation of as-built dimensions for dimensional compliance control in
construction.” Advanced Engineering Informatics, 24(1), 107–118.
Dai, F. and Lu, M. (2010). “Assessing the accuracy of applying photogrammetry to
take geometric measurements on building products.” Journal of Construction
Engineering and Management, 136(2), 242–250.
Eos Systems, Inc. (2010). http://www.photomodeler.com/index.htm, last accessed on
December 31 2010.
Golparvar-Fard, M., Pena-Mora, F., and Savarese, S. (2009). “D4AR – A 4-
dimensional augmented reality model for automating construction progress
monitoring data collection, processing, and communication.” Journal of
Information Technology in Construction, 14, 129–153.
Medioni, G., Lee, M., and Tang, C. (2000). “A computational framework for
segmentation and grouping.” Elsevier Science, New York, NY.
Niese, R., Al-Hamadi, A., and Michaelis, B. (2007). “A novel method for 3d face
detection and normalization.” Journal of Multimedia, 2(5), 1–12.
Radisevic, G. (2010). "Laser scanning versus photogrammetry combined with manual post-modeling in Stecak digitization." Proc., 14th Central European Seminar on Computer Graphics, Budmerice, Slovakia.
Reyes, L., Medioni, G., and Bayro, E. (2010). “Registration of 2D points using
geometric algebra and tensor voting.” Journal of Mathematical Imaging and
Vision, 37(3), 249–266.
Shih, N.J. and Wang, P.H. (2004). “Point-cloud-based comparison between
construction schedule and as-built progress: Long-range three-dimensional
laser scanner’s approach.” Journal of Architectural Engineering, 10(3), 98–
102.
Snavely, N., Simon, I., Goesele, M., Szeliski, R., and Seitz, S.M. (2010). “Scene
reconstruction and visualization from community photo collections.” Proc.
IEEE, 98(8), 1370–1390.
Data Transmission Network For Greenhouse Gas Emission Inspection

Qinyi Ding1, Xinyuan Zhu2, Qingbin Cui3


1 PhD student, Department of Civil & Environmental Engineering, University of Maryland, College Park, MD 20742, qding@umd.edu
2 Research Assistant, Department of Civil & Environmental Engineering, University of Maryland, College Park, MD 20742, zxyemily@umd.edu
3 Assistant Professor, Department of Civil & Environmental Engineering, University of Maryland, College Park, MD 20742, cui@umd.edu (Corresponding Author)

ABSTRACT
Exhaust from construction equipment is one of the major sources of
greenhouse gas emissions in the construction industry. Collecting, monitoring, and managing equipment emissions in a real-time environment will help ensure contractors' compliance with applicable emission regulations and contractual
requirements. Existing emission compliance systems, however, fail to address the
complexity of construction operations. This paper presents an ad hoc network
optimization model for construction equipment emission inspection. Equipment
specific emission data is collected by a device attached to each vehicle and
transmitted through the ad hoc network to reach the data processing server. The
optimal data transmission mechanism is modeled for minimizing data loss during
transmission. The paper also demonstrates the highly efficiency and accuracy of the
model through a simulation of various equipment distribution patterns and a
discussion of relaxed transmission capacity.

INTRODUCTION

There is general scientific consensus that global warming is occurring and that the warming is primarily due to anthropogenic activities that have grown since pre-industrial times (Pachauri,
2007). According to the EPA’s GHG emission report (U.S. EPA, 2008), the
construction sector produced 6% of total U.S. industrial GHG emissions in 2002, and
has the third highest GHG emissions among the industrial sectors. The major source
of the construction sector is fossil fuel combustion (76%), which is the use of fossil
fuels, such as gasoline, diesel, or coal, to produce heat or run equipment.
In order to control the GHG emission of the whole construction project
lifecycle, especially the construction stage, US EPA has put forward many related
programs or regulations, such as Diesel Emissions Reduction Program (U.S. EPA,
2005), Idling Reduction Program (U.S. EPA, 2010), and Clean Fuel Program (U.S.
EPA, 2000). Project owners usually incentivize their contract bidders by producing
green contracts involving the emission control technology or strategy packages as
required or optional provisions.
However, it is practically difficult to generate a measurable and enforceable
operation pattern for green construction equipment. It is even harder to ensure at the


project level that contractors comply with regulations or provisions in the
contract. Therefore, a real-time construction project monitoring system for GHG
emission is increasingly important to:
1) Help to collect information about the GHG emission during the whole lifecycle
of the project so as to correctly estimate the total actual impact to the field
environment.
2) Help to provide baseline data for the construction process so as to establish a
standard for green contract performance evaluation.
3) Monitor contractor’s construction equipment behavior so as to ensure that the
process is complied with provisions in the green contract.
4) Timely provide contractors with reminders or warnings once abnormal
information is detected by the inspection system.
In this paper, we consider a general construction project where different fleet
equipment is operating within the construction field. Each piece of equipment has a
device installed so as to collect and transfer the emission data to the central
processing server. An ad hoc wireless sensor network is designed for better collecting
information. An algorithm of data transmission protocol between devices is
established for minimizing data loss during transmission. A simulation of various
equipment distribution patterns is then conducted to demonstrate the efficiency and
accuracy of the model. Different influence factors are discussed in the end for the
model limitation and future improvement.

DESIGN OF THE INSPECTION NETWORK

In order to establish a real-time network, we will need to design a data


transmission scheme with devices installed in construction equipment. Unlike static systems, construction equipment is usually moving within a limited field range, which makes it infeasible to resort to wired connections between devices. Therefore, we adopt the idea of a wireless sensor network to realize the data
transmission mechanism.
A wireless sensor network could be either centralized or decentralized (Toh,
2002). In a centralized network each sensor communicates with base stations and the
base stations are responsible for connecting and routing. In a decentralized network
like an ad hoc network there is no pivot above the sensors. Instead, each sensor is
responsible for routing and data transmission with other sensors.
Such ad hoc networking is more suitable for a construction project context. In
a construction site, we expect a good portion of the equipment (cars, tractors, bulldozers, etc.) to be moving over a wide field area, sometimes randomly. It might be hard
to design a base station layout when too many base stations raise cost substantially
and too few base stations reduce coverage. With ad hoc networking, we don’t need to
worry about base station installation, and we could take advantage of free public
spectrum to further reduce cost.
We try to take advantage of ad hoc networking to realize real time greenhouse
gas monitoring. To set up, we need one device for each piece of equipment in use,
and a data server to receive data from the device with all post data analysis
functionality. Each device consists of a sensor module, a receiving module, a
transmitting module, a controlling module, and a positioning module (Mohapatra &
Krishnamurthy, 2004). The functionalities of all the modules are:
1) Sensor module: collects greenhouse gas emission data. The sensor module
generates monitoring data at a certain rate and outputs it to the transmitting module. Non-
dispersive Infrared (NDIR) sensors are most often used for measuring the
concentration of CO2 (Lang, Wiemhöfer, & Göpel, 1996) by testing the
wavelength characteristics of the infrared signal.
2) Receiving module: receives data within a certain range from other devices at a
certain rate. In an ad hoc network, each device should be able to receive data from
other devices when those devices cannot reach the data server directly or when routing
through this device incurs less transmission loss.
3) Transmitting module: sends data within a certain range to other devices or the data
server at a certain rate. Since we want a real-time ad hoc network, the transmitting
rate should be large enough to consume the data generated by the sensor module
and to accommodate other far-end devices at the same time.
4) Controlling module: in charge of routing control. The controlling module
maintains a distribution picture of all available devices within its range. It reads the
information from the positioning module to get the distance between each device and
itself. From this information the controlling module designs a transmission
strategy in order to transmit all local data out. The algorithm designed in this paper
focuses on this module.
5) Positioning module: provides the absolute position of the device. We need the
position of the device to generate the distribution picture for the controlling module.
Commonly, a positioning system like GPS will work (Xu, 2007).
With all these modules each device could send and receive data from the
devices/server within its transmission range. If a device is out of the range of the
server, it has to find a multi-hop route to the server via several devices and transmit
data to them to finally reach the server.

INSPECTION NETWORK TRANSMISSION MECHANISM

Assumptions
In order to design a feasible and reasonable wireless ad hoc sensor network,
several assumptions need to be made for the system. Notations are shown in Table 1.
1) There are N pieces of equipment (with N devices) and a data server in the network.
The data server can only receive data from the equipment, while the devices can both
send and receive data from their peers.
2) To send data across a distance d, there is a data loss proportional to d. The data loss
is regarded as the transmission cost in our model.
3) Each device generates emission data at a constant rate (g).
4) Each device has a transmission capacity limit (tc); the total flow sent out from the
device cannot exceed this limit.
5) Each device and the data server have a transmission range. They can only
transmit data to other devices within this range.
6) The equipment is moving in the construction field. It is possible that at a
certain time point one device is within the range of another and later it is not.
7) All devices are synchronized at any time. They all keep the same picture and
information at a certain time about how all the devices are distributed.
Table 1 Table of Notation for the Transmission Model
N Number of equipment
Xij Transmission flow from node i to node j
Xik Transmission flow from node i to server or dummy node
Cij Unit data loss of transmission flow from node i to node j
Cik Unit data loss of transmission flow from node i to server or dummy node
g Data flow generated by each device
tc Total transmission capacity for each device
Transmission Model
Based on the assumptions made for the transmission mechanism, we could
formulate the problem as:
Minimize   Σ_i Σ_j C_ij · X_ij + Σ_i Σ_k C_ik · X_ik                              (Eqn 1)

Subject to:
Σ_j X_ij + Σ_k X_ik − Σ_j X_ji = g,   for each device i = 1, ..., N               (Eqn 2)
Σ_j X_ij + Σ_k X_ik ≤ tc,   for each device i = 1, ..., N                         (Eqn 3)
X_ij ≥ 0,  X_ik ≥ 0
First of all, it is necessary to introduce a dummy node in the model for
potentially imbalanced nodes. If a device cannot find a way to transmit data out to other
devices, it can always send to the dummy node. The total flow going into the dummy
node represents the total data loss of the system. Of course, the dummy node does not
send out any data.
With this being said, we have two types of data transmission cost. The first
one is caused by distance and is proportional to the distance. The second one is the
total amount of data lost, which is the total flow sent to the dummy node.
Therefore the objective function Eqn 1 is to minimize the total cost of the system.
Equation 2 is the flow balance constraint. For each node, the flow sent out
to other nodes (including the server and the dummy) should balance the data
it generates plus the flow it receives from other nodes.
Equation 3 is the transmission capacity constraint. For each node, the total
flow it sends out should be under a certain transmission capacity limit tc.
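For illustration only, the model of Eqns 1-3 can be written as a standard linear program and handed to an off-the-shelf solver. The sketch below is not the authors' implementation; it assumes the cost matrix c has one column per device plus one column for the server and one for the dummy node, and it reuses the paper's notation (g for the generation rate, tc for the capacity limit).

import numpy as np
from scipy.optimize import linprog

def solve_transmission(c, g, tc):
    # c: (n, n+2) cost array; columns 0..n-1 are devices, n the server, n+1 the dummy
    n, m = c.shape
    nv = n * m                                   # one variable X[i, j] per (device, target)

    # Flow balance (Eqn 2): out-flow minus in-flow from other devices equals g
    A_eq = np.zeros((n, nv))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0         # everything device i sends out
        for k in range(n):
            if k != i:
                A_eq[i, k * m + i] -= 1.0        # flow device i receives from device k
    b_eq = np.full(n, float(g))

    # Capacity (Eqn 3): total out-flow of each device bounded by tc
    A_ub = np.zeros((n, nv))
    for i in range(n):
        A_ub[i, i * m:(i + 1) * m] = 1.0
    b_ub = np.full(n, float(tc))

    res = linprog(c.ravel(), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None))
    return res.x.reshape(n, m), res.fun          # optimal flows and total cost

Given a cost matrix and the parameters g and tc of the next section, the returned flow array should produce transmission patterns of the kind reported later in Tables 3 and 4.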

NUMERICAL ANALYSIS

In order to examine the effectiveness and efficiency of the model, a simulation
is conducted at different time points. The simulation depicts a general
construction site with normal equipment activities, and typical locations of the
equipment are designed to represent normal or extreme cases for the network
transmission.
Basic Parameters
We consider a construction field with dimensions of 3 miles by 2 miles. There
are altogether 6 pieces of equipment moving in the field, represented by nodes 1 to 6. The data
server and the dummy node are two additional nodes.
The data server, as well as all devices, has a transmission range L = 1 mile.
The sensor module on each device generates information at a rate of g = 2 units/s.
Each device transmits data to other devices, including the server, within a
flow capacity limit of tc = 5 units/s.
Absolute Positions
Since the equipment is moving within the construction field, the absolute
position of each device changes with time. Therefore, we need to design a series
of positions (x, y) for each device. We consider the following four special cases;
other general conditions can be regarded as combinations of these cases.
1) All equipment is in the range of the server
2) Only one piece of equipment is out of the range of the server
3) Only two pieces of equipment are in the range of the server
4) One piece of equipment is at the very end of the field and no other equipment is in
its range
Since the server is not moving, and is generally set by the side
of the construction field, we set the static server position as (0,3). Hence the positions
of the 6 pieces of equipment are given as:
Table 2 Four Cases of Locations for Six Construction Equipment
Case 1 Case 2 Case 3 Case 4
Node x y x y x y x y
1 2.2 0.1 2.6 0.5 2.6 0.8 2.2 0.4
2 2.3 0.6 2.1 0.2 1.8 0.9 2.6 0.9
3 2.5 0.8 2.9 0.9 2.2 0.5 1.8 1.4
4 2.8 0.9 2.3 0.7 1.4 1.4 1.4 0.5
5 2.9 0.4 2.8 0.2 1 0.5 1 1.1
6 2.5 0.3 2 1.2 0.6 1.2 0.2 1.8

Cost Coefficients
Then we need the cost coefficients between each pair of nodes to calculate
the data loss. As discussed in the model, if a certain node is in the transmission range
of another node, the cost of the flow is proportional to the distance d between them,
and we simply set the multiplier to 1. If a device is out of the range of another device,
the cost equals a very large number, which we set as M1=1000. Since a node is not
allowed to send flow to itself, we set the cost of Xii as cii=1000. Meanwhile, since
the flow sent to the dummy node represents the information that is lost, we need to
allocate a fairly large cost to those flows, but smaller than the out-of-range cost. We
set it as M2=500. Therefore, the cost coefficients can be written as:
set it as M2=500. Therefore, the cost coefficients could be written as:
, 1, 1, . . , 6, 0, . . , 6,
1000, 1, 1, . . , 6, 0, . . , 6
Eqn 4
1000,
500,
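As a concrete illustration (not the authors' code), the sketch below builds this cost matrix for Case 1 of Table 2. The server coordinates are an assumption: the paper states (0, 3), but the Case 1 device positions are only within the 1-mile range of a server at (3, 0), so the sketch assumes the coordinates were transposed.

import numpy as np

L_RANGE, M1, M2 = 1.0, 1000.0, 500.0
server = np.array([3.0, 0.0])                     # assumed; see note above
devices = np.array([[2.2, 0.1], [2.3, 0.6], [2.5, 0.8],
                    [2.8, 0.9], [2.9, 0.4], [2.5, 0.3]])   # Case 1, Table 2

n = len(devices)
c = np.full((n, n + 2), M1)                       # default: out-of-range penalty
for i in range(n):
    for j in range(n):
        d = np.linalg.norm(devices[i] - devices[j])
        if i != j and d <= L_RANGE:
            c[i, j] = d                           # in-range device-to-device cost
    d_s = np.linalg.norm(devices[i] - server)
    if d_s <= L_RANGE:
        c[i, n] = d_s                             # in-range device-to-server cost
    c[i, n + 1] = M2                              # dummy node (data loss) cost

The resulting matrix can be passed directly to the linear program sketched after Equation 3.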
DISCUSSION OF SIMULATION RESULTS

General Cases
Given the parameters, the transmission pattern could be solved through the
model:
Table 3 Simulation Result
X21 X31 X41 X51 X61 X32 X42 X52 X62 X43 X53 X63 X54 X64 X65
Case 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Case 2 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0
Case 3 2 0 0 0 0 -3 2 1 0 0 0 0 0 0 0
Case 4 0 0 3 0 0 3 0 0 0 0 1 0 1 0 0
X1,s X2,s X3,s X4,s X5,s X6,s X1,d X2,d X3,d X4,d X5,d X6,d
Case 1 2 2 2 2 2 2 0 0 0 0 0 0
Case 2 4 2 2 2 2 2 0 0 0 0 0 0
Case 3 4 0 5 0 0 0 0 0 0 0 1 2
Case 4 0 5 5 0 0 0 0 0 0 0 0 2

Case one: all devices are located within the range of the data server (Figure 1).
In this case, since the cost coefficients are equal to the distance between each device and
the data server, each device transmits its own data (2 units) to the data server. There is no
data flow among the devices or to the dummy node.
Case two: device #6 is located outside of the range of the data server (Figure
2). Since device #6 cannot transmit data to the data server directly, it has to transmit its
data to device #1, and device #1 then sends all incoming data (4 units) to the data
server. All other devices transmit data directly to the data server.

Figure 1. Case One Figure 2. Case Two


Case three: only devices #1 and #3 are located within the range of the data
server (Figure 3). Device #2 is the bottleneck here since devices #4, #5, #6 have to
pass through it to reach the data server. However, with the capacity limitation, device
#2 is not able to transmit all the data; hence there is a data loss of 3 units (data sent to the
dummy). Device #2 distributes the data it receives to devices #1 and #3.
Case four: device #6 is located outside the range of any other device (Figure
4). In this case, device #6 simply has no choice but to lose all its data. Devices #1
and #2 have used up all of their transmission capacity of 5 units.
Figure 3 Case Three Figure 4 Case Four


Improved Cases with Relaxed Capacity Limit
There are a number of factors that could affect the final transmission pattern,
e.g., the cost coefficients, the device distribution (distance matrix), and the ratio of data
generation rate to transmission capacity. In this section we fix the other
parameters and discuss how increasing transmission capacity could improve the
system. We take cases three and four as examples. Relaxing the capacity limit from 5
to 8 units/s, the new transmission patterns are solved in Table 4.
Table 4 Simulation Result for Relaxed Capacity
X21 X31 X41 X51 X61 X32 X42 X52 X62 X43 X53 X63 X54 X64 X65
Case 3 2 0 0 0 0 -6 4 2 0 0 0 0 0 2 0
Case 4 0 0 4 0 0 2 0 0 0 0 0 0 2 0 0
X1,s X2,s X3,s X4,s X5,s X6,s X1,d X2,d X3,d X4,d X5,d X6,d
Case 3 4 0 8 0 0 0 0 0 0 0 0 0
Case 4 6 4 0 0 0 0 0 0 0 0 0 2

For case three, now with sufficient capacity, devices #4, #5 and #6 are able to
transmit all their data to device #2, which redistributes the flow to devices #1 and #3
(Figure 5). The data loss of the system is reduced.
Similarly, for case four, with the relaxation of the capacity limit, the system takes
more advantage of the low-cost path (device #5 sends data through 5–4–1–Server
instead of 5–3–2–Server) (Figure 6). However, increasing capacity cannot help
device #6, whose data are still lost because it is out of range.

Figure 5. Case Three with Relaxed Capacity Figure 6. Case Four with Relaxed Capacity
CONCLUSIONS AND FUTURE WORK

In this paper, we consider a real-time ad hoc wireless network for construction
project GHG emission inspection. The network transmission model is established.
Through solving cases with different equipment distributions and transmission
capacities, we draw the following conclusions:
1) The programming model adequately depicts the data transmission
network, and the results are in accordance with our expectations.
2) Four cases of equipment distribution positions are discussed. The data
transmission pattern depends on the cost coefficients, the device distribution (distance
matrix), and the ratio of data generation rate to transmission capacity.
3) Relaxing the capacity limitation can reduce data loss in transmission. However,
it cannot improve the "out of range" situation.
With basic research results from this paper, we would like to continue our work
through the following aspects in the future:
- For this project we only discussed simple cost coefficients which are proportional
to distance. In a more realistic model we should use different coefficients e.g.
exponential in distance.
- We could consider a storage module which would help to address the “out of
range” issue and further improve the network transmission performance.
- We would like to cover more pieces of equipment to better simulate a practical
construction field. With more equipment in the network, the transmission pattern will
change a lot, and the analysis of improvement factors will be more complicated.
- All the analysis is based on fixed time points. A real-time animation would be more
interesting and would make the model easier to understand.
- To really examine the model, it will be helpful to do some real experiments. We
could also conduct some post data analysis like operation pattern recognition.

REFERENCES

Lang, T., Wiemhöfer, H.-D., & Göpel, W. (1996). Carbonate Based CO2 Sensors with High
Performance. Sensors and Actuators B: Chemical , 34 (1-3), 383-387.
Mohapatra, P., & Krishnamurthy, S. (2004). Ad hoc networks, Technologies and Protocols.
California: Springer.
Pachauri, P. K. (2007). Acceptance Speech for the Nobel Peace Prize Award to the
Intergovernmental Panel on Climate Change(IPCC).
Toh, C.-K. (2002). Ad Hoc Mobile Wireless Networks: Protocols and Systems.
U.S. EPA. (2000). Clean Fuel Fleets. Retrieved from http://www.epa.gov/otaq/cff.htm
U.S. EPA. (2008). Quantifying Greenhouse Gas Emissions from Key Industrial Sectors in the
United States. http://www.epa.gov/ispd/pdf/greenhouse-report.pdf.
U.S. EPA. (2005). Subtitle G—Diesel Emissions Reduction. In ENERGY POLICY ACT OF
2005 (pp. 246-252). http://www.epa.gov/OUST/fedlaws/publ_109-058.pdf.
U.S. EPA. (2010). Technologies, Strategies and Policies: Idling Reduction. Retrieved from
http://www.epa.gov/SmartwayLogistics/transport/what-smartway/idling-reduction-
tech.htm
Xu, G. (2007). GPS Theory, Algorithms and Applications, 2nd edition. Springer.
Wearable Physiological Status Monitors for Measuring and Evaluating
Worker’s Physical Strain: Preliminary Validation

Umberto C. Gatti1, Giovanni C. Migliaccio2, and Suzanne Schneider3


1
Department of Civil Engineering, The University of New Mexico, MSC01 1070
Albuquerque, NM 87131; PH (505) 340-8927; FAX (505) 277-1988; email:
umbertog@unm.edu
2
Department of Construction Management, University of Washington, Box 351610,
Seattle, WA 98195; PH (206) 685-1676; email: gianciro@uw.edu
3
Department of Health, Exercise and Sport Science, The University of New Mexico,
MSC042610, Albuquerque, NM 87131;PH(505)277-3795; email: sschneid@unm.edu

ABSTRACT
Construction activities are usually physically demanding and performed in
ubiquitous, highly variable, and often harsh environments. Excessive physical strain
reduces productivity and contributes to inattentiveness and accidents. Therefore, a monitoring system
able to assess workers’ physical strain may be an important step towards better safety
and productivity management. Previous efforts to assess construction workers’
physical demand relied on instrumentation that hindered workers’ activities.
However, worker’s physical strain can now be monitored by recently-introduced,
non-intrusive Physiological Status Monitors (PSMs). We have investigated three
PSMs to assess if they can effectively monitor a person during activities similar to
construction workforce’s dynamic activities. Comparing PSMs’ and standard
laboratory instruments’ measurements, we found that two of the selected PSMs are
mostly reliable and accurate. These preliminary results demonstrate the PSMs’
effectiveness in monitoring subjects during dynamic activities and show promise that
they can be successfully implemented to monitor construction workers’ physical
strain.

INTRODUCTION
Even though progress in construction equipment and workplace ergonomics
has reduced construction workers' physical strain, many construction activities are
still physically demanding and have to be accomplished in challenging and harsh
environments. In fact, the construction work environment not only comprises heavy
lifting and carrying, pushing and pulling, but also vibrations and awkward work
postures (Hartmann & Fleischer, 2005). Anecdotal evidence suggests that physically
demanding work, safety and productivity are related (Abdelhamid & Everett, 2002;
Bouchard & Trudeau, 2008; Garet et al., 2005). Hence, the measure of physical strain
for construction activities is a crucial issue in managing productivity and preserving
the workforce’s health and safety.
Numerous studies have focused on the assessment of workers' physical
demands. Several authors conducted studies comparing workers employed in
different trades or occupations, such as iron and steel industry workers (Kang, Woo, &
Shin, 2007), firefighters (Elsner and Kolkhorst, 2008), and choker setters (Kirk and
Sullman, 2001). Few studies on construction workforce are available (Abdelhamid
and Everett, 2002; Faber et al. 2009; Turpin-Legendre and Meyer, 2003).
Nevertheless, most of these studies present critical drawbacks. In some studies the
measuring equipment was clumsy and uncomfortable; therefore it hindered the
subjects during routine activities (Abdelhamid and Everett, 2002; Elsner and
Kolkhorst, 2008). In other studies the techniques selected to evaluate the physical
demands were suitable for only a small number of subjects or they could not monitor
the subjects continuously (Turpin-Legendre and Meyer, 2003). However, physical
strain can now be monitored using innovative and non-invasive physiological
monitoring technologies, called Physiological Status Monitors (PSMs), which are
able to continuously monitor workers in a remote and automated way. They are
comfortable and do not hamper the wearer during any type of activity. Thus, they can be
worn for several hours without interruption. PSMs have already been used to monitor
patients in remote healthcare, firefighters, miners, soldiers, and athletes. However, to the
best of our knowledge there are no studies assessing PSMs' reliability during
dynamic activities similar to the construction workforce's routine activities. Hence, the
aim of this paper is to evaluate three commercially available PSMs to assess if they
can effectively monitor a person in simulated construction activities. Initially, a brief
description of the selected PSMs is provided. Then, the selection of the monitored
parameters is explained and the different experiments are described. Finally, the
collected data are discussed and conclusions are drawn from these preliminary results.

METHODOLOGY
Several techniques and methods have been developed to assess physical
strain, including heart rate monitoring, rating of perceived exertion, oxygen
consumption, and motion sensors. In particular, it has been shown that Heart Rate
(HR) monitoring is an effective method in applied field studies (Kirk and Sullman,
2001). However, HR monitoring presents some limitations. The main issue is that
several factors not related to physical activity can greatly affect HR, reducing its
reliability in the assessment of physical strain. Motion sensors, such as
accelerometers, have also been used to monitor physical strain because body
accelerations are directly proportional to muscular forces (Melanson and Freedson,
1996). Unfortunately, they are effective for repetitive activities (e.g., walking) but not
for complex, construction-type activities (e.g., carrying a load or walking on a gradient).
Nevertheless, coupling HR and accelerations can enhance the physiological strain
estimation’s accuracy. Motion sensors complement HR monitoring by differentiating
between HR changes caused by physical activity or by other factors. To evaluate
PSMs’ reliability in monitoring physical strain, it is necessary to assess their accuracy
in measuring HR and accelerations. The following sections include details on the
assessments for these two parameters. While some of the tested PSMs are also able to
monitor Breathing Rate, posture and skin temperature, this paper discusses results
from tests of HR and accelerations.
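As an illustration of this coupling idea only (it is not the method used in this study), paired HR and acceleration epochs could be screened so that an elevated HR accompanied by little body movement is flagged as possibly unrelated to physical activity; the function name and thresholds below are hypothetical.

import numpy as np

def flag_non_activity_hr(hr_bpm, acc_activity, hr_rest=70.0,
                         hr_factor=1.3, acc_threshold=1.0):
    # True for epochs where HR is elevated but measured movement is low
    hr = np.asarray(hr_bpm, dtype=float)
    acc = np.asarray(acc_activity, dtype=float)   # e.g., acceleration above gravity, m/s^2
    return (hr > hr_factor * hr_rest) & (acc < acc_threshold)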
Physiological Status Monitors (PSMs)
Three PSMs were selected (see Table 1 and Fig. 1): the Zephyr BioHarness BT (BH-
BT), the Zephyr HxM, and the Hidalgo EQ-01. All these devices present three main parts: the
monitoring unit (i.e., the sensor electronics module and monitoring belt worn around
the chest), the communication unit, and viewing and analysis software. Via
Bluetooth, BH-BT and EQ-01 can either transmit live data to a computer or work as
data loggers for several hours, whereas HxM can only transmit live data to portable
devices (e.g., cell phones or PDAs). Manufacturer-provided software is available for
BH-BT and EQ-01, whereas third-party applications for various mobile platforms are
available for HxM. For this project a smartphone application, Run.GPS
(eSymetric GmbH, Germany), was selected.

Table 1. PSMs' Features.

Name                              BH-BT              HxM             EQ-01
Dimensions                        80x40x15 mm        65x30x12 mm     123x75x14 mm
Weight (w/o belt)                 35 g               16 g            75 g
Monitored Parameters (1)          HR, 3DA, BR,       HR, 3DA         HR, 3DA, BR,
                                  ST, BO                             ST, BO
Heart Rate and Breathing Rate
Sampling Period                   1 sec              3.4 sec (2)     15 sec
Acceleration Sampling Period      0.02 sec (50 Hz)   N/A (3)         0.04 sec (25 Hz)

(1) Heart Rate (HR), 3D Accelerations (3DA), Breathing Rate (BR), Skin Temperature (ST), and
Body Orientation (BO)
(2) HxM measures HR on a 17 sec loop (3s/3s/3s/3s/4s = avg 3.4 sec), but does not monitor BR.
(3) HxM does not provide raw accelerometer data.

Figure 1. From left: BH-BT, HxM, and EQ-01.

Heart Rate Assessment


Given the involvement of human subjects, this part of the study was reviewed
and approved by the University of New Mexico’s Institutional Review Board. To
date, the researchers have enrolled and completed the data collection on two subjects
(age 27 and 23, one male and one female with no history of cardiovascular diseases or
other issues that might make it unsafe for them to participate in an exercise program).
These individuals met the study’s inclusion criteria that were designed to minimize
risks. Written informed consent was obtained from all subjects. The tests were
performed at the Exercise Physiology Lab at the University of New Mexico where a
trained lab technician fitted the appropriate sensors.
Instrumentation
EKG HR measurements were obtained from the CASE Exercise Testing system (GE
Healthcare, Waukesha, WI, USA). The electrocardiogram was monitored at 500 Hz using
five leads (Left Arm, Left Leg, Right Arm, Right Leg, and V4) to reduce
interference with the PSMs' monitoring belts.
Study Protocol
Two main factors are able to influence PSMs' performance during dynamic
activities: EKG noise, generated either by electrical activity due to muscle
contractions or by the PSM's displacement with respect to the skin surface, and
mechanical noise, also generated by the PSM's displacement. Therefore, a series of
activities was developed to assess the occurrence of these issues: (1) Static (5
minutes) - the subject sits without moving (i.e., no movement or electric noise is
generated); (2) Thoracic Rotation (5 minutes) - the subject, keeping his/her hands
next to the device and the elbows raised at the device level, rotates the torso to either
side at a 3-second pace; (3) Arm lift (5 minutes) - the subject stands and raises
his/her arms simultaneously to a vertical upright position, lowering them again at a 2-
second pace; (4) Batting (5 minutes) - the subject repeats an exaggerated 'batting'
motion: a combined movement of the arms and twisting of the torso to either side
at a 5-second pace; (5) Weight moving (10 minutes) - the subject moves a 5 kg
weight over a distance of 3 meters. The weight is on the floor, thus the subject bends
down to pick it up, walks 3 meters, and sets it down; and, (6) Walk on a treadmill (10
minutes) - the subject walks on a treadmill at two different paces: 5 minutes of slow
walking (3 mph) and 5 minutes of brisk walking (4 mph). Each subject performed these
activities wearing one PSM at a time. Therefore, PSM-obtained measurements are
compared with lab measurements without any comparison among PSMs.
Signal Processing and Data Analysis
Signal processing and data analysis were performed offline with Matlab (The
Mathworks, Natick, MA, USA) and SPSS (IBM, Armonk, NY, USA). Signals were
re-sampled using linear interpolation and averaged on 5 seconds epochs. The Pearson
correlation coefficient (r) was obtained and tested with the null hypothesis that PSM-
and lab- measurements are not linearly related (α=0.001). Then, the agreement was
measured analyzing the differences between the PSM- and lab-measurements (Bland
& Altman, 1986). The lack of agreement was summarized by calculating the bias,
estimated by the mean difference (D) and the standard deviation of the differences (s).
Finally, due to the non-normality of many data sets, the non-parametric Wilcoxon
signed ranks test was used to test for any significant difference (α=0.05).
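For illustration, the statistical comparison just described (Pearson correlation, Bland-Altman bias and limits of agreement, and the Wilcoxon signed ranks test) could be scripted as in the sketch below; the function and variable names are assumptions, and the two inputs are assumed to be already aligned 5-second epochs.

import numpy as np
from scipy import stats

def compare(psm, lab, alpha_corr=0.001, alpha_diff=0.05):
    psm, lab = np.asarray(psm, float), np.asarray(lab, float)
    r, p_r = stats.pearsonr(psm, lab)             # linear association
    diff = psm - lab                              # Bland-Altman differences
    D, s = diff.mean(), diff.std(ddof=1)          # bias and spread of the differences
    limits = (D - 1.96 * s, D + 1.96 * s)         # 95% limits of agreement
    _, p_w = stats.wilcoxon(psm, lab)             # paired non-parametric test
    return {"r": r, "r_significant": p_r < alpha_corr,
            "bias": D, "sd": s, "limits": limits,
            "significantly_different": p_w < alpha_diff}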
3D Acceleration Assessment
The tests were accomplished at the Multi-Agent, Robotics, Hybrid, and
Embedded Systems (MARHES) Laboratory at University of New Mexico.
Instrumentation
Although a 3D accelerometer is embedded in the HxM, this device does not
provide raw accelerometer data. Therefore, only BH-BT and EQ-01 were tested. The
laboratory instrument selected to measure accelerations was the motion capturing
system Vicon MX (Vicon, Los Angeles, CA, USA) equipped with the software Vicon
Nexus. Vicon's sampling frequency was 100 Hz.
Study Protocol
PSMs are equipped with accelerometers; therefore, they not only measure the
device's acceleration due to displacement but also gravity. According to the standard
position and orientation of PSMs when they are worn, three axes were defined: Vertical,
Anterior-Posterior, and Lateral. We noticed that both PSMs gave a different gravity
value on the three axes; therefore, to obtain meaningful values (i.e., an acceleration of
9.81 m/s2 in the gravity direction when the device is not moving) every axis was
calibrated separately. Moreover, the three axes were tested separately, always keeping
the tested axis parallel to the gravity direction. Therefore, three tests were
performed for each PSM.
Signal Processing and Data Analysis


Vicon’s signal was re-sampled using linear interpolation to match PSMs’
signals. Moreover, a moving average filter was applied to reduce random noise in
PSMs’ and Vicon’ signals (signal processing and data analysis were performed
offline using software previously mentioned). Vicon’s accelerations were obtained
from the second derivatives of displacements. Therefore, gravity (i.e., 9.81 m/s2) was
added to Vicon’s vertical axis to obtain comparable values. In addition, the total
acceleration was calculated as parameter for the comparison. The same data analysis
procedure used for HR and BR were applied.
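A minimal sketch of this pre-processing chain is shown below, assuming the displacement and time arrays are available; the window length and names are illustrative, not the values used in the study.

import numpy as np

def moving_average(x, window=5):
    return np.convolve(x, np.ones(window) / window, mode="same")

def vicon_vertical_acc(t_vicon, z_disp, t_psm, fs=100.0, g=9.81):
    # Second derivative of vertical displacement, plus gravity, smoothed and
    # re-sampled to the PSM time stamps by linear interpolation
    acc = np.gradient(np.gradient(z_disp, 1.0 / fs), 1.0 / fs) + g
    return np.interp(t_psm, t_vicon, moving_average(acc))

def total_acceleration(ax, ay, az):
    return np.sqrt(ax**2 + ay**2 + az**2)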

RESULTS
HR Assessment Results
To date, data from only two subjects have been collected. Therefore, these
data are not statistically conclusive. In these two experiments, HxM showed
inadequate performance in assessing HR (Fig. 2) in many of the performed activities
(correlation coefficients for each activity: r1= 0.89, r2= 0.12, r3= -0.12, r4= -0.02,
r5=0.77, and r6=0.25). We do not have enough data to clearly determine the reason for
such behavior, but we can assume that it reflected poor contact between the chest belt
and the subject's body. In fact, HxM was the only PSM not equipped with a shoulder
strap.
Heart rate derived from PSMs and EKG were highly correlated (BH-BT r = 0.960,
p<0.0001, 917 data sets; EQ-01 r = 0.936, p<0.0001, 845 data sets) as shown in Fig.
3. The Bland-Altman technique demonstrated good agreement between PSMs and
EKG as shown in Table 2. Further, both PSMs showed a significant difference with
p-value less than 0.05. Descriptive statistics of correlation coefficient and agreement
indexes across the six activities are shown in Table 3. Unlike EQ-01, BH-BT seemed
reliable in every activity, which translated into steadier performance for BH-BT
than for EQ-01 across activities. The two systems performed best in activities 1 (static)
and 6 (walking). This result supported the assumption that electric noise generated by
chest muscles and movement of the torso can affect PSMs' measurements. Lastly, in
almost every activity both PSMs showed a significant difference from the HR derived
from EKG.
Table 2. HR agreement indexes.
                 D        s       Data sets   > D+1.96s (# / %)   < D-1.96s (# / %)
BH-BT vs EKG     1.389    3.469   917         28 / 3.05           31 / 3.38
EQ-01 vs EKG    -2.782    5.798   845         11 / 1.30           46 / 5.44

Figure 2. Comparison of HR for HxM - EKG (left) and BH-BT - EKG (right).
Figure 3. HR derived from BH-BT and EKG (left; y = 1.0094x - 2.4387, R² = 0.9222) and from EQ-01 and EKG (right; y = 0.9262x + 11.437, R² = 0.8757).

Table 3. HR descriptive statistics of correlation coefficients and agreement indexes.


BH-BT EQ-01
r D s r D s
Mean 0.942 1.456 2.779 0.766 -3.135 5.208
St. Dev. 0.026 0.646 0.715 0.168 2.652 1.443
Median 0.946 1.207 2.593 0.781 -2.126 4.665
Max 0.974 2.437 4.187 0.951 -1.301 7.498
Min 0.907 0.698 2.259 0.551 -8.385 3.592

3D Acceleration Assessment Results


Fig. 4 presents the comparison between the accelerations derived from the PSMs
and the motion capture system for a representative test. Although both PSMs showed a
good match in terms of acceleration values, EQ-01 consistently demonstrated a time
shift. Although it kept a constant sampling frequency, it shifted the
measurements along the time axis. The shifting pattern was periodic and acted in both
directions with a period of approximately 15 seconds. From our point of view (i.e.,
monitoring workforce activity level) the shifting pattern is not a matter of concern
because its maximum value is less than a second. However, it affected the statistical
analysis of the data.
As shown in Fig. 5, acceleration derived from PSMs and Vicon were well
correlated for BH-BT (r = 0.939, p<0.0001, 6000 data sets) but not for EQ-01 (r =
0.344, p<0.0001, 6000 data sets) due to the shifting pattern. The Bland-Altman
analysis demonstrated good agreement between PSMs and Vicon (Table 4). Both
PSMs showed a significant difference with p-value less than 0.05.
Descriptive statistics of correlation coefficient and agreement indexes for the
three tests in different directions are shown in Table 5. BH-BT showed a good and
stable correlation in the tests, whereas EQ-01 demonstrated a poor correlation that was
mainly due to the shifting pattern previously described. Both PSMs maintained
good agreement in all the tests. Moreover, BH-BT was significantly different in all
the tests, whereas EQ-01 showed a significant difference in only one test.
Table 4. Acc. agreement indexes.
                   D        s       Data sets   > D+1.96s (# / %)   < D-1.96s (# / %)
BH-BT vs Vicon     0.100    0.558   6000        133 / 2.22          265 / 4.42
EQ-01 vs Vicon    -0.096    3.427   6000        143 / 2.38          157 / 2.62
Figure 4. Comparison of acc. for BH-BT - Vicon (upper) and EQ-01 - Vicon (lower).

Figure 5. Acc. derived from BH-BT and Vicon (left; y = 1.0876x - 0.9214, R² = 0.8816) and from EQ-01 and Vicon (right; y = 0.4985x + 4.9497, R² = 0.1183).

Table 5. Descriptive statistics of correlation coef. and agreement indexes for acc.
BH-BT EQ-01
r D s r D s
mean 0.952 -0.014 0.840 0.407 0.109 2.627
st. dev. 0.019 0.117 0.302 0.226 0.335 1.102
median 0.956 -0.082 1.014 0.355 -0.084 1.990
max 0.968 0.121 1.014 0.654 0.496 3.899
min 0.931 -0.082 0.491 0.211 -0.084 1.990

CONCLUSION AND FUTURE DIRECTIONS

Using a wide variety of activities and tests, the researchers were able to
initiate a comprehensive assessment of PSM performance in terms of HR and
accelerations. Even though the small sample size limits the statistical validity of the
study, these preliminary results demonstrate the poor reliability of HxM in assessing
HR and the effectiveness of BH-BT and EQ-01 in monitoring subjects during
dynamic activities similar to the construction workforce's routine activities. In fact, even
though these PSMs showed significant differences with respect to the lab instruments in almost
every performed test, the assessed correlation and agreement make them suitable
candidates as physiological monitoring devices for the construction workforce.
To date, HR data from only two subjects have been collected. Thus, to obtain a
comprehensive assessment of the PSMs' capabilities, the next steps of our research project
include: (1) enrolling other subjects to expand the data collection up to at least ten
subjects, and (2) assessing the accuracy of the breathing rate and skin temperature
sensors. Moreover, we will use PSMs in simulated construction activities to analyze
the relationship between physical strain and workers’ productivity.

ACKNOWLEDGEMENT
The authors would like to thank the Exercise Physiology Lab and the
MARHES Lab at the University of New Mexico for providing the instruments
necessary for this study, as well as the lab assistants Jeremy Clayton Fransen and Ivana
Palunko for their time and effort in performing the experiments.

REFERENCES
Abdelhamid, T.S., & Everett, J.G. (2002). Physiological demands during construction
work. Journal of Construction Engineering and Management, 128(5), 427-
437.
Bland, M., & Altman, D. (1986). Statistical Methods for Assessing Agreement
between Two Methods of clinical Measurement. The Lancet, 327(8476), 307-
310.
Bouchard, D.R., & Trudeau, F. (2008). Estimation of energy expenditure in a work
environment: comparison of accelerometry and oxygen consumption/heart
rate regression. Ergonomics, 51(5), 663-670.
Elsner, K.L., & Kolkhorst, F.W. (2008). Metabolic demands of simulated firefighting
tasks. Ergonomics, 51(9), 1418-1425.
Faber, A., Strøyer, J., Hjortskov, N., & Schibye, B. (2009). Changes in Physical
Performance among Construction Workers during Extended Workweeks with
12-hour Workdays. International Archives of Occupational and
Environmental Health, 83(1), 1-8.
Garet, M., Boudet, G., Montaurier, C., Vermorel, M., Coudert, J., & Chamoux, A.
(2005). Estimating relative physical workload using heart rate monitoring: a
validation by whole-body indirect calorimetry. European Journal of Applied
Physiology, 94(1), 46-53.
Hartmann, B., & Fleischer, A. (2005). Physical Load Exposure at Construction sites.
Scandinavian Journal of Work Environment and Health, 31, 88-95.
Kang, D., Woo, J., & Shin, Y. (2007). Distribution and determinants of maximal
physical work capacity of Korean male metal workers. Ergonomics, 50(12),
2137-2147.
Kirk, P.M., & Sullman, M.J.M. (2001). Heart rate strain in cable hauler choker setters
in New Zealand logging operations. Applied Ergonomics, 32(4), 389-398.
Melanson, E.L., & Freedson, P.S. (1996). Physical activity assessment: a review of
methods. Critical Reviews in Food Science and Nutrition, 36(5), 385-396.
Turpin-Legendre, E., & Meyer, J. (2003). Comparison of physiological and subjective
strain in workers wearing two different protective coveralls for asbestos
abatement tasks. Applied Ergonomics, 34(6), 551-556.
A Framework for Optimizing Detour Planning and Development around
Construction Zones

M. Jardaneh1, A. Khalafallah2, A. El-Nashar3, and N. Elmitiny4


1, 2
Department of Civil, Environmental and Construction Engineering, University of
Central Florida, Orlando, FL; email: mjardane@mail.ucf.edu, khalafal@mail.ucf.edu
3
Department of Industrial Engineering and Management Systems, University of
Central Florida, Orlando, FL; email: aelnasha@mail.ucf.edu
4
Egyptian National Institute for Transportation, Ministry of Transportation, Cairo,
Egypt; email: nelmitiny@yahoo.com

ABSTRACT
Construction zones are traffic way areas where construction, maintenance or
utility work is identified by warning signs, signals and indicators, including those on
transport devices that mark the beginning and end of construction zones. Construction
zones are among the most dangerous work areas, with workers facing workplace
safety challenges that often lead to catastrophic injuries or fatalities. In addition, daily
commuters are also impacted by construction zone detours that affect their safety and
daily commute time. These problems represent major challenges to construction
planners, as they are required to plan vehicle routes around construction zones in such
a way that maximizes the safety of construction workers and reduces the impact on
daily commuters. This paper presents a study that aims at developing a framework for
optimizing the planning of construction detours. The main objectives of the study are
to: 1) identify all the decision variables that affect the planning of construction
detours; 2) quantify the impact of these decision variables on construction workers
and daily commuters; and 3) implement a model based on shortest path formulation
to identify the optimal alternatives for construction detours. The ultimate goal of this
study is to offer construction planners essential guidelines to improve
construction safety and reduce construction zone hazards, as well as a critical tool for
selecting and optimizing construction zone detours.

INTRODUCTION
Many commuter drivers have to go through traffic detours on a daily basis.
Traffic detouring (also known as rerouting) is the process of forcing the through
traffic to follow an alternative path to the usual path in order to promote the safety of
construction workers, the safety of commuters and the efficiency of traffic flow. As
such, the provided alternative path is usually selected to ensure the orderly movement
of all road users on streets and highways throughout construction and work zones. In
addition to construction zones, traffic detours are used for lane closures due to
adverse weather conditions, road maintenance work, utility construction activities,
among other reasons. Traffic detours are typically identified by warning signs, signals
and indicators, including those on transport devices that guide commuters throughout
the detour. Local authorities usually require construction planners to include detailed

traffic detour plans whenever construction work is expected to affect the traffic flow
around the construction zone. The requirements for such traffic detours vary from one
state to another, and even between counties and cities within the same state.
Most local authorities and municipalities pay the closest attention to make
sure that detour signs are easily understood by both local residents who are familiar
with the area, and daily commuters who are familiar with just the main traffic path.
There are no specific guidelines for defining the path of the detour other than not to
detour traffic into roads that are known to be at or exceeding road capacity (i.e., roads
that failed to achieve the desirable level of service). The lack of guidelines and tools
to help construction planners in selecting an efficient detour can lead to overlooking
potentially good choices of available traffic detours. As such, there is a need for a
system to specify detour guidelines and help construction planners in identifying
optimal traffic routes that maximize the safety of construction workers and
commuters, while efficiently maximizing traffic flow.
The main objective of this research is to develop guidelines and tools that can
help construction planners select optimal traffic detour routes. Potentially, this could
result in safer construction zones, reduced traffic jams and greenhouse gas emissions, and
better utilization of the available capacity in the entire traffic network.

LITERATURE REVIEW
Several studies have been conducted to evaluate the safety of highway
construction zones in several locations in the United States. Harb concluded that work
zones produce a significantly higher rate of crashes when compared to non-work zone
locations (Harb 2009). Harb cited that motor vehicle crashes increase by 26% during
construction or roadway maintenance in work zones (Harb 2009).
Anderson et al. introduced the concepts of assignment, transshipment, and
shortest route problems. They categorized traffic rerouting problems under a category
of linear programming named network flow problems. The network model for such
problems consists of nodes and arcs (Anderson et al. 2007). Focusing on shortest route
problems, they considered the main objective to be finding the shortest
path or route between two nodes of the network. This can also be expressed as a
transshipment problem with one origin and one destination. By transporting one unit
from one point (the origin) to another point (the destination), the solution is
determined by finding the shortest route through the network (Anderson et al. 2007).
Radwan discussed the advantages of utilizing new techniques for tackling
traffic incidents, whether these incidents are natural such as hurricanes and floods or
manmade such as road construction and car accidents. Radwan emphasized the
importance of having a good detour around the incident location (Radwan 2003).
Snelder et al. described how a disturbance of even a small section of a
network can cause a major disruption to that network as a whole, making it
vulnerable and prone to all types of traffic problems, including congestion and delays.
They developed a methodology to analyze the specification of the design standards,
analyze the road network and test the quality of the network. The developed
methodology is reported to improve the network by decreasing the travel time by
2.3% and decreasing the lost time in case of accident by 29%. Moreover, the average
speed is reported to have increased by 1.6% (Snelder et al. 2009).
DECISION VARIABLES AND OBJECTIVES


This study assumes that the various optimization objectives of planning
detours can be aggregated into a cost function. As such, minimizing the total cost of
a route is suggested as the overall optimization objective and the criterion for comparing
the possible alternative detour routes, as shown in Eq. 1. Considering the total
cost of a route should help not only to save daily commuter
time, but also to account for and reduce transportation costs, fuel
consumption, and greenhouse gases. As such, this should ultimately help to provide
the most economic and sustainable detour alternative.
Min: (Total Route Cost = Σ Objective Cost) Eq. 1

Figure 1: Optimization Model Formulation


Alternatively, travel time could also be selected as a design criterion.
According to community and commuter needs, the construction planner could
select either of these two criteria as the detour planning criterion (see Figure 1). Note
that distance was dropped from the selection criteria because it could be a misleading
factor.
Network Representation
In order to select the optimal solution, a mechanism for identifying all
available alternatives should be developed. The traffic network segments are
represented by nodes and arcs (arrows). These are then compiled into a single matrix of
travel times or costs, which allows the software to select feasible routes.
Formulating Shortest Path Problem
To select the optimal solution, a modification to the shortest path formulation
is proposed. This method is based on sending a single unit of flow from one node
(e.g. node 1) to a destination node (e.g. node m) at the least possible cost/time
(Bazaraa 1990). The mathematical formulation of the problem could be described as
follows:

Minimize   Σ_(i,j) c_ij x_ij

Subject to:
Σ_j x_ij − Σ_j x_ji = 1 if node i is the origin, −1 if node i is the destination, and 0 otherwise
x_ij ∈ {0, 1}

The summation is taken over the existing arcs of the network. The variable x_ij
(which is equal to 0 or 1) indicates whether or not arc (i, j) is in the path.
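For nonnegative arc costs this formulation is equivalent to a standard shortest-path computation, so in practice it can be solved with an off-the-shelf routine. The sketch below is illustrative only; the four-node network and its link costs are placeholders, not data from this study.

import numpy as np
from scipy.sparse.csgraph import dijkstra

INF = np.inf
# cost[i][j] = generalized cost (time or dollars) of the arc from node i to node j
cost = np.array([[0,   4,   2, INF],
                 [INF, 0,   5,  10],
                 [INF, INF, 0,   3],
                 [INF, INF, INF, 0]])

dist, pred = dijkstra(cost, directed=True, indices=0, return_predecessors=True)

# Recover the optimal route from the origin (node 0) to the destination (node 3)
route, node = [], 3
while node != -9999:                  # -9999 marks "no predecessor" in SciPy
    route.append(node)
    node = pred[node]
print(list(reversed(route)), dist[3])   # [0, 2, 3] with a total cost of 5.0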

APPLICATION EXAMPLE
To better understand the concept of Total Route Cost, an application example
is given for a construction zone within the city of Orlando. It is observed that many daily
commuters who work and/or study at Valencia Community College drive through the
intersection of South Goldenrod Road (SR 551) and Lake Underhill Road. The
common route for those commuters is first heading North on Goldenrod Road (SR
551) until reaching the intersection of Goldenrod Road (SR 551) with Valencia
College lane (distance is 1 mile), then heading east on Valencia College lane
(distance 2.1 miles) until reaching their destination, which is Valencia Community
College. The total distance of this route is 3.1 miles as illustrated in Figure 2.

Figure 2. Traditional commuter route showing the closed link
Figure 3. Marginal Cost Curve above Standard BPR (Data from Kockelman 2004)

To formulate the shortest path model for this problem, part of the East-West
section (link) of the daily commuter route of Figure 1 is closed, as shown in Figure 2.
The closed section or link is the link from the beginning of Valencia College Lane
until the intersection of Valencia College Lane with North Chickasaw Trail. This link
is considered a critical link in the daily commuter route, since passing through it is a
must for the daily commuter to reach the destination. Closing this link is expected to
generate substantial disturbance to the network. The length of the closed link is
approximately half a mile, as illustrated in Figure 2. In order to close this section of the
daily commuter route, realistic alternative routes are sought and examined using the
modified shortest path formulation. These realistic alternatives are evaluated and
assessed based on the optimization objective (cost or time) to find the most feasible
alternative route for the original common route.
It should be noted that when it comes to economic feasibility, usually a
comparison is conducted between the cost of blocking the whole road for a shorter
total duration, and the cost of partially blocking the road for a relatively longer
duration. While the former allows faster completion of road construction tasks, the
latter requires less detour planning. In addition to these costs, the cost of delay due to
construction should be considered in assessing the alternatives and selecting the best
solution. Moreover, the social value of delay and how much a daily commuter is
willing to pay to avoid going through traffic congestion must be precisely estimated
in terms of time and money. Since the travel time per unit distance is an inverse
function of the speed, delays are expected to noticeably increase as the speed goes
down. Also with reduced speeds, density rises as more and more users enter the
congested zone, reducing inter-vehicle spacing and causing the speed to fall to almost
zero. It has been reported that the travel times tend to rise exponentially as a function
of demand for the scarce road space, as illustrated in Figure 3.
In this study a formula developed by the Bureau of Public Roads (BPR) is
used to calculate the common travel time (FHWA 1979), as shown by the following
equation:

t(V) = tf [1 + 0.15 (V/C)^4]

where V is the traffic volume; C is the practical capacity, corresponding to
approximately 80 percent of the true capacity; t(V) is the actual travel time as a
function of demand volume V; and tf is the free-flow travel time.
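A short numerical sketch of this function is shown below; the standard BPR coefficients (0.15 and 4) are used, and the volumes are illustrative only.

def bpr_travel_time(t_free, volume, capacity, alpha=0.15, beta=4.0):
    # Actual travel time t(V) grows steeply as demand V approaches practical capacity C
    return t_free * (1.0 + alpha * (volume / capacity) ** beta)

# Example: a link with a 4-minute free-flow time loaded to 90% of practical capacity
print(bpr_travel_time(t_free=4.0, volume=900, capacity=1000))   # about 4.39 minutes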

DATA COLLECTION
Data were collected at nine different locations. These nine locations are
considered critical points that commuters have to pass by in order to take one of the
alternative routes. The data was collected on regular days for three consecutive hours
and was analyzed on a 15-min basis in order to provide precise results and more
extensive information. The analysis also identified the peak hour precisely. Figures 4-
9 illustrate the realistic routes that can be taken by daily commuters to avoid the
construction zone.
Route “A” starts similar to the original route by heading north approximately
0.35 miles on FL-551 (Goldenrod Road), then turning right to merge onto FL-408 E
toward Titusville a distance of approximately 0.7 miles. The route continues by
merging slight left at Central Florida Greenway/Eastern Beltway/Florida 417 north, a
distance approximately 0.4 miles, then taking the Valencia College Lane exit, a
distance of approximately 0.3 mile, and then finally turning right at Valencia College
Lane for a distance of approximately 1.0 mile to arrive at Valencia Community
college. The expected travel time on this route is approximately 4 minutes. The main
advantage of this route is that it has the shortest distance and travel time. On the other
hand, the main disadvantage of this route is its cost, since it requires the commuter to take a toll road,
as illustrated in Figure 4.
Route B starts by having the commuter head east on Lake Underhill Rd
toward S Chickasaw Trail for a distance of approximately 2 miles, before turning left
at S Econlockhatchee Trail until reaching the main entrance of Valencia community
college at Valencia College Lane. The main advantage of this route is the distance,
since this route is consider the second shortest route in terms of distance. The main
disadvantage for this route is the existence of a critical facility (a hospital), as
illustrated in Figure 5.

Figure 4 Route “A” alternative Figure 5 Route “B” alternative


Route C starts similar to the original route by heading north on FL-551
(Goldenrod Road) for approximately 2.0 miles, then turning right at FL-50 E/E
Colonial Drive for a distance of 0.5 mile, then turning right at north Chickasaw Trail
for one mile, and finally back on the original route by turning left on Valencia
College Lane for approximately 1.7 miles. This route takes the daily commuter a total
time of 12 minutes from point A, the intersection of Lake Underhill Road with
Goldenrod Road, until reaching the final destination, Valencia Community College.
The main disadvantage of this route is that the time is almost tripled compared to the
time on the original route. This route is illustrated in Figure 6.
Route D starts by heading north on FL-551 (Goldenrod Road) for
approximately 2.0 miles, then turning right at FL-50 E/E Colonial Drive for a
distance of 2.0 miles, then turning right at North Econlockhatchee Trail for a distance of
1 mile. This route takes about 11 minutes to complete and it is illustrated in Figure 7.

Figure 6 Route “C” alternative Figure 7 Route “D” alternative


Route E begins by first heading south on FL-551 (Goldenrod Road) for
approximately 1.7 miles, then turning left at Curry Ford Road, driving for distance of
2.1 miles, before turning left at S Econlockhatchee Trail for 3.1 miles and then finally
turning right at Valencia College Lane. It takes approximately 17 minutes on this
route with a total distance of 7.2 miles, as illustrated in Figure 8.
Route F starts similar to route E but passes through Chickasaw Trail instead of
Econlockhatchee Trail. The direction for this route begins by first heading south on
FL-551 (Goldenrod Road) for approximately 1.7 miles then turning left at Curry Ford
Road, driving for a distance of 1.4 miles, before turning left at S Chickasaw Trail for
approximately 1.0 mile, then turning left at El Prado Avenue for 0.4 mile. The route
continues onto S Chickasaw Trail for another 1 mile, and ends by finally turning left
at S Econlockhatchee trail, arriving to the final destination. This route takes
approximately 16 minutes to complete and it is illustrated in Figure 9.

Figure 8 Route “E” alternative Figure 9 Route “F” alternative

ANALYSIS AND PRELIMINARY RESULTS


As mentioned above, in order to have the most accurate results, data was
collected at nine different locations. The primary objective of collecting the data is to
determine traffic volumes and levels of service in order to select the best alternative
solution. As shown by the discussion of the six alternatives illustrated in the above
section, the problem involves optimizing a number of objectives (e.g., distance, travel
time, direct cost, safety, etc.). The complexity of the problem increases with the
consideration of all the available local routes, as the number of alternatives in such a
case could be in the hundreds. The complexity also increases with the need for a method to
identify all the possible alternative routes. To deal with multiple objectives, a cost (or
utility) function could be defined to combine the effect of the considered objectives.
As such, customized software could then be developed to aid construction planners in
selecting the optimal alternative. The framework for solving such an
optimization problem is based on the concepts discussed below.
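One possible form of such a cost (utility) function is sketched below; the attribute values and weights are hypothetical and only loosely mirror the alternatives described above, since the intent is simply to show how the framework would rank routes.

ROUTES = {
    "A": {"time_min": 4,  "distance_mi": 2.75, "toll_usd": 1.25, "critical_facility": 0},
    "B": {"time_min": 9,  "distance_mi": 4.0,  "toll_usd": 0.00, "critical_facility": 1},
    "C": {"time_min": 12, "distance_mi": 5.2,  "toll_usd": 0.00, "critical_facility": 0},
}

# Hypothetical weights converting each objective to a common cost unit (dollars)
WEIGHTS = {"time_min": 0.45, "distance_mi": 0.10, "toll_usd": 1.00, "critical_facility": 3.00}

def route_cost(attrs, weights=WEIGHTS):
    return sum(weights[k] * attrs[k] for k in weights)

best = min(ROUTES, key=lambda r: route_cost(ROUTES[r]))
print(best, {r: round(route_cost(a), 2) for r, a in ROUTES.items()})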

Peak Hour Costs


Analyzing the collected data to find the peak hour is a critical step in building the
optimization model. This is important so that the planner has the flexibility to
vary the cost of delay according to the traffic demands and the busiest time of the
day. According to the collected data, the peak hour is estimated to be between the
hours of 7:00 am and 8:00 am.
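A small sketch of this step is shown below: the 15-minute counts are summed over four consecutive intervals and the largest rolling sum identifies the peak hour. The counts and the start time of the first interval are hypothetical, not the collected data.

from datetime import datetime, timedelta

counts = [180, 200, 250, 260, 255, 230, 200, 185, 170, 160, 150, 145]  # veh per 15 min
t0 = datetime(2011, 1, 1, 6, 30)                  # start of the first interval (assumed)

hourly = [sum(counts[i:i + 4]) for i in range(len(counts) - 3)]
i_peak = max(range(len(hourly)), key=hourly.__getitem__)
peak_start = t0 + timedelta(minutes=15 * i_peak)
print(peak_start.strftime("%H:%M"), hourly[i_peak])   # 07:00 with 995 vehicles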

Highway Capacity and Level of Service


The desired level-of-service (LOS), or quality of the connections in the
network, is yet another factor that should be considered. An acceptable volume-
capacity ratio is needed as a prerequisite to determine the LOS (see Table 1). It is also
recommended to consider the capacity as a separate measure (Kockelman 2004).

Table 1 Level of Service Definition for Basic Freeway Segments (TRP 2000)*

*Calculation for the Level of Service (LOS) is composed of the following steps: (1) Calculation of FFS, (2)
Determination of Flow Rate, and (3) Calculation of LOS.

CONCLUSION
Implementation factors such as driving costs, critical facilities, school zones,
peak time, highway capacity, and level of service affect the selection of optimal
detour routes around construction zones. A framework for optimizing construction
detours has been proposed in order to take these factors into account when developing
detour plans. The framework is based on combining the effect of these factors into a
cost function and using a modified shortest path formulation to determine the optimal
routes. The formulation considers each objective to have a certain cost (utility) that
could be determined by the user in order to evaluate the overall quality of the
solution. This should prove useful to construction planners as it can help them
identify the optimal construction detour around construction zones.

REFERENCES
Bazaraa, M. S. (1990). Linear Programming and Network Flows, John Wiley & Sons,
New York.
Anderson, D. R., Sweeney, D. J., Williams, T. A., and Martin, R.K. (2007). An
Introduction to Management Science: Quantitative Approaches to Decision
Making, Thomson/South-Western.
FHWA (1979). Urban Transportation Planning System (UTPS), Federal Highway
Administration, Washington, DC.
Harb, R. (2009). Safety and Operational Evaluation of Dynamic Lane Merging in
Work Zones, PhD Dissertation, University of Central Florida, Orlando, FL.
Kockelman, K. (2004). Traffic Congestion. Handbook of Transportation Engineering.
M. Kutz. McGraw- Hill, New York.
Radwan, E. (2003). Framework for Modeling Emergency Evacuation. University of
Central Florida, Center for Advanced Transportation Systems Simulation, and
Florida Department of Transportation, Orlando, FL.
Snelder, M., Schrijver J. M., Immers, L. H., Egeter, B. (2009). "Designing Robust
Road Networks," Report no. 09-2718, the 88th Annual Meeting of the
Transportation Research Board, Washington, DC.
TRP (2000). Highway Capacity Manual: 2000 Washington, D.C, Transportation
Research Board, Washington, DC.
A Multi-Objective Decision Support System for PPP Funding Decisions
Morteza Farajian1 and Qingbin Cui2
1 PhD Student, Department of Civil & Environmental Engineering, University of Maryland, College Park, MD, USA, 20742; morteza@umd.edu
2 Assistant Professor, Department of Civil & Environmental Engineering, University of Maryland, College Park, MD, USA, 20742; cui@umd.edu

ABSTRACT
PPP is an innovative delivery method used as an option to leverage public funds by
attracting private investment into public projects, making it possible to deliver projects
that could not previously be delivered. Leveraging resources to build more infrastructure increases the
output (quantity) of funds; however, besides leveraging resources, the public agency should
also increase the outcome (benefits) of those projects by utilizing more efficient funding
strategies. Currently, some project-level evaluation methods such as VfM and BCA are being
used to evaluate PPPs; however, those methods fail to consider the overall benefits and
costs of PPPs for multiple stakeholders, and they do not provide much assistance in terms of
comparison between different projects at the portfolio level. This study introduces a Multi-
Objective Decision Support System (MODSS) which integrates quantitative and qualitative
aspects of PPPs and calculates the utility function based on the different interests of multiple
stakeholders. The rate of return (ROR) on private investment, the regional economic benefits for local
people, and long-term national-level benefits are considered as the main attributes to address
the different objectives of different stakeholders. This two-level MODSS model assists public
agencies such as FHWA to better spend their resources in special programs such as TIFIA by
optimizing their funding portfolio and allocating the available funds to the optimal portfolio
of PPP projects.

INTRODUCTION
Transportation projects in the US have been funded traditionally by excise taxes.
However, in recent years, due to the decrease in excise tax revenue and the increase in
transportation funding needs, the gap between financial resources and the funds needed to maintain
and improve transportation systems has widened (National Surface Transportation
Infrastructure Financing Commission, 2009). There are different options available to fill this
gap, such as increasing taxes or using new financing resources such as private investment in
public projects. However, there is strong public resistance to tax increases or road
privatization in the US. The other available option is increasing the efficiency of the public
funding strategy in order to maximize the benefits from each dollar of taxpayers’ money.
Public Private Partnership (PPP) is an innovative financing method which has been
recently used by numerous states in the US (Cui, Farajian, & Sharma, 2010) as an option to
leverage their resources and attract new financing resources to public projects. A Public-
Private Partnership can be broadly defined as a long-term agreement between the public and
private sectors for mutual benefit (HM Treasury 2000), in which the private sector is awarded
the right to Design, Build, Finance, and Operate (DBFO) a roadway, oftentimes a toll road; based
on the risks that the private sector takes, it is paid either through toll payments by
users or through availability payments by state DOTs. Using PPP agreements, public agencies try to
bridge the increasing gap between required investments and limited funding, while increasing
the efficiency and shifting some of the risks during the design, construction and
operating/maintaining phases of the road to the private sector. However, the private sector


usually seeks an incentive such as federal loans, the Transportation Infrastructure Finance and
Innovation Act (TIFIA) for instance, or grants such as Transportation Investment Generating
Economic Recovery (TIGER). Due to the recent increase in the number of
states that use PPPs (Cui, Farajian, & Sharma, 2010), the number of projects that compete to
receive those loans and grants has increased.
In order to enhance the efficiency of funding decisions, funding agencies are limited
by legislation to use PPPs only if there is a well-defined and executed business case analysis
showing added value for money for PPPs compared to publicly financed projects. Due to
competition among different regions and different projects receiving public funds, decision
makers are obligated to decide about the priority of different competing projects before
allocating the grants or loans. In addition, the complexity of public private partnership
proposals and the existence of multiple objectives of different entities involved in such
projects creates a need for funding agencies to use a systematic decision support system that
is able to integrate both qualitative and quantitative aspects of those projects into a single
model in order to decide which projects among all qualified projects have higher priority in
receiving public assistance. The other decision that needs to be made is deciding the optimal
amount of money that each project should receive in order to maximize the benefits from
each dollar of taxpayers’ money. Optimization of this “funding strategy” is of special
importance because of the public resistance against an increase in tax rates or privatization of
public projects. One of the main criticisms of the public agencies is that they do not
efficiently utilize available resources. This research reviews the state of the art of the current
“funding strategy” in the US Departments of Transportation, and suggests a Multi-Objective
Decision Support System (MODSS) which can integrate both quantitative and qualitative
aspects of PPPs, as well as calculate the utility function for the decision maker. The MODSS
enables the decision maker to utilize a funding strategy that will allocate resources more
efficiently with the available funds to the optimal portfolio of PPP projects in order to
increase the output of public investment.

LIMITATIONS OF CURRENT FUNDING STRATEGIES FOR PPPS


Currently, some evaluation methods such as VfM and BCA are being used by state
DOTs and FHWA for evaluating different projects (Office of the Secretary of Transportation,
2010; Office of the Secretary of Transportation, 2009). The TIGER office is required by law to
ask for a BCA report along with the TIGER application documents, based on a standard
procedure for conducting BCA which has been published by TIGER in order to set up a fair
evaluation process (Office of the Secretary of Transportation, 2010). The BCA study should
consider the qualitative and quantitative study of issues such as State of Good Repair,
Economic Competitiveness, Livability, Sustainability, and Safety. TIFIA, on the other hand,
uses VfM analysis because legislation usually only allows the use of PPPs if there is a well-
defined and executed business case analysis showing added value for money for PPPs
compared to a publicly financed competitor. Value for Money (VfM) analysis calculates the
difference between the costs associated with both traditional and PPP procurements based on
the optimal allocation of risks and rewards by converting all costs and risks associated with a
project to dollar values.
Although VfM and BCA are both powerful tools that can help decision makers shift
from subjective decisions to more well-defined objective decisions, they have some
limitations when it comes to investment strategies in PPPs. PPPs are usually complex
arrangements between multiple stakeholders with competing objectives over a long period of
time. There is also a high uncertainty embedded in PPP governance structure which requires

an extra effort to model the benefits and costs of a project delivered using a PPP arrangement.
The current evaluation methods are usually useful as a preliminary project evaluation tool in
order to demonstrate the availability of PPPs as a delivery option; however, they do not
provide enough decision-making support when it comes to comparing different competing
PPP projects for federal assistance. Therefore, there is a need for strategic planning for
improving capital allocation decisions in the investment portfolio of federal programs such as
TIFIA while considering the competing objectives of multiple stakeholders. This paper aims
to go beyond the available project evaluation analyses for PPPs at the project level, and
provides a decision support system at the program level, based on the utility theory and multi
objective optimization.

STATE OF THE ART OF MODSS IN OTHER INDUSTRIES


When it comes to comparison in a multi objective decision making process, one of
the most popular methods is Multi Attribute Utility Theory (MAUT). This method has been
widely used in decision support systems as a logical approach to establish meaningful
tradeoffs among conflicting objectives (Keeney & Raiffa, 1976). This theory usually presumes
a single decision maker who faces tradeoffs on two or more criteria or attributes among a
number of alternatives (Walls, 1995). Based on this theory, the decision maker intends to
prioritize available alternatives based on a utility function which is unique to the decision
maker. MAUT is based on a large body of mathematical theory for utility models as well as a
wide range of practical assessment techniques that help to assess the tradeoff weight factors
among different attributes and identify a unique utility function for the decision maker based
on those weight factors (Walls, 1995). Later in this research we will explain how we can
account for the objectives of multiple stakeholders by defining different attributes, and still use
MAUT to establish tradeoffs among those attributes in order to get a multi-objective utility
function.
There is a wide range of applications for MAUT in engineering and business for both
the private sector and public sector. S. Eom and E. Kim (2006) report “most applications of
MAUT are for POM (44.16%) followed by marketing, transportation, and logistics (17.53%),
MIS (13.64%), and multifunctional areas (8.44%)” (Eom & Kim, 2006). Vinke (1992) and
Roy (1996) discuss some of the applications of MAUT in finance and economics;
environmental management; energy planning; marketing; transportation, particularly in
highway planning and subway design; human resources management in job evaluation and
personnel selection; education; and agriculture. Petrovic et al. (2007) present a new tool for
multi-objective job shop scheduling problems based on an interactive fuzzy multi-objective
genetic algorithm which considers aspiration levels set by the decision maker for all the
objectives. Cox (2002) uses MAUT to develop health status indicators for a multi-party risk
management decision process. Walls talks about the need for applying MAUT models to the
private sector in order to develop a framework to link the firm's capital allocation process and
its business strategy (Walls, 1995).
Applications of MAUT in the public sector are even more numerous than the applications of
this type of decision model in the private sector. Most public sector problems involve
multiple conflicting objectives, such as in public health care systems (Stanhope and
Lancaster, 2004); environmental policy (Linkov and Ramadan, 2004); climate change issues
(Konidari, 2007); site selection (Burak et al., 2007); and energy (Sung-Kyun and Ohseop,
2009). In a real life application of MAUT, the Department of Energy’s Office of Fissile
Materials Disposition funded a project to design a multi-attribute utility model to evaluate
different options for disposing of excess plutonium (Butler et al., 2005). Ríos-Insua et al.
(2004) combine MAUT and multi-objective optimization to develop “a multi-objective
decision support system for optimization in engineering design”.

MODEL DEVELOPMENT
The model presented in this paper is based on a two level MAUT and a Bayesian
network. Level 1 of MAUT model captures the utility function of different stakeholders based
on three attributes; meanwhile Level 2, integrates these utility functions into one centralized
utility function based on the preferences of the decision maker which in this case is the
program or the office which is going to make the funding decisions. The centralized utility
function can change based on events that may happen in the future by using a Bayesian
network to update the relative importance weights. The different steps of the model are shown
in Figure 1.

Figure 1: Strategic Decision Process for Project Prioritization

 Setting Strategic Objectives:


The first step in creating the model is setting the strategic objectives of the program.
Usually these objectives can be achieved by referring to the legislation that has authorized the
funding as well as the objectives of different stakeholders. This process prepares the ground
for the establishment of different attributes which will be used in the model. The
identification of the attributes is a crucial stage in the evaluation process using the MAUT
approach. To begin with, the goal of an evaluation for PPP projects is defined as maximizing
the overall satisfaction of three different stakeholders: the concessionaire (private company),
the local community, and the federal government. As shown in Figure 1, the main objective
of the concessionaire is maximizing its profit by reaching a higher Rate of Return (ROR) on
its investment. For the local community, issues such as a reduction in traffic time, an increase in
real estate prices, economic benefits, an increase in safety and a reduction in loss of life, and
better accessibility are the main concerns. In this paper, we have integrated all of the
mentioned factors and defined an index called the “local livability index” or (LLI). Likewise,
a similar index called the “National Impact Index” or (NII) is defined which includes the
impacts of the project at the national level, such as carbon emission reduction, induced
economic impact of the project at the national level, better interstate connectivity, fair

development, and political factors. It should be mentioned that developing such indexes in
more detail is beyond the scope of this paper, so we assume these indexes are available and can
be obtained through surveys and data analysis.
The first thing that should be mentioned about the three attributes - ROR, LLI & NII -
is the fact that they are usually contradictory to each other. For instance, to achieve a better
ROR on private investment, there is a need to sacrifice the design or the quality of the project,
which means a lower LLI, or to receive more funding from the federal government, which means
a lower NII. The other thing that should be mentioned about the model is the fact that the
mentioned attributes cannot be easily combined into one simple formula because each
stakeholder has different preferences and therefore the importance weights vary from
stakeholder to stakeholder. As shown in Figure 1, this paper develops the final utility function
in two steps. In the first step, the utility function of each stakeholder based on the
mentioned three attributes- ROR, LLI & NII - will be obtained. Each one of these utility
functions will be treated as a new attribute in the next step, in which the utility function of the
decision maker in the public agency will be obtained based on his preferences for the utility
of each stakeholder.
 Level One: Obtaining Utility Function of Each Stakeholder
A common method for obtaining the utility function for each stakeholder is to
interview key members from each group of stakeholders who are familiar with the
preferences of that group or company. Before calculating the utility functions, there should be
a determination of the appropriate range of the attributes. In measuring the range of the
attributes, an upper limit and a lower limit of each attribute's scope should be determined by
using existing research results and a scientific analysis on the basis of an engineering model
(Goicoechea & Duckstein, 1982).
In the three-attribute utility function, we assess the utility function of the ROR, the
Local Livability Index, and the National Impact Index. The utility is assessed assuming
mutual utility independence for ROR, LLI, and NII. It is also assumed that preferential
independence exists because change in the rank ordering of preferences of one attribute does
not change the rank ordering of preferences of other attributes. Thus the utility function can
be expressed as:

$1 + K\,U(\mathrm{ROR}, \mathrm{LLI}, \mathrm{NII}) = \prod_{i=1}^{3}\left[1 + K\,k_i\,U_i\right]$    (1)

Where $0 < k_i < 1$, $U_1 = U(\mathrm{ROR})$, $U_2 = U(\mathrm{LLI})$, $U_3 = U(\mathrm{NII})$, and $K \ge -1$, $K \ne 0$. The next step in developing the
scaling factors. It should be mentioned that we assume the worst case for all utilities is 0
meaning lowest ROR, minimum LLI, and minimum NII. The best case for each utility is
assumed to be 1, meaning highest ROR, maximum LLI, and maximum NII.

$U(0, 0, 0) = 0$
$U(1, 1, 1) = 1$

Third, k1 can be assessed by asking the decision maker at what probability p in the
following lottery he/she is indifferent between choice #1 and choice #2. For instance, if
the decision maker is indifferent between choice #1 and #2 at the point where p = 0.5, then k1 =
0.5.

Choice #1: (0, 0, 0) with probability p; (1, 1, 1) with probability 1 - p.
Choice #2: (1, 0, 0) with certainty.

This process should be repeated by setting up similar lotteries to assess k2 and k3. In
the last step of obtaining the utility function for each stakeholder, the obtained ki should be
applied to equation (1) for the best case, U(1, 1, 1), in order to calculate K:

$1 + K = \prod_{i=1}^{3}\left(1 + K\,k_i\right)$    (2)

It should be noted that this process should be repeated for the private company,
Up(ROR, LLI, NII), the local community, UL(ROR, LLI, NII), and the federal agency,
UN(ROR, LLI, NII).
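As a numerical illustration of equations (1) and (2), the following Python sketch solves for the nonzero K of a given set of scaling factors by bisection and then evaluates the stakeholder's multiplicative utility. The attribute utilities passed in are placeholder values; the k values shown correspond to the federal agency row of Table 1.

```python
import numpy as np

def solve_K(k, tol=1e-12):
    """Nonzero K satisfying 1 + K = prod(1 + K*k_i) (equation 2), by bisection."""
    k = np.asarray(k, dtype=float)          # assumes sum(k) != 1 (otherwise K -> 0)
    f = lambda K: np.prod(1.0 + K * k) - (1.0 + K)
    # Root lies in (-1, 0) when sum(k) > 1, and is positive when sum(k) < 1
    lo, hi = (-1.0 + 1e-9, -1e-9) if k.sum() > 1.0 else (1e-9, 100.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0.0 else (mid, hi)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def multiplicative_utility(u, k, K):
    """Equation (1): U such that 1 + K*U = prod(1 + K*k_i*U_i)."""
    u, k = np.asarray(u, float), np.asarray(k, float)
    return (np.prod(1.0 + K * k * u) - 1.0) / K

# Scaling factors from the federal-agency row of Table 1
k_fed = [0.10, 0.50, 0.70]
K_fed = solve_K(k_fed)                                  # close to -0.672, as in Table 1
u_fed = multiplicative_utility([0.5, 0.86, 0.52], k_fed, K_fed)  # placeholder U_i values
print(K_fed, u_fed)
```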

 Level Two: Obtaining Utility Function of the funding agency


In Level 2 of this model, the preferences of the funding agency are obtained. The
process in this level is similar to the process in Level 1; however, the utility functions which
are obtained in Level 1 are used instead of the three attributes, so the Level 2 attributes are Up, UL, and UN:

$1 + K\,U(U_p, U_l, U_n) = \prod_{i=1}^{3}\left[1 + K\,k_i\,U_i\right]$    (3)

$U(0, 0, 0) = 0$
$U(1, 1, 1) = 1$

The next step in obtaining the utility function of the decision maker in the public
agency is assessing the scaling factors by changing p in a set of lotteries similar to the
previous level.
Choice #1: (Max Up, Max UL, Max UN) with probability p; (Min Up, Min UL, Min UN) with probability 1 - p.
Choice #2: (Max Up, Min UL, Min UN) with certainty.

As mentioned before, this process should be repeated by setting up similar lotteries to
assess k2 and k3. In the last step of obtaining the utility function for the funding agency, the
obtained ki should be applied to equation (2) for the best case, U(1, 1, 1), in order to calculate
K. This step finalizes the model development by assessing the utility function for the funding agency.

AN APPLICATION OF THE MODEL


As discussed before, one of the strengths of the model is its ability to account for
different objectives of different stakeholders. In order to develop the Level 2 utility function
for the funding agency, it is necessary to capture Level 1 utility functions for the
concessionaire, local community, and federal agency. This process requires meetings with
representatives of each stakeholder to assess their utility functions for each attribute and
importance weight factors, but it is beyond the scope of this paper to establish an accurate
practical utility function for a funding office such as TIFIA. In order to show an application
of the model for funding agencies, a simulation is used for 5 imaginary projects.
First, a random number generator is used to generate data for Level 1’s input. ROR is
set to be between 0.1 and 0.5, and LLI and NII are set to be between 0 and 1. Then it is
assumed that the scaling factors for different stakeholders are obtained using the mentioned
lotteries. Based on those scaling factors, K is calculated using equation (2). The results of the
calculations of scaling factors are as follows:

Table 1: Calculation of scaling factors for level 1 and level 2 utility functions
Level 1
KROR KLLI KNII K
Concessionaire (P) 0.80 0.30 0.10 -0.45156
Local Community (L) 0.20 0.70 0.30 -0.45156
Federal Agency(N) 0.10 0.50 0.70 -0.67191
Level 2
Kp Kl Kn K
Funding Agency 0.30 0.50 0.40 -0.45156

Finally, the utility function for each stakeholder is calculated using equation (1) and
Level 1 scaling factors. These utility functions are used in Level 2 as inputs. In Level 2, the
utility function of the funding agency is calculated using these inputs, Level 2 scaling factors
and Equation 2. The results of these calculations are presented in Table 2.

Table 2: Calculation of utility functions in level 1 and level 2 and project ranking
Level 1 Inputs: ROR, LLI, NII; Level 1 Outputs (= Level 2 Inputs): Up, Ul, Un; Level 2 Outputs: U, Rank
ROR LLI NII Up Ul Un U Rank
Project 1 0.475 0.623 0.512 0.57 0.63 0.62 0.66 2
Project 2 0.195 0.495 0.663 0.35 0.54 0.65 0.58 3
Project 3 0.221 0.858 0.522 0.46 0.74 0.70 0.70 1
Project 4 0.378 0.523 0.147 0.45 0.46 0.37 0.48 5
Project 5 0.281 0.426 0.593 0.39 0.50 0.59 0.55 4
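To show how the two levels chain together, the sketch below reuses solve_K() and multiplicative_utility() from the earlier sketch (assumed to be in scope) and ranks projects with the Table 1 scaling factors. The attribute values are taken from the Level 1 inputs of Table 2 purely for illustration, so the resulting scores are not intended to reproduce Table 2 exactly.

```python
K_LEVEL1 = {                      # Level 1 scaling factors (Table 1)
    "P": [0.80, 0.30, 0.10],      # concessionaire
    "L": [0.20, 0.70, 0.30],      # local community
    "N": [0.10, 0.50, 0.70],      # federal agency
}
K_FUNDING = [0.30, 0.50, 0.40]    # Level 2 scaling factors (funding agency)

def rank_projects(projects):
    """projects: {name: (ROR, LLI, NII)}, attribute values already scaled to [0, 1]."""
    Ks = {s: solve_K(k) for s, k in K_LEVEL1.items()}
    K2 = solve_K(K_FUNDING)
    scores = {}
    for name, attrs in projects.items():
        level1 = [multiplicative_utility(attrs, K_LEVEL1[s], Ks[s])
                  for s in ("P", "L", "N")]        # Up, Ul, Un for this project
        scores[name] = multiplicative_utility(level1, K_FUNDING, K2)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

projects = {                      # Level 1 inputs of Table 2, used for illustration
    "Project 1": (0.475, 0.623, 0.512),
    "Project 3": (0.221, 0.858, 0.522),
    "Project 4": (0.378, 0.523, 0.147),
}
print(rank_projects(projects))
```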

DISCUSSION OF IMPLICATION
As shown in Table 2, this model is able to account for the preferences of
different stakeholders and prioritize projects based on the strategic objectives of the
funding agency and the preferences of different stakeholders.

Table 3: What-If Analysis for Output Utility Function for Funding Agency (Top 9 Inputs Ranked by Percent Change)
Columns: Rank | Input Name | Cell | Minimum Output Value | Output Change (%) | Input Value | Maximum Output Value | Output Change (%) | Input Value
1 Funding Agency (Kn) R8 0.31 -46.31% -0.093456088 0.84 46.31% 0.893456088
2 Funding Agency (Kl) Q8 0.35 -39.31% 0.006543912 0.80 39.31% 0.993456088
3 Funding Agency (Kp) P8 0.45 -22.15% -0.193456088 0.70 22.15% 0.793456088
4 Knii for Local Community AE8 0.46 -19.85% -0.193456088 0.69 19.85% 0.793456088
5 Klli for Local Community AD8 0.48 -15.98% 0.206543912 0.67 15.98% 1.193456088
6 Knii for Federal Agency AK8 0.49 -15.66% 0.206543912 0.67 15.66% 1.193456088
7 Klli for Federal Agency AJ8 0.52 -9.77% -3.55026E-05 0.63 9.77% 1.000035503
8 Klli for Concessionaire X8 0.52 -8.97% -0.193456088 0.63 8.97% 0.793456088
9 Knii for Concessionaire Y8 0.55 -3.85% -0.064485363 0.60 3.85% 0.264485363

In order to show how the preference of each stakeholder can change the final utility
function of the funding agency, a simulation model is created in which 12 different inputs
(the scaling weights for the concessionaire, local community, federal agency, and funding
agency) have been defined as input variables, and the changes in the utility function for the
funding agency in Level 2 have been studied. Table 3 shows the results of this simulation,
and Figure 2 shows the sensitivity analysis for the 9 main inputs that have the greatest effect on
the utility function for the funding agency.

Figure 2: Sensitivity analysis results (tornado chart of the percent change in the utility function for the funding agency for each of the top-ranked inputs, from Funding Agency Kn down to Knii for the Federal Agency).

CONCLUSIONS AND FUTURE RESEARCH


This paper intends to show mathematically how a multi-objective decision support
system can be built using utility theory for different stakeholders. In this paper a computer
simulation has been used to run the model for 5 imaginary projects. However, the research
team is planning to improve the model from utility function to value function. In order to do
so, some metrics will be defined in order to convert different objectives of different
stakeholders into dollar values and create a value function using Multi Attribute Value
Theory (MAVT). The research team will use the available literature to build the mentioned
metrics and will enhance the model using the data available from the TIFIA office. Some other
stakeholders of the projects will be contacted in order to make the model more accurate by
assessing true values for scaling factors.
After collecting the importance weight factors, a case study will be conducted on a
PPP proposal, and a computer simulation will be used to calculate the expected value of that
project for the funding agency, and run the sensitivity analysis for different metrics. Later, the
model will be tested using the data available from TIFIA office for applicants for TIFIA loan

in order to study how the model can prioritize projects in a portfolio. Finally, an optimization
program will be added to the model based on some constraints.
One of the most important features of this model is the fact that it can be updated
easily based on the changes in the strategic objectives of the funding agency, preferences of
different stakeholders or even based on the events that may happen for the portfolio. The next
feature which will be added to the model is a Bayesian network which can automatically
update the scaling factors for the funding agency based on different events. For instance, in
the event of a bankrupt PPP project, such as what happened to the South Bay
Express Lane Project in California, the sensitivity towards ROR will increase; or during an election
season, the sensitivity towards local benefits or national impact may increase, so Kl and KN in
the model should be updated.
REFERENCES
Burak Canbolat, Y., Chelst, K., & Garg, N. (2007). Combining decision tree and MAUT for selecting a
country for a global manufacturing facility. Omega , Volume 35, Issue 3, pp. 312-325.
Butler, J., Chebeskov, A., Dyer, J. S., Edmunds, T., Jia, J. and Oussanov., V.2005. The Use of Multi-
Attribute Utility Theory for the Evaluation of Plutonium Disposition Options in the US.
Cox, L. A. (2002). Risk Analysis: foundations, models, and methods . Kluwer Academic publishers.
Cui, Q., Farajian, M., and Sharma, D. (2010). Feasibility Study Guideline for Public Private.
University Transportation Center for Alabama.
Eom, S., and Kim, E. (2006). A Survey of Decision Support System Applications (1995-2001). The Journal
of the Operational Research Society , Vol. 57, No. 11 , pp.1264-1278.
Goicoechea, D. H., & Duckstein, L. (1982). Multi-Objective Decision Analysis with Engineering and
Business Applications. New York: , John Wiley & Sons.
Keeney, R., and Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value Tradeoffs.
Cambridge: Cambridge University Press.
Konidari, P. A. (2007). Multi-criteria evaluation of climate policy interactions. Journal of Multi-Criteria
Decision Analysis , Volume 14, Issue 1-3, 35 – 53.
Linkov, I., & Ramadan, A. B. (2004). Comparative risk assessment and environmental decision making .
Kluwer Academic publishers.
National Surface Transportation Infrastructure Financing Commission. (2009). Paying our Way: A New
Framework for Transportation Finance. U.S. Department of Transportation.
Office of the Secretary of Transportation. (2009). Notice of Funding Availability for Supplemental
Discretionary Grants for Capital Investments in Surface Transportation Infrastructure Under the
American Recovery and Reinvestment Act. Federal Register /Vol. 74, No. 115 /Wednesday, June 17,
2009 /Notices.
Office of the Secretary of Transportation. (2010 ). Interim Notice of Funding Availability for the
Department of Transportation’s National Infrastructure Investments Under the Transportation, Housing
and Urban Development, and Related Agencies Appropriations Act for 2010; and Request for
Comments. Federal Register , Vol. 75 /No. 79 /Monday, April 26, 2010 /Notices.
Petrovic, D., Duenas, A., & Petrovic, S. (August, 2007 ). Decision support tool for multi-objective job shop
scheduling problems with linguistically quantified decision functions. Decision Support Systems
,Volume 43 Issue 4
Ríos-Insua, S., Mateos, A., Jiménez, A., and Rodríguez, LC. A multi-objective decision support system for
optimization in engineering design., ISC'2004 Proceedings, The International Industrial Simulation
Conference, pages 512-516, June 2004. EUROSIS.
Stanhope, M., and Lancaster, J. (2004). Community and public health nursing. Missouri: Mosby.
Sung-Kyun, K., and Ohseop, S. (2009). A MAUT approach for selecting a dismantling scenario for the
thermal column in KRR-1. Annals of Nuclear Energy , 145-150.
Walls, M. R. (1995). Integrating business strategy and capital allocation: an application of multi-objective decision making. The Engineering Economist, 247-266.
Truck Weigh-in-Motion using Reverse Modeling and Genetic Algorithms
Vala, G.1, Flood, I.2 and Obonyo, E.3
1 M.E. Rinker Sr. School of Building Construction, University of Florida, P.O. Box 115703, Gainesville, FL 32611-5703; PH (352) 271-1152; email: gvala@ufl.edu
2 M.E. Rinker Sr. School of Building Construction, University of Florida, P.O. Box 115703, Gainesville, FL 32611-5703; PH (352) 273-1159; email: flood@ufl.edu
3 M.E. Rinker Sr. School of Building Construction, University of Florida, P.O. Box 115703, Gainesville, FL 32611-5703; PH (352) 273-1161; email: obonyo@ufl.edu
ABSTRACT
The ability to accurately determine the loading attributes of a truck (namely
the axle configuration, the spacing between the axles, and the load imposed by each
axle) while it is in motion is an important function for the design and structural health
monitoring of bridges, and highways. Truck weigh-in-motion (WIM) as it is termed is
an inverse problem where the load is identified from the observed response of the
structure over which it is travelling. The problem has been reasonably well solved
using neural network techniques, but there is still significant room for improvement
in terms of reducing the number of misclassifications of trucks and increasing the
precision of the axle spacing and load estimates.
The problem can be formulated as an optimization problem. Genetic
algorithms (GAs) are proven robust and efficient search optimization techniques. The
potential of the GA approach for reverse identification of axle configuration and
loading from bridge girder stress envelopes has been investigated and compared to an
existing neural network solution. The investigation is a pilot study that considers a
simply supported steel girder bridge with a concrete deck. The bending stresses of the
bridge are simulated numerically and are used as the input for reverse modeling. The
identification procedure is carried out using GAs by minimizing error between the
measured bridge response and reconstructed bridge response. The performance of the
GA depends on the tuning of genetic operators, hence different operator settings are
considered and tuned for optimality. Advance strategies such as migration and
multiple species with real coded representation variables are adopted to improve the
performance. The effect of measurement parameters such as sampling frequency (50-
400 Hz), levels of noise (5-25%), time varying load and measuring sections on
accuracy of identification are also investigated. The performance of the GA approach
is found to outperform the existing neural network solution. The significance of this
is that, unlike the neural network approach, the GA solution can be applied to any
bridge configuration for which a reasonable stress model exists. Moreover, the
computational time for the GA is found to be on average 3-4 seconds which, although
several orders of magnitude slower than the neural network solution, is well
within what could be considered an acceptable delay for generating a solution.


BACKGROUND
Traditionally, truck loads are measured at weigh stations used to penalize
overweight trucks. Also, the load history obtained from the weigh-in-motion (WIM)
station is used for the design of bridges, pavement surface infrastructural design, and
planning. However, this system is expensive and causes traffic to slow down, as
each truck takes a considerable amount of time to weigh (Gagarin 1991).
Hence, advanced WIM technologies which calculate the weight from the responses of a
bridge have been developed and analyzed by many researchers.
Identification of the axle loads, axle spacings, and velocity of a truck
travelling across a bridge is an inverse problem, where the attributes of the truck are
identified from the bridge’s time-wise strain response (strain envelope). One
approach to solving this type of problem is reverse modeling. This makes use of a
search algorithm that iterates towards a solution (a loading scenario defined by the
number and spacing of axles, the axle loads, and the truck velocity) that would have
caused the observed strain envelope. The fitness (accuracy) of a solution is
determined using a forward model that computes the resultant strain envelope for a
given loading scenario using, for example, a finite element model. The fitness is
measured as some function of the difference between the strain envelope generated
by the forward model and that measured on the bridge.
Yu and Chan (2007) have reviewed and compared the various methods of load
identification. Traditional WIM systems have developed from the earlier work done by
Moses (1979). The method involves the inverse of the system matrix and is solved by
the least squares method. However, methods based on the inversion of a matrix are
computationally expensive; hence the pseudo-inverse or singular value decomposition
method is used to reduce the computational load. This method shows high fluctuation
in the error due to the presence of measurement error and ill-posed conditions
(Pinkaew, 2006). In order to tackle these issues, regularization techniques combined with
the least squares error method are used (Law et al., 1997). However, finding the optimal value of the
regularization parameter proves difficult in practice.
There are many methods that involve the use of an optimization algorithm to
search for a solution. Sequential quadratic programming and dynamic programming
have been used most frequently (Leming and Stalford, 2003). However, these
algorithms are based on finding the zero gradient from the provided auxiliary
information. The algorithms require good formulation of the equation containing
gradient information. It is difficult to form the equation for nonlinear constrained
problems. Also traditional optimization algorithms often get stuck in local error
minima. In contrast, Genetic Algorithms are a class of optimization techniques which
do not require knowledge of the gradient of the error function or any auxiliary
information about the system and are capable of escaping from local error minima.
GAs have been applied successfully in other fields.
The objective of this study is to test and evaluate the potential of using reverse
modeling with Genetic Algorithms to determine the truck loading scenario that gave
rise to an observed strain envelope at one or more locations on a bridge.

BRIDGE MODEL

This study considers a bridge modeled as a Timoshenko beam with constant
cross section and constant mass per unit length. Damping and inertial effects of the
bridge are neglected. Bridges considered under this study are formulated as simply
supported single span steel girder structures with a concrete deck as shown in Figure
1. During the passage of the truck, stresses developed at the bottom of the girder have
been simulated. Only the static responses of the bridge have been considered in the
study. The truck is modeled using constant magnitude moving axle loads. Stresses at
the bottom of the girder are calculated from equation 1.

$\sigma = \dfrac{M}{S}$    (1)

Where $\sigma$ = stress in the girder, M = bending moment in the girder, and S = section modulus of the girder.

Figure 1. Bridge Model

Bending moments developed during the passage of a truck are calculated using
influence lines. The influence line for the bending moment due to a unit moving load can be
calculated by the triangular function represented by equation 2:

$IL_m(x(t)) = \begin{cases} x(t) - \dfrac{x(t)\, x_m}{L} & \text{if } x(t) \le x_m \\ x_m - \dfrac{x(t)\, x_m}{L} & \text{if } x(t) > x_m \end{cases}$    (2)

Where $IL_m(x(t))$ = influence line of the bending moment at section m for a unit load at $x(t)$, $x(t)$ is the distance of the moving load from the left support at time t, L = length of the bridge, and $x_m$ is the distance to the measuring section where the bending moment is calculated.
Considering N axles of a truck with loads $A_1, A_2, \ldots, A_N$ and axle spacings $S_1, S_2, \ldots, S_{N-1}$, the
total bending moment is formulated from superposition of equation 2. The bending
moment at any point along the length of the beam at time t is given by equation 3:

$M_m(t) = \sum_{i=1}^{N} A_i \, IL_m\!\left(x(t) - \sum_{j=1}^{i-1} S_j\right)$    (3)
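The following sketch implements equations (1)-(3) for a simply supported span: a triangular influence line evaluated at the measuring section, superposed over the axles of a moving truck, with stress obtained as M/S. The span length, section modulus, axle loads, and speed in the example are arbitrary illustrative values, not data from this study.

```python
import numpy as np

def influence_line(x, xm, L):
    """Equation (2): bending moment at section xm due to a unit load at position x."""
    x = np.clip(x, 0.0, L)          # an axle off the span contributes nothing
    return np.where(x <= xm, x - x * xm / L, xm - x * xm / L)

def girder_stress(t, axle_loads, axle_spacings, velocity, xm, L, S):
    """Equations (1) and (3): stress history at section xm for a moving axle group.

    axle_spacings[i] is the distance between axle i and axle i+1, so the lead
    axle is at x(t) = velocity * t and trailing axles follow at cumulative offsets.
    """
    offsets = np.concatenate(([0.0], np.cumsum(axle_spacings)))
    moment = sum(A * influence_line(velocity * t - off, xm, L)
                 for A, off in zip(axle_loads, offsets))
    return moment / S               # sigma = M / S  (equation 1)

# Example: a 3-axle truck crossing a 25 m simply supported span at 20 m/s,
# with stresses sampled at the midspan section (all values are illustrative).
t = np.linspace(0.0, 2.0, 201)
sigma = girder_stress(t, axle_loads=[60e3, 110e3, 110e3],
                      axle_spacings=[4.0, 1.3], velocity=20.0,
                      xm=12.5, L=25.0, S=0.015)
```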

Measurement Noise
WIM data will include measurement error due to various factors including
lack of precision of the system used for recording the response, uneven profile of
road surfaces, vehicle acceleration, internally induced vibration in a vehicle, and
environmental conditions such as temperature and moisture (Prozzi and Hong, 2007).
Some causes of noise may result in periodic dynamic behavior of the truck and can be
modeled using Fourier type functions, as discussed in the next sub-section. Other
noise is random in the time and space domains and can be modeled as a Gaussian
distribution which is uncorrelated and has a zero mean. White Gaussian noise, as this
is termed, has been used in various studies for noise modeling. On the other hand,
systematic error occurs mostly due to inadequate calibration of equipment (Prozzi
and Hong, 2007), although this is not considered in this study. In order to investigate
the effect of noise on the accuracy of a solution White Gaussian noise is added to the
noise-free data generated using the static influence line of bridge moment. Noise
vectors having zero mean and unit standard deviation and being uncorrelated are
added to the noise-free simulated bridge response vector as described by equation 4.
The levels of noise considered in this study were 5%, 10%, 15%, 20%, and 25%.

$\sigma_{np} = \sigma_{nf} + \mathrm{RMS}(\sigma_{nf}) \cdot N_l \cdot N_{rand}$    (4)

Where $\sigma_{np}$ = noise-polluted stress response of the bridge, $\sigma_{nf}$ = noise-free stress response of the bridge, RMS = root mean square value, $N_l$ = level of noise, and $N_{rand}$ = random noise vector with zero mean and a standard deviation of 1.0.
Dynamic Axle Weight
The dynamic effects of a truck in motion affect the load on the axle with time.
To investigate the accuracy of the algorithm under dynamic effects, the time varying
load model used in the study by Zhu and Law (2002) is adopted, as described by
equations 5 and 6:
Front Axle Load = Static Front Load × [1 + 0.1 sin(10πt) + 0.05 sin(40πt)]    (5)
Rear Axle Load = Static Rear Load × [1 − (0.1 sin(10πt) + 0.05 sin(40πt))]    (6)
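A short numerical sketch of equations (4)-(6): white Gaussian noise scaled by the RMS of the noise-free response is added to a simulated stress vector, and the static axle loads are modulated by the two sine terms. All numeric values below are placeholders.

```python
import numpy as np

def add_white_noise(sigma_nf, noise_level, seed=0):
    """Equation (4): pollute the noise-free response with white Gaussian noise."""
    rng = np.random.default_rng(seed)
    n_rand = rng.standard_normal(np.shape(sigma_nf))   # zero mean, unit std. dev.
    rms = np.sqrt(np.mean(np.square(sigma_nf)))
    return np.asarray(sigma_nf) + rms * noise_level * n_rand

def dynamic_axle_loads(static_front, static_rear, t):
    """Equations (5) and (6): time-varying front and rear axle loads."""
    ripple = 0.1 * np.sin(10 * np.pi * t) + 0.05 * np.sin(40 * np.pi * t)
    return static_front * (1 + ripple), static_rear * (1 - ripple)

# Illustrative use: 15% noise on a dummy response and loads sampled over 1.25 s
t = np.linspace(0.0, 1.25, 126)
front, rear = dynamic_axle_loads(60e3, 110e3, t)
noisy = add_white_noise(np.sin(np.pi * t), noise_level=0.15)
```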
GENETIC ALGORITHM
The GA gains its inspiration from the theory of evolution by natural selection,
‘survival of the fittest’. Holland (1975) used the idea of Darwin’s theory for complex
optimization problems. GAs operate on a population of individuals, where each individual
is a potential solution to the problem. A fitness function (or objective function) must
be defined, and the GA searches the constrained domain for the solution for which the
value of the function is minimum. GAs are an iterative
process in which, at each iteration, each solution is evaluated based on its fitness value
and, through some stochastic operation, potentially better solutions are produced for
the next generation. A population of solutions is generated and evaluated at each
iteration. The population is created using GA operators such as crossover and

mutation. The process is repeated until a solution is found that is within a specified
error tolerance.
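The sketch below is a deliberately simplified, single-population, real-coded GA for this reverse-modeling task: each individual encodes candidate axle loads and spacing, fitness is the squared error between the measured stress envelope and the envelope reconstructed by a forward model, and ranking, blend crossover, and Gaussian mutation stand in for the tuned operators and multi-species migration scheme described later in this paper.

```python
import numpy as np

def run_ga(forward_model, measured, bounds, pop_size=60, generations=200,
           crossover_rate=0.8, mutation_rate=0.1, seed=0):
    """Minimal real-coded GA for reverse load identification (a simplified sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))

    def fitness(ind):
        # Squared error between the measured and the reconstructed stress envelope
        return np.sum((measured - forward_model(ind)) ** 2)

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        pop = pop[np.argsort(scores)]            # rank: best individuals first
        elite = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = elite[rng.integers(len(elite), size=2)]
            child = p1.copy()
            if rng.random() < crossover_rate:    # arithmetic (blend) crossover
                a = rng.random()
                child = a * p1 + (1.0 - a) * p2
            if rng.random() < mutation_rate:     # Gaussian mutation, kept in bounds
                child = np.clip(child + rng.normal(0.0, 0.05 * (hi - lo)), lo, hi)
            children.append(child)
        pop = np.vstack([elite, children])

    scores = np.array([fitness(ind) for ind in pop])
    best = pop[np.argmin(scores)]
    return best, scores.min()

# Example (assuming girder_stress from the earlier sketch and a measured envelope
# `measured_sigma` sampled at the same times t): identify two axle loads and spacing.
# best, err = run_ga(
#     lambda p: girder_stress(t, p[:2], [p[2]], 20.0, 12.5, 25.0, 0.015),
#     measured_sigma,
#     bounds=[(10e3, 200e3), (10e3, 200e3), (1.0, 10.0)])
```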

Tuning of GA Operators
The performance of the GA depends on the tuning of the GA operators. It is
good practice to try different combinations of operators and analyze their impact on
the results. Table 1 outlines the different settings of the GA operators that were
investigated. Since the GA is a stochastic process the procedure was run 20 times to
gain a more representative mean and variance for the truck loading attributes. The
mean of the result was taken as the identified value of load and spacing.
Table 1. GA operators
Operator                  Options                                                        Trial 1     Trial 2
Fitness Scaling           Proportionate, Rank, Shift Linear                              Rank        Proportionate
Selection                 Roulette, Stochastic Uniform, Tournament, Uniform, Remainder   Uniform     Remainder
Crossover                 Scattered, Single Point, Two Point, Intermediate,              Scattered   Heuristic (0.8)
                          Heuristic (ratio: 0 to 2), Arithmetic
Crossover Fraction        0 to 1                                                         0.4         0.8
Number of Subpopulations  3                                                              3           3
Subpopulation Size        5 to 20                                                        5:5:10      10:10:15
Migration Direction       Forward, Both                                                  Forward     Both
Migration Interval        5 to 60                                                        40          45

Effect of Fitness Scaling: Both the rank scaling and proportionate scaling were
found to perform well compared to the shift linear scaling. Rank scaling had a
consistent error in all the identified parameters and also within the selection criteria.
Hence, rank scaling was adopted as the optimal scaling operator for the study.
Effect of Selection: Stochastic Uniform, Roulette and Uniform selection have
produced results which meet the selection criteria. In order to select one of them as
the optimal operator, the minimum cumulative error associated with each operator
was considered. Since the cumulative error did not show much difference, the
standard deviation was considered. The lowest standard deviation was associated
with the Stochastic Uniform selection operator, hence it was adopted as the optimal
selection operator for the study.

Effect of Crossover: The only crossover operator that met the selection criteria was
Heuristic Crossover with a ratio set to 1.8. Hence, Heuristic Crossover was chosen as
the optimal crossover operator for further analysis.
Effect of Migration: Migration and multiple species increase the accuracy of the
identification. Three subpopulations, each having 20 individuals, were considered with a
migration interval of 40 and a forward migration fraction of 0.6.

RESULTS
The bridge response was simulated with increases in the number of measuring sections,
sampling frequency, and noise level. The minimum sampling frequency considered
was 100 Hz in order to capture the recommended minimum of 100 data points of the bridge response
profile (Yu and Chan 2007). In Figure 2, the spacing refers to the
axle spacing between front and rear axle load. The spacing between the rear axle
loads was assumed to be known. From the results, it was found that the front axle
load experienced more fluctuation in error compared to the rear axle load. For noise
free data all the results were found to be within 3% error at all sampling frequencies.
It was also found that even a single measuring section at midspan was sufficient to
produce results of this level of accuracy.
The range of each parameter considered in the subsequent analysis was as
follows:
 Sampling Frequency = 100 to 400 Hz
 Number of Measuring Sections = 1, 3, 5, 7, and 9 points along the length of
the bridge. Points were placed at equal distance of 1/8th of the length of span
from the midpoint.
 Level of Noise = 5%, 10%, 15%, 20%, 25%

Effect of Number of Measuring Location and Noise Level


Figure 2 shows the effect on performance of changing the number of measuring points
and the sampling frequency for the 15% noise situation. For the noise-free bridge
response, the number of measuring points does not show any significant effect on
accuracy. However, as the noise level increased, the error in identification increased.
Improvement in the result was not consistent with an increase in the number of
measuring locations. However, the error was reduced to within 10% when
considering a minimum of 3 measuring locations. The front axle load is more
sensitive to error than the rear axle loads. Error in the rear axle loads remained within 5%
at the low sampling frequency; for higher frequencies, the error is within 1% for
all levels of noise. Error in the front axle load was high for noisy data at a low
sampling frequency, but increasing the sampling frequency to 400 Hz reduced the
error to within 5-10%. Axle spacing identification shows randomness in error.
However, at higher frequencies, the error can be seen within 5% except for certain
levels of noise.

Computational Time
The computational time taken by the GA for finding a solution was
investigated. A personal computer with Intel core i5 2.67 GHz with 4GB RAM was
used for the study. The CPU time required to find a solution was between 3 and 4

seconds. However, it should be noted that increasing the size of the population
significantly influences the processing time.
Figure 2. Effect of number of measuring locations and sampling frequency at a noise level of 15% (three panels at sampling frequencies of 100 Hz, 200 Hz, and 400 Hz, each plotting error (%) versus the number of measuring locations for axle 1, axle 2, axle 3, and the axle spacing).

Effect of Time Varying Loading


The effect of the dynamic nature of axle loading was considered, as described
by equations 5 and 6. With the inclusion of this dynamic effect, it was found that the
influence of the sampling frequency and the number of measuring sections on the front
axle was significant, while the rear axle remained within 3% error for all levels of
noise and all measuring sections. For low sampling frequencies, the error in spacing was
greater, but for higher sampling frequencies the error was reduced to within 5%.
CONCLUSIONS AND RECOMMENDATIONS
The performance of GA’s as a reverse modeling tool for determining the
loading attributes of a truck was studied. The bending stresses of the bridge were
numerically simulated and used as input for prediction. The effect of several
measurement parameters (namely, sampling frequency, number of measuring
sections, white noise, time varying load) on the accuracy of the solutions generated
was investigated.
The GA optimization approach appears to be more accurate than the existing
static neural network approaches, although this needs to be verified in a one on one

comparison using an identical set of validation problems. For noise free bridge
response, truck attributes can be found within the accuracy of 1% considering the
bridge response recorded only at the midspan. However, the presence of the dynamic
effect of a truck and white noise affect the accuracy significantly. A single
measuring location is inadequate for noisy data sets. Increasing the number of
measuring locations increased the accuracy, as did increasing the sampling frequency.
Increasing the number of measuring locations and sampling frequency increase the
amount of information available for identifying the truck loading attributes, and thus
helped overcome the effects of white noise and dynamic loading.
It is proposed to extend the scope of the study to include bridges of more
complicated structure, using finite element methods for the forward modeling
component of the algorithm. In addition, consideration will be given to a range of
truck types. Additional validation will include the use of live data from a range of
bridges.
REFERENCES
Gagarin, N. (1991). "Advances in weigh-in-motion with pattern recognition and
prediction of fatigue life of highway bridges." PhD thesis, University of
Maryland at College Park, MD.
Law, S. S., Chan, T. H. T, and Zeng, Q. H. (1997). “Moving force identification: A
time domain method.” J. Sound Vib., 201, 1-22.
Law, S. S., Bu, J. Q., Zhu, X. Q., and Chan, S. L. (2004). "Vehicle axle loads
identification using finite element method." Eng.Struct., 26(8), 1143.
Law, S. S., and Zhu, X. Q. (2004). "Dynamic behavior of damaged concrete bridge
structures under moving vehicular loads." Eng.Struct., 26(9), 1279.
Leming, S. K., and Stalford, H. L. (2003). "Bridge Weigh-in-Motion System Development Using Superposition of Dynamic Truck/Static Bridge Interaction." Proceedings of the American Control Conference.
Monti, G., Quaranta, G., and Marano, G. C. (2010). "Genetic-Algorithm-Based
Strategies for Dynamic Identification of Nonlinear Systems with Noise-
Corrupted Response." J.Comp.in Civ.Engrg., 24(2), 173-187.
Moses, F. (1979). “Weigh-in-Motion system using instrumented bridges,” J. Comp.
in Civ. Engrg., 105(3), 233-249.
Pinkaew, T. (2006). "Identification of Vehicle Axle Loads from Bridge Response
using Updated Static Component Technique." Engineering Structures, 28(11),
1599-1608.
Prozzi, J. and Hong, F. (2007). “Effect of Weight-in-Motion System Measurement
Errors on Load-Pavement Impact Estimation.” J.Trans. Engrg., 133(1), 1-10.
Yu, L., and Chan, T. H. T. (2007). "Recent Research on Identification of Moving
Loads on Bridges." J.Sound Vibrat., 305(1-2), 3-21.
The Application of Artificial Neural Network for the Prediction of the
Deformation Performance of Hot-Mix Asphalt

Ilseok Oh, Ph.D.1 and Wasim Barham, Ph.D.2


1 Assistant Professor, Civil and Construction Engineering, Southern Polytechnic State University, 1100 South Marietta Parkway, Marietta, GA, 30060; PH (678) 915-3264; FAX (678) 915-5527; email: ioh@spsu.edu
2 Assistant Professor, Civil and Construction Engineering, Southern Polytechnic State University, 1100 South Marietta Parkway, Marietta, GA, 30060; PH (678) 915-3946; FAX (678) 915-5527; email: wbarham@spsu.edu

ABSTRACT

The prediction of the deformation responses of Hot-Mix Asphalt (HMA) mixtures to
loading conditions, which is a crucial component of the structural design of
pavements, requires that complex analytical models be determined through extensive
laboratory testing and field validation. Those experiments generate a large amount of data,
and traditional statistical regression has been employed to analyze the data and to
develop the prediction models. In terms of behavior, HMA is categorized as a
visco-elastic material, and in order to model this relation a non-linear regression
must be adopted due to the complexity of the HMA behavior. The Artificial Neural
Network (ANN) is a powerful modeling tool that is capable of capturing highly
nonlinear functions. This paper describes how the ANN is trained to predict the
responses of HMA, and a comparison with a conventional regression model is
presented as well. In order to ensure the generality of the final neural network, the
input/output training patterns are taken from different conditions and therefore the
trained neural network is not limited to a specific condition. The final network is
tested using a separate set of data that has not been used in training the ANN.

INTRODUCTION

As a unique distress mode of asphalt pavements, permanent deformation, i.e.,


rutting and shoving, has been accounted for by the load-temperature related
viscoelastic properties of Hot Mix Asphalt (HMA) mixtures, responding to diverse
in-situ states of stress and/or strain under repeated trafficking and various
environmental conditions. Furthermore, the fact that truck tire pressures are
increasing and that most rutting observed in trench cuts occurs in the top 3 ~ 4 inches
of the HMA layer requires the production of more rut-resistant and stable mixtures
and better methods to predict the complicated behavior of HMA mixtures (Brown and
Cross, 1992).
As can be seen in Figure 1 (the subscripts, e, v, and p stand for elastic,
viscous, and plastic (or permanent) strain respectively), the response of HMA mixture
to a traffic load or tire pressure is time-dependent and temperature-dependent. This
type of material behavior has an elastic component - ‘instantaneous response &
instantaneous full-recovery’ and a viscous/plastic component - ‘retarded response &


retarded partial-recovery’ at the same time, and the proportions of those components
are highly dependent on the loading time (duration of the loading) and material
temperature.

Figure 1. The Deformation of HMA mixture under one cycle of loading

The permanent deformation of HMA mixtures has been a major concern to


asphalt paving technologists for a long time because this type of performance failure,
which often occurs in the early service years, significantly reduces the serviceability
and causes hazardous hydroplaning of vehicles. However, due to the complexity of
the HMA behavior, most of the prediction methods proposed require that complex analytical
models be determined through extensive laboratory testing and field validation.
Those experiments generate a large amount of data, and traditional statistical regression
has been employed to analyze the data and to develop the prediction models. This
paper sought to examine the applicability of the Artificial Neural Network (ANN) to
the prediction of HMA performances.
Nowadays, neural networks have gained a broad interest in various civil
engineering problems. They are used as an alternative to the traditional regression
and optimization methods because of their capability to capture complex nonlinear
relationships.

DATA GENERATION

Since the objective of this study is to implement the ANN for the prediction of
HMA performance, the set of data used in this study was extracted from the author’s
previous study, which can be found elsewhere (Oh and Coree, 2004).
Table 1 summarizes the data used in this study. It contains critical volumetric
properties of HMA mixtures and the Gyratory Indentation Test Number (GITn) which
represents the number of loading corresponding to the 2% deformation of the
specimens measured by the Gyratory Indentation Test. A high GITn value indicates a
stable or highly rut-resistant mix, while a low GITn indicates a highly rut-susceptible
mix. It should be noted that among the 27 test samples, 24 samples were used to train
the artificial neural network and the other 3 samples (#4, #11, and #23) were used to
verify the trained ANN.

Table 1. Input/output data used to train and validate the ANN


Inputs: Nd(a), SA(b), Pb(c), Pbe(d), VMA(e), VFA(f), DP(g), FT(h), FAA(i); Output: GITn
Sample No. Nd SA Pb Pbe VMA VFA DP FT FAA GITn

1 75 5.929 6.542 5.715 15.773 74.641 0.875 9.640 48.1 52


2 75 5.929 5.708 4.679 13.650 70.696 1.069 7.892 46.5 58
3 75 5.929 5.594 4.704 13.680 70.760 1.063 7.934 44.6 56
4 75 4.942 6.650 5.732 15.780 74.651 0.698 11.599 48.1 51
5 75 4.942 5.929 4.983 14.193 71.817 0.803 10.083 46.5 54
6 75 4.942 5.729 4.854 13.918 71.259 0.824 9.823 44.6 53
7 75 3.958 6.773 5.883 15.910 74.858 0.510 14.863 48.1 41
8 75 3.958 6.250 5.428 14.915 73.182 0.553 13.714 46.5 49
9 75 3.958 5.950 5.093 14.411 72.243 0.589 12.867 44.6 39
10 100 5.929 6.167 5.337 15.660 74.458 0.937 9.002 48.1 66
11 100 5.929 5.429 4.396 13.733 70.873 1.137 7.415 46.5 78
12 100 5.929 5.363 4.470 13.836 71.090 1.119 7.539 44.6 62
13 100 4.942 6.188 5.265 15.461 74.129 0.760 10.654 48.1 70
14 100 4.942 5.444 4.493 13.816 71.048 0.890 9.092 46.5 69
15 100 4.942 5.479 4.602 13.996 71.421 0.869 9.312 44.6 63
16 100 3.958 6.278 5.383 15.666 74.466 0.557 13.600 48.1 49
17 100 3.958 5.818 4.992 14.810 72.991 0.601 12.613 46.5 60
18 100 3.958 5.550 4.689 14.249 71.927 0.640 11.847 44.6 65
19 125 5.929 5.941 5.109 15.674 74.479 0.979 8.617 48.1 89
20 125 5.929 5.300 4.266 13.988 71.404 1.172 7.196 46.5 109
21 125 5.929 5.164 4.269 13.854 71.128 1.171 7.201 44.6 82
22 125 4.942 5.833 4.907 15.325 73.899 0.815 9.929 48.1 88
23 125 4.942 5.278 4.325 13.952 71.330 0.925 8.752 46.5 103
24 125 4.942 5.272 4.394 14.081 71.593 0.910 8.891 44.6 94
25 125 3.958 5.923 5.024 15.488 74.173 0.597 12.695 48.1 75
26 125 3.958 5.500 4.671 14.774 72.925 0.642 11.803 46.5 75
27 125 3.958 5.292 4.428 14.205 71.841 0.677 11.189 44.6 78
(a) Nd = Design Number of Gyrations; (b) SA = Surface Area of combined aggregate; (c) Pb = Binder Content in percent; (d) Pbe = Effective Binder Content in percent; (e) VMA = Voids in Mineral Aggregate; (f) VFA = Voids Filled with Asphalt binder; (g) DP = Dust Proportion; (h) FT = Film Thickness in microns; (i) FAA = Fine Aggregate Angularity (ASTM C1252, Method A)

NEURAL NETWORK TRAINING

Artificial neural networks are parallel computational models inspired by the


structure of the nerve cells of the human brain, their interconnection, and their
interaction. These models can perform some intelligent activities similar to those of
the human brain and therefore they are able to capture knowledge about a
phenomenon. In this research, the Rumelhart Multilayer Perceptron program
(Dawson and Yaremchuk, 2003) developed by Michael R.W. Dawson and Vanessa
Yaremchuk from the University of Alberta was used. Rumelhart is a program written
in Visual Basic and uses two types of activation functions: logistic activation function
and Gaussian activation function. Those activation functions can produce four
different neural networks types; 1) Gaussian activation function in all units, 2)
logistic activation function in all units, 3) logistic activation function in the hidden
units and Gaussian activation function in the output units, and 4) Gaussian activation
function in the hidden units and logistic activation function in the output units. In this
study, we used Gaussian activation function in all units.
The GITn of Hot-Mix Asphalt is a function of 9 parameters: Nd, SA, Pb,
Pbe, VMA, VFA, DP, FT, and FAA. A total of 27 HMA samples with different
parameters were tested in the lab, and the GITn value was determined for each sample.
Among the 27 data sets, 24 samples were used to train the neural network with 9
input nodes and one output node (the GITn value). The minimum level of squared
error defining a hit was set to 0.001, with a learning rate of 0.5. A trial-and-error
procedure was used to determine the number of hidden nodes needed to train the network.
A simple normalization operation was performed to scale the input and output data
between 0 and 1, which is the accepted input-output range in the Rumelhart program.
Because of the complexity of HMA deformation performance, a total of 5 hidden
units were needed to capture the behavior of the 24 HMA samples. The final ANN is
shown in Figure 2 below.

Figure 2. Trained Neural Network



The other 3 lab samples were used to verify the accuracy of the developed
neural network and its capability to predict the GITn value for a given HMA mix.
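
The training setup can be outlined in code. The sketch below is not the Rumelhart
Visual Basic program; it only assumes min-max scaling of the inputs and output to
[0, 1], a 9-5-1 topology with Gaussian activations in all units, a 0.5 learning rate,
the 0.001 squared-error hit criterion, and a plain batch gradient-descent update,
with all function names being illustrative.

```python
import numpy as np

def gaussian(x):
    """Gaussian activation (the option used here for all units)."""
    return np.exp(-x ** 2)

def gaussian_deriv(x):
    return -2.0 * x * np.exp(-x ** 2)

def minmax_scale(a):
    """Scale each column to [0, 1]; lo/hi allow mapping predictions back to GITn units."""
    lo, hi = a.min(axis=0), a.max(axis=0)
    return (a - lo) / (hi - lo), lo, hi

def train_mlp(X, y, n_hidden=5, lr=0.5, max_epochs=50000, hit=0.001, seed=0):
    """Batch gradient descent for a 9-5-1 network with Gaussian units throughout."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-0.5, 0.5, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.uniform(-0.5, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
    for _ in range(max_epochs):
        net1 = X @ W1 + b1
        h = gaussian(net1)
        net2 = h @ W2 + b2
        out = gaussian(net2)
        err = out - y
        if np.max(err ** 2) < hit:   # every pattern within the squared-error "hit" level
            break
        d2 = err * gaussian_deriv(net2)            # output-layer delta
        d1 = (d2 @ W2.T) * gaussian_deriv(net1)    # hidden-layer delta
        W2 -= lr * h.T @ d2 / len(X); b2 -= lr * d2.mean(axis=0)
        W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(axis=0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return gaussian(gaussian(X @ W1 + b1) @ W2 + b2)
```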

RESULTS AND DISCUSSION

In this section, results obtained from the trained ANN are presented and
compared to the observed values. Next, verification of the effectiveness of the
developed network in predicting the GITn value for test samples different from those
used to train the neural network is presented.
Figures 3, 4, and 5 present a comparison between the GITn values predicted
by the ANN and the observed values for all the test samples that were used to train
the neural network. Although the deformation performance of HMA is very
complicated, the figures show that the ANN was able to learn the relation between
the 9 inputs and the one output (GITn). The sum of squared errors was 0.003 over all
samples and less than 0.001 for each individual sample.

[Figure 3 data, Nd = 75]
Training pattern:  1     2     3     4     5     6     7     8
Observed GITn:     58    56    41    49    39    54    53    52
ANN GITn:          56.5  56.6  41.0  48.8  39.7  53.6  53.1  52.2

Figure 3. Observed GITn vs. ANN GITn of training patterns (Nd=75)

[Figure 4 data, Nd = 100]
Training pattern:  1     2     3     4     5     6     7     8
Observed GITn:     63    49    60    66    62    70    65    69
ANN GITn:          61.9  48.8  59.7  66.0  61.9  69.8  65.2  70.1

Figure 4. Observed GITn vs. ANN GITn of training patterns (Nd=100)



[Figure 5 data, Nd = 125]
Training pattern:  1     2     3     4     5     6     7     8
Observed GITn:     75    75    88    89    109   82    78    94
ANN GITn:          74.5  74.7  88.4  88.3  107.2 83.8  78.3  92.1

Figure 5. Observed GITn vs. ANN GITn of training patterns (Nd=125)

The nature of the neural network learning process yields a trained
network that can be used to forecast the response for patterns different from
those used to train the network. In order to validate the applicability of the developed
ANN to predict the performance of HMA, the trained ANN was used to calculate the
GITn values for three test samples, which were not part of the training patterns. In
addition, the regression model developed previously by the authors using the statistical
analysis software SAS was used to calculate the GITn for each case and compared
to the ANN output. The regression was linear and based on a reduced set of independent
inputs after eliminating statistically insignificant factors and their interactions (Oh and
Coree, 2004; Cochran and Cox, 1960; John and Quenouille, 1977; Ott, 1993).
The results of this analysis are presented in Table 2. As the table demonstrates, the
ANN was able to successfully predict the GITn values for the three samples within a
small margin of error. In fact, the ANN prediction is closer to the observed value
than the regression prediction, especially for sample #4. This indicates that the ANN
is a powerful tool for modeling the nonlinear performance of HMA.

Table 2. Neural Network Verification


Test                  Observed GITn   Regression GITn   ANN GITn
sample #4 (Nd=75) 51 68.9 47.78
sample #11 (Nd=100) 78 85.1 83.58
sample #23 (Nd=125) 103 99.3 102.40

CONCLUSION

This paper has focused on the application of artificial neural
networks and their potential use in predicting the performance of HMA. Based on the
findings of this study, the ANN was able to learn the relation between the 9 inputs
governing the behavior of HMA and the one output (GITn). The developed artificial
neural network performed much better than the traditional regression method in
predicting the GITn value and therefore has great potential in this field.
Although this study used only 9 inputs, additional input parameters such as testing
temperatures, loading conditions, and asphalt binder types can be easily included in
training the ANN, which is not the case for the traditional statistical analysis.

REFERENCES

Brown, E. R., and Cross, S. A. (1992). “A National Study of Rutting in Hot Mix
Asphalt (HMA) Pavements.” Journal of the Association of Asphalt Paving
Technologists, Volume 61.
Cochran, W. G., and Cox, G. M. (1960). Experimental Designs, 2nd ed. New York:
John Wiley & Sons.
Dawson, M. R. W., and Yaremchuk, V. (2003). The Rumelhart and RumelhartLite
Multilayer Perceptron Programs. Biological Computation Project, University
of Alberta, Edmonton, Alberta, Canada.
Department of Transportation, Federal Highway Administration (1998). Performance
of Coarse-Graded Mixes at Westrack – Premature Rutting. Final Report,
FHWA-RD-99-134, U.S.
Ott, R. L. (1993). An Introduction to Statistical Methods and Data Analysis, 4th
ed. California: Wadsworth Pub. Co.
John, J. A., and Quenouille, M. H. (1977). Experiments: Design and Analysis, 2nd ed.
London: Charles Griffin & Company Ltd.
Oh, I., and Coree, B. J. (2004). “A Rapid Performance Test for SUPERPAVE HMA
mixtures.” Proceedings of International Symposium on Long Lasting Asphalt
Pavements, International Society of Asphalt Pavements, Auburn, Alabama.
Oh, I., and Coree, B. J. (2004). “A Really Simple Performance Test.” Proceedings of
Association of Asphalt Paving Technologists, Baton Rouge, Louisiana.
An Approach for Occlusion Detection in Construction Site Point Cloud Data

Dennis J Bouvier, Ph.D.1, Chris Gordon, Ph.D.2, and Matthew McDonald3


1 Department of Computer Science, Southern Illinois University Edwardsville, SIUE Box 1656, Edwardsville, IL, 62026, 618-650-2386, email: dbouvie@siue.edu
2 Department of Construction, Southern Illinois University Edwardsville, SIUE Box 1803, Edwardsville, IL, 62026, 618-650-2867, email: cgordon@siue.edu
3 Department of Computer Science, Southern Illinois University Edwardsville, SIUE Box 1656, Edwardsville, IL, 62026, email: mmcdona@siue.edu
ABSTRACT
Data collected using laser scanners on construction sites often include regions
in 3D space that cannot be observed beyond occlusions, which are objects in the line
of sight of the scanner. These occlusions may exist even if scans are planned using a
scan-planning algorithm. The issue of occlusion can prevent accurate modeling of
objects in a scan, requiring potentially costly decisions to revisit the site for additional
scans. Computational support is needed to help quickly decide whether obtained data
is adequate, or if additional data collection is needed to meet scanning objectives.
This paper describes an approach to rapidly interpret point cloud data obtained from
construction sites. This approach can help determine whether to collect more data, to
use modeling techniques to identify features or objects in the existing data, or to
continue without data in occluded spaces. The paper demonstrates initial
experimental results obtained by applying this approach to simulated and actual point
cloud data.
Keywords: LADAR, laser scanning, occlusion, point cloud
INTRODUCTION
Laser scanners have been widely studied in construction automation literature
due to their accuracy (on the order of millimeters), range (on the order of hundreds of
meters), and speed of data collection (up to one million points per second). Due to
these characteristics, laser scanners have been utilized on construction sites to collect
data for such applications as construction progress tracking (e.g., Bosche 2009) and
defect detection (e.g., Akinci et al. 2006). Laser scanners collect sets of 3D
coordinate data (or “point clouds”) from their surroundings by measuring the phase
shift or round-trip time of flight for pulses of light to travel from the scanner to
surfaces in the line of sight of the scanner. Each point cloud is composed of
thousands to millions of points in 3D space. These points can support measurements
in 3D space, surface meshing, basic geometry detection (e.g. planes and spheres), and
object recognition. However, given that point cloud data is only collected in the line
of sight of a scanner, point clouds from construction sites often include regions in 3D
space not observable beyond occluding objects. In Figure 1, for example, a single
scan of several piles (shown in gray) results in large occluded areas (shown in black)
out of the line of sight of the scanner. This lack of directly measured points can
complicate use of point cloud data for construction applications (e.g. for calculation
of pay quantities).


Figure 1. Demonstration of occlusions in point cloud data.


Elmqvist defined occlusions in point clouds created by optical scans by
stating “an object o is said to be occluded from a viewpoint v if there exists no line
segment r between v and o such that r is not blocked” (Elmqvist and Tsigas 2008).
While every object in the line of sight of the scanner occludes some space from the
view of the scanner, this work concentrates on ‘meaningful occlusions’, in which
some object of interest is missing from the collected scan data due to the relative
positions of objects and the scanner.
The following alternatives exist to accommodate the potential for occlusions in
scans: (1) planning scans to ensure adequate coverage of a site with strategically
positioned scans; (2) modeling and interpolation using given data to estimate site
conditions in occluded areas; and (3) additional scanning to collect data in occluded
areas. Scan planning algorithms reason with known objects on a site (e.g. 3D models
of building components) to determine the scanner location or locations that provide
least-cost coverage of objects of interest. Construction sites are challenging domains
for scan planning due to varying topology, clutter, and vegetation. In the event that
properly planned scans miss data due to unexpected occlusions, several approaches,
such as (Fischler et al. 1987) and (Ying and Castanon 1999), have previously been
used to apply or develop reconstructed models with data adjacent to an occluded
space. If such techniques are unacceptable, one has to make the potentially costly
decision of whether to revisit the site for additional scans.
Rapid computational support for occlusion detection is needed to determine
whether obtained data is adequate, or if additional data collection is needed to meet
scanning objectives. This paper presents techniques for addressing occlusions in
construction site scan data without a priori information about a given site. The paper
demonstrates initial experimental results obtained by applying this approach on
simulated and actual point cloud data.
APPROACH TO IDENTIFYING MEANINGFUL OCCLUSIONS
Previous work on this problem suggests there are different ways of detecting
occlusions in point cloud data. However, a survey of the literature finds little previous
work directly in the problem domain. One promising approach finds occlusions in
indoor scenes using prior knowledge that the scene will include rectangular objects
(e.g., doors and windows) as part of a complex system for identifying occluded
spaces (Adán and Huber 2010). Another approach, also for indoor scenes, keys on
edge detection for detecting occluded spaces (Dell’Acqua and Fisher 2002).

Construction sites do include considerable amounts of straight edges, rectangular
objects, and even 3D/4D models of proposed construction, all of which can assist
occlusion detection approaches for this domain. This work aims to detect occluded
spaces without being constrained to rectangular objects and with the ability to
accommodate the irregularities of varying site topologies and naturally occurring
scenery. Thus the approach taken here is not based on any a priori knowledge of the
scan data before processing.
One should also note that this prior work was impractical for field use because the
reported execution times were on the order of tens of minutes for small point clouds,
albeit without algorithmic optimization (Adán and Huber 2010; Dell'Acqua and
Fisher 2002). Such speeds would be impractical for decision support in the field, and
for the size of point clouds collected in typical construction site scans. With
increasing scan resolutions and scan rates of new scanners, an alternative approach
seems necessary.
Occlusions can be detected by finding large differences in distance to a point
from the scanner for small changes of viewing angle. Entering point cloud data into a
data structure that enables spatial ‘queries’ provides a mechanism for identifying
occlusions in this manner. As the octree data structure naturally fits 3D problems
requiring space partitioning, it can be used as a data structure for such an approach to
occlusion detection. An octree representation partitions 3D space (and the points
within this space) into successively smaller sets of eight equally sized spaces, or
octants. Each node of an octree can be further subdivided or may remain a leaf node.
An approach to detect occlusions within this data structure is to traverse the octree to
visit each leaf node and compare the distance from the origin (i.e., position of the
scanner) to the leaf nodes that are ‘neighbors’ as viewed from the origin. This
involves computing the view direction and determining the view direction for
neighbor nodes. Figure 2, below, uses a quad-tree to illustrate this approach in two
dimensions.

Figure 2. Finding field-of-view neighbors of a particular (shaded) leaf node using
projectors in a quad-tree.
The shaded square in Figure 2 represents the leaf node being considered as an
occluding object. Projectors from the point of view to the right and left of the node
are used to find the field-of-view neighbors. When a projector encounters another
leaf node, the distances of the points are compared to decide whether the node is
occluding other data. This approach successfully detects candidate occlusions, but
computational time and complexity suffer due to the mismatch between the kind of
query needed (finding field-of-view neighbors) and the organization of the data in the
octree. Soon after implementing a prototype quad-tree based occlusion detection
algorithm, we utilized a new data structure, described below, to better connect the
representation of 3D space to the spatial querying needed for effective identification
of occlusions and occluded space.
Angle-Tree Storage of Point Cloud Data. The angle-tree data structure is designed
to reduce the computation time and complexity of finding field-of-view neighbors.
The root of an angle-tree has two children; each encodes the points of one
hemisphere. Each non-leaf hemisphere node has four children that subdivide the field
of view into four parts, subdividing the space into upper-left, upper-right, lower-left
and lower-right quadrants. This subdivision of the field of view continues until the
number of points in the field of view of a particular node is below a predefined
threshold of subdivision.
Using this organization, points are entered into the tree based on their view
direction as expressed by two angles. The two angles are analogous to latitude and
longitude values used for global positioning. Consequently, each leaf node contains
the points visible in some narrow field of view. Additionally, points near that field of
view are in nodes neighboring that node. On the other hand, it is possible for the
points in one leaf to have very different Euclidian distance values (as measured from
the origin). Figure 3 shows the structure of an angle-tree. Note this illustration shows
an unrealistically short tree. Angle trees used to store scans of several construction
site examples have resulted in trees with eight to twelve levels.

Figure 3. Angle-tree data structure showing one point bucket as the only child of
one leaf node and two point buckets as the children of a different leaf node.
Figure 3 also shows point buckets at leaf nodes. To deal with the possibility of
points in one field of view being at various distances from the scanner, leaf nodes of
the angle-tree keep one or two lists of points, or point buckets. When the point
distances are close together, one point bucket is used. When the greatest distance
between any pair of points exceeds a user-defined threshold value, two point buckets
are used. When two point buckets are used, the points are segregated by distance
using the midpoint between the two extreme point distances in the node.
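
A compact re-implementation of this storage scheme might look like the following
sketch; the class and function names, the azimuth/elevation parameterization of the
view direction, and the subdivision details are assumptions rather than the authors'
code.

```python
import numpy as np

class AngleNode:
    """One field-of-view cell of an angle-tree (a sketch, not the authors' code)."""

    def __init__(self, az_lo, az_hi, el_lo, el_hi, depth=0):
        self.bounds = (az_lo, az_hi, el_lo, el_hi)
        self.depth = depth
        self.children = None   # four angular quadrants once subdivided
        self.points = []       # (azimuth, elevation, range) tuples while a leaf

    def insert(self, pt, bucket_size, max_depth=16):
        if self.children is not None:
            self._child_for(pt).insert(pt, bucket_size, max_depth)
        else:
            self.points.append(pt)
            if len(self.points) > bucket_size and self.depth < max_depth:
                self._subdivide(bucket_size, max_depth)

    def _subdivide(self, bucket_size, max_depth):
        az_lo, az_hi, el_lo, el_hi = self.bounds
        az_mid, el_mid = 0.5 * (az_lo + az_hi), 0.5 * (el_lo + el_hi)
        self.children = [AngleNode(a0, a1, e0, e1, self.depth + 1)
                         for a0, a1 in ((az_lo, az_mid), (az_mid, az_hi))
                         for e0, e1 in ((el_lo, el_mid), (el_mid, el_hi))]
        pts, self.points = self.points, []
        for pt in pts:
            self._child_for(pt).insert(pt, bucket_size, max_depth)

    def _child_for(self, pt):
        az_lo, az_hi, el_lo, el_hi = self.bounds
        az_mid, el_mid = 0.5 * (az_lo + az_hi), 0.5 * (el_lo + el_hi)
        return self.children[(2 if pt[0] >= az_mid else 0) + (1 if pt[1] >= el_mid else 0)]

    def point_buckets(self, distance_threshold):
        """One bucket, or two buckets split at the midpoint of the extreme ranges
        when the spread of ranges in this leaf exceeds the user-defined threshold."""
        r = [p[2] for p in self.points]
        if not r or max(r) - min(r) <= distance_threshold:
            return [self.points]
        mid = 0.5 * (max(r) + min(r))
        return [[p for p in self.points if p[2] <= mid],
                [p for p in self.points if p[2] > mid]]

def build_angle_tree(xyz, scanner_origin, bucket_size=10):
    """Two hemisphere roots; each scan point is filed by its two view angles."""
    v = np.asarray(xyz, float) - np.asarray(scanner_origin, float)
    rng = np.linalg.norm(v, axis=1)
    az = np.arctan2(v[:, 1], v[:, 0])
    el = np.arcsin(np.clip(v[:, 2] / rng, -1.0, 1.0))
    hemispheres = [AngleNode(-np.pi, 0.0, -np.pi / 2, np.pi / 2),
                   AngleNode(0.0, np.pi, -np.pi / 2, np.pi / 2)]
    for a, e, r in zip(az, el, rng):
        hemispheres[0 if a < 0.0 else 1].insert((a, e, r), bucket_size)
    return hemispheres
```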
Finding Meaningful Occlusions in the Angle-Tree. With all points of a point cloud
scan entered into an angle tree, the tree can be processed to tag point buckets as a)
being the source of occlusion, b) neighboring an occluded space, or c) neither. A
number of user-defined parameter values are then used to tune the occlusion detection
algorithm to the data set. These include “depth range”, “detection distance”, and
“bucket size”. Each of these is discussed in more detail below.

The depth range value is the number of levels above the deepest leaf node at
which nodes will be considered for occlusion detection. The higher the value used for
depth range, the higher in the tree (and consequently the larger) the nodes that will be
considered. In most cases tested to date, a depth range of 1, 2, or 3 yields good results.
The tree is traversed from top to bottom. If a parent node is within the depth range,
the parent is examined and the children of the parent are ignored. To look at the parent
node's points, we traverse to all of the children that are leaves and combine all of
their buckets. These potentially large combined buckets are considered the parent's
set of points.
The detection distance value specifies the minimum distance difference
between neighboring nodes before they are classified as being a source of occlusion
or bordering an occluded space. In this paper, distance means “distance of a point to
the origin”. For example, assume that node A has an average point distance of 10,
node B has an average point distance of 20, and the detection distance value is 10. In
this case, even though node A is closer and could possibly occlude node B, it will not
be counted as occluding because the difference of distances between nodes A and B
does not exceed the detection distance value.
The bucket size value specifies how many points a node can hold before
dividing into two nodes. This value, consequently, determines how many levels of
nodes the tree has. For smaller point clouds, a bucket size of 3 to 10 works well. For
larger clouds, a size of 100 or larger is still largely effective.
The combination of detection distance, bucket size, and depth range values
control the behavior of the algorithm. As a result, the occlusion detection algorithm
can be tuned to successfully detect occlusions in very large and very small point
clouds.
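
To illustrate how these parameters might interact, the sketch below tags point
buckets by comparing the mean ranges of field-of-view neighbors against the
detection distance; the neighbor query is approximated here by a simple angular
radius, and all names and values are illustrative assumptions rather than the
authors' implementation.

```python
import numpy as np

def tag_occlusions(buckets, detection_distance, angular_radius):
    """Tag each point bucket as 'occluder', 'adjacent_to_occluded', or 'neither'.

    `buckets` is a list of dicts with the bucket's mean view angles ("az", "el")
    and its mean range from the scanner ("mean_range"), e.g. gathered from the
    angle-tree leaves. Two buckets are treated as field-of-view neighbors when
    their view directions differ by less than `angular_radius`.
    """
    centers = np.array([[b["az"], b["el"]] for b in buckets])
    ranges = np.array([b["mean_range"] for b in buckets])
    tags = ["neither"] * len(buckets)
    for i in range(len(buckets)):
        near = np.where(np.linalg.norm(centers - centers[i], axis=1)
                        < angular_radius)[0]
        for j in near:
            if j == i:
                continue
            # detection-distance rule: a large jump in range between
            # field-of-view neighbors marks an occlusion boundary
            if ranges[j] - ranges[i] > detection_distance:
                tags[i] = "occluder"                  # nearer bucket blocks the view
                if tags[j] == "neither":
                    tags[j] = "adjacent_to_occluded"  # farther bucket borders the gap
    return tags
```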
EXPERIMENTAL RESULTS
A range scan simulator was created to facilitate development and evaluation
of the occlusion detection approach. The simulator generates point cloud data from
simple geometry, such as rectangles and circles, thus allowing for testing on well-
known data. The simulator allows for testing experimental implementations on cases
with controllable properties, including point cloud size, shape, and density.
Figure 4 shows a rendering of a small (~ 3,000 point) point cloud produced by
the simulator. The scene represents a wall with a window (a large rectangle enclosing
a rectangular opening) with two trees (two tall, narrow rectangles) occluding parts of
the wall and window from a scanner location to the left of the wall and trees. As the
point cloud is the product of a scanner simulation, some ‘wall’ points are absent
from the resulting point cloud due to the occluding ‘trees’.
In this scene, the expectation is for the parts of the ‘trees’ in front of the wall
to be classified as meaningful occluders. Likewise, it is expected that the points of the
wall adjacent to the missing (occluded) points would be classified as such. In this
case, the angle-tree approach to occlusion detection is judged to be successful, as the
expected classifications appear in the rendered data.
In the image (see Figure 4), the points colored orange are points in a node
detected as the source of a meaningful occlusion. The points colored green are points
in a node detected as being adjacent to occluded space. The remaining points, colored
black, are classified as neither occluding nor adjacent to an occluded space.

Figure 4. Detection of meaningful occlusions in a small simulated scan data set.


Note that only part of the larger rectangle has been identified as adjacent to a
meaningful occlusion. Also, note that only parts of the smaller rectangles are
identified as a source of a meaningful occlusion. This is the result of a) the small
rectangles being represented by many nodes in the angle-tree, and b) some of those
nodes being inline with the ‘window’, and thus, not occluding the ‘wall’. Hence, the
algorithm is able to correctly detect that there is an occlusion in the data, and is able
to highlight the furthest extent of this occluding object from the simulated scanner.
Figure 5 shows a rendering of a point cloud used for our examples in the rest
of this paper. The scene is of a building with a geodesic dome as part of the roof. The
same data set is used for Figures 6 to 8. Note the vegetation in front of the building
(from the scanner point of view). The data set was edited to frame the outline of the
building.

Figure 5. Rendering of the dome scene, as viewed from above and to the right of
the scanner.
In the following images (Figures 6 to 8), points are colored based on their
participation in occlusions. The orange points are in a node detected as the source of
an occlusion. The green colored are points in a node detected as adjacent to occluded
space. The black nodes are neither occluding nor adjacent to an occluded space.
Figure 6 shows the same dome scene as rendered in Fig. 5. The differences
between these two images are a) the viewing position, and b) the points which are the
source of occlusion have been identified in the image. The points in Fig. 7 are
identified in the same way as in Fig. 6. Here the viewing position is near the scanner
location. From this position, it is easy to see the correlation between the points that
occlude and those adjacent to occluded spaces. Again, the data was edited to include
the field of view that outlines the building.

Figure 6. Scene of building with a dome in which trees occlude parts of a wall,
viewed from a point above and to the left of the origin of the scanner.

Figure 7. Scene of building with a dome in which trees occlude parts of a wall
viewed from a point near the origin of the scanner.

Figure 8. A slice of dome scene in which trees occlude a wall with a window,
(left) occlusions detected with high bucket value (100), (right) occlusions
detected with low bucket value (10)
Figure 8 shows the different results given by changing the bucket size
parameter. With a large bucket size (100 points), the algorithm produces many false
positives, as seen in the left image of Figure 8. The large bucket value prohibited the
angle tree from subdividing sufficiently to properly classify finer details of the point
cloud, resulting in over-classification. By comparison, the right image of Figure 8 is a
rendering of the same scene with a lower bucket value (10). In this rendering, the
occlusions are correctly identified.
DISCUSSION
The approach outlined above proved successful in identifying
meaningful occlusions in our initial experiments on simulated and actual data. This is
true for large outdoor scenes as well as small simple scenes. For the dome scan
(approximately 55,000 points) the angle tree construction and occlusion detection
was completed in a matter of a few seconds on a consumer-grade personal computer.

Testing of larger scenes (approx. 1 million points) was equally successful. Though
programming refinements may yield improved performance, we believe our
prototype is already fast enough for field applications. While individual scans can be
set to exceed the numbers of points tested to date, the parameters tested are sufficient
to quickly evaluate the existence of occlusions in a scan from a given location.
Though we tried to identify characteristics in which our approach yielded
significant false positives (detecting an occlusion where none exists) or false
negatives (failing to detect an existing occlusion), we found that tuning the
parameters of the algorithm would resolve minor issues encountered to date. As we
continue to improve this work, we plan to examine automation of setting the
occlusion detection parameters for different site characteristics.
Future work will include investigating alternative data structures, as well as
exploring the general approach outlined here as a means for efficient meshing and
reconstruction of occluded regions. The approach outlined in this paper shows
promise of providing valuable decision support at a similar speed to data collection.
While the approach as conceived, and implemented, focuses on one scan at a time, it
may be extended to accommodate additional scans (as would be necessary for
complete coverage of typical construction sites).
REFERENCES
Adán, A. and Huber, D., 2010, Reconstruction of Wall Surfaces Under Occlusion and
Clutter in 3D Indoor Environments. CMU-RI-TR-10-12
Akinci, B., Boukamp, F., Gordon, C., Huber, D., Lyons, C. and Park, K., 2006. A
formalism for utilization of sensor systems and integrated project models for
active construction quality control. Automation in Construction, Elsevier,
New York, USA, Vol. 15, No. 2, pp. 124-138.
Bosche, F., Haas, C.T., Akinci, B., 2009, "Automated Recognition of 3D CAD
Objects in Site Laser Scans for Project 3D Status Visualization and
Performance Control", ASCE Journal of Computing in Civil Engineering,
Special Issue on 3D Visualization, Vol. 23, Issue 6, pp. 311-318.
Dell’Acqua, F. and Fisher, R. 2002 Reconstruction of Planar Surface Behind
Occlusions in Range Images. IEEE Transactions on Pattern Analysis and
Machine Intelligence, Vol. 24, 569-575.
Elmqvist, N. and Tsigas, P. 2008. A Taxonomy of 3D Occlusion Management for
Visualization. IEEE Transactions on Visualization and Computer Graphics,
Vol. 14, No 5, September 2008.
Fischler, M. A. and Bolles, R. C. 1981. Random sample consensus: a paradigm for
model fitting with applications to image analysis and automated cartography,
Communications of the ACM, Vol 24:6.
Ying, Z. and Castanon, D. 1999. Statistical Model for Occluded Object Recognition,
Proceedings of the 1999 International Conference on Information Intelligence
and Systems, pp. 324 -327.
Applications of Machine Learning in Pipeline Monitoring

Yujie Ying1, Joel Harley2, James H. Garrett, Jr.3, Yuanwei Jin4, Irving J. Oppenheim5,
Jun Shi6, and Lucio Soibelman7
1 Department of Civil and Environmental Engineering, Carnegie Mellon University, Pittsburgh, PA 15213; PH (412) 620-3253; email: yying@cmu.edu
2 Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213; PH (732) 567-6786; email: jharley@andrew.cmu.edu
3 Department of Civil and Environmental Engineering, Carnegie Mellon University, Pittsburgh, PA 15213; PH (412) 268-2941; email: garrett@cmu.edu
4 Department of Engineering and Aviation Sciences, University of Maryland Eastern Shore, Princess Anne, MD 21853; PH (410) 621-3410; email: yjin@umes.edu
5 Department of Civil and Environmental Engineering, Carnegie Mellon University, Pittsburgh, PA 15213; PH (412) 268-2950; email: ijo@cmu.edu
6 Department of Civil and Environmental Engineering, Carnegie Mellon University, Pittsburgh, PA 15213; PH (607) 279-9558; email: junshi@andrew.cmu.edu
7 Department of Civil and Environmental Engineering, Carnegie Mellon University, Pittsburgh, PA 15213; PH (412) 268-2952; email: lucio@andrew.cmu.edu

ABSTRACT
In the field of structural health monitoring, researchers focus on the design of
systems and techniques capable of detecting damage in structures. However, most
traditional detection methods fail under environmental and operational variations that
tend to distort the signals and masquerade as damage. In this paper, we investigate the
applications of machine learning techniques to developing a damage detection system
robust to changes in the internal air pressure of a pipe. From each of the 240
experimental datasets, we extract 167 features and implement three classification
algorithms for detecting damage: adaptive boosting, support vector machines, and a
method combining the two. The performances of the three classifiers are evaluated
over 30 detection trials with different combinations of training and testing data,
resulting in the average accuracies of 87.7%, 92.5% and 93.5%, respectively. The
combined method is a promising classifier for damage detection. Through feature
selection, we also demonstrate the effectiveness of features related to the curve length,
the shift-invariant correlation coefficient and the peak amplitude of the signal.

INTRODUCTION
Natural gas pipelines require regular inspection and maintenance to ensure
their structural safety and integrity. We have explored a continuous monitoring
technique for steel natural gas pipelines using permanently installed low-cost
transducers to perform structural health monitoring (SHM). Previously, we had
devised a Time Reversal Change Focusing (TRCF) approach by combining guided
wave ultrasonics with time reversal acoustics (Harley et al. 2009, Ying et al. 2010a).
The TRCF method can focus and magnify the changes caused by damage in the
received signals and allows us to detect very small defects. However, benign effects
such as changes in air pressure can also produce considerable difference in the signals
(Ying et al. 2010b). It is essential but challenging to develop robust detection schemes
that are invariant to environmental and operational conditions. In this paper, we
present our results of applying machine learning algorithms to distinguishing damage
(simulated by a mass scatterer) from harmless pressure variations in a steel pipe.

MEASUREMENTS
For our experiments, we used a pair of lead zirconate titanate (PZT) ultrasonic
sensors to generate guided waves inside of a pressurized, steel pipe (Figure 1a). We
used a National Instruments PXI data acquisition device to excite a 300 kHz sinc
pulse from one PZT and measured the response from the other PZT. To simulate
damage, we placed a mass scatterer at six locations on the pipe surface, with three
near the transmitter (Zone 1), and three close to the receiver (Zone 2), as shown in
Figure 1.

The data was taken during 13 different collection events, each with 20 records.
Every record is a 10 ms long signal, sampled at 1 MHz. Over the 20 records in each
collection, the pipe was randomly pressurized or discharged from 0 to 110 PSI. The
first collection of measurements from an “undamaged” pipe (i.e., no mass scatterer
applied to the pipe) is regarded as baseline data. In SHM, a baseline is a known
signal collected when there is no damage in the structure under test and is used as a
reference to evaluate the present condition of the structure. Therefore, the baseline
data is excluded from both the training and testing sets when the machine learning
algorithms are applied. After measuring the baseline, the second collection is taken
with a grease-coupled mass on the pipe. The mass is then removed and another set of
undamaged data is taken. This is done to account for any changes due to the
grease coupling used. These two succeeding collections of damaged and undamaged
records are named “Measurement Set 1”. This process of placing and removing the
mass is repeated for another five locations, and six measurement sets (240 records) in
total are recorded, each with an equal number of undamaged and damaged records.

Figure 2 shows examples of one undamaged record and one damaged record.
The two signals are difficult to distinguish in either the time or frequency domain.
Moreover, the correlation coefficient is computed as a metric for the similarity between
the baseline measurement and each of the 240 measurements taken, with 1 indicating two
identical signals and 0 indicating no similarity. The pressure changes decrease the
correlation coefficient by an amount comparable to that caused by damage; the reduction
in correlation coefficient is subtle but variable over all the measurements (see Figure
3).
Figure 1. (a) Schematic of the steel pipe specimen and the mass locations (blue
crosses), showing the pressure gauge, PZT transmitter, PZT receiver, valve, and
Zones 1 and 2; and (b) photo of the mass.

Figure 2. (a) Received signal with no damage present, (b) amplitude spectrum of (a),
(c) received signal with damage (mass) present, and (d) amplitude spectrum of (c).

Figure 3. Correlation coefficients of the baseline and (a) each of the 120 measurements
with no damage present, and (b) each of the 120 measurements with damage present.

FEATURE EXTRACTION
Two types of features are considered: one requires a baseline and the other is
independent of the baseline. 167 different features are extracted using signal processing
and machine learning tools, such as the Fourier transform, the Hilbert transform, Time
Reversal Focusing (TRF), TRCF, correlation, principal component analysis (PCA), and
the analysis of local maxima, as briefly detailed below.

Baseline-free features
112 baseline-free features are extracted from the time domain signal, the TRF
signal, the TRCF signal, the envelopes of the above three, and the amplitude spectrum.
The TRF and TRCF methods have been developed in our earlier work (Harley et al. 2009,
Ying et al. 2010a); the envelope and the amplitude spectrum of a signal can be computed
by using the Hilbert transform and the Fourier transform, respectively.
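
The envelope and amplitude spectrum mentioned above can be obtained with standard
signal processing tools; the following is a minimal sketch assuming SciPy's Hilbert
transform and NumPy's FFT, with illustrative function names.

```python
import numpy as np
from scipy.signal import hilbert

def envelope(x):
    """Signal envelope: magnitude of the analytic signal from the Hilbert transform."""
    return np.abs(hilbert(np.asarray(x, float)))

def amplitude_spectrum(x):
    """One-sided amplitude spectrum from the Fourier transform."""
    return np.abs(np.fft.rfft(np.asarray(x, float)))
```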

Peak amplitude and location features. The peaks of a complex signal indicate the arrival,
reflection, or conversion of wave modes. We would expect certain peaks to be affected
differently from others when damage is introduced. Local maxima of a signal are
searched for to construct the features, including the number of local maxima, the
amplitudes and the locations of the first 3 maxima, and the peak-to-peak amplitude.
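
A possible implementation of these peak features is sketched below; it interprets the
"first 3 maxima" as the three greatest peaks (consistent with the "third greatest
peak" feature discussed later), which is an assumption, and relies on SciPy's peak
finder.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_features(x):
    """Number of local maxima, amplitudes/locations of the 3 greatest peaks,
    and peak-to-peak amplitude of a 1-D signal."""
    x = np.asarray(x, float)
    peaks, _ = find_peaks(x)                   # indices of local maxima
    order = peaks[np.argsort(x[peaks])[::-1]]  # peaks sorted by decreasing height
    top3 = order[:3]
    return {
        "n_peaks": len(peaks),
        "top3_amplitudes": x[top3].tolist(),
        "top3_locations": top3.tolist(),
        "peak_to_peak": float(np.ptp(x)),
    }
```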

Statistical features. We extract the mean, median, standard deviation and kurtosis values,
for the signals and the amplitude spectrum, as well as for the locations and amplitudes of
all the peaks in different domains. Any shift, scale or conversion in wave modes may
change the distribution of energy across time or frequencies.

Curve length. The curve length of a signal is useful for describing the signal complexity
(Lu and Michaels 2009). A variation in curve length may be caused by changes in the
modal amplitudes or locations of waves. The curve length is also robust to time-scale
changes since the signal's shape remains the same. The curve length of a discrete-time
signal x[n] with N samples is defined by the sum of the absolute differences between
successive samples, CL = sum over n = 2..N of |x[n] - x[n-1]|.
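
A direct implementation of this feature is a one-liner; the sketch below assumes the
summed absolute first-difference definition given above.

```python
import numpy as np

def curve_length(x):
    """Curve length of a discrete-time signal: sum of absolute first differences."""
    return float(np.sum(np.abs(np.diff(np.asarray(x, float)))))
```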

Baseline-dependent features
55 features are generated based on 11 baselines: the mean of the first collection and
the first 10 principal components of those measurements. PCA is used to uncover
certain properties of the signal that may better define the presence of damage, and can be
implemented by singular value decomposition or eigendecomposition.

Standard and shift-invariant correlation coefficients. We have shown in Figure 3 that the
correlation coefficient of two signals (a discrete-time baseline signal xb[n] and another
measured signal x[n]) is generally not ideal for our applications. However, we may be
able to modify the correlation coefficient formula by utilizing the invariance of the
magnitude of the Fourier transform to time-shifting changes. We generate shift-invariant
correlation coefficients by correlating the magnitudes of the transforms,

    rho_s = sum over k of |Xb[k]| |X[k]| / (||Xb|| ||X||),

where Xb[k] and X[k] denote the discrete Fourier transforms of xb[n] and x[n], and
||.|| is the Euclidean norm.
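
Read as a normalized inner product of the magnitude spectra, the feature could be
computed as in the sketch below; the exact normalization used by the authors may
differ.

```python
import numpy as np

def shift_invariant_corr(x, x_baseline):
    """Correlate DFT magnitudes so that a pure time shift does not lower the score."""
    X = np.abs(np.fft.fft(np.asarray(x, float)))
    Xb = np.abs(np.fft.fft(np.asarray(x_baseline, float)))
    return float(np.dot(X, Xb) / (np.linalg.norm(X) * np.linalg.norm(Xb)))
```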

Differential curve length. Lu and Michaels (2009) showed that the differential curve
length was an excellent feature for damage detection. The feature is computed similarly
to the curve length shown previously, but with the residual signal. In addition, the
curve length of the envelope of the residual signal is also obtained.

Mean square error (MSE). The MSE is an important criterion for evaluating an estimator
of a true value. For our analysis, we utilize the MSE as a feature to measure the
difference between a discrete-time signal x[n] with N sampling points and the baseline
xb[n]: MSE = (1/N) sum over n = 1..N of (x[n] - xb[n])^2.
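
Both of these baseline-dependent features reduce to a few lines; the sketch below
assumes the residual is the sample-wise difference between the measurement and the
baseline.

```python
import numpy as np

def differential_curve_length(x, x_baseline):
    """Curve length of the residual (measurement minus baseline) signal."""
    residual = np.asarray(x, float) - np.asarray(x_baseline, float)
    return float(np.sum(np.abs(np.diff(residual))))

def mean_square_error(x, x_baseline):
    """Average squared difference between a measurement and the baseline."""
    residual = np.asarray(x, float) - np.asarray(x_baseline, float)
    return float(np.mean(residual ** 2))
```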

MACHINE LEARNING RESULTS


We utilize three binary classification approaches for damage detection: adaptive
boosting (AdaBoost), support vector machines (SVMs), and a method combining the two
(AdaSVM). AdaBoost is an ensemble classification approach that linearly superposes a
number of weighted "weak" binary classifiers to generate a final "strong" classifier. The
weak learners are usually simple and moderately inaccurate. Each weak classifier focuses
on the misclassified instances of the previous classifier (Freund and Schapire 1997).
SVM is a linear maximum margin classifier that maps the data into a higher dimensional
space through kernel tricks so that a linear hyperplane or set of hyperplanes can be
determined in that high dimensional feature space (Cortes and Vapnik 1995, Burges
1998). In our application, 167 weak classifiers are used to construct the AdaBoost strong
classifier; a soft-margin SVM with the radial basis function as kernel is implemented
using LIBSVM (Chang and Lin 2001).

One limitation of AdaBoost is that it can only linearly combine weak classifiers;
thus the final classifier may not necessarily be optimal (Shen et al. 2005, Morra et al.
2010). By contrast, SVM can effectively incorporate nonlinear combinations of features
through kernel functions. However, applying SVM to 167 features is not computationally
efficient and some features may create adverse effects in classification by adding noise.
As a result, we develop a combined method that uses AdaBoost to select principal
features, followed by SVM for classification. We define the principal features as the
features selected when the AdaBoost classifier reaches its lowest error rate after a certain
number of iterations. One issue about AdaBoost is that it allows the features to be
selected repeatedly, whereas, using the same feature more than once is of little use for
SVM (Morra et al. 2010). Therefore, we make slight modifications in the AdaBoost
algorithm to avoid the repeated selection of the same feature when implementing
AdaSVM.
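
A rough scikit-learn analogue of this combined scheme is sketched below. It is only an
illustration: the principal-feature selection at AdaBoost's lowest error rate, and the
modification that prevents repeated selection of the same feature, are approximated
here by ranking AdaBoost's aggregate feature importances; all names and parameter
values are assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

def adasvm_fit(X_train, y_train, n_principal=10):
    """AdaBoost (depth-1 stumps by default) ranks the features; an RBF SVM then
    classifies using only the highest-ranked ("principal") features."""
    ada = AdaBoostClassifier(n_estimators=200)
    ada.fit(X_train, y_train)
    selected = np.argsort(ada.feature_importances_)[::-1][:n_principal]
    svm = SVC(kernel="rbf", C=1.0)
    svm.fit(X_train[:, selected], y_train)
    return svm, selected

def adasvm_predict(svm, selected, X_test):
    """Apply the trained SVM to the same principal-feature subset."""
    return svm.predict(X_test[:, selected])
```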

For cross-validation purposes, the three classifiers are applied to 30 tests with different
divisions of the training and testing sets. All of the trials are categorized into five groups,
six in each group, according to the number of measurement sets used for training, i.e.
from one to five. All the remaining data records compose the testing set. We consider
several tests with a very small portion of the acquired data for training, given that in the
real world of SHM, we usually do not have a large amount of data to learn the damage
characteristics in a structure. The challenge is to determine how to make the most of the
limited information to maximize the probability of making a correct decision.

Feature Selection
We apply AdaBoost to automatically rank principal features over the 30 trials.
The three most frequently selected features are the curve length of the time-domain
signal, the shift-invariant correlation coefficient of the second principal component,
and the amplitude of the third greatest peak of the time-domain signal. Figure 4
illustrates that the three features oscillate over all the measurement sets, but show
generally good separation between the 120 undamaged datasets (blue) and the 120
damaged datasets (red).

As an example of the trials, consider the measurements taken at Zone 2 for
training. All the datasets are plotted in the feature space defined by two features
selected by AdaBoost: the curve length and the amplitude of the third greatest peak of
the signal (Figure 5a). The two classes, undamaged and damaged, are well separated.
This indicates that the selected features are sensitive to the damage while robust to
the pressure changes. By contrast, the data points largely overlap if we randomly
choose two features, such as the maximum amplitude and the number of peaks of the
signal (Figure 5b). AdaBoost has been demonstrated to be effective for feature
selection; further classification algorithms are needed for automatic damage detection.

Figure 4. Three normalized principal features selected by AdaBoost: (a) curve length of
the signal, (b) shift-invariant correlation coefficient of the signal and the baseline, and (c)
amplitude of the third greatest peak of the signal, over 240 measurements, with the
undamaged records in blue circles and damaged records in red circle areas.

Figure 5. Normalized feature space with features (a) selected by AdaBoost, and (b)
randomly selected. Blue crosses: undamaged records for training; red asterisks:
damaged records for training; blue circles: undamaged records for testing; and red
circle areas: damaged records for testing.

Damage Detection
We show the results of damage detection by using three classification methods,
AdaBoost, SVM, and AdaSVM. Figure 6 shows the classification results of six trials
with four measurement sets for training and another two sets for testing. All the tests
lead to high accuracy, greater than 90% and with several at 100%, using any of the
three classifiers. Low false-positive rates (FPRs) and false-negative rates (FNRs) are
also shown in Figure 6. In addition, six complementary tests are conducted by
reversing the roles of the training and testing sets of the foregoing trials. Figure 7
shows that the performance of AdaBoost is weakened due to the reduction in the
number of training data cases, while SVM and AdaSVM still achieve relatively high
accuracy, ranging from 82.5% to 96.9%, and 83.1% to 100%, respectively.
Furthermore, we show in Figure 8 the average performance of the three algorithms as
the number of training data cases increases. SVM and AdaSVM show more than 85%
accuracy whether the training data is sufficient or inadequate; AdaBoost
gives more than 95% accuracy when the training set consists of at least three
measurement sets, but the accuracy decreases rapidly as the number of training data
cases is reduced. As a rough evaluation of the classifiers, the average accuracy over
all 30 trials is 87.7%, 92.5% and 93.5% for AdaBoost, SVM and AdaSVM,
respectively. Combining AdaBoost and SVM leads to superior performance.


Figure 6. Classification results of damage detection with four measurement sets (2/3
of all the datasets) for training, by using (a) AdaBoost, (b) SVM, and (c) AdaSVM.


Figure 7. Classification results of damage detection with two measurement sets (1/3
of all the datasets) for training, by using (a) AdaBoost, (b) SVM, and (c) AdaSVM.

Figure 8. Average classification performance over different numbers of training data
cases, in terms of (a) accuracy, (b) false-positive rate, and (c) false-negative rate.

CONCLUSIONS
Physical experiments were conducted on a pipe with varying internal
pressures and with a mass scatterer at six positions to simulate damage. Signal
processing and machine learning techniques have been applied to extract 167 features.
The curve length, the shift-invariant correlation coefficient, and the amplitude of the
third greatest peak in the time-domain signal were most frequently selected by AdaBoost
as principal features, giving good separation between the undamaged and damaged classes. Three
classification methods (adaptive boosting, support vector machines and a combination
approach of the two) have been investigated in order to detect the damage in the pipe.
These three classifiers provide average accuracies of 87.7%, 92.5% and 93.5%,
respectively, over 30 trials with different combinations of training and testing data.
The combined method is a promising classifier for damage detection.

In a further study, we have incorporated features extracted from wavelet
analysis and the Mellin transform, and have localized the mass to different zones on
the pipe. These additional investigations will be discussed in a future paper.

ACKNOWLEDGEMENTS
The work is based on an earlier project (the Instrumented Pipeline Initiative)
that was supported by Department of Energy through Concurrent Technologies
Corporation, and the work has been supported by an award from the Pennsylvania
Infrastructure Technology Alliance and by a gift from Westinghouse Electric
Company. The authors would also like to thank Professor Lawrence Cartwright at
Carnegie Mellon University for his advice on operating the experimental apparatus.

REFERENCES
Burges, C. J. (1998). “A tutorial on support vector machines for pattern recognition.”
Data mining and knowledge discovery, 2(2), 121–167.
Chang, C., and Lin, C. (2001). “LIBSVM : a library for support vector machines.”
Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
Cortes, C., and Vapnik, V. (1995). “Support-vector networks.” Machine Learning,
20(3), 273–297.
Freund, Y., and Schapire, R.E. (1997). “A decision-theoretic generalization of on-line
learning and an application to boosting.” Journal of Computer and System
Sciences, 55(1), 119–139.
Harley, J., O'Donoughue, N., States, J., Ying, Y., Garrett, J. H., Jin, Y., Moura, J. M.
F., Oppenheim, I. J., and Soibelman, L. (2009). “Focusing of Ultrasonic
Waves in Cylindrical Shells using Time Reversal.” Proceedings of the 7th
International Workshop on Structural Health Monitoring, Stanford, CA.
Lu, Y., and Michaels, J. E. (2009). “Feature Extraction and Sensor Fusion for
Ultrasonic Structural Health Monitoring Under Changing Environmental
Conditions.” Sensors Journal, IEEE, 9(11), 1462–1471.
Morra, J. H., Tu, Z., Apostolova, L. G., Green, A. E., Toga, A. W., and Thompson, P.
M. (2010). “Comparison of AdaBoost and support vector machines for
detecting Alzheimer's disease through automated hippocampal segmentation.”
IEEE Transactions on Medical Imaging, 29(1), 30-43.
Shen, L., Bai, L., Bardsley, D., and Wang, Y. (2005). “Gabor feature selection for
face recognition using improved AdaBoost learning.” Advances in Biometric
Person Authentication, 39–49.
Ying, Y., Harley, J., Garrett, J. H., Jin, Y., Moura, J. M., O'Donoughue, N.,
Oppenheim, I. J., and Soibelman, L. (2010). “Time reversal for damage
detection in pipes.” Proceedings of SPIE, 76473S.
Ying, Y., Soibelman, L., Harley, J., O'Donoughue, N., Garrett, J. H., Jin, Y., Moura, J.
M. F., and Oppenheim, I. J. (2010). “A Data Mining Framework for Pipeline
Monitoring Using Time Reversal.” Society for Industrial and Applied
Mathematics (SIAM) Conference on Data Mining (SDM10) -Workshop on
Data Mining for Smarter Infrastructure, Columbus, OH.
Using Electimize to Solve the Time-Cost-Tradeoff Problem in Construction
Engineering

Mohamed Abdel-Raheem1 and Ahmed Khalafallah2


1 PhD Candidate, 2 Assistant Professor; Department of Civil, Environmental and
Construction Engineering, University of Central Florida, 4000 Central Florida Blvd.,
Orlando, Florida, 32816, USA. Email: abdelrah@mail.ucf.edu, khalafal@mail.ucf.edu

ABSTRACT
Construction optimization problems are difficult to solve due to the enormous
number of parameters resulting from the rapid growth of technology and the application
of sophisticated systems in construction projects. In the past few decades,
evolutionary algorithms (EAs) have served as good optimization techniques for
solving these problems. However, many EAs are limited in their capability to reach
optimality because of the methods they use to evaluate candidate solution strings.
This paper presents a newly developed evolutionary algorithm, named
Electimize, with an application example on solving the construction time-cost-tradeoff
problem (TCTP). The new algorithm simulates the behavior of electrons moving
through the electric circuit branches with the least resistance. Specifically, the paper
discusses: 1) the basic steps of optimization using Electimize, 2) TCTP modeling
using Electimize, and 3) a comparison between the performance of Electimize and
that of other EAs used to solve this problem. Electimize demonstrates an advantage
over existing evolutionary algorithms in the method used for evaluating solution
strings, which is reflected in the better results obtained for the TCTP.

INTRODUCTION

In construction projects, optimization is an essential tool used to
decide on the best among the multiple alternatives available in a single construction
project. The available alternatives are aggregated in optimization models in an
attempt to find the alternative(s) that would guarantee optimality. However,
finding the optimum solution for any problem is not an easy task. The majority of
optimization problems in the construction industry are of the NP-hard type, which
requires very long processing times. Over the years, mathematical methods have proved
inefficient in solving large-scale construction optimization problems. Consequently,
the emphasis shifted toward evolutionary computation and its application in solving
optimization problems in construction.
Although some of the evolutionary algorithms (EAs) applied proved to be
useful in finding optimal/near-optimal solutions, they still fail to guarantee
reaching optimality for all types of problems. Previous studies by Abdel-Raheem and
Khalafallah (2009) and Khalafallah and Abdel-Raheem (2010) investigated this
shortcoming of current EAs and attributed it to two main reasons: 1) the majority of
EAs deal with candidate values in the solution space indifferently. This is


demonstrated by the process of evaluating solution strings in any generated
population, where all values in the same string receive equal appreciation based on
the overall performance of the values collectively, without paying attention to the
individual performance of each one; 2) the phenomena simulated are not adequately
represented in the developed algorithms or their mathematical equations. Examples
of these are the simulation of the Darwinian principle in Genetic Algorithms (GAs),
the foraging behavior of ants in Ant Colony Optimization (ACO), the social
interaction of a migrating flock of birds in Particle Swarm Optimization (PSO),
and many others.
In 2009, Abdel-Raheem and Khalafallah attempted to overcome the
aforementioned limitations of current EAs with Electimize, a new evolutionary
algorithm. Electimize simulates the behavior of electrons in a multi-branch electric
circuit, where the majority of electrons select the wire with the least resistance. The
main contribution of this algorithm lies in its ability to evaluate each value in the
solution string independently, as is discussed in the following sections. Additionally,
unlike other EAs, Electimize uses scientifically valid equations in the simulation
process: Ohm's law and Kirchhoff's rule. The algorithm has been applied successfully
to a number of benchmark optimization problems in construction engineering and has
demonstrated higher capabilities not only in finding the optimal solution, but also in
finding alternative optimal solutions (Khalafallah and Abdel-Raheem 2010).
This paper presents the application of Electimize to solving one of the most
cumbersome construction optimization problems, the time-cost-tradeoff
problem (TCTP). The problem has been modeled and solved using Electimize, and
the results are compared to previous results from the literature obtained by using
different EAs to solve the TCT problem.

LITERATURE REVIEW

In construction projects, there is a relationship between the duration of an
activity and its direct costs. The relationship is inverse, in the sense that the
activity's direct cost decreases as its duration increases. The relationship between
time and cost is derived from the selection among the different methods available for
executing a given activity. There are four different representations of the relationship
between time and cost: 1) linear, 2) multi-linear, 3) discrete function, and
4) curvilinear relationships (Ahuja, 1983).
The time-cost curves suggest different completion times for every single
activity with varying corresponding direct costs. Such timings and costs represent the
various alternatives available for executing the activity. Consequently, there is a huge
pool of alternatives for executing a single project, and in turn, there are multiple
different scenarios for executing the same project.
The tradeoff between cost and time can be described as follows: in general,
shortening the project duration results in rising direct costs and lessening indirect
costs. The rise in direct cost arises from the utilization of more resources to meet the
targeted activity durations. On the other hand, there is a general assumption in the
construction industry that the project indirect cost decreases as its duration decreases,
as shown in Figure 1. This has made the TCT problem a complex optimization problem
that has attracted many investigations in search of optimality.
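
The tradeoff just described can be made concrete with a small sketch: the total cost
of one execution scenario is the sum of the selected methods' direct costs plus the
indirect cost accrued over the resulting project duration. The data layout, the
per-day indirect cost, and the stand-in scheduling function below are illustrative
assumptions rather than a formulation taken from the cited studies.

```python
def project_cost(selected_methods, options, indirect_cost_per_day, schedule_fn):
    """Total cost of one execution scenario in a discrete time-cost-tradeoff model.

    `options[activity]` lists the available (duration, direct_cost) alternatives;
    `selected_methods[activity]` picks one alternative per activity; `schedule_fn`
    computes the project duration (e.g. by CPM) from the chosen activity durations.
    """
    durations = {a: options[a][m][0] for a, m in selected_methods.items()}
    direct = sum(options[a][m][1] for a, m in selected_methods.items())
    return direct + indirect_cost_per_day * schedule_fn(durations)

# toy two-activity example with the activities in series
options = {"A": [(5, 1000), (3, 1500)], "B": [(4, 800), (2, 1400)]}
serial_schedule = lambda d: sum(d.values())   # stand-in for a CPM calculation
total = project_cost({"A": 1, "B": 0}, options,
                     indirect_cost_per_day=200, schedule_fn=serial_schedule)
# total = 1500 + 800 + 200 * (3 + 4) = 3700
```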
Solving TCT problems serves many purposes. TCT analysis can be utilized to find
the project's optimum duration, which corresponds to the least cost, or the objective
could be meeting the project's deadline with the least cost (Hegazy and Ersahin, 2001).
A third utility of TCT analysis is finding the project's least cost regardless of the time
it takes to finish the project. Some of the previous work incorporating the application
of EAs in solving the construction TCTP appears in Feng et al. 1997; Hegazy and Wassef
2001; Zheng et al. 2005; El-Rayes and Kandil 2005; Elbeltagi 2005; and Elbeltagi et al. 2005.

ELECTIMIZE: OPTIMIZATION STEPS

Electimize is a new evolutionary algorithm that simulates the flow of electrons
in a multi-branch electric circuit. This flow generates an electric current that has an
intensity (I) based on the resistance (R) of the wire and the voltage (V) of the electric
circuit according to Ohm’s Law: V=I*R.
In Electimize, each candidate solution is represented by a wire (solution
string) that has an unknown global resistance (RGlobal). Each wire is fabricated by
randomly selecting values from the solution space. Each value represents a separate
segment in the wire that has a local resistance (rlocal) and corresponds to a designated
variable in the objective function, as shown in Figure 2.
The fabricated wires are then connected in parallel to a virtual electric source
that has a voltage (V) representing an electric circuit as shown in Figure 3. All wires
in the circuit are compared globally to each other, and each segment in the wire is
compared to other local segments within the same wire. The circuit and wires are then
dismantled and segments are reused to build a new circuit. The process of building
and dismantling the circuit is iterative until the best wire is reached, i.e. optimum
solution (Abdel-Raheem and Khalafallah 2009).
The Electimize algorithm consists of nine main steps, as discussed by Khalafallah
and Abdel-Raheem (2010). It is worth mentioning that some modifications have been
introduced to the algorithm; they appear in steps 1, 5, 6, 7, and 8. The main steps of
the algorithm are listed below, followed by a brief illustrative sketch:
1. Fabrication of wires: In this step, a number of wires (N) composed of (M)
segments are fabricated. For each segment, a value is selected randomly from the
solution space. Each value (lml) in the solution space has a designated resistance
(rml), which is calculated based on the value resistivity (ρ=1), its cross-sectional
area (aml), and its length (bml). Every value (lml) in the solution space has a
constant cross-sectional area (aml) that is calculated according to Equation 1. The
lengths of values are estimated after determining the local resistances in step 6.
a_ml = U_ml / Σ_{l=1..L} U_ml    (1)

Where U = a value representing a piece of information that reflects the user
knowledge or a certain preference. For example, in the TCT problem, the value of
"U" can be the time, cost, or an index combining both of them for the available
construction methods.

Figure 1. Time-Cost Tradeoff in Construction Projects (cost versus duration curve omitted).

Figure 2. Solution String Represented as a Wire Composed of Multiple Segments.

Figure 3. Representing a Population of Wires as an Electric Circuit.
2. Construction of the electric circuit: The fabricated wires are connected in
parallel to an imaginary electric source of voltage (V). The voltage (V) is an
arbitrary value that is used to differentiate between the qualities of the solutions.
Electimize determines a suitable value for (V) by randomly fabricating a
temporary wire and substituting its values into the objective function. The yielded
value is then raised to the power of (G), which lies in the range [1.5, 1.9) based
on experimentation.
3. Determining the electric current intensity (In): The intensity of the electric
current (In) passing through each wire (Wn) is the value of the objective function
after substituting the wire segment values (lml) into its variables.
4. Calculating the global resistance of wires (Rn): The global resistance of each
wire is calculated using Ohm’s Law: Rn = V/In.
5. Evaluating the quality of wires: The quality of each wire (Wn) is indicated by its
global resistance (Rn) from the previous step. The top (5-25)% of the wire
population, the best wire in each iteration, and the best wire in all iterations are
identified.
6. Evaluating the quality of wire segments: The quality of each value (lml) in
segment (m) of wire (Wn) is based on its length (bml). To calculate the length (bml),
the local resistance (rml) should be estimated first. At the start, it is assumed that all
resistances (rml) for the values of segments (m = 1 to M) are identical, since there
is no prior information about how good the solution is. Therefore, the resistances
are calculated according to Equation 2.
r_ml = R_n / M    (2)
A sensitivity analysis is then conducted to determine the actual resistance of each
value (lml). The top (5-25)% of wires are selected to perform the sensitivity analysis
by substituting the value (lml) of each segment into the best wire (Wb) among them
in 90% of the iterations, and into the overall best wire determined throughout all the
iterations (WBEST) in the remaining 10% of the iterations. If a better wire is
identified, it immediately replaces WBEST. The change (ΔR) in the global resistance
of the control wire is then recorded (see Equations 3 and 4), and the resistances (rml) of
the values are modified according to Equation 5. The modified resistances (r*ml) are
then normalized so that their sum is equal to the original wire resistance (Rn). This
guarantees that there is no violation of Kirchhoff's rule.
ΔR = R_CW - R_n    (3)

H = ΔR / R_CW    (4)

r*_ml = [r_ml (1 + H)] * R_n / Σ_{m=1..M} [r_ml (1 + H)]    (5)

Where r*ml = modified resistance of segment (m); rml = resistance of value (lml) of
segment (m) in the original wire (Wn); Rn = resistance of wire (Wn); and
RCW= resistance of the control wire.
7. Updating resistances (rml) for the generated values: The resistance (rml) is
updated for each selected value (lml) of each segment (m) according to Equation 6.
The length (bml) can then be calculated using Equation 1. If a certain value
(lml) is used more than a specified number of times (set by the user), then the
updated resistance r'_ml is multiplied by the Heat Factor to account for the pseudo-
resistance generated due to the overuse of segments. Experimentation showed that
a suitable value for the Heat Factor lies in the range [0.4, 0.7).

r'_ml = r_ml + r*_ml    (6)

Where r'_ml = updated resistance for value (lml) of segment (m), and r_ml = resistance
for value (lml) of segment (m) from the previous iteration.
8. Selection of new values (lml) for the variables: The selection probability of new
values is based on the calculated length (bml) of each value (lml). For maximization
problems, it can be calculated according to equation (7).

P_ml = (1 / b_ml) / Σ_{l=1..L} (1 / b_ml)    (7)
Where Pml= probability that value (lml) is selected for segment (m).
9. Algorithm Termination: The algorithm terminates after the stipulated number of
iterations is reached.
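To give a sense of how these nine steps interact, the brief Python sketch below implements a heavily simplified, illustrative version of the loop: wires are fabricated from a small hypothetical solution space, each wire is scored through Ohm's law, and value-selection weights are biased toward values that appeared in low-resistance wires. The placeholder objective, population size, and weight-update rule are assumptions for illustration only; they do not reproduce the sensitivity analysis, Heat Factor, or normalization steps described above.

import random

# Hypothetical discrete solution space: each of the M segments can take one of
# several candidate values; the score below is a placeholder objective to be
# MAXIMIZED (it stands in for the real objective function of the problem).
SOLUTION_SPACE = [[3, 5, 8], [2, 4, 9], [1, 6, 7], [2, 3, 10]]

def score(wire):
    return sum(wire)  # placeholder objective (maximize)

def fabricate(weights):
    # One value per segment, chosen with probability proportional to its weight.
    return [random.choices(vals, w)[0] for vals, w in zip(SOLUTION_SPACE, weights)]

def electimize_sketch(n_wires=30, n_iter=5, voltage=1000.0):
    weights = [[1.0] * len(vals) for vals in SOLUTION_SPACE]
    best = None
    for _ in range(n_iter):
        for wire in (fabricate(weights) for _ in range(n_wires)):
            intensity = score(wire)            # I: objective value of the wire
            resistance = voltage / intensity   # R = V / I (Ohm's law)
            if best is None or score(wire) > score(best):
                best = wire
            # Bias future selection toward values that appeared in
            # low-resistance (high-quality) wires -- a crude stand-in for the
            # segment-evaluation and resistance-update steps (6 and 7).
            for m, value in enumerate(wire):
                weights[m][SOLUTION_SPACE[m].index(value)] += 1.0 / resistance
    return best, score(best)

print(electimize_sketch())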
PROBLEM MODELING

The Time-Cost Tradeoff problem in this research was formulated in a manner
similar to previous studies [Feng et al. 1997; Elbeltagi 2005; Elbeltagi et al. 2005].
For each activity, there is a number of construction methods
available for executing the activity. A construction method can be a combination of
the available resources, materials, and equipment, or a different technology. Each
method has a stipulated time and cost for executing a given activity, determined by the
planning engineer. An index is then created to relate the construction method time and
cost to its main activity.
A number of wires (N) of (M) segments are then created. The number of
segments is equal to the number of project activities, where each segment represents
an activity and carries an index that refers to a certain construction method available
for executing this activity, as illustrated in Figure 4. The objective function of the
TCTP is a cost minimization function and is given in Equation 8.

Figure 4. Wire Representation of a Project Activities-Execution Scenario (segments 1, 2, 3, ..., i, ..., K of the wire carry the method indices X1j, X2j, X3j, ..., Xij, ..., XKj).

Minimize (Total cost = D*R + Σ_{i=1..K} C_ij + C_l - W)    (8)

Where D = project total duration; R = indirect cost per unit time; C_ij = direct cost of activity (i)
using construction method (j); K = number of activities; C_l = total liquidated damages;
and W = incentive for speedy performance.
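As a small illustration of Equation 8, the following Python fragment evaluates the total cost of one hypothetical execution scenario. The activity options, indirect rate, deadline, damages, and incentive values are invented for the example, and the project duration is computed as a simple sum of activity durations rather than from the project network used in the actual model.

# Illustrative evaluation of Equation 8 for one candidate execution scenario.
OPTIONS = {  # activity -> list of (duration in days, direct cost in $) methods
    "A": [(14, 23000), (20, 18000)],
    "B": [(15, 3000), (18, 2400), (25, 1800)],
    "C": [(15, 4500), (22, 4000), (33, 3200)],
}

def total_cost(wire, indirect_rate=500.0, deadline=60,
               damages_per_day=1000.0, incentive_per_day=300.0):
    # 'wire' maps each activity to the index of its selected construction method.
    duration = sum(OPTIONS[a][j][0] for a, j in wire.items())
    direct = sum(OPTIONS[a][j][1] for a, j in wire.items())
    late = max(0, duration - deadline)      # days subject to liquidated damages
    early = max(0, deadline - duration)     # days earning the incentive
    return (direct + duration * indirect_rate
            + late * damages_per_day - early * incentive_per_day)

# Example wire: method index chosen for each activity.
print(total_cost({"A": 0, "B": 1, "C": 2}))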

APPLICATION EXAMPLE

For application, a case study was selected from the literature. The case study
is a construction project composed of 18 activities. There are different construction
methods available for executing each activity. The maximum number of construction
methods available for a single activity is five, while the least number available is two.
The time and cost of each construction method is given. The objective is to find the
project optimum completion time, which corresponds to the least cost, using different
combinations of the activities' construction methods. The problem at hand has
4.72 x 10^9 available solutions.
The problem was first solved using linear and integer programming (Burns et
al. 1996); reattempted using GAs (Feng et al. 1997); resolved using Ant Colony
Optimization (Elbeltagi 2005); and reattempted using five different evolutionary
algorithms (Elbeltagi et al. 2005). The data for the problem can be easily obtained
from the literature.
IMPLEMENTATION AND COMPARISON WITH PREVIOUS RESULTS

Microsoft Project scheduling software was used to maintain the relationships
between activities, durations, and project data, and to calculate the project's different
completion times. Electimize was coded as a macro program using VBA through MS Project. To
have a robust comparison with previous models, the initial settings of previous
attempts were used. All activity durations were set to the construction methods with
the longest duration and least cost. The project total duration in this case is 169 days
with a direct cost of $99,740 and a total cost of $184,240, using $500/day as an indirect
cost. Electimize was able to find the project least cost ($161,270) and the
corresponding completion time (110 days) using only one iteration and 30 wires,
which is by far fewer iterations and solution strings than the other EAs
used in previous studies required. The sensitivity analysis was conducted using 12 wires.
Using these settings (1 iteration, 30 wires, top 12 wires), twenty experiments were
conducted with a success rate of 100%.
It is worth mentioning that one of the powerful capabilities of Electimize is
the sensitivity analysis step. The identification of the optimum solution in the first
iteration is mainly attributed to the sensitivity analysis step. For the problem at hand,
some EAs failed to find the optimum solution, and those that found it used hundreds
of iterations and solution strings. The results of previous attempts of solving the TCT
problem using different EAs are shown in Table 1 in comparison to the results
obtained using Electimize.

Table 1. Summary of Parameter Values of Different EAs Used to Solve the TCTP

Algorithm             | Least Cost | Best Time (Days) | No. of Iterations | No. of Solution Strings | Unit of Sol. Strings | Attempt By
Electimize            | 161,270    | 110              | 1                 | 30                      | Wire                 | Abdel-Raheem & Khalafallah
Genetic Algorithms    | NA         | NA               | 50                | 400                     | Chromosome           | Feng et al.
Genetic Algorithms    | 162,270    | 113              | Unlimited         | 500                     | Chromosome           | Elbeltagi et al.
Ant Colony Algo.      | 161,270    | 110              | 100               | 30                      | Ant                  | Elbeltagi et al.
Memetic Algorithm     | 161,270    | 110              | Unlimited         | 100                     | Chromosome           | Elbeltagi et al.
Particle Swarm        | 161,270    | 110              | 10,000            | 40                      | Particle             | Elbeltagi et al.
Shuffled Frog Leaping | 162,020    | 112              | 10                | 200                     | Frog                 | Elbeltagi et al.
CONCLUSION

This paper presented another application of a new evolutionary algorithm,
named Electimize, in solving a complex construction optimization problem.
Electimize was applied successfully to the TCT problem. The 18-activity TCT
problem selected for the study is considered a benchmark problem, as its optimum
value is known beforehand and it has been attempted in previous studies. Electimize
was able to find the optimum solution in one iteration using 30 wires. This was
accomplished with the help of the sensitivity analysis step, which allows for an
extensive search of the solution space in a relatively short time. In this study,
Electimize demonstrated its powerful capabilities in solving discrete construction
optimization problems in comparison to other EAs.

REFERENCES

Abdel-Raheem, M., and Khalafallah, A. (2009). “Framework for a Multi-


Level Evolutionary Algorithm for Construction Optimization.” Proceedings
of the 23rd European Conference on Modeling and Simulation (ECMS 2009),
June 9 – 12, Madrid, Spain.
Ahuja, H. N. (1983). Project Management: Techniques in Planning and Controlling
Construction Projects. John Wiley & Sons, USA.
Burns, S., Liu, L., and Feng, C. (1996). “The LP/IP hybrid method for construction
time-cost-tradeoff analysis.” Construction Management and Economics, Vol. 14,
265-276
Elbeltagi, E., Hegazy, T., and Grierson, D. (2005). “Comparison among five
evolutionary-based optimization algorithms.” Advanced Engineering Informatics,
Elsevier, 43-55.
El-Rayes, K., and Kandil, A. (2005). “Time-Cost-Quality Trade-Off Analysis for
Highway Construction.” Journal of Construction Engineering and Management,
ASCE, Vol. 131, No.4, 477-486.
Feng, C.W., Liu, L., and Burns, S. (1997). “Using Genetic Algorithms To Solve
Construction Time-Cost Trade-Off Problems.” Journal of Computing in Civil
Engineering, ASCE, Vol. 11, No.3, 184-189.
Fonseca, M.C., and Fleming, P.J. (1995). “An overview of Evolutionary Algorithms
in Multiobjective Optimization.” Evolutionary Computation, MIT Press, Vol. 3, 1–
16
Hegazy, T., and Ersahin, T. (2001). “Simplified Spreadsheet Solutions. II: Overall
Schedule Optimization.” Journal of Construction Engineering and Management,
ASCE, Vol. 127, No.6, 469-475.
Hegazy, T., and Wassef, N. (2001). “Cost Optimization in Projects with Repetitive
Nonserial Activities.” Journal of Construction Engineering and Management,
ASCE, Vol. 127, No. 3, 183-191.
Khalafallah, A, and Abdel-Raheem, M. (2009). “Electimize: A New Optimization
Algorithm with Application in Construction Engineering.” Journal of Computing
in Civil Engineering, accepted for publication, ASCE.
Vision-Based Crane Tracking for Understanding Construction
Activity
J. Yang1, P.A. Vela2, J. Teizer3, and Z.K. Shi1

1 College of Automation, Northwestern Polytechnical University, China; e-mail: junyang9@mail.nwpu.edu.cn, zkshi@nwpu.edu.cn
2 School of Electrical and Computer Engineering, Georgia Tech, Atlanta, GA 30332-0250; e-mail: pvela@gatech.edu
3 School of Civil and Environmental Engineering, Georgia Tech, Atlanta, GA 30332-0355; e-mail: teizer@gatech.edu

ABSTRACT

Visual monitoring of construction work sites through the installation of
surveillance cameras has become prevalent in the construction industry. Cameras
also have practical utility for automatic observation of construction events and
activities. This paper demonstrates the use of a surveillance camera for assessing
tower crane activities during the course of a work day. The jib angle and the trolley
position are tracked using 2D-3D rigid pose estimation and density-based tracking
algorithms, respectively. A finite-state machine model for crane activity is designed to
process the track signals and recognize crane activity as belonging to one of the two
categories: concrete pouring and non-concrete material movement. Experimental
results from a construction surveillance camera show that crane activities are
correctly identified.

INTRODUCTION

The goal of this investigation is to understand tower crane operations in order
to connect them to construction progress on the work site. The tower crane is
chosen as the tracking target because of its standard visual geometry and its important
role in construction projects. In order to advance the goal, algorithms to track both
the crane jib and trolley, and to connect the motion to activities are needed. The
output of the automated algorithms provides information regarding the activities
supported by the tower crane and their execution over time. Longer term, these
activities could be connected to progress on the physical as-built structure.
Past research on cranes has focused on improving productivity and safety.
(Everett and Slocum 1993) introduced a video system called CRANIUM to transmit a
real time picture of the loads to the operator for improved communication.
(Shapira et al. 2008) designed a tower-crane-mounted live video system to enhance
the visibility of the operator for both daytime and nighttime operation. Other
researchers (Abdelhamid and Everett 1999; Ju and Choo 2005; Tantisevi and Akinci
2008) focused on optimizing the locations of tower cranes and materials on the
construction site, or on improving crane control to restrain sway and swing.

Research involving visual tracking on work sites includes the tracking of personnel,
of excavator end effectors, and of concrete buckets (Gong and Caldas 2010; Park et
al. 2010; Teizer and Vela 2009; Yang et al. 2010). In Gong and Caldas (2010), a machine
learning algorithm was trained to detect the crane bucket in the video; by analyzing the
bucket location in the image using prior knowledge of the scene, the authors measured
the productivity of a concrete pouring project.
The current objective of this paper is to demonstrate that a monocular
surveillance camera capable of monitoring a tower crane and its operational
environment can be used to identify the activities associated with the tower crane.
Rather than focus on detecting a specific load carried by the tower crane, this
investigation focuses on the visual tracking of the crane jib and trolley. Tracking is
chosen over load detection since a tower crane transports a variety of objects,
including, but not limited to, concrete buckets, equipment, beams, slabs, and columns.
Furthermore, given the setup of a general purpose surveillance camera, the
crane bucket may not always be visible or may be poorly visible. With the
information of both crane jib and trolley, in connection with site layout plans, the
algorithm is able to infer the construction activities associated with a tower crane.
Tracking of the crane and estimation of the activities will consist of the
following. A model-based visual pose estimation algorithm will measure the jib
rotation. A general-purpose, density-based visual tracking algorithm will measure the
trolley’s linear movement along the crane jib. The activities of the tracked tower
crane will be inferred using a finite state machine. Actual recorded image sequences
of a tower crane over a two hour time span are converted into activity reports and
compared to manually determined ground truth.

CRANE TRACKING: JIB AND TROLLEY.

This section describes the approaches and algorithms required to
automatically track both the jib rotation angle and the trolley position.

Tracking Jib Rotation. Automated estimation of the jib rotation angle will be
performed through the use of a 3D crane model in combination with the camera
calibration parameters. Through the use of the model and the camera parameters,
3D pose estimation techniques will provide the jib rotation. 3D pose estimation
algorithms first generate renderings of the 3D crane as perceived by a camera with
known parameters. These rendered images are then compared to the actual
observed image. The estimated geometry is correct when the rendered image and
the captured image are in agreement.
The rendered image is not a true-life rendering of the work-site as seen in
Figure 1(a), but is instead a rough approximation that depicts only the image model,
here a crane. A crude crane model was generated by surveying the installed crane
using a robotic total station (RTS). The model is depicted in Figure 1(b). Using
the known camera calibration configuration, a rendering of the crane jib generates a
predicted image of the scene given a specified jib rotation angle. Figure 2 depicts
several such simulated renderings given distinct jib rotation angles.
The renderings only depict the crane jib, and exclude both the tower and the
tower mast. These binary images must be compared to the actual captured camera
image. Given that the image is far richer than the simulated rendering, pre-
processing algorithms convert the captured image into a binary image. The process
for doing so incorporates three steps, all of which are described in the following
paragraphs: 1) cropping of the image to consist of the crane jib operational region, 2)
a sky elimination step which identifies the sky regions, and 3) a background
subtraction step which isolates the crane arm from other static elements of the image.
The first and simplest step isolates the image region that the crane arm could
realistically occupy. In this case, that corresponds roughly to the top quarter of the
image (100 lines of 480 lines). This step is depicted in the top row of Figure 3.
The second step converts the image to grayscale and applies Otsu's thresholding
algorithm (Otsu 1979) to the image in order to isolate and remove the sky regions of
the image. Results for various sky conditions are depicted in the second row of
Figure 3. The remaining visual elements are the tower crane and buildings.
The last step of the process consists of foreground detection through the use
of a background subtraction algorithm. The algorithm utilized is the single
Gaussian background modeling algorithm (Wren et al. 1997), which models the
background as an image whose intensities obey a Gaussian distribution. Associated
to each pixel are mean and covariance values. The expected image is generated by
the mean values. When a new image is captured, converted to grayscale, and has
the sky regions removed, it is then compared against the Gaussian model. Pixels are
outliers if they have low likelihood of belonging to the Gaussian model, e.g., if they
lie too many standard deviations away from the mean. Here, the threshold was 2
standard deviations. The last row of Figure 3 depicts the classified statistical
outliers to the Gaussian image model. What mostly remains is the crane jib.
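A condensed Python/OpenCV sketch of these three preprocessing steps is shown below. The crop height, the exponentially updated single-Gaussian background statistics, the learning rate, and the two-standard-deviation threshold are illustrative assumptions and not necessarily the exact settings used in the study.

import cv2
import numpy as np

def preprocess(frame, bg_mean, bg_var, alpha=0.02, n_sigma=2.0, crop_rows=100):
    """Rough sketch of the three preprocessing steps.

    frame: BGR camera image (assumed 480 rows). bg_mean, bg_var: running
    per-pixel mean/variance of the cropped grayscale background
    (single-Gaussian model), float arrays of the crop size.
    Returns the binary jib mask and the updated background statistics.
    """
    # 1) Crop to the region the jib can occupy (top rows of the image).
    roi = frame[:crop_rows]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # 2) Remove sky pixels with Otsu's threshold (bright regions -> sky).
    _, sky = cv2.threshold(gray.astype(np.uint8), 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    gray[sky > 0] = 0.0

    # 3) Single-Gaussian background subtraction: flag pixels that lie more
    #    than n_sigma standard deviations from the per-pixel mean.
    diff = np.abs(gray - bg_mean)
    foreground = diff > n_sigma * np.sqrt(bg_var + 1e-6)

    # Slowly update the background statistics with the new observation.
    bg_mean = (1 - alpha) * bg_mean + alpha * gray
    bg_var = (1 - alpha) * bg_var + alpha * (gray - bg_mean) ** 2
    return foreground.astype(np.uint8), bg_mean, bg_var

The background statistics can be initialized, for example, from the first cropped frame (mean equal to that frame and a small constant variance).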
The rendered binary image is compared to the processed surveillance camera
image to see how well the two images match. Let S_θ represent the silhouette
generated from the 3D model with jib angle θ, and let I represent the processed
camera image. The overlap energy between these two binary images is defined as:

E(θ) = Σ_x S_θ(x) I(x),

where x sums over all image pixels. Searching through all possible rotation angles
can be exhaustive given the need to render the images. Since the target application
is tracking, we can assume that the angle from the previous frame is known and the
current angle is sought. As the crane has a finite angular rate of change, the set of
angles reachable from one frame to the next is limited. Thus a window-based search
is applied to find the angle that maximizes the matching energy,

θ_t = argmax E(θ) over |θ - θ_{t-1}| ≤ r,

where θ_{t-1} is the jib angle from the previous frame and r is the search radius. The
angular search range is discretized with step size Δθ.
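The window-based search can be sketched in a few lines of Python; here render_silhouette is an assumed callable that returns the binary rendering of the jib at a given angle, and the search radius and step size are placeholder values.

import numpy as np

def estimate_jib_angle(prev_angle, render_silhouette, processed_image,
                       radius=10.0, step=0.5):
    """Windowed search for the jib angle maximizing the overlap energy.

    render_silhouette(angle) is assumed to return a binary jib rendering at
    that angle; processed_image is the binary camera image after the
    preprocessing steps. Radius and step are in degrees (illustrative).
    """
    candidates = np.arange(prev_angle - radius, prev_angle + radius + step, step)
    # Overlap energy: number of pixels where rendering and processed image agree.
    energies = [np.sum((render_silhouette(a) > 0) & (processed_image > 0))
                for a in candidates]
    return candidates[int(np.argmax(energies))]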
(a) Imaged crane structure and site layout. (b) Simple 3D model of crane.
Figure 1. Surveillance camera view of worksite and a wireframe rendering of the
crane model.

Figure 2. Black and white renderings of only the crane jib at various rotation
angles.

Figure 3. Results of the image processing. (The top row depicts the cropped image.
The second row shows the result after sky removal via Otsu's thresholding method.
The last row shows the final jib segmentation after background removal.)
Tracking Trolley Position. As seen in Figure 1(b), the trolley appears as a small,
dark quadrilateral region along the crane jib in the captured image. Geometric,
model-based approaches do not work well for objects with the visual characteristics
of the trolley and the resolution of the imagery. Since color cues are the primary
mechanism for identifying the trolley visually, a color-density approach to tracking is
proposed. The target model description found in (Comaniciu et al. 2003) will be the
model followed in this paper, which builds a histogram of the target given a template.
The histogram defines a quantized density estimate of the target appearance
probability density function. When performing tracking, the density estimate is
augmented with a spatial kernel density function k(·),

q_u = C Σ_{i=1..n} k(||x_i||^2) δ[b(x_i) - u],

where: δ is the Kronecker delta function, b(x_i) is the histogram bin location at
template pixel x_i (the pixels are 0 centered), x_i represents the i-th pixel element of
the template, u is the histogram bin, and C is a normalization constant so that the
density is normalized. The Epanechnikov kernel is chosen for k(·).
Estimation of the trolley location requires comparison of an extracted image
sample to the template sample through a similarity function. Here, the Bhattacharyya
measure between two densities provides the similarity score. Given a target location
y, the density associated with the corresponding extracted image sample is

p_u(y) = C_h Σ_{i=1..n_h} k(||(y - x_i)/h||^2) δ[b(x_i) - u],

where h is the kernel bandwidth and C_h is the density normalization constant. From
this density and the target model density, the Bhattacharyya measurement

ρ(y) = Σ_u √(p_u(y) q_u)

provides a value of how well the two distributions match. The value approaches one
when the distributions match, and approaches zero when they differ substantially.

The current trolley position is estimated from the previous trolley position by
comparing the Bhattacharyya measure for nearby trolley positions. The windowed
search procedure seeks to optimize over the trolley location in the image,

y_t = argmax ρ(y_{t-1} + s*v) over |s| ≤ r,

where v is the search direction in the image, r is the trolley search radius in pixels,
and y_{t-1} is the previous trolley position. The search range is
sampled at one-pixel intervals along the vector direction v. The vector direction
gives the direction of expected motion of the trolley, computable via the crane model,
for the current jib angle θ_t. Once the best location y_t is found, it is transformed
from image coordinates to 3D world coordinates corresponding to where the trolley
would be on the crane jib, using the known crane geometry. The two measurements,
jib angle and trolley jib translation, provide a polar coordinate description of the
crane load with the origin being the center of rotation. Transformation from polar to
Cartesian coordinates gives the 2D position of the crane hook in the plan view.
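The following Python sketch illustrates the idea of the color-density search along the expected motion direction. For simplicity it uses a plain (unweighted) color histogram instead of the Epanechnikov-kernel-weighted density of Comaniciu et al. (2003), and the patch size and search radius are placeholder values.

import numpy as np

def color_histogram(patch, bins=16):
    # Quantized RGB histogram of a color image patch, normalized to sum to one.
    hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-12)

def bhattacharyya(p, q):
    return float(np.sum(np.sqrt(p * q)))

def track_trolley(image, template_hist, prev_pos, direction, radius=15, half=8):
    """Sample candidate trolley positions along the expected motion direction
    and keep the one whose local color histogram best matches the template."""
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    best_pos, best_score = prev_pos, -1.0
    for s in range(-radius, radius + 1):
        x, y = (np.asarray(prev_pos, float) + s * direction).astype(int)
        if x < half or y < half:
            continue
        patch = image[y - half:y + half, x - half:x + half]
        if patch.size == 0:
            continue
        score = bhattacharyya(color_histogram(patch), template_hist)
        if score > best_score:
            best_pos, best_score = (x, y), score
    return best_pos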

ACTIVITY INFERENCE
Once the jib angle and trolley position are known, and thus also the crane
hook location in the plan view, the activities of the crane can be decoded. As a
lifting machine which moves materials around the construction site, a tower crane's
activity can be clearly defined as loading, lifting and unloading materials. A static
crane occurs during loading, unloading or transitions between the two, while a
moving crane corresponds to lifting. Since the material being lifted is not actively
classified, the site layout plans will be exploited to infer the tower crane’s activities.
Inferring the crane activities requires knowledge of the site layout and the
functions associated to different regions of the site space. Many construction sites
have site layout plans that describe the intended use of the construction work space,
which includes a plan view description of the extents of the as-built structure,
expected roadways, laydown yards, and permissible crane flyby zones. Figure 1(a)
shows the building construction site divided into three major zones: driveway,
parking lot, and working zone. The site's logistics plan, Figure 4, indicates that the
concrete mixer is allocated two spots along the driveway, where it serves the crane.
The coverage of the crane jib is a circle centered at the tower mast. A mapping
between the crane jib rotation angle and the function area is defined in Figure 4(b).
Empirically, the crane loads materials from the storage area to unload in the work
zone. Ideally, if the storage area is organized with sub-areas for distinct materials
types, the material type lifted by the crane could be inferred by area. Based on our
observations of the available surveillance video, that is not the case. However, out
of all the materials lifted, the concrete bucket is unique. The concrete mixers are
located at specific spots of the work site, thus distinguishing the concrete pouring
activity from other materials lifting tasks. Hence, crane activity is naturally
categorized into concrete pouring and non-concrete pouring.
Based on the activity categories and the crane action modes (lifting, loading,
unloading), a natural model for describing the crane activity is the finite state
machine. A finite state machine (FSM) is a mathematical behavior model composed
of a finite number of states. Transitions between states happen when certain
conditions are met. Examples of FSMs include [Davis et al. 1994] and [Hong et al.
2000], which were used for visual gesture recognition. Inspired by their work, we
introduce FSMs to construction activity analysis. As shown in Figure 5, the FSM
model for concrete pouring has four states which happen in a fixed order: loading
concrete at the mixer, moving from the mixer to the work zone, unloading concrete at the
work zone, and moving back to the mixer from the work zone. Transitions between
states are determined by motion of the crane jib. To be robust to measurement
noise, two thresholds are defined: one for the instant angular speed, the other for
the state transition. When the instant angular speed exceeds the speed threshold, the
crane jib is considered to be moving. To generate a transition, the transition event has
to be continuously detected for a specific amount of time (the time threshold).
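A minimal Python sketch of such a four-state machine is given below. It advances the state whenever the observed motion contradicts the current state for longer than the time threshold; the frame period, speed threshold, and dwell threshold are illustrative values, and the zone check against the site layout is omitted for brevity.

# Minimal sketch of the four-state concrete-pouring cycle described above.
STATES = ["LOAD_AT_MIXER", "TO_WORK_ZONE", "UNLOAD_AT_WORK_ZONE", "TO_MIXER"]

def crane_fsm(angles, dt=4.0, speed_thresh=0.2, time_thresh=12.0):
    """angles: jib angles (degrees) sampled every dt seconds."""
    state, timeline, count = "LOAD_AT_MIXER", [], 0.0
    for prev, cur in zip(angles, angles[1:]):
        moving = abs(cur - prev) / dt > speed_thresh
        expects_motion = state in ("TO_WORK_ZONE", "TO_MIXER")
        # A transition event (observed motion contradicting the current state)
        # must persist for time_thresh seconds before the state changes.
        count = count + dt if moving != expects_motion else 0.0
        if count >= time_thresh:
            state = STATES[(STATES.index(state) + 1) % 4]
            count = 0.0
        timeline.append(state)
    return timeline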

EXPERIMENTAL RESULTS
The experimental data was obtained from a surveillance camera on the Georgia
Tech campus that monitors the construction of a building; the camera views the site
from the roof of a nearby building. Access to the roof is possible, allowing for
measurement of the crane and the camera parameters. A robotic total station
(RTS) was used to measure the necessary 3D points. The video sequence, taken
from 1:03 PM to 2:11 PM with a capture period of 4 seconds/frame, totals 1377 frames.
The crane's activities as per the finite-state machine are given in Figure 6,
with a breakdown of the activity timing shown in Table 1. Note that the manual ground
truth of the crane activity state matches that of the algorithm. Further, the times
spent on the concrete pouring activities within each cycle are consistent, which is
expected when the crane is operated by an experienced operator. Furthermore, the time
spent on concrete loading is shorter than the time spent on concrete pouring, and the time
spent on the bucket moving to the working zone is longer than the time spent moving back to the mixer.
(a) Site logistics plans. (b) Plan view of crane activity zones.
Figure 4. Understanding crane activities by incorporating information regarding
site logistics plans.

Figure 5. Depiction of crane activity finite state machine.

Figure 6. Crane activities over time according to the finite state machine model.

Table 1. Automatic tabulation of concrete pouring initiation and duration (in seconds).

No.       | Start Time | Load Concrete | Bucket to Work Zone | Pour Concrete | Bucket from Work Zone
1         | 13:21:34   | 57            | 48                  | 75            | 36
2         | 13:25:10   | 63            | 51                  | 78            | 33
3         | 13:28:55   | 66            | 45                  | 66            | 42
4         | 13:54:40   | 63            | 42                  | 75            | 42
5         | 13:58:22   | 45            | 42                  | 60            | 42
6         | 14:01:31   | 45            | 42                  | 60            | 42
7         | 14:04:49   | 54            | 42                  | 90            | 36
8         | 14:08:31   | 48            | 42                  | 42            | 36
Acc. Time |            | 441           | 363                 | 549           | 306
Avg. Time |            | 55            | 45                  | 69            | 38

CONCLUSION
This paper illustrated the use of computer vision algorithms for construction
project analysis. A visual tracking algorithm for the tower crane coupled with a
finite state machine with activity state enabled construction activity understanding.
Tower crane activity was categorized into concrete pouring and non-concrete
pouring. Experimental results show that the visual tracking algorithm is able to track
the tower crane while the finite-state machine distinguishes the crane activities.
Future work seeks to consider additional activities. We hypothesize that a
Bayesian inference algorithm that optimally processes the track signal using a
collection of past measurements, rather than simply the current measurement, will
accurately detect different activities. Further, future work also seeks to actively
detect and classify the load to more accurately assess the activity state.

REFERENCES
Abdelhamid, T. S., and Everett, J. (1999). "Time Series Analysis for Construction
Productivity Experiments." Journal of Construction Engineering and
Management., 125, 87-95.
Comaniciu, D., Ramesh, V., and Meer, P. (2003). "Kernel-Based Object Tracking."
IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(5), 564-577.
Davis, J., and Shah, M. (1994). “Visual Gesture Recognition.” Image and Signal
Processing, 141(2), 101-106.
Everett, J., and Slocum, A. (1993). "Cranium: Device for Improving Crane
Productivity and Safety." Journal of Construction Engineering and Management.,
119(1), 23-39.
Gong, J., and Caldas, C. H. (2010). "Computer Vision-Based Video Interpretation
Model for Automated Productivity Analysis of Construction Operations." Journal
of Computing in Civil Engineering, 24(3), 252-263.
Hong, P., Huang, T., and Turk, M. (2000). “Gesture Modeling and Recognition using
Finite State Machines.” 4th IEEE International Conference on Automatic Face
and Gesture Recognition, 410-415.
Ju, F., and Choo, Y. S. (2005). "Dynamic Analysis of Tower Cranes." Journal of
Engineering Mechanics, 125(1), 88-96.
Otsu, N. (1979). "A Threshold Selection Method from Gray-Level Histograms."
IEEE Transactions on Systems, Man, and Cybernetics, 9, 62-66.
Park, M. W., Makhmalbaf, A., and Brilakis, I. (2010) "2D Vision Tracking Methods'
Performance Comparison for 3D Tracking of Construction Resources." ASCE
Construction Research Congress, Banff, Canada., 459-469.
Shapira, A., Rosenfeld, Y., and Mizrahi, I. (2008). "Vision System for Tower Cranes."
Journal of Construction Engineering and Management., 134, 320-332.
Tantisevi, K., and Akinci, B. (2008). "Simulation-based Identification of Possible
Locations for Mobile Cranes on Construction Sites." Journal of Computing in
Civil Engineering, 22, 21-30.
Teizer, J., and Vela, P. A. (2009). "Personnel Tracking on Construction Sites Using
Video Cameras." Advanced Engineering Informatics, 23(4), 452-462.
Wren, C. R., Azarbayejani, A., Darrel, T., and Pentland, A. P. (1997). "Pfinder: Real-
Time Tracking of the Human Body." IEEE Transactions on Pattern Analysis and
Machine Intelligence, 19(7), 780-785.
Yang, J., Arif, O., Vela, P. A., Teizer, J., and Shi, Z. (2010). "Tracking Multiple
Workers on Construction Sites Using Video Cameras." Advanced Engineering
Informatics, 24(4), 428-434.
Design of Optimization Model and Program to Generate Timetables for a Single
Two-Way High Speed Rail Line under Disturbances

T. W. Ho1, C. Y. Lin1, S. M. Tseng1 and C. C. Chou2

1 Graduate Research Assistant, Department of Civil Engineering, National Central University, 300 Jhongda Rd., Jhongli, Taoyuan 32001, Taiwan (03) 422-7151 ext. 34150; email: 993402004@cc.ncu.edu.tw, 993402003@cc.ncu.edu.tw, 983402006@cc.ncu.edu.tw
2 Associate Professor, Department of Civil Engineering, National Central University, 300 Jhongda Rd., Jhongli, Taoyuan 32001, Taiwan (03) 422-7151 ext. 34132; FAX (03) 425-2960; email: ccchou@ncu.edu.tw

ABSTRACT

This research proposes a rescheduling optimization model for high-speed
railways for stoppages that occur as a result of disasters. Mixed integer programming and
dynamic programming have been chosen to solve the model under CPLEX.
A rescheduling activity that updates an existing schedule in response to disruptions is
needed. Taking Taiwan High Speed Rail (THSR) as an example, we apply
mathematical programming; assumptions as well as input and output values are
configured using real data from THSR. The model has obtained a timetable result
as good as the real timetable in about two hours (7,442.51 seconds). The rescheduling
results determine the new timetable after a calamity. This research provides a
model that is capable of generating a new timetable when an accidental event occurs.
We expect this to improve crisis management capability and service quality, and to
lead to positive long-term prospects for Taiwan High Speed Rail operations.

INTRODUCTION

Nowadays, railway transportation has become a good alternative in many
countries as an efficient and economic public transportation mode. It plays an
important role in the passenger and freight transportation market. The railway has
grown by over 40% in both freight and passenger sectors over the past 10 years. All
railway companies try to provide good services in order to satisfy their customers.
One way to realize this is by improving the quality of the train control process or
scheduling so that the railway company could optimize these services as well.
In addition, the train timetable is the basis for performing the train operations. It
contains information regarding the topology of the railway, train number and
classification, arrival and departure times of trains at each station, arrival and
departure paths, etc. More formally, the train scheduling problem is to find an
optimal train timetable, subject to a number of operational and safety requirements.
Due to the limited resources of railway companies, managing circulation of trains
becomes important, including turning back operations, regular inspection and car
cleaning times. Any solution that ignores train circulation requirements is
unreasonable to train companies. The Taiwan High-Speed Railway System already
has cyclic patterns of daily train circulation, but these patterns have not been
modeled yet. Moreover, based on a review of the literature, researchers in the railway
field have never considered train circulation, especially in high-speed rail systems,
even though it is an important requirement. Therefore, a scheduling model which has
the capability to accommodate not only basic requirements (railway topology, traffic
rules, and user requirements) but also train circulation requirements needs to
be formulated.
Furthermore, based on the data in the contingency timetable, THSR prefers to
cancel many trains and operates only two trains per hour in many cases of
disturbances. On the other hand, creating an optimal timetable, which means optimal
journey time, is important since the THSR Company has to preserve the maximal
profit during disturbances. In addition, in order to mitigate the impact of disturbances
instead of cancelling many trains on their system, THSR needs a method for
analyzing how disturbances propagate within the original timetable and for deciding
which actions to take. In the end, the train operator could predict the effects of disruptions
on the timetable without conducting real experiments.

MODEL DEVELOPMENT

Before developing a mathematical model for the scheduling problem, we should
understand the timetable components and rules. A timetable contains information
regarding the railway topology (stations, tracks, distances among stations, traffic control,
etc.) and the schedules of the trains that use this topology (arrival and departure times
of each train at stations, dwell times, crossing times, regular inspection times, and
turning back operations). The timetabling design in this research is described as
follows: given the THSR railroad system and a set of services, the problem is to
produce a timetable as well as a track assignment plan for these services.
The goals of the optimization model in this research are to let the trains depart as
close to their target departure times as possible, while at the same time minimizing the
operation times of the services. Since the operation times of each train as well as the
required headway between consecutive trains depend on the track assignment,
railway topology and train circulation issues have to be considered simultaneously to
obtain a realistic result that is close to the real timetable.
Suppose a railway system with r stations, n trains going down, and m trains going
up. Minimizing the operation times for all trains means minimizing the journey times
(final arrival time minus initial departure time) for all trains going down, indexed as i (1 to n),
plus the journey times of the trains going up, indexed as j (1 to m), over the stations (1 to r).
Thus, the objective function in this research is presented as Equation 1 below:

Min Σ_{i=1..n} (T_i,r^A - T_i,1^D) + Σ_{j=1..m} (T_j,1^A - T_j,r^D)    (1)

The variables of this research are the journey times (arrival and departure times) of all
trains, with travel time, station time, headway, car cleaning time, regular inspection
time, and turning back operation as parameters. Variables and parameters are
explained through the constraints below. Travel time constraints restrict the minimum time to
travel between two contiguous stations (k to k+1) for all trains going down, indexed as
i (1 to n), and trains going up, indexed as j (1 to m).

T_i,k+1^A - T_i,k^D ≥ time_i,k(k+1)    (2)

As represented by Equation 2, the arrival time of train i at station k+1
minus its departure time at station k (the origin station) should be greater than or equal to the
time needed for train i to travel between the two contiguous stations (k to k+1).

T_j,k^A - T_j,k+1^D ≥ time_j,(k+1)k    (3)

The arrival time of train j at station k minus its departure time at station k+1
should be greater than or equal to the time needed for train j to travel between the two
contiguous stations (k+1 to k). This research uses the minimum travel time between two
contiguous stations, because different types of trains have different speeds and their travel
times would automatically differ. As explained before, running time is calculated from the
departure times in the timetable minus the dwell times. Therefore, for each
train i and j at station k (1 to r), the departure time minus the arrival
time should be greater than or equal to the station time, as shown in Equations 4 and 5. This
condition means that the model uses the maximum station time at each station, because not
all trains will stop at every station.

T_i,k^D - T_i,k^A ≥ TS_ik + CS_ik    (4)

T_j,k^D - T_j,k^A ≥ TS_jk + CS_jk    (5)

The headway constraint restricts the difference between the departure times of two
consecutive trains at the same station. The headway time in this research is fixed to
one value because we want to keep the time spacing between two trains exact.

T_t',k^D - T_t,k^D + INF * Y_tt';k(k+1) ≤ INF - headway    (6)

T_t',k^D - T_t,k^D + INF * Y_tt';k(k+1) ≥ headway    (7)

T_t',k+1^A - T_t,k+1^A + INF * Y_tt';k(k+1) ≤ INF - headway    (8)

T_t',k+1^A - T_t,k+1^A + INF * Y_tt';k(k+1) ≥ headway    (9)

As represented by Equations 6 to 9, Y_tt';k(k+1) is the decision
variable for the availability of track in one segment. The value is 1 if there is a track
available between stations k and k+1 and 0 otherwise. The travel time in line determines the
total travel time for one train to travel through the line southbound or northbound,
plus an allowed time margin (α). The maximum travel time has been applied in the model;
thus, the difference between the arrival time at the final station and the departure time at the
first station for one train should be less than or equal to this travel time, as formulated in
Equations 10 and 11 below:

T_i,r^A - T_i,1^D ≤ (1 + α/100) * time_i,1r    (10)

T_j,1^A - T_j,r^D ≤ (1 + α/100) * time_j,r1    (11)

In the THSR system, the allowed time margin (α) was set to different values for
different types of trains. Therefore, this parameter would be a good input for sensitivity
analysis to reveal the effects of changes in this parameter on the objective value.
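To illustrate how constraints of this kind translate into a solvable program, the toy instance below formulates two southbound trains and three stations with the open-source PuLP modeling library in Python. The data values, and the use of PuLP/CBC rather than the Matlab-generated .lp files and CPLEX employed in this research, are assumptions made purely for illustration; train circulation, turning back, and the binary track-assignment variables of Equations 6 to 9 are omitted by fixing the train order.

# A toy instance in the spirit of Equations 1-9: two southbound trains, three
# stations, minimum travel and dwell times, and a fixed departure headway.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

trains, stations = [0, 1], [0, 1, 2]
travel = {(0, 1): 20, (1, 2): 25}   # minutes between contiguous stations
dwell = {0: 0, 1: 2, 2: 0}          # minimum station (dwell) times
headway, target_dep = 5, {0: 0, 1: 10}

prob = LpProblem("toy_timetable", LpMinimize)
arr = {(i, k): LpVariable(f"arr_{i}_{k}", lowBound=0) for i in trains for k in stations}
dep = {(i, k): LpVariable(f"dep_{i}_{k}", lowBound=0) for i in trains for k in stations}

# Objective (cf. Eq. 1): minimize the total journey time of all trains.
prob += lpSum(arr[i, 2] - dep[i, 0] for i in trains)

for i in trains:
    prob += dep[i, 0] >= target_dep[i]                         # target departure
    for k in stations:
        prob += dep[i, k] - arr[i, k] >= dwell[k]              # dwell time (Eq. 4)
    for k in (0, 1):
        prob += arr[i, k + 1] - dep[i, k] >= travel[k, k + 1]  # travel time (Eq. 2)

# Headway between the two consecutive trains at every station (cf. Eqs. 6-9);
# train 1 is assumed to follow train 0, so no big-M ordering variable is needed.
for k in stations:
    prob += dep[1, k] - dep[0, k] >= headway

prob.solve()
print({v.name: v.value() for v in prob.variables()})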

RESULTS AND MODEL CHECKING

After the mathematical model for the scheduling problem was formulated, the
collection of data regarding scheduling requirements from the THSR Company
began. Primary data was collected from interviews with senior engineers in the
THSR Company, and secondary data was gathered from THSR documents, including
the Equipment and Facilities Operations Manual and existing timetables from THSR.
The next step in creating a scheduling program was coding and developing a model
application. This application was developed to create an .lp file as an input file for
CPLEX and to accommodate all activities, including adding, deleting, and editing data
and generating model results and timetable diagrams. The mathematical models in this
research were coded in Matlab, which writes the .lp file format as output.
Mixed integer programming (MIP) methods and dynamic programming solution
techniques have been used to solve the models with the CPLEX solver tool. The tests
were run on an Intel Core 2 Duo CPU at 2 GHz with 2 GB of RAM. The algorithm
derived good results and obtained the minimum total travel time, as shown in Table 1
below:

Table 1. Results using CPLEX to solve the problem directly.

Objective               | Objective Value | Gap    | Computation Time
Optimal solution cutoff | 6438            | 0.02 % | 7442.51 s

The results showed a 0.02% gap during the process, and the algorithm obtained the
results by cutting off the model; that is, nodes were pruned during the branch-and-cut
search once they exceeded the cutoff value, without the need to solve each node to
optimality. The computation time to solve the model is 7,442.51 seconds,
and the performance time of 6,438 minutes is the optimal result for the minimum total
operation time in the THSR system. The timetable diagram contains information
regarding the departure and arrival times of trains at each station. From the output
departure and arrival times for each train at the stations, we developed the timetable
diagram in Figure 1(a). The new timetable after rescheduling is shown in Figure 1(b),
for a calamity at station 3 at time 100.

Figure 1. Timetable diagram: (a) original model, and (b) rescheduling model.

A program for timetable rescheduling was developed. The train operator can
input the time when the disaster occurred, and the locations of the trains are shown in
Figure 2. The train operator can also input the parameters for the related conditions during the
disaster, and the new train timetable is generated by running the proposed program.
Figure 3 shows the new train timetable produced by rescheduling in the program.

Figure 2. The location of each train is shown in the proposed program.


Figure 3. Create the new timetable after rescheduling.

CONCLUSIONS

This research developed an optimization model for designing timetables in
high-speed railway systems that considers basic requirements as well as special
requirements regarding train circulation, including car cleaning, regular inspection,
and train turning back operations. The model could generate a timetable
as good as the real timetable in about two hours (7,442.51 seconds). Furthermore, the model
could generate train circulation patterns, as illustrated in the timetable diagram results.
The new train timetable can be generated when the train operator inputs the
parameters into the proposed program. The rescheduling could serve as a good simulation
analysis for predicting the effects of disruptions on the timetable without conducting
real experiments.

ACKNOWLEDGEMENTS

The authors would like to express sincere thanks to Mr. Te-Che Chen, who is a
senior engineer at Taiwan High-Speed Railway Corporation and currently a Ph.D.
student under the guidance of Dr. Chien-Cheng Chou, for providing real data and
invaluable suggestions.

Learning and Classifying Motions of Construction Workers and Equipment
Using Bag of Video Feature Words and Bayesian Learning Methods

Jie Gong1 and Carlos H. Caldas2

1 Assistant Professor, Department of Construction, Southern Illinois University Edwardsville, Edwardsville, IL, 62026, Tel. (618)650-2498, jgong@siue.edu
2 Associate Professor, Department of Civil, Architectural, and Environmental Engineering, The University of Texas at Austin, 1 University Station C1752, Austin, TX 78712-0273, Tel. (512)471-6014, FAX (512)471-3191, caldas@mail.utexas.edu

ABSTRACT
Automated motion classification of construction workers/equipment from
videos is a challenging problem, but has a wide range of potential applications in
construction. These applications include, but are not limited to, enabling rapid
construction operation analysis and ergonomic studies. This research explores the
potential of an emerging motion analysis framework, bag of video feature words, in
learning and classifying workers and heavy equipment motions in challenging
construction environments. We developed a test bed that integrates the bag of video
feature words with a Bayesian learning method, and evaluated the performance of
this motion analysis approach on two video data sets. For each video data set, a
number of motion models are learned from the training video segments and applied to
the testing video segments. Compared to previous studies of construction
worker/equipment motion classification, this new approach can achieve good
performance in learning and classifying multiple motion categories while robustly
coping with the issues of partial occlusion, view point and scale changes.

INTRODUCTION
Video has become an easily captured and widely available medium serving the
purposes of construction method analysis and worker ergonomic study in the
construction industry. The associated demand for reducing the burden of manual
analyses in retrieving information from video motivates further research in automated
construction video understanding.
Recent studies have focused on leveraging computer vision algorithms to
automate the manual information extraction process in analyzing recorded videos
(Teizer and Vela 2009; Jog et al. 2010; Zou and Kim 2007; Peddi et al. 2009; Gong
and Caldas 2010). However, despite considerable progress in construction object
tracking, classifying the motion of construction workers or construction equipment in
single-view video, especially beyond simple categories like working and not
working, remains a hurdle for reaping the full benefits of video-based analysis in
method studies and worker ergonomic studies. Robust motion analysis algorithms
that are capable of differentiating subtle motion categories and handling scene clutter,
occlusion, and view point changes are essential to overcome such a hurdle. However,
there are no reported studies that have developed algorithms with the above
capabilities in a challenging construction environment.


In this paper, we aim to explore the potential of an emerging visual learning
approach in classifying subtle motion categories in a large amount of construction
video segments. This new visual learning approach is composed of the following major steps:
feature detection, feature representation, feature modeling, and motion
model learning. More specifically, it utilizes the 3D-Harris detector as the feature
detector, local histograms of optical flow (HoF) as the feature representation, bag-of-
words as the feature model, and Bayesian learning methods as the framework to learn
motion models. For simplicity, we refer to this approach as the bag of video
feature words in the remainder of this paper. We developed a test bed in
MATLAB to evaluate the performance of this new approach in learning and
classifying motion categories in challenging construction videos. Two video data sets,
including backhoe motion and formwork assembly motion, were constructed from
hundreds of hours of construction videos as the evaluation data sets. As the main
contribution of this paper, we demonstrate that the bag-of-words model with the HoF
motion representation and Bayesian learning methods has great potential to
significantly advance automated construction video understanding, as it performs
well in learning subtle motion categories in challenging construction videos.
The rest of the paper is organized as follows. Section 2 briefly reviews the
relevant literature in computer vision-based construction video analysis and the
background of motion analysis. Section 3 explains the bag of video feature words
model. Section 4 evaluates the performance of the bag of video feature words model
on two video data sets. Section 5 concludes the paper.

RELATED WORK
Computer vision algorithms can be widely used in construction to improve a
variety of manual processes if the problem of reliable recognition and tracking of
objects on construction jobsites can be solved. In this regard, many recent studies
have focused on evaluating the performance of existing vision recognition and
tracking algorithms in construction environments (Weerasinghe and Ruwanpura 2010;
Teizer and Vela 2009; Jog et al. 2010). For automated productivity
measurement using videotaping, there are so far three main approaches. They include
detecting the movement of construction resources (Zou and Kim 2007), recognizing
and tracking the trajectories of construction resources (Gong and Caldas 2010), and
recognizing worker gestures (Peddi et al. 2009). In particular, Peddi et al. (2009)
proposed to use a wireless camera to develop a real-time productivity measurement
system based on human poses for bridge replacement. In this study, background
subtraction was used to extract human pose at each frame, and a neural network was
used to train models for classifying worker performance into three classes including
effective work, ineffective work, and contributory work. This approach was only
tested on a bridge deck placement activity, and its performance on other video data
sets remains to be seen. Furthermore, it is likely that similar human gestures can
belong to different categories as defined above. Besides these efforts, Gonsalves
and Teizer (2009) studied human motions such as walking and running using 3D
ranging cameras; however, the performance of the algorithms was not reported. As an
extension of the work reported in Gong and Caldas (2010), this research focuses on a
general framework for classifying the motions of construction objects into intrinsic
categories pertaining to the activity in which the objects are engaged. We are
interested in a method that can classify motions at a level of detail that is
comparable to crew balance analysis or manual ergonomic studies. To date, this type
of method remains to be developed in the construction research domain.

BAG-OF-VIDEO FEATURE WORDS


Rooted in text mining and document analysis, the Bag-of-Words model has
been increasingly used in image and video mining applications. The driving force for
this trend is the advent of reliable image local feature detection and representation
methods, such as Scale-Invariant Feature Transform (SIFT) and Histogram of
Gradient (HOG) (Dalal et al. 2006). Figure 1 shows the overall steps in developing
the Bag-of-Video feature words model. The learning stage of model development
involves feature detection and representation, vector quantization for generating a
Codebook, and learning action models. In the recognition stage, the goal is simply to
apply the learned action model to classify the new action video. In the rest of this
section, each of these steps is described.

Figure 1. An Overview of the Bag-of-Video Feature Words Method

Representing Motion as Video Feature Words


Video feature words are essentially scale-invariant and view-invariant features
detected in video volumes. This method has been developed under the general notion
that human action in video sequences can be seen as silhouettes of a moving torso
and protruding limbs undergoing articulated motion and that such silhouettes can be
described using three-dimensional shapes or volumes. Then local descriptors, such as
SIFT and HOG, can be used to represent the features of these shapes for action
classification (Gorelick et al. 2004). Recently, this method has been increasingly used
for human action classification, particularly in analyzing player motions in various
sports (Niebles and Fei-Fei 2007; Laptev et al. 2008).
This research uses a video feature descriptor based on 3D grids to represent
the features detected in construction actions. The features were detected by the
combined usage of optical flow, 3D Harris Corner detector, and the HOG method.
The overall computation involves the following steps: 1) Determine local regions
using an interest point detector or dense sampling of the image plane or video volume
(video volume is generated from optical flow in multiple frames); and 2) Use a
descriptor to represent the region by a feature vector. The detailed computation
schema can be found in Klaser et al. (2008). Each of the HOG descriptors essentially
consists of high dimensional vectors that record the gradients around a particular
interest point. Figure 2 shows the features (yellow circles) computed on the video
frames in typical construction video sequences using the program developed by
Laptev et al. (2008). These features are essentially local interest points on the three-
dimensional motion shapes, and each of the features is represented through a 90-
dimension vector. The major benefit of these descriptors is their invariance to scale
and rotation; therefore, they are robust to viewpoint and video resolution changes.

Figure 2. Video Feature Words

By using this method, a large set of features (typically on the order of 10^5) and
their associated descriptors can be computed to represent the visual contents of the
videos. These features and descriptors are analogous to the words in a document. The
number of features produced in each video sequence depends on many factors, such
as the resolution and action type. This leads to another important step, codebook
formation, before these features can be effectively used for action classification.

Vector Quantization and Codebook Formation


Considering that there are tens of thousands of features that can be computed
for each category of actions, it becomes fairly difficult to discover a set of features
common to one particular type of action. In practice, these large quantities of features
are usually clustered into several hundred to thousands of clusters. In many recent
studies, K-Means based clustering algorithms have been used for this purpose. Then
the center of each of these clusters (which also have 90 dimensions) will be used to
represent a set of features that belong to this cluster. In this way, a compact
representation of the features can be formed. This process is often referred to as
vector quantization. At this point, these centers of clusters represent the video words
in a code book, and particular combinations of them are used to represent different
categories of actions. An intuitive way to understand this process is through an
analogy: the code book can be viewed as a dictionary, and the centers of the clusters
are the entries in the dictionary. Video sequences for different actions have these
entry words showing at different frequencies. Each of the video sequences can be
represented as a bag of video feature words. A particular distribution of entry words
for each action category can be learned from a training data set.
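The codebook formation step can be sketched as follows. The study's implementation is in MATLAB; the Java sketch below is purely illustrative, and all class and method names (CodebookBuilder, kMeans, toHistogram) are hypothetical rather than part of the authors' code. It clusters the 90-dimensional descriptors with a basic K-Means loop and then represents each video segment as a histogram of its nearest codebook entries.

import java.util.Random;

/** Minimal K-Means vector quantization sketch for building a video-word codebook. */
public class CodebookBuilder {

    /** Clusters descriptors (e.g., 90-dimensional HoF/HOG vectors) into k centers, the "video words". */
    public static double[][] kMeans(double[][] descriptors, int k, int iterations, long seed) {
        Random rnd = new Random(seed);
        int dim = descriptors[0].length;
        // Initialize centers with randomly chosen descriptors.
        double[][] centers = new double[k][];
        for (int c = 0; c < k; c++) {
            centers[c] = descriptors[rnd.nextInt(descriptors.length)].clone();
        }
        int[] assignment = new int[descriptors.length];
        for (int it = 0; it < iterations; it++) {
            // Assignment step: nearest center for each descriptor.
            for (int i = 0; i < descriptors.length; i++) {
                assignment[i] = nearestCenter(descriptors[i], centers);
            }
            // Update step: recompute each center as the mean of its members.
            double[][] sums = new double[k][dim];
            int[] counts = new int[k];
            for (int i = 0; i < descriptors.length; i++) {
                counts[assignment[i]]++;
                for (int d = 0; d < dim; d++) sums[assignment[i]][d] += descriptors[i][d];
            }
            for (int c = 0; c < k; c++) {
                if (counts[c] > 0) {
                    for (int d = 0; d < dim; d++) centers[c][d] = sums[c][d] / counts[c];
                }
            }
        }
        return centers;
    }

    /** Represents one video segment as a bag-of-words histogram over the codebook entries. */
    public static int[] toHistogram(double[][] segmentDescriptors, double[][] codebook) {
        int[] histogram = new int[codebook.length];
        for (double[] descriptor : segmentDescriptors) {
            histogram[nearestCenter(descriptor, codebook)]++;
        }
        return histogram;
    }

    private static int nearestCenter(double[] x, double[][] centers) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int c = 0; c < centers.length; c++) {
            double dist = 0.0;
            for (int d = 0; d < x.length; d++) {
                double diff = x[d] - centers[c][d];
                dist += diff * diff;
            }
            if (dist < bestDist) { bestDist = dist; best = c; }
        }
        return best;
    }
}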

Learning Motion Models using a Bayesian Approach


In a Bayesian learning framework, a naïve Bayesian approach can be
formulated as shown in Figure 3. The w represents a set of video words, the c
represents action class decisions, and the p(w|c) represents video word likelihood
given an action class. The shaded node in the figure can be observed, while the
unshaded ones cannot. Suppose there are N video sequences containing video words
from a vocabulary size M (i = 1,…M), then the corpus of videos can be summarized
in an M by N co-occurrence table, where each element of the table stores the number
of occurrences of a particular word in a particular video sequence. The Naïve
Bayesian classifier assumes that the probability of observing the conjunction w1,
w2….wN given the action category is just the product of the probabilities of the
individual word (Figure 3). With a table of co-occurrences of video feature words in
different action categories, the process shown in Figure 3 can be readily used for a
new video. Classification of a new video segment is to compute the associated
probability of generating a particular video word that appears in the new video in
each of the action categories, and for each action category, a joint probability can be
calculated to determine which action category is most probable.

$c^{*} = \arg\max_{c} p(c \mid \mathbf{w}) = \arg\max_{c} p(c)\, p(\mathbf{w} \mid c) = \arg\max_{c} p(c) \prod_{i=1}^{N} p(w_i \mid c)$


Figure 3. The Naïve Bayesian Model
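A minimal sketch of this learning and classification step is given below (again in Java for illustration; the class and field names are hypothetical and not the authors' MATLAB code). It builds log priors and Laplace-smoothed word likelihoods from the per-class video-word counts of the co-occurrence table and classifies a new segment's word histogram by the maximum a posteriori rule of Figure 3.

/** Minimal multinomial Naive Bayes sketch over video-word histograms. */
public class NaiveBayesMotionClassifier {
    private final double[] logPrior;        // log p(c)
    private final double[][] logLikelihood; // log p(w_i | c)

    /** counts[c][w] = occurrences of video word w across all training segments of class c. */
    public NaiveBayesMotionClassifier(int[][] counts, int[] segmentsPerClass) {
        int numClasses = counts.length;
        int vocabulary = counts[0].length;
        logPrior = new double[numClasses];
        logLikelihood = new double[numClasses][vocabulary];
        int totalSegments = 0;
        for (int n : segmentsPerClass) totalSegments += n;
        for (int c = 0; c < numClasses; c++) {
            logPrior[c] = Math.log((double) segmentsPerClass[c] / totalSegments);
            long total = 0;
            for (int w = 0; w < vocabulary; w++) total += counts[c][w];
            for (int w = 0; w < vocabulary; w++) {
                // Laplace smoothing so unseen words do not zero out the product.
                logLikelihood[c][w] = Math.log((counts[c][w] + 1.0) / (total + vocabulary));
            }
        }
    }

    /** Returns argmax_c [ log p(c) + sum_w n_w log p(w | c) ] for a new segment's histogram. */
    public int classify(int[] histogram) {
        int best = -1;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int c = 0; c < logPrior.length; c++) {
            double score = logPrior[c];
            for (int w = 0; w < histogram.length; w++) {
                score += histogram[w] * logLikelihood[c][w];
            }
            if (score > bestScore) { bestScore = score; best = c; }
        }
        return best;
    }
}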

EVALUATION OF BAG-OF-VIDEO FEATURE WORDS METHOD


Video Data sets
We created two video data sets to test the bag of video feature word model.
The information of both video data sets is summarized in Table 1. The first video
data set includes three backhoe motion categories, and the second data set includes
five motion categories in typical formwork activities. These video segments were
manually trimmed from hundreds of hours of video that were recorded on a variety of
construction jobsites. Some of these videos were recorded on rainy/snowy/windy days
from a large distance. Therefore, these video samples represent realistic construction
scenarios. Figure 4 shows snapshots of some of video segments. The challenge of
view point, scale, and illumination changes, occlusion, and low video resolution is
evident in the video data set.

Table 1. (a) Video Data Set I: Backhoe Motion; (b) Video Data Set II: Worker Motion in
Formwork Activities

(a) Video Data Set I: Backhoe Motion
Motion Categories: Swing, Excavating, Relocating
# of Video Segments/Category: 50
Frame Rate: 7 frames/second
Length of Video Segments: 10 seconds
Total # of Video Segments: 150

(b) Video Data Set II: Worker Motion in Formwork Activities
Motion Categories: Traveling, Transporting, Bending Down, Nailing with Hammer, Aligning Formwork
# of Video Segments/Category: 60
Frame Rate: 30 frames/second
Length of Video Segments: 5 seconds
Total # of Video Segments: 300

Figure 4. Example Snapshots of Video Segments

Testing Results and Discussion


For each of the video segments, we ran a 3D Harris corner detector on the
accumulated optical flows in consecutive frames. Then, for each of the interest points,
we used HoF as a descriptor to represent the feature surrounding that interest point. In
this way, 62,622 and 146,800 features were detected, respectively, in the
two video data sets. The K-Means method was used to group these features into 400-
2000 clusters, and the centers of these clusters become visual words in the codebook.
During the evaluation process, there are primarily two factors that can be
adjusted: the ratio of testing data to training data, and the number of code
words into which the features are clustered and which are used for model training. The first factor determines the
proportion of training and testing data; the testing data were not used in the training
process. In this research, we commonly used 80% of the data as training data and the
rest as testing data. For each evaluation, we typically adjusted the number of
code words in order to compare its impact on the algorithms' performance. At the end
of each evaluation, a confusion matrix was computed to summarize classification
accuracy; a sketch of this evaluation loop is given below, followed by part of the testing results.
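The evaluation loop itself is straightforward; the hypothetical Java helper below (reusing the NaiveBayesMotionClassifier sketched earlier, and again not part of the authors' MATLAB implementation) accumulates a confusion matrix whose rows are the true motion categories and whose columns are the predicted categories.

/** Evaluation helper: builds a confusion matrix over a held-out test set. */
public class MotionEvaluation {

    /** Rows are true motion categories, columns are the categories predicted by the classifier. */
    public static int[][] confusionMatrix(NaiveBayesMotionClassifier classifier,
                                          int[][] testHistograms, int[] trueLabels, int numClasses) {
        int[][] matrix = new int[numClasses][numClasses];
        for (int i = 0; i < testHistograms.length; i++) {
            int predicted = classifier.classify(testHistograms[i]);
            matrix[trueLabels[i]][predicted]++;
        }
        return matrix;
    }
}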
Ten experimental evaluations were conducted on the backhoe motion data set.
In each evaluation, the data set was randomly divided into training data and testing
data. In this case, we used 80% of the data for training and the remaining 20% for testing the
learned motion models. To ensure consistency, we fixed the number of code words at
500. Figure 5 shows the confusion matrix that summarizes the average classification
performance in these ten trials. A random guess in this case would be expected to
yield 33.3% accuracy given that there are an equal number of cases in each category. It
is clear that our model performs well in distinguishing these motions.
(a) Training data in Data set I:
              Relocating   Excavating   Swing
Relocating    92%          4%           1%
Excavating    6%           81%          13%
Swing         3%           15%          86%

(b) Testing data in Data set I:
              Relocating   Excavating   Swing
Relocating    80%          1%           1%
Excavating    16%          78%          20%
Swing         4%           21%          79%

Note: average of 10 random runs, 500 code words, and 80% of data for training
Figure 5. Confusion Matrix: (a) Training Data in Data set I; (b) Testing Data in Data set I
A similar training and testing process was conducted on the worker motion
data set. In this case, we used 1000 code words but kept the training vs. testing ratio
the same (80% for training). The results are shown in Figure 6. This is a much more
difficult data set than the backhoe motion data set. Because there are more categories in
this data set, the expected accuracy of random guess drops to 20%. Our learned
model can do significantly better than the random guess. It can be noted that the
learned model performs better in terms of classifying bending, nailing, and aligning
motions. It is clear that the most difficult motion categories to classify are transporting
and traveling. It is also true that these two categories have much in
common in terms of motion features. Overall, the bag of video feature words model
performs reasonably well considering the difficulty of this video data set.
Transporting Traveling Bending Nailing Aligning
Transporting 69% 8% 4% 1% 2%
Traveling 19% 69% 1% 1% 1%
Bending 3% 11% 90% 1% 6%
Nailing 2% 5% 1% 90% 8%
Aligning 6% 8% 3% 7% 83%
(a)
Transporting Traveling Bending Nailing Aligning
Transporting 45% 12% 5% 2% 2%
Traveling 35% 47% 7% 0% 3%
Bending 12% 20% 75% 7% 8%
Nailing 3% 17% 7% 68% 13%
Aligning 5% 5% 7% 23% 73%
(b)
Note: average of 5 random runs, 1000 codewords, and 80% data for training
Figure 6. Confusion Matrix: (a) Training Data in Data set II; (b) Testing Data in Data set II

CONCLUSION
In this study, we extended the bag of video feature words model into the
construction domain. We implemented this new motion learning and classification
framework in MATLAB, and we created two construction video data sets to evaluate
its performance. Experiments show that the bag of video feature words model
performs reasonably well on the video data sets. The attractiveness of the bag of video
feature words model is that it does not require foreground segmentation, and it is robust to partial
occlusion and changes in view point, illumination, and scale. The performance of this
method can be further improved by adding spatial information since it is well known
that the bag-of-words method ignores spatial information and considers only the
frequency of feature occurrence. Also, the strong independence assumption in the Naïve Bayesian
method can be relaxed by using other generative models such as probabilistic latent
semantic analysis. Since this is the first study to use this method on construction
video data sets, we hope that this study can establish a baseline for further comparing
the performance of other algorithms.

REFERENCES
Dalal, N., Triggs, B., and Schmid, C. (2006) “Human detection using oriented
histograms of flow and appearance.” In ECCV, 2006.
Gong, J., and Caldas, C. (2010). "Computer Vision-Based Video Interpretation
Model for Automated Productivity Analysis of Construction Operations."
ASCE Journal of Computing in Civil Engineering, 24(3), 223-324.
Gonsalves, R., and Teizer, J. "Human Motion Analysis Using 3D Range Imaging
Technology." 26th International Symposium on Automation and Robotics in
Construction (ISARC 2009), Austin Texas, 76-85.
Gorelick, L., Galun, M., Sharon, E., Brandt, A., and Basri, R. “Shape Representation
and Classification Using the Poisson Equation,” Proc. IEEE Conf. Computer
Vision and Pattern Recognition, vol. 2, pp. 61-67, 2004.
Jog, G. M., Brilakis, I. K., and Angelides, D. C. "Testing in harsh conditions:
Tracking resources on construction sites with machine vision." (in press)
Automation in Construction.
Klaser, A., Marszalek, M., and Schmid, C. “A spatio-temporal descriptor based on 3
D gradients.” In BMVC, 2008.
Laptev, I., Marszałek, M., Schmid, C., and Rozenfeld, B. (2008). “Learning realistic
human actions from movies” In CVPR, 2008.
Lowe, D. G. (2004). "Distinctive image features from scale-invariant keypoints."
International Journal of Computer Vision, 60(2), 91-110.
Niebles, J. C. and Fei-Fei, L. (2007) “A hierarchical model of shape and appearance
for human action classification.” In CVPR, 2007.
Peddi, A., Huan, L., Bai, Y., and Kim, S. "Development of human pose analyzing
algorithms for the determination of construction productivity in real-time."
Construction Research Congress 2009, Seattle, WA, 11-20.
Teizer, J., and Vela, P. A. (2009). "Workforce Tracking on Construction Sites using
Video Cameras." Advanced Engineering Informatics, 23(4), 452-462.
Weerasinghe, T. I. P., and Ruwanpura, J. Y. "Automated Multiple Objects Tracking
System (AMOTS)." Construction Research Congress 2010, Banff, Canada,
11-20.
Zou, J., and Kim, H. (2007). "Using Hue, Saturation, and Value Color Space for
Hydraulic Excavator Idle Time Analysis." J. Computing in Civil Engineering,
21, 238.
EVOLUTIONARY SOFTWARE DEVELOPMENT TO SUPPORT
ETHNOGRAPHIC ACTION RESEARCH
Timo Hartmann1
1 Assistant Professor, Department of Construction Management and
Engineering,
Twente University, P.O. Box 217, 7500AE Enschede, The Netherlands;
PH +31(0)53 489-3376; email: t.hartmann@utwente.nl

ABSTRACT
Using the ethnographic action research method researchers can develop
information systems by simultaneously accounting for technological and
organizational factors. The method relies on the close collaboration of practitioners
and researchers that develop a new information system in iterative steps of observing
current work practices in practical work contexts, developing a new or adjusted
information system, and evaluating the usefulness of the system by its introduction in
the same practical work context. One shortcoming of the method, caused by its highly
iterative character, is that it is not possible for researchers to design the information
system in much detail upfront to guide the software development efforts within each
iteration. This makes it hard for researchers to develop systems that they can
introduce readily in practice for the purpose of evaluating the developed system
during each research iteration. By drawing on evolutionary software development
methods this paper introduces a test driven software development framework to
support ethnographic action researchers to overcome this problem. The paper also
illustrates the application of the framework by describing the exemplary
implementation of the framework in software. Overall, with the evolutionary software
development framework the paper contributes to action research methodology to
develop information systems. It provides another stepping stone in enabling
researchers to develop methods and systems to support the complex project based
engineering processes of the construction industry in a bottom-up iterative manner.

EVOLUTIONARY SOFTWARE DEVELOPMENT AND ETHNOGRAPHIC


ACTION RESEARCH
Work processes in project based industries change frequently across projects
and even during the lifetime of a single project. Ethnographic action research is a
method to develop information systems to support such frequently changing work
processes (Hartmann et al. 2009). A first premise of ethnographic action research is to
work in close collaboration with the practitioners on the project. This close
collaboration allows the researcher to gain an in depth understanding of the local
work processes. A second premise of the research method is the development of
improvements for the local context in small iterative cycles of ethnographic
participant observation, information system development, and consecutive
implementation of the information system to allow for another cycle of ethnographic


participant observation. In this way, the method allows the continuous adjustment of
the information system with the changing work processes. Additionally, the method
allows researchers to react to changes in the work processes that are caused by the
implementation of the newly developed information system. Hence, in theory, the
ethnographic action research methodology allows for the continuous improvement of
project based work processes through the iterative development of information
technologies. In this way, the method enables the bottom up research of generally
applicable processes and best practices for work settings in frequently changing
environments, such as the construction industry.
One of the problems during ethnographic action research activities is the
dynamic accommodation of the iterative and evolutionary change that lies at the heart
of the research method. The research process requires the modification of functions
already developed during a previous iteration, or the extension of the system by the
introduction of new functions. In both cases, the introduced changes should not
disrupt the already ongoing application of the system's functionality developed in
previous iterations. Additionally, the integrity and consistency of the existing system
needs to be ensured, both functionally and from the perspective of data persistence.
This paper presents a framework to allow for the introduction of dynamic changes
while ensuring the integrity of the existing system. The framework is based on the
evolutionary and test driven software development philosophy (Kramer & Magee,
2002). To implement and test the framework the paper presents an exemplary
implementation of the framework in software.
The paper is structured in two parts. The first part of the paper derives the
development framework by drawing on existing literature in the field of evolutionary
software development. In the second part, the paper then describes the illustrative
application of the framework.

AN EVOLUTIONARY SOFTWARE DEVELOPMENT FRAMEWORK TO


SUPPORT ETHNOGRAPHIC ACTION RESEARCH EFFORTS

Acceptance Tests: Acceptance tests are tests for a completed function of the overall IT system.
They simulate the interaction of users with the system by automating user interaction with the
system and testing the system's outputs. Hence, acceptance tests treat the underlying
functionality of the IT system as a black box (Ambler & Sadalage, 2006).

Localization: Localization comprises the means to adapt IT systems to regional differences and
to the local requirements of the organizational cultures in which the system is to be implemented.
Localization allows this adaptation without changes to the underlying functional and structural
logic of the system.

Unit Tests: Unit tests determine whether individual units of code work for their specific purpose.
Ideally, each unit test is independent of the others and can run in isolation from the rest of the
system (Martin, 2003).

Database Sandboxes: A database sandbox is a database different from the database that supports
the ongoing operation of the system. Database sandboxes allow developers to evolutionarily
design, program, and test functionality without compromising the integrity of the operational
data of the system. During the evolutionary development of IT systems, every participating
developer should use a personal sandbox; additional sandboxes can support various testing
purposes (Ambler & Sadalage, 2006).

Serendipitous Decoupling: Serendipitous decoupling is the use of functional structures within
software code that allow a flexible switch between different database sandboxes and testing
functionality (Martin, 2003).

Table 1. Evolutionary Software Development Methods

Ethnographic action research requires that researchers are able to quickly


provide new or improved parts of an iteratively developed system to the practitioners
they work with in the software development process. It is very important that
functionality implemented and tested in previous iterations is
maintained in downstream cycles. Evolutionary development methods are well suited
to support ethnographic action research efforts as they provide functionality to
iteratively update existing systems. For one, evolutionary software development
methods rely on a modular system structure that allows for the separate management
of the system's functional and structural elements. This modular setup allows
developers to dynamically extend the system without stopping or disturbing the
operations of the modules of the system that are unaffected by the change (Kramer &
Magee, 2002). Hence, a modular architecture allows the provision of new or improved
parts of a system with only minimal disruption of existing work processes
implemented earlier. Next to modular architectures, evolutionary software
development uses a number of methods to safeguard the integrity of a system after
every evolutionary introduction of a change (Ambler & Sadalage, 2006; Martin, 2003).
The most important of these methods are “Acceptance Testing”, “Unit Testing”,
“System Localization”, the use of “Database Sandboxes”, and “Serendipitous
Decoupling” of functional code. Table 1 describes and summarizes these methods in
more detail.
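As one concrete, purely illustrative reading of "serendipitous decoupling", the business logic can be written against an interface rather than a concrete database, so that tests can swap in an in-memory stand-in or a personal database sandbox without touching operational data. All names in the Java sketch below are hypothetical and not part of any system described in this paper.

import java.util.HashSet;
import java.util.Set;

/** Business logic depends only on this interface, never on a concrete database. */
interface ProjectRepository {
    void addProject(String name, String contractType, String description);
    boolean containsProject(String name);
}

/** In-memory stand-in that unit tests can use instead of the production database. */
class InMemoryProjectRepository implements ProjectRepository {
    private final Set<String> projectNames = new HashSet<String>();

    public void addProject(String name, String contractType, String description) {
        projectNames.add(name);
    }

    public boolean containsProject(String name) {
        return projectNames.contains(name);
    }
}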
Based on these briefly introduced evolutionary software development
methods, this section derives a framework to support ethnographic action research
efforts. The framework consists of a general software architecture and an iterative
development process that integrates the evolutionary software development methods
that are summarized in Table 1. Illustration 1 presents the modular server-client
architecture of the framework. The architecture relies on a database to persist all
application data. This database resides on a database server. The server also provides
an interface for clients to access the core business logic of the IT system to be
developed and specific functionality to interact with information stored in the database.
On the client side, users can access the server functionality through different, usually
browser based user interfaces. Clients execute the business logic they receive from
the server and perform the bulk of the computations locally to keep the server
workload low. The communication between the clients and the server is by and large
managed asynchronously. Hence, clients send requests but do not halt their operations while
waiting for a response from the server. Instead, specifically designed listener
processes that run parallel to the clients' other operations wait for responses from the
server and then execute, as a separate process, the functionality that relied on the initial
request. In this way, state-of-the-art distributed systems can provide the speed and
"feel" of traditional stand-alone workstation programs. The modular architecture
allows for the quick exchange and extension of its different modules. Additionally,
the architecture allows for the integration of the evolutionary methods in the
development process. Illustration 1 depicts this relation between the methods and the
modules.

Illustration 1: Server-client architecture of the proposed framework with methods
to ensure the integrity of the system during evolutionary development efforts.
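The asynchronous request/listener behaviour described above can be sketched in plain Java (without GWT's actual RPC plumbing, which the framework would use in practice); the interface and class names below are hypothetical.

import java.util.concurrent.Callable;

/** Listener that is invoked when the server's response arrives, so the client never blocks. */
interface ResponseListener<T> {
    void onResponse(T result);
    void onError(Exception cause);
}

/** Sends a request on a background thread and lets the client continue its other operations. */
class AsyncClient {
    public <T> void send(final Callable<T> request, final ResponseListener<T> listener) {
        new Thread(new Runnable() {
            public void run() {
                try {
                    listener.onResponse(request.call());
                } catch (Exception e) {
                    listener.onError(e);
                }
            }
        }).start();
    }
}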

Next to suggesting the client-server based architecture, the framework introduced in
this paper also suggests a development process that integrates the methods into
ethnographic action research iterations (Illustration 2). The process starts with
ethnographic observations of the working practices within specific work
environments and with the development and identification of software supported use
cases with the potential to improve these work processes. The process then suggests
to development the functionality to support these use cases using two cascaded
iterative development cycles. Outer development cycles represent the development of
the graphical user interface for the use case itself to allow for black box configuration
testing. Inner iterative cycles represent the unit test driven development of required
atomic business functions to implement each use case. Each inner iteration can be
tested independently by a specific unit test, while developers test each outer iteration
using configuration tests that ensure the overall functionality of the use case. After the
successful testing of the use case itself, the new functionality can then be deployed in
the operating IT system at the ethnographic research site. The successful deployment
of the new functionality can then be tested by running the configuration test
developed previously against the operating IT system. After successful deployment of
the new functionality ethnographic action researchers can then start with the next
iteration of the research process by starting to observe the interactions of practitioners
with the updated system. In what follows we apply the developed process to an
example problem, the development of a cost accounting system for a small
construction company.

Illustration 2: Evolutionary Software Development Process to Support Action
Research Activities

AN ILLUSTRATIVE IMPLEMENTATION EXAMPLE


This section describes an illustrative implementation of the above described
architecture. The section also demonstrates the application of the evolutionary
development methods by describing the implementation of a small part of an
exemplary IT system. The IT system is loosely based on a previous ethnographic
action research effort to develop a financial decision support system for a small US-
based construction and design service company (Hartmann & Johnson 2009). The
description in this paper only focuses on describing the development of the “add
project” functionality of this IT system to provide an easy to understand illustrative
example. The interested reader can refer to the open source project “Open Project
Control” (http://sourceforge.net/projects/projectcontrol) that provides a more
comprehensive implementation of the system following this paper's framework.
Researchers can also use the open source project's software code as a starting point
for their own ethnographic action research activities.
The basic architecture to support the exemplary implementation of the process
uses the JAVA programming language (Horstmann & Cornell, 1997) and a number of
related JAVA application development tools that are described briefly in Table 2. To
illustrate the use of database sandboxes, the example implementation uses two
database sandboxes: one development sandbox located locally on the development machine
and one production sandbox on a remote server. Overall, the implemented
architecture closely resembles the client-server structure described in Illustration 1.

Therefore, the rest of the section will focus on describing the implementation of the
“add project” functionality following the process described by Illustration 2.

Tool Description
Google The Google web toolkit allows for the easy setup of server-client based
Web architectures for projects that use the JAVA programming language.
Toolkit The GWT also allows for the easy setup of localization mechanisms to
(GWT) support different specific project contexts.
JUnit JUnit is a JAVA based framework to support unit and configuration
testing. It provides all the infrastructure necessary to implement, run,
and evaluate unit and configuration tests.
DbUnit DbUnit is another JAVA based framework that provides functionality to
write unit tests for database functions.
Selenium Selenium is a software testing framework for web applications. The
framework provides functionality to record user interaction with a
browser and to convert this activity in JUnit test code.
Table 2. Tools used to implement the evolutionary development architecture

Illustration 3: Excerpt from a GWT localization file with text strings used to
implement the "Add Project" functionality.
The minimal user interface to support the “add project” functionality provides the
possibility to enter a project name, a contract type, and a short description for the
project. Further, the user interface should provide the possibility to confirm that the
project is to be added to the application's database. The example implementation
programs this basic user interface in JAVA code, which can then be translated by GWT
to JavaScript that is executable by state-of-the-art internet browsers. All text strings
of the user interface are integrated in the localization functionality of GWT to allow
for the easy adjustment of the user interface to the specific language used in specific
project settings. Illustration 3 provides an excerpt from a GWT localization file with
the terms used to implement the “Add Project” functionality. Using the GWT
localization functionality, action researchers can support the language practitioners
use in different project contexts by providing similar text files. It would be, for
example, easy to exchange the term “Lump Sum” for the term “Fixed Price” in a copy
of the file displayed above to support a different company setting.
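In GWT, such localization is typically expressed as a Constants interface whose keys are resolved from locale- or company-specific .properties files; the sketch below is only a hypothetical example of what the "Add Project" strings could look like, not the project's actual code.

import com.google.gwt.i18n.client.Constants;

/** Hypothetical localization interface for the "Add Project" dialog. Each company setting
 *  supplies its own AddProjectConstants.properties file, e.g. lumpSum = Fixed Price. */
public interface AddProjectConstants extends Constants {
    String projectName();   // label of the project name field
    String contractType();  // label of the contract type field
    String lumpSum();       // "Lump Sum" in one setting, "Fixed Price" in another
    String addProject();    // caption of the confirmation button
}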

Illustration 4. 'Add Project' configuration test.

Once this user interface is implemented, developers can then use Selenium to record a
configuration test for the add project functionality. To do so, they load the JavaScript
code generated earlier in a browser supported by Selenium and record an exemplary
interaction with the user interface. Selenium can then generate the JUnit code for
a configuration test of this interaction. Illustration 4 provides an example of such
configuration test code that is partly generated by Selenium. The code automates the
navigation to the website and the opening of an "add project" dialog box (Illustration
4, lines 7-8). Further, it fills in the "add project" interface described above (lines 10-
12), and it presses the confirmation button (line 13). The functionality of the
configuration test is then finalized with manually written code to test whether the
new project was actually added to the database (lines 19-22). It is also important to
note that the test can run against an existing production database without compromising
the information in this database. This functionality of the configuration test is crucial
because configuration tests also need to be run to test whether a deployment of a new
or improved use case functionality in an existing production environment was
successful (see also the deployment part of the evolutionary development process in
Illustration 2). The code example in Illustration 4, therefore, removes the added
project from the database in line 25 before it stops the database transaction in line 27
and 28.
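For readers without access to Illustration 4, a hedged sketch of such a configuration test is shown below using JUnit and the Selenium WebDriver Java API; the URL and element identifiers are hypothetical, and the database verification and clean-up steps of the real test are only indicated in comments.

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

/** Black-box configuration test for the "add project" use case (illustrative sketch). */
public class AddProjectConfigurationTest {

    @Test
    public void addProjectThroughTheBrowser() {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://localhost:8080/projectcontrol");        // hypothetical deployment URL
            driver.findElement(By.id("addProjectButton")).click();      // open the "add project" dialog
            driver.findElement(By.id("projectNameField")).sendKeys("Test Project");
            driver.findElement(By.id("contractTypeField")).sendKeys("Lump Sum");
            driver.findElement(By.id("descriptionField")).sendKeys("Added by configuration test");
            driver.findElement(By.id("confirmButton")).click();

            // Simplified check; the test in Illustration 4 additionally queries the database
            // and removes the test project again so production data is not compromised.
            assertTrue(driver.getPageSource().contains("Test Project"));
        } finally {
            driver.quit();
        }
    }
}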
After the implementation of the configuration test, the next step in the test
driven development framework is the identification of atomic implementation
functions. For this simple use case scenario the only atomic implementation function
is to add a new project to the database. Hence, this use case only requires the
implementation of one unit test: “addProject”. Using the functionality of JUnit and
DbUnit, the implementation of the test is straightforward. It involves the manual
storage of a number of projects in a separate test database and, afterwards,
functionality to verify whether the projects have been stored in the database.
Illustration 5 shows the implemented example code of this unit test.

Illustration 5: Unit test to verify the functionality to add a new project to the database.
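A hedged sketch of such a unit test is shown below; it uses JUnit with plain JDBC against an in-memory H2 database sandbox instead of DbUnit, and all table, column, and method names are hypothetical rather than taken from the project's code.

import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.junit.Test;

/** Unit test for the atomic "addProject" business function (illustrative sketch). */
public class AddProjectUnitTest {

    // A throw-away in-memory sandbox, never the production database.
    private static final String SANDBOX_URL = "jdbc:h2:mem:projectcontrol";

    @Test
    public void addProjectStoresTheProjectInTheDatabase() throws Exception {
        Connection connection = DriverManager.getConnection(SANDBOX_URL, "sa", "");
        try {
            connection.createStatement().execute(
                "CREATE TABLE projects (name VARCHAR(100), contract_type VARCHAR(50))");

            addProject(connection, "Test Project", "Lump Sum");   // function under test

            PreparedStatement query = connection.prepareStatement(
                "SELECT COUNT(*) FROM projects WHERE name = ?");
            query.setString(1, "Test Project");
            ResultSet result = query.executeQuery();
            result.next();
            assertEquals(1, result.getInt(1));
        } finally {
            connection.close();
        }
    }

    /** Stand-in for the system's real addProject business function. */
    private void addProject(Connection connection, String name, String contractType) throws Exception {
        PreparedStatement insert = connection.prepareStatement(
            "INSERT INTO projects (name, contract_type) VALUES (?, ?)");
        insert.setString(1, name);
        insert.setString(2, contractType);
        insert.executeUpdate();
    }
}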

With these tests in place an action researcher can then continue to implement
the actual functionality of the interface. The reader should note that, while the
previous implementation of the tests seems to be a lot of extra work, most of the logic
of the required functional code is already included in the tests. Hence, the actual
implementation of the functional code does not add much more work to the overall
programming effort. After the implementation of the functionality, the action
researcher can deploy the new code and test whether the functionality works in
the local IT environment using the previously implemented configuration test.

CONCLUSION
This paper introduced a development framework to support ethnographic
action research efforts to implement IT systems for project based environments. The
framework draws on state-of-the-art evolutionary software technologies to
specifically support the iterative nature of action research efforts. It is based on a
modular client-server architecture and provides a test-driven development process.
In this way, the framework allows for the quick and easy development and
deployment of new or altered functionality to support specific business processes
while ensuring that previously developed functionality remains intact. Additionally,
the modular server-client character enables developers to make program changes
available immediately to all users without the need to install new functionality on
client machines.
By providing the possibility to integrate changes into existing and running IT systems
in an evolutionary manner, the framework is specifically designed to support the iterative
nature of ethnographic action research efforts. The framework allows for the
continuous adjustment of information systems to work processes identified through
ongoing ethnographic observations. It also allows for timely reaction to changes in
the work processes in dynamic project based settings and for the quick change of
functionality to support work in different settings. Finally, with the possibility to
quickly integrate changes while ensuring the integrity of the overall system, the
framework allows for the timely integration of suggestions developed in collaboration
with practitioners.
Overall, the framework presented here supports ethnographic action research
efforts by allowing for the iterative improvement of project based work processes through
the evolutionary development of information technologies. The framework presents
another stepping stone towards the possibility to support bottom up research and
development efforts of generally applicable processes and best practices for work
settings in frequently changing environments, such as the construction industry.

REFERENCES
Ambler, S. & Sadalage, P. (2006). Refactoring databases: Evolutionary database
design. Addison-Wesley Professional.
Hartmann, T., Fischer, M. & Haymaker, J. (2009). Implementing information systems
with project teams using ethnographic-action research. Advanced Engineering
Informatics, 23, 57-67.
Hartmann T. and S. Johnson (2009). A Pragmatic Approach to Develop Financial IT
Systems for Small Construction Companies. Proceedings of the 2009 ASCE
International Workshop on Computing in Civil Engineering, Austin, Texas, USA.
Horstmann, C. & Cornell, G. (1997). Core Java 1.1 fundamentals: volume 1. Sun
Microsystems, Inc. Mountain View, CA, USA.
Kramer, J. & Magee, J. (2002). The evolving philosophers problem: Dynamic change
management. Software Engineering, IEEE Transactions on, 16, 1293-1306.
Martin, R. (2003). Agile software development: principles, patterns, and practices.
Prentice Hall PTR Upper Saddle River, NJ, USA.
Determining The Benefits Of An RFID-Based System For Tracking Pre-
Fabricated Components In A Supply Chain
E. Ergen1, G. Demiralp2, G. Guven3
1 Assistant Professor, Department of Civil Engineering, Istanbul Technical
University, Istanbul, 34469, TURKEY; PH (90) 212 285 6912; e-mail:
esin.ergen@itu.edu.tr
2 Graduate Student, Department of Civil Engineering, Istanbul Technical
University, Istanbul, 34469, TURKEY; e-mail: demiralpg@itu.edu.tr
3 Doctoral Student, Department of Civil Engineering, Istanbul Technical
University, Istanbul, 34469, TURKEY; PH (90) 212 285 3656; e-mail:
gursans.guven@itu.edu.tr
ABSTRACT
Radio Frequency Identification (RFID) technology is an automated
identification technology that can be used to track components through
construction supply chains. Although there are studies which show that RFID
increases labor productivity, detailed assessment of the benefits of RFID
technology utilization through a supply chain is limited. In this study, a simulation
model is developed to calculate benefits of an RFID investment in a construction
supply chain. The simulation model is developed for a pre-fabricated exterior
concrete wall panel supply chain and it includes prefabrication and construction
phases. The results of the simulation indicate significant reductions in task
durations and improvements in the efficiency of the process. Based on the
identified benefits, a cost sharing factor for the parties of the supply chain is
determined and it is proposed to be used for distributing the investment cost.
INTRODUCTION
Radio Frequency Identification (RFID) technology has been utilized in
multiple research studies in the construction industry to track components and
related information throughout various phases of a supply chain (Jaselskis and
Misalami, 2003; Goodrum et al., 2006; Ergen et al., 2007). In the recent studies,
several benefits were identified such as decreases in the time needed to complete
certain tasks (e.g., delivery and receipt of materials), increases in the labor
productivity and improvements in data collection processes (Jaselskis and
Misalami, 2003; Song et al., 2006; Grau et al., 2009). Also in some studies,
simulation models were developed to compare the current systems with the RFID-
based systems (Akinci et al. 2006, Young et al. 2010). However, benefits of RFID
technology for different parties in a supply chain were not specifically assessed in
the literature.
In the study explained in this paper, a supply chain of pre-fabricated panels
is investigated to determine the expected benefits for different parties, and a
simulation model is created to assess the impact of RFID technology through the
supply chain. The simulation model includes the prefabrication and construction
phases. In this paper, the initial results of the simulation are provided and
discussed and the cost sharing factor is determined for the supply chain members.


BACKGROUND RESEARCH
Various cost-benefit studies were performed in the retail industry to determine
the feasibility of using RFID technology in supply chains (Lee and Ozer, 2007;
Sarac et al., 2010; Ustundag, 2010). In the Architecture/Engineering/Construction
(A/E/C) industry, previous studies show that integrating RFID technology with
the current approach resulted in time savings and improvements in labor
productivity for specific construction activities. However, the studies that consider
the impact of RFID technology on the entire A/E/C supply chain are limited in the
literature. For example, Grau et al. (2009) identified the benefits associated with
the automation of tracking process for the structural steel elements in a case study
and the focus was only on the construction site (i.e., lay down yard and the
installation area). Nasir (2008) also determined the cost and benefits of an
automated construction materials tracking system that located the materials (e.g.,
pipe spools, valves) via integration of RFID and Global Positioning System (GPS)
technologies at the job site. The benefits are identified as the total man hours that
were reduced for locating materials, the reduced lost labor hours, and the costs
avoided due to reduced number of lost materials. Jang and Skibniewski (2009)
also performed a cost-benefit analysis to illustrate the labor savings in sensor-
based material tracking. In another study, time savings were reported due to RFID
usage during material receiving process at site (Jaselskis and Misalami 2003).
In some of the studies, simulation models were used to assess the impact
of technology use in different phases. Davidson and Skibniewski (1995)
developed a simulation model to investigate the effects of an automated data
collection method (i.e., bar coding) on increasing efficiency in asset management
at an office building in the maintenance phase. In another study, a simulation
model was developed to investigate the benefits of using advanced data collection
technologies and it only focused on collection of productivity data from the
construction site (Akinci et al. 2006). The most comprehensive simulation model
covers a supply chain including the installation of components and it was
developed to reflect the impact of automated materials tracking technology on the
visibility of materials (Young et al., 2010). The study explained in this paper
aims to examine the impacts of RFID technology on the supply chain of pre-
fabricated concrete exterior wall panels, including the prefabrication and
construction phases.

CURRENT AND RFID-BASED PROCESSES


In this study, a case study was conducted to investigate the supply chain of
the pre-fabricated concrete exterior wall panels. In this supply chain, effective
identification of each individual panel is needed for a successful material
management and construction. The concrete exterior wall panels have a dimension
of 3 m x 5 m. Approximately 500 pieces are stored at once at the production plant,
and about 70 pieces are stored at the construction site. The investigated supply
chain includes pre-fabrication activities of the wall panels at a production plant,
shipping to the site and installation at the construction site. Simulation models
were developed both for the current process and for the RFID-based process to
identify the benefits of using RFID technology through the supply chain.

The practitioners interviewed stated that in the current process


identification, tracking and locating of wall panels is a problematic and time-
consuming process since it is performed manually by using paper-based labels
attached to the components. In the simulation model, the current process was
modeled to include the basic identification, production, transfer and installation
activities: production, transfer, storage at the production plant and shipping to the
site, and receiving, transfer, and installation at the construction site. It also
includes extended search and reproduction activities for some percentage of the
panels that cannot be found at the production site and at the construction site.
The RFID-based process includes the similar activities; however, in some
activities instead of manual methods, RFID-based methods are used. In the
proposed RFID-based process, it is envisioned that the RFID tags will be attached
to the components once the production of panels is completed. The tags will be
used to automatically identify the components through the supply chain, and RFID
technology integrated with GPS will be used in storing the location of the
components and in locating them at the plant and at the construction site. In the
envisioned system, the coordinates of a panel are received from the GPS unit and
stored in a database along with the panel’s ID retrieved from the RFID tag
attached to the panel. When locating the panel, the location of the panel is
retrieved from the same database. Thus, compared with the current process, the durations of
identification, tracking and locating activities are reduced by using RFID
technology. In addition, utilization of RFID ensures that a higher percentage of
pieces are identified correctly for shipping to the construction site and for
installation.
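A minimal sketch of this envisioned identification-and-locating record is given below in Java; the class and method names are hypothetical and are only meant to illustrate how the RFID-read panel ID and the GPS fix could be paired in a database-like store.

import java.util.HashMap;
import java.util.Map;

/** Sketch of the envisioned panel-location store: the panel ID read from the RFID tag is the
 *  key, and the GPS coordinates captured at the same moment are the stored value. */
public class PanelLocationRegistry {

    private final Map<String, double[]> locationByPanelId = new HashMap<String, double[]>();

    /** Called when a tagged panel is scanned: store the GPS fix under the panel's RFID ID. */
    public void recordScan(String panelId, double latitude, double longitude) {
        locationByPanelId.put(panelId, new double[] { latitude, longitude });
    }

    /** Called when a panel has to be located for shipping or installation. */
    public double[] locate(String panelId) {
        return locationByPanelId.get(panelId);  // null if the panel was never scanned
    }
}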

SIMULATION MODEL
Two simulation models are developed to examine the impacts of RFID
usage in the current supply chain of the prefabricated concrete panels: (1) Base
case which represents the current manual approach. (2) RFID case which is the
modified version of the base case. In this case, RFID tags are attached to wall
panels, and identification and tracking of the panels are performed in a semi-
automated way by using handheld RFID readers and GPS units. The objective of
developing two different simulation models is to calculate the time differences
between the base case and the RFID case to determine the benefits in terms of
time and money savings due to utilization of RFID technology.
Prefabricated concrete panels go through two different phases within their
supply chain: (1) the production phase at the plant, and (2) the construction phase
at the construction site. Thus, the supply chain is considered as a two-echelon
supply chain and the tasks are classified as plant tasks, and construction site tasks.
Durations of each activity and probabilities in the model are the inputs for the
simulation models. The task durations for the base case were gathered from the
case study. To collect data, observations were made at the construction site, and
practitioners from the manufacturing plant and at the construction site were
interviewed. On the other hand, probabilities (e.g., percentages of located and
missing components) and the durations of the tasks in RFID case were adapted
from the average durations given in the previous RFID studies (Jaselskis, 2003;
Yin et al., 2009; Grau et al., 2009). Table 1 lists the probabilities used in the base
case and RFID case. When determining the durations of the transfer tasks (i.e.,
transfer to storage area in plant and transfer to lay down area at construction site)
in the RFID case, estimations were made based on the observations since these
activities are not considered in previous studies.
Table 1. Probabilities for the base case and RFID case

Probabilities                                                          Base case (%)   RFID case (%)
% of panels located in plant for shipping                              65              99.5
% of missing panels located during extended search in plant           97              99.5
% of missing materials identified at receipt (construction site)       5               0.5
% of panels located at construction site for installation              80              99.5
% of missing panels located during extended search at constr. site    99              99.5
% of incorrectly moved pieces (for installation)                       97              99.5
The simulation is performed by using the academic version of a commercial software


package called Arena. A portion of the developed base case simulation model is
presented in Figure 1. The replication length of the simulations was 3600 hours
which corresponds to the production of 150 precast concrete panels in five
months. Both models were simulated 1000 times. Common random numbers
are used in both simulations to minimize the variability in the simulation models.

Figure 1. A portion of the developed base case model

SIMULATION RESULTS
The initial results of the simulation of two models are summarized in
Table 2. The average and accumulated task durations are given for the base case
and RFID-based case. There is a significant decrease in the durations of related
tasks in the RFID case. The largest time savings in task durations are observed in
locating of panels and extended search, which is performed when the panels
cannot be located at the plant. Another important labor time saving is observed
during receiving of panels at construction site.

Table 2. Average and accumulated durations

Task Name                                   Base case             RFID case             Accum. time
                                            Average    Accum.     Average    Accum.     savings
Production of panels*                       24 h       3600 h     24 h       3600 h     -
Transfer to storage area in plant**         15 min     31.8 h     8 min      19.9 h     11.9 h
Locate panels for shipping                  20 min     46.1 h     0.6 min    1.4 h      44.7 h
Extended search in plant                    90 min     80.9 h     90 min     1.2 h      79.7 h
Shipping panels to site*                    60 min     150 h      60 min     150 h      -
Receive panels at site                      1.3 min    3.8 h      0.78 min   1.9 h      1.9 h
Transfer to storage yard at constr. site**  25 min     53.9 h     10 min     24.8 h     29.1 h
Locate panels at constr. site               10 min     23.2 h     0.57 min   1.4 h      21.8 h
Extended search at construction site        60 min     15.2 h     60 min     1.5 h      13.7 h
Moving panels to construction area*         20 min     50 h       20 min     50 h       -

*Tasks with fixed durations that are not affected by RFID utilization.
**Performed by two workers.

Table 3 summarizes the total number of incorrectly shipped pieces and


missing pieces both for the base case and the RFID case. The numbers calculated
in the simulation were rounded up to the nearest integer. The number of
incorrectly shipped pieces was identified to be approximately eight at the plant
and four at the construction site for the base case, and they were reduced to zero for
the RFID case. Similarly, two pieces were identified to be lost at the plant in the
base case and no pieces were lost in the RFID case. Finally, no pieces were
identified to be missing at the construction site in either case.

Table 3. Number of incorrectly shipped panels in plant and at construction site

               # of incorrectly shipped or identified pieces     # of missing panels
               Base case           RFID case                     Base case      RFID case
Plant          8                   0                             2              0
Construction   4                   0                             0              0

To determine the benefits of using RFID for each party in the supply
chain, base case and RFID case were compared and the differences between these
two cases were analyzed in terms of cost reduction. Three types of improvements
that resulted in cost savings were identified in comparison of the RFID case with
the base case: (1) decrease in task durations which leads to reduction in labor and
equipment cost, (2) decrease in the number of incorrectly shipped/identified
pieces and related transfer (i.e., labor and equipment) cost, (3) decrease in the
number of missing panels and reduction in reproduction costs. Since it was not
possible to quantify the cost of the delay caused by a missing panel at the
construction site, this factor was not included in the cost saving calculations.

To calculate the cost savings incurred as a result of decreased task


durations, time savings are obtained and multiplied by the corresponding
activities’ unit cost. A similar approach is used for calculating the cost savings
resulting from the decreased number of incorrectly shipped/identified pieces. The
transfer costs are related to these factors and are calculated both for the plant and
the construction site. In the plant, the transfer cost corresponds to costs incurred from the
transportation of incorrect panels to the construction site on trailers, and the
average hourly transportation cost is $58 per panel (for a one-hour trip), including
the labor cost. On the other hand, at the construction site transfer cost is related to
incorrectly identified panels which are moved to construction area. The panels are
moved by cranes, and the unit price of this transfer is $106 per panel, including
the labor cost. The average unit price of transfer costs (i.e., shipping to the site and
moving pieces to the installation location at the construction site) were identified
and multiplied by the corresponding savings (e.g., decreased number of
incorrectly shipped pieces and the incorrectly shipped/identified pieces). Finally,
to calculate the cost savings of reduced missing pieces, unit cost of panel
reproduction is multiplied by the savings in number of missing panels. All the unit
costs are average values gathered from the practitioners during the interviews.
The identified cost savings were grouped in terms of the source of the cost
saving (i.e., decrease in task duration, missing panel and number of incorrect
identification) and the parties that benefit from these savings. The cost savings that
occur at the plant and in shipping to the site are attributed to the panel manufacturer,
whereas the savings that occur at the construction site are included in the contractor’s
savings (Table 4). The total cost saving is calculated as $14,709. 67% of the
benefit is gained by the panel manufacturer and 33% of the benefit is gained by
the contractor.
Table 4. Cost savings obtained at the plant and construction site ($)

Location   Task duration   Production of missing panel   Incorrect identification   Total
Plant      6672            2730                          464                        9865
Site       4420            0                             424                        4844
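The cost sharing factor follows directly from Table 4; the short sketch below reproduces the calculation with the table's rounded dollar figures (so small rounding differences remain), and the class and variable names are illustrative only.

/** Reproduces the cost sharing calculation implied by Table 4 (illustrative sketch). */
public class CostSharingFactor {
    public static void main(String[] args) {
        // Cost savings at the plant (panel manufacturer): task durations, reproduction of
        // missing panels, and incorrect identification.
        double manufacturerSavings = 6672 + 2730 + 464;   // ~ $9,865 in Table 4
        // Cost savings at the construction site (contractor).
        double contractorSavings = 4420 + 0 + 424;        // ~ $4,844 in Table 4

        double totalSavings = manufacturerSavings + contractorSavings;   // ~ $14,709
        double manufacturerShare = manufacturerSavings / totalSavings;   // ~ 0.67
        double contractorShare = contractorSavings / totalSavings;       // ~ 0.33

        // The RFID investment cost would then be split in roughly a 67/33 ratio.
        System.out.printf("Manufacturer share: %.0f%%, contractor share: %.0f%%%n",
                100 * manufacturerShare, 100 * contractorShare);
    }
}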

The results show that the panel manufacturer of this two-echelon supply chain
gains almost twice the benefit (i.e., cost savings) of the contractor
when RFID technology is utilized. One of the reasons is that the numbers of
incorrectly identified panels and missing panels at the plant are greater than those at
the construction site. Since the panels for different destinations are stored together
at the plant, it is more common to lose material at the plant compared to
construction sites. Additionally, the plant stores a larger number of panels at once,
while the construction site stores a limited number of panels in the lay down areas.
The identified benefit ratio for two parties can be used as a cost sharing factor
when implementing the RFID-based system in the described supply chain. The
cost of the RFID investment can be shared by two parties based on this cost
sharing factor.

CONCLUSIONS
In this study, two simulation models are developed for calculating the
benefits of an RFID-based system for the members of a prefabricated exterior
concrete panels supply chain. Since the supply chain is modeled as a two-echelon
supply chain, both models include the prefabrication and construction phases. The
first model represents the existing manual approach (i.e., base case), and the
second model represents RFID integrated semi-automated approach which is
developed for automated identification and locating of components. The initial
results of the simulations show that there is a major reduction in the durations of
the tasks that are related to identification and localization of the panels at the plant
and at the construction site. Also, the number of missing or incorrectly
shipped/identified panels decreased significantly. No panels were missing
either at the plant or at the construction site in the RFID case.
When these benefits were quantified for each party, it was determined that,
while both parties of the supply chain gained cost savings by using RFID
technology, the total benefit of the panel manufacturer is about twice
the benefit of the contractor. The identified benefit ratio for the two
parties can be used as a cost sharing factor when implementing an RFID-based
system in the described supply chain. Also, for other RFID implementations in
construction supply chains, cost sharing factor can be calculated and used to
distribute the investment cost. As a future work, it is planned to calculate the
investment cost of the proposed RFID system to perform a detailed cost-benefit
analysis.

REFERENCES

Akinci, B., Kiziltas, S., Ergen, E., Karaesmen, I. Z., and Keceli, F. (2006).
“Modeling and Analyzing the Impact of Technology on Data Capture and
Transfer Processes at Construction Sites: A Case Study.” J. Constr. Eng.
Management, 132(11), 1148-1157.
Davidson, I. N. and Skibniewski, M. J. (1995). “Simulation of automated data
collection in buildings.” J. Comput. Civ. Eng., 9(1), 9-20.
Ergen E., Akinci B., East B., and Kirby J. (2007). “Tracking Components and
Maintenance History within a Facility Utilizing RFID Technology.” J.
Comput. Civil Eng., 21(1), 11-20.
Goodrum, P.M., McLaren, M.A. and Durfee, A. (2006). “The Application of
Active Radio Frequency Identification Technology for Tool Tracking on
Construction Job Sites.” Automation in Construction, 15(3), p. 292-302.
Grau, D., Caldas, C. H., Haas, C. T., Goodrum, P. M., and Gong, J. (2009).
“Assessing the impact of materials tracking technologies on construction”,
Automation in Construction, 18(7), 903-911.
Jang, W.S. and Skibniewski, M. (2009). “Cost-Benefit Analysis of Embedded
Sensor System for Construction Materials Tracking”, J. Constr. Eng.
Management, 135(5), 378-386.
Jaselskis, E. J. and El-Misalami, T. (2003). “Implementing Radio Frequency
Identification in the Construction Process.” J. Constr. Eng. Management,
129(6), 680-688.
Lee, H. and Ozer, O. (2007). “Unlocking the value of RFID.” Production and
Operations Management, 16(1), 40-64.
Nasir, H. (2008). “A model for automated construction materials tracking.” MSc
Thesis, University of Waterloo, Waterloo, Canada.
Sarac, A., Absi, N. and Dauzere-Peres, S. (2010). “A literature review on the
impact of RFID technologies on supply chain management.” Int. J.
Production Economics, 128(2010), 77–95.
Song, J., Haas, C.T., Caldas C.H., Ergen E. and Akinci B. (2006a). “Automating
the task of tracking the delivery and receipt of fabricated pipe spools in
industrial projects.” Automation in Construction, 15(2), 166-177.
Ustundag, A. (2010). “Evaluating RFID investment on a supply chain using
tagging cost sharing factor.” Int. Journal of Production Research, 48(9),
2549-2562.
Yin, S. Y. L., Tserng, H. P., Wang, J. C. and Tsai, S.C. (2009). “Developing a
precast production management system using RFID technology”,
Automation in Construction, 18(5), 677-691.
Young, D., Nasir, H., Razavi, S., Haas, C., Goodrum, P. and Caldas, C. (2010).
“Automated Materials Tracking and Locating: Impact Modeling and
Estimation.” Proc. of the Construction Research Congress, Banff, Alberta.
Coordination of Converging Construction Equipment in
Disaster Response
Albert Y. Chen1 and Feniosky Peña-Mora2
1
Ph.D. Candidate, Department of Civil and Environmental Engineering, University of
Illinois at Urbana-Champaign, 205 N. Mathews Ave., Urbana, IL 61801; email:
aychen2@illinois.edu
2
Dean of The Fu Foundation School of Engineering and Applied Science and Morris
A. and Alma Schapiro Professor of Civil Engineering and Engineering Mechanics,
Earth and Environmental Engineering, and Computer Science, Columbia University,
510 S.W. Mudd Bldg, 500 W. 120th St., New York, NY 10027 email:
feniosky@columbia.edu

Abstract

During disaster response, it is imperative to provide rescuers with adequate equipment in a
timely manner to facilitate lifesaving operations. However, in the case of the 9/11
terrorist attacks, for example, the supply of high-demand equipment was insufficient
during the initial phase of the response, challenging lifesaving operations.
Prioritization of limited resources is one of the greatest challenges in decision making.
Meanwhile, management of geographically distributed resources has been recognized
as one of the most important but difficult tasks in large-scale disasters. Additionally,
resources from outside the disaster-affected zone converge into the affected area
to assist the response efforts; this resource convergence often makes the already
complex task of resource coordination even more challenging.
Although there are difficulties in managing converging volunteers and groups,
such as their deployment to incidents without appropriate skills and training,
construction equipment and its professional operators are specialized entities.
The effectiveness of their collaboration in disaster response operations could be
improved through regular participation in drills. As a result, the convergence of
construction equipment could be efficiently utilized to facilitate Urban Search and
Rescue (US&R). This paper proposes a mobile application that could potentially
guide and coordinate volunteering construction equipment in collaboration with the
emergency command and control structure.


Introduction
Distribution of resources, such as heavy construction equipment, is critical to efficient
and effective urban search and rescue (US&R) operations during disaster response. It
is imperative to provide rescuers with adequate equipment in a timely manner to facilitate
lifesaving operations (Sullum et al., 2005; McGuigan, 2002). However, management
of geographically distributed resources has been recognized as one of the most
important but challenging tasks in disaster response (Holguin-Veras et al., 2007;
Halton, 2006). Challenges include the identification, assignment, location tracking and
delivery of resources (SBC, 2006; 9/11 Commission Report, 2004). For disaster
response efforts to become more effective, these challenges must be addressed.

During disaster response, search and rescue task forces need to gain situational
awareness of the disaster, activate required resources and capabilities, and
coordinate the response actions (DHS, 2008). These steps form a loop: continually
gaining and maintaining the status of the disaster, activating and deploying resources, and
coordinating response actions for an efficient and effective disaster response.
Coordination of resources during disaster response operations has been characterized
by various shortcomings that inhibit efficient and effective decision making, and
prioritization of limited resources is one of the greatest challenges (SBC, 2006; 9/11
Commission Report, 2004; Auf der Heide, 1989). Limited resources must be
distributed efficiently to first responders to facilitate lifesaving operations.
However, the supply of resources such as construction equipment is usually unable to
meet the great demand in large-scale incidents, which can result in additional
casualties (Gentes, 2006; Bissell et al., 2004). As a result, efficient prioritization
and distribution of resources is critical to disaster response efforts.
Motivation
In response to disasters, the initial efforts, including US&R, are usually
carried out by civilians who are within the area at the time the disaster occurs
(Drabek and McEntire, 2003; Auf der Heide, 1989). These individuals collect relief
supplies, provide shelter, and are engaged in a variety of services (Drabek and
McEntire, 2003; Wenger, 1992). At the same time, the establishment of official
command and control by the local, state and federal Emergency Management
Agencies (EMAs), to coordinate task forces and assets in responding to the disaster,
usually takes time (Drabek and McEntire, 2003; Auf der Heide, 1989). Meanwhile,
volunteers and response organizations from outside the disaster-affected zone converge
into the area to assist the response efforts. This is the effect of resource convergence,
which often makes the already complex problem of resource coordination even more
challenging (Drabek and McEntire, 2003; Fritz and Mathewson, 1957). For instance,
it causes on-site congestion from volunteers, material, and equipment that hinders
efficient logistical coordination (Drabek and McEntire, 2003; Kendra and
Wachtendorf, 2001). However, the convergence of resources, such as volunteers,
equipment and organizations, could also make the response to the incident
more efficient and effective (Drabek and McEntire, 2003; Mileti, 1989; Auf der
Heide, 1989). In the rapidly changing environments of disasters, the convergence
could bring capabilities and flexibilities that do not exist or are not sufficient in
the response system (Kendra and Wachtendorf, 2001). How to properly manage the
converging resources is therefore the challenge to be addressed.
One of the greatest challenges of utilizing converging resources is that they may be
deployed immediately to incidents without the appropriate and required skills,
training, and familiarity with the command and control structure and EMAs (Kendra
and Wachtendorf, 2001). In addition, Kendra and Wachtendorf (2001) pointed out
that to have an efficient and effective disaster response, it is vital to develop, maintain
and take action based on a “Shared Vision” of emergency goals, critical tasks and
their need for critical resources. It is difficult for civilian volunteers to obtain such a
Shared Vision without prior training and communication with the EMAs.
As the type, magnitude and context of disasters vary, mitigation actions usually require
creativity and improvisation from responders in order to respond better to the incident (Auf
der Heide, 1989). However, the official centralized command and control system
makes logistics coordination difficult, as it is static and inflexible (Neal and Philips,
1995). The command and control structure is established to coordinate the response
efforts and resources of the local, state and federal governments, the private sector and
NGOs (NRF, 2008). The general outline from the bottom up is as follows, although it
may vary from jurisdiction to jurisdiction: 1) first response teams on site request
resources; 2) the Incident Command Post (ICP), which manages and coordinates
several aggregated incidents, such as several collapsed and partially collapsed
buildings in an area, provides first responders with the resources available
in its jurisdiction; 3) the county-level Emergency Operations Center (EOC)
provides resources to multiple ICPs and establishes priorities for the distribution of
resources among the various incidents; 4) a state-level EOC is activated if the
incident exceeds the response capacity of the county, with the primary role of
supporting the local government in responding to the incidents and coordinating
resources within the state; and 5) if the incident exceeds the local and state response
capacity, the federal government involves its agencies to organize a federal response
and coordinates with the states and response partners to mobilize more resources.
Throughout these efforts, the private sector and NGOs coordinate with and support
the response actions of the governments. However, this approach entails various
challenges that inhibit efficient utilization of available response resources.
During the initial phase of disaster response, access to heavy equipment is critical to
the relief efforts (Gentes, 2006; SBC, 2006; Kevany, 2005; Bissell et al., 2004).
Heavy equipment is a necessity during response operations such as 1) rapid debris
clearance of the transportation network so that first response teams can reach blocked
hazard zones, 2) careful lifting of damaged structural elements when
human power is not sufficient, and 3) selective debris removal to clear structural
materials and facilitate void searches and tunneling under collapsed buildings
(ELANSO, 2009). In destructive events, the best window for saving victims is within
the first 24 hours after the impact of the disaster (Mizuno, 2001). However, in
major disasters, the supply of heavy construction equipment for rapid removal of
collapsed building sections is often unable to meet the massive demand. In the
Loma Prieta Earthquake, early US&R efforts also faced challenges due to
the lack of available heavy equipment (McGuigan, 2002). Heavy equipment, which
supports critical lifesaving activities, must be efficiently located, assigned and
distributed to meet the urgent demands of US&R.
Objective
How response units perceive information to make decisions is critical. When disasters
occur, the information needed is not always available. Before the Haiti Earthquake, for
instance, there was little information regarding the road network and the spatial entities
on existing digital maps. After the earthquake, this lack of information hindered the
response operations. However, volunteers in Port-au-Prince filled in cartographic
blanks in the maps, which became very detailed and were accessible to the public
online (OpenStreetMap, 2010). It is also important to emphasize that initial
information collected about a disaster is often inaccurate (Quarantelli, 1983). For
this reason, assessment of resource needs has to be a recurring procedure that
continues throughout the duration of the incident, to update information for all
entities involved in the disaster response operations (Auf der Heide, 1989). In the
case of Haiti, the volunteers used text messages, GPS, and hand drawings to dispatch
thousands of updates on road names, building collapses, and injury locations
(OpenStreetMap, 2010; Ushahidi, 2010). Officials used the information to guide
their emergency workers, including the Marine Corps and the Red Cross (Ushahidi,
2010). Although this approach to information updating has drawbacks, the
benefits outweighed them in the case of Haiti (OpenStreetMap, 2010; Ushahidi, 2010).
The objective of this paper is to implement a mobile application that allows responding
equipment to communicate with a public web service capable of receiving and
storing information discovered and updated by civilians and first responders. The
mobile application could potentially be used by officials in the command and control
system as well as by volunteering personnel, equipment and materials.
Approach
A decentralized approach that facilitates immediate equipment distribution in
response to disasters was proposed by Chen and Peña-Mora (2011). An Equipment
Control Structure, inspired by the behavior control structure of honeybees'
foraging (Biesmeijer and Seeley, 2005), enables a collective decision making process
for equipment coordination. With the Equipment Control Structure applied to
resource management tasks such as construction equipment distribution, disaster response
operations have the potential to become more efficient. Each volunteering Equipment
Unit makes its own decision on where it will carry out the disaster relief effort.
Based on the decentralized approach the authors proposed for converging
resources (Chen and Peña-Mora, 2011), the mobile application proposed in this
paper could automate information gathering and decision making for an Equipment
Unit. An Equipment Unit is assumed to be a complete crew formed by the equipment,
the operator, and the required labor and material.
Information technology approaches have great potential to make equipment
coordination more efficient. GIS analysis and visualization with GPS tracking could
provide the authorities with an overall view of how all the equipment moves and is
distributed in the disaster-affected areas. The aforementioned decentralized process
could be implemented to automate decision making for each Equipment Unit through
a software agent (Fiedrich and Burghardt, 2007) that processes the computation in the
background and suggests further steps to Equipment Units. The agent could be
deployed on a smart phone, PDA or a portable computer with GIS/GPS for
visualization and tracking (Figure 1d).
The authors’ former work on damage assessment and GIS visualization (Peña-Mora
et al., 2010) has great potential to support and be integrated with this research.
Damage information in US&R, including hazards, structural damage, and trapped
victims, could be collected by the Building Assessment System (Figure 1a), which
was developed based on standard building assessment procedures used by US&R
structural specialists (Peña-Mora et al., 2010; USACE, 2008). Assessment
information is stored in digital format. Each ICP could use the collected digital
information to cluster all demand in its jurisdiction (Figure 1b). The EOC could host
a data server, serving as the Dance Floor (Chen and Peña-Mora, 2011), to broadcast
resource demand (Figure 1c). Equipment Units could use digital devices to access
demand information, and the installed software agent would suggest to which ICP the
unit should be deployed and provide route guidance (Figure 1d). In Figure 1b, the red
diamond markers overlaid on the map are the ICPs within the disaster-affected zone,
the blue circles are available Equipment Units, and the green triangle is the EOC.

Figure 1 a) User interface of BAS (left); b) Spatial Visualization of the Damaged Zone
(center); c) EOC and data server (top right); and d) Digital device, e.g. an iPhone, for each
equipment unit (bottom right).
For volunteering Equipment Units, a public web service could provide the converging
resources with information to guide where they should converge.
The web service stores discovered or updated demand information in its database
and makes it accessible to the public.
When a person in the disaster-affected area discovers a location where victims
need help, for example victims trapped under collapsed structural
elements, that person could send this information to
the web server through a handheld device with network capability, such as a personal
digital assistant (PDA), a smart phone, or a tablet device. The information
uploaded by that person, together with all information provided by other people, could be
viewed through a webpage. As a result, the webpage could serve as an information hub
for unassigned disaster response resources, such as Equipment Units. This way, the
productivity of the unassigned resources could greatly increase, avoiding unnecessary
idle time caused by the overload of the official command and control system.
There are certain assumptions for this to work. We assume that there will be access to
computer networks, such as a wireless 3G network. In cases where infrastructure-based
networks are not present, an ad hoc network approach could be taken (Peña-Mora
et al., 2010). We also assume there will be no malicious injection of information
into the web service. In addition, for this web service to be worth using, the
disaster response scenario is assumed to be one in which the official
command and control system is saturated. In other words, the command and control
system is overloaded by the massive number of tasks to be carried out, such as US&R,
resource activation, assignment and coordination.
The implementation of the web service is as follows. MySQL is chosen as the
database. The database holds information such as the entry key/id, the timestamp of
when the piece of information was received, the latitude and longitude coordinates of
the location, a photograph of the situation, the potential number of victims, the
condition/severity of the victims, and textual comments. The web interface is written
in PHP for its easy access to databases and its ability to embed program logic in HTML web
pages. The Google Maps V3 API is used to display spatial information. The web service is
programmed to automatically annotate the reported victim location with the
photograph taken and the textual information.
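To make the storage step concrete, the following is a minimal sketch of the kind of table and insert operation such a service could use. It is written in Python with the standard-library sqlite3 module as a stand-in for the MySQL/PHP implementation described above; the table and column names, as well as the sample values, are illustrative assumptions, not the authors' actual schema.

    import sqlite3

    # Stand-in for the MySQL database described above; names are assumed.
    conn = sqlite3.connect("demand_reports.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS demand_report (
            id          INTEGER PRIMARY KEY,   -- entry key/id
            received_at TEXT,                  -- timestamp of the report
            latitude    REAL,
            longitude   REAL,
            photo_path  TEXT,                  -- photograph of the situation
            victims     INTEGER,               -- potential number of victims
            severity    TEXT,                  -- condition/severity of the victims
            comments    TEXT                   -- textual comments
        )
    """)

    # A civilian report as it might arrive from a handheld device (sample values).
    conn.execute(
        "INSERT INTO demand_report "
        "(received_at, latitude, longitude, photo_path, victims, severity, comments) "
        "VALUES (?, ?, ?, ?, ?, ?, ?)",
        ("2010-01-13T10:32:00", 18.5392, -72.3364, "site_042.jpg", 3, "trapped",
         "Victims trapped under collapsed wall, heavy lifting needed"),
    )
    conn.commit()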
The results of civilians' reports of equipment demand could be viewed from the
server through an internet browser (Figure 2). People interested in helping
the disaster relief efforts could visit the web service and see where help is needed.

Figure 2

Conclusion and Future Work


In this paper, a mobile device is expected to facilitate the coordination of Equipment
Units. Through mobile devices, Equipment Units connect to a web service that takes
information from users who discover equipment demand in the disaster-affected area
and publishes the information on a map. The web service provides
Equipment Units with the necessary information for the decentralized decision making
proposed by the authors (Chen and Peña-Mora, 2011). Mobile devices take the
information and automate decision making for the Equipment Units.
Although this approach to equipment distribution could result in non-optimal
assignment and utilization of equipment, it operates under the assumption that
the official command and control system is overloaded. As a result, this web service
could potentially be used to guide construction equipment in responding to demands in
the early phase of a disaster.
Future work will implement further algorithms into this process. In a large-scale
setting where the official command and control system is overloaded, the number of demands
for equipment could be very large. As a result, clustering of discovered
demands needs to be performed on the server side to avoid overloading the
Equipment Units with information. In addition, an algorithm that ranks demand locations for an
Equipment Unit based on the number of demands, spatial attributes, severity of
demand, and the capacity of the piece of equipment could be highly useful in
supporting the crew's decision making.
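As one possible sketch of such a ranking step (the weighting scheme, field names, and the numeric encoding of severity below are illustrative assumptions, not an algorithm proposed by the paper), each Equipment Unit's agent could score the reported demand locations and sort them:

    import math

    def score(demand, unit_lat, unit_lon, unit_capacity,
              w_victims=1.0, w_severity=2.0, w_distance=0.5):
        """Illustrative score: more victims and higher severity raise the score,
        greater distance and insufficient capacity lower it.
        Assumes demand["severity"] is encoded as a number (e.g., 1-3)."""
        # Rough planar distance in kilometers, adequate for a small disaster zone.
        dx = (demand["longitude"] - unit_lon) * 111.0 * math.cos(math.radians(unit_lat))
        dy = (demand["latitude"] - unit_lat) * 111.0
        distance_km = math.hypot(dx, dy)
        capacity_ok = 1.0 if unit_capacity >= demand.get("required_capacity", 0) else 0.2
        return capacity_ok * (w_victims * demand["victims"]
                              + w_severity * demand["severity"]
                              - w_distance * distance_km)

    def rank_demands(demands, unit_lat, unit_lon, unit_capacity):
        """Return the demand locations ordered from most to least attractive."""
        return sorted(demands,
                      key=lambda d: score(d, unit_lat, unit_lon, unit_capacity),
                      reverse=True)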
Acknowledgement
The authors would like to thank Bill Keller (Champaign County EMA), Mark
Toalson (Champaign County GIS Consortium), Mr. Nacheman (chair, ITTF PSC
Building Industry Emergency Response Network) for their kind suggestions, and
Gavin Horn (Research Program Director of IFSI) and Brian Brauer (Assistant
Director of IFSI) for their help and guidance in the exercise at the IFSI, and the
reviewers for their valuable and helpful comments.
References
9-11 Commission Report (2004). “National Commission on Terrorist Attacks Upon the
United States—9-11 Commission Report.” Final Report of the National Commission
on Terrorist Attacks Upon the United States, Official Government Edition.
Auf der Heide, E. (1989) “Disaster Response: Principles of Preparation and Coordination.”
Online Book for Disaster Response, Center of Excellence in Disaster Management
and Humanitarian Assistance.
Biesmeijer, J. C. and Seeley, T. D. (2005) “The use of waggle dance information by honey
bees throughout their foraging careers.” Behavioral Ecology and Sociobiology, 59(1),
133–142
Bissell A. B., Pinet, L., Nelson, M., and Levy, M. (2004). “Evidence of the Effectiveness of
Health Sector Preparedness in Disaster Response.” Family and Community Health,
Lippincott Williams & Wilkins, Inc, Vol. 27, Np.3, pp. 193-203
Chen, A.Y. and Peña-Mora, F. (2011) "A Decentralized Approach Considering Spatial
Attributes for Equipment Utilization in Civil Engineering Disaster Response." ASCE,
Journal of Computing in Civil Engineering, Reston, VA. doi:
10.1061/(ASCE)CP.1943-5487.0000100
DHS, 2008 “National Response Framework Document,” Department of homeland Security,
January 2008, <http://www.fema.gov/pdf/emergency/nrf/nrf-core.pdf> (11/20/2008)
Drabek T. E., and McEntire D. A. (2003) “Emergent phenomena and the sociology of
disaster: lessons, trends and opportunities from the research literature.” Disaster
Prevention and Management, Vol. 12 Iss: 2, pp.97-112
ELANSO (2009) <URL:
http://www.elanso.com/ArticleModule/HlVwUKSETDLcPzIsUfVcPAIi.html> (2/12/2009)
Fritz, C. E., and Mathewson, J. H. (1957) “Convergence behavior in disasters; a problem in
social control.” Washington, National Academy of Sciences-NRC.
Gentes, S. (2006) “Rescue Operations and Demolition Works: Automating the Pneumatic
Removal of Small Pieces of Rubble and Combination of Suction Plants with
Demolition Machines.” Bulletin of Earthquake Engineering, Vol. 4, pp. 193-205
Halton, B. (2006). “Katrina: Size, Scope, and Significance.” Fire Engineering, Vol. 159 Issue
5, pp. 220-224, 5p, May, 2006.
Holguin-Veras, J., Perez, N., Ukkusuri, S., Wachtendorf, T., and Brown, B. (2007)
“Emergency Logistics Issues Affecting the Response to Katrina: A Synthesis and
Preliminary Suggestions for Improvement.” Journal of the Transportation Research
Board, No. 2022, Transportation Research Board of the National Academies,
Washington D.C. 2007, V. 2022, pp. 76-82
Kendra, J. M. and Wachtendorf, T. (2001) “Rebel Food… Renegade Supplies: Convergence
after the World Trade Center Attack.” Disaster Research Center, Preliminary Paper
No. 316, Disaster Research Center, University of Delaware, Newark.
Kevany M. J. (2005). “Geo-Information for Disaster Management: Lessons from 9/11.” Geo-
Information for Disaster Management, Springer Berlin Heidelberg, pp. 443-464.
McGuigan, D. M. (2002) “Urban Search and Rescue and the Role of the Engineer.” M.S.
thesis, University of Canterbury, New Zealand.
Mileti, D. S. (1989) “Catastrophe planning and the grass roots: a lesson to the USA, from the
USSR.” International Journal of Mass Emergencies and Disasters, Vol. 7, pp. 57-67.
Mizuno, Y. (2001) “Collaborative Environments for Disaster Relief.” Master’s thesis,
Department of Civil & Environmental Engineering, MIT, Cambridge, MA. June,
2001.
National Research Council (NRC) (2007). “Successful Response Starts with a Map.
Improving Geospatial Support for Disaster Management.” Committee on Planning for
Catastrophe: A Blueprint for Improving Geospatial Data, Tools, and Infrastructure.
The National Academies Press, Washington, D.C., 2007.
OpenStreetMap (2010) “Map volunteers help Haiti Search & Rescue - 24th January 2010”
<URL: http://www.geog.ucsb.edu/events/department-news/680/how-map-volunteers-
helped-haiti-search-amp-rescue/> (2010/9/24).
Peña-Mora, F., Chen, A.Y., Aziz, Z., Soibelman, L., Liu, L.Y., El-Rayes, K., Arboleda, C.
A., Lantz, T. S., Plans, A. P., Lakhera, S., and Mathur, S. (2010) "A Mobile Ad-Hoc
Network Enabled Collaborative Framework Supporting Civil Engineering Emergency
Response Operations." ASCE, Journal of Computing in Civil Engineering, Reston,
VA, Vol. 24, Issue 3, pp 302-312
Quarantelli, E. L. (1983) “Delivery of emergency medical care in disasters: assumptions and
realities.” New York, 1983, Irvington Publishers, Inc.
Select Bipartisan Committee (SBC) (2006). “A Failure of Initiatives, Final Report of the
Select Bipartisan Committee to Investigate the Preparation for and Response to
Hurricane Katrina.” US Government Printing Office, Washington, DC.
Sullum, J., Bailey, R., Taylor, J., Walker, J., Howley, K., and Kopel, D. B. (2005) “After the
Storm Hurricane Katrina and the failure of public policy” < URL:
http://www.reason.com/news/show/36334.html > (2/10/2009)
Ushahidi (2010) “Haiti: Taking Stock of How We Are Doing.” <URL:
http://blog.ushahidi.com/index.php/2010/02/06/ushahidi-how-we-are-doing/> (2010/9/24).
Wenger, D. (1992) “Emergent and Volunteer behavior during disaster: Research findings and
planning implications.” HRRC Publication 27P, Texas A&M University, Hazard
Reduction Recovery Center, College Station, TX.
A Management System of Roadside Trees Using RFID and
Ontology

Nobuyoshi Yabuki1, Yuki Kikushige2, and Tomohiro Fukuda3


1
M. ASCE, Ph.D., Professor, Division of Sustainable Energy and Environmental
Engineering, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka,
Suita, Osaka, 565-0871, Japan; PH +81-6-6879-7660; FAX +81-6-6879-7663; email:
yabuki@see.eng.osaka-u.ac.jp
2
Graduate Student, Division of Sustainable Energy and Environmental Engineering,
Graduate School of Engineering, Osaka University
3
Dr. Eng., Associate Professor, Division of Sustainable Energy and Environmental
Engineering, Graduate School of Engineering, Osaka University

ABSTRACT

As roadside trees are an important asset of road infrastructure, standardized
inspection and diagnosis guidelines and records have been proposed in Japan.
However, the ledgers and records are usually paper-based, and the databases, if
employed, are often weak and poorly structured. Thus, the recorded data has not been used
effectively by road administrators for the maintenance, remediation, and renewal of roadside trees.
Furthermore, because each governmental or municipal agency has its own ledger or database
and uses different terminologies, units, and tree registration systems, it is very
difficult to compare or combine two or more roadside tree ledgers or databases. Therefore,
in this research, a roadside tree diagnosis system is being developed using Radio
Frequency Identification (RFID) and Personal Digital Assistants (PDA) in order to
facilitate inspection and diagnosis. In addition, an ontology of roadside tree
management is being developed in order to compare and analyze various roadside
tree databases. The prototype systems will be applied to real roadside trees and the
proposed methodology will be validated.

INTRODUCTION

Since planting trees has various effects, such as carbon dioxide fixation,
mitigation of the heat island phenomenon, reduction of air pollution, scenery
enhancement, relaxation, and ecosystem maintenance, afforestation in urban
areas is an important environmental program. Roadside trees are an
essential part of urban vegetation, and their effects include not only those stated above
but also road safety, disaster mitigation, and the formation of leafy shade. The number of tall
roadside trees in Japan increased from 3.7 million in 1987 to 6.7 million in 2007,
which reflects the increasing social demand for and importance of roadside trees. On the
other hand, roadside trees are surrounded by objects that inhibit their growth, such
as electric poles and wires, light poles, road signs, billboards, and underground gas and
water pipes. Extremely heavy pruning is often done to reduce the frequency and cost of
pruning work, and roots are often uplifted by underground works. In addition, gas
emissions and dust on roads affect the health of the trees. Neglecting the health of
roadside trees can lead to outbreaks of fungi, disease and insect damage, dieback,
stump holes, and rotting wood, and eventually trees may fall when strong winds blow.
In fact, some people have been killed in tree-falling accidents.
In order to prevent such accidents and to keep roadside trees healthy for a
long time, some national highway offices and local governments have begun to
introduce roadside tree diagnosis by tree surgeons based on the Visual Tree
Assessment (VTA) method developed by Claus Mattheck (2007) of Germany. Tree
surgeons are experts in the diagnosis and treatment of trees and are publicly certified by
the Japan Greenery Research and Development Center; there were 1,730 tree surgeons as
of January 2009.

Diagnosis with the VTA method aims to find defects caused by disease and decay inside
trees through appearance inspection. Its distinguishing feature is that diagnosis is done scientifically and
systematically following a common procedure instead of relying on experts' intuition
and experience. The Japan Urban Tree Diagnosis Association promotes this
diagnosis technique to realize systematic maintenance of roadside trees by
accumulating clinical records of periodic diagnosis work. However, apart from some
progressive local governments, not many national road offices or local governments currently
execute VTA-based diagnosis, for fiscal reasons and because of the high cost of diagnosis.
Thus, new solutions or improvements to the diagnosis method are necessary to
achieve a breakthrough. Although Radio Frequency Identification (RFID)
technology has recently been investigated for application to inspection and diagnosis in the
maintenance of structures, it has not been applied to research and development on the
maintenance of roadside trees.
On the other hand, if periodic inspection and diagnosis are performed for
roadside trees, a large amount of data will be stored in each organization's
roadside tree database. If the data in these databases can be shared or temporarily integrated
for comparison or analysis, the databases will be used more effectively.
However, since different terms may be used with the same meaning, it will be
difficult for an integrated database to treat queries correctly.

Ontology is a technique that can formulate the concepts behind human terms into
forms comprehensible by machines, using concept classes and semantic links.
Developing a consistent knowledge base using ontology can significantly enhance the sharing and
reuse of knowledge. Using an ontology as a schema is effective in unifying
different terminologies and achieving interoperability among multiple systems. Although
ontologies have been used for developing unified medical science databases, no
application of an ontology-based schema for roadside trees has been reported.

Therefore, the objective of this research is to develop a system for diagnosis
support and data management using RFID technology and ontology, in order to
improve efficiency and to enable multiple databases with different terminologies to
be compared and integrated. A prototype system was developed, tested on an
actual road, and evaluated by several experts.

ROADSIDE TREE DIAGNOSIS METHOD AND CURRENT PROBLEMS

Roadside Tree Diagnosis Method


Roadside tree diagnosis starts with filling out the street name, tree
species, tree number, date of diagnosis, weather, road management office, name of
inspector, name of tree surgeon, planting form, etc. Then an appearance inspection,
which comprises 55 items, is executed, and whether a complete
examination is necessary is determined systematically from the inspection result,
using a determination table, based on six criteria, i.e., life-energizing force,
boughs, crotches, shafts, shaft bifurcations, and roots. The inspection takes about 20 minutes for each tree,
depending on the individual tree. If a complete examination is determined to be necessary, tests
using a resistograph and a gamma-ray tree decay detector are executed to determine the
risk of the tree falling. Then a health determination is made: each tree is judged
healthy, somewhat unhealthy, or unhealthy. After the health determination, a decision is
made among wait-and-see, logging-and-renewal, or logging.
To improve the efficiency of roadside tree management, Sekizawa et al.
(2007) proposed a tree health assessment method using high-resolution satellite
photographs and remote sensing data. This method is applicable to judging a
group of the same kind of trees in an area but not to managing individual trees. Sasaki
et al. (2007) proposed a roadside tree management support system using GIS and
diagnostic robots. To bring this concept to reality, various problems, such as robot
algorithms and automatic diagnostic technologies, will have to be solved.

Current Problems
Current problems regarding roadside tree management, based on our
literature search and interviews with public agencies and greenery business
companies, are described below.
1) Inconsistent and unestablished management method: Although a unified roadside
tree diagnosis format has been agreed upon among tree surgeons, it will take more time for
the format to spread nationwide.
2) Lack of accumulated diagnosis data: Due to the high cost (about US$185) of
completing a diagnosis form for each tree, diagnosis has not been done repeatedly.
3) Difficulty in tree identification: Generally, tree identification is done using
photographs and maps, which causes the following problems. As many trees of the same kind
are planted in a row at regular intervals along the road, mistakes in
identifying a tree in a picture occur frequently. In addition, since a tree is identified by its
position from a certain reference point, e.g., 9th from traffic light No. 102,
the number is no longer correct if a tree is logged or additional trees are planted.
4) Time for diagnosis work based on the VTA method: It takes a tree
surgeon much time to execute a diagnosis due to the many diagnosis items and the complicated
determination procedure. If the required time for diagnosis is reduced, the number
of trees that can be diagnosed per person per day will increase, which will
eventually reduce cost.

DEVELOPMENT OF ROADSIDE TREE DIAGNOSIS SUPPORT SYSTEM

System Overview
In this research, the Roadside Tree Diagnosis Support System (RTDSS) was
developed to solve the problems described in the previous section. In this system, an
RFID tag is attached to each roadside tree, and a tree surgeon with a Personal Digital
Assistant (PDA) equipped with an RFID reader/writer performs the diagnosis after
reading the ID from the tag. Each RFID tag has its own unique ID, so individual
trees can be distinguished reliably. Once the tree ID is identified, diagnosis forms
are displayed on the PDA and the user can easily input the necessary data.
The PDA stores previous diagnosis data, if any, which is useful for comparing the
current and previous conditions at the site.

System Architecture
Since active RFID tags have batteries that need to be replaced, passive
RFID tags without batteries are used. There are four bands available in Japan, and each
band has its own advantages and disadvantages. We adopted the 13.56 MHz band,
considering the communication distance and directionality. There are three types
of data storage for RFID tags, i.e., read-only, write-once, and read/write.
In this research, we adopted the read/write type because the latest information should be
stored in the RFID tag for reference every time a diagnosis is done. In addition,
environmental resistance and durability are required. Based on the above
considerations, we used the Tag-it HF-I of Texas Instruments. This 13.56 MHz tag is
a passive, coin-type tag, 22 mm in diameter, covered with polyphenylene sulfide (PPS),
environmentally resistant, and durable.
We selected the Hewlett-Packard iPAQ 212 as the PDA because its display is
relatively large, is of the touch screen type, and is suitable for outdoor use with its LED backlight,
and because it comes with a Compact Flash (CF) card slot and a relatively large memory.
As the RFID reader/writer, the RF5400-542 of Socket Mobile was used. This
reader/writer can read and write the selected RFID tags and can be installed in the
iPAQ 212 using a CF card.
As the system development environment for the PDA, Le Courent, which is
developed by Soar Systems Co., Ltd., was used. This development environment is
very useful because the developed application does not depend on the operating system (OS) or
hardware platform.

System Functions
When the user turns on the system, the first window to appear is the RFID tag
reading function. The RFID reader/writer reads the ID of the tag when the user taps the
Read Tag button on the screen. The system then displays the data stored in the tag, namely
the tag ID, tree number, date of the last diagnosis, and last user name, followed by
the detailed information of the tree corresponding to the tag ID. The user can
overwrite the previous data based on the new diagnosis. If it is the first time for the tree to
be diagnosed, the user inputs the various diagnosis data using the following functions.
1) Fundamental data input form: The user inputs the date, weather, office, user
name, tree surgeon's name, and history.
2) Appearance inspection form: As there are many items to be filled out, eight forms
(windows) are provided, i.e., form, dimensions and life-energizing force; boughs;
crotches; shaft damage; damage of shaft bifurcations; other matters of the shaft; root
damage; and other matters of the root. The user can move among these windows freely by
tapping the tabs on the right-hand side of the window.
3) Appearance determination form: Based on the appearance inspection data, the
system automatically displays the appearance determination result, i.e., normal,
complete examination necessary, pruning or replanting necessary, etc. If, for
some reason, one or more necessary data items in the appearance inspection form are
missing, the system alerts the user.
4) Complete examination form: After a complete examination, such as
resistograph or gamma-ray tree decay detection, the void ratio is entered on
the form. Then the decision is shown on the form.
5) Special instruction form: If a special message was left by a previous user,
an acknowledgment can be input here. The user can also send a message or special
instruction to the next inspector.
6) Photograph file name form: The user takes digital photographs of the tree
and inputs the file name of each photograph.

After the diagnosis is done, the user saves the input data and writes the
specified data to the RFID tag using the diagnosis result saving function.
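As a minimal sketch of how the data flow just described (read the tag ID, look up and display the previous data, collect new inputs, and write the result back to the tag) could be organized in software, the following Python fragment uses illustrative record fields and a toy determination rule; it does not reproduce the RTDSS implementation or the actual 55-item determination table.

    from dataclasses import dataclass, field

    @dataclass
    class DiagnosisRecord:
        tag_id: str
        tree_number: str
        last_diagnosis_date: str = ""
        last_user: str = ""
        inspection: dict = field(default_factory=dict)  # appearance inspection items

    def appearance_determination(inspection: dict) -> str:
        """Toy stand-in for the determination table: request a complete
        examination if any of a few critical items is marked as damaged."""
        critical = ("shaft_damage", "root_damage", "crotch_damage")
        if any(inspection.get(item) == "damaged" for item in critical):
            return "complete examination necessary"
        return "normal"

    def diagnose(tag_id, read_tag, write_tag, collect_inputs, today, user):
        """One diagnosis cycle: read tag, merge new PDA inputs, determine, write back."""
        record = read_tag(tag_id)                    # previous data stored on the RFID tag
        record.inspection.update(collect_inputs())   # forms filled in on the PDA
        result = appearance_determination(record.inspection)
        record.last_diagnosis_date, record.last_user = today, user
        write_tag(tag_id, record)                    # save the latest data back to the tag
        return result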

VERIFICATION OF RTDSS

Experiment
To verify the feasibility and practicality of the developed RTDSS, an
experiment was performed on 7 roadside trees at Makuharicho 4-Chome, National Highway No. 14,
on November 29, 2010, with the permission of the Chiba National Highway
Office. The examiner, an employee of Toho Leo Co., Ltd., is a certified tree
surgeon and routinely executes tree diagnosis. First, the examiner diagnosed three
roadside trees by the conventional method, filling out the diagnosis form and making
decisions, while another examiner kept time. Then the same
examiner used the RTDSS on three different roadside trees of the same kind, and the time
was again recorded. Table 1 shows the total time spent on the three trees for each method. The
diagnosis times are not very different, but there is a significant difference in the time for
filling out the form and making the appearance and health decisions. The reason is that, when using the
RTDSS, the user can input all necessary data into the PDA immediately. Thus, the total
time for the RTDSS is less than half of that for the conventional method.

Interview and Questionnaire Investigation


Six tree surgeons were interviewed and asked to answer prepared questions.
After we showed them how the RTDSS works, the tree surgeons
used the system indoors. Then they answered six questions, selecting one
number out of five choices: 5: think so strongly, 4: think so, 3: cannot say, 2: do not
think so, 1: strongly do not think so. Table 2 shows the results of their evaluations.

Table 1. Result of the Experiment (Unit: minute)

Method         Diagnosis time   Time for filling out the form and making   Total time
                                appearance and health decisions
Conventional   21.9             35.0                                        56.9
RTDSS          24.6              0.0                                        24.6

Table 2. Evaluation of RTDSS

Evaluation item                                     5    4    3    2    1    Average
Reduce time and improve efficiency                  4    2    0    0    0    4.7
Reduce incompleteness and mistakes in the form      5    1    0    0    0    4.8
Improve identification of the right trees           3    3    0    0    0    4.5
Improve storage and reusability of diagnosis data   3    3    0    0    0    4.5
Help the Visual Tree Assessment method prevail      2    4    0    0    0    4.3
Want to actually use the RTDSS                      5    1    0    0    0    4.8
Average                                             3.7  2.3  0    0    0    4.6

DEVELOPMENT OF ONTOLOGY OF ROADSIDE TREE MANAGEMENT

Ontology
Ontology is originally a philosophical term meaning the theory of being. In
computer science, it refers to something that enables the sharing and reuse of knowledge
in a domain by describing it explicitly and logically so that computers can process it
(Kanzaki 2005). Thomas R. Gruber (1993) defined an ontology as an explicit
specification of a conceptualization.
An ontology is composed of concept classes and semantic links. Concept
classes correspond to terms for real-world entities, such as automobile and vehicle, and
semantic links represent the relationships between these concepts. Semantic links include
subClassOf links (general-special links), hasPart links (whole-part links), and
attribute links. Furusaki (2010) classified the objectives of ontology usage as (1)
providing common terminology, (2) utilization for semantic queries, (3) usage as
indices, (4) usage as a schema, (5) usage as media for sharing knowledge, (6) usage
for information analysis, (7) usage for information extraction, (8) usage as a
specification of a knowledge model, and (9) usage for systematization of knowledge.

Objectives of Using Ontology


The first objective of using ontology in this research is (4) usage as a schema.
The second objective is to convert different words having the same meaning into one
word when integrating different databases, namely (3) usage as indices and (5) usage as
media for sharing knowledge. Note that the numbers correspond to the classification in the previous
section. Regarding the second objective, some tree names originated locally, and
different words are used to mean the same kind of tree. This hinders the unification of
different databases for comparison. For example, Pasania, commonly known as the
Japanese Stone Oak, is a typical roadside tree and is generally called Matebashii in
Japanese. Synonyms of Matebashii include Mategashi, Matajii, Satsumajii, Aojii, etc.
In particular, on Kyushu island, Satsumajii is used much more often than the general
Matebashii.

Unifying Different Databases Using Ontology of Roadside Tree Management


In this research, Hozo was used to develop the ontology of roadside trees.
Hozo is a general ontology development environment provided by the
Mizoguchi Laboratory of Osaka University and Enegate Co., Ltd. For database
development, MySQL, PHP, and HTML were used. We developed a prototype
ontology covering several kinds of trees and built two different databases for
two hypothetical regional organizations. One database intentionally uses Matebashii
to mean Pasania, and the other uses Satsumajii. We then
developed a program for unifying the two databases using Java and Java
Database Connectivity (JDBC). The developed program successfully identified the
synonyms and unified them as Matebashii.
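The authors implemented the unification program in Java with JDBC; the following is a minimal Python sketch of the underlying idea, in which a synonym table derived from the ontology maps locally used tree names onto one representative label before records from two databases are merged. The synonym map covers only the example given in the text, and the record fields are illustrative assumptions.

    # Synonym knowledge extracted from the ontology (illustrative subset only).
    SYNONYMS = {
        "Satsumajii": "Matebashii",
        "Mategashi":  "Matebashii",
        "Matajii":    "Matebashii",
        "Aojii":      "Matebashii",
    }

    def normalize(name: str) -> str:
        """Map a locally used tree name onto its representative term."""
        return SYNONYMS.get(name, name)

    def unify(db_a_rows, db_b_rows):
        """Merge records from two regional databases under unified species names."""
        return [dict(row, species=normalize(row["species"]))
                for row in db_a_rows + db_b_rows]

    # Example: one database uses Matebashii, the other Satsumajii.
    merged = unify([{"tree_no": "A-001", "species": "Matebashii"}],
                   [{"tree_no": "B-104", "species": "Satsumajii"}])
    # Both records now carry the species name "Matebashii".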

CONCLUSION

In this research, problems in the management of roadside trees were first
identified. Then, in order to improve efficiency and reduce cost, the Roadside Tree
Diagnosis Support System (RTDSS) was developed using RFID technology and a PDA,
based on the Visual Tree Assessment (VTA) method. A prototype system was
developed and tested on an actual road. The test showed that the time spent by the RTDSS
for diagnosis and decision making was less than half that of the conventional method, and the six
tree surgeons who were interviewed evaluated the RTDSS very highly.
Next, in order to enable multiple different databases to be compared, an
ontology was used to represent the concepts of roadside trees explicitly. A computer
program was developed to properly process synonyms in different databases and
unify them. As an example, the synonyms Matebashii and Satsumajii were tested, and the
unification was performed successfully.
As future work, the RTDSS should be enhanced, and platforms other than a PDA,
such as a tablet PC or smart phone, should be explored.

ACKNOWLEDGMENT

The authors would like to thank the Chiba National Highway Office, Kanto
Regional Maintenance Bureau, Ministry of Land, Infrastructure, Transport and
Tourism and Toho Leo Co. Ltd., for their kind support to this research.

REFERENCES

Thomas R. Gruber (1993). “A translation approach to portable ontology specification,”
Knowledge Acquisition, 5(2), pp.199-220.
Masahide Kanzaki (2005). “Introduction to RDF/OWL for semantic web,” Morikita
Press, in Japanese.
Claus Mattheck (2007). “Updated field guide for visual tree assessment,” Mende,
Germany.
Yutaka Sasaki et al. (2007). “Identification of trees in virtual space by roadside tree
management and diagnosis robots,” Bulletin of Tokyo University of
Agriculture, 52(1), pp.33-38, in Japanese.
Yuichi Sekizawa et al. (2007). “Management method for roadside trees using high
resolution satellite data,” Proceedings of Nihon University Civil Engineering
Seminars, Vol.40, pp.75-78, in Japanese.
Transforming an IFC-based Building Layout Information into a Geometric
Topology Network for Indoor Navigation Assistance

S. Taneja1, B. Akinci1, J.H. Garrett1, L. Soibelman1 and B. East2


1
Department of Civil and Environmental Engineering, Carnegie Mellon University,
Pennsylvania, USA
2
Engineer Research and Development Center, Illinois, USA

ABSTRACT

Automated navigation assistance in indoor environments requires the existence of
spatial models with the ability to represent navigational knowledge of these
environments. In this paper, we have built upon network-based navigation and the vector
representation of road networks in GIS to create a spatial model that can be utilized for
navigation guidance and erroneous positioning data correction in indoor environments.
We name this spatial model the Geometric Topology Network (GTN) and have identified
the requirements and developed a process for its automated creation. We have compared
the strengths and weaknesses of two algorithms, namely the Straight Medial Axis
Transform algorithm and a modified form of the Medial Axis Transform algorithm,
for automated creation of the GTN. The developed process for GTN creation transforms
the building spaces and spatial connections represented in an IFC file into a graph
network. We have also created a proof-of-concept prototype to demonstrate the
automated creation of a GTN.

INTRODUCTION
With the growing complexity of built environments, there has been an increasing
emphasis on navigation assistance for vehicles as well as for pedestrians and robots.
Although the business case for providing navigation guidance to vehicles is well known,
the need for providing navigation guidance to building occupants and first responders
during building emergencies has been recognized only more recently (Zlatanova and Holweg,
2004; Walder et al., 2009). Other use cases for navigation guidance include
assisting elderly people in navigating complex environments, especially hospitals.
Currently, the Global Positioning System (GPS) provides sufficient accuracy in open
environments to enable navigation solutions that give accurate guidance to
vehicles. In congested environments, such as city centers, navigation solutions utilize the
vector representation of road networks in GIS databases to correct the less accurate
GPS data (Scott, 1994; Taylor et al., 2001). Unfortunately, unlike GPS and GIS outdoors, there
is no mature framework that can be used to provide accurate navigation assistance to
pedestrians in indoor environments. Indoor environments present a challenge to
positioning technologies, and hence there is a need for spatial models that can correct
erroneous positioning data (Liao et al., 2003; Spassov, 2007). In this paper, we have built
upon network-based navigation and the vector representation of road networks in GIS to
create a spatial model that can be utilized both for navigation guidance and for erroneous
positioning data correction. We refer to the developed navigation-network model of a
building as a Geometric Topology Network (GTN). This paper compares the strengths and
weaknesses of two algorithms, namely the Straight Medial Axis Transform
algorithm and a modified form of the Medial Axis Transform algorithm, for automated
creation of the GTN from an IFC file.
The next section presents the requirements for creating a GTN to provide
navigation assistance in indoor environments. Section 3 provides the background
research on creating spatial representations for navigation assistance. Section 4 describes
the algorithms to create a GTN that have been selected for comparison. Section 5
contains the details of the process to transform IFC-based building information into a
GTN. Section 6 concludes this paper and presents a discussion on the findings.

REQUIREMENTS FOR GTN DEVELOPMENT


Indoor positioning forms an important aspect of automated indoor navigation
solutions, but indoor environments present a challenge to all types of positioning
technologies and systems (Kaemarungsi, 2005; Pradhan et al., 2009). Scott (1994) and
Taylor et al. (2001) have shown that a GIS representation of the road network can correct
erroneous positioning data from GPS. Similar to a GIS representation of the road
network, a GTN should accurately represent the length, as well as the topology, of
navigation routes in indoor environments in order to correct erroneous positioning data from
positioning systems. Therefore, the first requirement is that a GTN should be a
dimensionally weighted topology network of connected spaces in indoor environments, so
as to accurately represent indoor route lengths. But, unlike road networks, where vehicles
have the freedom to move only along a roadway, navigation routes in indoor
environments give pedestrians the freedom to move anywhere within a
plane. Therefore, the second requirement is that a GTN should be a network-based
representation of indoor navigation routes that results from the decomposition of planar
polygons, such as hallways. Moreover, since indoor environments are mainly
characterized by furnishing elements placed along the periphery of spaces, with
space centerlines or medial axes utilized for navigation, a centerline- or medial-axis-based
GTN is an acceptable representation of indoor navigation routes; this
constitutes the third requirement for the creation of the GTN. Although other
researchers have argued for visibility-based navigation networks for
circulation distance calculation (Lee et al., 2010), the fact that visibility-based navigation
networks cannot be easily utilized for correcting position data from positioning systems
(as will be explained in section 3) renders these networks less useful for navigation
assistance.
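As a minimal sketch of the first requirement (not the paper's implementation), a GTN can be held as a weighted graph whose nodes are spaces or route waypoints and whose edge weights are route lengths; route distances then follow from a standard shortest-path computation. The node names and lengths below are illustrative.

    import heapq

    def add_edge(graph, a, b, length):
        """Undirected, length-weighted connection between two GTN nodes."""
        graph.setdefault(a, []).append((b, length))
        graph.setdefault(b, []).append((a, length))

    def route_length(graph, start, goal):
        """Dijkstra shortest-path length along the GTN (route distance in meters)."""
        dist, heap = {start: 0.0}, [(0.0, start)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == goal:
                return d
            if d > dist.get(node, float("inf")):
                continue
            for nxt, w in graph.get(node, []):
                nd = d + w
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(heap, (nd, nxt))
        return float("inf")

    # Illustrative fragment: room -> hallway waypoints -> stairwell.
    gtn = {}
    add_edge(gtn, "Room 101", "Hall-A", 4.2)
    add_edge(gtn, "Hall-A", "Hall-B", 12.5)
    add_edge(gtn, "Hall-B", "Stair-1", 3.1)
    print(route_length(gtn, "Room 101", "Stair-1"))  # approximately 19.8 m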

RESEARCH BACKGROUND
Researchers in the robotics domain have been utilizing algorithms from
computational geometry to decompose planar layouts of indoor environments into
topology network-based maps for robot navigation. Some of the most commonly used
computational geometry algorithms that have been utilized for mobile robot navigation
include the Medial Axis Transform (Blum, 1967; Lee, 1982) and the Generalized
Voronoi Graphs (Wallgrun, 2005). The medial axis of a polygon is the set of points
internal to a polygon that are equidistant from and closest to at least two points on the
polygon’s boundary. Lee (1982) stated that the Medial Axis Transform of a polygon is
the same as the Voronoi Diagram of that polygon minus the edges that originate from
concave vertices.
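Written formally (a standard set-theoretic formulation added here for reference, not taken from the cited works), the medial axis of a polygon P with interior int(P) and boundary ∂P is

    MA(P) = \{\, x \in \operatorname{int}(P) \;:\; \exists\, p, q \in \partial P,\ p \neq q,\ d(x,p) = d(x,q) = d(x,\partial P) \,\}

where d(x, ∂P) denotes the distance from x to the nearest point of the boundary.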
One of the drawbacks of the Medial Axis Transform and the Voronoi diagram is
the fact that these representations include points, lines and parabolic curves.
Representations containing complex parabolic curves work well for a mobile robot that
carries significant computational power onboard, but they are not
suitable for a pedestrian carrying a mobile device with limited memory and
computational power. Researchers in the domain of computational geometry recognized
these drawbacks and developed algorithms that produce topology networks
containing only linear elements. Aichholzer et al. (1995) developed the straight skeleton
representation, which is used for calculating the polygonal roof over a general layout of
ground walls. This representation consists of only straight elements and no parabolic arcs,
and hence is referred to as the straight skeleton. Yao and Rokne (1991) developed another
simple algorithm for creating the medial axis that produces a topology network with
straight-line elements rather than parabolic arcs. Lee (2004) modified the algorithm
developed by Yao and Rokne (1991) to create a 3D topology network for providing
geospatial analysis capabilities for urban environments, and named this algorithm the
Straight Medial Axis Transform (S-MAT) algorithm.
The Medial Axis Transform, the Voronoi diagram and the Straight Medial Axis
Transform are centerline-based topology networks. Kannala (2005) developed a metric-
based topology network for fire-egress distance measurement, as illustrated in Figure 1b,
but it does not reflect actual navigation routes in indoor environments. Lee et al. (2010)
developed a visibility-based circulation network (Figure 1c) for code-compliance
checking of circulation distances in buildings. Their representation includes only linear
elements, and the algorithm is accurate and efficient. The circulation network is created as
needed, based on the particular query for navigation between any two points in the
building. Unlike the centerline-based network representation, where there is only one
consistent network for the whole building, in the visibility-based network representation
numerous networks are possible, based on the different routes a person can
take in the building. Creating such a network as needed, using a lightweight mobile device,
can prove to be a significant challenge. Hence, this representation is suitable for static
applications, such as code-compliance checking, rather than mobile, dynamic
applications, such as navigation.

Figure 1. a) Centerline-based geometric topology network (Lee 2004); b) metric-
based topology network (Kannala 2005); c) visibility-based circulation network (Lee
et al 2010)

SELECTED ALGORITHMS FOR GTN PROCESSING


We have selected the Straight Medial Axis Transform (S-MAT) algorithm
developed by Yao and Rokne (1991) for computational geometry and modified by Lee
(2004) for creating a 3D topology network using CAD files for geospatial analysis in an
urban environment. We have also modified the Medial Axis Transform algorithm
developed by Blum (1967) so as to remove the parabolic arcs from the topology network
and replace them by straight-line segments. In this section we will describe the two
algorithms that we compare for creating a GTN: the Straight Medial Axis Transform and
the Modified Medial Axis Transform. We will also present the advantages and
shortcomings of these two algorithms.
The S-MAT algorithm, developed by Yao and Rokne (1991) and modified by Lee
(2004) especially for hallways and planar spaces, works on the principle of defining
Voronoi regions for the elements of a planar polygon, i.e., its edges and vertices. A Voronoi region is an
enclosed region formed by an element of the polygon, either an edge or a vertex, and the
corresponding bisectors; each Voronoi region is closest to the element that defines it.
The S-MAT algorithm involves creating angle bisectors of all the convex vertices and
determining which angle bisectors enclose a Voronoi region. This step is described in
detail by Yao and Rokne (1991). In Figure 2, bisectors r12, r23 and edge e2, and bisectors
r45, r51 and edge e5 enclose two separate Voronoi regions. Once all the Voronoi regions
are determined at the first level, i.e. the Voronoi regions formed from bisectors of convex
vertices, then bisectors are drawn from the nodes formed from the intersection of angle
bisectors at the first level. In Figure 2, bisectors r13 and r41 are examples of second-level
bisectors drawn from the nodes formed from the intersection of bisectors at the first level.

Figure 2. An example of straight medial axis transform of a polygonal hallway


Similar to the first level Voronoi region construction, Voronoi regions enclosed
by second level bisectors and corresponding polygon elements are determined. If the
planar polygon has only convex vertices, then determining Voronoi regions for any level
of angle bisectors is easy. On the other hand, if the planar polygon contains concave
vertices, then two special properties of the bisectors have to be kept in mind. The first
property is that the bisector of two edges, where one edge ends in a convex vertex
and the other ends in a concave vertex, terminates at the mid-point of the line
segment joining those convex and concave vertices (Lee 2004). This
property is illustrated in Figure 3a. The second property is that if two edges intersect in a
common concave vertex, then the two distinct bisectors of these edges intersect to form a
valid node in the S-MAT diagram of the planar polygon. This property is illustrated in
Figure 3b. Lee (2004) has provided a detailed procedure to create the S-MAT diagram of
a planar polygon.
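As a minimal illustration of the first step of this construction, the sketch below shows how the interior angle bisector at a polygon vertex can be computed and how a convex vertex can be detected for a counter-clockwise polygon. It is only a sketch under our own assumptions (the small 2D vector class and the method names are ours), not Lee's implementation.

    // Minimal 2D helper (an illustrative assumption, not from the paper).
    class Vec2 {
        final double x, y;
        Vec2(double x, double y) { this.x = x; this.y = y; }
        Vec2 minus(Vec2 o) { return new Vec2(x - o.x, y - o.y); }
        Vec2 plus(Vec2 o)  { return new Vec2(x + o.x, y + o.y); }
        double cross(Vec2 o) { return x * o.y - y * o.x; }
        Vec2 normalized() { double l = Math.hypot(x, y); return new Vec2(x / l, y / l); }
    }

    class SmatFirstLevel {
        /** True if the turn prev -> v -> next is a left turn, i.e. v is a convex
         *  vertex of a counter-clockwise polygon. */
        static boolean isConvex(Vec2 prev, Vec2 v, Vec2 next) {
            return v.minus(prev).cross(next.minus(v)) > 0;
        }

        /** Direction of the interior angle bisector at vertex v; for a convex vertex
         *  of a counter-clockwise polygon it points into the polygon. Degenerate,
         *  nearly straight angles are not handled in this sketch. */
        static Vec2 angleBisector(Vec2 prev, Vec2 v, Vec2 next) {
            Vec2 toPrev = prev.minus(v).normalized();
            Vec2 toNext = next.minus(v).normalized();
            return toPrev.plus(toNext).normalized();
        }
    }

The rays produced this way are the first-level bisectors; determining which of them enclose a Voronoi region with a polygon edge follows the procedure of Yao and Rokne (1991).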

Figure 3. a) Illustration of medial axis defined by property one; b) Illustration of
medial axis defined by property two
We decided to implement the aforementioned S-MAT algorithm, but discovered
certain drawbacks and limitations of this algorithm. Since the algorithm involves
constructing bisectors of only convex vertices, whenever there is an intersection at a
concave vertex, a new bisector does not emerge from that intersection point. Unless there
is another bisector approaching from the opposite direction, such as bisector r67 in
Figure 2, the algorithm gets stuck at that point. Figure 4 illustrates the fact that the S-
MAT algorithm gets stuck after determining nodes n1 and n2 at concave vertices.
Similarly, in Figure 3b the algorithm gets stuck at the intersection at the common
concave vertex. Figure 2 also illustrates a scenario where the S-MAT algorithm does not
get stuck at an intersection at a concave vertex: bisectors r13 and r41 intersect at a
concave vertex, but since another bisector, r67, approaches from the opposite
direction, the S-MAT of the polygon is completed. Keeping in mind this limitation, we
decided to use a different algorithm that is a modification of the algorithm developed by
Blum (1967) for generating the medial axis.

Figure 4. Limitation of S-MAT algorithm. The algorithm gets stuck after reaching
nodes n1 and n2
Our algorithm, the modified medial axis transform (MAT), involves constructing
bisectors of all the elements of a planar polygon including the concave vertices. Figure 5
illustrates the various bisectors possible in a simple planar polygon. Figure 5a involves
creating an angle bisector of two edges. Figure 5b illustrates the parabolic bisector of a
concave vertex and an edge. Since a parabola is the locus of all points equidistant from
a point and a line, the bisector of a concave vertex and an edge will always be a
parabola. Figure 5c depicts the case where the bisector of two concave vertex elements of
a simple planar polygon is the perpendicular bisector of these two vertices. We modified
the algorithm for medial axis creation developed by Blum (1967) by removing the
parabolic bisector depicted in Figure 5b and replacing it with the two perpendicular
bisectors of a concave vertex, as shown in Figure 5d. The unique properties of these two
perpendicular bisectors ensure that the region enclosed between these two bisectors and
the concave vertex is a Voronoi region with respect to the concave vertex. This property
ensures that the nodes resulting from the intersection of these perpendicular bisectors
with other bisectors of a planar polygon lie on the medial axis of the polygon.
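The sketch below illustrates, under the same assumptions as the earlier snippet (counter-clockwise polygon, names of our own choosing), how the two perpendicular bisectors bounding the Voronoi region of a concave vertex can be obtained as rays perpendicular to the two incident edges. It is a sketch of the construction described above, not the authors' code.

    class ConcaveVertexRays {
        /**
         * Directions of the two rays bounding the Voronoi region of a concave
         * (reflex) vertex v of a counter-clockwise polygon. Each ray starts at v
         * and is perpendicular to one incident edge; the left-hand normal of each
         * edge points towards the polygon interior. Intersecting these rays with
         * the other bisectors yields medial-axis nodes such as n1, n2 and n3 in
         * Figure 6b. Inputs and outputs are (x, y) pairs.
         */
        static double[][] rays(double[] prev, double[] v, double[] next) {
            double[] ePrev = unit(v[0] - prev[0], v[1] - prev[1]);   // edge arriving at v
            double[] eNext = unit(next[0] - v[0], next[1] - v[1]);   // edge leaving v
            double[] fromPrevEdge = { -ePrev[1], ePrev[0] };          // left-hand normal
            double[] fromNextEdge = { -eNext[1], eNext[0] };          // left-hand normal
            return new double[][] { fromPrevEdge, fromNextEdge };
        }

        private static double[] unit(double x, double y) {
            double l = Math.hypot(x, y);
            return new double[] { x / l, y / l };
        }
    }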

Figure 5. a) Bisector of two edges, b) Bisector of an edge and a concave vertex, c)
Bisector of two concave vertices, d) Bisectors of a concave vertex.
The modified MAT algorithm involves constructing the two perpendicular
bisectors of concave vertices to determine the nodes of the medial axis. Two
perpendicular bisectors of a concave vertex are shown in Figure 6b as dotted red lines.
The nodes, n1, n2 and n3, of the medial axis are determined by intersecting these
perpendicular bisectors with other angle bisectors. The original MAT algorithm involves
constructing the parabolic bisector, p1 or p2, of a concave vertex and determining the
nodes, n1, n2 and n3, of the medial axis by intersecting the parabolic bisector with other
bisectors as shown in Figure 6a. A major difference between the MAT and modified
MAT algorithms is the fact that in the MAT algorithm the parabolic bisector of a concave
vertex is a part of the medial axis of the polygon, whereas in the modified MAT
algorithm the two perpendicular bisectors of a concave vertex only assist in determining
the nodes of a medial axis. This difference is clear in Figure 6. To complete the medial
axis in the modified MAT algorithm, we draw line segments, l1 and l2, between those
nodes of medial axis, as shown in Figure 6b, that would originally contain a parabolic
bisector, as shown in Figure 6a.

Figure 6. a) Voronoi diagram of the polygon, b) Modified medial axis of the polygon.
The modified MAT algorithm has the advantage of being generally applicable to
any shape or layout of the indoor environment, whereas the S-MAT algorithm breaks if
there is an I-shaped hallway. On the other hand, the resulting medial axis from the
modified MAT algorithm presents certain challenges to navigation assistance. For
instance, in Figure 6b, if a user has to walk straight through the hallway crossing nodes n1
and n2, then the node n3 does not lie on the user’s path, although the medial axis
generated by the modified MAT algorithm will route the user through node n3, as there is
no direct connection between nodes n1 and n2. The straight medial axis resulting from the
S-MAT algorithm, shown in Figure 2, does not suffer from this limitation. Second, the
modified medial axis also suffers from the fact that a straight line replaces a parabolic
bisector at a concave vertex. As the angle of the concave vertex approaches 360°, the line
segment that has replaced the parabolic bisector gets nearer and nearer to the concave
vertex, and hence represents neither the path of a user nor the centerline of the hallway.
We have implemented the two selected algorithms in a proof-of-concept prototype. The
next section describes the process that has been used in the proof-of-concept prototype
for transforming an IFC-based building information file into a GTN.

TRANSFORMING IFC-BASED INFORMATION INTO A GTN


Industry Foundation Classes (IFCs) are specifications that define generic objects
that are used in the AEC domain for seamless information exchange (IFC, 2010). IFC-
based object-oriented representation can be utilized to create the GTN of a building. We
have used the IFC to Java parser created by OpenIFCTools (OpenIFCTools, 2011) for
parsing the IFC objects represented in the EXPRESS schema into Java objects. Once the
IFC objects are available in Java, all the instances of IFCRelContainedInSpatialStructure
and IFCRelAggregates classes can be reasoned with to determine the number of levels in
the building and the spaces contained in each level. This step helps us to create the GTN
for each level of the building. To identify the topology in each level, we have to read all
the instances of IFCRelSpaceBoundary. This class has two methods, namely
getRelatedBuildingElement() and getRelatingSpace(). These two methods are useful for
determining which IFCSpaces are related to which building elements. The IFCSpaces
that are linked to common IFCDoor building elements are then linked to each other
through special topological representations, such as an adjacency matrix or a linked list.
As stated in the requirements section, a GTN needs to represent the dimensionally
weighted topology so as to accurately represent indoor route lengths. Hence, we also
need to extract the geometry of the spaces represented in an IFC file. The
IFCShapeRepresentation class along with the IFCAxis2Placement3D and
IFCLocalPlacement classes provide information about the space points of a particular
IFCSpace. These space points are then used by the S-MAT and the modified-MAT
algorithms for calculating the medial axis of the concerned IFCSpace.
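As an illustration of the topology step described above, the sketch below links IFCSpaces that share a common IFCDoor into an adjacency structure. Only the two accessor names are taken from the text; the stub interfaces and everything else are simplifying assumptions for illustration and do not reproduce the OpenIFCTools API.

    import java.util.*;

    // Stub types standing in for the parsed IFC objects (assumptions, not the real classes).
    interface IfcSpace {}
    interface IfcDoor {}
    interface IfcRelSpaceBoundary {
        Object getRelatedBuildingElement();
        IfcSpace getRelatingSpace();
    }

    class SpaceTopologyBuilder {
        /** Builds a door-based adjacency map: two spaces become neighbours in the
         *  GTN when they are both bounded by the same door. */
        static Map<IfcSpace, Set<IfcSpace>> buildAdjacency(List<IfcRelSpaceBoundary> boundaries) {
            // 1. Group the spaces by the door that bounds them.
            Map<IfcDoor, Set<IfcSpace>> spacesByDoor = new HashMap<>();
            for (IfcRelSpaceBoundary b : boundaries) {
                Object element = b.getRelatedBuildingElement();
                if (element instanceof IfcDoor) {
                    spacesByDoor.computeIfAbsent((IfcDoor) element, d -> new HashSet<>())
                                .add(b.getRelatingSpace());
                }
            }
            // 2. Spaces sharing a door become neighbours in the topology network.
            Map<IfcSpace, Set<IfcSpace>> adjacency = new HashMap<>();
            for (Set<IfcSpace> connected : spacesByDoor.values()) {
                for (IfcSpace a : connected) {
                    for (IfcSpace c : connected) {
                        if (a != c) {
                            adjacency.computeIfAbsent(a, s -> new HashSet<>()).add(c);
                        }
                    }
                }
            }
            return adjacency;
        }
    }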

CONCLUSIONS AND DISCUSSIONS


In this paper, we presented the requirements for creating a Geometric Topology
Network (GTN). A GTN should be representative of navigation routes in indoor spaces
that resemble simple polygons, and due to the common layout of indoor spaces, a
centerline-based navigation network is an acceptable solution. We presented two
algorithms that could be used for creating a GTN from the geometry of spaces that
resemble simple polygons, and we also outlined the process for converting information
contained in an IFC file into a GTN. We have compared the strengths and weaknesses of
the two selected algorithms (Table 1). Both the S-MAT and the modified MAT algorithms
produce medial axis diagrams that have only straight elements. The modified MAT
algorithm works in O(N²) time, as it is required to construct all the bisectors, whereas the
S-MAT algorithm works in O(N log N) time (Lee 2004). The S-MAT algorithm suffers
from the fact that it can get stuck at concave vertices of a simple polygon, and the
modified MAT algorithm does not accurately represent the centerline of the space.
Table 1. Strengths and weaknesses of the selected algorithms for creation of GTN
Algorithm | Strengths | Weaknesses
S-MAT algorithm | Works in O(N log N) time | Gets stuck at concave vertices, especially in I-shaped hallways
Modified MAT algorithm | Works for all polygonal shapes for which Voronoi diagrams and the Medial Axis Transform algorithm work | At highly concave vertices (angle > 270°), the medial axis starts getting distorted

REFERENCES

Aichholzer, O., Aurenhammer, F., Alberts, D. and Gartner, B. 1995, A novel type of
skeleton for polygons. Journal of Universal Computer Science, vol (1) pp. 752-761.
Blum, H. 1967, A Transformation for Extracting New Description of Shape. Symp.
Models for Perception of Speech and Visual Forms, Cambridge, MA: MIT Press, pp.
362-380.
IFC 2010, http://www.iai-tech.org/products/ifc-overview. Last accessed 3rd October,
2010.
Kaemarungsi, K. 2005, Design Of Indoor Positioning Systems Based On Location
Fingerprinting Technique. Doctoral Thesis, University Of Pittsburgh.
Kannala M, 2005, Escape route analysis based on building information models: design
and implementation, MSc thesis, Department of Computer Science and Engineering,
Helsinki University of Technology, Helsinki.
Lee, D.T. 1982, Medial axis transformation of a planar shape. IEEE Trans. Pattern
Analysis and Machine Intelligence, vol (4), pp. 363-369.
Lee, J. 2004, A spatial access-oriented implementation of a 3-d gis topological data
model for urban entities. GeoInformatica, 8 (3), pp. 237-264.
Lee, J.-K., Eastman, C.M., Lee, J., Kannala, M. and Jeong, Y.-S. 2010, Computing
walking distances within buildings using the universal circulation network.
Environment and Planning B: Planning and Design, 37 (4), pp. 628-645.
Liao, L, Fox, D., Hightower, J., Kautz, H. and Schulz, D. 2003, Voronoi tracking:
Location estimation using sparse and noisy sensor data. In Proc. of the IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS).
OpenIFCTools 2011, Open IFC Java Toolbox,
http://www.openifctools.org/Open_IFC_Tools/Demo.html.
Pradhan, A., Akinci, B., and Garrett Jr., J. H. 2009, Development and testing of inertial
measurement system for indoor localization. Proceedings of the 2009 ASCE
International Workshop on Computing in Civil Engineering, pp. 115-124.
Scott, C. 1994, Improving GPS positioning for motor vehicle through map matching. In
Proceedings of ION GPS-94. The Seventh International Technical Meeting of the
Satellite Division of the Institute of Navigation, Salt Lake City, Utah, pp. 1391-1410.
Spassov, I. 2007, Algorithms for Map-Aided Autonomous Indoor Pedestrian Positioning
and Navigation. Ph. D. thesis, Ecole polytechnique fédérale de Lausanne (EPFL).
Taylor, G., Blewitt, G., Steup, D., Corbett, S. and Car, A. 2001, Road Reduction Filtering
for GPS-GIS Navigation. Transactions in GIS, 5(3), pp. 193-207.
Walder, U., Bernoulli, T. and Wießflecker, T. 2009, An Indoor Positioning System for
Improved Action Force Command and Disaster Management. Proceedings of the 6th
International ISCRAM Conference, pp. 251 – 262.
Wallgrun, J. O. 2005, Autonomous Construction of Hierarchical Voronoi-Based Route
Graph Representations. In Volume 3343 of Lecture Notes in Computer Science,
Berlin, Heidelberg: Springer Berlin / Heidelberg, Chapter 23, pp. 413-433.
Yao, C. and Rokne, J., 1991, A Straightforward Algorithm for Computing the Medial
Axis of a Simple Polygon. Intern. J. Computer Mathematics, 39, pp. 51-60.
Zlatanova, S. and Holweg, D. 2004, 3D Geo-information in emergency response: a
framework. Proceedings of the Fourth International Symposium on Mobile Mapping
Technology (MMT'2004), Kunming, China, pp. 29.
Business Models for decentralised Facility Management supported by
Radio Frequency Identification Technology

Z. Cong 1, A.M. ASCE, L. Allan 1, K. Menzel 1, A.M. ASCE


1
Informatics Research Unit for Sustainable Engineering (IRUSE), Department of Civil and
Environmental Engineering, University College Cork, College Road, Cork, Ireland; PH (353)
214205409; FAX (353) 214205451; email: z.cong /l.allan /k.menzel@ucc.ie

ABSTRACT

Holistic facility maintenance service provision has the potential to provide


new business opportunities for Facility Management (FM) companies, building
managers, energy providers, maintenance providers and many other stakeholders
currently working in the area of building design, construction, building operation,
energy management, maintenance, etc., since customers wish to get access to all
services related to building services maintenance through a “one-stop-shop”.
This paper describes how the concept of RFID could be applied in the area of
facility maintenance service provision. A proposal is presented which describes the
context, the relevant stakeholders, required novel IT services, new business models
and the concept of Decentralised Information Management (DIM) to allow easy and
standardized exchange of building maintenance data amongst potential partners.

INTRODUCTION

Radio Frequency Identification Technology (RFID) has great potential to be


used in the AEC/FM industry for information exchange and delivery. Traditionally,
RFID has been used for tracking and tracing of components and tools at the “item level”,
but has not been used in facility management and facility maintenance. Following a
comprehensive literature review we discovered the facility management industry has
been slow to adopt RFID technology. Therefore, we propose to use RFID technology
to support Decentralised Information Management to enable construction companies
to offer value added services in the FM sector (Cong, Z. Yin, H. et al. 2010).
The paradigm of Decentralised Information Management (DIM) proposed in
this paper, allows distribution of technical specifications and progress monitoring of
maintenance activities, for example, ambient interaction of maintenance personnel
with building components. Integration of RFID and DIM provides new opportunities
for facility management and facility maintenance. The concept of DIM is to store
specific item-level information on an RFID device for its timely and updated
maintenance. This leads to better information management of building inventories
and elements. It also provides useful information about inventory location within the
building (Cong, Z. Manzoor, F. Yin, H. and Menzel, K. 2009).
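As a simple illustration of the kind of item-level record DIM implies, the sketch below shows fields a component's RFID tag might carry; the class and field names are our own assumptions based on the data items mentioned in this paper, not a defined schema.

    import java.time.LocalDate;

    /** Illustrative item-level record for a building component's RFID tag. */
    class ComponentTagRecord {
        String componentId;        // inventory identifier
        String manufacturer;
        String specification;
        LocalDate installationDate;
        Double lastSensorReading;  // optional sensor reading; null if not present
        String responsiblePerson;

        /** Compact, delimited payload so the record fits the limited memory of a tag. */
        String toTagPayload() {
            return String.join(";",
                componentId, manufacturer, specification,
                String.valueOf(installationDate),
                lastSensorReading == null ? "" : lastSensorReading.toString(),
                responsiblePerson);
        }
    }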
The proposed business model specifies how business strategies can be used by
facility management providers using RFID technology. The purpose of a business


model is to ensure that all the factors needed to create a successful business plan are
analyzed and proposed. Business models can describe the facility management and
maintenance services offered, the business infrastructure (internal & external
resources) required for producing these services, the stakeholders (building owners,
facility managers and maintenance crews) who will use these services, and the
financial cost savings and profits facility management providers can achieve. The
objective of this paper is to take Alex Osterwalder’s proposed business model and
adapt it to the opportunities of using RFID in FM.

BUSINESS MODEL TEMPLATE

A business model is a description of how a business makes (or intends to make)
money. It is the centerpiece of a business plan, and constructing a business model is the
first step in planning to start a business. The purpose of a business model is to ensure
that all the factors needed to operate a successful business are considered and
analyzed to make sure they are reasonable and achievable. Business models describe
the products and services offered for sale, the business infrastructure required to
produce and sell these products and services, the target customers that the business
expects will buy these products and services, and the financial results and profit the
business expects to achieve (Mcintire 2008).
Over the past few years business model research has developed from defining
business models, via exploring business model components and classifying business
models into categories, to developing descriptive models (for an overview, see
Pateli and Giaglis 2004). The business model concept plays a valuable role
when simulating, analyzing, and understanding current or new business concepts and
exploiting these concepts. In addition, it supports managers in communicating their ideas
and visions about their company and its management to the parties concerned (Kijl
et al. 2010). Based on a literature review of several sources, we selected Alex
Osterwalder’s business model as our business model framework.
Alex Osterwalder describes business models according to nine characteristics,
which can apply to any type of business model. These characteristics are defined as
building blocks of the business model and are summarized in Figure 1.

Figure 1. Business Model Template (Source: Alex Osterwalder, 2005)



By using these well-defined building blocks, it is easy to develop
business models that cover the most important aspects of a product or
service, the customer relationships, the infrastructure and the financial aspects. This
method was used to describe the business models in the business models section below.

STAKEHOLDERS FOR FACILITY MAINTENANCE SERVICE PROVISION

Before we describe the complete business model, it is important to define


relevant stakeholders, their relationships and the distribution channels. This paper is
concerned with efficient management of building facilities; the most important actors
are the users of the building and their relationship with an internal or external facility
management team.

End Users – Occupants and Building Owners

The building occupants and building owners are the people who will benefit
from the outcome. They are one of the most valuable sources for evaluating user comfort
based on the performance of building systems and components. Measuring and
documenting the user satisfaction with environmental factors such as air quality,
thermal comfort or lighting means this data could be used to determine and evaluate
service level agreements with facility management and maintenance services
providers (Allan, L. and Menzel, K. 2009).

Facility Management Service Providers – Internal & External Providers

The provision of facility management services includes facility management


and other forms of technical and infrastructural building management. Currently,
facility management systems are insufficiently integrated with Building
Management Systems (BMS). Therefore, the integrated planning of inspection and
maintenance activities is not easily achievable. The availability of standardized data
exchange formats and mechanisms would be of great help to improve the
interoperability of the available systems, such as Building Information Modelling
Tools (BIM), BMS and Computer Aided Facility Management (CAFM) system.
Furthermore, the information would also allow the facility management
providers to make predictions depending on the course of action for retrofit and
renovation determined by the building owners and occupants as a result of that audit.
Consultancy services to improve the effectiveness of buildings could be offered to
users and owners of the buildings.

Interviews from Facility Management Experts

David Moore (Director BAM FM Contractors Ltd.) was approached from the
view point of using the mobile RFID system to provide an early warning system for a
FM contractor that the building environmental comfort parameters have been
contravened so that the FM manager could mobilize his maintenance crew to rectify
the issue before the end user lodged a complaint through the FM helpdesk.

Eamonn Connaughtton’s (Facility Manager at Western Gateway Building at


University College Cork) primary concern with improving current FM systems is
that the maintenance inspections on life-support systems, which are required to be
routinely carried out and recorded to comply with legislation, are done in a
foolproof way. Eamonn also advised that he would see a benefit in having
personnel easily locate themselves within the building in order to be productive. A means
of tracking or recording crew movements would also be beneficial, making it possible to
define which areas of the building take longer to clean, etc. (McDonagh 2010).

FACILITY MANAGEMENT SERVICE CONTRACTS

In the Irish market there are basically four types of FM categories. The choice
made by organisations is dependent on the size and circumstances under which they
operate. Managers need a clear understanding of the benefits or otherwise in
determining which choice to make. The decision also has to be made as to whether
outsourcing will occur on the operational side or the management side. For example
on the management side it may be decided to outsource project management activities
to those experienced and qualified to carry out such tasks. This in itself could be
broken down into total outsourced project management or a mixture of both outside
consultants and in-house employees. Operational outsourcing would deal more with
the physical activities associated with the running of an organisation.
The first category involving FM is the decision to keep both operational and
management within the company. This would involve dedicating a facilities team
consisting of employees to deal with FM.
The second category is outsourcing some of non-core activities where the
expertise is not available within the organisation or where it is more cost beneficial to
do so. This would involve a service contract being set up with an outside contractor.
The number of service contracts set up is dependent on the facilities manager who
retains the responsibility for monitoring and control of these services.
The third category involves a relatively new type of FM contract. This is
called partnerships and can be described as a strategic agreement whereby the client
organisation and the service provider share the responsibility for the delivery and
performance of the service. Both parties share the benefits of more efficient methods
and associated cost savings.
The fourth category, and the one provided by most of the companies, is
Total Facilities Management (TFM). In this situation the organisation contracts
with a company that will provide both operational and managerial FM in its entirety.
There would need to be complete trust by the organisation that the TFM company
will provide the required service and quality that is being offered. This form allows
the TFM company to be totally responsible for delivering, managing, controlling and
achieving the objectives of the organisation. This grouping of service contracts is
known as bundling and it has been suggested that TFM can never truly exist. This is
because a TFM provider would have to have the capability to cover every aspect of
FM from auditing to providing cleaners. Due to the many aspects involved in FM it is
unlikely that any firm would have that capability. It is unlikely that any organisation
would surrender all in-house support totally.

There are advantages and disadvantages to be considered and the problem


with FM is that, since it is relatively new, there is little data to examine in terms of
performance, or to support the decision as to whether a firm should renew contracts or
resort to in-house provision. Also, many service providers can make claims that cannot be
disproved. To this end it could be seen that the advantages would make a very good sales
ploy for service providers (Atkin and Brooks 2005).

MANAGEMENT/MAINTENANCE SERVICE PROFILING

The aim is to deliver a high-quality repairs service to every building by responding
quickly and efficiently, using relevant information about the building in order to carry
out the repair and ensuring that this information is passed on to any contractor the
organisation uses.
The aim is also to monitor the repairs and maintenance service, ensuring that a high-quality
repairs service is provided and that there is no difference in the quality of the repair or
maintenance that the building receives because of the inventory name, manufacturer,
specification, installation date, optional sensor reading or responsible people. The
monitoring should also identify where necessary action needs to be taken if high
quality is not achieved.
The list below introduces the maintenance service profiles used to
specify service demand, service supply, the required set of performance criteria and, if
required, a certain level of user comfort.
• Supply Profile: This profile is used by the maintenance service provider
to inform the customer about the available services. The “supply profile”
can be compared with those of other providers. It might lead to a business activity
– the contracting of some form of maintenance service.
• Demand Profile: This profile is used by the end user to inform the maintenance
service provider how much service they are looking for. The “demand
profile” requests a certain service level. The request will trigger business
activities such as a one-off service or more holistic services.
• Performance Profile: This profile is used by the building designer and
MEP engineer to inform the facility manager and building operator about the
performance of the building.
• Maintenance Profile: This profile is used by the facility manager to
inform the building owner and maintenance staff about required
maintenance activities.
• Retrofit Profile: This profile is used by the facility manager to inform the
maintenance service provider about replacement of building
components/HVAC systems or proposed renovation activities.

To better define these profiles, a generic relationship needs to be defined.


These profiles describe building conditions; they should include building types, building
areas and maintenance service suppliers.

Figure 2 illustrates the maintenance and retrofit of building systems, the


responsibilities and relationships of each stakeholder, building performance analysis,
maintenance simulation, system components, etc.

Figure 2. Maintenance and retrofit of building systems

BUSINESS CASE

These business cases differ from business models in that they provide general
descriptions of certain areas of interest without being very specific (Browne and
Menzel 2010). Therefore, the following business cases were created:
Business Case 1: IT-supported design, installation and operation of RFID
networks to enable efficient and effective installation of RFID tags, decentralised
information and a central data unit.
Business Case 2: Development, implementation and operation of a
decentralised information management system which provides decentralised
information such as manufacture, specification, timestamp, sensor reading (optional)
etc. to facility managers and maintenance crews.
Business Case 3: Development, implementation and operation of a
graphical mobile user interface which provides maintenance crews with access to
building maintenance data to carry out maintenance activities onsite.

Business Case 4: Development, implementation and operation of a


building performance analysis tool for the archiving and storage of maintenance
data. This can be realized using Data Warehousing Technology (DWT).
Business Case 5: Development, implementation and operation of a
Retrofit Decision Support Tool which provides facility managers and building
owners with instruments allowing the evaluation of retrofit scenarios and the
calculation of an estimated “return on investment”.

BUSINESS MODELS

After the development of business cases, a number of detailed business


models were created from these in the format described in the above section. For this
research, the following business models were created and defined. The extended
descriptions are given using the business model template presented above.

Business Model 1: Prediction of system failure for better user comfort.


This suggests that there are opportunities for a maintenance company or system
specialist to predict when a system will fail based on historical data, so that
failures can be avoided in advance.
The core business activity requires the facility manager to develop a maintenance
schedule under which the system is checked frequently.
Business Model 2: Decentralised information to support maintenance
activities.
This business model suggests maintenance activities can be carried out using
decentralised information which is stored on the RFID tag. Whereas business model 1
was concerned with prediction of system failure, this business model is more
concerned with maintenance activities, for example how to solve the identified problems.
The core business activity requires the facility manager to classify what data
should be stored on the RFID tag.
Business Model 3: Establish external facility maintenance company as
outsourced contractors
This business model relates to establishing an external facility maintenance company as
an outsourced contractor. The work can be based on contracts from an internal maintenance
team, crews can come and go based on calls and text messages, and the external facility
maintenance company can regularly check the systems, which can reduce the number
of internal maintenance staff and save money.
The core business activity allows an internal maintenance team to simply use the
services without needing to worry about the management of the outsourced
maintenance crews.
Business Model 4: Continuous commissioning and retrofit activities.
Retrofit and commissioning of existing buildings is going to become far more
relevant in the near future than it currently is (Holness 2009). There exists
the potential for specialist retrofit/commissioning consultants, not just for building and
equipment alteration as currently practised, but also for advising on how to use modern
Information and Communications Technology (ICT) systems to exploit the efficiency
of existing buildings. This is likely to be related to whole life-cycle building
simulation.

CONCLUSION

This paper describes the potential business opportunities and clearly defines
them as business models in a building management/maintenance context. It becomes
possible to easily specify the relationship between stakeholders and maintenance
profiles. As future work, we plan to evaluate the models to improve their usefulness.
This study is funded by the Higher Education Authority Ireland, under PRTLI
– Cycle 4. It is embedded in the research and development activities of the smart building
cluster at University College Cork.

REFERENCES

Pateli, A. G. and Giaglis, G. M. (2004). “A research framework for analysis business


models”, European Journal of Information System, Vol. 13, No. 4, 2004
Allan, L. and Menzel, K. (2009). “Virtual Enterprises for Integrated Energy Service
Provision.” in: Proceedings of 10th IFIP WG 5.5 Working Conference on
Virtual Enterprises, PRO-VE 2009, Pro VE 2009 07-09 Oct. 2009"
Atkin, Brain, Brooks, Adrian. (2005). “Total Facilities Management” Blackwell
Publishing
Bjorn Kijl, Harry Bouwman, Timber Haaker, Edward Faber (2010). “Developing a
dynamic business model framework for emerging mobile services.” JEL-
Codes: L21, L63, L86, M13, O32, O33
Browne, D. and Menzel, K. (2010). “ICT enabled Business Models for Innovation
Energy Management.” in: Proceedings of ECPPM, pg 345, 14-17 Sept 2010
Cong, Z. Manzoor, F. Yin, H. and Menzel, K (2009). “Decentralised Information
Management in Facility Management using RFID Technology.” in Proc. Int.
Conf. Smart Materials and Nanotechnology in Engineering, Volume 7493,
2009 pp. 749310-749310-7.
Cong, Z. Yin, H. Manzoor, F. Cahill, B. Menzel, K. (2010). “Integration of RFID
with BIM for Decentralised Information Management”, In Proceedings of the
International Conference on Computing in Civil and Building Engineering, W.
TIZANI (Editor), 30 June-2 July, Nottingham, UK, University of Nottingham
Press, Paper 212, p. 423, ISBN 978_1_907284_60_1
Holness, Gordon. (2009). “Sustaining Our Future by Rebuilding Our Past”. ASHRAE
Journal. Available at www.ashhrae.org
Osterwalder, A. Pigneur,Y. et al. (2005). “Clarifying Business Models: Origins,
Present and Future of the Concept.”
McDonagh, H (2010). “Deployment of a Mobile RFID System to Support
Continuous Commissioning and Facility Management.” M.Eng.Sc Thesis,
Dept. of Civil & Environmental Engineering, University College Cork, Cork.
66-68.
Mcintire, J.T (2008). “A Business Model Template.”
Requirements for Autonomous Crane Safety Monitoring

Xiaowei Luo1, Fernanda Leite1, William J. O’Brien1


1
Department of Civil, Architectural and Environmental Engineering, the University
of Texas at Austin. 1 University Station C1752, Austin, TX 78712; email:
wjob@mail.utexas.edu

ABSTRACT
Crane-related accidents, caused by multiple factors such as a worker entering a
dangerous area, are one of the major accident types in the construction industry. A
vision for addressing this problem is through an intelligent jobsite, fully wired and
sensed. Recent advancement in pervasive and ubiquitous computing makes
autonomous crane safety monitoring possible. An initial step towards implementing
autonomous crane safety monitoring is to identify the safety and information
requirements needed. This paper presents a literature review and results from a set of
expert interviews, used to extract requirements for autonomous crane safety
monitoring. The extracted requirements for dynamic safety zones and associated
information requirements as a precursor to deployment are also presented in the
paper.

INTRODUCTION
The construction industry suffers economic losses due to jobsite
accidents every year. Among these accidents, cranes are one of the major issues.
According to statistics from the Occupational Safety and Health Administration
(OSHA), there were 323 fatalities in 307 crane incidents between 1992 and 2006 (22
fatalities per year on average). Among these 323 fatalities, 102 were caused by
overhead power line electrocutions (32%), 68 were associated with crane collapses
(21%), and 59 involved a construction worker being struck by a crane
boom/jib (18%) (McCann 2009). A crane’s components or a worker entering a
dangerous area by accident or on purpose is a critical node on the fault chains of over
50% of fatalities. Reducing unnecessary access to dangerous areas by giving clear
warning to involved workers can eliminate the critical node on the fault chain and
hence improve the safety performance on construction jobsites. Our research
envisions an autonomous crane safety monitoring system, which utilizes data and
information collected from various sources (e.g., global positioning system,
anemometer, load cell sensor, rotary sensor and building information models). This
research is divided into three phases: 1) safety knowledge elicitation to extract the
safety and information requirements as a foundation for development of crane safety


monitoring system; 2) development and implementation of a crane safety monitoring
system in a perfect and distributed mobile computing environment; and 3) adjustment
of the system to imperfect situations (e.g., missing data, erroneous data), its impact on
decision making and methods to deal with imperfection. This paper reports the
preliminary results of the safety knowledge elicitation, part of this research’s first
phase, and proposes a crane safety monitoring system framework using extracted
safety knowledge and information requirements.

OSHA, the American National Standards Institute (ANSI) and other


organizations (e.g. American Society of Mechanical Engineers, UK Health & Safety
Executive, European Agency for Safety and Health at Work) have set up regulations
and standards for crane operation and maintenance to ensure crane safety on
construction jobsites. The advancement in information technologies, such as
emergence of new sensing devices and increasing adoption of building information
modeling, has provided opportunities to move from manual crane safety monitoring
to fully autonomous crane safety monitoring. The National Institute of Standards and
Technology (NIST) conducted a study on the identification of information
requirements for the implementation of an intelligent jobsite(Saidi 2009). Several
studies proposed using the different emerging information technologies (e.g.,
wireless network, GPS and UWB, various sensors) to implement intelligent crane
safety monitoring. However, these studies focus more on the application and
demonstrated the feasibility for crane safety monitoring. A more thorough study on
how to represent the safety knowledge in digital worlds and affiliated information
requirements can promote the advancement of autonomous crane safety monitoring.
The extracted knowledge can be used by researchers and developers regardless of
what technologies they are using or planning to use.

This paper focuses on extracting the safety requirements for autonomous


crane safety monitoring and identifying the information requirements through a
literature review and expert interviews, which extends NIST’s work. The
organization of the rest of the paper is as follows: Section 2 summarizes the
construction practice and research related to crane safety monitoring of dangerous
area; Section 3 describes the research methodology used in this research; Section 4
reports the results of safety knowledge and information requirements; Finally, section
5 summarizes the results and proposes future work for this research.

RESEARCH BACKGROUND
Cranes play an important role on construction jobsites in hoisting and
transporting materials and equipment. Crane safety has been addressed in several
regulations published by various organizations. Until very recently, construction
practitioners followed OSHA regulations (e.g., OSHA 1910 Subpart N: Material
handling and storage). With the publication of OSHA’s recently released regulation

on crane safety (OSHA 2007-0066: Crane and derricks in construction), practitioners


are now basing their work on this new document. The American Society of
Mechanical Engineers (ASME) established explicit standards for different types of
cranes in the ASME B.30 series. Each company can also establish its own environment,
health and safety rules, which include cranes. Companies follow these regulations, standards
and rules to execute safety monitoring manually and using information technologies.
An example of autonomous crane safety monitoring is using power line proximity
sensors (Arcolano, Diercks et al. 2001) to warn a crane operator if a crane’s
component is in close proximity to a power line.

NIST proposed to establish an intelligent construction jobsite test bed to


facilitate research to introduce emerging information technologies on construction
jobsites. NIST organized a workshop with 35 participants from industry, academia
and elsewhere to identify the requirements for jobsite management, which cover safety
management (Saidi 2009). The identified information requirements are an important
step towards the implementation of intelligent construction jobsites. The NIST study
covers a wide range of jobsite management tasks and hence has not set up detailed
requirements for crane safety. Wu (Wu, Yang et al. 2010) also analyzed the data
requirements of a real-time tracking system for near-miss accidents on construction
jobsites, but the data requirements did not go into detail for crane safety.

Various technologies have been introduced for intelligent crane safety


monitoring in the last decade. Lee proposed a prototype of an advanced tower crane
equipped with wireless video control and Radio Frequency Identification (RFID), in
which the crane operator can see the situation around the crane and under the trolley
from video sent back from a camera through a wireless network (Lee, Kang et al.
2006). The decision on safety is still made by an operator, however, and it is not a fully
autonomous safety monitoring system. RF remote sensing technologies were
introduced for an equipment proximity safety alert system by Teizer (Teizer, Allread et
al. 2010) to warn workers on foot and equipment operators when a piece of equipment is in
close proximity to other equipment or workers. The
system examines the feasibility of using RF sensing technologies for safety
monitoring on jobsites but did not go into the details of the safety knowledge needed for
safety monitoring.

Most existing studies in the safety monitoring area focus on introducing
emerging technologies to promote autonomous crane safety monitoring on
construction jobsites (Lee, Kang et al. 2006; Giretti, Carbonari et al. 2008; Teizer,
Allread et al. 2010). To move autonomous crane safety monitoring forward, we
need to examine both the safety requirements and information requirements regardless
of specific technologies. Based on the identified safety requirements, technologies
can be chosen to implement autonomous crane safety monitoring. Even when new
technologies appear later, the identified safety knowledge and information
requirements can still be useful as a foundation for safety monitoring system
development.

RESEARCH METHODOLOGY
In order to reach a comprehensive understanding of the safety requirements of
crane operation regarding dangerous areas, the authors reviewed OSHA regulations
and industry best practices, including OSHA 2007-0066 (Cranes and Derricks in
Construction), OSHA 1910 subpart N (Occupational Safety and Health Standards:
Materials Handling and Storage) and a series of best practice guidelines published by the
Construction Plant-hire Association. Based on these publications, eight dangerous
areas for crane components and for workers around operating cranes were identified.

Subsequently, the authors designed a semi-structured interview guide,


composed of five parts: 1) interviewee background information; 2) safety concerns
regarding crane operation; 3) current crane safety practice; 4) attitude towards IT
application in crane safety; and 5) safety requirements regarding safety zone for crane
operation. Four safety experts, one from an owner company and three from contractor
companies, all of which are active members of the Construction Industry Institute
(CII) Safety Community of Practice, were selected for the interviews. Interviewees
were asked to review the eight dangerous areas and comment on them, their
boundaries and the decision rules for safe operation in
close proximity to these areas. After identifying the dangerous areas related to crane
operations, the authors analyzed the decision rules of safety operation in close
proximity to these areas, extracted information requirements, and proposed various
data sources for the identified information requirements.

RESULTS
After reviewing safety regulations and interviewing safety experts, we have
summarized three dangerous areas for workers near operating cranes and four
dangerous areas for crane components/loads. Unauthorized workers, and authorized
workers without proper personal protection, entering the following three areas should be
warned: w1) the area under the crane load; w2) the area around material stacks from which a
crane is lifting/unloading materials; and w3) the swing area of a mobile crane’s
superstructure. A crane load entering the following areas should be avoided: l1)
proximity to a nearby structure and l2) proximity to a nearby highway, traffic road,
railway or waterway. Also, any part of the crane should avoid entering these two
areas: c1) proximity to a power line and c2) proximity to another crane’s components.

Although OSHA’s regulations mention these areas, there does not exist a clear
summary to facilitate the implementation of autonomous safety monitoring. In this
research, the seven extracted dangerous areas set a clear set of boundaries of safe
operation for the various entities related to crane operation. The definition of these
dangerous areas in OSHA’s regulations does not give specific numbers to define
the areas and leaves the decision to safety professionals. However, in order for a
machine to make such safety decisions, specified boundary parameters are required.

As shown in Figure 1, we have extracted safety requirements for autonomous
warning systems from the perspectives of a crane operator as well as a laborer on foot.
The information flow in Figure 1 gives an overall concept of how the warning
system makes safety decisions regarding the seven identified dangerous areas. There
are several key components in the decision-making process: 1) determine the crane’s
condition: the crane’s type and whether the crane is loaded; 2) calculate dangerous areas:
first the system retrieves the stored static dangerous area information, regarding
w2, w3, l1, l2 and c1, from the database; secondly, the system calculates the
dangerous area based on the location of the crane load and other components; and lastly
it performs a union operation on the selected areas based on the environment data (e.g.,
whether power in the power line is cut off and whether a train is coming on the railway);
3) obtain the location or 3D information of the concerned objects (crane component, crane
load, worker); 4) take the intersection of the dangerous areas and the location or 3D
boundary of the concerned objects to judge whether the concerned objects might be in
danger; and 5) consider authorization and safety protection equipment to see whether a
possible safety violation does exist. In the safety monitoring system design and
development phase, each of these components can be treated as, and embedded into, a
decision-making chunk. By assembling these decision-making chunks together, an
autonomous safety monitoring system can be achieved.
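A minimal sketch of one such decision-making chunk (roughly steps 3–5 above, for a worker on foot) is given below: it flags a worker who is inside a dangerous area without authorization or the required personal protective equipment. The circular area geometry, class and field names are simplifying assumptions for illustration, not a specification from this research.

    import java.util.Set;

    class WorkerZoneCheck {
        /** Simplified dangerous area, e.g. the area under the crane load (w1). */
        static class DangerousArea {
            double centerX, centerY, radius;
            Set<String> authorizedWorkerIds;
            Set<String> requiredPpeIds;
        }

        /** Warn when the worker is inside the area and is either unauthorized
         *  or missing a required piece of protective equipment. */
        static boolean shouldWarn(DangerousArea area, String workerId,
                                  double workerX, double workerY, Set<String> carriedPpe) {
            double dx = workerX - area.centerX;
            double dy = workerY - area.centerY;
            boolean inside = dx * dx + dy * dy <= area.radius * area.radius;
            if (!inside) {
                return false;
            }
            boolean authorized = area.authorizedWorkerIds.contains(workerId);
            boolean protectedProperly = carriedPpe.containsAll(area.requiredPpeIds);
            return !authorized || !protectedProperly;   // warn on either violation
        }
    }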

Due to data imperfections in the real world, exceptions need to be considered
in the decision-making process. Exceptions regarding data imperfection include: 1)
missing data: for example, even if the system does not know whether a train is coming on
the railway in the next time slot, it still sends a warning to the crane operator if the
crane load enters the dangerous area in the proximity of a railway in operation; and 2) data
conflicts: required data are collected from various sources and there might be
conflicts among these data. For example, at one moment the scale on the
crane’s hook reads 0 lb but the RFID reader on it indicates that a 5,000 lb structural
component is loaded on the crane. In such a situation, a warning needs to be sent to the
crane operator and further action is required. These exceptions exist in most of the
decision-making components identified in Figure 1 and need to be handled with proper
techniques (e.g., an alternative decision path).
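The data-conflict exception above can be expressed as a small consistency check, sketched below; the tolerance value and the names are illustrative assumptions rather than values from this research.

    class LoadDataConsistencyCheck {
        /** Readings below this value are treated as an empty hook (assumed tolerance). */
        static final double EMPTY_HOOK_TOLERANCE_LB = 50.0;

        /** True when the hook scale and the RFID reader disagree about whether a
         *  load is attached, so a warning and an alternative decision path are needed. */
        static boolean loadReadingsConflict(double scaleReadingLb, Double rfidReportedWeightLb) {
            boolean rfidSaysLoaded = rfidReportedWeightLb != null && rfidReportedWeightLb > 0;
            boolean scaleSaysEmpty = scaleReadingLb < EMPTY_HOOK_TOLERANCE_LB;
            return rfidSaysLoaded && scaleSaysEmpty;
        }
    }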

Figure 1. Decision rules to monitor worker’s safety near crane load (information flow
from the worker side and the crane side to the warning decisions).

To move forward with the development of the decision-making chunks, the authors
identified the information requirements for autonomous crane safety monitoring.
Compared with previous studies, the identified information requirements (Table 1)
add value to the safety community, as the authors give detailed information
requirements and propose possible data sources. An application programming
interface will be provided for each data source so that the decision-making chunks used
in the application development can easily call its functions and retrieve
the required data for decision making.
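The sketch below indicates one possible shape for such a per-source application programming interface; the interface, method and item names are our own illustrative assumptions, not a published specification.

    import java.util.Optional;

    /** Possible shape of a data-source API callable by the decision-making chunks. */
    interface CraneDataSource {
        /** Whether this source provides a given information item,
         *  e.g. "crane.boomAngle" or "worker.location" (item names assumed). */
        boolean provides(String informationItem);

        /** Latest value for the item and entity, or empty when the reading is
         *  missing (one of the data imperfections discussed above). */
        Optional<Object> latestValue(String informationItem, String entityId);
    }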

Table 1. Information requirements for crane safety monitoring.

Entity | Required information | Required data type | Sample data | Possible source
Crane's main structure | Crane's identification | String | TC001 | RFID
Crane's main structure | Crane's type | String | Tower Crane | RFID
Crane's main structure | Height (from the boom to the ground) (ft) | Double | 10 | RFID
Crane's main structure | Cartesian location of crane base center (ft) | (x,y,z) | (0,30,0) | RFID
Crane's main structure | Angular velocity (degree/s) | Double | 1 | Angular sensor
Crane's main structure | Boom angle (degree) | Double | 10 | Angular sensor
Trolley | Belongs to which crane (crane id) | String | TC001 | RFID
Trolley | Distance to the crane center (ft) | Double | 20 | Cable length sensor
Trolley | Moving speed (ft/s) | Double | 2 | Speed sensor
Hook | Belongs to which crane (crane id) | String | TC001 | RFID
Hook | Distance to boom (ft) | Double | 20 | Cable length sensor
Hook | Speed (ft/s) | Double | 3 | Speed sensor
Hook | Weight of load on the hook (lb) | Double | 5000 | Scale
Hook | Load identification | String | PCP001 | RFID reader
Load | Load id | String | PCP001 | RFID
Load | Load dimension (ft) | Array of doubles | {20,30,20} | RFID
Load | Largest distance from hooking point to any point on the load (ft) | Double | 30 | RFID
Worker | Worker's identification | String | WC001 | RFID
Worker | Category | String | Rigger | RFID
Worker | Cartesian location (ft) | (x,y,z) | (0,10,0) | Conversion from GPS or RFID
Worker | Assigned task (task ID) | String | TS001 | RFID
Worker | PPEs carried (list of IDs) | Array of strings | HH002 | RFID reader
Task | Task ID | String | TS001 | Database
Task | Required PPEs | Array of strings | {HardHat} | Database
Power line | Voltage (kV) | Double | 100 | BIM
Power line | Cartesian location of start point | (x,y,z) | (40,50,20) | BIM
Power line | Cartesian location of end point | (x,y,z) | (40,50,20) | BIM
Railway/highway/waterway | Cartesian location of start point | (x,y,z) | (120,70,0) | BIM
Railway/highway/waterway | Cartesian location of end point | (x,y,z) | (180,96,0) | BIM
Railway/highway/waterway | If a train is coming | Boolean | Yes | Accelerometer
Railway/highway/waterway | Start time of closure | Time | 2010/12/25 1:00:00 | Database
Railway/highway/waterway | Finish time of closure | Time | 2010/12/25 5:00:00 | Database
Surrounding buildings | 3D information | - | - | BIM

CONCLUSIONS
Crane safety has been an important issue in construction for decades. Although
there are various standards and regulations for practitioners to follow in order to improve
safety performance on jobsites, site conditions (e.g., lack of sight for the crane operator)
and human errors (e.g., being unaware of entering dangerous areas) cause many accidents
in the construction industry. This research envisions an autonomous crane safety
monitoring system on jobsites to improve safety records in the construction industry. To
achieve this goal, this paper identified typical dangerous areas related to crane operations
and examined the safety requirements and information requirements for crane safety
monitoring through a comprehensive regulation review and expert interviews. The results
can be used to design and develop safety monitoring systems. Future work includes the
extraction of generic decision-making modules and information requirements across
different dangerous-area scenarios.

ACKNOWLEDGEMENTS
The authors would like to acknowledge the experts participating in the
interview and their contributions to our study. This research was funded, in part, by the
National Science Foundation, Grant # OCI-0636299. The conclusions herein are those
of the authors and do not necessarily reflect the views of the National Science
Foundation.

REFERENCES
Arcolano, N., M. Diercks, et al. (2001). Powerline proximity alarm. Worcester, MA,
Worchester Polytechnic Institute: 1-103.
Giretti, A., A. Carbonari, et al. (2008). Advanced Real-time Safety Management
System for Construction Sites. The 25th International Symposium on
Automation and Robotics in Construction, Vilnius, Lithuania.
Lee, U.-K., K.-I. Kang, et al. (2006). "Improving Tower Crane Productivity Using
Wireless Technology." Computer-Aided Civil and Infrastructure Engineering
21: 594-604.
McCann, M. (2009). Crane-Related Deaths in Construction and Recommendations
for Their Prevention. Silver Spring, MD, The Center for Construction
Research and Training: 1-13.
Saidi, K. (2009). Intelligent and Automated Construction Job Site Testbed.
Gaithersburg, MD, National Institute of Standards and Technology. 2010:
1-44.
Teizer, J., B. S. Allread, et al. (2010). "Autonomous pro-active real-time construction
worker and equipment operator proximity safety alert system." Automation in
Construction 19: 630-640.
Wu, W., H. Yang, et al. (2010). "Towards an autonomous real-time tracking system
of near-miss accidents on construction sites." Automation in Construction
19(2): 134-141.
A Knowledge-directed Information Retrieval and Management Framework for
Energy Performance Building Regulations
Lewis John McGibbney¹ and Bimal Kumar²
¹School of the Built and Natural Environment, Glasgow Caledonian University, G4
0BA, Glasgow; PH (0044) 0141-3318038; email: lewis.mcgibbney@gcu.ac.uk
²School of the Built and Natural Environment, Glasgow Caledonian University, G4
0BA, Glasgow; PH (0044) 0141-3318522; email: b.kumar@gcu.ac.uk
ABSTRACT
The Internet-driven world we now live in has profound implications for every aspect
of our personal and professional lives. Over the past two decades or so, an enormous
amount of information has been made accessible over the Internet, thanks to
advanced search and retrieval technologies. Over the last five years 1,200 exabytes
(1 exabyte = 1 billion gigabytes) of data have been put online. As a result, an
increasing amount of professional work within the domain of sustainable design and
construction is becoming dependent on retrieving regulatory and advisory
information over the web quickly. Designers and builders are finding it increasingly
difficult to identify this information and assimilate it into their activities. Generic
search engines like Google do not retrieve relevant information for domain-specific
needs in a focussed manner. Therefore, there is a need for developing smarter
domain-specific search and retrieval technologies under an information management
framework. This paper presents a web-based information search and retrieval
application which employs a domain-specific ontology to identify (in particular)
relevant energy performance building regulations. The paper will focus on our
development of a customised, domain specific web search platform providing
information on (i) the choice of technologies used within this research and the basic
construction of the search application, (ii) the construction of the domain-dependent
ontology which is used to enhance search results, (iii) initial observations relating to
ongoing experiments. Our proposed framework is being developed in collaboration
with a Scottish City Council’s building control department who are actively
validating the value of our approach in their daily activity of checking and approving
designs for construction.
INTRODUCTION
In recent years we have seen a paradigm shift towards semantic retrieval of
information over the internet. It is becoming increasingly common for developers to
incorporate semantic knowledge technologies such as RDF, RDFS, OWL and
ontologies into web-based applications; this enables them to become more compatible
with the World Wide Web in general and the vision of the Semantic Web in
particular. In research the requirement for more efficient information retrieval over
the web has been widely documented. Systems which aim to solve this high level
problem have been implemented mainly within the biomedical (Yu, 2010) and legal
(AKOMANTOSO, 2000-2010) domains (these references by no means represent an
exhaustive list). Examples within construction and engineering have also been in
development over a number of years (Gulla, 2006), (Rezgui Y. B., 2009), and serious
contributions to knowledge in the domain of both ontology engineering and use of
ontology in information processing have been made. The framework proposed in this
paper provides an effective method of efficiently retrieving web-based data, in
particular Scottish energy performance building regulations, using the expressiveness
of OWL, the Web Ontology Language, as the primary driver towards improved search
and retrieval. The rationale behind this work stems from collaboration with a Scottish
Council’s building control department, their experiences retrieving online data and
effectively incorporating this into design decisions and regulatory rulings within the
local authority. Forthcoming sections of this paper are structured as follows: an
overview of the research framework providing information on the construction
architecture of the search application, followed by a section containing the underlying
justification for the requirement of a domain specific ontology within the
management framework and the use of the W3C’s OWL language as an appropriate
regulation representation format, we bring the paper to a close with our initial
observations during testing of the framework followed by suggestions for future
work efforts.
RESEARCH FRAMEWORK ARCHITECTURE
According to (Cafarella, 2004) the fundamental flaw regarding current commercially
owned internet search engines is twofold. Firstly they provide no details of their
internal workings e.g. algorithms associated with the ranking of search results,
clustering techniques, scoring options or spidering policies. Second they encapsulate
immense political and cultural power which can distort the underlying search
direction. To provide an information retrieval solution tailored for the domain of
construction and engineering it was obvious an alternative search and retrieval
architecture was required. Its underlying principles would include a spiderbot (or
crawler) tailored to specifically crawl the web for required data, an indexing and
search implementation which would store fetched data in a structured manner backed
by a database populated with ontology, finally an ontology enhanced query
refinement mechanism running in-between the web-based user interface and the
index. For the information retrieval framework to be successful the following factors
would have to be satisfied:
a) Web-based data such as building regulations in particular are subject to periodic
change; their dynamic nature would have to be taken into consideration when
designing the system as any system implementation which does not contain up-to-
date data can offer little value. It is a fundamental requirement that an accurate
image of the Web graph would have to be maintained
b) It was essential that the knowledge framework had to have good performance
running on sets of standard machines, as this specific criterion would undoubtedly
ensure no IT upgrade would be required in order to test and validate
c) The system would need to incorporate domain specific ontology to enhance
search results; this meant that the knowledge-based tools used to infer the ontology
would have to be interoperable with various web systems as well as standards
compliant and extensible
d) During testing and validation the system would have to scale to a hundred or so
users, consequently scalability became an issue for consideration as fast query
response times are integral to a knowledge retrieval system
e) Finally it was vitally important that the system be highly configurable, permitting
changes to be made should there be the need, an open source system would be
necessary as access to the full code base was paramount.

Figure 1 Knowledge Framework (web-based building regulations, user query, Nutch
crawler, Lucene index, ontology plugin, energy performance ontology, database,
ontology-enhanced query refinement, search results)

Nutch (ASF, 2010), a top level project licenced by the Apache Software
Foundation, is an open
source web-search project written in Java which builds on existing search
architectures such as Lucene (ASF, Welcome to Apache Lucene, 2010) adding web-
specifics, such as a crawler, a link-graph database, parsers for HTML and other
document formats. In addition, the extensible nature of Nutch enabled us to develop
our own implementations of any given interface, a vital attribute required to integrate
OWL into the research framework. The knowledge framework is displayed in Figure
1. Integration of the ontology-enhanced query refinement came in the form of a plug-
in built using Jena (Dickinson, 2009), a Java-based programming toolkit for building
semantic web applications. The plug-in implements an OWL parser to parse any
ontology models provided and retrieves all subclasses and instances of entities within
the ontology. These are then enhanced with synonyms from the WordNet (Princeton,
2010) corpus, to find possible entities with semantic equivalence within crawled
webpages. When a user submits a query, the plug-in retrieves information of
relevance from the ontology stored within the database, matches documents within
the Lucene index and presents document matches accompanied by a list of similar
documents comprising synonymic counterparts.
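As a rough illustration of this synonym enhancement step, the following Python sketch
(using NLTK's WordNet interface rather than the Jena-based plug-in described above;
the function name and entity labels are illustrative assumptions) expands ontology
entity labels with WordNet synonyms before they are matched against an index.

    # Minimal sketch, not the framework's Jena plug-in: expand ontology entity
    # labels with WordNet synonyms prior to index matching.
    # Assumes the WordNet corpus is available via nltk.download('wordnet').
    from nltk.corpus import wordnet as wn

    def expand_with_synonyms(entity_labels):
        """Return each label with the WordNet synonyms found for it."""
        expanded = {}
        for label in entity_labels:
            synonyms = set()
            for synset in wn.synsets(label.replace(" ", "_")):
                for lemma in synset.lemmas():
                    synonyms.add(lemma.name().replace("_", " ").lower())
            expanded[label] = sorted(synonyms - {label.lower()})
        return expanded

    # Hypothetical entity labels from an energy-performance ontology.
    print(expand_with_synonyms(["insulation", "glazing", "boiler"]))
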
CONSTRUCTION OF DOMAIN ONTOLOGY
Generally speaking, the practice of ontology engineering is still relatively young;
however, in recent years we have witnessed pioneering success in the construction
and engineering domain as researchers begin to make implicit, segregated and
unacquired knowledge explicit and reusable to professionals (Kitamura, 2004),
(Rezgui, 2010). Traditionally the ontological modelling of domain knowledge
produces the most comprehensive results through a small number of methods, such as
encoding experts' conceptual patterns (Reich, 2000) or automatically encoding
natural language patterns from a text corpus (Ciaramita, 2005). These efforts
promote the requirement to share common understanding regarding the structure of
information amongst people and software agents and also endorse a collaborative
attempt to reuse domain knowledge which would have otherwise remained untapped.
Common concepts relating to ontology development include the requirement to make
domain assumptions unambiguous, as a standardised method of applying knowledge
creates greater understanding within a given domain, and the necessity to separate
domain knowledge from operational (functional) knowledge, which in turn enables
users to analyze domain knowledge in a more precise context. Factors specific to the
construction and engineering domain present within the above citations provide
sufficient evidence to suggest that the widespread integration of ontology and
knowledge technologies must provide the next generation of information processing
and retrieval across the industry spectrum.

Figure 2 Subsection 6.2.1, Maximum U-values, from Section 6 Energy of the
Domestic Technical Handbook 2010.

The W3C's OWL (Herman, 2007) is a
Web Ontology Language which uses both URIs (for naming) and RDF (the
description framework for the Web) to add key attributes to ontologies which enable
them to be used within systems which interoperate with the semantic Web. OWL as a
semantic language has an additional layer of expressiveness which builds on top of
RDF and RDFS. At an abstract level it describes properties and classes, and relations
between these classes. Specifying relationships between classes introduces users to a
multitude of elements which provide richer property expression; only once this is
understood can the power and usefulness of OWL be fully appreciated. The concept
of an OWL-compatible information framework which shares attributes such as
openness, scalability, and extensibility provides the grounding for our domain
ontology. The choice of methodology behind ontology construction design is very
much dependent on the nature and characteristics of the targeted domain and its
various applications, as well as the resources and development time available and
the required depth of analysis of the ontology (Rezgui Y., 2007). In recent years
energy performance building regulations have become prone to regular change; this
is a direct result of Scottish Ministers' adoption of a staged approach towards the
ambition of net zero carbon buildings by 2016/17. In keeping with an independent energy
report (Sullivan, 2007), the Scottish Government has given a commitment to further
review energy standards for 2013 and 2016. An excerpt from subsection 6.2.1
Maximum U-values from Section 6 Energy of the Domestic Technical Handbook can
be seen in Figure 2.
To identify top level classes, regulations were parsed, converted to plain text, then
analyzed using a Java based custom analyzer. As regulations were fed into the
analyzer firstly BiGrams such as U-value and building-integrated were identified,
secondly every letter was converted to lowercase and finally the removal of
stopwords was applied. An automated method of analysing regulations enabled
important concepts to be identified in a consistent manner. At this stage we began to
formulate classes and the class hierarchy. There is no one correct way to model a
domain, there are always viable alternatives. The best solution almost always
depends on the application that you have in mind and the extensions that you
anticipate (Noy, 2001). Due to familiarity reasons Protege-OWL editor (2000) was
selected as the ontology development environment. We were then able to follow a
logical process of editing classes, associating classes with properties and building
relationships between classes. Protege also includes plugins which enable us to
visualise classes, define logical class characteristics such as OWL expressions and to
execute reasoners to regularly check the ontology for consistency during the design
process. The ontology class hierarchy during the construction process can be seen in
Figure 3.
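For illustration, the following Python sketch mirrors the regulation pre-processing
steps described above for the Java-based analyzer (bigram detection, lowercasing and
stopword removal); the bigram and stopword lists are illustrative assumptions, not
the lists used in the work.

    # Minimal sketch of the regulation pre-processing: keep known domain
    # bigrams intact, lowercase the remaining tokens, then drop stopwords.
    import re

    KNOWN_BIGRAMS = {("u", "value"): "u-value",
                     ("building", "integrated"): "building-integrated"}
    STOPWORDS = {"the", "of", "and", "to", "a", "in", "for", "be", "is", "shall"}

    def analyse(text):
        tokens = re.findall(r"[a-z]+", text.lower())
        merged, i = [], 0
        while i < len(tokens):
            pair = tuple(tokens[i:i + 2])
            if pair in KNOWN_BIGRAMS:
                merged.append(KNOWN_BIGRAMS[pair])
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        return [t for t in merged if t not in STOPWORDS]

    print(analyse("The U-value of a building integrated photovoltaic panel"))
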
INITIAL TESTING & OBSERVATIONS
The principal aim of our knowledge framework is to provide an enhanced search and
retrieval platform with specific application to energy performance building
regulations. This primary aim encompasses several sub objectives, several of which
are mentioned in the next section. In terms of achieving the primary research focus,
we are able to provide significant levels of accuracy over two commercial search
engines which were used as comparisons. Various test scenarios were implemented
and initial results compared with the popular commercial search engines Google UK
and Yahoo UK, as this would provide an initial basis for comparison. Testing was
structured around the submission of various queries and results were based upon
performance when comparing levels of precision ((Eq 1) in this case it was
determined that a precision at n method would be adopted and that n would represent
ten documents, as occasionally the number of documents retrieved from our index
was less than ten) and recall ((Eq 2) the fraction of documents that are relevant to the
query that were successfully retrieved) between search platforms. With these
criteria held constant, some initial results can be seen in Table 1.

Precision@n = (number of relevant documents among the top n retrieved) / (number of documents retrieved, up to n)   (Eq 1)

Recall = (number of relevant documents retrieved) / (total number of relevant documents for the query)   (Eq 2)
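A minimal sketch of how these two measures could be computed for a single test query
is given below; the document lists are hypothetical placeholders for manually judged
relevant URLs and system-returned URLs.

    # Precision at n (Eq 1) and recall (Eq 2) for one query.
    def precision_at_n(retrieved, relevant, n=10):
        top = retrieved[:n]          # fewer than n documents may be available
        if not top:
            return 0.0
        return sum(1 for d in top if d in relevant) / len(top)

    def recall(retrieved, relevant):
        if not relevant:
            return 0.0
        return sum(1 for d in relevant if d in retrieved) / len(relevant)

    retrieved = ["doc1", "doc4", "doc7", "doc9"]     # hypothetical system results
    relevant = {"doc1", "doc2", "doc7"}              # hypothetical judgements
    print(precision_at_n(retrieved, relevant), recall(retrieved, relevant))
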
CONCLUSIONS & FUTURE WORK
This paper documents our early efforts towards the construction of an efficient
knowledge-directed information retrieval and management framework tailored
specifically to locate energy performance building regulations. From the results
shown in Table 1, one can conclude that our efforts towards ontology driven
information retrieval enhance both levels of precision and recall far beyond the
current ability of commercial search engines. The framework maintains underlying
principles which permit further extensibility both in terms of knowledge processing
by use of extended ontology based on building regulations as well as the potential to
create a distributed computing architecture operating over clusters of processing
units. An important characteristic which needs to be addressed is the dynamic nature
of web data; therefore we are actively working towards an automated crawler which
maintains a healthy and accurate representation of the web graph. The ontology
enhanced query refinement enables our research framework to be extended to deal
not only with building regulations but with any data encoded in OWL format. This
promotes clear support for further application of our research framework. Finally, we
maintain an interest in specifically locating clauses within regulations; this provides
an additional layer of accuracy and will facilitate direct application of regulations to
builders' and designers' work activities.

ACKNOWLEDGEMENTS
The authors would like to thank members of the Scottish City Council’s building
control department who are actively validating and improving our research
framework.
REFERENCES
AKOMANTOSO. (2000-2010). Architecture for Knowledge-Oriented Management
of African Normative Texts using Open Standards and Ontologies. Retrieved 12 9,
2010, from AKOMANTOSO: http://www.akomantoso.org/

ASF. (2010, December 3). Welcome to Apache Lucene. Retrieved December 7, 2010,
from Apache Software Foundation: http://lucene.apache.org

ASF. (2010, September 27). Welcome to Nutch. Retrieved December 7, 2010, from
Apache Software Foundation: http://nutch.apache.org

Ciaramita, M. G. (2005). Unsupervised Learning of Semantic Relations between


Concepts of a Molecular Biology Ontology. Nineteenth IJCAI. Edinburgh, Scotland.

Dickinson, I. (2009, 02 24). The Jena Ontology API. Retrieved November 29, 2010,
from Jena - A Semantic Web Framework for Java:
http://jena.sourceforge.net/ontology/index.html

Gulla, J. A. (2006). Semantic Interoperability in the Norwegian Petroleum Industry.


5th International Conference ISTA 2006, (pp. 81-93). Klagenfurt, Austria.

Kitamura, Y. K. (2004). Deployment of an ontological framework of functional


design knowledge. Advanced Engineering Informatics , 115-127.

Noy, N. F. (2001). Ontology Development 101: A Guide to Creating Your First


Ontology. Stanford University.

Princeton. (2010, 12 20). About WordNet. Retrieved 12 26, 2010, from WordNet:
http://wordnet.princeton.edu/

Reich, J. R. (2000). Ontological Design Patterns: Metadata of Molecular Biological


Ontologies, Information and Knowledge. DEXA 2000 (pp. 698-709). Springer-
Verlag Berlin Heidelberg.

Rezgui, Y. B. (2009). Past, present and future of information and knowledge sharing
in the construction industry: Towards semantic service-based e-construction.
Computer-Aided Design , doi:10.1016/j.cad.2009.06.005.

Rezgui, Y. (2007). Text-based domain ontology building using Tf-Idf and metric
clusters techniques. The Knowledge Engineering Review , 379-403.

Rezgui, Y. W. (2010). Federating information portals through an ontology-centred


approach: A feasibility study. Advanced Engineering Informatics, 340-354.

Sullivan, L. (2007). A Low Carbon Building Standards Strategy For Scotland.


Livingston: arcamedia.

Yu, H. T. (2010). Retrieving Information Across Multiple, Related Domains Based


on User Query and Feedback: Application to Patent Laws and Regulations.
ICEGOV2010. Beijing, China: ACM
A Novel Sensor Network Architecture for Intelligent
Building Environment Monitoring and Management
Qian Huang¹, Xiaohang Li², Mark Shaurette¹, Robert F. Cox¹
¹Department of Building Construction Management, Purdue University, West
Lafayette, Indiana, 47906, email: {huang168, mshauret, rfcox}@purdue.edu
²Department of Electrical and Computer Engineering, Purdue University, West
Lafayette, Indiana, 47906, email: li179@purdue.edu

ABSTRACT

Innovations in the design and construction of sustainable green buildings have gained
significant interest in recent years. It has been estimated that the deployment of
intelligent monitoring and control systems can result in around 20% savings in energy
usage and play a crucial role in green buildings. Among various emerging
technologies, wireless sensor network (WSN) for building management has been
becoming an increasingly feasible approach. However, because of the extreme
constraints on system size (and hence the battery capacity), frequent battery
recharging or replacement for a sensor node is unavoidable and suffers from
unaffordable labor cost. Thus, limited energy availability in a WSN poses a big
challenge and obstacle to wide deployment of WSN based building automation and
management systems.
In this paper, the authors introduce and discuss two emerging techniques (i.e., energy
harvesting and power line communication) that have the potential to be integrated
together and provide a significant improvement in cost, performance, convenience
and reliability. To achieve low-cost high-efficiency building automation and
management, a hybrid system diagram and operation mechanism is proposed in this
paper. A case study is also provided to demonstrate how the proposed system
mitigates the inherent weakness of WSN systems.

INTRODUCTION

According to the U.S. Green Building Council, buildings account for 39% of CO2
emissions and consume 70% of the electricity load in the United States. Much of these
emissions and energy usage could be saved by increasing energy efficiency when
providing heating, cooling, and lighting [1]. Even a small adjustment to the operation
of HVAC systems could result in significant reduction of energy consumption and
operating cost. As a result, in recent years the design of sustainable green buildings
with intelligent energy management is attracting more and more attention in both
academic and industrial communities. Smart building automation and energy
management is considered a practical and sustainable solution that could make a huge
contribution to energy savings and environmental benefits.

The emerging technology of wireless sensor networks (WSN) has become an
increasingly feasible approach to realize control and management of the building
environment [5, 11]. The data collected by sensors could be temperature, CO2 level,
artificial lighting, and humidity in the vicinity of the sensor. These data can then be
transmitted within the network region through a wireless medium. Some WSN
systems have already been installed and implemented inside buildings for monitoring
and management [3-5, 7]. It has been shown that the deployment of WSN based
control systems can lead to nearly 20% savings in energy usage and play a crucial
role in green buildings [6].
However, the current cutting-edge WSN technique involves several new challenges. To
simplify installation, retrofit WSN systems are conventionally battery-powered.
Because of the extreme constraints on system size and volume (a few cm2 or cm3),
the battery energy capacity is limited. Hence, battery-powered WSN systems cannot
sustain a long-term operation (e.g. several years or months) without battery
recharging or replacement. For example, two AA batteries can only last a few months
for a typical sensor node. This weakness results in a big problem for building
maintenance personnel, since they have to frequently replace hundreds or thousands
of batteries for a building with a large sensor network. The difficulty becomes more
severe if sensor nodes are not easily accessible. For example, some sensor nodes are
embedded in a wall structure or in a harsh corner of a building. On the other hand,
from the building owner’s perspective, battery integration may be prohibitively
expensive, due to the high labor cost of regular battery replacement. Therefore,
a key challenge in these WSN systems is to efficiently provide the required power for
achieving long-lived, maintenance-free operation. The second drawback of applying
the WSN technology is that the quality of wireless communication cannot always be
guaranteed. Compared with wired communication, wireless communication suffers
from unreliable and unpredictable channel characteristics. The signal degradation
becomes even worse inside buildings, due to significant fading caused by
complicated indoor environment and wireless interference.
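As a back-of-the-envelope check of the battery-life claim above, the short Python
calculation below uses assumed figures (the capacity and average current are not
measurements from a specific node).

    # Two AA cells (~2500 mAh) driving a duty-cycled node averaging ~1 mA
    # last on the order of a few months, consistent with the statement above.
    BATTERY_CAPACITY_MAH = 2500.0   # assumed AA alkaline capacity
    AVERAGE_CURRENT_MA = 1.0        # assumed average node current draw

    hours = BATTERY_CAPACITY_MAH / AVERAGE_CURRENT_MA
    months = hours / (24 * 30)
    print(f"approximately {months:.1f} months of operation")
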
To date, there are two new emerging technologies, energy harvesting (EH) [9-10, 13]
and power line communication (PLC) [2, 8, 12] that have the potential to be
integrated and form a new building monitoring system. Environmental energy
sources abound in our immediate surroundings. Examples of such energy sources
include light, thermal gradients, vibrations, electromagnetic wave, etc. Energy
harvesting is a physical process by which the energy is collected from the
environment. Vijay, et al. [13] showed the estimated power density of a few
commonly used energy harvesting modalities and concluded that the harvested power
is sufficient for a typical sensor operation. In a previous publication [9], the authors
proposed the harvest of environmental light energy as the power source for individual
sensor nodes, and experimentally demonstrated the feasibility of such an energy
harvesting-aided WSN system for room temperature monitoring and management.
Power line communication (PLC) has drawn a lot of interest from the building
management community. The main idea is to reuse power lines inside a building as a
signal transmission medium, whereas power lines are conventionally used to only
carry and deliver electrical power. Because in PLC the data signal can be transmitted
through power lines, there is no need to provide additional network cables and power
supply. Moreover, PLC is a kind of wired communication network, which provides
more reliable and high-quality communication than wireless networks.
due to the above advantages, PLC has the potential to be a big plus for building
monitoring and management.
In this paper, the authors analyze the potential for EH and PLC to be integrated
together. In order to realize low-cost, high-efficiency building automation and
management, a hybrid system diagram and operation mechanism is proposed, which
could achieve significant improvement on cost, performance, convenience and
reliability. Moreover, a case study is investigated and discussed to demonstrate the
benefit of the proposed system architecture.

BUILDING ENVIRONMENT MANAGEMENT

Less efficient operation of HVAC equipment results in increased energy consumption


and waste. For the past few decades, building management personnel had to manually
carry out data measurement in-situ and then return to a central control room to
optimize HVAC equipment parameters. In addition to imparting significant labor
cost, the limited sampling data that is collected is generally insufficient for dynamic
optimization of the HVAC operation. Therefore, the design of real-time automatic
building environment monitoring systems is appealing in practice.
In recent years, the rapid progress of wireless communication and semiconductor
technology enables deployment of distributed sensor nodes inside buildings to obtain
local environmental status. The operational status data retained by each sensor node
can be transmitted to a central control computer by wireless transmission. It could
eliminate the dependence on manual measurement and data collection. This
innovation brings in more flexibility and low installation cost, especially for large
and complex buildings. As a result, the WSN approach is cost-effective and user-
friendly for high-performance green buildings.
However, in practice, the operation of each wireless sensor node is heavily dependent
on the remaining energy of its associated battery. Due to volume constraints of the
sensor node, battery capacity is very limited and can only sustain operation for several months. As a result,
the resultant labor cost for frequent battery replacement becomes a new concern. The
focus of this paper is to investigate and present a new sensor network that directly
contributes towards mitigating the weakness of a WSN to realize a significant
improvement on cost, performance, convenience and reliability.

PROPOSED HYBRID NETWORK ARCHITECTURE

In this section, the advantages and disadvantages of EH and PLC techniques are
discussed. Based on their features and merits, the authors propose a hybrid network
architecture.
Power Line Communication
Recently power line communication (PLC) has drawn a lot of interest from the
building construction and management community. PLC enables data transmission
through power lines that are normally used to carry and deliver electrical power to
household apparatus. The entire network for power line communication is illustrated
in Figure 1 below. PLC can be utilized to intelligently manage the home appliances.
The operating mechanism of a PLC system is described as follows. The PLC adaptor
modulates a baseband signal onto a carrier, and injects the modulated signal onto the
power line. Once the modulated signal is captured, another PLC adaptor (in the
receiver) demodulates it and extracts the original baseband signal. By communicating
with the PLC adapter, the central controller is able to monitor and control all
appliances connected to the power line. The entire power line network can be easily
set up by installing power line adaptors at each power electrical outlet. The resultant
expense is relatively low in comparison with the total cost of setting up a Local Area
Network (LAN).
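The toy Python sketch below (an on-off keyed carrier, not a real PLC modem; the
sample rate and carrier frequency are illustrative assumptions) shows the adaptor
behaviour in miniature: the transmitter modulates a baseband bit stream onto a
carrier and the receiver recovers the bits from the signal riding on the power line.

    import numpy as np

    FS = 1_000_000          # sample rate in Hz (illustrative)
    CARRIER_HZ = 100_000    # carrier frequency injected onto the line (illustrative)
    SAMPLES_PER_BIT = 100

    def modulate(bits):
        t = np.arange(len(bits) * SAMPLES_PER_BIT) / FS
        baseband = np.repeat(np.array(bits, dtype=float), SAMPLES_PER_BIT)
        return baseband * np.cos(2 * np.pi * CARRIER_HZ * t)   # on-off keying

    def demodulate(signal):
        # Envelope detection per bit period, then threshold back to bits.
        chunks = signal.reshape(-1, SAMPLES_PER_BIT)
        return (np.mean(np.abs(chunks), axis=1) > 0.25).astype(int).tolist()

    bits = [1, 0, 1, 1, 0]
    print(demodulate(modulate(bits)) == bits)   # True
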

Figure 1. Illustration of Power Line Communication Network


PLC technology has the following advantages. The transmission medium is quite
low-cost and convenient, since there is no need to install any additional network
cables. This feature makes the employment of PLC technique quite appealing in the
building management community. Especially for old building applications where no
additional wire cables need to be set up, thereby greatly reducing the difficulty in
upgrading the management system. It is also apparent that since signals are
transmitted on the power line, there is no necessity to provide an additional power
supply.
However, if PLC is solely utilized as the transmission network for building
management, individual sensor nodes have to be placed near the PLC adaptors, which
are most easily located close to the power outlets in a building. Restricting the
position of sensor nodes near power outlets is not good for data sensing and sampling.
For example, temperature is commonly stratified with the warmer layer at the ceiling,
while most of the electrical outlets are far away from the room’s ceiling. In this
case, data sampled at electrical outlets would have a large deviation from that
sampled near the room’s ceiling. The flexibility of moving or relocating sensor nodes
dynamically is quite limited, which directly affects the performance of building
management due to potentially inaccurate data sensing.
Energy Harvesting
Environmental light energy harvesting through photovoltaic conversion is practical
due to its ubiquitous nature inside buildings. Even though the outdoor light irradiance
is strong, the light levels (i.e. illumination) are much lower inside buildings,
especially for rooms or hallways that do not have adjacent windows. In this kind of
weak light environment, the most accessible light is either from a light fixture nearby
or from very weak scattered sunlight. Figure 2 shows the network architecture of an
energy harvesting based WSN system.

Figure 2. Energy Harvesting Based WSN System


Previous experiments have verified the feasibility and system operation by using
indoor harvested light. However, if a sensor node moves to a dim location, this node
cannot convert sufficient light energy to sustain its operation, and thus this node will
lose connection with the central control computer [9]. In addition, the authors
observed the performance of wireless transmission was significantly affected by the
complicated indoor environment. Moreover, even in good lighting conditions, the
maximum reliable communication distance was measured as only sixteen meters [9],
which is not enough for buildings with large dimensions. These experimental results
validate indoor light energy harvesting as a useful technique for WSN systems, but
also demonstrate that the sensor network operation can be restricted by the local
light conditions. In addition it was shown that the wireless connection can be
unreliable in some cases.
Hybrid Network Architecture
From the above discussions, it is clear that either energy harvesting (EH) or power
line communication (PLC) has its own intrinsic weakness and drawbacks. Since PLC
and EH techniques are complementary, it is natural to combine them and build a
novel network architecture, as shown in Figure 3.

Figure 3. Hybrid Network Architecture


In this system, for the areas where surrounding light intensity is insufficient, the EH
sensors (e.g. node A and B in Figure 3) cannot directly transfer their sensed data to
the central control computer. Instead, these data are first transferred to access points
(i.e., receiver in Figure 3), which are connected with electrical outlets, where the
signal is transferred to the PLC. For the areas where ambient light condition is
sufficient, the EH sensors will directly communicate with the central control
computer. All of collected information will be transferred to the central control
computer by PLC or wireless link, and provide support for decision-making processes
of the building management network.
This proposed hybrid system has the following advantages: (i) It is economical, as it
saves the effort required for setting up additional wires or cables for networking. (ii)
It helps to reduce carbon emissions and battery replacement, since it does not need
an additional power supply for either the PLC or the EH nodes. (iii) It is easy to set up (i.e.,
plugging PLC adaptors into power outlets or integrating tiny solar cells in the sensor
board is simple). (iv) The data transmission is more reliable, compared with a WSN
system. (v) The proposed hybrid network architecture could cover larger indoor areas
than any other single network, which is either an energy harvesting based WSN or a
PLC.

EXAMPLE CASE STUDY

This section presents a simple example case study to illustrate the benefits of our
proposed hybrid network architecture (Figure 4). Suppose the sensors in the darkened
area are exposed to sufficient light irradiance and the distance between each sensor
node and the central control computer is less than the sensor’s maximum allowed
transmission distance. Then these sensor nodes are able to directly perform reliable
wireless transmission to the central control computer. Other sensors that are placed
away from this darkened region cannot directly communicate with the central control
computer, since their distance exceeds the maximum transmission range. As a result,
in the conventional energy harvesting based WSN systems [9], the central control
computer is unable to receive wireless signals from these nodes outside the maximum
wireless communication distance. Hence, the coverage percentage is limited.
However, the PLC is available to operate where the wireless system is disabled.
Those nodes that are far away from the central controller can transmit their data to the
adapters that are connected to the electrical outlets. Then those data can be transferred
to the controller via the power line. In this case, our proposed network architecture
can cover a larger area.
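The routing decision described in this case study can be summarised in a few lines of
Python; the light threshold is an assumed value, and the 16 m figure is the maximum
reliable wireless distance reported in the earlier experiments [9].

    MAX_WIRELESS_RANGE_M = 16.0   # reported maximum reliable wireless distance [9]
    MIN_LIGHT_LUX = 200.0         # assumed threshold for sufficient light harvesting

    def choose_route(light_lux, distance_to_controller_m):
        """Pick the path a hybrid EH/PLC node uses to reach the central controller."""
        if light_lux >= MIN_LIGHT_LUX and distance_to_controller_m <= MAX_WIRELESS_RANGE_M:
            return "direct wireless link to the central control computer"
        return "short wireless hop to the nearest PLC access point, then the power line"

    print(choose_route(light_lux=450.0, distance_to_controller_m=12.0))
    print(choose_route(light_lux=450.0, distance_to_controller_m=40.0))
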

Figure 4. Hybrid network performance illustration

CONCLUSION

Distributed use of a wireless sensor network (WSN) is a promising solution to


optimizing control and management systems in support of the operation of a
building’s HVAC system. However, a WSN suffers from some weaknesses (i.e., short
battery life and unreliable communication), which impedes its practical deployment
for building automation and management. In this paper, a new hybrid network
architecture is proposed to utilize the benefits of energy harvesting and power line
communication. Its operating mechanism and advantages are explained by an
example case study, which capitalizes on both wired and wireless forms of data
sensor communications with the central control computer.

REFERENCES

[1] U.S. Green Building Council website: http://www.usgbc.org/


[2] Papaioannou, A., and Pavlidou, F. N. (2009). “Evaluation of power line


communication equipment in home networks.” IEEE Systems Journal, 3(3), 288 -
294.
[3] Chiara, B., Alberto, F., and Roberto, V. (2010). “An IEEE 802.15.4 wireless
sensor network for energy efficient buildings.” In The Internet of Things, Springer,
New York, 329-338.
[4] Charles, C., Jeffrey, F., Asad, D., and Ruei, C. (2009). “Temperature control
framework using wireless sensor networks and geostatistical analysis for total spatial
awareness.” 10th International Symposium on Pervasive Systems, Algorithms, and
Networks, 717-721.
[5] Tessa, D., Elena, G., and James, B. (2009). “Wireless sensor networks to enable
the passive house-deployment experiences.” European Conference on Smart Sensing
and Context, 177-192.
[6] James, D., Marcus, K., and Vladimir, B. (2008). “Specification of an information
delivery tool to support optimal holistic environmental and energy management in
Buildings.” National Conference of IBPSA-USA, 61-68.
[7] Antony, G., Alan, G., and Dirk, P. (2009). “A wireless sensor network design tool
to support building energy management.” ACM Workshop on Embedded Sensing
Systems for Energy-Efficiency In Buildings.
[8] Jee G, Rao RD, and Cern Y. (2003). “Demonstration of the technical viability of
PLC systems on medium- and low-voltage lines in the united states.” IEEE
Communications Magazine, 108-112.
[9] Qian, H., Chao, L., and Mark, S. (2010). “Feasibility study of indoor light energy
harvesting for intelligent building environment management.” International High
Performance Buildings Conference.
[10] Chao, L., Vijay, R., and Kaushik, R. (2010). “Micro-scale energy harvesting: a
system design perspective.” Asia and South Pacific Design Automation Conference,
89-94.
[11] Karsten, M., Dirk, P., Brendan, F., Marcus, K., and Cian, M. (2008). “Towards a
wireless sensor platform for energy efficient building operation.” Tsinghua Science
and Technology, vol. 13, 381-386.
[12] Pavlidou N., Han V. A., and Yazdani, J. (2003). “Power line communications:
state of the art and future trends.” IEEE Communications Magazine, 34-40.
[13] Vijay, R., Aman, K., Jason, H., Jonathan, F., and Mani, S. (2005). “Design
consideration for solar energy harvesting wireless embedded systems.” Information
Processing in Sensor Networks, 457-462.

ACKNOWLEDGEMENT

This work was supported by 2010 Graduate Advisory Committee (GAC) Fellowship,
Purdue University.
Planning of Wireless Networks with 4D Virtual Prototyping for Construction
Site Collaboration

O. Koseoglu
Assistant Professor, Construction Technology and Management, Department of
Civil Engineering, Eastern Mediterranean University, Gazimagusa – TRNC Via
Mersin 10 Turkey, PH +90392 6301233, FAX. +90392 6302869,
email: ozan.koseoglu@emu.edu.tr

ABSTRACT:

Emerging collaborative technologies and working methods often require tremendous


engineering and organisational efforts for successful implementation of information
and communication technologies (ICTs). For some years, the feasibility of
implementing wireless solutions to construction sites has been researched. The state-
of-the-art in wireless networks and communications in the construction industry
revealed that construction companies are not widely deploying wireless networks at
remote offices or in the field. For better use of wireless networks in construction
projects, these technologies and the construction sector have to be examined in some
detail to implement the most suitable technology for real-time information access
and improved mobile collaboration of distributed teams in construction. However,
case studies in construction are very limited in number and scope. This paper
discusses and proposes an implementation scenario of wireless networking on a live
construction project through the use of 4D (four dimensional) virtual prototyping
technologies.
Keywords: wireless networks, 4D prototyping, construction, mobile collaboration

INTRODUCTION

Mobile computing has witnessed a revolution over the last few years, extending
distributed collaborative teamwork to mobile workers as well. Several
individual technologies, 3rd generation mobile telecommunications (3G), wireless
networking (WLAN, WIMAX), pocket computing, widespread availability of cheap
broadband (DSL, cable) and XML-based web technologies are individually
developing, maturing and, more importantly, converging in the mobile revolution.
Wireless technologies have reached a point where more and more
organisations are adopting 'mobile collaboration' strategies. In many sectors it has
been proven that, for managers and information workers, wireless access to e.g. the
corporate Intranet and email allows freedom from the desktop. For construction
however, development in wireless is particularly interesting and even more far-
reaching because it promises vastly developed IT possibilities at the point of core
activities, i.e. operations on site. IT has long been implemented in support activities
of the construction value chain (HR, procurement, etc.) but with mobile and wireless
technologies, IT is available in the midst of construction's primary operations.


This paper presents the case study research on a live construction project
carried out with a major contractor in UK and planning of wireless network on a 4D
sequenced virtual prototype for onsite implementation.
STATE-OF-THE-ART- WIRELESS NETWORKS AND 4D VIRTUAL
PROTOTYPING TECHNOLOGIES IN CONSTRUCTION

Wireless Networks in Construction. There has been little research focused on the
feasibility and assessment of wireless communication networks at construction sites.
Survey results, recorded from 58 construction managers around the world, on the
use of wireless and web-based technologies in construction revealed that
construction companies are not widely deploying wireless networks at remote
offices or in the field (Williams et al., 2006).

A good example of wireless communications on-site is Stent Foundations, a UK


based contractor specialising in pilings, which has successfully tried Wi-Fi solutions
in the Wembley Stadium and Kings Cross Terminal construction projects in London
(Mobile Enterprise Analyst, 2005). Furthermore, Brilakis (2006) presented a case
study on long-range wireless communications suitable for data exchange between
construction sites and the main office. The aim of this research was to define the
requirements for a secure wireless communications model where data, information
and knowledge will be shared efficiently between site and office personnel (Brilakis,
2006). Nuntasunti & Bernold (2006) presented the integrated wireless site (IWS)
concept which is based on a mesh communication network and discussed the lessons
learned from installing and evaluating wireless mobile and fixed video devices at
construction sites. Nielsen & Koseoglu (2007) proposed an implementation scenario
for wireless networking on a multi-site tunnelling project in Turkey and discussed
the benefits, barriers and cost assessments. Most other mobile applications in the
construction industry are basically for inspection-type work (snagging), data
collection and asset tracking using GPS and RFID technologies (Bowden et al.,
2006).

4D Virtual Prototyping Technologies in Construction. The 4D CAD model is a


3D visualized model of a project with the added dimension of the project schedule.
It aims to integrate the technical design information within the design
and construction phases respectively (Barrett, 2000; Dawood et al., 2005).
Visualization technologies used in scheduling and planning are 2D or 3D animations
and simulations and 4D CAD applications. Animation enables the user to visualise
on a computer screen the change of status of a construction process and the dynamic
interactions in the process over simulated time. It provides an opportunity for the
user to observe the dynamic interactions between interlinked events (Zhang et al.,
2002). In 4D models, project participants can effectively visualise and analyse
problems regarding the sequential, spatial and temporal aspects of construction
schedules (Dawood et al., 2002).
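At its simplest, the link between the 3D model and the schedule that underpins a 4D
model can be pictured as a mapping from object identifiers to activities and dates,
from which the elements built by any simulation date can be filtered; the object
names and dates in the sketch below are purely illustrative.

    from datetime import date

    # Hypothetical links between 3D object IDs and schedule activities.
    LINKS = {
        "slab_L01": {"activity": "Pour level 1 slab", "finish": date(2008, 3, 14)},
        "core_B":   {"activity": "Slip-form core B",  "finish": date(2008, 6, 30)},
    }

    def elements_built_by(day):
        """Return the 3D elements whose linked activities have finished by the date."""
        return [obj for obj, link in LINKS.items() if link["finish"] <= day]

    print(elements_built_by(date(2008, 4, 1)))   # ['slab_L01']
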

Many current research studies on digital imaging and 4D visualization technologies


in the construction industry aim to improve construction processes. Dawood et al.
(2006) highlighted the need for measuring and identifying the benefits of 4D
planning in the construction industry. From the real applications and performance
analysis point of view, 4D planning has not been investigated in detail. Quantifying
the benefits and identifying the capabilities of 4D planning is crucial for the
improvement of project performance (Dawood et al., 2006). Hu et al. (2005)
presented the ease and speed with which 4D models could be developed from the 2D
drawings of a specific construction project. Dib et al. (2006) suggested an approach
to combine graphical objects and textual information in order to integrate the
information between construction team members and parties. Kang et al. (2007)
investigated the usefulness of web-based 4D construction visualisation in
collaborative construction planning and scheduling. Research results revealed that
project teams using 4D models detected logical mistakes easier and faster than the
teams using 2D drawings (Kang et al., 2007). Hartmann et al. (2006) presented the
data collected from case studies on six pilot projects between 1997-2005 in order to
measure and compare the 3D modelling productivity on construction projects.
Research has shown a general increase in 3D modelling productivity in recent years
and project managers have demanded the use of 4D modelling in all case-study
projects (Hartmann et al., 2006). Sadeghpour (2006) proposed a system that
integrates Real Time Locating System (RTLS) technology with a 4D site
visualization model. The aim of the system is to visualise movements of, and
changes to, objects on construction sites in real-time by using different technologies,
such as GPS-enhanced RFID and 4D CAD (Sadeghpour, 2006). Podbreznik & Rebolj
(2007) presented the development process of the 4D-ACT (Automated Construction
Tracking) system, which automatically recognizes building elements on site and
makes comparisons between planned and performed
activities (Podbreznik & Rebolj, 2007). Jongeling & Olofsson (2007) suggested a
location-based planning approach to 4D CAD models to improve their usability for
work-flow analyses. A 4D CAD model is useful for traditional planning, however, it
does not provide information about the flow of resources to specific locations at
construction sites and this article presented a case study which investigated the
combined use of location-based scheduling and 4D CAD (Jongeling & Olofsson,
2007). 4D and nD modelling concepts and new production methods provide an
opportunity for modifying the existing construction planning and scheduling
processes (Rischmoller, 2006). Norberg et al. (2006) investigated the use of 4D
CAD models combined with Line of Balance scheduling technology for the
planning of cast-in-place concrete construction processes. In the existing 4D
modelling software packages, links between 3D CAD objects and the activities of
the time schedule have to be established manually. Tulke & Hanff (2007) presented
a solution for creating time schedules and 4D simulations based on data stored in a
building model. The aim of this approach is to speed up the preparation of 4D
simulations and to provide additional benefits by a better integration of the 4D
models into planning and scheduling practice (Tulke & Hanff, 2007).

Conclusion on the State of the Art. There has been little research focused on the
feasibility and assessment of wireless communication networks at construction sites.
The state-of-the-art in wireless networks and communications in the construction
industry revealed that construction companies are not widely deploying wireless
networks at remote offices or in the field. Research projects should focus more on
the planning and implementation of wireless networks with the help of 3D digital
tools at construction sites in order to improve real-time communication and
collaboration.

CASE STUDY WITH LAING O’ROURKE

Laing O’Rourke plc is the largest privately owned construction company in the UK.
From its headquarters in the UK, the Group is developing into an international
business with hubs in Europe, the Middle East and Asia, and Australasia. They have
offices in the UK, Germany, India, Australia, and United Arab Emirates, with over
30,000 employees worldwide.

Laing O’Rourke agreed to support this research and granted the researcher
permission to become involved in some of the organization’s activities whilst
conducting the case study. One of the outcomes of this case study research was to
identify the planning and implementation of wireless networks on a 4D visualised
construction model within a live construction project called “One Hyde Park”.

One Hyde Park (OHP) Project-Pilot Project. One Hyde Park is a prestigious
development of eighty apartments, set out over four residential blocks. The project was
managed, and its interiors designed, by Candy & Candy. One Hyde Park is planned to be
one of the finest residential addresses in London and is due for completion in 2010
(Candy & Candy, 2007). The structural design has been undertaken by Arup and
coordinated with Richard Rogers Partnership, the building services engineer is
Cundall.

The buildings are predominantly concrete framed, including in-situ reinforced and


post-tensioned concrete; and pre-cast reinforced and pre-tensioned concrete
components. Stability is provided by a combination of concrete shear walls and
structural steel framing (Arup, 2006).

OHP- Wireless Network Planning. Laing O'Rourke Digital prototyping teams
were trying to integrate a 4D modeling solution (Synchro) into the One Hyde Park
construction site to enable engineers to understand better the phases of construction
on a 3D model and allow the project team to monitor the progress in real time
(Figure 1). Synchro was a new software solution that supports project management
and enables the project team to monitor progress in real-time on a web-based
platform. It provides 3D object creation/manipulation, rapid 4D project visualisation
using project schedules and advanced filtering of a 3D model of the project. Synchro
users can display 3D models of the project linked to project schedules to monitor
construction progress.
Figure 1 4D model for OHP project

Construction project information, including site dimensions, height of buildings, etc.,
and the construction sequence of the project captured from the 4D model were provided to
Nortel Networks for planning the wireless networks. The main aim of the OHP project
is to build four residential blocks, with the tallest being 14 storeys above ground with a
maximum height of 52 m. The site is approximately 140 m by 53 m. The initial plan
would be to bring a 2 Mbps broadband connection into the site office and mount an access point
(AP) there. Initially, the site is clear and only a few APs are needed. Depending on
the number of users, 2-3 APs would be enough to cover the site, and the red dots in
Figure 2 show proposed locations where APs need to be installed at the
construction site. Exact locations are not critical and nodes can be moved as
required. As construction proceeds, the rising buildings will create radio shadows and
necessitate fine-tuning of the system. The solution would be to mount APs on the
cranes above the level of the buildings to facilitate cross-site coverage. A suggested
layout is shown in Figure 3 with APs providing coverage down the lanes between
the buildings.
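A rough grid-based estimate of the initial access-point count for the open site is
sketched below; the assumed coverage radius is illustrative and not taken from the
Nortel plan, but with a typical open-air 802.11 radius the result agrees with the 2-3
APs proposed for the initial, unobstructed phase.

    import math

    def estimate_ap_count(site_length_m, site_width_m, ap_radius_m):
        """One AP per square cell inscribed in its circular coverage area."""
        cell = ap_radius_m * math.sqrt(2)
        cols = math.ceil(site_length_m / cell)
        rows = math.ceil(site_width_m / cell)
        return cols * rows

    # Assumed ~75 m open-site coverage radius for the 140 m x 53 m site.
    print(estimate_ap_count(140, 53, ap_radius_m=75))   # 2
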

Figure 2 Wireless Network Layout- Phase 1


Figure 3 Wireless Network Layout- Phase 2

As the buildings near completion, their height may exceed the range of good propagation,
depending on the exact mounting of the nodes. For example, it could be desirable to
mount an AP at the top of a crane to provide good rooftop coverage. To achieve this
a pair of wireless bridges would be required with one mounted on the site office and
the other with the AP on the crane with the antenna panels directed at each other for
optimum performance. In addition, floors that require coverage might need to have
APs, most probably one per three floors, but these could be moved to cover the floors
where there is a need for wireless coverage. The exact requirements and coverage
heavily depend on the materials used for the internal construction of the buildings.

The buildings might cause more significant radio shadow as they are closer
to completion. The layout might look like Figure 4 (note that some APs are hidden
by the buildings and none of the APs are shown for the floors of the buildings).

Figure 4 Wireless Network Layout- Completion


CONCLUSIONS

The subject matter of this research is the planning and implementation of


wireless networks into construction site activities. The aim was to develop a modern
vision for management and information flow of construction sites. Information
technology is developing rapidly and especially wireless communication and
networks continue to improve business processes in sectors such as marketing and
manufacturing This paper presented a case study on planning the implementation of
wireless networks through the use of 4D virtual prototyping technologies within a
live construction project. Future research will be based on physical implementation
of wireless communications at construction sites and evaluation of tangible and
intangible benefits of such technology adaptation at construction projects.

REFERENCES

Arup (2006). One Hyde Park- Structural Engineering Report, Volume 04.
Barrett, P. (2000). Construction Management Pull for 4D CAD. Construction
Congress IV: Building Together for a Better Tomorrow, pp.977-983.
Bowden, S., Dorr, A., Thorpe, T., Anumba, C. (2006). Mobile ICT Support for
Construction Process Improvement. Automation in Construction, Vol.15, Issue 5,
pp. 664-676.
Brilakis, I.K. (2006). Remote Wireless Communications for Construction
Management: Case Study. Joint International Conference on Computing and
Decision Making in Civil & Building Engineering, June14-16 2006, Montreal-
Canada, pp.135-144
Candy & Candy Official Website (2007). One Hyde Park Project.
www.candyandcandy.com. Accessed November, 2007.
Dawood, N., Akinsola, A. & Hobbs, B. (2002). Development of automated
communication of system for managing site information using internet technology.
Automation in Construction, 11(5), 557-572. Elsevier Science.
Dawood, N., Scott, D., Sriprasert, E., Mallasi, Z. (2005). The virtual construction
site (VIRCON) tools: An industrial evaluation. ITcon, 10, Special Issue From 3D
to nD modelling , 43-54, http://www.itcon.org/2005/5. Accessed July 2005.
Dawood, N., Sikka, S., Ramsay, B., Allen, C., Khan, N. (2006). The Potential
Value of 4D Planning in UK Construction Industry. Joint International Conference
on Computing and Decision Making in Civil& Building Engineering, June14-16
2006, Montreal-Canada, pp.3107-3115
Dib, H., Issa, R.R.A., Cox, R. (2006). Visual Information Access and Management
for Life-Cycle Project Management. Joint International Conference on Computing
and Decision Making in Civil& Building Engineering, June14-16 2006, Montreal-
Canada, pp.2466-2475.
Hartmann, T., Gao, J., Fischer, M. (2006). An Analytical Model to Evaluate and
Compare 3D modeling Productivity on Construction Projects. Joint International
Conference on Computing and Decision Making in Civil& Building Engineering,
June14-16 2006, Montreal-Canada, pp.1917-1926.
Hu, W., He, X., Kang, J.H. (2005). From 3D to 4D visualization in building
construction. Proceedings of ASCE International Conference on Computing in Civil
Engineering, Cancun, Mexico, July 12-15.
Jongeling, R., Olofsson, T. (2007). A method for planning of work-flow by
combined use of location-based scheduling and 4D CAD. Automation in
Construction, Vol. 16, Issue 2, 189-198.
Kang J. H., Anderson, S.D., Clayton, M.J. (2007). Empirical Study on the Merit of
Web-Based 4D Visualisation in Collaborative Construction Planning and
Scheduling. Journal of Construction Engineering and Management, Vol.133, Issue
6, 447-461.
Mobile Enterprise Analyst. (2005). Construction: Can the sleeping giant be roused?
(http://www.comitproject.org.uk/downloads/news/MEAStent.pdf). Accessed July
2006.
Nielsen, Y., Koseoglu, O. (2007). Wireless Networking in Tunnelling Projects.
Tunnelling and Underground Space Technology, Vol.22, Issue 3, 252-261.
Norberg, H., Jongeling, R., Olofsson, T. (2006). Planning for cast-in-place concrete
construction using 4D CAD models and Line-of-Balance scheduling. Proceedings
of the World IT Conference for Design and Construction, INCITE/ITCSED 2006,
New Delhi, India, Vol.2, 391- 402.
Nuntasunti, S., Bernold, L., E. (2006). Experimental Assessment of Wireless
Construction Technologies. Journal of Construction Engineering and Management,
Vol.132, No.9, 1009-1018.
Podbreznik, P, Rebolj, D. (2007). Real-time Activity Tracking System- The
Development Process. Proceedings for CIB 24th W78 Conference, Maribor 2007,
67-71.
Rischmoller, L. (2006). Construction Multidimensional (nD) Planning and
Scheduling. Proceedings of the World IT Conference for Design and Construction,
INCITE/ITCSED 2006, New Delhi, India, Vol.2, 299-314.
Sadeghpour, F. (2006). Real Time Locating System for Construction Site
Management. Joint International Conference on Computing and Decision Making;
Montreal, Canada, June 13-16, 2006, pp.3736-3741.
Tulke, J., Hanff, J. (2007) 4D Construction Sequence Planning- New Process and
Data Model. Proceedings for CIB 24th W78 Conference, Maribor 2007, 79-84.
Williams, T.P., Bernold, L., Lu, H. (2006). A survey of the use of wireless and web-
based technologies in construction. Proceedings of the 10th Biennial International
Conference on Engineering Construction, and Operations in Challenging
Environments, p.113.
Zhang, H., Shi, J.J., Tam, C.M. (2002). Iconic animation for activity-based
construction simulation. Journal of Computing in Civil Engineering, 16(3), 157–
164.
Comparison of Camera Motion Estimation Methods for 3D
Reconstruction of Infrastructure

Abbas Rashidi1, Fei Dai2, Ioannis Brilakis3 and Patricio Vela4


1 PhD Student, School of Building Construction, Georgia Institute of Technology; E-mail: rashidi@gatech.edu
2 Post-Doctoral Researcher, Construction Information Technology Group, Georgia Institute of Technology
3 Assistant Professor, School of Civil and Environmental Engineering, Georgia Institute of Technology
4 Assistant Professor, School of Electrical and Computer Engineering, Georgia Institute of Technology

Abstract: Camera motion estimation is one of the most significant steps for structure-from-motion (SFM) with a monocular camera. The normalized 8-point, the 7-point, and the 5-point algorithms are normally adopted to perform the estimation, each of which has distinct performance characteristics. Given the unique needs and challenges associated with civil infrastructure SFM scenarios, selection of the proper algorithm directly impacts the structure reconstruction results. In this paper, a comparison study of the aforementioned algorithms is conducted to identify the most suitable algorithm, in terms of accuracy and reliability, for reconstructing civil infrastructure. The free variables tested are baseline, depth, and motion. A concrete girder bridge was selected as the "test-bed" and reconstructed using an off-the-shelf camera, capturing imagery from all possible positions so as to maximally cover the bridge's features and geometry. The feature points in the images were extracted and matched via the SURF descriptor. Finally, camera motions were estimated from the corresponding image points by applying the aforementioned algorithms, and the results were evaluated.

Keywords: Camera motion estimation, corresponding points, essential matrix, infrastructure

Introduction
The 3D spatial data of infrastructure contain useful information for civil engineering
applications including as-built documentation, on-site safety enhancement, progress
monitoring, and damage detection. Accurate, automatic, and fast acquisition of the
spatial data of infrastructure has been a priority for researchers and practitioners in the
field of civil engineering over the years.
Advances in computer vision provide a useful path for 3D data acquisition from
images and video frames. Vision-based 3D reconstruction has been investigated in the
area of computer vision for two decades. Based on the setup, such as the type of
sensor (monocular or binocular camera) or the type of captured data (image or video),
a number of frameworks have been proposed by researchers (Fathi and Brilakis 2010).
Each framework, as a pipeline, consists of several stages, and each stage can be


implemented using different algorithms. Selecting the most appropriate algorithm for
each stage is a critical decision that depends not only on the application of the
framework but also the user’s requirements.
In computer vision, algorithms that are proposed are usually tested and evaluated
using synthetic data or data obtained indoors. For 3D reconstruction of infrastructure,
such as bridges, the distance between the camera and the bridge is usually more than 10 m. The scene itself also contains several distinct elements, such as trees and sky.
Thus, evaluating the performance of such algorithms in real conditions and choosing
the best one for specialized applications is of great importance.
In this paper, we evaluate and compare different algorithms for the estimation of camera motion. As explained in Section 2, camera motion estimation is an essential part of every monocular 3D reconstruction framework. The performance of commonly used methods is evaluated and compared in terms of specific metrics determined by the requirements of infrastructure systems. The rest of the paper is organized as follows. In Section 2, an overview of the necessary steps for camera motion estimation is presented. Section 3 presents the metrics and experimental setup used to compare the performance of the different algorithms, and the obtained results are discussed in Section 4. The conclusions of the investigation are presented in Section 5.

Camera Motion Estimation


In computer vision, 3D reconstruction refers to the process of recovering the 3D data of an object from captured images or video. It starts with the capturing of images or the videotaping of the object from different views and ends with a 3D point cloud or a 3D surface generated for that object. Several approaches have been proposed by researchers for obtaining 3D information from visual data, a number of which are widely cited: Snavely proposed the approach called "Photo Tourism" for the 3D reconstruction of the world's well-photographed sites, cities, and landscapes from
reconstruction of the world’s well-photographed sites, cities, and landscapes from
Internet imagery (Snavely et al., 2007); Pollefeys presented a complete system for
building 3D visual models from uncalibrated video frames (Pollefeys et al., 2004).
In the construction area, Golparvar-Fard proposed a simulation model based on the
daily photographs of construction sites for visualizing construction progress
(Golparvar-Fard et al., 2009). He also provided a 4D augmented reality model for
automating the construction progress data collection and processing (Golparvar-Fard
et al., 2009). Fathi provided a framework to obtain a 3D sparse point cloud of
infrastructure scenes using a stereo set of cameras (Fathi & Brilakis, 2010).
As one of the most significant steps of every structure-from-motion algorithm, the
problem of obtaining the motion of a camera from feature point correspondences is an
old one. The first documented attempt to solve the problem dates to more than 150
years ago (Hartley & Zisserman, 2004). There are a variety of solutions given the
(minimum) number of correspondences available. The three most common algorithms
are the normalized 8-point algorithm and the 7-point algorithm suggested by Hartley et
al. (2004 and 1997, respectively), and the 5-point algorithm, which was first solved
efficiently by Nistér (2004). The performance of these algorithms was evaluated by
Rodehorst et al. (2008) using synthetic data injected with noise. In this paper, the
necessary steps to obtain the camera’s motion are briefly reviewed, considering the

most efficient algorithm for each step. Then, using a real infrastructure scene, the
performances of the three motion estimation algorithms are evaluated.
The approach for the estimation of camera motion between two views using an
essential matrix consists of three main steps: the calibration of the camera; the
computation of correspondence feature points; and the computation of the essential
matrix, camera rotation, and translation between two views. As depicted in Figure 1,
each step also contains sub-stages, which are briefly described in the next few
sections.

Figure 1: Common framework for estimating the motion of a camera from corresponding points.

Calibration of camera
In computer vision, the process of obtaining the intrinsic parameters of a camera is
called calibration. Intrinsic parameters define the pixel coordinates of an image point
with respect to the coordinates in the camera reference frame. The parameters that are
known as camera intrinsic parameters are:
- Focal length;
- Image center or principal point;
- Skew coefficient (defines the angle between the X and Y pixel axes); and
- Coefficients of lens distortion.
In this paper, we used the method proposed by Zhang for calibration (Zhang, 1999).
The method only requires the camera to observe a planar pattern shown at a few (at
least two) different orientations.
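For readers who wish to reproduce this step, the sketch below shows a typical Zhang-style calibration using OpenCV's built-in routines. It is illustrative only: the checkerboard dimensions and the image file names are assumptions rather than the setup used in this study.

```python
# Illustrative Zhang-style calibration with OpenCV; the 9x6 inner-corner board
# and the image file pattern are assumptions, not the setup used in this study.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the planar checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # plane Z = 0

obj_points, img_points, image_size = [], [], None
for fname in glob.glob("calib_*.jpg"):  # hypothetical calibration images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

if img_points:
    # K holds the focal length, principal point, and skew; dist holds the
    # lens distortion coefficients.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("RMS reprojection error:", rms)
```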

Feature points detection and matching


One of the more sensitive stages within the 3D reconstruction pipeline is the detection of specific points within each image and the matching of these points across images.
Several approaches have been proposed to detect and match these so called feature
points, among which the most popular are SIFT and SURF (Bauer et al., 2007).
The SIFT keypoint detector is the most widely used detector in the field of computer
vision. Benefits include robustness to changes in scale, viewpoint, and illumination. However, its high computational cost makes it infeasible for real-time applications. In recent years, another feature point detector and descriptor known as
SURF has become more popular. While the SIFT method uses a 128D vector as the
descriptor, the SURF descriptor uses a 64D vector. Thus, from the viewpoint of
identifying matches, SURF is more computationally efficient than SIFT. According to
the research conducted by Juan et al. (Bauer et al., 2007), though SIFT performs slightly better than SURF in terms of accuracy, the performance of the two descriptors is almost the same after applying a RANSAC algorithm to remove outliers.
In this paper, we use the SURF method as the feature detector and descriptor. We also use the Euclidean distance between descriptors as the criterion to find corresponding
matches. In order to improve matching efficiency, an approximate nearest
neighborhood matching strategy, a ratio test described by Lowe (2004), has been
applied rather than the classification of false matches by thresholding the distance to
the nearest neighbor. Moreover, since camera motion estimation algorithms are so
sensitive to false matches, the detected matched features are refined by the calculation
of the fundamental matrix between the two views using the RANSAC approach.
Further information on such refinement can be obtained from Snavely et al. (2007).
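A minimal sketch of this matching stage is given below. It assumes the opencv-contrib build that exposes SURF; the Hessian threshold, the 0.7 ratio, and the RANSAC settings are illustrative values, not those used in this study.

```python
# Sketch of the matching stage: SURF keypoints, a Lowe-style ratio test, and
# RANSAC refinement through the fundamental matrix. Requires opencv-contrib.
import cv2
import numpy as np

def match_views(img1, img2, ratio=0.7):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # illustrative threshold
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # Nearest-neighbour matching followed by Lowe's ratio test (a FLANN-based
    # matcher could be substituted for approximate search).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)
    good = []
    for pair in raw:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC on the fundamental matrix removes remaining false matches.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = mask.ravel() == 1
    return pts1[inliers], pts2[inliers]
```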

Camera Pose Estimation


To compute the camera ego-motion, the essential matrix must be calculated. For the pinhole camera model, the essential matrix is a 3×3 matrix that relates corresponding points in two views when the intrinsic parameters of the camera are known. Assuming that the homogeneous normalized image coordinates of corresponding points in the two views are y = (x, y, 1)^T and y' = (x', y', 1)^T respectively, the essential matrix E relates these points by:

(y')^T E y = 0    (1)
It has been proven that to solve the problem and compute the essential matrix, at least
5 corresponding pairs of feature points should be known (Nistér, 2004). The solution
approach to Equation (1) is what differentiates the various algorithms, such as the
(normalized) 8 point, 7 point and 5 point algorithms. A brief description of each of
these algorithms is presented in the following sub-sections. After computing the
essential matrix, a 3×1 translation matrix and a 3×3 rotation matrix are obtained from
the following procedure (Nistér, 2004):
If the singular value decomposition of the essential matrix is written as E = U diag(1,1,0) V^T, where U and V are chosen such that det(U) > 0 and det(V) > 0, then the translation matrix is equal to:

[t]_x = V D diag(1,1,0) V^T    (2)

where [t]_x is the cross-product (skew-symmetric) matrix of t, and the rotation matrix is equal to:

R_a = U D V^T   or   R_b = U D^T V^T    (3)

where:

      [  0   1   0 ]
  D = [ -1   0   0 ]    (4)
      [  0   0   1 ]
The 8 point algorithm
The 8-point algorithm, which is the most straightforward method for the calculation of
the essential matrix, was first introduced by Longuet-Higgins (Hartley, 1997). The
great advantage of the 8-point algorithm is that it is linear, and hence, it is fast and
easily implementable. If exactly 8 point matches are known, the linear equations can be solved directly. For more than 8 points, a linear least-squares minimization problem must be
solved. The key to the success of the 8-point algorithm lies in proper normalization
of the input data before the construction of the equations to be solved. In this case, a
simple transformation (translation and scaling) of the points in the image before
formulating the linear equations leads to an enormous improvement in the
conditioning of the problem, and hence, in the stability of the result. The complexity
added to the algorithm as a result of the normalizing transformations is insignificant.
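A compact sketch of the normalized 8-point estimate is given below; it assumes the corresponding points have already been converted to normalized camera coordinates (premultiplied by the inverse intrinsic matrix), so the recovered matrix is an essential matrix.

```python
# Sketch of the normalized 8-point algorithm (points assumed to be N x 2 arrays
# of normalized camera coordinates, N >= 8).
import numpy as np

def _normalize(pts):
    # Translate to the centroid and scale so the mean distance is sqrt(2).
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2.0) / np.sqrt(((pts - centroid) ** 2).sum(axis=1)).mean()
    T = np.array([[scale, 0., -scale * centroid[0]],
                  [0., scale, -scale * centroid[1]],
                  [0., 0., 1.]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ pts_h.T).T, T

def eight_point(pts1, pts2):
    x1, T1 = _normalize(pts1)
    x2, T2 = _normalize(pts2)
    # One row of Eq. (6) per correspondence.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)                    # least-squares solution, up to scale
    U, _, Vt = np.linalg.svd(E)
    E = U @ np.diag([1., 1., 0.]) @ Vt          # enforce the essential-matrix form
    return T2.T @ E @ T1                        # undo the normalizing transforms
```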

The 7 point algorithm


When the essential matrix is vectorized as

E = [E11  E12  E13  E21  E22  E23  E31  E32  E33]^T    (5)

Equation (1) gives rise to a set of equations of the form A E = 0, where the number of rows in the matrix A equals the number of point matches:

      [ x1'x1   x1'y1   x1'   y1'x1   y1'y1   y1'   x1   y1   1 ]
A E = [   .       .      .      .       .      .     .    .    . ] E = 0    (6)
      [ xn'xn   xn'yn   xn'   yn'xn   yn'yn   yn'   xn   yn   1 ]
If A has rank 8, then it is possible to solve for E up to scale. In the case where the
matrix A has rank 7, it is still possible to solve for the essential matrix by making use
of the singularity constraint. The most important case is when only 7 point
correspondences are known, leading to a 7 × 9 matrix A , which generally has rank 7.

The solution to the equations A E = 0 in this case is a two-dimensional space of the form (Hartley & Zisserman, 2004):

E = a E1 + (1 - a) E2    (7)

where a is a scalar variable. The matrices E1 and E2 are obtained as the generators of the right null-space of A. Next, we exploit the constraint det(E) = 0. Since E1 and E2 are known, this leads to a cubic polynomial equation in a, which may be solved to find the value of a. There will be either one or three real solutions, giving one or three possible solutions for the essential matrix.
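The cubic step can be carried out numerically, as in the sketch below, which assumes exactly seven correspondences given as 7×2 arrays of normalized image coordinates.

```python
# Sketch of the 7-point solution: two null-space generators E1, E2, then the
# cubic det(a*E1 + (1-a)*E2) = 0, recovered here by sampling and interpolation.
import numpy as np

def seven_point(pts1, pts2):
    A = np.column_stack([
        pts2[:, 0] * pts1[:, 0], pts2[:, 0] * pts1[:, 1], pts2[:, 0],
        pts2[:, 1] * pts1[:, 0], pts2[:, 1] * pts1[:, 1], pts2[:, 1],
        pts1[:, 0], pts1[:, 1], np.ones(7)])
    _, _, Vt = np.linalg.svd(A)
    E1, E2 = Vt[-1].reshape(3, 3), Vt[-2].reshape(3, 3)

    # The determinant is a cubic in a: evaluate it at four values and fit.
    samples = np.array([0.0, 1.0, 2.0, 3.0])
    dets = [np.linalg.det(a * E1 + (1 - a) * E2) for a in samples]
    coeffs = np.polyfit(samples, dets, 3)

    solutions = []
    for a in np.roots(coeffs):
        if abs(a.imag) < 1e-8:          # keep the one or three real roots
            a = a.real
            solutions.append(a * E1 + (1 - a) * E2)
    return solutions
```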

The 5 point algorithm


As mentioned before, the minimum number of corresponding points required to compute the essential matrix is 5. Equation (1) can be rewritten in the form:

q~^T E~ = 0    (8)

where E~ is the vectorized essential matrix of Equation (5) and, for corresponding points q = (q1, q2, q3)^T and q' = (q1', q2', q3')^T,

q~ = [q1q1'  q2q1'  q3q1'  q1q2'  q2q2'  q3q2'  q1q3'  q2q3'  q3q3']^T    (9)

Considering X, Y, Z, W as four 3×3 matrices that span the null space of the coefficient matrix built from the five correspondences, the essential matrix can be written in the form:

E = xX + yY + zZ + wW    (10)

Imposing the essential-matrix constraints (det(E) = 0 and the trace constraint) on this form leads to a tenth-degree polynomial, whose real roots yield the candidate essential matrices. The detailed procedure for solving the equation and computing the essential matrix is available in Nistér (2004).
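Because the closed-form derivation is lengthy, it is worth noting that Nistér's 5-point solver is available through OpenCV, typically wrapped in a RANSAC loop. A minimal sketch, assuming the matched points and the intrinsic matrix K from the earlier steps, is shown below.

```python
# Minimal sketch of pose estimation with OpenCV's 5-point solver; pts1, pts2
# (matched pixel coordinates) and K (intrinsics) come from the earlier steps.
import cv2

def pose_from_five_point(pts1, pts2, K):
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # recoverPose resolves the fourfold (R, t) ambiguity with a cheirality test.
    n_inliers, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t, n_inliers
```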

Comparison Metrics and Experimental Setup


To evaluate the performance of different motion algorithms, several motion primitives
have been designed. These scenarios are based on the combination of translation in
different directions as well as rotation in different planes. A camera trajectory would
consist of sequences of such primitives. The motion primitives are listed in Table 1
and shown in Figure 2.
Table 1: Different camera motion primitives
No.  Translation (T)  Rotation (R)*     No.  Translation (T)  Rotation (R)*
1    X                -                 6    Y                β
2    X                α                 7    Y and Z          -
3    X and Z          -                 8    Y and Z          β
4    X and Z          α                 9    Z                -
5    Y                -                 10   Z                α
*: α and β denote the angles of rotation in the XZ and YZ planes, respectively.

Figure 2: Vector motion depictions of the camera motion primitives.

Two parameters are considered for evaluating the performance of these algorithms: the
length of the baseline and the depth value. For each motion scenario, three possible baseline lengths have been defined: 60, 100, and 140 cm. For sideways motion scenarios (items 1 to 8), four different depth values have been selected: 12, 16, 20, and 24 m. Consideration of these parameters implies 102 motion primitives in total.
In order to run the test, a concrete girder bridge located on Interstate 75 in McDonough, GA, was chosen as the target infrastructure. The two-span bridge consists of three rows of concrete columns, and each row contains five columns (Figure 3).
We used a high-resolution 8-megapixel Nikon camera installed on a tripod as our
sensor. The tripod was marked such that it was possible to measure the degree of
rotation in different configurations. A tape measure was used to measure the actual
translations of the sensor.

Figure 3: Concrete girder bridge selected as the test infrastructure (left) and the test-bed platform (right)

Experimental Results
The average calculated error in computing the translation and rotation for one of the motion primitives (Table 1, number 2) with different baseline and depth values is presented in Figure 4.
[Two line charts: average translation error (left) and average rotation error (right) versus motion number, with separate curves for the 5-point, 7-point, and 8-point algorithms.]

Figure 4: Average translation and rotation errors for 3 different algorithms, motion
primitive number 2.

By observing the results, we obtain the following:


- Experiments demonstrate that the 5-point algorithm is more accurate than the 7- and 8-point algorithms. The main reason is that the 5-point algorithm is less sensitive to outliers. Even so, in a number of scenarios, including forward motion (Cases 9 and 10), the other two algorithms also performed well.
- The algorithms are sensitive to outliers. In the case where wrong correspondences
existed, the algorithms performed poorly.

- The length of the baseline in the applied range (60 to 140 cm) has no specific effect
on the accuracy of the results; however, increasing the depth value usually leads to
less accurate results.

Summary and Conclusion


In this paper, a comparison between three algorithms for camera motion estimation between different views of the same structure's scene is presented. The controlled parameters used for performance evaluation were the baseline length, the depth (the distance between the camera and the infrastructure), and the motion primitive.
To run the test, a concrete girder bridge was selected, and frames were captured
according to the defined motion primitives. Ground truth data was obtained by
measuring the real translation and rotation of the camera between different camera
poses. The outputs obtained by implementing the three different motion estimation
algorithms were also computed, and the average error for each one is calculated.
Examination of the results indicates that the 5-point algorithm outperforms the others in terms of accuracy.
A further research extension is the completion of the whole process of 3D
reconstruction to obtain a 3D point cloud of the civil infrastructure. The selection of
robust strategies to reduce computational load and the evaluation of the performance
of these algorithms from the viewpoint of computing efficiency will be the focus areas
of our future research.

Acknowledgements: This material is based upon work supported by the National


Science Foundation under Grant #1031329. Any opinions, findings, and conclusions
or recommendations expressed in this material are those of the authors and do not
necessarily reflect the views of the National Science Foundation.

References
Bauer, J., Sunderhauf, N., & Protzel, P. (2007). “Comparing Several Implementations
of Two Recently Published Feature Detectors.” In Proc. of the International
Conference on Intelligent and Autonomous Systems, IAV, Toulouse, France.
Fathi, H., and Brilakis, I. (2010). “Automated sparse 3D point cloud generation of
infrastructure using its distinctive visual features.” Journal of Advanced Engineering
Informatics, in press.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2009). “D4AR- A 4-
Dimensional augmented reality model for automating construction progress data
collection, processing and communication.” Journal of Information Technology in
Construction (ITcon), Special Issue Next Generation Construction IT: Technology
Foresight, Future Studies, Road-mapping, and Scenario Planning, 14, 129-153.
Golparvar-Fard, M., Peña-Mora, F. Arboleda, C. A., and Lee, S. H. (2009).
“Visualization of construction progress monitoring with 4D simulation model overlaid
on time-lapsed photographs.” ASCE J. of Computing in Civil Engineering, 23 (6),
391-404.
Hartley, R. (1997). “In defense of the eight-point algorithm.” IEEE Transactions on
Pattern Analysis and Machine Intelligence, 19(6), 580–593.

Hartley, R., and Zisserman, A. (2004). “Multiple view geometry.” Cambridge, UK:
Cambridge University Press.
Lowe, D. (2004). “Distinctive image features from scale-invariant keypoints.”
International Journal of Computer Vision, 60(2), 91-110.
Nistér, D. (2004). “An efficient solution to the five-point relative pose problem.” IEEE
Transactions on Pattern Analysis and Machine Intelligence (PAMI), 26(6), 756-770.
Pollefeys, M., Van Gool, L., Vergauwen, M., Verbiest, F., Cornelis, K., Tops, J., and
Koch, R. (2004). “Visual modeling with a hand-held camera.” International Journal of
Computer Vision, 59(3), 207-232.
Rodehorst, V., Heinrichs, M., and Hellwich, O. (2008). “Evaluation of relative pose
estimation methods for multi-camera setups.” In proceedings of ISPRS08, B3b: 135 ff.
Snavely, N., Seitz, S., and Szeliski, R. (2007). “Modeling the world from internet
photo collections.” International Journal of Computer Vision, 80(2), 189-210.
Zhang, Z. (1999). “Flexible camera calibration by viewing a plane from unknown orientations.” International Conference on Computer Vision (ICCV99), 666-673.
Multi-Image Stitching and Scene Reconstruction for Evaluating Change
Evolution in Structures

Mohammad R. Jahanshahi1 and Sami F. Masri2


1 Sonny Astani Department of Civil & Environmental Engineering, University of Southern California, 3620 S. Vermont Ave., KAP 268B, Los Angeles, CA 90089-2531; PH (213) 740-6304; e-mail: jahansha@usc.edu
2 Sonny Astani Department of Civil & Environmental Engineering, University of Southern California, 3620 S. Vermont Ave., KAP 206A, Los Angeles, CA 90089-2531; PH (213) 740-0602; e-mail: masri@usc.edu

ABSTRACT
It is well-recognized that civil infrastructure monitoring approaches that rely on visual assessment will continue to be an important methodology for condition assessment of such systems. Current inspection standards for structures such as
bridges require an inspector to travel to a target structure site and visually assess the
structure’s condition. This study presents and evaluates the underlying technical
elements for the development of an integrated inspection software tool that is based
on the use of commercially available digital cameras. For this purpose, digital
cameras are appropriately mounted on a structure (e.g., a bridge) and can zoom or
rotate in three directions. They are remotely controlled by an inspector, which allows
the visual assessment of the structure’s condition by looking at images captured by
the cameras. By not having to travel to the structure’s site, other issues related to
safety considerations and traffic detouring are consequently bypassed. The proposed
system gives an inspector the ability to compare the current (visual) situation of a
structure with its former condition. If an inspector notices a defect in the current view,
he/she can request a reconstruction of the same view using images that were
previously captured and automatically stored in a database. Furthermore, by
generating databases that consist of periodically captured images of a structure, the
proposed system allows an inspector to evaluate the evolution of changes by
simultaneously comparing the structure’s condition at different time periods. Several
illustrative examples are presented in the paper to demonstrate the capabilities, as
well as the limitations, of the proposed vision-based inspection procedure.

1. INTRODUCTION
Bridges constitute one of the major civil infrastructure systems in the U.S. According
to the National Bridge Inventory (NBI), more than 10,400 bridges are categorized as
structurally deficient (Chong et al. 2003). There is an urgent need to develop effective
approaches for the inspection and evaluation of these bridges. In addition, periodical
inspections and maintenance of bridges will prolong their service life (McCrea et al.
2002).

Even though many Non-Destructive Evaluation (NDE) techniques have been


developed for the inspection of bridge structures (McCrea et al. 2002), visual


inspection is the predominant method used for the inspection of bridges. In many
cases, other NDE techniques are compared with visual inspection results (Moore et al.
2001). Visual inspection is a labor-intensive task that must be carried out at least bi-
annually in many cases (Chang et al. 2003).

The visual inspection of structures is a subjective process that depends on the


inspector’s experience and focus. Furthermore, inspectors who feel comfortable with
height and lift spend more time finishing their inspection and are more likely to locate
defects (Graybeal et al. 2002). Difficulties in accessing some parts of a bridge hinder
the transmission of knowledge and experience from an inspector to other inspectors.
Consequently, improving the skills and experiences of inspectors will take much time
and effort using current visual inspection practices (Mizuno et al. 2001).

The main purpose of the current study is to enable inspectors to accurately and
conveniently compare the structure’s current condition with its former condition.
Cameras can be conveniently mounted on a structure, and in the case of bridges, the
cameras can be mounted on bridge columns. Even though the cameras may be
constrained in regard to translation, they can easily rotate in two or three directions.
In the present study, a database of images captured by a camera is constructed
automatically. If the inspector notices a defect in the current view, he or she can
request the reconstruction of that view from the previously captured images. In this
way, the inspector can look at the current view and the reconstructed view
simultaneously. Since the reconstructed view is based on images that are in the
database and has virtually the same camera pose as the current view, the inspector
can easily compare the current condition of the structure with its previous condition
and evaluate the evolution of defects. Figure 1 shows a simplified schematic
hardware configuration of the proposed inspection system.

Figure 1: Schematic hardware configuration of the image-based inspection


system.

2. MULTI-IMAGE STITCHING AND SCENE RECONSTRUCTION

In order to reconstruct a view from a large collection of captured images in the


database and project it based on the current camera pose, the images should be
selected automatically from the database. For this purpose, "keypoints" should first be detected automatically. In the next step, images that have a greater number of matching keypoints with the current view should be identified. A procedure to select the images
from the database is introduced in Section 2.3. The next step is to eliminate the outlier
matching keypoints. Then, the camera poses for each of the selected views and the
current view are computed. This is the bundle adjustment problem. The stitching of
the selected images will take place after this step. In this section, the above
components are introduced and discussed. Figure 2 shows a schematic overview of
the proposed image stitching procedure described above.

Figure 2: Schematic overview of the proposed image stitching procedure.

2.1 Keypoint Detection


Scale-Invariant Feature Transform (SIFT) (Lowe 2004) is a popular choice for
keypoint detection. SIFT keypoints are invariant to changes in scale and rotation, and
partially invariant to changes in 3D viewpoint and illumination. The SIFT operator is
also highly discriminative and robust to significant amounts of image noise.

2.2 Initial Keypoint Matching


At this stage, the detected keypoints from the current-view image are matched with
the detected keypoints of the database images. The matching keypoints are used as
the criterion for similarity comparison between the current-view image and the
database images. An initial estimate is necessary to identify the correspondences.
Each SIFT keypoint has a 128-element descriptor vector assigned to it. The Euclidean
distances between each keypoint’s descriptor vector in the reference (current view)
image and any of the keypoint descriptor vectors in the input image (any of the
database images) are computed. In the current study, we reject all matches in which

the distance ratio of the closest neighbor to that of the second-closest neighbor is
greater than 0.6 (Brown 2005).

2.3 Image Selection and Outlier Exclusion


At this stage of the data processing, the images in the database that have overlaps
with the current-view image are selected. Then, all the images that have a number of
initial matching keypoints greater than a threshold are selected. We select images
which have more than 40 matches with the current-view image. In order to improve
the correspondence estimation, outliers (defined as incorrect matching keypoints) are
identified and excluded. Random Sample Consensus (RANSAC) is used to compute
homography between two images as well as to find outliers. Now, the image that has
the greatest number of matching keypoints with the current-view image is
transformed onto the current-view image (using the estimated homography by
RANSAC) to find its projection boundaries on the current-view image. Then, the
current-view image is updated by setting the pixel values in the projection region to
zero (i.e., that projection region will be eliminated from the current-view image). The
above procedure is repeated using the remaining images and the updated current-view
image until the updated current view image turns into a black scene (which means the
selected images cover the whole current-view image). If, after one iteration, none of
the remaining selected images have any matching keypoints with the updated current-
view image, the latter one is updated by stretching the remaining regions by 10% in
the horizontal and vertical directions. This iteration continues until the updated
images turn into a black scene.
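A sketch of one iteration of this selection loop is given below; the variable and function names are illustrative, and the matched keypoint arrays are assumed to come from the previous steps.

```python
# One iteration of the selection loop: RANSAC homography from the best-matching
# database image to the current view, projection of its boundary, and blanking
# of the covered region. Inputs are assumed to come from the matching step.
import cv2
import numpy as np

def consume_best_match(cur_img, db_img, pts_db, pts_cur):
    H, inlier_mask = cv2.findHomography(pts_db, pts_cur, cv2.RANSAC, 3.0)

    h, w = db_img.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    footprint = cv2.perspectiveTransform(corners, H)   # projection boundary

    # Zero the covered pixels so later iterations only consider what remains.
    mask = np.zeros(cur_img.shape[:2], np.uint8)
    cv2.fillConvexPoly(mask, np.int32(footprint.reshape(-1, 2)), 255)
    updated = cur_img.copy()
    updated[mask > 0] = 0
    return updated, H
```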

2.4 Bundle Adjustment


Bundle Adjustment (BA) aims to optimize 3D structure and viewing parameters (e.g.,
camera pose and intrinsic calibration) simultaneously, from a set of geometrically
matched keypoints, from multiple views. In fact, BA is a large sparse geometric
estimation problem in which the parameters consist of camera poses and calibrations,
as well as 3D keypoint coordinates (Lourakis and Argyros 2004).

The Levenberg-Marquardt (LM) algorithm is an iterative minimization method that


has been used to minimize the reprojection error of the bundle adjustment problem.
Lourakis and Argyros (2004) provided details of how to efficiently solve the BA
problem based on the sparse structure of the Jacobian matrix used in the LM
algorithm. Their modified implementation of this algorithm is used to solve the
bundle adjustment problem in this study.
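The study relies on that sparse C implementation; purely for illustration, the dense sketch below expresses the same reprojection-error minimization with SciPy under a deliberately simplified camera model (rotation, translation, and a single focal length per view), which is an assumption rather than the formulation used here.

```python
# Dense, illustrative version of the bundle adjustment objective: minimize the
# reprojection error over camera parameters and 3D points. The 7-parameter
# camera model and the packing scheme are simplifying assumptions.
import numpy as np
import cv2
from scipy.optimize import least_squares

def project(points3d, cam):
    # cam = [rx, ry, rz, tx, ty, tz, f]: Rodrigues rotation, translation, focal length.
    R, _ = cv2.Rodrigues(cam[:3])
    p = points3d @ R.T + cam[3:6]
    return cam[6] * p[:, :2] / p[:, 2:3]

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, observed):
    cams = params[:n_cams * 7].reshape(n_cams, 7)
    pts = params[n_cams * 7:].reshape(n_pts, 3)
    proj = np.vstack([project(pts[j][None, :], cams[i])
                      for i, j in zip(cam_idx, pt_idx)])
    return (proj - observed).ravel()

# Usage (x0 stacks initial cameras and points; observed holds the matched 2D
# keypoints indexed by cam_idx / pt_idx):
# result = least_squares(residuals, x0, method="lm",
#                        args=(n_cams, n_pts, cam_idx, pt_idx, observed))
```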

2.5 Composition
The selected images are all transformed onto the plane of the current-view image and stitched using the homographies between each selected image and the current-view image. Since the composition surface is flat, straight lines remain straight, which is important for inspection
purposes. Finally, the reconstructed scene is cropped and can then be compared to the
current-view image.
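A short sketch of this flat-surface composition is given below, assuming grayscale images and the homographies estimated in the previous step; the blending described next then hides the seams.

```python
# Sketch of the flat-surface composition: each selected database image is warped
# into the current view's plane and pasted where the canvas is still empty.
# Grayscale images and precomputed homographies are assumed.
import cv2
import numpy as np

def compose(current_shape, selected):
    """selected: list of (image, homography_to_current_view) pairs."""
    h, w = current_shape[:2]
    canvas = np.zeros((h, w), np.uint8)
    for img, H in selected:
        warped = cv2.warpPerspective(img, H, (w, h))
        canvas = np.where(canvas == 0, warped, canvas)  # keep the first contribution
    return canvas
```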

2.6 Blending
After stitching the images together, some image edges are still visible. This effect is
usually due to exposure differences, vignetting (reduction of the image intensity at the
periphery of the image), radial distortion, or mis-registration errors (Brown 2005).
Due to mis-registration or radial distortion, linear blending of overlapped images may
blur the overlapping regions. In the problem under discussion, the preservation of the high-frequency components (e.g., cracks) is of interest. A solution to this problem is
to use a technique that blends low-frequency components over a larger spatial region
and high-frequency components over a smaller region. For this purpose, the
Laplacian pyramid blending (Burt and Adelson 1983) technique is used.
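A compact sketch of this multi-band idea, assuming equal-sized grayscale images and a soft mask in [0, 1], is given below; the number of pyramid levels is an illustrative choice.

```python
# Sketch of Laplacian pyramid blending: low frequencies are mixed over a wide
# region, high frequencies (e.g., cracks) over a narrow one. Equal-sized
# grayscale images and a mask in [0, 1] are assumed; 4 levels is illustrative.
import cv2
import numpy as np

def laplacian_blend(a, b, mask, levels=4):
    ga, gb, gm = [a.astype(np.float32)], [b.astype(np.float32)], [mask.astype(np.float32)]
    for _ in range(levels):                      # Gaussian pyramids
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))

    blended = None
    for i in range(levels, -1, -1):              # collapse from coarse to fine
        if i == levels:
            la, lb = ga[i], gb[i]                # coarsest band: Gaussian level
        else:
            size = (ga[i].shape[1], ga[i].shape[0])
            la = ga[i] - cv2.pyrUp(ga[i + 1], dstsize=size)   # Laplacian bands
            lb = gb[i] - cv2.pyrUp(gb[i + 1], dstsize=size)
        layer = gm[i] * la + (1.0 - gm[i]) * lb  # blend each band with the mask
        if blended is None:
            blended = layer
        else:
            size = (layer.shape[1], layer.shape[0])
            blended = cv2.pyrUp(blended, dstsize=size) + layer
    return np.clip(blended, 0, 255).astype(np.uint8)
```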

3 EXPERIMENTAL RESULTS AND DISCUSSION


Figure 3 shows two sets of image databases captured from a truss system (i.e., a
structural system similar to bridge structures) at different time periods t1 and t2, where
t1 < t2. Each of these images has about 50% overlap with its neighboring images. All
of the images are saved in the databases without any specific order (the images in
Figure 3 are presented in an order to give the reader the sense about the overlapping
regions); however, indexing the images can enhance the search speed for image
selection. The resolution is 640 × 480 pixels for each image. All the images are
captured by a Canon PowerShot SX20 IS digital camera. The SIFT keypoints are
detected and saved in a file for each of the database images. In this way, there is no
need to recompute the keypoints for the database images while reconstructing each
scene.

Figure 4(a) shows a current-view image of the truss system shown in Figure 3. The
resolution for this image is 800 × 600 pixels. A yellow tape is attached to the truss in
this image. Figures 4(b) and (c) are the reconstructed and cropped scenes using the
images captured at time periods t2 and t1, respectively. The regions of interest are
shown by red circles in these figures. One can see that the yellow tape did not exist at
time period t1. At time t2, a vertical tape is attached to the truss. The current-view
image shows two vertical and horizontal yellow tapes attached to the structure. This is
a simple example to demonstrate the capabilities of the proposed system.

Note that none of the images in Figures 3(a) and (b) are identical with the
reconstructed scenes in Figures 4(b) and (c). To reconstruct the scenes shown in
Figures 4(b) and (c), four and six images are selected automatically from the
databases in Figures 3(a) and (b), respectively. Figure 5 shows the contribution of
four images used to reconstruct Figure 4(b). On a AMD Athlon II X4 (2.6 GHz)
processor, it takes 110 seconds for the proposed system to detect SIFT keypoints in
the current-view image, find the matching keypoints between the current-view image
and all the images in the database (32 images), select matching images, solve the
bundle adjustment problem, blend the selected images and crop the reconstructed
scene in Figure 4(b).

Bundle adjustment takes less than a second of the whole computation time (because
the sparse bundle adjustment algorithm is efficiently implemented in C++). Note that

no parallel processing is used in this process. Except for the bundle adjustment
algorithm, which is implemented in C++, the rest of the algorithms are implemented
in MATLAB. For faster performance (i.e., online processing), all the algorithms
should be efficiently implemented in C++ (or an equivalent computer language).

(a)

(b)
Figure 3: Two image databases of a truss system captured at different time
periods: (a) and (b) images of a truss system captured at time periods t1 and t2,
respectively (t1 < t2).

4 SUMMARY AND FUTURE WORK


Visual inspection is the predominant method for bridge inspections. The visual
inspection of structures is a subjective process that relies heavily on the inspector's experience and focus (attention to detail). Furthermore, inspectors who are comfortable with heights and lifts spend more time finishing their inspection and are more likely to locate defects. Difficulties accessing some
parts of a bridge adversely affect the transmission of knowledge and experience from
an inspector to other inspectors. The integration of visual inspection results and the
optical instrumentation measurements gives the inspector the chance to inspect the
structure remotely by controlling cameras at the bridge site. This approach resolves
the above difficulties and avoids costs of traffic detouring during the inspection.
Cameras can be appropriately mounted on the structure. Although the cameras are constrained in translation (i.e., attached to a fixed location), they can rotate in two directions. The inspector thus has the appropriate tools to inspect different parts of the structure from different views.

(a) (b) (c)


Figure 4: Change evolution in a structural system: (a) current-view image of a
truss system, (b), and (c) scene reconstructions of the same truss system at time
periods t2 and t1, respectively (t1 < t2). The changed region is shown with a red
circle.

Figure 5: The scene reconstruction and the contribution of four selected images
from the database captured at time t2 (Figure 3(b)). The current-view image
corresponding to this reconstruction is shown in Figure 4(a).

The main purpose of the current study is to give the inspector the ability to compare
the current situation of the structure with the results of previous inspections. In order
to reach this goal, a database of images captured by a camera is constructed
automatically. When the inspector notices a defect in the current view, he or she can request the reconstruction of the same view from the images captured previously. In this way, the inspector can evaluate the growth of a defect of interest. If overlapping images are captured periodically and saved in separate databases, then the evolution of changes can be tracked through time by multiple reconstructions of a scene from images captured at different time intervals.

The correction of radial distortion is not considered in this study. Radial distortion
can be modeled using low order polynomials. Furthermore, implementing all of the
discussed algorithms in a computer language such as C or C++ will dramatically
decrease the computation time and will hasten the online usage of the proposed
system. Further details and examples about the proposed study can be found in the
studies done by Jahanshahi et al. (2009 and 2011).

5 ACKNOWLEDGEMENTS
This study was supported in part by grants from the National Science Foundation.

REFERENCES
Brown MA. Multi-image Matching using Invariant Features. The University of British Columbia. Vancouver, British Columbia, Canada; 2005.
Burt PJ, Adelson EH. A Multiresolution Spline With Application to Image Mosaics.
ACM Transactions on Graphics. 1983 October;2(4):217–236.
Chang PC, Flatau A, Liu SC. Review Paper: Health Monitoring of Civil
Infrastructure. Structural Health Monitoring. 2003;2(3):257–267.
Chong KP, Carino NJ, Washer G. Health monitoring of civil infrastructures. Smart
Materials and Structures. 2003 June;12(3):483–493.
Graybeal BA, Phares BM, Rolander DD, Moore M, Washer G. Visual inspection of
highway bridges. Journal of Nondestructive Evaluation. 2002
September;21(3):67–83.
Jahanshahi MR, Kelly JS, Masri SF, Sukhatme GS. A survey and evaluation of
promising approaches for automatic image-based defect detection of bridge
structures. Structure and Infrastructure Engineering. 2009 December;5(6):455–
486.
Jahanshahi MR, Masri SF, Sukhatme GS. Multi-Image Stitching and Scene
Reconstruction for Evaluating Defect Evolution in Structures. Structural Health
Monitoring. In press (2011). doi:10.1177/1475921710395809.
Lourakis MIA, Argyros AA. The Design and Implementation of a Generic Sparse
Bundle Adjustment Software Package Based on the Levenberg-Marquardt
Algorithm. Heraklion, Crete, Greece: Institute of Computer Science - FORTH;
2004. 340. <http://www.ics.forth.gr/~lourakis/sba> (Jan. 12, 2010).
Lowe DG. Distinctive Image Features from Scale-Invariant Keypoints. International
Journal of Computer Vision. 2004;60(2):91–110.
McCrea A, Chamberlain D, Navon R. Automated inspection and restoration of steel
bridges - A critical review of methods and enabling technologies. Automation in
Construction. 2002 June;11(4):351–373.
Mizuno Y, Abe M, Fujino Y, Abe M. Development of interactive support system for
visual inspection of bridges. Proceedings of SPIE - The International Society for
Optical Engineering. 2001 March;4337:155–166.
Moore M, Phares B, Graybeal B, Rolander D, Washer G. Reliability of visual
inspection for highway bridges, Volume I: Final Report. US Department of
Transportation, Federal Highway Administration; 2001.
<http://www.tfhrc.gov/hnr20/nde/01020.htm> (Jan. 6, 2010).
Computer Vision Techniques for Worker Motion Analysis to Reduce
Musculoskeletal Disorders in Construction
Chunxia Li1 and SangHyun Lee2
1 PhD Student, Department of Civil & Environmental Engineering, University of Michigan, 1316 G. G. Brown, 2350 Hayward Street, Ann Arbor, MI 48109; PH: (734) 763-5091; email: chunxia@umich.edu
2 Assistant Professor, Department of Civil & Environmental Engineering, University of Michigan, 2340 G. G. Brown, 2350 Hayward Street, Ann Arbor, MI 48109; PH: (734) 764-9420; email: shdpm@umich.edu

ABSTRACT
Worker health is a serious issue in construction. Injuries and illnesses result in days
away from work and incur tremendous costs for construction organizations.
Musculoskeletal disorders, in particular, constitute a major category of worker injury.
The repetitive movements, awkward postures, and forceful exertions involved in trade
work are leading causes of this type of injury. To reduce the number of these injuries,
worker activities must be tracked and analyzed. Traditional methods to measure work
activities rely upon manual on-site observations which are time-consuming and
inefficient. To address these limitations, computer vision techniques for worker
motion analysis are proposed to automatically identify non-ergonomic postures and
movements without on-site work interruption. Specifically, we intend to extract 2D skeleton joints from image sequences and, by obtaining 3D coordinates for each joint, reconstruct a 3D human skeleton for each frame; these then can be used for diverse ergonomic analyses (e.g., joint angle comparisons with the suggested ergonomic guidelines for trades). In this paper, we therefore discuss how 3D skeleton video images can be reconstructed from two 2D skeleton images recorded by two network surveillance cameras. The results demonstrate that the obtained 3D skeleton video with joint coordinates has enough detail to be used for motion analysis and has great potential to identify non-ergonomic postures and movements. This
information can be used to reduce musculoskeletal disorders in the construction
industry.
Introduction
Worker health is a serious issue in the construction industry. It has attracted attention
both from academics and industry professionals (Albers et al. 2007). The physically
demanding characteristics of construction result in prevalent strains, sprains, and
work-related musculoskeletal injuries (Albers et al. 2007). The Federal Bureau of
Labor Statistics (BLS) defines musculoskeletal disorders (MSDs) as injuries and


disorders developed chronically to muscles, nerves, tendons, ligaments, joints,


cartilage, and spinal discs (BLS 2001). These types of injuries are the most common
form of injury in construction (Laborers' Health and Safety Fund of North America: LHSFNA) and include back pain, carpal tunnel syndrome, tendinitis, rotator cuff
syndrome, sprains, and strains. However, they do not include injuries that are the
result of acute events, such as slips, trips, and falls (Albers et al. 2007).
MSDs are caused predominately by forceful exertion, repetitive movements, and
awkward postures, each of which is related to worker activities. Other factors, such as
contact pressure, vibration, and temperature, also can cause MSDs. When two or
more factors are combined, the chance of injury increases dramatically (Punnett et al.
2004). The consequences of MSDs can be very costly (Department of Consumer and
Business Services: DCBS 2000) and account for a large percentage of compensation
claims (Everett et al. 1998). To improve construction worker health and reduce MSD-
related high claim costs, MSDs thus require effective study.
Existing methods for construction worker MSD studies are time consuming and
expensive (Levitt et al. 1987). To address the limitations of these methods, this paper
proposes a video-based computer vision approach. This incorporates a shift from
traditional human observation to precise and automatic video-based human motion
capture and analysis. With this approach, 2D skeletons are extracted from videos
taken by two cameras. These then are reconstructed as 3D skeletons, from which
information, such as frequency, joint angle, joint distance, and back bending angle,
can be extracted. In the next sections, a review of currently available (section 2) as
well as proposed methods (section 3) for measuring MSD related activities are
undertaken. The overall research framework then is explained briefly in section 4 and
section 5 outlines the current results produced by the 3D skeleton constructed as part
of the general framework. A discussion and conclusion follows separately in sections
6 and 7.
Existing Methods for MSD Studies
Certain tasks, such as repetitive and heavy lifting, bending and twisting, exerting too
much force, and working too long without breaks, cause or increase the risk of MSDs (Adisesh et al. 2007). These tasks can be monitored to detect symptoms early and to ensure that workers acquire timely treatment (Adisesh et al. 2007). The monitoring,
measurement, and analysis of worker movements thus are necessary and important to
reduce MSDs and improve worker health. Measurements of action frequency and
duration, for example, can be used to determine whether activities are within the
range of health defined within occupational health standards. For instance, according
to Washington Ergonomics Rule, if the worker is doing some activity with back
bending angle over 45° degree and the total duration for back bending is over two
hours/day, this is considered to be hazardous and needs rectification. Traditionally,
researchers would have to engage in on-site data collection, such as site observation
(Levitt et al. 1987), surveys (Daggfeldt et al. 2003), interviews (Everett et al. 1998),
video analysis (Oglesby et al. 1989), and self-reporting (Mitropoulos et al. 2011), to
obtain the necessary information. These methods do provide desirable and useful
information and can be utilized to collect the required information if employed on a
regular basis, thus making the evaluation of worker health possible.

However, considering the chronic development of MSD, continuously keeping track


of worker activities with traditional methods demands a lot of manpower and
resources. Site observation and surveys require experts to collect and analyze data;
this is expensive and time-consuming, since experts need to be involved for long
periods of time. Further, self-reporting becomes increasingly inaccurate over time as
workers lose interest and patience in reporting. To address these limitations, we thus
need automatic and economical methods to obtain necessary information while
requiring less human involvement.
Emerging Technology: Computer Vision
Human motion capture and analysis, which permit a "pure" video-based motion capture system (Moeslund et al. 2001), are active research topics in the computer vision community (Moeslund et al. 2006). The goal of human movement analysis is to quantify the mechanics, such as joint kinematics, of the musculoskeletal system.
The basic task of a computer vision-based human motion tracking system is to detect
and track human motion information from video or image sequences and to recognize
the motions. Construction worker activities can similarly be detected and
reconstructed from videos or images taken on a construction site.
Wang et al. (2003) reviewed research on computer vision-based human motion
analysis. Their survey emphasized the core components of human motion analysis,
which includes human detection, tracking, and activity comprehension from single or
multiple video sequences. Most existing research (Wang, Hu, & Tan 2003) focuses
on human motion tracking and recognition through motion sequences. 2D tracking
and 3D reconstruction are two main components of motion capture technologies.
Compared to 2D motion, 3D human motion can provide more robust recognition and
identification.
Generally, monocular and multi-camera videos are used to obtain 3D motion
parameters. Howe et al. (1999) reconstruct 3D human motions from a single camera.
They use a 2D tracking algorithm to obtain the coordinates of 20 tracked body points
and yield the positions of joints to realize 3D reconstruction. Difranco et al. (2001)
tackle the problem of reconstructing poses from complex activities from a single
viewpoint based on 2D correspondences specified either manually or by separate 2D
registration algorithms. They also utilize an interactive system that makes it easy for
users to obtain 2D reconstructions very quickly with minimal manual effort. To
increase reliability and accuracy and to avoid self-occlusions, multi-camera
approaches (D’Apuzzo et al. 1999) are used in this paper. 3D motion reconstruction
can be used to track human motion in real time (Arikan et al. 2002) by estimating
human motion from multiple cameras. Gavrila and Davis (1999) use multiple
synchronized cameras to reconstruct 3D body poses and to study human motion based
on 3D features, especially 3D joints. These techniques can be applied to establish 3D
skeletons and pose analysis.
Proposed Research Framework
An advantage of video-based computer vision technology is that it does not affect or disturb workers while they are working. Also, the use of multiple ordinary network
surveillance cameras on a construction site to record worker activities for motion

analysis reduces the amount of human effort and workforce involvement in on-site
surveys and observation; it thus can be effective and economical. Our research efforts
focus on obtaining the required action information from video taken on a construction
site, using currently available video-based computer vision technologies. The
framework of this research is shown in Figure 1.

Figure 1. Research framework

1. Motion identification: In this phase, construction trades will be identified if they


involve activities that have potential to cause MSD. If so, these activities are
evaluated by ergonomic risk factors such as action frequency, posture and duration
(Mitropoulos et al, 2011). For example, Ovako working-posture analysis system
(OWAS) (Karhu et al. 1977) can calculate the ergonomic demands for different parts
of human body based on ergonomic risk factors (e.g., action frequency, posture, and
duration).
2. Motion recognition: After identifying trades, relative activities will be recognized
in videos taken at construction site. This is a process of classifying human motion
into known categories such as running, jumping or kicking. Traditionally, there are
two different paradigms: direct recognition and recognition by reconstruction
(Aggarwal et al. 2004). The former is based on 2D image data for recognizing human
actions while the latter reconstructs 3D human skeletons from image and then the
actions can be recognized. In our research, we will reconstruct 3D human skeleton in
order to quantify risk factors (e.g., frequency, angle, and duration of back bending).
3. Motion analysis: The developed 3D skeletons will be confirmed whether they
represent the real activities or postures of workers (i.e., motion validation). For
instance, if the video shows that a worker is doing bricklaying activity, the 3D
skeleton sequences should show that the worker is doing the same activity. Once they

are correct, ergonomic risk factors, such as action frequency and duration, can be
calculated and compared to ergonomic standards to check whether they are within the
required range and whether workers are following correct work methods. For
example, if a bar bender is assembling bars while standing straight along the platform,
it can be said that his/her work is being conducted ergonomically in terms of back
bending.
4. Motion visualization: Obtained motion information will be implemented in virtual
reality environment to provide visual interface so that people can get intuitive
understanding of the construction site and visual feedback, such as how workers are
conducting their work and whether workers are working in an ergonomic way or not.
2D Skeleton Extraction and 3D Skeleton Reconstruction
The reconstructed 3D skeleton of workers' actions is presented in this section. Since back injuries account for 25% of injuries in the construction industry, the experiment begins with activities involving back bending. This experiment aims to establish
3D skeleton images based on 2D skeleton recorded from two network surveillance
cameras. In addition, it attempts to calculate the angle and duration of back
bending using this 3D skeleton. 2D skeletons with 15 joints (Figure 2) will be
marked by extracting from the video frame by frame. A projective reconstruction
algorithm is used on the 2D skeleton to establish a 3D skeleton and realize a three
dimensional reconstruction of the body joints (Hartley and Zisserman 2003).

Figure 2. 2D skeleton model; Figure 3. Epipolar geometry; Figure 4. Back-bending angle

For this experiment, there is no prior information except two sets of images from the
two cameras. The 3D skeleton can be recovered using projective reconstruction
without knowing calibration. The projective algorithm (Hartley and Zisserman 2003)
is as follows:

1. Compute the fundamental matrix F from point correspondences between the two views. F is a 3×3 matrix relating corresponding points in stereo images and encodes all the geometric information between the two views when no additional information is available. Given a set of correspondences xi ↔ xi', the epipolar constraint xi'^T F xi = 0 (Figure 3) can be used.
2. Compute the camera projection matrices from the fundamental matrix. After obtaining F from the correspondences, it can be used to calculate the corresponding camera matrices P and P', which represent the relationship between a 3D point X and its 2D projections (x = P X; x' = P' X); the 3D joint positions then follow by triangulation, as sketched below.
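A minimal sketch of this reconstruction step is shown below; it follows the canonical-camera construction of Hartley and Zisserman and triangulates the matched joints, so the result is defined only up to a projective transformation. The function name and array layout are illustrative.

```python
# Sketch of the projective reconstruction: canonical camera matrices from F,
# then linear triangulation of the matched joint positions (N x 2 arrays).
# The recovered 3D joints are defined only up to a projective transformation.
import cv2
import numpy as np

def projective_reconstruction(F, x1, x2):
    # Epipole in the second view: the null vector of F^T.
    _, _, Vt = np.linalg.svd(F.T)
    e2 = Vt[-1]
    e2_cross = np.array([[0., -e2[2], e2[1]],
                         [e2[2], 0., -e2[0]],
                         [-e2[1], e2[0], 0.]])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])     # canonical first camera
    P2 = np.hstack([e2_cross @ F, e2.reshape(3, 1)])  # second camera from F
    X = cv2.triangulatePoints(P1, P2,
                              x1.T.astype(np.float64), x2.T.astype(np.float64))
    return (X[:3] / X[3]).T                            # N x 3 joint coordinates
```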

Figure 4 shows the reconstructed 3D skeleton from this algorithm. With the 3D coordinates in this skeleton, motion information, such as back bending angle, joint angle, and joint distance, can be calculated and then used to measure whether the worker is working within the range that the ergonomic standard recommends. For example, the angle of back bending is calculated between the vector from the belly to the neck and a vertical reference vector, as shown in Figure 4. If a worker is bending the back with an angle greater than 30° for four hours or more per working day (8 hours), it is considered hazardous (Spielholz et al. 2006).
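A short sketch of this angle and duration computation is given below; the joint indices, the vertical direction, and the frame rate are assumptions for illustration, not values prescribed by the study.

```python
# Sketch of the back-bending measures: the angle between the belly-to-neck
# vector and the vertical, and the total time spent above a threshold.
# The joint ordering, vertical axis, and frame rate are assumptions.
import numpy as np

def back_bending_angle(neck, belly, vertical=np.array([0., 1., 0.])):
    v = neck - belly
    cos_angle = np.dot(v, vertical) / (np.linalg.norm(v) * np.linalg.norm(vertical))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def bending_duration(skeletons, neck_idx, belly_idx, threshold_deg=30.0, fps=1.0):
    """skeletons: sequence of (n_joints x 3) arrays, one per analyzed frame."""
    angles = [back_bending_angle(s[neck_idx], s[belly_idx]) for s in skeletons]
    return sum(a > threshold_deg for a in angles) / fps  # seconds above threshold
```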
To validate the accuracy of the calculated results, the experimenter stayed still with a fixed back bending angle (30° and 75°, respectively) for 30 seconds. Thirty frames (1 frame per second) were analyzed (Figure 5). For the 30 frames with a back bending angle of 30°, the mean and variance are 29.11 and 2.44 respectively, and the error mean is -0.89. For the 30 frames with a back bending angle of 75°, the mean and variance are 79.33 and 8.59 respectively, and the error mean is 4.22. At the 99% confidence level, the confidence intervals for the 30° and 75° datasets are (27.60, 31.63) and (73.59, 85.01), respectively. Based on this analysis, it can be concluded that this algorithm has the potential to be implemented in this research, since the reconstructed 3D skeleton and the angle calculation results are reasonably accurate.

Figure 5. Back bending angle (30° left and 75° right)

The duration of back bending with an angle over 30° can also be calculated by dividing the number of such frames by the frame rate. This information can also be used to check whether this duration exceeds the one recommended by the ergonomic standard.

Conclusions
Worker MSDs are a serious problem in the construction industry. The leading causes
of MSDs, such as repetitive movements, are related to worker activities. By measuring worker activities, information such as joint angle can be obtained and used for MSD research as a basis for comparison against health standards. The limitations of existing
research methods, such as surveys, interviews, and questionnaires, can be addressed
through the utilization of a computer vision-based research framework which includes
four steps: motion identification, motion recognition, motion analysis, and motion
visualization.
In this paper, we establish 3D skeletons of construction workers from videos. 2D skeletons are manually extracted from image sequences decomposed from the videos. A projective reconstruction algorithm is then implemented to calculate the 3D coordinates of each joint in the designated human model for each frame. The algorithm produces 3D skeletons similar to the skeletons evident in the videos. The 3D joint coordinates are also useful for precise motion analysis, since they can be used to calculate relevant information, such as duration, frequency, joint angle, and back bending angle. With this technology, early symptoms and warnings can be automatically detected and feedback can be provided to the worker with regard to existing ergonomic standards. Therefore, early intervention can be executed to rectify worker behavior in order to reduce MSD development.
The 2D skeletons in this experiment were marked manually. Automatic extraction of 2D skeletons using human model-based tracking is ongoing work.
References
Adisesh, A., Rawbone, R., Foxlow, J., and Harris-Roberts, J. (2007). “Occupational
health standards in the construction industry.” HSE Research Report
Aggarwal, J. K., Park, S. (2004). “Human Motion: Modeling and Recognition of
Actions and Interactions.” International Symposium on 3D Data Processing,
Visualization & Transmission, Thessaloniki, Greece.
Albers, J. T., and Estill, C. F. (2007). “Simple solutions: ergonomics for construction
workers.”
Arikan, O., and Forsyth, D. (2002). “Interactive motion generation from examples.”
ACM Transactions on Graphics, 21(3), 483–490.
Daggfeldt, K., and Thorstensson, A. (2003). “The mechanics of back-extensor torque
production about the lumbar spine.” J. Biomech, Jun, 36(6), 815-825.
D’Apuzzo, N., Plankers, R., Fua, P., Gruen, A., and Thalmann, D. (1999). “Modeling
human bodies from video sequences.” Videometrics Conferences, SPIE Proc.,
vol. 3461, 36-47.
DiFranco, D. E., Cham, T-J., and Rehg, J. M. (2001). “Reconstruction of 3-D figure
motion from 2-D correspondences.” Proc. of the 2001 IEEE Conf. on
Computer Vision and Pattern Recognition, vol. 1, 307-341.
Everett, J. G., and Kelly, D. L. (1998). “Drywall joint finishing: productivity and
ergonomics.” Journal of Construction Engineering and Management, 9-10,
347-353.

Gavrila, D. M., and Davis, L. S. (1996). “3-D model-based tracking of humans in


actions: a multi-view approach.” Proc. Of IEEE Conf. on Computer Vision
and Pattern Recognition, 73-80.
Hartley, R., and Zisserman, A. (2003). Multiple view geometry in computer vision-
second edition, Cambridge University Press, Cambridge.
Howe, N. R., and Leventon, M. E. (1999). “Bayesian reconstruction of 3D human
motion from single camera video.” Advances in Neural Information
Processing System, 12.
http://www.cbs.state.or.us/external/imd/rasums/resalert/msd.html
http://www.lhsfna.org/index.cfm?objectid=EB4CE18C-D56F-E6FA-9F264298BF045289
http://www.ivanhoe.com/science/story/2008/04/417si.html
Kadefors, R. (1994). “An ergonomic model for workplace assessment.” 12th Triennial Congress of the Int. Ergonomics Association, Vol. 5, Human Factors Association of Canada, Toronto, 210-212.
Levitt, A. and Samelson, N.M. (1987). Construction safety management, McGraw-
Hill, New York.
Mitropoulos, P., Namboodiri, M. (2011). “New method for measuring the safety risk
of construction activities: task demand assessment.” Journal of Construction
Engineering and Management, 137(1), 30-38.
Moeslund, T. B., Hilton, A., and Krüger, V. (2006). “A survey of advances in vision-
based human motion capture and analysis.” Computer Vision and Image
Understanding, 104, 90-126.
Oglesby, C. H., Parker, H. W., and Howell, G.A. (1989). “Data collection method:
productivity improvement in construction.” Chapter 7: Data gathering for
on-site productivity improvement studies, McGraw-Hill, Inc., New York,
N.Y., 146-210.
Punnett, L., and Wegman, D. H. (2004). “Work-related musculoskeletal disorders: the
epidemiologic evidence and the debate.” Journal of Electromyography and
Kinesiology, 14, 13-23.
Rabiner, L. (1989). “A tutorial on hidden markov models and selected applications in
speech recognition.” Proceedings of the IEEE, 77(2), 257-286.
Spielholz, P., Davis, G., and Griffith, J. (2006). “Physical risk factors and controls for
musculoskeletal disorders in construction trades.” Journal of Construction
Engineering and Management, 10, 1059-1068.
Wang, L., and Hu, W. (2003). “Recent developments in human motion analysis.”
Pattern Recognition, 36(3), 585-601.
A Novel Crack Detection Approach for Condition Assessment of Structures

Mohammad R. Jahanshahi1 and Sami F. Masri2


1Sonny Astani Department of Civil & Environmental Engineering, University of Southern California, 3620 S. Vermont Ave., KAP 268B, Los Angeles, CA 90089-2531; PH (213) 740-6304; e-mail: jahansha@usc.edu
2Sonny Astani Department of Civil & Environmental Engineering, University of Southern California, 3620 S. Vermont Ave., KAP 206A, Los Angeles, CA 90089-2531; PH (213) 740-0602; e-mail: masri@usc.edu

ABSTRACT
Automated health monitoring and maintenance of civil infrastructure systems is an
active yet challenging area of research. Current inspection standards require an
inspector to travel to a target structure site and visually assess the structure's condition.
If a region is inaccessible, binoculars must be used to detect and characterize defects.
This approach is labor-intensive, yet yields only qualitative results. A less time-consuming and
inexpensive alternative to current monitoring methods is to use a robotic system that
could inspect structures more frequently, and perform autonomous damage detection.
Among several possible techniques, the use of optical instrumentation (e.g., digital
cameras), image processing and computer vision are promising approaches as
nondestructive testing methods. The feasibility of using image processing techniques
to detect deterioration in structures has been acknowledged by leading researchers in
the field. This study presents and evaluates the technical elements for the
development of a novel crack detection methodology that is based on the use of
inexpensive digital cameras. Guidelines are presented for optimizing the acquisition
and processing of images, thereby enhancing the quality and reliability of the damage
detection approach and allowing the capture of even the slightest cracks, which are routinely encountered in realistic field applications where the camera-object distance and image contrast are uncontrollable.

1. INTRODUCTION
Civil infrastructure system assets represent a significant fraction of the global assets
and in the United States are estimated to be worth $20 trillion. These systems are
subject to deterioration due to excessive usage, overloading, and aging materials, as
well as insufficient maintenance and inspection deficiencies.

In the past two decades, efforts have been made to implement image-based
technology in crack detection methods. Tsao et al. (1994), Kaseko et al. (1994) and
Wang et al. (1998) used image processing to detect defects in pavements. Siegel and
Gunatilake (1998) developed a remote visual crack inspection system of aircraft
surfaces using wavelet transformation features and a neural network classifier.
Nieniewski et al. (1999) developed a visual system that could detect cracks in ferrites.
Moselhi and Shehab-Eldeen (2000) used image analysis techniques and neural
networks to automatically detect and classify defects in sewer pipes.


Chae (2001) proposed a system consisting of image processing techniques along with
neural networks and fuzzy logic systems for automatic defect (including cracks)
detection in sewer pipes. Benning et al. (2003) used photogrammetry to measure the
deformations of reinforced concrete structures and monitor the evolution of cracks.
Abdel-Qader et al. (2003) analyzed the efficacy of different edge detection techniques
in the identification of cracks in concrete pavements of bridges. Abas and Martinez
(2003) used a morphological top-hat operator and a fuzzy k-means technique to detect
cracks in paintings.

A study on using computer vision techniques for automatic structural assessment of


underground pipes has been done by Sinha et al. (2003). The algorithm proposed
by Sinha et al. consists of image processing, segmentation, feature extraction, pattern
recognition, and a proposed neuro-fuzzy network for classification. Giakoumis et al.
(2006) detected cracks in digitized paintings by thresholding the output of the
morphological top-hat transform. Sinha and Fieguth (2006) detected defects in
underground pipe images by thresholding the morphological opening of the pipe
images using different structuring elements. Abdel-Qader et al. (2006) proposed
algorithms based on Principal Component Analysis (PCA) to extract cracks in
concrete bridge decks.

Yamaguchi and Hashimoto (2006) proposed a crack detection approach based on a


percolation model and edge information. Chen et al. (2006) introduced a
semiautomatic measuring system for concrete cracks using multi-temporal images.
Yu et al. (2007) introduced an image-based semiautonomous approach to detect
cracks in concrete tunnels.

Recently, Fujita and Hamamoto (2009) proposed a crack detection method in noisy
concrete surfaces using probabilistic relaxation and a locally adaptive thresholding.
Jahanshahi et al. (2009) surveyed and evaluated several crack detection techniques in
conjunction with realistic infrastructure components.

In all of the above studies, many important parameters (e.g., camera-object distance)
are not considered or assumed to be constant. In practical circumstances, the image
acquisition system often cannot maintain a constant focal length, resolution, or
distance to the object under inspection. In the case of nuclear power plants, for
instance, the image acquisition system needs to be located a significant distance from
the reactor site. To detect cracks of a specific thickness, many of the parameters in
these algorithms need to be adaptive to the 3D structure of a scene and the attributes
of the image acquisition system; however, no such study has been reported in the
open literature. The proposed approach in this study gives a robotic inspection system
the ability to detect cracks in images captured from any distance to the object, with
any focal length or resolution.

2. CRACK DETECTION
An adaptive crack detection procedure is proposed in this study. The system is adaptive because it automatically adjusts its parameters, based on the image acquisition specifications (camera-object distance, focal length, and image resolution), to detect the cracks of interest. Figure 1 shows the overview scheme of the proposed system. The main elements of the proposed crack detection procedure are segmentation, feature extraction, and decision making. Note that before processing any image, preprocessing approaches can be used to enhance the image.

Figure 1: The overview scheme of the proposed crack detection approach.

2.1 Segmentation
Segmentation is a set of steps that isolates the patterns that can potentially be classified as a defined defect. The aim of segmentation is to discard extraneous data about patterns whose classes are not of interest. Several segmentation techniques have been evaluated by the authors previously (Jahanshahi et al. 2009), and it was concluded that a morphological operation proposed by Salembier (1990) works best for crack detection in components that are typically encountered in civil infrastructure systems.

2.1.1 Morphological Operation


Morphological image processing, which is based on mathematical morphology, is
used to extract useful information about the scene objects. The proposed
morphological operation by Salembier (1990) is slightly modified here to enhance its
capability for crack extraction in different orientations. The proposed operation is
shown in Eq. (1):
T = max[(I ∘ S{0°, 45°, 90°, 135°}) • S{0°, 45°, 90°, 135°}] - I ,        (1)

where I is the grayscale image, S is the structuring element that defines which neighboring pixels are included in the operation, '∘' is the morphological opening, and '•' is the morphological closing. The output image T is then binarized using Otsu's thresholding method (Otsu 1979) to segment potential crack-like dark regions from the rest of the image. This nonlinear filter extracts the whole crack, as opposed to edge detection approaches where just the edges are segmented.
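The following sketch illustrates one possible implementation of Eq. (1) and the subsequent Otsu binarization using OpenCV. The way the linear structuring elements are built and the orientation convention for the diagonals are assumptions for illustration, not taken from the paper.

import cv2
import numpy as np

def linear_se(length, angle):
    # Linear structuring element of the given pixel length and orientation
    if angle == 0:
        return np.ones((1, length), np.uint8)
    if angle == 90:
        return np.ones((length, 1), np.uint8)
    diag = np.eye(length, dtype=np.uint8)
    return diag if angle == 135 else np.fliplr(diag)   # diagonal orientations (convention assumed)

def segment_cracks(gray, se_length):
    # Eq. (1): T = max[(I opened by S) closed by S] - I over four orientations, then Otsu thresholding
    responses = []
    for angle in (0, 45, 90, 135):
        se = linear_se(se_length, angle)
        opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, se)      # opening: I ∘ S
        closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, se)   # closing: (I ∘ S) • S
        responses.append(closed)
    T = cv2.subtract(np.max(np.stack(responses), axis=0), gray)
    _, binary = cv2.threshold(T, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary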

2.1.2 Structuring Element


By choosing the size and shape of the structuring element (i.e., neighborhood), a filter
that is sensitive to a specific shape can be constructed. When the structuring element
has a line format, it can segment cracks that are perpendicular to it. If the length of
the structuring element (in pixels) exceeds the thickness of a dark object in an image,

then this object can be segmented by the operation in Eq. (1). Consequently, linear
structuring elements are defined in 0°, 45°, 90°, and 135° orientations. The challenge
is to find the appropriate size for the structuring element.

Using a simple pinhole camera model, the relation between the structuring element
size and the different image acquisition parameters is shown below:

S = ⌈ (FL / WD) × (SR / SS) × CS ⌉ ,        (2)

where S (pixels) is the structuring element size, FL (mm) is the camera focal length, WD (mm) is the working distance (camera-object distance), SR (pixels) is the camera sensor resolution, SS (mm) is the camera sensor size, CS (mm) is the crack thickness, and ⌈ ⌉ is the ceiling function.

By knowing the working distance, the formula derived in (2) is used to compute the appropriate structuring element. Using this equation, the size of the appropriate structuring element is computed based on the crack size of interest. Figure 2 shows the geometric relationship between the image acquisition parameters for a simple pinhole camera model.

Figure 2: The geometric relation between the image acquisition parameters of a simple pinhole camera model.
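As a quick illustration of Eq. (2), the following sketch uses hypothetical acquisition parameters and simply shows how the structuring element size scales with them:

import math

def structuring_element_size(fl_mm, wd_mm, sr_px, ss_mm, cs_mm):
    # Eq. (2): S = ceil( (FL / WD) * (SR / SS) * CS )
    return math.ceil((fl_mm / wd_mm) * (sr_px / ss_mm) * cs_mm)

# e.g., a 0.5 mm crack seen from 1 m with a 35 mm lens and a 4000 px / 24 mm sensor:
# structuring_element_size(35, 1000, 4000, 24, 0.5) -> 3 pixels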

2.2 Feature Extraction


After segmenting the patterns of interest, each pattern is assigned a set of finite values representing quantitative attributes or properties called features. These features should represent the important characteristics that help identify similar patterns. To determine discriminative features useful for classification purposes, this study initially defined and analyzed twenty-nine features. Eleven of these features were selected as potentially appropriate for further analysis. Finally, using the LDA (Fisher 1936) approach, the following five features were found to be sufficiently discriminative (i.e., preserving 99.4% of the cumulative feature ranking
criteria) for classification: (1) eccentricity (a scalar that specifies the eccentricity of
the ellipse that has the same second-moments as the segmented object), (2) area of the
segmented object divided by the area of the above ellipse, (3) solidity (a scalar
specifying the proportion of pixels in the convex hull that also belong to the

segmented object), (4) absolute value of the correlation coefficient (here, correlation
is defined as the relationship between the horizontal and vertical pixel coordinates),
and (5) compactness (the ratio between the square root of the extracted area and its
perimeter). The convex hull for a segmented object is defined as the smallest convex
polygon that can contain the object. The above features are computed for each
segmented pattern.
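A sketch of how these five features could be computed for one segmented pattern, using scikit-image region properties; the feature definitions are paraphrased from the text above, and this is not the authors' implementation:

import numpy as np
from skimage import measure

def pattern_features(binary_mask):
    props = measure.regionprops(binary_mask.astype(int))[0]
    # area of the ellipse with the same second moments as the segmented object
    ellipse_area = np.pi * (props.major_axis_length / 2) * (props.minor_axis_length / 2)
    rows, cols = props.coords[:, 0], props.coords[:, 1]
    corr = abs(np.nan_to_num(np.corrcoef(rows, cols)[0, 1]))      # |correlation| of pixel coordinates
    return np.array([
        props.eccentricity,                       # (1) eccentricity of the equivalent ellipse
        props.area / ellipse_area,                # (2) object area / ellipse area
        props.solidity,                           # (3) solidity
        corr,                                     # (4) absolute correlation coefficient
        np.sqrt(props.area) / props.perimeter,    # (5) compactness
    ])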

2.3 Classification
In this study, a feature set consisting of 1,910 non-crack feature vectors and 3,961
synthetic crack feature vectors was generated to train and evaluate the classifiers.
About 60% of this set was used for training, while the remaining feature vectors were
used for validation and testing. Note that due to the lack of access to a large number
of real cracks, randomized synthetic cracks were generated to augment the training
database. For this reason, real cracks were manually segmented and an algorithm was
developed to randomly generate cracks from them. The non-crack feature vectors
were extracted from actual scenes. The performance of several SVM and NN
classifiers was evaluated. Eventually, an SVM with a 3rd-order polynomial kernel and a 3-layer feedforward NN with 10 neurons in the hidden layer and 2 output neurons were used for classification. A nearest-neighbor classifier was used as a benchmark against which to evaluate the above classifiers.
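A hedged sketch of this training setup using scikit-learn; the split proportion, kernel order, and hidden-layer size follow the text, but the code is an assumption rather than the authors' actual implementation:

from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def train_crack_classifiers(X, y):
    # X: feature vectors (five features per pattern); y: labels (1 = crack, 0 = non-crack)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)
    svm = SVC(kernel="poly", degree=3).fit(X_tr, y_tr)                            # 3rd-order polynomial kernel
    nn = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X_tr, y_tr)   # 10 hidden neurons
    return {"SVM": svm.score(X_te, y_te), "NN": nn.score(X_te, y_te)}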

Table 1 summarizes the performances of these three classifiers. In this table,


‘accuracy’ is the proportion of true classifications in the test set, ‘precision’ is the
proportion of true positive classifications against all positive classifications,
‘sensitivity’ is the proportion of actual positives that were correctly classified, and
‘specificity’ is the proportion of negatives that were correctly classified. Since the
latter two quantities are insensitive to changes in the class distribution, they were used
to evaluate the classifier performances in this study. This table shows that the
proposed SVM and NN approaches have very close performances. Their performance
is better than a nearest-neighbor classifier.

Table 1: The performance of different classifiers on synthetic data


Classifier                 Accuracy (%)   Precision (%)   Sensitivity (%)   Specificity (%)
Neural Network             95.57          97.60           95.91             94.84
Support Vector Machine     95.06          97.85           94.98             95.25
Nearest Neighbor           88.93          92.30           91.37             83.69

2.4 Multi-Scale Crack Map


In order to obtain a crack map, the crack detection procedure described above was
repeated using different structuring elements (i.e., different scales). Note that the
extracted multi-scale binary crack map is the union of the detected cracks using
different structuring elements. The proposed crack map can be formulated as:

1 k  [ S min , m]; C k (u , v )  1,
J m (u , v )  
0 Otherwise,

where Jm is the crack map at scale (i.e., structuring element) m, Smin is the minimum
structuring element size, Ck is the binary crack image obtained by using k as the
structuring element, and u and v are the pixel coordinates of the crack map image.
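A minimal sketch of the multi-scale union, reusing the hypothetical segmentation function introduced in the sketch for Section 2.1.1:

def multiscale_crack_map(gray, s_min, s_max):
    # Union of the binary crack images obtained with structuring elements from s_min to s_max
    crack_map = None
    for k in range(s_min, s_max + 1):
        binary = segment_cracks(gray, k) > 0
        crack_map = binary if crack_map is None else (crack_map | binary)
    return crack_map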
3. EXPERIMENTAL RESULTS AND DISCUSSION
In order to evaluate the overall performance of the proposed crack detection algorithm, a test set consisting of 220 real concrete crack images and 200 non-crack images was used. Table 2 summarizes the performance of the detection system for real patterns. The performance of the system based on the NN is slightly better than the one based on the SVM, so the former is used for the rest of the experiments in this study. The minimum length of the detected cracks was set to 10 mm.

Table 2: The overall performance of the proposed system using real data
Classifier                 Accuracy (%)   Precision (%)   Sensitivity (%)   Specificity (%)
Neural Network             79.5           78.4            84.1              74.5
Support Vector Machine     78.3           76.8            84.1              72.0

Figure 3 shows the detected cracks in a concrete beam under flexural stress. Each red box indicates the borders of a detected crack. As can be seen, the system was able to detect almost all cracks. In this figure, there are also a few false positives, caused mainly by the handwriting on the concrete. Note that there are several edges and objects in Figure 3(c), yet the proposed system correctly detected the real cracks. Structuring element sizes of 4 to 22 pixels were used to extract the cracks in these images. The images are 2 megapixels, and it took 74 seconds on average to process each image on an AMD Athlon II X4 (2.6 GHz) processor.

Figure 3: Detected cracks in concrete beams under flexural stress (panels a-c). Each detected crack is surrounded by a red box.

4. SUMMARY
Current visual inspection of civil structures, which is the predominant inspection
method, is highly qualitative. An inspector has to visually assess the condition of a
structure. If a region is inaccessible, an inspector uses binoculars to detect and
characterize defects. There is an urgent need for developing autonomous quantitative
approaches in this field. In this study, a novel adaptive crack detection procedure is
introduced. A morphological crack segmentation operator is introduced to extract
crack-like patterns. The structuring element parameter for this operator is
automatically adjusted based on the camera focal length, object-camera distance,
camera resolution, camera sensor size, and the desired crack thickness. Appropriate
features are extracted and selected for each segmented pattern using the LDA
approach. The performances of an NN, an SVM, and a nearest-neighbor classifier are evaluated in classifying cracks versus non-crack patterns. A multi-scale crack map is
obtained to represent the detected cracks. The authors are developing an autonomous
crack quantification approach based on the obtained crack map from this approach.

5. ACKNOWLEDGEMENTS
This study was supported in part by grants from the National Science Foundation.

REFERENCES
F. S. Abas and K. Martinez, “Classification of painting cracks for content-based
analysis,” Proceedings of the SPIE - The International Society for Optical
Engineering, vol. 5011, pp. 149–160, January 2003, Santa Clara, CA, USA.
I. Abdel-Qader, O. Abudayyeh, and M. E. Kelly, “Analysis of edge-detection
techniques for crack identification in bridges,” Journal of Computing in Civil
Engineering, vol. 17, no. 4, pp. 255–263, October 2003.
I. Abdel-Qader, S. Pashaie-Rad, O. Abudayyeh, and S. Yehia, “PCA-based algorithm
for unsupervised bridge crack detection,” Advances in Engineering Software,
vol. 37, no. 12, pp. 771–778, December 2006.
W. Benning, S. Görtz, J. Lange, R. Schwermann, and R. Chudoba, “Development of an algorithm for automatic analysis of deformation of reinforced concrete structures using photogrammetry,” VDI Berichte, no. 1757, pp. 411–418, 2003.
M. J. Chae, “Automated interpretation and assessment of sewer pipeline,” Ph.D.
dissertation, Purdue University, December 2001.
L.-C. Chen, Y.-C. Shao, H.-H. Jan, C.-W. Huang, and Y.-M. Tien, “Measuring
system for cracks in concrete using multitemporal images,” Journal of
Surveying Engineering, vol. 132, no. 2, pp. 77–82, May 2006.
R. A. Fisher, “The use of multiple measurements in taxonomic problems”, Annals of
Eugenics 7 (1936) 179-188.
Y. Fujita and Y. Hamamoto, “A robust method for automatically detecting cracks on
noisy concrete surfaces,” Next-Generation Applied Intelligence. Twenty-
second International Conference on Industrial, Engineering and Other
Applications of Applied Intelligent Systems IEA/AIE 2009, pp. 76–85, June
2009, Tainan, Taiwan.
I. Giakoumis, N. Nikolaidis, and I. Pitas, “Digital image processing techniques for the

detection and removal of cracks in digitized paintings,” IEEE Transactions on


Image Processing, vol. 15, no. 1, pp. 178–188, January 2006.
M. R. Jahanshahi, J. S. Kelly, S. F. Masri, and G. S. Sukhatme, “A survey and
evaluation of promising approaches for automatic image-based defect
detection of bridge structures,” Structure and Infrastructure Engineering, vol.
5, no. 6, pp. 455–486, December 2009.
M. S. Kaseko, Z.-P. Lo, and S. G. Ritchie, “Comparison of traditional and neural
classifiers for pavement-crack detection,” Journal of Transportation
Engineering, vol. 120, no. 4, pp. 552–569, July-August 1994.
O. Moselhi and T. Shehab-Eldeen, “Classification of defects in sewer pipes using
neural networks,” Journal of Infrastructure Systems, vol. 6, no. 3, pp. 97–104,
September 2000.
M. Nieniewski, L. Chmielewski, Jozwik, and Sklodowski, “Morphological detection
and feature-based classification of cracked regions in ferrites,” Machine
GRAPHICS and VISION, vol. 8, no. 4, pp. 699–712, 1999.
N. Otsu, A threshold selection method from gray-level histograms, IEEE
Transactions on Systems, Man, and Cybernetics 9 (1) (1979) 62-66.
P. Salembier, Comparison of some morphological segmentation algorithms based on
contrast enhancement. Application to automatic defect detection, Proceedings
of the EUSIPCO-90 - Fifth European Signal Processing Conference (1990)
833-836.
M. Siegel and P. Gunatilake, “Remote enhanced visual inspection of aircraft by a
mobile robot,” IEEE Workshop on Emerging Technologies, Intelligent
Measurement and Virtual Systems for Instrumentation and Measurement -
ETIMVIS ’98, May 1998, St. Paul, MN, USA.
S. K. Sinha, P. W. Fieguth, and M. A. Polak, “Computer vision techniques for
automatic structural assessment of underground pipes,” Computer-Aided Civil
and Infrastructure Engineering, vol. 18, no. 2, pp. 95–112, February 2003.
S. K. Sinha and P. W. Fieguth, “Morphological segmentation and classification of
underground pipe images,” Machine Vision and Applications, vol. 17, no. 1,
pp. 21–31, April 2006.
S. Tsao, N. Kehtarnavaz, P. Chan, and R. Lytton, “Image-based expert-system
approach to distress detection on CRC pavement,” Journal of Transportation
Engineering, vol. 120, no. 1, pp. 62–64, January-February 1994.
K. C. Wang, S. Nallamothu, and R. P. Elliott, “Classification of pavement surface
distress with an embedded neural net chip,” Artificial Neural Networks for
Civil Engineers: Advanced Features and Applications, pp. 131–161, January-
February 1998.
T. Yamaguchi and S. Hashimoto, “Automated crack detection for concrete surface
image using percolation model and edge information,” 32nd Annual
Conference on IEEE Industrial Electronics, pp. 3355–3360, November 2006.
S.-N. Yu, J.-H. Jang, and C.-S. Han, “Auto inspection system using a mobile robot
for detecting concrete cracks in a tunnel,” Automation in Construction, vol. 16,
no. 3, pp. 255–261, May 2007.
Developing an Efficient Algorithm for Balancing Mass-Haul Diagram

Khaled Nassar1, Ossama Hosny2, Ebrahim A. Aly3, Hesham Osman4


1Associate Professor, Department of Construction and Architectural Engineering, the American University in Cairo
2Professor, Department of Construction and Architectural Engineering, the American University in Cairo
3Research Assistant, Department of Construction and Architectural Engineering, the American University in Cairo
4Assistant Professor, Department of Civil Engineering, Cairo University

ABSTRACT
To minimize the total cost of earthwork, a number of Linear and Integer Programming techniques have been used, considering the various factors involved in this process. These techniques often ensure a global optimum solution for the problem. However, they require sophisticated formulations and are computationally expensive. Therefore, these techniques are of limited use in industry practice.
Mass-Haul Diagrams (MD) have been an essential tool for planning earthwork
construction for many applications. One of the most common heuristics that is used
widely by practicing engineers in this field to balance the MD is the “Shortest-Haul-
First” strategy. Using this heuristic in balancing the MD is usually carried out either
graphically on drawings or manually by computing values from the Mass-Haul Diagram
itself. However, performing this approach graphically or manually is fairly tedious and time consuming. Moreover, manual and graphical approaches are prone to errors. A robust algorithm that can automatically balance the MD is, therefore, needed.
This research presents a formal definition of an algorithm that uses a sequential
pruning technique for computing balances of Mass-Haul Diagrams automatically. It
shows that the new algorithm is more efficient than existing Integer Programming techniques, as it runs in O(log n) time in most cases.

INTRODUCTION
The primary use of MD is to determine the points where the cuts and fills are balanced
out as well as planning for haul routes and distances. Mathematical programming
models of earthwork allocations have been formulated aiming at minimizing the total
earthwork costs considering various technological, physical and operational constraints
(Akay 2004; Zhand and Wright 2004; Shahram et al 2007). These models usually solve
the optimization problem using Linear or Integer Programming (LP and IP) Techniques.
Although these techniques ensure a global optimum solution for the problem, they require sophisticated formulations for their setup and definition. Therefore, these techniques are of limited use in practice. On the other hand, one of the
most common strategies used by practicing engineers in this field to balance the MD is
the “Shortest-Haul-First” strategy. There are two ways for determining a balance for an


MD using this approach: graphically and manually. Performing this approach graphically or manually is fairly tedious and depends on the accuracy of the engineer doing the calculations. Current commercial software such as Autodesk’s Civil 3D can construct the MD automatically, but it does not offer any balancing functionality. In this research, we present an algorithm that automatically balances Mass Diagrams. The algorithm can, therefore, be used as an automated tool to determine haul routes, quantities and distances, as well as the amount of excess or borrowed material. The algorithm proposed in this research runs in O(log n) time, and it is shown to be more efficient than existing methods in both simple balancing and more advanced cost- and grade-based optimization of haul routes.

BALANCING THE MASS-HAUL DIAGRAM


Mass Diagrams are plots of the cumulative volumes of cuts and fills along an alignment. Typically, the Mass Diagram is plotted below the route profile, with the ordinate at any station representing the sum of the volumes of the cuts and fills up to that station. An example of a Mass Diagram is shown in Figure 1. The horizontal axis of the Mass Diagram can be thought of as a primary balance line. To determine which hauls to take first, auxiliary balance lines need to be drawn at the troughs of the graph to indicate the remaining balances. The primary and auxiliary balance lines are essential in finding the most economic balance for the MD and are used in the algorithm developed in this research. The annotations used in the description of the algorithm are shown in Table 1.

P      Vector of all points of the diagram
Ppos   Vector of the positive (+ve) points of the Mass Diagram
Pneg   Vector of the negative (-ve) points of the Mass Diagram
Lp     The length of any P vector
Lpos   The length of any Ppos vector
Lneg   The length of any Pneg vector
Lz     The length of any Z vector
Linv   The length of any Inv vector
Inv    Vector of trough points
i      Point index
x      Station
y      Mass Diagram value
T      Trough points (local minimums)
xjT    Station of a point in the troughs vector
yjT    Mass Diagram value of a point in the troughs vector
Z      Vector of zero points
xiZ    Station of a point in the zeros vector
yiZ    Mass Diagram value of a point in the zeros vector

Table 1: Algorithm annotations

DEVELOPING THE ALGORITHM


The structure of the algorithm is shown in figure 2. Firstly, the Mass Diagram is divided
into “Balances” or stations of cuts that will balance with stations of fills. Each point on
the MD consists of three items (i,x,y) where (i) is the point index, (x) is the station and
(y) is the Mass Diagram value. These are stored in vector P. The algorithm employs a
sequential pruning process where the MD is balanced by cutting it down into basic
balanced shapes. There are two possible kinds of balanced shapes; “Bell balances” and
“Trapezoidal balances”. The first type, “Bell balances”, are the bell shaped segments of
the diagram which can be identified by three values for x and one value for y; the x
values are the start station, the end station and the station corresponding to the
maximum Mass Diagram value (maximum y). The second type is “Trapezoidal
balances” which are the trapezoidal shaped segments formed after drawing the auxiliary
balance lines; this type will be defined by four values of x – two for the cut stations and
two for the fill stations- and a value of y as shown in figure 3. In fact, the algorithm
employs a series of forward and backward passes to identify each of those two kinds of
balances on both sides of the x-axis (primary balance line).

Locating intercepts and trough points


The first step is to add interception points to the vector P so that it becomes possible
to separate the different balance segments. This is done by adding intercepts at the
points where the diagram values cross the primary balance line; these intercepts will be
at the points where the Mass Diagram value (y) equals zero and the exact x location of
intercepts can be determined by scanning and interpolating. Each of these intercepts lies between a positive-value Mass Diagram point followed by a negative-value point, or vice versa. In other words, wherever (yi > 0 and yi+1 < 0) or (yi < 0 and yi+1 > 0), a new zero point (i, xz, yz) will be added, and the station of this point
will be calculated by the following equation:

dx = -yi (xi+1 - xi) / (yi+1 - yi)        (1)

Therefore,
xz = xi + dx        (2)
The new zero points are added to the original vector P in their respective locations in
relation to the other points. Also a new vector Z is created, which carries these new
points where y value equals zero. The points stored in the Z vector shall be sorted in an
ascending order with respect to the x value. Before we start processing the diagram, the
original vector P is divided into two sub vectors; one carries the points with Mass
Diagram positive value points Ppos, and the other carries the negative value points
Pneg, i.e. two new diagrams are created, one above the primary balance line and the
other below it as in figure 3. In this step also, the trough points are stored in a new
vector Inv. For each trough, the preceding and succeeding y values are greater than the
y value of the trough itself, i.e. yi-1>yi and yi+1 >yi. As a result, each point that fulfills
this condition will be stored in the Inv vector and as we did for the Z vector, the Inv
vector’s points will be sorted in an ascending order with respect to the x value.
It is important to note that when scanning the P vector for zero points, a value (±ε)
that is slightly larger or smaller than zero can be considered, i.e. a volume of soil that is

small enough so that the earthwork planner can consider it negligible. This is important
practically and also computationally and it varies from one project to another, as it
depends on the type of soil, the cost of moving this soil and the level of accuracy
needed for the project.
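A small sketch of this preprocessing step, assuming the diagram is held as a list of (station, value) pairs; the interpolation follows Eqs. (1) and (2), and eps is the negligible-volume tolerance discussed above:

def add_zero_points(P, eps=0.0):
    # Insert interpolated zero crossings into P and collect them in Z
    out, Z = [], []
    for (x1, y1), (x2, y2) in zip(P, P[1:]):
        out.append((x1, y1))
        if (y1 > eps and y2 < -eps) or (y1 < -eps and y2 > eps):
            xz = x1 + (-y1) * (x2 - x1) / (y2 - y1)   # Eqs. (1) and (2)
            out.append((xz, 0.0))
            Z.append((xz, 0.0))
    out.append(P[-1])
    return out, sorted(Z)

def find_troughs(P):
    # Trough points (local minimums), sorted in ascending order of station
    return sorted((x, y) for (_, yp), (x, y), (_, yn) in zip(P, P[1:], P[2:])
                  if y < yp and y < yn)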
Getting the Bell balances
Starting with the first point in the Inv vector, we check, for i, j = 0, whether xjInv > xiZ; if this condition is true, we proceed to check whether xiZ < xjInv < xi+1Z. These two conditions are necessary to identify Bell balances. For each Bell balance, we store the values [Xstrt, Xmax, Xend, Ymax] as shown in figures 3 and 4. Once these Bell balances have been stored, they can then be pruned from the diagram (i.e., the points between Xstrt and Xend are removed from P).
Getting trapezoidal balances
After trimming the diagram, auxiliary balance lines are drawn to divide the diagram into more balances, as shown in figures 4 and 5. This is done by assigning the minimum value in the vector Inv to a dynamic pointer, Invmin = ykInv, where k is a counter for the values in the Inv vector. Invmin represents the height at which the first auxiliary balance line should be drawn. To define this trapezoidal balance, two extra points are needed, which are determined from the intersection of the auxiliary balance line and the boundaries of the Mass Diagram after and before Invmin. Figure 5 shows how this interpolation is accomplished. In figure 5, y@cut represents the first point before Invmin. To interpolate, we scan the P vector for the condition that yi < y@cut < yi+1, and we then interpolate between yi and yi+1 to get x2cut, which can be calculated by the equation:

dx = (y@cut - yi)(xi+1 - xi) / (yi+1 - yi)        (3)

x2cut = xi + dx (4)

The same will be done to get x2fill. The trapezoidal balance is stored by its parameters
[X1cut, X2cut, X1fill, X2fill,YkInv].

Trimming the diagram and cycling the process


After storing the first trapezoidal balance, the diagram will be trimmed to the
unbalanced segments. To keep only the Mass Diagram values that shape the unbalanced
segments, the value of Invmin is subtracted from all the y values of the points in the Ppos
vector for stations from X1cut to X1fill. Graphically, the result of this will be a downward
shifted graph with new troughs and zero points (as shown in figure 5) and the new Ppos
will only contain the points whose y values are >= 0. This completes the first forward
pass, and then we go backward to start scanning the remaining points in the P vector in
another forward pass and this cycle continues until no points are left in the P vector.
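A brief sketch of this trimming step, under the same list-of-(station, value) representation assumed earlier; it subtracts Invmin between the cut and fill stations and keeps only the points that remain on or above the new balance line:

def shift_down(Ppos, inv_min, x1_cut, x1_fill):
    # Subtract the auxiliary balance line height and discard the balanced points
    shifted = [(x, y - inv_min) if x1_cut <= x <= x1_fill else (x, y) for x, y in Ppos]
    return [(x, y) for x, y in shifted if y >= 0]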

Processing the negative segments of the mass diagram and open-ended diagrams
As mentioned earlier, the P vector is divided into two sub vectors; Ppos and Pneg.
Processing the negative segments of the diagram will not be much different than
processing the positive ones. Instead of dealing with the negative y values of the points
in the Pneg vector which may require some calculations to be modified, the Pneg
vector will be mirrored and processed exactly as the Ppos vector. In several cases, the soil along the alignment will not balance; this will appear as open ends. These open ends appear in our approach as remaining values in the P vector (either Ppos or Pneg) after the last forward pass. Finally, the output of this program will be in
the form of a series of sequential balances along the MD, e.g. for Bell balance: “Cut
from stations (Xstart) to (Xpeak) will fill in stations from (Xpeak) to (Xend)” and for
Trapezoidal balance: “Cut from stations (X1 cut) to (X2 cut) will fill in stations from (X2
fill) to (X1 fill)” as shown in figure 6.

ANALYZING THE COMPLEXITY OF THE ALGORITHM


The above algorithm was implemented in C++. The estimation of the time efficiency of the proposed algorithm depends on each step of the algorithm, and the time required to perform a step must be guaranteed to be bounded above by a constant. Thus, in order to assess the time complexity of the proposed algorithm, it is broken down into different steps and the complexity of each step is assessed. The overall complexity of the algorithm is equal to the worst complexity over all the steps. The steps and the pseudo code for the algorithm are shown in figure 7. It is clear that the most expensive step of the algorithm is the sort step, which requires O(n log n) time. However, the step of getting the primary balances may require O(n²) in the worst case if both of the nested loops are needed to complete the computation. The lower bound of this step is a single loop, which is O(n).

COMPUTATIONAL EXPERIMENT
A case study was also used to test this approach. A rural 4-lane highway in Upper Egypt was chosen. A Mass Diagram was created for the road, given the existing contours and the designed vertical curves and horizontal alignments. Commercial software (Civil 3D) was used to calculate the cut and fill volumes for the road and, hence, plot the Mass Diagram. The road was composed of 327 stations along the alignment with varying cut and fill sections. A total of 81 different balances were identified along the length of the project. Therefore, the output of this Mass Diagram was a set of 81 different segments that balance the cuts and fills. These balances were then analyzed further. It is also possible to calculate the average hauling distance as well as the average volume (or mass in tons) of soil moved between each station. The product of these two values represents the average ton-meters of work to be done. In this example, the average hauling distance was 112 meters with an average volume of 209 cubic meters. It is also possible to incorporate swell and shrinkage factors, as well as to limit the economical hauling distance by specifying a window for the algorithm, similar to the concept of free-haul in some commercial software.

CONCLUSION
The algorithm employs a sequential pruning process, where the MD is balanced by cutting it down into simpler closed balanced shapes. While computational experiments on LP and IP solutions of the problem reported polynomial complexity of the heuristic procedure and exponential worst-case complexity of traditional enumerative methods, the algorithm presented in this research runs in O(log n) time. The proposed algorithm is able to balance MDs more efficiently than other methods in the literature. This means that it can help construction planners deal with long alignments and projects that extend for any number of stations.

REFERENCES

Akay, A. E. (2004) “A New Method of Designing Forest Roads” Turkish Journal of Agriculture and Forestry, 28(4), 273–279.
Moreb, A. A. (1996) “Linear programming model for finding optimal roadway grades that minimize earthwork cost” European Journal of Operational Research, 93, 148–154.
Moreb, A. A., and Bafail, A. M., (1994) ‘‘A Linear Programming Model Combining
Land. Levelling and Transportation Problem.’’ Journal of the Operations Research
Society of America, 45-12, 1418–1424.
D. Henderson, D.E. Vaughan, S.H. Jacobson, R.R. Wakefield, and E.C. Sewell, (2003)
“Solving the shortest route cut and fill problem using simulated annealing”. European
Journal of Operational Research, 145:72–84.
Jayawardane, A. K. W., and Harris, F. C. (1990) “Further development of Integer Programming in earthwork optimization” Journal of Construction Engineering and Management, 116(1), 18–34.
Pham, D. T., and Li, D. (2006) “Fuzzy systems for modeling, control, and diagnosis”,
Elsevier Science, New York.
Iman, R. L., Davenport, J. M., and Zeigler, D. K. (1980) “Latin Hypercube Sampling: A Program User's Guide.” Technical Report SAND79-1473, Sandia Laboratories, Albuquerque.
A. Burak Goktepe, and A. Hilmi Lav (2003) “Method for Balancing Cut-Fill and Minimizing the Amount of Earthwork in the Geometric Design of Highways” Journal of Transportation Engineering.
Shahram Mohamad Karimi, Seyed Jamshid Mousavi, Ali Kaveh, and Abbas Afshar (2007). “Fuzzy Optimization Model for Earthwork Allocations with Imprecise Parameters.” Journal of Construction Engineering and Management.

Figure 1. Typical land profile and its corresponding Mass Diagram

Figure 2. The structure of the proposed algorithm



Figure 3. Types of balances and the algorithm for dividing P into positive and negative value vectors

Figure 4. Segments that are balanced over the primary balance line. As there are no troughs
between xz1 and xz2, the circled segments belong to this group of segments

Figure 5. Defining and storing the trapezoidal balance and shifting down the MD algorithm

Figure 6. The result as it appears in the program developed to test the algorithm

//Preprocessing
For (i = 0; i <= Lp; i++)
    If (yi > 0 and yi+1 < 0) or (yi < 0 and yi+1 > 0) then
        interpolate to get y = 0 and the corresponding station (x); add the point to P and to Z
    ElseIf (yi < yi+1) and (yi < yi-1) and yi != 0 then
        store the point in the troughs vector Inv
Sort the troughs vector Inv in ascending order by (x) value

//Intercepts
For (i = 0; i <= Lp; i++)
    If yi == ±ε then store the point in the zeros vector Z
Sort the zeros vector Z in ascending order by (x) value

//Getting primary balances
For (i, j = 0; i <= Lz; j <= Linv)
    If !(xjInv > xiZ) then next j
    ElseIf xiZ < xjInv < xi+1Z then
    Else store Bellbalance (Xstart, Xpeak, Xend, Ypeak)

//Removing balanced segments
For (i = 0; i <= Lp; i++)
    If Xstart < xi < Xend then remove the point from P

Figure 7. Complexity of each of the steps of the algorithm and the pseudo code
Standardization of Structural BIM
N. Nawari1
1School of Architecture, University of Florida, Gainesville, FL 32611-5702, Email: nnawari@ufl.edu

ABSTRACT

The BIM standard establishes standard definitions for building information exchanges to
support critical business contexts using standard semantics and ontologies. This Standard
forms the foundation for accurate and efficient communication and commerce that are needed
by the construction industry. The Standard is still in its infancy and the evolution and
maturity of BIM Standard will depend largely on the efforts and contribution of various
disciplines involved in design, construction, and management of a facility. This paper
focuses on advancing the standardization of the BIM model for structural analysis and design. Specifically, the paper addresses the Information Delivery Manual (IDM), which aims to provide the integrated reference for the processes and data required by BIM by identifying the discrete processes undertaken within structural design, the information required for their execution, and the results of that activity. Furthermore, it addresses Model View Definitions (MVDs) for structural design and analysis to create a robust process for the seamless, efficient, and reproducible exchange of accurate and reliable structural information that is widely and routinely acknowledged by the industry.

INTRODUCTION

The ultimate success of Building Information Modeling in Architecture, Engineering and


Construction (AEC) industry will in part depend on the ability to capture all relevant data in
the BIM model, and to successfully exchange data between the various project participants.
One of the means of doing this information exchange is through a standardized data exchange
format. To achieve these goals, improve construction productivity in the United States, and increase the competitiveness of the US construction industry internationally, the National BIM Standard (NBIMS) was established to provide the digital schema and requirements for efficient BIM application in the AEC industry. The vision for NBIMS is “an improved planning, design, construction, operation, and maintenance process using a standardized machine-readable information model for each facility, new or old, which contains all appropriate information, created or gathered, about that facility in a format useable throughout its lifecycle by all.” The organization, philosophies, policies, plans, and working methods comprise the NBIMS initiative, and the end product will be the National BIM Standard (NBIM Standard or NBIMS), which includes classifications, guides, practice standards, specifications, and consensus standards.
The BIM Standard focuses on the information exchanges between all of the individual actors in all of the phases of a construction project lifecycle. NBIMS will be an industry-wide standard for organizing the actors, work phases, and facility cycles where exchanges are likely and, for each of these exchange zones, for stating the elements that should be included in the exchange between parties.
Many of the aspects of this overarching goal will be accomplished by a large
conglomerate of players. The area that BIM Standard is focused on is the design of the theory
and structure for a new way of thinking about facilities and structures as information models.


Specifically, the BIM Standard recognizes that a BIM requires a disciplined and transparent
data structure which supports the following:
• A specific business case that includes an exchange of building information.
• The users’ view of data that is necessary to support the business case.
• The digital exchange mechanism for the required information interchanges (software interoperability).
This combination of content selected to support user need and described to support open
digital exchange are the basis of information exchanges in the NBIM Standard. All these
levels must be coordinated for interoperability and this is the focus of the NBIMS Initiative.
Therefore, in a nutshell, the primary drivers for defining requirements for the BIM Standard are industry-standard processes and the associated information exchange requirements.
In addition, even as the BIM Standard is focused on open and interoperable information
exchanges, the BIM Standard Initiative addresses all related business functioning aspects of
the facility lifecycle. BIM Standard is chartered as a partner and an enabler for all
organizations engaged in the exchange of information throughout the facility lifecycle.
The success of a Building Information Model lies in its ability to encapsulate, organize, relate, and deliver information for both users and machines in a simple, readable format. These relationships must exist at detailed levels, relating, for example, a door to its frame or even a
nut to a bolt, but maintain relationships from a detailed level to a world view. When working
with as large a universe of materials as exists in the built environment there are many
traditional, vertical integration points that must be crossed and many different “languages”
that must be understood and related. Architects and engineers, as well as the real estate
appraiser or insurer must be able to speak the same language and refer to items in the same
terms as the first responder in an emergency situation. This also carries to the world view of
being able to translate to other international languages in order to support the multinational
corporation. In order to standardize these many options and produce a comprehensive viable
Standard, all organizations have to be represented and solicited for input.
One of the primary roles of BIM Standard is to set the ontology and associated common
language that will allow information to be machine readable between team members and
eventually provide direction and, add quality control to what is produced and called a BIM
model. Ultimately, these boundaries will encompass everyone who interacts with the built
and natural environments. In order for this to occur, the team members who share
information must be able to map to the same terminology. Common ontologies will allow
this communication to occur.
The recommended process for generating a NBIMS specification, and implementation is
described in NBIMS, Vol. 1, Section 5 (NIBS 2007). The core components of NBIMS (see
figure 1) include the Information Delivery Manual (IDM), and Model View Definition
(MVD).

Figure 1. NBIMS main components.



The Information Delivery Manual (IDM) is adapted from international practices to facilitate the identification and documentation of information exchange processes and
requirements. IDM is the user-facing phase of BIM Standard exchange standard development
with results typically expressed in readable form. The Model View Definition (MVD) is the
software developer interface of exchange. MVD is conceptually the process which integrates
Exchange Requirements (ER) coming from many IDM processes to the most logical Model
Views that will be supported by software applications. Implementation of these components
will specify structure and format for data to be exchanged using a specific version of the
Industry Foundation Classes (IFC) standard to create and sustain a BIM application.
Applying these development steps to the structural domain is the core activity for
standardization of structural BIM. Figure 2 illustrates the main areas of information
exchanges that involve structural systems information. Each of these main areas has a
number of sub-exchanges that require further development. An early effort in standardization
of structural BIM is represented by the Applied Technology Council project ATC-75. The
ATC-75 project is only a beginning and only started to define the most basic structural
exchange parameters.

Figure 2. Main structural information exchanges.


Considering the structural domain, there is an almost limitless set of structural attributes that can be associated with a BIM model and that can be exchanged across different activities or disciplines. They range from member ID, geometric and material characteristics to loads and boundary conditions, cost and schedule/sequence, to such things as LEED attributes or in fact any other parameter that needs to be exchanged.
The ATC-75 project is focused on the following nine structural object categories or elements: Story, Grid, Column, Beam, Brace, Wall, Slab, Footing and Pile. For each of these elements, a similar set of attributes was defined. For example, the Column element attributes consist of: Column Axis, Profile Name, Material Name, Grade, Length, Roll, Cardinal Point, Element ID, Schedule Mark, Base Reference Story, Top Reference Story, Base Offset and Top Offset. There could of course be many more important attributes, and it is expected that these will evolve over time to incorporate many more.
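Purely as an illustration of the kind of attribute set such an exchange carries, the following hypothetical record mirrors the ATC-75 Column attributes listed above; it is not an official NBIMS or IFC schema, and the field names and types are assumptions.

from dataclasses import dataclass

@dataclass
class ColumnExchange:
    # Illustrative container for the ATC-75 Column exchange attributes
    element_id: str
    schedule_mark: str
    column_axis: str            # e.g., a named grid line or axis reference
    profile_name: str
    material_name: str
    grade: str
    length: float
    roll: float                 # rotation about the column axis
    cardinal_point: int         # insertion point on the profile
    base_reference_story: str
    top_reference_story: str
    base_offset: float
    top_offset: float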
There is no question that structural BIM Standard will bring tremendous value to
structural engineering by increasing interoperability and productivity of both design and
construction (Nawari et. al. 2010). The next sections look into the main phases for
developing standardized structural BIM.

STRUCTURAL INFORMATION DELIVERY MANUAL (IDM)


Structural analysis modeling requires a wide range of input data, which includes building
orientation, building geometry including the layout and configuration of spaces, construction
materials including mechanical and thermal properties of all structural elements, type of

connections, foundations and boundary conditions, loading conditions, and other MEP
related information.
The output results of structural analysis and design may include the assessment of the
building’s deformation and strength for compliance with regulations and targets, overall
estimate of the safety level by the building, and estimate of the quantities of structural
materials used.
The structural IDM is the document that describes the processes and requirements to set
up BIM models for structural analysis and design purposes. It focuses on the relationship of
processes and data. Structural designers are currently faced with the fact that BIM software
tools do not allow for full interoperability with their structural analysis and design software; in addition, these tools are upgraded quite frequently with new features. Until some of these
features are added, however, the designer has to use “workarounds” to get the paper
documentation to communicate design intent. The important issue here is to define the level
of detail desired for the modeling process. The structural IDM provides the foundation for
standardized data exchange. The main objectives of the IDM include:
i. Define the processes within the structural design project lifecycle for which
engineers require information exchange.
ii. Describe the results of process execution that can be used in subsequent processes.
iii. Identify the actors sending and receiving information within the process.
iv. Make certain that definitions, specifications and descriptions are provided in a form
that is useful and easily understood by the target group.
The IDM development has two main components: one is the process map detailing the end-user processes and the information exchange between end users, as shown in Figure 3. The other component is the list of exchange requirements. The development of the IDM begins with definitions of the data exchange functional requirements and workflow scenarios for exchanges between architects, engineers, manufacturers, erectors, and general contractors, utilizing the ‘use case’ concept. A use case defines an exchange scenario between two well-defined roles for a specific purpose, within a specified phase of a building’s life cycle (Eastman et al., 2010). It is generally composed of more detailed processes and is embedded in a more aggregate process context. Most of the use cases are parts of larger collaborations, where multiple use cases provide a network of collaboration links with other disciplines. Such a composition of use cases is referred to as a process map.
The process map was created using the Business Process Modeling Notation (BPMN)
(www.bpmn.org), since the notation is adopted by buildingSMART and the National Institute
of Building Sciences (NIBS). Horizontal swim lanes are used for the major processes. Main
activity phases of typical structural analysis and design are identified along with their
relationship to sub processes (Figure 3). In addition to the standard BPMN notation the IDM
utilizes notation for information exchanges between activities called Exchange Models (see
Figure 4). The Exchange model requirement presents a link between process and data. It
applies the relevant information defined within an information model to fulfill the
requirements of an information exchange between two processes at a particular stage of the
project. Each exchange model is uniquely identified across all use cases and besides its name
carries an abbreviated designation of the use case it belongs to:
• AS_EM01 - Architectural, Structural Concept use case exchange models.
• AS_EM02 - Structural Concept, Structural Analysis use case exchange models.
• DD_EM01 - Preliminary Structural Analysis use case exchange models.
• DD_EM02 - Structural Analysis, Reinforced Concrete Design use case exchange models.
• DD_EM03 - Structural Analysis, Structural Steel Design use case exchange models.
• DD_EM04 - Structural Analysis, Structural Wood Design use case exchange models.
• DD_EM05 - Structural Analysis, Other Structural Design use case exchange models.

• DD_EM06 - Structural Analysis, Foundation and Geotechnical Design use case exchanges.
• DC_EM01 - Structural Design Review, Detailing and Fabrication use case exchange models.

Figure 3. Process map for structural design.

Figure 4. Exchange model notation.


In order to standardize the terms used in NBIMS use cases and to provide consistent
classification schemes for other information associated with building models, it is
recommended to utilize the Omniclass tables and codes defined by the Construction Specifications
Institute (CSI) (Eastman et al. 2010, Omniclass 2006) for cross-referencing among IDM specifications.
Descriptions of some of the main tasks on the process map shown in Figure 3 are given in the following tables:

Table 1a. Process Specification in Process Map.


Structural Concept
Type: Task
Name: Structural Concept
Omniclass Code: 31-20-10-00 Preliminary Project Description
Documentation: The engineer uses the concept model from the architect to provide feedback on the structural grid, structural system, major structural connection issues, and interfaces between materials and other structural and curtain wall systems.

Table 1b. Process Specification in Process Map.

Structural Design Development
Type: Task
Name: Structural Requirements
Omniclass Code: 31-20-20-00 Design Development
Documentation: Structural engineers review the architects' models and define the structural requirements for the building. This may include load calculations, boundary conditions and supports, bracing, type of connection design, diaphragm types, soil and foundation aspects, and other structural framing requirements.

The scope of the exchange requirement is the exchange of information about structural elements and systems. Each of the exchange models described above contains a wide range of exchange requirements to support the coordination of structural analysis and design requirements with general architectural form and spacing requirements. For instance, the exchange model AS_EM02 will include exchange requirements for a structural wall as depicted in Table 2 (ATC-75).
Table 2. Part of Model Exchange Requirements for a Wall.

Object: Wall. Priority: 1. Attribute: Thickness. Explanation: Dimensional thickness of the wall, applicable to a standard wall having a unique, non-changing thickness along the wall axis.
Object: Wall. Priority: 1. Attribute: Material. Explanation: Name of the material of the wall. It should be an indicator of the type of material (steel, concrete, timber) and not any specific material name. Only the material name should be exchanged, not the material properties such as density or specific weight.
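As an illustration only, the following Python sketch shows how the two wall attributes listed in Table 2 could be read back from an IFC file for checking purposes. It assumes the open-source IfcOpenShell library and a hypothetical file name; it is not part of the ATC-75 specification itself.

import ifcopenshell

# Hypothetical IFC file exported for the AS_EM02 exchange
model = ifcopenshell.open("AS_EM02_structural.ifc")

for wall in model.by_type("IfcWall"):
    material_name, thickness = None, None
    for rel in model.by_type("IfcRelAssociatesMaterial"):
        if wall not in rel.RelatedObjects:
            continue
        mat = rel.RelatingMaterial
        if mat.is_a("IfcMaterialLayerSetUsage"):
            layers = mat.ForLayerSet.MaterialLayers
            # Standard wall: a single, non-changing thickness along the wall axis
            thickness = sum(layer.LayerThickness for layer in layers)
            if layers and layers[0].Material is not None:
                material_name = layers[0].Material.Name
        elif mat.is_a("IfcMaterial"):
            material_name = mat.Name  # only the material name, not its properties
    print(wall.GlobalId, material_name, thickness)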

MODEL VIEW DEFINITION (MVD)

The model view definitions provide the framework that the software developers use to
define the IFC exchange format. It focuses on the relationship of application and data. The
process of developing the MVDs begins, as indicated earlier, with defining the IDM and its exchange
requirements by specifically identifying the object attributes to be exchanged and how they will be used,
both from the users' and the developers' point of view. For the case of AS_EM01 and AS_EM02, the list
of entities includes Story, Grid, Column, Beam, Brace, Wall, Slab, Footing and Pile.
The IFC schema contains a wide range of datasets as it covers the whole lifecycle of a
building and its environment. Software products should only deal with a subset of the full
IFC schema to avoid processing an overwhelming amount of data. Therefore, a model view
definition focuses on defining model subsets that are relevant for the data exchange between
specific application types. The goal is that software implementers only need to focus on the
parts of the IFC schema relevant to them.
The MVD structure consists of a number of levels. At the first level is a list of entities
that are relevant for the data exchange. Each entity is listed under a group such as “spatial
structure” or “architectural systems”.
At the second level is a list of concepts associated with a particular entity. These concepts
include basic information such as the name and description of the entity as well as specific
characterization related to the entity. Figure 5 shows the building story entity to illustrate
some of its associated concepts, which include spatial composition, placement and
geometric representation. Figure 6 expands the wall entity to illustrate further details about
the wall exchange requirements.
Finally at the last level is a list of implementer’s agreements associated with a particular
concept. Since IFC does not provide detailed information about how it should be used in
specific cases because of its wide scope and inclusive nature, making such decisions about
the use of IFC has been left to IFC implementers. These decisions are called implementer’s
agreements and they are documented as part of MVDs.
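To make the three-level structure more tangible, the following plain-Python sketch arranges entities, their concepts and the attached implementer's agreements in a nested data structure. It is illustrative only; the concept names and agreement texts paraphrase the description above rather than quoting the normative ATC-75 wording.

# Illustrative only: entity -> concepts -> implementer's agreements
mvd_fragment = {
    "IfcBuildingStorey": {
        "group": "spatial structure",
        "concepts": {
            "Name and Description": ["Storey name shall be unique within the building."],
            "Spatial Composition": ["Storey is aggregated into the building via IfcRelAggregates."],
            "Placement": [],
            "Geometric Representation": [],
        },
    },
    "IfcWall": {
        "group": "structural systems",
        "concepts": {
            "Material": ["Exchange the material name only, not the material properties."],
            "Thickness": ["Applicable to standard walls with constant thickness."],
        },
    },
}

for entity, spec in mvd_fragment.items():
    print(entity, "->", ", ".join(spec["concepts"]))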

Details of this entity are expanded in Figure 6.


Figure 5. ATC-75 MVD for Basic Structural Exchange Parameters.

Figure 6. ATC-75 MVD for Basic Structural Exchange Parameters.


TESTING AND VALIDATION

After the development and implementation of the IDM and MVD, testing and validation
must be performed to verify that the baseline for measuring structural information
modeling and exchange capabilities is met. The process includes establishing test cases along with a
description of the test criteria against which the results are validated, a realization of the same test
model in (at least) two structural modeling applications, and a matrix of success/failure
descriptions for import/export into other software applications. These topics, and others that
deal with establishing industry test cases and guidance for conformance testing and
interoperability testing, will be addressed in the next phase of the research.
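A minimal sketch of such a success/failure matrix is given below. It assumes the IfcOpenShell library, hypothetical file names for exports of the same test case from two structural modeling applications, and a deliberately simple pass/fail criterion (equal instance counts per entity type) that a real conformance test would refine with attribute-level checks.

import ifcopenshell

# Hypothetical exports of the same test model from two structural applications
files = {"AppA": "testcase01_appA.ifc", "AppB": "testcase01_appB.ifc"}
entities = ["IfcBuildingStorey", "IfcGrid", "IfcColumn", "IfcBeam",
            "IfcWall", "IfcSlab", "IfcFooting", "IfcPile"]

counts = {}
for app, path in files.items():
    model = ifcopenshell.open(path)
    counts[app] = {e: len(model.by_type(e)) for e in entities}

# Simple pass/fail matrix: do both applications deliver the same number of instances?
print(f"{'Entity':<20}{'AppA':>6}{'AppB':>6}  Result")
for e in entities:
    a, b = counts["AppA"][e], counts["AppB"][e]
    print(f"{e:<20}{a:>6}{b:>6}  {'pass' if a == b else 'FAIL'}")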

CONCLUSIONS
The NBIMS is, by design, a standard of standards, i.e. it is based upon other standards,
mainly IAI, IFC, and Omniclass. The NBIMS strives to establish an IFC building model schema
that provides the basis for achieving full interoperability within and across different AEC
trades.
This paper presents an initial effort to standardize structural BIM using the NBIMS generic
approach. NBIMS defines a minimum standard providing a baseline against which
additional, developing information exchange requirements may be layered. At the time of
writing, no completed content volumes of the NBIMS had been published and the
applicability of the generic process was thus not fully tested. The research offers detailed
guidelines for the development of structural BIM standards following the generic NBIMS
approach.
The basic process for developing structural BIM commences with the development of
functional specifications or exchange requirements defined by end users in an IDM. These
are then mapped to MVDs by software developers to establish a neutral IFC model schema.
Theoretically, a direct mapping should exist between the IDM, the MVD, and the IFC
schema where the IDM provides a list of information that must appear in the IFC schema and
the MVD provides the guideline specifying how the information must appear in the IFC
schema. The IDM and the MVD are generally supposed to be complementary to each other.
The paper presented examples illustrating these steps for the domain of structural analysis
and design.
The research attempted to develop a preliminary version of a structural BIM standard and
to bridge NBIMS implementation from theory into practice, with the goal of managing
structural building information in an efficient, integrated way.

REFERENCES
AIA Document E201™ – 2007 (2007). "Digital Data Protocol", American Institute of Architects, 2007.
ATC-75, Applied Technology Council (ATC), Project ATC-75: “Development of Industry Foundation
Classes (IFCs) for Structural components ” http://www.atcouncil.org/Projects/atc-75-project.html
(Nov. 2010)
American Society of Civil Engineers Structural Engineering Institute/Council of American Structural
Engineers Joint Committee on Building Information Modeling, http://www.seibim.org
Eastman, C. M.; Jeong, Y.-S.; Sacks, R.; Kaner, I.(2010). “Exchange Model and Exchange Object Concepts
for Implementation of National BIM Standards”. Journal of Computing in Civil Engineering,
Jan/Feb2010, Vol. 24 Issue 1, 25-34.
Froese T (2003): Future directions for IFC-based interoperability, ITcon Vol. 8, pg. 231-246.
International Alliance for Interoperability (IAI), buildingSMART International, http://www.iai-
international.org. (Nov. 2010)
Industry Foundation Classes (IFC), http://www.iai-tech.org (publication of the IFC specification). (Nov.
2010)
Omniclass (2006). Omniclass: A strategy for classifying the built environment, introduction, and user
guide, 1.0 edition, Construction Specifications Institute, Arlington, Va., http://www.omniclass.org/.
Nawari, N. and Sgambelluri, M. (2010). "The Role of National BIM Standard in Structural Design", The
2010 Structures Congress joint with the North American Steel Construction Conference in
Orlando, Florida, May 12-15, 2010, pp. 1660-1671.
NIBS (2007): NBIMS (National Building Information Modeling Standard), Version 1, Part 1: “Overview,
Principles, and Methodologies”, National Institute of Building Sciences.
http://www.nationalcadstandard.org/ (Nov. 2010).
Collaborative Design Of Parametric Sustainable Architecture
J.C. Hubers1
1
Hyperbody, Faculty of Architecture, Delft University of Technology, P.O. Box 5,
2600 AA Delft, j.c.hubers@tudelft.nl.

ABSTRACT

Sustainable architecture is complex. Many aspects, of differing importance to
many stakeholders, are to be optimized. BIM should be used for this. Building
Information Modelling is a collaborative process where all stakeholders integrate
and optimize their information in a digital 3D model. Sometimes it is called Green
BIM. But what exactly is that? Is the International Organization for Standardization (ISO)
IFC standard useful for this? And is it compatible with new developments in
parametric design? Advantages and disadvantages of BIM are listed. Full
parametric design is needed because it keeps the design flexible and open for
changes until the end of the design process. However it is not compatible with
IFC; only object parametric design is. A possible way out of this paradox could be
the use of scripts that only create objects if they are not already in the BIM
database and otherwise only adapt their properties. An example of parametric
sustainable architectural design explains the mentioned issues.

INTRODUCTION

Sustainable architecture is architecture that meets the needs of the present


without compromising the ability of future generations to meet their own needs
(adapted from WCED 1987). One of those needs is energy for heating, cooling,
lighting and equipment of buildings. Estimates show that buildings account for 20-40%
of the energy use in Europe (IEA 2008).
The built environment is our habitat. This word is used in Ecology to
address the circumstances that make species flourish or not. Our habitat should be
built with all stakeholders in mind. Municipalities could play an active role in this.
They should not only approve or ask for changes in the plans, but actively
interfere from the start with the design of the habitat of its citizens. Municipalities
could require design teams with delegates representing all stakeholders
(including the building professionals). Such multi-disciplinary design teams can
develop designs that optimally fulfil the needs and demands of all concerned.
In 1984 the author designed a sustainable office building (Figure 1). It had
a floating foundation, because of the risk of flooding and also to take advantage, both in
summer and winter, of the constant ground temperature of 10 °C. The roof
construction would be made of round wood with Ethylene TetraFluoroEthylene
(ETFE) cushions, a solution that has since been realized several times. It had collection
and treatment of rainwater, heat storage in salt, fish culture and a vegetable
garden. The feasibility study for this innovative office took two years, and finally it
turned out that there was a negative return on investment, mainly caused by the
expensive foundation.


Figure 1. Ecological office building “The Egg” (Hubers 1986).

Also around 1985 the Biosphere 2 was realised in the U.S. as a closed
ecological system. Fifteen years later the Eden project was realised in England.
Governments started to fund research into sustainable architecture at the end of the last
century. Demonstration projects were funded and alternatives were stimulated
with grants. The report of MIT to the Club of Rome in 1972 about the limits to
growth had a big impact. The Brundtland report in 1987 introduced the concept of
sustainability. Later the triple P was added: People, Planet and Profit. Prof.
Duijvestein, one of the founding fathers of sustainability at Delft University of
Technology, listed in detail the criteria for buildings under every P (SenterNovem
2009). Reuse of existing resources with as little degradation as possible (cradle to
cradle) is important (McDonough and Braungart 2002).
But it was not until former vice-president Al Gore went around the world in 2006
showing the movie "An Inconvenient Truth" that people became widely
aware of global warming, the ozone hole, widespread land degradation and
declining biodiversity. The work of Jón Kristinsson should be mentioned,
especially his design for the Floriade 2012 (Kristinsson 2010). Of course the IPCC
reports are important references.

SUSTAINABLE BUILDING DESIGN

The biggest challenge lies with existing buildings, because they represent
20-40% of the energy use and CO2 emissions and because it is more difficult to
reduce this in existing buildings than to avoid it in new ones. Recently the author
developed together with several specialists a proposal for synergetic greenhouses
on existing flat roofs in the cities. It is called SynSerre (Figure 2). One of the
proposed eco-innovations in this project is the combination of multilayered ETFE
cushions with reflecting patterns on the film that concentrate the part of the sunlight
spectrum that is not useful for plants (~50%) onto PV disks mounted on space frames
of round wood (Figure 3). By inflating and deflating the right chambers in the
cushions, the system becomes interactive.

Figure 2. Schematic representation of the SynSerre project.

But it could well turn out that SynSerres in Northern countries need a
different solution than in Southern ones. Using up to 77% of the direct sunlight for PV reduces
the required cooling capacity by a factor of four.

Figure 3. ETFE cushions + Tentech round wood connection + PV concentration.

This project proposes solutions for the following problems.


1. Depletion of fossil fuel. SynSerres supply energy and insulate existing
buildings.
2. Flooding in the world’s delta areas due to climate change endanger the food
supply of the growing cities. SynSerres bring food production into the cities
on high and dry places.
3. Many existing buildings in European cities with flat roofs from the sixties and
seventies need renovation. Well designed and styled SynSerres are beautiful
and synergetic alternatives for traditional renovation.
4. Many of those buildings are in problematic neighbourhoods. SynSerres create jobs and offer social activities.
5. There is a lack of space for extending greenhouses in the green areas around
cities. The roofs and facades of existing buildings offer considerable space for
SynSerres.
6. Cities run out of space and the value of buildings in the target areas is
decreasing. There is added value through double land use.
7. Low quality of food in supermarkets. Neighbours profit from fresh herbs,
vegetables, flowers, plants and control the quality.
8. Global warming due to CO2 and particulates. Plants convert CO2 into O2 and
plant material. Particulates stick to ivy on the facades which are linked to the
SynSerre.
9. High cost for maintenance of green areas in the cities and e.g. ivy on the
facades. The greenhouse farmer maintains the green on and around the
building.
10. Overflow of the sewage system during rainfall. Rainwater is collected, stored and
used to grow plants.
11. Little awareness of sustainability. SynSerre attracts attention and has an
educational function.
12. Transport causes much pollution and accidents. Less transport because of local
production and trade.

Figure 4. Elkas (WUR 2010)


Because existing buildings vary enormously, the SynSerre concept is
adaptable. Parametric design software will be used and dimensions expressed in
variables. The number of modules and their dimensions are matched with the
existing buildings. The SynSerre will be offered as a parametric product through
websites such as www.conceptueelbouwen.nl as part of an innovative turn towards an
offer-based building market. Clients (i.e., greenhouse farmers) buy the product in
about the same way as a customisable car. The local team receives the command,
adapts the design in some days and prepares the building permit request. The
consortium offers SynSerre in principle as a Design & Build contract and shares
the profit.
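As a sketch of what "dimensions expressed in variables" could look like for such a parametric product, the small Python function below derives a module layout from the dimensions of an existing roof. The module size, edge margin and rounding rule are invented for illustration and are not part of the actual SynSerre design.

def synserre_layout(roof_length_m, roof_width_m, module_size_m=4.0, edge_margin_m=0.5):
    """Illustrative only: derive the number of greenhouse modules that fit an
    existing flat roof, leaving a margin along every edge."""
    usable_l = roof_length_m - 2 * edge_margin_m
    usable_w = roof_width_m - 2 * edge_margin_m
    n_long = max(int(usable_l // module_size_m), 0)
    n_wide = max(int(usable_w // module_size_m), 0)
    return n_long, n_wide, n_long * n_wide

print(synserre_layout(42.0, 15.0))  # e.g. a hypothetical 1960s apartment block roof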
The idea of SynSerre is obviously inspired by the design of Figure 1, but
combined with other ideas. In 1992 ONL, one of the partners in the project,
participated in a study where greenhouses were planned on top of housing
(Bhalotra 1992). However, this was not on existing buildings and it was conceived more as an urban design.
Grau (2009) proposed to integrate greenhouses on the roofs of Barcelona,
however also not on existing buildings and not developed as a realistic product.
SynSerre partners from the leading greenhouse research centre in Europe, the
Wageningen University and Research centre in the Netherlands, showed the
advances of energy producing greenhouses like the Elkas (Figure 4) and the
Fresnel greenhouse (WUR 2010). This together with the ideas of Urgenda (2010)
and the initiative of Rotterdam (2010) to turn existing flat roofs into green roofs
led to the idea of this project. The Elkas produces 15-18 kWh/m² per year at the
curved side of the roof by reflecting and concentrating the near-infrared radiation
of the sun onto an adaptable line of PV cells.

Table 1. Criteria For Sustainable Buildings
(Columns per criterion: score of Evaluator 1, score of Evaluator 2 and, where the scores differ, the standard deviation; scores on a scale of 2-10.)

Relation with environment
1 wind hinder: 6, 3, 2
2 sun light: 5, 5
3 urban infrastructure: 10, 5, 4
4 waste: 4, 10, 4
5 view: 8, 3, 4
Adaptability
6 flexibility: 2, 9, 5
7 expandability: 2, 9, 5
8 handicapped: 5, 5
Spatial organization
9 routing: 8, 7, 1
10 viewing lines: 9, 9
11 functional relations: 9, 9
12 privacy: 3, 8, 4
Safety
13 fire: 8, 8
14 burglary: 8, 8
15 flood: 5, 5
16 traffic: 5, 8, 2
Experience
17 recognizability: 9, 9
18 proportions: 9, 9
19 rhythm: 9, 9
20 acoustics: 5, 5
21 comfort: 7, 3, 3
22 contrast: 9, 9
Materials
23 colour: 9, 3, 4
24 texture: 9, 3, 4
25 sustainability: 5, 5
Construction
26 safety: 5, 5
27 comfort: 5, 5
28 speed: 4, 2, 1
Energy use
29 insulation: 5, 5
30 outer surface: 8, 8
31 wind tightness: 5, 5
32 ventilation: 5, 5
33 sun light: 3, 3
34 lighting: 8, 8
Cost
35 initial, maintenance, demolition: 9, 5, 3

Besides fossil energy use, CO2 emission and the problems mentioned
before, there are other criteria that a sustainable building should satisfy. The
author made the list in Table 1 and used it during a test case of collaborative
design (Hubers 2008). The list was meant to be completed by the design team. It
turned out that the multidisciplinary design team only focused on a few criteria
and didn’t take the time to add other criteria or to evaluate all of them. Also other
research shows that weighted criteria evaluation is not reliable (Lawson 2006).
But maybe it is better to have a badly used list of criteria than no list at all. At
least it helps to focus the discussion efficiently on the subjects that team
members disagree about (see the standard deviation column in Table 1).
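A small illustration of how that spread can be used: the Python sketch below, with a few scores copied from Table 1, sorts the criteria by the disagreement between the two evaluators; the sample standard deviation roughly reproduces the values in the table's last column.

from statistics import stdev

# A few illustrative scores (scale 2-10) taken from Table 1: [Evaluator 1, Evaluator 2]
scores = {
    "wind hinder":          [6, 3],
    "urban infrastructure": [10, 5],
    "flexibility":          [2, 9],
    "sun light":            [5, 5],
}

# Sort criteria by disagreement so the team discusses the most contested ones first
for criterion, vals in sorted(scores.items(), key=lambda kv: -stdev(kv[1])):
    print(f"{criterion:<22} scores={vals}  spread={stdev(vals):.1f}")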
A multidisciplinary design team needs more than a list of criteria. It needs
to have good ideas! Developing ideas and evaluating them with criteria are the
two main sub processes of design (Lawson 2006, Hubers 2008). Knowledge
sharing and management is important. We plan to use wikis, which are websites
that everybody who is authorized can easily edit; a well-known example is Wikipedia.
Creativity is needed to turn experience and knowledge into ideas. Many
techniques can be used. The use of different representations and different media is
one of them (Stellingwerff 2005, Schön 1983). The work of Edward de Bono
shows several methods like Random Word Stimulation, Analogy thinking, Brain
storming etc. (Bono 1980). The Environmental Maximisation Method of
Duijvestein is interesting though a bit laborious (Duijvestein 2002). It consists of
drawing the design only from the point of view of one function, e.g. water, green,
sun, wind, power, traffic, housing, parking etc. and later the plans are integrated.
Recent developments in ICT make it possible to share all this information in 3D
digital models. We call this Green BIM.

BIM

BIM is mainly used in the meaning of Building Information Modelling, the


process of making a Building Information Model. Collaborative design is one of
the main aspects of BIM (Eastman et al. 2008, Hardin 2009, Hubers 2008). We
define it as follows. Collaborative architectural design is multidisciplinary
simultaneous design from the very start of a project. It is also called co-design or
concurrent engineering.

Table 2. Advantages Of BIM (Adapted From Eastman et al. 2008, P. 321)


Feasibility study
1 Support for project scoping and cost estimation (2)
Concept design
2 Scenario planning (2)
3 Early and accurate visualizations (3)
4 Optimize energy efficiency and sustainability (1)
Integrated design/construction
5 Automatic maintenance of consistency in design (8)
6 Enhanced building performance and quality (5)
7 Checks against design intent (3)
8 Accurate and consistent drawing sets (8)
Construction execution/coordination
9 Earlier collaboration of multiple design disciplines (6)
10 Synchronize design and construction planning (5)
11 Discover errors before construction (clash detection) (5)
12 Drive fabrication and greater use of prefabricated components (5)
13 Support lean construction techniques (2)
14 Coordinate/synchronize procurement (4)
Facility operation
15 Lifecycle benefits regarding operation costs (1)
16 Lifecycle benefits regarding maintenance (1).

The pressure to use IFC based BIM is growing. IFC is an ISO standard. A
good introduction to IFC-based BIM is to be found in Khemlani (2010). Autodesk
adopted it, among others, in Autodesk Revit. Other major CAD developers in the AEC
industry support it too. The Dutch branch of the buildingSMART association, which
develops IFC, states in its newsletter that the governmental services directorates
of the U.S., Denmark, Finland, Norway and the Netherlands signed an agreement for
adopting IFC-based BIM for all major government projects (BS 2010).
Contractors have been working on this for a long time, and recently the Dutch Conceptual
Building network has started working in this direction too (CB 2010). The conceptual
building approach converts the demand market into an offer market. Providers of
concepts no longer wait for a client to define a demand, but develop complete
adaptable solutions that clients can order. It is more or less like in the car business:
lean production and mass customization.
The simulation of buildings is a vital benefit. VR systems like CAVEs and
Head Mounted Displays are used for that. Delft University of Technology
developed a lab called protoSPACE which uses these techniques (Hubers 2008).
Eastman et al. report 10 case studies of realized buildings. The 16 reported
benefits are summarised in Table 2 with the number
of projects that had these benefits. Benefit 9 'Earlier collaboration of multiple
design disciplines’ is in the Construction execution/coordination phase and thus
not collaborative design as we define it. Not one project had all those benefits.
Besides benefits of BIM there are also drawbacks of course. It is obvious that
BIM requires much knowledge about 3D, 4D, 5D and nD CAD (4D is
planning, 5D is cost, nD is management, etc.). Then there are the difficulties of
author/ownership and liability of the BIM. Contracts like Design Build and
Guaranteed Maximum Price have enormous impact on the concerned professional
practices (Hardin 2009).
Recently, parametric design software has come into use. Two main groups of
parametric design software can be distinguished: object parametric or process
parametric. The problem is that only object parametric design software is
compatible with IFC (Hubers 2010).

CONCLUSION

Sustainable architecture has many criteria, set by many stakeholders, to fulfil. A
promising idea is to develop synergetic greenhouses on flat roofs of existing
buildings. Research is needed to find ways to use both IFC based BIM and full
parametric design software. Only then multidisciplinary teams can work
efficiently together on sustainable architectural designs. A solution might be the
use of scripts that only create objects if they are not already in the BIM database
and otherwise only adapt their properties.
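A minimal sketch of the script idea proposed here, assuming the BIM database can be addressed as a simple key-value store keyed by object GUIDs (the store and its API are invented for illustration):

def sync_object(bim_db, guid, ifc_class, properties):
    """Create the object only if it is not yet in the BIM database;
    otherwise just update its properties (illustrative upsert logic)."""
    existing = bim_db.get(guid)
    if existing is None:
        bim_db[guid] = {"class": ifc_class, "properties": dict(properties)}
        return "created"
    existing["properties"].update(properties)
    return "updated"

# Example: a parametric script regenerates a wall; only its thickness changed
db = {}
sync_object(db, "2O2Fr$t4X7Zf8NOew3FLKr", "IfcWall", {"Thickness": 0.30})
sync_object(db, "2O2Fr$t4X7Zf8NOew3FLKr", "IfcWall", {"Thickness": 0.25})
print(db)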

REFERENCES

Bhalotra, A., Oosterhuis, K. A.H. Art Activities, A.J. Alblas, J.C. Alblas and
Witteveen + Bos, (1992). City Fruitful. Ed. G. W. de Vries. 010
Publishers Rotterdam.
Bono, E. de. (1980). Lateral thinking; a textbook of creativity. Harmondsworth
Penguin
BS (2010). Newsletter April 2009 nr1 available at http://bw-dssv07.bwk.tue.nl/-
files/newsletters/nieuwsbrief-buildingsmart-22-04-2009.pdf. accessed 3-
1-2010.
CB (2010) http://www.conceptueelbouwen.nl/?mod=cbouwen&id=17&act=cb_-
english . Accessed on 9-3-2010.
Duijvestein, C. A. J. (2002). The environmental maximisation method in: T. M. d.
Jong and D. J. M. v. d. Voordt Ways to study and research urban,
architectural and technical design (Delft) Delft University Press

Eastman, C., Teichholz, P., Sacks, R. and Liston, K. (2008). BIM Handbook.
Wiley, Hoboken, New Jersey, U.S.A.
Grau, L. (2009). Sustainable district in Barcelona. In Changing roles; new roles,
new challenges, ed: H. Wamelink, M. Prins and R. Geraedts. TU Delft
Faculty of Architecture Real Estate & Housing, Delft.
Hardin, B. (2009). BIM and construction management. Sybex, Indianapolis,
Indiana, U.S.A.
Hubers, J.C. (1986). “Eindelijk een gebouw dat met alles rekening houdt: ‘Het
Ei’”. In Bouw, februari’86, pp. 10 – 14.
Hubers, J.C. (2008). Collaborative architectural design in virtual reality. PhD
diss. Faculty of Architecture of Delft University of Technology, The
Netherlands. Also available at http://www.bk.tudelft.nl/users/hubers/
internet/DissertatieHansHubers(3).pdf.
Hubers, J.C. (2010). Collaborative parametric BIM. In proceedings of the 5th
ASCAAD conference 2010, ed: A. Bennadji, B. Sidawi and R. Reffat.
Robert Gordon University, Scotland. ISBN: 987-1-907349-02-7.
IEA (2008). Energy efficiency requirements in building codes: Energy efficiency
policies for new buildings, IEA Publications.
Khemlani L. (2010). The IFC Building Model: A Look Under the Hood. In AEC-
bytes. Available at http://www.aecbytes.com/feature/2004/
IFCmodel.html.
Kristinsson, J. (2010).
http://www.kristinssonarchitecten.nl/projecten/images/9006/9006-1.jpg
Accessed 7-10-2010.
Lawson, B.R. (2006). How designers think. Architectural Press/Elsevier, Oxford.
McDonough, W. and M. Braungart. (2002). Cradle to Cradle. North Point Press, New York.
Rotterdam (2010). http://www.rotterdamclimateinitiative.nl/en/100_climate_proof/rotterdam_climate_proof/results. Accessed 10-10-2010.
Schön, D. A. (1983). The reflective practitioner; how professionals think in
action. Basic Book Inc. U.S.A.
SenterNovem (2009).
http://www.senternovem.nl/mmfiles/Position%20paper%20-
Duurzame%20Bedrijfsvoering%20Rijk_tcm24-338988.pdf Last accessed
6-11-2010.
Stellingwerff, M. C. (2005). Virtual context. Ph.D. diss. Delft University of
Technology, The Netherlands.
Urgenda (2010). http://www.urgenda.nl/visie/ Last accessed 29-9-2010.
WCED (1987). Our Common Future, Report of the World Commission on
Environment and Development, World Commission on Environment and
Development, 1987. Also available at http://www.worldinbalance.net/-
intagreements/1987-brundtland.php Last accessed 29-9-2010.
WUR (2010). http://www.glastuinbouw.wur.nl/UK/expertise/design/. Last
accessed 23-11-2010.
Developing Common Product Property Sets (SPie)

E. William East1, David T. McKay2, Chris Bogen3, Mark Kalin4


1
Engineer Research and Development Center, 2902 Newmark Drive, Champaign, IL
61826-9005; phone: 217.373-6710; email: bill.east@us.army.mil
2
ibid; phone: 217.373.3495; email: david.t.mckay@us.army.mil
3
Engineer Research and Development Center, 3909 Halls Ferry Road, Vicksburg, MS
39180; phone: 601-634-4624; email: chris.bogen@us.army.mil
4
Kalin Associates, 1121 Washington Street, West Newton, MA 02465; phone:
617.964.5477; email: mkalin@kalinassociates.com

ABSTRACT

Even when commercial software is effectively used, sharing building information


model (BIM) data is unlikely to provide satisfactory results due to differences in
properties, aggregations, and organization of the information contained in the various
software systems used at different stages of the project life-cycle. The approach taken
by the authors is to develop open specifications for information exchanges that are
included in contract specifications. A critical subset of the life-cycle BIM data set is
that information pertaining to project requirements, specification, review, installation,
and maintenance of the materials, products, and equipment used to create our
engineered environment. The Engineer Research and Development Center, in
conjunction with the National Institute of Building Sciences' buildingSMART
alliance, the Specifications Consultants in Independent Practice, and the Construction
Specifications Institute, is currently leading a project to deliver open BIM information
about manufactured materials, products, and equipment. The goal of the Specifiers’
Properties information exchange (SPie) project is to mobilize the United States
building product manufacturing sector to develop open-standard consensus building
product information models. In this paper, the authors report on the progress of the
Specifier’s Properties information exchange (SPie) project.

BACKGROUND

The re-use of building information models (BIM) is fraught with difficulty due to the
differences in properties, aggregations, and organization of the information in various
software systems. Recent history has even demonstrated that software system vendors
will implement unique properties for high-visibility owners. An example of this
situation is the space area measurement property required by the General Services
Administration (GSA 1996) included in design-oriented BIM products. This property,
“GSA BIM Area” appears on all users’ versions of these software systems, even if you
don’t work for that specific agency, or speak English. Imagine a world where each
owner had their own unique requirements loaded into every software system that was
required during a project life-cycle. Rather than creating a common language for
information exchange this creates a BIM Tower of Babylon. If everyone gets to have
their own “standard,” none of the parties end up being able to speak with any other
due to the complexities of the data sets being provided.


In addition to problems with object properties, the way that various software
systems aggregate and decompose their data provides another level of complexity to
creating software standards. An example of such a problem is encountered with
attempting to provide classifications of objects within commercial software systems.
None of the design software packages reviewed by the authors has externally modifiable
classifications that may be easily changed when switching from one client to another.
Facility Management software can be classified into those systems based on a
spatial decomposition of a facility or facilities, and those systems based on an asset
classification. Spatial facility management software allows the equipment to be placed
in space. Asset classification provides only a notional location such as within a given
building or floor. As a result, by providing information from a spatially oriented
system data set, such as a design BIM, into an asset-oriented facility management
system, the user runs the risk of losing the room numbers where the equipment is
located.

Finally, the organization of information in different systems results in


significant problems for users wanting to share information. The most obvious
examples of these differences are those that result from differences in field lengths.
Other differences, such as the ability of one system to represent multiple inheritance
while another supports only strictly hierarchical models, also result in a loss of fidelity
when transferring information from one system to another.

The transfer of information in a design office or project site from one system to
another is not an arbitrary act or academic consideration; it is an act that is required to
complete some clear business function. These exchanges are needed to provide
specific information at specific times in the project. Usually such exchanges are
included directly in contracts to ensure that critical exchanges are required. Examples
are daily construction reports, construction schedules, and equipment lists. Rather than
focus on some global approach to creating interoperable BIM, the approach favored by
the authors is to prepare small, contractually possible, performance-based data
exchanges (East 2009). The idea of “contracted information exchanges” also is the
heart of the standard process for the development of Industry Foundation Class (IFC)
Model View Definitions (MVD) using the Information Delivery Manual Process
(IDM) (ISO 2010).

The IDM process provides a pattern of requirements analysis and


documentation that is required to create the National BIM Standard – United States
(NIBS 2007). There are three essential parts to the IDM process. The first is the
identification of the business processes that currently exchange the needed building
information. The documentation of these processes is prepared using swim-lane
diagrams, often utilizing Business Process Modeling Notation (BPMN) (OMG 2010).
The swim-lane diagram clearly shows the points of information exchange needed to
solve the specific information exchange problem. The details of the information to be
exchanged are collected in a series of “exchange requirement” documents. Up until
this point the IDM process can be completed with the assistance of business process or
systems analysts. It is only the final stage that requires detailed knowledge of the IFC
model. In the final stage, exchange requirements are collated into an MVD. The MVD
is the technical specification of the IFC model that provides the open standard
framework for information exchange.

SPECIFIERS PROPERTIES INFORMATION EXCHANGE

The delivery of BIM data for the Facility Management Handover MVD (bSi 2009) in
the United States is often referred to by the more accessible name, COBie (East
2010a). COBie is the Construction-Operation Building information exchange. COBie
is one of many information exchange, or MVD, projects currently underway through
the buildingSMART alliance (bSa 2011). COBie has been shown to eliminate the
delivery of current paper documentation of construction handover documents
including installed equipment lists, submittal and shop drawing information,
commissioning, operations and maintenance, and asset management information.
Project teams may also use COBie as a platform upon which to transform their
business practices since they can manage COBie data instead of paper documentation.
The Life-Cycle information exchange (LCie) describes the exact format and
timing of each exchange of information during a project that results, ultimately, in the
production of facility management handover information as simply a report provided
from the building information model (East 2010b).

Today COBie forms an index to electronic documents, in Portable Document


Format (PDF), created during the submittal, installation, quality assurance, and
commissioning processes. While even such a simple catalog has been proven to save
facility managers man-years of effort when accepting new facilities (Medellin 2010),
there remains a tremendous amount of information about materials, products,
equipment, and systems locked within documents. To unlock the information content
of these PDF documents several of the authors began the Specifiers Properties
information exchange (SPie) project in 2008 (bSa 2010). This work takes advantage of
previous efforts by the buildingSMART International (bSi 2002). A recent study of
electronic catalogs has identified both technical and business gaps (Amor 2008) in
existing approaches that are addressed through the SPie project, as described in the two
following sections.

Technical Gaps

The SPie project resolves semantic gaps in existing approaches by carefully


considering the context and specific uses of product and property definitions. This
consideration begins with the recognition of the variety of properties needed during a
project. An example of a maintenance-related property is the bolt-hole layout for
equipment base plates. While designers are not interested in such properties, the
mechanic must have this information to ensure that equipment replacement doesn’t
require rerouting pipes. The swing of an equipment access panel to change filters is an
example of operability domain data not present in designs or specifications. On the
other hand, many types of properties are common across virtually all manufactured
products. These properties include the contact information for the manufacturer,
information about size, color, and accessories, warranty durations, and lists of
preventative maintenance requirements.
The Automating Equipment Information Exchange (AEX) project was initiated


to develop standard product data models, primarily for centrifugal pumps, focused
on ensuring the proper semantics of the information to be exchanged
(Begley 2005). As with the AEX project, the SPie project management plan ultimately
requires manufacturer groups to reach consensus on the list and definitions of product
properties. The authors' experience has shown that industry groups have more success
in considering such questions when reacting to sample information created by others.
As a result the SPie project began with an effort to create product templates across the
entire set of MasterFormat and UniFormat specification sections in the Whole
Building Design Guide’s Product Guide (WBDG 2011). The ProductGuide™
combined with the standard IFC property sets and the OmniClass Properties Table
(CSI 2010) form the basis of templates prepared by SPie domain analysts.

Another critical part of the SPie approach is to integrate with manufacturer’s


product portfolios slowly. SPie efforts with a specific domain begin with agreement on
pre-manufactured products which have individual serial numbers. Examples of such
products are electrical outlets or centrifugal pumps. The next step of SPie is to model
products that are assembled to order from pre-manufactured parts. Examples of this
type of product are electrical panel boards and chillers. The third types of products are
engineered-to-order products. Such products are comprised of both pre-manufactured
and assembled-to-order products. In these products the connections between the
objects comprise the equipment and/or system. It is interesting to note that the
modeling ontology of engineered-to-order products is expected to be the same
as that needed for any building system requiring shop drawings.

Agreeing on the semantics, and starting with pre-manufactured products, is the


bulk of the technical effort on the SPie project. This is because of the need to work
directly with product manufactures and trade associations. The required format for the
templates is an IFC mini-MVD that can be instantiated in STEP Physical File Format,
ifcXML, and SpreadsheetML using the COBie 2.30 data structure (East 2010d).
Interoperability is guaranteed by external checking using lightweight building model
server tools (East 2010c). The publication of these formats is freely accomplished
through the ProductGuide™.

Business Gaps

Requirements by owners are rarely sufficient, on their own, to achieve industry


standardization much less industry transformation. To ensure that SPie information is
provided as a matter of course, and replaces paper product data sheets throughout the
project life-cycle, requires that all stakeholders have direct benefit in creating,
exchanging, and using SPie information. Realizing the infrastructure needed to
achieve this benefit is a critical part of the SPie effort. The selection of “specifiers”
properties is the first key to the expected success of the SPie project. This is because
SPie information is used by contractors when initially purchasing product and facility
managers ultimately purchasing replacement products. Thus manufacturer’s marketing
and sales departments will be keenly interested in ensuring that their products are
provided in SPie format. Manufacturers have also expressed to the authors the
pressure they have begun to feel from customers for the production of BIM data
models. SPie allows manufacturers to respond to these requests once, using a common
(and defensible) format, rather than creating multiple models for every user and every
software platform.

Past efforts to create electronic product catalogs required manufacturers to pay


publishers for the rights for their own information. The second business key to the
SPie project is that manufacturers are identified as the authors of their own data. A
demonstration of the success of this concept is the 2009 General Electric
demonstration of SPie templates for selected light fixtures (GE 2009). Manufacturers
and associations participating in the SPie effort have stated to the authors that the
creation of standard templates can be automated directly from manufacturer databases.
The programming required for this mapping is a trivial procedure since the semantics
of the target format will have been resolved prior to any such programming.
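The kind of mapping meant here might look like the following Python sketch. The manufacturer database fields, the thermostat example values and the CSV output are all invented for illustration and do not reproduce the normative SPie or COBie column layout.

import csv

# Hypothetical rows from a manufacturer's product database
catalog = [
    {"sku": "TH-100", "model": "EcoStat 100", "volts_min": 110, "volts_max": 240,
     "color": "White", "warranty_years": 5},
]

# Illustrative mapping from internal database fields to SPie-style template properties
def to_spie_row(item):
    return {
        "ModelReference": item["model"],
        "Color": item["color"],
        "NominalVoltageLower": item["volts_min"],
        "NominalVoltageUpper": item["volts_max"],
        "PartsWarrantyDuration": item["warranty_years"],
    }

with open("spie_thermostats.csv", "w", newline="") as f:
    rows = [to_spie_row(i) for i in catalog]
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)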

A key business driver is also the need to allow manufacturers to identify


differentiating properties. Such properties allow manufacturers to highlight the
specific benefits of their products compared to others of that product type. While
the required SPie template fields represent the least common denominator of product
properties for a given class of product, the SPie model is extensible allowing
manufacturers to identify properties that they consider as important proprietary data.

EXAMPLE SPie TEMPLATE

A SPie template set is composed of information common to all products and those
properties required for the specific class of product. Table 1 provides the list of
properties common to all products. The SPFF, ifcXML, and SpreadsheetML
transforms required to produce and consume these standard properties are available
through the free BIMServices software (AEC3 2010).

Table 1. SPie Common Properties

Common Product Properties:
• Manufacturer Contact Information
• Nominal Length, Width, Height
• Model Reference
• Shape
• Size
• Color
• Finish
• Grade
• Material
• Constituents
• Features
• Accessibility Performance
• Code Performance
• Sustainability Performance

Common Warranty Properties:
• Parts Warranty Contact
• Parts Warranty Duration
• Labor Warranty Contact
• Labor Warranty Duration

Common Maintenance Schedule Properties:
• Task Name
• Status
• Type
• Description
• Duration
• Frequency
• Task Number
• Priors
• Resources Required
 Task Number

The product class specific properties of the initial SPie template are shown in
Table 2. Since the thermostat inherits from both an electrical device
and a controller, the default IFC properties in the template reference both property sets.
Finally there are additional properties derived from product-specific product catalog
data sheets. Using the combined information from the properties identified in Table 1
and Table 2, the template has been provided to product manufacturers for review.
Following that review the harmonized minimum common denominator will be used to
update the templates on the WBDG ProductGuide™. One of the primary sources of
the existing templates was the effort of the Specifications Consultants in
Independent Practice (SCIP), who provided an 8,500-line database organized into 425
specification sections, developed from Kalin Associates' Master Short-Form
Specifications, 8th Edition. To coordinate the efforts of SCIP and the Construction
Specifications Institute (CSI), a Construction Engineering Research Laboratory project
paid for a review of the ProductGuide™ content against that of the OmniClass properties
table.

Table 2. SPie Thermostat Template Properties


Property Derivation
NominalCurrent Pset_ElectricalDeviceCommon
UsageCurrent Pset_ElectricalDeviceCommon
NominalVoltageLower Pset_ElectricalDeviceCommon
NominalVoltageUpper Pset_ElectricalDeviceCommon
NominalPower Pset_ElectricalDeviceCommon
NumberOfPoles Pset_ElectricalDeviceCommon
HasProtectiveEarth Pset_ElectricalDeviceCommon
FrequencyRangeLower Pset_ElectricalDeviceCommon
FrequencyRangeUpper Pset_ElectricalDeviceCommon
PhaseAngle Pset_ElectricalDeviceCommon
PhaseReference Pset_ElectricalDeviceCommon
IPCode Pset_ElectricalDeviceCommon
InsulationStandardClass Pset_ElectricalDeviceCommon
CurrentType Pset_ElectricalDeviceCommon
ThermostatType Pset_UnitaryControlElementTypeThermostat
ThermostatMode Pset_UnitaryControlElementTypeThermostat
FanMode Pset_UnitaryControlElementTypeThermostat
TemperatureSetPoint Pset_UnitaryControlElementTypeThermostat
Application Pset_HVACControl_ProductCatalog
Sustainability Pset_HVACControl_ProductCatalog
Manufacturers Pset_HVACControl_ProductCatalog

RESULTS

Results from the SPie project debuted at the 2009 NIBS Annual Conference (bSa
2009). Representatives from General Electric described the ease with which product
templates were created from their standard product catalogs and posted on their
website. A specifier using the design software Autodesk Revit and the specification
software system eSpecs demonstrated how the product template and then specific
manufacturer information may be directly linked in the design and specification
process. The presenter stated that while “out of the box BIM objects contain little
valuable data”, SPie property sets “allow designers to assign valuable product data
into BIM components.”

Based on this success, a meeting was held at NIBS in March 2010 to reach out
to manufacturers' associations. Several associations are currently working with NIBS
to develop entire libraries of SPie templates from their members’ products. Chief
among these organizations is the National Electrical Manufacturing Association
(NEMA). In a Dec 2010 presentation NEMA demonstrated a prototype application for
the integration of SPie information within their manufacturers’ standard Electronic
Data Interchange (EDI) application.

FUTURE WORK

The level of effort required to develop consensus templates across the approximately
10,000 building product manufacturers is a daunting task. While there is increasing
demand for open BIM deliverables of product data, the authors expect that it will
require decades to fully replace the current PDF marketing catalog page with the
associated computable BIM model and associated style sheet enabling human readable
display. The authors see the development of these templates as the key work to be
accomplished, since the production of manufacturer data in these formats has been
acknowledged to be a trivial task by the manufacturers themselves. Given that the hard
work is to come to a consensus on the content of the SPie templates, the authors would
like to encourage members of the 400 building product related trade associations to assist
in organizing these discussions through the buildingSMART alliance. The National
Technical Committee of the Construction Specifications Institute (CSI) is currently
encouraging its 6,000 industry members to participate as well, with an update to the
ProductGuide expected after August 2011. Additional support for reviewing the
consensus templates is anticipated from members of Specifications Consultants in
Independent Practice.

ACKNOWLEDGEMENTS

The U.S. Army, Engineer Research and Development Center, Construction Engineer
Research Laboratory in Champaign, IL and Vicksburg, MS supported this project
under the “Life-Cycle Model for Sustainable and Mission Ready Facilities” project.
The authors would like to thank Earle Kennett and Dominique Fernandez of NIBS,
Bob Payn of DB Interactive, and Nicholas Nisbet of AEC3 for their support and work
on this project.

REFERENCES

AEC3 UK (2010) “BIMServices – Command-Line Utilities for BIM,”


http://www.aec3.com/6/6_04.htm, cited 14 Jan 11.
Amor, Robert, et al. (2008) "Online product Libraries: The State-of-the-art,"
Scientific Commons, http://en.scientificcommons.org/42559250, cited 13 Jan
11.
Begley, E. F., et al. (2005) "Semantic Mapping Between IAI ifcXML and FIATECH
AEX Models for Centrifugal Pumps,” National Institutes of Standards and
Technologies, NISTIR 7223, May 2005.
buildingSMART Alliance (2009) “SPie Meeting Notes Dec 2009,” http://www.
buildingsmartalliance.org/index.php/newsevents/meetingspresentations/spieme
eting09/, cited 13 Jan 11.
buildingSMART Alliance (2010) “Specifiers Properties information exchange,”
http://www. buildingsmartalliance.org/index.php/projects/ activeprojects/32,
cited 13 Jan 11.
buildingSMART Alliance (2011) “Active Projects,”
http://www.buildingsmartalliance.org/ index.php/projects/activeprojects/, cited
13 Jan 11.
buildingSMART International (2002) “PM-3 Material Selection, Specification, and
Procurement,” IFC R3 Extension information Requirements Specification,
October 2002,
buildingSMART International (2009) “FM Basic Handover,”
http://www.buildingsmart.com /content/fm_basic_handover, cited 13 Jan 11.
Construction Specifications Institute (2010) “OmniClass: Table 23 – Products,”
http://www.omniclass.org/pdf.asp?id=8&table=Table%2023, cited 13 Jan 11.
East, E. William (2009) ”Performance Specifications for Building Information
Exchange,” Journal of Building Information Modeling, National Institute of
Building Sciences, Washington, DC, Fall 2009, pp. 18-20.
East, E. William (2010a) “Construction Operations Building information exchange
(COBie),” http://www.wbdg.org/resources/cobie.php, cited 13 Jan 11.
East, E. William (2010b) “Life-Cycle information exchange (LCie),” http://www.
buildingsmartalliance.org/index.php/projects/activeprojects/140, cited 13 Jan
11.
East, E. W., Nisbet, N., and Wix, J. (2010c) "Lightweight Capture of As-Built Construction
Information,” in Dikbas, A., et. al. eds., Proceedings of the 26th International
conference on IT in Construction, October 2009, CRC Press, New York.
East, E. William, Nisbet, Nicholas (2010d) “COBie Version 2.30 Update,”
http://www.buildingsmartalliance.org/index.php/projects/cobiev23, cited 13
Jan 11.
General Electric (2009) “Building Information Modeling,”
http://www.geindustrial.com/ cwc/bim/, cited 14 Jan 11.
General Services Administration (1996) “GSA BIM Guide for Spatial Program
Validation”,http://www.gsa.gov/graphics/pbs/BIM_Guide_Series_02_v096.pdf
, cited 13 Jan 11.

International Standards Organization (2010) “Building information modeling --


Information delivery manual,” ISO 29481-1,
http://www.iso.org/iso/catalogue_detail. htm?csnumber=45501, cited 13
Jan 11.
Medellin, Ken, et al. (2010) "University Health System and COBie: A Case Study,"
Proceedings of the 2010 National Institute of Building Sciences Annual
Conference, Washington, DC, December 2010,
http://projects.buildingsmartalliance.org/files/ ?artifact_id=3598, cited 13 Jan
11.
National Institute of Building Sciences (2007) “National Building Information Model
Standard Version 1, Part 1: Overview, Principals, and Methodologies,”
http://www.wbdg.org/pdfs/NBIMSv1_p1.pdf, cited 13 Jan 11.
Object Modeling Group (2010) “Business Process Model and Notation,” Version 2.0.
dtc/2010-06-05, http://www.omg.org/spec/BPMN/2.0, pp 40-41, cited 13 Jan 11.
Whole Building Design Guide (2010) “Product Guide,” http://www.wbdg.org/
references/pg_spt.php, cited 14 Jan 11.
Integration of geotechnical design and analysis processes using
a parametric and 3D-model based approach

M. Obergrießer1, T. Euringer2, A. Borrmann3 and E. Rank4


1
Research Assistant, Construction Informatics Group, Regensburg University of
Applied Sciences, Germany. 93049 Regensburg, Prüfeninger Str. 58, Tel. +49 941
943 1222, Fax +49 941 943 1426, Email: mathias.obergriesser@hs-regensburg.de
2
Professor of Construction Informatics, Regensburg University of Applied
Sciences, Germany. 93049 Regensburg, Prüfeninger Str. 58, Tel. +49 941 943
1226, Fax +49 941 943 1426, Email: thomas.euringer@hs-regensburg.de
3
Research Group Leader, Chair for Computation in Engineering, Technische
Universität München, Germany. 80290 Munich, Arcisstr. 21, Tel. +49 89 289
25117, Fax +49 89 289 25051, Email: andre.borrmann@tum.de
4
Professor of the Chair for Computation in Engineering, Technische Universität
München, Germany. 80290 Munich, Arcisstr. 21, Tel. +49 89 289 23048, Fax +49
89 289 25051, Email: ernst.rank@tum.de

ABSTRACT
The aim of this work is to improve the integration between the
geotechnical and infrastructural design, modeling and analysis processes. Up
to now, these three planning stages have been executed in isolation and without the required
data exchange between them. This separation leads to time-consuming and
expensive manual re-input of geometric and semantic data. Currently, roads are
designed using the traditional approach which is based on various 2D drawings.
The current design process focuses on the roadway itself; additional geotechnical
conditions such as the slope angle of the dam or the position of a retaining wall
are not considered.
To solve these problems, a new parametric and 3D-model based approach
has been developed in the research project ForBAU – The virtual construction
site. This new approach is based on the traditional 2D infrastructure planning
process, but includes a new parameterized 3D modeling concept. Different open
data formats such as LandXML and GroundXML allow data integration with
both a parametric Computer Aided Design (CAD) system and geotechnical
engineering software. An automatic update function ensures data flow without
loss of information. Usage of this new approach will accelerate the infrastructure
design and provide a parametric 3D-model approach to close the gap between the
geotechnical and the infrastructure planning process. This paper provides detailed
information about this new integration concept and gives an overview on the
various implementation steps.

MOTIVATION

How can we improve or optimize the construction planning process? Many engineers have to deal with this question in order to make their planning work more economical and efficient. In some Architecture, Engineering and Construction (AEC) sectors, such as the structural engineering domain, several optimization approaches exist. One of these approaches is Building Information Modeling (BIM) (Eastman et al., 2008). It is based on a consistent and central 3D data model that provides
engineers with required building data information during the whole lifecycle of
the construction. Furthermore it improves the collaboration between all involved
planning companies and processes. Up to now, a similar approach does not exist in the civil engineering domain, and especially not in the infrastructural and geotechnical planning process. Every sector carries out its planning tasks alone and almost without any information transfer to other planning fields (Kaminski, 2010). For example, the design of a roadway requires a lot of information about the position of existing buildings, the surface of the environment or the conditions of
the subsoil. Most of this data can be provided in a digital format; however, the infrastructure engineer re-uses only a very small part of it. Another problem lies in generating the physical model for the geotechnical analysis process. In the current planning process, the engineer has to integrate the required
geometric and semantic information into the system manually. Before he can
generate the analysis model, the engineer has to interpret the geometrical
information of the infrastructure cross-section plan and the semantic information
contained in the ground expertise (Figure 1). Afterward he uses this information to
create 2D cross-sections which are significant for the structural analysis.

Figure 1. Required datasets for the geotechnical analysis model


The mentioned problems lead to an inefficient, time-consuming and cost-intensive planning process. This paper presents an approach for improving and automating the infrastructural and geotechnical planning process that ensures a continuous geometrical and semantic data exchange, realizes a parameterized 3D infrastructure model and connects the different planning processes by using a parametric concept.

PARAMETRIC 3D-MODEL BASED APPROACH

The next sections describe the implementation of a concept that can realize
an integrated geotechnical and infrastructural design and analysis process based
on a parametric 3D-model. It gives a short overview of the available parametric 3D modeling systems and data exchange formats, explains in detail the newly
developed concept and finally discusses some topological problems that arose
during the development process.

Parametric 3D modeling systems


Parametric modeling means that an object is not created with a fixed
geometry but its dimensions are described by free parameters. Furthermore the
different values of single parameters can be linked by arithmetic expressions. In
the example depicted in Figure 2, the parameter “area” is given by the
multiplication of the parameter “length” and “width”. The use of a parametric
approach allows an update of the whole geometry by changing one parameter or
constraint. It is possible to distinguish two versions of parametric modeling
(Eastman et al., 2008). The first one only describes constraints within one object.
In this case the objects are instances of predefined element classes. The second
kind of parametric modeling is highly important for the modeling of geotechnical
and infrastructural buildings like roadways (or railways) or bridges. It describes
the position and form of individual geometric objects with reference to other
objects. Changing the value of a parameter either results in the system generating
a warning, if rules are violated, or in an automatic update of the entire model.

Figure 2. Parameters and constraints of a geometric object
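To make the idea concrete, the following minimal Python sketch (purely illustrative, not part of any of the systems discussed here) shows an object whose derived parameter is linked to its free parameters by an arithmetic expression; changing one parameter either triggers an automatic update of the derived values or, if a rule is violated, a warning.

    class ParametricRectangle:
        """Minimal parametric object: the derived parameter 'area' is linked to
        the free parameters 'length' and 'width' by an arithmetic expression."""

        def __init__(self, length, width):
            self.params = {"length": length, "width": width}
            self.constraints = {"area": lambda p: p["length"] * p["width"]}

        def set_param(self, name, value):
            if value <= 0:
                # rule violation: warn instead of updating the model
                raise ValueError(name + " must be positive")
            self.params[name] = value          # change one parameter ...
            return self.evaluate()             # ... and update the whole geometry

        def evaluate(self):
            return {name: rule(self.params) for name, rule in self.constraints.items()}

    rect = ParametricRectangle(length=4.0, width=2.5)
    print(rect.evaluate())                 # {'area': 10.0}
    print(rect.set_param("width", 3.0))    # automatic update -> {'area': 12.0}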


Numerous parametric 3D modeling systems already exist. Especially in
the aircraft and automotive engineering sector, parametric 3D modeling systems
are commonly used in the daily design process. Also in the AEC domain,
parametric systems are used more and more for designing complex buildings, such
as bridges, sports stadiums or hospitals. To determine which system is the most
suitable one for modeling complex buildings, a comprehensive CAD study was
conducted in the research project ForBAU (Borrmann et al., 2009). In this study,
different mechanical and civil engineering systems such as Gehry Technologies
Digital Project, Siemens NX and Autodesk Revit Structure have been analyzed
and compared. The results, the analysis methods used and further information are summarized in a recently published CAD guide (Obergrießer et al., 2011).

A new approach for integrated geotechnical design and analysis


The new parametric 3D-model approach integrates different existing and
newly developed elements and consists of three design/analysis phases (Figure 3).
The first phase is concerned with the creation of the roadway (or railway) plan. In the second phase, a parameterized 3D-model of the roadway is created. In the last phase, the information of the 3D-model is used to analyze the
geotechnical structure. If changes become necessary, the modified parameters of
the roadway are transferred back to the 3D modeling system.

Figure 3. Diagram of the different planning steps involved in the proposed integrated process
In a first step, the design of the roadway plan is realized using a
conventional 2D-based system. Hence, the creation of the axis is divided into two
2D steps. At first, the horizontal alignment of the road is designed in the position
plan, and in the next step, the vertical alignment is added to the vertical plan. Then
different cross sections are generated and added to the alignment by combining
these three 2D views – horizontal and vertical alignment and the cross section.
The 2D-based approach is appropriate for designing a roadway because it reduces
the complexity of the design by allowing the engineer to concentrate on the main
aspects of the different sections. Since the combination of various 2D sections
implicitly describes a 3D-model, all required information, e.g. the 3D points (x, y, z coordinates) of the alignment or the different cross sections, can be derived from it.
The generated cross section plan now includes all the information about the form
of the cutting or dam objects and the position of the subsoil layers. After finishing
the roadway design the 3D point coordinates of the alignment and the different
cross sections are transferred to the 3D modeling system using the LandXML
format.
The second part of the proposed integration approach consists in the
creation of a parameterized 3D roadway model. For the automatic modeling
process the add-on “Geo2NX” has been developed for the parametric CAD
software Siemens NX. It generates a volumetric 3D infrastructure model by
interpreting the information contained in the LandXML file.
In the first step, internal objects are created by parsing the LandXML file.
After this, the 3D alignment curve of the road can be generated as a B-spline
object by integrating the 3D points of the alignment as intermediate points of the
B-spline. In the next step the different cross section objects are integrated and
automatically parameterized along the B-spline. At the end of the integration
phases the generated geometrical objects (B-spline and cross section) are extruded
to 3D volumetric objects (Figure 4). Afterwards, different Boolean operations are applied to model the final dam and subsoil bodies.

Figure 4. Elements of the parameterized 3D roadway model
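As a rough illustration of this first step, the following Python sketch reads 3D alignment points from a simplified LandXML-like file and collects them as intermediate points for the alignment B-spline. The element and attribute names are simplified assumptions rather than the exact LandXML schema, and the actual "Geo2NX" add-on works inside Siemens NX.

    import xml.etree.ElementTree as ET

    def read_alignment_points(path):
        """Collect (x, y, z) tuples from a simplified LandXML-like file.

        Assumes point elements of the form <Point x="..." y="..." z="..."/>;
        the real LandXML schema uses namespaced elements and coordinate lists,
        so a production parser would have to follow that schema instead.
        """
        tree = ET.parse(path)
        points = []
        for pnt in tree.getroot().iter("Point"):
            points.append((float(pnt.get("x")), float(pnt.get("y")), float(pnt.get("z"))))
        return points

    if __name__ == "__main__":
        # The collected points would serve as intermediate points of the
        # 3D alignment B-spline created in the parametric CAD system.
        spline_points = read_alignment_points("alignment.xml")
        print(len(spline_points), "intermediate points for the alignment B-spline")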


After this modeling process the semantic data of the subsoil layers (friction
angle, cohesion parameter etc.) is assigned to the different subsoil bodies. The
required semantic information is provided by the GroundXML format
(Obergrießer et al., 2009). Joining the geometric and the semantic information is
realized by interpreting a subsoil ID that is included in the LandXML and GroundXML formats.
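Conceptually, this joining step is a key-based lookup. The short sketch below uses hypothetical field names (not the GroundXML schema) to attach subsoil properties to the generated subsoil bodies via the shared subsoil ID.

    # Geometric subsoil bodies as produced by the modeling step (from LandXML).
    subsoil_bodies = [
        {"subsoil_id": "L1", "volume_m3": 5400.0},
        {"subsoil_id": "L2", "volume_m3": 3100.0},
    ]

    # Semantic layer properties as provided by GroundXML (illustrative values).
    subsoil_properties = {
        "L1": {"friction_angle_deg": 32.5, "cohesion_kN_m2": 5.0},
        "L2": {"friction_angle_deg": 25.0, "cohesion_kN_m2": 15.0},
    }

    # Join geometry and semantics on the common subsoil ID.
    for body in subsoil_bodies:
        body.update(subsoil_properties[body["subsoil_id"]])

    print(subsoil_bodies[0])
    # {'subsoil_id': 'L1', 'volume_m3': 5400.0, 'friction_angle_deg': 32.5, 'cohesion_kN_m2': 5.0}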
The last step of the proposed integrated process is to analyze the
geotechnical structure. In this step, the engineer identifies the stable slope angle of the cutting or the dam object using a structural analysis program. The results of the analysis process may also supply the engineer with information about the
position of a required retaining wall or any other kind of geotechnical structure. In
the original 3D roadway model no additional geotechnical information had been
included. This reflects the design/analysis sequence common in practice today.
The geotechnical structure simulation is realized in 2D by analyzing every
significant cross section of the roadway plan. In the current planning process, the
engineer has to enter every cross section manually. This can be avoided by using
the geometric and semantic information of the parameterized 3D-model approach.
At first, the engineer models the different cross sections considering the geometric
and semantic information regarding the subsoil and the dam/cut conditions. For
this process he uses an interface that automatically interprets and integrates the
data into the geomechanical system (Figure 5).
The geotechnical analysis may result in the need to change certain
parameters of the roadway design. The modified parameters of the dam or cutting
cross sections are therefore transferred back to the 3D modeling system. The
exchange of these parameters is realized using an extended version of the
LandXML format (Rebolj et al., 2008). An update function of the “Geo2NX”
module updates the parametric 3D roadway model by interpreting the XML file,
for example when the slope angle of a dam is changed from 26.5 degrees to 14.0 degrees. If a retaining wall is necessary, the “Geo2NX” module adds the shape of
the wall into the existing cross section. In a next step, the body of the geotechnical
structure is generated automatically by a cross section extrusion. Finally a
parameterized 3D roadway model that includes all information of the infrastructural and geotechnical planning process is available.

Figure 5. Integrated geotechnical analysis model
Figure 6. 2D points of the retaining wall

LandXML
LandXML (www.landxml.org) is a terrestrial and infrastructural extension
of the W3C standard XML format and is used to exchange geo-referenced
information regarding the surveying and infrastructure planning process. The
structure of the data set is defined by the LandXML schema, which is based on the
XML format (Crews et al., 2010). Through the hierarchical structure and its easy
extensibility, complex datasets can be defined and stored in this format. However,
semantic information on geotechnical properties, such as the cohesion parameter or the friction angle, cannot be transferred using this format.

GroundXML
GroundXML is an extension of the LandXML format (Obergrießer et al.,
2009). It can store all geometrical and semantic information resulting from the
survey, geotechnical and infrastructure planning processes (Figure 7). The level of
the stored data depends on the progress of the planning process. In the first step a
GroundXML file is used to transfer the information regarding the geotechnical
data of a 3D subsoil model. In the next step it stores the information regarding the
infrastructure model. Finally it uses the geometrical and semantic infrastructure
information to model the cross sections in the geotechnical structural analysis
system. The major advantage of this format is that it enables a continuous data
stream for the entire planning process.

Figure 7. Excerpt of the GroundXML scheme

Topological problems
During the development of the parameterized 3D-model approach, some problems were discovered. One problem concerns the topology of the cut and dam cross sections. There are three different forms of cross sections (Figure 8). The first cross section includes only dam geometry and the second
one only cut geometry. The third cross section is a mix of a dam and a cut field. It
is easy to model a roadway that consists only of a dam or cutting cross section
because it only requires the extrusion of each cross section along the form of the
3D space curve. An irregular cross section combining dam and cut sections,
however, cannot be modeled like this because the left side of the roadway is in a
dam area and the right side in a cutting field (or vice versa). The problem is the
intersection between the roadway line and the surface line. The existing
intersection point results in violating a cross section extrusion rule. To solve this
problem an advanced modeling concept has been developed which will be
presented in future publications.
CONCLUSION

The presented approach helps to improve the infrastructural and geotechnical planning process. It closes the gap in the information flow between the geometric design and the geomechanical analysis and supports the civil engineer with a parametric 3D road or railway model. In future work, the “Geo2NX” module for generating the parametric 3D model will be improved, the topological problems will be solved, and a possibility to transfer back the results of the geotechnical analysis process will be developed.

Figure 8. Different cut-dam cross sections



REFERENCES

Borrmann, A., Ji, Y., Wu, I-C., Obergrießer, M., Rank, E., Klaubert, C., Günthner,
W. (2009). ForBAU - The virtual construction site project. Proc. of the 24th
CIB-W78 Conference on Managing IT in Construction, Istanbul, Turkey.
Crews, N. and Hall E. (2010). “LandXML Schema.” LandXML Schema Version
1.0 Reference. http://www.landxml.org/ (Dec.10, 2010).
Eastman, C., Teicholz, P., Sacks, R., Liston, K. (2008). BIM handbook: A guide to
building information modelling for owners, managers, designers, engineers,
and contractors, Wiley, New York.
Kaminski, I. (2010). Potenziale des Building Information Modeling im
Infrastrukturprojekt - Neue Methoden für einen modellbasierten Arbeitsprozess
im Schwerpunkt der Planung, Dissertation, Universität Leipzig, Leipzig.
Obergrießer, M., Ji, Y., Baumgärtel, T., Euringer, T., Borrmann, A., Rank, E.
(2009). GroundXML - An addition of alignment and subsoil specific cross-
sectional data to the LandXML scheme. Proc. of the 12th International
Conference on Civil, Structural and Environmental Engineering Computing,
Madeira, Portugal.
Obergrießer, M., Euringer, T., Horenburg, T., Günthner, W. (2011). CAD-
Modellierung im Bauwesen: Integrierte 3D-Planung von Brückenbauwerken,
In: 2. ForBAU Kongress, München.
Rebolj, D., Tibaut, A., Čuš-Babič, N., Magdič, A., Podbreznik, P. (2008).
“Development and application of a road product model.” Automation in Construction, Volume 17, Issue 6, 719-728.
Aspects of Model Interaction in Mechanized Tunneling

K. Lehner1,a, K. Erlemann1,b, F. Hegemann1,c, C. Koch1,d, D. Hartmann1,e, and M. König1,f
1 Chair of Computing in Engineering, Faculty of Civil and Environmental Engineering, Ruhr-Universität Bochum, Universitätsstr. 150, 44801 Bochum, Germany
a PH +49-234-32-27416; FAX +49-234-32-07416; email: karlheinz.lehner@rub.de; b kai.erlemann.lehner@rub.de; c f.hegemann@web.de; d koch@inf.bi.rub.de; e hartus@inf.bi.rub.de; f koenig@inf.bi.rub.de

ABSTRACT

Underground infrastructures are becoming increasingly important components of modern traffic concepts worldwide. This includes, in particular, investigations of the
stability of tunnel faces, material models for subsoil behavior, damage analysis of
tunnel linings and supports as well as process-oriented simulation models for
mechanized shield driving. Due to the strong interaction between the individual tasks
in mechanized tunneling, the exchange of data and the interplay of components in
simulation, the Collaborative Research Center SFB 837 has been established at the
Ruhr-University of Bochum. In this paper, a brief overview of the 14 sub-projects of the SFB 837, each addressing one of the characteristic research areas in underground engineering, is given. The main part of the paper deals with key sub-projects solely
dedicated to the computer-supported integration, visualization and interaction of
related models and information.

INTRODUCTION

The research focus of the Collaborative Research Center SFB 837 “Model Interaction in Mechanized Tunneling”, started at the Ruhr-University of Bochum in 2010, is placed on two main issues. First, there are several sub-projects concerned
with fundamental research problems regarding specific aspects of mechanized
tunneling. This includes subjects such as
● recognizing subsoil structures based on the analysis of machine data and
creating material models for destructuring subsoil behavior,
● using acoustic techniques for underground exploration,
● investigating the stability of tunnel faces,
● creating process oriented simulation models for mechanized shield driving,
including monitoring-based optimization of process work flows and
● employing methods of system identification used for the adaption of
numerical simulation models.


Second, as can be seen from the above list of highly interrelated tasks, a further focus
of research addresses the question of how the individual project models have to be
coupled and how data and ideas can be exchanged in an efficient, collaborative and
practical manner to create synergetic effects that will notably increase the
productivity and creativeness as a whole. In addition, it is of interest how designers,
engineers, managers, TBM operators, maintenance workers and others can
successfully collaborate during the actual construction phase, using tools and ideas
developed within the research projects.
Thus, a specific sub-project (D1) is in charge of the implementation of an
“interaction platform in mechanized tunneling”. Accordingly, this sub-project is
responsible not only for the definition of purely technological aspects of interaction,
such as specifying the type of a network protocol or other communication paradigms,
but also for the establishment of soft skills to classify the amount and type of
interaction needed. The need for collaboration was one of the important lessons
learned from a similar, preceding tunneling project (TunConstruct 2010, Lehner et al.
2007, Beer 2009). In a networked environment of cooperating researchers it is
therefore vital to find a proper balance between technological issues and subject-
specific aspects.

BACKGROUND

System and Process Modeling. A concise and formal description of a complex system composed of interacting individual subsystems, including the rationale for
phenomena created by the system, requires a systems theoretical basis that reaches
into cognitive science. Based on proper tools for systems analysis, the states of a
physical system can be adequately described and, more importantly, possible
improvements and refinements can be found and implemented.
The problems and difficulties that arise during system and process modeling
in large scale research alliances have long been a target of computer science and
computational engineering. This particularly includes aspects of the management and
controllability of information masses where the quantity and complexity of
information input precludes an impromptu approach to information processing.
Product data management and product data models have been introduced as a
mechanism to describe the entire life cycle of a product or an entire engineering
system, including all relevant geometric, material, mechanical, manufacturing or
administrative traits. General engineering standards such as the ISO 10303 standard
for the computer-interpretable representation and exchange of product manufacturing
information (STEP), or the Industry Foundation Classes (IFC) are often used
successfully in various engineering fields; however, they do not (yet) have mature
counterparts in underground engineering.
Product data models therefore play an important role in capturing the characteristics of real-world systems and subsystems, including the underlying system
processes. However, because such processes usually exchange data, information or
knowledge with one another, this customarily creates a large number of interactions
and interdependencies. Even though an intense research activity in the field of system
integration using appropriate computer models and partial models can be observed, it
is still difficult to achieve a robust and loss-free exchange of data and semantics at a
high level. If proper measures to guide integration are not made available in due time,
then the proper and consistent interaction between project partners can be disrupted or
even endangered.

Interaction Modeling. To sensibly manage the dynamic behavior of a complex real-world system, such as an underground excavation site, not only the context of the
overall system must be defined, but also the interactions between the individual
subsystems have to be captured. This also includes the aspects of fuzziness inherent
in complex engineering systems in general, but especially prevalent in subterraneous
engineering. The context of a system defines the boundary of the system and the
types of actors that can manipulate the components of the system (who can do what).
In this respect, a modeling language such as the Unified Modeling Language (UML),
a standard in current computer science, has proven to be a powerful tool to describe
system context and interactions, using the concepts of interconnected objects reacting
to events and the exchange of messages to communicate as typical paradigms.
One research focus in the sub-project D1 is therefore the adaption of general
modeling tools (alike UML) to formally define domain specific semantics, tasks and
rules, using standards such as XML Metadata Interchange (XMI) or the Object
Constraint Language (OCL). Examples from a related research project are given in
(Mundani et al. 2006) and (Niggl et al. 2006). A further example is given in
(Hartmann 2007).

Distributed Computing. With the availability of high-speed, reliable computer


networks, interaction among communicating subsystems can be handled at local
(LAN), enterprise (MAN) or world-wide (WAN) level. Whereas data exchange
between two cooperating software components may be transacted in a local computer network, access to the data server of an operating tunnel boring machine, in contrast, relies on a non-local Internet connection. In the sub-project D1, the
interoperability is based upon distributed computing, taking into account the following facets:
● user-oriented, optimal coordination of resources and capacities, which are
provided in a decentralized and dynamic manner, often using the client-server
paradigm;
● efficient and problem-oriented use of open, standard network protocols and
interfaces to implement resource access and inter-process communication;
● custom-made, domain-specific software service components based on existing
technologies such as web services and other Internet-enabled software.

METHODOLOGY

The realization of the proposed interaction mechanism is based on a four-step methodology consisting of (1) system analysis, (2) system modeling, (3) product
model implementation and (4) interaction platform implementation.
Within the first step, the domain specific interaction structure between
subsystems is analyzed and interacting system processes as well as various coupling
parameters are identified and classified with respect to the multi-level and multi-scale
characteristics of mechanized tunneling. By means of expert interviews, the domain-specific terminology and knowledge regarding system states, actions, activities and
tasks are formally defined. Subsequently, the system and its inherent interactions and
couplings are modeled resulting in a holistic object-oriented ontology for mechanized
tunneling. This ontology developed in the second step contains distributed partial
models incorporating different space and time scales. Hereby, dependencies are
revealed, resulting in either “strong” or “loose” coupling rules, object relations, behavior patterns, data flows, events and actor interconnections.

Figure 1. Three-layer Tunneling Interaction Platform (T-IP)

Within the third step, the identified components as well as the static
interaction structure of the tunnel driving system are implemented as an Object-
Oriented Tunneling Product Model (OOT-PM), with an emphasis on model
consistency and correctness. This model is incrementally improved and enhanced
within the ongoing project and provides a basis for the fourth step, the Tunneling
Interaction Platform (T-IP) implementation. The T-IP supports information retrieval,
model updating, product and process visualization capabilities as well as a context-
sensitive interaction control, in the sense of computational steering, to interactively
run a holistic tunnel driving simulation. Providing a collaboration platform including
system dynamics and organizational aspects, the T-IP is implemented as a three-layer
architecture (see Fig. 1). On the top level, real world couplings and interactions
between sub-processes and actors take place. If feasible, individual partial domain
models and workflows are supported by domain agents and workflow agents,
respectively (middle layer). These agents are organized in a multi-agent system,
which is responsible for keeping dependencies (couplings) consistent and actually
performing defined interactions between partial models. For this purpose, they access
resources (bottom layer) that could be provided as Web services. If autonomous
computing by means of agents is inappropriate, Web services or alternative native
coupling paradigms are applied, for example High-level Architectures for distributed
simulations.

Exemplary Interaction Chain. In order to test and prove the proposed methodology, the overall system complexity is first reduced to a level where a
prototype implementation can be examined quickly and inconsistencies or logical
flaws become immediately apparent. To this end, an exemplary interaction chain of
directly interrelated subprojects has been selected and implemented. This interaction
chain comprises the dependent sub-processes "Advance Exploration" and "Driving
Simulation". The identified key component among these two processes is the
"Ground Model", which is a dominant part of the tunneling product model (OOT-
PM). An overview of the interaction workflow for the chain considered is shown in
Fig. 2 and elucidated as follows.


Figure 2: Workflow in an exemplary interaction chain

To perform advance exploration simulations, data from several different sources are required. On the one hand, the rough geometric and geological conditions of the ground in front of the tunnel boring machine are needed to set up the simulation
environment. In addition to that, current sensor data directly available at the tunnel
boring machine are used as seismic measurements. At the tunnel boring machine,
special actuators produce so-called preface waves, which are reflected by objects
(obstacles) with a density different from their surrounding (for example boulders,
water inclusions, clefts, etc. as illustrated in Fig. 2). The reflected waves are in turn
detected by geophones, which measure the acceleration, velocity or displacement of
the wave field. Both the ground model extract and the seismic measurements are
input to the exploration simulation. Within this simulation, the sensor data is analyzed
and an attempt is made to replicate the received data by changing the geological
conditions in the exploration model. Through defect minimization the recognition of
boulders, water inclusions or other geological irregularities is enabled and used to
improve the common ground model. Then, the incrementally refined ground model
provides an up-to-date basis to perform several other simulations, for example the
driving simulation. Once the ground model is updated, the multi-agent system takes
care of the change notification and propagation, so that other processes can benefit
from the improved data set.

PROTOTYPE IMPLEMENTATION

The fundamental structure of the SFB 837, with its numerous sub-projects and
participants, necessitates an adequate approach for the underlying computational
infrastructure. To ensure persistency, a persistence layer is created to cope with large
data sets and very heterogeneous data formats. This approach is required because, so
far, no accepted common product model file format for tunneling projects exists.
Furthermore, it is a fact that many subprojects are dependent on proprietary formats
provided by existing simulation and analysis tools. As it is often not feasible to map
all incoming simulation data files to a common objects model without loss of
information, the persistence layer is used to store raw data in their respective file
formats and, at the same time, to provide access to all data files as needed.
Traditional relational database management systems (RDBMS) with their
rigid structures are not well suited to address the data heterogeneity needed.
Therefore, a document-oriented database approach using Apache CouchDB has been
chosen. In CouchDB, each document consists of a text body that uses JSON
(JavaScript Object Notation) to define its contents. JSON is a light-weight text format
comparable to the Extensible Markup Language (XML), but with reduced complexity
and smaller computational overhead. By that, documents can be processed in several different programming languages. Additionally, each document may have an arbitrary number of attachments, which allow original raw files originating from the different sub-projects to be stored. For product model data that cannot be transformed directly into a corresponding JSON structure, the original content is stored as an
attachment and annotated with a JSON document containing the meta-data necessary
to find and identify the content.
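A minimal sketch of this storage pattern, assuming a local CouchDB instance and the Python requests library, is given below; the database name, document id and meta-data fields are illustrative only.

    import json
    import requests

    BASE = "http://localhost:5984/tunnel_pm"    # illustrative database URL
    DOC_ID = "ground-model-extract-0042"        # illustrative document id

    # 1) Store the describing meta-data as a JSON document.
    meta = {"type": "ground_model_extract", "project": "SFB837-demo",
            "format": "ACIS", "source": "advance_exploration"}
    resp = requests.put(f"{BASE}/{DOC_ID}", data=json.dumps(meta),
                        headers={"Content-Type": "application/json"})
    rev = resp.json()["rev"]

    # 2) Attach the original raw file to the same document.
    with open("ground_model_extract.sat", "rb") as fh:
        requests.put(f"{BASE}/{DOC_ID}/ground_model_extract.sat",
                     params={"rev": rev}, data=fh,
                     headers={"Content-Type": "application/octet-stream"})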
As the central data repository has to be accessible from a large number of
different clients, the communication layer has to provide easy access to the database
content without relying on heavy-weight protocols or language-specific
communication frameworks. Therefore, a RESTful approach (Representational State
Transfer) has been chosen for client-server communication. REST is usually based on
the HTTP protocol and allows each resource to be accessed and manipulated by sending a standard HTTP request to a Uniform Resource Identifier (URI) that denotes the target resource. The requests are processed by a Tomcat server running dedicated Java
servlets responsible for providing requested data or for updating the model.

The basic approach for processing requests based on the exemplary interaction
chain is shown in Figure 3. Going back to our example, to start a new simulation run,
the advance exploration client needs different sets of input data. First, it sends an HTTP GET request to obtain the geometry of all ground layers in the observed area.
The file format and system boundaries are provided using URI parameters. Then, the
geometry servlet responsible for processing the request fetches the relevant data from
the database and transforms it into the designated target format (e.g. an ACIS file),
which is suitable for generating a finite element model for simulation purposes. In a
second GET request, the corresponding material parameters are fetched as JSON text
and incorporated into the simulation model. Now, the simulation results can be
compared to actual seismic sensor data, which are read in a final GET request. Once
the simulation optimization has found an improved model, the necessary changes are sent back to the respective servlets and stored in the database. As all clients have access to the same data set, all modifications are instantly accessible to other participating sub-systems (in our case, the driving simulation).
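From the client side, the request sequence described above could look roughly like the following sketch; the service URIs and parameters are hypothetical stand-ins for the actual T-IP servlets.

    import requests

    BASE = "http://tip.example.org/services"   # hypothetical service endpoint

    # 1) Fetch the ground layer geometry for the observed area in the desired format.
    geometry = requests.get(f"{BASE}/geometry",
                            params={"format": "acis", "xmin": 0, "xmax": 120}).content

    # 2) Fetch the corresponding material parameters as JSON.
    materials = requests.get(f"{BASE}/geology", params={"layers": "all"}).json()

    # 3) Fetch the current seismic sensor data recorded at the TBM.
    sensors = requests.get(f"{BASE}/sensors", params={"window": "latest"}).json()

    # ... run the advance exploration simulation with geometry, materials, sensors ...

    # 4) Write the refined ground model back so that other clients see the update.
    requests.put(f"{BASE}/groundmodel",
                 json={"layers": materials, "detected_obstacles": ["boulder"]})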

Figure 3: Service architecture and request structure for the interaction chain

CONCLUSIONS AND FUTURE WORK

The concepts, models and implementations described in this paper concerning possible interaction models in underground engineering should only be viewed as a
first approach and pathfinder in the ongoing SFB 837. Although each individual
research subfield in the overall project is a well-defined engineering project reflecting
the state-of-the-art in mechanics, soil structures, composite materials, simulation
technologies, ground modeling, etc., it is still a grand challenge problem to find the
most efficient, user-friendly and technologically viable integration of co-projects to
create a holistic tunneling product model and tunneling interaction platform.
The challenge itself does not only lie in the feasible implementation of
technological standards and protocols, which itself can be a demanding task. Rather,
because independent researchers and co-workers are actually expected to use the
platform, subtle socio-technological aspects must be considered when designing the
architecture of an interoperable system. In other words, it is not sufficient to create a
large, monolithic product data model and expect everyone to (re-)write their software to
fit the model. Because of existing legacy systems, software components and even
simple personal preferences, the driving force “interaction” implies collaboration among researchers.

Acknowledgement

The authors gratefully acknowledge the support of this project by the German
Research Foundation (DFG).

REFERENCES

Beer, G. (ed.), (2009). “Technology Innovation in Underground Construction”, CRC Press, ISBN 978-0-415-55105-2.
Hartmann, T., Neuberg, F., Fischer, M. (2007). “Integrating 3-Dimensional Product
Models into Engineering and Construction Project Information Platforms” In:
Proceedings of Bringing ITC Knowledge to Work
Lehner, K., Mittrup, I., Oberste-Ufer, K., Hartmann, D. (2007). “Software Integration Aspects in a Multi-Disciplinary Research Project on Underground Engineering”. In: Proc. of the 2007 ASCE International Workshop on Computing in Civil Engineering, Pittsburgh, USA.
Mundani, R.-P., Bungartz, H.-J., Niggl, A., Rank, E. (2006) “Embedding,
Organisation and Control of Simulation Processes in an Octree-Based CSCW
Framework” In: Proc. of the 11th Int. Conf. on Comp. in Civil and Building
Engineering, Montreal, Canada
Niggl, A., Rank, E., Mundani, R.-P., Bungartz, H.-J. (2006) “A Framework for
Embedded Structural Simulation” In: Proc. of the 11th Int. Conf. on Comp. in
Civil and Building Engineering, Montreal, Canada.
TunConstruct (2010), “TunConstruct: Advancing the European underground
construction industry through technology innovation”,
http://www.tunconstruct.org/
Robust Construction Scheduling Using Discrete-Event Simulation

M. König1
1
Chair of Computing in Engineering, Institute of Computational Engineering,
Faculty of Civil and Environmental Engineering, Ruhr-Universität Bochum,
Universitätsstr.150, Building IA, 44780 Bochum, Germany, PH (49) 234 32-
23047; FAX (49) 23432-14292; email: koenig@inf.bi.rub.de

ABSTRACT
In construction management the definition of a robust schedule is often more
important than finding an optimal process sequence for the construction activities. In
nearly every construction project the planned schedule must be continually adapted
due to disruptions like activities take longer than expected, construction equipments
have failures, resources vary, delivery dates change or new activities have to be
considered. Therefore, it is imperative to generate a robust schedule regarding the
different project objectives like time, costs or quality. In the context of planning and
scheduling the term “robust” means that normal project variations have no significant
effects on the schedule and mandatory project objectives. One appropriate concept to
analyze the robustness of schedules is to simulate different typical disturbance
scenarios. In the end, from the multitude of valid schedules the one that is nearly
optimal and highly robust is selected for execution. In this paper a concept is
presented to generate robust construction schedules using evolution strategies.
Therefore, it is necessary to define reasonable robustness criteria to evaluate the
schedules. Two important robustness criteria are presented in the paper. Finally, the
practicality of the presented robust scheduling approach is validated by a case study.

INTRODUCTION
Usually, during the execution of construction projects numerous disruptions can occur: for example, some activities take longer than expected, construction equipment fails, resources vary, delivery dates change, or new activities have to be considered. The challenge is to handle all these uncertainties and the resulting disturbances. Therefore, the main criterion for the generation of a schedule should not be to find a global optimum regarding time, costs or quality, but rather to define a robust schedule that can react flexibly to possible disruptions. Several definitions of robustness have been proposed. Billaut et al. (2005) state that a schedule is robust if its quality is little sensitive to data uncertainties and to unexpected events. Another definition is that a robust schedule is one that is likely valid under a wide variety of disturbances (Leon et al. 1994).
Dealing with uncertainties is nothing new in the context of scheduling. However, often only the durations of activities are considered as stochastic variables. The uncertain numerical data are assumed to be random and to obey a known probability distribution. Thus, for every activity an appropriate probability distribution for the duration needs to be defined. Then, by applying Monte Carlo simulation, an expectancy value for the total project time can be calculated under the given
constraints and resources.
When considering robustness during scheduling one difficulty is to measure
the robustness of a schedule. In some concepts robustness is defined as the weighted
sum of the expected absolute deviation between the realized activity start times and
the planned activity start times (Herroelen and Leus 2004). The weights represent
disruption costs of the activities and may include additional storage and resource
costs or costs related to agreements with subcontractors. The realized activity start
times are often generated randomly based on assumptions of the project planner. The
goal is now to minimize this robustness function. Considering resources and different
possible execution orders lead to an NP-complete problem which is normally solved
by using discrete-event simulation.
Generally, two main solution approaches dealing with uncertainties and
robustness can be distinguished (Davenport and Beck 2000). Proactive approaches
that take into account some knowledge of uncertainties and reactive approaches for
which the schedule is revised each time a disruption occurs. The aim of proactive
scheduling is to make the schedule more robust. In most of the approaches a first
feasible initial so-called baseline schedule is generated considering technological
interdependencies and resources. This baseline schedule is not protected against
possible disruptions. Then the robustness of the baseline schedule needs to be
increased. Therefore, different concepts can be applied. Some research activities
focus on robust resource allocation, i.e. to assign the available resources in an optimal
way regarding a defined robustness function. Another concept focus on the
intersection of time buffers to prevent from distortions throughout the schedule
(Herroelen and Leus 2004).
Contrarily, reactive scheduling concepts can be applied during execution
when disruptions occur that cannot be absorbed by the planned schedule. Reactive
procedures try to repair the schedule in such a way that the original structure of the
schedule is only changed as minimally as possible. Further particulars can be found in
van de Vonder et al. (2006).
In this paper the focus lies on proactive strategies for robust scheduling. Two
robustness criteria are used to evaluate construction schedules. The deviation between
the realized activity start times and the planned activity start times is one criterion. Another criterion is the structural difference between the realized schedule and the planned schedule. Thereby, the order of the activities is taken into account. Activity delays, breakdowns of resources and unavailability of workers are considered as uncertainties. The delays and probabilities must be defined for every activity and
every resource. The generation of robust schedules is done by using discrete-event
simulation. One simulation experiment calculates one feasible schedule considering
all construction constraints. Furthermore, evolution strategies are adopted in this
paper to find robust as well as efficient construction schedules.

ROBUST SCHEDULING CONCEPT



Construction schedules can be generated by solving the underlying Resource-Constrained Project Scheduling Problem (RCPSP). The RCPSP can be described as follows: A project consists of a set A = {1, ..., n} of activities, which must be performed on a set R = {1, ..., m} of resources. An activity j ∈ A requires r_jk ≥ 0 units of resource k ∈ R throughout its processing time p_j ≥ 0. Each resource k ∈ R has a limited capacity R_k > 0. Precedence relations exist between the activities,
such that one activity j cannot be started before all its immediate predecessors are
completed. The aim is now to find a precedence and resource-capacity feasible
schedule regarding different objectives such as total time, costs or quality. It has
been shown that Resource-constrained Project Scheduling Problems are NP-
complete and cannot be solved exactly by using analytic methods. Therefore, in
most cases different heuristics in combination with simulation are applied. In König
et al. (2007) a constraint-based simulation approach is introduced, which is used in this work to generate feasible schedules.
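For illustration, the following Python sketch builds a precedence- and resource-feasible schedule with a simple serial scheduling scheme; it is a didactic stand-in only and not the constraint-based simulation approach of König et al. (2007).

    def serial_schedule(activities, capacity):
        """Build a precedence- and resource-feasible schedule (serial scheme).

        activities: dict id -> {"duration": int, "demand": {res: units}, "pred": [ids]}
        capacity:   dict res -> available units per time step
        Returns a dict id -> start time.
        """
        finish, start, usage = {}, {}, {}

        def fits(t, act):
            return all(usage.get((t + dt, r), 0) + u <= capacity[r]
                       for dt in range(act["duration"])
                       for r, u in act["demand"].items())

        unscheduled = set(activities)
        while unscheduled:
            # pick an activity whose predecessors are all finished (no priority rule)
            j = next(a for a in unscheduled
                     if all(p in finish for p in activities[a]["pred"]))
            act = activities[j]
            t = max([finish[p] for p in act["pred"]], default=0)   # precedence constraint
            while not fits(t, act):                                # resource constraint
                t += 1
            start[j], finish[j] = t, t + act["duration"]
            for dt in range(act["duration"]):
                for r, u in act["demand"].items():
                    usage[(t + dt, r)] = usage.get((t + dt, r), 0) + u
            unscheduled.remove(j)
        return start

    acts = {"A": {"duration": 2, "demand": {"crew": 1}, "pred": []},
            "B": {"duration": 3, "demand": {"crew": 1}, "pred": ["A"]},
            "C": {"duration": 2, "demand": {"crew": 1}, "pred": ["A"]}}
    print(serial_schedule(acts, {"crew": 1}))   # e.g. {'A': 0, 'B': 2, 'C': 5}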
The generation of robust schedules is done in four steps (see Figure 1). The
first step is to calculate some feasible schedules by applying the constraint-based simulation approach. Thereby, only deterministic values for the durations of
activities are used and all resources are set to be available continuously. The results
are so-called baseline schedules. Next, different robustness checks are performed for
each baseline schedule. That means that some durations are changed according to pre-defined delay assumptions. Furthermore, some resources are set to be unavailable for
a certain time period. Subsequently, simulation experiments with different
disruptions are performed. The experiments generate so-called disturbed schedules.
Now, for all disturbed schedules robustness values and averages are calculated. The
averages are used to evaluate the baseline schedules. Besides robustness, some other criteria like time, costs and quality can also be used for the evaluation. The main
challenge is to find a robust and efficient baseline schedule in a reasonable amount of
time. Different optimization approaches can be applied to guide and control the
generation process of new baseline schedules. In this paper the evolution strategy
optimization technique is used to find, on the one hand, robust and, on the
other hand, efficient solutions. That means new baseline schedules are generated by
selection, recombination and mutation using schedules of the current generation.
More details about meta-heuristic optimization based on evolution strategies can be found in Beyer and Schwefel (2001).


Figure 1. Robust scheduling procedure.

ROBUSTNESS CRITERIA
The specification of robustness criteria is not trivial. A very common
robustness measurement r_s is the weighted sum of the expected absolute deviation between the realized activity start times S_i and the planned activity start times s_i. The activity weights w_i denote the costs of a deviation between the start times:

    r_s = \sum_{i} w_i \cdot |S_i - s_i|

Very important for the robustness measurement r_s are the realized activity start times S_i and the activity weights w_i. However, during scheduling no realized activity start times are available. Therefore, the planner must estimate possible delays resulting from typical disruptions during the execution. Currently, only a discrete, pre-determined delay value is considered to calculate the realized activity start time for each activity. However, the concept can easily be extended by distribution functions or other uncertainty concepts like fuzzy sets. In the same
way the activity weights must be defined. Pre-defined probabilities associated with
linguistic values are provided to support the planner within the weighting process.
Additionally, delays can occur if resources are not available. Consequently, possible
breakdown intervals and breakdown times must be defined for each resource type.
For some resources experienced data about failures and maintenance exists. For
other resources failure probabilities must be estimated. Within this paper only simple
probabilities and fixed breakdown times are used. However, the bounds and
linguistic variables can be adapted according to project planner’s experiences.

The second robustness criterion is the structural difference between the planned and the realized schedule. In this paper, the structural difference is defined as the stability of the activity execution position r_p. Within each simulation run the starting order of all activities is stored as a so-called activity execution list (AEL). Thus, each activity has a certain execution position within the planned and the realized schedule. Consequently, the robustness measurement r_p is the absolute deviation between the realized activity position P_i and the planned activity position p_i:

    r_p = \sum_{i} |P_i - p_i|

Both robustness measurements are used to evaluate the robustness of baseline schedules. For this purpose, the Pareto optimality concept is applied instead of calculating a weighted sum. Detailed information about Pareto optimality can be found in T’kindt and Billaut (2005). Hence, the project planner gets not only one robust and efficient schedule, but rather a set of Pareto optimal solutions.
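As a small worked illustration with invented values, the two measurements can be computed from a planned and a disturbed schedule as follows; the position of an activity is taken as its rank in the start-time order.

    def robustness_measures(planned, realized, weights):
        """Compute r_s (weighted start-time deviation) and r_p (position deviation).

        planned/realized: dict activity -> start time, weights: dict activity -> cost weight.
        """
        r_s = sum(weights[a] * abs(realized[a] - planned[a]) for a in planned)

        # Execution positions are the ranks of the activities ordered by start time.
        def positions(schedule):
            order = sorted(schedule, key=schedule.get)
            return {a: i for i, a in enumerate(order)}

        p_plan, p_real = positions(planned), positions(realized)
        r_p = sum(abs(p_real[a] - p_plan[a]) for a in planned)
        return r_s, r_p

    planned  = {"A": 0, "B": 2, "C": 5}
    realized = {"A": 0, "B": 4, "C": 3}          # B was delayed, C overtook B
    weights  = {"A": 1.0, "B": 2.0, "C": 1.0}
    print(robustness_measures(planned, realized, weights))   # (6.0, 2)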

IMPLEMENTATION

The system architecture consists of two main parts: the specification of the input data for the robustness analysis of baseline schedules, and the generation of new baseline schedules using selection, recombination and mutation in the context of evolution strategies. Modeling realistic disruptions is not trivial. Within the implemented robustness framework, different parameters can be controlled by the user to define realistic disruption scenarios. The most important parameters are shown in Table 1.
Table 1. Parameters for disruption scenarios.
Parameter | Description
Delay [min] | Average delay of an activity or an activity type.
Delay probability [%] | Probability which is used to determine whether an activity delay must be considered at the current simulation event. Normally, standard values like very rare = 1%, rare = 5%, common = 10%, often = 20%, or very often = 30% are used.
Breakdown time [min] | Average breakdown time of a resource or a resource type.
Breakdown probability [%] | Probability which is used to determine whether a breakdown of a resource must be considered during the simulation run. Normally, standard values like very rare = 1%, rare = 5%, common = 10%, often = 20%, or very often = 30% are used.
Breakdown frequency [%] | Frequency with which the breakdown probability is evaluated during the simulation run. If the frequency is set to 100%, the complete breakdown probability is considered every time the resource is assigned to an activity; otherwise the breakdown probability is multiplied by the breakdown frequency.
Project downtime [min] | Average downtime of the complete project, i.e. all activities are suspended simultaneously.
Project downtime probability [%] | Probability which is used to determine whether a project downtime must be considered at the current simulation event.
Some parameters are directly used during the preparation of the input data
for a robustness scenario simulation experiment. For each activity a random number
is generated and checked against the delay probability to determine whether the
activity will be delayed or not. That means that a delayed activity has a longer
duration in the current scenario. Other parameters are primarily used within the
discrete-event simulation. These include the breakdown of resources or a complete
project downtime. At each simulation time point the probabilities are evaluated
based on generated random numbers. For realistic scenarios, the selected probability values should not be too large.
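Sampling one disruption scenario from such parameters can be sketched in a few lines of Python; the parameter values below are placeholders in the spirit of Table 1, not data from the case study.

    import random

    def disturb(activities, resources, seed=None):
        """Sample one disruption scenario from Table-1 style parameters.

        activities: dict id -> {"duration": float, "delay": float, "delay_prob": float}
        resources:  dict id -> {"breakdown_time": float, "breakdown_prob": float}
        """
        rng = random.Random(seed)
        disturbed_durations = {
            a: spec["duration"] + (spec["delay"] if rng.random() < spec["delay_prob"] else 0.0)
            for a, spec in activities.items()
        }
        breakdowns = {
            r: (spec["breakdown_time"] if rng.random() < spec["breakdown_prob"] else 0.0)
            for r, spec in resources.items()
        }
        return disturbed_durations, breakdowns

    acts = {"concreting": {"duration": 6.0, "delay": 1.0, "delay_prob": 0.05},   # rare
            "curing":     {"duration": 8.0, "delay": 1.0, "delay_prob": 0.01}}   # very rare
    res = {"crane": {"breakdown_time": 120.0, "breakdown_prob": 0.10}}           # common
    print(disturb(acts, res, seed=42))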
The constraint-based simulation model uses all these parameters and the
given positions of the activities of the baseline schedule to generate a new disturbed
schedule. The order of the activities can change if the required resources are not
available or certain activities are not finished yet. Consequently, the disturbed
schedule has some modifications with respect to activity start times and activity
orders. This procedure is repeated for all baseline schedules of the
current population.

The (µ+λ) evolution strategy is applied to identify robust and efficient schedules (Beyer and Schwefel 2001). As aforementioned, in order to evaluate schedules, often more than one criterion must be considered. Thus, within the selection process of efficient candidates the Pareto optimality concept is used for the recombination step. The selection process is based on the tournament strategy. In
the context of evolution strategy optimization this selection procedure is widely
used (Bäck and Schwefel 1993). Tournament selection involves running several
"tournaments" among a few baseline schedules chosen at random from the
population. The winners of each tournament are determined by Pareto optimality.
In the next step the activity execution lists of selected baseline schedules are used
for recombination. For this purpose an Order Crossover-2 operator with a fixed
swapping length for the activity positions is applied (Starkweather et al. 1991).
Currently, mutation operators are not implemented yet. The modified activity
execution lists serve as input data for the generation of new baseline schedules
using constraint-based simulation. The best baseline schedules of the current
generation and the new baseline schedules are used for the next analysis and
evaluation process.
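The selection logic can be summarized in the following compact sketch, assuming each baseline schedule has already been evaluated with respect to several objectives to be minimized (e.g. project duration, r_s and r_p); the functions are illustrative and do not reproduce the authors' implementation.

    import random

    def dominates(a, b):
        """Pareto dominance for minimization: a is no worse in all objectives, better in one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def tournament(population, objectives, size=3, rng=random):
        """Pick a parent: the non-dominated candidate of a small random tournament."""
        candidates = rng.sample(population, size)
        winner = candidates[0]
        for c in candidates[1:]:
            if dominates(objectives[c], objectives[winner]):
                winner = c
        return winner

    def mu_plus_lambda(parents, offspring, objectives, mu):
        """(µ+λ) survivor selection: keep the µ least-dominated schedules of parents + offspring."""
        pool = parents + offspring
        rank = {s: sum(dominates(objectives[o], objectives[s]) for o in pool) for s in pool}
        return sorted(pool, key=rank.get)[:mu]

    # Schedules are identified by name; objectives = (duration, r_s, r_p), all minimized.
    objs = {"S1": (36.1, 0.22, 0.58), "S2": (35.0, 0.45, 0.60), "S3": (38.0, 0.20, 0.40)}
    parent = tournament(list(objs), objs, size=2)
    survivors = mu_plus_lambda(["S1", "S2"], ["S3"], objs, mu=2)
    print(parent, survivors)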

CASE STUDY
In order to test the practicality of the presented robustness scheduling
approach, a case study was realized. The case study includes the scheduling of shell construction activities of an office building with 14 similar levels. Within this case study, only two levels with a total of 512 activities were simulated. Some of the considered activities including their main attributes and parameters for the
robustness scenario analysis are shown in Table 2.
Table 2. Shell construction activities and robustness input data.
Element type | Activity | Performance factor | Delay | Delay probability | Weight
Column | Installing formwork | 0.5 h/m2 | 0.04 h/m2 | rare | low
Column | Reinforcing steel | 0.05 h/kg | 0.01 h/kg | common | average
Column | Concreting | 2 h/m3 | 0.3 h/m3 | very rare | high
Column | Curing | 8 h | 1 h | very rare | very high
Column | Removing formwork | 0.3 h/m2 | 0.04 h/m2 | common | low
Wall | Installing formwork | 0.3 h/m2 | 0.02 h/m2 | rare | low
Wall | Reinforcing steel | 0.4 h/m2 | 0.02 h/m2 | common | average
Wall | Concreting | 0.65 h/m3 | 0.05 h/m3 | very rare | high
Wall | Curing | 8 h | 1 h | very rare | very high
Wall | Removing formwork | 0.3 h/m2 | 0.04 h/m2 | common | low
Slab | Installing ceiling table | 0.45 h/m2 | 0.06 h/m2 | common | low
Slab | Reinforcing steel | 0.4 h/m2 | 0.03 h/m2 | rare | average
Slab | Installing concrete distributor | 4.3 h | 1 h | common | very high
Slab | Concreting | 0.5 h/m3 | 0.01 h/m3 | very rare | high
Slab | Curing | 8 h | 1 h | very rare | very high
Slab | Removing formwork | 0.3 h/m2 | 0.04 h/m2 | rare | low

For each baseline schedule, 100 robustness scenarios were simulated to calculate significant robustness averages. The selection size for the evolution
strategy was set to 50 baseline schedules. For each recombination step 20 new
baseline schedules were generated. The initial population for the first robustness
analysis was set to 70. After 200 iterations the optimization process was stopped.
In Figure 2, the normalized robustness measurements r_s and r_p of all 100 associated robustness scenarios of one baseline schedule are highlighted. The normalized robustness averages are 0.369 and 0.369.
Figure 2. Normalized robustness measurements of disturbed schedules.

After 200 generations, 10 Pareto optimal baseline schedules were determined. The optimal schedules are depicted in Figure 3. The following optimal values were determined: a total project duration of 36.07 days and robustness averages of 0.22 and 0.58. The optimal schedules can be exported to standard
project management software (e.g. Microsoft Project, Primavera) for further
planning.

Figure 3. Scatter plots of Pareto optimal baseline schedules.

CONCLUSION AND OUTLOOK


Construction projects are typically subject to considerable uncertainty.
This often leads to large discrepancies with respect to the planned time,
costs or quality. One concept to handle disruptions during execution is to make the
schedules more robust. In this paper a concept is presented to measure and increase
the robustness of construction schedules. Two different robustness criteria are
highlighted. For this purpose, it is necessary to define and simulate a multitude of robustness scenarios for each schedule. A multi-criteria optimization approach based on evolution strategies and the Pareto optimality concept is applied to generate robust and efficient schedules. Using an example, it could be shown that a robust schedule can also be efficient in terms of time.
An interesting area of future research is the exploration of other robustness
measures than the weighted activity starting time deviations or the activity order
deviations. In particular, a closer inspection of activity and resource constraints could
lead to other robustness assessments. Another strong connected criterion is the
flexibility of a schedule. This means how flexible a schedule can be modified if
execution disruptions with substantial effects occur. Furthermore, the decision-
support of the planners during the specification process of robust and efficient
construction schedules need to be improved. Thereby, an appropriate visualization of
differences between possible schedules is an important fact for practical applications
in the construction industry.

REFERENCES
Bäck, T., and Schwefel, H.-P. (1993). “An overview of evolutionary algorithms
for parameter optimization”, Evolutionary Computation, Spring 1993, Vol. 1,
No. 1:1–23
Beyer, H.-G., and Schwefel, H.-P. (2002). “Evolution Strategies: A Comprehensive Introduction”, Natural Computing, 1(1):3–52.
Billaut, J.-C., Moukrim, A., and Sanlaville, E. (2005). “Flexibilité et robustesse en
ordonnancement”, Hermès, Paris
Davenport, A., and Beck, J. (2000). “A survey of techniques for scheduling with
uncertainty”, http://www.eil.utoronto.ca/profiles/chris/gz/uncertainty-
survey.ps, (2010-12-10)
Herroelen, W., and Leus, R. (2004). “Robust and reactive project scheduling: A
review and classification of procedures”, International Journal of
Production Research, vol. 42, num. 8, p. 1599-1620
Leon, V. J., Wu, S. D., and Storer, R. H. (1994). “Robustness measures and
robust scheduling for job shops”, IIE Transactions, 26(5):32-43
König, M., Beißert, U., Steinhauer, D., and Bargstädt, H.-J. (2007). “Constraint-
Based Simulation of Outfitting Processes in Shipbuilding and
Civil Engineering”, Proceedings of the 6th EUROSIM Congress on Modeling
and Simulation, Ljubljana, Slovenia
Starkweather, T., Mcdaniel, S., Whitley, D., Mathias, K., and Whitley, D.
(1991). “A Comparison of Genetic Sequencing Operators”, Proceedings of
the fourth International Conference on Genetic Algorithms, Morgan
Kaufmann, 69-76
T’kindt, V., and Billaut, J.-C. (2005). “Multicriteria Scheduling – Theory,
Models and Algorithms”, Springer, Berlin Heidelberg
Van de Vonder, S., Ballestin, F., Demeulemeester, E., and Herroelen, W.
(2006). “Heuristic procedures for reactive project scheduling”, Report num.
KBI 0605, Department of Decision Sciences & Information Management,
Katholieke Universiteit Leuven, Belgium.
The Development of the Virtual Construction Simulator 3: An Interactive
Simulation Environment for Construction Management Education

Sanghoon Lee1, Dragana Nikolic2, John I. Messner3 and Chimay J. Anumba4

1Department of Architectural Engineering, Pennsylvania State University, 104 Engineering Unit A, University Park, PA 16802; PH (814) 863-6786; FAX (814) 863-4789; email: SHLatPSU@gmail.com
2Department of Architectural Engineering, Pennsylvania State University, 104 Engineering Unit A, University Park, PA 16802; PH (814) 865-5022; FAX (814) 863-4789; email: dragana@psu.edu
3Department of Architectural Engineering, Pennsylvania State University, 104 Engineering Unit A, University Park, PA 16802; PH (814) 865-4578; FAX (814) 863-4789; email: jmessner@engr.psu.edu
4Department of Architectural Engineering, Pennsylvania State University, 104 Engineering Unit A, University Park, PA 16802; PH (814) 865-6394; FAX (814) 863-4789; email: anumba@engr.psu.edu

ABSTRACT
This paper discusses the development of the Virtual Construction Simulator
(VCS) 3 - a simulation game-based educational tool for teaching construction
schedule planning and management. The VCS3 simulation game engages students in
learning the concepts of planning and managing construction schedules through goal
driven exploration, employed strategies, and immediate feedback. Through the
planning and simulation mode, students learn the difference between the as-planned
and as-built schedules resulting from varying factors such as resource availability,
weather and labor productivity. This paper focuses on the development of the VCS3
and its construction physics model. Challenges inherent in the process of identifying
variables and their relationships to reliably represent and simulate the dynamic nature
of planning and managing of construction projects are also addressed.

KEYWORDS: Construction Education, Construction Management, Simulation, Problem-Based Learning

INTRODUCTION
The nature of building construction is dynamic due to factors that are difficult to manage, such as resource availability, weather conditions, and the performance of resources. Project delays are often costly, causing significant problems for contractors and the owner. It is imperative for construction professionals to be able to
deal with unanticipated events and problems to complete the project on time and within
the budget.
Educators are tasked with equipping students with knowledge to develop feasible
construction schedules and manage common and unforeseen problems at the site.
When learning construction scheduling concepts, students typically start by
interpreting 2D drawings and supplemental documents, and then identify activities
and establish relationships into a logical sequence. Critical Path Method (CPM)
schedule development using 2D/3D drawings, coupled with lectures and a traditional assignment format, however, fails to motivate students to try different approaches when solving construction-related problems. In addition, this method relies on the students’
personal ability to interpret the documents. It is not easy for students to tell whether
the developed schedule has any conflicts or deficiencies, especially when the project
is complex. The opportunity for students to experience real construction processes
remains limited to field trips, case studies, and exercises with typical building
projects based on real construction projects. While valuable, site visits are too short
for students to see construction progress over time, and learn about inherent risks and
challenges.
Computational support for education using various simulation techniques, 3D/4D
modeling, and Virtual Reality (VR) technology has significantly advanced and is
increasingly used in solving various construction problems. Simulation technologies
offer students opportunities to experience realistic scenarios and actively learn to
develop construction plans, test solutions and modify strategies accordingly. To
engage students in active learning of construction scheduling concepts, our research
team developed and evaluated an educational simulation – the Virtual Construction
Simulator (VCS).

BACKGROUND
The construction industry increasingly employs commercial schedule
development applications to support visualization of the construction process.
However, the solution quality still greatly depends on the developer’s personal
knowledge and experience. Due to limited practical experience, students often
struggle to detect conflicts and make informed decisions when developing a
construction plan. Furthermore, drawings and bar chart schedules impede students’
ability to visualize spatial data and their temporal relationships. Construction
simulations, 4D modeling, and building information modeling (BIM) have become
valuable tools for developing and visualizing construction schedules and processes.
Construction engineering programs are progressively incorporating advanced
simulation technologies to prepare students to respond to industry needs. Examples of
simulation technologies used in education include the 3D visualization system for
construction operations simulation (Kamat et al., 2001) and the virtual construction
model for integrating the design and construction process to improve constructability
(Thabet, 2001). In particular, 4D modeling can aid in visualizing the construction
schedule of each building element in a 3D environment in sequence over actual
construction time so that project participants can see construction progress and easily
identify any potential problems such as time-space conflicts, congestion, and
accessibility problems prior to actual construction.
A simulation is a useful tool to test the developed construction plan. From the
educational perspective, simulations can help students learn complex concepts as they

can see immediate impacts of their decisions in a close-to-realistic simulation environment. Computer simulations allow students to interact with the environment,
resulting in increasing motivation, active participation and more engagement in the
learning process (Chen and Levinson, 2006; Galarneau, 2004). Much research has
been conducted to assess the value of educational simulations in construction
engineering. For instance, Rojas and Mukherjee (2005) developed the Virtual Coach, a situational simulation environment where students are able to learn from problems that the system generates randomly. Al-Jibouri et al. (2005) developed and tested a
simulation as a supplementary education tool, where users can plan construction,
monitor progress and manage uncertain incidents while constructing rock and clay
dams. Martin (2000) developed a project management simulation engine, with which
educators can create customized simulations for project management.

MOTIVATION
The Virtual Construction Simulators (VCS) project sought to address existing
limitations in traditional methods for teaching construction scheduling and explore
simulation technologies for active and engaged learning. The goal of the VCS project
is to provide students with opportunities for scenario-based learning through
practicing decision-making skills, testing different strategies and outcomes to achieve
optimal solutions.
The current VCS simulation application is a continuation of the research efforts
initiated in 2004. The first version (VCS1) developed as a 4D learning module
integrated the processes of viewing a 3D model and creating a construction sequence
(Wang, 2007; Wang et al. 2007), eliminating the need for the CPM schedule and its
subsequent linking to each corresponding 3D building element. The implementation
with undergraduate students in Architectural Engineering demonstrated improvement
in student communication and interaction, efficiency in time spent on understanding
construction problems and greater focus on developing solutions, resulting in higher
quality solutions. The second VCS version (VCS2) addressed certain limitations of
the VCS1 in terms of software development and the user interface (Jaruhar, 2008).
The VCS2 focused on more robust interaction while developing construction plans.
The newly added functions include preset viewpoints, sequencing activities in a chain,
automatic schedule generation, and save and load functions. The same building
model used in the first version, a typical floor of the MGM Grand hotel, was
embedded. Throughout the implementation of the VCS2 with students in the same
course as the VCS1, the VCS2 showed that it can reduce time throughout the process
of construction schedule development and that its interface was more intuitive and
user-friendly than the VCS1. The students also found that the VCS and 4D modeling were effective tools for communication and helped them better understand
their construction schedules.
However, one of the most crucial limitations of both the VCS1 and VCS2 is that
the user needs to manually identify a set of activities for building element types and
manually calculate the duration of each activity prior to developing a construction
sequence inside the application. Furthermore, there is no function to check if a user’s
values are logically correct and whether the resulting schedule is realistic. The applications reproduce
4D simulations based on the input data. Specific project constraints which would add
to the realism and encourage developing feasible solutions were not yet included.

Hence, the only feedback students received on the schedule solution came from the
instructor’s comments after the simulation was presented in the classroom. In
addition, the applications required a considerable amount of repeated typing and mouse
interaction to complete the schedule development.

DEVELOPMENT OF VIRTUAL CONSTRUCTION SIMULATOR 3


The development objective of the current VCS version (VCS3) was to extend the functionality of the previous versions by including project-based constraints and immediate feedback to students, along with a more comprehensive, realistic, and engaging simulation environment. Thus, students can learn key concepts, test
decisions and experience impacts of different approaches directly from the
application.

Features of VCS3
The following is the list of VCS3 features implemented to achieve the project
goal. They were mainly elicited from the analysis of the evaluation results and students’ feedback, which was obtained through surveys and focus group discussions.
(1) Project-based constraints and rules: Project-based constraints and rules
provide scenario-based, goal-driven exploration and help the user develop feasible
construction schedules. In addition to scenario-based constraints such as budget or
available resources, the VCS3 has embedded both physical and activity constraints,
against which each activity is checked before its planned start. The VCS3 allows a
new activity to start only when both conditions are met – all the building elements
identified as physical constraints for the given building element associated with the
activity are constructed; and also activity predecessors within the building element
instance are completed. For example, activities associated with constructing a column
can start only after the footing for the column is constructed. Also, among activities
associated with constructing a footing, the excavation activity needs to be completed
before other activities such as formwork or concrete placement can start. Information
about physical and activity constraints for each building element is stored in the
project database.
(2) Productivity factors: Resources in VCS3 consist of equipment and labor.
The productivity of each laborer can vary as a function of project experience and
weather conditions. The user dynamically manages labor and equipment during the
construction simulation to respond to any changes in construction progress. In
addition to the currently implemented factors (weather and learning curve), factors identified as impacting project performance, such as project experience, fatigue, site congestion, and random equipment breakdowns, will be added later.
(3) Performance feedback: The report interface summarizes daily construction
progress allowing the user to track schedule progress, resource utilization with
comparison of time spent on site and time worked, and cost data for the day as well as
cumulative. The report data guides students to make appropriate decisions and
adjustments if necessary for the next simulation day.
(4) Goal-driven exploration: The VCS3 engages students in exploration of
different solutions depending on project goals. For example, depending on a scenario
a user can choose construction methods and allocate resources to complete the project
with minimum cost, or test strategies to construct the project under time constraints

within the given budget. Real time resource management and the performance
feedback help the user achieve the user-defined goal.
(5) Pre-defined construction activities and corresponding method sets: The
VCS3 provides the user with a set of pre-defined activities to construct the particular
building element type. For each building element type and its defined list of
construction activities, the user selects between possible construction methods
depending on the project goal. The VCS3 then generates and assigns selected
methods to all the building element group instances of the same type. Thus, the user
does not create custom activities but the VCS3 generates activities automatically
based on selected construction methods. The automated activity list eliminates the error-
prone manual data input process of the previous versions of the VCS. Information
about construction methods and activities is stored in the MS Access project database
and can be easily modified if necessary.
(6) Development of construction plans and review process: Figure 1 illustrates
the process of planning and simulating a construction schedule using the VCS3. The
process consists of two main phases: construction planning and simulation. During
the construction planning phase, the user develops a construction plan by selecting
construction methods from the list of applicable methods, allocating resources to each
activity and sequencing the activities. After developing a sequence, the user can
estimate the total time to complete the construction project using MS Project.

Figure 1. The procedure of developing construction schedules using VCS3


In the simulation phase, the user assumes the role of superintendent and makes
daily decisions regarding resources to be on site. Before each simulated day starts,
the user hires labor and equipment resources for the construction activities scheduled
to start and activities in progress for that day. New construction activities can start
only when all the physical and activity constraints are fulfilled. For each new activity
scheduled to start, the user is prompted to allocate a number of available resources.
Although the user allocates resources in the planning phase, in simulation mode the
user can change the crew size and thus accelerate activities if needed. This process
continues throughout the simulated day and for each of the following days until the
construction is completed. In this manner, the user actively tracks the construction
progress and changes that may occur, manages resources to respond to changes and
delays, and thus learns the dynamic nature of the construction by seeing the

differences between the as-planned and as-built schedule.
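A minimal sketch of this daily simulation loop is given below. Java is used here purely for illustration (the VCS3 itself is built on the XNA framework), and all class and method names are invented for the sketch; it only shows how the two start conditions, satisfied physical constraints and completed predecessor activities, gate the daily progress calculation.

```java
import java.util.*;

// Illustrative sketch of the daily simulation loop: an activity may start only when its
// physical constraints (other building elements) are built and its predecessor activities
// are finished. Names and values are placeholders, not the VCS3 API.
public class DailySimulationSketch {

    static class Activity {
        String name;
        double remainingWork;                                   // remaining labor hours
        List<Activity> predecessors = new ArrayList<>();
        List<String> physicalConstraints = new ArrayList<>();   // ids of required elements
        int crewSize;                                            // workers assigned for the day
        boolean done() { return remainingWork <= 0; }
    }

    // Both conditions from the text must hold before an activity may start.
    static boolean canStart(Activity a, Set<String> builtElements) {
        return a.predecessors.stream().allMatch(Activity::done)
            && builtElements.containsAll(a.physicalConstraints);
    }

    // One simulated working day: progress every activity that is allowed to run.
    static void simulateDay(List<Activity> activities, Set<String> builtElements,
                            double hoursPerDay, double productivityFactor) {
        for (Activity a : activities) {
            if (!a.done() && canStart(a, builtElements)) {
                // In the VCS3 the user is prompted to adjust the crew here; the sketch keeps the planned crew.
                a.remainingWork -= a.crewSize * hoursPerDay * productivityFactor;
            }
        }
    }

    public static void main(String[] args) {
        Activity excavate = new Activity();
        excavate.name = "Excavate footing"; excavate.remainingWork = 16; excavate.crewSize = 2;
        Activity formwork = new Activity();
        formwork.name = "Footing formwork"; formwork.remainingWork = 24; formwork.crewSize = 3;
        formwork.predecessors.add(excavate);

        List<Activity> plan = List.of(excavate, formwork);
        Set<String> built = new HashSet<>();
        for (int day = 1; day <= 5; day++) {
            simulateDay(plan, built, 8.0, 0.9);   // 0.9 = assumed weather/learning productivity factor
            System.out.printf("Day %d: excavate=%.1f h left, formwork=%.1f h left%n",
                              day, Math.max(0, excavate.remainingWork), Math.max(0, formwork.remainingWork));
        }
    }
}
```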

SYSTEM ARCHITECTURE OF VCS3


The VCS3 consists of three core control modules: the 3D geometry model
control module, the construction plan control module and the simulation control
module (Figure 2). The 3D geometry model control module, which uses the XNA game engine, imports and displays the binary 3D model of a building and allows the user to
navigate the model using a mouse or a keyboard. The module also displays the
construction progress during simulation using each building element’s color and
texture. Two projects of different complexity – a park pavilion and a two-story office
building – are currently embedded. Both projects are modeled in Autodesk 3D Studio
Max and imported in the FBX file format. The construction plan control module
allows the user to interactively develop a construction plan through a series of user
interfaces by choosing construction methods, allocating resources to each activity,
and developing sequences for construction activities attached to each building
element group. Finally, the simulation control module manages construction progress
in the simulation phase. The module starts new activities, calculates the progress of
ongoing activities, and manages the status of resources for the day. The module
continues the process until the construction project is completed. Figure 3 shows
VCS3 screenshots of the simulation progress and the daily report.

[Figure 2 diagram: the 3D geometry model, construction plan, and simulation control modules of the VCS3, linked to the user interfaces, the 3D geometry components, and the Access Global and Project-Specific databases of the VCS data model.]
Figure 2. System Architecture

Two database files are used: a Global database and a Project-Specific database.
The Global database stores general data for running the application including
construction activities, corresponding methods, and resource data obtained from
RSMeans, independently from a particular construction project. The Project-Specific
database stores data about construction activities status, resources, as well as the
results of the daily simulation for future analysis.

Figure 3. Screenshots of the VCS3


As shown in Figure 4, a data model developed for the VCS3 has four classes:
VCSBuildingElement, VCSResource, VCSConstructionActivity and VCSGeometry.
The VCSBuildingElement class defines building element types including footings,
slabs, columns, beams, roofs and trusses. The attributes of the BuildingElement class
include geometry representation, physical constraints, corresponding construction
activities, and construction status (“not started”, “in progress”, and “completed”). The
VCSResource class defines resource types based on construction methods and
activities. The resource class defines labor, equipment, and crew classes as child
classes. The “crew” class comprises HumanResource and EquipmentResource
class instances which form crew units specific to each of the construction activities.
The VCSConstructionActivity class defines construction activities, and its attributes
include construction method, duration, assigned human and equipment resource lists,
associated building element group, total workload quantity, remaining workload
quantity, and the predecessor and successor activity lists. The VCSGeometry class
has attributes related to representing the geometric model of a building element such
as Color, Transparency, and GeometricModel. In addition, for programming
convenience, static functions are developed to perform specific functions independent
from the object classes, such as SQL (Structured Query Language) functions.
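A condensed sketch of these four classes and their attributes is shown below. The field names follow the description in the text, but the sketch is written in Java for illustration only and does not reproduce the actual VCS3 implementation.

```java
import java.util.*;

// Condensed, illustrative sketch of the VCS3 data model classes described above.
public class VCSDataModelSketch {
    public static void main(String[] args) {
        VCSBuildingElement column = new VCSBuildingElement();
        VCSConstructionActivity rebar = new VCSConstructionActivity();
        rebar.constructionMethod = "Reinforcing";
        column.activities.add(rebar);
        System.out.println("Column status: " + column.status + ", activities: " + column.activities.size());
    }
}

class VCSGeometry {
    float[] color;                      // Color attribute
    float transparency;                 // Transparency attribute
    Object geometricModel;              // GeometricModel (mesh handle in the real tool)
}

class VCSResource { String name; }
class VCSHumanResource extends VCSResource { double hourlyRate; }
class VCSEquipmentResource extends VCSResource { double hourlyRate; }
class VCSCrew extends VCSResource {     // a crew combines labor and equipment for one activity
    List<VCSHumanResource> labor = new ArrayList<>();
    List<VCSEquipmentResource> equipment = new ArrayList<>();
}

class VCSConstructionActivity {
    String constructionMethod;
    double duration;
    double totalWorkload, remainingWorkload;
    List<VCSResource> assignedResources = new ArrayList<>();
    List<VCSConstructionActivity> predecessors = new ArrayList<>(), successors = new ArrayList<>();
}

class VCSBuildingElement {              // footings, slabs, columns, beams, roofs, trusses
    enum Status { NOT_STARTED, IN_PROGRESS, COMPLETED }
    VCSGeometry geometry;
    List<VCSBuildingElement> physicalConstraints = new ArrayList<>();
    List<VCSConstructionActivity> activities = new ArrayList<>();
    Status status = Status.NOT_STARTED;
}
```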
[Figure 4 diagram: the VCS Object hierarchy with VCSBuildingElement (specialized into VCSBeam, VCSColumn, VCSFooting), VCSResource (specialized into VCSHumanResource, VCSEquipmentResource, VCSCrew), VCSConstructionActivity, and VCSGeometry, together with the static VCSConstants and VCSFunctions helpers.]
Figure 4. VCS Data Model

CONCLUSIONS AND FUTURE WORK


This paper discussed the computational development of the third generation
Virtual Construction Simulator (VCS3) for improving construction engineering
education. The VCS3 focuses on providing more realistic experiences of construction

schedule development and resource management through applying various constraints and productivity factors. Students can experience the dynamic nature of
building construction projects and observe the differences between as-planned and as-
built schedules resulting from various factors such as weather or labor productivity
and how to manage changes to achieve project goals.
The object-oriented design of the VCS3 is flexible enough to adopt technologies such as different rendering engines and pre-defined libraries for different purposes. The VCS can be easily expanded and customized to other
application areas without significant modifications. From an educational perspective,
further VCS development will seek to incorporate more complex construction
projects for different learning scenarios.

ACKNOWLEDGEMENTS
We are grateful to Lorne Leonard and George Otto for their support during the
development of the VCS3. We thank the National Science Foundation (Grant
#0935040) for support of this project. Any opinions, findings, conclusions, or
recommendations expressed in this paper are those of the authors and do not
necessarily reflect the views of the National Science Foundation.

REFERENCES
Al-Jibouri, S., Mawdesley, M., Scott, D., and Gribble, S.J. (2005). “The Application
of a Simulation Model and Its Effectiveness in Teaching Construction Planning
and Control.” Computing in Civil Engineering 2005, Vol. 179, No. 7
Chen, W. and Levinson, D.M. (2006). “Effectiveness of Learning Transportation
Network Growth through Simulation.” Journal of Professional Issues in
Engineering Education and Practice, Vol. 132, No. 1, January 1.
Galarneau, L.L. (2004) “The e-learning edge: Leveraging interactive technologies in
the design of engaging, effective learning experiences.” In e-Fest 2004
Jaruhar, S. (2008). “Development of Interactive Simulations for Construction
Engineering Education.” Master’s Thesis, The Pennsylvania State University
Kamat, V. R., and Martinez, J. C. (2001). "Visualizing simulated construction
operations in 3D." Journal of Computing in Civil Engineering, ASCE, 15(4), 329-
337.
Martin, A. (2000). “A simulation engine for custom project management education”
International Journal of Project Management, Vol. 18, 201-213
Rojas, E.M. and Mukherjee, A. (2005). “General-Purpose Situational Simulation
Environment for Construction Education.” Journal of Construction Engineering
and Management, Vol. 131, No. 3, March 1.
Thabet, W. Y. (2001). "Design/Construction Integration thru Virtual Construction for
Improved Constructability." Retrieved on December 2010 from:
http://www.ce.berkeley.edu/~tommelein/CEMworkshop/Thabet.pdf
Wang, L. and Messner, J.I. (2007). “Virtual Construction Simulator: A 4D CAD
Model Generation Prototype.” ASCE Workshop on Computing in Civil
Engineering Pittsburgh, PA, 2007
Wang, L. (2007). “Using 4D Modeling to Advance Construction Schedule
Visualization in Engineering Education.” Master’s Thesis, The Pennsylvania State
University
Preparation of Constraints for Construction Simulation

Arnim Marx1 and Markus König2


Chair of Computing in Engineering, Faculty of Civil and Environmental Engineering,
Ruhr-Universität Bochum; email: 1arnim.marx@rub.de, 2koenig@inf.bi.rub.de

ABSTRACT

The planning of construction projects depends on technological and temporal constraints, which in turn determine the relationships between construction processes.
Additionally, many activities require exclusive access to certain spaces at the
construction site, which often depends on the progress of construction. If these
dependencies are not adequately considered during construction scheduling, they may
cause conflicts at the construction site, resulting in incalculable delays and extra costs.
Nowadays, discrete event simulation based on building information models (BIM)
can be used to support construction scheduling. Normally, building information
models do not provide all the required information needed for construction
scheduling. In addition, a construction site plan and information about equipment and
resources are required to provide further details. The focus of this paper is on data
aggregation and the preparation of constraints for construction simulation. For an
efficient construction simulation, data have to be extracted from various underlying
data models and enhanced with additional process information. Data preparation and
integration into simulation is done with the aid of an interactive 4D editor for
construction simulation preprocessing and evaluation.

INTRODUCTION

In the construction industry, project planning is carried out manually by using general project planning programs. The result of manual scheduling is usually exactly
one schedule because the manual creation of construction schedules is very time
consuming. Therefore it is not financially feasible to create several plans to compare
alternatives. Construction scheduling requires a large amount of various input data.
These input data include building data, the layout of the construction site, the number
of required and available resources, and the definitions of all construction processes.
All these data are specified during different design phases. They are created with
various specialized software tools and are therefore stored in different data models.
Several of these data sets are created for purposes other than construction scheduling,
but are also necessary for scheduling. Some of the data are generated several times
because they are required in different design phases in different granularity. During
manual scheduling most of the decisions that are made are based on empirical values.
In general, these values are neither formalized nor stored, because they are the result
of the planner’s experience.
An efficient approach to support the planner is the construction simulation.
Construction simulation can be used to analyze existing schedules or to generate new
schedules. Different approaches have been developed from the late seventies until
today (Halpin, 1977; AbouRizk and Hajjar, 1998; Lu, 2003; Zhang et al., 2005; König
et al., 2007). Using these approaches, it is also possible to generate near-optimal
schedules with respect to a multitude of restrictions and different optimization criteria
(Beißert et al., 2008; Hamm and König, 2010). The use of simulation in the
construction industry is still very limited. There are various reasons for this (Hajjar
and AbouRizk, 2002), one of which is the time-consuming nature of preparing
planning data for the simulation.
In this paper we introduce an approach to accelerate the preprocessing of
construction simulation. The key to shorter planning time is the reuse of existing data
that are generated during design or former projects. But not all available data are of
suitable quality and sufficient quantity for construction simulation. This paper
explains what data are required, how they are prepared, and what kinds of additional
data are added. The focus is on preparation of constraints for construction simulation.
To prove the applicability of our approach an interactive 4D tool for construction
simulation preprocessing and evaluation, called SiteSim Editor, has been
implemented as a prototype. The functionalities of the SiteSim Editor are described in
terms of implementation of the schemata and patterns of our approach.

SIMULATION WORKFLOW

Using simulation for construction scheduling requires several preprocessing steps. During preprocessing, information available about the construction project
needs to be analyzed and prepared. The result of the preprocessing steps is used as
input data for the simulation and is stored in a simulation database. The simulation
database is the central access point for all simulation components during the entire
workflow. After preprocessing, the construction simulation can be performed using
any suitable simulation software. The simulation results are also stored in the
database to allow systematic evaluation. During evaluation, the input data and
assumptions have to be checked for accuracy and the results have to be compared
with the objectives of project planning. Although the result of each simulation run is
technologically feasible, it might be contradictory to certain project objectives, such
as cost or allowed execution time. To generate different alternatives or change
assumptions, the preprocessing must be performed again based on the results of the
previous simulation. The generation of efficient solutions is usually an iterative
process. The overall simulation workflow is shown in Figure 1. The individual steps
of simulation preprocessing are presented in the following sections.

Project data. The input data for simulation preprocessing are project specific data
that are generated during the design process. The input data consist of a building
information model, the construction site layout, an operational bill, and supply chain
information. Missing data must be manually entered by the user. The BIM provides
the building data, i.e., the building components in the form of a 3D building model
with additional semantic information. The BIM is created during the design process.
The construction site layout is defined by the construction site plan. It defines the
footprint of the building, existing construction and landmarks, parking and storage
areas, delivery and emergency egress paths, and also includes stationary equipment
like cranes, site trailers, and storage sheds. The operational bill includes the
description of construction processes and their outcomes. The description of a process includes all resource requirements in which personnel, equipment, and material are
itemized. Supply chain information describes the technological and logistical aspects
of resource movement to the site and on the site. The operational bill and supply chain
information are usually generated by a quantity surveyor.

[Figure 1 diagram: project data (BIM, site layout, operational bill, supply chain) feed the simulation preprocessing steps (process patterns, constraints, groups, supply chain, calendar), which store their results in the simulation database used by the simulation and the result evaluation.]
Figure 1. Simulation workflow with emphasis on simulation preprocessing.

SIMULATION PREPROCESSING

Figure 1 shows the elements that have to be defined or enhanced during simulation preprocessing. The process definition given by the operational bill has to
be enhanced with additional information. Constraints have to be derived and defined.
Groups of building elements have to be defined for working sections. The supply
chain information must also be enhanced. All available resources have to be
associated with a calendar to define working days and shifts.

Process patterns. Descriptions of the processes and their outcomes are part of the
operational bill. In general, these descriptions do not have the required granularity and
are not sufficient. For the erection of an in-situ concrete column, the operational bill
contains a process description that includes information about the material resources
of the concrete and the reinforcements. The process consists of five subprocesses that
must be defined for simulation. With the help of process patterns, these subprocesses
are defined according to the construction method. Process patterns are reusable
patterns that are stored in a pattern catalog. This catalog is a company specific
knowledge base that consists of all construction patterns that comprise the company’s
internal work processes. Process patterns define the technological constraints for the
subprocesses and associated personnel and operational values. For example, the
reinforcement can be completed by two to five workers with an operational value of
0.05h/kg steel for each worker.
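A small sketch of how such a pattern entry could be represented and used to derive a subprocess duration is given below (Java, for illustration only); the steel quantity is an assumed value, while the crew limits and the 0.05 h/kg operational value are taken from the example above.

```java
// Illustrative sketch of a process pattern entry carrying personnel limits and an
// operational value, and of how a subprocess duration follows from them.
public class ProcessPatternSketch {

    record SubProcessPattern(String name, int minWorkers, int maxWorkers, double hoursPerUnitPerWorker) {

        // Duration for a given quantity (e.g. kg of reinforcement steel) and chosen crew size.
        double duration(double quantity, int workers) {
            if (workers < minWorkers || workers > maxWorkers)
                throw new IllegalArgumentException("crew size outside pattern limits");
            return quantity * hoursPerUnitPerWorker / workers;
        }
    }

    public static void main(String[] args) {
        // Reinforcement: two to five workers, 0.05 h/kg per worker (from the text).
        SubProcessPattern reinforcing = new SubProcessPattern("Reinforcing", 2, 5, 0.05);
        double steelKg = 320;                                   // assumed quantity for one column
        for (int crew = 2; crew <= 5; crew++)
            System.out.printf("%d workers -> %.1f h%n", crew, reinforcing.duration(steelKg, crew));
    }
}
```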
Supply Chain. Figure 2 shows a simplified construction site with a storage area, the
construction area, and a tower crane with its capacity related radii. Normally, the
project specific supply chain information is not sufficient and must be modified and
enhanced. In particular, movement on the construction site must be defined. For example, material for a process has to be transported with a crane from a storage area
to its assembling position. The allocation of the applicable crane with the
corresponding resource can either be done manually or can be calculated. Both
methods are based on the data of the BIM and the construction site plan. Depending
on the supply chain of the material, the storage area is known and the assembling
position is given by the BIM. In association with the capacity related radii of the
crane, it is possible to assign the crane to the material.
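The following Java sketch illustrates this allocation test; the crane geometry, capacity table, and coordinates are invented example values, and the real SiteSim Editor works on the BIM and the site plan rather than on hard-coded points.

```java
// Illustrative sketch of automatic crane allocation: a crane is applicable for a material if
// both the storage area and the assembling position lie within the radius at which the crane
// can still lift the load. All classes and values are placeholders.
public class CraneAllocationSketch {

    record Point(double x, double y) {
        double distanceTo(Point p) { return Math.hypot(x - p.x, y - p.y); }
    }

    record Crane(String name, Point position, double[] radii, double[] capacitiesAtRadii) {
        // Largest radius at which the crane can still lift the given load (capacity decreases outwards).
        double reachFor(double loadTons) {
            for (int i = radii.length - 1; i >= 0; i--)
                if (capacitiesAtRadii[i] >= loadTons) return radii[i];
            return 0;
        }
        boolean canServe(Point storage, Point assembly, double loadTons) {
            double reach = reachFor(loadTons);
            return position.distanceTo(storage) <= reach && position.distanceTo(assembly) <= reach;
        }
    }

    public static void main(String[] args) {
        Crane tower = new Crane("TC-1", new Point(0, 0),
                                new double[] {20, 35, 50}, new double[] {8.0, 4.0, 2.0});
        Point storage = new Point(10, 25);        // from the site plan (assumed coordinates)
        Point column  = new Point(30, 15);        // assembling position from the BIM (assumed)
        System.out.println("TC-1 applicable: " + tower.canServe(storage, column, 3.5));
    }
}
```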

Figure 2. Automatic allocation of the applicable crane to the corresponding resource.
Strategic Constraints. The order of the construction sequence for each component
depends on the construction method and can seldom be changed. In some cases it is
useful to define strategies for building construction sections in order to improve the
construction process and resource utilization. Building the columns for one floor can
be organized using different strategies. From a strategic point of view, it is not
reasonable to concrete less than three columns at once, because it would take too
many resources for cleaning the concrete pump for each column. It is more viable to
erect the columns in small construction sections. Inside each construction section
reinforcement and formwork for all columns has to be finished before concreting.
Construction sections are defined by groups of building elements. Consideration of
these requirements during the construction simulation is achieved by using strategic
constraints. These constraints ensure that the intended order is maintained. Figure 3
shows the strategies defined by groups and strategic constraints. The construction
sequence for the columns is defined by an appropriate process pattern.
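A minimal sketch of how such strategic constraints could be generated for a group of columns is shown below (Java, illustrative only); it reproduces the pattern of Figure 3, in which all forming activities of a group precede the shared concreting activity and concreting precedes all curing activities.

```java
import java.util.*;

// Illustrative generation of the strategic constraints of Figure 3 for one construction
// section: forming of every column precedes the shared concreting, and concreting precedes
// curing of every column. Activity naming is invented for the sketch.
public class StrategicConstraintSketch {

    record Constraint(String predecessor, String successor) {}

    static List<Constraint> groupConstraints(List<String> columns) {
        List<Constraint> constraints = new ArrayList<>();
        String concreting = "Concreting (Group A)";
        for (String c : columns) {
            constraints.add(new Constraint("Forming " + c, concreting));   // all forming before the pour
            constraints.add(new Constraint(concreting, "Curing " + c));    // pour before curing
        }
        return constraints;
    }

    public static void main(String[] args) {
        groupConstraints(List.of("Column 1", "Column 2", "Column 3"))
            .forEach(c -> System.out.println(c.predecessor() + "  ->  " + c.successor()));
    }
}
```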
[Figure 3 diagram: for each of Columns 1–3 in Group A, the activity chain Reinforcing → Forming → Concreting (shared by the group) → Curing → Stripping, linked by precedence constraints.]
Figure 3. Technological and strategic constraints for three concrete columns.
Spatial constraints. The execution of a construction process requires access to a
specific construction space for a certain amount of time. Some processes can share
their construction space, while others require exclusive access. These spatio-temporal
constraints have to be taken into account during construction scheduling.
Construction spaces can be classified into three categories: resource space, topology
space, and process space (Akinci et al., 2002; Marx et al., 2010). The resource space
is the space that is occupied by a resource. It is derived from the dimensions of a
resource and is defined for a specific time period. Topology space includes the
building under construction, the construction site, and its surrounding area with all
landmarks and existing constructions. Topology spaces are also time-dependent and
can change during construction. A process space is linked with a construction process.
It covers a space for a specific period, which corresponds to the length of the
construction process. Process spaces are composed of many process-related subspaces
like working spaces, hazard spaces, protected spaces, and post-processing spaces.
Working spaces must be available to execute construction works using resources (see
Figure 4). Some construction works require special hazard space for safety purposes.
So-called protected spaces are sometimes required to temporarily protect a building
component from possible damage induced by adjacent construction. Some building
elements need post-processing work that can only be performed in special areas.
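The following Java sketch illustrates a simple spatio-temporal conflict test of the kind such constraints imply; the axis-aligned boxes and day-based intervals are a deliberate simplification and not the space representation used by the authors.

```java
// Illustrative spatio-temporal conflict test between two construction spaces: a conflict
// exists only if the spaces overlap both in time and in plan, and at least one of them
// requires exclusive access. Geometry is simplified to axis-aligned rectangles.
public class SpaceConflictSketch {

    record Space(String kind, double xMin, double yMin, double xMax, double yMax,
                 int startDay, int endDay, boolean exclusive) {

        boolean overlapsInTime(Space o) { return startDay <= o.endDay && o.startDay <= endDay; }

        boolean overlapsInPlan(Space o) {
            return xMin <= o.xMax && o.xMin <= xMax && yMin <= o.yMax && o.yMin <= yMax;
        }

        boolean conflictsWith(Space o) {
            return (exclusive || o.exclusive) && overlapsInTime(o) && overlapsInPlan(o);
        }
    }

    public static void main(String[] args) {
        Space craneHazard = new Space("hazard space", 0, 0, 12, 12, 3, 5, true);
        Space workingArea = new Space("working space", 10, 10, 20, 20, 4, 6, false);
        System.out.println("Conflict: " + craneHazard.conflictsWith(workingArea));  // true: overlap on days 4-5
    }
}
```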

a) Equipment spaces of cranes and b) Working spaces and conflict


material resources caused by crane operations
Figure 4. Spatial constraints of crane operations.

SITESIM EDITOR IMPLEMENTATION

To prove the feasibility of our approach (see Figure 1), a prototype interactive
4D simulation editor called SiteSim Editor has been implemented (see Figure 5). It is
designed for simulation preprocessing and result evaluation. The SiteSim Editor is a
stand-alone application, implemented in Java, based on the Eclipse Rich Client
Platform (RCP) technology. The application supports all data models that are required
for the simulation workflow depicted in Figure 1 and can read data formats as
follows. Building information models are imported as IFC data models. Currently the
construction site layout is not part of the IFC data model. The enhancement of the IFC
data model is still in progress. Therefore, construction site plans as well as operational
bills and supply chain information are imported in XML format.
Simulation preprocessing and evaluation are embedded in a 4D environment,
which enables an intuitive connection of the building elements and the construction
processes (see Figure 5).

Figure 5. SiteSim Editor for simulation preprocessing and evaluation.


4D environment. The 4D environment is implemented with the Open IFC Tools
(Tulke et al., 2010). The building and the construction site can be displayed and
edited inside a 3D view or in the form of different semantic representations. All data
for construction simulation are associated with one or more building elements.
Defining new data or editing existing data can only be done with the selection of an
appropriate element. Therefore, these views feature different methods for element
selection. The 3D view features all common 3D picking options. A type view
provides a tree representation of the building, ordered by building element type,
which allows selection by element type. For example, all columns of a building can
easily be selected. A structure view provides a structural tree representation of the
building. A structural representation can be story or working section oriented. A more
specialized selection can be made with additional filters. For example, all round
columns on the sixth floor with a diameter of less than 50 cm that are made of
concrete of the same type (e.g., C30/37) can be selected. These selected building
elements can then be combined into groups of building elements.
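The example selection mentioned above can be expressed as a chain of filters. The Java sketch below is illustrative only; the attribute names are simplified stand-ins for the properties that an IFC model actually provides.

```java
import java.util.*;
import java.util.stream.*;

// Illustrative filtered element selection: round columns on the sixth floor with a diameter
// below 50 cm and concrete grade C30/37. Attributes are simplified placeholders.
public class ElementFilterSketch {

    record Element(String type, int storey, boolean round, double diameterCm, String concreteGrade) {}

    public static void main(String[] args) {
        List<Element> model = List.of(
            new Element("Column", 6, true, 40, "C30/37"),
            new Element("Column", 6, true, 60, "C30/37"),
            new Element("Column", 5, true, 40, "C30/37"),
            new Element("Beam",   6, false, 0, "C30/37"));

        List<Element> selection = model.stream()
            .filter(e -> e.type().equals("Column") && e.storey() == 6)
            .filter(e -> e.round() && e.diameterCm() < 50)
            .filter(e -> e.concreteGrade().equals("C30/37"))
            .collect(Collectors.toList());

        System.out.println(selection.size() + " element(s) selected for the group");  // prints 1
    }
}
```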
The SiteSim Editor features two kinds of 4D animations. During simulation
preprocessing the building elements can be displayed step by step, according to the
defined precedence constraints. This is a helpful method for a visual inspection of the
precedence constraints. The SiteSim Editor is also capable of performing automatic
constraint controls, to avoid the creation of loops. During result evaluation, the
generated construction schedule is illustrated as a Gantt chart and as 4D animation.

Process definitions. Processes are defined with regard to building elements in conjunction with process patterns. In the process pattern view, process patterns are
assigned to sets of selected building elements with the help of pattern catalogs. There
are different modes of assignment. One mode is to assign process patterns separately
to each building element of the selection set. This results in independent process
sequences with technological constraints for each building element (see Figure 6a).
Another mode combines the processes of all elements, resulting in a combined
predecessor and successor relationship (see Figure 6b). In combination with groups
the assignment of process patterns creates the technological constraints depicted in
Figure 6c. The required personnel and operational values provided by the process
patterns are defined irrespective of the mode of assignment.
Figure 6. Modes of constraint assignment: a) separate assignment, b) combined assignment, c) grouped assignment (BEn = building element n).
The process definition derived from groups of building elements can be
performed in two ways. A summary process can be defined with as many
subprocesses as elements. The required personnel and operational values are set
separately for each process. The processes can be of different types. Alternatively, a new
process related to all elements can be defined. All processes must be of the same type
and the required personnel and operational values are derived from the process type.
The strategic constraints are defined in a similar manner. Two selection sets
are created, one with predecessors and the other with successors. The result of the
assignment of the strategic constraints corresponds to the result in Figure 6. Strategic
constraints can be defined among single or grouped building elements.

CONCLUSIONS AND FUTURE WORK

Construction projects often take much longer than planned or are more
expensive than projected. Efficient planning and scheduling are essential for
successful construction. In particular, the comparison of different schedules and
resource utilization alternatives is crucial, but often it cannot be conducted because
construction scheduling is extremely time consuming. The simulation of construction
processes can help to generate multiple construction schedules and to find a near-
optimal solution. Currently, aggregation and preparation of construction simulation
input data are still very time consuming. In this paper a concept has been presented to
accelerate the simulation preprocessing based on a multi-model approach. Data that
are created during different design phases and former projects are extracted from
different data models and reused to generate simulation input data. Data are extracted
and enhanced by additional process information as described above. A tool for
simulation preprocessing and evaluation, called SiteSim Editor, has been introduced.
An interesting area for future research is the generation of constraints.
Currently, many precedence constraints are manually specified by the planner. These
constraints are derived from different project-specific circumstances, such as
construction methods, building and building site layout, costs, and operational values.
For construction scheduling, these constraints have to be converted into precedence
constraints. Generally, this is done based on the experience of the planner. To support
the planner’s decision-making process, constraint generation must be improved or,
better yet, automated. Furthermore, the flow of material at construction sites is still
not adequately considered. In addition to material supply, other material movements must be considered during simulation. Currently, neither waste removal nor on-site movement of reusable resources is considered adequately, due to a lack of sufficient
data.

REFERENCES

AbouRizk, S.M., and Hajjar, D. (1998). "A framework for applying simulation in
construction." Can. J. Civ. Eng. 25(3): 604–617.
Akinci, B., Fischer, M., Levitt, R., and Carlson, R., (2002). “Formalization and
automation of time-space conflict analysis.” J. Comp. in Civ. Engrg. Volume
16, Issue 2, pp. 124-134.
Beißert, U., König, M., and Bargstädt, H.-J. (2008). “Generation and local
improvement of execution schedules using constraint based simulation.” Proc.
of the 12th International Conference on Computing in Civil and Building
Engineering (ICCCBE-XII), Beijing, China.
Hajjar, D., and AbouRizk, S.M. (2002). “Unified Modeling Methodology for
Construction Simulation” J. Constr. Engrg. and Mgmt. Volume 128, Issue 2,
pp. 174-185.
Halpin, D. W. (1977). ‘‘CYCLONE—Method for modeling of job site processes.’’
Journal of the Construction Division, Vol. 103, No. 3, pp. 489-499.
Hamm, M. and König, M. (2010). “Constraint-based multi-objective optimization of
construction schedules.” In Computing in Civil and Building Engineering,
Proceedings of the International Conference, W. TIZANI (Editor), 30 June-2
July, Nottingham, UK, Nottingham University Press, Paper 122, p. 243, ISBN
978-1-907284-60-1.
König, M., Beißert, U., Steinhauer, D. and Bargstädt, H-J. (2007). “Constraint-Based
Simulation of Outfitting Processes in Shipbuilding and Civil Engineering.”
Proceedings of the 6th EUROSIM Congress on Modeling and Simulation,
Ljubljana, Slovenia.
Lu, M. (2003). “Simplified discrete-event simulation approach for construction
simulation.” J. Constr. Engrg. and Mgmt. Volume 129, Issue 5, pp. 537-546.
Marx, A., Erlemann, K., and König, M. (2010). “Simulation of Construction
Processes considering Spatial Constraints of Crane Operations.” In Computing
in Civil and Building Engineering, Proceedings of the International
Conference, W. TIZANI (Editor), 30 June-2 July, Nottingham, UK,
Nottingham University Press, Paper 17, p. 33, ISBN 978-1-907284-60-1.
Tulke, J., Tauscher, E., and Theiler, M. (2010). “Open IFC Tools”
http://openifctools.com (Dec. 5, 2010).
Zhang, H., Tam, C. M., and Li, H. (2005). “Activity object-oriented simulation
strategy for modeling construction operations.” J. Comp. in Civ. Engrg.
Volume 19, Issue 3, pp. 313-322.
Using IFC Models for User-Directed Visualization

A. Chris Bogen1 and E. William East2

1PhD, Computer Scientist, U.S. Army Engineer Research and Development Center, Information Technology Laboratory, 3909 Halls Ferry Road, Vicksburg, MS 39180-6199; PH (601) 634-4624; FAX (601) 634-4402; email: Chris.Bogen@usace.army.mil
2PhD, PE, F. ASCE, Research Civil Engineer, U.S. Army Engineer Research and Development Center, Construction Engineering Research Laboratory, P.O. Box 9005, 2902 Newmark Drive, Champaign, IL 61826-9005; PH (217) 373-6710; email: bill.east@us.army.mil

ABSTRACT

Deriving virtual design walkthroughs from building information models is a specialization of more general interoperability issues encountered in the architecture, engineering, construction, and owner (AECO) domain, where data exchanges are made throughout
the facility life cycle between diverse stakeholder groups and software applications.
This paper describes a repeatable process for efficiently transforming design
coordination view Industry Foundation Classes (IFC) models to multi-user
visualization for Radiant, a popular 3D game engine. The adopted transformation
process overcomes the inherent difficulties of compiling native geometry in the target
visualization platform while supporting two-way traceability of design elements
transformed by the underlying data exchanges. The intended audience for this paper
includes readers interested in design visualization as well as in AECO data
maintenance, integration, and BIM interoperability.

BACKGROUND

Transporting CAD models into three-dimensional graphics engines for games can be
labor-intensive, rely on expensive software product stacks, and may require skills and
approaches unfamiliar to many architects and designers (O'Coill and Doughty 2004).
This complexity is compounded by model compilation processes inherent to many
popular gaming engines that support large scale worlds, efficient rendering, dynamic
lighting, and real-time multi-user interactions (e.g. Radiant, Unreal, and
Source/Hammer).

In 1999, Fu and East outlined the requirements for a multi-user virtual design review
that includes multiple perspectives of building design models, interactions between
reviewers and designers in a spatial context, restricted access, design review, and

project management verification (Fu and East 1999). Various researchers have
advanced the design review concept by reporting on transformations from design
models to game engine models. For example, Shiratuddin and Thabet outlined an
approach for exporting 2D Autodesk models into the Unreal game engine with an
intermediate editing and export step in the 3DS VIZ/Max environment (Shiratuddin
and Thabet 2002). Later in 2011, Shiratuddin and Thabet reported on an alternate
approach where a 3D model of a 2D design was developed in Autodesk 3D Studio
Max, and then imported into the Torque Game Engine. Limitations of the Torque
.Max import feature required Shiratuddin and Thabet to manually re-assemble the
individual 3D components (e.g. doors, walls, roof) by importing the elements and
then manually moving and re-aligning them properly (Shiratuddin and Thabet 2011).
Kumar et al. reported on a transformation from Revit to Autodesk’s .FBX file, and
finally into the Unity game engine where textures were assigned (Kumar et al. 2011).
Such approaches rely on data exchange artifacts (e.g. 3DS .MAX and AUTOCAD
.DXF files) that may not contain direct linkages to design model metadata, and they
can also obscure or hide details about the underlying data exchanges. In such cases it
may be very difficult or impossible to programmatically trace the target destination
file entities back to the entities in the original source file.

Alternatively, researchers adopt open standards to address traceability, cost, and portability constraints. The primary international effort to create a unified ontology
for the capital facilities industry is the Industry Foundation Class (IFC) model. While
the IFC model offers enormous data integration potential, its expressive power can
also lead to unwanted complexities for visualization developers. For example, in
2007 McDonald developed an IFCtoMAP prototype application for transforming IFC
files into solid geometry “brushes” in D3Radiant engine’s ASCII-based .MAP files
(McDonald 2007). However, the IFCtoMAP application handles a limited number of
IFC types, and produces inaccurate results. Furthermore, satisfactory results were not
repeatable on IFC files other than the one distributed with IFCtoMAP, and almost all of the authors’ transformation attempts resulted in program crashes that halted the model transformation process and produced no target file.

PROBLEM STATEMENT

The authors’ intent is to define an efficient, repeatable process for converting IFC
models to raw geometry files that are processed by a 3D game engine compiler. To
facilitate traceable information exchanges, elements of the target file format must be
explicitly referenced back to elements in the design file through unique identifiers.
The process must also provide semi-automated support for selecting surface textures
and visualization properties, while also considering more technical issues such as the
efficiency demands of the target visualization engine. Finally, the transformation
process must provide these features at a low cost of ownership for non-commercial
research and educational purposes.

APPROACH

The authors adopted a transformation from IFC 2x3 (Coordination Model View
Definition), to VRML (.wrl v2.0 and .x3d v3.0), and finally, to the .MAP format for
the Call of Duty 4 (COD4) Radiant compiler. This approach attempts to reduce the
steep learning curve for BIM applications of compiler-based real-time modeling
platforms. The authors’ approach makes uses IFC attributes to mediate surface
texture selections and other scene customizations. VRML was chosen as the
intermediate model format because of its international adoption and the availability of
free conversion tools. A reliable IFC to VRML translation is provided by the Karlsruhe
Institute of Technology’s IFCStoreyView. The IfcStoreyView-generated VRML 2.0
(.wrl) file includes the IFC element type, element name, and unique ID tags while
representing the geometry with collections of triangulated surface meshes.

The COD4Radiant .MAP format was chosen as the map representation format
because it is ASCII based and it directly supports face-vertex meshes, the same
geometry format of the generated VRML files. The COD4 Radiant engine supports
multi-user interactions while efficiently rendering large-scale, detailed models, and it
(as well as accompanying development tools) may be used free of charge for
research, academic, and noncommercial purposes. While the .MAP format is not
defined by a formal schema, technical references are available in McDonald’s Thesis
report (McDonald 2007) and on various game “mod” Websites (Modsonwiki.com).
The .MAP format also allows for the identification of mesh face groups via unique
identifiers and descriptive data. The authors use this feature to tag the destination
model objects with their corresponding IFC element unique identifiers.
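A hypothetical sketch of such tagging is shown below; the key names and the entity layout are placeholders chosen for illustration, since the exact .MAP syntax used by the authors' tool is documented in the references cited above rather than in this paper.

```java
// Hypothetical sketch of tagging an exported geometry group with its IFC GlobalId so that a
// .MAP entity can be traced back to the source IFC element. The key names and layout below
// are placeholders; the actual syntax expected by the COD4Radiant compiler is described in
// the .MAP references cited in the text.
public class MapTaggingSketch {

    static String entityFor(String ifcGuid, String ifcType, String brushGeometry) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\n");
        sb.append("  \"ifc_guid\" \"").append(ifcGuid).append("\"\n");   // placeholder key
        sb.append("  \"ifc_type\" \"").append(ifcType).append("\"\n");   // placeholder key
        sb.append(brushGeometry).append("\n");
        sb.append("}\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        String brush = "  { /* face-vertex mesh faces would be written here */ }";
        System.out.print(entityFor("2O2Fr$t4X7Zf8NOew3FLOH", "IfcDoor", brush));  // example IFC GlobalId
    }
}
```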

A Microsoft Windows Presentation Foundation (WPF) application, ifcVRMLToMAP, was implemented to mediate the transformation process after the
IFCStoreyView export to VRML .wrl. The application allows the user to assign
COD4Radiant surface textures to IFC elements and provides other rendering options
such as collision detection and polygon reduction. A master-detail view lists and
groups the IFC model elements by ifcElementType and allows the user to sort by
attributes such as Object Identifiers (OIDs), element names, and polygon counts. The
rendering option, Is Collision Surface, specifies whether or not the boundaries of the
object will block or allow the user to pass through, e.g., furniture as boundaries
versus walking through furniture. This option is sometimes necessary to avoid
reaching the maximum number of collision vertices, 65,542, allowed by the MAP
compiler (Modsonwiki.com). Objects such as toilets can have several thousand
polygons, and if all the toilet’s surface faces are represented as collision surfaces, then
several thousand of the 65,542 available binary space partition (BSP) node collision
vertices will be wasted. Building a BSP is a fundamental stage of .MAP compilation.
Similarly, the application provides an option to allow the user to specify whether or
not an IFC object or object type will be processed or ignored by the transformation
process. The ifcVRMLtoMAP processing details are outlined by the typical
stimulus-response in Table 1, and Figure 1 is a corresponding screenshot of the
ifcVRMLToMAP application with alphabetic tags identifying the major ifcVRMLtoMAP interface elements.

Before the VRML file is parsed, the xj3d tool is used to transform the .wrl VRML
format to .x3d. Once the x3d file is deserialized, ifcVRMLtoMAP prompts the user
to specify the original length measurement units of the IFC model and specify
whether or not to perform polygon reduction on objects with more than 1000
polygons. Polygon reduction simplifies complex surface meshes, and it is
sometimes necessary to avoid exceeding BSP node size thresholds. The authors
implemented a version of Melax’s edge-collapse polygon reduction algorithm (Melax
1998) that may be applied to objects with high polygon counts such as toilets and
sinks.
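A highly simplified sketch of the edge-collapse idea is given below: the shortest edge is collapsed repeatedly and degenerate triangles are dropped until a target face count is reached. Melax's published algorithm additionally weighs local curvature when ranking edges; that term, and the handling of texture coordinates, are omitted here.

```java
import java.util.*;

// Highly simplified edge-collapse polygon reduction: repeatedly merge the endpoints of the
// currently shortest edge and drop triangles that become degenerate, until the triangle
// count falls below a target. This is only the core idea, not Melax's full cost function.
public class EdgeCollapseSketch {

    public static void main(String[] args) {
        // A small fan of triangles around vertex 4 (x, y, z per vertex).
        double[][] verts = { {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0}, {0.5,0.5,0.01} };
        List<int[]> tris = new ArrayList<>(List.of(
            new int[]{0,1,4}, new int[]{1,2,4}, new int[]{2,3,4}, new int[]{3,0,4}));

        reduce(verts, tris, 2);
        System.out.println("Remaining triangles: " + tris.size());
    }

    static void reduce(double[][] verts, List<int[]> tris, int targetTriangles) {
        while (tris.size() > targetTriangles) {
            int[] edge = shortestEdge(verts, tris);
            if (edge == null) return;
            int keep = edge[0], drop = edge[1];
            // Collapse: every reference to 'drop' is replaced by 'keep'.
            for (int[] t : tris)
                for (int i = 0; i < 3; i++) if (t[i] == drop) t[i] = keep;
            // Remove triangles that now use a vertex twice (degenerate).
            tris.removeIf(t -> t[0] == t[1] || t[1] == t[2] || t[0] == t[2]);
        }
    }

    // Find the globally shortest edge over all triangles.
    static int[] shortestEdge(double[][] verts, List<int[]> tris) {
        int[] best = null;
        double bestLen = Double.MAX_VALUE;
        for (int[] t : tris)
            for (int i = 0; i < 3; i++) {
                int a = t[i], b = t[(i + 1) % 3];
                double len = dist(verts[a], verts[b]);
                if (len < bestLen) { bestLen = len; best = new int[]{a, b}; }
            }
        return best;
    }

    static double dist(double[] p, double[] q) {
        return Math.sqrt(Math.pow(p[0]-q[0],2) + Math.pow(p[1]-q[1],2) + Math.pow(p[2]-q[2],2));
    }
}
```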

The openings of IFCDoor objects are copied to invisible hint surfaces. Hint surfaces
influence the compiler to create a separate BSP node for an enclosed space, thus
reducing the chance that a BSP node will have too many vertices. Users may also use
the ifcVRMLtoMAP user interface to identify an object as a light. While lights can
enrich repetitive surface textures with shading and contrast, they are not a
requirement because the COD4Radiant engine provides adequate ambient light
settings. After the .MAP text is constructed, it is distributed in several .MAP files
that are labeled by building element type. Each file is limited to 2 MB because the
Radiant editor performs poorly or crashes when dealing with larger .MAP files.

After surface textures are assigned to the model elements, these options may be saved
to an XML file so that they may be reused if the transformation is repeated. Users
may optionally specify a skybox file to contain the facility model. All MAP objects
must be placed in a skybox, which can simply be a hollow cube large enough to
contain the building with special ground texture on the bottom and special sky
textures on all of the remaining sides. It is also possible to build a generic skybox that
may be used with minimal editing for almost any building.

A few critical manual steps must be executed before compiling the map assets. First,
the surface texture and light mapping coordinates in the MAP files must be corrected.
This is accomplished by opening the map file(s) in the Radiant editor, selecting all
elements in the map, and clicking the Natural or LMAP texturing buttons in texture
material mode and light-map surface editing modes. Finally, the map must be
compiled using the Radiant compiler, and this process may inherently require some
trial and error for detailed maps containing several complex objects with high
(>1000) polygon counts.

FACILITY VISUALIZATIONS

Virtual walkthroughs for two building models were developed to demonstrate the
adopted transformation process. The first building, a 248 m2 (2,669 ft2) duplex
apartment building (47.2-MB IFC File), was originally developed as a submission to
a German design school competition. This duplex apartment model has been used to
support a buildingSMART (international) precertification event against the Facility Management Handover Model View Definition (East 2009). The second project, a
4835 m2 (52,044 ft2) medical clinic (24.2-MB IFC File), was recently built in the
southwestern United States. Figure 2 is a collage of screenshots derived from the
duplex and medical clinic models. The effort required to transform the duplex and
clinic models according to the adopted process was approximately 2 hours and 4
hours respectively.

Figure 1. Tagged screenshot of the authors’ ifcVRMLtoMAP application


The authors typically specify collision surfaces only on walls, ceilings, stairs,
ramps, and floors. Those objects usually contain the fewest polygons and
will not exceed the collision point constraints of the Radiant compiler. Also, sticking
to the essential collision surfaces allowed the authors to avoid potential pathway
blockades caused by interferences such as furniture or fixtures placed in small rooms.
Since objects such as doors do not open/close without additional scripting, the
authors typically do not specify them as collision surfaces. The authors also chose to
fill doors with clear glass surfaces so users can quickly survey a room’s contents
without entering the room. As illustrated in the bottom left panel of Figure 2,
assigning ceilings with transparent surfaces can also be useful for exposing otherwise
hidden details such as the HVAC design for the clinic model.

The floor planes of the IfcSpace objects were represented as scriptbrushmodel map
objects with space OID and name attributes. This representation enabled the authors
to develop a script that allows walkthrough participants to view the room name and
usage category of their current location. The authors also used the IfcSpace objects
to customize floor textures according to space function–e.g., bathrooms have tile,
offices have carpet. Since the floor slabs of the source models were represented by a
single group of polygons, the floors of the IfcSpace objects provided an efficient and
accurate way to “color” floors by room function.

Table 1: Typical Conversion Use Case for ifcVRMLtoMAP

Step 1 (GUI area A). User stimulus: select "Open wrl" from the top menu. System response: initiate conversion to X3D.
Step 2 (GUI area G). User stimulus: select base length units and indicate whether or not to perform polygon reduction. System response: deserialize X3D, perform polygon reduction, perform unit conversion, and initialize internal ifcVRML objects.
Step 3 (GUI areas B, C, D). User stimulus: browse the master (B) and detail grid (D) views; assign textures (C) and specify collision surface options (B, D); uncheck Render? to omit an object. System response: user interface handling.
Step 4 (GUI area E). User stimulus: select an existing skymap .MAP file. System response: user interface handling.
Step 5 (GUI area F). User stimulus: click "Convert X3D to MAP". System response: combine user options with the existing internal ifcVRML objects, transform ifcVRML objects to .MAP text, and write .MAP files.

CONCLUSIONS

Successful compilation of maps on platforms such as COD4Radiant inherently requires troubleshooting, proficiency with platform tools, and an intermediate
understanding of 3D graphics concepts. The authors demonstrated how some of
these complexities may be mitigated through a mediated transformation using
common open-source model formats and inexpensive tools. The authors’
IfcVRMLtoMAP also demonstrated how facility metadata can be automatically
injected into the process to make informed decisions about the transformation to real-
time, interactive engine .MAP files. While the IfcVRMLtoMAP application provides
productivity gains, it is not intended to fully automate the multi-user visualization
process; and expecting such a solution is unrealistic given the constraints of existing
approaches.

RECOMMENDATIONS

Further development efforts of applications like IfcVRMLtoMAP are required to realize a more comprehensive and productive mashup of IFC model attributes, COBie
space/equipment sheets, and multi-user visualization options. Eventually it may be
necessary to introduce a visualization designers’ IFC Model View Definition to
promote IFC interoperability with general purpose real-time visualization software.


More extensive evaluation of model transformations over a larger sample size will
require the implementation of additional automated testing features such as
comparison of source and target model centroids and surface areas. A standardized
set of common test metrics for IFC files should be provided in a format accessible to
application developers. Such disciplined approaches to testing and evaluation are
necessary to establish trust in adopted visualization technologies, and move beyond
vendor claims and anecdotal successes. In consideration of these issues, the authors
of this paper have recently submitted a technical article manuscript including
evaluation results of their IFC to .MAP conversion process to the ASCE’s Journal of
Computing in Civil Engineering.

Figure 2. Visualization collage of transformed clinic (left) and duplex (right) models

ACKNOWLEDGEMENTS

This work was sponsored under the Life-Cycle Model for Mission-Ready,
Sustainable Facilities project through the U.S. Army Engineer Research and
Development Center. The authors would like to acknowledge Howard Yu (ERDC-
Champaign) for his assistance in preparing the clinic demonstration, Nicholas Nisbet
(AEC3 UK) for his IFC expertise and his work on BIMServices, and the
buildingSMART alliance™ for their on-going commitment to BIM interoperability.

REFERENCES

East, B. (2009). "The COBIE2 Challenge." <http://www.buildingsmartalliance.org/index.php/newsevents/meetingspresentations/cobie2challenge/> (November 29, 2010).
Fu, M. C., and East, W. E. (1999). "The Virtual Design Review." Computer-Aided
Civil and Infrastructure Engineering, 14(1), 25-35.
Kumar, S., Hedrick, M., Wiaceck, C., and Messner, J. I. (2011). "Developing an
Experienced-Based Design Review Application for Healthcare Facilities
Using a 3D Game Engine." ITCon, 16(1), 85-104.
McDonald, C. E. (2007). "Framework for a Visual Energy Use System," Master of
Science, Texas A&M, College Station.
Melax, S. (1998). "A Simple, Fast, and Effective Polygon Reduction Algorithm."
Game Developer, 5(11).
Modsonwiki.com "Call of Duty 4: Compiling, BSP Limits (approximate)."
<http://www.modsonwiki.com/index.php/Call_of_Duty_4:_Compiling>
(July 13, 2010).
O'Coill, C., and Doughty, M. (2004) "Computer Game Technology as a Tool for
Participatory Design." eCAADe2004: Architecture in the Network Society,
Copenhagen, Denmark.
Shiratuddin, M. F., and Thabet, W. (2002). "Virtual Office Walkthrough Using a
3D Game Engine." International Journal of Design Computing(4), 4.
Shiratuddin, M. F., and Thabet, W. (2011). "Utilizing a 3D Game Engine to
Develop a Virtual Design Review System." ITCon, 16(1), 39-68.
Understanding Building Structures Using BIM Tools

N. Nawari1, L. Itani2, E. Gonzalez3

1 Assistant Professor, School of Architecture, University of Florida, Gainesville, FL 32611-5702, Email: nnawari@ufl.edu
2 Student, College of Engineering, University of Florida, Gainesville, FL 32611-5702, Email: litani@ufl.edu
3 Grad. Student, College of Engineering, University of Florida, Gainesville, FL 32611-5702, Email: egonzalez6@ufl.edu

ABSTRACT

Knowledge, technology, and information sharing are among the areas that have significantly affected the learning process of 21st-century students. In this environment,
computers and advanced technology play an important supporting role. This research
proposes a method to develop and enhance the understanding of fundamental
principles of structural analysis using Building Information Modeling (BIM) tools.
The study focuses on evaluating the effects of various gravity and lateral loads on
portal frames and their relationship to basic static equilibrium equations. BIM tools
promote the understanding of structural analysis concepts such as the force
equilibrium, support reactions, shear force, and bending moment diagrams. Digital
tools, such as Revit Structure and Robot, aid in the investigation and correlation
between simple structural systems and static equilibrium equations to understand the
conceptual behavior of portal frames. By conceptually understanding the behavior of
frames experiencing load combinations, various complex structural computations can
be approximated by static equilibrium equations or related to a simply supported
beam or a fully fixed structure. This method allows engineering students to develop
deep learning and long-term retention in an environment in which conceptual thinking is a
core activity.

INTRODUCTION

Structural analysis education for undergraduate engineering students mainly focuses on computation, without stressing the importance of understanding the conceptual behavior of structural systems. Addis (1991) noted that at all times in architectural
engineering history there have been some types of knowledge which have been
relatively easy to store and to communicate to other people, for instance by means of
diagrams or models, quantitative rules or in mathematical form. At the same time,
there are also other types of knowledge which, even today, still appear to be difficult
to condense and pass on to others; they have to be learnt afresh by each young
engineer or architect, a feeling for structural behavior, for instance. Currently, in
the education of young structural engineers and architects, educators have tended to
concentrate particularly on that knowledge which is easy to store and communicate.


Unfortunately, other types of engineering knowledge have come to receive rather less
than their fair share of attention.
To advance other types of structural engineering knowledge, this research focuses on the conceptual and qualitative behavior of a structure and on engaging students' imagination so that they use it no less creatively than a musician or artist producing ideas out of his or her own head. In addition to the envisioning of a geometrical shape or type of material, which can be done largely from memory, there is also the possibility of carrying out structural analysis in the mind, what can be termed conceptual analysis. The research aims to emphasize the value of a qualitative understanding of structural behavior in the context of the education of engineers and architects. Although data are lacking to allow comparison with earlier times, some alarm has been sounded at the poor qualitative and conceptual understanding among young structural engineers and architects.
With recent technological advancements, students have more tools to analyze
and demonstrate how load combinations affect the stability and behavior of a
structure. Specifically, Building Information Modeling (BIM) has the potential to
assist in achieving different types of structural knowledge learning objectives without
compromising their distinct requirements. Building information modeling, or BIM, is
a process that fundamentally changes the role of computation in structural design by
creating a database of the building objects to be used for all aspects of the structure
from design to construction and beyond.
BIM has revolutionized the design and construction of buildings mainly due
to its ability to specify the interaction of stresses, section properties, material strength,
and deformation based on type of supports and connections. This research project
focuses on utilizing Revit Structure and its extensions including Robot Structural
Analysis software to understand the basics of building structures and how to
conceptually analyze members such as portal frames. This conceptual knowledge of
structural behavior is similar to the type of knowledge usually associated with craft
skill, or the skill of knowing how to do something (e.g., swim, paint, make docks,
play a musical instrument), and it normally yields deep learning results.
The experimental research team includes one undergraduate student and one
graduate student from the college of engineering and one graduate student from
the college of design and construction working at the school of architecture, University
of Florida to investigate how BIM would improve learning and understanding of
building structures. The research team was introduced to the basics of BIM and Revit
Structures. This introduction took about eight contact hours (see figure 1). The last
phase of this introduction was an overview of REVIT Structure emphasizing the
comprehension of new concepts such as model element, categories, families, types
and instances. Before starting the analysis, students were assigned simple projects
to practice using Revit Structure in modeling single- and two-storey steel and wood
buildings. Figure 1 below illustrates the process followed in this introduction
(Sharag-Eldin & Nawari 2010).

Figure 1 : BIM introduction blocks.

Following the introduction, students started to learn about the analysis tools in Revit
Structure. These tools are available as extensions to the basic version of Revit Structure. The BIM tools used in this research are principally the beam and frame simulation, the load takedown, and the integration with Robot Structural Analysis. The load
takedown played an important role in introducing load path, load tracing, reactions
and constraints in building structures. Students were able to understand concepts such as tributary areas for beams, girders, and columns in a visually interactive manner (see Figure 2), which greatly stimulated their interest and motivation to explore other analysis capabilities of the tool.
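As a simple hand-check of the load-takedown idea the students explored, the sketch below computes the tributary width, beam line load, and support reaction for a hypothetical one-way framing layout; the numbers are illustrative and do not come from the Revit extension:

```python
def load_takedown(area_load_psf, beam_spacing_ft, beam_span_ft):
    """Hand-check of a one-way load takedown: a uniform area load travels to
    evenly spaced beams, then to the girders/columns supporting each beam end."""
    tributary_width = beam_spacing_ft                  # ft carried by an interior beam
    line_load = area_load_psf * tributary_width        # lb/ft along the beam
    beam_reaction = line_load * beam_span_ft / 2        # lb delivered to each support
    return tributary_width, line_load, beam_reaction

w, q, r = load_takedown(area_load_psf=50, beam_spacing_ft=8, beam_span_ft=20)
print(w, q, r)   # 8 ft tributary width, 400 lb/ft line load, 4000 lb per support
```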
The study was then centered on understanding the conceptual behavior of
frames under gravity and lateral loads using these tools. The next sections illustrate
the approach and results obtained.

Figure 2. An example of load takedown results obtained by students: (a) tributary areas for beams and girders; (b) axial column loads map.



STUDY APPROACH

Presently, the structural engineering curriculum focuses primarily on the quantitative and formalized aspects of structural engineering knowledge, those which
can readily be expressed in numbers and symbols, and combinations of these, which
are of use in design procedures and the description and justification of proposed
designs. However, in addition to these formal techniques, it is also important to make
use of the concepts behind the words and symbols at a more fundamental level, to
assist thinking about structures; much of this thinking is undertaken not in
quantitative terms but qualitatively. For instance, consider the interesting
example given by Addis (1990): a beam in a structure that is not pin-jointed at its
ends may nonetheless have its maximum bending moment calculated as that of a
simply supported beam, namely PL/4 (load x span / 4). Alternatively, although known
not to have fixed ends, it may be designed using computations appropriate to a beam
with fully restrained ends, PL/8 (perhaps to take into account some extra stiffness
provided by the floor system). Both of the above cases might astonish an engineer a
little in their use of assumptions known to be imprecise. That surprise would become
alarm upon examining the use of the formula PL/6 for the case of a portal frame
believed to be 'partially restrained'. Unlike the previous two formulae, this last one
has no theoretical justification at all, and yet a designer might be quite pleased with
such an approach because its use can yield safe and economical structures. It is this
ability to conceptualize that marks the transition from direct structural engineering
fundamentals to the ability to think about, to understand, and to communicate about
structural behavior.
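To make these magnitudes concrete, the following short calculation (a hypothetical hand check, not part of Addis's example) evaluates the three design moments for a 1.0 kip point load on a 20 ft span, the frame geometry used later in this paper:

```python
def design_moments(P_kip, L_ft):
    """Classic end-condition approximations for the design moment of a beam
    carrying a central point load P over span L."""
    return {
        "simply supported (PL/4)":    P_kip * L_ft / 4,
        "partially restrained (PL/6)": P_kip * L_ft / 6,
        "fully fixed (PL/8)":         P_kip * L_ft / 8,
    }

# Hypothetical values matching the 20 ft x 20 ft frame and 1.0 kip load used later.
for name, M in design_moments(1.0, 20.0).items():
    print(f"{name:>28s}: {M:.2f} kip-ft")
# simply supported (PL/4): 5.00, partially restrained (PL/6): 3.33, fully fixed (PL/8): 2.50
```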
It is critical that structural education in engineering as well as in architecture
should incorporate an increased concentration upon behavior and conceptual analysis
as a central activity. It should include these as an identifiably distinct skill which
needs to be nurtured and developed separately. A move towards this goal has been
frequently suggested by many researchers and practitioners (for instance, Beckmann
1966, Pugsley 1980, Harris 1980, Cowan 1981, Dunican 1981, Lewin 1981, Brohn
1982, Addis 1991, and Hills & Tedford 2002).
The ability to use concepts of structural analysis and design successfully in
discussing and explaining engineering behavior gives a very direct indication of a
student’s understanding of, and ability to think about, structures. However, to do this
is not to engage only in an objective and merely descriptive activity. The use of
concepts and, more generally, language, in both thought and discussion specifies a
standpoint from which an engineer is viewing problems.
This study investigates the conceptual understanding of structural behavior
using building information modeling combined with analysis and design tools. The
BIM tools used to analyze structures throughout this part of the research project
mainly utilizes the integration between Revit Structure and Robot Structural
Analysis. Gravity and lateral loads were applied to a variety of concrete portal frames
to develop and enhance the understanding of the fundamental principles of shear and
bending moment behavior, as well as how the boundary conditions affect structural
behavior.
A general reinforced concrete portal frame of 20.0 ft. x 20.0 ft. was used as a
reference point during the study (Figure 3). A fixed and pinned frame experiencing a
single gravity point load was analyzed by altering the beam and column member
sizes. The first analysis allowed the beam sizes to remain constant as the column’s
width and depth gradually increased by intervals of 6 in. The second portion of the
study kept the column sizes constant and varied the beam sizes by increasing only the
height by 6 in. The analysis was repeated by removing the gravity load and replacing
it with a point lateral load.

Figure 3. RC portal frame.

CASE STUDY

The following conditions were analyzed in order to determine the behavior of fixed and pinned reinforced concrete portal frames experiencing various gravity or
lateral loads:

Table 1. Frame Member Sizes for the Analysis (20 ft. x 20 ft. frame subjected to a 1.0 kip gravity and lateral point load).

Constant beam size of 12 in. x 12 in., with varied column sizes: (1) 12 in. x 12 in.; (2) 18 in. x 18 in.; (3) 24 in. x 24 in.; (4) 30 in. x 30 in.; (5) 36 in. x 36 in.
Constant column size of 12 in. x 12 in., with varied beam sizes: (1) 12 in. x 12 in.; (2) 12 in. x 18 in.; (3) 12 in. x 24 in.; (4) 12 in. x 30 in.; (5) 12 in. x 36 in.

RESULTS AND DISCUSSION

When performing structural analysis, the geometry of the structure, the properties of the materials used, the support conditions, and the applied loads must be taken into
consideration. Each aspect plays an important role in the equilibrium of any structure.

The main concern of structural analysis is the calculation of internal force systems
and the determination of the corresponding internal stresses.
This research paper focuses on statically indeterminate frames which cannot
be solved simply using force equilibrium equations. In order to understand the
structural behavior and qualitatively determine the corresponding shear and bending
moment diagrams of such structures, the study endeavors to utilize the concepts of
simple beams, namely simply supported, cantilever, and fully fixed beams (Figure 4).

Figure 4. Simple beam and column concepts used: shear and bending moment diagrams for a simply supported beam (maximum moment PL/4), a fully fixed beam (maximum moment PL/8), and a cantilever column (maximum moment PL).

Fixed and pinned portal frames subjected to a 1.0 kip gravity load behave
similarly when analyzing their shear and bending moment diagrams. As the beams
increased in size in relation to the columns, the bending moments gradually moved
towards the center of the frame. The portal frame members act more like a simply
supported beam. Alternatively, as the columns increased in size in relation to the
beams, the bending moments gradually move away from the center and behave more
like a fully fixed structure. When column and beam sizes are the same, the behavior lies between that of a simply supported beam (PL/4) and a fully fixed beam (PL/8), consistent with the traditional PL/6 approximation for partially restrained members. Figure 5 below depicts the shear and bending moment diagrams for part of the results for different conditions.
As the portal frame undergoes a 1.0 kip lateral load, the effect of changing the column sizes (with the beam size fixed) on the shear and bending moment diagrams is insignificant in the case of the pinned frame. On the other hand, in the case of the fixed frame, as the column size increases the columns behave more like cantilever beams, with the maximum moment PL at the support. Considering pinned frames, as the beams gradually increase in size while the column size is fixed, the behavior of the frame is identical to that of the pinned frame when changing the column sizes. In the case of the fixed portal frame, however, the larger the beam sizes in relation to the column size, the more the columns behave like half cantilevers, with a maximum bending moment of PL/2 at both ends. Figure 6 below demonstrates part of the corresponding shear and moment diagrams for the fixed and pinned frames.
As the column size increases for a lateral load on the fixed frame, the bending
moment and shear force on the beam diminish almost to zero. For the pinned frame,
the changes in the sizes of the beam do not affect the magnitude of the bending
moment or shear force on the beam when subjected to the same lateral load.

Figure 5. Shear and bending moment diagrams for (a) pinned and (b) fixed portal frames subjected to a 1.0 kip gravity load, with varying column sizes (12 in. x 12 in., 24 in. x 24 in., and 36 in. x 36 in.).

Figure 6. Shear and bending moment diagrams for (a) pinned and (b) fixed portal frames subjected to a 1.0 kip lateral load, with varying beam sizes (12 in. x 12 in., 12 in. x 24 in., and 12 in. x 36 in.).



CONCLUSIONS

The research work was focused on the development of more effective ways in
which structural behavior and analysis can be judged, improved, communicated,
learnt and taught. The study emphasized the development of more effective
techniques by which structural engineering knowledge, in its widest sense, including
the understanding of structural behavior which is so essential to the skill of design,
can be educated and learnt. Structural education in engineering as well as in
architecture should incorporate an increased concentration upon behavior and
conceptual analysis as a central activity.
A direct approach to the understanding of structural behavior is to be
emphasized, rather than relying on the dubious assumption that such understanding
necessarily follows from learning the mathematics of structural analysis. Ultimately,
this type of knowledge is the only check on the legitimacy of using structural
engineering theories in design procedures and computer aided design software.
The use of BIM tools in a reflective mode to enhance learning of fundamental
structural concepts allowed students to appreciate the full behavior of the structure
and hence this approach has promoted improved deep learning/understanding of
structural behavior.

REFERENCES

Addis, W. (1991). "Structural Engineering – the nature of theory and design." Ellis
Horwood, New York.
Beckman, P. (1966). “Education lost have been misled”, Arup Journal 1, No.3, 7.
Brohn, D.M. (1982). “Structural Engineering – a Change in Philosophy”, Structural
Engineer, 60A, 117-120.
Cowan, J.(1981).”Design Education based on an Expressed Statement of the Design
Process”, Proc. Instn. Civ. Engrs 70, 743-753.
Duncan, P. (1981). “The Teaching of Structural Design: a Proposal”, Arup
Newsletter, No. 125, 1-2.
Harris, A. J. (1980). “Can Design be Taught?” Proc. Instnt., Civ., Engrs. , 68, 409-
416.
Hills, G. and Tedford, D., (2002). “Innovation in engineering education: the uneasy
relationship between
science, technology and engineering”, Proc. 3rd Global Cong. on Eng. Edu.,
Glasgow, UK, 43-48.
Lewin , D. (1981). “Engineering Philosophy – the Third Culture”, Journal of Royal
Society of Arts, 129, 653-666.
Pugsley, A. (1980). “The Teaching of the Theory of Structures”, Structural Engineer,
58A, 49-51.
Sharag-Eldin, A., and Nawari, N.O.:“BIM in AEC Education” 2010 Structures
Congress joint with the North American Steel Construction Conference in
Orlando, Florida, May 12-15, 2010, pp.1676-1688.
Efficient and Effective Quality Assessment of As-Is Building Information
Models and 3D Laser-Scanned Data

P. Tang1, E. B. Anil2, B. Akinci2, and D. Huber3

1 Civil and Construction Engineering Department, Western Michigan University, 4601 Campus Drive, Kalamazoo, MI 49008-5316; PH (269) 276-3203; FAX (269) 276-3211; email: pingbo.tang@wmich.edu
2 Civil and Environmental Engineering Department, Carnegie Mellon University, 5000 Forbes, Pittsburgh, PA 15213-3890; PH (412) 268-2959; FAX (412) 268-7813; email: {eanil, bakinci}@andrew.cmu.edu
3 Robotics Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213; PH (412) 268-2991; FAX (412) 268-6436; email: dhuber@cs.cmu.edu

ABSTRACT

Documenting as-is conditions of buildings using 3D laser scanning and Building Information Modeling (BIM) technology is being adopted as a practice for
enhancing effective management of facilities. Many service providers generate as-is
BIMs based on laser-scanned data. It is necessary to conduct timely and
comprehensive assessments of the quality of the laser-scanned data and the as-is BIM
generated from the data before using them for making decisions about facilities. This
paper presents the data and as-is BIM QA requirements of civil engineers and
demonstrates that the required QA information can be derived by analyzing the
patterns in the deviations between the data and the as-is BIMs. We formalized this
idea as a deviation analysis method for efficient and effective QA of the data and as-is
BIMs. A preliminary evaluation of the results obtained through this approach shows
the potential of this method for achieving timely, detailed, comprehensive, and
quantitative assessment of various types of data/model quality issues.

INTRODUCTION

Laser scanning is a method for capturing detailed geometries of constructed facilities and constructing as-is Building Information Models (BIMs) (Tang et al.
2010). These as-is BIMs can serve as central project knowledge bases for various
applications, such as facility management and renovation design (Tang et al. 2010).
In supporting such applications, it is critical to conduct timely, detailed, and
comprehensive quality assessments (QA) of the laser-scanned data and as-is BIMs.
Efficient QA can help to reduce the project delay and improve the proactivity of the
decision making. Detailed and comprehensive quality information about the data and
the as-is models is necessary for enabling better use of the data and models.
One QA method that has been utilized commonly by service providers is the


physical measurement method. In this method, engineers take a number of physical measurements in the facility and compare them to the corresponding virtual
measurements in the as-is BIM. These measurements can be performed randomly to
support statistical analyses of the data quality (Cheok et al. 2009; Cheok and
Franazsek 2009). While effective, this method suffers from additional time needed for
data collection and analyses. Collecting physical measurements is laborious and
time-consuming, resulting in evaluation periods of days or even weeks (Anil et al.
2011; Cheok and Franazsek 2009). Such tedious measurement collections
compromise the timeliness of QA, and pose accessibility issues for some parts of the
facility (Anil et al. 2011). Another limitation of this method is that it highlights the
error, but does not provide an assessment of possible reasons for the error. For
example, using this method, an engineer has limited clues about whether the
inconsistencies are caused by mistakes made by the modeler or by a scanner
calibration problem. A final limitation of this approach is that, in practice, it is
impractical to physically measure every possible location; hence, it is likely to miss
some problematic data points or model parts.
To overcome the limitations of the physical measurement method and achieve
timely, detailed, and comprehensive QA of laser-scanned data and as-is BIMs, we
have formulated a deviation analysis based approach. Assuming that most parts of an
as-is BIM derived from the data align well with the data, any substantial deviations
between the data and the BIM indicate potential quality issues of the data and BIM.
Similarly, data collected at different locations for the same facility should agree with
each other, and substantial deviations between scans collected at different stations for
the same objects could serve as indicators of data quality issues. According to our
investigations, different sources and types of errors in the data or model lead to
different deviation patterns. As a result, these deviation patterns can be visualized and
guide engineers in identifying potential data/model quality issues. Figure 1 shows a
colorized deviation pattern between the data and the BIM of a building’s roof. In the
circle at the bottom left, an object has larger deviations (yellow) compared with the
other parts of the roof (blue and green). Such deviation patterns can guide engineers
to investigate and analyze that part of the model and data in depth.

Figure 1. Deviation patterns color coded for a roof of a building.

The deviation analysis method addresses the limitations of the physical measurement method. It does not require physical measurements; hence, it can
deliver timely data and model quality information. Classifications of deviation
patterns enable engineers to identify different types of quality issues with the data and
the model. For all areas covered by data, this method can conduct comprehensive QA.

This paper presents the data and as-is BIM QA requirements of civil engineers, the
deviation analysis method for QA, and evaluation results illustrating how this
deviation analysis method meets the domain requirements.

RELATED STUDIES

Previous studies have explored QA approaches for laser-scanned data and as-is 2D/3D models. A research group at the National Institute of Standards and
Technology (NIST) proposed a physical measurement method for the QA of as-is
2D/3D building plans (Cheok et al. 2009; Cheok and Franazsek 2009). This method
estimates the confidence that a 2D/3D building plan meets a given accuracy
requirement based on statistical analysis of the differences between a number of
virtual and physical measurements (Cheok et al. 2009). While this approach is
effective in identifying modeling issues, since it directly compares the model with the
physical measurements, it is time consuming and difficult to achieve a comprehensive
assessment of the model quality. In addition, it focuses only on QA of 2D/3D models
and does not address the QA of the data.
Previous studies in multiple domains have explored methods for generating
deviation patterns between the data and models for quality control of manufactured
mechanical parts or constructed facilities (B. Akinci et al. 2006; Gordon et al. 2003).
For the quality control of mechanical parts, the manufacturing industry uses 3D
reverse engineering software for generating the deviations of the actual geometries of
these parts from the designed geometries (Innovmetric, Inc. 2010). Compared with
the QA of a building project, the QA of mechanical parts occurs in a controlled
environment for relatively small objects with minimal occlusion. In the domain of
construction management, researchers generated and visualized the deviations
between an as-built model derived from laser-scanned data and an as-designed BIM
for detecting and managing construction defects (B. Akinci et al. 2006). Since these
studies focused on quality control of physical objects rather than quality control of
3D data and models, they conducted limited explorations about the QA of data and
model and how to identify types of data/model errors based on deviation patterns.
Another method relevant to deviation analysis is clash detection — a method
used by building renovation projects for detecting the spatial conflicts of the designed
and existing objects (Autodesk Inc. 2010). This method identifies locations where the
space between designed and as-is objects is negative (strict physical clashes) or
smaller than user-specified tolerances (soft clashes). Binary clash/non-clash
information is a type of deviation indicator, but its binary nature poses limitations
when detailed deviation patterns, rather than binary clash maps, are needed.

QUALITY ASSESSMENT REQUIREMENTS AND AS-IS BIM WORKFLOW

Two major domain requirements need to be addressed for achieving timely, detailed, and more comprehensive QA of laser-scanned data and as-is BIMs. First,
various quality issues of the data and models occur along the workflow of
constructing an as-is BIM, and engineers need to pinpoint the types of these issues so
that they can fix them or make decisions with awareness of the identified errors that exist in the model. For instance, engineers need to know whether large deviations between
overlapping scans are caused by scanner calibration problems or data registration
errors, so that they can recalibrate the scanner or improve data registration
accordingly. Second, most applications have specific tolerances about the accuracy of
the data and as-is BIMs. The engineers need to quantify the magnitudes of deviations
or errors. For instance, if an architect specifies that the positioning accuracy tolerance
for windows is 5 cm, then the QA method should enable that architect to identify all
locations having errors larger than 5 cm.
A typical as-is BIM construction workflow is composed of three phases: (1)
Data collection; (2) Data preprocessing; and (3) Modeling the BIM. More detailed
descriptions of these three steps can be found in (Tang et al. 2010). Generally, the
first two phases influence the data quality, while the last phase influences the model
quality. The major error sources in the data collection phase include: 1) Incorrect
calibration of the scanner; 2) Mixed pixels due to spatial discontinuity edges; and 3)
Range errors due to specular reflections (Anil et al. 2011). Data preprocessing mainly
involves identifying and removing noisy data points, and aligning multiple scans in
local coordinate systems to a common coordinate system (known as data registration).
The major error sources involved in this step include: 1) Incorrect noise removals;
and 2) Data registration errors. The major error sources in the modeling phase include:
1) Failing to model physical components; 2) Modeling components using incorrect
shapes; 3) Modeling components with incorrect positions. A good QA approach
should be able to identify all these types of quality issues, and to enable engineers to
quantify and understand their implications for the domain applications. Due to space
limits, this paper focuses on the domain requirements and an evaluation of the
deviation analysis method on satisfying these requirements without detailing data
processing steps and the definitions of all error types. More details on these aspects
can be found in a related publication (Anil et al. 2011).

DEVIATION ANALYSIS

The deviation analysis method completes the QA in two steps: 1) deviation computation, and 2) deviation visualization. First, an algorithm computes the
deviations of data points from the surfaces of the as-is BIM based on the assumption
that all data points and the as-is BIM are in the same coordinate system. This
assumption is valid for all projects studied in this research. In these projects,
engineers first registered the laser-scanned point clouds to a geographic coordinate
system, and then created BIM in that coordinate system. The deviations can be
computed in several ways. The most common way is to compute the minimum
Euclidean distance from each point to its nearest surface in the BIM. Other methods
include computing the point-surface distances along user-specified directions, such as
the X, Y, or Z direction of the common coordinate system or the direction of the surface
normal. In this paper, we tested the approach using the minimum Euclidean distances.
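The following minimal sketch illustrates the signed point-to-surface computation, with BIM surfaces simplified to infinite planes defined by a point and a unit normal; a production tool would use bounded faces (point-to-triangle distances) and a spatial index, so this is an assumption-laden illustration rather than the authors' implementation:

```python
import numpy as np

def signed_deviations(points, faces):
    """Signed deviation of each scan point from its nearest BIM surface, with
    surfaces simplified to infinite planes given as (point_on_plane, unit_normal)."""
    devs = np.empty(len(points))
    for i, p in enumerate(points):
        # signed distance to every plane; keep the one smallest in magnitude
        d = [float(np.dot(p - c, n)) for c, n in faces]
        devs[i] = min(d, key=abs)
    return devs

# Hypothetical example: two walls 5 m apart, three scan points with small errors.
faces = [(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])),
         (np.array([5.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]))]
points = np.array([[0.012, 1.0, 1.0], [4.970, 2.0, 1.5], [0.300, 0.5, 0.5]])
print(signed_deviations(points, faces))   # approximately [0.012, 0.03, 0.3]
```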
After generating the deviations, engineers visualize the deviation patterns.
Generally, they can configure several aspects of the visualization algorithms. First,
they can configure the color maps. Two major categories of color maps are the
continuous color map, and the binary color map. In this paper, we focus on evaluating
a red-yellow continuous color map (gradual color variation from red to yellow with
the reduction of deviation values) and a yellow-green binary color map (assign
yellow/green color to data or model with deviations larger/smaller than a
user-specified threshold), as detailed later. Second, for continuous color maps,
engineers can configure it as unsigned or signed. Unsigned color maps visualize the
absolute deviation values, so that deviations of the same absolute values will have the
same color, while signed color maps visualize equivalent positive and negative
deviations with different colors. This paper focuses on signed color maps, which we
found to be more effective in practice. Third, engineers can configure the scale of the
color map so that they can control which ranges of deviations are of interest.
Specifically, they can configure the maximum and minimum deviation values
visualized; they can also set the threshold value for the binary color map to only
distinguish deviations larger and smaller than that threshold. Finally, engineers can
choose to colorize points or colorize the BIM surfaces. In this paper, we focus on
evaluating the point colorization method, since it can give more detailed and
localized deviation information for QA (Anil et al. 2011).
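A minimal sketch of the two color-map families follows; the ramp endpoint and the 0.025 m tolerance mirror values reported later in the paper, but the exact color parameterization of the authors' tool is assumed:

```python
def continuous_color(dev, dev_max=0.10):
    """Unsigned red-to-yellow ramp: red at |dev| >= dev_max, grading to yellow at
    zero deviation. (A signed map would use a second ramp for negative values;
    the parameterization here is an assumption for illustration.)"""
    t = 1.0 - min(abs(dev), dev_max) / dev_max    # 1 at zero deviation, 0 at dev_max
    return (1.0, t, 0.0)                           # RGB in [0, 1]

def binary_color(dev, threshold=0.025):
    """Binary map: yellow where |deviation| exceeds the tolerance, green otherwise.
    The 0.025 m threshold mirrors the project tolerance reported later in the paper."""
    return (1.0, 1.0, 0.0) if abs(dev) > threshold else (0.0, 1.0, 0.0)

print(continuous_color(0.05), binary_color(0.05))  # (1.0, 0.5, 0.0) (1.0, 1.0, 0.0)
```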
In addition to deviation generation and visualization, statistical analysis can
be used to analyze the deviation patterns. One example is to create the deviation
histograms for a certain region for obtaining the mode of deviation values, as shown
in a related publication (Anil et al. 2011). Such statistical methods could make the
deviation pattern analysis automatic. This paper focuses on the deviation generation
and visualization, and leaves the automated deviation analysis for future exploration.
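A minimal sketch of such a histogram-based analysis is shown below on synthetic data; the mode of the deviations within a region gives a rough estimate of any systematic offset, such as a registration error:

```python
import numpy as np

def deviation_mode(devs, bin_width=0.005):
    """Histogram the deviations of a region and return the center of the bin with
    the highest count, a rough estimate of any systematic offset in that region."""
    bins = np.arange(devs.min(), devs.max() + bin_width, bin_width)
    counts, edges = np.histogram(devs, bins=bins)
    k = counts.argmax()
    return 0.5 * (edges[k] + edges[k + 1])

devs = np.random.normal(loc=0.02, scale=0.004, size=1000)   # synthetic region
print(round(deviation_mode(devs), 3))                        # roughly 0.02 m offset
```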

EVALUATION RESULTS

We are in the process of evaluating the technical feasibility of the developed QA approach by using data and models generated for several projects by service
providers. In this paper, we use data from one of these projects to illustrate the
potential effectiveness of the deviation analysis method. Specifically, we conducted
two sets of evaluations and analyzed the results for understanding how this method
addresses two domain requirements of identifying different types of data and model
quality issues, and quantifying the magnitudes of these issues. To investigate whether
the deviation analysis can help engineers to identify different types of quality issues
of the data and as-is BIMs, we generated and analyzed large amounts of deviation
patterns. We found that all studies types of data/model quality issues can produce
distinguishable deviation patterns, and these patterns can serve as indicators for
guiding engineers in pinpointing the error types and sources.
Figure 2 shows typical deviation patterns revealing various data quality issues.
Figure 2(a) shows the roof’s top view of one of the studied buildings. It uses a binary
color map to highlight parts of the roof with deviations larger than 2.5 cm as yellow.
On the roof, two circular stripes are centered around two scanning locations on the
platform. These abnormal patterns correlated with the scanning locations indicate the
likelihood of incorrect scanner calibration. Figure 2(b) shows deviation patterns on
the front façade of this building using a continuous color map. On the roof, the
deviations increase roughly linearly from left to right. According to detailed analysis,
this gradient deviation pattern is caused by an inaccurate rotation angle used for data
registration. Figure 2(c) shows the deviation patterns around a window on the façade
of this building, using the same binary map adopted in 2(a). The deviations around
the two vertical edges of a window are all larger than 2.5 cm. Detailed investigations
revealed that the mixed pixels around spatial discontinuities influence the data quality
and cause such patterns. Using the same binary color map, Figure 2(c) and (d) show
that for all specular objects with high reflectivity, such as window glass and the
metallic awning, the deviations are larger than other parts, likely due to higher noise
in these regions. These observed correlations between deviation patterns and types of
data quality issues show the effectiveness of the deviation analysis method for
pinpointing types of data problems.

(a) Potential scanner calibration problem (b) A rotation error in data registration

(c) Mixed pixels at spatial discontinuities (d) Low data quality on specular surfaces
Figure 2. Deviation patterns for identifying various data quality issues.

Figure 3 shows deviation patterns caused by various modeling errors on one of the studied buildings. Figure 3(a) shows the photo of a part of the façade, and 3(b)
shows the deviation pattern of this region based on the continuous color map. Figure
3(b) shows a rectangular region with large deviations. That pattern is caused by an
inset rectangular region on the wall, which was a window but was sealed with bricks
that the modeler failed to model. Figure 3(c) shows abnormal patterns on all columns.
The radii of these columns vary parabolically, while the modeler assumed linear
variations. This example shows how to use the deviation patterns for identifying
problems of modeling with incorrect shapes. Figure 3(d) shows the deviation patterns
on another part of the façade using continuous color map. Different colors on the first
and second floors indicate that these walls are not coplanar, while the modeler
assumed that they were and modeled them incorrectly. All these examples indicate
that the deviation analysis method can pinpoint various model quality issues.
In relation to the requirement about quantifying the data/model quality issues,
the deviation analysis method enables engineers to configure parameters of the color
maps for visualizing deviations of interest. First, engineers can configure the
maximum and minimum deviations visualized by a continuous color map to only
show the patterns within that range based on their requirements. In Figure 2(b), the
range of interest is (-0.1 m to 0.1 m). In Figure 3 (b), (c), and (d), the ranges of
interest are (-0.2 m to 0.2 m), (-0.05 m to 0.05 m), and (-0.05 m to 0.05 m)
respectively. Generally, identifying “failing to model physical components” issues
needs a larger range than identifying the other two types of modeling issues, since
missing a component typically causes relatively larger deviations. Similarly, for the
binary color map, engineers can configure the threshold to only highlight regions
exceeding a tolerance. According to the tolerance specified in the project manual, we
used 0.025 m as the threshold for all shown results.

(a) Photo of a part of the back façade (b) Failing to model a physical component

(c) Model using incorrect shape (d) Model components with incorrect positions
Figure 3. Deviation patterns for identifying various model quality issues.

SUMMARY AND FUTURE RESEARCH

In this paper, we formulated a deviation analysis method to overcome the limitations of the physical measurement method for the QA of laser-scanned data and
as-is BIMs, and illustrated its effectiveness in addressing the domain requirements of
timely, detailed, and comprehensive QA of the data and BIM. Based on a list of data
and as-is BIM quality issues that we identified, we found that this deviation analysis
method can detect all listed quality issues. This method also enables engineers to
quantify and visualize deviations of a certain magnitude for improving their quantitative awareness of the data and BIM quality issues.


In the future, we plan to improve this method in these aspects: 1) identify
more types of data and model quality issues and further evaluate the performance of
the deviation analysis method on identifying them; 2) formulate a taxonomy of data
and model quality issues for formalized and systematic QA of 3D data and as-is BIMs; 3)
conduct more detailed evaluation of the efficiency of this method; and 4) develop
pattern recognition methods for automated deviation pattern analysis. In addition to
these technological improvements, we envision that this approach will evolve into a
methodology for automated data and model quality management to aid data driven
decision making in construction and facility management projects, and to aid
data/model quality driven data collection and interpretation on job sites.

ACKNOWLEDGEMENT

This material is based upon work supported by the U.S. General Services
Administration under Grant No. GS00P09CYP0321. Any opinions, findings,
conclusions, or recommendations presented in this publication are those of the authors
and do not necessarily reflect the views of the U.S. General Services Administration.

REFERENCES

Akinci, B., Boukamp, F., Gordon, C., Huber, D., Lyons, C., and Park, K. (2006). “A
formalism for utilization of sensor systems and integrated project models for
active construction quality control.” Automation in Construction, Elsevier,
15(2), 124–138.
Anil, E. B., Tang, P., Akinci, Burcu, and Huber, Daniel. (2011). “Assessment of
Quality of As-is Building Information Models Generated from Point Clouds
Using Deviation Analysis.” Proceedings of SPIE, San Jose, California, USA.
Autodesk, Inc. (2010). “Navisworks.”
http://usa.autodesk.com/adsk/servlet/pc/index?siteID=123112&id=10571060.
Cheok, G. S., Filliben, J. J., and Lytle, A. M. (2009). Guidelines for accepting 2D
building plans. NIST Interagency/Internal Report (NISTIR) - 7638.
Cheok, G. S., and Franazsek, M. (2009). Phase III: Evaluation of an Acceptance
Sampling Method for 2D/3D Building Plans. NIST Interagency/Internal report
(NISTIR)-7659.
Gordon, C., Boukamp, F., Huber, D., Latimer, E., Park, K., and Akinci, B. (2003).
“Combining reality capture technologies for construction defect detection: a
case study.” EIA9: E-Activities and Intelligent Support in Design and the Built
Environment, 9th EuropIA International Conference, Citeseer, 99–108.
Innovmetric, Inc. (2010). “Polyworks v11.0.” www.innovmetric.com.
Tang, P., Huber, Daniel, Akinci, Burcu, Lipman, R., and Lytle, A. (2010).
“Automatic reconstruction of as-built building information models from
laser-scanned point clouds: A review of related techniques.” Automation in
Construction, 19(7), 14.
Occlusion Handling Method for Ubiquitous Augmented Reality
Using Reality Capture Technology and GLSL

Suyang Dong1, Chen Feng1, and Vineet R. Kamat1

1 Laboratory for Interactive Visualization in Engineering, Department of Civil and Environmental Engineering, University of Michigan, Room 1318, G.G. Brown, 2350 Hayward Street, Ann Arbor, MI 48109; Tel 734-764-4325; Fax 734-764-4292; email: dsuyang, cforrest, vkamat@umich.edu
ABSTRACT:
The primary challenge in generating convincing Augmented Reality (AR) graphics is
to project 3D models onto a user’s view of the real world and create a temporal and
spatial sustained illusion that the virtual and real objects co-exist. Regardless of the
spatial relationship between the real and virtual objects, traditional AR graphical
engines break the illusion of co-existence by displaying the real world merely as a
background, and superimposing virtual objects on the foreground. This research
proposes a robust depth sensing and frame buffer algorithm for handling occlusion
problems in ubiquitous AR applications. A high-accuracy Time-of-flight (TOF) camera
is used to capture the depth map of the real-world in real time. The distance information
is processed using the OpenGL Shading Language (GLSL) and rendered into the
graphics depth buffer, allowing accurate depth resolution and hidden surface removal
in composite AR scenes. The designed algorithm is validated in several indoor and
outdoor experiments using the SMART AR framework.

INTRODUCTION
As a novel visualization technology, Augmented Reality (AR) has gained widespread
attention and seen prototype applications in multiple engineering disciplines for
conveying simulation results, visualizing operations design, inspections, etc. For
example, by blending real-world elements with virtual reality, AR helps to alleviate the
extra burden of creating complex contextual environments for visual simulations
(Behzadan, et al., 2009a). As an information supplement to the real environment, AR
has also been shown to be capable of appending georeferenced information to a real
scene to inspect earthquake-induced building damage (Kamat, et al., 2007), or in the
estimation of construction progress (Golparvar-Fard, et al., 2009). In both cases, the
composite AR view is composed of two distinct groups of virtual and real objects, and
they are merged together by a set of AR graphical algorithms.
Spatial accuracy and graphical credibility are the two keys in the implementation of
successful AR graphical algorithms, while the primary focus of this research is
exploring a robust occlusion algorithm for enhancing graphical credibility in
ubiquitous AR environments. In an ideal scenario, AR graphical algorithms should
have the ability to intelligently blend real and virtual objects in all three dimensions,
instead of superimposing all virtual objects on top of a real-world background as is the
case in most current AR approaches. The result of composing an AR scene without
considering the relative depth of the involved real and virtual objects is that the
graphical entities in the scene appear to “float” over the real background rather than
blending or co-existing with real objects in that scene. The occlusion problem is more
complicated in outdoor AR where the user expects to navigate the space freely and the
relative depth between involved virtual and real content is changing arbitrarily with


time.
Several researchers have explored the AR occlusion problem from different
perspectives: (Wloka, et al., 1995) implemented a fast stereo matching algorithm
that infers depth maps from a stereo pair of intensity bitmaps; however, random gross
errors blink virtual objects on and off and are very distracting. (Berger, 1997)
proposed a contour based approach but with the major limitation that the contours need
to be seen from frame to frame; (Lepetit, et al., 2000) refined the previous method by a
semi-automated approach that requires the user to outline the occluding objects in the
key-views, and then the system automatically detects these occluding objects and
handles uncertainties on the computed motion between two key frames. Despite the
visual improvements, the semi-automated method is only appropriate for
post-processing; (Fortin, et al., 2006) demonstrated both a model-based approach using
bounding boxes and a depth-based approach using a stereo camera. The former works
only with a static viewpoint, and the latter is sensitive to low-textured areas; (Ryu, et al., 2010) tried to
increase the accuracy of depth map by region of interest extraction method using
background subtraction and stereo depth algorithms, however only simple background
examples were demonstrated; (Tian, et al., 2010) also designed an interactive
segmentation and object tracking method for real-time occlusion, but their algorithm
fails in the situation where virtual objects are in front of real objects.
In this paper, the authors propose a robust AR occlusion algorithm that uses a real-time
Time-of-flight (TOF) camera, an RGB video camera, and the OpenGL frame buffer to
correctly resolve the depth of real and virtual objects in AR visual simulations.
Compared with previous work, this approach enables improvements in three aspects:
1) Ubiquitous: TOF camera capable of suppressing background illumination enables
the algorithm and implemented system to work in both indoor and outdoor
environments. It puts the least limitation on context and illumination conditions
compared with any previous approach; 2) Robust: Due to the depth-buffering
employed, this method can work regardless of the spatial relationship among involved
virtual and real objects; 3) Fast: The authors take advantage of OpenGL texture and
OpenGL Shading Language (GLSL) fragment shader to parallelize the sampling of
depth map and rendering into the frame buffer. A recent publication (Koch, et al., 2009)
describes a parallel research effort that adopted a similar approach for TV production
in indoor environments with a 3D model constructed beforehand.

DEPTH BUFFER COMPARISON APPROACH


In this section, the methodology and computing framework for resolving incorrect
occlusion are introduced. This approach takes advantage of OpenGL depth buffering
on a two-stage rendering basis.
Distance Data Source
Accurate measurement of the distance from the virtual and real object to the eye is the
fundamental step for correct occlusion. In the outdoor AR environment, the distance
from the virtual object to the viewpoint is calculated using the Vincenty algorithm
(Vincenty, 1975) with the geographical locations of the virtual object and the user.
Location of the virtual object is decided in the event simulation phase. Meanwhile,
location of the user is tracked by Real-time Kinematic (RTK) GPS. The ARMOR
platform (Dong, et al., 2010) utilizes Trimble AgGPS 332 along with Trimble AgGPS
RTK Base 450/900 to continuously track the user’s position up to centimeter level
accuracy.
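For reference, a minimal sketch of Vincenty's inverse formula on the WGS-84 ellipsoid is given below; it is a standard textbook implementation rather than the ARMOR platform's code, the example coordinates are hypothetical, and the antipodal-point safeguards a production system would need are omitted:

```python
import math

def vincenty_inverse(lat1, lon1, lat2, lon2, tol=1e-12, max_iter=200):
    """Geodesic distance (m) between two WGS-84 points via Vincenty's inverse formula."""
    a = 6378137.0                  # semi-major axis (m)
    f = 1 / 298.257223563          # flattening
    b = (1 - f) * a

    L = math.radians(lon2 - lon1)
    U1 = math.atan((1 - f) * math.tan(math.radians(lat1)))
    U2 = math.atan((1 - f) * math.tan(math.radians(lat2)))
    sinU1, cosU1 = math.sin(U1), math.cos(U1)
    sinU2, cosU2 = math.sin(U2), math.cos(U2)

    lam = L
    for _ in range(max_iter):
        sin_lam, cos_lam = math.sin(lam), math.cos(lam)
        sin_sigma = math.hypot(cosU2 * sin_lam,
                               cosU1 * sinU2 - sinU1 * cosU2 * cos_lam)
        if sin_sigma == 0:
            return 0.0             # coincident points
        cos_sigma = sinU1 * sinU2 + cosU1 * cosU2 * cos_lam
        sigma = math.atan2(sin_sigma, cos_sigma)
        sin_alpha = cosU1 * cosU2 * sin_lam / sin_sigma
        cos2_alpha = 1 - sin_alpha ** 2
        cos_2sigma_m = (cos_sigma - 2 * sinU1 * sinU2 / cos2_alpha) if cos2_alpha else 0.0
        C = f / 16 * cos2_alpha * (4 + f * (4 - 3 * cos2_alpha))
        lam_prev = lam
        lam = L + (1 - C) * f * sin_alpha * (
            sigma + C * sin_sigma * (
                cos_2sigma_m + C * cos_sigma * (-1 + 2 * cos_2sigma_m ** 2)))
        if abs(lam - lam_prev) < tol:
            break

    u2 = cos2_alpha * (a ** 2 - b ** 2) / b ** 2
    A = 1 + u2 / 16384 * (4096 + u2 * (-768 + u2 * (320 - 175 * u2)))
    B = u2 / 1024 * (256 + u2 * (-128 + u2 * (74 - 47 * u2)))
    delta_sigma = B * sin_sigma * (
        cos_2sigma_m + B / 4 * (
            cos_sigma * (-1 + 2 * cos_2sigma_m ** 2)
            - B / 6 * cos_2sigma_m * (-3 + 4 * sin_sigma ** 2)
                                   * (-3 + 4 * cos_2sigma_m ** 2)))
    return b * A * (sigma - delta_sigma)

# Hypothetical pair of points a few metres apart (not surveyed values).
print(round(vincenty_inverse(42.29310, -83.71610, 42.29312, -83.71608), 3))  # roughly 2.8 m
```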

Fig. 1: Two Stages Rendering. (Diagram labels: RGB & TOF camera, depth image, RGB image, depth buffer, color buffer, AR registration, hidden surface removal, first and second rendering stages.)


On the other hand, the TOF camera estimates the distance from the real object to the eye
using the time-of-flight principle, which measures the time a signal, traveling at a
well-defined speed, spends between the transmitter and the receiver (Beder, et al., 2007).
The chosen PMD CamCube 3.0 utilizes Radio Frequency (RF) modulated light sources
with phase detectors. The modulated outgoing beam is sent out with an RF carrier, and
the phase shift of that carrier is measured on the receiver side to compute the distance
(Gokturk, et al., 2010). Compared with traditional LIDAR scanners and stereo vision,
the TOF camera has features that are ideal for real-time applications: it captures a
complete scene in a single shot at speeds of up to 40 frames per second (fps).
However, the TOF camera is vulnerable to background light, such as artificial lighting and
sunlight, which also generates electrons and confuses the receiver. Fortunately, the
Suppression of Background Illumination (SBI) technology allows the PMD CamCube 3.0 to
work reliably in both indoor and outdoor environments (PMD, 2010).
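For reference, under the standard continuous-wave phase-shift model (an assumption made
here for exposition; the exact modulation parameters of the CamCube 3.0 are not given in
this paper), the measured phase offset of the RF carrier maps to distance as

    d = (c / (4π · f_mod)) · Δφ,        d_max = c / (2 · f_mod),

where c is the speed of light, f_mod is the modulation frequency, Δφ ∈ [0, 2π) is the
measured phase shift, and d_max is the unambiguous range. An unambiguous range of about
7 m, for example, would correspond to f_mod ≈ 21.4 MHz, which is the source of the
modular (range-wrapping) error discussed later in the Validation section.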
Two Stages Rendering
Depth buffering, also known as z-buffering, is the solution for hidden-surface
elimination in OpenGL and is usually done efficiently on the graphics card (GPU).
The depth buffer is a two-dimensional array that shares the same resolution as the
viewport and keeps, for each pixel, a record of the closest depth value to the observer.
A new candidate color arriving at a pixel is not drawn unless its corresponding depth
value is smaller than the stored one; if it is drawn, the stored depth value is replaced
by the smaller one. In this way, after the entire scene has been drawn, only those items
not obscured by any others remain visible.
Depth buffering thus provides a promising approach for solving the AR occlusion
problem. Fig. 1 shows the two-stage rendering method: in the first rendering stage, the
background of the real scene is drawn as usual, but with the depth map retrieved from
the TOF camera written into the depth buffer at the same time. In the second stage, the
virtual objects are drawn with depth buffer testing enabled. Consequently, the invisible
part of a virtual object, whether hidden by a real object or by another virtual one, is
correctly occluded.
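A minimal sketch of this two-stage render loop is given below. It is C-style OpenGL
pseudocode reflecting the description above, not the authors' actual implementation;
function and variable names such as drawBackgroundQuadWithDepth(), drawVirtualObjects(),
rgbTexture, and depthTexture are hypothetical placeholders.

    // Stage 1: draw the real-scene background and populate the depth buffer.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    // A viewport-sized quad textured with the RGB video frame is drawn; a fragment
    // shader (see Appendix A) overrides gl_FragDepth with the preprocessed TOF depth
    // map, so the color buffer receives the video image and the depth buffer receives
    // the real scene's depth.
    drawBackgroundQuadWithDepth(rgbTexture, depthTexture);

    // Stage 2: draw the virtual objects with depth testing enabled, so any fragment
    // lying behind the real scene (or behind another virtual object) fails the depth
    // test and is correctly occluded.
    drawVirtualObjects();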
Challenges with Depth Buffering Comparison Approach
Despite the simplicity of the depth buffering approach, there are
several challenges when feeding the depth buffer with TOF camera distance information:

1) After being processed through the OpenGL graphics pipeline and written into the
depth buffer, the distance between the OpenGL camera and the virtual object is no
longer the physical distance (Shreiner, et al., 2006). The transformation model is
explained in the Preprocessing of Depth Map subsection below. Therefore, the
per-pixel distance from the real object to the viewpoint given by the TOF camera has
to be processed by the same transformation model before it is written into the depth
buffer for comparison.
2) The traditional glDrawPixels() command can be extremely slow when writing a
two-dimensional array, i.e., the depth map, into the frame buffer. The section on
using texture to render to the frame buffer introduces an alternative, efficient
approach using OpenGL textures and GLSL.
3) The resolution of the TOF depth map is fixed at 200*200, while that of the depth buffer
can be arbitrary, depending on the resolution of the viewport. This implies the need
for interpolation between the TOF depth map and the depth buffer; the same section
also takes advantage of OpenGL texture filtering to fulfill this interpolation task.
4) There are three cameras involved in rendering an AR space: the video camera captures
the RGB values of the real scene as the background, and its result is written into the
color buffer; the TOF camera acquires the depth map of the real scene, and its result
is written into the depth buffer; the OpenGL camera projects virtual objects on top
of the real scene, with its result written into both the color and depth buffers. To
ensure correct registration and occlusion, all of them have to share the same
projection parameters: aspect ratio and focal length. While the projection parameters
of the OpenGL camera are adjustable, the intrinsic parameters of the video camera and
the TOF camera do not agree, i.e., they have different principal points, focal lengths,
and distortion models. Therefore, an image registration method is designed to find the
correspondence between the depth and RGB images.

Fig.2: Projective transformation of the depth map. The right side shows the original depth
map, and the left side shows the transformed depth map written into the depth buffer (Dong
and Kamat 2010)

DISTANCE DATA PREPROCESSING AND FUSION


Preprocessing of Depth Map
The distance value provided by the TOF camera is treated in the eye coordinate system as
ze (the actual distance from a vertex to the viewer along the viewing direction). In the
OpenGL pipeline, several major transformation steps are applied to ze before its value is
written into the depth buffer. Table 1 summarizes the transformation procedure, and more
detailed information is available from (Mcreynolds, et al., 2005): 1) the clip coordinate
zc (the distance value in clip space, where objects outside the view volume are clipped
away) is the result of transforming vertices in eye coordinates by the projection matrix;
2) dividing zc by wc (the homogeneous component in clip space), called the perspective
divide, generates zndc; 3) since the range of zndc (the distance value in normalized
device coordinate (NDC) space, which encompasses a cube and is screen independent) is
[-1,1], it needs to be offset and scaled to the depth buffer range [0,1] before it is sent
to the depth buffer.
Table 1: The transformation steps applied on the raw TOF depth image.

Name | Meaning                                        | Operation / Expression                                                   | Range
Ze   | Distance to the viewpoint                      | Acquired by the TOF camera                                               | (0, +∞)
Zc   | Clip coordinate after projection transformation | Mortho * Mperspective * [Xe Ye Ze We]^T, where n and f are the near and far planes and We is the homogeneous component in eye coordinates (usually equal to 1) | [-n, f]
Zndc | Normalized device coordinate                   | Zc / Wc, where Wc = Ze is the homogeneous component in clip coordinates | [-1, 1]
Zd   | Value sent to the depth buffer                 | (Zndc + 1) / 2                                                           | [0, 1]
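For illustration, the transformation chain in Table 1 collapses to a simple closed-form
function of the measured distance when a standard symmetric perspective projection (e.g.,
one set up with gluPerspective) is assumed; this assumption is ours, since the paper does
not list its projection matrix explicitly. A minimal C-style sketch:

    // Convert a TOF-measured eye-space distance ze (in meters) into the value stored
    // in the OpenGL depth buffer, assuming a standard symmetric perspective projection
    // with near plane n and far plane f.
    float distanceToDepthBufferValue(float ze, float n, float f)
    {
        // Perspective projection followed by the perspective divide:
        // zndc ranges from -1 (at ze = n) to +1 (at ze = f).
        float zndc = (f + n) / (f - n) - (2.0f * f * n) / ((f - n) * ze);
        // Offset and scale from [-1, 1] to the depth buffer range [0, 1].
        return 0.5f * (zndc + 1.0f);
    }

In the authors' pipeline, a conversion of this kind is applied to each TOF depth sample
before (or while) it is written into the depth buffer through the fragment shader
described later.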

Depth Map and RGB Image Fusion


Because of the different intrinsic parameters (i.e., principal points, focal lengths, and
distortion models) of the RGB and TOF cameras, image registration is indispensable
before either image can be written into the frame buffers. Given the intrinsic matrices of
both cameras and the relative extrinsic matrix between these two cameras, one solution
for mapping an image point x to another image point x' is formulated as follows
(Hartley, et al., 2003):

    x' = K' R K^(-1) x + K' T / Z

K and K' are the intrinsic matrices of the RGB and TOF camera respectively. R and T
represent the relative rotation and translation from the TOF image to the RGB image. Z is
the depth of point xTOF in the physical world, i.e., its distance to the TOF camera. This
model implies the way of registering the depth map with the RGB image, since Z is known
for each pixel in the TOF depth map. K and K' are known from camera calibration, while R
and T come from the decomposition of the essential matrix between the two cameras. The
essential matrix is the specialization of the fundamental matrix to the case of
normalized image coordinates: E = K'^T F K (Hartley, et al., 2003), where E is the
essential matrix and F is the fundamental matrix, which is obtained by matching identical
points in the two images.
Successful implementation of the above model depends on two conditions: the first is
accurate camera calibration, which is responsible for extracting the relative extrinsic
parameters as well as for the registration itself; the second is accurate extrinsic
parameters. As Fig. 3 shows, in order to cover as much of the same physical space as
possible, the RGB camera is located right on top of the TOF camera, which makes a
short-baseline configuration inevitable. With a short baseline, slight errors in camera
calibration and in selecting identical points can be amplified in the registration
process. The low resolution of the TOF image (200*200) makes it hard to calibrate the TOF
camera and to select identical points with small errors.
Thus, we adopt an alternative approach that takes advantage of the short baseline itself
and replaces the image registration problem with a homography model. Even though,
theoretically, a homography only holds for matching points on two physical planes or
under pure camera rotation, the short translation makes the approximation reasonable.
Fig. 3 shows the registration results using the homography, where the RGB image is
transformed into the TOF depth image coordinate frame. Since it is difficult to find
identical points using the depth map, we instead use the grey-scale intensity image
provided by the TOF camera, which has a one-to-one mapping to the depth map.
Transforming the RGB image points to the depth map coordinate system on the fly is very
expensive; to accelerate the process, the mapping relationship between RGB and TOF image
points is pre-computed and stored as a look-up table, and the depth value is bilinearly
interpolated for the corresponding RGB image points on the fly.
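A sketch of this pre-computation and on-the-fly lookup is shown below. It is illustrative
only: OpenCV is used here for convenience (the paper does not state which library was
used), the helper function names are hypothetical, and the matched point pairs are assumed
to have been selected manually on the TOF intensity image and the RGB image as described
above.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Offline: estimate the homography H mapping RGB pixel coordinates to TOF image
    // coordinates from manually matched identical points.
    cv::Mat estimateRgbToTofHomography(const std::vector<cv::Point2f>& rgbPts,
                                       const std::vector<cv::Point2f>& tofPts)
    {
        return cv::findHomography(rgbPts, tofPts);  // 3x3 projective transform
    }

    // Offline: precompute a look-up table giving, for every RGB pixel, its sub-pixel
    // location in the 200x200 TOF depth map.
    std::vector<cv::Point2f> buildLookUpTable(const cv::Mat& H, int w, int h)
    {
        std::vector<cv::Point2f> rgbPts, lut;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                rgbPts.emplace_back((float)x, (float)y);
        cv::perspectiveTransform(rgbPts, lut, H);
        return lut;  // indexed by y * w + x
    }

    // Online: fetch the depth for an RGB pixel by bilinear interpolation of the TOF
    // depth map at the precomputed location.
    float depthAtRgbPixel(const cv::Mat& depthMap /* CV_32F, 200x200 */,
                          const std::vector<cv::Point2f>& lut, int x, int y, int w)
    {
        cv::Point2f p = lut[y * w + x];
        int x0 = (int)p.x, y0 = (int)p.y;
        if (x0 < 0 || y0 < 0 || x0 + 1 >= depthMap.cols || y0 + 1 >= depthMap.rows)
            return 0.0f;  // outside the TOF field of view
        float fx = p.x - x0, fy = p.y - y0;
        float d00 = depthMap.at<float>(y0, x0),     d10 = depthMap.at<float>(y0, x0 + 1);
        float d01 = depthMap.at<float>(y0 + 1, x0), d11 = depthMap.at<float>(y0 + 1, x0 + 1);
        return (1 - fy) * ((1 - fx) * d00 + fx * d10) + fy * ((1 - fx) * d01 + fx * d11);
    }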

Fig.3: The identical points on the two images are used to calculate the homography matrix
that registers the RGB image with the TOF depth image.

USING TEXTURE TO RENDER TO FRAME BUFFER


This section describes how to efficiently render to the frame buffer using textures and
GLSL. After preprocessing, the depth map is ready to be written into the depth buffer.
However, a challenging issue is how to write to the depth buffer fast enough that
real-time rendering is possible: 1) the arbitrary size of the depth buffer requires
interpolation of the original 200*200 image; while software interpolation can be very
slow, texture filtering offers a hardware solution, since texture sampling is so common
that most graphics cards implement it very quickly; 2) even though the glDrawPixels()
command with the GL_DEPTH_COMPONENT parameter provides an option for writing an array
into the depth buffer, no modern OpenGL implementation can accomplish this efficiently,
since the data is passed from main memory through OpenGL to the graphics card on every
single frame. On the other hand, texturing a quad and manipulating its depth value in a
GLSL fragment shader can be very efficient.

Fig.4: Outdoor Validation Experiment, shown with the occlusion function disabled and
enabled.


A texture is the container of one or more images in OpenGL (Shreiner, et al., 2006) and
is usually bound to geometry. Here the OpenGL geometric primitive type GL_QUADS is chosen
as the binding target, and two 2D textures are pasted on it: one is the RGB image texture,
and the other is the depth map texture. The quad shares the same size as the virtual
camera's viewport and is projected orthographically as the background.
RGB Image Texture
Since modifying an existing texture object is computationally cheaper than creating a
new one, it is better to use glTexSubImage2D() to repeatedly replace the texture data
with newly captured RGB images (Shreiner, et al., 2006). However, the RGB image, whose
resolution is 640*480, must first be loaded into an initial, larger texture whose size in
each direction is the next power of two above that resolution, i.e., 1024*512.
Accordingly, the texture coordinates are assigned as (0, 0), (640/1024, 0),
(640/1024, 480/512), (0, 480/512) in counterclockwise order around the quad.
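A minimal sketch of this update strategy with standard OpenGL calls is shown below; the
variable rgbFramePixels stands for the newly captured 640*480 RGB frame and is an assumed
placeholder.

    // One-time setup: allocate a 1024x512 RGB texture (the next powers of two above
    // 640x480) without supplying any image data yet.
    GLuint rgbTex;
    glGenTextures(1, &rgbTex);
    glBindTexture(GL_TEXTURE_2D, rgbTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 512, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);

    // Every frame: overwrite only the 640x480 sub-region with the newly captured video
    // image, which is cheaper than recreating the texture object.
    glBindTexture(GL_TEXTURE_2D, rgbTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 640, 480,
                    GL_RGB, GL_UNSIGNED_BYTE, rgbFramePixels);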
Depth Map Texture
The same sub-image replacement strategy is applied to the depth map texture. However,
even though the internalformat of the texture is set to GL_DEPTH_COMPONENT, the depth
value written into the depth buffer is not the depth map texture but the actual depth
value of the quad geometry. Therefore, the depth value of the quad needs to be
manipulated in the fragment shader according to the depth map texture. A fragment shader
operates on every fragment spawned by the rasterization phase of the OpenGL pipeline. One
input to the fragment processor is the interpolated texture coordinates, and the common
end result of the fragment processor is a color value and a depth for that fragment
(Rost, et al., 2009). These features make it possible to alter the polygon's depth value
so that the TOF depth map can be written into the depth buffer. The basic GLSL source
code is listed in Appendix A.

VALIDATION
Despite the outstanding performance of the TOF camera in speed and accuracy, its biggest
technical challenge is the modular error, since the receiver determines the distance by
measuring the phase offset of the carrier. Measured ranges wrap around (modulo) the
maximum range, which is determined by the RF carrier wavelength. For instance, the
standard measurement range of the CamCube 3.0 is 7m (PMD, 2010); if an object happens to
be 8m away from the camera, its distance is represented as 1m (8 mod 7) on the depth map
instead of 8m. This can cause incorrect occlusion in outdoor conditions, where ranges can
easily exceed 7m. The authors have been looking into object detection, segmentation, etc.
to mitigate this limitation. For now, the experiment range is intentionally restricted to
within 7m.
The TOF camera is positioned approximately 7m from, and facing, the wall of a building,
so that ambiguous distances are ruled out. A small excavator model (credited to J-m@n
from the Google 3D Warehouse community) is positioned about 5m away from the TOF camera,
and the author is standing in front of the excavator. Scenarios with the occlusion
function enabled and disabled are shown in Fig. 4. It is obvious that occlusion provides
much better spatial cues and realism for outdoor AR visual simulation.

CONCLUSION AND FUTURE WORK


This paper described research that designed and implemented an innovative approach
to resolve AR occlusion in ubiquitous environments using real-time TOF camera
distance data and the OpenGL frame buffer. Sets of experimental results demonstrated
promising depth visual cues and realism in AR visual simulations. However, several
challenging issues remain outstanding and are currently being investigated by the
authors. For example, the fusion algorithm that uses a homography to register the RGB
image with the depth map is subject to ghost effects in some cases, whose causes have not
yet been clearly identified. The ghost effect implies over-occluding virtual objects and
leaving blank gaps between virtual and real objects. Secondly, the authors acknowledge
that the current 7m average operational range of the TOF camera limits fully outdoor
simulation visualization. However, the occlusion algorithm designed here is generic and
scalable, so future hardware with improved range and accuracy can be plugged into the
current AR visualization system with little modification to the core algorithm.
Meanwhile, the authors are studying the feasibility of implementing hybrid methods, such
as stereo vision and object detection, to mitigate this limitation.

REFERENCES
Acharya, T., & Ray, A. K. (2005). Image processing : principles and applications, John
Wiley & Sons, Inc.
Beder, C., Bartczak, B., & Koch, R. (2007). A Comparison of PMD-Cameras and
Stereo-Vision for the Task of Surface Reconstruction using Patchlets, Computer Vision
and Pattern Recognition, Minneapolis, MN, 1-8. .
Behzadan, A. H., & Kamat, V. R. (2009a). Scalable Algorithm for Resolving Incorrect
Occlusion in Dynamic Augmented Reality Engineering Environments, Journal of
Computer-Aided Civil and Infrastructure Engineering, Vol. 25, No.1, 3-19.
Behzadan, A. H., & Kamat, V. R. (2009b). Automated Generation of Operations Level
Construction Animations in Outdoor Augmented Reality, Journal of Computing in
Civil Engineering, Vol.23, No.6, 405-417.
Berger, M.-O. (1997). Resolving Occlusion in Augmented Reality: a Contour Based
Approach without 3D Reconstruction, Proceedings of CVPR (IEEE Conference on
Computer Vision and Pattern Recognition), Puerto Rico.
Dong, S., & Kamat, V. R. (2010). Robust Mobile Computing Framework for
Visualization of Simulated Processes in Augmented Reality, Proceedings of the 2010
Winter Simulation Conference, Baltimore, USA.
Fortin, P.-A., & Ebert, P. (2006). Handling Occlusions in Real-time Augmented
Reality: Dealing with Movable Real and Virtual Objects, Proceedings of the 3rd
Canadian Conference on Computer and Robot Vision, Laval University, Quebec,
Canada.
Gokturk, B. S., Yalcin, H., & Bamji, C. (2010). A Time-Of-Flight Depth Sensor -
System Description, Issues and Solutions, Proceedings of the 2004 Conference on
Computer Vision and Pattern Recognition Workshop, Washington, DC, USA, 35-44.
Golparvar-Fard, M., Pena-Mora, F., Arboleda, C. A., & Lee, S. (2009). Visualization of
construction progress monitoring with 4D simulation model overlaid on time-lapsed
photographs, Journal of Computing in Civil Engineering, Vol. 23, No. 4, 391-404.
Hartley, R., & Zisserman, A. (2003). Multiple View Geometry in Computer
Vision(Second Edition).
Kamat, V. R., & El-Tawil, S. (2007). Evaluation of Augmented Reality for Rapid
Assessment of Earthquake-Induced Building Damage, Journal of Computing in Civil
Engineering, Vol. 21, No. 5, 303-310.
Koch, R., Schiller, I., Bartczak, B., Kellner, F., & Koser, K. (2009). MixIn3D: 3D
Mixed Reality with ToF-Camera, Lecture Notes in Computer Science, Vol. 5742,
126-141.
Lepetit, V., & Berger, M.-O. (2000). A Semi-Automatic Method for Resolving
Occlusion in Augmented Reality, IEEE Computer Society Conference on Computer
Vision and Pattern Recognition, Hilton Head, South Carolina.
Mcreynolds, T., & Blythe, D. (2005). Advanced Graphics Programming Using OpenGL,
San Francisco, CA: Elsevier Inc.
OSGART. (2010). Main Page. Retrieved from OSGART:
http://www.osgart.org/wiki/Main_Page
PMD. (2010). PMD CamCube 3.0. Retrieved July 2010, from PMD Technologies:
http://www.pmdtec.com/fileadmin/pmdtec/downloads/documentation/datenblatt_cam
cube3.pdf
Rost, R. J., Licea-Kane, B., & Ginsberg, D. (2009). OpenGL Shading Language (3rd
Edition), Addison-Wesley Professional.
Ryu, S.-W., Han, J.-H., Jeong, J., Lee, S. H., & Park, J. I. (2010). Real-Time Occlusion
Culling for Augmented Reality, 16th Korea-Japan Joint Workshop on Frontiers of
Computer Vision, Hiroshima Japan, 498-503.
Shreiner, D., Woo, M., Neider, J., & Davis, T. (2006). OpenGL Programming Guide
(Fifth Edition).
Tian, Y., Guan, T., & Wang, C. (2010). Real-Time Occlusion Handling in Augmented
Reality Based on an Object Tracking Approach, Sensors , 2885-2900.

Vincenty, T. (1975). Direct and inverse solutions of geodesics on the ellipsoid with
application of nested equations, In Survey Reviews, Ministry of Overseas
Development, 88-93.
Wloka, M. M., & Anderson, B. G. (1995). Resolving Occlusion in Augmented Reality.
Symposium on Interactive 3D Graphics Proceedings ACM New York, NY, USA, 5-12.

Appendix A
// GLSL fragment shader used to texture the background quad. The function texture2D()
// receives a sampler2D (DepthTex or IntensityTex here) and the interpolated fragment
// texture coordinates, and returns the texel value for the fragment.
uniform sampler2D DepthTex;      // preprocessed TOF depth map texture
uniform sampler2D IntensityTex;  // RGB image texture of the real scene

void main()
{
    vec4 texelDepth = texture2D(DepthTex, gl_TexCoord[1].xy);
    // The final depth of the fragment, in the range [0, 1].
    gl_FragDepth = texelDepth.r;
    // Since a fragment shader replaces ALL per-fragment operations of the fixed-function
    // OpenGL pipeline, the fragment color has to be calculated here as well.
    vec4 texelColor = texture2D(IntensityTex, gl_TexCoord[0].xy);
    // The final color of the fragment.
    gl_FragColor = texelColor;
}
A Visual Monitoring Framework for Integrated Productivity and Carbon
Footprint Control of Construction Operations
Arsalan Heydarian1 and Mani Golparvar-Fard2
1
Graduate Student, Vecellio Construction Engineering and Management Group, Charles E.
Via Department of Civil and Environmental Engineering, and Myers-Lawson School of
Construction, Virginia Tech, Blacksburg, VA; PH (540) 383-6422; FAX (540) 231-7532;
email: aheydar@vt.edu
2
Assistant Professor, Vecellio Construction Engineering and Management Group, Charles E.
Via Department of Civil and Environmental Engineering, and Myers-Lawson School of
Construction, Virginia Tech, Blacksburg, VA; PH (540) 231-7255; FAX (540) 231-7532;
email: golparvar@vt.edu

ABSTRACT
As buildings and infrastructure are becoming more energy efficient, reducing and
mitigating construction-phase carbon footprint and embodied carbon is getting more
attention. Government agencies are forming incentive-based regulations on
controlling these impacts and expressing control of carbon footprints as principal
goals in projects. These regulations are placing requirements upon
construction firms to find control techniques to minimize carbon footprint without
affecting productivity of operations. Nevertheless, there is limited research on
integrated real-time techniques to monitor operations productivity and carbon
footprint. This paper proposes a new framework and presents preliminary data in
which (1) construction operations are visually sensed through construction site
imagery and video-streams; subsequently (2) equipment’s location and action are
semantically analyzed through an integrated 3D image-based reconstruction and
appearance-based recognition algorithm; (3) productivity and carbon footprint of
construction operations are measured through a new machine learning approach; and
finally (4) for each construction schedule activity, measured productivity and carbon
footprint are visualized.
INTRODUCTION
According to several research studies, the rise in Greenhouse Gas (GHG) emissions
is very likely the main reason for most of the recently observed increase in
temperature and other climate changes (EPA 2010, IPCC 2007). Globally, GHG
emissions from human activities increased by 26% from 1990 to 2005 (EPA 2010); over the
same period, U.S. GHG emissions increased by 14% (EPA 2010).
Among these emissions, carbon dioxide, which is the main driver of the rise in
temperature (EPA 2010), accounts for three quarters of total GHG emissions, with its
concentration increasing by 31% over the same period; meanwhile, a rise of
35% is projected by the U.S. Department of Energy (Artenian et al. 2010, IPCC 2008).
The construction industry is considered to be one of the major contributors to
these GHG emissions (EPA 2010). According to the EPA, historical emissions from 14
industrial sectors in the U.S. account for 84% of industrial GHG emissions, and the
construction sector is responsible for 6% of total U.S. industrial-related GHG
emissions, making it the third-highest GHG-emitting sector among them. Among all
environmental impacts from construction processes (e.g., waste generation, energy
consumption, resource depletion, etc.), emissions from construction equipment account for
the largest share (more than 50%) of the total impact (Guggemos and Horvath 2006).
Furthermore, embodied carbon (emissions from the production and transportation of
construction materials) accounts for another 8% of global GHG emissions and is mainly
released within the first year of any construction project.
In order to minimize concentrations of GHGs, the United Nations, many
European countries, and the state of California are considering a reduction of 80% in
GHG emissions by 2050, necessary to prevent the most catastrophic consequences of
climate change (Kockelman et al. 2009, Luers 2007). Nonetheless, in the U.S., a new
set of EPA off-road diesel emissions regulations is rapidly becoming a concern for
the construction industry (ENR 2010) and has led the Associated General
Contractors of America and the California Air Resources Board to postpone
enforcement of these emission rules until 2014. Although these regulations are
expected to reduce the construction carbon footprint by a large factor, industry
interest has been minimal due to the high cost of the alternatives: (1) purchasing new
equipment, and (2) upgrading older machinery. These regulations are challenging
construction firms to find solutions to reduce the carbon footprint of their operations
without affecting productivity and the final cost of their projects. In order to meet
these ambitious reductions in carbon footprints, a major cut in GHG emissions due to
construction operations, manufacture, and delivery of materials is necessary.
Among all decision alternatives, minimizing the idle time of construction
equipment would result in reduction of fuel use, extension of engine life, and safer
work environment for operators and workers on site. If the equipment is rented,
reducing the idle time can reduce the rental fee and the cost associated with the labor.
From a contractor's perspective, better operation planning and deployment of equipment,
informed by more accurate equipment idle time analysis, will improve construction
productivity, leading to significant time and cost savings (Zou and Kim 2007).
Establishing and implementing idle time reduction policies enables the construction
industry to take a proactive action in carbon footprint reduction (EPA 2010). Despite
the importance, reducing idle time for any onsite operation requires proper
assessment of productivity. It is important to first gather data on resources and
processes that are used for each construction operation in order to measure and
analyze productivity as well as carbon footprint.
Traditional data collection methods for productivity analysis (Oglesby et al. 1989)
include direct manual observation, i.e., a set of methods adopted from stop-motion
analysis in industrial engineering, and survey-based methods. Although these methods
provide beneficial insight into construction operations, implementing them is
time-consuming, manual, labor-intensive, and prone to errors (Su and Liu 2007). The
significant amount of information involved also affects the quality of the analysis and
makes it subjective (Gong and Caldas 2009, Grau et al. 2009, Golparvar-Fard et al. 2009),
and therefore many critical decisions are made based on faulty or incomplete information,
ultimately leading to project delays and cost overruns. Consequently, contractors only
attempt to collect productivity data at the project information system level.
Developing an automated productivity data


collection method will allow the contractors to measure the process of operations
throughout every stage of the construction, which is considered an important step
towards on-site productivity improvement. Over the past few years, cheap, high-resolution
digital cameras and extensive data storage capacities, in addition to the availability of
internet access on construction sites, have enabled capturing and sharing of construction
image collections and video streams on a truly massive scale. This imagery is
enabling construction firms to remotely and easily analyze progress, safety, quality,
and productivity (Golparvar-Fard et al. 2010).
Systematic monitoring and control enables construction professionals, suppliers,
and manufacturers to improve the operation's productivity while assessing its carbon
footprint. This may also motivate the development of low-carbon products
and better planning for efficient operations. It seems imperative for the construction
industry to (1) track construction operations and sense GHG emissions, (2) assess the
carbon footprint of supply and manufacturing processes, (3) study the relationship
between operations’ productivity and carbon footprints, and (4) visualize construction
and supply chain carbon footprints. The proposed framework in this paper enables
project stakeholders to visually determine the amount of carbon emissions in their
projects and improve each activity by adjusting productivity and reducing idle time.
Figure 1 presents an overview of the proposed method.

Figure 1. An overview of data and process in the proposed vision-based tracking and
integrated productivity and carbon footprint assessment framework.

RESEARCH BACKGROUND
In recent years, there have been a number of research groups that have focused on
estimating, monitoring and controlling construction operation GHG emissions. Ahn
et al. (2010) presents a model which estimates construction emission using a discrete
event simulation. Peña-Mora et al. (2009) present a framework on integrated
estimation and monitoring of GHG emission and recommend application of portable
emissions measurement systems. Lewis et al. (2009a) present the challenges
associated with quantification of nonroad construction vehicle emissions and
propose a new research agenda that specifically focuses on air pollution generated
by construction vehicles. Lewis et al. (2009b) study the impact of changing fuel
type and of Tier 0, 1, and 2 engines, and make recommendations through the development
and practical application of emission inventories for construction fleet management.
Artenian et al. (2010) demonstrated that lowering construction emissions could be
achieved through an intelligent and optimized GIS route planning for the construction
vehicles. Shiftehfar et al. (2010) also propose a visualization system which visualizes
the impact of construction operation emissions with a tree metaphor. In a more recent
study, Lewis et al. (2011) present a framework for assessing the effects of equipment
operational efficiency on the total pollutant emissions of construction equipment
performing a construction operation. Nonetheless, data collection and analyses in most
of these state-of-the-art approaches are not automated. Furthermore, significant
non-renewable energy is consumed in the acquisition of raw construction materials and
their processing, manufacturing, and transportation to the site, which is not considered
in these approaches. An automated tracking system
that can measure both construction operations and initial embodied carbon footprints
could result in a faster and more accurate data collection technique.
Similarly in recent years, a number of research groups have focused on automated
assessment of construction productivity and idle time. Gong and Caldas (2009), Grau
et al. (2009), and Su and Liu (2007) all emphasize the importance of real-time
tracking of construction operation resources. More specifically, Gong and Caldas
(2009) presented a vision-based tracking model for monitoring a bucket in
construction placement operations. Despite the effectiveness of the proposed
approach, the operation equipment location and action are not simultaneously
tracked. Zou and Kim (2007) have also presented an image-processing approach that
automatically quantifies the idle time of a hydraulic excavator; however, this approach
uses color information for detecting equipment motion in 2D and, since it relies on
color space, may not be robust to changes in scale, illumination, viewpoint, and
occlusion. To the best of the authors' understanding, there is no existing research on
automated vision-based tracking that can simultaneously locate equipment in 3D and
identify its idle times and actions. Such an approach not only allows the productivity of
construction operations to be remotely and inexpensively measured, but also enables
onsite monitoring of the construction carbon footprint. Integrated with the initial
embodied carbon, it enables construction practitioners to assess the productivity and
carbon footprint of their operations and decide on control actions that maintain or
maximize productivity while the overall carbon footprint is minimized.
INTEGRATED PRODUCTIVITY & CARBON FOOTPRINT MONITORING
The goal of the proposed framework is to establish guidelines on how to visually
monitor construction equipment, increase productivity of operations, and reduce
carbon footprint. To reach this goal, an initial study is done to understand time-cost-
footprint relationship, equipment productivity, and construction resources. An
automated and visual identification system to identify construction equipment’s
location and action is developed; this tracking technique allows for performing a
productivity analysis on each crew. To understand their relationship for every
activity and operation, a side-by-side productivity and carbon footprint analysis was
then performed. Hence, as an initial step an integrated 3D reconstruction and
recognition algorithm is proposed to sense and model the construction site.

In the proposed approach, (1) construction operations are visually sensed through
construction site video streams from fixed cameras; subsequently, (2) equipment is
recognized and located in 2D frames. For this purpose (as shown in the process and
data model presented in Figure 1), these videos are further processed to spatially
recognize and locate equipment in 3D and geo-register their locations in the D4AR
(4-dimensional augmented reality) environment (Golparvar-Fard et al. 2010, 2009).
Equipment actions are recognized using an action recognition model. Throughout this
stage, for each piece of equipment i, the location Li(x, y, z, time) and action
Action(Li) are monitored and reported. (3) The productivity and carbon footprint of
construction operations are measured through a new machine learning approach; finally,
(4) by integrating 4D Building Information Models for each construction schedule
activity, the measured productivity as well as the operation and embodied carbon
footprints are visualized. Figure 2 shows the IDEF-0 representation for monitoring
equipment actions, locations, and productivity.

Figure 2. IDEF-0 representation of tracking, analyzing location and action,


measuring productivity and carbon footprint, and visualizing the results.
Productivity
An accurate prediction of the productivity of construction equipment is necessary and
critical in construction control. In this research, productivity of construction operation
is estimated through a new action recognition machine learning approach. Through the
real-time action recognition model, a process chart of the construction equipment and its
actions for a specific operation is produced.
Carbon Footprint
Initial Embodied Carbon: To calculate accurate construction emission rates, the
proposed mathematical algorithm integrates the initial embodied carbon with the
operation carbon emissions (Eq. 1). The initial embodied carbon in building
construction comes from the non-renewable energy consumed as indirect energy use (energy
for acquisition of raw materials, processing, and manufacturing), direct energy use
(transportation of the materials to the site), and on-site construction and assembly.
Due to a lack of accurate databases and tracking techniques for construction
material resources, embodied carbon is usually deemed optional in carbon emissions
analysis and calculations for construction processes. The proposed method is based
on the D4AR (4D augmented reality) monitoring tool to query specific material used
in every stage of the construction from the underlying building information model.
Since the D4AR model is linked to the construction schedule, it can also provide a
connection between embodied emissions and operations emissions.
Operation Carbon: To measure the operation carbon footprint, the activities that need to
be monitored are initially queried from the D4AR model. Similar to Lewis et al.
(2011), and based on the monitoring component and the equipment manufacturer dataset, for
each equipment action the engine power (EP), operation hours (OD), emission factor (EF),
load factor (LF), on-site humidity, and the site's physical characteristics are measured
(Eq. 2). The overall effect of humidity varies by 1% to 9% at different times of the day;
for instance, lower emission rates are expected in the evening and early morning, when
the humidity level is higher and the temperature lower (Lindhjem 2004). Figure 3 presents
the instantaneous and accumulative carbon footprints and the gained reductions.
    CF = Σ (OE_i + EE_i), i = 1 … # of activities        (1)
    OE = EP × OD × EF × LF                                (2)
    OE = Σ (em_i × tm_i), i = 1 … # of actions            (3)
where em is the Emission Module measurement of each action, and tm is the duration
of each action. OE is the operations emission and EE is the embodied emission.
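A minimal computational sketch of this accounting is shown below; it follows the reading
of Eqs. 1-3 given above, so the variable names and aggregation order are assumptions
rather than the authors' exact formulation.

    #include <vector>

    struct EquipmentAction {
        double em;  // emission module of the action (e.g., kg CO2 per hour)
        double tm;  // duration of the action in hours (idle, digging, hauling, ...)
    };

    // Operations emission of one activity from aggregate factors (Eq. 2).
    double operationsEmissionFromFactors(double EP, double OD, double EF, double LF) {
        return EP * OD * EF * LF;
    }

    // Operations emission of one activity by accumulating em * tm over its recognized
    // actions (Eq. 3).
    double operationsEmission(const std::vector<EquipmentAction>& actions) {
        double oe = 0.0;
        for (const auto& a : actions) oe += a.em * a.tm;
        return oe;
    }

    // Total carbon footprint: operations emission plus embodied emission, summed over
    // all schedule activities (Eq. 1).
    double totalCarbonFootprint(const std::vector<double>& oePerActivity,
                                const std::vector<double>& eePerActivity) {
        double cf = 0.0;
        for (size_t i = 0; i < oePerActivity.size(); ++i)
            cf += oePerActivity[i] + eePerActivity[i];
        return cf;
    }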
Figure 3. Construction Carbon Footprint: (a) instantaneous vs. accumulative CF for an
operation; (b) instantaneous vs. accumulative CF for all operations.

Concept Study
The goal is to demonstrate the concepts of tracking, locating, and action recognition
of the equipment. The operation includes one excavator and three dump trucks. The
D4AR model is used to provide a 3D image-based reconstruction and BIM
registration (Figure 4a). Once the entire site is reconstructed, using the vision-based
tracking, equipment is tracked and located (4b, c). Locating, tracking, and identifying
different motions of equipment at a given time for each operation, enables action
recognition for deformable equipment body (4d). The actions for excavator included
digging, hauling, dumping, swinging, and idle time; respectively, the recognized
actions of each truck included moving, filling, dumping, and idle time. The 3D
reconstructed scene and equipment locations are visualized in a Euclidean 3D
environment. Once the location and action of equipment is recognized, an operation
chart is created for one cycle (Figure 5). D4AR provides the material resources based
on the schedule activity which allows for the calculation of embodied carbon. The
operation emission can also be calculated using Eqs.1, 2, and 3. The overall
instantaneous and accumulative emission rates are plotted (Figure 3). By comparing
the operation sequence chart with the overall carbon footprint, the user can determine
exactly how much carbon is emitted for a given activity.

Figure 4. The integrated tracking and monitoring framework for an actual construction
site (reconstruction based on 160 existing 2-Mpixel images).
PERCEIVED APPLICATION
This application simply allows the user to reconstruct a construction site and
recognize the location and action of construction equipment. By recognizing the
operational sequence, an automatic productivity analysis is performed. Meanwhile,
carbon emission of the construction operations is calculated for each activity and is
plotted to visually demonstrate the emission rate side by side with the productivity
analysis. Compared to other sensing technologies (e.g., GPS, wireless trackers), this
application is practical as it does not require tagging construction entities. Considering
the $900 billion construction industry, each 0.1% increase in efficiency can lead to
$900 million in savings, resulting in a significant impact on the current construction
practice and EPA regulations on construction GHG emissions.

Figure 5. Construction Operation Sequences


CONCLUSION
With the new set of EPA regulations and the current economic crisis, being able to reduce
construction emissions, which are responsible for 6% of total U.S. industrial-related GHG
emissions, using available resources and without additional cost would be beneficial.
This research focuses on the reduction in carbon footprint gained through productivity
improvements of different construction operations.

One of the most challenging tasks is measuring operation productivity accurately. To
measure construction productivity, this paper proposes a new automated visual sensing
technique in which equipment location and action are semantically analyzed through an
integrated 3D reconstruction and recognition algorithm using the D4AR model. Productivity
of the construction operation is then learned and estimated through a new machine
learning algorithm. This joint assessment of productivity and carbon footprint for the
first time enables project managers to study their operations in real time and revise
their construction plans and operation strategies to simultaneously reduce their carbon
footprint and increase or maintain the level of productivity.
REFERENCES
Ahn C., Rekapalli P., Martinez J., Peña-Mora F. (2009). “Sustainability Analysis of Earthmoving Operations.”
Proc., 2009 Winter Simulation Conference, 2605-2611.
Artenian, A., Sadeghpour, F., and Teizer, J. (2010). "Using a GIS Framework for Reducing GHG Emissions in
Concrete Transportation," Proc., Construction Research Congress, Canada, May, 1557-1566.
EPA 2010. “Climate Change Indicators in the United States.” USEPA #EPA 430-R-10-00.
Golparvar-Fard M., Peña-Mora F. and Savarese S. (2010). “D4AR – 4 Dimensional augmented reality - tools for
automated remote progress tracking and support of decision-enabling tasks in the AEC/FM industry.” Proc.,
The 6th Int. Conf. on Innovations in AEC.
Golparvar-Fard M., Peña-Mora F., and Savarese S. (2009). “D4AR- A 4-Dimensional augmented reality model for
automating construction progress data collection, processing and communication.” Journal of Information
Technology in Construction (ITcon), 14, 129-153.
Gong J., Caldas C.H. (2010).“Computer Vision-Based Video Interpretation Model for Automated Productivity
Analysis of Construction Operations.” J. Comp. in Civ. Engrg. 24, 252-263.
Guggemos, A. and A. Horvath (2006), "Decision-Support Tool for Assessing the Environmental Effects of
Constructing Commercial Buildings," Journal of Architectural Engineering, 187-195.
IPCC (Intergovernmental Panel on Climate Change). (2007). Climate change 2007: The physical science basis.
Cambridge University Press Cambridge, United Kingdom.
Kockelman K., Bomberg M., Thompson M., Whitehead C. (2009). “GHG Emissions Control Options -
Opportunities for Conservation.” National Academy of Sciences.
Lewis P., Frey H.C., Rasdorf W. (2009a) “Development and Use of Emissions Inventories for Construction
Vehicles.” J. of the TRB, 2009, Washington D.C., 46-53.
Lewis P., Rasdorf W., Frey C., Pang S., Kim K. (2009b). “Requirements and Incentives for reducing Construction
Vehicle Emissions and Comparison of Nonroad Diesel Engine Emissions Data Sources.” ASCE J. of
Construction Eng. and Mgmt., 135 (5), 341-35.
Lewis P., Leming M., Frey C., Rasdorf W., (2011). “Assessing the Effects of Operational Efficiency on Pollutant
Emissions of Nonroad Diesel Construction Equipment.” Journal of TRB, NRC., Washington D.C.
Lindhjem C., Chan L., Pollack A. (2004), “Applying Humidity and Temperature Corrections to On and Off-Road
Mobile Source Emissions.” Proc., 13th Int. Emission Inventory Conf.
Luers, A., Mastrandrea, M.D., Hayhoe, K., Frumhoff, P.C. (2007) “How to avoid dangerous climate change: a
target for U.S. emissions reductions.” Union of Concerned Scientists Research Report.
National Highway Traffic Safety Admin. (2010). “Factors and Considerations for Establishing a Fuel Efficiency
Regulatory Program for Commercial Medium-and Heavy-Duty Vehicles.” U.S. Department of
Transportation.
National Research Council (2009). “Committee on Advancing the Competitiveness and Productivity of the U.S.
Construction Industry.”
Nunnally S.W. (2000). Managing Construction Equipment. Prentice Hall, NJ, 339-359.
Oglesby C.H., Parker H.W., Howell G.A. (1989). Productivity Improvement in Construction. McGraw-Hill, New
York, 84-130.
Peña–Mora F., Ahn C., Golparvar-Fard M., Hajibabai L., Shiftehfar S., An S., Aziz Z. and Song S.H. (2009). “A
Framework for managing emissions during construction.” Proc., Conf. on Sustainable Green Bldg. Design
and Construction, NSF.
Shiftehfar R., Golparvar Fard M., Peña-Mora F., Karahalios K.G., Aziz Z. (2010). “The Application of
Visualization for Construction Emission Monitoring.” Proc., Construction Research Congress 2010,
Canada, 1396-1405.
Su Y., Liu L., “Real-time Construction Operation Tracking from Resource Positions.” Proc., 2007 ASCE Int.
Workshop on Computing in Civil Eng., Pittsburgh, PA, 200-207.
Zou, J., and Kim, H. (2007). "Using Hue, Saturation, and Value Color Space for Hydraulic Excavator Idle Time
Analysis." J. Computing in Civil Engineering, 21, 238-246.
Building Information Modeling Implementation - Current and Desired Status

Pavan Meadati1, Javier Irizarry 2 and Amin Akhnoukh 3

1
Assistant Professor, Construction Management, Southern Polytechnic State
University, 1100 South Marietta Parkway, Marietta, GA, 30060; PH (678) 915-3715;
FAX (678) 915-4966; email: pmeadati@spsu.edu
2
Assistant Professor, Building Construction Program, Georgia Institute of
Technology, 280 Ferst Drive,1st Floor, Atlanta, GA, 30332; PH (404) 385-7609; FAX
(404) 894-1641; email: Javier.irizarrry@coa.gatech.edu
3
Assistant Professor, Construction Management, University of Arkansas at Little
Rock, 2801 South University Ave, Little Rock, AR, 72204; PH (404) 385-7609; FAX
(404) 894-1641; email: akakhnoukh@ualr.edu

ABSTRACT

The paper presents a methodology, which helps to extend Building Information


Modeling (BIM) beyond the pre-construction stage and facilitate its implementation
during the operation and maintenance (O&M) phase of a facility’s life cycle. In each
phase, a large amount of information is exchanged among various project
participants. This information can be categorized into graphical and non-graphical
data. Traditionally, these two information categories exist as independent entities and
are not linked to each other. This non-linkage decreases the project participants’
productivity due to time-consuming information retrieval methods and the regeneration of
data. BIM implementation increases the project participants' productivity by facilitating
easy information access and the reuse of data. The objective of the study is to create a
3D as-built model and facilitate access to the facility's O&M information through it.
This facilitates BIM implementation during the operation and maintenance phase of the
life cycle. The paper presents the techniques
to produce the 3D as-built model and steps to associate facility maintenance
information to it.
Keywords: BIM, lifecycle, 3D, as-built, operation and maintenance

INTRODUCTION
The different phases of the project life cycle include planning, design, construction,
maintenance and decommissioning. The construction phase can be divided into pre
and post-construction stages. The traditional medium of communication among the various
phases of the life cycle is two-dimensional (2D) drawings. The introduction of object-
oriented computer aided design (CAD) software facilitated three-dimensional (3D)
models as media of communication between the planning and design phases and
introduced the concept of Building Information Modeling (BIM). Some of the
applications of these 3D models in the preconstruction stage include resolving


constructability problems, space conflict problems, and site utilization (Koo &
Fischer, 2000; Chau et al. 2004). During construction, post-construction, and
maintenance, 2D drawings are still the most widely used. The as-built drawings developed
during the post-construction stage are in the form of 2D drawings. Currently, these 2D
as-built drawings and the related documents exist independently. Thus the 3D model
developed through BIM during the early stages of the lifecycle is not in use after the
preconstruction stage. This paper presents an overview of the current status of BIM and
discusses the desired status of BIM that facilitates its implementation beyond the
preconstruction stage. The objective of the study is to extend BIM beyond the
pre-construction stage and facilitate its implementation during the operation and
maintenance (O&M) phase of the project life cycle. The paper presents the different
approaches for developing 3D as-built models.
This paper also discusses means to integrate the information to the 3D as-built model
to facilitate BIM implementation during the operation and maintenance phase.

BUILDING INFORMATION MODELING


BIM is a process. It provides a framework to develop data rich product models and
facilitates the realization of integrated benefits. In this process the real world elements
of a facility such as walls, doors, windows and beams are represented as objects in a
three dimensional digital model. In addition to modeling, facility information from
conception to demolition is integrated into the model. Thus the model serves as a
gateway to provide any time access to insert, extract, update, or modify digital data
by all the project participants involved in the facility life cycle.
Current Status
The integration of 3D models with schedule and cost information is extended only up
to the preconstruction phase disregarding the construction and post construction
phases. The 3D models were proven useful during the preconstruction stage for
applications such as visualization, resource allocation and hazard analysis (Tanyer &
Aouad, 2005; Kim & Lee et al., 2005). However, little has been done to extend the
usage of 3D model into post construction and maintenance stages of the project life
cycle. Construction and post construction phases continue to be accomplished using
2D representation. During the process of construction, project participants exchange
construction process documents such as request for information (RFI), submittals,
change orders, shop drawings, specifications, and site photos. These are not linked to
either the 2D or 3D models. Similarly during post construction, the information such
as warranties, maintenance schedules, O&M manuals, operation guidelines, training
manuals are also not linked to the 2D or 3D models. During the operation and
maintenance phase, when the facility manager requires information about a
component, the manager currently needs to search 2D as-built drawings for
dimensional details and multiple construction documents for other information. Due to a
lack of integration, much of the manager's time is spent performing non-value-added tasks
such as searching for and validating information. Though the information exists during
the construction process, the lack of quick access to it during the operation and
maintenance phase increases response time, which results in high operation costs.

The information used in a facility's lifecycle can be categorized into graphical and
non-graphical data. The graphical data includes two-dimensional (2D) and
three-dimensional (3D) drawings; the non-graphical data includes the other project
documents. The current status of graphical and non-graphical data in the construction
phase is shown in Figure 1. It shows that the 2D as-built drawings developed in the
post-construction stage and the related documents exist independently. The 3D models
developed in the early stages of the facility's life cycle are generally not in use after
the preconstruction stage. The implementation of BIM needs a 3D product model and the
association of relevant information with each component so that the model can serve as an
information resource. Thus BIM implementation tends to stop at the preconstruction phase,
leaving out of the final model large amounts of relevant data needed by facilities
management for operations, maintenance, and possible re-commissioning or decommissioning
efforts.

Figure 1: Current status of BIM in Construction Phase

Desired Status
Two reasons for not achieving BIM during post construction of the facility are (1)
unavailability of a 3D as-built model and (2) lack of integration of operation and
maintenance information to the 3D as-built model (Goedert & Meadati, 2008). The
desired status of information flow during the construction phase to facilitate the
implementation of BIM is shown in Figure 2. The existing 2D as-built drawings have
to be replaced with 3D as-built models. Operation and Maintenance data such as
maintenance schedule, warranties, operational manuals, specifications, training


videos, and maintenance records must be attached to the 3D as-built model. This
integration of O&M documents to the 3D as-built model facilitates BIM
implementation beyond the preconstruction stage.

Figure 2: Desired status of BIM for O&M Phase

EXTENDING BIM INTO THE OPERATION & MAINTENANCE PHASE


This section discusses the research conducted to study the feasibility of achieving the
desired status of BIM. The objective of this research is to extend implementation of
BIM into the maintenance and operation phase by creating a 3D as-built model and
attaching the information to the model. The research objective was accomplished
through a case study using commercially available Autodesk BIM software
products. The techniques adopted for developing a 3D as-built model included the
Fully Automated Data Acquisition Approach (FADAA) and the Objective Driven
Data Acquisition Approach (ODDAA). Both of these techniques were used to collect
the data for developing a 3D as-built model of the Construction Management
Department at Southern Polytechnic State University. The collected data was then
used to develop a 3D as-built model as shown in Figure 3 using Autodesk’s Revit
BIM software. Once the 3D as-built model was developed, to facilitate the extension
of BIM into the O&M phase, O&M information was integrated into it.

Figure 3: 3D as-built model developed in Revit

3D As-built Data Collection Techniques


The Objective Driven Data Acquisition Approach (ODDAA) and the Fully
Automated Data Acquisition Approach (FADAA) were the two techniques used to
collect the data for the development of 3D as-built model. The ODDAA is also
referred as “sparse range point cloud” approach (Kim and Haas et al. 2005). Only
target points are scanned in ODDAA. This approach includes human involvement for
targeting the objects and makes it semi-automatic instead of fully automated data
acquisition used in the FADAA. An individual target object is selected and x, y, and
z coordinates of a minimum number of points is acquired to represent the object in a
3D model. The 3D as-built model is developed by selecting the target object from
parametrically defined graphical objects stored in the database and placing it spatially
using the scanned x, y and z coordinates. The density of the data used for modeling is
substantially less when compared to the FADAA. The x, y, and z coordinates of the
as-built components were acquired by using a total station as shown in Figure 4.
These coordinates were then used for developing the 3D as-built guide line layout in
AutoCAD. This guideline layout was further used for developing 3D as-built model
using Autodesk’s Revit software.

In the FADAA, a 3D laser scanner, as shown in Figure 5, was used to produce a dense point
cloud. The required data collection is achieved by scanning and merging the dense point
clouds collected from various locations; this data is then used for developing a 3D
model. This approach provides a very accurate and detailed 3D model (Kwon et al. 2004).
However, the 3D model obtained using the 3D laser scanner is not directly suitable for
BIM implementation, since the captured 3D model acts as a single composite object and
does not allow the elements to be picked individually. The scanned 3D model was therefore
further used to develop the 3D as-built model for BIM implementation.

Figure 4: Data Collection using Total Station

Figure 5: Data Collection using 3D Laser Scanner


O&M Information Integration
The two approaches for integrating O&M information are: (a) selecting each component of
the 3D model and linking the documents by specifying the documents' storage paths, and
(b) automating the linkage by placing the files in a preassigned path with a standardized
storage location. The automation can be achieved by adopting the NBIMS standard called
Construction Operations Building Information Exchange (COBie). COBie enables
manufacturers to submit electronic product information and then provides it for O&M
purposes (East, 2007). In
this study, O&M information is integrated by selecting each component and linking
the documents by specifying the documents storage path. O&M information
integrated into the 3D as-built model included Microsoft Word files, PDF,
photographs, audio and video files. The steps involved in the integration of
information to the 3D as-built model include creation of new parameters and
association of information to these parameters. In Revit, each element is associated
with predefined parameters and these are categorized into type parameters and
instance parameters. The type parameters control the properties of all elements of that
type, while the instance parameters control the properties of individual instances. The
type and instance parameters are further categorized into different groups. The data
stored in each parameter can be of type text, integer, number, length, area, volume,
angle, URL, material, or yes/no. In this project, since the predefined type or instance
parameters
are inadequate, new parameters are added to the elements. Revit facilitates the
addition of new parameters as a project parameter or a shared parameter. Only the
shared parameters are exported to databases. Other families and projects share these,
whereas project parameters are not exported to the databases. Some of the newly
added shared parameters include O&M manuals, Maintenance Schedule, Performance
Test Videos, Specifications, Typical Section, Construction Photos, Code
Requirements, and Installation videos. These parameters are made to appear under the
group name ‘Other’ in the type parameters list. The URL data format is used for each
parameter. This format is useful for establishing the link between the respective files
and components. The association of information to the model components is
accomplished by assigning the file paths of the information to the parameters. This
link between the documents through the path stored in the parameter allows easy
access to the required information. Figure 6 shows a screenshot of the O&M
manual, specifications, and performance test videos of a door retrieved from the
3D as-built model.
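The manual linking used in this study can, in principle, be automated once the standardized storage convention of approach (b) is adopted. The short Python sketch below illustrates the idea only; the folder layout, parameter names, and the component identifier are illustrative assumptions and not part of the implementation described above. The paths it collects would then be written into the corresponding URL-type shared parameters (for example through the Revit API) rather than entered manually.

import os

# Hypothetical standardized storage convention assumed for this sketch:
#   <root>/<component_id>/<parameter_name>/<files>
OM_PARAMETERS = ["OM_Manuals", "Maintenance_Schedule", "Specifications",
                 "Performance_Test_Videos", "Installation_Videos"]

def collect_links(root, component_id):
    """Return {parameter_name: [file paths]} for one model component."""
    links = {}
    for name in OM_PARAMETERS:
        folder = os.path.join(root, component_id, name)
        if os.path.isdir(folder):
            links[name] = [os.path.join(folder, f)
                           for f in sorted(os.listdir(folder))]
    return links

# Example: gather the O&M documents for a (hypothetical) door component.
if __name__ == "__main__":
    for param, files in collect_links("OM_Documents", "Door_D101").items():
        print(param, "->", files)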

CONCLUSION

BIM provides the means to facilitate an integrated and coherent information
management strategy. BIM eliminates fragmentation and provides a seamless flow of
facility information among the planning/design, construction, and operation and
maintenance phases. The solutions offered by BIM include integration for reduced
fragmentation, reuse of model based digital data instead of data regeneration, and
using spreadsheet-type product modeling tools instead of traditional CAD systems to
reduce errors and mistakes. Implementation of BIM offers benefits in all the phases of
the project lifecycle. It gives any time access to digital data to owners, clients,
engineers, architects, contractors, facility managers, maintenance and operations
engineers, safety and security personnel and many others involved in the building life
cycle. When BIM implementation ceases after the preconstruction stage of the
project lifecycle, project participants cannot realize the benefits offered by
BIM. The FADAA, ODDAA and O&M information integration techniques facilitate
the extension of BIM implementation into the O&M phase.

Figure 6: Screen shot showing retrieved O&M information of a door using BIM

REFERENCES

Chau, K.W., Anson, M. and Zhang, J.P. (2004). “Four dimensional visualization of
construction scheduling and site utilization.” J. of Constr. Engrg and Mgmt, 130(4),
598-606.
East, W.E. (2007). “Construction Operations Building Information Exchange
(COBIE).” < http://www.wbdg.org/pdfs/erdc_cerl_tr0730.pdf> (March 5, 2011).
Goedert, J.D., and Meadati, P. (2008). “Integration of construction process
documentation into Building Information Modeling.” J. of Constr. Engrg and Mgmt,
134(7), 509-516.
Kim, C., Haas, C.T., and Liapi, K.A. (2005). “Rapid on site spatial information
acquisition and its use for infrastructure operation and maintenance.” Autom. Constr.,
14, 666-684.
Kim, K.J., Lee, C.K., Kim, J.R., Shin, E.Y., and Cho, Y.M. (2005). “Collaborative
work model under distributed construction environments.” Can. J. Civ. Eng., 32,
299-313.
Koo, B., and Fischer, M. (2000). “Feasibility study of 4D CAD in commercial
construction.” J. of Constr. Engrg and Mgmt, 126(4), 251-260.
Kwon, S.W., Bosche, F., Kim, C., Haas, C.T., and Liapi, K.A. (2004). Fitting range
data to primitives for rapid local 3D modeling using sparse range point clouds.
Autom. Constr., 13, 67-81.
Tanyer, A.M., and Aouad, G. (2005). “Moving beyond the fourth dimension with an
IFC-based single project database.” Autom. Constr., 14, 15-32.
Simulating the Effect of Access Road Route Selection on Wind Farm
Construction
Mohamed El Masry1,Khaled Nassar2 and Hesham Osman3
1
Graduate student and research assistant with the Department of Construction and
Architectural Engineering at the American University in Cairo,
m_elmasry@aucegypt.edu
2
Associate Professor, Department of Construction Engineering, American University
in Cairo, knassar@aucegypt.edu
3
Assistant Professor, Department of Structural Engineering, Faculty of Engineering,
Cairo University, 12613, Giza, Egypt hesham.osman@gmail.com
ABSTRACT
The adverse environmental impacts of producing energy from fossil fuels are
increasing, and renewable energy is increasingly used to overcome this problem.
One of the most widely used renewable energy sources is wind energy, from which
electricity is produced by wind farms. Onshore wind farm construction can be very
complicated due to the interaction between the various disciplines involved in the
construction process. To address this complexity, the construction process of wind
farms is simulated using the STROBOSCOPE simulation tool to illustrate how
selecting a certain route for the access roads can produce different volumes of cut
and fill, which can significantly affect construction cost and time. Not only can the
selected path affect cost and time, but the equipment used can also play a vital role.
Optimization of the number of equipment and crews to reach an optimum
construction cost and time is presented, together with a case study illustrating the
approach.
INTRODUCTION
The increase in global demand for renewable energy has created a booming wind
energy market. By the end of 2009, the capacity for worldwide production of wind
power using wind turbine generators reached almost 157.9 gigawatts (GW), of which
38.3 GW were added in 2009 alone. Wind energy production capacity grew by 31%, the
highest rate since 2001, and predictions are that 54 GW will be added in 2010 (WWEA, 2009).
This ambitious growth in wind power requires a significant ramp-up in all links of the
wind turbine supply chain. Wind turbine construction is one of the most critical yet
under-investigated steps in the supply chain of wind turbines. Wind turbine
construction is a repetitive process that mainly involves constructing access roads to
connect the locations of the wind towers to be erected, and lifting large
prefabricated components to great heights in high-wind conditions. Thus,
contractors are faced with challenging work environments that impact the time, cost
and safety of construction operations. Based on the exploratory research work in wind
turbine construction presented in this paper, it is important to clearly identify the
scope of this work and identify areas where further investigation is necessary. The
scope of this paper can be delimited as follows:


1- Wind Farm Scope: The scope of construction activity in a wind farm generally
encompasses three main elements: site infrastructure (access roads, crane pads, and
tower foundations), wind turbines, and electrical substation and grid networks. This
paper will focus on the construction of the access roads.
2- Wind Farm Location: The construction processes described in this paper are for
on-shore wind turbines. Details regarding off-shore wind farms are beyond the scope
of this work.
3- Project Stage: This paper will focus on the development of tools for the planning
of wind turbine construction activities. It is expected that tools can also be developed
for monitoring the construction process and planning the maintenance and
rehabilitation of wind turbines.
When selecting the appropriate site for constructing a wind farm, scheduling
consideration should be given to site access and to site construction. An important
factor affecting project schedule and cost is the transportation and road system
within the wind farm. Roads have to be constructed such that they can adequately
bear the load of wind turbine parts and equipment. A framework is presented to
guide contractors in wind farm construction in selecting the best route for the
construction operation and in choosing the optimum number and combination of
resources to use.
SIMULATION MODULE
Simulation is a powerful tool because it imitates what happens in reality to a certain
level of accuracy and reliability without extra cost. STROBOSCOPE (Martinez,
1996) is used as the simulation tool to represent the tasks in reality.
STROBOSCOPE represents activities by rectangular shapes called “combi” and
“normal”, while resources are represented by circular shapes called “queues”.
Each activity can take an argument called a semaphore to control its start and end.
To force STROBOSCOPE to start activities at the beginning of a working day and
end them by the end of the day, a semaphore was used with the following syntax:
SEMAPHORE workingHours;
The road segments were defined in a queue of the type “characterized resource”.
The characterized resource has a property named “cf”, which takes a value of zero
or one and defines whether a section is cut or fill: for cut sections the “cf” property
was given a value of zero, and for fill sections it was given a value of one. The
quantities of cut and fill were entered in a property named “value”, which expresses
the volume to be cut or filled. Each segment was given its own cf and value,
depending on whether the section is cut or fill and on the corresponding volume.
This was done using the subtype property of the characterized resource, which
defines the different stations such as st1, st2, st3, etc. The syntax was written as
follows:
CHARTYPE Stations cf value; /ST
SUBTYPE Stations st1 1 200;
SUBTYPE Stations st2 0 400;
SUBTYPE Stations st3 1 800;
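For readers unfamiliar with characterized resources, the following Python fragment mirrors the same bookkeeping outside STROBOSCOPE: each station carries a cut/fill flag and an earthwork volume, from which the totals routed to the cutting and filling activities can be derived. The production rates shown are illustrative assumptions and are not taken from the case study.

# Each station carries the same two properties used in the STROBOSCOPE model:
# cf (1 = fill, 0 = cut) and value (earthwork volume in cubic meters).
stations = {"st1": {"cf": 1, "value": 200},
            "st2": {"cf": 0, "value": 400},
            "st3": {"cf": 1, "value": 800}}

fill_volume = sum(s["value"] for s in stations.values() if s["cf"] == 1)
cut_volume = sum(s["value"] for s in stations.values() if s["cf"] == 0)

# Assumed production rates (cubic meters per eight-hour shift), for illustration only.
CUT_RATE, FILL_RATE = 600.0, 450.0
print("fill:", fill_volume, "m3, about", round(fill_volume / FILL_RATE, 1), "shifts")
print("cut:", cut_volume, "m3, about", round(cut_volume / CUT_RATE, 1), "shifts")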

Multiple replications of the model were run for different numbers of resource
alternatives, and the resulting simulation times are shown in Figures 2 and 3.
Table 1: Description of the activities and resources used in the STROBOSCOPE model
(abbreviation used in the model - description - remarks)

StgOut - Activity: Setting Out - Survey works and setting out of the points that define the premises of the roads and the project.
Overlng - Activity: Overlaying - Overlaying of the aggregate base on the road segments after filling and cutting.
FllgSoil - Activity: Filling Soil - Filling the segments that are required to be filled, using a fill truck and a bulldozer.
Wtrng - Activity: Watering - After overlaying the aggregate base, watering is necessary for compaction to reach optimum moisture content.
Fill - Queue: Segments to be filled - Fill segments, to which stations are routed using dynaforks with the expression cf.Stations=1.
Cut - Queue: Segments to be cut - Cut segments, to which stations are routed using dynaforks with the expression cf.Stations=0.
RdSgmnts - Queue: All segments to be cut and filled - Total road segments that are required to be cut or filled.
CtgSoil - Activity: Cutting soil - Cutting the soil using a loader that loads the trucks with the cut soil.
RdCmpct - Queue: Road Compaction - Road segments ready to be compacted after grading and watering.
AgBsCmpct - Activity: Aggregate Base Compaction - Compacting the aggregate base in the roads.
FillCmpctn - Activity: Fill Compaction - Compacting the fill segments and getting them ready for the aggregate base.
TltScn - Activity: Tilting sections - Tilting the tower sections (nacelles) so that they can be lifted.
SecCrn - Queue: Secondary crane - Secondary crane used in installing the wind tower.
PBladeHub - Activity: Positioning Blade Hub - Positioning and bolting of the blade hub to the turbine at the tip of the wind tower.
LftgSec - Activity: Lifting sections - Lifting the sections (nacelles) onto the tower.
PstngSc - Activity: Positioning Section - Positioning and bolting of the sections (nacelles).
LBladeHub - Activity: Lifting Blade Hub - Lifting the blade hub to be bolted to the turbine.
Ticks - Queue: Ticks - Ticks of the clock used to obtain an eight-hour working shift per day.
EightHours - Activity: Eight Hours - A combi that constrains the working day to eight hours instead of 24 hours.
Grdng - Activity: Grading - Grading of the aggregate base on the road.
TrvlToCt - Activity: Travel to Cut - Travel of the loaded truck from the fill segments to the cut segments.
TrvlToFl - Activity: Travel to Fill - Return of the truck to the fill segments after dumping the soil in the cut segments.

OPTIMIZATION MODULE
Figure.1 shows a schematic diagram for the alternatives that could be optimized
and different approaches to be used in construction of wind farms. The first
alternative is the different paths that could be available in the construction of access
roads. The second alternative is the number of crews that would be used in road
construction, where only one or more than one road crew can be used in road
construction to do overlaying, watering and grading. The third alternative that could
be analyzed is the equipment used in hauling the dirt in earth moving, for the same
alternative several scenarios could be found. One of these scenarios could be using
scrappers, or using loaders and trucks or dozers depending on how long the hauling
distance is. The fourth and the last alternative is the cranes and how they would be
used. There are two approaches to be used; the first is the assembly of blade hub on
the ground while the other is on the tower after erection which would require cranes
of higher capacity and this was covered in another research (Atef, 2010).

Figure.1: Different alternatives available in wind farm construction


Creating multiple replications for the model with different alternatives for the critical
resources and running a simulation would help in determining the resources which
have an effect on the construction time. These resources were determined by
addressing number of highway and road contractors in the form of questionnaire.

Figure 2: Different paths and their effect on construction time using different numbers of trucks (simulation time in days versus 1 to 5 trucks, for Paths 1 to 4).
Figure 3: Different paths and their effect on construction time using different numbers of cranes (simulation time in days versus 1 or 2 cranes, for Paths 1 to 4).
It was found that the equipment used in hauling (i.e., loaders, dozers, and hauling
trucks) can decrease the duration of road construction significantly, while equipment
such as water and aggregate trucks has less effect on the simulation time. The effect
of using different numbers of cranes for lifting tower parts, for different paths, is
shown in Figure 3, and the effect of changing the number of hauling trucks for
different paths is shown in Figure 2. As shown in Figures 4 and 5, simulation time
decreases with the increase in the number of trucks for different numbers of loaders
and dozers. The above shows that many alternatives are involved when a decision is
made. The total number of alternatives can yield a huge number of combinations;
therefore, optimization was used in an attempt to reduce processing time and improve
the quality of solutions.
Figure 4: Simulation time (days) versus number of trucks (1 to 5) using 1 or 2 dozers.
Figure 5: Simulation time (days) versus number of trucks (1 to 5) using 1 or 2 loaders.
Evolutionary algorithms have been introduced over the past decade. To optimize the
choice among the different alternatives, the particle swarm optimization (PSO)
algorithm is used. Particle swarm optimization has been found to perform better than
other evolutionary algorithms in terms of success rate and solution quality (Elbeltagi
et al., 2005).

The product of simulation time and total cost was set as the objective function to be
optimized. Different particles were initialized, with each particle representing a
different combination of the resources that can affect the simulation time (cranes,
loaders, compactors, graders, trucks, road crews, and the volumes of cut and fill
representing different construction sequences). Given these resources, cost and time
were obtained and PSO was performed. Convergence was achieved and the Pareto
optimal front was drawn as shown in Figure 6.
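A compact sketch of this optimization step is given below. It is a generic discrete particle swarm written in Python, not the authors' implementation: the resource bounds, the cost coefficients, and the surrogate used in place of a STROBOSCOPE run are assumptions made only to show how an objective of the form (simulation time x total cost) can be minimized over integer resource counts.

import random

# Decision variables: (path, cranes, road crews, dozers, loaders, trucks).
LOWER = [1, 1, 1, 1, 1, 1]
UPPER = [4, 2, 3, 2, 2, 5]

def evaluate(x):
    """Surrogate for one simulation run, returning (time, cost).
    Purely illustrative; a real study would call the STROBOSCOPE model here."""
    path, cranes, crews, dozers, loaders, trucks = x
    time = 150.0 + 900.0 / (cranes + crews + dozers + loaders + trucks)
    cost = 30e6 + 6e5 * (cranes + crews + dozers + loaders + trucks) + 2e4 * time
    return time, cost

def objective(x):
    time, cost = evaluate(x)
    return time * cost            # product of duration and cost, as in the paper

def clamp(x):
    return [min(max(int(round(v)), lo), hi) for v, lo, hi in zip(x, LOWER, UPPER)]

def pso(n_particles=20, iterations=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.randint(lo, hi) for lo, hi in zip(LOWER, UPPER)]
           for _ in range(n_particles)]
    vel = [[0.0] * len(LOWER) for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(len(LOWER)):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i] = clamp([p + v for p, v in zip(pos[i], vel[i])])
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest, evaluate(gbest)

if __name__ == "__main__":
    best, (t, c) = pso()
    print("best alternative:", best, "time:", round(t, 1), "days, cost:", round(c))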
Figure 6 shows four different alternatives with different resources and equipment.
Points with the same marker represent a certain path taken in the construction of the
wind farm, while different points of the same marker represent different combinations
of equipment alternatives. For the given alternatives it was found that the best solution
was the third path (represented by the triangle), using two cranes, three road crews,
two dozers, and two loaders with five hauling trucks, while keeping the other
resources the same.

Figure 6: Total cost (L.E.) versus simulation time (days), showing the Pareto optimal front and the selected solutions S1 and S2.
Table 2 summarizes the alternatives that were optimized and the optimum numbers to
use based on the resulting Pareto front.
Table 2: Optimum number of each alternative to use
Alternative                   S1   S2
Path number                   3    3
Number of loaders             2    2
Number of trucks              5    5
Number of dozers              2    2
Number of graders             1    1
Number of road crews          3    3
Number of main cranes         2    1
Number of compactors          1    1
Number of secondary cranes    2    1

CONCLUSION
There is a global boom in the use of alternative energy resources, leading to an
increase in wind farm construction. The many trades involved in wind farm
construction must interact efficiently to reduce the cost and time of construction. A
framework was introduced to help contractors perform time-cost trade-off analysis
and optimize resource utilization in wind farm construction. A Pareto optimal
frontier is introduced to help contractors decide how many resources to use and to
see the effect of this decision on cost and time. The multi-objective optimization that
produces this Pareto front was performed using particle swarm optimization (PSO)
with an objective function equal to the product of the cost and the simulation time of
construction. The PSO algorithm was coupled with the STROBOSCOPE simulation
model that imitates the processes in reality. By running the original model several
times with different alternatives, it was found that the optimized solution set would
decrease the duration by about 40% but would increase the total cost by about 25%.

Figure 6a: Earth moving part of the STROBOSCOPE model
Figure 6b: Road construction part of the STROBOSCOPE model
Figure 6c: Foundation and concrete works part of the STROBOSCOPE model
Figure 6d: Wind tower erection part of the STROBOSCOPE model


References:
Atef, D. (2010) A Simulation-based planning system for wind turbine construction.
M.Sc. thesis. Faculty of Engineering, Cairo University, Egypt.
Elbeltagi, E. (2007) “Evolutionary Algorithms for Large Scale Optimization In
Construction Management” The Future Trends in the Project Management,
Riyadh, KSA.
Martínez, J. C. (1996) STROBOSCOPE: State and Resource Based Simulation of
Construction Processes, Doctoral Dissertation, University of Michigan.
WWEA (2009). World Wind Energy Report. World Wind Energy Association.
Towards the Exchange of Parametric Bridge Models
using a Neutral Data Format

Yang Ji1, André Borrmann2 and Mathias Obergrießer3


1
Research Assistant, Computational Modeling and Simulation Group, Technische Universität
München, Germany. Tel. +49 89 289 25062, Email: ji@tum.de
2
Professor, Computational Modeling and Simulation Group, Technische Universität München,
Germany. Tel. +49 89 289 25117, Email: andre.borrmann@tum.de
3
Research Assistant, Construction Informatics Group, Regensburg University of Applied Sciences,
Germany. Tel. +49 941 943 1222, Email: mathias.obergriesser@hs-regensburg.de

ABSTRACT
While there are mature data models for exchanging semantically rich building
models, no means for exchanging bridge models using a neutral data format exist so
far. A major challenge lies in the fact that a bridge’s geometry is often described in
parametric terms, using geometric constraints and mathematical expressions to
describe dependencies between different dimensions. Since the current draft of IFC-
Bridge does not provide a parametric geometric description, this paper presents a
possible extension and describes in detail the object-oriented data model proposed to
capture parametric design including geometric and dimensional constraints. The
feasibility of the concept has been verified by actually implementing the exchange of
parametric models between two different computer-aided design (CAD) applications.

INTRODUCTION
Planning and realizing roadways and bridges are important aspects of
infrastructure construction projects. Nowadays, road and bridge models are usually
generated using completely different modeling systems. However, since bridges form
part of the roadway, a bridge’s geometry depends significantly on the course of the
carriageway, i.e. its main axis. Small modifications in the road design occur
frequently during the planning process. When a conventional computer-aided design
(CAD) system is used to create the bridge model, these modifications involve a
tedious, time-consuming manual adaptation of the bridge’s geometry. Researchers
belonging to the research cluster ForBAU - “The Virtual Construction Site”
(Borrmann et al., 2009), have accordingly been investigating the application of
parametric CAD technology, which makes it possible to model dependencies
between geometric objects explicitly (Hoffmann and Peters, 1994; Sacks et al., 2004).
With the help of this technology the bridge model can be coupled with the axis of the
carriageway, enabling a fast and automatic update whenever the roadway design is
modified. At the same time, a parametric description allows for an advanced
modeling of the bridge itself, especially with respect to varying cross-sections along
the axis (Figure 1).


Figure 1. A parametric description helps in defining varying cross-sections along
the superstructure's axis.
To support the design of bridges even more, it is desirable to provide ways
and means to exchange parametric bridge models between different applications.
Doing so will provide a way of transferring the concept of the design and
consequently speed up variation studies. This applies in particular to the integration
of structural analysis applications in the bridge design process. In general, data
transfer can be realized on the basis of bilateral data interfaces that are specifically
implemented for a source and a target system. A more promising solution is to
employ a neutral data format enabling the exchange of data between arbitrary
applications and ensuring long-term readability (Eastman et al. 2008). The latter
aspect is particularly important for the bodies or corporations that own the
infrastructure (usually public authorities), which typically has to be maintained over
long periods of time. It is to this end that the IFC-Bridge data model is being
developed (Yabuki et al., 2006; Arthaud and Lebegue, 2007). It is based on the
Industry Foundation Classes (IFC), the standardized data exchange format for
construction engineering (ISO, 2005a), re-using a large extent of its entities. The
IFC-Bridge development currently focuses on standardizing definitions of bridge
components and their hierarchical relationships. With regard to the geometric
description, 3D bridge models can be represented by extruding 2D cross-sections
along a 3D path. However, a parametric geometric description using design
parameters, geometric and dimensional constraints, as well as mathematical
interdependencies between the parameters is not available (Ji et al., 2010).
To fill this gap, this paper presents in detail a neutral data structure for
exchanging parametric geometry models, which is proposed as an extension of the
emerging IFC-Bridge schema.

PARAMETRIC BRIDGE MODELING


Parametric design refers to the use of geometric parameters and the
mathematical formulation of interdependencies between them. It also includes the
option of defining geometrical and topological constraints (Shah and Mäntylä, 1995).
Using parametric design features, bridges can be coupled with the axis of the
roadway, which enables an automatic update of the bridge’s geometry and saves a
laborious manual adaptation whenever modifications of the road axis become
necessary. In any case, bridges are structures with a complicated geometry (Katz,
2008). They are frequently located in a bend in the road which, more often than not,
simultaneously features a longitudinal camber. This creates a superstructure with a
highly complicated, three-dimensionally curved surface (Figure 2). To construct such
a superstructure in a CAD system, the cross-section (2D sketch) is positioned on the
road axis, which acts as the 3D extrusion path. The geometric form of the
superstructure accordingly depends on both the sketch and the associated extrusion
path. Any modification to the road axis results in an update of the bridge’s
superstructure. To realize this functionality, an advanced CAD system, which not
only provides parametric features but also freeform and volume modeling, is
required. The parametric design approach is good for producing fast design
variations and thus enables the extensive re-use of existing models.

Figure 2. The bridge’s superstructure is coupled with the road’s main axis.
A more detailed example of a parameterized design is illustrated in Figure 3.
The sketch describes the superstructure of a beam bridge consisting of geometric
objects, i.e. lines and points, the geometric constraints Parallel and Perpendicular,
and design parameters h1 to h8 and b1 to b6. A complete list of the geometric and
dimensional constraints commonly used for bridge modeling is depicted in Figures 4
and 5.

Figure 3. Parameterized cross-section of a beam bridge superstructure

NEUTRAL FORMATS FOR EXCHANGING PARAMETRIC MODELS


In order to achieve interoperability between different software applications used in
the design and construction process, it is necessary to establish a standardized data
model. The most mature data model standards in the AEC domain are the IFC
standards. For historical reasons, the IFC data model has been developed on the basis
of the ISO STEP standard for the exchange of product model data (ISO, 1995).
While the STEP data models have not achieved acceptance in the AEC
industry, they have become comparatively well established in the mechanical
engineering domain, and new features have subsequently been added. In 2005, the
ISO Technical Committee 184 published STEP Part 108 for transferring design
parameters and geometric constraints of 2D sketch elements (ISO, 2005b). This part
contains more than 40 cases of geometric constraints (ProSTEP, 2006). The
ProSTEP Association launched an implementation project of Part 108 for the
mechanical modeling systems CATIA, Pro/E and NX. The major problem, however,
was the high complexity of mapping the geometric constraints from individual CAD
systems to the neutral standard (Pratt et al., 2005). Up to now, STEP Part 108
import/export functionality has not been available in these CAD systems.
At the same time, IFC-Bridge, a data schema for exchanging bridge models
based on IFC, has been developed. As mentioned above, the current draft of the IFC-
Bridge schema is not capable of capturing design parameters and sketch constraints,
which is an essential aspect of transferring design concepts during the project phases.
Since STEP Part 108 is not a common exchange standard in the AEC industry, it is
more appropriate to extend the IFC-Bridge data schema in such a way that it satisfies
the requirement of transferring design concepts contained in parametric bridge
models. To this end, a small set of geometric constraints (Figure 4) and dimensional
constraints (Figure 5) have been identified. They are
commonly used for bridge modeling and widely supported by commercial parametric
CAD systems on the current market. Accordingly the implementation effort and
mapping process required is within reason.

Figure 4. Supported types of geometric constraints


The developed data model has been formulated using UML (Unified
Modeling Language). The resulting class diagram is depicted in Figure 6. The model
focuses on the representation of parametric 2D
sketches. When extruded, they enable parametric 3D volumetric design. This
particularly applies to the design of bridge superstructures.
A sketch (class Sketch) consists of three components, geometric objects
(SketchGeometry) which may be lines (SketchLine), points (SketchPoint) and arcs
(SketchArc), geometric dependencies (GeometricConstraint) between these objects
such as the parallelism (ParallelGeometricConstraint) of two lines, and dimensional
constraints (DimensionalConstraint) referring to the size of a dimension (Parameter).
A user-defined or system-inferred value can be assigned to each dimension. In the
first draft of the data model, mathematical expressions describing relationships
between parameters are represented as strings. The subclasses of
DimensionalConstraint define which kind of dimension is referred to (distance, angle
or radius) and how the distance is measured (horizontally, vertically or parallel to the
line).
The relations between design constraints and their associated sketch geometry
objects are explicitly defined. Explicit specifications enhance the clarity of the data
structure and reduce the possibility of misinterpretation in sending and receiving
systems. A concrete example will be presented in the following section.
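To make the structure of the class diagram more tangible, the Python dataclasses below sketch one possible in-memory representation. The class names follow the text above; the attribute types, the flattening of the constraint subclasses into a kind attribute, and all defaults are assumptions of this sketch, since the authoritative definition remains the UML diagram and the planned schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SketchPoint:
    name: str
    x: float
    y: float

@dataclass
class SketchLine:
    name: str
    start: SketchPoint
    end: SketchPoint

@dataclass
class Parameter:
    name: str          # a design dimension such as h1 ... h8 or b1 ... b6
    value: str         # expressions are kept as strings in the first draft

@dataclass
class GeometricConstraint:
    kind: str                      # e.g. "Parallel", "Perpendicular"
    elements: List[SketchLine]

@dataclass
class DimensionalConstraint:
    kind: str                      # e.g. "VerticalDistance", "Angle", "Radius"
    elements: List[SketchLine]
    parameter: Parameter

@dataclass
class Sketch:
    geometry: List[object] = field(default_factory=list)
    geometric_constraints: List[GeometricConstraint] = field(default_factory=list)
    dimensional_constraints: List[DimensionalConstraint] = field(default_factory=list)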

Figure 5. Supported types of dimensional constraints



Figure 6. UML diagram of the proposed data structure for representing parametric
geometry

IMPLEMENTATION AND CASE STUDY


The modeling language used for defining STEP and IFC data models is
EXPRESS, which provides a wide range of object-oriented modeling features,
including the definition of entity types (the equivalent of a class) and attributes
representing the common properties of the objects belonging to the same entity type.
While support for STEP data is rather limited, reading and writing XML documents
is supported by a large variety of libraries available for almost every programming
language. For an initial evaluation, the proposed data structure was therefore
implemented as an XML schema.
To illustrate the proposed data structure, Figure 7 depicts a specimen sketch
and the corresponding XML instance file. The points of the sketch P_1 to P_5 are
defined by means of explicit coordinates. The lines Line_1 to Line_5 are defined
using the respective start and end points. The geometric constraint parallel
(ParallelGeometricConstraint) is associated with Line_2 and Line_4. Similarly, the
perpendicular constraint (PerpendicularGeometricConstraint) is associated with
Line_2 and Line_3. The vertical dimensional constraint
(VerticalDimensionalConstraint) refers to the design dimension p4 to which the
string value “8.7” has been assigned.

Figure 7. XML instance of a parametric sketch.
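Since the instance file is only reproduced as a figure here, the fragment below sketches in Python how such a document could be assembled with the standard xml.etree library. The element and attribute names follow the class names used in the text; the coordinates and the exact vocabulary of the real XML schema are assumptions of this sketch.

import xml.etree.ElementTree as ET

sketch = ET.Element("Sketch", name="SuperstructureCrossSection")

# Points are defined by explicit coordinates (placeholder values).
points = {"P_1": (0.0, 0.0), "P_2": (12.5, 0.0), "P_3": (12.5, 8.7), "P_4": (0.0, 8.7)}
for name, (x, y) in points.items():
    ET.SubElement(sketch, "SketchPoint", name=name, x=str(x), y=str(y))

# Lines are defined by their start and end points.
ET.SubElement(sketch, "SketchLine", name="Line_2", start="P_1", end="P_2")
ET.SubElement(sketch, "SketchLine", name="Line_3", start="P_2", end="P_3")
ET.SubElement(sketch, "SketchLine", name="Line_4", start="P_3", end="P_4")

# Geometric constraints reference the lines they act on ...
ET.SubElement(sketch, "ParallelGeometricConstraint", elements="Line_2 Line_4")
ET.SubElement(sketch, "PerpendicularGeometricConstraint", elements="Line_2 Line_3")

# ... and a dimensional constraint refers to a named design parameter.
ET.SubElement(sketch, "Parameter", name="p4", value="8.7")
ET.SubElement(sketch, "VerticalDimensionalConstraint", element="Line_3", parameter="p4")

print(ET.tostring(sketch, encoding="unicode"))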

Prototypical sketch translators have been implemented and tested as add-on
programs for two commercial parametric modeling systems, Autodesk AutoCAD
and Siemens NX. They are able to read and write instance files for the proposed
XML schema, to interpret the parametric design described and to create
parameterized sketches in the receiving system. The corresponding solid model is
then generated accordingly by extruding the 2D sketches along an extrusion path.

CONCLUSION AND OUTLOOK


The paper has presented a data model which is able to capture parametric
geometry descriptions as a first step towards realizing the exchange of parametric
bridge models between different software applications. The proposed neutral data
format has been implemented on the basis of XML Schema. In a follow-up step, this
data model will be transformed into an EXPRESS schema and integrated with the
current IFC-Bridge draft. Future research will also include developing methods of
describing mathematical relationships between dimensional constraints in an object-
specific way. This will make the neutral data format even clearer and prevent
misinterpretations.

REFERENCES
Arthaud, G. and Lebegue, E. (2007). IFC-Bridge V2 Data Model, edition R7.
Borrmann, A., Ji, Y., Wu, I-C., Obergrießer, M., Rank, E., Klaubert, C., Günthner W.
(2009). “ForBAU - The Virtual Construction Site Project”. In Proc. of the 26th CIB-W78
Conference on Managing IT in Construction.
Eastman, C., Teicholz, P., Sacks, R., Liston, K. (2008). BIM Handbook: A Guide to Building
Information Modeling for Owners, Managers, Designers, Engineers and Contractors.
Wiley Press Inc.
Hoffmann, C. M., Peters, J. (1994). “Geometric Constraints for CAGD”. In Proc. of
Mathematical Methods for Curves and Surfaces, Vanderbilt University Press.
International Organization for Standardization (1995). ISO 10303 - Standard for the
exchange of product model data.
International Organization for Standardization. (2005a). ISO/PAS 16739:2005- Industry
Foundation Classes, Release 2x, Platform Specification.
International Organization for Standardization. (2005b). ISO 10303-108:2005 – Part 108:
Parameterization and constraints for explicit geometric product models
Ji, Y., Obergrießer, M., Borrmann A. (2010). “Development of IFC-Bridge-based
Applications in Commercial CAD Systems (in German)”. In Proc. of the 22th Forum
Bauinformatik, Technical University Berlin, Germany.
Katz, C. (2008). “Parametric Description of Bridge Structures”. In Proc. of the IABSE
Conference on Information and Communication Technology for Bridges, Buildings and
Construction Practice, Helsinki.
Pratt, M. J., Anderson, B. D., Ranger, T. (2005). “Towards the Standardized Exchange of
Parameterized Feature-based CAD Models”. Computer-aided Design, vol. 37.
ProSTEP. (2006). Final Project Report – Parametrical 3D data Exchange via STEP,
ProSTEP iViP Association.
Sacks, R., Eastman, C. M., Lee, G. (2004). “Parametric 3D Modeling in Building
Construction with Examples from Precast Concrete”. Automation in Construction, 13(3).
Shah, J. J. and Mäntylä, M. (1995). Parametric and Feature-based CAD/CAM - Concepts,
Techniques, Applications, Wiley Press Inc.
Yabuki, N., Lebegue, E., Gual, J., Shitani, T., Li, Z-T. (2006). “International Collaboration
for Developing the Bridge Product Model IFC-Bridge”. In Proc. of the International
Conference on Computing and Decision Making in Civil and Building Engineering.
An Agent-Based Approach to Model the Effect of Occupants’ Energy Use
Characteristics in Commercial Buildings

Elie Azar1 and Carol Menassa2


1
Graduate Student, Department of Civil and Environmental Engineering, University
of Wisconsin-Madison, Madison, 2231 Engineering Hall 1415 Engineering Drive; PH
(608) 262- 3542; FAX (262) 911-5199; email: eazar@wisc.edu
2
Ph.D., M.A. Mortenson Company Assistant Professor, Department of Civil and
Environmental Engineering, University of Wisconsin-Madison, Madison, 2318
Engineering Hall 1415 Engineering Drive; PH (608) 890- 3276; FAX (262) 911-
5199; email: menassa@wisc.edu

ABSTRACT

Energy consumption estimates obtained during the design phase of commercial
buildings typically differ from actual consumption levels measured during the
operation phase. One important reason for this variation is that energy estimation
software (e.g. eQuest and Energy Plus) consider occupants as static elements with
fixed schedules and energy use characteristics, misrepresenting the dynamic aspect of
their interactions in the building environment and the resulting changes in their
energy consumption patterns. This paper proposes a new approach for energy
estimation in buildings using agent-based modeling, a technique capable of
simulating occupancy in a dynamic manner. First, occupants are divided into ‘Low’,
‘Medium’, and ‘High’ energy consumers. Then, an agent-based model simulates
these occupants’ interactions with each other, with the room environment, and with
the exterior. Preliminary results show a difference of more than 20 percent in the
design energy consumption estimates for a university building office when using the
proposed method.

INTRODUCTION

Buildings are responsible for 30 to 40 percent of global energy use (UNEP-SBCI
2007) and a similar share of greenhouse gas emissions (Yudelson 2010). In 2009,
commercial buildings in particular consumed 28 percent of the total energy used by
the built environment, with associated carbon dioxide emissions totaling 1.0 billion
metric tons (EIA 2010).
According to the United Nations Environmental Program Sustainable
Construction and Building Initiative (UNEP-SBCI 2007), 80 percent of the energy
consumed by a building during its life-cycle occurs when the building is in actual
occupancy and use. In fact, activities in buildings consume up to 70 percent of
electricity produced in the US, use 14 percent of non-industrial water, and generate
40 percent of non-industrial waste (Hawthorne 2003).
A number of studies emphasize the role that building occupants play in
affecting the energy consumption in buildings, and the anticipated savings in energy
usage if occupant behavior was modified (Emery and Kippenhan 2006; Meier 2006;
Staats et al. 2000). These studies looked at how changes in occupants’ behavior can
result in energy saving in excess of 40 percent in the building under consideration
when compared to buildings of similar type.
Energy modeling techniques exist and are widely used in the building sector to
predict energy consumption during the operational phase of buildings. However, the
estimates obtained from these tools typically deviate by more than 30 percent from
actual energy consumption levels (Yudelson 2010; Dell’Isola and Kirk 2003;
Soebarto and Williamson 2001). This deviation can be attributed mainly to the
approach used by these modeling tools, which account for building occupants as
static elements with constant energy use characteristics. The term ‘occupant’s energy
use characteristics’ is defined as the presence of people in the premises and the
actions they perform (or do not perform) that influence the level of energy consumption
(Hoes et al. 2009). These tools assume that all occupants consume energy at the
same rate, and that these rates are constant over time (Hoes et al. 2009 and Jackson
2005). Therefore, by accounting for building occupants as dynamic entities with
different and changing energy consumption characteristics over time, better energy
consumption estimates can be obtained (Hoes et al. 2009). This can be achieved by
using agent-based modeling, a technique capable of simulating almost all behavioral
aspect of agents (XJ Technologies 2009). In this research, agents represent the
building occupants. Consequently, the qualitative behavioral aspects of occupants can
then be represented in a quantitative way.

BACKGROUND

Energy simulation software including eQuest, Energy-10, TRNSys, and Energy Plus,
which are commonly used in the industry, are very sensitive to occupancy related
inputs such as energy consumption rates and building schedules (Turner and Frankel
2008). The Clevenger and Haymaker (2006) study on the impact of building
occupancy on energy simulation models showed that estimated energy consumption
can change by more than 150 percent when occupants with different energy
consumption rates were considered.
Not only is it important to model occupants with different energy consumption
patterns, it is also essential to model and predict their change in behavior over time
(Jackson 2005). For example, an occupant might change his/her energy usage
characteristics by adopting more energy efficient practices or on the contrary, adopt
bad consumption habits known as the ‘rebound effect’ (Sorrell et al. 2009). Many
factors could lead to such changes in energy consumption behavior such as ‘green’
social marketing campaigns or financial incentives that encourage energy efficiency
(Jackson 2005). Another important factor is the ‘word of mouth’ effect, which is
considered to be a very influential channel of communication (Allsop et al. 2007).
The ‘word of mouth’ effect is a marketing concept defined as a type of informal,
person-to-person communication between a perceived non-commercial communicator
and a receiver regarding a brand, a product, an organization or a service (Harrison-
Walker, 2001). This study mainly focuses on this factor, representing the influence
that each occupant exerts on the other occupants in the same room to change their
energy consumption habits.
Agent-based modeling has already been used to assist energy simulation software
for buildings. More specifically, Erickson et al. (2009) used agent-based modeling to
model room occupancy in buildings in order to optimize the HVAC loading and
hence avoid typical oversizing problems. This research showed that by simulating
occupancy usage patterns, HVAC energy usage can be reduced by around 14 percent.
Another example where agent-based modeling was used to assist HVAC design was
presented by Li et al. (2009). In this study, the occupancy of an emergency
department of a health care facility was first modeled. The obtained numbers were
then used to optimize the sizing of the HVAC system, avoiding unnecessary or
excessive air conditioning loads. This organizational simulation model showed that
the required capacity of the ventilation system might change by as much as 43 percent
when a building’s occupancy is properly modeled.
Literature specific to assisting energy simulation models with agent-based
modeling tends to mainly focus on HVAC calculation. While HVAC accounts for 31
percent of the total energy consumption for an average U.S. commercial building,
other energy consumption sources such as lighting, computers, and hot water supply
account for more than 33 percent (InterAcademy Council 2007). As a consequence,
there is a need to broaden the scope of study to include energy consumption sources
other than HVAC, while accounting for the occupancy effect on the levels of energy
consumption.
Therefore, the main objective of this paper is to present a new approach to energy
estimation in buildings by using agent-based modeling to account for the different
occupant energy use characteristics, their change over time, and finally calculate
energy consumption levels that reflect this dynamic aspect of occupancy.

METHODOLOGY

The methodology that was used to achieve the study’s objectives consists of three
main steps: (1) Define different occupants’ energy use characteristics and obtain
corresponding energy consumption rates, (2) simulate occupants’ interaction and the
change in their behavior over time, (3) combine the results and estimate total energy
consumption.
For the first step, three categories of occupants were defined. First, the ‘High
Energy Consumers’ category represents occupants that over-consume energy.
Second, occupants that make minimal efforts towards energy savings form the
‘Medium Energy Consumers’ category. Finally, ‘Low Energy Consumers’ represent
occupants that use energy efficiently. These assumptions were made based on a study
by Accenture (2010) that classified energy consumers into different categories based
on their attitude toward energy management programs. For each of the three defined
categories, an energy consumption rate was then obtained through literature review
and through simulations using traditional energy software (e.g., EnergyPlus, eQuest,
etc.). As a result, the change in behavior was translated into a change in energy
consumption levels.
The second step consists of an agent-based model that simulates the interactions
of the building occupants and the resulting change in their energy use characteristics.
This change in behavior is shown by continuously calculating the number of
occupants in each category: high, medium, and low energy consumers.
The last step is to combine the previous results by applying the energy
consumption rates obtained from Step 1 to the changing behavior simulated in Step 2
and finally calculate dynamic energy estimates that account for the differences and
changes in occupants’ behavior.
Figure 1 shows the flow chart of the agent-based model summarizing the three
stated phases. In step 1, energy consumption rates were obtained from traditional
energy simulation software. Step 2 represents the interaction of agents and the
potential change in behavior, which is translated into the move of an agent from one
category to another (e.g., from ‘High Energy Consumers’ to ‘Medium’ or ‘Low Energy
Consumers’ and vice versa). Finally, step 3 combines the obtained results and
generates the total energy consumption estimates.
For each time step, the occupants start by interacting. Then in the case of a
successful influence, certain occupants change behavior and the model updates the
number of occupants in the three categories: high, medium, and low energy
consumers. These numbers are then combined to the energy consumption rates from
Step 1, and total energy consumption levels are calculated for this time step (Step 3).
Once this iteration is completed, the model moves to the next time step and keeps
repeating the cycle until the total simulation time is reached.

Figure 1. Agent-Based Model Flowchart
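A minimal version of this loop can be written in a few lines of Python. The sketch below is illustrative only: the monthly influence probability and the per-occupant consumption rates are invented placeholders, not the eQuest-derived rates or the calibrated behavioral parameters used in the study.

import random

# Assumed monthly electric consumption per occupant (kWh) by category (Step 1).
RATE = {"high": 300.0, "medium": 220.0, "low": 150.0}
# Assumed per-month probability that one low-energy consumer converts a
# neighbour one step toward the low category (word-of-mouth effect).
INFLUENCE = 0.05

def step(occupants):
    """One monthly time step: low consumers try to influence the others (Step 2)."""
    n_low = occupants.count("low")
    updated = []
    for cat in occupants:
        if cat != "low" and random.random() < 1 - (1 - INFLUENCE) ** n_low:
            cat = {"high": "medium", "medium": "low"}[cat]
        updated.append(cat)
    return updated

def simulate(months=60):
    # Initial mix from the numerical example: 3 high, 4 medium, 3 low consumers.
    occupants = ["high"] * 3 + ["medium"] * 4 + ["low"] * 3
    monthly_kwh = []
    for _ in range(months):
        occupants = step(occupants)                          # occupant interaction
        monthly_kwh.append(sum(RATE[c] for c in occupants))  # apply rates (Step 3)
    return occupants, monthly_kwh

if __name__ == "__main__":
    final, kwh = simulate()
    print("final mix:", {c: final.count(c) for c in RATE})
    print("first / last month (kWh):", kwh[0], kwh[-1])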


NUMERICAL EXAMPLE

An experimental energy simulation model was built for the purpose of this study
consisting of a 1000 sq ft graduate student office of a university building,
accommodating 10 full-time graduate students over a period of 60 months (5 years).


This room was located on the ground floor of a multistory university building in
Madison, Wisconsin.
As shown in the left side of Figure 2, the room has four windows facing the
East and one door facing the North. These are the only two faces that are exposed to
the exterior. The Southern/ Western faces and the roof have no openings. They are
considered to be in contact with air conditioned areas, representing other sections of
the building.
To determine the energy use during the operational phase of the building, the
following energy categories need to be studied: heating, cooling, ventilation, lighting,
domestic hot water, and other miscellaneous equipment loads (Dell’Isola and Kirk
2003). For this purpose, six energy consumption sources were studied in the proposed
model: (1) HVAC heating, (2) HVAC cooling, (3) area lighting, (4) task lighting, (5)
equipment (computers), and (6) hot water supply. To simplify this example, it was
assumed that the occupants did not have direct control over HVAC heating and
cooling sources, where a central air conditioning system maintained the room
temperature constant. The occupants however indirectly affected the room
temperature by controlling the lighting and equipment use. The right side of Figure 2
summarizes these sources, where eQuest was used to break down the electric and gas
consumption. The main sources directly controlled by the occupants were
“Area/Task Lighting” and “Miscellaneous Equipment” (computers) for electric
consumption, and “Water Heating” (hot water) for gas consumption.

Figure 2. Graduate Student Lounge Energy Breakdown

Then, for each occupant category (high, medium, and low energy consumers),
energy consumption levels were obtained, also using eQuest, by running different
experiments tailored to each type of behavior. The results from these tests
are shown in Figure 3, where the red, orange, and green curves represent respectively
the energy consumption rates of high, medium, and low energy consumers. Similar
graphs were obtained for the gas consumption.

Figure 3. Electric Consumption Rates



The next step is to model the change in energy consumption characteristics over
time (Figure 4), where the 10 students in the room interact and possibly influence
each others’ behaviors. At the start of the simulation, 3 of the students were assumed
to be ‘High Energy consumers’, 4 ‘Medium Energy Consumers’, and 3 ‘Low Energy
Consumers’. In this example, low energy consumers were assumed to have a higher
level of influence than the other categories. This means that the low energy
consumers were the most efficient in influencing other occupants to change their
behavior and adopt low energy use behavior.
As shown in Figure 4, while the simulation time was advancing, low energy
consumers represented by the green line were successfully converting all of the
medium and high energy consumers to the low consumer category. More specifically,
all of the 10 occupants of the room became low energy consumers after the 48th
month. Consequently, low energy consumers were attracting other occupants at a
faster rate than they were being attracted. Therefore, their number kept increasing
until all of the occupants were converted to their category.

Figure 4. Occupants' Energy Use Characteristics Change

After calculating the energy consumption rates and the changes in occupancy
behavior over time, electric and gas consumptions were calculated by applying the
rates of Figure 3 to the number of occupants in each category over time from Figure
4. Figure 5 summarizes the total electric and gas consumption levels.

Figure 5. Total Electricity and Gas Consumption

As shown, there was a significant drop of 20 percent for the total electric
consumption over the simulation time of 60 months. This was expected since the
number of low energy consumers was increasing over time as high and medium
energy consumers were being converted. So, with the room occupants becoming low energy
consumers, their energy consumptions decreased over time (Figure 5).
On the other hand, the drop of only one percent in the gas consumption was less
significant since the occupants did not have direct control over the major portion of
the gas consumption sources. In fact, as was previously shown in Figure 2, the
occupants did not control the HVAC heating system, which accounted for 88 percent
of the gas consumption. So even though behavior changed and people were
consuming less energy, this was not reflected in the gas consumption as it was in the
electric consumption, where occupants directly controlled 77 percent of the total
electric consumption (Figure 2).

CONCLUSION

Energy simulation tools are failing to reliably predict the energy performance of
buildings (Clevenger and Haymaker 2006). This is mainly due to the
misunderstanding and underestimation of the important role that occupants play in
determining energy consumption levels (Hoes et al. 2009). This paper presented a
new agent-based modeling approach to energy estimation by modeling occupancy in
a dynamic way, accounting for both the differences between occupants’ energy use
characteristics and the changing aspect of these characteristics over time.
A numerical example that was developed to test the proposed approach showed
that with occupancy being properly modeled, the more the occupants control the
energy consumption sources of their environment, the more a change in their
behavior will affect the total energy use.
To sum up, the proposed energy estimation method provided the first step
toward more realistic building energy estimates. The next step is to expand the model
to include more complex environments with diverse occupants’ characteristics and
different forms of occupants’ interaction. Data from an actual building in operation
then needs to be collected to first verify the made assumptions about occupants’
energy use characteristics, and finally validate the energy estimates generated by the
model. Our research team is currently preparing the data collection process after
getting access to closely monitor a recently constructed high-tech building at the
University of Wisconsin-Madison.

REFERENCES

Accenture (2010). “Understanding Consumer Preferences in Energy Efficiency”.


<http://www.accenture.com/NR/rdonlyres/AA01F184-9FFC-4B63-89BD-
7BFA45397F13/0/Accenture_Utilities_Study_What_About_Consumers_Final.pdf>
(Dec. 17, 2010)
Allsop, D.T., Bassett, B.R., Hoskins, J.A. (2007). “Word of mouth research: principles
and applications”. Journal of Advertising Research 47(4):398-411
Clevenger C. M. and Haymaker, J. (2006). “The Impact of the Building Occupant on
Energy Modeling Simulations”.
<http://www.stanford.edu/~haymaker/Research/Papers/OccupantImpact-2006-
Montreal-Clevenger-Haymaker.pdf> (Jun. 1, 2010)
Dell’Isola, A. J. and Kirk, S. J. (2003). “Life Cycle Costing of Facilities”. Kingston, MA:
Reed Construction Data.
Emery, A. and Kippenhan, C. (2006). “A long term study of residential home heating
consumption and the effect of occupant behavior on homes in the Pacific Northwest
constructed according to improved thermal standards.” Energy, 31 (5), 677–693.
Energy Information Administration (EIA) (2010). “Annual Energy Review.” DOE/EIA –
0384, August 2010. <http://www.eia.doe.gov/aer/pdf/aer.pdf> (Sep. 3, 2010).
Erickson, V. L., Lin, Y., Kamthe A., Brahme, R., Surana, A., Cerpa, A.E., Sohn, M.D.,
and Narayanan, S. (2009). “Energy Efficient Building Environment Control
Strategies Using Real-time Occupancy Measurements”.
<http://andes.ucmerced.edu/papers/Erickson09a.pdf> (Jun. 1, 2010).
Harrison-Walker, L.J. (2001). “The measurement of word-of-mouth communication and
an investigation of service quality and customer commitment as potential
antecedents”. Journal of Service Research 4(1):60-75.
Hawthorne, C. (2003). “Turning Down the Global Thermostat.” Metropolis Magazine,
<http://www.metropolismag.com/story/20031001/turning-down-the-global-
thermostat>, (Oct. 1, 2009).
Hoes, P., Hensen, J. L. M., . Loomans, M. G. L. C., de Vries, B., and Bourgeois, D.
(2009). “User Behavior in Whole Building Simulation”. Energy and Buildings,
Elsevier 41:295-302.
InterAcademy Council (2007). “Lighting the Way: Towards a Sustainable Energy
Future”. Technical Report, InterAcademy Council, Amsterdam, The Netherlands.
Jackson, T. (2005). “Motivating Sustainable Consumption: a review of evidence on
consumer behaviour and behavioural change”. Technical Report, Centre for
Environmental Strategy, University of Surrey, Surrey, United Kingdom.
Li, Z., Yeonsook, H. and Godfried A. (2009). “HVAC Design Informed by
Organizational Simulation”. Proceedings of the Eleventh International IBPSA
Conference, Glasgow, Scotland.
<http://www.ibpsa.org/proceedings/BS2009/BS09_2198_2203.pdf> (Sep. 15, 2010).
Meier A. (2006). “Operating buildings during temporary electricity shortages.” Energy
and Buildings, 38(11), 1296–1301.
Soebarto, V. I. and Williamson, T.J. (2001). “Multi-criteria Assessment of Building
Performance: Theory and Implementation”. Building and Environment, Elsevier
36(6):681-690.
Sorrell, S., Dimitropoulos, J. and Sommerville, M. (2009). “Empirical Estimates of the
Direct Rebound Effect: A Review”. Energy Policy, 37 (4) 1356–1371
Staats, H., van Leeuwen, E. and Wit., A. (2000). “A longitudinal study of informational
interventions to save energy in an office building.” Journal of Applied Behavior
Analysis, 33 (1) 101–104.
Turner, C. and M. Frankel (2008). “Energy Performance of LEED for New Construction
Buildings”. Technical Report, New Buildings Institute, Vancouver, WA.
<http://www.usgbc.org/ShowFile.aspx?DocumentID=3930> (Jun. 1, 2010).
United Nations Environment Programme (2007). “Buildings Can Play Key Role In
Combating Climate Change”.
<http://www.unep.org/Documents.Multilingual/Default.asp?DocumentID=502&Artic
leID=5545&l=en> (May 18, 2010).
XJTechnologies (2009). Anylogic Overview. <http://www.xjtek.com/anylogic/overview/>
( Jun. 1, 2010).
Yudelson, J. (2010). “Greening Existing Buildings”. Green Source/McGraw-Hill, New
York, NY.
Incorporating Social Behaviors in Egress Simulation
Mei Ling Chu1, Xiaoshan Pan2, Kincho Law3
1 Department of Civil and Environmental Engineering, Stanford University, Stanford, CA 94305; PH (650) 723-4121; FAX (650) 723-7514; email: mlchu@stanford.edu
2 Tapestry Solutions, 2975 Mcmillan Avenue # 272, San Luis Obispo, CA 93401; PH (805) 541-3750; FAX (805) 541-8296; email: xpan@stanfordalumni.org
3 Department of Civil and Environmental Engineering, Stanford University, Stanford, CA 94305; PH (650) 725-3154; FAX (650) 723-7514; email: law@stanford.edu
ABSTRACT
Emergency evacuation (egress) is considered one of the most important issues in the design of buildings and public facilities. Given the complexity and variability of an evacuation situation, computational simulation tools are often used to help assess the performance of an egress design. Studies have revealed that social behaviors can have a significant influence on an evacuating crowd during an emergency. The challenges in designing safe egress thus include identifying these social behaviors and incorporating them in the design analysis. Even though many egress simulation tools now exist, realistic human and social behaviors commonly observed in emergency situations are not supported. This paper describes an egress simulation approach that incorporates research results from social science regarding human and social behaviors observed in emergency situations. By integrating the behavioral theories proposed by social scientists, the simulation tool can potentially produce more realistic predictions than current tools, which rely heavily on simplified and, in most cases, mathematical assumptions.
KEYWORDS
Social behavior, egress, crowd simulation, multi-agent based modeling
INTRODUCTION
This paper articulates a computational approach that integrates human and social
behaviors in emergency evacuation (egress) simulations. Despite the wide range of
simulation tools currently available, “the fundamental understanding of the
sociological and psychological components of pedestrian and evacuation behaviors is
left wanting [in computational simulation] (Galea, 2003, p. VI)”, and the situation has
been echoed by the authorities in fire engineering and social science (Aguirre 2009;
Challenger et al. 2009; Still 2000). Our approach to address this shortcoming is to
design a multi-agent based egress simulation framework that can incorporate current
and future social behavior theories on crowd dynamics and emergency evacuation.
This multi-agent based framework is architected to facilitate the generation of
behavior profiles and decision models for a diverse population. This paper describes
the system framework and the features that are currently implemented. The prototype
system is capable of simulating some of the group and social processes that have been
observed in real situations and identified in recent social studies.
LITERATURE REVIEW
Social behavior in emergency situations
Social scientists and disaster management researchers have been studying human
behaviors in emergency situations and have developed a variety of theories about
crowd behaviors in emergency situations (Aguirre et al. 2009; Averill et al. 2005;
Cocking & Drury 2008; Proulx et al. 2004). A comprehensive review of various
social theories about crowd behaviors has recently been reported by Challenger et al.
(2009). Some examples of prevalent theories on crowd behaviors include self-
organization (Helbing et al. 2005), social identity (Cocking & Drury 2008), affiliation
model (Mawson 2005), normative theory (Aguirre 2005), panic theory (Chertkoff &
Kushigian 1999) and decision-making theory (Mintz 1951). Earlier theories in crowd
behavior suggest that people tend to behave individually and show non-adaptive
behaviors in dangerous situations. For example, the panic theory suggests that people
would become panicked in an emergency situation and act irrationally upon
perceiving danger. In contrast, the decision-making theory argues that people would
act rationally to achieve a better outcome in the situation. Recent theories, on the
other hand, emphasize the sociality of the crowd (such as pre-existing social
relationships or emerging identity during the emergencies) in explaining the
occupants’ reactions in past accidents. For example, the affiliation model suggests
that people are typically motivated to move towards familiar people or locations and
show increased social attachment behavior in an emergency situation. The normative
theory stresses that the same social rules and roles that govern human behavior in
everyday life are also observed in emergency situations. According to these recent
theories, evacuating crowds retain their sociality and behave in a socially structured
manner.
Incorporating social behaviors in egress simulation
The lack of human and social behaviors in current egress simulation tools has been
recognized by social scientists, organizational psychologists and emergency
management experts (Gwynne et al. 2005; Santos & Aguirre 2004). It is
recommended that future simulation tools should include the following features and their effects in relation to human social behavior (Aguirre 2009; Averill et al. 2005; Challenger et al. 2009; Cocking & Drury 2008; Mawson 2005):
• pre-existing relationships and group behavior in a simulated crowd;
• communication between crowd members and its impact on crowd behaviors;
• the ability to account for the fact that crowd members are unlikely to have complete information or understanding of their environment;
• inter-group interactions and the influence of crowd members with different roles.
COMPUTATIONAL SIMULATION FRAMEWORK
This work extends a multi-agent based framework, MASSEgress, which is designed
to model and to implement human and social behaviors in emergency evacuation (Pan 2006; Pan et al. 2007). In the simulation, each individual is modeled as an autonomous agent who interacts with other agents. The multi-agent based approach can simulate not only individuals, but also the social and emergent behaviors of crowds in a virtual setting. This approach also allows a software agent to mimic the human decision-making process and individual behavior execution. Furthermore, the framework offers the flexibility to implement a variety of human behaviors as proposed by social scientists. For example, users can create different roles for the agents and assemble behavior models to reflect a specific behavioral theory.
System architecture
Figure 1 schematically depicts the system architecture of the multi-agent simulation
framework. The Global Database, Crowd Simulation Engine and Agent Behavior
Model constitute the key modules of the framework. The Global Database maintains
all the information about the physical environment and the agent population during a
simulation. It obtains physical geometries from the Geometric Engine and sensing
information from the Sensing Data Input Engine, as well as the agent population
distribution and physical parameters from the Population Generator. The Agent
Behavior Model contains the agent decision profiles and agent group information.
The Global Database and the Agent Behavior Model interact with each other through
the Crowd Simulation Engine, which generates visual output and event logs.

Sensing Data Input Engine


Global Database Population Generator
Geometry Engine

Event Recorder Crowd Simulation Visualization Environment


Engine
Individual Behavioral Model Group Behavioral Model
Database Database
Agent Behavior
Individual behavior model 1 Model Group behavior model 1
………. ……….
Individual behavior model k Group behavior model k

Figure 1. Overall architecture of the framework
Agent behavior model
Figure 2 shows the agent behavior model consisting of three fundamental steps, namely perception, decision-making and execution. An agent possesses a list of distinct traits (such as physical size and its affiliation to a group) and a decision profile. At each simulation step, an agent perceives and assesses information about the surrounding environment. The information can be visual, audio, or time-related data, such as the visibility of a leader or an exit sign, the evacuation time elapsed, etc. Based on the
perceived data, an agent prioritizes the different behaviors that the agent may exhibit
and chooses the one with the highest priority. After a decision is made, the agent
executes the actions according to the selected behavior, and invokes the appropriate
locomotion.
Figure 2. The three subsystems of an agent behavior model
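To make the cycle concrete, the following minimal Python sketch shows one way the perceive–decide–execute loop could be organized. It is illustrative only: the class, the Percept fields, and the environment queries (exits_visible_to, members_visible_to) are assumptions of this sketch, not the MASSEgress implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Percept:
    """Snapshot of what an agent can currently sense (visual, audio, time-related)."""
    visible_exits: list = field(default_factory=list)
    visible_members: list = field(default_factory=list)
    elapsed_time: float = 0.0

class Agent:
    """Illustrative agent running the perception / decision-making / execution cycle."""
    def __init__(self, name, behaviors):
        self.name = name
        # behaviors: list of (priority_fn, action_fn) pairs supplied by the modeler
        self.behaviors = behaviors

    def perceive(self, environment):
        # Query the (hypothetical) environment for what this agent can currently sense.
        return Percept(visible_exits=environment.exits_visible_to(self),
                       visible_members=environment.members_visible_to(self),
                       elapsed_time=environment.clock)

    def decide(self, percept):
        # Score every candidate behavior and keep the one with the highest priority.
        scored = [(priority(percept), action) for priority, action in self.behaviors]
        return max(scored, key=lambda pair: pair[0])[1]

    def step(self, environment):
        percept = self.perceive(environment)
        action = self.decide(percept)
        action(self, environment)          # execution: invoke the chosen locomotion

class StubEnvironment:
    """Minimal stand-in environment so the sketch can be exercised."""
    clock = 0.0
    def exits_visible_to(self, agent): return ["exit_A"]
    def members_visible_to(self, agent): return []

# Two candidate behaviors: heading to a visible exit outranks wandering.
go_to_exit = (lambda p: 1.0 if p.visible_exits else 0.0,
              lambda a, env: print(a.name, "heads to", env.exits_visible_to(a)[0]))
wander = (lambda p: 0.1, lambda a, env: print(a.name, "wanders"))

Agent("agent_1", [go_to_exit, wander]).step(StubEnvironment())
```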
Creation of agents with specific social roles and functions
Each agent is defined by physical characteristics (such as physical size, gender and mobility) and psychological traits (such as its decision model). Because the computational framework is built on an object-oriented programming (OOP) paradigm, certain types of agents can be conveniently extended through inheritance. For example, a "leader" can be specified as an agent who possesses some leadership abilities and a high degree of autonomy when maneuvering in an environment. A "marshall" can be modeled as an agent who inherits from a "leader" but has additional knowledge about the egress routes and additional path-finding ability. Different agents with specific functionality and social roles can be created by extending or modifying a base agent type, as illustrated in the sketch below.
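As a rough illustration of this inheritance pattern (the attribute names and default values below are invented for the sketch and are not taken from MASSEgress):

```python
class Agent:
    """Base agent: physical characteristics plus a decision profile."""
    def __init__(self, size, gender, mobility):
        self.size, self.gender, self.mobility = size, gender, mobility
        self.knows_egress_routes = False

class Leader(Agent):
    """A leader adds leadership weight and a high degree of autonomy."""
    def __init__(self, *args, leadership=0.8, **kwargs):
        super().__init__(*args, **kwargs)
        self.leadership = leadership

class Marshall(Leader):
    """A marshall inherits from Leader and also knows the egress routes."""
    def __init__(self, *args, routes=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.knows_egress_routes = True
        self.routes = routes or []           # pre-defined evacuation paths

    def next_waypoint(self, current_position):
        # Extra path-finding ability: follow the pre-loaded route (placeholder logic).
        return self.routes[0] if self.routes else current_position

marshall = Marshall(size=0.5, gender="F", mobility=1.2, routes=["corridor", "stair_B"])
```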
Group level parameters
Studies in social science have shown the importance of group dynamics and social behaviors as observed from past accidents. Besides modeling the interactions of individual agents, it is desirable to explicitly model the social behaviors using certain social parameters identified by social science researchers. Our framework implements an additional layer of agent definition by affiliating each agent to a group, whose collective behavior can affect the behavior of its members. For example, group size may influence the speed of the agents in that group. Another example is the concept of "stickiness", which defines the likelihood of an agent to "stick" with its group path despite the presence of other options (Aguirre et al. 2010). In our framework, the group-sticking parameter defines the tendency of the agents to keep looking for other members before they evacuate. Another group level parameter is the group influence matrix representing the social structure of the group, such that different members in the same group can have different levels of influence on each other. In the current implementation, all members are weighed equally except for the group leader (if any), who has a high influence on the other members.
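A hedged sketch of how these group-level parameters might be stored follows; the field names, default weights, and the helper method are assumptions of this sketch rather than the framework's actual data structures.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Group:
    """Group-level parameters described above."""
    member_ids: List[int]
    stickiness: float = 1.0   # group-sticking value in [0, 1]
    # influence[i][j]: how strongly member j's information weighs on member i
    influence: List[List[float]] = field(default_factory=list)

    def default_influence(self, leader_index=None, peer_weight=0.5, leader_weight=0.9):
        """Equal weights for all members, with a higher weight for the leader (if any)."""
        n = len(self.member_ids)
        self.influence = [[peer_weight] * n for _ in range(n)]
        if leader_index is not None:
            for i in range(n):
                self.influence[i][leader_index] = leader_weight
        return self.influence

group = Group(member_ids=[0, 1, 2, 3, 4, 5], stickiness=1.0)
group.default_influence(leader_index=0)
```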
IMPLEMENTATION OF SOCIAL BEHAVIORS IN THE PROTOTYPE
By capturing certain individual psycho-social parameters, decision-making models,
etc., MASSEgress was able to demonstrate the ability of the multi-agent based
framework for simulating some common emergent social behaviors such as those
shown in Figure 3 (Pan 2006; Pan et al. 2007). One objective of this work is to extend
the MASSEgress framework to include additional group level social behavioral
models: group influence, following members with better knowledge of the
environment and seeking group members.
Figure 3. Simulation of human behaviors: (a) competitive behavior, (b) queuing behavior, (c) herding behavior (adopted from Pan (2006), Figure 5-5)
Behavior model 1- Seeking group members
Several social theories suggest that, even in an emergency situation, people demonstrate group behavior rather than individual behavior (Aguirre et al. 2009; Cocking & Drury 2008; Mawson 2005). People who are in the same social group tend to stay together during evacuation and even search for other members. This phenomenon can be modeled using a "group-sticking" parameter (Aguirre et al. 2010) in our simulation. This parameter has a value between 0 and 1, which indicates the proportion of the group that has to gather before the group evacuates. The closer the pre-existing relationships among the group members are, the larger the portion of the group that has to gather before evacuating. By adjusting the "group-sticking" parameter, different levels of group closeness can be simulated. Figure 4 demonstrates the behavior of a group of 6 agents with a group-sticking value of 1 (i.e., the group has to find all the members before searching for exit signs).
Figure 4. Screenshots showing the member-seeking behavior in a group of 6: (a) initially, the group members are separated; (b) the members explore the floor until they see each other; (c) the group starts to look for an exit sign when all members are visible.
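The gathering condition itself reduces to a one-line test; the sketch below is only a restatement of the rule described above, with illustrative numbers.

```python
def ready_to_evacuate(n_gathered: int, group_size: int, stickiness: float) -> bool:
    """True once the gathered fraction of the group reaches the group-sticking value."""
    return n_gathered / group_size >= stickiness

# With stickiness = 1.0 (the Figure 4 case), all 6 members must be together first.
assert not ready_to_evacuate(4, 6, 1.0)
assert ready_to_evacuate(6, 6, 1.0)
# A looser group (stickiness = 0.5) starts looking for exit signs once half have gathered.
assert ready_to_evacuate(3, 6, 0.5)
```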
Behavior model 2- Group Influence
Communication among group members is often observed in emergency situations (Cocking & Drury 2008; Averill et al. 2005). Individuals in the same group may have different interpretations of a situation, but the ones with certain social roles can influence others' decisions (Gwynne et al. 2005). This influence among group members is demonstrated through the ability to share information within the group, and the level of influence is represented in a group influence matrix. For example, when an agent detects an exit, the agent shares the information about the exit with other group members. Other agents may or may not pursue that exit, depending on the level of influence that the information-sharing agent has. In our simulation, the group influence phenomenon is observed when the following conditions are satisfied: (1) "group influence" is included in the behavior decision model (i.e., the group influence matrix takes effect in the decision-making process); (2) an agent with a high level of influence detects an exit sign and sets the exit sign as a goal; and (3) the agent shares the information about the exit sign with other agents. Figure 5 shows an example of the information-sharing and group influence behavior. In this example, the group influence takes place when an agent can see both the information-sharing agent and the shared information (the exit sign). The agent in the room ignores the closer exit after leaving the room and navigates to the farther exit suggested by the information-sharing agent.
Figure 5. Screenshots showing the "group influence" process in a group of 6: (a) an agent sees the exit sign and shares the information with the other members (note that the agent in the room can see the information-sharing agent but not the exit sign); (b) the agent moves out of the room and sees the exit sign, and his goal becomes that exit since its location was informed by the information-sharing agent.
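The three conditions above can be read as a simple predicate; the sketch below is an illustrative restatement, and the 0.7 threshold on the influence weight is an invented value, not one used by the prototype.

```python
def adopts_shared_exit(sees_sharing_agent: bool, sees_shared_exit_sign: bool,
                       influence_weight: float, threshold: float = 0.7) -> bool:
    """Whether an agent adopts an exit suggested by another group member."""
    return sees_sharing_agent and sees_shared_exit_sign and influence_weight >= threshold

print(adopts_shared_exit(True, True, influence_weight=0.9))   # True: follows the suggestion
print(adopts_shared_exit(True, False, influence_weight=0.9))  # False: the shared exit sign is not yet visible
```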
Behavior model 3- Following member with better knowledge of the environment
During an emergency, people usually have limited or incomplete knowledge about the environment (Challenger et al. 2009). The presence of individuals who are familiar with the egress route can have a significant impact on the evacuation outcome (Mawson 2005; Proulx et al. 2004). Generally, the agents who are less familiar with the environment will follow the ones who are more certain about the escape route. This phenomenon is observed under the following conditions: (1) there is no (or little) guidance (such as exit signs) from the environment; (2) there is at least one member in the group who has knowledge about the egress path; and (3) the decision model of members in a group is defined as "group member following". Figure 6 shows an example where there is no guidance available in the environment while one of the agents has perfect knowledge of the egress route. By following the agent with perfect knowledge, the other agents are able to escape efficiently. Other examples can be created by varying the leader's familiarity with escape routes and the exit sign arrangement.
Figure 6. Screenshots showing members following the leader with better knowledge of the escape route: (a) members are attracted to the leader (who possesses knowledge of the escape route); (b) agents continue to follow the leader, who navigates according to the defined route. The user-defined escape route is symbolized by square labels.
CONCLUSION AND FUTURE WORKS
Although the importance of modeling realistic human social behaviors in egress simulations has been recognized, such efforts are still seldom considered in current tools. Adopting a multi-agent based approach to model human social behaviors is promising, because software agents are capable of capturing both individual human behavior (through simulating individual characteristics and the decision-making process) and diverse social behaviors (through simulating the interactions among individuals). This paper describes an extension of the prior MASSEgress model, focusing on group and social behaviors, including group influence, following a group member who is more familiar with the environment, and seeking group members. This development demonstrates the potential to include group behaviors in a multi-agent based simulation environment. Our current work continues to incorporate additional behaviors, particularly those identified in social science research. Additionally, we plan to enrich the simulation environment both at the individual level and at the group level, develop benchmark models for validation, and develop tools to facilitate the design of egress.
ACKNOWLEDGEMENT
The first author is supported by a fellowship from the Croucher Foundation and a
John A. Blume fellowship at Stanford University.
REFERENCES
Aguirre, B. E. (2005). “Commentary on Understanding Mass Panic and Other
Collective Response to Threat and Disaster.” Psychiatry, 68, 121-129.
Aguirre, B.E., Torres, M., and Gill, K.B. (2009). “A test of Pro Social Explanation of
Human Behavior in Building Fire.” Proceedings of 2009 NSF Engineering
Research and Innovation Conference.
Aguirre, B.E., El-Tawill, S., Best, E., Gill, K.B., and Fedorov, V. (2010). “Social
Science in Agent-Based Computational Simulation Models of Building
Evacuation.” Draft Manuscript, Disaster Research Center, University of
Delaware.
Averill, J. D., Mileti, D. S., Peacock, R. D., Kuligowski, E. D., Groner, N., Proulx, G.,
Reneke, P. A., and Nelson, H. E. (2005). Occupant Behavior, Egress, and
Emergency Communications, Technical Report NCSTAR, 1-7, NIST.
Challenger, W., Clegg W. C., and Robinson A.M. (2009). Understanding Crowd
Behaviours: Guidance and Lessons Identified, Technical Report prepared for
UK Cabinet Office, Emergency Planning College, University of Leeds, 2009.
Chertkoff, J. M., and Kushigian, R. H. (1999). Don’t Panic: The Psychology of
Emergency Egress and Ingress, Praeger, London.
Cocking, C., and Drury, J. (2008). “The Mass Psychology of Disasters and
Emergency Evacuations: A Research Report and Implications for the Fire and
Rescue Service.” Fire Safety, Technology and Management, 10, 13-19.
Galea, E., (ed.). (2003). Pedestrian and Evacuation Dynamics, Proceedings of 2nd
International Conference on Pedestrian and Evacuation Dynamics, CMC Press,
London.
Gwynne, S., Galea, E. R., Owen, M., and Lawrence, P. J. (2005). "The Introduction of Social Adaptation within Evacuation Modeling." Fire and Materials, 30, 285-309.
Helbing, D., Buzna, L, Johansson, A., and Werner, T. (2005). “Self-Organized
Pedestrian Crowd Dynamics.” Transportation Science, 39(1), 1-24.
Mawson, A. R. (2005). “Understanding Mass Panic and Other Collective Responses
to Threat and Disaster.” Psychiatry, 68, 95-113.
Mintz, A. (1951). “Non-Adaptive Group Behavior.” Journal of Abnormal and Social
Psychology, 46, 150-159.
Pan, X. (2006). Computational Modeling of Human and Social Behavior for
Emergency Egress Analysis, Ph.D. Thesis, Stanford University.
Pan, X., Han, C. S., Dauber, K., and Law, K. H. (2007). “A Multi-Agent Based
Framework for the Simulation of Human and Social Behaviors during
Emergency Evacuations.” AI & Society, 22, 113-132.
Proulx, G., Reid, I., and Cavan, N. R. (2004). Human Behavior Study, Cook County
Administration Building Fire, October 17, 2003 Chicago, IL, Research Report
No. 181, National Research Council, Canada.
Santos, G., and Aguirre, B. E. (2004). “A Critical Review of Emergency Evacuation
Simulations Models.” in Peacock, R. D., and Kuligowski, E. D., (ed.).
Workshop on Building Occupant Movement during Fire Emergencies, June
10-11, 2004, Special Publication 1032, NIST.
Still, G. K. (2000). Crowd Dynamics, Ph.D. Thesis, University of Warwick, UK.
3D Thermal Modeling for Existing Buildings using Hybrid LIDAR System

Y. Cho1 and C. Wang2
1 Assistant Professor, ASCE member, Durham School of Architectural Engineering and Construction, University of Nebraska-Lincoln, Peter Kiewit Institute, 1110 S. 67th St., Omaha, NE 68182; PH (402) 554-3277; FAX (402) 554-3850; email: ycho2@unl.edu
2 GRA, Durham School of Architectural Engineering and Construction, University of Nebraska-Lincoln, Peter Kiewit Institute, 1110 S. 67th St., Omaha, NE 68182; PH (402) 554-3277; FAX (402) 554-3850; email: cwang@huskers.unl.edu

ABSTRACT

This paper introduces on-going research that develops a hybrid thermal LIDAR system for rapid thermal data measurement and 3D modeling of buildings, which will allow "virtual" representations of the energy and environmental performance of existing buildings. The modeled building is created for retrofit decision-support tools for decision makers such as occupants, owners, and outside consultants of the buildings. Existing buildings represent the greatest opportunity to improve building energy efficiency and reduce environmental impacts. This research aims to stimulate decision makers to improve their buildings by providing reliable and visualized information on their building's energy performance using the developed hybrid thermal LIDAR system. The created 3D model contains point clouds of the building envelope and thermal data for each point, including temperature (°C) and the thermal color generated from the infrared camera. The developed system has successfully demonstrated its technical feasibility through intensive lab tests and a field experiment on residential house modeling.

INTRODUCTION

Buildings account for about 40% of the primary energy usage and 71% of the electricity in the U.S. (U.S. DOE 2008; EIA 2009), and yet they receive much less public attention than fuel economy or new technologies for automobiles, or alternative sources or distribution systems for power generation. The U.S. DOE Building America Program (NREL 2008) set a goal of reducing the average energy use of housing by 40% to 70%. In particular, existing residential buildings are the single largest contributor to U.S. energy consumption and greenhouse gas emissions (>50%), spread across more than 120 million buildings (>95% of the total number of buildings). Exacerbating these problems is the fact that the average age of such buildings is over 50 years, with about 85% of buildings built before 2000 (U.S. DOE 2008). However, millions of decision makers of these buildings usually lack
sufficient information or tools for measuring their building’s energy performance
and they are faced with a dizzying array of expensive products and services for
energy efficiency retrofit with long, uncertain payback periods. Consequently,
existing buildings represent the greatest opportunity to improve building energy
efficiency and reduce environmental impacts. Most of the current research is related
to high performance technologies for new construction and large commercial
buildings. In contrast, the primary focus for this study would be on existing small
commercial and residential buildings where decision support tools are lacking.

The disconnect between existing high performance building products and the
willingness of decision makers to choose those products is likely due to the
complexity of building systems and the marketplace and the lack of adequate
feedback loops between decision makers and outcomes associated with the different
stages of the building lifecycle. In particular, there is still a lack of: 1) rapid and
low-cost as-built data collection techniques for Building Information Modeling of
existing buildings(Schlueter 2008); 2) metrics and measurements for evaluating
overall building performance (including energy and occupant issues); 3) adequate
measurements and integrated intelligence for evaluating component performance; 4)
tools and information geared to non-expert decision makers (e.g., owners, occupants);
and 5) evidence that buildings touted as high performance actually perform well.

Previous work has focused on some aspects of the problems above. However,
several gaps remain that will be addressed in this study including:
• Lack of perception-based rapid & low-cost data collection tools for as-built BIM design and thermal performance of existing buildings;
• Lack of integrated tools and data for analyzing the performance and opportunities for improvement in existing buildings; and
• Lack of connectivity between building performance information and decision makers.

As on-going research, the goal of the project is to develop and evaluate rapid, low-cost measurement and modeling approaches that will allow "virtual" representations of the energy and environmental performance of existing buildings to be created for decision-support tools for the occupants, owners, and outside consultants of the buildings. In this paper, an innovative perception-based data collection methodology is discussed using a proof-of-concept prototype of a portable hybrid thermal laser scanning system to model the as-is thermal performance of existing buildings.

PHOTO IMAGE MAPPING TO 3D MODEL

Three dimensional geo-information modeling is a fast developing topic in
remote sensing and Geographic Information Systems (GIS). Using light detection and ranging (LIDAR) and photogrammetry technologies, 3D building models can be constructed to resemble real-world building layouts, appearances and other characteristics (Tsai and Lin 2004). LIDAR technology has been used to create 3D as-built models of structures and scenes for quality control, surveying, mapping, and reverse engineering (Cheok 2005). Most commercial survey-level LIDAR scanners enable an internal or external camera to capture digital images of the scanned scene and map image textures onto corresponding points in the point clouds. Each point then has position (x, y, z) and color (R, G, B) values. Unlike applications using digital cameras, there have been few efforts to map thermal images taken from an infrared camera onto LIDAR point clouds, although the infrared thermography technique has long been used as a non-invasive approach to diagnose buildings and infrastructure (Balaras and Argiriou 2001). Tsai and Lin (2004) developed a software program that integrates information produced by a laser scanner and an infrared (IR) camera for cultural heritage diagnostics, supporting restoration in architectural and cultural study applications.

From previous efforts, the research team developed an integration method which projects an infrared thermal image onto the point clouds by calculating the distance, position and orientation between corresponding common points (Figure 1). Similar to Tsai and Lin's work, this approach merely merges the radiometric images onto a 3D point-cloud model (Figure 2).

Figure 1. A building image (right) and an IR thermal image of the building.

Figure 2. Infrared thermal image projected onto point clouds of the building
(overlay).
While this overlay still provides good visual information for detecting thermal differences among building materials, the temperature information captured by the infrared camera is lost in the 3D thermal model. In an IR camera, the thermal color of captured objects is determined relative to the surrounding environment; the same object (e.g., a wall) can be colored differently if the temperature range differs from one capture to another. Thus, thermal measurements that provide absolute temperature values in °C or °F are more accurate information for diagnosing the energy efficiency of building materials. For this reason, the research team has been developing a hybrid system which fuses the absolute temperature values with the position data to create a 3D thermal model.

HYBRID 3D LIDAR SYSTEM

Thermography offers a rapid and cost-effective method of investigation that does not require any contact with the surface materials or structure. Since it is a non-contact, non-destructive technique, thermography has been extensively utilized in the assessment of buildings, infrastructure, monuments and ancient structures (Rao 2007).

In this study, an innovative hybrid system was developed which integrates a 3D LIDAR scanner and an infrared (IR) thermal camera, as shown in Figure 3. A graphical user interface (GUI) was developed using Visual C++. The GUI controls the LIDAR scanner and the IR camera and visualizes the captured 3D model.

Figure 3. Prototype hybrid thermal LIDAR system.

As the main sensor of the hybrid system, a light-weight 3D LIDAR was built consisting of a small 2D line laser and a pan-and-tilt unit (PTU) from the previous research (Cho and Martinez 2009), which gives the research team more flexibility in hardware control and software programming than using a commercial LIDAR scanner. Based on the current mounting configuration, multiple degree-of-freedom (DOF) kinematics was solved to obtain x-y-z point values for the LIDAR and corresponding pixels of the radiometric image for the IR camera. The transformation matrices for the LIDAR and the IR camera share the first two frames and split into two different kinematics chains from the 3rd matrix (Figure 4).
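As a hedged sketch of this shared kinematic chain, the homogeneous transforms below compose the common pan and tilt frames and then branch into separate LIDAR and IR camera offsets; all numeric offsets and angles are invented for illustration and are not the calibrated values of the prototype.

```python
import numpy as np

def rot_z(a):   # homogeneous rotation about z (pan)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rot_y(a):   # homogeneous rotation about y (tilt)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

def trans(x, y, z):  # homogeneous translation
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

pan, tilt = np.radians(10.0), np.radians(-5.0)      # PTU readings (example values)
shared = rot_z(pan) @ rot_y(tilt)                   # first two frames, common to both sensors
T_lidar = shared @ trans(0.00, 0.00, 0.10)          # branch 1: laser origin offset (assumed)
T_ir    = shared @ trans(0.05, 0.00, 0.12)          # branch 2: IR camera offset (assumed)

p_lidar = np.array([2.0, 0.5, 4.0, 1.0])            # a scanned point, homogeneous coordinates
p_base = T_lidar @ p_lidar                          # the same point expressed in the base frame
```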
Figure 4. Integrated kinematics frame for the hybrid thermal LIDAR system.

Most off-the-shelf cameras are not perfect and tend to show a variety of distortions and aberrations. For geometric measurements by a camera, the most important issue is the distortion (Shah and Aggarwal 1996). To correct the distortion that the IR camera exhibits, the intrinsic parameters are identified, which encompass the focal length in terms of pixels, the image format, and the principal point (Hartley and Zisserman 2003; Zhang 2000). Extrinsic parameters are also needed to transform 3D world coordinates to the 3D camera-centered coordinate frame. There are three extrinsic parameters: the Euler angles yaw θ, pitch ψ, and tilt φ for rotation. In this research, angle θ always equals zero, while angle ψ and angle φ can be obtained from the pan-tilt equipment. The rotation matrix R can be expressed as a function of θ, ψ, and φ as follows:
R = \begin{bmatrix} \cos\psi\cos\theta & -\cos\psi\sin\theta & \sin\psi \\ \cos\phi\sin\theta + \sin\phi\sin\psi\cos\theta & \cos\phi\cos\theta - \sin\phi\sin\psi\sin\theta & -\sin\phi\cos\psi \\ \sin\phi\sin\theta - \cos\phi\sin\psi\cos\theta & \sin\phi\cos\theta + \cos\phi\sin\psi\sin\theta & \cos\phi\cos\psi \end{bmatrix}   (1)

\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}   (2)
In equation (2), (X, Y, Z) is the infrared camera 3D coordinate system, and (Xw, Yw,
Zw) is the object world coordinate system.
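As a rough illustration of the fusion step, the sketch below applies a rotation of the form in Eqs. (1)-(2) (plus a lever-arm translation that the equations omit) and an ideal pinhole projection to find the IR pixel, and hence the temperature, for a scanned point. The intrinsic values, offsets, and the stand-in thermal image are all invented for the example and are not the system's calibration data.

```python
import numpy as np

def rotation_matrix(theta, psi, phi):
    """R(θ, ψ, φ) built from yaw, pitch and tilt rotations (cf. Eq. 1); θ = 0 in this study."""
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
    Ry = np.array([[ np.cos(psi), 0, np.sin(psi)],
                   [0, 1, 0],
                   [-np.sin(psi), 0, np.cos(psi)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(phi), -np.sin(phi)],
                   [0, np.sin(phi),  np.cos(phi)]])
    return Rx @ Ry @ Rz

def world_point_to_ir_pixel(p_world, R, t, fx, fy, cx, cy):
    """Eq. (2) plus an assumed LIDAR-to-camera offset t, followed by pinhole projection."""
    X, Y, Z = R @ np.asarray(p_world) + t
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return int(round(u)), int(round(v))

thermal_image = 20.0 + 20.0 * np.random.rand(240, 320)      # stand-in 320 x 240 image of °C values
R = rotation_matrix(0.0, np.radians(5.0), np.radians(-2.0))  # pan/tilt angles read from the PTU
t = np.array([0.05, 0.0, 0.0])                               # assumed camera offset (m)
u, v = world_point_to_ir_pixel([1.2, 0.4, 6.0], R, t, fx=400.0, fy=400.0, cx=160.0, cy=120.0)
if 0 <= u < 320 and 0 <= v < 240:
    point_temperature = thermal_image[v, u]   # the point now carries (x, y, z, temperature)
```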

While the LIDAR can cover a wide area with one scan, the IR camera needs to capture multiple images due to its low resolution (320 x 240), especially when a target is large, as previously shown in Figure 2. Figure 5 demonstrates the proposed hybrid data fusion approach, in which point clouds are mapped with the thermal image of a human body. Once the LIDAR scan is done, the area of interest is captured by the IR camera using the panning and tilting functions from the GUI. In this example, one LIDAR scan and one IR camera capture were used to model the 3D image. The points that are out of the IR camera range (not mapped with thermal data) are shown in blue.
Figure 5. Example of 3D thermal modeling of human body- front view (left) and
skewed view (right).

FIELD TEST

A "Living Laboratory" residential house, Zero Net Energy Testing Home


(ZNETH) at the University of Nebraska, was used as a preliminary field test for the
developed prototype hybrid system (Figure 6). The test was conducted in hot and
sunny day. When each side of the house was scanned, thermal data was captured and
stored with the point clouds. Then, the multiple scans were registered later. Each laser
scanned point is mapped with corresponding temperature on the building surface with
various colors configured from the IR camera (Figure 7).

Figure 6. ZNETH House.
Figure 7. Example of 3D thermal modeling field test (colored thermal data (°C) on the front side of the house).

With a mouse click on any point in the point clouds from the GUI, the system shows the x, y, z and temperature data for that point. In Figure 7, the hottest point, shown in red, was selected (39.566 °C).

CONCLUSIONS & RECOMMENDATIONS

This paper introduced a non-invasive rapid measurement system for 3D thermal modeling of existing buildings. To rapidly and accurately measure the 3D geometries of the building envelope, a 3D LIDAR scanner was developed. An infrared (IR) camera was then integrated into the LIDAR system to measure the temperature of the building surface. Multiple degree-of-freedom (DOF) kinematics was solved to integrate the two units and obtain x, y, z point values and corresponding thermal data for each point. A graphical user interface (GUI) was developed to control the hardware units (LIDAR, pan-and-tilt unit, and IR camera) for data capture, and to edit and visualize the 3D thermal point clouds.

The developed hybrid system has successfully demonstrated its technical feasibility through a field experiment on residential house modeling. As on-going research, the research team continues to improve the current hybrid prototype and to develop more realistic forms of information, such as an estimated description of the thermal performance of the building envelope (e.g., heat resistance value) and energy usage status from an economic standpoint. This research also plans to develop cost/benefit analysis tools for benchmarking the performance of buildings, identifying specific problems, and improving performance through repairs, retrofits, and better operational strategies. In particular, the developed decision support tools are expected to stimulate decision makers to improve their buildings by providing reliable and visualized information on their building's energy performance, thus benefiting the economy, society, and the environment.

ACKNOWLEDGEMENTS

This research has been supported by a grant from the U.S. Department of Energy (DOE) (Contract #: DE-EE0001690). The authors would like to acknowledge and extend their gratitude to the U.S. DOE for its support.

REFERENCES

Balaras, C. and Argiriou, A. (2001). "Infrared Thermography for Building Diagnostics." Energy and Buildings, 34(2), 171-183.
Cheok, G. (2005). Proceedings of the 2nd NIST LADAR Performance Evaluation Workshop, March 15-16, 2005, NISTIR 7266, National Institute of Standards and Technology, Gaithersburg, MD, October.
Cho, Y. and Martinez, D. (2009). "Light-weight 3D LADAR System for Construction Robotic Operations." 26th International Symposium on Automation and Robotics in Construction (ISARC), Austin, Texas, June 24-27, 237-244.
Energy Information Administration (EIA) (2009). "Annual Energy Review 2008." DOE/EIA-0384 (2008), U.S. Department of Energy, June 2009.
Hartley, R. and Zisserman, A. (2003). Multiple View Geometry in Computer Vision. Cambridge University Press, 155-157. ISBN 0-521-54051-8.
National Renewable Energy Laboratory (NREL) (2008). "Building America Research Benchmark Definition." Technical Report NREL/TP-550-44816.
Rao, D. S. (2007). "Investigations on Ancient Masonry Structures Using Infrared Thermography." Proc. of InfraMation 2007 Conference, Oct 15-19, Las Vegas, NV.
Schlueter, A. and Thesseling, F. (2008). "Building information model based energy/exergy performance assessment in early design stages." Automation in Construction, 18, 153-163.
Shah, S. and Aggarwal, J. (1996). "Intrinsic parameter calibration procedure for a high-distortion fish-eye lens camera with distortion model and accuracy estimation." Pattern Recognition, 29(11), 1775-1788.
Tsai, F. and Lin, H. (2004). "Realistic Texture Mapping on 3D Building Models." <http://www.gisdevelopment.net/aars/acrs/2004/b_dem/acrs2004_b1006.shtml> (Dec. 23, 2010).
U.S. Department of Energy (DOE) (2008). "2008 U.S. DOE Buildings Energy Databook." <http://buildingsdatabook.eren.doe.gov/>
Zhang, Z. (2000). "A flexible new technique for camera calibration." IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), 1330-1334.
A Generalized Time-Scale Network Simulation Using Chronographic Dynamics
Relations

A. Francis1 and E. Miresco2

École de technologie supérieure, Department of Construction Engineering, Quebec University, 1100 Notre-Dame West, Montreal (Quebec) H3C 1K3
1 PH (514) 396-8415; FAX: (514) 396-8584; email: adel.francis@etsmtl.ca
2 PH (514) 396-8846; FAX: (514) 396-8584; email: edmond.miresco@etsmtl.ca

ABSTRACT

The scheduling information for complex and fast-track projects is often incomplete, and some decisions are postponed to a later date when new data become available. One solution is to propose several alternative task execution sequences, which could mitigate some uncertain and doubtful results. However, the use of non-time-scaled solutions prevents their integration into existing construction planning software. This paper reviews and analyses the roles, advantages and disadvantages of the Temporal Function as proposed by the Chronographic Scheduling Method and introduces a generalized time-scale network simulation by means of production-based dynamic relations. The proposed solution uses decision points as part of the temporal functions, which manage the uncertainty and the interdependencies between activities. These functions are extended to represent the risks associated with an activity and their respective probabilities. The probabilities are represented by an entity that contains the most probable duration and productivity values with their corresponding costs. This model is characterized by its ability to perform simulation studies based on the probabilistic aspect of the dynamic relationships and the streamlining of interactions between activities.

INTRODUCTION

For complex and fast-track projects, the scheduling information is often incomplete and some decisions are postponed to a later date when new data become available. One solution is to propose several alternative task execution sequences and probabilistic durations, which could mitigate some of the uncertain and doubtful results. The most common non-time-scaled critical path schedules, CPM or Precedence, present only one predetermined scenario and duration for the project activity sequence. These methods consider that the project is not fully completed until all activities are executed. El-Bibany (1997) provides a description of a computational methodology including the constraint modeling process, the graphical representation of constraints, and the evaluation of constraint networks. The paper indicates that each problem may be represented by interrelating parameters using construction duration and precedence knowledge.

PERT (Malcom et al., 1959) was the first step to applying uncertainties to activity durations. Various researchers, including Murray (1963), Grubbs (1962) and MacCrimmon and Ryavec (1964), suggested alternatives to PERT adding uncertainties to the cost and reliability. Martinez and Ioannou (1997) stated that PERT is unable to establish a correlation between activity durations, or to manage uncertainty in activity relationships. Daji and Reiar (1993) introduce uncertainty in the durations of non-critical activities with BFUE. Wang and Demsetz (2000) correlate activity network durations with NETCOR. Pritsker et al. (1989), Halpin and Riggs (1992), and Lu and AbouRizk (2000) have applied simulation to the PERT network.

Allen (1984) describes the logic based on temporal intervals rather than time
points, defines thirteen possible temporal relationships and describes situations from
either a static or dynamic perspective. Song and Chua (2007) present a temporal logic
intermediate function relationship based on an interval-to-interval format. The
temporal logics residing in the intermediate functions are applied from three
perspectives: the construction life cycle of a single product component, functional
interdependencies between two in-progress components, and availability conditions
of an intermediate functionality provided by a group of product components.

Eppinger (1997) uses the concepts of concurrent engineering to study the impact of upstream activities upon downstream activities. Peña-Mora and Park (2001) presented the Dynamic Planning Methodology, based on system dynamics, for fast-tracking building construction projects by providing overlapping strategies. Peña-Mora and Li (2001) proposed an overlapping framework based on the activity progress rate, upstream task reliability, downstream task sensitivity and task divisibility. This method is an integrated application of axiomatic design concepts, concurrent engineering concepts and GERT.

Francis and Miresco (2000, 2002a, 2002b, 2006 and 2010) propose the
Chronographic Model and introduce the internal division concept. Divisions are
related to the quantity of work to be accomplished and can be adjusted automatically
as a function of the production variation rates. The concept of Temporal Functions is
introduced in order to specify the decisional and relational constraints between
activities. Temporal functions connect activities on one or more points, called
connection points. Each connection point can be at one of the two extremities of the
activity, or on one of its internal divisions. Internal divisions extend the relationships
between the activities to external and internal types, and generate realistic
dependencies and new types of floats. The Chronographic Method studies the
dynamic time-scaled dependencies that allow probabilistic simulations based on the
internal variation of the production rate. Plotnick (2006) proposes the Relationship
Diagramming Method (RDM) that also employs the notion of partial relationships.
The RDM uses five classes of new coding: Event Codes, Duration Codes,
Reason/Why Codes, Expanded Restraint or Lead/Lag Codes and Relationship Codes.
Han et al. (2007) propose the Value Addition Rate (VAR), a time-scaled metric method that captures the amount of non-value adding activities consuming time
and/or resources without increasing value. They use different colour schemes to
model the percentage of efforts effectively utilized to add value on a bar chart.

BACKGROUND

Several stochastic models were developed to generalize networks. Stochastic
models are characterized by a tetrad of essential elements: logical nodes with some
inputs and outputs, probabilistic activity branches, feedback loops, and multiple
sources and sinks (Itakura and Nishikawa, 1984). Eisner (1962) developed the generalized PERT, also called the Decision Box. This method was designed to expand PERT by incorporating flexibility into the network. Elmaghraby (1964) proposed refinements to the Decision Box. Hespos and Strassmann (1965) introduced
network decision trees.

Pritsker (1966) proposes the Graphical Evaluation and Review Technique, GERT. The proposed network consists of Nodes (events representing the operations) and Directed Branches (connecting the nodes). There are five types of nodes: input nodes (Exclusive-or, Inclusive-or and And) and output nodes (Deterministic and Probabilistic). Directed branches have several parameters, including the probability (P) that the branch is chosen and the branch time (D). The network implementation is the result of a particular group of branches and nodes. By definition, it is not necessary that all network activities are completed by the project's end. Crowston and Thompson (1967) proposed DCPM to investigate several implementation possibilities using a single graph schedule. Nodes have three types: AND, OR and XOR. These nodes possess durations and costs. Moeller and Digman (1981) developed VERT to assess risks in networks. VERT nodes use input logic (INITIAL, AND, PARTIAL AND, and OR) and output logic (TERMINAL, ALL, MONTE CARLO, FILTER1, FILTER2 and FILTER3), and the activities are characterized by three parameters: time, cost, and performance. Itakura and Nishikawa (1984) propose a fuzzy network technique for very large and/or complex systems in which the activity branches and their corresponding times belong to a fuzzy set.

Chehayeb and AbouRizk (1998) propose SimCon, a simulation-based project
technique used to model complete project networks, flexible logical sequences and
various scheduling alternatives without the need to change activity sequencing.
Martinez and Ioannou (2005), using Stroboscope modeling elements, consider
uncertainty in any aspect, including dynamically selecting the routing of resources
and the sequence of operations.

To our knowledge, existing generalized methods use non-time-scaled graphical models to present execution variants. Despite their efficiency in planning complex construction, these generalized methods can hardly represent the entire project schedule. The use of non-time-scaled solutions is not suitable for work coordination and progress control. Non-time-scaled solutions can also hardly present calendars and non-working days, which limits the visual communication of the schedule and prevents its integration into existing construction planning software. This paper presents a generalized time-scale network simulation using chronographic dynamic relations, thus allowing the user to present a schedule with several alternatives and to better understand the implication of each decision on the entire project.

GENERALIZED TIME-SCALE NETWORK

The Chronographic Model (Francis and Miresco, 2006) introduces the concept of Temporal Functions in order to specify the decisional and probabilistic relational constraints between activities. This paper extends the concept of temporal functions to represent the uncertainty associated with the execution alternatives and their respective probabilities. In some situations, the choice between the alternatives can be delayed so that the decision is made according to the requirements of the situation.

Execution Alternative and Decision Point

The next illustration demonstrates an example of execution uncertainties and analyzes a situation where three alternatives are involved, namely:
• Alternative 1: executed through activities 2 and 3;
• Alternative 2: uses activities 2/1 and 2/2;
• Alternative 3: uses activities 3/1 and 3/2.

Figure 1. Execution alternative and decision point.

The manager chooses alternative 1 as the most probable. This alternative is integrated in the schedule for critical path calculation purposes. The other two alternatives are placed on hold for possible implementation following a future decision. These other alternatives are drawn with a dotted light color and can be plotted on a different layer. For a time-scaled schedule, the use of a decision point drawn separately may affect the modeling representation. Integrating these decision points within the activity representation (see Figure 1) is an acceptable solution. Decision points are drawn as green triangles.

Influence of The Choice of An Alternative on The Project

In the previous example, the project was calculated based on the selected alternative. Thus, if a different alternative is chosen during execution, the project duration is likely to change. If the probabilities of execution of the three alternatives are respectively 40%, 35% and 25%, there is a 60% chance that one of the two other alternatives is executed. This means a probability of 60% that the duration and cost of the project will be different. With such a process, the confidence in the result decreases, as the method has completely neglected the effect of the two other alternatives on the duration, cost and quality of the project. The simulation should therefore take all possible alternatives into consideration.

Adjusting The Duration and Cost Based on The Likelihood of Alternative.

The most likely duration is calculated as the sum of the products of the duration of each alternative by its respective probability: 16 x 0.6 + 12 x 0.1 + 21 x 0.3 = 17.1 ≈ 17 days.

The difference between the most likely duration and the chosen alternative duration represents the uncertainty (17 – 16 = 1 day). This uncertainty is represented by a temporal function called the probability entity. The probability entity adjusts the overall project duration and is represented graphically by a spring (see Figure 2). The most probable cost is the sum of the products of the cost of each alternative multiplied by its respective probability. The difference between the actual and the most probable cost is associated with the probability entity.

Figure 2. Adjusting the duration based on the likelihood of alternative.
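The same arithmetic can be written as a few lines of Python; this is only a restatement of the worked example above (the cost adjustment would follow the identical pattern).

```python
# (probability, duration in days) for the three alternatives of the example
alternatives = [(0.6, 16), (0.1, 12), (0.3, 21)]

most_likely = sum(p * d for p, d in alternatives)     # 0.6*16 + 0.1*12 + 0.3*21 = 17.1
chosen = 16                                           # duration of the selected alternative 1
probability_entity = round(most_likely) - chosen      # the 1-day "spring" added to the schedule

print(round(most_likely, 1), round(most_likely), probability_entity)   # 17.1 17 1
```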


The next illustration (see Figure 3) shows a case in which the most likely duration is less than the chosen alternative duration. The most likely duration is calculated as the sum of the products of the duration of each alternative multiplied by its respective probability: 22 x 0.6 + 12 x 0.1 + 21 x 0.3 = 20.7 ≈ 21 days. In such a situation, the probability entity is drawn reversed to adjust the project duration downward. This entity will also include the cost difference, whether positive or negative.

Figure 3. Reversed duration based on the likelihood of alternative.

Execution Uncertainty Simulation Using Production-Based Dynamic Relations

Using the Chronographic Method, the execution alternatives are simulated
using the production-based dynamic relations. The internal dependencies between
any two activities could be probabilistic, which means that they permit a certain gap
in the interdependence of activities. The amount of flexibility of these internal
dependencies relies upon the predefined activity category.

Figure 4. Execution alternative using dynamic relations.


In this paper we use a simple example with only two alternatives. The problem data include several probabilistic aspects: i) the probabilistic duration of each activity; ii) probabilistic dynamic relationships and interactions between activities; and iii) uncertainty in the probability of each execution alternative. The simulation result is shown in the illustration (see Figure 4).
Due to limited space, the simulation model is not specifically explained. This
article is therefore limited to the presentation of the modeling approach to simulate
the execution alternatives using the Chronographic Method.
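Since the paper's own simulation model is not detailed here, the following Monte Carlo fragment is only a generic illustration of the kind of probabilistic study such a model enables: an execution alternative is drawn according to its probability and a duration is then sampled for it. All probabilities and triangular distributions below are invented for the sketch.

```python
import random

# Illustrative data: (selection probability, (low, most likely, high) duration in days)
alternatives = [
    (0.6, (14, 16, 20)),   # alternative 1
    (0.1, (10, 12, 15)),   # alternative 2
    (0.3, (18, 21, 26)),   # alternative 3
]

def sample_project_duration():
    """Pick an alternative by its probability, then sample a duration for it."""
    r, cumulative = random.random(), 0.0
    for probability, (low, mode, high) in alternatives:
        cumulative += probability
        if r <= cumulative:
            break
    return random.triangular(low, high, mode)

random.seed(0)
samples = [sample_project_duration() for _ in range(10_000)]
print(sum(samples) / len(samples))   # estimated expected project duration
```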

The methodology and the mathematical details for an extensive example using
a mathematical function will be presented in a future paper. The mathematical
function will contain the rules that manage the interdependencies between the two
activities in progress.

CONCLUSION

The proposed generalized time-scale network simulation using chronographic dynamic relations extends the Temporal Function role into a dynamic solution for execution alternatives. This paper examines the overall impact of the application of the dynamic relationships on the critical path and on the criticality of each activity. This approach manages several probabilistic aspects: i) the probabilistic duration of each activity; ii) probabilistic dynamic relationships and interactions between activities; and iii) the uncertainty in the probability of each execution alternative.

We conclude that the applications of these proposed generalized time-scale networks and their simulations present many advantages: first, they easily permit tracking probabilistic interdependencies between two in-progress activities and groups of alternatives; second, they bring more accurate results than traditional PERT, while tracking internal dependencies and margins and adjusting the durations and costs based on the likelihood of an alternative; and finally, they enhance the visual aspects when several activities are interlinked or several alternatives are proposed. These advantages allow the planner to present a more realistically detailed schedule and to make adjustments during project monitoring. The benefits of applying the Chronographic Scheduling Method to project planning include the ability to set up and test the implications of various assumptions and scenarios, which results in a more effective tool, especially for a better simulation of the site production.

REFERENCES

Allen, J. F. (1984). “Towards a general theory of action and time.” Art. Int. J.,
Elsevier, 23, 123-154.
Chehayeb, N. N. and AbouRizk, S. M. (1998). "Simulation-based scheduling with
continuous activity relationships.” J. Constr. Eng. Manage., 124(2), 107-115.
Crowston, W. B. and Thompson, G. L. (1967). “Decision CPM: a method for
simultaneous planning, scheduling and control of projects.” Oper. Res., 15, 407-
426.
Daji, G. and Reiar, H. (1993). “Time uncertainty analysis in project networks with a
new merge-event time-estimation technique”, Oper. Res., 11(3), 165-173.
Eisner, H. (1962). "A generalized network approach to the planning and scheduling of a research project." Oper. Res., 10, 115-125.
El-Bibany, H. (1997). “Parametric constraint management in planning and scheduling
computational basis.” J. Constr. Eng. Manage., 123(3), 348–53.
Elmaghraby, S. E. (1964). “An algebra for the analysis of the generalized activity
network.” Manage. Sc., 10, 494-514.
Eppinger, S. D. (1997). “Three concurrent engineering problems in product
development seminar.” MIT, Sloan School of Management, Cambridge, Mass.
Francis, A. and Miresco, E. (2000). “Decision support for project management using
a chronographic approach.” Proc.2nd Int. Conf. on Decision Making in Urban and
Civil Engineering, Lyon, France, 845-856.
Francis, A. and Miresco, E. (2002a). “Amélioration de la représentation des
contraintes d’exécution par la méthode Chronographique.” Proc. 2006 Annual
Canadian Society of Civil Engineers (CSCE) conf., Montreal, Qc, GE019, g-27.
Francis, A. and Miresco, E. (2002b). “Decision support for project management using
a chronographic approach.” J. Decis. Sys., 11(3-4), 383-404.
Francis, A. and Miresco, E. (2006). "A chronographic method for construction
project planning.” Can. J. Civ. Eng., 33(12), 1547-1557.
Francis, A. and Miresco, E. (2010). “Dynamic production-based relationships
between activities for construction projects' planning.”, Proc., Int. Conf., in
Computing in Civil and Building Engineering, Nottingham, UK, 126, 251.
Grubbs F.F. (1962). “Attempts to validate certain PERT statistics.” Oper. Res., 10,
912-915.
Halpin, D., and Riggs, L. (1992). “Planning and analysis of construction operations.”
Wiley, New York.
Han, S., Lee S., Fard, M. G. and Peña-Mora, F. (2007). “Modeling and representation
of non-value adding activities due to errors and changes in design and construction
projects.” Proc., 2007 Wint. Simu. Conf., Washington D.C, 2082-2089.
Hespos, R. F. and Strassmann P. A. (1965). “Stochastic decision trees for the analysis
investment decisions.” Manage. Sc., S.B, 11, 244-259.
Itakura, H. and Nishikawa, Y. (1984). "Fuzzy network technique for technological forecasting." Fuzzy Sets and Systems, 14(2), 99-113.
Lu, M. and AbouRizk, S. M. (2000). "Simplified CPM/PERT simulation model." J. Constr. Eng. Manage., 126(3), 219-226.
MacCrimmon, K.R., Ryavec, C.A. (1964). “An Analytical study of the PERT
Assumptions.” Oper. Res., 12(1), 16-37.
Malcom, D.G., Roseboom, J.H., Clark, C.E., Fazar, W. (1959). “Applications of a
technique for R and D Program Evaluation, PERT.” Oper. Res., 7(5), 646-669.
Martinez, J. C. and Ioannou, P. G. (1997). “State-Based Probabilistic Scheduling
Using STROBOSCOPE’s CPM add-On.” Constr. Congress V, Minneapolis, MN,
438-445.
Moeller, G. L. and Digman, L. A. (1981). “Operations planning with VERT.” Oper.
Res., 29(4), 676-697.
Murray, J.E. (1963). "Consideration of PERT assumptions." IEEE Trans., EM-10, 94-99.
Peña-Mora, F. and Li, M. (2001). “Dynamic Planning and Control Methodology for
Design/Build Fast-Track Construction Projects.” J. Constr. Eng. Manage.,
127(1), 1-17.
Peña-Mora, F. and Park, M. (2001). “Dynamic planning for fast-tracking building
construction projects.” J. Constr. Eng. Manage., 127 (6), 445–56.
Plotnick, F. L. (2006). “RDM-Relationship diagramming method.” AACE Int. Trans.,
PS.08.1-PS.08.10.
Pritsker, A. B. (1966). “GERT: Graphical Evaluation and Review Technique.”
Memorandum RM-4973-NASA, The Rand Corporation.
Pritsker, A., Sigal, C. and Hammesfahr, R. (1989). “SLAM II network models for
decision support.” Prentice-Hall, Englewood Cliffs, N.J.
Song, Y. and Chua, D.K. (2007). “Temporal Logic Representation Schema for
Intermediate Function.” J. Constr. Eng. Manage., 133(4), 277-286.
Wang, W. C. and Demsetz, L. A. (2000). “Application example for evaluating
networks considering correlation.” J. Constr. Eng. Manage., 126(6), 467-74.
Automating Codes Conformance in Structural Domain

Nawari O. Nawari1
1School of Architecture, College of Design, Construction and Planning, University of
Florida, Gainesville, FL 32611, USA. Email: nnawari@ufl.edu

ABSTRACT
Intelligent codes (SMARTcodes) are a new initiative of the International Code
Council (ICC) that strives to automate code compliance checking: the building plan, as
represented by a Building Information Model (BIM), is checked instantly for code
compliance by model checking software. The goal is to be able to create an inspection
checklist of building elements to examine and to view the building components that do not
comply with code provisions, together with the reasons for non-compliance.
This paper examines automated code compliance checking systems that assess
building designs according to various structural code provisions. This includes evaluating and
reviewing the functional capabilities of both the technology and structure of smart codes and
current building design rule checking systems. The paper suggests a new framework for
development of automated rule checking systems to verify structural design against code
provisions and other user defined rules.

INTRODUCTION
At present, structural design and construction processes are becoming more complex every
day because of the introduction of new building technologies, research outcomes and
increasingly stringent building codes. As a result, structural engineers are responsible for
complying with many regulations and specifications, ranging from seismic, blast resistance
and progressive collapse to fire safety and energy performance requirements. They constantly
face the problem of checking the conformance of products and processes to international,
national and local regulations. They are also subject to increasing expectations across several
knowledge domains, striving towards building designs with better performance and quality.
These challenges require intense collaboration among project participants and a thorough
verification of the building design starting from the earliest stages of the design process.
The introduction of Smart Codes will greatly improve current design practice by
simplifying access to code provisions and compliance checks. Converting codes and
standards from a flat, rigid format into a dynamic, actionable format plays the key role: by
breaking through the boundaries between code and standard provisions, design software, and
Building Information Modeling, a solution to an otherwise insurmountable hurdle can be achieved.
A smart or intelligent code refers to an electronic digital format of the building
codes that allows automated rule and regulation checking. It does not modify a building design,
but rather assesses a design on the basis of the configuration of parametric objects, their
relations or attributes. Smart Codes apply rule-based systems to a proposed design and report
results in a format such as “PASS”, “FAIL”, “WARNING”, or “UNKNOWN” for conditions
where the required information is incomplete or missing.
There has been long historical interest in transforming building codes into a format
amenable to machine interpretation and application. The initial effort started in 1966,
when Fenves observed that decision tables, a novel if-then programming and
program documentation technique, could be used to represent design standard provisions in a
precise and unambiguous form. The concept was put to use when the 1969 AISC Specification
(AISC 1969) was represented as a set of interrelated decision tables. The stated purpose of the
decision table formulation was to provide an explicit representation of the AISC Specification,
which could then be reviewed and verified by the AISC specification committee and
subsequently used as a basis for preparing computer programs. Subsequently, Lopez et al.
implemented the SICAD (Standards Interface for Computer Aided Design) system (Lopez and
Elam 1984; Lopez and Wright 1985; Elam and Lopez 1988; Lopez et al. 1989). The SICAD
system was a software prototype developed to demonstrate the checking of designed
components as described in application program databases for conformance with design
standards. The SICAD concepts are in production use in the AASHTO Bridge Design System
(AASHTO 1998). Garrett developed the Standards Processing Expert (SPEX) system (Garrett
and Fenves 1987) using a standard-independent approach for sizing and proportioning
structural member cross-sections. The system reasoned with the model of a design standard,
represented using SICAD system representation, to generate a set of constraints on a set of
basic data items that represent the attributes of a design to be determined.
A further research effort was led by Singapore building officials, who started
considering code checking on 2D drawings in 1995 and, in a subsequent development,
switched to the CORENET system working with IFC (Industry Foundation Classes) building
models in 1998 (Khemlani, 2005). In the United States, similar work has been initiated
under the Smart Code initiative. There are also several other research implementations of
automated rule-checking to assess accessibility for special populations (SMC, 2009) and for
fire codes (Delis, 1995). The GSA and the US Courts have recently supported the development
of design rule checking for federal courthouses, an early example of rule checking
applied to automating design guides (GSA, 2007). A comprehensive survey of developments
for computer representation of design codes and rule checking was reported by Fenves et al.
(1995) and Eastman et al. (2009).

SMART CODES
This refers to the electronic digital representation of the rules and regulations of the
building codes and the dictionary needed for that format. In the United States, the International
Code Council (ICC) codes will be available in XML form. To maintain consistency of
properties within the digital format of the codes, a dictionary of the properties found within the
building codes is being developed. The dictionary is being developed as part of the
International Framework for Dictionaries effort and, in the US, is being managed by the
Construction Specifications Institute (CSI) in cooperation with the ICC. This work is also
enabling the properties within the codes to be identified against appropriate tables within the
OmniClass classification system that has been developed by CSI.
Recently, a number of researchers have investigated the application of an ontology-based
approach (Yurchyshyna et al. 2009) and of semantic web information (Pauwels et al. 2009) as
possible rule checking frameworks. The first research approach works on formalizing
conformance requirements using the following methods (Yurchyshyna et al. 2009):
(i) knowledge extraction from the texts of conformance requirements into formal languages
(e.g. XML, RDF); (ii) formalization of conformance requirements by capitalizing on the domain
knowledge; (iii) semantic mapping of regulations to industry-specific ontologies; and (iv)
formalization of conformance requirements in the context of the compliance checking
problem. On the other hand the semantic web approach focuses on enhancing the IFC model
by using description language based on a logic theory such as the one found in semantic web
domain (Pauwels et. al. 2009). Because the IFC schema was not explicitly designed for
interaction with rule checking environments, its specification is not based on a logic theory.
By enhancing IFC onto a logical level, it could be possible to enable design and
implementation of significantly improved rule checking systems.
As can be seen, Smart Code systems depend on information availability and on a rule
conformance checking system. Each of these components has limitations. A major cluster
of difficulties is related to the nature of codes and standards. Building codes can be
extremely subjective in certain provisions, meaning that legal scholars are able to argue
either side of a question using accepted methods of legal discourse. The most recurring cause
of indeterminacy of code provisions is the open-textured concepts used in expressing
the provisions.
It is clear that a powerful semantics-oriented representation that encompasses most
code and standard provisions, together with the encoding of the domain knowledge, are keys to
the success of the Smart Code initiative. This paper proposes a new framework based upon XML
and the LINQ (Language Integrated Query) language to enable basic and complex levels of rules
and reasoning to be expressed both in XML, as a normative concrete syntax, and in a more
human-readable abstract syntax, to allow for effective AC3 systems.

THE ROLE OF BUILDING INFORMATION MODELING (BIM)


The primary requirement in the application of Smart Codes is that object-based building
models (BIM) must contain the necessary information to allow for complete code checking. BIM
objects normally have a family, a type and properties. For example, an object that
represents a structural column possesses a type and properties such as its material (steel, wood
or concrete) and its dimensions. Thus the requirements for a building model adequate for code
conformance checking are stricter than normal drafting requirements. The BIM models created
by typical BIM platforms such as REVIT and ARCHICAD to date do not typically include the
level of detail needed for building code or other types of rule checking. The GSA BIM Guides (GSA,
2009) provide initial examples of modeling requirements for simple rule checking. This
information must then be properly encoded in IFC by the software developers to allow proper
translation and testing of the design program or the rule checking software. IFC is currently
considered one of the most appropriate schemas for improving information exchange and
interoperability in the construction industry. New applications have been developed, capable
of parsing IFC models, interpreting and reusing the available information. These software
applications have mainly concentrated on deriving additional information concerning
specialized domains of interest. The code conformance domain places a new level of
detail and requirements on the IFC model. This should be addressed by developing the
appropriate Information Delivery Manuals (IDMs) and Model View Definitions (MVDs) for
the Automated Code Conformance Checking (AC3) domain. For instance, Figure 1 depicts the
process map of the structural design IDM (Nawari 2010) while Figure 2 expands the
illustration of the exchange requirements for code conformance checking of the design review
tasks.
Development of the required model views goes hand-in-hand with the preparation of
code conformance checking functions. Code conformance checking can be constructed upon
different types of model views in response to the exchange requirements specified in the IDM.

PROPOSED AUTOMATED CODE CONFORMANCE CHECKING FRAMEWORK (AC3)
The suggested rule-based checking system is based upon a number of enabling
technologies described earlier, namely XML Smart Codes, BIM, and LINQ (Language
Integrated Query). The framework schema of this platform is shown in Figure 3.
Figure 1. Process Map of the Structural IDM (Nawari 2011).

Figure 2. Process Map Showing the Exchange Requirements for the AC3
Framework.
Rule Base Configuration

Figure 3. Automated Code Conformance Checking Framework (AC3)


In this framework the BIM model data is represented in ifcXML and as an FBM
(Feature-Based Model), as suggested by Nepal et al. (2008). Because of the complicated query
paths and the occasional need for multiple separate queries or functions, extracting features and
properties from the original ifcXML is quite involved and leads to performance degradation.
The FBM is therefore introduced to improve performance and simplicity. It is an intermediate
XML schema, sometimes referred to as FBM-xml, that stores the information extracted from
ifcXML to enable code conformance checking. The schema of FBM-xml is simple: every
instance of a feature is an element, and all properties of a feature, with their values, are
explicitly represented as sub-elements. The FBM-xml system instantiates feature instances and
property values by directly extracting explicitly defined components and by analyzing the
geometry and topological relationships between objects in the IFC model to derive implicitly
defined features. The result is an XML data model tailored for AC3 in the structural design
domain (see Figure 4).

Figure 4. Preparing BIM model for AC3 Processing
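As a minimal illustration of this intermediate form, the following C# sketch uses LINQ to XML to construct one hypothetical fbmXML feature element, with one element per feature and each property written out as a sub-element. The element and attribute names mirror those used later in Figure 6, but they are assumptions for illustration rather than the exact schema of Nepal et al. (2008).

using System;
using System.Xml.Linq;

class FbmXmlSketch
{
    static void Main()
    {
        // One feature instance per element; each property is an explicit sub-element.
        XElement beamFeature = new XElement("feature",
            new XAttribute("ifcTitle", "ifcBeam"),                               // assumed feature type tag
            new XElement("material", "Cast-in-place concrete (nonprestressed)"),
            new XElement("is_external", "false"),
            new XElement("rebar_bottom_cover", "1.5"));                           // inches

        // The fragment would be appended to the fbmXML document queried by the AC3 rules.
        Console.WriteLine(beamFeature);
    }
}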


This study focuses on developing a framework for rule-based checking systems
utilizing LINQ and XML Smart Codes. The LINQ (Language-Integrated Query) technology, as
part of the Microsoft .NET Framework, allows query expressions to benefit from rich
metadata, compile-time syntax checking, static typing and IntelliSense. Language-integrated
query also allows a single general-purpose declarative query facility to be applied to all in-
memory information, not just information from external sources. The .NET Language-
Integrated Query defines a set of general-purpose standard query operators that allow
traversal, filtering, and projection operations to be expressed in a direct yet declarative way in any
programming language. The standard query operators allow queries to be applied to any
IEnumerable<T>-based information source. LINQ allows third parties to augment the set of
standard query operators with new domain-specific operators that are appropriate for the target
domain or technology. More importantly, third parties are also free to replace the standard
query operators with their own implementations that provide additional services such as
remote evaluation, query translation, and optimization. By adhering to the conventions of the
LINQ pattern, such implementations enjoy the same language integration and tool support as
the standard query operators.
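As a brief, purely illustrative example of these standard query operators, the C# sketch below filters, orders and projects an in-memory IEnumerable<T> of hypothetical beam records; the BeamRecord type and its values are invented for the example and are not part of the proposed framework.

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical in-memory record; any IEnumerable<T> source can be queried the same way.
record BeamRecord(string Name, double CoverInches);

class StandardOperatorSketch
{
    static void Main()
    {
        IEnumerable<BeamRecord> beams = new List<BeamRecord>
        {
            new BeamRecord("B1", 1.75),
            new BeamRecord("B2", 1.25),
            new BeamRecord("B3", 1.50)
        };

        // Traversal, filtering, ordering and projection expressed declaratively.
        var failing = beams
            .Where(b => b.CoverInches < 1.5)                 // filter
            .OrderBy(b => b.Name)                            // order
            .Select(b => b.Name + ": cover below 1.5 in.");  // project

        foreach (string line in failing)
            Console.WriteLine(line);
    }
}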
More specifically, the suggested framework focuses on LINQ to XML based data. It
is in essence a LINQ-enabled, in-memory XML programming interface that facilitates working
with XML from within the .NET Framework programming languages. The powerful
extensibility of the query architecture used in LINQ provides implementations that work over
both XML and SQL data stores. The query operators over XML (LINQ to XML) use an
efficient, easy-to-use, in-memory XML facility to provide XPath/XQuery functionality in the
host programming language.
To illustrate the concept of the AC3 system, an XML file was created for a part of the ACI
318-05 Code (ACI 318, 2005) and is depicted in Figure 5 below. The second step in
implementing automated code conformance checking is to establish the rules schema that
allows communication with the Smart Code. This is achieved by applying LINQ to the
Smart Code, as shown in the example given in Figure 6.
This section briefly describes how to use Language-Integrated Query with the Smart
Code. Standard query operators form a complete query language for IEnumerable<T>.
They appear as extension methods on any object that implements IEnumerable<T> and can be
invoked like any other method. In addition to the standard query operators, query expression
syntax is provided for five common query operators: Where, Select, SelectMany, OrderBy,
and GroupBy.
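The short fragment below, included only for illustration, shows the same simple filter written first in query-expression form and then with the equivalent standard query operator calls; the bar-size array is an invented example.

using System;
using System.Linq;

class QuerySyntaxSketch
{
    static void Main()
    {
        int[] barSizes = { 3, 5, 6, 11, 14, 18 };

        // Query-expression form (compiled into the operator calls shown below).
        var largeBars = from s in barSizes
                        where s >= 6
                        orderby s
                        select "#" + s;

        // Equivalent standard query operator (extension method) form.
        var largeBarsMethodForm = barSizes
            .Where(s => s >= 6)
            .OrderBy(s => s)
            .Select(s => "#" + s);

        Console.WriteLine(string.Join(", ", largeBars));           // #6, #11, #14, #18
        Console.WriteLine(string.Join(", ", largeBarsMethodForm)); // identical output
    }
}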
By implementing the AC3 framework described above, the following check can be
executed to examine the minimum concrete cover requirement for a reinforced concrete beam
according to ACI 318-05. The example shown below is the case of checking the beams in
a single-storey reinforced concrete building frame. The LINQ code below accesses the Smart
Code, reads the encoded provisions given by ACI 318-05 (Figure 6), and subsequently applies
them to the type of reinforced concrete beam used in the building.

<?xml version="1.0" encoding="utf-8" ?>


<ACI318>
<Year year="2005">
<Section Number = "7.7" title="Concrete Protection for Reinforcement">
<SubSection Number="7.7.1" title="Cast-in-place concrete (nonprestressed)">
<Category title ="Concrete cast against and permanently exposed to earth" >
<MinimumCover> 3 </MinimumCover>
</Category >
<Category title ="Concrete exposed to earth or weather" >
<Rebar Min="#6" Max="#18" Members="All">
<MinimumCover> 2 </MinimumCover>
</Rebar>
<BarSizes Min="#3" Max="#5" Members="All">
<MinimumCover> 1.5 </MinimumCover>
</BarSizes>
</Category >
<Category title ="Concrete not exposed to weather or in contact with ground" >
<Rebar Min="#14" Max="#18" Members="Slabs, Walls, Joists">
<MinimumCover> 1.5 </MinimumCover>
</Rebar>
<Rebar Min="#3" Max="#11" Members="Slabs, Walls, Joists">
<MinimumCover> 0.75 </MinimumCover>
</Rebar>
<Rebar Min="#3" Max="#18" Members="Beams, Columns">
<MinimumCover> 1.5 </MinimumCover>
</Rebar>
<Rebar Min="#6" Max="#18" Members="Shells, folded plate members">
<MinimumCover> 0.75 </MinimumCover>
</Rebar>
<Rebar Min="#3" Max="#5" Members="Shells, folded plate members">

Figure 5. XML Data from ACI 318-05 Code.

The first twelve lines of the code clearly illustrate the power of LINQ to extract
information from the Smart Code in a very efficient and flexible manner. The query searches
the Smart Code for the minimum cover provision, reads the values allocated for beams, and
then compares them to the actual instance of the beam in the building. The actual building
structural framing information is extracted from the BIM-generated IFC file, which is
converted into ifcXML and then into fbmXML as described previously (Figure 4). In the AC3
framework this is given by Lines 18 to 26 in Figure 6, which implement LINQ to BIM via
fbmXML. This concise example depicts the potential of automating an unlimited range of
rules, including unlimited nested conditions and branching of alternative contexts within a
specified structural design code or standard.
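As a complement to Figure 6, which mixes pseudocode conventions, the following self-contained C# sketch performs the same minimum-cover check with LINQ to XML. The file paths, the fbmXML element layout and the handling of missing data are assumptions patterned on Figures 5 and 6, not the author's exact implementation.

using System;
using System.Linq;
using System.Xml.Linq;

class Ac3MinimumCoverSketch
{
    static void Main()
    {
        // Smart Code fragment (ACI 318-05, Section 7.7.1) in the XML layout of Figure 5.
        XElement smartCode = XElement.Load(@"C:\BIM\SmartCode\XMLFile1.xml");

        // Minimum cover (in inches) for beams and columns not exposed to weather.
        double minCover = (double)smartCode
            .Descendants("Category")
            .Where(c => (string)c.Attribute("title") ==
                        "Concrete not exposed to weather or in contact with ground")
            .Elements("Rebar")
            .Where(r => (string)r.Attribute("Members") == "Beams, Columns")
            .Elements("MinimumCover")
            .First();

        // fbmXML features extracted from the BIM model (layout assumed, see Figure 4).
        XElement features = XElement.Load(@"C:\BIM\SmartCode\XMLFile2.xml");

        foreach (XElement f in features.Elements("feature")
                 .Where(f => (string)f.Attribute("ifcTitle") == "ifcBeam"))
        {
            bool isExternal = (string)f.Element("is_external") == "true";
            double actualCover = (double)f.Element("rebar_bottom_cover");

            string result = (!isExternal && actualCover >= minCover) ? "PASS" : "FAIL";
            Console.WriteLine("Beam: " + result);
        }
    }
}

In a complete AC3 implementation, the check would also return the WARNING or UNKNOWN statuses described earlier when the required information is incomplete or missing.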

CONCLUSIONS

Application of the AC3 framework in structural design has the potential to optimize and
simplify automated code and standard conformance checks by leveraging building information
that exists in the architectural and structural models created by BIM authoring platforms. The
proposed automated code conformance checking (AC3) framework has many advantages over
existing rule checking systems. The major differentiator of the AC3 lies in the abilities of
LINQ to XML as an in-memory XML programming platform.
1. XElement ACCC = XElement.Load("C:\BIM\SmartCode\XMLFile1.xml"); var c = ACCC...<Section>;


2. IEnumerable<XElement> QUERY = From i In c.<SubSection> Where (string)
3. i.@title = "Concrete Protection for Reinforcement" Select i;
4. ForEach (XElement i In QUERY) {
5. IEnumerable<XElement> QUERY2 = From j In i.< Category> Where (string)
6. i.@title = "Concrete not exposed to weather or in contact with ground"
7. Select j
8. ForEach (XElement i In QUERY2) {
9. var k = j ...<Rebar>;
10. IEnumerable<XElement> QUERY3 = From m In k Where (string)
11. m.@Members = "Beams, Columns" Select m
12. ForEach (XElement m In QUERY3) {
13. double minCover = Convert.ToDouble(m.<MinimumCover>.Value); }
14. }
15. }
16. System.Xml.XmlDocument doc = new System.Xml.XmlDocument();
17. doc.Load("C:\BIM\SmartCode\XMLFile2.xml");
18. System.Xml.XmlNodeList list = doc.GetElementsByTagName("feature");
19. ForEach(System.Xml.XmlElement j In list ) {
20. string wType = j.GetAttribute("ifcTitle");
21. If wType == "ifcBeam" {
22. string BuildingMaterial=j.Item("material").InnerText;
23. string exposure=j.Item("is_external").InnerText;
24. double conCover=j.Item("rebar_bottom_cover").InnerText;
25. }
26. }
27. Switch (BuildingMaterial) {
28. Case "Cast-in-place concrete (nonprestressed)":
29. If exposure = "false" {
30. If conCover >= minCover {result = "Pass"; else result = "Failed";}
Break;
31. Case "Cast-in-place concrete (prestressed)":
32. Break;
33. }
Figure 6. Example of LINQ to Smart Code

Language-Integrated Query provides a consistent query experience across different data models
as well as the ability to mix and match data models within a single query; it is therefore able to
express an unlimited range of rules, including unlimited nested conditions and branching of
alternative contexts within a specified domain. Furthermore, the AC3 framework provides
flexibility in encoding building code provisions and domain knowledge, the capability of
supporting user-friendly, user-defined rules, and the ability to integrate with other applications.
Increasing BIM adoption and the concomitant interest in the interoperability potential of XML
are proving to be essential catalysts in the successful adoption and further development of
automated code conformance checking (AC3) systems.
ACKNOWLEDGEMENT
The author would like to express his appreciation to the College of Design, Construction
and Planning, University of Florida, Gainesville, Florida, for funding and supporting this
research.

REFERENCES
Conover, D. (2007).”Development and Implementation of Automated Code Compliance
Checking in the U.S.”, International Code Council, 2007.
Delis, E.A., and Delis, A. (1995). “Automatic fire-code checking using expert-system
technology”, Journal of Computing in Civil Engineering, ASCE 9 (2), pp. 141–156.
Ding, L., Drogemuller, R., Rosenman, M., Marchant, D., and Gero, J. (2006). “Automating code
checking for building designs”, in: K. Brown, K. Hampson, P. Brandon (Eds.), Clients
Driving Construction Innovation: Moving Ideas into Practice, CRC for Construction
Innovation, Brisbane, Australia, pp. 113–126.
Eastman, C. M., Lee, J-M., Jeong, Y-S., and Lee, J-K. (2009). “Automatic rule-based
checking of building designs”, Automation in Construction, 18, pp. 1011–1033, Elsevier.
EDM (2009).” EXPRESS Data Manager”, EPM Technology, http://www.epmtech.jotne.com.
Fenves, S. J. (1966). “Tabular decision logic for structural design”, J. Structural Engn, 92,
pp. 473-490.
Fenves, S. J. and Garrett Jr., J. H. (1986). “Knowledge-based standards processing”, Int. J.
Artificial Intelligence Engn 1, pp. 3-13.
Fenves, S. J., Garrett, J. H., Kiliccote. H., Law. K. H., and Reed, K. A. (1995). "Computer
representations of design standards and building codes: U.S. perspective." The Int. J. of
Constr. Information Technol., 3(1), pp. 13-34.
Garrett, J. H., Jr., and S. J. Fenves, (1987). “A Knowledge-based standard processor for
structural component design” Engineering with Computers, 2(4), pp 219-238.
GSA (2007). “U.S. Courts Design Guide”, Administrative Office of the U.S. Courts, Space
and Facilities Division, GSA,
http://www.gsa.gov/Portal/gsa/ep/contentView.do?P=PME&contentId=15102&contentTyp
e=GSA_DOCUMENT .
GSA (2009). “BIM Guide for Circulation and Security Validation”, GSA Series 06 (draft).
Hietanen, J. (2006). “IFC Model View Definition Format”, International Alliance for
Interoperability.
ICC (2006). “MDV for the International Energy Conservation Code”, http://www.blis-
project.org/IAI-MVD/.
ISO TC184/SC4 (1997). “ Industrial automation systems and integration—Product data
representation and exchange” , ISO 10303-11: Description Methods: The EXPRESS
Language Reference Manual, ISO Central Secretariat.
ISO TC184/SC4 (1999).” Industrial automation systems and integration—Product data
representation and exchange:”, ISO 10303-14: Description Methods: The EXPRESS-X
Language Reference Manual, ISO Central Secretariat.
Jeong, Y-S., Eastman, C.M., Sacks, R., Kaner, I. (2009) “Benchmark tests for BIM data
exchanges of precast concrete”, Automation in Construction 18 (2009) 469–484.
Khemlani, L. (2005). “CORENET e-PlanCheck: Singapore's automated code checking
system”, AECBytes,
http://www.aecbytes.com/buildingthefuture/2005/CORENETePlanCheck.html.
Lopez, L. A., and S. L. Elam (1984). “ SICAD: A Prototype Knowledge Based System for
Conformance Checking and Design”, Technical Report, Department of Civil Engineering.
University of Illinois at Urbana-Champaign, Urbana-Champaign, IL.
Lopez, L. A., and R. N. Wright (1985). “Mapping Principles for the Standards interface for
Computer Aided Design”, NBSIR 85-3115, National Bureau of Standards, Gaithersburg,
MD.
Lopez, L. A., S. Elam and K. Reed (1989). “ Software concept for checking engineering
designs for conformance with codes and standards”. Engineering with Computers, 5,
pp.63-78.
Nawari, N. O. (2009). “Intelligent Design Codes”, The Structures Congress, 2009, Structural
Engineering Institute, ASCE, pp.2303-2312.
Nawari, N. O. (2010). “Standardization of Structural BIM”, ASCE 2011 International Workshop
on Computing in Civil Engineering, Miami, Florida, June 19-22, 2011.
SMC (2009). “automated code checking for accessibility” Solibri,
http://www.solibri.com/press-releases/solibri-model-checker-v.4.2-accessibility.html
Vassileva, S. (2000). “An approach of constructing integrated client/server framework for
operative checking of building code”, in Taking the Construction Industry into the 21st
Century, Reykjavik, Iceland, ISBN: 9979-9174-3-1, June 28–30 2000.
Benefits of Implementing Building Information Modeling for Healthcare
Facility Commissioning

C. Chen1, H. Y. Dib2 and G. C. Lasker3

1Graduate student, Department of Building Construction Management, Purdue
University, West Lafayette, IN 47907-2021; email: chenchen@purdue.edu
2Assistant Professor, PhD, Department of Computer Graphic Technology, Purdue
University, Room 331, Knoy Hall of Technology, West Lafayette, IN 47907-2021;
PH (765)494-1454; FAX (765)494-9267; email: hdib@purdue.edu
3Assistant Professor, MBA, Department of Building Construction Management,
Purdue University, Room 433, Knoy Hall of Technology, West Lafayette, IN 47907-
2021; PH (765)494-6752; FAX (765)496-2246; email: glasker@purdue.edu

ABSTRACT

Perhaps no other building type benefits more from Building Information
Modeling (BIM) than healthcare facilities, in which the coordination of Mechanical,
Electrical and Plumbing (MEP) systems is a challenging effort for all parties involved
in the project. A growing body of research focuses on the multitude of benefits
resulting from using BIM in the Architecture, Engineering, and Construction (AEC)
process of healthcare facilities. BIM provides detailed information for decision-
making and facilitates information exchanges between different parties in Design and
Construction phases. Project commissioning is essentially a communication and
validation process that begins as early in the building acquisition process as possible
and continues through owner occupancy. The benefits of adopting BIM in healthcare
facility commissioning will be discussed throughout this document. A case study of
Maryland General Hospital (MGH) follows to help illustrate the benefits that have been
brought about by using BIM in the commissioning process.
Keywords: Building Information Modeling (BIM), Healthcare Facility, Project
Commissioning, Case Study

INTRODUCTION

Healthcare is one of the grand challenges of the 21st century and the nation’s
leading industry. Despite miraculous advances in modern medical diagnostics and
interventions, healthcare in the U.S. is inconsistent, with sometimes-dismal quality,
safety and efficiency, and massive access inequities. Total healthcare construction
spending, including hospitals, medical office buildings, nursing homes, and other
health facility buildings, is forecast to be one of the highest performing
construction sectors through 2012, according to IHS Global Insight's Construction
Service. Healthcare facilities are unique because these facilities need to be open and
operational regardless of any circumstances. Building commissioning is the process
of verifying, in new construction, that all the subsystems achieve the owner's project
requirements as intended by the building owner and as designed by the building
architects and engineers. Commissioning helps to deliver to the owner a project that is
on schedule, with a reduced cost of delivery and lower life-cycle cost, and that will meet the
needs of users and occupants. Continuous commissioning focuses on the
improvement of overall system control and operations for the building and/or plant as
it is currently utilized and on meeting existing facility needs. Continuous
commissioning extends beyond the operations and maintenance program, and
optimizes the facility to its current use, which is likely different from the original
design. During the continuous commissioning process, a comprehensive engineering
evaluation is typically conducted for both building and plant functionality and system
functions. The optimal operational parameters and schedules can then be developed
based on actual conditions. An integrated approach is used to implement these
optimal schedules to ensure local and global system optimization and to ensure
persistence of the improved operational schedules.
The National Building Information Modeling Standards (NBIMS) Committee
defines BIM as “a digital representation of physical and functional characteristics of a
facility. BIM is a shared knowledge resource for information about a facility, forming
a reliable basis for decisions during its life-cycle; defined as existing from earliest
conception to demolition. A basic premise of BIM is collaboration by different
stakeholders at different phases of the life cycle of a facility to insert, extract, update
or modify information in the BIM to support and reflect the roles of that stakeholder”
(NIBS, 2007). This allows planners, designers and builders to better coordinate
details and information amongst the multiple parties involved.
THE BENEFIT OF BIM IN PROJECT MANAGEMENT

Current BIM software packages are parametric 3D modeling tools that offer the
architect a quick and reliable method to design the facility and share the details of the
design with other stakeholders involved with the project. BIM as an approach focuses
on the collection and sharing of information throughout the life cycle of the project
and the visualization of this information using the 3D model. The transfer from 2D to
3D modeling influences the design of a structure in many ways (Sacks and Barak,
2008). The authors used benchmarks of hours to measure the time savings of two
structural engineering design projects and reached the conclusion that parametric 3D
modeling is especially useful at the early stages of design. Azhar, Hein and Sketo
(2008) state that BIM represents the development and use of computer-generated n-
dimensional (n-D) models to simulate the planning, design, construction and
operation of a facility. It helps architects, engineers and constructors to visualize and
identify potential design, construction or operational problems. BIM also facilitates
the information and data flow among architects, designers, owners and contractors.
Steel, Drogemuller and Toth (2010) have studied the information exchange model
between different models, especially in the format of IFC (Industry Foundation
Classes), which combine the effort of architectural, mechanical and electrical
drawings into a compiled document. The authors concluded that collaboration and
scale are two of the most prominent characteristics of interoperability. According to
Hlotz and Horman (2006), the added detail allows stakeholders to better coordinate
information in executing and developing the projects. Grilo and Jardim-Goncalves
(2010) have studied the value proposition that interoperability of BIM makes evident.
The higher level of collaboration among participants increases cost benefits and
decreases risks; hence the reinforcement of interoperability in the AEC sector is
highly recommended.
BIM IN HEALTHCARE

Healthcare projects benefit most because of the complexity and rigorous built
environment in healthcare facilities. By modeling healthcare projects, the early
adopters of BIM have experienced reduced project costs, shortened schedules, and
increased project quality (Barista, 2007). Chellappa (2009) noted that BIM feeds
the need for Evidence-Based Design (EBD) to provide a healing environment for
patients and staff in healthcare facilities. Manning and Messner (2008) addressed the
following five aspects of why BIM benefits healthcare projects: 1. the layout of the
facility in a hospital must be arranged properly to avoid the spread of infection; 2. the
complex mechanical, electrical and plumbing systems must be coordinated; 3.
because patients are present in the hospital, the simulation of lighting and air
ventilation is also very important; 4. the operation phase of a healthcare facility also
benefits from the information created in the design and construction stages; and 5. the
savings compared to the large investment in a healthcare facility will be tremendous. Two case
studies are presented: the first is a trauma hospital. In this project, 2D conceptual
drawings were abandoned after 7 months because of discrepancies between the planning
documents and facility reality. The benefits of adopting 3D BIM modeling in this project
were: 1. the parametric design tools allowed the conversion of drawings and dimensions
between metric units (used by vendors/contractors) and imperial units (used by planning
teams and users) in minimal time, without any scaling or coding commands required; and
2. updates for drawing set cross-referencing were performed quickly and automatically.
The second case study is the renovation of a medical research laboratory. Using BIM, the
team was able to save 20% of the man-hours, compared to the company's historical data,
in calculating division and department space, which corresponds to approximately 62%
cost savings. Khanzode and Fischer (2008) have studied the benefits of BIM in
coordination between Mechanical, Electrical and Plumbing (MEP) systems. Through
the case of coordination of MEP in the new Medical Office Building (MOB) facility,
they discussed issues, such as: the role of the general contractor, specialty contractors,
the coordination of the scope of work, the coordination of software to be used,
coordination sequence, and the information exchange between designers and
subcontractors. The benefits of BIM for various stakeholders are also discussed: for
the owner, there were close to zero change orders and fast-track project delivery became
possible; for architects and engineers, less time was spent producing requests for
information during the construction phase; for the general contractor, safety on site
improved (only one injury) and more time could be devoted to planning rather than
“firefighting”; and for specialty contractors, work finished on schedule with 100%
pre-fabricated plumbing and less than 2% rework.
HEALTHCARE FACILITY COMMISSIONING

According to the American Society for Healthcare Engineering (ASHE, 2010),
commissioning is a process intended to ensure that building systems are installed and
perform in accordance with the design intent, that the design intent is consistent with
the owner’s project requirements, and that operation and maintenance staff are
adequately prepared to operate and maintain the completed facility. Through the
commissioning process, building systems can be integrated. The critical built
environment in a healthcare facility, including the control, air quality, temperature
control and acoustic systems, will be secured through the
commissioning process. Additionally, the documents created during commissioning
become a guideline for maintenance and operation. Re-commissioning during
operation and maintenance also brings savings in energy consumption (Feldbauer,
2008). Mills et al. (2005) studied the cost-effectiveness of commissioning new and
existing commercial buildings for 224 buildings, representing 30.4 million square
feet of commissioned space across 21 states. They compiled and synthesized
published and unpublished data from real-world commissioning and retro-
commissioning projects, establishing the largest available collection of standardized
information on new and existing building commissioning experience. Through data
analysis, they quantified the energy savings per square meter and the payback time. Seth
(2006) noted that the commissioning scope of work for critical healthcare
facilities should include not only traditional HVAC systems, but also the broader,
complex diagnostic, operating and recovery environments, insulation and patient
care services.
BIM IN HEALTHCARE COMMISSIONING

Healthcare facility commissioning exists in the following phases: the predesign
phase, design phase, construction phase, transition to operational sustainability, post-
occupancy and warranty phase, and retro-commissioning (ASHE, 2009). Since
Building Information Modeling is n-D modeling, it gives the commissioning team,
owner, architect, contractor and specialty contractors a more vivid expectation of the
project. The interoperability of BIM facilitates collaboration among the nurses,
surgeons, patients, and other staff so that they can contribute to the commissioning
process. How BIM may be applied throughout the different stages is described as follows:

Pre-Design Phase
The primary tasks of the pre-design phase of healthcare commissioning are
establishing the commissioning scope and selecting the commissioning team. Since
each healthcare facility has its own characteristics and budget limitations, the scope
of work (SOW) needs to be identified. With building information modeling,
information from similar projects can be found through the database. The
commissioning team and the owner then collaborate in the early stages to make
decisions about which systems to commission. An important note is that not only
HVAC systems are important for commissioning; different projects have special
systems to be commissioned. For example, if a hospital specializes in the treatment of
burn patients, then the air control systems are vital for the healing environment.
Based on experience-based judgment and information gained from the BIM
information database, the decision on the scope of work within a limited budget is
not difficult.
Design Phase
BIM is most beneficial to the commissioning process during the design phase.
The earlier that BIM is adopted in commissioning, the more beneficial it will be
(Feldbauer, 2008). A BIM based commissioning process requires the involvement of the
commissioning team in early stages, which will enhance knowledge sharing between
different parties. Communication through the BIM in the early stages helps the
commissioning team connect the owner’s project requirement (OPR) and Basis of
Design (BOD) for commissioning more tightly. In addition, a solid timeline will be
formed by BIM to guide the commissioning process. The timeline will lead the
commissioning team to keep up with the commissioning schedule efficiently. Instead
of a traditional schedule, it is a simulation of the actual commissioning practice.
Therefore, it is much more feasible and practical for the commissioning team.
Commissioning during the design phase is mainly focused on reducing design errors
and conflicts. With BIM, the commissioning experts will coordinate the mechanical,
electrical, plumbing and HVAC drawings, which are designed by different specialists.
The commissioning team will then collaborate with the architect and owner to correct
the defects or errors in the design. BIM based commissioning also enhances
Evidence-Based Design (EBD) in the long run: corrected design defects or drawing
oversights, together with the experience gained, will be accumulated, archived and
stored in the database for future EBD. BIM based
commissioning also improves energy savings performance in operation and
maintenance phases.

Construction Phase
Commissioning in the construction phase will influence operation and
maintenance directly. During the construction phase, the commissioning scope of
work is enforced, meaning that applicable equipment and building systems are
installed properly and receive adequate start-up and testing by installation contractors.
Also, the maintenance and operation manuals and the Testing, Adjusting, and
Balancing (TAB) reports have to be reviewed by the commissioning engineers. Other
tests, including pressure tests and witnessed functional performance tests, will also be
conducted. BIM provides a functional, detailed 3D drawing, so the engineers can find
problems quickly by comparing what they see on site with the documented model.

Transition to Operational Sustainability


During the transition to operational sustainability phase, the project will be
turned over to the operation and maintenance staff. The understanding and
communication between the O&M (Operation and Maintenance) staff and
constructors in this phase will largely reduce mistakes or problems in the O&M. The
Commissioning team will provide useful training courses for the maintenance and
operation staff. While training occurs, documents regarding maintenance and
operation will be delivered to the O&M team. Building information modeling
consolidates all the useful information for O&M in a form that is more systematic,
complete, and easily referenced or understood by the O&M staff. Through BIM,
operation and maintenance staff receive the whole picture of the project, as well as
details related to their specialty, without having to consult heavy paper-based drawings and
operation manuals. Additionally, BIM will form a baseline model for the facility. In
the operation and maintenance phases, the staff can benchmark performance of the
facility with the baseline model to detect system failure in the early stage. This is
particularly important for a healthcare facility, because the built environment is
extremely important for healing patients. Also, the baseline model will provide
reference for future redesign, and other research activity.

Retro-Commissioning
Retro-commissioning is also called continuous commissioning. The purpose
of this commissioning is to solve the conflicts between systems and to improve the
building energy savings during O&M. When it is realized that the facility consumes
more energy than necessary, the energy efficiency issues are analyzed. BIM is
able to detect problems in the operation of systems by simulating and analyzing which
part of the system is responsible for the energy consumption. The interoperability of BIM
allows the data to be exported into energy analysis software, such as EnergyPlus, to detect
the problems in the operation of systems. When the problems are found, the BIM based
commissioning team is able to devise the simplest and most effective method to coordinate
the operation of systems and achieve the energy saving goals.
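The following C# fragment is a purely illustrative sketch of this benchmarking idea: measured monthly energy use is compared against the baseline values kept with the model, and months that deviate beyond a chosen tolerance are flagged for investigation. The consumption figures, the 10% tolerance and the flat-array data layout are assumptions, not values from any actual facility or from the case study below.

using System;
using System.Linq;

class EnergyBenchmarkSketch
{
    static void Main()
    {
        // Baseline consumption predicted for the facility (kWh per month) and metered values.
        double[] baselineKwh = { 410000, 395000, 402000, 388000 };
        double[] measuredKwh = { 415000, 455000, 405000, 392000 };
        const double tolerance = 0.10;   // flag deviations larger than 10%

        var flagged = baselineKwh
            .Select((baseline, i) => new
            {
                Month = i + 1,
                Deviation = (measuredKwh[i] - baseline) / baseline
            })
            .Where(x => Math.Abs(x.Deviation) > tolerance);

        foreach (var f in flagged)
            Console.WriteLine($"Month {f.Month}: {f.Deviation:P1} deviation from baseline - investigate system operation");
    }
}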
CASE STUDY: BIM BASED HEALTHCARE FACILITY COMMISSIONING
IN MARYLAND GENERAL HOSPITAL

The Maryland General Hospital (MGH) is an ideal example of the ways in which BIM
based commissioning during the construction and transition to operational sustainability
phases benefits a project.

Project Overview
Maryland General Hospital is a top-notch, university-affiliated teaching
hospital. The project is the 92,500 sq ft Central Care Expansion (15,534 square feet of
renovated space and 77,000 square feet of new space), with a budget of more than
$57 million. The SOW includes 8 new operating rooms, one of which is dedicated to
ophthalmology; two dedicated endoscopy suites;
one dedicated cystoscopy suite; a pre-surgical unit with 14 private patient rooms and
2 inpatient holding bays; and a post-anesthesia care unit with 20 recovery bays and 2
isolation rooms. There are also an updated pharmacy and laboratory, and family waiting
areas with private consultation rooms and elevators.

Commissioning Work Overview


Near the completion of construction, the following systems need to be
commissioned: a new 2000 KVA normal power substation, a new 500 KW emergency
generator and paralleling switchgear, three new automatic transfer switches and
distributions, 2 new 650-ton electric centrifugal chillers and 650-ton cooling towers,
temperature and humidity systems, and duct work, air handlers, dampers, and fans.
For so many commissioning items, closeout documentation and maintenance and
operation information need to be collected and managed. Because the documents are
paper-based, it is quite possible that information will be omitted and the operations
staff will have to make repairs, which will increase the facility's life-cycle cost. In order to
achieve better life-cycle cost performance, BIM technology is used during the design and
construction processes.

BIM Application and Benefits


The 3D BIM model of the mechanical system was created in Tekla Structures, which
enables the commissioning team to understand the system better. The BIM model also
connects data and documents together. In this project, a centralized database is
created, which is used to compile the data generated during the construction stage.
During the commissioning phase, the commissioning team can utilize all information
and documents. Instead of carrying field notebooks or paper based inspection forms,
the Tablet PC is used by site commissioning workers. The Vela commissioning
software on mobile computers can electronically access documents and complete
QA/QC inspections, worklists, punchlists, and field reports. After the mechanical system
is designed, each unit is identified with bar codes and tagged and can be connected to
maintenance, warranty and inspection documents. This information can then be sent
to the central database through Bluetooth wireless connectivity by simply clicking a
button on the Tablet PC. The commissioning team can also share more information with
the central database in .csv format. For example, when commissioning an air
handling unit, the commissioner scans the bar code using the Tablet PC, and the Vela
software then shows all documents related to the unit on the PC. The field
commissioner updates the document or information through the PC, and the
information for the unit is automatically updated in the central BIM file. The
BIM central file will play an important role in operation and maintenance. In terms of
benefits, first of all, using BIM in the commissioning process makes the whole project
more transparent and understandable to the whole commissioning team, even though
connecting the new mechanical system to the existing system is very demanding.
Secondly, all information can be stored and shared on the tablet PC, and many errors
are avoided because the large amount of paper work is eliminated. Also, the
information sharing through Bluetooth is more accurate and rapid. Thirdly, when the
commissioning process is completed, the Tablet PC will be handed over to MGH
facilities management staff for use in ongoing operations. Data from Tekla and Vela
systems are imported into the facility management system in use at the hospital for
immediate availability. Tekla and Vela systems will be used to visualize and manage
documents and data updates to systems and equipment.
CONCLUSION

Commissioning the healthcare facility will improve the performance of the
building from a life-cycle management perspective. In this paper, the application
of BIM in healthcare facility commissioning has been discussed for the different stages.
The case study of the expansion project of Maryland General Hospital demonstrates the
benefits of BIM in improving the efficiency of healthcare facility commissioning.
However, the case study only focuses on using BIM in commissioning during the
construction phase. Future studies need to investigate other cases for the benefits in
other phases.
REFERENCES

ASHE. (2010). Healthcare facility commissioning guideline. ASHE, Chicago.


Azhar, S., Hein, M., and Sketo, B. (2008) “Building information modeling (BIM):
benefits, risks and challenges.” Proceedings of the AACE annual meeting.
Chellappa, J. R. (2009). “BIM+Healthcare: utilization of BIM in the design of a
primary healthcare project”. Doctoral dissertation, School of Architecture,
University of Hawai’i, Hawaii, USA.
Claridge, D. E. (2004). “Using simulation models for building commissioning.”
Proceeding of the fourth international conference for enhanced building
operations, Paris, France, October 18-19.
Enache-Pommer, E., Horman, M. J., Messner, J. I. and Riley, D. (2010), “A unified
process approach to healthcare project delivery: synergies between greening
strategies, lean principle and BIM.” Construction Research Congress, ASCE,
1376-1385.
Feldbauer, R. R. (2004) “Commissioning healthcare building and equipment system.”
ASHE,www.ashe.org/resources/management_monographs/member/pdfs/mg2
004feldbauer.pdf (Oct. 10, 2010).
Fu, C., Aouad, G., Lee, A., Mashall-Ponting, A., and Wu, S. (2006) “IFC model
viewer to support nD model application.” Automation in Construction,
15(2006), 178-185.
Grilo, A. and Jardim-Goncalves, R. (2010) “Value proposition on interoperability of BIM
and collaborative working environments.” Automation in Construction,
19(2010), 522-530.
Khanzode, A. and Fischer, M. (2008). “Benefits and lessons learned of implementing
building virtual design and construction (VDC) technologies for coordination
of mechanical, electrical and plumbing (MEP) systems on a large healthcare
project.” ITcon, vol. 13, 324-341.
Manning, R. and Messner, J. I. (2008). “Case studies in BIM implementation for
programming of healthcare facilities.” ITcon, vol. 13, 446-457.
Mills, E., Bourassa, N., and Piette, M. (2005). “The cost-effectiveness of
commissioning new and existing commercial building: lessons from 224
buildings.” Proceedings of the National Conference on Building
Commissioning, May 4-5.
Sacks, R. and Barak, R. (2008) “Impact of three-dimensional parametric modeling of
buildings on productivity in structural engineering practice.” Automation in
Construction, 17(2008), 439-449.
Seth, A., Bjorklund, A. and Fournier, D. (2006). “Commissioning scope of work for
critical healthcare facilities.” Proceedings of the National Conference on Building
Commissioning, San Francisco, USA, May 19-21.
Tse, T. K., Wong, K. A. and Wong, K. F. (2005) “The utilization of building
information models in nD modeling: a study of data interfacing and adoption
barriers.” ITcon, Vol.10, 85-110.
Vela System Inc. and Baron Marlow Inc. (2008) “Connecting BIM to
Commissioning, Handover and Operations”.
<cgsbmc.com/offices/baltimore/MarylandGH__casestudy.pdf> (Nov. 5, 2010)
A Real Time Decision Support System for Enhanced
Crane Operations in Construction and Manufacturing

Amir Zavichi1, Amir H. Behzadan2

1Ph.D. Student, Department of Civil, Environmental, and Construction Engineering,
University of Central Florida, Orlando, FL, 32816; PH (407) 823-2480; FAX (407)
823-3315; Email: amir.zavichi@knights.ucf.edu
2Wharton Smith Faculty Fellow and Assistant Professor, Department of Civil,
Environmental, and Construction Engineering, University of Central Florida,
Orlando, FL 32816; PH (407) 823-2480; FAX (407) 823-3315; email:
abehzada@mail.ucf.edu

ABSTRACT
Cranes are among the most expensive pieces of equipment in many
construction projects as well as freight terminal operations, shipyards, and
warehouses. Despite their wide range of application, a vast majority of cranes still in
use do not feature advanced automation and sensor technologies. A typical crane
operator mostly uses visual assessment of the jobsite conditions which may be
enhanced through a signalperson on the ground. However, the lack of an integrated
decision support system which takes into account the evolving work conditions and
the time and space constraints may lead to delays due to inefficient prioritization of
crane service requests. In a longer term, this may affect or even change the project
critical path which will ultimately lead to increased project time and cost. This paper
presents the latest results of an ongoing study which aims to design and implement an
automated crane decision support system to help crane operators fulfill service
requests in the most efficient order.

INTRODUCTION
The construction industry still lags behind most manufacturing and industrial
operations where transforming conventional activities into fully automated processes
has resulted in significant increases in productivity and lowered the overall project
cost (Groover 2008). Recently, automating construction activities at various levels
(e.g. design, installation, and operation) through the application of robotics and
automation has been explored in two areas: hard robotics, which deals with developing
new robotic systems, and soft robotics, which centers more on software and
information technology (IT) by enhancing the efficiency of existing machines
(Balaguer and Abderrahim 2008). While only a few robots have succeeded in finding
their way into the construction industry (Gambao et al. 1997, Balaguer et al. 2000, Hasegawa 2006,
Naito et al. 2007), research has been mostly focused on investigating soft robotic
techniques in construction (Everett 1993, Rosenfeld 1995 and 1998, Balaguer and
Abderrahim 2008, Lee et al. 2009). Some researchers investigated the possibility of
automating existing construction equipment with an ultimate objective of increasing
project efficiency (Everett 1993, Rosenfeld 1995 and 1998, Lee et al. 2009). Among
such projects, automating crane operations has been of major interest due to the fact
that cranes are typically the most expensive pieces of equipment in many construction
projects and activities that rely on crane service usually control the project critical
path. Previous research in this area mostly falls into two categories: optimization of
crane layout pattern, in which the main objective is to find the best number of cranes
and the optimum location for each crane in order to satisfy criteria such as balancing
workload or minimizing spatial conflicts between cranes and other moving resources
on the site (Zhang et al. 1999, Al-Hussein 2005, Tantisevi and Akinci 2008), and
planning of physical crane motions, which includes the design and implementation of
tools to help in navigating the motions of the end manipulator (i.e. crane hook) and
other body parts (e.g. boom, jib, trolley) from the moment a load is picked up until it
is delivered to the desired location (Everett and Slocum 1993, Rosenfeld 1995, Lee
2009). The missing link between these two bodies of research is the need for a tool
that helps a crane operator decide the sequence of fulfilling service requests received
from crews working on a jobsite that yields to maximum production rate and
minimum operations time and cost. This gap of knowledge has been identified in the
presented research and is referred to as the decision- making phase. In this phase of
the crane operation cycle, the operator should prioritize crane service requests and
create a job sequence list given constraints such as idle times of each crew requesting
a crane service, significance of ongoing crew tasks, and total resource idle times.
Figure 1 shows the schematic overview of major areas with high potential for
automation in crane operations.

Figure 1. Schematic Overview of Crane Automation Process

PROBLEM DESCRIPTION
Cranes are among the most expensive equipment in a typical construction
jobsite. The total cost of a crane includes the procurement (or rental) cost, operation
and maintenance (fuel, oil, parts) costs, and crane operator’s salary. Traditionally, a
crane is operated by a single crane operator. As shown in Figure 2, a signalperson
may assist the crane operator by giving hand or audio signals for lifting, swinging,
and lowering loads especially from and onto blind spots. In addition, a crane operator
uses his or her visual assessment and personal judgment or the help of an on-duty
superintendent to decide the order of tasks to fulfill if there are several service
requests from crews. This decision-making process could be biased towards certain
activities and, as a result, may lead to longer operation times, which can eventually
alter the project critical path.

Figure 2. A Signalperson Is Giving Hand Signals to the Crane Operator


Given that cranes are used when there is a major need for material hoisting or
moving, achieving a minimum operations time requires a tool that assists the crane
operator in finding the optimal task sequence which guarantees a minimum total
travel time of the crane hook between a series of origin (loading) and destination
(unloading) points. In addition, such a tool must be able to detect and respond to
evolving work conditions that may delay crane service. Such conditions include, but
are not limited to, changes in material or crew locations, the crane operator’s skill and
robustness in navigating the hook between points of interest, and potential space
constraints such as existing obstacles blocking the hook’s motion path. This paper
presents the initial results of ongoing research that aims to design and
implement a fully automated crane decision support system. As shown in Figure 3,
this problem can be best described using graph theory. In this figure, a bipartite
graph is used to show the travel times between crane (T) nodes, crew (C) nodes, and
material (M) nodes in a construction site in which material is delivered from m
storage areas to n working crews using k cranes.

Figure 3. Bipartite Travel Time Graph


Each crew sends their requests to a crane (or cranes) to receive certain material, and
the crane operator(s) should then decide which request to fulfill first, second, and so
on in order to minimize the total travel time(s) of crane hook(s) and ultimately lower
the overall project time and cost. If w of the n crews require crane service in a project
with only one crane, there will be w possible ways for the crane operator to choose
the crew that receives the crane service first. As soon as this crew is selected, the
crane hook must travel from its existing position to the requested material storage
area, load the requested material, hoist and swing towards the location of the
requesting crew, and unload the material. Subsequently, the operator should choose
the next crew for crane service from a total of w-1 remaining crews. This process will
continue until all outstanding crane service requests are fulfilled, which implies
that there will be a total of w! (the number of permutations of the w requests) possible ways to fulfill all
crane requests. Since w! grows rapidly with w, the challenge is to design a
robust automated method to determine the optimal sequence of tasks that yields the
minimum completion time.
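To make this growth concrete, the short sketch below (an illustration only, not part of the presented system; the node labels and travel times are hypothetical) enumerates all w! possible service orders by brute force and reports the one with the lowest total hook travel time.

```python
# Brute-force baseline: try every order of the outstanding requests (w! sequences).
# All travel times (minutes) and location labels are hypothetical illustration values.
from itertools import permutations

TIMES = {  # symmetric hook travel times between labeled locations
    frozenset({"T", "M1"}): 1, frozenset({"T", "M2"}): 1, frozenset({"T", "M3"}): 2,
    frozenset({"M1", "C1"}): 5, frozenset({"M1", "C2"}): 2, frozenset({"M1", "C3"}): 6,
    frozenset({"M2", "C1"}): 3, frozenset({"M2", "C2"}): 7, frozenset({"M2", "C3"}): 4,
    frozenset({"M3", "C1"}): 6, frozenset({"M3", "C2"}): 3, frozenset({"M3", "C3"}): 5,
}

def t(a, b):
    """Travel time of the crane hook between locations a and b."""
    return TIMES[frozenset({a, b})]

# Outstanding requests: crew Ci asks for material from storage area Mj.
requests = [("C1", "M2"), ("C2", "M3"), ("C3", "M1")]

def total_time(order):
    """Hook starts at T; for each request it travels to the storage area, then to the crew."""
    position, total = "T", 0
    for crew, storage in order:
        total += t(position, storage) + t(storage, crew)
        position = crew
    return total

best = min(permutations(requests), key=total_time)  # examines all w! orders
print(best, total_time(best))
```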

METHODOLOGY
Linear programming (LP) has evolved as a method for allocating scarce
resources among various activities in an optimal manner and is one of the most
widely used operations research tools and decision-making aids in the manufacturing
industry, financial sectors, and service organizations (Lawler et al. 1985, Koch et al.
2009). Several classes of LP optimization problems can be graphically represented in
a network model (Bazaraa et al. 2009). A network model consists of a set of nodes
and arcs, and functions associated with arcs and/or nodes (Winston 2003). Using this
terminology, the problem of crane operations optimization can be categorized as a
network problem. The authors have investigated several network models to find the
most efficient formulation that yields a minimized total travel time of crane hook.
One of the most promising ways to formulate the crane operations optimization
problem is to cast it as an equivalent transportation problem,
which deals with the physical distribution of products from supply points to demand
points (Ojhaa et al. 2010) with the goal of minimizing shipping costs while the
need of each arrival area is met and every shipping location operates within its
capacity. However, the major limitation that hampers the use of the original
transportation problem to solve the problem of optimizing crane operations is that
unlike the transportation problem, in which more than one demand can be satisfied at
the same time (no limit on transport means), the crane optimization problem includes
only a limited number of cranes on a jobsite. In addition, demands can be fulfilled
from any supply point in the transportation problem whereas in the crane optimization
problem, demands are targeted (i.e. each crew demands material from a specific
storage area). The shortest path problem is another LP method which seeks to
minimize the total length of a path between any two given nodes (Winston 2003).
This class of LP problem cannot be directly applied to the crane optimization problem
either mainly because it does not guarantee a “continuous” path which covers all
nodes. Considering the limitations of the transportation and the shortest path
problems, the authors developed a mathematical model based on the Traveling
Salesman Problem (TSP) with Dantzig, Fulkerson and Johnson (DFJ) formulation.

TSP Formulation
The main objective of the TSP is to find the shortest route of a traveling
salesperson that starts at a home city, visits several other cities, and finally returns to
the same home city. The distance travelled in such a tour will depend on the order in
which the cities are visited and, thus, the problem is to find an optimal order of the
cities (Gutin and Punnen 2004). The TSP is a typical “hard” optimization problem, and
solving a TSP with a large number of nodes may become a very difficult, if not
intractable, task (Gutin and Punnen 2004, Applegate et al. 2006). The formulation of a
TSP problem starts with introducing a graph G = (V, A), where V is a set of n vertices
and A is a set of arcs or edges. Let C = (c_{ij}) be a distance (or cost) matrix associated
with A. The TSP will then try to determine a minimum-distance circuit passing
through each vertex once and only once. Such a circuit is known as a tour or
Hamiltonian circuit (or cycle) (Laporte 1992, Gutin and Punnen 2004). Several exact
algorithms have been proposed for the TSP, among which DFJ is one of the earliest
formulations that can be explained in the context of integer LP (Applegate et al. 2006).
In this algorithm, a binary variable x_{ij} is associated with every arc (i, j) and set equal to
1 if and only if arc (i, j) is used in the optimal solution (i \neq j). The objective function
is then to minimize \sum_{i \neq j} c_{ij} x_{ij}, subject to the following constraints:

\sum_{j : (i,j) \in A} x_{ij} = 1 \qquad \forall i \in V   (1)

\sum_{k : (k,i) \in A} x_{ki} = 1 \qquad \forall i \in V   (2)

\sum_{i \in S} \sum_{j \in \bar{S}} x_{ij} \geq 1 \qquad \forall S \subset V,\ 2 \leq |S| \leq n-2   (3)

x_{ij} \in \{0, 1\} \qquad \forall i, j \in V,\ i \neq j   (4)


In this formulation, constraints (1) and (2) are degree constraints which
specify that every vertex is left exactly once (constraint (1)) and entered exactly once
(constraint (2)). Constraint (3) is called the subtour elimination constraint which
prohibits the formation of subtours (i.e. tours on subsets of less than n vertices) since
the optimum solution must consist of a single continuous path rather than two or more
separate paths. If there were such a subtour on a subset S of vertices, this subtour
would contain |S| arcs and as many vertices. Because of the degree constraints, subtours
over one vertex (and hence, over n-1 vertices) cannot occur. Therefore, it is valid to
define constraint (3) for 2 \leq |S| \leq n-2 only. In constraint (3), \bar{S} = V \setminus S (the complement
of S). The geometric interpretation of the connectivity constraint (3) is that in every TSP
solution, there must be at least one arc pointing from S to its complement \bar{S}. In
other words, S cannot be disconnected (Laporte 1992). Finally, constraint (4) imposes
binary conditions on the variables.
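As a hedged illustration of constraints (1)-(4), the sketch below solves a small DFJ instance with the open-source PuLP modeler; the distance matrix is invented for illustration, and the paper itself does not prescribe a particular LP package. For the small instance sizes considered here, all subtour-elimination constraints (3) can simply be enumerated.

```python
# DFJ integer-LP sketch for a small TSP instance (hypothetical costs), using PuLP.
import itertools
import pulp

n = 5
c = [[0, 3, 4, 2, 7],   # c[i][j]: illustrative arc costs; diagonal entries are unused
     [3, 0, 4, 6, 3],
     [4, 4, 0, 5, 8],
     [2, 6, 5, 0, 6],
     [7, 3, 8, 6, 0]]

arcs = [(i, j) for i in range(n) for j in range(n) if i != j]
prob = pulp.LpProblem("TSP_DFJ", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", arcs, cat="Binary")              # constraint (4)

prob += pulp.lpSum(c[i][j] * x[(i, j)] for (i, j) in arcs)      # objective

for i in range(n):
    prob += pulp.lpSum(x[(i, j)] for j in range(n) if j != i) == 1  # leave i once, (1)
    prob += pulp.lpSum(x[(j, i)] for j in range(n) if j != i) == 1  # enter i once, (2)

# Subtour elimination (3): at least one arc must leave every proper subset S.
for size in range(2, n - 1):
    for S in itertools.combinations(range(n), size):
        S_bar = [v for v in range(n) if v not in S]
        prob += pulp.lpSum(x[(i, j)] for i in S for j in S_bar) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
tour_arcs = [(i, j) for (i, j) in arcs if x[(i, j)].value() > 0.5]
print(pulp.LpStatus[prob.status], pulp.value(prob.objective), tour_arcs)
```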

Application of TSP in Crane Decision-making Optimization


In the crane optimization problem, each crew may or may not send a crane
service request to receive material from a specific storage area at a certain time. As a
result, there may be fewer than n total requests to fulfill, and the crane operator cannot
simply seek the shortest path that contains all n nodes. Instead, the final
optimal solution must contain all the arcs representing crew requests. In addition, it is
assumed that once material is picked up (from a storage node), the crane hook travels
to a crew node, and likewise, when material is unloaded (on a crew node), the crane
hook travels back to the next storage node. Hence, the crane hook never travels
directly from one material node to the other and so, material nodes are not connected
to each other in the bipartite travel time graph. Similarly, the crane hook never travels
from one crew node to the other and hence, crew nodes are not connected to each
other in the bipartite travel time graph. In order to find the minimum travel time for
all w outstanding crew requests, the original TSP is simultaneously applied to w sub-
problems. Each sub-problem is derived by assuming a certain crew (out of all crews
with outstanding requests) to be the last crew receiving crane service and as a result,
there will be w different sub-problems that need to be independently solved using the
TSP formulation. This method will thus substantially reduce the necessary
calculations as it only requires solving w smaller sub-problems rather than w!
problems by Brute force. Figure 4 is a graphical illustration of the bipartite travel time
graph in which each arc weight represents the travel time of the crane hook along that
arc in minutes. A sample crew request list received by the crane operator is also
shown in this Figure for which w = 3 since there are three outstanding crane service
requests to be fulfilled (crew 1 requests material from storage area 2, crew 2 requests
material from storage area 3, and crew 3 requests material from storage area 1).
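A small sketch of this decomposition is given below; it reuses the hypothetical travel times, requests, and total_time() helper from the earlier brute-force sketch. For clarity, each sub-problem is solved here by plain enumeration over the remaining requests, whereas the paper formulates each sub-problem as a DFJ integer LP, which is where the computational saving comes from.

```python
# Decomposition sketch: fix each crew in turn as the last one served, solve that
# sub-problem, and keep the overall best sequence. Reuses requests, permutations,
# and total_time() from the brute-force sketch above (hypothetical data).
def best_for_last(last_request):
    """Best service order under the assumption that last_request is fulfilled last."""
    others = [r for r in requests if r != last_request]
    candidates = [list(order) + [last_request] for order in permutations(others)]
    return min(candidates, key=total_time)

sub_solutions = {last: best_for_last(last) for last in requests}  # w sub-problems
overall_best = min(sub_solutions.values(), key=total_time)
print(overall_best, total_time(overall_best))
```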

Figure 4. Graphical Illustration of a Bipartite Travel Time Graph


Note that in the example shown in Figure 4, there is only one crane available and for
simplicity, travel times are assumed to be constant. However, the presented method is
capable of dynamically solving the optimization problem for variable travel times.
Also, each solid arrow shows an outstanding crane request for which x_{ij} = 1. In order
to formulate and solve this example using the modified TSP method, three sub-
problems first need to be created. As shown in Figure 5(a), the first sub-problem is
created assuming that crew 3 receives the last crane service and as a result, node 7
(representing crew 3) is visited last and thus, is not connected to any outgoing arcs.
Similarly, the second sub-problem is shown in Figure 5(b) in which crew 2
(represented by node 6) receives the last crane service. Finally, sub-problem 3
represents the case in which crew 1 (represented by node 5) receives the last crane
service and is shown in Figure 5(c). Also shown in Figures 5(a), (b), and (c) are how
each sub-problem is formulated in DFJ, the optimal solution obtained for each sub-
problem using available LP analysis tools, and the sequence of nodes visited by the
crane hook. As shown in this figure, sub-problem 2 yields a total travel time of 26
minutes, while sub-problems 1 and 3 both result in a total travel time of 24 minutes.
Hence, total crane operations time will be minimized if the operator chooses either
the path highlighted in sub-problem 1 (i.e. nodes 1, 3, 5, 4, 6, 2, and finally 7) or the
path highlighted in sub-problem 3 (i.e. nodes 1, 2, 7, 4, 6, 3, and finally 5).

CONCLUSIONS
Previous research in areas such as optimization of crane layout pattern and planning
of physical crane motion has shown the high potential of automating crane operations
in improving productivity and decreasing the overall project cost.

Figure 5. Graphical Representation, Formulation, and Optimal Solutions of Sub-problems (a), (b), and (c)

This is mainly due to the fact that cranes are often the most expensive pieces of equipment in
construction and manufacturing, and activities that rely on crane service mostly fall on
the project critical path. Despite previous work in this field, crane operators still rely
on their visual assessment and personal judgment of jobsite conditions and ongoing
activities to decide when and in what order crane service requests are fulfilled. This
subjective and often imprecise decision-making can lead to work delays and may
even affect the project time and cost in the long term. In this research, an innovative
optimization method was developed which takes advantage of modified LP and is
derived from the original TSP with DFJ formulation and sub-tour elimination. This
method will serve as the backbone of an automated decision support system that
assists crane operators in prioritizing outstanding crane service requests based on
locations of crews and material, as well as the latest position of the crane hook. A
sample crane operations case was also presented and solved using the presented
method.

REFERENCES
Al-Hussein, M., Alkass, S., and Moselhi, O. (2005). “Optimization Algorithm for Selection and
on Site Location of Mobile Cranes.” J. of Const. Engrg. and Mngt., ASCE, 131(5), 579-590.
Bazaraa, M. S., Jarvis, J. J., and Sherali, H. D. (2009). Linear Programming and Network Flows,
Fourth Edition, John Wiley and Sons, New York, NY.
Balaguer, C., and Abderrahim, M. (2008) Robo. and Auto. in Const. , I-Tech Education and
Publishing KG, Vienna, Austria.
Balaguer, C., Giménez, A., Padron, V., and Abderrahim, M. (2000). “A climbing autonomous
robot for inspection applications in 3D complex environment.” Robotica, 18(3), 287-297.
Applegate, D. L., Bixby, R. E. and Cook, W. J. (2006) The Traveling Salesman Problem: A
Computational Study, Princeton University Press, Princeton, NJ.
Everett, J.G., and Slocum, A.H. (1993). ”CRANIUM: device for improving crane productivity
and safety.” J. of Const. Engrg. and Mngt, ASCE, 119(1), 23–39.
Gambao, E., Balaguer, C. Barrientos, A., Saltaren, R., and Puente, E. (1997). “Robot assembly
system for the construction process automation.” IEEE international Conference on Robotics
and Automation (ICRA’97) Albuquerque (USA), 46-51.
Gutin, G., and Punnen, A. P. (2004). The Traveling Salesman Problem and Its Variations, Kluwer
Academic Publishers, Dordrecht, Netherlands.
Groover M. P. (2008) Automation, Production Systems, and Computer-Integrated Manufacturing,
Third Edition, Pearson Education, Upper Saddle River, NJ.
Hasegawa, Y. (2006). “Construction Automation and Robotics in the 21st century.” 23rd
International Symposium on Robotics and Automation in Construction (ISARC’06),
October 2006, Tokyo, Japan.
Huang, C., Wong, C.K., and Tam, C.M. (2010) “Optimization of tower crane and material supply
locations in a high-rise building site by mixed-integer linear programming.” J. of Auto. in
Const., Elsevier, 19(5), 656-663.
Koch, S., König, K., and Wäscher, G. (2009). “Integer linear programming for a cutting problem
in the wood-processing industry: a case study.” Journal of International Transactions in
Operational Research, John Wiley, 16(6), 715–726.
Lawler, E., Lenstra, J., Rinnooy A., and Shmoys, A. (1985) The Traveling Salesman Problem,
John Wiley, New York, NY.
Laporte, G., (1992). “The Traveling Salesman Problem: An overview of exact and approximate
algorithms.” Euro. J. of Oper. Res., Elsevier Science, 59, 231-247.
Lee, G., Kim, H., Lee, C., Ham, S., Yun, S., Cho, H., Kim, B., Kim, G., Kim, K. (2009). “A laser-
technology-based lifting-path tracking system for a robotic tower crane.” Automation in
Construction, Elsevier Science, 18, 865-874.
Naito, J., Obinta, G., Nakayama, A., and Hase, K. (2007). “Development of a Wearable Robot for
Assisting Carpentry Workers.” International J. of Adv. Robo. Sys., In-Tech, 4(4), 431-436.
Ojhaa, A., Dasb, B., Mondala, S., and Maitia, M. (2010). “A solid transportation problem for an
item with fixed charge, vehicle cost and price discounted varying charge using genetic
algorithm.” Applied soft computing, Elsevier Science, 10(1), 100-110.
Rosenfeld, Y. (1995). “Automation of existing cranes: from concept to prototype.” Automation in
Construction, Elsevier Science, 4, 125-138.
Rosenfeld, Y., and Shapira, A. (1998). “Automation of existing tower cranes: economic and
technological feasibility.” Auto. in Const., Elsevier Science, 7 (4), 285–298.
Tantisevi, K., and Akinci, B. (2008). “Simulation-Based Identification of Possible Locations for
Mobile Cranes on Construction Sites.” J. Comp. in Civ. Engrg., ASCE, 22(1), 21-30.
Winston, W. (2003). Operations Research: Applications and Algorithms, 4th Edition, Duxbury
Press, Philadelphia, PA.
Zhang, P., Harris, F. C., Olomolaiye, O.P., and Holt, G.D. (1999). “Location Optimization for a
Group of Tower Cranes.” J. of Const. Engrg. and Mngt., ASCE, 125(2), 115-122.
The Competencies of BIM Specialists: a Comparative Analysis of the
Literature Review and Job Ad Descriptions

M. B. Barison1, 2 and E. T. Santos2


1 Department of Mathematics, Center of Exact Sciences, State University of Londrina,
Celso Garcia Cid PR 445 Km 380, Londrina, PR 86051-990, Brazil; PH
+55(43)3371-4226; FAX +55(43)3372-4236; email: barison@uel.br
2 Department of Civil Construction, Polytechnic School, University of São Paulo, Av.
Prof. Almeida Prado, Trav. 2, n. 83, São Paulo, SP 05508-900, Brazil; PH +55(11)
3091-5284; FAX +55(11)3091-5715; email: eduardo.toledo@poli.usp.br

ABSTRACT

An effective implementation and use of BIM technologies and processes
requires the inclusion of new professionals in AEC organizations. Each position must
have particular competencies. In seeking to fulfill the market demand for these
professionals, universities are making an effort to integrate BIM in their curricula,
especially in the fields of architecture, civil engineering and majors in construction
management. However, to ensure a proper planning of how to integrate BIM in the
course programs, it is necessary to find out what competencies are required from BIM
professionals. The same information is useful for companies that adopt management
competency models or those that need to select and recruit BIM specialists. This
paper describes the results of a research project based on the Content Analysis of
BIM job ads and the technical literature. Competency lists from both sources were
compiled and compared. The results of the analysis show that, although there are
different focuses, both the job market and specialists are generally in agreement about
which competencies a BIM Manager should have, to perform well.
INTRODUCTION
With the dramatic rise in the demand for BIM technology worldwide, the
shortage of people with BIM competencies has become a significant constraint that
delays and slows down the use of BIM (Sacks and Barak 2010). Training has been
identified as a key issue in adopting BIM (Gu and London 2010), as team members
increasingly need the appropriate knowledge and skills that can allow them to
participate in BIM-enabled processes.
Higher education institutions are unable to meet this demand in the short term.
This means that companies will have to quickly develop BIM skills internally among
their employees (Smith and Tardif 2009). An alternative strategy that has been
adopted to lessen this problem is to outsource services by hiring specialized
companies for staff training or to help in the construction of models. A medium and
long-term solution is to teach BIM competencies at schools. Education (not in the
sense of training) will be the largest investment required for this (Smith and Tardif
2009) but it is still unclear what exactly the BIM roles and their competencies are.


COMPETENCIES
In North American countries, competencies are regarded as being a set of
characteristics (knowledge, skills and attitudes - KSAs) that underlie (affect) the
successful performance (or behavior) of the individual at work (Slivinski and Miles
1996). In Europe, competencies are understood differently: employees demonstrate
the possession of a competence when they achieve or exceed expected results in their
work (Parry 1996).
Companies should select competencies in practical and concrete terms that are
aligned with the organization’s goals. Zingheim and Schuster (2009) recommend
keeping competency programs relatively simple and easy to understand. Hoff (2010)
outlines the steps necessary for creating a competency model: collecting information
about a job (tasks and skills), creating a draft model of competencies, collecting
quantitative and qualitative feedback to support competencies and, refining the final
model.
The present study addresses the research question “what are the individual
competencies necessary to perform functions related to BIM?”, and it is limited to the
first of the steps mentioned above that are needed for creating a competence model
for BIM specialists.
Owing to the different origins of the term ‘competency’ and the wide range of
types of competencies, it has been defined in various ways. For the purposes of this
study, the definitions that are considered are those that are relevant to the domain of
human resources.
Although many studies about the issue of ‘competency’ have been published
in recent years, the concept of competency is often mixed up with other terms such as
aptitude, qualifications, skill/ability, knowledge and attitude (Table 1). The present
study demarcates individual competencies in accordance with the terms and
definitions outlined in Table 1.

Table 1. Terms and definitions for individual competencies.

Aptitude: Natural ability to acquire relatively general or specialist types of knowledge or skills (Colman 2001).
Qualifications: Educational degree obtained and years of professional experience.
Skill/Ability: Ability is a developed skill, competency, or power to do something; an existing capacity to perform some function, without further education or training (Colman 2001). A skill is a combination of abilities, techniques and knowledge which allows someone to achieve a high standard in undertaking a task.
Knowledge: ‘Foundation knowledge’ is the kind of knowledge required by someone to understand what needs to be done. An individual needs knowledge to learn how to carry out a task.
Attitude: A stable, long-lasting, learned predisposition to respond to certain things in a certain way (Statt 1998).

METHODOLOGY
A survey of the technical literature was conducted with the aim of searching
for references to any competencies that BIM specialists might need.
Job ads from the main labour market for BIM-related careers (i.e., that of the
United States) were collected from the Internet, particularly from “BIM Wiki” and
“LinkedIn” weblogs. The job descriptions analyzed were from more than 20 large
companies in the U.S., some of them with international branches. Thus, this study
was confined to the social context where these jobs can be found.
A Content Analysis process (Krippendorf 2004) was performed using input
from BIM job descriptions and the technical literature. Content Analysis is a process
that involves categorizing qualitative textual data into clusters of similar entities or
conceptual categories, in order to identify patterns and relationships between themes,
which can either be identified a priori or just emerge from the analysis. In this
method, the texts are broken down into units. This study has identified the units by
author and by job title.
The literature review and job descriptions covered individual competencies in
accordance with the five categories set out in Table 1. A list of competencies was
generated from the responsibilities and functions of several BIM professions from
both sources. A comparative analysis was carried out between the required
competencies in the job market for BIM-related careers and those cited in the
literature.
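As a purely illustrative sketch of the tallying step in such a content analysis (the ad snippets, keyword lists, and categories below are invented and are not the authors' coding scheme), the following counts how many job-ad texts mention each competency.

```python
# Illustrative tally: how many job-ad texts mention each competency keyword.
from collections import Counter

ads = [
    "Seeking BIM Manager with Revit and Navisworks skills; strong communication.",
    "BIM Manager: implement BIM standards, train staff, excellent communication.",
    "Experience with multiple BIM applications and cost estimating required.",
]

keywords = {
    "BIM applications": ["revit", "navisworks", "bim applications"],
    "Communication": ["communication"],
    "Training": ["train"],
    "Cost estimating": ["cost estimat"],
}

counts = Counter()
for ad in ads:
    text = ad.lower()
    for competency, terms in keywords.items():
        if any(term in text for term in terms):
            counts[competency] += 1  # one count per ad, regardless of repetitions

for competency, n in counts.most_common():
    print(f"{competency}: {n} ad(s)")
```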
RESULTS
A large number of the job ads published online between 2009 and 2010,
advertising BIM related positions, were collected and analyzed (N=31). Although the
job titles in the ads varied, their classifications were standardized in this work. Table
2 provides a statistical summary of the ad sample (breakdown into categories of job
and company).

Table 2. Statistics of BIM job ads sample.

BIM specialist (number of ads, %):
  BIM Manager: 22 (71%)
  BIM Modeler: 3 (9.7%)
  BIM Trainer: 2 (6.5%)
  Director of BIM Technologies: 1 (3.2%)
  BIM Consultant: 1 (3.2%)
  Manager of BIM Marketing: 1 (3.2%)
  BIM Software Applications Support Engineer: 1 (3.2%)

Company (number of ads, %):
  General Contractor: 12 (38.7%)
  Architectural Design: 7 (22.6%)
  MEP Consulting: 4 (13%)
  Consulting Services: 3 (9.6%)
  Software Company (Tools and Services): 3 (9.6%)
  Construction Management Services: 2 (6.5%)
The analysis was restricted to the most frequent type of BIM specialist (the
BIM Manager) as the sample was too small to obtain meaningful results for any of
the others. As the BIM Manager is the professional responsible for most BIM-related
tasks in companies, including the implementation and training of BIM (Barison and
Santos 2010), it is natural that most job ads seek this kind of professional, especially
at this early stage when most firms are just embarking on this technology.

The results of the content analysis that was conducted for the job ads and
literature are summarized in Table 3. In this table, the numbers in parentheses ( )
in the job-ad entries refer to the number of ads mentioning the item, and those
between square brackets [ ] in the literature entries refer to publications in the
reference section, which are marked in the same way. There were no aptitudes
mentioned in the job ads that were collected.
DISCUSSION
In terms of education, the requirements collected from the job ads indicate
that companies sometimes accept applicants with a lower degree than that stipulated
in the literature. This is probably because the specific responsibilities of a BIM
Manager vary a lot among companies, sometimes with priority being given
to technical rather than management issues; in these cases, a higher degree may not
be needed. On the other hand, with regard to experience, the job market, on average,
expects professionals to have worked for a longer period of time than what is
recorded in the literature (5-7 vs. 3-4 years).

Table 3. The competencies required for a BIM Manager in BIM job ads and
specified in the technical literature (items are listed in higher-to-lower order of
frequency).

Aptitudes
  Job ads*: none mentioned.
  Literature**: Ability to work with computers; observation and detailed planning skills sufficient to allow a good visualization of the building before its construction [17].

Education
  Job ads: From a high-school or technical diploma (4) to a Bachelor’s degree; usually a Bachelor’s degree (12) is required, in Architecture, Engineering, CM, or a related field.
  Literature: BSc degree (or equivalent) in Construction Management, Engineering or Architecture [11].

Experience
  Job ads: Normally 5-7 years (6,1).
  Literature: Normally 3-4 years [11].

Skills and Abilities
  Job ads: Ability with multiple BIM applications (19); Oral and written communication (12); Organization and prioritization (10); Experience in giving training to employees (5); Implementing BIM in the company (5); Presentation skills (4); Working in a collaborative environment (4); Understanding construction drawings (3); Giving support on BIM tools to employees (3); Using programming languages (3); Leadership (3); Interpersonal skills (2); Mentoring (2); Understanding BIM (2); Working independently (2); Making cost estimates with BIM tools (2); Using scheduling tools (2); Management (1); Time management (1); Graphic communication (1); Thriving on challenges (1); Commitment (1).
  Literature: Critical thinking [1,3,5,12,13,15,18,23]; Oral and written communication [1,8,11,15,16,19,20,23]; Systemic thinking [12,17,18,19,20,23]; Management [4,6,7,8,14,20]; Coordinated teamwork [8,9,10,16]; Ability in handling BIM applications [4,7,20,21]; Understanding processes [8,16]; Presentation skills [1,20]; Lateral and creative thinking [18]; Integrating new materials into designs [2]; Carrying out new types of analysis [2]; Implementing and maintaining BIM at the company [8,14]; Making cost estimates with BIM tools [8]; Using scheduling tools, clash detection, 4D simulation, logistics, and safety planning [8]; Handling objects by following prescribed rules [3]; Learning [1].

Knowledge
  Job ads: Design/construction process (3); BIM workflow process (3); Construction costs, schedules and financial risks (2); Parametric object-based design (1); Construction drawings (1); IPD concepts (1); BIM/project management (1); Technology for collaborative systems (1).
  Literature: Information technology [4,5,11,12,14,15,20,22]; Construction process [1,4,5,12,15]; Management [1,14,16,22]; BIM coordination process [1,6]; BIM/IPD process, BIM standards [14,24]; Other disciplines [16].

Attitudes
  Job ads: Self-driven/motivated (4); Enthusiastic about BIM’s potential (3); Keen to make technical innovations (2); Willing to act as a team player (2); Motivation to continually learn (1); Being quality minded (1); Showing initiative (1); Willingness to travel (1).
  Literature: Team player [1,16,18]; Motivation to continually learn [1]; Involved and interested in BIM [7]; Appreciation of the value of professional practice [5]; Not overly ambitious [8].

* From 22 job ads (numbers in parentheses = number of job ads). ** From 24 authors (numbers in brackets = references, in the reference section).
Both the AEC companies and the reviewed authors regard some core abilities
like oral communication, team/collaborative work and management as very important
for a BIM Manager. In contrast, the analysis also showed that the job market is more
focused on functional skills related to systems & technological abilities, especially
skills in BIM software/applications, while the literature is more concerned that the
BIM Manager has the foundational skills of critical and systemic thinking.
With regard to the necessary background knowledge, the literature suggests that
information technologies, construction processes and management are the most
important areas that a BIM Manager must know. The job market also expects
professionals to have this same background knowledge, although it focuses more on
specific BIM-supported activities.
Finally, the job market seeks to hire self-driven professionals, who are motivated by
the benefits of BIM technology, as well as those who have a positive attitude to
teamwork, whereas the literature more often concentrates on the need for attitudes
conducive to working in a team environment.

A Competent BIM Manager


On the basis of information from 22 job ads and 24 technical papers, a profile
for a competent BIM Manager was defined, as outlined below.
A competent BIM Manager should possess a Bachelor’s degree in an AEC-
related area and have at least 5 years of professional experience. To be appointed to
this position, a professional needs both foundational and functional skills: the former
mainly consist of communication skills and thinking skills (both critical and
analytical). The most essential functional skill is the ability to handle multiple BIM
software applications, the tools he/she will be using on a daily basis. Interpersonal functional
skills, like teamwork and leadership are also very important for a BIM Manager, as
well as resource skills of basic management. Apart from this, a competent BIM
Manager also needs to possess cognitive abilities to understand evolving BIM
concepts, in addition to construction processes and drawings. Moreover, this
specialist must have the capacities needed for implementing BIM, giving support and
training, and coordinating and developing BIM models.
Regarding the question of expertise, a competent BIM Manager needs to be
familiar with the following: Information Technology, design and construction
processes, management, BIM standards, BIM workflow, coordination practices,
project management, construction drawings and costs, schedules and financial risks,
parametric object-based design and other disciplines.
However, these qualities are not enough unless accompanied by positive
attitudes such as, for example, being self-driven, highly motivated, involved and
interested in BIM and its new technological aids. Moreover, it is essential to be a
team player, a lifelong learner, have initiative and be always ready to educate others
and travel to branch offices when necessary.
CONCLUSIONS
This study has outlined the competencies of a BIM specialist through a
Content Analysis which compares several job descriptions with those given in the
literature. By this means, it was possible to identify the profile of the BIM Manager.
The comparative analysis between the job ads and the literature revealed
several patterns for the kind of competencies required by BIM Managers that the
universities must cater for as soon as possible. Among those are abilities which are
usually developed in quality higher education courses like teamwork, communication
skills and critical and analytical thinking skills. Others require changes or adaptations
in the curricula, to include the following: contact with several BIM tools, BIM
standards, BIM workflow, BIM-enabled coordination practices and project
management, development of construction drawings, making estimates and schedules
with BIM applications and a knowledge of parametric object-based design concepts.
However, this study has been subject to a number of constraints. The analyzed
job ads are only from large companies, mainly located in the U.S. It has not been
possible to discuss the differences between different types of BIM specialists because
there was very little information about BIM specialists, apart from the BIM Manager.
The study was limited to the first step for creating a competence model. Future
studies for collecting quantitative and qualitative feedback from BIM professionals to
support the competencies listed here could refine and finalize a model of
competencies.
ACKNOWLEDGEMENTS
The first author would like to express her gratitude to CAPES for partially
funding this research. The second author is grateful to CNPq for partially funding this
research.
REFERENCES
Note: The number in [n] at the end of some references refers to Table 3.
Allison, H. (2010). “10 Things every BIM Manager should know”. Vico Guest
Blogger Series. <http://www.vicosoftware.com/vico-blogs/guest-blogger/
tabid/ 88454/bid/22833/10-Things-Every-BIM-Manager-Should-Know.aspx>
(Dec. 10, 2010). [1]
Barison, M. B. and Santos, E.T. (2010). “An overview of BIM specialists”.
Computing in Civil and Building Engineering, Proceedings of the
International Conference, Nottingham, UK, Nottingham University Press,
Paper 71, p. 141.
Bronet, F., Cheng, R., Eastman, J., Hagen, S., Hemsath, S., Khan, S., Regan, T., Ryan,
R. and Scheer, D., (2007). “Draft: The Future of Architectural Education”.
AIA TAP 2007. [2]
Casey, M. J. (2008). “BIM in Education: Focus on Local University Programs”.
BuildingSmart Alliance National Conference Engineering & Construction,
Washington, DC. <http://projects.buildingsmartalliance.org/files/
?artifact_id=1809>(Jan. 9, 2011). [3]
Chasey, A. and Pavelko, C. (2010). “Industry Expectations Help Drive BIM in
Today’s University Undergraduate Curriculum”. JBIM, Fall, 2010.
<http://www.wbdg.org/pdfs/jbim_fall10.pdf >(Dec, 2010). [4]
Cheng, R. (2006). “Questioning the role of BIM in architectural education”.
AECbytes, 2006. <http://www.aecbytes.com/viewpoint/2006/issue_26.html>
(Dec. 5, 2007). [5]
Colman, A. M. (2001). “A dictionary of psychology”. Oxford University Press,
Oxford.
Computer Integrated Construction Research Program (CICRP) (2009). “BIM Project
Execution Planning Guide – Version 1.0”. The Pennsylvania State University,
Pennsylvania, PA. [6]
Cooperative Research Centre for Construction Innovation (CRCCI) (2009). “National
Guidelines for Digital Modelling”. <http://www.construction-
innovation.info/images/pdfs/BIM_Guidelines_Book_191109_lores.pdf>(Sep,
2010).[7]
C3Consulting(2009).“Project-Level BIMM”. Infocus. <http://c3consulting.com.
au/newsletter/infocus-october-2009.html>(Dec. 13, 2010). [8]
Dossick, C. S., Neff, G. and Homayouni, H. (2009). “The Realities of BIM for
Collaboration in the AEC Industry”. Construction Research Congress.
<http://www.ascelibrary.org>(Oct. 25, 2010). [9]

Duke, P., Higgs, S., and McMahon, W. R. (2010). “Integrated Project Delivery: the
value proposition. An Owner’s Guide for Launching a Healthcare Capital
Project via IPD”. White paper, KLMK Group. Feb. 2010.[10]
Eastman, C., Teicholz, P., Sacks, R. and Liston, K. (2008). BIM handbook: a guide to
building information modeling for owners, managers, designers, engineers,
and contractors., John Wiley & Sons, Hoboken. [11]
Gallello, D. (2008). “The BIM Manager”. AECbytes Viewpoint #34.
<http://www.aecbytes.com/viewpoint/2008/issue_34.html> (Dec. 5, 2009).
[12]
Green, R. (2009). “What’s the BIM Deal-Part 3?”. Cadalyst, September 23, 2009.
<http://www.cadalyst.com/collaboration/building-information-
modeling/what039s-bim-deal-part-3-12923?page_id=2>(Oct. 25, 2009). [13]
Gu, N. and London, K. (2010). “Understanding and facilitating BIM adoption in the
AEC industry.” Automation in Construction, 19, 988–999.
Hardin, B. (2009). BIM and construction management: Proven tools, methods and
workflows. John Wiley & Sons, Hoboken. [14]
Hjelseth, E. (2008). “A Mixed approach for SMART learning of building SMART”.
eWork and eBusiness in Architecture, Engineering and Construction. ECCPM
2008. Org. Alain Zarli and Raimar Scherer, London. [15]
Hoff, D. (2010). “The Role of Competencies in Career Development” EASI-
Consult.<http://www.easiconsult.com/articles/dhoff-compsincareers.html>
(Oct., 2010).
Homayouni, H., Neff, G., and Dossick, C. S. (2009). “Theoretical Categories of Successful
Collaboration and BIM Implementation within the AEC Industry”.
Construction Research Congress 2009. [16]
Kymmell, W. (2008). Building Information Modeling: planning and managing
construction projects with 4D CAD and simulations, McGraw Hill, New
York.[17]
Krippendorf, K. (2004). Content analysis: an introduction to its methodology. 2nd ed.
Sage, Thousand Oaks, CA.
Önür, S. (2009). “IDS for Ideas in Higher Education Reform”. CIB IDS, First
International Conference on Improving Construction and use Through
Integrated Design Solution, 2009, Espoo, Finland. <http://www.vtt.fi/inf/pdf/
symposiums/ 2009/S259.pdf> (Oct., 2009). [18]
Owen, R. (2009). “Integrated Design & Delivery Solutions”. CIB White Paper on
IDDS, CIB, The Netherlands. [19]
Parry, S. B. (1996). “The quest for competencies”. Training, Lakewood Publications,
33(7), 48-54.
Penttilä, H. and Elger, D. (2009). “New Professional Profiles for International
Collaboration in Design and Construction”. In: eCAADe 26.
<http://www.mittaviiva.fi/hannu/studies/ecaade2008_penttila_elger.pdf>
(Oct. 10, 2009). [20]
Sacks, R. and Barak, R. (2010). “Teaching Building Information Modeling as an
Integral Part of Freshman Year Civil Engineering Education”. J. Profl. Issues
in Engrg. Educ. And Pract., ASCE, 136(1),30-38. [21]
Sebastian, R. (2009). “Changing roles of architects, engineers and builders through
BIM application in healthcare building projects in the Netherlands”. Changing
Roles, New Roles, New Challenges. Noordwijk Aan Zee, Netherlands.
<http://www.changingroles09.nl/ uploads/File/ Final.Sebastian.pdf>(Nov.,
2009). [22]
Slivinski, L. W. and Miles, J. (1996). “Wholistic Competency Profile”. Personnel
Psychology Centre, Public Service Commission of Canada.
Smith, D. and Tardif, M. (2009) “Building Information Modeling: A strategic
implementation guide for architects, engineers, constructors, and real estate
asset managers”. John Wiley & Sons, Hoboken, NJ.
Statt, D. A. (1998). Concise Dictionary of Psychology. 3rd. ed.,Routledge, London.
Tatum, C. B. (2009). “Champions for Integrated Design Solutions”. First
International Conference on Improving Construction and Use Through
Integrated Design Solutions. 2009. Espoo, Finland, <http://www.vtt.fi/
inf/pdf/symposiums/2009/ S259.pdf>(Oct., 2009). [23]
Federal Business Opportunities (FBO) (2009). “Nationwide Building Information
Modeling (BIM) and Related Professional Services”.
<https://www.fbo.gov/spg/GSA/PBS/PHA/GS-00P-09-CY-D-0136/listing.
html> (Aug. 5, 2009). [24]
Zingheim, P. K. and Schuster, J. R. (2009). “Competencies replacing jobs as the
compensation/HR foundation”. World at Work Journal, 18(3), 6-20.
Adaptive Guidance For Emergency Evacuation For Complex Building
Geometries

Chih-Yuan Chu1
1 Assistant Professor, Department of Civil Engineering, National Central University,
No. 300, Jhongda Rd., Jhongli City, Taoyuan County 32001, Taiwan; Tel: +886-3-
422-7151 ext 34151; Fax: +886-3-425-2960; email: jameschu@ncu.edu.tw

ABSTRACT
The guidance system for pedestrians is one of the most critical components of
emergency evacuation in complex building geometries in the event of accidents and
natural disasters. However, pre-determined, fixed emergency guidance systems
provide only static information on evacuation routes to exits and do not respond to
real-time situations. In the case of a large-scale evacuation, these evacuation routes are
likely to be congested because a large number of pedestrians attempt to leave the
hazardous areas at the same time. To address this problem, this paper proposes a
method for planning adaptive emergency evacuation guidance to support the fixed
guidance system. The method includes two steps. First, the spatial distribution of the
pedestrians is converted into a digital image and the congestion areas in the facility are
identified with the techniques of digital image processing. Second, the identified
congestion areas are considered as virtual obstacles in addition to the original obstacles,
and adaptive guidance that instructs pedestrians to bypass these areas can then be
generated.

INTRODUCTION

Emergency evacuation guidance systems are critical for buildings because
they largely determine the time a pedestrian needs to leave the hazardous
area in the event of accidents and natural disasters. The planning of emergency
evacuation guidance systems is particularly important for complex building geometries
such as high-rise buildings, train stations, and airport terminals because they usually
serve more pedestrians and are larger than other types of buildings. Studies
of the optimal design of guidance systems are rare. Among the few that do exist, Chu
(2010) developed an approach to designing optimal evacuation guidance systems given
polygonal obstacles. The fixed guidance system is optimal in the sense that: (1) all
pedestrians are covered by the guidance system; (2) after a pedestrian finds the first
sign, the evacuation direction information is provided unambiguously without
requiring any judgment on the part of the pedestrian; and (3) the guidance allows a
pedestrian to evacuate to the closest exit via the shortest path. However, pre-
determined, fixed emergency evacuation guidance systems provide only static
information on the evacuation routes to exits. In the case of a large-scale evacuation,
these routes are likely to be congested because too many pedestrians attempt to access
the exits simultaneously. Motivated by the need for the dynamic information of the
evacuation routes, this paper proposes a method to find the adaptive guidance strategy
in response to the real-time status of congestion in the facility to support the fixed
guidance systems.

There are two major parts to this research. In the first part, a fixed guidance system
provides static evacuation information to the pedestrians. The evacuation process is
monitored by a congestion detection mechanism. Based on the pattern of pedestrian
distribution, the bottlenecks of the evacuation are determined and the adaptive
guidance system generates alternative evacuation routes at a regular interval for the
pedestrians to guide them to bypass the congestion areas and expedite the evacuation
process. In the second part, a cellular automata (CA) pedestrian simulation model is
adopted to evaluate the benefit of the adaptive guidance and validate the methodology.
CA models proposed by Burstedde et al. (2001) simulate pedestrian behaviors
adequately and are highly efficient for the large-scale simulation of human movement
under the emergency evacuation guidance in complex environments. Thus, CA models
were chosen for this research to evaluate the performance of evacuation guidance
systems. Each of the fixed and the adaptive guidance systems generates a static field,
which drives the pedestrians to move as if they are following the corresponding
guidance in the simulation model. The ratio of pedestrians following the fixed
guidance to those following the adaptive guidance is decided by the compliance rate,
the percentage of pedestrians that follow adaptive guidance. As a result, the critical
measures such as the maximum evacuation time under the evacuation guidance system
can be evaluated numerically. Further, by using this simulation tool, the effects of the
important factors including the update interval and the compliance rate of the adaptive
guidance can be determined.
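A minimal sketch of how the compliance rate can be applied in such a simulation is shown below, assuming each simulated pedestrian is independently assigned to follow either the adaptive or the fixed static field; the numbers and the sampling scheme are illustrative only and are not prescribed by the paper.

```python
# Illustrative split of simulated pedestrians by compliance rate (assumed mechanism).
import random

random.seed(1)
compliance_rate = 0.5          # share of pedestrians following the adaptive guidance
num_pedestrians = 6000

guidance = ["adaptive" if random.random() < compliance_rate else "fixed"
            for _ in range(num_pedestrians)]
print(guidance.count("adaptive"), "adaptive /", guidance.count("fixed"), "fixed")
```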

METHODOLOGY

Two major assumptions are made in the methodology of this paper. The first
assumption is that the pedestrians follow either fixed or adaptive guidance. The second
assumption is that the space of the facilities of interest is separated into squares of
equal size. The two assumptions are explained next.

In this paper, two types of pedestrians are assumed. In addition to the dynamic
field and randomness in the CA simulation, some pedestrians follow only the fixed guidance
for evacuation while the others follow the adaptive guidance. Note that the pedestrians are
assumed to have no knowledge of the floor plan and are not capable of searching for
the evacuation routes without guidance. Although theories have been developed for
wayfinding with partial knowledge of space or imperfect guidance (Golledge, 1999),
this assumption is necessary because the purpose of this study is to design an adaptive
guidance system and evaluate its performance. The effect of the guidance system
cannot be evaluated adequately when pedestrians' wayfinding behavior is involved.
Therefore, the wayfinding behavior of pedestrians without following emergency
evacuation guidance is excluded from this paper.

The technique of digital image processing is one of the key components in the
identification of congestion areas. Because digital images are composed of pixels, it is
required to convert the facility under consideration into squares of equal size. Similarly,
the CA pedestrian simulation model that will be used for the validation of the
methodology is the discrete-space approximation of the actual pedestrian movement. It
discretizes the space into cells and each cell can only be occupied by a single person.
The assumption of the discrete-space approximation in the paper is made due to the
above two important tools. Indeed, the model accuracy could be affected due to the
discretization; it is emphasized that reducing the cell size is still possible if higher
accuracy is required as described earlier in the literature review. In this research, the
space of a cell is 40 cm×40 cm, which is the space an average person occupies and
widely accepted in CA simulation.

Identify congestion areas

Various studies have successfully adopted the techniques of image/video
processing for pedestrian tracking and detection. For example, Hoogendoorn et al.
(2003) and Helbing and Johansson (2007) are capable of detecting the positions of the
pedestrians in a specific area. It follows that macroscopic measures such as density can
also be calculated. They further combined various algorithms from different fields to
extract the microscopic characteristics such as speed of each pedestrian over time.
Therefore, technologies for positioning pedestrians in transportation facilities
are readily available, and the spatial distribution of pedestrians that will be required in
the following procedures can be obtained through these techniques without difficulty.

Figure 1, which will be used in the numerical example later in the paper, is
used to explain the problem of relying only on the fixed guidance system in emergency
evacuation. In the figure, the black areas represent the obstacles and the pedestrians are
represented by gray areas. A stairway connecting to the ground floor is marked with an
exit sign. To explain the methodology in detail, a 30 m×40 m space is marked in the
lower right part of the figure and the following discussions will be focused on this area.
As the marked area shows, the pedestrians are moving from north and west to leave the
floor via the stairway following the fixed guidance system. There are two sources of
bottlenecks caused by the fixed guidance: pedestrians from north pass the same ticket
gate to access the exit and all of those from west use the narrow space between the
wall and a column. Because the pedestrians follow the fixed guidance and all of them
take the shortest paths, significant congestion forms at the two bottlenecks.
Without adaptive guidance, the alternative routes are ignored and the evacuation
process could be delayed.

The concept of the identification of congestion areas is to treat the spatial
distribution of the pedestrians as a binary image (i.e., black and white). Each pixel is
equivalent to a small square of space, i.e., a cell, in the facility. A congestion area
implies a group of cells crowded with pedestrians in which jamming and clogging are
forming. Therefore, in terms of digital image processing, congestion can be identified
by finding objects in a binary image converted from a pedestrian distribution. The
standard procedure for identifying objects in image processing includes noise removal,
erosion, and dilation.

Figure 2 is used to explain the procedure of the identification of congestion
areas. When a pedestrian is moving without any pedestrians in the adjacent cells, it is
clear that the pedestrian is able to walk freely and thus no congestion exists. In the
sense of digital image processing, it is equivalent to a single dot in the image and can
be seen as noise. Because these scattered dots are not of interest for congestion
identification, a noise removal algorithm can be used to eliminate them from the
consideration. In this study, the commonly used median filter algorithm is adopted for
this task. A median filter scans a neighborhood and uses the median value for all pixels
in the neighborhood to replace the value of the pixel in the center of the neighborhood.
This study selects a typical 5×5 neighborhood. Because the cell size chosen for this
research is 40 cm×40 cm, the meaning of the selection is that if there are at least 13
persons in a 4 m2 space (0.16 m2 × 25, equivalent to a density of 3.25 persons per m2),
the center cell of the space is considered congested. The threshold of 3.25 persons per
m2 is consistent with the jamming density reported in Jin (2002), which is 3.5 persons
per m2. Figure 2(a) and Figure 2(b) show the effect of a median filter. After the
processing, pedestrians not in the congestion areas are removed from the image and
only pedestrians in congestion areas remain. However, it can be seen from the figure
that many small gaps and holes exist and their shapes are rather irregular. Because the
planning of the guidance system adopted in this paper depends on polygonal
objects and to avoid overly complicated adaptive guidance, it would be more effective
if the identified areas have more regular shapes, preferably concave polygons;
therefore, more processing is required.

After the step of noise removal, the next task is to search for major congestion
areas with simple shapes in the facility. In the terminology of image processing, it is
equivalent to finding objects of interest in an image, which can be done by smoothing
out object outlines, filling small holes, and eliminating small projections in the image
(Umbaugh, 2005). The common operations include erosion and dilation. Erosion
shrinks objects by eroding their boundaries to simplify their shapes in an image, and
dilation expands the objects to compensate for the loss of size in the erosion step. By operating
erosion and dilation in sequence, objects with simpler shapes can be identified.
When it is applied on the spatial distribution of pedestrians, the irregular shapes of
small groups of pedestrians are smoothed out. Therefore, the planning of the adaptive
guidance would not be affected by these relatively small congestion areas. Figure 2(c)
and Figure 2(d) show the effects of erosion and dilation respectively, and the results
are the congestion areas that will be considered in the following procedure of adaptive
guidance design. Finally, it should be emphasized that the parameters in the image
processing algorithms determine the size and number of congestion areas. The
appropriate parameters for different scenarios of emergency evacuation could be
different and more research would be required for this topic.
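A minimal sketch of this pipeline is given below, assuming the pedestrian distribution is already available as a binary occupancy grid in a NumPy array; scipy.ndimage is used here as a generic image-processing toolbox (the paper does not name a specific library), and the structuring-element size and iteration counts are illustrative choices.

```python
# Congestion-identification sketch: median filter, then erosion and dilation.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
# occupancy[r, c] = 1 if the 40 cm x 40 cm cell (r, c) holds a pedestrian (synthetic data).
occupancy = (rng.random((100, 150)) < 0.08).astype(np.uint8)
occupancy[40:60, 70:100] = (rng.random((20, 30)) < 0.85).astype(np.uint8)  # a crowded patch

# Step 1: 5x5 median filter -> a cell stays "congested" only if at least 13 of the
# 25 cells around it are occupied (about 3.25 persons per square meter).
congested = ndimage.median_filter(occupancy, size=5)

# Step 2: erosion then dilation to smooth outlines, drop small projections, and
# fill minor gaps, leaving a few simply shaped areas.
struct = np.ones((3, 3), dtype=bool)
congested = ndimage.binary_erosion(congested, structure=struct, iterations=2)
congested = ndimage.binary_dilation(congested, structure=struct, iterations=2)

# Label the remaining connected regions; each one is treated as a virtual obstacle.
labels, num_areas = ndimage.label(congested)
print(f"{num_areas} congestion area(s) identified")
```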

Adaptive guidance

The main concept of the adaptive guidance is to consider the congestion areas
identified above as virtual obstacles. A new optimal guidance system considering both
the original obstacles and the up-to-date congestion areas as obstacles is generated
with the method proposed in Chu (2010). Due to the additional obstacles, the shortest
COMPUTING IN CIVIL ENGINEERING 607

paths to the exits and the optimal guidance would be different. By comparing the
routes suggested by the original guidance system and the adaptive guidance, the
required information to instruct the pedestrians to bypass the congestion areas can be
determined. The procedure can be explained by Figure 2(a) and the congestion areas
identified in Figure 2(d) are drawn in Figure 2(a) for comparison. As explained above,
the pedestrians are moving toward the exit from the west and north. It can be seen from
the figure that the alternative guidance (arrows in the figure) instructs the pedestrians
to bypass the congestion areas. One of the practical ways for implementing this
adaptive guidance is to deploy staff members at appropriate locations and guide the
pedestrians to use the less congested routes with vocal or gestural instructions. However,
the mechanisms for providing the adaptive guidance are not specified in this study and
still need more testing and experimentation.
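As a simplified illustration of the idea (this is not the optimal guidance method of Chu (2010), only a sketch), the code below computes a grid distance field to the exits by breadth-first search while treating both the real obstacles and the identified congestion cells as blocked; pedestrians following the adaptive guidance would be directed toward neighbors with decreasing distance values. A field of this kind can also serve as the static field driving movement in the CA simulation.

```python
# Distance field with congestion areas treated as virtual obstacles (illustration only).
from collections import deque

def distance_field(obstacle, congested, exits, rows, cols):
    """Grid BFS from all exit cells; obstacle/congested are sets of (r, c) cells."""
    blocked = obstacle | congested
    dist = {cell: 0 for cell in exits if cell not in blocked}
    queue = deque(dist)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nxt = (nr, nc)
            if 0 <= nr < rows and 0 <= nc < cols and nxt not in blocked and nxt not in dist:
                dist[nxt] = dist[(r, c)] + 1
                queue.append(nxt)
    return dist  # pedestrians step toward the neighbor with the smallest value

# Tiny example: one exit, a wall segment, and a congested patch to walk around.
rows, cols = 6, 8
obstacle = {(2, c) for c in range(1, 6)}          # a wall
congested = {(3, 3), (3, 4), (4, 3), (4, 4)}      # identified congestion area
exits = {(0, 7)}
field = distance_field(obstacle, congested, exits, rows, cols)
print(field[(5, 0)])  # steps to the exit from the lower-left corner, bypassing both
```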

Figure 1. Building layout and simulation under only fixed guidance

(a) Illustration of adaptive guidance (b) Median Filter

(c) Erosion (d) Dilation


Figure 2. Congestion Detection and Adaptive Guidance

NUMERICAL EXAMPLE

The floor B1 of the Taipei Train Station, the largest transportation terminal in
Taiwan, is used as an example to demonstrate and validate the proposed methodology.
Figure 1 shows the layout of floor B1 with a length of approximately 197 m and a
width of 143 m. The floor provides space for ticket checking and passenger waiting
areas. The north part is reserved for conventional rail and the south part is dedicated to
high-speed rail. The four stairways connecting to the ground floor above serve as the
exits in this example and are marked with exit signs. Other stairways connect to floor
B2 below; however, because pedestrians would try to move upward during an
evacuation, these stairways are not considered in this example. All the areas in the
figure that constitute obstacles to pedestrians are black; these include walls, columns,
and ticket gates.

Figure 1 also shows the simulation under the fixed guidance after 60 s based
on the CA implementation proposed in Chu (2009) and the optimal fixed guidance
system proposed in Chu (2010). The figure indicates several types of evacuation
bottlenecks when a large number of pedestrians (6,000 in total) follow the fixed guidance.
The first source of bottleneck is the ticket gates (double circles). The second source of
bottleneck is the columns next to the stairways (solid circles) and the third bottleneck
is the narrow corridors (dashed circles). These results are useful for identifying
potential problems in emergency situations, and they provide evidence of the need
for adaptive guidance to improve the evacuation performance.

Adaptive Evacuation Guidance

Cases with various numbers of pedestrians in the range of 100–6,000 were tested,
again based on Chu (2009) and Chu (2010). The difference is the introduction of the
adaptive guidance proposed in this paper. In addition to the number of the pedestrians,
the effects of the update interval and compliance rate of the adaptive guidance on the
emergency evacuation were also studied. In the following analysis, the measure for the
performance of a guidance system is “maximum evacuation time”, which is the time
for all pedestrians to evacuate. In an emergency situation, the maximum evacuation
time is usually more meaningful than the average evacuation time. All the values in the
figures are based on an average of 3 repetitions.

To understand the effect of the update interval on the maximum evacuation
time, update intervals of 15 s, 30 s, 45 s, and 60 s were tested. The results show that
the adaptive guidance has no effect on the maximum evacuation time for the cases of
100 and 1,000 pedestrians. The reason is that no congestion areas were identified in
these two cases. For the cases of 3,000 and more pedestrians, the benefit of the
adaptive guidance system is significant. The reduction is as high as 20% for the case of
6,000 pedestrians and the update interval of 15 s. Overall, as the number of the
pedestrians increases, the improvement due to the adaptive guidance increases. The
results can also show that shorter update intervals lead to lower maximum evacuation
times for the cases of 3,000 and more pedestrians.
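
A possible organization of this update-interval experiment is sketched below;
run_evacuation_simulation is a hypothetical placeholder for the cellular-automata
simulation of Chu (2009) extended with the adaptive guidance, and the listed
pedestrian counts are representative values within the tested 100–6,000 range.

def run_evacuation_simulation(n_pedestrians, update_interval_s, seed):
    # Placeholder for the CA simulation; assumed to return the exit time of
    # every pedestrian for one run.
    raise NotImplementedError

def max_evacuation_time(n_pedestrians, update_interval_s, repetitions=3):
    """Average, over repetitions, of the time for ALL pedestrians to evacuate."""
    totals = []
    for seed in range(repetitions):
        exit_times = run_evacuation_simulation(n_pedestrians, update_interval_s, seed)
        totals.append(max(exit_times))   # maximum, not average, evacuation time
    return sum(totals) / len(totals)

pedestrian_counts = [100, 1000, 3000, 6000]   # representative cases in the tested range
update_intervals = [15, 30, 45, 60]           # seconds, as tested above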

The influence of the compliance rate, which is the percentage of the pedestrians
following the adaptive guidance, is also tested. In this analysis, the update interval of
15 s is selected because the adaptive guidance has the strongest effect on the
evacuation and the change due to the compliance rates could be observed more clearly.
As usual, the cases of 100 and 1,000 pedestrians are not affected by the adaptive
guidance because no congestion was detected. By comparing the compliance rates of
75%, 50%, and 25%, it can be observed that the compliance rate of 50% has the lowest
maximum evacuation times. This result is not very surprising because in this particular
example most of the alternative routes are relatively close to the original shortest
routes. As a result, splitting the pedestrians equally between two nearby routes yields the
greatest improvement. Note that the analysis of the compliance rate is not exhaustive
and that the optimal compliance rate of 50% applies only to this example.

Finally, Figure 3 shows the simulation after 60 s under the adaptive guidance
with an update interval of 15 s and a compliance rate of 50%. The case was chosen
because its performance is the best and the effects of the adaptive guidance are easier
to observe. The figure is useful for an overall understanding of the performance of the
guidance system. The adaptive guidance indicating alternative routes is shown in
the figure as arrows. Compared to the fixed guidance, half of the pedestrians are taking
advantage of the alternative routes to bypass the congestion areas and the reduction of
the congestion areas is significant. It is noteworthy that most of the alternative routes
provided by adaptive guidance are close to the original routes from the fixed guidance
for this example. The implication is that the adaptive guidance and the fixed guidance
are identical for the most part, and the adaptive guidance is required only next to the
congestion areas. This implies that the implementation of the methodology would be
straightforward and feasible. The exception to the above observation is represented by
the solid line (original route) and the dashed line (alternative route) in the figure. The
original route shows that the shortest path to the exit without considering the
congestion is via the top right stairway. However, the congestion area marked with
the dashed circle completely blocks the corridor. As a result, the adaptive guidance
guides the pedestrians to take the alternative route that leads to the lower right stairway.
In this case, the associated adaptive guidance is relatively far from the congestion area,
which would be difficult to obtain without the systematic approach proposed in this
research.

CONCLUSION AND FUTURE RESEARCH

This paper proposes a method for planning adaptive emergency evacuation
guidance to support pre-determined, fixed guidance systems that provide static
information and do not respond to the real-time situations. Using the techniques of
digital image processing, the congestion areas in the facility can be identified. By
considering these areas as virtual obstacles, the adaptive guidance that instructs the
pedestrians to take the alternative routes and bypass the congestion can be generated.
The methodology is validated with a numerical example and the benefit of introducing
adaptive guidance is significant when the number of pedestrians is large. In addition,
reducing the update interval of the adaptive guidance improves the maximum
evacuation times. The example also shows that the compliance rate has an impact on the
evacuation times and should be considered when the adaptive guidance is designed.

Figure 3. Simulation under both fixed and adaptive guidance

ACKNOWLEDGMENTS
This work was supported by the National Science Council of Taiwan through
research grant NSC 98-2221-E-008-078.

REFERENCES
Burstedde, C., Klauck, K., Schadschneider, A., and Zittartz, J. (2001). “Simulation of
pedestrian dynamics using a two-dimensional cellular automaton.” Physica A, 295,
507–525.
Chu, C.-Y. (2009). “A computer model for selecting facility evacuation design using
cellular automata.” Computer-Aided Civil and Infrastructure Engineering, 24(8),
608–622.
Chu, C.-Y. (2010). “Optimal emergency evacuation guidance design for complex
building geometries.” under review.
Golledge, R. (1999). Wayfinding behavior: Cognitive mapping and other spatial
processes. Johns Hopkins University Press.
Helbing, D. and Johansson, A. (2007). “Dynamics of crowd disasters: An empirical
study.” Physical Review E, 75, 046109–1–046109–7.
Hoogendoorn, S., Daamen, W., and Bovy, P. (2003). “Extracting microscopic
pedestrian characteristics from video data.” Transportation Research Board 2003
Annual Meeting CD-ROM.
Jin, T. (2002). “Visibility and human behavior in fire smoke.” SFPE Handbook of Fire
Protection Engineering, P. J. DiNenno, ed., National Fire Protection Association,
Quincy, MA, USA, 3 edition, chapter 2-4, 2–42–2–53.
Umbaugh, S. (2005). Computer Imaging: digital image analysis and processing. CRC
Press.
IMPROVING THE ROBUSTNESS OF MODEL EXCHANGES USING
PRODUCT MODELING ‘CONCEPTS’ FOR IFC SCHEMA
Manu Venugopal¹, Charles Eastman², Rafael Sacks³, and Jochen Teizer⁴

¹PhD Candidate, School of Civil and Environmental Engineering, Georgia Institute of
Technology, 790 Atlantic Dr. N.W., Atlanta, GA, 30332-0355, PH (510) 579-8656.
E-mail: manu.menon@gatech.edu
²Professor, College of Computing and College of Architecture, Georgia Institute of
Technology, Atlanta, GA, 30332-0155. E-mail: charles.eastman@coa.gatech.edu
³Associate Professor, Faculty of Civil and Environmental Engineering, Technion-Israel
Institute of Technology, Haifa, 32000, Israel. E-mail: cvsacks@techunix.technion.ac.il
⁴Assistant Professor, School of Civil and Environmental Engineering, Georgia Institute
of Technology, Atlanta, GA, 30332-0355. E-mail: teizer@gatech.edu
ABSTRACT
Empirical approaches to define Model View Definitions (MVD) for exchange
specifications exist and are expensive to build, test, and maintain. This paper presents
the novel idea of developing modular and reusable MVDs from IFC Product
Modeling Concepts. The need and application for defining model views in a more
logical manner is illustrated with examples from current MVD development. A
particular focus of this paper is on precast entities in a building system. Presented is a
set of criteria to define fundamental semantic concepts articulated within the Industry
Foundation Classes (IFC) to improve the robustness of model exchanges.
Keywords: Building Information Modeling (BIM), Product/Process Modeling, Model
View Definition (MVD), Industry Foundation Class (IFC).
INTRODUCTION
Building Information Modeling (BIM) tools serving the Architecture, Engineering,
Construction (AEC) and Facilities Management (FM) industry cover various domains
and have different internal data model representations to suit each domain. Data
exchange is possible mostly by hard-coding translation rules. This method is costly to
implement and maintain on an individual system-to-system basis. NIST has estimated
that information copying and recreation is costing the industry 15.8 billion dollars a
year (NIST, 2004). The Industry Foundation Classes (IFC) schema is widely
recognized as the common data exchange format for interoperability within the AEC
industry (Eastman et al. 2008). Although IFC is a rich product-modeling schema, it is
highly redundant, offering multiple ways to define objects, relations and attributes.
Thus, data exchanges are not reliable due to inconsistencies in the assumptions made
in exported and imported data, posing a barrier to the advance of BIM (Eastman et al.
2010). The National BIM Standard (NBIMS) initiative (NIBS, 2008) proposes
facilitating information exchanges through model view definitions (MVD) (Hietanen,
2006). Empirical approaches to define MVDs for exchange specifications exist and
are expensive to build, test, and maintain (Venugopal et al. 2010). The authors’
experience in developing Precast BIM standard (Precast MVD, 2010), which is one of
the early NBIMS, has given insights into the advantages and disadvantages of the
MVD approach. Some of the deficiencies of current approaches are explained in this
paper to illustrate the need for a formal and rigorous approach to model view
development. We explore a novel idea of developing modular and reusable MVDs
from IFC Product Modeling Concepts. Presented is a set of criteria to define
fundamental semantic concepts articulated within the Industry Foundation Classes
(IFC) to improve the robustness of
model exchanges.

NBIMS PROCESS
Effective exchanges require providing a layer of specificity over the top of an IFC
exchange schema or other exchange schema. The purpose of this layer of information
is to select and specify the appropriate information entities from a schema for
particular uses. Such a subset of the IFC schema that is needed to satisfy one or many
exchange requirements of the AEC industry is defined as a Model View Definition by
the buildingSMART organization (NIBS 2008). The National BIM Standard Version 1
Part 1 outlines a draft of the procedural steps to be followed in developing model
views. The NBIMS process is shown in Figure 1. The focus of this paper is on the
translation from the Design to the Construct stage in the model view development
process in Figure 1. The Design phase rigorously defines the model view. This
involves translation of the exchange requirements from their textual form so that they
can be bound to a particular exchange schema. A model view is a collection of such
information modules, which will be implemented by the software companies. Example
MVDs include those supporting concept-level design review by GSA (GSA, 2010),
structural steel exchanges by steel fabricators (Eastman et al. 2005), all the exchanges
needed to support precast concrete exchanges from design to fabrication and erection
(PCI, 2009), and the pass-off of building information from the contractor to the facility
owner or operator (COBIE2), among others. The Construct phase involves working with
the software companies to implement the model views. This involves creating mappings
of model views into internal data structures. The following section illustrates the
potential barriers of the model view approach and explains the need for a different
approach.

Figure 1. Outline of NBIMS model view development process. This research is aimed at
improving the Design and Construct stages of this process.
NEED FOR A FORMAL AND ROBUST MVD APPROACH
IFC is based on EXPRESS language, which is known to be highly expressive but
lacks a formal definition (Guarino et al. 1997). For example, no standard model view
has been proposed in which a precast architectural facade is modeled and mapped to
and from the IFC schema (Jeong et al. 2009), leading to ad hoc and varied results.
Performance studies of BIM data bases designed to create partial models and run
queries show a strong need for both identifying model views for specific exchanges,
as well as for specifying the exchange protocols in a stricter manner (Nour 2009;
Sacks et al. 2010). The translation from exchange requirements to model views in
the NBIMS process is currently done manually and is error-prone. Moreover, it is time
consuming and expensive. The base entities from which model views can be defined
are not strictly specified. The model views developed are not based on logic
foundations, hence there is no possibility of applying reasoning mechanisms. Moreover,
the required level of detail of model exchanges is an issue that is not specified in
current approaches.
In preparing a set of MVDs, information modelers must determine the
appropriate level of meaning and the typing structure. The structure of a model view
for exchange of product model data between various BIM application tools depends
on the extent to which building function, engineering, fabrication and production
semantics will be embedded in the exchange model. At one end of the spectrum, an
exchange model can carry only the basic solid geometry and material data of the
building model exchanged. The export routines at this level are simple and the
exchanges are generic. In this case, for any use beyond a simple geometry clash
check, importing software would need to interpret the geometry and associate the
meaning using internal representations of the objects received in terms of its own
native objects. At the other end of the spectrum, an exchange file can be structured to
represent piece-type aggregations or hierarchies that define design intent,
procurement groupings, production methods and phasing, and other pertinent
information about the building and its parts. In this case, the importing software can
generate native objects in its own schema with minimum effort, based upon
predefined libraries of profiles, catalogue pieces, surface finishes, and materials, and
does not require explicit geometry or other data in every exchange. The export routines
at this level must be carefully customized for each case, since the information must be
structured so that it is suitable for the importing applications supporting each use
case. Different use cases require different information structures. For example, an
architect might group a set of precast façade panels according to the patterns to be
fabricated on their surfaces, manipulating the pattern as a family; an engineer might
group them according to their weights and the resulting connections to the supporting
structure; a fabricator might group them according to fabrication and delivery dates.
In order for the importing application to infer knowledge from the exchange, the
exporting application should structure the data based on the ordering scheme accepted
at the receiving end. This is an important requirement and needs to be taken into
account when the model exchange requirements are specified.
The level of detail in the provided and exchanged models for each information
unit can vary based on the project stage, purpose of model exchange, model recipient,
and local practices. Further, different delivery methods impose changes in roles and
responsibilities of project parties, which considerably change project deliverables at
each stage for each discipline involved in the project. Current MVD approaches do
not specify such a level of detail requirement for each phase of the project.

Figure 2. Formulation of Product Model Concepts for a Precast Piece

PRODUCT MODEL ‘CONCEPTS’


The idea of Product Model Concept is introduced as a means of modularizing MVD
development and also for improving re-usability. Concepts in the areas of engineering
and design are particular, in the sense that they define a mixture of partial
specifications of reality, the expected function and behavior of that reality, and the
reality of physical systems. Concepts regarding the different levels of realization are
needed to distinguish between definitions and objects within our domain.

The notion of a Concept is that it is a subset of a product model schema that can be
used to create various, higher-level, Model View Definitions (MVD). These modular
sub-units or Concepts can be tested for correctness and completeness separately,
easing validation. A related but different purpose for defining product model sub-
schemas is for querying and accessing part of the instance data associated with a
target sub-schema.

Figure 3. A binding document prepared for precast connection component showing
usage info, mapping to IFC, 3D representation, business rules, and sample part-21 file
snippet.

Initial Test Model: The main criterion of Concepts is that they need to be stand-
alone and testable from the completeness point of view. Each Concept should be a
complete subschema that has no broken links or references. Further, this also applies
to retrievable queries. This requirement of completeness is strongly influenced by the
optional versus mandatory property of some data fields. This may have to be adjusted
for IFC to work well with concepts. Figure 2 shows a grouping of various concepts
for a precast piece. A second and important requirement, which was identified during
the current model view work, is the need to avoid redundancy and rework in terms of
development and testing of model views. Hence, concepts should be generated
following strict guidelines so that they are testable and standalone. For new MVD
development, these should be in a plug-and-play form. Retesting, which is expensive
and time consuming, needs to be avoided. Moreover, the Concept structure developed
should support querying of content-based data from a product model or from a model
server. The current terminological and semantic ambiguities need to be removed by
the formal structure so that it minimizes semantic mismatch during querying for
various applications. Accomplishing this task requires the concept definitions and
constraints (business rules) to be represented rigorously. Figure 3 shows such a
binding diagram for Precast Connection Components. The first part shows the details
about the concept, such as the title (a unique identifier), description, the IFC release
to which the binding conforms, history, references, authors, etc., and usage in view
definition. The usage in view definition shows the reuse of this concept in different
places in a model view. The second part shows a sample 3D representation of the
same, followed by the IFC binding relationships. Additional business rules for
implementation and an example part-21 file snippet are also provided to help the
implementers. The business rules try to answer questions such as: Are the attributes
Required or Optional? Is the attribute referencing a Select or Enumerated type? In the
latter case, what are the allowable values? Are there naming conventions to be
followed? These are called the implementation agreements, and the sample part-21
file is populated with values illustrating these agreements.
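
One way such implementation agreements could be captured and checked
programmatically is sketched below in Python; the concept name, attributes, and
allowed values are invented for illustration and are not taken from the actual precast
binding documents.

from dataclasses import dataclass, field

@dataclass
class AttributeRule:
    required: bool = False                              # Required vs. Optional attribute
    allowed_values: set = field(default_factory=set)    # for Select/Enumerated types
    name_prefix: str = ""                               # simple naming-convention check

@dataclass
class ConceptBinding:
    title: str          # unique identifier of the concept
    ifc_release: str    # IFC release to which the binding conforms
    rules: dict         # attribute name -> AttributeRule

    def validate(self, instance: dict):
        """Return a list of business-rule violations for one instance of the concept."""
        problems = []
        for attr, rule in self.rules.items():
            value = instance.get(attr)
            if value is None:
                if rule.required:
                    problems.append(f"missing required attribute '{attr}'")
                continue
            if rule.allowed_values and value not in rule.allowed_values:
                problems.append(f"'{attr}' has disallowed value '{value}'")
            if rule.name_prefix and not str(value).startswith(rule.name_prefix):
                problems.append(f"'{attr}' violates the naming convention")
        return problems

# Hypothetical usage for a precast connection component concept.
binding = ConceptBinding(
    title="PrecastConnectionComponent",
    ifc_release="IFC2x3",
    rules={
        "Name": AttributeRule(required=True, name_prefix="PC-"),
        "ConnectionType": AttributeRule(required=True,
                                        allowed_values={"Bolted", "Welded", "Grouted"}),
    },
)
print(binding.validate({"Name": "PC-Corbel-01", "ConnectionType": "Bolted"}))  # -> []
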
IMPACT OF THIS RESEARCH
The following are the envisioned benefits of performing this research:
• The requirements, or a standard criterion, for defining the IFC concepts proposed
here should be documented to avoid various research teams generating varying
implementations.
• Such a standard approach will help in the re-use of concepts, thereby resulting in
the re-use of MVDs themselves.
• Concepts, once tested and implemented, can provide a mechanism to generate
model views directly from exchange requirements. This is a novel idea and is yet
to be explored.
• There is a huge potential to reduce the current model view generation–implementation
cycle time of 2-3 years to a more practical 4-6 months by following a modularized
approach using concepts.
The concept-based design of modular MVDs and the certification process use concepts
as a central component of the methodology. A comparison matrix is envisioned to
evaluate the effectiveness of this new methodology. Some of the important criteria for
evaluation are shown in Table 1. An MVD, which supports an exchange requirement,
can be specified solely based on the concept packages. The exchange requirements
have a direct mapping to the concept structure (intuitive) and provide a means to
develop new MVDs in a plug-and-play manner. Extensive work and time are saved by
this new method. Further, the MVDs developed using this method are more consistent
with each other. The testing, validation and certification scenario can benefit from
the products of this research.

Table 1. Evaluation Matrix for Concept-based applications

Criteria                      Description
Expressiveness and Rigor      By using a formal Concept structure, semantics in MVDs can be
                              represented in a consistent manner.
Understanding of complex      Model views represent different levels of detail. Concept-based
views                         development methodology contributes to a better understanding of
                              model views by providing a concise and object-oriented view of the
                              exchange. A view is decomposed into several smaller modular objects
                              that are more manageable.
Traceability                  Traceability is a very important feature in the development process.
                              A more effective translation and transparency of the user needs into
                              the design of MVDs.
Quality                       Better quality of MVD design may be achieved by using concepts that
                              are tested and verified.
Development time and costs    Unnecessary iterations and redundancy are avoided due to the
                              front-loading of concept design. Costs are also reduced by early
                              verification of concepts.
Reuse of MVDs                 Building a new MVD becomes a matter of combining and configuring
                              predefined components from a concept library.

CONCLUSION
Product model schemas such as IFC are rich, but redundant. In order to build effective
exchanges, a new methodology based on formal definition of IFC Concepts is
introduced by this research. Based on the analysis, it is shown that the MVD
development process needs to be transitioned from the current ad hoc manner to a
more rigorous framework and/or methodology similar to the one explained in this
research. The semantic meaning of IFC concepts needs to be defined in a rigorous and
formal manner with strict guidelines. This can help achieve a uniform mapping to and
from internal objects of BIM tools and IFC.
The expressiveness and rigor, where MVD aspects can be represented fully
and in a consistent manner is important. Model views represent different levels of
detail; hence the new methodology should contribute to a better understanding of
model views by providing a concise and object oriented view of the exchange. It
should be possible to decompose the view into several modular objects (Concepts)
that are more manageable and testable. Moreover, traceability is a very important
feature in the development process. A more effective translation and transparency of
the user needs (Exchange Requirements) into the design of MVDs are required.
Avoiding unnecessary iterations and redundancy of IFC concepts can reduce
development time and costs. Work is still in progress in defining the IFC Concepts
and validating them. Based on the impact expected from this research, there is a
compelling need to complete this research in a time-bound manner to make its
products available to the IFC development community.
REFERENCES
Eastman C., F. Wang, S-J You, D. Yang, (2005) Deployment of An AEC Industry Sector
Product Model, Computer-Aided Design 37:11, pp. 1214–1228.

Eastman, C., Teicholz, P., Sacks, R. and Liston, K., (2008) BIM Handbook: A Guide to
Building Information Modeling for Owners, Managers, Designers, Engineers and
Contractors, John Wiley & Sons, Inc., New Jersey.
Eastman, C., Jeong, Y.-S., Sacks, R., Kaner, I., (2010). Exchange Model and Exchange
Object Concepts for Implementation of National BIM Standards Journal Of
Computing In Civil Engineering, 24:1 (25).
GSA (2010). GSA BIM Program Overview, Available:
http://www.gsa.gov/portal/content/102276.
Guarino, N., Borgo, S., and Masolo, C., (1997). Logical modelling of product knowledge:
Towards a well-founded semantics for step, Citeseer.
Hietanen, J. and S. Final (2006). "IFC model view definition format." International Alliance
for Interoperability. In Rebolj, D.(ed.): Proceedings of the 24th CIB W78
Conference, Maribor. 26-29 June 2007.
Jeong, Y-S, Eastman C.M, Sacks R. and Kaner I, (2009) Benchmark tests for BIM data
exchanges of precast concrete Automation in Construction 18, 4, July 2009, pp 469-
484.
NIBS, (2008) United States national building information modeling standard version 1—Part
1: Overview, principles, and methodologies. (http://nbimsdoc.opengeospatial.org)
NIST, (2004) Gallaher P, O‘Connor A, Dettbarn, J., Gilday L, Cost Analysis of Inadequate
Interoperability in the U.S. Capital Facilities Industry, NIST GCR 04-867, U.S.
Department of Commerce Technology Administration National Institute of Standards
and Technology, Advanced Technology Program Information Technology and
Electronics Office Gaithersburg, Maryland 20899.
Nour, M. (2009). Performance of different (BIM/IFC) exchange formats within private
collaborative workspace for collaborative work, ITcon Vol. 14, Special Issue
Building Information Modeling Applications, Challenges and Future Directions , pg.
736-752, http://www.itcon.org/2009/48
Precast IDM, (2009) Eastman, C., Sacks, R., Panushev, I., Aram, V., and Yagmur, E.
Information Delivery Manual for Precast Concrete, PCI-Charles Pankow Foundation.
Available: http://dcom.arch.gatech.edu/pcibim/documents/IDM_for_Precast.pdf.
Precast MVD, (2010) Eastman, C., Sacks, R., Panushev, I., Venugopal, M., and Aram, V.
Precast Concrete BIM Standard Documents:Model View Definitions for Precast
Concrete, PCI-Charles Pankow Foundation. Available:
http://dcom.arch.gatech.edu/pcibim/documents/Precast_MVDs_v2.1_Volume_I.pdf.
Sacks, R, Kaner, I., Eastman, C.M., and Jeong, Y-S, (2010) The Rosewood Experiment –
Building Information Modeling and Interoperability for Architectural Precast
Facades, Automation in Construction 19 (2010) 419–432.
Venugopal, M., Eastman, C., Sacks, R., Panushev, I., Aram, V., (2010) Engineering
semantics of model views for building information model exchanges using IFC,
Proceedings of the CIB W78 2010: 27th International Conference –Cairo, Egypt, 16-
18 November.
Framework for an IFC-based Tool for Implementing
Design for Deconstruction (DfD)

A. Khalili¹ and D. K. H. Chua²

¹PhD student, Department of Civil and Environmental Engineering, National University
of Singapore; email: alireza@nus.edu.sg
²Associate Professor, Department of Civil and Environmental Engineering, National
University of Singapore; email: cvedavid@nus.edu.sg

ABSTRACT
Design for deconstruction is a way of thinking about and designing a building
to maximize its flexibility and to ensure that the building can be disassembled for
various reasons, such as an aging community or obsolescence. The goal of
designing for deconstruction is to design building elements to be easily disassembled.
This paper presents an IFC-based framework to enhance the design for disassembled
building systems for the construction of a new facility. The framework integrates
architectural design with the ability of disassembly and constructability in four main
modules. The first module extracts building components’ properties from IFC and
creates an internal data structure. The second module utilizes created data structure to
construct a graph data model. The third module generates possible disassembly
solutions based on disassembly criteria. The last module compares disassembly
sequence of existing building with assembly sequence of new designed building to
obtain optimal disassembly sequence.

INTRODUCTION
The disassembly of buildings to recover materials and components for future
reuse is not widely practiced in the modern construction industry. No matter how well
a structure is built, it will not last forever. Structural engineers long ago developed the
idea of a "service life", in which a building (or other structure) is designed to be
structurally durable for a given number of years after construction, commonly 35
years today. Moreover, end-of-life for a building generally means end-of-life for the
bulk of its component materials. Conventional construction methods create heavily
integrated building systems that cannot be dismantled piece by piece. Sustainable
practices seek to eliminate waste and reduce demand for new materials, largely by
turning linear processes (such as the standard life cycle of a building, from
construction to useful life to demolition) into cyclical processes that maximize reuse
and minimize waste of resources, as shown in Figure 1.
The objective of this paper is to develop an IFC-based framework to optimize
disassembly sequences of an existing building. The framework integrates
architectural design and assembly planning of new designed building with the ability
of disassembly of an existing building in four main modules. The first module
extracts building components’ geometrical and topological information as well as
semantic information and generates an internal data structure. The second module
utilizes geometrical and semantic information of existing buildings to construct a
graph data model using IFC. The third module generates possible disassembly
solutions based on disassembly criteria. The last module compares disassembly
sequence of existing building with assembly sequence of new designed building to
obtain optimal disassembly sequence.

Figure 1. Built environment life cycle.

FRAMEWORK OVERVIEW
The overall design for disassembly (DfD) framework is shown in Figure 2
comprising four key modules as depicted in the model. The following are the key
modules:

IFC Browser and Database Generator. The first module develops an internal data
structure from extracted physical and spatial properties, such as dimensions, materials
and topological relations from CAD drawings in IFC model. This module uses a CAD
tool which can export 3D CAD drawings into IFC files. It can also read IFC files and
transfer them into 3D CAD drawings. Autodesk Revit with its IFC2x utility and
ArchiCAD (Graphisoft, 2005) with its add-on interface are two typical IFC-
compatible CAD applications available on the market.
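
As an illustration of what this module involves, the following sketch uses the
open-source IfcOpenShell library (one possible choice; the framework does not
prescribe a specific library) to collect element identity, type, and element-to-element
connection relationships from an IFC file.

import ifcopenshell

def build_component_database(ifc_path):
    """Read an IFC file and return component properties plus connection pairs."""
    model = ifcopenshell.open(ifc_path)
    components = {}
    for element in model.by_type("IfcBuildingElement"):
        components[element.GlobalId] = {
            "type": element.is_a(),     # e.g. IfcBeam, IfcColumn, IfcSlab
            "name": element.Name,
        }
    connections = []
    for rel in model.by_type("IfcRelConnectsPathElements"):
        connections.append((rel.RelatingElement.GlobalId,
                            rel.RelatedElement.GlobalId))
    return components, connections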

Graph Model Generator. The second module maps geometrical and semantic
information into a graph data representation model. This module has a parser, which
is used to transpose geometrical and topological relationships of structural elements
from IFC files to a graph model (GM). This parser includes a user interface that helps
the user to match up topological relationships if necessary. It is convenient to
transpose the assembly’s drawing into a set of logical expressions and graphical
representations.
One of the most famous diagrams in DfD is the “nodes and edges” or
“Graph Model” diagram, where nodes represent the physical parts (structural
components) and edges (topological relationships) represent the existing connections
among parts. Figure 3-a shows a sample concrete framework represented by a graph.
The “Graph Model” (Figure 3-b) is the simplest disassembly graph: it is constituted by
as many nodes as the number of the assembly’s parts, and by lines connecting these
nodes, which represent the contact conditions.

Figure 2. Design for Deconstruction (DfD) framework.

Figure 3. (a) Sample concrete structure, (b) graph model of sample structure.

As a logical data model, Graph Model (GM) is a pure graph representing the
adjacency and connectivity relationships among the internal elements of a building. In
order to implement network-based analysis such as graph traversal algorithms,
checking for feasibility and creating transition matrix in the GM, the logical network
model needs to be complemented by a 3D geometric network model that accurately
represents these geometry properties, called Geometric Network Model (GNM) (Lee
and Zlatanova 2008). In order to interpret geometrical and topological data of a
building model, identifying a structural component graph is necessary. One of these
information models that can handle geometrical and topological information is IFC.
Combining the geometry representation and the placement of an object should
allow for the system to retrieve all required information in relation to shape, size,
placement and orientation. Geometrical data will be mapped to the topological graph.
The definitions of geometric representations in the IFC Releases 2.0 and 2.x are quite
close to the well-approved STEP geometric definitions of ISO 10303-42:1994. Any
object in IFC with a geometric representation has two attributes: ObjectPlacement
and Representation (Liebich 2004). In IFC, the IfcRelConnectsPathElements entity deals
with “one-to-one” relationships and creates a logical connection between two objects.
Having access to attributes of this entity in IFC provides topological relationships of
structural components from IFC.
The GM is converted to an adjacency matrix. Assume a three-dimensional
assembly structure, S, formed by m parts. Consider components Ci and Cj ∈ S: if Ci
is in contact with Cj, then Ci,j = 1; otherwise Ci,j = 0. The adjacency matrix is
symmetric; all the diagonal elements are Ci,i = −1, and rows and columns are swapped
together in order to have the non-null elements nearest to the matrix diagonal, as
shown in Figure 4 for the sample concrete frame.
B1 B 2 B3 B 4 B5 B6 B7 B8 C1 C 2 C 3 C 4
B1 1 1 0 1 0 0 0 0 1 1 0 0
B 2 1 1 1 0 0 0 0 0 0 1 1 0
B3 0 1 1 1 0 0 0 0 0 0 1 1
B4 1 0 1 1 0 0 0 0 1 0 0 1
B5 0 0 0 0 1 1 0 1 1 1 0 0
B6 0 0 0 0 1 1 1 0 0 1 1 0
B7 0 0 0 0 0 1 1 1 0 0 1 1
B8 0 0 0 0 1 0 1 1 1 0 0 1
C1 1 0 0 1 1 0 0 1 1 0 0 0
C2 1 1 0 0 1 1 0 0 0 1 0 0
C3 0 1 1 0 0 1 1 0 0 0 1 0
C4 0 0 1 1 0 0 1 1 0 0 0 1
Figure 4. Adjacency matrix of sample structure.
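
A minimal sketch of building the GM and deriving its adjacency matrix from the
extracted connections is given below, using the networkx and numpy libraries purely
as an implementation convenience; the diagonal convention follows the description
above.

import networkx as nx
import numpy as np

def graph_model(components, connections):
    gm = nx.Graph()
    gm.add_nodes_from(components)       # nodes = structural components
    gm.add_edges_from(connections)      # edges = contact/connection relationships
    return gm

def adjacency_matrix(gm):
    order = list(gm.nodes)
    n = len(order)
    a = np.zeros((n, n), dtype=int)
    for i, u in enumerate(order):
        for j, v in enumerate(order):
            if i == j:
                a[i, j] = -1            # diagonal convention as described in the text
            elif gm.has_edge(u, v):
                a[i, j] = 1             # Ci in contact with Cj
    return order, a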

Disassembly Feasibility Analyzer. The third module utilizes the GM to analyze the
feasibility of disassembly sequences and generate the transition matrix (TM). The TM is
then searched to create binary trees. The aim of the feasible disassembly operation
analysis is the individuation of the physically feasible disassembly tasks that
successfully realize the division between the components. Disassembly sequences are
generated after identification of contact constraints between the surfaces of the
components. Following are the contact constraints:
1- Removal Space:
This constraint is defined to identify the directions along which to separate
each component from the assembly. These directions can be represented through a
topological spherical space, called the “Gauss Sphere” (GS). Having chosen a
component, the analysis for identifying the disassembly directions is conducted as
shown in Figure 5. This figure shows that the available removal space for the
column is the intersection of the removal spaces of the wall and the slab. If the directions’
sets are empty, then the component cannot be removed and the GS is null. An
advanced procedure for searching the interferences has been introduced by
Romney et al. (1995); they translated into an algorithm the physical and intuitive
principle according to which a person is sure to be able to move a body if he/she
completely sees the three-dimensional borders of the object. Relying on the
projections of the three-dimensional body’s borders onto a plane orthogonal to the
chosen direction, it is possible to discern if the movement will have a positive
result, thus disassembling the part from the rest.

Figure 5. Individuation of the Gauss sphere.

2- Structural Supporting:
Removing a component from the assembly may cause instability of the
structure. Supporting relationships of components are defined by assigning two
attributes to each component. These attributes are “Supporting” and “Supported
by”. For example, the “Supporting” attribute of component “A” depicts the
components which rely on A; removing A before those components causes
regional or global instability in the structure. The “Supported By” attribute of “A”
represents the component(s) which carry “A” (Figure 6).
These attributes are extracted from the “IFCSupportingAttribute” entity in IFC.

Figure 6. Structural supporting.


3- Constructability rules
The disassembled components should satisfy constructability, which covers
information about transportation limitations, lifting constraints, safety issues,
environmental limitations, and regional and international regulations. For example,
components that are too big to be transported to other sites or too heavy to be lifted
are not among the feasible disassembly solutions (see the sketch after this list).
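
A hedged sketch of the three feasibility checks described above is given below: a
component can be removed from the current (sub)assembly only if it still has a free
removal direction, it is not supporting any remaining component, and it satisfies the
constructability limits. The attribute names and limit values are illustrative, not
taken from the paper.

def removal_feasible(component, remaining, removal_directions, supporting,
                     max_weight=10.0, max_length=12.0):
    # 1- Removal space: the set of free directions (Gauss sphere) must not be empty.
    if not removal_directions.get(component):
        return False
    # 2- Structural supporting: nothing still in place may rely on this component.
    if any(other in remaining for other in supporting.get(component, ())):
        return False
    # 3- Constructability rules: transport and lifting limits (illustrative values).
    if component.weight > max_weight or component.length > max_length:
        return False
    return True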
The algorithm then takes into consideration the disassembly operations that
produce two subassemblies, both composed of several components. Taking all the
generated subassemblies Sk ∈ TTM (Temporary Transition Matrix), it is searched, for
each generic subassembly Si ⊂ Sk, whether a complementary subassembly
Sj ∈ TTM exists such that Si ∪ Sj contains all the parts of the initial assembly (A) or of
the subassembly Sk ∈ TTM. Finally, it is calculated, through a feasible disassembly
operation analysis of their potential movements, whether it is feasible to generate
every couple (Si, Sj). The algorithm ends by automatically producing the final
Transition Matrix (TM) for every disassembly operation; the value −1 is assigned to
every father and the value 1 to every child.
According to Zhang and Kuo (2002), the TM is defined in the following way: suppose
the structure is characterized by i feasible subassemblies and j feasible disassembly
actions. The generic element TM(i, j) is −1 if action j disassembles the parent
component i, and is +1 if action j creates the son component i. All other elements are 0.
For the sample structure shown in Figure 7, the TM is presented in Table 1.
The TM matrix must be interpreted in the following way: the columns
represent the disassembly actions, while the rows represent the feasible
subassemblies. By following the matrix it is possible to verify which transition
corresponds to each action. In the matrix, disassembly actions of a certain
subassembly are represented by the −1 values in the row matching that subassembly.
Only if, in that row, more than one element is −1 are there multiple disassembly
options for that subassembly. The columns, instead, always show just one component
whose value is −1 and two components whose values are +1: this is
because each operation always creates two subassemblies (sons) from each assembly
(father). Starting from the first row, the element −1 in columns 1, 2, and 3 represents
three different disassembly actions. For example, Action 3 represents the transition
from parent subassembly BCW (value −1) to the two son subassemblies CW and B
(values +1). Now, moving to row 6, corresponding to subassembly B, there are no
more elements in this row whose value is −1. This means that there are no more
disassembly actions for this component. If, instead, row 4 is read, corresponding to
subassembly CW, a value of −1 is found only in column 6: this means there is just one
disassembly alternative, which results in the individuation of C and W. The analysis of
the TM allows all the feasible sequences to be explored. It is the space of all possible solutions
(Romney, Godard et al. 1995).

Figure 7. Example of generic assembly composed of three components and disassembly graph.

Table 1. Transition matrix of generic assembly.


Assembly        Action
            0    1    2    3    4    5    6
BCW         1   -1   -1   -1    0    0    0
BC          0    1    0    0   -1    0    0
BW          0    0    1    0    0   -1    0
CW          0    0    0    1    0    0   -1
B           0    0    0    1    1    1    0
C           0    0    1    0    1    0    1
W           0    1    0    0    0    1    1

Having the TM of an assembly (A), some further processes are necessary in
order to find the disassembly sequences. The first step is the creation of binary trees.
Each tree is composed of two branches starting from each node: the one positioned on
the right represents the alternative disassembly action (to its own root), while the left
one represents the first son obtained through that action. So, if just the first son (or just
the second) of an AND/OR graph is considered, this is nothing but an n-ary tree. As
each assembly has two sons, this procedure generates a couple of binary trees. For
example, in the previous TM, starting from the first operation, the value +1 is set to
the root of the tree; by moving through the column, the first +1 value must be found.
Then, by moving through the corresponding line, the first −1 value must be found: the
number of the corresponding column is then set for the value of the first node of the
left branch. By going on and on with this procedure, the vector [1, 4] can be found,
and this is the left branch of the tree, as shown in figure 8. Then, by back-tracking to
the last row analyzed, another feasible action is chosen (if any) and following this
procedure, the algorithm will identify all the sequences. The complexity order of the
total disassembly problem is a function of the number of subassemblies that can be
produced, corresponding to the simple dispositions of m parts. The problem is clearly
of factorial order. This method of representation can significantly reduce the order of
complexity without loss in generality.
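
The sequence enumeration can be illustrated with the small sketch below, which walks
a transition matrix organized as in Table 1; the recursion returns one representative
ordering per branch rather than all interleavings, a simplification of the binary-tree
construction described above. The first sequence found, actions [1, 4], matches the
left branch mentioned above.

def children_of(tm, action):
    return [row for row, values in tm.items() if values[action] == 1]

def actions_for(tm, subassembly):
    return [j for j, v in enumerate(tm[subassembly]) if v == -1]

def sequences(tm, subassembly):
    """All feasible disassembly action sequences for one subassembly."""
    acts = actions_for(tm, subassembly)
    if not acts:                          # single component: nothing left to disassemble
        return [[]]
    result = []
    for j in acts:
        tails = [[]]
        for child in children_of(tm, j):
            tails = [t1 + t2 for t1 in tails for t2 in sequences(tm, child)]
        result.extend([[j] + tail for tail in tails])
    return result

# Transition matrix of Table 1 (rows = subassemblies, columns = actions 0..6).
TM = {
    "BCW": [ 1, -1, -1, -1,  0,  0,  0],
    "BC":  [ 0,  1,  0,  0, -1,  0,  0],
    "BW":  [ 0,  0,  1,  0,  0, -1,  0],
    "CW":  [ 0,  0,  0,  1,  0,  0, -1],
    "B":   [ 0,  0,  0,  1,  1,  1,  0],
    "C":   [ 0,  0,  1,  0,  1,  0,  1],
    "W":   [ 0,  1,  0,  0,  0,  1,  1],
}
print(sequences(TM, "BCW"))   # -> [[1, 4], [2, 5], [3, 6]]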

Figure 8: Binary tree of generic assembly.


Disassembly Optimizer. Once all sequence vectors are obtained, it is possible to
easily identify the optimal sequence based on the objective function defined in the
fourth module. In this study, the detached components should be compared to the
required components of the new building. In addition, the optimal disassembly
sequence should be chosen so that it provides a close match to the assembly sequence
of the new building.
This process is carried out using a graph/subgraph matching algorithm.
Geometrical and topological information of the components of the new building is
extracted from IFC using a procedure similar to that explained above. The VFLib
Dynamic-Link Library (DLL) developed by Cordella (2004) is used to implement the
graph matching part of this research. The VF algorithm will perform a
sub-graph isomorphism in which there is a sub-graph of the first graph which is
isomorphic to the second graph. It also allows for retrieving the mapping after a
match has been made. It allows for context checking in which nodes can carry
attributes, and those attributes can be tested against the corresponding attributes for
the isomorphic node, and the matching can be rejected based on the outcome of such
tests.
Finally, for each operation and component, it is possible to associate a
disassembly index (D) obtained by choosing some parameters such as the matching
index (m), cost (c), time (t) and necessary movements (n). So, for each solution,
D = f(m, c, t, n), and the solution(s) with the maximum or minimum value of D is/are
selected for disassembly, and the components are used in the assembly of the newly
designed building.
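
Since the form of D = f(m, c, t, n) is left open here, the following sketch assumes a
simple weighted-sum form purely for illustration; the weights and field names are
assumptions, not values from this study.

def disassembly_index(matching, cost, time, movements,
                      weights=(1.0, -0.3, -0.2, -0.1)):
    # Higher matching index is better; cost, time, and movements are penalized.
    wm, wc, wt, wn = weights
    return wm * matching + wc * cost + wt * time + wn * movements

def select_best(solutions):
    """solutions: list of dicts with keys m, c, t, n describing each candidate sequence."""
    return max(solutions,
               key=lambda s: disassembly_index(s["m"], s["c"], s["t"], s["n"]))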

CONCLUSION
In this paper, a new framework is presented that automatically produces all the
possible sequences of disassembly of a building structure using IFC. A new method,
the Graph Model, is used to represent the topological relations of components in the
building, with the required information obtained from IFC. This method is able to
drastically reduce the set of disassembly operations of the components, which in the
case of complex systems is of exponential type, without losing generality, making the
exact calculation of all the disassembly sequences possible. Applying structural and
operational constraints decreases the total number of sequences to the set of feasible
sequences. Graph search and graph matching algorithms are used to identify the
structural components of the existing building and their equivalents in the newly
designed building. Then, by associating an index with the different sequences, it is
possible to find the optimal disassembly sequence. Future work will include the
implementation of software code and a disassembly example of a complex case study.

REFERENCES
Cordella, L., P. Foggia, et al. (2004). "A (sub) graph isomorphism algorithm for
matching large graphs." IEEE Transactions on Pattern Analysis and Machine
Intelligence: 1367-1372.
Lee, J. and S. Zlatanova (2008). "A 3D data model and topological analyses for
emergency response in urban areas." Geospatial Information Technology for
Emergency Response.
Liebich, T. (2004). "IFC 2x Edition 2 model implementation guide." International
Alliance for Interoperability.
Romney, B., C. Godard, et al. (1995). "An efficient system for geometric assembly
sequence generation and evaluation." COMPUTERS IN ENGINEERING: 699-
712.
Zhang, H. and T. Kuo (2002). A graph-based approach to disassembly model for end-
of-life product recycling, IEEE.
Temporary Facility Planning of a Construction Project Using BIM
(Building Information Modeling)
Hyunjoo Kim¹ and Hongseob Ahn²

¹Assistant Professor, Department of Civil Engineering, California State University,
Fullerton, USA, PH: 657-278-3867; FAX: 657-278-3916, email: hykim@fullerton.edu
²Professor, Architectural Engineering, Kunsan University, Korea, PH: +82-063-463-0785,
FAX: +82-63-469-4883, email: hsahn@kunsan.ac.kr

ABSTRACT
The key role in safety management is to identify any possible hazard before it
occurs by identifying any possible risk factors which are critical to risk assessment.
This planning/assessment process is considered to be tedious and requires a lot of
attention due to the following reasons: firstly, falsework (temporary structures) in
construction projects is fundamentally important. However, the installation and
dismantling of those facilities are among the high-risk activities on job sites.
Secondly, temporary facilities are generally not clearly delineated on the building
drawings. It is our strong belief that safety tools have to be simple and convenient
enough for the jobsite people to manage them easily, and flexible enough for the
various situations that may occur to different degrees. In order to develop the safety
assessment system, this research utilizes BIM technology, collects important
information by importing data from BIM models, and uses it in the planning stage.

INTRODUCTION
In spite of various efforts of safety professionals and strong governmental
reinforcement, the high frequency and severity of injuries and illnesses have not
decreased sufficiently in the construction industry. The recent development in BIM
(Building Information Modeling) technology encourages us to utilize it in the field of
accident prevention as well as in design and project management.
The key role in safety management is to identify any possible hazard before it
occurs by developing prevention measures which are critical to risk assessment. This
planning/assessment process is considered to be tedious and requires a lot of attention
due to the following reasons: firstly, falsework (temporary structures) in construction
projects is fundamentally important. However, the installation and dismantling of
those facilities are one of the high risk activities in the job sites. Secondly, temporary
facilities are generally not clearly delineated on the building drawings. It is our strong
belief that safety tools have to be simple and convenient enough for the jobsite people

627
628 COMPUTING IN CIVIL ENGINEERING

to manage them easily and be flexible for any occasions to be occurred at various
degrees.
Current CAD systems are mostly used for physical models of buildings,
representing the static results of design and construction. While these models provide
a topological description of buildings in the way different objects (or entities) are
connected together and store specific architectural features and attributes, the authors
recognize that CAD programs typically represent buildings mostly as geometric
models. Information flow from design to construction is critical and, when efficiently
controlled, it allows for design-build and other integrated project delivery methods to
be favored. The impact of BIM processes has been more evident in cutting-edge
buildings and innovative processes. Utilizing the BIM technology, this paper
developed a new methodology of modeling an installation and dismantling of
falseworks in construction projects. Using the methodology developed in this
research, it is expected that the construction manager could create falsework models
(temporary structures) embedded with certain guidelines and regulations containing
the safety related requirements of a building in the planning/assessment process. This
research established a procedure using the Building Information Modeling (BIM)
technique to assess possible hazard(s) by visually representing falsework objects
and their locations in a building.
designing a scaffolds layout of a building, but could be developed further in planning
on various temporary facilities.
A number of case studies have illustrated how designers have implemented
collaborative work via 3D modeling with contractors to enhance constructability. In
that sense, we believe that BIM is one of the most recent technologies that has gained
acceptance in the AEC industry. This study intends to develop a safety assessment
system based on the BIM technology which will enable the jobsite people to plan
ahead on safety management and eventually achieve more productivity. One of the
possible benefits from the BIM based safety assessment system can be efficient
hazard identification at the planning process which will focus on the movements of
workers incorporated with other resources such as different kinds of equipment,
materials and tools.
PREVIOUS RESEARCH
Year after year construction is one of the most dangerous industries, with
approximately 1,050 construction workers dying on the job each year. Although
construction employment equals just over 5% of the workforce, construction injuries
account for in excess of 17% of all occupational deaths. One out of every seven
construction workers is injured each year and one out of every fourteen will suffer a
disabling injury.
Jaselskis et al. (1996) developed a strategy for improving construction safety
performance and Hinze et al. (1995) measured the number of safety violations and
fatalities, which revealed interesting trends. Interestingly, Carter et al. (2006) focused
on safety hazard identification on construction projects and described that
unidentified hazards represent the most unmanageable risks. The research utilized an IT
(Information Technology) tool in construction project safety management with a
computerized module. De la Garza et al. (1998) analyzed safety indicators in
construction projects and Hinze et al [5] did research in identifying factors that
significantly influence the safety performance of specialty contractors. Jannadi et al
(2003) worked on the assessment of risk for major construction activities. And
Kartam (1997) emphasized the importance of effective planning and control
techniques to prevent construction accidents.
RESEARCH METHODOLOGY
Figure 1 shows the correspondence between BIM data and 3D simulation
model. The major BIM design software (ArchiCAD) is able to export data to an
ifcXML, which is a non-proprietary, open standard. ifcXML receives a lot of support
from government and the AEC industry. The proposed approach is that the
requirements of temporary facilities are extracted into the safety management system
for the installation and dismantling of the falseworks. In this experiment, the scaffolds
were built to demonstrate the hazard identification process, but a future paper is under
preparation that will show an automatic extraction of requirements and identification
of all the temporary facilities from a BIM data by saving the BIM model in ifcXML.
In figure 2, an example of a 5 story office building is shown in CAD representation
and stored in ifc file (shown in the figure background). Next the types of scaffolds
and their locations are taken from the ifc file once it is saved in ifcXML. Finally the
safety management system developed in this research will identify temporary
facilities and their locations. Details of modeling process are described in the next
section. These steps are explained in further detail in the case study section.

Figure 1. Process of BIM data and 3D model


CASE STUDY
This section describes a case worked through to illustrate the process of
establishing a hazard identification model corresponding with the building design
represented in a BIM and using it to automatically identify types and locations of
different temporary facilities in the construction phase. The process consists of six
steps:

Step 1: Create a BIM model
Step 2: Obtain the building information in ifcXML from the BIM data
Step 3: Build the construction schedule according to the BIM data
Step 4: Identify the location of scaffolds necessary to construct the building
Step 5: Create a 3D model
Step 6: Build a 3D simulation with construction schedule and falseworks

Figure 2. Risk assessment process using BIM

Step 1: Create a BIM model


In our risk assessment process, the BIM technology was used to extract the
geometry of a building to enable modeling with data from the instance data model to
build a 3D model. ArchiCAD software was used to build a BIM model, which was
saved in a non-proprietary standard model, ifcXML. Figure 3 shows a BIM model of
a five story office building, built in ArchiCAD.

Step 2: Obtaining the building geometry from the ifcXML data.


ifcXML is an XML format defined by ISO 10303-28 (STEP-XML), having
file extension “ifcXML”. This format is suitable for interoperability with XML tools
and exchanging partial building models. In general, the size of an XML data file from
an architectural design may easily exceed several MB. It is not economical to deal with
such a large amount of data when extracting building geometry from a BIM model. In
this research, a Web-based XML file reader was developed for the XML
transformation purpose. The building information for building design objects such as
walls, floors, and slabs can be easily extracted and transformed into a 3D visual model.
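
A minimal sketch of such an extraction step is given below, using Python's standard
XML tools; tag names follow the IFC entity names, but because namespaces and
document layout vary between exporting tools, the matching here is deliberately loose
and purely illustrative.

import xml.etree.ElementTree as ET

WANTED = {"IfcWall", "IfcWallStandardCase", "IfcSlab", "IfcColumn", "IfcBeam"}

def extract_elements(ifcxml_path):
    tree = ET.parse(ifcxml_path)
    elements = []
    for node in tree.iter():
        local_tag = node.tag.split("}")[-1]     # strip any XML namespace prefix
        if local_tag in WANTED:
            elements.append({
                "type": local_tag,
                "id": node.get("id"),
                # Name may be an attribute or a child element, depending on the exporter.
                "name": node.get("Name") or node.findtext("Name"),
            })
    return elements
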
Step 3: Build the construction schedule according to the BIM data
In this step, all the building components are retrieved and stored in the safety
management system. The next step is to calculate the quantities of each building
component such as walls, floors, slabs, finishes, and openings. In this case study, the
building area is 15,000 SQFT. According to the standard estimation, each floor is to
be completed in two weeks, totaling 10 weeks to complete the five-story office
building. Figure 4 shows that the entire construction schedule can be segmented into
ten different weeks based on the predicted quantities of each component in the office
building. Figure 4 also shows a 3D model which has
completed the second floor of the building along with scaffolds built around the
perimeter of the building.
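The schedule segmentation follows directly from these quantities; a small sketch of the arithmetic, assuming the 15,000 SQFT total, five floors, and two weeks per floor used in this case study, is:

# Sketch: derive the ten-week schedule used in the case study.
total_area      = 15_000.0    # SQFT, from the BIM quantity take-off
floors          = 5
weeks_per_floor = 2           # standard estimate assumed above

area_per_floor = total_area / floors        # 3,000 SQFT per floor
total_weeks    = floors * weeks_per_floor   # 10 weeks in total

schedule = (1..floors).map do |floor|
  { floor: floor,
    start_week: (floor - 1) * weeks_per_floor + 1,
    end_week:   floor * weeks_per_floor }
end
schedule.each { |s| puts "Floor #{s[:floor]}: weeks #{s[:start_week]}-#{s[:end_week]}" }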

Figure 3. BIM model of a five story office building

Figure 4. BIM model with schedule information



Step 4: Identify the location of the scaffolds necessary to construct the building
The use of scaffolds as tools for working at varied levels on construction sites is a fixture of the construction industry. Unfortunately, there have been many accidents involving scaffolding. There are many possible reasons for these accidents, but one of the important reasons is that many scaffolds are improperly installed because of a lack of knowledge or a misunderstanding of the exact locations of the scaffolds. Scaffolds must be designed, installed, loaded, and dismantled properly, in full accordance with OSHA regulations.
Besides the scaffolds themselves, each worker is to be provided with additional protection from falling hand tools, debris, and other small objects through the installation of toe boards, screens, or guardrail systems. In this research, one type of scaffold, the carpenters’ bracket scaffold, was applied; an example of the scaffold and its guardrails is shown in Figure 5.

Figure 5. Specifications of scaffolds

Step 5: Create a 3D model


In this step, a 3D model of the BIM model is re-generated in a computer program called Google SketchUp. SketchUp was used because it is a commonly used software program with extensions, known as “Rubies” and written in the Ruby programming language, that augment SketchUp’s 3D modeling capabilities.
In the program, the time sequence of the building construction is combined with the scaffold specifications and visualized in a 3D model in Google SketchUp.

Step 6: Build a 3D simulation with construction schedule and scaffolds


As shown in Figure 6, a 3D simulation is built in Google SketchUp along with the construction schedule and the temporary facility (scaffolds). For the purpose of simplicity, each floor of 3,000 SQFT is to be constructed in two weeks. Therefore, scaffolds are to be placed on each floor according to its construction schedule.
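As a rough illustration of how such a placement can be scripted in SketchUp's Ruby console, the sketch below extrudes a simple box-shaped scaffold run at a given floor level; the dimensions, coordinates, and the helper name add_scaffold_run are placeholders and do not reproduce the carpenters' bracket scaffold specifications of Figure 5.

# Sketch (SketchUp Ruby console): place a simple scaffold volume at a floor level.
# Lengths are in inches (SketchUp's internal unit); all values are placeholders.
def add_scaffold_run(x0, y0, length, level_height)
  entities = Sketchup.active_model.active_entities
  depth  = 36.0     # platform depth, placeholder
  height = 120.0    # one lift of scaffolding, placeholder

  pts = [
    Geom::Point3d.new(x0,          y0,         level_height),
    Geom::Point3d.new(x0 + length, y0,         level_height),
    Geom::Point3d.new(x0 + length, y0 + depth, level_height),
    Geom::Point3d.new(x0,          y0 + depth, level_height)
  ]
  face = entities.add_face(pts)
  face.pushpull(-height)   # extrude upward; the sign depends on the face orientation
end

# Place one scaffold run per floor as each two-week segment of the schedule completes.
storey_height = 144.0      # 12 ft storey height, placeholder
(0...5).each { |floor| add_scaffold_run(0, -40.0, 600.0, floor * storey_height) }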

Figure 6. 3D simulation in Google SketchUp

CONCLUSIONS
BIM technology supports the special characteristics of safety-oriented construction process planning. The case study showed that the safety management system proposed in this paper can use BIM technology to optimize the design process to create safer construction environments.
Utilizing BIM technology, this paper proposed a new methodology so that a construction manager could create falsework models embedded with safety regulations and guidelines containing the safety-related requirements of a building in the planning/assessment process. This research demonstrated a procedure for using Building Information Modeling (BIM) to assess possible hazards by visually representing falsework objects and their locations in a building.
The prototype described in the paper is mainly for designing a scaffold layout for a building, but it could be developed further for planning various other temporary facilities.
REFERENCES
Jaselskis, E., Anderson, S., and Russel, J., “Strategies for Achieving Excellence in Construction Safety Performance”, Journal of Construction Engineering and Management, Vol. 122, pp. 61-70, 1996.
Hinze, J. and Russel, D., “Analysis of Fatalities Recorded by OSHA”, Journal of Construction Engineering and Management, Vol. 121, pp. 209-214, 1995.
Carter, G., and Smith, S., “Safety Hazard Identification on Construction Projects”, Journal of Construction Engineering and Management, Vol. 132, pp. 197-205, 2006.
Garza, J., Hancher, D., and Decker, L., “Analysis of Safety Indicators in Construction”, Journal of Construction Engineering and Management, Vol. 124, pp. 312-314, 1998.
Hinze, J., and Gambatese, J., “Factors That Influence Safety Performance of Specialty Contractors”, Journal of Construction Engineering and Management, Vol. 129, pp. 159-164, 2003.
Jannadi, O. and Almishari, S., “Risk Assessment in Construction”, Journal of Construction Engineering and Management, Vol. 129, pp. 492-500, 2003.
Kartam, N., “Integrating Safety and Health Performance into Construction CPM”, Journal of Construction Engineering and Management, Vol. 123, pp. 121-126, 1997.
Energy Simulation System Using BIM (Building Information Modeling)

Hyunjoo Kim1, and Kyle Anderson2

¹ Assistant Professor, Civil and Environmental Engineering, California State University, Fullerton, PH: 657-278-3867, FAX: 657-278-3916, email: hykim@fullerton.edu
² Graduate Student, Civil and Environmental Engineering, California State University,
Fullerton, PH: 657-278-3012, FAX: 657-278-3916, email: kyleand@csu.fullerton.edu

ABSTRACT
It is recognized that there is a need in the architecture, engineering, and
construction industry for new programs and methods of producing reliable energy
simulations using BIM (Building Information Modeling) technology. Current
methods and programs for running energy simulations are time-consuming, difficult to understand, and lack interoperability between the BIM software and energy
simulation software. The goal of this research project is to develop a new
methodology to produce energy estimates from a BIM model in a more timely
fashion and to improve interoperability between the simulation engine and BIM
software. In the proposed methodology, the extracted information from a BIM model
is compiled into an INP file and run in a popular energy simulation program, DOE-2,
on an hourly basis for a desired time period. A case study showed that this methodology can expediently provide energy simulations while at the same time reproducing the BIM in a more readily available three-dimensional modeling program.
INTRODUCTION
While BIM technology allows designers to run energy simulations (Kim et al.,
2009), there are limits on its usefulness in the current state of energy simulation
programs. Academics have pointed out that there is a need for improved
interoperability between energy simulation and building information modeling
programs (Messener et al., 2006).
The goal of this research project is to develop a new methodology to produce
energy estimates from a BIM in a more timely fashion and to improve interoperability
between the simulation engine and BIM software. In the case study applied in this paper, a BIM is created using modern commercial building design software. Next, the ifcXML file is read in a more commonly used modeling program that allows us to use the Ruby programming language to extract the relevant information from the ifcXML file. The geometric information regarding the building envelope is gathered and then used to recreate the BIM in this new interface. From there, the extracted information in conjunction with user-entered data is compiled into an INP file and run in a popular energy simulation program, DOE-2, on an hourly basis for a desired time period.


This then produces estimated energy requirement reports for the proposed structure
based on the inputted conditions, duration, and location. The simulation results are
then compared over various locations to the results from commercial energy
simulation programs given the same conditions.
LITERATURE REVIEW
The Architecture/Engineering/Construction industry is experiencing great change because of BIM and its increasing popularity (Eastman et al. 2008; Sacks et al. 2004). Many different energy modeling techniques have been applied in numerous studies over the years in attempts to predict future energy usage, including artificial neural networks, statistical analysis of building consumption data, decision trees, and computer simulation programs (DOE-2, eQuest, and EnergyPlus) (Catalina et al., 2008; Ekici et al., 2009; Olofsson et al., 2009). Previous work has shown that using computer simulations takes a considerable amount of time to input data correctly, even for qualified practitioners (Zhu et al., 2006; Catalina et al., 2008). DOE-2 specifically was applied in a study predicting energy consumption in the building sectors of major U.S. cities to determine energy consumption profiles, but the process is very time-consuming due to intensive labor requirements. In hopes of alleviating some of the time requirements in this process, several groups have begun creating new methodologies for energy modeling using EnergyPlus.
RESEARCH METHODOLOGY
In process 1.1 (Figure 1), a model is first created using the Graphisoft three-dimensional modeling program ArchiCAD 14. The model is created with only the most basic features: foundation, walls, windows, door(s), and a roof. While this might not seem like very much information, it is all that is required for running energy simulations. Interior walls and cosmetic design are not required unless multiple heating, ventilation, and air conditioning (HVAC) zones are desired for separate parts of the structure. This is because heat loss and gain through interior walls are zero-sum products when considered within one zone. Once these basic parameters have been set and the geometry of the structure has been finalized, the model is exported as an IFCXML file, process 2.1 (Figure 1). Process 1.2 (Figure 1) will be discussed later in the section on writing the INP file.
Before moving on to process 2.2 (Figure 1), it is necessary to better understand IFCXML files. The IFCXML file type was created by buildingSMART (formerly the International Alliance for Interoperability). The goal of the file type is to “promote open and interoperable IT standards to support the process change within the construction and facility management industries” (buildingSMART 2008).

Figure 1. Energy Simulation Methodology Flowchart

Figure 2. Transforming the IFCXML File into a XML File


Process 2.2 (Figure 1) has the shortest duration of all; this is where the exported IFCXML file is manually resaved as an XML file. This is required to make the information in the file readable and extractable later in the process (Figure 2), and it can be done simply using most document editing software. SciTE was used to rename the file in this paper. Nothing changes about the file other than the name, from filename.ifcxml to filename.xml.
Once the file has been transformed into an XML file, it can be read and reconstructed in a more readily available three-dimensional modeling program, process 2.3. Google SketchUp was chosen for this purpose because it allows the user to program customized applications using the Ruby programming language in its Ruby console.
Even for a very simple BIM, the IFCXML file produced is very long and complex. The model used in this paper consisted of only six walls, a door, a roof, and a slab, yet the IFCXML file was almost 8,000 lines of code.
Figure 3 below is a very small excerpt from the IFCXML file created by the BIM used in this paper. This excerpt describes the length in inches of the northern wall, or the second wall, based on the order of construction in the BIM, as it is described in the file. To extract the information regarding the length of the wall, we have to write the following Ruby code:

target['IfcShapeRepresentation']['i1803']['Items']['i1808']['IfcPolyline']['i1799']['Points']
['i1802']['IfcCartesianPoint'][1]['Coordinates']['i1798']['IfcLengthMeasure'][0]
The information is essentially referenced by indexes. The first thing that is called out is the first level of the index, IfcShapeRepresentation.
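For readers without the authors' parsing layer, an equivalent lookup can be sketched using REXML and an XPath query; the element ids (i1799, i1803, ...) are specific to the file generated for this paper, so the path below is illustrative only and mirrors, rather than reproduces, the indexed access shown above.

# Sketch: read the same wall-length value from the renamed XML file with XPath.
require 'rexml/document'

doc = REXML::Document.new(File.read('filename.xml'))

# Walk the same nesting as the indexed access above (ids are file-specific):
# shape representation i1803 -> polyline i1799 -> a Cartesian point -> length measure.
xpath = "//IfcShapeRepresentation[@id='i1803']" \
        "//IfcPolyline[@id='i1799']" \
        "//IfcCartesianPoint/Coordinates/IfcLengthMeasure"
length = doc.elements[xpath]
puts length ? length.text : 'not found'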

Figure 3. Indexed wall length information about BIM in IFCXML file

A default unit is selected (a PSZ system), supply temperature levels are set, and the occupancy type and heat sources are defined. The last step regarding the HVAC systems is defining the zone. The zone is where the temperatures required to trigger the heating and air conditioning to turn on and off are entered. Once these are entered, the majority of the INP file has been written and all that remains is completing the economics and reports sections. The only data needed for the economics section are the current utility rates for gas and electricity, which the user entered earlier. Lastly, the reports section is generally standard and concludes the file with default methods of reporting the information computed (Hirsch 2004). The INP file is now ready to be run in the DOE-2 simulator to estimate the proposed structure’s utility bill.
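The INP assembly itself is essentially string generation; a heavily abbreviated sketch of the idea, using a hypothetical template with placeholder keyword lines rather than the complete BDL syntax that DOE-2 expects, is:

# Sketch: fill user-entered values into an INP template.
# The template text is an abbreviated placeholder, not complete DOE-2 BDL.
def write_inp(path, values)
  template = <<~INP
    $ Economics section (abbreviated placeholder)
    ELECTRIC-RATE = #{values[:electric_rate]}   $ $/kWh, entered by the user
    GAS-RATE      = #{values[:gas_rate]}        $ $/therm, entered by the user
    $ Zone thermostat setpoints (abbreviated placeholder)
    HEAT-SETPOINT = #{values[:heat_setpoint]}
    COOL-SETPOINT = #{values[:cool_setpoint]}
  INP
  File.write(path, template)
end

write_inp('proposed_structure.inp',
          electric_rate: 0.15, gas_rate: 1.10,
          heat_setpoint: 68, cool_setpoint: 76)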

Figure 4. DOE-2.2 Program Interface and Simulation of INP File

Now it is finally possible to calculate the structure’s fuel and electrical demands. The economic analysis subprogram can then compute the expected energy costs by applying the user-entered utility rates. The loads, HVAC, and economic subprograms are run
on an hourly basis for the defined duration (Figure 4) and produce the estimated
utility bill of the proposed structure for the given time period with detailed analysis of
the estimated energy consumption (Hirsch 2004).

CASE STUDY

In this case study the proposed methodology was applied to a relatively


simple one story L-shaped BIM with a net area of 1,461 interior square feet. The
structure was designed with only basic features required for running energy
simulations: exterior walls, flooring, roofing, and a door. The BIM transferred with
little effort into SketchUp using the previously written Ruby code and was
geometrically recreated.

Figure 5. Los Angeles Energy Simulation Consumption Breakdown



Inputting the INP file into the DOE-2.2 energy simulation program, we were able to produce reasonable simulations. In Los Angeles, the estimated electricity consumption of the BIM was 18,025 kilowatt hours and the estimated natural gas consumption amounted to 377 therms (Figure 5). On a per-square-foot basis, this amounted to an estimated 67.9 kilo BTU per square foot per year.
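That intensity figure can be reproduced from the reported totals using standard conversion factors (3,412 BTU per kWh and 100,000 BTU per therm):

# Check of the reported energy use intensity for the Los Angeles run.
kwh        = 18_025
therms     = 377
floor_area = 1_461.0                          # interior square feet

total_btu = kwh * 3_412 + therms * 100_000    # about 99.2 million BTU per year
intensity = total_btu / floor_area / 1_000    # kBTU per square foot per year
puts intensity.round(1)                       # about 67.9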

CONCLUSION

There is a great need in the architecture, engineering, and construction


industry for new programs and methods of producing reliable energy simulations
from BIM in an easily understood and prompt manner. Current methods and
programs for running energy simulations are time-consuming, difficult to understand, and lack interoperability between the BIM software and energy simulation software. It is necessary to improve on these drawbacks, as design decisions are often made without the aid of energy modeling, leading to the design and construction of buildings that are not optimized with respect to energy efficiency.
This paper presents a case study illustrating a new methodology for running energy simulations from a BIM in a straightforward and timely fashion. The results from the case study were very similar to those produced by current commercial energy simulation programs, with an average energy estimate variation of 10.5% and a cost variation of less than 3%. The application of this methodology can expediently provide energy simulations while at the same time reproducing the BIM in a more readily available three-dimensional modeling program. With the aid of an easy-to-run and easily understood energy simulation methodology, designers will be able to make more energy-conscious decisions during the design phase and as changes in design requirements arise.

REFERENCES
Kim, H. and Stumpf, A. (2009). “Framework of Early Design Energy Analysis using BIMs (Building Information Models).” ASCE Construction Research Congress, Seattle, WA.
Catalina, T., Virgone, J., and Blanco, E. (2008). “Development and validation of
regression models to predict monthly heating demand for residential
building.” Energy and Buildings, 40, 1825-1832
Ekici, B., and Aksoy, U. (2009). “Prediction of building energy consumption by using
artificial neural network.” Advances in Engineering Software, 40, 356-362
Olofsson, T., Andersson, S., and Sjӧgren, J. (2009). “Building energy parameter
investigations based on multivariate analysis.” Energy and Buildings, 41, 71-
80
Zhu, Y. (2006). “Applying computer-based simulations to energy auditing: a case
study.” Energy and Buildings, 38, 421-428.
Semantic Modeling for Automated Compliance Checking
D. M. Salama1 and N. M. El-Gohary2
1 Graduate Student, Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, 205 North Mathews Ave., Urbana, IL 61801; FAX (217) 265-8039; email: abdelmo2@illinois.edu
2 Assistant Professor, Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, 205 North Mathews Ave., Urbana, IL 61801; PH (217) 333-6620; FAX (217) 265-8039; email: gohary@illinois.edu
ABSTRACT
Automated compliance checking of construction projects remains a
challenge. Existing computer-supported compliance checking methods are mainly
rule-checking systems (utilizing if-then-else logic statements) that assess building
designs based on a set of well-defined criteria. However, laws and regulations are
normally complex to interpret and implement; and thus if-then-else rule-checking
does not provide the level of knowledge representation and reasoning that is needed
to efficiently interpret applicable laws and regulations and check conformance of
designs and operations to those interpretations. In this paper, we explore a new
approach to automated regulatory compliance checking – we propose to apply
theoretical and computational developments in the fields of deontology, deontic logic,
and Natural Language Processing (NLP) to the problem of regulatory compliance
checking in construction. Deontology is a theory of rights and obligations; and
deontic logic is a branch of modal logic that deals with obligations, permissions, etc.
The paper starts by discussing the need for automated compliance checking of
construction operations and analyzing the limitations of existing compliance checking
efforts in this regard. The paper then provides an overview of the proposed approach for automated compliance checking, followed by an introduction to deontology
and deontic logic and their applications in other domains (e.g. computational law).
Finally, the paper presents the initial deontic modeling efforts towards automated
compliance checking.
INTRODUCTION
The Architecture, Engineering and Construction (AEC) industry is facing a
technological revolution with the introduction of Building Information Modeling
(BIM). Researchers, software developers, and industry professionals are pursuing
automation in diversified areas of the AEC industry. One area of automation in the
AEC industry is compliance checking. Compliance checking is the process of
assessing the compliance of a design, process, action, plan, or document to applicable
laws and regulations. Laws and regulations address architectural and structural design
requirements such as fire safety, accessibility, building envelope performance,
structural performance, etc. Laws and regulations also govern construction operations
to ensure construction safety, environmental protection, quality assurance, and
contractual compliance. Ongoing research efforts have been undertaken to automate


the compliance checking of architectural and structural designs to applicable laws and
regulations (International Building Code (IBC), Americans with Disabilities Act
(ADA) standards, etc.). With the evolution of BIM-based tools, design data needed
for checking compliance is represented in a BIM model. This facilitates, to an extent,
the process of developing a tool for automated compliance checking of architectural
and structural designs. Automating the compliance checking of construction
operations, on the other hand, is far more challenging for three main reasons: 1)
data/information about construction operations (e.g. construction methods, temporary
facilities, construction safety procedure, quality control procedure, etc.) are not
semantically represented in a BIM model; 2) data/information about construction
operations are distributed across several documents (construction operations plans,
site layout, construction safety plan, quality control plan, etc.); and 3) construction
operations are highly dynamic and related documents undergo frequent changes and
updates (e.g. construction schedules). Due to the aforementioned challenges, more
attention has been given to the automation of compliance checking of architectural
and structural designs in comparison to construction operations. In the following
sections, this paper discusses the need for and challenges of automated compliance
checking of construction operations and proposes a new approach to automated
compliance checking.
THE NEED FOR AUTOMATED COMPLIANCE CHECKING OF
CONSTRUCTION OPERATIONS
Compliance checking of construction operations is a complex process. The
construction phase of a project is governed by a number of laws and regulations that
are issued by various authorities, originate from different sources, vary from one
location to another, change dynamically with time, and govern different construction
operations and activities: 1) Laws and regulations are issued by various authorities
such as the Occupational Safety and Health Administration (OSHA) safety
regulations, Environmental Protection Agency (EPA) laws and regulations, American Society for Testing and Materials (ASTM) standards, etc.; 2) Project
contracts are also a major source of law - the source of private law; a contract
represents a binding agreement imposing rules and regulations on construction
operations; 3) Laws and regulations vary by project location; some laws are imposed
on a federal level, while other laws are imposed on a state level or a local level; 4)
Environmental laws and regulations are expected to change frequently in response to
the increasing awareness of sustainability and green construction; and 5) One piece of
regulation may apply to many construction operations. Given such complexities,
manual compliance checking of construction operations has been a time and
resource-consuming task. Not only that, but it has been error prone, causing
construction projects to violate the law and as such suffer monetary and/or non-
monetary consequences. For example, recent violations to environmental regulations
by construction contractors include Wal-Mart Stores Inc. that was fined $1 million
and committed to an environmental management plan valued at $4.5 million to
increase compliance to storm-water regulations at its construction sites, through
additional inspections, training, and recordkeeping (US EPA 2010). Similarly, Beazer
Homes USA Inc., a national homebuilder, paid a $925,000 fine due to its violations

of the Clean Water Act (Helderman, Washington Post 2010a); Bechtel National paid
$170,000 in fines due to quality violations in the construction of a vitrification plant (Cary, Hanford News 2010); and Hovnanian Enterprises, another homebuilder, paid $1 million in fines due to storm-water run-off violations (Helderman, Washington Post
2010b). Automated compliance checking would reduce the probability of making
compliance assessment errors and, consequently, improve compliance; thereby
reducing violations to laws and regulations that govern the construction process.
CURRENT EFFORTS TOWARDS THE AUTOMATION OF COMPLIANCE
CHECKING
Research on automated rule checking has been ongoing for over a decade
(Tan et al. 2010, Eastman et al. 2009). Researchers and software vendors developed
different compliance checking software focusing on the architectural and structural
design phases of a construction project. Most developers utilize Industry Foundation
Classes (IFC) to facilitate data exchange. IFC is a data format developed by the
buildingSMART initiative, which aims to facilitate data sharing between project
members and software applications (Building Smart Alliance 2008). IFC provides a
medium for data interoperability. It is registered by ISO (International Organization
for Standardization) and is currently in the process of becoming an official
international standard (Building Smart Alliance 2008). Efforts to automate the compliance checking process include Solibri Model Checker, several projects led by FIATECH, CORENET led by the Singapore Ministry of National Development, and HITOS, a Norwegian BIM-based project. As an example, Solibri Model Checker
(SMC) is an IFC-compliant rule-checking software. IFC models are created in
applications such as Autodesk Revit Architecture, ArchiCAD, etc. (Khemlani 2002).
SMC is a java-based desktop platform application. It reads an IFC model and maps it
to an internal structure facilitating access and processing (Eastman et al. 2009).
Solibri performs several checks; it includes a pre-checking built-in function that tests
the model for overlaps, object existence, and name and attributes conventions. It
performs a set of design checks, such as accessibility checks according to the ISO
accessibility building code, fire safety checks according to the fire code exit path
distance, etc. The software reports the checking results in a visual manner, in the
form of pdf, xls or xml files. The checking is carried out using parametric constraints;
thus, the user can change the parameters of certain constraints according to the
desired standard (Khemlani 2002). A limitation of SMC, however, is that the addition
of new rules or modification of existing rules has to be done through the
addition/modification of java programming code. This means that the user is not
capable of removing, adding, or updating the built-in rules.
Previous research and software development efforts have undoubtedly paved
the way for automated compliance checking in the AEC industry. However, one
limitation of these efforts is that most of them focus on the architectural and structural
design domains. Automated compliance checking of construction operations has
received little, if any, effort due to its relative complexity. Another limitation of
existing automated compliance checking tools is that they all focus on relatively simple forms of rules, for example rules dealing with geometrical and spatial attributes of buildings, such as rules for checking proper representation of objects,

overlaps and intersections of objects, wall thicknesses, door sizes, etc. Existing tools
lack the capability of performing more complex levels of compliance reasoning and
checking, such as checking compliance with contractual requirements. A third
limitation of existing tools is that they do not provide the level of flexibility that is
needed so that users can add or modify the set of governing rules and regulations (the
addition of rules is controlled by software vendors). Another fact which limits the
applicability of previous efforts to construction operation compliance checking
applications is that the compliance checking depends on data/information that is not
part of the BIM model. Data/information about construction operations (e.g.
construction methods, temporary facilities, construction safety procedure, quality
control procedure, etc.) are not semantically represented in a BIM model. A fifth
limitation of previous research efforts is that the rules and regulations are manually
extracted (from relevant textual documents describing laws and regulations) and
coded. Full (or at least higher) automation of compliance checking requires complex
processing of regulatory and contractual documents to automatically (or semi-
automatically) extract applicable rules.
PROPOSED NEW APPROACH TO AUTOMATED COMPLIANCE
CHECKING – A DEONTIC-BASED APPROACH
Automated compliance checking remains a challenge because of its complexity
and the highly elaborate reasoning it requires. Existing rule-checking engines do not
provide the level of knowledge representation and reasoning that is needed to process
applicable regulations and check conformance of designs and operations to those
regulations. As such, further research efforts towards semantic modeling of
construction-related laws and regulations and compliance reasoning must be
undertaken.
In this paper, we propose a new approach to automated regulatory compliance
checking; we propose to apply theoretical and computational developments in the
fields of deontology, deontic logic, and Natural Language Processing (NLP) to the
problem of compliance checking in construction. Deontology is a theory of rights and
obligations. Deontic logic is a branch of modal logic that deals with obligations,
permissions, etc. Natural Language Processing (NLP) is a theoretically based
computerized approach to analyzing, representing, and manipulating natural language
text or speech for the purpose of achieving human-like language processing for a
range of tasks or applications (Chowdhury and Cronin 2002). The proposed approach
for automated compliance checking is outlined as a five-step process, as shown in
Figure 1: 1) Extracting and formalizing the rules: extracting the rules from textual
regulatory and contractual documents (natural language text presented in word
documents), and converting these rules into formal logic sentences; 2) Extending the
BIM model: adding missing data/ information/ knowledge needed to represent and
reason about construction operations and their compliance with laws and regulations; 3)
Extracting and formalizing project data: extracting relevant construction data from
textual project documents (e.g. safety plan, environmental plan, etc.), and converting
these data into a semantic format. Unlike step 1, this step involves the documents
being checked for compliance, rather than the documents describing the laws and
regulations; 4) Executing the code checking process: checking whether the
semantically-represented project data (extended BIM model data and extracted textual data) comply with the formalized rules; and 5) Reporting the results: the system reports missing data, violating objects, and warnings; in some cases the result is in the form of a pass or fail. All the above-mentioned processes (Steps 1 through 5) will be facilitated by the representation and reasoning capabilities of a deontic model (presented and discussed in the following sections). Steps 1 and 3 will, additionally, require the use of NLP techniques. The remainder of this paper focuses on the
application of deontology and deontic logic for semantic modeling of laws and
regulations and compliance reasoning. Presenting and discussing the authors’
research efforts in using NLP for automated compliance checking is beyond the scope
of this paper.


Figure 1. Proposed approach for automated compliance checking.


DEONTOLOGY AND DEONTIC LOGIC
Deontology is a general theory of duty or obligation. It originates in ethical
philosophy and is concerned with decision-making and doing what is ‘right’. Deontic
logic is a branch of modal logic that deals with notions such as obligation (ought),
permission (permitted), and prohibition (may not) for normative reasoning. Deontic
logic is, thus, sometimes defined as a logic to reason about ideal versus actual states
or behavior (McNamara 2007, Cheng 2008). Several research efforts have been
conducted in the domain of semantic modeling to propose formal representations of
deontology based on propositional calculus with operators for obligation, permission,
and prohibition. Until now, however, there are no elaborate formalized deontological
models and no agreed-upon syntax for writing deontic logic sentences.
Deontology falls within the domain of theories that guide and assess our
choices of what we ought to do - deontic theories - (McNamara 2007). Deontology
deals with assessing whether a specific action or state is right or wrong, permitted or
forbidden. Deontic logic could, thus, be applied to any area of application that
requires normative reasoning about ideal versus actual behavior or state of systems;
such as formal contract representation, automated contractual analysis, violation
assessment systems, etc. Most notably, deontic logic has been applied in the area of
legal automation (also called computational law). Deontic logic is deemed to be a
suitable means of representing legal systems; it provides a formal language with
normative notions suitable for the formal representation and specification of laws,
legal rules, and precedents (Cheng 2008). Legal automation involves the use of
computers to support different tasks in the legal process. Legal automation may range
from text processing and electronic data interchange (EDI) to information retrieval
and legal advice giving, depending on the application (Wieringa and Meyer, 1993).
Some examples of attempts to apply deontic logic are the LEGOL and the TAXMAN
projects. The LEGOL (Legal Oriented Language) project aimed to improve the
conceptual modeling methods for information and system development in organizations towards a more precise representation of actual behavior. LEGOL is a legal automation project, as it represents legislation on computers. LEGOL as a
language allows the expression of complex rules and regulations. It was developed in
an attempt to automate administrative procedures of statute law and preparation of
legislation with the primary aim of providing techniques of information analysis
(Gazendam and Liu 2005). One of the extensions of the LEGOL language included
deontic operators such as right, duty, privilege, and liability. Research in the area of
deontic logic for legal applications is still ongoing (Cheng 2008).
Recently, deontic logic is being proposed for use in other similar applications
requiring normative reasoning, such as formal writing of electronic contracts
(Prisacariu and Schneider 2008), organizational responsibility reasoning (Feltus and
Petit 2009), compliance of business processes (Awad and Weske 2010), regulatory
compliance (Jureta et al. 2010), etc.
INITIAL DEONTIC MODELING EFFORTS FOR AUTOMATED
COMPLIANCE CHECKING IN CONSTRUCTION
The proposed compliance reasoning system consists of two layers: a deontology layer and an ontology layer (see Figure 2). The purpose of the deontology layer is to represent the laws and regulations and to reason about compliance of construction operations with those laws and regulations. Data/information about the
project and its construction operations are represented in the ontology layer. The
ontological concepts (defined in the ontology layer) offer a medium of
communication between an existing project ontology (or BIM Model) and the
deontology. The deontology is structured into deontic concepts, relationships, and
deontic axioms.
Upper-level deontic concepts (see Figure 2) represent the main normative
concepts of the model: ‘regulation,’ ‘authority,’ ‘deontic document,’ ‘subject,’
‘obligation,’ ‘permission,’ ‘prohibition,’ ‘agent,’ ‘violation,’ ‘penalty.’ A ‘regulation’
is any law, regulation, rule, or contractual requirement prescribed by an ‘authority.’ A
‘deontic document’ is a regulatory or contractual document that defines a regulation.
A ‘subject’ is a ‘thing’ (actor, process, document, etc.) that is subject to a particular
regulation. An ‘agent’ is a responsible actor (organization or individual) who is
bound by an obligation or prohibition. As per Figure 2, a subject is governed by
regulations; a regulation originates from an authority and is defined in a deontic
document; a regulation prescribes obligations, permissions, and prohibitions; a
subject is obligated by an obligation, is permitted by a permission, and is prohibited
by a prohibition; an obligation obligates an agent, while a prohibition prohibits an
agent; and an agent may commit a violation and as a result may suffer a penalty.
As such, two main types of relationships are used: 1) Subsumption
Relationships: is-a relationships relating a concept to a sub-concept, such as a ‘federal
obligation’ ‘is_a’ ‘obligation’; and 2) Deontic Relationships: relationships that define
normative requirements, such as a ‘regulation’ ‘defines’ an ‘obligation.’ Deontic
axioms include three main types of axioms: 1) Definitive Axioms: define the laws
and regulations that a project or operation is subject to given its characteristics (such
as project location, project type, project size, type of activities involved, etc.); 2)
Regulatory Axioms: are formal representations of the rules included in the applicable

laws and regulations; and 3) Compliance Checking Axioms: assess the compliance of
the extended BIM model and the textual documents to the applicable rules.
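To make the three axiom types concrete, a toy sketch is given below: a regulatory axiom is encoded as an obligation record with a checking predicate, and a compliance-checking axiom simply evaluates that predicate against semantically represented project data. The rule text, threshold, and field names are invented for illustration and are not drawn from the proposed deontology or from any specific regulation.

# Toy sketch of deontic compliance checking (all rule content is illustrative).
Obligation = Struct.new(:id, :authority, :subject_type, :description, :check)

rules = [
  Obligation.new('EX-1', 'OSHA', :excavation,
                 'Excavations deeper than a stated threshold shall have a protection system',
                 ->(s) { s[:depth_ft] <= 5.0 || s[:protection_system] })
]

# Semantically represented project data (e.g. from an extended BIM model).
subjects = [
  { type: :excavation, id: 'EXC-01', depth_ft: 8.0, protection_system: false }
]

subjects.each do |subject|
  rules.select { |r| r.subject_type == subject[:type] }.each do |rule|
    status = rule.check.call(subject) ? 'complies with' : 'VIOLATES'
    puts "#{subject[:id]} #{status} #{rule.id} (#{rule.authority})"
  end
end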
Figure 2. Preliminary upper-level deontic model.


The following code compliance checking processes will be facilitated by the
representation and reasoning capabilities of the deontic model: NLP of textual
deontic (regulatory and contractual) documents for extraction and formalization of
rules; BIM model extension; NLP of textual project documents for extraction and
formalization of deontic-relevant construction data; and compliance checking of
construction operations based on BIM model data and extracted textual data.
SUMMARY AND CONCLUSION
Automated compliance checking in construction remains a challenge
because of its complexity and the highly elaborate reasoning it demands. To address
this challenge, we propose a new approach to automated regulatory compliance
checking. The approach utilizes semantic modeling (deontic conceptualization and
deontic logic) and Natural Language Processing (NLP) techniques. Formal deontic
modeling aims at offering the level of knowledge representation and reasoning that is
needed to process applicable laws and regulations and check compliance of designs
and operations to the rules that are prescribed by those laws and regulations. NLP
techniques will support the task of textual document analysis and processing for
extraction and formalization of rules and information. A preliminary upper-level
deontic model is presented in the paper. This model represents the first deontic
modeling effort in the construction domain.
REFERENCES
Awad, A.,and Weske, M.(2010).“Visualization of compliance violation in business
process models.”Business process management workshops, Springer,182-193.

Building Smart Alliance. (2008). "Industry Foundation Classes (IFC)." Building Smart,
http://www.buildingsmart.com/bim> (Dec 30, 2010).
Cary, A. (2010). “Bechtel Settles 'Quality Issues'.” Hanford News Online,
http://www.hanfordnews.com/2010/09/25/15961/bechtel-settles-quality-
issues.html>( Sep. 25, 2010).
Chowdhury, G., and Cronin, B. (2002). “Natural language processing.” Annual
Review of Information Science and Technology, 37, 51-89.
Cheng, J. (2008). “Deontic relevant logic as the logical basis for representing and
reasoning about legal knowledge in legal information systems." Knowledge-
based intelligent information and engineering systems, Springer, 517-525.
Eastman, C., Lee, J., Jeong, Y., and Lee, J. (2009). "Automatic rule-based checking
of building designs." Automation in Construction, 18(8), 1011-1033.
Feltus, C., and Petit, M. (2009). “Building a responsibility model using modal logic -
towards accountability, capability and commitment concepts.” Proc. Intl.
Conf. on Computer Systems and Applications, IEEE, 386-391.
Gazendam, H. W.M., and Liu, K. (2005). “The evolution of organisational semiotics:
A brief review of the contribution of Ronald Stamper.” Studies in
organisational semiotics, Kluwer Academic Publishers, Dordrect.
Helderman, R. S. (2010a). "Cuccinelli Praises EPA for Polluting Homebuilder
Settlement." The Washington Post (Dec. 10, 2010),
http://voices.washingtonpost.com/virginiapolitics/2010/12/cuccinelli_praises_
epa_for_pol.html> (Dec. 03, 2010).
Helderman, R. S. (2010b). "Cuccinelli Issues Rare Praise for EPA in Stormwater
Case." The Washington Post (Apr. 21, 2010),
http://voices.washingtonpost.com/virginiapolitics/2010/04/cuccinelli_issues_r
are_applaus.html> (Dec.10, 2010).
Jureta, I., Siena, A., Mylopoulos, J., Perini, A., and Susi, A. (2010). “Theory of
regulatory compliance for requirements engineering.” Computing Research
Repository (CoRR), Vol. abs/1002.3711
Khemlani, L. (2002). "Solibri Model Checker." CadenceWeb,
http://www.cadenceweb.com/2002/1202/pr1202_solibri.html> (Oct. 10, 2010).
McNamara, P. (2007). "Deontic Logic." Stanford Encyclopedia of Philosophy,
http://plato.stanford.edu/entries/logic-deontic/> (Oct. 30, 2010).
Prisacariu, C., and Schneider, G. (2008). “A formal language for electronic contracts.” Proc.
9th IFIP WG Intl. Conf. on Formal Methods for Open Object-Based
Distributed Systems, 174-189.
Tan, X., Hammad, A., and Fazio, P. (2010). "Automated code compliance checking
for building envelope design." J. of Compt. in Civil Engrg, 24(2), 203-211.
US EPA. (2008). US Environmental Protection Agency, http://www.epa.gov/> (Nov.
30, 2010).
Wieringa, R. J., and Meyer, J. (1993). "Applications of deontic logic in computer
science: a concise overview." Deontic logic in computer science: normative
system specification, 17-40.
Ontology-based Standardized Web Services for Context Aware Building
Information Exchange and Updating

J. C. P. Cheng 1 and M. Das 2


1 Department of Civil and Environmental Engineering, The Hong Kong University of Science and Technology; PH (852) 2358-8186; email: cejcheng@ust.hk
2 Department of Civil and Environmental Engineering, The Hong Kong University of Science and Technology; email: mdas@ust.hk

ABSTRACT
Standardized web services technology has been used in the construction
industry to support activities such as sensing data integration, supply chain
collaboration, and performance monitoring. Currently these web services are used for
exchanging messages with simple structure and small size. However, building
information models often contain rich information and are huge in size. Therefore,
retrieving and exchanging building information models using standardized web
services technology is challenging.
This paper discusses the usage of ontologies and context awareness in
standardized web services technology to facilitate efficient and lightweight
information retrieval and exchange. Ontologies for building information such as
Industry Foundation Classes (IFC) and aecXML have been developed for over a decade and have matured. These ontologies can be leveraged to structure the input and output messages of web services. On the other hand, context awareness not only provides data security according to the user’s location, time, and profile, but also enables retrieval and exchange of partial information models. This paper presents and
demonstrates an ontology-based context aware web service framework that is
designed for retrieval and updating of building information models.

INTRODUCTION
With the emergence of the Internet and the advancements in network
technologies, the web services technology has become a promising means to deliver
information and software functionalities in a flexible manner. A web service is a self-
contained, self-describing application unit that can be published, distributed, located,
and invoked over the Internet to provide information and services to users through
application-to-application interaction. Due to the reusability and plug-and-play
capability of web services, the web services technology has attracted increasing
attention for communication and system implementation. In the construction industry,
the web services technology has been leveraged for various applications, such as
supply chain collaboration (Cheng et al. 2010), sensing data transmission and
integration (Hsieh and Hung 2009), searching and browsing of construction products
catalogue (Kong et al. 2005), and performance monitoring (Cheng and Law 2010).


As the Internet becomes ubiquitous, the uses of the web services technology will keep
growing in the construction industry.

Web service standards such as SOAP (Simple Object Access Protocol) (World
Wide Web Consortium (W3C) 2003) and WSDL (Web Service Description Language)
(World Wide Web Consortium (W3C) 2007) have been developed to facilitate the
communication between web services and to enhance the interoperability of web
service units. These standards provide a generic data model and communication
mechanism for message exchange between web service units. Currently, the
standardized web services technology is used for exchanging messages with simple
structure and small size in the construction industry. However, building information
models often contain rich information and are huge in size. It is not efficient to
exchange the entire building information model using web services for data retrieval
and modification of models.
Therefore, we propose an ontology-based context aware web service
framework that is designed for exchanging data in a lightweight and customized
manner for manipulating building information models. The framework leverages
commonly used building information modeling (BIM) ontologies such as Industry
Foundation Classes (IFC) and CIMSteel (CIS/2) for defining the structure and
semantics of the data being exchanged. With the aid of these ontologies, users of the
framework do not need to exchange entire building information models through web
service units to make changes on them. Exchanging of partial models or key values is
sufficient. As different software programs may interpret the same BIM ontology
differently, the ontology-based web service units in the framework are specific to the
software environment of the source and the target applications. The framework also
uses context information for providing customized operations and functionality. This
paper presents the framework and an illustrative example.

RESEARCH BACKGROUND

Context Awareness. Context awareness has been adopted in construction applications in recent years (Aziz et al. 2006). Keidl and Kemper (2004) state that context includes all information about the client of a web service that may be utilized to adjust the execution and output to provide the client with customized and personalized behavior. The context can be of two types – from the perspective of service clients and from the perspective of service providers. For the former, context information is the surroundings of the clients, such as clients’ profiles and preferences, the applications used by the clients, and clients’ locations. For the latter, context information can be the environment of the web services, and the devices and platform for the service execution. Common context types are described as follows.
 Location: Information about clients’ current location, e.g. address, country, and
GPS coordinates. Based on these kinds of information, web services can output
information according to the local environment. For example, a web service can
validate a building model according to the building codes of a particular country.
 Client: Information regarding client’s application, e.g. client’s hardware
configurations and software like operating system and applications. The main

purpose of such context is to allow the web services to modify their output
according to the client’s device properties. It is necessary to share construction
data that can be understood and used by the clients. For example, if CIS/2 data
is to be received by a client who uses other ontologies, say IFC, appropriate
mapping is required.
 Consumer: Information about the consumer, e.g. name and id, who invokes the
web service. Such information can be intelligently used by a web service in
different scenarios.
 Connection preferences: Information about the properties of the connections to
the web services.

BIM Ontology. Ontology is an explicit specification of conceptualization, which is


the structure of a domain. A typical ontology consists of concepts, relations,
instances, attributes, rules, and formal axioms. Building information modeling (BIM)
is a concept that handles the process of generating and managing building data during
its life cycle. Several BIM ontologies have been developed to represent the scope,
format, and semantics of building models so that the models can be shared as
information across the different domains and applications of the construction industry.
Some of the commonly used ontologies are IFC/ifcXML, aecXML, CIS/2 (CIMSteel
Integration Standard), and gbXML (green building XML). There are several other
ontologies for storage and exchange of data related to facility items (e.g. pumps, heat
exchangers), virtual 3D city model, and the real estate property industry, etc. Most of
them use XML specifications. Attempts to map between the various ontologies for
enhanced interoperability in the construction industry are still under active research.

ONTOLOGY-BASED CONTEXT AWARE WEB SERVICE FRAMEWORK


Figure 1 shows the schematic representation of the web service framework.
The framework consists of a layer of inputs, a layer of outputs, and three layers of
web service units. There are three kinds of inputs in the framework – structured data,
command, and contexts. The structured data input can be a set of values such as
element coordinates and dimensions organized in a user-defined structure, or an
extracted part of a building information model in standardized formats like IFC and
gbXML. The command input specifies the modification of a building model that is
intended to be performed. A modification can be a simple operation such as adding a
column in a drawing, or a complex series of operations which include moving a wall,
checking compliance of geometric constraints, and moving the window and door
components attached on the wall. The context information input includes user profile,
device settings, geographical location, and software environment of the source and
the target applications.
The web service units in the framework perform three different types of
operations – service routing, data mapping, and model editing. Each web service unit
uses standardized web service protocol SOAP for message transmission and open
standard WSDL for service specification. The three layers of web service units are
briefly described as follows.
Service Routing Layer. This layer contains only a single web service unit, which
processes the structured data input and determines the data standard. Based on the

command input and the context information input, the web service unit then decides
the suitable web service units on the data mapping layer and the model editing layer.
Figure 1. Schematic representation of the ontology-based context aware web service framework

The service routing unit leverages the BPEL standard (Business Process Execution
Language) (Organization for the Advancement of Structured Information Standards
(OASIS) 2007) for service composition. BPEL is a service orchestration language
that is commonly supported by open source and commercial web service execution
engines.

Data Mapping Layer. This layer contains a set of ontology-based web service units,
which extract the structured data input and convert the data into parameter values in
the target ontology. The messages for communication between web services are in
XML (eXtensible Markup Language) format. The web service units can identify the
ontology used in the structured data input by parsing the XML tags from the data
input. For example, if the data input is structured using ifcXML, the XML tags will
contain ifcXML elements which start with the prefix ‘IFC’.
BIM standards such as IFC and CIS/2 may represent the same object using
different parameters. For instance, in one standard the location of a wall may be
defined using the coordinates of the starting point and the coordinates of the ending
point, while in another standard the location may be defined using the coordinates of
the midpoint and the length of wall. Therefore, after parsing the structured data input
and identifying the ontology used, the data mapping web service units convert the
information into parameter values in the target ontology. This task requires the
knowledge of what parameters are needed to define a particular type of building
component in a specific ontology.
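The wall-location example above reduces to a small coordinate conversion; a sketch of that mapping step, with illustrative field names, is:

# Sketch: map a wall given by start/end points to the start point, orientation,
# and length parameters expected by the target ontology (field names illustrative).
def map_wall(wall)
  dx = wall[:end_x] - wall[:start_x]
  dy = wall[:end_y] - wall[:start_y]
  { start_x:     wall[:start_x],
    start_y:     wall[:start_y],
    length:      Math.hypot(dx, dy),
    orientation: Math.atan2(dy, dx),   # angle from the x-axis, in radians
    width:       wall[:width],
    height:      wall[:height] }
end

p map_wall(start_x: 0.0, start_y: 0.0, end_x: 120.0, end_y: 120.0,
           width: 8.0, height: 108.0)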

Model Editing Layer. This layer contains web service units which make changes to
building information models. The web service units identify the components (e.g.

doors, walls, columns, slabs, etc.) to be changed and select the templates for the
specified command (e.g. addition, removal, moving, etc.). The actual operations and
behaviors may vary depending on the target software environment, data structure, and user profile. The model editing layer is therefore context aware. The web service units obtain the entire building model, either online or from a local machine, and incorporate the changes into the original building model. Finally, the web service units either return a file of the new building model, or edit the original model directly.

Figure 2. (Left) 3D view of the building model used in the illustrative example; (Right) The wall to be added using the ontology-based web service framework

EXAMPLE SCENARIO
To illustrate the proposed web service framework, an example scenario is
presented in this section. In this example, users are allowed to modify an Autodesk
Revit Architecture building model by inputting commands and parameters through
web browsers. The building model has two floors and one basement (see Figure 2).
For demonstrative purposes, addition of walls using the ontology-based context aware
web service framework is presented and discussed in the following sub-sections. The
wall to be added is an interior wall located on the lower floor, as depicted in Figure 2.
Currently, 3D building models can be shown on web pages with the aid of
technologies such as VRML (Virtual Reality Modeling Language), AutoCAD DXF
(Drawing eXchange Format), and CAD viewers. In this example, an interior architect
logs into an intranet and views the building model on a designated web page. The
web page not only displays the 3D building model, but also allows users to perform
some pre-defined functions to edit the building model and to view the updated model.
The architect selects the function ‘Add Wall’ and provides parameter values such as
wall dimensions and location coordinates. Once the architect submits the form, the
web page connects to the web service unit for service routing and starts the invocation
of web services in the framework. While the command and supporting data are
provided by the users, context information including user id and browser display
settings is extracted by the web page and sent to the service routing unit.

Service Routing Layer. The service routing unit processes the inputs which consist
of (1) the information such as dimensions and material type of the new wall,
structured in a user-defined schema, (2) the command ‘Add Wall’, and (3) the context
information including target software environment and user id. The service routing

unit then selects the appropriate data mapping web service unit and model editing
web service unit, and invokes them using BPEL.
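The routing decision itself can be pictured as a lookup from the detected ontology and the requested command to concrete service endpoints; the sketch below is purely illustrative, with invented endpoint URLs, and in the framework this orchestration is expressed in BPEL rather than application code.

# Illustrative routing table (endpoint URLs are invented placeholders).
DATA_MAPPING_SERVICES = {
  'gbXML' => 'http://example.org/ws/map/gbxml-to-ifc',
  'CIS/2' => 'http://example.org/ws/map/cis2-to-ifc',
  'IFC'   => nil                        # already in the target ontology
}

MODEL_EDITING_SERVICES = {
  ['IFC', 'Add Wall']    => 'http://example.org/ws/edit/ifc-revit/add-wall',
  ['IFC', 'Modify Wall'] => 'http://example.org/ws/edit/ifc-revit/modify-wall'
}

# Select the two web service units to invoke for a given request.
def route(detected_ontology, command)
  { mapping_service: DATA_MAPPING_SERVICES[detected_ontology],
    editing_service: MODEL_EDITING_SERVICES[['IFC', command]] }
end

p route('gbXML', 'Add Wall')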

Web Service Unit for Mapping Data to IFC. In this example, the target software
environment is using IFC as the data structure. IFC defines a wall using parameters
including coordinates of the starting point, orientation, length, width, height, and type
(e.g. retaining wall and exterior wall). On the other hand, the parameters in the
structured data input are height, width, coordinates of the starting point, coordinates
of the ending point, material, and wall type. The coordinates need to be converted
into wall orientation and length before wall addition can be performed.
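The conversion itself is elementary plane geometry; a minimal Python sketch (not part of the implemented service) is shown below:

import math

def wall_axis_from_endpoints(start, end):
    """Convert start/end coordinates from the user-defined schema into the
    length and orientation expected by an IFC-style wall definition."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    direction = (dx / length, dy / length)         # unit vector along the wall axis
    angle_deg = math.degrees(math.atan2(dy, dx))   # orientation in the XY plane
    return length, direction, angle_deg

# Example: a wall running from (0, 0) to (4.33, 2.5) is about 5.0 units long at 30 degrees.
print(wall_axis_from_endpoints((0.0, 0.0), (4.33, 2.5)))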

Web Service Units for Addition of Walls. To add a wall in an IFC building model
that Revit Architecture can understand, five modifications of the IFC model should be
performed:
(1) Add an IfcWallStandardCase element and its sub-elements to represent the
new wall;
(2) Add properties to the new wall using IfcRelDefinesByProperties elements;
(3) Specify the material of the wall using an IfcRelAssociatesMaterial element;
(4) Connect the new wall to neighboring walls, if any, using
IfcRelConnectsPathElements elements; and
(5) Assign the new wall to a storey in the structure, using an
IfcRelContainedInSpatialStructure element.
These five modifications performed by the wall addition web service unit will be
discussed in the following paragraphs.
Figure 3 shows the structure of a wall defined in the IFC ontology. To add a
new wall, IFC elements such as IfcWallStandardCase, IfcProductDefinitionShape,
and IfcExtrudedAreaSolid need to be created. In Figure 3, the numbers in the boxes
represent the line numbers of the IFC elements in the modified building model. The
highlighted boxes represent the new lines created for the new wall. The highlighted
element names represent the elements that contain numeric values which are
associated with the dimensions of the new wall.
Similarly, the IFC elements IfcRelDefinesByProperties,
IfcRelAssociatesMaterial and IfcRelConnectsPathElements, and their subsequent
elements are created for the new wall in the model editing web service unit. The
IfcRelDefinesByProperties element defines the properties of a building component in
a 1-to-N relationship. It allows the assignment of one property set to a single or
multiple building components such as walls or floor slabs. A building component can
be represented even without being associated with any IfcRelDefinesByProperties
element, but component information may be lost in this case. The IfcRelAssociatesMaterial
element relates a building component with its material properties. The
IfcRelConnectsPathElements element defines the connectivity relation between two
elements. The structures of the three IFC elements are described in Figure 4.
Finally, the IfcRelContainedInSpatialStructure element, which specifies the
structural elements on each floor of the building, should be re-defined. If the new wall
is located on an existing floor, there is already an IfcRelContainedInSpatialStructure
element in the building model and the new wall should be added to that element.
Otherwise, an IfcRelContainedInSpatialStructure element needs to be created to define
the new floor and to include the new wall. The structure of the
IfcRelContainedInSpatialStructure element for the new wall is described in Figure 5.
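The sequence of the five modifications can be summarized in the Python-style sketch below. The entity attribute lists are deliberately abbreviated with ellipses and the record numbers are placeholders; the actual web service unit must emit records that satisfy the full IFC schema, so this is an outline of the steps rather than valid IFC output.

# Schematic outline of the wall addition web service unit (not valid IFC output).
def add_wall(model_lines, storey_ref):
    n = len(model_lines) + 1
    new_lines = [
        "#%d=IFCWALLSTANDARDCASE(...);" % n,                              # (1) the wall and its shape
        "#%d=IFCRELDEFINESBYPROPERTIES(..., (#%d));" % (n + 1, n),        # (2) property sets
        "#%d=IFCRELASSOCIATESMATERIAL((#%d), ...);" % (n + 2, n),         # (3) material layers
        "#%d=IFCRELCONNECTSPATHELEMENTS(..., #%d, ...);" % (n + 3, n),    # (4) connection to neighbors
        "#%d=IFCRELCONTAINEDINSPATIALSTRUCTURE((#%d), #%d);" % (n + 4, n, storey_ref),  # (5) storey assignment
    ]
    return model_lines + new_lines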

Figure 3. The IfcWallStandardCase element and its sub-elements of a new wall


Figure 4. The IfcRelDefinesByProperties, IfcRelAssociatesMaterial, and
IfcRelConnectsPathElements elements and their sub-elements of a new wall


Figure 5. The IfcRelContainedInSpatialStructure element and its sub-elements



SUMMARY AND FUTURE WORK


This paper presents an ontology-based context aware web service framework that
allows customized operations and lightweight data exchange of building information.
The framework leverages standardized web service technology for flexible data and
functionality integration, and uses BIM ontologies for specifying the structures and
semantics of the building information. The operations are specific to software
environment and user profile.
Currently, Autodesk Revit Architecture can read the building information
model modified using the presented framework. However, the framework has been
tested for simple objects like walls and doors only. Building elements with complex
shapes will be considered in the next step because their ontological representations
may be different. In the example scenario, information structured in the user-defined
ontology is converted into standardized IFC ontology, guided by manually pre-
defined rules. In the future, semantics of the information and linguistic approaches
may be investigated to facilitate automated or semi-automated mapping of
information in the data mapping web services layer. In addition, as context aware
applications often raise security and privacy concerns, secure web service technology
will be considered in the framework in the future.

REFERENCES
Aziz, Z., Anumba, C. J., Ruikar, D., Carrillo, P., and Bouchlaghem, D. (2006).
"Intelligent Wireless Web Services for Construction--A Review of the
Enabling Technologies." Automation in Construction, 15(2), 113-123.
Cheng, J. C. P., and Law, K. H. "A Web Service Framework for Environmental and
Carbon Footprint Monitoring in Construction Supply Chains." Proceedings of
the 1st International Conference on Sustainable Urbanization, Hong Kong,
China, December 15 - 17, 2010.
Cheng, J. C. P., Law, K. H., Bjornsson, H., Jones, A., and Sriram, R. D. (2010). "A
service oriented framework for construction supply chain integration."
Automation in Construction, 19(2), 245-260.
Hsieh, Y.-M., and Hung, Y.-C. (2009). "A scalable IT infrastructure for automated
monitoring systems based on the distributed computing technique using
simple object access protocol Web-services." Automation in Construction,
18(4), 424-433.
Keidl, M., and Kemper, A. (2004). Towards Context-Aware Adaptable Web Services,
University of Passau, Germany.
Kong, S. C. W., Li, H., Liang, Y., Hung, T., Anumba, C., and Chen, Z. (2005). "Web
services enhanced interoperable construction products catalogue." Automation
in Construction, 14(3), 343-352.
Organization for the Advancement of Structured Information Standards (OASIS).
(2007). Web Services Business Process Execution Language (WS-BPEL),
Version 2.0.
World Wide Web Consortium (W3C). (2003). Simple Object Access Protocol (SOAP),
Version 1.2.
World Wide Web Consortium (W3C). (2007). Web Services Description Language
(WSDL), Version 2.0.
IFC-Based Construction Industry Ontology And Semantic Web Services
Framework

L. Zhang1 and R. R. A. Issa2

1Rinker School of Building Construction, College of Design, Construction and
Planning, University of Florida, PO Box 115703, Gainesville, FL, USA 32611-5703;
PH (352) 949-9419; email: zhangle@ufl.edu
2Rinker School of Building Construction, College of Design, Construction and
Planning, University of Florida, PO Box 115703, Gainesville, FL, USA 32611-5703;
PH (352) 273-1152; email: raymond-issa@ufl.edu

ABSTRACT
A construction project is a multi-disciplinary team effort combining inputs from
various domains, for which interoperability is of great importance. Existing
data exchange problems between different software applications are adversely
impacting the overall productivity of the Architecture, Engineering and Construction
(AEC) industry. The use of Industry Foundation Classes (IFC) has been proposed to
help address the lack of interoperability throughout the construction industry. However,
the IFC specification itself is too complicated for normal users without special training.
This paper proposes a semantic Web Services framework utilizing IFC-based
industry ontology to address the interoperability problem. First, the possibility of
building an IFC-based construction industry ontology is reviewed. Then, a framework
to build semantic Web Services on this ontology is suggested. Both a core service and
an assistant service are included. The framework can be easily expanded as long as
the same Web Services model and the common ontology are observed. Once
implemented, the framework could be utilized by any IFC-supported BIM
applications, as well as personnel without extensive knowledge of IFC specifications,
for more precise, consistent and up-to-date project information retrieval. This
approach is expected to further the IFC effort and enhance interoperability in the
AEC industry without requiring extremely technologically
savvy users.

INTRODUCTION
A construction project is a multi-disciplinary team effort combining valuable
and unique inputs of stakeholders from various domains, including owners, architects,
engineers, contractors and facility managers. Among other requirements of
interoperability, correct and timely information sharing between the parties is of vital
value for a successful project, as well as the continuous development of the whole
industry.
The application of information technology (IT) in the construction industry is an
indispensable contributing factor for the growth of productivity within each specific
domain in the industry. But when everything is put together to work on a project, the
data exchange problem between software applications adversely impacts over-all
project productivity. As a result, the owner spends more money and waits for a longer
time for a project to be designed and built. Even worse, after the completion of the
project the owner sometimes spends extra money re-inventing and re-inputting
everything that should have already been stored somewhere along the design and
construction of a project.
In this paper a Web Services approach is proposed to address these problems.
After a brief review of Semantic Web and Web Services technologies, the possibility
of building an IFC-based construction industry ontology structure is discussed. Then,
an expandable semantic Web Services framework built on the IFC ontology is
suggested. If this framework is implemented, all IFC-supported BIM applications
could utilize the framework for precise, consistent and up-to-date construction
industry and project information, which is expected to greatly enhance and improve
interoperability.

SEMANTIC WEB SERVICES

Semantic Web and ontology


The World Wide Web was initially designed as a document system whose
content is meant to be displayed by Web browsers and is only meaningful to human
readers rather than to computers (Cardoso 2007). The data and information expressed
on a web page is difficult for a computer to extract and understand, thus preventing
further automated information processing (Berners-Lee et al. 2001). The Semantic Web
project was initiated in the hope of giving order and meaning to the unstructured
information available on the Web by adding contextual information (i.e. metadata) to
existing information.
The first step towards the Semantic Web is to add metadata – data about data
– into the web pages by inserting “tags”. Metadata is the building block of the
Semantic Web. The tagging ability of Extensible Markup Language (XML) allows
users to insert tags into the current content on the web to label the content. The
context (or semantic) information stored in the tags will be available to software
applications (the so-called agents) reading the content, and enable the agents to
distinguish between similar data.
The second step of the task is to make the computers really understand the
meaning of the metadata by classifying the metadata in accordance with formal
ontologies. Ontologies are “formal, explicit specifications of shared
conceptualizations,” i.e. the special documents that define metadata terms (Cardoso
2007). An ontology is represented as a set of concepts within a domain and the
description of the relationships between the concepts (Akinci 2008). Ontologies work
like an encyclopedia for computers, joining heterogeneous metadata information,
explaining all the definitions and listing all the synonyms, therefore giving global
structure to the data on the Web and allowing the data to be understood and shared
across multiple applications and communities.

Web Services utilizing the Semantic Web content


While the Semantic Web provides the potential of embedding intelligence into
the Internet, the content on a Semantic Web by itself is still static information waiting
to be discovered and utilized. Web Services are the active elements that connect these
scattered resources and actually do the work. According to W3C (2004), Web
Services are “software systems designed to support interoperable machine-to-machine
interaction over a network.” They provide a standard means of interoperating between
different software applications from different platforms. Through predefined query
syntax Web Services can retrieve specific information for a user from the Semantic
Web and/or finish specific tasks using the resources available on the Semantic Web.
The basic working model of Web Services consists of two roles: a service
consumer and a service provider. A service consumer is an entity that needs the
function in some Web Service. A service provider is an entity that could finish certain
computing tasks and return the result to the consumer. The key is that the service
provider is remotely located and communicates with the consumer through predefined
interface via Internet protocols. The service provider may provide the service based
on its local resources, but more importantly, it could search the web and provide the
result on the fly. In reality, as long as the consumer knows the URI of the provider,
it can communicate with the provider directly. If not, a third party, the service registry,
may be needed to match-make the consumers to the providers. Information available
on the registry includes the services provided by the service providers as well as the
parameters needed to finish the service.
Web Services still lack semantic representation capabilities, therefore a Web
Service alone is not capable of automatic integration (Cardoso 2007). Semantic Web
Services have thus been proposed to extend the Web Services concept by adding
machine-interpretable semantics and making the discovery, composition and
integration automatic.

Current research
Domain ontologies define concepts, activities, objects and the relationships
among elements within a certain domain. Construction industry knowledge
management is among the first disciplines focusing on the building and application of
industry-wide ontologies for construction. The e-COGNOS project emphasized
ontology as “a basis for knowledge indexing and retrieval (Wetherill et al. 2002).” In
2006, OGC examined the feasibility of representing GML in OWL as part of the
preliminary effort to extend existing services, encodings and architectures with
Semantic Web technologies (Akinci et al. 2008). Currently studies are being
undertaken to investigate the opportunities to leverage the current IFC model to
derive ontologies and develop standard models of the knowledge within the domain.
The ISTforCE project explored the development of an ontology to decode IFC models
(Katranuschkov 2002). Besides the OGC and IFC initiatives mentioned above,
building codes seem to be a promising alternative (Cheng et al. 2008). When several
ontologies are available, it is necessary to choose one of them to use for a specific
Web Services call. Semantic and ontology matching and mapping is becoming an
interesting topic since it plays an important role in joining heterogeneous ontologies
to work together (Cheng et al. 2008).
As previously discussed, ontology alone is hardly useful in the real world.
Many researchers have resorted to Web Services to exert the power of ontology.
Wetherill et al. (2002) suggested a knowledge management platform based on the
Web Services model. Although they applied the ontology concept and used the
construction domain ontology as “a basis for knowledge indexing and retrieval”, they
failed to specify the details of building a working ontology for the system. The effort
of Vacharasintopchai et al. (2007) to build a working semantic Web Services
framework for computational mechanics is a good example of combining the
Semantic Web and Web Services together to work in the real world rather than
academic laboratories. Their framework is built on smart phones rather than normal
desktop operating systems, which is very promising for mobile computing
requirements such as on a construction job site.
Since the concepts of the Semantic Web and Web Services are relatively new, these
terms may have been used by different authors in different contexts. In a Working
Group Note, W3C (2004) put Web Services under strict definitions. Its components,
including interface description and message passing, must conform to specific Web
standards. According to this definition, some so-claimed Web Services applications
are in fact web-based or web-enabled applications (Chen et al. 2006;
Vacharasintopchai 2007). Further clarification of the connotation of Web Services is
necessary.

IFC-BASED CONSTRUCTION INDUSTRY ONTOLOGY


IFC is a set of definitions describing the consistent data representation of all
building components. IFC is developed and maintained by the International Alliance for
Interoperability (IAI, also known as buildingSMART). It is designed to be able to
store and exchange all building information over the whole building lifecycle. The
class object specifications in IFC, which includes not only the geometric information
but also properties and behaviors, endows the IFC objects with intelligence (Vanlande et al.
2008). As an instance of the ISO 10303 international standard, one advantage of IFC
is that it is an open standard and everyone has full access to the information within.
Therefore it is ideal for transferring data between different software platforms.
Another advantage of IFC is its built-in support for XML, which allows any IFC
model to be described in ifcXML format in a standard XML file.
According to Corcho (2002), an ontology should include the following
minimal set of components: classes or concepts (with attributes describing the class),
and relations or associations between concepts. The contents in IFC could fulfill these
components requirements. IfcWindow is a typical Entity Type in IFC specifications.
The content page of IfcWindow includes the following sections: summary, property
set use definition, quantity use definition, containment use definition, geometry use
definition, express specification, attribute definitions and formal propositions (IAI
2010). The sections contained in other IFC elements are different due to the nature of
each element, but are all similar. The information contained in these sections fit into
the different components required for having an ontology.
The classes or concepts requirement of an ontology is about the nature or
definition of certain terminology. They are also known as entities (this is the name
used in IFC) or sets. The “summary” section of IfcWindow gives formal definitions
of a window, including the definition from ISO and IAI as well as an explanation of
other IFC entities used or related to IfcWindow. The URL of the webpage could be
used as the URI to identify the term in the ontology. Classes are usually organized in
taxonomies with inheritance information. This information is available in the IFC
“formal propositions” section. The “Inheritance Graph” in this section lists all the
entities that the current entity inherits.
In IFC, the attributes of the class are included in the following sections:
property set use definition, quantity use definition, geometry use definition and
attribute definitions. Property sets are most typical attributes information. A property
set is a group of properties that applies to each entity. The IfcWindow entity has three
property sets: WindowCommon, DoorWindowGlazingType and
DoorWindowShadingType. The WindowCommon property set includes reference,
acoustic rating, fire rating, etc. Each property is described as a word (string) or a
number, which is referred to as an IfcPropertySingleValue in IFC. The other two
property sets are similar properties about the glazing and shading of the window, but
they also apply to doors. The reason all these properties are divided into three groups
is to promote the re-use of each property set through different entities. Other sections
are also sources of entity attributes; for example, the geometry use definition section
includes the height and width of the window, each of which is represented as a
number.
The relations in an ontology are also called roles. They denote how the classes
or entities are associated with others. Most of the relations are binary, meaning two
classes are involved. The relations of the IfcWindow class with other classes are
described in the “containment use definition” section. The relations a window may be
involved in include “fills” and “voids”, etc. A window “fills” an opening, which
“voids” a wall. Together, the window, the opening and the wall are “contained” in a
building story. The building itself is an “aggregation” of several stories.
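To make the correspondence concrete, the fragment below renders the IfcWindow description above as a rough ontology record in Python; the property and class lists are abbreviated from the IFC documentation, and the structure is illustrative rather than normative.

# Schematic ontology fragment: class with inheritance, attributes from property
# sets, and binary relations to other classes (all lists abbreviated).
IFC_WINDOW = {
    "class": "IfcWindow",
    "inherits": ["IfcBuildingElement", "IfcElement", "IfcProduct"],   # taxonomy (abbreviated)
    "attributes": {
        "WindowCommon": ["Reference", "AcousticRating", "FireRating"],
        "DoorWindowGlazingType": ["..."],     # glazing properties, shared with doors
        "DoorWindowShadingType": ["..."],     # shading properties, shared with doors
        "geometry": ["OverallHeight", "OverallWidth"],
    },
    "relations": {
        "fills": "IfcOpeningElement",         # a window fills an opening, which voids a wall
        "contained_in": "IfcBuildingStorey",  # window, opening and wall are contained in a storey
    },
}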

IFC-BASED SEMANTIC WEB SERVICE FRAMEWORK


The Semantic Web Services framework proposed here is an object-oriented
implementation of a building information retrieval system via a Web Services model
utilizing IFC-based ontology. Open industry standards are used wherever possible.
The conceptual architecture of the framework is shown in Figure 1.

Figure 1. Semantic Web Services framework

The client portal is the user interface available to the end users. The portal
will be a web page implemented in JSP, and could easily be ported to mobile
devices. Users have options to enter information for enquiries and use other functions
provided on the web page. The portal will talk to the Web Services through Simple
Object Access Protocol (SOAP). SOAP is an XML-based standard protocol using
HTTP that describes a request/response message and therefore governs the
communication between a service and the client (W3C 2004). It provides a platform
and programming language independent way for Web Services to exchange
information.
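As a rough illustration of this exchange, the fragment below posts a hand-built SOAP 1.2 envelope over HTTP using only the Python standard library; the endpoint, namespace, and operation name are placeholders rather than part of the proposed framework.

import urllib.request

SOAP_ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetElementInfo xmlns="http://example.org/modelservice">
      <ElementName>IfcWindow</ElementName>
    </GetElementInfo>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    url="http://example.org/ModelService",                 # placeholder endpoint
    data=SOAP_ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
    method="POST",
)
# response = urllib.request.urlopen(request)               # uncomment against a live service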
The core service, named “Model Service”, will receive and analyze the
client’s enquiry and return the result, e.g. the detailed information of a certain element
specified by the user. Other assistant services are also available for use under the
same interface. The “Dictionary Service” is shown as an example of assistant services
in the diagram. The function of this service is to translate between plain English
words and IFC terms.
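A toy version of such a Dictionary Service is sketched below; the translation table is hand-written and abbreviated, whereas a production service would draw on the full IFC specification.

# Minimal English-to-IFC term translation (table abbreviated and illustrative).
ENGLISH_TO_IFC = {
    "window": "IfcWindow",
    "door": "IfcDoor",
    "wall": "IfcWallStandardCase",
    "slab": "IfcSlab",
}

def translate(term):
    return ENGLISH_TO_IFC.get(term.lower(), "unknown term")

print(translate("Window"))   # -> IfcWindow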
The Web Service Description Language (WSDL) is used to describe each
service’s functions. WSDL is an XML-based language for describing a service as well
as its provider. It includes all the parameters needed to advertise and invoke a Web
service and the format that the result will be in. Each service module is registered in a
public Universal Discovery Description and Integration (UDDI) registry. The UDDI
registry is also a Web Services standard. Stored in the registry are WSDL files
describing the Web Services so that they can be discovered by clients.
The actual implementation of each service module may be different, as long
as they all conform to the uniform Web Services interface. For example, the core
service may be implemented through a client/server database query via SQL. Some of
the services could be implemented by third party applications.
When necessary, a service module could directly use another service module’s
functions. An example would be when a user searches for “window” in the core
service, the core service may consult the translation service to translate the “window”
into “IfcWindow” in order to run a query into the building model. But the translation
service could also provide its service independently. For example, if a user just wants
to know what the corresponding IFC term is for a window, the user could just run the
translation service and get the result. So although they could work together, the
different service modules may not necessarily know the physical existence of each
other. If needed, they will use the UDDI registry to discover necessary services.
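The discovery step can be pictured with the following sketch, in which a plain Python dictionary stands in for the UDDI registry and all endpoints are hypothetical:

# A dictionary standing in for the UDDI registry: capability -> WSDL location.
REGISTRY = {
    "translate_term": "http://example.org/DictionaryService?wsdl",
    "query_model": "http://example.org/ModelService?wsdl",
}

def discover(capability):
    """Return the WSDL location of a service offering the requested capability."""
    return REGISTRY[capability]

# The core service discovers the translation service before delegating a query to it.
print(discover("translate_term"))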

CONCLUSION AND FUTURE WORK


This paper addresses the interoperability problem in the construction industry by
proposing an IFC-based ontology and Semantic Web Services framework. Once
implemented, the framework could be utilized by all IFC-supported BIM applications
as well as personnel for more precise, consistent and up-to-date project information
from all platforms supporting Web Services. It is expected this framework will benefit
the operation staff on the construction site and greatly enhance the interoperability of
construction industry software applications.
However, the experimental framework is far from complete. For example,
the ontology used in the current framework is primitive with very limited entries. A
complete industry-wide ontology requires much more work to build and constant
maintenance to be actually useful. How to evaluate and integrate the currently
available ontology is also an interesting topic. Although Web Services is expected to
be adopted as a main application distribution platform due to its advantages as a
pay-per-use service model, its commercial application in the construction industry
needs more research.

REFERENCES
Akinci, B., Karimi, H., Pradhan, A., Wu, C., and Fichtl, G. (2008) "CAD and GIS
interoperability through semantic web services." Journal of Information
Technology in Construction, 13, 39-55.
Berners-Lee, T., Hendler, J., and Lassila, O. (2001). "The Semantic Web." Scientific
American, 284(5), 34.
Cardoso, J. (2007). "Semantic Web Services: Theory, Tools and Applications." IGI
Global.
Chen, H., Lin, Y., and Chao, Y. (2006). "Application of web services for structural
engineering systems." Journal of Computing in Civil Engineering, 20(3),
154-164.
Cheng, C. P., Lau, G. T., Pan, J., and Law, K. H. (2008). "Domain-specific ontology
mapping by corpus-based semantic similarity." Proceedings of 2008 NSF CMMI
Engineering Research and Innovation Conference.
Corcho, O., and Fernandez-Lopez, M. (2002). "Ontological engineering: what are
ontologies and how can we build them?" Semantic Web Services: theory, tools and
applications, J. Cardoso, ed., IGI Global.
IAI. "IfcWindow."
http://www.iai-tech.org/ifc/IFC2x4/beta3/html/ifcsharedbldgelements/lexical/ifcw
indow.htm (5/20/2010, 2010).
Katranuschkov, P., Gehre, A., and Scherer, R. J. (2002). "An engineering ontology
framework as advanced user gateway to IFC model data." EWork and EBusiness
in Architecture, Engineering and Construction: Proceedings of the Fourth
European Conference on Product and Process Modelling in the Building and
Related Industries, Portorož, Slovenia, 9-11 September 2002, Taylor & Francis,
269.
Neches, R., Fikes, R. E., Finin, T., Gruber, T., and Patil, R. (1991). "Enabling
technology for knowledge sharing." AI Magazine, 12(3), 36.
OMG. (2009). "Ontology definition metamodel." http://www.omg.org/spec/ODM/
(5/10, 2010).
Vacharasintopchai, T., Barry, W., Wuwongse, V., and Kanok-Nukulchai, W. (2007).
"Semantic Web Services framework for computational mechanics." Journal of
Computing in Civil Engineering, 21(2), 65-77.
Vanlande, R., Nicolle, C., and Cruz, C. (2008). "IFC and building lifecycle
management." Automation in Construction, 18 70-78.
W3C. (2004). "Web Services architecture." http://www.w3.org/TR/ws-arch/ (2/11,
2010).
Wetherill, M., Rezgui, Y., Lima, C., and Zarli, A. (2002). "Knowledge management
for the construction industry: the e-COGNOS project." Journal of Information
Technology in Construction, 7, 183-195.
Using Laser Scanning to Assess the Accuracy of As-Built BIM
B. Giel1 and R.R.A. Issa2
1M.E. Rinker School of Building Construction, College of Design Construction and
Planning, University of Florida, P.O. Box 115703, Gainesville, FL 32611-5703; PH
(352) 339-0237; FAX (352) 846-2772; email: b1357g@ufl.edu
2M.E. Rinker School of Building Construction, College of Design Construction and
Planning, University of Florida, P.O. Box 115703, Gainesville, FL 32611-5703; PH
(352) 273-1152; FAX (352) 846-2772; email: raymond-issa@ufl.edu

ABSTRACT
The growth of laser scanning and building information modeling (BIM) has
impacted the process by which we document facilities. However, despite rapid
advances in scanning technology, there still remains a great disconnect between the
data derived from laser scans and the creation of functional as-built BIM to be used in
the operation and maintenance phase of a building's life cycle. Even with the
paradigm shift to BIM-centered processes, the question still remains as to how
owners can be assured of the accuracy of their as-built models and supplemental
documentation. This paper addresses some of the current methods used to capture
and update existing facility information and also reviews several laser scanning
applications currently being developed. Based on existing 2D as-built documentation
available for a campus facility, a 3D BIM was created post construction. Then, using
point-cloud data from a small area of the building, a case study was conducted to
assess the accuracy and value of updating the previously generated BIM to better
represent the as-is conditions.
INTRODUCTION
While the employment of BIM in the design and construction phases of
facilities has increased dramatically in recent years, utilization of BIM after the
construction phase is still seldom explored. Even as owners become driving forces in
the shift to BIM-centered processes, few are fully utilizing their building models in
the operations and maintenance of their facilities. This is primarily because the
accuracy and level of detail obtained from a construction model does not always
reflect as-built conditions needed by maintenance personnel. Furthermore, over the
span of a facility's life cycle, multiple changes occur post-construction, that are
seldom documented. Laser scanning provides one possible solution to this issue by
presenting a fast and simple method for digitizing spatial information. With this
technology, as-built conditions and changes can be documented to ensure the
accuracy of information processed in BIM.


LITERATURE REVIEW
Introduction
The operations and maintenance phase of a building's life cycle is the longest
and most significant. Consequently, it is also the most costly. According to Gallaher
et al. (2004), owners and operators are shouldering $10.6 billion, or 68%, of the total
cost of inadequate interoperability in the built environment. This is in large part due to
numerous hours wasted searching, authorizing, and in many cases reconstructing as-is
documentation (Akcamete et al. 2009). In 2004, it was estimated that roughly $1.5
billion was exhausted annually on facilities personnel being delayed waiting for
accurate and adequate as-built documentation. Another $4.8 million is spent each
year by facility personnel updating existing facility documentation to match current
as-built conditions (Gallaher et al. 2004).
Traditional methods of capturing existing facility information
Facilities managers need accurate information available to them to support
decision making processes, yet documentation is consistently not updated during the
construction phase to reflect in-place conditions. This is in part due to a lack of time
and adequate staffing as well as the tedious nature of the process. To date, there is no
standardized method for updating a design model to reflect changes made during
construction (Gu and London 2010). Moreover, as renovations and modifications
take place after a project's handover, little is done to document those changes.
Akcamete et al. (2009) conducted several case studies of projects during the
construction and O&M phases to determine the types of changes that occur over a
building's life cycle. Analysis of construction change orders as well as maintenance
work orders showed a consistent pattern among change history. Though BIM
provides a possible solution to the problem of change management, the authors
critiqued its inability to track the history of changes, which may also be relevant to
facilities personnel (Akcamete et al. 2009).
Owners' push to require BIM during the design and construction phases has
facilitated an improved level of accuracy over traditional 2D methods of
documentation. However, there is still a wealth of existing facilities constructed
without BIM which must also be documented and updated. The traditional method of
updating as-built information involves the tedious process of taking field
measurements and manually recording necessary changes that must then be reflected
in the 2D and paper-based documentation. The COBIE (Construction Operations
Building Information Exchange) standard has provided some support for the digital
documentation of existing building information. However, updating this information
into COBIE is still somewhat manual. Rojas et al. (2009) compared and critiqued
three common methods being used to capture existing facility information including
traditional paper forms and computer data entry, laptop computers, digital pens and
hand held computers. They found using digital pens resulted in the highest
productivity rate, while hand-held computers were the most cost effective method.
Additionally, they noted a need to standardize procedures for surveying and data
collection within existing facilities.
Others have suggested non-traditional methods for updating facility
information. Dai and Lu (2010) assessed the accuracy of photogrammetry to
document building dimensions and track status changes in construction. In contrast,
Woo et al. (2009) suggested obtaining energy performance data using BIM and a
sensor network and manually creating BIM by tracing existing 2D as-built
documentation.
Research on laser scanning in the AEC industry
In addition to these traditional methods of recording existing facility
information and changes, laser scanning has recently been utilized to assist the
process as well. Three avenues of research on laser scanning in the AEC industry are
currently being explored: research conducted to improve the
accuracy of scanned data, research related to analysis methods for comparing
scanned data to derived models for quality control, and the automated generation of
as-built BIM based on laser scans (Huber et al. 2010). This paper will primarily
discuss how scan data can be used for analysis of an existing BIM to assist quality
control.
Understanding the limitations of laser scanning technology may help in
solving the mystery as to how it can be used more precisely. Huber et al. (2010)
addressed some of the concerns raised when using the technology including the issue
of mixed pixel detection and modeling edge loss at certain depth conditions. Bosche
(2009) commented on the need for 3D status tracking during the construction
timeline, proposing a methodology to check dimensional compliance control and
progress monitoring. Goedert and Meadati (2008) collected coordinates of the end
line and centerline of building components using a robotic total station and a robotic
laser rangefinder system. They then imported the ACAD guiding lines into Revit to
place necessary modeling components. This process is less accurate but was
perceived to be more cost effective than a full laser scan. Huber et al. (2010) have
also looked at the use of laser scanners to monitor construction sites to compare as-
built conditions with a facility's design model and uncover potential defects.
Thus far, there has been little success in the complete automation of the
generation of as-built BIMs from laser scans. Current processes require that point
cloud data generated from LIDAR technology be first converted to a 3D surface
model using a variety of complex algorithms. Many researchers have explored
developing such algorithms for this task (Maas and Vosselman 1999, Huber et al.
2010, Tang et al. 2010, and Brilakis et al. 2010). Each combination of points
representing different types of building components must be manually replaced by
objects from a standard library of components. Lastly, a more detailed set of
attributes must be attached to each object to reflect the level of as-built detail needed
(Brilakis et al. 2010). As this process is both labor intensive and costly, it is no
surprise that few building owners are investing in the time or manpower to reverse
engineer BIMs for existing buildings at this time. Instead they still rely heavily on
field measurements and traditional methods.

METHODOLOGY
Research was conducted in two phases. First, using existing 2D as-built
documentation a multi-disciplinary 3D BIM was constructed to assess the accuracy
of as-built documentation for an existing campus facility. Then, using point cloud
data from a scan of a small area of the facility, the virtual model was checked for
dimensional and object attribute accuracy. Finally, based on discrepancies uncovered
between the scan data and the BIM, updates will be made to the BIM to reflect true
as-built dimensions and conditions.
RESULTS
Phase 1: As-built BIM creation
In the fall of 2007, a group of several graduate level and PhD students at the
University of Florida began a research project to gain hands on experience with a
variety of BIM software tools available to the industry (Giel and Issa 2010). The
students were charged with constructing an as-built BIM of their current facility,
Rinker Hall, using the 2D as-built documentation given to them by UF's
Construction and Planning office. The software platforms used to achieve this task
were Autodesk's Revit Architecture, Structure, and MEP. Some general information about
the facility is listed in Table 1.
Table 1. Rinker Hall Summary
Rinker Project Statistics
Location: Gainesville, FL
Use: Higher Education
Type: New Construction
Scope: 3-story building
Square Footage: 47,300 SF
Construction Type: Steel framing, lightweight concrete on metal deck with brick
veneer and metal panel facade
Construction Cost: $6,500,000.00
Rating: USGBC LEED-NC, v2--Level: Gold (39 points)
The BIM for Rinker Hall was virtually constructed into a series of linked
multi-disciplinary models using AutoCAD files, paper based construction drawings,
and any available specifications and submittals on file for the project. Additionally,
students were asked to document any existing conditions that were not reflected in the
original as-built documents. To gain a better understanding of how BIM may
improve the documentation process, the students also simultaneously interpreted the
2D drawings originally designed in the corresponding 3D Revit platform.
Rather than tracing up from the imported 2D drawings, the students virtually
constructed Rinker Hall using two monitors. This was done to simultaneously check
the accuracy of AutoCAD rounding and resolve any drafting errors that may
otherwise have been overlooked. An image of the twin screen methodology used is
shown in Figure 1. An image of the final BIM model that was constructed is shown in
Figure 2.

Figure 1. An example of using twin screens to construct the BIM from 2D documents

Figure 2. The final BIM for Rinker Hall


During the as-built conversion process to BIM, multiple significant errors in
the 2D documentation were uncovered. One of the greatest problems faced in the
Architecture discipline was the amount of drafting error discovered in the rounding of
interior dimensions. When walls were placed in the Rinker model with the exact
thickness of finishes, there were discrepancies between what the as-built drawings
read and the actual interior dimensions. In many cases dimensional strings did not add
up to the same value indicated in the 2D drawings because of this issue.
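A contrived numeric example (values invented for illustration) shows how this happens: when each segment of a dimension string is rounded to the nearest 1/8 in., the rounded segments no longer sum to the rounded overall dimension.

from fractions import Fraction

def round_to_eighth(inches):
    """Round a dimension to the nearest 1/8 inch, as a 2D drafter might."""
    return Fraction(round(inches * 8), 8)

segments = [46.31, 92.18, 92.06]                  # invented interior segments, exact sum 230.55 in.
string_total = sum(round_to_eighth(s) for s in segments)
overall = round_to_eighth(sum(segments))

print(string_total)    # 1843/8, i.e. 230 3/8 in. from the rounded dimension string
print(overall)         # 461/2, i.e. 230 1/2 in. from the rounded overall dimension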
More significantly, the drawings for the MEP disciplines severely
contradicted what was visible in place in the building. Rinker's design uses exposed
ceilings in the front of most classrooms, allowing students to observe many of these
discrepancies on their own. However, many assumptions had to be made regarding
the mechanical systems hidden in the plenum spaces, particularly when routing the
hot water supply and return lines through the building. Because the only available
documentation on the piping systems was generic riser documents, the exact routing
solution that was used by the students was not a true reflection of the as-built facility.
The most notable inaccuracies observed in Rinker Hall's as-built drawings
occurred in the mechanical room documentation and the documentation of certain
duct sizes. The drawings indicated that rectangular ducts were used throughout the
building when in actuality a round tap was used to transition into most secondary
supply air systems. In addition, many of the duct sizes indicated in the
mechanical plans contradicted the neck sizes indicated in the diffuser schedule.
Additionally, the Fire Protection drawings showed multiple errors as well in
the routing of main and branch piping. One such discrepancy can be seen in Figures
3 and 4 which compare the as-built document to an as-built photo taken in the current
facility. In the lobby space of the first floor of Rinker Hall, the documents indicated
recessed sprinkler heads every six feet in the dropped ceiling. In the actual space,
there were no recessed sprinkler heads in that ceiling at all; instead the sprinklers
were mounted on an adjoining wall.

Figure 3. Example as-built documentation
Figure 4. Existing conditions


Phase 2: Assessing model accuracy using as-built laser scans
Due to the consistent inaccuracies uncovered in the 2D as-built
documentation, the next phase of research involved a pilot study to determine the best
methodology for updating and validating the accuracy of the BIM that was created.
First, scans from different vantage points were taken of a small lab room in Rinker.
Then using the resulting point cloud data, a 3D wireframe model was created using
Kubit's Point Cloud plugin for AutoCAD. Figures 5 and 6 show the original point
cloud in AutoCAD and the resulting planar model that was created to verify the basic
room bounding dimensions and some of the visible mechanical, piping and casework
conditions of the lab.

Figure 5. Point-Cloud Figure 6. Planar model produced

Using the "slice" tools, multiple sections were created along the different UCS
planes of the point cloud to better visualize the interior space. Then, the "fit plane"
and "fit line" tools were employed to draw the basic boundaries of the room. This
is achieved in the program by selecting a sample of points along the known planes
and edges using a least squares algorithm. Other useful tools were the "extend" and
"intersection" tools for planes which were helpful in determining where multiple
planes met. The "fit cylinder" tool was also tested to determine the location and
sizing of an insulated rain leader pipe located at the front of the classroom.
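The plane-fitting operation can be approximated with an ordinary least-squares (SVD) fit, as in the NumPy sketch below; the synthetic points stand in for a sampled point-cloud slice, and the code illustrates the principle only rather than the Kubit implementation.

import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) array of sampled points.
    Returns (centroid, unit normal); the plane satisfies n . (x - c) = 0."""
    centroid = points.mean(axis=0)
    _, _, vh = np.linalg.svd(points - centroid)
    return centroid, vh[-1]          # right singular vector of the smallest singular value

# Synthetic sample: noisy points near the plane z = 0.02*x + 3.0 (e.g. a slightly tilted ceiling).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 5, size=(200, 2))
z = 0.02 * xy[:, 0] + 3.0 + rng.normal(scale=0.002, size=200)
centroid, normal = fit_plane(np.column_stack([xy, z]))
print(centroid, normal)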
There were several lessons learned through this pilot study. Perhaps the most
significant was the importance of obtaining accurate point cloud data. The level of
accuracy of the planar model that can be created is dictated by the density and
number of points and the number of scans conducted. Therefore, it is necessary to
take multiple scans from different vantage points to record information about the
space in its entirety. It is felt that the accuracy of the BIM could have been greatly
improved by taking more scans of the interior environment and moving some of the
excess debris out of the room. In addition, the number of scans needed is determined
by the materiality of the surfaces in the space, the amount of shadows and obstacles in
the room, and the overall size of room. As noted by Huber et al. (2010), edge loss
was a significant issue encountered. Thus, the farther objects were from the
instrument, the lower the density of points created and less accurate the method for
placing those planes. There was also significant difficulty placing the curtain wall
plane at the back of the classroom because of its transparent materiality. The point
cloud actually captured several points outside the interior space. Another major
difficulty faced was the nature of working within a closed interior space. Because the
space was bounded by solid planes on all sides, it was difficult to distinguish between
objects inside the room. While the slice tools greatly assisted this issue, using a scan
with a photo superimposed on it would have greatly assisted in interpreting objects
based on color and edge differentiation.
The final step of this study will be to import the 3D planar model created in
AutoCAD into the existing federated Revit model to try and assess its accuracy.
Physical interior dimensions will also be taken in the space to verify the accuracy of
the planar modeling method before updates are made.
CONCLUSIONS
There is still much work to be done to ensure the accuracy of as-built
documentation in the AEC industry. Many of the errors found in the as-built
documentation of Rinker Hall were the result of drafting errors inherent in a 2D
system. However, the other significant errors that were uncovered raise the question
whether greater weight needs to be placed on the review process of as-built
documentation before project handover. Perhaps the solution to this issue extends
beyond advanced technology and software; but in improving management practices
and quality control. In addition, the process of tracing and creating a physical
geometric model from a point cloud was found to be labor intensive and sometimes
inexact procedure. Furthermore, laser scanning equipment is an expensive initial
investment. Thus, the decision to use this methodology must be weighed heavily
against scale and nature of the facility. At this time, the authors are not fully
convinced that this method is any less time consuming or more accurate than the
traditional manual processes of updating as-is documentation. However, the method
does digitize a wealth of information that may be useful for future applications and it
also provides a temporary solution to a growing problem in the construction industry.
FUTURE WORK
The objective of the second phase of this study was to become comfortable
using some of the software tools available for point cloud post-processing and
analysis and understand some of their limitations. Therefore, only a small room in
Rinker Hall was analyzed and updated for this study. However, after evaluating the
accuracy of this method, we hope to update and correct the entire as-built BIM.
Lastly, due to the sheer number of inaccuracies uncovered in the MEP disciplines'
drawings, the next phase of research will involve scanning Rinker's mechanical room
to provide a more accurate and intelligent set of as-built documentation to the
Facilities Planning office.

ACKNOWLEDGEMENTS
Thanks to Alex Demogines from Faro for his patience in helping us get
started on our journey. We would also like to thank Scott Diaz and Kubit-USA for
allowing us to sample several of their software packages.
REFERENCES
Akcamete, A., Akinci, B., Garrett, J.H., Jr. (2009). "Motivation for computational support
for updating building information models (BIMs)." Proceedings from the 2009
ASCE International Workshop on Computing in Civil Engineering, Texas, June 24-
27, 2009, 523-532.
Bosche, F. (2009). "Automated recognition of 3D CAD model objects in laser scans and
calculation of as-built dimensions for dimensional compliance control in
construction." Advanced Engineering Informatics, 24, 107-118.
Brilakis, I., Lourakis, M., Sacks, R., Savarese, S., Christodoulou, S., Teizer, J., and
Makhmalbaf, A. (2010). "Toward automated generation of parametric BIMs
based on Hybrid video and laser scanning data." Advanced Engineering Informatics,
24, 456-465.
Dai, F. and Lu, M. (2010). "Assessing the accuracy of applying photogrammetry to
take geometric measurements on building products." Journal of Construction
Engineering and Management, 136(2), 242-250.
Gallaher, M.P., O'Connor, A.C., Dettbarn, J.L., Jr., and Gilday, L.T. (2004). "Cost
analysis of inadequate interoperability in the U.S capital facilities industry."
NIST GCR 04-867.
Giel, B. and Issa, R.A., (2010). "Benefits and challenges of converting 2D-as-built
documentation to a 3D BIM post construction." Autodesk White Papers.
Goedert, J.D. and Meadati, P. (2008). "Integrating construction process documentation into
building information modeling." Journal of Construction Engineering and
Management, 134 (7), 509-516.
Gu, N. and London, K. (2010). "Understanding and facilitating BIM adoption in the AEC
industry." Journal of Automation in Construction, 19, 988-999.
Huber, D., Akinci, B., Tang, P., Adan, A., Okorn, B. and Xiong, X. (2010). "Using laser
scanners for modeling and analysis in architecture, engineering , and construction."
Proceedings from the 2010 44th Annual Conference on Information Sciences and
Systems, CISS 2010.
Maas, H.G. and Vosselman, G. (1999). "Two algorithms for extracting building models from
raw laser altimetry data." Journal of Photogrammetry and Remote Sensing, 54, 153-
163.
Rojas, E.M., Dossick, C.S., Schaufelberger, J., Brucker, B.A., Juan, H., and Rutz, C., (2009).
"Evaluating alternative methods for capturing as-built data for existing facilities."
Proceedings from the 2009 ASCE International Workshop on Computing in
Civil Engineering, Austin,TX, June 24-27, 2009, 237-246.
Tang, P., Huber, H., Akinci, B., Lipman, R. and Lytle, A. (2010). "Automatic
reconstruction of as-built building information models from laser-scanned point
clouds: a review of related techniques." Automation in Construction,19, 829-843.
Woo, J., Wilsmann, J. and Kang, D. (2010). "Use of as-built information modeling."
Proceedings from the 2010 Construction Research Congress, May 1-8, 2010,
538-548.
BIM Facilitated Web Service for LEED Automation

Wei Wu1 and Raja R.A. Issa2


1Department of Construction Management and Civil Engineering, Georgia Southern
University, P. O. Box 8047, Statesboro, GA 30460-8047; PH (912) 478-0542; FAX
(912) 478-1853; email: wwu@georgiasouthern.edu
2Rinker School of Building Construction, University of Florida, P. O. Box 115703,
Gainesville, FL 32611-5703; PH (352) 273-1152; FAX (352) 392-9606; email:
raymond-issa@ufl.edu

ABSTRACT
BIM technology has been increasingly implemented in green building design
and construction, noticeably in LEED projects. The advantages of using BIM on
LEED projects reside in its information richness that is desirable to tackle the
challenges posed by LEED certification, meeting the credit compliance and
generating the documentation required in certification review. This research proposes
a 3rd party web service relying on BIM as the information backbone to facilitate the
LEED documentation generation and management. As a potential enhancement to
LEED Online, this web service uses a structured database that feed on information
coming directly from an integrated BIM model, enabled by information exchange
protocols such as Open Database Connectivity (ODBC) and the Industrial Foundation
Class (IFC). This paper explores the premises of this web service and proposes the
preliminary architecture framework in two different scenarios.
Keywords: BIM, LEED, ODBC, IFC, Web services

INTRODUCTION
BIM and LEED are arguably two of the most popular trends in the current
AEC&FM industry. With the market still in transformation, the U.S. Green Building
Council (USGBC)’s LEED brand has become a new paradigm in the U.S. green
building market. Stakeholders including legislators, developers, building owners,
design professionals and contractors are engaged with LEED in one way or another
and the business mode in building construction is affected correspondingly. LEED is
not yet a building code, but quite a few states (e.g. Arizona and California) and
governmental agencies (e.g. U.S. DOE and GSA) have mandated its implementation.
Earning LEED certification for a building is a challenging and tedious
process, with two major tasks to accomplish: 1) meet the LEED rating system
requirements; and 2) demonstrate such compliance with valid and comprehensive
documentation. To streamline the process, USGBC launched a web-based platform
called LEED Online in 2006 to help project teams manage LEED documentation.


“Through LEED Online, project teams can manage project details, complete
documentation requirements for LEED credits and prerequisites, upload supporting
files, submit applications for review, receive reviewer feedback, and ultimately earn
LEED certification” (GBCI 2010). A major bottleneck of LEED Online, however,
stems from the intrinsic deficiency of the traditional project delivery method of the
building industry: fragmentation induced lack of interoperability. In terms of the
LEED project delivery, this means redundancy in project documentation and data
collection due to information inconsistency, which eventually causes the loss of
productivity and profitability.
The boom in building information modeling (BIM) technology deployment
seems to change the status quo. Despite the controversies remaining in defining what
“BIM” exactly is, the consensus is reached on the integrity of the information
captured in a building information model: everyone on the project team will be at the
same page at any point and any change triggered by any party will be delivered in a
consistent way to the rest of the team. In addition to being the information backbone,
BIM implementation in LEED projects can be justified by the functionalities of
current BIM authoring/analysis tools in building design and performance
configuration including those required by the LEED rating system, e.g. whole
building energy simulation. The bottom line is that practitioners of BIM and LEED
have found opportunity to leverage the LEED process by integrating it with BIM.
This research focuses on the information flow in LEED project delivery,
especially how information is generated, processed, delivered, managed and finally
submitted to USGBC/GBCI (Green Building Certification Institute) for certification
review. The goal is to propose a supplemental web service to LEED Online that feeds
on information coming directly from BIM. The information exchange process is
enabled by common protocols such as ODBC and IFC. By designing the appropriate
schema, generic information in BIM could be manipulated and organized in the
format that is compatible with LEED Online, and eventually the automation of the
LEED process can be achieved.
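A highly simplified sketch of the intended data flow is given below, assuming the building information model has been exported to a relational database reachable over ODBC (via the pyodbc package); the data source name, table, and column names are hypothetical, and the output keys are placeholders rather than the actual LEED Online template fields.

import pyodbc   # ODBC bridge; assumes the BIM has been exported to an ODBC data source

def collect_regional_material_data(dsn="RevitModelDSN"):        # hypothetical DSN
    """Pull material costs from the exported BIM and reshape them into a
    dictionary keyed by placeholder LEED template fields."""
    conn = pyodbc.connect("DSN=" + dsn)
    cursor = conn.cursor()
    # Hypothetical table and columns; the real export schema must be inspected first.
    cursor.execute("SELECT MaterialName, Cost, SourceDistanceMiles FROM Materials")
    rows = cursor.fetchall()
    # 500-mile radius as used in LEED-NC regional materials credits (illustrative only).
    regional_cost = sum(r.Cost for r in rows if r.SourceDistanceMiles <= 500)
    total_cost = sum(r.Cost for r in rows)
    return {
        "regional_materials_cost": regional_cost,
        "total_materials_cost": total_cost,
        "regional_percentage": 100.0 * regional_cost / total_cost if total_cost else 0.0,
    }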

LEED DOCUMENTATION
As a critical step in the certification workflow (Figure 1), USGBC/GBCI
requires the project team to prepare an application with adequate documentation to
demonstrate that the project has fulfilled the claimed LEED credits. Two major types
of documentation are involved: 1) LEED Online templates and 2) supplemental
submittal documentation.

Project Registration → Prepare Application → Submit Application → Application Review → Certification

Figure 1. GBCI’s project certification overview (GBCI 2010).

A LEED Online template is a dynamic Portable Document Format (PDF)
document created by USGBC using Adobe LiveCycle Designer. It is mandatory
and project teams must fill in the prescribed information relevant to the LEED credits
applied for. Despite the various LEED rating systems available, each LEED credit
actually specifies what kind of submittals are expected from the project team. In
contrast, supplemental submittal documentation is optional and project teams only
use it to address issues they feel necessary to help increase the chances of achieving
the applied for credits.
Straightforward as it seems to be, the challenges in preparing LEED
documentation come from the chaos in information management that is intrinsic to
the traditional project delivery model, as illustrated in Figure 2A. Without a
centralized information source, project members are on their own in data collection
and documentation preparation. When the project evolves, they respond to the progress
asynchronously, exacerbated by overwhelming redundancy during information
exchange. Consequently, inaccuracy or even loss of information will take place and
make the documentation error-prone. In contrast, with BIM, the project team will
have an integral information source that they can rely on when it comes to preparing
LEED documentation, as illustrated in Figure 2B. Without extra efforts to ponder
whether they are looking at the latest-greatest drawings, or wonder if the required
regional materials are furnished into the building, an investigation of the building
information model will answer all these questions.

A. Documentation-centric model B. Information-centric model

Figure 2. Information management in AEC-FM industry (Sjøgren 2007).


BIM MATURITY
An essential promise of BIM is to catalyze changes to reduce the industry’s
fragmentation, improve its efficiency/effectiveness, and lower the high costs of
inadequate interoperability. However, companies and organizations embrace and
practice BIM at different levels, and the benefits they are able to recoup may vary
significantly. The term “BIM maturity” was used by Succar (2008) to describe the
stages that organizations will experience when implementing BIM gradually and
consecutively, as illustrated in Figure 3.
Regardless of the segregation of these stages, the real-world situation might be
intermixed as there are leaders and pioneers, followers as well as laggards. The
importance of identifying these maturity stages, however, resides in that it provides a
general framework of BIM implementation, reminds the industry where it is at and
directs where it should head for. It also reveals where improvement is needed in terms
of keeping BIM as a technology to advance. It is believed that BIM Stage 1 and Stage
2 reflect the status quo for most of the industry, and Stage 3 is the next immediate
goal to embark on, which is also the focus of this research.

[Figure 3 shows the linear BIM maturity sequence: Pre-BIM (the status of the AEC industry before the implementation of BIM: manual, 2D or 3D CAD), BIM Stage 1 (object-based modeling), BIM Stage 2 (model-based collaboration), BIM Stage 3 (network-based integration), and IPD (Integrated Project Delivery, the long-term goal of BIM implementation).]

Figure 3. BIM maturity in stages – linear view (Adapted from Succar 2008).

INFORMATION EXCHANGE
The maturity of BIM also dictates how many resources a company or a project
team has access to in a project setting, especially in LEED project delivery. The
capacity of current BIM authoring and analyzing tools has made previously laborious
processes cost-effective to conduct in order to achieve the desired building
performance. Wu and Issa (2010) summarized popular BIM solutions and their
possible application in LEED for New Construction projects at the credit level.
Formulating an effective strategy to take advantage of appropriate tools in
streamlining the sustainability oriented design and construction is critical to the
success of LEED certification. Documentation on the other hand, requires
stewardship in management of information produced along with the project progress.
Information exchange protocols adopted by a company to communicate with
business partners as well as the data format used internally may be consistent with the
company’s culture and thus resistant to transition. Nevertheless, “for a BIM
implementation strategy to succeed, it must be accompanied by a corresponding
cultural transformation strategy”. Painstaking as the transition is, it is
inevitable and the benefits are tangible, “the more flexibly information can be
exchanged, the greater the likelihood that it can be preserved in a useful form for the
long term (Smith and Tardif 2009).”
A whole range of data exchange and storage options already exists, and
Industry Foundation Classes (IFCs) protocol is a major initiative of open-standard
data formats that has been supported by many BIM software applications. The IFCs
framework involves comprehensive efforts in building semantics and ontologies that
are essential to the accurate interpretation and exchange of building information, and so far
is still a work in progress. The most updated information about IFCs can be found at
http://www.buildingsmart.com/bim.
Another approach for interoperable information exchange is through the Open
Database Connectivity (ODBC) mechanism, which is also popular and supported by
major software vendors. Unlike IFC, ODBC aims to create a software interface for
accessing database management system (DBMS) independent of programming
languages, database systems and operating systems, in other words, direct exchange
of information at the metadata level.
The major challenge for seamless information exchange in either the IFCs or
ODBC scenario is to ensure the integrity of information without distortion or loss
during transfer. For IFCs, it demands the richness of vocabularies. The International
Framework for Dictionaries (IFD) in the IFCs framework is dedicated to capturing
building semantics and ontologies including those unique ones in LEED projects. In
the ODBC case, software vendors must be prudent in allowing users to manipulate the
internal database of the software without compromising its intended functionalities,
while still giving them adequate freedom to export/import the data catering to their needs.
LEED AUTOMATION: NETWORK-BASED BIM-LEED INTEGRATION
LEED Automation is an effort to implement BIM in LEED projects at the
Stage 3 maturity level: network-based integration. USGBC’s official announcement
explicitly outlines the characteristics of such integration: “LEED Automation works
similarly to an app. It will perform three key functions for LEED project teams and
users of LEED Online by seamlessly integrating third-party applications to 1)
provide automation of various LEED documentation processes; 2) deliver customers
a unified view of their LEED projects; and 3) standardize LEED content and
distribute it consistently across multiple technology platforms” (USGBC 2010).
BIM Stage 3 is a perfect fit for this proposition for it has all the ingredients
required to fulfill these three functions: 1) documentation generation is a built-in
functionality of popular BIM solutions in the market; 2) a building information model
for a LEED project is more than a unified view to the customers but a valuable
reservoir of information for the project over its life cycle; 3) the essence of LEED, its
features broken down at the building component level together with the relationships
between them, are distributed to and shared by project members via a network in
standardized data format, regardless of how sparsely they are geographically located
and what software packages they are dealing with individually. BIM Stage 3 models
become interdisciplinary n-dimensional models allowing complex analyses at the
early stages of virtual design and construction. The model deliverables extend beyond
semantic object properties to include business intelligence, lean construction
principles, green policies and whole lifecycle costing (Succar 2008).
The network-based BIM-LEED integration is semantically-rich and can be
hypothetically achieved through model server technologies using proprietary, or non-
proprietary, open formats (e.g. BIMserver.org), single integrated/distributed federated
databases (e.g. Autodesk’s RDBLink, Laiserin 2003) and/or SaaS (Software as a
Service) solutions (e.g. Onuma Planning System, Wilkinson 2008). The prerequisites
for this integration include: 1) the maturity of network/software technologies allowing
a shared interdisciplinary model to provide two-way access to project stakeholders; 2)
the readiness of a competent information exchange format to lubricate the process.
BIM FACILITATED WEB SERVICE
This research looks at two possible approaches to propose the framework of
the network-based BIM-LEED integration differentiated by the interoperability
strategy implemented: the IFC approach and the ODBC approach. Figure 4 shows a
brief roadmap from the process perspective of BIM and LEED integration, of the key
steps in LEED project delivery using BIM. No matter which approach is adopted, the
network-based integration kicks off when the model information flow is triggered.

ODBC Approach. ODBC enables information exchange in the most primitive
format: generic data that is platform independent. Basically, through ODBC, users
can potentially extract the building information as data in a tabular format that can be
easily opened in ubiquitous database management software such as MS Access. With
extended function of the database management software, project stakeholders can
then manipulate the data to conduct calculation and analysis that are often impossible
to perform in the BIM software environment.
To share the extracted information through ODBC over the network, a
WAMP (Windows, Apache, MySQL and PHP) solution is investigated, where
Windows is the operating system, Apache is the web server, MySQL is the database
and PHP is the web scripting language. Figure 5 illustrates the architecture of the
proposed web service oriented to LEED automation using Autodesk Revit as the
sample BIM solution. In this scenario, MySQL is the data infrastructure of this web
application. All information contained in the “Project Module” and “LEED Module”
can be entered into the MySQL database. For the “BIM Module”, it is possible to
export the Revit’s internal database through ODBC into a MySQL database. Revit as
a BIM authoring tool has a graphical user interface (GUI) that sits on top of a
database. In essence, any instance of the model components is the graphical
representation of a certain cluster of data underneath the GUI. Schedules/quantities
are evidence of the database’s presence. Depending on the tasks to be performed,
users can authorize the “Application Service” to trigger the query in MySQL to
manipulate the database and produce the desired outcomes.
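As a minimal illustration of this query-driven workflow, the following Java sketch reads aggregated material quantities from a MySQL database populated by an ODBC export. The database, table, and column names (leed_bim, Materials, Volume) and the use of the MySQL Connector/J JDBC driver are assumptions for illustration only and do not reflect the actual Revit export schema.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/**
 * Minimal sketch: query BIM quantities that were exported via ODBC into MySQL.
 * The schema names are hypothetical placeholders; the MySQL Connector/J driver
 * is assumed to be on the classpath.
 */
public class LeedMaterialQuery {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/leed_bim"; // assumed WAMP/MySQL instance
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement stmt = con.createStatement();
             // Aggregate material volumes, e.g. to support a regional-materials calculation
             ResultSet rs = stmt.executeQuery(
                 "SELECT Material, SUM(Volume) AS TotalVolume " +
                 "FROM Materials GROUP BY Material")) {
            while (rs.next()) {
                System.out.printf("%s: %.2f m3%n",
                    rs.getString("Material"), rs.getDouble("TotalVolume"));
            }
        }
    }
}

Comparable queries could support other LEED-relevant calculations once the actual exported tables are known.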

Figure 4. BIM-LEED integration process roadmap.
Figure 5. Architecture of network-based BIM-LEED integration: ODBC scenario.

However, there is a huge bottleneck of current ODBC support in BIM
authoring tools: during the export, none of the customized data, e.g. shared
parameters as in Autodesk Revit, will be exported. Such constraints significantly
compromise the potential of ODBC as a data exchange mechanism in LEED projects,
since most LEED-relevant information is not recognized by the software without
special input from the project team. For instance, in order to take off the wood
materials that are FSC certified, a shared parameter called “FSC” needs to be created
first to tag such wood in the building information model. Later on in the LEED
documentation stage, those tagged wood materials can be identified, summarized and
priced in order to apply for the corresponding LEED credit. However, after the ODBC
export, the “FSC” tag is lost and it is no longer possible for the project team to tell
which wood materials are FSC certified and should be counted in the calculation.

IFC Approach. Using IFC as the interoperability protocol is a prevailing trend in
BIM implementation. Leading software solutions have been IFCs compatible for a
while. Bi-directional IFC support is a relatively new development, and is essential for
more rigorous information exchange using the IFC framework. Unlike the “raw data”
flow in the ODBC scenario, data in the IFC format is semantically-rich and bears the
hierarchy as well as full body of properties that are inherent in the building industry.
For instance, the information for a door in the building information model expressed
in the IFC format will contain not only the semantics of a door: IfcDoor, but also its
geometry: IfcPositiveLengthMeasure, its relationship with the host wall:
IfcRelContainedInSpatialStructure, and the whole portfolio of its properties:
Pset_DoorCommon (buildingSMART International Ltd 2010). This is indisputably
conducive to retain the integrity of the project information. Figure 6 illustrates the
proposed web service in the IFC scenario.
Figure 6. Architecture of network-based BIM-LEED integration: IFC scenario.

The IFC approach has the potential to overcome the barriers encountered
in the ODBC scenario as long as its ontology and semantic representation of “LEED
parameters” are fully developed. The most recent version of IFC 2x4 RC has
significantly improved in addressing such needs. For example, a series of new entities
that deal with material definition: IfcMaterialDefinition, material profiles:
IfcMaterialProfile, material relationships: IfcMaterialRelationship, and material
usage: IfcMaterialUsageDefinition have been added into the IFC framework, and can
be expected to accommodate semantic needs in a LEED project that aims for credits
in the materials & resources category.
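To illustrate what a lightweight inspection of IFC data might look like, the following Java sketch scans an IFC STEP file line by line and lists the declared material names, flagging those whose names suggest FSC certification. It is only a simplified, regex-based illustration under the assumptions of a file named project.ifc and an FSC naming convention; a production implementation would rely on a complete IFC toolkit to resolve entity references and property sets such as those introduced in IFC 2x4.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Minimal sketch: scan an IFC STEP file and list the material names it declares.
 * This line-oriented illustration does not resolve entity references or property sets.
 */
public class IfcMaterialScan {
    // Matches e.g. #123=IFCMATERIAL('FSC certified plywood');
    private static final Pattern MATERIAL =
        Pattern.compile("IFCMATERIAL\\('([^']*)'", Pattern.CASE_INSENSITIVE);

    public static void main(String[] args) throws IOException {
        for (String line : Files.readAllLines(Paths.get("project.ifc"))) { // assumed file name
            Matcher m = MATERIAL.matcher(line);
            if (m.find()) {
                String name = m.group(1);
                // Flag materials whose names contain "FSC"; this naming convention is a
                // project assumption, not an IFC requirement.
                System.out.println("Material: " + name
                    + (name.toUpperCase().contains("FSC") ? "  [FSC candidate]" : ""));
            }
        }
    }
}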
CONCLUSION
This research investigated the premises of a BIM facilitated web service to
achieve LEED automation, and proposed the framework of this web service in two
different scenarios. It is believed that current technology is ready for the industry to
experiment with the prototype of network-based BIM-LEED integration. The
semantic comprehensiveness of IFC, BIM server development, and the LEED API
are among the highest priorities to look at in the next step of research. Well
documented case studies of LEED projects are highly desirable to help validate and
improve the framework as well as the functionality of the proposed web service.
REFERENCES
buildingSMART International Ltd. (2010). “Industry Foundation Classes Release 2x4
(IFC2x4) Release Candidate 2.” <http://www.iai-
tech.org/ifc/IFC2x4/rc2/html/index.htm> (Dec.23, 2010).
GBCI. (2010). “Certification Guide: LEED for New Construction.” Green Building
Certification Institute, <http://www.gbci.org/main-nav/building-
certification/certification-guide/leed-for-new-construction/about.aspx> (Dec.
23, 2010).
GBCI. (2010). “LEED Online.” Green Building Certification Institute,
<http://www.gbci.org/main-nav/building-certification/leed-online/about-leed-
online.aspx> (Dec. 23, 2010).
Laiserin, J. (2003). “Building Information Modeling - The Great Debate.” The
LaiserinLetter, <http://www.laiserin.com/features/bim/index.php> (Dec.23,
2010)
Sjøgren, J. (2007). “buildingSMART – a smart way for implementation of standards.”
<http://www.plcs-resources.org/papers/s1000d/day3/buildingSMART%20-
%20Jons%20Sjogren.pdf> (Dec.23, 2010).
Smith, D.K., and Tardif, M. (2009). Building information modeling: A strategic
implementation guide for Architects, Constructors, and Real Estate Asset
Managers, Wiley, Hoboken, New Jersey.
Succar, B. (2008). “Building information modeling framework: A research and
delivery foundation for industry stakeholders.” Automation in Construction,
18(3), 357-375.
USGBC. (2010). “USGBC Announces ‘LEED Automation’ to Streamline and Create
Capacity for LEED Green Building Projects.” U.S. Green Building Council,
<http://www.usgbc.org/Docs/News/LEED%20Automation.pdf> (Dec. 23,
2010)
Wilkinson, P. (2008). “SaaS-based BIM.” Extranet Evolution,
<http://www.extranetevolution.com/extranet_evolution/2008/04/saas-based-
bim.html> (Dec. 23, 2010)
Wu, W., and Issa, R.R.A. (2010). “Application of VDC in LEED projects: framework
and implementation strategy.” Proceedings CIB W-78 27th International
Conference on IT in Construction, November 15-19, Cairo, Egypt.
Optimization of construction schedules with discrete-event simulation
using an optimization framework

M. Hamm1,a, K. Szczesny1,b, V. V. Nguyen1,c and M. König1,d


1Chair of Computing in Engineering, Faculty of Civil and Environmental
Engineering, Ruhr-Universität Bochum, Universitätsstr. 150, 44801 Bochum, Germany
aPH +49-234-32-26715; FAX +49-234-32-07416; email: matthias.hamm@rub.de;
bkamil.szczesny@rub.de; cvinh.nguyen@rub.de; dkoenig@inf.bi.rub.de

ABSTRACT

The efficient execution of complex construction projects requires
comprehensive construction scheduling. It is necessary to consider various
dependencies and restrictions as well as the availability of required resources. The
generation of efficient schedules is a very challenging task, which results in an NP-
hard optimization problem. In this paper an approach is presented to determine
efficient construction schedules by linking discrete-event simulation with an
optimization framework. This enables the application of various metaheuristics to the
scheduling problem. Thus, efficient schedules for complex construction scheduling
problems can be determined in a relatively short amount of time. An example of
implementation is presented to validate the optimization concept using Evolutionary
Algorithms.

INTRODUCTION

In the contemporary construction industry the specification of efficient
schedules for complex construction projects is clearly needed. However, this is a
demanding task that requires extensive knowledge. A valid execution sequence of
construction tasks has to be determined with consideration of project-specific
technological constraints, required materials, and available resources. Due to these
conditions, scheduling problems belong to the class of combinatorial optimization
problems, which makes them NP-hard (Brucker and Knust 2006). Thus, an
analytical calculation of optimal schedules with exact mathematical methods is not
possible. Alternatively, metaheuristics can be applied to enable the determination of
near-optimal schedules. Metaheuristics are heuristic methods that offer the
possibility of solving a very general class of computational problems in an
acceptable amount of time. Commonly used metaheuristics are Simulated Annealing
(Kirkpatrick et al. 1983), Evolutionary Algorithms (Bäck 1996), and Particle Swarm
Optimization (Eberhart et al. 2001). Metaheuristics are often combined with
simulation concepts to generate a multitude of solutions for different combinations of
parameters. In this paper the application of the constraint satisfaction approach
guarantees the generation of valid construction schedules (Beißert et al. 2007). In
previous approaches, the optimization method was embedded within a simulation
framework. Even though good results could be achieved, some disadvantages are
obvious. The main issue is the lack of flexibility. A single optimization method is
hard-coded in the simulation framework. Thus, only the implemented optimization
method can be used, and an extension is very time-consuming.
By linking the discrete-event simulation with the existing optimization
framework MOPACK, the possibility arises that various metaheuristics can be
applied to the scheduling problem (Nguyen et al. 2010). MOPACK is a Java-based
optimization framework that supplies a general interface for miscellaneous
optimization algorithms. The approach we are presenting uses evolutionary
algorithms, which are already implemented in MOPACK. Additionally, MOPACK
provides the possibility of distributed computing throughout the optimization
process. The application of MOPACK interacting with the discrete-event simulation
framework enables the determination of near-optimal schedules in a relatively short
amount of time.
The general concept of our optimization approach for construction scheduling
problems is described in the following section. More detailed descriptions of the
applied simulation framework, the optimization framework, and the interaction
between them are specified in the subsequent sections. In the implementation
example section our approach is demonstrated for an evolutionary algorithm and
important implementation details are described.

SPECIALIZED SIMULATION BASED OPTIMIZATION APPROACH

The optimization approach for construction scheduling problems presented in
this paper is based on simulation based optimization, which in turn is divided into
three components. In general, simulation based optimization consists of the
simulation, the optimization method, and the optimization model.
Moreover, the simulation based optimization approach is adapted to the scope of the
optimization of construction schedules. Thus, the simulation component is
specialized by a discrete-event simulation component while the optimization model
is replaced by a scheduling problem component. Therefore, our approach is based on
the following components (Figure 1).

Figure 1: Optimization approach for construction scheduling problems.


In general, components in the presented approach interact through common
interfaces that ensure a high degree of interoperability. Thus, the exchange of a
particular component implementation does not negatively impact the general
functionality of this approach. This flexible approach overcomes the disadvantages
of some already existing solutions, for which an extension or even modification of at
least one component would be very time-consuming. The implementation of the
three components will be described in more detail in the following paragraphs.
SCHEDULING PROBLEM
The observed problem consists of determining efficient schedules for construction
projects. All problem-related information is stored in the scheduling problem
component. Furthermore, the scheduling problem component requires an input from
a certain optimization method in the form of the execution sequence of tasks. It is
transformed into simulation inputs, and the simulation is executed with this data.
After finishing the simulation run, the simulation output is again transformed into
objective function values that are demanded by the optimization method.
Since the execution of each task requires a specific amount of resources and
all resources have limited capacities, the resulting optimization problem is identified
as a Resource-Constrained Project Scheduling Problem (RCPSP). For the modeling
of the RCPSP some additional information is needed. The duration of each task has
to be defined, and existing precedence relations between the tasks, i.e., technological
dependencies, have to be specified. Additionally, the optimization criteria need to be
defined in the Scheduling Problem. Important information can be obtained from a
Building Information Model, which contains data about all building elements. These
data have to be enriched manually with additional information about resources,
materials, and constraints between construction tasks.
Within the scheduling problem all necessary information is provided. At the
beginning of the optimization process the total number of tasks is forwarded to the
optimization method. After the generation of a new execution sequence by the
optimization method, the task execution sequence is transformed by the scheduling
problem and passed to the simulation component as a simulation input.
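A possible in-memory representation of this problem data is sketched below in Java; the class and field names are illustrative only and do not correspond to the actual implementation or to MOPACK classes.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Illustrative data structure for a resource-constrained construction task. */
class ConstructionTask {
    final String id;                     // e.g. a WBS code taken from the enriched BIM export
    final int duration;                  // duration in working days
    final Map<String, Integer> demand;   // resource type -> units required during execution
    final List<ConstructionTask> predecessors = new ArrayList<>(); // technological constraints

    ConstructionTask(String id, int duration, Map<String, Integer> demand) {
        this.id = id;
        this.duration = duration;
        this.demand = demand;
    }
}

/** Holds the scheduling problem data enriched from the BIM export. */
class SchedulingProblem {
    final List<ConstructionTask> tasks = new ArrayList<>();
    final Map<String, Integer> resourceCapacity = new HashMap<>(); // limited resource capacities
}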
OPTIMIZATION METHOD
The optimization method component is responsible for the generation of
possible solutions as well as their evaluation with regard to the optimization
objectives. To achieve this, the optimization method interacts with the scheduling
problem through a common interface. In particular, this interface enables
intercommunication between the optimization method and the scheduling problem.
Thus, the total number of tasks is obtained by the optimization method. Each task is
considered as an optimization variable by the optimization method. Additionally, the
optimization method obtains the objective function values for certain combinations
of optimization variables. The modification of optimization variables is
accomplished by the optimization method according to its particular implementation.
The modified optimization variables are the output of the optimization method to the
scheduling problem. The optimization methods represent various metaheuristics, e.g.,
Evolutionary Algorithm, Simulated Annealing, or Particle Swarm Optimization.
In order to realize the optimization method component software is required
that provides customizable and flexible optimization methods. Therefore, the
software package MOPACK (Multi-method Optimization PACKage) is used
(Nguyen et al. 2010). MOPACK is a Java-based optimization framework that
supplies a general interface to a wide range of optimization algorithms. In addition to
this, MOPACK has a variety of already implemented optimization algorithms.
Moreover, MOPACK provides an object model based API for developing
optimization problems in a declarative and explicit manner using a common interface
for optimizers. Furthermore, the framework provides methods to convert any
optimization problem implemented using the MOPACK API into an optimization
problem that may be solved in a distributed computing environment by exploiting
parallel computing paradigms.
Thus, by providing the ability to implement sophisticated optimization
methods and complex optimization problems, MOPACK accomplishes the
requirements for both the optimization method component as well as the scheduling
problem component of the proposed approach. MOPACK is used successfully in the
field of structural optimization, in particular in simulation-based optimization
(Nguyen et al. 2010). Because of the common architecture of MOPACK, it is not
restricted to structural optimization, but may also be applied in other areas of
optimization.
DISCRETE-EVENT SIMULATION
The discrete-event simulation component computes simulation results for
certain input data. The input required by the simulation component consists of the
tasks with their required resources, available materials, existing technological
constraints between construction tasks, and the task execution sequence generated by
the optimization method. The simulation component generates a valid construction
schedule and calculates the objective function values.
In order to achieve this, the discrete-event simulation component is based on
the constraint satisfaction approach (Beißert et al. 2007). Individual constraints are
defined for each task. These constraints result from required resources, the
availability of materials, and technological dependencies between tasks. These
constraints are checked continuously during the simulation process. If all constraints
of a task are fulfilled, this task can be executed. The required resources are locked
and the starting time is recorded. This sequence will be repeated until all tasks are
executed.
The open source Java-based discrete-event simulation framework DESMO-J
is used for the implementation of the discrete-event component (Lechler and Page
1999). Furthermore, the framework is extended to meet requirements for
functionality. A significant functionality of the simulation component is the
possibility of repairing unfeasible execution sequences, i.e., sequences of
construction tasks that are nonexecutable due to precedence relations between the
tasks will be adjusted (Figure 2). Thus, independently of whether or not the given
execution sequence of tasks is feasible, the result of the simulation will be a valid
schedule for the construction tasks.
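The following Java sketch illustrates the constraint-checking cycle described above in a strongly simplified, time-stepped form: a task from the given execution sequence is started as soon as its predecessors are finished and sufficient resource capacity is free, its resources are locked for the task duration, and its start time is recorded. It reuses the illustrative ConstructionTask class from the earlier sketch, assumes a feasible input, and is not DESMO-J code.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

/** Simplified, time-stepped sketch of the constraint-based scheduling cycle. */
class ConstraintScheduler {
    /** Returns the start time of each task id for the given execution sequence. */
    Map<String, Integer> schedule(List<ConstructionTask> sequence,
                                  Map<String, Integer> capacity) {
        Map<String, Integer> start = new HashMap<>();
        Map<String, Integer> finish = new HashMap<>();
        List<ConstructionTask> pending = new ArrayList<>(sequence);
        int time = 0;
        while (!pending.isEmpty()) {
            final int now = time;
            // resource units currently locked by running tasks
            Map<String, Integer> used = new HashMap<>();
            for (ConstructionTask t : sequence) {
                if (start.containsKey(t.id) && finish.get(t.id) > now) {
                    t.demand.forEach((r, u) -> used.merge(r, u, Integer::sum));
                }
            }
            Iterator<ConstructionTask> it = pending.iterator();
            while (it.hasNext()) {
                ConstructionTask t = it.next();
                boolean predecessorsDone = t.predecessors.stream()
                    .allMatch(p -> finish.containsKey(p.id) && finish.get(p.id) <= now);
                boolean resourcesFree = t.demand.entrySet().stream()
                    .allMatch(e -> used.getOrDefault(e.getKey(), 0) + e.getValue()
                                   <= capacity.getOrDefault(e.getKey(), 0));
                if (predecessorsDone && resourcesFree) {      // all constraints fulfilled
                    start.put(t.id, now);                     // record the starting time
                    finish.put(t.id, now + t.duration);       // resources stay locked until then
                    t.demand.forEach((r, u) -> used.merge(r, u, Integer::sum));
                    it.remove();
                }
            }
            time++;  // advance the clock by one time unit
        }
        return start;
    }
}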

Figure 2: Adjustment of unfeasible task sequences

LINKING SIMULATION AND OPTIMIZATION FRAMEWORK

The linking of the simulation framework with MOPACK is based on the
MOPACK problem interface. Within the scheduling problem implementation the
adjusted BIM export data is imported and the design variables, constraints, and
objective functions are defined. Furthermore, methods are provided to conduct the
evaluation of any number of potential solutions. These methods are recalled by the
optimization method for each solution and for every iteration loop.
Furthermore, the scheduling problem implementation interacts with
DESMO-J. For each task sequence generated by the optimization method a
transformation is carried out. The result of the simulation is retransformed into a
format that the optimization method can handle. It is also possible to transform the
scheduling problem into a parallel scheduling problem, which enables the
simultaneous execution of multiple simulations.
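A highly simplified coupling of these two sides is sketched below in Java, building on the ConstructionTask, SchedulingProblem, and ConstraintScheduler sketches above: a permutation of task indices generated by an optimizer is transformed into an execution sequence, the simulation is run, and its output is retransformed into an objective value (here, the total duration). The method names are placeholders and do not mirror the actual MOPACK Problem interface.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Illustrative bridge between an optimizer-generated permutation and the simulation. */
class ScheduleEvaluation {
    private final SchedulingProblem problem;
    private final ConstraintScheduler simulator = new ConstraintScheduler();

    ScheduleEvaluation(SchedulingProblem problem) {
        this.problem = problem;
    }

    /** Maps a permutation of task indices to an objective function value. */
    double evaluate(int[] permutation) {
        // transform the design variables into a simulation input: an execution sequence
        List<ConstructionTask> sequence = new ArrayList<>();
        for (int index : permutation) {
            sequence.add(problem.tasks.get(index));
        }
        // run the simulation and retransform its output into an objective value
        Map<String, Integer> start = simulator.schedule(sequence, problem.resourceCapacity);
        int totalDuration = 0;
        for (ConstructionTask t : sequence) {
            totalDuration = Math.max(totalDuration, start.get(t.id) + t.duration);
        }
        return totalDuration;
    }
}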

IMPLEMENTATION EXAMPLE

An implementation example was carried out to validate the described
optimization approach for construction scheduling problems. The MOPACK
framework provides methods to implement a construction scheduling problem based
on the Problem interface. Therefore, the implemented scheduling problem imports
rearranged BIM data exported from CAD software and uses these data to derive
objective functions, design variables, and resource constraints. Specifically, objective
functions contain the total duration and resource utilizations, while design variables
are represented by a permutation of all possible tasks. Furthermore, the scheduling
implementation makes use of the DESMO-J API to run a simulation for every given
permutation of tasks. Hence, methods are supplied to transform design variables into
required simulation input data and, in return, to transform simulation output data into
objective function values. Additionally, a rank based sequence relationship graph is
generated. Such a graph contains tasks as nodes and sequence relations as edges.
Based on this graph, the rank of each task is specified in the following manner. The
rank of a task equates with the number of predecessors of this task, i.e., a task
without any predecessors is defined as rank 0. The implementation details are shown
as an UML class-diagram in Figure 3.

Figure 3: Implementation example class diagram


In addition, an evolutionary algorithm, namely the genetic algorithm NSGA-
II (Deb et al. 2002), was implemented. As already shown in Ko and Wang (2010)
evolutionary algorithms have been successfully used for the optimization of
scheduling problems. Specifically, the NSGA-II optimization algorithm implements
the MOPACK Optimizer interface. It thus provides methods that are responsible for
the initialization of the objective variables based on the scheduling problem.
Furthermore, the initialization of the genetic operators, namely the selection, the
crossover, and the mutation operator is achieved. Additionally, methods for an
iterative optimization process are provided. Thus, in any iteration the generated
potential solutions are evaluated and the previously initialized genetic operators are
executed.
In addition, the crossover operator implements the MOPACK Operator
interface, which provides methods for the modification of a given input. The
crossover operator thus gains the potential to obtain more detailed information about
the scheduling problem, e.g., in conjunction with the well-known Adapter design
pattern (Gamma et al. 2005). The operator is thus enhanced to a knowledge based
operator that, in turn, can be used to manipulate the design variables in a more
sophisticated manner. This knowledge based operator is able to obtain information
about the scheduling problem in the form of a rank based sequence relationship
graph. The operator is implemented as an extension to the order crossover-2 (OX2)
operator (Syswerda 1991). However, while in the original OX2 tasks are selected
randomly within the second parent, we modify the selection process such that tasks
are selected based on the rank based sequence relationship graph (Figure 4). Thus,
the resulting child of the recombination process has a greater chance of being a valid
solution with regard to the precedence relations between tasks.
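The following Java sketch shows one possible form of such a rank-biased OX2 recombination: a subset of tasks is chosen from the second parent with a bias towards lower ranks (tasks with few predecessors), and these tasks are re-inserted into the first parent in the order in which they appear in the second parent. The specific bias rule used here (preferring the lower-ranked of two random candidates) is an assumption, since the text does not prescribe how the rank graph steers the selection.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;

/** Sketch of an order crossover-2 variant biased by task ranks. */
class RankBiasedOx2 {
    private final Random rnd = new Random();

    List<String> crossover(List<String> parent1, List<String> parent2,
                           Map<String, Integer> rank, int picks) {
        // 1. choose tasks from parent 2, biased towards low ranks (few predecessors);
        //    picks is assumed to be no larger than the number of distinct tasks
        Set<String> selected = new HashSet<>();
        while (selected.size() < picks) {
            String a = parent2.get(rnd.nextInt(parent2.size()));
            String b = parent2.get(rnd.nextInt(parent2.size()));
            selected.add(rank.getOrDefault(a, 0) <= rank.getOrDefault(b, 0) ? a : b);
        }
        // 2. record the order of the selected tasks as they appear in parent 2
        List<String> order = new ArrayList<>();
        for (String t : parent2) {
            if (selected.contains(t)) order.add(t);
        }
        // 3. copy parent 1, but re-insert the selected tasks in parent 2's order
        List<String> child = new ArrayList<>();
        int next = 0;
        for (String t : parent1) {
            child.add(selected.contains(t) ? order.get(next++) : t);
        }
        return child;
    }
}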
Additionally, MOPACK mechanisms are applied that enable the
transformation of an optimization problem into a distributed problem. The application
of MOPACK parallelism mechanisms will transfer the time-consuming execution of
the simulation into a grid computing environment.

Figure 4: Adjusted order crossover-2 AOX2

CONCLUSION

In this paper an optimization concept is presented that enables the determination of
efficient schedules for complex construction projects. The described optimization
concept relies on three flexible interchangeable components. The scheduling problem
component describes any Resource-Constrained Project Scheduling Problem. The
problem can thus be solved by any optimization method that fulfills the
optimization method component requirements. Furthermore, a discrete-event simulation
component based on the constraint-satisfaction approach is conducted and used for
the generation of valid schedules. By linking the discrete-event simulation with an
existing optimization framework, various metaheuristics can be applied for the
determination of efficient schedules. An implementation example is presented, in
which an evolutionary algorithm is adapted within the optimization framework to
validate the described concept.
The described flexibility of our optimization concept enables the
straightforward implementation of further metaheuristics. Future research will
therefore deal with the realization of Simulated Annealing and Particle Swarm
Optimization. By exploiting the MOPACK parallelism mechanisms the optimization
will be executed in a distributed computing environment. Thereby a significant
reduction of computing time is expected. Moreover, the flexible and compact
optimization approach is qualified for embedding in a distributed agent-based
optimization environment.
REFERENCES

Bäck, T. (1996) Evolutionary Algorithms in Theory and Practice – Evolution
Strategies, Evolutionary Programming, Genetic Algorithms, Oxford
University Press, New York.
Blazewicz, J., Ecker, K.H., Pesch, E., Schmidt, G., and Weglarz, J. (2007) Handbook
on scheduling: from theory to applications, Springer
Beißert, U., König, M., and Bargstädt, H.-J. (2007) “Constraint-based simulation of
outfitting processes in building engineering”, 24th W78 Conference.
Brucker, P., and Knust, S. (2006) Complex Scheduling, Springer, Berlin.
Deb, K., Pratap, A., Agarwal, S., and Meyarivan, T. (2002) “A Fast and Elitist
Multiobjective Genetic Algorithm: NSGA-II” IEEE Transactions on
Evolutionary Computing, Vol. 6, No. 2
Gamma, E., Helm, R., Johnson, R., and Vlissides, J. (2005) Design Patterns.
Elements of Reusable Object-Oriented Software, Addison-Wesley.
Ko, C.-H., and Wang, S.-F. (2010) “GA-based decision support systems for precast
production planning”, Automation in Construction
Lechler, T., and Page, B. (1999) “DESMO-J – An Object Oriented Discrete
Simulation Framework in Java.” In: Horton, G., Möller, D. (Eds.):
Proceedings of the 11th European Simulation Symposium, SCS, p. 46-50.
Kirkpatrick, S., Gelatt, C. D. and Vecchi, M. P., 1983. Optimization by Simulated
Annealing. Science, 220, 671-680.
Nguyen, V.V., Hartmann, D., Baitsch, M., and König, M. (2010) “A Distributed
Agent-based Approach for Robust Optimization” 2nd International
Conference on Engineering Optimization
Eberhart, R.C., Kennedy, J., and Shi, Y. (2001) Swarm Intelligence, Morgan
Kaufmann.
Syswerda, G. (1991) “Schedule Optimization Using Genetic Algorithms” Handbook
of Genetic Algorithms, Van Nostrand Reinhold, p. 332-349.
Development of 5D CAD System for Visualizing Risk Degree and
Progress Schedule for Construction Project

Leen-Seok Kang1, Hyoun-Seok Moon2, Hyeon-Seung Kim3, Gwang-Yeol Choi3
and Chang-Hak Kim4

1Professor, Gyeongsang National University, Jinju, Korea; (82) 55-753-1713; FAX (82) 55-753-1713; Lskang@gnu.kr
2Ph.D, Gyeongsang National University, Jinju, Korea; (82) 55-753-1713; FAX (82) 55-753-1713; civilcm@gnu.kr
3MS course, Gyeongsang National University, Jinju, Korea; (82) 55-753-1713; FAX (82) 55-753-1713; wjdchs2003@yahoo.co.kr, rhkdduf2004@nate.com
4Professor, Gyeongnam National University of Science and Technology, Jinju, Korea; (82) 55-753-1713; FAX (82) 55-753-1713; ch-kim@jinju.ac.kr

ABSTRACT
A 4D CAD system visualizes the schedule data for a construction project.
Generally, a 5D CAD system visualizes construction cost or resource data by
linking them to the 4D object. This study attempts to develop a 5D CAD system by linking
the 4D object for progress schedule data with risk data for visualizing the construction
risk degree of each activity. This system uses the fuzzy analysis and AHP analysis
procedures to estimate risk degree of each activity. And the system considers
construction cost, duration and dangerous condition of work site as risk factors.
The estimated risk degree of each activity is simulated with different colors by the
risk level in the 4D CAD engine developed in this study. Because the 5D CAD
system integrated with risk analysis data has novel functions compared with
current similar systems, it can be a useful tool for visualizing practical risk
data and progress schedule data.

INTRODUCTION
Risk management in construction projects is heavily dependent upon the
experience-based intuition of constructors and owners. To solve this issue, active
research is underway on risk management systems where risk analysis techniques
are applied. Kang (2010) suggested a 4D CAD engine with an improved link
method between 3D object and schedule data. Nasir (2003) suggested setting an
activity period for risk analysis and developed the Evaluating Risk in Construction–
Schedule (ERIC–S) model by deriving probability distributions of a risk
combination. This study suggests a risk analysis process that can quantify risk
factors and develops a 5D CAD system in which risk information is visually expressed
for each activity schedule. The estimated risk degree of each activity is simulated
with different colors by the risk level in the 4D CAD engine developed in this
study. This methodology will not only simplify conventional risk analysis
procedures but also provide visualized risk information to maximize the efficiency
of risk management operations.


THEORETICAL BACKGROUNDS

Quantified analysis theories for risk factors


Representative risk analysis theories for quantifying risk factors are the
analytic hierarchy process (AHP) and the fuzzy approach. The AHP approach is a
theory that determines pair-wise comparison values between factors, apart from
considering their mutual order, and converts them into one-on-one matrices to determine
the levels of importance. The theory is widely used as it is simple and
straightforward, easy to apply and universally applicable. The fuzzy approach is a
mathematical theory for objectifying subjective judgment criteria. It expresses
vague and inaccurate descriptive expressions. Feedback from actual analysis
results, however, is hard to gain here as weight on evaluation factors cannot be
taken into account and degree-of-membership functions are difficult to modify.
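As a small illustration of how AHP-style importance weights can be derived, the Java sketch below applies the row geometric mean to a pairwise comparison matrix, a common approximation of the principal eigenvector. The example judgments for time, cost, and work condition are arbitrary, and the sketch does not reproduce the C++ AHP module used in this study.

/**
 * Minimal AHP sketch: derive importance weights for evaluation factors
 * (e.g. time, cost, work condition) from a pairwise comparison matrix using the
 * row geometric mean approximation of the principal eigenvector.
 */
class AhpWeights {
    static double[] weights(double[][] comparison) {
        int n = comparison.length;
        double[] w = new double[n];
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            double product = 1.0;
            for (int j = 0; j < n; j++) {
                product *= comparison[i][j];      // pairwise judgment a_ij
            }
            w[i] = Math.pow(product, 1.0 / n);    // geometric mean of row i
            sum += w[i];
        }
        for (int i = 0; i < n; i++) {
            w[i] /= sum;                          // normalize so the weights add up to 1
        }
        return w;
    }

    public static void main(String[] args) {
        // Example judgments (assumed): time vs. cost = 2, time vs. work condition = 3,
        // cost vs. work condition = 2
        double[][] a = { {1, 2, 3}, {0.5, 1, 2}, {1.0 / 3, 0.5, 1} };
        double[] w = weights(a);
        System.out.printf("time=%.3f cost=%.3f work condition=%.3f%n", w[0], w[1], w[2]);
    }
}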

4D CAD system
The 4D CAD system realizes the progress of building construction over
time with the virtual reality (VR) technique by combining three-dimensional
drawings with the schedule data that contains temporal information. The system
continuously simulates the progress of building construction on a three-
dimensional basis by point of time on the schedule data. Benefits of the 4D CAD
simulation are to detect problems such as temporal and spatial interferences for
structures in advance and thereby reduce the construction period and costs.

CONFIGURATION METHODOLOGY FOR DEVELOPING RISK SIMULATION MODEL

Process for visualizing risk degree by each activity


This section provides the conceptual diagram of a system needed for setting
up a risk analysis and visualized risk simulation model as illustrated in figure 1.
Visualized risk simulation suggested in this conceptual diagram is comprised of
two phases: (a) risk quantification and analysis by each activity; and (b) 4D CAD-
based simulation to visualize these risks. The former suggests processes based on
the AHP and fuzzy approach, while the latter proposes 4D CAD-linked processes.
Figure 1. Conceptual diagram of visualized risk simulation system

Quantitative risk analysis process

The methodology for the risk analysis process proposes an analysis procedure to
which both the AHP and fuzzy theory are applied together so that the risk levels
of risk criteria can be identified by evaluation factors. The limitation of the
conventional fuzzy approach ―the inability to consider the weight on evaluation
factors― can be addressed by applying the AHP approach. The difficulty of
gaining feedback from actual analysis results is also overcome by enabling the
revision/modification of fuzzy membership function.
The fuzzy analysis quantifies quantitative risk factors into mathematical
values, which is done by entering the risk probability and impact as evaluation
factors for each risk factor. For this purpose, this study defines linguistic variables
applied to the fuzzy theory as very low (VL), low (L), medium (M), high (H) and
very high (VH); input range values for the risk probability and impact are set up in
six grades each. The fuzzy membership values of each factor can be revised and
modified so that wide-ranging field environments can be taken into consideration.
Risk factors are generated on the basis of the work breakdown structure
(WBS) code in the 4D CAD system through the 4D CAD Database. The analysis
period, in particular, is set selectively through the linkage between WBS and
schedule information to generate risk factors falling into the range of the given
period. For the generated risk factors, their risk probability and impact are
designated subjectively by the users. For the reasonable selection of the risk
probability and impact, they are divided into six grades. When the risk probability
and impact of risk factors are entered, the risk levels are calculated through the
fuzzy algorithm-based Equation (1).
[Equation (1): fuzzy calculation of the risk level of each risk criterion from the fuzzy numbers Rijn and membership values Pijn defined below]
i: risk criteria
j: risk evaluation factors (risk probability, risk impact) j∈{P, I}
n: linguistic variable values (n∈RV={VL, L, M, H, VH})
Rijn: fuzzy number of linguistic variable value on the levels of risk criteria i and
risk evaluation factor j
Pijn: fuzzy membership function value of linguistic variable value n on the levels
of risk criteria i and risk evaluation factor j

The risk levels are calculated by the risk probability and impact as
evaluation factors for each risk criteria. They are used in the approximation
formula, a triangular fuzzy number applied from Zadeh's extension principle, as in
Equation (2) to calculate risk evaluation values with different levels of importance.

[Equation (2): triangular fuzzy number approximation based on Zadeh's extension principle, used to compute the weighted risk evaluation values]

In order to obtain risk evaluation values by importance, the generalized
mean value (GMV) is utilized, as shown in Equation (3), to determine evaluation
priorities for each risk criteria.
[Equation (3): generalized mean value (GMV) formula used to rank the risk criteria]

From the fuzzy analysis results suggested above, risk priorities and levels
for each risk factor are drawn. This information helps identify risk criteria
requiring focused management. Risk analysis values for each evaluation factor
can also be checked on the basis of the weight on evaluation factors calculated in
the AHP analysis model. In other words, fuzzy input information by risk factor is
stored into the database, so risk analysis considering seven evaluation factors in
total can be performed by selecting evaluation factors for the same fuzzy input
values (i.e. time, cost and work condition) either entirely or redundantly.
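The Java sketch below gives a generic, illustrative fuzzy evaluation in this spirit: crisp probability and impact inputs are fuzzified over the five linguistic grades with triangular membership functions and combined into a weighted score using AHP-style weights. The grade centers, membership width, and aggregation rule are assumptions chosen for illustration; the sketch is not a reconstruction of Equations (1) to (3).

/**
 * Generic illustration of a fuzzy risk evaluation over the grades VL, L, M, H, VH.
 * Grade centers, membership width, and the weighted aggregation are assumptions.
 */
class FuzzyRisk {
    static final String[] GRADES = {"VL", "L", "M", "H", "VH"};
    static final double[] CENTERS = {0.0, 0.25, 0.5, 0.75, 1.0}; // assumed grade centers

    /** Triangular membership of x in the grade with the given center (width 0.25). */
    static double membership(double x, double center) {
        return Math.max(0.0, 1.0 - Math.abs(x - center) / 0.25);
    }

    /** Defuzzified value of a crisp input: centroid of the activated grades. */
    static double defuzzify(double x) {
        double num = 0.0, den = 0.0;
        for (double c : CENTERS) {
            double mu = membership(x, c);
            num += mu * c;
            den += mu;
        }
        return den == 0 ? x : num / den;
    }

    /** Weighted combination of probability and impact (weights e.g. from AHP). */
    static String riskGrade(double probability, double impact, double wP, double wI) {
        double score = (wP * defuzzify(probability) + wI * defuzzify(impact)) / (wP + wI);
        int best = 0;
        for (int i = 1; i < CENTERS.length; i++) {
            if (membership(score, CENTERS[i]) > membership(score, CENTERS[best])) best = i;
        }
        return GRADES[best];
    }

    public static void main(String[] args) {
        System.out.println(riskGrade(0.8, 0.6, 0.6, 0.4)); // prints "H" for this input
    }
}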

Visualized simulation for risk factors

Mathematical information on risk criteria obtained from quantitative
analysis is visually simulated to enable efficient information delivery.
Accordingly, this study expresses mathematical information in different colors
and links this with 4D CAD for visual simulation. Figure 2 represents
configuration procedures for the visualization simulation of risk factors. First, the
3D, 4D models, WBS and schedules that are required for 4D CAD realization are
developed. They are composed of similar codes to build an efficient linkage
system with risk factors. Mathematical information on risk factors is expressed as
blue (VL), green (L), yellow (M), orange (H) and red (VH) depending on their
calculated risk level.
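A trivial Java sketch of this color assignment is shown below; the RGB values are assumptions, since only the color names are specified.

import java.util.Map;

/** Maps the five linguistic risk grades to the display colors used in the 4D view. */
class RiskColors {
    // RGB values are assumptions; only the color names are given in the text.
    static final Map<String, int[]> COLOR = Map.of(
        "VL", new int[] {0, 0, 255},     // blue
        "L",  new int[] {0, 128, 0},     // green
        "M",  new int[] {255, 255, 0},   // yellow
        "H",  new int[] {255, 165, 0},   // orange
        "VH", new int[] {255, 0, 0});    // red

    static int[] colorFor(String grade) {
        return COLOR.getOrDefault(grade, new int[] {128, 128, 128}); // grey if unknown
    }
}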
Figure 2. Configuration methodology for visualized simulation

DEVELOPMENT OF RISK VISUALIZATION SYSTEM

Configuration of system functions


The processes in this system are structured around the methodology
suggested in this study, and Visual Basic 6.0 has been utilized for system
development. Also, C++-based modules have been used in AHP analysis and fuzzy
analysis to develop the risk visualization system. The main features of the risk
management and visualization system developed in this study are illustrated in
figure 3. Major system functions expressed in figure 3 include: AHP analysis
module for risk quantification; fuzzy analysis module; priority analysis screen
based on risk analysis results; and 4D CAD simulation system module for risk
information visualization.

Quantitative analysis of risk factors

Figure 3. Screenshots of main function modules for risk visualization system


Figure 4 represents the procedures for quantitatively analyzing risk factors
in this system. The procedures are basically comprised of AHP and fuzzy analysis
procedures, which are carried out phase by phase for the convenience of users. In
risk factor analysis, the range of to-be-analyzed risk factors is designated to
identify risks falling into the given range. Through AHP analysis, the weight on
evaluation factors such as time, cost and work condition is also calculated. Single
or multiple evaluation factors may be selected depending on the project
environment. Once the procedures for AHP analysis are completed, each of the
risk factors identified here are quantified in line with fuzzy analysis procedures.
Fuzzy analysis sets the importance values of the risk probability and impact within
the range from very low (VL) through very high (VH). When the risk probability
and impact values are successfully entered for each risk criteria, the five-graded
risk levels and comprehensive risk priorities are obtained for each of the risk
criteria.

Figure 4. Quantitative analysis of risk factors in the system

Result analysis and simulation implementation


Once the quantitative analysis procedures by risk factor are completed,
result values are produced as in figure 5. The result values are expressed in
descending order of risks; each risk criteria is classified in five grades and marked
in different colors―blue, green, yellow, orange and red―to visually express risk
level. When the analysis results in figure 5 are implemented in 4D simulation, all
the 3D models linked to each risk criteria are expressed in different colors by their
individual risk level. Therefore, the risk levels of structures in all projects
implemented in 3D models can be identified by colors. This information is also
realized in 4D simulation so that the risk levels of processes changing by schedule
can be identified. Figure 5 visually expresses the risk levels of processes being
undertaken from August 2010 through January 2012, helping identify processes
underway on a daily basis and determine the risk levels of each process to come
up with measures to counter potential problems in advance.
Figure 5. Analysis results and visualized simulation

Figure 5 visualizes risk information by each activity. Depending on the
fuzzy analysis-based quantification results of risk values, the activities are
expressed in five different colors, with the greatest risks marked with red and
those with the smallest risks with blue. The visualization of these risk analysis
values enables differentiated construction management by risk levels. Therefore,
mathematical risk information is visualized, and the efficiency of risk
management operations for practitioners can be further enhanced.

CONCLUSION

This study has proposed a methodology for building a 5D CAD-based risk
visualization system and developed the system for efficient risk management. In
this study, the possibilities of delay in the construction period, greater-than-
expected spending in construction, and accidents due to operational risks have
been analyzed as critical factors. The risk visualization system developed in this
study is expected to be widely applicable as an efficient decision-making tool as it
does not only quantify risk factors reasonably but also visualizes conventional
mathematical modes of expression.
COMPUTING IN CIVIL ENGINEERING 697

ACKNOWLEDGEMENTS

This study has been made under the sponsorship of the construction
technology innovation project (project no: 06 E01). We would like to thank the
Ministry of Land, Transport and Maritime Affairs and the Korea Institute of
Construction and Transportation Evaluation for making this study possible.

REFERENCES

Kang, L.S., Moon, H.S., Park, S.Y., Kim, C.H., and Lee, T.S. (2010). “Improved Link
System between Schedule Data and 3D Object in 4D CAD System by Using
WBS Code”, KSCE Journal of Civil Engineering, 14(6), 803-814.
Nasir, D., Mccabe, B., and Hartono, L. (2003). “Evaluating Risk in Construction–
Schedule Model (ERIC–S): Construction Schedule Risk Model.” Journal of
Construction Engineering and Management, 129(5), 518 – 527.
Carr, V., and Tah, J. H. M. (2001). “A fuzzy approach to construction project risk
assessment and analysis: construction project risk management system.”
Advances in Engineering Software, 32(10), 847 – 857.
Zeng, J., An, M., and Smith, N. J. (2007). “Application of a fuzzy based decision
making methodology to construction project risk assessment.” International
Journal of Project Management, 25(6), 589 – 600.
Integration Of Safety In Design Through The Use Of Building Information
Modeling
Jia Qi1, R. R. A. Issa2, J. Hinze3 and S. Olbina4

Rinker School of Building Construction, University of Florida, Gainesville, FL 32611


1airsimonqi@ufl.edu, 2raymond-issa@ufl.edu, 3hinze@ufl.edu, 4solbina@ufl.edu

ABSTRACT
The construction industry has incurred the most fatalities of any industry in
the private sector in recent years. This is partly because designers
usually lack design for construction safety knowledge, which results in many safety
hazards manifesting themselves at any given stage during the construction process. In
this research study, the researchers devised a design for construction worker safety
tool which makes designing for safety suggestions available to designers and
constructors in an efficient way, which will effectively alleviate the potential hazards
on construction sites.
This research study looks at formalizing the collected design for construction
worker safety suggestions. A dictionary and a constraint model are then developed to store
these formalized suggestions. These can then be used by a model checking software
package to conduct designing for construction worker safety checking during the
design process. These tools make it possible for architects to optimize the drawings to
ensure minimization of safety hazards during construction. In the meantime,
constructors can take protective procedures to eliminate the construction site hazards
from the beginning of the project. Therefore in both the design and construction
phases, significant improvements to construction worker safety could be realized by
using this designing for safety tool.

INTRODUCTION
The construction industry has incurred the most worker fatalities of any
industry in the private sector in recent years. This is partly because designers cannot
access design for construction safety knowledge, which results in many safety
hazards being built into the project models/drawings. To improve the current situation,
this research study identifies the possible influences of Building Information
Modeling (BIM) technology on construction worker safety. After identifying the
extent of the positive impact of BIM technology on construction worker safety
through extensive literature review, the researchers describe the development of a
design for construction safety tool which can automatically check three-dimensional
(3-D) building models and make the designing for construction worker safety
suggestions available to the designers and constructors in an efficient way.
Using a software tool to help designers implement the design for construction
safety knowledge is not a new idea. In the 1990s, after recognizing the lack of
designer involvement in construction worker safety due to their minimal education
and experience in addressing safety on the construction site, the Construction Industry
Institute (CII) funded a research project to develop a software tool to assist designers
in recognizing project-specific hazards and in providing them with design suggestions
for consideration in the project design (Gambatese 1997). The design for safety
suggestions were accumulated through research efforts that included input from
designers, traditional construction contractors, and design-build firms. Then these
suggestions were incorporated into the “Design For Construction Safety ToolBox” by
Gambatese and Hinze (1997).
With the emergence of new information technology, the Construction Industry
Institute (CII) expressed a need for a software tool which would replace the software
tool created in 1996. To give design professionals the ability to more quickly and
easily access design for construction safety suggestions, the “Design for Construction
Safety Toolbox,” second edition, was developed by Marini and Hinze (2007), through
the support of the CII. While it is not commercially available at this time, the second
edition of the toolbox is a web-based application, although it may also be operated via a
compact disc. The database of this application consisted of a simple, external, text-
based (XML) file designed to easily accommodate the addition of future design for
construction safety suggestions (Marini 2007). Besides the changes made to the
database, other elements such as the application design, application navigation and
software tour functions of the second edition also include substantial improvements to
make the application more deliverable and easy-to-use. This application is expected
to be commercialized in the near future.
Other researchers also have developed a few automated critiquing systems to
support designers in making design decisions or to check IFC format building
models. The U.K. Health and Safety Executive (HSE) was concerned that safety
should be as much a key aspect in design as it is during construction and operation. A
prototype was developed which was primarily concerned with the hazards while
working at height, and accidents due to falling objects. This prototype used software
that was developed by Singapore CORENET as the design checking mechanism due
to the building regulations compliance checking is analogous to the checking of
designs against health and safety risks (HSE 2003). An object-based CAD system
exports design data in the IFC format to an EDM database provided by EPM
Technology. Design data are tested against health and safety requirements that are
graded according to levels of risk. The checking results are reported through graphic
and rule-browsing software (HSE 2003). Another endeavor is the SMARTcodes
project. Since 2004, the International Code Council (ICC) started to develop object-
based technology to represent their codes and to test submitted construction
documents. The key elements are a model checking application and SMARTcodes.
An online version of the Solibri software application or the AEC3 XABIO web-based
test-bed could be adopted in the model checking application. A protocol and software
program were used to create tagged representations of building codes that use a
tagging schema that reflects the logic and requirements of the codes from the text of
the codes (Conover 2010).
ACCUMULATION OF DESIGN SUGGESTIONS


Previous studies (Gambatese 1997) have already developed and compiled
numerous designing for safety suggestions. These past research studies found that fall
accidents account for a large portion of construction injuries and fatalities. In this
study, the construction safety checking system is mainly targeted at eliminating
potential fall hazards. The designing for construction safety best practices were
reviewed to identify those provisions that deal with fall protection. Currently,
more than thirty provisions related to fall protection have been singled out. These
suggestions are classified into two categories. The first category of suggestions is
constrained either by precise parameters or by certain materials, such as “Design
window sills to be 42 inches minimum above the floor level. Window sills at this
height will act as guardrails during construction.” The second category of suggestions
is currently uncheckable. A concept can be uncheckable because a BIM may never
have the information, or because the information will exist only on site in the actual
building, or in the mind of an inspector. For example, one best practice is “Design
appropriate and permanent fall protection systems for roofs to be used for
construction and maintenance purposes. Consider permanent anchorage points,
lifeline attachments, and/or holes in perimeter for guardrail attachment.” The
Occupational Health and Safety Administration (OSHA) standards also
correspondingly require that (Appendix C to Subpart M Fall Protection):

“(h) ‘Tie-off considerations.’ (1) One of the most important aspects of


personal fall protection systems is fully planning the system before it is
put into use. Probably the most overlooked component is planning for
suitable anchorage points. Such planning should ideally be done before
the structure or building is constructed so that anchorage points can be
incorporated during construction for use later for window cleaning or
other building maintenance. If properly planned, these anchorage points
may be used during construction, as well as afterwards.”

After the designing for construction safety suggestions are classified, two
major components of the Model Checking System need to be developed: the
Dictionary and the Constraint Model.

ARCHITECTURE AND FUNCTIONALITY OF THE SYSTEM


After the collected suggestions have been formalized, the next step is to
develop the proposed Construction Safety Checking System. The purpose of this
system is to automatically check imported drawings which are in IFC format to alert
designers to opportunities for improving construction safety. The system should
provide design for safety knowledge quickly, easily and economically.
Figure 1 shows the architecture of the Construction Safety Checking tool. The x-
axis represents the project process from the beginning of the design to delivering the
documents to constructors. This begins on the left with the design development
period when the designers draft the initial drawings. It then evolves into the design
review phase and the agency permitting phase. This culminates in the construction

period. The design process is an iterative one. Users could submit construction
documents and check the design for non-compliance by using the Construction Safety
Checking software tool. After the report identifies the problematic building
components, the designer(s) can revise their drawings by returning to architectural
design tools. The core of the entire process is the model checking software which is
supported by a dictionary and a design for construction safety rule set. After the
design for construction safety knowledge has been incorporated into construction
documents, shop drawings can be delivered to constructors for further construction
work.

Figure 1. Architecture of Construction Safety Checking Software Tool

The Construction Safety Checking software tool is based on a Model


Checking Software. An online version of the Solibri software application or the
AEC3 XABIO web-based test-bed can be adopted as the model checking application.
AEC3 XABIO uses EPM and Octaga technology (Nisbet 2010). AEC3 XABIO can
check a whole regulation or an individual clause and then generate a full explanation.
It is entirely web-based: the Apache Tomcat web server is used to harness the
EXPRESS Data Manager database, and the Octaga 3D viewer to highlight the
building elements at issue (AEC3 2010). This application is designed to find potential
problems, conflicts, or design code violations in a building model.
The other two important parts of the Construction Safety Checking tool are
labeled as the Constraint Model and the Dictionary. After the designing for construction
safety suggestions are classified, an MS Excel spreadsheet is used to collect ‘terms’
and ‘properties’ from these suggestions. This also helps to keep the concepts from
expanding in an uncontrolled fashion. Then the Dictionary is developed, which
comprises the terms, objects, and properties critical for communication between the
Model Checking Software, BIM authoring tools, and the Constraint Model. That is because
the same property can occur within the safety suggestions in many places. The

Dictionary can make sure that the property is always assigned the same meaning and
unit of measurement.
The Constraint Model, also known as the rule sets, is the electronic form of the design
for safety suggestions. It takes three steps to transfer the original paper-based design
for safety suggestions into the Constraint Model. The first step is to transfer the original
design for construction safety suggestions into computer-readable baseline electronic
suggestions in XML format. Then the logic between different ‘terms’ and
‘term properties’ in each suggestion is tagged by marking them with different colors.
Finally, the different logic is encoded, which transforms the baseline electronic
suggestions into the Safety Constraint Model/Safety Rule Sets.
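As an illustration of this three-step transfer, the sketch below (written in Python rather than the toolbox's actual XML schema) shows how one parameterized fall-protection suggestion might be encoded so that it can be compared against component properties. The property name 'SillHeight', the unit handling, and the report structure are assumptions made only for this example.

# Minimal sketch of how one fall-protection suggestion could be encoded as a
# machine-readable constraint. Property names ("SillHeight") and the unit
# convention are illustrative assumptions, not the actual toolbox schema.

INCHES = 1.0  # all lengths handled in inches in this sketch

# Dictionary: shared meaning and unit for each term used by the rule sets
DICTIONARY = {
    "WindowSill.SillHeight": {"description": "Height of window sill above floor",
                              "unit": "inch"},
}

# Baseline electronic suggestion together with its encoded logic (step 3)
RULE_SILL_HEIGHT = {
    "id": "FP-01",
    "text": ("Design window sills to be 42 inches minimum above the floor level. "
             "Window sills at this height will act as guardrails during construction."),
    "applies_to": "WindowSill",
    "property": "SillHeight",
    "operator": ">=",
    "limit": 42 * INCHES,
}

def check(component, rule):
    """Return None if the component complies, otherwise a non-compliance report."""
    value = component.get(rule["property"])
    if value is None:
        return {"component": component["id"], "rule": rule["id"],
                "result": "uncheckable: property missing from the model"}
    compliant = value >= rule["limit"] if rule["operator"] == ">=" else value <= rule["limit"]
    if compliant:
        return None
    return {"component": component["id"], "rule": rule["id"],
            "found": value, "required": f"{rule['operator']} {rule['limit']}",
            "suggestion": rule["text"]}

if __name__ == "__main__":
    sills = [{"id": "Window-101", "SillHeight": 36.0},
             {"id": "Window-102", "SillHeight": 42.0}]
    for report in filter(None, (check(s, RULE_SILL_HEIGHT) for s in sills)):
        print(report)

Uncheckable suggestions, by contrast, would simply be returned as text for the designer to review, as discussed above.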
After the architecture of the tool has been determined, the next issue is to
define the functionalities of the tool. The safety checking system is expected to have
two main functions. One function consists of checking the drawings against the
design for construction safety rule set. The tool should also be able to provide safety
information related to certain building components. This is based on both the
characteristics of the design for construction safety knowledge and the reasoning
process of the safety checking tool. One of the differences between building codes
and design for construction safety knowledge is that a large number of design
suggestions are in the textual form without any parametric information. Many of
these suggestions are very difficult to encode into rule sets that can be compared with
the properties of building components and be used to restrict non-compliance.
Consequently, it is better to keep them in their original form and show them to the
user in text, while most of the building codes are connected to attributes that can be
physically measured. Second, building code checking systems usually provide
detailed information only after the checking task has been completed, while the design for
safety tool is expected to provide suggestions during the design process. These two
points are very similar to delivering constructability knowledge to designers during
the preliminary design phase. Taking the above two points into consideration, an
appropriate way to deliver safety knowledge to designers must be found.
The process of checking a construction drawing includes the following steps.
First, the user loads the design into the rule checker. Then the 3D view can be shown
on the right hand side of the safety checking tool. The navigation functions usually
include Zoom, Spin and Walkthrough. On the left hand side there are checkboxes
which are used to select objects and rule sets. The user could get detailed properties
of any object by selecting an object tab. The user also can access all design for
construction safety suggestions by selecting them from the rule sets. A detailed
explanation of every suggested design provision will be provided and some graphs
will also be given to illustrate complex issues. Next, the user can select the rules that
will be used to check against specific objects. After running the checking function,
two sets of results will be produced. One is a list of all non-compliance issues
identified in the drawings, along with suggestions about how to eliminate or mitigate
these issues. The user could print the report out. Another set of results will be shown
on the right hand side in the form of a 3D view. Red circles will show all the
components which violate certain rule sets. After getting the report from the model
checker, the user can change drawings in the architectural modeling tools or keep the

original design ideas if other requirements need to be met. Designers will be advised
to keep a record of their decisions for future use.
Next a case study of how to use the Safety Checking tool to check a building
model is discussed. The user imports the sample model into the Model Checking
Software to check whether the slope of the roof meets the requirement. The following
requirements need to be met. “1. Design the parapet to be 42 inches tall. A parapet of
this height will provide immediate guardrail protection and eliminate the need to
construct a guardrail during construction or for future roof maintenance. 2. Minimize
the roof pitch to reduce the chance of workers slipping off the roof.”
After loading the Constraint Model and clicking the ‘Navigation’ button, the
system should generate a 3D view of the building model. As shown in Figure 2, the
roof of the building model is sloped steeply enough that it may not meet the
requirement. The pitch of the roof can then be checked. After running the
Model Checking Software, the tool will show the results as shown in Figure 3.
The detailed description also demonstrates that the pitch of the subject roof
does not meet the requirement. According to the OSHA standard, a low-slope roof means a
roof having a slope less than or equal to 4” in 12”, so the following requirement needs to
be met: “Minimize the roof pitch to reduce the chance of workers slipping off the
roof.” The project participants need to consider either revising the building model or
installing fall protection on the job site. Suppose that, after negotiation among the
Design-Build team members, the designers find that the pitch of the roof exceeds 4” in 12”;
they can then go back and revise the building model. As shown in Figure 4, the pitch of
the roof meets the safety requirement after changing the pitch of the roof in the BIM
authoring software.
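A minimal sketch of the kind of check run in this case study is shown below. It assumes the roof pitch is available as a rise per 12 in. of run, which is a simplification of how the Model Checking Software actually queries the IFC geometry; the roof names and pitch values are illustrative only.

# Sketch of the roof-pitch check described above. The OSHA low-slope threshold
# (slope <= 4 in 12) is taken from the text; how the pitch is read from the
# building model is an assumption, since the real tool queries IFC geometry.

LOW_SLOPE_LIMIT = 4.0 / 12.0  # rise over run

def roof_pitch_ok(rise_in, run_in=12.0):
    """True if the roof qualifies as low-slope under the 4-in-12 criterion."""
    return (rise_in / run_in) <= LOW_SLOPE_LIMIT

roofs = {"Roof-A (original design)": 6.0, "Roof-A (revised design)": 3.0}
for name, rise in roofs.items():
    verdict = "meets" if roof_pitch_ok(rise) else "violates"
    print(f"{name}: pitch {rise:.0f} in 12 {verdict} the low-slope suggestion")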

DELIVERY METHOD INFLUENCE ON SAFETY PERFORMANCE


Figure 2. Sloped Building Roof          Figure 3. Checking Result

Figure 4. Change in the Roof Parameters

Designers often do not have the expertise to make the design drawings safe
enough for construction workers to complete the construction work in a safe manner,
due to the inadequate interoperability between the software used by different project
participants. Interoperability identifies the need to pass data between applications, and
for multiple applications to jointly contribute to the work at hand. Research studies
show that the lack of interoperability can cause tremendous inefficiencies and waste
in the construction industry (Gallaher et al. 2004).
The type of project delivery method will impact the extent to which
construction worker safety can be addressed in the design. The forms of project
delivery alter the roles played by the different parties and the allocation of their
responsibilities. In the most prevalent delivery method of Design-Bid-Build, the
designer develops a design based on the owner’s requirements, and then a constructor
is selected to build it. With this procedure, the project is designed with little expertise
from the constructor who actually constructs the project. As a result, many
constructability and safety issues are not considered until the construction phase.
Furthermore, governments often dictate that “open bidding” must be used in
government construction projects, so substantive early involvement of the actual
constructor is prohibited. Alternative project delivery methods can be used to access
the constructor’s knowledge to find safety hazards and to facilitate the
implementation of design modifications. For example, Toole (2007) confirms that
both the fee structure and model contract terms of a design-build project could induce
design engineers to consider construction safety during the design phase.
The Design-Build (DB) or Integrated Project Delivery (IPD) project delivery
method can be introduced to solve the current problem. DB and IPD allow
constructors to contribute their expertise in construction techniques early in the
design process resulting in improved project quality and financial performance during
the construction phase. Therefore the designer could benefit from the early
contribution of the constructors’ expertise during the design phase. Designers can
fully understand the ramifications of their decisions at the time the decisions are
made. The close collaboration eliminates a great deal of waste in the design, and
allows data sharing directly between the design and construction team, thereby
eliminating a large barrier to increased productivity in construction. DB and IPD also
leverage early contributions of knowledge and expertise through the utilization of
new technologies. The DB and IPD processes unlock the power of BIM, and the full
potential benefits of DB/IPD and BIM can be achieved only when they are
used together.

SUMMARY
A design for construction worker safety software tool is developed. This tool
can automatically check for fall hazards in the building information models and
provide design alternatives to users. It can be used by the architects/engineers during
the design process or by the constructors before conducting the construction work.
This tool consists of a ‘Model Checking Software’ and the ‘Constraint
Model/Rule Sets’. The model checking software is an object-based rule engine such
as the Express Data Manager (EDM), which can conduct the automatic design checking
process. The rule sets are electronic, computer-readable construction safety
suggestions. The user loads the building model into the design for construction safety
tool. The user can become familiar with the building model through 3D navigation,
which includes functions such as zoom, spin, and walkthrough. Then, the user selects
the specific rule sets that will be used to check against the subject building model.
After running the model checking tool, two sets of results will be produced. One is a
list of all non-compliances identified in the drawings, along with detailed suggestions
about how to eliminate or mitigate these hazards. Another set of results will be shown
in the 3D view. The building model will be marked with different colored circles
which show all the building objects violating certain design for construction safety
rules. After getting the report from the model checker, the user can either change the
drawings or keep the original design ideas if other requirements need to be met. At
the same time, the change in delivery methods provides project participants with new
opportunities to succeed.

REFERENCES
AEC 3. (2010). “Test Drive On-line Code Compliance Checking.” <
http://www.aec3.com/1/1_2007-02-xabio.htm> (December 1, 2010)
Conover, D. (2010). “Method and apparatus for automatically determining
compliance with building regulations.” <
http://www.faqs.org/patents/app/20090125283> (December 1, 2010).
Gallaher, M., et al. (2004) “Cost analysis of inadequate interoperability in the U.S.
capital facilities industry.” National Institute of Standards and Technology, U.S.
Department of Commerce, Gaithersburg, Md.
Gambatese, A. (1996). “Addressing construction worker safety in the project design.”
Ph.D. Dissertation, University of Washington, Seattle, WA.
Gambatese, J., Hinze, J. and Haas, C. (1997). “Tool to design for construction worker
safety.” Journal of Architectural Engineering, ASCE 3 (1), 32-41.
Health and Safety Executive (HSE). (2003). The development of a knowledge based
system to deliver health and safety information to designers in the construction
industry, HSE Books, Sudbury.
Marini, J. (2007). “Design for construction worker safety: a software tool for
designers.” MSBC thesis, Gainesville, University of Florida.
Nisbet, N. (2010). “Projects.” &lt;http://www.aec3.com/5/index5.htm&gt; (December 1, 2010).
Toole, T. M., (2007). “Design engineers’ responses to safety situations.” Journal of
Professional Issues in Engineering Education and Practice, 133(2), 126-131.
A Study of Sight Area Rate Analysis Algorithm on Theater Design

Yeonhee Kim1 and Ghang Lee2

1 Graduate Research Assistant, Department of Architectural Engineering, Yonsei
University, Korea, 120-749; PH (822) 2123 7833; email: yeony8@gmail.com
2 Corresponding Author, Associate Professor, Ph.D., Department of Architectural
Engineering, Yonsei University, Korea, 120-749; PH (822) 2123 7833; email:
glee@yonsei.ac.kr

ABSTRACT

This paper proposes a new quantitative sight area rate analysis algorithm based on
the “sight area rate” of a stage from the audience seats in the theater. The current
sightline analysis checks whether a sightline from a seat is blocked by front-row
seats from a cross-sectional and plane view at the center of a theater. Although this
method is a commonly accepted practice, it is not uncommon to find people who
have their view blocked by the front-row seats in a theater. The newly proposed
algorithm analyzes and quantifies the actual view area from each seat. The sight area
rate is the actual sight area divided by the total unblocked sight area (or screen area)
from each seat. The proposed algorithm provides quantitative results which make it
easier to design a theater. Since the proposed algorithm can derive the sight area at the
early design stage of a theater utilizing a set of plan and cross-section drawings, it can
be applied to analyze the view of the audience even though a 3D BIM model is not fully
developed.

Keywords: sightline, theater, sight area rate, analysis, quantitative

INTRODUCTION

A sightline is a ‘line of sight’ between the viewpoint of the stage and the audience in a theater
(Burris-Meyer and Cole 1964; DCMS 2008; Ham 1987; Izenour 1996; John and
Sheard 2000). The viewpoint is located at the edge of the stage and is the lowest and closest
point that every audience member can see (shown in Figure 2). Existing theater design


manuals (Burris-Meyer and Cole 1964; DCMS 2008; Ham 1987; Izenour 1996; John
and Sheard 2000) suggest the sightline analysis method, which only examines
whether the sightline from a seat is blocked by front-row seats through cross-
sectional and plane view.
To overcome the limitation of the existing method, 3D modeling tools have been
widely used to check whether sightlines are secured through a 3D BIM model. Although this
method presents results visually, it is hard to check the sightlines of every seat at the
same time. Since a 3D BIM model is modified frequently at the early design
stage, the sightlines must also be re-analyzed whenever the model is modified.
This paper suggests a new sightline analysis algorithm based on the ‘sight area rate’
index. The proposed algorithm utilizes the coordinates of a seat and automatically
calculates the visible screen area of every seat, from which the sight area is derived. This
algorithm can be adopted at an early design stage when the 3D BIM model is not fully
developed.
This paper proposes the sight area rate analysis algorithm of the theater based on
cross sectional and plane drawings. First, the limitations of existing sightline analysis
methods will be briefly described and then a new analysis algorithm based on sight
area will be proposed.

PREVIOUS METHODS

Since sightline affects the choice of stage type and the auditorium’s width and
depth (Burris-Meyer and Cole 1964), sightline should be analyzed when the theater
is designed. Sightline is categorized into two types: vertical sightline and horizontal
sightline. A vertical sightline is “the angular path of vision in the vertical plane over
or under impediments, if any, between a sight point and the performance area”
(Izenour 1996, p.4). When a vertical sightline is analyzed, spectators in the rows in front
of the considered seat, as well as building elements, can be obstacles to the sightline of that
seat. A vertical sightline is an important factor in deciding the slope of the auditorium
since steep slopes ensure the vertical sightline. A horizontal sightline is “the angle of
vision in the horizontal plane between or around intervening obstructions” (Izenour
1996, p.4), and is affected by width of the auditorium (Ham 1987; Izenour 1996).
There are two types of sightline analysis methods: sightline analysis through cross-
sectional and plane drawings, and sightline analysis through 3D modeling tools. The
former method only checks whether obstacles exist on the sightline path. The latter
shows the visible area of a considered seat using a camera view function based on 3D
modeling. This method has limitations in that the visible area of each considered seat
must be checked manually, and analyzing every seat in the auditorium takes too much
time. This analysis method can easily be adapted to a fully developed 3D
model which contains information on the type and angle of the seats. Since it is
difficult to avoid modifying 3D models at the early design stages, there is a limit to
the application of this method.

There is a commercial sightline analysis program called ‘Extreme Sightlines’
which analyzes sightlines automatically based on the existing analysis method (FDA).
This program suggests theater design alternatives based on a sightline analysis which
shows the visible area of a considered seat using the camera view function of a 3D
modeling tool. The limitation of this program is that it can only analyze sightlines
after the 3D model has been fully developed.
This paper suggests a sight area rate analysis algorithm which can be conducted
without a fully developed 3D model. The proposed algorithm calculates the sight
area rate of considered seats after checking for the existence of obstacles in the
sightline path through cross-sectional and plane drawings. It can be used when a 3D
model is not fully developed, and it automatically derives the sight area of every seat in a
short time.

Sight Area Definition

This paper proposes the notion of ‘sight area’: the visible area of the screen from
a considered seat. The existing methods analyze the view of the audience in a theater
based on ‘sightline’, whereas the proposed analysis method focuses on the sight area to
secure the view of the audience rather than the sightline. Figure 1 illustrates the notion of
sight area, with the gray area indicating the sight area of the considered seat. The
value expressed as a percentage in Figure 1 indicates the actual visible screen area
rate of a considered seat compared to the total screen area.

Figure 1 The notion of ‘sight area’

The sight area rate of a considered seat can be calculated by the following equation
[1]. The sight area rate is the actual visible sight area (or screen area) divided by the
total unblocked sight area (or screen area) from a seat.

Sight Area Rate = (Visible Screen Area / Total Screen Area) × 100     [1]

Sight Area Rate Analysis using the Coordinates of a Seat

To analyze the sight area rate of a theater, the sightline of every seat in the
theater should be analyzed. After analyzing the sightline of every seat, the sight area
of each considered seat can be obtained from the proposed algorithm, which consists of
two parts: obtaining the vertical distance from the vertical sightline analysis and
obtaining horizontal distance from the horizontal sightline analysis. Since this paper
applies the proposed algorithm to a 2D based theater design, X, Y, and Z coordinates
extracted from plane and cross-sectional drawings are critical to calculating the sight
area of every seat in a theater.

Obtaining the Vertical Distance from Vertical Sightline Analysis

To obtain the vertical distance of the visible area of a considered seat, Z coordinates
from the eye point of the considered audience should be identified. First, obstacles in
the vertical sightline path which connects the eye point of the considered seat and the
viewpoint of the imaginary screen should be identified. If rows in front of a
considered seat block the vertical sightline path, the critical line which connects the
eye point of the considered seat and the highest head point of the audience in front of
that seat should be identified to obtain the vertical distance (shown in Figure 2). The
vertical distance is defined as the vertical extent of the screen that is unblocked
from the considered seat. Figure 2 illustrates the vertical distance from the critical
point to the highest point of the screen.
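This computation can be sketched in a few lines, assuming a simple 2D section with x measured horizontally from the eye point toward the screen and z measured vertically; the single-obstruction assumption, the variable names, and the example dimensions are illustrative only.

# Sketch of the vertical-distance computation described above, for a single
# critical obstruction lying between the eye point and the screen plane.

def vertical_visible_distance(eye, head, screen_x, screen_bottom, screen_top):
    """Length of the screen's vertical span visible above the critical line.

    eye, head: (x, z) points; screen_x: horizontal position of the screen plane.
    Assumes the obstructing head lies strictly between the eye and the screen.
    """
    ex, ez = eye
    hx, hz = head
    # Critical line through the eye point and the obstructing head, extended to the screen
    z_cut = ez + (hz - ez) * (screen_x - ex) / (hx - ex)
    visible_from = max(z_cut, screen_bottom)
    return max(0.0, screen_top - visible_from)

# Example: eye 1.1 m above the floor, front-row head 1.2 m high and 0.9 m ahead,
# screen plane 10 m away spanning 1.0 m to 6.0 m in height.
print(vertical_visible_distance((0.0, 1.1), (0.9, 1.2), 10.0, 1.0, 6.0))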

Obtaining the Horizontal Distance from Horizontal Sightline Analysis

The horizontal distance is defined as the horizontal extent of the screen that is
unblocked by the front-row audience from a considered seat. The definition of the
horizontal distance can be derived from that of the horizontal sightline. In plan view,
the horizontal sightlines are the tangent lines to the heads of the front-row audience of the
considered seat which pass through the eye point of the considered seat (shown in Figure 3).
The horizontal distance of the considered seat is determined by the intersection
points of the screen and the horizontal sightlines.
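A corresponding sketch for the plan view is given below. It models each front-row head as a circle, follows the tangent-line definition above, and reports the unblocked width of the screen; the head radius, the coordinate setup, and the assumption that shadowed intervals from different heads do not overlap are simplifications made for this example.

import math

def blocked_screen_interval(eye, head_center, head_radius, screen_y):
    """X-interval on the screen line (y = screen_y) shadowed by one head, in plan view.

    The two horizontal sightlines are the tangents from the eye point to the head,
    modelled as a circle; the blocked interval lies between their intersections
    with the screen line.
    """
    ex, ey = eye
    cx, cy = head_center
    d = math.hypot(cx - ex, cy - ey)
    theta = math.atan2(cy - ey, cx - ex)      # direction from eye to head centre
    alpha = math.asin(head_radius / d)        # half-angle subtended by the head
    xs = []
    for t in (theta - alpha, theta + alpha):
        s = (screen_y - ey) / math.sin(t)     # ray parameter at the screen line
        xs.append(ex + s * math.cos(t))
    return min(xs), max(xs)

def horizontal_visible_distance(eye, heads, head_radius, screen_y, screen_x0, screen_x1):
    """Screen width minus the parts shadowed by the given front-row heads.

    Shadowed intervals from different heads are assumed not to overlap here.
    """
    blocked = 0.0
    for c in heads:
        lo, hi = blocked_screen_interval(eye, c, head_radius, screen_y)
        blocked += max(0.0, min(hi, screen_x1) - max(lo, screen_x0))
    return max(0.0, (screen_x1 - screen_x0) - blocked)

# Example: one head 0.9 m in front of the eye, screen 10 m away and 8 m wide.
print(horizontal_visible_distance((0.0, 0.0), [(0.1, 0.9)], 0.12, 10.0, -4.0, 4.0))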

Calculation of Sight Area Rate

Once the vertical distance and horizontal distance are obtained, the screen can be
divided into several sections (shown in Figure 4) according to Figure 2 and Figure 3.

Figure 2 Definition of vertical distance and sightline and critical line

Figure 3 Definition of horizontal distance

Sections 4, 5, and 6 are the blocked screen areas of a considered seat after analyzing
the vertical sightline. Sections 1, 3, 4, and 6 are the blocked screen areas of a considered seat
after analyzing the horizontal sightline. The total unblocked screen area of a considered
seat is the intersection of the unblocked screen areas from the vertical sightline analysis
and the horizontal sightline analysis; in this case, sections 4 and 6 are blocked in both
analyses. The union of the blocked sections is the invisible area of the screen
from the considered seat. Excluding the union of the blocked sections, the sight area can be
calculated by adding the areas of the remaining sections. By applying equation [1], the
sight area rate can be derived from the total sight area.
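Under the assumption that the vertically blocked region is a full-width band and the horizontally blocked regions are full-height strips, as suggested by the sectioning in Figure 4, the visible area reduces to the product of the two distances obtained above, and equation [1] can be applied directly, as in this sketch:

def sight_area_rate(visible_height, visible_width, screen_height, screen_width):
    """Equation [1]: visible screen area over total screen area, in percent.

    Assumes the blocked regions form a full-width horizontal band (vertical
    analysis) and full-height vertical strips (horizontal analysis), so the
    visible area is simply the product of the two visible distances.
    """
    visible_area = visible_height * visible_width
    total_area = screen_height * screen_width
    return 100.0 * visible_area / total_area

# Using the two example results above: about 3.79 m visible vertically on a
# 5 m tall screen and about 5.29 m visible horizontally on an 8 m wide screen.
print(f"{sight_area_rate(3.79, 5.29, 5.0, 8.0):.1f} %")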

Figure 4 Concept of calculating sight area of screen

CONCLUSIONS

To secure the view of the audience in a theater, the existing methods analyze the
view of the audience based on the ‘sightline,’ which is the ‘line of sight’ between the viewpoint
of the stage and the eye point of the audience in the theater. The existing methods can be
categorized into two parts: visual identification of obstacles in the sightline based on
2D drawings, and identification of obstacles in a 3D model with a 3D modeling tool.
However, these methods have the drawbacks of having to redo the 3D model
whenever changes in the theater design plans are made and not having accurate
analysis results. This paper proposes an analysis method focused on ‘sight area’ to
secure the view of the audience rather than on ‘sightline’. The newly proposed notion
of sight area in this paper is the visible area of the screen from a considered seat. The
proposed analysis method can be applied to any theater stage design irrespective of
whether a full 3D model has been developed. In the future, we will validate the sight area rate analysis
method through a case study and compare the accuracy of the result with those of
existing methods.

Acknowledgement

This research was supported by the MKE (The Ministry of Knowledge Economy),
Korea, under the national HRD support program for convergence information
technology supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-
2010-C6150-1001-0013).

REFERENCES

Burris-Meyer, H., and Cole, E. (1964). "Theaters and Auditoriums." Van Nostrand
Reinhold Publishing Corporation, New York.
DCMS. (2008). "Guide to safety at sports grounds." Department for Culture, Media
and Sport, ed., TSO, London.


FDA. "Fisher Dachs Associates.",
<http://www.fda-online.com/services_detail.php? id=19 >(accessed Dec 6,
2010).
Ham, R. (1987). Theatres : planning guidance for design and adaptation,
Architectural Press, London.
Izenour, G. C. (1996). Theater design.
John, G., and Sheard, R. (2000). Stadia: A design and development guide,
Architectural Press.
Algorithm for Efficiently Extracting IFC Building Elements from an IFC
Building Model

Jongsung Won1 and Ghang Lee2

1 Graduate Research Assistant, Department of Architectural Engineering, Yonsei
University, Korea, 120-749; PH (822) 2123 7833; email: jongsungwon@yonsei.ac.kr
2 Corresponding Author, Associate Professor, Ph.D., Department of Architectural
Engineering, Yonsei University, Korea, 120-749; PH (822) 2123 7833; email:
glee@yonsei.ac.kr

ABSTRACT

This research proposed two algorithms, which may reduce the size of IFC files by
extracting only the information requested by each project participant. The extraction
algorithms could help to increase the productivity of exchanging project
information among project participants. One of the algorithms extracted, from an IFC file,
the entities related to the required building elements and the instances that recursively
explain these entities. The other eliminated the unnecessary entities and instances from the
file. This research compared the IFC files extracted by the two algorithms to identify
the more efficient algorithm. The extraction algorithm was more efficient than the
elimination algorithm because the size of an IFC file produced by the extraction
algorithm was almost 1/11 of the file size produced by the elimination algorithm.
The identified algorithm could reduce the file size to 8.7% of the original size when
extracting information related to the slab element in an IFC file.

Keywords: IFC (Industry Foundation Classes), algorithm, instance, entity

INTRODUCTION
Diverse software applications are used in the Architecture, Engineering, and Construction
(AEC) industry, and because of the different formats supported by this
software it is difficult to exchange data automatically and directly. To overcome these
limitations, buildingSMART International (formerly the International Alliance for
Interoperability, IAI) proposed Industry Foundation Classes (IFC) as the
international standard for exchanging data between project participants. However,
since a master IFC model generally means an integrated model that includes a lot of


information about a BIM-based project generated by various project participants, the


size of the IFC model file becomes very large (Hwang 2004). Importing and
exporting IFC files takes a lot of time, making it difficult to exchange information
among the project participants. The participants request only
partial information related to their field from the IFC file, such as design, mechanical,
electrical and plumbing (MEP), and construction. Therefore, it is more efficient to
use IFC files containing only essential information to carry out specific activities
(Hwang 2004; Park and Kim 2009) than it is to use a master model; however, it is
difficult to resolve problems in generating and managing the BIM model according
to the characteristics of the specific tasks that will be carried out (Chen et al. 2005;
Hwang 2004; Katranuschkov et al. 2010; Lee 2009; Yang and Eastman 2007). In this
research the authors developed two algorithms which may reduce the size of an IFC
file by extracting only the information requested by each project participant. It is
possible to exchange and manage information efficiently by using only the minimum
valid IFC file data extracted by the developed algorithms.
To identify the most efficient algorithm for extracting the required information
from an IFC file, the authors developed two algorithms which could extract the
information requested to represent the selected building component from an IFC
model file. Algorithm 1 selects specific entities recursively, extracting instances
which refers to the instances of the selected entities from an IFC file, and other
instances which refers to the previously extracted instances. Algorithm 2 is a method
which eliminates entities which are not requested by each project participant from an
IFC model. To confirm and compare the efficiency of the two developed extraction
algorithms, the authors created a simple BIM model and converted it into an IFC
model to validate the extraction algorithms.

PREVIOUS STUDIES
Some previous studies argued that it was more efficient to use an extracted IFC
model containing only requested information instead of a master IFC model which
integrated all the information generated by diverse project participants (Chen et al.
2005; Hwang 2004; Park and Kim 2009). Park and Kim (2009) claimed that the
utilization of software based on description logics was necessary because IFC models have
become more complex and larger than before. The authors proposed using an ontology
representation of an IFC based building information model by adding ontology web
language (OWL) notation into the IFC model. However, algorithms of the proposed
representation were not mentioned. Hwang (2004) attempted to identify a method to
calculate quantity takeoff of a building. The concept of the method was to extract
basic information related to quantity takeoff from an IFC-based instance file as a
subset of the master IFC model. However, this study has several limitations. The
entities related to the representation of the preliminary quantity takeoff must be
identified whenever users want to use this algorithm. If the IFC schema is
updated, the list of related entities must also be updated, and this algorithm
cannot be utilized in areas other than quantity takeoff. Since BIM software has
not supported IFC perfectly so far, this algorithm might cause errors such as not
including the necessary IFC instances. Chen et al. (2005) developed an IFC-based
web server which could extract geometric information automatically for structural
analysis from a 3D object-oriented Computer-Aided Design (CAD) model. Although
a validation process was conducted by using case studies, they identified and
validated the extraction process of information related only to columns and beams
among various building elements. The extraction process was not utilized to extract
information related other elements but not columns and beams. The server was
implemented only to support a collaboration process between the design and
structural teams by identifying entities related to building elements, which users
wanted to carry out their work.
There are a few studies that use extraction subsets from EXPRESS schema (Lee
2009; Yang and Eastman 2007), but none of them develop algorithms for the
recursive extraction of instances from an IFC file. A few studies (Lee 2009) did
developed a recursive algorithm and a program for extracting meaningful subsets
from EXPREE schema,. however, this program only extracted a minimum valid set
of entities from an integrated IFC file but not an instance-level extractor. Therefore,
in this research the authors developed two algorithms which could extract entities
and instances related to selected building elements from an IFC file and compared
them to identify an efficient instance-level extraction algorithm. The details of the
developed algorithms are explained in the next section.

DEVELOPMENT OF ALGORITHMS
This research proposes two algorithms that could extract a minimum valid
instance-level subset containing information about entities connected with the
requested building elements and developed two programs based on the algorithms.
Algorithm 1 extracts the requested instances and Algorithm 2 eliminates the
unnecessary instances from the IFC file. To extract entities related to the selected
building elements and instances explaining the entities, a relationship between the
entities and the building elements should be defined first.

Mapping Between Entities and Building Elements


This research identified the relationship between the IFC entities and building
elements covering all areas, such as design, structural engineering, and MEP, based
on IFC 2X3. IFC 2X3 is composed of 635 entities, and 53 of them, the subtypes of
(ABS) IfcElement, are the basic entities for representing building elements.
Examples are IfcBeam, IfcColumn, IfcDoor, IfcFooting, IfcPile, and so on. In
this research, an entity related to the slab among the many building elements was
utilized for the evaluation of the developed algorithms: IfcSlab is directly connected
with the slab element.
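A fragment of such a mapping might look like the following; only a few of the 53 entities are shown, and the lower-case element names are this sketch's own convention rather than part of the IFC schema.

# Illustrative fragment of the element-to-entity mapping described above.
ELEMENT_TO_ENTITY = {
    "beam":    "IfcBeam",
    "column":  "IfcColumn",
    "door":    "IfcDoor",
    "footing": "IfcFooting",
    "pile":    "IfcPile",
    "slab":    "IfcSlab",
}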

Development of Algorithm 1
Algorithm 1 denotes the algorithm to extract instances related to a selected
element from an IFC file. Figure 1 shows an example of an extraction of the entities
and instances related to the slab element from a master IFC model. If IfcSlab was
selected as the entity to be extracted, instance # 2638, representing IfcSlab in the IFC
file, should be extracted. Instances #33 (IfcOwnerHistory), #2614
(IfcLocalPlacement), and #2637 (IfcProductDefinitionShapes) should also be
extracted because instance #2638 referred to these instances, and instances which
#2637 refers to should be extracted recursively.
In addition to these recursive processes, the algorithm should take into account
instances referring to the instance explaining the IfcSlab entity. For example,
instance #2642 (IfcRelDefinesByProperties) should be extracted because this
instance referred to instance #2638, which explains the IfcSlab entity. The order of
extracted instances in the IFC file was changed since the extraction process was
carried out according to relationships of instances. However, the order of instances
did not cause errors in the IFC files.
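A simplified sketch of Algorithm 1, operating directly on the instance lines of a STEP physical file, is given below. The parsing is deliberately naive (one instance per line, no handling of strings containing '#'), and the sample data are invented, so this illustrates the extraction logic rather than a production IFC parser.

import re

REF = re.compile(r"#(\d+)")

def parse_step_instances(lines):
    """Map instance id -> (entity name, full line) for '#id = ENTITY(...);' lines."""
    instances = {}
    for line in lines:
        m = re.match(r"#(\d+)\s*=\s*([A-Za-z0-9_]+)\s*\(", line)
        if m:
            instances[int(m.group(1))] = (m.group(2).upper(), line.rstrip())
    return instances

def extract_for_entity(instances, target_entity="IFCSLAB"):
    """Algorithm 1 sketch: keep the target instances, the instances that refer to
    them (e.g. IfcRelDefinesByProperties), and everything they reference, recursively."""
    keep = {i for i, (name, _) in instances.items() if name == target_entity}
    # one pass to pick up instances that refer to a target instance
    for i, (_, line) in instances.items():
        body = line.split("=", 1)[1]
        if any(int(r) in keep for r in REF.findall(body)):
            keep.add(i)
    # recursively follow outgoing references of everything kept so far
    frontier = list(keep)
    while frontier:
        i = frontier.pop()
        body = instances[i][1].split("=", 1)[1]
        for r in map(int, REF.findall(body)):
            if r in instances and r not in keep:
                keep.add(r)
                frontier.append(r)
    return [instances[i][1] for i in sorted(keep)]

if __name__ == "__main__":
    sample = [
        "#33 = IFCOWNERHISTORY($,$,$,$,$,$,$,0);",
        "#2614 = IFCLOCALPLACEMENT($,$);",
        "#2637 = IFCPRODUCTDEFINITIONSHAPE($,$,());",
        "#2638 = IFCSLAB('guid',#33,'Slab',$,$,#2614,#2637,$,.FLOOR.);",
        "#2642 = IFCRELDEFINESBYPROPERTIES('guid',#33,$,$,(#2638),#2641);",
        "#9999 = IFCCOLUMN('guid',#33,'Column',$,$,$,$,$);",
    ]
    for line in extract_for_entity(parse_step_instances(sample)):
        print(line)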

Figure 1 An example of the extraction methodology of Algorithm 1

Development of Algorithm 2
Algorithm 2 eliminates the instances explaining entities connected with unnecessary
building elements, i.e., those not selected among the 53 basic
entities for representing building elements, as well as the instances referring to the
eliminated instances. Figure 2 shows an example of the elimination process. If a user wanted to
extract information related to column elements from an IFC file, entities related to
slab elements should be eliminated from the IFC file. As in Figure 2, instance #2638,
which explains the entity IfcSlab, was eliminated first and instances #2642
(IfcRelDefinesByProperties), #2654 (IfcRelDefinesByProperties), and #2656
(IfcRelDefinesByProperties), referring to instance #2638 were also eliminated.

Through this process, Algorithm 2 eliminated the instances explaining entities connected
with the unselected building elements, which in this case were all of the building elements
except slabs.
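The complementary elimination step can be sketched in the same style. Whether the removal of referring instances is propagated repeatedly is an assumption of this sketch, since Figure 2 only shows one level of propagation; the short list of element entities likewise stands in for the full set of 53 IfcElement subtypes.

import re

REF = re.compile(r"#(\d+)")
ENT = re.compile(r"#(\d+)\s*=\s*([A-Za-z0-9_]+)\s*\(")

# Stand-in for the 53 IfcElement subtypes; only a handful are listed here.
ELEMENT_ENTITIES = {"IFCSLAB", "IFCCOLUMN", "IFCBEAM", "IFCWALL", "IFCDOOR", "IFCWINDOW"}

def eliminate_except(step_lines, keep_entity="IFCSLAB"):
    """Algorithm 2 sketch: drop instances of the non-selected element entities,
    then drop instances that refer to anything already dropped (repeated until
    stable, which is this sketch's assumption)."""
    instances = {}
    for line in step_lines:
        m = ENT.match(line)
        if m:
            instances[int(m.group(1))] = (m.group(2).upper(), line.rstrip())
    removed = {i for i, (name, _) in instances.items()
               if name in ELEMENT_ENTITIES and name != keep_entity}
    changed = True
    while changed:
        changed = False
        for i, (_, line) in instances.items():
            if i in removed:
                continue
            if any(int(r) in removed for r in REF.findall(line.split("=", 1)[1])):
                removed.add(i)
                changed = True
    return [line for i, (_, line) in sorted(instances.items()) if i not in removed]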

Figure 2 An example of the elimination methodology of Algorithm 2

COMPARISON OF DEVELOPED ALGORITHMS


To evaluate the performance of the two developed extraction algorithms, a BIM
model was created and used. The BIM model, generated using Revit Architecture,
was a simple model composed of basic building elements such as columns, walls, slabs,
windows, and doors, and it was converted into an IFC-format file.

Figure 3 Representation of an IFC model in which slab-related instances were
extracted by the two algorithms (panels: before extraction, Algorithm 1, Algorithm 2)

Figure 3 shows the image of an IFC model before extraction and the extracted
models by using the two algorithms developed in this research. DDS viewer was
used for representing the IFC models. The number of entities and the size of the IFC file
containing the entities and instances related to the slab elements extracted by Algorithm
1 were different from those extracted by Algorithm 2. Table 1 shows the results of the
comparison. According to the comparison of the size of the extracted IFC files, using
Algorithm 1 was more efficient than Algorithm 2 for reducing the size of an IFC file.
Algorithm 1 reduced the file size to 8.7% of the master IFC file and Algorithm 2
reduced it to 90.0% of the file.

Table 1 IFC files extracted by the two developed algorithms

                       Master IFC file   Extracted by Algorithm 1   Extracted by Algorithm 2
Number of entities     87                43 (49.4%)                 76 (87.4%)
File size (KB)         423               37 (8.7%)                  381 (90.0%)

The number of entities included in the IFC file extracted by Algorithm 1 was 56.6%
of the number of entities in the IFC file by Algorithm 2. This means that the IFC file
extracted by Algorithm 1 had fewer unnecessary entities than Algorithm 2.

CONCLUSIONS
This research developed two algorithms to extract information related to requested
building elements from an IFC file and identified the more efficient algorithm. One
of the algorithms, Algorithm 1, extracted the necessary instances from an IFC file
recursively, and the other, Algorithm 2, eliminated unnecessary instances from an
IFC file. Both algorithms extracted the entities connected with the building elements
requested for the extraction, along with the instances explaining the related entities, from an
integrated IFC file and generated valid IFC files.
For the evaluation of the developed algorithms, the authors created an IFC model
and compared the IFC files extracted by the two algorithms. The size of the IFC file
produced by Algorithm 1 (8.7% of the integrated IFC model) was about 1/11 of that
produced by Algorithm 2 (90.0%). Therefore, the authors identified the extraction
algorithm as the more efficient.
The identified algorithm should be implemented into large-scale IFC models that
include design, MEP, and structural information, and be evaluated to confirm the
possibility of implementation into real BIM-based projects.

ACKNOWLEDGEMENTS
This research was supported by a grant titled "06-Unified and Advanced
Construction Technology Program-E01" from the Korean Institute of Construction
and Transportation Technology Evaluation and Planning (KICTEP) and the MKE
(The Ministry of Knowledge Economy), Korea, under the national HRD support
program for convergence information technology supervised by the NIPA (National
IT Industry Promotion Agency) (NIPA-2010-C6150-1001-0013)

REFERENCES

Chen, P.-H., Cui, L., Wan, C., Yang, Q., Ting, S. K., and Tiong, R. L. K. (2005).
"Implementation of IFC-based web server for collaborative building design

between architects and structural engineers." Automation in Construction, 14,


pp. 115-128.
Hwang, Y.-S. (2004). "Automatic quantity takeoff from drawing through IFC
model." Architectural Institute of Korea, 20(12), 89-97.
Katranuschkov, P., Weise, M., Windisch, R., Fuchs, S., and Scherer, R. J. (2010)
"BIM-based generation of multi-model views." CIB W78 2010, Cairo, Egypt.
Lee, G. (2009). "Concept-based method for extracting valid subsets from an
EXPRESS schema." Journal of Computing in Civil Engineering, 23(2),
pp.128-135.
Park, J.-D., and Kim, J.-W. (2009). "A study on the ontology representation of the
IFC based building information model." Architectural Institute of Korea,
25(5), pp. 87-94.
Yang, D., and Eastman, C. M. (2007). "A rule-based subset generation method for
product data models." Comput. Aided Civ. Infrastruct. Eng, 22(2), pp. 133-
148.
Evaluating the Role of Healthcare Facility Information on Health
Information Technology Initiatives from a Patient Safety Perspective
J. Lucas1, T. Bulbul1, C. J. Anumba2, J. Messner2
1 Dept. of Building Construction, Virginia Tech, Bishop-Favrao Hall (0156),
Blacksburg, VA, 24061; PH (540)231-3804; FAX 540-231-7339;
email: jlucas06@vt.edu, tanyel@vt.edu.
2 Dept. of Architectural Engineering, Penn State University, 104 Engineering Unit A,
University Park, PA, 16802; PH (814)865-6394; FAX (814) 863-4789;
email: anumba@engr.psu.edu, jmessner@engr.psu.edu

ABSTRACT
Patient safety is a principal factor in healthcare facility operations and
maintenance (O&M). Ongoing initiatives to help track patient safety information and
record incidents and close calls include Common Formats and International
Classification for Patient Safety (ICPS). Both efforts aim to develop ontologies to
support healthcare providers to collect and submit standardized information regarding
patient safety events. Aggregating this information is crucial for pattern analysis,
learning, and trending. The purpose of this paper is to analyze these existing efforts to
see how much facility and facility management information is covered in the existing
frameworks and how they can interface with new systems development. This analysis
uses documented cases from literature on healthcare associated infections, inputs the
data from the cases into the information categories of Common Formats and ICPS,
and identifies gaps and overlaps between these existing systems and facility
information. With this analysis, connections to these efforts are identified that serve
as a leverage for showing the role of healthcare facility information for assessing and
preventing risky conditions. Future work will use these findings and the supported
ontology to connect patient safety information to a building model for supporting
facility operations and maintenance. The aim is generating and interpreting high-level
information to provide effective and efficient patient safety in a healthcare
environment.

INTRODUCTION

Patient care and safety is of prime importance to clinical staff within a


healthcare environment. The design and function of the physical environment and
use of Healthcare Information Technologies (HITs) are two important pieces in
providing quality care and ensuring patient safety within a healthcare setting.
Proper design, maintenance, and care of the physical environment has been
proven to reduce patient and staff stress, improve recovery outcome, and improve
overall healthcare quality (Ulrich et al., 2004). A better indoor working environment
has also been linked to better productivity (Clements-Croome, 2003), which can lead


to better quality of care. Guidelines exist within the industry, such as those from the
U.S. Department of Health and Human Services (Sehulster and Chinn, 2003), and
other design standards, to ensure the environment of care is safe, with proper
ventilation, systems control, and procedures to help reduce Healthcare-Associated
Infections (HAIs) and patient safety events.
The use of HIT applications within healthcare systems as a way of improving
patient safety is expanding. Research has shown that HIT has the potential for
significant savings, increased safety, and better health (Hilestad et al., 2005; Taylor et
al., 2005; Bigelow et al., 2005; Bates and Gawande, 2003). Reducing medical errors
and improving patient safety can ultimately save healthcare and related industries
$19.5 billion (USD) annually in the United States (Shreve et al., 2010). The
improvement to patient safety and reduction of medical errors linked to the use of
HIT has led to the federal government passing legislation promoting the use of HIT
and to create programs for funding their implementation (Bates and Gawande, 2003).
Integrating facilities and environment information is lacking within existing
HIT solutions that deal with patient and clinical information. This paper reviews two
HIT ontologies related to patient safety events for their ability to support facility
information and explores options for including them in future systems
implementation. This is done by applying data from documented case studies of
patient safety events on healthcare associated infections which involve a failure on
the facility side, into the existing ontologies. The results of this study can help to
develop a decision support system that links patient safety concerns with facility
management and operational tasks that can be used to help improve patient safety and
environmental quality.

PATIENT SAFETY EVENT CASES

Two cases involving patient safety events and Healthcare-Associated Infections
(HAIs) caused by facility/maintenance issues were found in the literature, and one case
scenario was developed through interviews with clinical and facilities staff at Hershey
Medical Center, Hershey, PA; all three were involved in the following analysis.
Information from these cases and scenarios was used as input into the existing
frameworks (Common Format and ICPS) to identify gaps in the environmental and
facility information that is important for properly recording incidents and preventing
similar situations from happening again.

Case 1: Operating room air-intake duct. A growth of moss on the roof and pigeon
feces on the window ledge, both adjacent to an operating room air-intake duct, caused
an outbreak of Aspergillus endocarditis (Walsh & Dixon, 1989).

Case 2: Outside construction causes nosocomial aspergillosis. Construction


outside the hospital has been associated with concurrent nosocomial aspergillosis in
immunocompromised patients. The air conditioners were contaminated due to road
construction outside the Medical Center (Walsh & Dixon, 1989).

Case 3: Bacteria growth in air conditioning unit cause legionnaires’ disease.


Because of a lack of regular maintenance of the interior of the in-wall air conditioner
units, patients and staff were infected with Legionnaires’ disease when bacteria
became airborne.

HEALTH INFORMATION TECHNOLOGY AND PATIENT SAFETY

There are a few formalisms underway within the healthcare industry to create
a central system for capturing and classifying patient safety events and related
information within a structured ontology. Two of these initiatives are the Agency
for Healthcare Research and Quality’s (AHRQ) Common Format and the World Health
Organization’s (WHO) International Classification for Patient Safety.

AHRQ – Common Format. The Patient Safety and Quality Improvement Act of
2005 established a framework for voluntary submission of privileged and confidential
information to be collectively analyzed in regards to the quality and safety of patient
care given in a healthcare setting. The idea is to have the information, from different
organizations, in a standardized format to allow the aggregation of data to identify
and address underlying causal factors of patient safety problems. The information will
be stored in a database where AHRQ in the larger scale or individual hospitals locally
can then use the data to analyze statistics and do trending of patterns in regards to
patient safety events (AHRQ, 2010).
AHRQ Common Formats allows for capturing the information on different
incident types. Associated data for each incident is captured and classified in the
Logical Data Model. The Common Format also defines use cases for developers on
how to implement the data model. The processes are captured in a flowchart format to
assist with development of data types that need to be recorded for each incident. The
goal of the Common Format is to support standardization so that data collected by
different entities are clinically and electronically comparable.
The data model in Common Formats is organized around the “Concern -
Event or Unsafe Condition” class. There are eight main patient safety
conditions that are defined as sub-types around it: blood/blood product,
device/medical surgical supply, fall, healthcare-associated Infection, medication/other
substance, surgery/anesthesia, perinatal and pressure ulcer. Every event has data
related to the “Contributing Factor”, “Reporter”, “Patient”, and “Linked”.
For the purpose of this study we focused on describing a case for a Healthcare-
Associated Infection (HAI). Figure 1 shows how the information is organized in the
Common Formats for HAI (adapted from PSO Privacy Protection Center, 2010).
In this model, the information which needs to be recorded would include the
type of infection; if the infection was present at time of admittance (such as from a
previous health event) or if it was acquired in the hospital; the source of the infection,
if medical procedures were involved; and what types of treatments were given. Each
of these details is linked to a data element. The data elements are clearly defined
within the Common Formats Data Dictionary that describe their appropriate use
within the overall system, the data type, maximum available length, and where the
information may be collected from (PSO Privacy Protection Center, 2010).
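To make the comparison concrete, the record below shows how Case 1 would have to be captured under this organization; the field names are simplified stand-ins and not the actual Common Formats data elements defined in the Data Dictionary.

# Illustrative record for an HAI event following the organization described above
# (event details plus Contributing Factor, Reporter, Patient, and Linked data).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class HAIEvent:
    infection_type: str
    present_on_admission: bool
    infection_source: Optional[str] = None
    procedures_involved: List[str] = field(default_factory=list)
    treatments_given: List[str] = field(default_factory=list)
    contributing_factors: List[str] = field(default_factory=list)  # facility issues end up here
    reporter: Optional[str] = None
    patient_id: Optional[str] = None
    linked_events: List[str] = field(default_factory=list)

# Case 1 from the text: the facility-side cause can only be recorded as a
# free-text contributing factor.
case1 = HAIEvent(
    infection_type="Aspergillus endocarditis",
    present_on_admission=False,
    infection_source="operating room air-intake duct",
    contributing_factors=["moss growth and pigeon feces adjacent to the OR air intake"],
)
print(case1)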

Figure 1. Common Format Logical Data Model for HAI Information.

WHO – International Classification for Patient Safety (ICPS). The WHO formed
a Drafting Group that was in charge of developing the conceptual framework for the
ICPS. The framework was validated for multiple languages and approved to fit the
purpose, and to be meaningful, useful, and appropriate for classifying patient safety
data and information. The framework aims at providing a comprehensive
understanding of the patient safety domain by representing a continuous learning and
improvement cycle emphasizing identification of risk, prevention, detection,
reduction of risk, incident recovery and system resilience (WHO, 2009).
At this point, ICPS only focuses on a taxonomy for classifying the patient
safety events. It is more of a conceptual framework than a complete data model. On
the larger scale the classes are created but the attributes are still in the development
process. The taxonomy is based on a conceptual framework, consisting of 10 high
level classes: incident type, patient outcomes, patient characteristics, incident
characteristics, contributing factors/hazards, organizational outcomes, detection,
mitigating factors, ameliorating actions, actions taken to reduce risk. The “incident
type” class identifies 13 sub-types as safety events, which are: clinical administration,
clinical process/procedure, documentation, healthcare associated infection,
medication/IV fluids, blood/blood products, nutrition, oxygen/gas/vapor, medical
device/equipment, behavior, patient accidents, infrastructure/building/fixtures,
resources/organizational management.
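For reference, these thirteen incident sub-types can be captured as a simple enumeration; the identifier spellings below are this sketch's own, not an official coding.

from enum import Enum

class ICPSIncidentType(Enum):
    CLINICAL_ADMINISTRATION = "clinical administration"
    CLINICAL_PROCESS_PROCEDURE = "clinical process/procedure"
    DOCUMENTATION = "documentation"
    HEALTHCARE_ASSOCIATED_INFECTION = "healthcare associated infection"
    MEDICATION_IV_FLUIDS = "medication/IV fluids"
    BLOOD_BLOOD_PRODUCTS = "blood/blood products"
    NUTRITION = "nutrition"
    OXYGEN_GAS_VAPOR = "oxygen/gas/vapor"
    MEDICAL_DEVICE_EQUIPMENT = "medical device/equipment"
    BEHAVIOR = "behavior"
    PATIENT_ACCIDENTS = "patient accidents"
    INFRASTRUCTURE_BUILDING_FIXTURES = "infrastructure/building/fixtures"
    RESOURCES_ORGANIZATIONAL_MANAGEMENT = "resources/organizational management"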
For the cases defined in this paper, two incident types in ICPS, healthcare
associated infection and infrastructure/building/fixtures, fit the purpose. Figures 2 and
3 (adapted from WHO, 2009) show how these classes are formed.
Actual implementation of both initiatives is ongoing and in development
although both offer organizational models and technical information to help with
development. Common Formats has a bottom-up approach where attributes for every
safety event are specifically defined. The context of the data model only covers
hospitals and the model is ready for implementation. The ICPS has a top-down
approach, where the context covers all healthcare environments and the model defines

large scale relationships first. The focus is on comprehensive classification but the
attributes are not yet identified for every class. The ICPS is not implementable yet.

Figure 2. ICPS Class for Healthcare Associated Infection

Figure 3. ICPS Class for Structure/Buildings/Fixtures

CAPACITY COMPARISON

The Logical Data Models of Common Formats and Conceptual Framework


for ICPS are used for comparison to support facility management information in HIT.
The information categories for each of these systems are recorded in Table 1.
In comparing the existing patient safety information data structures for their
functionality to assist with facility management and maintenance information, it is
first necessary to compare their capacities with each other. Common Format and
ICPS are extremely similar in purpose, function, and features. Although still in its early
conceptual phases, however, ICPS offers more information that can be used to help locate,
solve, and inform future improved practices for facility management tasks and information,
through its event type of Structure/Building/Fixture.
Within the Common Format logic, the information from the case studies that
dealt with locations of the facility and systems within the building would only be
stored under the “Contributing Factors” to the event. The case would be filed as an
HAI event. Arguably, it would fit better under a separate event type dealing with
facilities/maintenance. Research shows that HAIs and other events can be directly linked to maintenance, renovation, and construction (Walsh & Dixon, 1989; Cooper et al., 2003). These events may be better suited for future planning if they were stored within their own event type. The ICPS, by contrast, provides an incident type that captures facility-related contributing issues.
Another aspect of the ICPS that is missing from the logic of Common Formats is that the ICPS framework allows better future actions to be determined based on recorded events. This supports a type of lessons-learned database that grows as information is entered into the system. Common Formats intends for the data to be interpreted at a later time to find trends, behaviors, root causes, and better practices, whereas the ICPS allows this better-practices information to develop continuously and consistently.

Table 1. Information Categories for Common Format and ICPS


Storage Capability Common Format ICPS
Rescue Intervention (Reduce Risk) X X
Contributing Factors to Event X X
Reporter X
Linked Event or Symptom (Diagnosis) X
Patient Information X X
Event Type and Information X X
Detection X
Outcomes (harms to patient/ accountability in organization) X
Ameliorating Actions X

Types of Events
Blood X X
Device/Supply X X
Fall (or accident) X X
Healthcare-Associated Infection X X
Types of infections X X
Treatment Sources X
Location Where Appeared X X
Medication/IV X X
Surgery/Anesthesia X X
Pressure Ulcer X
Nutrition X
Documentation X
Procedure X
Behavior (Patient or Staff) X
Infrastructure Building or Fixture (and associated problem) X
Resources (Organization) X

Table 2 shows the different types of information directly related to the facility that are available from the cases and can be useful in determining better practices for facility maintenance and operations. Note that patient information, including
symptoms, treatments, and other medical information, is omitted from the table as
this information is not directly important to facility management and is stored by both
Common Format and ICPS.

Table 2. Case Information supported in Ontologies


Information Type Common Format ICPS
Building/Room/Space: Operating Room/Patient Room X
Mechanical System: Air-intake Duct X
Systems: In-wall Heating/Ventilation/AC Unit X
Location: Roof/Patient Room X
Facility Cause: Unclean Filter/Contaminated Intake X X
Cause of Infection: Bacteria Growth X X

Although both information structures can store all of the facility-related information from the cases, the ICPS appears to allow better sorting of events caused by facility issues because its classification includes Structure/Building/Fixture information. While not all attributes are
defined through ICPS, the conceptual framework takes facility information into
consideration. To cover all areas of information as marked in Table 2, the attributes for Structure/Building/Fixture would need to take into account aspects of locations
and systems throughout the healthcare setting.

DISCUSSION AND FUTURE WORK

The long-term research goals include the development of a model-based system to enable facility managers to improve operations that help to reduce patient
safety events related to facility issues. The model-based system would help as a
decision support system and planning tool for maintenance tasks. This is envisioned
to occur through interfacing with existing systems, both internal and external, to the
healthcare facility as well as the model-based system serving as a central depository
for key facility information. The purposes of this model-based system would be to
help in making decisions in time of crisis with unforeseen facility related events (e.g.
malfunctioning HVAC equipment) as well as to aid in better management and
scheduling of regular maintenance tasks (e.g. cleaning coils and filters).
The framework for this model-based system will be developed through
creating and analyzing decision-trees for documented cases and conditions. Once this
analysis is complete, the types of information needed for making certain types of
decisions will be known. This information can then be structured to allow for
referencing and making future decisions within the model-based system.
Some information types that will be included in this information framework
include information from design, engineering, construction, and renovation. These
types of information will mostly contain physical location of systems, system
warrantee information, operational manuals with required maintenance schedules, and
the like. Other information included will be that of best practices, decision support
information (for times of crisis), and regulation based information.
The initiatives discussed in this paper can give insight into the types of
information needed to support best practices and decision support. Both Common
Formats and ICPS can serve as interfaces to inform the system on trends wider than
one healthcare facility or campus. Where they can benefit a planning system the most
is in serving as the basis for a lessons learned database to support improved practices
for facility related HAI’s and safety events. Beyond a lessons-learned database,
information on patient safety events in a central location can help in trending and
finding recurrences to aid infection control and facility personnel to more quickly
locate larger problems within a facility.
Other systems that the facility management model-based system may interface
with are those dealing with the clinical operations of a building. Bed-tracking and
other medical data systems can be connected to the model-based system to send
messages of when spaces are available for regular maintenance or help track trends of
patient events and locate causes with the physical environment. The interface with
other systems can also more easily allow clinical personnel, such as those within infection control, to link trends of illness and infection to a facility management cause.
The linking of all relevant facility management information to a model-based
system that also has the capabilities of interfacing with existing information systems
can prove to be a valuable HIT for the healthcare industry. The physical environment is a key element in providing quality of care, and maintaining that environment requires
keeping many systems working properly. A HIT that links patient safety to facility
management information can help lead to a reduction of patient safety events, saving
the healthcare industry money, and more importantly improving quality of patients’
lives.

REFERENCES
Agency for Healthcare Research and Quality (AHRQ). (2010) “Users Guide: Version
1.1: AHRQ Common Formats for Patient Safety Organizations” AHRQ Common
Formats Version 1.1 – March 2010 Release | Users Guide.
Bates, D.W. and A.A. Gawande. (2003) “Improving Safety with Information
Technology” The New England Journal of Medicine, 348 (25): 2526-2534.
Bigelow JH, Fonkych K, and Girosi F. (2005) “Technical Executive Summary in
Support of ‘Can Electronic Medical Record Systems Transform Healthcare?’ and
‘Promoting Health Information Technology’,” Health Affairs, Web Exclusive,
September 14.
Clements-Croome D. (2003) “Environmental Quality and the Productive Workplace,”
CIBSE/ASHRAE Conference (24-26 Sept).
Cooper EE, O’Reilly MA, Guest DI, and Dharmage SC. (2003). “Influences of
Building Construction Work on Aspergillus Infection in a Hospital Setting,”
Infection Control and Hospital Epidemiology, 24(7): 472-476.
Hillestad R, Bigelow J, Bower A, Girosi F, Meili R, Scoville R, and Taylor R. (2005)
“Can Electronic Medical Record Systems Transform Healthcare? An Assessment
of Potential Health Benefits, Savings, and Costs,” Health Affairs, 24(5).
PSO Privacy Protection Center (2010). “AHRQ Common Formats Version 1.1:
Technical Specifications,” Accessed on 12/20/10, website:
https://www.psoppc.org/web/patientsafety/version-1.1_techspecs.
Shreve J, Van Den Bos J, Gray T, Halford M, Rustagi K, and Ziemkiewicz E. (2010)
“The Economic Measurement of Medical Errors: Sponsored by Society of
Actuaries’ Health Section,” Milliman Inc. (June).
Sehulster L and Chinn RYW. (2003) “Guidelines for Environment Infection Control
in Healthcare Facilities,” Centers for Disease Control and Prevention Healthcare
Infection Control Practices Advisory Committee (HICPAC).
Taylor R, Bower A, Girosi F, Bigelow J, Fonkych K, and Hillestad R. (2005)
“Promoting Health Information Technology: Is There a Case for More-Aggressive Government Action?” Health Affairs, 24(5).
Ulrich R, Quan X, Zimring C, Joseph A, Choudhary R. (2004) “The Role of the
Physical Environment in the Hospital of the 21st Century: A Once-in-a-Lifetime
Opportunity,” Report to the Center for Health Design for Designing the 21st
Century Hospital Project, September 2004.
Walsh T.J., and Dixon D.M. (1989) “Nosocomial Aspergillosis: Environmental
Microbiology, Hospital Epidemiology, Diagnosis and Treatment,” European
Journal of Epidemiology, 5(2):131-142.
World Health Organization (WHO). (2009). “Conceptual Framework for the
International Classification for Patient Safety” World Health Organization.
EVMS For Nuclear Power Plant Construction:
Variables For Theory And Implementation

Y. Jung1, B.S. Moon2, and J. Y. Kim3


1
College of Architecture, Myongji University, Yongin 449-728, South Korea,
PH (8231) 330-6396; FAX (8231) 330-6487; email: yjung97@mju.ac.kr
2
Plant Construction Information Team, Korea Hydro & Nuclear Power Co., Ltd,
Seoul 135-743, South Korea, PH (822) 3456-1980; FAX (822) 3456-1939;
email: moonbs@khnp.co.kr
3
Defense/Public Division, Kongkwan Protech, Seoul 133-120, South Korea,
PH; (822) 3486-1977, FAX; (822) 3486-1977, email: jykim@kkprotech.com

ABSTRACT
It is anticipated that there will be intense competition in the nuclear industry
as the cost and time for nuclear power plant construction are expected to fall
(Richardson 2010). In order to attain competitive advantages under the globalized
market, utilizing advanced project control systems by integrating cost and time
management is of great concern for practitioners as well as the researchers. In this
context, the purpose of this paper is to identify major variables that characterize the
real-world Earned Value Management System (EVMS) implementation for nuclear
power plant construction. Distinct attributes of nuclear power plant construction were
investigated first. Organizational policies, measurement techniques, and data collection methods for a nuclear EVMS (nEVMS) were then developed. A case-project is briefly introduced in order to validate the viability of the proposed methodology. This study was conducted as part of an effort to develop an organization-wide EVMS from an owner’s perspective.

INTRODUCTION
It is reported by Richardson (2010) that “the nuclear industry is rapidly
globalizing. As it does so, there will be sharper vendor competition. Cost and
construction time are expected to fall, and more countries will opt for nuclear power”.
Under this globalized intense competition, companies in the nuclear industry strive to
enhance the quality, cost, and time for nuclear construction projects.
Effectively managing quality, cost, and time is the utmost objective for any
type of construction projects, and the most advanced and systematic method of
controlling these three performance measures in an integrated way is known as the
‘Earned Value Management System’ (EVMS). However, additional management
effort required to collect and maintain detailed data has been highlighted as a major
barrier to utilizing this concept over a quarter of a century (Rasdorf and Abudayyeh
1991; Deng and Hung 1998; Jung and Woo 2004). In order to maximize the benefits
that this integration has to offer, tools and techniques to reduce the workload for
integrated cost and schedule control should be investigated in a comprehensive manner. Nevertheless, there has been no research addressing these issues for nuclear
construction.
In this context, the purpose of this paper is to explore influencing variables
that would facilitate effective EVMS implementation for nuclear power plant
construction. Distinct attributes of nuclear power plant construction were investigated
first. Organizational policies, measurement techniques, data collection methods for
EVMS were then developed. A case-project is briefly introduced in order to validate
the viability of proposed methodology. This paper presents the result of an ‘action
research’, as the authors have conducted information systems (IS) planning for an
organization-wide EVMS system.

BACKGROUND AND RESEARCH OBJECTIVES


A brief introduction of the case-company is presented in this paper for better
understanding of the case study (Project B in Table 2). The case-company is a public
owner constructing and operating nuclear power plants in South Korea. However, the company has recently joined a consortium supplying nuclear power plants. This requires the case-company to perform an additional role as a design-build-maintain (DBM) project manager on top of its existing owner’s business.
In addition to this change in posture, the case-company needs to accelerate its business process reengineering (BPR) efforts in order to strengthen competitiveness in the globalized market. EVMS was chosen as a candidate BPR area.
Technical capability throughout the project life cycle (i.e. planning, engineering,
procurement, construction, start-up, and operation) is stressed in this EVMS research.
Based on these backgrounds, the research team has set up four major
objectives, as described in Table 1, including ‘integrating performance measures’,
‘enhancing organizational capability’, ‘optimizing EVMS workload’, and
‘augmenting cost engineering’. Methods and techniques to achieve these objectives
are also defined in Table 1.
Table 1. Research Objectives and Methods

Objectives | Methods
O1: Integrating Performance Measures | Cost, time, and quality; Lifecycle (planning, E/P/C, startup, operation); Hierarchical schedules
O2: Enhancing Organizational Capability | Planning capability as owner; Project management (PM) capability as a supplier; Organizational learning mechanism and database
O3: Optimizing EVMS Workload | Minimized additional data requirements; Balanced data linkage and segment; Maximized data utilization for analyses
O4: Augmenting Cost Engineering | Redesigning risk & cost management system; Focused on cost engineering, not accounting; Systemized project baseline

‘Integrating performance measures’ represents the logical and physical interrelationship between data for performance management. EVMS control accounts
(CAs) in this research will accommodate cost, time, and quality within a common
denominator of work breakdown structure (WBS), so that three measures can be
monitored and controlled in an integrated way. The data stored in CAs will be
connected throughout the project life cycle (e.g. CAs and activities for planning,
E/P/C, startup, and operation are interrelated). Finally, EVMS will physically
interconnect four different levels of schedules, i.e. milestone schedule, critical
schedule, integrated control schedule, and detail schedules. As described, the physical
and logical interrelationships are balanced by having some strict linkage for
integration and also by providing flexible segments for systems effectiveness.
‘Enhancing organizational capability’ concerns the organizational learning (as
opposed to individual learning) by accumulating standardized knowledge especially
in the area of cost and scheduling (Jung 2008). In order to accomplish this objective,
several components (e.g. initial estimate, initial baseline, etc) need to be standardized
and automated (Jung and Kang 2007). This automation coupled with the management
integration discussed in first objective (O1) can effectively accumulate historical
database. By doing so, well organized and integrated dataset will facilitate the
organizational learning both for owner’s and supplier’s aspects. Eventually, project
management capabilities leading all relevant participants will be dramatically
improved by using the EVMS as well as re-engineered management skills.
Third research objective is ‘optimizing EVMS workload’. Excessive
managerial effort for collecting and maintaining detailed data has been a major barrier
to implementing this promising EVMS concept (Rasdorf and Abudayyeh 1991; Deng
and Hung 1998; Jung and Woo 2004). Fortunately, distinct characteristics of the nuclear industry make it more viable to implement EVMS, because projects are already managed from a higher-level perspective. This paper adds new features for
minimizing EVMS workloads. An example is the selected data linkage between
related systems as discussed in the first objective (O1). Standardized dataset also
utilizes abstracted information for effectiveness while keeping detailed enough
outputs for further development (Jung 2008).
The final objective is ‘augmenting cost engineering’, which focuses on the cost engineering aspect. In owner organizations, it is common that accounting systems, rather than cost engineering systems, are the more sophisticated and better utilized tools for construction cost control. Even though the case-company also has a well-defined accounting system, it does not suffice for the engineering analyses required by EVMS.
Thus, this research proposed a new cost management procedure that satisfies current
organizational policies as well as future EVMS requirements. The authors believe that
reengineering cost management processes alone can dramatically benefit all types
of construction organizations if basic concepts of EVMS are properly applied to cost
control practice.

CHARACTERISTICS OF NUCLEAR CONSTRUCTION EVMS


Nuclear power plant construction has many distinct characteristics as
compared to general industrial plant construction. For the purpose of EVMS
developing, these aspects are briefly discussed. Size of projects, project delivery
systems, progress measurement/payment, and project management policies are explored by comparing three different cases. Even though some attributes are
location-specific (based on local regulations and others), Table 2 provides an
overview how nuclear construction is different from others.

Project Delivery Systems (PDS)

Engineering, procurement, and construction (E/P/C) as a single contract is the typical project delivery system in the nuclear industry. It is notable that a multi-prime
system dominated by major equipment vendors (namely, turbine generator and
nuclear steam supply system) is also used. However, under any sophisticated variation of PDS, the nature of the nuclear construction process will adhere to E/P/C principles in order to maximize the interactions between project participants.
The case-company of this study was awarded a project under E/P/C plus
operation contract (Project B in Table 2), in other words, Design-Build-Maintain
(DBM). E/P/C projects usually require higher level of CAs (bigger CAs) and strong
inter-relationship between E/P/C phases (Jung 2005). In Table 2, it is inferred that
‘Project B’ may have bigger CAs than Project A and C have (considering the project
budget, duration, and project delivery systems).

Table 2. Characteristics of Nuclear Construction EVMS


Description | Project A | Project B | Project C
Survey date | 2010-09-10 | 2010-11-20 | 2010-10-01
Industry | Defense | Nuclear | Civil infrastructure
Project type | R&D + Production | E/P/C/M | Construction
Project duration | About 75 months | About 55 months | About 48 months**
Project budget | 1.3 billion dollars | 20 billion dollars | 0.12 billion dollars**
Delivery system | Multi-prime | DBM | DBB
Contract type | Cost reimbursable | Lump-sum | Total cost w/ unit price
Progress measurement | Milestone w/ percent complete | Earned Standard* | Physical Measurement
Number of CAs in EVMS | 136 | 1,400* | 1,000
* Proposed in this study. ** Approximate average of 50 projects implementing EVMS.

Contract Types

Deciding a contract type for mega construction projects involves many issues
such as politics, regulations, risk sharing, local economy, etc. Despite ‘the highly
uncertain nature of nuclear plant cost estimates’ and ‘the changes toward more
complex hybrid’, fixed price contract serves as a base model in practice (Flaherty
2008). Moreover, as an EPC firm, the concept of fixed price budget is required for the
purpose of risk management and cost engineering under any contract types, including
unit price, reimbursable, and guaranteed maximum price.

EVMS of the case-company uses the lump-sum contract as a default type. However, a limited number of activities (about 2%) are under cost-reimbursable or
unit-price contracts in Project B. Lump-sum contracts for E/P/C projects face the difficult issue of finalizing detailed work items and quantities, especially in the planning stage. This issue is directly related to setting up a project baseline and quantifying planned value (PV; budgeted cost of work planned) and earned value (EV; budgeted cost of work performed). ‘Earned standards’ based on a historical database were chosen as the to-be methodology to solve this problem.

Project Management Policies

Due to the mega-size of the project and the technical complexity, nuclear plant
construction is performed by multiple specialty entities. Therefore, the vertical
integration inside an E/P/C organization, which can be observed in industrial plant
construction, cannot be achieved. For this reason, indirect and contractual integration
among many parties and disciplines is a crucial issue for project management
organization (PMO). EVMS needs to support the PMO to enhance technical and
managerial leadership and to improve organizational learning.
Every single principle for construction project management is equally
important. Among these construction management functions, however, the quality
management is strongly stressed throughout the entire project life cycle in the nuclear
industry. This emphasis on quality empowers the EVMS more viable and effective for
nuclear plant construction by adding quality onto the integrated cost and schedule.
Actual cost (AC; actual cost of work performed) data for construction activities and CAs
can be directly acquired from legacy site inspection systems. The current practice for
progress payment of the case-company utilizes this process.

EVMS MODELS AND PROCEDURE FOR NUCLEAR CONSTRUCTION

Based on the research objectives and requirements, an EVMS model for a nuclear construction project was proposed. First, current information systems were studied. About 18,000 activities were then analyzed and grouped into 1,400 control
accounts (CAs). The research objectives in this paper were fully reviewed to develop
1,400 CAs. Several meetings and workshops were held in order to discuss and
calibrate the CAs. Finally, information system, numbering systems, and
implementation procedures were designed for nuclear EVMS (nEVMS).

EVMS Structure

The basic structure of CAs is in the sequence of phase (e.g. engineering) – unit (e.g. unit 1) – category (e.g. drawing) – subcategory (e.g. building or function).
Regardless of the CA numbering, each activity assigned to a specific CA has an
identification number that follows a different rule for scheduling. The ‘category’ and
‘subcategory’ are not fixed for one facet of information. In order to maximize the
effectiveness (O3 in Table 1), the concept of “flexible work breakdown system”
proposed by Jung and Kang (2007) was applied. For example, any facet of locator
(e.g. building), commodity (e.g. piping), or system (e.g. water circulating) can be
used for subcategory.

Figure 1. nEVMS Structure and outlines


However, the nEVMS numbering system in Figure 1 uses the company’s current standard numbers for its numbering elements, except for the CA structure and some new rules added for flexibility. A self-evolving mechanism that continuously updates and improves the standard CAs for the case-company is under development by the authors. The total
number of CAs is about 1,400 which is concise enough to manage at a glance (Jung et
al. 2000) and also detailed enough to encompass different types of work packages.
Total number, monetary size, duration, and similarity for managerial requirements in
terms of technical capabilities are considered in the CA grouping process.
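As a rough illustration of how such facet-based CA codes might be handled in software (a minimal sketch only; the facet names, example codes, and grouping logic below are hypothetical and are not the case-company’s actual numbering standard), a CA identifier could be composed from the phase–unit–category–subcategory facets and used to group schedule activities:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class ControlAccount:
    """A control account (CA) keyed by the facets described above."""
    phase: str        # e.g. "ENG" (engineering), "PRO", "CON"
    unit: str         # e.g. "U1"
    category: str     # e.g. "DWG" (drawing)
    subcategory: str  # flexible facet: a locator, commodity, or system code

    def code(self) -> str:
        return f"{self.phase}-{self.unit}-{self.category}-{self.subcategory}"

def group_activities(activities):
    """Group schedule activities under their CA, regardless of activity numbering."""
    cas = defaultdict(list)
    for act in activities:
        ca = ControlAccount(act["phase"], act["unit"], act["category"], act["subcategory"])
        cas[ca].append(act["activity_id"])
    return cas

# Hypothetical activities sharing one CA.
activities = [
    {"activity_id": "A1000", "phase": "ENG", "unit": "U1", "category": "DWG", "subcategory": "TB"},
    {"activity_id": "A1010", "phase": "ENG", "unit": "U1", "category": "DWG", "subcategory": "TB"},
]
for ca, acts in group_activities(activities).items():
    print(ca.code(), acts)
```

Because the subcategory facet is free to hold a locator, commodity, or system code, the same grouping routine supports the flexible work breakdown concept without changing the CA key structure.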

EVMS Procedures

The nEVMS requires a reengineered budgeting system as a prerequisite. Self-evolving and knowledge-embedding mechanisms are proposed in order to facilitate the organizational learning process (O2 in Table 1). The officially approved internal project budget (B2) will be used as the base for the project baseline that determines planned
value (PV). It is designed to issue the project budget in the early stage well before
detail design and estimating is performed.
PV for each CA can be calculated by adding all PVs of subordinate activities.
Basic rules of calculating PVs include top-down allocation of weights, temporal
dissemination by historical earned standards, and overall adjustments (Moon 2009).
EV for each activity will be calculated by comparing the actual budgeted cost of work performed (BCWP) against the total quantity. Note that the ‘total quantity’ here may differ from that of PV as the project proceeds, and ‘quantities’ are taken from representative work items only. AC is basically collected by CAs. Cost data from
accounting systems will be decomposed into the CA level. These cost data will be
linked to resource data in order to provide valuable information for cost engineering
as well as standard estimate database (B0).
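The following minimal Python sketch illustrates the kind of roll-up implied by these rules; it computes CA-level PV, EV, and AC and the usual variance and index measures with illustrative numbers only, and is not the case-company’s actual nEVMS algorithm:

```python
def control_account_metrics(activities, actual_cost):
    """Roll activity-level data up to a control account (CA).

    Each activity carries a budget, its planned value (PV) at the data date, and a
    physical percent complete; AC is collected at the CA level, as described above.
    """
    pv = sum(a["pv"] for a in activities)                          # planned value to date
    ev = sum(a["budget"] * a["pct_complete"] for a in activities)  # earned value
    ac = actual_cost                                               # decomposed from accounting
    return {
        "PV": pv, "EV": ev, "AC": ac,
        "SV": ev - pv, "CV": ev - ac,                              # schedule / cost variance
        "SPI": ev / pv if pv else None,
        "CPI": ev / ac if ac else None,
    }

# Hypothetical activities under one CA (monetary units are arbitrary).
activities = [
    {"budget": 120.0, "pv": 80.0, "pct_complete": 0.60},
    {"budget": 200.0, "pv": 100.0, "pct_complete": 0.55},
]
print(control_account_metrics(activities, actual_cost=170.0))
```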
Proposed procedures and techniques are designed to clearly monitor cost and
schedule of on-going projects and to accumulate historical database. These also meet
the research objectives in Table 1. Details of these methods will be introduced in future publications. It is also planned to explore the integration of these systems with
3D-CAD data (BIM applications), so that automated updating and concurrent
engineering can be achieved.

Table 3. nEVMS Standard Procedures.


Description | Method and Procedure | Technique
Budget | B0: Standard Estimate Database; B1: Project Estimate; B2: Project Budget | Knowledge Embedding; Self Evolving; Future Extension to BIM
PV | Standard Weighted Milestones (Top-Down Qty-based Assignment) | Earned Standards; Simplified Resources
EV | Performed/PV (Independent Logic) | Target Progress Concept
AC | Decomposed Cost from Accounting | Qty & Resource Related

CONCLUSIONS
It is observed that distinct characteristics of nuclear power plant construction
make the EVMS implementation more viable and effective. As a demand pull,
strategic needs for enhancing cost and schedule control capabilities under globalized competition require E/P/C firms to adopt EVMS techniques. Finally, the authors conclude that EVMS implementation can be very successful if it is properly optimized in terms of reengineering, workload, and knowledge embedding.

ACKNOWLEDGEMENTS
This study was mainly supported by Korea Hydro and Nuclear Power Co., Ltd.
(KHNP). Partial expenses were also supported from Ministry of Education, Science,
and Technology (MEST) under Grant No. 2009-0074881.

REFERENCES

Deng, M. Z. M, and Hung, Y. E. (1998). “Integrated cost and schedule control: Hong
Kong perspective.” Project Mgmt. J., Project Management Institute (PMI),
29(4), 43-49.
Flaherty, T. (2008). “Navigating Nuclear Risks: New Approaches to Contracting in a
Post-Turnkey World,” Public Utilities Fortnightly, July, 2008, 39-45.
Jung, Y. (2008). "Automated Front-End Planning for Cost and Schedule: Variables for
Theory and Implementation", Proceedings of the 2008 Architectural
Engineering National Conference, ASCE, Denver, USA, doi:
10.1061/41002(328)43.
Jung, Y. and Joo, M. (2011). “Building Information Modeling (BIM) Framework for
Practical Implementation”, Automation in Construction, Elsevier, 20(2), 126-
133.
Jung, Y. and Woo, S. (2004)."Flexible Work Breakdown Structure for Integrated Cost
and Schedule Control", Journal of Construction Engineering and Management,
ASCE, 130(5), 616-625.
Jung, Y., and Kang, S. (2007). "Knowledge-Based Standard Progress Measurement for Integrated Cost and Schedule Performance Control", Journal of
Construction Engineering and Management, ASCE, 133(1), 10-21.
Jung, Y., Park, H., and Moon, J.Y. (2000). Requirements for Integrated Cost and
Schedule Control: Process Redesign Guidelines for the Korean Contractors.
CERIK Working Paper No. 25, Construction & Economy Research Institute of
Korea (CERIK), Seoul, Korea.
Moon, B.-S. (2009). A Study on the Application of EVMS to Nuclear Power Plant
Construction Project, Master’s Thesis, Soongsil University, Seoul, Korea.
Rasdorf, W.J. and Abudayyeh, O.Y. (1991). “Cost- and schedule- control integration:
Issues and needs.” J. Constr. Engrg. and Mgmt., ASCE, 117(3), 486-502.
Richardson, M. (2010). “Nuclear Plant Construction Up; South Korea Challenging
Market”, The Japan Times Online, Monday, Feb. 1, 2010,
http://search.japantimes.co.jp/cgi-bin/eo20100201mr.html.
Evaluating Eco-efficiency of Construction Materials: A Frontier Approach
O.Tatari1 and M. Kucukvar2
1
LEED AP, Assistant Professor, Civil Engineering Dept., Ohio University, Athens,
OH 45701 (corresponding author). email: tatari@ohio.edu
2
Graduate Research Assistant, Civil Engineering Dept., Ohio University, Athens, OH
45701

ABSTRACT
Sustainability assessment tools are critical in the process of achieving
sustainable development. Eco-efficiency has emerged as a practical concept which
combines environmental and economic performance indicators to measure the
sustainability performance of different product alternatives. In this paper, an
analytical tool that can be used to assess the eco-efficiency of construction materials
is developed. This tool evaluates the eco-efficiency of construction materials using
data envelopment analysis; a linear programming based mathematical approach. Life
cycle assessment and life cycle cost are utilized to derive the eco-efficiency ratios,
and data envelopment analysis is used to rank material alternatives. Developed
mathematical models are assessed by selecting the most eco-efficient exterior wall
finish for a school building. Through this study, our goal is to show that DEA-based
eco-efficiency assessment model could be used to evaluate alternative construction
materials and offer vital guidance for decision makers during material selection.

INTRODUCTION
The construction industry is one of the major contributors to environmental
problems such as global warming, ozone depletion, acidification, natural resources
depletion, solid waste generation, and poor indoor air quality. The construction industry
must inevitably employ certain environmental assessment tools in the process of
achieving sustainable development, since it consumes a substantial amount of natural
and physical resources and has significant environmental burdens during its life cycle.
In order to measure progress, several metrics need to be devised. Although not
adopted widely in the construction industry, eco-efficiency has emerged as an
alternative tool that combines environmental and economic performance indicators to
measure the sustainability performance of different design alternatives.
The objective of this paper is to develop an analytical tool that can be used to
assess the eco-efficiency of construction materials. This tool is used to evaluate the
projects using data envelopment analysis (DEA), a linear programming based
mathematical approach. LCA and LCC are used to derive the eco-efficiency ratios,
and DEA is utilized to rank alternatives without a need to subjectively weight life
cycle impact dimensions and LCC. The developed mathematical models will be assessed by
selecting the most eco-efficient exterior wall finish for a building. The rest of the
paper is organized as follows. First, the need for eco-efficiency assessment is discussed. Next, basic aspects of DEA are explained. Then, the data collection and
model development are described. Next, analysis results and discussion are presented.
Finally, the findings are summarized and future work is pointed out.

ECO-EFFICIENCY ASSESSMENT
Eco-efficiency is defined as the delivery of the competitively priced goods and
services that satisfy human needs and enhance the quality of life while progressively
reducing ecological impacts and resources intensity throughout product life cycles to
a level appropriate with the estimated capacity of the Earth (Kibert 2008). Eco-
efficiency ratio consists of two independent variables; an economic variable
measuring the value of products or services added and an environmental variable
measuring their added environmental impacts. The ratio expresses how efficient the
economic activity is with regard to nature's goods and services. According to the
definition, eco-efficiency is measured as the ratio between the added value of what
has been produced (income, high quality goods and services, jobs, GDP etc) and the
added environmental impacts of the product or service (Zhang et al. 2008). Eco-
efficiency improvement can be accomplished by reducing the environmental impact
added while increasing the economic value added for products or services during their
life cycle. Eco-efficiency analysis has been used successfully as a valuable
assessment tool to assess sustainability in various domains (Kicherer et al. 2007;
Korhonen and Luptacik 2004; Kuosmanen and Kortelainen 2005). In this study, LCA
and LCC were utilized as denominator and numerator for eco-efficiency ratio:
Eco-efficiency ratio = LCC / LCA        (1)
The approach of utilizing LCC to represent the economic value added has been
adopted in several research studies (Saling et al. 2002). The main advantage in
utilizing LCC is to be able to account for all costs associated with the life cycle
environmental impacts. As a result, this would properly assess the economic value for
the whole life cycle.

ECO-EFFICIENCY WITH DATA ENVELOPMENT ANALYSIS


Several researchers have proposed the use of DEA to evaluate the eco-
efficiency (Barba-Gutiérrez et al. 2009; Hua et al. 2007; Korhonen and Luptacik
2004; Kuosmanen and Kortelainen 2005). DEA has been used as an effective tool to
measure efficiency of decision making units (DMUs) in a given context, and has been
utilized in over 4,000 scientific journal articles or book chapters (Emrouznejad et al.
2008). DEA has also been utilized in few studies in the construction research domain
(El-Mashaleh 2010; Juan 2009; McCabe et al. 2005; Ozbek et al. 2009; Xue et al.
2008). Most studies have concentrated on evaluating the productivity and technical
performance of the studied phenomena.
The basic premise of DEA is to assess the efficiency of one DMU relative to
other decision making units in consideration. To achieve this, a linear program is
constructed for each DMU. The basic mathematical program, formulated by Charnes, Cooper, and Rhodes (1978) and termed the CCR model, is as follows:

max z = ( Σ_r μ_r y_ro ) / ( Σ_i v_i x_io )        (2)

subject to

( Σ_r μ_r y_rj ) / ( Σ_i v_i x_ij ) ≤ 1   for each DMU j        (3)

μ_r , v_i ≥ 0   for all r, i        (4)

where µr is the output multiplier, vi is the input multiplier, o is the DMU which is
being evaluated, s represents the number of outputs, m represents the number of
inputs, j is the number of DMUs, yrj is the amount of output r produced by DMU j,
and xij is the amount of input i used by DMU j. The objective function z is the ratio of the weighted sum of outputs to the weighted sum of inputs for the DMU under evaluation. DEA handles multiple inputs and outputs and seeks to minimize the inputs required to produce the desired outputs. If no combination of the other DMUs’ inputs can produce the output of the DMU under consideration, then that DMU is on the efficient frontier. In cases where a combination of the other DMUs’ inputs can produce the output of the DMU under consideration, that DMU is considered inefficient, since those inputs were able to produce more output than the DMU in question.
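To make the mechanics concrete, the sketch below solves the standard linearized (multiplier-form) version of the CCR model with scipy’s linprog; the data are made up, and the normalization constraint Σ_i v_i x_io = 1 is the usual Charnes–Cooper linearization of the fractional program in Eqs. (2)–(4), not the original authors’ implementation:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (linearized multiplier form).

    X: (n_dmus, n_inputs) inputs, Y: (n_dmus, n_outputs) outputs.
    Maximize sum_r mu_r*y_ro subject to sum_i v_i*x_io = 1 and, for every DMU j,
    sum_r mu_r*y_rj - sum_i v_i*x_ij <= 0, with mu, v >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])                    # minimize -(weighted outputs of o)
    A_ub = np.hstack([Y, -X])                                   # weighted outputs - weighted inputs <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)   # normalize weighted inputs of o to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun                                             # efficiency score in (0, 1]

# Illustrative data: 4 DMUs, 2 inputs, 1 identical output.
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
for o in range(len(X)):
    print(f"DMU {o + 1}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```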
DEA has also been used to measure eco-efficiency. Eco-efficiency ratio was
modeled as input-output model where environmental impacts represent the inputs to
the system and the economic value added as the output of the system (Kuosmanen
and Kortelainen 2005). As a result, the environmental impacts are forced to be
minimized to achieve the same level of economic value. Alternatives that need more
environmental impacts to produce the same level of economic value were deemed as
inefficient. DEA can be adapted to mitigate the subjective judgment about the
weights of the environmental and economic performance indicators, since DEA does
not require a priori weight assignments (Kuosmanen 2005).

MODEL DEVELOPMENT
Figure 1 presents the general DEA framework in modeling eco-efficiencies of
construction materials. According to DEA notation in Fig. 1, the inputs constitute
LCA and the output constitutes LCC. Utilizing this framework, two DEA models
were developed: the CCR-based ECODEA-1 model and the weight-restricted ECODEA-2 model.

Figure 1. Inputs and outputs of construction materials eco-efficiency framework.



CCR-Based ECODEA model


The first model, ECODEA-1, utilized the CCR model depicted above.
Kuosmanen and Kortelainen (2007) first introduced the viability of this model for
eco-efficiency ratio calculation. The CCR model depicted above transforms to the
following equations:

max z = y_o / ( Σ_i v_i x_io )        (5)

subject to

y_j / ( Σ_i v_i x_ij ) ≤ 1   for each DMU j        (6)

v_i ≥ 0   for all i        (7)
where y_j represents the life cycle cost of DMU j and y_o that of DMU o. Since life cycle cost is the only
output, output multipliers are not needed for the model. The DMU is regarded as eco-
efficient when z = 1. This model does not force any weight restrictions on
environmental impacts. Thus, the weights for environmental impacts can be chosen freely so as to maximize the relative eco-efficiency of the DMU with respect to the other compared DMUs (Kortelainen 2008). To solve this model as a linear program, it is
linearized by taking the inverse of the eco-efficiency ratio as follows:

min 1/z = ( Σ_i v_i x_io ) / y_o        (8)

subject to

( Σ_i v_i x_ij ) / y_j ≥ 1   for each DMU j        (9)

v_i ≥ 0   for all i        (10)

This mathematical model is solved through linear programming, and the eco-efficiency ratio is derived by taking the inverse of the optimal objective value.
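A minimal sketch of Eqs. (8)–(10), again using scipy’s linprog with made-up impact and LCC values rather than the BEES data, could look as follows; the eco-efficiency ratio is recovered as the inverse of the optimal objective:

```python
import numpy as np
from scipy.optimize import linprog

def ecodea1_ratio(impacts, lcc, o):
    """Eco-efficiency of alternative o per Eqs. (8)-(10).

    impacts: (n_dmus, n_impacts) life cycle impact scores (the DEA inputs).
    lcc:     (n_dmus,) life cycle costs (the single DEA output).
    Minimize sum_i v_i*x_io / y_o subject to sum_i v_i*x_ij / y_j >= 1 for every
    DMU j and v_i >= 0; the eco-efficiency ratio is 1 / optimum.
    """
    c = impacts[o] / lcc[o]
    A_ub = -impacts / lcc[:, None]        # rewrite ">= 1" constraints as "<= -1" for linprog
    b_ub = -np.ones(len(lcc))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return 1.0 / res.fun

# Illustrative data: 3 alternatives, 2 impact categories, equal LCC.
impacts = np.array([[10.0, 5.0], [6.0, 8.0], [12.0, 12.0]])
lcc = np.array([7.0, 7.0, 7.0])
for o in range(3):
    print(f"DMU {o + 1}: eco-efficiency = {ecodea1_ratio(impacts, lcc, o):.2f}")
```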

ECODEA OF EXTERIOR WALL FINISHES


Possible exterior wall finishes for a building were utilized to illustrate the use
of the ECODEA models for material selection. LCA and LCC data regarding exterior
wall finishes of a building were extracted from the BEES 4.0 software. The functional unit
selected for the exterior wall finishing materials was 1 ft2 of exterior surface over 50
years. The building is assumed to be using electricity for heating and cooling needs
and is located in the city of Atlanta, GA. Phases of LCA, including raw materials
extraction, transportation of raw materials to manufacturing, manufacturing,
transportation to site, installation at site, use, and end of life were estimated by the BEES
software. Transportation distance from manufacturer to use was calculated based on the distance of factory locations from Atlanta.
The resultant environmental impact was calculated utilizing the environmental
impact categories based on TRACI, as shown in Table 1. The impact categories that
were assessed are acidification (ACD), ecological toxicity (TOX), eutrophication
(EUT), global warming (GWP), fossil fuel depletion (FFD), smog (SMG), water intake (WTR), human health (HHL), ozone depletion (OZD), and habitat alteration
(HAB). BEES software was also utilized for estimating the overall cost of wall
finishes over the 50-year study period. In the BEES model, cost categories involve
costs for purchase, installation, operation, maintenance, repair, replacement and the
negative cost item of residual value which is the product value remaining at the end
of the study period (Lippiatt 2007). Since there was an imbalance in the data
magnitude, the data were mean normalized; a procedure that is followed to prepare
the data for DEA (Sarkis 2007). Mean normalization was conducted by calculating
the mean for each input and output, and dividing each input or output by its respective
mean.
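A one-line version of this mean normalization, assuming pandas and using three rows of Table 1 purely for illustration, is sketched below:

```python
import pandas as pd

# Three rows of Table 1 (ACD, GWP, LCC only) used as a small illustration.
raw = pd.DataFrame(
    {"ACD": [7614.7, 7493.4, 511.6], "GWP": [18164.0, 17702.8, 1755.3], "LCC": [7.3, 7.3, 7.0]},
    index=["ABR1", "ABR2", "DECO"],
)
normalized = raw / raw.mean()   # divide every column by its own mean
print(normalized.round(3))
```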
Table 1. Environmental impact and LCC of exterior wall finishes

Exterior Wall Finishes   ACD   TOX   EUT   GWP   FFD   SMG   WTR   HHL   CAP   LCC
ABR1 7,614.7 41.7 2.4 18,164.0 15.1 61.7 3.6 93.8 2.1 7.3
ABR2 7,493.4 37.7 2.3 17,702.8 15.0 60.1 3.5 5.2 2.0 7.3
GBRM 7,665.2 42.5 2.4 18,368.9 15.2 62.5 3.8 167.8 2.1 11.7
CDRS 9,024.3 45.7 3.0 20,429.9 10.6 70.9 7.3 3.8 2.5 6.7
DECO 511.6 6.0 0.3 1,755.3 3.5 8.5 13.6 89.9 0.2 7.0
HFRS 9,238.0 51.6 2.8 21,274.1 10.6 77.1 1.0 99.1 2.6 2.4
HSST 9,271.3 53.0 2.9 21,421.3 10.7 77.6 1.3 126.6 2.6 3.2
HMCT 9,266.3 52.9 2.9 21,401.6 10.7 77.5 1.3 120.5 2.6 3.2
TRMP 9,903.2 48.7 3.2 22,515.5 16.3 83.6 8.2 4.6 2.7 23.5
GSTC 9,110.9 52.2 2.7 20,936.9 9.8 73.1 1.3 138.3 2.6 3.2
GVNL 9,666.5 49.1 2.7 20,852.9 12.8 71.2 0.2 20.0 2.7 3.6
Units of measurement: ACD (milligrams H+ equivalents/unit), TOX (grams 2,4-
dichlorophenoxy-acetic acid equivalents/unit), EUT (grams nitrogen equivalents/unit), GWP
(grams CO2 equivalents/unit), FFD (MJ/unit), SMG (grams NOx equivalents/unit), WTR
(liters/unit), HHL (grams benzene equivalents/unit), CAP (micro disability-adjusted life
years/unit), LCC (Present Value $/unit)

RESULTS AND DISCUSSION


ECODEA was solved and the results were ranked based on the eco-efficiency
ratios. ECODEA LP models were solved eleven times; one for each construction
material, and the optimal weights resulting from each run were recorded as shown in
Table 2. The calculated optimal weights v show which inputs have been utilized
for each DMU for their calculation. For instance, for DMU 1, the weights show that
eco-efficiency has been calculated using only ACD and WTR, whereas other impact
categories were all 0. ECODEA-1 results indicate that eco-efficiency ratios range from 0.44 to 1. Among wall finishes, DECO, TRMP, and GVNL were found to be 100% eco-efficient. CDRS was found to be the least eco-efficient (0.44) when
compared with the other exterior wall finishes in the study.
Table 2. ECODEA results and corresponding weights
Wall Finish   DMU   Ratio   Rank   ACD   TOX   EUT   GWP   FFD   SMG   WTR   HHL   CAP
ABR1   1   0.64   6   0   0   0   0   0   0   1.42   0   0.35
ABR2   2   0.70   5   0   0   0   0   0   0   1.59   1.64   0
GBRM   3   0.98   4   0.35   0   0   0   0   0   1.42   0   0
CDRS   4   0.44   11   0   0   0   0   2.37   0   0.24   0   0
DECO   5   1.00   3   0   0   0   0   2.25   0   0.09   0   0
HFRS   6   0.47   10   0   0   0   0   0.40   0   1.36   0   0
HSST   7   0.57   9   0   0   0   0   0.40   0   1.36   0   0
HMCT   8   0.58   8   0   0   0   0   0.40   0   1.36   0   0
TRMP   9   1.00   1   0.35   0   0   0   0   0   1.42   0   0
GSTC   10   0.59   7   0   0   0   0   0.40   0   1.36   0   0
GVNL   11   1.00   1   0   0.38   0   0   0   0   1.43   0   0
(Columns ACD through CAP give the optimal weights v for each environmental impact category.)
Table 3. ECODEA-1 based percent improvements of exterior wall finishes
Wall Finish   DMU   ACD   TOX   EUT   GWP   FFD   SMG   WTR   HHL   CAP   (percent improvements, %)
ABR1 1 -36 -41 -39 -40 -51 -37 -36 -94 -36
ABR2 2 -47 -47 -46 -49 -58 -46 -31 -30 -46
GBRM 3 -2 -11 -5 -9 -25 -3 -2 -95 -2
CDRS 4 -69 -69 -70 -68 -56 -66 -67 -56 -69
DECO 5 0 0 0 0 0 0 0 0 0
HFRS 6 -60 -64 -64 -62 -53 -64 -53 -93 -60
HSST 7 -52 -58 -56 -55 -43 -57 -43 -94 -52
HMCT 8 -51 -57 -56 -54 -42 -56 -42 -93 -52
TRMP 9 0 0 0 0 0 0 0 0 0
GSTC 10 -54 -60 -56 -56 -41 -57 -41 -95 -55
GVNL 11 0 0 0 0 0 0 0 0 0
DEA also offers insights about the percent improvements that could be made to reduce environmental impacts, while LCC is held constant, in order to reach 100% eco-efficiency (see Table 3). Although it is not always possible to reduce the environmental impacts of materials, percent improvement analysis gives
important information regarding ecological inefficiencies. This information could be
used to achieve dematerialization or aid in selecting more eco-efficient sub-materials
during production of exterior wall finishes. For instance, based on ECODEA-1, for
ABR1 to become 100% eco-efficient, it needs to reduce ACD 36%, TOX by 41%,
EUT by 39%, GWP by 40%, FFD by 51%, SMG by 37%, WTR by 36%, HHL by
94%, and CAP by 36%. It is worth noting that DECO, TRMP, and GVNL do not
need any improvement in reducing their environmental impacts, since they are 100%
eco-efficient. The same analysis could be done using ECODEA-2, as well.
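The radial portion of this reading can be reproduced directly from the eco-efficiency ratio: with LCC held constant, each impact of an inefficient alternative could in principle be reduced by (1 − ratio). The sketch below applies this to a few ABR1 values taken from Tables 1 and 2; the larger reductions reported in Table 3 (e.g. −94% for HHL) additionally reflect slacks identified by the full DEA solution, which this simplified sketch does not compute:

```python
def radial_improvement_targets(impacts, eco_efficiency):
    """Radial part of the Table 3 reading: with LCC held constant, every impact of an
    inefficient alternative can in principle be scaled down by (1 - eco-efficiency).
    Columns with extra slack need larger cuts than this radial amount."""
    reduction = 1.0 - eco_efficiency
    return {name: (-100.0 * reduction, value * eco_efficiency)
            for name, value in impacts.items()}

# ABR1 from Tables 1 and 2: eco-efficiency 0.64, so the radial cut is about -36%.
abr1 = {"ACD": 7614.7, "GWP": 18164.0, "SMG": 61.7}
for name, (pct, target) in radial_improvement_targets(abr1, 0.64).items():
    print(f"{name}: {pct:.0f}% -> target {target:,.1f}")
```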
The results showed that DEA is an effective tool to evaluate construction
material alternatives and offer a critical insight to the decision maker that can lead to
buildings that use much more eco-efficient materials. Percent improvement analysis
provided valuable information to the decision makers regarding which environmental
impacts need more improvements. Although BEES model was used to calculate both
LCA and LCC, other LCA software tools, such as SimaPro and Athena, could be
utilized as well. Since the mentioned LCA software tools utilized process-based LCA
methodology, the results are expected to be similar to the study here. Yet, it should be
noted that SimaPro and Athena do not utilize TRACI environmental impact categories, and their raw data would need to be used to calculate these categories on a separate platform.

CONCLUSIONS
In this paper, a DEA-based eco-efficiency assessment framework is presented
as an effective and practical way to evaluate construction materials. The developed
framework utilized LCC and LCA as numerator and denominator for calculating the
eco-efficiency ratio and solved LP models to calculate eco-efficiency ratios for
exterior wall finishes. The model identified DECO, TRMP, and GVNL as 100% eco-efficient. Percent improvement analysis was carried out to investigate the environmental
impact categories that need to be reduced to reach 100% eco-efficiency. Eco-
efficiency ratios were analyzed for two cities to compare the results and gain more
insight.
This paper makes several contributions to construction research, including
developing a mathematical model that does not require subjective weighting to assess
the sustainability of construction materials, and presenting a practical way to apply
eco-efficiency to construction materials. The analysis of DEA results could be very
helpful to decision makers to compare relative eco-efficiency of building materials.
However, it should be noted that DEA evaluates eco-efficiency only relative to the other alternatives in the data set. This is a major drawback of DEA, since the eco-efficiency ratios are relative to the eco-efficiency of other materials in the data set. Also,
accuracy of the results depends on the accuracy of the data extracted. Taking these
limitations into consideration, the developed DEA-based eco-efficiency assessment
models could provide immediate assessment of building material eco-efficiency and
offer vital guidance for decision makers during material selection. In future work, the
scope of the study could be expanded to address more complex decision making
situations in construction projects. Furthermore, different DEA formulations could be
developed and assessed for different decision making settings.

REFERENCES
Asif, M., Muneer, T., and Kelley, R. (2007). "Life cycle assessment: A case study of
a dwelling home in Scotland." Building and Environment, 42(3), 1391-1394.
Barba-Gutiérrez, Y., Adenso-Díaz, B., and Lozano, S. (2009). "Eco-efficiency of electric and electronic appliances: A Data Envelopment Analysis (DEA)."
Environmental Modeling and Assessment, 14(4), 439-447.
El-Mashaleh, M. (2010). "Decision to bid or not to bid: a data envelopment analysis
approach." Canadian Journal of Civil Engineering, 37(1), 37-44.
Emrouznejad, A., Parker, B., and Tavares, G. (2008). "Evaluation of research in
efficiency and productivity: A survey and analysis of the first 30 years of
scholarly literature in DEA." Socio-Economic Planning Sciences, 42(3), 151-
157.
Hua, Z., Bian, Y., and Liang, L. (2007). "Eco-efficiency analysis of paper mills along
the Huai River: An extended DEA approach." Omega, 35(5), 578-587.
Juan, Y. (2009). "A hybrid approach using data envelopment analysis and case-based
reasoning for housing refurbishment contractors selection and performance
improvement." Expert Systems With Applications, 36(3), 5702-5710.
Kicherer, A., Schaltegger, S., Tschochohei, H., and Pozo, B. F. (2007). "Eco-
efficiency - Combining life cycle assessment and life cycle costs via
normalization." International Journal of Life Cycle Assessment, 12(7), 537-
543.
Korhonen, P., and Luptacik, M. (2004). "Eco-efficiency analysis of power plants: An
extension of data envelopment analysis." European Journal of Operational
Research, 154(2), 437-446.
Kortelainen, M. (2008). "Dynamic environmental performance analysis: A Malmquist
index approach." Ecological Economics, 64(4), 701-715.
Kuosmanen, T., and Kortelainen, M. (2005). "Measuring eco-efficiency of production
with data envelopment analysis." Journal of Industrial Ecology, 9(4), 59-72.
Lippiatt, B. (2007). "BEES 4.0: Building for Environmental and Economic
Sustainability Technical Manual and User Guide." NIST, Gaithersburg, MD.
McCabe, B., Tran, V., and Ramani, J. (2005). "Construction prequalification using
data envelopment analysis." Canadian Journal of Civil Engineering, 32(1),
183-193.
Ozbek, M., Jesús, M., and Triantis, K. (2009). "Data Envelopment Analysis as a
Decision Making Tool for the Transportation Professionals." Journal of
Transportation Engineering, 135(11), 822-831.
Saling, P., Kicherer, A., Dittrich-Krämer, B., Wittlinger, R., Zombik, W., Schmidt, I.,
Schrott, W., and Schmidt, S. (2002). "Eco-efficiency analysis by BASF: The
method." The International Journal of Life Cycle Assessment, 7(4), 203-218.
Sarkis, J. (2007). "Preparing your data for DEA." Modeling Data Irregularities and
Structural Complexities in Data Envelopment Analysis, 305-320.
Xue, X., Shen, Q., Wang, Y., and Lu, J. (2008). "Measuring the Productivity of the
Construction Industry in China by Using DEA-Based Malmquist Productivity
Indices." Journal of Construction Engineering and Management, 134(1), 64-
71.
Zhang, B., Bi, J., Fan, Z. Y., Yuan, Z. W., and Ge, J. J. (2008). "Eco-efficiency
analysis of industrial system in China: A data envelopment analysis
approach." Ecological Economics, 68(1-2), 306-316.
Analysis of Critical Parameters In The ADR Implementation Insurance Model

Xinyi Song1, Carol C. Menassa2, Carlos A. Arboleda3 and Feniosky Peña-Mora4


1
PhD student in Construction Management, Department of Civil Engineering and
Engineering Mechanics, Columbia University, New York, NY, 10025, USA; Phone
217-819-1088; xs2149@columbia.edu
2
M. A. Mortenson Company Assistant Professor of Construction Engineering and
Management, Department of Civil and Environmental Engineering, University of
Wisconsin-Madison; Phone 608-890-3276; Fax 608-262-5199; menassa@wisc.edu
3
Infrastructure Project Director, Conconcreto S.A, Carrera 42 75 - 125 Autopista Sur,
Itagui, Colombia; +57- 4-402-5778; aarboleda@conconcreto.com
4
Dean, Fu Foundation School of Engineering and Applied Science and Morris A. and
Alma Schapiro Professor, Professor of Civil Engineering and Engineering Mechanics
and of Earth and Environmental Engineering; Phone 212- 854- 6574; Fax 212 864
0104; feniosky@columbia.edu

ABSTRACT
In construction projects the implementation of Alternative Dispute Resolution (ADR)
techniques requires capital expenditures to cover related costs such as fees and
expenses paid to the owner’s/contractor’s employees, lawyers, claims consultants,
third party neutrals, and other experts associated with the resolution process. Since
most projects today operate on tight budgets, one way to ease the potential for
variations from an already financially stressed project budget is to price ADR
techniques as an insurance product. However, since the premium charged by the insurance company is designed to cover its underwriting expenses and profit target,
the benefits of purchasing ADR implementation insurance for a specific project must
outweigh its cost for the investment to be worthwhile. A number of factors in the
ADR implementation insurance model combine to determine whether it is financially
advantageous for project participants to invest in ADR implementation insurance, and
the purpose of this paper is to identify and analyze the critical parameters in the
model. Sensitivity analysis is conducted on the effectiveness of each ADR technique
chosen for the project, average ADR implementation cost on each stage of dispute
resolution, and distribution of possible disputes. These results will help determine the
most critical factors related to the pricing of ADR as an insurance product.

INTRODUCTION
Although using Alternative Dispute Resolution (ADR) techniques such as
negotiation, mediation or Dispute Review Board (DRB) to resolve disputes has been
widely adopted in construction projects as a more effective and cost-saving approach
compared to litigation, ADR implementation costs incurred throughout the dispute
resolution process sometimes could account for a large portion of the
settlement/award amount, the original claim amount, and even the total contract value
(Gebken II and Gibson 2006). Typical ADR implementation costs may include fees

and expenses paid to the owner’s/contractor’s employees, lawyers, claims consultants, third party neutrals, and other experts associated with the resolution process (Gebken
II and Gibson 2006, Menassa and Peña-Mora 2009). However, because the number of
disputes and the amount of ADR implementation costs for each dispute will not be known until disputes actually occur during the construction phase, project participants have to face the uncertainty of unexpectedly high costs. From the
perspective of transferring risk, pricing ADR implementation costs as an insurance
product is worth being considered in order to shift the uncertainty of potential
implementation costs from project participants to the insurance company (Song et al
2009). In this process, the insurance company reimburses any costs incurred related to ADR implementation, and in return it receives a premium. However, since the premium charged by the insurance company is designed to cover its underwriting
expenses and profit target, the benefits of purchasing ADR implementation insurance
for a specific project must outweigh its cost for the investment to be worthwhile.
Thus, the key to the ADR implementation insurance model is to find the optimal
premium acceptable to both project participants and the insurance company. A
number of factors in the model combine to determine whether it is financially
advantageous for project participants to invest in ADR implementation insurance, and
the purpose of this paper is to identify and analyze the critical parameters in the
model. Sensitivity analysis is conducted on the effectiveness of each ADR technique
chosen for the project, average ADR implementation cost on each stage of dispute
resolution, and distribution of possible disputes. These results will help determine the
most critical factors related to the pricing of ADR implementation as an insurance
product.

ADR IMPLEMENTATION INSURANCE MODEL

The ADR implementation insurance model proposed by Song et al. (2010) is constructed to help project participants determine whether investing in ADR
implementation insurance is beneficial for a certain project. It includes five key parts
as shown in the flow chart in Figure 1. First, by drawing analogy from seismic risk
insurance, Event Tree Analysis (ETA) is used to simulate scenarios of dispute
resolution process and to determine the probability mass function of ADR
implementation costs (Hoshiya et al. 2004). These probabilities are then employed to
calculate the total expected ADR implementation costs, based on which the policy premium is derived. Then, the gross premium as quoted by an insurance company is
calculated and compared with the maximum fixed cost derived from subjective loss to
determine whether insurance is acceptable to project participants. Subjective loss is
defined as the negative value attached by project participants to the uncertain ADR
implementation costs that they might incur based on their degree of aversion to the
risk that they face. Unlike the traditional definition of a utility function, a subjective
loss function (SLF) is used in this research to indicate the negative utility u(c) that is
attached to a given loss amount of ADR cost c resulting from implementation of the
dispute resolution process. For risk-averse project participants, their subjective
function is a convex upward function and the maximum premium they should be
willing to pay is: GP = E(u(C)) (Bowers et al. 1997).

[Flow chart: disputes occur and go through the contractual DRL → probability-weighted scenarios for possible resolution outcomes (ETA) → total expected ADR implementation costs → determine project participants’ Subjective Loss Function (SLF) → determine subjective loss of ADR implementation costs → determine Gross Premium to cover ADR implementation costs → determine if insurance is necessary]

Figure 1 Analytic flow of the ADR insurance model

First, Event Tree Analysis (ETA) is a graphical representation of a logic
model that identifies and quantifies all possible outcomes resulting from an accidental
initiating event (Rausand and Høyland 2005). In seismic risk analysis, ETA is utilized
to identify the sequence of damage states and their probabilities for the structure of
concern (Hoshiya et al. 2004; U.S. Nuclear Regulatory Commission 1975). In this paper, ETA
is used to help identify scenarios of the dispute resolution process and quantitatively
determine the probability of the corresponding ADR implementation cost, making it
possible to calculate the total expected ADR implementation costs. It first sets up the
event of dispute occurrence as a specified condition. Assume the contractual Dispute
Resolution Ladder (DRL) has m stages: ADR1, ADR2, …, ADRm. For
the jth stage, assume the effectiveness of ADRj is kj, and the average cost for ADRj is
cj. For example, k1 = 0.5 means 50% of the disputes can be resolved in the first stage.
When a dispute occurs, it first goes to ADR1, the first stage of the contractual DRL.
If dispute resolution does not reach a settlement satisfactory to both parties, it will go
to the next stage ADR2, and so on. The whole process is shown in Figure 2 in the
illustrative example.

Then, use the probability mass function derived by ETA to calculate the Total
Expected ADR Implementation Costs. Without loss of generality, the risk of
incurring ADR implementation costs in any construction project can be
mathematically represented by:

1. n, the total number of disputes occurring in the period from the notice to
proceed (t = 0) to the project completion (t = T); n = N1, N2,.., Nk with
probability q1, q2,.., qk respectively, where N1 is the minimum possible
number of disputes and N1 ≥ 0, while Nk is the maximum number of possible
disputes. Since construction disputes occur randomly over time, the arrival of
disputes can be approximated with a Poisson Process with occurrence rate λ
(Touran 2003).

2. cj, the average amount of ADR implementation costs for each dispute
resolution process, where j = 1, 2,…, m represents the jth stage on the
contractual DRL. Then, for each dispute, its resolution process bears m
possible outcomes: resolved at ADR1 and cost c1, resolved at ADR2 and cost
c2, … , resolved at ADRm and cost cm, with probability p1, p2, …, pm,
respectively, where p1 + p2 + … + pm = 1, and

pj = kj (1 − k1)(1 − k2) … (1 − kj−1)      Eq. (1)

Assume that the cost on each stage is independent.

3. For the ith dispute (i = 1, 2, …, n), define xij = 1 if the ith dispute is
resolved in the jth stage; otherwise, xij = 0. Thus xj = Σi xij represents the
total number of disputes that are resolved in the jth stage, and (x1, x2, …, xm) follows a
multinomial distribution M(n, p1, p2, …, pm), with the expected value E(xj) = n
pj, where j = 1, 2, …, m. Specifically, when m = 2, xj follows a binomial
distribution B(n, pj). E(xj) is the expected number of disputes that are
resolved in the jth stage.

4. Among all n disputes, there are a total of R different possible outcomes. For
each outcome, there could be xj disputes resolved with ADRj. Consequently,
the total ADR implementation cost throughout the time horizon for the rth
outcome is Cr = Σj xj cj, with a probability of Πj pj^xj, given a total
of n disputes. The number of outcomes that bear the same total cost and
probability is the multinomial coefficient n!/(x1! x2! … xm!).

Then the total expected ADR cost is:

E(C) = Σk qk Σr [Nk!/(x1! x2! … xm!)] p1^x1 p2^x2 … pm^xm (Σj xj cj)      Eq. (2)

where, for each Nk, the inner sum runs over all R outcomes (x1, x2, …, xm) with x1 + x2 + … + xm = Nk.
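As a minimal, hedged illustration of Eq. (1) and Eq. (2), the following Python sketch computes the stage resolution probabilities from assumed effectiveness values and the resulting total expected ADR cost; all numerical inputs (effectiveness values, stage costs, and the dispute-count distribution) are hypothetical placeholders, since the actual project data appear only in the illustrative example below.

# Sketch of Eq. (1) and Eq. (2): stage resolution probabilities from the
# ADR effectiveness values, and the total expected ADR implementation cost.
# All numerical inputs are hypothetical placeholders.

def stage_probabilities(k):
    """pj = kj * (1 - k1) * ... * (1 - k_{j-1}); the last stage absorbs the
    remaining probability so that the pj sum to 1, as stated above."""
    p, remaining = [], 1.0
    for kj in k[:-1]:
        p.append(kj * remaining)
        remaining *= 1.0 - kj
    p.append(remaining)
    return p

def expected_total_cost(counts, probs_n, k, c):
    """E(C): by linearity of expectation this equals the multinomial
    enumeration of Eq. (2), i.e. E(n) * sum_j pj * cj."""
    p = stage_probabilities(k)
    cost_per_dispute = sum(pj * cj for pj, cj in zip(p, c))
    expected_disputes = sum(q * n for q, n in zip(probs_n, counts))
    return expected_disputes * cost_per_dispute

if __name__ == "__main__":
    k = [0.5, 0.6, 0.7]                # hypothetical effectiveness of ADR1..ADR3
    c = [0.02, 0.06, 0.15]             # hypothetical average stage costs (MM$)
    counts = [1, 2, 3, 4]              # possible numbers of disputes N1..Nk
    probs_n = [0.2, 0.4, 0.3, 0.1]     # their probabilities q1..qk
    print(stage_probabilities(k))      # [0.5, 0.3, 0.2]
    print(expected_total_cost(counts, probs_n, k, c))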

The fourth step in the flow chart is to calculate the Total Expected Subjective
Loss of ADR Implementation Costs. As mentioned earlier, a subjective loss function
(SLF) is used to indicate the negative utility u(c) that project participants attach to a
given loss amount of ADR implementation costs C resulting from dispute resolution.
The total expected subjective loss could be expressed as follows:

E(u(C)) = Σk qk SL(Nk)      Eq. (3)

where SL(n) is the total subjective loss when the total number of disputes is n.

Eq. (4) defines this total subjective loss as

SL(n) = Σr [n!/(x1! x2! … xm!)] p1^x1 p2^x2 … pm^xm u(Σj xj cj)      Eq. (4)

The last step of the model is to compare the gross premium and expected subjective
loss and to determine whether investing in ADR implementation insurance is
favorable. If GP ≤ E(u(C)), then there exists the possibility for an insurance policy.
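For small dispute counts, Eq. (3) and Eq. (4) can be evaluated exactly by enumerating every multinomial outcome, and the resulting E(u(C)) can then be compared against a quoted gross premium as described above. The Python sketch below does this; the stage probabilities, stage costs, dispute-count distribution, and gross premium are hypothetical placeholders, and the subjective loss function is the one adopted later in the illustrative example.

# Sketch of Eq. (3)-(4): exact expected subjective loss E(u(C)) by enumerating
# all outcomes (x1,...,xm) of n disputes over m stages, then the GP <= E(u(C))
# check.  All numerical inputs are hypothetical placeholders.
from math import exp, factorial

P = [0.5, 0.3, 0.2]                           # hypothetical stage probabilities (Eq. 1)
C = [0.02, 0.06, 0.15]                        # hypothetical stage costs (MM$)
DISPUTES = {1: 0.2, 2: 0.4, 3: 0.3, 4: 0.1}   # hypothetical q_k for N_k disputes

def u(x):                                     # SLF from the illustrative example
    return x + 1880.0 * (exp(0.007 * x) - 1.0)

def outcomes(n, m):
    """Yield every (x1,...,xm) of nonnegative integers summing to n."""
    if m == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in outcomes(n - first, m - 1):
            yield (first,) + rest

def multinomial(xs):
    coeff = factorial(sum(xs))
    for x in xs:
        coeff //= factorial(x)
    return coeff

def expected_subjective_loss():
    total = 0.0
    for n, q in DISPUTES.items():
        sl_n = 0.0
        for xs in outcomes(n, len(P)):                    # Eq. (4)
            prob = multinomial(xs)
            for p, x in zip(P, xs):
                prob *= p ** x
            sl_n += prob * u(sum(x * c for x, c in zip(xs, C)))
        total += q * sl_n                                 # Eq. (3)
    return total

gross_premium = 0.40                          # hypothetical quoted premium (MM$)
e_u = expected_subjective_loss()
print(f"E(u(C)) = {e_u:.4f} MM$, insurance worth considering: {gross_premium <= e_u}")
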
SENSITIVITY ANALYSIS
To determine the most critical factors of the model, sensitivity analysis is conducted
with an illustrative example on the effectiveness of each ADR technique chosen for
the project (kj), average ADR implementation cost on each stage of dispute resolution
(cj), and distribution of possible disputes (λ).
Assume there is a highway bridge project in which project participants decide to
include a three-step DRL in the contract for dispute resolution (m = 3). In this DRL, a
dispute goes through the Architect/Engineer or Supervising Officer (ADR1) to
mediation (ADR2) and then arbitration (ADR3). If the DRL fails to provide a
satisfactory settlement, then dispute resolution will eventually escalate to litigation,
which will be much more costly. Details are shown in Figure 2.

Figure 2. Project DRL (Adapted from Menassa et al. 2010)



The estimated duration of this project is T = 720 days from Notice To Proceed
(assume there are 30 days in each month, T = 24 months). Assume that disputes occur
according to a Poisson process with rate λ = 3. To determine the total expected ADR
implementation costs, the ETA is constructed as shown in Figure 3.

Figure 3. Project ETA of ADR Implementation costs


The following SLF is adopted:
u(x) = x + 1880[exp(0.007x) − 1]
which is calculated based on 96 samples taken from insurance-purchasing owners in a
financial survey (Hoshiya et al. 2004).
The results of 1000 simulation runs and a 25% expense loading for the gross premium
are presented in Table 1.
Table 1. Simulation results
Average No. of Disputes | Expected ADR Implementation Costs E(C) (MM$) | Expected Subjective Loss E(u(C)) (MM$) | Gross Premium GP (MM$)
75 | 7.90 | 112.14 | 9.88
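The figures in Table 1 come from the authors' simulation; the hedged Monte Carlo sketch below follows the same procedure (Poisson dispute arrivals over the 24-month horizon, multinomial allocation of disputes to DRL stages, the subjective loss function above, and a 25% loading on the expected cost), but it will not reproduce Table 1 exactly because the stage effectiveness and cost values here are hypothetical placeholders for the project data held in Figures 2 and 3.

# Monte Carlo sketch of the simulation behind Table 1.  The Poisson rate,
# horizon, SLF and 25% loading follow the example; K (stage effectiveness)
# and COSTS (MM$ per stage) are hypothetical placeholders.
import numpy as np

LAMBDA_PER_MONTH = 3.0
MONTHS = 24                              # T = 720 days
K = [0.5, 0.6, 1.0]                      # hypothetical effectiveness of ADR1..ADR3
COSTS = [0.02, 0.06, 0.15]               # hypothetical average stage costs (MM$)
LOADING = 0.25                           # expense loading on the gross premium
RUNS = 1000

def u(x):
    """Subjective loss function adopted in the example (x in MM$)."""
    return x + 1880.0 * (np.exp(0.007 * x) - 1.0)

def stage_probs(k):
    """Eq. (1): pj = kj * prod_{i<j}(1 - ki), normalised to sum to 1."""
    p, remaining = [], 1.0
    for kj in k:
        p.append(kj * remaining)
        remaining *= 1.0 - kj
    return np.array(p) / sum(p)

rng = np.random.default_rng(2011)
p = stage_probs(K)
totals = []
for _ in range(RUNS):
    n = rng.poisson(LAMBDA_PER_MONTH * MONTHS)     # disputes in this run
    x = rng.multinomial(n, p)                      # disputes resolved per stage
    totals.append(float(x @ np.array(COSTS)))      # total ADR cost (MM$)

totals = np.array(totals)
e_cost = totals.mean()                             # estimate of E(C)
e_subj = u(totals).mean()                          # estimate of E(u(C))
gp = (1.0 + LOADING) * e_cost                      # gross premium
print(f"E(C) = {e_cost:.2f} MM$, E(u(C)) = {e_subj:.2f} MM$, GP = {gp:.2f} MM$")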
The following figures show the results of the sensitivity analysis, with each parameter
varied over a range of −30% to +30%:

Figure 4. Sensitivity Analysis I: Total Expected ADR Implementation Costs



Figure 5. Sensitivity Analysis II: Total Expected Subjective Loss
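The sensitivity results in Figures 4 and 5 follow a one-parameter-at-a-time scheme: each input is perturbed across −30% to +30% while the others stay at their base values, and the output metric is recomputed. A compact sketch of that loop is given below; the base values are hypothetical placeholders and only the total expected cost is recomputed here.

# One-at-a-time sensitivity sketch: vary each parameter by -30%..+30% and
# recompute E(C) = lambda * T * sum_j pj * cj.  Base values are hypothetical.

def expected_cost(lam, months, k, c):
    p, remaining = [], 1.0
    for kj in k:                                  # Eq. (1) stage probabilities
        p.append(kj * remaining)
        remaining *= 1.0 - kj
    total = sum(p)
    p = [pj / total for pj in p]                  # normalise to sum to 1
    return lam * months * sum(pj * cj for pj, cj in zip(p, c))

BASE = {"lambda": 3.0, "k1": 0.5, "k2": 0.6, "k3": 0.9,
        "c1": 0.02, "c2": 0.06, "c3": 0.15}       # hypothetical base case

def evaluate(v):
    return expected_cost(v["lambda"], 24,
                         [v["k1"], v["k2"], v["k3"]],
                         [v["c1"], v["c2"], v["c3"]])

for name in BASE:
    row = []
    for delta in (-0.30, -0.15, 0.0, 0.15, 0.30):
        perturbed = dict(BASE, **{name: BASE[name] * (1.0 + delta)})
        row.append(evaluate(perturbed))
    print(name, ["%.3f" % value for value in row])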

RESULTS AND CONCLUSIONS

From the figures we can conclude that the effectiveness of each ADR technique
chosen for the project (kj) and the rate of dispute occurrence (λ) have a larger influence
on the Total Expected ADR Implementation Costs and Subjective Loss. The limitation is
that this is a simplified model with assumptions such as the independence
between dispute occurrences and the effectiveness of each ADR technique. The real situation
could be more complicated. Thus, a more detailed analysis with tests on more
parameters is required in order for the model to be applied to real projects. Moreover,
drawing an analogy to other commercial insurance products such as medical insurance, the
policy will have a deductible limit on project participants to prevent moral hazard. In
this case, project participants will have to bear part of the ADR implementation costs
before the insurance coverage takes effect. Future work will focus on finding the optimal point on
project participants’ subjective loss curve that minimizes their total expected
subjective loss.

REFERENCES
Bowers, N.L., Gerber, H.U., Hickman, J.C., Jones, D.A. and Nesbitt, C.J. (1997)
“Actuarial Mathematics.” Society of Actuaries. Hardback.
Gebken II, R. J. and Gibson, G. E. (2006) “Quantification of costs for dispute
resolution procedures in the construction industry.” J. Professional Issues in Eng.
Education and Practice, 132( 3), July, 264-271
Hoshiya, M., Nakamura, T. and Mochizuki, T. (2004) “Transfer of Financial
Implications of Seismic Risk to Insurance.” Natural Hazards Review, ASCE, 5(3),
141-146.

Menassa, C., Peña-Mora, F., and Pearson, N. (2009). “A Study of Real Options with
Exogenous Competitive Entry to Analyze ADR Investments in AEC Projects.”
Journal of Construction Engineering and Management, American Society of Civil
Engineers, Reston, VA.
Rausand, M. and Høyland, A. (2005) “System reliability theory: models, statistical
methods, and applications.” New Jersey: John Wiley & Sons, Inc.
Song, X, Peña-Mora, F., Arboleda, C., Conger, R., and Menassa, C. (2009). “The
Potential Use of Insurance as a Risk Management Tool for ADR Implementation in
Construction Disputes.” Accepted for publication and presentation at 2009 ASCE
International Workshop on Computing in Civil Engineering, Austin, Texas, U.S. -
June 24-27, 2009
Song, X., Peña-Mora, F., Arboleda, C. (2010), "The Calculation of Optimal Premium
In Pricing ADR As An Insurance Product." The International Conference on
Computing in Civil and Building Engineering (ICCCBE), and XVII European Group
for Intelligent Computing in Engineering (EG-ICE) Workshop, Nottingham, UK,
June 30-July 2, 2010.
Touran, A. (2003). “Calculation of Contingency in Construction Projects.” IEEE
Transactions on Engineering Management, IEEE Engineering Management Society,
Piscataway, N.J, 50 (2), 135-140.
United States Nuclear Regulatory Commission (1975). “An assessment of accident
risk in U.S. commercial nuclear power plants.” Appendix I. Accident definition and
use of event tree, WASH-1400, NUREG-75/ 014, USNRC, Gaithersburg, Md.
Application of Latent Semantic Analysis for Conceptual Cost Estimates
Assessment in the Construction Industry
Tarek Mahfouz1
1Assistant Professor, Department of Technology, College of Applied Science and
Technology, Ball State University, Muncie, Indiana, 47306; email: tmahfouz@bsu.edu
ABSTRACT
Conceptual cost estimates represent the first benchmark upon which owners
define their financial capability of performing a construction project. Consequently,
the accuracy and quality assessments of these estimates are crucial. This paper
proposes an automated conceptual cost estimate assessment model through Latent
Semantic Analysis (LSA). LSA has rarely been implemented in the construction
industry, which deprives the industry of the technique's strengths for facilitating
decision making. The research methodology adopted (1) utilizes data from a set of
completed construction projects; (2) proposes an automated LSA model for the
assessment of conceptual cost estimates based on error ranges; and (3) compares the
attained outcomes to previous research reported in the literature. The outcomes of the
current research illustrate that LSA modeling performs accurately in assessing
conceptual cost estimates, making it a powerful tool for construction decision making.
INTRODUCTION
The US Census data showed that the total construction spending in 2007 was
about $ 14 trillion (US Census 2010). This considerable amount of expenditure is due
to the dynamic nature of the construction industry and the increasing sophistication
and complexity of construction projects. These characteristics created a requirement
for an extensive amount of coordination between different parties, different expertise,
and the production of massive amounts of documents in diversified formats. All of
these factors impose a high level of burden on the design team and a larger one on the
estimators. At the conceptual estimate stage, these factors affect the accuracy of the
developed estimate which is based on experiences and previous knowledge. Such an
aspect imposes a high level of risk on owners and developers due to the uncertainty
associated with the estimate. In an effort to facilitate construction conceptual cost
estimate (CCCE) assessment, a number of research efforts developed expert systems,
mathematical models, and machine learning (ML) models. Although those studies
resulted in significant contributions, none of them utilized Latent Semantic Analysis
(LSA). Latent Semantic Analysis has proven to be a reliable automated decision
support methodology in previous research performed by the author in the fields of
Knowledge Management and Legal Decision Support (Mahfouz 2009; Mahfouz
et al. 2010). It achieved higher prediction accuracy in comparison to prior studies
reported in the literature.
Therefore, in an attempt to provide a robust CCCE methodology for the
construction industry, this paper developed an automated assessor through Latent
Semantic Analysis (LSA). The models developed made use of data from a set of 89


completed projects worldwide. To that end, the adopted research methodology (1)
investigated LSA algorithms; (2) developed truncated feature spaces for the utilized
projects; (3) developed 5 LSA automated assessment models; (4) developed a C++
algorithm to facilitate assigning cost assessment; and (5) tested and validated the best
developed model with newly un-encountered projects. It is conjectured that this
research stream will help in relieving the negative consequences associated with
CCCE that are based on an incomplete set of documents. In addition, the achieved
outcomes of this research highlight the possibility of this technique to be adopted for
automated decision support in the construction industry.
The rest of the body of this paper describes (1) Literature Review; (2)
Methodology; (3) Results and discussion; and (4) Conclusion.
LITERATURE REVIEW
Over the last decade, researchers in the construction industry have focused
their efforts on developing models to assess the quality of CCCE. These models
ranged between Rule Based Reasoning Systems (RBR) (Serpell 2004); mathematical
modeling systems (Fortune and Lees 1996, Oberlender and Trost 2001, and Trost and
Oberlender 2003); and Machine Learning (ML) modeling (An et al. 2007). Despite
the significant contribution of these systems to the advancement of assessing CCCE,
they faced the following hindrances. The success of RBR models was limited due to
(Bubbers and Christian 1992): (1) the failure to deduce all necessary rules upon
which the system operates; and (2) the assumption of the existence of a full domain
model that captures all required rules about a specific matter. Due to the complexity
of the analyzed problem and the involvement of a number of factors, mathematical
modeling like regression and factor analysis were implemented. However, their
limited capability to integrate nonlinear associations opened the horizon for the use of
more sophisticated ML methodologies. In one of the most recent studies, An et al.
(2007) utilized Support Vector Machines (SVM) for the assessment of construction
conceptual cost estimate errors. However, none of these studies implemented
LSA, a mathematically based method that utilizes ML through the development of a
truncated feature space. This characteristic allows for less computation and emphasizes
the effect of the analyzed factors.
METHODOLOGY
The following sections of the paper describe the different steps of developing,
implementing, and validating the LSA models. The adopted research methodology is
composed of five main stages. These stages are defined as (1) Data Collection; (2)
Assessment Criteria; (3) Factors Identification; (4) LSA Model Design and
Implementation; and (5) Model Testing and Validation.
Data Collection
The data pertinent to the current analysis is collected from 89 completed
projects worldwide. Table 1 below illustrates the distribution of the projects with
respect to geographic location. The related information was gathered from project
managers and experienced estimators. Since the current research is concerned with
assessing the accuracy of the CCCE, as will be discussed in the following section,
only information related to scope of works that did not undergo any changes was
gathered. Consequently, any additions to the scope of work were excluded and any
omissions were eliminated from the initial conceptual cost data. The analyzed
projects were classified into three categories with respect to their % error, adopted
from An et al. (2007), as follows (0-<5%, 5-10%, and >10%). The reason for
adopting these ranges is that the literature illustrates that acceptable error should not
be more than 10%. However, as mentioned by An et al. (2007) “from interviews with
experienced experts, Korean companies generally set the primary goal of the range of
error rate at 5%”. The 0-<5%, 5-10%, and >10% categories included 22 (24.72%), 50
(56.18%), and 17 (19.10%) projects, respectively.
Table 1. Geographic Distribution of Projects
# of Projects % of Projects Location
13 14.61 USA
23 25.84 Egypt
8 8.99 Qatar
10 11.24 Kuwait
35 39.33 UAE
Assessment Criteria
The adopted assessment measure for the current research is the CCCE
accuracy (refer to equation 1). It can be defined as a measure of how close the
initial cost estimate was to the actual cost after completion. However, one
should understand that any changes in the scope of work will affect the assessment.
As a result, only data related to unchanged scope of work are considered when
defining the final cost at completion.

% error = |Actual cost at completion − Conceptual cost estimate| / Actual cost at completion × 100%      eq. 1

Factors Identification
The set of factors adopted for the current assessment was defined in three
steps. First, a comprehensive literature review of factors utilized in previous
studies, including Skitmore 1991, Akintoye and Fitzgerald 2000, Trost and
Oberlender 2003, Serpell 2004, and An et al. 2007, was performed attaining a set of
54 factors. Second, interviews with experienced estimators and project managers
from the adopted projects identified an extra 5 factors. Third, after gathering
information related to these factors from all utilized projects, statistical choice models
namely Probit and Logit were developed to define the most significant factors and
their associations in relation to CCCE assessment. As a result of the aforementioned
steps, 32 factors were used for the current research task. Table 2 illustrates the full list
of factor definitions, statistical significance, and their types.
LSA Model Design and Implementation
Latent Semantic Analysis (LSA) is a theory that utilizes linear algebra,
particularly, Singular Value Decomposition (SVD) to solve associations and
constraints between factors mathematically. It is based on the concept of Vector
Space Model implemented by SVM. However, the main advantage in LSA is that it
utilizes a truncated space in which the number of features is decreased. LSA
methodology applies SVD for the reduction of dimensionality in which all of the
local relations are simultaneously represented. The implementation of LSA modeling
within the current research task is performed in three steps. First, all gathered
projects are represented in a form of matrix (figure 1). Each row of the developed
matrix demonstrates a specific factor within the defined 32 factors.
Table 2. List of Utilized Factors
Item | Factor Definition | t-stat | Type of Factor
1 | Intensity of the site visit | 1.66 | Ordinal 5:high–0:none
2 | Site clearness of obstacles during site visit | 2 | Ordinal 5:high–1:low
3 | Possibility of differing site conditions | 1.86 | Ordinal 5:high–0:none
4 | Level of site survey | 1.46 | Ordinal 5:high–1:low
5 | Experience with similar projects | 2.13 | Ordinal 5:high–1:low
6 | Details of existing data | 1.35 | Ordinal 5:high–0:none
7 | Level of details in project definition | 2.19 | Ordinal 5:high–1:low
8 | Level of details in project scope statement | 1.58 | Ordinal 5:high–1:low
9 | Level of details of the project drawings | 1.84 | Ordinal 5:high–1:low
10 | Level of details of the project technical specifications | 1.8 | Ordinal 5:high–1:low
11 | Level of details of the project general conditions | 1.41 | Ordinal 5:high–1:low
12 | Level of details of the project supplementary conditions | 2.02 | Ordinal 5:high–1:low
13 | Level of commitment of the company to the project | 1.52 | Ordinal 5:high–1:low
14 | Financial capacity of the company | 1.63 | Ordinal 5:high–1:low
15 | Financial capacity of the client | 1.75 | Ordinal 5:high–1:low
16 | Time to estimate | 1.53 | Numerical days
17 | Difficulty of the estimating procedures | 1.47 | Ordinal 5:high–1:low
18 | Estimator’s career experience | 1.8 | Numerical years
19 | Estimator’s field work experience | 1.5 | Numerical years
20 | Estimator’s experience with similar projects | 1.41 | Ordinal 5:high–0:none
21 | Estimator’s experience with field work in similar projects | 1.55 | Ordinal 5:high–0:none
22 | Capacity of the estimating team | 2.23 | Ordinal 5:high–1:low
23 | Number of other projects under estimation | 1.68 | Numerical integer
24 | Capacity of the architectural team | 2.19 | Ordinal 5:high–1:low
25 | Capacity of the procurement team | 1.59 | Ordinal 5:high–1:low
26 | Capacity of the technical office team | 1.74 | Ordinal 5:high–1:low
27 | Capacity of the quality control team | 1.8 | Ordinal 5:high–1:low
28 | Capacity of the quality control team | 1.41 | Ordinal 5:high–1:low
29 | Capacity of client | 2.53 | Ordinal 5:high–1:low
30 | Level of construction difficulty | 1.55 | Ordinal 5:high–1:low
31 | Level of competition | 1.64 | Ordinal 5:high–1:low
32 | Contingency level | 2.73 | Ordinal 5:high–0:none
Each column of the matrix stands for a project. Each cell contains the
recorded value that each factor has within a specific project (Landauer et al. 2007).
The developed m (number of factors) by n (number of projects) matrix will contain
zero and nonzero elements. Generally, a weighting function is applied to nonzero
elements to give lower weights to high-frequency factors that occur in many projects
and higher weights to factors that occur in some projects but not all (Salton and
Buckley, 1991). Second, SVD is applied to the developed matrix to achieve an
equivalent representation in a smaller dimension space (Choi et al., 2001). With SVD,
a rectangular matrix is decomposed into the product of three other matrices (figure 1).
One component matrix describes the original row entities as vectors of derived
orthogonal factor values, another describes the original column entities in the same
way, and the third is a diagonal matrix containing scaling values such that when the
three components are matrix-multiplied, the original matrix is reconstructed
(Hofmann, 1999). Third, the number of factors adopted for analysis is determined
(Truncation). Since the singular value matrix is organized in descending order based
on the weight of each term, it is easy to decide on a threshold singular value below
which a term's significance is negligible, refer to Figure 2 (Dumais, 1991). For an
original matrix A, a truncated matrix Ak of rank k can be formulated by the
matrix product illustrated in equation 2.

Figure 1. Matrix Representation in LSA (Dumais, 1991)


Ak = Uk Σk Vk^T      eq. 2

Figure 2. K Dimensional Space Representation in LSA (Dumais, 1991)
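A minimal numpy sketch of the decomposition and truncation steps (equation 2) is given below; the factor-project matrix here is a random placeholder standing in for the actual 32-factor by 89-project data.

# Sketch of the SVD truncation step (eq. 2): A is decomposed and a rank-k
# approximation Ak = Uk * Sk * Vk^T is rebuilt.  The matrix is a random
# placeholder for the 32 x 89 factor-project data.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((32, 89))                              # placeholder factor-project matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)      # s holds singular values in
                                                      # descending order
k = 20                                                # truncated space size
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]           # rank-k reconstruction (eq. 2)

# Project coordinates in the truncated space (used later for retrieval):
project_coords = (np.diag(s[:k]) @ Vt[:k, :]).T       # shape (89, k)
print(A_k.shape, project_coords.shape)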


The following is a description of the steps of the LSA algorithm implemented
for development of the automated assessor. The algorithm starts with an argument,
filename, which is the name of the file containing the stored data about the projects.
The algorithm moves sequentially through the file and generates the factor-project
matrix. After extracting the relevant factors and associating each one with the project
it was extracted from, the algorithm begins calculating factor weights. The global
weights of the factors are computed over the collection of projects. By default, only a
local weight is assigned and this is simply the frequency with which the factor
appears in a project. The algorithm implements two thresholds for factor frequencies:
Global and Local. Next, the local weights of the features are computed. Each factor
weight is the product of a local weight times a global weight. Next, the algorithm
creates a final factor-project matrix. The algorithm finally performs SVD
decomposition. To that end, five truncated feature spaces were generated with the
following k sizes 5, 10, 15, 20, and 25. Each truncated feature space was generated
with a local threshold of Log function and a global threshold of Entropy function. The
Log function (equation 3) decreases the effect of large differences in factor
frequencies (Landauer et al., 2007). The entropy function (equation 4), on the other
hand, assigns lower weights to factors repeated frequently over the entire project
collection, as well as taking into consideration the distribution of each factor
frequency over the projects (Landauer et al., 2007). These thresholds were adopted
for the current analysis due to their success over other types of threshold
combinations in earlier researches performed by the authors (Mahfouz, 2009).
local weight = log(tfij + 1)      eq. 3

global weight = 1 + Σj [pij log(pij)] / log(n),  where pij = tfij / gfi      eq. 4

where tfij is the factor frequency of factor i in project j, and gfi is the total number of
times that the factor i appears in the entire collection of n projects.
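Equations 3 and 4 can be written compactly as the log-entropy weighting sketch below; the frequency matrix is a small placeholder, and the cell weight is taken as the product of the local and global weights, as described above.

# Log-entropy weighting sketch: local weight log(tfij + 1) (eq. 3) and global
# entropy weight 1 + sum_j pij*log(pij)/log(n) with pij = tfij/gfi (eq. 4).
# The frequency matrix tf is a small placeholder.
import numpy as np

def log_entropy_weights(tf):
    tf = np.asarray(tf, dtype=float)
    n = tf.shape[1]                                   # number of projects
    local = np.log(tf + 1.0)                          # eq. 3
    gf = tf.sum(axis=1, keepdims=True)                # total frequency of factor i
    safe_gf = np.where(gf == 0.0, 1.0, gf)
    p = tf / safe_gf
    plogp = np.where(p > 0.0, p * np.log(np.where(p > 0.0, p, 1.0)), 0.0)
    entropy = 1.0 + plogp.sum(axis=1, keepdims=True) / np.log(n)   # eq. 4
    return local * entropy                            # cell weight = local x global

tf = np.array([[3, 0, 2, 5],
               [1, 1, 1, 1],
               [0, 4, 0, 0]])                         # 3 factors x 4 projects
print(np.round(log_entropy_weights(tf), 3))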
Model Testing and Validation
The developed LSA models were tested and validated based on correctly
predicting the % error of newly introduced projects that were not utilized for
developing the models. A C++ algorithm was developed to perform the validation.
The algorithm performs four steps. First, each project in the
feature space is tagged with its % error. The algorithm iterates sequentially through
the projects storing the project number and its corresponding % error. Second, the
LSA algorithm is implemented to extract the closest set of projects to the newly tested
one. A similarity threshold of 95% is considered. In other words, any project retrieved
at a similarity measure of less than 0.95 is disregarded. The algorithm is set to
retrieve each project and its similarity measure. Third, the algorithm reads through the
project numbers attained from the LSA implementation and retrieves the % error of
each project. Fourth, it reports the % error of the newly tested project by two means.
The first is reported as the most repeated % error. The second is reported a weighted
average of the retrieved % errors. The reported outputs are compared against manual
tagging of the newly tested projects to decide on the most accurate method.
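The validation logic can be sketched as follows: compute the cosine similarity between the new project and each training project in the truncated space, keep matches at or above the 0.95 threshold, and report the similarity-weighted average of their % errors (Scheme 2). The coordinates and % error tags below are illustrative placeholders.

# Sketch of the validation step: retrieve training projects with cosine
# similarity >= 0.95 to the new project and predict its % error as the
# similarity-weighted average of theirs (Scheme 2).  Data are placeholders.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def predict_error(new_coords, train_coords, train_errors, threshold=0.95):
    sims = np.array([cosine(new_coords, t) for t in train_coords])
    keep = sims >= threshold                      # disregard weak matches
    if not keep.any():
        return None                               # no sufficiently similar project
    weights = sims[keep]
    return float(weights @ np.asarray(train_errors)[keep] / weights.sum())

train_coords = np.array([[0.90, 0.10, 0.00],      # placeholder truncated-space coords
                         [0.85, 0.15, 0.05],
                         [0.10, 0.90, 0.20]])
train_errors = [4.0, 6.5, 12.0]                   # tagged % error of each project
new_project = np.array([0.88, 0.12, 0.02])
print(predict_error(new_project, train_coords, train_errors))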
RESULTS AND DISCUSSION
The results of the implementation of the aforementioned methodology are
illustrated in Table 3. The training and testing of the developed models were
performed using 10-fold cross validation. In each step, the models were trained on 90%
of the projects and tested on the other 10%. The process was repeated in an iterative
manner until the models were trained and tested on all projects. For illustration,
each step would utilize 80 and 9 projects for training and testing, respectively. The
reported results in Table 3 are the averages of the difference in % error over all 10 folds.
A closer look at the results shows the following:
• All models attained an average % error difference of 6% or less.
• Generally, the weighted average validation scheme attained better results than the
most repeated one.
• The best results with respect to both validation schemes were achieved using a
truncated feature space of size 20. This is supported by the reported advancements
shown in Figure 3.
• The developed model is suitable for evaluating conceptual cost estimates of
construction projects with various complexity levels. This can be attributed to two
factors. First, the model was tested and validated using projects performed at
different locations around the world. Second, project complexity was captured in
the analysis through proxy factors such as (30) Level of construction difficulty,
(18) Estimator’s career experience, (19) Estimator’s field work experience, (20)
Estimator’s experience with similar projects, (21) Estimator’s experience with
field work in similar projects, and (22) Capacity of the estimating team.
Table 3. Average % Error Difference
Truncated Space Size | Scheme 1 (Most Repeated) | Scheme 2 (Weighted Average)
5 | 5.8 | 6
10 | 4 | 3.4
15 | 3.1 | 2.8
20 | 2.2 | 1.6
25 | 4 | 5.3

Figure 3. Advancement of Truncated Space Size of 20


CONCLUSION
The paper proposed a methodology for automated assessment of conceptual
cost estimate accuracy through Latent Semantic Analysis (LSA). To that end, 89
construction projects worldwide were utilized for the development of 5 models. The
models were trained and tested through a 10 fold cross validation scheme. The best
results were attained using a truncated feature space of size 20. Furthermore, the
outcomes discussed within the body of the paper illustrate the potential of LSA to be
adopted for automated construction conceptual cost assessment. It is conjectured that
this research line will help in relieving the negative consequences associated with
financial uncertainty and provide construction practitioners with better assessment of
their situation and bids.
REFERENCES
Akintoye, A., and Fitzgerald, E. (2000). “A survey of current cost
estimating practices in the UK.” Constr. Manage. Econ., 18, 161–172.
An, S., Park, U., Kang, K., Cho, M., and Cho, H. (2007). “Application of
Support Vector Machines in Assessing Conceptual Cost Estimates.” J.
of Comp. in Civ. Eng., 21 (4), 259-264.
Bubbers, G., and Christian, J. (1992). “Hypertext and claim analysis.” J.
Constr. Engrg. and Mgmt., 118(4), 716-730.
Choi, F. Y. Y., Wiemer-Hastings, P., and Moore, J. (2001). “Latent
semantic analysis for text segmentation.” Proceedings of the 6th
Conference on Empirical Methods in Natural Language Processing,
Seattle, WA, 109–117.
Dumais, S. (1991). “Improving the retrieval of information from external
sources.” Behavior Research Methods, Instruments, and Computers,
23(2), 229-236.
Fortune, C., and Lees, M. (1996). “The relative performance of new and
traditional cost models in strategic advice for clients.” RICS research
paper series, 2(2), March, Royal Institution of Chartered Surveyors.
Hofmann, T. (1999). “Probabilistic latent semantic indexing.”
Proceedings of the National Academy of Science, 101, 5228-5235.
Landauer, T. K., McNamara, D. S., Dennis, S., and Kintsch, W. (2007)
Handbook of latent semantic analysis, Lawrence Erlbaum Associates,
London.
Mahfouz, T. (2009). “Construction legal support for differing site
conditions (DSC) through statistical modeling and machine learning
(ML)” Ph. D. thesis, Department of Civil, Construction, and
Environmental Engineering, Iowa State Univ., Ames, IA.
Mahfouz, T., Jones, J., and Kandil, A. (2010). “A Machine Learning
Approach for Automated Document Classification: A Comparison
between SVM and LSA Performances." International Journal Of
Engineering Research & Innovation (IERI), 2(2), 53–62.
Oberlender, G. D., and Trost, S. M. (2001). “Predicting accuracy of early
cost estimates based on estimate quality.” J. Constr. Eng. Manage., 127
(3), 173–182.
Serpell, A. F. (2004). “Towards a knowledge-based assessment of
conceptual cost estimates.” Build. Res. Inf., 32 (2), 157–164.
Skitmore, M. (1991). “Early stage construction price forecasting: A
review of performance.” Occasional paper, Royal Institution of
Chartered Surveyors.
Trost, S. M., and Oberlender, G. D. (2003). “Predicting accuracy of early
cost estimates using factor analysis and multivariate regression.” J.
Constr. Eng. Manage., 129 (2), 198–204.
US Census Bureau, < http://www.census.gov/const/www/c30index.html>
(Accessed 2010).
Dynamic Life Cycle Assessment of Building Design and Retrofit Processes

Sarah Russell-Smith1, Michael Lepech2


1Ph.D. Student, Dept. of Civil and Environmental Engineering, Stanford University,
473 Via Ortega, Stanford, CA, 94305-4020; PH (413) 297-6820; svrs@stanford.edu
2Assistant Professor, Dept. of Civil and Environmental Engineering, Stanford
University, 473 Via Ortega, Stanford, CA, 94305-4020; PH (650) 724-9459; FAX
(650) 723-7514; mlepech@stanford.edu

ABSTRACT
Designers and managers of buildings and other constructed facilities cannot easily
quantify the sustainability impacts of structures for improved analysis, management,
or decision-making. This is due in part to the lack of interoperability between design
and analysis software and datasets that enable full life cycle assessment (LCA) of
constructed facilities. This work develops a computational framework to enable
building designers, engineers, contractors, and managers to reliably and efficiently
construct dynamic life cycle models that capture environmental impacts associated
with every life cycle phase. This includes 3D architectural tools, structural software,
and virtual design and construction packages. Use phase impacts can be quantified
using distributed sensor networks. This integration provides a dynamic LCA
modeling platform for management of facility footprints in real-time during
construction and use phases, offering unique analysis opportunities to examine the
tradeoffs between design and construction/operation decisions.

INTRODUCTION
The built environment creates significant environmental, economic, and social
impacts. These occur throughout the life cycle of constructed facilities from raw
material acquisition, through construction and use, to demolition and disposal. The
commercial and industrial sectors consume approximately 40% of energy produced in
the US and contribute close to 40% of greenhouse gas emissions, along with
acidification, eutrophication, and smog (EIA, 2010). This represents an opportunity
for improvement; yet presently, few studies exist, little information is available, and
no tools are in use for measuring the distribution of energy consumption and
environmental impacts among the life cycle phases of constructed facilities. Methods
are needed to accurately assess, manage, and control consumption and emissions
starting from early design and continuing through the facility life cycle.
While no tools are available for environmental impact control and process
monitoring, highly developed economic cost controls form the foundation of current
construction management and operation practices. These controls allow construction
managers and facility owners to compare accrued costs with estimates derived from
design documents (e.g., drawings, contract specifications) and predictive performance
models (e.g., building energy models). Variance from budgeted costs or schedules,
whether positive or negative, is managed by construction superintendents or
operation managers to effectively meet predetermined cost and schedule targets.
In order to methodically manage the reduction of life cycle environmental impacts
of built facilities, it is necessary to link environmental goals with modern
construction management methods. Time dependent impact budgets provide
construction managers and building owners a basis for such process control.
Specifically, 4 components are needed: (A) time dependent impact accrual budgets
during construction, (B) impact measurement during construction, (C) time
dependent impact accrual budgets during use, and (D) impact measurement during
use. Parts A and C are created during design and bidding to be compared to B and D
during construction and operation as benchmarks for whether the project is above or
below expectations, similar to a cost and schedule variance analysis.

LCA AND BIM


To manage for improved sustainability using existing construction and operation
management practices, project impact or footprint (e.g. CO2e) budgets are needed
using analytical tools and indicators of environmental sustainability. Life cycle
assessment (LCA) is one method for creating these tools. LCA is a standardized
method of accounting for all inputs, outputs, and flows within a process, product, or
system to quantify a comprehensive set of environmental, social, and economic
indicators (Finnveden et al., 2009). Today, LCA forms the analytic basis of many
performance-based sustainability design approaches (McAloone and Bey, 2009).
Numerous LCA studies have investigated the sustainability impacts of constructed
facilities. Scheuer et al. looked at a large commercial structure and found that nearly
95% of life cycle impacts stem from the use phase (Scheuer et al., 2003). Keoleian et
al. conducted an LCA on a residence and found a wide distribution of impacts
accruing from all life cycle stages, with most coming from use (Keoleian et al.,
2001). Junnila et al. found that for conventional office buildings in Europe and the
US, the use phase makes up over 90% of life cycle energy consumption, 80% of CO2
emissions, and 65% of SO2 and NOx emissions (Junnila et al., 2006). Additional
studies have been performed by Ochoa et al. (2002) and Khasreen et al. (2009). In a
review of 16 studies, Sartori and Hestnes found significant impacts throughout the
life cycle of facilities, with strong correlation between life cycle energy consumption
and use phase energy consumption (Sartori and Hestnes, 2007).
While LCA can be used to assess the sustainability of the built environment, it is
insufficient for management. Creating an LCA requires collection of life cycle
inventory data for countless materials and processes and manually entering quantities
and transportation distances into LCA software tools. The process of re-entering data
in many software tools is time-intensive and not done by contractors and architects
(Fischer et al., 2004). Thus, a computational framework is needed to enable reliable,
efficient construction of dynamic life cycle models that links design software with
environmental impact models. The integration of building information modeling
(BIM) software and LCA software has the potential to automate this process as
material specifications and quantity takeoffs are included in BIM. Thus, linking BIM
and LCA software eliminates the need for manually inputting data and significantly
accelerates creation of LCA models for constructed facilities.

Building information models are constructed using 3D, real-time, dynamic
software tools that incorporate spatial relationships, geometry, material properties and
quantities, and geographic information (Eastman, 1999). BIM models can be used to
represent the entire building life cycle and aid in management as well as design and
construction (Eastman et al., 2008). Integrated BIM models have led to “improved
collaboration, communication, and decision support enabled by horizontal and
vertical integration of data, and information management in the whole value network
throughout the building lifecycle” (Steinmann, 2010). BIM models coupled with
LCA software have the potential to streamline LCA processes and facilitate rigorous
management of the environmental footprint of constructed facilities. Obtaining such
data early in design enables decisions to be made with more information and
achievement of performance improvements at lower cost (AIA, 2007).

INTEGRATION OF BIM WITH LCA


Ma and Zhao (2008) identified 3 essential elements of next generation design
software for constructed facilities to be (1) life cycle assessment of energy and
environmental impacts, (2) BIM support, and (3) a file format that facilitates
interoperability. Several studies have looked at benefits of integrating LCA and BIM
and identified reasons why this has not occurred. First, LCA tools are inaccessible
and complicated (Loh et al., 2007). Second, it is inefficient to input data into LCA
programs (Fischer et al., 2004). Third, there is a lack of interoperability between
software tools and a need for a common data format, for instance Industry Foundation
Classes (IFC) (Loh et al., 2007). This standard, developed by the International
Alliance for Interoperability (IAI), is a data representation standard for definition of
architectural CAD graphic data as 3D real-world objects (Steinmann, 2010).
The VTT Technical Research Centre of Finland identified the following solutions
to integrate LCA and BIM software: (1) linking separate software tools via file
exchange, for instance using IFC; (2) adding functionality to existing BIM software;
(3) using parametric formats such as Geometric Description Language (GDL)
(Häkkinen and Kiviniemi, 2008). In a recent study, Steel et al. concluded that BIM is
most useful not only for design, but for information exchange between building
stakeholders (Steel et al., 2010). For this to become reality however, the question of
interoperability must be resolved. The aforementioned studies confirm the need and
identify potential methods for LCA and BIM interoperability.
Others have developed frameworks to integrate LCA and BIM, but none for life
cycle impact management. Gu et al. developed a framework to evaluate the impacts
of residences in China. The life cycle was divided into 5 phases including material
acquisition, manufacture, construction, operation, and demolition, and then impacts
were computed. However, it was concluded that for comprehensive results a full life
cycle assessment via LCA software is necessary (Gu et al., 2006). A successful
example is the LCADesign tool developed by the Cooperative Research Center for
Construction Innovation in Australia. Quantity take-offs of building components are
automated from CAD drawings and then combined with a database of Australian life
cycle impact data. The required input is the 3D drawing in IFC format. This
tool is currently being deployed commercially (Seo and Newton, 2007).

METHODS
The objective of this research is to create a computational framework to link LCA
and BIM, facilitating adoption of widely accepted construction management
techniques of variance control to manage reduced construction and operation
environmental impacts of facilities. The proposed architecture is shown in Figure 1.

Figure 1. Computational Architecture of BIM-LCA Integration

A sample calculation is shown schematically in Figure 1. In this case, placement
of riprap (rock or other large aggregate to protect against erosion) is shown. The
“User Interface,” which is designed to import data directly from BIM, contains the
unique CSI MasterFormat identifier for the work performed, the description of the
work, the quantity of work, the units, and environmental impact categories including
global warming potential (GWP), energy resource consumption, acidification,
eutrophication, and carcinogens (not all shown in Figure 1). CSI’s MasterFormat
serves as the ontology for uniquely identifying materials and processes. Placement of
riprap has a MasterFormat identifier of 31 37 13 100350 (Rip-rap and Rock Lining,
Dumped, 100 pound). The user inputs the quantity, in this case 1 metric ton.
The User Interface is linked to separate datasets for material inputs and crew
assignments in the “CSI Code Array.” Within the CSI Code Array, it is seen that the
placement of Riprap requires cobble (large stones), silt fence, and wooden stakes.
Further, the crew identification is B-11A (RS Means, 2010), which is linked to the
“Crew Array” that lists required construction equipment. For B-11A, RS Means lists
one equipment operator, one laborer, and one dozer. Persons and hand tools are not
listed in the Crew Array as they have negligible environmental impact. Within the
CSI Code Array and the Crew Array, a life cycle inventory identifier (LCI No.) is
listed that is linked to a material or process within existing life cycle inventory
datasets. In the case of cobble, the corresponding LCI database identifier is
EIN_UNIT06567700467, corresponding to “Gravel or Rock” in the Ecoinvent life
cycle inventory database. From the Ecoinvent database, impacts for the production of
1 metric ton of cobble in terms of global warming, energy resources, acidification,
eutrophication, and carcinogens are found to be 1.7 kg CO2e, 25.6 MJ LHV, 0.04 kg
SO2, and 0.01 kg PO4, with negligible amounts of B(a)P, respectively. Quantities for
each material or piece of equipment used are cascaded down from the User Interface
to compute the total impacts based on LCI material and process data. Total impact
results for each work item performed are then displayed on the User Interface.
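The cascade from the User Interface through the CSI Code Array and Crew Array down to the life cycle inventory can be sketched as a lookup-and-multiply routine, as below. The identifiers mirror the riprap example, but the material lists, crew hours, and most unit impacts are simplified, hypothetical placeholders rather than actual RS Means or Ecoinvent records.

# Sketch of the User Interface -> CSI Code Array -> Crew Array -> LCI cascade:
# a work item is resolved to its materials and crew equipment, each tied to an
# LCI record, and unit impacts are scaled by quantity and summed.  All values
# except the gravel/rock record cited in the text are hypothetical placeholders.

LCI = {   # unit impacts per LCI record: (kg CO2e, MJ LHV, kg SO2, kg PO4)
    "EIN_UNIT06567700467": (1.7, 25.6, 0.04, 0.01),    # gravel or rock, per tonne
    "LCI_SILT_FENCE":      (0.9, 18.0, 0.003, 0.001),  # hypothetical
    "LCI_WOOD_STAKE":      (0.1, 1.5, 0.0005, 0.0001), # hypothetical
    "LCI_DOZER_HOUR":      (35.0, 480.0, 0.25, 0.02),  # hypothetical
}

CSI_CODE_ARRAY = {   # materials as (LCI id, quantity per unit of work) plus crew id
    "31 37 13 100350": {"materials": [("EIN_UNIT06567700467", 1.0),
                                      ("LCI_SILT_FENCE", 0.2),
                                      ("LCI_WOOD_STAKE", 0.5)],
                        "crew": "B-11A"},
}

CREW_ARRAY = {   # equipment hours per unit of work (people and hand tools omitted)
    "B-11A": [("LCI_DOZER_HOUR", 0.4)],
}

def work_item_impacts(csi_code, quantity):
    entry = CSI_CODE_ARRAY[csi_code]
    rows = entry["materials"] + CREW_ARRAY[entry["crew"]]
    totals = [0.0, 0.0, 0.0, 0.0]
    for lci_id, per_unit in rows:
        for i, unit_impact in enumerate(LCI[lci_id]):
            totals[i] += unit_impact * per_unit * quantity
    return dict(zip(("kg CO2e", "MJ LHV", "kg SO2", "kg PO4"), totals))

print(work_item_impacts("31 37 13 100350", quantity=1.0))   # 1 tonne of riprap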
Analogous to the development of a time-dependent construction and facility
operation budget which is made by tying construction activity costs to construction
activity schedules, the impact results from the integrated BIM-LCA model are used to
model the accrual of environmental footprint (e.g., CO2e, SOx, etc.) over the course
of construction. When combined with a use phase model, a budget of environmental
footprint accrued over the life cycle of a constructed facility is developed. Figure 2
shows the GWP footprint accrual for a hypothetical constructed facility over time.

Figure 2. Schematic view of Proposed Impact Accrual versus Time

Here, the ordinate axis shows GWP in terms of percent of total CO2e emissions
over facility life and the abscissa measures time. The projected lifetime is 30 years
with 10% of emissions from construction and 90% from use. The four lines (A, B, C,
D) shown in Figure 2 correspond to expected and measured time dependent impact
accrual. Line A represents forecasted CO2e impact accrued during construction. Line
B represents actual impact realized during construction. Line C represents projected
impact during use. Line D represents actual impact accrued during use. The dashed
vertical line marks the end of construction. Implementing common cost variance
control management techniques, this type of figure can be used to measure
construction processes and facility operations in real-time to better ensure that
environmental performance targets are met. Lines A and B are analogous to cost
variance figures used today in the construction industry for project cost management.
Line A is created using integrated BIM-LCA by associating the impacts of each
construction activity with the construction schedule. Line B is based on measured
construction emissions using information on the actual type of equipment used, the
hours it is operated, and actual material production impacts accounting for
construction change orders. Line C is created from use phase models such as eQuest
or EnergyPlus that predict energy consumption. Line D is created by monitoring the
actual accrual of environmental impacts. Sensing technologies and networks are
becoming more widespread in constructed facilities and can provide real-time
information on temperature, humidity, lighting levels, and energy consumption, and
can be used to calculate environmental impact over the use phase.
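The forecasted accrual line (Line A) follows directly from associating each activity's impact with its place in the schedule; a hedged sketch is given below, in which each activity's impact is spread uniformly over its scheduled days and accumulated. The activities, durations, and impact values are hypothetical placeholders.

# Sketch of Line A: spread each construction activity's impact uniformly over
# its scheduled working days and accumulate a time-dependent GWP budget.
# Activity names, durations and impacts are hypothetical placeholders.

def accrual_curve(schedule):
    """schedule: list of (activity, start_day, duration_days, impact_kgCO2e).
    Returns the cumulative impact at the end of each day."""
    horizon = max(start + duration for _, start, duration, _ in schedule)
    daily = [0.0] * horizon
    for _, start, duration, impact in schedule:
        for day in range(start, start + duration):
            daily[day] += impact / duration       # uniform spread over the activity
    curve, running = [], 0.0
    for amount in daily:
        running += amount
        curve.append(running)
    return curve

schedule = [("Hydrodemolition",  0, 10, 3.7e6),
            ("Concrete Overlay", 10, 8, 3.3e5),
            ("Paving",           18, 5, 3.3e4)]
budget = accrual_curve(schedule)
print([round(budget[day]) for day in (9, 17, 22)])   # cumulative kg CO2e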

CASE STUDY
Michigan Department of Transportation Project BHT 9903002, a bridge
rehabilitation project in southeast Michigan, was selected as a case study of the LCA-
BIM integration platform. The project included activities of (1) concrete deck
hydrodemolition, (2) placement of concrete overlay, (3) strengthening of steel girders
by adding plate steel, (4) replacement of guardrail, (5) asphalt paving, (6) epoxy
painting, (7) excavation, (8) removal of old drainage structures, and (9) installation of
replacement drainage structures. The construction material quantities are listed in
Table 1. Based on crew productivity, a construction schedule was also constructed.
Table 1. Materials and Quantity Takeoffs for BHT 9903002
Material | Unit | Quantity
Bitumen | kg | 68,025
Concrete | m3 | 146
Epoxy Coating | kg | 8,646
Formwork (plywood) | kg | 548
Grout | kg | 194,580
Iron, Sand Casted | kg | 844
PVC pipe | kg | 63,950
Reinforcing Steel | kg | 7,929
Riprap | tonne | 1,000
Sand | kg | 768,800
Structural Steel | kg | 15,416
Timber | m3 | 0.3
Water | kg | 2,616,933

RESULTS
The total impact of the designed work, in terms of life cycle GWP and energy
consumption, was 4.4x10^6 CO2e and 7.2x10^7 MJ (lower heating value), respectively.
Based on MasterFormat work codes, project impacts were broken down into
construction activities so that impacts can be associated with specific tasks. This
allows construction managers to pinpoint sources of impact and focus process
improvements. The breakdown of activities and impacts is shown in Table 2.
Table 2. Construction Activities for BHT 9903002 and Associated Impacts
Construction Activity | GWP (CO2e) | Energy (MJ LHV) | Acidification (kg SO2) | Eutrophication (kg PO4) | Carcinogens (kg B(a)P)
Hydrodemolition | 3.7x10^6 | 5.9x10^7 | 4.8x10^4 | 8.5x10^3 | 7.6x10^-2
Excavation | 5.7x10^3 | 1.1x10^5 | 7.6x10^1 | 1.4x10^1 | 1.8x10^-3
Drain Structure | 1.8x10^5 | 4.3x10^6 | 9.2x10^2 | 7.5x10^1 | 2.2x10^-4
Structural Steel | 4.4x10^4 | 7.1x10^5 | 3.1x10^2 | 9.1x10^1 | 1.8x10^-2
Concrete Overlay | 3.3x10^5 | 1.7x10^6 | 1.6x10^3 | 1.6x10^2 | 6.6x10^-3
Epoxy Painting | 5.6x10^4 | 1.2x10^6 | 1.9x10^2 | 3.0x10^1 | 3.0x10^-3
Curb and Gutter | 3.1x10^2 | 1.9x10^3 | 6.9x10^-1 | 1.5x10^-1 | 9.2x10^-6
Paving | 3.3x10^4 | 3.7x10^6 | 4.9x10^2 | 3.6x10^0 | 7.1x10^-4
Guardrail | 1.2x10^4 | 2.1x10^5 | 7.6x10^1 | 2.6x10^1 | 4.6x10^-3
Totals | 4.4x10^6 | 7.2x10^7 | 5.2x10^4 | 8.9x10^3 | 1.1x10^-1

Linking the activities shown in Table 2 with the construction schedule, accrual of
environmental impacts can be plotted versus percent project completion (Figure 3).
This time-dependent “budget” for environmental impacts (Line A described in the
Methods section) can be used to guide project management during the construction
phase when compared against actual accrual of environmental impacts.

Figure 3. Global Warming Potential Variance Budget for Bridge Reconstruction

CONCLUSION
This paper presents a novel framework for managing environmental impacts of a
constructed facility throughout its life cycle. Coupling environmental impacts of
construction activities determined using life cycle assessment with construction
schedules can successfully produce time-dependent environmental impact budgets
that form the basis for management of variance between predicted environmental
impacts and actual environmental impacts of constructed facilities. This is
demonstrated using a case study of a simple bridge reconstruction.
The framework produces environmental impact accrual timelines facilitating
improved sustainability-oriented project management during the construction and use
of a constructed facility and offers unique analysis opportunities to examine the
managerial tradeoffs between design and construction/operational decisions. It
enables designers, contractors, and engineers to methodically manage designed and
actual environmental impacts and make more informed decisions throughout the
facility life cycle. Further, it pushes widespread adoption of building information
models and life cycle assessment tools by making their collective use more valuable.
Future research in this area will develop tools necessary to track actual environmental
emissions realized onsite using existing cost management procedures and tracking of
change orders, predict use phase facility energy and material consumption, and
monitor actual facility use phase energy and material consumption.

ACKNOWLEDGEMENTS
The authors would like to thank the Stanford Center for Integrated Facility
Engineering, the National Science Foundation Graduate Fellowship, the National
Defense Science and Engineering Graduate Fellowship, and the Stanford Terman
Faculty Fellowship for their generous financial support in completing this work.

REFERENCES
AIA (2007). “Integrated Project Delivery: A Guide, v1.” Retrieved 2 Dec 2010.
Eastman, C. (1999). Building Product Models: Computer Environments Supporting
Design and Construction, CRC Press, Boca Raton, FL.
Eastman, C., Teicholz, P., Sacks, R., Liston, K. (2008). BIM Handbook: A Guide to
BIM for Owners, Managers, Designers, Wiley, Hoboken, NJ.
EIA (2010). Annual Energy Review 2009, Technical report, US DOE.
Finnveden, G., Hauschild, M., Ekvall, T., Guinée, J., Heijungs, R., Hellweg, S.,
Koehler, A., Pennington, D., Suh, S. (2009). “Recent Developments in Life Cycle
Assessment.” J. Envir. Man., 91(1), 1-21.
Fischer, M., Hartmann, T., Rank, E., Neuberg, F., Schreyer, M., Liston K., Kunz J.
(2004). “Combining different project modelling approaches for effective support
of multi-disciplinary engineering tasks.” In: P. Brandon, H. Li, N. Shaffii and Q.
Shen, Editors, Int. Conf. on Infor. Tech. in Design and Construction (INCITE
2004), Langkawi, Malaysia, 167–182.
Gu, D., Zhu, Y., Gu, L. (2006). “Life cycle assessment for China building
environment impacts.” J. Tsinghua University, 46(12), 1953–1956.
Häkkinen, T., Kiviniemi, A., (2008). “Sustainable Building and BIM.” In proceedings
of World Sustainable Building Conference (SB08), Melbourne, Australia.
Junnila, S., Horvath, A., Guggemos, A. (2006). “Life Cycle Assessment of Office
Building in Europe and the United States.” J. Infra. Sys., 12(1), 10-17.
Keoleian, G., Blanchard, S., Reppe, P. (2001). “Life Cycle Energy, Costs, and
Strategies for Improving a Single Family House.” J. Indust. Ecol., 4(2), 135-156.
Khasreen, M., Banfili, P., Menzies, G. (2009). “Life cycle assessment and the
Environmental Impact of Buildings: A Review.” Sustainability, 1(3), 674-701.
Loh, E., Nashwan, D., Dean, J. (2007). “Integration of 3D Tool with Environmental
Impact Assessment (3D EIA).” In proceedings of Int. Conf. of Arab Soc. for
Computer Aided Arch. Design (ASCAAD 2007), Alexandria, Egypt.
Ma, Z., Zhao, Y. (2008). “Model of Next Generation Energy-Efficient Design
Software for Buildings.” Tsinghua Sci Technol., 13(S1), 298-304.
Ochoa, L., Hendrickson, C., Matthews, H. (2002). “Economic input-output life-cycle
assessment of U.S. residential buildings.” J. Infra. Sys., 8(4), 132-138.
RS Means (2010). “Facilities Construction Cost Data.” RS Means, Kingston, MA.
Sartori, I., Hestnes, A. (2007). “Energy use in the life cycle of conventional and low-
energy buildings: A review article.” Energy and Buildings, 39(3), 249-257.
Scheuer, C., Keoleian, G., Reppe, P. (2003). “Life Cycle Energy and Environmental
Performance of a New University Building: Modeling Challenges and Design
Implications.” Energy and Buildings, 35(10), 1049-1064.
Seo, S., Tucker, S., Newton, P. (2007). “Automated Material Selection and
Environmental Assessment in the Context of 3D Building Modeling.” J. Green
Bldg., 2(2), 11.
Steel, J., Drogemuller, R., Toth, B. (2010). “Model interoperability in building
information modeling.” Softwr. Sys. Model., DOI: 10.1007/s10270-010-0178-4.
Steinmann, R. (2010). “BIM and openBIM from Various Viewpoints.” Lecture.
Stanford University, Stanford, CA. 3 Nov 2010.
A Real Options Approach to Evaluating Investment in Solar Ready
Buildings
B. Ashuri1, H. Kashani2
1Assistant Professor, School of Construction, Georgia Institute of Technology, 280 Ferst
Drive, 1st Floor, Atlanta, GA 30332-0680. Email: Baabak.Ashuri@coa.gatech.edu
2Ph.D. Candidate, School of Construction, Georgia Institute of Technology, 280 Ferst Drive,
1st Floor, Atlanta, GA 30332-0680. Email: HammedKashani@gatech.edu
Abstract
Sustainable building technologies such as Photovoltaics (PV) have promising features
for energy saving and greenhouse gas (GHG) emissions reduction in the building
sector. Nevertheless, adopting these technologies generally requires substantial initial
investments. Moreover, the market for these technologies is often very vibrant from
the technological and economic standpoints. Therefore, investors typically find it
more attractive to delay investment on the PV technologies. They can alternatively
prepare “Solar Ready Buildings” that can easily adopt PV technologies later in the future,
when their prices are lower, energy prices are higher, or stricter environmental
regulations are in place. In such cases, the decision makers should be equipped with
proper financial valuation models in order to avoid over- and under-investment. We
apply Real Options Theory to evaluate the investment in solar ready buildings. Our
proposed investment analysis model uses the experience curve concept to model the
changes in price and efficiency of the PV technologies over time. It also has an
energy price modeling component that characterizes the uncertainty about the future retail
price of energy as a stochastic process. Finally, the model incorporates
information concerning specific policy and regulatory instruments that may affect the
investment value. Using our model, investors’ financial risk profiles of investment in
the “fixed” Solar Building and the “flexible” Solar Ready Building are developed.
Also, for solar ready buildings, the model determines whether the PV panels should
be installed and, if yes, how much should be invested. Finally, by utilizing the
proposed model, the optimal time for installing the PV panels can be identified.
Introduction
Given the increasing scale of investments in sustainable building technologies such as
the Photovoltaic (PV) panels, it is of crucial importance to offer the proper financial
decision-making tools to the stakeholders and decision-makers. Without a proper
methodology, there is a high risk that funds are misallocated, e.g., by choosing the
wrong technologies or by timing the investment incorrectly.
Proper allocation of resources to sustainable building projects (e.g. installing Solar
Panels) requires an assessment of the cost and performance of proposed solutions to
establish their profitability. Metrics such as Payback Period (PP), ROI and NPV have
been traditionally applied to measure this profitability. Of all these measures, Net
Present Value (NPV) is the widely prescribed metric, e.g., in ASTM E917–05 (2010)
for conducting life cycle costs and benefits analysis for a building system. Despite the
popularity of NPV, this method has serious limitations in financial assessment of an
energy retrofit solution.
An NPV analysis approach assumes that all decisions related to an energy investment
are made at once and are completely irrevocable. These assumptions are not
consistent with real-world decision-making processes for investing in sustainability
projects such as installing the PV panels. Many of the PV technologies are still in
their early development stages. It is expected that their prices will go down and their
efficiencies will improve in future due to the economies of scale and learning by
doing effects. Therefore, it seems reasonable that building owners delay investing in
these technologies but maintain the capacity to implement them in future when
investors become more confident about technical and financial aspects of such
investments. Thus, constructing Solar Buildings (with PV panels already installed in
the building) may not be an economically attractive solution today. However, it could
be a financially-wise choice to prepare Solar Ready Buildings that enable the easy
installation of PV panels in the future when the electricity retail price reaches a new
high level or the price and efficiency of PV panels improve significantly.
Nevertheless, conventional investment valuation methods such as NPV are not able to
systematically price the flexibility embedded in the solar ready buildings. Also, NPV
cannot specify whether and when the final stage of the investment, i.e., the installation of
the PV panels, should be carried out. Therefore, the financial performance of investing
in solar ready buildings is computed erroneously under an NPV approach. This, in
turn, may lower the overall effectiveness of the investment in the PV panels. Thus, to
avoid under- and over-investment and ensure that scarce financial resources are
efficiently allocated, an appropriate valuation method is needed (Ellingham and
Fawcett 2006). The Real Options Theory from finance/decision science can be
utilized to evaluate the investment in the solar ready buildings and price the delayed
investments for PV panels installation.
Real Options Analysis
Generally, the financial assessment of a delayed investment (e.g. installing PV panels
in the case of solar ready buildings) is performed under the uncertainty about whether
and when the investment should be implemented. Real Options Analysis properly
meets this objective. The term “Real Options” refers to the application of financial
option pricing techniques such as the Black and Scholes (1973) formula to assessment
of non-financial or “Real” investments with strategic management flexibility features
like delayed retrofit solutions (see Dixit and Pindyck (1994) for a detailed overview
of real options analysis). This field has gone through a significant transition from a
topic of modest academic interest in 1990s to considerable, active academic and
industry attention (Borison 2005). However, the applications of real options in
building design and engineering have not been numerous (Greden et al. 2006; Greden
and Glicksman 2005; Ashuri et al. 2010). To the best of the authors’ knowledge, real
options analysis has not been applied to evaluate energy investments in buildings
including the investments in PV technologies and solar ready buildings. Considering
the expected increase in the level of investments in sustainable buildings, creating
more appropriate investment valuation models in order to avoid under- and over-
investments is crucial, and the application of the real options theory from
finance/decision science can result in significant improvements in the investment
valuation of energy retrofit solutions.
Investment Analysis Framework for Solar Ready Buildings
An Investment Valuation Model based on Real Options Theory is at the core of the
framework proposed in this paper. It receives input from external modeling
components, which generate the information that proper financial analysis of the
investment in solar ready buildings requires. Specifically, the model receives input
from an external Building Energy Simulation component, which is used to assess the
energy performance of the solar ready building prior to and after the installation of the
PV panels. Thus, the module determines the potential energy savings resulting from
the installation of the PV panels. An important component of our model is the Retail
Energy Price Modeling module, which projects future paths for the energy
price. The financial benefit of installing the PV panels will be calculated based on
these energy price models. The other component is Experience Curve Modeling,
which is used to characterize how price and efficiency of the PV technologies evolve
over time. This is critical in finding the optimal investment time for a proposed
energy retrofit. The modeling process is described in the following sections.
Building Energy Simulation: Characterize Energy Savings Performance
The Building Energy Simulation component explicitly addresses the determination of
the energy savings performance of PV panels. The analysis first quantifies the
performance of the solar ready building prior to the installation of the PV panels
considering a variety of factors, including the meteorological, urban, and microclimate
effects related to the environmental conditions around the building. Next, the
simulation model quantifies the expected level of energy savings in the building
following the installation of the solar panels. The detailed discussion about the
implementation of Building Performance Simulation is out of the scope of this paper.
Our financial analysis only uses the expected energy consumption of the solar ready
buildings prior to the installation of the solar panels and after their potential
installation as the essential inputs.
Retail Energy Price Modeling: Create a Stochastic Model for Energy Price
Retail Energy Price Modeling explicitly addresses uncertainty about energy price as
major benefit driver of an energy retrofit investment. Financial benefits of energy
savings depend on the price of energy in the utility retail market. Although average
energy price rises over time, it is subject to considerable short-term variations. A
Binomial Lattice model (See Hull (2008) for detailed descriptions) can be created to
characterize the energy price uncertainty. A binomial lattice model is a simple,
discrete random walk model, which has been used to describe evolving uncertainty
about energy price (Liski and Murto 2010; Ellingham and Fawcett 2006). The
modeling choice of binomial lattice is also consistent with the general body of
knowledge in real options (Hull 2008; Luenberger 1998). In economics and finance,
binomial lattice is an appropriate model to capture uncertainty about a factor like
energy price that grows over time plus random noise (Dixit and Pindyck 1994).
Binomial Lattice Model
To define a binomial lattice (Figure 2) for energy price (S), a basic short period with
length ∆t will be considered. Suppose the current energy price is S0. Energy price in
the next period is one of only two possible values: u×S0 or d×S0 where both u and d
are positive rates with u>1 and d<1. The probabilities of upward and downward
movements are p and 1-p, respectively. This variation pattern continues on for
subsequent periods until the end of investment time horizon. Binomial lattice
parameters can be determined using data on the expected annual growth rate of
energy price (α) and the annual volatility of energy price (σ), following the
formulation in Hull (2008). This binomial lattice can be used to generate future price paths.
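For illustration, the lattice step parameters can be computed with the standard relations u = exp(σ√Δt), d = 1/u, and p chosen so the expected one-step growth matches exp(αΔt), consistent with Hull (2008). The following Python sketch is illustrative only; the function name and example values are ours, not part of the original model.

import math

def lattice_parameters(alpha, sigma, dt):
    # Up/down factors and upward-move probability for one lattice step of length dt,
    # given the annual growth rate alpha and annual volatility sigma (Hull 2008).
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(alpha * dt) - d) / (u - d)
    return u, d, p

# Illustrative values: 4% expected annual growth, 20% volatility, 1-year steps
u, d, p = lattice_parameters(alpha=0.04, sigma=0.20, dt=1.0)  # u ~ 1.221, d ~ 0.819, p ~ 0.55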
Monte Carlo Simulation
Next, Monte Carlo simulation technique can be applied to generate several random
paths for energy price S – from the start to the end of investment time horizon – based
on the described binomial lattice. Considering the binomial lattice formulation,
energy price in any period of the lattice is a random variable that follows a discrete
binomial distribution; this is the basis of applying Monte Carlo simulation technique
for generating a large number of random energy price paths along the investment time
horizon (Figure 1). Random energy price paths are used to compute respective energy
savings series. In addition to benefits, it should be specified how the initial cost of the
PV panels changes over time to find when it is optimal to invest in. This is discussed
in the following section.
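A minimal sketch of this path-generation step is given below; the parameter values are illustrative placeholders consistent with the lattice sketch above, not calibrated results.

import random

def simulate_price_path(s0, u, d, p, n_periods, rng=random):
    # One random energy price path along the binomial lattice: at each step the price
    # moves up by factor u with probability p, otherwise down by factor d.
    path = [s0]
    for _ in range(n_periods):
        path.append(path[-1] * (u if rng.random() < p else d))
    return path

# e.g., 1,000 random 20-year annual paths starting from $0.1031/kWh (illustrative)
paths = [simulate_price_path(0.1031, u=1.221, d=0.819, p=0.55, n_periods=20)
         for _ in range(1000)]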

Figure 1: Random Energy Price Paths along the Binomial Lattice


Experience Curve Modeling: Create an Experience (Learning) Curve for the
Proposed Emerging Technology
The concept of Experience Curve describes how the marginal costs decline with
cumulative production over time (Hartley et. al 2010; Weiss et al. 2010). Typically,
this relationship is characterized empirically by a “Power Law” of the form: P_t = P_0·X^(−α),
where P_0 is the initial price ($ cost of the first megawatt (MW) of sales), X is the
cumulative production in MW up to year t, and 2^(−α) is the Progress Ratio (PR); for each
doubling of the cumulative production (sales) the cost declines to PR% of its previous
value. For instance, Figure 2 shows an experience curve created for PV modules. The
apparent decline in costs may be due to several reasons, including process innovation,
learning-by-doing, economies of scale, R&D expenditures, product
innovation/redesign, input price declines, etc. (Hartley et. al 2010; Yu et al. 2010).
Experience Curve Modeling characterizes price reduction and efficiency
improvement trends of a proposed emerging technology. The parameter α in the
experience curve – i.e., P_t = P_0·X^(−α) or ln(P_t) = ln(P_0) − α·ln(X) – is defined using
historical data of marginal costs and cumulative productions of the emerging
technology. α can be estimated by a standard Ordinary Least Square (OLS) method.
Nevertheless, the development of experience curves is not without trouble mainly
because the estimation of PR for each technology is subject to great uncertainty (van
Sark et al. 2007); it is not easy to forecast whether this PR remains constant or changes
over time (Yeh et al. 2009). Research has been focused on development of models
that incorporate such uncertainties (Yeh et al. 2009; Gritsevskyi and Nakicenovic
2000). The best engineering judgment for the future level of decline in price of a
technology can be used in these circumstances to characterize the cost trend of PV.
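As a sketch of this estimation step, the OLS fit of the log-linear experience curve can be written as follows; the data points are invented for illustration and are not the PV data behind Figure 2.

import numpy as np

def fit_experience_curve(cumulative_mw, price):
    # OLS fit of ln(P_t) = ln(P_0) - alpha*ln(X) to historical observations of
    # cumulative production X (in MW) and marginal price P_t.
    x = np.log(np.asarray(cumulative_mw, dtype=float))
    y = np.log(np.asarray(price, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)   # highest-degree coefficient first
    alpha = -slope
    p0 = float(np.exp(intercept))
    progress_ratio = 2.0 ** (-alpha)         # cost multiplier per doubling of X
    return p0, alpha, progress_ratio

# Hypothetical observations, for illustration only
p0, alpha, pr = fit_experience_curve([10, 50, 250, 1250], [60.0, 35.0, 20.0, 12.0])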
Figure 2: Experience Curve of PV Modules 1968 to 2006


Investment Valuation Modeling based on Real Options Analysis
With the input from the above three steps, the Investment Valuation Modeling module will determine
the optimal time to invest in the installation of the PV panels in a solar ready building.
It also establishes the value of embedding flexibility in the building.
A probabilistic NPV analysis can be conducted to describe the financial risk profile of
the immediate investment in the PV panels. This is carried out under the assumption
that investors adopt the current PV technologies right away at the current price and
efficiency rate. Randomly generated energy saving streams are used to characterize
investors’ NPV distribution (Figure 3). Investors’ cost of capital or required rate of
return can be used as the discount rate in NPV analysis.
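A minimal sketch of this probabilistic NPV step is shown below, assuming annual energy-saving cash flows and an up-front installation cost; all names and numbers are illustrative placeholders.

def npv(annual_savings, discount_rate, initial_cost):
    # Net present value of a stream of annual savings against an up-front cost.
    return -initial_cost + sum(cf / (1.0 + discount_rate) ** t
                               for t, cf in enumerate(annual_savings, start=1))

# One NPV per simulated energy-saving stream yields the investor's NPV distribution
simulated_savings = [[1200.0] * 20, [1350.0] * 20]   # illustrative annual savings ($/yr)
npv_distribution = [npv(s, discount_rate=0.07, initial_cost=25200.0)
                    for s in simulated_savings]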
In addition, using the risk-neutral valuation method – developed in mathematical
finance for pricing options and derivatives – the correct market-based value of a
delayed PV installation in the solar ready house can be determined. In this technique,
the probabilities of upward and downward movements in the initial energy price
binomial lattice are modified – as described by (Luenberger 1998; Hull 2008) – to
conduct option valuation. The risk-neutral binomial lattice can then be used as a decision
tree to determine the optimal investment time. Hence, the investors’ NPV distribution is
calculated considering this optimal PV installation time. The difference between
expected investors’ value under immediate and delayed investment represents the
expected value of optimal delayed investment (Figure 4).
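One common way to implement this valuation is backward induction over the risk-neutral lattice, comparing at every node the payoff of installing immediately with the discounted expected value of waiting. The sketch below is a simplification under that assumption; the exercise_payoff function, which maps a node's energy price and period to the value of installing PV at that time, is a placeholder to be supplied by the analyst.

import math

def delayed_investment_value(s0, u, d, r, dt, n, exercise_payoff):
    # Value of the option to delay PV installation on an n-period recombining
    # binomial lattice, using the risk-neutral probability q and backward induction.
    q = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    # At expiration: install if the payoff is positive, otherwise do nothing.
    values = [max(exercise_payoff(s0 * u**j * d**(n - j), n), 0.0) for j in range(n + 1)]
    for step in range(n - 1, -1, -1):
        values = [
            max(exercise_payoff(s0 * u**j * d**(step - j), step),   # install now
                disc * (q * values[j + 1] + (1 - q) * values[j]))   # keep waiting
            for j in range(step + 1)
        ]
    return values[0]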

Figure 3: Investor’s NPV Distribution of Immediate PV Installation


Impact of the Political and Regulatory Environments
The Political and Regulatory Environments component encompasses the impact of energy
efficiency policies and incentive programs on investment valuation. Scenario analysis
should be applied to specify possible energy targets and their likelihoods. Random
upgrade scenarios, e.g., regulatory, political, technical, and/or market environments,
in which an energy retrofit solution takes place should also be generated. Each
scenario can be investigated with respect to its impact on future level of energy price,
as well as its contribution to cost reductions of the proposed energy technology.
Through what-if analyses, the impact of the regulatory conditions on the investment
timing for an energy retrofit solution can be evaluated.

Figure 4: Investment Value of Optimal Delayed Investment


Illustrative Example
Based on the proposed investment analysis framework, the financial performance of
the “flexible” Solar-Ready Building was compared with the financial performance of
the “fixed” Solar Building. The initial cost of preparing electrical, structural, and
roofing systems for PV panels was considered to be $10,000. This is the additional
cost of embedding flexible features in a solar-ready building. Also, it was supposed
that the purchase price of PV panels, with a service life of 20 years, is currently $4/W
and is anticipated to decrease every year due to the experience curve effect
(PR=0.46329). Solar panels for this building were required to provide 6,300W power
and to generate 12,000 kWh per year for electricity consumption in the building. The
initial retail price of electricity was also assumed to be $0.1031/kWh; this unit price
changes over time with an expected annual growth rate of 4% and a volatility of 20%.
These values were used to create a binomial lattice to model electricity price
variations. Financial benefits of PV panels are in terms of energy savings, which must
otherwise be purchased from the utility company. Federal and State tax benefits are
$5,000 and the homeowner’s discount rate is 7%/year. Under these circumstances, the
real options analysis methodology was applied and the financial performance of solar
building and solar-ready building under uncertainty about the electricity price were
evaluated. Figure 5(a) shows the optimal electricity price, which triggers conversion
from a solar-ready building to a solar building; the increasing boundary effect is due
to the option expiration in 2030. Below the price threshold, an investor or homeowner
should delay the installation of PV panels. When the electricity price rises to a
substantially high level, the value of waiting becomes lower than the energy savings
benefits of the immediate PV panels installation; therefore, the solar-ready building
should be converted into the PV building. Figure 5(b) shows the likelihood profile of
the optimal conversion year; this is the probability of the event that the random
electricity price path reaches the optimal investment threshold specified in (a) for the
first time in the current year. It can be seen that initially waiting is more valuable than
immediate exercise; but, as the time passes, the opportunity cost of waiting becomes
large enough that triggers investment. Figure 5(c) shows the NPV distribution of a
solar-ready building under uncertainty about energy savings. Figure 5(d) shows the
NPV Cumulative Distribution Functions (CDFs) of solar and solar-ready buildings.
The expected NPV of the solar building is $-11,772 and the chance of investment
loss, i.e., Probability (NPV<0), is approximately 75%, which makes the solar building
an unattractive retrofit solution. Delayed retrofit decision-making can enhance the
value of solar upgrade. The two-phase development of the solar-ready building
represents the hidden value of flexibility in the solar upgrade. It can be seen that the
expected NPV of the solar-ready building is $5,480, which is much larger than the
expected NPV of the solar building, $-11,772. Therefore, the expected price of
flexibility in the solar-ready building is $5,480 − (−$11,772) = $17,252. Also, due to the
two-stage installation of PV systems, the chance of investment loss for the solar-ready
building is approximately 35%, which is much lower than 75% for the solar building.

Figure 5: (a) Optimal Retail Price of Electricity ($/kWh) Triggering the installation of
Solar Panels; (b) Installation Likelihood of PV Panels Over the House Service Life;
(c) NPV Distribution of Solar Ready Home; (d) NPV Cumulative Distribution
Functions (CDFs) of Solar House and Solar-Ready Building and Price of Flexibility
Conclusion
Better investment decision models can facilitate achieving energy savings in the
buildings through increasing the efficiency and effectiveness of investments in energy
efficiency measures. The proposed investment analysis framework for evaluating
investment in solar ready buildings will enlighten investors about the economic
inefficiencies that conventional fixed energy investment strategies produce and
facilitate the valuation of the flexible solutions that mitigate such inefficiencies.
Explicit pricing of flexibility is significant for systematic decision-making beyond the
current energy target; embedded options in delayed retrofit solutions reflect the
possibility of meeting future stricter targets and preparing for future upgrades.
The proposed investment framework can be used as a decision making instrument,
looking at different scenarios in technology and market developments, and deciding
between immediate or delayed investment in PV technologies. Thus, it can also
become an instrument in the selection of the right government incentives over time.
As a corollary, the methodology will be used to single out the types of technologies
that are ripe in the expected market of competing sustainable technologies.
References
Ashuri, B., Kashani, H., Molenaar, K., and Lee, S. (2010). "A Valuation Model for Choosing
the Optimal Minimum Traffic Guarantee (MTG) in a Highway Project: A Real-Option
Approach " Proceedings of the 2010 Construction Research Congress, Canada.
ASTM (2010) Standard Practice for Measuring LCC of Buildings & Building Systems.
Borison, A. (2005) "Where Are the Emperor's Clothes?" J. App. Corp. Fin., 17(2), pp. 17-31.
Crawley, D.B. (2007) “Creating Weather Files for Climate Change and Urbanization Impacts
Analysis”. Proceedings of the 10th International IBPSA Conference, Beijing, China.
Dixit, A., and Pindyck, R. (1994) Investment Under Uncertainty, Princeton University Press,
Draper, N.R. and Smith, H., (1998) Applied Regression Analysis, 3rd ed., Wiley, New York.
Ellingham, I., and Fawcett, W. (2006) New Generation Whole-life Costing: Property and
Construction Decision-making Under Uncertainty, Taylor & Francis, New York, NY.
Greden, L. V., Glicksman, L. R., and Lopez-Betanzos, G. (2006) "A Real Options
Methodology for Evaluating Risk and Opportunity of Natural Ventilation." J. Sol. Energ.
Eng., 128(2), pp. 204-212.
Greden, L., and Glicksman, L. (2005) "A Real Options Model for Valuing Flexible Space."
Journal of Corporate Real Estate, 7(1), pp. 34-48.
Gritsevskyi, A. and Nakicenovic, N. (2000) “Modeling Uncertainty of Induced Technological
Change”, Energy Policy, 28, pp. 907–921.
Hartley, P., Medlock, K. B., Temzelides, T. and Zhang, X. (2010) “Innovation, Renewable
Energy, and Macroeconomic Growth”.
Hopfe, C., Augenbroe, G., Hensen, J., Wijsman, A. and Plokker, W. (2009) “The impact of
climate scenarios on Decision making in building performance simulation: a case study”.
Hu, H. (2009) Risk-Conscious Design of a Zero Energy House. Ph. D. dissertation, Ga Tech
Hull, J. C. (2008) Options, Futures, and Other Derivatives, Prentice Hall, New Jersey.
Luenberger, D. G. (1998). Investment Science, Oxford University Press, New York.
McGraw-Hill Construction (2010) “Green Building Retrofit & Renovation”.
Morris, M.D. (1991) "Factorial Sampling Plans for Preliminary Computational
Experiments," Technometrics, 33(2), pp. 161-174.
Robinson D., Campbell N., Gaiser W., Kabele K., Le-Mouel A., Morel N., Page J.,
Stankovic, S., and Stone, A. (2007). “Suntool - A New Modeling Paradigm for Simulating
and Optimizing Urban Sustainability”, Solar Energy, 81(9), pp. 1196-1211.
Rye, C. (2008). “Solar-Ready Buildings.” Solar Power Authority the Dirt on Clean, 2008.
Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M., and
Tarantola, S. (2008). Global Sensitivity Analysis: The Primer. John Wiley & Sons,
Chichester.
SBI Energy (2010) “Global Green Building Materials and Construction”, 2nd Edition.
van Sark, W. G. J. H. M., Alsema, E. A., Junginger, H. M., de Moor, H. H. C., and
Schaeffer, G. J. (2008) “Accuracy of Progress Ratios Determined From Experience Curves:
The Case of Crystalline Silicon Photovoltaic Module Technology Development” Progress in
Photovoltaics: Research and Applications, (16), pp. 441–453.
Weiss, M., Junginger, H. M., Patel, M.K. and Blok, K. (2010) “Analyzing Price and
Efficiency Dynamics of Large Appliances with the Experience Curve Approach”, Energy
Policy, 38, pp.770–783.
Yeh, S., Rubin S., Hounshell, D.A. Taylor, M. R. (2009) “Uncertainties in Technology
Experience Curves for Integrated Assessment Models”, available at:
http://repository.cmu.edu/epp/77
Yu, C.F., van Sark, W.G.J.H.M., Alsema, E.A. (2010) Unraveling the photovoltaic
technology learning curve by incorporation of input price changes and scale effects,
Renewable and Sustainable Energy Reviews, Article in Press.
Agile IPD Production Plans as an Engine of Process Change

Renate Fruchter1 and Plamen Ventsislavov Ivanov2


1 ASCE Member, Director of Project Based Learning Laboratory (PBL Lab), Civil and Environmental Engineering, Stanford University, fruchter@stanford.edu
2 Engineer, Clark Construction Group LLC., past Graduate Student, Civil and Environmental Engineering, Stanford University, pivanov@stanford.edu

ABSTRACT
The design and construction industry experiences a continuous pressure to
reduce time to market through fast track projects. Projects engage large
multidisciplinary teams that interact and impact each other’s solutions. Integrated
project development (IPD) process represents an improvement to the waterfall
process. Nevertheless, ethnographic observations show that state of practice IPD
processes still lead to significant rework and coordination. The aim is to improve IPD
team process in order to reduce rework, coordination, number of iterative design
cycles, and length of design iteration cycle. Today, IPD is achieved through co-
creation of production plans that enable participants to explicitly represent their tasks
and workflow. This paper presents an approach for agile IPD production plans that
extends this state-of-practice by modeling information about the task interdependence
types, and a process to make timely and explicit decisions when and how to form
subgroups that engage in sprints to address reciprocal task interdependencies.

INTRODUCTION
The design and construction industry experiences a continuous pressure to
reduce time to market through fast track projects. Complex building projects engage
large teams of stakeholders from diverse trades that interact and impact each other’s
decisions and solutions. Integrated project development (IPD) process is becoming
increasingly central to large complex building projects and large teams of
stakeholders [AIA 2007]. IPD represents a significant improvement to the waterfall
sequential project process. Nevertheless, ethnographic observations show that state of
practice IPD processes still lead to significant rework and coordination especially in
cases of reciprocal task interdependence. This paper presents initial results of an
ongoing project focused to formalize, develop, deploy, and assess an agile IPD
process that extends the state of practice IPD. The aim is to improve IPD team
process in order to reduce rework, coordination, number of iterative design cycles,
and length of design iteration cycle.
Corporate experience claims that the most efficient and effective interactions
are when all project stakeholders are in face-to-face collocated team environments
such as the Jet Propulsion Lab Integrated Concurrent Engineering (ICE) space
[Chachere, Kunz, Levitt 2004], the iRoom at CIFE, Big Room at DPR or Turner
Construction. This is extreme collaboration, when people, content, models, activities
and processes are collocated. Nevertheless, the AEC industry experiences a
continuous increase in mobility, geographic distribution of project stakeholders,
collaboration technologies, digital content, interactivity, and convergence of physical
and virtual workplaces.
People and knowledge represent the corporations’ strategic asset. Knowledge
workers in the AEC industry are challenged to engage in today’s competitive markets
in which corporate objectives are to significantly reduce (up to 50%) project duration,
travel budgets, work space, and personnel, as well as significantly increase
productivity (by 50%) and maintain a high quality of their products. Fast track
projects lead to overlapping interdependent tasks and consequently generate large
unanticipated volumes of coordination and rework [Levitt and Kunz, 2002]. Such
rework is not planned for, is hard to track, manage, and acknowledge. Managers
assign resources only to direct tasks and work. This can lead to stress, underestimated
scope and scale of coordination, unrealistic schedules with heroic attempts to meet
deadlines.
Today, IPD is achieved through co-creation of production plans that enable
participants to explicitly represent their tasks and workflow. This paper presents an
approach for agile IPD production plans that extends this state-of-practice by
modeling information about the task interdependence type, and a process to make
timely and explicit decisions when and how to form subgroups that engage in sprints
to address reciprocal task interdependencies.

PRACTICAL AND THEORETICAL POINTS OF DEPARTURE


Our approach builds on practical and theoretical points of departure that include:
ethnographic observations of the IPD state-of-the-art in the AEC industry and virtual
design and construction (VDC) [Khanzode et al, 2006]; the AIA National / AIA
California Council 2007; Thompson’s (1967) contingency theory related to task
interdependence classification and task interaction intensity; and best practice
principles from agile software development.
AIA offers a working definition for IPD that the AEC industry aims to achieve.
IPD is a project delivery approach that assembles people, systems, business structures
and practices and leverages the talents of all participants in order to eliminate waste
and optimize efficiency during all phases of design, fabrication and construction. IPD
focuses its integration effort at two levels: process through task-actor
interdependence, and 3D multidisciplinary model integration through clash detection.
VDC facilitates mapping the work process by building symbolic models of the
product, organization and process early on in the project before a large commitment
of time or money is made.
While IPD is to date more of a goal than a reality for most of the design and
construction industry, there are a few ground-breaking projects that are already using
this new collaboration and delivery method, such as Sutter Medical Center Castro
Valley (SMCCV) in CA and Autodesk’s AEC headquarters project in Waltham,
MA. We performed an ethnographic study of SMCCV in 2009. SMCCV will be a
new state of the art hospital that will meet California’s hospital seismic safety law,
SB1953, passed in 1994. The deadline for complying with SB1953 is 2013. Sutter
Health looked for new ways to transform and improve the design and construction
delivery process with an accelerated schedule that is 30% faster than the traditional
design-bid-build process. They apply lean construction principles, IPD process, and
BIM technology. In addition, they engaged from day one a multidisciplinary core
team of 10 stakeholders that included the owner Sutter Health, architect –
Devenney Group, structural design – TMAD-Taylor & Gaines, general contractor
DPR Construction, mechanical and plumbing design – Capital Engineering, electrical
design – The Engineering Enterprise, mechanical design-assist and construction –
Superior Air Handling, plumbing design-assist and construction – JW Meadows,
electrical design-assist and construction – Morrow Meadows, fire protection –
Transbay Fire Protection, lean BIM project integration – GHAFARI Assoc. All core
team members are geographically distributed in California, Arizona, and Michigan.
Most of them travel to the Castro Valley project site weekly or biweekly for a collocated
project process coordination and 3D CAD / BIM integration meeting in their Big
Room where they use two SmartBoards to view floor plans and NavisWorks
integrated model and clash detection. These face-to-face meetings allow participants
to identify problem areas in the building through cross-disciplinary reviews in
NavisWorks. There are typically two dozen participants in the Big Room that engage
in the cross-disciplinary review process. Often team members that are not present but
need to address an issue connect to the Big Room via GoToMeeting. The review
process was typically visual inspection of each room in the BIM model that was
operated by a GHAFARI participant. At times, team members would approach the
displays to point or annotate the model as they discussed issues. The accuracy of the
model-based approach facilitates cost estimating and code checking. The team uses a
technique developed by Toyota called Value Stream Mapping (VSM) that enables
them to create a representation of the production plan workflow that they regularly
evaluate. The ethnographic observations indicate that:
 The IPD production plan workflow description does not distinguish among task
interdependence types. However, different types of task interdependencies will
have various levels of uncertainty and require corresponding coordination
mechanisms. Thompson distinguished between three types of task
interdependence: pooled, sequential and reciprocal [Thompson 1967] and
attributed to each a type of task-actor coordination. (Table 1) The types of task
interdependence depend on the degree or intensity of interaction. Thomson
suggests that activities and actors that need intense interaction should be placed
near to each other spatially and organizationally.

Table 1. Task interdependence, uncertainty, coordination mechanisms, and intensity of interaction

Task interdependence | pooled | sequential | reciprocal
Task uncertainty | low | medium | high
Task coordination mechanism | standardization | planning | mutual adjustment
Task intensity of interaction | none or low | medium | high
 Significant rework occurs as unpredicted changes lead to reciprocal task
interdependence across disciplines that originally were treated as pooled tasks.
This, in turn, leads to high uncertainty and a need for coordination by mutual
adjustment among the impacted trades. For instance, a “small change” made by
one discipline may remain undetected since the activities and content of each
team member are not persistently visible and available to all team members for
model based assessment. Eventually they are identified during the review
meetings. Often “small changes” in one discipline can lead to significant impacts
across disciplines and consequently to major rework and coordination efforts
(e.g., a “small” change made in the structural system clashed with MEP, sprinkler
system, architectural floor to ceiling height).
 Significant rework in different disciplines leads to parallel moving targets since
all trades revise their discipline solutions and models concurrently in between
project team meetings. This increases the number of iterations and rework.
 During Big Room sessions a lot of time is spent talking in large groups about
issues, instead of identifying issues and going off in subgroups to solve them.
 There is no consistent capture of issues, their progress, and a feedback link to the
IPD workflow process representation. The issues are highlighted in Navisworks.
Consequently, the team needs to re-learn the issues after some weeks and
reconnect the people who need to solve the problem.
The software industry offers an effective process model known as scrum or
agile software development. Scrum is a test-driven software development process, as
opposed to specification-driven software implementation. Key scrum principles include: the
scrum master; the scrum team, identified up front as a function of competencies
and availability (typically 7 +/- 2); client stories that define the desired user
experience of the software; the sprint, a fixed time frame in which the scrum team
develops the functionalities of a specific software module while all other modules
remain unchanged; the number of tasks per sprint; the client defining DONE, i.e., the
completion of a sprint; acceptance tests; test cycles; and the “build,” the integration of
software code parts, which is done at the end of every day.
Our hypothesis is that explicit and transparent representation of task
interdependence types in the production plan facilitates agile IPD through planned
subgroup sprints that reduce response latency, rework, coordination, number of
iterative design cycles, and length of design iteration cycle.

TESTBED
We used the AEC Global Teamwork course as a testbed. The course offers a
project-based learning (PBL) experience focused on problem-based, project-organized
activities that produce a product for a client and re-engineered processes that bring
together people from multiple disciplines; it engages faculty, practitioners, and students
from different disciplines, who are geographically distributed. It has been offered annually
since 1992, from January through May. It engages architecture, structural engineering,
and construction management students from universities in the US, Europe and Asia
[Fruchter 1999, 2006]. The AEC student teams work on a university building project.
The project specifications include: (1) building program requirements for a 30,000
sqft university building; (2) a university campus site that provides local conditions
and challenges for all disciplines, e.g. local architecture style, climate, and
environmental constraints, earthquake, wind and snow loads, flooding zones, access
roads, local materials and labor costs; (3) a budget for the construction of the
building, and (4) a time for construction and delivery. The project progresses from
conceptual design in Winter Quarter to 3D and 4D CAD models of the building and a
final report in Spring Quarter. The teams experience fast track project process with
intermediary milestones and deliverables. They interact with industry mentors who
critique and provide constructive feedback.
All AEC teams hold weekly two hour project review sessions similar to
typical building projects in the real world. During these sessions they present their
concepts, explain, clarify, question these concepts, identify and solve problems,
negotiate and decide on changes and next steps. The interaction and the dialogue
between team members during project meetings evolve from presentation mode to
inquiry, exploration, problem solving, and negotiation. Similar to the real world, the
teams have tight deadlines, engage in design reviews, negotiate and decide on
modifications. To view AEC student projects please visit the AEC Project Gallery
(http://pbl.stanford.edu/AEC%20projects/projpage.htm).

AGILE IPD PRODUCTION PLAN APPROACH


Today, IPD is achieved through co-creation of production plans that engages
all the key project stakeholders to plan and re-plan. The IPD production plans enable
participants to explicitly represent their tasks, who is responsible for the execution of
the task, by when it will be accomplished, and task sequence.
We introduce an approach for agile IPD production plans that extends this
state-of-practice by modeling information about the task interdependence type (i.e.,
pooled, sequential, reciprocal) and a process that helps the team make timely
and explicit decisions on when and how to form subgroups in the case of reciprocal task
interdependencies to engage in a sprint (a concept modeled from agile software
development). Agility is the process of organizing the production process into sprints
based on the explicit modeling of task interdependence types. Agility of a team is
directly related to subgroup formation and sprints that address reciprocal
interdependent task and reduce time to market. More specifically the project team
members co-create a detailed task list that is revised weekly allowing for planning
and re-planning to respond to emerging challenges and changing needs of the client.
The task list structure includes rubrics that identify: What (task) – By Whom
(responsible person and trade) – For Whom (trade that requires the output from
another discipline) – By When (deadline for deliverable) – How Long (estimated time
it will take to produce the deliverable). The task list also is an explicit indication of
task commitments. The completion of a deliverable identified as DONE is determined
by the receiver “For Whom” not by the sender “By Whom.” The task list provides
transparency, as well as facilitates tracking, statusing of tasks, interdisciplinary
understanding, and goal oriented teamwork, planning and re-planning.
The production plan is created to show the workflow. A key extension of the
traditional production plan is to explicitly model the task interdependence types. This
enables the team to identify which issues lead to reciprocal task interdependencies,
who is impacted and needs to be involved in a scrum subgroup sprint, when to
schedule the sprint, as well as what the deliverable of the sprint is. The structure of
the task list is further extended to include the rubrics: Subgroup Members - Date for
Subgroup Sprint – Deliverable(s) (Figure 1). The agile IPD production plan is the
result of integrating the Task List, Production Plan, and Task Interdependence Types
(i.e. pooled, sequential, and reciprocal). State-of-the-art approaches and systems (e.g.,
SPS software) facilitate only the representation of production plans with sequential
task interdependencies. This leads to linear sequences of repeatable workflow
segments every time there is reciprocal task interdependence. More importantly,
sequential production plans do not highlight the need for subgroup formation to
immediately address issues triggered by such reciprocal interdependent project tasks.
Agile production plans model such workflow situations. This enables the team to
decide in a timely manner when and why to form a subgroup and engage in a sprint to
address an issue that had reciprocal task interdependence and required close and
intense interactions among specific team members and trades.
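A minimal sketch of such an extended task record, and of flagging the reciprocally interdependent tasks that still need a subgroup sprint, is shown below; the field names follow the rubrics above, while the class and function names are illustrative only.

from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Interdependence(Enum):
    POOLED = "pooled"
    SEQUENTIAL = "sequential"
    RECIPROCAL = "reciprocal"

@dataclass
class Task:
    what: str                          # task description
    by_whom: str                       # responsible person / trade
    for_whom: str                      # trade that receives the deliverable
    by_when: str                       # deadline for the deliverable
    how_long_days: float               # estimated time to produce the deliverable
    interdependence: Interdependence   # pooled, sequential, or reciprocal
    subgroup_members: List[str] = field(default_factory=list)
    sprint_date: str = ""              # date scheduled for the subgroup sprint
    deliverable: str = ""
    done: bool = False                 # DONE is declared by the receiver (For Whom)

def tasks_needing_sprints(task_list):
    # Reciprocally interdependent tasks that do not yet have a sprint scheduled.
    return [t for t in task_list
            if t.interdependence is Interdependence.RECIPROCAL and not t.sprint_date]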
The AEC Ridge Team case study illustrates how the proposed agile IPD
production plan approach was implemented and led to process changes (Figure 1).
Ridge team was composed of an architect in Puerto Rico, two structural engineers at
Stanford, and one construction manager in Stockholm Sweden. Each of them was
working in the respective university laboratory, using their laptops on WiFi, with a
headset for audio. They used the 3D Team Neighborhood in Teleplace
(http://www.teleplace.com/) as their multimedia collaboration environment [Fruchter and
Cavallin, 2011]. The 3D Team Neighborhood provided a highly immersive
environment that enabled the team members to construct in real time their
collaboration space around them as the dialog and interaction evolved during the
meeting. Each team member could share their content on any number of displays that
were created on an as-needed basis, as well as manipulate and annotate any content
displayed in their shared workspace. This provided a persistent presence of team
members, visibility and transparency of activities performed and content created by
them that allows for immediate interaction and co-creation of solutions to problems.
There was a consistent and continuous capture of project issues they jointly
identified, tracking their progress and linking it to the task list and agile production
plan as they planned and re-planned weekly. As they identified tasks that had
reciprocal interdependencies during their weekly project review sessions they formed
subgroups and engaged in parallel sprints for given amounts of time to produce
specific deliverables. These intense and close subgroup sprints avoided significant
rework and coordination, and led to zero response latency.
The explicit task list and agile production plan provided quantitative
information to determine the weekly work distribution among the team members as
well as track the team progress by means of the weekly burndown chart (Figure 2).
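For example, a weekly burndown series can be derived directly from the task list by summing the estimated effort of tasks not yet marked DONE at the start of each week; the sketch below uses illustrative field names rather than the team's actual tooling.

def burndown(tasks, n_weeks):
    # Remaining estimated effort (days) at the start of each week; a task stops
    # contributing once the receiving trade has marked it DONE.
    return [sum(t["how_long_days"] for t in tasks
                if t.get("done_week") is None or t["done_week"] >= week)
            for week in range(1, n_weeks + 1)]

# Illustrative task records
tasks = [{"how_long_days": 3, "done_week": 2},
         {"how_long_days": 5, "done_week": None},
         {"how_long_days": 2, "done_week": 4}]
remaining_per_week = burndown(tasks, n_weeks=5)   # [10, 10, 7, 7, 5]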
A key team process transformation was observed over time (Figure 3):
 From traditional, static, linear, agenda and meeting minutes driven weekly project
review sessions – experienced during the first three weeks of their project as the
team used the sequential production plan approach;
 To agile, dynamic, concurrent, and result driven weekly project review sessions –
experienced for the rest of the twelve weeks of the project as the team adopted the
agile IPD production plan approach.
Figure 1. Agile Production Plan as a Result of the Integration of Task List, Production Plan, Task Interdependence Types, Subgroup and Sprint Identification.

Figure 2. Examples of Weekly Work Distribution and Team Burndown Chart.


Figure 3. Process Change from Traditional Meeting Dynamics to Agile Meeting Dynamics.

CONCLUSION
This paper introduced an agile IPD production plans approach that extends the state-
of-practice production plan method by modeling information about the task
interdependence type, and a process to make timely and explicit decisions when and
how to form subgroups that engage in sprints to address reciprocal task
interdependencies. The paper presents the pilot testbed and validation run in 2009-
2010. The preliminary results show that the agile IPD production plans approach
leads to fewer and shorter design iteration cycles, reduces rework, and provides a
consistent feedback link between issues, the progress made to resolve the issues, and
the IPD workflow representation and dynamic update. Agile IPD production plans act
as an engine of process change in the project team.
Building on the contingency theory task interdependence classification, Table 2
summarizes our contributions as further recommendations based on the findings of
the agile IPD production plan approach. We plan to continue deployment and
assessment of the agile IPD production plan approach in Winter and Spring 2011
with seven AEC global project teams.

Table 2. Agile IPD Recommendations.

Task interdependence | Min location required | Additional work practices | Min tools required | Process agility
Pooled | No special location requirements | Transparency of process | Task list | Regular
Sequential | Regular Big Room meetings | Asynchronous coordination | Sequential production plans | Medium
Reciprocal | Permanent physical or virtual collocation | Subgroup formation and sprints | Agile IPD production plans | High

ACKNOWLEDGEMENTS
The project was partially sponsored by the PBL Lab and CIFE at Stanford University.
The authors thank DPR and the AEC teams in 2010.
REFERENCES
AIA National / AIA California Council (2007). “Integrated Project Delivery: A Guide”
www.aia.org/contractdocs/AIAS077630
Agile Software Development www.agilemodeling.com/essays/agileSoftwareDevelopment.htm
Chachere, J., Kunz, J. and Levitt, R. (2003) Can you Accelerate your Project using Extreme
Collaboration? A Model Based Analysis, CIFE TR154.
Fruchter, R. (1999) Architecture/Engineering/Construction Teamwork: A Collaborative
Design and Learning Space. Journal of Computing in Civil Engineering, 13 (4): 261-270.
Fruchter, R., (2006) The Fishbowl: Degrees of Engagement in Global Teamwork. LNAI,
2006: 241-257.
Fruchter, R. and Cavallin, H, (2011) Attention and Engagement of Remote Team Members in
Collaborative Multimedia Environments, ASCE Computing in Civil Engineering Workshop,
Miami, June 2011.
Khanzode A., Fischer M., Reed D., Ballard G. (2006). A Guide to Applying the Principles of
Virtual Design and Construction (VDC) to the Lean Project Delivery Process, CIFE Working
Paper #093, December 2006
Levitt R., and Kunz J. (2002). Design your project organization as engineers design bridges,
CIFE Working Paper #73, August 2002
Thompson J. D. (1967). Organizations in Action, McGraw-Hill.
An Automated Collaborative Framework To Develop Scenarios For Slums
Upgrading Projects According To Implementation Phases And Construction
Planning

O. E. Anwar1 and T. A. Aziz2


1 Department of Construction Management, University of Washington, Seattle, WA 98195-1610; PH (206) 543-4736; FAX (206) 685-1976; email: elanwar@uw.edu
2 Lecturer, Architecture Department, Cairo University, Cairo, Egypt; PH/FAX +20 (23)

ABSTRACT
Slums are informal areas that are illegally developed on property of the State
with no physical planning. Accordingly, governments adopt various intervention
strategies to replace or upgrade these slums. However, implementing these strategies
is often faced with several planning and constructability challenges, because the slum
areas that should be upgraded (1) are already occupied by resident families; and (2)
are often characterized by unplanned and extremely crowded transportation networks.
Accordingly, the construction period of these upgrade projects can result in
significant social disruption to resident families and requires protracted timelines and
additional budgets. The objective of this paper is to present a multi-objective
optimization model that is capable of accelerating the delivery of urgent
redevelopments while minimizing construction costs and socioeconomic disruptions
to slum dwellers. An application example is presented to demonstrate the model
capabilities and is followed by a discussion of formulation challenges.
INTRODUCTION
Slums are areas of population concentrations developed in the absence of
Physical Planning. Slum dwellers suffer from one or more of the following
conditions: (1) lack of access to clean water; (2) lack of access to improved sanitation
facilities; (3) insufficient and overcrowded living area; (4) inadequate structural
quality or durability of dwellings; and (5) lack of tenure security (UN-HABITAT
2008). Slums represent a violation of public and/or private property and are
characterized by high crime rates and illiteracy (Abdel Aziz and El-Anwar 2010).
Dealing with urban slums is widely recognized as a global challenge, where an
estimated one billion people worldwide live in urban slums and four out of ten
inhabitants in the developing countries are slum dwellers (Abdelhalim 2010; Nijman
2008; UN-HABITAT 2003a).
There are several upgrading intervention strategies that can be employed to
deal with the slums issue, including (1) on-site redevelopment of informal areas; (2)
redevelopment and relocation; (3) servicing informal areas; (4) sectorial upgrading;
(5) planning and partial adjustment; and (6) participatory upgrading (Abdelhalim
2010; Algohary and El-Faramwy 2010). These upgrading strategies focus on different
aspects of the living environment in informal areas, such as on physical
improvements and/or on human and social development (Khalifa 2011). However,
implementing these upgrading strategies is often faced with serious construction-related
challenges, such as (1) the limited accessibility of construction equipment to the site and
the limited availability of materials storage areas, because of the disorderly and
extremely dense occupation of slums as shown in Figure 1(a); (2) the unsuitability of
slums in many cases to residential construction, because they have not been
subdivided and suffer from low load-bearing capacity of the soil; (3) the need for
temporary housing to relocate slums dwellers whose houses are affected in the course
of construction work or found to be in imminent-risk situations; and (4) the need to
use special tasks and techniques to ensure the safety of families who remain in their
locations during construction (Abiko et al. 2007; The Cities Alliance 2008).
Moreover, the relocation of families as well as the disruption to businesses and
transportation that occurs during the execution of the upgrading projects can result in
negative social and economic impacts to the resident families.
Figure 1. Multi-dimensional analysis process for slums upgrading projects: (a) attributes-loaded map (2D representation); (b) time dimension; (c) cost dimension; (d) socioeconomic dimension


To address the construction and social challenges associated with slums
upgrading projects, El-Anwar and Abdel Aziz (2011) proposed an automated
collaborative framework for optimizing urban and construction decisions for these
upgrading projects. The objective of this paper is to present the development of a
multi-objective optimization model, which represents the first step in implementing
the proposed framework. To this end, the following sections (1) provide a brief
overview of the proposed framework; (2) briefly describe the optimization model
design; (3) present an application example to demonstrate the model capabilities; and
(4) illustrate the main challenges in formulating the model and discuss possible
solutions.
FRAMEWORK OVERVIEW
The proposed framework is designed to integrate urban and construction
planning using a participatory process that solicits input from planners, contractors,
and slums dwellers in order to optimize slums upgrading projects. To this end, the
framework consists of two main phases: (1) data generation and modeling; and (2)
plans evaluation and optimization. First, the objective of the data generation and
modeling phase is to (1) generate all the needed urban, construction, and social data
from the involved stakeholders using a participatory process that involves planners,
contractors, and representatives of slums dwellers; (2) divide the slums area into
zones and propose intervention strategies for these zones based on their
characteristics and level of risk; and (3) model the generated data for each zone using
an object-oriented two-dimensional representation that enables analyzing and
utilizing this data. The product of this phase is a two-dimensional attributes-loaded
map for the slum area under consideration, as shown in Figure 1(a). In this
representation, each zone will be modeled using object-oriented programming and
will have a set of attributes, including (1) urban attributes such as the proposed
intervention strategy, the urgency of upgrading this zone based on its condition and
level of risk (using an urgency factor), roads width and conditions, and accessibility
to utilities and transportation; (2) construction attributes, such as the estimated
construction cost and duration to upgrade the zone, the need for access roads for
construction equipment, and availability of storage areas for construction materials;
and (3) social attributes such as the socioeconomic impacts of closing roads during
upgrading this zone, the number of local businesses that will be temporarily closed or
relocated, and the number of families to be relocated from this zone.
Second, the objective of the plans evaluation and optimization phase is to
identify the optimal integrated upgrading plans that can (1) maximize the benefits of
slums upgrading projects by accelerating the delivery of the urgent projects; (2)
minimize the total costs of these projects; and (3) minimize the social and economic
disruptions for resident families during the construction phases of slums upgrading.
As shown in Figure 1(b), this phase utilizes a multi-dimensional analysis process that
consists of four main modules, including (1) performing multi-objective optimization;
(2) developing time schedules; (3) incorporating cost dimension; and (4) quantifying
social disruption. The following section briefly describes the design of a multi-
objective optimization model, which represents the computational implementation of
the first module in this phase.
MODEL DESIGN
A multi-objective optimization model is designed to identify the optimal
slums upgrading plans in order to maximize benefits to residents, minimize
construction costs, and minimize the associated socioeconomic disruptions. The focus
of the model in this development stage is on the on-site redevelopment intervention
strategy. This strategy is used when housing conditions are very poor, the urban
fabric is irregular and unsafe, and/or tenure status is illegal. This intervention strategy
refers to a complete replacement of the physical fabric through gradual demolition
and in-situ construction of alternative housing (Abdelhalim 2010; Algohary and El-
Faramwy 2010).
The model optimizes the construction sequencing of the slum zones taking
into account budget constraints and logistics constraints such as the limited access of
equipment to some zones, as shown in Figure 1(a). Accordingly, the decision
variables are designed to represent this sequencing problem by defining two variables
(Dz, Rz) for each zone. The first variable (Dz) represents the start date of demolition
and site clearing activities for zone z. This construction phase cannot start unless
construction equipment has access to the zone, which is the case for zones that have
access to public roads (such as zones 1, 2, 3, 4, 7, and 10 in Figure 1) or when
adjacent zones have been demolished/redeveloped to provide access roads. The
second variable (Rz) represents the start date of the redevelopment construction
activities for zone z, such as housing construction and providing missing infrastructure
systems and lifelines. Furthermore, an additional decision variable is defined to
represent the compensation scheme (CS) for temporarily closing or relocating local
businesses during construction. The selected compensation amount affects the total
projects costs as well as the level of socioeconomic disruption experienced by the
families who benefit from those businesses. Accordingly, the total number of
variables is equal to twice the number of zones plus one. It should be noted that a more
detailed representation of zones redevelopment can be easily accommodated in the
model design.
There are three main optimization objectives in the present model. The first
objective is to maximize the benefits (OBI) of slums upgrading projects to residents
by accelerating the redevelopment of zones that have higher urgency factors, as
shown in Equation (1). The second objective is to minimize the total costs (TC) of
slums upgrading projects, as shown in Equation (2). The third objective is to
minimize the overall socioeconomic disruption (OSD) experienced by slum dwellers
during the construction activities, as shown in Equation (3).
\[
\text{Maximize } OBI = \sum_{z=1}^{Z} \frac{UF_z \times n_z}{R_z + durR_z} \qquad (1)
\]

\[
\text{Minimize } TC = \sum_{z=1}^{Z} \Big[ CostD_z + CostR_z + dur_z \times \big( nf_z \times Cost_h + nb_z \times Cost_b(CS) \big) \Big] \qquad (2)
\]

\[
\text{Minimize } OSD = \sum_{z=1}^{Z} \Big[ dur_z \times \big( W_h \times I_h + W_b \times I_b(CS) \big) \Big] \qquad (3)
\]

\[
dur_z = R_z + durR_z - D_z \qquad (4)
\]

Where, OBI is the overall benefit of upgrading the slum area; Z is the total
number of zones in the considered slum; UFz is the urgency factor for upgrading zone
z; nz is the total number of residents in zone z; Rz is the start date of redevelopment
activities for zone z; durRz is the duration of redevelopment activities for zone z; TC
is the total cost of the slum upgrading project; CostDz and CostRz are the costs of
demolition and redevelopment activities for zone z, respectively; durz is the total
duration of direct disruption to zone z, which starts with the demolition and site
clearing activities, ends with the finish of redevelopment activities, and can be
calculated as shown in Equation (4); nfz is the number of families to be temporarily
relocated from zone z during construction; Costh is the cost of providing temporary
housing for one family per week; nbz is the number of families directly affected
by the temporary closure/relocation of local businesses during the upgrading of zone z;
Costb(CS) is the compensation amount to be paid to each family affected by business
disruption and is a function of the compensation scheme (CS); OSD is the overall
socioeconomic disruption to slum dwellers during the upgrading work; Wh and Wb are
the relative weights of the socioeconomic impacts of temporarily relocating families
and temporarily disrupting local businesses, respectively; Ih and Ib(CS) are the
socioeconomic impacts of temporarily relocating families and temporarily disrupting
local businesses, respectively, which can take values from 0 (negligible impact)
to 3 (major impact); and Dz is the start date of demolition and site clearing
activities for zone z.
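For clarity, the following minimal Python sketch shows how Equations (1) through (4) could be evaluated for one candidate sequencing plan; the zone attribute names, dates, and cost inputs are placeholders introduced for illustration and are not taken from the paper.

def evaluate_plan(zones, D, R, CS, cost_h, cost_b, w_h, i_h, w_b, i_b):
    """Evaluate one candidate upgrading plan against Equations (1)-(4).

    zones        : list of dicts with keys 'UF', 'n', 'durR', 'costD', 'costR', 'nf', 'nb'
    D, R         : demolition and redevelopment start dates, one entry per zone
    CS           : index of the selected compensation scheme
    cost_b, i_b  : per-scheme compensation cost and impact tables (e.g. lists)
    """
    obi = tc = osd = 0.0
    for z, zone in enumerate(zones):
        finish = R[z] + zone['durR']                 # finish date of redevelopment
        dur = finish - D[z]                          # Eq. (4): duration of direct disruption
        obi += zone['UF'] * zone['n'] / finish       # Eq. (1): benefit of early delivery
        tc += (zone['costD'] + zone['costR']         # Eq. (2): construction costs plus
               + dur * (zone['nf'] * cost_h          # temporary housing costs and
                        + zone['nb'] * cost_b[CS]))  # business compensations
        osd += dur * (w_h * i_h + w_b * i_b[CS])     # Eq. (3): weighted disruption
    return obi, tc, osd

A function of this form can serve directly as the fitness evaluation inside any multi-objective search, including the NSGA-II implementation described next.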
The optimization model is implemented using the Non-Dominated Sorting
Genetic Algorithm II (NSGA-II) because of (1) the non-linear and multi-objective
nature of the problem; (2) the need for near-optimal solutions; (3) the large search
space; and (4) the superior performance of NSGA-II and its unique characteristics,
such as fast non-dominated sorting, crowding, and elitism (Deb et al. 2001). The
following section presents a brief application example to demonstrate the model
capabilities and formulation challenges.
APPLICATION EXAMPLE
Figure 1(a) shows a satellite photo of part of Manshiet Naser in Cairo (which
is the largest informal area in Egypt) using Google Maps. In this example, it is
assumed that the shown part of the informal area is divided into 12 zones with a total
population of 3,730 families, with higher urgency factors assigned to zones 3, 6, 9,
and 12 because of unsafe conditions. Other needed input data (such as construction
costs and durations) were reasonably assumed. Furthermore, a maximum annual
budget of $25 million was defined as a budget constraint.
The optimization model is used to search for near-optimum solutions for this
slums upgrading problem with a population size of 1,000, crossover probability of
0.9, and mutation probability of 0.003906. Figure 2 shows the identified near-optimal
tradeoffs among the three optimization objectives after 10,000 generations using 2D
graphs showing tradeoffs between maximizing benefits and minimizing each of the
total costs and socioeconomic disruption. In this figure, the normalized values of the
three objectives are used instead of their absolute values in order to illustrate the
solutions' performance in comparison to the ideal values, where the normalized values
are computed as shown in Equation (5).

\[
NOBI = \frac{OBI}{Ideal_{OBI}}, \qquad NTC = \frac{Ideal_{TC}}{TC}, \qquad NOSD = \frac{Ideal_{OSD}}{OSD} \qquad (5)
\]

Where, NOBI, NTC, and NOSD are the normalized values of OBI, TC, and
OSD, respectively, which can range from 0 (lowest performance) to 1.0 (ideal
performance); and IdealOBI, IdealTC, and IdealOSD are the ideal values of OBI, TC, and
OSD, respectively, assuming no constraints.

Figure 2. Optimization results for the initial formulation (two tradeoff plots: NTC versus NOBI for Benefits vs. Cost, and NOSD versus NOBI for Benefits vs. Socioeconomic Disruption).

As shown in Figure 2, the model could
identify upgrading plans that can maximize the performance in the cost and
socioeconomic objectives (NTC and NOSD). However, the model could not maximize
the performance in terms of the benefits provided (NOBI). In order to improve the
performance in NOBI, a number of different formulations are investigated as
described in the following section.
FORMULATION VARIATIONS
The purpose of this section is to illustrate a number of formulation variations
for the optimization objectives, which are proposed to improve the performance of
the generated solutions, especially in NOBI. In this section, each of these formulations
is presented together with the reasoning behind including it. The Pareto optimal
solutions are then identified among the solutions generated by all the formulations
after 10,000 generations and are presented in Figure 3 to compare the performance of
all presented formulations.
Figure 3. Contribution of all formulations to the generated Pareto optimal solutions (two tradeoff plots, NTC versus NOBI for Benefits vs. Cost and NOSD versus NOBI for Benefits vs. Socioeconomic Disruption, with the solutions contributed by Formulations 1 through 4 distinguished in the legend).


Formulation #1: This is the initial formulation presented in the previous
section. In this formulation, the model was designed to optimize the absolute values
of the three objectives as computed using Equations (1), (2), and (3). Accordingly, the
model was designed to maximize OBI, minimize TC, and minimize OSD. As shown
in Figure 3, this formulation could contribute four Pareto solutions when compared to
other formulations.
Formulation #2: This formulation is presented to improve the performance in
maximizing the benefits. To this end, the model maximizes the normalized values of
all three optimization objectives, as computed using Equation (5). Accordingly, the
model maximizes each of NOBI, NTC, and NOSD. This formulation is introduced
assuming that normalizing the optimization objectives will reduce any bias the model
has towards or against any objective. However, the results illustrated that this is not
the case. As shown in Figure 3, this formulation could only contribute one Pareto
optimal solution when compared to other formulations.
Formulation #3: This formulation is presented to simplify the optimization
problem for the model by converting it into a single-objective optimization
problem. To this end, the normalized values of the three objectives are aggregated as
shown in Equation (6).

\[
\text{Overall Performance} = W_{OBI} \times NOBI + W_{TC} \times NTC + W_{OSD} \times NOSD \qquad (6)
\]

Where, WOBI, WTC, and WOSD are the relative weights of OBI, TC, and OSD,
respectively. In this example, all weights are set to 33.33%. This formulation could
offer three Pareto optimal solutions when compared to other formulations. The
limited number of generated solutions is attributed to the fixed values of the objectives'
relative weights (which should converge to one solution if more generations are
allowed). A possible way to overcome this is to automatically generate a set of unique
combinations of relative weights and solve each combination as a separate
optimization problem, as sketched below. This method should generate a diversified
Pareto front; however, it will increase the computational time of the optimization model
in proportion to the number of unique weight combinations (Kandil et al. 2010).
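A possible sketch of that weight-generation step is shown below; it is illustrative only, with the 10% step size and the use of Equation (6) as the aggregated fitness being the only assumptions carried over from the text. For three objectives at a 10% increment, the enumeration yields 66 unique weight combinations, each defining one single-objective run.

from itertools import product

def weight_combinations(n_objectives=3, step=10):
    """Enumerate all relative-weight vectors (in %) that sum to 100%."""
    levels = range(0, 101, step)
    return [w for w in product(levels, repeat=n_objectives) if sum(w) == 100]

def overall_performance(weights, nobi, ntc, nosd):
    """Aggregated fitness of Equation (6) for one weight combination (weights in %)."""
    w_obi, w_tc, w_osd = (w / 100.0 for w in weights)
    return w_obi * nobi + w_tc * ntc + w_osd * nosd

# 66 single-objective problems, one per unique weight combination
print(len(weight_combinations()))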
Formulation #4: This formulation is also introduced to simplify the
optimization problem but using an entirely different approach. Instead of designing
the model to optimize the three objectives, it is rather designed to optimize the values
of the underlying factors that affect the performance of upgrading plans in the three
objectives. To this end, the model is designed to optimize two objectives: (1)
accelerate the delivery of urgent developments, as shown in Equation (7), which in
turn maximizes the overall benefits to residents (OBI); and (2) minimize the total
duration of direct disruption due to construction as shown in Equation (8), which in
turn minimizes socioeconomic disruptions as well as projects costs by reducing the
periods of temporary housing and businesses disruption and their associated costs and
compensations.
\[
\text{Minimize } WFD = \sum_{z=1}^{Z} \Big[ UF_z \times n_z \times \big( R_z + durR_z \big) \Big] \qquad (7)
\]

\[
\text{Minimize } TD = \sum_{z=1}^{Z} dur_z \qquad (8)
\]

Where, WFD is the summation of the weighted finish dates of zone
redevelopment; UFz × nz is a weight that magnifies the impact of accelerating the
delivery of zones with higher urgency factors (UFz) and more residents (nz);
Rz + durRz is the finish date for redeveloping zone z; TD is the total period of direct
disruption during upgrading work; and durz is the total duration of direct disruption to
zone z. This formulation resulted in six additional Pareto optimal solutions compared
to other formulations, as shown in Figure 3. It could offer the highest NOBI among all
formulations; however, it could not achieve high performance in minimizing
socioeconomic disruption. This is attributed to the model's inability to capture the
impact of business compensation schemes on total costs and socioeconomic
disruptions, since the objectives in Equations (7) and (8) do not include the
compensation scheme variable. Accordingly, the model arbitrarily selected the least-cost
compensation scheme, which resulted in higher socioeconomic disruption.
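Using the same placeholder zone data as the earlier sketch, Equations (7) and (8) reduce to the following; again, this is only an illustration of the formulation, not the authors' code.

def formulation_4_objectives(zones, D, R):
    """Evaluate Equations (7) and (8) for one candidate plan."""
    wfd = sum(z['UF'] * z['n'] * (R[i] + z['durR'])   # Eq. (7): weighted finish dates
              for i, z in enumerate(zones))
    td = sum((R[i] + z['durR']) - D[i]                # Eq. (8): total disruption, via Eq. (4)
             for i, z in enumerate(zones))
    return wfd, td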

CONCLUSIONS AND FUTURE WORK


This paper presented the development of a multi-objective optimization model
for slum upgrading projects. The model is a part of an integrated collaborative
framework for improving slums upgrading. The optimization model is designed to (1)
maximize the benefits of slums upgrading projects by accelerating the delivery of
urgent projects; (2) minimize the total costs of these projects; and (3) minimize the
socioeconomic disruptions for resident families during the construction phases of
slums upgrading. A number of formulations are presented to improve the model
performance. The computational time for any of these formulations is less than one
minute on a 2.8 GHz Intel Core 2 Duo with 2.98 GB of RAM, which illustrates the
computational efficiency of the developed model. Among the presented formulations,
the fourth formulation represents the most promising alternative, especially if it can
be modified to account for the impact of business compensation schemes. The full
computational implementation of the proposed framework will address the current
limitations, especially through the time, cost, and social modules, which are designed to
increase the robustness and capabilities of the model.
REFERENCES
Abdel Aziz and El-Anwar (2010). “A Framework for an Automated Decision Support
System to Optimize the Upgrading and Replacement Projects for Slums Areas”,
Proceedings of First International Conference on Sustainability and the Future,
the British university in Egypt.
Abdelhalim, K. (2010). “Participatory Upgrading of Informal Areas - A Decision-
makers Guide for Action.” Participatory Development Programme in Urban
Areas (PDP), <http://www.citiesalliance.org/ca/node/2044>, (July 25, 2010).
Abiko, A., Cardoso, L., Rinaldelli, R., and Haga, H. (2007). “Basic Costs of Slums
Upgrading in Brazil,” Global Urban Development, 3(1), Nov 2007, Washington.
Algohary, S. and El-Faramwy, A. (2010) “Egyptian Approach to Informal
Settlements Development”, Egyptian Cabinet of Ministers, Informal Settlements
Development Facilities.
Deb, K., Agrawal, S., Pratap, A., and Meyarivan, T. (2001). “A Fast Elitist Non-
Dominated Sorting Genetic Algorithm for Multi-objective Optimization.”
KANGAL Report 200001, Genetic Algorithm Laboratory, Indian Institute of
Technology, Kanpur, India.
El-Anwar, O. and Abdel Aziz, T. (2011). “An Integrated Urban-Construction
Planning Framework for Slums Upgrading Projects.” J CONSTR ENG M ASCE,
in review.
Kandil, A., El-Rayes, K., and El-Anwar, O. (2010) "Optimization Research:
Enhancing Robustness of Large-Scale Multi-Objective Optimization in
Construction," J CONSTR ENG M ASCE, 136(1), 17-25.
Khalifa, M.A. (2011) “Redefining slums in Egypt: Unplanned versus unsafe areas”,
Habitat International, 35, 44-46
Nijman, J. (2008). “Against the odds: slum rehabilitation in neoliberal Mumbai,”
Cities, 25(2), 73–85.
The Cities Alliance (2008) “Slum Upgrading, Up close, Experiences of Six Cities.”
The Cities Alliance, October 2008, Washington DC, U.S.A.,
<http://www.citiesalliance.org/ca/node/694>, (July 25, 2010).
UN-HABITAT (2003a), “Slums of the World: The face of urban poverty in the new
millennium?” United Nations Human Settlements Programme,
http://www.unhabitat.org/pmss/listItemDetails.aspx?publicationID=1124, (Dec.
6, 2010).
UN-HABITAT (2008), “Slum Households and Shelter Deprivations: Degrees and
Characteristics” United Nations Human Settlements Programme,
<http://www.unhabitat.org/downloads/docs/presskitsowc2008/slum%20househol
ds.pdf>, (December 6, 2010).
Preparing for a New Madrid Earthquake: Accelerating and Optimizing
Temporary Housing Decisions for Shelby County, TN

Omar El-Anwar1, Khaled El-Rayes2, and Amr Elnashai3

1 Department of Construction Management, University of Washington, Seattle, WA 98195-1610; PH (206) 543-4736; FAX (206) 685-1976; email: elanwar@uw.edu
2 Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801; PH (217) 265-0557; FAX (217) 265-8039; email: elrayes@illinois.edu
3 Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801; PH (217) 265-5497; FAX (217) 265-0318; email: aelnash@illinois.edu
ABSTRACT
In the early 1800s, the Central USA experienced some of the strongest earthquake ground
motions observed nationwide. A recurrence of these earthquakes would cause significant
social and economic impacts affecting the lives of millions of residents. For instance, it is
estimated that more than 60,000 families will need temporary housing assistance in Shelby
County, TN, alone. Identifying temporary housing solutions for these families will be a
challenging task, especially with the need for quick decisions. The objective of this paper is
to present a case study that illustrates the use of automated multi-objective optimization in
identifying optimal large-scale temporary housing plans for displaced families in Shelby
County. These optimal plans have the potential to (1) minimize social and economic
disruptions for displaced families; (2) maximize housing safety in the presence of a large
number of potential post-disaster hazards; (3) minimize negative environmental impacts; and
(4) minimize total public expenditures.

INTRODUCTION
During 1811 and 1812, the New Madrid seismic zone in the Central USA experienced
some of the strongest earthquake ground motions observed in the US, when a series of three
earthquakes shook the Midwest region with magnitudes around 8 (Cleveland et al. 2007).
Figure 1(a) shows the three main New Madrid fault lines. A recurrence of the 1811 and 1812
earthquakes would cause significant social and economic impacts affecting the lives of over
45 million residents of the states surrounding the New Madrid seismic zone. Moreover, the
recurrence of this series of earthquakes would subject the major urban center of Memphis,
Tennessee to intense ground shaking (Cleveland et al. 2007). The Mid-America Earthquake
(MAE) Center and the Institute for Crisis, Disaster and Risk Management (ICDRM)
performed an earthquake impact assessment for the State of Tennessee using HAZUS-MH
MR2 software, where the earthquake scenario considered a magnitude 7.7 event along the
southwest extension of the presumed eastern fault line in the New Madrid Seismic Zone
(Elnashai and Jefferson 2008). The results showed that direct economic losses from damaged
buildings, transportation, and utility systems are estimated at $56.6 billion for the State.
The recurrence of this series of severe earthquakes will result in large-scale
displacement of families in the impacted areas, where it is estimated that more than 60,000
households will be displaced in Shelby County, TN, alone based on the impact assessment
performed by the MAE Center and ICDRM. Those displaced families will urgently need
temporary accommodations for several months (or even years) until permanent housing can
be eventually obtained. In order to enable emergency planners to quickly identify temporary
housing solutions, El-Anwar et al. (2009) developed an automated decision support system
(DSS) for optimizing temporary housing arrangements following large-scale natural disasters.
This DSS supports the optimization of a number of important objectives, including (1)
minimizing social and economic disruptions for displaced families; (2) maximizing
temporary housing safety in the presence of potential post-disaster hazards; (3) minimizing
negative environmental impacts of temporary housing on host community; and (4)
minimizing total public expenditures.
Figure 1. (a) Distribution of considered potential hazards in Tennessee (New Madrid fault lines and hazmat locations); and (b) expected distribution of displaced households per census tract in Shelby County (60,772 displaced households, of which 40,510 are in need of temporary housing).
The DSS is currently integrated in MAEviz software, which is an open-source
software system for seismic risk assessment developed by the MAE Center in cooperation
with the National Center for Supercomputing Applications (Elnashai et al. 2008). During the
development phases of the system, it was tested using a case study simulating the
displacement of 9,343 families in Los Angeles County, California, if an earthquake similar to
the 1994 Northridge earthquake would occur (El-Anwar et al. 2008). This paper presents a
larger-scale case study to illustrate the use of the fully developed system and analyze its
performance. Due to the significant consequences of the expected New Madrid series of
earthquakes, it is used as the second case study. In this case study, temporary housing will be
required to accommodate the displaced families in Shelby County, Tennessee. In addition to
the large-scale displacement of families that characterizes this case study, it also considers a
significantly large number of potential post-disaster hazards and discusses how the
optimization model is modified to deal with such cases. This case study enables evaluating
the performance of the DSS during the preparedness phase for future disasters. The following
sections briefly present the model formulation and discuss the case study development and its
results.

MODEL FORMULATION
This section provides a brief description of the formulation of the four main optimization
objectives as they relate to the scope of the presented case study. The first objective is to
minimize the socioeconomic disruptions experienced by displaced families during their stay
in temporary housing (Bolin 1982; Bolin and Bolton 1986; Golec 1983; Johnson 2007). To
this end, the model calculates a socioeconomic disruption index (SDI) for each candidate
housing (e.g. motels, travel trailers, or mobile homes) and its proposed location. This SDI
represents the aggregated weighted performance of the candidate housing in six metrics,
including (1) housing quality; (2) delivery time; (3) median household income at the
proposed location; (4) unemployment rates; (5) cost of living index; and (6) reported crime
rates. Accordingly, for any configuration of temporary housing arrangements, the overall
socioeconomic disruption is evaluated by normalizing and averaging the computed SDI for
each family in that configuration.
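As a rough illustration of this aggregation, the sketch below assumes that each of the six metrics just listed has already been normalized so that higher values mean more disruption; the function itself, its names, and the dictionary representation are not from the paper.

def sdi(metric_scores, metric_weights):
    """Weighted socioeconomic disruption index for one candidate housing alternative."""
    return sum(metric_weights[m] * metric_scores[m] for m in metric_weights)

def overall_sdi(per_family_sdi):
    """Average the (normalized) SDI values over all families in a configuration."""
    return sum(per_family_sdi) / len(per_family_sdi)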
The second optimization objective is to maximize the safety of temporary housing in the
presence of multiple potential post-disaster hazards (e.g. aftershocks and hazmat release). To
this end, the model computes a building performance index for each housing alternative
taking into account (1) characteristics of potential hazards; (2) housing type and distance
from potential hazards; and (3) housing expected building performance if the potential hazard
occurs. Because of the probabilistic nature of potential hazards occurrences, the model
generates all possible scenarios for hazards occurrences and calculates a corresponding
building performance index for the candidate housing for each scenario. The model then
computes a safety index (SI) for the candidate housing to represent the expected value of its
possible building performance indexes. Accordingly, for any configuration of housing
arrangements, the overall safety index is evaluated by normalizing and averaging the safety
indexes for all housing alternatives in that configuration.
The third objective is to minimize the environmental impact of constructing and
maintaining temporary housing projects on host communities. To this end, the model
computes an environmental index (EI) for each candidate housing project. This index
represents the housing project’s weighted impacts on the main environmental areas analyzed
in the expedited environmental review process conducted by the Federal Emergency
Management Agency (FEMA). Accordingly, for any configuration of temporary housing
arrangements, the overall environmental index is evaluated by normalizing and averaging the
environmental indexes of all housing alternatives in that configuration. The fourth
optimization objective is to minimize total public expenditures on temporary housing.
Accordingly, the model enables emergency planners to input all the life cycle costs of
candidate housing alternatives and calculates the net present value of their total costs over the
period of their use. A more detailed description of the formulation of these four objectives is
available in El-Anwar et al. (2008, 2010a, and 2010b).
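As a small illustration of the cost objective, a net-present-value computation over the months of use might look like the sketch below; the discount rate, the setup cost, and the twelve-month horizon are assumptions made only for illustration (the $1,490 figure is simply the mean monthly cost later reported in Table 1).

def net_present_value(monthly_costs, annual_discount_rate=0.04):
    """Discount a stream of monthly life cycle costs to present value."""
    r = annual_discount_rate / 12.0                      # monthly discount rate
    return sum(c / (1.0 + r) ** t for t, c in enumerate(monthly_costs))

# e.g. an assumed $5,000 setup cost in month 0 plus $1,490/month for 12 months of use
npv = net_present_value([5000 + 1490] + [1490] * 11)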

CASE STUDY
This section presents the development of a case study applying the developed DSS to identify
optimal temporary housing plans for families that would be displaced in Shelby County,
Tennessee. This case study assumes the occurrence of an earthquake of magnitude 7.7 along
the southwest extension of the presumed eastern fault line in New Madrid Seismic Zone,
which is the closest of the three main fault lines to Shelby County and represents the worst
case scenario, as shown in Figure 1(a). The following sections briefly present the input data,
optimization procedure, and results for this case study.
Input data. The required input data includes (1) the number of displaced families; (2) eight
environmental areas and their relative importance weights; (3) available temporary housing
alternatives and their locations and characteristics; and (4) post-disaster hazards data. For the
first required input data, two thirds of the displaced households in Shelby County are
assumed to be in need of temporary housing. Accordingly, the emergency management
agency needs to provide temporary housing to 40,510 households according to the estimated
number of 60,772 displaced households in Shelby County (Elnashai and Jefferson 2008).
Figure 1(b) shows the distribution of displaced families per census tract. For the second set of
input data, importance weights were assumed for eight environmental areas that will be
potentially impacted by developing temporary housing projects according to FEMA’s
expedited review process (FEMA 2005). The eight areas and their assumed weights are as
follows: 20% for hazardous materials and toxic wastes; 20% for air quality; 20% for water
quality; 10% for geology and soils; 10% for wetlands; 10% for threatened and endangered
species; 5% for vegetation and wildlife; and 5% for noise, as shown in Table 1. In addition,
the impact intensities of each temporary housing alternative on the environmental areas were
assumed and represented numerically by 0, 1, 2, and 3 for negligible, minor, moderate, and
major impacts, respectively.
The input data for temporary housing alternatives was obtained after conducting a detailed
online search of available alternatives in Tennessee. The results of this search generated 413
temporary housing alternatives with a total capacity of 55,243 families, as shown in Figure 2.
These alternatives consist of 25 campsites for travel trailers and 16 campsites for tents, as
well as 372 hotels, inns, motels, and other lodges. More detailed data on each of these
alternatives were obtained and used in this case study including their monthly cost rate,
location (longitude and latitude), and capacity, as shown in Table 1. Additional data were also
gathered for the locations of all the considered housing alternatives, including crime rates,
median household income, percentage of unemployment among the civil labor force, and cost
of living index. Furthermore, the expected delivery times of the temporary housing alternatives were assumed, as shown in Table 1.

Table 1. Temporary Housing Data

                                                  Relative   Minimum    Mean       Maximum
                                                  Weight     Value      Value      Value
Housing Capacity in number of families                       11         134        2,883
1. Socioeconomic Disruption Metrics
   1.1 Housing Quality in star rating (b)         20% (a)    1          2.2        5
   1.2 Delivery Time in days                      20% (a)    1          4.5        15
   1.3 Median Household Income in $ (c)           15% (a)    $8,993     $39,336    $116,200
   1.4 Unemployment Rate (c)                      25% (a)    0.6%       6.6%       49.3%
   1.5 Cost of Living Index (c)                   10% (a)    69.0       84.9       111.0
   1.6 Crime Rate (d)                             10% (a)    0.5%       8.2%       13.3%
2. Environmental Impact Intensity
   2.1 Hazardous Materials and Toxic Wastes       20%        0          0.23       1.5
   2.2 Air Quality                                20%        0          0.14       1.5
   2.3 Water Quality                              20%        0          0.03       1.5
   2.4 Geology and Soils                          10%        0          0.04       1.5
   2.5 Wetlands                                   10%        0          0.04       1.5
   2.6 Threatened and Endangered Species          10%        0          0.03       1.5
   2.7 Vegetation and Wildlife                    5%         0          0.03       1.4
   2.8 Noise                                      5%         0          0.04       1.4
3. Public Expenditures
   3.1 Monthly Cost in $/family (e)                          $400       $1,490     $10,155

(a) These values are to be defined by decision-makers.
(b) Star ratings for travel trailers and tents were assumed.
(c) These values are based on characteristics of the zip codes within which the temporary housing is located.
(d) Crime rates were collected from crime rates published by the FBI for cities.
(e) Monthly costs for travel trailers and tents were assumed. A 50% discount is assumed for other temporary housing alternatives for government use.
Figure 2. Identified temporary housing alternatives in Tennessee.


The fourth required input data on the
potential post-disaster hazards include (1) the eastern fault line in the New Madrid Seismic
Zone; and (2) 4,006 hazardous materials locations in Tennessee. Figure 1(a) shows the
locations of these hazards. Other required characteristics of these potential hazards were
assumed, as shown in Table 2. In addition, the temporary housing building types and the
building performance data were also assumed.

Optimization procedure. This temporary housing optimization problem includes 384
decision variables, which is equal to the number of feasible temporary housing alternatives,
where each decision variable represents the number of families that should be assigned to that
housing alternative. In order to identify the optimal configuration of temporary housing
arrangements among these 384 alternatives, the multi-objective optimization model is
implemented in four phases, including (1) data input; (2) constraint compliance; (3) multi-
objective optimization; and (4) output analysis. First, the input phase processed and
categorized all the input data for this temporary housing problem. Second, the constraint
compliance phase was performed to ensure full compliance of all temporary housing
alternatives with all safety and environmental constraints. The model identified 29 infeasible
temporary housing alternatives, because they were located in unsafe areas. Accordingly, the
model identified 384 feasible temporary housing alternatives.
The multi-objective optimization phase was then performed to generate optimal
tradeoffs among the four main objectives. In order to compare the performance of each
candidate solution, the four objectives need to be normalized and weighted. To this end, the
model first identifies the maximum and minimum values for each of the four optimization
objectives, which is essential for normalizing the objectives and comparing them to each
other. Then, the model triggers an automation module which assigns each of the four
optimization objectives a set of weights ranging from 0% to 100% with a user-defined
increment. In this case study, an increment of 10% was assumed for each of the four
optimization objectives resulting in a total of 286 unique optimization problems, where each
problem represents a unique combination of relative weights for the optimization objectives.
Each of these unique optimization problems was solved using linear programming to generate
a unique Pareto optimal solution. Accordingly, the model then generated 286 unique Pareto
optimal solutions, where each represents an optimal tradeoff among the four optimization
objectives and produces an optimal configuration of temporary housing arrangements. It
should be noted that this multi-objective optimization process is fully automated. Fourth, the
output analysis phase then (1) enables decision-makers to visualize the optimal tradeoffs
among the four objectives; and (2) provides detailed information about any selected optimal
configuration of temporary housing arrangements. Figure 3(a) illustrates the 286 optimal
tradeoff solutions. Figure 3(b) highlights the details of one of those optimal solutions.
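For any one of these weight combinations, the assignment step can be written as a small linear program over the 384 decision variables. The sketch below is illustrative only: the normalized per-alternative score vectors (sdi, si, ei, cost), the capacities, and the choice of SciPy's linprog as the solver are assumptions, not details reported for the DSS.

import numpy as np
from scipy.optimize import linprog

def assign_families(weights, sdi, si, ei, cost, capacity, total_families):
    """Assign families to housing alternatives for one weight combination.

    Each decision variable x[j] is the number of families assigned to alternative j.
    sdi, si, ei, cost are per-family normalized scores for each alternative; the
    safety index enters with a negative sign because it is maximized.
    """
    w_sdi, w_si, w_ei, w_cost = weights                    # relative weights summing to 1.0
    c = w_sdi * sdi - w_si * si + w_ei * ei + w_cost * cost
    result = linprog(
        c,
        A_eq=np.ones((1, len(c))), b_eq=[total_families],  # house every family
        bounds=list(zip(np.zeros(len(c)), capacity)),      # respect alternative capacities
        method="highs",
    )
    return result.x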
Table 2. Potential Hazards Data

Potential Hazards Data               Fault Line                      Hazardous Materials
Hazardous Buffer Distance in km      6.4                             0.5
Vulnerability Distance in km         289.7                           64.4
Probability of occurrence            15%                             4%
Hazard Attenuation Function (a)      2.43 × (d^2 + 5.6^2)^-0.974     d^1/2

(a) d is the safety distance between the temporary housing alternative and the potential hazard.
Results. The automated DSS could identify 286 optimal temporary housing plans for the
New Madrid case study. The computational time for optimizing this large-scale temporary
housing problem was only 1.5 minutes on a 2.8 GHz Intel Pentium 4 processor with 3 GB of
RAM, where the number of decision variables was 384 (which is equal to the number of
feasible housing alternatives). These results illustrate the efficiency and effectiveness of the
developed system and its practical computational requirements for optimizing large-scale
problems.
It should be noted that two main modifications were made to the optimization model
to achieve this high level of computational efficiency. First, the number of decision variables
was significantly reduced by limiting the definition of decision variable to only represent the
number of families to be assigned to housing alternatives. The model formulation initially
presented by El-Anwar et al. (2008) defined decision variables as the number of families
preferring a specific location for temporary housing (e.g. a specific zip code) that should be
assigned to each housing alternative. That initial formulation resulted in a number of decision
variables equal to the number of preferred locations multiplied by the number of housing
alternatives. The reason behind that initial formulation was to enable the computation and
minimization of the displacement distance between preferred temporary housing locations and
the actual assigned housing locations, which has important socioeconomic impacts. However,
if decision-makers do not wish to include this displacement distance metric, the model in its
modified formulation provides them with the flexibility to use a more aggregated definition of
decision variables to only represent the number of families to be assigned to each housing
alternative. That new approach was adopted in the presented case study to reduce the required
computational time.

Figure 3. Optimization results: (a) optimal tradeoffs between monthly public expenditures (PE) and each of the socioeconomic disruption index (SDI), the average safety index (SI), and the average environmental impact index (EI); and (b) details of solution #266, including its objective relative weights, its performance (SDI = 43.42%, SI = 0.69, EI = 0.20, total public expenditures of $69.31 million), and its temporary housing assignments.
The second main modification was inevitably introduced because of the significantly large
number of potential post-disaster hazards (4,007 potential hazards). As briefly explained in
the computation of the safety indexes, the model generates all the possible combinations of
potential hazards occurrences. For example, for three potential post-disaster hazards, there are
eight possible combinations of hazards occurrences (i.e. 23 combinations), starting with the
scenario that no post-disaster hazard will occur until the scenario that all three hazards will
occur. In the presented case study there are 4007 potential post-disaster hazards, which means
that there are 24007 possible scenarios. This number of possible combinations is too large for a
computer to handle; and even if the computer could generate all possible combinations, the
required computational time to compute the safety indexes will be impractical. Therefore, the
model was modified to identify such cases of large number of potential hazards and utilize
Monte Carlo simulation to compute the safety indexes by generating a representative sample
of the possible combinations of hazards occurrences. Accordingly, the model can compute
the required safety indexes in reasonable computational time, as illustrated in this case study.
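A minimal sketch of that sampling idea is shown below; the hazard probabilities and the performance function are placeholders, and the sample size and seed are arbitrary choices rather than values used by the DSS.

import random

def monte_carlo_safety_index(hazard_probs, performance, n_samples=10000, seed=0):
    """Estimate the expected building performance index over hazard scenarios.

    hazard_probs : occurrence probability of each potential hazard
    performance  : function mapping a tuple of 0/1 occurrences to a building
                   performance index for the candidate housing alternative
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        scenario = tuple(1 if rng.random() < p else 0 for p in hazard_probs)
        total += performance(scenario)
    return total / n_samples        # sample mean approximates the expected value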

CONCLUSIONS
This paper presented the development of a case study representing a large-scale
temporary housing allocation problem. For this case study, the model identified 286 optimal
temporary housing plans for 40,510 families that would be displaced in Shelby County,
Tennessee, in the event of a magnitude 7.7 earthquake along the southwest
extension of the presumed eastern fault line in the New Madrid Seismic Zone. This case
study illustrated the unique capabilities of the developed automated decision support system
in optimizing large-scale real-life temporary housing problems in order to (1) minimize social
and economic disruptions for displaced families; (2) maximize temporary housing safety in
the presence of multiple potential post-disaster hazards; (3) minimize the negative
environmental impacts of constructing and maintaining temporary housing on host
communities; and (4) minimize total public expenditures on temporary housing. This case
study also illustrated the efficiency and effectiveness of the developed system and highlighted
the modifications and flexibilities added to the model to ensure its practical computational
requirements.

REFERENCES
Bolin, R. (1982). “Long-term family recovery from disaster.” Institute of Behavioral Science
Monograph 36, University of Colorado, Boulder.
Bolin, R. C. and Bolton, P. (1986). Race, religion, and ethnicity in disaster recovery,
Boulder, CO: Institute of Behavioral Science, University of Colorado.
Cleveland, L. J., Elnashai, A. S., Pineda, O. (2007). New Madrid Seismic Zone Catastrophic
Earthquake Response Planning, Mid-America Earthquake Center, Report 07-03, May
2007.
El-Anwar, O., El-Rayes, K., and Elnashai, A. (2008). "Multi-objective optimization of
temporary housing for the 1994 Northridge earthquake," J. of Earthquake Engineering,
12(1), 81-91.
El-Anwar, O., El-Rayes, K., and Elnashai, A. (2009). "An Automated System for Optimizing
Post-Disaster Temporary Housing Allocation," Automation in Construction, 18(7), 983-
993.
El-Anwar, O., El-Rayes, K., and Elnashai, A. (2010a). "Maximizing Temporary Housing
Safety after Natural Disasters," Journal of Infrastructure Systems, ASCE, 16(2), 138-148.
El-Anwar, O., El-Rayes, K., and Elnashai, A. (2010b). "Minimization of Socioeconomic
Disruption for Displaced Population Following Disasters," Disasters, 34(3), 865-883.
Elnashai, A. and Jefferson, T. (2008). Analysis of: New Madrid Seismic Zone - M7.7 Event,
New Madrid Seismic Zone Catastrophic Earthquake Response Planning, Mid-America
Earthquake Center and Institute for Crisis, Disaster and Risk Management, State Report
for Tennessee Earthquake Impact Assessment, March 2008.
Elnashai, A., Hampton, S., Karaman, H., Lee, J.S., McLaren, T., Myers, J., Navarro, C.,
Sahin, M., Spencer, B., and Tolbert, N. (2008). "Overview and Applications of MAEviz -
HAZTURK 2007," Journal of Earthquake Engineering, 12(1), 100-108.
FEMA (2005) “Programmatic Environmental Assessment: Temporary Housing for Disaster
Victims of Hurricane Katrina,” FEMA-DR-1604-MS, September 2005.
Golec, J. (1983). “A contextual approach to the social psychological study of disaster
recovery,” Journal of Mass Emergencies and Disasters, 1, August, 255-276.
Johnson, C. (2007). “Impacts of prefabricated temporary housing after disasters: 1999
earthquakes in Turkey,” Habitat International, 31(1), 36-52.
Requirements for an Integrated Framework of
Self-managing HVAC Systems

Xuesong Liu1, Burcu Akinci2, James H. Garrett, Jr.3 and Mario Bergés4
1 Ph.D. Candidate, Dept. of Civil & Environmental Engineering, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213; PH (412) 953-2517; email: pine@cmu.edu
2 Professor, Dept. of Civil & Environmental Engineering, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213; email: bakinci@andrew.cmu.edu
3 Professor and Head, Dept. of Civil & Environmental Engineering, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213; email: garrett@cmu.edu
4 Assistant Professor, Dept. of Civil & Environmental Engineering, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213; email: marioberges@cmu.edu

ABSTRACT
Heating, ventilating and air conditioning (HVAC) systems account for about 16% of
the total energy consumption in the United States. However, research shows that
25%-40% of the energy consumed by HVAC systems is wasted because of
undetected faults. Actively detecting faults requires continuously monitoring and
analyzing the status of hardware and software components that are part of HVAC
systems. With the increasing complexity in HVAC systems, fault detection that relies
on manual processes becomes even more challenging and impractical. Hence, a
computerized approach is needed, which enables HVAC systems to continuously
monitor, assess and configure themselves. This paper proposes an integrated
framework for developing and implementing self-configuring approaches to operate
and maintain HVAC systems. The discussions include the identification of functional
requirements, a synthesis of existing self-configuring approaches, and an analysis of
the requirements for developing an integrated framework using an implemented
prototype system.

INTRODUCTION
Buildings account for 41% of the total energy consumption and 38% of carbon
dioxide emissions in the United States. About 40% of the energy consumed in both
residential and commercial buildings is used by HVAC systems (DoE 2008; EIA
2008). However, research shows that 25%-40% of the energy used by HVAC systems
is wasted due to faults, such as misplaced and uncalibrated sensors, malfunctioning
controllers and controlled devices, improper implementation and execution of control
logic, improper integration of control software and hardware components, and
sub-optimal control strategy (Mansson and McIntyre 1997; Liddament 1999; Liu et al.
2002; Roth et al. 2005). This waste accounts for $36-$60 billion every year in the
United States (EIA 2008). Indirect social and environmental impacts of the waste are
beyond estimation due to rapidly depleting energy resources and increasing
environmental pollution (Liang and Du 2007).
Several researchers have stated that a primary reason for the occurrence of different
types of faults that result in significant waste in energy is that HVAC systems are
getting increasingly complex and it is difficult for operators to manually detect and
diagnose these faults (Lee et al. 2004; Katipamula and Brambley 2005a; Jagpal 2006).
Due to increasing needs for better indoor environment control, more and more HVAC
systems are equipped with software and hardware components. To maintain the
desired performance of these HVAC systems, operators need to continuously monitor
and diagnose hundreds of components. Moreover, because different faults occurring
in HVAC systems can have similar symptoms, it is difficult for the operator to
diagnose the root cause of the faults (Schein and Bushby 2005). All of these issues
make it very difficult, if not impossible, to manually monitor the performance of
HVAC systems and to detect possible problems resulting in inefficient operations.
Computerized approaches, such as computer-aided fault detection and diagnosis
(FDD), automated commissioning and optimized operating schedule, have been
studied and developed to address some of these challenges associated with manual
operation and maintenance of HVAC systems. Both laboratory and real-world
experiments have been conducted to validate the energy saving capability of these
approaches (Mansson and McIntyre 1997; Castro 2004; Katipamula and Brambley
2005a; Katipamula and Brambley 2005b; Schein and Bushby 2005).
These studies show that computerized approaches have the potential to improve
energy efficiency of HVAC systems by addressing issues associated with managing
complex systems through the elimination of human involvement in maintaining these
systems. They can enable the systems to automatically detect abnormal conditions,
diagnose the causes, and mitigate the faults, thus eliminating their impacts on the
performance of the systems. However, many of these studies and developments remain
confined to academic settings, and very few commercial products have been deployed in
real-world projects (Liang and Du 2007). One primary reason identified by the
researchers is that, because these approaches were developed by researchers, their
deployment requires thorough knowledge of HVAC systems so that the correct
information can be provided to them and the systems can be adjusted according to
their outputs. This requirement is beyond the average skill level of most system
operators (Katipamula and Brambley 2005a).
We envision that an integrated framework, which can automatically manage the
computerized approaches by providing the needed information and reconfiguring the
HVAC systems according to their outputs, can solve this problem. In this paper, we
discuss the identification of functional requirements for developing such an integrated
framework. We will introduce the synthesis of existing self-configuring approaches,
and an analysis of the requirements for developing the integrated framework using an
implemented prototype system.
PROBLEM STATEMENT
Previous studies showed that computerized approaches are able to improve the energy
efficiency of HVAC systems by automating two processes: (1) detecting, diagnosing
and mitigating faults; and (2) evaluating the performance of the HVAC systems and
improving their control strategy. We identified three challenges which contribute as
possible impediments for the deployment of these approaches in the real-world.
First, it is very difficult for system operators to prepare the needed inputs and process
outputs for the approaches (Kumar et al. 2001; Venkatasubramanian et al. 2003;
Katipamula and Brambley 2005b). As shown in Figure 1, every approach requires
some inputs, such as the condition measures of the building environment, the
configuration of the HVAC systems, or the properties of the building elements. For
different buildings and different HVAC systems, the inputs are very different in terms
of data type, communication protocol, file format and the stakeholders who create
them. Outputs of these approaches also need to be interpreted by the system operators
so that they can use the information to re-configure the systems. As a result, it is very
challenging for the system operators to collect and process all the required
information manually.

Figure 1. Illustration of the requirements for preparing inputs and processing outputs for the computerized approaches.
Second, there is no single solution that fully automates the process of mitigating
faults or improving control strategies. Each of these approaches focuses on a different
portion of the process. For example, the rule-based approach proposed by Schein et
al. (2006) aims to identify the faults and does not address the diagnosis and mitigation
of the faults. Similarly, the approach developed by Fernandez et al. (2009) focuses on
automatically mitigating the sensor drift fault and assumes that the fault can be
correctly identified. There is no single approach that is able to achieve the overall
objective of improving energy efficiency by itself. As a result, there is a need for
integrating various approaches developed in this domain to address the overarching
goal of increasing energy efficiency through better performing HVAC systems.
To summarize, it is challenging to apply the computerized approaches in real-world
systems because it is very difficult for system operators to manually prepare the inputs and
process the outputs of these approaches, and because many different approaches need to be
combined to achieve the desired improvement in energy efficiency. To utilize the energy
saving potential of these computerized approaches, a framework is needed to
integrate them and enable the system operator to deploy them in real-world systems.

RESEARCH APPROACHES
This research first explored the existing computerized approaches and analyzed their
information requirements. Based on the findings, functional requirements were
identified for an integrated framework to address the vision described in section 2.
Finally, a prototype application was developed to test the feasibility of the envisioned
framework and investigate challenges associated with that framework. The following
sections discuss these three steps in the research.
Analysis of information requirements for the existing computerized approaches
Based on the review of the existing computerized approaches, we selected thirty-two
scientific publications for use in identification of information requirements. The main
goal in selecting publications was to have a diverse set of approaches to be
incorporated in the initial framework. Hence, the criteria for selection were to include
the publications that cover different types of approaches and that are developed by
different researchers and/or organizations.
According to their information sources, the identified information requirements can
be categorized into two groups: dynamic and static information items. Static
information items are documented in drawings, manuals and spreadsheets. They only
change when the configuration of the building layout or HVAC systems is changed.
For example, dimensions and materials of the building elements typically do not
change frequently after construction. A summary of the static information items is
listed in Table 1.
Table 1. Summary of the static information requirements

Category       Information requirement                           Example
Building       Building layout                                   Total size of the windows in Room 01.
               Material of building elements                     Material of the external walls.
               Occupancy and equipment load                      Number of occupants in Room 01.
Sensor         Type of measurement                               Temperature, pressure, flow rate, etc.
               Measured object                                   Supply air duct of a VAV box.
               Data interface for acquiring the measurement      BACnet device ID and object ID of the HVAC components.
Controller     Controlled device                                 Speed of the supply fan for an AHU.
               Communication interface                           ID of the BACnet device and object.
               Set-point                                         Temperature set-point for a thermal zone.
Actuator       Controlled device                                 Damper in a VAV box.
               Data interface for acquiring the status of the
               controlled device                                 ID of the BACnet device and object.
Relationship   Spatial relationship                              Space where the temperature sensor is located.
               Topological relationship                          Connection between the air terminals and the VAV box.
               Functional groups of the HVAC components          Components which serve the temperature control of a space.
Dynamic information items are generated by the components of the HVAC systems
and the framework. The identified dynamic information items include the variables in
HVAC systems and outputs of the computerized approaches. Variables in HVAC
systems include the sensor measurements, set-point values, control signals, and
working status of the controlled HVAC components. These information items are
typically collected by the HVAC systems. To acquire these information items, the
framework needs the capability to communicate with the HVAC systems. Examples
of the outputs of computerized approaches include the type of fault and faulty
components which are detected by the FDD approaches.
Analysis of functional requirements for the integrated framework
The primary objective of the integrated framework is to automatically provide the
requested information to the computerized approaches and process their outputs.
According to the information requirements, the following functional requirements
were identified for the proposed framework:
• Self-recognizing: The ability to recognize its own components and their
configurations and functions.
The static information items represent the characteristics of the building elements and
HVAC components. To be able to provide this information to the computerized
approaches, the framework needs the capability to recognize the configuration of its
components and their functions. For example, to provide the material information
about the windows in a building to a model-based FDD approach (Salsbury and
Diamond 2001), the framework should be able identify the information about the
windows in the building and the associated material types.
• Self-monitoring: The ability to monitor the conditions of the building indoor
environment and the HVAC systems.
The dynamic data and corresponding information items are generated by HVAC
components, such as sensors and controllers, and the computerized approaches in the
framework. To collect and process these items, the framework should be able to
communicate with the components and acquire the needed information items.
• Self-configuring: The ability to re-configure the HVAC systems according to the
outputs of the computerized approaches.
To mitigate faults and apply control strategies that result in higher energy
efficiency, the configuration of HVAC systems needs to be modified. For example, to
apply the supervisory control approach (Gibson 1997), the values of set-points in the
HVAC systems need to be modified. The framework should be able to reconfigure the
HVAC systems according to the outputs of the computerized approaches.
Vision of the integrated framework for self-managing HVAC systems
Based on the analysis of information requirements and functional requirements, we
envisioned an integrated framework for the self-managing HVAC systems. The three
functional requirements are achieved by three modules in the framework. There is
also a controller module that controls the operation of other modules. These modules
connect the computerized approaches with the real-world HVAC systems and
information sources. Figure 2 shows the envisioned framework.

Figure 2. Proposed vision of the integrated framework


At the center of the framework is the controller module. The information
requirements of the computerized approaches are sent to self-recognizing and
self-monitoring modules by the controller. The self-recognizing module retrieves the
configuration information of the building and HVAC systems from the information
bases. The self-monitoring module acquires the needed condition measures from the
available sensors, controllers and actuators in the HVAC systems. After the
information is acquired, the controller invokes the computerized approaches and
provides the relevant information to them. If the computerized approaches generate
any output, the controller passes it to the self-configuring module to reconfigure the
HVAC systems.
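As a rough sketch of how these modules might interact in code, consider the outline below; the class and method names are placeholders invented for illustration and are not part of the prototype described next.

class SelfManagingController:
    """Coordinates the self-recognizing, self-monitoring, and self-configuring modules."""

    def __init__(self, recognizer, monitor, configurator, approaches):
        self.recognizer = recognizer      # retrieves static configuration (e.g., from IFC)
        self.monitor = monitor            # acquires dynamic measurements (e.g., via BACnet)
        self.configurator = configurator  # writes set-points / mitigations back to the system
        self.approaches = approaches      # FDD, self-healing, supervisory control, ...

    def run_cycle(self):
        for approach in self.approaches:
            static = self.recognizer.get(approach.required_static_items())
            dynamic = self.monitor.read(approach.required_dynamic_items())
            outputs = approach.execute(static, dynamic)
            if outputs:                   # e.g., a detected fault or new set-point values
                self.configurator.apply(outputs)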
Prototype development and discussion
A prototype application was developed to validate the feasibility of implementing the
envisioned framework and identify challenges. A rule-based FDD approach (Schein
and House 2003), a statistics-based FDD approach (Schein et al. 2006) and a virtual
sensor based self-healing approach (Fernandez et al. 2009) were implemented in the
framework to test the capability of integrating different approaches. In the
self-recognition and self-monitoring modules, several ad hoc procedures were
implemented to retrieve the needed information from three standards: Industry
Foundation Classes (IFC), SensorML and BACnet. These modules then deliver this
information to the controller module. An integrated information model was
implemented in the controller module to enable the information exchange among the
three computerized approaches. The prototype was tested with a real-world HVAC
system that serves an office building in a university. Results showed that the
integrated framework was able to support the operations of the three computerized
approaches. However, one limitation of the prototype is that it does not support other
computerized approaches.
To develop an integrated framework that generally supports different computerized
approaches, other information sources, such as gbXML and cfiXML, need to be
integrated to provide detailed information for the configuration of HVAC systems.
Information items which are defined in different sources are not unique (Glazer 2009).
For example, building geometric information is represented in both IFC and gbXML.
Therefore, what is needed is to check the consistency of the information items that are
maintained by different sources. Given the large number of information items and the
variety of information sources, it is very challenging to develop ad hoc procedures to
interpret the information requirements of different computerized approaches, extract
needed information from each of these information sources, check the consistency of
all information items, and provide the information to the computerized approaches.
An approach is needed to support these processes more generally.
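One simple form such a consistency check could take is sketched below; this is a hedged illustration in which the item keys, tolerance, and dictionary representation of the sources are assumptions rather than part of the prototype.

def check_consistency(source_a, source_b, shared_items, rel_tol=0.01):
    """Compare information items that both sources define and report mismatches.

    source_a, source_b : dicts mapping item names (e.g. 'window_area_room01')
                         to values extracted from IFC, gbXML, etc.
    """
    mismatches = {}
    for item in shared_items:
        a, b = source_a.get(item), source_b.get(item)
        if a is None or b is None:
            mismatches[item] = ("missing", a, b)
        elif isinstance(a, (int, float)) and isinstance(b, (int, float)):
            if abs(a - b) > rel_tol * max(abs(a), abs(b), 1e-9):
                mismatches[item] = ("value differs", a, b)
        elif a != b:
            mismatches[item] = ("value differs", a, b)
    return mismatches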

CONCLUSIONS
This paper has shown the need for an integrated framework to achieve the vision of
self-managing HVAC systems. By exploring the existing computerized approaches
that enable automated performance analysis and fault mitigation, the energy-saving
potential of these approaches was recognized. The information and functional
requirements for implementing such an integrated framework were analyzed based on
thirty-two previous studies. A prototype application was implemented to test the
feasibility of the envisioned framework.
The prototype showed the need for an approach that can extract the required
information from different sources, check its consistency, and provide it to the
computerized approaches. Further research is needed to develop such an
approach to address these needs.

ACKNOWLEDGEMENTS
The authors would like to acknowledge and thank the National Institute for Standards
and Technology (NIST) for the grant that supported the research presented in this
paper, which is part of the research project Identification of Functional Requirements
and Possible Approaches for Self-Configuring Intelligent Building Systems. The
authors would also like to acknowledge and thank Dr. Steven Bushby from NIST for
the input and feedback received over the duration of this project.

REFERENCES
Castro, N. (2004). Commissioning of building HVAC systems for improved energy
performance. Proceedings of the Fourth International Conference for
Enhanced Building Operations, Paris, France.
DoE, U. S. (2008). Building Energy Data Book. Washington, D.C., Energy Efficiency
and Renewable Energy, Buildings Technologies Program, U.S. DoE.
EIA (2008). The 2007 Commercial Building Energy Consumption Survey (CBECS).
Washington, D.C., U.S. Energy Information Administration.
Fernandez, N., M. Brambley, S. Katipamula, H. Cho, J. Goddard and L. Dinh (2009).
Self-Correcting HVAC Controls Project Final Report. Richland, WA (US),
Pacific Northwest National Laboratory (PNNL).
Gibson, G. (1997). "Supervisory controller for optimization of building central
cooling systems." ASHRAE Transactions.(493).
Glazer, J. (2009). "Common Data Definitions for HVAC&R Industry Applications."
ASHRAE Transactions 115: 531-544.
Jagpal, R. (2006). Computer Aided Evaluation of HVAC System Performance:
Technical Synthesis Report.


Katipamula, S. and M. Brambley (2005a). "Methods for fault detection, diagnostics,
and prognostics for building systems-A review, part I." HVAC&R Research
11(1): 3-25.
Katipamula, S. and M. Brambley (2005b). "Methods for fault detection, diagnostics,
and prognostics for building systems-A review, part II." HVAC&R Research
11(2): 169-187.
Kumar, S., S. Sinha, T. Kojima and H. Yoshida (2001). "Development of parameter
based fault detection and diagnosis technique for energy efficient building
management system." Energy Conversion and Management 42(7): 833-854.
Lee, W., J. House and N. Kyong (2004). "Subsystem level fault diagnosis of a
building's air-handling unit using general regression neural networks."
Applied Energy 77(2): 153-170.
Liang, J. and R. Du (2007). "Model-based fault detection and diagnosis of HVAC
systems using support vector machine method." International Journal of
Refrigeration 30(6): 1104-1114.
Liddament, M. W. (1999). Technical Synthesis Report: Real Time Simulation of
HVAC Systems for Building Optimisation, Fault Detection and Diagnostics.
Coventry, UK, ESSU.
Liu, M., D. Claridge and W. Turner (2002). Continuous Commissioning Guidebook:
Maximizing Building Energy Efficiency and Comfort, Federal Energy
Management Program, U.S. Dept. of Energy.
Mansson, L.-G. and D. McIntyre (1997). Controlling and regulating heating, cooling
and ventilation methods and examples. IEA Annex 16 & 17 Technical
Synthesis Report, International Energy Agency.
Roth, K. W., E. Westphalen, M. Y. Feng, P. Llana and L. Quartararo (2005). The
Energy Impact of Commercial Building Controls and Performance
Diagnostics: Market Characterization, Energy Impact of Building Faults and
Energy Savings Potential. Cambridge, MA, TIAX LLC.
Salsbury, T. and R. Diamond (2001). "Fault detection in HVAC systems using
model-based feedforward control." Energy and Buildings 33(4): 403-415.
Schein, J. and S. Bushby (2005). A Simulation Study of a Hierarchical, Rule-Based
Method for System-Level Fault Detection and Diagnostics in HVAC Systems,
NISTIR 7216.
Schein, J., S. T. Bushby, N. S. Castro and J. M. House (2006). "A rule-based fault
detection method for air handling units." Energy and Buildings 38(12):
1485-1492.
Schein, J. and J. House (2003). "Application of control charts for detecting faults in
variable-air-volume boxes." ASHRAE Transactions 109(2): 671-682.
Venkatasubramanian, V., R. Rengaswamy, S. Kavuri and K. Yin (2003). "A review
of process fault detection and diagnosis Part III: Process history based
methods." Computers and chemical engineering 27(3): 327-346.
A Web-Based Resource Management System for Damaged Transportation
Networks

W. Orabi1, M. ASCE
1
Assistant Professor, Department of Construction Management, Florida International
University, 10555 West Flagler Street, EC 2952, Miami, FL 33174-1630; PH (305)
348-2730; FAX (305) 348-6255; email: worabi@fiu.edu

ABSTRACT
Post-disaster response and recovery efforts for damaged transportation
networks are typically complex and challenging tasks. This is mainly due to the
limited availability of resources and the dynamic changes to the status of the
transportation networks undergoing recovery efforts. The complexity of these tasks is
however exacerbated due to the lack of adequate communication between the
different stakeholders involved in the response and recovery efforts. Therefore,
improving the communication between the Departments of Transportation (DOTs),
contractors, suppliers and the public can facilitate swift, hassle-free and cost-effective
post-disaster response and recovery efforts of damaged transportation networks. This
paper presents the development of a web-based resource management system that is
designed to provide a near real-time and cost-effective medium for exchanging important
data between the main response and recovery stakeholders and provides useful and
up-to-date information to the public about the progress in the response and recovery
efforts. To this end, the system is designed to have four main portals for: DOTs,
contractors, suppliers and the public. The use of this system should prove useful to
all users and should help control and minimize the impact of disasters on society.

INTRODUCTION
The response and recovery efforts for damaged transportation networks in the
aftermath of natural disasters are challenging and complex tasks. This is mainly due
to the limited availability of reconstruction resources (Orabi et al. 2009). It is
therefore extremely important to optimize the utilization of these limited resources in
order to control and minimize the impact of natural disasters on society (Orabi et
al. 2010). This optimization process however requires prompt and accurate exchange
of data between the main stakeholders of the post-disaster recovery process (Manoj
and Baker 2007).
The four main stakeholders involved in post-disaster recovery of damaged
transportation networks are: departments of transportation (DOTs), contractors,
suppliers and the public. There are myriad types of data and information that need
to be exchanged between pairs of these stakeholders on a frequent basis. For
example, departments of transportation (DOTs) need accurate and almost instant
information about the availability of resources they can deploy to respond to disasters
(Chen et al. 2007), as shown in Figure 1. Similarly, both DOTs and contractors need
an effective and efficient way to communicate recovery project data as shown in
Figure 1. In addition, DOTs need to keep the public updated on the progress of the
recovery effort and receive their feedback on the disaster management practices,
while the contractors need to promptly acquire the construction materials needed for
the reconstruction works, as shown in Figure 1.
It is therefore important to provide a swift, hassle-free and cost-effective
communication medium that can facilitate the exchange of data and information
between the aforementioned stakeholders.

Figure 1. Resource Management System (RMS)

A number of research studies have addressed
the communication challenges in extreme events. Most of these studies focused on
providing a reliable way of communication between first responders (Aldunate et al.
2006; Portmann and Pirzada 2008). Other studies focused on organizational and
communication issues between government agencies involved in the disaster recovery
process (Kapuc 2006; Lambert and Patterson 2002). However, no research studies
have focused on facilitating communication between different post-disaster
recovery stakeholders or providing a way to manage deployment of limited resources.
Accordingly, this paper presents the development of a web-based resource
management system (RMS) that is designed to facilitate communication between
different stakeholders. To this end, RMS includes four main portals, each designed to
serve the communication needs of one of the following: DOTs, contractors, suppliers, and the
public, as shown in Figure 1. The following subsections describe in detail the RMS
web portals and the implementation of the system.

RMS WEB PORTALS


The main objective of the resource management system (RMS) is to provide
the stakeholders of the post-disaster recovery process with effective and efficient
communication tools to facilitate their exchange of data and information. The
following subsections describe the tools available in each of the four portals of RMS.

Planner Portal
This portal is designed to provide planners and decision makers in DOTs with
the tools needed for successful disaster management efforts. Through this portal,
planners can efficiently and effectively exchange important data and useful
information with contractors and the public. The major tools available for planners in
this portal include, as shown in Figure 2:

Figure 2. RMS tools available to planners

Creating and monitoring post-disaster recovery projects for damaged
transportation networks. Based on the post-disaster damage survey and assessment
efforts, planners can identify the scope of reconstruction works needed to restore the
network to its pre-disaster conditions. This scope of work can be divided into a
number of manageable recovery projects. These projects can be added to the RMS
and assigned to interested and qualified contractors in such a way that best serves the
societal needs. Through the use of this tool, planners will therefore be able to: (i)
monitor the progress of these recovery projects as updated on a frequent basis by the
contractors (described later in the contractor portal subsection of this paper); (ii)
evaluate the adherence of the reconstruction works to the overall recovery plan; and
(iii) assess the impact of any variance on the public.

Searching for and downloading the contractor resources available for post-
disaster response and recovery efforts. The planner can sort these resources by
their type, availability, location, productivity, among other attributes. Orders to
deploy resources to respond to extreme events can be placed and delivered almost
instantly to a contractor’s email inbox and/or cellular phone. In addition, planners
can use the downloaded reconstruction resources data to plan and optimize the
recovery and reconstruction efforts of damaged transportation networks. The
reconstruction efforts can be optimized to simultaneously minimize both the network
service disruption and the public expenditures on reconstruction works (Orabi et al.
2009; Orabi et al. 2010). Based on the results of the optimization process, planners
can assign recovery projects to interested and qualified contractors and notify them
through the system accordingly.
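
A minimal illustration of the kind of search and ranking this tool performs is sketched below; the field names and sample records are assumptions introduced here, not part of RMS:

    # Sketch of a planner's resource search: filter by type, then rank by
    # earliest availability and highest productivity (all data made up).
    resources = [
        {"type": "paving crew", "available": "2011-07-01", "location": "Miami, FL",
         "productivity": 120, "contractor": "Contractor A"},
        {"type": "paving crew", "available": "2011-07-05", "location": "Tampa, FL",
         "productivity": 150, "contractor": "Contractor B"},
    ]
    paving = [r for r in resources if r["type"] == "paving crew"]
    ranked = sorted(paving, key=lambda r: (r["available"], -r["productivity"]))
    for r in ranked:
        print(r["contractor"], r["available"], r["productivity"])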

Monitoring and controlling the status of damaged roads during post-disaster
reconstruction works. Planners can keep track of all the important traffic data of the
damaged transportation network during the reconstruction efforts including: location
of road closures, number of closed lanes, length of road closed, travel speeds, and
traffic flow. Monitoring such data can allow DOTs to: (i) evaluate the impact of
reconstruction work on the level of service provided by the network; (ii) inform the
public about road closures and recommend suitable detours; and (iii) keep the public
updated on the progress of the recovery efforts. RMS also allows contractors,
through their portal as described in the following subsection, to keep the DOT
informed on any planned road closures due to construction activities and/or
reintroducing closed roads into the network after the completion of construction
work.

Keeping the public updated on the progress of the recovery efforts and receiving
their feedback. DOTs can use RMS to communicate with the public on the progress
of the recovery efforts of the damaged transportation network and solicit their
feedback on the handling of post-disaster issues. For example, DOTs can provide
the public with information on: (i) current road status and suggested detours for
closed or partially closed roads; (ii) expected project completion dates of major
reconstruction projects; (iii) safety tips for motorists driving near construction
jobsites; and (iv) reports on the government’s handling of post-disaster issues
including contract solicitation and public expenditures on reconstruction works. In
addition, RMS provides DOTs with the capability of soliciting important feedback
from the public on: (i) the decisions made by the DOT in response to the disaster; (ii)
the progress of the recovery efforts; and (iii) suggestions to improve handling of post-
disaster issues, if any.

Contractor Portal
This portal is designed to facilitate the communication and data exchange
between contractors and each of the DOT and suppliers. The portal enables
contractors to exchange resource and project data with the DOT; and check the
availability and place orders for construction materials from suppliers. To support
these functions, the contractor portal provides the following tools to contractors, as
shown in Figure 3:

Figure 3. RMS tools available to contractors

Provide and update data on resources available for reconstruction works of
damaged transportation networks. Contractors can use RMS to provide DOTs
with a list of all resources they wish to make available to reconstruction efforts of
damaged transportation networks in the aftermath of any disaster. The resource data
provided by contractors include: (i) type of work that can be performed by this
resource; (ii) number of crews available from this resource; (iii) availability dates for
each crew; (iv) productivity of each crew, if different; (v) the current location of each
crew; and (vi) daily unit cost for regular, overtime and weekend shifts. The
contractor should provide this list in pre-disaster time and is responsible for keeping
this list of resources updated on a regular basis (e.g. bi-weekly). A complete and up-
to-date resource list will therefore be available to DOTs when planning for response
and recovery efforts from disasters (as described above in the planner portal
subsection).
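
For illustration only, one possible layout of such a resource record is sketched below; the field names are assumptions and do not reflect the actual RMS database schema:

    # Illustrative contractor resource entry covering items (i)-(vi) above.
    resource_entry = {
        "work_type": "bridge deck repair",          # (i) type of work
        "crews": [                                  # (ii) one record per crew
            {"available_from": "2011-06-20",        # (iii) availability date
             "productivity": 80,                    # (iv) assumed units per day
             "location": "Orlando, FL",             # (v) current location
             "daily_cost": {"regular": 4200,        # (vi) daily unit costs (US $)
                            "overtime": 5500,
                            "weekend": 6100}},
        ],
    }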

Update progress of assigned recovery projects. Once notified of the recovery
projects assigned to them, contractors are responsible for adhering to the project
priorities set by DOTs. Contractors are also responsible for reporting the progress of
their assigned recovery projects on a regular basis. RMS enables contractors to easily
and promptly provide the following data on the progress of each recovery project for
reviewing and monitoring by DOTs (as described above in the planner portal
subsection): (i) activity start and finish dates (actual and expected); (ii) percent
complete of ongoing activities; (iii) time and effort spent on completed activities; (iv)
cost of reconstruction work (actual and estimated); and (v) reports on any unforeseen
conditions or issues that might affect the progress of the reconstruction work.

Report any planned road closures during reconstruction works. Contractors can
use RMS to keep DOTs updated on any planned road closures based on the progress
data discussed in the previous tool. According to the schedule of planned
reconstruction work, the contractor should in advance identify for any planned road
closure: (i) location of closure; (ii) number and length of closed lanes; (iii) start time
and duration of closure. DOTs can therefore monitor and analyze these data to plan
for any road closure. For example, suitable detours can be identified and timely
announced to the public (as described in the planner portal subsection). This kind of
information exchange can facilitate an effective and efficient recovery process and
contribute to controlling and minimizing the level of service disruption experienced
by travelers during the reconstruction efforts.

Check the availability of and place orders for construction materials needed for
reconstruction of damaged transportation networks. Contractors can use RMS as
a one-stop-shop to search for and compare construction materials from different
sources. Using the material search tool, contractors are able to compare materials
available from different suppliers and sort them based on technical specifications, unit
price, availability, and delivery estimate. In addition, contractors can place initial
purchase orders for construction materials through the system. This initial order is
simply a notification to the supplier of the contractor’s intent to purchase a specific
quantity by a given date. Such communication and exchange of information can
significantly contribute to an effective and efficient post-disaster recovery process.

Supplier Portal
The objective of this portal is to provide suppliers with the useful tools
needed to promote and trade their products to contractors while facilitating a
successful post-disaster recovery process. It is also possible to charge suppliers a
registration fee for using the system and the proceeds can be collected in a disaster
management fund. The following describe the RMS tools available to suppliers, as
shown in Figure 4:

Figure 4. RMS tools available to suppliers

Keeping an up-to-date construction materials inventory. RMS provides suppliers
with a user-friendly interface to incorporate their construction materials inventory
into the system’s database. The supplier can add any number of materials they wish
and should provide the following data for each material: (i) inventory number; (ii)
commercial name; (iii) type of work for which it can be used; (iv) technical
specifications; (v) unit price; (vi) available quantity; and (vii) delivery estimate.
Once the initial inventory is set up after registration, suppliers are responsible for
keeping the database accurate and up-to-date, especially concerning the material
price, availability and delivery estimates. These data should continuously be
accessible to contractors and could also be used at regular times (i.e. other than in the
aftermath of natural disasters) to facilitate construction material acquisition.

Receiving and fulfilling material purchase orders. As previously described in the
contractor portal subsection, contractors can check the availability of the construction
materials they wish to acquire and can place initial purchase orders to the suppliers
through RMS. These orders are instantly delivered to the supplier’s notifications
inbox of the RMS and also forwarded to other communication methods of their
choice (i.e. email and/or cellular phone). It is therefore the supplier’s responsibility to
double check the availability of the requested material; prepare for shipping out the
material; and notify the contractor accordingly to proceed with a regular purchase
order. It should be noted that the RMS, in its current form, does not support
finalizing the whole transaction, but this can be added to the system at later stages.

Public Portal
This portal is designed to allow departments of transportation (DOTs) to
communicate with the public during regular times and in pre-disaster situations for
the benefit of the society at large. Through this portal, citizens can get updates on the
recovery process and provide their feedback on the RMS and the DOT’s disaster
management practices. In order to facilitate these functions, RMS provides the
following tools to the public, as shown in Figure 5:

Figure 5. RMS tools available to the public

Obtain disaster handling information and recovery updates. RMS provides the
public with important information on handling of disasters. This information includes
the planning, preparedness, response, and recovery practices adopted by the DOT. In
addition, the public can also obtain tips on traveling safely through a damaged
transportation network including the suggestion of suitable detours for closed roads
(as described above in the planner portal subsection). DOTs can also keep the public
informed on the progress of the recovery process including tracking of contract
solicitation and expenditures on the reconstruction works.

Provide feedback on disaster management practices. In order to assist DOTs in
analyzing and improving their disaster management practices, this tool provides the
public with a method to provide their feedback on the disaster handling by DOT
officials. This feedback can be submitted anonymously or with added contact
information if a reply to the comment is necessary.

SYSTEM IMPLEMENTATION
The RMS runs PHP 5 for the server-side code and a MySQL 5 database
on an Apache 2 server. In order to provide users with faster and cleaner interfaces,
RMS also includes AJAX code. The database is designed to effectively and
efficiently store and retrieve data on: users, roads, projects, resources, and materials.
Cascading Style Sheets (CSS) are used to support all major desktop Internet
browsers and eliminate browser compatibility issues.

SUMMARY AND CONCLUSIONS


This paper presented the development of a web-based resource management
system (RMS) that facilitates and improves communication and data exchange
between post-disaster recovery stakeholders. The four web portals of RMS provide
customized communication tools that fit the needs of DOTs, contractors, suppliers
and the public. RMS runs recent versions of PHP and AJAX, supported by a
MySQL database, to provide users with fast, organized and friendly interfaces at
minimal system development and maintenance costs. RMS should prove useful in
analyzing and improving disaster management practices and could contribute to
controlling and minimizing the impact of disasters on society.
In order to further improve communication and data exchange in post-disaster
recovery conditions, it is recommended to complement RMS tools with additional
features aimed at improving data interoperability and visualization. For example,
new modules can be added to support exchanging project schedule and cost data with
commercially available software that are currently used by DOTs and contractors.
Similarly, new modules are needed to integrate construction material inventory with
suppliers’ in-house inventory systems to provide the capability of finalizing material
acquisition on RMS. In addition, geographical information systems (GIS) can be
incorporated with the transportation network data in RMS to allow: (i) improved
reporting of progress in reconstruction efforts of the damaged transportation network;
(ii) advanced tracking of reconstruction resources; and (iii) enhanced visualization of
suitable detours to closed roads.

REFERENCES
Aldunate, R.; Ochoa, S. F.; Peña-Mora, F.; and Nussbaum, M. (2006). “Robust
Mobile Ad Hoc Space for Collaboration to Support Disaster Relief Efforts
Involving Critical Physical Infrastructure,” Journal of Computing in Civil
Engineering, ASCE, 20(1), 13–27.
American Society of Civil Engineers (ASCE) (2009). “Report Card for America’s
Infrastructure.” <http://www.infrastructurereportcard.org/> (December 29,
2010).
Chen, A. Y.; Tsai, M-H; Lantz, T. S.; Plans, A. P.; Mathur, S.; Lakhera, S.; Kaushik,
N.; Peña-Mora, F. (2007). “A Collaborative Framework for Supporting Civil
Engineering Emergency Response with Mobile Ad-Hoc Networks,” ASCE
Conference Proceedings 261, ASCE, Reston, VA, 68.
Franco, G.; Green, R.; Khazai, B.; Smyth, A.; and Deodatis, G. (2010). “Field
Damage Survey of New Orleans Homes in the Aftermath of Hurricane
Katrina,” Natural Hazards Review, ASCE, 11(1), 7–18.
Kapuc, N. (2006). “Interagency Communication Networks During Emergencies:
Boundary Spanners in Multiagency Coordination,” The American Review of
Public Administration, 36(2), 207–225.
Lambert, J. H. and Patterson, C. E. (2002). “Prioritization of schedule dependencies
in hurricane recovery of transportation agency,” Journal of Infrastructure
Systems, 8(3), 103–111.
Manoj, B. S. and Baker, A. H. (2007). “Communication challenges in emergency
response,” Communications of the ACM, 50(3), 51–53.
Orabi, W.; El-Rayes, K.; Senouci, A.; and Al-Derham, H. (2009). “Optimizing Post-
Disaster Reconstruction Planning for Damaged Transportation Networks,”
Journal of Construction Engineering and Management, ASCE, 135(10),
1039–1048.
Orabi, W.; El-Rayes, K.; Senouci, A.; and Al-Derham, H. (2010). “Optimizing
Resource Utilization during the Recovery of Civil Infrastructure Systems,”
Journal of Management in Engineering, ASCE, 26(4), 237–246.
Portmann, M. and Pirzada, A. A. (2008). “Wireless Mesh Networks for Public Safety
and Crisis Management Applications,” IEEE Internet Comp., 12(1), 18–25.
Time, Cost and Environmental Impact Analysis on Construction Operations

Gulbin Ozcan-Deniz1, Victor Ceron2 and Yimin Zhu3


1
PhD Candidate, Department of Construction Management, Florida International
University, Miami, FL, 33174; PH (305) 348-3172; FAX (305) 348-6255; email:
gulbin.ozcan@fiu.edu
2
PhD Student, Department of Construction Management, Florida International
University, Miami, FL, 33174; PH (305) 348-3172; FAX (305) 348-6255; email:
vcero001@fiu.edu
3
Associate Professor, Department of Construction Management, Florida International
University, Miami, FL, 33174; PH (305) 348-3517; FAX (305) 348-6255; email:
zhuy@fiu.edu

ABSTRACT

Environmentally conscious construction has been a subject of research for decades.
Even though construction literature involves plenty of studies that emphasize the
importance of environmental impact during the construction phase of a project, most
of them were not focused on better understanding the relationship among project
performance criteria in construction especially including environmental impact. Due
to the multi-objectives nature of construction projects and the lack of understanding
of such relationship, it is important to have a method for construction professionals to
select optimal construction solutions when environmental impact is considered as an
additional performance criterion. This paper presents a framework to determine
optimal solutions based on project time, cost and environmental impact (TCEI). Life
cycle assessment was applied to the evaluation of environmental impact in terms of
global warming potential (GWP). Genetic algorithms were used for time, cost and
environmental impact optimization. A case study was used to illustrate the
application of the framework. The framework can be applied to the planning of
construction projects.

INTRODUCTION

The environmental impact of buildings and their operations has received a
significant amount of research attention, e.g., optimization of building operations
(Zhu 2006). Environmentally conscious construction has been investigated for
decades and there is a significant body of knowledge on environmental performance
criteria (e.g., Shen et al. 2005), methods of environmental impact analysis (e.g., Li et
al. 2009), and environmentally conscious construction management (e.g., Chen et al.
2005). While these studies have clearly demonstrated the significance of
environmental impact during construction phases, there are still gaps between the
ultimate goal of environmentally conscious construction and contributions of those
studies. This is because most of the studies have been focused on a specific
dimension, i.e., environmental impact, and overlooked the multi-objective nature of
construction projects. Only recently, a couple of studies in the environmentally
conscious construction management category have addressed the issue of multiple
objectives to a certain degree (e.g., Marzouk et al. 2008).
It is critical to develop an analytic procedure for studying the multi-objective
characteristic of construction projects, and thus this paper discusses a methodology
for analyzing the relationships between project time, cost and environmental impact
(TCEI). Currently, time and cost are the major project constraints that are carefully
planned and controlled by construction professionals. Although there are other factors
such as quality and safety that are also important, this study will only include
environmental impact. Other considerations can be added later. A sample project, the
Future House USA project, is used as a case study to demonstrate the application of
the framework.

BACKGROUND

Life Cycle Assessment (LCA). Environmental impact data for multi-objective
optimization can be derived by using different methods. LCA has been widely
applied to the evaluation of building products, systems, and construction processes.
Three types of LCA-based assessment methods are available, including process-based,
input-output analysis and hybrid. The advantages and disadvantages of each approach
have been discussed by Bilec et al. (2007). The LCA-based approach potentially
provides better quantification of environmental impact than those using expert
judgment.
In particular, environmental impact studies on construction have also gone
beyond scoring and viewed environmental mitigation from a life cycle perspective.
For example, Bilec et al. (2007) discussed the application of the hybrid approach to
construction by considering delivery, ancillary materials, construction equipment,
upstream production and maintenance effects of construction equipment, on-site
electricity use, and on-site water consumption. Although LCA has a more solid
scientific and engineering foundation than other approaches such as the LEED
scoring approach, it relies on the availability of life cycle inventory data to include a
comprehensive set of impact categories, especially the process-based LCA. Often,
such life cycle inventory data are not available.

Construction Operations. Studies on construction operations may potentially reveal
valuable information for modeling the environmental impact of construction processes.
These studies have developed different ideas to integrate environmental impact into
construction processes. While some proposed a work breakdown structure to identify
materials and equipment used in a construction activity (Li et al. 2009), others used
control account to measure the performance of a project (Howes 2000).
There is a causal link between construction processes and environmental
impact through resource consumption (e.g., labor, materials, equipment, energy, and
water). The connection between construction process models and environmental
impact will provide a mechanism to observe the effect of changes in construction
operations on environmental impact.

Multi-Objective Optimization. Multi-objective optimization has been applied to
time and cost trade-off analysis of construction projects based on different algorithms,
e.g., heuristic methods, mathematical programming and more recently evolutionary
algorithms including genetic algorithms (GAs) and ant colony optimization
algorithms.
In addition, there are studies analyzing the relationships of project constraints
beyond time and cost. For example, El-Rayes and Kandil (2005) visualized the three-
dimensional trade-offs among project time, cost, and quality by evaluating the impact
of various resource utilization plans on the project performance. They considered
construction methods, crew formation and crew overtime policies as decision
variables that finally influence a single decision variable that was called resource
utilization. A more recent example is the study by Marzouk et al. (2008), which used
GAs to perform multi-objective optimization with three objectives: project duration,
project cost, and total pollution. The study was focused on minimizing total pollution
and observing its relationship with time and cost.
Evolutionary algorithms such as genetic algorithms, ant colony optimization,
and particle swarm optimization are selection-based. The mathematical relationships
among variables are not required (Jiang and Zhu 2010). This characteristic of
evolutionary algorithms makes them desirable for time, cost and environmental impact
analyses, because the relationships among these objectives are not known, but each can
be analyzed and calculated independently (Ozcan and Zhu 2009). Results of analyses
using evolutionary algorithms are typically represented as a Pareto front, i.e., a set
of optimal solutions. Decision-making methods can then be applied to assist users in
choosing a suitable solution; for example, Mouzon and Yildirim (2008) applied the
analytical hierarchy process to determine the best alternative among a set of solutions
on the Pareto front.

METHODOLOGY

Definition of Alternatives for Construction Operations. The analysis of TCEI
depends on properly modeling alternative construction operations. Data of observed
construction operations can be obtained from sources such as project schedule,
estimate or daily log. Data of alternatives need to be estimated. The framework helps
to define alternatives of construction operations to be used in optimization and
analysis. Construction projects can be decomposed into activities using a work
breakdown structure (WBS). Through a set of different activities, alternatives of
construction operations may be formed. At the same time, important information such
as cost, durations and environmental impact of relevant activities is integrated in the
control account.

TCEI Estimation for Analysis. The TCEI information for the different alternatives
of construction operation is used in the optimization. Time and cost data are derived
using conventional scheduling and estimating methods, while LCA is applied to
analyze the environmental impact of identified materials, energy, water, and other
resources associated with a construction operation. In this study, ATHENA Impact
Estimator for Buildings 4.0 is used to perform LCA on assemblies associated with the
eleven activities and determine GWP values of different alternatives. ATHENA
evaluates the environmental performance regarding material manufacturing, resource
extraction, transportation, on-site construction, demolition, and disposal. The
potential environmental impacts defined in ATHENA are GWP, embodied primary
energy use, solid waste emissions, pollutants to air, pollutants to water, and weighted
resource use. This study only considers GWP as the environmental impact of
different construction methods.
ATHENA requires material types together with their quantities to perform
LCA. In order to calculate GWP, major types of construction materials of the activity
alternatives are captured and analyzed for deriving GWP per unit and the quantity of
each type of materials. The impact of equipment and tools applied during the
construction phase is analyzed separately because ATHENA does not allow users to
customize crew composition, including equipment. Therefore, alternatives were
formed by using different types of equipment, and in addition to the ATHENA results,
the GWP for equipment was calculated.
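
The roll-up implied by this procedure can be illustrated with a short Python sketch; the material quantities, unit GWP values and equipment GWP below are made up for illustration and are not taken from the case study:

    # Illustrative GWP roll-up for one activity alternative: material GWP from
    # the LCA tool plus separately computed equipment GWP (all numbers made up).
    materials = [
        {"name": "3000 psi concrete", "quantity": 40, "gwp_per_unit": 180.0},   # kg CO2 eq per CY
        {"name": "60000 psi rebar",   "quantity": 2.0, "gwp_per_unit": 950.0},  # kg CO2 eq per ton
    ]
    equipment_gwp = 1250.0  # kg CO2 eq, from fuel use of the selected equipment

    activity_gwp = sum(m["quantity"] * m["gwp_per_unit"] for m in materials) + equipment_gwp
    print(round(activity_gwp, 1), "kg CO2 eq")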

Optimization and Analysis. A construction project is defined as a set of building
elements and construction operations. In this study, construction operations are
defined as a set of N activities. Each solution, represented by a chromosome such as
solution k, contains all activities of a project so the length of a chromosome is N.
Each activity, as well as its alternatives, are analyzed separately and have an
associated value of TCEI. A solution, Sm, is thus associated with three objectives,
time, cost and environmental impact, defined as (Tm, Cm, EIm).
Since the purpose is to minimize the three objectives, the objective function,
OF, can be expressed as:
    OF = MIN(Tm, Cm, EIm),   m ∈ [1, U]
To understand the multivariable approach, a Euclidean space is defined with
the variables T, C and EI as axes (x, y, z). It is assumed that an absolute dominant
solution would be at the origin of the space (0, 0, 0). The space is then normalized
using the corresponding maximums of time, cost and environmental impact as upper
limits. The minimum is calculated by finding the minimum achievable for each
activity in each of the variables. After normalization, the difference between the
maximum and the minimum on each dimension is one. The normalized values for
time, cost and environmental impact for a solution K are:

    tK = (TK - Tmin) / (Tmax - Tmin)
    cK = (CK - Cmin) / (Cmax - Cmin)
    eiK = (EIK - EImin) / (EImax - EImin)

The performance of each solution is determined by a fitness function, defined
as the distance between the solution and the origin in the normalized subspace,
which corresponds to the length of the vector (tK, cK, eiK). A lower fitness value
means a better solution, as it is closer to the origin. The fitness function is then:

    f(TK, CK, EIK) = sqrt(tK^2 + cK^2 + eiK^2)
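
A short numerical sketch of this normalization and fitness calculation is given below; the upper and lower bounds are hypothetical, and only the solution values (112 days, $430,286, 64,944 kg CO2 eq) are taken from Table 2:

    # Sketch of the normalization and fitness calculation described above
    # (bounds are hypothetical, not taken from the case study).
    from math import sqrt

    def normalize(value, vmin, vmax):
        return (value - vmin) / (vmax - vmin)

    def fitness(time, cost, gwp, bounds):
        t = normalize(time, *bounds["time"])
        c = normalize(cost, *bounds["cost"])
        e = normalize(gwp,  *bounds["gwp"])
        # Distance to the ideal point (0, 0, 0) in the normalized space
        return sqrt(t**2 + c**2 + e**2)

    bounds = {"time": (100, 130), "cost": (400_000, 600_000), "gwp": (60_000, 100_000)}
    print(fitness(112, 430_286, 64_944, bounds))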

CASE STUDY

Case Description. The “Future House USA” is a two-storey zero-net-energy
residential house built in Beijing, China. Florida International University is one of the
sponsors of this demonstration project. The case study is designed to test the
aforementioned analytic procedure for determining optimized construction alternatives
in terms of time, cost and environmental impacts. To simplify calculations,
environmental impact is limited to GWP in this paper.
Table 1 shows four of the eleven activities and their alternatives for the
“Future House USA”. As an example, the footing construction activity has two
alternatives, No. 1 and No. 2. Different alternatives are created by using different
types of equipment (e.g., alternatives for the footing construction), crew (e.g., the
exterior wall construction), and materials (e.g., the exterior and the interior wall
construction). The alternatives of construction activities shown in Table 1 may form
in total 9,216 different options to deliver the project. Given this considerably
high number of options, a genetic algorithm, a type of evolutionary algorithm,
is selected to optimize the project duration, cost and GWP.

Act. No | Alt. No | Description | Cost (US $) | Time (days) | GWP (kg CO2 eq)
1 | 1 | Sitework, cut & chip light trees to 12" diam., finish grading, removal not incl. | 5,039.71 | 4 | 1,728.86
1 | 2 | Sitework, cut & chip light trees to 12" diam., finish grading, removal incl. | 4,924.93 | 4 | 2,938.36
2 | 1 | Excavation and Fill, 1' to 4' deep, 3/8 CY excavator, backfill trench | 360.71 | 2 | 317.66
2 | 2 | Excavation and Fill, 1' to 4' deep, 1/2 CY excavator, backfill trench | 297.05 | 2 | 399.34
3 | 1 | Footing, 3000 psi concrete, 60000 psi rebar, direct chute | 84,232.67 | 6 | 9,541.15
3 | 2 | Footing, 3000 psi concrete, 60000 psi rebar, pumped, formwork crew doubled | 90,392.28 | 5 | 9,715.51
4 | 1 | Stem Wall, 3000 psi concrete, 60000 psi rebar, direct chute | 76,650.79 | 13 | 9,647.65
4 | 2 | Stem Wall, 3000 psi concrete, 60000 psi rebar, pumped, formwork crew doubled | 86,174.94 | 8 | 9,822.01

Table 1 - Activities and Alternatives

Table 2 shows the cost, time and GWP results of a random set of
chromosomes as an example. In the first chromosome, Activity No. 1 is performed
using option 1; Activity No. 2 is also performed using option 1; and Activity No. 3 is
performed using option 2 from the options available to perform each activity
respectively.

Alternative selected for activities 1 to 11 | Cost (US $) | Time (days) | GWP (kg CO2 eq)
1 1 2 2 2 1 2 3 1 1 2 | 430,286.00 | 112 | 64,944.00
1 2 2 2 2 1 2 3 1 1 2 | 427,984.00 | 111 | 75,668.00
2 1 2 2 2 1 2 3 1 1 2 | 427,920.00 | 111 | 75,750.00
2 2 2 2 2 1 2 3 1 1 2 | 433,617.00 | 119 | 73,263.00
1 2 1 2 2 1 2 3 1 1 2 | 484,362.00 | 116 | 80,597.00
2 1 1 2 2 1 2 3 1 1 2 | 437,162.00 | 120 | 62,365.00
1 2 2 2 2 1 2 3 1 2 2 | 411,424.00 | 123 | 76,779.00
1 2 1 2 2 1 2 3 1 2 2 | 542,058.00 | 115 | 89,859.00
2 1 1 2 2 1 2 3 1 2 2 | 583,769.00 | 105 | 97,516.00
2 2 1 2 2 1 2 3 1 2 2 | 461,397.00 | 124 | 80,084.00

Table 2 - Cost, Time and GWP Results of a random set of chromosomes

Multi-Objective Optimization with Genetic Algorithms (GAs). The decision of the
population size is a crucial step in the process, because the population size is
positively associated with the number of required iterations to converge and the
accuracy of results. An initial population is randomly generated and the selection of
the chromosomes for crossover is determined randomly based on fitness, as proposed
by other authors (Marzouk et al. 2008). A roulette wheel
selection was applied. Once individuals are selected, the crossover process is initiated
and mutation is implemented as well.
A Pareto front, defined as a set of best possible answers, is obtained by
selecting the chromosomes at the top of the fitness-sorted population and evaluating
the dispersion of their fitness. Once the chromosomes in the resulting population
comply with the criteria set for size and proximity to one another, a Pareto front is
achieved and no more generations are produced. In the case of the proposed problem,
the maximum level of dispersion has been set at 5%.
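
The selection and stopping rule just described can be sketched as follows; this is an illustrative Python sketch under the stated assumptions (lower fitness is better, dispersion measured relative to the best-fitted answer), not the authors' implementation:

    # Sketch of roulette-wheel selection and the 5% dispersion criterion.
    import random

    def roulette_select(population, fitnesses):
        # Lower fitness is better here, so weight by inverse fitness
        weights = [1.0 / f for f in fitnesses]
        return random.choices(population, weights=weights, k=2)

    def pareto_candidates(population, fitnesses, max_dispersion=0.05):
        ranked = sorted(zip(fitnesses, population), key=lambda pair: pair[0])
        best = ranked[0][0]
        # Keep solutions whose fitness is within 5% of the best-fitted answer
        return [sol for fit, sol in ranked if (fit - best) / best <= max_dispersion]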
Table 3 shows the Pareto front obtained for the exercise; this table also
indicates the values of cost, time and environmental impact as well as the fitness and
the level of dispersion for each individual. A dispersion of 5% in the proposed study
means that the distance from the possibility frontier to the point defining each
chromosome listed on the Pareto front is no more than 5% larger than that of the best-fitted
answer in the normalized subspace. The ten solutions in Table 3 were found to be
the best alternatives among all the options considered. As expected, there is no single best solution.
None of the ten solutions in the Pareto front was dominant when all three objectives
were considered at the same time.

PARETO FRONT
Alternative selected for activities 1 to 11 | Cost (US $) | Time (days) | GWP (kg CO2 eq) | Fitness | Dispersion
1 1 2 2 2 1 2 3 1 1 2 | 439,811 | 107 | 65,119 | 0.291 | 0.00%
1 2 2 2 2 1 2 3 1 1 2 | 439,747 | 107 | 65,200 | 0.291 | 0.00%
2 1 2 2 2 1 2 3 1 1 2 | 439,696 | 107 | 66,328 | 0.293 | 0.14%
2 2 2 2 2 1 2 3 1 1 2 | 439,632 | 107 | 66,410 | 0.293 | 0.15%
1 2 1 2 2 1 2 3 1 1 2 | 433,587 | 108 | 65,026 | 0.305 | 1.41%
2 1 1 2 2 1 2 3 1 1 2 | 433,536 | 108 | 66,154 | 0.307 | 1.54%
1 2 2 2 2 1 2 3 1 2 2 | 437,487 | 108 | 65,200 | 0.309 | 1.75%
1 2 1 2 2 1 2 3 1 2 2 | 431,327 | 109 | 65,026 | 0.324 | 3.25%
2 1 1 2 2 1 2 3 1 2 2 | 431,276 | 109 | 66,154 | 0.325 | 3.37%
2 2 1 2 2 1 2 3 1 2 2 | 431,212 | 109 | 66,235 | 0.325 | 3.38%

Table 3 - The Pareto front for the "Future House USA"

CONCLUSION

Construction is a multi-objective operation, where project parties’ ability and
willingness to reduce environmental impact such as greenhouse gas emissions is
limited by various project constraints. Thus, understanding the relationship among the
constraints is critical. In this study, a generic framework was developed to observe
the relationship between time, cost and environmental impact of construction
operations. The proposed methodology combined two approaches: (1) life cycle
assessment; and (2) multi-objective optimization with genetic algorithms. The causal
link between construction processes and environmental impacts was set through
resource consumption during the life cycle of the project, while the relationship among
time, cost and environmental impact was analyzed with the help of GAs.
The case study “Future House USA” was implemented to illustrate the use of
the framework and the application of GAs to the TCEI optimization problem. The
concept of GAs matched well with the data and organization of the framework, as
they do not require knowledge of the relationships between objectives. In this case,
the model was searching for the relationship between time, cost and GWP, and GAs
helped this search by creating the Pareto optimal solutions. The proposed framework
can be applied to environmental impact parameters other than GWP to examine
the relationships among TCEI.
This paper contributes to the literature with a different perspective by combining
resource utilization and LCA in the multi-objective optimization procedure.
Obtaining the optimal set based on time, cost and GWP shows the existence of
interrelations between these three objectives. Understanding the relations among time, cost and
GWP is important for coping with limitations on, or implementing reductions in, any
one of them. The framework reveals how the construction method
selection can affect the improvement of the project performance. In addition to the
construction method selection, other factors that can have an impact on the
connections between time, cost and GWP need to be studied in the future.

REFERENCES
Bilec, M., Ries, R., & Matthews, H. S. (2007). Sustainable development and green
design - who is leading the green initiative? Journal of Professional Issues in
Engineering Education and Practice , 133 (4), 265-269.
Chen, Z., Li, H., Kong, S. C., & Xu, Q. (2005). A knowledge-driven management
approach to environmental-conscious construction. Construction Innovation ,
5, 27-39.
El-Rayes, K., & Kandil, A. (2005). Time-cost-quality trade off analysis for highway
construction. Journal of Construction Engineering and Management , 131 (4),
477-486.
Howes, R. (2000). Improving the performance of earned value analysis as a
construction project management tool. Journal of Engineering, Construction
and Architectural Management , 7 (4), 399-411.
Jiang, A., & Zhu, Y. (2010). A multi-stage approach to time-cost trade-off analysis
using mathematical programming. International Journal of Construction
Management .
Li, X., Zhu, Y., & Zhang, Z. (2009). An LCA-based environmental impact
assessment for construction processes. Building and Environment , accepted
for publication.
Marzouk, M., Madany, M., Abou-Zied, A., & El-Said, M. (2008). Handling
construction pollutions using multi-objective optimization. Construction
Management and Economics , 26, 1113-1125.
Mouzon, G., & Yildirim, M. B. (2008). A Framework to minimize total energy
consumption and total tardiness on a single machine. International Journal of
Sustainable Engineering , 1 (2), 105-116.
Ozcan, G., & Zhu, Y. (2009). Life-cycle assessment of a zero-net energy house. The
Proceedings of the International Conference of Construction and Real Estate
Management (ICCREM). Beijing, China: The Chinese Construction Industry
Press.
Shen, L., Lu, W., Yao, H., & Wu, D. (2005). A computer-based scoring method for
measuring the environmental performance of construction activities.
Automation in Construction , 14, 297-309.
Zhu, Y. (2006). Applying computer-based simulation to energy auditing: a case study.
Energy and Buildings , 38, 421-428.
Learning to Appropriate a Project Social Network System Technology
Ivan Mutis1 and R.R.A. Issa2
1, 2M.E. Rinker School of Building Construction, College of Design Construction

and Planning, University of Florida, P.O. Box 115703, Gainesville, FL 32611-5703;


PH (352) 273-1178; FAX (352) 846-2772; email: imutis@ufl.edu; raymond-
issa@ufl.edu

ABSTRACT
Construction project participants constitute a complex social human network
composed of a heterogeneous and fragmented set of stakeholders. The disjoint group
of actors that team to work on a project constitutes collective entities, social networks
at different scales in time and space. There is a need to incorporate new social network
systems that respond to the demand for the interfacing of actors’ communication in the
construction project practices. For this purpose, it is critical to understand how social
actors interact with these new technologies. This research proposes a framework to
understand the actors’ learning process of a social networking system technology, in
particular an in-house developed social-network- system for construction projects. The
challenge is to understand the interplay of social network system and social actors, as
it involves the interaction of multiple actors. It is expected that learning and
understanding the components of this technology will lead learners to effectively use
the its resources and enable them to effectively appropriate its associated processes for
its use, including communication and coordination with the construction work force,
and creation, contribution, and distribution of information content.

INTRODUCTION
As appropriation is the process by which users adopt and adapt technologies, fitting
them into their working practices (Dourish 2003), there is need to understand how this
process occurs with a critical mass of learners. This research takes a social network
system as the mediating technology for the analysis.
Learning to interact and to communicate the information content of
construction projects will improve the speed of the adoption of new technologies. This
research explores methods for learning to collaboratively communicate construction
project information through social networking environments by appropriating a social
network system. The urgent challenge to advance the competitiveness and efficiency
of the construction industry through innovative methods to connect its workforce is
recognized.
This investigation uses a systematic model based on the structuration theory
(Giddens 1984) to learn how social actors appropriate technology (DeSanctis et al.
1999; Orlikowski 1992; Orlikowski 2000). The model is an analytical construct that
assists in the understanding of the use, advantages, and limitations of mediating
technologies. The model explicitly associates the social network actors as a social
structure, and the social network system as a technology that enables effective
communication of information content. It is expected that the deployment of this

technology will benefit the acquisition of concepts, knowledge, and skills for their
effective interfacing through the use of mediating technologies. Learners will use the
technology and its resources to simulate the interfacing of actors in collaborative
settings within the contexts of a project, to interact, share social objects, look for
affinity roles within the social network, annotate documents, and send messages, as
part of the features and services of a social network. Central to building the learners’
experience with the technology is an understanding of the mediation process between the
technology and the users.
As it is critical to clearly comprehend the fundamental components that
underlie the proposed framework, the following section defines actors, teams, and
communities as social components, followed by the explanation of the social network
system as a mediating technology.

ACTORS, TEAMS, AND COMMUNITIES


The underlying groups of actors that form the social network are the ones that have
ties around interests and expertise. They constitute distinct communities in the
network and they are responsible for making decisions, and operate from distributed
geographical locations. A group of actors that works in a collaborative environment
resembles the features that define global virtual teams (Jarvenpaa et al. 1999;
Maznevski et al. 2000; Powell et al. 2004; Shachaf 2008), since the actors and the
teams are culturally diverse, geographically dispersed, and structured within virtual
organizations (DeSanctis et al. 1999). In defining a social network, this investigation
takes into account the social dimension of the network and does not consider actors as
entities of a network structure, or individual interconnected entities, or individual
nodes from the network. Actors are members of one or more communities, perform
individual or collective actions in teams, and have a role with a social dimension. For
a further conceptualization, actors, teams and communities are defined as follows:
Actors. Actors are legitimized entities of knowledge and their identity is
constructed as they begin to be engaged with acts within an organization (Chia 2000).
Organizations are systems of social individuals that create a social structure, and
actors are social entities, which can be defined as discrete individuals, units of
organizations, or collective social units (Wasserman et al. 1994). Actors are linked to
one another and are associated by organizational sets of relationships
defined by common, specific objectives. They constitute a dynamic social network, a
structure composed of actors (Kadushin 2004) that have one or multiple relationships
among them.
Actors will define their relationships to one another in a rich set of ties to the
social network. The purpose is to solve the complex interplay of relations and
dependencies, which ultimately leads to success in communication, as actors define
their relationships as teams and not as a set of individuals (Foley et al. 2005).
Teams. Construction project teams have a mixture of actors from different
disciplines and communities. They have different views of seeing problems, subjects
of discussion, and motivations. A widely accepted definition of a team is that it is a
collection of individuals “who are interdependent in their tasks, share responsibility
for outcomes … who manage their relationship across organizational boundaries”
(Cohen et al. 1997, p. 241). Project teams are formed as the project progresses, and get
together in face-to-face settings or get connected through mediating technologies
to hold technical meetings, cost reviews, and problem-solving discussions,
among other activities. The creation of teams constitutes the emergence of ties between
actors and these ties are the connections that form the social network.
Communities. To contribute to a construction project activity, actors share
their expertise within the communities with which they are associated. Communities
are centers where actors gather around specific interests, constituting communities of
practice (Wenger 1998; Wenger 2010).

SOCIAL NETWORK SYSTEM AS A MEDIATING TECHNOLOGY


In lieu of face-to-face meetings among construction team actors, a
mediating technology can be used as an interface system for the actors’ meetings. This
system eliminates geographical barriers by connecting actors across a network. The
mediating technology is a social network system that is used to investigate the
interaction of the construction workforce and technology from the (1) social, (2)
organizational, and (3) technological viewpoints. It enables, for example, real time
data collection and knowledge discovery by capturing new contexts, variables, and
units of analysis. New social media systems provide a platform for information
sharing, interoperability, and collaboration with new contexts that controlled
experiments cannot adequately capture (Shneiderman 2008; Suchman 2007).
The creation, contribution, and distribution of information content, its
personalization and semantic analysis, and the collaborative evaluation in a social
network system, all together are new ways of interacting with information, called the
‘social information processing’ paradigm (Lerman 2008).

FRAMEWORK FOR APPROPRIATION AND LEARNING


This framework provides the theoretical constructs to build our assumptions and it is
used as a strategy and method to lead to the formulation of an evolving lifelong model
for learning. The goal is to develop research on understanding the learners’
appropriation and adoption of mediation technologies as they are gradually
incorporated into their work practices. It is expected that the examination of the
interaction with mediation technologies will uncover new layers of meaning in the social
structures of construction actors, such as teams, and communities of practice. The
framework, therefore, is applied to understand how learners appropriate new
technology and the changes they assimilate within typical work practices of
construction projects. The appropriation of mediating technologies is an evolving
learning process that starts with the basic understanding of the status quo of the social
structure that includes the typical roles, hierarchies, and composition of construction
organizations.
The framework positions the researcher to observe and discover new social
structures and actors’ behaviors towards the mediating technology as the learners’
perception changes through understanding, learning, and practicing. Consequently,
the approach differs from studying the array of resources that the mediating
technology offers (e.g. the ability of the system to provide annotations, search,
and retrieval of documents) and from studying the actors’ uses of the technology
(e.g. the measurement of actors’ effectiveness in annotating, searching and retrieving
documents).
This research explores a social network system as a mediating technology together
with social actors, in order to consider the mutual influences of this interplay. The
interaction with the social network system involves four basic elements, as shown in
Figure 1: (1) individual actors or groups of actors within the network; (2) actors’
roles within the social structure or social organization; (3) representations of
information; and (4) actions to be executed through the system. For example, the
representations of information are visualizations, specifications, documents, and any
other forms that are exchanged and shared. The actions are all interactions that the
actors and the mediating technology are able to execute.
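To make these four elements concrete, the following minimal sketch represents them as a simple data model; the class and field names are illustrative assumptions introduced for exposition and are not part of the framework itself.

from dataclasses import dataclass

@dataclass
class Actor:
    """An individual or collective actor in the network (element 1)."""
    name: str
    role: str = "unspecified"   # element 2: role within the social structure

@dataclass
class InformationItem:
    """A representation of information exchanged through the system (element 3)."""
    kind: str                   # e.g. 'visualization', 'specification', 'document'
    title: str

@dataclass
class Action:
    """An interaction executed through the mediating technology (element 4)."""
    verb: str                   # e.g. 'annotate', 'share', 'retrieve'
    actor: Actor
    item: InformationItem

# Example: an engineer sharing a specification with the project team.
engineer = Actor(name="A. Engineer", role="structural engineer")
spec = InformationItem(kind="specification", title="Concrete mix design rev. 2")
interaction = Action(verb="share", actor=engineer, item=spec)
print(f"{interaction.actor.name} ({interaction.actor.role}) -> {interaction.verb}: {interaction.item.title}")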

Theoretical Framework. Social networks have a flexible structure in their
composition and present an array of social structures. The network ties can be
defined dynamically according to environmental contexts and other properties.
The framework is based on Adaptive Structuration Theory (AST) (DeSanctis et al.
1994; Giddens 1984; Orlikowski 2000) as a template to understand: (1) the process of
incorporating a new technology, (2) the social structures that emerge with actors’
actions, (3) the organizational changes as the technology is used, and (4) the changes
in work practices. AST describes how a social group’s perception of a technology
changes with its use, including the impact on group practices. As shown in Figure 2, the
following are the constructs:

Structure Layer. The first layer provides the description of the elements of the
framework that can be defined by structures. Any structure is both the medium and
the outcome. In this case, our proposed framework defines technology, information,
and social structures. A structure by itself can be characterized through properties
that define a dimension, and each dimension has an arrangement of components of the
structure. For example, the arrangement of social actors with respect to technology
innovation is (Rogers 2003): innovators, early adopters, early majority, late majority,
and laggards. Social actors who are technology innovators therefore constitute a
social structure.

Figure 1. Elements that interact with the social network system
As shown in Figure 2, the first element of the structure is technology. The properties
that define the dimension of the technology component are (a) capabilities and (b)
resources. In this case study, the social network system has capabilities and
resources: it has the ability to connect project actors, to connect blogs, to organize
meetings, and to retrieve documents, among other services.
The second element of the structure layer is information. Information
represents the set of structures that provide the users with a means to govern and
manipulate the technology and the information content. This component therefore
gives control and meaning to the users. The information component is defined by
properties of form, procedures, and structure. For example, a database is defined by
its constraints, a workflow is defined by its procedures, and drawings and
visualizations are images and symbols that represent information.
The technology and the information components are characterized by reasons
and purposes (see Figure 2). Reasons explain the relevance of the technology structure
and the information; they help the users understand the meaning of the technology.
The purpose defines the intention of the structure of the technology and the
information, and explains why the architects of the system designed such a structure.
For example, the purpose of naming a feature of a system ‘blogging’ is to make the
feature easier for users to understand than the technical jargon used by IT specialists,
such as ‘speech acts’, for the same feature. Reasons and purposes can be present
according to the degree of influence of the user in a structure. A complete and
coherent set of reasons and purposes translates into the users’ easy understanding of
features and of technology usability. If the reasons and purposes are confusing and
poor, they lead the users to contradictions in understanding the technology and to
equivocal use of the technology resources.

Figure 2. Framework of mediating technologies, structures, and actors.

The third element of the layer in Figure 2 is social structures. Social
structures are defined according to properties of the actors. The actors’ status quo can
be defined by beliefs (e.g. the degree to which members trust the technology),
modalities of conduct, contextual relationships, and the history records actors have
access to. The contextual relationships describe the status of the user or actor within
the organization, which generally is defined by the organization’s norms. Another
dimension of contextual relationships is the one the actor has with the environment.
For example, actors can belong to forms of structure other than the actors’ own
organization, such as union organizations. Alternatively, other social structures that
impose constraints on the users in their activities can be defined as contextual.
There are multiple combinations of the dimensions according to the properties
of the structures of the technology, the information, and the social actors. There are
also multiple instances in which the structures are not defined explicitly. In such
cases, the actors’ actions with the technology are required to uncover the implicit
structure, or to evolve the structures and uncover new dimensions. This case is represented in
Figure 2 as the non-explicit social structure.

Action. The technology and its resources are brought into action along with the users’
social structures. As the actors’ status quo is defined by the social structures, the
actors’ actions are produced under such conditions. The actions are the actors’
responses towards the technology, and they reflect the reactions to the technology
rules and constraints (DeSanctis et al. 1994). Typical uses of technology are the
integration, sharing, and exchange of the information the technology provides to the users.
The actions reflect either the good or poor understanding of the reasons and purposes
of the technology and information content.
There is a mediation process between the technology and the users. The
mediation consists of processing, computing, and facilitating the accessing, retrieving,
and searching of information. Other mediation processes connect users, as is the
case with the social networking system. Figure 2 shows the action layer, including the
elements and their features.
Output. The act of appropriation of technology occurs when users decide to
employ or not to employ certain resources of the technology (DeSanctis et al. 1994). For
example, if users decide to mark up construction documents only through the tagging
tools provided by the social network system, users appropriate the annotation resource of
the mediating technology. When users bring technology into action, they may find
uses other than the ones the resources were originally designed for. In this case,
the appropriation of the technology occurs but the resources take new forms
(Orlikowski 1992; Orlikowski 2000). The technology takes new forms, which are
defined by the structure for those particular actors. These new forms occur when
certain groups of users repeatedly interact with the technology.
Learning and appropriation. Learning how to use the technology resources,
therefore, is a process resulting from appropriation. There is a wide range of cases in
which students learn to fully manipulate the resources provided by the technology or
only poorly understand the reasons and purposes of those resources. The proposed
framework is used to explore the learning process of a group of users. It provides the
template to understand the social interaction of a group of actors in learning a social
network system, since actors perform collective and individual actions, and it provides
the strategy to understand the attitudes and abilities of users towards the mediating
technology. This is possible, for example, by studying the actions and reactions to the
purposes and reasons, as stated in the framework.

CONCLUSIONS AND FUTURE WORK


To learn and incorporate new technologies into work practices, it is necessary to
understand the dynamic interplay between the users as social actors and the new
technology. This process involves the interaction of social structures that shapes the
technology’s appropriation. This research proposes a framework based on AST to
understand the features that define the technology and the social structures that the
users need to consider in adopting a social network system. As the appropriation of
this system involves the interaction of collective and individual actors, it is imperative
to understand the properties and the dimensions of the social structures, including the
nature of the members and their relationships within the group. Future work involves
using this framework to understand the appropriation of mediating technologies by
collective actors as collections of individual actors, to evaluate whether there are
differences in learning when they are considered as a sum of individuals.

REFERENCES
Aconex. (2010). Project collaboration and online project management system,
Accessed, September 2010.
Beer, M. (1998). "Organizational behavior and development." HBS Working
Papers Collection, Harvard School of Business Cambridge, MA, 17.
Carroll, J. M., Rosson, M. B., Farooq, U., and Xiao, L. (2009). "Beyond being aware."
Information and Organization, 19(3), 162-185.
Chia, R. (2000). "Discourse Analysis Organizational Analysis." Organization, 7(3),
513-518.
Cohen, S. G., and Bailey, D. E. (1997). "What Makes Teams Work: Group
Effectiveness Research from the Shop Floor to the Executive Suite."
Journal of Management, 23(3), 239-290.
DeSanctis, G., and Monge, P. (1999). "Introduction to the Special Issue:
Communication Processes for Virtual Organizations." Organization
Science, 10(6), 693-703.
DeSanctis, G., and Poole, M. S. (1994). "Capturing the Complexity in Advanced
Technology Use: Adaptive Structuration Theory." Organization Science,
5(2), 121-147.
Dourish, P. (2003). "The Appropriation of Interactive Technologies: Some
Lessons from Placeless Documents." Computer Supported Cooperative
Work (CSCW), 12(4), 35.
Dourish, P., and Bellotti, V.(1992) "Awareness and coordination in shared
workspaces." Proceedings of the 1992 ACM conference on Computer-
supported cooperative work, Toronto, Ontario, Canada, 107-114.
Foley, J., and Macmillan, S. (2005). "Patterns of interaction in construction team
meetings." CoDesign: International Journal of CoCreation in Design and the
Arts, 1(1), 19 - 37.
Giddens, A. (1984). The constitution of society : outline of the theory of
structuration, University of California Press, Berkeley.
Jarvenpaa, S. L., and Leidner, D. E. (1999). "Communication and Trust in Global
Virtual Teams." Organization Science, 10(6), 791-815.
Kadushin, C. (2004). Introduction to social network theory, Accessed, December
Lerman, K. (2008). "Social Information Processing in News Aggregation." IEEE
Internet Computing, 11(6), 16-28.
Maznevski, M. L., and Chudoba, K. M. (2000). "Bridging Space Over Time: Global
Virtual Team Dynamics and Effectiveness." Organization Science, 11(5),
473 - 492.
Orlikowski, W. J. (1992). "The Duality of Technology: Rethinking the Concept of
Technology in Organizations." Organization Science, 3(3), 398-427
Orlikowski, W. J. (2000). "Using Technology and Constituting Structures: A
Practice Lens for Studying Technology in Organizations." Organization
Science, 11(4), 24.
Powell, A., Piccoli, G., and Ives, B. (2004). "Virtual teams: a review of current
literature and directions for future research." SIGMIS Database, 35(1), 6-
36.
Rogers, E. M. (2003). Diffusion of innovations, Free Press, New York.
Shachaf, P. (2008). "Cultural diversity and information and communication
technology impacts on global virtual teams: An exploratory study."
Information & Management, 45(2), 131-142.
Shneiderman, B. (2008). "Computer Science: Science 2.0." Science, 319(5), 1349-1350.
Suchman, L. A. (2007). Human-machine reconfigurations : plans and situated
actions, Cambridge University Press, Cambridge ; New York.
Wasserman, S., and Faust, K. (1994). Social network analysis : methods and
applications, Cambridge University Press, Cambridge ; New York.
Wenger, E. (1998). Communities of practice : learning, meaning, and identity,
Cambridge University Press, Cambridge, U.K. ; New York, N.Y.
Wenger, E. (2010). "Communities of practice and social learning systems: the
career of a concept", (Chapter 11) in Social learning systems and
communities of practice, Ch. C. Blackmore, ed., Springer, London, xv, 225 p.
Yang, S. J. H., and Chen, I. Y. L. (2008). "A social network-based system for
supporting interactive collaboration in knowledge sharing over peer-to-
peer network." International Journal of Human-Computer Studies, 66(1),
36-50.
Decision Support for Building Renovation Strategies
H. Yin1, M. ASCE and P. Stack1, K. Menzel1, M. ASCE
1
University College Cork, Department of Civil and Environmental Engineering,
Room 2.12, Western Gateway Building, Western Road, Cork, Ireland; PH (353) 21-
4205454; FAX (353) 21-5451; email: hang.yin@umail.ucc.ie

ABSTRACT

The renovation of existing buildings usually involves decision-making
processes aiming at reducing energy consumption and building maintenance costs.
The goal of this paper is to prioritize which components/systems need to be replaced.
It focuses on renovation strategies from the maintenance density and energy
consumption points of view. A Decision Support Framework (DSF) was developed as
a basis for the future development of a Decision Support System. Data warehousing
methodologies were used to extract, store and analyze maintenance data. A Web
application was developed to display the frequency of faults and the density of
maintenance of six distinct buildings at University College Cork (UCC). A generic
Decision Support Model (DSM) with six sections was developed and implemented as
a computerized semi-automatic tool – the Decision Support Tool (DST). By
integrating the analysis results and the DST, five renovation suggestions were proposed.

INTRODUCTION

Currently, buildings account for 40 percent of global energy consumption and
greenhouse gas emissions (Burnham, 2009). Many buildings are still constructed and
refurbished without considering possible improvements in user comfort levels or
possible methods of energy conservation. Renovation becomes necessary as a
component or a whole system ages or deteriorates due to poor maintenance or design.
Decision makers are always in a dilemma about whether to renovate systems or
continue maintaining them, and about which option should be selected among
renovation strategies.
This paper is part of a research project focusing on building performance
analysis and decision support for renovation strategies. The research developed a
holistic, methodological framework that guides the user in analyzing building
performance and comparing various renovation solutions in order to make
decisions. The building performance factors addressed in this research are energy
performance (technical parameter), thermal performance (technical parameter) and
maintenance activities. This paper focuses on the maintenance activities perspective.
It is envisaged that the potential for decision-making on renovation strategies can be
improved through the extensive use of historical data analysis. Analysis of historical
data will help owners understand the performance of an existing building when
making decisions on future renovation strategies. Data for our research was provided
by the Building & Estates (B&E) Office of UCC. These maintenance reports include
upgrade and repair
requests from a wide range of buildings from the campus. The maintenance data of
selected buildings is extracted for storage and analysis in a Data Warehouse (DW).
This paper first introduces the scope of decision support. The second part
introduces the DSF, including a methodology for the analysis of maintenance
activities and the development of a generic DSM and a DST based on the DSM. The
final part describes the application of the tool and proposes five suggestions,
including an example of decision support for upgrading the existing lighting tubes of
the Boole Building at UCC.

THE SCOPE OF DECISION SUPPORT

Decision Support (DS) is a broad field concerned with supporting people in
making decisions (Bohance, 2003). Many parties are involved in building design or
renovation, including the client, the architect, the performance analysts, the
mechanical engineer, the cost estimator, the structural engineer and the construction
manager. Each decision maker plays a different role. Due to a lack of knowledge, it is
difficult to determine which low-energy technique has the best results for energy
consumption and CO2 emissions (Vreenegoor et al. 2008). However, computer based
tools can aid in decision making.
Initially, there was a close link between DS and Operations Research and
Decision Analysis (Bohance, 2003). In a second step, DS was coupled with the
development of Decision Support Systems (Bohance, 2003). Nowadays, DS is
probably most often associated with an integrated usage of DW, Data Mining, OLAP
(On-Line Analytical Processing) and Modeling and Simulation. These technologies
enable viewing data from different perspectives, identifying all the relevant data and
gathering it together (Morrison et al. 1999), supplying specialized data analysis
(IMOS, 1997), and organizing the results as meaningful information, which enhances
efficiency in decision-making. This implies that DS is often associated with DW and
Data Mining methodologies. On the other hand, DS provides a variety of preference
modeling, simulation, visualization and interactive techniques, which means it is also
closely tied to Modeling and Simulation (Gilfillan, 1997; SRI, 2001).
The maintenance data used in this paper was managed using data warehousing
methodologies in order to support statistical analysis. DW is a repository of multiple
heterogeneous data sources, organized under a unified schema in order to facilitate
management decision-making (Han, 2001).

DEVELOPMENT OF DECISION SUPPORT FRAMEWORK

The framework was developed to enable comparisons of various renovation
solutions and selection of the best solution. A simplified diagram of the DSF is
shown in Figure 1. Independent factors (climate, location, site and building type) and
dependent factors (renovation policy, ‘core and shell’, service systems, appliances and
occupants, maintenance activities) are inputs for the analysis of the building
performance. Based on the technical analysis, a financial analysis is carried out by
calculating the payback time to evaluate renovation options. A DSM was
developed to form the basis for a DST. A database is used to store all feasible
renovation options for the DST, which supports users in making the final decision.
Figure 1 Simplified diagram of the Decision Support Framework (independent and dependent factors feed the building performance analysis of the existing building; technical/maintenance analysis and financial analysis, supported by a database of renovation options, lead through the Decision Support Model and Decision Support Tool to decision making on renovation solutions)

Methodology of Maintenance Analysis

In this case, information including energy consumption, maintenance reports and the
main characteristics of buildings on the UCC campus was provided by the B&E
Office. Maintenance data are normally available in a relational DW if an appropriate
facility management system is used. However, the data from UCC was provided in
.pdf format, with the fault descriptions unclassified and written in descriptive English,
so it could not be analyzed directly. Processing of the raw maintenance data is
therefore necessary. Firstly, the information has to be extracted from .pdf to Excel.
Secondly, all components and fault reasons have to be classified, and a component
index and a fault index assigned to each class of components and fault reasons.
Finally, an application was developed to import all the information into the existing
DW. A Web application linked to the DW was developed to display the frequency of
faults and the density of maintenance (see Figure 3).
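As an illustration of the classification step, the following sketch assigns component and fault indices to free-text maintenance requests using simple keyword rules; the keyword lists, index values and record format are assumptions made for this example and do not reproduce the actual UCC classification scheme.

# Illustrative keyword-based classification of maintenance requests
# before loading them into the data warehouse.
COMPONENT_KEYWORDS = {
    1: ("plumbing", ["toilet", "tap", "blockage", "pipe", "leak"]),
    2: ("lighting", ["lamp", "tube", "light", "ballast"]),
    3: ("hvac",     ["radiator", "heating", "vent", "air conditioning"]),
}
FAULT_KEYWORDS = {
    1: ("blocked",       ["blocked", "blockage"]),
    2: ("not working",   ["not working", "broken", "faulty"]),
    3: ("too cold/warm", ["too cold", "too warm"]),
}

def classify(description):
    """Return (component_index, fault_index); 0 means 'unclassified'."""
    text = description.lower()
    comp = next((i for i, (_, kws) in COMPONENT_KEYWORDS.items()
                 if any(k in text for k in kws)), 0)
    fault = next((i for i, (_, kws) in FAULT_KEYWORDS.items()
                  if any(k in text for k in kws)), 0)
    return comp, fault

requests = [
    "Toilet blocked on second floor",
    "Fluorescent tube not working in room 2.12",
    "Room too warm - heating valve stuck",
]
for r in requests:
    print(r, "->", classify(r))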

Development of Decision Support Model and Decision Support Tool

A DSM with six sections was developed, comprising an initial survey of the area
condition, analysis of building performance, consideration of all possible solutions,
specification of feasible options, evaluation of the options and selection of the best
option. The details of the DSM were published at the International Conference on
Building Science and Engineering (ICBSE) (Yin et al. 2011). As an example, Figure 2
shows one section (Section II) of this DSM from the maintenance perspective. The
DST was designed based on the DSM. Its interface has a left panel and a right panel:
the left panel shows the six sections of the DSM and the list of subsections within
each section, and the right panel shows user input, tables, and solutions/options (see
Figure 4). The following section introduces the implementation of this tool.

IMPLEMENTATION OF THE DECISION SUPPORT TOOL

Renovation works are not in practice for all six of these buildings, so this case
study focuses on Section II and Section III from the maintenance perspective. For a
particular building, for example the Boole Building, an upgrade of the lighting system
is in practice; therefore, that example can cover all sections of the DSM.
Figure 2 The Section II of the Decision Support Model (decision flow: starting from Section I, the maintenance data are analyzed; if the building’s energy usage is higher than the local benchmark, energy performance simulation is used to select a better renovation alternative; if a component’s maintenance density is higher than that of other components, the flow checks whether the component has exceeded its life span, whether its maintenance cost is higher than normal, whether renovation will save energy, and whether the owner would like to renovate; the outcomes are to continue maintenance and repair, to optimize maintenance scheduling to reduce maintenance density or cost, or to proceed to Section III, consideration of all possible solutions)

Section II – Building Performance Analysis: The presented research focused
on the maintenance requests and activities of six distinct buildings: the Kane Building,
Boole Library Building, O’Raphilly (O’R) Building, Civil and Environmental
Engineering (CEE) Building, Electrical Engineering (EE) Building and Environmental
Research Institute (ERI) Building. We selected three buildings with areas of more
than 10000 m2 and three buildings with small areas. Their construction dates vary
from 1910 to 2004. Table 1 shows the main characteristics of the six buildings. The
energy usage of the four buildings other than O’R and ERI is greater than the
benchmark energy usage of a typical building; therefore renovation solutions for
reducing energy usage need to be considered.
A Web application was developed to display maintenance density, fault
density and fault date. The function of the Web application is to facilitate relational
reporting linked to the DW, in which the counts of maintenance data and fault data are
carried out. Figure 3 shows the Web application with the maintenance density for
the CEE building in 2009. Figure 4 shows a screenshot of the DST with the
demonstrator of the CEE building. Table 2 shows the maintenance density of the
selected six buildings. The statistics of maintenance density show that, in the
buildings with large areas (not only the old buildings but also the new one), plumbing
systems are the components requiring the most maintenance, followed by the lighting
system, the HVAC system and the ‘core and shell’. In the buildings with small areas,
the lighting systems have the largest proportion of maintenance density, followed by
the plumbing system, the ‘core and shell’ and the HVAC system.
Table 1. The main characteristics of the selected six buildings
Bldg.  Area (m2)  Built Year  EU   EU Std. (good practice)  EU Std. (typical)  Window  Wall           Roof
Kane   13699      1971        615  348                      568                Single  No insulation  No insulation
Boole  19662      1971        620  348                      568                Double  No insulation  No insulation
O’R    11812      1997        340  348                      568                Double  Insulation     Insulation
CEE    1741       1910        424  112                      205                Single  No insulation  No insulation
EE     2791       1954        445  112                      205                Double  No insulation  No insulation
ERI    2781       2004        182  112                      205                Double  Insulation     Insulation
EU – Energy Usage (kWh/m2/yr); EU Std. – benchmark energy usage (reference: Energy Consumption Guide 19).
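The benchmark test that opens Section II of the DSM (Figure 2) can be applied directly to the values in Table 1. The following sketch, with the energy usage and ‘typical’ benchmark values transcribed from Table 1, flags the buildings whose energy usage exceeds the benchmark for their size class.

# Energy usage (kWh/m2/yr) and 'typical' benchmark per building, from Table 1.
buildings = {
    "Kane":  {"eu": 615, "benchmark_typical": 568},
    "Boole": {"eu": 620, "benchmark_typical": 568},
    "O'R":   {"eu": 340, "benchmark_typical": 568},
    "CEE":   {"eu": 424, "benchmark_typical": 205},
    "EE":    {"eu": 445, "benchmark_typical": 205},
    "ERI":   {"eu": 182, "benchmark_typical": 205},
}

# Buildings above the benchmark are candidates for energy-saving renovation.
needs_energy_renovation = [name for name, b in buildings.items()
                           if b["eu"] > b["benchmark_typical"]]
print(needs_energy_renovation)   # ['Kane', 'Boole', 'CEE', 'EE']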

Figure 3 Maintenance density for the CEE Building in 2009


Section III – All Possible Solutions: Focusing on the issues revealed by the
maintenance densities in Table 2 and the fault frequencies of those buildings, five
suggestions for improving building performance were proposed:
Optimize the maintenance schedule of plumbing systems – Checking the
maintenance data of, for example, the Boole building, more than 80% of the
maintenance activities occurred in the public toilets, with 69% for blockage removal
and 13% for repairing or inspecting taps. According to the dates of the blockages,
most blockage problems occurred during the exam months (April and May), when
more students use the Boole building than normal. This means that external factors
(the occupants) drive the maintenance activities rather than the system itself.
Therefore, it is better to develop an optimized maintenance schedule to avoid too
many maintenance requests.
Figure 4 Decision Support Tool - maintenance activities

Table 2. The maintenance density of the main components of the selected six buildings
Buildings  Plumbing (%)  Lighting (%)  Heating (%)  Vent. (%)  Air Cond. (%)  Wall (%)  Door (%)  Window (%)  Roof (%)
Kane       37.4          19.8          8            10.3       X              0.5       1.4       0.7         0.9
Boole      50            8.6           7.1          1.3        8.4            1.1       7.1       2.4         0.9
O’R        34.6          19.7          2.2          X1         2.1            1.3       14.5      2.5         0.3
CEE        14.0          17.2          4.2          X0         X              1.7       16.5      3.2         5.8
EE         11.6          59.6          1.9          3.9        X              0         9.6       1.9         3.9
ERI        19.6          25.5          0            X1         X              2         3         1           0
Heating, Vent. and Air Cond. form the HVAC system; Wall, Door, Window and Roof form the ‘core and shell’.
X – no system of this type in the building; X0 – currently not in use; X1 – natural ventilation system
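The maintenance density figures of the kind reported in Table 2 amount to a simple aggregation over the classified requests. The following sketch, assuming a list of (building, component) records as produced by the classification step, computes each component’s share of a building’s maintenance requests; the records shown are invented for illustration.

from collections import Counter

# Hypothetical classified maintenance records: (building, component)
records = [
    ("Boole", "plumbing"), ("Boole", "plumbing"), ("Boole", "lighting"),
    ("CEE", "lighting"), ("CEE", "plumbing"), ("CEE", "door"),
]

def maintenance_density(records, building):
    """Percentage of a building's maintenance requests attributed to each component."""
    counts = Counter(comp for bldg, comp in records if bldg == building)
    total = sum(counts.values())
    return {comp: round(100 * n / total, 1) for comp, n in counts.items()}

print(maintenance_density(records, "Boole"))   # e.g. {'plumbing': 66.7, 'lighting': 33.3}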

Improve settings of the existing BMS – Checking the fault reasons for, for
example, the Boole building, more than 90% of the maintenance activities were
associated with poor indoor performance (the room was too cold or too warm). The
historical maintenance data make it possible to identify which areas repeatedly have
this kind of problem; therefore, it is important to re-set the temperature set-points of
the sensors of the existing Building Management System (BMS), or to add more
sensors to those areas, in order to improve the occupant comfort level.
Upgrade lighting systems – The main type of energy consumed on the UCC
campus is electricity. Currently, most of the fluorescent tubes in UCC’s lighting
systems are T12 or T8. Thomas et al. (1991) showed that significant energy savings
could be attained by using more efficient lighting systems. T5 tubes are a newer
generation of fluorescent tubes designed to optimize energy consumption, with a
lifespan of more than 30,000 hours, providing savings on installation and long-term
maintenance.
According to previous retrofit experience, a lighting system with T5 tubes was about
38% more energy efficient than the conventional T8 system (Wu, 2005), and at the
same time the illumination at the working plane increased from 500 to 700 lux.
Improve existing mechanical ventilation systems – Many buildings at UCC
were built with mechanical ventilation systems, some of which are currently not used
(e.g. the Kane building). The existing vents waste energy during winter because cold
air enters the occupied rooms through them. Therefore, for offices that can obtain
enough fresh air through natural ventilation, it is better to seal the existing vents to
avoid air leakage. For the public computer labs, where the room temperature is
always higher than the prescribed value of 19-21oC, ventilation units with heat
recovery were suggested to improve the traditional mechanical ventilation systems,
both in buildings that currently use mechanical ventilation and in buildings that do
not; such units can save more than 75% of the energy and eliminate 80% of the heat
losses (Hazucha, 2009).
Improve the existing ‘core and shell’ – Buildings built before 1990 at UCC
were not insulated. Previous research (Yin et al. 2009) on, for example, the CEE
building illustrates that improved roof and wall insulation leads to savings of 33% and
19% respectively, that the replacement of windows leads to 11% savings, and that the
overall reduction eventually reaches almost 65%.
Section I, IV, V, VI – Initial Survey, Feasible Options, Options
Evaluation and Decision Making: Once possible renovation solutions have been
proposed, feasible options have to be generated and evaluated, and finally a
renovation decision is made. In the Boole building, for example, the existing T8
fluorescent tubes were replaced by T5 tubes after discussions. The renovation work
had two options and four contractors (Table 3). The evaluation and comparison of
these two options and four contractors were carried out. Table 3 summarizes this
comparison (O’ Regan, 2010), covering four contractors with eight options and
comparing renovation costs (fitting cost, labor cost, metering cost,
commissioning/certification cost) and payback time. Finally, option 2 of contractor 2
was selected because of its low cost and satisfactory payback time.

Table 3. The summary of options comparison
UCC Boole Lighting Upgrade       Contractor 1      Contractor 2      Contractor 3      Contractor 4
                                 O1      O2        O1      O2        O1      O2        O1      O2
Fitting Cost (€)                 26,170  19,620    28,900  21,840    30,080  19,980    27,030  20,705
Labor Cost (€)                   7,620   7,620     8,000   8,000     11,600  11,600    10,885  10,885
Temporary Metering (€)           350     350       300     300       -       -         160     160
Commissioning/Certification (€)  -       -         300     300       -       -         -       -
Total Costs (€)                  34,140  27,590    37,500  30,440    41,680  31,580    38,075  31,750
Payback time (years)             4.68    2.84      5.14    3.14      5.71    3.25      5.22    3.27
O1 (Option 1): 5 Foot, T5/35FSE complete with T5 HE 35/840 Cool White lamps, WEEE lamp charge.
O2 (Option 2): 5 Foot, EVG-230-035-T5A EVG ballast, 5 ft 35 Watt adapter, complete with F35W/840 Cool White lamps.
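The option evaluation in Table 3 reduces to comparing each option’s total cost with the annual savings it would deliver. The following sketch computes simple (undiscounted) payback times and ranks the options; the total costs are taken from Table 3, while the annual savings are back-calculated from the reported payback times and are used here only for illustration.

# Total costs (EUR) from Table 3; annual_savings are illustrative values
# back-calculated from the reported payback times (contractors 1 and 2 only).
options = {
    "Contractor 1 / O1": {"total_cost": 34140, "annual_savings": 7300},
    "Contractor 1 / O2": {"total_cost": 27590, "annual_savings": 9700},
    "Contractor 2 / O1": {"total_cost": 37500, "annual_savings": 7300},
    "Contractor 2 / O2": {"total_cost": 30440, "annual_savings": 9700},
}

def payback_years(total_cost, annual_savings):
    """Simple (undiscounted) payback time in years."""
    return total_cost / annual_savings

ranked = sorted(options.items(),
                key=lambda kv: payback_years(kv[1]["total_cost"], kv[1]["annual_savings"]))
for name, o in ranked:
    print(f"{name}: {payback_years(o['total_cost'], o['annual_savings']):.2f} years")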
CONCLUSION

This research developed a DSF to support building performance analysis and
decision support for renovation strategies; this paper focused on the maintenance
perspective. A methodology for the analysis of maintenance data was introduced. The
DSF was applied to six distinct buildings on UCC’s campus in order to evaluate
multiple types of buildings, both old and modern, with different areas and
HVAC systems. A DSM with six sections was developed, comprising the initial survey of
the area conditions, analysis of building performance, consideration of all possible
solutions, specification of feasible options, evaluation of the options and selection of
the best option. A DST based on the DSM was developed as a computerized semi-
automatic tool. Through application of the DST, five suggestions were proposed:
optimize the maintenance schedule of plumbing systems; improve the settings of the
existing BMS; upgrade lighting systems; improve existing mechanical ventilation
systems; and improve the existing ‘core and shell’. A Web application was developed
to display the frequency of faults and the density of maintenance of each building.
Future research work will develop a decision support system based on the DSF that
will help engineers select a building’s components/systems for renovation strategies.

REFERENCES
Bohance, M., 2003. "What is Decision Support? "
Energy Consumption Guide 19, “Energy Use in Offices”,
< http://www.carbontrust.co.uk/Publications/pages/publicationdetail.aspx?id=ECG019>
Gilfillan, L., 1997. “Project Management and Evaluation.”
<http://lga-inc.com/ut/syllabus/Session7and8/index.htm>.
Han, J., Kamber, M., 2001. Data Mining: Concepts and Techniques, Morgan Kaufman.
Hazucha, J., 2009, “Renovation of Social Buildings - Guidelines for complex renovations.”
IMOS, Inc., 1997. “Decision Support Primer.” <http://www.imos.com/whatis.htm.>
Morrison, J.G., Moore, R.A., 1999. “Design Evaluation and Technology Transition: Moving Ideas from the
drawing board to the Fleet.” <http://wwwtadmus.spawar.navy.mil/Slides/JGMC2Conf/index.htm>.
O' Regan, K., Sweeney S. M., 2010, “Upgrade of Boole Library Lighting - Proposed Luminaries.” UCC Report.
SRI, 2001. Maths & Decision Systems Group, Silsoe Research Institute,
<http://www.sri.bbsrc.ac.uk/scigrps/sg9.htm>.
Thomas, P. C., Natarajan, B., Anand, S., 1991, “Energy conservation guidelines for government office buildings
in New Delhi.” Energy and Buildings,Vol. 16 (1–2), pp.617–623.
Vreenegoor, R.C.P., de Vries, B., Hensen, J.L.M. (2008). “Energy saving renovation, analysis of critical factors at
building level.” Proc. 5th International Conference on Urban Regeneration and Sustainability.
Skiathos: WIT Press. 653-663.
Wu, K. T., Lam, K. K., 2005, “Office lighting retrofit using T5 fluorescent lamps and electronic ballasts.” The
Hong Kong Institution of Engineers Transactions, Vol. 10 (1).
Yin, H., Otreba, M., Allan,L., Menzel, K., 2009, “A Concept for IT-Supported Carbon Neutral Renovation.”
Dikbas A., Ergen E. & Giritli H. (eds.): “Sustainability”, Proceedings of 26th W78 Conference on
Information Technology in Construction, ISBN 978-0-415-56744-2 (hbk), ISBN 978-0-203-85978-0
(eBook) pp.611 – 619, ITU, Istanbul, Turkey.
Yin, H., Menzel, K., 2011, “Decision Support Model for Building Renovation Strategies.” International
Conference on Building Science and Engineering (ICBSE) 2011, Venice, Italy, April 2011.
Environmental Performance Analysis of a Single Family House Using BIM

A. A. Raheem1, R. R. A. Issa2 and S. Olbina3


1
Ph.D. student, Rinker School of Building Construction, University of Florida,
Gainesville, FL 32611; Phone (352) 273 1178; adeebakas@ufl.edu
2
Holland Professor, Rinker School of Building Construction, University of Florida,
Gainesville, FL, 32611; Phone (352) 273 1152; raymond-issa@ufl.edu
3
Assistant Professor, Rinker School of Building Construction, University of Florida,
Gainesville, FL, 32611; Phone (352) 273 1166; solbina@ufl.edu

ABSTRACT
Energy consumption and greenhouse gas emissions are major indicators of the
environmental performance of any building. In recent years, the need for
much-improved energy-efficient performance in the housing sector has
grown substantially due to serious energy concerns in the United States. According
to the World Business Council for Sustainable Development, energy use for
buildings in the United States is appreciably higher than in other regions, and this
is likely to continue. The lack of a structured approach to the planned use of
sustainability features, such as post-occupancy evaluation, benchmarking against
similar projects, or setting performance targets, has made the situation grimmer.
For the past 50 years, a wide variety of building energy simulation programs have
been developed, improved and are in use throughout the building energy
community. With the advancement in Building Information Modeling (BIM) and
simulation technology, the environmental performance of buildings can be
assessed before their actual construction. The primary goal of this research was to
analyze annual energy consumption and CO2 emissions in a single family house in
Florida occupied by a defined type of household using BIM. The secondary goal
was to compare the results with the U.S. Energy Information Administration (EIA)
data published in the Building Energy Data book (DOE 2009) for validation
purposes and to establish the importance of BIM and its use in simulation. This
research has shown that BIM when used in conjunction with computer-aided
building simulation is a very valuable tool in the study of energy performance,
design and operation of buildings. Using energy simulation technology at the
design stage of dwellings facilitates the sustainability decision making process.
INTRODUCTION
The environmental performance of buildings depends on many factors. Energy
consumption and CO2 emissions are the major concerns in recent times, especially with
the spread of sustainable design and green building concepts throughout the world
(Figure 1). Under the 1987 Montreal Protocol, participating governments agreed to
phase out chemicals used as refrigerants that have the potential to destroy
stratospheric ozone. It was therefore considered desirable to reduce energy
consumption and decrease the rate of depletion of world energy reserves and
pollution of the environment (Omer 2009).

Figure 1: Major factors affecting environmental performance of a building


According to the U.S. Department of Energy (DOE), the residential sector
consumed 10.8 Quads of delivered energy and this does not include energy lost
during production, transmission and distribution to the consumers. Moreover in
residential buildings alone, 1192 million metric tons of CO2 emissions were
recorded in 2006. Between 1990 and 2008, total residential CO2 emissions
increased by 27.5%, while the population increased by only 22%. U.S. buildings’
emissions are also approximately equal to the combined CO2 emissions of Japan,
France and the United Kingdom (EIA 2008). In 2007, approximately 1,219,000 new
single-family housing units were built in the United States, and per-household energy
expenditure increased by about 12% over the 2005 national average ($1,873). This
trend indicates that construction of single-family units is on the rise and that in the
future it will have a huge impact on overall energy consumption in the U.S. Such
ever-increasing demand could place significant strain on the current energy
infrastructure and potentially damage world environmental health through CO,
CO2, SO2, and NOx effluent gas emissions and global warming.
RESIDENTIAL ENERGY CONSUMPTION IN FLORIDA

According to the U.S. Energy Information Administration, “Florida’s per capita
residential electricity demand is among the highest in the country, due in part to
high air-conditioning use during the hot summer months and the widespread use of
electricity for home heating during the winter months”. In 2006, Florida residential
energy consumption was 767.6 trillion Btu, and it had almost doubled by the end of
2007 (Figure 2A). In Florida, due to the high temperatures and humid climate,
electrical energy consumption is relatively high. As described by the FPSC, “the
residential customers’ electrical usage varies more throughout the day than
commercial usage and shows more pronounced peaks in the early evening in the
summer and in the mid-morning and late evening in the winter” (Figure 2B).
USE OF BIM IN ENVIRONMENTAL PERFORMANCE ANALYSIS

Due to technological developments and the availability of modern construction
materials, the limitations on the architect’s imagination have been minimized (Laptali
et al. 1997). As performance issues like comfort and energy become increasingly
important, the capabilities of building simulation are increasingly in demand to
provide information for decision-making during the building design process. This
need has started the development of design advice tools whose common
objective is to facilitate the use of building simulation in the design process
(Petersen et al. 2010).
Figure 2. Florida Electricity Consumption: A) Per capita residential energy consumption (trillion Btu), 2006 and 2007 (EIA 2008); B) Daily load shapes for summer and winter as a percentage of daily peak (FPSC 2009)

The energy analysis is performed using BIM simulation analysis techniques and
the results are then compared to EIA (2005) data for validation purposes. Based on
these results, recommendation metrics have been developed for constructing more
energy-efficient houses in Florida. The secondary goal of this paper is to show how
the availability of user-friendly energy analysis software can help construction
professionals and designers make decisions at the early stages of their designs.
BASE MODELING DATA

The research started with the modeling of a typical single family house in Florida.
The general data was obtained from two sources:
1. Energy Information Administration (EIA) – for general characteristics of single-family
houses in the U.S.
2. Florida Energy Efficiency Code for Building Construction (FEECBC) – for
insulation and equipment efficiency values in Florida
This data was studied to determine the typical number of rooms, glass-to-floor ratio
and type of heating and cooling equipment found in a single-family house in Florida,
and the appropriate values were then selected from the FEECBC data. Six houses
with approximately similar square footage and geographical location to the intended
base-model house were selected from the FEECBC database, and their values were
averaged for the following components: R-value for internal and external walls, roof
and floor; U-value for windows; SHGC value for windows; COP values for heating
and cooling equipment; energy efficiency factor (EF) for the hot water system; and
glass-to-floor ratio. These values were then used as input data for simulation purposes
in the BIM software.
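A minimal sketch of this averaging step is shown below; the three sample houses and their property values are placeholders rather than actual FEECBC database entries.

# Averaging thermal/equipment properties over a sample of comparable houses
# to obtain base-model input values (sample numbers are placeholders).
sample_houses = [
    {"wall_R": 13.0, "roof_R": 30.0, "window_U": 0.65, "window_SHGC": 0.40, "cooling_COP": 3.2},
    {"wall_R": 11.0, "roof_R": 28.0, "window_U": 0.70, "window_SHGC": 0.38, "cooling_COP": 3.0},
    {"wall_R": 15.0, "roof_R": 32.0, "window_U": 0.60, "window_SHGC": 0.42, "cooling_COP": 3.4},
]

base_model_inputs = {
    prop: round(sum(h[prop] for h in sample_houses) / len(sample_houses), 2)
    for prop in sample_houses[0]
}
print(base_model_inputs)
# e.g. {'wall_R': 13.0, 'roof_R': 30.0, 'window_U': 0.65, 'window_SHGC': 0.4, 'cooling_COP': 3.2}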
SIMULATION INPUT DATA

The study was intended to perform a detailed energy analysis for a single-family
house in a hot and humid climate; therefore, Gainesville was selected as the
geographical location in Florida. Based on the information from the EIA and the
FEECBC, the base-model house was developed in the Design Builder software with
the demographics shown in Table 1. Design Builder was used to model the
house based on brick/block construction. The design was kept simple by creating
an appropriate number of zones (areas with similar functions were assigned to a
single zone) to save simulation execution time.
Table 1. Data Input for Location and Climate
Location                     Gainesville FL, USA
Source                       ASHRAE/TMY3
WMO                          722146
General Climatic Region      4A
Latitude                     29.70
Longitude                    -82.28
Elevation (m)                40.0
Standard Pressure (kPa)      100.9
Time Zone (Daylight Saving)  (GMT -05:00) Eastern Time
Energy Codes                 Legislative Region: Florida
ENVIRONMENTAL PERFORMANCE ANALYSIS


A. Annual Energy Consumption
i. Internal heat gains
Internal heat gains include gains from equipment, lighting, occupancy, and HVAC.
Results from the analysis showed that the major part of the internal gains was
associated with sensible cooling, with an annual value of 10,083 kWh for this model.
Consumption due to lighting was 6,453 kWh (Figure 3). The main contributor to the
large cooling value is solar gain through the exterior windows, so this variable is very
important in the design of the house. The model house is designed based on an
occupant density of 0.0018 people/ft2, which is not a large occupancy value; therefore
the occupants’ contribution is only about 113 kWh.

Figure 3. Internal heat gains (annual and monthly)
ii. Envelope heat gains and losses
Heat gains from the envelope include gains to the space from the surface elements
(walls, floors, ceilings, etc.); negative values indicate heat loss from the space. A
large amount of energy is lost from the ground floor through conduction. The
major heat losses are from the floor and external infiltration, which account for
85% of the total heat losses in the house. The major heat gain components in the
envelope of the house are the ceilings and walls, which account for more than 73% of
the total heat gains in the house (Figure 4).

Figure 4. Envelope heat gains and losses (heat balance in kWh; annual and monthly)

B. Annual Fuel Consumption
i. Fuel breakdown
The fuel consumption breakdown (Figure 5) from the model indicates that
electricity consumption for cooling constitutes the largest proportion. It reaches its
maximum in July and August and varies throughout the year with the weather. The
second biggest consumer of electricity is lighting (6,453 kWh), whose consumption is
almost uniform throughout the year.

Figure 5. Fuel Consumption Breakdown (annual and monthly)
ii. Total fuel consumption
Electricity demand from June to August increases due to the hot and humid climate
of Gainesville, FL. Gas is used for space heating only at the start and end of the year,
as it is a cheap heating fuel; for the rest of the year it is used only for heating water
and cooking. The annual predicted electricity and gas consumption is 15,053 kWh
and 854 kWh respectively. Electricity accounted for 94% of the overall annual energy
consumption in the model house.

Figure 6. Total fuel consumption (annual and monthly)
C. Annual CO2 production
The CO2 emissions produced in a house vary as the use of electricity varies throughout
the year. The total value of CO2 emissions for a typical house in Gainesville, FL is
10,478 kg, which is 23 times the maximum allowance of CO2 emissions per person
per year (World Resources Institute 2010).
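As a rough cross-check, annual CO2 can be estimated by multiplying each fuel’s consumption by an emission factor, as in the following sketch; the emission factors used here are generic illustrative assumptions and are not the location-specific factors applied by Design Builder.

# Rough CO2 estimate from annual fuel consumption (emission factors are
# illustrative assumptions; Design Builder applies its own regional factors).
electricity_kwh = 15053      # annual electricity use from the model
gas_kwh = 854                # annual gas use from the model

EMISSION_FACTORS = {         # kg CO2 per kWh (assumed, for illustration only)
    "electricity": 0.68,
    "gas": 0.19,
}

co2_kg = electricity_kwh * EMISSION_FACTORS["electricity"] + gas_kwh * EMISSION_FACTORS["gas"]
print(f"Estimated annual CO2: {co2_kg:,.0f} kg")   # on the order of 10,000 kg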

Figure 7. CO2 emissions (annual and monthly)


VALIDITY OF THE DESIGN BUILDER RESULTS
The results from the Design Builder software were compared to the EIA data
published in the Building Energy Data Book (U.S. DOE 2009). The EIA data was
selected for comparison because it is the most authoritative U.S. energy consumption
data. The only problem is the non-availability of data specific to the selected location
of the base-model house: EIA data is available only for broadly divided regions, and
the values given are therefore averages. The comparison shows results approximately
similar to the EIA data, and the differences are due to the following reasons: 1) the
EIA values are average values for the whole Southern region, whereas the data
generated from the model is particular to Gainesville, Florida; 2) Florida falls in the
South Atlantic region, and some of the other Southern regions have a climate different
from that of Gainesville, FL, which affects the annual duration of cooling and heating.
Gainesville has a hot and humid climate, so cooling expenses exceed heating
expenses. Moreover, cooling is required for more than 9 months of the year, as
compared to heating, to create comfortable indoor conditions, which again is a big
factor affecting overall expenses. Figure 8 shows a comparison of delivered end-use
energy values from the analysis and the EIA data.
The lighting and cooling values were compared, but not the heating values, because
the EIA data covers the whole South region, which consists of three subregions,
namely South Atlantic, East South Central and West South Central. Florida falls in
the South Atlantic division, in which cooling degree-days (a measure of how
much space cooling is needed in summer) averaged 2,071 per household,
compared with a U.S. average of 1,407. The observed difference in the results is
due to the EIA values being averaged over the whole South region. Therefore another
source (Terrapass) was used to obtain a more accurate dollar amount for the energy
expenses per year, since Gainesville utilities have their own rates for electricity and
gas, and these rates differ by location. The comparison between the model and the
EIA data (2006) shows a difference of $230, whereas the comparison between
Terrapass and Design Builder shows a difference of just 1.83% (Figure 9A). Figure
9B shows the comparison of results for annual CO2 emissions. The data show very
small differences, which indicates that the results obtained from the analysis with
Design Builder are accurate.

Figure 8. Comparison of model results and EIA data (energy consumption in million Btu for lighting & appliances and space cooling, Design Builder vs. EIA)


Figure 9. Comparisons of energy expenditure and CO2 emissions: A) Annual energy expenses ($): Design Builder 2065, EIA 1835, Terrapass 2100; B) CO2 emissions results

CONCLUSIONS
The ever-increasing U.S. energy demand can only be curbed with the use of
modern simulation technology to address the energy-related issues in the
housing sector and minimize consumption. This will also help in controlling
CO2 emissions, another major environmental issue. The early stages of
building design include a number of decisions that impact the performance of
the building throughout the rest of the process; it is therefore important that
designers are aware of the consequences of these design decisions. The use of BIM
and simulation programs can greatly contribute toward more feasible design
decisions when used during the building design process to predict the performance
of various design alternatives with respect to parameters such as energy, CO2
emissions and indoor air quality.

REFERENCES
Energy Information Administration (EIA), (2008). U.S. Carbon Dioxide Emissions
from Energy Sources 2007 Flash Estimate, DOE U.S.
Florida Public Service Commission (2009). Annual report on Activities Pursuant
to the Florida Energy Efficiency and Conservation Act
Laptali, E., Bouchlaghem, N., and Wild, S. (1997). “Planning and estimating in
practice and the use of integrated computer models,” Automation in
Construction, 7, 71-76.
Omer, A. (2009). “Energy use and environmental impacts: A general review”,
Journal of Renewable and Sustainable Energy, 1, 053101-1.
Petersen, S., and Svendsen, S. (2010). “Method and simulation program informed
decisions in the early stages of building design”, Journal of Energy and
Building, Elsevier Science Ltd, Article in press.
World Resources Institute, (2010). WRI summary of the carbon limits and energy
for America’s renewal act, Washington DC.
U.S. Dept. Of Energy, 2009 Building Energy Data Book.
Enhancing Student Learning in Structures Courses with Building Information
Modeling

Wasim Barham1, Pavan Meadati2 and Javier Irizarry3

1
Assistant Professor, Civil and Construction Engineering, Southern Polytechnic State
University, 1100 South Marietta Parkway, Marietta, GA, 30060; PH (678) 915-3946;
FAX (678) 915-5527; email: wbarham@spsu.edu
2
Assistant Professor, Construction Management, Southern Polytechnic State
University, 1100 South Marietta Parkway, Marietta, GA, 30060; PH (678) 915-3715;
FAX (678) 915-4966; email: pmeadati@spsu.edu
3
Assistant Professor, Building Construction Program, Georgia Institute of
Technology, 280 Ferst Drive,1st Floor, Atlanta, GA, 30332; PH (404) 385-7609; FAX
(404) 894-1641; email: Javier.irizarrry@coa.gatech.edu

ABSTRACT

This paper presents the findings of a study conducted to evaluate the effectiveness of
Building Information Modeling (BIM) in enhancing student learning in structural
concrete design courses. In reinforced concrete design, three-dimensional (3D)
visualization of concrete members can advance students’ understanding of
reinforcement details and rebar placement. BIM facilitates the use of 3D views in
teaching such courses and provides opportunities to address the challenges faced by
the students during the visualization process. A study involving the use of BIM 3D
views was conducted in two courses over a one-semester period. In the study,
improvements in student performance were observed in several of the problems
presented when 3D models were used. BIM has the potential to provide faculty with a
tool that can improve the teaching of structural design courses in a more visual and
interactive way and greatly enhance the educational experience of the students.

Keywords: BIM, 3D, Visualization, Students

INTRODUCTION

Two-dimensional (2D) drawings are most widely used as pedagogical tools for
teaching courses to Architecture, Engineering and Construction (AEC) students. The
interpretation of 2D drawings by students varies based on their educational
background, previous practical experience, and visualization capabilities among other
factors. Students are required to develop three-dimensional (3D) models mentally by
visualizing the different components of the project. Students with little or no practical
experience often face challenges and spend more time developing 3D visual
models. In reinforced concrete design, 3D visualization of concrete members can
advance students’ understanding of reinforcement details and rebar placement.
Building Information Modeling (BIM) facilitates the use of 3D models in teaching
courses and provides opportunities to address the challenges faced by the students
during the visualization process. BIM is a process that provides a framework to
develop data-rich product models. In this process, real-world elements of a facility
such as beams, columns, and slabs are represented as objects in a three dimensional
(3D) digital model. In addition to modeling, it provides a framework that fosters the
integration of information from conception to decommissioning of the constructed
facility (Goedert & Meadati, 2008). This paper presents the findings of a study
conducted to evaluate the effectiveness of BIM in enhancing student learning in
structural concrete design courses in the department of civil and construction
engineering and the department of construction management at Southern Polytechnic
State University in the United States. The BIM 3D views used for the study were
developed using Autodesk’s Revit Structure. The following section discusses the
usefulness of BIM in teaching environments.

BIM AS A TEACHING TOOL

Based on learning styles students can be identified as auditory, visual, and kinesthetic
learners. Auditory, visual, and kinesthetic learners learn through hearing, seeing, and
doing respectively (Marvin, 1998). Teaching AEC courses by addressing students’
different learning styles is a challenging task. Traditional lecture is one of the styles
which is widely used for teaching AEC courses. Sometimes, the lecture format style
is complimented by including construction site visits. This teaching style provides an
auditory and visual learning environment. However, inclusion of site visits within the
course schedule is not always feasible due to reasons such as unavailability of
construction sites meeting the class needs, class schedule conflicts, and safety issues
(Haque et al. 2005). Additionally, lack of laboratory and training facilities are
impeding the creation of kinesthetic learning environments. Sometimes the traditional
lecture teaching style also falls short to serve as an effective communication tool for
transferring knowledge to students. Due to the lack of a conducive learning
environment, which stimulates auditory, visual, and tactile senses, currently AEC
students are unable to gain the required skills to solve real world problems. A user-
friendly interactive knowledge repository that provides a conducive learning
environment is needed to enhance students’ learning capabilities. BIM facilitates
development of such knowledge repositories and fosters conducive learning
environments. BIM serves as an excellent tool for data management. It facilitates easy and fast access, through the 3D model, to information stored in a single centralized database or in different databases held at various locations. BIM characteristics such as easy access to information, visualization, and simulation capabilities allow auditory, visual, and kinesthetic learning environments to emerge. Anytime, interactive access to the repository through a 3D model creates a learning environment that extends beyond time and space boundaries and allows students to learn at
their own pace. These environments allow students to discover strengths and
weaknesses of their learning practices and facilitate self-improvement. As shown in
Figure 1, BIM has the potential to greatly enhance the educational experience of AEC
students in acquiring skills related to different areas and will provide faculty with a
tool that can improve teaching different courses in a more visual and interactive way.
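As an illustration of the kind of programmatic access to model data that makes BIM useful as a knowledge repository, the following minimal Python sketch lists structural elements from an IFC export of a product model using the open-source ifcopenshell library. The file name is hypothetical, and the IFC route is shown only as one possible option; the study itself used Autodesk Revit Structure views rather than this workflow.

```python
# Minimal sketch, assuming an IFC export named "course_model.ifc" exists.
# ifcopenshell is an open-source IFC toolkit; it is not the Revit-based
# workflow used in this study and serves here only to illustrate querying
# a 3D product model as a data repository.
import ifcopenshell

model = ifcopenshell.open("course_model.ifc")

# List beams, columns, and slabs with their identifiers and names, the same
# real-world elements the paper describes as objects in the 3D model.
for ifc_class in ("IfcBeam", "IfcColumn", "IfcSlab"):
    for element in model.by_type(ifc_class):
        print(ifc_class, element.GlobalId, element.Name)
```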

[Figure 1 depicts BIM at the center of the AEC course areas it can support: conceptual design, construction means and methods, structural analysis, estimation, and scheduling.]

Figure 1. Applications of BIM in teaching AEC courses

METHODOLOGY - CLASS DATA COLLECTION INSTRUMENT DESIGN

The data collected for this study was broad in scope and was gathered using instruments that included a survey designed to be consistent with the objectives of this research. Students from two classes were involved in this action research project. The first course was Steel and Concrete Design, offered by the Civil and Construction Engineering Department. The second was the Applied Structures I course, offered by the Construction Management (CM) Department. Both
courses are senior level courses and part of the undergraduate curriculum at Southern
Polytechnic State University.

Since the main objective of this study is to measure whether or not BIM 3D views can
enhance student learning in structures courses, BIM has been used as a teaching tool
in both courses during the semester and many concrete structural elements were
presented to students using BIM 3D views in addition to the traditional 2D approach
(see Figures 2, 3, and 4). The questionnaire was given to students at the
end of the semester after they had been exposed to different BIM models. The
questionnaire was designed in such a way that the questions, timing, the scoring
procedures and interpretations were administered and scored in a predetermined,
standard manner so that they were valid and relevant. The questionnaire was
composed of three sections - namely: demographic questions, qualitative part, and
quantitative test. The goal of the demographic questions was to profile students and
their background. The qualitative part consisted of 10 questions with a 5-level Likert
scale (1=Strongly Agree, 2=Agree, 3=Neutral, 4=Disagree, and 5=Strongly Disagree)
focusing on students’ opinion about BIM and whether or not they think BIM helped
them to gain better understanding of the taught material and improve their
visualization capabilities.
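As a minimal illustration of how such Likert-scale responses can be summarized, the sketch below computes the average agreement rating per statement. The response values shown are hypothetical and are not the data collected in this study.

```python
# Hypothetical Likert responses (1=Strongly Agree ... 5=Strongly Disagree);
# the ratings below are illustrative only, not the study data.
from statistics import mean

responses = {
    "BIM 3D models helped me to visualize beam reinforcement": [1, 2, 2, 1, 3],
    "I fully understand beam reinforcement and steel placement using 2D cross-sections": [2, 2, 3, 1, 2],
}

for statement, ratings in responses.items():
    # On this scale an average below 3 indicates agreement; lower means stronger agreement.
    print(f"{statement}: average rating = {mean(ratings):.2f} (n={len(ratings)})")
```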

The quantitative section was composed of problems aimed at comparing students’ interpretation of BIM 3D views versus the traditional 2D views that are widely used in many engineering applications. The first problem is a 2D example of a concrete
member followed by three multiple-choice questions about the steel detailing in that
member. The second problem is similar to the first problem but we used a 3D BIM
view to represent the concrete member and its steel reinforcement. The problems in
the study include three reinforced concrete members: a simply supported beam (Figure 2), a one-way slab (Figure 3), and an isolated footing and column (Figure 4). The students in both courses completed the questionnaire for the simply supported beam at the end of the semester. The one-way slab and the isolated footing and column questionnaires were completed by CM students only. The students in
both sections were given enough time to complete the questionnaire. The content of
the questionnaire was evaluated by making sure that its items agreed with the objectives of this study and the learning outcomes of both courses. The qualitative and quantitative sections were used to study the actual impact of BIM on improving students’ learning and their perceptions of visualization using 2D views and BIM 3D views.

Figure 2. 2D and 3D Views of the Simply Supported Beam

[The figure callouts label the main reinforcement and shrinkage reinforcement in each slab view, with bar sizes and spacings such as #5@16", #4@16", #4@8", #4@9", #4@12", and #4@14".]

Figure 3. 2D and 3D Views of the One-Way Slab



#3 Ties at 16"

8#10 Bars

8#11
Ties #4@12"

8#8 (both directions)

8#10
8#10

8#10 Bars

#3 ties
at 16"oc

Figure 4. 2D and 3D View of the Isolated Footing and Column

RESULTS

The data collected in the CE and the CM courses was analyzed and the results are
presented next. Demographic information about the study population is presented first
followed by an analysis of student performance when responding to questions using
2D and 3D views of structural elements. Benefits of using BIM for visualization are
then discussed based on the analysis of student performance in the quantitative test
and finally, the issues found with performance and perceptions about visualization
using 2D are discussed.

As part of this study, demographic data was collected. Tables 1 and 2 display this
data.

Table 1. Enrollment per course

Course       Frequency   Percent
CM course    21          52.5%
CE course    19          47.5%
Total        40          100%

Table 2. Age of participants

Age (years)   Frequency   Percent
18 to 24      21          52.5%
25 to 31      13          32.5%
32 to 38      6           15.0%
Total         40          100%

There were a total of 40 students enrolled in the two courses included in the study.
The majority of the students were seniors at the time the study took place (85%,
n=40). A total of 52.5% (n=40) of the students were between the ages of 18 to 24
years and the balance was over 24 years of age. The following sections discuss the
results obtained by analyzing the respondents’ performance on the questions asked
and their responses to the survey questions, which required them to rate their level of
agreement with the statements presented.

3D enhanced learning through visualization:

Analysis of student performance on the problems presented provided some interesting results. Table 3 shows performance results on the beam, slab, and foundation and column problems for the 2D and 3D cases.

Table 3. Student performance improvement on 2D and 3D problems

Course              Problem Type and Number         % Correct Responses   % Incorrect Responses
CE Course (n=19)    2D Beam #2                      78.90%                21.10%
                    3D Beam #2                      89.50%                10.50%
CM Course (n=20)    2D Beam #1                      81.00%                19.00%
                    2D Slab #2                      95.20%                4.80%
                    2D Foundation and Column #2     95.20%                4.80%
                    3D Beam #1                      85.70%                14.30%
                    3D Slab #2                      100.00%               0%
                    3D Foundation and Column #2     100.00%               0%

It was observed that performance improved with 10.6% more correct responses in the
CE class when students had a 3D representation of the beam design. In the CM class,
4.7% more correct responses were observed when students used the 3D representation
of the beam problem. A smaller increase of 4.8% was observed in the CM class when
students used a 3D representation of the slab design presented in the problem. A
similar increase of 4.8% was also observed in the CM class when students used the
3D representation for the foundation and column problem. These results show that
students performed better when a 3D graphic representation of the problem was
provided, particularly in the CM class. More data will allow a more in-depth analysis
of the factors that may influence the difference in performance observed between the
CE and CM students.
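For reference, the percentage-point improvements quoted above follow directly from the correct-response rates in Table 3; the short sketch below reproduces that arithmetic.

```python
# Correct-response rates (percent) taken from Table 3; the improvement is the
# difference between the 3D and 2D versions of the same problem.
table3 = {
    ("CE", "Beam #2"): {"2D": 78.9, "3D": 89.5},
    ("CM", "Beam #1"): {"2D": 81.0, "3D": 85.7},
    ("CM", "Slab #2"): {"2D": 95.2, "3D": 100.0},
    ("CM", "Foundation and Column #2"): {"2D": 95.2, "3D": 100.0},
}

for (course, problem), rates in table3.items():
    improvement = rates["3D"] - rates["2D"]
    print(f"{course} {problem}: +{improvement:.1f} percentage points with the 3D view")
```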

Benefit of BIM for visualization of reinforcement in concrete elements:

Analysis of the responses to the survey questions provided some insight into possible
reasons for the observed increases in performance in reinforcement related problems.
These problems included the 2D and 3D Beam Problem #1, the 2D and 3D Slab
Problem #2, and the 2D and 3D Foundation and Column Problem #3. It was
observed that on average, students expressed agreement with the statement “BIM 3D
models helped me to visualize beam reinforcement” (average rating of 1.9, n=20, for the CM class and 2.71, n=19, for the CE class).

Issues with performance and perceptions about visualization using 2D:

The qualitative and quantitative data were analyzed together to study the relationship between students’ performance and their perceptions of visualization using 2D and BIM 3D views. For this purpose, students’ responses to the statement “I fully understand beam reinforcement and steel placement using 2D cross-sections” were compared with their performance on the 2D beam problems. When students’ performance on the 2D problems was reviewed, it was observed that 19% of students in the CM class responded incorrectly to 2D Beam Problem #1 and 2D Beam Problem #2 (Table 4), yet on average they expressed agreement (average rating of 2.10, n=20) with the statement “I fully understand beam reinforcement and steel placement using 2D cross-sections.” If this perception is accurate,
students should have answered these questions correctly. The results show that
students may have an inaccurate assessment of their visualization skills and may
underestimate the benefits of using 3D BIM for enhancing their visualization of
reinforcement in structural concrete elements.

Table 4. CM course student performance on 2D beam problems


Problem % Correct Responses (n=19) % Incorrect Responses (n=19)
2D Beam #1 81.0% 19.0%
2D Beam #2 81.0% 19.0%
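A minimal sketch of this kind of perception-versus-performance cross-check is shown below; the per-student records are hypothetical and only illustrate how agreement ratings can be tabulated against correctness on the 2D problems.

```python
# Hypothetical per-student records: Likert rating for the 2D-understanding
# statement (1=Strongly Agree ... 5=Strongly Disagree) and whether the 2D beam
# answer was correct. These records are illustrative, not the study data.
students = [
    {"rating": 2, "correct_2d_beam": True},
    {"rating": 1, "correct_2d_beam": False},  # agrees, yet answered incorrectly
    {"rating": 2, "correct_2d_beam": True},
    {"rating": 3, "correct_2d_beam": True},
    {"rating": 2, "correct_2d_beam": False},  # agrees, yet answered incorrectly
]

agree = [s for s in students if s["rating"] <= 2]
mismatch = [s for s in agree if not s["correct_2d_beam"]]
print(f"{len(mismatch)} of {len(agree)} students who agreed with the statement "
      f"answered the 2D beam problem incorrectly")
```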

Student perceptions regarding visualization using 2D graphics in the CE course were consistent with their performance on the 2D problems, with only a minimal number of students answering the 2D problems incorrectly, as shown in Table 5. Students in
the CE course expressed on average agreement (average rating of 2.00, n=19) with
the statement “I fully understand beam reinforcement and steel placement using 2D
cross-sections.”

Table 5. CE course student performance on 2D beam problems


Problem % Correct Responses (n=19) % Incorrect Responses (n=19)
2D Beam #1 94.0% 5.1%
2D Beam #3 100% 0%

CONCLUSIONS AND FUTURE RESEARCH

Data was collected in a CE and a CM course to explore the benefits of using 3D BIM
models for assisting in visualization in concrete structures courses. During the Fall
2010 semester, an in-class exercise was conducted in the two courses to measure
student performance in solving problems using 2D and 3D models of the structural
members used in the problems. In addition, students were presented with several
statements regarding their perceptions about the value of 2D and 3D models for
visualization of the concepts covered in the problems. Students were required to
express their level of agreement with the statements on a 5-level Likert scale.
The data collected was analyzed to determine student performance when solving the
presented problems when 2D and 3D models were used. Improvements in student
performance were observed in several of the problems presented when 3D models
were used. An increase between 4.7% and 10.1% in the number of correct answers
was observed with the CM students and 10% with CE students.

The study also observed differences in performance between the two groups of students for the beam problem. The results showed that the BIM 3D views seem to benefit the CM students more than the CE students who participated in the study. These
differences may be due to a number of factors that were not included in this study
such as work experience, overall academic performance (i.e. GPA), and academic
maturity level (academic rank, such as sophomores vs. seniors) when they take the
course. Future studies will consider these factors as well as the impact of using color
in the models and the view angle used in the images presented to students.

REFERENCES

Goedert, J. D., and Meadati, P. (2008). “Integration of construction process documentation into Building Information Modeling.” Journal of Construction Engineering and Management, 134(7), 509-516.

Haque, M.E. (2007). “n-D Virtual Environment in Construction Education.” The 2nd
International conference on Virtual Learning, ICVL 2007, Retrieved
November 12, 2010 from
http://www.cniv.ro/2007/disc2/icvl/documente/pdf/met/2.pdf.

Marvin, B. R. (1998). “Different learning styles: visual vs. non-visual learners mean
raw scores in the vocabulary, comprehension, mathematical computation, and
mathematical concepts.” 1998, Retrieved November 11, 2008 from
http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000
019b/80/17/95/48.pdf

Meadati, P., and Irizarry, J. (2010). “BIM – A knowledge Repository.” Proceedings of the 46th Annual International Conference of the Associated Schools of Construction, Retrieved November 12, 2010 from http://ascpro0.ascweb.org/archives/cd/2010/paper/CERT177002010.pdf

Irizarry, J., and Meadati, P. (2009) “Use of interactive Display Technology for
Construction Education Applications.” American Society for Engineering
Education Southeastern Section Annual Conference, April 5-7, Marietta, GA
(in CD-ROM).
Using Applied Cognitive Work Analysis for a Superintendent to Examine
Technology-Supported Learning Objectives in Field Supervision Education

Fernando A. Mondragon Solis, S.M.ASCE1, William J. O’Brien, Ph.D., M.ASCE2


1
The University of Texas at Austin, Department of Civil, Architectural and
Environmental Engineering, 1 University Station C1752, Austin, TX 78712-0273; PH
(512) 696-3476; FAX (512) 471-3191; email: fernando.mondragon@mail.utexas.edu
2
The University of Texas at Austin, Department of Civil, Architectural and
Environmental Engineering, 1 University Station C1752, Austin, TX 78712-0273; PH
(512) 471-4638; FAX (512) 471-3191; email: wjob@mail.utexas.edu

ABSTRACT

Superintendents in construction jobsites face the complex task of coordinating
a great number of resources to manage field activities. Their job demands that they
process large amounts of information in order to perform successfully. As a result,
superintendents make dozens of decisions in a given day at work, which involve heavy information processing activities. Articulating the basis for such activities in order to transmit expertise can prove to be quite an extensive and difficult task, and novice
superintendents must again make sense of the world around them. Given the
complexity of the job, and the difficulty to develop an initial understanding,
Cognitive Task Analysis (CTA) methods appear as a useful tool to support
performance and instruction. CTAs are methods that can be used to describe a job,
from the practitioner’s perspective, as a set of decision tasks in terms of the necessary
information processes. One method of CTA, Applied Cognitive Work Analysis
(ACWA), is useful to uncover the goals that must be accomplished for a
superintendent to manage field activities in a construction project, the required
decisions to achieve those goals, as well as the information needed to make each
decision. This information can serve to identify functionality in computer systems to
aid cognitive performance. In addition, sets of the superintendent’s goals show a
demand for certain knowledge and thinking skills that are desirable for future field
supervisors to learn, thus positioning those goals as learning objectives. This study
utilizes the results of an ACWA study to explore their use for developing technology-
supported instruction in field supervision education.

INTRODUCTION

Construction superintendents have the main responsibility of managing field
activities in a construction project. This task requires them to consider production,
quality and safety objectives to direct crews, materials and equipment. Each of these
categories accounts for dozens of variables for a single project. Information available
for those variables must be collected, processed, stored and shared in order to plan,
coordinate and make decisions that will lead to successful management of the
jobsite. This results in complex work that is difficult to perform and challenging even
to expert practitioners. A consequence of this complexity is that it requires several
years for a person to develop the skills to become an expert superintendent. Once a
person becomes an expert, their mental models, sense of typicality, perceptual skills
and routines related to their job will be highly developed (Klein and Militello, 2005).
However, articulating the subtle aspects of their job and their basis for decision will
become a difficult task itself (Crandall et al., 2006; Smith, 2003). In this way, passing
on their expertise is complicated and novices have to face the same challenge of
making sense of a complex job. Cognitive Task Analysis (CTA) methods are procedures to
understand how people think and how they perform complex work (Crandall et al.,
2006). That is, CTA can provide insight into the superintendents’ complex mental
work that leads to attaining the main objective of managing field activities.

In this research, a particular method of CTA, called Applied Cognitive Work
Analysis (ACWA), is used to describe the tasks performed by the superintendent as a
set of goals, decisions and information needs. These pieces of information can be
provided by computer systems in a way that is straightforward and compatible with
the practitioner needs, thus aiding cognitive performance and reducing the
complexity of the job. Also, each of the goals that practitioners encounter in their
jobs, with the inherent information required, implies certain knowledge and thinking
skills that superintendents must acquire and develop throughout their professional
career. These goals can be set as learning objectives in the instruction of novice
superintendents to develop their own mental models of the knowledge domain.
Utilization of CTA methods can help the development of mental models for novices
and reduce both the complexity of the domain and the difficulty of performance by
incorporating computer systems that support their cognitive activity. In this paper, the
results of an ACWA study are analyzed to explore their potential to develop
technology-supported instruction of field supervision.

LITERATURE REVIEW

Cognitive Task Analysis

Cognitive activity is at the center of human work (Hollnagel and Woods,
1983). Decision making, sensemaking, planning, problem detection and coordination
are all cognitive functions that people use in their jobs. Work settings that involve
heavy cognitive and collaborative activity across multiple human and machine agents
have come to be known as complex cognitive systems (Hoffman and Woods, 2000).
Such is the case of field management in construction, where large amounts of
information are transferred and processed by project stakeholders and information
systems. From the schedule documents, contracts, drawings and specifications, much
has to be communicated before work can be put in place. Several methods have been
developed to study such systems, under the label of Cognitive Task Analysis. CTAs
are methods that can be useful to describe a job, from the practitioner’s perspective,
as a set of decision tasks in terms of the necessary information processes (Rasmussen,
1986). Such an analysis identifies the information that people need to accomplish a given task requiring mental activity. In this sense, CTA studies are deemed
particularly useful to understand the details and subtle elements of work that people
consider for achieving their job’s objectives and responsibilities. The three primary
aspects for capturing cognition through CTA studies are knowledge elicitation, data
analysis, and knowledge representation. Inclusion of these elements in a CTA study
will facilitate reproduction of expertise.

In the construction domain few studies have used CTA methods. Distefano
and O’Brien (2009) analyzed experts on infrastructure assessment in small combat
units using the Applied Cognitive Task Analysis framework for interviews, aimed at
obtaining critical elements of performance that can be improved through information
technologies. Saurin, Saurin and Costella (2010) used the Critical Decision Method
framework for interviewing workers to gain insight into the causes of workers’
accidents in jobsites to improve error classification and safety procedures. It can be
noted that both CTA methods used are mostly focused on knowledge elicitation, and
they are aimed at performance improvement.

Applied Cognitive Work Analysis

A method of CTA known as Applied Cognitive Work Analysis (ACWA) has
been developed to systematically address the design of functionality from an analysis
of the demands of a domain to identifying visualizations and decision-aiding concepts
that will provide effective support (Potter et al., 2002). The methodology presented
by Elm et al. (2003) consists of five steps that provide links from cognitive analysis
to design: use of a functional abstraction network (FAN), identification of cognitive
demands, tasks and decisions as a set of cognitive work requirements (CWR),
identification of associated information and relationship requirements (IRR),
specification of the representation design concepts (RDC) that suit the information
and relationships, and development of the presentation design concepts (PDC) for
implementation of the representation requirements. ACWA is meant to bridge the gap
between cognitive analysis and the design and development of decision support
systems for complex jobs (Elm et al. 2003). It unveils the connection between a
subject matter expert’s (SME) understanding of the demands, constraints and
resources in a domain and decision-aids that support such understanding of the
domain. The ACWA methodology will also provide a thorough description of the
cognitive activity behind the mental model of the domain. Having a verbal expression
of a mental model is highly valuable for explaining how the domain works and
transferring the model itself. The first three steps form the cognitive analysis section
of the ACWA methodology, while the latter form the design part.
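To make the structure of these cognitive-analysis artifacts concrete, the sketch below encodes a FAN goal with its CWRs and IRRs as simple Python data classes. The class and field names are illustrative assumptions, not ACWA terminology beyond FAN, CWR, and IRR, and the sample entries follow the goal shown later in Figure 2.

```python
# Illustrative data model for the cognitive-analysis artifacts; class and field
# names are assumptions made for this sketch, and the entries follow Figure 2.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CognitiveWorkRequirement:
    label: str                                      # e.g. "CWR-P11.3"
    description: str                                # the decision or monitoring task
    irrs: List[str] = field(default_factory=list)   # information/relationship requirements

@dataclass
class FANGoal:
    label: str
    description: str
    supports: List[str] = field(default_factory=list)   # goals this goal informs
    cwrs: List[CognitiveWorkRequirement] = field(default_factory=list)

goal_11 = FANGoal(
    label="Goal 11",
    description="Successfully supervise field work being performed",
    supports=["Develop the monthly schedule", "Direct the weekly subcontractor meeting"],
    cwrs=[
        CognitiveWorkRequirement(
            label="CWR-P11.3",
            description="Select activities that require further attention and closer supervision",
            irrs=["Critical activities",
                  "Activities that will finish late",
                  "Activities with lower productivity than expected"],
        )
    ],
)
print(goal_11.description, "-", len(goal_11.cwrs), "cognitive work requirement(s) encoded")
```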

METHODOLOGY

The cognitive analysis section of the ACWA methodology is utilized for
identifying the way superintendents understand the objectives of their work. These
objectives are stated in the FAN. The cognitive work that the practitioner must
perform to achieve such objectives is unveiled through the CWRs; in turn, these make it possible to determine the IRRs, which specify the information needed to perform the cognitive work. Since system design is outside the scope of this research, only the cognitive
analysis section is of interest.

ACWA covers all the aspects of cognitive task analysis in the different steps
that form the cognitive analysis section. The knowledge elicitation method of ACWA
consists of targeting information about how goals and processes work, how feedback about them is obtained, and what actions respond to incorrect functioning (Elm, 2002). The analyst is in charge of compiling the obtained data and turning it into the
main representation of the ACWA, which is the FAN. The analyst is also in charge of
verbalizing the CWRs and IRRs for each of the goals in the FAN. Three additional
knowledge elicitation methods were used to support data collection: observations, the think-aloud method, and the Critical Decision Method modified for day-to-day activities. The think-aloud method consists of having someone narrate their thoughts as they perform a task. The modified Critical Decision Method consists of having an expert narrate their daily activities while searching for decision points even if incidents are not critical (Crandall et al., 2006). These methods provided input for the
three steps of interest. The ACWA methodology does not have to be carried out in a
strictly sequential manner (Potter et al., 2002), but the cognitive analysis must be completed in full to adequately describe the job in terms of goals and information
needs.

Development

The subject of analysis was a superintendent with more than 20 years of
experience in the construction industry, of which 6 have been in the superintendent
position. Background information on the superintendent and the construction
company provided the initial input for the FAN. For a period of one month,
observations and informal interviews kept feeding the development of the FAN and
the requisites for each goal. While informal, interviews were conducted around the
think-aloud and Critical Decision methods, to get insight into the superintendent’s
sense of what is common and critical.
The first product to be completed was the FAN, though at that point the
development of the requirements for each goal was quite advanced. All the products
of the study were reviewed periodically with the superintendent to verbally confirm
their validity. A particular condition is that the logic behind the FAN must appear
straightforward to the practitioner, and the explanation of the relationships expressed
both in terms of goals and information processes should be simple and make sense. In
addition, anecdotal evidence was also gathered to confirm the statements included in
each product.

Knowledge Representation

The FAN is considered complete as shown in Figure 1, where 24 goals at
different levels of abstraction were discovered to be part of the superintendent’s
practice. Interpretation of the relationships expressed can be made in terms of
supporting and supported goals. For example, it can be observed that the Goal-
Process node of “Successfully supervise field work being performed” is the busiest in
terms of the links with other nodes (Figure 1 inside the blue frame). This implies that
successful achievement of this goal highly depends on successful achievement of
other goals, and vice versa. For instance, developing the monthly schedule becomes
very complicated if appropriate supervision does not inform about actual field
progress. Similarly, directing the weekly meeting with the subcontractors’ foremen
implies having knowledge of current progress of activities on site, as well as an
updated schedule for communication and collaboration purposes. Such is the mental
model of the superintendent, which heavily relies on adequate field supervision to
coordinate people and manage construction tasks around the jobsite.

Figure 1. Functional Abstraction Network.

Further along, the process of field supervision can be described in detail by
the cognitive work required for successful performance. In Figure 2, the CWRs
indicate that schedule, building processes, resources, equipment and materials are all
variables that the superintendent takes into account both for observing current
progress and estimating future progress. With the detail provided by the IRRs, it
becomes clear how field supervision has to be performed to adequately inform other
goals and processes.

ANALYSIS

An exploration of the utility of the ACWA methodology for technology-
supported instruction of field managers can be made once all the products of the
cognitive analysis part have been developed. Identification of useful functionality that
will not hinder the mental model obtained is essential to determine the IT tools that
can reduce the complexity of the job. In addition, the obtained products can be used
to determine relevant learning objectives for novices, with the purpose of building
their own mental models of the knowledge domain. The potential benefit of
developing mental models that rely on computer systems for reduced complexity is
discussed in this section.

CWR-Goal 11 Successfully supervise field work being performed


CWR-G11.1 Monitor field work being performed according to schedule, quality and safety
objectives
IRR-G11.1.1 Amount of tasks’ progress expected per a period of time
IRR-G11.1.2 Number of safety violations
IRR-G11.1.3 Compliance with quality process

CWR-Process 11 Overview work performed


CWR-P11.1 Monitor adherence to scheduled sequence of activities
IRR-P11.1.1 Space availability per area of the jobsite
IRR-P11.1.2 Type of work being done per area on the jobsite
IRR-P11.1.3 Number of subareas for each task
IRR-P11.1.4 Task predecessors
IRR-P11.1.5 Task successors
CWR-P11.2 Monitor progress of activities according to building process
IRR-P11.2.1 Activities following specs or standard construction procedures
IRR-P11.2.2 Expected construction sequence
IRR-P11.2.3 Expected use of tools for construction
CWR-P11.3 Select activities that require further attention and closer supervision
IRR-P11.3.1 Simultaneous tasks that can be conflicting if delayed
IRR-P11.3.2 Activities entailing unfamiliar or knowingly complex tasks that may be late
IRR-P11.3.3 Activities with long durations
IRR-P11.3.4 Critical activities
IRR-P11.3.5 Activities that will finish late
IRR-P11.3.6 Activities with lower productivity than expected
CWR-P11.4 Monitor availability of laborers to complete tasks on schedule
IRR-P11.4.1 Expected number of people per task
IRR-P11.4.2 Short-term activity’s progress
IRR-P11.4.3 Space available to allocate people
CWR-P11.5 Monitor labor performance on site
IRR-P11.5.1 Incidences of idle workers
IRR-P11.5.2 Incidences of safety violations
IRR-P11.5.3 Number of requests for information for drawings and specs
CWR-P11.6 Monitor equipment performance on site
IRR-P11.6.1 Incidences of idle equipment
IRR-P11.6.2 Incidences of safety violations
IRR-P11.6.3 Number of maintenance requests
IRR-P11.6.4 Number of availability issues
CWR-P11.7 Monitor material usage on site activities
IRR-P11.7.1 Location of material with respect to location of task
IRR-P11.7.2 Material amount required per task
IRR-P11.7.3 Material availability on site each task
CWR-P11.8 Monitor expected activity progress with respect to schedule
IRR-P11.8.1 Scheduled durations
IRR-P11.8.2 Weather conditions permitting work per task
IRR-P11.8.3 Baseline look-ahead
IRR-P11.8.4 Late activities with respect to baseline schedule
IRR-P11.8.5 Late activities with respect to previous look-ahead
CWR-P11.9 Choose to take action for inadequate resource utilization
IRR-P11.9.1 Low number of resources that will lead to a late task completion
CWR-P11.10 Choose to take action for inadequate material utilization
IRR-P11.10.1 Large amount of wasted material
Figure 2. Cognitive Work Requirements and Information and
Relationship Requirements for the goal-process “Successfully
supervise field work being performed”.

Information Requirements and Functionality

Each of the IRRs represents data that has to be processed in order to obtain meaningful information that allows the practitioner to perform the CWRs. Some IRRs are readily
available but still have to be stored in the mind of the practitioner. Other IRRs are not
available as such, and the practitioner has to process available data to produce them.

Existing scheduling software contains functions that provide information to supply the information requirements or even broaden the information basis for the CWRs. The acquisition of IRRs can be supported by existing scheduling systems such as Microsoft Project or Primavera Project Manager, which are broadly used in the construction industry. These software packages contain functions that provide information in a way that can satisfy the information requirements of construction superintendents. For example, IRR-P11.3.4 (Critical activities) is directly provided by a function that identifies critical activities in a bar chart. Another example is IRR-P11.1.1 (Space availability per area of the jobsite), which is not directly provided by the system but may be supported if the task list is organized by area so as to remind the user of different area constraints.
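As an illustration of how such an information requirement can be computed from ordinary schedule data, the sketch below derives the critical activities (IRR-P11.3.4) from task durations and predecessor links. The tasks shown are hypothetical, and the logic is a generic forward/backward pass rather than a feature of any particular scheduling package.

```python
# Hypothetical tasks: duration (days) and predecessors. A standard forward/backward
# pass yields total float; zero-float tasks are the critical activities (IRR-P11.3.4).
tasks = {
    "Excavate":       {"dur": 5, "pred": []},
    "Form footings":  {"dur": 3, "pred": ["Excavate"]},
    "Pour footings":  {"dur": 2, "pred": ["Form footings"]},
    "Erect steel":    {"dur": 8, "pred": ["Pour footings"]},
    "Site utilities": {"dur": 6, "pred": ["Excavate"]},
}

early_finish = {}
def ef(name):
    # Forward pass: earliest finish is the latest predecessor finish plus duration.
    if name not in early_finish:
        t = tasks[name]
        early_finish[name] = max((ef(p) for p in t["pred"]), default=0) + t["dur"]
    return early_finish[name]

project_end = max(ef(n) for n in tasks)

# Backward pass: latest finish, propagated from successors to predecessors.
late_finish = {n: project_end for n in tasks}
for n in sorted(tasks, key=ef, reverse=True):
    for p in tasks[n]["pred"]:
        late_finish[p] = min(late_finish[p], late_finish[n] - tasks[n]["dur"])

for n in tasks:
    total_float = late_finish[n] - ef(n)
    flag = "CRITICAL" if total_float == 0 else f"float={total_float}d"
    print(f"{n}: finishes day {ef(n)} ({flag})")
```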

Job Goals and Learning Goals

All the elements contained in the ACWA products explain how a
superintendent works and why. Bringing all these elements to the attention of novice
superintendents would provide them with a very robust initial reference for
developing their own understanding of the complex world they are facing. Instruction
for field supervisors can make use of the results of this study. Starting with the FAN,
job goals are relevant references for anyone performing a superintendent’s job. These
can serve as learning objectives, since the CWRs and IRRs provide enough
information to measure success in attaining such objectives. For instance, the goal of
“Successfully supervise field work being performed” (Figure 2) can be measured
upon performance of cognitive work regarding monitoring resources, progress,
schedule and performance. In turn, success in performing cognitive work is
determined by the information necessary in the domain. For example, CWR-P11.3, “Select activities that require further attention and closer supervision,” has six IRRs, which are points of
reference that will determine what this particular cognitive work entails in the context
of field supervision.

The guidance provided by such a high level of detail is expected to accelerate
novices’ learning process, since they are not left to discover by themselves the information requirements and relationships imposed by the data available in the domain and the responsibilities of their work. In addition to this aid in making sense of a
complex job, the complexity of job performance can also be reduced by making use
of computer systems that support information needs in field supervision. By
incorporating these information technologies early in their practice, novices are
headed for a consistent, improved performance.

Using the results of the ACWA study would also have an implication for
designing instructional programs. The relationships expressed in the FAN provide a
context for practice, in which goals are interrelated and information obtained for a
goal serves to attain other goals. This would fit modular instruction, in which learning
objectives for each module can build on one another. Then, the CWRs and IRRs can
determine the instructional strategy that better fits each learning objective. For
example, scenarios and roles can be designed for each job goal, or sets of job goals.
Overall, design of instruction can be grounded in the mental model of expert
superintendents, since such models are comprehensive and responsive to the
constraints of the field management domain.

CONCLUSIONS

The products of an ACWA study provide insight into an expert’s mental
model of a domain. Concerning field supervision, such insight can be used to reduce
the complexity that practitioners must deal with, both when learning their job, and
when processing information for daily performance. In this paper, the results of an
ACWA study were used to explore their potential to develop instructional objectives,
and identify useful functionality in computer systems to support information
processing tasks. Coupled, these benefits would allow for development of instruction
with technology support, which is expected to accelerate the process of becoming an
expert given the increment in initial guidance, as well as the support for information
processing in practice.
An implication of using these results is that learning goals can be defined in
terms of job goals and the necessary information to make decisions. Furthermore, the
instructional design process can receive input for developing learning strategies.
Sufficient information is provided for developing scenarios and roles to support
practice and provide opportunities for novices to explore the domain and make errors.
While the results of a single study cannot be generalized, it was possible to examine
the implications of using CTA for supporting instruction in field supervision, and set a stepping stone for future comparison with results from other similar studies.

REFERENCES

Crandall B., Klein G., Hoffman R.R. (2006). Working Minds – A practitioner’s guide
to Cognitive Task Analysis, The MIT Press, Cambridge, MA.
Distefano M.J., O'Brien W.J. (2009). “Comparative Analysis of Infrastructure
Assessment Methodologies at the Small Unit Level.” Journal of Construction
Engineering and Management, ASCE, 135(2), 96-107.
Elm, W. (2002). Applied cognitive work analysis: ACWA, Unpublished briefing,
<http://mentalmodels.mitre.org/cog_eng/ce_references_V.htm> (Dec. 18th,
2010).
Elm, W., Potter, S., Gualteri, J., Roth, E., Easter, J. (2003) “Applied cognitive work
analysis: a pragmatic methodology for designing revolutionary cognitive
affordances.” Handbook of Cognitive Task Design, Hollnagel E., ed.,
Lawrence Erlbaum Associates, Mahwah, NJ, Ch. 16.
Hoffman, R. R., & Woods, D. D. (2000). “Studying cognitive systems in context.”
Human Factors, 42, 1-7.
Hollnagel, E. and Woods, D.D. (1983). “Cognitive systems engineering: new wine in
new bottles.” International Journal of Man-Machine Studies, 18, 583-600.
Klein, G., and Militello, L. (2005). “The knowledge audit as a method for cognitive
task analysis.” How professionals make decisions, H. Montgomery, R.
Lipshitz and B. Brehmer, eds., Lawrence Erlbaum Associates, Mahwah, NJ.
Potter S. S., Elm W. C., Roth E. M., Gualtiere J. W., and Easter J. R. (2002).
“Bridging the gap between cognitive analysis and effective decision aiding.”
State of the Art Report (SOAR): Cognitive Systems Engineering in Military
Aviation Environments: Avoiding Cogminutia Fragmentosa! McNeese, M.D.,
and Vidulich, M.A., eds., Wright-Patterson AFB, Human Systems
Information Analysis Center, 137-168.
Saurin, T.A., Saurin, M.G., and Costella, M.F. (2010). “Improving an algorithm for
classifying error types of front-line workers: Insights from a case study in the
construction industry,” Safety Science, 48, 422-429.
Smith, P.J. (2003). “Workplace learning and flexible delivery.” Review of
Educational Research, 73(1), 53–88.
Developing and Testing a 3D Video Game for Construction Safety Education

JeongWook Son1, Ken-Yu Lin2, and Eddy M. Rojas3

1
Ph.D. Candidate, Ph.D. Program in the Built Environment, University of
Washington, 130E Architecture Hall, Box 351610, Seattle, WA 98195; PH (206) 616-
3205; FAX (206) 685-197; email: json@uw.edu
2
Assistant Professor, Department of Construction Management, University of
Washington, 120 Architecture Hall, Box 351610, Seattle, WA 98195; PH (206) 616-
1915; FAX (206) 685-1976; e-mail: kenyulin@uw.edu
3
Director and Professor, The Durham School of Architectural Engineering and
Construction, University of Nebraska-Lincoln, 1110 S. 67th St., Omaha, NE 68182;
PH (402) 554-3186; FAX (402) 554-3850; e-mail: er@unl.edu

ABSTRACT
Construction safety education has mostly relied on one-way transference of
knowledge from instructors to students through traditional lectures and media such as
textbooks. However, we argue that safety knowledge could be more effectively
acquired in experiential situations. The authors have developed a 3D video game
where students learn by themselves about safety issues in a virtual construction site.
Students, who assume the roles of safety inspectors in the game, explore a virtual site
to identify potential hazards and learn from the feedback provided by the game as a
result of their input. This paper reports on the game design and development process
as well as a preliminary assessment of the game’s effectiveness. The preliminary
assessment was conducted on five students and the results suggested a positive
outlook as well as areas for improvement. Further work to improve the game includes
incorporating additional violation scenarios, adding new game features to enrich the
game experience, and providing enhanced pedagogical opportunities.

INTRODUCTION
Promoting safety education and preparing our future workforce for a safer and healthier work environment is without doubt a critical agenda amongst other high priorities in construction. However, traditional safety teaching practice based on the
textbook–chalkboard–lecture–homework–test paradigm has long been criticized as
inadequate and inappropriate for student learning (Nirmalakhandan et al. 2007). The
authors propose a 3D video game, Safety Inspector (SI), to explore how game
technology can intertwine with safety education and complement existing learning
approaches. For safety education, the game aims to provide a “safe” training
environment that engages students in comprehensive hazard recognition challenges as
a way to evaluate student performance and increase student learning interests. In
addition, the game development process is expected to serve as an operational model
of available technologies for those who are also interested in educational video games
in the Architectural/Engineering/Construction industry. In addressing these objectives,
this paper reports the authors’ overall game development process, the implementation
of a preliminary prototype system, and game evaluation.

DESIGN AND IMPLEMENTATION


Game Object Design: Requirements and Constraints
1. Construction Site: The construction site encompasses areas at different construction
phases and includes safety violations pertinent to each phase. The first step in
implementing a scenario in Safety Inspector is to plug in the construction elements
typically situated in the scenario. For example, to recreate the improper set-up of
scaffolds for masonry work, scaffold objects are placed in the finish area for modeling
the missing planks. The entire site is also mapped with textures in order to convey a
realistic sense for the construction field, addressing realism requirements in game
design.
2. Workers and Fleets: By design, workers wear proper outfits and are equipped with
construction tools because many safety regulations are related to dress codes and tool
handling. Worker movements and actions are modeled to implement safety violations
that involve dynamic operations. This adds a realistic sense to the construction field
and also fulfills realism requirements in game design. Construction fleets are modeled
similarly. Although not yet implemented, extras (i.e. additional laborers and
construction fleets not directly involved with violations) will be added in the future to
further enrich the game. Sound effects will also be included to enhance a sense of
presence.
3. The Surrounding Environment: The construction site is designed to include most of
the elements that are likely to be present on a typical site; e.g., office trailers, fences,
power lines, safety signs, and so on. Buildings and roads adjacent to or outside the
construction site are modeled with minimal details to improve the outlook of the
game without sacrificing its overall performance.

Safety Violations
The safety violations targeted for game implementation are listed in Table 1. Table 1 is a preliminary hazard classification derived from the Washington State Department of Labor and Industries (WA L&I) safety training materials used to guide the game development. One of the dimensions used to categorize these violations is the level of knowledge that learners need to perceive them. Using this dimension, the categorization can naturally evolve into three game stages (e.g. easy, moderate, and difficult). However, in the current stage of the game, a mix of easy and moderate violation recognition challenges is implemented for testing purposes.

Game Logic Design


Students playing the game assume the role of a contractor’s safety inspectors
and have the responsibility to point out all on-site hazards during their virtual job
walk. In a limited amount of time, safety inspectors are to explore the entire jobsite
with an attention to detail, being aware of the specialty operations occurring on the
site and checking them out, while identifying any unsafe conditions or behaviors
observed. If safety inspectors are convinced that a potential job hazard is present, they
can point out the potential violation by clicking on the game object. If the object truly
constitutes a violation, instructional messages relevant to the violation are displayed
on screen and the learner gets points corresponding to the level of the hazard
recognition challenge. Figure 1 shows an example when a learner correctly identifies
that a mobile crane is too close to power lines. The learner is given 10 points for
correctly identifying the hazard. This self-guided trial-and-error process continues
until termination conditions are met. The game is terminated if a learner successfully finds all safety violations or if time runs out. After finishing a level, a learner can
either go to the next level or simply exit the game.

Figure 1. Hazard Identification
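The click-and-score logic described above can be summarized in the following Python-style sketch. It is an illustration of the game flow only, not the actual implementation, which is written in C++ and TorqueScript on the Torque 3D engine; the object identifiers, point values, and time limit are hypothetical.

```python
# Illustrative sketch of the hazard-identification loop (not the actual Torque 3D /
# TorqueScript implementation); violations, points, and the time limit are hypothetical.
import time

violations = {   # game object id -> (instructional message, points)
    "mobile_crane_01": ("Crane operating too close to power lines.", 10),
    "scaffold_03":     ("Scaffold missing guardrails and toe boards.", 5),
}
found = set()
score = 0
TIME_LIMIT_S = 600   # ten-minute job walk

start = time.time()
while found != set(violations) and time.time() - start < TIME_LIMIT_S:
    clicked = input("Click an object (enter its id, or blank to skip): ").strip()
    if clicked in violations and clicked not in found:
        message, points = violations[clicked]
        found.add(clicked)
        score += points
        print(f"Correct! {message} (+{points} points)")
    elif clicked:
        print("No violation associated with that object.")

print(f"Level over: {len(found)}/{len(violations)} hazards found, score = {score}")
```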



Table 1. Classification of Common Safety Hazards for Modeling Purposes


Safety Modeling Difficulty
Knowledge Low High
1. PPE – missing hardhat when standing 1. Electrical – distance between the crane
underneath scaffolds and the power line is larger than ten feet
2. Hammer rests on the edge of the scaffold 2. Trench over four feet deep with no
3. Scaffold missing guard rails and toe protection systems in place
boards 3. Unnecessarily steep (pitch larger than
4. Unprotected rebar twenty degrees) and narrow ( smaller than
5. Worker standing underneath the hoisted eighteen inches) walkways
loads 4. Walkboards four feet or more above
6. Standing on a window sill w/o PFAS grounds
Low 7. Concrete pump goes above power lines 5. PPE – carrying lumber w/o gloves
8. Holes on the floors w/o covers 6. PPE – missing safety glasses when using
9. No perimeter cables along the perimeter nail guns
steel columns 7. Material handling – hoisting personnel
10. Worker smoking next to tanks that store 8. Carrying a tool by the cord
flammables 9. Worker climbing scaffold bracings
11. No guardrails around stairwell 10. Cement truck backing onto a worker
12. Uncapped rebar 11. Worker on the ladder reaching too far
13. Personnel inside the swing radius of a
crane
1. PPE – using bump hat (instead of 1. PPE – beard guy wearing a tight-fitting
hardhat) respirator when spraying paint
2. Material storage – lumber stacking too 2. Loose items aloft on installed steel beams
high
3. Material handling – rigging when the
shackle is used upside down
4. Scaffold base plate sits on shaky objects
instead of on firm foundation
5. Scaffold platform not fully planked
6. Skylight not covered for roofing
construction
7. Trench - spoil pile too close to the trench,
no means of egress/access, equip working
right at the edge, walking outside the
trench box
8. Rusty or damaged shoring posts
9. Waste dump w/o fencing
10. Nails remaining on lumber
Medium 11. Workers on aerial lifts w/o fall protection
12. Tripping hazards due to messy
housekeeping (e.g. cords all over the
place)
13. Electrical tools not in use are plugged in
14. Workers wearing athlete shoes
15. Ladder not properly secured or set up
16. Man using jackhammer w/o hearing
protection
17. Climbing stairs w/o three point contacts
18. Crane outrigger on unstable bases
19. Elevated concrete pouring bucket on top
of employees
20. Concrete pouring hose on the shoulder,
kinking
21. Unsafe pedestrian walkway
22. More than one person on a ladder at one
time
Table 1 (Continued). Classification of Common Safety Hazards for Modeling Purposes
Safety Modeling Difficulty
Knowledge Low High
1. Material storage – stacking pipes in racks 1. Material storage inside buildings under
facing main aisles construction shall not be placed within 6’
2. Step ladders on top of scaffolds of any hoist way or 10’ of an exterior wall
3. Bundles of metal decking on top of joists 2. Trench – means of egress/access over
w/o bridging twenty-five feet of reach, ladder not
4. Multiple lift w/ more than five members extending three feet above the trench box,
or w/ members too close to each other box too low
5. Using the stepladder to gain access to 3. Post-tensioning operations w/o barricades
upper levels to limit employee access
4. Un-braced masonry wall over eight feet in
a control access zone
5. Steel columns w/ only two anchor bolts on
the bottom
6. Workers standing on buckets to reach for
High
high objects
7. Change in elevation more than nineteen
inches w/o adequate stairs/ramp access
8. Workers on walkways exposed to opening
w/ extreme hazardous conditions w/o
guardrails (even though the walkways are
only one or two feet above ground)
9. Window openings on or above the second
floor where the sill is less than thirty six
inches from the floor w/o guardrails
10. Ladders not set at a four-to-one angle
11. Only one seat is installed before releasing
hoist lines when installing a joist over
sixty feet

Game Implementation
Safety Inspector is powered by the Torque 3D game engine through Torque
Software Development Kit (SDK) V1.0. The engine code is written in C++, with tools written in C++ and a proprietary scripting language, TorqueScript. Main functionalities of the Torque 3D game engine include 3D rendering, physics, and animation. To simplify the game development process, the Torque 3D game engine is manipulated through the Torque SDK. Editors and tool kits such as Terrain Editor, Shape
Editor, and Material Editor in Torque SDK enable developers to complete games
without laborious coding. Detailed game implementation processes are presented in
Fig. 2 and described in the following paragraphs.
1. Creating Terrain: One of the first objects added to the game was the construction
site terrain. The terrain was created (Fig. 3) by modifying a default terrain shape and
textures using the Terrain Editor in Torque SDK.
2. Creating 3D Objects: 3D game objects were produced and then imported into
the Safety Inspector virtual space. These objects include, but are not limited to, worker characters, fleets, buildings, equipment, tools, materials, and background objects.
Some objects were created from scratch and some were modified from existing 3D
models. Autodesk 3DS Max 2009 was used to create/edit most of the 3D objects.
Furthermore, the collision boundaries for each object were added to the object’s 3D hierarchy to define the collision detection mechanism among game objects.

Figure 2. IDEF0 Diagram for Safety Inspector Implementation

3. Exporting and Importing Game Objects: Completed 3D objects were imported into
the game in the format of DTS (Dynamix Three Space) or DIF (Dynamix Interior
File), both proprietary. DTS is generally used for representing non-structural game
objects such as characters, fleets, and equipment while DIF is used for representing
structural game objects such as buildings or other enclosing structures. DTS objects
were produced by exporting 3D objects through a DTS exporter (e.g.
max2dtsexporter) in Autodesk 3DS Max 2009 and DIF objects were created through
two steps, including exporting 3D objects into the file format of Torque Constructor
and then exporting again the 3D objects in Torque Constructor into the DIF format.
Exported game objects were incorporated into the virtual game space through Torque
3D SDK.
4. Customizing Game Code: The game engine has been customized so that required
properties, responsive actions, and dynamic behaviors of game objects could be
accomplished.
5. Creating the Graphical User Interface (GUI): The GUI was designed to display necessary information, such as the current total points and instructional messages, so that learners get feedback during game play.

Figure 3. (Left) Earthmoving Terrain, (Right) Earthmoving Terrain Texture

SYSTEM EVALUATION
Before a full-scale implementation and evaluation is attempted, a small group
of students from the Department of Construction Management at the University of
Washington were invited to test the game and to provide feedback on their learning
experiences. A total of five students, who had taken the construction safety class required in the CM curriculum, voluntarily participated in the game testing. They played the game for ten minutes and then filled out a feedback survey to help evaluate the research effort. A total of eighteen questions were listed in the survey. A 7-point Likert scale (1 being the lowest level and 7 being the highest level) was used. Table 2 presents the survey questions and their results. Although these results are not statistically significant given the small sample size, the evaluations still provide some insights about game performance and useful feedback to improve the game for
future versions.

DISCUSSION AND CONCLUSION


The authors developed a 3D video game where students learn about safety
issues in a virtual construction site. This study shows an engaging and motivating
learning experience for participating students. It also revealed the game’s potential in
terms of measuring students’ hazard recognition capabilities, complementing existing
approaches. In addition, this study also provides incremental knowledge about the 3D
video game technologies employed and generated encouraging feedback as well as
recommendations for future research. The next version of Safety Inspector will
extend the scope of modeled violations and incorporate more features leveraging the
design analysis.

Table 2. Survey Questions and Results.


Question Results
How realistic does the game reflect the everyday construction operations? 4.6
(1-7)
Which visual aid provides you with a more comprehensive challenge of 40% rated the game
hazard recognition? The game or the image? 60% rated the image
Which visual aid are you more comfortable with when given the task of 40% rated the game
hazard recognition? The game or the image? 60% rated the image
Does the game motivate you to refresh your knowledge on some of the 100% replied “Yes”
safety topics? Yes or no. 0% replied “No”
How much does the game intrigue your learning interests? (1-7) 4.8
Were you unsure about some of the potential violations in the game? 80% replied “Yes”
20% replied “No”
Is the learning experience facilitated by the game interactive? 100% replied “Yes”
0% replied “No”
How important is it to have learning guidance (e.g. safety tips, hints, 4.6
related regulations) in the game? (1-7)
What types of learning guidance would you like to see in the game? N/A
How challenging is it for you to identify the violations in the game? (1-7) 3.8
How much does your game performance reflect your safety knowledge? 4.6
(1-7)
Is the game visually appealing to you? “Yes” (80%)
Is the game user-friendly and easy to operate for you? “Yes” (80%)
Is the experience enjoyable compared to the traditional learning experience? “Yes” (100%)
Do you think that the game scoring can be one way to measure your safety knowledge, in addition to your assignments, quizzes and exams grades? “Yes” (80%)
What are the three best features of the game? N/A
What are the three worst features of the game? N/A
Any other feedback? N/A

ACKNOWLEDGMENTS
The authors would like to acknowledge the financial support for this research
received from the National Science Foundation Award 0753360. The authors would
also like to recognize their collaborators from the University of Texas at Austin and
the Rinker School of Building Construction at the University of Florida.

REFERENCES
Nirmalakhandan, N., Ricketts, C., McShannon, J., and Barrett, S. (2007). "Teaching Tools to
Promote Active Learning: Case Study." Journal of Professional Issues in Engineering
Education and Practice, 133(1), 31-37.
Attention and Engagement of Remote Team Members in
Collaborative Multimedia Environments

R. Fruchter1 and H. Cavallin2


1
ASCE Member, Director of Project Based Learning Laboratory (PBL Lab),
Department of Civil and Environmental Engineering, Stanford University,
fruchter@stanford.edu
2
Professor, School of Architecture, University of Puerto Rico, hcavallin@uprrp.edu

ABSTRACT.

An important aspect in multimedia computer mediated collaboration is to
sustain the attention and engagement of remote participants during project meetings.
This paper presents preliminary findings of a comparative study of two types of collaborative multimedia environments (ICT), Webconferencing with application sharing and a 3D Team Neighborhood virtual world, by evaluating the syntactic levels of micro-awareness, which consist of locus of attention and attention span. These metrics provide insight into which key ICT interaction characteristics, and which patterns of attending to content presented through a collaboration interface, generate awareness and establish and sustain the attention and engagement of remote participants. We used the
2009-2010 “Architecture, Engineering, Construction Global Teamwork” program as
the testbed. Qualitative and quantitative data was collected through field observation
and EyeTracker data. Preliminary results show that meetings held in the 3D Team
Neighborhood kept participants’ attention 24% more time on screen vs. meetings held
in Webconference with application sharing. In addition, multitasking was the typical
behavior during Webconference with application sharing, whereas none or minimal
multitasking was observed in the 3D Team Neighborhood.

INTRODUCTION

When collaborating in geographically distributed teams to solve problems,
awareness is a crucial process for generating successful interaction among the
participants. In developing the awareness required in computer mediated collaboration
environments for remote situations, there are multiple channels (visual,
auditory) that can be used to funnel information that could cognitively engage
participants in the interaction. When the interaction takes place through
software designed for collaboration, the qualitative nature of the interfaces impacts
the dynamics and level of the interaction. In this study, we explore the way in which
these qualities of the interfaces affect the levels of awareness, attention and
engagement during the process of collaboration in geographically distributed teams.
In order to generate and sustain the awareness of participants, certain
processes are crucial for establishing the attention and engagement of the other
participants required in non-collocated collaborative problem solving.


Jiazhi et al. [1] suggest that when collaborating through computer mediation, people
will look at targets that help them determine whether or not their messages have been
understood as intended, and that gaze patterns of speakers and listeners are closely
linked to the words spoken, and help in the timing and synchronization of utterances.
Vertegaal et al. [2] found that in multi-party conversations, speakers looked at the
person they were talking to 77% of the time and listeners looked at the speaker 88%
of the time.
According to Gutwin and Greenberg [3], awareness has four basic characteristics:
1. Awareness is knowledge about the state of a particular environment.
2. Environments change over time, so awareness must be kept up to date.
3. People maintain their awareness by interacting with the environment.
4. Awareness is usually a secondary goal—that is, the overall goal is not simply
to maintain awareness but to complete some task in the environment.
We can see that even though awareness is not a goal in itself, it is an
important condition in order to achieve the proper environment for collaborative
problem solving. Vertegaal et al. differentiate two levels of awareness in
cooperative work. The macro-level refers to the awareness that conveys background
information about the activities of others prior to or outside of a meeting. The micro-
level of awareness, according to them, gives “online information about the activities of
others during the meeting itself.” Micro-level awareness usually has a more continuous
nature than its macro-level counterpart. It consists of two categories: conversational
awareness and workspace awareness. Vertegaal summarizes the elements of micro-
level awareness according to the attentive state, from the syntactical to the pragmatic
aspects of the interaction (Table 1).

Table 1. Organizing elements of awareness according to the attentive state [2].

The syntax level contains two subcategories. The locus of attention describes
the spatial aspects of attention, i.e., where the person directs their attention, while
attention span describes the temporal aspects of attention, i.e., the amount of time a
person can concentrate on a task without being distracted. This paper concentrates on
the measurement of this particular aspect, using methods that enable us to
indirectly infer the attention of the participant, in order to establish whether there is a
connection between the characteristics of the interface and the way in which people
attend to it. We assume in this case that attention to the interface
acts as an indicator of the level of engagement the person has in
the collaboration process.
Although this variable in itself is not a unique component of the micro-level
awareness in the interaction, its study provided us with important insights about how
the characteristics of the interface affect this crucial aspect of non-collocated
interactions.

METHODOLOGY

To explore awareness of participants in geographically distributed
collaboration processes, we used the “Architecture, Engineering, Construction (AEC)
Global Teamwork” program offered in 2009-2010 as the testbed for our study. It
consisted of thirty-five students engaged in six AEC global project teams. We
report preliminary findings using as examples two teams whose choice of multimedia
collaborative environment was representative of the options given to all students in
the program. Each of the two teams had participants at Stanford University and the
University of Puerto Rico. Data were collected through:
 Observations of the Stanford team members, as Dr. Fruchter was playing a
dual role of mentor participant and observer during the weekly team meetings.
 EyeTracker data from a ViewPoint PC-60 Scene Camera Version eye tracker
with EyeFrame hardware at the Puerto Rico site, in order to identify how the
attention span is distributed based on the gaze of the participants.
In this study gaze is a key indicator of conversational attention. As Vertegaal
et al. [4] point out, results indicate that when someone is listening or speaking to
individuals, there is a high probability that the person looked at is the person being
listened to (p=88%) or spoken to (p=70%). According to Kendon [in 4], seeking or looking at the
face of a conversational partner serves at least four functions: (1) to provide visual
feedback, (2) to regulate the flow of conversation, (3) to communicate emotions and
relationships; and (4) to improve concentration by restriction of visual input.
We therefore interpreted the direction of gaze as an indicator of the focus of
attention of participants during meetings. Students in Puerto Rico wore the eye
tracker during the meetings. For each session we recorded a total of 18 minutes of
interaction. The portable EyeTracker is a non-intrusive device to track the direction
of the gaze of the participants, without affecting their natural performance.
These interactions were then analyzed by evaluating the position of the
gaze every 12 seconds (1/5 of a minute) and afterwards correlated with the
observations of the Stanford participants. The 12-second span was decided partly
on the basis of the literature and partly from our own
observations of the meaningfulness of the interactions. The literature reports that
attention in mobile interactions under laboratory conditions, such as those used for
this research, occurs in intervals averaging 12 seconds [5]. We use this
parameter as a reference because there is no equivalent information connecting team
interactions and span of attention to regular computer monitors.
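
To make the bookkeeping behind this sampling concrete, the following sketch (Python; the category labels and counts are hypothetical, not the study data) shows how gaze observations taken every 12 seconds over an 18-minute session could be tallied into attention categories and summarized as shares of meeting time. The four categories assumed here mirror those defined later in the paper (screen, notes, keyboard, outside).

from collections import Counter

# Hypothetical gaze labels recorded every 12 seconds over an 18-minute
# session (18 min * 60 s / 12 s = 90 samples).
gaze_samples = ["screen"] * 55 + ["notes"] * 15 + ["keyboard"] * 10 + ["outside"] * 10

SAMPLE_INTERVAL_S = 12  # one observation per 12-second interval

counts = Counter(gaze_samples)
for category in ("screen", "notes", "keyboard", "outside"):
    share = counts[category] / len(gaze_samples)
    print(f"{category:>8}: {counts[category] * SAMPLE_INTERVAL_S:4d} s ({share:.0%} of meeting)")

Percentages of this kind are the raw material for the time-allocation distributions summarized later in Figure 1c and 1d.
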
At Stanford, the participants, their collaboration space, interactions, and their
computer screens were observed by Dr. Fruchter in person in the PBL Lab and
through her avatar in the virtual world. Instances of engagement or disengagement,
side conversations, gaze foci, and use of ICT tools were noted. This provided a sense
of the engagement at the individual level occurring during the team meetings. Five
out of six teams had two or three members at Stanford. Observations included:
(1) How do the collocated participants make their engagement (or lack thereof) visible to
each other? (2) How do artifacts and ICT support or constrain engagement activities?
(3) When participants engage with ICT, where is their gaze? (4) When and how did
their gaze move between objects, from person to objects and back again?

TESTBED

The AEC Global Teamwork course is based on the project-based learning
(PBL) methodology, which focuses on problem-based, project-organized activities that
produce a product for a client. PBL is based on re-engineered processes that bring
people from multiple disciplines together. It engages faculty, practitioners, and
students from different disciplines, who are geographically distributed. It is a two-
quarter course that engages architecture, structural engineering, and construction
management students from universities in the US, Europe and Asia [6-7].
The core atom in this learning model is the AEC student team, which consists
of an architect, one or two structural engineers, and one or two construction managers
from the M.Sc. level. Each team is geographically distributed, and has a demanding
owner/client that typically wants an exciting, functional and sustainable building, on
budget and on time. The students have four challenges – cross-disciplinary teamwork,
use of advanced collaboration technology, time management and team coordination,
and multi-cultural collaboration. The building project represents the core activity in
this learning environment. The project is based on a real-world building project that
has been scoped to address the academic time frame and pedagogic objectives. The
project specifications include: (1) building program requirements for a university
building of approx. 30,000 sqft of functional spaces that include faculty and student
offices, seminar rooms, small and large classrooms, and an auditorium; (2) a
university site where the new building will be built, such as San Francisco, Reno, or
Puerto Rico. The site provides local conditions and challenges for all disciplines, such
as local architecture style, climate, and environmental constraints, earthquake, wind
and snow loads, flooding zones, access roads, local materials and labor costs; (3) a
budget for the construction of the building, and (4) a time for construction and
delivery. The project progresses from conceptual design in Winter Quarter to 3D and
4D CAD models of the building and a final report in Spring Quarter. The concept
development phase deliverables of each team include: two distinct integrated AEC
concepts, a decision matrix that indicates the pros and cons of the two alternatives
and justifies the selection of one of the two concepts to be developed in Spring
Quarter. The project development phase engages students in further iteration and
refinement of the chosen alternative, detailing, modeling, simulation, cost benefit
analysis and life cycle cost investigation. Spring Quarter culminates with a final AEC
Team project presentation of their proposed solution, and reflection of their team
dynamics evolution. The teams experience a fast-track project process with
intermediary milestones and deliverables during which they interact with industry
mentors who critique and provide constructive feedback.

All AEC teams hold weekly two-hour project review sessions similar to
typical building projects in the real world. During these sessions they present their
concepts, explain, clarify, question these concepts, identify and solve problems,
negotiate and decide on changes and next steps. Since the concepts, problems and
challenges are defined by the students who work on that specific project, their level
of attention and engagement is maximized. Consequently, the students are highly
motivated to exchange and acquire as much knowledge as possible as they participate
in the cross-disciplinary dialogue. The interaction and the dialogue between team members
during project meetings evolved from presentation mode to inquiry, exploration,
problem solving, and negotiation. Similar to the real world, the teams have tight
deadlines, engage in design reviews, negotiate and decide on modifications. Most
importantly, students learn to use and combine diverse communication channels and
media to express and share their ideas and solutions. To view AEC student projects
please visit the AEC Project Gallery
(http://pbl.stanford.edu/AEC%20projects/projpage.htm).

MULTIMEDIA COLLABORATION ENVIRONMENTS

In this study, we compared the two multimedia collaborative environments
adopted by two AEC teams for their weekly meetings, in which the observed subjects
were negotiating the upcoming actions. The two teams were the Island team and the
Ridge team. Each team adopted a specific multimedia collaboration environment that
became part of its work practice and team process. The Island team adopted
GoToMeeting [8] and the Ridge team adopted the 3D Team Neighborhood developed in
the PBL Lab at Stanford University using Teleplace [9].
GoToMeeting facilitates webconference meetings saving time and travel cost.
It supports application or screen sharing. Teams can record their meeting sessions for
future review. Participants can interact on the screen using pen and highlighter tools.
The organizer can change presenters and transfer control to different participants.
Teleplace provides a virtual world in which a 3D Team Neighborhood was
created for each of the six AEC global project teams. It contains offices and meeting
rooms, and interactive rooms with multiple displays where team members can go to work
and collaborate. Using avatars and unique "laser pointer" controls, participants can easily
see where people are, what they are looking at, what content they are editing, and
how they are using applications. In combination with Teleplace's built-in high fidelity
VoIP, webcam video-conferencing and text chat, team members have an immersive
environment and social cues that help them interact effectively.
EyeTracker data was collected from the two architecture students in Puerto
Rico. The analysis of quantitative data from the EyeTracker was supported by in-
person observations collected from four Stanford students, and 3D virtual world
observations collected from the Ridge team 3D Team Neighborhood weekly
meetings.

ATTENTION AND ENGAGEMENT IN COLLABORATIVE ICT SETTINGS



The following describes the Island and Ridge teams’ collaborative ICT settings
according to Vertegaal’s micro-level “Functionality” characteristics, i.e., workspace
and conversational awareness:
 The Island team was composed of an architect in Puerto Rico, a structural engineer at
Stanford, an energy simulation engineer at Stanford, a construction manager at
UW Madison, and a life cycle financial manager at Bauhaus University in Germany.
Each of them worked in their respective university laboratory, using their laptops
on WiFi, with a headset for audio. They used GoToMeeting as their multimedia
collaboration environment. GoToMeeting allowed them to share their
applications that were running on their individual laptops, e.g., architect showing
3D images of the building, structural engineer showing structural component
options, construction manager showing cost estimates and schedules spreadsheets,
life cycle financial manager showing cash flow model diagrams. GoToMeeting
allows viewing, sharing, and controlling one application at a time. This required
participants to switch presenter and control as they were toggling between the
different applications running on the different computers. It allowed all
participants to view and manipulate data only on one application at a time.
(Figure 1a).
 The Ridge team was composed of an architect in Puerto Rico, two structural engineers
at Stanford, and one construction manager in Stockholm, Sweden. Each of them
worked in their respective university laboratory, using their laptops on WiFi,
with a headset for audio. They used the 3D Team Neighborhood in Teleplace as
their multimedia collaboration environment. The 3D Team Neighborhood
provided a highly immersive environment that enabled the team members to
construct in real time their collaboration space around them as the dialog and
interaction evolved during the meeting. Each team member could share their
content on any number of displays that were created on an as-needed basis, as well
as manipulate and annotate any content displayed in their shared workspace. All
participants were able to view and interact with their content and models in
context, i.e., in relation to the content and models shared by the other team
members. This allowed them to interpret, correlate, combine, and compare items
on different displays. In addition, they were constantly aware of where their team
members were looking and where they were located with respect to them and the
displayed content in their shared 3D Team Neighborhood. The interaction and
communication in this multimedia immersive collaboration environment led to a
free and continuous flow of interaction and communication. (Figure 1b).
Based on the position of the gaze of the two architecture students at the University of
Puerto Rico, we used the following categories to classify the EyeTracker information:
1. Screen: person looking at the computer screen, specifically at the window of the
application evaluated.
2. Notes: person taking notes related to the discussion.
3. Keyboard: person typing.
4. Outside: person looking at anything else beyond the three previous categories.
We analyzed the videos from the EyeTracker device based on these
categories, and represented the interaction graphically (Figure 1c and 1d). These
preliminary results were supported by the observations made in-person at Stanford

and in the 3D Team Neighborhood showing where the students and their avatars
gazed as well as their discourse dynamics. We used the notations “on task on screen”
and “off task engaged-observing screen content” for the situations in which the
students’ gaze was focused on the GoToMeeting or 3D Team Neighborhood content
on screen. The cases in which the students were “off task and disengaged” indicated
that their gaze was off screen, or on screen multitasking doing other activities
unrelated to the project or ongoing discourse. Five situations were observed:
 one student on task on screen - the other(s) off task but engaged-observing
content on screen or taking notes,
 one student on task on screen - the other student on a different task that is
directly related to the task and topic that is discussed,
 none on task but engaged-observing screen or taking notes.
 one student on task on screen - the other(s) off task and disengaged, i.e. looking
off screen or multitasking on screen,
 none on task and disengaged, (e.g., having a side conversation that is not related
to task at hand or each multitasking, performing different tasks such as working
on other homework, email, browsing, chatting online).

Figure 1. Preliminary Results: (a) GoToMeeting of Island Team Meeting – Structural
Engineer Shares an Image of Structural Component Options to be considered by
Architect, Construction Manager. (b) 3D Team Neighborhood Screenshot in Teleplace
of Ridge Team Meeting Displays from Left to Right: Weekly Task List, Structural
Detail Sketch, Architectural Floor Plan, Construction 4D CAD, Brainstorm Panel. (c)
and (d) Distribution of time allocation by tasks during meetings in GoToMeeting and
3D Team Neighborhood.

From the preliminary qualitative observations at Stanford and in the 3D Team
Neighborhoods we found that:
 The students’ and their avatars’ gaze was focused on the 3D Team Neighborhood
project content on screen for significantly longer periods of time and more often
compared to meetings held using GoToMeeting application sharing.
 The students did little or no multitasking unrelated to the project task when
running their meetings in the 3D Team Neighborhood. However, multitasking
related to different projects and other homework was the typical behavior during
meetings run through GoToMeeting Webconferencing.

From the preliminary quantitative analysis of the EyeTracker data, it is interesting to
observe that meetings held in the 3D Team Neighborhood kept participants’ attention
on the screen 24% more of the time than meetings held with GoToMeeting. The
difference between applications is significant (χ² = 26.032, df = 3, p < 0.01).
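
A test of this form can be computed as a chi-square test of independence on counts of observation intervals per gaze category under the two environments; with two environments and four categories the degrees of freedom are (2-1)x(4-1) = 3, matching the value reported above. The sketch below (Python with SciPy) uses illustrative counts only, not the study data.

from scipy.stats import chi2_contingency

# Illustrative counts of 12-second observation intervals per gaze category
# (rows: GoToMeeting, 3D Team Neighborhood; columns: screen, notes,
# keyboard, outside). These numbers are hypothetical placeholders.
observed = [
    [50, 12, 10, 18],   # Webconference with application sharing
    [74,  8,  4,  4],   # 3D Team Neighborhood
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.4f}")
# p < 0.01 would indicate that the time-allocation profiles differ
# significantly between the two environments.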

DISCUSSION

These preliminary results show that participants tend to visually engage in the
3D Team Neighborhood environment for more time and more frequently than they do in
the GoToMeeting with application sharing environment. These preliminary observations are
supported by previously reported findings in the literature, in which 3D simulated
highly immersive and interactive environments seem to attract the attention and
engagement of the participants in more consistent and efficient ways [10]. It is
important to note that the AEC global teams had very effective team meetings once
they were fluent in using the functionalities of the ICT and embedded them into their
daily work practice. Nevertheless, in the case of webconferencing, participants who
were not presenting, or whose decisions were not directly impacted, tended to multitask
and their gaze attention was directed elsewhere. In contrast, the 3D Team Neighborhood
collaboration environment created a rich multimedia and multimodal context that
kept the participants almost continuously engaged in the activity and discourse.
Attention has been studied extensively by scholars in psychology, pedagogy,
neuroscience, communication and cognitive science. This study is a first step in a
long-term effort with many opportunities to extend the breadth and depth
of the experiments, data collection and analysis, as well as to increase the number of data points.

REFERENCES

1. Jiazhi, O., et al., Analyzing and predicting focus of attention in remote collaborative tasks, in
Proceedings of the 7th international conference on Multimodal interfaces. 2005, ACM:
Trento, Italy.
2. Vertegaal, R., B. Velichkovsky, and G.v.d. Veer, Catching the eye: management of joint
attention in cooperative work. SIGCHI Bulletin, 1997. 29(4).
3. Gutwin, C. and S. Greenberg, The importance of awareness for team cognition in distributed
collaboration, Report 2001-696-19. 2001, University of Calgary: Alberta, Canada.
4. Vertegaal, R., et al., Eye gaze patterns in conversations: there is more to conversational agents
than meets the eyes, in Proceedings of the SIGCHI conference on Human factors in
computing systems. 2001, ACM: Seattle, Washington, United States.
5. Antti, O., et al., Interaction in 4-second bursts: the fragmented nature of attentional resources in
mobile HCI, in Proceedings of the SIGCHI conference on Human factors in computing
systems. 2005, ACM: Portland, Oregon, USA.
6. Fruchter, R., Architecture/Engineering/Construction Teamwork: A Collaborative Design and
Learning Space. ASCE Journal of Computing in Civil Engineering, 1999. 13 (4): 261-270.
7. Fruchter, R., The Fishbowl: Degrees of Engagement in Global Teamwork. LNAI, 2006: 241-257.
8. GoToMeeting Webconferencing [cited 2010 4/26]; Available from:
http://www.gotomeeting.com/fec/
9. Teleplace: Virtual Worlds Collaboration Solutions for Program Management, Virtual Operations
Centers. [cited 2010 4/26]; Available from: http://www.teleplace.com/.
10. Reeves, B. and R. Leighton, Total Engagement: Using Games and Virtual Worlds to Change the
Way People Work and Businesses Compete. 2009, MA: Harvard Business School Publishing.
Teaching Design Optioneering: A Method for Multidisciplinary Design
Optimization

David Jason Gerber1 and Forest Flager2


1 Assistant Professor, School of Architecture, University of Southern California, CA 90089; email: dgerber@usc.edu
2 Ph.D. Candidate, Department of Civil and Environmental Engineering, Stanford University, CA 94305; email: forest@stanford.edu

ABSTRACT
This paper describes a Design Optioneering methodology that is intended to
offer multidisciplinary design teams the potential to systematically explore a large
number of design options much more rapidly than currently possible using
conventional methods. Design Optioneering involves first defining a range of design
options using associative parametric design tools; then coupling this model with
integrated simulation-based analysis; and, finally, using computational design
optimization methods to systematically search though the defined range of
alternatives in search of design options that best achieve the problem objectives while
satisfying any constraints. The Design Optioneering method was tested by students
as part of a parametric design course at Stanford University in the spring of 2010. The
performance of the method are discussed in terms of the student’s ability to capture
the design intent using parametric modeling, integrate expert analysis domains, and
select a preferred option among a large number of alternatives. Finally, the potential
of Design Optioneering to reduce latency, further domain integration, and enable the
evaluation of more design alternatives in practice is discussed.

INTRODUCTION
Current Computer-Aided Design and Engineering (CAD/CAE) tools allow
architects and engineers to simulate many different aspects of building performance
(e.g. financial, structure, energy, lighting) (Fischer 2006). However, designers are
often not able to leverage simulation tools early in the design process because of the
time required to complete a design cycle involving the generation and analysis of a
design option using model-based CAD/CAE tools. It often takes multidisciplinary
design teams longer than a month to complete a single design cycle (Flager and
Haymaker 2007). High design cycle latency in current practice has been attributed to
software interoperability (Gallaher, O’Connor et al. 2004), lack of collaboration
between design disciplines (Akin 2002; Zhao and Jin 2003; Holzer, Tengono et al.
2007), among other issues.
Associative parametric CAD tools have been shown to reduce latency
associated with the generation of design options (Sacks, Eastman et al. 2005) as well
as to manage greater project complexity (Gerber 2009). A parameter in this context is
a design variable that can be associated or related to other parameters to define


particular design logic. The designer can then manipulate a single parameter or set of
parameters to rapidly generate many unique design configurations (Szalapaj 2001).
Parametric modeling as a concept and mathematical construct (e.g. parametric curves
and surfaces), has been around for years with the first parametric CAD tools
emerging in 1989 (Eastman, McCracken et al. 2001). However, providing tools that
enable designers to readily develop these robust and rigorous input models that
describe their design intent in order to guide design generation remains a challenge
(Shea, Aish et al. 2005; Gerber 2007).
The use of associative parametric tools to reduce design cycle latency in
current AEC practice has been limited by two primary factors. First, there are
inherent differences in the way architects and engineers iteratively define and
represent design problems (Akin 2002). Therefore, it is often difficult for these
different disciplines to agree on a common parametric representation of the design,
particularly when opportunities for collaboration are limited by organizational and/or
geographic boundaries (Burry and Kolarevic 2003; Holzer, Tengono et al. 2007).
Few methods have been developed to instruct practitioners on how to use parametric
methods in collaborative, multidisciplinary environments, and those developed have
not been pervasively disseminated. Second, there is limited interoperability between
parametric CAD tools commonly used by architects and CAE tools commonly used
for engineering analysis. With a few exceptions (Shea, Aish et al. 2005; Holzer,
Hough et al. 2007; Flager, Welle et al. 2009), engineers are not able to provide timely
simulation-based performance feedback on the parametric variations generated by the
design team.
This paper introduces the Design Optioneering methodology that aims to
address the limitations associated with parametric modeling discussed above. The
paper is presented in the following structure. First, a Design Optioneering method is
described. Second, the context of initial use of the method by students in a
parametric design course at Stanford University is described and the findings of the
use-case are presented. Finally, the potential and implications of Design
Optioneering to reduce latency and enable the evaluation of more design alternatives
in practice are discussed.

THE DESIGN OPTIONEERING METHODOLOGY


The Design Optioneering methodology for Multidisciplinary Design Optimization
(MDO) consists of three primary activities which are described in more detail below:
problem formulation, process integration, and design exploration / optimization.

Problem Formulation
The first step is to formally define the design problem including the design objective,
variables and constraints. The design objective is the goal of the optimization
exercise and generally involves maximizing or minimizing a real function (e.g. cost,
energy consumption, etc.). The constraints are the criteria that a design option must
satisfy to be considered feasible. Finally, the variables are the parameters of the
design that can be manipulated within a defined range to achieve the objectives
and satisfy the constraints.
The definitions of the problem objective, constraints and variables are then used to
inform the creation of an associative parametric digital model. This involves creating
a parametric representation of the project in CAD that is driven by the design
variables specified. Designers can then test the parametric model by modifying the
variable values and observing the resulting design configuration to ensure that it is
consistent with design intent. This process is often iterative; observations made from
variable testing can lead to the selection of new variables and/or ranges as well as the
refinement of parametric model logic.
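
As an illustration of what such a formalization can look like in code rather than in a CAD tool, the sketch below (Python; all variable names, ranges, and numbers are hypothetical) records an objective, bounded design variables, and constraints, and checks a trial variable assignment against them, mirroring the variable-testing loop described above.

# Hypothetical formalization of a design problem: objective, variables
# with allowable ranges, and constraints expressed as predicates.
variables = {
    "floor_to_floor_height_m": (3.0, 4.5),
    "facade_panel_depth_m":    (0.2, 1.0),
    "num_floors":              (20, 60),
}

def objective(design):
    # Placeholder objective, e.g. a cost proxy to be minimized.
    return design["num_floors"] * design["floor_to_floor_height_m"] * 1000.0

constraints = [
    # Each constraint returns True when the design option is feasible.
    lambda d: d["num_floors"] * d["floor_to_floor_height_m"] <= 250.0,  # height limit
    lambda d: d["facade_panel_depth_m"] >= 0.3,                         # shading need
]

def in_range(design):
    return all(lo <= design[name] <= hi for name, (lo, hi) in variables.items())

trial = {"floor_to_floor_height_m": 3.6, "facade_panel_depth_m": 0.5, "num_floors": 40}
print("within ranges:", in_range(trial))
print("feasible:", all(c(trial) for c in constraints))
print("objective:", objective(trial))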

Process Integration
The goal of this activity is to create an integrated process model that includes the
parametric CAD model created in the previous activity as well as any CAE models
used to assess design objectives and constraints. Process integration involves first
defining the information dependencies between all of the CAD/CAE tools used in the
design process. Next, the data flow between the tools is automated to reduce design
cycle latency that is pervasive in current practice. Finally, the integrity of the data
flow and the analysis representation is checked by modifying the variables in the
parametric CAD model and ensuring the necessary analysis configurations update
correctly to ensure rapid evaluation of all design domains.
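
The integrated process model can be pictured as a small pipeline in which the parametric model regenerates geometry from the design variables and each downstream analysis consumes that geometry, so that a single call re-runs the whole chain for new variable values. The sketch below is a generic stand-in in Python with hypothetical placeholder functions; in practice this wiring is typically done in a process-integration environment between commercial CAD/CAE tools rather than hand-written code.

# A toy integrated process model: the parametric CAD step produces a
# geometry description, downstream analyses consume it, and one function
# call re-runs the whole chain for a new set of variable values.
# All functions are hypothetical placeholders for CAD/CAE tool calls.

def parametric_model(variables):
    # Stand-in for regenerating CAD geometry from design variables.
    width, height = variables["width_m"], variables["height_m"]
    return {"floor_area_m2": width * width, "height_m": height}

def structural_analysis(geometry):
    # Stand-in for an FEA run; returns a fictitious utilization ratio.
    return {"utilization": geometry["height_m"] / 100.0}

def cost_analysis(geometry):
    # Stand-in for a spreadsheet cost model.
    return {"cost": geometry["floor_area_m2"] * geometry["height_m"] * 50.0}

def run_design_cycle(variables):
    geometry = parametric_model(variables)
    return {**structural_analysis(geometry), **cost_analysis(geometry)}

print(run_design_cycle({"width_m": 30.0, "height_m": 80.0}))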

Design Exploration / Optimization


Once the design problem is formalized and an integrated process model has
been created, the designer is capable of completing a design cycle much more rapidly
than with conventional processes. However, exploring the design space using manual trial
and error methods is still often impractical due to the large number of possible
alternatives (Flager, Aadya et al. 2009). In this case, computational techniques such
as Design of Experiments or optimization algorithms can be applied to systematically
explore the design space in an automated fashion.
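
A minimal illustration of such an automated search is a full-factorial Design of Experiments over a small set of variable levels, evaluating each combination and keeping the best feasible one. The evaluation function, levels, and constraint below are hypothetical stand-ins for the integrated process model, not the tools used in the course.

from itertools import product

# Full-factorial DOE over two hypothetical design variables, keeping only
# options that satisfy a constraint and reporting the best feasible one.
widths_m  = [25.0, 30.0, 35.0]
heights_m = [60.0, 80.0, 100.0, 120.0]

def evaluate(width_m, height_m):
    cost = width_m * width_m * height_m * 50.0   # objective to minimize
    drift_ok = height_m / width_m <= 3.5         # stand-in constraint
    return cost, drift_ok

best = None
for width_m, height_m in product(widths_m, heights_m):
    cost, feasible = evaluate(width_m, height_m)
    if feasible and (best is None or cost < best[0]):
        best = (cost, width_m, height_m)

if best:
    print(f"best feasible option: width {best[1]} m, height {best[2]} m, cost {best[0]:,.0f}")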

COURSE BACKGROUND
The course “CEE 135A/235A: Parametrics - Applications in Architecture and
Product Design” was originally conceived by the authors in 2008 to explore how to
capture and communicate design intent using parametric methods at both an
architectural and a product scale. The course was first offered in the fall term of 2008
to undergraduates and graduates in product design, architectural design and
engineering disciplines at Stanford University. The course evolved through two
quarter offerings at Stanford University’s Civil and Environmental Engineering
(CEE) Department. The more recent course offered in the spring of 2010 included
the addition of the Design Exploration module which involved coupling the
parametric model with integrated simulation-based analysis and using computational
design exploration and optimization techniques. For this course, which is described in
more detail below, author Flager developed the curriculum and served as the
primary instructor and author Gerber participated as a guest lecturer.

Objectives
The pedagogical goals for the course are as follows: (1) be proficient with
parametric modeling methods and understand the strengths and limitations of these

methods with regard to capturing design intent; (2) learn to communicate design
intent to others in a multidisciplinary team; (3) understand how to integrate
parametric CAD tools with CAE tools; (4) be able to critically assess a given design
logic/process and its impact on the range of possible solutions, emphasizing the value
of solution space thinking; and, (5) hear from leading design practitioners about how
they are applying parametric design concepts to their own work.
The primary research goal for the course was to get user feedback on the
Design Optioneering method in the following areas: (1) effectiveness in capturing
design intent; (2) quality of performance feedback provided; (3) ability to
systematically search through the design space in search of preferred designs; and (4)
ease of use. A second research goal was to document how multidisciplinary design
teams collaborate using the Design Optioneering method and to compare these
observations to conventional design methods.

Structure
The course was organized in two modules (1) defining the design space and (2)
exploring the design space. The former module provided instruction related to
parametric modeling methods and communicating and abstracting design intent into
computable and shareable constructs. The latter module dealt with methods for
integrating parametric CAD with CAE tools as well as computational optimization
and sampling methods to systematically search the design space for high performance
solutions. Class time was divided approximately equally between lecture and studio /
workshop components. The lectures are structured to give students a background in
parametric design and its applications. Topics include design theory, precedents in
architecture and product design, as well as methods for mapping design intent to
parametric logic and design exploration. Workshops are designed to provide students
with hands-on experience with parametric modeling and simulation software that will
be used to complete the design exercises. The workshop time is also used to mentor
individuals and teams on their design projects.

Assignments
The primary assignments for the course were the completion of three design
projects: (1) beverage container, (2) building façade, and (3) tall building (final
project). The first two design exercises are described below and the final project is
explained in the following section.
The objective of the first assignment was to introduce the class to associative
parametric modeling methods. The brief was to select a single factor to drive the
physical form of a beverage container (e.g. ambient temperature, user age, etc).
Students began by sketching what they thought would make a good beverage
container for the extreme cases given the chosen driver and then identified at least
three geometric dimensions that responded to the customer needs identified from
their driving concept. Next, students described the dependencies between the design
driver, the customer needs, and the geometric parameters using a logic diagram.
Finally, the students created a 3-D parametric CAD model of the beverage container
and documented the possible geometric variations.
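
As a toy stand-in for the parametric logic diagrams produced in this exercise, and not any student's actual model, the sketch below (Python; driver, associations, and values are all hypothetical) shows how a single driver such as ambient temperature could be associated with several geometric dimensions so that varying the driver regenerates the container geometry.

# Hypothetical associative parametric logic: one driver (ambient
# temperature) drives three geometric dimensions through simple
# linear associations. All coefficients are illustrative.
def container_dimensions(ambient_temp_c):
    t = max(ambient_temp_c, 0.0)
    wall_thickness_mm   = 2.0 + 0.05 * t    # hotter: thicker insulating wall
    grip_diameter_mm    = 60.0 + 0.3 * t    # hotter: wider grip
    opening_diameter_mm = 70.0 - 0.4 * t    # hotter: smaller opening
    return {
        "wall_thickness_mm": round(wall_thickness_mm, 1),
        "grip_diameter_mm": round(grip_diameter_mm, 1),
        "opening_diameter_mm": round(opening_diameter_mm, 1),
    }

for temp in (-10, 5, 20, 35):
    print(temp, container_dimensions(temp))
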
The second assignment was to design a façade system for a series of rail

station canopies to be built in various Chinese cities. The functional requirements for
the façade were to provide shading from direct sunlight during the summer and to
allow solar penetration and maximum day lighting during the winter. The design
challenge was to create a single parametric façade panel that could satisfy the
requirements above for the specified canopy geometries and geographic locations.
The second project instructed students in the value of developing and prototyping a
design logic for repeatable deployment, where each instance was topologically
identical but geometrically unique given the varying context of the panel. As with
the first assignment, the deliverables were a parametric logic diagram and 3-D
parametric CAD model that could be reconfigured to each of the specified station
locations.

COURSE FINAL PROJECT


For the final project, the students worked in teams of three to apply the
Design Optioneering Methodology to a group design project. The project brief was the
design of a tall building for a pseudo corporate client to be located in the Middle East.
Each student played the role of a specialist and was responsible for the design of a
particular subsystem of the tower: architecture, structure, and façade. The assignment
was divided into two parts: (1) subsystem design involving the optimization of the
tower form considering only a single design discipline; and, (2) system design
involving the optimization of the tower form considering all of the design disciplines
simultaneously. The application of the Design Optioneering method to address the
final project brief is discussed below.

Problem Formulation
The objectives and constraints for each subsystem were included in the design
brief as described below:

Architecture
  GOALS: Appropriateness to site context; Symbolize commitment to sustainable
  design; Maximize IRR over a 10-year period
  CONSTRAINTS: Net Leasable Area (NLA)

Structure
  GOALS: Minimize cost of structure
  CONSTRAINTS: Material stress; Global tower deflection; Net Leasable Area (NLA)

Facade
  GOALS: Maximize daylight factor
  CONSTRAINTS: Solar gain; Façade cost; Net Leasable Area (NLA)

Based on the design brief, each multidisciplinary student team collaboratively
developed a design scheme through charrettes and then created a digital parametric
model of the tower design concept. The parametric model was driven by a set of
variables that the design team planned to vary to optimize the performance of the tall
building concept.

The construction of the parametric model was one of the most challenging
aspects of the assignment. Student teams generally took one of two approaches to
parametric model creation: the first approach was to create the parametric model
collaboratively, with essentially all team members participating concurrently in the
modeling process. Teams that used this approach were generally satisfied with the
quality of the parametric model, but felt that having all team members participating
concurrently in the modeling process limited the productivity of the team.
Alternatively, some teams assigned the architect the role of creating the parametric
model with relatively little input from others. In this scenario, the architect found it
difficult to communicate the parametric logic to the rest of the design team. In
addition, the other team members often found the parametric model deficient in that it
did not afford them enough flexibility to explore desired design variations that were
significant for their particular discipline.

Process Integration
A variety of analysis tools were required to assess the performance of a given design
option with respect to the design objectives and constraints defined above. The
software tools used and their purpose are described below.

Software Name: DIGITAL PROJECT
  Description: Parametric CAD
  Parameters Calculated: IRR, NLA, Daylight Factor, Solar Gain

Software Name: OASYS GSA
  Description: Structural Finite Element Analysis (FEA)
  Parameters Calculated: Material Stress, Global tower deflection

Software Name: MICROSOFT EXCEL
  Description: Spreadsheet
  Parameters Calculated: Construction Cost, IRR

Phoenix Integration’s ModelCenter® software was used to automate the execution of
the commercial software tools described above and to integrate data between the
different domain specific CAD/CAE applications in a single common environment.
In general, students were able to successfully create integrated process
models. Many students found it difficult initially to create analysis models that were
robust for all possible configurations of the parametric model, but felt that they
became more stilled in this area as the assignment progressed. Students also
commented that while these models enabled significant reductions in design cycle
time compared to conventional methods, they are required substantially more time to
set up. As a result, students had to wait much longer than expected to receive
preliminary feedback on the performance of their design concept and, therefore, had
relatively little time to revise the model parameterization if they were not satisfied
with the initial iteration.

Design Exploration / Optimization


Once the students had created and tested integrated process models for each
subsystem as described above, computational methods were applied to iterate
analyses of the design across the range of design variables to best achieve the
specified objectives while satisfying the constraint(s). In this case, students used
Design of Experiments techniques (Booker 1998) and the SEQOPT optimization
algorithm (Booker, Dennis et al. 1999; Audet, Dennis Jr et al. 2000) to explore the
design space. Sample results of the optimization process for two tower design
concepts are shown below.

Figure 1: Sample final project results showing optimal tall building forms from the
perspective of each subsystem (courtesy of John Basbagill, Spandana Nakka and Jieun Cha)

The students found the computational techniques provided to be extremely
valuable to the design exploration process for three reasons: (1) they allowed them to
evaluate many more design alternatives than otherwise possible; (2) they led them to
counterintuitive design solutions that they otherwise would not have discovered; and,
(3) the large sample size provided the design team with peace of mind that they had
indeed found the best-performing design configuration given the set of variables
considered. Suggestions to improve the usability of these tools included making the
optimization process more interactive by making the results viewable in real time. In
addition, the students would have appreciated more formal training in computational
optimization techniques before beginning the design project.

CONCLUSIONS
The Design Optioneering methodology was presented and applied by students
to a multidisciplinary design project involving the optimization of a tall building
massing considering architectural, structural, and façade performance. In general, the
students felt that Design Optioneering enabled them to substantially reduce design
cycle latency and to evaluate more design alternatives than conventional design
methods. It was observed that the method required a substantially different approach
to design than the students were accustomed to. Perhaps the most significant
changes involved the requirement to define the complete range of design alternatives
at the beginning of the process and the relatively long set up time required to create
the integrated process models. At the beginning of the class, students struggled to
understand how a given design parameterization might impact the range of design
forms and performance, but the students became much more skilled in this area with
practice. Further research is underway to make Design Optioneering more
collaborative and interactive as well as to understand what types of design problems
are best suited for this method.

ACKNOWLEDGEMENTS

We would like to thank John Barton, the director of the Architecture program
at Stanford University, Professor Martin Fischer and USC’s School of Architecture
Dean, Qingyun Ma for the opportunity to teach the course. The course was supported
by the Stanford Institute for Creativity and the Arts (SiCa), the Department of Civil
and Environmental Engineering, and the Center for Integrated Facility Engineering
(CIFE).

REFERENCES
Akin, Ö. (2002). Variants in Design Cognition. Design Knowing and Learning Cognition in Design
Education. C. Eastman, M. McCracken and W. Newstetter. Amsterdam, Elsevier: 105–124.
Audet, C., J. Dennis Jr, et al. (2000). A surrogate-model-based method for constrained optimization.
Eighth AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and
Optimization, AIAA-2000-4891.
Booker, A. (1998). Design and analysis of computer experiments. 7th AIAA/USAF/NASA/ISSMO
Symposium on Multidisciplinary Analysis and Optimization. St. Louis, MO AIAA.
Booker, A. J., J. E. Dennis, et al. (1999). "A rigorous framework for optimization of expensive
functions by surrogates." Structural and Multidisciplinary Optimization 17(1): 1-13.
Burry, M. and B. Kolarevic (2003). "Between intuition and process: Parametric design and rapid
prototyping." Architecture in the Digital Age: Design and Manufacturing. Ed. Branko
Kolarevic. London: Taylor & Francis: 54-57.
Eastman, C. M., W. M. McCracken, et al. (2001). Design Knowing and Learning: Cognition in Design
Education. Oxford, UK, Elsevier Science Ltd.
Fischer, M. (2006). Formalizing Construction Knowledge for Concurrent Performance-Based Design.
Intelligent Computing in Engineering and Architecture: 186-205.
Flager, F., A. Aadya, et al. (2009). Impact of High Performance Computing on Discrete Structural
Member Sizing Optimization of a Stadium Roof Structure. CIFE Technical Report Stanford,
CA, Stanford University: 1-10.
Flager, F. and J. Haymaker (2007). A Comparison of Multidisciplinary Design, Analysis and
Optimization Processes in the Building Construction and Aerospace Industries. 24th
International Conference on Information Technology in Construction. I. Smith. Maribor,
Slovenia: 625-630
Flager, F., B. Welle, et al. (2009). "Multidisciplinary Process Integration and Design Optimization of a
Classroom Building." Information Technology in Construction 14(38): 595-612.
Gallaher, M. P., A. C. O’Connor, et al. (2004). Cost Analysis of Inadequate Interoperability in the U.S.
Capital Facilities Industry. Gaithersburg, Maryland, National Institute of Standards and
Technology. NIST GCR 04-867: 210.
Gerber, D. J. (2007). Parametric Practices: Models for Design Exploration in Architecture.
Architecture. Cambridge, MA, Harvard Graduate School of Design. D.Des.
Gerber, D. J. (2009). The Parametric Affect: Computation, Innovation and Models for Design
Exploration in Contemporary Architectural Practice. Design and Technology Report Series.
Cambridge, MA, Harvard Design School.
Holzer, D., R. Hough, et al. (2007). "Parametric Design and Structural Optimisation For Early Design
Exploration." International Journal of Architectural Computing, 5(4): 625-643.
Holzer, D., Y. Tengono, et al. (2007). Developing a Framework for Linking Design Intelligence from
Multiple Professions in the AEC Industry. Computer-Aided Architectural Design Futures
(CAADFutures) 2007: 303-316.
Sacks, R., C. M. Eastman, et al. (2005). "A target benchmark of the impact of three-dimensional
parametric modeling in precast construction." PCI journal 50(4): 126.
Shea, K., R. Aish, et al. (2005). "Towards integrated performance-driven generative design tools."
Automation In Construction 14(2): 253-264.
Szalapaj, P. (2001). CAD Principles for Architectural Design, Architectural Press.
Zhao, L. and Y. Jin (2003). Work Structure Based Collaborative Engineering Design. ASME 2003
Design Engineering Technical Conferences and Computers and Information in Engineering
Conference. Chicago, IL. DETC2003/DTM-48681: 1-10.
Synectical Building of Representation Space: a Key to Computing Education

Sebastian Koziolek1 and Tomasz Arciszewski2


1 Wroclaw University of Technology, Institute of Machine Design and Operation, Lukasiewicza 7/9, 50-371 Wroclaw, Poland (Fall 2010, Visiting Professor, George Mason University, Fairfax, VA, USA), phone: 0048-71-320-4285, e-mail: sebastian.koziolek@pwr.wroc.pl
2 George Mason University, the Volgenau School of Engineering, Civil, Environmental and Infrastructure Engineering Department, 4400 University Drive, Fairfax, VA 22030, USA, phone: (703) 993-1513, email: tarcisze@gmu.edu

ABSTRACT

This paper proposes a method for building a design representation space capturing
domain knowledge and at the same time creating an opportunity to acquire
knowledge outside the problem domain. This dual emphasis increases the potential
for producing novel designs. The method combines the advantages of heuristic
thinking based on Synectics with traditional systematic and analytical thinking and is
intended mostly for use in computing education. It will allow students to develop
a fundamental understanding of how to acquire knowledge necessary for conceptual
design while preserving their ability to explore various domains and to expand
a representation space.

INTRODUCTION

Design innovation depends on the novelty of design concepts, products of conceptual
design. Unfortunately, very often design concepts are simply a reflection of the
present customer needs and the entire design process is focused only on satisfying
these needs. In such a process mostly the problem-specific knowledge is exploited
and it rarely produces truly innovative design concepts, i.e. patentable concepts
advancing evolution of engineering systems [11].
From the knowledge perspective, conceptual design can be considered as a two- or
three-stage process. First, design knowledge is acquired. In the case of conceptual
design entirely conducted by humans, the acquired knowledge is used directly in the
second stage of concept development. When computers are used, in the second stage
the acquired knowledge must be formally presented as a knowledge representation
space, in the form of decision rules and/or ontologies. Then, in the third stage this
formal knowledge may be used for the concept development.
In both cases of human and computer-aided design the nature of acquired knowledge
is crucial for the conceptual design process and its results. If knowledge is acquired
only from the problem-specific domain, the subsequent development of design

concepts can be considered as exploitation of a design representation space prepared
only for a specific domain. If knowledge is acquired also from outside this domain,
from other engineering or science domains, then the development of design concepts
can be considered as exploration. Obviously, this type of conceptual design is much
more effective than exploitation-based design as far as innovation is concerned [6].
Presently, engineering education is focused mostly on the analytical aspects of
design and does not sufficiently address issues associated with conceptual design. If
conceptual design is discussed at all, it usually is considered as an exploitation
process, which can be deductively conducted using, for example, various search
methods. To advance engineering design education, a much better and more comprehensive
understanding of conceptual design based on exploration must be developed,
including an understanding of methods and tools.
This paper is focused on the fundamental issue of how to acquire knowledge to be used
in computer-aided conceptual design, knowledge which will allow exploration and
ultimately will create opportunities for the development of novel design concepts.
As a result of our studies, a method is proposed, which is intended for use in
engineering education to prepare future engineers for their challenges as explorers
and inventors.

INVENTIVE CONCEPTUAL DESIGN METHOD

The proposed method is based on the following assumptions:


 It is intended for inventive conceptual design
 Inventive conceptual design is a process leading to the development of design
concepts, which are new, non-obvious and surprising, as well as potentially
patentable concepts
 A design concept is an abstract description of a given engineering system in
terms of mostly symbolic attributes
 Inventive conceptual design is a process of learning/knowledge acquisition
related to the problem and of acquiring knowledge about inventive design
concepts [2]
 It is intended for a team conceptual design
 It uses Synectics [8] for knowledge acquisition and for generation of design
concepts
 Knowledge acquisition is conducted using “Knowledge Acquisition
Network”
 “Knowledge Acquisition Network” utilizes both internal and external
knowledge acquisition processes

The method is based on a five-stage process (Fig. 1), including such stages as:
Problem identification, Team selection, Problem formulation, Knowledge acquisition
and development of design concepts, Concept evaluation and selection.
The first three stages are preparation for the most important and difficult Stage 4 [3],
called "Knowledge Acquisition and Development of Design Concepts". In the first
stage, called "Problem Identification", the Team Leader identifies the problem and
presents it in a descriptive form. Next, he/she determines the relative position of the
problem with respect to the State of the Art (SOTA). In the second stage, the Team
Leader selects team members, called "Synectors". The ideal team of Synectors
should be balanced considering at least eight main characteristics, listed below:
1. Domain Differentiation: Synectors should represent various domains, for
example four engineers and two-three professionals from non-engineering
domains (for example, biology, law, history, etc.)
2. Emotional involvement: differentiated levels of motivation
3. Thinking styles: global and local thinkers, members with legislative,
executive, and judicial thinking styles, etc.
4. Differentiated Age: optimal age is 25-40, but all ages are acceptable
5. Administrative Experience: one or two experienced executives understanding
management
6. Entrepreneurship: one or two entrepreneurs focused on action
7. Job Experience: Synectors should be experienced and successful
8. Differentiated Education: as many domains as possible, including Art,
Engineering, Biology, etc.
9. The “Almost” Individual included: people who are not very successful at
work but have potential

Figure 1. Conceptual design process

In Stage 3, called "Problem Formulation", the entire team builds a group
understanding of the problem and works on the formulation of the specific
design tasks. In Stage 4, called "Knowledge Acquisition and Development of
Design Concepts," all knowledge is acquired and design concepts are developed.
This integrated process is based on the assumption that human development of new
ideas (design concepts in our case) requires knowledge and is inspired by knowledge
from various domains. For this reason, the process of knowledge acquisition in the
proposed method is conducted using Synectics and both internal and external
Synectics sessions. Our idea of using Synectics for knowledge acquisition has been
inspired by great inventors, who also learned using unconventional methods. For
example, Leonardo da Vinci learned human anatomy using human cadavers to
improve his inventions [5] and Thomas Edison sought knowledge in poetry, which
his mother taught him while schooling him at home [6]. Another great inventor,

Genrich Altshuller [18] learned inventive knowledge by studying thousands of patents
and looking for patterns in the evolution of various engineering systems.
In the proposed method, the integrated process of knowledge acquisition and
development of design concepts is called the “Knowledge Acquisition Network”. In
this case, knowledge and new design concepts come from internal and external
Synectors participating in Synectics sessions. First, an internal Synectics session is
conducted. In this session, six to eight Synectors use the problem description and the
problem formulation and try to acquire knowledge both from the problem domain
and from other unrelated domains. This initial knowledge is then reformulated and
its context changed using four analogies (Figure 2), including: Personal Analogy,
Direct Analogy, Symbolic Analogy and Fantasy Analogy.

Figure 2. Illustration of Synectics Analogies

When an internal Synectics session is conducted, the most fantastic and infeasible
concepts are created. They are not useless because they can be considered as seeds in
an evolutionary process, which may ultimately lead to novel and feasible concepts. Next, the
initial concepts created using Analogies are transformed by Synectors in accordance
with selected metaphors [4]. As a result of this, the concepts gradually become
better and their feasibility is improved. This part of the session is very important,
because the conducted process leads to the seed questions for the External Synectics
Session. Then, all Synectors distribute the questions through the entire Knowledge
Acquisition Network, searching for new sources of inspirations and concepts, if
possible. An External Synectic Session could be something small like a short
conversation with friends or family, or something big like an international
teleconference or a forum on the Internet. All knowledge acquired from these
interactions is then presented in the second internal Synectics Session [8]. In this
session the most interesting, novel, and plausible design concepts are produced. After
the development of a class of design concepts, Stage 5, called “Concept Evaluation”
is conducted (Figure 1). In this stage, the produced design concepts are evaluated
first and next the best concept, or several comparable concepts in terms of their
COMPUTING IN CIVIL ENGINEERING 895

novelty, utility, or feasibility are selected. At this stage, concepts are usually
presented in the descriptive form. Their descriptions are used then to identify
symbolic attributes and their values. Next, other possible values for the individual
attributes are determined, and in this way the entire ranges of variation are obtained
for all identified attributes. Finally, these attributes and their values are used to
construct the design knowledge representation space for a given problem. The
developed design representation space allows the preparation of patent claims and/or
design specifications for the detailed design.
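To make this last step more concrete, the sketch below shows one simple way such a representation space could be stored once symbolic attributes and their value ranges have been identified. It is only an illustration: the attribute names and values are invented here (loosely echoing the barrier concepts in Table 1), and the paper does not prescribe any particular data structure.

// Hypothetical sketch of a design knowledge representation space: each symbolic
// attribute has a range of admissible values, and a candidate concept is one value
// per attribute. Attribute names and values below are illustrative only.
#include <cstddef>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using RepresentationSpace = std::map<std::string, std::vector<std::string>>;

int main() {
    RepresentationSpace space = {
        {"Barrier type",   {"acoustic", "thermal", "pressure", "mechanical"}},
        {"Energy source",  {"electric", "hydraulic", "solar"}},
        {"Operation mode", {"continuous", "triggered by detection"}},
    };

    // The size of the space is the product of the attribute ranges; each combination
    // is a point in the space and a candidate design concept to be evaluated.
    std::size_t combinations = 1;
    for (const auto& [attribute, values] : space) {
        std::cout << attribute << ": " << values.size() << " possible values\n";
        combinations *= values.size();
    }
    std::cout << "Candidate concepts in this space: " << combinations << '\n';
}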

VALIDATION OF METHOD

An Internal Synectics Session was held at George Mason University in November
2010. The team had four members with different personal and professional
backgrounds, who represented various engineering domains and were at different
stages in their respective professional careers. The team included an MS student, a
Ph.D. student, a junior faculty member, and a 75-year-old student and inventor
holding 50 patents.
Problem Identification. Today, a threat to the lake fisheries and recreation industry
in the Great Lakes comes from two related invasive species (Asian Carp and
Silver Carp) that escaped into the Mississippi River in Arkansas during the mid-
1990s and have been moving upstream through the watershed ever since. Both carp
are voracious eaters, and they crowd out native species. They also create hazards for
boaters, as they are easily stimulated to jump out of the water by the sound and
vibration of motors. These fish may grow to 50 pounds or more and become
dangerous projectiles to boaters when jumping.
Problem Formulation. The problem was finally formulated as follows:
Keep the Asian and Silver Carp population out of the Great Lakes and satisfy the
following requirements:
1) Minimal change to the time required for barge transit, and minimal additional
operating costs.
2) Strong assurance that the new system will prevent the invasive species from
entering the Great Lakes through Lake Michigan.
Personal Analogy. Concept development using the Personal Analogy is the third
step in the Synectics Session. All concepts developed with the Personal Analogy
describe a personal ability to keep the Asian Carp out of the Great Lakes; thus the
ideas involve sentences such as "I eat," "I block," "I kill," etc. It is important to
enforce this form of concept development in the Personal Analogy stage of the
Synectics Session, because Synectors usually have a tendency to use all analogies at
the same time. Direct idea formulation keeps Synectors focused on using one analogy
at a time, thereby maximizing the effectiveness of the session.
Direct Analogy. In this analogy, Synectors look for similarities in different systems.
The most powerful effect of this part of the session is the use of the Direct Analogy
in the context of energy: the task at this stage of the session was to apply a different
form or source of energy to keep the Asian Carp out of the Great Lakes. Selected
results from the Direct Analogy stage of the Synectics Session are presented in
Table 1.

Table 1. First class of selected concepts developed using Direct Analogy


DA Item Concepts
1 High frequency sounds to repel fish movement
2 Turbulence in the water to prevent fish movement
3 Temperature barriers to prevent fish movement into Great Lakes
4 Pressure barriers to prevent fish movement into Great Lakes
… …

Symbolic Analogy. One of the most powerful analogies is the Symbolic Analogy.
This analogy represents a logical unit by a symbol. Very often, the symbols in this
analogy are natural objects, such as human body parts, trees, leaves, etc. Often, ideas
generated by the Symbolic Analogy could equally well be developed using the Direct
Analogy, but the mere act of looking for symbols affects the development of new
solutions; thus, despite the seemingly similar results, applying only one of them may
be insufficient. Selected results from the Symbolic Analogy stage of the Synectics
Session are presented in Table 2.

Table 2. 1st, 2nd, and 3rd class of concepts developed using Symbolic Analogy

SA Item 1:
 1st class of concepts: Human heart used as a pump in water exchange in the river
 2nd class of concepts: Valves of a combustion engine used to control water pumping; dialysis machine used to clean the water in the river
 3rd class of concepts: Canal Lock System (with the use of a water exchange system)
…

Fantasy Analogy. Another analogy used is the Fantasy Analogy. It is the simplest
analogy for developing the first class of concepts. However, this part of the
Synectics Session is the most difficult in terms of evolving the concepts. The
transformation of fantastic ideas from the first class of concepts into the next one is
the crucial part of the session, because this transformation requires changing a
fantasy into a real, feasible concept.

Attributes determination. After concept selection, the function tree was
decomposed into various attributes in order to describe each function.

Conclusions. The proposed method is intended to bridge an important gap in
computing education between education focused on the traditional analytical use of
computers and education dealing with the advanced use of computers in design,
including inventive conceptual design. In both cases a knowledge representation
space is a must. However, in the case of analysis, a knowledge representation space
is usually known or easy to prepare because of the repetitive nature of the computations.
In the case of inventive conceptual design the situation is entirely different. Usually
solving inventive problems requires exploration of knowledge and that leads to
building a representation of knowledge from the problem domain and from other
domains. Also, the nature and extent of such a knowledge representation has a direct
impact on the novelty of the produced design concepts and often also determines
whether a given problem can be solved. The proposed method has been developed as a result of
extensive research on methods and tools for building design knowledge
representation space. It has been tested with a group of students and modified as a
result of the provided feedback. The method is not easy to use and it is appropriate
only for students familiar with Inventive Engineering and with Synectics. Also, all
team members must be carefully selected and prepared for their participation in the
team efforts. During the entire process the team cohesion must be maintained and
team members constantly motivated and encouraged to be involved and to contribute
to the process. The method is also sensitive to the internal group balance, i.e. no
group member is supposed to dominate the team and the Team Leader must
constantly react to the changes in the group’s dynamics.
The conducted experiments were successful. The team efforts produced a continual
flow of concepts and all members contributed in various ways, reflecting their
knowledge and personalities. Most likely, the team size (four members) was too
small, and that might have had a negative impact on the results, because all team
members needed to be fully engaged at all times, which is sometimes nearly
impossible to maintain.
Our experiments clearly demonstrated that it is possible to develop a
transdisciplinary design representation space for inventive conceptual design and that
this goal can be accomplished in a systematic manner. Synectics proved to be
difficult to use but it helped to acquire a rich body of knowledge. The method still
requires refinements but even in its present form it can be used for teaching how to
acquire transdisciplinary knowledge and how to use it to develop a design knowledge
representation space.

REFERENCES

[1] Arafat G., Goodman B., Arciszewski T., "Ramzes: A Knowledge-Based
System for Structural Concepts Evaluation," Special Issue on Artificial
Intelligence in Civil and Structural Engineering, International Journal on
Computing Systems in Engineering, pp. 211-221, 1992.
[2] Arciszewski, T., Grabska, E., Harrison, C., “Visual Thinking in Inventive
Design: Three Perspective”, Soft Computing in Civil and Structural
Engineering, Topping, B.H.V. and Tsompanakis, Y, (Editors), chapter 6, pp.
179-202, Saxe-Coburg Publications, UK, 2009.
[3] Arciszewski, T., Successful Education. How to Educate Creative Engineers,
Successful Education LLC, pp. 200, December, 2009.
[4] Brewbaker J., Metaphor making through Synectics, Exercise Exchange,
Spring 2001, Vol. 46, No. 2. ProQuest Education Journal, pp. 6.
[5] Gelb. M., How to Think Like Leonardo da Vinci: Seven Steps to Genius Every
Day, Dell Publishing, 1998
[6] Gelb. M., Miller Caldicott S., Innovate Like Edison: The Five-Step System for
Breakthrough Business Success, Dutton Books, 2007.
[7] Gero, J., "Computational Models of Innovative and Creative Design
Processes," special double issue "Innovation: the Key to Progress in
Technology and Society," Arciszewski, T. (Guest Editor), Journal of
Technological Forecasting and Social Change, North-Holland, Vol. 64, No.
2&3, June/July 2000.
[8] Gordon W. J., Synectics: The Development of Creative Capacity, Harper and
Row, 1961.
[9] Harrison C., "Inventive Design, Neuroscience and Cognitive Psychology," an
invited lecture, CEIE 896, "Design and Inventive Engineering," George
Mason University, 2010.
[10] Kano N., Nobuhiko S., Fumio T., Shinichi T.: Attractive quality and Must-
Be Quality, Hinshitsu, 1984.
[11] Karlinski J., Rusinski E., Lewandowski T., New generation automated
drilling machine for tunnelling and underground mining work, Automation in
Construction, Vol. 17, No. 3, pp. 224-231, 2008.
[12] Kicinger R., Presentation on Bioinspired Design, IT 896 – Design and
Inventive Engineering, George Mason University, 2010.
[13] Kleyner A. Sandborn P., Boyle J., “Minimization of Life Cycle Costs
Through Optimization of the Validation Program – A Test Sample Size and
Warranty Cost Approach”, Reliability and Maintainability, 2004 Annual
Symposium – RAMS, 26-29 January, 2004.
[14] Kolb E. M. W., Hey J., Hans-Jürgen S., Agogino A. M., “Generating
compelling metaphors for design”, Proceedings of the 20th International
Conference on Design Theory and Methodology, DTM 2008, August 3-6,
2008, New York City, New York, USA.
[15] Lerdahl E., “Using Fantasy Story Writing and Acting for Developing
Product Ideas”, Proceedings of EURAM, 2002.
[16] Shelton, K., Arciszewski, T., "Formal Innovation Criteria," International
Journal of Computer Applications in Technology, pp. 21-32, January 2008.
[17] Yi-Luen Do E., Gross M. D., “Drawing Analogies: finding visual
references by sketching”, Proceedings of Association of Computer Aided
Design in Architecture (ACADIA) 1995, Seattle WA , pp 35-52.
[18] Zlotin B., Zusman A., Directed Evolution: Philosophy, Theory and
Practice, Ideation International Inc., 2001.

ACKNOWLEDGEMENTS

This article is a result of research conducted by the first author at George Mason
University and in cooperation with the second author. The Synectics session was
held at George Mason University in November, 2010. The authors thank all
synectors for their participation and numerous contributions, including Mario
Cardullo, David Flanigan, and Ali Adish. Finally, the authors would like to
acknowledge the contributions of Izabela Koziolek (izabela.kw83@gmail.com), who
prepared all drawings used in Figure 2.
Enhancing Construction Engineering and Management Education using a
COnstruction INdustry Simulation (COINS)

T. M. Korman1 and H. Johnston2


1 Associate Professor, Department of Construction Management, California
Polytechnic State University, San Luis Obispo, 1 Grand Avenue, San Luis Obispo,
CA 93407-0284; PH (805) 756-5612; FAX (805) 756-5740; email:
tkorman@calpoly.edu
2 Professor Emeritus, Department of Construction Management, California
Polytechnic State University, San Luis Obispo, 1 Grand Avenue, San Luis Obispo,
CA 93407-0284; PH (805) 756-2613; FAX (805) 756-5740; email:
hjohnsto@calpoly.edu

ABSTRACT

Simulations and learning games use technology to create real-world
experiences that provide an opportunity to engage, enjoy, and learn. Many simulations
have been designed to meet specific learning goals, ranging from sharing case studies
to demonstrating very complex situations. Simulation and gaming are not new to
higher education, but in the past they were applied in a very narrow vein, and because
of the complexity and development time required to produce them, most have not
been robust enough to engage students. Managing construction involves being able to
make decisions that balance time, cost, quality, and resources, and to identify and
solve a variety of issues related to the selection of equipment, labor, and tools. The
skills required of today’s construction engineering and management professionals are
a combination of management skills and technical knowledge. This paper describes
the development and implementation of a COnstruction INdustry Simulation (COINS)
designed and developed at California Polytechnic State University, San Luis Obispo
(Cal Poly) to prepare construction engineering and management students for the real
world.

INTRODUCTION AND BACKGROUND

As the millennium generation enters the higher education system, many
students have spent as many hours playing computer games as they have spent in the
classroom; therefore, a natural transition is for our learning environments to begin to
use techniques from the gaming world. Simulations have been used for decades to
help people learn, e.g., flight and driving simulators. Current online simulations run
the gamut from complicated mathematical models to interpersonal skills development
tools. Some simulations are entirely online, while others mix in real-world, in-person
rehearsals that follow the time spent online.

The use of performance-based simulation learning tools to educate has been
growing rapidly due to the decisive success rates of specialized, interactive content
that teaches leaders high-level business acumen in a real-world, risk-free setting. A
recent survey revealed that "by 2020 the use of simulations will quadruple....
Simulations provide a parallel universe in which employees hone their skills...
Innovative companies have realized this, and others will follow (Kraft 1994)."
Companies like Accenture, IBM, SimuLearn and OutStart have expanded
performance testing from mildly interactive e-learning programs into full-fledged
training development software and content-authoring tools, which may be customized
to fit any organization’s needs. In the effort to enhance the education experience for
construction engineering and management students, a COnstruction INdustry
Simulation (COINS) was designed and developed at California Polytechnic State
University, San Luis Obispo (Cal Poly) to prepare construction engineering and
management students for the real world. To begin the discussion on the development
of COINS, a review of experiential learning is provided below.

OVERVIEW OF EXPERIENTIAL LEARNING

Much of the basis for simulation design has historically been rooted in
experiential learning. Human beings absorb information through the senses, yet
humans ultimately learn by doing. Learning also involves feeling things about the
concepts (emotions) and doing something (action). These elements need not be
distinct; they can be, and often are, integrated (Lewin 1995). In the book
Experiential Learning, David Kolb describes learning as a four-step process (Kolb
1994). He identifies the steps as first watching, second thinking (mind), third feeling
(emotion), and fourth doing (muscle). Kolb wrote that learners have immediate
concrete experiences that allow humans to reflect on new experience from different
perspectives (Kolb 1994). From these reflective observations, humans engage in
abstract conceptualization, creating generalizations or principles that integrate
observations into sound theories. Finally, humans use these generalizations or
theories as guides to further action. Active experimentation allows humans to test
what was learned in new and more complex situations. The result is another concrete
experience, but this time at a more complex level.

Simulations provide a mechanism for active learning that results in longer-term
recall and stronger synthesis and problem-solving skills than learning by hearing,
reading, or watching. Simulations help the learning environment move from a
learning-by-telling model, and even a learning-by-observing model (as in the case
method), to a learning-by-doing model, from passivity to activity, and to extrapolate
experiences to application (Shanck 1994).

DESIGN AND DEVELOPMENT OF COINS

The first version of COINS was the Building Industry Game (BIG), which was
originally developed to enhance student learning in construction management
departments. BIG originated from an idea by Hal Johnston, Professor in the
Construction Management Department, and Emeritus Faculty member Jim Borland at
Cal Poly. The BIG simulation game focused on the commercial building sector of the
construction industry. BIG had a built-in estimating and scheduling simulation
and a limited accounting database. Students used BIG to emulate managing a
commercial building contractor. The origins of BIG began with Glenn Sears,
Professor Emeritus of the University of New Mexico. Professors Johnston and
Borland were granted permission by Professor Sears to rewrite, modify, and convert
BIG to C++. The idea that BIG could become something much larger and a more
robust game came about through collaboration between Hal Johnston and Jim
Borland at Cal Poly. It was their goal that BIG would become part of a larger
integrated construction company simulation incorporating more sectors of the
construction industry; COINS realizes their vision.

Using BIG as a template, COINS was developed into a web-based simulation
written with a Java front end and a PostgreSQL database; COINS was developed
entirely with open-source software. The intent of COINS was to go beyond just an
estimating game; the goal was to produce a simulation that required students to create
a strategy for human resources management, business development/procurement of
work, and project management.

Currently, COINS includes projects from two of the largest construction
industry sectors: commercial buildings and heavy civil infrastructure. The
table below lists the types of commercial building and heavy civil infrastructure
projects included in the simulation:

Table 1. Commercial Building and Heavy Civil Infrastructure Projects.

Commercial Building Projects:
 Multi-family housing
 Educational facilities
 Hospitals and medical office buildings
 Commercial office buildings
 Industrial manufacturing facilities

Heavy Civil Infrastructure Projects:
 Highway projects
 Bridges
 Residential site development
 Mass excavation
 Underground utilities

Each project consists of nine (9) activities, which together comprise the
project schedule. These are listed in Table 2 below.

For each project activity, there are five (5) different construction methods to
select from; therefore, the schedule and cost estimate depend on the methods
selected for each project activity.

Table 2. Activities for Commercial and Heavy Civil Infrastructure Projects.

Commercial building construction sector:
 Excavation
 Foundation
 Basement
 Framing
 Closure
 Roofing
 Siding
 Finishing
 Mechanical, electrical, and plumbing

Heavy Civil Infrastructure:
 Clearing and grubbing
 Rough grading
 Excavation
 Underground utilities (water, sewer, storm drain)
 Concrete placing and finishing
 Backfilling and compaction
 Aggregate base placement and compaction
 Paving
 Finish grade

USE OF COINS IN THE EDUCATIONAL PROCESS

The typical use of COINS involves dividing a class into teams, who will form
a virtual construction company. Student teams are able to hire virtual staff as needed,
deal with monthly problems, make choices, and experience the effects of their
decisions. During game play, participants are exposed to a range of real-world
scenarios, are given the opportunity to gain experience and learn from their mistakes,
and experience the totality of management required of the construction professional.
Each team is given an equal amount of capital at the beginning of the
game.

Time is represented as "periods"; each period represents two (2) months of
real time. The period is advanced once or twice per week. In each period, new projects
become available for the teams to propose on. With an increasing number of awarded
contracts, companies must recruit additional overhead personnel, or they must
pay large additional sums for the employment of external consultants. Ultimately, the
goal of the students during game play is to achieve the best possible outcome for their
company by analyzing situations, gathering data, and making strategic decisions that
balance time, cost, and quality.

COINS allows the game administrator (instructor) to place a player or team
into a situation or incident that could require a quick short-term solution or possibly a
long-term change in the company.
ongoing management by the team over an extended period of time. The game can
simulate the month-to-month problems, issues and decisions required to manage a
construction company successfully. Specific aspects of the game play are described
below: Human Resources Management, Business Development/Procurement of
Work, and Project Management.

Human Resources Management

The first order of business in game play involves students forming multiple
teams and creating a virtual construction company. They must develop a mission and
value statement to define their company. Student teams are given a username and
password by instructor. Teams register their team members and each student team
member plays the role in the companies organization. Teams are required to hire
personnel, creating main office overhead, i.e. President, Marketing Director,
Estimator, Student Intern, Scheduler, Accountant, etc. They are permitted to change
personnel, as they need either for growth or other reasons.

Business Development/Procurement of Work

The business development/procurement of work aspect involves student
teams preparing proposals to win the award of work (projects). Teams are
given quantities, expected production rates, and costs for each activity on every
available project. The student teams must decide which project to propose on, select
a method for each activity, determine their direct cost, and then finalize their cost
estimate by adding indirect jobsite costs, construction contingencies, and project
margin.

As the period advances, the computer evaluates the estimates for each project
and awards a construction contract to the lowest responsive team. COINS also
generates an estimate internally for every project in an effort to check that the teams'
estimates are within reason. Teams evaluate the results and attempt to interpret their
competitors' strategies as the game progresses. Construction estimates are rejected if
they fall below a minimum amount (calculated by the computer). In order to propose
on a project, teams must have cash on hand in addition to positive financial
indicators. These factors assist COINS in determining individual project size limits
for bonding purposes, which is at least 5% of the estimate. COINS does not permit
teams to become overloaded with too many projects; therefore, all teams have a
work-in-progress bonding limit that may not be exceeded.
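As a rough illustration of the award step just described, the following C++ sketch filters proposals for responsiveness and then picks the lowest remaining bid. The type and function names are invented for this example, and the checks simply encode the rules mentioned in the text (a computed minimum estimate, the 5% cash requirement, and the work-in-progress bonding limit); this is not the actual COINS implementation.

// Hypothetical sketch of the period-end award logic; names are illustrative only.
#include <algorithm>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

struct Proposal {
    std::string team;
    double bidAmount;        // team's final cost estimate for the project
    double cashOnHand;       // must cover the assumed 5% bonding requirement
    double bondingAvailable; // work-in-progress bonding limit minus current backlog
};

// Awards the contract to the lowest responsive proposal, if any.
std::optional<Proposal> awardProject(std::vector<Proposal> proposals,
                                     double internalMinimumEstimate) {
    auto responsive = [&](const Proposal& p) {
        bool aboveFloor  = p.bidAmount >= internalMinimumEstimate;  // not unreasonably low
        bool bondable    = p.cashOnHand >= 0.05 * p.bidAmount;      // assumed 5% rule
        bool withinLimit = p.bidAmount <= p.bondingAvailable;       // backlog limit not exceeded
        return aboveFloor && bondable && withinLimit;
    };
    // Drop non-responsive proposals, then pick the lowest remaining bid.
    proposals.erase(std::remove_if(proposals.begin(), proposals.end(),
                                   [&](const Proposal& p) { return !responsive(p); }),
                    proposals.end());
    if (proposals.empty()) return std::nullopt;
    return *std::min_element(proposals.begin(), proposals.end(),
                             [](const Proposal& a, const Proposal& b) {
                                 return a.bidAmount < b.bidAmount;
                             });
}

int main() {
    std::vector<Proposal> bids = {
        {"Team A", 1'200'000, 80'000, 2'000'000},
        {"Team B", 1'050'000, 40'000, 1'500'000},   // fails the 5% cash check
        {"Team C", 1'150'000, 70'000, 1'800'000},
    };
    if (auto winner = awardProject(bids, 1'000'000))
        std::cout << "Contract awarded to " << winner->team << '\n';
    else
        std::cout << "No responsive proposals\n";
}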

Considering that the marketplace has both private projects, in which contracts are
most frequently negotiated and non-bonded, and public projects, where contracts are
typically bid and must be bonded, COINS contains both. After some success, a
company may be put on select lists and even move on to being considered for
negotiated projects.

Project Management

Players must monitor their financial position as work progresses and submit
requests for payment for their work to date. Also, teams must create strategies to
improve their bonding limits. A record of successful projects creates an opportunity
to obtain negotiated work. At the end of every period, each team receives:

 a Progress Report,
 a Complete Dynamic Financial Report,
 an Analysis Report of the work accomplished, and
 the financial results to date.

The amount of work completed during a period depends on the production
rates for the work packages selected for each activity and on uncertainty factors,
including weather conditions, labor availability, and the fluctuating cost of materials
(a minimal sketch of this period roll-up follows the expense list below). Each team
must evaluate the projects in progress, possibly changing the construction methods,
and, at the very least, submit a progress payment request for the work completed
during that period. Accounts receivable affect cash flow, the cash position on the
balance sheet, and the team's bonding capacity. The end-of-period financial
reports show expenses incurred for:
 Direct construction costs
 Bidding costs
 Consulting services
 Liquidated damages, and
 Interest on borrowed money
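
The period roll-up mentioned above can be pictured with a small sketch. The following C++ fragment is purely illustrative: the factor ranges, names, and numbers are assumptions made for this example, not values taken from COINS.

// Hypothetical sketch of rolling up period progress from a production rate and
// uncertainty factors; names and factor ranges are invented for illustration.
#include <algorithm>
#include <iostream>
#include <random>

struct Activity {
    double quantityRemaining;   // units of work left
    double productionRate;      // units per period for the selected method
    double unitCost;            // direct cost per unit
};

int main() {
    std::mt19937 rng(2011);
    // Uncertainty factors drawn once per period (assumed ranges).
    std::uniform_real_distribution<double> weather(0.8, 1.0);    // weather slowdown
    std::uniform_real_distribution<double> labor(0.85, 1.0);     // labor availability
    std::uniform_real_distribution<double> material(0.95, 1.10); // material cost drift

    Activity excavation{5000.0, 1200.0, 14.0};

    double done = std::min(excavation.quantityRemaining,
                           excavation.productionRate * weather(rng) * labor(rng));
    double directCost = done * excavation.unitCost * material(rng);

    std::cout << "Work completed this period: " << done << " units\n"
              << "Direct cost incurred:       " << directCost << '\n';
    // A progress payment request would then be generated for the completed work.
}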

Changes in a company's financial position will change its financial ratios, which
are logged along with changes to the company's appraisal metrics:
 Financial liquidity
 Financial success
 Responsibility
 Pace
 Ethics
 Name recognition

As gameplay progresses, teams have the following options:


 Pay a consulting fee to receive information on weather forecasts, material
prices, labor and material availability, and market projections for future
periods.
 Apply for loans.
 Make a change and specify a different method for the following periods.
 Use overtime to speed up certain activities (greatly increasing the labor costs).

A financial report shows the final total worth of the firm in either case.
Maximization of profit and a strong, positive financial condition are the main
objectives, but additional emphasis can be placed on the company appraisal metrics.
At the conclusion of game play, the instructor can either have the simulation
forecast the expected results of any ongoing projects or use the actual results at that
time.

ASSESSMENT OF STUDENT LEARNING

At Cal Poly, COINS has been used in several courses, including Professional
Practice, Construction Estimating, Construction Accounting, Management of the
Construction Firm and Business Practices, and, most recently, Heavy Civil
Construction Management. During the 2005/2006 academic year, COINS was first
used at the regional level. Teams from several universities in Associated Schools of
Construction (ASC) Regions 6 and 7 competed against each other.

In November 2009, COINS was first used internationally, with universities
from Alabama, California, Idaho, North Carolina, Washington State, and the Czech
Republic competing against each other. Results were evaluated in three categories:
Highest Retained Earnings (the highest profit), Highest Appraisal Metrics (the best
valuation metrics), and Most Awarded Projects (the company with the most awarded
projects).

The simulation has a built-in grading module that can be used to obtain
statistics on the various companies for comparison or to use in the classroom for
grading the simulation. Faculty members can also develop their own method of
grading. To assess participation and student learning, the faculty member is able to
use the following criteria:
 Number of instances a team proposes to perform a project
 Number of instances a team's proposal is rejected (due to factors such as
insufficient bonding capacity, a substantially low cost estimate, etc.)
 Number of instances a team procures a project
 Number of instances the team retains earnings at the end of a cycle
 Company's appraisal metrics

Using the seven principles of good practice as an evaluation metric, the
COINS system performs well. It develops reciprocity and cooperation among
students. When using the COINS system, learning is enhanced because it is more like
a team effort than a solo race. Good learning, like good work, is collaborative and
social, not competitive and isolated. Working with others often increases
involvement in learning. Sharing one's own ideas and responding to others' reactions
sharpens thinking and deepens understanding. With the idea in mind that learning is
not a spectator sport, COINS was developed to encourage active learning. Students
do not learn much just by sitting in classes listening to teachers, memorizing pre-
packaged assignments, and reciting predetermined answers. COINS also gives
prompt feedback: students receive immediate feedback on their performance. When
getting started, students need help in assessing existing knowledge and competence.
In classes, students need frequent opportunities to perform and receive suggestions
for improvement. COINS also emphasizes time on task. Time plus energy equals
learning, and there is no substitute for time on task. The use of COINS assists students
in budgeting their time. Allocating realistic amounts of time means effective learning
for students and effective teaching for faculty. COINS communicates high
expectations. High expectations are important for everyone, even for the poorly
prepared, for those unwilling to exert themselves, and for the bright and well
motivated. Expecting students to perform well becomes a self-fulfilling prophecy
when teachers and institutions hold high expectations for themselves and make extra
efforts. Finally, COINS respects diverse talents and ways of learning.

CONCLUSIONS AND FUTURE ENHANCEMENTS TO COINS

To assist in the development of COINS, the developers have established an
Industry Advisory Board (IAB) drawn from the construction industry, as well as a
working group of educators, to continue development and generate ideas for changes.
Because modules in COINS can be turned on and off, the simulation can be tailored
for a course to focus on specific learning objectives. For example, estimating can be
switched to an automatic mode, which in a construction accounting class helps the
student focus on accounting and not on the estimating itself, which can be very time
consuming and complex. Periods can advance much more quickly, giving the students
more accounting data to analyze in a shorter time, so they can see the changes that
occur within a company without being burdened by the estimating/procurement of
work. Progress payments can be turned on in auto mode, and additional projects can
be added to each team to create additional work or backlog. The game play between
commercial and heavy/civil construction is also modularized, so a faculty member
can play only commercial, only heavy/civil, or both in one game. Future modules are
also planned, e.g., labor management, equipment management, safety management,
etc. Future changes are planned for the COINS simulation in an effort to create a
more robust simulation. These may include: equipment management, unit price
bidding, unit price billing, equipment parameters, equipment as a cost item, dynamic
depreciation, an equipment feedback loop, personnel resumes and interviews, and
case studies involving environmental and labor regulations.

REFERENCES

Aldrich (2005) - Learning by Doing: A Comprehensive Guide to Simulations,
Computer Games, and Pedagogy in e-Learning and Other Educational Experiences
by Clark Aldrich (John Wiley & Sons, 2005).

Kaye (2002) - Flash MX for Interactive Simulation: How to Construct & Use Device
Simulations by Jonathan Kaye, PhD, and David Castillo (Delmar Learning, 2002).
Companion CD-ROM with full source code.

Quinn (2005) - Engaging Learning: Designing e-Learning Simulation Games by
Clark N. Quinn, foreword by Marcia Conner (Pfeiffer, 2005).

Whitney (2004) - "Performance-Based Simulations: Customizable Tool" by Kellye
Whitney, Chief Learning Officer Magazine, October 2004.
Effectiveness of Ontology-based Online Exam Platform for Programming
Language Education
Chia-Ying Lin1 and Chien-Cheng Chou2
1 Department of Civil Engineering, National Central University, 300 Jhongda Rd.,
Jongli, Taoyuan 32001, Taiwan; PH (886) 3-4227151 ext. 34150; email:
993402003@cc.ncu.edu.tw
2 Department of Civil Engineering, National Central University, 300 Jhongda Rd.,
Jongli, Taoyuan 32001, Taiwan; PH (886) 3-4227151 ext. 34132; email:
ccchou@ncu.edu.tw

ABSTRACT
To teach programming language courses for undergraduate engineering
students, instructors are faced with a plethora of challenges. Unlike students in
similar courses provided by the computer science department, engineering students
must review mathematical concepts as well as learn programming pragmatics in
order to solve engineering problems, e.g., matrix class creation and manipulation.
Additionally, during the learning process, practice exercises are considered an
essential part of deeper understanding. However, plagiarism always exists among
students' source code. To resolve such problems, an ontology-based model and
system, called the Programming Language Online Exam Platform (PLOEP), were
proposed to support practice and examination in the programming course. A
questionnaire was designed and distributed to assess the effectiveness of PLOEP.
Results show that engineering students can learn programming concepts more
efficiently and effectively by taking exams on PLOEP. Finally, expanding the
knowledge base of PLOEP to cover more concepts was recommended, and other
challenges associated with PLOEP were discussed.

INTRODUCTION
Nowadays a programming language course has become a required course in many
universities for undergraduate students in the department of civil engineering.
However, the instruction points for engineering students and for computer science
students may not be the same: the instruction for engineering students may be more
concentrated on the application of the programming language. To achieve a better
learning effect, exercise-oriented instruction is considered a proper approach for
students (Lahtinen et al., 2005), but generating a large number of questions can be
burdensome for teachers. In addition, plagiarism may appear (Spinells et al., 2007)
and reduce the learning effect when such a large number of exercises is given. To
resolve these problems, an ontology-based approach is suggested in this research.
Ontology is a representative approach for knowledge sharing and reuse. With an
ontology model, reasoning mechanisms can be applied, and such characteristics can
be used to generate questions dynamically. The proposed approach constructed an
online exam platform named the Programming Language Online Exam Platform
(PLOEP), with an ontology model of basic set concepts from high school
mathematics (Halmos, 1974) as the core question generator. After a description of
PLOEP, the verification and validation conducted in a freshman programming
language course are presented, with some discussion of the performance evaluation.
Finally, conclusions for the proposed approach are presented.

PROPOSED APPROACH
An online exam platform, named PLOEP, is the chosen approach for
delivering such exercise-oriented instruction in the programming language course.
The expected actors of PLOEP fall into four categories: senior teacher, junior teacher,
student, and grader. The whole instruction process with the aid of PLOEP proceeds
in four steps. First, the senior teacher designs an appropriate question template for
the varying questions. Second, the junior teacher obtains questions of the requested
difficulty levels, confirms that the questions are suitable for the test, and combines
them into a test for students. Third, students take the test as programming language
practice. Finally, the junior teacher or the grader scores the tests, since PLOEP does
not include an automatic assessment function. The system structure can be simplified
into an online portion and a core question generator. The senior teacher utilizes the
question generator, and the other three actors interact with the online portion.
The question generator is developed based on the PLOEP ontology model.
As shown in Figure 1, the model contains two parts: a concept part for set concepts
in high school mathematics and a C++ implementation part. In the C++
implementation part, there are loop methods and data structures for arranging a set.
In the concept part, three categories of operators are described (basic operators,
element operators, and principle operators), shown as shaded ellipses. These
operators are assumed to take an original set and produce an output after operating
with or without an input. They are described in the form "OperatorName (Input):
Output." The single-line arrow representing the "use" relationship indicates that
element operators and principle operators can be implemented using basic operators.
The inheritance hierarchies of the basic operator and the element operator each have
only two subclasses, whereas the subclasses of the principle operator are more
complicated. The principle operators can be viewed as two groups: one group
contains the classes that use "Belonging," and the other contains those using
"Difference" and "Union." A difficulty level, shown by the stereotype in Figure 1, is
assigned to each operator. In the model, the static difficulty levels range from 1 to 4,
but a difficulty level can increase by 1 through the "use" relationship; thus, the exact
difficulty level of the questions that can be produced from the model ranges from 1
to 5.

Figure 1. The PLOEP ontology model.

The questions generated from the PLOEP ontology model fit the template
shown in Figure 2. The sentences in [ ] brackets are optional; some appear only when
the question is generated through the "use" relationship. From the template, the
variation of the questions can be sorted into four types: "which operator is to be
implemented," "which loop method must be used," "which data structure must be
used to arrange the set," and "which operator must be used to implement another
operator." The total number of questions that can be generated by the PLOEP
ontology model from these four groups is calculated to be 3,624. With such a great
number of questions, students have more opportunities to practice the programming
language (a small illustrative sketch of this kind of enumeration follows Figure 2).

Figure 2. The question template.
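
As an illustration of how such variants could be enumerated, the C++ sketch below combines an operator, a loop method, a data structure, and an optional "use" operator into question records, raising the difficulty by one when a helper operator is required. The operator lists and output format are invented for this example; the real model distinguishes many more cases and yields 3,624 questions.

// Illustrative sketch (not PLOEP's actual generator): enumerate question variants.
#include <iostream>
#include <optional>
#include <string>
#include <utility>
#include <vector>

struct Question {
    std::string op;                      // operator to implement
    std::string loop;                    // required loop method
    std::string container;               // required data structure
    std::optional<std::string> helperOp; // operator that must be used ("use" relationship)
    int difficulty;                      // static level, +1 when a helper operator is required
};

int main() {
    std::vector<std::pair<std::string, int>> ops = {{"Union", 2}, {"Intersection", 2}, {"Belonging", 1}};
    std::vector<std::string> loops = {"for", "while"};
    std::vector<std::string> containers = {"array", "vector"};

    std::vector<Question> questions;
    for (const auto& [op, level] : ops)
        for (const auto& loop : loops)
            for (const auto& container : containers) {
                questions.push_back({op, loop, container, std::nullopt, level});
                // Variant generated through the "use" relationship raises difficulty by one.
                questions.push_back({op, loop, container, std::string("Belonging"), level + 1});
            }

    std::cout << "Generated " << questions.size() << " question variants\n";
    for (const auto& q : questions)
        std::cout << "Implement " << q.op << " using a " << q.loop << " loop on a "
                  << q.container << (q.helperOp ? " using " + *q.helperOp : std::string())
                  << " (difficulty " << q.difficulty << ")\n";
}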

VERIFICATION
The number of questions that can be generated from the PLOEP ontology model is
estimated at 3,624, as described before. A program was designed to inspect the
model's performance, including the automatic test generation for different difficulty
levels as well as the clarity of the descriptions and the grammatical accuracy of the
wording in the questions. Nevertheless, on account of the large number of questions,
it is hard to test all of the possibilities generated by the PLOEP ontology model.
Consequently, the verification of the model was simplified to cover each difficulty
level and some derived difficulty levels. After the testing, the ability and accuracy of
the automatic test generation of the PLOEP ontology model were verified.

VALIDATION
Primarily instructor-led teaching with some fixed exercises for assistance is the
traditional way in a programming language course. However, instruction that is
mainly led by the instructor, without sufficient practice, does not leave a strong
impression on students. Also, fixed exercises create chances for plagiarism and
decrease the learning effect. In addition, the examinations accompanying the
traditional approach are usually provided in paper form, which gives students more
opportunities to cheat in the exam; some other examinations are computer-based, but
most questions are multiple-choice or cloze-test questions, which may be unable to
measure students' complete understanding.
In order to verify that the exercise-oriented instruction proposed in this
research is better than the traditional way, an experiment was conducted in a
freshman programming language course. The level of programming language
understanding of the students who joined the experiment was assumed to follow a
normal distribution, since these students are civil engineering students and did not
enter the department on the basis of their programming skills. Additionally, there
were two foreign students in this course. Before the experiment, students had already
learned some basic syntax such as flow control, and they could write simple
programs in C++. The experiment was separated into several stages.

Instruction. The first stage started immediately after the mid-term exam of the
course. The contents, including a review of set concepts from high school
mathematics and programming language knowledge related to sets, were delivered
over two days, December 13 and 20. Set arrangement in a C++ program and some
basic operations were taught on December 13, and advanced set operations such as
union and intersection were taught on December 20. In this stage, students were
divided into two groups. The instructor and the teaching materials for the two groups
were the same, but the instruction methods were different. For the first group, the
instructor taught the necessary concepts, and students spent most of the class time
doing exercises. For the second group, the instructor taught the concepts, and
students had no chance to do any exercises. After the teaching process, the instructor
explained the exercises to all students, in order to ensure that they all knew how to
solve the set problems with the programming language.

Examination. After the instruction stage, a quiz was held to evaluate and compare
the learning performance of the two groups. The proposed approach stated in the
previous section is designed for exercise-oriented instruction, which provides a large
pool of questions and includes automatic test generation; however, the objective of
the quiz was to test the students on the same questions in a fair way. As a result, the
quiz was paper-based, excluding the automatic generation part. Furthermore, the quiz
was open-book and all questions were short-answer questions. The quiz contained
two types of questions: basic set questions and the application of set concepts to an
engineering problem. The basic set questions cover the concepts introduced in the
course. For the application questions, unified soil classification was chosen and
adapted into programming language questions.
Since unified soil classification is taught in the soil mechanics course, which is
for senior students, the questions only applied the concept of the plasticity chart.
Also, the plasticity chart, shown in Figure 3, was simplified to emphasize its
relationship with set concepts; the organic soil types in the chart were removed.
Different types of soil can be viewed as sets, and using set operations such as
intersection is one way to find the type of a given soil (a small illustrative sketch
follows Figure 3).

Figure 3. The simplified plasticity chart.
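
To show the set view used in the application questions, the following C++ sketch classifies a soil point on the simplified chart as the intersection of two membership tests, assuming the four inorganic groups (CL, CH, ML, MH) and the standard A-line PI = 0.73(LL - 20); the actual quiz questions and expected solutions may differ.

// Illustrative sketch of the set view of the simplified plasticity chart (not the
// actual quiz solution): each soil group is the intersection of two membership tests.
#include <iostream>
#include <string>

bool aboveALine(double ll, double pi) { return pi >= 0.73 * (ll - 20.0); }
bool lowPlasticity(double ll)         { return ll < 50.0; }

// The group is the "intersection" of the two sets defined by the predicates above.
std::string classify(double ll, double pi) {
    if (aboveALine(ll, pi)  && lowPlasticity(ll))  return "CL";
    if (aboveALine(ll, pi)  && !lowPlasticity(ll)) return "CH";
    if (!aboveALine(ll, pi) && lowPlasticity(ll))  return "ML";
    return "MH";
}

int main() {
    std::cout << "LL=40, PI=20 -> " << classify(40.0, 20.0) << '\n';  // above A-line, LL<50: CL
    std::cout << "LL=60, PI=20 -> " << classify(60.0, 20.0) << '\n';  // below A-line, LL>=50: MH
}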

Assessment. After the quiz, the scores were calculated to evaluate the performance
of the proposed instruction approach. The results are discussed in the next section.

DISCUSSION
The experiment was arranged between the mid-term exam and the final
exam. The mid-term and final exams differed in form from the quiz: they were
computer-based with static multiple-choice questions, but the questions and the
choices were presented in randomized order for each student. Before the mid-term
exam, basic C++ programming language concepts were introduced; in the period
between the quiz and the final exam, only a course review was provided and no other
instruction. Thus, two indices are used for the learning performance evaluation:
Quiz-Mid (QM) and Final-Mid (FM). The name of each index describes how it is
calculated; for example, the QM index is the quiz grade minus the mid-term exam
grade. The indices present the progress of students clearly. The results are shown in
Table 1; although the total grade of the mid-term and final exams was 20 points and
that of the quiz was 7 points, the grades are rescaled to 100 points here. Group 1
represents the students who learned in the exercise-oriented way, and Group 2
represents the students instructed in the traditional way.

Table 1. Average values of the indices for students in the two groups.

Group     QM        FM
Group 1   -10.550   -6.950
Group 2   -17.214   -14.920

The content of the quiz could be viewed as a kind of advanced application of
the C++ programming language, and the scope of the final exam was the content of
the whole semester. Hence, students might consider these two exams more difficult
than the mid-term exam. As the table shows, QM and FM are negative, which means
that, compared with the mid-term exam, students' grades deteriorated in the quiz and
the final exam; nevertheless, the QM and FM values of Group 1 are greater than
those of Group 2, which means the deterioration of Group 1 is less than that of
Group 2. From the comparison of these two indices, we can conclude that the
exercise-oriented instruction led to a better understanding for students. Another
incident that deserves to be mentioned here is that a few students tried to cheat in the
quiz. The data of these students were removed from the calculation of the indices,
but this incident is still evidence that students have more opportunities to cheat in
the traditional testing format. With the automatic test generation function of the
PLOEP ontology model, plagiarism may not take place because students will not
have the same questions.

CONCLUSION
A number of issues in virtual learning environments (VLEs) for programming
languages have been studied, but few of them concern the difference between
teaching civil engineering students and teaching computer science students. This
study constructed an ontology model to represent students' existing concepts together
with C++ concepts. The "set" concepts taught in the high school mathematics course
were selected as an example of existing concepts. The concepts from set theory and
the C++ mechanisms are integrated so as to dynamically generate questions for
students to practice. The verification indicates that the PLOEP ontology model
functions as intended. The experiment described in the validation section and the
results presented in the discussion prove not only that exercise-oriented instruction is
better for students' understanding, but also that it reduces the chance of plagiarism,
thanks to the dynamic nature of the automatic question generation. As a result, this
ontology-based approach provides an effective way to teach a programming
language course.

REFERENCES

Halmos, P.R. (1974). Naive set theory, New York: Springer-Verlag, New York.
Lahtinen, E., Ala-Mutka, K. and Jarvinen, H. M. (2005). “A study of the
difficulties of novice programmers.” In: Proceedings of the 10th annual
SIGCSE conference on Innovation and technology in computer science
education, 2005, Caparica, Portugal.
Spinells, D., Zaharias P. and Vrechopoulos, A. (2007). “Coping with plagiarism
and grading load: randomized programming assignments and reflective
grading.” Computer Applications in Engineering Education, 15(2), 113-123.
Author Index
Page number refers to the first page of paper

Abdel-Raheem, Mohamed, 250 Cho, Y., 552


Abraham, Dulcy, 41 Choi, Gwang-Yeol, 690
Ahn, Hongseob, 627 Chou, Chien-Cheng, 266, 907
Akhnoukh, Amin, 512 Chu, Chih-Yuan, 603
Akinci, Burcu, 315, 486, 802 Chu, Mei Ling, 544
Allan, L., 323 Chua, D. K. H., 619
Aly, Ebrahim A., 396 Cong, Z., 323
Anderson, Kyle, 635 Cox, Robert F., 347
Anil, E. B., 486 Cui, Qingbin, 186, 210
Anumba, Chimay J., 454, 720
Anwar, O. E., 785 Dai, Fei, 363
Arboleda, Carlos A., 744 Das, M., 649
Arciszewski, Tomasz, 891 de la Garza, Jesus M., 1, 67
Ashuri, B., 768 DeLaurentis, Daniel, 41
Azar, Elie, 536 Demiralp, G., 291
Aziz, T. A., 785 DesRoches, Reginald, 152
Dib, H. Y., 578
Barham, Wasim, 227, 850 Ding, Qinyi, 186
Barison, M. B., 594 Dong, N., 134
Becerik-Gerber, Burcin, 59, 77, 161, Dong, Suyang, 494
169
Behzadan, Amir H., 586 East, E. William, 315, 421, 470
Bergés, Mario, 802 Eastman, Charles, 611
Bogen, A. Chris, 421, 470 El-Anwar, Omar, 794
Borrmann, André, 430, 528 El-Gohary, N. M., 641
Bouvier, Dennis J., 234 Elmitiny, N., 202
Brilakis, Ioannis, 110, 118, 152, 363 Elnashai, Amr, 794
Bulbul, T., 720 El-Nashar, A., 202
El-Rayes, Khaled, 794
Caldas, Carlos H., 274 Ergen, E., 291
Calis, G., 77, 85 Erlemann, K., 438
Cavallin, H., 875 Euringer, T., 430
Ceron, Victor, 818
Chen, Albert Y., 299 Farajian, Morteza, 210
Chen, C., 578 Fathi, H., 118
Chen, Don, 51 Feng, Chen, 494
Cheng, J. C. P., 649 Fischer, M., 134


Flager, Forest, 883 Jazizadeh, Farrokh, 161, 169


Flood, I., 219 Ji, Yang, 528
Francis, A., 560 Jin, Yuanwei, 242
Fruchter, Renate, 776, 875 Jog, Gauri M., 110
Fukuda, Tomohiro, 307 Johnston, H., 899
Jung, Y., 728
Gao, Zhili (Jerry), 51
Garrett, James H., Jr., 242, 315, 802 Kalin, Mark, 421
Gatti, Umberto C., 194 Kamat, Vineet R., 494
Gerber, Burcin Becerik, 85, 110 Kang, Leen-Seok, 690
Gerber, David Jason, 883 Kashani, H., 768
German, Stephanie, 152 Kavulya, Geoffrey, 161, 169
Giel, B., 665 Khalafallah, Ahmed, 202, 250
Golparvar-Fard, Mani, 67, 504 Khalili, A., 619
Gong, Jie, 274 Kikushige, Yuki, 307
Gonzalez, E., 478 Kim, C., 178,
Gordon, Chris, 234 Kim, Chang-Hak, 690
Guven, G., 291 Kim, Hyeon-Seung, 690
Kim, Hyunjoo, 627, 635
Haddad, Z., 134 Kim, J. Y., 728
Hamm, M., 682 Kim, Yeonhee, 706
Han, SangUk, 102 Klein, Laura, 59, 161
Harley, Joel, 242 König, Markus, 438, 446, 462, 682
Hartmann, D., 438 Koch, C., 438
Hartmann, Timo, 282 Korman, T. M., 899
Hegemann, F., 438 Koseoglu, O., 355
Heydarian, Arsalan, 504 Koziolek, Sebastian, 891
Hinze, J., 698 Kucukvar, M., 736
Ho, T. W., 266 Kumar, Bimal, 339
Hofmeyer, H., 9
Hosney, Ossama, 396 Laory, Irwanda, 25
Howerton, C. G., 1 Lasker, G. C., 578
Huang, Qian, 347 Law, Kincho, 544
Huber, D., 486 Lee, Ghang, 706, 713
Hubers, J. C., 413 Lee, Sanghoon, 454
Lee, SangHyun, 102, 380
Irizarry, Javier, 512, 850 Lehner, K., 438
Issa, Raja R. A., 657, 665, 673, 698, Leite, Fernanda, 331
826, 842 Lepech, Michael, 760
Itani, L., 478 Li, Chunxia, 380
Ivanov, Plamen Ventsislavov, 776 Li, Nan, 59, 77, 85
Li, Shuai, 77, 85, 110
Jahanshahi, Mohammad R., 372, 388 Li, Xiaohang, 347
Jardaneh, M., 202 Lin, C. Y., 266, 907

Lin, Ken-Yu, 867 Raheem, A. A., 842


Liu, Xuesong, 802 Rank, E., 430
Lucas, J., 720 Raphael, B., 17
Luo, Xiaowei, 331 Rashidi, Abbas, 363
Rezgui, Yacine, 143
Mahfouz, Tarek, 126, 752 Roberts, Sara, 152
Marks, Adam, 143 Rojas, Eddy M., 867
Marx, Arnim, 462 Russell-Smith, Sarah, 760
Masri, Sami F., 372, 388
Masry, Mohamed El, 94, 520 Sacks, Rafael, 611
McDonald, Matthew, 234 Salama, D. M., 641
McGibbney, Lewis John, 339 Santos, E. T., 594
McKay, David T., 421 Schneider, Suzanne, 194
Meadati, Pavan, 512, 850 Shaurette, Mark, 347
Menassa, Carol C., 536, 744 Sherif, Yasmine, 94
Menzel, K., 323, 834 Shi, Jun, 242
Messner, John I., 454, 720 Shi, Z. K., 258
Migliaccio, Giovanni C., 194 Sideris, D., 1
Miresco, E., 560 Smith, Ian F. C., 25
Moon, B. S., 728 Smulders, C. D. J., 9
Moon, Hyoun-Seok, 690 Soibelman, Lucio, 242, 315
Mostafavi, Ali, 41 Solis, Fernando A. Mondragon, 858
Mutis, Ivan, 826 Son, H., 178
Son, Jeong Wook, 867
Nassar, Khaled, 94, 396, 520 Song, Xinyi, 744
Nawari, Nawari O., 405, 478, 569 Stack, P., 834
Neelamkavil, Joseph, 33 Szczesny, K., 682
Nguyen, V. V., 682
Nikolic, Dragana, 454 Taneja, S., 315
Tang, P., 486
Obergrieβer, Mathias, 430, 528 Tatari, O., 736
Obonyo, E., 219 Teizer, J., 258, 611
O'Brien, William J., 331, 858 Trinh, Thanh N., 25
Oh, Ilseok, 227 Tseng, S. M., 266
Olbina, S., 698, 842
Oppenheim, Irving J., 242 Uslu, Berk, 67
Orabi, W., 810
Osman, Hesham, 396, 520 Vala, G., 219
Ozcan-Deniz, Gulbin, 818 Vela, Patricio, 118, 258, 363
Venugopal, Manu, 611
Pan, Xiaoshan, 544
Peña-Mora, Feniosky, 102, 299, 744 Wang, C., 552
Won, Jongsung, 713
Qi, Jia, 698 Wu, Wei, 673

Yabuki, Nobuyoshi, 307 Zhang, L., 657


Yang, J., 258 Zhu, Xinyuan, 186
Yin, H., 834 Zhu, Yimin, 818
Ying, Yujie, 242 Zhu, Zhenhua, 152

Zavichi, Amir, 586


Subject Index
Page number refers to the first page of paper

Access roads, 520 Construction industry, 85, 126, 134,


Accidents, 102 169, 186, 194, 250, 282, 380, 560,
Air conditioning, 802 586, 641, 657, 682, 690, 752, 776,
Algorithms, 51, 77, 219, 396, 706, 826, 899
713 Construction management, 143, 178,
Architecture, 413, 641 258, 446, 454, 462, 504, 627, 744,
Asphalt pavements, 227 785, 818, 867
Assessment, 67, 388, 486, 752 Construction materials, 736
Assets, 1 Construction sites, 102, 202, 234, 355
Automation, 9, 569, 673, 785 Contractors, 134
Costs, 250, 818
Bayesian analysis, 274 Cracking, 388
Bridges, 94, 528 Cranes, 258, 331, 586
Building design, 17, 619, 641, 698,
706, 713, 760, 776 Damage, 25, 152, 810
Building information models, 405, Data analysis, 25, 110, 486, 528
478, 486, 512, 578, 594, 611, 627, Data collection, 110, 118, 234
635, 649, 665, 673, 698, 842, 850 Data processing, 186
Buildings, 51, 59, 77, 152, 315, 339, Decision support systems, 210, 586,
421, 470, 552, 569, 603, 768, 834, 834
891 Deformation, 227
Design, 266, 430
Cameras, 118, 258, 363 Disasters, 299
Classification, 126, 274 Dispute resolution, 744
Commercial buildings, 536 Documentation, 59, 126
Comparative studies, 594
Computation, 891 Earthquakes, 152, 794
Computer aided design, 883 Emergency services, 299, 544, 603,
Computer applications, 169, 234, 544 794
Computer programming, 907 Emissions, 186, 504
Computer software, 282, 421 Energy consumption, 51
Constraints, 462 Energy efficiency, 339, 536, 635
Construction, 94, 728 Engineering, 110
Construction costs, 396, 752 Engineering education, 454, 850, 858,
Construction equipment, 186, 274, 867, 883, 891, 899, 907
299 Environmental issues, 736, 818


Evacuation, 544, 603 Mixing, 227


Models, 323, 438, 470, 512, 528
Facilities, 33, 323 Monitoring, 25, 77, 102, 134, 194,
Financial factors, 41 242, 291, 331, 347, 372, 388, 494,
Funding, 210 504
Motion, 102, 380
Gas pipelines, 242 Multimedia, 875
Geometry, 603
Geotechnical engineering, 430 Navigation, 315
Neural networks, 227
Health care facilities, 578, 720 Nuclear power, 728
Heating, 802
Highways and roads, 1, 67, 202, 307, Occupational health, 380
528 Optimization, 17, 51, 94, 202, 682
Housing, 794, 842 Optimization models, 250, 266
Hybrid methods, 552
Parameters, 77, 528, 744
Imaging techniques, 67, 169, 355, Personnel management, 594
363, 372, 380, 470, 494, 504, 665, Photogrammetry, 59, 178
690, 867 Planning, 785
Indoor environmental quality, 59, 77, Power plants, 520, 728
161 Predictions, 227
Information management, 143, 421, Private sector, 210
470, 560, 569, 657, 720 Productivity, 258, 504
Information systems, 282, 307, 339 Project management, 560
Infrastructure, 1, 41, 118, 169, 210, Public buildings, 706
363, 372, 388
Innovation, 41 Rapid transit systems, 266
Inspection, 152, 186, 307 Reconstruction, 1
Insurance, 744 Rehabilitation, 1, 760
Integrated systems, 802 Renovation, 834
Intelligent structures, 347 Research, 282
Internet, 234, 649, 657, 673, 810, 826 Residential buildings, 842
Investments, 210, 768 Resource management, 810
Risk management, 33
Japan, 307 Routing, 520

Labor, 194, 274 Safety, 258, 331, 698, 720, 867


Life cycles, 512, 760 Scheduling, 134, 446, 560, 682, 690
Simulation, 446, 454, 462, 520, 544,
Maintenance, 33, 512 560, 635, 682, 899
Manufacturing, 586 Social factors, 544, 826
Mapping, 1 Solar power, 768
Measurement, 178, 194 Spatial analysis, 9

Spatial data, 118, 315 Time factors, 250, 818


Standards and codes, 405, 569 Traffic management, 202
Structural analysis, 25 Transformations, 9
Structural design, 9 Transportation networks, 810
Structural reliability, 219 Trees, 307
Supply chain management, 291 Trucks, 219
Surveys, 59 Tunneling, 438
Sustainable development, 143, 413,
768 Underground construction, 438
United Kingdom, 143
Taiwan, 266
Technology, 85, 110, 161, 242, 291, Ventilation, 802
323, 826, 858, 875, 883
Telecommunication, 355 Wind power, 520
Tennessee, 794 Windows, 17
Thermal factors, 552
Three-dimensional models, 363, 430,
494, 552
