Computing in Civil Engineering 2011
PROCEEDINGS OF THE 2011 ASCE INTERNATIONAL
WORKSHOP ON COMPUTING IN CIVIL ENGINEERING
SPONSORED BY
Technical Council on Computing and Information Technology
of the American Society of Civil Engineers
EDITED BY
Yimin Zhu, Ph.D.
R. Raymond Issa, Ph.D., J.D., P.E., F.ASCE
www.pubs.asce.org
Any statements expressed in these materials are those of the individual authors and do not
necessarily represent the views of ASCE, which takes no responsibility for any statement
made herein. No reference made in this publication to any specific method, product,
process, or service constitutes or implies an endorsement, recommendation, or warranty
thereof by ASCE. The materials are for general information only and do not represent a
standard of ASCE, nor are they intended as a reference in purchase specifications, contracts,
regulations, statutes, or any other legal document. ASCE makes no representation or
warranty of any kind, whether express or implied, concerning the accuracy, completeness,
suitability, or utility of any information, apparatus, product, or process discussed in this
publication, and assumes no liability therefore. This information should not be used without
first securing competent advice with respect to its suitability for any general or specific
application. Anyone utilizing this information assumes all liability arising from such use,
including but not limited to infringement of any patent or patents.
ASCE and American Society of Civil Engineers—Registered in U.S. Patent and Trademark
Office.
This year, we received many high-quality papers. The workshop accepted
over 100 papers from 19 countries in four subject areas: 1) novel engineering,
construction, and management technologies; 2) design, engineering, and analysis;
3) sustainable and resilient infrastructure; and 4) cutting-edge development. These
papers are the result of a rigorous peer-review process that began with the more
than 200 abstracts we received. Each abstract and paper was assigned to at least
two reviewers, and only the outstanding papers have been collected in these
proceedings. They are a genuine representation of the very best research being
conducted in this community.
Enjoy your stay in Miami! Don’t miss the beach and the sunshine!
Yimin Zhu, Ph.D. and Raymond Issa, Ph.D., P.E., J.D., FASCE
Workshop Co-Chairs
2011 ASCE International Workshop on Computing in Civil Engineering
Acknowledgments
Organizing Committee
Raymond Issa (co-chair) Yimin Zhu (co-chair)
Technical Committee
Amr Kandil Mani Golparvar-Fard Svetlana Olbina
Baabak Ashuri Mehmet Bayraktar Wallied Orabi
Huanqing Lu Pinchao Liao Zhigang Shen
Ioannis Brilakis Salman Azhar
John Messner SangHyun Lee
Reviewers
Amr Kandil Ioannis Brilakis Pinchao Liao
Baabak Ashuri Ivan Mutis Qingbin Cui
Benny Raphael Javier Irizarry R. Raymond Issa
Boong Yeol Ryoo Jerry Gao Renate Fruchter
Burcin Becerik-Gerber Jesus de la Garza Salman Azhar
Burcu Akinci Jie Gong SangHyun Lee
Carlos Caldas Jochen Teizer Semiha Kiziltas
Chimay Anumba John Haymaker Sergio Scher
Don Chen John Messner Svetlana Olbina
Esin Ergen Ken-Yu Lin Tarek Mahfouz
Esther Obonyo Kihong Ku Tomasz Arciszewski
Federico Boadilla Kincho Law Vineet Kamat
Fernanda Leite Lucio Soibelman Wallied Orabi
Ghang Lee Mani Golparvar-Fard Wassim Barham
Giovanni Migliaccio Mark Shaurette Wei Wu
Guillermo Salazar Mehmet Bayraktar William O'Brien
Hani Melhem Nashwan Dawood Yacine Rezgui
Hazar Dib Nora El-Gohary Yimin Zhu
Huanqing Lu Omar El-Anwar Yung-Ching Shen
Hyunjoo Kim Omar Tatari Zhigang Shen
Ian Flood Patricio Vela
Ian Smith Patrick Hsieh
Contents
Application of Dimension Reduction Techniques for Motion Recognition:
Construction Worker Behavior Monitoring ..................................................................... 102
SangUk Han, SangHyun Lee, and Feniosky Peña-Mora
Civil and Environmental Engineering Challenges for Data Sensing and Analysis ........110
Gauri M. Jog, Shuai Li, Burcin Becerik-Gerber, and Ioannis Brilakis
Automated 3D Structure Inference of Civil Infrastructure Using a Stereo
Camera Set............................................................................................................................118
H. Fathi, I. Brilakis, and P. Vela
Unstructured Construction Document Classification Model
through Support Vector Machine (SVM) .......................................................................... 126
Tarek Mahfouz
Automatic Look-Ahead Schedule Generation System for the Finishing Phase
of Complex Projects for General Contractors .................................................................. 134
N. Dong, M. Fischer, and Z. Haddad
Sustainable Construction Ontology Development Using Information
Retrieval Techniques ........................................................................................................... 143
Yacine Rezgui and Adam Marks
Machine Vision Enhanced Post-Earthquake Inspection ................................................. 152
Zhenhua Zhu, Stephanie German, Sara Roberts, Ioannis Brilakis,
and Reginald DesRoches
Continuous Sensing of Occupant Perception of Indoor Ambient Factors ..................... 161
Farrokh Jazizadeh, Geoffrey Kavulya, Laura Klein,
and Burcin Becerik-Gerber
Effects of Color, Distance, and Incident Angle on Quality of 3D Point Clouds ............. 169
Geoffrey Kavulya, Farrokh Jazizadeh, and Burcin Becerik-Gerber
The Effective Acquisition and Processing of 3D Photogrammetric Data
from Digital Photogrammetry for Construction Progress
Measurement ....................................................................................................................... 178
C. Kim, H. Son, and C. Kim
Data Transmission Network for Greenhouse Gas Emission Inspection ......................... 186
Qinyi Ding, Xinyuan Zhu, and Qingbin Cui
Wearable Physiological Status Monitors for Measuring and Evaluating
Workers’ Physical Strain: Preliminary Validation ........................................................... 194
Umberto C. Gatti, Giovanni C. Migliaccio, and Suzanne Schneider
A Framework for Optimizing Detour Planning and Development
around Construction Zones ................................................................................................ 202
M. Jardaneh, A. Khalafallah, A. El-Nashar, and N. Elmitiny
A Multi-Objective Decision Support System for PPP Funding Decisions ...................... 210
Morteza Farajian and Qingbin Cui
Truck Weigh-in-Motion Using Reverse Modeling and Genetic Algorithms .................. 219
G. Vala, I. Flood, and E. Obonyo
The Application of Artificial Neural Network for the Prediction
of the Deformation Performance of Hot-Mix Asphalt ..................................................... 227
Ilseok Oh and Wasim Barham
An Approach for Occlusion Detection in Construction Site Point Cloud Data ............. 234
Dennis J. Bouvier, Chris Gordon, and Matthew McDonald
Applications of Machine Learning in Pipeline Monitoring ............................................. 242
Yujie Ying, Joel Harley, James H. Garrett, Jr., Yuanwei Jin,
Irving J. Oppenheim, Jun Shi, and Lucio Soibelman
Using Electimize to Solve the Time-Cost-Tradeoff Problem
in Construction Engineering .............................................................................................. 250
Mohamed Abdel-Raheem and Ahmed Khalafallah
Vision-Based Crane Tracking for Understanding Construction Activity ...................... 258
J. Yang, P. A. Vela, J. Teizer, and Z. K. Shi
Design of Optimization Model and Program to Generate Timetables
for a Single Two-Way High Speed Rail Line under Disturbances .................................. 266
T. W. Ho, C. Y. Lin, S. M. Tseng, and C. C. Chou
Learning and Classifying Motions of Construction Workers and Equipment
Using Bag of Video Feature Words and Bayesian Learning Methods............................ 274
Jie Gong and Carlos H. Caldas
Evolutionary Software Development to Support Ethnographic Action Research ........ 282
Timo Hartmann
Determining the Benefits of an RFID-Based System for Tracking
Pre-Fabricated Components in a Supply Chain ............................................................... 291
E. Ergen, G. Demiralp, and G. Guven
Coordination of Converging Construction Equipment in Disaster Response ............... 299
Albert Y. Chen and Feniosky Peña-Mora
A Management System of Roadside Trees Using RFID and Ontology ........................... 307
Nobuyoshi Yabuki, Yuki Kikushige, and Tomohiro Fukuda
Transforming IFC-Based Building Layout Information into a Geometric
Topology Network for Indoor Navigation Assistance ...................................................... 315
S. Taneja, B. Akinci, J. H. Garrett, L. Soibelman, and B. East
Business Models for Decentralised Facility Management Supported by Radio
Frequency Identification Technology ................................................................................ 323
Z. Cong, L. Allan, and K. Menzel
Requirements for Autonomous Crane Safety Monitoring............................................... 331
Xiaowei Luo, Fernanda Leite, and William J. O’Brien
A Knowledge-Directed Information Retrieval and Management Framework
for Energy Performance Building Regulations ................................................................ 339
Lewis John McGibbney and Bimal Kumar
Novel Sensor Network Architecture for Intelligent Building Environment
Monitoring and Management ............................................................................................ 347
Qian Huang, Xiaohang Li, Mark Shaurette, and Robert F. Cox
Planning of Wireless Networks with 4D Virtual Prototyping for Construction
Site Collaboration................................................................................................................ 355
O. Koseoglu
Comparison of Camera Motion Estimation Methods for 3D Reconstruction
of Infrastructure .................................................................................................................. 363
Abbas Rashidi, Fei Dai, Ioannis Brilakis, and Patricio Vela
Multi-Image Stitching and Scene Reconstruction for Evaluating Change
Evolution in Structures ....................................................................................................... 372
Mohammad R. Jahanshahi and Sami F. Masri
Computer Vision Techniques for Worker Motion Analysis to Reduce
Musculoskeletal Disorders in Construction ...................................................................... 380
Chunxia Li and SangHyun Lee
A Novel Crack Detection Approach for Condition Assessment of Structures ............... 388
Mohammad R. Jahanshahi and Sami F. Masri
Developing an Efficient Algorithm for Balancing Mass-Haul Diagram......................... 396
Khaled Nassar, Ossama Hosney, Ebrahim A. Aly, and Hesham Osman
Occlusion Handling Method for Ubiquitous Augmented Reality Using
Reality Capture Technology and GLSL ............................................................................ 494
Suyang Dong, Chen Feng, and Vineet R. Kamat
A Visual Monitoring Framework for Integrated Productivity and Carbon
Footprint Control of Construction Operations ................................................................ 504
Arsalan Heydarian and Mani Golparvar-Fard
Building Information Modeling Implementation—Current and Desired Status .......... 512
Pavan Meadati, Javier Irizarry, and Amin Akhnoukh
Simulating the Effect of Access Road Route Selection on Wind
Farm Construction .............................................................................................................. 520
Mohamed El Masry, Khaled Nassar, and Hesham Osman
Toward the Exchange of Parametric Bridge Models Using a Neutral
Data Format......................................................................................................................... 528
Yang Ji, André Borrmann, and Mathias Obergrießer
An Agent-Based Approach to Model the Effect of Occupants’ Energy Use
Characteristics in Commercial Buildings ......................................................................... 536
Elie Azar and Carol Menassa
Incorporating Social Behaviors in Egress Simulation ..................................................... 544
Mei Ling Chu, Xiaoshan Pan, and Kincho Law
3D Thermal Modeling for Existing Buildings Using Hybrid LIDAR System................ 552
Y. Cho and C. Wang
A Generalized Time-Scale Network Simulation Using Chronographic
Dynamics Relations ............................................................................................................. 560
A. Francis and E. Miresco
Automating Codes Conformance in Structural Domain ................................................. 569
Nawari O. Nawari
Benefits of Implementing Building Information Modeling for Healthcare
Facility Commissioning ...................................................................................................... 578
C. Chen, H. Y. Dib, and G. C. Lasker
A Real Time Decision Support System for Enhanced Crane Operations
in Construction and Manufacturing .................................................................................. 586
Amir Zavichi and Amir H. Behzadan
The Competencies of BIM Specialists: A Comparative Analysis
of the Literature Review and Job Ad Descriptions .......................................................... 594
M. B. Barison and E. T. Santos
Adaptive Guidance for Emergency Evacuation for Complex
Building Geometries............................................................................................................ 603
Chih-Yuan Chu
Improving the Robustness of Model Exchanges Using Product Modeling
“Concepts” for IFC Schema ................................................................................................611
Manu Venugopal, Charles Eastman, Rafael Sacks, and Jochen Teizer
Framework for an IFC-Based Tool for Implementing Design for
Deconstruction (DfD) .......................................................................................................... 619
A. Khalili and D. K. H. Chua
Temporary Facility Planning of a Construction Project Using BIM (Building
Information Modeling) ....................................................................................................... 627
Hyunjoo Kim and Hongseob Ahn
Energy Simulation System Using BIM (Building Information Modeling)..................... 635
Hyunjoo Kim and Kyle Anderson
Semantic Modeling for Automated Compliance Checking ............................................. 641
D. M. Salama and N. M. El-Gohary
Ontology-Based Standardized Web Services for Context Aware Building
Information Exchange and Updating ................................................................................ 649
J. C. P. Cheng and M. Das
IFC-Based Construction Industry Ontology and Semantic Web Services
Framework........................................................................................................................... 657
L. Zhang and R. R. A. Issa
Using Laser Scanning to Access the Accuracy of As-Built BIM ...................................... 665
B. Giel and R. R. A. Issa
BIM Facilitated Web Service for LEED Automation ...................................................... 673
Wei Wu and Raja R. A. Issa
Optimization of Construction Schedules with Discrete-Event Simulation
Using an Optimization Framework ................................................................................... 682
M. Hamm, K. Szczesny, V. V. Nguyen, and M. König
Development of 5D CAD System for Visualizing Risk Degree and Progress
Schedule for Construction Project ..................................................................................... 690
Leen-Seok Kang, Hyoun-Seok Moon, Hyeon-Seung Kim, Gwang-Yeol Choi,
and Chang-Hak Kim
Integration of Safety in Design through the Use of Building
Information Modeling ......................................................................................................... 698
Jia Qi, R. R. A. Issa, J. Hinze, and S. Olbina
A Study of Sight Area Rate Analysis Algorithm on Theater Design ............................... 706
Yeonhee Kim and Ghang Lee
Algorithm for Efficiently Extracting IFC Building Elements from an IFC
Building Model .................................................................................................................... 713
Jongsung Won and Ghang Lee
Analysis of Critical Parameters in the ADR Implementation Insurance Model ........... 744
Xinyi Song, Carol C. Menassa, Carlos A. Arboleda, and Feniosky Peña-Mora
Application of Latent Semantic Analysis for Conceptual Cost Estimates:
Assessment in the Construction Industry ......................................................................... 752
Tarek Mahfouz
Dynamic Life Cycle Assessment of Building Design and Retrofit Processes ................. 760
Sarah Russell-Smith and Michael Lepech
A Real Options Approach to Evaluating Investment in Solar Ready Buildings ............ 768
B. Ashuri and H. Kashani
Agile IPD Production Plans As an Engine of Process Change ........................................ 776
Renate Fruchter and Plamen Ventsislavov Ivanov
An Automated Collaborative Framework to Develop Scenarios for Slums:
Upgrading Projects According to Implementation Phases
and Construction Planning................................................................................................. 785
O. E. Anwar and T. A. Aziz
Preparing for a New Madrid Earthquake: Accelerating and Optimizing
Temporary Housing Decisions for Shelby County, TN .................................................... 794
Omar El-Anwar, Khaled El-Rayes, and Amr Elnashai
Requirements for an Integrated Framework of Self-Managing HVAC Systems .......... 802
Xuesong Liu, Burcu Akinci, James H. Garrett, Jr., and Mario Bergés
A Web-Based Resource Management System for Damaged
Transportation Networks ................................................................................................... 810
W. Orabi
Time, Cost, and Environmental Impact Analysis on Construction Operations ............ 818
Gulbin Ozcan-Deniz, Victor Ceron, and Yimin Zhu
Learning to Appropriate a Project Social Network System Technology ........................ 826
Ivan Mutis and R. R. A. Issa
Decision Support for Building Renovation Strategies...................................................... 834
H. Yin, P. Stack, and K. Menzel
Environmental Performance Analysis of a Single Family House Using BIM ................ 842
A. A. Raheem, R. R. A. Issa, and S. Olbina
Cutting-Edge Development
Enhancing Student Learning in Structures Courses with Building
Information Modeling ......................................................................................................... 850
Wasim Barham, Pavan Meadati, and Javier Irizarry
Using Applied Cognitive Work Analysis for a Superintendent to Examine
Technology-Supported Learning Objectives in Field
Supervision Education ........................................................................................................ 858
Fernando A. Mondragon Solis and William J. O’Brien
Developing and Testing a 3D Video Game for Construction Safety Education ............. 867
Jeong Wook Son, Ken-Yu Lin, and Eddy M. Rojas
Attention and Engagement of Remote Team Members in Collaborative
Multimedia Environments .................................................................................................. 875
R. Fruchter and H. Cavallin
Teaching Design Optioneering: A Method for Multidisciplinary Design
Optimization ........................................................................................................................ 883
David Jason Gerber and Forest Flager
Synectical Building of Representation Space: A Key to Computing Education ............ 891
Sebastian Koziolek and Tomasz Arciszewski
Enhancing Construction Engineering and Management Education Using
a COnstruction INdustry Simulation (COINS) ................................................................ 899
T. M. Korman and H. Johnston
Effectiveness of Ontology-Based Online Exam Platform for Programming
Language Education ........................................................................................................... 907
Chia-Ying Lin and Chien-Cheng Chou
Indexes
Author Index........................................................................................................................ 915
Subject Index ....................................................................................................................... 919
A Study of Implementation of IP-S2 Mobile Mapping Technology for Highway
Asset Condition Assessment
ABSTRACT
The national highway infrastructure is continually deteriorating and in need
of reconstruction and repair, as reflected in the national highways' poor grades in
the 2005 and 2009 ASCE report cards. Because highways are major arteries for the
flow of goods and people in the United States, their poor condition can lead to
fatalities, economic distress, and frustration among motorists. Prior to performing
maintenance, state DOTs need to assess damage and determine which highway
assets need to be repaired. Data collection techniques have not been standardized
in the United States, but most state DOTs make extensive use of manpowered
collection crews. Manpowered crews' data collection efforts are time consuming,
costly, and potentially unsafe. Mobile mapping enables DOTs to determine the
condition and location of assets while increasing safety for surveyors. Positioning
and visual recognition of assets are important when inspecting numerous dispersed
assets along highways. This paper presents a preliminary study of Topcon's IP-S2
Mobile Mapping system. Two separate but interrelated projects were conducted.
The first project's primary objectives were: (1) to measure the time it takes to
collect data using the IP-S2 method versus the traditional method; and (2) to
measure the accuracy of the data using the IP-S2 method versus the traditional
method. These tests were conducted at two speeds: slow and highway speed.
INTRODUCTION
Maintenance plays a critical role in the condition and operation of roads.
Given that road conditions in the U.S. are getting worse (ASCE 2005; ASCE 2009),
government must allocate funds for highway maintenance to keep highways from
becoming unserviceable. Ultimately, proper maintenance will save money and
improve citizen satisfaction. Highways need to be maintained frequently for two
reasons: to ensure the safety of those who travel them and to mitigate economic
stress that can result from road deterioration (de la Garza et al., 1998).
The Federal government understands the criticality of maintaining the
nation’s arteries for the transport of people, goods, and services. Along with
nationwide awareness of bridge maintenance following the I-35 bridge collapse in
Minnesota, the government has imposed national mandates to improve critical
highway assets such as pavement markings and traffic signs (Rasdorf et al., 2009).
Moreover, different methods have been applied over the years by most of the
country’s state DOTs to prioritize maintenance depending on the visual condition of
the highways and their assets (Bandara and Gunaratne, 2001). Most of these
methods gather data concerning the condition of the highway pavement, bridge
decks, and other essential assets. This approach generally aims to allocate funds for
maintaining specific highway assets depending on their importance and safety
concerns (Bandara and Gunaratne, 2001).
BACKGROUND
Center for Highway Asset Management ProgramS (CHAMPS)
The Commonwealth of Virginia leads the way in highway asset
management with Virginia’s Department of Transportation (VDOT) performance-
based road maintenance. In 2001, Virginia Tech (VT) and VDOT established the
VT-VDOT Partnership for Highway Maintenance Monitoring Program (HMMP).
Under this partnership, Virginia Tech’s CHAMPS provides VDOT with
independent assessment and ratings of Virginia’s Highways (Piñero, 2003). These
results are published in a Maintenance Rating Program (MRP) report, which VDOT
uses to assess highway conditions and is the basis for the overall performance of
maintenance contractors. In this study, Virginia Tech’s CHAMPS collected asset
condition data with assistance from Topcon (Howerton and Sideris, 2010). The
Virginia Tech Transportation Institute (VTTI) Smart Road was used as the data
collection test-bed. VTTI is Virginia Tech’s largest university-level research center
and is mainly involved with research focused on the general transportation field.
Highway Asset Management Research
Current research has focused on three areas of asset management:
performance measurement, decision-making and data collection. Elements of asset
management are closely tied together, and data collection is the bridge between
performance measurement and decision-making. Decision making prioritization
models cannot be implemented without properly assessing the condition of the
highway assets (Durango-Cohen and Sarutipand 2006; Vanier 2001).
Because data collection is time consuming and costly, and highway
departments operate under limited budgets, innovative new approaches are needed
(Bandara and Gunaratne, 2001). Advanced technology will allow agencies
to continue maintaining highways during periods of budget shortfall. Considering
the importance of inventory and location data for low-cost capital assets, new
technology must be used.
Literature agrees that compiling an inventory of assets and assessing asset
performance are critical elements of highway asset management (Hassanain et al.
2003, Rasdorf et al. 2009). Collecting baseline data on assets of a section of
highway creates inventory in a DOT’s database. From this inventory, random sites
can be selected and assessed based on predefined criteria. Photographic
documentation is a critical element involved in creating an information technology
(IT) database (Rasdorf et al., 2009). VDOT has recently initialized a pilot project
for photographic documentation for all asset failures within the Stanton South
TAMS Project (Roca, 2009).
control group began working on the Smart Road to the time all data was collected.
Data was collected using the IP-S2 system at slow speed (15-20mph) and highway
speed (60-65mph) after the traditional inspections were completed. The slow speed
simulates vehicle inspections from the shoulder, while highway speed evaluation
simulates the vehicle driving normally along the road. Each data run was timed
from the time on the Smart Road to the time a data run was completed. The data
was then post-processed using Topcon's Geoclean software; this time was
recorded as well.
Research Results
The primary analysis considers the time to collect, process, and analyze the
data, and whether varying the IP-S2 collection speed changes the results. As shown
in Table 1, the average time to collect and analyze the data using the traditional
inspection was 59 minutes, while the averages for the IP-S2 runs were 70 minutes
with interactive processing and 53 minutes with batch processing. The traditional
inspections require travel time, stopping time, and walking time; the interactive
processing option with the IP-S2 required 17 minutes of data processing, while the
batch processing option did not. With the IP-S2, assets could be located and
zoomed in on to assess their condition.
As shown in Table 1, Geoclean post-processing accounted for a major
portion of the total time. If the processing time were eliminated or automated,
IP-S2 inspections could be faster than traditional ones; with interactive Geoclean
processing time included, traditional data collection is slightly faster than
IP-S2-based inspections.
Table 1. Average Data Collection, Processing, and Analysis Times

Activity                     | Traditional | IP-S2 (10-15 mph) | IP-S2 (60-65 mph)
Data Collection              | n/a         | 6 min             | 1 min 30 sec
Geoclean Processing          | n/a         | 17 min            | 17 min
Data Analysis                | 59 min      | 38 min            | 41 min
GNSS Static Alignment        | n/a         | 10 min            | 10 min
Total Interactive Processing | n/a         | 71 min            | 70 min
Total Batch Processing       | n/a         | 54 min            | 53 min
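The two totals in Table 1 differ only in whether the 17-minute Geoclean step is counted. A minimal sketch, using the per-activity times taken directly from the table and rounding totals up to the whole minute, makes the arithmetic explicit:

```python
import math

# Per-activity times in minutes, taken from Table 1 (n/a entries omitted).
ip_s2 = {
    "slow":    {"collection": 6.0, "geoclean": 17.0, "analysis": 38.0, "gnss": 10.0},
    "highway": {"collection": 1.5, "geoclean": 17.0, "analysis": 41.0, "gnss": 10.0},
}

def total_minutes(run: dict, batch: bool) -> int:
    """Sum a run's activity times; batch processing skips the Geoclean step."""
    minutes = run["collection"] + run["analysis"] + run["gnss"]
    if not batch:
        minutes += run["geoclean"]
    return math.ceil(minutes)  # round up to the whole minute, as in Table 1

for speed, run in ip_s2.items():
    print(speed,
          "interactive:", total_minutes(run, batch=False),
          "batch:", total_minutes(run, batch=True))
```

Both batch totals come in under the 59-minute traditional baseline, which is why eliminating or automating the interactive Geoclean step tips the comparison in favor of the IP-S2.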
Research Results
In order to research the IP-S2 unit's recording capabilities, three different
speeds were tested: 30, 45, and 60 mph. According to the data received from these
three runs, there was no significant difference based on the speed of the vehicle.
This is attributed to the fact that the IP-S2 unit records photographic data every few
milliseconds; as such, highway speeds do not affect the quality of the data
received.
Distance was the second of the three parameters identified for this study.
The study shows that asset condition can be assessed when the IP-S2 unit is within
twelve feet of the asset. Due to asset size, certain assets could not be assessed even
within twelve feet. Given the frame rate of the photographs taken by the IP-S2, the
data collected will always contain at least one frame in which the asset item is
within twelve feet.
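The claim that at least one frame always falls within twelve feet can be sanity-checked with back-of-the-envelope arithmetic. The 10 ms frame interval below is an illustrative assumption (the paper says only that the unit records "every few milliseconds"), not a published IP-S2 specification:

```python
# Along-track spacing of consecutive frames at a given vehicle speed.
FT_PER_MILE = 5280.0

def feet_between_frames(speed_mph: float, frame_interval_s: float) -> float:
    """Distance the vehicle travels between two consecutive photographs."""
    feet_per_second = speed_mph * FT_PER_MILE / 3600.0
    return feet_per_second * frame_interval_s

# Hypothetical 10 ms frame interval at the fastest tested speed (60 mph):
spacing = feet_between_frames(60.0, 0.010)
print(f"{spacing:.2f} ft between frames")
```

Even at highway speed, consecutive frames would be less than a foot apart under this assumption, so any asset that passes within twelve feet of the vehicle's path appears in many frames.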
Finally, lighting conditions affect the clarity of data. Data collected at night
with no lighting provided (apart from the vehicle’s headlights) conveyed little
information for the assets depicted, while the data collected during the day can be
readily assessed.
Data could not be assessed on three occasions, due to distance, obstruction,
and size. Asset items situated more than 40 feet from the shoulder could not be
assessed due to the range of the camera. Certain asset items found behind another
asset item could not be assessed due to the obstruction (e.g., a paved ditch located
behind a guardrail). Finally, asset items smaller than 2 ft by 2 ft could not be
assessed due to their small relative size. The findings per asset item are presented
in Table 4.
Table 4. Overall Quality of data per Asset Item
ASSET GROUP | Asset Item | Overall Clarity | Total Number of | Number of Asset Items
CONCLUSIONS
Mobile mapping is used in a variety of fields, from terrain modeling to
emergency management. This research was performed to assess the capabilities of
Topcon's IP-S2 mobile mapping technology for VDOT. In particular, two projects
were completed. The first compared the speed and accuracy with which surveyors
assess certain roadside assets using traditional manpowered crews versus the IP-S2
technology, whereas the second evaluated the quality of the data received through
the IP-S2 in terms of the clarity of specific assets.
Under the experimental design conditions and the various processing
workflows, it was determined that the timed comparison between human-based and
IP-S2-based inspection depends directly on the processing and analysis methods
employed. Analyzing the data with batch-mode processing is faster than traditional
collection.
From the experimental data, 96% of IP-S2 runs were within a 95%
confidence level of the manually collected data. The IP-S2 cannot assess certain
failure codes of the abovementioned assets, including missing guardrail bolts,
damage to the back of guardrail components, turned signs, and missing object
markers.
Under the experimental design conditions of the second project, distance
and lighting conditions greatly affected the ability to assess assets, whereas the
speed of the collection vehicle did not. In particular, the condition of assets found
behind the guardrail was impossible to assess due to distance and obstruction.
Finally, the IP-S2 system offers the unique ability to replay the actual
course of the vehicle. With the software provided, inspectors can stop or rewind
the photographic data to search for or inspect assets. This in itself offers numerous
benefits, since data assessment can be conducted at any given time and assets can
be inspected as many times as desired.
ACKNOWLEDGEMENTS
The research reported in this paper was conducted at the Center for Highway
Asset Management Programs (CHAMPS) and funded by the Virginia Department
of Transportation (VDOT). The opinions and findings presented in this paper are
those of the authors and do not necessarily represent the views of VDOT or
Topcon.
REFERENCES
ASCE, (2005). “The 2005 Report Card for America’s Infrastructure.”
http://www.asce.org/reportcard/2005 (May 25 2009).
ASCE, (2009). “The 2009 Report Card for America’s Infrastructure.”
http://www.asce.org/reportcard/2009 (May 25 2009).
Bandara, N., and Gunaratne, M. (2001). “Current and Future Pavement
Maintenance Prioritization Based on Rapid Visual Condition Evaluation.” J. of
Transportation Engr., 127(2), 116-123.
De la Garza, J.M., Drew, D.R., and Chasey, A.D. (1998). “Simulating Highway
Infrastructure Management Policies”. J. of Management in Engr., 14(5), 64-72.
Durango-Cohen, P.L., and Sarutipand, P. (2006). “Coordination of Maintenance
and Rehabilitation Policies for Transportation Infrastructure.” Applications of
Advanced Technology in Transportation 2006, 213, 34.
Hassanain, M., Froese, T., and Vanier, D. (2003). “Framework Model for Asset
Maintenance Management.” J. of Performance of Constructed Facilities, 17
(1), 51-64.
Howerton, C.G. and Sideris, D. (2010). “A Study of Implementation of IP-S2
Mobile Mapping Technology for Highway Asset Condition Assessment.”
Project & Report, presented to Virginia Polytechnic Institute and State
University VA, for fulfillment of the requirements for the degree of M.S. in Civil
and Environmental Engineering.
Karimi, H., Khattak, A.J. and Hummer, J. (2000). “Evaluation of Mobile Mapping
System for Roadway Data Collection.” J. of Computing in Civil Engineering,
14(3), 168-173.
Medina, R., Haghani, A., and Harris, N. (2009). “Sampling Protocol for Condition
Assessment of Selected Assets.” J. of Transportation Engr., 127(2), 116-123.
Mizusawa, D., and McNeil, S. (2006). “The Role of Advanced Technology in Asset
Management: International Experiences.” Applications of Advanced Technology
in Transportation 2006 (AATT 2006), 213, 33.
Pinero, J.C. (2003). “A Framework for Monitoring Performance-Based Road
Maintenance.” PhD Dissertation, presented to Virginia Polytechnic Institute and
State University VA, for fulfillment of the requirements for the degree of Doctor
of Philosophy in Industrial and Systems Engineering.
Rasdorf, W., Hummer, J., Harris, E., and Sitzabee, W. (2009). “IT Issues for the
Management of High-Quantity, Low-Cost Assets.” J. of Computing in Civil
Engineering, 135(4), 183-196.
Roca, I. (2009). “Visualization of Failed Highway Assets through Geo-Coded
Pictures in Google Earth and Google Maps.” Project & Report, presented to
Virginia Polytechnic Institute and State University VA, for fulfillment of the
requirements for the degree of M.S. in Civil and Environmental Engineering.
Tao, V. and Li, J. (2007). “Advances in Mobile Mapping Technology.” London:
Taylor & Francis Group.
Vanier, D. J. (2001). “Why Industry Needs Asset Management Tools.” J. of
Computing in Civil Engineering, 15(1).
An Automated Stabilization Method for Spatial-to-Structural Design
Transformations
ABSTRACT
INTRODUCTION
spatial design, and (4) finally adjusting the spatial design to comply with the
properties of the initial spatial design. During the second transformation, the
finite element method is needed, for which the structural design must be stable.
Because the first transformation step adds structural elements to the spatial
design without guaranteeing a stable structural design, it is necessary to include
a method that automates the stabilization of the structural system. In this paper,
instability refers to the kinematically undetermined state of a structural system for
which, due to the lack of a sufficient number of constraints (see Figure 1(b) and
1(c)), mechanisms may occur. Mechanisms represent parts of the structural system
that are able to move freely with respect to other parts. The number of unique
mechanisms is a measure of the degree of instability of the system.
Figure 1. (a) Schematic research engine, (b) instability due to lack of support,
(c) instability due to lack of elements or support.
METHOD REQUIREMENTS
as shown in Figure 2; (2) are orthogonally assembled; and (3) are built up of rods.
Rods are defined here as linear elements that are hinge-connected to each other. The
orthogonal assembly refers to the structural system key points only, which means
that the rods themselves do not necessarily need to be positioned axes-aligned with
the global axes.
Based on the above restrictions, the method must be effective: it must be able
to generate a solution for any possible problem within the previously defined scope.
Secondly, the method must be efficient: to stabilize a system, a minimum of
adjustments must be made to avoid unnecessary elements that hamper further design
explorations.
METHOD DESCRIPTION
Figure 3. (a) Grid coordinates; restrictions: (b) spatial diagonal, (c) span along
more than a single grid increment.
Figure 4. (a) Rotational mechanism, (b) DOFs: keypoint 5(x), 6(x,y), 7(y).
All possible mechanisms, each defined by their set of DOFs, are given by
finding the null space of the structural design's stiffness matrix (Hofmeyer and
Russell 2009). Using this, the method starts with the first null vector (i.e.
mechanism) and its first DOF. If no effective addition can be found for this DOF, the
method selects the next DOF. When all DOFs of a null vector have been tried
without success, the method selects the next null vector, and so on. Note that
because the mathematical procedure yields a sequence of mechanisms that is not
related to structural engineering logic, the method inevitably selects a practically
random first mechanism and DOF to solve.
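The mechanism-detection step can be sketched as follows. This is a minimal illustration using a generic SVD-based null-space computation; the function name and the toy two-DOF spring system are our assumptions, not the authors' implementation:

```python
import numpy as np

def mechanisms(K, tol=1e-8):
    """Orthonormal basis for the null space of the stiffness matrix K.
    Each basis vector is one mechanism; the number of vectors measures
    the degree of instability of the system."""
    _, s, vt = np.linalg.svd(K)
    rank = int(np.sum(s > tol * max(s[0], 1.0)))
    return vt[rank:].T  # columns span the strain-free (free-moving) motions

# Toy example: two collinear DOFs joined by a single unit spring. K is
# singular because translating both DOFs together stores no energy.
K = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
modes = mechanisms(K)
assert modes.shape[1] == 1           # one mechanism (rigid-body translation)
assert np.allclose(K @ modes, 0.0)   # it moves without restoring forces
```

A fully constrained system yields an empty basis, so the column count directly gives the number of unique mechanisms mentioned above.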
The sequence in which surrounding key points are found may influence the
final solution and thus needs explanation. The search for surrounding key points
starts with a search for axes-aligned key points as shown in Figure 5.
Figure 5. Axes-aligned key points for (a) x-axis, (b) y-axis, and (c) z-axis, selected
DOF key point is at the origin.
Assume the key point from the DOF has coordinates (i,j,k), then the existence of
surrounding key points is checked using the same sequence as used in formula (4).
Note that other possibilities are excluded due to the conditions in formula (2) and (3).
DOF axis x: I (i+1, j, k), II (i−1, j, k)
DOF axis y: I (i, j+1, k), II (i, j−1, k)    (4)
DOF axis z: I (i, j, k+1), II (i, j, k−1)
Figure 6. Diagonally oriented key points for (a) xz-plane, (b) yz-planes, and (c)
xy-plane.
plane xz: 1 (i+1, j, k+1), 2 (i−1, j, k+1), 3 (i+1, j, k−1), 4 (i−1, j, k−1)
plane yz: 1 (i, j+1, k+1), 2 (i, j−1, k+1), 3 (i, j+1, k−1), 4 (i, j−1, k−1)    (6)
plane xy: 1 (i+1, j+1, k), 2 (i−1, j+1, k), 3 (i+1, j−1, k), 4 (i−1, j−1, k)
Planes xz and yz are considered before xy because for regular designs, which
have their height defined in z-direction, a vertical connection is expected to yield the
highest chance of success. If no axes-aligned or diagonally surrounding key point can
be found that is suitable to be rod connected to the DOF-keypoint, as mentioned at
the start of this section, the next DOF-keypoint will be selected for which the
procedure is repeated.
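The search order defined by formulas (4) and (6) can be written out explicitly. The sketch below is illustrative only; in particular, the +/− ordering within each candidate pair is an assumption, since the signs of the published formulas did not survive extraction:

```python
def axis_aligned_candidates(i, j, k):
    # Formula (4) order: x-axis pair first, then y, then z.
    return [(i + 1, j, k), (i - 1, j, k),
            (i, j + 1, k), (i, j - 1, k),
            (i, j, k + 1), (i, j, k - 1)]

def diagonal_candidates(i, j, k):
    # Formula (6) order: planes xz and yz are considered before xy.
    xz = [(i + 1, j, k + 1), (i - 1, j, k + 1), (i + 1, j, k - 1), (i - 1, j, k - 1)]
    yz = [(i, j + 1, k + 1), (i, j - 1, k + 1), (i, j + 1, k - 1), (i, j - 1, k - 1)]
    xy = [(i + 1, j + 1, k), (i - 1, j + 1, k), (i + 1, j - 1, k), (i - 1, j - 1, k)]
    return xz + yz + xy

def find_connection(dof_keypoint, keypoints):
    """Return the first existing surrounding key point in search order
    (axis-aligned before diagonal), or None if no candidate exists."""
    for cand in (axis_aligned_candidates(*dof_keypoint)
                 + diagonal_candidates(*dof_keypoint)):
        if cand in keypoints:
            return cand
    return None

# Axis-aligned neighbours are preferred over diagonal ones:
assert find_connection((0, 0, 0), {(1, 0, 1), (1, 0, 0)}) == (1, 0, 0)
# With no axis-aligned neighbour, an xz-plane diagonal is chosen first:
assert find_connection((0, 0, 0), {(1, 0, 1)}) == (1, 0, 1)
```

When `find_connection` returns None, the method moves on to the next DOF-keypoint, mirroring the fallback described above.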
DEMONSTRATION
Figure 7. System before and after adjustment (addition of rod between key
points 2 and 5): (a) 2 mechanisms, vector 1 (5 and 6 in x), vector 2 (5 and 8 in y),
(b) 1 mechanism, vector 1 (5 and 8 in y).
Using one more addition sequence, comparable to the previous one, the structural
design is completely stable. The method presented here has been implemented in
C++, visualized via OpenGL, and applied to a variety of academic and practical
complex structural designs, as shown in Figure 8.
solutions that were not or could not be conceived by hand. The range of problems
that the presented method can solve is limited to orthogonal systems built up from
(hinge-connected) rods. Currently, the method is being extended to rigidly connected
beams and flat shell elements, used both as members of the initial design and as
added elements.
REFERENCES
Austin, S., Baldwin, A., Li, B. and Waskett, P. (2000). "Analytical Design Planning
Technique (ADePT): a dependency structure matrix tool to schedule the building
design process." Construction Management and Economics 18(2), 173-182.
Camelo, D. M., and Mulet, E. (2010). "A multi-relational and interactive model for
supporting the design process in the conceptual phase." Automation in Construction
19(7), 964-974.
Chou J. S., Chen, H. M., Hou, C. C., and Lin, C.W. (2010). "Visualized EVM system for
assessing project performance.", Automation in Construction 19(5), 596-607.
Eilouti, B. H. (2009). "Design knowledge recycling using precedent-based analysis and
synthesis models." Design Studies 30(4), 340-368.
Hofmeyer, H. (2007). "Cyclic application of transformations using scales for spatially or
structurally determined design." Automation in Construction 16(1), 664-673.
Hofmeyer, H., and Russell, P. (2009). "Interaction between spatial and structural building
design: a finite element based program for the analysis of kinematically
indeterminable structural topologies." CONVR2009, Proceedings of the 9th
international conference on construction applications of virtual reality, Sydney,
Australia (November 5-6), 247-256.
Isikdag, U., and Underwood, J. (2010). "Two design patterns for facilitating Building
Information Model-based synchronous collaboration." Automation in Construction
19(5), 544-553.
Krish, S. (2010) "A practical generative design method." Computer-Aided Design, accepted,
in press.
Kuznetsov, E. N. (1988) "Underconstrained Structural Systems." International Journal of
Solids and Structures 24(2), 153-163.
Maher, M. L. (2000). "A Model of Co-evolutionary Design." Engineering with Computers
16(3-4), 195-208.
Nelson, B. A., Wilson, J. O., Rosen, D., and Yen, J. (2009). "Refined metrics for measuring
ideation effectiveness" Design Studies 30(6), 737-743.
Rafiq, M. Y., Mathews, J. D., and Bullock, G. N. (2003). "Conceptual Building Design –
Evolutionary Approach" Journal of Computing in Civil Engineering 17(3), 150-158.
Volokh, K. Y., and Vilnay, O. (1997). "'Natural', 'Kinematic' and 'Elastic' Displacements of
Underconstrained Structures" International Journal of Solids and Structures 34(8),
911-930.
Zang, W., and Wang, G. (2010). "A generative concept design model based on parallel
evolutionary strategy." CSCWD, Proceedings of the 2010 14th International
Conference on Computer Supported Cooperative Work in Design, Shanghai, China
(April 14-16), 748 - 752.
Determination of The Optimal Positions of Window Blinds
Through Multi-Criteria Search
B. Raphael1
1Assistant Professor, Department of Building, National University of Singapore.
Email: bdgbr@nus.edu.sg
ABSTRACT
INTRODUCTION
It is quite well known that parameters that influence the Indoor Environment
Quality strongly interact with each other (Gero et al. 1983, Wright et al. 2002). For
example, increasing the natural daylight in a room might increase the amount of heat
transmitted. Design of building systems should consider trade-offs between such
conflicting objectives (Diakaki et al. 2008). Multi-criteria search and optimization
techniques have been developed for this purpose and these have been successfully
applied to many design tasks.
This paper presents a new algorithm called Relaxed Restricted Pareto (RR-
Pareto) for selecting a single solution that achieves a reasonable trade off among
conflicting objectives in a multi-objective optimization task. The application of the
algorithm to window blind control is presented to illustrate potential advantages.
RR-PARETO ALGORITHM
In this algorithm, the solution with the best trade-offs among all the objectives is
chosen using two pieces of information: (1) the ordering of the objectives according
to their importance, and (2) the sensitivity of each objective.
The algorithm starts off with a set of solutions that are generated by any
search technique, for example, PGSL (Raphael and Smith, 2003b) or Genetic
algorithms. Each solution point contains the values for all the objectives as well as the
decision variables (optimization variables). The set of solutions are sequentially
filtered according to the order of importance of objectives. At each stage of filtering,
the solution point with the best value for the current objective from among all the
points in the current set is chosen. All the points that lie outside the sensitivity band
of the chosen point are eliminated from the set. At the end of the process, one or more
points might remain in the solution set. The user is asked to choose the preferred
solution from this set; alternatively, in the automatic mode, the best solution
according to the most important criterion is selected.
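The sequential filtering described above can be sketched as follows. This is illustrative only; the function signature, the dictionary representation of solution points, and the minimization convention are our assumptions, not the author's implementation:

```python
def rr_pareto(solutions, objectives, sensitivity):
    """Select one solution from `solutions` (dicts of objective -> value,
    minimization assumed). `objectives` is ordered by importance;
    `sensitivity` maps each objective to the width of its band."""
    remaining = list(solutions)
    for obj in objectives:
        best = min(s[obj] for s in remaining)
        # eliminate points lying outside the sensitivity band of the best point
        remaining = [s for s in remaining if s[obj] <= best + sensitivity[obj]]
    # automatic mode: best value of the most important criterion
    return min(remaining, key=lambda s: s[objectives[0]])

# Hypothetical candidates (values in watts, purely illustrative):
candidates = [
    {"total": 10000, "lighting": 6000},
    {"total": 10050, "lighting": 5900},
    {"total": 10500, "lighting": 5800},
]
choice = rr_pareto(candidates, ["total", "lighting"],
                   {"total": 100, "lighting": 50})
assert choice == {"total": 10050, "lighting": 5900}
```

Note how the third candidate, although best on the secondary objective, is eliminated in the first stage because its total energy lies outside the primary sensitivity band; no weight factors are involved.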
quantified, these may have important characteristics and may represent unique and
distant solutions in the decision variable space.
Figure 1. Lighting and thermal load for various positions of a window blind.
Figure 2. Thermal load (W) versus lighting power (W) for the candidate blind positions.
In this example, all the blind positions up to 1.56 m are on the Pareto front.
Above this value, the lighting levels are higher than the prescribed values and there is
no reduction in lighting energy. At the same time there is an increase in the cooling
load. With pure Pareto filtering, it is not possible to select a single best blind position.
The selection process can be interpreted as follows. All the points are
equivalent with respect to the objective of minimizing lighting energy since all the
solutions lie within the sensitivity limit. However, some of the points are not good
with respect to the second objective and therefore, these are removed from the set.
The best point with respect to the primary objective is selected from the remaining set.
This point represents a good trade off between the two conflicting objectives.
This example illustrates how the new algorithm is able to select good
solutions without the use of arbitrary weight factors. Users are able to control the
selection of the optimal point by specifying sensitivities of objectives. These
sensitivities represent important domain knowledge and reflect the priorities of the
organization. How much increase in energy is unacceptable and what level of
increase in lighting is significant for visual comfort are really the decisions of the
facilities manager.
In the RR-Pareto control strategy, the optimal blind positions for all the hours
of a typical summer day from 8 am to 6 pm are computed, using total energy as the
primary objective function and the lighting energy as the secondary objective.
Thermal load is computed using the energy simulation software EnergyPlus (2005)
and lighting levels are computed using the lighting simulation software Radiance
(Ward and Shakespeare 1998). For each hour, the energy consumption of the optimal
control action is computed. This energy is compared with that of the second control
strategy. The difference between these two cases gives the energy savings that can be
achieved using the integrated control strategy.
The case study involves the first floor of an office building of size 36m x 18m
in Singapore, with the longer side oriented along the north-south direction. There are
windows with controllable blinds on the east and west facades, named W1 and W3
respectively. The window height is 2.4m and the ceiling is at a height of 2.7 m from
the floor. A 3D rendering of the building using Radiance is shown in Figure 3.
Table 2 summarizes the energy computations for all the hours of the day. The
second and third columns contain the optimal blind positions determined by the
control algorithm. The fourth column gives the lighting power and the fifth column
the cooling load. The sixth column gives the total energy for the optimal blind
positions. The last column contains the energy for the second control strategy.
The total savings in energy for the whole day with respect to the second
control strategy is 26.87%. A plot of the total energy for the two control strategies is
given in Figure 4.
Figure 4. Total energy (kWh) per hour from 8:00 to 18:00 for the optimal control and for Strategy 2.
It can be seen that for every hour, the optimal control results in lower energy
consumption. When the blinds are fully open, either too much solar gain causes the
cooling energy to be higher or the excessive brightness causes the blinds to be closed,
thereby increasing the lighting energy consumption. The only exception is at 13:00
hours, when the shading is just adequate to prevent excessive heat and light, causing a
dip in the energy consumption of the second control strategy. Only at this hour, the
external shade of 0.6 m width prevents direct sunlight from entering the room.
CONCLUDING REMARKS
A new algorithm for selecting a single solution that achieves reasonable trade-
offs among multiple objectives is presented in this paper. The algorithm has been
evaluated by applying it to the task of window blind control using the case study of an
office building. In the selected example, an energy savings of 26.87% is obtained
compared to a traditional control strategy. The control algorithm has already been
applied to a number of tasks such as personalized ventilation and light shelves. The
ACKNOWLEDGEMENTS:
REFERENCES:
Gero J.S., Neville D.C., Radford A.D., (1983). Energy in context: a multicriteria
model for building design, Building and Environment 18 (3) 99–107.
Ward L. G, Shakespeare R. (1998). Rendering with Radiance: the art and science of
lighting visualization, San Francisco: Morgan Kaufmann.
ABSTRACT
INTRODUCTION
et al. 2008). In structural health monitoring, there are typically two classes of data
interpretation methods: model-based methods and model-free methods (ASCE 2011).
Model-based data interpretation methods typically utilize measurement data to
identify models that are able to reflect the real behavior of structures (Goulet et al.
2010; Koh and Thanh 2009; Koh and Thanh 2010; Robert-Nicoud et al. 2005). Thus,
such methods involve the development and use of behavior (physical) models to
validate the results. Nevertheless, creating such models for civil infrastructure is
often difficult and expensive, and may not always reflect real behavior due to the
presence of uncertainties in complex civil-engineering structures (Goulet et al. 2010).
Model-free data interpretation methods involve analyzing data without
behavior models (i.e. without using geometrical and material information). These
methods identify changes in time-series signals statistically. They are thus well-
suited for interpreting measurement data during continuous monitoring of structures.
Many signal-processing methods have been proposed for application in
continuous monitoring (Hou et al. 2000; Lanata and Grosso 2006; Omenzetter and
Brownjohn 2006; Omenzetter et al. 2004; Yan et al. 2005a; Yan et al. 2005b).
Posenato et al. (2008; 2010) proposed two model-free data interpretation methods,
MPCA and RRA, to detect and localize anomalous behavior in the context of civil-
engineering structures. The performance of these two methods was compared with
that of eight other methods. The studies demonstrated that MPCA and RRA perform
better than the other methods for anomaly detection in the presence of noise, missing
data and outliers. Both methods were also observed to require low computational
resources to detect anomalies, even with large quantities of data. In addition, they
adapt to changing structural conditions for further damage detection.
This paper investigates the performance of MPCA and RRA in terms of
damage detectability and time to detection (i.e. the time from the moment that
damage occurs to the moment it is detected). These metrics are evaluated with
respect to changes in traffic loading and the proximity of sensors to damage locations.
This paper also studies the influence of removing seasonal temperature variations on
the reduction of time to detection. A railway truss bridge in Germany is used for this
study.
MPCA employs a fixed-size window that moves along the measurement time
series and tracks changes in its principal components in order to detect anomalies
in structures. The procedure for computing the principal components inside a
window consists of the following steps.
Step 1. Formulate a matrix U with each column containing a measurement time
series and each row corresponding to a time step (observation) of all time series.
Step 2. Move a fixed-size window along the columns of U to extract datasets at each
time step k as
C_k = Σ_j u(t_j) u(t_j)^T,
where the sum runs over the time steps j inside the window ending at step k.
NUMERICAL STUDIES
Figure 1. A truss structure of an 80-m railway steel bridge with sensor locations marked
as black bars and damage locations marked as black dots.
Figure 2. Damage detectability (left) and time to detection (right) at three locations
using MPCA and RRA.
Figure 3. Damage detectability (%, left) and time to detection (days, right) when using
MPCA and RRA for damage at location 2 with traffic loading levels from 20% to 100%,
with and without removal of seasonal variations by a moving-average filter.
CONCLUSIONS
ACKNOWLEDGEMENTS
This work was partially funded by the Swiss Commission for Technology and
Innovation and the Swiss National Science Foundation (contract 200020-12638). An
extended version of this paper has been accepted for publication in Advanced
Engineering Informatics (Laory et al. 2011).
REFERENCES
Smith, S. W. (1997). The Scientist and Engineer's Guide to Digital Signal Processing,
California Technical Pub.
Yan, A. M., Kerschen, G., De Boe, P., and Golinval, J. C. (2005a). "Structural damage
diagnosis under varying environmental conditions--Part I: A linear analysis."
Mechanical Systems and Signal Processing, 19(4), 847-864.
Yan, A. M., Kerschen, G., De Boe, P., and Golinval, J. C. (2005b). "Structural damage
diagnosis under varying environmental conditions--part II: local PCA for non-linear
cases." Mechanical Systems and Signal Processing, 19(4), 865-880.
Condition-based Maintenance in Facilities Management
Joseph Neelamkavil1
1Centre for Computer-assisted Construction Technologies, National Research Council
Canada, London, Ontario Canada N6G 4X8; email: joseph.neelamkavil@nrc.gc.ca
ABSTRACT
A facility management strategy requires that an organization’s major operational
concerns are dealt with, such as: avoiding the risk of catastrophic failures, planning
for asset maintenance and reducing the quantity of spare parts and associated
inventory costs. To put this into perspective, it is well known that
many systems suffer increasing wear with usage and age and are subject to random
failures that are linked to the deterioration of these assets. Some examples of such
affected items can be building components, hydraulic structures, turbine blades, and
rotating equipment. In these cases, various physical deterioration processes can be
observed, such as cumulative wear, crack growth, corrosion, fatigue, and so on.
The deterioration and failures of such systems might incur safety hazards, as well as
high operational costs (due to work stoppage, delays, unplanned intervention, etc.).
To cope with this, preventive maintenance strategies are often adopted, whereby
the deteriorated system is replaced before it fails. If the deterioration of the
system, or a parameter strongly correlated with the state of that system can be directly
measured (via corrosion assessment, wear monitoring, etc.), and if the system stops
functioning when it deteriorates beyond a given threshold, then it is appropriate to
base any maintenance decisions on the actual deterioration of the system rather than
on its age. This leads to the choice of a condition-based maintenance (CBM)
policy. CBM techniques provide an assessment of the system’s condition, based on
data collected from the system through continuous monitoring and/or via inspections.
The main intent is to determine the required maintenance plan prior to any predicted
failure. Such a strategy will contribute by minimizing maintenance costs, improving
operational safety and reducing the number of in-service system failures. This paper
will address the merits of adopting CBM strategies in Facilities Management.
Selecting a suitable model to be used in a CBM scheme is not a trivial task. It should
be based on the ability of the model to accurately describe the degradation process
and make effective extrapolations of the component state into smart decisions related
to the maintenance. The model must ensure that the degradation phenomenon is
captured by the most realistic and practical method available for implementation.
Degradation measurements traverse downward (or upward) toward a threshold, and
the system is said to have failed at the instant the measured value crosses a
predetermined failure threshold. There are continuous time, discrete time, continuous
state, and discrete state degradation representations. Many of the discrete state/time
methods involve Markov methods, while some of the continuous degradation models
include polynomials, cumulative damage, Brownian motion and gamma processes.
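As an illustration of a continuous-state representation, the sketch below simulates cumulative wear with random nonnegative increments and reports the first inspection at which degradation crosses the failure threshold. Exponential increments are a simplifying stand-in for, e.g., a gamma process; all names and parameters are our assumptions:

```python
import random

def first_passage_time(threshold, mean_increment=1.0, seed=0):
    """Number of inspections until cumulative degradation first crosses
    the failure threshold. Increments are exponential (nonnegative),
    standing in for a gamma-process wear model."""
    rng = random.Random(seed)
    level, t = 0.0, 0
    while level < threshold:
        level += rng.expovariate(1.0 / mean_increment)  # random wear step
        t += 1
    return t

# With the same seed, a higher threshold can never be reached earlier:
assert 1 <= first_passage_time(5.0) <= first_passage_time(10.0)
```

In a CBM policy, a preventive threshold would be placed below the failure threshold so that replacement is triggered before this first-passage time is reached.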
Grall et al. (2002) have described a system that undergoes random deterioration,
while being monitored through “perfect” inspections. When the system condition
exceeds its failure level, it enters into a failed state and a corrective replacement is
carried out. When the system state is found to be greater than a critical threshold
level, the still-functioning system is considered as ‘worn-out’ and a preventive
replacement is performed. A low critical threshold leads to frequent preventive
maintenance operations, and prevents the full exploitation of the residual life of the
deteriorated (still functioning) system. But, a high critical threshold tends to keep the
device working even in an advanced deterioration state, with increased risk of failure.
Goto et al. (2008) have proposed an on-line deterioration and residual life prediction
method for rotating equipment. The equipment is inspected for vibration measurements
and a mathematical model is created in order to predict the future condition of the
equipment. Prior to building the deterioration model, the ‘noise’ in the vibration data
caused by measurement errors is eliminated, which will also improve the accuracy of
the model. An on-line deterioration data management scheme is included.
In most CBM modeling approaches, the deterioration measures are inspected and
compared with a predefined threshold for maintenance decisions. Departing from this
approach, Lu et al. (2007) describe what is called a predictive CBM (PCBM) to
foretell the deterioration condition in the future. In the PCBM model the degradation
states are modeled as continuous states using a state-space model in which the state
vector includes both the degradation level and the degrading rate, both of which
influence maintenance decisions.
The CBM modeling of deteriorating systems with multiple different units has
received comparatively little attention. Wang et al. (2009) present a novel CBM
approach for multi-unit
systems in which the deterioration processes of multi units are modeled using
continuous-time Markov chains. Segmenting the system deterioration into several
discrete states is more practical than describing the deterioration condition by a single
scalar continuous variable.
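A discrete-state deterioration process of this kind can be illustrated with a small simulation. The sketch below uses a discrete-time simplification of such Markov chains, with hypothetical per-state degradation probabilities (not values from Wang et al.):

```python
import random

# Hypothetical per-state degradation probabilities for states 0..2;
# reaching state 3 means failure. Discrete-time simplification of the
# continuous-time chains used for multi-unit deterioration modeling.
P_DEGRADE = [0.3, 0.4, 0.5]

def steps_to_failure(rng):
    """Steps until a unit starting in state 0 reaches the failed state."""
    state, steps = 0, 0
    while state < len(P_DEGRADE):
        if rng.random() < P_DEGRADE[state]:
            state += 1
        steps += 1
    return steps

rng = random.Random(42)
mean = sum(steps_to_failure(rng) for _ in range(2000)) / 2000
# analytically, the mean time to failure is 1/0.3 + 1/0.4 + 1/0.5, about 7.8 steps
assert 7.0 < mean < 8.7
```

The state reached at an inspection, rather than a continuous wear measurement, then drives the preventive-versus-corrective maintenance decision.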
An important assumption that is implicit in many of the research works is that after
each maintenance action, the state of the system returns to its initial state. Shahanaghi
et al. (2008) extend this assumption. In a nutshell, it means that after each
maintenance action, the system state is not fully improved and the amount of
improvement made on the system state depends on the current state of the system.
According to Yam et al. (2001), intelligent systems that are used for condition-based
fault diagnosis fall into three categories - rule-based diagnostic systems, model-based
diagnostic systems and case-based diagnostic systems. Rule-based systems detect and
identify equipment faults in accordance with the rules representing the relation of
each possible fault with the corresponding condition. A model-based system uses
various mathematical, neural network and logical methods and compares the real time
monitored condition with the model of the object in order to predict the fault
behavior. Case-based systems use historical records of maintenance cases to provide
an interpretation for the actual monitored conditions of the item. A record of all
previous incidents and system malfunctions along with their maintenance solutions
are stored in a computer. If a fault similar to a stored case occurs, the case-based
diagnostic system will pick up a suitable maintenance solution from the case library.
potential failure, an inspection scheme must be instituted, the interval of which must
be significantly less than the P-F interval in order to avoid reaching the threshold of
functional failure. The P-F interval can be measured in units relating to exposure to
fatigue cycles (running time, units of output, stop-start cycles, etc).
CONCLUSION
REFERENCES
Goto, S., Adachi, Y., Katafuchi, S., Furue, T., Uchida, Y., Sueyoshi, M., Hatazaki
H., and Nakamura, M. (2008). “On Line Deterioration Prediction and
Residual Life Evaluation of Rotating Equipment based on Vibration
Measurement”, SICE Annual Conference, Japan.
Lacasse, M.A.; Kyle, B.R.; Talon, A.; Boissier, D.; Hilly, T.; Abdulghani, K.
(2008) “Optimization of the building maintenance management process
using a Markovian model”, NRC Canada Report NRCC-51170
Lounis, Z and Vanier, D. J., (2000) “A Multi-objective and Stochastic System for
Building Maintenance Management”, Computer-Aided Civil and
Infrastructure Engineering 15, pp. 320–329.
Lu, S., Tu, Y. C and Lu, H. (2007). “Predictive Condition-based Maintenance for
Continuously Deteriorating System” Qual. Reliab. Engng. Int. 23. pp 71–81.
Rao, P. N. S., and Naikan, V.N. A. (2006). “An Optimization Methodology for
Condition Based Minimal and Major Preventive Maintenance”, Economic
Quality Control, Vol 21, No. 1, pp 127 – 141.
Shahanaghi, K., Babaei, H., Bakhsha, A., and Fard, N. S. (2008) “A new
condition based maintenance model with random improvements on the
system after maintenance actions: Optimizing by Monte Carlo simulation”,
World Journal of Modelling and Simulation, Vol. 4 No. 3, pp. 230-236.
Talon, A., Boissier, D., Hans, J., Lacasse, M.A., Chorier, J, (2008) “FMECA and
Management of Building Components”. National Research Council Canada
Report NRCC-51168.
Wang, L., Zheng, E., Li, Y., Wang, B., and Wu, J. (2009). “Maintenance
Optimization of Generating Equipment based on a Condition-based
Maintenance Policy for Multi-unit Systems”, CCDC – IEEE 2009.
Yam, R. C. M., Tse, P. W., Li L., and Tu, P. (2001) “Intelligent Predictive
Decision Support System for Condition-Based Maintenance”, Int. Journal of
Advanced Manufacturing Technology 17. Pp 383–391
TOWARDS SUSTAINABLE FINANCIAL INNOVATION POLICIES IN
INFRASTRUCTURE: A FRAMEWORK FOR EX-ANTE ANALYSIS
Ali Mostafavi1, Dulcy Abraham2, and Daniel DeLaurentis3
1 Ph.D. Candidate and Research Assistant, School of Civil Engineering, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47907-2051, USA, Phone 765/543-4036, FAX 765/494-0644, amostafa@purdue.edu.
2 Professor, School of Civil Engineering, Purdue University, 550 Stadium Mall Dr., West Lafayette, IN 47907-2051, USA, Phone 765/494-2239, FAX 765/494-0644, dulcy@purdue.edu.
3 Associate Professor, School of Aeronautics and Astronautics, Purdue University, 701 W. Stadium Ave., West Lafayette, IN 47907-2045, USA, Phone 765/494-0694, ddelaure@purdue.edu.
ABSTRACT
Innovative financing emerged to complement traditional financing structures in
closing the gap for infrastructure financing. The key to sustainable financial
innovations is policy analysis. The objective of this paper is to create and test an ex-
ante policy assessment model as a part of a System of Systems analysis framework to
assist policy-makers in examining innovative financing alternatives. A hybrid Agent-Based/System Dynamics model is created to: 1) capture the emergent dynamics of private investment in infrastructure by simulating the activities and institutions of the players at a micro level, and 2) analyze the determinants of
financial innovation at a macro level. The significant parameters and variables for
policy-making are identified through Meta-modeling. The application of the
methodology and its implications are then discussed using a hypothetical case. This
study illustrates the potential for the methodology to be used and tested for ex-ante
analysis in innovative financing policy-making in infrastructure systems.
INTRODUCTION
Infrastructure is a key driver of economic development. According to Levine (1997), the importance of infrastructure for economic growth and public welfare, the centrality of financial systems in economic growth, the ever-changing local, political, economic, social, and technological environment, and emerging globalization together raise the importance of financial innovation in infrastructure projects. The key to sustainable financial innovation is sound policy-making. The questions to be answered in order to make effective policies include but
are not limited to the following (Mostafavi et al. 2010): 1) What are the organizations
engaged in innovative financing? 2) What are the activities and institutional rules
affecting the development and the diffusion of innovative financing tools? The
objective of this paper is to create and test a systemic methodology for assessment of
innovative financing policies. The analysis includes: 1) creation of a hybrid Agent-
Based/System Dynamics model to facilitate implementation of ex-ante simulation
experimentation regarding the dynamics of investment in infrastructure, and 2) identification of the significant parameters and variables for policy-making through meta-modeling.
42 COMPUTING IN CIVIL ENGINEERING
BACKGROUND INFORMATION
Policy analysis tools can be divided into two categories of techniques: ex-post and ex-
ante. Ex-post analysis tools consider the previously observed system behavior and
identify the significant underlying factors that trigger the search for a “best” solution
for a specific scenario. Despite their robustness in static policy analysis, ex-post
analysis tools, such as game theory and statistical decision theory have not been
successful for problems “where complexity and adaptation are central” (Bankes
2002).
The limitations of these methods in capturing the complexity of public policy analysis and management have been recognized (Bankes 2002). Lempert (2002), Bankes (2002), and Kim and Lee (2007) discuss the shortcomings of ex-post models (e.g., statistical models) for dynamic policy analysis. Such models do not capture the complexity of the policy problem, competing values, emergent behaviors, interdependencies, and uncertainties (Pfeffer and Salancik, 2003; Mostafavi et al., 2011a). These issues can
be addressed using complex systems simulation models which facilitate
understanding the probable macro patterns of a system based on the micro behaviors
of adaptive components. Such models (so-called ex-ante analysis) facilitate
considering various probabilities and possibilities to provide a set of “robust”
solutions across different parameter values, scenarios, and model representations
(Bankes, 2002). Table 1 summarizes the traits of ex-post versus ex-ante policy
analysis. Based on the traits of the policy problem, the appropriate analysis tool can
be selected.
The players engaged in innovative financing are managerially and operationally independent and adopt new behaviors as they learn from their
environment over time. Thus, the assessment of the activities and interactions of the
players for policy analysis requires complex system simulation (ex-ante analysis).
Analysis of policies (such as innovation policies) using complex systems simulation
requires a theoretical framework (DeLaurentis and Callaway 2004). Mostafavi et al.
(2011a) proposed a theoretical framework called the Innovation System of Systems (I-
SoS) for such analysis. The three dimensions of analysis in the I-SoS framework
(definition, abstraction, and implementation) are discussed in this paper to investigate
the dynamics of investment in infrastructure to be used for innovative financing
policy-making. Further details regarding the components of the I-SoS framework can
be found in Mostafavi et al. (2011a).
Definition: The analysis begins with the definition phase. The context of the analysis
includes assessment of private investment in infrastructure. The category of
innovative financing policy that is considered in this paper includes those policies
which facilitate private investment in infrastructure. The levels of analysis include
sub-national (local), national, and global levels, which means that the players,
interactions, and factors within and across these levels are considered. The barriers in
the analysis include the heterogeneity of the players and the activities within and
across different levels of analysis, which adds to the complexity of the analysis.
Abstraction: The abstraction phase includes identification of the players, institutions
(norms and practices), activities, networks, and resources within and across the
different levels of analysis (sub-national, national, and global). Mostafavi et al. (2011b) identified these elements using a case-based research approach. Constructs regarding the activities and institutions of different players were explored to be used as rules in creating the simulation model in the implementation phase. Please see Mostafavi et al. (2011b) for further details.
Implementation: The implementation phase includes modeling methods, objects,
data, and classifications. The first step in the modeling phase is to identify the
appropriate modeling method. Application of modeling tools depends on the level of
abstraction and level of aggregation in the modeling problem at hand. In the case of
systemic assessment of private investment in infrastructure, the level of abstraction is
at the micro level. The level of abstraction is the level of complexity (detail) by
which a system is assessed. The level of aggregation is the level at which the
aggregate emergent outcome of the players' activities and interactions is considered.
The level of aggregation is national, which means that the aggregate outcome of the
activities and interactions among the different players and factors is considered at the
national level (e.g., the amount of infrastructure investment at the national level is the
result of activities and interactions among different players and factors within and
across different levels). The most appropriate modeling tools for such analysis are the
Agent-Based Model (ABM) and System Dynamics (SD). ABM is capable of micro-
modeling the emergent behavior of a system that consists of managerially and
operationally independent players (Bonabeau, 2002; Sanchez and Lucas, 2002; Macal
and North, 2005). ABM is gaining popularity as a standard tool for policy analysis
(Bankes, 2002; Macy and Willer, 2002; and Kim and Lee, 2007). System Dynamics
is useful for understanding the behavior of complex systems and the effects of causal
factors over time (Sterman, 2001). Concurrent use of ABM and SD takes advantage of the strengths of both methods.
Figure 1 – (a) Preview of Traditional Investors object class in the model; (b) Preview of New
Investor active object class in the model
NewInvestors Object Class: A preview of this object is shown in Figure 1b. This
active object class encompasses a state chart, parameters, and variables structured to
define the BKI of the object. There are three states for this object class:
GlobalInvestor, PotentialInvestor, and InfrastructureInvestor. At the beginning, all
the objects of this class are in the GlobalInvestor state, which indicates that the
investors are investing in sectors other than infrastructure. These agents start
considering investing in infrastructure upon receiving signals of successful
investments by TraditionalInvestors and NewInvestors who have already begun
investing in infrastructure. Upon such a signal, the agents of this object class change their state to PotentialInvestor, from which their active state changes to InfrastructureInvestor at a rate equal to the InvestmentRate variable. The InvestmentRate variable, of type double, is calculated in the TraditionalInvestors active object class. In the state chart, type I transitions are triggered by a rate (e.g., InvestmentRate) and type II transitions are triggered by a message (e.g., the successful-investment message sent to GlobalInvestors), as shown in Figure 1b. The arrow
inside a state signifies sending a message.
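The state chart just described can be sketched in code as follows. The state names are taken from the text, but treating the rate-triggered (type I) transition as a per-step probability is an assumption of this sketch, not the paper's implementation:

```python
import random

class NewInvestor:
    """Illustrative sketch of the NewInvestors state chart (behavior assumed)."""
    def __init__(self):
        self.state = "GlobalInvestor"  # initially investing outside infrastructure

    def receive_success_message(self):
        # Type II transition: triggered by a success message from an
        # investor already active in infrastructure.
        if self.state == "GlobalInvestor":
            self.state = "PotentialInvestor"

    def step(self, investment_rate, rng):
        # Type I transition: fires at the InvestmentRate rate, modeled
        # here as a per-step probability.
        if self.state == "PotentialInvestor" and rng.random() < investment_rate:
            self.state = "InfrastructureInvestor"
```

A message-triggered transition thus moves an agent into the PotentialInvestor state, after which the rate-triggered transition eventually makes it an infrastructure investor.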
Public Object Class: This active object class encompasses an action chart which
includes a decision and two actions. The decision determines which action will be
taken. The condition for the decision is whether the level of Need for infrastructure
investment is higher than a pre-set value. If the need is higher than the specific value,
the decision leads to Support action; otherwise, it leads to Object action. The effect of
public support or objection is reflected in the probability of successful investment
which affects the InvestmentRate variable.
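A minimal sketch of this action chart is shown below; the threshold and the size of the probability adjustment are assumptions, since the paper gives neither value:

```python
# Sketch of the Public object's action chart: a single decision on the
# level of Need selects the Support or Object action, and the chosen
# action shifts the probability of successful investment, which in turn
# feeds the InvestmentRate variable. Threshold and shift are assumed.
def public_action(need, threshold=0.5):
    return "Support" if need > threshold else "Object"

def adjusted_success_probability(base_prob, action, shift=0.1):
    if action == "Support":
        return min(1.0, base_prob + shift)
    return max(0.0, base_prob - shift)
```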
Infrastructure Object Class: A preview of this class is shown in Figure 2. The
variables in the object include:
FinancingCapacity variable (Eq.1): This variable determines the monetary value of
the annual capacity within the budget of a public agency to finance infrastructure
through either traditional systems of pay-as-you-go or borrowing (e.g., bonds) plus
the innovative pay-as-you-go capacity of a public agency.
Flow variable (Eq.2): This variable determines the rate of flow between the
NeededInfrastructure and FinancedInfrastructure stock variables. The Flow and
Need (Eq.3) variables are calculated using the following formulas:
Private variable: This variable refers to the number of Traditional Investors whose
active state becomes ActiveInvestor at each time step. This variable is calculated by
counting the agents in the Traditional Investor object class.
NewPrivate variable: This variable refers to the number of NewInvestors whose
active state becomes InfrastructureInvestor at each time step. This variable is
calculated by counting the agents in the New Investor object class.
NeededInfrastructure stock variable: This variable refers to the stock of
infrastructure projects that need a financing source. This variable is calculated using
Eq. 4. The initial value of the stock variable is an input entered at the time the policy
analysis is implemented. Another component of this stock variable is the annual rate
of growth for the needed infrastructure, which is an input at the time of policy
analysis.
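Because Eqs. 1-4 are not reproduced here, the following stock-flow sketch only illustrates how the variables above could interact; the functional forms are assumptions of this sketch, not the paper's equations:

```python
# Illustrative stock-flow update for the Infrastructure object class.
# Assumed forms: FinancingCapacity pools traditional pay-as-you-go,
# borrowing, and innovative capacity (cf. Eq. 1); Flow moves value from
# the NeededInfrastructure stock to the FinancedInfrastructure stock
# (cf. Eq. 2); the needed stock also grows at an annual rate (cf. Eq. 4).
def step_infrastructure(needed, financed, pay_as_you_go, borrowing,
                        innovative_capacity, growth_rate):
    capacity = pay_as_you_go + borrowing + innovative_capacity
    flow = min(capacity, needed)
    needed = needed * (1.0 + growth_rate) - flow
    financed += flow
    return needed, financed
```

One such call advances the two stocks by a single time step; repeating it over the simulation horizon yields the total financed infrastructure tracked by the model.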
META-MODELING
Meta-modeling techniques are useful in such analysis. Classification and Regression
Tree (CART) is a technique that can select, from among a large number of variables,
the most important variables in determining the outcome variable to be explained and
their interactions (Breiman et al. 1984). In the infrastructure finance model, the
CART technique is used to construct a regression tree using data obtained from
different runs of Monte Carlo experiments in the simulation model to identify the
significant factors for policy-making.
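The core of CART's variable selection can be illustrated with a one-level regression tree that picks the split minimizing outcome variance. This toy version is not the paper's implementation; it only shows how the most influential input among the Monte Carlo variables would be identified:

```python
# One-level CART-style regression split: among candidate input
# variables, find the (variable, threshold) pair that most reduces the
# variance of the outcome variable.
def variance(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split(rows, y_index=-1):
    """rows: list of (x1, x2, ..., y); returns (variable_index, threshold)."""
    ys = [r[y_index] for r in rows]
    best = (None, None, variance(ys))
    n_vars = len(rows[0]) - 1
    for j in range(n_vars):
        for t in sorted({r[j] for r in rows}):
            left = [r[y_index] for r in rows if r[j] <= t]
            right = [r[y_index] for r in rows if r[j] > t]
            if not left or not right:
                continue
            score = (len(left) * variance(left)
                     + len(right) * variance(right)) / len(rows)
            if score < best[2]:
                best = (j, t, score)
    return best[0], best[1]
```

Applied recursively with pruning, this split rule is exactly what builds the regression tree of Breiman et al. (1984).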
CONCLUSIONS
This paper presented a systemic approach for ex-ante analysis of innovative financing
policy-making. A hybrid Agent-Based/System Dynamics model was created to
simulate the dynamics of infrastructure financing to be used for policy analysis. The
model output variable, which is the total currency value of the financed infrastructure,
was simulated using a Monte Carlo experiment. Classification and Regression Tree
analysis was performed on the simulated data to identify significant factors affecting
the total level of financed infrastructure. Increased probability of successful
investment and enhanced pay-as-you-go capacity of public agencies were identified
as the most significant factors affecting the total level of financed infrastructure in the
case study. Policies which enhance these factors could be effective in enhancing the
total value of the financed infrastructure. To enhance the probability of success of
private investments, innovative risk mitigation and contract tools might be considered
by policy-makers and public agencies when an innovative financing system is
proposed. The methodological approach proposed herein could potentially assist
policy-makers in expanding infrastructure investment by providing recommendations
regarding the significance of different factors and the effects of different policies.
REFERENCES
Bankes, S. C. (2002). "Tools and techniques for developing policies for complex and
uncertain systems." Proceedings of the National Academy of Sciences, 99(3), pp. 7263-7266.
Breiman, L., Friedman, J. H., Olshen, R., and Stone, C. J. (1984). Classification and
Regression Trees. Belmont, CA: Wadsworth.
Bonabeau, E. (2002). "Agent-based modeling: methods and techniques for simulating
human systems." Proceedings of the National Academy of Sciences, 99, pp. 7280-7287.
DeLaurentis, D., and Callaway, R.K.C.A. (2004) "System-of-Systems Perspective for
Public Policy decisions." Review of Policy Research, 21(6), pp. 829–837.
Lempert, R. (2002). "Agent-based modeling as organizational and public policy
simulators." Proceedings of the National Academy of Sciences, 99(3), pp. 7195-7196.
Levine, R. (1997). "Financial Development and Economic Growth: Views and Agenda." Journal of Economic Literature, American Economic Association, 35(2), pp. 688-726.
Kim, Y., and Lee, M. (2007). "Agent-based models as a modeling tool for complex
policy and managerial problems." Korea Journal of Public Administration, 45(2), pp. 25-50.
Macal, C.M., and North, M.J. (2005). "Tutorial on Agent-Based Modeling and
Simulation." Proceedings of the 2005 Winter Simulation Conference, Orlando, FL, Dec. 4-7,
pp. 2-15.
Macy, M. W., and Willer, R. (2002). "From factors to actors: Computational sociology
and agent-based modeling." Annual Review of Sociology, 28: pp. 143–166.
Mostafavi, A., Abraham, D.M., DeLaurentis, D., and Sinfield, J. (2011a). "Exploring the
Dimensions of Systems of Innovation Analysis: A System of Systems Framework.", IEEE
Systems Journal, Accepted for publication on February 17, 2011.
Mostafavi, A., Abraham, D.M., and Sullivan, C.A. (2011b)."Drivers of Innovation in
Financing Transportation Infrastructure: A Systemic Investigation." Electronic Proceedings
of the Second International Conference on Transportation Construction Management,
February 7 - 10, 2011, Orlando, FL.
Mostafavi, A., and Abraham, D.M. (2010). "Frameworks for Systemic and Structural
Analysis of Financial Innovations in Infrastructure." Working paper Electronic Proceedings
of 2010 Engineering Project Organization Conference (EPOC 2010), November 4 - 6, 2010,
South Lake Tahoe, CA.
Nelson, R.R., ed. (1993). National Innovation Systems: A Comparative Analysis. New York: Oxford University Press.
Pace, D.K. (2000) "Ideas about simulation conceptual model development." Johns
Hopkins Apl Technical Digest, 21 (3), pp. 327–336.
Pfeffer, J., and Salancik, G. R. (2003). The external control of organizations: A resource
dependence perspective. Stanford, CA: Stanford Business Books.
Sanchez, S. M., and Lucas, T. W. (2002). "Exploring the world of agent-based
simulations: Simple models, complex analyses." E. Yücesan, C.-H. Chen, J. L. Snowdon, J.
Charnes, eds. Proc. 2002 Winter Simulation Conf. Institute of Electrical and Electronics
Engineers, Piscataway, NJ, pp. 116–126.
ABSTRACT
This paper presents the results of a pilot study conducted to optimize building energy performance using a Multi-Objective Genetic Algorithm (MOGA), an evolutionary adaptive approach. In this study, a Building Information Modeling (BIM) model was built to provide design data, such as building form, space layout, and site and building orientation, to IES <VE>, a building energy simulation package. The energy performance of design options was evaluated, and the optimal settings of the design parameters were then obtained using a MOGA approach. This study indicates that the MOGA approach (1) enables continuous investigation of design parameters over their entire spectrum, (2) accounts for the fact that design parameters impact energy performance dynamically, not statically, and (3) optimizes multiple design criteria simultaneously. This study concluded that MOGA is an
appropriate approach that can better ensure a global optimal solution for design of
energy efficient buildings.
INTRODUCTION
several drawbacks. First of all, design parameters are altered discretely, not
continuously, e.g., building energy consumption is usually simulated at several
chosen angles from the project north at the designers' discretion. The infinite number of building orientations besides the chosen ones is usually left unexamined due to simulation time and cost constraints. Secondly, the dynamic impacts
of design parameters are not accounted for in simulation. The use of total energy
consumption in the evaluation process fails to consider the dynamic interactions
between the two components of the total energy consumption: heating and cooling.
With increased window-to-wall area ratios, energy consumption for heating typically decreases while energy consumption for cooling increases, but the total energy consumption may fluctuate only slightly and thus appear nearly constant. Therefore, energy consumption for heating and cooling,
instead of total energy consumption, should be evaluated. Thirdly, not all the design
objectives can be optimized simultaneously. During a typical energy evaluation
process, for instance, the professionals alter design parameters to minimize energy
consumption, but it is very likely that construction costs are not minimized by the same set of design parameters. Lastly, because of the abovementioned drawbacks, usually
only a local, instead of a global, optimal solution is achieved. Thus the goal of
designing a truly energy efficient building cannot be achieved.
The main question this study aims to answer is: to design a truly energy-efficient building, what is an appropriate optimization technique that can search the entire spectrum of design parameters, optimize multiple building design objectives simultaneously, and achieve a global optimal solution? The main objective of this study is to use this optimization technique to find an optimal design that achieves the following design criteria at an acceptable level:
1) to minimize the energy consumption required to meet heating and cooling conditions, and meanwhile
2) to minimize construction costs.
METHODOLOGY
A typical GA proceeds as follows: 1) an initial population of candidate solutions is generated and evaluated against a fitness function; 2) genetic operators produce a new population, called a generation, which solves the problem better than the initial population; 3) the first two steps are repeated for the number of pre-defined generations to produce the optimal solution. In many real-life problems, more than one objective needs to be optimized. To this end, multi-objective genetic algorithms (MOGA) can be used to find an optimal solution that satisfies the objectives, often conflicting ones, at an acceptable level. The application of the genetic operators in a GA/MOGA prevents the search from falling into a local optimum, and thus a global optimal solution is more likely to be reached.
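The steps above can be sketched with a minimal GA. For brevity this sketch scalarizes the two objectives with a weighted sum, whereas a full MOGA would keep them separate via Pareto ranking (e.g., NSGA-II), and the objective functions are illustrative stand-ins, not the paper's regression models:

```python
import random

def energy(x):
    # Illustrative stand-in for heating+cooling energy;
    # x = (window-wall ratio, orientation in degrees).
    return (x[0] - 0.3) ** 2 + 0.001 * x[1]

def cost(x):
    # Illustrative stand-in for construction cost: more window area costs more.
    return 10.0 * x[0]

def fitness(x, w=0.5):
    # Weighted-sum scalarization of the two objectives (a full MOGA
    # would rank candidates by Pareto dominance instead).
    return w * energy(x) + (1 - w) * cost(x)

def ga(pop_size=30, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 1), rng.uniform(0, 360)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                        # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)    # crossover
            if rng.random() < 0.2:                            # mutation
                child = (min(1.0, max(0.0, child[0] + rng.gauss(0, 0.05))),
                         (child[1] + rng.gauss(0, 10.0)) % 360)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```

Because crossover and mutation can produce any value in the continuous ranges, the search is not restricted to a pre-chosen grid of orientations and window-wall ratios.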
Case Study Building. The building chosen for this study is a new 52,000 S.F., 2-
story academic building located in North Carolina. Figures 1, 2, and 3 show the first
and second floor plans, and the BIM model of the building.
Figure 1. First Floor Plan; Figure 2. Second Floor Plan; Figure 3. BIM Model
Simulation Inputs and Assumptions. The source of design weather used in this
study was the ASHRAE design weather database, and the weather design file was
Raleigh TM2.fwt.
To accurately perform thermal simulation, magnetic declination should be
considered. For the case study building, the declination is 9° 45' W. The BIM model
was adjusted to the “true” north in Revit using this declination.
Simulation Procedure and Results. The BIM model of the academic building was
first developed using Revit Architecture. This model was exported as a gbXML file, which was then imported into IES <VE>. In IES <VE>, a module called Apache was
used to perform thermal calculations and simulations. One of the simulation outputs
is total energy consumption (MBtu) per year. This total energy consumption is split
into several sub-categories. They are heat, cool, fans/pumps, lights, and equipment.
This enables further studies of energy usage for heating and cooling individually.
In this study, two design parameters were altered to develop new design
options. Energy performance of these new design options was then simulated. The
two design parameters are the building orientation and the window wall area ratio.
The building was rotated counterclockwise by 0°, 45°, 90°, 135°, 180°, 225°, 270°,
and 315° from the “true” north. Five window sizes were chosen for all the windows:
fixed (36”x48”), casement double with trim (48”x48”, 72”x36”), awning-triple
(60”x48”), and casement-quad (72”x48”). The window wall area ratio was calculated
by dividing the total window area by the total exterior wall area. Considering the
different combinations of these two design parameters produces a total of 40 design
options. Energy performance of these 40 design options was simulated. The
simulation results are shown in Table 1. Regression analyses of these results generate
equations (a) and (b) in the following section.
f(cost): construction costs for building exterior walls and windows, dollars.
Since all other building components (roof, interior walls, doors, floors, etc.)
remain unchanged, f(cost) is a good indicator of construction costs. $15.17
/SF is the cost for building exterior walls; $41.16 /SF is the cost for installing
exterior windows; and 33,640 SF is the total exterior wall surface area.
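From the unit prices above, a plausible form of f(cost) is the following; the paper does not print the formula itself, so this combination of the unit prices is an assumption:

```python
# Plausible form of f(cost) from the stated unit prices: window area is
# priced at $41.16/SF and the remaining exterior wall at $15.17/SF.
WALL_COST = 15.17       # $/SF, exterior wall
WINDOW_COST = 41.16     # $/SF, exterior window
TOTAL_WALL_SF = 33640   # SF, total exterior wall surface area

def f_cost(window_wall_ratio):
    window_area = window_wall_ratio * TOTAL_WALL_SF
    wall_area = TOTAL_WALL_SF - window_area
    return WALL_COST * wall_area + WINDOW_COST * window_area
```

Under this form, a window-wall area ratio of 0.25 would price 8,410 SF as window and the remaining 25,230 SF as wall, so cost grows linearly with the ratio.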
Equations (a) and (b) were obtained by regressing f(heating) and f(cooling)
against x1 and x2, respectively. For equation (a), R-square is 0.30, p-value is 0.0013;
for equation (b), R-square is 0.98, p-value is less than 0.0001.
The 30 best optimization solutions are listed in Table 2. In this table, for a
set of x1 and x2, the corresponding f(heating), f(cooling), and f(cost) are predicted. A
3-D surface built from these solutions is shown in Figure 4. The following
observations were made by carefully examining these solutions:
Solutions #1 and #24 appear to be the same, and both can be considered the optimal solution. For this optimal solution, x1 and x2 are 0.25 and 1.67°, respectively. Neither value is among the pre-determined values for x1 and x2, which indicates that MOGA searched the entire spectrum of design parameters for the global optimal solution.
For the optimal solution (#1 or #24), f(heating) was 1,114.37 MBtu, larger than the minimum value of 1,109.54 MBtu, but its f(cooling) was the smallest, and the sum of f(heating) and f(cooling) was the smallest compared to other solutions. In addition, its f(cost) was the smallest. Therefore, overall, the optimal solution minimizes the design objectives simultaneously at an acceptable level.
Recommendations for Future Studies. Window wall area ratios of North, West,
South, and East walls should be considered. In this study, the overall window wall
area ratio was studied. If the ratio for each individual exterior wall were considered, A/E/C professionals would gain a better understanding of exactly how many windows should be placed on each exterior wall, resulting in a more accurate final design.
More design parameters, for example, shading, construction materials, and
sources of renewable energy, can be included in future studies.
Building energy performance is largely dependent on the lifestyles of occupants. How to quantify the impacts of various lifestyles is an interesting topic for future studies.
REFERENCES
ABSTRACT
As-built models and drawings are essential documents used during the operations and
maintenance of buildings for managing facility spaces, equipment, and energy
systems. Inefficiencies in processing, communicating, and revising as-built
documents therefore result in high costs imposed on building owners. Facility
managers still rely heavily upon manual surveying procedures for developing and
verifying as-built drawings and models. To streamline this often time-consuming process, this paper addresses the advantages and limitations of photogrammetry for
remote sensing and verification of interior as-built conditions. Two classrooms are captured using photogrammetric image-processing software, and image-based dimensions are compared to dimensions gathered through a traditional manual survey, yielding an average percent error of approximately 2%. Both image-based and
manual dimensions are then compared to dimensions extracted from an existing as-
built BIM model of the interior spaces, and the proposed image-based verification
method successfully identifies the same gross errors in the as-built BIM model.
Keywords: image-based measurements; as-built verification; as-built documentation;
photogrammetry; facilities management
INTRODUCTION
As-built models and drawings are essential documents used during the operations and
maintenance of buildings for managing facility spaces, equipment, and energy
systems. While these documents are typically generated, developed, and used
throughout the design and construction phases of new buildings, they are of greatest
value to building owners and managers of existing facilities for assessing building
performance, managing building repairs and renovations, and assisting building
decommissioning (Akcamete et al., 2009; Eastman et al., 2008; Gallaher et al., 2004).
Inefficiencies in processing, communicating, and revising as-built documents
therefore result in high costs imposed on building owners. A 2004 NIST report found
that an estimated $1.5 billion is wasted every year as a result of unavailable and
inaccurate as-built documents causing information delays to facilities management
(FM) personnel. Changes that occur during construction are often reflected as redline
markups or partial drawings that are not transferred to complete as-built
documentation handed over to owners during building closeout or after major
the scene or object, the required accuracy and level of detail, and budgetary
constraints. In comparison to 3D laser scanning, photogrammetry offers a low cost,
low skill, portable solution for remote sensing (Remondino and El-Hakim, 2006).
Photogrammetry traditionally refers to the process of deriving geometric information
(distances and dimensions) about an object through measurements made on
photographs. Photogrammetry can involve one photo or multiple photos, analogue or
digital images, still-frame or video images (videogrammetry), and manual or
automatic processing (Mikhail et al., 2001). Generally, photogrammetry includes
selecting common feature points in two or more images; calculating camera positions,
orientations, and distortions; and reconstructing 3D information by intersecting
feature point locations. Over the past decade, major developments in computer vision
and image processing have allowed increased automation in each of these steps,
thereby expanding the potential applications and the commercially available software
for photogrammetry (Nister, 2004; Pollefeys et al., 1999).
Automated detection and stitching of overlapping feature points requires a large
number of images taken closely together to provide sufficient overlap and repetition
of captured objects (El-Hakim, 2001; Shum and Kang, 2000). While automated
stitching reduces the need for human intervention, it is, at this time, more prone to
stitching errors and increased noise (Remondino and El-Hakim, 2006) caused by the
extraction of unwanted background feature points such as trees, surrounding
buildings, and sky. After feature points are defined and stitched between 2D images,
camera positions and orientations are calculated based on corresponding collections
of approximated 3D feature point locations. A method known as bundle adjustment is
often employed to simultaneously optimize calculated structure and camera poses
(Triggs et al., 2000). The final reconstructed scene includes the optimized camera
positions and their associated visual data in a 3D representation such as a sparse point
cloud. Once cameras are positioned and calibrated for each image, the 3D coordinate
of any point or image pixel can be calculated with a relatively high degree of
accuracy by defining the same point in two images taken from different perspectives.
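The last step, recovering a 3D coordinate from the same point marked in two calibrated images, can be sketched as a linear (DLT) triangulation. This is the generic textbook formulation, not the specific algorithm of the commercial software used in the study:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one point seen in two images.

    P1, P2: 3x4 camera projection matrices; u1, u2: (x, y) pixel
    coordinates of the same point in each image.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: u * (P[2]·X) - P[0..1]·X = 0.
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    # X is the right singular vector of A with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

In practice, the triangulated points are then refined jointly with the camera poses during bundle adjustment.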
TEST BED DESCRIPTION
This paper assesses the accuracy of semi-automated photogrammetric image-
processing software in capturing and verifying interior as-built documents of an
operational building at USC. The School of Cinematic Arts “Student Services and
Media Arts” building, referred to as SCB, was selected to test the current and
proposed as-built verification methods on existing conditions. The test bed building is
of relatively recent construction and has been occupied since June 2010. As part of
construction closeout, a BIM model was delivered to the university as as-built
documentation. The existing BIM model is currently undergoing standard verification
processes executed by USC FM.
A research library (Room 206) and a classroom (Room 207) on the second floor of
SCB were selected for the interior case study as they represent typical spaces found
on the university campus (Figure 1). Room 207 is roughly twice the size of Room 206, the two rooms covering approximately 53 and 25.5 m2, respectively. Both rooms
include one or more windows on their southern walls that admit natural light. At
the time of the surveys, both rooms were heavily populated with equipment and
furniture, obstructing corners of the floor, windows and door. The walls of Room 207
were also covered with posters and other visually distinct graphics but the walls of
Room 206 were mostly clear.
Figure 2 (left). Visual markers added to building interior to augment feature points.
Figure 3 (right). Automatically generated point cloud and manually modeled lines.
dimensions in Room 207 exceeded 2% although the average percent errors in each
room were close to the 2% threshold. The maximum absolute errors and maximum
percent errors reported in Table 1 do not represent the same dimensions.
Table 1. Absolute and percent errors for image-based dimensions.
Room  Number of Dimensions  Min. Error (cm / %)  Max. Error (cm / %)  Mean Error (cm / %)  Std. Deviation (cm / %)
206   17                    0.10 / 0.07          13.49 / 5.25         3.79 / 1.78          3.15 / 1.33
207   23                    0.09 / 0.02          12.37 / 4.96         5.25 / 2.50          3.77 / 1.65
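The statistics in Table 1 follow from simple error definitions against the manual ground truth. A minimal sketch, using illustrative dimension values rather than the study's raw measurements:

```python
import numpy as np

# Hypothetical manual ("ground truth") and image-based dimensions in cm;
# these values are illustrative only, not the study's raw data.
manual      = np.array([91.4, 243.8, 121.9, 365.8])
image_based = np.array([90.2, 246.1, 122.3, 352.4])

abs_err = np.abs(image_based - manual)   # absolute error (cm)
pct_err = 100.0 * abs_err / manual       # percent error relative to ground truth

for name, e in [("abs (cm)", abs_err), ("pct (%)", pct_err)]:
    print(f"{name}: min={e.min():.2f} max={e.max():.2f} "
          f"mean={e.mean():.2f} std={e.std(ddof=1):.2f}")
```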
The largest errors seen in image-based dimensions in both rooms resulted from
dimensions partially or completely obstructed by furniture or by the limitations of the
room’s perimeter illustrating the potential difficulties in using line-of-sight sensing
tools to capture operational building conditions. In Room 207, two dimensions, the
bottom left corner of the door and the bottom right corner of one window, were
obstructed by furniture in all photos used for reconstruction. Similarly, in Room 206
the smallest dimension of the north wall was only visible in one photo due to the
limitations of the room for viewing the recessed corner. Together the occluded
dimensions represented three of the top five greatest percent errors averaging 4.62%
(circled in Figure 4). When these dimensions were removed from the data sets, the
average percent errors for image-based dimensions in Room 206 and Room 207
reduced to 1.54% and 2.33%, respectively.
Manually measured dimensions, considered as the “ground truth”, were then used to
verify the corresponding dimensions extracted from the as-built BIM model. The
absolute and percent errors of the as-built BIM model dimensions are summarized in
Table 2. The percent errors of 7 as-built BIM dimensions in Room 206 and 14 as-
built BIM dimensions in Room 207 exceeded the 2% threshold, requiring updating of
the as-built BIM model. These erroneous dimensions, however, were unrelated to the
erroneous image-based dimensions previously found. In each room, the door widths
saw the greatest discrepancies between as-built conditions represented in the existing
BIM model and the true as-built conditions with percent errors exceeding 10% and
absolute errors exceeding 10 cm. The manual survey also found the as-built BIM
dimensions of the windows in both rooms to differ by 2 to 4% or 4 to 6 cm.
Table 2. Absolute and percent errors for as-built BIM model dimensions.
Room  Number of Dimensions  Min. Error (cm / %)  Max. Error (cm / %)  Mean Error (cm / %)  Std. Deviation (cm / %)
206   17                    0.00 / 0.00          10.72 / 11.65        4.02 / 2.74          3.24 / 3.61
207   23                    0.52 / 0.10          9.72 / 10.68         4.44 / 2.68          2.48 / 2.75
Finally, the image-based dimensions were used in a direct assessment of the existing
as-built BIM model to parallel the as-built BIM assessment already performed with
manual measurements. Differences between as-built BIM dimensions and image-
based measurements were plotted against zero and directly compared in the same plot
to differences between as-built BIM dimensions and manual measurements (Figure
4). As visually observed, a relatively high level of agreement was found between the
manual field and image-based assessments especially with respect to those dimension
differences far outside the 2% error threshold. As the manual field assessment found
the greatest discrepancies in dimensions for door widths in both rooms, the image-
based survey similarly showed the as-built BIM model to underrepresent the actual
door widths in each room (dimensions 5, 6, 21, and 22 in Figure 4). In this way, the
image-based survey method achieved virtually the same identification of gross errors
in the existing as-built BIM model as the manual survey method.
Figure 4. Difference between as-built BIM model dimensions and manual and image-
based dimensions.
CONCLUSION
The results of the manual survey to verify the existing as-built BIM model revealed
that true as-built conditions can differ by more than 10% from interior as-built
documentation. This finding supports the need for improved methods for efficiently
and automatically verifying as-built drawings and models. While work must still be
done to improve image acquisition and image processing for complex environments
such as the interiors of operational buildings, the image-based reconstructions of both
interior rooms came close to the 2% standard threshold dictated by current FM
practices. Moreover, the greatest geometric errors found in the existing as-built BIM
model through the manual field survey, the door widths in both classrooms, were also
detected through the image-based survey. The proposed image-based survey method
offers potential advantages to the currently employed manual survey method
including: less time and labor spent on-site, increased accessibility to building
geometry and features beyond the limits of traditional measuring devices, and the
simultaneous generation of both 2D dimensions and 3D spatial data. These
opportunities should motivate further research in remote sensing technologies,
including automated photogrammetry, for capturing and verifying operational
building exteriors and for automatically generating as-built documentation.
ACKNOWLEDGEMENTS
The authors would like to thank Autodesk IDEA Studio for their support of this project.
Any opinions, findings, conclusions, or recommendations presented in this paper are
those of the authors and do not necessarily reflect the views of Autodesk.
REFERENCES
Akcamete, A., Akinci, B., Garrett, J.H. (2009). “Motivation for computational support for
updating building information models (BIMs).” Proceedings of the 2009 ASCE
International Workshop on Computing in Civil Engineering, 346, 523-532.
Brilakis, I., Lourakis, M., Sacks, R., Savarese, S., Christodoulou, S., Teizer, J., Makhmalbaf, A.
(2010). “Toward automated generation of parametric BIMs based on hybrid video and
laser scanning data.” Advanced Engineering Informatics, 24, 456-465.
Dai, F. and Lu, M. (2010). “Assessing the accuracy of applying photogrammetry to take
geometric measurements on building products.” Journal of Construction Engineering and
Management, 136(2), 242-250.
Eastman, C., Teicholz, P., Sacks, R., Liston, K. (2008). BIM Handbook: A Guide to Building
Information Modeling for Owners, Managers, Designers, Engineers, and Contractors,
John Wiley and Sons.
El-Hakim, S. (2001). “3D modeling of complex environments.” Proceedings of SPIE – The
International Society for Optical Engineering. 4309, 162-173.
El-Omari, S., Moselhi, O. (2008). “Integrating 3D laser scanning and photogrammetry for
progress measurement of construction work.” Automation in Construction, 18(1), 1-9.
Gallaher, M. P., O'Connor, A. C., Dettbarn, J. L., Jr., Gilday, L. T. (2004). “Cost Analysis of
Inadequate Interoperability in the U.S. Capital Facilities Industry.” NIST GCR 04-867.
Markley, J.D., Stutzman, J.R., Harris, E.N. (2008). “Hybridization of photogrammetry and laser
scanning technology for as-built 3D CAD models.” 2008 IEEE Aerospace Conference,
1014(1).
Mikhail, E.M., Bethel, J.S., McGlone, J.C. (2001). Introduction to Modern Photogrammetry,
Wiley & Sons.
Nistér, D. (2004). “Automatic passive recovery of 3D from images and video.” Proceedings - 2nd
International Symposium on 3D Data Processing, Visualization, and Transmission,
3DPVT, 438-445.
Ordonez, C., Martinez, J., Arias, P., Armesto, J. (2010). “Measuring building façades with a low-
cost close-range photogrammetry system.” Automation in Construction, 19(6), 742-749.
Pollefeys, M., Koch, R., Van Gool, L. (1999). “Self-calibration and metric reconstruction inspite
of varying and unknown intrinsic camera parameters.” International Journal of Computer
Vision, 32(1), 7-25.
Remondino, F., Guarnieri, A., Vettore, A. (2005). “3D modeling of close-range objects:
photogrammetry or laser scanning?” Proceedings of the SPIE - The International Society
for Optical Engineering, 5665(1), 216-225.
Remondino, F., El-Hakim, S. (2006). "Image-Based 3D Modelling: A Review." The
Photogrammetric Record, 21(115), 269-291.
Shum, H.Y., Kang, S.B. (2000). “Review of image-based rendering techniques.” Proceedings of
SPIE-The International Society for Optical Engineering, 4067(1-3), 2-13.
Tang, P., Huber, D., Akinci, B., Lipman, R., Lytle, A. (2010). “Automatic reconstruction of as-
built building information models from laser-scanned point clouds: a review of related
techniques.” Automation in Construction, 19, 829-843.
Triggs, B., McLauchlan, P., Hartley, R., Fitzgibbon, A. (2000). “Bundle adjustment – A modern
synthesis.” Vision Algorithms: Theory and Practice, 1883, 298-375.
Image-based 3D reconstruction and Recognition for Enhanced Highway
Condition Assessment
Berk Uslu1, Mani Golparvar-Fard2, and Jesus M. de la Garza3
1 Graduate Student, Construction Engineering and Management Group, Via Dept. of Civil
and Environmental Engineering, Virginia Tech, Blacksburg, VA; PH (540) 905-8525; FAX
(540) 231-7532; email: berkuslu@vt.edu
2 Assistant Professor, Construction Engineering and Management Group, Via Dept. of Civil
and Environmental Engineering, and Myers-Lawson School of Construction, Virginia Tech,
Blacksburg, VA; PH (540) 231-7255; FAX (540) 231-7532; email: golparvar@vt.edu
3 Vecellio Professor, Construction Engineering and Management Group, Via Dept. of Civil
and Environmental Engineering, and Myers-Lawson School of Construction, Virginia Tech,
Blacksburg, VA; PH (540) 231-7255; FAX (540) 231-7532; email: chema@vt.edu
ABSTRACT
Frequent and accurate condition assessment is essential for an effective transportation
system operation and asset management. Despite the importance, current manual data
collection methods for highway assets are time consuming, subjective and sometimes
unsafe. There is a need for an automated and efficient data collection method that
does not have a significant cost impact and can achieve automation, accuracy, and
safety in condition assessment. Over the past few years, advances in technology such
as cheap and high-resolution digital cameras and availability of vast data storage has
allowed a number of computer vision models to be developed that can detect and
assess the condition of some individual assets. However, none of these vision-based
methods recognizes, locates, and assesses the condition of assets while visualizing
their most recent status in a 3D environment. This paper proposes a new approach, based on
3D image-based reconstruction and integrated recognition of color, shape, and texture
for highway assets, and presents preliminary results from the developed system on a
real world case study.
INTRODUCTION
Infrastructure systems are recognized as the fundamental foundation of societal and
economic functions such as transportation, communication, energy distribution,
wastewater collection, and water supply. Most of the infrastructure systems are both
geographically extensive and have a long service life. It is expensive to provide and
manage any physical infrastructure over spatially extensive areas and for long time
spans. This spatial and temporal range of infrastructure systems causes a high degree
of uncertainty in developing numerical models of deterioration rates. These
characteristics of the infrastructure systems complicate the planning for future
infrastructure maintenance, repair, and reconstruction of the existing facilities. High
costs, tight budgets, and previous decisions that were based on inaccurate predictions
of infrastructure performance are resulting in serious consequences (Maser 2005).
The American Society of Civil Engineers estimates that $2.2 trillion is needed over
five years to repair and retrofit the U.S. infrastructure to a good condition (ASCE
2009). This issue is not only limited to the U.S. as the infrastructure in other countries
is also aging and failing. Although managing and maintaining infrastructure is not a
PROBLEM STATEMENT
In current practice, assessing asset conditions is still a predominantly manual and thus
a time consuming process. A certain amount of subjectivity and the experience of the
raters have an undoubted influence on the final assessment (Bianchini et al. 2010). In
addition, most maintenance decision-making approaches employ a discrete
representation of condition. For example, pavements are usually evaluated in five
different condition states varying from excellent to very poor (de la Garza and
Krueger 2007). Advances in continuous condition-based decision-making are of
interest to the infrastructure management community, since infrastructure damage
variables are typically continuous in nature. Rapid advances in automated inspection
techniques make these damage variables easy to measure, and practical benefits from
considering this more natural representation of condition are increasingly possible.
These advances foster further research in formulating, solving, and implementing
infrastructure management methods using continuous representations of important
condition variables. Some research studies have already addressed the problem of
automated detection, classification, and assessment of assets in a discrete fashion
(Mashford et al. 2009, Meegoda et al. 2006). Current research efforts in devising a
computer vision model for highway asset detection are roughly divided into three
stages: segmentation, detection and condition assessment. Bascon (2010) presented a
Support Vector Machine to recognize road signs. Krishnan (2009) presented a
triangulation and bundle adjustment approach for identifying road signs. Hu and Tsai
(2010) and Wu and Tsai (2006) have created a nearest-neighbor assignment of feature
descriptors for an image recognition model for developing a sign inventory. Although
most of these techniques have achieved the goal of automation and accuracy to a
reasonable level, nonetheless none of these systems use the same visual information
to locate the assets and more importantly detect them in a continuous fashion.
RESEARCH APPROACH
The newly proposed approach and the developed system are expected to exceed the
minimum requirements of standards for safety, efficiency, and consistency by
utilizing visual sensing techniques. The working principle of the system is
summarized in Figure 1. The steps that will be followed to create the proposed system
are as follows:
1. 3D image-based reconstruction of all objects using the D4AR reconstruction
approach (Golparvar-Fard 2010) which integrates structure-from-motion, multi-
view stereo and voxel coloring/labeling;
2. Utilizing the Semantic Texton Forest (STF) algorithm to independently segment
each image into proper asset categories;
3. Integrating camera parameters recovered through the reconstruction step with the
segmented areas to stitch relevant image parts into a panoramic image (necessary
for large assets, such as guardrails and pavement, that appear in more than one
frame);
4. Projecting and visualizing the results in a common 3D environment, accessible
through ubiquitous devices in onsite and remote coordination centers.
3D Image-based Reconstruction
State-of-the-art 3D reconstruction has undergone significant improvement over the
past few years. The availability of cheap, high-resolution imagery along with large
data storage capacity, in addition to advances in computing, has created a great
opportunity to run 3D image-based reconstruction at large scales. A few research
groups (Furukawa et al. 2010, Gallup et al. 2010) have already demonstrated high
density and accurate image based reconstruction results. Application of image-based
3D reconstruction in the construction industry is relatively new. Construction-site
images are typically unordered and uncalibrated, and usually include a significant amount of
occlusion, which makes the application of existing 3D reconstruction algorithms
difficult. Recently, Golparvar-Fard et al. (2010, 2009a) proposed a new algorithm
based on Structure-from-Motion (SfM), Multi-View Stereo (MVS), and a voxel
coloring/labeling mechanism, which results in dense reconstruction. In this research,
the 3D image-based reconstruction module builds
upon the newly proposed algorithm and is tested in the context of sequentially
captured images for highways.
Randomized Learning
Each tree is trained separately on a small random subset of the training data I.
Learning proceeds recursively, splitting the training data I_n at node n into left and
right subsets I_l and I_r according to a threshold t of some split function f of the
feature vector v:
I_l = \{ i \in I_n \mid f(v_i) < t \}    (2)
I_r = I_n \setminus I_l    (3)
At each split node, several candidates for function f and threshold t are generated
randomly, and the one that maximizes the expected gain in information about the
node categories is chosen.
\Delta E = -\frac{|I_l|}{|I_n|} E(I_l) - \frac{|I_r|}{|I_n|} E(I_r)    (4)
where E(I) is the Shannon entropy of the classes in the set of examples I (Shotton
et al. 2008). Training continues to a maximum depth D or until no further information
can be acquired. The class distributions P(c|n) are estimated as a histogram of the
class labels c_i of the training examples i that reached node n.
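A compact sketch of this randomized split selection follows. It illustrates Eqs. (2)-(4) only; the axis-aligned split function and the candidate count are assumptions, not the authors' implementation:

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy E(I) of the class labels in a node."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def best_split(features, labels, n_candidates=50, rng=None):
    """Among randomly generated (feature, threshold) candidates, pick the one
    maximizing the expected information gain of Eq. (4)."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_f, best_t, best_gain = None, None, -np.inf
    for _ in range(n_candidates):
        f = rng.integers(features.shape[1])   # random axis-aligned split function
        t = rng.uniform(features[:, f].min(), features[:, f].max())
        left = features[:, f] < t             # I_l per Eq. (2); I_r = I_n \ I_l
        if left.all() or not left.any():
            continue
        gain = (-left.mean() * entropy(labels[left])
                - (~left).mean() * entropy(labels[~left]))
        if gain > best_gain:
            best_f, best_t, best_gain = f, t, gain
    return best_f, best_t, best_gain

# Toy data: two classes separable on feature 0.
feats = np.column_stack([np.r_[np.linspace(0, 1, 10), np.linspace(2, 3, 10)],
                         np.full(20, 0.5)])
labels = np.array([0] * 10 + [1] * 10)
f, t, gain = best_split(feats, labels)
```

A pure split drives both child entropies to zero, so for separable data the gain approaches its maximum of 0.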
Figure. Trees of depth d = 2 to 5 containing both STF leaf nodes (green) and split
nodes (yellow). The region prior is computed as the average of the individual leaf
node class distributions P(c|l).
RESEARCH EXPERIMENTS
The developed asset management system first performs a 3D image-based
reconstruction using images collected in a sequential fashion; next, the
STF algorithm is applied to segment and classify the
images acquired from the highway. The performance of the semantic texton forest
algorithm for the segmentation and detection of highway assets is evaluated in
the newly created automatic condition assessment system. The recognition algorithm
uses a dataset consisting of images and their ground truths (the same images labeled
in a supervised fashion), which are used to create the decision trees.
Two experiments were performed to evaluate the performance of this
algorithm. The first experiment used a new dataset of fourteen images spanning
four categories (i.e., guardrail, pavement, poles, and signs) plus a void category
to investigate the performance of the algorithm in
segmenting and detecting highway asset images. These images were taken
on Virginia Tech’s Smart Road, a 2.1-mile-long facility used for
highway research located in Blacksburg, VA. An initial 3D reconstruction was
performed with this dataset. The results of this initial and controlled experiment
suggested that the number of categories used for the training should be increased in
order to have correct segmentation with minimal segmentation confusion (wrong
recognition of the category).
Subsequently, a second experiment was performed by extending the dataset that
was created for the first experiment. By adding background objects (such as sky,
grass, soil, or trees) as new categories for the algorithm to be trained on, the
confusion was reduced significantly. The dataset for this experiment consisted of twelve
different categories plus a void category to train the algorithm. Similar to the first
experiment, a 3D image-based reconstruction was performed with this dataset. Table
1 presents the results of comparing the performance of the 3D image-based reconstruction
algorithm with the state-of-the-art Structure-from-Motion algorithm (Snavely et al.
2007) on the dataset.
segmentations were randomly selected per category. Ten principal pixels were
selected per image from the asset in interest and the segmentation result was
evaluated by acquiring RGB value of these points. If the RGB of the specific point
matches the specific color assigned for the asset in interest, it was considered to be a
True Positive (TP), if it did not match, it was considered to be a False Negative (FN).
The results of this analysis were plotted in a Receiver Operating Characteristic (ROC)
plot (Figure 6).
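The TP/FN scoring described above can be sketched as follows. The category color map and the toy image are hypothetical; the paper does not specify the colors assigned to each asset:

```python
import numpy as np

# Hypothetical category color map; the actual colors used in the study
# are not given in the paper.
CATEGORY_COLORS = {"guardrail": (255, 0, 0), "pavement": (128, 128, 128)}

def score_points(seg_image, points, category):
    """Count True Positives / False Negatives for sampled ground-truth pixels.

    seg_image : HxWx3 uint8 segmentation output
    points    : list of (row, col) pixels known to belong to `category`
    """
    target = np.array(CATEGORY_COLORS[category])
    tp = sum(1 for (r, c) in points if np.array_equal(seg_image[r, c], target))
    return tp, len(points) - tp  # (TP, FN)

# Toy 4x4 segmentation: left half guardrail-red, right half pavement-gray.
seg = np.zeros((4, 4, 3), dtype=np.uint8)
seg[:, :2] = CATEGORY_COLORS["guardrail"]
seg[:, 2:] = CATEGORY_COLORS["pavement"]
pts = [(0, 0), (1, 1), (2, 3)]              # last point actually falls on pavement
print(score_points(seg, pts, "guardrail"))  # → (2, 1)
```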
(Axes: True Positive rate versus False Negative Rate in percent; plotted categories:
concrete pavement, guardrail, poles, signs, trees, grass, and soil.)
Figure 6. ROC plot for trained categories.
The results show that if distinct features of a highway asset are present, the success
rate in segmenting that asset increases. As shown in Figure 6, the True
Positive rates for the signs are among the highest, owing to the distinct green
color of these signs. In contrast, the segmentation results for the poles are among the
lowest, since the features of these asset items resemble those of other asset items such
as guardrails. The computational time confirms that such a machine
learning algorithm is much faster and more convenient than other algorithms
used for segmentation. The machine learning kernel allows the thresholds for the
filter bank to be automatically trained from the ground truth data and dynamically
finds the threshold surface. This flexibility is an important attribute for the highway
asset condition assessment system, yet it also confirms that for more robust
segmentation and categorization of assets, more systematic collection of training data is required.
CONCLUSION
The automated and integrated image-based 3D reconstruction and recognition asset
management system presented in this paper demonstrates promising results. The low
cost and accuracy of this technology, along with the high safety associated with its
application, could allow it to replace the current manual and subjective data analysis
and the computer vision systems currently in use. The implementation of this
algorithm is the first step in creating this new condition assessment system. By using
this approach, there will be no need for application of filter-bank responses or local
descriptors which are computationally expensive. More experiments need to be
conducted by expanding the training dataset, and testing performance on different
datasets with different levels of visibility and occlusion. Since the 3D image-based
reconstruction algorithm geo-registers and associates images together, the
segmentation results in any of these paired images can help in boosting the
confidence in segmentation and recognition of any new training image. This
integration will also be tested and reported in the near future.
REFERENCES
ASCE. (2009). The 2009 report card for America’s infrastructure.
http://www.asce.org/reportcard/2009. Accessed Jan. 10 2011.
Bascon S. M., Rodriguez J. A., Arroyo S. L., Caballero A. F., and Lopez-Ferreras F. (2010). “An
optimization on pictogram identification for the road-sign recognition task using SVMs.” CVIU, 114
(3), 373-383.
Bianchini A., Bandini P., and Smith D.W. (2010). “Interrater reliability of manual pavement distress
evaluations.” ASCE J. of Transp.Eng., 136 (2), 165-172.
de la Garza J. M., and Krueger D. A. (2007). “Simulation of highway renewal asset management
strategies.” Proc., ASCE Conf. of Computing in Civil Eng., 527-541, 2007.
Furukawa Y., Curless B., Seitz S.M. and Szeliski R. (2010). “Towards internet-scale multi-view
stereo.” Proc., Computer Vision and Pattern Recognition Conf.
Gallup D., Frahm J.-M., Pollefeys M. (2010). “A heightmap model for efficient 3D reconstruction
from street-level video.” Proc., Int. Conf. on 3D Data Processing, Visualization and Transmission
(3DPVT2010).
Golparvar-Fard M., Peña-Mora F. and Savarese S. (2010). “D4AR – 4 dimensional augmented reality -
tools for automated remote progress tracking and support of decision-enabling tasks in the
AEC/FM industry.” Proc., The 6th Int. Conf. on Innovations in AEC.
Golparvar-Fard M., Peña-Mora F., and Savarese S. (2009a). “D4AR- a 4-dimensional augmented
reality model for automating construction progress data collection, processing and
communication.” Journal of Information Technology in Construction (ITcon), 14, 129-153.
Golparvar-Fard M., Peña-Mora F. Arboleda C. A., and Lee S. H. (2009b). “Visualization of
construction progress monitoring with 4D simulation model overlaid on time-lapsed photographs.”
ASCE J. of Computing in Civil Engineering, 23 (6), 391-404
Hu Z. and Tsai Y. (2010) “Image Recognition Model for Developing a Sign Inventory” ASCE J. of
Comp. in Civil Eng., in press.
Krishnan A. (2009). “Computer vision system for identifying road signs using triangulation and bundle
adjustment”. MS Thesis, Computer Engineering. Kansas State University, Manhattan, Kansas.
Maser K. J. (2005). “Automated systems for infrastructure condition assessment.” ASCE J. Infrastruct.
Syst., 11, 153.
Mashford J., Davis P., Rahilly M. “Pixel-based colour image segmentation using support vector
machine for automatic pipe inspection.” Proc. of the 20th Australian Joint Conf. on AI, vol.
4830, 739-743.
Meegoda J. N., Juliano T. M., and Banerjee A., (2006). “A Framework for Automatic Condition
Assessment of Culverts,” Paper No. 06-2414, 85th Annual Meeting of the Transportation Research
Board, Washington, DC.
NAE, National Academy of Engineering (2010). Grand Challenges for Engineering. NAE of the
National Academies.
Shotton J., Johnson M., Cipolla R., (2008). “Semantic Texton Forests for Image Categorization and
Segmentation.” Proc. Int. Conf. Computer Vision and Pattern Recognition.
Snavely N., Seitz S. M., Szeliski R. (2007). “Modeling the World from Internet Photo
Collections.” Int. J. of Comp. Vis.
Wu J. and Tsai Y. (2006). “Enhanced Roadway Inventory Using 2-D Sign Video Image recognition
Algorithm”, J. of Computer-Aided Civil & Infrastructure Eng., 21, 369-382.
Design and Evaluation of Algorithm and Deployment Parameters for an RFID-
Based Indoor Location Sensing Solution
ABSTRACT
INTRODUCTION
Location information is of paramount value to the building industry. It is the
basis of context-awareness (Aziz et al. 2005), which relies on automatic
recognition of both the user’s location and activity. Context-aware information
delivery can replace current manual processes with automated delivery of spatial
information to on-site mobile users. With its application, targets such as building
materials, equipment, construction tools, and people can be easily located and target-
coordinate of target i, (x_{ij}, y_{ij}) is the actual coordinate of the j-th nearest neighbor of
target i, where j \in \{1, \ldots, k\}, and w_j is the weighting factor.
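The weighted k-nearest-neighbor estimate described above can be sketched as follows. The inverse-square weighting is one common choice and is an assumption here, as are the RSSI values; the paper's exact weighting factors are not reproduced:

```python
import numpy as np

def estimate_location(rssi_target, rssi_refs, ref_coords, k=4, weighted=True):
    """Weighted k-nearest-neighbor location estimate from RSSI readings.

    rssi_target : (n_antennae,) RSSI vector of the target tag
    rssi_refs   : (n_refs, n_antennae) RSSI vectors of the reference tags
    ref_coords  : (n_refs, 2) known (x, y) coordinates of the reference tags
    """
    # Euclidean distance in signal space selects the k nearest reference tags.
    d = np.linalg.norm(rssi_refs - rssi_target, axis=1)
    nn = np.argsort(d)[:k]
    if weighted:
        w = 1.0 / (d[nn] ** 2 + 1e-9)   # nearer in signal space -> heavier weight
        w /= w.sum()
    else:
        w = np.full(k, 1.0 / k)         # arithmetical average
    # Estimated coordinate = sum_j w_j * (x_ij, y_ij)
    return w @ ref_coords[nn]

# Four hypothetical reference tags at the corners of a 6 m x 7 m room.
refs = np.array([[-50.0, -60.0], [-52.0, -61.0], [-51.0, -59.0], [-49.0, -62.0]])
coords = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 7.0], [6.0, 7.0]])
target_rssi = np.array([-50.5, -60.5])
est = estimate_location(target_rssi, refs, coords, k=4, weighted=False)
print(est)  # arithmetical mean of the corner coordinates: (3.0, 3.5)
```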
EXPERIMENT DESCRIPTION
Based on the mathematical framework, field tests were conducted using off-
the-shelf ultra high frequency (UHF) active RFID technology that runs at a frequency
of 915 MHz. The reader supports two antennae, which can be attached to the reader
directly or via data cables. The active tag is encapsulated in a plastic case, so that it
can be attached to a wider range of materials without significant interference with its
performance. Powered by an AA battery, a tag emits a non-directional signal every
1.5 seconds. Middleware is used to communicate with the reader and extract real-
time data, including tag ID, tag model, battery life, RSSI readings, last contact time,
and contact count.
To achieve the research objectives, a total of 9 field tests were completed in a
6 m by 7 m conference room in an educational building. A total of 2 readers, 4
antennae and 16 tags were used in the tests. The numbers and positions of the
reference tags and the targets varied in different tests. The antennae positions were
fixed throughout the tests, one of which was in the corridor and another in a room
next door. The other two were inside the conference room. Reference tags were
attached to the ceiling to simulate the use scenario, where the mechanical, electrical
and plumbing (MEP) equipment at the same height is tagged for maintenance
purposes. The target tags were placed either on the ground or above the ceiling.
Two algorithm parameters (the number k of nearest neighbors and the weighting
method) and one deployment parameter (reference tag layout, RTL) were tested, and
the findings are summarized in the following section. Different RTL configurations
tested are illustrated in Figure 1. Accuracy is used as the evaluation criterion in this
paper, and the cost and robustness of the proposed solution will be evaluated in future
research. The accuracy is measured by the difference between targets' actual locations
and estimated locations.
FINDINGS
To optimize the algorithm parameters, different values of a parameter are
applied to each test, and the resulting accuracies are compared.
If the k value is too large, the selected neighbors may not necessarily be close to
the target, and their distance reduces their reliability as reference points; if the k value is
too small, the selected neighbors are less likely to be evenly distributed around the target,
leading to increased error. Therefore, a desired k value should be
balanced. Both k=3 and k=4 values have been reported by different publications as
the optimal value (Huang et al. 2009; Ni et al. 2004), which indicates that the optimal
value may depend on the design of the specific solution. Table 1 illustrates the
optimal k value under the design of the proposed solution.
Table 1. Comparison of k values.
                         Arithmetical average    Weighted average
                         k=4      k=3            k=4      k=3
Mean error distance (m)  1.94     2.23           1.96     2.14
Max error distance (m)   2.15     2.60           2.21     2.47
Min error distance (m)   1.65     1.91           1.60     1.95
Standard deviation (m)   0.13     0.22           0.20     0.17
When the error distance is calculated using arithmetical averages, k=4 yielded
a higher accuracy than k=3 in all tests. When the error distance was calculated using
weighted averages, k=4 yielded a higher accuracy than k=3 in 88.9% of all tests. The
average increases of accuracy in both scenarios are 0.29 m or 14.7%, and 0.18 m or
9.3%, respectively; with a maximum increase of 0.55 m in test 3 using arithmetical
averages. In addition, when k=4, the accuracy in most tests was within 2.2 m, with a
highest accuracy of 1.60 m, while when k=3, the accuracy in about half of the tests
exceeded 2.1 m, with a highest accuracy of 1.91 m.
For the weighting method, both arithmetical average and weighted average
can be used. Table 2 is extracted from Table 1 to compare these two methods, using
k=4. Under arithmetical averages, the mean error distance of all 9 tests was 1.94 m,
slightly smaller than that using weighted averages. The standard deviation of the
former was also smaller than that of the latter. Although the difference is not
significant, it suggests more stable performance of the algorithm.
Table 2. Comparison of weighting methods.
                         Arithmetical average    Weighted average
Mean error distance (m)  1.94                    1.96
Standard deviation (m)   0.13                    0.20
RTL2 had the largest max error distance, and RTL3 had the smallest min error
distance. Standard deviation did not vary significantly between different RTLs. In
general, the difference between RTLs did not lead to noticeable changes in accuracy.
To further validate the finding, a statistical t-test was performed using all the data gathered to
verify the following hypothesis: "the error distance of an individual target does not
change as the RTL switches from one of the three RTLs to another". No hypothesis
was rejected in the t-test under a confidence level of 90%, which indicates that
changing RTLs did not statistically cause any change in accuracy.
Table 3. Comparison of RTLs.
                         RTL1     RTL2     RTL3
Mean error distance (m)  1.85     1.92     1.90
Max error distance (m)   2.89     3.95     3.49
Min error distance (m)   0.78     0.33     0.27
Standard deviation (m)   0.72     0.89     1.08
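The hypothesis test described above can be sketched as a paired t-test; the per-target error distances below are illustrative placeholders, not the study's measurements:

```python
import numpy as np

# Hypothetical per-target error distances (m) under two reference-tag layouts;
# illustrative values only, not the study's data.
rtl1 = np.array([1.2, 2.1, 1.8, 2.4, 1.6, 2.0, 1.5, 2.2])
rtl2 = np.array([1.4, 1.9, 2.0, 2.3, 1.7, 1.8, 1.6, 2.1])

# Paired t-statistic: H0 = per-target error does not change between layouts.
diff = rtl1 - rtl2
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))

# Two-sided critical value for a 90% confidence level, df = 7 (from a t-table).
T_CRIT = 1.895
print("reject H0" if abs(t_stat) > T_CRIT else "fail to reject H0")
```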
DISCUSSION
Results from the field tests show that the proposed solution yielded
significantly better performance with k=4 than with k=3 under either
weighting method, with k=3 increasing the error distance by 14.7% and 9.3%
under arithmetical averages and weighted averages, respectively. This may
be caused by the fact that a smaller k value increases the chance that the identified
nearest neighbors are not evenly distributed around a target, leading to biased
estimated locations. In addition, an error in identifying one of the nearest neighbors
would lead to a larger error distance when fewer nearest neighbors are used in
calculation. On the other hand, using arithmetical averages yielded a smaller error distance and standard deviation than using weighted averages, although the improvements were less pronounced. Tests conducted to optimize the deployment parameter indicate that the solution can keep its performance consistent under different RTLs and that a strict grid layout is not a must. Therefore, it is possible that RFID tags attached to equipment and building components at the manufacturing stage, whose layouts are likely to be random, could be used for ILS purposes. This would reduce costs and strengthen the argument that RFID-based solutions could be implemented throughout a building’s life cycle.
With its optimal algorithm parameters, the solution demonstrated its ability to adapt to different RTLs and its potential to share existing RFID equipment with other applications. However, to better assess the capability of the solution, especially its adaptability to building-scale implementations, the following issues need to be further examined: the performance of both stationary and mobile targets, optimization of the number and layout of virtual reference tags, the tradeoff between accuracy and cost, robustness of the solution, and integration with various location-based services.
CONCLUSION
Indoor location information is of paramount value to the building industry and
can be used to facilitate FM practices, improve occupant experience and building
COMPUTING IN CIVIL ENGINEERING 83
utilization, and ensure building safety and security. Currently no ILS solution has
been validated and widely used in the industry. This research proposed a new
approach for ILS. A solution was built and tested in a controlled environment for
validation. A series of 9 tests were conducted, most of which reported an accuracy within 2 m. The use of k=4 and arithmetical averages yielded the best results. The
performance of the solution was consistent for different RTLs. Based on the results of
this research, the authors plan to further explore the effects of the following
deployment parameters: target type (stationary or mobile, ground or above ceiling),
number of readers, and number of reference tags. The accuracy/cost tradeoff and the
robustness of the proposed approach will also be assessed. Then the algorithm will be
implemented at a building level, and the solution’s technical viability, cost
implications, and potential value for supporting various location-based services will
be examined.
REFERENCES
Aziz, Z., Anumba, C. J., Ruikar, D., Carrillo, P. M., Bouchlaghem, N. M. (2005).
"Context aware information delivery for on-site construction operations." Proc.
22nd CIB-W78 Conference on Information Technology in Construction, 304,
321-327.
Domdouzis, K., Kumar, B., Anumba, C. (2007). "Radio-frequency identification
(RFID) applications: A brief introduction." Advanced Engineering Informatics,
21(4), 350-355.
Ergen, E., and Akinci, B. (2007). "An overview of approaches for utilizing RFID in
construction industry." RFID Eurasia, 2007 1st Annual, 1-5.
Ergen, E., Akinci, B., Sacks, R. (2007). "Life-cycle data management of engineered-
to-order components using radio frequency identification." Advanced
Engineering Informatics, 21(4), 356-366.
Hightower, J., Borriello, G., Want, R. (2000). SpotON: An Indoor 3D Location
Sensing Technology Based on RF Signal Strength, Department of Computer
Science and Engineering, University of Washington, Seattle, WA.
Huang, Y., Lv, S., Liu, Z., Jun, W., Jun, S. (2009). "The topology analysis of
reference tags of RFID indoor location system." Proc., 2009 3rd IEEE
International Conference on Digital Ecosystems and Technologies (DEST),
IEEE, 313-17.
Jin, G., Lu, X., Park, M. (2006). "An indoor localization mechanism using active
RFID tag." Proc., IEEE International Conference on Sensor Networks,
Ubiquitous, and Trustworthy Computing, June 5, 2006 - June 7, IEEE, 40-43.
Khoury, H. M., and Kamat, V. R. (2009). "Evaluation of position tracking
technologies for user localization in indoor construction environments." Autom.
Constr., 18(4), 444-457.
Li, N., and Becerik-Gerber, B. (2011). "Performance-based evaluation of RFID-based
indoor location sensing solutions for the built environment." Advanced
Engineering Informatics, (in press).
Li, W., Wu, J., Wang, D. (2009). "A novel indoor positioning method based on key
reference RFID tags." Proc., 2009 IEEE Youth Conference on Information,
Computing and Telecommunication (YC-ICT 2009), IEEE, 42-5.
performance in the construction industry has been based on read range, read rate and read time, which directly depend on the received signal strength indication (RSSI). RSSI is the indicator used to measure the strength of radio waves received by an antenna. Across a wide range of RFID applications in the construction industry, RSSI lays the basis of calculations and analysis. When RFID is applied for tracking materials and equipment in the construction industry, read rate and read range are frequently used to represent RFID performance. RSSI evaluation of a tag enables researchers to assess the read range and read rate (Clarke et al., 2006; Dziadak et al., 2008; Ergen et al., 2007; Goodrum et al., 2006; Tzeng et al., 2008). An increase in read rate and read range in association with higher RSSI readings represents better RFID performance.
stores the tag’s ID and other customized information, and it sends out radio waves containing the on-board information, which are captured by an antenna. The antenna is connected to a reader and establishes the communication between the reader and the tag. The reader receives the information from the tag, processes it, and transfers it to users for further analysis.
There are two kinds of RFID tags, which differ in their power source: passive and active. Passive tags have no built-in power source and have to operate on the electromagnetic energy radiated by the reader. Active tags contain an internal battery that provides the power for the tag to function. Passive tags are inexpensive, small in size and thus easy to deploy, and have been widely used in the construction industry for identifying and tracking materials and components. However, their short read ranges, mostly within 1 m, exclude passive tags from applications where a long read range is required. In addition, it has been shown that the radio waves of passive tags fail to penetrate common floor materials with a thickness of 1 cm to 2 cm (Tzeng et al., 2008). On the other hand, active tags have longer read ranges and extendable on-board memory, but they are more expensive and have a limited lifespan of up to 10 years. Examples of active tag applications are found in RFID localization (Zhou and Shi, 2010), building maintenance (Ko, 2009), personnel monitoring (Lin et al., 2010), and construction material management (Ren et al., 2010).
RESEARCH APPROACH
The authors designed several tests to assess the effect of each environmental
factor on RSSI readings. Tests were carried out in a typical educational building at
the University of Southern California. In the field tests, 4 antennae, 2 readers and 16
tags were deployed. Different quantities of equipment were used based on the
specific test design. For all field tests, tags emitted signals every 1.5 seconds. Omni-directional antennae were selected due to their uniform power radiation in space and their ability to receive messages continuously without disconnection.
Tests for relative orientation and temperature were carried out in a conference room 7 m in width and 6 m in length (Figure 1). A total of eighteen tests were conducted for the selected orientations. To assess the effect of orientation, tags were placed in three different positions: facing the antenna, facing away from the antenna, and facing upward toward the ceiling (Figure 2). The antenna was placed either horizontally or vertically. Tests to assess the effect of temperature settings were conducted in the same room, which was selected due to the availability of controlled temperature settings. A total of 16 tags were attached to acoustical plaster ceiling tiles in two parallel lines at an equal interval of 0.8 m from each other, facing downwards to the floor (Figure 3).
The second test bed was designed to assess the effect of the distance between
tags and antennae. Tests were carried out in a corridor of 3 m in width and 60 m in
length. A total of 8 tags and 1 antenna were deployed in the test. Tags were attached
to a wooden table and remained fixed throughout the test. The table was moved along
a straight line away from the antenna. Tags were first placed 2.5 m away from the antenna and moved 2.5 m further each time until no signal could be detected.
RESULTS
This section reports the effect of environmental factors on the RSSI readings.
Maximum, minimum and mean RSSI readings were gathered to assess the effect. An undetected tag is assumed to have an RSSI reading of (-128), as defined by the manufacturer’s specifications. In addition, trend lines were generated to characterize the behavior of the RSSI readings under the different environmental factors.
To assess the effect of the relative orientation of tags and the antenna on RSSI readings, tests were repeated three times. Among all combinations, the best RSSI readings and tag read rates were obtained when tags were facing the vertical antenna: 20 tags were detected with a mean RSSI reading of (-65.9). In all combinations, the minimum RSSI reading was (-128), which means that at least one tag was not detected in each test. The mean RSSI readings of the different antenna orientations are plotted in Figure 4.
[Figure 4 residue: with the horizontal antenna, mean RSSI readings were -103.63, -106.75 and -106.88.]
[Figure: mean RSSI readings versus temperature settings (15°C to 30°C).]
As can be seen from Equation 1, the coefficient between RSSI readings and temperature is a positive value of 0.10, which indicates that a 1°C increase in temperature leads to a 0.1009 improvement in the RSSI reading. The R-squared statistic is 0.71, which indicates that the linear regression can reliably predict the relationship between RSSI and temperature. The active tags in the test were powered by lithium batteries, which tend to suffer from voltage decrease at low temperatures (Linden and Reddy, 2002). The drop in battery power in RFID tags can lead to a decrease in RSSI readings.
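A linear fit of the kind behind Equation 1 can be sketched with ordinary least squares. The temperature/RSSI pairs below are hypothetical, constructed only so that the slope comes out near the reported 0.10; they are not the study's measurements.

```python
def linfit(xs, ys):
    """Ordinary least squares fit y = a*x + b; returns (slope a, intercept b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical mean RSSI readings at four temperature settings (deg C)
temps = [15, 20, 25, 30]
rssi = [-68.0, -67.4, -66.9, -66.5]

slope, intercept = linfit(temps, rssi)
print(round(slope, 4))  # each extra degree improves RSSI by ~0.1
```

A positive slope of this magnitude matches the paper's interpretation that warmer settings slightly improve the readings of battery-powered active tags.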
The result of the tests for varying distances between the tags and antenna is
plotted in Figure 6. Overall, RSSI readings were better when the distance between
tags and antenna decreased. The radio signal transmitted by RFID tags could not be
detected when tags were placed over 25 m away from the antenna.
[Figure 6: mean RSSI readings versus distance, with logarithmic trend line y = -33.8 ln(x) - 9.1673, R² = 0.882.]
A logarithmic trend line was generated with an R-squared of 0.882, which shows that the trend line is relatively reliable for predicting the relationship between the mean RSSI readings and the distance. This relationship complies with the classical signal propagation model frequently used in the electrical engineering field, where the path loss of signal strength is correlated with the logarithm of distance (Keenan and Motley, 1990). Based on the data collected in the test, the regression equation can be written as Equation 2.
y = -33.8 ln(x) - 9.1673 (Equation 2)
As seen from the equation, the constant term is (-9.1673), which can be interpreted as follows: since ln(1) = 0, the RSSI reading will be (-9.1673) when tags are placed 1 m away from the antenna.
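Equation 2 can be used both to predict the mean RSSI at a given distance and, inverted, to estimate distance from a reading. The following sketch uses the paper's fitted coefficients; treating them as exact outside the tested 2.5-25 m range is an assumption.

```python
import math

A, B = -33.8, -9.1673  # fitted coefficients from Equation 2

def predict_rssi(d):
    """Mean RSSI predicted by the log-distance model (d in metres, d > 0)."""
    return A * math.log(d) + B

def estimate_distance(rssi):
    """Invert the model: the distance at which this mean RSSI is expected."""
    return math.exp((rssi - B) / A)

print(predict_rssi(1.0))                                # -9.1673, since ln(1) = 0
print(round(estimate_distance(predict_rssi(10.0)), 2))  # 10.0
```

Inverting a fitted path-loss curve in this way is the basic idea behind RSSI-based ranging, although real readings scatter around the trend line.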
Regardless of the tag orientation, the horizontal antenna orientation yielded worse results than the vertical one. The RSSI readings were significantly better when the antenna was vertically positioned than when it was horizontally positioned, at a 1% significance level. No significant difference was recorded among the different tag orientations. This result can be interpreted as indicating that antenna orientation has more effect on RSSI readings than tag orientation. Results show that RSSI readings can be improved by 38.3% when tags are placed facing a vertical antenna compared to tags facing away from a horizontal antenna. No significant effect was reported in any of the temperature settings; it can be concluded that the effect of temperature on RSSI readings in an indoor environment is negligible within a narrow temperature range. Moreover, RSSI readings had an inverse relationship with distance, and the radio signal transmitted by RFID tags could not be detected when tags were over 25 m away from the antenna.
Overall, this study filled a research gap: the lack of systematic approaches for assessing the environmental effects on RSSI readings and thus on RFID systems. This study can help RFID technology users and researchers design and deploy systems in a more location- and performance-aware manner, which will lead to higher RSSI readings and thus better RFID performance. Future studies by the authors will focus on expanding the set of environmental factors to include the materials tags are attached to, obstructions between tags and antennae, building components, furniture layouts, and room occupancies. In addition, accuracy of indoor localization will be a potential evaluation criterion for assessing the importance of the effects of the environmental factors on RSSI readings.
REFERENCES
Dziadak, K., Kumar, B. and Sommerville, J. (2009). Model for the 3D location of
buried assets based on RFID technology. Journal of Computing in Civil
Engineering, 23(3), 148-59.
Ergen, E., Akinci, B., East, B. and Kirby, J. (2007). Tracking components and maintenance history within a facility utilizing radio frequency identification technology. Journal of Computing in Civil Engineering, 21(1), 11-20.
Goodrum, P. M., McLaren, M. A. and Durfee, A. (2006). The application of active radio frequency identification technology for tool tracking on construction job sites. Automation in Construction, 15(3), 292-302.
Hightower, J. and Borriello, G., 2001a, Location systems for ubiquitous computing,
Computer, 34(8), 57-66.
Keenan, J. M. and Motley, A. J. (1990). Radio Coverage in Buildings. British
Telecom Technology Journal, 8(1), 19-24.
Ko, C. (2009). RFID-based building maintenance system. Autom. Constr., 18, 275-284.
Ladd, A. M., Bekris, K. E., Rudys, A. P., Wallach, D. S. and Kavraki, L. E. (2004). On the feasibility of using wireless Ethernet for indoor localization. IEEE Transactions on Robotics and Automation, 20(3), 555-559.
Landt, J. (2005). The history of RFID. IEEE Potentials, 24(4), 8-11.
Lehmann, E. L. and Romano, J. P. (2006). Testing statistical hypotheses, Springer, New York.
Lin, C. J., Lee, T. L., Syu, S. L., Chen, B. W. (2010). Application of intelligent agent and RFID technology for indoor position: Safety of kindergarten as example. International Conference on Machine Learning and Cybernetics (ICMLC 2010), 5, 2571-2576.
Linden, D. and Reddy, T. (2002). Handbook of Batteries. McGraw-Hill, New York.
Luo, X., O’Brien, W.J. and Julien, C.L., 2010, Comparative evaluation of Received
Signal-Strength Index (RSSI) based indoor localization techniques for
construction jobsites, Advanced Engineering Informatics, in press.
Mautz, R., 2009, Overview of current indoor positioning systems, Geodesy and
Cartography, 35(1), 18-22.
McCarthy, J. F., Nguyen, D. H., Al, M. R. and Soroczak, S. (2002). Proactive displays the experience UbiComp project. SIGGROUP Bulletin, 23(3), 38-41.
Pradhan, A., Ergen, E. and Akinci, B., 2009, Technological assessment of radio
frequency identification for indoor localization, Journal of Computing in Civil
Engineering, 23, 230-238.
Ren, Z., Anumba C. J., Tah J. (2010) RFID-facilitated construction materials
management (RFID-CMM) - A case study of water-supply project, Advanced
Engineering Informatics. In Press.
Tzeng, C., Chiang, Y., Chiang, C. and Lai, C. (2008). Combination of radio frequency identification (RFID) and field verification tests of interior decorating materials. Automation in Construction, 18(1), 16-23.
Wang, L-C., Lin, Y-C. and Lin, P. H. (2007). Dynamic mobile RFID-based supply chain control and management system in construction. Advanced Engineering Informatics, 21(4), 377-390.
many decisions, such as changing the size and number of crews, equipment types, and activity relationships.
Time-cost trade-off analysis has been extensively studied in the literature with different application areas, including highways (El-Rayes and Kandil 2005) and earthwork (Marzouk and Moselhi 2004). Evolutionary algorithms (EAs) have been extensively utilized to solve time-cost trade-off problems (Li and Love 1997; Que 2002; Elbeltagi et al. 2005). EAs are stochastic search methods that mimic natural biological evolution and/or the social behavior of species. They include genetic algorithms, memetic algorithms, particle swarm optimization, the shuffled frog leaping algorithm, and ant colony optimization (Elbeltagi et al. 2005; Marzouk et al. 2009).
ADVANCED SHORING
The flying shuttering system is also called advanced shoring, mobile or moving scaffolding, or a self-launching erection girder. The advanced shoring system was initially used for pre-stressed cast-in-situ concrete bridges with spans of relatively short length. If the ground conditions are poor, the ground level is variable, or the bridge is high above the ground, movable casting girders supported off the permanent substructure or superstructure can be a viable alternative solution. The system’s main concept is that the formwork is supported on a moving gantry system, which simulates a factory operation transported to the site.
While casting the piers, recesses are made to support the brackets, along with small-diameter horizontal holes that allow the insertion of steel bars for erecting the brackets. Steel brackets are then mounted; they are held mainly by friction between the surfaces of the brackets' steel plates and the concrete column, provided by tensioning six bars at each column. The system is assembled on the ground with the formwork supported on the moving gantry, and the system is then lifted into position on the brackets. Construction is done in stages, each ending at the point of zero bending moment. The construction of a typical stage starts by lowering the formwork to free it from the bottom slab and webs. The brackets must then move forward to support the next span: the main girders and formwork advance until the girders' ends pass the next column in a manner that preserves system stability, with the main trusses supported temporarily on the superstructure by means of high-tensile bars. The brackets are dismantled from their current position and travel along rails fixed on the bottom chord of the two trusses until they reach the required position. The main girders are lowered to rest on the brackets and then travel to their final position. Formwork levels are adjusted, as are the carpentry works. Steel reinforcement and pre-stressing components are fixed for the bottom slab and webs. Concreting is normally performed in two stages, first the bottom slab and webs and then the top slab. After the concrete of the bottom slab and webs gains sufficient strength, the formwork on the inner sides of the webs is dismantled, and the same activities are repeated for the top slab. The span is stressed after it gains the required strength, and the system reaches its final position. The gantry is then rolled forward by means of outriggers on both sides of the gantry's deck, and the cycle repeats for the next span (Essawy, 2007).
SIMULATION MODULE
Simulation can be considered a powerful tool because it imitates what happens in reality to a certain level of accuracy and reliability without extra costs. STROBOSCOPE (Martinez, 1996) is used as the simulation tool: tasks are represented by rectangular and chamfered-rectangle shapes called “combi” and “normal” activities, while resources are represented by circular shapes named “queues”. Each combi activity must have queues to support it. Each activity can take an argument called a semaphore to control its start and end; this was used to make STROBOSCOPE start work at the beginning of the working day and stop at the end of the day.
The basic idea of using STROBOSCOPE is its ability to create multiple replications for the various alternatives that could affect the simulation time. To determine which factors affect the simulation, a loop (in the form of a WHILE construct) was used to create multiple replications of the various alternatives.
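The replication loop can be outlined as follows. STROBOSCOPE itself performs the discrete-event simulation, so a stub function stands in for a single run here; the resource ranges and the duration formula are illustrative assumptions, not the paper's model.

```python
from itertools import product

def run_simulation(cranes, form_crews, steel_crews, concrete_crews):
    """Stub for one STROBOSCOPE run: more resources -> shorter duration (days)."""
    return 200.0 / (0.5 * cranes + 0.2 * (form_crews + steel_crews + concrete_crews))

# Replicate the model over the alternatives (1-3 cranes, 1-3 crews of each type)
results = {}
for cranes, crews in product(range(1, 4), range(1, 4)):
    results[(cranes, crews)] = run_simulation(cranes, crews, crews, crews)

fastest = min(results, key=results.get)
print(fastest)  # the most-resourced alternative finishes first
```

Sweeping the alternatives in an outer loop like this is what produces the duration-versus-resources curves discussed below.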
Table 1: Description of the activities and resources used in the STROBOSCOPE model
GntryPrts: A queue that represents the parts of the gantry system used in supporting the formworks of the bridge deck
GrdrAssmbly: Combi activity of assembling the gantry system parts
Grdr: A queue that represents the finished assembled gantry system ready to be launched
PsntgGrdr: Combi activity of positioning the girder on the brackets supported on the piers of the bridge
PrsFrms: A queue that represents the parts of the pier forms
PrsCnstr: Combi activity representing the construction of the piers of the bridge
GrdrPst: A dummy queue that has the gantry system positioned and ready for formworks and steel reinforcement
AdjFrm: Combi activity for the adjustment of formworks
Crn: A queue that has the crane used in different tasks in the bridge construction
FrmCr: A queue for the form crew that is used in adjusting the forms
StlRf: A queue that represents the steel reinforcement used in reinforcing the bridge deck
LftgStl: Combi activity representing the process of lifting the steel bars used in reinforcement
BtmFrms: A queue that represents the bottom forms constructed by the form crews
Rebar: Combi activity for the reinforcement of the bridge deck
StlCrw: A queue representing the steel crew
Rnf1: A queue that represents the bottom reinforcement
CncrtArrvl: Normal activity that represents the arrival of concrete
Trk: Queue representing concrete trucks or containers
FllgPmps: Combi activity representing filling of the pumps before pouring starts
Cncrt1: Dummy queue that transfers the concrete resource
PrgCncrt1: Combi activity that represents the pouring of concrete in the web and bottom part of the girder (assuming a box section, which is usually the case)
PrdWb: A queue that represents the poured concrete section of the web of the bridge’s girder
Curing1: Normal activity for curing of the poured section, consuming time to attain the required characteristic strength and remove the formworks
CrtCr: A queue that represents the concrete crew
FnshdWb: A queue that represents the finished web section and the bottom of the girder
DsmntlgFrms: Combi activity of dismantling the forms for reuse in the top formworks of the girder
TpFrms: Dummy queue that holds the formworks until pouring of concrete starts
Dck: A queue that represents the deck formworks ready for pouring concrete
PrdDck: Queue representing the poured deck
FshdSpn: Queue representing the finished spans
Prstrg: Combi activity representing the prestressing of the post-tensioned steel
LwrgFrmWk: Combi activity that represents the lowering of formworks to move to the next span
MvgToNxtPr: Normal activity to move the gantry system to the next span
MvgBrkts: Normal activity to move the brackets supporting the gantry system to the next span
LwgGrdr: Normal activity to lower the gantry system to start a new segment
Sgmnt: An empty queue that represents the finished number of spans
OPTIMIZATION MODULE
Creating multiple replications of the model with different alternatives for the critical resources and running a simulation helps determine which resources affect the construction time. Critical resources that could affect the duration or cost of construction are the resources with the minimum average waiting time in the resource queues. The crane used in lifting machinery, equipment and resources can affect the simulation time; changing the remaining available resources would also affect the simulation, as discussed below.
Figure 2: Simulation time versus number of cranes using 1 gantry system. Figure 3: Simulation time versus number of cranes using 2 gantry systems.
Figure 2 shows how the simulation time decreases as the number of cranes increases while using different resources; this curve was plotted by changing the numbers of form, steel and concrete crews along with the number of cranes. The results were identical no matter how many resources were used, because no new segment can start until the previous segment is finished, since the gantry system is in use all the time. When two gantry systems were used, however, the simulation time decreased by a considerable amount, about 67 days, approximately 40% of the total duration, as can be seen in Figure 3.
The above shows that many alternatives are involved when a decision is made, and these alternatives may have an impact on the cost and simulation time. The total number of alternatives can yield a huge number of combinations; optimization was therefore used in an attempt to reduce processing time and improve the quality of solutions.
Evolutionary algorithms have been introduced during the past 10 years. In addition to various genetic algorithm improvements, recent developments in evolutionary algorithms include several techniques inspired by different natural processes. To optimize the choice among the different alternatives, the particle swarm optimization (PSO) algorithm is used. Particle swarm optimization was found to perform better than other evolutionary algorithms in terms of success rate and solution quality (Elbeltagi et al., 2005).
In PSO, each solution is a ‘bird’ in the flock and is referred to as a ‘particle’. A
particle is analogous to a chromosome (population member) in GAs. As opposed to
GAs, the evolutionary process in the PSO does not create new birds from parent ones.
Rather, the birds in the population only evolve their social behavior and accordingly
their movement towards a destination (Elbeltagi et al., 2005).
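A minimal PSO loop illustrating the velocity-and-position update just described is sketched below. It minimizes a toy sphere function and is not the authors' implementation; the inertia and acceleration coefficients are commonly used defaults, chosen here as assumptions.

```python
import random

random.seed(0)

def pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer minimizing f over dim dimensions."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # the flock's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity blends inertia, pull to own best, pull to flock best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# e.g. minimize the sphere function; the optimum is 0 at the origin
best, val = pso(lambda x: sum(v * v for v in x), dim=2)
```

In the paper's setting, each particle would instead encode a vector of resource alternatives and f would return the product of simulation time and total cost.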
The product of simulation time and total cost was set as the objective function to be optimized. Different particles were initiated to search for the optimum solution; the number of alternatives that were optimized can be seen in Table 2. Each particle represents a different combination of the resource alternatives that can affect the simulation time (the numbers of steel, concrete and form crews, and the number of cranes). Configurations with one and two gantry systems were used, and the effect on cost and simulation time was observed. In addition to using multiple gantry systems, additives were used to increase the rate of hardening of the girder. Given all the previously mentioned resources, cost and time were obtained and PSO was performed. Convergence was achieved and the Pareto optimal front was drawn, as shown in Figure 4.
Figure 4: PSO output, total cost (L.E.) versus simulation time (days), and the Pareto optimal frontier.
Figure 4 shows that the Pareto set varied between different alternatives. It was found that using 2 gantry systems is efficient when using 2 cranes, 2 form crews and 2 concrete crews in the presence of additives. Another solution was found to be effective when using only one gantry system but three each of the form, steel and concrete crews.
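Extracting the Pareto frontier from a set of (simulation time, total cost) alternatives reduces to a dominance check; the points below are illustrative, not the study's results.

```python
def pareto_front(points):
    """Keep the points not dominated in both objectives (lower is better)."""
    return sorted(p for p in points
                  if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                             for q in points))

# Hypothetical (simulation time in days, total cost in L.E.) alternatives
alts = [(70, 32e6), (90, 24e6), (100, 20e6), (120, 18e6), (150, 15e6), (160, 19e6)]
print(pareto_front(alts))  # (160, 19e6) is dominated by (150, 15e6) and drops out
```

Points surviving the check are exactly those on the frontier of Figure 4: no other alternative is simultaneously faster and cheaper.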
CONCLUSION
Flying shuttering is one of the newly introduced techniques in bridge construction, and it can be advantageous due to its construction speed. A framework was introduced to help contractors perform time-cost trade-off analysis to optimize resource utilization in the flying shuttering technique. A Pareto optimal frontier is introduced that would help contractors decide how many resources to use and see the effect of this decision on cost and time. The multi-objective optimization was performed to obtain the aforementioned Pareto front using particle swarm optimization (PSO), with an objective function defined as the product of the cost and the simulation time of construction. The PSO algorithm was coupled with the STROBOSCOPE simulation tool, which imitates the processes in reality. To account for the uncertainties in the durations of the tasks performed, the durations of some activities were defined as beta distributions varying between certain minimum and maximum values.
By performing multi-objective optimization, it was found that using additives and more than one gantry system would increase the cost of construction by approximately 65% while helping to decrease the construction duration by 55%.
References:
Elbeltagi, E. (2007). “Evolutionary Algorithms for Large Scale Optimization in Construction Management.” The Future Trends in the Project Management, Riyadh, KSA.
Elbeltagi, E., Hegazy, T., and Grierson, D. (2005). “Comparison among five evolutionary-based optimization algorithms.” Adv. Eng. Inf., 19(1), 43-53.
El-Rayes, K. and Kandil, A. (2005). “Time-Cost quality trade-off analysis for highway construction.” Journal of Construction Engineering and Management, 131(4), 477-486.
Essawy, Y. (2007). “Value Engineering in Bridge Deck Construction during the Conceptual Design Phase.” Master of Science in Construction Management thesis, The American University in Cairo.
Li, H., and Love, P. (1997). “Using improved genetic algorithms to facilitate time-cost optimization.” J. Constr. Eng. Manage., 123(3), 233-237.
Martínez, J. C. (1996). STROBOSCOPE: State and Resource Based Simulation of Construction Processes, Doctoral Dissertation, University of Michigan.
Marzouk, M. and Moselhi, O. (2004). “Multiobjective Optimization of Earthmoving Operations.” Journal of Construction Engineering and Management, 130(1), 105-113.
Marzouk, M., Said, H., El-Said, M. (2009). “Framework for Multiobjective Optimization of Launching Girder Bridges.” Journal of Construction Engineering and Management, 135(8), 791-800.
Que, B. C. (2002). “Incorporating practicality into genetic algorithms based time-cost optimization.” J. Constr. Eng. Manage., 128(2), 139-143.
Application of Dimension Reduction Techniques for Motion
Recognition: Construction Worker Behavior Monitoring
are unique; etc.) (Hendrickson 1998) and the high fatality and incident rates may result
from these characteristics. However, the unsafe behavior of workers on a construction
site also leads to injuries (Hinze 1997); previous studies state that about 80 to 90
percent of accidents are caused by unsafe acts rooted in employee behavior (Heinrich
et al. 1980; Helen and Rowlinson 2005). Measurement of worker behavior thus is a
way to assess safety management and can be used as a positive indicator to prevent
accidents (Levitt and Samelson 1987). By identifying and reducing unsafe behavior,
major and minor injuries could be reduced; this is based on the theory that
approximately one serious and ten minor injuries occur among 600 near-miss incidents
(Phimister et al. 2003; Bird and Germain 1996). Despite the importance of monitoring
worker behavior, however, it has not been applied actively to practical safety
management for the following reasons: (1) field observation is a time-consuming and
painstaking task (Levitt and Samelson 1987); (2) there is a lack of safety experts on-
site for behavior observation (Han et al. 2010); (3) traditional reporting systems for
unsafe behavior require the active participation of workers; and (4) current methods
have systemic issues, including how the observed results are analyzed and applied to
safety practices.
To address these limitations, the automated monitoring, analysis, and
visualization of worker behavior are proposed. In our scenario, workers are monitored
with video cameras installed on-site. Safe and unsafe poses are pre-defined and
utilized as templates. Worker behavior thus can be detected, analyzed, and visualized
in the shape of a human skeleton and its joints in a Virtual Reality (VR) environment.
In this paper, we focus on motion recognition that captures the motions predefined in
the templates. A dimension reduction technique is applied to analyze high dimensional
motion data (e.g., 78 dimensions in this paper). Using a dimension reduction technique
on a set of spatio-temporal motion segments, the human motion data are clustered and
generalized to recognize the same motions. The entire dataset is obtained from
experiments and then separated into training and testing datasets. A training dataset is
used to learn human motions (e.g., in a brick laying activity) and label each action
(e.g., mixing mortar, lifting a brick, stacking a brick). A testing dataset is then
projected into the same low-dimensional space to recognize those learned motions.
LITERATURE REVIEW
Human motion data are high dimensional. Dimensionality refers to the number of
features (i.e., variables) in the data. Motion datasets, with their high number of features,
thus contain underlying challenges regarding efficient and accurate data analysis.
Dimension reduction techniques, which identify important features, thus facilitate
efficient analysis, reducing computational time, decreasing the impact of noisy or
irrelevant features, and improving the resolution of similarity measures in lower
dimensions (Cunningham 2008). Data transformation converts high dimensional
data to lower dimensions. To understand this transformation, both linear (e.g.,
principal component analysis) and nonlinear (e.g., kernel principal component analysis,
semidefinite embedding, and minimum volume embedding) dimension reduction
techniques have been studied for this paper. Among linear techniques, principal
component analysis (PCA) is used widely and yields reasonably good results
(Carreira-Perpinan 1997). This technique maximizes the variance of original variables
in an interrelated dataset through linear mapping that identifies the uncorrelated and
ordered principal components (Jolliffe 2005). Linear principal components, however,
may not properly represent the nonlinear characteristics inherent in human motion data
(Jenkins and Mataric 2002). To address this limitation, nonlinear dimension reduction
techniques have been explored. Kernel PCA (Schölkopf et al. 1998) is a dominant
technique that uses kernel methods to reproduce data in a kernel induced feature space
through non-linear mapping (Cunningham 2008). In addition to kernel PCA, a number
of non-linear dimension reduction techniques have been suggested. For example,
semidefinite embedding (SDE) uses semidefinite programming for optimization to
learn a kernel matrix with preservation of the local distances (Weinberger et al. 2004).
Minimum volume embedding (MVE) uses semidefinite programming but optimizes
the eigenspectrum to maximize energy in lower dimensions (Shaw and Jebara 2007).
However, these techniques, which are based on an eigendecomposition, do not provide
a straightforward extension that can be applied to new testing samples. In this paper,
the kernel PCA thus is used as a preliminary study to reduce the dimensions of motion
data and recognize motions with new sample datasets.
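The out-of-sample extension that motivates the choice of kernel PCA here can be sketched with scikit-learn's KernelPCA, which retains the learned mapping so that new samples can be projected into the same space (a minimal sketch with random placeholder data, not the authors' implementation):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 78))  # placeholder for 78-dim motion frames
X_new = rng.normal(size=(50, 78))     # new, unseen frames

# Fit kernel PCA on training data; the learned mapping can then be
# reused to project new samples into the same low-dimensional space.
kpca = KernelPCA(n_components=2, kernel="poly", degree=3)
Z_train = kpca.fit_transform(X_train)
Z_new = kpca.transform(X_new)         # out-of-sample projection

print(Z_train.shape, Z_new.shape)     # (200, 2) (50, 2)
```

This out-of-sample `transform` is exactly what eigendecomposition-based methods such as SDE and MVE do not provide directly.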
DATA COLLECTION
In this study, human motion data were collected from the University of
Michigan (UM) 3D Lab using a Vicon motion capture system. Reflective markers
were attached to the joints of a human body and motions were recorded by eight
cameras that circled the performer. The resulting data includes three dimensional
locations for body joints moving over time and can be converted to the Biovision
hierarchical data (BVH) format. This format contains skeleton hierarchy information
and provides location and rotation information for body joints (e.g., joint rotation
angles and 3D joint positions); these are useful to define and analyze motions.
In the experiment, motions for a bricklaying activity were analyzed. Back
injuries are common in construction and the back injury rate for masonry workers is
the highest, about 1.6 times higher than the average for all construction workers
(CPWR 2008). Bricklaying typically consists of a sequence of seven actions (mixing
mortar, putting mortar on top of bricks, putting mortar on a side, lifting a brick,
carrying, stacking, and fastening) that are repetitive and require the lifting of heavy
objects—this is a major cause of back injuries. Figure 1 illustrates the activities in
order and shows snapshots of the data collected during the experiment. Out of about
23,000 frames of collected data, 1,877 and 6,000 frames were used as training and
testing datasets respectively. The training dataset was manually labeled according to
frame ranges for each action (e.g., mixing mortar, etc.) and then used to identify
specific actions within the testing dataset.
Figure 1: Human motion data for bricklaying with the sequence of actions
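As an illustration of how a 78-dimensional feature vector per frame might arise, assuming (hypothetically; the paper does not give the breakdown) 26 joints with 3 rotation channels each, a BVH-style frame can be flattened as follows:

```python
import numpy as np

# Hypothetical illustration: flatten per-joint rotation angles from a BVH-like
# structure into one feature vector per frame. 26 joints x 3 Euler angles = 78.
N_JOINTS = 26

def frame_to_features(joint_rotations):
    """joint_rotations: dict mapping joint name -> (rx, ry, rz) in degrees."""
    names = sorted(joint_rotations)  # fixed joint order across frames
    vec = np.array([a for n in names for a in joint_rotations[n]], dtype=float)
    return vec

frame = {f"joint{i:02d}": (0.0, 1.0, 2.0) for i in range(N_JOINTS)}
features = frame_to_features(frame)
print(features.shape)  # (78,)
```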
DATA ANALYSIS
The motion data used for training consists of 1,877 points with 78 dimensions.
Kernel PCA was applied to this data and the results were compared through cross
validation with various kernels and target dimensions to identify those that could be
easily visualized and be useful for motion recognition. As a result of the cross
validation, a polynomial kernel and two dimensions as target dimensions were
selected. This is shown in Figure 2. Using a kernel PCA technique, the training dataset
then was analyzed to learn the principal components and the coefficient for mapping
(left panel in Figure 2). Through this process, the eigenvectors with the largest
eigenvalue were selected and the data points were mapped into the eigenvector
coordinates (i.e., the x and y axes in Figure 2 represent the selected eigenvectors).
With the learnt information, manually labeled points for actions from the training data
could be mapped into the same space (right panel in Figure 2). This indicates that from
the first action (i.e., mixing mortar) to the last (i.e., fastening a brick), each point is
drawn consecutively over time. The training dataset with 78 dimensions also is
mapped into the two-dimensional space. In Figure 3, the first two eigenvectors from
the kernel (the x-axis) carry significantly high energy (i.e., large eigenvalues on
the y-axis), representing most of the dimensions and supporting the selection of two
target dimensions.
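The cross-validation described above can be sketched as scoring each kernel/dimension candidate by how well a simple classifier separates the labeled actions in the embedded space (our own sketch with synthetic data; the classifier and kernel settings are assumptions):

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 78))    # stand-in for the 1,877 training frames
y = rng.integers(0, 7, size=300)  # stand-in labels for the 7 actions

best = None
for kernel in ["linear", "poly", "rbf"]:
    for dims in [2, 3, 5]:
        Z = KernelPCA(n_components=dims, kernel=kernel).fit_transform(X)
        # Score: how well do the embedded points support action recognition?
        acc = cross_val_score(KNeighborsClassifier(5), Z, y, cv=3).mean()
        if best is None or acc > best[0]:
            best = (acc, kernel, dims)

print(best[1], best[2])  # selected kernel and target dimensionality
```

On the real motion data this selection yielded a polynomial kernel and two target dimensions; on the random data above the winner is arbitrary.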
Figure 2. Results of the kernel PCA for training data: motions (left) and marked
actions (right)
RESULTS
The objective of applying dimension reduction techniques for human motion
data in this paper is to recognize predefined motions from random samples.
Reconstruction of new sample points thus was conducted with testing datasets to
examine whether the new data could be properly projected in the space where training
datasets are mapped. As a testing dataset, 6,000 points were used and transformed into
two dimensions using the coefficient obtained from learning (left panel in Figure 4).
The right panel in Figure 4 clearly shows that the testing dataset can be accurately
projected onto the same space. The datasets contain the motions that the performer
repeatedly executed for the bricklaying activity. The overall flow of points over time
thus takes place in similar areas. As marked in the figure, each action can be
recognized by identifying regions near the trajectory of testing datasets.
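Recognizing actions by identifying regions near the trajectory can be approximated by assigning each projected test point the label of its nearest labeled training points in the two-dimensional space (a sketch with synthetic embeddings; the neighborhood rule is our assumption):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

ACTIONS = ["mix mortar", "mortar on top", "mortar on side",
           "lift brick", "carry", "stack", "fasten"]

rng = np.random.default_rng(2)
# Stand-ins for the 2-D embeddings of labeled training frames and test frames.
Z_train = rng.normal(size=(500, 2))
y_train = rng.integers(0, len(ACTIONS), size=500)
Z_test = rng.normal(size=(100, 2))

# Each test frame takes the majority label of its nearest training neighbors.
clf = KNeighborsClassifier(n_neighbors=15).fit(Z_train, y_train)
pred = clf.predict(Z_test)  # action index per test frame
print(ACTIONS[pred[0]])
```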
Figure 4: Result of kernel PCA for a testing dataset (left) and comparison with a
training dataset (right)
However, it is difficult to define standardized actions (e.g., mixing mortar,
lifting a brick, etc.) in practice. Actions vary from individual to individual and each
person’s motions can differ over time. In the experiment, the performer used similar
poses for the activity. However, the range of sample motions is distributed widely in
comparison to the training data. Furthermore, workers in the real world may utilize
actions that are not pre-defined (e.g., talking with a supervisor, shifting equipment,
etc.). It thus is not realistic that every possible action can be defined. To recognize
predefined motions accurately, more datasets therefore need to be applied in training
to identify potential areas of motion. As the regions of actions we want to monitor are
more accurately determined with data, many more sample datasets can be mapped into
the training coordinates and analyzed to recognize the motions systematically.
DISCUSSION
In this paper, the motion data representing masonry work was tested to
recognize motions during construction activities. Based on defined actions with
training datasets, similar motions in new testing datasets can be identified by mapping
into the same coordinates. The results show that unsafe actions can be detected
through the training data. For example, unsafe actions such as slip, loss of balance,
and bad posture while lifting heavy objects (e.g., bending one’s back rather than one’s
knees) can be defined and the poses captured. With the resulting information, it also is
possible to compute cycle times for an activity and the performing time for each pose
by calculating the time between actions. This assumes that workers may take
unsafe actions or make errors under production pressures, such as attempting to work
faster in order to increase productivity (Hollnagel and Woods 2006; Hinze 1997). This
information thus may prove useful in the investigation of the impact of such pressures
on safety. Moreover, the data provides useful information that can prevent injuries
related to the performance time of poses (e.g., back injuries are affected by carrying
time and trajectories which cause back strains). Motion data thus has a high potential
to provide fruitful information for safety management.
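Computing cycle times and per-pose performance times from a per-frame action labeling reduces to measuring run lengths of consecutive identical labels. The frame rate below is an assumption (the paper does not state one):

```python
from itertools import groupby

FPS = 120.0  # assumed capture rate; not stated in the paper

def action_durations(labels, fps=FPS):
    """Collapse a per-frame label sequence into (action, seconds) segments."""
    return [(a, len(list(g)) / fps) for a, g in groupby(labels)]

frames = ["mix"] * 240 + ["lift"] * 120 + ["carry"] * 360 + ["stack"] * 120
segments = action_durations(frames)
print(segments)   # [('mix', 2.0), ('lift', 1.0), ('carry', 3.0), ('stack', 1.0)]
cycle_time = sum(t for _, t in segments)
print(cycle_time) # 7.0 seconds for one pass through the sequence
```

Long carrying times, for example, could then be flagged as a back-injury risk factor.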
CONCLUSION
Dimension reduction techniques can be applied to monitor worker behavior on
construction sites. To analyze behavior, motions during an activity are divided into
specific actions and the actions are identified with training datasets using kernel PCA.
The results indicate that motion data can be used to recognize construction worker
motions with machine learning techniques. Testing data show similar behaviors over
time in the space into which the training data are transformed. Training with more datasets thus
can be used to statistically determine the potential regions of individual actions and
eventually lead to an improvement in the accuracy of motion recognition. By defining
unsafe actions, this technique can be useful in detecting the unsafe actions of workers
during their activities. The use of video cameras allows worker behavior to be
monitored automatically and constantly. Safety experts thus will not need to undertake
time-consuming tasks and the measured information can be used to reduce the
frequency of unsafe behavior and potentially reduce the number of accidents.
FUTURE WORK
In this study, kernel PCA with a polynomial kernel was applied for motion
recognition. However, there are a number of non-linear dimension reduction
techniques (e.g. Gaussian Process Dynamical Model) and various kernels (e.g.,
probability product kernel) which may provide better visualization and embedding
results for motion data. Further investigations will be carried out to compare these
techniques and kernels to identify those most reliable and applicable to construction
worker motion data. Since this paper focuses on motion recognition, all the datasets
were collected from the UM 3D Lab. In future studies, however, we plan to collect
datasets from a construction site using a motion capture system. Our project is
ongoing and our intent is to develop a markerless system to extract 3D human skeleton
information from images taken by multiple video cameras. With these samples, it will
be possible to identify numerous actions taken by construction workers. Training with
sufficient data will improve monitoring accuracy and the detection of worker actions
on-site.
ACKNOWLEDGEMENT
We would like to thank Chunxia Li, a PhD student at the University of
Michigan, for her help in collecting motion data. The work presented in this paper was
supported financially by two National Science Foundation Awards (No. CMMI/ITR-
0427089 and CMMI-0800500).
REFERENCES
Bird, F. E., and Germain, G. L. (1996). Practical loss control leadership, Det Norske
Veritas, Loganville, GA.
Bureau of Labor Statistics (2010). “Fatal injury rates, 2003–2008.” U.S.
Department of Labor, Washington, DC.
<http://www.bls.gov/iif/oshcfoi1.htm#rates> (Mar 2010).
Carreira-Perpinan, M. A. (1997). “A review of dimension reduction techniques.”
Technical report CS-96-09, Department of Computer Science, University of
Sheffield.
The Center for Construction Research and Training (CPWR) (2008). The Construction
Chart Book: The U.S. Construction Industry and Its Workers, The Center for
Construction Research and Training, Silver Spring, MD.
Cunningham, P. (2008). “Dimension reduction.” M. Cord and P. Cunningham
eds., Machine learning techniques for multimedia: case studies on organization
and retrieval, Springer, Berlin.
Han, S., Lee, S., and Peña-Mora, F. (2010). “Framework for a resilience system in
safety management: a simulation and visualization approach.” The International
Conference on Computing in Civil and Building Engineering (ICCCBE) 2010,
Nottingham, U.K., Jun 30 – July 2, 2010.
Heinrich, H. W., Petersen, D., and Roos, N. (1980). Industrial accident prevention,
McGraw-Hill, Inc., New York.
Helen, L., and Rowlinson, S. (2005). Occupational health and safety in construction
project management, Spon Press, London, pp. 157-158.
Hendrickson, C. (1998). Project management for construction: fundamental concepts
for owners, engineers, architects and builders, Prentice Hall, New Jersey.
Available from <http://pmbook.ce.cmu.edu>.
Hinze, J. (1997). Construction safety, Prentice Hall, Upper Saddle River, NJ, pp. 213–
215.
Hollnagel, E., and Woods, D. (2006). “Prologue: resilience engineering concepts.”
Hollnagel, E., Woods, D., and Leveson, N., eds., Resilience engineering: concepts
and precepts, Ashgate, Aldershot, United Kingdom, pp. 1-6.
Jenkins, O. C., and Mataric, M. J. (2002). “Deriving action and behavior primitives
from human motion data.” Proceedings of 2002 IEEE/RSJ international conference
on intelligent robots and systems (IROS-2002), Lausanne, Switzerland, Sept. 30 -
Oct. 4, 2002, 2551-2556.
Jolliffe, I. T. (2005). “Principal component analysis.” B. S. Everitt and D. C. Howell,
eds., Encyclopedia of Statistics in Behavioral Science, Wiley, New York, 3, 1580-1584.
Levitt, R. E., and Samelson, N. M. (1987). Construction safety management,
McGraw-Hill, New York.
Phimister, J. R., Oktem, U., Kleindorfer, P. R., and Kunreuther, H. (2003). “Near-miss
incident management in the chemical process industry.” Risk Analysis, 23(3).
Schölkopf, B., Smola, A., and Müller, K. (1998). “Nonlinear component analysis as a
kernel eigenvalue problem.” Neural Computation, 10, 1299-1319.
Shaw, B., and Jebara, T. (2007). “Minimum volume embedding.” JMLR W&P, 2, 460-
467.
Weinberger, K. Q., Sha, F., and Saul, L.K. (2004). “Learning a kernel matrix for
nonlinear dimensionality reduction.” Proceedings of the Twenty First International
Conference on Machine Learning (ICML-04), Banff, Canada, 839-846.
Civil and Environmental Engineering Challenges for Data Sensing and Analysis
ABSTRACT
The objective of this study was to identify challenges in civil and
environmental engineering that can potentially be solved using data sensing and
analysis research. The challenges were identified through an extensive literature review
across all disciplines of civil and environmental engineering. The literature review
included journal articles, reports, expert interviews, and magazine articles. The
challenges were ranked by comparing their impact on cost, time, quality, environment
and safety. The result of this literature review includes challenges such as improving
construction safety and productivity, improving roof safety, reducing building energy
consumption, solving traffic congestion, managing groundwater, mapping and
monitoring the underground, estimating sea conditions, and solving soil erosion
problems. These challenges suggest areas where researchers can apply data sensing
and analysis research.
INTRODUCTION
Though civil and environmental engineering is one of the oldest engineering
disciplines, its adoption of technology to improve practice has been slow.
With technological advances, readily available and cost-efficient tools can
be used to alleviate and solve some of the most exigent challenges faced by the civil
and environmental engineering (CEE) community. This study identifies challenges
across different areas within civil and environmental engineering that can possibly be
solved using data sensing and analysis as a technological tool. Data sensing and
analysis (DSA) involves the use of sensors such as radio frequency sensors and
cameras to collect data, such as spatiotemporal data, from the real world. These data
are then processed into meaningful information, and the knowledge gained from that
information supports various decisions.
METHODOLOGY
An exhaustive literature review including journal articles, conference
proceedings, magazine articles, expert interviews, and news articles was conducted to
find the challenges faced by the CEE community. The increases in required time and
cost, decreases in quality, injury and fatality statistics, and environmental impacts
attributable to each challenge were used first to rank the challenges
based on their impact within each discipline of CEE, and then as CEE challenges in
general. The number of metrics (cost, time, quality, safety, and environment) that
each challenge impacted, together with the magnitude of that impact, was used to
identify and rank the challenges. The challenges that impacted the highest number of
metrics were considered the most pressing and in need of immediate solution or
alleviation. Based on these ranks and their pertinence to CEE solutions, challenges
were selected to be further researched. Their applicability for DSA solutions in CEE
was then verified based on discussions with faculty and experts. These challenges are
presented in the next section.
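The ranking step can be sketched as ordering challenges by the number of metrics impacted, breaking ties by total magnitude (the scores below are illustrative only, not the study's actual ratings):

```python
# Hypothetical impact scores (0 = none .. 3 = severe) per metric; the actual
# ratings used in the study are not published in the paper.
impacts = {
    "traffic congestion":        {"cost": 3, "time": 3, "environment": 2},
    "construction site safety":  {"cost": 2, "time": 1, "safety": 3},
    "construction productivity": {"cost": 3, "time": 2},
}

def rank(impacts):
    # Primary key: number of metrics impacted; tie-break: total magnitude.
    return sorted(impacts,
                  key=lambda c: (len(impacts[c]), sum(impacts[c].values())),
                  reverse=True)

for i, challenge in enumerate(rank(impacts), 1):
    print(i, challenge)
# 1 traffic congestion
# 2 construction site safety
# 3 construction productivity
```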
RESULTS
The number in parentheses before the name of the challenge indicates its rank.
(1) Outwit traffic congestion: In 2007, U.S. citizens wasted 2.8 billion gallons of
fuel and 4.2 billion hours, amounting to $87.2 billion in extra time and cost across
439 urban areas (Schrank and Lomax, 2009). The National Academy of Engineering identified
“Improvement of Transportation Systems” as a grand challenge (National Academy
of Engineering, 2008). Traffic congestion is a problem that needs to be addressed
because it wastes resources and adds expenses that burden individuals and businesses.
To help resolve the traffic congestion problem in roads, railways, seaports, and
airports, it is necessary to gain an insight into the existing conditions, the behavior of
the public in congestion, and traffic flow. Traffic monitoring, traffic flow analysis,
traffic volume and speed data, driver behavior, and traffic management in general can
help to understand these issues. Optimization and improvement of transportation
modes require extensive research for modeling the system. DSA can help to gather
good-quality data for transportation models used in planning and analysis.
Data collected from sensors can be used to draw inferences about current
practices, needs and planning for the future. Traffic flow analysis requires trajectories
to analyze the flow of traffic. Data can be used to create trajectory information for
this purpose to formulate models with realistic data and to understand driver behavior
in congestion. Installation of sensors at traffic lights throughout a city can help to
determine the number of cars that cross a traffic light per day on average and origin-
destination data. This average can be taken into account by transportation planners
while developing and optimizing transportation. Overall, DSA can help in the
evaluation phase of decision-making and to measure the effectiveness of the solution.
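The suggested sensor-based counting at traffic lights can be sketched as a simple aggregation of detector logs into average daily crossings per intersection (the records below are made up):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical detector log: (intersection_id, day, vehicles_counted).
log = [
    ("I-01", "2011-03-01", 11250), ("I-01", "2011-03-02", 11890),
    ("I-02", "2011-03-01", 8400),  ("I-02", "2011-03-02", 8150),
]

per_site = defaultdict(list)
for site, _day, count in log:
    per_site[site].append(count)

# Average daily crossings per intersection, as input to planning models.
for site, counts in sorted(per_site.items()):
    print(site, mean(counts))
# I-01 11570
# I-02 8275
```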
(2) Enhance Construction Site Safety: Accidents on construction sites accounted
for a preliminary count of 816 deaths in 2009, or 19% of all work-related deaths
(Bureau of Labor Statistics, U.S. Department of Labor 2010), even though
construction employs only 7% of the total U.S. workforce (Bureau of Labor Statistics,
U.S. Department of Labor 2009). Research in construction safety reported that
construction-related injuries cost $4 billion for fatal injuries and $7 billion for non-fatal
accidents involving days away from work in 2002 (Waehrer et al. 2007). These numbers
indicate that there is a great danger to life on a construction site. Accidents that result
in fatal or nonfatal incidents affect not only the person involved but also his/her
family dependents. Therefore, enhancing construction safety to have an accident-free
jobsite is of utmost importance and is a top priority on construction sites.
DSA can help locate dangerous activities and then find the distance between a
piece of heavy equipment or a trench and a construction worker in order to alert personnel.
Examples of research in this area include the use of 3D range cameras, RFID
technology, laser sensors, ad hoc wireless network, and development of obstacle free
paths. However, most of these methods are in preliminary stages of development and
need future work such as improvement, validation, implementation on construction
sites, and cost-benefit analysis. DSA applications can be used to inform project
managers of impending accidents by analyzing site conditions to take appropriate
action. For example, if a worker is not wearing a hard hat and a safety vest or if the
worker is not tied off when working at a height, the project manager can immediately
make sure that the worker wears a hardhat and vest or is tied off. Other examples
include creating algorithms for crane safety and trench cave-in safety measures.
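The worker-to-equipment proximity alert described above can be sketched as a distance check against a safety radius (the threshold and positions are hypothetical):

```python
import math

SAFE_RADIUS_M = 5.0  # hypothetical alert threshold

def proximity_alert(worker_xy, hazard_xy, radius=SAFE_RADIUS_M):
    """Return True when a worker is within the hazard's safety radius."""
    d = math.dist(worker_xy, hazard_xy)
    return d < radius

print(proximity_alert((0.0, 0.0), (3.0, 4.0)))  # distance 5.0 -> False
print(proximity_alert((0.0, 0.0), (2.0, 2.0)))  # distance ~2.83 -> True
```

In practice the positions would come from RFID, laser, or camera-based tracking, and the radius would depend on the equipment type.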
(3) Improve Construction Productivity: 4.3% of the U.S. gross domestic product
(GDP), or $519 billion in 2009, is generated by the construction industry (Bureau of
Economic Analysis, U.S. Department of Commerce 2010). Construction productivity
(a measure of output per unit of
input) directly influences construction industry output. Research shows that
construction productivity has been declining since the 1960s and has fallen behind
other industries such as manufacturing (Dyer and Goodrum, 2009). DSA can
potentially help to generate new knowledge by abstracting tacit knowledge into
representative data or by building an integrated database where construction
companies share the knowledge. DSA can help to preserve and transfer knowledge to
young workers. Current knowledge management research is working on creating
network structures to transfer explicit knowledge to new workers. Some AEC firms
have been successful at collecting and storing explicit information in enterprise
databases (Woo et al. 2003). These databases can provide a foundation where new
methods and technologies in CEE could be incorporated to generate new knowledge.
DSA can help to manage construction materials, which can also improve
construction productivity. Engineers and researchers have been using RFID or GIS
to track materials, which has led to an increase in craft productivity. By
automatically collecting data with sensing devices on equipment and post-processing
the data, workers can find and flag components more easily; the time
to track components was reduced from 36.8 min to 4.56 min (Grau et al. 2009).
Another issue is transfer of documented data (paper and electronic) into useful
information that can be analyzed and used for site management. There is no well-
defined automated mechanism to extract, preprocess, and analyze data and
summarize the results so that site managers can use them.
(4) Monitor the health of infrastructure: An average grade of ‘D’ in the 2009 report
card for America’s Infrastructure reveals the poor condition of the infrastructure
(American Society of Civil Engineers, 2009). Monitoring the condition of this failing
infrastructure to make appropriate decisions regarding improvement or replacement is
fundamental for the maintenance and enhancement of infrastructure.
DSA can help through the installation of wireless sensors on bridges and
roads to collect data that can be analyzed without any subjectivity and give a verdict
on the health of the structures. Research is being conducted to mimic nature by
replicating the crawling capabilities of a gecko to provide mobile sensor networks.
Quality assessment of concrete columns also has been studied. However, there still is
a wide scope for the data sensing and analysis community to make the technology more
robust and readily available by (1) developing mobile sensors that can maneuver real
world structures and detect damage; and (2) applying data sensing and analysis to
existing infrastructure and embedding sensors into new infrastructure during
construction. Using machine-learning techniques, the data can be analyzed to provide
both qualitative and quantitative results so that the authorities can decide the required
plan of action (regular maintenance or replacement) for infrastructure systems.
(5) Map and Monitor the Sub-surface: Most of the infrastructure in the United
States including the internet, sewage, water lines, and electrical conduits is buried
underground. As noted by the Grand Challenges for Engineering developed by the
National Academy of Engineers, “one major challenge will be to devise methods for
mapping and labeling buried infrastructure, both to assist in improving it and to help
avoid damaging it” (National Academy of Engineering, 2008). The mining industry
and geotechnical engineering could also benefit from knowing the substructure, to
avoid accidents and deaths in mines and to produce better geotechnical reports,
respectively. Currently, underground assets can be mapped using geophysical
techniques such as Ground Penetrating Radar (GPR), which applies only to utilities,
while instruments such as geophones, pore pressure transducers, and accelerometers,
traditionally wired acquisition systems, are used for geotechnical measurements.
DSA applications can include electromagnetic waves being reflected by
metallic surfaces. This idea is being used in the United Kingdom in an attempt to
locate buried metallic pipes. Sensors can be installed on buried infrastructure during
installation or maintenance. Radio frequency sensors and Geographic Information
System (GIS) can potentially be used to collect information regarding the subsurface.
Grain size is a fundamental property of a soil that governs its shear
strength, compressibility, and hydraulic conductivity. DSA tools can be applied to
“see” the soil and determine its grain size more easily and quickly. DSA can be
applied inside mines to analyze safety conditions. Researchers believe robotics to be
a promising technology to replace humans in mines.
(6) Improve Building Energy Efficiency: Buildings accounted for 38.9% of U.S.
energy consumption in 2006, a share expected to reach 42.4% by 2030. Energy
consumed by residential buildings cost $225.6 billion, and by commercial buildings
$392.2 billion (D&R International, 2009). Improving building energy efficiency is
one of the most cost-effective ways to address the energy crisis, global warming, and
air pollution, and to reduce demand for fossil fuels and stabilize energy prices.
A common discrepancy exists between the intended and actual building
energy consumption. Currently, it is hard to monitor how energy consumption is
distributed among individual end-users within a building. For example, the number
of free riders that exist in an energy system is still determined by surveying
participants or appliance retailers.
Most research on improving energy efficiency is limited to buildings of certain
sizes and types. To further improve building energy efficiency, there is an urgent
need to develop an integrated database and analysis framework that can cover a larger range
of building types, locations, and design details. The National Institute of Standards
and Technology is working on expanding the database by adding more detailed data
and specifying the framework by incorporating elements such as environmental flow
estimates and building temporal efficiency deterioration.
(7) Reduce Soil Erosion: According to Global Assessment of Human-induced Soil
Degradation, around 15% of the Earth’s ice-free land surface is affected by soil erosion.
Of the accelerated erosion, water is responsible for 56%, wind for 28%, and chemical
and physical deterioration for 16%.
DSA can help to inspect soil conditions, can potentially be used to analyze the
non-linear behavior of rainfall, and can help to predict the rates of water-induced
erosion so that appropriate actions can be taken. DSA can also help to sense the
three-dimensional geometry of waves so that the land area affected by salt water can
be predicted. DSA was also used to quantitatively evaluate wave-induced erosion on the
flood side of the Mississippi River Gulf Outlet spoil bank (Storesund et al. 2010).
However, this technology is now mostly applied at a laboratory level and has not
been applied at a large scale.
More specific data reflecting dynamic soil conditions across spatial
dimensions are still needed but remain unavailable for research. Current
experimental results are usually obtained at the laboratory scale and represent soil
conditions at a certain point in time. Building a systematic analysis framework for
inspecting soil conditions therefore remains a challenge.
(8) Manage Ground Water: According to the U.S. Geological Survey (USGS), 50%
of the drinking water comes from ground water. In the High Plains of the United
States, in states such as Nebraska, Colorado, and Texas, the water level has been
declining for the past 30 years. Contamination of groundwater is another threat: of
the 33 drinking ground water samples tested by the USGS, 15% exceeded the nitrate
limit (Kolpin et al. 2002). Traditional municipal wastewater-treatment technology is
not designed to effectively remove pesticides, chemicals, and pharmaceuticals
entering the system. Inspection and reduction of contamination in groundwater
remain a challenge.
DSA can potentially provide a broad picture of national water availability. The general assessment of underground water, called a water budget, is based on collecting and analyzing data on the inflows, outflows, and changes in storage of the whole ground-water system. Water recharge refers to the process by which water moves downward from the surface to ground water. It is an important factor affecting the water budget, and because it occurs irregularly and continuously in space and time, the recharge rate is difficult to estimate accurately. With proper data sensing and analysis, however, the recharge amount can be determined as the residual term of the water budget. Engineers are also working to solve this problem by measuring the subtle changes in gravity detected by gravimeters: aquifer storage can be inferred from analysis of such microgravity data.
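The residual calculation described above can be sketched as a simple mass balance; the function name and the numeric values below are illustrative assumptions, not data from the paper:

```python
def recharge_as_residual(inflows_mm, outflows_mm, storage_change_mm):
    """Estimate recharge as the residual term of a ground-water budget.

    Water balance: inflows - outflows = change in storage, so the
    unmeasured recharge term is whatever closes the balance.
    All terms are in mm of water over the same period and area.
    """
    measured_net = sum(inflows_mm) - sum(outflows_mm)
    return storage_change_mm - measured_net

# Hypothetical numbers: storage rose 12 mm (e.g., inferred from
# microgravity data) while the measured net inflow was only 7 mm,
# implying ~5 mm of recharge was needed to close the budget.
recharge = recharge_as_residual(inflows_mm=[30.0], outflows_mm=[23.0],
                                storage_change_mm=12.0)
print(recharge)  # 5.0
```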
The lack of a nationwide, comprehensive, consistent, and up-to-date database and of an integrated analysis framework for water availability directly leads to a lack of indicators representing the status and trends in storage volumes, flow rates, and uses of water nationwide. Underground water systems have a long-term equilibrium between inflows and outflows. Sensors used for underground inspections tend to be
COMPUTING IN CIVIL ENGINEERING 115
CONCLUSION
This paper reported challenges across all disciplines within CEE that can be aided by the use of DSA methods. These challenges were identified based on their impact on cost, quality, time, safety, and the environment. They include improving construction productivity, enhancing construction site safety, outwitting traffic congestion, monitoring the health of infrastructure, mapping and monitoring the subsurface, improving building energy efficiency, maintaining roofs efficiently, reducing soil erosion, estimating sea levels, and managing ground water. Due to page limits, complete evidence regarding the severity of the challenges could not be presented. Different DSA methods that can be employed to gather information about each problem, support better decisions, or help solve or alleviate the problem were also suggested. These challenges can help researchers define their research directions, and the suggested methods can potentially be used to address the problems faced by the CEE community.
ACKNOWLEDGEMENTS
This work was financially supported by the American Society of Civil
Engineers through the Technical Council on Computing and Information Technology
Council’s Data Sensing and Analysis Committee.
REFERENCES
American Society of Civil Engineers (2009) “Report card for America’s
infrastructure” <http://www.infrastructurereportcard.org> (Nov. 21, 2010)
Bureau of Economic Analysis, U.S. Department of Commerce (2010) “National
Income and Product Accounts”
Bureau of Labor Statistics, U.S. Department of Labor (2009) “Household Data
Annual Averages - Employed persons by industry, sex, race, and
occupation” <http://www.bls.gov/cps/cpsaat17.pdf> (Nov. 21, 2010)
Bureau of Labor Statistics, U.S. Department of Labor. (2010). “National Census
of Fatal Occupational Injuries in 2009 (Preliminary
Results)” <http://www.bls.gov/news.release/pdf/cfoi.pdf> (Nov. 21, 2010)
Coffelt, D. P., Hendrickson, C. T., and Healey, S. T. (2010). “Inspection, condition
assessment, and management decisions for commercial roof systems.” Journal
of Architectural Engineering, 16(3), 94-99. Retrieved from
http://dx.doi.org/10.1061/(ASCE)AE.1943-5568.0000014
Dyer, B. D., and Goodrum, P. M. (2009). “Construction industry productivity: Omitted
quality characteristics in construction price indices.” Paper presented at the
2009 Construction Research Congress - Building a Sustainable Future, April 5-7,
2009, 121-130. Retrieved from http://dx.doi.org/10.1061/41020(339)13
D&R International, L. (2009). “2008 buildings energy data book” Retrieved from
http://buildingsdatabook.eren.doe.gov/docs%5CDataBooks%5C2008_BEDB_U
pdated.pdf
Grau, D., Caldas, C. H., Haas, C. T., Goodrum, P. M., and Gong, J. (2009). “Assessing
the impact of materials tracking technologies on construction craft
productivity.” Automation in Construction, 18(7), 903-911.
Huang, P., Mirmiran, A., Chowdhury, A. G., Abishdid, C., & Wang, T. (2009).
Performance of roof tiles under simulated hurricane impact. Journal of
Architectural Engineering, 15(1), 26-34.
Kolpin, D. W., Furlong, E. T., Meyer, M. T., Thurman, E. M., Zaugg, S. D., Barber,
L. B., et al. (2002). “Pharmaceuticals, hormones, and other organic wastewater
contaminants in U.S. streams, 1999-2000: A national reconnaissance”.
Environmental Science and Technology; American Chemical Society, 36(6),
1202-1211.
Liu, J. C., Lence, B. J., and Isaacson, M. (2010). “Direct joint probability method for
estimating extreme sea levels.” Journal of Waterway, Port, Coastal and Ocean
Engineering, 136(1), 66-76. Retrieved from
http://dx.doi.org/10.1061/(ASCE)0733-950X(2010)136:1(66)
Miami-Dade County Building Code Compliance Office (MDC-BCCO). (2006).
“Post Hurricane Wilma progress assessment.” Miami.
National Academy of Engineering (2008) “Grand Challenges for Engineering”
<http://www.engineeringchallenges.org/Object.File/Master/11/574/Grand%20C
hallenges%20final%20book.pdf> (Nov. 21, 2010)
Schrank, D. and Lomax, T. (2009). “2009 Urban Mobility Report”
<http://tti.tamu.edu/documents/mobility_report_2009_wappx.pdf> (Nov. 21,
2010)
Scotto, M. G., Alonso, A. M., and Barbosa, S. M. (2010). “Clustering time series of sea
levels: Extreme value approach.” Journal of Waterway, Port, Coastal, and
Ocean Engineering, 136(4), 215-225.
Storesund, R., Bea, R. G., and Huang, Y. (2010). “Simulated wave-induced erosion of
the Mississippi River-Gulf Outlet levees during Hurricane Katrina.” Journal of
Waterway, Port, Coastal and Ocean Engineering, 136(3), 177-189. Retrieved
from http://dx.doi.org/10.1061/(ASCE)WW.1943-5460.0000033
Waehrer, G.M., Dong, X.S., Miller, T., Haile, E., Men, Y. (2007). “Costs of
occupational injuries in construction in the United States”, Accident Analysis &
Prevention, Vol. 39, Issue 6, Pg 1258-1266
Woo J.H., Clayton M.J., Johnson R.E., Flores B.E., Ellis C. (2003). “Dynamic
Knowledge Map: reusing experts' tacit knowledge in the AEC industry”,
Automation in Construction, Vol. 13, Issue 2, Pg 203-207
Automated 3D Structure Inference of Civil Infrastructure Using a Stereo
Camera Set
H. Fathi1, I. Brilakis2 and P. Vela3
1 Construction IT Lab, School of Civil and Environmental Engineering, Georgia Institute of
Technology; Phone: (404)713-3667; Email: ha_fathi@gatech.edu
2 Assistant Professor, School of Civil and Environmental Engineering, Georgia Institute of
Technology; Phone: (404)894-9881; Fax: (404)894-1641; Email: brilakis@gatech.edu
3 Assistant Professor, School of Electrical and Computer Engineering, Georgia Institute of
Technology; Phone: (404)894-8749; Fax: (404)894-5935; Email: pvela@gatech.edu
Keywords: Spatial data collection; Infrastructure; Stereo vision; Videogrammetry.
Abstract:
Commercial far-range (>10 m) infrastructure spatial data collection methods are
not fully automated. They require a significant amount of manual post-processing,
and in some cases the equipment costs are high. This paper presents a method that
forms the first step of a stereo videogrammetric framework and holds promise for
addressing these issues. Under this method, video streams are initially collected
from a calibrated set of two video cameras. For each pair of simultaneous video
frames, visual feature points are detected and their spatial coordinates are then
computed. The result, in the form of a sparse 3D point cloud, is the basis for the
next steps in the framework (i.e., camera motion estimation and dense 3D
reconstruction). A data set collected from an ongoing infrastructure project is used
to show the merits of the method. A comparison with existing tools is also
presented to indicate how the proposed method differs in level of automation and
accuracy of results.
1. Introduction
Spatial data can be used to infer required information on the current state and/or
condition of civil infrastructure and make optimal decisions at various stages of the
infrastructure’s life cycle. It can assist constructors, facility managers and inspectors
to design the site layout more efficiently, assess on-site 3D status of the project and
collect information for health monitoring of built structures. A number of
infrastructure spatial data collection techniques and 3D reconstruction
methodologies are commonly used today; however, current practice lacks a solution
that is accurate, automated, and cost efficient at the same time.
Videogrammetry, the process of measuring coordinates of object points from two or
more video frames captured by camcorders (Zhu and Brilakis, 2009), is a promising
area of research which is potentially able to address the limitations of the available
methods. A videogrammetric method needs little human intervention and can provide
a high degree of automation. Low equipment cost is another advantage since the
method only needs off-the-shelf cameras.
Considering a set of two calibrated cameras, this paper presents an automated
and robust method for the first step of progressive infrastructure modeling using
videogrammetry, which is ongoing research. In this step, the 3D coordinates of
visual feature points on the infrastructure are calculated, and the outcome is
presented in the form of a 3D point cloud.
As the input data, the proposed method uses two video frames (i.e., left and right
view) captured at the same time by a set of two calibrated cameras. Speeded-Up
Robust Features (SURF) (Bay et al., 2006) are used to detect the location of the
distinctive features. SURF also encapsulates the descriptive information of each
feature in the form of vectors in a multi-dimensional space. The descriptor vectors are
then used to automatically match the feature points between two frames.
Mathematical point matching constraints and the RANdom SAmple Consensus
(RANSAC) algorithm (Fischler and Bolles, 1981) are used to discard mismatches.
Given the point correspondences and the constraints, corrected correspondences are
calculated using an optimal correction algorithm (Kanatani et al., 2008) such that
geometric error is minimized. Finally, the structure information of the scene (i.e.,
sparse 3D point cloud) is calculated using triangulation.
The proposed method is implemented using Microsoft Visual C# and EmguCV (a
.Net wrapper to the Intel OpenCV library). A set of stereo video frames, collected
from an ongoing infrastructure project, is used to validate the accuracy of the results.
Spatial distances between randomly selected features are used for this purpose; in the
evaluation, tape measurements are taken as the actual distances. Using the 95% limits
of agreement method (Bland and Altman, 1986), the results for the typical range of
infrastructure mapping indicate that a point cloud-based measurement can differ from
its corresponding tape measurement by -44.1 to 53.5 mm.
2. Background
This section first reviews the existing technologies used in practice for spatial data
collection of infrastructure, and then presents the state of the research to show the
latest efforts in this field.
2.1. Remote Spatial Sensing of Infrastructure
The current practice in the Architecture, Engineering, Construction and Facilities
Management (AEC/FM) industry is to use remote spatial sensing methods to collect
spatial data. Remote spatial sensors for far-range (>10m) spatial data acquisition are
generally categorized into two classes: active (e.g., terrestrial laser scanner) and
passive (e.g., photogrammetry and videogrammetry) sensors.
Terrestrial laser scanners can provide tens of thousands of measurements per second
with millimeter-level accuracy (Tang et al., 2009), and they can maintain accuracy on
the order of a few millimeters even for objects hundreds of meters away. However, at
spatial discontinuities (e.g., object edges), the scanned data contain inaccurate points
known as mixed pixels (Tang et al., 2009). Moreover, at the ranges required for
infrastructure mapping, data collection must be performed in several steps, and the
individual point clouds must be merged to create the overall result. The main
limitation of laser scanning, however, lies in its high
equipment cost. A laser scanner, appropriate for the measurement range required in
civil infrastructure, can cost tens of thousands of dollars.
In contrast, photogrammetry is the process of measuring the properties of real world
objects from digital images. Photogrammetry requires a two-step procedure to
provide spatial information. First, the site engineer has to shoot the right source
photos as the input data. After the collection of all photos, 3D object point
coordinates are calculated through some post-processing stages. At least two images
of an object from different views are needed to calculate the depth value. A number
of commercial photogrammetry software packages, such as ImageModeler and
PhotoModeler, are now available. They make it possible to take measurements in
images and create photo-realistic 3D models. However, they have limitations as
well. The user must supply the information the software requires to derive the 3D
position, orientation, focal length, and distortion of the camera (ImageModeler,
2009), namely 2D points in the images that correspond to the same points in space.
In general, a high level of human intervention is required in several steps
(Zhu and Brilakis, 2009).
2.2. Related Work
Remote spatial sensing and its applications to infrastructure have been an active
research topic in recent years. Kim et al. (2005) acquire on-site spatial
information using a targeted laser range finder to generate a sparse 3D point cloud.
This cloud helps create a 3D workspace model, which is then used for various safety-
enhancement applications such as obstacle avoidance. Akinci et al. (2006) plan a
process for active quality control of construction sites (i.e., identifying defects early
in construction) using laser scanners to collect the spatial data. The method uses a
number of commercially available software packages in the modeling process and
needs human intervention in some stages.
A number of vision-based technologies have also been presented for 3D point cloud
generation of civil infrastructure. On-site digital images were used by Memon et
al. (2005) to create 3D models of the structural elements presented in 3D CAD
drawings. In this semi-automated approach, the 3D model of a specific object is
generated using commercial photogrammetry software. Structure from Motion (SfM)
techniques were used by Golparvar-Fard et al. (2009) to extract sparse 3D data from
daily progress photographs of construction sites, which helps compare as-built and
as-planned construction by superimposing the sparse 3D data over as-planned 4D
models. The accuracy of a photogrammetric method is evaluated by Dai and Lu
(2010) for generating 3D models of building components. First, each object's spatial
data are acquired from a set of digital images using commercial photogrammetry
software; a high degree of human intervention is necessary in this step for point
matching and data smoothing. The 3D model is then generated only up to a scale, so
the length of a reference line is required to convert it to a metric reconstruction.
Son and Kim (2010) use video streams as the input for 3D data
acquisition and 3D structural component recognition. A trinocular stereo camera is
used in conjunction with its available software to acquire 3D data. This type of
camera generates rectified images and hence the search for corresponding points only
needs to be performed along the scanline which significantly simplifies the problem.
3. Methodology
The goal of the proposed method is to automatically generate a sparse 3D point cloud
of infrastructure scenes using a stereo set of video frames collected by a set of two
calibrated cameras. Fig. 1 shows an overview of the method. The output can be used
for camera ego-motion estimation and dense 3D reconstruction of the infrastructure
scene.
Moreover, the slightly smaller ratio of correct matches reported by Bauer et al. (2007)
can be compensated for by using the RANSAC algorithm to discard mismatches.
Given the feature set in each video frame, reliable correspondences are found by
comparing individual features from one set with the features in the other set. The
matching is based on the Euclidean distance between descriptors. The constraint
described by Lowe (2004) is used to discard candidate matches that are not reliable.
While this constraint significantly increases the ratio of correct matches, remaining
outliers are further discarded using the RANSAC algorithm. The method employs the
normalized 8-point algorithm (Hartley, 1997) to estimate the fundamental matrix as
the mathematical model for RANSAC. The fundamental matrix describes the epipolar
geometry between two images: it maps a selected point in one image to an epipolar
line in the other image, thus projecting a point onto a line. Once the mathematical
model is established, the consensus is the number of pairs in the data set that fit the
model, and the model with the maximum consensus is finally used to discard
mismatches.
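The consensus logic described above can be sketched generically. The paper's model is a fundamental matrix fitted by the normalized 8-point algorithm; for brevity, this sketch plugs a simple two-point line model into the same skeleton, and the function names and toy data are illustrative, not from the paper:

```python
import random

def ransac(data, fit, error, n_sample, threshold, iterations=200, seed=0):
    """Generic RANSAC: repeatedly fit a model to a minimal random sample
    and keep the model whose consensus set (inliers) is largest."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iterations):
        model = fit(rng.sample(data, n_sample))
        inliers = [d for d in data if error(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy stand-in model: a 2D line y = a*x + b fitted from two points
# (the paper instead fits a fundamental matrix from eight point pairs).
def fit_line(pts):
    (x1, y1), (x2, y2) = pts
    if x1 == x2:
        return None                      # degenerate sample
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

def line_error(model, pt):
    if model is None:
        return float("inf")
    a, b = model
    return abs(pt[1] - (a * pt[0] + b))

points = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -5)]  # 2 outliers
model, inliers = ransac(points, fit_line, line_error, n_sample=2, threshold=0.5)
print(model, len(inliers))  # (2.0, 1.0) 10
```

The two gross outliers never enter the maximum consensus set, which is how mismatched feature pairs are discarded in the paper's pipeline.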
The next step is to displace the location of the points in each matched pair so as to
satisfy the epipolar constraint x′ᵀFx = 0, where x is a point in the first view, F is the
fundamental matrix, and x′ᵀ is the transpose of the corresponding point in the second
view. The optimal correction algorithm (Kanatani et al., 2008) is used to find the
minimum displacement based on geometric error minimization.
Finally, spatial coordinates of the 2D points in stereo frames are estimated using
triangulation. For the given set of corresponding points, the output can be represented
in the form of a sparse 3D point cloud.
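The triangulation step can be sketched with the standard linear (DLT) method, one common way to implement it; the paper does not specify its triangulation algorithm, and the camera matrices and 3D point below are hypothetical:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: each image point x = (u, v) and its
    3x4 camera matrix P contribute two rows of A; the homogeneous 3D
    point is the null vector of A (smallest singular vector)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                  # dehomogenize

# Hypothetical rectified stereo pair: identical intrinsics, the right
# camera translated by a 0.2 m baseline along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])

X_true = np.array([0.5, -0.3, 4.0])      # a point 4 m in front of the rig
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(np.round(triangulate(P1, P2, x1, x2), 6))  # ≈ [ 0.5 -0.3  4. ]
```

With noise-free correspondences the back-projected rays meet exactly; with real, corrected correspondences the SVD solution minimizes algebraic error.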
4. Experimental Results
A set of stereo video streams was collected from the Clough Undergraduate
Learning Commons project of Turner Construction at the Georgia Tech campus using
a calibrated set of Microsoft LifeCam NX-6000 notebook web cameras (Fig. 2). The
resolution of the video streams was set to 1600×1200 pixels. Prior to data collection,
the camera set was calibrated using Bouguet's stereo camera calibration toolbox
(Bouguet, 2004). A set of 28 stereo video frames was selected randomly to verify the
accuracy of the method with adequate statistical significance of the results. SURF
features were then extracted and 64-dimensional feature descriptors were calculated
for each frame in the database. Fig. 3 demonstrates the result of feature extraction for
one of the frames.
Fig. 4. Sample matched feature points. (a) Sample correct matches; (b) Sample
incorrect matches discarded by the RANSAC algorithm
Once the 2D locations of the point pairs are corrected using geometric error
minimization, their corresponding back-projected rays meet in space. The 3D
coordinates of the matched feature points can then be estimated via triangulation,
which generates a sparse 3D point cloud for each pair of stereo frames.
In order to evaluate the accuracy of the point clouds, on-site tape measurements
between the spatial locations of randomly selected feature points were compared with
the distances between the corresponding points in the generated point cloud. In this
comparison, the minimum sample size required for a 95% confidence level and a ±10%
confidence interval is 96. Since there are 28 stereo frames in the database, 4 random
samples were selected from each pair, yielding 112 samples. The samples were
selected from points having a depth value between 15 m and 20 m. The 95% limits of
agreement method was used to assess the agreement between the tape and point
cloud-based measurements. The obtained results indicate that the mean of the
differences is 4.7 mm and the standard deviation is 24.9 mm. Therefore, the lower and
upper limits in the 95% limits of agreement method are calculated as -44.1 and
53.5 mm. This implies that, with 95% confidence, a point cloud-based measurement
would differ from the corresponding tape measurement by no less than -44.1 mm and
no more than 53.5 mm at a depth of 15 to 20 m.
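The limits-of-agreement arithmetic above can be checked directly. The raw 112 differences are not given in the paper, so the sketch below defines the general computation and then reuses only the reported summary statistics (mean 4.7 mm, SD 24.9 mm):

```python
import statistics

def limits_of_agreement(differences):
    """Bland-Altman 95% limits of agreement: mean difference +/- 1.96 SD."""
    m = statistics.mean(differences)
    s = statistics.stdev(differences)    # sample standard deviation
    return m - 1.96 * s, m + 1.96 * s

# Reproducing the paper's reported limits from its summary statistics:
mean_d, sd_d = 4.7, 24.9
lower, upper = mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d
print(round(lower, 1), round(upper, 1))  # -44.1 53.5
```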
5. Conclusion
This paper presented a method for sparse 3D point cloud generation of an
infrastructure scene, the first step of a general videogrammetric framework
for remote spatial sensing of civil infrastructure. The 3D coordinates of
corresponding SURF feature points in a stereo pair of video frames were calculated
to form a point cloud. This sparse point cloud will be used for camera motion
recovery and dense 3D reconstruction of the infrastructure. The general framework,
upon success, can fully automate the process of spatial data collection, a necessary
step for applications such as infrastructure as-built modeling.
A database of stereo frames was used to evaluate the validity and statistical
significance of the results. The distance between randomly selected points in a point
cloud was calculated from the estimated 3D coordinates of each point and was also
measured with a tape measure. The differences between these two sets of
measurements were used to find the 95% limits of agreement.
Future work will focus on several areas to improve the accuracy of the presented
method. First, the effect of the distance between the two cameras on the accuracy of
the generated point cloud needs to be investigated. Second, the proposed method
operates only at the single-scene level; the potential benefit of using video sequences
as the input data was not considered, although exploiting the interaction of the video
frames in a sequence could significantly increase the accuracy.
6. Acknowledgement
This material is based upon work supported by the National Science Foundation
under Grant #0904109. Any opinions, findings, and conclusions or recommendations
expressed in this material are those of the authors and do not necessarily reflect the
views of the National Science Foundation.
7. References
B. Akinci, F. Boukamp, C. Gordon, D. Huber, C. Lyons, K. Park, A formalism for
utilization of sensor systems and integrated project models for active construction
quality control, Aut. Const. 15(2) (2006) 124-138.
J. Bauer, N. Sunderhauf, P. Protzel, Comparing several implementations of two
recently published feature detectors, in: Proceedings of the Int. Conf. on Intelligent
and Autonomous Systems, IAV, Toulouse, France (2007).
H. Bay, A. Ess, T. Tuytelaars, L.V. Gool, Speeded-up robust features (SURF), in:
Computer Vision - ECCV 2006, Springer, 3951 (2006) 404-417.
J.M. Bland, D.G. Altman, Statistical-methods for assessing agreement between 2
methods of clinical measurement, Lancet 1(8476) (1986) 307-310.
J.Y. Bouguet, <http://www.vision.caltech.edu/bouguetj/calib_doc/>.
F. Dai, M. Lu, Assessing the accuracy of applying photogrammetry to take geometric
measurement on building products, J. Constr. Eng. M., 136(2) (2010) 242-250.
M. Fischler, R. Bolles, Random sample consensus: a paradigm for model fitting with
applications to image analysis and automated cartography, Communications of the
ACM 24(6) (1981) 381-395.
M. Golparvar-Fard, F. Peña-Mora, S. Savarese, D4AR – a 4-dimensional augmented
reality model for automating construction progress monitoring data collection,
processing and communication, J. of Inf. Tech. in Constr. 14 (2009) 129-153.
R. Hartley, In defence of the 8-point algorithm, IEEE Transactions on Pattern
Analysis and Machine Intelligence 19 (1997) 580-593.
R. Hartley, A. Zisserman, Multiple view geometry in computer vision, second ed.,
Cambridge University Press, Cambridge, 2003.
ImageModeler, 5-step tutorial of a complete ImageModeler project (2009).
K. Kanatani, Statistical optimization for geometric computation: theory and practice,
Dover, New York, USA, 2005.
C. Kim, C.T. Haas, K.A. Liapi, Rapid, on-site spatial information acquisition and its
use for infrastructure operation and maintenance, Aut. Con., 14 (2005) 666-684.
D. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. of
Computer Vision 60(2) (2004) 91-110.
Z.A. Memon, M.Z. Abd-Majid, M. Mustaffar, An automatic project progress
monitoring model by integrating Auto-CAD and digital images, in: Proceedings of
the ASCE Int. Conf. on Computing in Civil Eng., Mexico, July 12-15, 2005.
N. Snavely, S. Seitz, R. Szeliski, Modeling the world from internet photo
collections, Int. J. of Computer Vision 80(2) (2008) 189-210.
H. Son, C. Kim, 3D structural component recognition and modeling method using
color and 3D data for construction progress monitoring, Aut. Con. 19(7) (2010)
844-854.
P. Tang, B. Akinci, D. Huber, Quantification of edge loss of laser scanned data at
spatial discontinuities, Aut. Con., 18 (2009) 1070-1083.
Z. Zhu, I. Brilakis, Comparison of optical-sensor-based spatial data collection
techniques for civil infrastructure modeling, J. of Comp. in Civil Eng., 23(3)
(2009) 170-177.
Unstructured Construction Document Classification Model through Support
Vector Machine (SVM)
Tarek Mahfouz
construction subject) from the set of negative examples (documents not belonging to
the same construction subject) with a maximum margin. Binary classification is
performed using a real-valued hypothesis function, equation 1, where an input x
(document) is assigned to the positive class (specific subject) if ƒ(x) ≥ 0 and to the
negative class otherwise.
ƒ(x) = ⟨w, x⟩ + b (1)
For a binary linear separation problem, the separating hyper-plane is defined by
ƒ(x) = 0. With respect to equation 1, the weight vector w and the functional bias b
are the parameters that control the separating hyper-plane (refer to Figure 1). In
addition, x is the feature vector, whose representation depends on the nature of the
problem. Within the context of the current research, the input feature space X
consists of the training documents, represented by the vectors x and o in Figure 1.
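The decision rule in equation 1 can be sketched as follows; the weight vector w, bias b, and the three-term document feature vectors are hypothetical stand-ins for values that SVM training would actually produce:

```python
def svm_predict(w, b, x):
    """Binary decision rule from equation 1: assign the positive class
    (the target construction subject) when f(x) = <w, x> + b >= 0."""
    f = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if f >= 0 else -1

# Hypothetical 3-term document feature vectors (e.g., term weights);
# w and b would come from SVM training, not be chosen by hand as here.
w, b = [0.8, -0.5, 0.3], -0.2
print(svm_predict(w, b, [1.0, 0.0, 0.5]))  # 1  (0.8 + 0.15 - 0.2 = 0.75 >= 0)
print(svm_predict(w, b, [0.0, 1.0, 0.0]))  # -1 (-0.5 - 0.2 = -0.7 < 0)
```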
[Table fragment: classification accuracies for Group 2 document types across five evaluation measures, with Meeting Minutes at 71%, 72%, 78%, 72%, and 72%, and Claims at 71%, 74%, 79%, 72%, and 72%; the Correspondence row is truncated.]
Automatic Look-Ahead Schedule Generation System for the Finishing Phase of
Complex Projects for General Contractors
N. Dong1, M. Fischer2, Z. Haddad3
1 Ph.D. student, Center for Integrated Facility Engineering (CIFE), Dept. of Civil and
Environmental Engineering, Stanford University, Y2E2 Building, 473 Via Ortega,
Room 292, Stanford, CA 94305, United States of America, Phone: +1-650-391-5599,
E-mail: ningdong@stanford.edu
2 Professor, Center for Integrated Facility Engineering (CIFE), Dept. of Civil and
Environmental Engineering, Stanford University, Y2E2 Building, 473 Via Ortega,
Room 292, Stanford, CA 94305, United States of America, Phone: +1-650-725-4649,
Fax: +1-650-723-4806, E-mail: fischer@stanford.edu
3 VP, Corporate Affairs & CIO, Consolidated Contractors Company (CCC), 62B
Kifissias Ave., Amaroussion, Athens, Greece 15125, Phone: +30-210-618-2162, E-
mail: zuhair@ccc.gr
ABSTRACT
INTRODUCTION
A “good” LAS for the finishing phase of a complex project should take into
account factors including the work content and work sequence in different types of
rooms, the priorities of rooms and activities, effective crew formation from sharable
skilled workers, crew availability, and actual progress from the job site. It cannot be
used to guide field crews' work unless it considers these factors.
Unfortunately, such LASs are not widely used in the finishing phase of complex
projects, for several reasons. First, no existing tool helps site engineers consider
both spatial resources (i.e., rooms) and crew resources at the same time to avoid
work conflicts, in addition to the progress updates and activity dependencies they
must already track. A second challenge is accounting for crew availability when
dynamic crew formation from sharable skilled workers is allowed. For example, two
carpenters are needed to install door frames and panels in one room; when they have
no doors to install, they could be assigned to a wood skirting activity, which requires
three carpenters. This paper aims to create a solution that fully addresses these
challenges and prepares for schedule optimization.
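The carpenter example above can be sketched as a simple availability check against a shared worker pool; the data structures, function names, and numbers are illustrative assumptions, not the paper's system:

```python
def can_form_crew(available, required):
    """Dynamic crew formation from a shared skilled-worker pool: an
    activity can start only if every required trade has enough idle
    workers (e.g., wood skirting needs 3 carpenters)."""
    return all(available.get(trade, 0) >= n for trade, n in required.items())

def assign(available, required):
    """Reserve workers for an activity; returns the updated pool."""
    pool = dict(available)
    for trade, n in required.items():
        pool[trade] -= n
    return pool

pool = {"carpenter": 3, "painter": 2}
door_install = {"carpenter": 2}       # activity from the text's example
wood_skirting = {"carpenter": 3}      # needs 3 carpenters

pool = assign(pool, door_install)             # 2 carpenters busy on doors
print(can_form_crew(pool, wood_skirting))     # False: only 1 carpenter idle
```

A scheduler built on such checks could defer wood skirting until the door crew is released back to the pool, which is the kind of conflict the text says existing tools fail to model.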
The critical path method (CPM) is the commonly used network technique for
scheduling both repetitive and non-repetitive projects (Clough and Sears 1991;
O’Brien and Plotnick 2005). However, using CPM to schedule a project with a large
number of activities, while also assigning crew resources to each individual activity,
is time consuming and extremely difficult to maintain (Reda 1990). Line of balance
(LOB) is another popular network-based method for scheduling repetitive projects
(Carr and Meyer 1974; Johnston 1981; Arditi and Albulak 1986); however, its
effectiveness for non-repetitive projects is not well documented and remains unclear.
Through the use of fragnets, CPM tools (e.g., Primavera and Microsoft Project)
provide a level of automation that allows project planners to define an activity sub-
network (i.e., an activity template) to be reused over time. In the finishing phase, a
fragnet can represent a specific type of room with a unique work sequence. However,
a fragnet by itself does not address problems such as activity definition and its
relation to resources, activity dependencies in the finishing phase, and automatic
calculation of activity durations. Much prior literature has discussed automated
schedule generation methods that consider activity definition, activity dependencies,
and crew resource allocation in the schedule generation process (Darwiche et al.
1998; Waugh 1990; Echeverry et al. 1991; Yau et al. 1991; Winstanley et al. 1993;
Dzeng and Tommelein 1995; Thabet and Beliveau 1997; Aalami 1998; Chevallier and
Russell 1998; Kanit et al. 2009), but none fully addressed the following problems:
considering spatial resources and crew resources at the same time; the multiple
perspectives of a room (a type of resource, but also an instance of a fragnet from the
process perspective); dynamic crew formation from sharable skilled workers; and
schedule generation based on progress updates.
Table 2 only covers part of the dynamic process features of rooms listed in
Table 1. The remaining features are not scheduling inputs but must be recorded
during schedule generation to facilitate space and crew allocation. These features,
combined with the inputs from Table 2, form the data framework of the scheduling
information model shown in Figure 1.
Figure 1. Scheduling information model, comprising three linked entities: Progress
Update (Room ID, Operation ID, Assigned Crew/Worker IDs, Expected Operation
Duration, Actual Start Date, Actual Finish Date, Days Left), related n:1 to Crew
Productivity (Composition, Required Worker Type, Fragnet ID, Operation ID, BOQ,
Productivity Rate), which is in turn related 1:n to Skilled Worker Pool (Worker
Type, Worker ID, Worker Available Date).
Table 3 summarizes the fragnets and related operations of this case study.
Although certain fragnets contain the same operation names, the work contents and
the related productivity rates are often different. For example, the “Electrical final
fix” in the ELE fragnet concentrates on finalizing the power boxes and panels, while
in the IDF fragnet it concentrates on data servers above the raised floor. The
successor(s) of each operation are also indicated in Table 3.
COMPUTING IN CIVIL ENGINEERING 139
Since most of the rooms are too small to allow multiple operations to proceed
at the same time, we only define finish-to-start relations between any two operations.
In other words, the crew for the next operation cannot move into a room until the
crew for the preceding operation has finished its work. For two or more operations
that could otherwise go in parallel in a room, only one operation is allowed to
proceed at a time.
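The room-use rules above (at most one operation per room at a time, operations in a room done in sequence, limited crew availability) can be sketched with a toy day-by-day allocator. This is an illustration of the constraints only, not the authors' ALASGM system; all names and data are ours:

```python
def schedule(rooms, crew_capacity):
    """Greedy day-by-day schedule: each room hosts at most one operation
    at a time, operations within a room run in sequence, and at most
    crew_capacity operations run in parallel overall.
    rooms maps room id -> ordered list of (operation, duration_in_days);
    returns {(room, operation): [start_day, finish_day]}."""
    progress = {r: 0 for r in rooms}       # index of next operation per room
    remaining = {r: None for r in rooms}   # days left on the running operation
    result, day = {}, 0
    while any(progress[r] < len(ops) for r, ops in rooms.items()):
        active = [r for r in rooms if remaining[r] is not None]
        for r, ops in rooms.items():       # earlier rooms get priority
            if (remaining[r] is None and progress[r] < len(ops)
                    and len(active) < crew_capacity):
                op, dur = ops[progress[r]]
                result[(r, op)] = [day, None]
                remaining[r] = dur
                active.append(r)
        for r in rooms:                    # advance running operations one day
            if remaining[r] is not None:
                remaining[r] -= 1
                if remaining[r] == 0:
                    op, _ = rooms[r][progress[r]]
                    result[(r, op)][1] = day
                    progress[r] += 1
                    remaining[r] = None
        day += 1
    return result

plan = schedule({"1001": [("conduit", 2), ("plaster", 1)],
                 "1002": [("conduit", 1)]}, crew_capacity=1)
```

With one crew available, room 1002 cannot start until room 1001's sequence finishes, mirroring how rooms and crews are both treated as limited resources.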
Crew formation focuses only on sharable skilled workers, since they are the
most important part of the crew. The plastering and screed operations have fixed
crew formations in every fragnet, so we treat these crews as single skilled workers
in scheduling. All subcontractors’ crews are treated in the same way.
room. Each cell contains the skilled workers and the operation to take place in a room
on a specific day. For example, in room 1001 (where), on January 7, 2009 (when),
four electricians (who) are required to work on “Conduit & box” installation (what).
By rearranging the who-what-when-where elements, we get a worker-centered view
of a schedule, as demonstrated in Figure 2 (b), with each row representing the
operation and its location for a skilled worker to perform on a daily basis. A grey cell
in either of these schedules indicates a resource unit (room or skilled worker) is idle
on a specific day.
Figure 3 shows the duration distribution when we run the simulation 3,000
times with the following assumptions: (1) no operation has started yet in any room,
(2) all rooms have the same priority, (3) all rooms allow only one operation to
proceed at a time, and (4) the availability of skilled workers is only enough for one
operation in one room at a time. The computing time for this simulation is 136
seconds on a PC with an Intel Core 2 Duo CPU (2.53 GHz).
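A duration-distribution experiment of this kind can be sketched as follows. This is a toy illustration only: the uniform ±20% variability model and all names are our assumptions, not the authors' simulator:

```python
import random

def simulate_durations(n_runs, estimated_days, seed=0):
    """Toy Monte Carlo duration experiment: sample each operation's
    duration uniformly within +/-20% of its estimate (an assumed
    variability model) and sum to a total project duration per run."""
    rng = random.Random(seed)
    return [sum(d * rng.uniform(0.8, 1.2) for d in estimated_days)
            for _ in range(n_runs)]

# 3,000 simulated totals for four operations estimated at 2, 3, 1, 4 days
totals = simulate_durations(3000, [2, 3, 1, 4])
```

Plotting a histogram of `totals` would give a duration distribution analogous to the one reported in Figure 3.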
The LAS generated by ALASGM can effectively guide site engineers’ field
work and avoid the work conflicts and rework caused by planning everything in their
heads. They can also use the system to analyze resource utilization and determine
when to add or remove people from the job site to better achieve project goals such
as a shorter project duration. Future work includes developing a user interface that
allows site staff to conveniently input progress updates, track field crews’ work
efficiency, and examine scheduling results; using artificial intelligence to quickly
discover optimal solutions; and incorporating project-related constraints into the
schedule generation process.
REFERENCES
ABSTRACT
The paper describes the “SCrIPt” methodology used to develop a sustainable
construction domain ontology, taking into account the wealth of existing semantic
resources in the construction sector. The latter range from construction taxonomies
(e.g., IFCs) to energy calculation tools’ internal data structures (i.e., conceptual
database schemas). The paper argues that taxonomies provide an ideal backbone for
any ontology project. Equally, textual documents have an important role in sharing
and conveying knowledge and understanding. Therefore, a construction industry
standard taxonomy is used to provide the seeds of the ontology, enriched and
expanded with additional concepts extracted from large construction sustainability
and energy oriented document bases using information retrieval (tf-idf and Metric
Clusters) techniques. The SCrIPt ontology will be used as the semantic engine for a
Sustainability Construction Platform, commissioned by the Welsh Assembly
Government in the UK.
INTRODUCTION
processes of the firm. These also relate to knowledge about the personal skills,
sustainable construction project experience of the employees and cross-
organizational knowledge. The latter covers sustainable construction
knowledge nurtured through collaborative relationships with other partners,
including clients, architects, engineering companies, and contractors.
Sustainable Construction Practical Knowledge: this is knowledge acquired
by individuals through practice, drawing on the two categories above. It
exists in a tacit form and in several instances is codified, but mainly resides
on users’ computers and is hence not shared with others.
Commercial Sustainable Construction Knowledge: this knowledge is
formalized and conceptualized by software vendors through their commercial
software solutions. It is accessible only through the functionality exposed by
their software.
Figure 2. The various stages of the Methodology (adapted from Rezgui, 2007).
The Industry Foundation Classes (IFC, 2010) play a pivotal role in the
representation and conceptualisation of a building. However, in their present form,
they cannot support building thermal analysis and sustainable construction design.
The IFCs need to be enhanced to support the features (concepts, facets, and semantic
relationships) required by existing energy calculation, simulation, and compliance
checking tools. The IFCs are therefore used as a basis to develop the sustainable
construction
where W_{i,j} represents the quantified weight that a term t_i has over the document
d_j; f_{i,j} represents the normalised occurrence of a term t_i in a document d_j,
and is calculated using Equation (2):

f_{i,j} = freq_{i,j} / max_k freq_{k,j}    (2)

where freq_{i,j} represents the number of times the term t_i is mentioned in document
d_j, and the maximum in the denominator is computed over all terms k mentioned in
the text of document d_j; idf_i represents the inverse of the frequency of a term t_i
among the documents in the entire knowledge base, and is expressed as shown in
Equation (3):
idf_i = log(N / n_i)    (3)
where N is the total number of documents in the knowledge base, and n_i the number
of documents in which the term t_i appears. The intuition behind the measure W_{i,j}
is that the best terms for inclusion in the ontology are those featured prominently
in certain individual documents, capable of distinguishing them from the remainder
of the collection. This implies that the best terms should have high term frequencies
but low overall collection frequencies. The term importance is therefore obtained as
the product of the term frequency and the inverse document frequency (Salton and
Buckley, 1988).
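The weighting of Equations (1)–(3) can be sketched as follows. This is a minimal illustration, not the SCrIPt implementation; the tokenisation, the base-10 logarithm, and all names are our assumptions:

```python
import math
from collections import Counter

def tfidf_weights(documents):
    """Weight terms per Equations (1)-(3): W[j][t] = f_{t,j} * idf_t,
    with f the max-normalised term frequency in document j and
    idf_t = log10(N / n_t). documents is a list of token lists."""
    N = len(documents)
    doc_freq = Counter()                      # n_t: documents containing term t
    for doc in documents:
        doc_freq.update(set(doc))
    weights = []
    for doc in documents:
        counts = Counter(doc)
        max_freq = max(counts.values())       # max frequency in this document
        weights.append({t: (c / max_freq) * math.log10(N / doc_freq[t])
                        for t, c in counts.items()})
    return weights

docs = [["wall", "insulation", "wall"], ["wall", "roof"], ["roof", "glazing"]]
w = tfidf_weights(docs)
```

In this toy corpus, "insulation" appears in only one document and therefore outweighs "wall" there, matching the intuition that discriminative terms are the best ontology candidates.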
The next step involves building the relationships connecting the concepts,
including those that have not been retained in the previous stage. Concept
relationships can be induced by patterns of co-occurrence within documents. We
distinguish three main types of relationships: (a) Generalization / Specialization
Relationship (e.g., Wall can be specialised into separation wall, structural wall,
Loadbearing Separation Wall), (b) Composition / Aggregation Relationship (e.g.,
Door is an aggregation of a Frame, a Handle, etc), (c) Semantic relationship between
concepts (e.g., a Beam supports a Slab, and a Beam is supported by a Column).
The last two categories above are addressed in this step. The process is semi-
automated in that relations are first identified automatically; contributions from
knowledge specialists are then requested to qualify and define the identified relations.
To assess the relevance of relationships between concepts, an approach that combines
the number of co-occurrences of concepts with their proximity in the text is adopted.
This is known as the “Metric Clusters” method (Baeza-Yates and Ribeiro-Neto, 1999)
(Equation 4), which factors the distance between two terms into the computation of
their correlation factor. The assumption is that terms occurring in the same sentence
are more correlated than terms that appear far apart.
C_{u,v} = Σ_{t_i ∈ V(S_u)} Σ_{t_j ∈ V(S_v)} 1 / r(t_i, t_j)    (4)
The distance r(t_i, t_j) between two keywords t_i and t_j is given by the number of
words between them in the same document. V(S_u) and V(S_v) represent the sets of
keywords which have S_u and S_v as their respective stems. To simplify the
correlation factor given in Equation 4, it was decided not to take into account the
different syntactic variations of concepts within the text, and instead to use Equation
5, where r(t_u, t_v) represents the minimum distance (in terms of the number of
separating words) between concepts t_u and t_v in any single document:
C_{u,v} = 1 / min[r(t_u, t_v)]    (5)
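The simplified correlation of Equation (5) can be sketched as follows (an illustrative sketch; the position-list representation and all names are ours, not the SCrIPt data model):

```python
def metric_correlation(positions_u, positions_v):
    """Simplified metric-clusters correlation of Equation (5):
    the inverse of the minimum word distance between two concepts.
    positions_u and positions_v are the word indices at which each
    concept occurs in a document."""
    min_dist = min(abs(pu - pv) for pu in positions_u for pv in positions_v)
    return 1.0 / min_dist if min_dist > 0 else float("inf")

# "beam" at word positions 2 and 10, "slab" at position 4:
# minimum separation is 2 words, so the correlation is 0.5
c = metric_correlation([2, 10], [4])
```

Pairs of concepts whose correlation exceeds a chosen threshold would then be passed to the knowledge specialists for qualification.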
The domain knowledge experts drawn from the SCrIPt project stakeholders
are responsible for validating the newly integrated index terms and their given
names, and then for defining all the concept associations that do not belong to the
generalization / specialization category. First, these relationships are established at a
high level within the Core Ontology; subsequent efforts will then establish
relationships at lower levels within the discipline ontologies. The use of discipline
documents to identify ontological concepts and relationships proved to be an
effective strategy for constructing the discipline sub-ontologies (Rezgui, 2007).
CONCLUSION
concepts and relationships using tf-idf and metric clusters techniques, which are then
validated by human experts. At the time of writing the paper, the ontology is still
under development using the techniques described in the paper. Once completed, the
final version of the sustainable construction ontology will be reported in a follow-on
publication.
REFERENCES
Zhenhua Zhu1, Stephanie German1, Sara Roberts1, Ioannis Brilakis2 and Reginald
DesRoches3
1
School of Civil and Environmental Engineering, Georgia Institute of Technology,
Atlanta, GA 30332; email: {zhzhu, s.german, sroberts4}@gatech.edu
2
School of Civil & Environmental Engineering, Georgia Institute of Technology,
Atlanta, GA. 30332; PH (404)894-9881; email: brilakis@gatech.edu
3
School of Civil & Environmental Engineering, Georgia Institute of Technology,
Atlanta, GA. 30332; PH (404)385-0402; email: reginald.desroches@ce.gatech.edu
ABSTRACT
Manual inspection is required to determine the condition of damaged buildings after
an earthquake. The lack of available inspectors, combined with the large volume of
inspection work, makes such inspection subjective and time-consuming. The required
inspection can take weeks to complete, which has adverse
economic and societal impacts on the affected population. This paper proposes an
automated framework for rapid post-earthquake building evaluation. Under the
framework, the visible damage (cracks and buckling) inflicted on concrete columns is
first detected. The damage properties are then measured in relation to the column’s
dimensions and orientation, so that the column’s load bearing capacity can be
approximated as a damage index. The column damage index supplemented with other
building information (e.g. structural type and columns arrangement) is then used to
query fragility curves of similar buildings, constructed from the analyses of existing
and on-going experimental data. The query estimates the probability of the building
being in different damage states. The framework is expected to automate the
collection of building damage data, to provide a quantitative assessment of the
building damage state, and to estimate the vulnerability of the building to collapse in
the event of an aftershock. Videos and manual assessments of structures after the
2010 earthquake in Haiti are used to test parts of the framework.
KEYWORDS: Post-earthquake inspection; Machine vision
INTRODUCTION
Post-earthquake inspection is performed by teams comprising licensed inspectors
and/or structural engineers. They follow the guidelines in the ATC-20 documents
(ATC-20, 1989; ATC-20-2, 1995) and classify a post-earthquake building as 1)
imminent threat to life-safety (red-tag), 2) risk from damage but not imminent threat
to life-safety (yellow-tag), or 3) safe for entry and occupancy as earthquake damage
has not significantly affected the safety of the building (green-tag). As suggested by
the definitions of the categories, the application of these guidelines requires
significant judgment, and the inspection results are highly subjective. Also, mobilizing
post-earthquake reconnaissance teams and assessing damaged buildings often takes
days to weeks, even for a moderate earthquake. According to a summary report on
the October 15, 2006 Hawaii Earthquake, assessments were requested for several
hundred buildings per day from October 15 to the end of October in the County of
Hawaii, while the inspection capacity was only around 1,000 buildings per week
(Chock, 2007).
Prompted by the critical role of post-earthquake inspection in hazard mitigation
and the need for its fast performance in earthquake damaged areas, several efforts
towards automating building safety assessment have led to the creation of
sensing-based evaluation methods. For example, Kottapalli et al. (2003) showed that
sensor networks installed in new buildings can provide useful information for
evaluating structural damage. However, sensor networks are installed in a very small
percentage of existing structures in earthquake-prone areas, and rarely in the most
susceptible, older reinforced concrete (RC) buildings.
This paper proposes an automated framework for the evaluation of
post-earthquake RC buildings using machine vision techniques. Under the framework,
the visible damage inflicted on concrete columns is first detected. The spatial
properties of the damage are measured in relation to the column’s dimensions and
orientation to approximate the column’s load bearing capacity as a damage index. The
column damage index supplemented with other building information (structural type
and columns arrangement) is used to query fragility curves of similar buildings,
constructed from the analyses of existing and on-going experimental data. The query
estimates the probability of the building being in different damage states. The
framework is expected to provide the quantitative assessment of the damage state of
an RC frame, and its vulnerability to collapse in an aftershock.
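As a concrete illustration of querying a fragility curve, the commonly used lognormal form can be sketched as follows. This is a generic sketch only; the functional form and parameters of the paper's fragility database are not specified, and all names are ours:

```python
import math

def fragility_probability(demand, median, beta):
    """Probability of reaching a damage state at a given demand level,
    using the common lognormal fragility form
    P = Phi((ln(demand) - ln(median)) / beta),
    where median is the median capacity and beta the lognormal
    standard deviation. Phi is evaluated via the error function."""
    z = (math.log(demand) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# At a demand equal to the median capacity, the probability is 0.5;
# higher demands yield higher probabilities of reaching the state.
p = fragility_probability(0.30, 0.30, 0.4)
```

A damage index computed from detected column damage could, in principle, serve as the demand input to such a query.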
RELATED WORK
In this section, recent work on machine vision-based structural element detection is
introduced first. Following that, the assessment of the vulnerability of RC buildings
to collapse and the estimation of losses for buildings subjected to earthquake loading
are described. The proposed framework builds on all of these.
Machine Vision-Based Structural Element and Damage Detection
Machine vision-based detection methods rely on: 1) scale/affine-invariant features, 2)
color/texture features, and 3) geometry features. Scale/affine-invariant feature-based
methods are powerful in detecting a specific object, but not appropriate for object
category detection (Zhu and Brilakis, 2010).
Color/texture based methods use the objects’ interior color/texture values to
perform detection. Neto et al. (2002) observed that the color/texture values for most
materials (e.g. concrete and steel) in an image do not change significantly. Based on
this observation, material regions in an image can be identified and the type of
structural element in a region is determined from the region’s dimensions (Brilakis
and Soibelman, 2008). However, when one element is connected to another structural
element of the same material, these methods regard them as one single element
instead of two separate elements.
Edge information is another type of detection indicator, used by geometry-based
methods. These start with edge detection using common operators, and then form
object boundaries by analyzing the distribution of edge points through the Hough
transform, covariance matrices, or principal component analysis (Lee et al. 2006).
The sole reliance on edge information renders these methods inadequate for complex
scenes.
As for automated damage detection, many methods have been created using
image processing techniques such as wavelet transforms, edge detection, and/or
region-based segmentation. Their effectiveness has been verified in inspecting
concrete structures such as bridges, underground pipes, and tunnels. For example,
Abdel-Qader et al. (2006) proposed a principal component analysis (PCA) based
algorithm for unsupervised detection of bridge cracks. Sinha and Fieguth (2006)
introduced two crack detectors for identifying crack pieces in buried concrete pipes.
Yu et al. (2007) used Sobel and Laplacian operators to retrieve crack information
from captured concrete surface images; the measurement error for the extracted
cracks in their system was below 10%. These successful efforts validated the ability
of machine vision technologies to detect damage, even when well-lit conditions
were not available.
Assessment of the Vulnerability of RC Buildings to Collapse
In earthquake engineering, models are required to link the component damage
visually identified on-site to the building performance and vulnerability to
aftershocks. These models are referred to as “fragility functions”. Researchers have
developed fragility functions that advance assessment of the post-earthquake
vulnerability of buildings beyond ATC-20 documents. One study was undertaken as
part of the Pacific Earthquake Engineering Research (PEER) Center Lifeline
Research Program (Bazzurro et al. 2004). The study developed the recommendations
to quantify the vulnerability of the building to collapse during an aftershock given
that the building had been red, yellow, or green-tagged following the main shock.
Maffei et al. (2008) applied these recommendations for the evaluation of utility
company buildings that required limited access after an earthquake so that personnel
could access equipment to restore power supply and thereby enable post-event
recovery.
Another primary component of the assessment methods is a pushover analysis to
determine the response of the structure under earthquake loading (Bazzurro et al.
2004). So far, the data from suites of analyses have been used to develop fragilities
for different types of buildings, including concrete frames (Haselton and Deierlein,
2008), concrete continua (Ji et al. 2009), and concrete wall buildings (Elnashai 2002).
The pushover response history can be quickly estimated using a few basic parameters
characterizing the building system.
A third critical component of the assessment methods is the ability to link
analysis results with observed damage. For example, when a pushover response
history is used, it is necessary to identify the earthquake load–roof displacement
points on the history corresponding to development of specific observable damage
states, such as initiation of measurable residual concrete cracking, initiation of
concrete crushing, or buckling of longitudinal reinforcing steel. This introduces
additional effort into the model-building process (in defining hinge response for RC
elements) and additional uncertainty into the process; both can be reduced, with
relatively little impact on computational time, through the use of fiber-type response,
and associated damage prediction, models.
Loss Estimation from Buildings Subjected to Earthquake Loading
Typically, the extent of federal and state funding provided for recovery efforts is
determined by estimates of earthquake losses, making rapid, accurate estimation
of these losses critical to recovery. For most buildings, and especially under light to
moderate earthquake loading, the cost of repairing non-structural elements greatly
exceeds that for structural damage. However, economic losses must include the cost
of lost productivity during the time the structural system is repaired. This cost can be
quite significant.
Previous studies provided a basis for developing rapid, automated procedures for
using damage data to estimate repair costs and downtime for repair. For example,
Pagni and Lowes (2006) and Lowes and Li (2009) linked damage with the repair
methods required to return the structure to its original stiffness and strength. Pagni
(2003) demonstrated the use of these repair-specific fragility functions to compute the
cost and time for the repair of old concrete frames. The on-going ATC-58 project has
developed a framework for loss estimation on the basis of the required repair.
THE FRAMEWORK OF THE PROPOSED METHODOLOGY
This paper proposes a novel, automated framework for post-earthquake inspection
(Figure 1). The framework first collects video frames via a high-resolution video
camera and transmits the frames to a computer off-site for analysis. There, each frame
is searched for concrete columns and the damage inflicted on the columns. The spatial
damage properties are measured, so that the column’s load bearing capacity can be
approximated as a damage index. In parallel to this process, the building structural
type and the column arrangement per floor are recorded by the user while performing
the building safety evaluation. The collected information is used to query a fragility
database constructed from analyses of existing and on-going experimental data. The
database contains building fragility curves that report the probability of various levels
of structural damage. Consulting these curves gives the estimate of the probability of
or recommendations expressed in this material are those of the author(s) and do not
necessarily reflect the views of the National Science Foundation.
REFERENCES
Abdel-Qader, I., Pashaie-Rad, S., Abudayyeh, O., and Yehia, S. (2006). "PCA-Based
Algorithm for Unsupervised Bridge Crack Detection." Advances in Engineering
Software, 37(12), 771-778.
ATC-20 (1989). "Procedures for Postearthquake Safety Evaluations of Buildings."
Report ATC-20, Redwood City, CA.
ATC-20-2 (1995). "Addendum to ATC-20, Procedures for Postearthquake Safety
Evaluations of Buildings." Report ATC-20-2, Redwood City, CA.
Bazzurro, P., Cornell, C.A., Menun, C., Luco, N., and Motahari, M. (2004).
"Advanced Seismic Assessment of Buildings." Report for Pacific Gas & Electric
Company and the Pacific Earthquake Engineering Research Center.
Brilakis, I., and Soibelman, L. (2008). "Shape-Based Retrieval of Construction Site
Photographs." J. of Computing in Civil Engineering, 22(1), 14-20.
Chock, G. (2007). "ATC-20 Post-Earthquake Building Safety Evaluations Performed
after the October 15, 2006 Hawaii Earthquakes: Summary and Recommendations
for Improvements (updated)."
http://www.scd.state.hi.us/HazMitPlan/chapter_6_appM.pdf (Dec. 10, 2008).
Elnashai, A.S., Papanikolaou, V., and Lee, D.H. (2002). "Zeus-NL — A System for
Inelastic Analysis of Structures."
Haselton, C., and Deierlein, G. (2008). "Assessing Seismic Collapse Safety of
Modern Reinforced Concrete Moment-Frame Buildings." PEER Report 2007/08.
Ji, J., Elnashai, A.S., and Kuchma, D.A. (2009). "Seismic Fragility Relationships of
Reinforced Concrete High-Rise Buildings." Structural Design of Tall & Special
Buildings, 18(3), 259-277.
Kottapalli, V.A., Kiremidjian, A.S., Lynch, J.P., Carryer, E., Kenny, T.W., and Law,
K.H. (2003). "Two-Tiered Wireless Sensor Network Architecture for Structural
Health Monitoring." SPIE's 10th Annual International Symposium on Smart
Structures and Materials, San Diego, 8-19.
Lee, Y., Koo, H., and Jeong, C. (2006). "A Straight Line Detection Using Principal
Component Analysis." Pattern Recognition Letters, 27(14), 1744-1754.
Lowes, L.N., and Li, J. (2009). "Fragility Functions for RC Moment Frames." Report
to ATC-58 Structural Performance Products Review Panel.
Maffei, J., Telleen, K., and Nakayama, Y. (2008). "Probability-Based Seismic
Assessment of Buildings, Considering Post-Earthquake Safety." Earthquake
Spectra, 24(3).
Neto, J., Arditi, D., and Evens, M. (2002). "Using Colors to Detect Structural
Components in Digital Pictures." Computer Aided Civil and Infrastructure
Engineering, 17, 61-76.
Pagni, C.A., and Lowes, L.N. (2006). "Fragility Functions for Older Reinforced
Concrete Beam-Column Joints." Earthquake Spectra, 22(1), 215-238.
Sinha, S., and Fieguth, P. (2006). "Automated Detection of Cracks in Buried
Concrete Pipe Images." Automation in Construction, 15(1), 58-72.
Yu, S.-N., Jang, J.-H., and Han, C.-S. (2007). "Auto Inspection System Using a
Mobile Robot for Detecting Concrete Cracks in a Tunnel." Automation in
Construction, 16(3), 255-261.
Zhu, Z., and Brilakis, I. (2010). "Concrete Column Recognition in Images and
Videos." J. of Computing in Civil Engineering, 24(6), 478-487.
Continuous Sensing of Occupant Perception of Indoor Ambient Factors
Farrokh Jazizadeh1, Geoffrey Kavulya 2, Laura Klein3, Burcin Becerik-Gerber4
1,2,3,4
Sonny Astani Department of Civil and Environmental Engineering, University of
Southern California, Los Angeles, CA 90089;
Email: 1jazizade@usc.edu, 2kavulya@usc.edu, 3lauraakl@usc.edu, 4becerik@usc.edu
ABSTRACT
Ambient factors such as temperature, lighting, and air quality influence occupants’
productivity and behavior. Although these factors are regulated by industry standards
and monitored by the facilities management groups, occupants’ perceptions vary from
actual values due to various factors such as building schedules and occupancy,
occupant activity and preferences, weather and climate, and the placement of sensors.
While occupant comfort surveys are sometimes conducted, they are generally limited
to one-time or periodic assessments that do not fully represent occupant experiences
throughout building operations. This study proposes a new methodology for gathering
real time data on a continuous basis through participatory sensing of occupant
ambient comfort in indoor environments based on a smart phone application. The
developed application is presented and validated by a pilot study in a university
building. Occupant perceptions of temperature are compared to actual temperature
records. No correlation is found between perceived and actual room temperatures,
demonstrating the potential of a participatory sensing tool for adaptively controlling
building temperature ranges.
INTRODUCTION
Ambient factors such as temperature, lighting, and air quality can greatly influence
occupants’ productivity and behaviour in indoor environments (Sensharma et al.,
1998). As a result, industry standards have been developed to define acceptable
ranges for these factors according to rational comfort indices, most notably the PMV
(predicted mean vote) index for thermal comfort (Fanger, 1982; ASHRAE, 2004;
CEN, 2005). Recent studies, however, have shown weak and context-dependent
correlations between code-defined comfort ranges and occupant-reported comfort
ranges (Barlow and Fiala, 2007; Corgnati et al., 2009). Oftentimes, occupant comfort
ranges are found to be larger and more forgiving than predicted ranges, implying a
potential for reduced building energy consumption by allowing more flexible and
adaptive control of HVAC and lighting system set points (Hwang et al., 2006; Nicol
and Humphreys, 2009). In the United States, buildings account for 40% of national
energy consumption of which 33% is associated with heating and cooling and 18% is
associated with lighting (U.S. Department of Energy, 2009). Consequently, there is a
significant opportunity for improving occupant comfort levels and reducing building
energy demands by collecting and analysing occupant perceptions of indoor
environmental conditions.
Current practices for controlling and assessing indoor environments limit occupant
feedback to one-time or periodic occupant surveys and individual occupant
complaints (Nicol and Roaf, 2010). Such limitations reflect challenges to continuous
and large-scale acquisition of human-contributed data (Ari et al., 2008). To allow
frequent and real-time assessment of a large collection of indoor environments, this
research proposes a new methodology for gathering real time data on a continuous
basis through participatory sensing of occupant ambient comfort in indoor
environments. In recent years, smart phones have evolved from devices used solely
for voice and text communication to platforms that can capture and transmit a range
of data types, including image, audio, and location, and that can use cloud services
to collect and analyze systematic data (Estrin, 2010). Participatory sensing involves
and empowers
end-users in collecting and sharing these data types (Reddy et al., 2010; Payton and
Julien, 2010) by using mobile devices. Consequently, there is a shift towards
employing the widespread capabilities of smart phones for manual, automatic, and
context-aware data capture especially by incorporating participatory sensing
functionality (Trossen and Pavel, 2010; Sääskilahti et al, 2010), which includes
sensors and techniques for pervasive environmental monitoring.
The research develops and tests a new participatory sensing smart phone application
for building occupants that is intended to facilitate more customized and therefore
more efficient heating, cooling, ventilation, and lighting standards and protocols for
building facilities. The developed application is presented and the proposed
methodology for facilitating continuous sensing of occupant perceptions of indoor
ambient factors is explained. The application is tested and validated by a pilot study
in a university building. Occupant perceptions of temperature are compared to actual
temperature records and results are used to assess the value of participatory sensing
for indoor environmental control and for building energy management. The main
objectives for improving data collection for indoor environmental assessment include
monitoring multiple buildings and building zones, accepting input from multiple
occupants, and analyzing and addressing input data in real time. Thorough literature
reviews of building occupant comfort and participatory sensing methods were
completed, but due to page limitations the authors were unable to include them in
this paper.
Participatory sensing using mobile devices, specifically smart phones, could be the
optimal approach to collecting data from large user groups and in large facilities such
as university campuses or urban regions. This method offers great potential for data
collection because mobile devices are available to the majority of occupants of large
facilities, owing to the popularity of this type of technology. According to a 2010
survey of college and university students in the United States, 53% of students own
smartphones (Digital Media Test Kitchen, 2010).
As part of the approach, a smart phone application was designed and shared with
building occupants free of charge, allowing them real-time and continuous input of
their perceptions of indoor ambient factors. This participatory sensing application
differs from most traditional building occupant surveys in that it does not include a
comprehensive list of questions but rather a few questions designed to encourage fast
and frequent input. By developing the application for different types of smart phones
and operating systems, most building occupants are able to access a compatible
device. Such widespread availability supports the goal of this study to enable large-
scale data acquisition from a large population.
Survey Design. The application was designed for collecting, recording, and
analyzing spatiotemporal perceived ambient factors. To identify the most important
ambient factors and their associated effects on building environments, the results of
a study conducted by the third and fourth authors, which explored the relation
between students' learning and classroom features including ambient factors, space
layout, and classroom technology, were analyzed. Based on the study results and an
extensive literature review, three important ambient factors were selected:
temperature, light intensity, and airflow. These factors also have the greatest impact
on building energy consumption, as discussed in previous sections.
The survey question for perceived temperature was based on the thermal sensation
scale proposed by ASHRAE with the following choices: +2 Hot, +1 Warm, 0 Neutral,
-1 Cool and -2 Cold. The two options, slightly warm and slightly cool, were removed
from the original ASHRAE scale for this study as these options were judged to be
potentially ambiguous and difficult for participants to interpret in comparison to cool
and warm levels. Moreover, in participatory sensing, the brevity of questions and
answers plays an important role in encouraging high levels of participation.
Metabolic rate and clothing factors (ASHRAE, 2004) were not included since this
study focuses only on those factors that are under the control of the building systems
and are related to energy consumption. The approach adopted in this study is based
on continuous, real-time data collection with large samples, which are assumed to
normalize confounding factors such as clothing, gender, and so forth.
Perceived lighting was assessed by two survey questions: light source and light
intensity. Participants were asked to share whether their environment used natural
light, artificial light, or both to evaluate the contribution of lighting to energy
consumption and the contribution of light source to occupant perception. For light
intensity, a scale similar to that of temperature was adopted with the following five
levels: +2 Glaring, +1 Bright, 0 Neutral, -1 Dim, and -2 Dark. Ventilation or airflow,
which plays an important role in preserving acceptable air quality and acceptable air
speeds for occupant comfort, was assessed with the following scale: +2 Draughty, +1
Slightly Draughty, 0 Neutral, -1 Slightly Stuffy, and -2 Stuffy. The final survey
question asked participants to share their mood to investigate correlations between
occupants' moods and ambient conditions. Mood was assessed with the following
discrete answers, not intended to represent a continuous scale: +2 Focused, +1 Calm,
0 None, -1 Distracted, and -2 Sleepy.
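As an illustration only (the deployed application was written in Java for Android, so this Python sketch is not the authors' code), the five-point scales above can be encoded as numeric scores and aggregated over a room's votes:

```python
# Hypothetical encoding of the survey scales described above (illustration only).
THERMAL = {"Hot": 2, "Warm": 1, "Neutral": 0, "Cool": -1, "Cold": -2}
LIGHT = {"Glaring": 2, "Bright": 1, "Neutral": 0, "Dim": -1, "Dark": -2}
AIRFLOW = {"Draughty": 2, "Slightly Draughty": 1, "Neutral": 0,
           "Slightly Stuffy": -1, "Stuffy": -2}
# Mood answers are discrete labels, not a continuous scale, so they would be
# tallied rather than averaged.

def mean_vote(answers, scale):
    """Average numeric score for a list of survey answers on one scale."""
    return sum(scale[a] for a in answers) / len(answers)

print(mean_vote(["Cool", "Neutral", "Cool", "Warm"], THERMAL))  # -0.25
```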
Application Design and Workflow. Location and time of participation are two
important parameters of this study. A GPS-based locating algorithm running on the
application's central server provides the three buildings nearest to a participant's
location, from which they can navigate and scroll through floors and rooms. Storing a
list of campus buildings and rooms and running parts of the application on the server
reduce the computing load on the mobile devices. The location-sensing module of the
application runs as a service in the background to record the last available latitude
and longitude of the participant. This also reduces the computing time of location
sensing while participants use the application. Reducing manual data entry is an
important step in participatory sensing, as it encourages participants to contribute
easily and also reduces faulty data. Once the building and room location is defined, a
participant completes the questions regarding temperature, light source, light
intensity, air quality, and mood. The entire process takes approximately 10 seconds.
The captured data is then sent to the server and recorded in a database.
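The server-side lookup of the three nearest buildings could be sketched as follows; the campus coordinates and the use of a plain haversine distance are assumptions for illustration, since the paper does not specify the algorithm:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude fixes."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def nearest_buildings(lat, lon, buildings, n=3):
    """Return the n buildings closest to the participant's last known fix."""
    return sorted(buildings, key=lambda b: haversine_m(lat, lon, b["lat"], b["lon"]))[:n]

campus = [  # hypothetical building coordinates, for illustration only
    {"name": "A", "lat": 34.0205, "lon": -118.2856},
    {"name": "B", "lat": 34.0190, "lon": -118.2890},
    {"name": "C", "lat": 34.0222, "lon": -118.2830},
    {"name": "D", "lat": 34.0250, "lon": -118.2900},
]
print([b["name"] for b in nearest_buildings(34.0206, -118.2857, campus)])
```

Keeping the building list and this sorting step on the server, as the paper describes, spares the handset both the storage and the distance computation.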
To implement the application, the Android operating system was selected as the test
platform. The application was written in the Java programming language using the
Eclipse development platform. Screen shots of the mobile device interface are
presented in Figure 1.
Figure 1. Application interface screens: list of nearest buildings, list of floors, list of
rooms, temperature, light source, light intensity, and air flow
In addition to participatory perceived temperature data gathered over ten days by the
ambient factors application, actual temperature data for each of the surveyed rooms
was collected. Existing temperature sensors in each of the rooms allowed air
temperatures to be recorded every six minutes over the study period. The averages and
standard deviations for perceived and actual temperatures in addition to the
distribution of data points are summarized in Table 1. The actual temperatures were
matched by date, time, and location to the perceived temperature data. In Figure 2,
perceived temperature votes are plotted against actual temperature ranges in which
the perceived temperatures were reported. Over 80% of perceived temperature votes
fell under the “cool” and “neutral” categories and covered an actual temperature
range of 20 to 26 degrees Celsius. As expected, the median actual temperatures of
votes for “cool”, “neutral”, “warm”, and “hot” showed a positive correlation with
perceived increases in room temperatures. The median actual temperature of votes for
“cold”, however, was higher than the median actual temperatures for both “cool” and
“neutral” votes.
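The pairing of votes with sensor records by date, time, and location can be sketched as follows; the record layout, field names, and the tolerance for an acceptable time gap are assumptions for illustration:

```python
# Sketch (assumed data layout) of matching each perceived-temperature vote to
# the closest actual sensor reading for the same room; sensors log every 6 min.
from datetime import datetime, timedelta

def match_vote(vote, readings, max_gap=timedelta(minutes=3)):
    """Pair a vote with the nearest-in-time reading from the same room."""
    same_room = [r for r in readings if r["room"] == vote["room"]]
    best = min(same_room, key=lambda r: abs(r["time"] - vote["time"]))
    if abs(best["time"] - vote["time"]) > max_gap:
        return None  # no reading close enough to trust the pairing
    return best["temp_c"]

readings = [
    {"room": "209", "time": datetime(2011, 2, 1, 10, 0), "temp_c": 21.9},
    {"room": "209", "time": datetime(2011, 2, 1, 10, 6), "temp_c": 22.0},
]
vote = {"room": "209", "time": datetime(2011, 2, 1, 10, 4), "perceived": -1}
print(match_vote(vote, readings))  # 22.0
```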
Table 1. Means and standard deviations for perceived and actual temperatures
Room     Number of Data Points per Level     Perceived Temp.     Actual Temp. (°C)
Number   Cold  Cool  Neutral  Warm  Hot      Mean    Std Dev     Mean    Std Dev
130        0     1      3       1    2      -0.57     1.13       23.33    0.14
144        7    20      8       1    0       0.92     0.73       24.70    0.54
163        2    23      7       1    0       0.79     0.60       21.49    1.23
164        3    15     10       0    0       0.75     0.65       20.94    0.41
209        7    19     26       4    0       0.52     0.81       21.93    0.39
444        0     3      3       1    0       0.29     0.76       23.89    0.28
444C       0     0      2       1    3      -1.17     0.98       24.19    0.43
444L       1     0      4       0    0       0.40     0.89       22.64    0.88
Total     20    81     63       9    5
Based on the analysis, the HVAC systems operating in the eight surveyed rooms were
found to maintain relatively uniform air temperatures in each space. The highest
standard deviation for actual temperatures was 1.23 degrees in Room 163. Six of the
rooms had standard deviations for actual temperatures less than or approximately
equal to 0.5 °C, implying that temperatures remained within a one-degree range for at
least two thirds of the time in these rooms. While each room operated in a somewhat
narrow and regulated range, temperature ranges varied between rooms. Room 164
saw the lowest temperatures, with the majority of recorded temperatures (within one
standard deviation of the mean) falling between 20.53 and 21.35 degrees Celsius. In
contrast, Room 144 saw the highest temperatures, with a corresponding range of
24.18 to 25.26 degrees Celsius. The
maximum temperature in Room 144 was 26.2 degrees which was almost five degrees
higher than the maximum temperature of 21.8 degrees recorded in Room 164. This
variance reveals that the regulated temperature set points in each room differ
somewhat substantially.
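The per-room statistic behind this reasoning, the mean ± one standard deviation band that contains roughly two thirds of the readings of an approximately normal temperature record, can be sketched as:

```python
from statistics import mean, stdev

def one_sigma_band(temps_c):
    """Band of mean ± one sample standard deviation for a temperature record."""
    m, s = mean(temps_c), stdev(temps_c)
    return round(m - s, 2), round(m + s, 2)

# Hypothetical six-minute readings for one room (not the study's raw data).
print(one_sigma_band([20.9, 21.0, 20.5, 21.3, 21.0, 20.9, 21.1]))
```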
Figure 2. Box plot of perceived temperature votes against actual temperature ranges
Surprisingly, no correlation was found between perceived and actual mean room
temperatures. While Room 444C, which saw the second highest actual temperatures,
was perceived as warm to hot, Room 144, which saw the highest actual temperatures,
was perceived as the coolest of the eight rooms. Rooms 444 and 444C belong to the
same HVAC zone and therefore shared one temperature sensor and one VAV box for
controlling air temperature. As a result, their average temperatures are almost
identical, at 23.9 and 24.2 degrees Celsius respectively. Despite this similarity, Room
444 was perceived as slightly cool (0.29) and Room 444C was perceived as very
warm (-1.17). This discrepancy in occupant perceptions could be explained by
numerous factors including locations of building spaces, vents, and windows, room
size, occupancy rate, occupant activity and clothing, and occupant preferences.
The absence of a correlation between perceived and actual room temperatures reveals
the potential value of the ambient factors participatory application for adjusting
ambient factors to optimize occupant comfort. The findings of this study demonstrate
that standardized temperature set points do not guarantee ideal thermal conditions in
all indoor environments. Continuous and real-time access to occupant perceptions of
thermal conditions would therefore provide more effective means for adjusting
heating and cooling ranges for different building spaces. At least four of the test-bed
rooms would benefit from slight increases in their set temperature ranges, and two of
the rooms would benefit from slight decreases in their set temperature ranges.
Heating, cooling and lighting systems together make buildings some of the greatest
consumers of energy in the United States. Much of this energy consumption results
from requirements that these systems be highly regulated and controlled to meet
established standards for occupant thermal and lighting comfort. Accordingly, in this
study, participatory sensing was adopted to allow for real-time continuous assessment
of the ambient conditions of large facilities and urban regions. A smart phone
application was developed in order to provide concrete solutions to inherent gaps in
building operations and performance assessment methods. Conventional methods rely
on periodic measurement and verification surveys, which do not address the full
operational life cycle of a building. The application is used to gather occupants'
perceptions of temperature, lighting, air quality, and mood. Performance verification
of the application over a period of ten days in eight rooms revealed no significant
correlation between perceived and actual temperatures in these rooms. However, as
illustrated in the verification section, about 65 percent of occupant perceptions of
temperature differed from the neutral condition. This finding indicates the potential
of the proposed large-scale data collection methodology for improving building
system efficiency. This approach could also help optimize standards for ambient
conditions to improve both energy consumption efficiency and occupant comfort.
Future work is focused on implementing the developed methodology at larger scales
for building-level and campus-level data collection. Moreover, a test bed is being
developed by the authors for sensing and measuring other ambient factors, such as
lighting intensity and air quality. Conducting large-scale data collection will provide
a large source of data points for analytical and statistical assessment of occupant
satisfaction with building system performance. The long-term objective is the
development of an intelligent adaptive control system that relies on continuous, real-
time occupant perceptions to set optimal ambient conditions for occupant comfort
and building energy efficiency. To achieve these goals, several necessary
infrastructure developments, such as expansion to other smart phone operating
systems, development of a visualization platform for energy literacy, and integration
with facilities management information systems, are part of the authors' future work.
ACKNOWLEDGEMENTS
The authors would like to thank University of Southern California (USC) Integrated
Media Systems Center (IMSC). Any opinions, findings, conclusions, or
recommendations presented in this paper are those of the authors and do not
necessarily reflect the views of USC IMSC.
REFERENCES
Ari, S., Wilcoxen, P., Khalifa, H.E., Dannenhoffer, J.F., Isik, C. (2008). “A
Practical Approach to Individual Thermal Comfort and Energy Optimization
Problem.” NAFIPS 2008 - Annual Meeting of the North American Fuzzy
Information Processing Society, 388-93.
ASHRAE. (2004). “Thermal environment conditions for human occupancy.”
ASHRAE Standard 55/2004. Atlanta.
Barlow, S., Fiala, D. (2007). “Occupant comfort in UK offices-How adaptive
comfort theories might influence future low energy office refurbishment
strategies.” Energy and Buildings, 39, 837-846.
CEN. (2005). "Ergonomics of the thermal environment – analytical determination
    and interpretation of thermal comfort using calculation of PMV and PPD
    indices and local thermal comfort criteria." Standard EN ISO 7730. Bruxelles.
Corgnati, S.P., Filippi, M., Viazzo, S. (2007). “Perception of thermal environment
in high school and university classrooms: Subjective preferences and
thermal comfort.” Building and Environment, 42, 951-959.
Digital Media Test Kitchen (2010). "Smartphone survey methodology." <
    http://testkitchen.colorado.edu/projects/reports/smartphone/smartphone-
    methodology/> (March 08, 2011).
Estrin, D., (2010). "Participatory sensing: applications and architecture [Internet
    Predictions]." IEEE Internet Computing, 14, 12-14.
Fanger, P.O. (1982). Thermal Comfort. Malabar: Robert E. Kriger Publishing Co.
Hwang, R.L., Lin, T.P., Kuo, N.J. (2006).“Field experiments on thermal comfort in
campus classrooms in Taiwan.” Energy and Buildings, 38, 53-62.
Nicol, J.F., Humphreys, M.A. (2009). “New standards for comfort and energy use in
buildings.” Building Research & Information, 37(1), 68-73.
Nicol, F., Roaf, S. (2005). “Post-occupancy evaluation and field studies of thermal
comfort.” Building Research & Information, 33(4), 338-346.
Payton, J., Julien, C., (2010). Integrating participatory sensing in application
development practices. FoSER '10 Proceedings of the FSE/SDP workshop
on Future of software engineering research, 1817-1820
Reddy, S., Estrin, D., Srivastava, M., (2010). “Recruitment Framework for
Participatory Sensing Data Collections.” Pervasive Computing. Proceedings
8th International Conference, Pervasive, 138-55.
Sääskilahti, K., Kangaskorte, R., Luimula, M., Hemminki, J.H. (2010). "Collecting
    and visualizing wireless geosensor data using mobile devices." COM.Geo
    '10: Proceedings of the 1st International Conference and Exhibition on
    Computing for Geospatial Research, 38, 1-8.
Sensharma, N.P., Woods, J.E., Goodwin, A.K. (1998). "Relationships between the
    indoor environment and productivity: A literature review." ASHRAE
    Transactions, 104(1A), 686-701.
Trossen, D., Pavel, D. (2007). "An Open Source Platform to Facilitate Participatory
    Sensing with Mobile Phones." Proceedings of the 4th Annual International
    Conference on Mobile and Ubiquitous Systems: Computing, Networking
    and Services, MobiQuitous.
U.S. Department of Energy (2009). Energy Data Book.
EFFECTS OF COLOR, DISTANCE, AND INCIDENT ANGLE ON QUALITY
OF 3D POINT CLOUDS
Geoffrey Kavulya, Farrokh Jazizadeh, Burcin Becerik-Gerber
Department of Civil and Environmental Engineering, University of Southern
California, Los Angeles, California
Email: kavulya@usc.edu, jazizade@usc.edu, becerik@usc.edu
ABSTRACT
In laser scanning, the precision of point cloud (PC) acquisition is influenced by a
variety of factors, such as environmental conditions, scanning tools and artifacts,
dynamic scan environments, and depth discontinuity. In addition, object color, object
texture, and scanning geometry also affect the quality of point clouds. These factors
can degrade the overall quality of point clouds, which in turn can significantly
impact the accuracy of as-built models. This study investigates the effects of object
color and texture on PC quality using a time-of-flight scanner. The effects of these
factors were investigated through an experiment carried out at Rosenblatt Stadium
in Omaha, Nebraska. The outcomes of this ongoing research will be used to further
highlight the parameters that must be taken into consideration in 3D laser scanning
operations to avoid sources of error arising from the laser sensor, object
characteristics, and scanning geometry.
Keywords: 3d laser scanning, point cloud, quality, color, texture, incident angle,
distance
INTRODUCTION
There are various sources of errors that may contribute to undesired quality of point
clouds. Scanning errors may result from environmental conditions such as dust or
mist, instrument vibration, thermal expansion, surface reflectivity, and dynamic scan
scenes (Becerik-Gerber et al., 2010). The mixed-pixel phenomenon is another source
of error that causes inaccurate data acquisition. A mixed pixel forms when the laser
beam hits surfaces on two or more planes, e.g., the laser partially strikes a front
surface and another surface behind it, and two ranges are recorded for one point (Tang
et al., 2007). Invalid data may also be generated during the scanning process because
of shadows or movement of objects in the scene, which are referred to as noise. The
process of aligning and merging different point clouds from one scene is called
registration. Artifacts known as targets are used to merge multiple point clouds.
Displacement of targets during the scanning process, poor target layout design, and
errors in the target acquisition algorithm are common sources of registration errors.
Modeling errors are inherently dependent on scanning and registration errors. Errors
or missing points in the point clouds decrease the quality of the final model. Prior
research has addressed different sources of error: detecting the mixed-pixel
phenomenon and its effects on final products (Tang et al., 2007), filtering noise,
performing coarse and fine registration (Huber and Hebert, 2003), removing faulty
data (Tuley et al., 2005), and identifying reasons for edge loss in point clouds and
developing algorithms for its correction (Tang et al., 2009). In addition, a recent
study investigated the effects of different target types, scanner types, target layout
design, and scanning process on registration accuracy (Becerik-Gerber et al., 2010).
Effects of distance and angle of incidence have been the subject of previous research
efforts (Kukko et al., 2008; Vukašinović et al., 2010), though there is still a lack of
empirical research focusing on object color/texture and their correlation with
scanning geometry. This paper reports findings from an investigation that focuses on
the effects of laser sensor specification and object color, which are correlated with
object texture, laser beam incident angle, and the distance between the scanner and
objects. The remaining sections of the paper are structured as follows. First, the
object characteristics and scanning geometry that might affect the noise level and the
quality of point clouds are discussed. Then the test bed, the experiment, and its
findings are presented.
Color and Surface Reflection. A 3D laser scanner works with a signal reflected
back from the object surface to the receiving unit. The reflective abilities of the
surface (albedo) affect the signal strength (Ingensand, 2006; Tang et al., 2009); as
reported in Boehler et al. (2003), white surfaces result in strong reflections, whereas
reflection is weak from black surfaces. Accordingly, the detection of colored
surfaces depends on the spectral characteristics of the laser beam (green, red, near
infrared), and shiny surfaces pose detection challenges (Boehler et al., 2003). Surfaces
and colors observed in a visible spectrum by naked eye may not necessarily be
detectable by the laser scanners (Becerik-Gerber et al, 2010). Therefore, surfaces of
different reflectivity may result in systematic errors (Boehler et al, 2003). This
reflection retraces the path of the transmitted beam that depends on the object
properties, such as its material and its shape dependent anisotropy, and the scanning
geometry (Soudarissanane, 2007).
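As a rough illustration of how these effects combine (a generic first-order model for diffuse surfaces, not an equation taken from the cited works), return intensity can be treated as proportional to surface reflectance, the cosine of the incidence angle, and the inverse square of the range:

```python
# First-order lidar return sketch for a diffuse (Lambertian) surface:
# intensity ~ reflectance * cos(incidence angle) / range^2. Values illustrative.
from math import cos, radians

def relative_return(rho, incidence_deg, range_m):
    """Relative return intensity (arbitrary units)."""
    return rho * cos(radians(incidence_deg)) / range_m ** 2

white_near = relative_return(rho=0.9, incidence_deg=0, range_m=10)
dark_far = relative_return(rho=0.1, incidence_deg=60, range_m=40)
print(white_near / dark_far)  # the dark, oblique, distant surface returns far less
```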
Distance (Range). A time-of-flight scanner calculates the distance as half the
product of the speed of light and the round-trip travel time of the laser
pulse. During scanning, a laser scanner generates a triangle between the scanner
lens, laser, and object to gather accurate 3D data by the principle of laser
triangulation (Froehlich and Mettenleiter 2004). To obtain the x, y, z coordinates of
an object, the distance between the scanner lens and the laser, also known as the
parallax base, and the angle of the laser as provided by the galvanometer, must be
established. Time-of-flight scanners use two methods for distance measurement
(Bogue, 2010). The first method uses amplitude-modulated light and measures the
phase difference between the returned signal and a reference signal to calculate the
distance (Lange, 1999). The second method calculates the distance by direct measurement of
the runtime of a travelled light pulse using arrays of single-photon avalanche diodes
(Falie and Buzuloiu, 2007). To ensure a larger scan point density, studies (Becerik-
Gerber et al., 2010; Boehler et al., 2003) show that fixed, paddle, or sphere targets
may be used if their precise positions are surveyed with instruments and methods that
are more accurate than those of a laser scanner. While measuring distances, technical
specifications such as scanning speed and spatial resolution (Froehlich and
Mettenleiter, 2004), field of view (Ingensand, 2006; Ryde, 2009; Tuley, 2005), and
accuracy of range measurement (Lichti and Harvey, 2000) must be considered.
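The two measurement principles can be sketched as follows; the timing and modulation-frequency values are illustrative:

```python
# Sketch of the two range-measurement principles described above.
from math import pi

C = 299_792_458.0  # speed of light, m/s

def range_from_pulse(round_trip_s):
    """Direct time-of-flight: distance is half the round-trip path length."""
    return C * round_trip_s / 2

def range_from_phase(phase_rad, mod_freq_hz):
    """Phase-shift method: unambiguous only within half a modulation wavelength."""
    return C * phase_rad / (4 * pi * mod_freq_hz)

print(range_from_pulse(266.85e-9))     # ~40 m
print(range_from_phase(pi / 2, 10e6))  # ~3.75 m
```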
Angle of Incidence. In laser triangulation, a beam is projected, at a changing angle,
from one end of a base onto an object, and a CCD camera at the other end of the
base detects the laser spot on the object surface. In laser scanning operations,
the performance is affected and limited by the laws of retro-directive reflection (Tang
et al., 2007; Yong-hua et al., 2009; Ingensand, 2006), where the laser pulse irradiates
objects, and by the optical properties of materials (Tuley, 2005). When a laser sensor
measures a
scan scene, its beam rotates horizontally and/or vertically. This establishes a
relationship between the incident and reflected beams, which in turn strongly
influences the accuracy. The different incidence angles of the laser beam on a surface
result in 3D points of varying quality.
EXPERIMENT DESCRIPTION
Experiments were carried out at Johnny Rosenblatt Stadium in Omaha, Nebraska,
which was built in 1947 and has a maximum capacity of 23,000 people. To conduct
the experiments, the entire exterior stadium façade was
scanned with a time of flight scanner (Figure 1). Objects that had different
homogenous and heterogeneous materials with different colors including brick walls
and columns, red steel columns and trusses, blue steel columns and trusses, and silver
steel flagpoles were present in the scan scene. The curvilinear architecture of the
stadium façade required wide shot scanning, which provided a unique opportunity for
analyzing the effects of different angles and distances.
The exterior façade was scanned at high resolution from four different locations.
Each scan shot, including equipment set-up, target acquisition, and scanning, took
about 1 hour. Technical specifications of the time-of-flight scanner used in this study
are listed in Table 1.
Captured point clouds were extracted and the object detection statuses were defined.
Figure 2 illustrates four scanner locations and two types of steel columns and
flagpoles used in the analysis.
Figure 2: Four scanner locations, two types of columns and flagpoles
RESULTS
Most of the red steel columns were not detected when the façade was scanned from
scanner locations B and C. However, almost all columns were detected when the
façade was scanned from scanner locations A and D. In Figure 3, the screenshots of
the point clouds are presented. In general, red colored objects return a very low laser
return intensity for any laser scanner that employs a visible green laser (Hiremagalur
et al., 2007). However, the results show that there is a correlation between the object
texture, distance and angle of incidence and detection of objects. In addition, other
objects with red color such as brick columns which were in the same relative position
as the red steel columns in terms of distance and angle were detected with no issues.
Moreover, blue steel columns and trusses, and flagpoles were also detected from all
scanner locations. In order to determine which factors are more important for
detecting objects of different colors, an analysis of distance and orientation was
carried out and is discussed below in detail.
First, the façade was modeled in Autodesk Revit Architecture by using the imported
point cloud data. Then, distances and angles of incidence were calculated based on
the coordinates acquired. Both distances and angles were measured in horizontal
planes. The vertical angle is not considered in this analysis because it is
approximately constant for all the columns.
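The horizontal distance and incidence angle derived from the model coordinates can be computed as follows; the scanner and column coordinates and the wall orientation are hypothetical:

```python
# Sketch of the horizontal-plane scan geometry: range from scanner to column,
# and the beam's angle against the surface normal (0 deg = perpendicular hit).
from math import hypot, atan2, degrees

def scan_geometry(scanner_xy, column_xy, wall_normal_deg):
    """Horizontal range (m) and incidence angle (deg) from plan coordinates."""
    dx, dy = column_xy[0] - scanner_xy[0], column_xy[1] - scanner_xy[1]
    dist = hypot(dx, dy)
    beam_deg = degrees(atan2(dy, dx))
    incidence = abs(beam_deg - wall_normal_deg) % 360
    if incidence > 180:
        incidence = 360 - incidence
    return dist, incidence

# Hypothetical values: column 50 m away, wall normal aligned with the beam.
dist, angle = scan_geometry((0.0, 0.0), (30.0, 40.0), wall_normal_deg=53.13)
print(round(dist, 1), round(angle, 1))  # 50.0 0.0
```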
Figure 3: Point clouds captured from scanner locations A, B, C, and D, showing
detected and undetected columns
The numerical results along with the detection status of the columns are presented in
Table 2.
For scanner location A, based on the scanner setting (defined by the operator),
columns C1 to C6 were out of scanning range. However, the rest of the columns (C7
to C10) were detected successfully. For scanner location D, columns C5 to C10 were
out of scanning range. The rest of the columns were detected except C1. For scanner
locations B and C, all columns were in line of sight and scan range. For scanner
location B, only C5 and C6 were detected. The two detected columns are at the center
of the façade and almost perpendicular to the scanner location axis. The distances to
these two columns are equal to 40 meters. For scanner location C, however, the
closest column is 43 meters from the scanner, yet no column was detected. Nonetheless, all
red brick columns at the same relative locations (under the red steel columns) were
detected from all scanner locations. The same method was carried out for blue steel
columns and silver flagpoles. While the distances between the scanner and the blue
columns were larger, and the distances between the scanner and the flagpoles
smaller, than the distances between the scanner and the red columns, all blue
columns and flagpoles were detected. These results indicate that texture, which
affects how the laser beam reflects from the surface, is important to the return
intensity of the laser beam. Moreover, for the red steel columns, the lack of detection
could be correlated with critical geometric characteristics in addition to the red color
effect. In order to determine the impact of distances and angles, the results are sorted
in Figures 4 and 5.
Figure 4: Relative column distances and detection rates for red steel columns
Figure 5: Relative angles and detection rates for red steel columns
No column more than 40 meters from the scanner could be detected. However, angle
does not play a significant role at large distances; even columns oriented almost
perpendicular to the scanner were not detected in some cases. It can be concluded
that there is no clear relation between detection, or lack of detection, and angle
alone. Based on this analysis, besides the importance of the surface material/texture
(in the case of the red brick columns), the most impactful factor is distance; angles
may become important when combined with critical distances, such as those above
40 meters. At critical distances, angles close to 90 degrees could affect the detection
of red objects when the laser source emits a green laser beam.
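The finding can be condensed into a crude screening rule; the 40-meter threshold comes from this single experiment with a green-laser scanner and red steel objects, and should not be read as a general limit:

```python
def detection_risk(low_return_color, distance_m, critical_range_m=40.0):
    """Flag setups where a low-return color (e.g., red under a green laser)
    combined with a long range risks non-detection, per this experiment."""
    if low_return_color and distance_m > critical_range_m:
        return "high"
    return "low"

print(detection_risk(True, 43.0))   # high: red steel beyond the critical range
print(detection_risk(True, 30.0))   # low: within range, detection succeeded here
```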
CONCLUSIONS
The paper presented a test bed that was developed to model how an object's color,
scanning distance, and angle of incidence influence point cloud quality. This test
bed, with its rich interplay of colors, materials, textures, and geometry, enabled the
authors to explore additional factors to which the AEC industry ought to pay special
attention when constructing three-dimensional virtual representations of buildings
and infrastructure. Experimental results indicate that the distance between the laser
sensor and objects with very low laser return intensity is essential to object
detection. This study can inspire future research in defining standard procedures for
scanning operations involving object colors with low laser return intensity.
ACKNOWLEDGMENTS
The authors would like to thank Optira Inc., which provided funding and expertise to
this project. Any opinions, findings, conclusions, or recommendations presented in
this paper are those of the authors and do not necessarily reflect the views of Optira.
REFERENCES
Akinci, B., Garrett, J., Patton, M., (2002). "A vision for active project control using
advanced sensors and integrated project models." Specialty Conference on
    FIAAP, ASCE, Virginia Tech, January 23–25, 386–397.
Anderson, D., Herman, H., Kelly, A. (2005). “Experimental characterization of
commercial flash ladar devices.” In Proceedings of International Conference on
Sensing Technologies.
Arayici, Y., (2007). "An approach for real world data modeling with the 3D terrestrial
    laser scanner for built environment." Automation in Construction, 16, 816-829.
Becerik-Gerber, B., Jazizadeh, F., Kavulya, G., Calis, G. (2010). “Assessment of
target types and layouts in 3D laser scanning for registration accuracy.”
Automation in Construction.
Boehler, W., Bordas-Vicent, M., Marbs, A. (2003). "Laser Scanner Accuracy."
Proceedings of the 19th. CIPA Symposium, ISPRS/CIPA, 696-701.
Bogue, R. (2010). "Three-dimensional measurements: a review of technologies and
    applications." Sensor Review, 102-106.
Cheok, G. S., Stone, W. C., Bernal, J., (2001). "Laser scanning for construction
    metrology." National Institute of Standards and Technology. American Nuclear
Society 9th International Topical Meeting on Robotics and Remote Systems,
Seattle, Washington, March 4-8.
Falie, D., Buzuloiu, V.( 2007). Noise characteristics of 3D Time-of-Flight cameras.
In Proceedings of IEEE Symposium on Signals Circuits & Systems (ISSCS),
Iasi, Romania, 229-232.
Franaszek, M., Cheok, G.S., Witzgall, C.(2009). “Fast automatic registration of range
images from 3D imaging systems using sphere targets.” Automation in
Construction, 265-274.
Froehlich, C., Mettenleiter, M, (2004). “Terrestrial Laser Scanning-New Perspectives
in 3D Surveying.” International Archives of Photogrammetry, Remote Sensing
and Spatial Information Sciences, 36, 7-13.
Gong, J., Caldas, C., (2007). "Processing of high frequency local area laser scans for
construction site resource management." Proceedings of the 2007 ASCE
COMPUTING IN CIVIL ENGINEERING 177
ABSTRACT
With the development of digital imaging technology, digital photogrammetry
has found various engineering applications, such as architecture, automotive and
aerospace engineering. Although digital photogrammetry allows the generation of
3D photogrammetric data with high density and resolution, it has not been as popular
in construction as it has in other industries. In particular, acquisition and processing
of 3D photogrammetric data from digital photogrammetry for construction progress
measurement applications is at an early stage, and its feasibility has not been
evaluated. The objective of this research is to propose a processing method for
progress measurement applications that utilizes 3D as-built data acquired using
photogrammetry technology. For this purpose, a framework consisting of 3D
photogrammetric data acquisition, 3D photogrammetric data refinement, and 3D
structural components detection is presented. The effectiveness of the proposed
method is verified by evaluating the quality of the processed 3D photogrammetric
data with respect to the density. The preliminary experimental result shows that using
processed 3D photogrammetric data for advanced and automated construction
progress measurement applications is possible.
INTRODUCTION
With the development of digital imaging technology, photogrammetry offers
low cost, portable, and accurate methods of obtaining three-dimensional spatial
information. Based on these advantages, photogrammetry has been widely utilized in
various engineering applications such as architecture, automotive, and aerospace
engineering. Further, advances in digital photogrammetric systems have achieved
high density and resolution, to the extent that deviations between 3D as-built data
and an as-planned 3D CAD model can be detected.
Although photogrammetry offers new opportunities for users to obtain dense
and accurate 3D photogrammetric data, these data can contain noise. The cause of
noise is a mismatch between corresponding pixels across images (Snavely et al.
2010). A construction site is a cluttered outdoor environment, so images obtained
there contain illumination variations, sensor noise, and occluded pixels. Such pixels
cause mismatches because they appear in only one image and have no correspondence
in the other images (Niese et al. 2007). Thus, 3D photogrammetric data obtained from a
construction site can contain a large amount of noise. Such noisy data may reduce
the accuracy of progress measurement; for this reason, processing is needed.
Many studies have addressed progress measurement using LADAR
(Laser Detection and Ranging), which does not require complex processing (Shih and
Wang 2004; Bosche 2010). However, LADAR is not only expensive and time-
consuming during data collection but also constrained in scanner placement
(Golparvar-Fard et al. 2009). These limitations are critical for progress
measurement applications, which require continual acquisition of as-built data.
The objective of this research is to propose a 3D photogrammetric data
acquisition and processing method for progress measurement applications, using
photogrammetry technology. The proposed process consists of 3D photogrammetric
data generation, refinement on 3D photogrammetric data, and 3D structural
component detection. The results of the experiment show possibilities for applying
the proposed process to automatic construction progress measurement.
directly and effectively in practical applications. The noise is reduced using a tensor
voting algorithm to achieve better 3D photogrammetric data for construction sites.
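Tensor voting itself (Medioni et al. 2000) encodes each point as a tensor and infers surface saliency from neighbor votes, which is beyond a short sketch. As a simplified stand-in for the refinement step, the following Python sketch removes isolated points with a k-nearest-neighbor statistical filter; the synthetic point cloud, k, and threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, n_std=2.0):
    """Drop points whose mean distance to their k nearest neighbors
    is more than n_std standard deviations above the cloud average.
    A simple stand-in for the paper's tensor-voting refinement."""
    tree = cKDTree(points)
    dist, _ = tree.query(points, k=k + 1)   # first column is the point itself
    mean_d = dist[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + n_std * mean_d.std()
    return points[keep]

# Dense synthetic wall patch plus a few isolated noise points far away.
rng = np.random.default_rng(3)
wall = rng.uniform(0, 1, (1000, 3)) * [2, 0.01, 1]
noise = rng.uniform(3, 5, (10, 3))
cleaned = remove_outliers(np.vstack([wall, noise]))
```

The dense wall points survive the filter while the sparse far-away points are rejected, mimicking the noise reduction applied to the construction-site cloud.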
The next step is 3D structural components detection based on the color model,
using a machine learning algorithm from 3D photogrammetric data for construction
sites. The RGB color space of the acquired 3D photogrammetric data is converted to the
HSI color space. Then, in order to detect the structural components based on their color
information, a support vector machine is used. After extracting the structural
components, the 3D photogrammetric data corresponding to them is acquired. The
obtained 3D as-built data of the structural components in progress can be utilized for
advanced and automated construction progress measurement applications by
comparing the obtained data with the as-planned data, such as that obtained using a
3D CAD model. The process is described in detail in the following section.
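The color-based detection step can be sketched as follows: convert each point's attached RGB color to HSI, then train a support vector machine on labeled color samples. The training data below is synthetic (low-saturation "concrete" versus saturated background) and the class boundaries are assumptions for illustration, not the paper's trained model.

```python
import numpy as np
from sklearn.svm import SVC

def rgb_to_hsi(rgb):
    """Convert an (N, 3) array of RGB values in [0, 1] to HSI."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum.reduce([r, g, b]) / np.maximum(i, 1e-9)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b > g, 2.0 * np.pi - h, h)   # hue in [0, 2*pi)
    return np.column_stack([h, s, i])

# Hypothetical training samples in HSI: label 1 for grayish structural
# concrete (low saturation), 0 for more saturated background.
rng = np.random.default_rng(0)
concrete = rng.uniform([0, 0.0, 0.3], [2 * np.pi, 0.15, 0.8], (200, 3))
background = rng.uniform([0, 0.4, 0.0], [2 * np.pi, 1.0, 1.0], (200, 3))
X = np.vstack([concrete, background])
y = np.array([1] * 200 + [0] * 200)

clf = SVC(kernel="rbf").fit(X, y)
# Classify 3D points by the HSI value of their attached color.
points_rgb = np.array([[0.5, 0.5, 0.52], [0.8, 0.2, 0.1]])
labels = clf.predict(rgb_to_hsi(points_rgb))
```

Points labeled 1 would be kept as candidate structural components; their coordinates form the as-built subset compared against the 3D CAD model.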
METHODOLOGY
In this section, the process of acquiring and processing 3D photogrammetric
data from digital photogrammetry is explained with outdoor experimental results
from a construction site where concrete buildings were under construction.
In the section entitled “3D Photogrammetric Data Acquisition for Construction Sites,”
image acquisition issues for acquisition of 3D photogrammetric data for construction
sites using photogrammetry are discussed and the process of generation of 3D
photogrammetric data from construction site images using the Photomodeler Scanner
is introduced. Then, a detailed method for refining 3D photogrammetric data of
construction sites using the tensor voting algorithm is presented in the
section entitled “3D Photogrammetric Data Refinement.” The final step, described in
the section entitled “3D Structural Components Detection,” is to detect the structural
components in progress, based on the color information using a machine learning
algorithm from 3D photogrammetric data for construction sites.
Figure 3. (a) 3D photogrammetric data for the construction site;
(b) magnified portion of (a).
Figure 4. (a) Refined 3D photogrammetric data; (b) magnified portion of (a).
Figure 5. (a) 3D structural components detection result;
(b) magnified portion of (a).
VERIFICATION
To utilize the processed 3D data in progress measurement applications,
high-quality 3D as-built data are required, both to ensure a high level of detail of
the structural components and to better interpret and compare the differences
between the as-built data and that obtained through 3D CAD models. In this section,
the quality of the processed 3D photogrammetric data is tested with respect to the
density to show the effectiveness of the proposed method.
Figure 6(a) shows the as-planned 3D CAD model and Figure 6(b) shows the
3D CAD model overlapping with the corresponding as-built 3D data of the structural
components. The density of the 3D processed data is evaluated by calculating the
number of points per m² for ten parts obtained from the proposed method [marked by
the red boxes in Figure 6(b)].
Figure 6. (a) The as-planned 3D CAD model; (b) the 3D CAD model
overlapping with the corresponding 3D as-built data.
The density of the ten parts is presented in Table 1. For the 3D data acquired
and processed using the proposed method, approximately 1,590 points
per m² were achieved on average at a range of about 50 m. Compared to a high-density laser
scanner (e.g., a Trimble GX 3D laser scanner), which produces 3D data with
approximately 1,110 points per m² at the same range, the obtained data has an
acceptable level of quality.
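The density metric above is simply a point count divided by patch area. A minimal sketch, assuming an axis-aligned planar patch (e.g., a wall face parallel to the x-z plane) and synthetic points, shows the computation:

```python
import numpy as np

def patch_density(points, xmin, xmax, zmin, zmax):
    """Points per square meter on an axis-aligned planar patch.

    points: (N, 3) array; the patch is assumed parallel to the x-z
    plane (e.g., a wall face), so density = count / (width * height).
    """
    inside = ((points[:, 0] >= xmin) & (points[:, 0] <= xmax) &
              (points[:, 2] >= zmin) & (points[:, 2] <= zmax))
    area = (xmax - xmin) * (zmax - zmin)
    return inside.sum() / area

# Synthetic patch: a 2 m x 1 m wall section sampled with 3,200 points,
# comparable to the ~1,590 points/m2 average reported in Table 1.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(0, 2, 3200), np.zeros(3200),
                       rng.uniform(0, 1, 3200)])
print(patch_density(pts, 0, 2, 0, 1))  # 1600.0
```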
ACKNOWLEDGEMENTS
This research was supported by Basic Science Research Program through the
National Research Foundation of Korea (NRF) funded by the Ministry of Education,
Science and Technology (2010-0023229).
REFERENCES
Arias, P., Herraez, J., Lorenzo, H., and Ordonez, C. (2005). “Control of structural
problems in cultural heritage monuments using close-range photogrammetry
and computer methods.” Computers & Structures, 83(21-22). 1754–1766.
Bosche, F. (2010). “Automated recognition of 3D CAD model objects in laser scans
and calculation of as-built dimensions for dimensional compliance control in
construction.” Advanced Engineering Informatics, 24(1), 107–118.
Dai, F. and Lu, M. (2010). “Assessing the accuracy of applying photogrammetry to
take geometric measurements on building products.” Journal of Construction
Engineering and Management, 136(2), 242–250.
Eos Systems, Inc. (2010). http://www.photomodeler.com/index.htm, last accessed on
December 31 2010.
Golparvar-Fard, M., Pena-Mora, F., and Savarese, S. (2009). “D4AR – A 4-
dimensional augmented reality model for automating construction progress
monitoring data collection, processing, and communication.” Journal of
Information Technology in Construction, 14, 129–153.
Medioni, G., Lee, M., and Tang, C. (2000). “A computational framework for
segmentation and grouping.” Elsevier Science, New York, NY.
Niese, R., Al-Hamadi, A., and Michaelis, B. (2007). “A novel method for 3d face
detection and normalization.” Journal of Multimedia, 2(5), 1–12.
Radisevic, G. (2010). “Laser scanning versus photogrammetry combined with
manual post-modeling in Stecak digitization.” Proc., 14th Central European
Seminar on Computer Graphics, Budmerice, Slovakia.
Reyes, L., Medioni, G., and Bayro, E. (2010). “Registration of 2D points using
geometric algebra and tensor voting.” Journal of Mathematical Imaging and
Vision, 37(3), 249–266.
Shih, N.J. and Wang, P.H. (2004). “Point-cloud-based comparison between
construction schedule and as-built progress: Long-range three-dimensional
laser scanner’s approach.” Journal of Architectural Engineering, 10(3), 98–
102.
Snavely, N., Simon, I., Goesele, M., Szeliski, R., and Seitz, S.M. (2010). “Scene
reconstruction and visualization from community photo collections.” Proc.
IEEE, 98(8), 1370–1390.
Data Transmission Network For Greenhouse Gas Emission Inspection
ABSTRACT
Exhaust from construction equipment is one of the major sources of
greenhouse gas (GHG) emissions in the construction industry. Collecting, monitoring,
and managing equipment emissions in real time will help ensure
contractors' compliance with applicable emission regulations and contractual
requirements. Existing emission compliance systems, however, fail to address the
complexity of construction operations. This paper presents an ad hoc network
optimization model for construction equipment emission inspection. Equipment-
specific emission data is collected by a device attached to each vehicle and
transmitted through the ad hoc network to the data processing server. The
optimal data transmission mechanism is modeled to minimize data loss during
transmission. The paper also demonstrates the high efficiency and accuracy of the
model through a simulation of various equipment distribution patterns and a
discussion of relaxed transmission capacity.
INTRODUCTION
There is general scientific consensus that global warming is occurring and that it
is primarily due to anthropogenic activities that have grown since pre-industrial times
(Pachauri, 2007). According to the EPA's GHG emission report (U.S. EPA, 2008), the
construction sector produced 6% of total U.S. industrial GHG emissions in 2002 and
has the third highest GHG emissions among industrial sectors. The major emission
source in the construction sector is fossil fuel combustion (76%): the use of fossil
fuels, such as gasoline, diesel, or coal, to produce heat or run equipment.
In order to control the GHG emission of the whole construction project
lifecycle, especially the construction stage, US EPA has put forward many related
programs or regulations, such as Diesel Emissions Reduction Program (U.S. EPA,
2005), Idling Reduction Program (U.S. EPA, 2010), and Clean Fuel Program (U.S.
EPA, 2000). Project owners often incentivize bidders through green contracts
that include emission control technologies or strategies as required or optional
provisions.
However, it is practically difficult to define a measurable and enforceable
operation pattern for green construction equipment. It is even harder to verify at the
project level that contractors comply with regulations or provisions in the
contract. Therefore, a real-time construction project monitoring system for GHG
emission is increasingly important to:
1) Collect information about GHG emissions during the whole lifecycle of the
project so as to correctly estimate the total actual impact on the field
environment.
2) Provide baseline data for the construction process so as to establish a
standard for green contract performance evaluation.
3) Monitor contractors' construction equipment behavior so as to ensure that the
process complies with provisions in the green contract.
4) Provide contractors with timely reminders or warnings once abnormal
information is detected by the inspection system.
In this paper, we consider a general construction project where different fleet
equipment operates within the construction field. Each piece of equipment has a
device installed to collect and transfer its emission data to the central
processing server. An ad hoc wireless sensor network is designed for better
information collection. A data transmission protocol between devices is
established to minimize data loss during transmission. A simulation of various
equipment distribution patterns is then conducted to demonstrate the efficiency and
accuracy of the model. Finally, different influencing factors are discussed with
respect to the model's limitations and future improvement.
Assumptions
In order to design a feasible and reasonable wireless ad hoc sensor network,
several assumptions need to be made for the system. Notation is shown in Table 1.
1) There are n pieces of equipment (with n devices) and a data server in the network.
The data server can only receive data, while devices can both send data to and
receive data from their peers.
2) To send data across distance d, there is a data loss proportional to d. This data loss
is regarded as the transmission cost in our model.
3) Each device generates emission data at a constant rate (g).
4) Each device has a transmission capacity limit (tc); the total flow sent out from the
device cannot exceed this limit.
5) Each device and the data server have a transmission range; they can only
transmit data to other devices within that range.
6) Equipment moves within the construction field, so a device may be within the
range of another at one time point and out of range later.
7) All devices are synchronized at all times; at any given moment they share the
same picture of how all devices are distributed.
Table 1 Table of Notation for the Transmission Model
n Number of equipment
Xij Transmission flow from node i to node j
Xik Transmission flow from node i to server or dummy node
Cij Unit data loss of transmission flow from node i to node j
Cik Unit data loss of transmission flow from node i to server or dummy node
g Data flow generated by each device
tc Total transmission capacity for each device
Transmission Model
Based on the assumptions made for the transmission mechanism, we could
formulate the problem as:
min  Σ_i Σ_j C_ij · X_ij + Σ_i Σ_k C_ik · X_ik                        Eqn 1
s.t. Σ_j X_ij + Σ_k X_ik = g + Σ_j X_ji,  for all i = 1, ..., n       Eqn 2
     Σ_j X_ij + Σ_k X_ik ≤ tc,            for all i = 1, ..., n       Eqn 3
     X_ij ≥ 0, X_ik ≥ 0
First of all, it is necessary to introduce a dummy node in the model for
potentially imbalanced nodes. If a device cannot find a way to transmit data out to
other devices, it can always send to the dummy node. The total flow going into the
dummy node represents the total data loss of the system. Of course, the dummy node
does not send out any data.
With this said, there are two types of data transmission cost. The first
is caused by distance and is proportional to it. The second is the
total amount of data lost, which is the total flow sent to the dummy node.
Therefore, the objective function (Eqn 1) minimizes the total cost of the system.
Equation 2 is the flow balance constraint: for each node, the flow sent out
to other nodes (including the server and the dummy node) must equal the data
it generates plus the flow it receives from other nodes.
Equation 3 is the transmission capacity constraint: for each node, the total
flow it sends out must stay under the transmission capacity limit tc.
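Eqns 1 to 3 form a linear program. A minimal sketch using SciPy with two devices, one server (s), and the dummy node (d); the cost values are hypothetical (device 2 is assumed out of the server's range), not taken from the paper's cases:

```python
from scipy.optimize import linprog

# Variable order: X12, X21, X1s, X2s, X1d, X2d.
g, tc = 2.0, 5.0
# Hypothetical costs: devices in range of each other; device 2 out of
# the server's range (cost 1000); sending to the dummy costs 500.
c = [0.5, 0.5, 0.8, 1000.0, 500.0, 500.0]

# Flow balance (Eqn 2): outflow - inflow = g for each device.
A_eq = [[1, -1, 1, 0, 1, 0],   # node 1
        [-1, 1, 0, 1, 0, 1]]   # node 2
b_eq = [g, g]
# Capacity (Eqn 3): total outflow per device <= tc.
A_ub = [[1, 0, 1, 0, 1, 0],
        [0, 1, 0, 1, 0, 1]]
b_ub = [tc, tc]

# Nonnegativity is linprog's default bound.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
# Device 2 relays its 2 units through device 1; device 1 forwards
# 4 units to the server, mirroring the relaying behavior of Case 2.
```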
NUMERICAL ANALYSIS
Basic Parameters
We consider a construction field with dimensions of 3 miles by 2 miles. There
are altogether six pieces of equipment moving in the field, represented by nodes 1
to 6. The data server and the dummy node are two additional nodes.
The data server, as well as each device, has a transmission range of L = 1 mile.
The sensor module on each device generates information at a rate of g = 2 units/s.
Each device transmits data to other devices, including the server, within a
flow capacity limit of tc = 5 units/s.
Absolute Positions
Since the equipment is moving within the construction field, the absolute
position of each device changes with time. Therefore, we need to design a series
of position coordinates (x, y) for each device. We consider the following four special
cases; other general conditions can be regarded as combinations of these cases.
1) All equipment is in the range of the server
2) Only one piece of equipment is out of the range of the server
3) Only two pieces of equipment are in the range of the server
4) One piece of equipment is at the far end of the field with no other equipment in
its range
Since the server is not moving, and is generally set up by the side
of the construction field, we set the static server position as (3, 0). Hence the position
coordinates of the six pieces of equipment are given as:
Table 2 Four Cases of Locations for Six Construction Equipment
Case 1 Case 2 Case 3 Case 4
Node x y x y x y x y
1 2.2 0.1 2.6 0.5 2.6 0.8 2.2 0.4
2 2.3 0.6 2.1 0.2 1.8 0.9 2.6 0.9
3 2.5 0.8 2.9 0.9 2.2 0.5 1.8 1.4
4 2.8 0.9 2.3 0.7 1.4 1.4 1.4 0.5
5 2.9 0.4 2.8 0.2 1 0.5 1 1.1
6 2.5 0.3 2 1.2 0.6 1.2 0.2 1.8
Cost Coefficients
Next we need the cost coefficients between each pair of nodes to calculate
the data loss. As discussed in the model, if a node is within the transmission range
of another node, the cost of the flow is proportional to the distance d between them;
we simply set the multiplier to 1. If a device is out of the range of another device,
the cost is set to a very large number, M1 = 1000. Since a node is not
allowed to send flow to itself, we set C_ii = 1000. Meanwhile, since
the flow sent to the dummy node represents information that is lost, we
allocate a fairly large cost to those flows, but smaller than the out-of-range cost:
M2 = 500. Therefore, the cost coefficients can be written as:
C_ij = d_ij,   if d_ij ≤ L,  i = 1, ..., 6, j = 0, ..., 6
C_ij = 1000,   if d_ij > L,  i = 1, ..., 6, j = 0, ..., 6             Eqn 4
C_ii = 1000
C_i,dummy = 500
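The cost coefficients can be assembled directly from the positions in Table 2. The sketch below uses the Case 1 coordinates and places the server at (3, 0), the corner for which all Case 1 devices fall within L = 1 (an assumption consistent with "all equipment in range"):

```python
import numpy as np

# Cost matrix from Case 1 device positions (Table 2).
L, M1, M2 = 1.0, 1000.0, 500.0
pos = np.array([[2.2, 0.1], [2.3, 0.6], [2.5, 0.8],
                [2.8, 0.9], [2.9, 0.4], [2.5, 0.3]])
server = np.array([3.0, 0.0])   # assumed corner position

n = len(pos)
C = np.full((n, n + 2), M2)          # last column: dummy node, cost M2
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
C[:, :n] = np.where(d <= L, d, M1)   # device-to-device costs
np.fill_diagonal(C[:, :n], M1)       # no self-transmission
ds = np.linalg.norm(pos - server, axis=1)
C[:, n] = np.where(ds <= L, ds, M1)  # device-to-server costs
```

With these coordinates every device-to-server distance is below 1 mile, which reproduces Case 1's result that each device sends its 2 units directly to the server.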
General Cases
Given the parameters, the transmission pattern could be solved through the
model:
Table 3 Simulation Result
X21 X31 X41 X51 X61 X32 X42 X52 X62 X43 X53 X63 X54 X64 X65
Case 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Case 2 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0
Case 3 2 0 0 0 0 -3 2 1 0 0 0 0 0 0 0
Case 4 0 0 3 0 0 3 0 0 0 0 1 0 1 0 0
X 1, s X 2, s X 3, s X4,s X5,s X6,s X1,d X2,d X3,d X4,d X5,d X6,d
Case 1 2 2 2 2 2 2 0 0 0 0 0 0
Case 2 4 2 2 2 2 2 0 0 0 0 0 0
Case 3 4 0 5 0 0 0 0 0 0 0 1 2
Case 4 0 5 5 0 0 0 0 0 0 0 0 2
Case one: all devices are located within the range of the data server (Figure 1).
In this case, since the cost coefficients are equal to the distance between device and
data server, each device will transmit its own data (2 units) to data server. There is no
data flow among devices and dummy node.
Case two: device #6 is located outside of the range of the data server (Figure
2). Since device #6 cannot transmit data to data server directly, it has to transmit its
data to device #1 and then device #1 sends all incoming data (4 units) to the data
server. All other devices are transmitting data directly to data server.
For case three, now with sufficient capacity, devices #4, #5 and #6 are able to
transmit all their data to device #2, which redistributes the flow to devices #1 and #3
(Figure 5). The data loss of the system is reduced.
Similarly, for case four, with the relaxation of the capacity limit, the system takes
more advantage of the low-cost path (device #5 sends data through 5–4–1–server
instead of 5–3–2–server) (Figure 6). However, increasing capacity cannot help
device #6, whose data are still lost because it is out of range.
Figure 5. Case Three with Relaxed Capacity
Figure 6. Case Four with Relaxed Capacity
REFERENCES
Lang, T., Wiemhöfer, H.-D., & Göpel, W. (1996). Carbonate Based CO2 Sensors with High
Performance. Sensors and Actuators B: Chemical , 34 (1-3), 383-387.
Mohapatra, P., & Krishnamurthy, S. (2004). Ad hoc networks, Technologies and Protocols.
California: Springer.
Pachauri, P. K. (2007). Acceptance Speech for the Nobel Peace Prize Award to the
Intergovernmental Panel on Climate Change(IPCC).
Toh, C.-K. (2002). Ad Hoc Mobile Wireless Networks: Protocols and Systems.
U.S. EPA. (2000). Clean Fuel Fleets. Retrieved from http://www.epa.gov/otaq/cff.htm
U.S. EPA. (2008). Quantifying Greenhouse Gas Emissions from Key Industrial Sectors in the
United States. http://www.epa.gov/ispd/pdf/greenhouse-report.pdf.
U.S. EPA. (2005). Subtitle G—Diesel Emissions Reduction. In ENERGY POLICY ACT OF
2005 (pp. 246-252). http://www.epa.gov/OUST/fedlaws/publ_109-058.pdf.
U.S. EPA. (2010). Technologies, Strategies and Policies: Idling Reduction. Retrieved from
http://www.epa.gov/SmartwayLogistics/transport/what-smartway/idling-reduction-
tech.htm
Xu, G. (2007). GPS Theory, Algorithms and Applications, 2nd edition. Springer.
Wearable Physiological Status Monitors for Measuring and Evaluating
Worker’s Physical Strain: Preliminary Validation
ABSTRACT
Construction activities are usually physically demanding and performed in
highly variable and often harsh environments. Excessive physical strain reduces
productivity and contributes to inattentiveness and accidents. Therefore, a monitoring system
able to assess workers’ physical strain may be an important step towards better safety
and productivity management. Previous efforts to assess construction workers’
physical demand relied on instrumentation that hindered workers’ activities.
However, workers’ physical strain can now be monitored by recently introduced,
non-intrusive Physiological Status Monitors (PSMs). We have investigated three
PSMs to assess if they can effectively monitor a person during activities similar to
construction workforce’s dynamic activities. Comparing PSMs’ and standard
laboratory instruments’ measurements, we found that two of the selected PSMs are
mostly reliable and accurate. These preliminary results demonstrate the PSMs’
effectiveness in monitoring subjects during dynamic activities and show promise that
they can be successfully implemented to monitor construction workers’ physical
strain.
INTRODUCTION
Even though progress in construction equipment and workplace ergonomics
has reduced construction workers’ physical strain, many construction activities are
still physically demanding and have to be accomplished in challenging and harsh
environments. In fact, the construction work environment not only comprises heavy
lifting and carrying, pushing and pulling, but also vibrations and awkward work
postures (Hartmann & Fleischer, 2005). Anecdotal evidence suggests that physically
demanding work, safety and productivity are related (Abdelhamid & Everett, 2002;
Bouchard & Trudeau, 2008; Garet et al., 2005). Hence, the measure of physical strain
for construction activities is a crucial issue in managing productivity and preserving
the workforce’s health and safety.
Numerous studies have been focused on the assessment of workers’ physical
demands. Several authors accomplished studies comparing workers employed in
different trade or occupation, such as iron and steel industry workers (Kang, Woo, &
Shin, 2007), firefighters (Elsner and Kolkhorst, 2008), and choker setters (Kirk and
Sullman, 2001). Few studies on construction workforce are available (Abdelhamid
and Everett, 2002; Faber et al. 2009; Turpin-Legendre and Meyer, 2003).
Nevertheless, most of these studies present critical drawbacks. In some studies the
measuring equipment was clumsy and uncomfortable; therefore it hindered the
subjects during routine activities (Abdelhamid and Everett, 2002; Elsner and
Kolkhorst, 2008). In other studies the techniques selected to evaluate the physical
demands were suitable for only a small number of subjects or they could not monitor
the subjects continuously (Turpin-Legendre and Meyer, 2003). However, physical
strain can now be monitored using innovative and non-invasive physiological
monitoring technologies, called Physiological Status Monitors (PSMs), which are
able to continuously monitor workers in a remote and automated way. They are
comfortable and do not hamper the wearer during any type of activity. Thus, they
can be worn for several hours without interruption. PSMs have already been used to
monitor patients in remote healthcare, firefighters, miners, soldiers, and athletes.
However, to the best of our knowledge there are no studies assessing PSMs’ reliability
during dynamic activities similar to the construction workforce’s routine activities. Hence, the
aim of this paper is to evaluate three commercially available PSMs to assess if they
can effectively monitor a person in simulated construction activities. Initially, a brief
description of the selected PSMs is provided. Then, the selection of the monitored
parameters is explained and the different experiments are described. Finally, the
collected data are discussed and conclusions are drawn from these preliminary results.
METHODOLOGY
Several techniques and methods have been developed to assess physical
strain, including heart rate monitoring, rating of perceived exertion, oxygen
consumption, and motion sensors. In particular, it has been proved that Heart Rate
(HR) monitoring is an effective method in applied field studies (Kirk and Sullman,
2001). However, HR monitoring presents some limitations. The main issue is that
several factors not related to physical activity can greatly affect HR, reducing its
reliability in the assessment of physical strain. Motion sensors, such as
accelerometers have also been used to monitor physical strain because body
accelerations are directly proportional to muscular forces (Melanson and Freedson,
1996). Unfortunately, they are effective for repetitive activities (e.g., walking) but not
for complex, construction-type activities (e.g., carrying a load or walking on a gradient).
Nevertheless, coupling HR and accelerations can enhance the physiological strain
estimation’s accuracy. Motion sensors complement HR monitoring by differentiating
between HR changes caused by physical activity or by other factors. To evaluate
PSMs’ reliability in monitoring physical strain, it is necessary to assess their accuracy
in measuring HR and accelerations. The following sections include details on the
assessments for these two parameters. While some of the tested PSMs are also able to
monitor Breathing Rate, posture and skin temperature, this paper discusses results
from tests of HR and accelerations.
Physical Status Monitors (PSMs)
Three PSMs were selected (see Table 1 and Fig. 1): Zephyr BioHarness (BH)-
BT, Zephyr HxM, and Hidalgo EQ-01. All these devices present three main parts: the
monitoring unit (i.e., the sensor electronics module and monitoring belt worn around
the chest), the communication unit, and viewing and analysis software. Via
Bluetooth, BH-BT and EQ-01 can either transmit live data to a computer or work as
data loggers for several hours, whereas HxM can only transmit live data to portable
devices (e.g., cell phones or PDAs). Manufacturer-provided software is available for
BH-BT and EQ-01, whereas third-party applications for various mobile platforms are
available for HxM. For this project a smartphone application, Run.GPS
(eSymetric GmbH, Germany), was selected.
RESULTS
HR Assessment Results
To date, data from only two subjects have been collected. Therefore, these
data are not statistically conclusive. In these two experiments, HxM showed
inadequate performance in assessing HR (Fig. 2) in many of the performed activities
(correlation coefficients for each activity: r1= 0.89, r2= 0.12, r3= -0.12, r4= -0.02,
r5=0.77, and r6=0.25). We do not have enough data to clearly determine the reason for
such behavior, but we assume that it reflected poor contact between the chest belt
and the subject’s body. In fact, HxM was the only PSM not equipped with a shoulder
strap.
Heart rates derived from the PSMs and the EKG were highly correlated (BH-BT r = 0.960,
p<0.0001, 917 data sets; EQ-01 r = 0.936, p<0.0001, 845 data sets), as shown in Fig.
3. The Bland-Altman technique demonstrated good agreement between the PSMs and
the EKG, as shown in Table 2. Further, both PSMs showed a significant difference,
with a p-value less than 0.05. Descriptive statistics of the correlation coefficients and agreement
indexes across the six activities are shown in Table 3. Unlike EQ-01, BH-BT seemed
reliable in every activity, showing steadier behavior than EQ-01 across activities.
The two systems performed best in activities 1 (static) and 6 (walking). This result
supports the assumption that electric noise generated by chest muscles and
movement of the torso can affect PSMs’ measurements. Last, in almost every
activity both PSMs showed a significant difference from HR derived from
EKG.
Table 2. HR agreement indexes.
                  D        s       Data sets   > D+1.96s        < D-1.96s
                                                #       %        #       %
BH-BT vs EKG     1.389    3.469      917        28     3.05      31     3.38
EQ-01 vs EKG    -2.782    5.798      845        11     1.30      46     5.44
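The agreement indexes in Table 2 follow Bland and Altman (1986): the mean difference D between paired readings, its standard deviation s, and the counts of pairs outside the limits of agreement D ± 1.96s. A sketch on synthetic paired heart-rate data (the bias and spread only roughly mimic the BH-BT row; these are not the study's measurements):

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference D, standard deviation s, and counts of pairs
    outside the 1.96s limits of agreement for two paired series."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    d_bar, s = diff.mean(), diff.std(ddof=1)
    above = int((diff > d_bar + 1.96 * s).sum())
    below = int((diff < d_bar - 1.96 * s).sum())
    return d_bar, s, above, below

# Synthetic paired HR readings (bpm), not the study's data.
rng = np.random.default_rng(2)
ekg = rng.uniform(80, 160, 500)
psm = ekg + rng.normal(1.4, 3.5, 500)   # bias/spread similar to Table 2
d_bar, s, above, below = bland_altman(psm, ekg)
```

By construction roughly 5% of the pairs fall outside D ± 1.96s, matching the interpretation of the percentage columns in Table 2.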
[Figure 3 scatter plots with linear fits: BH-BT vs. EKG, y = 1.0094x - 2.4387, R² = 0.9222; EQ-01 vs. EKG, y = 0.9262x + 11.437, R² = 0.8757.]
Figure 3. HR derived from BH-BT and EKG (left) and from EQ-01 and EKG (right).
[Acceleration time histories, Acc (m/s²) versus Time (s), comparing BH-BT and EQ-01 with the Vicon reference.]
[Figure 5 scatter plots with linear fits: BH-BT vs. Vicon, y = 1.0876x - 0.9214, R² = 0.8816; EQ-01 vs. Vicon, y = 0.4985x + 4.9497, R² = 0.1183.]
Figure 5. Acc. derived from BH-BT and Vicon (left) and from EQ-01 and Vicon (right).
Table 5. Descriptive statistics of correlation coef. and agreement indexes for acc.
BH-BT EQ-01
r D s r D s
mean 0.952 -0.014 0.840 0.407 0.109 2.627
st. dev. 0.019 0.117 0.302 0.226 0.335 1.102
median 0.956 -0.082 1.014 0.355 -0.084 1.990
max 0.968 0.121 1.014 0.654 0.496 3.899
min 0.931 -0.082 0.491 0.211 -0.084 1.990
Using a wide variety of activities and tests, the researchers were able to
initiate a comprehensive assessment of PSM performance in terms of HR and
acceleration. Even though the small sample size limits the statistical validity of the
study, these preliminary results demonstrate the poor reliability of the HxM in assessing
HR, and the effectiveness of BH-BT and EQ-01 in monitoring subjects during
dynamic activities similar to the construction workforce's routine activities. In fact, even
though these PSMs showed significant differences with respect to the lab instruments in
almost every test performed, the assessed correlation and agreement make them suitable
candidates as physiological monitoring devices for the construction workforce.
COMPUTING IN CIVIL ENGINEERING 201
To date, HR data from only two subjects have been collected. Thus, to obtain a
comprehensive assessment of PSMs capability the next steps of our research project
include: (1) enrolling other subjects to expand the data collection up to at least ten
subjects, and (2) assessing the accuracy of the breathing rate and skin temperature
sensors. Moreover, we will use PSMs in simulated construction activities to analyze
the relationship between physical strain and workers’ productivity.
ACKNOWLEDGEMENT
The authors would like to thank the Exercise Physiology Lab and the
MARHES Lab at the University of New Mexico for providing the instruments
necessary for this study, as well as the lab assistants Jeremy Clayton Fransen and Ivana
Palunko for their time and effort in performing the experiments.
REFERENCES
Abdelhamid, T.S., & Everett, J.G. (2002). Physiological demands during construction
work. Journal of Construction Engineering and Management, 128(5), 427-
437.
Bland, M., & Altman, D. (1986). Statistical Methods for Assessing Agreement
between Two Methods of Clinical Measurement. The Lancet, 327(8476), 307-
310.
Bouchard, D.R., & Trudeau, F. (2008). Estimation of energy expenditure in a work
environment: comparison of accelerometry and oxygen consumption/heart
rate regression. Ergonomics, 51(5), 663-670.
Elsner, K.L., & Kolkhorst, F.W. (2008). Metabolic demands of simulated firefighting
tasks. Ergonomics, 51(9), 1418-1425.
Faber, A., Strøyer, J., Hjortskov, N., & Schibye, B. (2009). Changes in Physical
Performance among Construction Workers during Extended Workweeks with
12-hour Workdays. International Archives of Occupational and
Environmental Health, 83(1), 1-8.
Garet, M., Boudet, G., Montaurier, C., Vermorel, M., Coudert, J., & Chamoux, A.
(2005). Estimating relative physical workload using heart rate monitoring: a
validation by whole-body indirect calorimetry. European Journal of Applied
Physiology, 94(1), 46-53.
Hartmann, B., & Fleischer, A. (2005). Physical Load Exposure at Construction sites.
Scandinavian Journal of Work Environment and Health, 31, 88-95.
Kang, D., Woo, J., & Shin, Y. (2007). Distribution and determinants of maximal
physical work capacity of Korean male metal workers. Ergonomics, 50(12),
2137-2147.
Kirk, P.M., & Sullman, M.J.M. (2001). Heart rate strain in cable hauler choker setters
in New Zealand logging operations. Applied Ergonomics, 32(4), 389-398.
Melanson, E.L., & Freedson, P.S. (1996). Physical activity assessment: a review of
methods. Critical Reviews in Food Science and Nutrition, 36(5), 385-396.
Turpin-Legendre, E., & Meyer, J. (2003). Comparison of physiological and subjective
strain in workers wearing two different protective coveralls for asbestos
abatement tasks. Applied Ergonomics, 34(6), 551-556.
A Framework for Optimizing Detour Planning and Development around
Construction Zones
ABSTRACT
Construction zones are areas of a traffic way where construction, maintenance, or
utility work is identified by warning signs, signals, and indicators, including those on
transport devices, that mark the beginning and end of the zone. Construction
zones are among the most dangerous work areas, with workers facing workplace
safety challenges that often lead to catastrophic injuries or fatalities. In addition, daily
commuters are also impacted by construction zone detours that affect their safety and
daily commute time. These problems represent major challenges to construction
planners, as they are required to plan vehicle routes around construction zones in a
way that maximizes the safety of construction workers and reduces the impact on
daily commuters. This paper presents a study that aims at developing a framework for
optimizing the planning of construction detours. The main objectives of the study are
to: 1) identify all the decision variables that affect the planning of construction
detours; 2) quantify the impact of these decision variables on construction workers
and daily commuters; and 3) implement a model based on shortest path formulation
to identify the optimal alternatives for construction detours. The ultimate goal of this
study is to offer construction planners essential guidelines to improve
construction safety and reduce construction zone hazards, as well as a critical tool for
selecting and optimizing construction zone detours.
INTRODUCTION
Many commuter drivers have to go through traffic detours on a daily basis.
Traffic detouring (also known as rerouting) is the process of forcing the through
traffic to follow an alternative path to the usual path in order to promote the safety of
construction workers, the safety of commuters and the efficiency of traffic flow. As
such, the provided alternative path is usually selected to ensure the orderly movement
of all road users on streets and highways throughout construction and work zones. In
addition to construction zones, traffic detours are used for lane closures due to
adverse weather conditions, road maintenance work, utility construction activities,
among other reasons. Traffic detours are typically identified by warning signs, signals
and indicators, including those on transport devices that guide commuters throughout
the detour. Local authorities usually require construction planners to include detailed
traffic detour plans whenever construction work is expected to affect the traffic flow
around the construction zone. The requirements for such traffic detours vary from one
state to another, and even between counties and cities within the same state.
Most local authorities and municipalities pay close attention to making
sure that detour signs are easily understood both by local residents who are familiar
with the area and by daily commuters who are familiar with only the main traffic path.
There are no specific guidelines for defining the path of the detour other than not to
detour traffic into roads that are known to be at or exceeding road capacity (i.e., roads
that failed to achieve the desirable level of service). The lack of guidelines and tools
to help construction planners in selecting an efficient detour can lead to overlooking
potentially good choices of available traffic detours. As such, there is a need for a
system that specifies detour guidelines and helps construction planners identify
optimal traffic routes that maximize the safety of construction workers and
commuters, while efficiently maximizing traffic flow.
The main objective of this research is to develop guidelines and tools that can
help construction planners select optimal traffic detour routes. Potentially, this could
result in safer construction zones, reduced traffic congestion and greenhouse gas
emissions, and better utilization of the available capacity of the entire traffic network.
LITERATURE REVIEW
Several studies have been conducted to evaluate the safety of highway
construction zones in several locations in the United States. Harb concluded that work
zones produce a significantly higher rate of crashes when compared to non-work zone
locations (Harb 2009). Harb cited that motor vehicle crashes increase by 26% during
construction or roadway maintenance in work zones (Harb 2009).
Anderson et al. introduced the concepts of assignment, transshipment, and
shortest route problems. They categorized traffic rerouting problems under a category
of linear programming called network flow problems. The network model for such
problems consists of nodes and arcs (Anderson et al. 2007). Focusing on shortest route
problems, they considered the main objective for such problems is to find the shortest
path or route between two nodes of the network. This can also be expressed as a
transshipment problem with one origin and one destination. By transporting one unit
from one point (the origin) to another point (the destination), the solution is
determined by finding the shortest route through the network (Anderson et al. 2007).
Radwan discussed the advantages of utilizing new techniques for tackling
traffic incidents, whether these incidents are natural such as hurricanes and floods or
manmade such as road construction and car accidents. Radwan emphasized the
importance of having a good detour around the incident location (Radwan 2003).
Snelder et al. described how a disturbance of even a small section of a
network can cause a major disruption to that network as a whole, making it
vulnerable and prone to all types of traffic problems, including congestion and delays.
They developed a methodology to analyze the specification of the design standards,
analyze the road network and test the quality of the network. The developed
methodology is reported to improve the network by decreasing the travel time by
2.3% and decreasing the lost time in case of accident by 29%. Moreover, the average
speed is reported to have increased by 1.6% (Snelder et al. 2009).
Network Representation
In order to select the optimal solution, a mechanism for identifying all
available alternatives should be developed. The traffic network segments are
represented by nodes and arcs. These are then compiled into a single matrix to
represent the travel time or cost and allow the software to select feasible routes.
Formulating Shortest Path Problem
To select the optimal solution, a modification to the shortest path formulation
is proposed. This method is based on sending a single unit of flow from one node
(e.g. node 1) to a destination node (e.g. node m) at the least possible cost/time
(Bazaraa 1990). The mathematical formulation of the problem could be described as
follows:
Minimize Σ(i,j) cij·xij

Subject to:

Σj x1j − Σj xj1 = 1        (origin, node 1)
Σj xij − Σj xji = 0        (every intermediate node i)
Σj xmj − Σj xjm = −1       (destination, node m)
xij ∈ {0, 1}

The summations are taken over the existing arcs of the network. The variable xij
(which is equal to 0 or 1) indicates whether or not arc (i, j) is in the path.
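Because all arc costs here are non-negative, the same optimum found by the one-unit-of-flow formulation can be computed with Dijkstra's algorithm. A self-contained sketch on a small hypothetical network (node labels and arc costs are illustrative, not from the case study):

```python
# Shortest path on a small hypothetical detour network via Dijkstra's
# algorithm; equivalent to sending one unit of flow from origin to
# destination at least cost when all arc costs are non-negative.
import heapq

def shortest_path(arcs, origin, dest):
    """arcs: {(i, j): cost}. Returns (total cost, node list) origin -> dest."""
    adj = {}
    for (i, j), c in arcs.items():
        adj.setdefault(i, []).append((j, c))
    heap = [(0.0, origin, [origin])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dest:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return float("inf"), []        # destination unreachable

# Hypothetical arc travel times (minutes) for candidate detour links.
arcs = {(1, 2): 4, (1, 3): 7, (2, 3): 2, (2, 4): 8, (3, 4): 3}
cost, path = shortest_path(arcs, 1, 4)
print(cost, path)   # shortest detour: cost 9.0 via [1, 2, 3, 4]
```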
APPLICATION EXAMPLE
To better understand the concept of Total Route Cost, an application example
is given for a construction zone within the city of Orlando. It was noticed that many daily
commuters who work and/or study at Valencia Community College drive through the
intersection of South Goldenrod Road (SR 551) and Lake Underhill Road. The
common route for those commuters first heads north on Goldenrod Road (SR
551) until reaching the intersection of Goldenrod Road (SR 551) with Valencia
College Lane (a distance of 1 mile), then heads east on Valencia College Lane
(a distance of 2.1 miles) until reaching the destination, Valencia Community
College. The total distance of this route is 3.1 miles, as illustrated in Figure 1.
To formulate the shortest path model for this problem, part of the East-West
section (link) of the daily commuter route of Figure 1 is closed, as shown in Figure 2.
The closed section or link is the link from the beginning of Valencia College Lane
until the intersection of Valencia College Lane with North Chickasaw Trail. This link
is considered a critical link in the daily commuter route, since passing through it is a
must for the daily commuter to reach the destination. Closing this link is expected to
generate substantial disturbance to the network. The length of the closed link is
approximately half a mile, as illustrated in Figure 2. In order to close this section of the
daily commuter route, realistic alternative routes are sought and examined using the
modified shortest path formulation. These realistic alternatives are evaluated and
assessed based on the optimization objective (cost or time) to find the most feasible
alternative route for the original common route.
It should be noted that when it comes to economic feasibility, usually a
comparison is conducted between the cost of blocking the whole road for a shorter
total duration, and the cost of partially blocking the road for a relatively longer
duration. While the former allows faster completion of road construction tasks, the
latter requires less detour planning. In addition to these costs, the cost of delay due to
construction should be considered in assessing the alternatives and selecting the best
solution. Moreover, the social value of delay and how much a daily commuter is
willing to pay to avoid going through traffic congestion must be precisely estimated
in terms of time and money. Since the travel time per unit distance is an inverse
function of the speed, delays are expected to noticeably increase as the speed goes
down. Also with reduced speeds, density rises as more and more users enter the
congested zone, reducing inter-vehicle spacing and causing the speed to fall to almost
zero. It has been reported that the travel times tend to rise exponentially as a function
of demand for the scarce road space, as illustrated in Figure 3.
In this study a formula developed by the Bureau of Public Roads (BPR) is
used to calculate the common travel time (FHWA 1979). In its standard form, the BPR
function estimates the link travel time as

t = tf [1 + 0.15 (v/c)^4]

where tf is the free-flow travel time, v is the traffic volume, and c is the capacity of the link.
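Assuming the conventional BPR parameters (alpha = 0.15, beta = 4), the function can be sketched as follows; the free-flow time, volume, and capacity values are illustrative:

```python
# Standard Bureau of Public Roads (BPR) link travel-time function:
#   t = t_free * (1 + alpha * (v / c) ** beta)
# with the conventional parameters alpha = 0.15 and beta = 4.
# The free-flow time, volume, and capacity below are illustrative values.

def bpr_travel_time(t_free, volume, capacity, alpha=0.15, beta=4):
    """Travel time on a link as the volume-to-capacity ratio grows."""
    return t_free * (1 + alpha * (volume / capacity) ** beta)

# A 5-minute free-flow link at increasing volume-to-capacity ratios:
for vc in (0.5, 1.0, 1.5):
    t = bpr_travel_time(t_free=5.0, volume=vc * 1800, capacity=1800)
    print(f"v/c = {vc}: {t:.2f} min")
```

Note how the quartic term reproduces the exponential-looking rise in travel time with demand described above: at v/c = 1.5 the 5-minute link already takes about 8.8 minutes.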
DATA COLLECTION
Data was collected on nine different locations. These nine locations are
considered critical points that commuters have to pass by in order to take one of the
alternative routes. The data was collected on regular days for three consecutive hours
and was analyzed on a 15-min basis in order to provide precise results and more
extensive information. The analysis also identified the peak hour precisely. Figures 4-
9 illustrate the realistic routes that can be taken by daily commuters to avoid the
construction zone.
Route A starts similar to the original route by heading north approximately
0.35 miles on FL-551 (Goldenrod Road), then turning right to merge onto FL-408 E
toward Titusville for a distance of approximately 0.7 miles. The route continues by
merging slightly left onto Central Florida Greenway/Eastern Beltway/Florida 417 North
for a distance of approximately 0.4 miles, then taking the Valencia College Lane exit, a
distance of approximately 0.3 miles, and finally turning right at Valencia College
Lane for a distance of approximately 1.0 mile to arrive at Valencia Community
College. The expected travel time on this route is approximately 4 minutes. The main
advantages of this route are its short distance and travel time. On the other hand, the main
disadvantage of this route is its cost, since it requires the commuter to take a toll road,
as illustrated in Figure 4.
Route B starts by having the commuter head east on Lake Underhill Rd
toward S Chickasaw Trail for a distance of approximately 2 miles, before turning left
at S Econlockhatchee Trail until reaching the main entrance of Valencia Community
College at Valencia College Lane. The main advantage of this route is its distance,
since it is considered the second shortest route. The main
disadvantage of this route is the existence of a critical facility (a hospital), as
illustrated in Figure 5.
Route F starts similar to route E but passes through Chickasaw Trail instead of
Econlockhatchee Trail. The direction for this route begins by first heading south on
FL-551 (Goldenrod Road) for approximately 1.7 miles then turning left at Curry Ford
Road, driving for a distance of 1.4 miles, before turning left at S Chickasaw Trail for
approximately 1.0 mile, then turning left at El Prado Avenue for 0.4 mile. The route
continues onto S Chickasaw Trail for another 1 mile, and ends by finally turning left
at S Econlockhatchee Trail, arriving at the final destination. This route takes
approximately 16 minutes to complete, as illustrated in Figure 9.
Table 1 Level of Service Definition for Basic Freeway Segments (TRP 2000)*
*Calculation of the Level of Service (LOS) is composed of the following steps: (1) calculation of the
free-flow speed (FFS), (2) determination of the flow rate, and (3) calculation of the LOS.
CONCLUSION
Implementation factors such as driving costs, critical facilities, school zones,
peak time, highway capacity, and level of service affect the selection of optimal
detour routes around construction zones. A framework for optimizing construction
detours has been proposed in order to take these factors into account when developing
detour plans. The framework is based on combining the effect of these factors into a
cost function and using a modified shortest path formulation to determine the optimal
routes. The formulation considers each objective to have a certain cost (utility) that
could be determined by the user in order to evaluate the overall quality of the
solution. This should prove useful to construction planners as it can help them
identify the optimal construction detour around construction zones.
REFERENCES
Bazaraa, M. S. (1990). Linear Programming and Network Flows, John Wiley & Sons,
New York.
Anderson, D. R., Sweeney, D. J., Williams, T. A., and Martin, R.K. (2007). An
Introduction to Management Science: Quantitative Approaches to Decision
Making, Thomson/South-Western.
FHWA (1979). Urban Transportation Planning System (UTPS), Federal Highway
Administration, Washington, DC.
Harb, R. (2009). Safety and Operational Evaluation of Dynamic Lane Merging in
Work Zones, PhD Dissertation, University of Central Florida, Orlando, FL.
Kockelman, K. (2004). Traffic Congestion. Handbook of Transportation Engineering.
M. Kutz. McGraw- Hill, New York.
Radwan, E. (2003). Framework for Modeling Emergency Evacuation. University of
Central Florida, Center for Advanced Transportation Systems Simulation, and
Florida Department of Transportation, Orlando, FL.
Snelder, M., Schrijver J. M., Immers, L. H., Egeter, B. (2009). "Designing Robust
Road Networks," Report no. 09-2718, the 88th Annual Meeting of the
Transportation Research Board, Washington, DC.
TRP (2000). Highway Capacity Manual 2000, Transportation Research Board,
Washington, DC.
A Multi-Objective Decision Support System for PPP Funding Decisions
Morteza Farajian1 and Qingbin Cui2
1 PhD Student, Department of Civil & Environmental Engineering, University of Maryland, College
Park, MD, USA, 20742; morteza@umd.edu
2 Assistant Professor, Department of Civil & Environmental Engineering, University of Maryland,
College Park, MD, USA, 20742; cui@umd.edu
ABSTRACT
PPP is an innovative delivery method used as an option to leverage public funds by
attracting private investment into public projects, making possible the delivery of projects
that would otherwise be infeasible. Leveraging resources to build more infrastructure increases the
output (quantity) of funds; however, besides leveraging resources, the public agency should
also increase the outcome (benefits) of those projects by utilizing more efficient funding
strategies. Currently, some project-level evaluation methods such as Value for Money (VfM)
and Benefit-Cost Analysis (BCA) are being practiced to evaluate PPPs; however, those
methods fail to consider the overall benefits and costs of PPPs for multiple stakeholders, and
they do not provide much assistance in comparing different projects at the portfolio level.
This study introduces a Multi-Objective Decision Support System (MODSS) which integrates
quantitative and qualitative
aspects of PPPs and calculates the utility function based on the different interests of multiple
stakeholders. The rate of return (ROR) on private investment, the regional economic benefits
for local people, and long-term national-level benefits are considered the main attributes to
address the different objectives of different stakeholders. This two-level MODSS model assists public
agencies such as FHWA to better spend their resources in special programs such as TIFIA by
optimizing their funding portfolio and allocating the available funds into the optimal portfolio
of PPP projects.
INTRODUCTION
Transportation projects in the US have been funded traditionally by excise taxes.
However, in recent years, due to the decrease in the excise tax revenue, and increase in
transportation fund needs, the gap between financial resources and needed funds to maintain
and improve transportation systems has widened (National Surface Transportation
Infrastructure Financing Commission, 2009). There are different options available to fill this
gap such as increasing taxes or using new financing resources such as private investment on
public projects. However, there is a huge public resistance against tax increase or road
privatization in the US. The other available option is increasing the efficiency of public
funding strategy in order to maximize the benefits from each dollar of tax payers’ money.
Public Private Partnership (PPP) is an innovative financing method which has been
recently used by numerous states in the US (Cui, Farajian, & Sharma, 2010) as an option to
leverage their resources and attract new financing resources to public projects. A Public-
Private Partnership can be broadly defined as a long term agreement between public and
private sectors for mutual benefits (HM Treasury 2000), whereby the private sector is awarded
the right to Design, Build, Finance and Operate (DBFO) a roadway, oftentimes a toll road, and
based on the risks that the private sector takes, it either gets paid through toll payments by
users or availability payments by state DOTs. Using PPP agreements, public agencies try to
bridge the increasing gap between required investments and limited funding, while increasing
the efficiency and shifting some of the risks during the design, construction and
operating/maintaining phases of the road to the private sector. However, the private sector
usually seeks an incentive such as federal loans, the Transportation Infrastructure Finance and
Innovation Act (TIFIA) for instance, or grants such as Transportation Investment Generating
Economic Recovery (TIGER). Due to the recent increase in the number of states that use
PPPs (Cui, Farajian, & Sharma, 2010), the number of projects that compete to
receive those loans and grants has increased.
In order to enhance the efficiency of funding decisions, funding agencies are limited
by legislation to use PPPs only if there is a well-defined and executed business case analysis
showing added value for money for PPPs compared to publicly financed projects. Due to
competition among different regions and different projects receiving public funds, decision
makers are obligated to decide about the priority of different competing projects before
allocating the grants or loans. In addition, the complexity of public private partnership
proposals and the existence of multiple objectives of different entities involved in such
projects creates a need for funding agencies to use a systematic decision support system that
is able to integrate both qualitative and quantitative aspects of those projects into a single
model in order to decide which projects among all qualified projects have higher priority in
receiving public assistance. The other decision that needs to be made is deciding the optimal
amount of money that each project should receive in order to maximize the benefits from
each dollar of taxpayers’ money. Optimization of this “funding strategy” is of special
importance because of the public resistance against an increase in tax rates or privatization of
public projects. One of the main criticisms of the public agencies is that they do not
efficiently utilize available resources. This research reviews the state of the art of the current
“funding strategy” in the US Departments of Transportation, and suggests a Multi-Objective
Decision Support System (MODSS) which can integrate both quantitative and qualitative
aspects of PPPs, as well as calculate the utility function for the decision maker. The MODSS
enables the decision maker to utilize a funding strategy that will allocate resources more
efficiently with the available funds to the optimal portfolio of PPP projects in order to
increase the output of public investment.
an extra effort to model the benefits and costs of a project delivered using a PPP arrangement.
The current evaluation methods are usually useful as a preliminary project evaluation tool in
order to demonstrate the availability of PPPs as a delivery option, however, they do not
provide enough decision making support when it comes to comparing different competing
PPP projects for federal assistance. Therefore, there is a need for strategic planning for
improving capital allocation decisions in the investment portfolio of federal programs such as
TIFIA while considering the competing objectives of multiple stakeholders. This paper aims
to go beyond the available project evaluation analyses for PPPs at the project level, and
provides a decision support system at the program level, based on the utility theory and multi
objective optimization.
MODEL DEVELOPMENT
The model presented in this paper is based on a two-level MAUT and a Bayesian
network. Level 1 of the MAUT model captures the utility function of each stakeholder based
on three attributes, while Level 2 integrates these utility functions into one centralized
utility function based on the preferences of the decision maker, which in this case is the
program or office that is going to make the funding decisions. The centralized utility
function can change based on events that may happen in the future by using a Bayesian
network to update the relative importance weights. The different steps of the model are shown
in Figure 1.
development, and political factors. It should be mentioned that developing such indexes in
more detail is beyond the scope of this paper, so we assume such an index is available and can
be obtained through surveys and data analysis.
The first thing that should be mentioned about the three attributes - ROR, LLI & NII -
is the fact that they are usually contradictory to each other. For instance, to achieve a better
ROR on private investment, there is a need to sacrifice the design or the quality of the project,
which means a lower LLI, or to receive more funding from the federal government, which means
a lower NII. The other thing that should be mentioned about the model is the fact that the
mentioned attributes cannot be easily combined into one simple formula because each
stakeholder has different preferences and therefore the importance weights vary from
stakeholder to stakeholder. As shown in Figure 1, this paper develops the final utility function
into two steps. In the first step, the utility function of each stakeholder based on the
mentioned three attributes- ROR, LLI & NII - will be obtained. Each one of these utility
functions will be treated as a new attribute in the next step, in which the utility function of the
decision maker in the public agency will be obtained based on his preferences for the utility
of each stakeholder.
Level One: Obtaining Utility Function of Each Stakeholder
A common method for obtaining the utility function for each stakeholder is to
interview key members from each group of stakeholders who are familiar with the
preferences of that group or company. Before calculating the utility functions, there should be
a determination of the appropriate range of the attributes. In measuring the range of the
attributes, an upper limit and a lower limit of the attributes scope should be determined by
using existing research results and a scientific analysis on the basis of an engineering model
(Goicoechea & Duckstein, 1982).
In the three-attribute utility function, we assess the utility function of the ROR, the
Local Livability Index, and the National Impact Index. The utility is assessed assuming
mutual utility independence for ROR, LLI, and NII. It is also assumed that preferential
independence exists because change in the rank ordering of preferences of one attribute does
not change the rank ordering of preferences of other attributes. Thus the utility function can
be expressed as:
1 + K·U(ROR, LLI, NII) = Π(i=1..3) [1 + K·ki·Ui]    (1)

where 0 < ki < 1, U1 = U(ROR), U2 = U(LLI), U3 = U(NII), and K ≥ −1 and non-zero.
The next step in developing the utility function for each stakeholder is obtaining the
scaling factors. It should be mentioned that we assume the worst case for all utilities is 0
meaning lowest ROR, minimum LLI, and minimum NII. The best case for each utility is
assumed to be 1, meaning highest ROR, maximum LLI, and maximum NII.
U(0, 0, 0) = 0
U(1, 1, 1) = 1
Third, k1 can be assessed by asking the decision maker at what probability p
he/she is indifferent between choices #1 and #2 in the following lottery. For instance, if
the decision maker is indifferent between choices #1 and #2 at the point where p = 0.5,
then k1 = 0.5.
Choice #1: (1, 1, 1) with probability p; (0, 0, 0) with probability 1−p.
Choice #2: (1, 0, 0) with certainty.
This process should be repeated by setting up similar lotteries to assess k2 and k3. In
the last step of obtaining the utility function for each stakeholder, the obtained ki should be
applied to equation (1) for the best case, U(1,1,1), in order to calculate K.
1 + K = Π(i=1..3) [1 + K·ki]    (2)
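Numerically, K is the non-zero root of equation (2) and can be found by bisection. The sketch below is an illustration, not the authors' implementation; with the federal agency's scaling factors from Table 1 (0.1, 0.5, 0.7), the root lands close to the tabulated K = −0.67191:

```python
# Solve 1 + K = prod(1 + K * k_i) for the multiplicative-MAUT constant K
# by bisection. If sum(k_i) > 1 the root lies in (-1, 0); if sum(k_i) < 1
# it is positive. Illustrative sketch, not the authors' implementation.
from math import prod

def solve_K(ks):
    f = lambda K: prod(1 + K * k for k in ks) - (1 + K)
    s = sum(ks)
    if abs(s - 1) < 1e-12:
        return 0.0                      # additive case: no interaction term
    if s > 1:
        lo, hi = -1 + 1e-12, -1e-12     # f(lo) > 0, f(hi) < 0
    else:
        lo, hi = 1e-12, 1.0
        while f(hi) <= 0:               # grow the bracket until the sign flips
            hi *= 2
    for _ in range(200):                # bisection to machine precision
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

K = solve_K([0.1, 0.5, 0.7])   # federal agency scaling factors from Table 1
print(round(K, 4))             # -0.6719, close to the tabulated -0.67191
```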
It should be noted that this process should be repeated for the private company,
Up(ROR, LLI, NII), the local community, UL(ROR, LLI, NII), and the federal agency,
UN(ROR, LLI, NII).
1 + K·U(Up, Ul, Un) = Π(i=p,l,n) [1 + K·ki·Ui]    (3)

U(Up, Ul, Un) = 0 at (0, 0, 0)
U(Up, Ul, Un) = 1 at (1, 1, 1)
The next step in obtaining the utility function of the decision maker in the public
agency is assessing the scaling factors by changing p in a set of lotteries similar to the
previous level.
Choice #1: (Max Up, Max UL, Max UN) with probability p; (Min Up, Min UL, Min UN) with probability 1−p.
Choice #2: (Max Up, Min UL, Min UN) with certainty.
Table 1: Calculation of scaling factors for Level 1 and Level 2 utility functions

Level 1                  KROR    KLLI    KNII    K
Concessionaire (P)       0.80    0.30    0.10    -0.45156
Local Community (L)      0.20    0.70    0.30    -0.45156
Federal Agency (N)       0.10    0.50    0.70    -0.67191

Level 2                  Kp      Kl      Kn      K
Funding Agency           0.30    0.50    0.40    -0.45156
Finally, the utility function for each stakeholder is calculated using equation (1) and
Level 1 scaling factors. These utility functions are used in Level 2 as inputs. In Level 2, the
utility function of the funding agency is calculated using these inputs, Level 2 scaling factors
and Equation 2. The results of these calculations are presented in Table 2.
Table 2: Calculation of utility functions in Level 1 and Level 2 and project ranking

           Level 1 Input          Level 1 Output / Level 2 Input    Level 2 Output
           ROR    LLI    NII      Up     Ul     Un                  U      Rank
Project 1  0.475  0.623  0.512    0.57   0.63   0.62                0.66   2
Project 2  0.195  0.495  0.663    0.35   0.54   0.65                0.58   3
Project 3  0.221  0.858  0.522    0.46   0.74   0.70                0.70   1
Project 4  0.378  0.523  0.147    0.45   0.46   0.37                0.48   5
Project 5  0.281  0.426  0.593    0.39   0.50   0.59                0.55   4
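The two-level aggregation behind Table 2 can be sketched as follows. The scaling factors and K constants here are illustrative stand-ins (only the attribute values of the first two projects are taken from Table 2), so the resulting utilities are not the tabulated ones:

```python
# Two-level multiplicative-MAUT sketch: Level 1 turns each project's attribute
# utilities (U_ROR, U_LLI, U_NII) into one utility per stakeholder; Level 2
# combines the stakeholder utilities into the funding agency's utility and
# ranks the projects. Scaling factors and K values are illustrative.
from math import prod

def multiplicative_utility(us, ks, K):
    """U such that 1 + K*U = prod(1 + K*k_i*u_i)."""
    return (prod(1 + K * k * u for k, u in zip(ks, us)) - 1) / K

level1 = {            # stakeholder: ((k_ROR, k_LLI, k_NII), K) - illustrative
    "concessionaire": ([0.8, 0.3, 0.1], -0.5958),
    "local":          ([0.2, 0.7, 0.3], -0.5140),
    "federal":        ([0.1, 0.5, 0.7], -0.6719),
}
level2_ks, level2_K = [0.3, 0.5, 0.4], -0.4516   # funding agency (illustrative)

projects = {"P1": (0.475, 0.623, 0.512), "P2": (0.195, 0.495, 0.663)}
scores = {}
for name, attrs in projects.items():
    stake_us = [multiplicative_utility(attrs, ks, K) for ks, K in level1.values()]
    scores[name] = multiplicative_utility(stake_us, level2_ks, level2_K)

ranking = sorted(scores, key=scores.get, reverse=True)  # highest utility first
print(scores, ranking)
```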
DISCUSSION OF IMPLICATION
As shown in Table 2, this model is able to account for the preferences of
different stakeholders and prioritize projects based on the strategic objectives of the
funding agency and the preferences of different stakeholders.
Table 3: What-If Analysis for Output Utility Function for Funding Agency (Top 9 Inputs Ranked by
Percent Change)

                                        Minimum                             Maximum
Rank  Input Name                 Cell   Output  Change(%)  Input Value      Output  Change(%)  Input Value
1     Funding Agency (Kn)        R8     0.31    -46.31     -0.093456088     0.84    46.31      0.893456088
2     Funding Agency (Kl)        Q8     0.35    -39.31     0.006543912      0.80    39.31      0.993456088
3     Funding Agency (Kp)        P8     0.45    -22.15     -0.193456088     0.70    22.15      0.793456088
4     Knii for Local Community   AE8    0.46    -19.85     -0.193456088     0.69    19.85      0.793456088
5     Klli for Local Community   AD8    0.48    -15.98     0.206543912      0.67    15.98      1.193456088
6     Knii for Federal Agency    AK8    0.49    -15.66     0.206543912      0.67    15.66      1.193456088
7     Klli for Federal Agency    AJ8    0.52    -9.77      -3.55026E-05     0.63    9.77       1.000035503
8     Klli for Concessionaire    X8     0.52    -8.97      -0.193456088     0.63    8.97       0.793456088
9     Knii for Concessionaire    Y8     0.55    -3.85      -0.064485363     0.60    3.85       0.264485363
In order to show how the preferences of each stakeholder can change the final utility function of the funding agency, a simulation model was created in which 12 different inputs - the scaling weights for the concessionaire, local community, federal agency, and funding agency - were defined as input variables, and the resulting changes in the Level 2 utility function of the funding agency were studied. Table 3 shows the results of this simulation, and Figure 2 shows the sensitivity analysis for the nine inputs with the greatest effect on the utility function of the funding agency.
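A one-at-a-time sweep of the kind summarized in Table 3 can be sketched as follows; `what_if`, the model callable, and the fixed +/- spread are our illustrative choices rather than the settings of the simulation software used in the study:

```python
def what_if(model, nominal, spread=0.5):
    """One-at-a-time what-if analysis: perturb each input by +/- spread around
    its nominal value, record the min/max model output with its percent change
    from the base output, and rank inputs by the size of that swing."""
    base = model(nominal)
    rows = []
    for name in nominal:
        outs = []
        for delta in (-spread, +spread):
            trial = dict(nominal)             # vary one input, hold the rest
            trial[name] = nominal[name] + delta
            outs.append(model(trial))
        lo, hi = min(outs), max(outs)
        rows.append((name, lo, 100.0 * (lo - base) / base,
                           hi, 100.0 * (hi - base) / base))
    rows.sort(key=lambda r: r[2] - r[4])      # largest swing (hi% - lo%) first
    return rows
```

Applied to the Level 2 utility model with the 12 scaling weights as inputs, a sweep of this form would produce a ranking table with the structure of Table 3.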
[Figure 2: Sensitivity chart for the funding agency utility function; the horizontal axis shows percent change from 0% to 50%.]
in order to study how the model can prioritize projects in a portfolio. Finally, an optimization
program will be added to the model based on some constraints.
One of the most important features of this model is that it can be updated easily based on changes in the strategic objectives of the funding agency, the preferences of different stakeholders, or even events that may affect the portfolio. The next feature to be added to the model is a Bayesian network that can automatically update the scaling factors for the funding agency in response to different events. For instance, in the event of a PPP project bankruptcy, such as happened to the South Bay Express Lane Project in California, the sensitivity towards ROR will increase; or during an election season, the sensitivity towards local benefits or national impact may increase, so Kl and Kn in the model should be updated.
Truck Weigh-in-Motion using Reverse Modeling and Genetic Algorithms

Vala, G.1, Flood, I.2 and Obonyo, E.3

1M.E. Rinker, Sr. School of Building Construction, University of Florida, P.O. Box 115703, Gainesville, FL 32611-5703; PH (352) 271-1152; email: gvala@ufl.edu
2M.E. Rinker, Sr. School of Building Construction, University of Florida, P.O. Box 115703, Gainesville, FL 32611-5703; PH (352) 273-1159; email: flood@ufl.edu
3M.E. Rinker, Sr. School of Building Construction, University of Florida, P.O. Box 115703, Gainesville, FL 32611-5703; PH (352) 273-1161; email: obonyo@ufl.edu
ABSTRACT
The ability to accurately determine the loading attributes of a truck (namely the axle configuration, the spacing between the axles, and the load imposed by each axle) while it is in motion is an important function for the design and structural health monitoring of bridges and highways. Truck weigh-in-motion (WIM), as it is termed, is an inverse problem in which the load is identified from the observed response of the structure over which it is travelling. The problem has been reasonably well solved using neural network techniques, but there is still significant room for improvement in terms of reducing the number of misclassifications of trucks and increasing the precision of the axle spacing and load estimates.
The problem can be formulated as an optimization problem. Genetic algorithms (GAs) are proven robust and efficient search and optimization techniques. The potential of the GA approach for reverse identification of axle configuration and loading from bridge girder stress envelopes has been investigated and compared to an existing neural network solution. The investigation is a pilot study that considers a simply supported steel girder bridge with a concrete deck. The bending stresses of the bridge are simulated numerically and used as the input for reverse modeling. The identification procedure is carried out using GAs by minimizing the error between the measured bridge response and the reconstructed bridge response. The performance of the GA depends on the tuning of the genetic operators, hence different operator settings were considered and tuned for optimality. Advanced strategies, such as migration and multiple species with real-coded representation variables, were adopted to improve performance. The effects of measurement parameters such as sampling frequency (50-400 Hz), level of noise (5-25%), time-varying load, and measuring sections on the accuracy of identification were also investigated. The GA approach was found to outperform the existing neural network solution. The significance of this is that, unlike the neural network approach, the GA solution can be applied to any bridge configuration for which a reasonable stress model exists. Moreover, the computational time for the GA was found to be 3-4 seconds on average, which, although several orders of magnitude slower than the neural network solution, is well within what could be considered an acceptable delay for generating a solution.
BACKGROUND
Traditionally, truck loads are measured at weigh stations, which are used to penalize overweight trucks. The load history obtained from weigh-in-motion (WIM) stations is also used for the design of bridges and for pavement infrastructure design and planning. However, this system is expensive and causes traffic to slow down, as each truck takes a considerable amount of time to weigh (Gagarin, 1991). Hence, advanced WIM technologies that calculate the weight from the responses of a bridge have been developed and analyzed by many researchers.
Identification of the axle loads, axle spacings, and velocity of a truck
travelling across a bridge is an inverse problem, where the attributes of the truck are
identified from the bridge’s time-wise strain response (strain envelope). One
approach to solving this type of problem is reverse modeling. This makes use of a
search algorithm that iterates towards a solution (a loading scenario defined by the
number and spacing of axles, the axle loads, and the truck velocity) that would have
caused the observed strain envelope. The fitness (accuracy) of a solution is
determined using a forward model that computes the resultant strain envelope for a
given loading scenario using, for example, a finite element model. The fitness is
measured as some function of the difference between the strain envelope generated
by the forward model and that measured on the bridge.
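The reverse-modeling loop therefore needs two ingredients: a forward model and a fitness measure. The sketch below uses a quasi-static midspan-moment influence line for a simply supported span as a stand-in forward model; the span, speed, sampling rate, and RMS error measure are our assumptions, not the bridge model used in the study:

```python
import numpy as np

def midspan_envelope(axle_loads, axle_spacings, speed, span=20.0, fs=100):
    """Quasi-static midspan response history for a truck crossing a simply
    supported span; the response is taken proportional to the midspan bending
    moment given by its triangular influence line."""
    offsets = np.cumsum([0.0] + list(axle_spacings))   # axle positions behind the front axle
    duration = (span + offsets[-1]) / speed
    t = np.arange(0.0, duration, 1.0 / fs)
    response = np.zeros_like(t)
    for load, off in zip(axle_loads, offsets):
        x = speed * t - off                            # this axle's position on the span
        on_span = (x >= 0.0) & (x <= span)
        influence = np.where(x < span / 2.0, x / 2.0, (span - x) / 2.0)
        response[on_span] += load * influence[on_span]
    return response

def fitness(candidate, measured, speed, span=20.0, fs=100):
    """Fitness of a loading scenario: RMS difference between the measured
    envelope and the one reconstructed by the forward model (lower is better)."""
    loads, spacings = candidate
    simulated = midspan_envelope(loads, spacings, speed, span, fs)
    n = min(len(simulated), len(measured))
    return float(np.sqrt(np.mean((simulated[:n] - measured[:n]) ** 2)))
```

The true loading scenario scores a fitness of zero against its own envelope, while perturbing any axle load or spacing raises the error.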
Yu and Chan (2007) reviewed and compared the various methods of load identification. Traditional WIM systems developed from the earlier work of Moses (1979). The method involves inverting the system matrix and is solved by the least squares method. However, methods based on the inversion of a matrix are computationally expensive; hence the pseudo-inverse or singular value decomposition method is used to reduce the computational load. This method shows high fluctuation in the error due to the presence of measurement error and ill-posed conditions (Pinkaew, 2006). To tackle these issues, regularization of the least squares error method is used (Law et al., 1997). However, finding the optimal value of the regularization parameter proves difficult in practice.
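The regularized least squares step described above can be illustrated with a Tikhonov-style solve; `A`, `b`, and the regularization weight `lam` are generic placeholders rather than any particular formulation from the literature cited:

```python
import numpy as np

def identify_forces(A, b, lam=1e-3):
    """Regularized least squares for force identification: minimizes
    ||A f - b||^2 + lam * ||f||^2, giving f = (A'A + lam I)^(-1) A' b.
    lam trades noise amplification against bias; choosing it well is the
    practical difficulty noted in the text."""
    ATA = A.T @ A
    return np.linalg.solve(ATA + lam * np.eye(ATA.shape[0]), A.T @ b)
```

With noise-free data and a small lam, the true forces are recovered almost exactly; with noisy data, larger lam damps the error fluctuations at the cost of some bias.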
There are many methods that involve the use of an optimization algorithm to search for a solution. Sequential quadratic programming and dynamic programming have been used most frequently (Leming and Stalford, 2003). However, these algorithms are based on finding a zero gradient from the provided auxiliary information. They require a good formulation of the equation containing the gradient information, which is difficult to form for nonlinear constrained problems. Traditional optimization algorithms also often get stuck in local error minima. In contrast, genetic algorithms are a class of optimization techniques that do not require knowledge of the gradient of the error function or any auxiliary information about the system, and are capable of escaping from local error minima. GAs have been applied successfully in many other fields.
The objective of this study is to test and evaluate the potential of using reverse
modeling with Genetic Algorithms to determine the truck loading scenario that gave
rise to an observed strain envelope at one or more locations on a bridge.
BRIDGE MODEL
Measurement Noise
WIM data will include measurement error due to various factors including
lack of precision of the system used for recording the response, uneven profile of
mutation. The process is repeated until a solution is found that is within a specified
error tolerance.
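The generate-evaluate-select-recombine loop described above can be sketched as a minimal real-coded GA. The operators below (tournament selection, blend crossover, Gaussian mutation, and elitism) are generic stand-ins, not the specific operator settings investigated in Table 1:

```python
import random

def genetic_search(fitness, bounds, pop_size=60, generations=200,
                   crossover_rate=0.8, mutation_rate=0.1, seed=0):
    """Minimize `fitness` over the box `bounds` with a minimal real-coded GA:
    tournament selection, blend crossover, Gaussian mutation, and elitism."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        nxt = [best[:]]                                   # elitism: carry over the best
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fitness)     # tournament selection
            p2 = min(rng.sample(pop, 3), key=fitness)
            child = p1[:]
            if rng.random() < crossover_rate:             # blend (arithmetic) crossover
                a = rng.random()
                child = [a * x + (1.0 - a) * y for x, y in zip(p1, p2)]
            for i in range(dim):                          # Gaussian mutation, clipped to bounds
                if rng.random() < mutation_rate:
                    lo, hi = bounds[i]
                    child[i] = min(max(child[i] + rng.gauss(0.0, 0.1 * (hi - lo)), lo), hi)
            nxt.append(child)
        pop = nxt
        candidate = min(pop, key=fitness)
        if fitness(candidate) < fitness(best):
            best = candidate
    return best
```

In the WIM setting, `fitness` would be the envelope-error measure and `bounds` the admissible ranges of the axle loads and spacings.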
Tuning of GA Operators
The performance of the GA depends on the tuning of the GA operators. It is good practice to try different combinations of operators and analyze their impact on the results. Table 1 outlines the different settings of the GA operators that were investigated. Since the GA is a stochastic process, the procedure was run 20 times to obtain a more representative mean and variance for the truck loading attributes. The mean of the results was taken as the identified value of the loads and spacings.
Table 1. GA operators
Operator | Options | Trial 1 | Trial 2
Effect of Fitness Scaling: Both rank scaling and proportionate scaling were found to perform well compared to shift linear scaling. Rank scaling produced a consistent error across all the identified parameters that was also within the selection criteria. Hence, rank scaling was adopted as the optimal scaling operator for the study.
Effect of Selection: Stochastic Uniform, Roulette, and Uniform selection produced results that met the selection criteria. In order to select one of them as the optimal operator, the minimum cumulative error associated with each operator was considered. Since the cumulative errors did not differ much, the standard deviation was considered. The lowest standard deviation was associated with the Stochastic Uniform selection operator; hence it was adopted as the optimal selection operator for the study.
Effect of Crossover: The only crossover operator that met the selection criteria was
Heuristic Crossover with a ratio set to 1.8. Hence, Heuristic Crossover was chosen as
the optimal crossover operator for further analysis.
Effect of Migration: Migration and multiple species increase the accuracy of the identification. Three subpopulations, each having 20 individuals, were considered, with a migration interval of 40 and a forward migration fraction of 0.6.
RESULTS
The bridge response was simulated for increasing numbers of measuring sections, sampling frequencies, and noise levels. The minimum sampling frequency considered was 100 Hz, in order to capture a minimum of 100 data points of the bridge response profile as recommended (Yu and Chan, 2007). In Figure 2, the spacing refers to the axle spacing between the front and rear axle loads. The spacing between the rear axle loads was assumed to be known. From the results, it was found that the front axle load experienced more fluctuation in error compared to the rear axle loads. For noise-free data, all the results were found to be within 3% error at all sampling frequencies. It was also found that even a single measuring section at midspan was sufficient to produce results of this level of accuracy.
The range of each parameter considered in the subsequent analysis was as
follows:
Sampling Frequency = 100 to 400 Hz
Number of Measuring Sections = 1, 3, 5, 7, and 9 points along the length of
the bridge. Points were placed at equal distance of 1/8th of the length of span
from the midpoint.
Level of Noise = 5%, 10%, 15%, 20%, 25%
Computational Time
The computational time taken by the GA for finding a solution was
investigated. A personal computer with an Intel Core i5 processor (2.67 GHz) and 4 GB of RAM was used for the study. The CPU time required to find a solution was between 3 and 4
seconds. However, it should be noted that increasing the size of the population
significantly influences the processing time.
[Figure 2: Identification error (%) for the axle loads (Axle 1, Axle 2, Axle 3) and axle spacing versus the number of measuring locations (1, 3, 5, 7, 9), shown for 15% noise at sampling frequencies of 100 Hz and 400 Hz.]
comparison using an identical set of validation problems. For a noise-free bridge response, the truck attributes can be found to within an accuracy of 1% using the bridge response recorded only at the midspan. However, the dynamic effect of a truck and white noise affect the accuracy significantly. A single measuring location is inadequate for noisy data sets. Increasing the number of measuring locations increased the accuracy, as did increasing the sampling frequency. Increasing the number of measuring locations and the sampling frequency increases the amount of information available for identifying the truck loading attributes, and thus helped overcome the effects of white noise and dynamic loading.
It is proposed to extend the scope of the study to include bridges of more
complicated structure, using finite element methods for the forward modeling
component of the algorithm. In addition, consideration will be given to a range of
truck types. Additional validation will include the use of live data from a range of
bridges.
REFERENCES
Gagarin, N. (1991). "Advances in weigh-in-motion with pattern recognition and
prediction of fatigue life of highway bridges." PhD thesis, University of
Maryland at College Park, MD.
Law, S. S., Chan, T. H. T, and Zeng, Q. H. (1997). “Moving force identification: A
time domain method.” J. Sound Vib., 201, 1-22.
Law, S. S., Bu, J. Q., Zhu, X. Q., and Chan, S. L. (2004). "Vehicle axle loads
identification using finite element method." Eng.Struct., 26(8), 1143.
Law, S. S., and Zhu, X. Q. (2004). "Dynamic behavior of damaged concrete bridge
structures under moving vehicular loads." Eng.Struct., 26(9), 1279.
Leming, S. K., and Stalford, H. L. (2003). "Bridge Weigh-in-Motion System
	Development Using Superposition of Dynamic Truck/Static Bridge
	Interaction." Proceedings of the American Control Conference.
Monti, G., Quaranta, G., and Marano, G. C. (2010). "Genetic-Algorithm-Based
Strategies for Dynamic Identification of Nonlinear Systems with Noise-
Corrupted Response." J.Comp.in Civ.Engrg., 24(2), 173-187.
Moses, F. (1979). "Weigh-in-Motion system using instrumented bridges." Transp.
	Engrg. J. of ASCE, 105(3), 233-249.
Pinkaew, T. (2006). "Identification of Vehicle Axle Loads from Bridge Response
using Updated Static Component Technique." Engineering Structures, 28(11),
1599-1608.
Prozzi, J. and Hong, F. (2007). “Effect of Weight-in-Motion System Measurement
Errors on Load-Pavement Impact Estimation.” J.Trans. Engrg., 133(1), 1-10.
Yu, L., and Chan, T. H. T. (2007). "Recent Research on Identification of Moving
	Loads on Bridges." J. Sound Vibrat., 305(1-2), 3-21.
The Application of Artificial Neural Network for the Prediction of the
Deformation Performance of Hot-Mix Asphalt
ABSTRACT
INTRODUCTION
retarded partial-recovery' at the same time, and the proportions of those components are highly dependent on the loading time (duration of loading) and the material temperature.
DATA GENERATION
Since the objective of this study is to implement an ANN for the prediction of HMA performance, the data set used in this study was extracted from the author's previous study, which can be found elsewhere (Oh and Coree, 2004).
Table 1 summarizes the data used in this study. It contains critical volumetric properties of the HMA mixtures and the Gyratory Indentation Test Number (GITn), which represents the number of loading cycles corresponding to 2% deformation of the specimens as measured by the Gyratory Indentation Test. A high GITn value indicates a stable or highly rut-resistant mix, while a low GITn indicates a highly rut-susceptible mix. It should be noted that among the 27 test samples, 24 samples were used to train the artificial neural network and the other 3 samples (#4, #11, and #23) were used to verify the trained ANN.
Table 1 columns: Sample No. | Nd | SA | Pb | Pbe | VMA | VFA | DP | FT | FAA | GITn
The other 3 lab samples were used to verify the accuracy of the developed neural network and its capability to predict the GITn value for a given HMA mix.
In this section, results obtained from the trained ANN are presented and
compared to the observed values. Next, verification for the effectiveness of the
developed network to predict the GITn value for test samples different from those
used to train the neural network is presented.
Figures 3, 4, and 5 present a comparison between the GITn values predicted by the ANN and the observed values for all the test samples that were used to train the neural network. Although the deformation performance of HMA is very complicated, the figures show that the ANN was able to learn the relation between the 9 inputs and the one output (GITn). The sum of squared errors was 0.003 over all samples and less than 0.001 for each individual sample.
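A single-hidden-layer network of the kind described can be sketched in a few lines. The architecture, epoch count, and learning rate below are illustrative assumptions, not the authors' configuration, and inputs and outputs are assumed scaled to [0, 1]:

```python
import numpy as np

def train_mlp(X, y, hidden=6, epochs=10000, lr=0.5, seed=0):
    """Train a one-hidden-layer sigmoid network by full-batch backpropagation
    on a squared-error loss; returns a prediction function."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    Y = y.reshape(-1, 1)
    for _ in range(epochs):
        H = sig(X @ W1 + b1)                    # forward pass
        out = sig(H @ W2 + b2)
        d_out = (out - Y) * out * (1.0 - out)   # backprop through output sigmoid
        d_hid = (d_out @ W2.T) * H * (1.0 - H)  # backprop through hidden layer
        W2 -= lr * H.T @ d_out / n; b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_hid / n; b1 -= lr * d_hid.mean(axis=0)
    return lambda Xq: sig(sig(Xq @ W1 + b1) @ W2 + b2).ravel()
```

For the study's setup, `X` would be the 24 training rows of the nine scaled inputs and `y` the scaled GITn values; the remaining three samples would then be passed to the returned prediction function for verification.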
[Figures 3-5: Observed versus ANN-predicted GITn values for the training samples at Nd = 75, 100, and 125.]

Nd = 75
Sample         1     2     3     4     5     6     7     8
Observed GITn  58    56    41    49    39    54    53    52
ANN GITn       56.5  56.6  41.0  48.8  39.7  53.6  53.1  52.2

Nd = 100
Sample         1     2     3     4     5     6     7     8
Observed GITn  63    49    60    66    62    70    65    69
ANN GITn       61.9  48.8  59.7  66.0  61.9  69.8  65.2  70.1

Nd = 125
Sample         1     2     3     4     5     6     7     8
Observed GITn  75    75    88    89    109   82    78    94
ANN GITn       74.5  74.7  88.4  88.3  107.2 83.8  78.3  92.1
CONCLUSION
This paper has focused on the application of artificial neural networks and their potential use in predicting the performance of HMA. Based on the findings of this study, the ANN was able to learn the relation between the 9 inputs governing the behavior of HMA and the one output (GITn). The performance of the developed artificial neural network in predicting the GITn value was much better than that of the traditional regression method. Therefore, it clearly has great potential in this field.
Although this study used only 9 inputs, more input parameters, such as testing temperatures, loading conditions, and asphalt binder types, can easily be included in training the ANN, which is not the case for traditional statistical analysis.
REFERENCES
Brown, E. R., and Cross, S. A. (1992). “A National Study of Rutting in Hot Mix
Asphalt (HMA) Pavements.” Journal of the Association of Asphalt Paving
Technologists, Volume 61.
Cochran, W. G., and Cox, G. M. (1960). Experimental Designs, 2nd ed. New York:
John Wiley & Sons.
Dawson, M. R. W., and Yaremchuk, V. (2003). The Rumelhart and RumelhartLite
Multilayer Perceptron Programs. Biological Computation Project, University
of Alberta, Edmonton, Alberta, Canada.
Department of Transportation, Federal Highway Administration (1998). Performance
of Coarse-Graded Mixes at Westrack – Premature Rutting. Final Report,
FHWA-RD-99-134, U.S.
Ott, R. L. (1993). An Introduction to Statistical Methods and Data Analysis, 4th
	ed. California: Wadsworth Pub. Co.
John, J. A., and Quenouille, M. H. (1977). Experiments: Design and Analysis, 2nd ed.
London: Charles Griffin & Company Ltd.
Oh, I., and Coree, B. J. (2004). “A Rapid Performance Test for SUPERPAVE HMA
mixtures.” Proceedings of International Symposium on Long Lasting Asphalt
Pavements, International Society of Asphalt Pavements, Auburn, Alabama.
Oh, I., and Coree, B. J. (2004). “A Really Simple Performance Test.” Proceedings of
Association of Asphalt Paving Technologists, Baton Rouge, Louisiana.
An Approach for Occlusion Detection in Construction Site Point Cloud Data
Figure 3. Angle-tree data structure showing one point bucket as the only child of
one leaf node and two point buckets as the children of a different leaf node.
Figure 3 also shows point buckets at leaf nodes. To deal with the possibility of points in one field of view being at various distances from the scanner, leaf nodes of the angle-tree keep one or two lists of points, or point buckets. When the point distances are close together, one point bucket is used. When the greatest distance between any pair of points in the node exceeds a user-defined threshold value, two point buckets are used. In that case, the points are segregated by distance using the midpoint of the two extreme distances in the node.
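The one-or-two-bucket rule can be sketched directly; representing a point as a (distance, payload) pair and the function name are our assumptions about the data layout:

```python
def leaf_point_buckets(points, depth_threshold):
    """Assign a leaf node's points to one or two distance buckets: one bucket
    when the spread of distances is within the threshold, otherwise two buckets
    separated at the midpoint of the two extreme distances."""
    distances = [d for d, _ in points]
    near_d, far_d = min(distances), max(distances)
    if far_d - near_d <= depth_threshold:
        return [points]                        # distances close together: one bucket
    mid = 0.5 * (near_d + far_d)               # split at the midpoint of the extremes
    near = [p for p in points if p[0] <= mid]
    far = [p for p in points if p[0] > mid]
    return [near, far]
```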
Finding Meaningful Occlusions in the Angle-Tree. With all points of a point cloud
scan entered into an angle tree, the tree can be processed to tag point buckets as a)
being the source of occlusion, b) neighboring an occluded space, or c) neither. A
number of user-defined parameter values are then used to tune the occlusion detection
algorithm to the data set. These include “depth range”, “detection distance”, and
“bucket size”. Each of these is discussed in more detail below.
The depth range value is the number of levels above the deepest leaf node at which nodes will be considered for occlusion detection. The higher the value used for depth range, the higher in the tree (and consequently, the larger) the nodes considered will be. In most cases tested to date, a depth range of 1, 2, or 3 yields good results. The tree is traversed from top to bottom. If a parent node is within the depth range, the parent is examined and its children are ignored. To obtain the parent node's points, we traverse to all of the children that are leaves and combine all of their buckets. This potentially large combined bucket is considered the parent's set of points.
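Combining the buckets beneath a parent can be sketched as a recursive collection; the minimal `Node` class below is our assumption about the tree's representation, not the implementation used in the study:

```python
class Node:
    """Minimal angle-tree node: interior nodes carry children, leaf nodes
    carry point buckets (lists of points)."""
    def __init__(self, children=(), buckets=()):
        self.children = list(children)
        self.buckets = list(buckets)

def combined_points(node):
    """Merge every point bucket in the subtree below `node` into one list, as
    is done when a parent within the depth range stands in for its children."""
    if not node.children:                               # leaf: flatten its buckets
        return [p for bucket in node.buckets for p in bucket]
    return [p for child in node.children for p in combined_points(child)]
```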
The detection distance value specifies the minimum distance difference
between neighboring nodes before they are classified as being a source of occlusion
or bordering an occluded space. In this paper, distance means “distance of a point to
the origin”. For example, assume that node A has an average point distance of 10,
node B has an average point distance of 20, and the detection distance value is 10. In
this case, even though node A is closer and could possibly occlude node B, it will not
be counted as occluding because the difference of distances between nodes A and B
does not exceed the detection distance value.
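For a pair of neighboring nodes, the detection-distance test reduces to a comparison of average point distances; the returned labels are our naming for the classifications a) and b) above:

```python
def classify_pair(avg_dist_a, avg_dist_b, detection_distance):
    """Apply the detection-distance rule to two neighboring nodes: only when
    their average point distances differ by more than `detection_distance` is
    the nearer node tagged a source of occlusion and the farther node tagged
    as bordering occluded space; otherwise neither is tagged."""
    if abs(avg_dist_a - avg_dist_b) <= detection_distance:
        return None, None                      # difference does not exceed the threshold
    if avg_dist_a < avg_dist_b:
        return "occluder", "occluded-border"
    return "occluded-border", "occluder"
```

With the values from the example above (average distances 10 and 20, detection distance 10), the pair is left untagged; widening the gap, say to 25, tags node A as the occluder.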
The bucket size value specifies how many points a node can hold before
dividing into two nodes. This value, consequently, determines how many levels of
nodes the tree has. For smaller point clouds, a bucket size of 3 to 10 works well. For
larger clouds, a size of 100 or larger is still largely effective.
The combination of the detection distance, bucket size, and depth range values controls the behavior of the algorithm. As a result, the occlusion detection algorithm can be tuned to successfully detect occlusions in very large and very small point clouds.
EXPERIMENTAL RESULTS
A range scan simulator was created to facilitate development and evaluation
of the occlusion detection approach. The simulator generates point cloud data from
simple geometry, such as rectangles and circles, thus allowing for testing on well-
known data. The simulator allows for testing experimental implementations on cases
with controllable properties, including point cloud size, shape, and density.
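A simulator of this kind can be sketched in two dimensions by ray casting: each ray from the scanner keeps only its nearest segment hit, so a near "tree" segment naturally occludes the "wall" behind it. This is our minimal sketch of the idea, not the simulator built for the study:

```python
import math

def simulate_scan(segments, n_rays=360, eps=1e-9):
    """Minimal 2D range-scan simulator: cast rays from a scanner at the origin
    and keep the nearest hit per ray. Geometry is a list of ((x1, y1), (x2, y2))
    line segments; the result is a list of hit points."""
    points = []
    for i in range(n_rays):
        a = 2.0 * math.pi * i / n_rays
        dx, dy = math.cos(a), math.sin(a)
        t_best = None
        for (x1, y1), (x2, y2) in segments:
            ex, ey = x2 - x1, y2 - y1
            det = ex * dy - ey * dx            # solve t*(dx,dy) = (x1,y1) + u*(ex,ey)
            if abs(det) < eps:
                continue                       # ray parallel to segment
            t = (ex * y1 - ey * x1) / det      # distance along the ray
            u = (dx * y1 - dy * x1) / det      # parameter along the segment
            if t > eps and 0.0 <= u <= 1.0 and (t_best is None or t < t_best):
                t_best = t
        if t_best is not None:
            points.append((t_best * dx, t_best * dy))
    return points
```

For example, a short vertical "tree" segment placed between the scanner and a longer "wall" segment removes the wall points behind it, producing exactly the kind of missing-data shadow the detection algorithm is meant to find.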
Figure 4 shows a rendering of a small (~3,000 point) point cloud produced by the simulator. The scene represents a wall with a window (a large rectangle enclosing a rectangular opening) and two trees (two tall, narrow rectangles) occluding parts of the wall and window from a scanner location to the left of the wall and trees. As the point cloud is the product of a scanner simulation, some 'wall' points are absent from the resulting point cloud due to the occluding 'trees'.
In this scene, the expectation is for the parts of the ‘trees’ in front of the wall
to be classified as meaningful occluders. Likewise, it is expected that the points of the
wall adjacent to the missing (occluded) points would be classified as such. In this
case, the angle-tree approach to occlusion detection is judged to be successful, as the
expected classifications appear in the rendered data.
In the image (see Figure 4), the points colored orange are points in a node
detected as the source of a meaningful occlusion. The points colored green are points
in a node detected as being adjacent to occluded space. The remaining points, colored
black, are classified as neither occluding nor adjacent to an occluded space.
Figure 5. Rendering of the dome scene, as viewed from above and to the right of
the scanner.
In the following images (Figures 6 to 8), points are colored based on their participation in occlusions. The orange points are in a node detected as the source of an occlusion. The green points are in a node detected as adjacent to occluded space. The black points are in nodes that are neither occluding nor adjacent to an occluded space.
Figure 6 shows the same dome scene as rendered in Fig. 5. The differences between these two images are a) the viewing position, and b) the points which are the source of occlusion have been identified in the image. The points in Fig. 7 are identified in the same way as in Fig. 6. Here the viewing position is near the scanner location. From this position, it is easy to see the correlation between the points that occlude and those adjacent to occluded spaces. Again, the data was edited to include the field of view that outlines the building.
Figure 6. Scene of building with a dome in which trees occlude parts of a wall,
viewed from a point above and to the left of the origin of the scanner.
Figure 7. Scene of building with a dome in which trees occlude parts of a wall
viewed from a point near the origin of the scanner.
Figure 8. A slice of the dome scene in which trees occlude a wall with a window: (left) occlusions detected with a high bucket value (100); (right) occlusions detected with a low bucket value (10).
Figure 8 shows the different results given by changing the bucket size
parameter. With a large bucket size (100 points), the algorithm produces many false
positives, as seen in the left image of Figure 8. The large bucket value prohibited the
angle tree from subdividing sufficiently to properly classify finer details of the point
cloud, resulting in over-classification. By comparison, the right image of Figure 8 is a
rendering of the same scene with a lower bucket value (10). In this rendering, the
occlusions are correctly identified.
DISCUSSION
The approach outlined above successfully identified meaningful occlusions in our initial experiments on simulated and actual data. This is true for large outdoor scenes as well as small, simple scenes. For the dome scan (approximately 55,000 points), angle-tree construction and occlusion detection were completed in a matter of a few seconds on a consumer-grade personal computer.
Testing of larger scenes (approx. 1 million points) was equally successful. Though
programming refinements may yield improved performance, we believe our
prototype is already fast enough for field applications. While individual scans can be
set to exceed the numbers of points tested to date, the parameters tested are sufficient
to quickly evaluate the existence of occlusions in a scan from a given location.
Though we tried to identify characteristics in which our approach yielded
significant false positives (detecting an occlusion where none exists) or false
negatives (failing to detect an existing occlusion), we found that tuning the
parameters of the algorithm would resolve minor issues encountered to date. As we
continue to improve this work, we plan to examine automation of setting the
occlusion detection parameters for different site characteristics.
Future work will include investigating alternative data structures, as well as
exploring the general approach outlined here as a means for efficient meshing and
reconstruction of occluded regions. The approach outlined in this paper shows
promise of providing valuable decision support at a similar speed to data collection.
While the approach as conceived, and implemented, focuses on one scan at a time, it
may be extended to accommodate additional scans (as would be necessary for
complete coverage of typical construction sites).
REFERENCES
Adán, A. and Huber, D., 2010, Reconstruction of Wall Surfaces Under Occlusion and
Clutter in 3D Indoor Environments. CMU-RI-TR-10-12
Akinci, B., Boukamp, F., Gordon, C., Huber, D., Lyons, C. and Park, K., 2006. A
formalism for utilization of sensor systems and integrated project models for
active construction quality control. Automation in Construction, Elsevier,
New York, USA, Vol. 15, No. 2, pp. 124-138.
Bosche, F., Haas, C.T., Akinci, B., 2009, "Automated Recognition of 3D CAD
Objects in Site Laser Scans for Project 3D Status Visualization and
Performance Control", ASCE Journal of Computing in Civil Engineering,
Special Issue on 3D Visualization, Vol. 23, Issue 6, pp. 311-318.
Dell’Acqua, F. and Fisher, R. 2002. Reconstruction of Planar Surfaces Behind
Occlusions in Range Images. IEEE Transactions on Pattern Analysis and
Machine Intelligence, Vol. 24, 569-575.
Elmqvist, N. and Tsigas, P. 2008. A Taxonomy of 3D Occlusion Management for
Visualization. IEEE Transactions on Visualization and Computer Graphics,
Vol. 14, No 5, September 2008.
Fischler, M. A. and Bolles, R. C. 1981. Random sample consensus: a paradigm for
model fitting with applications to image analysis and automated cartography,
Communications of the ACM, Vol 24:6.
Ying, Z. and Castanon, D. 1999. Statistical Model for Occluded Object Recognition,
Proceedings of the 1999 International Conference on Information Intelligence
and Systems, pp. 324 -327.
Applications of Machine Learning in Pipeline Monitoring
Yujie Ying1, Joel Harley2, James H. Garrett, Jr.3, Yuanwei Jin4, Irving J. Oppenheim5,
Jun Shi6, and Lucio Soibelman7
1 Department of Civil and Environmental Engineering, Carnegie Mellon University, Pittsburgh, PA 15213; PH (412) 620-3253; email: yying@cmu.edu
2 Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213; PH (732) 567-6786; email: jharley@andrew.cmu.edu
3 Department of Civil and Environmental Engineering, Carnegie Mellon University, Pittsburgh, PA 15213; PH (412) 268-2941; email: garrett@cmu.edu
4 Department of Engineering and Aviation Sciences, University of Maryland Eastern Shore, Princess Anne, MD 21853; PH (410) 621-3410; email: yjin@umes.edu
5 Department of Civil and Environmental Engineering, Carnegie Mellon University, Pittsburgh, PA 15213; PH (412) 268-2950; email: ijo@cmu.edu
6 Department of Civil and Environmental Engineering, Carnegie Mellon University, Pittsburgh, PA 15213; PH (607) 279-9558; email: junshi@andrew.cmu.edu
7 Department of Civil and Environmental Engineering, Carnegie Mellon University, Pittsburgh, PA 15213; PH (412) 268-2952; email: lucio@andrew.cmu.edu
ABSTRACT
In the field of structural health monitoring, researchers focus on the design of
systems and techniques capable of detecting damage in structures. However, most
traditional detection methods fail under environmental and operational variations that
tend to distort the signals and masquerade as damage. In this paper, we investigate the
application of machine learning techniques to develop a damage detection system
robust to changes in the internal air pressure of a pipe. From each of the 240
experimental datasets, we extract 167 features and implement three classification
algorithms for detecting damage: adaptive boosting, support vector machines, and a
method combining the two. The performances of the three classifiers are evaluated
over 30 detection trials with different combinations of training and testing data,
resulting in average accuracies of 87.7%, 92.5% and 93.5%, respectively. The
combined method is a promising classifier for damage detection. Through feature
selection, we also demonstrate the effectiveness of features related to the curve length,
the shift-invariant correlation coefficient and the peak amplitude of the signal.
INTRODUCTION
Natural gas pipelines require regular inspection and maintenance to ensure
their structural safety and integrity. We have explored a continuous monitoring
technique for steel natural gas pipelines using permanently installed low-cost
transducers to perform structural health monitoring (SHM). Previously, we had
devised a Time Reversal Change Focusing (TRCF) approach by combining guided
wave ultrasonics with time reversal acoustics (Harley et al. 2009, Ying et al. 2010a).
The TRCF method can focus and magnify the changes caused by damage in the
received signals and allows us to detect very small defects. However, benign effects
such as changes in air pressure can also produce considerable differences in the signals
(Ying et al. 2010b). It is essential but challenging to develop robust detection schemes
that are invariant to environmental and operational conditions. In this paper, we
present our results of applying machine learning algorithms to distinguishing damage
(simulated by a mass scatterer) from harmless pressure variations in a steel pipe.
MEASUREMENTS
For our experiments, we used a pair of lead zirconate titanate (PZT) ultrasonic
sensors to generate guided waves inside a pressurized steel pipe (Figure 1a). We
used a National Instruments PXI data acquisition device to excite a 300 kHz sinc
pulse from one PZT and measured the response from the other PZT. To simulate
damage, we placed a mass scatterer at six locations on the pipe surface, with three
near the transmitter (Zone 1), and three close to the receiver (Zone 2), as shown in
Figure 1.
The data was taken during 13 different collection events, each with 20 records.
Every record is a 10 ms long signal, sampled at 1 MHz. Over the 20 records in each
collection, the pipe was randomly pressurized or discharged from 0 to 110 PSI. The
first collection of measurements from an “undamaged” pipe (i.e., no mass scatterer
applied to the pipe) is regarded as baseline data. In SHM, a baseline is a known
signal collected when there is no damage in the structure under test and is used as a
reference to evaluate the present conditions of the structure. Therefore, the baseline
data will be excluded in both the training and testing set as machine learning
algorithms are applied. After measuring the baseline, the second collection is taken
with a grease-coupled mass on the pipe. The mass is then removed and another set of
undamaged data is taken; this is done to account for any changes due to the
grease coupling used. These two succeeding collections of damaged and undamaged
records are named “Measurement Set 1”. This process of placing and removing the
mass is repeated for another five locations, and six measurement sets (240 records) in
total are recorded, each with an equal number of undamaged and damaged records.
Figure 2 shows examples of one undamaged record and one damaged record.
The two signals are difficult to distinguish in either the time or frequency domain.
Moreover, the correlation coefficient is computed as a metric for the similarity between
the baseline measurement and each of the 240 measurements taken, with 1 indicating two
identical signals and 0 indicating no similarity. The pressure changes decrease the
correlation coefficient by roughly the same amount as damage does; the reduction
in correlation coefficient is subtle but variable over all the measurements (see Figure
3).
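This similarity metric can be sketched as a standard correlation coefficient (our illustration, not the authors' code; the signal values are hypothetical):

```python
def correlation_coefficient(baseline, record):
    """Pearson correlation coefficient between two equal-length signals.

    Returns 1.0 for identical signals and values near 0 for unrelated ones.
    """
    n = len(baseline)
    mb = sum(baseline) / n
    mr = sum(record) / n
    num = sum((b - mb) * (r - mr) for b, r in zip(baseline, record))
    db = sum((b - mb) ** 2 for b in baseline) ** 0.5
    dr = sum((r - mr) ** 2 for r in record) ** 0.5
    return num / (db * dr)

# Identical signals correlate perfectly.
sig = [0.0, 1.0, 0.5, -1.0, 0.25]
print(correlation_coefficient(sig, sig))  # 1.0 (up to rounding)
```

A value slightly below 1 can arise either from damage or from a benign pressure change, which is exactly the ambiguity Figure 3 illustrates.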
Figure 1. (a) Schematic of the steel pipe specimen and the mass locations (blue
crosses), showing the pressure gauge, PZT transmitter, PZT receiver, valve, and
Zones 1 and 2; and (b) photo of the mass.
Figure 2. (a) Received signal with no damage present, (b) amplitude spectrum of (a),
(c) received signal with damage (mass) present, and (d) amplitude spectrum of (c).
Figure 3. Correlation coefficients of the baseline and (a) each of the 120 measurements
with no damage present, and (b) each of the 120 measurements with damage present.
FEATURE EXTRACTION
Two types of features are considered: one type requires a baseline and the other is
independent of it. A total of 167 different features are extracted using signal processing
and machine learning tools, such as the Fourier transform, the Hilbert transform, Time
Reversal Focusing (TRF), TRCF, correlation, principal component analysis (PCA), and
the analysis of local maxima, as briefly detailed below.
Baseline-free features
112 baseline-free features are extracted from the time domain signal, the TRF
signal, the TRCF signal, the envelopes of the above three, and the amplitude spectrum.
The TRF and TRCF methods have been developed in our earlier work (Harley et al. 2009,
Ying et al. 2010a); the envelope and the amplitude spectrum of a signal can be computed
by using the Hilbert transform and the Fourier transform, respectively.
Peak amplitude and location features. The peaks of a complex signal indicate the arrival,
reflection, or conversion of wave modes. We would expect certain peaks to be affected
differently from others when damage is introduced. We search the signal for local
maxima to construct the features, including the number of local maxima, the
amplitudes and locations of the first three maxima, and the peak-to-peak amplitude.
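These peak features can be sketched as follows (our illustration with a toy signal; the paper's exact peak-search rules are not specified):

```python
def peak_features(signal):
    """Simple peak-based features of a sampled signal: number of local
    maxima, amplitudes and locations of the first three maxima, and the
    peak-to-peak amplitude."""
    # Local maxima: samples strictly greater than both neighbors.
    peaks = [(i, signal[i]) for i in range(1, len(signal) - 1)
             if signal[i - 1] < signal[i] > signal[i + 1]]
    first_three = peaks[:3]
    return {
        "num_peaks": len(peaks),
        "first_locations": [i for i, _ in first_three],
        "first_amplitudes": [a for _, a in first_three],
        "peak_to_peak": max(signal) - min(signal),
    }

feats = peak_features([0, 2, 1, 3, 0, 1, 0, 4, 2])
print(feats["num_peaks"], feats["peak_to_peak"])  # 4 4
```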
Statistical features. We extract the mean, median, standard deviation and kurtosis values,
for the signals and the amplitude spectrum, as well as for the locations and amplitudes of
all the peaks in different domains. Any shift, scale or conversion in wave modes may
change the distribution of energy across time or frequencies.
Curve length. The curve length of a signal is useful for describing the signal complexity
(Lu and Michaels 2009). A variation in curve length may be caused by changes in the
modal amplitudes or locations of waves. The curve length is also robust to time-scale
changes since the signal’s shape remains the same. The curve length of a discrete-time
signal x[n] with N samples is defined by

CL = Σ_{n=2}^{N} |x[n] − x[n−1]|
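A direct transcription of this definition (our sketch; the example signals are hypothetical):

```python
def curve_length(x):
    """Curve length of a discrete-time signal: the summed absolute
    differences between consecutive samples (Lu and Michaels 2009)."""
    return sum(abs(x[n] - x[n - 1]) for n in range(1, len(x)))

print(curve_length([0, 1, 0, 1, 0]))  # 4
print(curve_length([0, 2, 0, 2, 0]))  # 8: larger swings give a longer curve
```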
Baseline-dependent features
55 features are generated based on 11 baselines: the mean of the first collection and
the first 10 principal components of those measurements. PCA is used to uncover
certain properties of the signal that may better characterize the presence of damage, and
can be implemented by singular value decomposition or eigendecomposition.
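As an illustration of obtaining a principal component without a linear-algebra library, power iteration can stand in for the SVD or eigendecomposition (our sketch; the paper does not specify its implementation):

```python
def first_principal_component(records, iters=200):
    """First principal component of a set of equal-length records, found
    by power iteration on the (implicit) covariance matrix."""
    n, d = len(records), len(records[0])
    mean = [sum(r[i] for r in records) / n for i in range(d)]
    X = [[r[i] - mean[i] for i in range(d)] for r in records]  # centered data
    v = [1.0] * d
    for _ in range(iters):
        Xv = [sum(row[i] * v[i] for i in range(d)) for row in X]        # X v
        w = [sum(X[k][i] * Xv[k] for k in range(n)) for i in range(d)]  # X^T X v
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return v  # unit-length direction of maximum variance
```

With real scan records, the first few such components serve as the additional baselines described above.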
Shift-invariant correlation coefficient. The correlation coefficient between the
amplitude spectra of a measurement and a baseline is insensitive to time shifts:
ρ = ⟨|𝓕{x}|, |𝓕{b}|⟩ / (‖𝓕{x}‖ ‖𝓕{b}‖), where 𝓕 denotes the discrete Fourier
transform, and ‖·‖ is the Euclidean norm.
Differential curve length. Lu and Michaels (2009) showed that the differential curve
length was an excellent feature for damage detection. The feature is computed in the
same way as the curve length above, but with the residual signal (the difference
between a measurement and the baseline). In addition, the
curve length of the envelope of the residual signal is also obtained.
Mean square error (MSE). MSE is an important criterion for evaluating an estimator of a
true value. For our analysis, we utilize the MSE as a feature to measure the difference
between a discrete-time signal x[n] with N sampling points and the baseline b[n]:

MSE = (1/N) Σ_{n=1}^{N} (x[n] − b[n])²
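These two baseline-dependent features can be sketched as follows (our illustration; the signal values are hypothetical):

```python
def mse(x, b):
    """Mean square error between a record x and a baseline b."""
    n = len(x)
    return sum((x[i] - b[i]) ** 2 for i in range(n)) / n

def differential_curve_length(x, b):
    """Curve length of the residual signal x - b (after Lu and Michaels 2009)."""
    r = [xi - bi for xi, bi in zip(x, b)]
    return sum(abs(r[n] - r[n - 1]) for n in range(1, len(r)))

baseline = [0.0, 1.0, 0.0, -1.0, 0.0]
record = [0.0, 1.2, 0.1, -1.0, 0.0]   # slight change, e.g. from damage
print(mse(record, baseline), differential_curve_length(record, baseline))
```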
The soft-margin SVM with the radial basis function as kernel is implemented by
LIBSVM (Chang and Lin 2001).
One limitation of AdaBoost is that it can only linearly combine weak classifiers;
thus the final classifier may not necessarily be optimal (Shen et al. 2005, Morra et al.
2010). By contrast, SVM can effectively incorporate nonlinear combinations of features
through kernel functions. However, applying SVM to 167 features is not computationally
efficient and some features may create adverse effects in classification by adding noise.
As a result, we develop a combined method that uses AdaBoost to select principal
features, followed by SVM for classification. We define the principal features as the
features selected when the AdaBoost classifier reaches its lowest error rate after a certain
number of iterations. One issue with AdaBoost is that it allows the same feature to be
selected repeatedly, whereas using a feature more than once is of little use for
SVM (Morra et al. 2010). Therefore, we slightly modify the AdaBoost
algorithm to avoid repeated selection of the same feature when implementing
AdaSVM.
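A rough sketch of the feature-selection half of the combined method (our illustration, not the authors' implementation): a stump-based AdaBoost that never reuses a feature; the selected columns would then be handed to an SVM such as LIBSVM for classification.

```python
import math

def stump_error(col, labels, weights, thresh, sign):
    # Weighted error of the stump: predict `sign` if value > thresh, else -sign.
    err = 0.0
    for v, y, w in zip(col, labels, weights):
        pred = sign if v > thresh else -sign
        if pred != y:
            err += w
    return err

def adaboost_select(features, labels, rounds=5):
    """features: list of samples, each a list of feature values.
    labels: +1 / -1.  Returns distinct feature indices chosen by boosting
    with decision stumps (repeats skipped, as in the AdaSVM variant)."""
    n, d = len(features), len(features[0])
    weights = [1.0 / n] * n
    chosen = []
    for _ in range(rounds):
        best = None  # (err, feature, threshold, sign)
        for j in range(d):
            if j in chosen:          # modification: never reuse a feature
                continue
            col = [s[j] for s in features]
            for t in sorted(set(col)):
                for sign in (1, -1):
                    e = stump_error(col, labels, weights, t, sign)
                    if best is None or e < best[0]:
                        best = (e, j, t, sign)
        if best is None:
            break
        err, j, t, sign = best
        chosen.append(j)
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        # Reweight: misclassified samples gain weight.
        col = [s[j] for s in features]
        new_w = [w * math.exp(-alpha * y * (sign if v > t else -sign))
                 for v, y, w in zip(col, labels, weights)]
        z = sum(new_w)
        weights = [w / z for w in new_w]
    return chosen

# Feature 0 separates the classes; feature 1 is pure noise.
print(adaboost_select([[0, 5], [1, 5], [2, 5], [10, 5], [11, 5], [12, 5]],
                      [-1, -1, -1, 1, 1, 1], rounds=2))
```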
For cross-validation purposes, the three classifiers are applied to 30 tests with different
divisions of the training and testing sets. The trials are categorized into five groups of
six, according to the number of measurement sets used for training (from one to five).
All the remaining data records compose the testing set. We consider
several tests with a very small portion of the acquired data for training, given that in the
real world of SHM, we usually do not have a large amount of data to learn the damage
characteristics in a structure. The challenge is to determine how to make the most of the
limited information to maximize the probability of making a correct decision.
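The trial-generation scheme above can be sketched as follows (our illustration; the paper does not state how the six subsets per group were chosen, so random draws are an assumption):

```python
import itertools
import random

def make_trials(num_sets=6, trials_per_group=6, seed=0):
    """30 train/test divisions of the measurement sets: for k = 1..5
    training sets, pick six distinct subsets; the rest form the test set."""
    rng = random.Random(seed)
    trials = []
    for k in range(1, num_sets):
        combos = list(itertools.combinations(range(1, num_sets + 1), k))
        rng.shuffle(combos)
        for train in combos[:trials_per_group]:
            test = tuple(s for s in range(1, num_sets + 1) if s not in train)
            trials.append((train, test))
    return trials

trials = make_trials()
print(len(trials))  # 30
```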
Feature Selection
We apply AdaBoost to automatically rank principal features over the 30 trials.
The three most frequently selected features are the curve length of the time-domain
signal, the shift-invariant correlation coefficient of the second principal component,
and the amplitude of the third greatest peak of the time-domain signal. Figure 4
illustrates that the three features oscillate over all the measurement sets, but show
generally good separation between the 120 undamaged datasets in blue and the 120
damaged datasets in red.
Figure 4. Three normalized principal features selected by AdaBoost: (a) curve length of
the signal, (b) shift-invariant correlation coefficient of the signal and the baseline, and (c)
amplitude of the third greatest peak of the signal, over 240 measurements, with the
undamaged records in blue circles and the damaged records in red.
Figure 5. Normalized feature space with features (a) selected by AdaBoost (curve
length vs. amplitude of the third greatest peak), and (b) randomly selected (maximal
amplitude vs. number of peaks). Blue crosses: undamaged records for training; red
asterisks: damaged records for training; blue circles: undamaged records for testing;
and red circles: damaged records for testing.
Damage Detection
We show the results of damage detection by using three classification methods,
AdaBoost, SVM, and AdaSVM. Figure 6 shows the classification results of six trials
with four measurement sets for training and another two sets for testing. All the tests
lead to high accuracy, greater than 90% and with several at 100%, using any of the
three classifiers. Low false-positive rates (FPRs) and false-negative rates (FNRs) are
also shown in Figure 6. In addition, six complementary tests are conducted by
reversing the roles of the training and testing sets of the foregoing trials. Figure 7
shows that the performance of AdaBoost is weakened due to the reduction in the
number of training data cases, while SVM and AdaSVM still achieve relatively high
accuracy, ranging from 82.5% to 96.9%, and 83.1% to 100%, respectively.
Furthermore, Figure 8 shows the average performance of the three algorithms as
the number of training data cases increases. SVM and AdaSVM maintain more than
85% accuracy whether the training data are plentiful or scarce; AdaBoost gives more
than 95% accuracy when the training set consists of at least three measurement sets,
but its accuracy decreases rapidly as the number of training data cases is reduced. As
a rough evaluation of the classifiers, the average accuracy over all 30 trials is 87.7%,
92.5% and 93.5% for AdaBoost, SVM and AdaSVM, respectively. Combining
AdaBoost and SVM leads to superior performance.
Figure 6. Classification results of damage detection with four measurement sets (2/3
of all the datasets) for training, by using (a) AdaBoost, (b) SVM, and (c) AdaSVM.
Figure 7. Classification results of damage detection with two measurement sets (1/3
of all the datasets) for training, by using (a) AdaBoost, (b) SVM, and (c) AdaSVM.
Figure 8. Average performance of AdaBoost, SVM and AdaSVM versus the number
of measurement sets used for training.
CONCLUSIONS
Physical experiments were conducted on a pipe with varying internal
pressures and with a mass scatterer at six positions to simulate damage. Signal
processing and machine learning techniques have been applied to extract 167 features.
The curve length, the shift-invariant correlation coefficient and the amplitude of the
third peak in the time-domain signal were most frequently selected by AdaBoost as
principal features, giving good separation between the undamaged and damaged classes. Three
classification methods (adaptive boosting, support vector machines and a combination
approach of the two) have been investigated in order to detect the damage in the pipe.
These three classifiers provide average accuracies of 87.7%, 92.5% and 93.5%,
respectively, over 30 trials with different combinations of training and testing data.
The combined method is a promising classifier for damage detection.
ACKNOWLEDGEMENTS
The work is based on an earlier project (the Instrumented Pipeline Initiative)
that was supported by the Department of Energy through Concurrent Technologies
Corporation, and the work has been supported by an award from the Pennsylvania
Infrastructure Technology Alliance and by a gift from Westinghouse Electric
Company. The authors would also like to thank Professor Lawrence Cartwright at
Carnegie Mellon University for his advice on operating the experimental apparatus.
REFERENCES
Burges, C. J. (1998). “A tutorial on support vector machines for pattern recognition.”
Data mining and knowledge discovery, 2(2), 121–167.
Chang, C., and Lin, C. (2001). “LIBSVM: a library for support vector machines.”
Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
Cortes, C., and Vapnik, V. (1995). “Support-vector networks.” Machine Learning,
20(3), 273-297.
Freund, Y., and Schapire, R.E. (1997). “A decision-theoretic generalization of on-line
learning and an application to boosting.” Journal of Computer and System
Sciences, 55(1), 119–139.
Harley, J., O'Donoughue, N., States, J., Ying, Y., Garrett, J. H., Jin, Y., Moura, J. M.
F., Oppenheim, I. J., and Soibelman, L. (2009). “Focusing of Ultrasonic
Waves in Cylindrical Shells using Time Reversal.” Proceedings of the 7th
International Workshop on Structural Health Monitoring, Stanford, CA.
Lu, Y., and Michaels, J. E. (2009). “Feature Extraction and Sensor Fusion for
Ultrasonic Structural Health Monitoring Under Changing Environmental
Conditions.” Sensors Journal, IEEE, 9(11), 1462–1471.
Morra, J. H., Tu, Z., Apostolova, L. G., Green, A. E., Toga, A. W., and Thompson, P.
M. (2010). “Comparison of AdaBoost and support vector machines for
detecting Alzheimer's disease through automated hippocampal segmentation.”
IEEE Transactions on Medical Imaging, 29(1), 30-43.
Shen, L., Bai, L., Bardsley, D., and Wang, Y. (2005). “Gabor feature selection for
face recognition using improved AdaBoost learning.” Advances in Biometric
Person Authentication, 39–49.
Ying, Y., Harley, J., Garrett, J. H., Jin, Y., Moura, J. M., O'Donoughue, N.,
Oppenheim, I. J., and Soibelman, L. (2010). “Time reversal for damage
detection in pipes.” Proceedings of SPIE, 76473S.
Ying, Y., Soibelman, L., Harley, J., O'Donoughue, N., Garrett, J. H., Jin, Y., Moura, J.
M. F., and Oppenheim, I. J. (2010). “A Data Mining Framework for Pipeline
Monitoring Using Time Reversal.” Society for Industrial and Applied
Mathematics (SIAM) Conference on Data Mining (SDM10) -Workshop on
Data Mining for Smarter Infrastructure, Columbus, OH.
Using Electimize to Solve the Time-Cost-Tradeoff Problem in Construction
Engineering
ABSTRACT
Construction optimization problems are difficult to solve due to the enormous
number of parameters resulting from rapid advances in technology and the application
of sophisticated systems in construction projects. In the past few decades,
evolutionary algorithms (EAs) have served as good optimization techniques for
solving these problems. However, many EAs are limited in their ability to reach
optimality by the methods they use to evaluate candidate solution strings.
This paper presents a newly developed evolutionary algorithm, named
Electimize, with an application example on solving the construction time-cost-tradeoff
problem (TCTP). The new algorithm simulates the behavior of electrons moving
through electric circuit branches with the least resistance. Specifically, the paper
discusses: 1) the basic steps of optimization using Electimize, 2) TCTP modeling
using Electimize, and 3) comparison between performances of Electimize and other
EAs used to solve this problem. Electimize demonstrates an advantage over other
existing evolutionary algorithms in the method used for evaluating solution strings,
which is reflected by the better results obtained for the TCTP.
INTRODUCTION
LITERATURE REVIEW
duration, as shown in Figure 1. This has made the TCT problem a complex
optimization problem that has attracted numerous investigations in search of
optimality.
Solving TCT problems serves many purposes. It can be used to find the
optimum project duration, which corresponds to the least cost. The objective
could also be meeting the project’s deadline with the least cost (Hegazy and Ersahin 2001).
A third use of TCT is finding the project’s least cost regardless of the time it takes to
finish the project. Previous work applying EAs to the construction TCTP includes
Feng et al. (1997); Hegazy and Wassef (2001); Zheng et al. (2005); El-Rayes and
Kandil (2005); Elbeltagi (2005); and Elbeltagi et al. (2005).
“U” can be the time, cost, or an index combining both of them for the available
construction methods.
Figure 2. Solution String Represented as a Wire Composed of Multiple Segments
(e.g., the segment values 19, 25, 36, 17, 1, 11).
H = R_n / R_CW    (4)

r*_ml = [r_ml (1 + H)] · R_n / Σ_{m=1}^{M} [r_ml (1 + H)]    (5)

Where r*_ml = modified resistance of segment (m); r_ml = resistance of value (l_ml) of
segment (m) in the original wire (W_n); R_n = resistance of wire (W_n); and
R_CW = resistance of the control wire.
7. Updating resistances (r_ml) for the generated values: The resistance (r_ml) is
updated for each selected value (l_ml) of each segment (m) according to equation
(6). The length (b_ml) can then be calculated using equation (1). If a certain value
(l_ml) is used more than a specified number of times (set by the user), the
updated resistance r′_ml is multiplied by the Heat Factor to account for the pseudo-
resistance generated by the overuse of segments. Experimentation showed that
a suitable value for the Heat Factor lies in the range [0.4, 0.7).

r′_ml = r_ml + r*_ml    (6)

Where r′_ml = updated resistance for value (l_ml) of segment (m), and r_ml = resistance
for value (l_ml) of segment (m) from the previous iteration.
8. Selection of new values (l_ml) for the variables: The selection probability of a new
value is based on the calculated length (b_ml) of each value (l_ml). For maximization
problems, it can be calculated according to equation (7):

P_ml = (1/b_ml) / Σ_{l=1}^{L} (1/b_l)    (7)

Where P_ml = probability that value (l_ml) is selected for segment (m).
9. Algorithm Termination: The algorithm terminates after the stipulated number of
iterations is reached.
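The selection rule of equation (7) can be sketched as follows (our illustration; the length values b are assumed to be the segment lengths computed in step 7):

```python
def selection_probabilities(lengths):
    """Selection probability of each candidate value for a segment, per
    equation (7): P_l = (1/b_l) / sum_l(1/b_l).  Shorter segments
    (lower-resistance paths) are selected more often."""
    inv = [1.0 / b for b in lengths]
    total = sum(inv)
    return [x / total for x in inv]

probs = selection_probabilities([2.0, 4.0, 4.0])
print(probs)  # the shortest segment gets the highest probability
```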
PROBLEM MODELING
Figure 4. Wire Representation of a Project Activities-Execution Scenario: segment i
of the wire holds the index X_ij of the construction method selected for activity i,
for i = 1, …, K.
Minimize  Total cost = D·R + Σ_{i=1}^{K} C_ij + C_l − W    (8)

Where: D = project total duration; R = indirect cost per unit time; C_ij = direct cost of
activity (i) using construction method (j); K = number of activities; C_l = total
liquidated damages; and W = incentive for speedy performance.
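Evaluating equation (8) for one candidate execution scenario can be sketched as follows (our illustration; serial activities are a simplifying assumption for the sketch, since real projects would take D from the CPM network, and all numbers are hypothetical):

```python
def total_cost(durations, direct_costs, methods, indirect_rate,
               deadline=None, damages_per_day=0.0, incentive_per_day=0.0):
    """Total project cost per equation (8): indirect cost D*R plus direct
    costs, plus liquidated damages beyond a deadline, minus an incentive
    for early completion.  Activities are treated as serial here."""
    D = sum(durations[i][m] for i, m in enumerate(methods))
    direct = sum(direct_costs[i][m] for i, m in enumerate(methods))
    late = max(0, D - deadline) if deadline is not None else 0
    early = max(0, deadline - D) if deadline is not None else 0
    return (D * indirect_rate + direct
            + late * damages_per_day - early * incentive_per_day)

# Two activities, two construction methods each (days, $).
durations = [[10, 7], [8, 5]]
direct_costs = [[1000, 1500], [800, 1400]]
print(total_cost(durations, direct_costs, methods=[0, 1], indirect_rate=100,
                 deadline=16, damages_per_day=200))  # 3900
```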
APPLICATION EXAMPLE
For application, a case study was selected from the literature. The case study
is a construction project composed of 18 activities. There are different construction
methods available for executing each activity. The maximum number of construction
methods available for a single activity is five, while the least number available is two.
The time and cost of each construction method are given. The objective is to find the
optimum project completion time, which corresponds to the least cost, using different
combinations of the activities’ construction methods. The problem at hand has
4.72 × 10^9 possible solutions.
The problem was first solved using linear and integer programming (Burns et
al. 1996); reattempted using GAs (Feng et al. 1997); resolved using ant colony
optimization (Elbeltagi 2005); and reattempted using five different evolutionary
algorithms (Elbeltagi et al. 2005). The data for the problem can be easily obtained
from the literature.
Table 1. Summary of Parameter Values of Different EAs Used to Solve the TCTP

Algorithm               Best Cost   Least Time (Days)   No. of Iterations   No. of Solution Strings   Unit of Sol. Strings   Attempt By
Electimize              161,270     110                 1                   30                        Wire                   Abdel-Raheem & Khalafallah
Genetic Algorithms      NA          NA                  50                  400                       Chromosome             Feng et al.
Genetic Algorithms      162,270     113                 Unlimited           500                       Chromosome             Elbeltagi et al.
Ant Colony Algo.        161,270     110                 100                 30                        Ant                    Elbeltagi et al.
Memetic Algorithm       161,270     110                 Unlimited           100                       Chromosome             Elbeltagi et al.
Particle Swarm          161,270     110                 10,000              40                        Particle               Elbeltagi et al.
Shuffled Frog Leaping   162,020     112                 10                  200                       Frog                   Elbeltagi et al.
CONCLUSION
REFERENCES
ABSTRACT
INTRODUCTION
Tracking Jib Rotation. Automated estimation of the jib rotation angle is
performed using a 3D crane model in combination with the camera calibration
parameters, which together allow 3D pose estimation techniques to provide the
jib rotation. 3D pose estimation
algorithms first generate renderings of the 3D crane as perceived by a camera with
known parameters. These rendered images are then compared to the actual
observed image. The estimated geometry is correct when the rendered image and
the captured image are in agreement.
The rendered image is not a true-life rendering of the work-site as seen in
Figure 1(a), but is instead a rough approximation that depicts only the image model,
here a crane. A crude crane model was generated by surveying the installed crane
using a robotic total station (RTS). The model is depicted in Figure 1(b). Using
the known camera calibration configuration, a rendering of the crane jib generates a
predicted image of the scene given a specified jib rotation angle. Figure 2 depicts
several such simulated renderings given distinct jib rotation angles.
The renderings only depict the crane jib, and exclude both the tower and the
tower mast. These binary images must be compared to the actual captured camera
image. Given that the image is far richer than the simulated rendering,
pre-processing algorithms convert the captured image into a binary image. The process
incorporates three steps, each described in the following paragraphs: 1) cropping the
image to the crane jib’s operational region, 2) a sky elimination step that identifies
the sky regions, and 3) a background subtraction step that isolates the crane arm from
other static elements of the image.
The first and simplest step isolates the image region that the crane arm could
realistically occupy. In this case, that corresponds roughly to the top quarter of the
image (100 lines of 480 lines). This step is depicted in the top row of Figure 3.
The second step converts the image to grayscale and applies Otsu's thresholding
algorithm (Otsu 1979) to the image in order to isolate and remove the sky regions of
the image. Results for various sky conditions are depicted in the second row of
Figure 3. The remaining visual elements are the tower crane and buildings.
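Otsu's thresholding step can be sketched in pure Python (a minimal illustration of the histogram-based criterion, not the paper's implementation; the pixel values are hypothetical):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method (Otsu 1979): choose the threshold that maximizes
    the between-class variance of the grayscale histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = 0        # background pixel count
    sum0 = 0      # background intensity sum
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal image: dark jib/buildings vs. bright sky.
pixels = [20] * 50 + [30] * 50 + [200] * 60 + [210] * 40
print(otsu_threshold(pixels))  # 30: the split falls between the two modes
```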
The last step of the process consists of foreground detection through the use
of a background subtraction algorithm. The algorithm utilized is the single
Gaussian background modeling algorithm (Wren et al. 1997), which models the
background as an image whose intensities obey a Gaussian distribution. Associated
to each pixel are mean and covariance values. The expected image is generated by
the mean values. When a new image is captured, converted to grayscale, and has
the sky regions removed, it is then compared against the Gaussian model. Pixels are
outliers if they have low likelihood of belonging to the Gaussian model, e.g., if they
lie too many standard deviations away from the mean. Here, the threshold was 2
standard deviations. The last row of Figure 3 depicts the classified statistical
outliers to the Gaussian image model. What mostly remains is the crane jib.
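The per-pixel Gaussian model and 2-standard-deviation outlier test can be sketched as follows (our illustration; the frames are toy flat lists of intensities, not real camera data):

```python
def fit_background(frames):
    """Per-pixel mean and standard deviation over a stack of grayscale
    frames (each a flat list of intensities), as in a single-Gaussian
    background model (Wren et al. 1997)."""
    n, d = len(frames), len(frames[0])
    mean = [sum(f[i] for f in frames) / n for i in range(d)]
    std = [max((sum((f[i] - mean[i]) ** 2 for f in frames) / n) ** 0.5, 1.0)
           for i in range(d)]   # floor the std to avoid zero-variance pixels
    return mean, std

def foreground_mask(frame, mean, std, k=2.0):
    """Mark pixels lying more than k standard deviations from the mean."""
    return [abs(v - m) > k * s for v, m, s in zip(frame, mean, std)]

background = [[100, 100, 50, 50], [102, 98, 52, 48], [98, 102, 48, 52]]
mean, std = fit_background(background)
print(foreground_mask([100, 100, 200, 50], mean, std))  # only pixel 3 changed
```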
The rendered binary image is compared to the processed surveillance camera
image to see how well the two images match. Let S_θ denote the binary silhouette
generated from the 3D model with jib angle θ, and let I denote the processed
camera image. The overlap energy between these two binary images is defined as

E(θ) = Σ_p S_θ(p) · I(p)

where the sum runs over all image pixels p. Searching through all possible rotation
angles can be exhaustive given the need to render the images. Since the target
application is tracking, we can assume that the angle from the previous frame is
known and the current angle is sought. As the crane has a finite angular rate of
change, the set of angles reachable from one frame to the next is limited. Thus a
window-based search is applied to find the angle that maximizes the matching energy,

θ_t = argmax E(θ) over θ ∈ [θ_{t−1} − r, θ_{t−1} + r]

where θ_{t−1} is the jib angle from the previous frame and r is the search radius. The
angular search range is discretized with step size Δθ.
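The window-based search can be sketched as follows (a toy illustration; the one-dimensional "renderer", angles, and image format are hypothetical stand-ins for the 3D rendering pipeline):

```python
def overlap_energy(silhouette, observed):
    """Overlap energy between two binary images (flat 0/1 lists):
    the count of pixels lit in both."""
    return sum(s * o for s, o in zip(silhouette, observed))

def track_angle(render, observed, prev_angle, radius=10.0, step=1.0):
    """Window-based search: render silhouettes for angles within `radius`
    degrees of the previous frame's angle and keep the best match.
    `render(angle)` is a stand-in for rendering the crane jib model."""
    best_angle, best_e = prev_angle, -1
    a = prev_angle - radius
    while a <= prev_angle + radius:
        e = overlap_energy(render(a), observed)
        if e > best_e:
            best_e, best_angle = e, a
        a += step
    return best_angle

# Toy renderer: a 1-D "image" whose lit pixel index encodes the angle.
def render(angle):
    img = [0] * 36
    img[int(round(angle)) % 36] = 1
    return img

observed = render(27)              # true jib angle is 27 degrees
print(track_angle(render, observed, prev_angle=25))  # 27.0
```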
(a) Imaged crane structure and site layout. (b) Simple 3D model of crane.
Figure 1. Surveillance camera view of worksite and a wireframe rendering of the
crane model.
Figure 2. Black and white renderings of only the crane jib at various rotation
angles.
Figure 3. Results of the image processing. (The top row depicts the cropped image.
The second row shows the result after sky removal via Otsu's thresholding method.
The last row shows the final jib segmentation after background removal.)
Tracking Trolley Position. As seen in Figure 1(b), the trolley appears as a small,
dark quadrilateral region along the crane jib in the captured image. Geometric,
model-based approaches do not work well for objects with the visual characteristics
of the trolley and the resolution of the imagery. Since color cues are the primary
mechanism for identifying the trolley visually, a color-density approach to tracking is
proposed. The target model description found in (Comaniciu et al. 2003) will be the
model followed in this paper, which builds a histogram of the target given a template.
The histogram defines a quantized density estimate of the target appearance
probability density function. When performing tracking, the density estimate is
augmented with a spatial kernel density function k(·), and the Bhattacharyya
measure between two densities provides the similarity score. Given a candidate
target location u, the density p(u) associated with the corresponding extracted image
sample is compared with the target model q; the Bhattacharyya coefficient

ρ(u) = Σ_b √(p_b(u) · q_b)

(summing over histogram bins b) provides a value of how well the two distributions
match. The value approaches one when the distributions match, and approaches
zero when they differ substantially.
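The Bhattacharyya coefficient between two normalized histograms can be sketched as follows (our illustration; the histograms are hypothetical):

```python
def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms:
    1.0 for identical distributions, near 0 for disjoint ones."""
    return sum((pi * qi) ** 0.5 for pi, qi in zip(p, q))

target = [0.5, 0.3, 0.2, 0.0]      # e.g. colour histogram of the trolley
same = [0.5, 0.3, 0.2, 0.0]
different = [0.0, 0.0, 0.0, 1.0]
print(bhattacharyya(target, same))       # approx. 1.0
print(bhattacharyya(target, different))  # 0.0
```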
The current trolley position is estimated from the previous trolley position by
comparing the Bhattacharyya measure for nearby trolley positions. The windowed
search procedure seeks to optimize the Bhattacharyya coefficient over the trolley
location in the image,

u_t = argmax ρ(u_{t−1} + s·v) over s ∈ [−r, r]

where v is the search direction in the image, r is the trolley search radius in pixels,
and u_{t−1} is the previous trolley position. The search range is
sampled at one-pixel intervals along the vector direction v. The vector direction v
gives the direction of expected motion of the trolley, computable via the crane model,
for the current jib angle θ. Once the best location u_t is found, it is transformed
from image coordinates to 3D world coordinates corresponding to where the trolley
would be on the crane jib, using the known crane geometry. The two measurements,
jib angle and trolley jib translation, provide a polar coordinate description of the
crane load with the origin being the center of rotation. Transformation from polar to
Cartesian coordinates gives the 2D position of the crane hook in the plan view.
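A sketch of the one-pixel windowed search and the final polar-to-Cartesian conversion follows. The `score` callback (the Bhattacharyya similarity at an image location) and all names are illustrative assumptions, not the paper's implementation.

```python
import math

def track_trolley(score, prev_pos, direction, radius):
    """1D windowed search for the trolley along the expected motion direction.

    `score(x, y)` returns a similarity value at an image location
    (hypothetical callback); `direction` is a unit vector along the imaged
    jib; the window [-radius, radius] is sampled at one-pixel intervals
    around the previous trolley position `prev_pos`.
    """
    best, best_score = prev_pos, float("-inf")
    for s in range(-radius, radius + 1):
        x = prev_pos[0] + s * direction[0]
        y = prev_pos[1] + s * direction[1]
        value = score(x, y)
        if value > best_score:
            best, best_score = (x, y), value
    return best

def hook_plan_position(jib_angle_deg, trolley_offset):
    """Polar (jib angle, trolley travel along jib) to plan-view Cartesian,
    with the origin at the crane's center of rotation."""
    a = math.radians(jib_angle_deg)
    return trolley_offset * math.cos(a), trolley_offset * math.sin(a)
```

The two measurements (jib angle, trolley offset) form the polar coordinates; `hook_plan_position` performs the conversion described in the text.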
ACTIVITY INFERENCE
Once the jib angle and trolley position are known, and thus also the crane
hook location in the plan view, the activities of the crane can be decoded. As a
lifting machine which moves materials around the construction site, a tower crane's
activity can be clearly defined as loading, lifting and unloading materials. A static
crane occurs during loading, unloading or transitions between the two, while a
moving crane corresponds to lifting. Since the material being lifted is not actively
classified, the site layout plans will be exploited to infer the tower crane’s activities.
Inferring the crane activities requires knowledge of the site layout and the
functions associated to different regions of the site space. Many construction sites
have site layout plans that describe the intended use of the construction work space,
which includes a plan view description of the extents of the as-built structure,
expected roadways, laydown yards, and permissible crane flyby zones. Figure 1(a)
shows the building construction site divided into three major zones: driveway,
parking lot, and working zone. The site's logistics plan, Figure 4, indicates that the
concrete mixer is allocated two spots along the driveway, where it serves the crane.
The coverage of the crane jib is a circle centered at the tower mast. A mapping
between the crane jib rotation angle and the function area is defined in Figure 4(b).
Empirically, the crane loads materials from the storage area to unload in the work
zone. Ideally, if the storage area is organized with sub-areas for distinct materials
types, the material type lifted by the crane could be inferred by area. Based on our
observations of the available surveillance video, that is not the case. However, out
of all the materials lifted, the concrete bucket is unique. The concrete mixers are
located at specific spots of the work site, thus distinguishing the concrete pouring
activity from other materials lifting tasks. Hence, crane activity is naturally
categorized into concrete pouring and non-concrete pouring.
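The mapping from jib rotation angle to functional area could be sketched as an interval lookup like the following. The zone boundaries here are illustrative placeholders, not the actual site plan of Figure 4(b).

```python
def zone_for_angle(angle_deg, zones):
    """Look up the functional area under the jib for a given rotation angle.

    `zones` maps labels to (start, end) angle intervals in degrees;
    intervals may wrap through 0 degrees. Angles outside every zone fall
    in the default flyby area.
    """
    a = angle_deg % 360.0
    for label, (lo, hi) in zones.items():
        if lo <= hi:
            inside = lo <= a < hi
        else:  # interval wraps through 0 degrees
            inside = a >= lo or a < hi
        if inside:
            return label
    return "flyby"
```

With this lookup, a sequence of jib angles translates directly into a sequence of visited zones (mixer, work zone, storage), which is what the activity inference consumes.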
Based on the activity categories and the crane action modes (lifting, loading,
unloading), a natural model for describing the crane activity is the finite state
machine. A finite state machine (FSM) is a mathematical behavior model composed
of a finite number of states. Transitions between states happen when certain conditions are met. Examples of FSMs applied to visual gesture recognition include Davis and Shah (1994) and Hong et al. (2000). Inspired by their work, we introduce the FSM to construction activity analysis. As shown in Figure 5, the FSM
model for concrete pouring has four states which happen in a fixed order: loading
concrete at the mixer, moving from the mixer to work zone, unloading concrete at the
work zone and moving back to the mixer from the work zone. Transitions between
states are determined by motion of the crane jib. To be robust to the measurement
noise, two thresholds are defined: one for the instantaneous angular speed and one for the state transition. When the instantaneous angular speed exceeds the speed threshold, the crane jib is considered to be moving. To generate a transition, the transition event has to be detected continuously for a specified amount of time (the time threshold).
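A minimal sketch of such a four-state FSM with the two thresholds is shown below. The threshold values and class structure are illustrative, not those used in the paper; frame-count persistence stands in for the time threshold.

```python
class CraneFSM:
    """Four-state concrete-pouring cycle driven by jib angular speed.

    The jib counts as moving when |angular speed| exceeds speed_thresh;
    a transition fires only after the moving/static condition contradicts
    the current state for hold_frames consecutive frames (illustrative
    stand-in for the time threshold).
    """
    STATES = ["loading", "to_work_zone", "unloading", "to_mixer"]

    def __init__(self, speed_thresh=0.5, hold_frames=3):
        self.speed_thresh = speed_thresh
        self.hold_frames = hold_frames
        self.state_idx = 0   # start at loading concrete at the mixer
        self.count = 0       # consecutive frames contradicting the state

    @property
    def state(self):
        return self.STATES[self.state_idx]

    def update(self, angular_speed):
        moving = abs(angular_speed) > self.speed_thresh
        # States alternate static (loading/unloading) and moving (transport),
        # so a sustained change in the moving flag is the transition event.
        expect_moving = self.state_idx % 2 == 1
        self.count = self.count + 1 if moving != expect_moving else 0
        if self.count >= self.hold_frames:
            self.state_idx = (self.state_idx + 1) % 4
            self.count = 0
        return self.state
```

Feeding the per-frame angular speed into `update` yields the activity label over time, in the fixed cycle order described above.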
EXPERIMENTAL RESULTS
The experimental data was obtained from a surveillance camera on the Georgia Tech campus that monitors the construction of a building from the roof of a nearby building. Access to the roof is possible, allowing
for measurement of the crane and the camera parameters. A robotic total station
(RTS) was used to measure the necessary 3D points. The video sequence, taken from 1:03 PM to 2:11 PM with a capture period of 4 seconds/frame, totals 1377 frames.
The crane’s activities as per the finite-state machine are given in Figure 6,
with a breakdown of the activity timing shown in Table 1. Note that manual ground
truth of the crane activity state matches that of the algorithm. Further, the time spent on the concrete pouring activities within each cycle is consistent, which is expected when the crane is operated by an experienced operator. Furthermore, the time spent on concrete loading is shorter than that on concrete pouring, and the time spent moving the bucket to the working zone is longer than that moving it back to the mixer.
(a) Site logistics plans. (b) Plan view of crane activity zones.
Figure 4. Understanding crane activities by incorporating information regarding
site logistics plans.
Figure 6. Crane activities over time according to the finite state machine model.
CONCLUSION
This paper illustrated the use of computer vision algorithms for construction
project analysis. A visual tracking algorithm for the tower crane, coupled with a finite state machine over activity states, enabled construction activity understanding.
Tower crane activity was categorized into concrete pouring and non-concrete
pouring. Experimental results show that the visual tracking algorithm is able to track
the tower crane while the finite-state machine distinguishes the crane activities.
Future work seeks to consider additional activities. We hypothesize that a
Bayesian inference algorithm that optimally processes the track signal using a
collection of past measurements, rather than simply the current measurement, will
accurately detect different activities. Further, future work also seeks to actively
detect and classify the load to more accurately assess the activity state.
REFERENCES
Abdelhamid, T. S., and Everett, J. (1999). "Time Series Analysis for Construction
Productivity Experiments." Journal of Construction Engineering and
Management., 125, 87-95.
Comaniciu, D., Ramesh, V., and Meer, P. (2003). "Kernel-Based Object Tracking."
IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(5), 564-577.
Davis, J., and Shah, M. (1994). “Visual Gesture Recognition.” Image and Signal
Processing, 141(2), 101-106.
Everett, J., and Slocum, A. (1993). "Cranium: Device for Improving Crane
Productivity and Safety." Journal of Construction Engineering and Management.,
119(1), 23-39.
Gong, J., and Caldas, C. H. (2010). "Computer Vision-Based Video Interpretation
Model for Automated Productivity Analysis of Construction Operations." Journal
of Computing in Civil Engineering, 24(3), 252-263.
Hong, P., Huang, T., and Turk, M. (2000). “Gesture Modeling and Recognition using
Finite State Machines.” 4th IEEE International Conference on Automatic Face
and Gesture Recognition, 410-415.
Ju, F., and Choo, Y. S. (2005). "Dynamic Analysis of Tower Cranes." Journal of
Engineering Mechanics, 125(1), 88-96.
Otsu, N. (1979). "A Threshold Selection Method from Gray-Level Histograms."
IEEE Transactions on Systems, Man, and Cybernetics, 9, 62-66.
Park, M. W., Makhmalbaf, A., and Brilakis, I. (2010) "2D Vision Tracking Methods'
Performance Comparison for 3D Tracking of Construction Resources." ASCE
Construction Research Congress, Banff, Canada., 459-469.
Shapira, A., Rosenfeld, Y., and Mizrahi, I. (2008). "Vision System for Tower Cranes."
Journal of Construction Engineering and Management., 134, 320-332.
Tantisevi, K., and Akinci, B. (2008). "Simulation-based Identification of Possible
Locations for Mobile Cranes on Construction Sites." Journal of Computing in
Civil Engineering, 22, 21-30.
Teizer, J., and Vela, P. A. (2009). "Personnel Tracking on Construction Sites Using
Video Cameras." Advanced Engineering Informatics, 23(4), 452-462.
Wren, C. R., Azarbayejani, A., Darrel, T., and Pentland, A. P. (1997). "Pfinder: Real-
Time Tracking of the Human Body." IEEE Transactions on Pattern Analysis and
Machine Intelligence, 19(7), 780-785.
Yang, J., Arif, O., Vela, P. A., Teizer, J., and Shi, Z. (2010). "Tracking Multiple
Workers on Construction Sites Using Video Cameras." Advanced Engineering
Informatics, 24(4), 428-434.
Design of Optimization Model and Program to Generate Timetables for a Single
Two-Way High Speed Rail Line under Disturbances
1Graduate Research Assistant, Department of Civil Engineering, National Central
University, 300 Jhongda Rd., Jhongli, Taoyuan 32001, Taiwan; (03) 422-7151 ext.
34150; email: 993402004@cc.ncu.edu.tw, 993402003@cc.ncu.edu.tw,
983402006@cc.ncu.edu.tw
2Associate Professor, Department of Civil Engineering, National Central University,
300 Jhongda Rd., Jhongli, Taoyuan 32001, Taiwan; (03) 422-7151 ext. 34132; FAX
(03) 425-2960; email: ccchou@ncu.edu.tw
ABSTRACT
INTRODUCTION
grown by over 40% in both freight and passenger sectors over the past 10 years. All
railway companies try to provide good services in order to satisfy their customers.
One way to realize this is by improving the quality of the train control process or
scheduling, so that the railway company can optimize these services as well.
In addition, the train timetable is the basis for performing the train operations. It
contains information regarding the topology of the railway, train number and
classification, arrival and departure times of trains at each station, arrival and
departure paths, etc. More formally, the train scheduling problem is to find an
optimal train timetable, subject to a number of operational and safety requirements.
Due to the limited resources of railway companies, managing circulation of trains
becomes important, including turning back operations, regular inspection and car
cleaning times. Any solution that ignores train circulation requirements is
unreasonable to train companies. The Taiwan High-Speed Railway System already
has cyclic patterns of daily train circulation, but these patterns have not been
modeled yet. Moreover, based on a review of the literature, researchers in the railway
field have never considered train circulation, especially in high-speed rail systems,
even though it is an important requirement. Therefore, a scheduling model needs to be formulated that can accommodate not only basic requirements (railway topology, traffic rules, and user requirements) but also train circulation requirements.
Furthermore, based on the data in the contingency timetable, THSR prefers to cancel many trains and operate only two trains per hour in many cases of disturbance. On the other hand, creating an optimal timetable, which means optimal
journey time, is important since the THSR Company has to preserve the maximal
profit during disturbances. In addition, in order to mitigate the impact of disturbances
instead of cancelling many trains on their system, THSR needs a method for analyzing how disturbances propagate within the original timetable and deciding which actions to take. In the end, the train operator could predict the effects of disruptions
on the timetable without doing real experiments.
MODEL DEVELOPMENT
etc), and the schedules of the trains that use this topology (arrival and departure times
of each train at stations, dwell-times, crossing times, regular inspection times, and
turning back operation). The timetabling design in this research is described as follows: given the THSR railroad system and a set of services, the problem is to produce a timetable as well as a track assignment plan for these services.
The goals of the optimization model in this research are to let the trains depart as close to their target departure times as possible, while at the same time minimizing the operation times of services. Since the operation times of each train as well as
required headway between consecutive trains depend on the track assignment,
railway topology and train circulation issues have to be considered simultaneously to
obtain a realistic result which is close to the real timetable.
Suppose a railway system with r stations, n trains going down and m trains going
up. Minimizing the operation times for all trains means minimizing the journey times
(arrival and departure times) for all trains going-down, initialized as i (1 to n) plus
the journey time of trains going-up as j (1 to m) in every station (1 to r). Thus, the
mathematical expression of the objective function in this research is presented as Equation 1 below:

Min Σ_{i=1..n} (TiAr − TiD1) + Σ_{j=1..m} (TjA1 − TjDr)   (1)
The variables of this research are the journey times (arrival and departure times) of all trains, with travel time, station time, headway, car cleaning time, regular inspection time and turning back operation as parameters. Variables and parameters will be
explained as constraints below. Travel time constraints restrict the minimum time to travel between two contiguous stations (k to k+1) for all trains going down, indexed i (1 to n), and trains going up, indexed j (1 to m).
As represented by Equation 2, the arrival time for train i at station k+1 minus its departure time at station k (origin station) should be greater than or equal to the time needed for train i to travel between the two contiguous stations (k to k+1). Likewise, the arrival time for train j at station k minus its departure time at station k+1 should be greater than or equal to the time needed for train j to travel between the two contiguous stations (k+1 to k). This research uses the minimum travel time between two contiguous stations, because different types of trains have different speeds and travel times would differ accordingly. As explained before, running time is calculated from departure times in the timetable minus dwell times. Therefore, the station time for each train i and j at station k (1 to r) should be greater than the departure time minus the arrival time, as shown in Equations 4 and 5. This condition means that the model uses the maximum station time at each station, because not all trains stop at every station.
TiDk − TiAk ≤ TSik × CSik   (4)
TjDk − TjAk ≤ TSjk × CSjk   (5)
variable for the availability of track in one segment. The value is 1 if there is a track available between station k and k+1, and 0 otherwise. Travel time in line determines the total travel time for one train to travel through a line southbound or northbound, plus an allowed time margin.
A maximum travel time has been applied in the model; thus, the difference between the arrival and departure times for one train over the whole line should be less than or equal to this travel time, as formulated in Equations 10 and 11 below:
TiAr − TiD1 ≤ (1 + margin/100) × time_i1r   (10)
TjA1 − TjDr ≤ (1 + margin/100) × time_jr1   (11)
In the THSR system, allowed time margin was set to different numbers for
different types of train. Therefore, this parameter would be a good input in sensitivity
analysis to reveal the effects of changes in this parameter on objective value.
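To make the constraint structure concrete, the following is an illustrative check of a toy one-direction schedule against the travel-time, station-time (Equation 4) and line travel-time (Equation 10) constraints, together with the journey-time objective. All names and numbers are hypothetical; a real model would hand these constraints to a MIP solver rather than merely evaluate them.

```python
def total_operation_time(arr, dep):
    """Objective for down trains: sum of journey times TiAr - TiD1
    (arrival at the last station minus departure from the first)."""
    return sum(a[-1] - d[0] for a, d in zip(arr, dep))

def feasible(arr, dep, travel, max_station, margin_pct, line_time):
    """Check one direction of a toy schedule.

    arr/dep: per-train lists of arrival/departure times at each station;
    travel[k]: minimum travel time between stations k and k+1;
    max_station[k]: maximum station (dwell) time at station k;
    margin_pct, line_time: allowed margin (percent) and nominal line time.
    """
    for a, d in zip(arr, dep):
        # travel time between contiguous stations k and k+1
        if any(a[k + 1] - d[k] < travel[k] for k in range(len(travel))):
            return False
        # dwell at each station bounded by the maximum station time
        if any(d[k] - a[k] > max_station[k] for k in range(len(a))):
            return False
        # total journey time within line time plus allowed margin
        if a[-1] - d[0] > (1 + margin_pct / 100.0) * line_time:
            return False
    return True
```

Minimizing `total_operation_time` subject to `feasible` returning true is the shape of the optimization the model formulates.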
The solver reported a 0.02% optimality gap, and the results were obtained by cutting off nodes during the branch-and-cut search: when a node's bound exceeds the cutoff value, the node can be pruned without being solved to optimality. The computation time to solve the model was 7442.51 seconds, and the optimal performance time is 6,438 minutes of minimum total
operation time in the THSR system. The timetable diagram contains information
regarding departure and arrival times of trains at each station. From the output
departure and arrival times for each train at stations, we developed the timetable
diagram in Figure 1(a). The new timetable after rescheduling, which assumes a disruption at station 3 at time 100, is shown in Figure 1(b).
Figure 1. Timetable diagram: (a) original model, and (b) rescheduling model.
A program for timetable rescheduling was developed. The train operator can input the time when the disaster occurred, and the locations of the trains are shown in Figure 2. The train operator can also input the parameters for the relevant conditions during the disaster, and the new train timetable is generated by running the proposed program. Figure 3 shows the new train timetable produced by rescheduling in the program.
CONCLUSIONS
ACKNOWLEDGEMENTS
The authors would like to express sincere thanks to Mr. Te-Che Chen, who is a
senior engineer at Taiwan High-Speed Railway Corporation and currently a Ph.D.
student under the guidance of Dr. Chien-Cheng Chou, for providing real data and
invaluable suggestions.
REFERENCES
ABSTRACT
Automated motion classification of construction workers/equipment from
videos is a challenging problem, but has a wide range of potential applications in
construction. These applications include, but are not limited to, enabling rapid
construction operation analysis and ergonomic studies. This research explores the
potential of an emerging motion analysis framework, bag of video feature words, in
learning and classifying workers and heavy equipment motions in challenging
construction environments. We developed a test bed that integrates the bag of video
feature words with a Bayesian learning method, and evaluated the performance of
this motion analysis approach on two video data sets. For each video data set, a
number of motion models are learned from the training video segments and applied to
the testing video segments. Compared to previous studies of construction
worker/equipment motion classification, this new approach can achieve good
performance in learning and classifying multiple motion categories while robustly
coping with the issues of partial occlusion, view point and scale changes.
INTRODUCTION
Video has become an easily captured and widely distributed medium serving construction method analysis and worker ergonomic studies in the construction industry. The associated demand for reducing the burden of manual
analyses in retrieving information from video motivates further research in automated
construction video understanding.
Recent studies have focused on leveraging computer vision algorithms to
automate the manual information extraction process in analyzing recorded videos
(Teizer and Vela 2009; Jog et al. 2010; Zou and Kim 2007; Peddi et al. 2009; Gong
and Caldas 2010). However, despite considerable progress in construction object
tracking, classifying the motion of construction workers or construction equipment in single-view video, especially beyond simple categories like working and not working, remains a hurdle for reaping the full benefits of video-based analysis in
method studies and worker ergonomic studies. Robust motion analysis algorithms
that are capable of differentiating subtle motion categories and handling scene clutter,
occlusion, and view point changes are essential to overcome such a hurdle. However,
there are no reported studies that have developed algorithms with the above
capabilities in a challenging construction environment.
RELATED WORK
Computer vision algorithms can be widely used in construction to improve a
variety of manual processes if the problem of reliable recognition and tracking of
objects on construction jobsites can be solved. In this regard, many recent studies
have focused on evaluating the performance of existing vision recognition and
tracking algorithms in construction environments (Weerasinghe and Ruwanpura 2010;
Teizer and Vela 2009; Jog et al. 2010). For automated productivity measurement using videotaping, there are so far three main approaches. They include
detecting the movement of construction resources (Zou and Kim 2007), recognizing
and tracking the trajectories of construction resources (Gong and Caldas 2010), and
recognizing worker gestures (Peddi et al. 2009). In particular, Peddi et al. (2009)
proposed to use a wireless camera to develop a real-time productivity measurement
system based on human poses for bridge replacement. In this study, background
subtraction was used to extract human pose at each frame, and a neural network was
used to train models for classifying worker performance into three classes including
effective work, ineffective work, and contributory work. This approach was only
tested on a bridge deck placement activity, and its performance on other video data
sets remains to be seen. Furthermore, it is likely that similar human gestures can belong to different categories as defined above. Besides these efforts, Gonsalves
and Teizer (2009) studied human motions such as walking and running using 3D
ranging cameras. However, the performance of the algorithms is not reported. As an
extension of the work reported in Gong and Caldas (2010), this research focuses on a
general framework of classifying motions of construction objects into intrinsic
categories pertaining to the activity in which the objects are engaged. We are
interested in a method that can classify motions into a level of detail that is
comparable to crew balance analysis or manual ergonomic studies. To date, this type of method remains to be found in the construction research domain.
By using this method, a large set of features (typically on the order of 10^5) and
their associated descriptors can be computed to represent the visual contents of the
videos. These features and descriptors are analogous to the words in a document. The
number of features produced in each video sequence depends on many factors, such
as the resolution and action type. This leads to another important step, codebook
formation, before these features can be effectively used for action classification.
are the entries in the dictionary. Video sequences for different actions exhibit these entry words at different frequencies. Each of the video sequences can be
represented as a bag of video feature words. A particular distribution of entry words
for each action category can be learned from a training data set.
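The representation-and-classification step can be sketched as follows. A simple L1 nearest-distribution rule stands in here for the paper's Bayesian learning method, and all names are illustrative.

```python
from collections import Counter

def word_histogram(words, vocab):
    """Represent a video segment as a normalized histogram over codebook
    entries: the 'bag of video feature words'. `words` are the quantized
    feature labels detected in the segment; `vocab` is the codebook."""
    counts = Counter(w for w in words if w in vocab)
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in vocab]

def classify(hist, models):
    """Pick the action category whose learned word distribution is closest
    (smallest L1 distance) to the segment's histogram."""
    def dist(model):
        return sum(abs(h - p) for h, p in zip(hist, model))
    return min(models, key=lambda label: dist(models[label]))
```

In the paper's setting, `models` would be the per-category word distributions learned from the training segments, and each test segment's histogram would be scored against them.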
performance in these ten trials. A random guess in this case would be expected to yield 33.3% accuracy, given that there are equal numbers of cases in each category. It is clear that our model performs well in distinguishing these motions.
(a) Training Data:
             Relocating  Excavating  Swing
Relocating   92%         4%          1%
Excavating   6%          81%         13%
Swing        3%          15%         86%

(b) Testing Data:
             Relocating  Excavating  Swing
Relocating   80%         1%          1%
Excavating   16%         78%         20%
Swing        4%          21%         79%
Note: average of 10 random runs, 500 code words, and 80% data for training
Figure 5. Confusion Matrix: (a) Training Data in Data set I; (b) Testing Data in Data set I
A similar training and testing process was conducted on the worker motion data set. In this case, we used 1000 code words but kept the training vs. testing ratio the same (80% for training). The results are shown in Figure 6. This is a much more
difficult data set than the crane motion data set. Because there are more categories in
this data set, the expected accuracy of random guess drops to 20%. Our learned
model can do significantly better than random guessing. It can be noted that the learned model performs better at classifying bending, nailing, and aligning motions. The most difficult motion categories to classify are transporting and traveling; these two categories have much in common in terms of motion features. Overall, the bag of video feature words model performs reasonably well considering the difficulty of this video data set.
Transporting Traveling Bending Nailing Aligning
Transporting 69% 8% 4% 1% 2%
Traveling 19% 69% 1% 1% 1%
Bending 3% 11% 90% 1% 6%
Nailing 2% 5% 1% 90% 8%
Aligning 6% 8% 3% 7% 83%
(a)
Transporting Traveling Bending Nailing Aligning
Transporting 45% 12% 5% 2% 2%
Traveling 35% 47% 7% 0% 3%
Bending 12% 20% 75% 7% 8%
Nailing 3% 17% 7% 68% 13%
Aligning 5% 5% 7% 23% 73%
(b)
Note: average of 5 random runs, 1000 codewords, and 80% data for training
Figure 6. Confusion Matrix: (a) Training Data in Data set II; (b) Testing Data in Data set II
CONCLUSION
In this study, we extended the bag of video feature words model into the
construction domain. We implemented this new motion learning and classification
framework in MATLAB, and we created two construction video data sets to evaluate
its performance. Experiments show that the bag of video feature words model
performs reasonably well on the video sets. The attractiveness of the bag of video feature words is that it does not require foreground segmentation, and it is robust to partial occlusion and changes in view point, illumination, and scale. The performance of this method can be further improved by adding spatial information, since it is well known that the bag-of-words method ignores spatial information and only concerns the
REFERENCES
Dalal, N., Triggs, B., and Schmid, C. (2006) “Human detection using oriented
histograms of flow and appearance.” In ECCV, 2006.
Gong, J., and Caldas, C. (2010). "Computer Vision-Based Video Interpretation
Model for Automated Productivity Analysis of Construction Operations."
ASCE Journal of Computing in Civil Engineering, 24(3), 252-263.
Gonsalves, R., and Teizer, J. "Human Motion Analysis Using 3D Range Imaging
Technology." 26th International Symposium on Automation and Robotics in
Construction (ISARC 2009), Austin Texas, 76-85.
Gorelick, L., Galun, M., Sharon, E., Brandt, A., and Basri, R. “Shape Representation
and Classification Using the Poisson Equation,” Proc. IEEE Conf. Computer
Vision and Pattern Recognition, vol. 2, pp. 61-67, 2004.
Jog, G. M., Brilakis, I. K., and Angelides, D. C. "Testing in harsh conditions:
Tracking resources on construction sites with machine vision." (in press)
Automation in Construction.
Klaser, A., Marszalek, M., and Schmid, C. “A spatio-temporal descriptor based on 3
D gradients.” In BMVC, 2008.
Laptev, I., Marszałek, M., Schmid, C., and Rozenfeld, B. (2008). “Learning realistic
human actions from movies” In CVPR, 2008.
Lowe, D. G. (2004). "Distinctive image features from scale-invariant keypoints."
International Journal of Computer Vision, 60(2), 91-110.
Niebles, J. C. and Fei-Fei, L. (2007) “A hierarchical model of shape and appearance
for human action classification.” In CVPR, 2007.
Peddi, A., Huan, L., Bai, Y., and Kim, S. "Development of human pose analyzing
algorithms for the determination of construction productivity in real-time."
Construction Research Congress 2009, Seattle, WA, 11-20.
Teizer, J., and Vela, P. A. (2009). "Workforce Tracking on Construction Sites using
Video Cameras." Advanced Engineering Informatics, 23(4), 452-462.
Weerasinghe, T. I. P., and Ruwanpura, J. Y. "Automated Multiple Objects Tracking
System (AMOTS)." Construction Research Congress 2010, Banff, Canada,
11-20.
Zou, J., and Kim, H. (2007). "Using Hue, Saturation, and Value Color Space for
Hydraulic Excavator Idle Time Analysis." J. Computing in Civil Engineering,
21, 238.
EVOLUTIONARY SOFTWARE DEVELOPMENT TO SUPPORT
ETHNOGRAPHIC ACTION RESEARCH
Timo Hartmann1
1Assistant Professor, Department of Construction Management and Engineering,
Twente University, P.O. Box 217, 7500AE Enschede, The Netherlands;
PH +31(0)53 489-3376; email: t.hartmann@utwente.nl
ABSTRACT
Using the ethnographic action research method researchers can develop
information systems by simultaneously accounting for technological and
organizational factors. The method relies on the close collaboration of practitioners
and researchers that develop a new information system in iterative steps of observing
current work practices in practical work contexts, developing a new or adjusted
information system, and evaluating the usefulness of the system by its introduction in
the same practical work context. One shortcoming of the method, caused by its highly
iterative character, is that it is not possible for researchers to design the information
system in much detail upfront to guide the software development efforts within each
iteration. This makes it hard for researchers to develop systems that they can
introduce readily in practice for the purpose of evaluating the developed system
during each research iteration. By drawing on evolutionary software development
methods this paper introduces a test driven software development framework to
support ethnographic action researchers to overcome this problem. The paper also
illustrates the application of the framework by describing the exemplary
implementation of the framework in software. Overall, with the evolutionary software
development framework the paper contributes to action research methodology to
develop information systems. It provides another stepping stone in enabling
researchers to develop methods and systems to support the complex project based
engineering processes of the construction industry in a bottom-up iterative manner.
participant observation. In this way, the method allows the continuous adjustment of
the information system with the changing work processes. Additionally, the method makes it possible to react to changes in the work processes that are caused by the implementation of the newly developed information system. Hence, in theory, the
ethnographic action research methodology allows for the continuous improvement of
project based work processes through the iterative development of information
technologies. In this way, the method enables the bottom up research of generally
applicable processes and best practices for work settings in frequently changing
environments, such as the construction industry.
One of the problems during ethnographic action research activities is the
dynamic accommodation of the iterative and evolutionary change that lies at the heart
of the research method. The research process requires the modification of functions
already developed during a previous iteration, or the extension of the system by the
introduction of new functions. In both cases, the introduced changes should not
disrupt the already ongoing application of the system's functionality developed in
previous iterations. Additionally, the integrity and consistency of the existing system
needs to be ensured, both functionally and from the perspective of data persistence.
This paper presents a framework to allow for the introduction of dynamic changes
while ensuring the integrity of the existing system. The framework is based on the
evolutionary and test driven software development philosophy (Kramer & Magee,
2002). To implement and test the framework the paper presents an exemplary
implementation of the framework in software.
The paper is structured in two parts. The first part of the paper derives the
development framework by drawing on existing literature in the field of evolutionary
software development. In the second part, the paper then describes the illustrative
application of the framework.
Development Method: Description

Acceptance Tests: Acceptance tests are tests for a completed function of the overall IT system. Acceptance tests simulate the interaction of users with the system by automating simulated user interaction with the system and testing the system's outputs. Hence, acceptance tests treat the underlying functionality of the IT system as a black box (Ambler & Sadalage, 2006).

Localization: Localization is a means to adapt IT systems to regional differences and local requirements of the organizational cultures the system is to be implemented in. Localization allows this adaptation without changes in the underlying functional and structural logic of the system.

Unit Tests: Unit tests are tests to determine whether individual units of code work for their specific focus. Ideally, each unit test is independent of the others and can run in isolation from the rest of the system (Martin, 2003).

Database Sandboxes: A database sandbox is a database separate from the database that supports the ongoing operation of the system. Database sandboxes allow developers to design, program, and test functionality in an evolutionary manner without compromising the integrity
284 COMPUTING IN CIVIL ENGINEERING
Therefore, the rest of the section will focus on describing the implementation of the
“add project” functionality following the process described by Illustration 2.
Tool | Description
Google Web Toolkit (GWT) | The Google Web Toolkit allows for the easy setup of server-client architectures for projects that use the Java programming language. GWT also allows for the easy setup of localization mechanisms to support different specific project contexts.
JUnit | JUnit is a Java-based framework to support unit and configuration testing. It provides all the infrastructure necessary to implement, run, and evaluate unit and configuration tests.
DbUnit | DbUnit is another Java-based framework that provides functionality to write unit tests for database functions.
Selenium | Selenium is a software testing framework for web applications. The framework provides functionality to record user interaction with a browser and to convert this activity into JUnit test code.
Table 2. Tools used to implement the evolutionary development architecture
Illustration 3: Excerpt from a GWT localization file with text strings used to
implement the "Add Project" functionality.
The minimal user interface to support the “add project” functionality provides the
possibility to enter a project name, a contract type, and a short description for the
project. Further, the user interface should provide the possibility to confirm that the
project is to be added to the application's database. The example implementation
programs this basic user interface in Java code, which GWT can then translate to
JavaScript that is executable by state-of-the-art internet browsers. All text strings
of the user interface are integrated into the localization functionality of GWT to
allow for the easy adjustment of the user interface to the specific language used in
specific project settings. Illustration 3 provides an excerpt from a GWT localization
file with the terms used to implement the “Add Project” functionality. Using the
GWT localization functionality, action researchers can support the language
practitioners use in different project contexts by providing similar text files. It
would, for example, be easy to replace the term “Lump Sum” with the term “Fixed
Price” in a copy of the above displayed file to support a different company setting.
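This swap can be illustrated with a plain-Java sketch of two such localization files loaded at runtime. All keys and strings below are illustrative assumptions; GWT itself resolves equivalent .properties files through its compile-time Constants interfaces rather than the runtime ResourceBundle shown here.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.PropertyResourceBundle;
import java.util.ResourceBundle;

// Plain-Java sketch of swappable localization files. Keys and strings are
// illustrative; GWT resolves similar .properties files at compile time.
public class AddProjectTexts {

    // Localization file for one company setting.
    static final String COMPANY_A =
        "addProjectTitle=Add Project\n" +
        "contractTypeFixed=Lump Sum\n";

    // Copy of the file for a different company setting: same keys,
    // only the local term for the contract type is exchanged.
    static final String COMPANY_B =
        "addProjectTitle=Add Project\n" +
        "contractTypeFixed=Fixed Price\n";

    // Loads a localization file; the user interface only ever asks for
    // keys, so no functional or structural logic changes between settings.
    public static ResourceBundle load(String propertiesFile) {
        try {
            return new PropertyResourceBundle(new StringReader(propertiesFile));
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Exchanging COMPANY_A for COMPANY_B changes the displayed contract type from “Lump Sum” to “Fixed Price” without touching any interface code.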
Once this user interface is implemented, developers can use Selenium to record a
configuration test for the “add project” functionality. To do so, they load the
JavaScript code generated earlier in a browser supported by Selenium and record an
exemplary interaction with the user interface. Selenium can then generate the JUnit
code for a configuration test of this interaction. Illustration 4 provides an example
of such configuration test code that is partly generated by Selenium. The code
automates the navigation to the website and the opening of an “add project” dialog
box (Illustration 4, lines 7-8). Further, it fills in the above described “add
project” interface (lines 10-12), and it presses the confirmation button (line 13).
The functionality of the configuration test is then finalized with manually written
code to test whether the new project was actually added to the database (lines
19-22). It is also important to note that the test can run against an existing
production database without compromising the information in this database. This
property of the configuration test is crucial because configuration tests also need
to be run to test whether a deployment of a new or improved use case functionality
in an existing production environment was successful (see also the deployment part
of the evolutionary development process in Illustration 2). The code example in
Illustration 4, therefore, removes the added project from the database in line 25
before it stops the database transaction in lines 27 and 28.
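Since Illustration 4 itself is browser- and project-specific, the essential flow of such a configuration test can be sketched in plain Java. All names are illustrative assumptions, and an in-memory map stands in for the production database that the real test accesses through a database transaction:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the configuration-test flow: exercise the "add project"
// functionality as a black box, verify persistence, then remove the test
// record so the production database is left unchanged. Names are
// illustrative; the paper's test is Selenium-generated JUnit code.
public class AddProjectConfigurationTest {

    // Stand-in for the production database.
    static final Map<String, String> productionDb = new HashMap<>();

    // Stand-in for the automated user interaction recorded by Selenium.
    static void addProjectViaUserInterface(String name, String contractType) {
        productionDb.put(name, contractType);
    }

    // Returns true if the project was added AND the database was restored
    // to its prior state, mirroring the cleanup step of the real test.
    public static boolean run() {
        int sizeBefore = productionDb.size();
        addProjectViaUserInterface("Configuration Test Project", "Lump Sum");
        boolean added = productionDb.containsKey("Configuration Test Project");
        productionDb.remove("Configuration Test Project"); // clean up test data
        return added && productionDb.size() == sizeBefore;
    }
}
```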
After the implementation of the configuration test, the next step in the test-driven
development framework is the identification of atomic implementation functions. For
this simple use case scenario, the only atomic implementation function is to add a
new project to the database. Hence, this use case only requires the implementation
of one unit test: “addProject”. Using the functionality of JUnit and DbUnit, the
implementation of the test is straightforward. It involves the manual storage of a
number of projects in a separate test database and, afterward, functionality to
verify whether the projects have been stored in the database. Illustration 5 shows
the implemented example code of this unit test.
Illustration 5: Unit test to verify the functionality to add a new project to the database.
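The pattern behind such a unit test can be sketched in simplified plain Java, with an in-memory list standing in for the database sandbox. The names are illustrative assumptions; the paper's version uses JUnit and DbUnit against a separate test database:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the "addProject" unit-test pattern. A fresh
// in-memory list stands in for the separate test database (sandbox),
// so the test runs in isolation from the rest of the system.
public class AddProjectUnitTest {

    // The atomic implementation function under test (illustrative).
    static void addProject(List<String> database, String projectName) {
        database.add(projectName);
    }

    // Stores a number of projects in the sandbox and afterward verifies
    // that they have been stored, mirroring the structure of the real test.
    public static boolean run() {
        List<String> sandbox = new ArrayList<>();   // isolated test database
        addProject(sandbox, "Office Tower");
        addProject(sandbox, "Highway Extension");
        return sandbox.size() == 2
            && sandbox.contains("Office Tower")
            && sandbox.contains("Highway Extension");
    }
}
```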
With these tests in place, an action researcher can then continue to implement
the actual functionality of the interface. The reader should note that, while the
previous implementation of the tests seems like a lot of extra work, most of the
logic of the required functional code is already included in the tests. Hence, the
actual implementation of the functional code does not add much work to the overall
programming effort. After the implementation of the functionality, the action
researcher can deploy the new code and test whether the functionality works in
the local IT environment using the previously implemented configuration test.
CONCLUSION
This paper introduced a development framework to support ethnographic
action research efforts to implement IT systems for project based environments. The
framework draws on state-of-the-art evolutionary software technologies to
specifically support the iterative nature of action research efforts. It is based on a
modular client-server architecture and provides a test-driven development process.
In this way, the framework allows for the quick and easy development and
deployment of new or altered functionality to support specific business processes
while ensuring that previously developed functionality remains intact. Additionally,
the modular server-client character enables developers to make program changes
available immediately to all users without the need to install new functionality on
client machines.
By providing the possibility to integrate changes into existing and running IT
systems in an evolutionary manner, the framework is specifically designed to support
the iterative nature of ethnographic action research efforts. The framework allows for the
continuous adjustment of information systems to work processes identified through
ongoing ethnographic observations. It also allows for timely reaction to changes in
the work processes in dynamic project based settings and for the quick change of
functionality to support work in different settings. Finally, with the possibility to
quickly integrate changes while ensuring the integrity of the overall system, the
framework allows for the timely integration of suggestions developed in collaboration
with practitioners.
Overall, the framework presented here supports ethnographic action research
efforts by allowing for the iterative improvement of project-based work processes
through the evolutionary development of information technologies. The framework
presents another stepping stone toward supporting bottom-up research and
development of generally applicable processes and best practices for work
settings in frequently changing environments, such as the construction industry.
REFERENCES
Ambler, S. & Sadalage, P. (2006). Refactoring databases: Evolutionary database
design. Addison-Wesley Professional.
Hartmann, T., Fischer, M. & Haymaker, J. (2009). Implementing information systems
with project teams using ethnographic-action research. Advanced Engineering
Informatics, 23, 57-67.
Hartmann, T. & Johnson, S. (2009). A pragmatic approach to develop financial IT
systems for small construction companies. Proceedings of the 2009 ASCE
International Workshop on Computing in Civil Engineering, Austin, Texas, USA.
Horstmann, C. & Cornell, G. (1997). Core Java 1.1 fundamentals: volume 1. Sun
Microsystems, Inc., Mountain View, CA, USA.
Kramer, J. & Magee, J. (2002). The evolving philosophers problem: Dynamic change
management. IEEE Transactions on Software Engineering, 16, 1293-1306.
Martin, R. (2003). Agile software development: principles, patterns, and practices.
Prentice Hall PTR Upper Saddle River, NJ, USA.
Determining the Benefits of an RFID-Based System for Tracking Pre-Fabricated
Components in a Supply Chain
E. Ergen1, G. Demiralp2, G. Guven3
1 Assistant Professor, Department of Civil Engineering, Istanbul Technical
University, Istanbul, 34469, TURKEY; PH (90) 212 285 6912; e-mail:
esin.ergen@itu.edu.tr
2 Graduate Student, Department of Civil Engineering, Istanbul Technical
University, Istanbul, 34469, TURKEY; e-mail: demiralpg@itu.edu.tr
3 Doctoral Student, Department of Civil Engineering, Istanbul Technical
University, Istanbul, 34469, TURKEY; PH (90) 212 285 3656; e-mail:
gursans.guven@itu.edu.tr
ABSTRACT
Radio Frequency Identification (RFID) technology is an automated
identification technology that can be used to track components through
construction supply chains. Although there are studies which show that RFID
increases labor productivity, detailed assessment of the benefits of RFID
technology utilization through a supply chain is limited. In this study, a simulation
model is developed to calculate the benefits of an RFID investment in a construction
supply chain. The simulation model is developed for a pre-fabricated exterior
concrete wall panel supply chain and includes the prefabrication and construction
phases. The results of the simulation indicate significant reductions in task
durations and improvements in the efficiency of the process. Based on the
identified benefits, a cost sharing factor for the parties of the supply chain is
determined and proposed as a means of distributing the investment cost.
INTRODUCTION
Radio Frequency Identification (RFID) technology has been utilized in
multiple research studies in the construction industry to track components and
related information throughout various phases of a supply chain (Jaselskis and
Misalami, 2003; Goodrum et al., 2006; Ergen et al., 2007). In recent studies,
several benefits were identified such as decreases in the time needed to complete
certain tasks (e.g., delivery and receipt of materials), increases in the labor
productivity and improvements in data collection processes (Jaselskis and
Misalami, 2003; Song et al., 2006; Grau et al., 2009). Also in some studies,
simulation models were developed to compare the current systems with the RFID-
based systems (Akinci et al. 2006, Young et al. 2010). However, benefits of RFID
technology for different parties in a supply chain were not specifically assessed in
the literature.
In the study explained in this paper, a supply chain of pre-fabricated panels
is investigated to determine the expected benefits for different parties, and a
simulation model is created to assess the impact of RFID technology through the
supply chain. The simulation model includes the prefabrication and construction
phases. In this paper, the initial results of the simulation are provided and
discussed and the cost sharing factor is determined for the supply chain members.
BACKGROUND RESEARCH
Various cost-benefit studies were performed in the retail industry to determine
the feasibility of using RFID technology in supply chains (Lee and Ozer, 2007;
Sarac et al., 2010; Ustundag, 2010). In the Architecture/Engineering/Construction
(A/E/C) industry, previous studies show that integrating RFID technology with
the current approach resulted in time savings and improvements in labor
productivity for specific construction activities. However, the studies that consider
the impact of RFID technology on the entire A/E/C supply chain are limited in the
literature. For example, Grau et al. (2009) identified the benefits associated with
the automation of tracking process for the structural steel elements in a case study
and the focus was only on the construction site (i.e., lay down yard and the
installation area). Nasir (2008) also determined the costs and benefits of an
automated construction materials tracking system that located materials (e.g.,
pipe spools, valves) via integration of RFID and Global Positioning System (GPS)
technologies at the job site. The benefits were identified as the total man-hours
saved in locating materials, the reduced lost labor hours, and the costs avoided
due to a reduced number of lost materials. Jang and Skibniewski (2009)
also performed a cost-benefit analysis to illustrate the labor savings in sensor-
based material tracking. In another study, time savings were reported due to RFID
usage during material receiving process at site (Jaselskis and Misalami 2003).
In some of the studies, simulation models were used to assess the impact
of technology use in different phases. Davidson and Skibniewski (1995)
developed a simulation model to investigate the effects of an automated data
collection method (i.e., bar coding) on increasing efficiency in asset management
at an office building in the maintenance phase. In another study, a simulation
model was developed to investigate the benefits of using advanced data collection
technologies and it only focused on collection of productivity data from the
construction site (Akinci et al. 2006). The most comprehensive simulation model
covers a supply chain including the installation of components and it was
developed to reflect the impact of automated materials tracking technology on the
visibility of materials (Young et al., 2010). The study explained in this paper
aims to examine the impacts of RFID technology on the supply chain of pre-fabricated
concrete exterior wall panels, including the prefabrication and construction
phases.
SIMULATION MODEL
Two simulation models are developed to examine the impacts of RFID
usage in the current supply chain of the prefabricated concrete panels: (1) the base
case, which represents the current manual approach, and (2) the RFID case, which is
a modified version of the base case. In the RFID case, RFID tags are attached to
wall panels, and identification and tracking of the panels are performed in a
semi-automated way by using handheld RFID readers and GPS units. The objective of
developing two different simulation models is to calculate the time differences
between the base case and the RFID case to determine the benefits in terms of
time and money savings due to utilization of RFID technology.
Prefabricated concrete panels go through two different phases within their
supply chain: (1) the production phase at the plant, and (2) the construction phase
at the construction site. Thus, the supply chain is considered as a two-echelon
supply chain and the tasks are classified as plant tasks, and construction site tasks.
Durations of each activity and probabilities in the model are the inputs for the
simulation models. The task durations for the base case were gathered from the
case study. To collect data, observations were made at the construction site, and
practitioners from the manufacturing plant and at the construction site were
interviewed. On the other hand, probabilities (e.g., percentages of located and
missing components) and the durations of the tasks in RFID case were adapted
from the average durations given in the previous RFID studies (Jaselskis, 2003;
Yin et al., 2009; Grau et al., 2009). Table 1 lists the probabilities used in the base
case and RFID case. When determining the durations of the transfer tasks (i.e.,
transfer to storage area in plant and transfer to lay down area at construction site)
in the RFID case, estimations were made based on the observations since these
activities are not considered in previous studies.
Table 1. Probabilities for the base case and RFID case
Probabilities | Base case (%) | RFID case (%)
SIMULATION RESULTS
The initial results of the simulation of the two models are summarized in
Table 2. The average and accumulated task durations are given for the base case
and the RFID case. There is a significant decrease in the durations of related
tasks in the RFID case. The largest time savings in task durations are observed in
locating panels and in the extended search, which is performed when the panels
cannot be located at the plant. Another important labor time saving is observed
during the receiving of panels at the construction site.
Task Name | Base case (Average / Accum.) | RFID case (Average / Accum.) | Accum. time savings
Production of panels* | 24 h / 3600 h | 24 h / 3600 h | -
Transfer to storage area in plant** | 15 min / 31.8 h | 8 min / 19.9 h | 11.9 h
Locate panels for shipping | 20 min / 46.1 h | 0.6 min / 1.4 h | 44.7 h
Extended search in plant | 90 min / 80.9 h | 90 min / 1.2 h | 79.7 h
Shipping panels to site* | 60 min / 150 h | 60 min / 150 h | -
Receive panels at site | 1.3 min / 3.8 h | 0.78 min / 1.9 h | 1.9 h
Transfer to storage yard at constr. site** | 25 min / 53.9 h | 10 min / 24.8 h | 29.1 h
Locate panels at constr. site | 10 min / 23.2 h | 0.57 min / 1.4 h | 21.8 h
Extended search in construction site | 60 min / 15.2 h | 60 min / 1.5 h | 13.7 h
Moving panels to construction area* | 20 min / 50 h | 20 min / 50 h | -
*Tasks with fixed durations that are not affected by RFID utilization.
**Performed by two workers.
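The accumulated savings follow directly from the simulation outputs: for each task, the saving is the base-case accumulated hours minus the RFID-case accumulated hours, and fixed-duration tasks contribute nothing. A short sketch of this arithmetic, using the reported values:

```java
// Sketch of how the accumulated time savings are derived: for each task,
// savings = base-case accumulated hours - RFID-case accumulated hours.
// Values passed in below come from the reported simulation results.
public class TimeSavings {

    public static double taskSavings(double baseAccumHours, double rfidAccumHours) {
        return baseAccumHours - rfidAccumHours;
    }

    // Sums the per-task savings over all tasks affected by RFID use.
    public static double totalSavings(double[] baseAccum, double[] rfidAccum) {
        double total = 0.0;
        for (int i = 0; i < baseAccum.length; i++) {
            total += taskSavings(baseAccum[i], rfidAccum[i]);
        }
        return total;
    }
}
```

For example, locating panels for shipping saves 46.1 h - 1.4 h = 44.7 h; summing the seven affected tasks gives 202.8 h of accumulated savings.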
To determine the benefits of using RFID for each party in the supply
chain, the base case and the RFID case were compared, and the differences between
these two cases were analyzed in terms of cost reduction. Three types of improvements
that resulted in cost savings were identified in comparison of the RFID case with
the base case: (1) decrease in task durations which leads to reduction in labor and
equipment cost, (2) decrease in the number of incorrectly shipped/identified
pieces and related transfer (i.e., labor and equipment) cost, (3) decrease in the
number of missing panels and reduction in reproduction costs. Since it was not
possible to quantify the cost of the delay caused by a missing panel at the
construction site, this factor was not included in the cost saving calculations.
The results show that the panel manufacturer in this two-echelon supply chain
gains almost twice the benefits (i.e., cost savings) of the contractor
when RFID technology is utilized. One of the reasons is that the number of
incorrectly identified panels and missing panels at the plant is higher than at
the construction site. Since the panels for different destinations are stored
together at the plant, it is more common to lose material at the plant than at
construction sites. Additionally, the plant stores a larger number of panels at
once, while the construction site stores a limited number of panels in the lay
down areas. The identified benefit ratio for the two parties can be used as a
cost sharing factor when implementing the RFID-based system in the described
supply chain. The cost of the RFID investment can be shared by the two parties
based on this cost sharing factor.
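The proposed use of the benefit ratio can be sketched as follows. The investment figure below is a hypothetical placeholder, since quantification of the actual investment cost is left to future work:

```java
// Sketch of distributing the RFID investment cost by the identified benefit
// ratio: each party pays in proportion to its share of the total quantified
// benefit. The benefit values and the investment figure used in any call
// are hypothetical placeholders, not figures from the study.
public class CostSharing {

    // Returns each party's share of the investment cost, proportional to
    // that party's share of the total benefit.
    public static double[] shares(double investmentCost, double[] benefits) {
        double totalBenefit = 0.0;
        for (double benefit : benefits) {
            totalBenefit += benefit;
        }
        double[] result = new double[benefits.length];
        for (int i = 0; i < benefits.length; i++) {
            result[i] = investmentCost * benefits[i] / totalBenefit;
        }
        return result;
    }
}
```

With the roughly 2:1 benefit ratio reported above, a hypothetical investment of 30,000 units would be split 20,000 to the manufacturer and 10,000 to the contractor.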
CONCLUSIONS
In this study, two simulation models are developed for calculating the
benefits of an RFID-based system for the members of a prefabricated exterior
concrete panel supply chain. Since the supply chain is modeled as a two-echelon
supply chain, both models include the prefabrication and construction phases. The
first model represents the existing manual approach (i.e., the base case), and the
second model represents the RFID-integrated semi-automated approach, which is
developed for automated identification and locating of components. The initial
results of the simulations show that there is a major reduction in the durations of
the tasks that are related to identification and localization of the panels at the
plant and at the construction site. Also, the number of missing or incorrectly
shipped/identified panels decreased significantly. In the RFID case, no panels
were missing at either the plant or the construction site.
When these benefits were quantified for each party, it was determined that,
while both parties of the supply chain gained cost savings by using RFID
technology, the total benefit of the panel manufacturer is about twice that of
the contractor. The identified benefit ratio for the two parties can be used as
a cost sharing factor when implementing an RFID-based system in the described
supply chain. Also, for other RFID implementations in construction supply
chains, a cost sharing factor can be calculated and used to distribute the
investment cost. As future work, it is planned to calculate the investment cost
of the proposed RFID system to perform a detailed cost-benefit analysis.
REFERENCES
Akinci, B., Kiziltas, S., Ergen, E., Karaesmen, I. Z., and Keceli, F. (2006).
“Modeling and Analyzing the Impact of Technology on Data Capture and
Transfer Processes at Construction Sites: A Case Study.” J. Constr. Eng.
Management, 132(11), 1148-1157.
Davidson, I. N. and Skibniewski, M. J. (1995). “Simulation of automated data
collection in buildings.” J. Comput. Civ. Eng., 9(1), 9-20.
Ergen E., Akinci B., East B., and Kirby J. (2007). “Tracking Components and
Maintenance History within a Facility Utilizing RFID Technology.” J.
Comput. Civil Eng., 21(1), 11-20.
Goodrum, P.M., McLaren, M.A. and Durfee, A. (2006). “The Application of
Active Radio Frequency Identification Technology for Tool Tracking on
Construction Job Sites.” Automation in Construction, 15(3), 292-302.
Grau, D., Caldas, C. H., Haas, C. T., Goodrum, P. M., and Gong, J. (2009).
“Assessing the impact of materials tracking technologies on construction”,
Automation in Construction, 18(7), 903-911.
Jang, W.S. and Skibniewski, M. (2009). “Cost-Benefit Analysis of Embedded
Sensor System for Construction Materials Tracking”, J. Constr. Eng.
Management, 135(5), 378-386.
Abstract
During disaster response, it is imperative to provide rescuers with adequate
equipment in a timely manner to facilitate lifesaving operations. However, in the
case of the 9-11 terrorist attacks, for example, the supply of high demand
equipment was insufficient during the initial phase of disaster response,
challenging lifesaving operations. Prioritization of limited resources is one of
the greatest challenges in decision making. Meanwhile, management of
geographically distributed resources has been recognized as one of the most
important but difficult tasks in large scale disasters. Additionally, resources
outside of the disaster affected zone converge into the disaster affected area to
assist the response efforts; this effect of resource convergence often makes the
already complex task of resource coordination even more challenging. Although
there are difficulties in managing converging volunteers and groups, such as
their immediate deployment to incidents without appropriate skills and training,
construction equipment and its professional operators are specialized entities.
The effectiveness of their collaboration in disaster response operations could be
improved through regular participation in drills. As a result, the convergence of
construction equipment could be efficiently utilized to facilitate Urban Search
and Rescue (US&R). This paper proposes a mobile application that could
potentially guide and coordinate volunteering construction equipment in
collaboration with the emergency command and control structure.
Introduction
Distribution of resources, such as heavy construction equipment, is critical to efficient
and effective urban search and rescue (US&R) operations during disaster response. It
is imperative to provide rescuers with adequate equipment in a timely manner to
facilitate lifesaving operations (Sullum et al., 2005; McGuigan, 2002). However, management
of geographically distributed resources has been recognized as one of the most
important but challenging tasks in disaster response (Holguin-Veras et al., 2007;
Halton, 2006). Challenges include identification, assignment, location tracking and
delivery of resources (SBC, 2006; 9/11 Commission Report, 2004). For disaster
response efforts to become more effective, these challenges must be addressed.
During disaster response, search and rescue task forces need to gain situational
awareness of the disaster, activate required resources and capabilities, and
coordinate the response actions (DHS 2008). These steps form a loop to continually
gain and maintain the status of the disaster, activate and deploy resources, and
coordinate response actions for an efficient and effective disaster response.
Coordination of resources during disaster response operations has been characterized
by various shortcomings that inhibit efficient and effective decision making, and
prioritization of limited resources is one of the greatest challenges (SBC, 2006; 9/11
Commission Report, 2004; Auf der Heide, 1989). Limited resources must be
distributed efficiently to the first responders to facilitate lifesaving operations.
However, the supply of resources such as construction equipment is usually unable to
meet the great demand in large scale incidents. This could result in additional
casualties (Gentes, 2006; Bissell et al., 2004). As a result, an efficient prioritization
and distribution of resources is critical to disaster response efforts.
Motivation
In response to disasters, the initial efforts, including US&R, are usually and
mostly carried out by civilians who are within the area at the time the disaster
occurs (Drabek and McEntire, 2003; Auf der Heide, 1989). These individuals collect
relief supplies, provide shelter, and are engaged in a variety of services (Drabek
and McEntire, 2003; Wenger, 1992). At the same time, the establishment of the
official command and control by the Emergency Management Agencies (EMAs) at the
local, state and federal levels usually takes time, to coordinate task forces and
assets to respond to the disaster (Drabek and McEntire, 2003; Auf der Heide, 1989).
Meanwhile, volunteers and response organizations outside of the disaster affected
zone converge into the area to assist the response efforts. This effect of resource
convergence often makes the already complex problem of resource coordination even
more challenging (Drabek and McEntire, 2003; Fritz and Mathewson, 1957). For
instance, it causes on-site congestion from volunteers, material, and equipment
that hinders efficient logistical coordination (Drabek and McEntire, 2003; Kendra
and Wachtendorf, 2001). However, provided with the convergence of resources, such
as volunteers, equipment and organizations, the response to the incident could
become more efficient and effective (Drabek and McEntire, 2003; Mileti, 1989; Auf
der Heide, 1989). In the rapidly changing environments of disasters, the
convergence could bring certain capabilities and flexibilities that do not exist
or are not sufficient in the response system (Kendra and Wachtendorf, 2001). How
to properly manage the converging resources is then the challenge to be addressed.
One of the greatest challenges of utilizing the converging resources is that they
may be deployed immediately to incidents without the appropriate and required
skills, training, and familiarity with the command and control structure and EMAs
(Kendra and Wachtendorf, 2001). In addition, Kendra and Wachtendorf (2001) pointed
out that to have an efficient and effective disaster response, it is vital to
develop, maintain and take action based on a “Shared Vision” of emergency goals,
critical tasks and their need for critical resources. It is difficult for civilian
volunteers to obtain such a Shared Vision without any prior training and
communication with the EMAs.
As the types, magnitude and context of disasters vary, mitigation actions usually
need creativity and require responders to improvise to better respond to the
incident (Auf der Heide, 1989). However, the official centralized command and
control system makes logistics coordination difficult, as it is static and
inflexible (Neal and Philips, 1995). The command and control structure is
established to coordinate the response efforts and resources of the local, state
and federal government, the private sector and NGOs (NRF, 2008). The general
outline from the bottom up is as follows, although it may vary from jurisdiction
to jurisdiction: 1) first response teams on site request resources; 2) the
Incident Command Post (ICP), which manages and coordinates several aggregated
incidents, such as several collapsed and partially collapsed buildings in the
area, provides the first responders with the resources in its jurisdiction; 3)
the county level Emergency Operations Center (EOC) provides resources to multiple
ICPs, and establishes priorities for the distribution of resources among the
various incidents; 4) a state level EOC is activated if the incident exceeds the
response capacity of the county, with the primary role of supporting the local
government in responding to the incidents and coordinating resources within the
state; and 5) if the incident exceeds the local and state response capacity, the
federal government involves its agencies to organize a federal response and
coordinates with the states and response partners to mobilize more resources. To
accomplish those efforts, the private sector and NGOs coordinate and support
response actions of the governments. However, this approach entails various
challenges that inhibit an efficient utilization of available response resources.
During the initial phase of disaster response, access to heavy equipment is critical to
the relief efforts (Gentes, 2006; SBC, 2006; Kevany, 2005; Bissell et al., 2004).
Heavy equipment is a necessity during response operations such as 1) rapid debris
clearance of the transportation network for first response teams to reach blocked
hazard zones, 2) careful lifting of damaged structural elements in conditions when
human power is not sufficient, and 3) selected debris removal to clear structural
materials to facilitate void searches and tunneling under collapsed buildings
(ELANSO, 2009). In destructive events, the best window for saving victims is
within the first 24 hours after the impact of the disaster (Mizuno, 2001).
However, in major disasters, the supply of heavy construction equipment for rapid
removal of collapsed building sections is often not able to meet the massive
demand. In the Loma Prieta Earthquake, there were also challenges in the early
US&R due to the lack of available heavy equipment (McGuigan, 2002). Heavy
equipment, which supports critical lifesaving activities, must be efficiently
located, assigned and distributed to meet the urgent demands in US&R.
Objective
How response units perceive information to make decisions is critical. When disasters
occur, the information needed is not always available. Before the Haiti Earthquake, for
instance, there was little information regarding the road network and spatial entities
on existing digital maps. After the earthquake, this lack of information hindered
response operations. However, volunteers in Port-au-Prince filled in the cartographic
blanks, and the maps became very detailed and were accessible to the public
online (OpenStreetMap, 2010). It is also important to emphasize that initial
information collected about the disaster is often inaccurate (Quarantelli, 1983). For
this reason, assessment of resource needs has to be a recurring procedure that
continues throughout the duration of the incident, to update information for all
entities involved within the disaster response operations (Auf der Heide, 1989). In the
case of Haiti, the volunteers used text messages, GPS, and hand drawings to dispatch
thousands of updates for road names, building collapse, and injury locations
(OpenStreetMap, 2010; Ushahidi, 2010). The officials used the information to guide
their emergency workers, including the Marine Corps and Red Cross (Ushahidi,
2010). Although this approach to information updating has drawbacks, its benefits
outweighed them in the case of Haiti (OpenStreetMap, 2010; Ushahidi, 2010).
The objective of this paper is to implement a mobile application that allows responding
equipment to communicate with a public web service capable of receiving and
storing information discovered and updated by civilians and first responders. The
mobile application could potentially be used by officials in the command and control
system as well as by volunteering personnel, equipment, and materials.
Approach
A decentralized approach that facilitates immediate equipment distribution in
response to disasters is proposed by Chen and Peña-Mora (2011). An Equipment
Control Structure, which is inspired by the behavior control structure of honeybees’
foraging (Biesmeijer and Seeley, 2005), enables a collective decision making process
for equipment coordination. With the Equipment Control Structure applied to
facilities management such as construction equipment distribution, disaster response
operations have the potential to become more efficient. Each volunteering Equipment
Unit will make its own decision on where it will carry out the disaster relief effort.
Based on the decentralized approach the authors proposed for converging
resources (Chen and Peña-Mora, 2011), the mobile application proposed in this
paper could automate information gathering and decision making for an Equipment
Unit. An Equipment Unit is assumed to be a complete crew formed by the equipment,
the operator and the required labor and material.
Information Technology approaches have great potential to make equipment
coordination more efficient. GIS analysis and visualization with GPS tracking could
provide the authorities with an overall view of how all the equipment moves and
Figure 1. a) User interface of BAS (left); b) Spatial visualization of the damaged zone
(center); c) EOC and data server (top right); and d) Digital device, e.g., an iPhone, for each
equipment unit (bottom right).
For volunteering Equipment Units, a public web service could provide the converging
resources with information guiding where they should converge.
The web service takes discovered or updated demand information into its database
and provides access to the public.
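A minimal sketch of the server side of such a web service, using only the Python standard library, is given below. The class name, field names, and sample coordinates are illustrative assumptions, not part of the system described here; a real deployment would back the store with a database and expose it through an HTTP endpoint.

```python
import json
import time

class DemandStore:
    """In-memory store for crowd-reported demand locations.
    Sketch of the data flow only: report in, public JSON feed out."""

    def __init__(self):
        self._demands = []

    def report(self, lat, lon, description, reporter="anonymous"):
        """Accept a demand report uploaded from a handheld device."""
        entry = {
            "lat": lat,
            "lon": lon,
            "description": description,
            "reporter": reporter,
            "timestamp": time.time(),
        }
        self._demands.append(entry)
        return entry

    def public_feed(self):
        """JSON feed that a webpage or mobile client could poll."""
        return json.dumps(self._demands)

store = DemandStore()
store.report(18.5392, -72.3364, "victims trapped under collapsed slab")
feed = json.loads(store.public_feed())
print(len(feed), feed[0]["description"])
```

Serving the feed as plain JSON keeps the client side trivial: any device with network capability can poll it and render the open demands.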
When a person in the disaster-affected area discovers a location where victims need
help, for example victims trapped under collapsed structural elements, the person who
discovered the situation could send this piece of information to the web server through
a handheld device with network capability, such as a personal digital assistant (PDA),
a smartphone, or a touchpad device. The information uploaded by that person, together
with all information provided by other people, could be seen through a webpage. As a
result, the webpage could serve as an information hub for unassigned disaster response
resources, such as Equipment Units.
Figure 2
This way, the web service provides the Equipment Units the necessary information
for the decentralized decision making proposed by the authors (Chen and Peña-Mora,
2011). Mobile devices take this information and automate decision making for the
Equipment Units.
Although this approach to equipment distribution could result in non-optimal
assignment and arrangement of equipment, it operates under the assumption that
the official command and control system is overloaded. As a result, this web service
could potentially be used to guide construction equipment to respond to demands in
the early phase of a disaster.
Future work would further implement algorithms into this process. In a large-scale
setting, when the official command and control system is overloaded, demands for
equipment could be numerous. As a result, clustering of discovered demands needs
to be performed on the server side to avoid overwhelming the Equipment Units with
information. In addition, an algorithm that ranks demand locations for an
Equipment Unit, based on the number of demands, spatial attributes, severity of
demand, and the capacity of the piece of equipment, could be highly useful in
supporting the crew's decision making.
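The ranking step suggested above could be sketched as follows. The scoring formula, the weights, and all field names are illustrative assumptions of the authors' stated ingredients (severity, spatial attributes, capacity), not the authors' algorithm:

```python
import math

def distance_km(a, b):
    """Rough planar distance in km between two (lat, lon) points;
    adequate for ranking nearby demand locations."""
    dlat = (a[0] - b[0]) * 111.0
    dlon = (a[1] - b[1]) * 111.0 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def rank_demands(unit_pos, unit_capacity, demands):
    """Rank demand locations for one Equipment Unit: prefer severe,
    nearby demands that the unit's capacity can actually serve."""
    def score(d):
        dist = distance_km(unit_pos, (d["lat"], d["lon"]))
        feasible = 1.0 if unit_capacity >= d["required_capacity"] else 0.1
        return feasible * d["severity"] / (1.0 + dist)
    return sorted(demands, key=score, reverse=True)

demands = [
    {"lat": 40.11, "lon": -88.21, "severity": 3, "required_capacity": 20},
    {"lat": 40.12, "lon": -88.22, "severity": 5, "required_capacity": 10},
]
best = rank_demands((40.10, -88.20), unit_capacity=15, demands=demands)[0]
print(best["severity"])
```

Here the farther but more severe, capacity-feasible demand wins; tuning the trade-off between severity and distance would be part of the future work the paper describes.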
Acknowledgement
The authors would like to thank Bill Keller (Champaign County EMA), Mark
Toalson (Champaign County GIS Consortium), Mr. Nacheman (chair, ITTF PSC
Building Industry Emergency Response Network) for their kind suggestions, and
Gavin Horn (Research Program Director of IFSI) and Brian Brauer (Assistant
Director of IFSI) for their help and guidance in the exercise at the IFSI, and the
reviewers for their valuable and helpful comments.
References
9-11 Commission Report (2004). “National Commission on Terrorist Attacks Upon the
United States—9-11 Commission Report.” Final Report of the National Commission
on Terrorist Attacks Upon the United States, Official Government Edition.
Auf der Heide, E. (1989) “Disaster Response: Principles of Preparation and Coordination.”
Online Book for Disaster Response, Center of Excellence in Disaster Management
and Humanitarian Assistance.
Biesmeijer, J. C. and Seeley, T. D. (2005) “The use of waggle dance information by honey
bees throughout their foraging careers.” Behavioral Ecology and Sociobiology, 59(1),
133–142.
Bissell A. B., Pinet, L., Nelson, M., and Levy, M. (2004). “Evidence of the Effectiveness of
Health Sector Preparedness in Disaster Response.” Family and Community Health,
Lippincott Williams & Wilkins, Inc., Vol. 27, No. 3, pp. 193-203.
Chen, A.Y. and Peña-Mora, F. (2011) "A Decentralized Approach Considering Spatial
Attributes for Equipment Utilization in Civil Engineering Disaster Response." ASCE,
Journal of Computing in Civil Engineering, Reston, VA. doi:
10.1061/(ASCE)CP.1943-5487.0000100
DHS (2008). “National Response Framework Document.” Department of Homeland Security,
January 2008, <http://www.fema.gov/pdf/emergency/nrf/nrf-core.pdf> (11/20/2008)
Drabek T. E., and McEntire D. A. (2003) “Emergent phenomena and the sociology of
disaster: lessons, trends and opportunities from the research literature.” Disaster
Prevention and Management, Vol. 12, No. 2, pp. 97-112.
ABSTRACT
INTRODUCTION
Since planting trees has various effects, such as carbon dioxide fixation,
mitigation of the heat island phenomenon, reduction of air pollution, scenery
enhancement, relaxation, and ecosystem maintenance, afforestation in urban
areas is an important environmental program. Roadside trees are an
essential part of urban planting, and their effects include not only those stated above
but also road safety, disaster mitigation, and the formation of leafy shade. The number
of tall roadside trees in Japan increased from 3.7 million in 1987 to 6.7 million in 2007,
which implies an increase in the social demand for, and importance of, roadside trees.
On the other hand, roadside trees are surrounded by objects causing growth inhibition, such
as electric poles and wires, light poles, road signs, billboards, and underground gas and
water pipes. Extremely heavy pruning is often done to reduce the frequency and cost of
pruning work, and roots are often uplifted by underground works. In addition, gas
emissions and dust on roads have an impact on the health of trees. Neglecting the health
of roadside trees can cause outbreaks of fungi, disease and insect damage, dieback,
stump holes, and rotting wood, and eventually trees may fall when strong winds blow.
In fact, some people have been killed in tree-falling accidents.
In order to prevent such accidents and to keep roadside trees healthy for a
long time, some national highway offices and local governments have begun to
introduce roadside tree diagnosis by tree surgeons based on the Visual Tree
Assessment (VTA) method developed by Claus Mattheck (2007) of Germany. Tree
surgeons are experts on diagnosis and treatment of trees and are publicly certified by
Japan Greenery Research and Development Center. There are 1,730 tree surgeons as
of January 2009.
Diagnosis by the VTA method detects disease and decay inside trees through
appearance inspection. Its feature is that diagnosis is done scientifically and
systematically, following a common procedure, instead of relying on experts’ hunches
and experience. The Japan Urban Tree Diagnosis Association promotes this
diagnosis technique to realize systematic maintenance of roadside trees by
accumulating clinical records of periodic diagnosis work. Currently, however, few
national road offices or local governments, apart from some progressive local
governments, execute VTA-based diagnosis, owing to fiscal constraints and the high
cost of diagnosis.
Thus, new solutions or improvements to the diagnosis method are necessary to
achieve a breakthrough. Recently, although Radio Frequency Identification (RFID)
technology has been investigated for application to inspection and diagnosis in the
maintenance of structures, it has not been used in research and development on the
maintenance of roadside trees.
On the other hand, if periodic inspection and diagnosis are performed on
roadside trees, a large amount of data will be stored in the database of each
organization responsible for roadside trees. If the data in these databases can be shared
or temporarily integrated for comparison or analysis, the databases will be used more
effectively. However, since different terms may be used for the same meaning, it will
be difficult for an integrated database to treat queries correctly.
Ontology is a technique that can formulate the concepts behind human terms into
forms comprehensible by machines, using concept classes and semantic links.
Developing a consistent knowledge base using ontology can significantly enhance the
sharing and reuse of knowledge. Using an ontology as a schema is effective for unifying
different terminologies and for interoperability among multiple systems. Although
ontology has been used to develop unified medical science databases, no
application has been reported of an ontology-based schema for roadside
trees.
Therefore, the objective of this research is to develop a system for diagnosis
support and data management using RFID technology and ontology, in order to
improve efficiency and to enable multiple databases with different terminologies to
be compared and integrated. A prototype system was developed, tested on an
actual road, and evaluated by several experts.
Current Problems
Current problems regarding roadside tree management, based on our
literature search and interviews with public agencies and greenery business
companies, are described in the following.
1) Inconsistent and unestablished management method: Although a unified roadside
   tree diagnosis format has been created among tree surgeons, it will take more time
   for the format to spread nationwide.
2) Lack of accumulation of diagnosis data: Due to the high cost (about US$185) of
   writing a diagnosis form for each tree, diagnosis has not been done repeatedly.
3) Difficulty in tree identification: Generally, tree identification is done using
   photographs and maps, which causes the following problems. Since many trees of
   the same kind are planted in a row at regular intervals along the road, mistakes in
   identifying a tree in a picture occur frequently. Moreover, since a tree is identified
   by its position from a certain reference point, e.g., 9th from traffic light No. 102,
   if a tree is felled or additional trees are planted, the number is no longer correct.
4) Time for diagnosis work based on the VTA method: It takes a tree surgeon much
   time to execute a diagnosis due to the many diagnosis items and the complicated
   determination procedure. If the required time for diagnosis is reduced, the number
   of trees diagnosed per day per person will increase, which will eventually reduce
   cost.
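Problem 3, the breakdown of positional references, can be illustrated in a few lines of Python (the tree tags and positions below are invented for illustration):

```python
# Positional reference: "the 9th tree from traffic light No. 102".
row = ["tag-%03d" % i for i in range(1, 13)]   # a row of 12 tagged trees
target = row[8]                                 # the 9th tree: "tag-009"

# A tree earlier in the row is felled and removed from the records...
row.remove("tag-004")

# ...and the positional reference now points at a different tree.
print(row[8] == target)    # False: the 9th position is now "tag-010"

# A unique RFID tag ID, by contrast, survives any change to the row.
records = {tag: {"species": "zelkova"} for tag in row}
print(target in records)   # "tag-009" is still addressable by its own ID
```

This is exactly the failure mode that per-tree RFID tags, described in the next section, are meant to eliminate.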
System Overview
In this research, the Roadside Tree Diagnosis Support System (RTDSS) was
developed to solve the problems described in the previous section. In this system, an
RFID tag is installed on each roadside tree, and a tree surgeon with a Personal Digital
Assistant (PDA) equipped with an RFID reader/writer performs the diagnosis after
reading the ID from the tag. Each RFID tag has its own unique ID, and thus individual
trees can be reliably distinguished. Once the tree ID is identified, diagnosis forms
are displayed on the PDA and the user can input the necessary data quite easily.
The PDA stores previous diagnosis data, if any, which is useful for comparing the
current and previous conditions at the site.
System Architecture
Since active RFID tags have batteries and need battery exchange, passive
RFID tags without batteries are used. There are four bands available in Japan, and each
band has its own advantages and disadvantages. We adopted the 13.56 MHz band,
considering the communication distance and directionality. There are three types of
data storage for RFID tags, i.e., read-only, write-once, and read/write. In this research,
we adopted the read/write type because, every time a diagnosis is done, the latest
information should be stored in the RFID tag for reference. In addition, environmental
resistance and durability are required. Based on the above considerations, we used the
Tag-it HF-I of Texas Instruments. This 13.56 MHz tag is a passive, coin-type tag,
22 mm in diameter, covered with polyphenylene sulfide (PPS), environmentally
resistant, and durable.
We selected the iPAQ212 of Hewlett-Packard as the PDA because its display is
relatively large and of the touch-screen type, it is suitable for outdoor use with an
LED backlight, and it comes with a Compact Flash (CF) card slot and a relatively
large memory.
As the RFID reader/writer, the RF5400-542 of Socket Mobile was used. This
reader/writer can read and write the selected RFID tags and can be installed in the
iPAQ212 using a CF card.
As the system development environment for the PDA, Le Courent, developed
by Soar Systems, Co., Ltd., was used. This development environment is very useful
because the resulting agent does not depend on the operating system (OS) or
hardware platform.
System Functions
When the user turns on the system, the first window to appear is the RFID tag
reading function. The reader/writer reads the ID of the tag when the user taps the
Read Tag button on the screen. The system then displays the data in the tag, namely
the tag ID, tree number, last date of diagnosis, and last user name. The system then
displays the detailed information of the tree corresponding to the tag ID. The user can
overwrite the previous data based on the new diagnosis. If it is the first time the tree
has been diagnosed, the user inputs the various diagnosis data using the following
functions.
1) Fundamental data input form: The user inputs the date, weather, office, user
   name, tree surgeon’s name, and history.
2) Appearance inspection form: As there are many items to be filled out, eight forms
   (windows) are provided, i.e., form, dimensions, and life-energizing force; boughs;
   crotches; shaft damage; damage of shaft bifurcations; other matters of the shaft; root
   damage; and other matters of the root. The user can move among these windows
   freely by tapping the tabs on the right-hand side of the window.
3) Appearance determination form: Based on the appearance inspection data, the
   system automatically displays the appearance determination result, i.e., normal,
   complete examination necessary, pruning or replanting necessary, etc. If, for
   some reason, one or more necessary data items in the appearance inspection form
   are missing, the system alerts the user.
4) Complete examination form: After a complete examination, such as resistograph
   or gamma-ray tree decay detection, the void ratio is filled in on the form. The
   decision is then shown on the form.
5) Special instruction form: If a special message was left at a previous visit, the user
   can input an acknowledgment here. The user can also leave a message or special
   instruction for the next inspector.
6) Photograph file name form: The user takes digital photographs of the tree and
   inputs the file name of each photograph.
After the diagnosis is done, the user saves the input data and writes the
specific data to the RFID tag using the diagnosis result saving function.
VERIFICATION OF RTDSS
Experiment
To verify the feasibility and practicality of the developed RTDSS, an
experiment was performed on seven roadside trees at Makuharicho 4-Chome on
National Highway No. 14 on November 29, 2010, with the permission of the Chiba
National Highway Office. The examiner, an employee of Toho Leo Co., Ltd., is a
certified tree surgeon who routinely executes tree diagnoses. First, the examiner
diagnosed three roadside trees by the conventional method, filling out the diagnosis
forms and making decisions, while another examiner kept time. The same examiner
then used the RTDSS on three different roadside trees of the same kind, again timed.
Table 1 shows the total time spent on the three trees for each method. The diagnosis
time itself is not very different, but there is a significant difference in the time for
filling out forms and making appearance and health decisions, because with the
RTDSS the user can input all necessary data into the PDA immediately. Thus, the
total time with the RTDSS is less than half of that of the conventional method.
Ontology
Ontology is originally a philosophical term meaning a theory of being. In
computer science, it means something that enables the sharing and reuse of knowledge
in a domain by describing the domain explicitly and logically so that computers can
process it (Kanzaki 2005). Thomas R. Gruber (1993) defined an ontology as an explicit
specification of a conceptualization.
An ontology is composed of concept classes and semantic links. Concept
classes involve entry words, such as automobile and vehicle, from the real world, and
semantic links represent the relationships among these concepts. Semantic links include
subClassOf links (general-special links), hasPart links (whole-part links), and
attribute links. Furusaki (2010) classified the objectives of ontology usage as (1)
providing common terminology, (2) utilization for semantic queries, (3) usage as
indices, (4) usage as a schema, (5) usage as media for sharing knowledge, (6) usage
for information analysis, (7) usage for information extraction, (8) usage as a
specification of a knowledge model, and (9) usage for systematization of knowledge.
CONCLUSION
ACKNOWLEDGMENT
The authors would like to thank the Chiba National Highway Office, Kanto
Regional Maintenance Bureau, Ministry of Land, Infrastructure, Transport and
Tourism and Toho Leo Co. Ltd., for their kind support to this research.
REFERENCES
ABSTRACT
INTRODUCTION
With the growing complexity of built environments, there has been an increasing
emphasis on navigation assistance for vehicles as well as for pedestrians and robots.
Although the business case for providing navigation guidance to vehicles is well known,
the need to provide navigation guidance to building occupants and first responders
during building emergencies has been recognized more recently (Zlatanova and Holweg,
2004; Walder et al., 2009). Other use cases for navigation guidance include
assisting elderly people in navigating complex environments, especially hospitals.
Currently, the Global Positioning System (GPS) provides sufficient accuracy in open
environments to enable navigation solutions that provide accurate guidance to
vehicles. In congested environments, such as city centers, navigation solutions utilize
vector representations of road networks in GIS databases to correct the not-so-accurate
GPS data (Scott, 1994; Taylor et al., 2001). Unfortunately, unlike GPS and GIS, there
is no mature framework that can be used to provide accurate navigation assistance to
pedestrians in indoor environments. Indoor environments present a challenge to
positioning technologies, and hence there is a need for spatial models that can correct
erroneous positioning data (Liao et al., 2003; Spassov, 2007). In this paper, we have built
upon network-based navigation and the vector representation of road networks in GIS to
create a spatial model that can be utilized both for navigation guidance and erroneous
positioning data correction. We refer to the developed navigation-network model of a
building as a Geometric Topology Network (GTN). This paper compares strengths and
weaknesses of two algorithms namely, the Straight Medial Axis Transformation
algorithm and a modified form of the Medial Axis Transform algorithm for automated
creation of the GTN from an IFC file.
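The map-matching idea carried over from road navigation, snapping a noisy position fix onto the nearest network edge, can be sketched in a few lines; the edge coordinates and function names below are invented for illustration:

```python
def snap_to_segment(p, a, b):
    """Project point p onto segment ab: the core of map matching,
    correcting a noisy position fix onto a network edge."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))          # clamp to the segment ends
    return (ax + t * dx, ay + t * dy)

def match(p, edges):
    """Choose the closest snapped point over all network edges."""
    def dist2(q):
        return (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2
    return min((snap_to_segment(p, a, b) for a, b in edges), key=dist2)

# Two corridor centerlines; a noisy fix near the first is pulled onto it.
edges = [((0, 0), (10, 0)), ((10, 0), (10, 10))]
print(match((4.2, 0.7), edges))   # -> (4.2, 0.0)
```

Applied to a GTN instead of a road network, the same projection corrects an erroneous indoor positioning fix onto the nearest hallway centerline.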
The next section presents the requirements for creating a GTN to provide
navigation assistance in indoor environments. Section 3 provides the background
research on creating spatial representations for navigation assistance. Section 4 describes
the algorithms to create a GTN that have been selected for comparison. Section 5
contains the details of the process to transform IFC-based building information into a
GTN. Section 6 concludes this paper and presents a discussion on the findings.
RESEARCH BACKGROUND
Researchers in the robotics domain have been utilizing algorithms from
computational geometry to decompose planar layouts of indoor environments into
topology network-based maps for robot navigation. Some of the most commonly used
computational geometry algorithms that have been utilized for mobile robot navigation
include the Medial Axis Transform (Blum, 1967; Lee, 1982) and the Generalized
Voronoi Graphs (Wallgrun, 2005). The medial axis of a polygon is the set of points
internal to a polygon that are equidistant from and closest to at least two points on the
polygon’s boundary. Lee (1982) stated that the Medial Axis Transform of a polygon is
the same as the Voronoi Diagram of that polygon minus the edges that originate from
concave vertices.
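The defining property, equidistance from at least two closest boundary points, can be checked numerically for a simple rectangle; the sketch below illustrates the definition only and is not an implementation of the Medial Axis Transform:

```python
def dist_to_edges(p, w, h):
    """Sorted distances from an interior point to the four sides of a
    w x h axis-aligned rectangle with one corner at the origin."""
    x, y = p
    return sorted([x, w - x, y, h - y])

# For a 10 x 4 rectangle, the horizontal centerline between x = 2 and
# x = 8 at y = 2 lies on the medial axis: the two nearest boundary
# points (on the top and bottom sides) are equidistant.
d = dist_to_edges((5.0, 2.0), 10.0, 4.0)
print(d[0] == d[1])          # True: two closest distances are equal

# An off-axis point instead has a unique nearest boundary point.
d = dist_to_edges((5.0, 1.0), 10.0, 4.0)
print(d[0], d[1])            # 1.0 3.0 -> not on the medial axis
```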
One of the drawbacks of the Medial Axis Transform and the Voronoi diagram is
the fact that these representations include points, lines and parabolic curves.
Representations containing complex parabolic curves work well for a mobile robot that
carries significant computational power onboard, but these representations are not
suitable for a pedestrian carrying a mobile device with limited memory and
computational power. Researchers in the domain of computational geometry recognized
these drawbacks and developed algorithms that would produce topology networks to
contain only linear elements. Aichholzer et al. (1995) developed the straight skeleton
representation, which is used for calculating the polygonal roof over a general layout of
ground walls. This representation consists of only straight elements and no parabolic arcs
and hence is referred to as the straight skeleton. Yao and Rokne (1991) developed another
simple algorithm for creating the medial axis, one that creates a topology network with
straight-line elements rather than parabolic arcs. Lee (2004) modified the algorithm
developed by Yao and Rokne (1991) to create a 3D topology network for providing
geospatial analysis capabilities for urban environments, and named the algorithm the
Straight Medial Axis Transform (S-MAT) algorithm.
The Medial Axis Transform, the Voronoi diagram and the Straight Medial Axis
Transform are centerline-based topology networks. Kannala (2005) developed a metric-
based topology network for fire-egress distance measurement, as illustrated in Figure 1b,
but it does not reflect actual navigation routes in indoor environments. Lee et al. (2010)
developed a visibility-based circulation network (Figure 1c) for code-compliance
checking on circulation distances in building. Their representation includes only linear
elements and the algorithm is accurate and efficient. The circulation network is created as
needed based on the particular query for navigation between any two points in the
building. Unlike the centerline-based network representation, where there is only one
consistent network for the whole building, in visibility-based network representation,
there are numerous networks that are possible based on the different routes a person can
take in the building. Creating this network as needed, using a lightweight mobile device,
can prove to be a significant challenge. Hence, this representation is suitable for static
applications, such as code-compliance checking, rather than mobile dynamic
applications, such as navigation.
Figure 1. a) Centerline-based geometric topology network (Lee 2004); b) metric-based
topology network (Kannala 2005); c) visibility-based circulation network (Lee et al.
2010).
Figure 3. a) Illustration of the medial axis defined by property one; b) illustration of
the medial axis defined by property two.
We decided to implement the aforementioned S-MAT algorithm, but discovered
certain drawbacks and limitations of this algorithm. Since the algorithm involves
constructing bisectors of only convex vertices, whenever there is an intersection at a
concave vertex, a new bisector does not emerge from that intersection point. Unless
there is another bisector heading from the opposite direction, such as bisector r67 in
Figure 2, the algorithm gets stuck at that point. Figure 4 illustrates how the S-MAT
algorithm gets stuck after determining nodes n1 and n2 at concave vertices. Similarly,
in Figure 3b the algorithm gets stuck at the intersection at the common concave vertex.
Figure 2 illustrates a scenario where the S-MAT algorithm does not get stuck at an
intersection at a concave vertex. In Figure 3, bisectors r13 and r41 intersect at a
concave vertex, but since there is another bisector, r67, heading from the opposite
direction, the S-MAT of the polygon is completed. Keeping in mind this limitation, we
decided to use a different algorithm, a modification of the algorithm developed by
Blum (1967) for generating the medial axis.
Figure 4. Limitation of the S-MAT algorithm: the algorithm gets stuck after reaching
nodes n1 and n2.
Our algorithm, the modified medial axis transform (MAT), involves constructing
bisectors of all the elements of a planar polygon, including the concave vertices. Figure 5
illustrates the various bisectors possible in a simple planar polygon. Figure 5a involves
creating the angle bisector of two edges. Figure 5b illustrates the parabolic bisector of a
concave vertex and an edge; since a parabola is the locus of all points equidistant from
a point and a line, the bisector of a concave vertex and an edge will always be a
parabola. Figure 5c depicts the case where the bisector of two concave-vertex elements of
a simple planar polygon is the perpendicular bisector of these two vertices. We modified
the algorithm for medial axis creation developed by Blum (1967) by removing the
parabolic bisector depicted in Figure 5b and replacing it with the two perpendicular
bisectors of a concave vertex, as shown in Figure 5d. The unique properties of these two
perpendicular bisectors ensure that the region enclosed between these two bisectors and
the concave vertex is a Voronoi region with respect to the concave vertex. This property
ensures that the nodes resulting from the intersection of these perpendicular bisectors
with other bisectors of a planar polygon lie on the medial axis of the polygon.
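The parabola property invoked above, that every point of the bisector is equidistant from the focus (the concave vertex) and the directrix (the edge), can be verified numerically; the coordinates below are illustrative:

```python
import math

# Focus at (0, 1) playing the concave vertex; directrix y = -1 playing
# the edge. The bisector between them is the parabola y = x**2 / 4.
def on_bisector(x):
    y = x * x / 4.0
    to_vertex = math.hypot(x - 0.0, y - 1.0)   # distance to the focus
    to_edge = y - (-1.0)                       # distance to the line y = -1
    return math.isclose(to_vertex, to_edge)

print(all(on_bisector(x) for x in [-3.0, -1.0, 0.0, 0.5, 2.0]))  # True
```

The modified MAT trades this curve for two straight perpendicular bisectors, which keeps the network linear at the cost of the geometric fidelity discussed below.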
Figure 5. a) Bisector of two edges, b) Bisector of an edge and a concave vertex, c)
Bisector of two concave vertices, d) Bisectors of a concave vertex.
The modified MAT algorithm involves constructing the two perpendicular
bisectors of concave vertices to determine the nodes of the medial axis. Two
perpendicular bisectors of a concave vertex are shown in Figure 6b as dotted red lines.
The nodes, n1, n2 and n3, of the medial axis are determined by intersecting these
perpendicular bisectors with other angle bisectors. The original MAT algorithm involves
constructing the parabolic bisector, p1 or p2, of a concave vertex and determining the
nodes, n1, n2 and n3, of the medial axis by intersecting the parabolic bisector with other
bisectors as shown in Figure 6a. A major difference between the MAT and modified
MAT algorithms is the fact that in the MAT algorithm the parabolic bisector of a concave
vertex is a part of the medial axis of the polygon, whereas in the modified MAT
algorithm the two perpendicular bisectors of a concave vertex only assist in determining
the nodes of a medial axis. This difference is clear in Figure 6. To complete the medial
axis in the modified MAT algorithm, we draw line segments, l1 and l2, between those
nodes of medial axis, as shown in Figure 6b, that would originally contain a parabolic
bisector, as shown in Figure 6a.
Figure 6. a) Voronoi diagram of the polygon, b) Modified medial axis of the polygon.
The modified MAT algorithm has the advantage of being generally applicable to
any shape or layout of the indoor environment, whereas the S-MAT algorithm breaks if
there is an I-shaped hallway. On the other hand, the medial axis resulting from the
modified MAT algorithm presents certain challenges to navigation assistance. For
instance, in Figure 6b, if a user has to walk straight through the hallway crossing nodes
n1 and n2, node n3 does not lie on the user’s path; nevertheless, the medial axis
generated by the modified MAT algorithm will route the user through node n3, as there
is no direct connection between nodes n1 and n2. The straight medial axis resulting
from the S-MAT algorithm, shown in Figure 2, does not suffer from this limitation.
Second, the modified medial axis also suffers from the fact that a straight line replaces
the parabolic bisector at a concave vertex. As the angle of the concave vertex
approaches 360°, the line segment that replaces the parabolic bisector gets closer and
closer to the concave vertex, and hence represents neither the path of a user nor the
centerline of the hallway. We have implemented the two selected algorithms in a
proof-of-concept prototype. The next section describes the process used in the
proof-of-concept prototype for transforming an IFC-based building information file
into a GTN.
REFERENCES
Aichholzer, O., Aurenhammer, F., Alberts, D. and Gartner, B. 1995, A novel type of
skeleton for polygons. Journal of Universal Computer Science, vol. 1, pp. 752-761.
Blum, H. 1967, A Transformation for Extracting New Description of Shape. Symp.
Models for Perception of Speech and Visual Forms, Cambridge, MA: MIT Press, pp.
362-380.
IFC 2010, http://www.iai-tech.org/products/ifc-overview. Last accessed 3rd October,
2010.
Kaemarungsi, K. 2005, Design Of Indoor Positioning Systems Based On Location
Fingerprinting Technique. Doctoral Thesis, University Of Pittsburgh.
Kannala M, 2005, Escape route analysis based on building information models: design
and implementation, MSc thesis, Department of Computer Science and Engineering,
Helsinki University of Technology, Helsinki.
Lee, D.T. 1982, Medial axis transformation of a planar shape. IEEE Trans. Pattern
Analysis and Machine Intelligence, vol. 4, pp. 363-369.
Lee, J. 2004, A spatial access-oriented implementation of a 3-d gis topological data
model for urban entities. GeoInformatica, 8 (3), pp. 237-264.
Lee, J.-K., Eastman, C.M., Lee, J., Kannala, M. and Jeong, Y.-S. 2010, Computing
walking distances within buildings using the universal circulation network.
Environment and Planning B: Planning and Design, 37 (4), pp. 628-645.
Liao, L., Fox, D., Hightower, J., Kautz, H. and Schulz, D. 2003, Voronoi tracking:
Location estimation using sparse and noisy sensor data. In Proc. of the IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS).
OpenIFCTools 2011, Open IFC Java Toolbox,
http://www.openifctools.org/Open_IFC_Tools/Demo.html.
Pradhan, A., Akinci, B., and Garrett Jr., J. H. 2009, Development and testing of inertial
measurement system for indoor localization. Proceedings of the 2009 ASCE
International Workshop on Computing in Civil Engineering, pp. 115-124.
Scott, C. 1994, Improving GPS positioning for motor vehicle through map matching. In
Proceedings of ION GPS-94. The Seventh International Technical Meeting of the
Satellite Division of the Institute of Navigation, Salt Lake City, Utah, pp. 1391-1410.
Spassov, I. 2007, Algorithms for Map-Aided Autonomous Indoor Pedestrian Positioning
and Navigation. Ph. D. thesis, Ecole polytechnique fédérale de Lausanne (EPFL).
Taylor, G., Blewitt, G., Steup, D., Corbett, S. and Car, A. 2001, Road Reduction Filtering
for GPS-GIS Navigation. Transactions in GIS, 5(3), pp. 193-207.
Walder, U., Bernoulli, T. and Wießflecker, T. 2009, An Indoor Positioning System for
Improved Action Force Command and Disaster Management. Proceedings of the 6th
International ISCRAM Conference, pp. 251-262.
Wallgrun, J. O. 2005, Autonomous Construction of Hierarchical Voronoi-Based Route
Graph Representations. In Volume 3343 of Lecture Notes in Computer Science,
Berlin, Heidelberg: Springer Berlin / Heidelberg, Chapter 23, pp. 413-433.
Yao, C. and Rokne, J., 1991, A Straightforward Algorithm for Computing the Medial
Axis of a Simple Polygon. Intern. J. Computer Mathematics, 39, pp. 51-60.
Zlatanova, S. and Holweg, D. 2004, 3D Geo-information in emergency response: a
framework. Proceedings of the Fourth International Symposium on Mobile Mapping
Technology (MMT'2004), Kunming, China, pp. 29.
Business Models for Decentralised Facility Management Supported by
Radio Frequency Identification Technology
ABSTRACT
INTRODUCTION
model is to ensure that all the factors needed to create a successful business plan are
analyzed and proposed. Business models can describe the facility management and
maintenance services offered, the business infrastructure (internal & external
resources) required for producing these services, the stakeholders (building owners,
facility managers and maintenance crews) who will use these services, and the
financial cost savings and profits facility management providers can achieve. The
objective of this paper is to take Alex Osterwalder's proposed business model and
adapt it to the opportunities of using RFID in FM.
The building occupants and building owners are the people who will benefit
from the outcome. They are among the most valuable sources for evaluating user comfort
based on the performance of building systems and components. Measuring and
documenting the user satisfaction with environmental factors such as air quality,
thermal comfort or lighting means this data could be used to determine and evaluate
service level agreements with facility management and maintenance services
providers (Allan and Menzel 2009).
David Moore (Director, BAM FM Contractors Ltd.) was approached about using the
mobile RFID system to provide an early warning to an FM contractor that the
building's environmental comfort parameters have been contravened, so that the FM
manager can mobilize the maintenance crew to rectify the issue before the end user
lodges a complaint through the FM helpdesk.
In the Irish market there are essentially four categories of FM. The choice
made by organisations depends on their size and the circumstances under which they
operate. Managers need a clear understanding of the benefits, or otherwise, in
determining which choice to make. The decision also has to be made as to whether
outsourcing will occur on the operational side or the management side. For example
on the management side it may be decided to outsource project management activities
to those experienced and qualified to carry out such tasks. This in itself could be
broken down into total outsourced project management or a mixture of both outside
consultants and in-house employees. Operational outsourcing would deal more with
the physical activities associated with the running of an organisation.
The first category involving FM is the decision to keep both operational and
management within the company. This would involve dedicating a facilities team
consisting of employees to deal with FM.
The second category is outsourcing some of non-core activities where the
expertise is not available within the organisation or where it is more cost beneficial to
do so. This would involve a service contract being set up with an outside contractor.
The number of service contracts set up is dependent on the facilities manager who
retains the responsibility for monitoring and control of these services.
The third category involves a relatively new type of FM contract. This is
called partnerships and can be described as a strategic agreement whereby the client
organisation and the service provider share the responsibility for the delivery and
performance of the service. Both parties share the benefits of more efficient methods
and associated cost savings.
The fourth category, and the one provided by most companies, is
Total Facilities Management (TFM). Under this arrangement the organisation contracts
with a company who will provide both operational and managerial FM in its entirety.
There would need to be complete trust by the organisation that the TFM company
will provide the required service and quality that is being offered. This form allows
the TFM company to be totally responsible for delivering, managing, controlling and
achieving the objectives of the organisation. This grouping of service contracts is
known as bundling and it has been suggested that TFM can never truly exist. This is
because a TFM provider would have to have the capability to cover every aspect of
FM from auditing to providing cleaners. Given the many aspects involved in FM, it is
unlikely that any firm would have that capability, and equally unlikely that any
organisation would surrender all in-house support entirely.
BUSINESS CASE
These business cases differ from business models in that they provide general
descriptions of certain areas of interest without being very specific (Browne and
Menzel 2010). Therefore, the following business cases were created:
Business Case 1: IT-supported design, installation and operation of RFID
networks to enable efficient and effective installation of RFID tags and decentralised
information and central data unit.
Business Case 2: Development, implementation and operation of a
decentralised information management system which provides decentralised
information such as manufacture, specification, timestamp, sensor reading (optional)
etc. to facility managers and maintenance crews.
Business Case 3: Development, implementation and operation of a
graphical mobile user interface which provides maintenance crews with access to
building maintenance data to carry out maintenance activities onsite.
BUSINESS MODELS
CONCLUSION
This paper describes the potential business opportunities and clearly defines
them as business models in a building management/maintenance context. It becomes
possible to specify easily the relationship between stakeholders and maintenance
profiles. As future work, we plan to evaluate and refine the model.
ACKNOWLEDGEMENTS
This study is funded by the Higher Education Authority Ireland, under PRTLI
– Cycle 4. It is embedded in the research and development activities of the smart
building cluster at University College Cork.
REFERENCES
ABSTRACT
Crane-related accidents, caused by multiple factors such as a worker entering a
dangerous area, are among the major accident types in the construction industry. A
vision for addressing this problem is through an intelligent jobsite, fully wired and
sensed. Recent advancement in pervasive and ubiquitous computing makes
autonomous crane safety monitoring possible. An initial step towards implementing
autonomous crane safety monitoring is to identify the safety and information
requirements needed. This paper presents a literature review and results from a set of
expert interviews, used to extract requirements for autonomous crane safety
monitoring. The extracted requirements for dynamic safety zones and associated
information requirements as a precursor to deployment are also presented in the
paper.
INTRODUCTION
The construction industry suffers economic losses from jobsite accidents
every year. Among these accidents, those involving cranes are a major issue.
According to statistics from the Occupational Safety and Health Administration
(OSHA), there were 323 fatalities in 307 crane incidents between 1992 and 2006 (22
fatalities per year on average). Among these 323 fatalities, 102 were caused by
overhead power line electrocutions (32%), 68 deaths associated with crane collapses
(21%), and 59 deaths involved a construction worker being struck by a crane
boom/jib (18%)(McCann 2009). A crane’s components or a worker entering a
dangerous area by accident or on purpose is a critical node on the fault chains of over
50% of fatalities. Reducing unnecessary access to dangerous areas by giving clear
warning to involved workers can eliminate the critical node on the fault chain and
hence improve the safety performance on construction jobsites. Our research
envisions an autonomous crane safety monitoring system, which utilizes data and
information collected from various sources (e.g., global positioning system,
anemometer, load cell sensor, rotary sensor and building information models). This
research is divided into three phases: 1) safety knowledge elicitation to extract the
safety and information requirements as a foundation for development of crane safety
RESEARCH BACKGROUND
Cranes play an important role on construction jobsites, hoisting and
transporting materials and equipment. Crane safety has been addressed in several
regulations published by various organizations. Until very recently, construction
practitioners followed OSHA regulations (e.g., OSHA 1910 Subpart N: Material
handling and storage). With the publication of OSHA’s recently released regulation
RESEARCH METHODOLOGY
In order to reach a comprehensive understanding of the safety requirements of
crane operation regarding dangerous areas, the authors reviewed OSHA regulations
and industry best practices, including OSHA 2007-0066 (Cranes and Derricks in
Construction), OSHA 1910 subpart N (Occupational Safety and Health Standards:
Materials Handling and Storage) and a series of best practice guidelines published by
the Construction Plant-hire Association. Based on these publications, eight dangerous
areas for crane components and workers around the operating cranes were identified.
RESULTS
After reviewing safety regulations and interviewing safety experts, we have
summarized three dangerous areas for workers near operating cranes and four
dangerous areas for crane components/loads. Unauthorized workers, and authorized
workers without proper personal protection, should be warned when entering the
following three areas: w1) the area under the crane load; w2) the area around material
stacks from which a crane is lifting/unloading materials; and w3) the swing area of a
mobile crane's superstructure. The crane load should be kept out of the following areas:
l1) proximity to a nearby structure and l2) proximity to a nearby highway, traffic road,
railway or waterway. Also, every part of the crane should be kept out of these two
areas: c1) proximity to a power line and c2) proximity to another crane's components.
Although OSHA’s regulations mention these areas, there does not exist a clear
summary to facilitate the implementation of autonomous safety monitoring. In this
research, seven extracted dangerous areas have set a clear set of boundaries of safety
operation for various entities related to crane operation. The definition of these
dangerous areas in OSHA’s regulations does not have a specific numbers to define
the area and leaves the decision to safety professionals. However, in order for a
machine to make such safety decisions, specified boundary parameters are required.
Figure 1. Decision rules to monitor worker’s safety near crane load.
To move forward with the development of the decision making chunks, the authors
identified the information requirements for autonomous crane safety monitoring.
Compared to previous studies, the identified information requirements (Table 1)
add value to the safety community because the authors give detailed information
requirements and propose possible data sources. An application programming
interface will be provided for each data source so that the decision making chunks
used in the application development can easily call its functions and retrieve
the required data for decision making.
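A decision rule of the kind shown in Figure 1 can be sketched as follows. This is an illustrative sketch of one decision making chunk (rule w1, a worker under the crane load); the 5 m radius, function names, and inputs are assumptions for illustration, not values drawn from OSHA or the expert interviews.

```python
import math

# Illustrative sketch of one decision making chunk (rule w1: worker near the
# crane load). DANGER_RADIUS_M and all function names are assumptions, not
# parameters specified by OSHA or the interviewed experts.

DANGER_RADIUS_M = 5.0

def distance(p, q):
    """Planar distance between two (x, y) positions."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def warn_worker_near_load(worker_pos, load_pos, authorized=False):
    """Flag an unauthorized worker located inside the dangerous area
    under the crane load."""
    inside = distance(worker_pos, load_pos) < DANGER_RADIUS_M
    return inside and not authorized
```

In a deployed system, `worker_pos` and `load_pos` would be retrieved through the application programming interfaces provided for each data source (e.g., positioning tags and crane sensors), and a timer would debounce transient readings before a warning is issued.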
CONCLUSIONS
Crane safety has been an important issue in construction for decades.
Although there are various standards and regulations for practitioners to follow in
order to improve safety performance on jobsites, site conditions (e.g., lack of sight
lines for the crane operator) and human errors (e.g., being unaware of entering
dangerous areas) cause many accidents in the construction industry. This research
envisions an
autonomous crane safety monitoring system on jobsites to improve safety records in
the construction industry. To achieve this goal, this paper identified typical dangerous
areas related to crane operations and examined safety requirement and information
requirement for crane safety monitoring through a comprehensive regulation review
and expert interviews. The result can be used to design and develop safety monitoring
systems. Future work includes the extraction of generic decision making modules and
information requirement across different dangerous area scenarios.
ACKNOWLEDGEMENTS
The authors would like to acknowledge the experts participating in the
interview and their contributions to our study. This research was funded, in part, by the
National Science Foundation, Grant # OCI-0636299. The conclusions herein are those
of the authors and do not necessarily reflect the views of the National Science
Foundation.
REFERENCES
Arcolano, N., M. Diercks, et al. (2001). Powerline proximity alarm. Worcester, MA,
Worchester Polytechnic Institute: 1-103.
Giretti, A., A. Carbonari, et al. (2008). Advanced Real-time Safety Management
System for Construction Sites. The 25th International Symposium on
Automation and Robotics in Construction, Vilnius, Lithuania.
Lee, U.-K., K.-I. Kang, et al. (2006). "Improving Tower Crane Productivity Using
Wireless Technology." Computer-Aided Civil and Infrastructure Engineering
21: 594-604.
McCann, M. (2009). Crane-Related Deaths in Construction and Recommendations
for Their Prevention. Silver Spring, MD, The Center for Construction
Research and Training: 1-13.
Saidi, K. (2009). Intelligent and Automated Construction Job Site Testbed.
Gaithersburg, MD, National Institute of Standards and Technology: 1-44.
Teizer, J., B. S. Allread, et al. (2010). "Autonomous pro-active real-time construction
worker and equipment operator proximity safety alert system." Automation in
Construction 19: 630-640.
Wu, W., H. Yang, et al. (2010). "Towards an autonomous real-time tracking system
of near-miss accidents on construction sites." Automation in Construction
19(2): 134-141.
A Knowledge-directed Information Retrieval and Management Framework for
Energy Performance Building Regulations
Lewis John McGibbney¹ and Bimal Kumar²
¹School of the Built and Natural Environment, Glasgow Caledonian University, G4
0BA, Glasgow; PH (0044) 0141-3318038; email: lewis.mcgibbney@gcu.ac.uk
²School of the Built and Natural Environment, Glasgow Caledonian University, G4
0BA, Glasgow; PH (0044) 0141-3318522; email: b.kumar@gcu.ac.uk
ABSTRACT
The Internet-driven world we now live in has profound implications for every aspect
of our personal and professional lives. Over the past two decades or so, an enormous
amount of information has been made accessible over the Internet, thanks to
advanced search and retrieval technologies. Over the last five years, 1,200 exabytes
(1 exabyte = 1 billion gigabytes) of data have been put online. As a result, an
increasing amount of professional work within the domain of sustainable design and
construction is becoming dependent on retrieving regulatory and advisory
information over the web quickly. Designers and builders are finding it increasingly
difficult to identify this information and assimilate it in their activities. Generic
search engines like Google do not retrieve relevant information for domain-specific
needs in a focussed manner. Therefore, there is a need for developing smarter
domain-specific search and retrieval technologies under an information management
framework. This paper presents a web-based information search and retrieval
application which employs a domain-specific ontology to identify (in particular)
relevant energy performance building regulations. The paper will focus on our
development of a customised, domain specific web search platform providing
information on (i) the choice of technologies used within this research and the basic
construction of the search application, (ii) the construction of the domain-dependent
ontology which is used to enhance search results, (iii) initial observations relating to
ongoing experiments. Our proposed framework is being developed in collaboration
with a Scottish City Council’s building control department who are actively
validating the value of our approach in their daily activity of checking and approving
designs for construction.
INTRODUCTION
In recent years we have seen a paradigm shift towards semantic retrieval of
information over the internet. It is becoming increasingly common for developers to
incorporate semantic knowledge technologies such as RDF, RDFS, OWL and
ontologies into web-based applications; this enables the applications to become more
compatible with the World Wide Web in general and the vision of the Semantic Web in
particular. In research the requirement for more efficient information retrieval over
the web has been widely documented. Systems which aim to solve this high level
problem have been implemented mainly within the biomedical (Yu, 2010) and legal
(AKOMANTOSO, 2000-2010) domains (these references by no means represent an
exhaustive list). Examples within construction and engineering have also been in
development over a number of years (Gulla, 2006), (Rezgui Y. B., 2009), and serious
contributions to knowledge in the domain of both ontology engineering and use of
ontology in information processing have been made. The framework proposed in this
paper provides an effective method of efficiently retrieving web-based data, in
particular Scottish energy performance building regulations, using the expressiveness
of OWL, the Web Ontology Language, as the primary driver towards improved search
and retrieval. The rationale behind this work stems from collaboration with a Scottish
Council’s building control department, their experiences retrieving online data and
effectively incorporating this into design decisions and regulatory rulings within the
local authority. The remaining sections of this paper are structured as follows: an
overview of the research framework, providing information on the architecture of the
search application; a section containing the underlying justification for a
domain-specific ontology within the management framework and for the use of the
W3C's OWL language as an appropriate regulation representation format; and, finally,
our initial observations from testing of the framework, followed by suggestions for
future work.
RESEARCH FRAMEWORK ARCHITECTURE
According to (Cafarella, 2004) the fundamental flaw in current commercially
owned internet search engines is twofold. First, they provide no details of their
internal workings, e.g. the algorithms associated with the ranking of search results,
clustering techniques, scoring options or spidering policies. Second, they encapsulate
immense political and cultural power which can distort the underlying search
direction. To provide an information retrieval solution tailored to the domain of
construction and engineering, it was clear that an alternative search and retrieval
architecture was required. Its underlying principles would include a spiderbot (or
crawler) tailored specifically to crawl the web for the required data; an indexing and
search implementation that stores fetched data in a structured manner, backed by a
database populated with the ontology; and, finally, an ontology-enhanced query
refinement mechanism running between the web-based user interface and the index.
For the information retrieval framework to be successful, the following factors
would have to be satisfied:
a) Web-based data such as building regulations are subject to periodic change; this
dynamic nature has to be taken into consideration when designing the system, as any
implementation that does not contain up-to-date data can offer little value. It is a
fundamental requirement that an accurate image of the Web graph be maintained.
b) It was essential that the knowledge framework perform well on sets of standard
machines, as this criterion would ensure that no IT upgrade would be required in
order to test and validate the system.
c) The system would need to incorporate a domain-specific ontology to enhance
search results; this meant that the knowledge-based tools used to infer the ontology
execute reasoners to regularly check the ontology for consistency during the design
process. The ontology class hierarchy during the construction process can be seen in
Figure 3.
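The idea of ontology-enhanced query refinement can be sketched as follows. The actual framework encodes the ontology in OWL and sits between the web interface and the index; in this minimal Python sketch, a plain dictionary stands in for the ontology, and the terms and synonyms are hypothetical examples rather than entries from the real ontology.

```python
# Minimal sketch of ontology-enhanced query refinement. The real framework
# uses an OWL ontology between the user interface and the index; here a
# dictionary stands in for it, and all terms are hypothetical examples.

ONTOLOGY_SYNONYMS = {
    "u-value": ["thermal transmittance"],
    "glazing": ["window", "fenestration"],
}

def refine_query(query):
    """Expand each query term with its ontology synonyms before the
    query reaches the index."""
    terms = []
    for term in query.lower().split():
        terms.append(term)
        terms.extend(ONTOLOGY_SYNONYMS.get(term, []))
    return " ".join(terms)
```

The expanded query is then submitted to the index in place of the user's original input, which is one simple way domain knowledge can raise recall without changing the underlying index.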
INITIAL TESTING & OBSERVATIONS
The principal aim of our knowledge framework is to provide an enhanced search and
retrieval platform with specific application to energy performance building
regulations. This primary aim encompasses several sub objectives, several of which
are mentioned in the next section. In terms of achieving the primary research focus,
we are able to provide significantly better accuracy than the two commercial search
engines used for comparison. Various test scenarios were implemented and initial
results compared with the popular commercial search engines Google UK and Yahoo
UK, as this would provide an initial basis for comparison. Testing was structured
around the submission of various queries, and results were compared on precision
(Eq. 1; a precision-at-n method was adopted with n set to ten documents, as
occasionally the number of documents retrieved from our index was less than ten)
and recall (Eq. 2; the fraction of the documents relevant to the query that were
successfully retrieved). With these criteria held constant, some initial results can be
seen in Table 1.
Eq. 1: Precision at n = |{relevant documents} ∩ {top n retrieved documents}| / n
Eq. 2: Recall = |{relevant documents} ∩ {retrieved documents}| / |{relevant documents}|
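The two measures can be computed directly from a ranked result list; the following sketch uses the precision-at-n convention described above (n = 10), with illustrative document identifiers.

```python
def precision_at_n(retrieved, relevant, n=10):
    """Fraction of the top-n retrieved documents that are relevant
    (n = 10 in the tests reported here)."""
    top = retrieved[:n]
    if not top:
        return 0.0
    return sum(1 for d in top if d in relevant) / len(top)

def recall(retrieved, relevant):
    """Fraction of all relevant documents that were retrieved."""
    if not relevant:
        return 0.0
    return sum(1 for d in retrieved if d in relevant) / len(relevant)

# Illustrative example: four documents retrieved, three judged relevant.
hits = ["d1", "d2", "d3", "d4"]
rel = {"d1", "d3", "d9"}
```

Note that when fewer than n documents are retrieved, the divisor here is the number actually retrieved, matching the caveat in the text about indexes that return fewer than ten results.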
CONCLUSIONS & FUTURE WORK
This paper documents our early efforts towards the construction of an efficient
knowledge-directed information retrieval and management framework tailored
specifically to locate energy performance building regulations. From the results
shown in Table 1, one can conclude that our efforts towards ontology driven
information retrieval enhance both levels of precision and recall far beyond the
current ability of commercial search engines. The framework maintains underlying
principles which permit further extensibility both in terms of knowledge processing
by use of extended ontology based on building regulations as well as the potential to
create a distributed computing architecture operating over clusters of processing
units. An important characteristic which needs to be addressed is the dynamic nature
of web data; therefore we are actively working towards an automated crawler which
maintains a healthy and accurate representation of the web graph. The ontology
enhanced query refinement enables our research framework to be extended to deal
not only with building regulations but with any data encoded in OWL format. This
promotes clear support for further application of our research framework. Finally, we
maintain an interest in specifically locating clauses within regulations; this provides
ACKNOWLEDGEMENTS
The authors would like to thank members of the Scottish City Council’s building
control department who are actively validating and improving our research
framework.
REFERENCES
AKOMANTOSO. (2000-2010). Architecture for Knowledge-Oriented Management
of African Normative Texts using Open Standards and Ontologies. Retrieved December 9,
2010, from AKOMANTOSO: http://www.akomantoso.org/
ASF. (2010, December 3). Welcome to Apache Lucene. Retrieved December 7, 2010,
from Apache Software Foundation: http://lucene.apache.org
ASF. (2010, September 27). Welcome to Nutch. Retrieved December 7, 2010, from
Apache Software Foundation: http://nutch.apache.org
Dickinson, I. (2009, February 24). The Jena Ontology API. Retrieved November 29, 2010,
from Jena - A Semantic Web Framework for Java:
http://jena.sourceforge.net/ontology/index.html
Princeton. (2010, December 20). About WordNet. Retrieved December 26, 2010, from WordNet:
http://wordnet.princeton.edu/
Rezgui, Y. B. (2009). Past, present and future of information and knowledge sharing
in the construction industry: Towards semantic service-based e-construction.
Computer-Aided Design, doi:10.1016/j.cad.2009.06.005.
Rezgui, Y. (2007). Text-based domain ontology building using Tf-Idf and metric
clusters techniques. The Knowledge Engineering Review, 379-403.
ABSTRACT
Innovations in the design and construction of sustainable green buildings have gained
significant interest in recent years. It has been estimated that the deployment of
intelligent monitoring and control systems can result in around 20% savings in energy
usage and play a crucial role in green buildings. Among various emerging
technologies, the wireless sensor network (WSN) has become an increasingly
feasible approach for building management. However, because of the extreme
constraints on system size (and hence battery capacity), frequent battery
recharging or replacement for a sensor node is unavoidable and incurs
unaffordable labor costs. Thus, limited energy availability in a WSN poses a big
challenge and obstacle to wide deployment of WSN based building automation and
management systems.
In this paper, the authors introduce and discuss two emerging techniques (i.e., energy
harvesting and power line communication) that have the potential to be integrated
together and provide a significant improvement on cost, performance, convenience
and reliability. To achieve low-cost high-efficiency building automation and
management, a hybrid system diagram and operation mechanism is proposed in this
paper. A case study is also provided to demonstrate how the proposed system
mitigates the inherent weakness of WSN systems.
INTRODUCTION
According to the U.S. Green Building Council, buildings account for 39% of CO2
emissions and consume 70% of the electricity load in the United States. Much of this
emission and energy usage could be saved by increasing energy efficiency in
heating, cooling, and lighting [1]. Even a small adjustment to the operation
of HVAC systems could result in significant reduction of energy consumption and
operating cost. As a result, in recent years the design of sustainable green buildings
with intelligent energy management is attracting more and more attention in both
academic and industrial communities. Smart building automation and energy
management is considered a practical and sustainable solution that could make a huge
contribution to energy savings and environmental benefits.
In this section, the advantages and disadvantages of EH and PLC techniques are
discussed. Based on their features and merits, the authors propose a hybrid network
architecture.
Power Line Communication
Recently power line communication (PLC) has drawn a lot of interest from the
building construction and management community. PLC enables data transmission
through power lines that are normally used to carry and deliver electrical power to
household appliances. The entire network for power line communication is illustrated
in Figure 1 below. PLC can be utilized to intelligently manage home appliances.
The operating mechanism of a PLC system is described as follows. The PLC adaptor
modulates a baseband signal onto a carrier, and injects the modulated signal onto the
power line. Once the modulated signal is captured, another PLC adaptor (in the
receiver) demodulates it and extracts the original baseband signal. By communicating
with the PLC adapter, the central controller is able to monitor and control all
appliances connected to the power line. The entire power line network can be easily
set up by installing power line adaptors at each electrical outlet. The resultant
expense is relatively low in comparison with the total cost of setting up a Local Area
Network (LAN).
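The modulate/demodulate cycle described above can be illustrated with a toy simulation. This sketch uses on-off keying, one of the simplest modulation schemes, purely for illustration; real PLC adaptors use far more robust schemes, and all parameters here (carrier frequency, sample rate, energy threshold) are assumptions.

```python
import math

# Toy illustration of the PLC modulate/demodulate cycle using on-off keying.
# Real PLC adaptors use far more robust schemes; all parameters here are
# illustrative assumptions.

CARRIER_HZ = 100.0
SAMPLES_PER_BIT = 50
SAMPLE_RATE = 1000.0

def modulate(bits):
    """Multiply the carrier by each baseband bit (on-off keying)."""
    signal = []
    for i, bit in enumerate(bits):
        for k in range(SAMPLES_PER_BIT):
            t = (i * SAMPLES_PER_BIT + k) / SAMPLE_RATE
            signal.append(bit * math.sin(2 * math.pi * CARRIER_HZ * t))
    return signal

def demodulate(signal):
    """Recover the baseband bits by measuring per-bit signal energy."""
    bits = []
    for i in range(0, len(signal), SAMPLES_PER_BIT):
        energy = sum(s * s for s in signal[i:i + SAMPLES_PER_BIT])
        bits.append(1 if energy > SAMPLES_PER_BIT / 4 else 0)
    return bits
```

The receiving adaptor's job is exactly the second function: detect the carrier on the line and extract the original baseband signal for the central controller.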
This section presents a simple example case study to illustrate the benefits of our
proposed hybrid network architecture (Figure 4). Suppose the sensors in the darkened
area are exposed to sufficient light irradiance and the distance between each sensor
node and the central control computer is less than the sensor’s maximum allowed
transmission distance. Then these sensor nodes are able to directly perform reliable
wireless transmission to the central control computer. Other sensors that are placed
away from this darkened region cannot directly communicate with the central control
computer, since their distance exceeds the maximum transmission range. As a result,
in the conventional energy harvesting based WSN systems [9], the central control
computer is unable to receive wireless signals from these nodes outside the maximum
wireless communication distance. Hence, the coverage percentage is limited.
However, the PLC is available to operate where the wireless system is disabled.
Those nodes that are far away from the central controller can transmit their data to the
adapters that are connected to the electrical outlets. Then those data can be transferred
to the controller via the power line. In this case, our proposed network architecture
can cover a larger area.
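The link-selection rule in this example can be summarized in a few lines. The sketch below is a hypothetical illustration of that rule; the range and irradiance thresholds are assumed values, not figures from the study:

```python
# Hypothetical sketch of the routing rule described above: a node uses
# direct wireless transmission only when it harvests enough energy and
# lies within radio range of the central controller; otherwise its data
# travel over the power line via the nearest outlet adapter.
from dataclasses import dataclass
import math

MAX_WIRELESS_RANGE_M = 30.0   # assumed maximum transmission distance
MIN_IRRADIANCE_W_M2 = 200.0   # assumed threshold for reliable harvesting

@dataclass
class SensorNode:
    x: float
    y: float
    irradiance: float  # W/m^2 at the node's location

def choose_link(node, controller_xy=(0.0, 0.0)):
    d = math.dist((node.x, node.y), controller_xy)
    if d <= MAX_WIRELESS_RANGE_M and node.irradiance >= MIN_IRRADIANCE_W_M2:
        return "wireless"
    return "plc"  # fall back to the power line network

assert choose_link(SensorNode(10, 5, 600)) == "wireless"
assert choose_link(SensorNode(80, 40, 600)) == "plc"   # out of radio range
assert choose_link(SensorNode(10, 5, 50)) == "plc"     # insufficient light
```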
CONCLUSION
REFERENCES
ACKNOWLEDGEMENT
This work was supported by a 2010 Graduate Advisory Committee (GAC) Fellowship
from Purdue University.
Planning of Wireless Networks with 4D Virtual Prototyping for Construction
Site Collaboration
O. Koseoglu
Assistant Professor, Construction Technology and Management, Department of
Civil Engineering, Eastern Mediterranean University, Gazimagusa – TRNC Via
Mersin 10 Turkey, PH +90392 6301233, FAX. +90392 6302869,
email: ozan.koseoglu@emu.edu.tr
ABSTRACT:
INTRODUCTION
This paper presents the case study research on a live construction project
carried out with a major contractor in UK and planning of wireless network on a 4D
sequenced virtual prototype for onsite implementation.
STATE-OF-THE-ART: WIRELESS NETWORKS AND 4D VIRTUAL
PROTOTYPING TECHNOLOGIES IN CONSTRUCTION
Wireless Networks in Construction. There has been little research focused on the
feasibility and assessment of wireless communication networks at construction sites.
Survey results, recorded from 58 construction managers around the world, on the
use of wireless and web-based technologies in construction revealed that
construction companies are not widely deploying wireless networks at remote
offices or in the field (Williams et al., 2006).
Dawood et al. (2006) highlighted the need for measuring and identifying the benefits of 4D
planning in the construction industry. From the real applications and performance
analysis point of view, 4D planning has not been investigated in detail. Quantifying
the benefits and identifying the capabilities of 4D planning is crucial for the
improvement of project performance (Dawood et al., 2006). Hu et al. (2005)
presented the ease and speed with which 4D models could be developed from the 2D
drawings of a specific construction project. Dib et al. (2006) suggested an approach
to combine graphical objects and textual information in order to integrate the
information between construction team members and parties. Kang et al. (2007)
investigated the usefulness of web-based 4D construction visualisation in
collaborative construction planning and scheduling. Research results revealed that
project teams using 4D models detected logical mistakes easier and faster than the
teams using 2D drawings (Kang et al., 2007). Hartmann et al. (2006) presented the
data collected from case studies on six pilot projects between 1997-2005 in order to
measure and compare the 3D modelling productivity on construction projects.
Research has shown a general increase in 3D modelling productivity in recent years
and project managers have demanded the use of 4D modelling in all case-study
projects (Hartmann et al., 2006). Sadeghpour (2006) proposed a system that
integrates Real Time Locating System (RTLS) technology with a 4D site
visualization model. The aim of the system is to visualise movements of, and
changes to, objects on construction sites in real-time by using different technologies,
such as GPS-enhanced RFID and 4D CAD (Sadeghpour, 2006). Podbreznik & Rebolj
(2007) presented the development process of the 4D-ACT (Automated Construction
Tracking) system, which automatically recognizes building elements on site and
compares planned against performed activities (Podbreznik & Rebolj, 2007).
Jongeling & Olofsson (2007) suggested a
location-based planning approach to 4D CAD models to improve their usability for
work-flow analyses. A 4D CAD model is useful for traditional planning, however, it
does not provide information about the flow of resources to specific locations at
construction sites and this article presented a case study which investigated the
combined use of location-based scheduling and 4D CAD (Jongeling & Olofsson,
2007). 4D and nD modelling concepts and new production methods provide an
opportunity for modifying the existing construction planning and scheduling
processes (Rischmoller, 2006). Norberg et al. (2006) investigated the use of 4D
CAD models combined with Line of Balance scheduling technology for the
planning of cast-in-place concrete construction processes. In the existing 4D
modelling software packages, links between 3D CAD objects and the activities of
the time schedule have to be established manually. Tulke & Hanff (2007) presented
a solution for creating time schedules and 4D simulations based on data stored in a
building model. The aim of this approach is to speed up the preparation of 4D
simulations and to provide additional benefits by a better integration of the 4D
models into planning and scheduling practice (Tulke & Hanff, 2007).
Conclusion on the State of the Art. There has been little research focused on the
feasibility and assessment of wireless communication networks at construction sites.
The state-of-the-art in wireless networks and communications in the construction
industry revealed that construction companies are not widely deploying wireless
networks at remote offices or in the field. Research projects should focus more on
the planning and implementation of wireless networks with the help of 3D digital
tools at construction sites in order to improve real-time communication and
collaboration.
Laing O’Rourke plc is the largest privately owned construction company in the UK.
From its headquarters in the UK, the Group is developing into an international
business with hubs in Europe, the Middle East and Asia, and Australasia. They have
offices in the UK, Germany, India, Australia, and United Arab Emirates, with over
30,000 employees worldwide.
Laing O’Rourke agreed to support this research and granted the researcher
permission to become involved in some of the organization’s activities whilst
conducting the case study. One of the outcomes of this case study research was to
identify the planning and implementation of wireless networks on a 4D visualised
construction model within a live construction project called “One Hyde Park”.
One Hyde Park (OHP) Project-Pilot Project. One Hyde Park is a prestigious
development of eighty apartments, set out over four residential blocks. The project was
managed, and its interiors designed, by Candy & Candy. One Hyde Park is planned to be
one of the finest residential addresses in London and is due for completion in 2010
(Candy & Candy, 2007). The structural design has been undertaken by Arup and
coordinated with Richard Rogers Partnership, the building services engineer is
Cundall.
As the buildings near completion, their height may exceed the range of good radio
propagation, depending on the exact mounting of the nodes. For example, it could be desirable to
mount an AP at the top of a crane to provide good rooftop coverage. To achieve this
a pair of wireless bridges would be required with one mounted on the site office and
the other with the AP on the crane with the antenna panels directed at each other for
optimum performance. In addition, floors that require coverage might need their own
APs, most probably one per three floors, though these could be relocated to cover the
floors where wireless coverage is needed. The exact requirements and coverage
heavily depend on the materials used for the internal construction of the buildings.
The buildings might cause more significant radio shadow as they are closer
to completion. The layout might look like Figure 4 (note that some APs are hidden
by the buildings and none of the APs are shown for the floors of the buildings).
CONCLUSIONS
REFERENCES
Arup (2006). One Hyde Park- Structural Engineering Report, Volume 04.
Barrett, P. (2000). Construction Management Pull for 4D CAD. Construction
Congress IV: Building Together for a Better Tomorrow, pp.977-983.
Bowden, S., Dorr, A., Thorpe, T., Anumba, C. (2006). Mobile ICT Support for
Construction Process Improvement. Automation in Construction, Vol.15, Issue 5,
pp. 664-676.
Brilakis, I.K. (2006). Remote Wireless Communications for Construction
Management: Case Study. Joint International Conference on Computing and
Decision Making in Civil & Building Engineering, June 14-16, 2006, Montreal,
Canada, pp. 135-144.
Candy & Candy Official Website (2007). One Hyde Park Project.
www.candyandcandy.com. Accessed November, 2007.
Dawood, N., Akinsola, A. & Hobbs, B. (2002). Development of automated
communication of system for managing site information using internet technology.
Automation in Construction, 11(5), 557-572. Elsevier Science.
Dawood, N., Scott, D., Sriprasert, E., Mallasi, Z. (2005). The virtual construction
site (VIRCON) tools: An industrial evaluation. ITcon, 10, Special Issue: From 3D
to nD Modelling, 43-54, http://www.itcon.org/2005/5. Accessed July 2005.
Dawood, N., Sikka, S., Ramsay, B., Allen, C., Khan, N. (2006). The Potential
Value of 4D Planning in UK Construction Industry. Joint International Conference
on Computing and Decision Making in Civil & Building Engineering, June 14-16,
2006, Montreal, Canada, pp. 3107-3115.
Dib, H., Issa, R.R.A., Cox, R. (2006). Visual Information Access and Management
for Life-Cycle Project Management. Joint International Conference on Computing
and Decision Making in Civil & Building Engineering, June 14-16, 2006, Montreal,
Canada, pp. 2466-2475.
Hartmann, T., Gao, J., Fischer, M. (2006). An Analytical Model to Evaluate and
Compare 3D Modeling Productivity on Construction Projects. Joint International
Conference on Computing and Decision Making in Civil & Building Engineering,
June 14-16, 2006, Montreal, Canada, pp. 1917-1926.
Hu, W., He, X., Kang, J.H. (2005). From 3D to 4D visualization in building
construction. Proceedings of ASCE International Conference on Computing in Civil
Engineering, Cancun, Mexico, July 12-15.
Jongeling, R., Olofsson, T. (2007). A method for planning of work-flow by
combined use of location-based scheduling and 4D CAD. Automation in
Construction, Vol. 16, Issue 2, 189-198.
Kang J. H., Anderson, S.D., Clayton, M.J. (2007). Empirical Study on the Merit of
Web-Based 4D Visualisation in Collaborative Construction Planning and
Scheduling. Journal of Construction Engineering and Management, Vol.133, Issue
6, 447-461.
Mobile Enterprise Analyst. (2005). Construction: Can the sleeping giant be roused?
(http://www.comitproject.org.uk/downloads/news/MEAStent.pdf). Accessed July
2006.
Nielsen, Y., Koseoglu, O. (2007). Wireless Networking in Tunnelling Projects.
Tunnelling and Underground Space Technology, Vol.22, Issue 3, 252-261.
Norberg, H., Jongeling, R., Olofsson, T. (2006). Planning for cast-in-place concrete
construction using 4D CAD models and Line-of-Balance scheduling. Proceedings
of the World IT Conference for Design and Construction, INCITE/ITCSED 2006,
New Delhi, India, Vol.2, 391- 402.
Nuntasunti, S., Bernold, L., E. (2006). Experimental Assessment of Wireless
Construction Technologies. Journal of Construction Engineering and Management,
Vol.132, No.9, 1009-1018.
Podbreznik, P, Rebolj, D. (2007). Real-time Activity Tracking System- The
Development Process. Proceedings for CIB 24th W78 Conference, Maribor 2007,
67-71.
Rischmoller, L. (2006). Construction Multidimensional (nD) Planning and
Scheduling. Proceedings of the World IT Conference for Design and Construction,
INCITE/ITCSED 2006, New Delhi, India, Vol.2, 299-314.
Sadeghpour, F. (2006). Real Time Locating System for Construction Site
Management. Joint International Conference on Computing and Decision Making;
Montreal, Canada, June 13-16, 2006, pp.3736-3741.
Tulke, J., Hanff, J. (2007) 4D Construction Sequence Planning- New Process and
Data Model. Proceedings for CIB 24th W78 Conference, Maribor 2007, 79-84.
Williams, T.P., Bernold, L., Lu, H. (2006). A survey of the use of wireless and web-
based technologies in construction. Proceedings of the 10th Biennial International
Conference on Engineering Construction, and Operations in Challenging
Environments, p.113.
Zhang, H., Shi, J.J., Tam, C.M. (2002). Iconic animation for activity-based
construction simulation. Journal of Computing in Civil Engineering, 16(3), 157–
164.
Comparison of Camera Motion Estimation Methods for 3D
Reconstruction of Infrastructure
Abstract: Camera motion estimation is one of the most significant steps for
structure-from-motion (SFM) with a monocular camera. The normalized 8-point, the
7-point, and the 5-point algorithms are normally adopted to perform the estimation,
each of which has distinct performance characteristics. Given the unique needs and
challenges associated with civil infrastructure SFM scenarios, selection of the proper
algorithm directly impacts the structure reconstruction results. In this paper, a
comparison study of the aforementioned algorithms is conducted to identify the most
suitable algorithm, in terms of accuracy and reliability, for reconstructing civil
infrastructure. The free variables tested are baseline, depth, and motion. A concrete
girder bridge was selected as the “test-bed” to reconstruct using an off-the-shelf
camera capturing imagery from all possible positions that maximally capture the
bridge’s features and geometry. The feature points in the images were extracted and matched
via the SURF descriptor. Finally, camera motions are estimated from the
corresponding image points by applying the aforementioned algorithms, and the
results are evaluated.
Introduction
The 3D spatial data of infrastructure contain useful information for civil engineering
applications including as-built documentation, on-site safety enhancement, progress
monitoring, and damage detection. Accurate, automatic, and fast acquisition of the
spatial data of infrastructure has been a priority for researchers and practitioners in the
field of civil engineering over the years.
Advances in computer vision provide a useful path for 3D data acquisition from
images and video frames. Vision-based 3D reconstruction has been investigated in the
area of computer vision for two decades. Based on the setup, such as the type of
sensor (monocular or binocular camera) or the type of captured data (image or video),
a number of frameworks have been proposed by researchers (Fathi and Brilakis, 2010).
Each framework, as a pipeline, consists of several stages, and each stage can be
implemented using different algorithms. Selecting the most appropriate algorithm for
each stage is a critical decision that depends not only on the application of the
framework but also the user’s requirements.
In computer vision, algorithms that are proposed are usually tested and evaluated
using synthetic data or data obtained indoors. For 3D reconstruction of infrastructure,
such as bridges, the distance between the camera and the bridge is usually more than
10 m. The scene itself consists of several distinct elements such as trees and sky.
Thus, evaluating the performance of such algorithms in real conditions and choosing
the best one for specialized applications is of great importance.
In this paper, we evaluate and compare different algorithms for the estimation of
camera motion. As explained in Section 3, camera motion estimation is an essential
part of every monocular 3D reconstruction framework. The performance of commonly
used methods is evaluated and compared in terms of specific metrics determined by
the requirements of infrastructure systems. The rest of the paper is organized as
follows. In Section 2, an overview of the necessary steps for camera motion estimation is
presented. Section 3 presents the metrics and experimental setup used to compare the
performance of different algorithms, and the obtained results are discussed in Section
4. The conclusions of the investigation are presented in Section 5.
most efficient algorithm for each step. Then, using a real infrastructure scene, the
performances of the three motion estimation algorithms are evaluated.
The approach for the estimation of camera motion between two views using an
essential matrix consists of three main steps: the calibration of the camera; the
computation of correspondence feature points; and the computation of the essential
matrix, camera rotation, and translation between two views. As depicted in Figure 1,
each step also contains sub-stages, which are briefly described in the next few
sections.
Calibration of camera
In computer vision, the process of obtaining the intrinsic parameters of a camera is
called calibration. Intrinsic parameters define the pixel coordinates of an image point
with respect to the coordinates in the camera reference frame. The parameters that are
known as camera intrinsic parameters are:
- Focal length;
- Image center or principal point;
- Skew coefficient (defines the angle between the X and the Y pixel axes); and
- Coefficients of lens distortion.
In this paper, we used the method proposed by Zhang for calibration (Zhang, 1999).
The method only requires the camera to observe a planar pattern shown at a few (at
least two) different orientations.
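The intrinsic parameters listed above are conventionally collected into a 3 × 3 camera matrix K that maps camera-frame coordinates to pixels. A minimal numpy sketch with illustrative values (not calibration results from this study):

```python
import numpy as np

# Assemble the intrinsic (camera) matrix K from the parameters above.
# All numeric values here are illustrative, not calibrated.
fx, fy = 800.0, 800.0   # focal length in pixel units
cx, cy = 320.0, 240.0   # principal point
skew = 0.0              # skew coefficient (0 for square, axis-aligned pixels)

K = np.array([[fx, skew, cx],
              [0.0,  fy, cy],
              [0.0, 0.0, 1.0]])

# Project a 3D point expressed in the camera reference frame to pixels.
X_cam = np.array([0.5, -0.25, 4.0])   # metres, hypothetical point
u, v, w = K @ X_cam
pixel = (u / w, v / w)                 # perspective division
assert abs(pixel[0] - 420.0) < 1e-9 and abs(pixel[1] - 190.0) < 1e-9
```

Lens distortion coefficients are applied separately, before this linear projection, since distortion is a nonlinear effect.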
applications. In recent years, another feature point detector and descriptor known as
SURF has become more popular. While the SIFT method uses a 128D vector as the
descriptor, the SURF descriptor uses a 64D vector. Thus, from the viewpoint of
identifying matches, SURF is more computationally efficient than SIFT. According to
the comparison by Bauer et al. (2007), though SIFT performs
slightly better than SURF in terms of accuracy, the performance of the two
descriptors is almost the same after applying a RANSAC algorithm to remove
outliers.
In this paper, we use the SURF method as the feature detector and descriptor. We also
use the Euclidean distance between descriptors as the criterion to find corresponding
matches. In order to improve matching efficiency, an approximate nearest
neighborhood matching strategy, a ratio test described by Lowe (2004), has been
applied rather than the classification of false matches by thresholding the distance to
the nearest neighbor. Moreover, since camera motion estimation algorithms are so
sensitive to false matches, the detected matched features are refined by the calculation
of the fundamental matrix between the two views using the RANSAC approach.
Further information on such refinement can be obtained from Snavely et al. (2007).
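The nearest-neighbour matching with the ratio test described above can be sketched as follows. The toy 2-D descriptors stand in for real 64-D SURF vectors, and the threshold is illustrative:

```python
import numpy as np

# Sketch of nearest-neighbour descriptor matching with Lowe's ratio
# test: a match is kept only when the nearest neighbour is much closer
# than the second-nearest, which rejects ambiguous matches.
def ratio_test_matches(desc_a, desc_b, ratio=0.6):
    """Return (i, j) pairs where descriptor i in image A matches
    descriptor j in image B under the ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # Euclidean distances
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:
            matches.append((i, int(order[0])))
    return matches

# Toy 2-D descriptors (real SURF descriptors are 64-D).
desc_a = np.array([[1.0, 0.0], [0.0, 1.0]])
desc_b = np.array([[0.9, 0.1], [5.0, 5.0], [0.1, 0.9]])
assert ratio_test_matches(desc_a, desc_b) == [(0, 0), (1, 2)]
```

In practice, an approximate nearest-neighbour index (e.g., a k-d tree) replaces the exhaustive distance computation, and the surviving matches are further refined with RANSAC as the text describes.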
        [ 0  1  0 ]
    D = [ 1  0  0 ]                                 (4)
        [ 0  0  1 ]
The 8-point algorithm
The 8-point algorithm, which is the most straightforward method for the calculation of
the essential matrix, was first introduced by Longuet-Higgins (Hartley, 1997). The
great advantage of the 8-point algorithm is that it is linear, and hence, it is fast and
easily implementable. If exactly 8 point matches are known, the linear equations can be
solved directly. For more than 8 points, a linear least-squares minimization problem must be
solved. The key to the success of the 8-point algorithm lies in proper normalization
of the input data before the construction of the equations to be solved. In this case, a
simple transformation (translation and scaling) of the points in the image before
formulating the linear equations leads to an enormous improvement in the
conditioning of the problem, and hence, in the stability of the result. The complexity
added to the algorithm as a result of the normalizing transformations is insignificant.
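The normalization-then-least-squares procedure just described can be sketched on synthetic correspondences. This is an illustrative implementation under simplifying assumptions (e.g., already-calibrated image coordinates), not the authors' code:

```python
import numpy as np

# Sketch of the normalized 8-point algorithm: translate/scale the points
# (Hartley normalization), solve the linear system A e = 0 by SVD, then
# enforce the singularity (rank-2) constraint.
def normalize(pts):
    """Translate points to their centroid and scale so the mean
    distance from the origin is sqrt(2)."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - centroid, axis=1))
    T = np.array([[scale, 0, -scale * centroid[0]],
                  [0, scale, -scale * centroid[1]],
                  [0, 0, 1]])
    homog = np.column_stack([pts, np.ones(len(pts))])
    return (T @ homog.T).T, T

def eight_point(x1, x2):
    """Estimate the 3x3 matrix E (up to scale) from >= 8 correspondences."""
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    # Each correspondence x2^T E x1 = 0 gives one row of A.
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
                  for (u1, v1, _), (u2, v2, _) in zip(n1, n2)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)           # null vector = least-squares solution
    U, S, Vt = np.linalg.svd(E)
    E = U @ np.diag([S[0], S[1], 0.0]) @ Vt  # enforce rank 2
    return T2.T @ E @ T1               # undo the normalization

# Synthetic correspondences: a pure image-plane shift plus small noise.
rng = np.random.default_rng(0)
x1 = rng.uniform(0, 640, size=(10, 2))
x2 = x1 + np.array([5.0, 0.0]) + rng.normal(0, 0.05, size=(10, 2))
E = eight_point(x1, x2)
s = np.linalg.svd(E, compute_uv=False)
assert s[2] < 1e-6 * s[0]   # rank 2, as required of an essential matrix
```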
If A has rank 8, then it is possible to solve for E up to scale. In the case where the
matrix A has rank 7, it is still possible to solve for the essential matrix by making use
of the singularity constraint. The most important case is when only 7 point
correspondences are known, leading to a 7 × 9 matrix A, which generally has rank 7.
The solution to the equations AE = 0 in this case is a 2-dimensional space of the form
(Hartley & Zisserman, 2004):

    aE1 + (1 - a)E2                                 (7)
where a is a scalar variable. The matrices E1 and E2 are obtained as the matrices
corresponding to the generators of the right null-space of A. Next, we exploit the
constraint det E = 0. Since E1 and E2 are known, this leads to a cubic polynomial
equation in a. This polynomial equation may be solved to find the value of a. There
will be either one or three real solutions, giving one or three possible solutions for the
essential matrix.
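The cubic-polynomial step above can be sketched numerically: because det(aE1 + (1 - a)E2) is cubic in a, evaluating the determinant at four sample values recovers the polynomial exactly, and its real roots give the candidate matrices. E1 and E2 below are arbitrary stand-ins, not matrices from the experiment:

```python
import numpy as np

# Sketch of the 7-point cubic step: fit the cubic det(a*E1 + (1-a)*E2)
# from four determinant evaluations, then keep the real roots.
def seven_point_candidates(E1, E2):
    a_samples = np.array([0.0, 1.0, 2.0, 3.0])
    det_samples = [np.linalg.det(a * E1 + (1 - a) * E2) for a in a_samples]
    coeffs = np.polyfit(a_samples, det_samples, 3)  # exact: det is cubic in a
    roots = np.roots(coeffs)
    real_roots = roots[np.abs(roots.imag) < 1e-9].real
    # One or three real roots -> one or three candidate matrices.
    return [a * E1 + (1 - a) * E2 for a in real_roots]

# Arbitrary illustrative generators (not from any real null-space).
E1 = np.diag([1.0, 2.0, 3.0])
E2 = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [1.0, 0.0, 0.0]])
candidates = seven_point_candidates(E1, E2)
assert len(candidates) in (1, 3)
for E in candidates:
    assert abs(np.linalg.det(E)) < 1e-8   # each candidate is singular
```

In a full 7-point implementation each singular candidate would then be tested against the correspondences (e.g., via cheirality checks) to select the correct essential matrix.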
Two parameters are considered for evaluating the performance of these algorithms: the
length of the baseline and the depth value. For each motion scenario, three possible
baseline lengths have been defined: 60, 100, and 140 cm. For sideway motion
scenarios (items 1 to 8), four different depth values have been selected: 12, 16, 20, and
24 m. Consideration of these parameters implies 102 motion primitives in total.
In order to run the test, a concrete girder bridge located on Interstate 75, McDonough,
GA, has been chosen as our target infrastructure. The two-span bridge consists of three
rows of concrete columns, and each row contains five columns (Figure 2).
We used a high-resolution 8-megapixel Nikon camera installed on a tripod as our
sensor. The tripod was marked such that it was possible to measure the degree of
rotation in different configurations. A tape measure was used to measure the actual
translations of the sensor.
Figure 3: Concrete girder bridge used as the selected infrastructure to conduct the
test (left) and the test-bed platform (right)
Experimental Results
The average calculated error in computing the translation and rotation for one of the
motion primitives (Table 1, number 2) with different baseline and depth values is
presented in Figure 4.
Figure 4: Average translation and rotation errors for 3 different algorithms, motion
primitive number 2.
- The length of the baseline in the applied range (60 to 140 cm) has no specific effect
on the accuracy of the results; however, increasing the depth value usually leads to
less accurate results.
References
Bauer, J., Sunderhauf, N., & Protzel, P. (2007). “Comparing Several Implementations
of Two Recently Published Feature Detectors.” In Proc. of the International
Conference on Intelligent and Autonomous Systems, IAV, Toulouse, France.
Fathi, H., and Brilakis, I. (2010). “Automated sparse 3D point cloud generation of
infrastructure using its distinctive visual features.” Journal of Advanced Engineering
Informatics, in press.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2009). “D4AR- A 4-
Dimensional augmented reality model for automating construction progress data
collection, processing and communication.” Journal of Information Technology in
Construction (ITcon), Special Issue Next Generation Construction IT: Technology
Foresight, Future Studies, Road-mapping, and Scenario Planning, 14, 129-153.
Golparvar-Fard, M., Peña-Mora, F. Arboleda, C. A., and Lee, S. H. (2009).
“Visualization of construction progress monitoring with 4D simulation model overlaid
on time-lapsed photographs.” ASCE J. of Computing in Civil Engineering, 23 (6),
391-404.
Hartley, R. (1997). “In defense of the eight-point algorithm.” IEEE Transactions on
Pattern Analysis and Machine Intelligence, 19(6), 580–593.
Hartley, R., and Zisserman, A. (2004). “Multiple view geometry.” Cambridge, UK:
Cambridge University Press.
Lowe, D. (2004). “Distinctive image features from scale-invariant keypoints.”
International Journal of Computer Vision, 60(2), 91-110.
Nistér, D. (2004). “An efficient solution to the five-point relative pose problem.” IEEE
Transactions on Pattern Analysis and Machine Intelligence (PAMI), 26(6), 756-770.
Pollefeys, M., Van Gool, L., Vergauwen, M., Verbiest, F., Cornelis, K., Tops, J., and
Koch, R. (2004). “Visual modeling with a hand-held camera.” International Journal of
Computer Vision, 59(3), 207-232.
Rodehorst, V., Heinrichs, M., and Hellwich, O. (2008). “Evaluation of relative pose
estimation methods for multi-camera setups.” In proceedings of ISPRS08, B3b: 135 ff.
Snavely, N., Seitz, S., and Szeliski, R. (2007). “Modeling the world from internet
photo collections.” International Journal of Computer Vision, 80(2), 189-210.
Zhang, Z. (1999). “Flexible camera calibration by viewing a plane from unknown
orientations.” International Conference on Computer Vision (ICCV99), 666-673.
Multi-Image Stitching and Scene Reconstruction for Evaluating Change
Evolution in Structures
ABSTRACT
It is well recognized that civil infrastructure monitoring approaches that rely
on visual assessment will continue to be an important methodology for the condition
assessment of such systems. Current inspection standards for structures such as
bridges require an inspector to travel to a target structure site and visually assess the
structure’s condition. This study presents and evaluates the underlying technical
elements for the development of an integrated inspection software tool that is based
on the use of commercially available digital cameras. For this purpose, digital
cameras are appropriately mounted on a structure (e.g., a bridge) and can zoom or
rotate in three directions. They are remotely controlled by an inspector, which allows
the visual assessment of the structure’s condition by looking at images captured by
the cameras. Because the inspector does not have to travel to the structure’s site,
issues related to safety considerations and traffic detouring are consequently bypassed. The proposed
system gives an inspector the ability to compare the current (visual) situation of a
structure with its former condition. If an inspector notices a defect in the current view,
he/she can request a reconstruction of the same view using images that were
previously captured and automatically stored in a database. Furthermore, by
generating databases that consist of periodically captured images of a structure, the
proposed system allows an inspector to evaluate the evolution of changes by
simultaneously comparing the structure’s condition at different time periods. Several
illustrative examples are presented in the paper to demonstrate the capabilities, as
well as the limitations, of the proposed vision-based inspection procedure.
1. INTRODUCTION
Bridges constitute one of the major civil infrastructure systems in the U.S. According
to the National Bridge Inventory (NBI), more than 10,400 bridges are categorized as
structurally deficient (Chong et al. 2003). There is an urgent need to develop effective
approaches for the inspection and evaluation of these bridges. In addition, periodical
inspections and maintenance of bridges will prolong their service life (McCrea et al.
2002).
Visual inspection is the predominant method used for the inspection of bridges. In many
cases, other NDE techniques are compared with visual inspection results (Moore et al.
2001). Visual inspection is a labor-intensive task that must be carried out at least bi-
annually in many cases (Chang et al. 2003).
The main purpose of the current study is to enable inspectors to accurately and
conveniently compare the structure’s current condition with its former condition.
Cameras can be conveniently mounted on a structure, and in the case of bridges, the
cameras can be mounted on bridge columns. Even though the cameras may be
constrained in regard to translation, they can easily rotate in two or three directions.
In the present study, a database of images captured by a camera is constructed
automatically. If the inspector notices a defect in the current view, he or she can
request the reconstruction of that view from the previously captured images. In this
way, the inspector can look at the current view and the reconstructed view
simultaneously. Since the reconstructed view is based on images in the
database and has virtually the same camera pose as the current view, the inspector
can easily compare the current condition of the structure with its previous condition
and evaluate the evolution of defects. Figure 1 shows a simplified schematic
hardware configuration of the proposed inspection system.
Matches are rejected when the distance ratio of the closest neighbor to that of the
second-closest neighbor is greater than 0.6 (Brown 2005).
2.5 Composition
The selected images are all transformed onto the plane of the current-view image and
stitched using the homographies between each selected image and the current-view
image. The composition surface is flat.
Consequently, straight lines remain straight, which is important for inspection
purposes. Finally, the reconstructed scene is cropped and can then be compared to the
current-view image.
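The planar composition step can be illustrated with a single point warp: each selected image is mapped onto the current-view plane by its 3 × 3 homography H, applied in homogeneous coordinates. H below is an arbitrary example, not one estimated from the test images:

```python
import numpy as np

# Sketch of warping a pixel through a homography H onto the
# current-view plane: p' = H p in homogeneous coordinates, followed by
# division by the third component.
def warp_point(H, xy):
    x, y, w = H @ np.array([xy[0], xy[1], 1.0])
    return (x / w, y / w)

# An arbitrary illustrative homography: pure translation by (40, 10) px.
H = np.array([[1.0, 0.0, 40.0],
              [0.0, 1.0, 10.0],
              [0.0, 0.0, 1.0]])
assert warp_point(H, (100.0, 200.0)) == (140.0, 210.0)
```

Because a homography maps straight lines to straight lines, composing onto a flat surface preserves line straightness, which is the property the text highlights for inspection purposes.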
2.6 Blending
After stitching the images together, some image edges are still visible. This effect is
usually due to exposure differences, vignetting (reduction of the image intensity at the
periphery of the image), radial distortion, or mis-registration errors (Brown 2005).
Due to mis-registration or radial distortion, linear blending of overlapped images may
blur the overlapping regions. In the problem under discussion, the preservation of
high-frequency components (e.g., cracks) is of interest. A solution to this problem is
to use a technique that blends low-frequency components over a larger spatial region
and high-frequency components over a smaller region. For this purpose, the
Laplacian pyramid blending (Burt and Adelson 1983) technique is used.
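The frequency-split idea can be sketched in one dimension with a two-level decomposition: the low-pass band is blended with a smoothed (wide) mask while the high-frequency residual uses the sharp (narrow) mask. This is a toy illustration of the principle, not the paper's implementation of Burt and Adelson's multi-level pyramid:

```python
import numpy as np

# Toy 1-D sketch of frequency-dependent blending: lows get a wide
# transition, highs (fine details such as cracks) get a sharp one.
def blur(x):
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

def blend(a, b, mask):
    low_a, low_b = blur(a), blur(b)
    high_a, high_b = a - low_a, b - low_b     # high-frequency residuals
    smooth_mask = blur(mask)                  # wide transition for lows
    low = smooth_mask * low_a + (1 - smooth_mask) * low_b
    high = mask * high_a + (1 - mask) * high_b  # sharp transition for highs
    return low + high

a = np.ones(8)                              # "left image": bright strip
b = np.zeros(8)                             # "right image": dark strip
mask = (np.arange(8) < 4).astype(float)     # seam in the middle
result = blend(a, b, mask)
assert np.isclose(result[2], 1.0) and np.isclose(result[5], 0.0)
```

A full Laplacian pyramid repeats this split over several octaves, blending each band with a correspondingly smoothed mask before reconstructing.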
Figure 4(a) shows a current-view image of the truss system shown in Figure 3. The
resolution for this image is 800 × 600 pixels. A yellow tape is attached to the truss in
this image. Figures 4(b) and (c) are the reconstructed and cropped scenes using the
images captured at time periods t2 and t1, respectively. The regions of interest are
shown by red circles in these figures. One can see that the yellow tape did not exist at
time period t1. At time t2, a vertical tape is attached to the truss. The current-view
image shows two yellow tapes, one vertical and one horizontal, attached to the
structure. This is a simple example to demonstrate the capabilities of the proposed system.
Note that none of the images in Figures 3(a) and (b) are identical with the
reconstructed scenes in Figures 4(b) and (c). To reconstruct the scenes shown in
Figures 4(b) and (c), four and six images are selected automatically from the
databases in Figures 3(a) and (b), respectively. Figure 5 shows the contribution of
four images used to reconstruct Figure 4(b). On an AMD Athlon II X4 (2.6 GHz)
processor, it takes 110 seconds for the proposed system to detect SIFT keypoints in
the current-view image, find the matching keypoints between the current-view image
and all the images in the database (32 images), select matching images, solve the
bundle adjustment problem, blend the selected images and crop the reconstructed
scene in Figure 4(b).
Bundle adjustment takes less than a second of the whole computation time (because
the sparse bundle adjustment algorithm is efficiently implemented in C++). Note that
no parallel processing is used in this process. Except for the bundle adjustment
algorithm, which is implemented in C++, the rest of the algorithms are implemented
in MATLAB. For faster performance (i.e., online processing), all the algorithms
should be efficiently implemented in C++ (or an equivalent computer language).
(a)
(b)
Figure 3: Two image databases of a truss system captured at different time
periods: (a) and (b) images of a truss system captured at time periods t1 and t2,
respectively (t1 < t2).
The mounted cameras can zoom and rotate in multiple directions. The inspector thus has the appropriate tools to inspect different parts of the
structure from different views.
Figure 5: The scene reconstruction and the contribution of four selected images
from the database captured at time t2 (Figure 3(b)). The current-view image
corresponding to this reconstruction is shown in Figure 4(a).
The main purpose of the current study is to give the inspector the ability to compare
the current situation of the structure with the results of previous inspections. In order
to reach this goal, a database of images captured by a camera is constructed
automatically. When the inspector notices a defect in the current view, he can request
the reconstruction of the same view from the images captured previously. In this way,
the inspector can evaluate the growth of a defect of interest. If overlapping images are
captured periodically and saved in separate databases, then the evolution of changes
can be tracked through time by multiple reconstruction of a scene from images
captured at different time intervals.
The correction of radial distortion is not considered in this study. Radial distortion
can be modeled using low-order polynomials. Furthermore, implementing all of the
discussed algorithms in a computer language such as C or C++ will dramatically
decrease the computation time and bring the proposed system closer to online use.
Further details and examples of the proposed study can be found in the
studies done by Jahanshahi et al. (2009 and 2011).
5 ACKNOWLEDGEMENTS
This study was supported in part by grants from the National Science Foundation.
REFERENCES
Brown MA. Multi-image Matching using Invariant Features. The University of British
Columbia. Vancouver, British Columbia, Canada; 2005.
Burt PJ, Adelson EH. A Multiresolution Spline With Application to Image Mosaics.
ACM Transactions on Graphics. 1983 October;2(4):217–236.
Chang PC, Flatau A, Liu SC. Review Paper: Health Monitoring of Civil
Infrastructure. Structural Health Monitoring. 2003;2(3):257–267.
Chong KP, Carino NJ, Washer G. Health monitoring of civil infrastructures. Smart
Materials and Structures. 2003 June;12(3):483–493.
Graybeal BA, Phares BM, Rolander DD, Moore M, Washer G. Visual inspection of
highway bridges. Journal of Nondestructive Evaluation. 2002
September;21(3):67–83.
Jahanshahi MR, Kelly JS, Masri SF, Sukhatme GS. A survey and evaluation of
promising approaches for automatic image-based defect detection of bridge
structures. Structure and Infrastructure Engineering. 2009 December;5(6):455–
486.
Jahanshahi MR, Masri SF, Sukhatme GS. Multi-Image Stitching and Scene
Reconstruction for Evaluating Defect Evolution in Structures. Structural Health
Monitoring. In press (2011). doi:10.1177/1475921710395809.
Lourakis MIA, Argyros AA. The Design and Implementation of a Generic Sparse
Bundle Adjustment Software Package Based on the Levenberg-Marquardt
Algorithm. Heraklion, Crete, Greece: Institute of Computer Science - FORTH;
2004. 340. <http://www.ics.forth.gr/~lourakis/sba> (Jan. 12, 2010).
Lowe DG. Distinctive Image Features from Scale-Invariant Keypoints. International
Journal of Computer Vision. 2004;60(2):91–110.
McCrea A, Chamberlain D, Navon R. Automated inspection and restoration of steel
bridges - A critical review of methods and enabling technologies. Automation in
Construction. 2002 June;11(4):351–373.
Mizuno Y, Abe M, Fujino Y, Abe M. Development of interactive support system for
visual inspection of bridges. Proceedings of SPIE - The International Society for
Optical Engineering. 2001 March;4337:155–166.
Moore M, Phares B, Graybeal B, Rolander D, Washer G. Reliability of visual
inspection for highway bridges, Volume I: Final Report. US Department of
Transportation, Federal Highway Administration; 2001.
<http://www.tfhrc.gov/hnr20/nde/01020.htm> (Jan. 6, 2010).
Computer Vision Techniques for Worker Motion Analysis to Reduce
Musculoskeletal Disorders in Construction
Chunxia Li1 and SangHyun Lee2
1 PhD Student, Department of Civil & Environmental Engineering, University of
Michigan, 1316 G. G. Brown, 2350 Hayward Street, Ann Arbor, MI 48109; PH:
(734)763-5091; email: chunxia@umich.edu
2 Assistant Professor, Department of Civil & Environmental Engineering, University
of Michigan, 2340 G. G. Brown, 2350 Hayward Street, Ann Arbor, MI 48109; PH:
(734)764-9420; email: shdpm@umich.edu
ABSTRACT
Worker health is a serious issue in construction. Injuries and illnesses result in days
away from work and incur tremendous costs for construction organizations.
Musculoskeletal disorders, in particular, constitute a major category of worker injury.
The repetitive movements, awkward postures, and forceful exertions involved in trade
work are leading causes of this type of injury. To reduce the number of these injuries,
worker activities must be tracked and analyzed. Traditional methods to measure work
activities rely upon manual on-site observations which are time-consuming and
inefficient. To address these limitations, computer vision techniques for worker
motion analysis are proposed to automatically identify non-ergonomic postures and
movements without on-site work interruption. Specifically, we intend to extract 2D
skeleton joints from image sequences, obtain 3D coordinates for each joint, and
reconstruct a 3D human skeleton for each frame; these then can be used
for diverse ergonomic analyses (e.g., joint angle comparisons with the suggested
ergonomic guidelines for trades). In this paper, we therefore discuss how 3D skeleton
video images can be reconstructed with two 2D skeleton images recorded from two
network surveillance cameras. The results demonstrate that the obtained 3D skeleton
video with joint coordinates has enough detail to be used for motion analysis and
great potential for identifying non-ergonomic postures and movements. This
information can be used to reduce musculoskeletal disorders in the construction
industry.
Introduction
Worker health is a serious issue in the construction industry. It has attracted attention
both from academics and industry professionals (Albers et al. 2007). The physically
demanding characteristics of construction result in prevalent strains, sprains, and
work-related musculoskeletal injuries (Albers et al. 2007). The U.S. Bureau of
Labor Statistics (BLS) defines musculoskeletal disorders (MSDs) as injuries and
analysis reduces the amount of human effort and workforce involvement in on-site
surveys and observation; it thus can be effective and economical. Our research efforts
focus on obtaining the required action information from video taken on a construction
site, using currently available video-based computer vision technologies. The
framework of this research is shown in Figure 1.
are correct, ergonomic risk factors, such as action frequency and duration, can be
calculated and compared to ergonomic standards to check whether they are within the
required range and whether workers are following correct work methods. For
example, if a bar bender is assembling bars while standing straight along the platform,
it can be said that his/her work is being conducted ergonomically in terms of back
bending.
4. Motion visualization: The obtained motion information will be implemented in a
virtual reality environment to provide a visual interface, giving users an intuitive
understanding of the construction site and visual feedback, such as how workers are
conducting their work and whether they are working ergonomically.
2D Skeleton Extraction and 3D Skeleton Reconstruction
The reconstructed 3D skeletons of workers’ actions are presented in this section. Since back
injuries account for 25% of injuries in the construction industry, the experiment
begins with activities involving back-bending. This experiment aims to establish
3D skeleton images based on 2D skeleton recorded from two network surveillance
cameras. In addition, it attempts to calculate the angle and duration of back
bending using this 3D skeleton. 2D skeletons with 15 joints (Figure 2) are
marked by extracting them from the video frame by frame. A projective reconstruction
algorithm is used on the 2D skeleton to establish a 3D skeleton and realize a three
dimensional reconstruction of the body joints (Hartley and Zisserman 2003).
For this experiment, there is no prior information except two sets of images from the
two cameras. The 3D skeleton can be recovered using projective reconstruction
without known camera calibration. The projective algorithm (Hartley and Zisserman 2003)
is as follows:
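The algorithm's steps fall on a page not reproduced here. As a hedged illustration of one standard building block from Hartley and Zisserman (2003), the following sketch performs linear (DLT) triangulation of a 3D point from its two 2D observations, assuming the 3×4 camera projection matrices are known (in the fully projective setting these would themselves be estimated, e.g. from the fundamental matrix).

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear triangulation (Hartley & Zisserman, Ch. 12): build the
    homogeneous system A X = 0 from the two observations and solve for
    X as the right singular vector of the smallest singular value."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to (X, Y, Z)
```

Applied frame by frame to each of the 15 matched joint pairs, this yields the 3D joint coordinates from which the skeleton is assembled.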
Figure 4 shows the reconstructed 3D skeleton from this algorithm. With the 3D coordinates
of this skeleton, motion information, such as back bending angle, joint angle, and joint
distance, can be calculated and then used to measure whether a worker is working
within the range that the ergonomic standard recommends. For example, the angle of
back bending is calculated between the belly-to-neck vector and a vertical reference
vector, as shown in Figure 4. If a worker is bending back with an angle greater than
30° for four hours or more per working day (8 hours), it is considered as hazardous
(Spielholz et al. 2006).
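The angle and duration checks described above can be sketched directly from the 3D joint coordinates; the joint names, the choice of +y as the vertical axis, and the function names below are illustrative, not the paper's implementation.

```python
import math

def back_bending_angle(belly, neck):
    """Angle (degrees) between the belly->neck vector and the vertical
    axis (assumed here to be +y); joints are (x, y, z) tuples."""
    v = [n - b for n, b in zip(neck, belly)]
    vertical = (0.0, 1.0, 0.0)
    dot = sum(a * b for a, b in zip(v, vertical))
    norm = math.sqrt(sum(a * a for a in v))
    return math.degrees(math.acos(dot / norm))

def is_hazardous(angles_deg, frame_interval_s, threshold_deg=30.0,
                 max_duration_s=4 * 3600):
    """Flag a posture history if the time spent above the 30-degree
    threshold exceeds four hours (Spielholz et al. 2006)."""
    over = sum(1 for a in angles_deg if a > threshold_deg)
    return over * frame_interval_s > max_duration_s
```

The duration term here is the number of over-threshold frames multiplied by the time between frames, which is how the bending-duration check later in the paper can be computed.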
To validate the accuracy of the calculated results, the experimenter stayed still with a
fixed back bending angle (30° and 75°, respectively) for 30 seconds. Thirty frames (1
frame per second) were analyzed here (Figure 5). For the 30 frames with back
bending angle of 30°, the mean and variance are 29.11 and 2.44 respectively, and
error mean is -0.89. For the 30 frames with back bending angle of 75°, the mean and
variance are 79.33 and 8.59 respectively, and the error mean is 4.22. At a 99%
confidence level, the confidence intervals for the 30° and 75° datasets are (27.60, 31.63)
and (73.59, 85.01), respectively. Based on this analysis, it can be concluded that this
algorithm has a potential to be implemented in this research since the reconstructed
3D skeleton and angle calculation results are reasonably accurate.
The duration of back bending at angles over 30° can also be calculated by multiplying
the frame interval by the number of such frames. This information can also be used to
check whether the duration exceeds the one recommended by the ergonomic standard.
Conclusions
Worker MSDs are a serious problem in the construction industry. The leading causes
of MSDs, such as repetitive movements, are related to worker activities. By
measuring worker activities, information such as joint angle can be obtained and used
for MSD research as a basis for comparison with health standards. The limitations of existing
research methods, such as surveys, interviews, and questionnaires, can be addressed
through the utilization of a computer vision-based research framework which includes
four steps: motion identification, motion recognition, motion analysis, and motion
visualization.
In this paper, we establish 3D skeletons of construction workers from videos. 2D
skeletons are manually extracted from image sequences decomposed from videos. A
projective reconstruction algorithm then is implemented to calculate the 3D
coordinates for each joint in the designated human model for each frame. The
algorithm produces 3D skeletons similar to the skeletons evident in the videos. The
3D joint coordinates also are useful for precise motion analysis, since they can be
used to calculate relative information, such as duration, frequency, joint angle, and
back bending angle. With this technology, early symptoms and warnings can be
automatically detected and feedback can be provided to the worker with regard to
existing ergonomic standards. Therefore, early intervention can be executed to rectify
worker behavior in order to reduce MSD development.
The 2D skeletons in this experiment were marked manually. Automatic extraction of
2D skeletons with human-model-based tracking is ongoing work.
References
Adisesh, A., Rawbone, R., Foxlow, J., and Harris-Roberts, J. (2007). “Occupational
health standards in the construction industry.” HSE Research Report
Aggarwal, J. K., Park, S. (2004). “Human Motion: Modeling and Recognition of
Actions and Interactions.” International Symposium on 3D Data Processing,
Visualization & Transmission, Thessaloniki, Greece.
Albers, J. T., and Estill, C. F. (2007). “Simple solutions: ergonomics for construction
workers.”
Arikan, O., and Forsyth, D. (2002). “Interactive motion generation from examples.”
ACM Transactions on Graphics, 21(3), 483–490.
Daggfeldt, K., and Thorstensson, A. (2003). “The mechanics of back-extensor torque
production about the lumbar spine.” J. Biomech, Jun, 36(6), 815-825.
D’Apuzzo, N., Plankers, R., Fua, P., Gruen, A., and Thalmann, D. (1999). “Modeling
human bodies from video sequences.” Videometrics Conferences, SPIE Proc.,
vol. 3461, 36-47.
DiFranco, D. E., Cham, T-J., and Rehg, J. M. (2001). “Reconstruction of 3-D figure
motion from 2-D correspondences.” Proc. of the 2001 IEEE Conf. on
Computer Vision and Pattern Recognition, vol. 1, 307-341.
Everett, J. G., and Kelly, D. L. (1998). “Drywall joint finishing: productivity and
ergonomics.” Journal of Construction Engineering and Management, 9-10,
347-353.
ABSTRACT
Automated health monitoring and maintenance of civil infrastructure systems is an
active yet challenging area of research. Current inspection standards require an
inspector to travel to a target structure site and visually assess the structure's condition.
If a region is inaccessible, binoculars must be used to detect and characterize defects.
This approach is labor-intensive and highly qualitative. A less time-consuming and
less expensive alternative to current monitoring methods is to use a robotic system that
could inspect structures more frequently and perform autonomous damage detection.
Among several possible techniques, the use of optical instrumentation (e.g., digital
cameras), image processing and computer vision are promising approaches as
nondestructive testing methods. The feasibility of using image processing techniques
to detect deterioration in structures has been acknowledged by leading researchers in
the field. This study presents and evaluates the technical elements for the
development of a novel crack detection methodology that is based on the use of
inexpensive digital cameras. Guidelines are presented for optimizing the acquisition
and processing of images, thereby enhancing the quality and reliability of the damage
detection approach and allowing the capture of even the slightest defects, which are
routinely encountered in realistic field applications where the camera-object distance
and image contrast are uncontrollable.
1. INTRODUCTION
Civil infrastructure system assets represent a significant fraction of the global assets
and in the United States are estimated to be worth $20 trillion. These systems are
subject to deterioration due to excessive usage, overloading, and aging materials, as
well as insufficient maintenance and inspection deficiencies.
In the past two decades, efforts have been made to implement image-based
technology in crack detection methods. Tsao et al.(1994), Kaseko et al. (1994) and
Wang et al. (1998) used image processing to detect defects in pavements. Siegel and
Gunatilake (1998) developed a remote visual crack inspection system of aircraft
surfaces using wavelet transformation features and a neural network classifier.
Nieniewski et al. (1999) developed a visual system that could detect cracks in ferrites.
Moselhi and Shehab-Eldeen (2000) used image analysis techniques and neural
networks to automatically detect and classify defects in sewer pipes.
Chae (2001) proposed a system consisting of image processing techniques along with
neural networks and fuzzy logic systems for automatic defect (including cracks)
detection in sewer pipes. Benning et al. (2003) used photogrammetry to measure the
deformations of reinforced concrete structures and monitor the evolution of cracks.
Abdel-Qader et al. (2003) analyzed the efficacy of different edge detection techniques
in the identification of cracks in concrete pavements of bridges. Abas and Martinez
(2003) used a morphological top-hat operator and a fuzzy k-means technique to detect
cracks in paintings.
Recently, Fujita and Hamamoto (2009) proposed a crack detection method in noisy
concrete surfaces using probabilistic relaxation and a locally adaptive thresholding.
Jahanshahi et al. (2009) surveyed and evaluated several crack detection techniques in
conjunction with realistic infrastructure components.
In all of the above studies, many important parameters (e.g., camera-object distance)
are not considered or assumed to be constant. In practical circumstances, the image
acquisition system often cannot maintain a constant focal length, resolution, or
distance to the object under inspection. In the case of nuclear power plants, for
instance, the image acquisition system needs to be located a significant distance from
the reactor site. To detect cracks of a specific thickness, many of the parameters in
these algorithms need to be adaptive to the 3D structure of a scene and the attributes
of the image acquisition system; however, no such study has been reported in the
open literature. The proposed approach in this study gives a robotic inspection system
the ability to detect cracks in images captured from any distance to the object, with
any focal length or resolution.
2. CRACK DETECTION
An adaptive crack detection procedure is proposed in this study. The system is
adaptive in that, based on the image acquisition specifications (camera-object
distance, focal length, and image resolution), it automatically adjusts its parameters to
detect cracks of interest. Figure 1 shows the overall scheme of the proposed system.
The main elements of the proposed crack detection procedure are segmentation,
feature extraction, and decision making. Note that before processing any image,
preprocessing approaches can be used to enhance the image [30].
2.1 Segmentation
Segmentation is a set of steps that isolates the patterns that can potentially be classified
as a defined defect. The aim of segmentation is to reduce extraneous data about
patterns whose classes are not of interest. Several segmentation techniques
have been evaluated by the authors previously (Jahanshahi et al. 2009), and it has
been concluded that a proposed morphological operation by Salembier (1990) works
best for crack detection purposes in components that are typically encountered in civil
infrastructure systems.
where I is the grayscale image, S is the structuring element that defines which
neighboring pixels are included in the operation, ‘∘’ is the morphological opening,
and ‘•’ is the morphological closing. The output image T is then binarized using
Otsu's thresholding method (Otsu 1979) to segment potential crack-like dark regions
from the rest of the image. This nonlinear filter extracts the whole crack as opposed to
edge detection approaches where just the edges are segmented.
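Equation (1) itself is not reproduced here. As a simplified one-dimensional illustration of why a grayscale closing isolates dark, crack-like regions (the paper's actual operator uses oriented two-dimensional openings and closings), consider the following sketch:

```python
def dilate(signal, size):
    """Grayscale dilation with a flat structuring element of odd size."""
    r = size // 2
    n = len(signal)
    return [max(signal[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def erode(signal, size):
    """Grayscale erosion with a flat structuring element of odd size."""
    r = size // 2
    n = len(signal)
    return [min(signal[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def closing_residual(signal, size):
    """Closing (dilation then erosion) fills dark valleys narrower than
    the structuring element; the residual (closing - original) is large
    exactly on those crack-like valleys, and the whole valley is kept
    rather than just its edges."""
    closed = erode(dilate(signal, size), size)
    return [c - s for c, s in zip(closed, signal)]
```

For a bright background with a one-pixel-wide dark dip, the residual is zero everywhere except at the dip, which is recovered in full; thresholding such a residual (e.g., with Otsu's method) then yields the segmented crack candidates.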
then this object can be segmented by the operation in Eq. (1). Consequently, linear
structuring elements are defined in the 0°, 45°, 90°, and 135° orientations. The challenge
is to find the appropriate size for the structuring element.
Using a simple pinhole camera model, the relation between the structuring element
size and the image acquisition parameters is shown below:

S = ⌈ (FL / WD) × (SR / SS) × CS ⌉,  (2)

where S (pixels) is the structuring element size, FL (mm) is the camera focal length,
WD (mm) is the working distance (camera-object distance), SR (pixels) is the camera
sensor resolution, SS (mm) is the camera sensor size, CS (mm) is the crack thickness,
and ⌈·⌉ is the ceiling function.
Given the working distance, the derived formula in Eq. (2) yields the appropriate
structuring element size for the crack thickness of interest. Figure 2 shows
the geometric relationship between the image acquisition parameters for a simple
pinhole camera model.
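Equation (2) translates directly into code; a minimal sketch (the function name and the parameter values in the test are illustrative):

```python
import math

def structuring_element_size(fl_mm, wd_mm, sr_px, ss_mm, cs_mm):
    """Eq. (2): number of pixels spanned by a crack of thickness CS at
    working distance WD, for a pinhole camera with focal length FL,
    sensor resolution SR, and sensor size SS."""
    return math.ceil((fl_mm / wd_mm) * (sr_px / ss_mm) * cs_mm)
```

For example, a 50 mm lens at 1 m from the surface, with a 4000-pixel sensor 25 mm wide, maps a 1 mm crack onto 8 pixels, so an 8-pixel structuring element is selected.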
Figure 1: The geometric relation between image acquisition parameters of a
simple pinhole camera model.
segmented object), (4) absolute value of the correlation coefficient (here, correlation
is defined as the relationship between the horizontal and vertical pixel coordinates),
and (5) compactness (the ratio between the square root of the extracted area and its
perimeter). The convex hull for a segmented object is defined as the smallest convex
polygon that can contain the object. The above features are computed for each
segmented pattern.
2.3 Classification
In this study, a feature set consisting of 1,910 non-crack feature vectors and 3,961
synthetic crack feature vectors was generated to train and evaluate the classifiers.
About 60% of this set was used for training, while the remaining feature vectors were
used for validation and testing. Note that due to the lack of access to a large number
of real cracks, randomized synthetic cracks were generated to augment the training
database. For this reason, real cracks were manually segmented and an algorithm was
developed to randomly generate cracks from them. The non-crack feature vectors
were extracted from actual scenes. The performance of several SVM and NN
classifiers was evaluated. Eventually, a SVM with a 3rd order polynomial kernel and a
3-layer feedforward NN with 10 neurons in the hidden layer and 2 output neurons
were used for classification. A nearest-neighbor classifier was used to evaluate the
performance of the above classifiers.
J_m(u, v) = 1 if C_k(u, v) = 1 for some k ∈ [S_min, m], and J_m(u, v) = 0 otherwise,
where J_m is the crack map at scale (i.e., structuring element size) m, S_min is the
minimum structuring element size, C_k is the binary crack image obtained by using k
as the structuring element size, and u and v are the pixel coordinates of the crack map image.
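The multi-scale crack map is thus a pixel-wise OR of the binary crack images over the range of structuring element sizes; a minimal sketch (the dictionary-of-2D-lists layout is illustrative):

```python
def crack_map(binary_maps, s_min, m):
    """Crack map at scale m: J_m(u, v) = 1 if C_k(u, v) = 1 for any
    structuring element size k in [s_min, m]; binary_maps maps each
    size k to its binary crack image (a 2D list of 0/1 values)."""
    first = next(iter(binary_maps.values()))
    rows, cols = len(first), len(first[0])
    return [[1 if any(binary_maps[k][u][v] == 1
                      for k in binary_maps if s_min <= k <= m)
             else 0
             for v in range(cols)]
            for u in range(rows)]
```

Increasing m folds in cracks detectable only at coarser scales, which is how the map represents cracks of different thicknesses simultaneously.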
3. EXPERIMENTAL RESULTS AND DISCUSSION
In order to evaluate the overall performance of the proposed crack detection
algorithm, a test set consisting of 220 real concrete crack images and 200 non-crack
images was used. Table 2 summarizes the performance of the detection system for real
patterns. The performance of the NN-based system is slightly better than that of the
SVM-based one, so the former is used for the rest of the experiments in this study.
The minimum length of the detected cracks was set to 10 mm.
Table 2: The overall performance of the proposed system using real data
Classifier               Accuracy (%)   Precision (%)   Sensitivity (%)   Specificity (%)
Neural Network           79.5           78.4            84.1              74.5
Support Vector Machine   78.3           76.8            84.1              72.0
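The four metrics in Table 2 follow the standard confusion-matrix definitions; a minimal helper (the counts used in the test are back-calculated from the table's Neural Network row and the 220/200 test-set sizes, not reported in the paper):

```python
def metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics, as reported in Table 2."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
    }
```

With 220 crack and 200 non-crack test images, counts of tp = 185, fp = 51, tn = 149, fn = 35 reproduce the Neural Network row (79.5 / 78.4 / 84.1 / 74.5 %) to one decimal place.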
Figure 2 shows the detected cracks in a concrete beam under flexural stress. Each red
box indicates the borders of a detected crack. As can be seen, the system was able to
detect almost all cracks. In this figure, there are also a few false positives, mainly the
handwriting on the concrete. Note that there are several edges and objects in Figure
2(c), yet the proposed system correctly detected the real cracks. Structuring element
sizes of 4 to 22 pixels were used to extract the cracks in these images. The images are
2 megapixels, and processing each image took 74 seconds on average on an AMD
Athlon II X4 (2.6 GHz) processor.
Figure 2: Detected cracks in concrete beams under flexural stress. Each detected
crack is surrounded by a red box.
4. SUMMARY
Current visual inspection of civil structures, which is the predominant inspection
method, is highly qualitative. An inspector has to visually assess the condition of a
structure. If a region is inaccessible, an inspector uses binoculars to detect and
characterize defects. There is an urgent need for developing autonomous quantitative
approaches in this field. In this study, a novel adaptive crack detection procedure is
introduced. A morphological crack segmentation operator is introduced to extract
crack-like patterns. The structuring element parameter for this operator is
automatically adjusted based on the camera focal length, object-camera distance,
camera resolution, camera sensor size, and the desired crack thickness. Appropriate
features are extracted and selected for each segmented pattern using the LDA
approach. The performances of a NN, a SVM, and a nearest-neighbor classifier are
evaluated to classify cracks from non-crack patterns. A multi-scale crack map is
obtained to represent the detected cracks. The authors are developing an autonomous
crack quantification approach based on the obtained crack map from this approach.
5. ACKNOWLEDGEMENTS
This study was supported in part by grants from the National Science Foundation.
REFERENCES
F. S. Abas and K. Martinez, “Classification of painting cracks for content-based
analysis,” Proceedings of the SPIE - The International Society for Optical
Engineering, vol. 5011, pp. 149–160, January 2003, Santa Clara, CA, USA.
I. Abdel-Qader, O. Abudayyeh, and M. E. Kelly, “Analysis of edge-detection
techniques for crack identification in bridges,” Journal of Computing in Civil
Engineering, vol. 17, no. 4, pp. 255–263, October 2003.
I. Abdel-Qader, S. Pashaie-Rad, O. Abudayyeh, and S. Yehia, “PCA-based algorithm
for unsupervised bridge crack detection,” Advances in Engineering Software,
vol. 37, no. 12, pp. 771–778, December 2006.
W. Benning, S. Görtz, J. Lange, R. Schwermann, and R. Chudoba, “Development of
an algorithm for automatic analysis of deformation of reinforced concrete
structures using photogrammetry,” VDI Berichte, no. 1757, pp. 411–418, 2003.
M. J. Chae, “Automated interpretation and assessment of sewer pipeline,” Ph.D.
dissertation, Purdue University, December 2001.
L.-C. Chen, Y.-C. Shao, H.-H. Jan, C.-W. Huang, and Y.-M. Tien, “Measuring
system for cracks in concrete using multitemporal images,” Journal of
Surveying Engineering, vol. 132, no. 2, pp. 77–82, May 2006.
R. A. Fisher, “The use of multiple measurements in taxonomic problems”, Annals of
Eugenics 7 (1936) 179-188.
Y. Fujita and Y. Hamamoto, “A robust method for automatically detecting cracks on
noisy concrete surfaces,” Next-Generation Applied Intelligence. Twenty-
second International Conference on Industrial, Engineering and Other
Applications of Applied Intelligent Systems IEA/AIE 2009, pp. 76–85, June
2009, Tainan, Taiwan.
I. Giakoumis, N. Nikolaidis, and I. Pitas, “Digital image processing techniques for the
ABSTRACT
To minimize the total cost of earthwork, a number of linear and integer programming
techniques have been used, considering the various factors involved in this process.
These techniques often ensure a global optimum solution for the problem. However,
they require sophisticated formulations and are computationally expensive. Therefore,
these techniques are of limited use in industry practice.
Mass-Haul Diagrams (MD) have been an essential tool for planning earthwork
construction for many applications. One of the most common heuristics that is used
widely by practicing engineers in this field to balance the MD is the “Shortest-Haul-
First” strategy. Using this heuristic in balancing the MD is usually carried out either
graphically on drawings or manually by computing values from the Mass-Haul Diagram
itself. However, performing this approach graphically or manually is fairly tedious and
time-consuming, and both approaches are prone to errors. A robust algorithm that can
automatically balance the MD is therefore needed.
This research presents a formal definition of an algorithm that uses a sequential
pruning technique for computing balances of Mass-Haul Diagrams automatically. It
shows that the new algorithm is more efficient than existing integer programming
techniques, as it runs in O(log n) time in most cases.
INTRODUCTION
The primary use of MD is to determine the points where the cuts and fills are balanced
out as well as planning for haul routes and distances. Mathematical programming
models of earthwork allocations have been formulated aiming at minimizing the total
earthwork costs considering various technological, physical and operational constraints
(Akay 2004; Zhang and Wright 2004; Shahram et al. 2007). These models usually solve
the optimization problem using Linear or Integer Programming (LP and IP) Techniques.
Although these techniques ensure a global optimum solution for the problem, they
require sophisticated formulations for their setup and definition. Therefore, these
techniques are of limited use in practice. On the other hand, one of the
most common strategies used by practicing engineers in this field to balance the MD is
the “Shortest-Haul-First” strategy. There are two ways for determining a balance for an
dx = −yi (xi+1 − xi) / (yi+1 − yi)  (1)

Therefore,

xz = xi + dx  (2)
The new zero points are added to the original vector P in their respective locations in
relation to the other points. Also a new vector Z is created, which carries these new
points where y value equals zero. The points stored in the Z vector shall be sorted in an
ascending order with respect to the x value. Before we start processing the diagram, the
original vector P is divided into two sub vectors; one carries the points with Mass
Diagram positive value points Ppos, and the other carries the negative value points
Pneg, i.e. two new diagrams are created, one above the primary balance line and the
other below it as in figure 3. In this step also, the trough points are stored in a new
vector Inv. For each trough, the preceding and succeeding y values are greater than the
y value of the trough itself, i.e. yi-1>yi and yi+1 >yi. As a result, each point that fulfills
this condition will be stored in the Inv vector and as we did for the Z vector, the Inv
vector’s points will be sorted in an ascending order with respect to the x value.
It is important to note that when scanning the P vector for zero points, a value (±ε)
that is slightly larger or smaller than zero can be considered, i.e. a volume of soil that is
small enough that the earthwork planner can consider it negligible. This is important
both practically and computationally, and it varies from one project to another, as it
depends on the type of soil, the cost of moving the soil, and the level of accuracy
needed for the project.
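The zero-point interpolation (Eqs. 1 and 2) and the trough scan can be sketched as follows; the list-of-(x, y) representation and the ε handling are illustrative:

```python
def zero_points(points, eps=0.0):
    """Interpolated zero crossings of the mass diagram (vector Z);
    points is a list of (x, y). Values within +/-eps of zero are
    treated as negligible volume."""
    zeros = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if (y0 > eps and y1 < -eps) or (y0 < -eps and y1 > eps):
            dx = -y0 * (x1 - x0) / (y1 - y0)   # linear interpolation
            zeros.append((x0 + dx, 0.0))
    return zeros

def troughs(points):
    """Trough points (vector Inv): y[i-1] > y[i] and y[i+1] > y[i]."""
    return [points[i] for i in range(1, len(points) - 1)
            if points[i - 1][1] > points[i][1] < points[i + 1][1]]
```

Both outputs come out sorted by x automatically, because the input stations are processed in order along the alignment.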
Getting the Bell balances
Starting with the first point in the Inv vector, we check, for i, j = 0, whether
x_j^Inv > x_i^Z; if this condition is true, we proceed to check whether
x_i^Z < x_j^Inv < x_(i+1)^Z. These two conditions are necessary to identify bell
balances. For each bell balance, we store the values [Xstrt, Xmax, Xend, Ymax] as
shown in figures 3 and 4. Once these bell balances have been stored, they can be
pruned from the diagram (i.e., the points between Xstrt and Xend are removed from P).
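One plausible reading of the bell test above, that a segment between two consecutive zero points containing no trough is a single "bell", can be sketched as follows; the data layout and function name are illustrative, not the paper's implementation:

```python
def bell_balances(points, zeros, trough_pts):
    """Identify bell balances: for each pair of consecutive zero-point
    x-values, if no trough lies strictly between them, store
    [Xstrt, Xmax, Xend, Ymax] and prune the interior points from P."""
    bells, remaining = [], list(points)
    for z0, z1 in zip(zeros, zeros[1:]):
        inside = [p for p in points if z0 < p[0] < z1]
        if inside and not any(z0 < t[0] < z1 for t in trough_pts):
            xmax, ymax = max(inside, key=lambda p: abs(p[1]))
            bells.append([z0, xmax, z1, ymax])
            remaining = [p for p in remaining if not (z0 < p[0] < z1)]
    return bells, remaining
```

Pruning the stored bells leaves only the segments that still contain troughs, which are then handled by the auxiliary (trapezoidal) balance lines described next.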
Getting trapezoidal balances
After trimming the diagram, auxiliary balance lines are drawn to divide the diagram
into more balances, as shown in figures 4 and 5. This is done by assigning the minimum
value in the vector Inv to a dynamic pointer, Inv_min = y_k^Inv, where k is a counter
over the values in the Inv vector. Inv_min represents the height at which the first auxiliary balance
line should be drawn. To define this trapezoidal balance, two extra points are needed,
which are determined from the intersection of the auxiliary balance line and the
boundaries of the Mass Diagram after and before Invmin. Figure 5 shows how this
interpolation is accomplished. In figure 5 y@cut represents the first point before Invmin.
To interpolate, we scan the P vector for the condition yi < y@cut < yi+1 and then
interpolate between yi and yi+1 to get x2cut, which can be calculated by
the equation:
dx = (y@cut − yi) (xi+1 − xi) / (yi+1 − yi)  (3)

x2cut = xi + dx  (4)
The same will be done to get x2fill. The trapezoidal balance is stored by its parameters
[X1cut, X2cut, X1fill, X2fill, Y_k^Inv].
Processing the negative segments of the mass diagram and open-ended diagrams
As mentioned earlier, the P vector is divided into two sub vectors; Ppos and Pneg.
Processing the negative segments of the diagram will not be much different than
processing the positive ones. Instead of dealing with the negative y values of the points
in the Pneg vector which may require some calculations to be modified, the Pneg
vector will be mirrored and processed exactly as the Ppos vector. In several cases the
soil along the alignment will not balance for the earthwork; this will appear as open
ends. These open ends appear in our approach as values remaining in the P vector
(either Ppos or Pneg) after the last forward pass. Finally, the output of this program will be in
the form of a series of sequential balances along the MD, e.g. for Bell balance: “Cut
from stations (Xstart) to (Xpeak) will fill in stations from (Xpeak) to (Xend)” and for
Trapezoidal balance: “Cut from stations (X1 cut) to (X2 cut) will fill in stations from (X2
fill) to (X1 fill)” as shown in figure 6.
COMPUTATIONAL EXPERIMENT
A case study has also been used to test this approach. A rural four-lane highway in Upper
Egypt was chosen. A Mass Diagram was created for the road, given the existing
contours and the designed vertical curves and horizontal alignments. Commercial
software (Civil 3D) was used to calculate the cut and fill volumes for the road and,
hence, to plot the Mass Diagram. The road was composed of 327 stations along the
alignment with varying cuts and fills sections. A total of 81 different balances were
identified along the length of the project. Therefore, the output of this Mass Diagram
was a set of 81 different segments that will balance the cuts and fills. These balances
were then analyzed further. Thus, it is possible to also calculate the average hauling
distance as well as the average volume (or mass in tons) of soil moved between each
station. The product of these two values represents the average ton-meter of work to be
done. In the preceding example, the average hauling distance was 112 meters with an
average volume of 209 cubic meters. It is also possible to incorporate swell and
shrinkage factors as well as limiting the economical hauling distance by specifying a
window for the algorithm similar to the concept of free-haul in some commercial
software.
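A hedged sketch of how swell/shrinkage factors and a free-haul window could be layered on top of the balance output (Python; the factor values and the free-haul distance are illustrative assumptions, not values from the case study):

```python
def adjusted_volume(bank_volume, swell_factor=1.25, shrinkage_factor=0.90):
    """Convert a bank (in-place) volume to loose and compacted volumes.

    The default factors are illustrative only; actual values depend on the
    soil type for the project at hand.
    """
    loose = bank_volume * swell_factor          # volume after excavation
    compacted = bank_volume * shrinkage_factor  # volume after compaction
    return loose, compacted

def within_freehaul(x_cut, x_fill, freehaul_distance=300.0):
    """True if the haul between a cut and a fill station falls inside the
    economical (free-haul) window, mirroring the commercial-software concept."""
    return abs(x_fill - x_cut) <= freehaul_distance

loose, compacted = adjusted_volume(100.0)
print(loose, compacted)               # -> 125.0 90.0
print(within_freehaul(0.0, 250.0))    # -> True
```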
CONCLUSION
The algorithm employs a sequential pruning process, where the MD is balanced by
cutting it down into simpler closed balanced shapes. While computational
experiments on LP and IP solutions of the problem reported polynomial complexity of
the heuristic procedure and exponential worst-case complexity of traditional
enumerative methods, the algorithm presented in this research runs in O(log n) time.
The proposed algorithm is able to balance MDs more efficiently than other methods in
the literature. This means that it can help construction planners deal with long
alignments and projects that extend for any number of stations.
REFERENCES
Figure 3. Types of balances and Dividing P into positive and negative values vectors algorithm
Figure 4. Segments that are balanced over the primary balance line. As there are no troughs
between xz1 and xz2, the circled segments belong to this group of segments
Figure 5. Defining and storing the trapezoidal balance and shifting down the MD algorithm
Figure 6. The result as it appears in the program developed to test the algorithm
//Preprocessing
For (i = 0; i <= Lp; i++)
    If (yi > 0 and yi+1 < 0) or (yi < 0 and yi+1 > 0) then
        interpolate to get y = 0 and the corresponding station (x);
        add the point to P and to Z
    ElseIf (yi < yi+1) and (yi < yi-1) and (yi != 0) then
        store the point in the troughs array Inv
Sort the inverts array Inv in ascending order by (x) value
//Intercepts
For (i = 0; i <= Lp; i++)
    If yi == ±ε then store the point in the zero array Z
Sort the zero array Z
ABSTRACT
The BIM standard establishes standard definitions for building information exchanges to
support critical business contexts using standard semantics and ontologies. This Standard
forms the foundation for accurate and efficient communication and commerce that are needed
by the construction industry. The Standard is still in its infancy and the evolution and
maturity of BIM Standard will depend largely on the efforts and contribution of various
disciplines involved in design, construction, and management of a facility. This paper
focuses on advancing the standardization of BIM models for structural analysis and design.
Specifically, the paper addresses the Information Delivery Manual (IDM), which
aims to provide an integrated reference for the process and data required by BIM by identifying
the discrete processes undertaken within structural design, the information required for their
execution and the results of that activity. Furthermore, it will address Model View
Definitions (MVDs) for structural design and analysis to create a robust process for seamless,
efficient, reproducible exchange of accurate and reliable structural information that is widely
and routinely acknowledged by the industry.
INTRODUCTION
Specifically, the BIM Standard recognizes that a BIM requires a disciplined and transparent
data structure which supports the following:
A specific business case that includes an exchange of building information.
The users’ view of data that is necessary to support the business case.
The digital exchange mechanism for the required information interchanges (software
interoperability).
This combination of content selected to support user need and described to support open
digital exchange are the basis of information exchanges in the NBIM Standard. All these
levels must be coordinated for interoperability and this is the focus of the NBIMS Initiative.
Therefore, in a nutshell, the primary drivers for defining requirements for the BIM Standard are
industry standard processes and associated information exchange requirements.
In addition, even as the BIM Standard is focused on open and interoperable information
exchanges, the BIM Standard Initiative addresses all related business functioning aspects of
the facility lifecycle. BIM Standard is chartered as a partner and an enabler for all
organizations engaged in the exchange of information throughout the facility lifecycle.
The success of a Building Information Model lies in its ability to encapsulate, organize,
relate, and deliver information for both users and machines in a simple, readable format. These
relationships must be at the detail levels relating, for example, a door to its frame or even a
nut to a bolt, but maintain relationships from a detailed level to a world view. When working
with as large a universe of materials as exists in the built environment there are many
traditional, vertical integration points that must be crossed and many different “languages”
that must be understood and related. Architects and engineers, as well as the real estate
appraiser or insurer must be able to speak the same language and refer to items in the same
terms as the first responder in an emergency situation. This also carries to the world view of
being able to translate to other international languages in order to support the multinational
corporation. In order to standardize these many options and produce a comprehensive viable
Standard, all organizations have to be represented and solicited for input.
One of the primary roles of BIM Standard is to set the ontology and associated common
language that will allow information to be machine readable between team members and
eventually provide direction and, add quality control to what is produced and called a BIM
model. Ultimately, these boundaries will encompass everyone who interacts with the built
and natural environments. In order for this to occur, the team members who share
information must be able to map to the same terminology. Common ontologies will allow
this communication to occur.
The recommended process for generating a NBIMS specification and implementation is
described in NBIMS, Vol. 1, Section 5 (NIBS 2007). The core components of NBIMS (see
figure 1) include the Information Delivery Manual (IDM), and Model View Definition
(MVD).
connections, foundations and boundary conditions, loading conditions, and other MEP
related information.
The output results of structural analysis and design may include the assessment of the
building’s deformation and strength for compliance with regulations and targets, overall
estimate of the safety level by the building, and estimate of the quantities of structural
materials used.
The structural IDM is the document that describes the processes and requirements to set
up BIM models for structural analysis and design purposes. It focuses on the relationship of
processes and data. Structural designers are currently faced with the fact that BIM software
tools do not allow for full interoperability with their structural analysis and design software
and in addition they get upgraded quite frequently with new features. Until some of these
features are added, however, the designer has to use “workarounds” to get the paper
documentation to communicate design intent. The important issue here is to define the level
of detail desired for the modeling process. The structural IDM provides the foundation for
standardized data exchange. The main objectives of the IDM include:
i. Define the processes within the structural design project lifecycle for which
engineers require information exchange.
ii. Describe the results of process execution that can be used in subsequent processes.
iii. Identify the actors sending and receiving information within the process.
iv. Make certain that definitions, specifications and descriptions are provided in a form
that is useful and easily understood by the target group.
The IDM development has two main phases: one is the process map detailing the end
user processes and information exchange between end users, as shown earlier in Figure 3.
The other component is the list of exchange requirements. The development of IDM begins
with definitions of the data exchange functional requirements and workflow scenarios for
exchanges between architects, engineers, manufacturers, erectors, and general contractors
utilizing the ‘use case’ concept. A use case defines an exchange scenario between two well
defined roles for a specific purpose, within a specified phase of a building’s life cycle
(Eastman et al., 2010). It is generally composed of more detailed processes and is embedded
in a more aggregate process context. Most of the use cases are parts of larger collaborations,
where multiple use cases provide a network of collaboration links with other disciplines.
Such a composition of use cases is referred to as a process map.
The process map was created using the Business Process Modeling Notation (BPMN)
(www.bpmn.org), since the notation is adopted by buildingSMART and the National Institute
of Building Sciences (NIBS). Horizontal swim lanes are used for the major processes. Main
activity phases of typical structural analysis and design are identified along with their
relationship to sub processes (Figure 3). In addition to the standard BPMN notation the IDM
utilizes notation for information exchanges between activities called Exchange Models (see
Figure 4). The Exchange model requirement presents a link between process and data. It
applies the relevant information defined within an information model to fulfill the
requirements of an information exchange between two processes at a particular stage of the
project. Each exchange model is uniquely identified across all use cases and, besides its
name, carries an abbreviated designation of the use case it belongs to:
AS_EM01 - Architectural, Structural Concept use case exchange models.
AS_EM02 - Structural concept, Structural Analysis use case exchange models.
DD_EM01 - Preliminary Structural Analysis use case exchange models.
DD_EM02 - Structural Analysis, Reinforced Concrete Design use case exchange models.
DD_EM03 - Structural Analysis, Structural Steel Design use case exchange models.
DD_EM04 - Structural Analysis, Structural Wood Design use case exchange models.
DD_EM05 - Structural Analysis, Other Structural Design use case exchange models.
The scope of the exchange requirement is the exchange of information about structural
elements and systems. Each of the exchange models described above contains a wide range
of information.
The model view definitions provide the framework that the software developers use to
define the IFC exchange format. It focuses on the relationship of application and data. The
process of developing the MVDs begins, as indicated earlier, with defining the IDM and its
exchange requirements by specifically identifying the object attributes to be exchanged and
how they will be used, both for users and for developers. For the case of AS_EM01 and
AS_EM02, the list of entities includes Story, Grid, Column, Beam, Brace, Wall, Slab,
Footing and Pile.
The IFC schema contains a wide range of datasets as it covers the whole lifecycle of a
building and its environment. Software products should only deal with a subset of the full
IFC schema to avoid processing an overwhelming amount of data. Therefore a model view
definition focuses on defining model subsets that are relevant for the data exchange between
specific application types. The goal is that software implementers only need to focus on the
parts of the IFC schema relevant to them.
The MVD structure consists of a number of levels. At the first level is a list of entities
that are relevant for the data exchange. Each entity is listed under a group such as “spatial
structure” or “architectural systems”.
At the second level is a list of concepts associated with a particular entity. These concepts
include basic information such as the name and description of the entity as well as specific
characterization related to the entity. Figure 5 shows the building story entity to illustrate
some of its associated concepts, which include spatial composition, placement and
geometric representation. Figure 6 expands the wall entity to illustrate further details about
the wall exchange requirements.
Finally, at the last level, is a list of implementer’s agreements associated with a particular
concept. Since IFC does not provide detailed information about how it should be used in
specific cases because of its wide scope and inclusive nature, making such decisions about
the use of IFC has been left to IFC implementers. These decisions are called implementer’s
agreements and they are documented as part of MVDs.
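The three MVD levels can be pictured with a minimal, hypothetical data structure (Python; the group, concept, and agreement texts below are illustrative assumptions, while IfcBuildingStorey is an actual IFC entity name):

```python
# Level 1: entity groups and entities; level 2: concepts per entity;
# level 3: implementer's agreements attached to a concept.
mvd = {
    "spatial structure": {                        # entity group
        "IfcBuildingStorey": {                    # entity
            "concepts": {                         # concepts for the entity
                "spatial composition": [],
                "placement": [],
                "geometric representation": [     # agreements for concept
                    "storey elevation taken from local placement",
                ],
            }
        }
    }
}

agreements = mvd["spatial structure"]["IfcBuildingStorey"]["concepts"][
    "geometric representation"]
print(agreements)
```

Such a nesting mirrors how an MVD narrows the full IFC schema to the subset a software implementer must support.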
After the development and implementation of the IDM and MVD, testing and validation
must be performed to verify that the baseline for measuring structural information
modeling and exchange capabilities is met. The process includes establishing test cases
along with a description of the test criteria against which the results are validated, a
realization of the same test model in (at least) two structural modeling applications, and
a matrix of success/failure descriptions for import/export into other software
applications. These topics, and others dealing with establishing industry test cases and
guidance for conformance testing and interoperability testing, will be addressed in the
next phase of the research.
CONCLUSIONS
The NBIMS is, by design, a standard of standards, i.e. it is based upon other standards,
mainly, IAI, IFC, and Omniclass. The NBIMS strives to establish an IFC building model schema
that provides the basis for achieving full interoperability within and across different AEC
trades.
This paper presents an initial effort to standardize structural BIM using NBIMS generic
approach. NBIMS defines a minimum standard providing a baseline against which
additional, developing information exchange requirements may be layered. At the time of
writing, no completed content volumes of the NBIMS had been published and the
applicability of the generic process was thus not fully tested. The research offers detailed
guidelines for the development of structural BIM standards following the generic NBIMS
approach.
The basic process for developing structural BIM commences with the development of
functional specifications or exchange requirements defined by end users in an IDM. These
are then mapped to MVDs by software developers to establish a neutral IFC model schema.
Theoretically, a direct mapping should exist between the IDM, the MVD, and the IFC
schema where the IDM provides a list of information that must appear in the IFC schema and
the MVD provides the guideline specifying how the information must appear in the IFC
schema. The IDM and the MVD are generally supposed to be complementary of each other.
The paper presented examples illustrating these steps for the domain of structural analysis
and design.
The research attempted to develop a preliminary version of a structural BIM standard and
to bridge NBIMS implementation from theory into practice in a way that provides goals for
the best method to manage structural building information in an efficient, integrated approach.
REFERENCES
AIA Document E201-2007 (2007). “Digital Data Protocol”, American Institute of Architects.
ATC-75, Applied Technology Council (ATC), Project ATC-75: “Development of Industry Foundation
Classes (IFCs) for Structural Components”, http://www.atcouncil.org/Projects/atc-75-project.html
(Nov. 2010)
American Society of Civil Engineers Structural Engineering Institute/Council of American Structural
Engineers Joint Committee on Building Information Modeling, http://www.seibim.org
Eastman, C. M.; Jeong, Y.-S.; Sacks, R.; Kaner, I.(2010). “Exchange Model and Exchange Object Concepts
for Implementation of National BIM Standards”. Journal of Computing in Civil Engineering,
Jan/Feb2010, Vol. 24 Issue 1, 25-34.
Froese T (2003): Future directions for IFC-based interoperability, ITcon Vol. 8, pg. 231-246.
International Alliance for Interoperability (IAI), buildingSMART International, http://www.iai-
international.org. (Nov. 2010)
Industry Foundation Classes (IFC), http://www.iai-tech.org (publication of the IFC specification). (Nov.
2010)
Omniclass (2006). Omniclass: A strategy for classifying the built environment, introduction, and user
guide, 1.0 edition, Construction Specification Institute, Arlington, Va.,
http://www.omniclass.org/.
Nawari, N. and Sgambelluri, M. (2010). “The Role of National BIM Standard in Structural Design”, The
2010 Structures Congress joint with the North American Steel Construction Conference in
Orlando, Florida, May 12-15, 2010, pp. 1660-1671.
NIBS (2007): NBIMS (National Building Information Modeling Standard), Version 1, Part 1: “Overview,
Principles, and Methodologies”, National Institute of Building Sciences.
http://www.nationalcadstandard.org/ (Nov. 2010).
Collaborative Design of Parametric Sustainable Architecture
J.C. Hubers1
1Hyperbody, Faculty of Architecture, Delft University of Technology, P.O. Box 5,
2600 AA Delft, j.c.hubers@tudelft.nl.
ABSTRACT
INTRODUCTION
Also around 1985 the Biosphere 2 was realised in the U.S. as a closed
ecological system. Fifteen years later the Eden project was realised in England.
Governments started to fund research into sustainable architecture at the end of
the last century. Demonstration projects were funded and alternatives were stimulated
with grants. The report of MIT to the Club of Rome in 1972 about the limits to
growth had a big impact. The Brundtland report in 1987 introduced the concept of
sustainability. Later the triple P was added: People, Planet and Profit. Prof.
Duijvestein, one of the founding fathers of sustainability at Delft University of
Technology listed in detail the criteria for buildings under every P (SenterNovem
2009). Reuse of existing resources with as little degradation as possible (cradle to
cradle) is important (McDonough and Braungart 2002).
But it was not until former vice-president Al Gore toured the world in 2006
with the movie “An Inconvenient Truth” that people became aware
of global warming, the ozone hole, widespread land degradation and
declining biodiversity. The work of Jón Kristinsson should be mentioned,
especially his design for the Floriade 2012 (Kristinsson 2010). Of course the IPCC
reports are important references.
But it could well turn out that SynSerres in Northern countries need a
different solution than those in Southern countries. Up to 77% of direct sunlight for PV reduces
the cooling capacity by 4.
Figure 3. ETFE cushions + Tentech round wood connection + PV
concentration
Fresnel greenhouse (WUR 2010). This together with the ideas of Urgenda (2010)
and the initiative of Rotterdam (2010) to turn existing flat roofs into green roofs
led to the idea of this project. The Elkas produces 15-18 kWh/m2 per year at the
curved side of the roof by reflecting and concentrating the near-infrared radiation
of the sun onto an adaptable line of PV cells.
Besides fossil energy use, CO2 emission and the problems mentioned
before, there are other criteria that a sustainable building should meet. The
author made the list in Table 1 and used it during a test case of collaborative
design (Hubers 2008). The list was meant to be completed by the design team. It
turned out that the multidisciplinary design team only focused on a few criteria
and didn’t take the time to add other criteria or to evaluate all of them. Also other
research shows that weighted criteria evaluation is not reliable (Lawson 2006).
But maybe it is better to have a badly used list of criteria than no list at all. At
least it helps focus the discussion efficiently on the subjects about which team
members disagree (standard deviation in Table 1).
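The idea of using the standard deviation of team scores to surface disagreement can be sketched as follows (Python; the criteria and scores are invented for illustration and do not come from Table 1):

```python
from statistics import mean, stdev

def disagreement_ranking(scores):
    """Rank criteria by the standard deviation of team members' scores,
    so discussion can focus on the criteria the team disagrees about most.

    scores maps a criterion name to the list of individual scores
    (one score per team member).
    """
    ranked = sorted(scores, key=lambda c: stdev(scores[c]), reverse=True)
    return [(c, round(mean(scores[c]), 2), round(stdev(scores[c]), 2))
            for c in ranked]

votes = {"energy use": [8, 8, 7, 8], "material reuse": [9, 3, 6, 2]}
for criterion, avg, sd in disagreement_ranking(votes):
    print(criterion, avg, sd)  # highest-deviation criterion first
```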
A multidisciplinary design team needs more than a list of criteria. It needs
to have good ideas! Developing ideas and evaluating them with criteria are the
two main sub processes of design (Lawson 2006, Hubers 2008). Knowledge
sharing and management is important. We plan to use wikis, which are websites
that everybody who is authorized can easily edit; Wikipedia is a well-known example.
Creativity is needed to turn experience and knowledge into ideas. Many
techniques can be used. The use of different representations and different media is
one of them (Stellingwerff 2005, Schön 1983). The work of Edward de Bono
shows several methods like Random Word Stimulation, Analogy thinking, Brain
storming etc. (Bono 1980). The Environmental Maximisation Method of
Duijvestein is interesting though a bit laborious (Duijvestein 2002). It consists of
drawing the design only from the point of view of one function, e.g. water, green,
sun, wind, power, traffic, housing, parking etc. and later the plans are integrated.
Recent developments in ICT make it possible to share all this information in 3D
digital models. We call this Green BIM.
BIM
The pressure to use IFC-based BIM is growing. IFC is an ISO standard. A
good introduction to IFC-based BIM can be found in Khemlani (2010). Autodesk
adopted it, among others, in Autodesk Revit. Other major CAD developers in the AEC
industry support it too. The Dutch branch of the buildingSMART association, which
develops IFC, states in its newsletter that the government building services
of the U.S., Denmark, Finland, Norway and the Netherlands signed an agreement to
adopt IFC-based BIM for all major government projects (BS 2010).
Contractors have been working on this for a long time, and recently the Dutch Conceptual
Building network has started working in this direction too (CB 2010).
The conceptual building approach converts the demand market into a supply market. Providers of
concepts no longer wait for a client to define a demand, but develop complete
adaptable solutions that clients can order. It is more or less like in the car business:
lean production and mass customization.
The simulation of buildings is a vital benefit. VR systems like CAVEs and
Head Mounted Displays are used for that. Delft University of Technology
developed a lab called protoSPACE which uses these techniques (Hubers 2008).
Eastman et al. (2008) report 10 case studies of realized buildings. The 16 reported
benefits are summarised, with the number of projects that had these benefits.
Benefit 9, ‘Earlier collaboration of multiple design disciplines’, falls in the
construction execution/coordination phase and is thus not collaborative design as
we define it. Not one project had all those benefits.
Besides benefits, BIM of course also has drawbacks. It is obvious that
BIM demands considerable knowledge of 3D, 4D, 5D and nD CAD (4D is
planning, 5D is cost, nD is management, etc.). Then there are the difficulties of
author/ownership and liability of the BIM. Contracts like Design Build and
Guaranteed Maximum Price have enormous impact on the concerned professional
practices (Hardin 2009).
Recently, parametric design software has come into use. Two main groups of
parametric design software can be distinguished: object parametric and process
parametric. The problem is that only object parametric design software is
compatible with IFC (Hubers 2010).
CONCLUSION
REFERENCES
Bhalotra, A., Oosterhuis, K. A.H. Art Activities, A.J. Alblas, J.C. Alblas and
Witteveen + Bos, (1992). City Fruitful. Ed. G. W. de Vries. 010
Publishers Rotterdam.
Bono, E. de. (1980). Lateral thinking; a textbook of creativity. Harmondsworth
Penguin
BS (2010). Newsletter April 2009 nr1 available at http://bw-dssv07.bwk.tue.nl/-
files/newsletters/nieuwsbrief-buildingsmart-22-04-2009.pdf. accessed 3-
1-2010.
CB (2010) http://www.conceptueelbouwen.nl/?mod=cbouwen&id=17&act=cb_-
english . Accessed on 9-3-2010.
Duijvestein, C. A. J. (2002). The environmental maximisation method in: T. M. d.
Jong and D. J. M. v. d. Voordt Ways to study and research urban,
architectural and technical design (Delft) Delft University Press
Eastman, C., Teichholz, P., Sacks, R. and Liston, K. (2008). BIM Handbook.
Wiley, Hoboken, New Jersey, U.S.A.
Grau, L. (2009). Sustainable district in Barcelona. In Changing roles; new roles,
new challenges, ed: H. Wamelink, M. Prins and R. Geraedts. TU Delft
Faculty of Architecture Real Estate & Housing, Delft.
Hardin, B. (2009). BIM and construction management. Sybex, Indianapolis,
Indiana, U.S.A.
Hubers, J.C. (1986). “Eindelijk een gebouw dat met alles rekening houdt: ‘Het
Ei’”. In Bouw, februari’86, pp. 10 – 14.
Hubers, J.C. (2008). Collaborative architectural design in virtual reality. PhD
diss. Faculty of Architecture of Delft University of Technology, The
Netherlands. Also available at http://www.bk.tudelft.nl/users/hubers/
internet/DissertatieHansHubers(3).pdf.
Hubers, J.C. (2010). Collaborative parametric BIM. In proceedings of the 5th
ASCAAD conference 2010, ed: A. Bennadji, B. Sidawi and R. Reffat.
Robert Gordon University, Scotland. ISBN: 987-1-907349-02-7.
IEA (2008). Energy efficiency requirements in building codes: Energy efficiency
policies for new buildings, IEA Publications.
Khemlani L. (2010). The IFC Building Model: A Look Under the Hood. In AEC-
bytes. Available at http://www.aecbytes.com/feature/2004/
IFCmodel.html.
Kristinsson, J. (2010).
http://www.kristinssonarchitecten.nl/projecten/images/9006/9006-1.jpg
Accessed 7-10-2010.
Lawson, B.R. (2006). How designers think. Architectural Press/Elsevier,
Oxford.
McDonough, W. and M. Braungart. (2002). Cradle to Cradle. North Point Press,
New York.
Rotterdam (2010). http://www.rotterdamclimateinitiative.nl/en/100_climate_proof/-
rotterdam_climate_proof/results. Accessed 10-10-2010.
Schön, D. A. (1983). The reflective practitioner; how professionals think in
action. Basic Book Inc. U.S.A.
SenterNovem (2009).
http://www.senternovem.nl/mmfiles/Position%20paper%20-
Duurzame%20Bedrijfsvoering%20Rijk_tcm24-338988.pdf Last accessed
6-11-2010.
Stellingwerff, M. C. (2005). Virtual context. Ph.D. diss. Delft University of
Technology, The Netherlands.
Urgenda (2010). http://www.urgenda.nl/visie/ Last accessed 29-9-2010.
WCED (1987). Our Common Future, Report of the World Commission on
Environment and Development, World Commission on Environment and
Development, 1987. Also available at http://www.worldinbalance.net/-
intagreements/1987-brundtland.php Last accessed 29-9-2010.
WUR (2010). http://www.glastuinbouw.wur.nl/UK/expertise/design/. Last
accessed 23-11-2010.
Developing Common Product Property Sets (SPie)
ABSTRACT
BACKGROUND
The re-use of building information models (BIM) is fraught with difficulty due to the
differences in properties, aggregations, and organization of the information in various
software systems. Recent history has even demonstrated that software system vendors
will implement unique properties for high-visibility owners. An example of this
situation is the space area measurement property required by the General Services
Administration (GSA 1996) included in design-oriented BIM products. This property,
“GSA BIM Area” appears on all users’ versions of these software systems, even if you
don’t work for that specific agency, or speak English. Imagine a world where each
owner had their own unique requirements loaded into every software system that was
required during a project life-cycle. Rather than creating a common language for
information exchange, this creates a BIM Tower of Babel. If everyone gets to have
their own “standard,” none of the parties end up being able to speak with any other
due to the complexities of the data sets being provided.
In addition to problems with object properties, the way that various software
systems aggregate and decompose their data provides another level of complexity to
creating software standards. An example of such a problem is encountered with
attempting to provide classifications of objects within commercial software systems.
None of the design software reviewed by the authors has externally modifiable
classifications that may be easily changed when switching from one client to another.
Facility Management software can be explicitly classified as those systems based on a
spatial decomposition of a facility or facilities, and those systems based on an asset
classification. Spatial facility management software allows the equipment to be placed
in space. Asset classification provides only a notional location such as within a given
building or floor. As a result, by providing information from a spatially oriented
system data set, such as a design BIM, into an asset-oriented facility management
system, the user runs the risk of losing the room numbers where the equipment is
located.
The transfer of information in a design office or project site from one system to
another is not an arbitrary act or academic consideration; it is an act that is required to
complete some clear business function. These exchanges are needed to provide
specific information at specific times in the project. Usually such exchanges are
included directly in contracts to ensure that critical exchanges are required. Examples
are daily construction reports, construction schedules, and equipment lists. Rather than
focus on some global approach to creating interoperable BIM, the approach favored by
the authors is to prepare small, contractually possible, performance-based data
exchanges (East 2009). The idea of “contracted information exchanges” also is the
heart of the standard process for the development of Industry Foundation Class (IFC)
Model View Definitions (MVD) using the Information Delivery Manual Process
(IDM) (ISO 2010).
is the technical specification of the IFC model that provides the open standard
framework for information exchange.
The delivery of BIM data for the Facility Management Handover MVD (bSi 2009) in
the United States is often referred to by the more accessible name, COBie (East
2010a). COBie is the Construction-Operation Building information exchange. COBie
is one of many information exchange, or MVD, projects currently underway through
the buildingSMART alliance (bSa 2011). COBie has been shown to eliminate the
current paper-based delivery of construction handover documents, including
installed equipment lists, submittal and shop drawing information,
commissioning, operations and maintenance, and asset management information.
Project teams may also use COBie as a platform for transforming their business
practices, since they can manage COBie data instead of paper documentation.
The Life-Cycle information exchange (LCie): LCie describes the exact format and
timing of each exchange of information during a project, such that facility
management handover information is ultimately produced simply as a report from
the building information model (East 2010b).
Technical Gaps
Business Gaps
pressure they have begun to feel from customers for the production of BIM data
models. SPie allows manufacturers to respond to these requests once, using a common
(and defensible) format, rather than creating multiple models for every user and every
software platform.
A SPie template set comprises information common to all products plus the
properties required for a specific class of product. Table 1 lists the
properties common to all products. The SPFF, ifcXML, and SpreadsheetML
transforms required to produce and consume these standard properties are available
through the free BIMServices software (AEC3 2010).
[Table 1 (excerpt): common product properties include Duration, Priors, Frequency, Resources Required, and Task Number.]
The product-class-specific properties of the initial SPie template are shown in
Table 2. Since the thermostat inherits from both an electrical device and a
controller, the default IFC properties in the template reference both property sets.
Finally, there are additional properties derived from product-specific catalog
data sheets. Using the combined information from the properties identified in Table 1
and Table 2, the template has been provided to product manufacturers for review.
Following that review, the harmonized minimum common denominator will be used to
update the templates on the WBDG ProductGuide™. One of the primary sources for
the existing templates was the effort of Specifications Consultants in
Independent Practice (SCIP), who provided an 8,500-line database organized into 425
specification sections, developed from Kalin Associates’ Master Short-Form
Specifications, 8th Edition. To coordinate the efforts of SCIP and the Construction
Specifications Institute (CSI), a Construction Engineering Research Laboratory project
paid to review the ProductGuide™ content against that of the OmniClass properties
table.
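The template structure just described, properties common to all products combined with class-specific property sets inherited from each relevant product class, can be sketched as a merge of dictionaries. All property names below are hypothetical illustrations, not the actual Table 1 or Table 2 content:

```python
# Illustrative sketch of the SPie template idea: a product template merges
# properties common to all products with property sets inherited from each
# relevant product class (here a thermostat inheriting from both an
# electrical device and a controller). All property names are hypothetical.

COMMON = {"ModelNumber": None, "Manufacturer": None, "WarrantyDuration": None}
PSET_ELECTRICAL = {"NominalVoltage_V": None, "PowerConsumption_W": None}
PSET_CONTROLLER = {"ControlRange_degC": None, "NumberOfOutputs": None}

def build_template(*property_sets):
    """Merge property sets; later sets win on (unlikely) name clashes."""
    template = {}
    for pset in property_sets:
        template.update(pset)
    return template

thermostat = build_template(COMMON, PSET_ELECTRICAL, PSET_CONTROLLER)
print(sorted(thermostat))  # all seven property names, alphabetically
```

The merge order mirrors the text: common properties first, then each inherited property set, so a class-specific value would override a generic one if the names ever collided.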
RESULTS
Results from the SPie project debuted at the 2009 NIBS Annual Conference (bSa
2009). Representatives from General Electric described the ease with which product
templates were created from their standard product catalogs and posted on their
website. A specifier using the design software Autodesk Revit and the specification
software system eSpecs demonstrated how the product template, and later specific
manufacturer information, can be linked directly into the design and specification
process. The presenter stated that while “out of the box BIM objects contain little
valuable data”, SPie property sets “allow designers to assign valuable product data
into BIM components.”
Based on this success, a meeting was held at NIBS in March 2010 to reach out
to manufacturers’ associations. Several associations are currently working with NIBS
to develop entire libraries of SPie templates from their members’ products. Chief
among these organizations is the National Electrical Manufacturers Association
(NEMA). In a December 2010 presentation, NEMA demonstrated a prototype application for
the integration of SPie information within their manufacturers’ standard Electronic
Data Interchange (EDI) application.
FUTURE WORK
Developing consensus templates across the approximately 10,000 building product
manufacturers is a daunting task. While there is increasing
demand for open BIM deliverables of product data, the authors expect that it will
take decades to fully replace the current PDF marketing catalog page with a
computable BIM model and an associated style sheet enabling human-readable
display. The authors see the development of these templates as the key work to be
accomplished, since producing manufacturer data in these formats has been
acknowledged by manufacturers themselves to be a trivial task. Given that the hard
work is reaching consensus on the content of the SPie templates, the authors
encourage members of the 400 building-product-related trade associations to assist
in organizing these discussions through the buildingSMART alliance. The national
Technical Committee of the Construction Specifications Institute (CSI) is currently
encouraging their 6,000 industry members to participate as well, with an update to the
ProductGuide expected after August 2011. Additional support for reviewing the
consensus templates is anticipated from members of Specifications Consultants in
Independent Practice.
ACKNOWLEDGEMENTS
The U.S. Army Engineer Research and Development Center, Construction Engineering
Research Laboratory, in Champaign, IL and Vicksburg, MS supported this project
under the “Life-Cycle Model for Sustainable and Mission Ready Facilities” project.
The authors would like to thank Earle Kennett and Dominique Fernandez of NIBS,
Bob Payn of DB Interactive, and Nicholas Nisbet of AEC3 for their support and work
on this project.
428 COMPUTING IN CIVIL ENGINEERING
REFERENCES
ABSTRACT
The aim of this work is to improve the integration between the
geotechnical and infrastructural design, modeling, and analysis processes. Up
to now, these three planning stages have been executed in isolation, without the
required data exchange between them. This separation leads to time-consuming and
expensive manual re-entry of geometric and semantic data. Currently, roads are
designed using the traditional approach, which is based on various 2D drawings.
The current design process focuses on the roadway itself; additional geotechnical
conditions, such as the slope angle of the dam or the position of a retaining wall,
are not considered.
To solve these problems, a new parametric, 3D-model-based approach
has been developed in the research project ForBAU – The virtual construction
site. This new approach builds on the traditional 2D infrastructure planning
process but includes a new parameterized 3D modeling concept. Open
data formats such as LandXML and GroundXML allow data integration with
both a parametric Computer-Aided Design (CAD) system and geotechnical
engineering software. An automatic update function ensures data flow without
loss of information. Use of this new approach will accelerate infrastructure
design and provide a parametric 3D-model approach that closes the gap between the
geotechnical and infrastructure planning processes. This paper provides detailed
information about this new integration concept and gives an overview of the
various implementation steps.
MOTIVATION
The next sections describe the implementation of a concept that realizes
an integrated geotechnical and infrastructural design and analysis process based
on a parametric 3D model. They give a short overview of the available parametric
3D modeling systems and data exchange formats, explains in detail the newly
developed concept and finally discusses some topological problems that arose
during the development process.
LandXML
LandXML (www.landxml.org) is a terrestrial and infrastructural extension
of the W3C XML standard and is used to exchange geo-referenced
information from the surveying and infrastructure planning processes. The
structure of a data set is defined by the LandXML schema, which is based on the
XML format (Crews et al., 2010). Thanks to its hierarchical structure and easy
extensibility, complex datasets can be defined and stored in this format. However,
semantic information on geotechnical properties, such as the cohesion parameter or
the friction angle, cannot be transferred using this format.
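To illustrate the hierarchical XML structure, the sketch below parses surface points from a minimal LandXML-like fragment. The element names are simplified for illustration (the real schema uses namespaces and richer type definitions), so treat them as assumptions rather than the exact LandXML vocabulary:

```python
# Illustrative sketch: reading geo-referenced surface points from a minimal
# LandXML-like fragment. Element and attribute names are simplified and may
# not match the official LandXML schema exactly.
import xml.etree.ElementTree as ET

fragment = """
<LandXML>
  <Surfaces>
    <Surface name="ExistingGround">
      <Pnts>
        <P id="1">4512345.10 5523456.20 312.45</P>
        <P id="2">4512350.60 5523461.80 313.10</P>
      </Pnts>
    </Surface>
  </Surfaces>
</LandXML>
"""

root = ET.fromstring(fragment)
points = {}
for p in root.iter("P"):
    # Each point carries an easting, northing, and elevation triple.
    x, y, z = (float(v) for v in p.text.split())
    points[p.get("id")] = (x, y, z)

print(points["1"])  # geo-referenced coordinate triple of point 1
```

The hierarchy (Surfaces → Surface → Pnts → P) shows how the schema nests complex datasets, while the absence of any geotechnical attribute illustrates the limitation noted above.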
GroundXML
GroundXML is an extension of the LandXML format (Obergrießer et al.,
2009). It can store all geometric and semantic information resulting from the
survey, geotechnical, and infrastructure planning processes (Figure 7). The level of
detail of the stored data depends on the progress of the planning process. In the
first step, a GroundXML file is used to transfer the geotechnical data of a 3D
subsoil model. In the next step, it stores the information regarding the
infrastructure model. Finally, the geometric and semantic infrastructure
information is used to model the cross sections in the geotechnical structural
analysis system. The major advantage of this format is that it enables a continuous
data stream for the entire planning process.
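The core GroundXML idea, attaching geotechnical semantics such as cohesion and friction angle to subsoil geometry, can be sketched as follows. The element and attribute names here are hypothetical illustrations, not the published GroundXML schema:

```python
# Sketch of the GroundXML idea: enrich a subsoil layer's geometry with
# geotechnical semantics (cohesion, friction angle). Element and attribute
# names are hypothetical illustrations, not the published schema.
import xml.etree.ElementTree as ET

layer = ET.Element("SoilLayer", name="Layer-1")

# Geometric description of the layer (top and bottom elevations).
geom = ET.SubElement(layer, "Geometry")
ET.SubElement(geom, "Top").text = "312.45"
ET.SubElement(geom, "Bottom").text = "305.00"

# Semantic geotechnical properties that plain LandXML cannot carry.
props = ET.SubElement(layer, "GeotechnicalProperties")
props.set("cohesion_kPa", "15.0")
props.set("frictionAngle_deg", "27.5")

xml_text = ET.tostring(layer, encoding="unicode")
print(xml_text)
```

Keeping geometry and soil parameters in one element tree is what allows the continuous data stream described above: the same file can feed both the CAD system and the geotechnical analysis software.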
Topological problems
During development of the parameterized 3D-model approach, some
problems were discovered. One problem concerns the topology of the
cut and dam cross sections. There are three different forms of cross section
(Figure 8). The first cross section includes only dam geometry and the second
only cut geometry; the third is a mix of a dam and a cut field. It
is easy to model a roadway that consists only of dam or only of cut cross sections,
because this merely requires extruding each cross section along the
3D space curve. An irregular cross section combining dam and cut sections,
however, cannot be modeled this way, because the left side of the roadway lies in a
dam area and the right side in a cutting field (or vice versa). The problem is the
intersection between the roadway line and the surface line: the existing
intersection point violates a cross-section extrusion rule. To solve this
problem, an advanced modeling concept has been developed, which will be
presented in future publications.
CONCLUSION
REFERENCES
Borrmann, A., Ji, Y., Wu, I.-C., Obergrießer, M., Rank, E., Klaubert, C., and
Günthner, W. (2009). “ForBAU – The virtual construction site project.” Proc. of the
24th CIB-W78 Conference on Managing IT in Construction, Istanbul, Turkey.
Crews, N. and Hall E. (2010). “LandXML Schema.” LandXML Schema Version
1.0 Reference. http://www.landxml.org/ (Dec.10, 2010).
Eastman, C., Teicholz, P., Sacks, R., Liston, K. (2008). BIM handbook: A guide to
building information modelling for owners, managers, designers, engineers,
and contractors, Wiley, New York.
Kaminski, I. (2010). Potenziale des Building Information Modeling im
Infrastrukturprojekt - Neue Methoden für einen modellbasierten Arbeitsprozess
im Schwerpunkt der Planung, Dissertation, Universität Leipzig, Leipzig.
Obergrießer, M., Ji, Y., Baumgärtel, T., Euringer, T., Borrmann, A., Rank, E.
(2009). GroundXML - An addition of alignment and subsoil specific cross-
sectional data to the LandXML scheme. Proc. of the 12th International
Conference on Civil, Structural and Environmental Engineering Computing,
Madeira, Portugal.
Obergrießer, M., Euringer, T., Horenburg, T., Günthner, W. (2011). CAD-
Modellierung im Bauwesen: Integrierte 3D-Planung von Brückenbauwerken,
In: 2. ForBAU Kongress, München.
Rebolj, D., Tibaut, A., Čuš-Babič, N., Magdič, A., and Podbreznik, P. (2008).
“Development and application of a road product model.” Automation in
Construction, 17(6), 719-728.
Aspects of Model Interaction in Mechanized Tunneling
ABSTRACT
INTRODUCTION
The research of the Collaborative Research Center SFB 837 “Model
Interaction in Mechanized Tunneling”, started at the Ruhr-University of Bochum in
2010, focuses on two main issues. First, several sub-projects are concerned
with fundamental research problems regarding specific aspects of mechanized
tunneling. These include subjects such as
● recognizing subsoil structures based on the analysis of machine data and
creating material models for destructuring subsoil behavior,
● using acoustic techniques for underground exploration,
● investigating the stability of tunnel faces,
● creating process oriented simulation models for mechanized shield driving,
including monitoring-based optimization of process work flows and
● employing methods of system identification for the adaptation of
numerical simulation models.
Second, as can be seen from the above list of highly interrelated tasks, a further focus
of research addresses the question of how the individual project models have to be
coupled and how data and ideas can be exchanged in an efficient, collaborative and
practical manner to create synergetic effects that will notably increase
productivity and creativity as a whole. In addition, it is of interest how designers,
engineers, managers, TBM operators, maintenance workers and others can
successfully collaborate during the actual construction phase, using tools and ideas
developed within the research projects.
Thus, a specific sub-project (D1) is in charge of the implementation of an
“interaction platform in mechanized tunneling”. Accordingly, this sub-project is
responsible not only for the definition of purely technological aspects of interaction,
such as specifying the type of a network protocol or other communication paradigms,
but also for the establishment of soft skills to classify the amount and type of
interaction needed. The need for collaboration was one of the important lessons
learned from a similar, preceding tunneling project (TunConstruct 2010, Lehner et al.
2007, Beer 2009). In a networked environment of cooperating researchers it is
therefore vital to find a proper balance between technological issues and subject-
specific aspects.
BACKGROUND
high level. If proper measures to guide integration are not made available in due time,
then the proper and consistent interaction between project partners can be disrupted or
even endangered.
METHODOLOGY
specific terminology and knowledge regarding system states, actions, activities and
tasks are formally defined. Subsequently, the system and its inherent interactions and
couplings are modeled, resulting in a holistic object-oriented ontology for mechanized
tunneling. This ontology, developed in the second step, contains distributed partial
models incorporating different space and time scales. In this way, dependencies are
revealed, resulting in either “strong” or “loose” coupling rules, object relations,
behavior patterns, data flows, events, and actor interconnections.
[Figure 1: Three-layer architecture of the interaction platform. Top layer: real-world processes, interactions, and actors around the product model. Middle layer: cooperation support through workflow agents, web services, and intelligent coupling agents, together with domain-specific agents, services, and models. Bottom layer: access to and embedment of models and resources, such as databases, DIN/EC regulations, standard FE software, and product models.]
Within the third step, the identified components as well as the static
interaction structure of the tunnel driving system are implemented as an Object-
Oriented Tunneling Product Model (OOT-PM), with an emphasis on model
consistency and correctness. This model is incrementally improved and enhanced
within the ongoing project and provides a basis for the fourth step, the Tunneling
Interaction Platform (T-IP) implementation. The T-IP supports information retrieval,
model updating, product and process visualization capabilities as well as a context-
sensitive interaction control, in the sense of computational steering, to interactively
run a holistic tunnel driving simulation. Providing a collaboration platform including
system dynamics and organizational aspects, the T-IP is implemented as a three-layer
architecture (see Fig. 1). On the top level, real world couplings and interactions
between sub-processes and actors take place. If feasible, individual partial domain
models and workflows are supported by domain agents and workflow agents,
respectively (middle layer). These agents are organized in a multi-agent system,
which is responsible for keeping dependencies (couplings) consistent and actually
performing defined interactions between partial models. For this purpose, they access
resources (bottom layer) that could be provided as Web services. If autonomous
[Figure 2: tunneling scenario showing the tunnel track with a water inclusion and boulders, linked to the driving simulation.]
input to the exploration simulation. Within this simulation, the sensor data is analyzed
and an attempt is made to replicate the received data by changing the geological
conditions in the exploration model. Through defect minimization the recognition of
boulders, water inclusions or other geological irregularities is enabled and used to
improve the common ground model. Then, the incrementally refined ground model
provides an up-to-date basis to perform several other simulations, for example the
driving simulation. Once the ground model is updated, the multi-agent system takes
care of the change notification and propagation, so that other processes can benefit
from the improved data set.
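The change notification and propagation handled by the multi-agent system can be illustrated in a simple observer style. This is only a sketch of the pattern, not the project's actual agent framework; the class and callback names are invented for illustration:

```python
# Observer-style sketch of the change-propagation idea: when the ground
# model is updated, registered agents (e.g. the driving simulation) are
# notified automatically. Names are illustrative, not the SFB 837 framework.

class GroundModel:
    def __init__(self):
        self.revision = 0
        self._subscribers = []

    def subscribe(self, callback):
        # An agent registers interest in ground-model changes.
        self._subscribers.append(callback)

    def update(self, change):
        # A refinement bumps the revision and notifies every subscriber.
        self.revision += 1
        for notify in self._subscribers:
            notify(self.revision, change)

log = []
model = GroundModel()
model.subscribe(lambda rev, ch: log.append(("driving-simulation", rev, ch)))
model.update("boulder detected at chainage 120 m")
print(log)  # the driving simulation saw revision 1 and the change description
```

In the real platform the agents would keep couplings between partial models consistent; the sketch only shows the notify-on-update mechanism that makes the improved data set immediately usable by other processes.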
PROTOTYPE IMPLEMENTATION
The fundamental structure of the SFB 837, with its numerous sub-projects and
participants, necessitates an adequate approach for the underlying computational
infrastructure. To ensure persistence, a persistence layer is created to cope with large
data sets and very heterogeneous data formats. This approach is required because, so
far, no accepted common product model file format for tunneling projects exists.
Furthermore, many sub-projects depend on proprietary formats
provided by existing simulation and analysis tools. As it is often not feasible to map
all incoming simulation data files to a common objects model without loss of
information, the persistence layer is used to store raw data in their respective file
formats and, at the same time, to provide access to all data files as needed.
Traditional relational database management systems (RDBMS), with their
rigid structures, are not well suited to such heterogeneous data.
Therefore, a document-oriented database approach using Apache CouchDB has been
chosen. In CouchDB, each document consists of a text body that uses JSON
(JavaScript Object Notation) to define its contents. JSON is a light-weight text format
comparable to the Extensible Markup Language (XML), but with reduced complexity
and smaller computational overhead. As a result, documents can be processed in
many different programming languages. Additionally, each document may have an
arbitrary number of attachments, which make it possible to store the original raw
files originating from the different sub-projects. For product model data that cannot
be transformed directly into a corresponding JSON structure, the original content is
stored as an attachment and annotated with a JSON document containing the
meta-data necessary to find and identify the content.
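A meta-data document of this kind might look as follows. The field names are illustrative assumptions, not a schema prescribed by the project; in a real CouchDB instance the raw file itself would be uploaded under the document's `_attachments`:

```python
# Sketch of a CouchDB-style document: a JSON body describing a raw
# simulation file kept as an attachment. All field names are illustrative
# assumptions, not a schema defined by the SFB 837 project.
import json

doc = {
    "_id": "groundmodel-section-042",
    "type": "ground-model-raw-data",
    "subproject": "advance-exploration",
    "created": "2011-03-15T10:30:00Z",
    "original_format": "plaxis",  # proprietary source format of the raw file
    "description": "Refined ground model after boulder detection run",
    # In CouchDB the raw file would live under "_attachments"; here we only
    # record enough meta-data to find and identify the content.
    "source_file": "section_042.plx",
}

body = json.dumps(doc, indent=2)
print(body)
```

Because the body is plain JSON, any client language with a JSON library can query or annotate the document without understanding the proprietary attachment format.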
As the central data repository has to be accessible from a large number of
different clients, the communication layer has to provide easy access to the database
content without relying on heavy-weight protocols or language-specific
communication frameworks. Therefore, a RESTful approach (Representational State
Transfer) has been chosen for client-server communication. REST is usually based on
the HTTP protocol and allows each resource to be accessed and manipulated by
sending a standard HTTP request to a Uniform Resource Identifier (URI) that denotes
the target resource. The requests are processed by a Tomcat server running dedicated
Java servlets responsible for providing requested data or for updating the model.
The basic approach for processing requests based on the exemplary interaction
chain is shown in Figure 3. Going back to our example, to start a new simulation run,
the advance exploration client needs different sets of input data. First, it sends an
HTTP GET request to obtain the geometry of all ground layers in the observed area.
The file format and system boundaries are provided using URI parameters. Then, the
geometry servlet responsible for processing the request fetches the relevant data from
the database and transforms it into the designated target format (e.g. an ACIS file),
which is suitable for generating a finite element model for simulation purposes. In a
second GET request, the corresponding material parameters are fetched as JSON text
and incorporated into the simulation model. Now, the simulation results can be
compared to actual seismic sensor data, which are read in a final GET request. Once
the simulation optimization has found an improved model, the necessary changes are
sent back to the respective servlets and stored in the database. As all clients have
access to the same data set, all modifications are instantly accessible to other
participating sub-systems (in our case, the driving simulation).
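The request chain described above can be sketched as follows, with a stubbed transport function standing in for a live Tomcat server. All URIs, paths, and parameter names are hypothetical illustrations of the RESTful pattern, not the project's actual API:

```python
# Sketch of the interaction chain with a stubbed transport instead of a live
# Tomcat server. The base URI, paths, and parameters are hypothetical
# illustrations of the RESTful pattern, not the project's actual API.

BASE = "http://tunnel-platform.example/api"

def http(method, uri, payload=None):
    # Stand-in for a real HTTP client call; we only record the request so
    # the chain can run without network access.
    return {"method": method, "uri": uri, "payload": payload}

def exploration_run():
    calls = []
    # 1. Fetch ground-layer geometry; target format and system boundaries
    #    are passed as URI parameters.
    calls.append(http("GET", f"{BASE}/geometry?format=acis&xmin=0&xmax=50"))
    # 2. Fetch the corresponding material parameters as JSON.
    calls.append(http("GET", f"{BASE}/materials?layers=all"))
    # 3. Read the latest seismic sensor data for comparison.
    calls.append(http("GET", f"{BASE}/sensors/seismic?window=latest"))
    # 4. Write the improved ground model back; since all clients share the
    #    database, the change is instantly visible to the driving simulation.
    calls.append(http("PUT", f"{BASE}/groundmodel", payload={"revision": 43}))
    return calls

for call in exploration_run():
    print(call["method"], call["uri"])
```

The sequence mirrors the text: three GET requests to assemble the simulation input, then one write-back that propagates the improved model to the other sub-systems.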
Figure 3: Service architecture and request structure for the interaction chain
Acknowledgement
The authors gratefully acknowledge the support of this project by the German
Research Foundation (DFG).
REFERENCES
M. König¹
¹Chair of Computing in Engineering, Institute of Computational Engineering,
Faculty of Civil and Environmental Engineering, Ruhr-Universität Bochum,
Universitätsstr. 150, Building IA, 44780 Bochum, Germany; PH (49) 234 32-23047;
FAX (49) 234 32-14292; email: koenig@inf.bi.rub.de
ABSTRACT
In construction management the definition of a robust schedule is often more
important than finding an optimal process sequence for the construction activities. In
nearly every construction project, the planned schedule must be continually adapted
due to disruptions: activities take longer than expected, construction equipment
fails, resources vary, delivery dates change, or new activities have to be
considered. Therefore, it is imperative to generate a schedule that is robust regarding the
different project objectives like time, costs or quality. In the context of planning and
scheduling the term “robust” means that normal project variations have no significant
effects on the schedule and mandatory project objectives. One appropriate concept to
analyze the robustness of schedules is to simulate different typical disturbance
scenarios. In the end, from the multitude of valid schedules the one that is nearly
optimal and highly robust is selected for execution. In this paper a concept is
presented to generate robust construction schedules using evolution strategies.
Therefore, it is necessary to define reasonable robustness criteria to evaluate the
schedules. Two important robustness criteria are presented in the paper. Finally, the
practicality of the presented robust scheduling approach is validated by a case study.
INTRODUCTION
Usually, numerous disruptions occur during the execution of construction
projects: some activities take longer than expected, construction equipment fails,
resources vary, delivery dates change, or new activities have to be considered. The
challenge is to handle all these uncertainties and the resulting disturbances.
Therefore, the main criterion for generating a schedule should not be to find a
global optimum regarding time, costs, or quality, but rather to define a robust
schedule that can react flexibly to possible disruptions. Several definitions for
robustness have been proposed. Billaut et al. (2005) state that a schedule is robust
if its quality is insensitive to data uncertainties and unexpected events. Another
definition is that a robust schedule is one that is likely to remain valid under a
wide variety of disturbances (Leon et al. 1994).
Dealing with uncertainties is nothing new in the context of scheduling.
However, often only the durations of activities are considered as stochastic
variables. The uncertain numerical data are assumed to be random and to obey a
known probability distribution. Thus, for every activity an appropriate probability
distribution for the
ROBUSTNESS CRITERIA
The specification of robustness criteria is not trivial. A very common
Very important for the robustness measurement rs are the realized activity
start times Si and the activity weights wi. However, during scheduling no realized
activity start times are available; the planner must therefore estimate possible
delays resulting from typical disruptions during execution. Currently, only
a discrete, predetermined delay value is used to calculate the realized start time
of each activity. However, the concept can easily be extended with distribution
functions or other uncertainty concepts such as fuzzy sets. The activity weights
must be defined in the same way. Pre-defined probabilities associated with
linguistic values are provided to support the planner in the weighting process.
Additionally, delays can occur if resources are not available. Consequently, possible
breakdown intervals and breakdown times must be defined for each resource type.
For some resources, empirical data about failures and maintenance exist; for
other resources, failure probabilities must be estimated. Within this paper, only
simple probabilities and fixed breakdown times are used. However, the bounds and
linguistic variables can be adapted according to the project planner’s experience.
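The formula for rs is not legible in this excerpt, so the sketch below uses a common robustness measure, the weighted sum of absolute deviations between realized and planned activity start times, purely as an illustrative assumption consistent with the quantities named in the text (the realized starts Si and the weights wi):

```python
# Illustrative robustness measure (assumption: the exact rs formula is not
# reproduced here). A common variant is the weighted sum of absolute
# deviations between realized and planned activity start times.

def robustness(planned, realized, weights):
    """Lower values mean more robust: realized starts stayed near the plan."""
    return sum(w * abs(r - p) for p, r, w in zip(planned, realized, weights))

planned  = [0.0, 8.0, 16.0]   # planned start times (hours)
realized = [0.0, 9.5, 18.0]   # start times after simulated disruptions
weights  = [1.0, 0.5, 2.0]    # importance of keeping each start stable

print(robustness(planned, realized, weights))  # 0.5*1.5 + 2.0*2.0 = 4.75
```

Replacing the single realized start per activity with samples from a delay distribution (or fuzzy numbers), as the text suggests, would turn this point estimate into an expected or fuzzy robustness value.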
IMPLEMENTATION
CASE STUDY
In order to test the practicality of the presented robust scheduling
approach, a case study was carried out. The case study covers the scheduling of the
shell construction activities of an office building with 14 similar levels. Within
this case study, only two levels with a total of 512 activities were simulated.
Some of the considered activities, including their main attributes and the
parameters for the robustness scenario analysis, are shown in Table 2.
Table 2. Shell construction activities and robustness input data.
Element type | Activity | Performance factor | Delay | Delay probability | Weight
Column | Installing formwork | 0.5 h/m2 | 0.04 h/m2 | rare | low
Column | Reinforcing steel | 0.05 h/kg | 0.01 h/kg | common | average
Column | Concreting | 2 h/m3 | 0.3 h/m3 | very rare | high
Column | Curing | 8 h | 1 h | very rare | very high
Column | Removing formwork | 0.3 h/m2 | 0.04 h/m2 | common | low
Wall | Installing formwork | 0.3 h/m2 | 0.02 h/m2 | rare | low
Wall | Reinforcing steel | 0.4 h/m2 | 0.02 h/m2 | common | average
Wall | Concreting | 0.65 h/m3 | 0.05 h/m3 | very rare | high
Wall | Curing | 8 h | 1 h | very rare | very high
Wall | Removing formwork | 0.3 h/m2 | 0.04 h/m2 | common | low
Slab | Installing ceiling table | 0.45 h/m2 | 0.06 h/m2 | common | low
Slab | Reinforcing steel | 0.4 h/m2 | 0.03 h/m2 | rare | average
Slab | Installing concrete distributor | 4.3 h | 1 h | common | very high
Slab | Concreting | 0.5 h/m3 | 0.01 h/m3 | very rare | high
Slab | Curing | 8 h | 1 h | very rare | very high
Slab | Removing formwork | 0.3 h/m2 | 0.04 h/m2 | rare | low
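A single disturbance scenario for one Table 2 activity can be sketched as follows. The mapping of the linguistic delay probabilities ("very rare", "rare", "common") to numeric values is an assumption for illustration; as the text notes, the actual bounds are left to the planner's experience:

```python
# Sketch of a disturbance scenario for one Table 2 activity. The numeric
# mapping of the linguistic probability values is an assumption for
# illustration; the paper leaves the exact bounds to the planner.
import random

PROB = {"very rare": 0.02, "rare": 0.10, "common": 0.30}

def disturbed_duration(quantity, performance, delay, delay_prob, rng):
    """Planned duration plus a delay that occurs with the given probability."""
    base = quantity * performance
    if rng.random() < PROB[delay_prob]:
        base += quantity * delay
    return base

rng = random.Random(42)  # fixed seed for a repeatable scenario
# Column, "Installing formwork": 0.5 h/m2 performance, 0.04 h/m2 delay, rare.
samples = [disturbed_duration(20.0, 0.5, 0.04, "rare", rng) for _ in range(1000)]
print(min(samples), max(samples))  # 10.0 h planned vs. 10.8 h when delayed
```

Running such scenarios over all 512 activities, with resource breakdowns added in the same way, yields the disturbed schedules whose robustness measurements are compared in Figure 2.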
Figure 2. Normalized robustness measurements of disturbed schedules.
REFERENCES
Bäck, T., and Schwefel, H.-P. (1993). “An overview of evolutionary algorithms
for parameter optimization”, Evolutionary Computation, Spring 1993, Vol. 1,
No. 1:1–23
Beyer, H.-G., and Schwefel, H.-P. (2002). “Evolution Strategies: A
Comprehensive
Introduction”, Journal Natural Computing, 1(1):3–52, 2002
Billaut, J.-C., Moukrim, A., and Sanlaville, E. (2005). “Flexibilité et robustesse en
ordonnancement”, Hermès, Paris
Davenport, A., and Beck, J. (2000). “A survey of techniques for scheduling with
uncertainty”, http://www.eil.utoronto.ca/profiles/chris/gz/uncertainty-
survey.ps, (2010-12-10)
Herroelen, W., and Leus, R. (2004). “Robust and reactive project scheduling: A
review and classification of procedures”, International Journal of
Production Research, vol. 42, num. 8, p. 1599-1620
Leon, V. J., Wu, S. D., and Storer, R. H. (1994). “Robustness measures and
robust scheduling for job shops”, IIE Transactions, 26(5):32-43
König, M., Beißert, U., Steinhauer, U., and Bargstädt, H.-J. (2007). “Constraint-
Based Simulation of Outfitting Processes in Shipbuilding and
Civil Engineering”, Proceedings of the 6th EUROSIM Congress on Modeling
and Simulation, Ljubljana, Slovenia
Starkweather, T., Mcdaniel, S., Whitley, D., Mathias, K., and Whitley, D.
(1991). “A Comparison of Genetic Sequencing Operators”, Proceedings of
the fourth International Conference on Genetic Algorithms, Morgan
Kaufmann, 69-76
T’kindt, V., and Billaut, J.-C. (2005). “Multicriteria Scheduling – Theory,
Models and Algorithms”, Springer, Berlin Heidelberg
Van de Vonder, S., Ballestin, F., Demeulemeester, E., and Herroelen, W.
(2006). “Heuristic procedures for reactive project scheduling”, Report num.
KBI 0605, Department of Decision Sciences & Information Management,
Katholieke Universiteit Leuven, Belgium.
The Development of the Virtual Construction Simulator 3: An Interactive
Simulation Environment for Construction Management Education
¹Department of Architectural Engineering, Pennsylvania State University, 104
Engineering Unit A, University Park, PA 16802; PH (814) 863-6786; FAX (814) 863-
4789; email: SHLatPSU@gmail.com
²Department of Architectural Engineering, Pennsylvania State University, 104
Engineering Unit A, University Park, PA 16802; PH (814) 865-5022; FAX (814) 863-
4789; email: dragana@psu.edu
³Department of Architectural Engineering, Pennsylvania State University, 104
Engineering Unit A, University Park, PA 16802; PH (814) 865-4578; FAX (814) 863-
4789; email: jmessner@engr.psu.edu
⁴Department of Architectural Engineering, Pennsylvania State University, 104
Engineering Unit A, University Park, PA 16802; PH (814) 865-6394; FAX (814) 863-
4789; email: anumba@engr.psu.edu
ABSTRACT
This paper discusses the development of the Virtual Construction Simulator
(VCS) 3 - a simulation game-based educational tool for teaching construction
schedule planning and management. The VCS3 simulation game engages students in
learning the concepts of planning and managing construction schedules through
goal-driven exploration, employed strategies, and immediate feedback. Through the
planning and simulation mode, students learn the difference between the as-planned
and as-built schedules resulting from varying factors such as resource availability,
weather and labor productivity. This paper focuses on the development of the VCS3
and its construction physics model. Challenges inherent in the process of identifying
variables and their relationships to reliably represent and simulate the dynamic nature
of planning and managing construction projects are also addressed.
INTRODUCTION
The nature of building construction is dynamic due to factors that are difficult to
manage, such as resource availability, weather conditions, and resource
performance. Project delays are often expensive, causing significant problems for
contractors and the owner. It is imperative for construction professionals to be able
to deal with unanticipated events and problems to complete the project on time and within
the budget.
Educators are tasked with equipping students with the knowledge to develop feasible
construction schedules and to manage common and unforeseen problems on site.
When learning construction scheduling concepts, students typically start by
interpreting 2D drawings and supplemental documents, and then identify activities
and arrange them into a logical sequence. Critical Path Method (CPM)
schedule development using 2D/3D drawings coupled with lectures and a traditional
assignment format, however, fails to motivate students to try different approaches
when solving construction-related problems. In addition, this method relies on the
students’ personal ability to interpret the documents. It is not easy for students to
tell whether the developed schedule has any conflicts or deficiencies, especially
when the project is complex. The opportunity for students to experience real
construction processes remains limited to field trips, case studies, and exercises
with typical building projects based on real construction projects. While valuable,
site visits are too short for students to see construction progress over time and to
learn about inherent risks and challenges.
Computational support for education using various simulation techniques, 3D/4D
modeling, and Virtual Reality (VR) technology has significantly advanced and is
increasingly used in solving various construction problems. Simulation technologies
offer students opportunities to experience realistic scenarios and actively learn to
develop construction plans, test solutions, and modify strategies accordingly. To
engage students in active learning of construction scheduling concepts, our research
team developed and evaluated an educational simulation – the Virtual Construction
Simulator (VCS).
BACKGROUND
The construction industry increasingly employs commercial schedule
development applications to support visualization of the construction process.
However, the solution quality still greatly depends on the developer’s personal
knowledge and experience. Due to limited practical experience, students often
struggle to detect conflicts and make informed decisions when developing a
construction plan. Furthermore, drawings and bar chart schedules impede students’
ability to visualize spatial data and their temporal relationships. Construction
simulations, 4D modeling, and building information modeling (BIM) have become
valuable tools for developing and visualizing construction schedules and processes.
Construction engineering programs are progressively incorporating advanced
simulation technologies to prepare students to respond to industry needs. Examples of
simulation technologies used in education include the 3D visualization system for
construction operations simulation (Kamat et al., 2001) and the virtual construction
model for integrating the design and construction process to improve constructability
(Thabet, 2001). In particular, 4D modeling can aid in visualizing the construction
schedule of each building element in a 3D environment in sequence over actual
construction time so that project participants can see construction progress and easily
identify any potential problems such as time-space conflicts, congestion, and
accessibility problems prior to actual construction.
A simulation is a useful tool for testing a developed construction plan. From the
educational perspective, simulations can help students learn complex concepts.
MOTIVATION
The Virtual Construction Simulator (VCS) project sought to address existing
limitations in traditional methods for teaching construction scheduling and to explore
simulation technologies for active and engaged learning. The goal of the VCS project
is to provide students with opportunities for scenario-based learning through
practicing decision-making skills and testing different strategies and outcomes to
achieve optimal solutions.
The current VCS application is a continuation of research efforts
initiated in 2004. The first version (VCS1), developed as a 4D learning module,
integrated the processes of viewing a 3D model and creating a construction sequence
(Wang, 2007; Wang et al. 2007), eliminating the need for the CPM schedule and its
subsequent linking to each corresponding 3D building element. The implementation
with undergraduate students in Architectural Engineering demonstrated improvements
in student communication and interaction, more efficient use of time spent
understanding construction problems, and a greater focus on developing solutions,
resulting in higher-quality solutions. The second VCS version (VCS2) addressed certain limitations of
the VCS1 in terms of software development and the user interface (Jaruhar, 2008).
The VCS2 focused on more robust interaction while developing construction plans.
The newly added functions include preset viewpoints, sequencing activities in a chain,
automatic schedule generation, and save and load functions. The same building
model used in the first version, a typical floor of the MGM Grand hotel, was
embedded. The implementation of the VCS2 with students in the same course as the
VCS1 showed that it reduced the time needed for construction schedule development
and that its interface was more intuitive and user-friendly than the VCS1’s. The
students also confirmed that the VCS and 4D modeling were effective communication
tools that helped them better understand their construction schedules.
However, one of the most crucial limitations of both the VCS1 and VCS2 is that
the user needs to manually identify a set of activities for building element types and
manually calculate the duration of each activity prior to developing a construction
sequence inside the application. Furthermore, there is no function to check whether a
user’s values are logically correct and whether the resulting schedule is realistic.
The applications reproduce
4D simulations based on the input data. Specific project constraints which would add
to the realism and encourage developing feasible solutions were not yet included.
Hence, the only feedback students received on the schedule solution came from the
instructor’s comments after the simulation was presented in the classroom. In
addition, the applications required a considerable amount of repeated typing and mouse
interaction to complete the schedule development.
Features of VCS3
The following is the list of VCS3 features implemented to achieve the project
goal. They were mainly elicited from the analysis of the evaluation results and
student feedback, obtained through surveys and focus group discussions.
(1) Project-based constraints and rules: Project-based constraints and rules
provide scenario based goal-driven exploration and help the user develop feasible
construction schedules. In addition to scenario-based constraints such as budget or
available resources, the VCS3 has embedded both physical and activity constraints,
against which each activity is checked before its planned start. The VCS3 allows a
new activity to start only when both conditions are met – all the building elements
identified as physical constraints for the given building element associated with the
activity are constructed; and also activity predecessors within the building element
instance are completed. For example, activities associated with constructing a column
can start only after the footing for the column is constructed. Also, among activities
associated with constructing a footing, the excavation activity needs to be completed
before other activities such as formwork or concrete placement can start. Information
about physical and activity constraints for each building element are stored in the
project database.
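As a rough illustration (not the VCS3 source code), the two-part readiness check described above can be sketched as follows; the data layout and all names here are our assumptions:

```python
# Hypothetical sketch of the VCS3 activity-readiness check: an activity may
# start only when (1) all building elements listed as its physical constraints
# are constructed and (2) its predecessor activities within the same building
# element instance are completed. Names and data layout are illustrative.

def can_start(activity, constructed_elements, completed_activities):
    physical_ok = all(e in constructed_elements
                      for e in activity["physical_constraints"])
    predecessors_ok = all(a in completed_activities
                          for a in activity["predecessors"])
    return physical_ok and predecessors_ok

# Example: column formwork may start only after the footing is built and the
# rebar activity within the same element instance is finished.
formwork = {"physical_constraints": ["footing_A1"],
            "predecessors": ["column_A1_rebar"]}
print(can_start(formwork, {"footing_A1"}, {"column_A1_rebar"}))  # True
```

Both conditions must hold; dropping either the constructed footing or the completed rebar activity makes the check fail.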
(2) Productivity factors: Resources in VCS3 consist of equipment and labor.
The productivity of each laborer can vary as a function of project experience and
weather conditions. The user dynamically manages labor and equipment during the
construction simulation to respond to any changes in construction progress. In
addition to the currently implemented factors (weather and the learning curve),
factors identified as impacting project performance and planned for later addition
include project experience, fatigue, site congestion, and random equipment breakdowns.
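For illustration only (the paper does not give the VCS3's actual formulas), productivity modifiers such as weather and a learning curve might be combined multiplicatively; the cap and rates below are invented:

```python
# Illustrative sketch, not the VCS3 implementation: daily crew output as a
# base productivity scaled by a weather factor and a simple learning curve.

def daily_output(base_rate, weather_factor, days_on_task, learning_rate=0.05):
    # Assumed learning curve: output improves with experience, capped at +25%.
    learning_factor = min(1.0 + learning_rate * days_on_task, 1.25)
    return base_rate * weather_factor * learning_factor

# A crew placing 10 units/day at 90% weather efficiency on its 3rd day:
print(round(daily_output(10.0, 0.9, 3), 2))  # 10.35
```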
(3) Performance feedback: The report interface summarizes daily construction
progress, allowing the user to track schedule progress, resource utilization
(comparing time spent on site with time worked), and daily as well as cumulative
cost data. The report data guides students in making appropriate decisions and
adjustments, if necessary, for the next simulation day.
(4) Goal-driven exploration: The VCS3 engages students in exploration of
different solutions depending on project goals. For example, depending on a scenario
a user can choose construction methods and allocate resources to complete the project
with minimum cost, or test strategies to construct the project under time constraints
within the given budget. Real time resource management and the performance
feedback help the user achieve the user-defined goal.
(5) Pre-defined construction activities and corresponding method sets: The
VCS3 provides the user with a set of pre-defined activities to construct the particular
building element type. For each building element type and its defined list of
construction activities, the user selects between possible construction methods
depending on the project goal. The VCS3 then generates and assigns selected
methods to all the building element group instances of the same type. Thus, the user
does not create custom activities; instead, the VCS3 generates activities automatically
based on the selected construction methods. The automated activity list eliminates the
error-prone manual data input process of the previous versions of the VCS. Information
about construction methods and activities is stored in the MS Access project database
and can be easily modified if necessary.
(6) Development of construction plans and review process: Figure 1 illustrates
the process of planning and simulating a construction schedule using the VCS3. The
process consists of two main phases: construction planning and simulation. During
the construction planning phase, the user develops a construction plan by selecting
construction methods from the list of applicable methods, allocating resources to each
activity and sequencing the activities. After developing a sequence, the user can
estimate the total time to complete the construction project using MS Project.
[Figure: VCS3 system structure. User interfaces connect the 3D geometry model
control module (with its 3D geometry components), the construction plan control
module, and the simulation control module; the modules exchange data with an
Access Global database and an Access Project-Specific database within the VCS
data model.]
Two database files are used: a Global database and a Project-Specific database.
The Global database stores general data for running the application including
construction activities, corresponding methods, and resource data obtained from
RSMeans, independently from a particular construction project. The Project-Specific
database stores data about construction activity status and resources, as well as the
results of the daily simulation for future analysis.
[Figure: VCS data model classes, including VCSBeam, VCSColumn, VCSFooting,
VCSCrew, VCSHumanResource, VCSEquipmentResource, VCSConstants, and
VCSFunctions.]
ACKNOWLEDGEMENTS
We are grateful to Lorne Leonard and George Otto for their support during the
development of the VCS3. We thank the National Science Foundation (Grant
#0935040) for support of this project. Any opinions, findings, conclusions, or
recommendations expressed in this paper are those of the authors and do not
necessarily reflect the views of the National Science Foundation.
REFERENCES
Al-Jibouri, S., Mawdesley, M., Scott, D., and Gribble, S.J. (2005). “The Application
of a Simulation Model and Its Effectiveness in Teaching Construction Planning
and Control.” Computing in Civil Engineering 2005, Vol. 179, No. 7.
Chen, W. and Levinson, D.M. (2006). “Effectiveness of Learning Transportation
Network Growth through Simulation.” Journal of Professional Issues in
Engineering Education and Practice, Vol. 132, No. 1, January 1.
Galarneau, L.L. (2004) “The e-learning edge: Leveraging interactive technologies in
the design of engaging, effective learning experiences.” In e-Fest 2004
Jaruhar, S. (2008). “Development of Interactive Simulations for Construction
Engineering Education.” Master’s Thesis, The Pennsylvania State University
Kamat, V. R., and Martinez, J. C. (2001). "Visualizing simulated construction
operations in 3D." Journal of Computing in Civil Engineering, ASCE, 15(4), 329-
337.
Martin, A. (2000). “A simulation engine for custom project management education”
International Journal of Project Management, Vol. 18, 201-213
Rojas, E.M. and Mukherjee, A. (2005). “General-Purpose Situational Simulation
Environment for Construction Education.” Journal of Construction Engineering
and Management, Vol. 131, No. 3, March 1.
Thabet, W. Y. (2001). "Design/Construction Integration thru Virtual Construction for
Improved Constructability." Retrieved on December 2010 from:
http://www.ce.berkeley.edu/~tommelein/CEMworkshop/Thabet.pdf
Wang, L. and Messner, J.I. (2007). “Virtual Construction Simulator: A 4D CAD
Model Generation Prototype.” ASCE Workshop on Computing in Civil
Engineering, Pittsburgh, PA.
Wang, L. (2007). “Using 4D Modeling to Advance Construction Schedule
Visualization in Engineering Education.” Master’s Thesis, The Pennsylvania State
University
Preparation of Constraints for Construction Simulation
ABSTRACT
INTRODUCTION
today (Halpin, 1977; AbouRizk and Hajjar, 1998; Lu, 2003; Zhang et al., 2005; König
et al., 2007). Using these approaches, it is also possible to generate near-optimal
schedules with respect to a multitude of restrictions and different optimization criteria
(Beißert et al., 2008; Hamm and König, 2010). The use of simulation in the
construction industry is still very limited. There are various reasons for this (Hajjar
and AbouRizk, 2002), one of which is the time-consuming nature of preparing
planning data for the simulation.
In this paper we introduce an approach to accelerate the preprocessing of
construction simulation. The key to shorter planning time is the reuse of existing data
that are generated during design or former projects. But not all available data are of
suitable quality and sufficient quantity for construction simulation. This paper
explains what data are required, how they are prepared, and what kinds of additional
data are added. The focus is on preparation of constraints for construction simulation.
To prove the applicability of our approach an interactive 4D tool for construction
simulation preprocessing and evaluation, called SiteSim Editor, has been
implemented as a prototype. The functionalities of the SiteSim Editor are described in
terms of implementation of the schemata and patterns of our approach.
SIMULATION WORKFLOW
Project data. The input data for simulation preprocessing are project specific data
that are generated during the design process. The input data consist of a building
information model, the construction site layout, an operational bill, and supply chain
information. Missing data must be manually entered by the user. The BIM provides
the building data, i.e., the building components in the form of a 3D building model
with additional semantic information. The BIM is created during the design process.
The construction site layout is defined by the construction site plan. It defines the
footprint of the building, existing construction and landmarks, parking and storage
areas, delivery and emergency egress paths, and also includes stationary equipment
like cranes, site trailers, and storage sheds. The operational bill includes the
descriptions of the construction processes and their outcomes.
SIMULATION PREPROCESSING
Process patterns. Descriptions of the processes and their outcomes are part of the
operational bill. In general, these descriptions do not have the required granularity and
are not sufficient. For the erection of an in-situ concrete column, the operational bill
contains a process description that includes information about the material resources
of the concrete and the reinforcements. The process consists of five subprocesses that
must be defined for simulation. With the help of process patterns, these subprocesses
are defined according to the construction method. Process patterns are reusable
patterns that are stored in a pattern catalog. This catalog is a company-specific
knowledge base that consists of all construction patterns that comprise the company’s
internal work processes. Process patterns define the technological constraints for the
subprocesses and associated personnel and operational values. For example, the
reinforcement can be completed by two to five workers with an operational value of
0.05 h/kg of steel per worker.
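Using the reinforcement example above, a subprocess duration can be derived from the pattern's operational value and the chosen crew size. This sketch is ours, not the paper's implementation; the function name and quantity are illustrative:

```python
def subprocess_duration(quantity, op_value, workers, min_workers, max_workers):
    # op_value is in labor hours per unit quantity (here 0.05 h/kg of steel);
    # the process pattern also bounds the admissible crew size (here 2 to 5).
    if not min_workers <= workers <= max_workers:
        raise ValueError("crew size outside the pattern's admissible range")
    return quantity * op_value / workers  # duration in hours

# 400 kg of reinforcement placed by a three-worker crew:
print(round(subprocess_duration(400, 0.05, 3, 2, 5), 2))  # 6.67
```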
Supply Chain. Figure 2 shows a simplified construction site with a storage area, the
construction area, and a tower crane with its capacity-related radii. Normally, the
project-specific supply chain information is not sufficient and must be modified and
their construction space, while others require exclusive access. These spatio-temporal
constraints have to be taken into account during construction scheduling.
Construction spaces can be classified into three categories: resource space, topology
space, and process space (Akinci et al., 2002; Marx et al., 2010). The resource space
is the space that is occupied by a resource. It is derived from the dimensions of a
resource and is defined for a specific time period. Topology space includes the
building under construction, the construction site, and its surrounding area with all
landmarks and existing constructions. Topology spaces are also time-dependent and
can change during construction. A process space is linked with a construction process.
It covers a space for a specific period, which corresponds to the length of the
construction process. Process spaces are composed of many process-related subspaces
like working spaces, hazard spaces, protected spaces, and post-processing spaces.
Working spaces must be available to execute construction works using resources (see
Figure 4). Some construction works require special hazard space for safety purposes.
So-called protected spaces are sometimes required to temporarily protect a building
component from possible damage induced by adjacent construction. Some building
elements need post-processing work that can only be performed in special areas.
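The space categories above can be captured in a minimal data sketch. The axis-aligned box representation and the day-level time granularity are our simplifying assumptions, not taken from the cited implementations:

```python
# Illustrative data sketch of spatio-temporal construction spaces: a space
# occupies a 2D footprint for a time period, and two spaces conflict only if
# they coincide in both space and time.

from dataclasses import dataclass

@dataclass
class ConstructionSpace:
    category: str   # "resource", "topology", or "process"
    bounds: tuple   # assumed axis-aligned box: (xmin, ymin, xmax, ymax)
    start_day: int
    end_day: int    # spaces are defined for a specific time period

def overlap(a, b):
    # Time intervals must intersect and footprints must intersect.
    time_ok = a.start_day <= b.end_day and b.start_day <= a.end_day
    ax0, ay0, ax1, ay1 = a.bounds
    bx0, by0, bx1, by1 = b.bounds
    space_ok = ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1
    return time_ok and space_ok

a = ConstructionSpace("resource", (0, 0, 2, 2), 1, 5)
b = ConstructionSpace("process", (1, 1, 3, 3), 4, 8)
print(overlap(a, b))  # True: footprints and time periods both intersect
```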
To prove the feasibility of our approach (see Figure 1), a prototype interactive
4D simulation editor called SiteSim Editor has been implemented (see Figure 5). It is
designed for simulation preprocessing and result evaluation. The SiteSim Editor is a
stand-alone application, implemented in Java, based on the Eclipse Rich Client
Platform (RCP) technology. The application supports all data models required
for the simulation workflow depicted in Figure 1 and can read the following data
formats. Building information models are imported as IFC data models. Currently, the
construction site layout is not part of the IFC data model. The enhancement of the IFC
data model is still in progress. Therefore, construction site plans as well as operational
bills and supply chain information are imported in XML format.
Simulation preprocessing and evaluation are embedded in a 4D environment,
which enables an intuitive connection of the building elements and the construction
processes (see Figure 5).
predecessor and successor relationship (see Figure 6b). In combination with groups
the assignment of process patterns creates the technological constraints depicted in
Figure 6c. The required personnel and operational values provided by the process
patterns are defined irrespective of the mode of assignment.
[Figure 6. Modes of constraint assignment: a) separate assignment, b) combined
assignment, and c) grouped assignment of processes P and S to building elements
BE1–BE3 (BEn = building element n).]
The process definition derived from groups of building elements can be
performed in two ways. A summary process can be defined with as many
subprocesses as elements. The required personnel and operational values are set
separately for each process. The processes can be of different types. Otherwise a new
process related to all elements can be defined. All processes must be of the same type
and the required personnel and operational values are derived from the process type.
The strategic constraints are defined in a similar manner. Two selection sets
are created, one with predecessors and the other with successors. The result of the
assignment of the strategic constraints corresponds to the result in Figure 6. Strategic
constraints can be defined among single or grouped building elements.
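The expansion of the two selection sets into pairwise precedence relations might look like this; the sketch is illustrative and not the SiteSim Editor's API:

```python
def strategic_constraints(predecessors, successors):
    # Every element in the predecessor set must finish before every element
    # in the successor set may start.
    return [(p, s) for p in predecessors for s in successors]

print(strategic_constraints(["slab_L1"], ["column_L2_a", "column_L2_b"]))
# [('slab_L1', 'column_L2_a'), ('slab_L1', 'column_L2_b')]
```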
Construction projects often take much longer than planned or are more
expensive than projected. Efficient planning and scheduling are essential for
successful construction. In particular, the comparison of different schedules and
resource utilization alternatives is crucial, but often it cannot be conducted because
construction scheduling is extremely time consuming. The simulation of construction
processes can help to generate multiple construction schedules and to find a near-
optimal solution. Currently, aggregation and preparation of construction simulation
input data are still very time consuming. In this paper a concept has been presented to
accelerate the simulation preprocessing based on a multi-model approach. Data that
are created during different design phases and former projects are extracted from
different data models and reused to generate simulation input data. Data are extracted
and enhanced by additional process information as described above. A tool for
simulation preprocessing and evaluation, called SiteSim Editor, has been introduced.
An interesting area for future research is the generation of constraints.
Currently, many constraints are specified manually by the planner. These
constraints are derived from different project-specific circumstances, such as
construction methods, building and site layout, costs, and operational values.
For construction scheduling, these constraints have to be converted into precedence
constraints. Generally, this is done based on the experience of the planner. To support
the planner’s decision-making process, constraint generation must be improved or,
better yet, automated. Furthermore, the flow of material at construction sites is still
REFERENCES
AbouRizk, S. M., and Hajjar, D. (1998). "A framework for applying simulation in
construction." Can. J. Civ. Eng. 25(3): 604–617.
Akinci, B., Fischer, M., Levitt, R., and Carlson, R., (2002). “Formalization and
automation of time-space conflict analysis.” J. Comp. in Civ. Engrg. Volume
16, Issue 2, pp. 124-134.
Beißert, U., König, M., and Bargstädt, H.-J. (2008). “Generation and local
improvement of execution schedules using constraint based simulation.” Proc.
of the 12th International Conference on Computing in Civil and Building
Engineering (ICCCBE-XII), Beijing, China.
Hajjar, D., and AbouRizk, S. M. (2002). “Unified Modeling Methodology for
Construction Simulation.” J. Constr. Engrg. and Mgmt. Volume 128, Issue 2,
pp. 174-185.
Halpin, D. W. (1977). ‘‘CYCLONE—Method for modeling of job site processes.’’
Journal of the Construction Division, Vol. 103, No. 3, pp. 489-499.
Hamm, M. and König, M. (2010). “Constraint-based multi-objective optimization of
construction schedules.” In Computing in Civil and Building Engineering,
Proceedings of the International Conference, W. TIZANI (Editor), 30 June-2
July, Nottingham, UK, Nottingham University Press, Paper 122, p. 243, ISBN
978-1-907284-60-1.
König, M., Beißert, U., Steinhauer, D. and Bargstädt, H-J. (2007). “Constraint-Based
Simulation of Outfitting Processes in Shipbuilding and Civil Engineering.”
Proceedings of the 6th EUROSIM Congress on Modeling and Simulation,
Ljubljana, Slovenia.
Lu, M. (2003). “Simplified discrete-event simulation approach for construction
simulation.” J. Constr. Engrg. and Mgmt. Volume 129, Issue 5, pp. 537-546.
Marx, A., Erlemann, K., and König, M. (2010). “Simulation of Construction
Processes considering Spatial Constraints of Crane Operations.” In Computing
in Civil and Building Engineering, Proceedings of the International
Conference, W. TIZANI (Editor), 30 June-2 July, Nottingham, UK,
Nottingham University Press, Paper 17, p. 33, ISBN 978-1-907284-60-1.
Tulke, J., Tauscher, E., and Theiler, M. (2010). “Open IFC Tools”
http://openifctools.com (Dec. 5, 2010).
Zhang, H., Tam, C. M., and Li, H. (2005). “Activity object-oriented simulation
strategy for modeling construction operations.” J. Comp. in Civ. Engrg.
Volume 19, Issue 3, pp. 313-322.
Using IFC Models for User-Directed Visualization
1
PhD, Computer Scientist, U.S. Army Engineer Research and Development Center,
Information Technology Laboratory, 3909 Halls Ferry Road, Vicksburg, MS 39180-
6199; PH (601) 634-4624; FAX (601) 634-4402; email:
Chris.Bogen@usace.army.mil
2
PhD, PE, F. ASCE, Research Civil Engineer, U.S. Army Engineer Research and
Development Center, Construction Engineering Research Laboratory, P.O. Box 9005,
2902 Newmark Drive, Champaign, IL 61826-9005; PH (217) 373-6710; email:
bill.east@us.army.mil
ABSTRACT
BACKGROUND
Transporting CAD models into three-dimensional graphics engines for games can be
labor-intensive, rely on expensive software product stacks, and may require skills and
approaches unfamiliar to many architects and designers (O'Coill and Doughty 2004).
This complexity is compounded by model compilation processes inherent to many
popular gaming engines that support large scale worlds, efficient rendering, dynamic
lighting, and real-time multi-user interactions (e.g. Radiant, Unreal, and
Source/Hammer).
In 1999, Fu and East outlined the requirements for a multi-user virtual design review
that includes multiple perspectives of building design models, interactions between
reviewers and designers in a spatial context, restricted access, design review, and
project management verification (Fu and East 1999). Various researchers have
advanced the design review concept by reporting on transformations from design
models to game engine models. For example, Shiratuddin and Thabet outlined an
approach for exporting 2D Autodesk models into the Unreal game engine with an
intermediate editing and export step in the 3DS VIZ/Max environment (Shiratuddin
and Thabet 2002). Later in 2011, Shiratuddin and Thabet reported on an alternate
approach where a 3D model of a 2D design was developed in Autodesk 3D Studio
Max, and then imported into the Torque Game Engine. Limitations of the Torque
.Max import feature required Shiratuddin and Thabet to manually re-assemble the
individual 3D components (e.g. doors, walls, roof) by importing the elements and
then manually moving and re-aligning them properly (Shiratuddin and Thabet 2011).
Kumar et al. reported on a transformation from Revit to Autodesk’s .FBX file, and
finally into the Unity game engine where textures were assigned (Kumar et al. 2011).
Such approaches rely on data exchange artifacts (e.g. 3DS .MAX and AUTOCAD
.DXF files) that may not contain direct linkages to design model metadata, and they
can also obscure or hide details about the underlying data exchanges. In such cases it
may be very difficult or impossible to programmatically trace the target destination
file entities back to the entities in the original source file.
PROBLEM STATEMENT
The authors’ intent is to define an efficient, repeatable process for converting IFC
models to raw geometry files that are processed by a 3D game engine compiler. To
facilitate traceable information exchanges, elements of the target file format must be
explicitly referenced back to elements in the design file through unique identifiers.
The process must also provide semi-automated support for selecting surface textures
and visualization properties, while also considering more technical issues such as the
efficiency demands of the target visualization engine. Finally, the transformation
process must provide these features at a low cost of ownership for non-commercial
research and educational purposes.
APPROACH
The authors adopted a transformation from IFC 2x3 (Coordination Model View
Definition), to VRML (.wrl v2.0 and .x3d v3.0), and finally, to the .MAP format for
the Call of Duty 4 (COD4) Radiant compiler. This approach attempts to reduce the
steep learning curve for BIM applications of compiler-based real-time modeling
platforms. The authors’ approach makes use of IFC attributes to mediate surface
texture selections and other scene customizations. VRML was chosen as the
intermediate model format because of its international adoption and the availability of
free conversion tools. A reliable IFC-to-VRML translation is provided by the Karlsruhe
Institute of Technology’s IfcStoreyView. The IfcStoreyView-generated VRML 2.0
(.wrl) file includes the IFC element type, element name, and unique ID tags while
representing the geometry with collections of triangulated surface meshes.
The COD4Radiant .MAP format was chosen as the map representation format
because it is ASCII based and it directly supports face-vertex meshes, the same
geometry format of the generated VRML files. The COD4 Radiant engine supports
multi-user interactions while efficiently rendering large-scale, detailed models, and it
(as well as accompanying development tools) may be used free of charge for
research, academic, and noncommercial purposes. While the .MAP format is not
defined by a formal schema, technical references are available in McDonald’s Thesis
report (McDonald 2007) and on various game “mod” Websites (Modsonwiki.com).
The .MAP format also allows for the identification of mesh face groups via unique
identifiers and descriptive data. The authors use this feature to tag the destination
model objects with their corresponding IFC element unique identifiers.
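The GUID tagging could be sketched as follows. The key names and entity layout are our simplification, not the exact COD4 Radiant .MAP syntax (which, as noted, has no formal schema); only the classname `scriptbrushmodel` is taken from the paper:

```python
def map_entity(classname, ifc_guid, ifc_type, brush_text):
    # Emit a .MAP-style entity whose key/value pairs carry the source IFC
    # identifiers; "ifc_guid" and "ifc_type" are illustrative key names that
    # let destination geometry be traced back to the design model element.
    return "\n".join([
        "{",
        f'"classname" "{classname}"',
        f'"ifc_guid" "{ifc_guid}"',
        f'"ifc_type" "{ifc_type}"',
        brush_text,
        "}",
    ])

print(map_entity("scriptbrushmodel", "2O2Fr$t4X7Zf8NOew3FLOH",
                 "IfcWallStandardCase", "// brush faces omitted"))
```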
Before the VRML file is parsed, the xj3d tool is used to transform the .wrl VRML
format to .x3d. Once the x3d file is deserialized, ifcVRMLtoMAP prompts the user
to specify the original length measurement units of the IFC model and specify
whether or not to perform polygon reduction on objects with more than 1,000
polygons. Polygon reduction simplifies complex surface meshes and is
sometimes necessary to avoid exceeding BSP node size thresholds. The authors
implemented a version of Melax’s edge-collapse polygon reduction algorithm (Melax
1998) that may be applied to objects with high polygon counts such as toilets and
sinks.
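A condensed sketch of the edge-collapse cost from Melax (1998) is given below: the cost of collapsing vertex u into v is the edge length scaled by a curvature term computed from the triangles around u. The mesh bookkeeping (picking the minimum-cost edge, collapsing it, updating neighbors) is omitted, and the calling convention is ours:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def edge_cost(u, v, tris_of_u, tris_of_uv, normal):
    # Melax-style collapse cost: |u - v| * curvature. u and v are 3D points,
    # tris_of_u the triangles containing u, tris_of_uv those containing the
    # edge (u, v), and normal(t) returns a triangle's unit normal.
    curvature = 0.0
    for t in tris_of_u:
        # Deviation of t's normal from the best-matching edge triangle.
        mindot = min((1.0 - dot(normal(t), normal(s))) / 2.0
                     for s in tris_of_uv)
        curvature = max(curvature, mindot)
    return math.dist(u, v) * curvature

# On a flat region all normals agree, so the collapse is free:
n = (0.0, 0.0, 1.0)
print(edge_cost((0, 0, 0), (1, 0, 0), [n, n], [n], lambda t: t))  # 0.0
```

Low-cost edges lie in flat regions, which is why the reduction preferentially removes detail from smooth surfaces such as toilets and sinks while preserving sharp features.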
The openings of IFCDoor objects are copied to invisible hint surfaces. Hint surfaces
influence the compiler to create a separate BSP node for an enclosed space, thus
reducing the chance that a BSP node will have too many vertices. Users may also use
the ifcVRMLtoMAP user interface to identify an object as a light. While lights can
enrich repetitive surface textures with shading and contrast, they are not a
requirement because the COD4Radiant engine provides adequate ambient light
settings. After the .MAP text is constructed, it is distributed in several .MAP files
that are labeled by building element type. Each file is limited to 2 MB because the
Radiant editor performs poorly or crashes when dealing with larger .MAP files.
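The output-splitting step described above might be sketched like this; grouping by element type and the 2 MB cap come from the text, while the function and file-naming scheme are our assumptions:

```python
def split_map_text(entities_by_type, max_bytes=2 * 1024 * 1024):
    # entities_by_type maps an element type to its entity text blocks, e.g.
    # {"IfcWall": ["{...}", "{...}"], "IfcDoor": [...]}. Each output file is
    # kept under max_bytes because the Radiant editor handles larger .MAP
    # files poorly.
    files = {}
    for elem_type, entities in entities_by_type.items():
        part, size = 1, 0
        for text in entities:
            nbytes = len(text.encode("utf-8")) + 1  # +1 for the newline
            if size and size + nbytes > max_bytes:
                part, size = part + 1, 0
            name = f"{elem_type}_{part}.map"
            files.setdefault(name, []).append(text)
            size += nbytes
    return {name: "\n".join(blocks) for name, blocks in files.items()}
```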
After surface textures are assigned to the model elements, these options may be saved
to an XML file so that they may be reused if the transformation is repeated. Users
may optionally specify a skybox file to contain the facility model. All MAP objects
must be placed in a skybox, which can simply be a hollow cube large enough to
contain the building with special ground texture on the bottom and special sky
textures on all of the remaining sides. It is also possible to build a generic skybox that
may be used with minimal editing for almost any building.
A few critical manual steps must be executed before compiling the map assets. First,
the surface texture and light mapping coordinates in the MAP files must be corrected.
This is accomplished by opening the map file(s) in the Radiant editor, selecting all
elements in the map, and clicking the Natural or LMAP texturing buttons in texture
material mode and light-map surface editing modes. Finally, the map must be
compiled using the Radiant compiler, and this process may inherently require some
trial and error for detailed maps containing several complex objects with high
(>1000) polygon counts.
FACILITY VISUALIZATIONS
Virtual walkthroughs for two building models were developed to demonstrate the
adopted transformation process. The first building, a 248 m2 (2,669 ft2) duplex
apartment building (47.2 MB IFC file), was originally developed as a submission to
a German design school competition. This duplex apartment model has been used to
474 COMPUTING IN CIVIL ENGINEERING
The floor planes of the IfcSpace objects were represented as scriptbrushmodel map
objects with space OID and name attributes. This representation enabled the authors
to develop a script that allows walkthrough participants to view the room name and
usage category of their current location. The authors also used the IfcSpace objects
to customize floor textures according to space function (e.g., bathrooms have tile,
offices have carpet). Since the floor slabs of the source models were represented by a
single group of polygons, the floors of the IfcSpace objects provided an efficient and
accurate way to “color” floors by room function.
Step 2 (Region G). User action: select base length units and indicate whether or not
to perform polygon reduction. Program response: deserialize X3D, perform polygon
reduction, perform unit conversion, and initialize internal ifcVRML objects.
Step 3 (Regions B, C, D). User action: browse the master (B) and detail grid (D)
views; assign textures (C) and specify collision surface options (B, D); uncheck
Render? to omit an object. Program response: user interface handling.
Step 4 (Region E). User action: select an existing skymap .MAP file. Program
response: user interface handling.
Step 5 (Region F). User action: click "Convert X3D to MAP". Program response:
combine user options with the existing internal ifcVRML objects, transform
ifcVRML objects to .MAP text, and write .MAP files.
CONCLUSIONS
RECOMMENDATIONS
ACKNOWLEDGEMENTS
This work was sponsored under the Life-Cycle Model for Mission-Ready,
Sustainable Facilities project through the U.S. Army Engineer Research and
Development Center. The authors would like to acknowledge Howard Yu (ERDC-
Champaign) for his assistance in preparing the clinic demonstration, Nicholas Nisbet
(AEC3 UK) for his IFC expertise and his work on BIMServices, and the
buildingSMART alliance™ for their on-going commitment to BIM interoperability.
COMPUTING IN CIVIL ENGINEERING 477
REFERENCES
1 Assistant Professor, School of Architecture, University of Florida, Gainesville, FL
32611-5702, Email: nnawari@ufl.edu
2 Student, College of Engineering, University of Florida, Gainesville, FL 32611-5702,
Email: litani@ufl.edu
3 Grad. Student, College of Engineering, University of Florida, Gainesville, FL
32611-5702, Email: egonzalez6@ufl.edu
ABSTRACT
INTRODUCTION
Unfortunately, other types of engineering knowledge have come to receive rather less
than their fair share of attention.
To advance other types of structural engineering knowledge, this research
focuses on the conceptual and qualitative behavior of a structure and on how to
engage students' imagination and use it no less creatively than a musician or artist
producing ideas out of his or her head. In addition to envisioning a geometrical
shape or type of material, which can be done largely from memory, there is also the
possibility of carrying out structural analysis in the mind, what can be termed
conceptual analysis. The research aims to emphasize the value of a qualitative
understanding of structural behavior in the context of the education of engineers and
architects. Although data are lacking to allow comparison with earlier times, some
alarm has been sounded at the poor qualitative and conceptual understanding amongst
young structural engineers and architects.
With recent technological advancements, students have more tools to analyze
and demonstrate how load combinations affect the stability and behavior of a
structure. Specifically, Building Information Modeling (BIM) has the potential to
assist in achieving different types of structural knowledge learning objectives without
compromising their distinct requirements. Building information modeling, or BIM, is
a process that fundamentally changes the role of computation in structural design by
creating a database of the building objects to be used for all aspects of the structure
from design to construction and beyond.
BIM has revolutionized the design and construction of buildings mainly due
to its ability to specify the interaction of stresses, section properties, material strength,
and deformation based on type of supports and connections. This research project
focuses on utilizing Revit Structure and its extensions, including the Robot Structural
Analysis software, to understand the basics of building structures and how to
conceptually analyze members such as portal frames. This conceptual knowledge of
structural behavior is similar to the type of knowledge usually associated with craft
skill, or the skill of knowing how to do something (e.g., swim, paint, make docks,
play a musical instrument), and normally yields deep learning results.
The experimental research team includes one undergraduate student and one
graduate student from the college of engineering and one graduate student from
college of design and construction working at the school of architecture, University
of Florida to investigate how BIM would improve learning and understanding of
building structures. The research team was introduced to the basics of BIM and Revit
Structure. This introduction took about eight contact hours (see figure 1). The last
phase of this introduction was an overview of Revit Structure, emphasizing the
comprehension of new concepts such as model elements, categories, families, types,
and instances. Before starting the analysis, students were assigned simple projects
to practice using Revit Structure in modeling single- and two-storey steel and wood
buildings. Figure 2 below illustrates the process followed in this introduction
(Sharag-Eldin & Nawari 2010).
Following the introduction, students started to learn about the analysis tools in Revit
Structure. These tools are available as extensions to the basic version of the software.
The BIM tools used in this research are principally the beam and frame simulation, the
load takedown, and the integration with Robot Structural Analysis. The load
takedown played an important role in introducing load path, load tracing, reactions,
and constraints in building structures. Students were able to understand concepts such
as tributary areas for beams, girders, and columns in a visually interactive manner (see
figure 2), which greatly stimulated their interest and motivation to explore other
analysis capabilities of the tool.
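As a hedged aside, the tributary-area concept the students visualized reduces to simple arithmetic; the sketch below is our own illustration, not part of the Revit tools.

```python
def beam_tributary_area(beam_length, spacing_left, spacing_right):
    """A beam carries the floor load halfway to each adjacent parallel beam,
    so its tributary width is the average of the two spacings, and its
    tributary area is that width times the span."""
    tributary_width = (spacing_left + spacing_right) / 2.0
    return beam_length * tributary_width
```

For example, an interior beam spanning 20 ft with parallel beams 8 ft away on each side carries a tributary area of 20 x 8 = 160 ft2.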
The study was then centered on understanding the conceptual behavior of
frames under gravity and lateral loads using these tools. The next sections illustrate
the approach and results obtained.
(a) Tributary areas for beams and girders; (b) Axial column loads map.
STUDY APPROACH
A portal frame subjected to a single gravity point load was analyzed by altering the
beam and column member sizes. The first analysis kept the beam sizes constant as the
column width and depth gradually increased in intervals of 6 in. The second portion of
the study kept the column sizes constant and varied the beam sizes by increasing only
the height by 6 in. The analysis was then repeated by removing the gravity load and replacing
it with a point lateral load.
CASE STUDY
The main concern of structural analysis is the calculation of internal force systems
and stress analysis that are involved in the determination of the corresponding
internal stresses.
This research paper focuses on statically indeterminate frames which cannot
be solved simply using force equilibrium equations. In order to understand the
structural behavior and qualitatively determine the corresponding shear and bending
moment diagrams of such structures, the study endeavors to utilize the concepts of
simple beams, namely simply supported, cantilever and fully fixed beams (figure 4).
(Figure 4. Shear and bending moment diagrams for a midspan point load P: simply
supported beam (maximum moment PL/4), cantilever (maximum moment PL), and
fully fixed beam (maximum moment PL/8).)
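The benchmark simple-beam moments used throughout this comparison can be expressed compactly (our own sketch; P is the point load and L the span):

```python
def max_moment(P, L, condition):
    """Maximum bending moment for a point load P at midspan of a span L:
    PL/4 for a simply supported beam, PL/8 for a fully fixed beam, and PL
    for a cantilever loaded at its tip."""
    factors = {"simple": 1 / 4, "fixed": 1 / 8, "cantilever": 1.0}
    return factors[condition] * P * L
```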
Fixed and pinned portal frames subjected to a 1.0 kip gravity load behave
similarly when analyzing their shear and bending moment diagrams. As the beams
increased in size in relation to the columns, the bending moments gradually moved
towards the center of the frame, and the portal frame members acted more like a
simply supported beam. Alternatively, as the columns increased in size in relation to
the beams, the bending moments gradually moved away from the center, and the frame
behaved more like a fully fixed structure. When column and beam sizes are the same,
the behavior is the average of the simply supported and fully fixed beams (bending
moment = 1/2 (PL/4 + PL/8) = 3PL/16). Figure 5 below depicts visually the shear and
bending moment diagrams for part of the results under different conditions.
As the portal frame undergoes a 1.0 kip lateral load, the effect of
column size changes (with beam size fixed) on the shear and bending moment
diagrams is insignificant in the case of a pinned frame. On the other hand, in the case
of a fixed frame, as the column sizes increase the columns behave more like a
cantilever beam with the maximum moment PL at the support. Considering pinned
frames, as the beams gradually increase in size while the column size is fixed, the
behavior of the frame is identical to that of the pinned frame when changing the
column sizes. However, in the case of a fixed portal frame, the larger the beam sizes
in relation to the column size, the more the columns behave like a half cantilever with
a maximum bending moment of PL/2 at both ends. Figure 6 below shows part of the
corresponding shear and moment diagrams for the fixed and pinned frames.
As the column size increases for a lateral load on the fixed frame, the bending
moment and shear force on the beam diminishes almost to zero. For the pinned frame,
the changes in the sizes of the beam do not affect the magnitude of the bending
moment or shear force on the beam when subjected to the same lateral load.
(Figure panels: varying columns: 12 ft. x 12 ft., 24 ft. x 24 ft., 36 ft. x 36 ft.;
varying beams: 12 ft. x 12 ft., 12 ft. x 36 ft.)
CONCLUSIONS
The research work was focused on the development of more effective ways in
which structural behavior and analysis can be judged, improved, communicated,
learnt and taught. The study emphasized the development of more effective
techniques by which structural engineering knowledge in its widest sense, including
the understanding of structural behavior that is so essential to the skill of design,
can be taught and learnt. Structural education in engineering as well as in
architecture should incorporate an increased concentration upon behavior and
conceptual analysis as a central activity.
A direct approach to the understanding of structural behavior is to be
emphasized, rather than relying on the dubious assumption that such understanding
necessarily follows from learning the mathematics of structural analysis. Ultimately,
this type of knowledge is the only check on the legitimacy of using structural
engineering theories in design procedures and computer aided design software.
The use of BIM tools in a reflective mode to enhance learning of fundamental
structural concepts allowed students to appreciate the full behavior of the structure
and hence this approach has promoted improved deep learning/understanding of
structural behavior.
REFERENCES
Addis, W. (1991). "Structural Engineering: the Nature of Theory and Design". Ellis
Horwood, New York.
Beckman, P. (1966). “Education lost have been misled”, Arup Journal 1, No.3, 7.
Brohn, D.M. (1982). “Structural Engineering – a Change in Philosophy”, Structural
Engineer, 60A, 117-120.
Cowan, J.(1981).”Design Education based on an Expressed Statement of the Design
Process”, Proc. Instn. Civ. Engrs 70, 743-753.
Duncan, P. (1981). “The Teaching of Structural Design: a Proposal”, Arup
Newsletter, No. 125, 1-2.
Harris, A. J. (1980). "Can Design be Taught?" Proc. Instn. Civ. Engrs., 68, 409-416.
Hills, G. and Tedford, D. (2002). "Innovation in engineering education: the uneasy
relationship between science, technology and engineering", Proc. 3rd Global Cong.
on Eng. Edu., Glasgow, UK, 43-48.
Lewin, D. (1981). "Engineering Philosophy – the Third Culture", Journal of Royal
Society of Arts, 129, 653-666.
Pugsley, A. (1980). “The Teaching of the Theory of Structures”, Structural Engineer,
58A, 49-51.
Sharag-Eldin, A., and Nawari, N.O. (2010). "BIM in AEC Education", 2010 Structures
Congress joint with the North American Steel Construction Conference, Orlando,
Florida, May 12-15, 2010, pp. 1676-1688.
Efficient and Effective Quality Assessment of As-Is Building Information
Models and 3D Laser-Scanned Data
ABSTRACT
INTRODUCTION
This paper presents the data and as-is BIM QA requirements of civil engineers, the
deviation analysis method for QA, and evaluation results illustrating how this
deviation analysis method meets the domain requirements.
RELATED STUDIES
in the model. For instance, engineers need to know whether large deviations between
overlapping scans are caused by scanner calibration problems or data registration
errors, so that they can recalibrate the scanner or improve data registration
accordingly. Second, most applications have specific tolerances for the accuracy of
the data and as-is BIMs. Engineers need to quantify the magnitudes of deviations
or errors. For instance, if an architect specifies that the positioning accuracy tolerance
for windows is 5 cm, then the QA method should enable that architect to identify all
locations having errors larger than 5 cm.
A typical as-is BIM construction workflow is composed of three phases: (1)
Data collection; (2) Data preprocessing; and (3) Modeling the BIM. More detailed
descriptions of these three steps can be found in (Tang et al. 2010). Generally, the
first two phases influence the data quality, while the last phase influences the model
quality. The major error sources in the data collection phase include: 1) Incorrect
calibration of the scanner; 2) Mixed pixels due to spatial discontinuity edges; and 3)
Range errors due to specular reflections (Anil et al. 2011). Data preprocessing mainly
involves identifying and removing noisy data points, and aligning multiple scans in
local coordinate systems to a common coordinate system (known as data registration).
The major error sources involved in this step include: 1) Incorrect noise removals;
and 2) Data registration errors. The major error sources in the modeling phase include:
1) Failing to model physical components; 2) Modeling components using incorrect
shapes; 3) Modeling components with incorrect positions. A good QA approach
should be able to identify all these types of quality issues, and to enable engineers to
quantify and understand their implications for the domain applications. Due to
space limits, this paper focuses on the domain requirements and an evaluation of the
deviation analysis method on satisfying these requirements without detailing data
processing steps and the definitions of all error types. More details on these aspects
can be found in a related publication (Anil et al. 2011).
DEVIATION ANALYSIS
a red-yellow continuous color map (gradual color variation from red to yellow with
the reduction of deviation values) and a yellow-green binary color map (assign
yellow/green color to data or model with deviations larger/smaller than a
user-specified threshold), as detailed later. Second, engineers can configure
continuous color maps as unsigned or signed. Unsigned color maps visualize the
absolute deviation values, so that deviations of the same absolute values will have the
same color, while signed color maps visualize equivalent positive and negative
deviations with different colors. This paper focuses on signed color maps, which we
found to be more effective in practice. Third, engineers can configure the scale of the
color map so that they can control which ranges of deviations are of interest.
Specifically, they can configure the maximum and minimum deviation values
visualized; they can also set the threshold value for the binary color map to only
distinguish deviations larger and smaller than that threshold. Finally, engineers can
choose to colorize points or colorize the BIM surfaces. In this paper, we focus on
evaluating the point colorization method, since it can give more detailed and
localized deviation information for QA (Anil et al. 2011).
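The two color-map modes described above might be sketched as follows. The function names and the unsigned simplification are ours, not PolyWorks' API; the paper's signed variant assigns distinct hues to positive and negative deviations.

```python
def continuous_color(deviation, d_max=0.1):
    """Unsigned red-yellow ramp: red at |deviation| >= d_max, shading
    gradually to yellow as the deviation approaches zero. Returns (r, g, b)
    with components in [0, 1]."""
    t = 1.0 - min(abs(deviation), d_max) / d_max  # 1.0 at zero deviation
    return (1.0, t, 0.0)

def binary_color(deviation, threshold=0.025):
    """Binary map: yellow where |deviation| exceeds the tolerance
    threshold, green elsewhere."""
    return (1.0, 1.0, 0.0) if abs(deviation) > threshold else (0.0, 1.0, 0.0)
```

The 0.025 m default mirrors the tolerance the authors report using for their binary maps.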
In addition to deviation generation and visualization, statistical analysis can
be used to analyze the deviation patterns. One example is to create the deviation
histograms for a certain region for obtaining the mode of deviation values, as shown
in a related publication (Anil et al. 2011). Such statistical methods could make the
deviation pattern analysis automatic. This paper focuses on the deviation generation
and visualization, and leaves the automated deviation analysis for future exploration.
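A minimal sketch of the histogram-based mode extraction mentioned above (the bin width is our assumption, not a value from the paper):

```python
from collections import Counter

def deviation_mode(deviations, bin_width=0.005):
    """Quantize deviations into bins of the given width and return the
    center of the most populated bin, i.e., the mode of the deviation
    distribution for a region."""
    bins = Counter(round(d / bin_width) for d in deviations)
    most_common_bin, _ = bins.most_common(1)[0]
    return most_common_bin * bin_width
```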
EVALUATION RESULTS
registration. Figure 2(c) shows the deviation patterns around a window on the façade
of this building, using the same binary map adopted in 2(a). The deviations around
the two vertical edges of a window are all larger than 2.5 cm. Detailed investigations
revealed that the mixed pixels around spatial discontinuities influence the data quality
and cause such patterns. Using the same binary color map, Figure 2(c) and (d) show
that for all specular objects with high reflectivity, such as window glass and the
metallic awning, the deviations are larger than other parts, likely due to higher noise
in these regions. These observed correlations between deviation patterns and types of
data quality issues show the effectiveness of the deviation analysis method for
pinpointing types of data problems.
(a) Potential scanner calibration problem (b) A rotation error in data registration
(c) Mixed pixels at spatial discontinuities (d) Low data quality on specular surfaces
Figure 2. Deviation patterns for identifying various data quality issues.
The deviation analysis method also enables engineers to configure parameters of the color
maps for visualizing deviations of interest. First, engineers can configure the
maximum and minimum deviations visualized by a continuous color map to only
show the patterns within that range based on their requirements. In Figure 2(b), the
range of interest is (-0.1 m to 0.1 m). In Figure 3 (b), (c), and (d), the ranges of
interest are (-0.2 m to 0.2 m), (-0.05 m to 0.05 m), and (-0.05 m to 0.05 m)
respectively. Generally, identifying “failing to model physical components” issues
needs a larger range than identifying the other two types of modeling issues, since
missing a component typically causes relatively larger deviations. Similarly, for the
binary color map, engineers can configure the threshold to only highlight regions
exceeding a tolerance. According to the tolerance specified in the project manual, we
used 0.025 m as the threshold for all shown results.
(a) Photo of a part of the back façade (b) Failing to model a physical component
(c) Model using incorrect shape (d) Model components with incorrect positions
Figure 3. Deviation patterns for identifying various model quality issues.
ACKNOWLEDGEMENT
This material is based upon work supported by the U.S. General Services
Administration under Grant No. GS00P09CYP0321. Any opinions, findings,
conclusions, or recommendations presented in this publication are those of authors
and do not necessarily reflect the views of the U.S. General Services Administration.
REFERENCES
Akinci, B., Boukamp, F., Gordon, C., Huber, D., Lyons, C., and Park, K. (2006). “A
formalism for utilization of sensor systems and integrated project models for
active construction quality control.” Automation in Construction, Elsevier,
15(2), 124–138.
Anil, E. B., Tang, P., Akinci, B., and Huber, D. (2011). "Assessment of
Quality of As-is Building Information Models Generated from Point Clouds
Using Deviation Analysis." Proceedings of SPIE, San Jose, California, USA.
Autodesk, Inc. (2010). “Navisworks.”
http://usa.autodesk.com/adsk/servlet/pc/index?siteID=123112&id=10571060.
Cheok, G. S., Filliben, J. J., and Lytle, A. M. (2009). Guidelines for accepting 2D
building plans. NIST Interagency/Internal Report (NISTIR) - 7638.
Cheok, G. S., and Franazsek, M. (2009). Phase III: Evaluation of an Acceptance
Sampling Method for 2D/3D Building Plans. NIST Interagency/Internal report
(NISTIR)-7659.
Gordon, C., Boukamp, F., Huber, D., Latimer, E., Park, K., and Akinci, B. (2003).
“Combining reality capture technologies for construction defect detection: a
case study.” EIA9: E-Activities and Intelligent Support in Design and the Built
Environment, 9th EuropIA International Conference, Citeseer, 99–108.
Innovmetric, Inc. (2010). “Polyworks v11.0.” www.innovmetric.com.
Tang, P., Huber, D., Akinci, B., Lipman, R., and Lytle, A. (2010).
"Automatic reconstruction of as-built building information models from
laser-scanned point clouds: A review of related techniques." Automation in
Construction, 19(7), 829-843.
Occlusion Handling Method for Ubiquitous Augmented Reality
Using Reality Capture Technology and GLSL
INTRODUCTION
As a novel visualization technology, Augmented Reality (AR) has gained widespread
attention and seen prototype applications in multiple engineering disciplines for
conveying simulation results, visualizing operations design, inspections, etc. For
example, by blending real-world elements with virtual reality, AR helps to alleviate the
extra burden of creating complex contextual environments for visual simulations
(Behzadan, et al., 2009a). As an information supplement to the real environment, AR
has also been shown to be capable of appending georeferenced information to a real
scene to inspect earthquake-induced building damage (Kamat, et al., 2007), or in the
estimation of construction progress (Golparvar-Fard, et al., 2009). In both cases, the
composite AR view is composed of two distinct groups of virtual and real objects, and
they are merged together by a set of AR graphical algorithms.
Spatial accuracy and graphical credibility are the two keys to implementing
successful AR graphical algorithms, and the primary focus of this research is
exploring a robust occlusion algorithm for enhancing graphical credibility in
ubiquitous AR environments. In an ideal scenario, AR graphical algorithms should
have the ability to intelligently blend real and virtual objects in all three dimensions,
instead of superimposing all virtual objects on top of a real-world background as is the
case in most current AR approaches. The result of composing an AR scene without
considering the relative depth of the involved real and virtual objects is that the
graphical entities in the scene appear to “float” over the real background rather than
blending or co-existing with real objects in that scene. The occlusion problem is more
complicated in outdoor AR where the user expects to navigate the space freely and the
relative depth between involved virtual and real content is changing arbitrarily with
time.
Several researchers have explored the AR occlusion problem from different
perspectives. Wloka and Anderson (1995) implemented a fast stereo matching
algorithm that infers depth maps from a stereo pair of intensity bitmaps; however,
random gross errors make virtual objects blink on and off, which proves very
distracting. Berger (1997) proposed a contour-based approach with the major
limitation that the contours need to be seen from frame to frame. Lepetit et al. (2000)
refined the previous method with a semi-automated approach that requires the user to
outline the occluding objects in key-views; the system then automatically detects
these occluding objects and handles uncertainties in the computed motion between
two key frames. Despite the visual improvements, the semi-automated method is only
appropriate for post-processing. Fortin et al. (2006) exhibited both a model-based
approach using bounding boxes and a depth-based approach using a stereo camera;
the former only works with a static viewpoint, and the latter is subject to errors in
low-textured areas. Ryu et al. (2010) tried to increase the accuracy of the depth map
with a region-of-interest extraction method using background subtraction and stereo
depth algorithms, but only simple background examples were demonstrated. Tian et
al. (2010) designed an interactive segmentation and object tracking method for
real-time occlusion, but their algorithm fails when virtual objects are in front of real
objects.
In this paper, the authors propose a robust AR occlusion algorithm that uses a real-time
Time-of-Flight (TOF) camera, an RGB video camera, and the OpenGL frame buffer to
correctly resolve the depth of real and virtual objects in AR visual simulations.
Compared with previous work, this approach enables improvements in three aspects:
1) Ubiquitous: a TOF camera capable of suppressing background illumination enables
the algorithm and implemented system to work in both indoor and outdoor
environments, placing the fewest limitations on context and illumination conditions
of any previous approach; 2) Robust: due to the depth-buffering
employed, this method can work regardless of the spatial relationship among involved
virtual and real objects; 3) Fast: The authors take advantage of OpenGL texture and
OpenGL Shading Language (GLSL) fragment shader to parallelize the sampling of
depth map and rendering into the frame buffer. A recent publication (Koch, et al., 2009)
describes a parallel research effort that adopted a similar approach for TV production
in indoor environments with a 3D model constructed beforehand.
(Figure 1. Two-stage rendering pipeline: the registered RGB & TOF camera pair
supplies a depth image, written into the depth buffer, and an RGB image, written into
the color buffer, in the FIRST rendering stage; hidden surface removal between real
and virtual content occurs in the SECOND rendering stage.)
1) After being processed through the OpenGL graphics pipeline and written into the
depth buffer, the distance between the OpenGL camera and the virtual object is not
the physical distance at all (Shreiner, et al., 2006). The transformation model is
explained in section 3.1. Therefore the distance for each pixel from the real object
to the viewpoint given by the TOF camera has to be processed by the same
transformation model, before it is written into the depth buffer for comparison.
2) Traditional glDrawPixels() command can be extremely slow when writing a
two-dimensional array, i.e. the depth map, into the frame buffer. Section 4
introduces an alternative and efficient approach using OpenGL texture and GLSL.
3) The resolution of the TOF depth map is fixed at 200x200, while that of the depth
buffer can be arbitrary, depending on the resolution of the viewport. This implies the
necessity of interpolation between the TOF depth map and the depth buffer. Section
4 also takes advantage of OpenGL textures to fulfill the interpolation task.
4) There are three cameras for rendering an AR space: Video camera captures RGB
values of the real scene as the background, and its result is written into the color
buffer; TOF camera acquires the depth map of the real scene, and its result is
written into the depth buffer; OpenGL camera projects virtual objects on top of real
scene with its result written into both color and depth buffer. To ensure correct
registration and occlusion, all of them have to share the same projection
parameters: aspect ratio and focal length. While the projection parameters of
OpenGL camera are adjustable, the intrinsic parameters of the video camera and TOF
camera do not agree: i.e., different principal points, focal lengths, and distortion
models. Therefore an image registration method is designed to find the
correspondence between the depth and RGB image.
Fig.2: Projective transformation of the depth map. The right side shows the original
depth map, and the left side shows the transformed depth map written into the depth
buffer (Dong and Kamat 2010)
The transformed depth value needs to be offset and scaled to the depth buffer range
[0, 1] before it is sent to the depth buffer.
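Under a standard OpenGL perspective projection with near and far planes n and f (the values below are illustrative, not the authors' settings), the conversion from a physical TOF distance to a [0, 1] depth-buffer value can be sketched as:

```python
def tof_distance_to_depth(d, n=0.5, f=100.0):
    """Window-space depth OpenGL would store for a fragment at eye-space
    distance d: map through the perspective projection to normalized device
    coordinates, then offset and scale from [-1, 1] into the depth-buffer
    range [0, 1]."""
    z_ndc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d)
    return 0.5 * (z_ndc + 1.0)
```

Distances at the near plane map to 0 and at the far plane to 1, matching the default glDepthRange; the mapping is nonlinear in d, which is why raw TOF distances cannot be written into the depth buffer directly.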
Table 1: The transformation steps applied to the raw TOF depth image.
(Columns: Name, Meaning, Operation, Expression, Range.)
Although the homography is strictly valid only during pure camera rotation, the short
translation between the two cameras makes the approximation reasonable. Fig.3 shows
the registration results using the homography, where the RGB image is transformed
into the TOF depth image coordinate frame. Since it is difficult to find identical points
using the depth map, we instead use the grey-scale intensity image provided by the
TOF camera, which has a one-to-one mapping to the depth map.
Transforming the RGB image points to the depth map coordinate system on the fly is
very expensive. To accelerate the process, the mapping relationship between RGB and
TOF image points is pre-computed and stored as a look-up table. The depth value is
then bilinearly interpolated at the corresponding RGB image points on the fly.
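A sketch of the pre-computed look-up table and the bilinear depth lookup (our own illustration; H is a 3x3 homography assumed already estimated from the identical point pairs):

```python
def build_lookup_table(H, width, height):
    """Apply the homography once per RGB pixel and cache the (fractional)
    TOF-image coordinates, so no projection happens on the fly."""
    table = {}
    for y in range(height):
        for x in range(width):
            u = H[0][0] * x + H[0][1] * y + H[0][2]
            v = H[1][0] * x + H[1][1] * y + H[1][2]
            w = H[2][0] * x + H[2][1] * y + H[2][2]
            table[(x, y)] = (u / w, v / w)  # homogeneous divide
    return table

def bilinear_depth(depth_map, u, v):
    """Bilinearly interpolate the depth map at fractional coordinates (u, v);
    assumes (u, v) lies strictly inside the map."""
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    return ((1 - fx) * (1 - fy) * depth_map[y0][x0]
            + fx * (1 - fy) * depth_map[y0][x0 + 1]
            + (1 - fx) * fy * depth_map[y0 + 1][x0]
            + fx * fy * depth_map[y0 + 1][x0 + 1])
```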
Fig.3: The identical points on the two images are used to calculate the homography
matrix that registers the RGB image with the TOF depth image.
(Figure panels: AR views with occlusion DISABLED and occlusion ENABLED.)
VALIDATION
Despite the outstanding performance of the TOF camera in speed and accuracy, its
biggest technical challenge is the modular error, since the receiver determines distance
by measuring the phase offset of the carrier. Ranges are reported modulo the maximum
range, which is determined by the RF carrier wavelength. For instance, the standard
measurement range of the CamCube 3.0 is 7 m (PMD, 2010). If an object happens to
be 8 m away from the camera, its distance is represented as 1 m (8 mod 7) on the depth
map instead of 8 m. This can produce incorrect occlusion in outdoor conditions, where
ranges can easily exceed 7 m. The authors have been looking into object detection,
segmentation, etc., to mitigate this limitation. For now, the experiment range is
intentionally restricted to within 7 m.
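The wrap-around can be stated in one line (our own illustration):

```python
def measured_range(true_distance, unambiguous_range=7.0):
    """A phase-based TOF camera reports distances modulo its unambiguous
    range (7 m for the CamCube 3.0), so an 8 m object reads as 1 m."""
    return true_distance % unambiguous_range
```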
The TOF camera is positioned approximately 7 m from the wall of a building, so that
ambiguous distances are ruled out. A small excavator model (credited to J-m@n from
the Google 3D Warehouse community) is positioned about 5 m away from the TOF
camera, and the author is standing in front of the excavator. Scenarios with the
occlusion function both enabled and disabled are shown. It is obvious that occlusion
provides much better spatial cues and realism for outdoor AR visual simulation.
REFERENCES
Acharya, T., & Ray, A. K. (2005). Image processing : principles and applications, John
Wiley & Sons, Inc.
Beder, C., Bartczak, B., & Koch, R. (2007). A Comparison of PMD-Cameras and
Stereo-Vision for the Task of Surface Reconstruction using Patchlets, Computer Vision
and Pattern Recognition, Minneapolis, MN, 1-8.
Behzadan, A. H., & Kamat, V. R. (2009a). Scalable Algorithm for Resolving Incorrect
Occlusion in Dynamic Augmented Reality Engineering Environments, Journal of
Computer-Aided Civil and Infrastructure Engineering, Vol. 25, No.1, 3-19.
Behzadan, H. A., & Kamat, R. V. (2009b). Automated Generation of Operations Level
Construction Animations in Outdoor Augmented Reality, Journal of Computing in
Civil Engineering, Vol.23, No.6, 405-417.
Berger, M.-O. (1997). Resolving Occlusion in Augmented Reality: a Contour Based
Vincenty, T. (1975). Direct and inverse solutions of geodesics on the ellipsoid with
application of nested equations, Survey Review, Ministry of Overseas
Development, 88-93.
Wloka, M. M., & Anderson, B. G. (1995). Resolving Occlusion in Augmented Reality.
Symposium on Interactive 3D Graphics Proceedings ACM New York, NY, USA, 5-12.
Appendix A
// texture2D() samples a sampler2D (DepthTex or IntensityTex here) at the given
// fragment texture coordinates and returns the texel value for the fragment.
vec4 texelDepth = texture2D(DepthTex, gl_TexCoord[1].xy);
// The final depth of the fragment, in the range [0, 1].
gl_FragDepth = texelDepth.r;
// A fragment shader replaces ALL per-fragment operations of the fixed-function
// OpenGL pipeline, so the fragment color has to be calculated here as well.
vec4 texelColor = texture2D(IntensityTex, gl_TexCoord[0].xy);
// The final color of the fragment.
gl_FragColor = texelColor;
A Visual Monitoring Framework for Integrated Productivity and Carbon
Footprint Control of Construction Operations
Arsalan Heydarian1 and Mani Golparvar-Fard2
1
Graduate Student, Vecellio Construction Engineering and Management Group, Charles E.
Via Department of Civil and Environmental Engineering, and Myers-Lawson School of
Construction, Virginia Tech, Blacksburg, VA; PH (540) 383-6422; FAX (540) 231-7532;
email: aheydar@vt.edu
2
Assistant Professor, Vecellio Construction Engineering and Management Group, Charles E.
Via Department of Civil and Environmental Engineering, and Myers-Lawson School of
Construction, Virginia Tech, Blacksburg, VA; PH (540) 231-7255; FAX (540) 231-7532;
email: golparvar@vt.edu
ABSTRACT
As buildings and infrastructure become more energy efficient, reducing and
mitigating construction-phase carbon footprint and embodied carbon is receiving more
attention. Government agencies are forming incentive-based regulations to control
these impacts and expressing control of carbon footprint as a principal project goal.
These regulations are placing requirements upon construction firms to find control
techniques that minimize carbon footprint without affecting the productivity of
operations. Nevertheless, there is limited research on integrated real-time
techniques for monitoring operations productivity and carbon footprint together.
This paper proposes a new framework and presents preliminary results in which (1)
construction operations are visually sensed through construction site imagery and
video streams; subsequently (2) equipment location and action are semantically
analyzed through an integrated 3D image-based reconstruction and appearance-based
recognition algorithm; (3) productivity and carbon footprint of construction
operations are measured through a new machine learning approach; and finally (4) for
each construction schedule activity, measured productivity and carbon footprint are
visualized.
INTRODUCTION
According to several research studies, the rise in Greenhouse Gas (GHG) emissions
is very likely the main reason for most of the recently observed increase in
temperature and other climate changes (EPA 2010, IPCC 2007). Globally, GHG
emissions from human activities increased by 26% from 1990 to 2005 (EPA 2010).
Over this period in the U.S., GHG emissions increased by 14% (EPA 2010).
Among these emissions, carbon dioxide, the main driver of the rise in temperature
(EPA 2010), accounts for three-quarters of total GHG emissions, with its
concentration increasing by 31% over the same period; meanwhile, a rise of 35% is
projected by the U.S. Department of Energy (Artenian et al. 2010, IPCC 2008).
The construction industry is considered one of the major contributors to these
GHG emissions (EPA 2010). According to the EPA, historical emissions from 14
industrial sectors in the U.S. account for 84% of industrial GHG emissions, and
the construction sector is responsible for 6% of total U.S. industrial-related GHG
emissions, making the construction sector the third-highest GHG emitter among
these sectors. Among all environmental impacts of construction processes (e.g.,
waste generation, energy consumption, resource depletion), emissions from
construction equipment account for the largest share (more than 50%) of the total
impact (Guggemos and Harvath 2006). Furthermore, embodied carbon (emissions from
the production and transportation of construction materials) accounts for another
8% of global GHG emissions and is mainly released within the first year of a
construction project.
To minimize GHG concentrations, the United Nations, many European countries, and
the state of California consider an 80% reduction in GHG emissions by 2050
necessary to prevent the most catastrophic consequences of climate change
(Kockelman et al. 2009, Luers 2007). Nonetheless, in the U.S., a new set of EPA
off-road diesel emissions regulations is rapidly becoming a concern for the
construction industry (ENR 2010) and has led the Associated General Contractors of
America and the California Air Resources Board to postpone enforcement of these
emission rules until 2014. Although these regulations are expected to reduce the
construction carbon footprint by a large factor, industry interest has been minimal
due to the high cost of the alternatives: (1) purchasing new equipment, and (2)
upgrading older machinery. These regulations are challenging construction firms to
find solutions that reduce the carbon footprint of their operations without
affecting the productivity and final cost of their projects. To meet these
ambitious reductions, a major cut in GHG emissions from construction operations and
from the manufacture and delivery of materials is necessary.
Among all decision alternatives, minimizing the idle time of construction
equipment reduces fuel use, extends engine life, and creates a safer work
environment for operators and workers on site. If the equipment is rented,
reducing idle time can also reduce the rental fee and the associated labor cost.
From a contractor's perspective, better operation planning and equipment
deployment through more accurate idle-time analysis will improve construction
productivity, leading to significant time and cost savings (Zou and Kim 2007).
Establishing and implementing idle-time reduction policies enables the
construction industry to take proactive action in carbon footprint reduction (EPA
2010). Despite its importance, reducing idle time for any onsite operation
requires proper assessment of productivity. It is important to first gather data
on the resources and processes used in each construction operation in order to
measure and analyze productivity as well as carbon footprint.
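As a back-of-the-envelope illustration of the idle-time argument (not from the paper), the idle fuel burn rate and the diesel CO2 factor below are assumed values:

```python
# Illustrative only: fuel and CO2 saved by cutting equipment idle time.
# The 4 L/h idle burn rate and 2.68 kg CO2 per litre of diesel are assumptions.

DIESEL_KG_CO2_PER_L = 2.68  # assumed diesel emission factor

def idle_savings(idle_hours: float, reduction: float,
                 idle_burn_l_per_h: float = 4.0) -> tuple[float, float]:
    """Fuel (L) and CO2 (kg) saved by cutting idle hours by `reduction` (0-1)."""
    fuel_saved = idle_hours * reduction * idle_burn_l_per_h
    return fuel_saved, fuel_saved * DIESEL_KG_CO2_PER_L

# Halving 100 idle hours on one machine:
fuel_l, co2_kg = idle_savings(idle_hours=100, reduction=0.5)
```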
Traditional data collection methods for productivity analysis (Oglesby et al. 1989)
include direct manual observations, i.e., a set of methods adapted from stop-motion
analysis in industrial engineering, and survey-based methods. Although these
methods provide beneficial insight into construction operations, implementing them
is time-consuming, manual, labor-intensive, and prone to errors (Su and Liu 2007).
The sheer amount of information also affects the quality of the analysis and makes
it subjective (Gong and Caldas 2009, Grau et al. 2009, Golparvar-Fard et al. 2009),
so many critical decisions are made based on faulty or incomplete information,
ultimately leading to project delays and cost overruns. Therefore, contractors only
attempt to collect productivity data at the
Figure 1. An overview of data and process in the proposed vision-based tracking and
integrated productivity and carbon footprint assessment framework.
RESEARCH BACKGROUND
In recent years, a number of research groups have focused on estimating,
monitoring, and controlling construction operation GHG emissions. Ahn et al.
(2010) present a model that estimates construction emissions using discrete
event simulation. Peña-Mora et al. (2009) present a framework for integrated
estimation and monitoring of GHG emissions and recommend the application of
portable emissions measurement systems. Lewis et al. (2009a) present the
challenges associated with quantifying nonroad construction vehicle emissions and
propose a new research agenda that specifically focuses on air pollution generated
by construction vehicles. Lewis et al. (2009b) study the impact of changing fuel
type and of Tier 0, 1, and 2 engines, and make recommendations through the
development and practical application of emission inventories for construction
fleet management. Artenian et al. (2010) demonstrated that lower construction
emissions could be achieved through intelligent, optimized GIS route planning for
construction vehicles. Shiftehfar et al. (2010) also propose a visualization
system that conveys the impact of construction operation emissions with a tree
metaphor. In the most recent study, Lewis et al. (2011) present a framework for
assessing the effects of equipment operational efficiency on the total pollutant
emissions of equipment performing a construction operation. Nonetheless, data
collection and analysis in most of these state-of-the-art approaches are not
automated. Furthermore, significant non-renewable energy is consumed in the
acquisition of raw construction materials and in their processing, manufacturing,
and transportation to the site, which these approaches do not consider. An
automated tracking system that can measure both construction operations and
initial embodied carbon footprints could result in faster and more accurate data
collection.
Similarly, in recent years a number of research groups have focused on automated
assessment of construction productivity and idle time. Gong and Caldas (2009),
Grau et al. (2009), and Su and Liu (2007) all emphasize the importance of
real-time tracking of construction operation resources. More specifically, Gong
and Caldas (2009) presented a vision-based tracking model for monitoring a bucket
in construction placement operations. Despite the effectiveness of the proposed
approach, equipment location and action are not simultaneously tracked. Zou and
Kim (2007) have also presented an image-processing approach that automatically
quantifies the idle time of a hydraulic excavator; however, this approach detects
equipment motion in 2D from color information and, because it relies on color
space, may not be robust to changes in scale, illumination, viewpoint, and
occlusion. To the best of the authors' knowledge, there is no existing research on
automated vision-based tracking that can simultaneously locate equipment in 3D and
identify its idle times and actions. Such an approach not only allows the
productivity of construction operations to be remotely and inexpensively measured,
but also enables onsite monitoring of construction carbon footprint. Integrated
with the initial embodied carbon, it enables construction practitioners to assess
the productivity and carbon footprint of their operations and decide on control
actions that maintain or maximize productivity while minimizing the overall carbon
footprint.
INTEGRATED PRODUCTIVITY & CARBON FOOTPRINT MONITORING
The goal of the proposed framework is to establish guidelines on how to visually
monitor construction equipment, increase the productivity of operations, and
reduce carbon footprint. To reach this goal, an initial study is conducted to
understand the time-cost-footprint relationship, equipment productivity, and
construction resources. An automated visual identification system that identifies
construction equipment location and action is developed; this tracking technique
allows a productivity analysis to be performed for each crew. To understand their
relationship for every activity and operation, a side-by-side productivity and
carbon footprint analysis is then performed. Hence, as an initial step, an
integrated 3D reconstruction and recognition algorithm is proposed to sense and
model the construction site.
In the proposed approach, (1) construction operations are visually sensed through
construction site video streams from fixed cameras; subsequently (2) equipment is
recognized and located in 2D frames. For this purpose (as shown in the process and
data model in Figure 1), these videos are further processed to spatially recognize
and locate equipment in 3D and geo-register their locations in the D4AR
(4-dimensional augmented reality) environment (Golparvar-Fard et al. 2010, 2009).
Equipment actions are recognized using an action recognition model. Throughout
this stage, for each piece of equipment (i), the location Li(x, y, z, time) and
action Action(Li) are monitored and reported. (3) Productivity and carbon
footprint of construction operations are measured through a new machine learning
approach; finally, (4) by integrating 4D Building Information Models, measured
productivity as well as operation and embodied carbon footprint are visualized for
each construction schedule activity. Figure 2 shows the IDEF-0 representation for
monitoring equipment actions, locations, and productivity.
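The per-equipment record Li(x, y, z, time) and Action(Li) described above could be represented as follows; this is a minimal sketch with illustrative names, not the authors' implementation:

```python
# Minimal sketch of the monitoring record: each observation stores the 3D
# location L_i(x, y, z, time) of a piece of equipment and its recognized action.

from dataclasses import dataclass

@dataclass
class Observation:
    equipment_id: str
    x: float
    y: float
    z: float
    time_s: float
    action: str  # e.g. "digging", "hauling", "idle"

def idle_time(track: list, dt_s: float) -> float:
    """Total idle seconds, assuming one observation every dt_s seconds."""
    return sum(dt_s for obs in track if obs.action == "idle")

# A hypothetical 4-sample track for one excavator:
track = [Observation("excavator-1", 0.0, 0.0, 0.0, float(t), a)
         for t, a in enumerate(["digging", "idle", "idle", "swinging"])]
```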
in every stage of the construction from the underlying building information model.
Since the D4AR model is linked to the construction schedule, it can also provide a
connection between embodied emissions and operations emissions.
Operation Carbon: To measure the operation carbon footprint, the activities to be
monitored are initially queried from the D4AR model. Similar to Lewis et al.
(2011), and based on the monitoring component and manufacturer equipment dataset,
for each equipment action the engine power (EP), operation hours (OD), emission
factor (EF), load factor (LF), on-site humidity, and the site's physical
characteristics are measured (Eq. 2). The overall effect of humidity varies by 1%
to 9% at different times of the day; for instance, lower emission rates are
expected in the evening and early morning, when humidity is higher and temperature
lower (Lindhjem 2004). Figure 3 presents the instantaneous and accumulative carbon
footprints and the reductions gained.
OE = Σi (emi × tmi)    (1)
em = EP × LF × EF    (2)
Total CF = OE + EE    (3)
where em is the Emission Module measurement of each action, and tm is the duration
of each action. OE is the operations emission and EE is the embodied emission.
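The emission bookkeeping of Eqs. 1, 2, and 3 can be sketched as follows, assuming a Lewis et al. (2011)-style rate formulation (em = EP × LF × EF, in g CO2 per hour); all numeric values below are illustrative, not from the paper:

```python
# Hedged sketch of the operation/embodied carbon bookkeeping. Engine power (EP),
# load factor (LF), and emission factor (EF) values are illustrative assumptions.

def action_emission(ep_hp: float, lf: float, ef_g_per_hp_h: float,
                    humidity_factor: float = 1.0) -> float:
    """Emission rate em of one action, g CO2 per hour (Eq. 2 style)."""
    return ep_hp * lf * ef_g_per_hp_h * humidity_factor

def operation_emission(actions: list) -> float:
    """OE = sum of em_i * tm_i over actions (em in g/h, tm in h), Eq. 1 style."""
    return sum(em * tm for em, tm in actions)

def total_footprint(oe_g: float, ee_g: float) -> float:
    """Operations emission plus embodied emission, Eq. 3 style."""
    return oe_g + ee_g

# Hypothetical digging action of a 150 hp excavator:
em_dig = action_emission(ep_hp=150, lf=0.59, ef_g_per_hp_h=536.0)
```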
Figure 3. (a) Instantaneous vs. accumulative carbon footprint (CF) for a single
operation; (b) instantaneous vs. accumulative CF for all operations.
Concept Study
The goal is to demonstrate the concepts of tracking, locating, and action
recognition of equipment. The operation includes one excavator and three dump
trucks. The D4AR model is used to provide 3D image-based reconstruction and BIM
registration (Figure 4a). Once the entire site is reconstructed, equipment is
tracked and located using the vision-based tracking (Figure 4b, c). Locating,
tracking, and identifying the different motions of equipment at a given time for
each operation enables action recognition for deformable equipment bodies (Figure
4d). The actions for the excavator included digging, hauling, dumping, swinging,
and idling; respectively, the recognized actions of each truck included moving,
filling, dumping, and idling. The 3D reconstructed scene and equipment locations
are visualized in a Euclidean 3D environment. Once the location and action of the
equipment are recognized, an operation chart is created for one cycle (Figure 5).
D4AR provides the material resources for each schedule activity, which allows the
embodied carbon to be calculated. The operation emission can also be calculated
using Eqs. 1, 2, and 3. The overall
instantaneous and accumulative emission rates are plotted (Figure 3). By comparing
the operation sequence chart with the overall carbon footprint, the user can
determine exactly how much carbon is emitted for a given activity.
1
Assistant Professor, Construction Management, Southern Polytechnic State
University, 1100 South Marietta Parkway, Marietta, GA, 30060; PH (678) 915-3715;
FAX (678) 915-4966; email: pmeadati@spsu.edu
2
Assistant Professor, Building Construction Program, Georgia Institute of
Technology, 280 Ferst Drive,1st Floor, Atlanta, GA, 30332; PH (404) 385-7609; FAX
(404) 894-1641; email: Javier.irizarrry@coa.gatech.edu
3
Assistant Professor, Construction Management, University of Arkansas at Little
Rock, 2801 South University Ave, Little Rock, AR, 72204; PH (404) 385-7609; FAX
(404) 894-1641; email: akakhnoukh@ualr.edu
ABSTRACT
INTRODUCTION
The phases of the project life cycle include planning, design, construction,
maintenance, and decommissioning. The construction phase can be divided into pre-
and post-construction stages. The traditional medium of communication among the
various phases of the life cycle is two-dimensional (2D) drawings. The
introduction of object-oriented computer-aided design (CAD) software facilitated
three-dimensional (3D) models as a medium of communication between the planning
and design phases and introduced the concept of Building Information Modeling
(BIM). Applications of these 3D models in the preconstruction stage include
resolving
constructability problems, space conflict problems, and site utilization (Koo &
Fischer, 2000; Chau et al. 2004). During construction, post-construction, and
maintenance, 2D drawings are still the most widely used. The as-built drawings
developed during the post-construction stage are 2D drawings, and they and the
related documents currently exist independently of each other. Thus, a 3D model
developed through BIM during the early stages of the lifecycle is not used after
the preconstruction stage. This paper presents an overview of the current status
of BIM and discusses the desired status that would facilitate its implementation
beyond the preconstruction stage. The objective of the study is to extend BIM
beyond the preconstruction stage and facilitate its implementation during the
operation and maintenance (O&M) phase of the project life cycle. The paper
presents different approaches for developing 3D as-built models and discusses
means of integrating information into the 3D as-built model to facilitate BIM
implementation during the O&M phase.
The information used in a facility's lifecycle can be categorized into graphical
and non-graphical data. Graphical data includes two-dimensional (2D) and
three-dimensional (3D) drawings; non-graphical data includes the other project
documents. The current status of graphical and non-graphical data in the
construction phase is shown in Figure 1: the 2D as-built drawings developed in the
post-construction stage and the related documents exist independently, and the 3D
models developed in the early stages of the facility's life cycle are generally
not used after the preconstruction stage. Implementing BIM requires a 3D product
model with relevant information associated to each component so that it can serve
as an information resource. Thus BIM implementation tends to stop at the
preconstruction phase, leaving out of the final model large amounts of relevant
data needed by facilities management for operations, maintenance, and possible
re-commissioning or decommissioning efforts.
Desired Status
Two reasons for not achieving BIM during the post-construction phase of a facility
are (1) the unavailability of a 3D as-built model and (2) the lack of integration
of operation and maintenance information into the 3D as-built model (Goedert &
Meadati, 2008). The desired status of information flow during the construction
phase to facilitate the implementation of BIM is shown in Figure 2. The existing
2D as-built drawings have to be replaced with 3D as-built models. Operation and
Maintenance data such as
In the FADAA, a 3D laser scanner, as shown in Figure 5, was used to produce a
dense point cloud. The required data collection is achieved by scanning from
various locations and merging the resulting dense point clouds; this data is then
used to develop a 3D model. This approach provides a very accurate and detailed 3D
model (Kwon et al. 2004). However, the 3D model obtained from the laser scanner is
not directly suitable for BIM implementation, since the captured model acts as a
single composite object and does not allow elements to be picked individually. The
scanned 3D model was therefore further used to develop the 3D as-built model for
BIM implementation.
electronic product information and then provides it for O&M purposes (East, 2007).
In this study, O&M information is integrated by selecting each component and
linking the documents by specifying their storage paths. The O&M information
integrated into the 3D as-built model included Microsoft Word files, PDFs,
photographs, and audio and video files. The steps involved in integrating
information into the 3D as-built model are the creation of new parameters and the
association of information with these parameters. In Revit, each element is
associated with predefined parameters, which are categorized into type parameters
and instance parameters. Type parameters control the properties of all elements of
that type, while instance parameters control the properties of individual
instances. Type and instance parameters are further categorized into different
groups. The data stored in each parameter is of type text, integer, number,
length, area, volume, angle, URL, material, or yes/no. In this project, since the
predefined type and instance parameters were inadequate, new parameters were added
to the elements. Revit facilitates the addition of new parameters as project
parameters or shared parameters. Only shared parameters are exported to databases
and can be shared by other families and projects, whereas project parameters are
not exported. Some of the newly added shared parameters include O&M Manuals,
Maintenance Schedule, Performance Test Videos, Specifications, Typical Section,
Construction Photos, Code Requirements, and Installation Videos. These parameters
are made to appear under the group name 'Other' in the type parameters list. The
URL data format is used for each parameter; this format is useful for establishing
the link between the respective files and components. The association of
information with the model components is accomplished by assigning the file paths
of the information to the parameters. This link, through the path stored in the
parameter, allows easy access to the required information. Figure 6 shows a
screenshot of the retrieved O&M manual, specifications, and performance test
videos of a door accessed from the 3D as-built model.
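The linking scheme described above (URL-type shared parameters whose values are document storage paths) can be sketched outside Revit as a plain lookup table; the component name, parameter names, and file paths below are hypothetical:

```python
# Illustrative sketch (not the Revit API): each component carries URL-type
# parameters whose values are storage paths to its O&M documents.

om_parameters = {
    "door-D101": {
        "O&M Manuals": r"\\server\om\doors\D101_manual.pdf",
        "Maintenance Schedule": r"\\server\om\doors\D101_schedule.pdf",
        "Installation Videos": r"\\server\om\doors\D101_install.mp4",
    },
}

def lookup(component: str, parameter: str) -> str:
    """Follow the stored path from a component's parameter to its document."""
    return om_parameters[component][parameter]
```

Retrieving a document is then a matter of selecting the component and following the stored path, mirroring the retrieval shown in Figure 6.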
CONCLUSION
Figure 6: Screen shot showing retrieved O&M information of a door using BIM
REFERENCES
Chau, K.W., Anson, M. and Zhang, J.P. (2004). “Four dimensional visualization of
construction scheduling and site utilization.” J. of Constr. Engrg and Mgmt, 130(4),
598-606.
East, W. E. (2007). “Construction Operations Building Information Exchange
(COBIE).” < http://www.wbdg.org/pdfs/erdc_cerl_tr0730.pdf> (March 5, 2011).
Goedert, J.D., and Meadati, P. (2008). “Integration of construction process
documentation into Building Information Modeling.” J. of Constr. Engrg and Mgmt,
137(7), 409-516.
Kim, C., Haas, C.T., and Liapi, K.A. (2005). “Rapid on site spatial information
acquisition and its use for infrastructure operation and maintenance.” Autom. Constr.,
14, 666-684.
Kim, K.J., Lee, C.K., Kim, J.R., Shin, E.Y., and Cho, Y.M. (2005). “Collaborative
work model under distributed construction environments.” Can. J. Civ. Eng., 32,
299-313.
Koo, B., and Fischer, M. (2000). “Feasibility study of 4D CAD in commercial
construction.” J. of Constr. Engrg and Mgmt, 126(4), 251-260.
Kwon, S.W., Bosche, F., Kim, C., Haas, C.T., and Liapi, K.A. (2004). Fitting range
data to primitives for rapid local 3D modeling using sparse range point clouds.
Autom. Constr., 13, 67-81.
Tanyer, A.M., and Aoudad, G. (2005). “Moving beyond the fourth dimension with an
IFC-based single project database.” Autom. Constr., 14, 15-32.
Simulating the Effect of Access Road Route Selection on Wind Farm
Construction
Mohamed El Masry1, Khaled Nassar2, and Hesham Osman3
1
Graduate Student and Research Assistant, Department of Construction and
Architectural Engineering, American University in Cairo,
m_elmasry@aucegypt.edu
2
Associate Professor, Department of Construction Engineering, American University
in Cairo, knassar@aucegypt.edu
3
Assistant Professor, Department of Structural Engineering, Faculty of Engineering,
Cairo University, 12613, Giza, Egypt, hesham.osman@gmail.com
ABSTRACT
Potential adverse environmental impacts are increasing due to the use of
fossil fuels to produce energy; renewable energy is now used to help overcome
this problem. One of the most widely used renewable energy sources is wind
energy, from which electricity is produced by wind farms. Onshore wind farm
construction can be very complicated due to the interaction among the various
disciplines involved in the construction process. To address this complexity, the
construction process of wind farms is simulated using the STROBOSCOPE simulation
tool to illustrate how selecting a certain route for the access roads produces
different volumes of cut and fill that can significantly affect construction cost
and time. Not only can the selected path affect cost and time, but the equipment
used can also play a vital role. Optimization of the number of equipment and
crews to reach an optimum construction cost and time is presented, along with a
case study illustrating the above.
INTRODUCTION
The increase in global demand for renewable energy has created a booming wind
energy market. By the end of 2009, worldwide wind power production capacity using
wind turbine generators reached almost 157.9 gigawatts (GW), with 38.3 GW added in
2009 alone. Wind energy production capacity grew by 31%, the highest rate since
2001, and 54 GW were predicted to be added in 2010 (WWEA, 2009). This ambitious
growth in wind power requires a significant ramp-up in all links of the wind
turbine supply chain. Wind turbine construction is one of the most critical yet
under-investigated steps in that supply chain. It is a repetitive construction
process that mainly involves constructing access roads to connect the locations of
the wind towers to be erected and lifting large prefabricated components to great
heights in high-wind conditions. Thus, contractors face challenging work
environments that impact the time, cost, and safety of construction operations.
Based on the exploratory research in wind turbine construction presented in this
paper, it is important to clearly identify the scope of this work and the areas
where further investigation is necessary. The scope of this paper can be delimited
as follows:
1- Wind Farm Scope: The scope of construction activity in a wind farm generally
encompasses three main elements: site infrastructure (access roads, crane pads,
and tower foundations), wind turbines, and the electrical substation and grid
networks. This paper will focus on the construction of the access roads.
2- Wind Farm Location: The construction processes described in this paper are for
on-shore wind turbines. Details regarding off-shore wind farms are beyond the scope
of this work.
3- Project Stage: This paper will focus on the development of tools for the planning
of wind turbine construction activities. It is expected that tools can also be developed
for monitoring the construction process and planning the maintenance and
rehabilitation of wind turbines.
When selecting the appropriate site for a wind farm, scheduling consideration
should be given to site access and site construction. An important factor
affecting project schedule and cost is the transportation and road system within
the wind farm: roads have to be constructed so that they can adequately bear the
loads of wind turbine parts and equipment. A framework is presented to guide
contractors in wind farm construction in selecting the best route from which to
start the construction operation and choosing the optimum number and combination
of resources.
SIMULATION MODULE
Simulation can be considered a powerful tool because it imitates what happens
in reality to a certain level of accuracy and reliability without extra cost.
STROBOSCOPE (Martinez, 1996) is used as the simulation tool to represent the
real-world tasks. STROBOSCOPE represents activities by rectangular shapes called
“combi” and “normal” nodes, while resources are represented by circular shapes
named “queues”. Each activity can take an argument called a semaphore to control
its start and end. To make STROBOSCOPE start activities at the beginning of a
working day and end them by the end of the day, a semaphore was declared using the
following syntax:
SEMAPHORE workingHours;
The road segments were defined in a queue of type “characterized resource”. The
characterized resource has a defined property named “cf”, which takes a value of
zero or one and defines whether a section is cut or fill: cut sections were given
a cf value of zero, and fill sections a value of one. The cut and fill quantities
were entered in a property named “value”, which expresses the volume to be cut or
filled. Each segment was given its own cf and value, depending on whether the
section is cut or fill and on the volume involved; this was done using a property
of the characterized resource called subtype, which defines the different stations
(st1, st2, st3, etc.). The syntax was written as follows:
CHARTYPE Stations cf value; /ST
SUBTYPE Stations st1 1 200;
SUBTYPE Stations st2 0 400;
SUBTYPE Stations st3 1 800;
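The CHARTYPE/SUBTYPE declarations above can be mirrored in plain terms: each station carries a cf flag (1 = fill, 0 = cut) and a volume, and the Fill and Cut queues are populated by filtering on cf. A minimal sketch:

```python
# Plain-Python mirror of the STROBOSCOPE characterized resource above:
# station -> (cf flag, volume); cf = 1 marks a fill section, cf = 0 a cut section.

stations = {"st1": (1, 200), "st2": (0, 400), "st3": (1, 800)}

def route(stations: dict, cf: int) -> dict:
    """Route stations into the Fill (cf == 1) or Cut (cf == 0) queue."""
    return {name: vol for name, (flag, vol) in stations.items() if flag == cf}

fill_queue = route(stations, 1)  # st1 and st3
cut_queue = route(stations, 0)   # st2
```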
Multiple replications of the model were run for different numbers of resource
alternatives, and different simulation times were obtained, as shown in Figures 2
and 3.
Table 1: Description of the activities and resources used in the STROBOSCOPE model

Abbreviation | Description | Remarks
StgOut | Activity: Setting Out | Survey works and setting out of the points that define the premises of the roads and the project
Overlng | Activity: Overlaying | Overlaying of the aggregate base on the road segments after filling and cutting
FllgSoil | Activity: Filling Soil | Filling segments that are required to be filled, using a fill truck and a bulldozer
Wtrng | Activity: Watering | After overlaying the aggregate base, watering is necessary for compaction to reach optimum moisture content
Fill | Queue: Segments to be filled | Fill segments, into which stations were routed using dynaforks with the expression cf.stations=1
Cut | Queue: Segments to be cut | Cut segments, into which stations were routed using dynaforks with the expression cf.stations=0
RdSgmnts | Queue: All segments to be cut and filled | Total road segments that are required to be cut or filled
CtgSoil | Activity: Cutting soil | Cutting the soil using a loader that loads the trucks with the cut soil
RdCmpct | Queue: Road Compaction | Road segments ready to be compacted after grading and watering
AgBsCmpct | Activity: Aggregate Base Compaction | Compacting the aggregate base of the roads
FillCmpctn | Activity: Fill Compaction | Compacting the fill segments and getting them ready for the aggregate base
TltScn | Activity: Tilting sections | Tilting the tower's nacelle sections to be lifted
SecCrn | Queue: Secondary crane | Secondary crane used in installing the wind tower
PBladeHub | Activity: Positioning Blade Hubs | Positioning and bolting of the blade hub to the turbine at the tip of the wind tower
LftgSec | Activity: Lifting sections | Lifting the nacelles onto the tower
PstngSc | Activity: Positioning Section | Positioning and bolting of the nacelles
LBladeHub | Activity: Lifting Blade Hub | Lifting the blade hub to be bolted to the turbine
Ticks | Queue: Ticks | Ticks of the clock, used to obtain eight-hour working shifts in the day
EightHours | Activity: Eight Hours | A combi that constrains the day to eight hours
OPTIMIZATION MODULE
Figure 1 shows a schematic diagram of the alternatives that could be optimized and the different approaches that could be used in the construction of wind farms. The first alternative is the different paths that could be available in the construction of the access roads. The second alternative is the number of crews used in road construction, where one or more road crews can be used to do the overlaying, watering, and grading. The third alternative that could be analyzed is the equipment used in hauling the dirt in earthmoving; for this alternative several scenarios could be found, such as using scrapers, or loaders and trucks, or dozers, depending on how long the hauling distance is. The fourth and last alternative is the cranes and how they would be used. There are two approaches: the first is assembling the blade hub on the ground, while the other is assembling it on the tower after erection, which would require cranes of higher capacity; this was covered in other research (Atef, 2010).
Figure 2: Different paths and their effect on construction time (simulation time in days) using different numbers of trucks
Figure 3: Different paths and their effect on construction time using different numbers of cranes
It was found that the equipment used in hauling (i.e., loaders, dozers, and hauling trucks) could decrease the duration of road construction significantly, while equipment such as water and aggregate trucks had less effect on the simulation time. The effect of using different numbers of cranes for lifting tower parts with different paths is shown in Figure 3, and the effect of changing the number of hauling trucks with different paths is shown in Figure 2. As shown in Figures 4 and 5, simulation time decreases with the increase in the number of trucks for different numbers of loaders and dozers. The above shows that many alternatives are involved when a decision is made. The total number of alternatives can yield a huge number of combinations; therefore, optimization was used in an attempt to reduce processing time and improve the quality of solutions.
Figures 4 and 5: Simulation time (days) versus number of trucks (1-5), for one and two dozers (Figure 4) and for one and two loaders (Figure 5)
The product of simulation time and total cost was set as the objective function to be optimized. Different particles were initiated to search for the optimum solution. Each particle represents a different combination of resource alternatives that can affect the simulation time (cranes, loaders, compactors, graders, trucks, road crews, and volumes of cut and fill representing different construction sequences). Given all the previously mentioned resources, cost and time were obtained and PSO was performed. Convergence was achieved and the Pareto optimal front was drawn as shown in Figure 6.
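A minimal PSO sketch of this search is given below; the simulate() and cost() functions are toy placeholders standing in for the STROBOSCOPE model (the real objective evaluates the simulation), and the particles carry only two decision variables (trucks and loaders) for brevity.

```python
# Hypothetical PSO sketch: particles are continuous positions decoded to
# integer resource counts; the objective is the product of (placeholder)
# simulation time and total cost, as in the paper.
import random

random.seed(1)

BOUNDS = [(1, 5), (1, 2)]  # trucks 1-5, loaders 1-2 (ranges assumed)

def decode(pos):
    # map a continuous particle position to bounded integer resource counts
    return tuple(max(lo, min(hi, round(x))) for x, (lo, hi) in zip(pos, BOUNDS))

def simulate(trucks, loaders):
    return 600.0 / (trucks * loaders)       # placeholder duration (days)

def cost(trucks, loaders):
    return 1000.0 * (trucks + 2 * loaders)  # placeholder total cost

def objective(pos):
    t, l = decode(pos)
    return simulate(t, l) * cost(t, l)      # product of time and cost

n_particles, n_iter = 10, 30
pos = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n_particles)]
vel = [[0.0, 0.0] for _ in range(n_particles)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=objective)[:]

for _ in range(n_iter):
    for i in range(n_particles):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            # standard velocity update: inertia + cognitive + social terms
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
            if objective(pbest[i]) < objective(gbest):
                gbest = pbest[i][:]

print(decode(gbest))  # best (trucks, loaders) found under the toy objective
```
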
Figure 6 shows four different alternatives with different resources and equipment. Points having the same marker represent a certain path taken in the construction of the wind farm, while the different points of the same marker represent different combinations of equipment alternatives. For the given alternatives it was found that the best solution was the third path (the path represented by the triangle) using two cranes, three road crews, two dozers, and two loaders with five hauling trucks, while keeping the other resources the same.
Figure 6: Total cost (L.E.) versus simulation time (days), showing the Pareto optimal front; markers S1 and S2 denote the two selected solutions
Table 2 summarizes the alternatives that were optimized and the optimum numbers to use based on the resulting Pareto front.
Table 2: Number of alternatives to use

Alternatives to use | S1 | S2
Path Number | 3 | 3
Number of Loaders | 2 | 2
Number of Trucks | 5 | 5
Number of Dozers | 2 | 2
Number of Graders | 1 | 1
Number of Road Crews | 3 | 3
Number of Main Cranes | 2 | 1
Number of Compactors | 1 | 1
Number of Secondary Cranes | 2 | 1
CONCLUSION
There is a global boom in using alternative energy resources, leading to an increase in wind farm construction. Many trades involved in wind farm construction could interact in an efficient way to reduce its cost and time. A framework was introduced to help contractors perform time-cost trade-off analysis to optimize resource utilization in wind farm construction. A Pareto optimal frontier is introduced that would help contractors decide how many resources to use and assess the effect of this decision on cost and time. The multi-objective optimization was performed to obtain the previously mentioned Pareto front using particle swarm optimization (PSO) with an objective function that is the product of the cost and the simulation time of construction. The PSO algorithm was plugged into the STROBOSCOPE simulation tool, which imitates the processes in reality. By running the original model several times using different alternatives, it was found that using the optimized solution set would decrease the duration by 40% but would increase the total cost by 25%.
ABSTRACT
While there are mature data models for exchanging semantically rich building
models, no means for exchanging bridge models using a neutral data format exist so
far. A major challenge lies in the fact that a bridge’s geometry is often described in
parametric terms, using geometric constraints and mathematical expressions to
describe dependencies between different dimensions. Since the current draft of IFC-
Bridge does not provide a parametric geometric description, this paper presents a
possible extension and describes in detail the object-oriented data model proposed to
capture parametric design including geometric and dimensional constraints. The
feasibility of the concept has been verified by actually implementing the exchange of
parametric models between two different computer-aided design (CAD) applications.
INTRODUCTION
Planning and realizing roadways and bridges are important aspects of
infrastructure construction projects. Nowadays, road and bridge models are usually
generated using completely different modeling systems. However, since bridges form
part of the roadway, a bridge’s geometry depends significantly on the course of the
carriageway, i.e. its main axis. Small modifications in the road design occur
frequently during the planning process. When a conventional computer-aided design
(CAD) system is used to create the bridge model, these modifications involve a
tedious, time-consuming manual adaptation of the bridge’s geometry. Researchers
belonging to the research cluster ForBAU - “The Virtual Construction Site”
(Borrmann et al., 2009), have accordingly been investigating the application of
parametric CAD technology, which makes it possible to model dependencies
between geometric objects explicitly (Hoffmann and Peters, 1994; Sacks et al., 2004).
With the help of this technology the bridge model can be coupled with the axis of the
carriageway, enabling a fast and automatic update whenever the roadway design is
Figure 2. The bridge’s superstructure is coupled with the road’s main axis.
A more detailed example of a parameterized design is illustrated in Figure 3. The sketch describes the superstructure of a beam bridge consisting of geometric objects, i.e. lines and points, the geometric constraints Parallel and Perpendicular, and design parameters h1 to h8 and b1 to b6. A complete list of the geometric and dimensional constraints commonly used for bridge modeling is depicted in Figures 4 and 5.
Figure 6. UML diagram of the proposed data structure for representing parametric
geometry
including the definition of entity types (the equivalent of a class) and attributes
representing the common properties of the objects belonging to the same entity type.
While support for STEP data is rather limited, reading and writing XML documents
is supported by a large variety of libraries available for almost every programming
language. For an initial evaluation, the proposed data structure was therefore
implemented as an XML schema.
To illustrate the proposed data structure, Figure 7 depicts a specimen sketch
and the corresponding XML instance file. The points of the sketch P_1 to P_5 are
defined by means of explicit coordinates. The lines Line_1 to Line_5 are defined
using the respective start and end points. The geometric constraint parallel
(ParallelGeometricConstraint) is associated with Line_2 and Line_4. Similarly, the
perpendicular constraint (PerpendicularGeometricConstraint) is associated with
Line_2 and Line_3. The vertical dimensional constraint
(VerticalDimensionalConstraint) refers to the design dimension p4 to which the
string value “8.7” has been assigned.
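As a rough illustration of such an instance file, the following Python sketch assembles a comparable XML fragment. The element and attribute names used here (Sketch, Point, Line, first, second, dimension) are assumptions, since the paper's XML schema is not reproduced in this text; only the class names ParallelGeometricConstraint and VerticalDimensionalConstraint come from the description above.

```python
# Illustrative only: build a small parametric-sketch instance file with
# points, lines, a geometric constraint, and a dimensional constraint.
import xml.etree.ElementTree as ET

sketch = ET.Element("Sketch")
# points defined by explicit coordinates (values are arbitrary examples)
for name, x, y in [("P_1", 0, 0), ("P_2", 10, 0), ("P_3", 10, 5)]:
    ET.SubElement(sketch, "Point", id=name, x=str(x), y=str(y))
# lines defined by their start and end points
ET.SubElement(sketch, "Line", id="Line_2", start="P_1", end="P_2")
ET.SubElement(sketch, "Line", id="Line_4", start="P_2", end="P_3")

# geometric constraint associated with two existing lines
ET.SubElement(sketch, "ParallelGeometricConstraint",
              first="Line_2", second="Line_4")
# dimensional constraint referring to a design dimension with a string value
dim = ET.SubElement(sketch, "VerticalDimensionalConstraint", dimension="p4")
dim.text = "8.7"

print(ET.tostring(sketch, encoding="unicode"))
```
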
REFERENCES
Arthaud, G. and Lebegue, E. (2007). IFC-Bridge V2 Data Model, edition R7.
Borrmann, A., Ji, Y., Wu, I-C., Obergrießer, M., Rank, E., Klaubert, C., Günthner W.
(2009). “ForBAU - The Virtual Construction Site Project”. In Proc. of the 26th CIB-W78
Conference on Managing IT in Construction.
Eastman, C., Teicholz, P., Sacks, R., Liston, K. (2008). BIM Handbook: A Guide to Building
Information Modeling for Owners, Managers, Designers, Engineers and Contractors.
Wiley Press Inc.
Hoffmann, C. M., Peters, J. (1994). “Geometric Constraints for CAGD”. In Proc. of
Mathematical Methods for Curves and Surfaces, Vanderbilt University Press.
International Organization for Standardization (1995). ISO 10303 - Standard for the
exchange of product model data.
International Organization for Standardization. (2005a). ISO/PAS 16739:2005- Industry
Foundation Classes, Release 2x, Platform Specification.
International Organization for Standardization. (2005b). ISO 10303-108:2005 – Part 108:
Parameterization and constraints for explicit geometric product models
Ji, Y., Obergrießer, M., Borrmann A. (2010). “Development of IFC-Bridge-based
Applications in Commercial CAD Systems (in German)”. In Proc. of the 22nd Forum
Bauinformatik, Technical University Berlin, Germany.
Katz, C. (2008). “Parametric Description of Bridge Structures”. In Proc. of the IABSE
Conference on Information and Communication Technology for Bridges, Buildings and
Construction Practice, Helsinki.
Pratt, M. J., Anderson, B. D., Ranger, T. (2005). “Towards the Standardized Exchange of
Parameterized Feature-based CAD Models”. Computer-aided Design, vol. 37.
ProSTEP. (2006). Final Project Report – Parametrical 3D data Exchange via STEP,
ProSTEP iViP Association.
Sacks, R., Eastman, C. M., Lee, G. (2004). “Parametric 3D Modeling in Building
Construction with Examples from Precast Concrete”. Automation in Construction, 13(3).
Shah, J. J. and Mäntylä, M. (1995). Parametric and Feature-based CAD/CAM - Concepts,
Techniques, Applications, Wiley Press Inc.
Yabuki, N., Lebegue, E., Gual, J., Shitani, T., Li, Z-T. (2006). “International Collaboration
for Developing the Bridge Product Model IFC-Bridge”. In Proc. of the International
Conference on Computing and Decision Making in Civil and Building Engineering.
An Agent-Based Approach to Model the Effect of Occupants’ Energy Use
Characteristics in Commercial Buildings
ABSTRACT
INTRODUCTION
affecting the energy consumption in buildings, and the anticipated savings in energy
usage if occupant behavior was modified (Emery and Kippenhan 2006; Meier 2006;
Staats et al. 2000). These studies showed that changes in occupants’ behavior can result in energy savings in excess of 40 percent in the building under consideration when compared to buildings of similar type.
Energy modeling techniques exist and are widely used in the building sector to
predict energy consumption during the operational phase of buildings. However, the
estimates obtained from these tools typically deviate by more than 30 percent from
actual energy consumption levels (Yudelson 2010; Dell’lsola and Kirk 2003;
Soebarto and Williamson 2001). This deviation can be mainly attributed to the approach used by these modeling tools, which account for building occupants as static elements with constant energy use characteristics. The term ‘occupants’ energy use characteristics’ is defined as the presence of people in the premises and the actions they perform (or do not perform) that influence the level of energy consumption (Hoes et al. 2009). Thus, these tools assume that all occupants consume energy at the same rate and that these rates are constant over time (Hoes et al. 2009; Jackson 2005). Therefore, by accounting for building occupants as dynamic entities with different and changing energy consumption characteristics over time, better energy consumption estimates can be obtained (Hoes et al. 2009). This can be achieved by using agent-based modeling, a technique capable of simulating almost all behavioral aspects of agents (XJ Technologies 2009). In this research, agents represent the building occupants. Consequently, the qualitative behavioral aspects of occupants can be represented in a quantitative way.
BACKGROUND
Energy simulation software including eQuest, Energy-10, TRNSys, and Energy Plus,
which are commonly used in the industry, are very sensitive to occupancy related
inputs such as energy consumption rates and building schedules (Turner and Frankel
2008). The Clevenger and Haymaker (2006) study on the impact of building
occupancy on energy simulation models showed that estimated energy consumption
can change by more than 150 percent when occupants with different energy
consumption rates were considered.
Not only is it important to model occupants with different energy consumption patterns, it is also essential to model and predict their change in behavior over time
(Jackson 2005). For example, an occupant might change his/her energy usage
characteristics by adopting more energy efficient practices or on the contrary, adopt
bad consumption habits known as the ‘rebound effect’ (Sorrell et al. 2009). Many
factors could lead to such changes in energy consumption behavior such as ‘green’
social marketing campaigns or financial incentives that encourage energy efficiency
(Jackson 2005). Another important factor is the ‘word of mouth’ effect, which is
considered to be a very influential channel of communication (Allsop et al. 2007).
The ‘word of mouth’ effect is a marketing concept defined as a type of informal,
person-to-person communication between a perceived non-commercial communicator
and a receiver regarding a brand, a product, an organization or a service (Harrison-
Walker, 2001). This study mainly focuses on this factor, representing the influence
that each occupant exerts on the other occupants in the same room to change their
energy consumption habits.
Agent-based modeling has already been used to assist energy simulation software for buildings. More specifically, Erickson et al. (2009) used agent-based modeling to model room occupancy in buildings in order to optimize HVAC loading and hence avoid typical oversizing problems. This research showed that by simulating occupancy usage patterns, HVAC energy usage can be reduced by around 14 percent.
Another example where agent-based modeling was used to assist HVAC design was
presented by Li et al. (2009). In this study, the occupancy of an emergency
department of a health care facility was first modeled. The obtained numbers were
then used to optimize the sizing of the HVAC system, avoiding unnecessary or
excessive air conditioning loads. This organizational simulation model showed that
the required capacity of the ventilation system might change by as much as 43 percent
when a building’s occupancy is properly modeled.
Literature on assisting energy simulation models with agent-based modeling tends to focus mainly on HVAC calculations. While HVAC accounts for 31
percent of the total energy consumption for an average U.S. commercial building,
other energy consumption sources such as lighting, computers, and hot water supply
account for more than 33 percent (InterAcademy Council 2007). As a consequence,
there is a need to broaden the scope of study to include energy consumption sources
other than HVAC, while accounting for the occupancy effect on the levels of energy
consumption.
Therefore, the main objective of this paper is to present a new approach to energy estimation in buildings that uses agent-based modeling to account for different occupant energy use characteristics and their change over time, and finally to calculate energy consumption levels that reflect this dynamic aspect of occupancy.
METHODOLOGY
The methodology that was used to achieve the study’s objectives consists of three
main steps: (1) Define different occupants’ energy use characteristics and obtain
corresponding energy consumption rates, (2) simulate occupants’ interaction and the
change in their behavior over time, (3) combine the results and estimate total energy
consumption.
For the first step, three categories of occupants were defined. First, the ‘High
Energy Consumers’ category represents occupants that over-consume energy.
Second, occupants that make minimal efforts towards energy savings form the
‘Medium Energy Consumers’ category. Finally, ‘Low Energy Consumers’ represent
occupants that use energy efficiently. These assumptions were made based on a study
by Accenture (2010) that classified energy consumers into different categories based
on their attitude toward energy management programs. For each of the three defined
categories, an energy consumption rate was then obtained through literature review
and through simulations using traditional energy software (e.g., EnergyPlus, eQuest,
etc.). As a result, the change in behavior was translated into a change in energy
consumption levels.
The second step consists of an agent-based model that simulates the interactions
of the building occupants and the resulting change in their energy use characteristics.
This change in behavior is shown by continuously calculating the number of
occupants in each category: high, medium, and low energy consumers.
The last step is to combine the previous results by applying the energy
consumption rates obtained from Step 1 to the changing behavior simulated in Step 2
and finally calculate dynamic energy estimates that account for the differences and
changes in occupants’ behavior.
Figure 1 shows the flow chart of the agent-based model summarizing the three
stated phases. In step 1, energy consumption rates were obtained from traditional
energy simulation software. Step 2 represents the interaction of agents and the
potential change in behavior that is translated into the move of an agent from one category to another (e.g., from ‘High Energy Consumers’ to ‘Medium or Low Energy Consumers’ and vice versa). Finally, step 3 combines the obtained results and
generates the total energy consumption estimates.
For each time step, the occupants start by interacting. Then in the case of a
successful influence, certain occupants change behavior and the model updates the
number of occupants in the three categories: high, medium, and low energy
consumers. These numbers are then combined with the energy consumption rates from Step 1, and total energy consumption levels are calculated for this time step (Step 3).
Once this iteration is completed, the model moves to the next time step and repeats the cycle until the total simulation time is reached.
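The per-time-step loop described above can be sketched in Python as follows. The rates, influence probabilities, initial category counts, and conversion rule are illustrative assumptions, not the values or mechanics of the authors' model.

```python
# Sketch of the three-step loop (assumed parameters): each occupant belongs
# to a category with an energy rate; at every time step an occupant may
# convert a random peer one category toward lower consumption (word of
# mouth), and total consumption is accumulated from the current counts.
import random

random.seed(42)

RATES = {"high": 3.0, "medium": 2.0, "low": 1.0}       # kWh/step (assumed)
INFLUENCE = {"high": 0.0, "medium": 0.05, "low": 0.2}  # conversion chance

occupants = ["high"] * 3 + ["medium"] * 4 + ["low"] * 3  # Step 1 categories
totals = []

for step in range(60):  # e.g. 60 monthly time steps
    # Step 2: interaction -- each occupant may pull one peer down a category
    for i, cat in enumerate(list(occupants)):
        j = random.randrange(len(occupants))
        if j != i and random.random() < INFLUENCE[cat]:
            if occupants[j] == "high":
                occupants[j] = "medium"
            elif occupants[j] == "medium":
                occupants[j] = "low"
    # Step 3: apply the Step 1 rates to the current category counts
    totals.append(sum(RATES[c] for c in occupants))

print(totals[0], totals[-1])  # consumption falls as occupants convert
```
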
An experimental energy simulation model was built for the purpose of this study, consisting of a 1000 sq ft graduate student office in a university building,
Then, for each occupant category (high, medium, and low energy consumers), energy consumption levels were also obtained using eQuest by running different experiments tailored to each type of behavior. The results from these tests
are shown in Figure 3, where the red, orange, and green curves represent respectively
the energy consumption rates of high, medium, and low energy consumers. Similar
graphs were obtained for the gas consumption.
The next step is to model the change in energy consumption characteristics over
time (Figure 4), where the 10 students in the room interact and possibly influence
each others’ behaviors. At the start of the simulation, 3 of the students were assumed
to be ‘High Energy consumers’, 4 ‘Medium Energy Consumers’, and 3 ‘Low Energy
Consumers’. In this example, low energy consumers were assumed to have a higher
level of influence than the other categories. This means that the low energy
consumers were the most efficient in influencing other occupants to change their
behavior and adopt low energy use behavior.
As shown in Figure 4, as the simulation advanced, the low energy consumers, represented by the green line, successfully converted all of the medium and high energy consumers to the low consumer category. More specifically, all 10 occupants of the room became low energy consumers after the 48th month. The low energy consumers attracted other occupants at a faster rate than they were being attracted; therefore, their number kept increasing until all of the occupants were converted to their category.
After calculating the energy consumption rates and the changes in occupancy
behavior over time, electric and gas consumptions were calculated by applying the
rates of Figure 3 to the number of occupants in each category over time from Figure
4. Figure 5 summarizes the total electric and gas consumption levels.
As shown, there was a significant drop of 20 percent in the total electric consumption over the simulation time of 60 months. This was expected since the number of low energy consumers increased over time as high and medium energy consumers were converted. With the room occupants becoming low energy consumers, their energy consumption decreased over time (Figure 5).
On the other hand, the drop of only one percent in the gas consumption was less significant, since the occupants did not have direct control over the major portion of
the gas consumption sources. In fact, as was previously shown in Figure 2, the
occupants did not control the HVAC heating system, which accounted for 88 percent
of the gas consumption. So even though behavior changed and people were
consuming less energy, this did not reflect on the gas consumption as it did on the
electric consumption, where occupants directly controlled 77 percent of the total electric consumption (Figure 2).
CONCLUSION
REFERENCES
Emery, A. and Kippenhan, C. (2006). “A long term study of residential home heating
consumption and the effect of occupant behavior on homes in the Pacific Northwest
constructed according to improved thermal standards.” Energy, 31 (5), 677–693.
Energy Information Administration (EIA) (2010). “Annual Energy Review.” DOE/EIA –
0384, August 2010. <http://www.eia.doe.gov/aer/pdf/aer.pdf> (Sep. 3, 2010).
Erickson, V. L., Lin, Y., Kamthe A., Brahme, R., Surana, A., Cerpa, A.E., Sohn, M.D.,
and Narayanan, S. (2009). “Energy Efficient Building Environment Control
Strategies Using Real-time Occupancy Measurements”.
<http://andes.ucmerced.edu/papers/Erickson09a.pdf> (Jun. 1, 2010).
Harrison-Walker, L.J. (2001). “The measurement of word-of-mouth communication and
an investigation of service quality and customer commitment as potential
antecedents”. Journal of Service Research 4(1):60-75.
Hawthorne, C. (2003). “Turning Down the Global Thermostat.” Metropolis Magazine,
<http://www.metropolismag.com/story/20031001/turning-down-the-global-
thermostat>, (Oct. 1, 2009).
Hoes, P., Hensen, J. L. M., . Loomans, M. G. L. C., de Vries, B., and Bourgeois, D.
(2009). “User Behavior in Whole Building Simulation”. Energy and Buildings,
Elsevier 41:295-302.
InterAcademy Council (2007). “Lighting the Way: Towards a Sustainable Energy
Future”. Technical Report, InterAcademy Council, Amsterdam, The Netherlands.
Jackson, T. (2005). “Motivating Sustainable Consumption: a review of evidence on
consumer behaviour and behavioural change”. Technical Report, Centre for
Environmental Strategy, University of Surrey, Surrey, United Kingdom.
Li, Z., Yeonsook, H. and Godfried A. (2009). “HVAC Design Informed by
Organizational Simulation”. Proceedings of the Eleventh International IBPSA
Conference, Glasgow, Scotland.
<http://www.ibpsa.org/proceedings/BS2009/BS09_2198_2203.pdf> (Sep. 15, 2010).
Meier A. (2006). “Operating buildings during temporary electricity shortages.” Energy
and Buildings, 38(11), 1296–1301.
Soebarto, V. I. and Williamson, T.J. (2001). “Multi-criteria Assessment of Building
Performance: Theory and Implementation”. Building and Environment, Elsevier
36(6):681-690.
Sorrell, S., Dimitropoulos, J. and Sommerville, M. (2009). “Empirical Estimates of the
Direct Rebound Effect: A Review”. Energy Policy, 37 (4) 1356–1371
Staats, H., van Leeuwen, E. and Wit., A. (2000). “A longitudinal study of informational
interventions to save energy in an office building.” Journal of Applied Behavior
Analysis, 33 (1) 101–104.
Turner, C. and M. Frankel (2008). “Energy Performance of LEED for New Construction
Buildings”. Technical Report, New Buildings Institute, Vancouver, WA.
<http://www.usgbc.org/ShowFile.aspx?DocumentID=3930> (Jun. 1, 2010).
United Nations Environment Programme (2007). “Buildings Can Play Key Role In
Combating Climate Change”.
<http://www.unep.org/Documents.Multilingual/Default.asp?DocumentID=502&Artic
leID=5545&l=en> (May 18, 2010).
XJTechnologies (2009). Anylogic Overview. <http://www.xjtek.com/anylogic/overview/>
( Jun. 1, 2010).
Yudelson, J. (2010). “Greening Existing Buildings”. Green Source/McGraw-Hill, New
York, NY.
Incorporating Social Behaviors in Egress Simulation
Mei Ling Chu1, Xiaoshan Pan2, Kincho Law3
1 Department of Civil and Environmental Engineering, Stanford University, Stanford, CA 94305; PH (650) 723-4121; FAX (650) 723-7514; email: mlchu@stanford.edu
2 Tapestry Solutions, 2975 Mcmillan Avenue # 272, San Luis Obispo, CA 93401; PH (805) 541-3750; FAX (805) 541-8296; email: xpan@stanfordalumni.org
3 Department of Civil and Environmental Engineering, Stanford University, Stanford, CA 94305; PH (650) 725-3154; FAX (650) 723-7514; email: law@stanford.edu
ABSTRACT
Emergency evacuation (egress) is considered one of the most important issues in the design of buildings and public facilities. Given the complexity and variability of an evacuation situation, computational simulation tools are often used to help assess the performance of an egress design. Studies have revealed that social behaviors can have a significant influence on the evacuating crowd during an emergency. The challenges in designing safe egress thus include identifying the social behaviors and incorporating them in the design analysis. Even though many egress simulation tools now exist, realistic human and social behaviors commonly observed in emergency situations are not supported. This paper describes an egress simulation approach that incorporates research results from social science regarding human and social behaviors observed in emergency situations. By integrating the behavioral theories proposed by social scientists, the simulation tool can potentially produce more realistic predictions than current tools, which rely heavily on simplified and, in most cases, mathematical assumptions.
KEYWORDS
Social behavior, egress, crowd simulation, multi-agent based modeling
INTRODUCTION
This paper articulates a computational approach that integrates human and social
behaviors in emergency evacuation (egress) simulations. Despite the wide range of
simulation tools currently available, “the fundamental understanding of the
sociological and psychological components of pedestrian and evacuation behaviors is
left wanting [in computational simulation] (Galea, 2003, p. VI)”, and the situation has
been echoed by the authorities in fire engineering and social science (Aguirre 2009;
Challenger et al. 2009; Still 2000). Our approach to address this shortcoming is to
design a multi-agent based egress simulation framework that can incorporate current
and future social behavior theories on crowd dynamics and emergency evacuation.
perceived data, an agent prioritizes the different behaviors that the agent may exhibit
and chooses the one with the highest priority. After a decision is made, the agent
executes the actions according to the selected behavior, and invokes the appropriate
locomotion.
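This perceive-prioritize-act cycle can be sketched minimally as follows; the behavior names and priority values are hypothetical, not taken from the authors' framework.

```python
# Hypothetical sketch: given perceived data, rank candidate behaviors by a
# context-dependent priority and execute the highest-priority one.
def choose_behavior(perceived):
    # candidate behaviors with assumed priorities; a condition that does not
    # hold contributes zero priority for that behavior
    candidates = [
        ("avoid_hazard", 3 if perceived.get("hazard_nearby") else 0),
        ("seek_group_members", 2 if perceived.get("members_missing") else 0),
        ("move_to_exit", 1 if perceived.get("exit_visible") else 0),
        ("explore", 0.5),  # default fallback behavior
    ]
    behavior, _ = max(candidates, key=lambda c: c[1])
    return behavior

# an agent that sees an exit but is missing group members seeks them first
print(choose_behavior({"exit_visible": True, "members_missing": True}))
# → seek_group_members
```

Such a priority table lets new social behaviors (e.g., leader following) be added without restructuring the decision loop.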
Figure 4. Screenshots showing the member-seeking behavior in a group of 6: (a) initially, the group members are separated; (b) the members explore the floor until they see each other; (c) the group starts to look for the exit sign when all members are visible.
Figure 5. Screenshots showing the “group influence” process in a group of 6: (a) an agent sees the exit sign and shares the information with the other members (the agent in the room can see the information-sharing agent but not the exit sign); (b) the agent moves out of the room and sees the exit sign; his goal becomes the exit since this exit location was informed by the information-sharing agent.
agents has perfect knowledge about the egress route. By following the agent with
perfect knowledge, other agents are able to escape efficiently. Other examples can be created by varying the leader’s familiarity with escape routes and the exit sign arrangement.
Figure 6. Screenshots showing members following the leader with better knowledge of the escape route: (a) members are attracted to the leader (who possesses knowledge of the escape route); (b) agents continue to follow the leader, who navigates according to the defined route. The user-defined escape route is symbolized by the square labels.
REFERENCES
Aguirre, B. E. (2005). “Commentary on Understanding Mass Panic and Other
Collective Response to Threat and Disaster.” Psychiatry, 68, 121-129.
Aguirre, B.E., Torres, M., and Gill, K.B. (2009). “A test of Pro Social Explanation of
Human Behavior in Building Fire.” Proceedings of 2009 NSF Engineering
Research and Innovation Conference.
Aguirre, B.E., El-Tawill, S., Best, E., Gill, K.B., and Fedorov, V. (2010). “Social
Science in Agent-Based Computational Simulation Models of Building
Evacuation.” Draft Manuscript, Disaster Research Center, University of
Delaware.
Averill, J. D., Mileti, D. S., Peacock, R. D., Kuligowski, E. D., Groner, N., Proulx, G.,
Reneke, P. A., and Nelson, H. E. (2005). Occupant Behavior, Egress, and
Emergency Communications, Technical Report NCSTAR, 1-7, NIST.
Challenger, W., Clegg W. C., and Robinson A.M. (2009). Understanding Crowd
Behaviours: Guidance and Lessons Identified, Technical Report prepared for
UK Cabinet Office, Emergency Planning College, University of Leeds, 2009.
Chertkoff, J. M., and Kushigian, R. H. (1999). Don’t Panic: The Psychology of
Emergency Egress and Ingress, Praeger, London.
Cocking, C., and Drury, J. (2008). “The Mass Psychology of Disasters and
Emergency Evacuations: A Research Report and Implications for the Fire and
Rescue Service.” Fire Safety, Technology and Management, 10, 13-19.
Galea, E., (ed.). (2003). Pedestrian and Evacuation Dynamics, Proceedings of 2nd
International Conference on Pedestrian and Evacuation Dynamics, CMC Press,
London.
Gwynne, S., Galea, E. R., Owen, M., and Lawrence, P. J. (2005). “The Introduction
of Social Adaptation within Evacuation Modeling.” Fire and Materials, 30, 285-309.
Helbing, D., Buzna, L., Johansson, A., and Werner, T. (2005). “Self-Organized
Pedestrian Crowd Dynamics.” Transportation Science, 39(1), 1-24.
Mawson, A. R. (2005). “Understanding Mass Panic and Other Collective Responses
to Threat and Disaster.” Psychiatry, 68, 95-113.
Mintz, A. (1951). “Non-Adaptive Group Behavior.” Journal of Abnormal and Social
Psychology, 46, 150-159.
Pan, X. (2006). Computational Modeling of Human and Social Behavior for
Emergency Egress Analysis, Ph.D. Thesis, Stanford University.
Pan, X., Han, C. S., Dauber, K., and Law, K. H. (2007). “A Multi-Agent Based
Framework for the Simulation of Human and Social Behaviors during
Emergency Evacuations.” AI & Society, 22, 113-132.
Proulx, G., Reid, I., and Cavan, N. R. (2004). Human Behavior Study, Cook County
Administration Building Fire, October 17, 2003 Chicago, IL, Research Report
No. 181, National Research Council, Canada.
Santos, G., and Aguirre, B. E. (2004). “A Critical Review of Emergency Evacuation
Simulation Models.” in Peacock, R. D., and Kuligowski, E. D. (eds.),
Workshop on Building Occupant Movement during Fire Emergencies, June
10-11, 2004, Special Publication 1032, NIST.
Still, G. K. (2000). Crowd Dynamics, Ph.D. Thesis, University of Warwick, UK.
3D Thermal Modeling for Existing Buildings using Hybrid LIDAR System
ABSTRACT
INTRODUCTION
Buildings account for about 40% of primary energy usage and 71% of electricity consumption in the U.S. (U.S. DOE 2008; EIA 2009), yet they receive much less public attention than fuel economy and new automotive technologies, or alternative sources and distribution systems for power generation. The U.S. DOE Building America Program (NREL 2008) set a goal of reducing the average energy use of housing by 40% to 70%. In particular, existing residential buildings are the single largest contributor to U.S. energy consumption and greenhouse gas emissions (>50%), spread across more than 120 million buildings (>95% of the total number of buildings). Exacerbating these problems is the fact that the average age of such buildings is over 50 years, with about 85% of buildings built before 2000 (U.S. DOE 2008). However, millions of decision makers of these buildings usually lack
The disconnect between existing high performance building products and the
willingness of decision makers to choose those products is likely due to the
complexity of building systems and the marketplace and the lack of adequate
feedback loops between decision makers and outcomes associated with the different
stages of the building lifecycle. In particular, there is still a lack of: 1) rapid and
low-cost as-built data collection techniques for Building Information Modeling of
existing buildings (Schlueter 2008); 2) metrics and measurements for evaluating
overall building performance (including energy and occupant issues); 3) adequate
measurements and integrated intelligence for evaluating component performance; 4)
tools and information geared to non-expert decision makers (e.g., owners, occupants);
and 5) evidence that buildings touted as high performance actually perform well.
Previous work has focused on some aspects of the problems above. However,
several gaps remain that will be addressed in this study including:
- Lack of perception-based, rapid and low-cost data collection tools for as-built BIM design and thermal performance of existing buildings,
- Lack of integrated tools and data for analyzing the performance and opportunities for improvement in existing buildings, and
- Lack of connectivity between building performance information and decision makers.
as-built models of structures and scenes for quality control, surveying, mapping,
reverse engineering (Cheok 2005). Most of the commercial survey-level LIDAR
scanners enable an internal or external camera to capture the digital images of the
scanned scene and map image textures onto corresponding points in point clouds.
Each point then carries position (x, y, z) and color (R, G, B) values. Unlike applications using digital cameras, there have been few efforts to map thermal images taken from an infrared camera onto LIDAR point clouds, although infrared thermography has long been used as a non-invasive approach to diagnosing buildings and infrastructure (Balaras and Argiriou 2001). Tsai and Lin (2004) developed an integrated software system that acquires and fuses information produced by a laser scanner and an infrared (IR) camera for cultural heritage diagnostics in architectural restoration applications.
Figure 2. Infrared thermal image projected onto point clouds of the building
(overlay).
Figure 4. Integrated kinematics frame for the hybrid thermal LIDAR system.
Most off-the-shelf cameras are imperfect and exhibit a variety of distortions and aberrations. For geometric measurements with a camera, the most important issue is distortion (Shah and Aggarwal 1996). To correct the distortion that the IR camera exhibits, the intrinsic parameters are identified, which encompass the focal length in pixels, the image format, and the principal point (Hartley and Zisserman 2003; Zhang 2000). Extrinsic parameters are also needed to transform 3D world coordinates into the 3D camera-centered coordinate frame. There are three extrinsic rotation parameters: the Euler angles for yaw (θ), pitch, and tilt (φ). In this research, the yaw angle θ always equals zero, and the pitch and tilt angles can be obtained from the pan-tilt equipment. The rotation matrix R can then be expressed as a function of the three angles as follows:
\[
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} \qquad (2)
\]
In equation (2), (X, Y, Z) is the infrared camera 3D coordinate system, and (Xw, Yw,
Zw) is the object world coordinate system.
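The transform in equation (2) can be sketched in Python as a product of elementary rotations. The Euler sequence used here (z-y-x) and all names are assumptions for illustration; the paper does not state which convention the pan-tilt unit uses.

```python
import math

def rot_x(a):
    # Elementary rotation about the x-axis (tilt, by assumption)
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    # Elementary rotation about the y-axis (pitch, by assumption)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    # Elementary rotation about the z-axis (yaw; always zero in the paper)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def world_to_camera(p_world, yaw, pitch, tilt):
    """Apply equation (2): [X, Y, Z]^T = R [Xw, Yw, Zw]^T."""
    R = matmul(rot_z(yaw), matmul(rot_y(pitch), rot_x(tilt)))
    return [sum(R[i][j] * p_world[j] for j in range(3)) for i in range(3)]
```

With all three angles zero, R reduces to the identity and the world point is returned unchanged, which is a quick sanity check of the convention.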
While the LIDAR can cover a wide area with one scan, the IR camera needs to capture multiple images due to its low resolution (320 × 240), especially when a target is large, as previously shown in Figure 2. Figure 5 demonstrates the
proposed hybrid data fusion approach in which point clouds are mapped with the
thermal image of human body. Once the LIDAR scan is done, the area of interest is
captured by the IR camera using the panning and tilting functions from the GUI. In
this example, one LIDAR scan and one IR camera capture were used to model the 3D
image. The points that are out of the IR camera range (not mapped with thermal data)
are shown in blue.
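The fusion step that leaves out-of-range points untextured can be sketched as follows, assuming an ideal pinhole projection after distortion correction; the function and parameter names are illustrative, not taken from the paper's implementation.

```python
def fuse_thermal(points, thermal, fx, fy, cx, cy):
    """Assign each 3D point (in camera coordinates) a temperature by
    projecting it through an ideal pinhole model onto the IR image.
    `thermal` is a 240x320 grid of temperatures; points that project
    outside the image keep temperature None (rendered blue in the GUI).
    fx, fy are focal lengths in pixels; (cx, cy) is the principal point."""
    h, w = len(thermal), len(thermal[0])
    fused = []
    for x, y, z in points:
        temp = None
        if z > 0:                             # point in front of the camera
            u = int(round(fx * x / z + cx))   # column (pixel) coordinate
            v = int(round(fy * y / z + cy))   # row (pixel) coordinate
            if 0 <= u < w and 0 <= v < h:
                temp = thermal[v][u]
        fused.append((x, y, z, temp))
    return fused
```

A point on the optical axis maps to the principal point and picks up that pixel's temperature; points behind the camera or outside the 320 × 240 frame stay unmapped.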
Figure 5. Example of 3D thermal modeling of a human body: front view (left) and skewed view (right).
FIELD TEST
Clicking any point of the point cloud in the GUI displays its x, y, z and temperature data. In Figure 7, the hottest point, shown in red, was selected (39.566 °C).
camera (IR) was integrated with the LIDAR system, which measures the temperature of the building surface. Multiple degree-of-freedom (DOF) kinematics were solved to integrate the two units and obtain x, y, z point values and corresponding thermal data for each point. A graphical user interface (GUI) was developed to control the hardware units (LIDAR, pan-tilt unit, and IR camera) for data capture, and to edit and visualize 3D thermal point clouds.
ACKNOWLEDGEMENTS
This research has been supported by a grant from the U.S. Department of Energy (DOE) (Contract #: DE-EE0001690). The authors would like to acknowledge and extend their gratitude to the U.S. DOE for its support.
REFERENCES
Shah, S., and Aggarwal, J. (1996). “Intrinsic parameter calibration procedure for a high-distortion fish-eye lens camera with distortion model and accuracy estimation.” Pattern Recognition, 29(11), 1775-1788.
Zhang, Z. (2000). “A flexible new technique for camera calibration.” IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), 1330-1334.
A Generalized Time-Scale Network Simulation Using Chronographic Dynamics
Relations
ABSTRACT
INTRODUCTION
PERT (Malcolm et al. 1959) was the first step toward applying uncertainties to activity durations. Various researchers, including Murray (1963), Grubbs (1962), and MacCrimmon and Ryavec (1964), suggested alternatives to PERT, adding uncertainties to cost and reliability. Martinez and Ioannou (1997) stated that PERT is unable to establish a correlation between activity durations or to manage uncertainty in activity relationships. Daji and Reiar (1993) introduced uncertainty in the durations of non-critical activities with BFUE. Wang and Demsetz (2000) correlated activity network durations with NETCOR. Pritsker et al. (1989), Halpin and Riggs (1992), and Lu and AbouRizk (2000) applied simulation to the PERT network.
Allen (1984) describes the logic based on temporal intervals rather than time
points, defines thirteen possible temporal relationships and describes situations from
either a static or dynamic perspective. Song and Chua (2007) present a temporal logic
intermediate function relationship based on an interval-to-interval format. The
temporal logic residing in the intermediate functions is applied from three
perspectives: the construction life cycle of a single product component, functional
interdependencies between two in-progress components, and availability conditions
of an intermediate functionality provided by a group of product components.
Francis and Miresco (2000, 2002a, 2002b, 2006 and 2010) propose the
Chronographic Model and introduce the internal division concept. Divisions are
related to the quantity of work to be accomplished and can be adjusted automatically
as a function of the production variation rates. The concept of Temporal Functions is
introduced in order to specify the decisional and relational constraints between
activities. Temporal functions connect activities on one or more points, called
connection points. Each connection point can be at one of the two extremities of the
activity, or on one of its internal divisions. Internal divisions extend the relationships
between the activities to external and internal types, and generate realistic
dependencies and new types of floats. The Chronographic Method studies the
dynamic time-scaled dependencies that allow probabilistic simulations based on the
internal variation of the production rate. Plotnick (2006) proposes the Relationship
Diagramming Method (RDM) that also employs the notion of partial relationships.
The RDM uses five classes of new coding: Event Codes, Duration Codes,
Reason/Why Codes, Expanded Restraint or Lead/Lag Codes and Relationship Codes.
Han et al (2007) propose the Value Addition Rate (VAR), a time-scaled metric
method that captures the amount of non-value adding activities consuming time
and/or resources without increasing value. They use different colour schemes to
model the percentage of efforts effectively utilized to add value on a bar chart.
BACKGROUND
schedule, and prevents its integration into the existing construction planning software.
This paper presents a generalized time-scale network simulation using chronographic
dynamics relations, thus allowing the user to present a schedule with several
alternatives and have a better understanding of the implication of each decision on the
entire project.
separately may affect the modeling representation. Integrating these decision points
within the activity representation (see Figure 1) is an acceptable solution. Decision
points are drawn as green triangles.
In the previous example, the project was calculated based on the selected alternative. Thus, if a different alternative is chosen during execution, the project duration is likely to change. If the probabilities of execution of the three alternatives are respectively 40%, 35% and 25%, there is a 60% chance that one of the other two alternatives is executed. This means a probability of 60% that the duration and cost of the project will be different. With such a process, confidence in the result decreases, as the method completely neglects the effect of the two other alternatives on the duration, cost and quality of the project. The simulation should therefore take all possible alternatives into consideration.
The most likely duration is calculated as the sum of the products of the duration of each alternative and its respective probability: 16 × 0.6 + 12 × 0.1 + 21 × 0.3 = 17.1 ≈ 17 days.
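The probability-weighted calculation above can be sketched as follows; the function name is illustrative.

```python
def most_likely_duration(alternatives):
    """Expected (probability-weighted) duration over execution
    alternatives, each given as a (duration, probability) pair."""
    return sum(d * p for d, p in alternatives)

# The example from the text: durations 16, 12 and 21 days with
# probabilities 0.6, 0.1 and 0.3.
expected = most_likely_duration([(16, 0.6), (12, 0.1), (21, 0.3)])

# Uncertainty relative to the chosen alternative (16 days), i.e. the
# value carried by the probability entity.
uncertainty = round(expected) - 16
```

With these inputs the expected duration is 17.1, rounded to 17 days, giving an uncertainty of 1 day relative to the 16-day chosen alternative.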
The difference between the most likely duration and the chosen alternative
duration represents the uncertainty (17 – 16 = 1 day). This uncertainty is represented
by a temporary function called the probability entity. The probability entity adjusts
the overall project duration and is represented graphically by a spring (see Figure 2).
The most probable cost is the sum of the products of the cost of each alternative and its respective probability. The difference between the actual and the most probable cost is associated with the probability entity.
downward. This entity will also include the cost difference, whether positive or
negative.
Due to limited space, the simulation model is not specifically explained. This
article is therefore limited to the presentation of the modeling approach to simulate
the execution alternatives using the Chronographic Method.
The methodology and the mathematical details for an extensive example using
a mathematical function will be presented in a future paper. The mathematical
function will contain the rules that manage the interdependencies between the two
activities in progress.
CONCLUSION
REFERENCES
Allen, J. F. (1984). “Towards a general theory of action and time.” Art. Int. J.,
Elsevier, 23, 123-154.
Chehayeb, N. N. and AbouRizk, S. M. (1998). “Simulation-based scheduling with
continuous activity relationships.” J. Constr. Eng. Manage., 124(2), 107-115.
Crowston, W. B. and Thompson, G. L. (1967). “Decision CPM: a method for
simultaneous planning, scheduling and control of projects.” Oper. Res., 15, 407-
426.
Daji, G. and Reiar, H. (1993). “Time uncertainty analysis in project networks with a
new merge-event time-estimation technique”, Oper. Res., 11(3), 165-173.
Eisner, H. (1962). “A generalized network approach to the planning and research and
scheduling of the research program.” Oper. Res., 10, 115-125.
El-Bibany, H. (1997). “Parametric constraint management in planning and scheduling
computational basis.” J. Constr. Eng. Manage., 123(3), 348–53.
Elmaghraby, S. E. (1964). “An algebra for the analysis of the generalized activity
network.” Manage. Sc., 10, 494-514.
Eppinger, S. D. (1997). “Three concurrent engineering problems in product
development seminar.” MIT, Sloan School of Management, Cambridge, Mass.
Francis, A. and Miresco, E. (2000). “Decision support for project management using
a chronographic approach.” Proc.2nd Int. Conf. on Decision Making in Urban and
Civil Engineering, Lyon, France, 845-856.
Francis, A. and Miresco, E. (2002a). “Amélioration de la représentation des
contraintes d’exécution par la méthode Chronographique.” Proc. 2006 Annual
Canadian Society of Civil Engineers (CSCE) conf., Montreal, Qc, GE019, g-27.
Francis, A. and Miresco, E. (2002b). “Decision support for project management using
a chronographic approach.” J. Decis. Sys., 11(3-4), 383-404.
Francis, A. and Miresco, E. (2006). “A chronographic method for construction
project planning.” Can. J. Civ. Eng., 33(12), 1547-1557.
Francis, A. and Miresco, E. (2010). “Dynamic production-based relationships
between activities for construction projects' planning.”, Proc., Int. Conf., in
Computing in Civil and Building Engineering, Nottingham, UK, 126, 251.
Grubbs F.F. (1962). “Attempts to validate certain PERT statistics.” Oper. Res., 10,
912-915.
Halpin, D., and Riggs, L. (1992). “Planning and analysis of construction operations.”
Wiley, New York.
Han, S., Lee S., Fard, M. G. and Peña-Mora, F. (2007). “Modeling and representation
of non-value adding activities due to errors and changes in design and construction
projects.” Proc., 2007 Wint. Simu. Conf., Washington D.C, 2082-2089.
Hespos, R. F. and Strassmann P. A. (1965). “Stochastic decision trees for the analysis
investment decisions.” Manage. Sc., S.B, 11, 244-259.
Itakura, H. and Nishikawa, Y. (1984). “Fuzzy network technique for technological
forecasting, Fuzzy Sets and Systems.” Int. J. Inf. Sc. Eng., 14(2), 99-113.
Lu, M. and AbouRizk, S. M. (2000). “Simplified CPM/PERT simulation model.”
J. Constr. Eng. Manage., 126(3), 219-226.
MacCrimmon, K.R., Ryavec, C.A. (1964). “An Analytical study of the PERT
Assumptions.” Oper. Res., 12(1), 16-37.
Malcolm, D.G., Roseboom, J.H., Clark, C.E., and Fazar, W. (1959). “Application of a
technique for research and development program evaluation (PERT).” Oper. Res., 7(5), 646-669.
Martinez, J. C. and Ioannou, P. G. (1997). “State-Based Probabilistic Scheduling
Using STROBOSCOPE’s CPM add-On.” Constr. Congress V, Minneapolis, MN,
438-445.
Moeller, G. L. and Digman, L. A. (1981). “Operations planning with VERT.” Oper.
Res., 29(4), 676-697.
Nawari O. Nawari, School of Architecture, College of Design, Construction and Planning, University of Florida, Gainesville, FL 32611, USA. Email: nnawari@ufl.edu.
ABSTRACT
Intelligent codes (SMARTcodes) are a new initiative of the International Code Council (ICC) that strives to automate code compliance checking: the building plan, represented by a Building Information Model (BIM), is checked instantly for code compliance by model checking software. The goal is to be able to create an inspection checklist of building elements to look for, and to view the building components that do not comply with code provisions and the reasons why.
This paper examines automated code compliance checking systems that assess building designs according to various structural code provisions. This includes evaluating and reviewing the functional capabilities of both the technology and structure of smart codes and of current building design rule checking systems. The paper suggests a new framework for the development of automated rule checking systems to verify structural designs against code provisions and other user-defined rules.
INTRODUCTION
At present, structural design and construction processes become more complex every day because of the introduction of new building technologies, new research outcomes, and increasingly stringent building codes. As a result, structural engineers are responsible for complying with many regulations and specifications, ranging from seismic, blast resistance, and progressive collapse to fire safety and energy performance requirements. They constantly face the problem of checking the conformance of products and processes to international, national, and local regulations. They are also subject to ever-increasing expectations across several knowledge domains, striving toward building designs with better performance and quality. These challenges require intense collaboration among project participants, and profound verification of the building design starting from the earliest stages of the design process.
The introduction of Smart Codes will greatly improve current design practice by simplifying access to code provisions and compliance checks. Converting codes and standards from a flat, rigid format into a dynamic, actionable format plays the key role. By bridging code and standard provisions, design software, and Building Information Modeling, a solution to this seemingly insurmountable hurdle can be achieved.
A smart or intelligent code refers to an electronic digital format of the building codes that allows automated rule and regulation checking. It does not modify a building design, but rather assesses a design on the basis of the configuration of parametric objects, their relations, and their attributes. Smart Codes apply rule-based systems to a proposed design and return results such as “PASS”, “FAIL”, “WARNING”, or “UNKNOWN” for conditions where the required information is incomplete or missing.
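A minimal sketch of such a rule-based check, using a plain dictionary as a stand-in for a parametric BIM object (the names and the provision are hypothetical):

```python
def check_rule(element, attribute, minimum):
    """Assess one parametric object against one code provision without
    modifying the design, reporting in the spirit of SMARTcodes:
    PASS / FAIL, or UNKNOWN when the required information is missing."""
    value = element.get(attribute)
    if value is None:
        return "UNKNOWN"   # required information incomplete or missing
    return "PASS" if value >= minimum else "FAIL"
```

A real checker would evaluate many such provisions over the whole model and aggregate the results into the inspection checklist described above.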
There has been long historical interest in transforming building codes into a format amenable to machine interpretation and application. The initial effort started in 1966, when Fenves observed that decision tables, a then-novel programming and program documentation technique, could be used to represent design standard provisions in a
precise and unambiguous form. The concept was put to use when the 1969 AISC Specification
(AISC 1969) was represented as a set of interrelated decision tables. The stated purpose of the
decision table formulation was to provide an explicit representation of the AISC Specification,
which could then be reviewed and verified by the AISC specification committee and
subsequently used as a basis for preparing computer programs. Subsequently, Lopez et al.
implemented the SICAD (Standards Interface for Computer Aided Design) system (Lopez and
Elam 1984; Lopez and Wright 1985; Elam and Lopez 1988; Lopez et al. 1989). The SICAD
system was a software prototype developed to demonstrate the checking of designed
components as described in application program databases for conformance with design
standards. The SICAD concepts are in production use in the AASHTO Bridge Design System
(AASHTO 1998). Garrett developed the Standards Processing Expert (SPEX) system (Garrett
and Fenves 1987) using a standard-independent approach for sizing and proportioning
structural member cross-sections. The system reasoned with the model of a design standard,
represented using SICAD system representation, to generate a set of constraints on a set of
basic data items that represent the attributes of a design to be determined.
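A decision table of the kind Fenves described can be sketched as condition-action rules; the miniature provision below is invented for illustration and is not taken from the AISC Specification.

```python
# A decision table is a list of (conditions, action) rules. Each
# condition entry pairs a fact name with an expected value; None means
# "don't care". The first rule whose conditions all hold fires.
def evaluate(table, facts):
    for conditions, action in table:
        if all(expected is None or facts.get(name) == expected
               for name, expected in conditions):
            return action
    return "no rule applies"

# Hypothetical miniature provision: a compact, adequately braced
# section may be designed with the plastic section modulus.
table = [
    ((("compact", True), ("braced", True)), "use plastic modulus"),
    ((("compact", True), ("braced", False)), "use elastic modulus"),
    ((("compact", False), ("braced", None)), "use elastic modulus"),
]
```

The appeal Fenves noted is visible even at this scale: the table is explicit and exhaustive, so a specification committee can review it row by row before it is turned into a program.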
Further research was led by Singapore building officials, who started considering code checking on 2D drawings in 1995. The effort later switched to the CORENET system, working with IFC (Industry Foundation Classes) building models, in 1998 (Khemlani, 2005). In the United States, similar work has been initiated under the Smart Code initiative. There are also several other research implementations of automated rule checking, assessing accessibility for special populations (SMC, 2009) and fire codes (Delis, 1995). The GSA and U.S. Courts have recently supported the development of design rule checking for federal courthouses, an early example of rule checking applied to automating design guides (GSA, 2007). A comprehensive survey of developments in computer representation of design codes and rule checking was reported by Fenves et al. (1995) and Eastman et al. (2009).
SMART CODES
This refers to the electronic digital representation of the rules and regulations of the building codes, and the dictionary needed for that format. In the United States, the codes of the International Code Council (ICC) will be available in the form of XML. To maintain consistency of properties within the digital format of the codes, a dictionary of the properties found within the building codes is being developed. The dictionary is being developed as part of the International Framework for Dictionaries effort and, in the US, is being managed by the Construction Specifications Institute (CSI) in cooperation with the ICC. This work also enables the properties within the codes to be identified against the appropriate tables within the OmniClass classification system developed by CSI.
Recently, a number of researchers have investigated ontology-based approaches (Yurchyshyna et al. 2009) and semantic web information as possible rule checking frameworks (Pauwels et al. 2009). The first approach works on formalizing conformance requirements through the following methods (Yurchyshyna et al. 2009): (i) knowledge extraction from the texts of conformance requirements into formal languages (e.g., XML, RDF); (ii) formalization of conformance requirements by capitalizing on domain knowledge; (iii) semantic mapping of regulations to industry-specific ontologies; and (iv) formalization of conformance requirements in the context of the compliance checking problem. The semantic web approach, on the other hand, focuses on enhancing the IFC model with a description language based on a logic theory, such as those found in the semantic web domain (Pauwels et al. 2009). Because the IFC schema was not explicitly designed for interaction with rule checking environments, its specification is not based on a logic theory. By lifting IFC onto a logical level, it could be possible to enable the design and implementation of significantly improved rule checking systems.
As can be seen, Smart Codes systems depend on information availability and a rule conformance checking system. Each of these components has limitations. A major cluster of difficulties relates to the nature of codes and standards: building codes can be extremely subjective in certain provisions, which means legal scholars can argue either side of a question using accepted methods of legal discourse. The most recurring cause of indeterminacy in code provisions is the open-textured concepts used in expressing them.
It is clear that a powerful semantic-oriented representation encompassing most code and standard provisions, together with the encoding of the domain knowledge, is key to the success of the Smart Code initiative. This paper proposes a new framework based upon XML and the LINQ (Language Integrated Query) language that enables basic and complex rules and reasoning to be expressed both in XML, as a normative concrete syntax, and in a more human-readable abstract syntax, allowing for effective AC3 systems.
Figure 2. Process Map Showing the Exchange Requirements for the AC3
Framework.
The first twelve lines of the code clearly illustrate the power of LINQ to extract information from the Smart Code in a very efficient and flexible way. The query searches the Smart Code for the minimum cover provision, reads the values allocated for beams, and then compares them to the actual instance of the beam in the building. The actual building structural framing information is extracted from the BIM-generated IFC file, which is converted into ifcXML and then into fbmXML, as described previously (Figure 4). In the AC3 framework this is given by lines 18 to 26 in Figure 6, which implement LINQ to BIM via fbmXML. This concise example depicts the potential of automating an unlimited range of rules, including unlimited nested conditions and branching of alternative contexts within a specified structural design code or standard.
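The role the LINQ query plays can be approximated in Python with the standard-library ElementTree API. The XML fragment and element names below are hypothetical stand-ins; the real SMARTcodes and fbmXML schemas are not reproduced in the paper.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment standing in for a SMARTcodes minimum-cover
# provision; the real schema differs.
code_xml = """<code>
  <provision id="min-cover">
    <element type="beam" minCover="40"/>
    <element type="slab" minCover="20"/>
  </provision>
</code>"""

def check_min_cover(code_doc, element_type, actual_cover):
    """Query the code document for the minimum-cover value of the given
    element type (the role LINQ to XML plays in the AC3 framework) and
    compare it with the value taken from the building model."""
    root = ET.fromstring(code_doc)
    for el in root.iter("element"):
        if el.get("type") == element_type:
            required = float(el.get("minCover"))
            return "PASS" if actual_cover >= required else "FAIL"
    return "UNKNOWN"   # no provision found for this element type
```

In the actual framework the second operand of the comparison would come from the fbmXML export of the BIM model rather than a function argument, but the query-then-compare structure is the same.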
CONCLUSIONS
Application of the AC3 framework to structural design has the potential to optimize and simplify automated code and standard conformance checks by leveraging building information that exists in the architectural and structural models created by BIM authoring platforms. The proposed automated code conformance checking (AC3) framework has many advantages over existing rule checking systems. The major differentiator of AC3 lies in the abilities of LINQ to XML as an in-memory XML programming platform. Language Integrated Query provides a consistent query experience across different data models, as well as the ability to mix and match data models within a single query; it is thus able to depict an unlimited range of rules, including unlimited nested conditions and branching of alternative contexts within a specified domain. Furthermore, the AC3 framework provides flexibility in encoding building code provisions and domain knowledge, the capability of supporting friendly user-defined rules, and the ability to integrate with other applications. Increasing BIM adoption, and the concomitant increasing interest in the interoperability potential of XML, may prove to be the essential catalysts in the successful adoption and further development of automated code conformance checking (AC3) systems.
ACKNOWLEDGEMENT
The author would like to express his appreciation to College of Design, Construction
& Planning, University of Florida, Gainesville, Florida for funding and supporting this
research.
REFERENCES
Conover, D. (2007).”Development and Implementation of Automated Code Compliance
Checking in the U.S.”, International Code Council, 2007.
Delis, E.A., and Delis, A. (1995). “Automatic fire-code checking using expert-system
technology”, Journal of Computing in Civil Engineering, ASCE 9 (2), pp. 141–156.
Ding, L., Drogemuller, R., Rosenman, M., Marchant, D., and Gero, J. (2006). “Automating code checking for building designs.” in: K. Brown, K. Hampson, P. Brandon (Eds.), Clients Driving Construction Innovation: Moving Ideas into Practice, CRC for Construction Innovation, Brisbane, Australia, pp. 113-126.
Eastman, C. M., Lee, J., Jeong, Y., and Lee, J. (2009). “Automatic rule-based checking of building designs.” Automation in Construction, 18, pp. 1011-1033.
EDM (2009).” EXPRESS Data Manager”, EPM Technology, http://www.epmtech.jotne.com.
Fenves, S. J. (1966). “Tabular decision logic for structural design.” J. Struct. Div., 92, pp. 473-490.
Fenves, S. J. and Garett Jr, J. H. (1986). “Knowledge-based standards processing”, Int. J.
Artificial Intelligence Engn 1, pp. 3-13.
Fenves, S. J., Garrett, J. H., Kiliccote. H., Law. K. H., and Reed, K. A. (1995). "Computer
representations of design standards and building codes: U.S. perspective." The Int. J. of
Constr. Information Technol., 3(1), pp. 13-34.
Garrett, J. H., Jr., and S. J. Fenves, (1987). “A Knowledge-based standard processor for
structural component design” Engineering with Computers, 2(4), pp 219-238.
GSA (2007). “U.S. Courts Design Guide”, Administrative Office of the U.S. Courts, Space
and Facilities Division, GSA,
http://www.gsa.gov/Portal/gsa/ep/contentView.do?P=PME&contentId=15102&contentTyp
e=GSA_DOCUMENT .
GSA (2009). “BIM Guide for Circulation and Security Validation”, GSA Series 06 (draft).
Hietanen, J. (2006). “IFC Model View Definition Format”, International Alliance for
Interoperability.
ICC (2006). “MDV for the International Energy Conservation Code”, http://www.blis-
project.org/IAI-MVD/.
ISO TC184/SC4 (1997). “ Industrial automation systems and integration—Product data
representation and exchange” , ISO 10303-11: Description Methods: The EXPRESS
Language Reference Manual, ISO Central Secretariat.
ISO TC184/SC4 (1999).” Industrial automation systems and integration—Product data
representation and exchange:”, ISO 10303-14: Description Methods: The EXPRESS-X
Language Reference Manual, ISO Central Secretariat.
Jeong, Y-S., Eastman, C.M., Sacks, R., Kaner, I. (2009) “Benchmark tests for BIM data
exchanges of precast concrete”, Automation in Construction 18 (2009) 469–484.
Khemlani, L. (2005). “CORENET e-PlanCheck: Singapore's automated code checking
system”, AECBytes,
http://www.aecbytes.com/buildingthefuture/2005/CORENETePlanCheck.html.
Lopez, L. A., and S. L. Elam (1984). “ SICAD: A Prototype Knowledge Based System for
Conformance Checking and Design”, Technical Report, Department of Civil Engineering.
University of Illinois at Urbana-Champaign, Urbana-Champaign, IL.
COMPUTING IN CIVIL ENGINEERING 577
Benefits of Implementing Building Information Modeling for Healthcare
Facility Commissioning
ABSTRACT
INTRODUCTION
Healthcare is one of the grand challenges of the 21st century and one of the nation's
leading industries. Despite remarkable advances in modern medical diagnostics and
interventions, healthcare in the U.S. is inconsistent, with sometimes-dismal quality,
safety, and efficiency, and massive inequities in access. Total healthcare construction
spending, including hospitals, medical office buildings, nursing homes, and other
health facility buildings, is forecast to be one of the highest performing
construction sectors throughout 2012, according to IHS Global Insight's Construction
Service. Healthcare facilities are unique in that they must remain open and
operational under all circumstances. Building commissioning is the process
of verifying, in new construction, that all subsystems achieve the owner's project
requirements as intended by the building owner and as designed by the building
architects and engineers. Commissioning helps deliver the owner a project that is
on schedule, with reduced delivery cost and substantial life-cycle cost savings, and that will meet the
needs of users and occupants. Continuous commissioning focuses on the
improvement of overall system control and operations for the building and/or plant as
it is currently utilized and on meeting existing facility needs. Continuous
commissioning extends beyond the operations and maintenance program, and
optimizes the facility to its current use, which is likely different from the original
design. During the continuous commissioning process, a comprehensive engineering
evaluation is typically conducted for both building and plant functionality and system
functions. The optimal operational parameters and schedules can then be developed
based on actual conditions. An integrated approach is used to implement these
optimal schedules to ensure local and global system optimization and to ensure
persistence of the improved operational schedules.
The National Building Information Modeling Standards (NBIMS) Committee
defines BIM as “a digital representation of physical and functional characteristics of a
facility. BIM is a shared knowledge resource for information about a facility, forming
a reliable basis for decisions during its life-cycle; defined as existing from earliest
conception to demolition. A basic premise of BIM is collaboration by different
stakeholders at different phases of the life cycle of a facility to insert, extract, update
or modify information in the BIM to support and reflect the roles of that stakeholder”
(NIBS, 2007). This allows planners, designers, and builders to better coordinate
details and information amongst the multiple parties involved.
THE BENEFIT OF BIM IN PROJECT MANAGEMENT
Current BIM software packages are parametric 3D modeling tools that offer the
architect a quick and reliable method to design the facility and share the details of the
design with other stakeholders involved with the project. BIM as an approach focuses
on the collection and sharing of information throughout the life cycle of the project
and the visualization of this information using the 3D model. The transfer from 2D to
3D modeling influences the design of a structure in many ways (Sacks and Barak,
2007). The authors used man-hour benchmarks to measure the time savings on two
structural engineering design projects and concluded that parametric 3D
modeling is especially useful at the early stages of design. Azhar, Hein and Sketo
(2008) state that BIM represents the development and use of computer-generated
n-dimensional (n-D) models to simulate the planning, design, construction and
operation of a facility. It helps architects, engineers and constructors to visualize and
identify potential design, construction or operational problems. BIM also facilitates
the information and data flow among architects, designers, owners, and contractors.
Steel, Drogemuller and Toth (2010) have studied the information exchange model
between different models, especially in the format of IFC (Industry Foundation
Classes), which combine the effort of architectural, mechanical and electrical
drawings into a compiled document. The authors concluded that collaboration and
scale are two of the most prominent characteristics of interoperability. According to
Hlotz and Horman (2006), the added detail allows stakeholders to better coordinate
information in executing and developing the projects. Grilo and Jardin-Goncalves
(2010) have studied the value proposition that interoperability of BIM makes evident.
The higher level of collaboration among participants increases cost benefits and
decreases risks; hence the reinforcement of interoperability in the AEC sector is
highly recommended.
BIM IN HEALTHCARE
Healthcare projects benefit the most from BIM because of the complexity and the
rigorous built environment of healthcare facilities. By modeling healthcare projects, the early
adopters of BIM have experienced reduced project costs, shortened schedules, and
increased project quality (Barista, 2007). Chellappa (2009) noted that BIM supports
Evidence-Based Design (EBD) in providing a healing environment for
patients and staff in healthcare facilities. Manning and Messner (2008) addressed the
following five reasons why BIM benefits healthcare projects: 1. the layout of hospital
facilities must be arranged properly to avoid the spread of infection; 2. complex
mechanical, electrical, and plumbing systems must be coordinated; 3. because
patients are present in the hospital, simulation of lighting and air ventilation is also
very important; 4. the operation phase of a healthcare facility benefits from the
information captured in the design and construction stages; and 5. relative to the
large investment in a healthcare facility, the savings will be tremendous. Two case
studies are presented: the first is a trauma hospital. In this project, 2D conceptual
drawings were abandoned after 7 months because of the discord of planning and
facility reality. The benefits of adopting 3D BIM modeling in this project were: 1.
with parametric design tools, drawings and dimensions were converted between
metric units (used by vendors/contractors) and imperial units (used by planning teams
and users) in minimal time, without any scaling or coding commands; 2. updates
for drawing set cross-referencing were performed quickly and automatically.
The second case study is the renovation of a medical research lab. Using BIM to
calculate division and department space, the team saved 20% of the man-hours
indicated by the company's historical data, which is approximately a 62% cost
savings. Khanzode and Fischer (2008) have studied the benefits of BIM in
coordination between Mechanical, Electrical and Plumbing (MEP) systems. Through
the case of coordination of MEP in the new Medical Office Building (MOB) facility,
they discussed issues, such as: the role of the general contractor, specialty contractors,
the coordination of the scope of work, the coordination of software to be used,
coordination sequence, and the information exchange between designers and
subcontractors. The benefits of BIM for various stakeholders are also discussed: for
the owner, there were close to zero change orders, and fast-track project delivery
became possible; for architects and engineers, less time was spent producing requests
for information during the construction phase; for the general contractor, safety on
site improved (only one injury) and more time could be spent planning rather than
“firefighting”; and for specialty contractors, work finished on schedule, plumbing was
100% prefabricated, and rework was less than 2%.
HEALTHCARE FACILITY COMMISSIONING
Healthcare facility commissioning verifies that building systems perform in
accordance with the design intent, that the design intent is consistent with
the owner’s project requirements, and that operation and maintenance staff are
adequately prepared to operate and maintain the completed facility. Through the
commissioning process, building systems can be integrated. Critical built-environment
systems in a healthcare facility, such as the control, air quality, temperature
control, and acoustic systems, are secured through the
commissioning process. Additionally, the documents created during commissioning
become a guideline for maintenance and operation. Re-commissioning during
operation and maintenance also brings savings in energy consumption (Feldbauer,
2008). Mills et al. (2005) studied the cost-effectiveness of commissioning new and
existing commercial buildings for 244 buildings, representing 30.4 million square
feet of commissioned space across 21 states. They compiled and synthesized
published and unpublished data from real-world commissioning and
retro-commissioning projects, establishing the largest available collection of standardized
information on new and existing building commissioning experience. Through data
analysis, they quantified the energy savings per square meter and the payback time.
Seth (2006) mentioned that the commissioning scope of work for critical healthcare
facilities should include not only traditional HVAC systems but also the broader
complex diagnostic, operating, and recovery environments, insulation, and patient
care services.
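As a rough illustration of how such payback figures relate, a simple payback calculation divides the commissioning cost by the annual energy savings. The numbers in this sketch are assumed for illustration only, not values reported by Mills et al.:

```python
# Illustrative simple-payback calculation for a commissioning project.
# All costs and savings below are assumed numbers, not data from Mills et al.

def simple_payback_years(commissioning_cost: float,
                         annual_energy_savings: float) -> float:
    """Years needed for annual energy savings to recover the commissioning cost."""
    return commissioning_cost / annual_energy_savings

area_m2 = 10_000          # commissioned floor area, m^2 (assumed)
cost_per_m2 = 3.0         # commissioning cost, $/m^2 (assumed)
savings_per_m2_yr = 2.5   # annual energy savings, $/m^2 per year (assumed)

payback = simple_payback_years(area_m2 * cost_per_m2,
                               area_m2 * savings_per_m2_yr)
print(f"Simple payback: {payback:.1f} years")  # Simple payback: 1.2 years
```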
BIM IN HEALTHCARE COMMISSIONING
Pre-Design Phase
The primary tasks for the pre-design phase of healthcare commissioning are
establishing the commissioning scope and selecting the commissioning team. Since
each healthcare facility has its own characteristics and budget limitations, the scope
of work (SOW) needs to be identified. With building information modeling,
information from similar projects can be found through the database. The
commissioning team and the owner then collaborate in the early stage to make
decisions about which systems to commission. An important note is that not only
HVAC systems are important for commissioning, but different projects have special
systems to be commissioned. For example, if a hospital specializes in the treatment of
burn patients, then the air control systems are vital for the healing environment.
Based on experienced judgment and the information gained from the BIM
database, choosing the scope of work within a limited budget becomes much
easier.
Design Phase
BIM is most beneficial to the commissioning process during the design phase.
The earlier that BIM is adopted in commissioning, the more beneficial it will be
(Feldbauer, 2008). A BIM-based commissioning process requires the involvement of the
commissioning team in the early stages, which will enhance knowledge sharing between
different parties. Communication through the BIM in the early stages helps the
commissioning team connect the owner’s project requirement (OPR) and Basis of
Design (BOD) for commissioning more tightly. In addition, a solid timeline will be
formed by BIM to guide the commissioning process. The timeline will lead the
commissioning team to keep up with the commissioning schedule efficiently. Instead
of a traditional schedule, it is a simulation of the actual commissioning practice.
Therefore, it is much more feasible and practical for the commissioning team.
Commissioning during the design phase is mainly focused on reducing design error
and conflicts. With BIM, the commissioning experts coordinate the drawings of the
mechanical, electrical, plumbing, and HVAC systems, which are designed by different
specialists. Then, the commissioning team will collaborate with the architect and
owner to correct the defects or errors in the design. BIM based commissioning also
enhances Evidence-Based Design (EBD) in the long run: corrections of design defects
and drawing oversights, together with the experience gained, are simultaneously
archived in the database for future EBD. BIM-based
commissioning also improves energy savings performance in operation and
maintenance phases.
Construction Phase
Commissioning in the construction phase will influence operation and
maintenance directly. During the construction phase, the commissioning scope of
work is verified, meaning that applicable equipment and building systems are
installed properly and receive adequate start-up and testing by installation contractors.
Also, the manuals for maintenance and operation and Testing, Adjusting, and
Balancing (TAB) reports have to be reviewed by the commissioning engineers. Other
tests, including pressure test and witness functional performance tests, will also be
conducted. BIM provides functional, detailed 3D drawings, so engineers can quickly
locate problems by comparing site observations with the model documentation.
Operation and Maintenance Phase
The documents archived in BIM during commissioning, such as test reports and
operation manuals, support the operation and maintenance staff. Additionally, BIM will form a baseline model for the facility. In
the operation and maintenance phases, the staff can benchmark performance of the
facility with the baseline model to detect system failure in the early stage. This is
particularly important for a healthcare facility, because the built environment is
extremely important for healing patients. Also, the baseline model will provide
reference for future redesign, and other research activity.
Retro-Commissioning
Retro-commissioning is also called continuous commissioning. The purpose
of this commissioning is to solve the conflicts between systems and to improve the
building energy savings during O&M. When it is realized that the facility consumes
more energy than necessary, the energy efficiency issues will be analyzed. BIM is
able to detect problems in operation of systems by simulating and analyzing which
part of the system is responsible for the energy consumption. The interoperability of
BIM allows the data to be exported into energy analysis software, such as EnergyPlus,
to detect problems in the operation of systems. When problems are found, the
BIM-based commissioning team can devise the simplest and most effective method of
coordinating the operation of systems to achieve the energy saving goals.
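One minimal way to use such analysis results is to compare measured consumption against baseline (simulated) values and flag systems that deviate beyond a tolerance. The sketch below is a hypothetical illustration; the system names, numbers, and the 10% threshold are assumptions, not part of any cited workflow:

```python
# Sketch: flag systems whose measured energy use exceeds a BIM-derived
# baseline by more than a tolerance. Names and data are hypothetical.

def flag_energy_anomalies(baseline_kwh, measured_kwh, tolerance=0.10):
    """Return names of systems whose measured use exceeds baseline by > tolerance."""
    flagged = []
    for system, base in baseline_kwh.items():
        actual = measured_kwh.get(system, 0.0)
        if base > 0 and (actual - base) / base > tolerance:
            flagged.append(system)
    return flagged

baseline = {"AHU-1": 1200.0, "chiller": 5000.0, "lighting": 800.0}
measured = {"AHU-1": 1250.0, "chiller": 6100.0, "lighting": 810.0}
print(flag_energy_anomalies(baseline, measured))  # ['chiller']
```

In practice the baseline values would come from an energy simulation of the BIM model rather than a hand-written dictionary.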
CASE STUDY: BIM BASED HEALTHCARE FACILITY COMMISSIONING
IN MARYLAND GENERAL HOSPITAL
Project Overview
Maryland General Hospital is a top-notch, university-affiliated teaching
hospital. The project is the Central Care Expansion, a 92,500 sq ft expansion
(15,534 sq ft of renovated space and 77,000 sq ft of new space) with a budget of
more than $57 million. The SOW includes 8 new
operating rooms, one of which is dedicated to ophthalmology; two dedicated endoscopy suites;
one dedicated cystoscopy suite; a pre-surgical unit with 14 private patient rooms and
2 inpatient holding bays; and a post-anesthesia care unit with 20 recovery bays and 2
isolation rooms. There are also an updated pharmacy and laboratory, family waiting
areas with private consultation rooms, and elevators.
To improve and archive life-cycle cost performance, BIM technology was used during
the design and construction processes.
REFERENCES
ABSTRACT
Cranes are among the most expensive pieces of equipment in many
construction projects as well as freight terminal operations, shipyards, and
warehouses. Despite their wide range of applications, the vast majority of cranes still in
use do not feature advanced automation and sensor technologies. A typical crane
operator relies mostly on visual assessment of the jobsite conditions, which may be
enhanced through a signalperson on the ground. However, the lack of an integrated
decision support system which takes into account the evolving work conditions and
the time and space constraints may lead to delays due to inefficient prioritization of
crane service requests. In a longer term, this may affect or even change the project
critical path which will ultimately lead to increased project time and cost. This paper
presents the latest results of an ongoing study which aims to design and implement an
automated crane decision support system to help crane operators fulfill service
requests in the most efficient order.
INTRODUCTION
The construction industry still lags behind most manufacturing and industrial
operations where transforming conventional activities into fully automated processes
has resulted in significant increases in productivity and lowered the overall project
cost (Groover 2008). Recently, automating construction activities at various levels
(e.g. design, installation, and operation) through the application of robotics and
automation has been explored in two areas: hard robotics, which deals with developing
new robotic systems, and soft robotics, which centers more on software and
information technology (IT), enhancing the efficiency of existing machines
(Balaguer and Abderrahim 2008). While only a few robots have found their way
into the construction industry (Gambao et al. 1997, Balaguer et al. 2000, Hasegawa 2006,
Naito et al. 2007), research has been mostly focused on investigating soft robotic
techniques in construction (Everett 1993, Rosenfeld 1995 and 1998, Balaguer and
Abderrahim 2008, Lee et al. 2009). Some researchers investigated the possibility of
automating existing construction equipment with an ultimate objective of increasing
project efficiency (Everett 1993, Rosenfeld 1995 and 1998, Lee et al. 2009). Among
such projects, automating crane operations has been of major interest due to the fact
that cranes are typically the most expensive pieces of equipment in many construction
projects and activities that rely on crane service usually control the project critical
path. Previous research in this area mostly falls into two categories: optimization of
crane layout pattern, in which the main objective is to find the best number of cranes
and the optimum location for each crane in order to satisfy criteria such as balancing
workload or minimizing spatial conflicts between cranes and other moving resources
on the site (Zhang et al. 1999, Al-Hussein 2005, Tantisevi and Akinci 2008), and
planning of physical crane motions, which includes the design and implementation of
tools to help in navigating the motions of the end manipulator (i.e. crane hook) and
other body parts (e.g. boom, jib, trolley) from the moment a load is picked up until it
is delivered to the desired location (Everett and Slocum 1993, Rosenfeld 1995, Lee
2009). The missing link between these two bodies of research is a tool
that helps a crane operator decide the sequence of fulfilling service requests received
from crews working on a jobsite that yields the maximum production rate and the
minimum operations time and cost. This gap in knowledge has been identified in the
presented research and is referred to as the decision-making phase. In this phase of
the crane operation cycle, the operator should prioritize crane service requests and
create a job sequence list given constraints such as idle times of each crew requesting
a crane service, significance of ongoing crew tasks, and total resource idle times.
Figure 1 shows the schematic overview of major areas with high potential for
automation in crane operations.
PROBLEM DESCRIPTION
Cranes are among the most expensive equipment in a typical construction
jobsite. The total cost of a crane includes the procurement (or rental) cost, operation
and maintenance (fuel, oil, parts) costs, and crane operator’s salary. Traditionally, a
crane is operated by a single crane operator. As shown in Figure 2, a signalperson
may assist the crane operator by giving hand or audio signals for lifting, swinging,
and lowering loads especially from and onto blind spots. In addition, a crane operator
uses his or her visual assessment and personal judgment or the help of an on-duty
superintendent to decide the order of tasks to fulfill if there are several service
requests from crews. This decision-making process could be biased towards certain
activities and as a result, may lead to longer operations time which can eventually
alter the project critical path.
requesting crew, and unload the material. Subsequently, the operator should choose
the next crew for crane service from a total of w-1 remaining crews. This process will
continue until all outstanding crane service requests are fulfilled, which implies that
there are a total of w! (the number of permutations of w requests) possible ways to fulfill all
crane requests. Since w! grows rapidly with w, the challenge is to design a
robust automated method to determine the optimal sequence of tasks that yields the
minimum completion time.
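The factorial growth of this search space can be checked directly; a minimal sketch (the crew names are placeholders):

```python
# The number of possible service sequences for w outstanding crane
# requests is w! (crew names below are placeholders).
from itertools import permutations
from math import factorial

requests = ["crew 1", "crew 2", "crew 3", "crew 4"]  # w = 4
sequences = list(permutations(requests))             # every possible order
print(len(sequences), factorial(len(requests)))      # 24 24

# Factorial growth quickly makes exhaustive enumeration impractical:
for w in (3, 5, 8, 10):
    print(w, factorial(w))
```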
METHODOLOGY
Linear programming (LP) has evolved as a method for allocating scarce
resources among various activities in an optimal manner and is one of the most
widely used operations research tools and decision-making aids in manufacturing
industry, financial sectors, and service organizations (Lawler et al. 1985, Koch et al.
2009). Several classes of LP optimization problems can be graphically represented in
a network model (Bazaraa et al. 2009). A network model consists of a set of nodes
and arcs, and functions associated with arcs and/or nodes (Winston 2003). Using this
terminology, the problem of crane operations optimization can be categorized as a
network problem. The authors have investigated several network models to find the
most efficient formulation, one that minimizes the total travel time of the crane hook.
One of the most promising ways to formulate the crane operations optimization
problem is to map it to an equivalent transportation problem, which deals with the
physical distribution of products from supply points to demand points (Ojha et al.
2010), with the goal of minimizing shipping costs while the need of each arrival area
is met and every shipping location operates within its capacity. However, the major
limitation that hampers the use of the original
transportation problem to solve the problem of optimizing crane operations is that
unlike the transportation problem, in which more than one demand can be satisfied at
the same time (no limit on transport means), the crane optimization problem includes
only a limited number of cranes on a jobsite. In addition, demands can be fulfilled
from any supply point in the transportation problem whereas in the crane optimization
problem, demands are targeted (i.e. each crew demands material from a specific
storage area). The shortest path problem is another LP method which seeks to
minimize the total length of a path between any two given nodes (Winston 2003).
This class of LP problem cannot be directly applied to the crane optimization problem
either mainly because it does not guarantee a “continuous” path which covers all
nodes. Considering the limitations of the transportation and the shortest path
problems, the authors developed a mathematical model based on the Traveling
Salesman Problem (TSP) with the Dantzig, Fulkerson and Johnson (DFJ) formulation.
TSP Formulation
The main objective of the TSP is to find the shortest route of a traveling
salesperson that starts at a home city, visits several other cities, and finally returns to
the same home city. The distance travelled in such a tour will depend on the order in
which the cities are visited and, thus, the problem is to find an optimal order of the
cities (Gutin and Punnen 2004). The TSP is a typical “hard” optimization problem, and
solving a TSP with a large number of nodes may turn into a very difficult if not
impossible task (Gutin and Punnen 2004, Applegate et al. 2006). The formulation of a
TSP problem starts with introducing a graph G = (V, A) where V is a set of n vertices
and A is a set of arcs or edges. Let C : (cij) be a distance (or cost) matrix associated
with A. The TSP will then try to determine a minimum distance circuit passing
through each vertex once and only once. Such a circuit is known as a tour or
Hamiltonian circuit (or cycle) (Laporte 1992, Gutin and Punnen 2004). Several exact
algorithms have been proposed for the TSP, among which DFJ is one of the earliest
formulations that can be explained in the context of integer LP (Applegate et al. 2006).
In this algorithm, a binary variable x_{ij} is associated with every arc (i, j), i ≠ j, and
set equal to 1 if and only if arc (i, j) is used in the optimal solution. The objective
function is then to minimize \sum_{i \neq j} c_{ij} x_{ij}, subject to the following constraints:

\sum_{j} x_{ij} = 1, \quad \forall i \in V, \; (i, j) \in A   (1)

\sum_{k} x_{ki} = 1, \quad \forall i \in V, \; (k, i) \in A   (2)

\sum_{i \in S} \sum_{j \notin S} x_{ij} \geq 1, \quad \forall S \subset V, \; 2 \leq |S| \leq n - 2, \; (i, j) \in A   (3)

Constraints (1) and (2) require exactly one arc to leave and one arc to enter each
vertex, while constraint (3) eliminates subtours.
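In practice the DFJ formulation is handed to an exact integer-programming solver; purely to illustrate the objective of finding a minimum-cost Hamiltonian circuit, the sketch below enumerates all tours of a small, hypothetical cost matrix:

```python
# Minimal brute-force TSP: minimum-cost Hamiltonian circuit on a small
# cost matrix (hypothetical distances). Exact IP solvers handle the DFJ
# formulation in practice; this sketch only illustrates the objective.
from itertools import permutations

c = [  # c[i][j]: cost of arc (i, j)
    [0, 2, 9, 10],
    [1, 0, 6, 4],
    [15, 7, 0, 8],
    [6, 3, 12, 0],
]
n = len(c)

# Fix vertex 0 as the start, try every order of the remaining vertices,
# and sum arc costs around the circuit (including the return to 0).
best_cost, best_tour = min(
    (sum(c[t[k]][t[(k + 1) % n]] for k in range(n)), t)
    for t in ((0,) + p for p in permutations(range(1, n)))
)
print(best_cost, best_tour)  # 21 (0, 2, 3, 1)
```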
other in the bipartite travel time graph. In order to find the minimum travel time for
all w outstanding crew requests, the original TSP is simultaneously applied to w sub-
problems. Each sub-problem is derived by assuming a certain crew (out of all crews
with outstanding requests) to be the last crew receiving crane service and as a result,
there will be w different sub-problems that need to be independently solved using the
TSP formulation. This method thus substantially reduces the necessary
calculations, as it requires solving only w smaller sub-problems rather than evaluating
all w! sequences by brute force. Figure 4 is a graphical illustration of the bipartite travel time
graph in which each arc weight represents the travel time of the crane hook along that
arc in minutes. A sample crew request list received by the crane operator is also
shown in this Figure for which w = 3 since there are three outstanding crane service
requests to be fulfilled (crew 1 requests material from storage area 2, crew 2 requests
material from storage area 3, and crew 3 requests material from storage area 1).
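The decomposition can be sketched as follows: each sub-problem fixes one candidate last crew, finds the best order of the remaining requests, and the overall minimum is taken across sub-problems. The travel times below are hypothetical placeholders, not the weights of Figure 4:

```python
# Sketch of the w sub-problem decomposition for sequencing crane service
# requests. Travel times (minutes) are hypothetical, not Figure 4's weights.
from itertools import permutations

# travel[a][b]: hook travel time from node a to node b;
# node 0 is the crane hook's current position.
travel = {
    0: {1: 3, 2: 5, 3: 4},
    1: {2: 2, 3: 6},
    2: {1: 2, 3: 3},
    3: {1: 6, 2: 3},
}
requests = [1, 2, 3]  # w = 3 outstanding crew requests

def path_time(order):
    """Total hook travel time for serving the requests in the given order."""
    total, pos = 0, 0
    for node in order:
        total += travel[pos][node]
        pos = node
    return total

# One sub-problem per candidate last crew: w sub-problems in total,
# each optimizing the order of the remaining requests.
best = min(
    min((path_time(p + (last,)), p + (last,))
        for p in permutations([r for r in requests if r != last]))
    for last in requests
)
print(best)  # (8, (1, 2, 3))
```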
CONCLUSIONS
Previous research in areas such as optimization of crane layout pattern and planning
of physical crane motion has shown the high potential of automating crane operations
in improving productivity and decreasing the overall project cost. This is
[Three TSP sub-problem formulations: each minimizes the total hook travel time,
subject to assignment constraints, with a different crew fixed as the last to be served.]
REFERENCES
Al-Hussein, M., Alkass, S., and Moselhi, O. (2005). “Optimization Algorithm for Selection and
on Site Location of Mobile Cranes.” J. of Const. Engrg. and Mngt., ASCE, 131(5), 579-590.
Bazaraa, M. S., Jarvis, J. J., and Sherali, H. D. (2009) Linear Programming and Network Flows,
Fourth Edition, John Wiley and Sons, New York, NY.
Balaguer, C., and Abderrahim, M. (2008) Robo. and Auto. in Const. , I-Tech Education and
Publishing KG, Vienna, Austria.
Balaguer, C., Giménez, A., Padron, V., and Abderrahim, M. (2000). “A climbing autonomous
robot for inspection applications in 3D complex environment.” Robotica, 18(3), 287-297.
Applegate, D. L., Bixby, R. E. and Cook, W. J. (2006) The Traveling Salesman Problem: A
Computational Study, Princeton University Press, Princeton, NJ.
Everett, J.G., and Slocum, A.H. (1993). ”CRANIUM: device for improving crane productivity
and safety.” J. of Const. Engrg. and Mngt, ASCE, 119(1), 23–39.
Gambao, E., Balaguer, C. Barrientos, A., Saltaren, R., and Puente, E. (1997). “Robot assembly
system for the construction process automation.” IEEE international Conference on Robotics
and Automation (ICRA’97) Albuquerque (USA), 46-51.
Gutin, G., and Punnen, A. P. (2004) The Traveling Salesman Problem and Its Variations, Kluwer
Academic Publishers, Dordrecht, Netherlands.
Groover M. P. (2008) Automation, Production Systems, and Computer-Integrated Manufacturing,
Third Edition, Pearson Education, Upper Saddle River, NJ.
Hasegawa, Y. (2006). “Construction Automation and Robotics in the 21st Century.” 23rd
International Symposium on Robotics and Automation in Construction (ISARC’06), Japan,
October 2006, Tokyo, Japan
Huang, C., Wong, C.K., and Tam, C.M. (2010) “Optimization of tower crane and material supply
locations in a high-rise building site by mixed-integer linear programming.” J. of Auto. in
Const., Elsevier, 19(5), 656-663.
Koch, S., König, K., and Wäscher, G. (2009). “Integer linear programming for a cutting problem
in the wood-processing industry: a case study.” Journal of International Transactions in
Operational Research, John Wiley, 16(6), 715–726.
Lawler, E., Lenstra, J., Rinnooy A., and Shmoys, A. (1985) The Traveling Salesman Problem,
John Wiley, New York, NY.
Laporte, G., (1992). “The Traveling Salesman Problem: An overview of exact and approximate
algorithms.” Euro. J. of Oper. Res., Elsevier Science, 59, 231-247.
Lee, G., Kim, H., Lee, C., Ham, S., Yun, S., Cho, H., Kim, B., Kim, G., and Kim, K. (2009). “A
laser-technology-based lifting-path tracking system for a robotic tower crane.” Automation in
Construction, Elsevier Science, 18, 865-874.
Naito, J., Obinta, G., Nakayama, A., and Hase, K. (2007). “Development of a Wearable Robot for
Assisting Carpentry Workers.” International J. of Adv. Robo. Sys., In-Tech, 4(4), 431-436.
Ojha, A., Das, B., Mondal, S., and Maiti, M. (2010). “A solid transportation problem for an
item with fixed charge, vehicle cost and price discounted varying charge using genetic
algorithm.” Applied Soft Computing, Elsevier Science, 10(1), 100-110.
Rosenfeld, Y. (1995). “Automation of existing cranes: from concept to prototype.” Automation in
Construction, Elsevier Science, 4, 125-138.
Rosenfeld, Y., and Shapira, A. (1998). “Automation of existing tower cranes: economic and
technological feasibility.” Auto. in Const., Elsevier Science, 7 (4), 285–298.
Tantisevi, K., and Akinci, B. (2008). “Simulation-Based Identification of Possible Locations for
Mobile Cranes on Construction Sites.” J. Comp. in Civ. Engrg., ASCE, 22(1), 21-30.
Winston, W. (2003) Operations Research: Applications and Algorithms, 4th Edition, Duxbury
Press, Philadelphia, PA.
Zhang, P., Harris, F. C., Olomolaiye, P. O., and Holt, G. D. (1999). “Location Optimization for a
Group of Tower Cranes.” J. of Constr. Engrg. and Mngt., ASCE, 125(2), 115-122.
The Competencies of BIM Specialists: A Comparative Analysis of the
Literature Review and Job Ad Descriptions
ABSTRACT
COMPETENCIES
In North American countries, competencies are regarded as being a set of
characteristics (knowledge, skills and attitudes - KSAs) that underlie (affect) the
successful performance (or behavior) of the individual at work (Slivinski and Miles
1996). In Europe, competencies are understood differently: employees demonstrate
the possession of a competence when they achieve or exceed expected results in their
work (Parry 1996).
Companies should select competencies in practical and concrete terms that are
aligned with the organization’s goals. Zingheim and Schuster (2009) recommend
keeping competency programs relatively simple and easy to understand. Hoff (2010)
outlines the steps necessary for creating a competency model: collecting information
about a job (tasks and skills), creating a draft model of competencies, collecting
quantitative and qualitative feedback to support the competencies, and refining the final
model.
The present study addresses the research question “what are the individual
competencies necessary to perform functions related to BIM?”, and it is limited to the
first of the steps mentioned above that are needed for creating a competence model
for BIM specialists.
Owing to the different origins of the term ‘competency’ and the wide range of
types of competencies, it has been defined in various ways. For the purposes of this
study, the definitions that are considered are those that are relevant to the domain of
human resources.
Although many studies about the issue of ‘competency’ have been published
in recent years, the concept of competency is often mixed up with other terms such as
aptitude, qualifications, skill/ability, knowledge and attitude (Table 1). The present
study demarcates individual competencies in accordance with the terms and
definitions outlined in Table 1.
METHODOLOGY
A survey of the technical literature was conducted with the aim of searching
for references to any competencies that BIM specialists might need.
Job ads from the main labour market for BIM-related careers (i.e., that of the
United States) were collected from the Internet, particularly from “BIM Wiki” and
“LinkedIn” weblogs. The job descriptions analyzed were from more than 20 large
companies in the U.S., some of them with international branches. Thus, this study
was confined to the social context where these jobs can be found.
A Content Analysis process (Krippendorff 2004) was performed using input
from BIM job descriptions and the technical literature. Content Analysis is a process
that involves categorizing qualitative textual data into clusters of similar entities or
conceptual categories, in order to identify patterns and relationships between themes,
which can either be identified a priori or just emerge from the analysis. In this
method, the texts are broken down into units. This study has identified the units by
author and by job title.
The literature review and job descriptions covered individual competencies in
accordance with the five categories set out in Table 1. A list of competencies was
generated from the responsibilities and functions of several BIM professions from
both sources. A comparative analysis was carried out between the required
competencies in the job market for BIM-related careers and those cited in the
literature.
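The categorization step of such a content analysis can be sketched as a keyword-based pass over the ad texts. This is an illustration only: the category keywords below are hypothetical placeholders, not the coding scheme actually used in the study.

```python
from collections import Counter

# Illustrative keyword lists for the five competency categories of Table 1
# (placeholders, not the study's actual coding scheme).
CATEGORIES = {
    "knowledge": ["construction process", "information technology"],
    "skill": ["bim software", "communication", "critical thinking"],
    "attitude": ["self-driven", "teamwork"],
    "qualification": ["bachelor", "master", "years of experience"],
    "aptitude": ["spatial reasoning"],
}

def code_ads(ads):
    """Count, per category, how many ads mention each keyword
    (the unit of analysis here is one job ad)."""
    counts = {cat: Counter() for cat in CATEGORIES}
    for ad in ads:
        text = ad.lower()
        for cat, keywords in CATEGORIES.items():
            for kw in keywords:
                if kw in text:
                    counts[cat][kw] += 1
    return counts

ads = [
    "BIM Manager: strong communication, experienced in BIM software, self-driven.",
    "Requires Bachelor degree, BIM software expertise and teamwork.",
]
counts = code_ads(ads)
```

Counting mentions per category in this way yields the frequency ordering that a table such as Table 3 summarizes.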
RESULTS
A large number of the job ads published online between 2009 and 2010,
advertising BIM related positions, were collected and analyzed (N=31). Although the
job titles in the ads varied, their classifications were standardized in this work. Table
2 provides a statistical summary of the ad sample (breakdown into categories of job
and company).
The results of the content analysis that was conducted for the job ads and
literature are summarized in Table 3. In this table, the numbers in parentheses (
) in the left-hand column refer to the number of ads mentioning the item, and those
between square brackets [ ] in the right-hand column refer to publications in the
reference section, which are marked in the same way. There were no aptitudes
mentioned in the job ads that were collected.
DISCUSSION
In terms of education, the requirements collected from the job ads indicate
that companies sometimes accept applicants with a lower degree than that stipulated
in the literature. This is probably because the specific responsibilities of a BIM
Manager vary considerably among companies, sometimes with priority being given to
technical rather than management issues; in these cases, a higher degree may not be
needed. On the other hand, with regard to experience, the job market on average
expects professionals to have worked for a longer period than is recorded in the
literature (5-7 vs. 3-4 years).

Table 3. The competencies required for a BIM Manager in BIM job ads and
specified in the technical literature (items are listed in higher-to-lower order of
frequency).
Both the AEC companies and the reviewed authors regard some core abilities
like oral communication, team/collaborative work and management as very important
for a BIM Manager. In contrast, the analysis also showed that the job market is more
focused on functional skills related to systems & technological abilities, especially
skills in BIM software/applications, while the literature is more concerned that the
BIM Manager has the foundational skills of critical and systemic thinking.
With regard to the necessary background knowledge, the literature suggests that
information technologies, construction processes and management are the most
important areas that a BIM Manager must know. The job market also expects
professionals to have this same background knowledge, although it focuses more on
specific BIM-supported activities.
Finally, the job market seeks to hire self-driven professionals, who are motivated by
the benefits of BIM technology, as well as those who have a positive attitude to
teamwork, whereas the literature more often concentrates on the need for attitudes
conducive to working in a team environment.
Studies for collecting quantitative and qualitative feedback from BIM professionals to
support the competencies listed here could refine and finalize a model of
competencies.
ACKNOWLEDGEMENTS
The first author would like to express her gratitude to CAPES for partially
funding this research. The second author is grateful to CNPq for partially funding this
research.
REFERENCES
Note: The number in [n] at the end of some references refers to Table 3.
Allison, H. (2010). “10 Things every BIM Manager should know”. Vico Guest
Blogger Series. <http://www.vicosoftware.com/vico-blogs/guest-blogger/
tabid/ 88454/bid/22833/10-Things-Every-BIM-Manager-Should-Know.aspx>
(Dec. 10, 2010). [1]
Barison, M. B. and Santos, E.T. (2010). “An overview of BIM specialists”.
Computing in Civil and Building Engineering, Proceedings of the
International Conference, Nottingham, UK, Nottingham University Press,
Paper 71, p. 141.
Bronet, F., Cheng, R., Eastman, J., Hagen, S., Hemsath, S., Khan, S., Regan, T., Ryan,
R., and Scheer, D. (2007). “Draft: The Future of Architectural Education”.
AIA TAP 2007. [2]
Casey, M. J. (2008). “BIM in Education: Focus on Local University Programs”.
BuildingSmart Alliance National Conference Engineering & Construction,
Washington, DC. <http://projects.buildingsmartalliance.org/files/
?artifact_id=1809>(Jan. 9, 2011). [3]
Chasey, A. and Pavelko, C. (2010). “Industry Expectations Help Drive BIM in
Today’s University Undergraduate Curriculum”. JBIM, Fall, 2010.
<http://www.wbdg.org/pdfs/jbim_fall10.pdf >(Dec, 2010). [4]
Cheng, R. (2006). “Questioning the role of BIM in architectural education”.
AECBytes. <http://www.aecbytes.com/viewpoint/2006/issue_26.html>
(Dec. 5, 2007). [5]
Colman, A. M. (2001). “A dictionary of psychology”. Oxford University Press,
Oxford.
Computer Integrated Construction Research Program (CICRP) (2009). “BIM Project
Execution Planning Guide – Version 1.0”. The Pennsylvania State University,
Pennsylvania, PA. [6]
Cooperative Research Centre for Construction Innovation (CRCCI) (2009). “National
Guidelines for Digital Modelling”. <http://www.construction-
innovation.info/images/pdfs/BIM_Guidelines_Book_191109_lores.pdf>(Sep,
2010).[7]
C3 Consulting (2009). “Project-Level BIMM”. Infocus. <http://c3consulting.com.au/
newsletter/infocus-october-2009.html> (Dec. 13, 2010). [8]
Dossick, C. S., Neff, G. and Homayouni, H. (2009). “The Realities of BIM for
Collaboration in the AEC Industry”. Construction Research Congress.
<http://www.ascelibrary.org>(Oct. 25, 2010). [9]
Chih-Yuan Chu1

1 Assistant Professor, Department of Civil Engineering, National Central University,
No. 300, Jhongda Rd., Jhongli City, Taoyuan County 32001, Taiwan; Tel: +886-3-422-7151
ext 34151; Fax: +886-3-425-2960; email: jameschu@ncu.edu.tw
ABSTRACT
A pedestrian guidance system is one of the most critical components of
emergency evacuation in complex building geometries in the event of accidents and
natural disasters. However, pre-determined, fixed emergency guidance systems
provide only static information on evacuation routes to exits and do not respond to
real-time situations. In the case of a large-scale evacuation, these evacuation routes are
likely to be congested because a large number of pedestrians attempt to leave the
hazardous areas at the same time. To address this problem, this paper proposes a
method for planning adaptive emergency evacuation guidance to support the fixed
guidance system. The method includes two steps. First, the spatial distribution of the
pedestrians is converted into a digital image and the congestion areas in the facility are
identified with the techniques of digital image processing. Second, the identified
congestion areas are considered as virtual obstacles in addition to the original obstacles,
and adaptive guidance that instructs pedestrians to bypass these areas can then be
generated.
INTRODUCTION
Emergency evacuation guidance systems are critical for buildings because
they largely determine the time a pedestrian needs to leave the hazardous area in the
event of accidents and natural disasters. The planning of emergency evacuation
guidance systems is particularly important for complex building geometries such as
high-rise buildings, train stations, and airport terminals, because these buildings
usually serve more pedestrians and are larger than other types of buildings. Studies
of the optimal design of guidance systems are rare. Among the few that do exist, Chu
(2010) developed an approach to designing optimal evacuation guidance systems given
polygonal obstacles. The fixed guidance system is optimal in the sense that: (1) all
pedestrians are covered by the guidance system; (2) after a pedestrian finds the first
sign, the evacuation direction information is provided unambiguously without
requiring any judgment on the part of the pedestrian; and (3) the guidance allows a
pedestrian to evacuate to the closest exit via the shortest path. However, pre-
determined, fixed emergency evacuation guidance systems provide only static
information on the evacuation routes to exits. In the case of a large-scale evacuation,
these routes are likely to be congested because too many pedestrians attempt to access
the exits simultaneously. Motivated by the need for the dynamic information of the
evacuation routes, this paper proposes a method to find the adaptive guidance strategy
in response to the real-time status of congestion in the facility to support the fixed
guidance systems.
There are two major parts to this research. In the first part, a fixed guidance system
provides static evacuation information to the pedestrians. The evacuation process is
monitored by a congestion detection mechanism. Based on the pattern of pedestrian
distribution, the bottlenecks of the evacuation are determined and the adaptive
guidance system generates alternative evacuation routes at a regular interval for the
pedestrians to guide them to bypass the congestion areas and expedite the evacuation
process. In the second part, a cellular automata (CA) pedestrian simulation model is
adopted to evaluate the benefit of the adaptive guidance and validate the methodology.
CA models proposed by Burstedde et al. (2001) simulate pedestrian behaviors
adequately and are highly efficient for the large-scale simulation of human movement
under the emergency evacuation guidance in complex environments. Thus, CA models
were chosen for this research to evaluate the performance of evacuation guidance
systems. Each of the fixed and the adaptive guidance systems generates a static field,
which drives the pedestrians to move as if they are following the corresponding
guidance in the simulation model. The ratio of pedestrians following the fixed
guidance to those following the adaptive guidance is decided by the compliance rate,
the percentage of pedestrians that follow adaptive guidance. As a result, the critical
measures such as the maximum evacuation time under the evacuation guidance system
can be evaluated numerically. Further, by using this simulation tool, the effects of the
important factors including the update interval and the compliance rate of the adaptive
guidance can be determined.
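The interplay of the compliance rate and the two static fields described above can be sketched as follows. This is a minimal illustration, not the exact CA model of Burstedde et al. (2001): the field values, the 4-neighbour move rule, and the sequential update are simplifying assumptions.

```python
import random

def step(peds, fields, grid_shape, occupied):
    """One CA update: each pedestrian moves to the free 4-neighbour cell
    with the smallest static-field value of the guidance it follows."""
    rows, cols = grid_shape
    for ped in peds:
        field = fields[ped["guidance"]]  # "fixed" or "adaptive"
        r, c = ped["pos"]
        best, best_val = (r, c), field[r][c]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and (nr, nc) not in occupied and field[nr][nc] < best_val:
                best, best_val = (nr, nc), field[nr][nc]
        occupied.discard((r, c))
        occupied.add(best)
        ped["pos"] = best

def assign_guidance(peds, compliance, rng=random.Random(0)):
    """Compliance rate: the fraction of pedestrians following adaptive guidance."""
    for ped in peds:
        ped["guidance"] = "adaptive" if rng.random() < compliance else "fixed"

# Demo: a 3x3 room; field values are distances to an assumed exit
# (top-right for the fixed field, bottom-left for the adaptive one).
fixed_field = [[2, 1, 0], [3, 2, 1], [4, 3, 2]]
adaptive_field = [[2, 3, 4], [1, 2, 3], [0, 1, 2]]
peds = [{"pos": (1, 1)}, {"pos": (2, 1)}]
assign_guidance(peds, compliance=0.5)
occupied = {p["pos"] for p in peds}
step(peds, {"fixed": fixed_field, "adaptive": adaptive_field}, (3, 3), occupied)
```

Each pedestrian thus descends its assigned static field toward an exit, which is how the simulation lets the fixed and adaptive guidance drive movement side by side.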
METHODOLOGY
Two major assumptions are made in the methodology of this paper. The first
assumption is that the pedestrians follow either fixed or adaptive guidance. The second
assumption is that the space of the facilities of interest is separated into squares of
equal size. The two assumptions are explained next.
In this paper, two types of pedestrians are assumed. In addition to the dynamic
field and randomness in the CA simulation, some pedestrians follow only the fixed
guidance for evacuation while the others follow the adaptive guidance. Note that the pedestrians are
assumed to have no knowledge of the floor plan and are not capable of searching for
the evacuation routes without guidance. Although theories have been developed for
wayfinding with partial knowledge of space or imperfect guidance (Golledge, 1999),
this assumption is necessary because the purpose of this study is to design an adaptive
guidance system and evaluate its performance. The effect of the guidance system
cannot be evaluated adequately when pedestrians' wayfinding behavior is involved.
Therefore, the wayfinding behavior of pedestrians without following emergency
evacuation guidance is excluded from this paper.
The technique of digital image processing is one of the key components in the
identification of congestion areas. Because digital images are composed of pixels, it is
required to convert the facility under consideration into squares of equal size. Similarly,
the CA pedestrian simulation model that will be used for the validation of the
methodology is the discrete-space approximation of the actual pedestrian movement. It
discretizes the space into cells and each cell can only be occupied by a single person.
The discrete-space assumption in this paper is therefore made to accommodate these
two tools. Although model accuracy could be affected by the discretization, the cell
size can be reduced if higher accuracy is required, as described earlier in the
literature review. In this research, the
space of a cell is 40 cm×40 cm, which is the space an average person occupies and
widely accepted in CA simulation.
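As a minimal sketch of this discretization (the positions and room size below are illustrative), continuous pedestrian coordinates can be mapped onto a grid of 40 cm × 40 cm cells, with each cell marked occupied or free:

```python
import numpy as np

CELL = 0.4  # cell edge length in metres (40 cm, an average person's footprint)

def to_occupancy(positions, width_m, height_m):
    """Map continuous (x, y) pedestrian positions onto a binary grid of
    40 cm x 40 cm cells; each cell is occupied (1) or free (0)."""
    rows = int(round(height_m / CELL))
    cols = int(round(width_m / CELL))
    grid = np.zeros((rows, cols), dtype=np.uint8)
    for x, y in positions:
        r = min(int(y / CELL), rows - 1)  # clamp points on the boundary
        c = min(int(x / CELL), cols - 1)
        grid[r, c] = 1
    return grid

# Three pedestrians in an assumed 4 m x 4 m area -> a 10 x 10 cell grid.
grid = to_occupancy([(0.1, 0.1), (0.5, 0.1), (3.9, 3.9)], width_m=4.0, height_m=4.0)
```

The resulting binary grid is exactly the form a digital image takes, which is what allows the image-processing operations of the next step to be applied to the pedestrian distribution.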
Figure 1, which will be used in the numerical example later in the paper, is
used to explain the problem of relying only on the fixed guidance system in emergency
evacuation. In the figure, the black areas represent the obstacles and the pedestrians are
represented by gray areas. A stairway connecting to the ground floor is marked with an
exit sign. To explain the methodology in detail, a 30 m×40 m space is marked in the
lower right part of the figure and the following discussions will be focused on this area.
As the marked area shows, the pedestrians are moving from north and west to leave the
floor via the stairway following the fixed guidance system. There are two sources of
bottlenecks caused by the fixed guidance: pedestrians from north pass the same ticket
gate to access the exit and all of those from west use the narrow space between the
wall and a column. Because the pedestrians follow the fixed guidance and all of them
are taking the shortest paths, significant congestions are formed at the two bottlenecks.
Without the adaptive guidance, the alternative routes are ignored and the evacuation
process could be delayed.
After the step of noise removal, the next task is to search for major congestion
areas with simple shapes in the facility. In the terminology of image processing, it is
equivalent to finding objects of interest in an image, which can be done by smoothing
out object outlines, filling small holes, and eliminating small projections in the image
(Umbaugh, 2005). The common operations include erosion and dilation. Erosion
shrinks objects by eroding their boundaries to simplify their shapes in an image, and
dilation expands the objects to compensate for the loss of size in the erosion step. By
applying erosion and dilation in sequence, objects with simpler shapes can be identified.
When this operation is applied to the spatial distribution of pedestrians, the irregular
shapes of small groups of pedestrians are smoothed out. Therefore, the planning of the
adaptive guidance is not affected by these relatively small congestion areas. Figure 2(c)
and Figure 2(d) show the effects of erosion and dilation respectively, and the results
are the congestion areas that will be considered in the following procedure of adaptive
guidance design. Finally, it should be emphasized that the parameters in the image
processing algorithms determine the size and number of congestion areas. The
appropriate parameters for different scenarios of emergency evacuation could be
different and more research would be required for this topic.
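The erosion-then-dilation sequence (known in image processing as morphological opening) can be sketched as follows. This is a simplified illustration with a 3×3 structuring element on a small synthetic grid, not the exact parameters used in the paper:

```python
import numpy as np

def erode(img):
    """Binary erosion (3x3 square element): a cell stays occupied
    only if it and all 8 of its neighbours are occupied."""
    p = np.pad(img, 1)  # zero padding at the borders
    out = np.ones_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def dilate(img):
    """Binary dilation: a cell becomes occupied if it or any of
    its 8 neighbours is occupied."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# Occupancy grid: 1 = occupied 40 cm x 40 cm cell (synthetic example).
occupancy = np.zeros((12, 12), dtype=np.uint8)
occupancy[2:9, 2:9] = 1   # a major congestion area
occupancy[10, 10] = 1     # a lone pedestrian (treated as noise)

# Opening (erosion then dilation) removes the small group and keeps the
# large congestion area with a simplified outline.
congestion = dilate(erode(occupancy))
```

The lone pedestrian disappears while the large block survives with a slightly simplified boundary; the element size plays the role of the parameters noted above that control how many congestion areas are detected.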
Adaptive guidance
The main concept of the adaptive guidance is to consider the congestion areas
identified above as virtual obstacles. A new optimal guidance system considering both
the original obstacles and the up-to-date congestion areas as obstacles is generated
with the method proposed in Chu (2010). Due to the additional obstacles, the shortest
paths to the exits and the optimal guidance would be different. By comparing the
routes suggested by the original guidance system and the adaptive guidance, the
required information to instruct the pedestrians to bypass the congestion areas can be
determined. The procedure can be explained by Figure 2(a) and the congestion areas
identified in Figure 2(d) are drawn in Figure 2(a) for comparison. As explained above,
the pedestrians are moving toward the exit from the west and north. It can be seen from
the figure that the alternative guidance (arrows in the figure) instructs the pedestrians
to bypass the congestion areas. One of the practical ways for implementing this
adaptive guidance is to deploy staff members at appropriate locations and guide the
pedestrians to use the less congested routes with vocal or gestural instructions. However,
the mechanism for providing the adaptive guidance is not specified in this study and
still needs more tests and experiments.
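A minimal sketch of this re-planning idea, assuming a grid discretization and unit-cost moves (the method of Chu (2010) computes optimal guidance more generally): a breadth-first search from the exits yields shortest evacuation distances, and re-running it with congestion cells marked as virtual obstacles yields the alternative routes.

```python
from collections import deque

def evacuation_field(grid, exits):
    """BFS distance-to-nearest-exit over a grid; cells marked 1 are
    obstacles (walls, or congestion areas treated as virtual obstacles).
    Unreachable/obstacle cells keep the value None."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for r, c in exits:
        dist[r][c] = 0
        q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

# An assumed 4 x 6 open room with one exit in the top-right corner.
walls = [[0] * 6 for _ in range(4)]
fixed = evacuation_field(walls, exits=[(0, 5)])

# Treat a detected congestion cell as a virtual obstacle and re-plan:
adaptive_grid = [row[:] for row in walls]
adaptive_grid[0][3] = 1  # congestion blocks the direct corridor
adaptive = evacuation_field(adaptive_grid, exits=[(0, 5)])
```

Comparing the two distance fields cell by cell shows where the adaptive routes diverge from the original shortest paths, which is the information needed to instruct pedestrians to bypass the congestion.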
NUMERICAL EXAMPLE
The floor B1 of the Taipei Train Station, the largest transportation terminal in
Taiwan, is used as an example to demonstrate and validate the proposed methodology.
Figure 1 shows the layout of floor B1 with a length of approximately 197 m and a
width of 143 m. The floor provides space for ticket checking and passenger waiting
areas. The north part is reserved for conventional rail and the south part is dedicated to
high-speed rail. The four stairways connecting to the ground floor above serve as the
exits in this example and are marked with exit signs. Other stairways connect to floor
B2 below; however, because pedestrians would try to move upward during an
evacuation, these stairways are not considered in this example. All the areas in the
figure that constitute obstacles to pedestrians are black; these include walls, columns,
and ticket gates.
Figure 1 also shows the simulation under the fixed guidance after 60 s based
on the CA implementation proposed in Chu (2009) and the optimal fixed guidance
system proposed in Chu (2010). The figure indicates several types of evacuation
bottlenecks when a large number of pedestrians (6,000 in total) are following the fixed guidance.
The first source of bottleneck is the ticket gates (double circles). The second source of
bottleneck is the columns next to the stairways (solid circles) and the third bottleneck
is the narrow corridors (dashed circles). These results are useful for identifying
potential problems in the case of emergency situations, and they provide evidence for
the need of adaptive guidance to improve the evacuation performance.
The influence of the compliance rate, which is the percentage of the pedestrians
following the adaptive guidance, is also tested. In this analysis, the update interval of
15 s is selected because the adaptive guidance has the strongest effect on the
evacuation and the change due to the compliance rates could be observed more clearly.
As expected, the cases of 100 and 1,000 pedestrians are not affected by the adaptive
guidance because no congestion was detected. By comparing the compliance rates of
75%, 50%, and 25%, it can be observed that the compliance rate of 50% has the lowest
maximum evacuation times. This result is not very surprising because in this particular
example most of the alternative routes are relatively close to the original shortest
routes. As a result, splitting the pedestrians equally into two nearby routes has the
greatest improvement. Note that the analysis of the compliance rate is not exhaustive,
and the optimal compliance rate of 50% applies only to this example.
Finally, Figure 3 shows the simulation after 60 s under the adaptive guidance
with an update interval of 15 s and a compliance rate of 50%. The case was chosen
because its performance is the best and the effects of the adaptive guidance are easier
to observe. The figure is useful for an overall understanding of the performance of the
guidance system. The adaptive guidance that indicates alternative routes is shown in
the figure as arrows. Compared to the fixed guidance, half of the pedestrians are taking
advantage of the alternative routes to bypass the congestion areas and the reduction of
the congestion areas is significant. It is noteworthy that most of the alternative routes
provided by adaptive guidance are close to the original routes from the fixed guidance
for this example. The implication is that the adaptive guidance and the fixed guidance
are identical for the most part, and the adaptive guidance is required only next to the
congestion areas. It implies that the implementation of the methodology would be
straightforward and feasible. The exception of the above observation is represented by
the solid line (original route) and the dashed line (alternative route) in the figure. The
original route shows that the shortest path to the exit without considering the
congestion is via the top right stairway. However, the congestion areas marked with
the dashed circle completely blocks the corridor. As a result, the adaptive guidance
guides the pedestrians to take the alternative route that leads to the lower right stairway.
In this case, the associated adaptive guidance is relatively far from the congestion area,
a result that would be difficult to obtain without the systematic approach proposed in
this research.
evacuation times. The example also finds that the compliance rate has impact on the
evacuation times and should be considered when the adaptive guidance is designed.
ACKNOWLEDGMENTS
This work was supported by the National Science Council of Taiwan through
research grant NSC 98-2221-E-008-078.
REFERENCES
Burstedde, C., Klauck, K., Schadschneider, A., and Zittartz, J. (2001). “Simulation of
pedestrian dynamics using a two-dimensional cellular automaton.” Physica A, 295,
507–525.
Chu, C.-Y. (2009). “A computer model for selecting facility evacuation design using
cellular automata.” Computer-Aided Civil and Infrastructure Engineering, 24(8),
608–622.
Chu, C.-Y. (2010). “Optimal emergency evacuation guidance design for complex
building geometries.” under review.
Golledge, R. (1999). Wayfinding behavior: Cognitive mapping and other spatial
processes. Johns Hopkins University Press, Baltimore, MD.
Helbing, D. and Johansson, A. (2007). “Dynamics of crowd disasters: An empirical
study.” Physical Review E, 75, 046109–1–046109–7.
Hoogendoorn, S., Daamen, W., and Bovy, P. (2003). “Extracting microscopic
pedestrian characteristics from video data.” Transportation Research Board 2003
Annual Meeting CD-ROM.
Jin, T. (2002). “Visibility and human behavior in fire smoke.” SFPE Handbook of Fire
Protection Engineering, P. J. DiNenno, ed., National Fire Protection Association,
Quincy, MA, USA, 3 edition, chapter 2-4, 2–42–2–53.
Umbaugh, S. (2005). Computer Imaging: digital image analysis and processing. CRC
Press.
IMPROVING THE ROBUSTNESS OF MODEL EXCHANGES USING
PRODUCT MODELING ‘CONCEPTS’ FOR IFC SCHEMA
Manu Venugopal1, Charles Eastman2, Rafael Sacks3, and Jochen Teizer4
1 PhD Candidate, School of Civil and Environmental Engineering, Georgia Institute of
Technology, 790 Atlantic Dr. N.W., Atlanta, GA, 30332-0355, PH (510) 579-8656.
E-mail: manu.menon@gatech.edu
2 Professor, College of Computing and College of Architecture, Georgia Institute of
Technology, Atlanta, GA, 30332-0155. E-mail: charles.eastman@coa.gatech.edu
3 Associate Professor, Faculty of Civil and Environmental Engineering, Technion-Israel
Institute of Technology, Haifa, 32000, Israel. E-mail: cvsacks@techunix.technion.ac.il
4 Assistant Professor, School of Civil and Environmental Engineering, Georgia
Institute of Technology, Atlanta, GA, 30332-0355. E-mail: teizer@gatech.edu
ABSTRACT
Empirical approaches to define Model View Definitions (MVD) for exchange
specifications exist and are expensive to build, test, and maintain. This paper presents
the novel idea of developing modular and reusable MVDs from IFC Product
Modeling Concepts. The need and application for defining model views in a more
logical manner is illustrated with examples from current MVD development. A
particular focus of this paper is on precast entities in a building system. Presented is a
set of criteria to define fundamental semantic concepts articulated within the Industry
Foundation Classes (IFC) to improve the robustness of model exchanges.
Keywords: Building Information Modeling (BIM), Product/Process Modeling, Model
View Definition (MVD), Industry Foundation Class (IFC).
INTRODUCTION
Building Information Modeling (BIM) tools serving the Architecture, Engineering,
Construction (AEC) and Facilities Management (FM) industry cover various domains
and have different internal data model representations to suit each domain. Data
exchange is mostly possible only by hard-coding translation rules. This method is costly to
implement and maintain on an individual system-to-system basis. NIST has estimated
that information copying and recreation is costing the industry 15.8 billion dollars a
year (NIST, 2004). The Industry Foundation Classes (IFC) schema is widely
recognized as the common data exchange format for interoperability within the AEC
industry (Eastman et al. 2008). Although IFC is a rich product-modeling schema, it is
highly redundant, offering multiple ways to define objects, relations and attributes.
Thus, data exchanges are not reliable due to inconsistencies in the assumptions made
in exported and imported data, posing a barrier to the advance of BIM (Eastman et al.
2010). The National BIM Standard (NBIMS) initiative (NIBS, 2008) proposes
facilitating information exchanges through model view definitions (MVD) (Hietanen,
2006). Empirical approaches to define MVD’s for exchange specifications exist and
are expensive to build, test, and maintain (Venugopal et al. 2010). The authors’
experience in developing Precast BIM standard (Precast MVD, 2010), which is one of
the early NBIMS, has given insights into the advantages and disadvantages of the
MVD approach. Some of the deficiencies of current approaches are explained in this
paper to illustrate the need for a formal and rigorous approach to model view
development. We explore a novel idea of developing modular and reusable MVDs
from IFC Product Modeling Concepts. Presented is a set of criteria to define
fundamental semantic concepts articulated within the Industry Foundation Classes
(IFC) to improve the robustness of
model exchanges.
NBIMS PROCESS
Effective exchanges require providing a layer of specificity over the top of an IFC
exchange schema or other exchange schema. The purpose of this layer of information
is to select and specify the appropriate information entities from a schema for
particular uses. Such a subset of the IFC schema that is needed to satisfy one or
many exchange requirements of the AEC industry is defined as a Model View
Definition by the buildingSMART organization (NIBS 2008). The National BIM
Standard Version 1 Part 1 outlines a draft of procedural steps to be followed in the
case of developing model views. The NBIMS process is shown in Figure 1.

Figure 1. Outline of NBIMS model view development process. This research is
aimed at improving the Design and Construct stages of this process.

The focus of this paper is on the translation from the Design to Construct stage in the
model view development process in Figure 1. The Design phase rigorously defines
the model view. This involves translation of exchange requirements from the textual
form so that they can be bound to a particular exchange schema. A model view is a
collection of such information modules, which will be implemented by the software
companies. Example MVDs include
those supporting concept level design review by GSA (GSA, 2010), for structural
steel exchanges by steel fabricators (Eastman et al, 2005), all the exchanges needed to
support precast concrete exchanges from design to fabrication and erection (PCI,
2009), and the pass-off of building information from the contractor to the facility
owner or operator (COBIE2) and others. The Construct phase involves working with
the software companies to implement the model views. This involves creating
mapping of model views into internal data structures. The following section illustrates
the potential barriers of the model view approach and explains the need for a different
approach.
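As an illustration of the kind of mapping involved in the Construct phase (the entity and attribute names here are hypothetical, not actual IFC definitions or the NBIMS procedure), a model view can be thought of as a set of required entities and attributes that an exported model is checked against:

```python
# A hypothetical, simplified "model view": for each entity type in scope,
# the attributes an exchange is required to carry. Real MVDs bind exchange
# requirements to the IFC schema itself.
MODEL_VIEW = {
    "Wall": {"geometry", "material"},
    "Slab": {"geometry", "material", "load_rating"},
}

def validate(model):
    """Check an exported model against the view: every entity type in the
    view must carry every required attribute."""
    errors = []
    for obj in model:
        required = MODEL_VIEW.get(obj["type"])
        if required is None:
            continue  # entity outside the view: ignored by this exchange
        missing = required - obj.keys()
        if missing:
            errors.append((obj["id"], sorted(missing)))
    return errors

model = [
    {"id": "w1", "type": "Wall", "geometry": "...", "material": "concrete"},
    {"id": "s1", "type": "Slab", "geometry": "..."},  # missing attributes
]
errors = validate(model)
```

Checking exchanges against such an explicit subset, rather than by hard-coded per-pair translators, is the motivation the following section develops.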
NEED FOR A FORMAL AND ROBUST MVD APPROACH
IFC is based on the EXPRESS language, which is known to be highly expressive but
lacks a formal definition (Guarino et al. 1997). For example, no standard model view
has been proposed in which a precast architectural facade is modeled and mapped to
and from the IFC schema (Jeong et al. 2009), leading to ad hoc and varied results.
Performance studies of BIM data bases designed to create partial models and run
queries show a strong need for both identifying model views for specific exchanges,
as well as for specifying the exchange protocols in a stricter manner (Nour 2009;
Sacks et al. 2010). The translation from exchange requirements to model views in
the NBIMS process is currently done manually and is error-prone. Moreover, it is
time-consuming and expensive. The base entities from which model views can be
defined are not strictly defined. The model views developed are not based on logical
foundations, so there is no possibility of applying reasoning mechanisms. Moreover,
the required level of detail of model exchanges is an issue that is not specified in
current approaches.
In preparing a set of MVDs, information modelers must determine the
appropriate level of meaning and the typing structure. The structure of a model view
for exchange of product model data between various BIM application tools depends
on the extent to which building function, engineering, fabrication and production
semantics will be embedded in the exchange model. At one end of the spectrum, an
exchange model can carry only the basic solid geometry and material data of the
building model exchanged. The export routines at this level are simple and the
exchanges are generic. In this case, for any use beyond a simple geometry clash
check, importing software would need to interpret the geometry and associate the
meaning using internal representations of the objects received in terms of its own
native objects. At the other end of the spectrum, an exchange file can be structured to
represent piece-type aggregations or hierarchies that define design intent,
procurement groupings, production methods and phasing, and other pertinent
information about the building and its parts. In this case, the importing software can
generate native objects in its own schema with minimum effort, based upon
predefined libraries of profiles, catalogue pieces, surface finishes, and materials and
does not require explicit geometry or other data in every exchange. The export routines
at this level must be carefully customized for each case, since the information must be
structured so that it is suitable for the importing applications supporting each use
case. Different use cases require different information structures. For example, an
architect might group a set of precast façade panels according to the patterns to be
The notion of a Concept is that it is a subset of a product model schema that can be
used to create various higher-level Model View Definitions (MVDs). These modular
sub-units, or Concepts, can be tested separately for correctness and completeness,
easing validation. A related but different purpose for defining product model sub-
schemas is for querying and accessing part of the instance data associated with a
target sub-schema.
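The idea of a Concept as a self-contained, testable sub-schema can be sketched as a reference-closure check. The Ruby fragment below is illustrative only: the entity names and their reference lists are invented for the example, not drawn from the actual IFC schema.

```ruby
require 'set'

# A Concept (a named subset of schema entities) is considered complete when
# every entity its members reference is also inside the Concept, i.e. the
# sub-schema has no broken links. schema_refs maps each entity to the
# entities it references.
def concept_complete?(schema_refs, concept_entities)
  concept = concept_entities.to_set
  concept.all? do |entity|
    schema_refs.fetch(entity, []).all? { |ref| concept.include?(ref) }
  end
end

# Toy fragment: the beam references a material and a shape representation.
schema_refs = {
  'IfcBeam'                => ['IfcMaterial', 'IfcShapeRepresentation'],
  'IfcShapeRepresentation' => ['IfcCartesianPoint'],
  'IfcMaterial'            => [],
  'IfcCartesianPoint'      => []
}

concept_complete?(schema_refs, %w[IfcBeam IfcMaterial])
# false: the beam's shape-representation reference is broken
```

A Concept passing this check can be validated (and queried) in isolation, which is the modularity argument made above.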
Initial Test Model: The main criterion for Concepts is that they must be stand-
alone and testable from the completeness point of view. A Concept should be a
complete subschema that has no broken links or references. This also applies
to retrievable queries. The completeness requirement is strongly influenced by the
optional versus mandatory property of some data fields, which may have to be
adjusted for IFC to work well with Concepts. Figure 2 shows a grouping of various concepts
for a precast piece. A second and important requirement, which was identified during
the current model view work, is the need to avoid redundancy and rework in terms of
CONCLUSION
Product model schemas such as IFC are rich but redundant. To build effective
exchanges, this research introduces a new methodology based on a formal
definition of IFC Concepts. The analysis shows that the MVD
development process needs to transition from the current ad hoc manner to a
more rigorous framework and methodology similar to the one explained in this
research. The semantic meaning of IFC Concepts needs to be defined rigorously and
formally, with strict guidelines. This can help achieve a uniform mapping between
the internal objects of BIM tools and IFC.
Expressiveness and rigor, so that MVD aspects can be represented fully
and consistently, are important. Model views represent different levels of
detail; hence the new methodology should contribute to a better understanding of
model views by providing a concise, object-oriented view of the exchange. It
should be possible to decompose the view into several modular objects (Concepts)
that are more manageable and testable. Moreover, traceability is a very important
feature in the development process. More effective and transparent translation of
user needs (Exchange Requirements) into the design of MVDs is required.
Avoiding unnecessary iterations and redundancy of IFC Concepts can reduce
development time and costs. Work is still in progress on defining the IFC Concepts
and validating them. Given the impact expected from this research, there is a
pressing need to complete it in a timely manner and make its products available
to the IFC development community.
REFERENCES
Eastman C., F. Wang, S-J You, D. Yang, (2005) Deployment of An AEC Industry Sector
Product Model, Computer-Aided Design 37:11, pp. 1214–1228.
Eastman, C., Teicholz, P., Sacks, R. and Liston, K., (2008) BIM Handbook: A Guide to
Building Information Modeling for Owners, Managers, Designers, Engineers and
Contractors, John Wiley & Sons, Inc., New Jersey.
Eastman, C., Jeong, Y.-S., Sacks, R., Kaner, I., (2010). Exchange Model and Exchange
Object Concepts for Implementation of National BIM Standards, Journal of
Computing in Civil Engineering, 24:1 (25).
GSA (2010). GSA BIM Program Overview, Available:
http://www.gsa.gov/portal/content/102276.
Guarino, N., Borgo, S., and Masolo, C., (1997). Logical modelling of product knowledge:
Towards a well-founded semantics for step, Citeseer.
Hietanen, J. and S. Final (2006). "IFC model view definition format." International Alliance
for Interoperability. In Rebolj, D.(ed.): Proceedings of the 24th CIB W78
Conference, Maribor. 26-29 June 2007.
Jeong, Y-S., Eastman, C.M., Sacks, R. and Kaner, I., (2009) Benchmark tests for BIM data
exchanges of precast concrete, Automation in Construction, 18:4, July 2009, pp. 469–
484.
NIBS, (2008) United States national building information modeling standard version 1—Part
1: Overview, principles, and methodologies. (http://nbimsdoc.opengeospatial.org)
NIST, (2004) Gallaher, P., O'Connor, A., Dettbarn, J., Gilday, L., Cost Analysis of Inadequate
Interoperability in the U.S. Capital Facilities Industry, NIST GCR 04-867, U.S.
Department of Commerce Technology Administration National Institute of Standards
and Technology, Advanced Technology Program Information Technology and
Electronics Office Gaithersburg, Maryland 20899.
Nour, M. (2009). Performance of different (BIM/IFC) exchange formats within private
collaborative workspace for collaborative work, ITcon Vol. 14, Special Issue
Building Information Modeling Applications, Challenges and Future Directions , pg.
736-752, http://www.itcon.org/2009/48
Precast IDM, (2009) Eastman, C., Sacks, R., Panushev, I., Aram, V., and Yagmur, E.
Information Delivery Manual for Precast Concrete, PCI-Charles Pankow Foundation.
Available: http://dcom.arch.gatech.edu/pcibim/documents/IDM_for_Precast.pdf.
Precast MVD, (2010) Eastman, C., Sacks, R., Panushev, I., Venugopal, M., and Aram, V.
Precast Concrete BIM Standard Documents:Model View Definitions for Precast
Concrete, PCI-Charles Pankow Foundation. Available:
http://dcom.arch.gatech.edu/pcibim/documents/Precast_MVDs_v2.1_Volume_I.pdf.
Sacks, R, Kaner, I., Eastman, C.M., and Jeong, Y-S, (2010) The Rosewood Experiment –
Building Information Modeling and Interoperability for Architectural Precast
Facades, Automation in Construction 19 (2010) 419–432.
Venugopal, M., Eastman, C., Sacks, R., Panushev, I., Aram, V., (2010) Engineering
semantics of model views for building information model exchanges using IFC,
Proceedings of the CIB W78 2010: 27th International Conference –Cairo, Egypt, 16-
18 November.
Framework for an IFC-based Tool for Implementing
Design for Deconstruction (DfD)
ABSTRACT
Design for deconstruction is a way of thinking about and designing a building
to maximize its flexibility and to ensure that it can be disassembled for
various reasons, such as serving an aging community or becoming obsolete. The goal of
design for deconstruction is to design building elements so that they can be easily
disassembled. This paper presents a framework, using IFC, to enhance the design for
reuse of disassembled building systems in the construction of a new facility.
The framework integrates architectural design with disassembly and constructability
analysis in four main modules. The first module extracts building components'
properties from IFC and creates an internal data structure. The second module uses
the created data structure to construct a graph data model. The third module generates
possible disassembly solutions based on disassembly criteria. The last module compares
the disassembly sequence of the existing building with the assembly sequence of the
newly designed building to obtain the optimal disassembly sequence.
INTRODUCTION
The disassembly of buildings to recover materials and components for future
reuse is not widely practiced in the modern construction industry. No matter how well
a structure is built, it will not last forever. Structural engineers long ago developed the
idea of a "service life", in which a building (or other structure) is designed to be
structurally durable for a given number of years after construction, commonly 35
years today. Moreover, end-of-life for a building generally means end-of-life for the
bulk of its component materials. Conventional construction methods create heavily
integrated building systems that cannot be dismantled piece by piece. Sustainable
practices seek to eliminate waste and reduce demand for new materials, largely by
turning linear processes (such as the standard life cycle of a building, from
construction to useful life to demolition) into cyclical processes that maximize reuse
and minimize waste of resources, as shown in Figure 1.
The objective of this paper is to develop an IFC-based framework to optimize
disassembly sequences of an existing building. The framework integrates
architectural design and assembly planning of the newly designed building with the
disassembly of an existing building in four main modules. The first module
extracts building components' geometrical and topological information, as well as
semantic information, and generates an internal data structure. The second module
FRAMEWORK OVERVIEW
The overall design for disassembly (DfD) framework, shown in Figure 2,
comprises four key modules:
IFC Browser and Database Generator. The first module develops an internal data
structure from physical and spatial properties, such as dimensions, materials,
and topological relations, extracted from CAD drawings in the IFC model. This
module uses a CAD tool that can export 3D CAD drawings to IFC files and can also
read IFC files and convert them back into 3D CAD drawings. Autodesk Revit with
its IFC2x utility and ArchiCAD (Graphisoft, 2005) with its add-on interface are
two typical IFC-compatible CAD applications available on the market.
Graph Model Generator. The second module maps geometrical and semantic
information into a graph data representation model. This module has a parser that
transposes the geometrical and topological relationships of structural elements
from IFC files to a graph model (GM). The parser includes a user interface that
helps the user match up topological relationships when necessary. It is convenient
to transpose the assembly's drawing into a set of logical expressions and graphical
representations.
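A minimal sketch of such a graph model, with invented component names, holds the nodes (structural components) and edges (physical connections) as an adjacency list:

```ruby
# Adjacency-list graph model: each key is a structural component (node),
# each value is the list of components it is physically connected to (edges).
graph = Hash.new { |h, k| h[k] = [] }

# Connections are undirected, so record each edge in both directions.
def connect(graph, a, b)
  graph[a] << b
  graph[b] << a
end

connect(graph, 'footing-1', 'column-1')
connect(graph, 'column-1', 'beam-1')
connect(graph, 'beam-1', 'slab-1')

graph['column-1']  # neighbors of the column: the footing below, the beam above
```

This plain structure is enough to support the traversal and matching steps described later; a full implementation would populate it from the IFC parser rather than by hand.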
One of the best-known diagrams in DfD is the "nodes and edges" or
"Graph Model" diagram, where nodes represent the physical parts (structural
components) and edges represent the existing connections (topological
relationships) among parts. Figure 3-a shows a sample concrete frame, which is
represented by the graph in Figure 3-b, the simplest disassembly graph: it is
Figure 3. (a) Sample concrete structure, (b) graph model of sample structure.
As a logical data model, Graph Model (GM) is a pure graph representing the
adjacency and connectivity relationships among the internal elements of a building. In
order to implement network-based analysis such as graph traversal algorithms,
checking for feasibility and creating transition matrix in the GM, the logical network
model needs to be complemented by a 3D geometric network model that accurately
represents these geometric properties, called the Geometric Network Model (GNM) (Lee
and Zlatanova 2008). In order to interpret geometrical and topological data of a
column is the intersection of the removal spaces of both the wall and the slab. If
the direction sets are empty, the component cannot be removed and the GS is null.
An advanced procedure for searching for interferences was introduced by
Romney et al. (1995); they translated into an algorithm the physical, intuitive
principle that a person can be sure of being able to move a body if he or she
completely sees the three-dimensional borders of the object. Relying on the
projections of the three-dimensional body's borders onto a plane orthogonal to the
chosen direction, it is possible to discern whether the movement will have a positive
result, thus disassembling the part from the rest.
2- Structural Supporting:
Removing a component from the assembly may cause instability of the
structure. Supporting relationships are defined by assigning two
attributes to each component: "Supporting" and "Supported By".
The "Supporting" attribute of component "A" lists the
components that rely on A; removing A before those components
causes regional or global instability in the structure. The "Supported By"
attribute of "A" lists the component(s) that carry "A" (Figure 6).
These attributes are extracted from the "IFCSupportingAttribute" entity in IFC.
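The support constraint described by these attributes can be sketched in a few lines of Ruby. Component names are invented, and the "Supporting" attribute is held as a plain hash rather than extracted from IFC:

```ruby
# A component may be removed only after everything it supports has already
# been removed; otherwise removal would cause regional or global instability.
# supporting maps each component to the components resting on it.
def removable?(component, supporting, removed)
  (supporting.fetch(component, []) - removed).empty?
end

supporting = { 'column-A' => ['beam-B'], 'beam-B' => ['slab-C'], 'slab-C' => [] }

removable?('slab-C', supporting, [])            # true: nothing rests on the slab
removable?('column-A', supporting, [])          # false: beam-B still relies on it
removable?('column-A', supporting, ['beam-B'])  # true once the beam is gone
```

Applied at every step of a candidate sequence, this check prunes structurally infeasible disassembly orders before any geometric analysis is needed.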
extracted from IFC using a procedure similar to that explained earlier. The
VFLib dynamic-link library (DLL) developed by Cordella (2004) is used
to implement the graph matching part of this research. The VF algorithm performs
sub-graph isomorphism, finding a sub-graph of the first graph that is
isomorphic to the second graph. It also allows the mapping to be retrieved after a
match has been made, and supports context checking, in which nodes can carry
attributes that are tested against the corresponding attributes of
the isomorphic node, so a match can be rejected based on the outcome of such
tests.
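The matching step can be sketched as follows. This is a brute-force stand-in for the VF algorithm, written to show the idea only; it is not the VFLib API, and the graphs and attributes are invented:

```ruby
# Attribute-checked sub-graph isomorphism by exhaustive search: find a
# mapping of the small graph's nodes into the big graph that preserves
# both edges and node attributes. Graphs are adjacency-list hashes.
def subgraph_match(small, big, attrs_small, attrs_big)
  nodes = small.keys
  big.keys.permutation(nodes.size) do |candidate|
    map = nodes.zip(candidate).to_h
    attrs_ok = nodes.all? { |u| attrs_small[u] == attrs_big[map[u]] }
    edges_ok = nodes.all? do |u|
      small[u].all? { |v| big[map[u]].include?(map[v]) }
    end
    return map if attrs_ok && edges_ok
  end
  nil
end

existing = { 'a' => ['b'], 'b' => [] }                # components to reuse
new_bldg = { 'x' => ['y', 'z'], 'y' => [], 'z' => [] } # new design
subgraph_match(existing, new_bldg,
               { 'a' => :beam, 'b' => :column },
               { 'x' => :beam, 'y' => :column, 'z' => :slab })
# => { 'a' => 'x', 'b' => 'y' }
```

The exhaustive search is exponential, which is exactly why a pruning algorithm like VF is used in practice; the sketch only illustrates what a valid match is.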
Finally, for each operation and component it is possible to associate a
disassembly index (D) computed from parameters such as the
matching index (m), cost (c), time (t), and necessary movement (n). For each
solution, D = f(m, c, t, n), and the solution(s) with the maximum or
minimum value of D is selected for disassembly; the recovered components are used
in the assembly of the newly designed building.
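Since the paper leaves f unspecified, one simple instantiation is a weighted sum; the weights and solution data below are invented for illustration:

```ruby
# Hypothetical scoring: reward a high matching index, penalize cost, time,
# and movement. The weight values are assumptions, not from the paper.
WEIGHTS = { m: 1.0, c: -0.4, t: -0.3, n: -0.2 }

def disassembly_index(sol)
  WEIGHTS.sum { |key, w| w * sol[key] }
end

solutions = [
  { id: 'seq-1', m: 0.9, c: 4.0, t: 2.0, n: 3.0 },
  { id: 'seq-2', m: 0.7, c: 1.0, t: 1.0, n: 1.0 }
]
best = solutions.max_by { |s| disassembly_index(s) }
best[:id]  # the sequence with the highest D is kept for reuse
```

Any monotone f over (m, c, t, n) fits the same selection loop; only the weighting scheme would change.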
CONCLUSION
In this paper, a new framework is presented that automatically produces all the
possible disassembly sequences of a building structure using IFC. A new method,
the Graph Model, is used to represent the topological relations of components in a
building. This method drastically reduces the set of disassembly operations,
which grows exponentially for complex systems, without losing generality, making
the exact calculation of all disassembly sequences possible. Applying structural
and operational constraints reduces the total number of sequences to the set of
feasible sequences.
Graph search and graph matching algorithms are used to identify structural components
of the existing building and their equivalents in the newly designed building. Then, by
associating an index with the different sequences, it is possible to find the optimal
disassembly sequence. Future work will cover the software implementation and a
disassembly example of a complex case study.
REFERENCES
Cordella, L., P. Foggia, et al. (2004). "A (sub) graph isomorphism algorithm for
matching large graphs." IEEE Transactions on Pattern Analysis and Machine
Intelligence: 1367-1372.
Lee, J. and S. Zlatanova (2008). "A 3D data model and topological analyses for
emergency response in urban areas." Geospatial Information Technology for
Emergency Response.
Liebich, T. (2004). "IFC 2x Edition 2 model implementation guide." International
Alliance for Interoperability.
Romney, B., C. Godard, et al. (1995). "An efficient system for geometric assembly
sequence generation and evaluation." COMPUTERS IN ENGINEERING: 699-
712.
Zhang, H. and T. Kuo (2002). A graph-based approach to disassembly model for end-
of-life product recycling, IEEE.
Temporary Facility Planning of a Construction Project Using BIM
(Building Information Modeling)
Hyunjoo Kim1 and Hongseob Ahn2
ABSTRACT
The key role of safety management is to identify any possible hazard before it
occurs by identifying the risk factors that are critical to risk assessment.
This planning/assessment process is considered tedious and requires a lot of
attention for the following reasons: first, falsework (temporary structures) in
construction projects is fundamentally important, yet the installation and
dismantling of these facilities are among the highest-risk activities on job sites.
Second, temporary facilities are generally not clearly delineated on the building
drawings. It is our strong belief that safety tools have to be simple and convenient
enough for jobsite personnel to manage easily, and flexible enough for the varied
situations that may occur. In order to develop the safety assessment system,
this research utilizes BIM technology, collecting important information by
importing data from BIM models and using it in the planning stage.
INTRODUCTION
In spite of the various efforts of safety professionals and strong governmental
enforcement, the high frequency and severity of injuries and illnesses in the
construction industry have not decreased sufficiently. The recent development of BIM
(Building Information Modeling) technology encourages us to utilize it in the field of
accident prevention as well as in design and project management.
The key role of safety management is to identify any possible hazard before it
occurs by developing prevention measures that are critical to risk assessment. This
planning/assessment process is considered tedious and requires a lot of attention
for the following reasons: first, falsework (temporary structures) in construction
projects is fundamentally important, yet the installation and dismantling of
these facilities are among the highest-risk activities on job sites. Second, temporary
facilities are generally not clearly delineated on the building drawings. It is our
strong belief that safety tools have to be simple and convenient enough for jobsite
personnel to manage easily, and flexible enough for the varied situations that may
occur.
Current CAD systems are mostly used for physical models of buildings,
representing the static results of design and construction. While these models provide
a topological description of buildings in the way different objects (or entities) are
connected together and store specific architectural features and attributes, the authors
recognize that CAD programs typically represent buildings mostly as geometric
models. Information flow from design to construction is critical and, when efficiently
controlled, it allows for design-build and other integrated project delivery methods to
be favored. The impact of BIM processes has been more evident in cutting-edge
buildings and innovative processes. Utilizing the BIM technology, this paper
developed a new methodology for modeling the installation and dismantling of
falsework in construction projects. Using the methodology developed in this
research, it is expected that the construction manager can create falsework models
(temporary structures) embedded with guidelines and regulations containing
the safety-related requirements of a building in the planning/assessment process. This
research established a Building Information Modeling (BIM)
procedure for assessing possible hazards by visually representing falsework objects
and their locations in a building. The prototype described in the paper is mainly for
designing the scaffold layout of a building, but could be developed further for
planning various temporary facilities.
A number of case studies have illustrated how designers have implemented
collaborative work via 3D modeling with contractors to enhance constructability. In
that sense, we believe that BIM is one of the most recent technologies that has gained
acceptance in the AEC industry. This study intends to develop a safety assessment
system based on the BIM technology which will enable the jobsite people to plan
ahead on safety management and eventually achieve more productivity. One of the
possible benefits from the BIM based safety assessment system can be efficient
hazard identification at the planning process which will focus on the movements of
workers incorporated with other resources such as different kinds of equipment,
materials and tools.
PREVIOUS RESEARCH
Year after year, construction remains one of the most dangerous industries, with
approximately 1,050 construction workers dying on the job each year. Although
construction employment equals just over 5% of the workforce, construction injuries
account for in excess of 17% of all occupational deaths. One out of every seven
construction workers is injured each year, and one out of every fourteen will suffer a
disabling injury.
Jaselskis et al. (1996) developed a strategy for improving construction safety
performance and Hinze et al. (1995) measured the number of safety violations and
fatalities, which revealed interesting trends. Carter et al. (2006) focused
on safety hazard identification on construction projects and described that
unidentified hazards represent the most unmanageable risks. The research utilized an IT
(Information Technology) tool in construction project safety management with a
computerized module. De la Garza et al. (1998) analyzed safety indicators in
construction projects, and Hinze and Gambatese (2003) identified factors that
significantly influence the safety performance of specialty contractors. Jannadi and
Almishari (2003) worked on the assessment of risk for major construction activities.
Kartam (1997) emphasized the importance of effective planning and control
techniques to prevent construction accidents.
RESEARCH METHODOLOGY
Figure 1 shows the correspondence between BIM data and the 3D simulation
model. The major BIM design software (ArchiCAD) is able to export data to
ifcXML, a non-proprietary, open standard that receives a lot of support
from government and the AEC industry. In the proposed approach, the
requirements of temporary facilities are extracted into the safety management system
for the installation and dismantling of the falsework. In this experiment, the scaffolds
were built to demonstrate the hazard identification process, but a future paper is under
preparation that will show automatic extraction of the requirements and identification
of all the temporary facilities from BIM data by saving the BIM model in ifcXML.
In Figure 2, an example of a 5-story office building is shown in CAD representation
and stored in an IFC file (shown in the figure background). Next, the types of scaffolds
and their locations are taken from the IFC file once it is saved in ifcXML. Finally, the
safety management system developed in this research identifies temporary
facilities and their locations. Details of the modeling process are described in the next
section. These steps are explained in further detail in the case study section.
be completed in two weeks, totaling 10 weeks to complete the five-story office
building. Figure 4 shows that the entire construction schedule can be
segmented into ten weeks based on the predicted quantities of each
component in the office building. Figure 4 also shows a 3D model in which the
second floor of the building has been completed, along with scaffolds built around the
perimeter of the building.
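The weekly segmentation described above can be sketched as follows; the two-week floor cycle comes from the case study, while the data layout is invented:

```ruby
# Segment a five-story schedule into two-week floor cycles, with the
# scaffold level tracking the floor currently under construction.
floors, weeks_per_floor = 5, 2

schedule = (1..floors * weeks_per_floor).map do |week|
  floor = (week - 1) / weeks_per_floor + 1
  { week: week, floor: floor, scaffold_level: floor }
end

schedule.size         # 10 weeks in total
schedule.last[:floor] # the fifth floor finishes in week 10
```

In a full implementation, the component quantities extracted from the BIM model would drive the per-floor durations instead of a fixed two-week constant.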
Step 4: Identify the location of the scaffolds necessary to construct the building
The use of scaffolds as tools for working at varied levels on construction sites
is a fixture of the construction industry. Unfortunately, there have been many
accidents involving scaffolding, for many different reasons. One of the
important reasons is that many scaffolds are improperly
installed because of a lack of knowledge or a misunderstanding of the
exact locations of the scaffolds. Scaffolds must be designed, installed, loaded, and
dismantled in full accordance with OSHA regulations.
Besides the scaffolds, each worker is to be provided with additional
protection from falling hand tools, debris, and other small objects through the
installation of toe boards, screens, or guardrail systems. In this research, one type
of scaffold, the carpenters' bracket scaffold, was applied; an example of the scaffold
and its guardrails is shown in Figure 5.
CONCLUSIONS
BIM technology enables safety-oriented construction process planning.
The case study showed that the safety management system proposed in this paper
could use BIM technology to optimize the design process and create safer
construction environments.
Utilizing BIM technology, this paper proposed a new methodology by
which a construction manager can create falsework models embedded with safety
regulations and guidelines containing the safety-related requirements of a building in
the planning/assessment process. This research demonstrated a Building
Information Modeling (BIM) procedure for assessing possible hazards by
visually representing falsework objects and their locations in a building.
The prototype described in the paper is mainly for designing a scaffolds
layout of a building, but could be developed further in planning on various temporary
facilities.
REFERENCES
Jaselskis, E., Anderson, S., and Russel, J., “Strategies for Achieving Excellence in
Construction Safety Performance”, Journal of Construction Engineering and
Management, Vol.122, pp.61-70, 1996.
Hinze, J. and Russel, D., “Analysis of Fatalities Recorded by OSHA”, Journal of
Construction Engineering and Management, Vol. 121, pp. 209-214, 1995
Carter, G., and Smith, S., “Safety Hazard Identification on Construction Projects”,
Journal of Construction Engineering and Management, Vol. 132, pp. 197-205,
2006.
De la Garza, J., Hancher, D., and Decker, L., "Analysis of Safety Indicators in
Construction", Journal of Construction Engineering and Management, Vol. 124,
pp. 312-314, 1998.
Hinze, J., and Gambatese, J., "Factors That Influence Safety Performance of
Specialty Contractors", Journal of Construction Engineering and
Management, Vol. 129, pp. 159-164, 2003.
Jannadi, O. and Almishari, S., "Risk Assessment in Construction", Journal of
Construction Engineering and Management, Vol. 129, pp. 492-500, 2003.
Kartam, N., "Integrating Safety and Health Performance into Construction CPM",
Journal of Construction Engineering and Management, Vol. 123, pp. 121-126,
1997.
Energy Simulation System Using BIM (Building Information Modeling)
ABSTRACT
It is recognized that there is a need in the architecture, engineering, and
construction industry for new programs and methods of producing reliable energy
simulations using BIM (Building Information Modeling) technology. Current
methods and programs for running energy simulations are time-consuming, difficult
to understand, and lack interoperability between BIM software and energy
simulation software. The goal of this research project is to develop a new
methodology to produce energy estimates from a BIM model in a more timely
fashion and to improve interoperability between the simulation engine and BIM
software. In the proposed methodology, the information extracted from a BIM model
is compiled into an INP file and run in a popular energy simulation program, DOE-2,
on an hourly basis for a desired time period. A case study showed that the application
of this methodology can expediently provide energy simulations while at
the same time reproducing the BIM in a more readable three-dimensional modeling
program.
INTRODUCTION
While BIM technology allows designers to run energy simulations (Kim et al.,
2009), there are limits on its usefulness in the current state of energy simulation
programs. Academics have pointed out that there is a need for improved
interoperability between energy simulation and building information modeling
programs (Messner et al., 2006).
The goal of this research project is to develop a new methodology to produce
energy estimates from a BIM in a more timely fashion and to improve interoperability
between the simulation engine and BIM software. In the case study applied in this
paper, a BIM is created using modern commercial building design software. Next,
the ifcXML file is read in a more commonly used modeling program that allows us to
use the Ruby programming language to extract the relevant information from the ifcXML.
The geometric information regarding the building envelope is gathered and then used
to recreate the BIM in this new interface. From there the extracted information in
conjunction with user entered data is compiled into an INP file and run in a popular
energy simulation program, DOE-2, on an hourly basis for a desired time period.
This then produces estimated energy requirement reports for the proposed structure
based on the inputted conditions, duration, and location. The simulation results are
then compared over various locations to the results from commercial energy
simulation programs given the same conditions.
LITERATURE REVIEW
The Architecture/Engineering/Construction (AEC) industry is experiencing
great change because of BIM and its increasing popularity (Eastman et al. 2008;
Sacks et al. 2004). Many different energy modeling techniques have been applied to
numerous studies in attempts to predict future energy usage over the years including:
artificial neural networks, statistical analysis of building consumption data, decision
trees, and computer simulation programs (DOE-2, eQUEST, and EnergyPlus)
(Catalina et al., 2008; Ekici et al., 2009; Olofsson et al., 2009). Previous work has
shown that using computer simulations takes a considerable amount of time to
input data correctly, even for qualified practitioners (Zhu et al., 2006;
Catalina et al., 2008). DOE-2, specifically, was applied in a
study predicting energy consumption in the building sectors of major U.S. cities to
determine energy consumption profiles, but is very time-consuming due to intensive
labor requirements. In hopes of alleviating some of the time requirements in this
process, several groups have begun creating new methodologies for energy modeling
using EnergyPlus.
RESEARCH METHODOLOGY
In process 1.1 (Figure 1) a model is first created using the Graphisoft three-
dimensional modeling program ArchiCAD 14. The model is created with only the
most basic features: foundation, walls, windows, door(s), and a roof. While this
might not seem like very much information, it is all that is required for running
energy simulations. Interior walls and cosmetic design are not required unless it is
desired to have multiple heating, ventilation, and air conditioning (HVAC) zones for
separate parts of the structure. This is because heat loss and gain through interior
walls are zero-sum when considered within one zone. Once these basic parameters
have been set and the geometry of the structure has been finalized, the model is
exported as an ifcXML file, process 2.1 (Figure 1). Process 1.2 (Figure 1) will be
discussed later in the section on writing the INP file.
Before moving on to process 2.2 (Figure 1), it is necessary to better
understand IFCXML files. The IFCXML file type was created by buildingSMART
(formerly the International Alliance for Interoperability), whose goal is to
"promote open and interoperable IT standards to support the process change
within the construction and facility management industries" (buildingSMART 2008).
COMPUTING IN CIVIL ENGINEERING 637
in the file. To extract the information regarding the length of the wall we have to
write the following ruby code:
target['IfcShapeRepresentation']['i1803']['Items']['i1808']['IfcPolyline']['i1799']
['Points']['i1802']['IfcCartesianPoint'][1]['Coordinates']['i1798']['IfcLengthMeasure'][0]
The information is essentially referenced by indexes. The first step is to
find the first level of the index, IfcShapeRepresentation.
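For illustration, the index-based lookup can be mimicked with a toy nested hash whose shape mirrors the parsed IFCXML; the 'i…' keys stand in for the file's internal ids, and the structure and values below are invented rather than taken from a real file.

```ruby
# Toy nested hash mirroring the shape of a parsed IFCXML file; the 'i…'
# keys stand in for the file's internal element ids (values are invented).
target = {
  'IfcShapeRepresentation' => { 'i1803' => { 'Items' => { 'i1808' => {
    'IfcPolyline' => { 'i1799' => { 'Points' => { 'i1802' => {
      'IfcCartesianPoint' => [
        { 'Coordinates' => { 'i1797' => { 'IfcLengthMeasure' => [0.0] } } },
        { 'Coordinates' => { 'i1798' => { 'IfcLengthMeasure' => [6.1] } } }
      ]
    } } } }
  } } } }
}

# Hash#dig is equivalent to the chained [] lookup shown in the text.
length = target.dig('IfcShapeRepresentation', 'i1803', 'Items', 'i1808',
                    'IfcPolyline', 'i1799', 'Points', 'i1802',
                    'IfcCartesianPoint', 1, 'Coordinates', 'i1798',
                    'IfcLengthMeasure', 0)
puts length   # 6.1
```

In the actual workflow these hashes would come from parsing the exported IFCXML file rather than being written by hand.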
A default system type is selected (PSZ system), supply temperature levels are set,
and the occupancy type and heat sources are defined. The last step regarding the
HVAC systems is defining the zone. The zone is where the temperature setpoints that
trigger the heating and air conditioning to turn on and off are entered. Once entered,
the majority of the INP file has been written and all that remains is completing the
economics and reports sections. The only data needed for the economics section are
the current utility rates for gas and electricity, which the user entered earlier. Lastly,
the reports section is generally standard and concludes the file with default methods
of reporting the computed information (Hirsch 2004). The INP file is now ready to
be run in the DOE-2 simulator to estimate the proposed structure's utility bill.
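The INP file itself is plain BDL-style text organized into keyword blocks. A schematic emitter for a zone block is sketched below; the keyword names (DESIGN-HEAT-T, DESIGN-COOL-T) are illustrative assumptions, not a complete or validated DOE-2 input, so the DOE-2 language reference should be consulted for the exact required keywords.

```ruby
# Schematic sketch of emitting one INP/BDL-style keyword block for a zone.
# Keyword names are illustrative assumptions; a real DOE-2 input requires
# many more sections (loads, systems, economics, reports).
def zone_block(name, heat_setpoint, cool_setpoint)
  lines = ["\"#{name}\" = ZONE"]
  lines << "   TYPE          = CONDITIONED"
  lines << "   DESIGN-HEAT-T = #{heat_setpoint}"   # heating trigger temperature
  lines << "   DESIGN-COOL-T = #{cool_setpoint}"   # cooling trigger temperature
  lines << "   .."                                 # BDL block terminator
  lines.join("\n")
end

puts zone_block('Zone-1', 70, 76)
```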
Now it is finally possible to calculate the structure's fuel and electrical demands. The
economic analysis subprogram can then compute the expected energy costs by applying
the user-entered utility rates. The loads, HVAC, and economics subprograms are run
on an hourly basis for the defined duration (Figure 4) and produce the estimated
utility bill of the proposed structure for the given time period, with a detailed analysis
of the estimated energy consumption (Hirsch 2004).
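The hourly structure of the simulation can be sketched as follows; the load profile and utility rates are invented placeholders, not DOE-2's actual loads or systems algorithms.

```ruby
# Schematic sketch of the hourly simulation loop (NOT DOE-2's algorithms):
# each hour, a loads step estimates demand, the tallies play the role of the
# HVAC/fuel subprograms, and an economics step applies user-entered rates.
ELEC_RATE = 0.12   # $/kWh, user-entered placeholder
GAS_RATE  = 1.05   # $/therm, user-entered placeholder

def hourly_loads(hour_of_day)
  elec = 1.5 + 0.5 * Math.sin(hour_of_day / 24.0 * Math::PI)  # kWh, toy profile
  gas  = (hour_of_day < 6 || hour_of_day > 20) ? 0.04 : 0.01  # therms, toy heating
  [elec, gas]
end

def simulate(hours)
  elec = gas = 0.0
  hours.times do |h|
    e, g = hourly_loads(h % 24)   # loads subprogram
    elec += e                     # electricity tally
    gas  += g                     # fuel tally
  end
  cost = elec * ELEC_RATE + gas * GAS_RATE  # economics subprogram
  { kwh: elec, therms: gas, cost: cost }
end

bill = simulate(24 * 365)   # one simulated year, hour by hour
```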
CASE STUDY
By inputting the INP file into the DOE-2.2 energy simulation program, we were
able to produce reasonable simulations. In Los Angeles, the estimated electricity
consumption for the electrical components of the BIM was 18,025 kilowatt-hours and
the estimated natural gas consumption amounted to 377 therms (Figure 5). On a
per-square-foot basis, this amounted to an estimated 67.9 kBTU per square foot per year.
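The per-square-foot figure follows from standard unit conversions (3.412 kBTU per kWh, 100 kBTU per therm); the floor area is not stated in the paper and is back-calculated below from the reported numbers.

```ruby
# Checking the case-study arithmetic with standard conversion factors:
# 1 kWh = 3.412 kBTU and 1 therm = 100 kBTU. The floor area is not given
# in the text; it is back-calculated here from the reported intensity.
kwh    = 18_025
therms = 377

total_kbtu   = kwh * 3.412 + therms * 100.0  # total annual site energy
implied_area = total_kbtu / 67.9             # ft², back-calculated

puts total_kbtu.round    # 99201 kBTU/year
puts implied_area.round  # 1461 ft² (implied, not stated in the paper)
```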
CONCLUSION
REFERENCES
Kim, H., and Stumpf, A. (2009). "Framework of early design energy analysis using
BIMs (Building Information Models)." ASCE Construction Research
Congress, Seattle, WA.
Catalina, T., Virgone, J., and Blanco, E. (2008). "Development and validation of
regression models to predict monthly heating demand for residential
buildings." Energy and Buildings, 40, 1825-1832.
Ekici, B., and Aksoy, U. (2009). "Prediction of building energy consumption by using
artificial neural network." Advances in Engineering Software, 40, 356-362.
Olofsson, T., Andersson, S., and Sjögren, J. (2009). "Building energy parameter
investigations based on multivariate analysis." Energy and Buildings, 41, 71-80.
Zhu, Y. (2006). "Applying computer-based simulations to energy auditing: a case
study." Energy and Buildings, 38, 421-428.
Semantic Modeling for Automated Compliance Checking
D. M. Salama1 and N. M. El-Gohary2
1
Graduate Student, Department of Civil and Environmental Engineering, University
of Illinois at Urbana-Champaign, 205 North Mathews Ave., Urbana, IL 61801; FAX
(217) 265-8039; email: abdelmo2@illinois.edu
2
Assistant Professor, Department of Civil and Environmental Engineering, University
of Illinois at Urbana-Champaign, 205 North Mathews Ave., Urbana, IL 61801; PH
(217) 333-6620; FAX (217) 265-8039; email: gohary@illinois.edu
ABSTRACT
Automated compliance checking of construction projects remains a
challenge. Existing computer-supported compliance checking methods are mainly
rule-checking systems (utilizing if-then-else logic statements) that assess building
designs based on a set of well-defined criteria. However, laws and regulations are
normally complex to interpret and implement; and thus if-then-else rule-checking
does not provide the level of knowledge representation and reasoning that is needed
to efficiently interpret applicable laws and regulations and check conformance of
designs and operations to those interpretations. In this paper, we explore a new
approach to automated regulatory compliance checking – we propose to apply
theoretical and computational developments in the fields of deontology, deontic logic,
and Natural Language Processing (NLP) to the problem of regulatory compliance
checking in construction. Deontology is a theory of rights and obligations; and
deontic logic is a branch of modal logic that deals with obligations, permissions, etc.
The paper starts by discussing the need for automated compliance checking of
construction operations and analyzing the limitations of existing compliance checking
efforts in this regard. The paper then provides an overview of the proposed approach
for automated compliance checking, followed by an introduction of deontology
and deontic logic and their applications in other domains (e.g., computational law).
Finally, the paper presents the initial deontic modeling efforts towards automated
compliance checking.
INTRODUCTION
The Architecture, Engineering and Construction (AEC) industry is facing a
technological revolution with the introduction of Building Information Modeling
(BIM). Researchers, software developers, and industry professionals are pursuing
automation in diversified areas of the AEC industry. One area of automation in the
AEC industry is compliance checking. Compliance checking is the process of
assessing the compliance of a design, process, action, plan, or document to applicable
laws and regulations. Laws and regulations address architectural and structural design
requirements such as fire safety, accessibility, building envelope performance,
structural performance, etc. Laws and regulations also govern construction operations
to ensure construction safety, environmental protection, quality assurance, and
contractual compliance. Ongoing research efforts have been undertaken to automate
the compliance checking of architectural and structural designs to applicable laws and
regulations (International Building Code (IBC), Americans with Disabilities Act
(ADA) standards, etc.). With the evolution of BIM-based tools, design data needed
for checking compliance is represented in a BIM model. This facilitates, to an extent,
the process of developing a tool for automated compliance checking of architectural
and structural designs. Automating the compliance checking of construction
operations, on the other hand, is far more challenging for three main reasons: 1)
data/information about construction operations (e.g. construction methods, temporary
facilities, construction safety procedure, quality control procedure, etc.) are not
semantically represented in a BIM model; 2) data/information about construction
operations are distributed across several documents (construction operations plans,
site layout, construction safety plan, quality control plan, etc.); and 3) construction
operations are highly dynamic and related documents undergo frequent changes and
updates (e.g. construction schedules). Due to the aforementioned challenges, more
attention has been given to the automation of compliance checking of architectural
and structural designs in comparison to construction operations. In the following
sections, this paper discusses the need for and challenges of automated compliance
checking of construction operations and proposes a new approach to automated
compliance checking.
THE NEED FOR AUTOMATED COMPLIANCE CHECKING OF
CONSTRUCTION OPERATIONS
Compliance checking of construction operations is a complex process. The
construction phase of a project is governed by a number of laws and regulations that
are issued by various authorities, originate from different sources, vary from one
location to another, change dynamically with time, and govern different construction
operations and activities: 1) Laws and regulations are issued by various authorities
such as the Occupational Safety and Health Administration (OSHA) safety
regulations, Environmental Protection Agency (EPA) laws and regulations,
American Society for Testing and Materials (ASTM) standards, etc.; 2) Project
contracts are also a major source of law - the source of private law; a contract
represents a binding agreement imposing rules and regulations on construction
operations; 3) Laws and regulations vary by project location; some laws are imposed
on a federal level, while other laws are imposed on a state level or a local level; 4)
Environmental laws and regulations are expected to change frequently in response to
the increasing awareness of sustainability and green construction; and 5) One piece of
regulation may apply to many construction operations. Given such complexities,
manual compliance checking of construction operations has been a time- and
resource-consuming task. It has also been error-prone, causing construction
projects to violate the law and, as such, suffer monetary and/or non-monetary
consequences. For example, recent violations of environmental regulations by
construction contractors include Wal-Mart Stores Inc., which was fined $1 million
and committed to an environmental management plan valued at $4.5 million to
increase compliance with storm-water regulations at its construction sites, through
additional inspections, training, and recordkeeping (US EPA 2010). Similarly, Beazer
Homes USA Inc., a national homebuilder, paid a $925,000 fine due to its violations
of the Clean Water Act (Helderman, Washington Post 2010a); Bechtel National paid
$170,000 fines due to quality violations in the construction of a vitrification plant
(Cary, Hanford News 2010); and Hovnanian Enterprises, another homebuilder, paid
$1 million fines due to storm-water run-off violations (Helderman, Washington Post
2010b). Automated compliance checking would reduce the probability of making
compliance assessment errors and, consequently, improve compliance, thereby
reducing violations of the laws and regulations that govern the construction process.
CURRENT EFFORTS TOWARDS THE AUTOMATION OF COMPLIANCE
CHECKING
Research on automated rule checking has been ongoing for over a decade
(Tan et al. 2010, Eastman et al. 2009). Researchers and software vendors developed
different compliance checking software focusing on the architectural and structural
design phases of a construction project. Most developers utilize Industry Foundation
Classes (IFC) to facilitate data exchange. IFC is a data format developed by the
buildingSMART initiative, which aims to facilitate data sharing between project
members and software applications (Building Smart Alliance 2008). IFC provides a
medium for data interoperability. It is registered by ISO (International Organization
for Standardization) and is currently in the process of becoming an official
international standard (Building Smart Alliance 2008). Efforts to automate the
compliance checking process include Solibri Model Checker, several projects led by
FIATECH, CORENET led by the Singapore Ministry of National Development, and
HITOS, a Norwegian BIM-based project. As an example, Solibri Model Checker
(SMC) is IFC-compliant rule-checking software. IFC models are created in
applications such as Autodesk Revit Architecture, ArchiCAD, etc. (Khemlani 2002).
SMC is a Java-based desktop application. It reads an IFC model and maps it
to an internal structure facilitating access and processing (Eastman et al. 2009).
Solibri performs several checks; it includes a built-in pre-checking function that tests
the model for overlaps, object existence, and naming and attribute conventions. It
performs a set of design checks, such as accessibility checks according to the ISO
accessibility building code, fire safety checks according to the fire code exit path
distance, etc. The software reports the checking results in a visual manner, in the
form of PDF, XLS, or XML files. The checking is carried out using parametric constraints;
thus, the user can change the parameters of certain constraints according to the
desired standard (Khemlani 2002). A limitation of SMC, however, is that the addition
of new rules or modification of existing rules has to be done through the
addition/modification of Java programming code. This means that the user is not
capable of removing, adding, or updating the built-in rules.
Previous research and software development efforts have undoubtedly paved
the way for automated compliance checking in the AEC industry. However, one
limitation of these efforts is that most of them focus on the architectural and structural
design domains. Automated compliance checking of construction operations has
received little, if any, attention due to its relative complexity. Another limitation of
existing automated compliance checking tools is that they all focus on relatively
simple forms of rules, for example rules dealing with geometrical and spatial
attributes of buildings, such as rules for checking proper representation of objects,
overlaps and intersections of objects, wall thicknesses, door sizes, etc. Existing tools
lack the capability of performing more complex levels of compliance reasoning and
checking, such as checking compliance with contractual requirements. A third
limitation of existing tools is that they do not provide the level of flexibility that is
needed so that users can add or modify the set of governing rules and regulations (the
addition of rules is controlled by software vendors). Another fact which limits the
applicability of previous efforts to construction operation compliance checking
applications is that the compliance checking depends on data/information that is not
part of the BIM model. Data/information about construction operations (e.g.
construction methods, temporary facilities, construction safety procedure, quality
control procedure, etc.) are not semantically represented in a BIM model. A fifth
limitation of previous research efforts is that the rules and regulations are manually
extracted (from relevant textual documents describing laws and regulations) and
coded. Full (or at least higher) automation of compliance checking requires complex
processing of regulatory and contractual documents to automatically (or semi-
automatically) extract applicable rules.
PROPOSED NEW APPROACH TO AUTOMATED COMPLIANCE
CHECKING – A DEONTIC-BASED APPROACH
Automated compliance checking remains a challenge because of its complexity
and the highly elaborate reasoning it requires. Existing rule-checking engines do not
provide the level of knowledge representation and reasoning that is needed to process
applicable regulations and check conformance of designs and operations to those
regulations. As such, further research efforts towards semantic modeling of
construction-related laws and regulations and compliance reasoning must be
undertaken.
In this paper, we propose a new approach to automated regulatory compliance
checking; we propose to apply theoretical and computational developments in the
fields of deontology, deontic logic, and Natural Language Processing (NLP) to the
problem of compliance checking in construction. Deontology is a theory of rights and
obligations. Deontic logic is a branch of modal logic that deals with obligations,
permissions, etc. NLP is a theoretically-based
computerized approach to analyzing, representing, and manipulating natural language
text or speech for the purpose of achieving human-like language processing for a
range of tasks or applications (Chowdhury and Cronin 2002). The proposed approach
for automated compliance checking is outlined as a five-step process, as shown in
Figure 1: 1) Extracting and formalizing the rules: extracting the rules from textual
regulatory and contractual documents (natural language text presented in word
documents), and converting these rules into formal logic sentences; 2) Extending the
BIM model: adding missing data/ information/ knowledge needed to represent and
reason about construction operations and its compliance to laws and regulations; 3)
Extracting and formalizing project data: extracting relevant construction data from
textual project documents (e.g. safety plan, environmental plan, etc.), and converting
these data into a semantic format. Unlike step 1, this step involves the documents
being checked for compliance, rather than the documents describing the laws and
regulations; 4) Executing the code checking process: checking whether the
semantically-represented project data (extended BIM model data and extracted
textual data) comply with the formalized rules; and 5) Reporting the results: the
system reports missing data, violating objects and warnings; in some cases the result
is in the form of a pass or fail. All the above-mentioned processes (Step 1 through 5)
will be facilitated by the representation and reasoning capabilities of a deontic model
(presented and discussed in the following sections). Steps 1 and 3 will, additionally,
require the use of NLP techniques. The remainder of this paper focuses on the
application of deontology and deontic logic for semantic modeling of laws and
regulations and compliance reasoning. Presenting and discussing the authors’
research efforts in using NLP for automated compliance checking is beyond the scope
of this paper.
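As a minimal sketch (not the authors' formalism), deontic modalities can be represented as rules that mark an action as obligated or prohibited for an agent, with compliance checking testing recorded project facts against those rules; the rule contents below are invented examples.

```ruby
# Minimal illustration of deontic-style compliance checking (not the
# authors' formalism): each rule marks an action as :obligated or
# :prohibited for an agent; checking tests recorded facts against rules.
Rule = Struct.new(:modality, :agent, :action)

rules = [
  Rule.new(:obligated,  'contractor', 'install_silt_fence'),
  Rule.new(:prohibited, 'contractor', 'discharge_untreated_stormwater')
]

facts = ['install_silt_fence']   # actions the project documents record

def violations(rules, facts)
  rules.reject do |r|
    case r.modality
    when :obligated  then facts.include?(r.action)   # obligated: must be present
    when :prohibited then !facts.include?(r.action)  # prohibited: must be absent
    end
  end
end

violations(rules, facts).each { |r| puts "violation: #{r.modality} #{r.action}" }
```

With the facts above, both rules are satisfied and no violation is reported; dropping 'install_silt_fence' from the facts would flag the unmet obligation.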
Figure 1. The proposed approach for automated compliance checking: 1) Extracting
and Formalizing the Rules; 2) Extending the Building Information Model; 3)
Extracting and Formalizing Project Data; 4) Executing the Code Checking Process;
and 5) Reporting.
laws and regulations; and 3) Compliance Checking Axioms: assess the compliance of
the extended BIM model and the textual documents to the applicable rules.
[Figure: deontic model relationships. Rules originate from an Authority; the
Authority prescribes what obligates, permits, or prohibits an Agent; an Agent may
commit a Violation; a Violation may result in a Penalty, which the Agent may suffer.]
Building Smart Alliance. (2008). "Industry Foundation Classes (IFC)." Building
Smart, <http://www.buildingsmart.com/bim> (Dec. 30, 2010).
Cary, A. (2010). "Bechtel Settles 'Quality Issues'." Hanford News Online,
<http://www.hanfordnews.com/2010/09/25/15961/bechtel-settles-quality-issues.html>
(Sep. 25, 2010).
Chowdhury, G., and Cronin, B. (2002). "Natural language processing." Annual
Review of Information Science and Technology, 37, 51-89.
Cheng, J. (2008). "Deontic relevant logic as the logical basis for representing and
reasoning about legal knowledge in legal information systems." Knowledge-
Based Intelligent Information and Engineering Systems, Springer, 517-525.
Eastman, C., Lee, J., Jeong, Y., and Lee, J. (2009). "Automatic rule-based checking
of building designs." Automation in Construction, 18(8), 1011-1033.
Feltus, C., and Petit, M. (2009). "Building a responsibility model using modal logic -
towards accountability, capability and commitment concepts." Proc. Intl.
Conf. on Computer Systems and Applications, IEEE, 386-391.
Gazendam, H. W. M., and Liu, K. (2005). "The evolution of organisational semiotics:
A brief review of the contribution of Ronald Stamper." Studies in
Organisational Semiotics, Kluwer Academic Publishers, Dordrecht.
Helderman, R. S. (2010a). "Cuccinelli Praises EPA for Polluting Homebuilder
Settlement." The Washington Post (Dec. 10, 2010),
<http://voices.washingtonpost.com/virginiapolitics/2010/12/cuccinelli_praises_
epa_for_pol.html> (Dec. 03, 2010).
Helderman, R. S. (2010b). "Cuccinelli Issues Rare Praise for EPA in Stormwater
Case." The Washington Post (Apr. 21, 2010),
<http://voices.washingtonpost.com/virginiapolitics/2010/04/cuccinelli_issues_r
are_applaus.html> (Dec. 10, 2010).
Jureta, I., Siena, A., Mylopoulos, J., Perini, A., and Susi, A. (2010). "Theory of
regulatory compliance for requirements engineering." Computing Research
Repository (CoRR), Vol. abs/1002.3711.
Khemlani, L. (2002). "Solibri Model Checker." CadenceWeb,
<http://www.cadenceweb.com/2002/1202/pr1202_solibri.html> (Oct. 10, 2010).
McNamara, P. (2007). "Deontic Logic." Stanford Encyclopedia of Philosophy,
<http://plato.stanford.edu/entries/logic-deontic/> (Oct. 30, 2010).
Prisacariu, C., and Schneider, G. "A formal language for electronic contracts." Proc.
9th IFIP WG Intl. Conf. on Formal Methods for Open Object-Based
Distributed Systems, 174-189.
Tan, X., Hammad, A., and Fazio, P. (2010). "Automated code compliance checking
for building envelope design." J. of Comput. in Civil Engrg., 24(2), 203-211.
US EPA. (2008). US Environmental Protection Agency, <http://www.epa.gov/>
(Nov. 30, 2010).
Wieringa, R. J., and Meyer, J. (1993). "Applications of deontic logic in computer
science: a concise overview." Deontic Logic in Computer Science: Normative
System Specification, 17-40.
Ontology-based Standardized Web Services for Context Aware Building
Information Exchange and Updating
ABSTRACT
Standardized web services technology has been used in the construction
industry to support activities such as sensing data integration, supply chain
collaboration, and performance monitoring. Currently these web services are used for
exchanging messages with simple structure and small size. However, building
information models often contain rich information and are huge in size. Therefore,
retrieving and exchanging building information models using standardized web
services technology is challenging.
This paper discusses the usage of ontologies and context awareness in
standardized web services technology to facilitate efficient and lightweight
information retrieval and exchange. Ontologies for building information, such as
Industry Foundation Classes (IFC) and aecXML, have been developed for over a
decade and have become mature. These ontologies can be leveraged to structure the input
and output messages of web services. On the other hand, context awareness not only
provides data security according to the user's location, time, and profile, but also enables
retrieval and exchange of partial information models. This paper presents and
demonstrates an ontology-based context aware web service framework that is
designed for retrieval and updating of building information models.
INTRODUCTION
With the emergence of the Internet and the advancements in network
technologies, the web services technology has become a promising means to deliver
information and software functionalities in a flexible manner. A web service is a self-
contained, self-describing application unit that can be published, distributed, located,
and invoked over the Internet to provide information and services to users through
application-to-application interaction. Due to the reusability and plug-and-play
capability of web services, the web services technology has attracted increasing
attention for communication and system implementation. In the construction industry,
the web services technology has been leveraged for various applications, such as
supply chain collaboration (Cheng et al. 2010), sensing data transmission and
integration (Hsieh and Hung 2009), searching and browsing of construction products
catalogue (Kong et al. 2005), and performance monitoring (Cheng and Law 2010).
As the Internet becomes ubiquitous, the use of web services technology will keep
growing in the construction industry.
Web service standards such as SOAP (Simple Object Access Protocol) (World
Wide Web Consortium (W3C) 2003) and WSDL (Web Service Description Language)
(World Wide Web Consortium (W3C) 2007) have been developed to facilitate the
communication between web services and to enhance the interoperability of web
service units. These standards provide a generic data model and communication
mechanism for message exchange between web service units. Currently, the
standardized web services technology is used for exchanging messages with simple
structure and small size in the construction industry. However, building information
models often contain rich information and are huge in size. It is not efficient to
exchange the entire building information model using web services for data retrieval
and modification of models.
Therefore, we propose an ontology-based context aware web service
framework that is designed for exchanging data in a lightweight and customized
manner for manipulating building information models. The framework leverages
commonly used building information modeling (BIM) ontologies such as Industry
Foundation Classes (IFC) and CIMSteel (CIS/2) for defining the structure and
semantics of the data being exchanged. With the aid of these ontologies, users of the
framework do not need to exchange entire building information models through web
service units to make changes on them. Exchanging partial models or key values is
sufficient. As different software programs may interpret the same BIM ontology
differently, the ontology-based web service units in the framework are specific to the
software environment of the source and the target applications. The framework also
uses context information for providing customized operations and functionality. This
paper presents the framework and an illustrative example.
RESEARCH BACKGROUND
purpose of such context is to allow the web services to modify their output
according to the client’s device properties. It is necessary to share construction
data that can be understood and used by the clients. For example, if CIS/2 data
is to be received by a client who uses other ontologies, say IFC, appropriate
mapping is required.
Consumer: Information about the consumer, e.g. name and id, who invokes the
web service. Such information can be intelligently used by a web service in
different scenarios.
Connection preferences: Information about the properties of the connections to
the web services.
command input and the context information input, the web service unit then decides
the suitable web service units on the data mapping layer and the model editing layer.
[Figure: the web service framework. Inputs consist of structured data, a command
(e.g. Modify Wall, Add Column), and context information (e.g. user profile, software
environment). A service routing layer invokes ontology- and software-specific web
service units such as WS [IFC, Revit] {AddWall, ModifyWall, RemoveWall, etc.},
WS [gbXML, Revit] {MoveWall, ResizeWindow, etc.}, and WS [CIS/2, ETABS]
{ModifyColumn, etc.} on the model editing layer, which takes the original building
model and outputs new model files or an updated model.]
The service routing unit leverages the BPEL standard (Business Process Execution
Language) (Organization for the Advancement of Structured Information Standards
(OASIS) 2007) for service composition. BPEL is a service orchestration language
that is commonly supported by open source and commercial web service execution
engines.
Data Mapping Layer. This layer contains a set of ontology-based web service units,
which extract the structured data input and convert the data into parameter values in
the target ontology. The messages for communication between web services are in
XML (eXtensible Markup Language) format. The web service units can identify the
ontology used in the structured data input by parsing the XML tags from the data
input. For example, if the data input is structured using ifcXML, the XML tags will
contain ifcXML elements which start with the prefix ‘IFC’.
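A sketch of such prefix-based ontology detection is given below, under the simplifying assumption that element-name prefixes alone identify the schema; the CIS/2 test in particular is a placeholder heuristic.

```ruby
require 'rexml/document'

# Sketch of identifying the ontology of a structured data input by
# inspecting its XML element names, as described above. The prefix tests
# are simplifying assumptions, not a complete schema-detection method.
def detect_ontology(xml)
  doc = REXML::Document.new(xml)
  names = []
  doc.elements.each('//*') { |el| names << el.name }  # all elements, incl. root
  if names.any? { |n| n.downcase.start_with?('ifc') }
    :ifcxml
  elsif names.any? { |n| n.downcase.include?('cis') }  # placeholder heuristic
    :cis2
  else
    :unknown
  end
end

sample = '<IfcWallStandardCase><IfcLengthMeasure>6.1</IfcLengthMeasure>' \
         '</IfcWallStandardCase>'
puts detect_ontology(sample)   # ifcxml
```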
BIM standards such as IFC and CIS/2 may represent the same object using
different parameters. For instance, in one standard the location of a wall may be
defined using the coordinates of the starting point and the coordinates of the ending
point, while in another standard the location may be defined using the coordinates of
the midpoint and the length of wall. Therefore, after parsing the structured data input
and identifying the ontology used, the data mapping web service units convert the
information into parameter values in the target ontology. This task requires the
knowledge of what parameters are needed to define a particular type of building
component in a specific ontology.
Model Editing Layer. This layer contains web service units which make changes to
building information models. The web service units identify the components (e.g.
COMPUTING IN CIVIL ENGINEERING 653
doors, walls, columns, slabs, etc.) to be changed and select the templates for the
specified command (e.g. addition, removal, moving, etc.). The actual operations and
behaviors may vary depending on the target software environment, data structure, and
user profile. The model editing layer is therefore context aware.
Figure 2. (Left) 3D view of the building model used in the illustrative example;
(Right) The wall to be added using the ontology-based web service framework.
The web service
units obtain the entire building model, either from online or local machine, and
incorporate the changes into the original building model. Finally, the web service
units either return a file of the new building model, or edit the original model directly.
EXAMPLE SCENARIO
To illustrate the proposed web service framework, an example scenario is
presented in this section. In this example, users are allowed to modify an Autodesk
Revit Architecture building model by inputting commands and parameters through
web browsers. The building model has two floors and one basement (see Figure 2).
For demonstrative purposes, addition of walls using the ontology-based context aware
web service framework is presented and discussed in the following sub-sections. The
wall to be added is an interior wall located on the lower floor, as depicted in Figure 2.
Currently, 3D building models can be shown on web pages with the aid of
technologies such as VRML (Virtual Reality Modeling Language), AutoCAD DXF
(Drawing eXchange Format), and CAD viewers. In this example, an interior architect
logs into an intranet and views the building model on a designated web page. The
web page not only displays the 3D building model, but also allows users to perform
some pre-defined functions to edit the building model and to view the updated model.
The architect selects the function ‘Add Wall’ and provides parameter values such as
wall dimensions and location coordinates. Once the architect submits the form, the
web page connects to the web service unit for service routing and starts the invocation
of web services in the framework. While the command and supporting data are
provided by the users, context information including user id and browser display
settings is extracted by the web page and sent to the service routing unit.
Service Routing Layer. The service routing unit processes the inputs which consist
of (1) the information such as dimensions and material type of the new wall,
structured in a user-defined schema, (2) the command ‘Add Wall’, and (3) the context
information including target software environment and user id. The service routing
654 COMPUTING IN CIVIL ENGINEERING
unit then selects the appropriate data mapping web service unit and model editing
web service unit, and invokes them using BPEL.
Web Service Unit for Mapping Data to IFC. In this example, the target software
environment is using IFC as the data structure. IFC defines a wall using parameters
including coordinates of the starting point, orientation, length, width, height, and type
(e.g. retaining wall and exterior wall). On the other hand, the parameters in the
structured data input are height, width, coordinates of the starting point, coordinates
of the ending point, material, and wall type. The coordinates need to be converted
into wall orientation and length before wall addition can be performed.
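A minimal sketch of this conversion, assuming 2D start/end coordinates in millimetres (the function and parameter names are ours, not the framework's):

```python
import math

def wall_axis_from_endpoints(start, end):
    """Map (start, end) coordinates to the length and orientation
    (direction angle, in degrees) that an IFC wall placement expects."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    orientation_deg = math.degrees(math.atan2(dy, dx))
    return length, orientation_deg

length, angle = wall_axis_from_endpoints((0.0, 0.0), (3000.0, 3000.0))
# length ≈ 4242.64 mm, angle = 45.0 degrees
```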
Web Service Units for Addition of Walls. To add a wall in an IFC building model
that Revit Architecture can understand, five modifications of the IFC model should be
performed:
(1) Add an IfcWallStandardCase element and its sub-elements to represent the
new wall;
(2) Add properties to the new wall using IfcRelDefinesByProperties elements;
(3) Specify the material of the wall using an IfcRelAssociatesMaterial element;
(4) Connect the new wall to neighboring walls, if any, using
IfcRelConnectsPathElements elements; and
(5) Assign the new wall to a storey in the structure, using an
IfcRelContainedInSpatialStructure element.
These five modifications performed by the wall addition web service unit will be
discussed in the following paragraphs.
Figure 3 shows the structure of a wall defined in the IFC ontology. To add a
new wall, IFC elements such as IfcWallStandardCase, IfcProductDefinitionShape,
and IfcExtrudedAreaSolid need to be created. In Figure 3, the numbers in the boxes
represent the line numbers of the IFC elements in the modified building model. The
highlighted boxes represent the new lines created for the new wall. The highlighted
element names represent the elements that contain numeric values which are
associated with the dimensions of the new wall.
Similarly, the IFC elements IfcRelDefinesByProperties,
IfcRelAssociatesMaterial, and IfcRelConnectsPathElements, and their subsequent
elements are created for the new wall in the model editing web service unit. The
IfcRelDefinesByProperties element defines the properties of a building component in
a 1-to-N relationship. It allows the assignment of one property set to one or
multiple building components, such as walls or floor slabs. A building component can
be represented even without being associated with any IfcRelDefinesByProperties element,
but component information may be lost in this case. The IfcRelAssociatesMaterial
element relates a building component with its material properties. The
IfcRelConnectsPathElements element defines the connectivity relation between two
elements. The structures of the three IFC elements are described in Figure 4.
Finally, the IfcRelContainedInSpatialStructure element, which specifies the
structural elements on each floor of the building, should be re-defined. If the new wall
is located on an existing floor, there is already an IfcRelContainedInSpatialStructure
element in the building model and the new wall should be added to that element.
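Putting the five modifications together, the new entities might be emitted as STEP (SPF) lines along the following lines; the instance numbers, GUID placeholders, and simplified attribute lists are illustrations only, not a valid IFC serialization:

```python
def new_wall_step_lines(next_id, storey_id, material_id, propset_id):
    """Emit illustrative STEP (SPF) lines for the five modifications;
    attribute lists are simplified placeholders, not full IFC syntax."""
    wall = next_id
    return [
        f"#{wall}=IFCWALLSTANDARDCASE('guid',$,'NewWall',$,$,$,$,$);",
        f"#{wall+1}=IFCRELDEFINESBYPROPERTIES('guid',$,$,$,(#{wall}),#{propset_id});",
        f"#{wall+2}=IFCRELASSOCIATESMATERIAL('guid',$,$,$,(#{wall}),#{material_id});",
        f"#{wall+3}=IFCRELCONNECTSPATHELEMENTS('guid',$,$,$,$,#{wall},#3100,$,$,.ATSTART.,.ATEND.);",
        f"#{wall+4}=IFCRELCONTAINEDINSPATIALSTRUCTURE('guid',$,$,$,(#{wall}),#{storey_id});",
    ]

for line in new_wall_step_lines(5000, storey_id=47, material_id=60, propset_id=3570):
    print(line)
```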
[Figure 4. Structures of the IfcRelDefinesByProperties, IfcRelAssociatesMaterial, and IfcRelContainedInSpatialStructure elements; the numbers in the figure are line numbers in the modified IFC model.]
REFERENCES
Aziz, Z., Anumba, C. J., Ruikar, D., Carrillo, P., and Bouchlaghem, D. (2006).
"Intelligent Wireless Web Services for Construction--A Review of the
Enabling Technologies." Automation in Construction, 15(2), 113-123.
Cheng, J. C. P., and Law, K. H. (2010). "A Web Service Framework for Environmental and
Carbon Footprint Monitoring in Construction Supply Chains." Proceedings of
the 1st International Conference on Sustainable Urbanization, Hong Kong,
China, December 15 - 17, 2010.
Cheng, J. C. P., Law, K. H., Bjornsson, H., Jones, A., and Sriram, R. D. (2010). "A
service oriented framework for construction supply chain integration."
Automation in Construction, 19(2), 245-260.
Hsieh, Y.-M., and Hung, Y.-C. (2009). "A scalable IT infrastructure for automated
monitoring systems based on the distributed computing technique using
simple object access protocol Web-services." Automation in Construction,
18(4), 424-433.
Keidl, M., and Kemper, A. (2004). Towards Context-Aware Adaptable Web Services,
University of Passau, Germany.
Kong, S. C. W., Li, H., Liang, Y., Hung, T., Anumba, C., and Chen, Z. (2005). "Web
services enhanced interoperable construction products catalogue." Automation
in Construction, 14(3), 343-352.
Organization for the Advancement of Structured Information Standards (OASIS).
(2007). Web Services Business Process Execution Language (WS-BPEL),
Version 2.0.
World Wide Web Consortium (W3C). (2003). Simple Object Access Protocol (SOAP),
Version 1.2.
World Wide Web Consortium (W3C). (2007). Web Services Description Language
(WSDL), Version 2.0.
IFC-Based Construction Industry Ontology And Semantic Web Services
Framework
1
Rinker School of Building Construction, College of Design, Construction and
Planning, University of Florida, PO Box 115703, Gainesville, FL, USA 32611-5703;
PH (352) 949-9419; email: zhangle@ufl.edu
2
Rinker School of Building Construction, College of Design, Construction and
Planning, University of Florida, PO Box 115703, Gainesville, FL, USA 32611-5703;
PH (352) 273-1152; email: raymond-issa@ufl.edu
ABSTRACT
A construction project is a multi-disciplinary team effort combining inputs from
various domains, for which interoperability is of great importance. Existing
data exchange problems between different software applications are adversely
impacting the overall productivity of the Architecture, Engineering and Construction
(AEC) industry. The use of Industry Foundation Classes (IFC) has been proposed to
help address the lack of interoperability throughout the construction industry. However,
the IFC specification itself is too complicated for typical users without special training.
This paper proposes a semantic Web Services framework utilizing IFC-based
industry ontology to address the interoperability problem. First, the possibility of
building an IFC-based construction industry ontology is reviewed. Then, a framework
to build semantic Web Services on this ontology is suggested. Both a core service and
an assistant service are included. The framework can be easily expanded as long as
the same Web Services model and the common ontology are observed. Once
implemented, the framework could be utilized by any IFC-supported BIM
applications, as well as personnel without extensive knowledge of IFC specifications,
for more precise, consistent and up-to-date project information retrieval. This
approach is expected to further the effort of IFC and enhance
interoperability in the AEC industry without requiring extremely technologically
savvy users.
INTRODUCTION
A construction project is a multi-disciplinary team effort combining valuable
and unique inputs of stakeholders from various domains, including owners, architects,
engineers, contractors and facility managers. Among other requirements of
interoperability, correct and timely information sharing between the parties is of vital
value for a successful project, as well as the continuous development of the whole
industry.
The application of information technology (IT) in the construction industry is an
indispensable contributing factor for the growth of productivity within each specific
domain in the industry. But when everything is put together to work on a project, the
data exchange problem between software applications adversely impacts over-all
project productivity. As a result, the owner spends more money and waits for a longer
time for a project to be designed and built. Even worse, after the completion of the
project the owner sometimes spends extra money re-inventing and re-inputting
everything that should have already been stored somewhere along the design and
construction of a project.
In this paper a Web Services approach is proposed to address these problems.
After a brief review of Semantic Web and Web Services technologies, the possibility
of building an IFC-based construction industry ontology structure is discussed. Then,
an expandable semantic Web Services framework built on the IFC ontology is
suggested. If this framework is implemented, all IFC-supported BIM applications
could utilize the framework for precise, consistent and up-to-date construction
industry and project information, which is expected to greatly enhance
interoperability.
Current research
Domain ontologies define concepts, activities, objects and the relationships
among elements within a certain domain. Construction industry knowledge
management is among the first disciplines focusing on the building and application of
but also properties and behaviors, endows the IFC objects with intelligence (Vanlande et al.
2008). As an instance of the ISO 10303 international standard, one advantage of IFC
is that it is an open standard and everyone has full access to the information within.
Therefore it is ideal for transferring data between different software platforms.
Another advantage of IFC is its built-in support for XML, which allows any IFC
model to be described in ifcXML format in a standard XML file.
According to Corcho (2002), an ontology should include the following
minimal set of components: classes or concepts (with attributes describing the class),
and relations or associations between concepts. The contents of IFC fulfill these
component requirements. IfcWindow is a typical Entity Type in the IFC specifications.
The content page of IfcWindow includes the following sections: summary, property
set use definition, quantity use definition, containment use definition, geometry use
definition, express specification, attribute definitions and formal propositions (IAI
2010). The sections contained in other IFC elements vary with the nature of each
element but follow a similar overall pattern. The information contained in these sections
fits into the different components required for an ontology.
The classes or concepts component of an ontology concerns the nature or
definition of certain terminology. Classes are also known as entities (the name
used in IFC) or sets. The “summary” section of IfcWindow gives formal definitions
of a window, including the definition from ISO and IAI as well as an explanation of
other IFC entities used or related to IfcWindow. The URL of the webpage could be
used as the URI to identify the term in the ontology. Classes are usually organized in
taxonomies with inheritance information. This information is available in the IFC
“formal propositions” section. The “Inheritance Graph” in this section lists all the
entities that the current entity inherits from.
In IFC, the attributes of a class are included in the following sections:
property set use definition, quantity use definition, geometry use definition, and
attribute definitions. Property sets are the most typical attribute information. A property
set is a group of properties that applies to each entity. The IfcWindow entity has three
property sets: WindowCommon, DoorWindowGlazingType and
DoorWindowShadingType. The WindowCommon property set includes reference,
acoustic rating, fire rating, etc. Each property is described as a word (string) or a
number, referred to as an IfcPropertySingleValue in IFC. The other two
property sets are similar properties about the glazing and shading of the window, but
they also apply to doors. The reason all these properties are divided into three groups
is to promote the re-use of each property set across different entities. Other sections
are also sources of entity attributes; for example, the geometry use definition section
includes the height and width of the window, each value represented as a number.
The relations in an ontology are also called roles. They denote how the classes
or entities are associated with others. Most of the relations are binary, meaning two
classes are involved. The relations of the IfcWindow class with other classes are
described in the “containment use definition” section. The relations a window may be
involved in include “fills” and “voids”, etc. A window “fills” an opening, which
“voids” a wall. Together, the window, the opening and the wall are “contained” in a
building storey. The building itself is an “aggregation” of several storeys.
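These binary relations can be captured as simple subject-relation-object triples; the sketch below uses the relation names from the text rather than a formal OWL serialization, and the entity names are for illustration:

```python
# Minimal triple store illustrating the binary relations described above.
triples = [
    ("IfcWindow", "fills", "IfcOpeningElement"),
    ("IfcOpeningElement", "voids", "IfcWall"),
    ("IfcWindow", "containedIn", "IfcBuildingStorey"),
    ("IfcWall", "containedIn", "IfcBuildingStorey"),
    ("IfcBuilding", "aggregates", "IfcBuildingStorey"),
]

def related(subject, relation):
    """Return all objects linked to `subject` by `relation`."""
    return [o for s, r, o in triples if s == subject and r == relation]
```

For example, `related("IfcWindow", "fills")` returns the opening the window fills.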
The client portal is the user interface available to the end users. The portal
will be a web page implemented in JSP, and could easily be transported to mobile
devices. Users have options to enter information for enquiries and use other functions
provided on the web page. The portal will talk to the Web Services through Simple
Object Access Protocol (SOAP). SOAP is an XML-based standard protocol using
HTTP that describes a request/response message and therefore governs the
communication between a service and the client (W3C 2004). It provides a platform
and programming language independent way for Web Services to exchange
information.
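A request from the portal to the Web Services could be wrapped in a SOAP 1.2 envelope along these lines; the service namespace, operation name, and parameters are illustrative assumptions, not the paper's actual interface:

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://www.w3.org/2003/05/soap-envelope"  # SOAP 1.2 namespace

def soap_request(operation, params, service_ns="http://example.org/model-service"):
    """Wrap an operation call in a minimal SOAP 1.2 envelope."""
    envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
    call = ET.SubElement(body, f"{{{service_ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(call, f"{{{service_ns}}}{name}").text = str(value)
    return ET.tostring(envelope, encoding="unicode")

message = soap_request("GetElementInfo", {"elementType": "IfcWindow", "id": "W-101"})
```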
The core service, named “Model Service”, will receive and analyze the
client’s enquiry and return the result, e.g. the detailed information of a certain element
specified by the user. Other assistant services are also available for use under the
same interface. The “Dictionary Service” is shown as an example of assistant services
in the diagram. The function of this service is to translate between plain English
words and IFC terms.
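The Dictionary Service's translation could be sketched as a simple bidirectional lookup; the vocabulary shown is a tiny illustrative sample, not the service's actual term base:

```python
# Illustrative plain-English-to-IFC term mapping; a real dictionary would be
# derived from the full IFC specification.
PLAIN_TO_IFC = {
    "window": "IfcWindow",
    "wall": "IfcWallStandardCase",
    "door": "IfcDoor",
    "floor slab": "IfcSlab",
}
IFC_TO_PLAIN = {v: k for k, v in PLAIN_TO_IFC.items()}

def translate(term):
    """Translate a plain-English word to an IFC term, or back again."""
    return PLAIN_TO_IFC.get(term.lower()) or IFC_TO_PLAIN.get(term)
```

So `translate("Window")` yields `IfcWindow`, while `translate("IfcDoor")` yields `door`.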
REFERENCES
Akinci, B., Karimi, H., Pradhan, A., Wu, C., and Fichtl, G. (2008) "CAD and GIS
interoperability through semantic web services." Journal of Information
Technology in Construction, 13, 39-55.
Berners-Lee, T., Hendler, J., and Lassila, O. (2001) "The Semantic Web." Scientific
American, 284(5), 34.
Cardoso, J. (2007). "Semantic Web Services: Theory, Tools and Applications." IGI
Global.
Chen, H., Lin, Y., and Chao, Y. (2006). "Application of web services for structural
engineering systems." Journal of Computing in Civil Engineering, 20(3),
154-164.
Cheng, C. P., Lau, G. T., Pan, J., and Law, K. H. (2008). "Domain-specific ontology
mapping by corpus-based semantic similarity." Proceedings of 2008 NSF CMMI
Engineering Research and Innovation Conference.
Corcho, O., and Fernandez-Lopez, M. (2002). "Ontological engineering: what are
ontologies and how can we build them?" Semantic Web Services: theory, tools and
applications, J. Cardoso, ed., IGI Global.
IAI. (2010). "IfcWindow."
http://www.iai-tech.org/ifc/IFC2x4/beta3/html/ifcsharedbldgelements/lexical/ifcwindow.htm
(accessed 5/20/2010).
Katranuschkov, P., Gehre, A., and Scherer, R. J. (2002). "An engineering ontology
framework as advanced user gateway to IFC model data." EWork and EBusiness
in Architecture, Engineering and Construction: Proceedings of the Fourth
European Conference on Product and Process Modelling in the Building and
Related Industries, Portorož, Slovenia, 9-11 September 2002, Taylor & Francis,
269.
Neches, R., Fikes, R. E., Finin, T., Gruber, T., and Patil, R. (1991). "Enabling
technology for knowledge sharing." AI Magazine, 12(3), 36.
OMG. (2009). "Ontology definition metamodel." http://www.omg.org/spec/ODM/
(5/10, 2010).
Vacharasintopchai, T., Barry, W., Wuwongse, V., and Kanok-Nukulchai, W. (2007).
"Semantic Web Services framework for computational mechanics." Journal of
Computing in Civil Engineering, 21(2), 65-77.
Vanlande, R., Nicolle, C., and Cruz, C. (2008). "IFC and building lifecycle
management." Automation in Construction, 18, 70-78.
W3C. (2004). "Web Services architecture." http://www.w3.org/TR/ws-arch/ (2/11,
2010).
Wetherill, M., Rezgui, Y., Lima, C., and Zarli, A. (2002). "Knowledge management
for the construction industry: the e-COGNOS project." Journal of Information
Technology in Construction, 7, 183-195.
Using Laser Scanning to Assess the Accuracy of As-Built BIM
B. Giel1 and R.R.A. Issa2
1
M.E. Rinker School of Building Construction, College of Design Construction and
Planning, University of Florida, P.O. Box 115703, Gainesville, FL 32611-5703; PH
(352) 339-0237; FAX (352) 846-2772; email: b1357g@ufl.edu
2
M.E. Rinker School of Building Construction, College of Design Construction and
Planning, University of Florida, P.O. Box 115703, Gainesville, FL 32611-5703; PH
(352) 273-1152; FAX (352) 846-2772; email: raymond-issa@ufl.edu
ABSTRACT
The growth of laser scanning and building information modeling (BIM) has
impacted the process by which we document facilities. However, despite rapid
advances in scanning technology, there still remains a great disconnect between the
data derived from laser scans and the creation of functional as-built BIM to be used in
the operation and maintenance phase of a building's life cycle. Even with the
paradigm shift to BIM-centered processes, the question still remains as to how
owners can be assured of the accuracy of their as-built models and supplemental
documentation. This paper addresses some of the current methods used to capture
and update existing facility information and also reviews several laser scanning
applications currently being developed. Based on existing 2D as-built documentation
available for a campus facility, a 3D BIM was created post construction. Then, using
point-cloud data from a small area of the building, a case study was conducted to
assess the accuracy and value of updating the previously generated BIM to better
represent the as-is conditions.
INTRODUCTION
While the employment of BIM in the design and construction phases of
facilities has increased dramatically in recent years, utilization of BIM after the
construction phase is still seldom explored. Even as owners become driving forces in
the shift to BIM-centered processes, few are fully utilizing their building models in
the operations and maintenance of their facilities. This is primarily because the
accuracy and level of detail obtained from a construction model does not always
reflect as-built conditions needed by maintenance personnel. Furthermore, over the
span of a facility's life cycle, multiple changes occur post-construction that are
seldom documented. Laser scanning provides one possible solution to this issue by
presenting a fast and simple method for digitizing spatial information. With this
technology, as-built conditions and changes can be documented to ensure the
accuracy of information processed in BIM.
LITERATURE REVIEW
Introduction
The operations and maintenance phase of a building's life cycle is the longest
and most significant. Consequently, it is also the most costly. According to Gallaher
et al. (2004), owners and operators are shouldering $10.6 billion, or 68%, of the total
cost of inadequate interoperability in the built environment. This is in large part due to
numerous hours wasted searching, authorizing, and in many cases reconstructing as-is
documentation (Akcamete et al. 2009). In 2004, it was estimated that roughly $1.5
billion was spent annually on facilities personnel being delayed while waiting for
accurate and adequate as-built documentation. Another $4.8 million is spent each
year by facility personnel updating existing facility documentation to match current
as-built conditions (Gallaher et al. 2004).
Traditional methods of capturing existing facility information
Facilities managers need accurate information available to them to support
decision making processes, yet documentation is consistently not updated during the
construction phase to reflect in-place conditions. This is in part due to a lack of time
and adequate staffing as well as the tedious nature of the process. To date, there is no
standardized method for updating a design model to reflect changes made during
construction (Gu and London 2010). Moreover, as renovations and modifications
take place after a project's handover, little is done to document those changes.
Akcamete et al. (2009) conducted several case studies of projects during the
construction and O&M phases to determine the types of changes that occur over a
building's life cycle. Analysis of construction change orders as well as maintenance
work orders showed a consistent pattern in the change history. Though BIM
provides a possible solution to the problem of change management, the authors
critiqued its inability to track the history of changes, which may also be relevant to
facilities personnel (Akcamete et al. 2009).
Owners' push to require BIM during the design and construction phases has
facilitated an improved level of accuracy over traditional 2D methods of
documentation. However, there is still a wealth of existing facilities constructed
without BIM which must also be documented and updated. The traditional method of
updating as-built information involves the tedious process of taking field
measurements and manually recording necessary changes that must then be reflected
in the 2D and paper-based documentation. The COBIE (Construction Operations
Building Information Exchange) standard has provided some support for the digital
documentation of existing building information. However, entering this information
into COBIE is still a somewhat manual process. Rojas et al. (2009) compared and critiqued
three common methods used to capture existing facility information, including
traditional paper forms and computer data entry, laptop computers, and digital pens
and handheld computers. They found that using digital pens resulted in the highest
productivity rate, while hand-held computers were the most cost effective method.
Additionally, they noted a need to standardize procedures for surveying and data
collection within existing facilities.
METHODOLOGY
Research was conducted in two phases. First, using existing 2D as-built
documentation a multi-disciplinary 3D BIM was constructed to assess the accuracy
of as-built documentation for an existing campus facility. Then, using point cloud
data from a scan of a small area of the facility, the virtual model was checked for
dimensional and object attribute accuracy. Finally, based on discrepancies uncovered
between the scan data and the BIM, updates will be made to the BIM to reflect true
as-built dimensions and conditions.
RESULTS
Phase 1: As-built BIM creation
In the fall of 2007, a group of graduate-level and PhD students at the
University of Florida began a research project to gain hands on experience with a
variety of BIM software tools available to the industry (Giel and Issa 2010). The
students were charged with constructing an as-built BIM of their current facility,
Rinker Hall, using the 2D as-built documentation given to them by UF's
Construction and Planning office. The software platforms used to achieve this task
were Autodesk's Revit Architecture, Structure, and MEP. Some general information about
the facility is listed in Table 1.
Table 1. Rinker Hall Summary
Rinker Project Statistics
Location: Gainesville, FL
Use: Higher Education
Type: New Construction
Scope: 3-story building
Square Footage: 47,300 SF
Construction Type: Steel framing, lightweight concrete on metal deck with brick
veneer and metal panel facade
Construction Cost: $6,500,000.00
Rating: USGBC LEED-NC, v2--Level: Gold (39 points)
The BIM for Rinker Hall was virtually constructed into a series of linked
multi-disciplinary models using AutoCAD files, paper based construction drawings,
and any available specifications and submittals on file for the project. Additionally,
students were asked to document any existing conditions that were not reflected in the
original as-built documents. To gain a better understanding of how BIM may
improve the documentation process, the students also simultaneously interpreted the
original 2D drawings in the corresponding 3D Revit platform.
Rather than tracing over the imported 2D drawings, the students virtually
constructed Rinker Hall using two monitors. This was done to simultaneously check
the accuracy of AutoCAD rounding and resolve any drafting errors that may
otherwise have been overlooked. An image of the twin screen methodology used is
shown in Figure 1. An image of the final BIM model that was constructed is shown in
Figure 2.
Figure 1. An example of using twin screens to construct the BIM from 2D documents
Using the "slice" tools, multiple sections were created along the different UCS
planes of the point cloud to better visualize the interior space. Then, the "fit plane"
and "fit line" tools were employed to draw the basic boundaries of the room. This
is achieved in the program by selecting a sample of points along the known planes
and edges and fitting them with a least-squares algorithm. Other useful tools were the "extend" and
"intersection" tools for planes which were helpful in determining where multiple
planes met. The "fit cylinder" tool was also tested to determine the location and
sizing of an insulated rain leader pipe located at the front of the classroom.
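A least-squares plane fit of the kind the "fit plane" tool performs can be sketched as follows; this pure-Python version solves the normal equations for z = ax + by + c and only illustrates the idea, not the commercial tool's actual (likely more robust) implementation:

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to sampled (x, y, z) points
    by solving the 3x3 normal equations."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)
    # Normal equations: A * [a, b, c]^T = v
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    v = [sxz, syz, sz]
    # Solve the 3x3 system by Gaussian elimination with back-substitution.
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            for k in range(3):
                A[j][k] -= f * A[i][k]
            v[j] -= f * v[i]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coeffs[i] = (v[i] - sum(A[i][k] * coeffs[k] for k in range(i + 1, 3))) / A[i][i]
    return coeffs

# Points sampled exactly on the plane z = 2x - y + 3 recover (2, -1, 3).
a, b, c = fit_plane([(0, 0, 3), (1, 0, 5), (0, 1, 2), (1, 1, 4), (2, 1, 6)])
```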
There were several lessons learned through this pilot study. Perhaps the most
significant was the importance of obtaining accurate point cloud data. The level of
accuracy of the planar model that can be created is dictated by the density and
number of points and the number of scans conducted. Therefore, it is necessary to
take multiple scans from different vantage points to record information about the
space in its entirety. It is felt that the accuracy of the BIM could have been greatly
improved by taking more scans of the interior environment and moving some of the
excess debris out of the room. In addition, the number of scans needed is determined
by the materiality of the surfaces in the space, the amount of shadows and obstacles in
the room, and the overall size of room. As noted by Huber et al. (2010), edge loss
was a significant issue encountered. Thus, the farther objects were from the
instrument, the lower the density of points created and the less accurate the method for
placing those planes. There was also significant difficulty placing the curtain wall
plane at the back of the classroom because of its transparent materiality. The point
cloud actually captured several points outside the interior space. Another major
difficulty faced was the nature of working within a closed interior space. Because the
space was bounded by solid planes on all sides, it was difficult to distinguish between
objects inside the room. While the slice tools helped mitigate this issue, using a scan
with a photo superimposed on it would have greatly assisted in interpreting objects
based on color and edge differentiation.
The final step of this study will be to import the 3D planar model created in
AutoCAD into the existing federated Revit model to assess its accuracy.
Physical interior dimensions will also be taken in the space to verify the accuracy of
the planar modeling method before updates are made.
CONCLUSIONS
There is still much work to be done to ensure the accuracy of as-built
documentation in the AEC industry. Many of the errors found in the as-built
documentation of Rinker Hall were the result of drafting errors inherent in a 2D
system. However, the other significant errors that were uncovered raise the question of
whether greater weight needs to be placed on the review process of as-built
documentation before project handover. Perhaps the solution to this issue extends
beyond advanced technology and software to improving management practices and
quality control. In addition, the process of tracing and creating a physical
geometric model from a point cloud was found to be a labor-intensive and sometimes
inexact procedure. Furthermore, laser scanning equipment is an expensive initial
investment. Thus, the decision to use this methodology must be weighed carefully
against the scale and nature of the facility. At this time, the authors are not fully
convinced that this method is any less time consuming or more accurate than the
traditional manual processes of updating as-is documentation. However, the method
does digitize a wealth of information that may be useful for future applications and it
also provides a temporary solution to a growing problem in the construction industry.
FUTURE WORK
The objective of the second phase of this study was to become comfortable
using some of the software tools available for point cloud post-processing and
analysis and understand some of their limitations. Therefore, only a small room in
Rinker Hall was analyzed and updated for this study. However, after evaluating the
accuracy of this method, we hope to update and correct the entire as-built BIM.
Lastly, due to the sheer number of inaccuracies uncovered in the MEP disciplines'
drawings, the next phase of research will involve scanning Rinker's mechanical room
to provide a more accurate and intelligent set of as-built documentation to the
Facilities Planning office.
ACKNOWLEDGEMENTS
Thanks to Alex Demogines from Faro for his patience in helping us get
started on our journey. We would also like to thank Scott Diaz and Kubit-USA for
allowing us to sample several of their software packages.
REFERENCES
Akcamete, A., Akinci, B., Garrett, J.H., Jr. (2009). "Motivation for computational support
for updating building information models (BIMs)." Proceedings from the 2009
ASCE International Workshop on Computing in Civil Engineering, Austin, TX, June 24-
27, 2009, 523-532.
Bosche, F. (2009). "Automated recognition of 3D CAD model objects in laser scans and
calculation of as-built dimensions for dimensional compliance control in
construction." Advanced Engineering Informatics, 24, 107-118.
Brilakis, I., Lourakis, M., Sacks, R., Savarese, S., Christodoulou, S., Teizer, J., and
Makhmalbaf, A. (2010). "Toward automated generation of parametric BIMs
based on Hybrid video and laser scanning data." Advanced Engineering Informatics,
24, 456-465.
Dai, F. and Lu, M. (2010). "Assessing the accuracy of applying photogrammetry to
take geometric measurements on building products." Journal of Construction
Engineering and Management, 136(2), 242-250.
Gallaher, M.P., O'Connor, A.C., Dettbarn, J.L., Jr., and Gilday, L.T. (2004). "Cost
analysis of inadequate interoperability in the U.S capital facilities industry."
NIST GCR 04-867.
Giel, B. and Issa, R.R.A. (2010). "Benefits and challenges of converting 2D as-built
documentation to a 3D BIM post construction." Autodesk White Papers.
Goedert, J.D. and Meadati, P. (2008). "Integrating construction process documentation into
building information modeling." Journal of Construction Engineering and
Management, 134 (7), 509-516.
Gu, N. and London, K. (2010). "Understanding and facilitating BIM adoption in the AEC
industry." Journal of Automation in Construction, 19, 988-999.
Huber, D., Akinci, B., Tang, P., Adan, A., Okorn, B. and Xiong, X. (2010). "Using laser
scanners for modeling and analysis in architecture, engineering, and construction."
Proceedings from the 2010 44th Annual Conference on Information Sciences and
Systems, CISS 2010.
Maas, H.G. and Vosselman, G. (1999). "Two algorithms for extracting building models from
raw laser altimetry data." Journal of Photogrammetry and Remote Sensing, 54, 153-
163.
Rojas, E.M., Dossick, C.S., Schaufelberger, J., Brucker, B.A., Juan, H., and Rutz, C., (2009).
"Evaluating alternative methods for capturing as-built data for existing facilities."
Proceedings from the 2009 ASCE International Workshop on Computing in
Civil Engineering, Austin, TX, June 24-27, 2009, 237-246.
Tang, P., Huber, D., Akinci, B., Lipman, R. and Lytle, A. (2010). "Automatic
reconstruction of as-built building information models from laser-scanned point
clouds: a review of related techniques." Automation in Construction, 19, 829-843.
Woo, J., Wilsmann, J. and Kang, D. (2010). "Use of as-built information modeling."
Proceedings from the 2010 Construction Research Congress, May 1-8, 2010,
538-548.
BIM Facilitated Web Service for LEED Automation
ABSTRACT
BIM technology has been increasingly implemented in green building design
and construction, noticeably in LEED projects. The advantages of using BIM on
LEED projects reside in its information richness, which is desirable for tackling the
challenges posed by LEED certification: meeting credit compliance and
generating the documentation required in certification review. This research proposes
a 3rd party web service relying on BIM as the information backbone to facilitate the
LEED documentation generation and management. As a potential enhancement to
LEED Online, this web service uses a structured database that feed on information
coming directly from an integrated BIM model, enabled by information exchange
protocols such as Open Database Connectivity (ODBC) and the Industrial Foundation
Class (IFC). This paper explores the premises of this web service and proposes the
preliminary architecture framework in two different scenarios.
Keywords: BIM, LEED, ODBC, IFC, Web services
INTRODUCTION
BIM and LEED are arguably two of the most popular trends in the current
AEC&FM industry. With the market still in transformation, the U.S. Green Building
Council (USGBC)’s LEED brand has become a new paradigm in the U.S. green
building market. Stakeholders including legislators, developers, building owners,
design professionals and contractors are engaged with LEED in one way or another,
and the business model in building construction is affected correspondingly. LEED is
not yet a building code, but quite a few states (e.g. Arizona and California) and
governmental agencies (e.g. U.S. DOE and GSA) have mandated its implementation.
Earning LEED certification for a building is a challenging and tedious
process, with two major tasks to accomplish: 1) meet the LEED rating system
requirements; and 2) demonstrate such compliance with valid and comprehensive
documentation. To streamline the process, USGBC launched a web-based platform
called LEED Online in 2006 to help project teams manage LEED documentation.
674 COMPUTING IN CIVIL ENGINEERING
“Through LEED Online, project teams can manage project details, complete
documentation requirements for LEED credits and prerequisites, upload supporting
files, submit applications for review, receive reviewer feedback, and ultimately earn
LEED certification” (GBCI 2010). A major bottleneck of LEED Online, however,
stems from the intrinsic deficiency of the traditional project delivery method of the
building industry: fragmentation-induced lack of interoperability. In LEED
project delivery, this means redundancy in project documentation and data
collection due to information inconsistency, which eventually causes losses of
productivity and profitability.
The boom in building information modeling (BIM) technology deployment
seems to be changing the status quo. Despite the remaining controversy over what
“BIM” exactly is, consensus has been reached on the integrity of the information
captured in a building information model: everyone on the project team is on the
same page at any point, and any change triggered by one party is delivered
consistently to the rest of the team. In addition to serving as the information backbone,
BIM implementation in LEED projects can be justified by the functionalities of
current BIM authoring/analysis tools in building design and performance
configuration, including those required by the LEED rating system, e.g. whole
building energy simulation. The bottom line is that practitioners of BIM and LEED
have found opportunities to leverage the LEED process by integrating it with BIM.
This research focuses on the information flow in LEED project delivery,
especially how information is generated, processed, delivered, managed and finally
submitted to USGBC/GBCI (Green Building Certification Institute) for certification
review. The goal is to propose a supplemental web service to LEED Online that feeds
on information coming directly from BIM. The information exchange process is
enabled by common protocols such as ODBC and IFC. By designing the appropriate
schema, generic information in BIM can be manipulated and organized into a format
compatible with LEED Online, eventually enabling the automation of the LEED
process.
LEED DOCUMENTATION
As a critical step in the certification workflow (Figure 1), USGBC/GBCI
requires the project team to prepare an application with adequate documentation to
demonstrate that the project has fulfilled the claimed LEED credits. Two major types
of documentation are involved: 1) LEED Online templates and 2) supplemental
submittal documentation. The LEED Online template for each credit
specifies what kinds of submittals are expected from the project team. In
contrast, supplemental submittal documentation is optional and project teams only
use it to address issues they feel are necessary to increase the chances of achieving
the credits applied for.
Straightforward as it seems to be, the challenges in preparing LEED
documentation come from the chaos in information management that is intrinsic to
the traditional project delivery model, as illustrated in Figure 2A. Without a
centralized information source, project members are on their own in data collection
and documentation preparation. As the project evolves, they respond to its progress
asynchronously, a problem exacerbated by overwhelming redundancy during
information exchange. Consequently, information becomes inaccurate or even lost,
making the documentation error-prone. In contrast, with BIM, the project team has
an integral information source to rely on when preparing LEED documentation, as
illustrated in Figure 2B. Instead of pondering whether they are looking at the latest
drawings, or wondering whether the required regional materials have been furnished
into the building, team members can answer all of these questions by interrogating
the building information model.
directs where it should head for. It also reveals where improvement is needed for
BIM to keep advancing as a technology. BIM Stage 1 and Stage 2 are believed to
reflect the status quo for most of the industry, and Stage 3 is the next immediate
goal to embark on, which is also the focus of this research.
Figure 3. BIM maturity in stages – linear view (Adapted from Succar 2008).
INFORMATION EXCHANGE
The maturity of BIM also dictates how many resources a company or a project
team has access to in a project setting, especially in LEED project delivery. The
capacity of current BIM authoring and analysis tools has made previously laborious
processes cost-effective to conduct in achieving the desired building performance.
Wu and Issa (2010) summarized popular BIM solutions and their possible
applications in LEED for New Construction projects at the credit level.
Formulating an effective strategy to take advantage of appropriate tools in
streamlining sustainability-oriented design and construction is critical to the
success of LEED certification. Documentation, on the other hand, requires
stewardship in managing the information produced as the project progresses.
Information exchange protocols adopted by a company to communicate with
business partners, as well as the data formats used internally, may be bound up with
the company’s culture and thus resistant to transition. Nevertheless, “for a BIM
implementation strategy to succeed, it must be accompanied by a corresponding
cultural transformation strategy”. Painstaking as the transition is, it is
inevitable and its benefits are tangible: “the more flexibly information can be
exchanged, the greater the likelihood that it can be preserved in a useful form for the
long term” (Smith and Tardif 2009).
A whole range of data exchange and storage options already exists, and
Industry Foundation Classes (IFCs) protocol is a major initiative of open-standard
data formats and has been supported by many BIM software applications. The IFCs
framework involves comprehensive efforts in building semantics and ontologies that
are essential to accurate interpretation and exchange of building information, and it
is still a work in progress. The most up-to-date information about IFCs can be found
at http://www.buildingsmart.com/bim.
Another approach for interoperable information exchange is through the Open
Database Connectivity (ODBC) mechanism, which is also popular and supported by
major software vendors. Unlike IFC, ODBC aims to create a software interface for
accessing database management systems (DBMS) independently of programming
languages, database systems and operating systems; in other words, it enables direct
exchange of information at the metadata level.
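To make the ODBC-style exchange concrete, the sketch below uses Python's built-in sqlite3 module as a stand-in DBMS; the table layout and the 500-mile regional-materials query are illustrative assumptions, not the actual schema of any BIM tool or the LEED rating rule.

```python
import sqlite3

# Hypothetical table exported from a BIM authoring tool via ODBC: each row is
# a building element with a material cost and origin distance, the kind of
# data needed for regional-materials documentation.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE elements (
    id INTEGER PRIMARY KEY,
    name TEXT, material TEXT, cost REAL, origin_miles REAL)""")
conn.executemany(
    "INSERT INTO elements (name, material, cost, origin_miles) VALUES (?,?,?,?)",
    [("Slab-01", "Concrete", 120000.0, 150.0),
     ("Frame-02", "Steel", 200000.0, 800.0),
     ("Deck-03", "Timber", 50000.0, 90.0)])

# Regional-materials style query: share of material cost sourced within a
# 500-mile radius (the threshold is illustrative).
regional, total = conn.execute(
    "SELECT SUM(CASE WHEN origin_miles <= 500 THEN cost ELSE 0 END), SUM(cost) "
    "FROM elements").fetchone()
regional_share = regional / total
print(f"Regional material cost share: {regional_share:.1%}")
```

A web service could run such queries against the model database and write the results directly into LEED Online compatible fields.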
The major challenge for seamless information exchange in either the IFCs or
ODBC scenario is to ensure the integrity of information without distortion or loss
during transfer. For IFCs, this demands rich vocabularies: the International
Framework for Dictionaries (IFD) within the IFCs framework is dedicated to
capturing building semantics and ontologies, including those unique to LEED
projects. In the ODBC case, software vendors must be prudent in allowing users to
manipulate the internal database of the software without compromising its intended
functionalities, while still giving them adequate freedom to export/import the data
catering to their needs.
LEED AUTOMATION: NETWORK-BASED BIM-LEED INTEGRATION
LEED Automation is an effort to implement BIM in LEED projects at the
Stage 3 maturity level: network-based integration. USGBC’s official announcement
explicitly outlines the characteristics of such integration: “LEED Automation works
similarly to an app. It will perform three key functions for LEED project teams and
users of LEED Online by seamlessly integrating third-party applications to 1)
provide automation of various LEED documentation processes; 2) deliver customers
a unified view of their LEED projects; and 3) standardize LEED content and
distribute it consistently across multiple technology platforms” (USGBC 2010).
BIM Stage 3 is a perfect fit for this proposition because it has all the ingredients
required to fulfill these three functions: 1) documentation generation is a built-in
functionality of popular BIM solutions on the market; 2) a building information model
for a LEED project is more than a unified view for the customers: it is a valuable
reservoir of information for the project over its life cycle; and 3) the essence of LEED,
its features broken down at the building component level together with the
relationships between them, is distributed to and shared by project members via a
network in a standardized data format, regardless of how sparsely they are located
geographically and which software packages they individually use. BIM Stage 3 models
become interdisciplinary n-dimensional models allowing complex analyses at the
early stages of virtual design and construction. The model deliverables extend beyond
semantic object properties to include business intelligence, lean construction
principles, green policies and whole lifecycle costing (Succar 2008).
The network-based BIM-LEED integration is semantically-rich and can be
hypothetically achieved through model server technologies using proprietary, or non-
proprietary, open formats (e.g. BIMserver.org), single integrated/distributed federated
databases (e.g. Autodesk’s RDBLink, Laiserin 2003) and/or SaaS (Software as a
Service) solutions (e.g. Onuma Planning System, Wilkinson 2008). The prerequisites
for this integration include: 1) the maturity of network/software technologies allowing
a shared interdisciplinary model to provide two-way access to project stakeholders; 2)
the readiness of a competent information exchange format to lubricate the process.
BIM FACILITATED WEB SERVICE
This research looks at two possible approaches to propose the framework of
the network-based BIM-LEED integration differentiated by the interoperability
strategy implemented: the IFC approach and the ODBC approach. Figure 4 shows a
brief roadmap, from the process perspective of BIM and LEED integration, of the key
steps in LEED project delivery using BIM. No matter which approach is adopted, the
network-based integration kicks off when the model information flow is triggered.
The IFC approach has the potential to overcome the barriers encountered
in the ODBC scenario as long as its ontology and semantic representation of “LEED
parameters” are fully developed. The most recent version, IFC2x4 RC, has
significantly improved in addressing such needs. For example, a series of new entities
dealing with material definitions (IfcMaterialDefinition), material profiles
(IfcMaterialProfile), material relationships (IfcMaterialRelationship) and material
usage (IfcMaterialUsageDefinition) has been added to the IFC framework and can
be expected to accommodate the semantic needs of a LEED project that aims for
credits in the Materials & Resources category.
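As a minimal illustration of reading material semantics from an IFC file, the sketch below scans a few lines of STEP physical-file text for IfcMaterial instances; the snippet and the single-attribute form of IFCMATERIAL are simplifying assumptions (later IFC releases add further attributes).

```python
import re

# A few lines in the IFC STEP physical-file format (illustrative snippet,
# not a complete model). IFCMATERIAL instances carry the material name.
ifc_lines = """\
#41=IFCMATERIAL('Concrete, cast-in-place');
#42=IFCMATERIAL('Recycled steel');
#57=IFCMATERIALLAYER(#41,0.2,.F.);
"""

# Extract (instance id, material name) pairs from IFCMATERIAL entities only;
# the trailing \(' keeps IFCMATERIALLAYER and similar entities from matching.
pattern = re.compile(r"#(\d+)=IFCMATERIAL\('([^']*)'\)")
materials = {int(m.group(1)): m.group(2) for m in pattern.finditer(ifc_lines)}
print(materials)
```

A production service would use a full IFC toolkit rather than regular expressions, but the exercise shows how material data needed for Materials & Resources credits is recoverable from the exchange file itself.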
CONCLUSION
This research investigated the premises of a BIM facilitated web service to
achieve LEED automation, and proposed the framework of this web service in two
different scenarios. It is believed that current technology is ready for the industry to
experiment with the prototype of network-based BIM-LEED integration. The
semantic comprehensiveness of IFC, BIM server development, and the LEED API
are among the highest priorities for the next step of this research. Well-documented
case studies of LEED projects are highly desirable to help validate and improve the
framework as well as the functionality of the proposed web service.
REFERENCES
buildingSMART International Ltd. (2010). “Industry Foundation Classes Release 2x4
(IFC2x4) Release Candidate 2.” <http://www.iai-
tech.org/ifc/IFC2x4/rc2/html/index.htm> (Dec.23, 2010).
GBCI. (2010). “Certification Guide: LEED for New Construction.” Green Building
Certification Institute, <http://www.gbci.org/main-nav/building-
certification/certification-guide/leed-for-new-construction/about.aspx> (Dec.
23, 2010).
ABSTRACT
INTRODUCTION
execution sequence of tasks is feasible, the result of the simulation will be a valid
schedule for the construction tasks.
IMPLEMENTATION EXAMPLE
rank of a task equates with the number of predecessors of this task, i.e., a task
without any predecessors is defined as rank 0. The implementation details are shown
as a UML class diagram in Figure 3.
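The rank definition can be sketched as follows, reading rank 0 as "no predecessors" and otherwise one more than the highest predecessor rank; the task names and precedence links are hypothetical, not taken from the implementation example.

```python
# Sketch of rank-based sequencing: rank 0 for tasks with no predecessors,
# otherwise one more than the highest predecessor rank. Any ordering by
# non-decreasing rank respects all precedence constraints.
def task_ranks(predecessors):
    """predecessors: dict mapping task -> set of predecessor tasks."""
    ranks = {}
    def rank(t):
        if t not in ranks:
            preds = predecessors.get(t, set())
            ranks[t] = 0 if not preds else 1 + max(rank(p) for p in preds)
        return ranks[t]
    for t in predecessors:
        rank(t)
    return ranks

preds = {"excavate": set(), "foundation": {"excavate"},
         "frame": {"foundation"}, "roof": {"frame"},
         "electrical": {"frame"}}
ranks = task_ranks(preds)
# A feasible execution sequence for the simulation: sort by rank.
sequence = sorted(ranks, key=ranks.get)
print(ranks)
```

Feeding tasks to the simulation in this order guarantees that every task's predecessors have already been executed, which is the feasibility condition for a valid schedule.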
CONCLUSION
REFERENCES
ABSTRACT
A 4D CAD system visualizes the schedule data for a construction project.
Generally, a 5D CAD system visualizes construction cost or resource data by
linking them to 4D objects. This study attempts to develop a 5D CAD system that
links 4D objects for progress schedule data with risk data to visualize the
construction risk degree of each activity. The system uses fuzzy analysis and AHP
analysis procedures to estimate the risk degree of each activity, and it considers
construction cost, duration and dangerous work-site conditions as risk factors.
The estimated risk degree of each activity is simulated with different colors by
risk level in the 4D CAD engine developed in this study. Because the 5D CAD
system integrated with risk analysis data has novel functions compared with
current similar systems, it can be a useful tool for visualizing practical risk
data and progress schedule data.
INTRODUCTION
Risk management in construction projects is heavily dependent upon the
experience-based intuition of constructors and owners. To address this issue, active
research is underway on risk management systems in which risk analysis techniques
are applied. Kang (2010) suggested a 4D CAD engine with an improved link
method between 3D objects and schedule data. Nasir (2003) suggested setting an
activity period for risk analysis and developed the Evaluating Risk in Construction
Schedule model (ERIC-S) by deriving probability distributions for combinations of
risks. This study suggests a risk analysis process that can quantify risk
factors and develops a 5D CAD system in which risk information is visually
expressed for each activity in the schedule. The estimated risk degree of each
activity is simulated with different colors by risk level in the 4D CAD engine
developed in this study. This methodology will not only simplify conventional risk
analysis procedures but also provide visualized risk information to maximize the
efficiency of risk management operations.
THEORETICAL BACKGROUNDS
4D CAD system
The 4D CAD system realizes the progress of building construction over
time with the virtual reality (VR) technique by combining three-dimensional
drawings with schedule data that contain temporal information. The system
continuously simulates the progress of building construction on a three-dimensional
basis at each point in time in the schedule. The benefits of 4D CAD simulation are
detecting problems such as temporal and spatial interferences between structures in
advance and thereby reducing the construction period and costs.
Rij = Σn∈RV (Pijn × Rijn) ........................................Equation (1)
i: risk criteria
j: risk evaluation factors (risk probability, risk impact) j∈{P, I}
n: linguistic variable values (n∈RV={VL, L, M, H, VH})
Rijn: fuzzy number of linguistic variable value on the levels of risk criteria i and
risk evaluation factor j
Pijn: fuzzy membership function value of linguistic variable value n on the levels
of risk criteria i and risk evaluation factor j
The risk levels are calculated using risk probability and impact as
evaluation factors for each risk criterion. They are used in the approximation
formula, a triangular fuzzy number operation based on Zadeh's extension principle,
as in Equation (2), to calculate risk evaluation values with different levels of
importance:
R = (Σi Σj Σn wj × Pijn × Rijn) / (Σi Σj Σn wj × Pijn) .............Equation (2)
wj: weight of evaluation factor j calculated in the AHP analysis model
From the fuzzy analysis results suggested above, risk priorities and levels
for each risk factor are drawn. This information helps identify risk criteria
requiring focused management. Risk analysis values for each evaluation factor
can also be checked on the basis of the weight on evaluation factors calculated in
the AHP analysis model. In other words, the fuzzy input information for each risk
factor is stored in the database, so risk analysis considering seven evaluation factors
in total can be performed by selecting evaluation factors for the same fuzzy input
values (i.e. time, cost and work condition) either entirely or redundantly.
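The fuzzy aggregation described above can be sketched as a weighted average of defuzzified triangular numbers; the triangular fuzzy numbers for VL–VH, the membership values, the AHP weights and the centroid defuzzification are all illustrative assumptions rather than the study's calibrated values.

```python
# Illustrative triangular fuzzy numbers (a, b, c) for the linguistic values
# VL..VH on a 0-1 risk scale; these are assumptions, not the study's values.
LEVELS = {"VL": (0.0, 0.0, 0.25), "L": (0.0, 0.25, 0.5),
          "M": (0.25, 0.5, 0.75), "H": (0.5, 0.75, 1.0),
          "VH": (0.75, 1.0, 1.0)}

def centroid(tfn):
    # Centroid defuzzification of a triangular fuzzy number.
    return sum(tfn) / 3.0

def risk_degree(memberships, weights):
    """memberships: factor -> {linguistic level -> membership value P}.
    weights: factor -> AHP weight w. Returns a weighted-average risk degree."""
    num = den = 0.0
    for j, levels in memberships.items():
        for n, p in levels.items():
            num += weights[j] * p * centroid(LEVELS[n])
            den += weights[j] * p
    return num / den

# Hypothetical inputs: probability (P) and impact (I) memberships with AHP weights.
m = {"P": {"M": 0.6, "H": 0.4}, "I": {"H": 0.7, "VH": 0.3}}
w = {"P": 0.45, "I": 0.55}
print(round(risk_degree(m, w), 3))
```

The resulting scalar risk degree for each activity is what the 4D CAD engine would map to a color band.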
CONCLUSION
ACKNOWLEDGEMENTS
This study was conducted under the sponsorship of the Construction
Technology Innovation Project (project no. 06 E01). We would like to thank the
Ministry of Land, Transport and Maritime Affairs and the Korea Institute of
Construction and Transportation Evaluation for making this study possible.
REFERENCES
Kang, L.S., Moon, H.S., Park, S.Y., Kim, C.H., and Lee, T.S. (2010). “Improved
Link System between Schedule Data and 3D Object in 4D CAD System by
Using WBS Code.” KSCE Journal of Civil Engineering, 14(6), 803-814.
Nasir, D., McCabe, B., and Hartono, L. (2003). “Evaluating Risk in Construction–
Schedule Model (ERIC–S): Construction Schedule Risk Model.” Journal of
Construction Engineering and Management, 129(5), 518-527.
Carr, V., and Tah, J. H. M. (2001). “A fuzzy approach to construction project risk
assessment and analysis: construction project risk management system.”
Advances in Engineering Software, 32(10), 847-857.
Zeng, J., An, M., and Smith, N. J. (2007). “Application of a fuzzy based decision
making methodology to construction project risk assessment.” International
Journal of Project Management, 25(6), 589-600.
Integration of Safety in Design through the Use of Building Information
Modeling
Jia Qi1, R. R. A. Issa2, J. Hinze3 and S. Olbina4
ABSTRACT
The construction industry has incurred the most fatalities of any industry in
the private sector in recent years. This is partly because designers usually lack
design for construction safety knowledge, which results in many safety hazards
manifesting themselves during the construction process. In this research study, the
researchers devised a design for construction worker safety tool that makes
design for safety suggestions available to designers and constructors in an efficient
way, effectively alleviating potential hazards on construction sites.
This research study looks at formalizing the collected design for construction
worker safety suggestions. A dictionary and a constraint model are then developed to
store these formalized suggestions. These can then be used by a model checking
software package to check designs for construction worker safety during the
design process. These tools make it possible for architects to optimize their drawings
to minimize safety hazards during construction. Meanwhile, constructors can take
protective measures to eliminate construction site hazards from the beginning of the
project. Therefore, in both the design and construction phases, significant
improvements to construction worker safety could be realized by using this design
for safety tool.
INTRODUCTION
The construction industry has incurred the most worker fatalities of any
industry in the private sector in recent years. This is partly because designers cannot
access design for construction safety knowledge, which results in many safety
hazards being built into the project models/drawings. To improve the current situation,
this research study identifies the possible influences of Building Information
Modeling (BIM) technology on construction worker safety. After identifying the
extent of the positive impact of BIM technology on construction worker safety
through extensive literature review, the researchers describe the development of a
design for construction safety tool which can automatically check three-dimensional
(3-D) building models and make the designing for construction worker safety
suggestions available to the designers and constructors in an efficient way.
Using a software tool to help designers implement the design for construction
safety knowledge is not a new idea. In the 1990s, after recognizing the lack of
After the designing for construction safety suggestions are classified, two
major components of the Model Checking System need to be developed: the
Dictionary and the Constraint Model.
period. The design process is an iterative one. Users could submit construction
documents and check the design for non-compliance by using the Construction Safety
Checking software tool. After the report identifies the problematic building
components, the designer(s) can revise their drawings by returning to architectural
design tools. The core of the entire process is the model checking software which is
supported by a dictionary and a design for construction safety rule set. After the
design for construction safety knowledge has been incorporated into construction
documents, shop drawings can be delivered to constructors for further construction
work.
The Dictionary can make sure that a property is always assigned the same meaning
and unit of measurement.
The constraint model, also known as rule sets, is the electronic form of the design
for safety suggestions. It takes three steps to transfer the original paper-based design
for safety suggestions into the Constraint Model. The first step is to transfer the
original design for construction safety suggestions into computer-readable baseline
electronic suggestions in XML format. Then the logic between different ‘terms’ and
‘term properties’ in each suggestion is tagged by marking them with different colors.
Finally, the different logic is encoded, which transforms the baseline electronic
suggestions into the Safety Constraint Model/Safety Rule Sets.
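A minimal sketch of the first and last steps, assuming a hypothetical XML schema (the tag and attribute names below are illustrative, not those of any particular model checker):

```python
import xml.etree.ElementTree as ET

# Step 1 of the transfer: a paper-based suggestion rewritten as a baseline
# electronic suggestion in XML. Schema is an assumption for illustration.
rule_xml = """
<suggestion id="fall-01">
  <text>Design the parapet to be 42 inches tall.</text>
  <constraint object="Parapet" property="Height" operator="ge"
              value="42" unit="inch"/>
</suggestion>
"""
root = ET.fromstring(rule_xml)
c = root.find("constraint")

# Step 3: the tagged logic becomes a machine-checkable predicate that can be
# evaluated against a building component's property.
def check(height_in):
    # Operator "ge": the model value must be at least the constraint value.
    return height_in >= float(c.get("value"))

print(check(42.0), check(36.0))
```

Suggestions that carry no parametric information would keep only the text element and be shown to the user verbatim rather than encoded as a constraint.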
After the architecture of the tool has been determined, the next issue is to
define the functionalities of the tool. The safety checking system is expected to have
two main functions. One function consists of checking the drawings against the
design for construction safety rule set. The tool should also be able to provide safety
information related to certain building components. This is based on both the
characteristics of the design for construction safety knowledge and the reasoning
process of the safety checking tool. One difference between building codes
and design for construction safety knowledge is that a large number of design
suggestions are in textual form without any parametric information, whereas most
building codes are connected to attributes that can be physically measured. Many of
these suggestions are very difficult to encode into rule sets that can be compared with
the properties of building components and used to flag non-compliance;
consequently, it is better to keep them in their original form and show them to the
user as text. A second difference is that building code checking systems usually
provide detailed information only after the checking task has been completed,
whereas the design for safety tool is expected to provide suggestions during the
design process. These two points are very similar to the challenge of delivering
constructability knowledge to designers during the preliminary design phase. Taking
them into consideration, an appropriate way to deliver safety knowledge to designers
must be found.
The process of checking a construction drawing includes the following steps.
First, the user loads the design into the rule checker. Then the 3D view can be shown
on the right hand side of the safety checking tool. The navigation functions usually
include Zoom, Spin and Walkthrough. On the left hand side there are checkboxes
which are used to select objects and rule sets. The user could get detailed properties
of any object by selecting an object tab. The user also can access all design for
construction safety suggestions by selecting them from the rule sets. A detailed
explanation of every suggested design provision will be provided and some graphs
will also be given to illustrate complex issues. Next, the user can select the rules that
will be used to check against specific objects. After running the checking function,
two sets of results will be produced. One is a list of all non-compliance issues
identified in the drawings, along with suggestions about how to eliminate or mitigate
these issues. The user could print the report out. Another set of results will be shown
on the right hand side in the form of a 3D view. Red circles will show all the
components which violate certain rule sets. After getting the report from the model
checker, the user can change drawings in the architectural modeling tools or keep the
original design ideas if other requirements need to be met. Designers will be advised
to keep a record of their decisions for future use.
Next, a case study of how to use the Safety Checking tool to check a building
model is discussed. The user imports the sample model into the Model Checking
Software to check whether the slope of the roof meets the requirements. The following
requirements need to be met: “1. Design the parapet to be 42 inches tall. A parapet of
this height will provide immediate guardrail protection and eliminate the need to
construct a guardrail during construction or for future roof maintenance. 2. Minimize
the roof pitch to reduce the chance of workers slipping off the roof.”
After loading the Constraint Model and clicking the ‘Navigation’ button, the
system generates a 3D view of the building model. As shown in Figure 2, the
roof of the building model is sloped enough that it may not meet the
requirement, so the pitch of the roof should be checked. After running the
Model Checking Software, the tool shows the results as in Figure 3.
The detailed description also demonstrates that the pitch of the subject roof
does not meet requirements. According to the OSHA standard, a low-slope roof is a
roof having a slope less than or equal to 4” in 12”; otherwise the following
requirement needs to be met: “Minimize the roof pitch to reduce the chance of
workers slipping off the roof.” The project participants need to consider either
revising the building model or installing fall protection on the job site. Suppose that
after negotiation between the Design-Build team members, the designers find that
the pitch of the roof exceeds 4” in 12”; they can then go back and revise the building
model. As shown in Figure 4, the pitch of the roof meets the safety requirement after
the pitch is changed in the BIM authoring software.
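The two case-study rules can be sketched as a simple check function; the component values are hypothetical, and only the 42-inch parapet height and the 4-in-12 low-slope threshold come from the text.

```python
# Sketch of the two case-study checks: parapet height of 42 inches and the
# OSHA low-slope threshold of 4 in 12 (rise over run). Inputs are hypothetical
# component values, not data from a real building model.
LOW_SLOPE = 4.0 / 12.0

def check_roof(pitch_rise, pitch_run, parapet_height_in):
    issues = []
    if pitch_rise / pitch_run > LOW_SLOPE:
        issues.append("Roof pitch exceeds 4 in 12: reduce the pitch or plan "
                      "fall protection on the job site.")
    if parapet_height_in < 42:
        issues.append("Parapet below 42 in: no built-in guardrail protection.")
    return issues

print(check_roof(6, 12, 36))   # steep roof and short parapet
print(check_roof(3, 12, 42))   # compliant design
```

A model checker would evaluate such predicates for every roof and parapet object in the model and highlight the violating components in the 3D view.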
participants. Interoperability identifies the need to pass data between applications, and
for multiple applications to jointly contribute to the work at hand. Research studies
show that the lack of interoperability can cause tremendous inefficiencies and waste
in the construction industry (Gallaher et al. 2004).
The type of project delivery method will impact the extent to which
construction worker safety can be addressed in the design. The forms of project
delivery alter the roles played by the different parties and the allocation of their
responsibilities. In the most prevalent delivery method of Design-Bid-Build, the
designer develops a design based on the owner’s requirements, and then a constructor
is selected to build it. With this procedure, the project is designed with little expertise
from the constructor who actually constructs the project. As a result, many
constructability and safety issues are not considered until the construction phase.
Furthermore, governments often dictate that “open bidding” must be used in
government construction projects, so substantive early involvement of the actual
constructor is prohibited. Alternative project delivery methods can be used to access
the constructor’s knowledge to find safety hazards and to facilitate the
implementation of design modifications. For example, Toole (2007) confirms that
both the fee structure and model contract terms of a design-build project could induce
design engineers to consider construction safety during the design phase.
The Design-Build (DB) or Integrated Project Delivery (IPD) project delivery
method can be introduced to solve the current problem. DB and IPD allow
constructors to contribute their expertise in construction techniques early in the
design process resulting in improved project quality and financial performance during
the construction phase. Therefore the designer could benefit from the early
contribution of the constructors’ expertise during the design phase. Designers can
fully understand the ramifications of their decisions at the time the decisions are
made. The close collaboration eliminates a great deal of waste in the design, and
allows data sharing directly between the design and construction team, thereby
eliminating a large barrier to increased productivity in construction. DB and IPD also
leverage early contributions of knowledge and expertise through the utilization of
new technologies. The DB and IPD processes unlock the power of BIM, and the full
potential benefits of both DB or IPD and BIM can be achieved only when they are
used together.
SUMMARY
A design for construction worker safety software tool is developed. This tool
can automatically check for fall hazards in the building information models and
provide design alternatives to users. It can be used by the architects/engineers during
the design process or be used by the constructors before conducting the construction
works.
This tool consists of ‘Model Checking Software’ and the ‘Constraint
Model/Rule Sets’. The model checking software is an object-based rule engine, such
as the Express Data Manager (EDM), that can conduct the automatic design checking
process. The rule sets are electronic, computer-readable construction safety
suggestions. The user loads the building model into the design for construction safety
tool and can get familiar with it through 3D navigation, which includes functions
such as zoom, spin and walkthrough. Then, the user selects the specific rule
sets to check against the subject building model.
After running the model checking tool, two sets of results will be produced. One is a
list of all non-compliances identified in the drawings, along with detailed suggestions
about how to eliminate or mitigate these hazards. Another set of results will be shown
in the 3D view. The building model will be marked with different colored circles
which show all the building objects violating certain design for construction safety
rules. After getting the report from the model checker, the user can either change the
drawings or keep the original design ideas if other requirements need to be met. At
the same time, the change in delivery methods provides project participants with new
opportunities to succeed.
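The checking workflow described above can be sketched as a simple rule loop. The class, object fields, and the 1.8 m fall-height threshold below are illustrative assumptions, not the actual EDM rule engine, its API, or the tool's rule sets:

```python
# Hypothetical sketch of the rule-checking workflow; names are illustrative.

class FallHazardRule:
    """One computer-readable design-for-safety rule."""
    def __init__(self, name, predicate, suggestion):
        self.name = name
        self.predicate = predicate      # function: building object -> True if violated
        self.suggestion = suggestion    # how to eliminate or mitigate the hazard

def check_model(objects, rules):
    """Return the tool's two result sets: a textual report and flagged objects."""
    report = []   # list of non-compliances with suggestions
    flagged = []  # object ids to mark with colored circles in the 3D view
    for obj in objects:
        for rule in rules:
            if rule.predicate(obj):
                report.append(f"{obj['id']}: violates '{rule.name}' - {rule.suggestion}")
                flagged.append(obj["id"])
    return report, flagged

# Example: flag unprotected slab edges above an assumed 1.8 m fall height.
rules = [FallHazardRule(
    "unprotected edge above 1.8 m",
    lambda o: o["type"] == "slab_edge" and o["height_m"] > 1.8 and not o["guardrail"],
    "add a permanent guardrail or design a parapet")]
objects = [
    {"id": "edge-01", "type": "slab_edge", "height_m": 3.2, "guardrail": False},
    {"id": "edge-02", "type": "slab_edge", "height_m": 3.2, "guardrail": True},
]
report, flagged = check_model(objects, rules)
```

In this sketch only the unprotected edge is reported; the guarded edge passes.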
A Study of Sight Area Rate Analysis Algorithm on Theater Design
1 Graduate Research Assistant, Department of Architectural Engineering, Yonsei University, Korea, 120-749; PH (822) 2123 7833; email: yeony8@gmail.com
2 Corresponding Author, Associate Professor, Ph.D., Department of Architectural Engineering, Yonsei University, Korea, 120-749; PH (822) 2123 7833; email: glee@yonsei.ac.kr
ABSTRACT
This paper proposes a new quantitative analysis algorithm based on the "sight area rate" of a stage as seen from the audience seats in a theater. Current sightline analysis checks whether the sightline from a seat is blocked by front-row seats, using a cross-sectional view and a plan view at the center of the theater. Although this method is commonly accepted practice, it is not uncommon for audience members to find their view blocked by front-row seats. The newly proposed algorithm analyzes and quantifies the actual view area from each seat. The sight area rate is the actual sight area divided by the total unblocked sight area (or screen area) from each seat. The proposed algorithm provides quantitative results that make it easier to design a theater. Because it can derive the sight area at the early design stage using only a set of plan and cross-sectional drawings, it can be applied to analyze the audience's view even when a 3D BIM model is not fully developed.
INTRODUCTION
A sightline is a "line of sight" between the viewpoint on the stage and an audience member in the theater (Burris-Meyer and Cole 1964; DCMS 2008; Ham 1987; Izenour 1996; John and Sheard 2000). The viewpoint is located at the edge of the stage and is the lowest, closest point that every audience member should be able to see (shown in Figure 2). Existing theater design
manuals (Burris-Meyer and Cole 1964; DCMS 2008; Ham 1987; Izenour 1996; John and Sheard 2000) suggest a sightline analysis method that only examines whether the sightline from a seat is blocked by front-row seats in cross-sectional and plan views.
To overcome the limitation of the existing method, 3D modeling tools have been widely used to check whether sightlines are secured in a 3D BIM model. Although this approach presents results visually, it is hard to check the sightlines of every seat at once. Moreover, since a 3D BIM model is modified frequently at the early design stage, the sightlines must be re-analyzed whenever the model changes.
This paper suggests a new sightline analysis algorithm based on a "sight area rate" index. The proposed algorithm uses the coordinates of each seat and automatically calculates the visible screen area of every seat, from which the sight area is derived. The algorithm can be adopted at an early design stage, when the 3D BIM model is not yet fully developed.
This paper proposes a sight area rate analysis algorithm for theaters based on cross-sectional and plan drawings. First, the limitations of existing sightline analysis methods are briefly described; then a new analysis algorithm based on the sight area is proposed.
PREVIOUS METHODS
Since the sightline affects the choice of stage type and the auditorium's width and depth (Burris-Meyer and Cole 1964), it should be analyzed when a theater is designed. Sightlines are categorized into two types: vertical and horizontal. A vertical sightline is "the angular path of vision in the vertical plane over or under impediments, if any, between a sight point and the performance area" (Izenour 1996, p. 4). When a vertical sightline is analyzed, spectators in the rows in front of the considered seat, as well as building elements, can obstruct it. The vertical sightline is an important factor in deciding the slope of the auditorium, since a steeper slope helps secure it. A horizontal sightline is "the angle of vision in the horizontal plane between or around intervening obstructions" (Izenour 1996, p. 4), and is affected by the width of the auditorium (Ham 1987; Izenour 1996).
There are two types of sightline analysis methods: analysis through cross-sectional and plan drawings, and analysis through 3D modeling tools. The former only checks whether obstacles exist on the sightline path. The latter shows the visible area from a considered seat using a camera-view function in a 3D model. This method has the limitation that the visible area of each seat must be checked manually, so analyzing every seat in the auditorium takes too much time. It is most easily applied to a fully developed 3D model that contains information on the type and angle of the seats; since it is difficult to avoid modifying 3D models at the early design stages, its applicability there is limited.
This paper proposes the notion of "sight area": the visible area of the screen from a considered seat. The existing methods analyze the audience's view based on the sightline, whereas the proposed method focuses on the sight area to secure the view of the audience. Figure 1 illustrates the notion, with the gray area indicating the sight area of the considered seat. The percentage shown in Figure 1 indicates the actually visible screen area of a considered seat relative to the total screen area.
The sight area rate of a considered seat is calculated by equation [1]: the actual visible sight area (or screen area) divided by the total unblocked sight area (or screen area) from the seat,
sight area rate (%) = (actual visible sight area / total unblocked sight area) x 100. [1]
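As a minimal numeric illustration of equation [1] (the areas below are assumed example values in square meters):

```python
def sight_area_rate(visible_area, total_area):
    """Equation [1]: actual visible sight area divided by the total
    unblocked sight area from the seat, expressed as a percentage."""
    if total_area <= 0:
        raise ValueError("total unblocked sight area must be positive")
    return 100.0 * visible_area / total_area

# A seat that sees 4.5 m^2 of a 6.0 m^2 screen has a 75% sight area rate.
rate = sight_area_rate(4.5, 6.0)
```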
To analyze the sight area rate of a theater, the sightline of every seat must be analyzed. The sight area of each seat is then obtained from the proposed algorithm, which consists of two parts: obtaining the vertical distance from the vertical sightline analysis, and obtaining the horizontal distance from the horizontal sightline analysis. Since this paper applies the algorithm to a 2D-based theater design, the X, Y, and Z coordinates extracted from the plan and cross-sectional drawings are critical to calculating the sight area of every seat in the theater.
To obtain the vertical distance of the visible area of a considered seat, the Z coordinate of the eye point of the considered audience member must be identified. First, obstacles on the vertical sightline path, which connects the eye point of the considered seat to the viewpoint on the imaginary screen, are identified. If rows in front of the considered seat block this path, the critical line connecting the eye point of the considered seat to the highest head point of the audience in front is identified to obtain the vertical distance (shown in Figure 2). The vertical distance is defined as the vertical extent of the screen that is unblocked from the considered seat; Figure 2 illustrates it as the distance from the critical point to the highest point of the screen.
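The vertical-distance step can be sketched with line geometry in the cross-sectional (X-Z) plane. The coordinates and screen extents below are illustrative assumptions, not values from the paper:

```python
# Extend the critical line from the considered eye point over the highest
# front-row head point to the screen plane; the screen above that
# intersection is the unblocked vertical extent.

def vertical_distance(eye, head, screen_x, screen_top_z, screen_bottom_z):
    """eye and head are (x, z) points; returns the unblocked vertical
    extent of the screen for the considered seat."""
    ex, ez = eye
    hx, hz = head
    # Z where the critical line crosses the screen plane at x = screen_x
    z_critical = ez + (hz - ez) * (screen_x - ex) / (hx - ex)
    z_critical = max(z_critical, screen_bottom_z)   # line may pass below the screen
    return max(screen_top_z - z_critical, 0.0)      # fully blocked -> 0

# Assumed example: eye at x=10 m, z=1.2 m; front-row head at x=9 m, z=1.3 m;
# screen at x=0 spanning z = 1.0..4.0 m.
d = vertical_distance((10.0, 1.2), (9.0, 1.3), 0.0, 4.0, 1.0)
```

Here the critical line rises toward the screen and cuts off its lower part, leaving a 1.8 m unblocked vertical extent.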
Once the vertical and horizontal distances are obtained, the screen can be divided into several sections (shown in Figure 4) according to Figures 2 and 3. Sections 4, 5, and 6 are the screen area blocked for the considered seat after the vertical sightline analysis; sections 1, 3, 4, and 6 are blocked after the horizontal sightline analysis. The visible screen area of the considered seat is the intersection of the unblocked areas from the two analyses; in this case, sections 4 and 6 are blocked in both. The union of the blocked sections is the invisible area of the screen from the considered seat, so the sight area is calculated by summing the areas of the remaining sections. Applying equation [1] to the total sight area then gives the sight area rate.
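Assuming the screen sections are numbered as in the example above, combining the two analyses reduces to set operations:

```python
# Visible sections = sections outside the union of both blocked sets.

def visible_sections(all_sections, blocked_vertical, blocked_horizontal):
    return all_sections - (blocked_vertical | blocked_horizontal)

sections = set(range(1, 7))        # sections 1..6 of the screen
blocked_v = {4, 5, 6}              # blocked by the vertical sightline analysis
blocked_h = {1, 3, 4, 6}           # blocked by the horizontal sightline analysis
visible = visible_sections(sections, blocked_v, blocked_h)
both = blocked_v & blocked_h       # sections blocked in both analyses
```

With the example's numbering, only section 2 remains visible, and sections 4 and 6 are blocked in both analyses.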
CONCLUSIONS
To secure the view of the audience in a theater, existing methods analyze the audience's view based on the "sightline", the line of sight between the viewpoint on the stage and the eye point of an audience member. These methods fall into two categories: visual identification of obstacles along the sightline in 2D drawings, and identification of obstacles in a 3D model with a 3D modeling tool. However, they have the drawbacks of requiring the 3D model to be redone whenever the theater design changes and of not producing accurate analysis results. This paper proposes an analysis method focused on "sight area", the visible area of the screen from a considered seat, rather than on the sightline. The proposed method can be applied to any theater stage design without a fully developed 3D model. In future work, we will validate the sight area rate analysis method through a case study and compare the accuracy of its results with those of existing methods.
Acknowledgement
This research was supported by the MKE (The Ministry of Knowledge Economy),
Korea, under the national HRD support program for convergence information
technology supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-
2010-C6150-1001-0013).
REFERENCES
Burris-Meyer, H., and Cole, E. (1964). Theaters and Auditoriums, Van Nostrand Reinhold Publishing Corporation, New York.
DCMS. (2008). Guide to safety at sports grounds, Department for Culture, Media and Sport.
1 Graduate Research Assistant, Department of Architectural Engineering, Yonsei University, Korea, 120-749; PH (822) 2123 7833; email: jongsungwon@yonsei.ac.kr
2 Corresponding Author, Associate Professor, Ph.D., Department of Architectural Engineering, Yonsei University, Korea, 120-749; PH (822) 2123 7833; email: glee@yonsei.ac.kr
ABSTRACT
This research proposes two algorithms that may reduce the size of IFC files by extracting only the information requested by each project participant. The extraction algorithms could help increase productivity in exchanging project information among participants. One algorithm recursively extracts, from an IFC file, the entities related to the required building elements and the instances describing those entities; the other eliminates the unnecessary entities and instances from the file. The IFC files produced by the two algorithms were compared to identify the more efficient approach. The extraction algorithm proved more efficient: the resulting IFC file was about 1/11 the size of the file produced by the elimination algorithm. When extracting information related to the slab element, the extraction algorithm reduced the file to 8.7% of its original size.
INTRODUCTION
Diverse software applications are used in the Architecture, Engineering, and Construction (AEC) industry, and because of the many different formats they support, it is difficult to exchange data automatically and directly. To overcome this limitation, buildingSMART International (formerly the International Alliance for Interoperability, IAI) proposed the Industry Foundation Classes (IFC) as an international standard for exchanging data between project participants. However, a master IFC model is generally an integrated model that includes a lot of
PREVIOUS STUDIES
Some previous studies argued that it is more efficient to use an extracted IFC model containing only the requested information rather than a master IFC model integrating all the information generated by the various project participants (Chen et al. 2005; Hwang 2004; Park and Kim 2009). Park and Kim (2009) claimed that software based on description logics is necessary because IFC models have become larger and more complex. They proposed an ontology representation of an IFC-based building information model, obtained by adding Web Ontology Language (OWL) notation to the IFC model; however, algorithms for the proposed representation were not described. Hwang (2004) attempted to develop a method for calculating the quantity takeoff of a building by extracting the basic information related to quantity takeoff from an IFC-based instance file as a subset of the master IFC model. This study has several limitations: users must identify the entities related to the representation of the preliminary quantity takeoff; whenever the IFC schema is updated, the list of related entities must also be updated; and the algorithm cannot be used in areas other than quantity takeoff. Moreover, since BIM software does not yet support IFC perfectly, the algorithm might cause errors such as omitting necessary IFC instances. Chen et al. (2005) developed an IFC-based web server that automatically extracts geometric information for structural analysis from a 3D object-oriented Computer-Aided Design (CAD) model. Although a validation process was conducted through case studies, the extraction process was identified and validated only for columns and beams among the various building elements and was not applied to other elements. The server was implemented only to support collaboration between the design and structural teams by identifying the entities related to the building elements that users needed for their work.
A few studies have used extracted subsets of the EXPRESS schema (Lee 2009; Yang and Eastman 2007), but none developed algorithms for the recursive extraction of instances from an IFC file. Lee (2009) did develop a recursive algorithm and a program for extracting meaningful subsets from the EXPRESS schema; however, the program extracted only a minimum valid set of entities from an integrated IFC file and was not an instance-level extractor. Therefore, in this research the authors developed two algorithms that extract the entities and instances related to selected building elements from an IFC file and compared them to identify the more efficient instance-level extraction algorithm. The details of the developed algorithms are explained in the next section.
DEVELOPMENT OF ALGORITHMS
This research proposes two algorithms that extract a minimum valid instance-level subset containing information about the entities connected with the requested building elements, and two programs were developed based on these algorithms. Algorithm 1 extracts the requested instances, while Algorithm 2 eliminates the unnecessary instances from the IFC file. To extract the entities related to the selected building elements and the instances describing them, the relationships between the entities and the building elements must first be defined.
Development of Algorithm 1
Algorithm 1 extracts the instances related to a selected element from an IFC file. Figure 1 shows an example of extracting the entities and instances related to the slab element from a master IFC model. If IfcSlab is selected as the entity to be extracted, instance #2638, representing IfcSlab in the IFC file, is extracted. Instances #33 (IfcOwnerHistory), #2614 (IfcLocalPlacement), and #2637 (IfcProductDefinitionShape) are also extracted because instance #2638 refers to them, and the instances that #2637 refers to are extracted recursively.
In addition to this recursive process, the algorithm must take into account instances that refer to the instance representing the IfcSlab entity. For example, instance #2642 (IfcRelDefinesByProperties) is extracted because it refers to instance #2638. The order of the extracted instances in the IFC file changes, since the extraction proceeds according to the relationships among instances; however, the changed order does not cause errors in the IFC files.
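A rough sketch of Algorithm 1 over a toy STEP-style instance table follows. The regex-based reference scan and the abbreviated instance contents are illustrative assumptions; a real IFC file would need a proper STEP parser:

```python
import re

def extract(ifc_lines, seed_ids):
    """ifc_lines: dict of instance id -> instance text; returns kept ids.
    Follows #nnn references forward recursively from the seed instances,
    then also keeps instances that refer back to a seed (e.g. relationships)."""
    refs = {i: {int(m) for m in re.findall(r"#(\d+)", line)}
            for i, line in ifc_lines.items()}
    keep = set()
    stack = list(seed_ids)
    while stack:                          # forward reference closure
        i = stack.pop()
        if i in keep:
            continue
        keep.add(i)
        stack.extend(refs[i])
    for i, r in refs.items():             # backward: referrers to the seed
        if r & set(seed_ids):
            stack.append(i)
    while stack:                          # close over the referrers too
        i = stack.pop()
        if i in keep:
            continue
        keep.add(i)
        stack.extend(refs[i])
    return keep

# Toy model echoing the paper's example (contents abbreviated):
model = {
    33:   "IFCOWNERHISTORY(...)",
    2614: "IFCLOCALPLACEMENT(...)",
    2637: "IFCPRODUCTDEFINITIONSHAPE(#2614)",
    2638: "IFCSLAB(#33,#2614,#2637)",
    2642: "IFCRELDEFINESBYPROPERTIES(#2638)",
    9999: "IFCCOLUMN(#33)",
}
kept = extract(model, [2638])
```

In this sketch the slab's referenced instances and the relationship referring to it are kept, while the unrelated column is not.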
Development of Algorithm 2
Algorithm 2 eliminates the instances describing entities connected with the building elements that were not selected among the 53 entities defined as basic entities for representing building elements, together with the instances referring to the eliminated instances. Figure 2 shows an example of the elimination process. If a user wants to extract information related to column elements from an IFC file, the entities related to slab elements must be eliminated. As in Figure 2, instance #2638, representing the IfcSlab entity, is eliminated first, and instances #2642, #2654, and #2656 (all IfcRelDefinesByProperties), which refer to instance #2638, are also eliminated.
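The elimination step can be sketched as propagating deletions to every instance that refers to an already-deleted one. As before, the regex scan and abbreviated instance contents are illustrative assumptions:

```python
import re

def eliminate(ifc_lines, unwanted_ids):
    """Delete the unwanted instances and every instance that
    (transitively) refers to them; return the surviving instances."""
    refs = {i: {int(m) for m in re.findall(r"#(\d+)", line)}
            for i, line in ifc_lines.items()}
    dead = set(unwanted_ids)
    changed = True
    while changed:                 # propagate until no new referrer is found
        changed = False
        for i, r in refs.items():
            if i not in dead and r & dead:
                dead.add(i)
                changed = True
    return {i: line for i, line in ifc_lines.items() if i not in dead}

# Toy model echoing the elimination example in Figure 2:
model = {
    2638: "IFCSLAB(#33)",
    2642: "IFCRELDEFINESBYPROPERTIES(#2638)",
    2654: "IFCRELDEFINESBYPROPERTIES(#2638)",
    2656: "IFCRELDEFINESBYPROPERTIES(#2638)",
    33:   "IFCOWNERHISTORY(...)",
    7001: "IFCCOLUMN(#33)",
}
remaining = eliminate(model, [2638])
```

The slab and the three relationships referring to it are removed; the shared owner history and the column survive.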
Figure 3 shows an IFC model before extraction and the models extracted by the two algorithms developed in this research; the DDS viewer was used to display the IFC models. The number of entities in, and the size of, the IFC file containing the entities and instances related to the slab elements differed between Algorithm 1 and Algorithm 2. Table 1 shows the comparison results. Comparing the sizes of the extracted IFC files, Algorithm 1 was more efficient than Algorithm 2 at reducing file size: Algorithm 1 reduced the file to 8.7% of the master IFC file, whereas Algorithm 2 reduced it to 90.0%.
The number of entities included in the IFC file extracted by Algorithm 1 was 56.6% of the number in the file produced by Algorithm 2, meaning the file extracted by Algorithm 1 contained fewer unnecessary entities.
CONCLUSIONS
This research developed two algorithms to extract the information related to requested building elements from an IFC file and identified the more efficient one. Algorithm 1 recursively extracted the necessary instances from an IFC file, while Algorithm 2 eliminated the unnecessary instances. Both algorithms extracted, from an integrated IFC file, the entities connected with the requested building elements and the instances describing those entities, and both generated valid IFC files.
To evaluate the algorithms, the authors created an IFC model and compared the IFC files extracted by the two approaches. The file produced by Algorithm 1 was about 1/11 the size of the file produced by Algorithm 2 (8.7% versus 90.0% of the integrated IFC model). The authors therefore identified the extraction algorithm as the more efficient.
The identified algorithm should next be applied to large-scale IFC models that include design, MEP, and structural information, and evaluated to confirm its applicability to real BIM-based projects.
ACKNOWLEDGEMENTS
This research was supported by a grant titled "06-Unified and Advanced
Construction Technology Program-E01" from the Korean Institute of Construction
and Transportation Technology Evaluation and Planning (KICTEP) and the MKE
(The Ministry of Knowledge Economy), Korea, under the national HRD support
program for convergence information technology supervised by the NIPA (National
IT Industry Promotion Agency) (NIPA-2010-C6150-1001-0013).
REFERENCES
Chen, P.-H., Cui, L., Wan, C., Yang, Q., Ting, S. K., and Tiong, R. L. K. (2005).
"Implementation of IFC-based web server for collaborative building design
ABSTRACT
Patient safety is a principal factor in healthcare facility operations and maintenance (O&M). Ongoing initiatives that help track patient safety information and record incidents and close calls include the Common Formats and the International Classification for Patient Safety (ICPS). Both efforts aim to develop ontologies that support healthcare providers in collecting and submitting standardized information about patient safety events; aggregating this information is crucial for pattern analysis, learning, and trending. The purpose of this paper is to analyze these existing efforts to determine how much facility and facility management information they cover and how they can interface with new systems development. The analysis takes documented cases of healthcare-associated infections from the literature, maps the case data into the information categories of the Common Formats and the ICPS, and identifies gaps and overlaps between these existing systems and facility information. Through this analysis, connections to these efforts are identified that can serve as leverage for showing the role of healthcare facility information in assessing and preventing risky conditions. Future work will use these findings and the supporting ontology to connect patient safety information to a building model for supporting facility operations and maintenance, with the aim of generating and interpreting high-level information to provide effective and efficient patient safety in the healthcare environment.
INTRODUCTION
to better quality of care. Guidelines exist within the industry, such as those from the
U.S. Department of Health and Human Services (Sehulster and Chinn, 2003), and
other design standards, to ensure the environment of care is safe, with proper
ventilation, systems control, and procedures to help reduce Healthcare-Associated
Infections (HAIs) and patient safety events.
The use of HIT applications within healthcare systems as a way of improving patient safety is expanding. Research has shown that HIT has the potential to bring significant savings, increased safety, and better health (Hillestad et al., 2005; Taylor et al., 2005; Bigelow et al., 2005; Bates and Gawande, 2003). Reducing medical errors and improving patient safety could ultimately save healthcare and related industries $19.5 billion (USD) annually in the United States (Shreve et al., 2010). The improvements to patient safety and reductions in medical errors linked to HIT have led the federal government to pass legislation promoting its use and to create programs funding its implementation (Bates and Gawande, 2003).
Existing HIT solutions that deal with patient and clinical information lack integrated facility and environment information. This paper reviews two HIT ontologies related to patient safety events for their ability to support facility information and explores options for including such information in future systems implementations. This is done by applying data from documented case studies of patient safety events, specifically healthcare-associated infections involving a failure on the facility side, to the existing ontologies. The results of this study can help in developing a decision support system that links patient safety concerns with facility management and operational tasks, which can in turn help improve patient safety and environmental quality.
Information from two patient safety event cases involving Healthcare-Associated Infections (HAIs) caused by facility/maintenance issues was found in the literature, and one case scenario was developed through interviews with clinical and facilities staff at Hershey Medical Center, Hershey, PA. The information from these cases and scenarios was used as input to the existing frameworks (Common Formats and ICPS) to identify gaps in the environmental and facility information that is important for properly recording incidents and preventing similar situations from recurring.
Case 1: Operating room air-intake duct. A growth of moss on the roof and pigeon feces on a window ledge, both adjacent to an operating room air-intake duct, caused an outbreak of Aspergillus endocarditis (Walsh and Dixon, 1989).
units, patients and staff were infected with Legionnaires' disease when bacteria became airborne.
A few efforts are underway within the healthcare industry to create a central system for capturing and classifying patient safety events and related information within a structured ontology. Two of these initiatives are the Agency for Healthcare Research and Quality's (AHRQ) Common Formats and the World Health Organization's (WHO) International Classification for Patient Safety (ICPS).
AHRQ – Common Formats. The Patient Safety and Quality Improvement Act of 2005 established a framework for the voluntary submission of privileged and confidential information to be collectively analyzed regarding the quality and safety of patient care in healthcare settings. The idea is to have information from different organizations in a standardized format, allowing data to be aggregated to identify and address the underlying causal factors of patient safety problems. The information is stored in a database that AHRQ, at the larger scale, or individual hospitals, locally, can use to analyze statistics and identify trends in patient safety events (AHRQ, 2010).
The AHRQ Common Formats capture information on different incident types. The data associated with each incident are captured and classified in a Logical Data Model, and the Common Formats also define use cases that show developers how to implement the data model. The processes are captured in flowcharts to assist in developing the data types that must be recorded for each incident. The goal of the Common Formats is to support standardization so that data collected by different entities are clinically and electronically comparable.
The data model in the Common Formats is organized around the "Concern - Event or Unsafe Condition" class. Eight main patient safety conditions are defined as sub-types around it: blood/blood product, device/medical-surgical supply, fall, healthcare-associated infection, medication/other substance, surgery/anesthesia, perinatal, and pressure ulcer. Every event has data related to the "Contributing Factor", "Reporter", "Patient", and "Linked" classes.
For the purpose of this study, we focused on describing a case of Healthcare-Associated Infection (HAI). Figure 1 shows how the information is organized in the Common Formats for HAI (adapted from PSO Privacy Protection Center, 2010).
In this model, the information to be recorded includes the type of infection; whether the infection was present at the time of admittance (for example, from a previous health event) or acquired in the hospital; the source of the infection; whether medical procedures were involved; and what types of treatment were given. Each of these details is linked to a data element. The data elements are clearly defined in the Common Formats Data Dictionary, which describes their appropriate use within the overall system, their data type, their maximum length, and where the information may be collected (PSO Privacy Protection Center, 2010).
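The kind of HAI record described above can be sketched as a small data structure. The field names below are illustrative assumptions, not the official Common Formats data elements:

```python
# Hypothetical HAI record mirroring the information the Common Formats
# ask for; field names are illustrative, not the official data elements.
from dataclasses import dataclass, field
from typing import List

@dataclass
class HAIRecord:
    infection_type: str                   # e.g. "Aspergillus endocarditis"
    present_on_admission: bool            # acquired before vs. in the hospital
    suspected_source: str                 # e.g. a facility condition
    procedures_involved: List[str] = field(default_factory=list)
    treatments_given: List[str] = field(default_factory=list)

# Example drawn from Case 1 above:
record = HAIRecord(
    infection_type="Aspergillus endocarditis",
    present_on_admission=False,
    suspected_source="contaminated air-intake duct adjacent to operating room",
    procedures_involved=["cardiac surgery"],
)
```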
WHO – International Classification for Patient Safety (ICPS). The WHO formed a Drafting Group in charge of developing the conceptual framework for the ICPS. The framework was validated for multiple languages and approved as fit for purpose: meaningful, useful, and appropriate for classifying patient safety data and information. It aims to provide a comprehensive understanding of the patient safety domain by representing a continuous learning and improvement cycle that emphasizes risk identification, prevention, detection, risk reduction, incident recovery, and system resilience (WHO, 2009).
At this point, the ICPS focuses only on a taxonomy for classifying patient safety events; it is more a conceptual framework than a complete data model. At the larger scale the classes are created, but the attributes are still under development. The taxonomy is based on a conceptual framework consisting of 10 high-level classes: incident type, patient outcomes, patient characteristics, incident characteristics, contributing factors/hazards, organizational outcomes, detection, mitigating factors, ameliorating actions, and actions taken to reduce risk. The "incident type" class identifies 13 sub-types of safety events: clinical administration, clinical process/procedure, documentation, healthcare-associated infection, medication/IV fluids, blood/blood products, nutrition, oxygen/gas/vapor, medical device/equipment, behavior, patient accidents, infrastructure/building/fixtures, and resources/organizational management.
For the cases defined in this paper, two ICPS incident types, healthcare-associated infection and infrastructure/building/fixtures, fit the purpose. Figures 2 and 3 (adapted from WHO, 2009) show how these classes are formed.
Actual implementation of both initiatives is ongoing and under development, although both offer organizational models and technical information to support implementation. The Common Formats take a bottom-up approach in which the attributes of every safety event are specifically defined; the context of the data model covers only hospitals, and the model is ready for implementation. The ICPS takes a top-down approach: its context covers all healthcare environments, and the model defines large-scale relationships first. Its focus is on comprehensive classification, but the attributes are not yet identified for every class, and the ICPS is not yet implementable.
CAPACITY COMPARISON

Types of Events                                              Common Formats  ICPS
Blood                                                              X           X
Device/Supply                                                      X           X
Fall (or accident)                                                 X           X
Healthcare-Associated Infection                                    X           X
  Types of infections                                              X           X
  Treatment Sources                                                X
  Location Where Appeared                                          X           X
Medication/IV                                                      X           X
Surgery/Anesthesia                                                 X           X
Pressure Ulcer                                                     X
Nutrition                                                                      X
Documentation                                                                  X
Procedure                                                                      X
Behavior (Patient or Staff)                                                    X
Infrastructure Building or Fixture (and associated problem)                    X
Resources (Organization)                                                       X
Table 2 shows the different types of information directly related to the facility
that is available from the cases and can be useful in determining better practices for
facility maintenance and operations. Note that patient information, including
symptoms, treatments, and other medical information, is omitted from the table as
this information is not directly important to facility management and is stored by both
Common Format and ICPS.
Although both information structures allow for the storage of all information
related to facilities within the cases, ICPS appears to allow for better sorting of the
information for events caused by facility issues because of the classes of information
that allow for Structure/Building/Fixture information. While not all attributes are
defined through ICPS, the conceptual framework takes facility information into
account. The facility environment is key to providing quality of care, and
maintaining that environment requires keeping many systems working properly. A
HIT that links patient safety to facility management information can help reduce
patient safety events, saving the healthcare industry money and, more importantly,
improving the quality of patients’ lives.
REFERENCES
Agency for Healthcare Research and Quality (AHRQ). (2010). “Users Guide, Version
1.1: AHRQ Common Formats for Patient Safety Organizations,” AHRQ Common
Formats Version 1.1, March 2010 Release.
Bates, D.W. and A.A. Gawande. (2003) “Improving Safety with Information
Technology” The New England Journal of Medicine, 348 (25): 2526-2534.
Bigelow JH, Fonkych K, and Girosi F. (2005) “Technical Executive Summary in
Support of ‘Can Electronic Medical Record Systems Transform Healthcare?’ and
‘Promoting Health Information Technology’,” Health Affairs, Web Exclusive,
September 14.
Clements-Croome D. (2003) “Environmental Quality and the Productive Workplace,”
CIBSE/ASRAE Conference (24-26 Sept).
Cooper EE, O’Reilly MA, Guest DI, and Dharmage SC. (2003). “Influences of
Building Construction Work on Aspergillus Infection in a Hospital Setting,”
Infection Control and Hospital Epidemiology, 24(7): 472-476.
Hillestad R, Bigelow J, Bower A, Girosi F, Meili R, Scoville R, and Taylor R. (2005)
“Can Electronic Medical Record Systems Transform Healthcare? An Assessment
of Potential Health Benefits, Savings, and Costs,” Health Affairs, 24(5).
PSO Privacy Protection Center (2010). “AHRQ Common Formats Version 1.1:
Technical Specifications,” Accessed on 12/20/10, website:
https://www.psoppc.org/web/patientsafety/version-1.1_techspecs.
Shreve J, Van Den Bos J, Gray T, Halford M, Rustagi K, and Ziemkiewicz E. (2010)
“The Economic Measurement of Medical Errors: Sponsored by Society of
Actuaries’ Health Section,” Milliman Inc. (June).
Sehulster L and Chinn RYW. (2003) “Guidelines for Environmental Infection Control
in Healthcare Facilities,” Centers for Disease Control and Prevention Healthcare
Infection Control Practices Advisory Committee (HICPAC).
Taylor R, Bower A, Girosi F, Bigelow J, Fonkych K, and Hillestad R. (2005)
“Promoting Health Information Technology: Is There a Case for More-
Aggressive Government Action?” Health Affairs, 24(5).
Ulrich R, Quan X, Zimring C, Joseph A, Choudhary R. (2004) “The Role of the
Physical Environment in the Hospital of the 21st Century: A Once-in-a-Lifetime
Opportunity,” Report to the Center for Health Design for Designing the 21st
Century Hospital Project, September 2004.
Walsh T.J., and Dixon D.M. (1989) “Nosocomial Aspergillosis: Environmental
Microbiology, Hospital Epidemiology, Diagnosis and Treatment,” European
Journal of Epidemiology, 5(2):131-142.
World Health Organization (WHO). (2009) “Conceptual Framework for the
International Classification for Patient Safety” World Health Organization.
EVMS For Nuclear Power Plant Construction:
Variables For Theory And Implementation
ABSTRACT
It is anticipated that there will be intense competition in the nuclear industry
as the cost and time for nuclear power plant construction are expected to fall
(Richardson 2010). In order to attain competitive advantages under the globalized
market, utilizing advanced project control systems by integrating cost and time
management is of great concern for practitioners as well as the researchers. In this
context, the purpose of this paper is to identify major variables that characterize the
real-world Earned Value Management System (EVMS) implementation for nuclear
power plant construction. Distinct attributes of nuclear power plant construction were
investigated first. Organizational policies, measurement techniques, and data
collection methods for EVMS were then developed. A case project is briefly
introduced in order to validate the viability of the proposed methodology. This
study was conducted as part of an effort to develop an organization-wide EVMS
from an owner’s perspective.
INTRODUCTION
It is reported by Richardson (2010) that “the nuclear industry is rapidly
globalizing. As it does so, there will be sharper vendor competition. Cost and
construction time are expected to fall, and more countries will opt for nuclear power”.
Under this globalized intense competition, companies in the nuclear industry strive to
enhance the quality, cost, and time for nuclear construction projects.
Effectively managing quality, cost, and time is the utmost objective for any
type of construction project, and the most advanced and systematic method of
controlling these three performance measures in an integrated way is the
‘Earned Value Management System’ (EVMS). However, the additional management
effort required to collect and maintain detailed data has been highlighted as a
major barrier to utilizing this concept for over a quarter of a century (Rasdorf and Abudayyeh
1991; Deng and Hung 1998; Jung and Woo 2004). In order to maximize the benefits
that this integration has to offer, tools and techniques to reduce the workload for
integrated cost and schedule control should be investigated in a comprehensive
manner. Nevertheless, there has been no research addressing these issues for nuclear
construction.
In this context, the purpose of this paper is to explore influencing variables
that would facilitate effective EVMS implementation for nuclear power plant
construction. Distinct attributes of nuclear power plant construction were investigated
first. Organizational policies, measurement techniques, data collection methods for
EVMS were then developed. A case project is briefly introduced in order to validate
the viability of the proposed methodology. This paper presents the results of
‘action research’, as the authors have conducted information systems (IS) planning
for an organization-wide EVMS.
Objectives                               Methods
O1: Integrating Performance Measures     - Cost, time, and quality
                                         - Lifecycle (planning, E/P/C, startup, operation)
                                         - Hierarchical schedules
O2: Enhancing Organizational Capability  - Planning capability as owner
                                         - Project management (PM) capability as a supplier
                                         - Organizational learning mechanism and database
O3: Optimizing EVMS Workload             - Minimized additional data requirements
                                         - Balanced data linkage and segment
                                         - Maximized data utilization for analyses
O4: Augmenting Cost Engineering          - Redesigning risk & cost management system
                                         - Focused on cost engineering, not accounting
                                         - Systemized project baseline
Contract Types
Deciding a contract type for mega construction projects involves many issues
such as politics, regulations, risk sharing, local economy, etc. Despite ‘the highly
uncertain nature of nuclear plant cost estimates’ and ‘the changes toward more
complex hybrid’, the fixed-price contract serves as a base model in practice (Flaherty
2008). Moreover, for an EPC firm, the concept of a fixed-price budget is required for
the purpose of risk management and cost engineering under any contract type,
including unit price, reimbursable, and guaranteed maximum price.
Due to the mega-size of the project and the technical complexity, nuclear plant
construction is performed by multiple specialty entities. Therefore, the vertical
integration inside an E/P/C organization, which can be observed in industrial plant
construction, cannot be achieved. For this reason, indirect and contractual integration
among many parties and disciplines is a crucial issue for project management
organization (PMO). EVMS needs to support the PMO to enhance technical and
managerial leadership and to improve organizational learning.
Every principle of construction project management is equally important.
Among these construction management functions, however, quality management is
strongly stressed throughout the entire project life cycle in the nuclear industry.
This emphasis on quality makes the EVMS more viable and effective for nuclear
plant construction by adding quality onto the integrated cost and schedule. Actual
cost (AC; actual cost of work performed) data for construction activities and CAs
can be directly acquired from legacy site inspection systems. The current practice
for progress payment of the case company utilizes this process.
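At the control-account level, the cost/schedule integration underlying EVMS reduces to the standard earned-value quantities. The sketch below uses the generic, industry-standard formulas, not the case company's specific procedures, and the control-account values are illustrative:

```python
def evm_metrics(pv, ev, ac):
    """Standard earned-value metrics for one control account (CA):
    pv = planned value, ev = earned value, ac = actual cost of work performed."""
    return {
        "SV": ev - pv,    # schedule variance (negative = behind schedule)
        "CV": ev - ac,    # cost variance (negative = over budget)
        "SPI": ev / pv,   # schedule performance index
        "CPI": ev / ac,   # cost performance index
    }

# Illustrative control account: behind schedule and over budget.
m = evm_metrics(pv=100.0, ev=90.0, ac=120.0)
```

With AC taken from the site inspection systems described above and PV/EV from the hierarchical schedules, these indices can be rolled up from CAs to the project level.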
EVMS Structure
Classifications such as facility (e.g. building), commodity (e.g. piping), or system
(e.g. water circulating) can be used for the subcategory.
EVMS Procedures
CONCLUSIONS
It is observed that the distinct characteristics of nuclear power plant construction
make EVMS implementation more viable and effective. As a demand pull, strategic
needs for enhancing cost and schedule control capabilities under globalized
competition require E/P/C firms to furnish EVMS techniques. Finally, the authors
recognized that EVMS implementation can be highly successful if it is properly
optimized in terms of reengineering, workloads, and knowledge embedding.
ACKNOWLEDGEMENTS
This study was mainly supported by Korea Hydro and Nuclear Power Co., Ltd.
(KHNP). Partial expenses were also supported by the Ministry of Education, Science,
and Technology (MEST) under Grant No. 2009-0074881.
REFERENCES
Deng, M. Z. M, and Hung, Y. E. (1998). “Integrated cost and schedule control: Hong
Kong perspective.” Project Mgmt. J., Project Management Institute (PMI),
29(4), 43-49.
Flaherty, T. (2008). “Navigating Nuclear Risks: New Approaches to Contracting in a
Post-Turnkey World,” Public Utilities Fortnightly, July, 2008, 39-45.
Jung, Y. (2008). "Automated Front-End Planning for Cost and Schedule: Variables for
Theory and Implementation", Proceedings of the 2008 Architectural
Engineering National Conference, ASCE, Denver, USA, doi:
10.1061/41002(328)43.
Jung, Y. and Joo, M. (2011). “Building Information Modeling (BIM) Framework for
Practical Implementation”, Automation in Construction, Elsevier, 20(2), 126-
133.
Jung, Y. and Woo, S. (2004)."Flexible Work Breakdown Structure for Integrated Cost
and Schedule Control", Journal of Construction Engineering and Management,
ASCE, 130(5), 616-625.
ABSTRACT
Sustainability assessment tools are critical in the process of achieving
sustainable development. Eco-efficiency has emerged as a practical concept which
combines environmental and economic performance indicators to measure the
sustainability performance of different product alternatives. In this paper, an
analytical tool that can be used to assess the eco-efficiency of construction materials
is developed. This tool evaluates the eco-efficiency of construction materials using
data envelopment analysis (DEA), a linear programming-based mathematical
approach. Life cycle assessment and life cycle cost are utilized to derive the
eco-efficiency ratios, and data envelopment analysis is used to rank material
alternatives. The developed mathematical models are assessed by selecting the most
eco-efficient exterior wall finish for a school building. Through this study, our goal
is to show that a DEA-based eco-efficiency assessment model can be used to evaluate
alternative construction materials and offer vital guidance for decision makers
during material selection.
INTRODUCTION
The construction industry is one of the major contributors to environmental
problems such as global warming, ozone depletion, acidification, natural resources
depletion, solid waste generation, and indoor air quality. The construction industry
must inevitably employ certain environmental assessment tools in the process of
achieving sustainable development, since it consumes a substantial amount of natural
and physical resources and has significant environmental burdens during its life cycle.
In order to measure this progress, several metrics need to be devised. Although not
adopted widely in the construction industry, eco-efficiency has emerged as an
alternative tool that combines environmental and economic performance indicators to
measure the sustainability performance of different design alternatives.
The objective of this paper is to develop an analytical tool that can be used to
assess the eco-efficiency of construction materials. This tool evaluates the
alternatives using data envelopment analysis (DEA), a linear programming-based
mathematical approach. LCA and LCC are used to derive the eco-efficiency ratios,
and DEA is utilized to rank alternatives without a need to subjectively weight life
cycle impact dimensions and LCC. The developed mathematical models will be
assessed by selecting the most eco-efficient exterior wall finish for a building. The
rest of the
paper is organized as follows. First, the need for eco-efficiency assessment is
discussed. Next, basic aspects of DEA are explained. Then, the data collection and
model development are described. Next, analysis results and discussion are presented.
Finally, the findings are summarized and future work is pointed out.
ECO-EFFICIENCY ASSESSMENT
Eco-efficiency is defined as the delivery of competitively priced goods and
services that satisfy human needs and enhance the quality of life while progressively
reducing ecological impacts and resource intensity throughout product life cycles to
a level in line with the estimated carrying capacity of the Earth (Kibert 2008). The
eco-efficiency ratio consists of two independent variables: an economic variable
measuring the value of products or services added and an environmental variable
measuring their added environmental impacts. The ratio expresses how efficient the
economic activity is with regard to nature's goods and services. According to the
definition, eco-efficiency is measured as the ratio between the added value of what
has been produced (income, high quality goods and services, jobs, GDP etc) and the
added environmental impacts of the product or service (Zhang et al. 2008). Eco-
efficiency improvement can be accomplished by reducing the environmental impact
added while increasing the economic value added for products or services during their
life cycle. Eco-efficiency analysis has been used successfully as a valuable
assessment tool to assess sustainability in various domains (Kicherer et al. 2007;
Korhonen and Luptacik 2004; Kuosmanen and Kortelainen 2005). In this study, LCC
and LCA were utilized as the numerator and denominator of the eco-efficiency ratio:

    Eco-efficiency ratio = LCC / LCA                                        (1)
The approach of utilizing LCC to represent the economic value added has been
adopted in several research studies (Saling et al. 2002). The main advantage in
utilizing LCC is to be able to account for all costs associated with the life cycle
environmental impacts. As a result, this would properly assess the economic value for
the whole life cycle.
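Computed directly from Eq. (1), the ratio requires collapsing the multi-dimensional LCA result into a single denominator, which is exactly where subjective weighting enters. A minimal sketch with illustrative numbers:

```python
def eco_efficiency(lcc, impacts, weights):
    """Eco-efficiency ratio (Eq. 1): LCC over a weighted aggregate of
    life-cycle impact scores. The weights are the subjective choice that
    the DEA formulation described below removes."""
    return lcc / sum(w * x for w, x in zip(weights, impacts))

# Hypothetical alternative: LCC = 10, two impact scores, chosen weights.
ratio = eco_efficiency(lcc=10.0, impacts=[2.0, 3.0], weights=[1.0, 2.0])
```

A different, equally defensible weight vector would change the ranking of alternatives, which motivates the weight-free DEA treatment that follows.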
    max z = (Σr µr yro) / (Σi vi xio)                                       (2)
    subject to
    (Σr µr yrj) / (Σi vi xij) ≤ 1,   j = 1, …, n                            (3)
    µr, vi ≥ 0                                                              (4)
where µr is the output multiplier, vi is the input multiplier, o is the DMU which is
being evaluated, s represents the number of outputs, m represents the number of
inputs, j = 1, …, n indexes the DMUs, yrj is the amount of output r produced by
DMU j, and xij is the amount of input i used by DMU j. The objective function z is
the ratio of the weighted sum of outputs to the weighted sum of inputs for the DMU
under evaluation. DEA handles multiple
inputs and outputs and seeks to minimize the inputs required to produce the desired
output. If the output of the DMU under consideration cannot be produced by a
combination of the inputs of the other DMUs, then that DMU is on the efficient
frontier. In the cases where the inputs of the other DMUs can produce the output of
the DMU in consideration, that DMU is considered inefficient, since the inputs of the
other DMUs were able to produce more output for the DMU in question.
DEA has also been used to measure eco-efficiency. The eco-efficiency ratio was
modeled as an input-output model where environmental impacts represent the inputs
to the system and the economic value added represents the output of the system
(Kuosmanen and Kortelainen 2005). As a result, the environmental impacts are forced to be
minimized to achieve the same level of economic value. Alternatives that need more
environmental impacts to produce the same level of economic value were deemed as
inefficient. DEA can be adapted to mitigate the subjective judgment about the
weights of the environmental and economic performance indicators, since DEA does
not require a priori weight assignments (Kuosmanen 2005).
MODEL DEVELOPMENT
Figure 1 presents the general DEA framework in modeling eco-efficiencies of
construction materials. According to DEA notation in Fig. 1, the inputs constitute
LCA and the output constitutes LCC. Utilizing this framework, two DEA models
were developed; CCR-based ECODEA-1 model and weight restricted ECODEA-2
model.
    max z = yo / (Σi vi xio)                                                (5)
    subject to
    yj / (Σi vi xij) ≤ 1,   j = 1, …, n                                     (6)
    vi ≥ 0                                                                  (7)

where yo represents the life cycle cost of DMU o. Since life cycle cost is the only
output, output multipliers are not needed for the model. The DMU is regarded as eco-
efficient when z = 1. This model does not force any weight restrictions on
environmental impacts. Thus, the flexibly chosen weights for environmental impacts
are enabled to maximize the relative eco-efficiency of the DMU with respect to the
other compared DMUs (Kortelainen 2008). To solve this model as a linear program,
it is linearized by taking the inverse of the eco-efficiency ratio as follows:

    min 1/z = (Σi vi xio) / yo                                              (8)
    subject to
    (Σi vi xij) / yj ≥ 1,   j = 1, …, n                                     (9)
    vi ≥ 0                                                                  (10)

This mathematical model is solved through linear programming, and the eco-
efficiency ratio is derived by taking the inverse of the optimal objective value.
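A minimal sketch of the linearized model (8)-(10), assuming SciPy's `linprog` is available. The data are illustrative, with rows as DMUs, columns as environmental impact inputs, and `y` the LCC outputs:

```python
import numpy as np
from scipy.optimize import linprog

def ecodea1_score(x, y, o):
    """Eco-efficiency of DMU o via the linearized model:
    min sum_i v_i x_io / y_o  s.t.  sum_i v_i x_ij / y_j >= 1 for all j, v >= 0.
    The score is the inverse of the optimal objective value."""
    n, m = x.shape
    c = x[o] / y[o]                     # objective coefficients over weights v
    A_ub = -(x / y[:, None])            # encode "sum >= 1" as "-sum <= -1"
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * m)
    return 1.0 / res.fun

# Three hypothetical DMUs with two impact inputs each and equal LCC.
x = np.array([[1.0, 2.0], [2.0, 1.0], [2.0, 2.0]])
y = np.array([1.0, 1.0, 1.0])
scores = [ecodea1_score(x, y, o) for o in range(3)]
```

In this toy data set the first two DMUs lie on the frontier (score 1), while the third, dominated in both impacts, scores below 1, mirroring how the efficient wall finishes in the case study receive a ratio of 1.00.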
Eco-efficiency ratios ranged from .44 to 1. Among wall finishes, DECO, TRMP, and
GVNL were found to be 100% eco-efficient. CDRS was found to be the least eco-
efficient (.44) when compared with the other exterior wall finishes in the study.
Table 2. ECODEA results and corresponding weights

Wall                           Weights (vi) for
Finish  DMU  Ratio  Rank  ACD   TOX   EUT  GWP  FFD   SMG  WTR   HHL   CAP
ABR1     1   0.64    6    0     0     0    0    0     0    1.42  0     0.35
ABR2     2   0.70    5    0     0     0    0    0     0    1.59  1.64  0
GBRM     3   0.98    4    0.35  0     0    0    0     0    1.42  0     0
CDRS     4   0.44   11    0     0     0    0    2.37  0    0     0.24  0
DECO     5   1.00    3    0     0     0    0    2.25  0    0.09  0     0
HFRS     6   0.47   10    0     0     0    0    0.40  0    1.36  0     0
HSST     7   0.57    9    0     0     0    0    0.40  0    1.36  0     0
HMCT     8   0.58    8    0     0     0    0    0.40  0    1.36  0     0
TRMP     9   1.00    1    0.35  0     0    0    0     0    1.42  0     0
GSTC    10   0.59    7    0     0     0    0    0.40  0    1.36  0     0
GVNL    11   1.00    1    0     0.38  0    0    0     0    1.43  0     0
Table 3. ECODEA-1 based percent improvements of exterior wall finishes

Wall                       Percent Improvements (%) for
Finish  DMU   ACD  TOX  EUT  GWP  FFD  SMG  WTR  HHL  CAP
ABR1 1 -36 -41 -39 -40 -51 -37 -36 -94 -36
ABR2 2 -47 -47 -46 -49 -58 -46 -31 -30 -46
GBRM 3 -2 -11 -5 -9 -25 -3 -2 -95 -2
CDRS 4 -69 -69 -70 -68 -56 -66 -67 -56 -69
DECO 5 0 0 0 0 0 0 0 0 0
HFRS 6 -60 -64 -64 -62 -53 -64 -53 -93 -60
HSST 7 -52 -58 -56 -55 -43 -57 -43 -94 -52
HMCT 8 -51 -57 -56 -54 -42 -56 -42 -93 -52
TRMP 9 0 0 0 0 0 0 0 0 0
GSTC 10 -54 -60 -56 -56 -41 -57 -41 -95 -55
GVNL 11 0 0 0 0 0 0 0 0 0
DEA also offers insights into the percent improvements that could be made to
reduce environmental impacts, while LCC is held constant, to reach 100% eco-
efficiency (see Table 3). Although it is not always possible to reduce the
environmental impacts of materials, percent improvement analysis gives important
information regarding ecological inefficiencies. This information could be used to
achieve dematerialization or aid in selecting more eco-efficient sub-materials
during the production of exterior wall finishes. For instance, based on ECODEA-1,
for ABR1 to become 100% eco-efficient, it needs to reduce ACD by 36%, TOX by
41%, EUT by 39%, GWP by 40%, FFD by 51%, SMG by 37%, WTR by 36%, HHL
by 94%, and CAP by 36%. It is worth noting that DECO, TRMP, and GVNL do not
need any improvement in reducing their environmental impacts, since they are 100%
eco-efficient. The same analysis could be done using ECODEA-2 as well.
The results showed that DEA is an effective tool to evaluate construction
material alternatives and offer a critical insight to the decision maker that can lead to
buildings that use much more eco-efficient materials. Percent improvement analysis
provided valuable information to the decision makers regarding which environmental
impacts need more improvements. Although BEES model was used to calculate both
LCA and LCC, other LCA software tools, such as SimaPro and Athena, could be
utilized as well. Since the mentioned LCA software tools utilize process-based LCA
methodology, the results are expected to be similar to those of this study. Yet, it
should be noted that SimaPro and Athena do not utilize the TRACI environmental
impact categories, and their raw data would need to be used to calculate these
categories on a separate platform.
CONCLUSIONS
In this paper, a DEA-based eco-efficiency assessment framework is presented
as an effective and practical way to evaluate construction materials. The developed
framework utilized LCC and LCA as numerator and denominator for calculating the
eco-efficiency ratio and solved LP models to calculate eco-efficiency ratios for
exterior wall finishes. The model predicted DECO, TRMP, and GVNL to be 100% eco-
efficient. Percent improvement analysis was carried out to investigate environmental
impact categories that need to be reduced to reach 100% eco-efficiency. Eco-
efficiency ratios were analyzed for two cities to compare the results and gain more
insight.
This paper makes several contributions to construction research, including
developing a mathematical model that does not require subjective weighting to assess
the sustainability of construction materials, and presenting a practical way to apply
eco-efficiency to construction materials. The analysis of DEA results could be very
helpful to decision makers in comparing the relative eco-efficiency of building
materials. However, it should be noted that DEA compares eco-efficiency against the
other alternatives in the data set. This is a major drawback of DEA, since the eco-
efficiency ratios are relative to the eco-efficiency of the other materials in the data
set. Also, the accuracy of the results depends on the accuracy of the data extracted.
Taking these limitations into consideration, the developed DEA-based eco-efficiency
assessment models could provide immediate assessment of building material eco-
efficiency and offer vital guidance for decision makers during material selection. In
future work, the
scope of the study could be expanded to address more complex decision making
situations in construction projects. Furthermore, different DEA formulations could be
developed and assessed for different decision making settings.
REFERENCES
Asif, M., Muneer, T., and Kelley, R. (2007). "Life cycle assessment: A case study of
a dwelling home in Scotland." Building and Environment, 42(3), 1391-1394.
ABSTRACT
In construction projects the implementation of Alternative Dispute Resolution (ADR)
techniques requires capital expenditures to cover related costs such as fees and
expenses paid to the owner’s/contractor’s employees, lawyers, claims consultants,
third party neutrals, and other experts associated with the resolution process. Since
most projects today operate on tight budgets, one way to ease the potential for
variations from an already financially stressed project budget is to price ADR
techniques as an insurance product. However, since the premium charged by the
insurance company is designed to cover its underwriting expenses and profit target,
the benefits of purchasing ADR implementation insurance for a specific project must
outweigh its cost for the investment to be worthwhile. A number of factors in the
ADR implementation insurance model combine to determine whether it is financially
advantageous for project participants to invest in ADR implementation insurance, and
the purpose of this paper is to identify and analyze the critical parameters in the
model. Sensitivity analysis is conducted on the effectiveness of each ADR technique
chosen for the project, average ADR implementation cost on each stage of dispute
resolution, and distribution of possible disputes. These results will help determine the
most critical factors related to the pricing of ADR as an insurance product.
INTRODUCTION
Although using Alternative Dispute Resolution (ADR) techniques such as
negotiation, mediation or Dispute Review Board (DRB) to resolve disputes has been
widely adopted in construction projects as a more effective and cost-saving approach
compared to litigation, ADR implementation costs incurred throughout the dispute
resolution process can sometimes account for a large portion of the
settlement/award amount, the original claim amount, and even the total contract value
(Gebken II and Gibson 2006). Typical ADR implementation costs may include fees
[Figure 1. Model flow chart: determine project participants’ Subjective Loss
Function (SLF); disputes occur and go through the contractual DRL; probability-
weighted scenarios for possible resolution outcomes (ETA); total expected ADR
implementation costs; determine subjective loss of ADR implementation costs;
determine gross premium to cover ADR implementation costs; determine if
insurance is necessary.]
Then, the probability mass function derived from the ETA is used to calculate the
Total Expected ADR Implementation Costs. Without loss of generality, the risk of
incurring ADR implementation costs in any construction project can be
mathematically represented by:
1. n, the total number of disputes occurring in the period from the notice to
proceed (t = 0) to the project completion (t = T); n = N1, N2, …, Nk with
probabilities q1, q2, …, qk respectively, where N1 is the minimum possible
number of disputes and N1 ≥ 0, while Nk is the maximum possible number of
disputes. Since construction disputes occur randomly over time, the arrival of
disputes can be approximated with a Poisson Process with occurrence rate λ
(Touran 2003).
2. cj, the average amount of ADR implementation costs for each dispute
resolution process, where j = 1, 2,…, m represents the jth stage on the
contractual DRL. Then, for each dispute, its resolution process bears m
possible outcomes: resolved at ADR1 and cost c1, resolved at ADR2 and cost
c2, …, resolved at ADRm and cost cm, with probabilities p1, p2, …, pm,
respectively, where Σj pj = 1 and

    pm = (1 − k1)(1 − k2) … (1 − km−1)                                 Eq. (1)

with kj the effectiveness of ADRj, so that pj = kj (1 − k1) … (1 − kj−1) for j < m.
3. For the ith dispute (i = 1, 2, …, n), define xij = 1 if the ith dispute is
resolved in the jth stage; otherwise, xij = 0. Thus xj = Σi xij represents the
total number of disputes that are resolved in the jth stage, and (x1, …, xm)
follows a multinomial distribution M(n; p1, p2, …, pm), with the expected value
E(xj) = n pj, where j = 1, 2, …, m. Specifically, when m = 2, x1 follows a
binomial distribution B(n, p1). E(xj) is the expected number of disputes that are
resolved in the jth stage.
4. Among all n disputes, there are a total of R different possible outcomes. For
each outcome r, there could be xj disputes resolved with ADRj. Consequently,
the total ADR implementation cost throughout the time horizon for the rth
outcome is Cr = Σj xj cj, with a probability of Πj pj^xj, given a total of n
disputes. The number of outcomes which bear the same total cost and
probability is the multinomial coefficient n!/(x1! x2! … xm!). Hence,

    E(C | n) = Σr (n!/(x1! … xm!)) (Πj pj^xj) (Σj xj cj)               Eq. (2)
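Because expectation is linear, the total expected cost reduces to the expected number of disputes times the expected cost per dispute. The sketch below checks this numerically with a truncated Poisson sum; the stage probabilities and costs are illustrative inputs, given directly rather than derived from the kj:

```python
import math

def expected_total_cost(lam, probs, costs, n_max=40):
    """Total expected ADR implementation cost: dispute count n ~ Poisson(lam),
    each dispute independently resolved at stage j with probability probs[j]
    and cost costs[j]. Computed by truncated enumeration over n; by linearity
    this equals lam * sum_j probs[j] * costs[j]."""
    per_dispute = sum(p * c for p, c in zip(probs, costs))
    return sum(math.exp(-lam) * lam**n / math.factorial(n) * n * per_dispute
               for n in range(n_max))

# Illustrative three-step DRL: lam = 3 disputes expected over the project.
etc = expected_total_cost(3.0, probs=[0.5, 0.25, 0.25], costs=[1.0, 2.0, 4.0])
```

For these numbers the expected cost per dispute is 2.0, so the total expected cost is 6.0 cost units.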
The fourth step in the flow chart is to calculate the Total Expected Subjective
Loss of ADR Implementation Costs. As mentioned earlier, a subjective loss function
(SLF) is used to indicate the negative utility u(c) that project participants attach to a
given loss amount of ADR implementation costs C resulting from dispute resolution.
The total expected subjective loss could be expressed as follows:
    E[u(C)] = Σk qk SL(Nk)                                             Eq. (3)

where SL(n) is the total subjective loss when the total number of disputes is n:

    SL(n) = Σr (n!/(x1! … xm!)) (Πj pj^xj) u(Σj xj cj)                 Eq. (4)
The last step of the model is to compare the gross premium with the expected
subjective loss and to determine whether investing in ADR implementation insurance
is favorable. If GP ≤ E[u(C)], then there exists the possibility for an insurance policy.
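For a nonlinear u the expectation no longer collapses to a simple product, so the multinomial outcomes must be enumerated. The sketch below evaluates Eqs. (3)-(4) exactly for small cases; all inputs are illustrative, and the screening rule is taken as GP ≤ E[u(C)]:

```python
import math

def compositions(n, m):
    """All tuples (x_1, ..., x_m) of non-negative integers summing to n."""
    if m == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, m - 1):
            yield (first,) + rest

def expected_subjective_loss(lam, probs, costs, u, n_max=25):
    """E[u(C)] per Eqs. (3)-(4): Poisson(lam) dispute count, multinomial
    split of disputes over the DRL stages, subjective loss function u."""
    total = 0.0
    for n in range(n_max):
        pn = math.exp(-lam) * lam**n / math.factorial(n)
        sl_n = 0.0
        for xs in compositions(n, len(costs)):
            coef = math.factorial(n)
            for x in xs:
                coef //= math.factorial(x)   # multinomial coefficient
            pr = coef * math.prod(p**x for p, x in zip(probs, xs))
            sl_n += pr * u(sum(x * c for x, c in zip(xs, costs)))
        total += pn * sl_n
    return total

# Sanity check with a linear u: E[u(C)] collapses to lam * sum_j p_j c_j.
eul = expected_subjective_loss(2.0, [0.6, 0.4], [1.0, 3.0], u=lambda c: c)
```

With a convex (risk-averse) u instead of the identity, E[u(C)] exceeds the expected cost, which is what makes a premium above the actuarially fair value potentially acceptable to participants.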
SENSITIVITY ANALYSIS
To determine the most critical factors of the model, sensitivity analysis is conducted
with an illustrative example on the effectiveness of each ADR technique chosen for
the project (kj), average ADR implementation cost on each stage of dispute resolution
(cj), and distribution of possible disputes (λ).
Assume there is a highway bridge project in which project participants decide to
include a three-step DRL in the contract for dispute resolution (m = 3). In this DRL, a
dispute goes from the Architect/Engineer or Supervising Officer (ADR1) to
mediation (ADR2) and then arbitration (ADR3). If the DRL fails to provide a
satisfactory settlement, then dispute resolution will eventually escalate to litigation,
which will be much more costly. Details are shown in Figure 2.
The estimated duration of this project is T = 720 days from Notice to Proceed
(assuming 30 days in each month, T = 24 months). Assume that disputes occur
according to a Poisson process with rate λ = 3. To determine the total expected ADR
implementation costs, an event tree analysis (ETA) is constructed as in Figure 3.
[Sensitivity plots: total expected ADR implementation costs and subjective loss
($MM) against ±40% variations in λ, k1, k2, k3, c1, c2, and c3.]
From the figures we can conclude that the effectiveness of each ADR technique
chosen for the project (kj) and the rate of dispute occurrence (λ) have a larger
influence on the total expected ADR implementation costs and subjective loss. The
limitation is that this is a simplified model with assumptions such as independence
between dispute occurrences and the effectiveness of each ADR. The real situation
could be more complicated. Thus, a more detailed analysis with tests on more
parameters is required in order for the model to be applied to real projects.
Moreover, drawing an analogy from other commercial insurance products such as
medical insurance, the policy will have a deductible limit on project participants to
prevent moral hazard. In this case project participants will have to bear part of the
ADR implementation costs before the insurance kicks in. Future work will focus on
finding the optimal point on project participants’ subjective loss curve which will
minimize their total expected subjective loss.
REFERENCES
Bowers, N.L., Gerber, H.U., Hickman, J.C., Jones, D.A. and Nesbitt, C.J. (1997)
“Actuarial Mathematics.” Society of Actuaries. Hardback.
Gebken II, R. J. and Gibson, G. E. (2006) “Quantification of costs for dispute
resolution procedures in the construction industry.” J. Professional Issues in Eng.
Education and Practice, 132( 3), July, 264-271
Hoshiya, M., Nakamura, T. and Mochizuki, T. (2004) “Transfer of Financial
Implications of Seismic Risk to Insurance.” Natural Hazards Review, ASCE, 5(3),
141-146.
Menassa, C., Peña-Mora, F., and Pearson, N. (2009). “A Study of Real Options with
Exogenous Competitive Entry to Analyze ADR Investments in AEC Projects.”
Journal of Construction Engineering and Management, ASCE, Reston, VA.
Rausand, M. and Høyland, A. (2005) “System reliability theory: models, statistical
methods, and applications.” New Jercy: John Wiley &Sons, Inc.
Song, X, Peña-Mora, F., Arboleda, C., Conger, R., and Menassa, C. (2009). “The
Potential Use of Insurance as a Risk Management Tool for ADR Implementation in
Construction Disputes.” Accepted for publication and presentation at 2009 ASCE
International Workshop on Computing in Civil Engineering, Austin, Texas, U.S. -
June 24-27, 2009
Song, X., Peña-Mora, F., Arboleda, C. (2010), "The Calculation of Optimal Premium
In Pricing ADR As An Insurance Product." The International Conference on
Computing in Civil and Building Engineering (ICCCBE), and XVII European Group
for Intelligent Computing in Engineering (EG-ICE) Workshop, Nottingham, UK,
June 30-July 2, 2010.
Touran, A. (2003). “Calculation of Contingency in Construction Projects.” IEEE
Transactions on Engineering Management, IEEE Engineering Management Society,
Piscataway, N.J, 50 (2), 135-140.
United States Nuclear Regulatory Commission (1975). “An assessment of accident
risk in U.S. commercial nuclear power plants.” Appendix I. Accident definition and
use of event tree, WASH-1400, NUREG-75/ 014, USNRC, Gaithersburg, Md.
Application of Latent Semantic Analysis for Conceptual Cost Estimates
Assessment in the Construction Industry
Tarek Mahfouz1
1
Assistant Professor, Department of Technology, College of Applied Science and
Technology, Ball State University, Muncie, Indiana, 47306, email: tmahfouz@bsu.edu
ABSTRACT
Conceptual cost estimates represent the first benchmark upon which owners
define their financial capability to perform a construction project. Consequently, the
accuracy and quality assessment of these estimates is crucial. This paper proposes an
automated conceptual cost estimate assessment model based on Latent Semantic
Analysis (LSA). LSA has rarely been implemented in the construction industry,
which deprives the industry of its strengths as a facilitator of decision making. The
adopted research methodology (1) utilizes data from a set of completed construction
projects; (2) proposes an automated LSA model for the assessment of conceptual cost
estimates based on error ranges; and (3) compares the attained outcomes to previous
studies in the literature. The outcomes of the current research illustrate that LSA
modeling performs accurately in assessing conceptual cost estimates, making it a
powerful tool for construction decision making.
INTRODUCTION
US Census data showed that total construction spending in 2007 was
about $14 trillion (US Census 2010). This considerable expenditure reflects the
dynamic nature of the construction industry and the increasing sophistication and
complexity of construction projects. These characteristics create a need for extensive
coordination among different parties and areas of expertise, and for the production of
massive amounts of documents in diversified formats. All of these factors impose a
heavy burden on the design team and a larger one on estimators. At the conceptual
estimate stage, these factors affect the accuracy of the developed estimate, which is
based on experience and previous knowledge. This imposes a high level of risk on
owners and developers due to the uncertainty associated with the estimate. In an
effort to facilitate construction conceptual cost estimate (CCCE) assessment, a
number of studies have developed expert systems, mathematical models, and
machine learning (ML) models. Although those studies made significant
contributions, none of them utilized Latent Semantic Analysis (LSA). LSA has
proven to be a reliable automated decision support methodology in previous research
by the author in the fields of knowledge management and legal decision support
(Mahfouz 2009; Mahfouz et al. 2010), achieving higher prediction accuracy than
comparable studies in the literature.
Therefore, in an attempt to provide a robust CCCE assessment methodology for the
construction industry, this paper develops an automated assessor based on Latent
Semantic Analysis (LSA). The models developed made use of data from a set of 89
completed projects worldwide. To that end, the adopted research methodology (1)
investigated LSA algorithms; (2) developed truncated feature spaces for the utilized
projects; (3) developed five LSA automated assessment models; (4) developed a C++
algorithm to facilitate assigning cost assessments; and (5) tested and validated the
best developed model on previously unseen projects. It is conjectured that this
research stream will help relieve the negative consequences associated with CCCEs
that are based on incomplete sets of documents. In addition, the achieved outcomes
highlight the potential of this technique for automated decision support in the
construction industry.
The rest of this paper describes (1) Literature Review; (2) Methodology; (3)
Results and Discussion; and (4) Conclusion.
LITERATURE REVIEW
Over the last decade, researchers in the construction industry have focused
their efforts on developing models to assess the quality of CCCEs. These models
range from rule-based reasoning (RBR) systems (Serpell 2004), through
mathematical modeling systems (Fortune and Lees 1996; Oberlender and Trost 2001;
Trost and Oberlender 2003), to machine learning (ML) models (An et al. 2007).
Despite the significant contribution of these systems to the advancement of CCCE
assessment, they faced the following hindrances. The success of RBR models was
limited by (Bubbers and Christian 1992): (1) the failure to deduce all necessary rules
upon which the system operates; and (2) the assumption that a full domain model
exists that captures all required rules about a specific matter. Due to the complexity
of the analyzed problem and the number of factors involved, mathematical models
such as regression and factor analysis were implemented. However, their limited
capability to integrate nonlinear associations opened the horizon for more
sophisticated ML methodologies. In one of the most recent studies, An et al. (2007)
utilized Support Vector Machines (SVM) for the assessment of construction
conceptual cost estimate errors. However, none of these studies implemented LSA, a
mathematically based method that applies ML through a truncated feature space.
This characteristic reduces computation and accentuates the effect of the analyzed
factors.
METHODOLOGY
The following sections of the paper describe the different steps of developing,
implementing, and validating the LSA models. The adopted research methodology is
composed of five main stages. These stages are defined as (1) Data Collection; (2)
Assessment Criteria; (3) Factors Identification; (4) LSA Model Design and
Implementation; and (5) Model Testing and Validation.
Data Collection
The data pertinent to the current analysis were collected from 89 completed
projects worldwide. Table 1 illustrates the distribution of the projects with respect to
geographic location. The related information was gathered from project managers
and experienced estimators. Since the current research is concerned with assessing
the accuracy of the CCCE, as discussed in the following section, only information
related to scopes of work that did not undergo any changes was gathered.
Consequently, any additions to the scope of work were excluded and any
omissions were eliminated from the initial conceptual cost data. The analyzed
projects were classified into three categories with respect to their % error, adopted
from An et al. (2007): 0-<5%, 5-10%, and >10%. These ranges were adopted because
the literature indicates that acceptable error should not exceed 10%; however, as
noted by An et al. (2007), “from interviews with experienced experts, Korean
companies generally set the primary goal of the range of error rate at 5%”. The
0-<5%, 5-10%, and >10% categories included 22 (24.72%), 50 (56.18%), and 17
(19.10%) projects respectively.
Table 1. Geographic Distribution of Projects
# of Projects % of Projects Location
13 14.61 USA
23 25.84 Egypt
8 8.99 Qatar
10 11.24 Kuwait
35 39.33 UAE
Assessment Criteria
The assessment measure adopted for the current research is the CCCE
accuracy (equation 1), defined as a measure of how close the initial cost estimate is
to the actual cost after completion. However, any changes in the scope of work will
affect the assessment; as a result, only data related to the unchanged scope of work
are considered when defining the final cost at completion.
% error = |actual cost at completion − conceptual cost estimate| / actual cost at completion × 100    eq. 1
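A minimal sketch of the % error computation and its categorization, assuming the absolute-difference form of the accuracy measure (the function names are illustrative):

```python
def pct_error(conceptual_estimate, actual_cost):
    """% error of a conceptual estimate relative to the final cost at
    completion (unchanged scope only)."""
    return abs(actual_cost - conceptual_estimate) / actual_cost * 100.0

def error_category(error):
    """Map a % error to the ranges adopted from An et al. (2007)."""
    if error < 5.0:
        return "0-<5%"
    if error <= 10.0:
        return "5-10%"
    return ">10%"
```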
Factors Identification
The set of factors adopted for the current assessment was defined in three
steps. First, a comprehensive literature review of factors utilized in previous studies,
including Skitmore (1991), Akintoye and Fitzgerald (2000), Trost and Oberlender
(2003), Serpell (2004), and An et al. (2007), yielded a set of 54 factors. Second,
interviews with experienced estimators and project managers from the adopted
projects identified an extra 5 factors. Third, after gathering information related to
these factors from all utilized projects, statistical choice models, namely Probit and
Logit, were developed to define the most significant factors and their associations
with CCCE assessment. As a result of these steps, 32 factors were used for the
current research task. Table 2 lists the factors together with their definitions,
statistical significance, and types.
LSA Model Design and Implementation
Latent Semantic Analysis (LSA) is a theory that utilizes linear algebra,
particularly Singular Value Decomposition (SVD), to solve associations and
constraints between factors mathematically. It is based on the Vector Space Model
concept also used by SVM. However, the main advantage of LSA is that it works in a
truncated space in which the number of features is decreased. The LSA methodology
applies SVD for dimensionality reduction in which all of the local relations are
simultaneously represented. The implementation of LSA modeling within the current
research task is performed in three steps. First, all gathered projects are represented
in the form of a matrix (Figure 1). Each row of the matrix represents one of the 32
defined factors.
Table 2. List of Utilized Factors
Item Factor Definition t-stat Type of Factor
1 Intensity of the site visit 1.66 Ordinal 5:high–0:none
2 Site clearness of obstacles during site visit 2 Ordinal 5:high–1:low
3 Possibility of differing site conditions 1.86 Ordinal 5:high–0:none
4 Level of site survey 1.46 Ordinal 5:high–1:low
5 Experience with similar projects 2.13 Ordinal 5:high–1:low
6 Details of existing data 1.35 Ordinal 5:high–0:none
7 Level of details in project definition 2.19 Ordinal 5:high–1:low
8 Level of details in project scope statement 1.58 Ordinal 5:high–1:low
9 Level of details of the project drawings 1.84 Ordinal 5:high–1:low
10 Level of details of the project technical specifications 1.8 Ordinal 5:high–1:low
11 Level of details of the project general conditions 1.41 Ordinal 5:high–1:low
12 Level of details of the project supplementary conditions 2.02 Ordinal 5:high–1:low
13 Level of commitment of the company to the project 1.52 Ordinal 5:high–1:low
14 Financial capacity of the company 1.63 Ordinal 5:high–1:low
15 Financial capacity of the client 1.75 Ordinal 5:high–1:low
16 Time to estimate 1.53 Numerical days
17 Difficulty of the estimating procedures 1.47 Ordinal 5:high–1:low
18 Estimator’s career experience 1.8 Numerical years
19 Estimator’s field work experience 1.5 Numerical years
20 Estimator’s experience with similar projects 1.41 Ordinal 5:high–0:none
21 Estimator’s experience with field work in similar projects 1.55 Ordinal 5:high–0:none
22 Capacity of the estimating team 2.23 Ordinal 5:high–1:low
23 Number of other projects under estimation 1.68 Numerical integer
24 Capacity of the architectural team 2.19 Ordinal 5:high–1:low
25 Capacity of the procurement team 1.59 Ordinal 5:high–1:low
26 Capacity of the technical office team 1.74 Ordinal 5:high–1:low
27 Capacity of the quality control team 1.8 Ordinal 5:high–1:low
28 Capacity of the quality control team 1.41 Ordinal 5:high–1:low
29 Capacity of client 2.53 Ordinal 5:high–1:low
30 Level of construction difficulty 1.55 Ordinal 5:high–1:low
31 Level of competition 1.64 Ordinal 5:high–1:low
32 Contingency level 2.73 Ordinal 5:high–0:none
Each column of the matrix stands for a project, and each cell contains the
recorded value of each factor within a specific project (Landauer et al. 2007). The
developed m (number of factors) by n (number of projects) matrix contains zero and
nonzero elements. Generally, a weighting function is applied to the nonzero elements
to give lower weights to high-frequency factors that occur in many projects and
higher weights to factors that occur in some projects but not all (Salton and Buckley
1991). Second, SVD is applied to the developed matrix to achieve an equivalent
representation in a smaller-dimension space (Choi et al. 2001). With SVD, a
rectangular matrix is decomposed into the product of three other matrices (Figure 1).
One component matrix describes the original row entities as vectors of derived
orthogonal factor values, another describes the original column entities in the same
way, and the third is a diagonal matrix containing scaling values such that when the
three components are matrix-multiplied, the original matrix is reconstructed
(Hofmann 1999). Third, the number of factors adopted for analysis is determined
(truncation). Since the singular value matrix is organized in descending order based
on the weight of each term, it is easy to decide on a threshold singular value below
which term significance is negligible (Figure 2) (Dumais 1991). For an original
matrix A, a truncated matrix Ak of rank k can be formulated by the product
illustrated in equation 2:
Ak = Uk Sk VkT    eq. 2
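The truncation step can be sketched with NumPy. The matrix values below are invented for illustration; in the paper the rows would be the 32 factors and the columns the 89 projects:

```python
import numpy as np

# Hypothetical factor-by-project matrix: m = 4 factors, n = 5 projects.
A = np.array([[5., 3., 0., 4., 1.],
              [2., 0., 3., 1., 0.],
              [0., 4., 4., 0., 2.],
              [1., 1., 0., 2., 3.]])

# SVD: A = U @ diag(s) @ Vt, singular values returned in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the k largest singular values to obtain the truncated
# feature space A_k = U_k S_k V_k^T.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# A_k is the best rank-k approximation of A in the least-squares sense;
# its columns are the project vectors in the truncated feature space.
```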
The weights of the factors are computed over the collection of projects. By default,
only a local weight is assigned; this is simply the frequency with which the factor
appears in a project. The algorithm implements two thresholds for factor
frequencies: global and local. Next, the weights of the factors are computed; each
factor weight is the product of a local weight and a global weight. Next, the
algorithm creates the final factor-project matrix, and finally performs the SVD
decomposition. To that end, five truncated feature spaces were generated with k sizes
of 5, 10, 15, 20, and 25. Each truncated feature space was generated with a Log local
threshold function and an Entropy global threshold function. The Log function
(equation 3) decreases the effect of large differences in factor frequencies (Landauer
et al. 2007). The Entropy function (equation 4), on the other hand, assigns lower
weights to factors repeated frequently over the entire project collection, while taking
into consideration the distribution of each factor’s frequency over the projects
(Landauer et al. 2007). These thresholds were adopted for the current analysis due to
their success over other threshold combinations in earlier research by the author
(Mahfouz 2009).
local(i, j) = log(tfij + 1)    eq. 3
global(i) = 1 + [ Σj (tfij / gfi) · log(tfij / gfi) ] / log(n)    eq. 4
where tfij is the factor frequency of factor i in project j, and gfi is the total number of
times that factor i appears in the entire collection of n projects.
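The log (local) and entropy (global) weighting described above can be sketched in pure Python; the matrix layout follows the factor-by-project convention, and the function name is illustrative:

```python
import math

def log_entropy_weights(tf):
    """Apply log (local) and entropy (global) weighting to a
    factor-by-project frequency matrix tf, where tf[i][j] is the
    frequency of factor i in project j."""
    n = len(tf[0])                      # number of projects
    weighted = []
    for row in tf:
        gf = sum(row)                   # total frequency of this factor
        if gf == 0:
            weighted.append([0.0] * n)
            continue
        # Entropy global weight: 1 + sum_j (p_ij log p_ij) / log n.
        h = sum((f / gf) * math.log(f / gf) for f in row if f > 0)
        g = 1.0 + h / math.log(n)
        # Log local weight log(tf_ij + 1), scaled by the global weight.
        weighted.append([g * math.log(f + 1.0) for f in row])
    return weighted
```

A factor that appears uniformly in every project receives a global weight near zero, while a factor concentrated in few projects keeps a weight near one, which is the discriminative behavior the paper relies on.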
Model Testing and Validation
The developed LSA models were tested and validated based on correctly
predicting the % error of newly introduced projects that were not utilized in
developing the models. A C++ algorithm was developed to perform the validation in
four steps. First, each project in the feature space is tagged with its % error; the
algorithm iterates sequentially through the projects, storing each project number and
its corresponding % error. Second, the LSA algorithm is used to extract the set of
projects closest to the newly tested one. A similarity threshold of 95% is applied; in
other words, any project retrieved with a similarity measure of less than 0.95 is
disregarded. The algorithm retrieves each project together with its similarity
measure. Third, the algorithm reads through the project numbers attained from the
LSA implementation and retrieves the % error of each project. Fourth, it reports the
% error of the newly tested project in two ways: first as the most repeated % error,
and second as a weighted average of the retrieved % errors. The reported outputs are
compared against manual tagging of the newly tested projects to decide on the most
accurate method.
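The two reporting schemes in the fourth step can be sketched as follows; the function and variable names are illustrative, not the author's C++ implementation:

```python
from collections import Counter

def predict_pct_error(similarities, errors, threshold=0.95):
    """Predict a new project's % error from its LSA-retrieved
    neighbours: similarities[i] is the similarity of retrieved project
    i, errors[i] its known % error. Neighbours below the similarity
    threshold are disregarded."""
    kept = [(s, e) for s, e in zip(similarities, errors) if s >= threshold]
    if not kept:
        return None, None
    # Scheme 1: the most repeated % error among retrieved projects.
    most_repeated = Counter(e for _, e in kept).most_common(1)[0][0]
    # Scheme 2: a similarity-weighted average of the retrieved % errors.
    weighted_avg = sum(s * e for s, e in kept) / sum(s for s, _ in kept)
    return most_repeated, weighted_avg
```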
RESULTS AND DISCUSSION
The results of implementing the aforementioned methodology are illustrated
in Table 3. The training and testing of the developed models were performed using
10-fold cross-validation: in each step the models were trained on 90% of the projects
and tested on the remaining 10%, and the process was repeated iteratively until the
models had been trained and tested on all projects. In other words, each step utilized
80 and 9 projects for training and testing respectively. The results reported in Table 3
are the averages of the difference in % error over all 10 folds.
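The 10-fold scheme can be sketched as follows; with 89 projects, nine folds hold out 9 projects and one holds out 8, matching the 80/9 split mentioned above (the function name is illustrative):

```python
def k_fold_splits(n_projects=89, folds=10):
    """Return (train, test) index lists for k-fold cross-validation:
    each project is held out for testing exactly once while the
    remaining projects are used for training."""
    idx = list(range(n_projects))
    base, extra = divmod(n_projects, folds)   # 8 remainder 9 for 89/10
    splits, start = [], 0
    for f in range(folds):
        size = base + (1 if f < extra else 0)
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        splits.append((train, test))
        start += size
    return splits
```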
A closer look at the results shows the following:
- All models attained an average % error difference of 6% or less.
- Generally, the weighted average validation scheme attained better results than the
most repeated scheme.
- The best results under both validation schemes were achieved using a truncated
feature space of size 20. This is supported by the reported advancements shown in
Figure 3.
- The developed model is suitable for evaluating conceptual cost estimates of
construction projects with various complexity levels. This can be attributed to two
factors. First, the model was tested and validated using projects performed at
different locations around the world. Second, project complexity was captured in the
analysis through proxy factors such as (30) level of construction difficulty, (18)
estimator’s career experience, (19) estimator’s field work experience, (20)
estimator’s experience with similar projects, (21) estimator’s experience with field
work in similar projects, and (22) capacity of the estimating team.
Table 3. Average % Error Difference
Truncated Space Size Scheme 1 (Most Repeated) Scheme 2 (Weighted Average)
5 5.8 6
10 4 3.4
15 3.1 2.8
20 2.2 1.6
25 4 5.3
ABSTRACT
Designers and managers of buildings and other constructed facilities cannot easily
quantify the sustainability impacts of structures for improved analysis, management,
or decision-making. This is due in part to the lack of interoperability between design
and analysis software and datasets that enable full life cycle assessment (LCA) of
constructed facilities. This work develops a computational framework to enable
building designers, engineers, contractors, and managers to reliably and efficiently
construct dynamic life cycle models that capture environmental impacts associated
with every life cycle phase. The framework integrates 3D architectural tools,
structural software, and virtual design and construction packages; use phase impacts
can be quantified using distributed sensor networks. This integration provides a dynamic LCA
modeling platform for management of facility footprints in real-time during
construction and use phases, offering unique analysis opportunities to examine the
tradeoffs between design and construction/operation decisions.
INTRODUCTION
The built environment creates significant environmental, economic, and social
impacts. These occur throughout the life cycle of constructed facilities from raw
material acquisition, through construction and use, to demolition and disposal. The
commercial and industrial sectors consume approximately 40% of energy produced in
the US and contribute close to 40% of greenhouse gas emissions, while contributing
to acidification, eutrophication, and smog (EIA, 2010). This represents an opportunity
for improvement; yet presently, few studies exist, little information is available, and
no tools are in use for measuring the distribution of energy consumption and
environmental impacts among the life cycle phases of constructed facilities. Methods
are needed to accurately assess, manage, and control consumption and emissions
starting from early design and continuing through the facility life cycle.
While no tools are available for environmental impact control and process
monitoring, highly developed economic cost controls form the foundation of current
construction management and operation practices. These controls allow construction
managers and facility owners to compare accrued costs with estimates derived from
design documents (e.g., drawings, contract specifications) and predictive performance
models (e.g., building energy models). Variance from budgeted costs or schedules,
METHODS
The objective of this research is to create a computational framework linking LCA
and BIM, facilitating the adoption of widely accepted variance-control construction
management techniques to manage and reduce the construction and operation
environmental impacts of facilities. The proposed architecture is shown in Figure 1.
For each entry in the CSI Code Array and the Crew Array, a life cycle inventory
identifier (LCI No.) is listed that links to a material or process within existing life
cycle inventory datasets. In the case of cobble, the corresponding LCI database
identifier is EIN_UNIT06567700467, corresponding to “Gravel or Rock” in the
Ecoinvent life cycle inventory database. From the Ecoinvent database, the impacts of
producing 1 metric ton of cobble in terms of global warming, energy resources,
acidification, eutrophication, and carcinogens are found to be 1.7 kg CO2e, 25.6 MJ
LHV, 0.04 kg SO2, 0.01 kg PO4, and negligible amounts of B(a)P, respectively.
Quantities for each material or piece of equipment used are cascaded down from the
User Interface to compute the total impacts based on LCI material and process data.
Total impact results for each work item performed are then displayed on the User
Interface.
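The cascading step can be illustrated with a minimal sketch. The cobble impact factors are taken from the text; the dictionary layout and function name are assumptions for illustration, not the platform's actual data model:

```python
# Per-tonne impact factors keyed by LCI identifier (cobble values from
# the text; further entries would come from the Ecoinvent dataset).
LCI = {
    "EIN_UNIT06567700467": {        # "Gravel or Rock" (cobble)
        "GWP_kgCO2e": 1.7,
        "Energy_MJ_LHV": 25.6,
        "Acidification_kgSO2": 0.04,
        "Eutrophication_kgPO4": 0.01,
    },
}

def total_impacts(quantities):
    """Cascade material quantities (tonnes, keyed by LCI identifier)
    down to summed totals per impact category."""
    totals = {}
    for lci_id, qty in quantities.items():
        for category, per_unit in LCI[lci_id].items():
            totals[category] = totals.get(category, 0.0) + qty * per_unit
    return totals
```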
Analogous to the development of a time-dependent construction and facility
operation budget which is made by tying construction activity costs to construction
activity schedules, the impact results from the integrated BIM-LCA model are used to
model the accrual of environmental footprint (e.g., CO2e, SOx, etc.) over the course
of construction. When combined with a use phase model, a budget of environmental
footprint accrued over the life cycle of a constructed facility is developed. Figure 2
shows the GWP footprint accrual for a hypothetical constructed facility over time.
Here, the ordinate axis shows GWP in terms of percent of total CO2e emissions
over facility life and the abscissa measures time. The projected lifetime is 30 years
with 10% of emissions from construction and 90% from use. The four lines (A, B, C,
D) shown in Figure 2 correspond to expected and measured time dependent impact
accrual. Line A represents forecasted CO2e impact accrued during construction. Line
B represents actual impact realized during construction. Line C represents projected
impact during use. Line D represents actual impact accrued during use. The dashed
vertical line marks the end of construction. Implementing common cost variance
control management techniques, this type of figure can be used to measure
construction processes and facility operations in real-time to better ensure that
environmental performance targets are met. Lines A and B are analogous to cost
variance figures used today in the construction industry for project cost management.
Line A is created using integrated BIM-LCA by associating the impacts of each
construction activity with the construction schedule. Line B is based on measured
construction emissions using information on the actual type of equipment used, the
hours it is operated, and actual material production impacts accounting for
construction change orders. Line C is created from use phase models such as eQuest
or EnergyPlus that predict energy consumption. Line D is created by monitoring the
actual accrual of environmental impacts. Sensing technologies and networks are
becoming more widespread in constructed facilities and can provide real-time
information on temperature, humidity, lighting levels, and energy consumption, and
can be used to calculate environmental impact over the use phase.
CASE STUDY
Michigan Department of Transportation Project BHT 9903002, a bridge
rehabilitation project in southeast Michigan, was selected as a case study of the LCA-
BIM integration platform. The project included activities of (1) concrete deck
hydrodemolition, (2) placement of concrete overlay, (3) strengthening of steel girders
by adding plate steel, (4) replacement of guardrail, (5) asphalt paving, (6) epoxy
painting, (7) excavation, (8) removal of old drainage structures, and (9) installation of
replacement drainage structures. The construction material quantities are listed in
Table 1. Based on crew productivity, a construction schedule was also constructed.
Table 1. Materials and Quantity Takeoffs for BHT 9903002
Material Unit Quantity Material Unit Quantity
Bitumen kg 68,025 Reinforcing Steel kg 7,929
Concrete m3 146 Riprap tonne 1,000
Epoxy Coating kg 8,646 Sand kg 768,800
Formwork (plywood) kg 548 Structural Steel kg 15,416
Grout kg 194,580 Timber m3 0.3
Iron, Sand Casted kg 844 Water kg 2,616,933
PVC pipe kg 63,950
RESULTS
The total impact of the designed work, in terms of life cycle GWP and energy
consumption, was 4.4x106 CO2e and 7.2x107 MJ (lower heating value), respectively.
Based on MasterFormat work codes, project impacts were broken down into
construction activities so that impacts can be associated with specific tasks. This
allows construction managers to pinpoint sources of impact and focus process
improvements. The breakdown of activities and impacts is shown in Table 2.
Table 2. Construction Activities for BHT 9903002 and Associated Impacts
Construction GWP Energy Acidification Eutrophication Carcinogens
Activity (CO2e) (MJ LHV) (kg SO2) (kg PO4) (kg B(a)P)
Hydrodemolition 3.7x106 5.9x107 4.8x104 8.5x103 7.6x10-2
Excavation 5.7x103 1.1x105 7.6x101 1.4x101 1.8x10-3
Drain Structure 1.8x105 4.3x106 9.2x102 7.5x101 2.2x10-4
Structural Steel 4.4x104 7.1x105 3.1x102 9.1x101 1.8x10-2
Concrete Overlay 3.3x105 1.7x106 1.6x103 1.6x102 6.6x10-3
Epoxy Painting 5.6x104 1.2x106 1.9x102 3.0x101 3.0x10-3
Curb and Gutter 3.1x102 1.9x103 6.9x10-1 1.5x10-1 9.2x10-6
Paving 3.3x104 3.7x106 4.9x102 3.6x100 7.1x10-4
Guardrail 1.2x104 2.1x105 7.6x101 2.6x101 4.6x10-3
Totals 4.4x106 7.2x107 5.2x104 8.9x103 1.1x10-1
By linking the activities shown in Table 2 with the construction schedule, the accrual
of environmental impacts can be plotted versus percent project completion (Figure
3). This time-dependent “budget” for environmental impacts (Line A described in
the Methods section) can be used to guide project management during the
construction phase when compared against the actual accrual of environmental
impacts.
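Line A is, in effect, a cumulative sum of per-activity impacts ordered by scheduled completion; a minimal sketch with invented activity data:

```python
def impact_accrual(schedule):
    """Build a cumulative environmental-footprint budget by tying each
    activity's impact to its scheduled finish. `schedule` is a list of
    (finish_day, impact) tuples; returns (day, cumulative_impact)
    points suitable for plotting against time or % completion."""
    points, running = [], 0.0
    for day, impact in sorted(schedule):
        running += impact
        points.append((day, running))
    return points

# Hypothetical mini-schedule: three activities with GWP impacts (kg CO2e).
budget = impact_accrual([(30, 5.0e4), (10, 3.3e5), (45, 1.2e4)])
```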
CONCLUSION
This paper presents a novel framework for managing environmental impacts of a
constructed facility throughout its life cycle. Coupling environmental impacts of
construction activities determined using life cycle assessment with construction
schedules can successfully produce time-dependent environmental impact budgets
that form the basis for management of variance between predicted environmental
impacts and actual environmental impacts of constructed facilities. This is
demonstrated using a case study of a simple bridge reconstruction.
The framework produces environmental impact accrual timelines facilitating
improved sustainability-oriented project management during the construction and use
of a constructed facility and offers unique analysis opportunities to examine the
managerial tradeoffs between design and construction/operational decisions. It
enables designers, contractors, and engineers to methodically manage designed and
actual environmental impacts and make more informed decisions throughout the
facility life cycle. Further, it pushes widespread adoption of building information
models and life cycle assessment tools by making their collective use more valuable.
Future research in this area will develop tools necessary to track actual environmental
emissions realized onsite using existing cost management procedures and tracking of
change orders, predict use phase facility energy and material consumption, and
monitor actual facility use phase energy and material consumption.
ACKNOWLEDGEMENTS
The authors would like to thank the Stanford Center for Integrated Facility
Engineering, the National Science Foundation Graduate Fellowship, the National
Defense Science and Engineering Graduate Fellowship, and the Stanford Terman
Faculty Fellowship for their generous financial support in completing this work.
REFERENCES
AIA (2007). “Integrated Project Delivery: A Guide, v1.” Retrieved 2 Dec 2010.
Eastman, C. (1999). Building Product Models: Computer Environments Supporting
Design and Construction, CRC Press, Boca Raton, FL.
Eastman, C., Teicholz, P., Sacks, R., Liston, K. (2008). BIM Handbook: A Guide to
BIM for Owners, Managers, Designers, Wiley, Hoboken, NJ.
EIA (2010). Annual Energy Review 2009, Technical report, US DOE.
Finnveden, G., Hauschild, M., Ekvall, T., Guinée, J., Heijungs, R., Hellweg, S.,
Koehler, A., Pennington, D., Suh, S. (2009). “Recent Developments in Life Cycle
Assessment.” J. Envir. Man., 91(1), 1-21.
Fischer, M., Hartmann, T., Rank, E., Neuberg, F., Schreyer, M., Liston K., Kunz J.
(2004). “Combining different project modelling approaches for effective support
of multi-disciplinary engineering tasks.” In: P. Brandon, H. Li, N. Shaffii and Q.
Shen, Editors, Int. Conf. on Infor. Tech. in Design and Construction (INCITE
2004), Langkawi, Malaysia, 167–182.
Gu, D., Zhu, Y., Gu, L. (2006). “Life cycle assessment for China building
environment impacts.” J. Tsinghua University, 46(12), 1953–1956.
Häkkinen, T., Kiviniemi, A., (2008). “Sustainable Building and BIM.” In proceedings
of World Sustainable Building Conference (SB08), Melbourne, Australia.
Junnila, S., Horvath, A., Guggemos, A. (2006). “Life Cycle Assessment of Office
Building in Europe and the United States.” J. Infra. Sys., 12(1), 10-17.
Keoleian, G., Blanchard, S., Reppe, P. (2001). “Life Cycle Energy, Costs, and
Strategies for Improving a Single Family House.” J. Indust. Ecol., 4(2), 135-156.
Khasreen, M., Banfill, P., Menzies, G. (2009). “Life cycle assessment and the
Environmental Impact of Buildings: A Review.” Sustainability, 1(3), 674-701.
Loh, E., Dawood, N., Dean, J. (2007). “Integration of 3D Tool with Environmental
Impact Assessment (3D EIA).” In proceedings of Int. Conf. of Arab Soc. for
Computer Aided Arch. Design (ASCAAD 2007), Alexandria, Egypt.
Ma, Z., Zhao, Y. (2008). “Model of Next Generation Energy-Efficient Design
Software for Buildings.” Tsinghua Sci Technol., 13(S1), 298-304.
Ochoa, L., Hendrickson, C., Matthews, H. (2002). “Economic input-output life-cycle
assessment of U.S. residential buildings.” J. Infra. Sys., 8(4), 132-138.
RS Means (2010). “Facilities Construction Cost Data.” RS Means, Kingston, MA.
Sartori, I., Hestnes, A. (2007). “Energy use in the life cycle of conventional and low-
energy buildings: A review article.” Energy and Buildings, 39(3), 249-257.
Scheuer, C., Keoleian, G., Reppe, P. (2003). “Life Cycle Energy and Environmental
Performance of a New University Building: Modeling Challenges and Design
Implications.” Energy and Buildings, 35(10), 1049-1064.
Seo, S., Tucker, S., Newton, P. (2007). “Automated Material Selection and
Environmental Assessment in the Context of 3D Building Modeling.” J. Green
Bldg., 2(2), 11.
Steel, J., Drogemuller, R., Toth, B. (2010). “Model interoperability in building
information modeling.” Softwr. Sys. Model., DOI: 10.1007/s10270-010-0178-4.
Steinmann, R. (2010). “BIM and openBIM from Various Viewpoints.” Lecture.
Stanford University, Stanford, CA. 3 Nov 2010.
A Real Options Approach to Evaluating Investment in Solar Ready
Buildings
B. Ashuri1, H. Kashani2
1
Assistant Professor, School of Construction, Georgia Institute of Technology, 280 Ferst
Drive, 1st Floor, Atlanta, GA 30332-0680. Email: Baabak.Ashuri@coa.gatech.edu
2
Ph.D. Candidate, School of Construction, Georgia Institute of Technology, 280 Ferst Drive,
1st Floor, Atlanta, GA 30332-0680. Email: HammedKashani@gatech.edu
Abstract
Sustainable building technologies such as Photovoltaics (PV) have promising features
for energy saving and greenhouse gas (GHG) emissions reduction in the building
sector. Nevertheless, adopting these technologies generally requires a substantial
initial investment. Moreover, the market for these technologies is often highly
dynamic from both the technological and economic standpoints. Investors therefore
typically find it more attractive to delay investment in PV technologies. They can
instead prepare “Solar Ready Buildings” that can easily adopt PV technologies later,
when PV prices are lower, energy prices are higher, or stricter environmental
regulations are in place. In such cases, decision makers should be equipped with
proper financial valuation models in order to avoid over- and under-investment. We
apply Real Options Theory to evaluate the investment in solar ready buildings. Our
proposed investment analysis model uses the experience curve concept to model
changes in the price and efficiency of PV technologies over time. It also has an
energy price modeling component that characterizes the uncertainty about the future
retail price of energy as a stochastic process. Finally, the model incorporates
information concerning specific policy and regulatory instruments that may affect the
investment value. Using our model, investors’ financial risk profiles for investment in
a “fixed” Solar Building and a “flexible” Solar Ready Building are developed.
For solar ready buildings, the model also determines whether the PV panels should
be installed and, if so, how much should be invested. Finally, the proposed model
identifies the optimal time for installing the PV panels.
Introduction
Given the increasing scale of investments in sustainable building technologies such as
Photovoltaic (PV) panels, it is of crucial importance to offer proper financial
decision-making tools to stakeholders and decision-makers. Without a proper
methodology, there is an imminent risk that funds will be misallocated, e.g., by
choosing the wrong technologies or by timing the investment incorrectly.
Proper allocation of resources to sustainable building projects (e.g., installing solar
panels) requires an assessment of the cost and performance of proposed solutions to
establish their profitability. Metrics such as Payback Period (PP), Return on
Investment (ROI), and Net Present Value (NPV) have traditionally been applied to
measure this profitability. Of these measures, NPV is the most widely prescribed,
e.g., in ASTM E917-05 (2010) for conducting life cycle cost and benefit analysis of a
building system. Despite its popularity, NPV has serious limitations in the financial
assessment of an energy retrofit solution.
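As background, the two metrics most relevant here, simple payback period and NPV, can be sketched in a few lines of Python; the dollar figures below are illustrative assumptions, not values from the paper.

```python
def npv(cash_flows, rate):
    """Net present value of end-of-year cash flows; cash_flows[0] is the
    initial (year-0) outlay, typically negative."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Simple (undiscounted) payback period in whole years, or None if
    the outlay is never recovered."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if t > 0 and cumulative >= 0.0:
            return t
    return None

# Illustrative (assumed) numbers: a $20,000 PV installation saving
# $2,500/year over a 15-year horizon, discounted at 5%.
flows = [-20000.0] + [2500.0] * 15
print(round(npv(flows, 0.05), 2))  # positive, so a static NPV rule says invest now
print(payback_period(flows))       # 8 (years to recover the outlay)
```

The limitation discussed next is that such a computation fixes all decisions at time zero and assigns no value to waiting.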
An NPV analysis approach assumes that all decisions related to an energy investment
are made at once and are completely irrevocable. These assumptions are not
768
COMPUTING IN CIVIL ENGINEERING 769
components, which generate the information that a proper financial analysis of the
investment in solar ready buildings requires. Specifically, the model receives input
from an external Building Energy Simulation component, which is used to assess the
energy performance of the solar ready building prior to and after the installation of
the PV panels. Thus, the module determines the potential energy savings resulting
from the installation of the PV panels. An important component of our model is the
Retail Energy Price Modeling module, which generates projected future paths for the
energy price. The financial benefit of installing the PV panels is calculated based on
these energy price models. The other component is Experience Curve Modeling,
which is used to characterize how the price and efficiency of PV technologies evolve
over time. This is critical in finding the optimal investment time for a proposed
energy retrofit. The modeling process is described in the following sections.
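The Experience Curve Modeling component can be illustrated with the common one-factor learning curve, in which unit cost falls to a fixed “progress ratio” of its previous value each time cumulative capacity doubles; the sketch below assumes that form, and the numbers are hypothetical.

```python
import math

def experience_curve_cost(c0, x0, x, progress_ratio):
    """One-factor learning curve: C(x) = C0 * (x / x0) ** (-b), where
    b = -log2(progress_ratio), so cost falls to `progress_ratio` of its
    previous value with each doubling of cumulative capacity x."""
    b = -math.log2(progress_ratio)
    return c0 * (x / x0) ** (-b)

# Hypothetical inputs: $4.0/W module cost at 10 GW cumulative capacity,
# 80% progress ratio. Two doublings (10 -> 40 GW) give 4.0 * 0.8 * 0.8.
print(round(experience_curve_cost(4.0, 10.0, 40.0, 0.80), 2))  # 2.56
```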
Building Energy Simulation: Characterize Energy Savings Performance
The Building Energy Simulation component explicitly addresses the determination of
the energy savings performance of PV panels. The analysis first quantifies the
performance of the solar ready building prior to the installation of the PV panels,
considering a variety of factors, including the meteorological, urban, and microclimate
effects related to the environmental conditions around the building. Next, the
simulation model quantifies the expected level of energy savings in the building
following the installation of the solar panels. A detailed discussion of the
implementation of building performance simulation is beyond the scope of this paper.
Our financial analysis uses only the expected energy consumption of the solar ready
building prior to the installation of the solar panels and after their potential
installation as the essential inputs.
Retail Energy Price Modeling: Create a Stochastic Model for Energy Price
Retail Energy Price Modeling explicitly addresses uncertainty about the energy price
as a major benefit driver of an energy retrofit investment. The financial benefits of
energy savings depend on the price of energy in the utility retail market. Although the
average energy price rises over time, it is subject to considerable short-term variations.
A binomial lattice model (see Hull (2008) for a detailed description) can be created to
characterize the energy price uncertainty. A binomial lattice is a simple, discrete
random walk model that has been used to describe evolving uncertainty about energy
prices (Liski and Murto 2010; Ellingham and Fawcett 2006). The choice of a binomial
lattice is also consistent with the general body of knowledge in real options (Hull
2008; Luenberger 1998). In economics and finance, a binomial lattice is an
appropriate model for capturing uncertainty about a factor, such as the energy price,
that grows over time plus random noise (Dixit and Pindyck 1994).
Binomial Lattice Model
To define a binomial lattice (Figure 2) for the energy price (S), consider a basic short
period of length ∆t. Suppose the current energy price is S0. The energy price in the
next period takes one of only two possible values, u×S0 or d×S0, where u and d are
positive rates with u>1 and d<1. The probabilities of upward and downward
movements are p and 1-p, respectively. This variation pattern continues for
subsequent periods until the end of the investment time horizon. The binomial lattice
parameters can be determined from the expected annual growth rate of the
energy price (α) and the annual volatility of the energy price (σ), using the
formulation of Hull (2008). This binomial lattice can then be used to generate future
price paths.
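One standard way to obtain the lattice parameters from α and σ, consistent with the treatment in Hull (2008), is the Cox-Ross-Rubinstein form sketched below; the numeric inputs are assumptions for illustration.

```python
import math

def lattice_parameters(alpha, sigma, dt):
    """Cox-Ross-Rubinstein-style binomial lattice parameters: u and d
    match the volatility sigma, and p reproduces the expected growth
    rate alpha over one step of length dt."""
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(alpha * dt) - d) / (u - d)
    return u, d, p

# Assumed inputs: 3%/yr expected growth in the retail electricity price,
# 10%/yr volatility, annual steps.
u, d, p = lattice_parameters(alpha=0.03, sigma=0.10, dt=1.0)
print(round(u, 4), round(d, 4), round(p, 4))
```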
Monte Carlo Simulation
Next, the Monte Carlo simulation technique can be applied to generate several random
paths for the energy price S, from the start to the end of the investment time horizon,
based on the described binomial lattice. Under the binomial lattice formulation, the
energy price in any period is a random variable that follows a discrete binomial
distribution; this is the basis for applying Monte Carlo simulation to generate a large
number of random energy price paths along the investment time horizon (Figure 1).
The random energy price paths are used to compute the corresponding energy savings
series. In addition to the benefits, how the initial cost of the PV panels changes over
time must be specified in order to determine when it is optimal to invest. This is
discussed in the following section.
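The path-generation step described above can be sketched as follows; the parameter values and the displaced-energy figure are illustrative assumptions, not taken from the paper.

```python
import random

def simulate_price_paths(s0, u, d, p, n_steps, n_paths, seed=0):
    """Monte Carlo sampling of the binomial lattice: at each step the
    price moves to u*S with probability p, otherwise to d*S."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        s, path = s0, [s0]
        for _ in range(n_steps):
            s *= u if rng.random() < p else d
            path.append(s)
        paths.append(path)
    return paths

# Assumed inputs: $0.12/kWh starting retail price, lattice factors from
# the previous step, a 25-year horizon, 1,000 sample paths.
paths = simulate_price_paths(s0=0.12, u=1.1052, d=0.9048, p=0.627,
                             n_steps=25, n_paths=1000)
# The energy savings series on path i would then be, for example,
# savings_t = paths[i][t] * kwh_displaced, with kwh_displaced assumed.
```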
Through what-if analyses, the impact of the regulatory conditions on the investment
timing for an energy retrofit solution can be evaluated.
Figure 5: (a) optimal retail price of electricity ($/kWh) triggering the installation of
solar panels; (b) installation likelihood of PV panels over the house service life;
(c) NPV distribution of the solar ready home; (d) NPV cumulative distribution
functions (CDFs) of the solar house and the solar ready building, and the price of
flexibility
Conclusion
Better investment decision models can facilitate energy savings in buildings by
increasing the efficiency and effectiveness of investments in energy efficiency
measures. The proposed investment analysis framework for evaluating investment in
solar ready buildings will enlighten investors about the economic inefficiencies that
conventional fixed energy investment strategies produce and facilitate the valuation
of flexible solutions that mitigate such inefficiencies. Explicit pricing of flexibility is
significant for systematic decision-making beyond the current energy target; the
options embedded in delayed retrofit solutions reflect the possibility of meeting
stricter future targets and preparing for future upgrades.
The proposed investment framework can be used as a decision-making instrument for
examining different scenarios of technology and market development and deciding
between immediate and delayed investment in PV technologies. Thus, it can also
become an instrument for selecting the right government incentives over time. As a
corollary, the methodology can be used to single out the technologies that are ripe in
the expected market of competing sustainable technologies.
References
Ashuri, B., Kashani, H., Molenaar, K., and Lee, S. (2010). "A Valuation Model for Choosing
the Optimal Minimum Traffic Guarantee (MTG) in a Highway Project: A Real-Option
Approach " Proceedings of the 2010 Construction Research Congress, Canada.
ASTM (2010) Standard Practice for Measuring LCC of Buildings & Building Systems.
Borison, A. (2005) "Where Are the Emperor's Clothes?" J. App. Corp. Fin., 17(2), pp. 17-31.
Crawley, D.B. (2007) “Creating Weather Files for Climate Change and Urbanization Impacts
Analysis”. Proceedings of the 10th International IBPSA Conference, Beijing, China.
Dixit, A., and Pindyck, R. (1994) Investment Under Uncertainty, Princeton University Press.
Draper, N.R. and Smith, H., (1998) Applied Regression Analysis, 3rd ed., Wiley, New York.
Ellingham, I., and Fawcett, W. (2006) New Generation Whole-life Costing: Property and
Construction Decision-making Under Uncertainty, Taylor & Francis, New York, NY.
Greden, L. V., Glicksman, L. R., and Lopez-Betanzos, G. (2006) "A Real Options
Methodology for Evaluating Risk and Opportunity of Natural Ventilation." J. Sol. Energ.
Eng., 128(2), pp. 204-212.
Greden, L., and Glicksman, L. (2005) "A Real Options Model for Valuing Flexible Space."
Journal of Corporate Real Estate, 7(1), pp. 34-48.
Gritsevskyi, A. and Nakicenovic, N. (2000) “Modeling Uncertainty of Induced Technological
Change”, Energy Policy, 28, pp. 907–921.
Hartley, P., Medlock, K. B., Temzelides, T. and Zhang, X. (2010) "Innovation, Renewable
Energy, and Macroeconomic Growth".
Hopfe, C., Augenbroe, G., Hensen, J., Wijsman, A. and Plokker, W. (2009) "The impact of
climate scenarios on decision making in building performance simulation: a case study".
Hu, H. (2009) Risk-Conscious Design of a Zero Energy House. Ph.D. dissertation, Georgia Institute of Technology.
Hull, J. C. (2008) Options, Futures, and Other Derivatives, Prentice Hall, New Jersey.
Luenberger, D. G. (1998). Investment Science, Oxford University Press, New York.
McGraw-Hill Construction (2010) “Green Building Retrofit & Renovation”.
Morris, M.D. (1991) "Factorial Sampling Plans for Preliminary Computational
Experiments," Technometrics, 33(2), pp. 161-174.
Robinson D., Campbell N., Gaiser W., Kabele K., Le-Mouel A., Morel N., Page J.,
Stankovic, S., and Stone, A. (2007). “Suntool - A New Modeling Paradigm for Simulating
and Optimizing Urban Sustainability”, Solar Energy, 81(9), pp. 1196-1211.
Rye, C. (2008). “Solar-Ready Buildings.” Solar Power Authority the Dirt on Clean, 2008.
Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M., and
Tarantola, S. (2008). Global Sensitivity Analysis: The Primer. John Wiley & Sons,
Chichester.
SBI Energy (2010) “Global Green Building Materials and Construction”, 2nd Edition.
van Sark, W. G. J. H. M., Alsema, E. A., Junginger, H. M., de Moor, H. H. C., and
Schaeffer, G. J. (2008) "Accuracy of Progress Ratios Determined From Experience Curves:
The Case of Crystalline Silicon Photovoltaic Module Technology Development” Progress in
Photovoltaics: Research and Applications, (16), pp. 441–453.
Weiss, M., Junginger, H. M., Patel, M.K. and Blok, K. (2010) “Analyzing Price and
Efficiency Dynamics of Large Appliances with the Experience Curve Approach”, Energy
Policy, 38, pp.770–783.
Yeh, S., Rubin S., Hounshell, D.A. Taylor, M. R. (2009) “Uncertainties in Technology
Experience Curves for Integrated Assessment Models”, available at:
http://repository.cmu.edu/epp/77
Yu, C.F., van Sark, W.G.J.H.M., Alsema, E.A. (2010) Unraveling the photovoltaic
technology learning curve by incorporation of input price changes and scale effects,
Renewable and Sustainable Energy Reviews, Article in Press.
Agile IPD Production Plans as an Engine of Process Change
ABSTRACT
The design and construction industry faces continuous pressure to
reduce time to market through fast-track projects. Projects engage large
multidisciplinary teams that interact and impact each other's solutions. The
integrated project development (IPD) process represents an improvement over the
waterfall process. Nevertheless, ethnographic observations show that state-of-practice
IPD processes still lead to significant rework and coordination effort. The aim is to
improve the IPD team process in order to reduce rework, coordination, the number of
iterative design cycles, and the length of each design iteration cycle. Today, IPD is
achieved through co-creation of production plans that enable participants to explicitly
represent their tasks and workflow. This paper presents an approach for agile IPD
production plans that extends this state of practice by modeling information about
task interdependence types, together with a process for making timely and explicit
decisions on when and how to form subgroups that engage in sprints to address
reciprocal task interdependencies.
INTRODUCTION
The design and construction industry faces continuous pressure to
reduce time to market through fast-track projects. Complex building projects engage
large teams of stakeholders from diverse trades that interact and impact each other's
decisions and solutions. The integrated project development (IPD) process is
becoming increasingly central to large, complex building projects and their large
stakeholder teams [AIA 2007]. IPD represents a significant improvement over the
sequential waterfall project process. Nevertheless, ethnographic observations show
that state-of-practice IPD processes still lead to significant rework and coordination
effort, especially in cases of reciprocal task interdependence. This paper presents
initial results of an ongoing project that aims to formalize, develop, deploy, and
assess an agile IPD process that extends the state-of-practice IPD. The aim is to
improve the IPD team process in order to reduce rework, coordination, the number of
iterative design cycles, and the length of each design iteration cycle.
Corporate experience indicates that the most efficient and effective interactions
occur when all project stakeholders are in face-to-face, collocated team environments
such as the Jet Propulsion Lab Integrated Concurrent Engineering (ICE) space
[Chachere, Kunz, Levitt 2004], the iRoom at CIFE, or the Big Room at DPR or
Turner Construction. This is extreme collaboration, in which people, content, models,
activities, and processes are collocated. Nevertheless, the AEC industry experiences a
continuous increase in mobility, geographic distribution of project stakeholders,
collaboration technologies, digital content, interactivity, and convergence of physical
and virtual workplaces.
People and knowledge represent corporations' strategic assets. Knowledge
workers in the AEC industry are challenged to engage in today's competitive markets,
in which corporate objectives are to significantly reduce (by up to 50%) project
duration, travel budgets, work space, and personnel, as well as to significantly
increase productivity (by 50%) while maintaining high product quality. Fast-track
projects lead to overlapping interdependent tasks and consequently generate large,
unanticipated volumes of coordination and rework [Levitt and Kunz, 2002]. Such
rework is not planned for and is hard to track, manage, and acknowledge. Managers
assign resources only to direct tasks and work. This can lead to stress, underestimated
scope and scale of coordination, and unrealistic schedules with heroic attempts to
meet deadlines.
Today, IPD is achieved through co-creation of production plans that enable
participants to explicitly represent their tasks and workflow. This paper presents an
approach for agile IPD production plans that extends this state of practice by
modeling information about the task interdependence type, together with a process
for making timely and explicit decisions on when and how to form subgroups that
engage in sprints to address reciprocal task interdependencies.
new state of the art hospital that will meet California’s hospital seismic safety law,
SB1953, passed in 1994. The deadline for complying with SB1953 is 2013. Sutter
Health looked for new ways to transform and improve the design and construction
delivery process with an accelerated schedule that is 30% faster than the traditional
design-bid-build process. They apply lean construction principles, the IPD process,
and BIM technology. In addition, they engaged from day one a core multidisciplinary
team of 10 stakeholders that included the owner Sutter Health, architect –
Devenney Group, structural design – TMAD-Taylor & Gaines, general contractor
DPR Construction, mechanical and plumbing design – Capital Engineering, electrical
design – The Engineering Enterprise, mechanical design-assist and construction –
Superior Air Handling, plumbing design-assist and construction – JW Meadows,
electrical design-assist and construction – Morrow Meadows, fire protection –
Transbay Fire Protection, lean BIM project integration – GHAFARI Assoc. All core
team members are geographically distributed in California, Arizona, and Michigan.
Most of them travel to the Castro Valley project site weekly or biweekly for a
collocated project process coordination and 3D CAD/BIM integration meeting in
their Big Room, where they use two SmartBoards to view floor plans and the
integrated NavisWorks model with clash detection. These face-to-face meetings allow participants
to identify problem areas in the building through cross-disciplinary reviews in
NavisWorks. There are typically two dozen participants in the Big Room who engage
in the cross-disciplinary review process. Team members who are not present but
need to address an issue often connect to the Big Room via GoToMeeting. The review
process typically consisted of visual inspection of each room in the BIM model,
which was operated by a GHAFARI participant. At times, team members would
approach the displays to point at or annotate the model as they discussed issues. The
accuracy of the model-based approach facilitates cost estimating and code checking.
The team uses a
technique developed by Toyota called Value Stream Mapping (VSM) that enables
them to create a representation of the production plan workflow that they regularly
evaluate. The ethnographic observations indicate that:
The IPD production plan workflow description does not distinguish among task
interdependence types. However, different types of task interdependencies have
different levels of uncertainty and require corresponding coordination
mechanisms. Thompson distinguished three types of task interdependence, pooled,
sequential, and reciprocal [Thompson 1967], and attributed a type of task-actor
coordination to each (Table 1). The types of task interdependence depend on the
degree or intensity of interaction. Thompson suggests that activities and actors that
need intense interaction should be placed near each other spatially and
organizationally.
TESTBED
We used the AEC Global Teamwork course as a testbed. The course offers a
project-based learning (PBL) experience focused on problem-based, project-organized
activities that produce a product for a client; re-engineered processes that bring
together people from multiple disciplines; and faculty, practitioners, and students
from different disciplines who are geographically distributed. It has been offered
annually, January through May, since 1992. It engages architecture, structural
engineering, and construction management students from universities in the US,
Europe, and Asia [Fruchter 1999, 2006]. The AEC student teams work on a
university building project.
The project specifications include: (1) building program requirements for a 30,000
sq ft university building; (2) a university campus site that provides local conditions
and challenges for all disciplines, e.g., local architectural style, climate and
environmental constraints, earthquake, wind, and snow loads, flooding zones, access
roads, and local material and labor costs; (3) a budget for the construction of the
building; and (4) a timeframe for construction and delivery. The project progresses
from conceptual design in Winter Quarter to 3D and 4D CAD models of the building
and a final report in Spring Quarter. The teams experience a fast-track project process
with intermediate milestones and deliverables. They interact with industry mentors
who critique their work and provide constructive feedback.
All AEC teams hold weekly two-hour project review sessions similar to
typical building projects in the real world. During these sessions they present their
concepts, explain, clarify, question these concepts, identify and solve problems,
negotiate and decide on changes and next steps. The interaction and the dialogue
between team members during project meetings evolve from presentation mode to
inquiry, exploration, problem solving, and negotiation. Similar to the real world, the
teams have tight deadlines, engage in design reviews, negotiate and decide on
modifications. To view AEC student projects please visit the AEC Project Gallery
(http://pbl.stanford.edu/AEC%20projects/projpage.htm).
schedule the sprint, as well as the deliverable of the sprint. The structure of
the task list is further extended to include the rubrics Subgroup Members – Date for
Subgroup Sprint – Deliverable(s) (Figure 1). The agile IPD production plan is the
result of integrating the Task List, Production Plan, and Task Interdependence Types
(i.e., pooled, sequential, and reciprocal). State-of-the-art approaches and systems (e.g.,
SPS software) facilitate only the representation of production plans with sequential
task interdependencies. This leads to linear sequences of repeated workflow
segments every time there is a reciprocal task interdependence. More importantly,
sequential production plans do not highlight the need for subgroup formation to
immediately address issues triggered by such reciprocally interdependent project
tasks. The agile production plan models such workflow situations. It enables the team
to decide in a timely manner when and why to form a subgroup and engage in a
sprint to address an issue that has reciprocal task interdependence and requires close
and intense interaction among specific team members and trades.
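As an illustration of the extended task-list structure described above, the sketch below models tasks tagged with Thompson's interdependence types and flags the reciprocal ones for subgroup sprints; all task names and entries are hypothetical, not drawn from the case study.

```python
from dataclasses import dataclass, field
from enum import Enum

class Interdependence(Enum):
    POOLED = "pooled"
    SEQUENTIAL = "sequential"
    RECIPROCAL = "reciprocal"

@dataclass
class Task:
    name: str
    owners: list                     # responsible team members / trades
    interdependence: Interdependence
    # Rubrics added by the agile extension of the task list:
    subgroup_members: list = field(default_factory=list)
    sprint_date: str = ""
    deliverables: list = field(default_factory=list)

def tasks_needing_sprints(task_list):
    """Flag reciprocally interdependent tasks so the team can decide,
    in a timely and explicit way, to form a subgroup sprint."""
    return [t for t in task_list
            if t.interdependence is Interdependence.RECIPROCAL]

# Hypothetical task-list entries for illustration:
plan = [
    Task("size HVAC shafts", ["architect", "mechanical"],
         Interdependence.RECIPROCAL),
    Task("update cost estimate", ["construction manager"],
         Interdependence.SEQUENTIAL),
]
print([t.name for t in tasks_needing_sprints(plan)])  # ['size HVAC shafts']
```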
The AEC Ridge Team case study illustrates how the proposed agile IPD
production plan approach was implemented and led to process changes (Figure 1).
The Ridge team was composed of an architect in Puerto Rico, two structural
engineers at Stanford, and one construction manager in Stockholm, Sweden. Each of
them was working in their respective university laboratory, using a laptop on WiFi
with a headset for audio. They used the 3D Team Neighborhood in Teleplace
(http://www.teleplace.com/) as their multimedia collaboration environment [Fruchter and
Cavallin, 2011]. The 3D Team Neighborhood provided a highly immersive
environment that enabled the team members to construct their collaboration space
around them in real time as the dialog and interaction evolved during the meeting.
Each team member could share content on any number of displays, created on an
as-needed basis, as well as manipulate and annotate any content displayed in the
shared workspace. This provided a persistent presence of team members and
visibility and transparency of the activities performed and content created by them,
allowing for immediate interaction and co-creation of solutions to problems.
There was a consistent and continuous capture of the project issues they jointly
identified, with their progress tracked and linked to the task list and agile production
plan as the team planned and re-planned weekly. As they identified tasks with
reciprocal interdependencies during their weekly project review sessions, they formed
subgroups and engaged in parallel sprints for given amounts of time to produce
specific deliverables. These intense and close subgroup sprints avoided significant
rework and coordination, and led to zero response latency.
The explicit task list and agile production plan provided quantitative
information to determine the weekly work distribution among the team members, as
well as to track team progress by means of the weekly burndown chart (Figure 2).
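A weekly burndown chart of the kind mentioned above simply plots the remaining open items per week; a minimal sketch, with numbers assumed for illustration:

```python
def burndown(total_scope, completed_per_week):
    """Remaining work at the end of each week, given a total scope
    (e.g., open task-list items) and weekly completion counts."""
    remaining, series = total_scope, [total_scope]
    for done in completed_per_week:
        remaining -= done
        series.append(max(remaining, 0))
    return series

# Assumed numbers for illustration: 40 task-list items over six weeks.
print(burndown(40, [4, 6, 8, 9, 7, 6]))  # [40, 36, 30, 22, 13, 6, 0]
```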
A key team process transformation was observed over time (Figure 3):
From traditional, static, linear, agenda- and meeting-minutes-driven weekly project
review sessions, experienced during the first three weeks of the project as the
team used the sequential production plan approach;
To agile, dynamic, concurrent, and result-driven weekly project review sessions,
experienced for the rest of the twelve weeks of the project as the team adopted the
agile IPD production plan approach.
CONCLUSION
This paper introduced an agile IPD production plan approach that extends the state-
of-practice production plan method by modeling information about the task
interdependence type, together with a process for making timely and explicit
decisions on when and how to form subgroups that engage in sprints to address
reciprocal task interdependencies. The paper presents the pilot testbed and validation
run in 2009-2010. The preliminary results show that the agile IPD production plan
approach leads to fewer and shorter design iteration cycles, reduces rework, and
provides a consistent feedback link between issues, the progress made to resolve
them, and the IPD workflow representation and its dynamic update. Agile IPD
production plans act as an engine of process change in the project team.
Building on the contingency theory classification of task interdependence, Table 2
summarizes our contributions as further recommendations based on the findings of
the agile IPD production plan approach. We plan to continue deployment and
assessment of the agile IPD production plan approach in Winter and Spring 2011
with seven AEC global project teams.
ACKNOWLEDGEMENTS
The project was partially sponsored by the PBL Lab and CIFE at Stanford University.
The authors thank DPR and the AEC teams in 2010.
REFERENCES
AIA National / AIA California Council (2007). “Integrated Project Delivery: A Guide”
www.aia.org/contractdocs/AIAS077630
Agile Software Development www.agilemodeling.com/essays/agileSoftwareDevelopment.htm
Chachere, J., Kunz, J. and Levitt, R. (2003) Can you Accelerate your Project using Extreme
Collaboration? A Model Based Analysis, CIFE TR154.
Fruchter, R. (1999) Architecture/Engineering/Construction Teamwork: A Collaborative
Design and Learning Space. Journal of Computing in Civil Engineering, 13 (4): 261-270.
Fruchter, R., (2006) The Fishbowl: Degrees of Engagement in Global Teamwork. LNAI,
2006: 241-257.
Fruchter, R. and Cavallin, H, (2011) Attention and Engagement of Remote Team Members in
Collaborative Multimedia Environments, ASCE Computing in Civil Engineering Workshop,
Miami, June 2011.
Khanzode A., Fischer M., Reed D., Ballard G. (2006). A Guide to Applying the Principles of
Virtual Design and Construction (VDC) to the Lean Project Delivery Process, CIFE Working
Paper #093, December 2006
Levitt R., and Kunz J. (2002). Design your project organization as engineers design bridges,
CIFE Working Paper #73, August 2002
Thompson J. D. (1967). Organizations in Action, McGraw-Hill.
An Automated Collaborative Framework To Develop Scenarios For Slums
Upgrading Projects According To Implementation Phases And Construction
Planning
ABSTRACT
Slums are informal areas that are illegally developed on State property
with no physical planning. Accordingly, governments adopt various intervention
strategies to replace or upgrade these slums. However, implementing these strategies
often faces several planning and constructability challenges, because the slum
areas to be upgraded (1) are already occupied by resident families; and (2)
are often characterized by unplanned and extremely crowded transportation networks.
Accordingly, the construction period of these upgrade projects can cause
significant social disruption to resident families and requires protracted timelines and
additional budgets. The objective of this paper is to present a multi-objective
optimization model that is capable of accelerating the delivery of urgent
redevelopments while minimizing construction costs and socioeconomic disruptions
to slum dwellers. An application example is presented to demonstrate the model's
capabilities, followed by a discussion of formulation challenges.
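As a generic illustration (not the paper's formulation) of trading off delivery time, construction cost, and socioeconomic disruption, one common multi-objective approach is a weighted-sum scalarization of normalized objectives; the sketch below uses entirely hypothetical scenarios and weights.

```python
def weighted_score(scenario, weights):
    """Weighted-sum scalarization of normalized objectives (0 = best,
    1 = worst); lower total scores are better."""
    return sum(weights[k] * scenario[k] for k in weights)

# Hypothetical upgrade scenarios with normalized objective values:
scenarios = {
    "phase critical zones first": {"time": 0.3, "cost": 0.6, "disruption": 0.4},
    "upgrade all zones at once":  {"time": 0.2, "cost": 0.9, "disruption": 0.8},
}
weights = {"time": 0.4, "cost": 0.3, "disruption": 0.3}
best = min(scenarios, key=lambda s: weighted_score(scenarios[s], weights))
print(best)  # phase critical zones first
```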
INTRODUCTION
Slums are areas of population concentration developed in the absence of
physical planning. Slum dwellers suffer from one or more of the following
conditions: (1) lack of access to clean water; (2) lack of access to improved sanitation
facilities; (3) insufficient and overcrowded living area; (4) inadequate structural
quality or durability of dwellings; and (5) lack of tenure security (UN-HABITAT
2008). Slums represent a violation of public and/or private property and are
characterized by high crime and illiteracy rates (Abdel Aziz and El-Anwar 2010).
Dealing with urban slums is widely recognized as a global challenge: an
estimated one billion people worldwide live in urban slums, and four out of ten
inhabitants of developing countries are slum dwellers (Abdelhalim 2010; Nijman
2008; UN-HABITAT 2003a).
Several upgrading intervention strategies can be employed to deal
with the slum issue, including (1) on-site redevelopment of informal areas; (2)
redevelopment and relocation; (3) servicing informal areas; (4) sectorial upgrading;
(5) planning and partial adjustment; and (6) participatory upgrading (Abdelhalim
2010; Algohary and El-Faramwy 2010). These upgrading strategies focus on different
aspects of the living environment in informal areas, such as on physical
[Figure 1: (a) two-dimensional attributes-loaded map of the slum area, with zone borders and urban, construction, and social attributes per zone; (b) multi-dimensional analysis of overall socioeconomic disruption (OSD) over a multi-year time scale.]
and slum dwellers in order to optimize slum upgrading projects. To this end, the
framework consists of two main phases: (1) data generation and modeling; and (2)
plans evaluation and optimization. First, the objective of the data generation and
modeling phase is to (1) generate all the needed urban, construction, and social data
from the involved stakeholders using a participatory process that involves planners,
contractors, and representatives of slum dwellers; (2) divide the slum area into
zones and propose intervention strategies for these zones based on their
characteristics and level of risk; and (3) model the generated data for each zone using
an object-oriented two-dimensional representation that enables analyzing and
utilizing this data. The product of this phase is a two-dimensional attributes-loaded
map for the slum area under consideration, as shown in Figure 1(a). In this
representation, each zone will be modeled using object-oriented programming and
will have a set of attributes, including (1) urban attributes such as the proposed
intervention strategy, the urgency of upgrading this zone based on its condition and
level of risk (using an urgency factor), road widths and conditions, and accessibility
to utilities and transportation; (2) construction attributes, such as the estimated
construction cost and duration to upgrade the zone, the need for access roads for
construction equipment, and the availability of storage areas for construction
materials; and (3) social attributes, such as the socioeconomic impacts of closing
roads during upgrading, the number of local businesses that will be temporarily
closed or relocated, and the number of families to be relocated from the zone.
Second, the objective of the plans evaluation and optimization phase is to
identify the optimal integrated upgrading plans that can (1) maximize the benefits of
slums upgrading projects by accelerating the delivery of the urgent projects; (2)
minimize the total costs of these projects; and (3) minimize the social and economic
disruptions for resident families during the construction phases of slums upgrading.
As shown in Figure 1(b), this phase utilizes a multi-dimensional analysis process that
consists of four main modules, including (1) performing multi-objective optimization;
(2) developing time schedules; (3) incorporating cost dimension; and (4) quantifying
social disruption. The following section briefly describes the design of a multi-
objective optimization model, which represents the computational implementation of
the first module in this phase.
MODEL DESIGN
A multi-objective optimization model is designed to identify the optimal
slums upgrading plans in order to maximize benefits to residents, minimize
construction costs, and minimize the associated socioeconomic disruptions. The focus
of the model in this development stage is on the on-site redevelopment intervention
strategy. This strategy is used when housing conditions are very poor, the urban
fabric is irregular and unsafe, and/or tenure status is illegal. This intervention strategy
refers to a complete replacement of the physical fabric through gradual demolition
and in-situ construction of alternative housing (Abdelhalim 2010; Algohary and El-
Faramwy 2010).
The model optimizes the construction sequencing of the slum zones taking
into account budget constraints and logistics constraints such as the limited access of
Minimize  TC = Σ_{z=1}^{Z} [ CostD_z + CostR_z + dur_z · nf_z · Cost_h + nb_z · Cost_b(CS) ]    (2)

Minimize  OSD = Σ_{z=1}^{Z} dur_z · [ W_h · I_h + W_b · I_b(CS) ]    (3)
Where, OBI is the overall benefit of upgrading the slum area; Z is the total
number of zones in the considered slum; UFz is the urgency factor for upgrading zone
z; nz is the total number of residents in zone z; Rz is the start date of redevelopment
activities for zone z; durRz is the duration of redevelopment activities for zone z; TC
is the total cost of the slum upgrading project; CostDz and CostRz are the costs of
demolition and redevelopment activities for zone z, respectively; durz is the total
duration of direct disruption to zone z, which starts with the demolition and site
clearing activities, ends with the finish of redevelopment activities, and can be
calculated as shown in Equation (4); nfz is the number of families to be temporarily
relocated from zone z during construction; Costh is the cost of providing temporary
housing for one family per week; nbz is the number of families directly affected by
the temporary closure/relocation of local businesses during upgrading zone z;
Costb(CS) is the compensation amount to be paid to each family affected by business
disruption and is a function of the compensation scheme (CS); OSD is the overall
socioeconomic disruption to slum dwellers during the upgrading work; Wh and Wb
are the relative weights of the socioeconomic impacts of temporarily relocating
families and temporarily disrupting local businesses, respectively; Ih and Ib(CS) are
the socioeconomic impacts of temporarily relocating families and temporarily
disrupting local businesses, respectively, which can take values from 0 (i.e.,
negligible impact) to 3 (i.e., major impact); and Dz is the start date of demolition and
site clearing activities for zone z.
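Equations (2) and (3) can be sketched in code as follows; the zone data, unit costs, impact scores, and weights below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of Equations (2) and (3); all numeric inputs are assumed.

def total_cost(zones, cost_h, cost_b):
    """Eq. (2): demolition and redevelopment costs, plus temporary housing
    costs over the disruption period and business-disruption compensation."""
    return sum(
        z["cost_d"] + z["cost_r"]
        + z["dur"] * z["nf"] * cost_h
        + z["nb"] * cost_b
        for z in zones
    )

def overall_disruption(zones, w_h, i_h, w_b, i_b):
    """Eq. (3): disruption durations weighted by the relocation and
    business-disruption impacts (each impact scored 0 to 3)."""
    return sum(z["dur"] * (w_h * i_h + w_b * i_b) for z in zones)

# Two hypothetical zones; costs in $M, durations in weeks.
zones = [
    {"cost_d": 1.0, "cost_r": 8.0, "dur": 40, "nf": 120, "nb": 15},
    {"cost_d": 0.5, "cost_r": 6.0, "dur": 30, "nf": 90, "nb": 10},
]
tc = total_cost(zones, cost_h=0.001, cost_b=0.002)
osd = overall_disruption(zones, w_h=0.6, i_h=2, w_b=0.4, i_b=1)
```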
The optimization model is implemented using the Non-Dominated Sorting
Genetic Algorithm II (NSGA-II) because of (1) the non-linear and multi-objective
nature of the problem; (2) the need for near-optimal solutions; (3) the huge search
space; and (4) the superior performance of NSGA-II and its unique characteristics,
such as fast non-dominated sorting, crowding, and elitism (Deb et al. 2001). The
following section presents a brief application example to demonstrate the model
capabilities and formulation challenges.
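The non-dominated sorting at the heart of NSGA-II can be illustrated with a minimal Pareto filter; this is a toy sketch, not the implementation used in the paper, and it treats all three objectives as minimized (benefit is negated).

```python
def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    (minimization) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# Objective vectors (-benefit, cost, disruption); benefit negated so all minimize.
sols = [(-0.9, 24.8, 112.0), (-0.8, 20.0, 100.0), (-0.7, 25.0, 130.0)]
front = pareto_front(sols)
```

The third solution is dominated by the first (worse in every objective), so only the first two survive on the front.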
APPLICATION EXAMPLE
Figure 1(a) shows a satellite photo (from Google Maps) of part of Manshiet
Naser in Cairo, the largest informal area in Egypt. In this example, it is
assumed that the shown part of the informal area is divided into 12 zones with a total
population of 3,730 families, with higher urgency factors assigned to zones 3, 6, 9,
and 12 because of unsafe conditions. Other needed input data (such as construction
costs and durations) were reasonably assumed. Furthermore, a maximum annual
budget of $25 million was defined as a budget constraint.
The optimization model is used to search for near-optimal solutions to this
slum upgrading problem with a population size of 1,000, crossover probability of
0.9, and mutation probability of 0.003906. Figure 2 shows the identified near-optimal
tradeoffs among the three optimization objectives after 10,000 generations using 2D
graphs showing tradeoffs between maximizing benefits and minimizing each of the
total costs and socioeconomic disruption. In this figure, the normalized values of the
three objectives are used instead of their absolute values in order to illustrate the
solutions performance in comparison to the ideal values, where the normalized values
are computed as shown in Equation (5).
Where, NOBI, NTC, and NOSD are the normalized values of OBI, TC, and
OSD, respectively, which can range from 0 (lowest performance) to 1.0 (ideal performance).
[Figure 2: near-optimal tradeoffs between maximizing NOBI and each of NTC and NOSD. Figure 3: Pareto optimal solutions offered by Formulations 1 to 4 in the NTC-NOBI and NOSD-NOBI planes.]
all three optimization objectives, as computed using Equation (5). Accordingly, the
model maximizes each of NOBI, NTC, and NOSD. This formulation is introduced
assuming that normalizing the optimization objectives will reduce any bias the model
has towards or against any objective. However, the results illustrated that this is not
the case. As shown in Figure 3, this formulation could only contribute one Pareto
optimal solution when compared to other formulations.
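Equation (5) itself does not survive in the extracted text; the sketch below assumes a conventional min-max normalization against the worst and ideal objective values, which is consistent with the stated 0-to-1 range but is an assumption, not the paper's exact formula.

```python
def normalize(value, worst, best):
    """Map an objective value to [0, 1], with 1.0 at the ideal (best) value.
    Assumed min-max form; the paper's exact Equation (5) is not shown here."""
    if best == worst:
        return 1.0
    return (value - worst) / (best - worst)

# For a maximized objective (OBI), best > worst; for minimized ones, best < worst.
nobi = normalize(80.0, worst=50.0, best=100.0)
ntc = normalize(24.8, worst=30.0, best=20.0)
```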
Formulation #3: This formulation simplifies the optimization problem by
converting it into a single-objective optimization problem. To this end, the
normalized values of the three objectives are aggregated as
shown in Equation (6).
Where, WOBI, WTC, and WOSD are the relative weights of OBI, TC, and OSD,
respectively. In this example, all weights are set to 33.33%. This formulation could
offer three Pareto optimal solutions when compared to other formulations. The
limited number of generated solutions is attributed to the fixed values of the
objectives' relative weights (which should converge to one solution if more
generations are allowed). A possible way to overcome this is to automatically
generate a set of unique combinations of relative weights and solve each combination
as a separate optimization problem. This method should generate a diversified Pareto
front; however, it will increase the computational time of the optimization model in
proportion to the number of unique weight combinations (Kandil et al. 2010).
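The weight-enumeration idea can be sketched as follows; the grid step and the form of the aggregated objective are assumptions for illustration.

```python
from itertools import product

def weighted_objective(nobi, ntc, nosd, w):
    """Formulation #3: single aggregated objective (weights sum to 1)."""
    w_obi, w_tc, w_osd = w
    return w_obi * nobi + w_tc * ntc + w_osd * nosd

def weight_combinations(step=0.25):
    """Enumerate weight triples on a grid that sum to 1, so each triple can be
    solved as a separate single-objective run to diversify the Pareto front."""
    n = round(1 / step)
    combos = []
    for i, j in product(range(n + 1), repeat=2):
        k = n - i - j
        if k >= 0:
            combos.append((i * step, j * step, k * step))
    return combos

combos = weight_combinations(0.25)  # 15 triples on a quarter-step grid
```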
Formulation #4: This formulation is also introduced to simplify the
optimization problem but using an entirely different approach. Instead of designing
the model to optimize the three objectives, it is rather designed to optimize the values
of the underlying factors that affect the performance of upgrading plans in the three
objectives. To this end the model is designed to optimize two objectives: (1)
accelerate the delivery of urgent developments, as shown in Equation (7), which in
turn maximizes the overall benefits to residents (OBI); and (2) minimize the total
duration of direct disruption due to construction as shown in Equation (8), which in
turn minimizes socioeconomic disruptions as well as projects costs by reducing the
periods of temporary housing and businesses disruption and their associated costs and
compensations.
Minimize  WFD = Σ_{z=1}^{Z} UF_z · n_z · (R_z + durR_z)    (7)

Minimize  TD = Σ_{z=1}^{Z} dur_z    (8)
Where, WFD is the weighted finish dates of the redevelopment activities; TD is the total duration of direct
disruption during upgrading work; and durz is the total duration of direct disruption to
zone z. This formulation resulted in six additional Pareto optimal solutions compared
to other formulations, as shown in Figure 3. It offered the highest NOBI among all
formulations; however, it could not achieve high performance in minimizing
socioeconomic disruption. This is attributed to the model's inability to capture the
impact of business compensation schemes on total costs and socioeconomic
disruption, since these do not appear in its objectives in Equations (7) and (8).
Accordingly, the model arbitrarily selected the least-cost compensation scheme,
which resulted in higher socioeconomic disruption.
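Under this reading of Equations (7) and (8), Formulation #4's two objectives can be sketched as follows; the zone values are assumed for illustration.

```python
def weighted_finish_dates(zones):
    """Eq. (7): urgency- and population-weighted finish dates of redevelopment;
    minimizing this accelerates delivery of the most urgent zones."""
    return sum(z["uf"] * z["n"] * (z["r"] + z["dur_r"]) for z in zones)

def total_disruption_duration(zones):
    """Eq. (8): total duration of direct disruption across all zones."""
    return sum(z["dur"] for z in zones)

# Hypothetical zones: urgency factor, residents, start week, durations in weeks.
zones = [
    {"uf": 3, "n": 400, "r": 0, "dur_r": 52, "dur": 60},
    {"uf": 1, "n": 250, "r": 52, "dur_r": 40, "dur": 45},
]
wfd = weighted_finish_dates(zones)      # 3*400*52 + 1*250*(52+40)
td = total_disruption_duration(zones)   # 60 + 45
```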
INTRODUCTION
In 1811 and 1812, the New Madrid seismic zone in the central United States experienced
some of the strongest earthquake ground motions observed in the US, when a series of three
earthquakes shook the Midwest region with magnitudes around 8 (Cleveland et al. 2007).
Figure 1(a) shows the three main New Madrid fault lines. A recurrence of the 1811 and 1812
earthquakes would cause significant social and economic impacts affecting the lives of over
45 million residents of the states surrounding the New Madrid seismic zone. Moreover, the
recurrence of this series of earthquakes would subject the major urban center of Memphis,
Tennessee to intense ground shaking (Cleveland et al. 2007). The Mid-America Earthquake
(MAE) Center and the Institute for Crisis, Disaster and Risk Management (ICDRM)
performed an earthquake impact assessment for the State of Tennessee using HAZUS-MH
MR2 software, where the earthquake scenario considered a magnitude 7.7 event along the
southwest extension of the presumed eastern fault line in the New Madrid Seismic Zone
(Elnashai and Jefferson 2008). The results showed that direct economic losses from damaged
buildings, transportation, and utility systems are estimated at $56.6 billion for the State.
The recurrence of this series of severe earthquakes would result in large-scale
displacement of families in the impacted areas; it is estimated that more than 60,000
households would be displaced in Shelby County, TN, alone, based on the impact assessment
performed by the MAE Center and ICDRM. Those displaced families will urgently need
temporary accommodations for several months (or even years) until permanent housing can
be eventually obtained. To enable emergency planners to quickly identify temporary
housing solutions, El-Anwar et al. (2009) developed an automated decision support system
(DSS) for optimizing temporary housing arrangements following large-scale natural disasters.
This DSS supports the optimization of a number of important objectives, including (1)
minimizing social and economic disruptions for displaced families; (2) maximizing
temporary housing safety in the presence of potential post-disaster hazards; (3) minimizing
negative environmental impacts of temporary housing on host community; and (4)
minimizing total public expenditures.
[Figure 1: (a) potential hazards in Tennessee, showing the New Madrid fault lines and hazmat locations; (b) the 40,510 households in need of temporary housing.]
MODEL FORMULATION
This section provides a brief description of the formulation of the four main optimization
objectives as they relate to the scope of the presented case study. The first objective is to
minimize the socioeconomic disruptions experienced by displaced families during their stay
in temporary housing (Bolin 1982; Bolin and Bolton 1986; Golec 1983; Johnson 2007). To
this end, the model calculates a socioeconomic disruption index (SDI) for each candidate
housing (e.g. motels, travel trailers, or mobile homes) and its proposed location. This SDI
represents the aggregated weighted performance of the candidate housing in six metrics,
including (1) housing quality; (2) delivery time; (3) median household income at the
proposed location; (4) unemployment rates; (5) cost of living index; and (6) reported crime
rates. Accordingly, for any configuration of temporary housing arrangements, the overall
socioeconomic disruption is evaluated by normalizing and averaging the computed SDI for
each family in that configuration.
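The SDI aggregation described above can be sketched as a weighted average over the six metrics; the equal metric weights and the 0-to-1 scores below are assumptions for illustration.

```python
# Hypothetical sketch: aggregate six normalized metric scores (0 = best,
# 1 = worst) into a socioeconomic disruption index, then average over families.

METRICS = ["housing_quality", "delivery_time", "median_income",
           "unemployment", "cost_of_living", "crime_rate"]

def sdi(scores, weights):
    """Weighted aggregation of the six metric scores for one housing option."""
    return sum(weights[m] * scores[m] for m in METRICS)

weights = {m: 1 / 6 for m in METRICS}  # equal weights, assumed
family_scores = [
    {m: 0.2 for m in METRICS},         # well-matched housing
    {m: 0.8 for m in METRICS},         # poorly matched housing
]
overall_sdi = sum(sdi(s, weights) for s in family_scores) / len(family_scores)
```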
The second optimization objective is to maximize the safety of temporary housing in the
presence of multiple potential post-disaster hazards (e.g. aftershocks and hazmat release). To
this end, the model computes a building performance index for each housing alternative
taking into account (1) characteristics of potential hazards; (2) housing type and distance
from potential hazards; and (3) housing expected building performance if the potential hazard
occurs. Because of the probabilistic nature of potential hazard occurrences, the model
generates all possible scenarios of hazard occurrence and calculates a corresponding
building performance index for the candidate housing for each scenario. The model then
computes a safety index (SI) for the candidate housing to represent the expected value of its
possible building performance indexes. Accordingly, for any configuration of housing
arrangements, the overall safety index is evaluated by normalizing and averaging the safety
indexes for all housing alternatives in that configuration.
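The scenario enumeration behind the safety index can be sketched as follows; the hazard probabilities, their assumed independence, and the performance function are all illustrative assumptions.

```python
from itertools import product

def expected_safety_index(hazards, performance):
    """Enumerate every hazard occurrence scenario (each hazard occurs or not),
    weight the scenario's building performance index by its probability, and
    return the expected value. Hazards are assumed independent; `performance`
    is a caller-supplied function of the scenario."""
    names = list(hazards)
    expected = 0.0
    for outcome in product([False, True], repeat=len(names)):
        scenario = {n: occ for n, occ in zip(names, outcome)}
        p = 1.0
        for n, occ in zip(names, outcome):
            p *= hazards[n] if occ else (1.0 - hazards[n])
        expected += p * performance(scenario)
    return expected

# Assumed data: aftershock 30% likely, hazmat release 10% likely.
hazards = {"aftershock": 0.3, "hazmat": 0.1}
def perf(s):  # performance index drops when hazards occur
    return 1.0 - 0.4 * s["aftershock"] - 0.2 * s["hazmat"]

si = expected_safety_index(hazards, perf)
```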
The third objective is to minimize the environmental impact of constructing and
maintaining temporary housing projects on host communities. To this end, the model
computes an environmental index (EI) for each candidate housing project. This index
represents the housing project’s weighted impacts on the main environmental areas analyzed
in the expedited environmental review process conducted by the Federal Emergency
Management Agency (FEMA). Accordingly, for any configuration of temporary housing
arrangements, the overall environmental index is evaluated by normalizing and averaging the
environmental indexes of all housing alternatives in that configuration. The fourth
optimization objective is to minimize total public expenditures on temporary housing.
Accordingly, the model enables emergency planners to input all the life cycle costs of
candidate housing alternatives and calculates the net present value of their total costs over the
period of their use. A more detailed description of the formulation of these four objectives
is available in El-Anwar et al. (2008, 2010a, 2010b).
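The net-present-value calculation for the fourth objective can be sketched as follows; the discount rate, monthly cost, and duration are assumptions for illustration.

```python
def npv(cash_flows, annual_rate):
    """Net present value of per-period costs; cash_flows[t] is the cost in
    month t, discounted at a monthly rate derived from the annual rate."""
    monthly = (1 + annual_rate) ** (1 / 12) - 1
    return sum(c / (1 + monthly) ** t for t, c in enumerate(cash_flows))

# Assumed example: $2M/month for 18 months of temporary housing, 5%/yr discount.
total = npv([2.0] * 18, annual_rate=0.05)
```

Discounting makes the 18 monthly payments worth slightly less than the undiscounted $36M.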
CASE STUDY
This section presents the development of a case study applying the developed DSS to identify
optimal temporary housing plans for families that would be displaced in Shelby County,
Tennessee. This case study assumes the occurrence of an earthquake of magnitude 7.7 along
the southwest extension of the presumed eastern fault line in New Madrid Seismic Zone,
which is the closest of the three main fault lines to Shelby County and represents the worst
case scenario, as shown in Figure 1(a). The following sections briefly present the input data,
optimization procedure, and results for this case study.
Input data. The required input data includes (1) the number of displaced families; (2) eight
environmental areas and their relative importance weights; (3) available temporary housing
alternatives and their locations and characteristics; and (4) post-disaster hazards data. For the
first required input data, two thirds of the displaced households in Shelby County are
assumed to be in need of temporary housing. Accordingly, the emergency management
agency needs to provide temporary housing to 40,510 households according to the estimated
number of 60,772 displaced households in Shelby County (Elnashai and Jefferson 2008).
Figure 1(b) shows the distribution of displaced families per census tract. For the second set of
input data, importance weights were assumed for eight environmental areas that will be
potentially impacted by developing temporary housing projects according to FEMA’s
expedited review process (FEMA 2005). The eight areas and their assumed weights are as
follows: 20% for hazardous materials and toxic wastes; 20% for air quality; 20% for water
quality; 10% for geology and soils; 10% for wetlands; 10% for threatened and endangered
species; 5% for vegetation and wildlife; and 5% for noise, as shown in Table 1. In addition,
the impact intensities of each temporary housing alternative on the environmental areas were
assumed and represented numerically by 0, 1, 2, and 3 for negligible, minor, moderate, and
major impacts, respectively.
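The weighted environmental index can be sketched with the eight FEMA-area weights from the case study; the 0-to-3 intensity scores and the division by 3 to normalize are illustrative assumptions.

```python
# Weights from the case study; dictionary keys are illustrative names.
ENV_WEIGHTS = {
    "hazmat_toxic_waste": 0.20, "air_quality": 0.20, "water_quality": 0.20,
    "geology_soils": 0.10, "wetlands": 0.10, "endangered_species": 0.10,
    "vegetation_wildlife": 0.05, "noise": 0.05,
}

def environmental_index(intensities):
    """Weighted environmental impact of one housing project; intensity scores
    run 0 (negligible) to 3 (major), normalized here to [0, 1] by dividing
    by the maximum score of 3 (an assumed normalization)."""
    raw = sum(ENV_WEIGHTS[a] * intensities.get(a, 0) for a in ENV_WEIGHTS)
    return raw / 3.0

ei = environmental_index({"air_quality": 2, "noise": 1, "wetlands": 3})
```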
The input data for temporary housing alternatives was obtained after conducting a detailed
online search of available alternatives in Tennessee. The results of this search generated 413
temporary housing alternatives with a total capacity of 55,243 families, as shown in Figure 2.
These alternatives consist of 25 campsites for travel trailers and 16 campsites for tents, as
well as 372 hotels, inns, motels, and other lodges. More detailed data on each of these
alternatives were obtained and used in this case study including their monthly cost rate,
location (longitude and latitude), and capacity, as shown in Table 1. Additional data were also
gathered for the locations of all the considered housing alternatives, including crime rates,
median household income, percentage of unemployment among the civil labor force, and cost
of living index. Furthermore, the expected delivery times of the temporary housing
[Figure 2: temporary housing alternatives in Tennessee. Figure 3: tradeoffs between monthly public expenditures (PE, in millions of $) and the average socioeconomic disruption index (SDI), the average safety index (SI), and the environmental impact index (EI); panel (b.2) shows a solution's performance across SDI, SI, EI, and TPE.]
CONCLUSIONS
This paper presented the development of a case study representing a large-scale
temporary housing allocation problem. For this case study, the model identified 286 optimal
temporary housing plans for 40,510 families that would be displaced in Shelby County,
Tennessee, in case of the occurrence of an earthquake of magnitude 7.7 along the southwest
extension of the presumed eastern fault line in the New Madrid Seismic Zone. This case
study illustrated the unique capabilities of the developed automated decision support system
in optimizing large-scale real-life temporary housing problems in order to (1) minimize social
and economic disruptions for displaced families; (2) maximize temporary housing safety in
the presence of multiple potential post-disaster hazards; (3) minimize the negative
environmental impacts of constructing and maintaining temporary housing on host
communities; and (4) minimize total public expenditures on temporary housing. This case
study also illustrated the efficiency and effectiveness of the developed system and highlighted
the modifications and flexibilities added to the model to ensure its practical computational
requirements.
REFERENCES
Bolin, R. (1982). “Long-term family recovery from disaster.” Institute of Behavioral Science
Monograph 36, University of Colorado, Boulder.
Bolin, R. C. and Bolton, P. (1986). Race, religion, and ethnicity in disaster recovery,
Boulder, CO: Institute of Behavioral Science, University of Colorado.
Cleveland, L. J., Elnashai, A. S., Pineda, O. (2007). New Madrid Seismic Zone Catastrophic
Earthquake Response Planning, Mid-America Earthquake Center, Report 07-03, May
2007.
El-Anwar, O., El-Rayes, K., and Elnashai, A. (2008). "Multi-objective optimization of
temporary housing for the 1994 Northridge earthquake," Journal of Earthquake
Engineering, 12(1), 81–91.
El-Anwar, O., El-Rayes, K., and Elnashai, A. (2009) "An Automated System for Optimizing
Post-Disaster Temporary Housing Allocation," Automation in Construction, 18(7), 983-
993.
El-Anwar, O., El-Rayes, K., and Elnashai, A. (2010a). "Maximizing Temporary Housing
Safety after Natural Disasters," Journal of Infrastructure Systems, ASCE, 16(2), 138-148.
El-Anwar, O., El-Rayes, K., and Elnashai, A. (2010b). "Minimization of Socioeconomic
Disruption for Displaced Population Following Disasters," Disasters, 34(3), 865–883.
Elnashai, A. and Jefferson, T. (2008). Analysis of: New Madrid Seismic Zone - M7.7 Event,
New Madrid Seismic Zone Catastrophic Earthquake Response Planning, Mid-America
Earthquake Center and Institute for Crisis, Disaster and Risk Management, State Report
for Tennessee Earthquake Impact Assessment, March 2008.
Elnashai, A., Hampton, S., Karaman, H., Lee, J.S., McLaren, T., Myers, J., Navarro, C.,
Sahin, M., Spencer, B., and Tolbert, N. (2008). "Overview and Applications of
Maeviz-HAZTURK 2007," Journal of Earthquake Engineering, 12(1), 100–108.
FEMA (2005) “Programmatic Environmental Assessment: Temporary Housing for Disaster
Victims of Hurricane Katrina,” FEMA-DR-1604-MS, September 2005.
Golec, J. (1983). “A contextual approach to the social psychological study of disaster
recovery,” Journal of Mass Emergencies and Disasters, 1, August, 255-276.
Johnson, C. (2007). “Impacts of prefabricated temporary housing after disasters: 1999
earthquakes in Turkey,” Habitat International, 31(1), 36-52.
Requirements for an Integrated Framework of
Self-managing HVAC Systems
Xuesong Liu1, Burcu Akinci2, James H. Garrett, Jr.3 and Mario Bergés4
1 Ph.D. Candidate, Dept. of Civil & Environmental Engineering, Carnegie Mellon
University, 5000 Forbes Ave., Pittsburgh, PA 15213; PH (412) 953-2517; email:
pine@cmu.edu
2 Professor, Dept. of Civil & Environmental Engineering, Carnegie Mellon University,
5000 Forbes Ave., Pittsburgh, PA 15213; email: bakinci@andrew.cmu.edu
3 Professor and Head, Dept. of Civil & Environmental Engineering, Carnegie Mellon
University, 5000 Forbes Ave., Pittsburgh, PA 15213; email: garrett@cmu.edu
4 Assistant Professor, Dept. of Civil & Environmental Engineering, Carnegie Mellon
University, 5000 Forbes Ave., Pittsburgh, PA 15213; email: marioberges@cmu.edu
ABSTRACT
Heating, ventilating and air conditioning (HVAC) systems account for about 16% of
the total energy consumption in the United States. However, research shows that
25%-40% of the energy consumed by HVAC systems is wasted because of
undetected faults. Actively detecting faults requires continuously monitoring and
analyzing the status of hardware and software components that are part of HVAC
systems. With the increasing complexity of HVAC systems, fault detection that relies
on manual processes becomes even more challenging and impractical. Hence, a
computerized approach is needed, which enables HVAC systems to continuously
monitor, assess and configure themselves. This paper proposes an integrated
framework for developing and implementing self-configuring approaches to operate
and maintain HVAC systems. The discussions include the identification of functional
requirements, a synthesis of existing self-configuring approaches, and an analysis of
the requirements for developing an integrated framework using an implemented
prototype system.
INTRODUCTION
Buildings account for 41% of the total energy consumption and 38% of carbon
dioxide emissions in the United States. About 40% of the energy consumed in both
residential and commercial buildings is used by HVAC systems (DoE 2008; EIA
2008). However, research shows that 25%-40% of the energy used by HVAC systems
is wasted due to faults, such as misplaced and uncalibrated sensors, malfunctioning
controllers and controlled devices, improper implementation and execution of control
logic, improper integration of control software and hardware components, and
sub-optimal control strategy (Mansson and McIntyre 1997; Liddament 1999; Liu et al.
2002; Roth et al. 2005). This waste accounts for $36–$60 billion every year in the
United States (EIA 2008). Indirect social and environmental impacts of the waste are
beyond estimation due to the fast depleting energy resources and increasing
environmental pollution (Liang and Du 2007).
Several researchers have stated that a primary reason for the occurrence of the
different types of faults behind this significant energy waste is that HVAC systems
are becoming increasingly complex, making it difficult for operators to manually
detect and diagnose these faults (Lee et al. 2004; Katipamula and Brambley 2005a;
Jagpal 2006).
Due to the increasing need for better indoor environment control, HVAC systems are
being equipped with more and more software and hardware components. To maintain the
desired performance of these HVAC systems, operators need to continuously monitor
and diagnose hundreds of components. Moreover, because different faults occurring
in HVAC systems can have similar symptoms, it is difficult for the operator to
diagnose the root cause of the faults (Schein and Bushby 2005). All of these issues
make it very difficult, if not impossible, to manually monitor the performance of
HVAC systems and to detect possible problems resulting in inefficient operations.
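As a minimal illustration of the kind of check a computerized approach automates (a sketch, not a method from the paper), a simple limit rule can flag a measurement that persistently deviates from its set-point:

```python
def detect_setpoint_fault(readings, setpoint, tolerance, min_run):
    """Flag a fault when `min_run` consecutive readings deviate from the
    set-point by more than `tolerance` (a basic limit-check rule)."""
    run = 0
    for r in readings:
        run = run + 1 if abs(r - setpoint) > tolerance else 0
        if run >= min_run:
            return True
    return False

# Assumed data: supply-air temperature (deg C) drifting above a 13 C set-point.
readings = [13.2, 13.4, 15.1, 15.6, 15.9, 16.2]
fault = detect_setpoint_fault(readings, setpoint=13.0, tolerance=1.0, min_run=3)
```

Requiring several consecutive out-of-tolerance readings avoids flagging transient spikes.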
Computerized approaches, such as computer-aided fault detection and diagnosis
(FDD), automated commissioning and optimized operating schedule, have been
studied and developed to address some of these challenges associated with manual
operation and maintenance of HVAC systems. Both laboratory and real-world
experiments have been conducted to validate the energy saving capability of these
approaches (Mansson and McIntyre 1997; Castro 2004; Katipamula and Brambley
2005a; Katipamula and Brambley 2005b; Schein and Bushby 2005).
These studies show that computerized approaches have the potential to improve
energy efficiency of HVAC systems by addressing issues associated with managing
complex systems through the elimination of human involvement in maintaining these
systems. They can enable the systems to automatically detect abnormal conditions,
diagnose the causes, and mitigate the faults, thus eliminating their impacts on the
performance of the systems. However, many of these studies and developments
remain academic; very few commercial products have been deployed in real-world
projects (Liang and Du 2007). One primary reason identified by researchers is that
deploying these approaches requires thorough knowledge of HVAC systems, so that
the correct information can be provided to them and the systems can be adjusted
according to their outputs; this requirement is beyond the average skill level of most
system operators (Katipamula and Brambley 2005a).
We envision that an integrated framework, which can automatically manage the
computerized approaches by providing the needed information and reconfiguring the
HVAC systems according to their outputs, can solve this problem. In this paper, we
discuss the identification of functional requirements for developing such an integrated
framework. We will introduce the synthesis of existing self-configuring approaches,
and an analysis of the requirements for developing the integrated framework using an
implemented prototype system.
PROBLEM STATEMENT
Previous studies showed that computerized approaches are able to improve the energy
efficiency of HVAC systems by automating two processes: (1) detecting, diagnosing
and mitigating faults; and (2) evaluating the performance of the HVAC systems and
improving their control strategy. We identified three challenges which contribute as
possible impediments for the deployment of these approaches in the real-world.
First, it is very difficult for system operators to prepare the needed inputs and process
outputs for the approaches (Kumar et al. 2001; Venkatasubramanian et al. 2003;
Katipamula and Brambley 2005b). As shown in Figure 1, every approach requires
some inputs, such as the condition measures of the building environment, the
configuration of the HVAC systems, or the properties of the building elements. For
different buildings and different HVAC systems, the inputs are very different in terms
of data type, communication protocol, file format and the stakeholders who create
them. Outputs of these approaches also need to be interpreted by the system operators
so that they can use the information to re-configure the systems. As a result, it is very
challenging for the system operators to collect and process all the required
information manually.
integrate them and enable the system operator to deploy them in real-world systems.
RESEARCH APPROACHES
This research first explored the existing computerized approaches and analyzed their
information requirements. Based on the findings, functional requirements were
identified for an integrated framework to address the vision described in section 2.
Finally, a prototype application was developed to test the feasibility of the envisioned
framework and investigate challenges associated with that framework. The following
sections discuss these three steps in the research.
Analysis of information requirements for the existing computerized approaches
Based on the review of the existing computerized approaches, we selected thirty-two
scientific publications for identifying information requirements. The main
goal in selecting publications was to obtain a diverse set of approaches to
incorporate in the initial framework. Hence, the selection criteria were to include
publications that cover different types of approaches and that were developed by
different researchers and/or organizations.
According to their information sources, the identified information requirements can
be categorized into two groups: dynamic and static information items. Static
information items are documented in drawings, manuals and spreadsheets. They only
change when the configuration of the building layout or HVAC systems is changed.
For example, dimensions and materials of the building elements typically do not
change frequently after construction. A summary of the static information items is
listed in Table 1.
Table 1 Summary of the static information requirements

Category      Information requirement                           Example
------------  ------------------------------------------------  -----------------------------------------------------
Building      Building layout                                   Total size of the windows in Room 01
              Material of building elements                     Material of the external walls
              Occupancy and equipment load                      Number of occupants in Room 01
Sensor        Type of measurement                               Temperature, pressure, flow rate, etc.
              Measured object                                   Supply air duct of a VAV box
              Data interface for acquiring the measurement      BACnet device ID and object ID of the HVAC components
Controller    Controlled device                                 Speed of the supply fan for an AHU
              Communication interface                           ID of the BACnet device and object
              Set-point                                         Temperature set-point for a thermal zone
Actuator      Controlled device                                 Damper in a VAV box
              Data interface for acquiring the status of the    ID of the BACnet device and object
              controlled device
Relationship  Spatial relationship                              Space where the temperature sensor is located
              Topological relationship                          Connection between the air terminals and the VAV box
              Functional groups of the HVAC components          Components which serve the temperature control of a space
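To illustrate, the static items in Table 1 could be captured in simple typed records. The class names, field names, and example values below are hypothetical, chosen only to mirror the table's rows:

```python
from dataclasses import dataclass

@dataclass
class SensorInfo:
    """Static description of a sensor (Table 1, 'Sensor' rows)."""
    measurement_type: str   # e.g. "temperature", "pressure", "flow rate"
    measured_object: str    # e.g. "supply air duct of a VAV box"
    bacnet_device_id: int   # data interface for acquiring the measurement
    bacnet_object_id: int

@dataclass
class ControllerInfo:
    """Static description of a controller (Table 1, 'Controller' rows)."""
    controlled_device: str  # e.g. "supply fan of AHU 1"
    bacnet_device_id: int   # communication interface
    bacnet_object_id: int
    set_point: float        # e.g. zone temperature set-point in deg C

# Example instances for a single thermal zone (values are illustrative)
zone_sensor = SensorInfo("temperature", "Room 01", 1001, 5)
zone_controller = ControllerInfo("VAV box damper, Room 01", 1002, 7, 22.0)
```

Records like these change only when the building layout or HVAC configuration changes, matching the definition of static items above.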
806 COMPUTING IN CIVIL ENGINEERING
Dynamic information items are generated by the components of the HVAC systems
and the framework. The identified dynamic information items include the variables in
HVAC systems and outputs of the computerized approaches. Variables in HVAC
systems include the sensor measurements, set-point values, control signals, and
working status of the controlled HVAC components. These information items are
typically collected by the HVAC systems. To acquire these information items, the
framework needs the capability to communicate with the HVAC systems. Examples
of the outputs of computerized approaches include the type of fault and faulty
components which are detected by the FDD approaches.
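As a sketch of this communication capability, the snippet below polls a set of dynamic points. The `read_bacnet` stub and the point IDs are illustrative assumptions standing in for a real BACnet client read, not an actual protocol binding:

```python
# Hypothetical reader; a real deployment would use a BACnet client library.
def read_bacnet(device_id: int, object_id: int) -> float:
    """Stub standing in for a BACnet 'present value' read."""
    fake_points = {(1001, 5): 23.4, (1001, 6): 21.0}  # (device, object) -> value
    return fake_points[(device_id, object_id)]

def poll_variables(points):
    """Collect dynamic items (measurements, set-points) for each named point."""
    return {name: read_bacnet(dev, obj) for name, (dev, obj) in points.items()}

# Map variable names to the (device ID, object ID) pairs listed in Table 1
points = {"room01_temp": (1001, 5), "room01_setpoint": (1001, 6)}
snapshot = poll_variables(points)
# snapshot -> {"room01_temp": 23.4, "room01_setpoint": 21.0}
```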
Analysis of functional requirements for the integrated framework
The primary objective of the integrated framework is to automatically provide the
requested information to the computerized approaches and process their outputs.
According to the information requirements, the following functional requirements
were identified for the proposed framework:
Self-recognizing: The ability to recognize its own components and their
configurations and functions.
The static information items represent the characteristics of the building elements and
HVAC components. To be able to provide this information to the computerized
approaches, the framework needs the capability to recognize the configuration of its
components and their functions. For example, to provide the material information
about the windows in a building to a model-based FDD approach (Salsbury and
Diamond 2001), the framework should be able to identify the information about the
windows in the building and the associated material types.
Self-monitoring: The ability to monitor the conditions of the building indoor
environment and the HVAC systems.
The dynamic data and corresponding information items are generated by HVAC
components, such as sensors and controllers, and the computerized approaches in the
framework. To collect and process these items, the framework should be able to
communicate with the components and acquire the needed information items.
Self-configuring: The ability to re-configure the HVAC systems according to the
outputs of the computerized approaches.
To mitigate faults and apply control strategies that result in higher energy
efficiency, the configuration of the HVAC systems needs to be modified. For example, to
apply the supervisory control approach (Gibson 1997), the values of set-points in the
HVAC systems need to be modified. The framework should be able to reconfigure the
HVAC systems according to the outputs of the computerized approaches.
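One way to picture the resulting monitor-analyze-reconfigure cycle is the sketch below; all names are hypothetical, and the supervisory rule is reduced to a toy threshold:

```python
def control_cycle(read, approach, write):
    """One cycle of the envisioned framework: monitor, analyze, re-configure.

    `read` returns the current dynamic items, `approach` is any computerized
    approach (FDD or supervisory control) mapping readings to new set-points,
    and `write` pushes each new set-point back to the HVAC system.
    """
    readings = read()
    for point, value in approach(readings).items():
        write(point, value)

# Toy supervisory rule: raise the set-point when the zone is overcooled.
log = {}
control_cycle(
    read=lambda: {"room01_temp": 20.0, "room01_setpoint": 22.0},
    approach=lambda r: ({"room01_setpoint": 23.0}
                        if r["room01_temp"] < r["room01_setpoint"] else {}),
    write=lambda p, v: log.update({p: v}),
)
# log -> {"room01_setpoint": 23.0}
```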
Vision of the integrated framework for self-managing HVAC systems
Based on the analysis of information requirements and functional requirements, we
envisioned an integrated framework for self-managing HVAC systems. The three
functional requirements are achieved by three modules in the framework. There is
also a controller module that controls the operation of other modules. These modules
connect the computerized approaches with the real-world HVAC systems and
CONCLUSIONS
This paper has shown the need for an integrated framework to achieve the vision of
self-managing HVAC systems. By exploring the existing computerized approaches
that enable automated performance analysis and fault mitigation, the energy-saving
potential of these approaches was recognized. The information and functional
requirements for implementing such an integrated framework were analyzed based on
thirty-two previous studies. A prototype application was implemented to test the
feasibility of the envisioned framework.
The prototype showed the need for an approach that can extract the required
information from different sources, check its consistency, and provide it to the
computerized approaches. Further research is needed to develop such an
approach to address these needs.
ACKNOWLEDGEMENTS
The authors would like to acknowledge and thank the National Institute for Standards
and Technology (NIST) for the grant that supported the research presented in this
paper, which is part of the research project Identification of Functional Requirements
and Possible Approaches for Self-Configuring Intelligent Building Systems. The
authors would also like to acknowledge and thank Dr. Steven Bushby from NIST for
the input and feedback received over the duration of this project.
REFERENCES
Castro, N. (2004). Commissioning of building HVAC systems for improved energy
performance. Proceedings of the Fourth International Conference for
Enhanced Building Operations, Paris, France.
DoE, U. S. (2008). Building Energy Data Book. Washington, D.C., Energy Efficiency
and Renewable Energy, Buildings Technologies Program, U.S. DoE.
EIA (2008). The 2007 Commercial Building Energy Consumption Survey (CBECS).
Washington, D.C., U.S. Energy Information Administration.
Fernandez, N., M. Brambley, S. Katipamula, H. Cho, J. Goddard and L. Dinh (2009).
Self-Correcting HVAC Controls Project Final Report. Richland, WA (US),
Pacific Northwest National Laboratory (PNNL).
Gibson, G. (1997). "Supervisory controller for optimization of building central
cooling systems." ASHRAE Transactions (493).
Glazer, J. (2009). "Common Data Definitions for HVAC&R Industry Applications."
ASHRAE Transactions 115: 531-544.
Jagpal, R. (2006). Computer Aided Evaluation of HVAC System Performance:
W. Orabi1, M. ASCE
1Assistant Professor, Department of Construction Management, Florida International
University, 10555 West Flagler Street, EC 2952, Miami, FL 33174-1630; PH (305)
348-2730; FAX (305) 348-6255; email: worabi@fiu.edu
ABSTRACT
Post-disaster response and recovery efforts for damaged transportation
networks are typically complex and challenging tasks. This is mainly due to the
limited availability of resources and the dynamic changes to the status of the
transportation networks undergoing recovery efforts. The complexity of these tasks
is further exacerbated by the lack of adequate communication among the
different stakeholders involved in the response and recovery efforts. Therefore,
improving the communication between the Departments of Transportation (DOTs),
contractors, suppliers and the public can facilitate swift, hassle-free and cost-effective
post-disaster response and recovery efforts of damaged transportation networks. This
paper presents the development of a web-based resource management system that is
designed to provide a near real-time and cost-effective medium for exchanging
important data between the main response and recovery stakeholders, and to provide
useful and up-to-date information to the public about the progress of the response
and recovery efforts. To this end, the system is designed with four main portals for: DOTs,
contractors, suppliers and the public. The use of this system should prove useful to
all users and should help control and minimize the impact of disasters on society.
INTRODUCTION
The response and recovery efforts for damaged transportation networks in the
aftermath of natural disasters are challenging and complex tasks. This is mainly due
to the limited availability of reconstruction resources (Orabi et al. 2009). It is
therefore extremely important to optimize the utilization of these limited resources in
order to control and minimize the impact of natural disasters on society (Orabi et
al. 2010). This optimization process however requires prompt and accurate exchange
of data between the main stakeholders of the post-disaster recovery process (Manoj
and Baker 2007).
The four main stakeholders involved in post-disaster recovery of damaged
transportation networks are: departments of transportation (DOTs), contractors,
suppliers and the public. There are myriad types of data and information that need
to be exchanged between pairs of these stakeholders on a frequent basis. For
example, departments of transportation (DOTs) need accurate and almost instant
information about the availability of resources they can deploy to respond to disasters
(Chen et al. 2007), as shown in Figure 1. Similarly, both DOTs and contractors need
an effective and efficient way to communicate recovery project data as shown in
Figure 1. In addition, DOTs need to keep the public updated on the progress of the
recovery effort and receive their feedback on the disaster management practices,
while the contractors need to promptly acquire the construction materials needed for
the reconstruction works, as shown in Figure 1.
It is therefore important to provide a swift, hassle-free and cost-effective
communication medium that can facilitate the exchange of data and information
Planner Portal
This portal is designed to provide planners and decision makers in DOTs with
the tools needed for successful disaster management efforts. Through this portal,
planners can efficiently and effectively exchange important data and useful
information with contractors and the public. The major tools available for planners in
this portal include, as shown in Figure 2:
Searching for and downloading the contractor resources available for post-
disaster response and recovery efforts. The planner can sort these resources by
their type, availability, location, productivity, among other attributes. Orders to
deploy resources to respond to extreme events can be placed and delivered almost
instantly to a contractor’s email inbox and/or cellular phone. In addition, planners
can use the downloaded reconstruction resources data to plan and optimize the
recovery and reconstruction efforts of damaged transportation networks. The
reconstruction efforts can be optimized to simultaneously minimize both the network
service disruption and the public expenditures on reconstruction works (Orabi et al.
2009; Orabi et al. 2010). Based on the results of the optimization process, planners
can assign recovery projects to interested and qualified contractors and notify them
through the system accordingly.
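A minimal sketch of such a search tool follows; the resource records, attribute names, and filter parameters are hypothetical assumptions, not the actual RMS schema:

```python
# Hypothetical resource records as the planner portal might download them.
resources = [
    {"type": "excavator", "location": "Miami", "available": True,  "productivity": 120},
    {"type": "crane",     "location": "Tampa", "available": False, "productivity": 80},
    {"type": "excavator", "location": "Tampa", "available": True,  "productivity": 95},
]

def search(resources, rtype=None, available_only=False, sort_key="productivity"):
    """Filter by type/availability, then sort, mirroring the portal's search tool."""
    hits = [r for r in resources
            if (rtype is None or r["type"] == rtype)
            and (not available_only or r["available"])]
    return sorted(hits, key=lambda r: r[sort_key], reverse=True)

best = search(resources, rtype="excavator", available_only=True)
# best[0] is the available excavator with the highest productivity
```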
traffic flow. Monitoring such data can allow DOTs to: (i) evaluate the impact of
reconstruction work on the level of service provided by the network; (ii) inform the
public about road closures and recommend suitable detours; and (iii) keep the public
updated on the progress of the recovery efforts. RMS also allows contractors,
through their portal as described in the following subsection, to keep the DOT
informed on any planned road closures due to construction activities and/or
reintroducing closed roads into the network after the completion of construction
work.
Keeping the public updated on the progress of the recovery efforts and receiving
their feedback. DOTs can use RMS to communicate with the public on the progress
of the recovery efforts of the damaged transportation network and solicit their
feedback on the handling of post-disaster issues. For example, DOTs can provide
the public with information on: (i) current road status and suggested detours for
closed or partially closed roads; (ii) expected project completion dates of major
reconstruction projects; (iii) safety tips for motorists driving near construction
jobsites; and (iv) reports on the government’s handling of post-disaster issues
including contract solicitation and public expenditures on reconstruction works. In
addition, RMS provides DOTs with the capability of soliciting important feedback
from the public on: (i) the decisions made by the DOT in response to the disaster; (ii)
the progress of the recovery efforts; and (iii) suggestions to improve handling of post-
disaster issues, if any.
Contractor Portal
This portal is designed to facilitate the communication and data exchange
between contractors and each of the DOT and suppliers. The portal enables
contractors to exchange resource and project data with the DOT; and check the
availability and place orders for construction materials from suppliers. To support
these functions, the contractor portal provides the following tools to contractors, as
shown in Figure 3:
resource; (ii) number of crews available from this resource; (iii) availability dates for
each crew; (iv) productivity of each crew, if different; (v) the current location of each
crew; and (vi) daily unit cost for regular, overtime and weekend shifts. The
contractor should provide this list in pre-disaster times and is responsible for
keeping it updated on a regular basis (e.g., bi-weekly). A complete and up-
to-date resource list will therefore be available to DOTs when planning for response
and recovery efforts from disasters (as described above in the planner portal
subsection).
Report any planned road closures during reconstruction works. Contractors can
use RMS to keep DOTs updated on any planned road closures based on the progress
data discussed in the previous tool. According to the schedule of planned
reconstruction work, the contractor should identify in advance, for any planned road
closure: (i) the location of the closure; (ii) the number and length of closed lanes;
and (iii) the start time and duration of the closure. DOTs can then monitor and analyze these data to plan
for any road closure. For example, suitable detours can be identified and announced
to the public in a timely manner (as described in the planner portal subsection). This kind of
information exchange can facilitate an effective and efficient recovery process and
contribute to controlling and minimizing the level of service disruption experienced
by travelers during the reconstruction efforts.
Check the availability of and place orders for construction materials needed for
reconstruction of damaged transportation networks. Contractors can use RMS as
a one-stop-shop to search for and compare construction materials from different
sources. Using the material search tool, contractors are able to compare materials
available from different suppliers and sort them based on technical specifications, unit
price, availability, and delivery estimate. In addition, contractors can place initial
purchase orders for construction materials through the system. This initial order is
simply a notification to the supplier of the contractor’s intent to purchase a specific
quantity by a given date. Such communication and exchange of information can
significantly contribute to an effective and efficient post-disaster recovery process.
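The comparison logic described here can be sketched as a simple ranking; the offer records and field names below are illustrative assumptions:

```python
# Hypothetical supplier offers for one construction material.
offers = [
    {"supplier": "A", "material": "ready-mix 3000 psi", "unit_price": 112.0, "delivery_days": 3},
    {"supplier": "B", "material": "ready-mix 3000 psi", "unit_price": 108.0, "delivery_days": 7},
    {"supplier": "C", "material": "ready-mix 3000 psi", "unit_price": 108.0, "delivery_days": 4},
]

def compare(offers):
    """Rank matching offers by unit price, breaking ties on delivery estimate."""
    return sorted(offers, key=lambda o: (o["unit_price"], o["delivery_days"]))

ranked = compare(offers)
# ranked[0]["supplier"] -> "C"  (cheapest, and faster delivery than B)
```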
Supplier Portal
The objective of this portal is to provide suppliers with the tools
needed to promote and sell their products to contractors while facilitating a
successful post-disaster recovery process. It is also possible to charge suppliers a
registration fee for using the system and the proceeds can be collected in a disaster
management fund. The following describe the RMS tools available to suppliers, as
shown in Figure 4:
Public Portal
This portal is designed to allow departments of transportation (DOTs) to
communicate with the public during regular times and in post-disaster situations for
the benefit of the society at large. Through this portal, citizens can get updates on the
recovery process and provide their feedback on the RMS and the DOT’s disaster
management practices. In order to facilitate these functions, RMS provides the
following tools to the public, as shown in Figure 5:
Obtain disaster handling information and recovery updates. RMS provides the
public with important information on handling of disasters. This information includes
the planning, preparedness, response, and recovery practices adopted by the DOT. In
addition, the public can obtain tips on traveling safely through a damaged
transportation network including the suggestion of suitable detours for closed roads
(as described above in the planner portal subsection). DOTs can also keep the public
informed on the progress of the recovery process including tracking of contract
solicitation and expenditures on the reconstruction works.
SYSTEM IMPLEMENTATION
RMS runs PHP 5 for the server-side code with a MySQL 5 database
on an Apache 2 server. To provide users with faster and cleaner interfaces,
RMS also includes AJAX code. The database is designed to effectively and
efficiently store and retrieve data on: users, roads, projects, resources, and materials.
Cascading Style Sheets (CSS) are used to support all major desktop Internet
browsers and eliminate browser compatibility issues.
commercially available software that are currently used by DOTs and contractors.
Similarly, new modules are needed to integrate construction material inventory with
suppliers’ in-house inventory systems to provide the capability of finalizing material
acquisition on RMS. In addition, geographical information systems (GIS) can be
incorporated with the transportation network data in RMS to allow: (i) improved
reporting of progress in reconstruction efforts of the damaged transportation network;
(ii) advanced tracking of reconstruction resources; and (iii) enhanced visualization of
suitable detours to closed roads.
REFERENCES
Aldunate, R.; Ochoa, S. F.; Peña-Mora, F.; and Nussbaum, M. (2006). “Robust
Mobile Ad Hoc Space for Collaboration to Support Disaster Relief Efforts
Involving Critical Physical Infrastructure,” Journal of Computing in Civil
Engineering, ASCE, 20(1), 13–27.
American Society of Civil Engineers (ASCE) (2009). “Report Card for America’s
Infrastructure.” <http://www.infrastructurereportcard.org/> (December 29,
2010).
Chen, A. Y.; Tsai, M-H; Lantz, T. S.; Plans, A. P.; Mathur, S.; Lakhera, S.; Kaushik,
N.; Peña-Mora, F. (2007). “A Collaborative Framework for Supporting Civil
Engineering Emergency Response with Mobile Ad-Hoc Networks,” ASCE
Conference Proceedings 261, ASCE, Reston, VA, 68.
Franco, G.; Green, R.; Khazai, B.; Smyth, A.; and Deodatis, G. (2010). “Field
Damage Survey of New Orleans Homes in the Aftermath of Hurricane
Katrina,” Natural Hazards Review, ASCE, 11(1), 7–18.
Kapucu, N. (2006). “Interagency Communication Networks During Emergencies:
Boundary Spanners in Multiagency Coordination,” The American Review of
Public Administration, 36(2), 207–225.
Lambert, J. H. and Patterson, C. E. (2002). “Prioritization of schedule dependencies
in hurricane recovery of transportation agency,” Journal of Infrastructure
Systems, 8(3), 103–111.
Manoj, B. S. and Baker, A. H. (2007). “Communication challenges in emergency
response,” Communications of the ACM, 50(3), 51–53.
Orabi, W.; El-Rayes, K.; Senouci, A.; and Al-Derham, H. (2009). “Optimizing Post-
Disaster Reconstruction Planning for Damaged Transportation Networks,”
Journal of Construction Engineering and Management, ASCE, 135(10),
1039–1048.
Orabi, W.; El-Rayes, K.; Senouci, A.; and Al-Derham, H. (2010). “Optimizing
Resource Utilization during the Recovery of Civil Infrastructure Systems,”
Journal of Management in Engineering, ASCE, 26(4), 237–246.
Portmann, M. and Pirzada, A. A. (2008). “Wireless Mesh Networks for Public Safety
and Crisis Management Applications,” IEEE Internet Comp., 12(1), 18–25.
Time, Cost and Environmental Impact Analysis on Construction Operations
ABSTRACT
INTRODUCTION
environmental impact during construction phases, there are still gaps between the
ultimate goal of environmentally conscious construction and contributions of those
studies. This is because most of the studies have been focused on a specific
dimension, i.e., environmental impact, and overlooked the multi-objective nature of
construction projects. Only recently, a couple of studies in the environmentally
conscious construction management category have addressed the issue of multiple
objectives to a certain degree (e.g., Marzouk et al. 2008).
It is critical to develop an analytic procedure for studying the multi-objective
characteristic of construction projects, and thus this paper will discuss methodology
for analyzing the relationships between project time, cost and environmental impact
(TCEI). Currently, time and cost are the major project constraints that are carefully
planned and controlled by construction professionals. Although there are other factors
such as quality and safety that are also important, this study will only include
environmental impact. Other considerations can be added later. A sample project, the
Future House USA project, is used as a case study to demonstrate the application of
the framework.
BACKGROUND
METHODOLOGY
TCEI Estimation for Analysis. The TCEI information for the different alternatives
of construction operation is used in the optimization. Time and cost data are derived
fitness value means a better solution, as it would be closer to the origin. The
fitness function would then be defined as:
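The equation itself is not reproduced in this excerpt. One form consistent with the description (the distance of the normalized cost, time and GWP values from the origin, with smaller values being better) might be sketched as follows; the normalization constants are illustrative assumptions:

```python
from math import sqrt

def fitness(cost, time, gwp, max_cost, max_time, max_gwp):
    """Normalized Euclidean distance to the origin in (cost, time, GWP) space.

    A smaller value is a better solution, since it lies closer to the ideal
    zero-cost, zero-time, zero-impact point. max_* are normalization bounds
    (e.g., the worst value of each objective in the current population).
    """
    return sqrt((cost / max_cost) ** 2
                + (time / max_time) ** 2
                + (gwp / max_gwp) ** 2)
```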
CASE STUDY
Act. No  Alt. No  Description                                                                  Cost (US $)  Time (days)  GWP (kg CO2 eq)
1        1        Sitework, cut & chip light trees to 12" diam., finish grading,
                  removal not incl.                                                              5,039.71    4             1,728.86
1        2        Sitework, cut & chip light trees to 12" diam., finish grading,
                  removal incl.                                                                  4,924.93    4             2,938.36
2        1        Excavation and Fill, 1' to 4' deep, 3/8 CY excavator, backfill trench            360.71    2               317.66
2        2        Excavation and Fill, 1' to 4' deep, 1/2 CY excavator, backfill trench            297.05    2               399.34
3        1        Footing, 3000 psi concrete, 60000 psi rebar, direct chute                     84,232.67    6             9,541.15
3        2        Footing, 3000 psi concrete, 60000 psi rebar, pumped, formwork
                  crew doubled                                                                  90,392.28    5             9,715.51
4        1        Stem Wall, 3000 psi concrete, 60000 psi rebar, direct chute                   76,650.79   13             9,647.65
4        2        Stem Wall, 3000 psi concrete, 60000 psi rebar, pumped, formwork
                  crew doubled                                                                  86,174.94    8             9,822.01

Table 1 - Activities and Alternatives
Table 2 shows the cost, time and GWP results of a random set of
chromosomes as an example. In the first chromosome, Activity No. 1 is performed
using option 1; Activity No. 2 is also performed using option 1; and Activity No. 3 is
performed using option 2 from the options available to perform each activity
respectively.
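Using the four activities shown in Table 1, decoding a chromosome (one gene per activity, selecting an alternative) can be sketched as follows. The function names are illustrative, and summing durations assumes the activities run strictly in sequence, a simplification a real scheduler would refine:

```python
# Alternatives per activity, from Table 1: (cost US $, time days, GWP kg CO2 eq)
alternatives = {
    1: {1: (5039.71, 4, 1728.86),  2: (4924.93, 4, 2938.36)},
    2: {1: (360.71, 2, 317.66),    2: (297.05, 2, 399.34)},
    3: {1: (84232.67, 6, 9541.15), 2: (90392.28, 5, 9715.51)},
    4: {1: (76650.79, 13, 9647.65), 2: (86174.94, 8, 9822.01)},
}

def evaluate(chromosome):
    """Sum cost, time and GWP over the alternative each gene selects per activity."""
    picks = [alternatives[act][alt] for act, alt in enumerate(chromosome, start=1)]
    return tuple(round(sum(v), 2) for v in zip(*picks))

cost, time, gwp = evaluate([1, 1, 2, 2])  # activities 1-4 with alternatives 1, 1, 2, 2
```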
PARETO FRONT
Alternative selected for activity 1-11    Cost (US $)  Time (days)  GWP (kg CO2 eq)  Fitness  Dispersion
1 1 2 2 2 1 2 3 1 1 2                     439,811      107          65,119           0.291    0.00%
1 2 2 2 2 1 2 3 1 1 2                     439,747      107          65,200           0.291    0.00%
2 1 2 2 2 1 2 3 1 1 2                     439,696      107          66,328           0.293    0.14%
2 2 2 2 2 1 2 3 1 1 2                     439,632      107          66,410           0.293    0.15%
1 2 1 2 2 1 2 3 1 1 2                     433,587      108          65,026           0.305    1.41%
2 1 1 2 2 1 2 3 1 1 2                     433,536      108          66,154           0.307    1.54%
1 2 2 2 2 1 2 3 1 2 2                     437,487      108          65,200           0.309    1.75%
1 2 1 2 2 1 2 3 1 2 2                     431,327      109          65,026           0.324    3.25%
2 1 1 2 2 1 2 3 1 2 2                     431,276      109          66,154           0.325    3.37%
2 2 1 2 2 1 2 3 1 2 2                     431,212      109          66,235           0.325    3.38%

Table 3 - The Pareto front for the "Future House USA"
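The non-dominated filtering that produces such a front can be sketched as below; the solution tuples are illustrative values in (cost, time, GWP) order, including one deliberately dominated point:

```python
def pareto_front(solutions):
    """Keep solutions not dominated in (cost, time, GWP), all minimized.

    A solution is dominated if another is no worse in every objective
    and strictly better in at least one.
    """
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [s for s in solutions if not any(dominates(o, s) for o in solutions)]

sols = [(439811, 107, 65119),   # cheap GWP at 107 days
        (433587, 108, 65026),   # cheaper but one day longer
        (439747, 107, 65200),
        (440000, 108, 66000)]   # dominated by the first solution
front = pareto_front(sols)
# The first three trade off against each other and survive; the last is dropped.
```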
CONCLUSION
construction method selection, other factors that can have an impact on the
connections between time, cost and GWP need to be studied in the future.
REFERENCES
Bilec, M., Ries, R., & Matthews, H. S. (2007). Sustainable development and green
design - who is leading the green initiative? Journal of Professional Issues in
Engineering Education and Practice, 133(4), 265-269.
Chen, Z., Li, H., Kong, S. C., & Xu, Q. (2005). A knowledge-driven management
approach to environmental-conscious construction. Construction Innovation,
5, 27-39.
El-Rayes, K., & Kandil, A. (2005). Time-cost-quality trade off analysis for highway
construction. Journal of Construction Engineering and Management, 131(4),
477-486.
Howes, R. (2000). Improving the performance of earned value analysis as a
construction project management tool. Journal of Engineering, Construction
and Architectural Management, 7(4), 399-411.
Jiang, A., & Zhu, Y. (2010). A multi-stage approach to time-cost trade-off analysis
using mathematical programming. International Journal of Construction
Management.
Li, X., Zhu, Y., & Zhang, Z. (2009). An LCA-based environmental impact
assessment for construction processes. Building and Environment, accepted
for publication.
Marzouk, M., Madany, M., Abou-Zied, A., & El-Said, M. (2008). Handling
construction pollutions using multi-objective optimization. Construction
Management and Economics, 26, 1113-1125.
Mouzon, G., & Yildirim, M. B. (2008). A framework to minimize total energy
consumption and total tardiness on a single machine. International Journal of
Sustainable Engineering, 1(2), 105-116.
Ozcan, G., & Zhu, Y. (2009). Life-cycle assessment of a zero-net energy house. The
Proceedings of the International Conference of Construction and Real Estate
Management (ICCREM). Beijing, China: The Chinese Construction Industry
Press.
Shen, L., Lu, W., Yao, H., & Wu, D. (2005). A computer-based scoring method for
measuring the environmental performance of construction activities.
Automation in Construction, 14, 297-309.
Zhu, Y. (2006). Applying computer-based simulation to energy auditing: a case study.
Energy and Buildings, 38, 421-428.
Learning to Appropriate a Project Social Network System Technology
Ivan Mutis1 and R.R.A. Issa2
1, 2M.E. Rinker School of Building Construction, College of Design Construction
ABSTRACT
Construction project participants constitute a complex social human network
composed of a heterogeneous and fragmented set of stakeholders. The disjoint groups
of actors that team up to work on a project constitute collective entities: social networks
at different scales in time and space. There is a need to incorporate new social network
systems that respond to the demand for interfacing actors' communication in
construction project practices. For this purpose, it is critical to understand how social
actors interact with these new technologies. This research proposes a framework to
understand the actors’ learning process of a social networking system technology, in
particular an in-house-developed social network system for construction projects. The
challenge is to understand the interplay of the social network system and the social
actors, as it involves the interaction of multiple actors. It is expected that learning and
understanding the components of this technology will lead learners to use its
resources effectively and enable them to appropriate its associated processes,
including communication and coordination with the construction workforce,
and the creation, contribution, and distribution of information content.
INTRODUCTION
As appropriation is the process by which users adopt and adapt technologies, fitting
them into their working practices (Dourish 2003), there is a need to understand how
this process occurs with a critical mass of learners. This research takes a social
network system as the mediating technology for its analysis.
Learning to interact and to communicate the information content of
construction projects will improve the speed of the adoption of new technologies. This
research explores methods for learning to collaboratively communicate construction
project information through social networking environments by appropriating a social
network system. The urgent challenge to advance the competitiveness and efficiency
of the construction industry through innovative methods to connect its workforce is
recognized.
This investigation uses a systematic model based on structuration theory
(Giddens 1984) to learn how social actors appropriate technology (DeSanctis et al.
1999; Orlikowski 1992; Orlikowski 2000). The model is an analytical construct that
assists in the understanding of the use, advantages, and limitations of mediating
technologies. The model explicitly associates the social network actors as a social
structure, and the social network system as a technology that enables effective
communication of information content. It is expected that the deployment of this
technology will benefit the acquisition of concepts, knowledge, and skills for their
effective interfacing through the use of mediating technologies. Learners will use the
technology and its resources to simulate the interfacing of actors in collaborative
settings within the contexts of a project, to interact, share social objects, look for
affinity roles within the social network, annotate documents, and send messages, as
part of the features and services of a social network. Central to building the learners’
experience with the technology is an understanding of the mediation process between
the technology and the users.
As it is critical to clearly comprehend the fundamental components that
underlie the proposed framework, the following section defines actors, teams, and
communities as social components, followed by the explanation of the social network
system as a mediating technology.
The third element of the layer in Figure 2 is social structures. Social
structures are defined according to properties of the actors. The actors’ status quo can
be defined by beliefs (e.g. the degree to which members trust in the technology);
modalities of conduct; contextual relationships; and the history records to which actors have access.
The contextual relationships describe the status of the user or actor within the
organization, which is generally defined by the organization's norms. Another
dimension of contextual relationships is the one the actor has with the environment.
For example, actors can belong to other forms of structures other than the actors’ own
organization, such as union organizations. Alternatively, other social structures that
offer a source of constraints to the users in their activities can be defined as contextual.
There are multiple combinations of the dimensions according to the properties
of the structures of the technology, the information, and the social actors. There are
also multiple instances when the structures are not defined explicitly. In such cases,
the actors' actions with the technology are required to uncover the implicit structure,
or to evolve the structures and uncover new dimensions. This case is represented in
Figure 2 as the non-explicit social structure.
Action. The technology and its resources are brought into action along with the users'
social structures. As the actors' status quo is defined by the social structures, the
actors' actions are produced under those conditions. The actions are the actors'
responses to the technology, and they reflect reactions to the technology's
rules and constraints (DeSanctis and Poole 1994). Typical uses of technology are
integrating, sharing, and exchanging the information the technology provides to its users.
The actions reflect either a good or a poor understanding of the reasons for and purposes
of the technology and its information content.
There is a mediation process between the technology and the users. The
mediation consists of processing, computing, and facilitating the accessing, retrieving,
and searching of information. Other mediation processes connect users, as is the
case with the social networking system. Figure 2 shows the action layer, including its
elements and their features.
Output. The act of appropriation of technology occurs when users decide to
employ, or not to employ, certain resources of the technology (DeSanctis and Poole 1994). For
example, if users decide only to mark up construction documents through the tagging
tools provided by the social network system, they appropriate the annotation resource of
the mediating technology. When users bring technology into action, they may find
uses other than the ones the resources were originally designed for. In this case,
the appropriation of the technology occurs but the resources take new forms
(Orlikowski 1992; Orlikowski 2000). The technology takes new forms that are
defined by the structure for those particular actors. These new forms occur when
certain groups of users repeatedly interact with the technology.
Learning and appropriation. Learning how to use the technology's resources,
therefore, is a process that results from appropriation. There is a wide range of cases in which
students learn to fully exploit the resources provided by the technology, or only poorly
understand the reasons for and purposes of those resources. The proposed framework is
used to explore the learning process of a group of users. The framework provides a
template for understanding the social interaction of a group of actors learning a social
network system, since actors perform both collective and individual actions. The
framework also provides a strategy for understanding the attitudes and abilities of users
towards the mediating technology. This is possible, for example, by studying the
actions and reactions to the purposes and reasons stated in the framework.
REFERENCES
REFERENCES
Aconex. (2010). Project collaboration and online project management system.
Accessed September 2010.
Beer, M. (1998). "Organizational behavior and development." HBS Working
Papers Collection, Harvard Business School, Cambridge, MA, 17.
Carroll, J. M., Rosson, M. B., Farooq, U., and Xiao, L. (2009). "Beyond being aware."
Information and Organization, 19(3), 162-185.
Chia, R. (2000). "Discourse Analysis Organizational Analysis." Organization, 7(3),
513-518.
Cohen, S. G., and Bailey, D. E. (1997). "What Makes Teams Work: Group
Effectiveness Research from the Shop Floor to the Executive Suite."
Journal of Management, 23(3), 239-290.
DeSanctis, G., and Monge, P. (1999). "Introduction to the Special Issue:
Communication Processes for Virtual Organizations." Organization
Science, 10(6), 693-703.
DeSanctis, G., and Poole, M. S. (1994). "Capturing the Complexity in Advanced
Technology Use: Adaptive Structuration Theory." Organization Science,
5(2), 121-147.
Dourish, P. (2003). "The Appropriation of Interactive Technologies: Some
Lessons from Placeless Documents." Computer Supported Cooperative
Work (CSCW), 12(4), 35.
Dourish, P., and Bellotti, V.(1992) "Awareness and coordination in shared
workspaces." Proceedings of the 1992 ACM conference on Computer-
supported cooperative work, Toronto, Ontario, Canada, 107-114.
Foley, J., and Macmillan, S. (2005). "Patterns of interaction in construction team
meetings." CoDesign: International Journal of CoCreation in Design and the
Arts, 1(1), 19 - 37.
Giddens, A. (1984). The constitution of society : outline of the theory of
structuration, University of California Press, Berkeley.
ABSTRACT
INTRODUCTION
requests from a wide range of buildings across the campus. The maintenance data of
selected buildings is extracted for storage and analysis in a Data Warehouse (DW).
This paper first introduces the scope of decision support. The second part
introduces a DSF, including a methodology for analyzing maintenance activities and
the development of a generic DSM and a DST based on the DSM. The final part
describes the application of the tool and proposes five suggestions, including an
example of decision support for upgrading the existing fluorescent tubes of the Boole Building at UCC.
Figure 1 Simplified diagram of the Decision Support Framework (recoverable block labels include: dependent and independent factors, building performance, existing building description model, comparison, technical analysis, maintenance analysis, decision making, renovation solutions, financial analysis, and database)
Renovation works are not in progress for all six of these buildings, so this case
study focuses on Section II and Section III from the maintenance perspective. For a
Section II decision flow (from the flowchart): after analysis of the maintenance data, the questions asked are: Has the component exceeded its life span? Is the maintenance cost higher than normal? Will renovation save energy? Would the owner like to renovate or improve it? Depending on the answers, the outcomes are: continue maintenance and repair; optimize maintenance scheduling to reduce maintenance density or cost; or consideration of all possible solutions, where energy performance analysis (simulation) aims to select a better renovation alternative.
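The screening logic above can be sketched as a small function. The field names and the exact branch wiring are assumptions reconstructed from the flowchart residue, not code from the published framework.

```python
# Sketch of the Section II screening logic reconstructed from the decision
# flowchart. Field names and exact branch wiring are assumptions.

def screen_component(component):
    """Return a recommended action for one building component."""
    if not component["exceeded_life_span"]:
        if not component["maintenance_cost_above_normal"]:
            return "continue maintenance and repair"
        return "optimize maintenance scheduling to reduce density or cost"
    if not component["renovation_saves_energy"]:
        return "optimize maintenance scheduling to reduce density or cost"
    if not component["owner_willing_to_renovate"]:
        return "continue maintenance and repair"
    return "consider all possible renovation solutions"

# e.g. the Boole building lighting, which went on to full renovation
boole_lighting = {
    "exceeded_life_span": True,
    "maintenance_cost_above_normal": True,
    "renovation_saves_energy": True,
    "owner_willing_to_renovate": True,
}
print(screen_component(boole_lighting))
```

In this reading, only a component that is past its life span, saves energy when renovated, and has a willing owner reaches the "consider all possible solutions" branch.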
Improve settings of the existing BMS – Checking the fault reason shows, for example,
that in the Boole building more than 90% of maintenance activities were associated with poor
indoor performance (the room was too cold or too warm). The historical maintenance
data make it possible to display which areas repeatedly have this kind of problem; it is
therefore important to reset the temperature set-points of the existing Building Management
System (BMS) sensors, or to add more sensors in these areas, to improve occupant comfort
levels.
Upgrade lighting systems – The main type of energy consumed on the UCC
campus is electricity. Currently, most fluorescent tubes in UCC's lighting systems
are T12 or T8. Thomas et al. (1991) showed that significant energy savings could be
attained by using more efficient lighting systems. T5 tubes are a newer generation of
fluorescent tube designed to optimize energy consumption, with a lifespan of more
than 30,000 hours, providing clear savings on installation and long-term maintenance.
According to previous retrofit experience, a lighting system with T5 tubes was about
38% more energy efficient than the conventional T8 system (Wu and Lam, 2005), and the
illumination at the working plane increased to 500-700 lux at the same time.
Improve existing mechanical ventilation systems – Many buildings at UCC
were built with mechanical ventilation systems, some of which are currently unused
(e.g., the Kane building). The existing vents waste energy during winter
because cold air always enters occupied rooms through them. Therefore,
for offices that can draw enough fresh air through natural ventilation, it is
better to seal the existing vents to avoid air leakage. For public computer labs, where
the room temperature is always higher than the prescribed value of 19-21°C,
ventilation units with heat recovery were suggested, both for buildings that currently
use mechanical ventilation and for those that do not; such units improve
traditional mechanical ventilation and can save more than 75% of energy
and eliminate 80% of heat losses (Hazucha, 2009).
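As a rough worked example of the heat-recovery figure: the 80% heat-loss elimination comes from the text (Hazucha, 2009), while the baseline annual ventilation heat loss used below is an invented placeholder.

```python
# Rough worked example of the heat-recovery figure. The 80% heat-loss
# elimination is from the text (Hazucha, 2009); the baseline annual
# ventilation heat loss of 4,000 kWh is an invented placeholder.
baseline_vent_loss_kwh = 4000.0
heat_recovery_effectiveness = 0.80  # "eliminate 80% heat losses"

# heat loss remaining after fitting a heat-recovery unit
remaining_loss_kwh = baseline_vent_loss_kwh * (1 - heat_recovery_effectiveness)
print(round(remaining_loss_kwh))
```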
Improve the existing 'core and shell' – Buildings built before 1990 at UCC had
no insulation materials. Previous research (Yin et al., 2009) on the CEE building, for
example, showed that improving the roof and wall insulation leads to savings of 33%
and 19% respectively, and that replacing the windows adds a further 11%, so that the
overall reduction eventually reaches almost 65%.
Section I, IV, V, VI – Initial Survey, Feasible Options, Options
Evaluation, and Decision Making: Once possible renovation solutions have been
proposed, feasible options have to be generated and evaluated, and finally a renovation
decision made. In the Boole building, for example, the existing
T8 fluorescent tubes were replaced by T5 tubes after discussion. The renovation work involved
two options and four contractors (Table 3). The evaluation and comparison of these
options and contractors were carried out; Table 3 summarizes this
comparison (O'Regan, 2010), covering four contractors and eight options in total by
comparing renovation costs (fitting cost, labor cost, metering cost,
commissioning/certification cost) and payback time. Finally, Option 2 from
Contractor 2 was selected because of its lowest cost and satisfactory payback time.
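A Table 3 style comparison can be sketched as follows. All cost figures below are invented placeholders, not the values from the UCC report (O'Regan, 2010), and the annual energy saving used for payback is likewise an assumption.

```python
# Illustrative sketch of a Table 3 style comparison: total renovation cost
# per contractor/option plus a simple payback period. All figures are
# invented placeholders, not the values from the UCC report.

COST_ITEMS = ("fitting", "labor", "metering", "commissioning")

def total_cost(option):
    """Sum the four cost components compared in the report."""
    return sum(option[item] for item in COST_ITEMS)

def simple_payback_years(option, annual_saving):
    """Simple payback: capital cost divided by annual energy-cost saving."""
    return total_cost(option) / annual_saving

options = {
    ("Contractor 1", "Option 1"): {"fitting": 11000, "labor": 5000,
                                   "metering": 600, "commissioning": 700},
    ("Contractor 2", "Option 2"): {"fitting": 9000, "labor": 4000,
                                   "metering": 500, "commissioning": 500},
}

# pick the cheapest contractor/option combination
best = min(options, key=lambda key: total_cost(options[key]))
print(best, total_cost(options[best]),
      simple_payback_years(options[best], annual_saving=3500))
```

With the placeholder figures, the cheapest combination is selected first and its payback checked afterwards, mirroring the cost-then-payback order described in the text.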
CONCLUSION
REFERENCES
Bohanec, M., 2003. "What is Decision Support?"
Energy Consumption Guide 19, “Energy Use in Offices”,
< http://www.carbontrust.co.uk/Publications/pages/publicationdetail.aspx?id=ECG019>
Gilfillan, L., 1997. “Project Management and Evaluation.”
<http://lga-inc.com/ut/syllabus/Session7and8/index.htm>.
Han, J., Kamber, M., 2001. Data Mining: Concepts and Techniques, Morgan Kaufman.
Hazucha, J., 2009, “Renovation of Social Buildings - Guidelines for complex renovations.”
IMOS, Inc., 1997. “Decision Support Primer.” <http://www.imos.com/whatis.htm.>
Morrison, J.G., Moore, R.A., 1999. “Design Evaluation and Technology Transition: Moving Ideas from the
drawing board to the Fleet.” <http://wwwtadmus.spawar.navy.mil/Slides/JGMC2Conf/index.htm>.
O' Regan, K., Sweeney S. M., 2010, “Upgrade of Boole Library Lighting - Proposed Luminaries.” UCC Report.
SRI, 2001. Maths & Decision Systems Group, Silsoe Research Institute,
<http://www.sri.bbsrc.ac.uk/scigrps/sg9.htm>.
Thomas, P. C., Natarajan, B., Anand, S., 1991, “Energy conservation guidelines for government office buildings
in New Delhi.” Energy and Buildings,Vol. 16 (1–2), pp.617–623.
Vreenegoor, R.C.P., de Vries, B., Hensen, J.L.M. (2008). “Energy saving renovation, analysis of critical factors at
building level.” Proc. 5th International Conference on Urban Regeneration and Sustainability.
Skiathos: WIT Press. 653-663.
Wu, K. T., Lam, K. K., 2005, “Office lighting retrofit using T5 fluorescent lamps and electronic ballasts.” The
Hong Kong Institution of Engineers Transactions, Vol. 10 (1).
Yin, H., Otreba, M., Allan,L., Menzel, K., 2009, “A Concept for IT-Supported Carbon Neutral Renovation.”
Dikbas A., Ergen E. & Giritli H. (eds.): “Sustainability”, Proceedings of 26th W78 Conference on
Information Technology in Construction, ISBN 978-0-415-56744-2 (hbk), ISBN 978-0-203-85978-0
(eBook) pp.611 – 619, ITU, Istanbul, Turkey.
Yin, H., Menzel, K., 2011, “Decision Support Model for Building Renovation Strategies.” International
Conference on Building Science and Engineering (ICBSE) 2011, Venice, Italy, April 2011.
Environmental Performance Analysis of a Single Family House Using BIM
ABSTRACT
Energy consumption and greenhouse gas emissions are major indicators of the
environmental performance of any building. In recent years, the need for
much-improved energy performance in the housing sector has grown
substantially due to serious energy concerns in the United States. According
to the World Business Council for Sustainable Development, energy use for
buildings in the United States is appreciably higher than in other regions, and this
is likely to continue. The lack of a structured approach to the planned use of
sustainability features such as post-occupancy evaluation, benchmarking against
similar projects, or setting performance targets has made the situation grimmer.
For the past 50 years, a wide variety of building energy simulation programs have
been developed, improved and are in use throughout the building energy
community. With the advancement in Building Information Modeling (BIM) and
simulation technology, the environmental performance of buildings can be
assessed before their actual construction. The primary goal of this research was to
analyze annual energy consumption and CO2 emissions in a single family house in
Florida occupied by a defined type of household using BIM. The secondary goal
was to compare the results with the U.S. Energy Information Administration (EIA)
data published in the Building Energy Data book (DOE 2009) for validation
purposes and to establish the importance of BIM and its use in simulation. This
research has shown that BIM when used in conjunction with computer-aided
building simulation is a very valuable tool in the study of energy performance,
design and operation of buildings. Using energy simulation technology at the
design stage of dwellings facilitates the sustainability decision making process.
INTRODUCTION
The environmental performance of buildings depends on many factors. Energy
consumption and CO2 emissions are the major concerns in recent times, especially with
the spread of sustainable design and green building concepts throughout the world
(Figure 1). Under the 1987 Montreal Protocol, participating governments agreed to
phase out chemicals used as refrigerants that have the potential to destroy
stratospheric ozone. It was therefore considered desirable to reduce energy
consumption and decrease the rate of depletion of world energy reserves and
pollution of the environment (Omer 2009).
Figure 2. Florida electricity consumption: A) per capita consumption (EIA 2008); B) daily load shapes (FPSC 2009)
The energy analysis is performed using BIM simulation techniques, and
the results are then compared to EIA (2005) data for validation purposes. Based on
these results, a set of recommendations has been developed
for constructing more energy-efficient houses in Florida. The secondary goal of
this paper is to show how the availability of user-friendly energy analysis software
can help construction professionals and designers make decisions at the early
stages of their designs.
BASE MODELING DATA
The research started with the modeling of a typical single family house in Florida.
The general data was obtained from two sources:
1. Energy Information Administration (EIA) - for general characteristics of single
family houses in the U.S.
2. Florida Energy Efficiency Code For Building Construction (FEECBC) -For
insulation and equipment efficiency values in Florida
This data was studied to determine the typical number of rooms, glass-to-floor ratio,
and type of heating and cooling equipment found in a single family house in
Florida, and the appropriate values were then selected from the FEECBC data. Six
houses with approximately the same floor area and geographical location as the
intended base model house were selected from the FEECBC database, and their
values were averaged for the following components: R-value for internal and
external walls, roof, and floor; U-value for windows; SHGC value for windows;
COP values for heating and cooling equipment; energy efficiency factor (EF) for
the hot water system; and glass-to-floor ratio. These values were then used as input data
for the simulations in the BIM software.
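The averaging step might look like the following sketch. The component values are made-up placeholders standing in for the FEECBC houses (only three houses shown instead of six).

```python
# Sketch of the input-averaging step: component values from comparable
# FEECBC houses are averaged to form the base model inputs. The numbers
# are made-up placeholders (three sample houses shown instead of six).
from statistics import mean

houses = [
    {"wall_R": 13.0, "window_U": 0.65, "window_SHGC": 0.40, "cooling_COP": 3.2},
    {"wall_R": 11.0, "window_U": 0.70, "window_SHGC": 0.38, "cooling_COP": 3.0},
    {"wall_R": 15.0, "window_U": 0.60, "window_SHGC": 0.42, "cooling_COP": 3.4},
]

# average each component across the sampled houses
base_model = {key: round(mean(h[key] for h in houses), 2) for key in houses[0]}
print(base_model)
```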
SIMULATION INPUT DATA
The study was intended to perform a detailed energy analysis for a single family
house in a hot and humid climate; therefore, Gainesville was selected as the
geographical location in Florida. Based on the information from the EIA and
FEECBC, the base model house was developed in the Design Builder software with
the demographics shown in Table 1. Design Builder was used to model the
house based on brick/block construction. The design was kept simple by creating
an appropriate number of zones (areas with similar functions were assigned to a
single zone) to save simulation execution time.
Table 1. Data Input for Location and Climate
Location: Gainesville FL, USA
Source: ASHRAE/TMY3
WMO: 722146
General Climatic Region: 4A
Latitude: 29.70
Longitude: -82.28
Elevation (m): 40.0
Standard Pressure (kPa): 100.9
Time Zone (Daylight Saving): (GMT -05:00) Eastern Time
Energy Codes Legislative Region: Florida
Figure 3. Internal heat gains (annual and monthly)
Figure 4. Envelope heat gains and losses (annual and monthly heat balance, kWh)
Figure 5. Fuel consumption breakdown (annual and monthly)
Figure 6. Total fuel consumption (annual and monthly)
C. Annual CO2 production:
CO2 emissions produced in a house vary as the use of electricity varies throughout
the year. The total CO2 emissions of a typical house in Gainesville, FL are
10,478 kg, which is 23 times the maximum allowance of CO2 emissions per person
per year (World Resources Institute 2010).
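A minimal sketch of how annual CO2 might be derived from simulated electricity use. Both the monthly kWh values and the grid emission factor below are illustrative assumptions, not the paper's inputs (the paper reports a simulated total of 10,478 kg for the base model house).

```python
# Minimal sketch: annual CO2 from monthly electricity use times a grid
# emission factor. The monthly kWh values and the 0.6 kg/kWh factor are
# illustrative assumptions, not the paper's inputs.
EMISSION_FACTOR_KG_PER_KWH = 0.6  # assumed grid factor

monthly_kwh = [1200, 1100, 1250, 1300, 1500, 1700,
               1800, 1750, 1600, 1400, 1250, 1150]

annual_co2_kg = sum(monthly_kwh) * EMISSION_FACTOR_KG_PER_KWH
print(round(annual_co2_kg))
```

As the monthly profile shows, emissions peak with summer cooling loads in a hot and humid climate.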
for the base model house. EIA data is available only for broadly divided regions,
so the values given are averages. The comparison shows results
approximately similar to the EIA data; the differences are due to the following
reasons: 1) EIA values are national averages for the whole Southern region,
whereas the data generated from the model is particular to Gainesville, Florida; 2)
Florida falls in the South Atlantic region and some of the other Southern regions
have a different climate than Gainesville, FL which affects the duration of cooling
and heating annually. Gainesville has a hot and humid climate, so cooling expenses
exceed heating expenses. Moreover, cooling is required for more than 9
months of the year, compared to heating, to create comfortable indoor
conditions, which is again a big factor affecting overall expenses. Figure 8 shows a
comparison of delivered end-use energy values from the analysis and the EIA data.
The lighting and cooling values were compared, rather than the heating values, because
the EIA data covers the whole South region, which consists of three sub-regions:
South Atlantic, East South Central, and West South Central. Florida falls in the
South Atlantic region, where cooling degree-days (a measure of how
much space cooling is needed in summer) averaged 2,071 per household,
compared with a U.S. average of 1,407. The observed difference in the results is
due to the averaged EIA values for the whole South region. Therefore, another
source (Terrapass) was used to obtain an accurate dollar amount for the annual energy
expenses. Gainesville electricity companies have their own rates for
electricity and gas, and these rates may differ by location. A comparison of
results between the model and the EIA data (2006) shows a difference of $230,
whereas the comparison between Terrapass and Design Builder shows a difference of
just 1.83% (Figure 9A). Figure 9B shows the comparison of results for annual CO2
emissions. The differences are very small, indicating that the
results obtained from the analysis in Design Builder are accurate.
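The validation arithmetic can be reproduced from the quoted figures (Design Builder $2,065, EIA $1,835, Terrapass $2,100). Note that the simple percent difference against Terrapass computed from these charted values comes out near, but not exactly at, the reported 1.83%; the small gap is likely rounding in the charted figures.

```python
# Reproducing the validation arithmetic from the quoted annual-expense
# figures: Design Builder $2,065, EIA $1,835, Terrapass $2,100.

def pct_diff(simulated, reference):
    """Absolute percent difference of a simulated value from a reference."""
    return abs(simulated - reference) / reference * 100

design_builder, eia, terrapass = 2065, 1835, 2100

print(design_builder - eia)                           # dollar gap vs EIA
print(round(pct_diff(design_builder, terrapass), 2))  # percent gap vs Terrapass
```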
Figure 8. Comparison of delivered end-use energy (BTU) between Design Builder and EIA for lighting & appliances and space cooling
Figure 9A. Comparison of annual energy expenses ($): Design Builder 2,065; EIA 1,835; Terrapass 2,100
CONCLUSIONS
The ever-increasing U.S. energy demand can only be curbed by using
modern simulation technology to address energy-related issues in the
housing sector and minimize consumption. This will also help control
CO2 emissions, another major environmental issue. The early stages of
building design include a number of decisions which impact the performance of
the building throughout the rest of the process. It is therefore important that
designers are aware of the consequences of these design decisions. The use of BIM
and simulation programs can greatly contribute toward more feasible design
decisions when used during the building design process to predict the performance
of various design alternatives on parameters such as energy, CO2 emissions and
indoor air quality.
REFERENCES
Energy Information Administration (EIA), (2008). U.S. Carbon Dioxide Emissions
from Energy Sources 2007 Flash Estimate, DOE U.S.
Florida Public Service Commission (2009). Annual report on Activities Pursuant
to the Florida Energy Efficiency and Conservation Act
Laptali, E., Bouchlaghem, N., and Wild, S. (1997). “Planning and estimating in
practice and the use of integrated computer models,” Automation in
Construction, 7, 71-76.
Omer, A. (2009). “Energy use and environmental impacts: A general review”,
Journal of Renewable and Sustainable Energy, 1, 053101-1.
Petersen, S., and Svendsen, S. (2010). “Method and simulation program informed
decisions in the early stages of building design”, Journal of Energy and
Building, Elsevier Science Ltd, Article in press.
World Resources Institute, (2010). WRI summary of the carbon limits and energy
for America’s renewal act, Washington DC.
U.S. Dept. Of Energy, 2009 Building Energy Data Book.
Enhancing Student Learning in Structures Courses with Building Information
Modeling
1 Assistant Professor, Civil and Construction Engineering, Southern Polytechnic State
University, 1100 South Marietta Parkway, Marietta, GA, 30060; PH (678) 915-3946;
FAX (678) 915-5527; email: wbarham@spsu.edu
2 Assistant Professor, Construction Management, Southern Polytechnic State
University, 1100 South Marietta Parkway, Marietta, GA, 30060; PH (678) 915-3715;
FAX (678) 915-4966; email: pmeadati@spsu.edu
3 Assistant Professor, Building Construction Program, Georgia Institute of
Technology, 280 Ferst Drive, 1st Floor, Atlanta, GA, 30332; PH (404) 385-7609; FAX
(404) 894-1641; email: Javier.irizarrry@coa.gatech.edu
ABSTRACT
This paper presents the findings of a study conducted to evaluate the effectiveness of
Building Information Modeling (BIM) in enhancing student learning in structural
concrete design courses. In reinforced concrete design, three-dimensional (3D)
visualization of concrete members can advance students' understanding of
reinforcement details and rebar placement. BIM facilitates the use of 3D views in
teaching such courses and provides opportunities to address the challenges faced by
students during the visualization process. A study involving the use of BIM 3D
views was conducted in two courses over a one-semester period. In the study,
improvements in student performance were observed on several of the problems
presented when 3D models were used. BIM has the potential to provide faculty with a
tool that can improve the teaching of structural design courses in a more visual and
interactive way and greatly enhance the educational experience of students.
INTRODUCTION
Two-dimensional (2D) drawings are the most widely used pedagogical tools for
teaching courses to Architecture, Engineering, and Construction (AEC) students. The
interpretation of 2D drawings by students varies based on their educational
background, previous practical experience, and visualization capabilities, among other
factors. Students are required to develop three-dimensional (3D) models mentally by
visualizing the different components of the project. Students with little or no practical
experience often face challenges and spend more time developing such 3D mental
models. In reinforced concrete design, 3D visualization of concrete members can
advance students' understanding of reinforcement details and rebar placement.
Building Information Modeling (BIM) facilitates the usage of 3D models in teaching
courses and provides opportunities to address the challenges faced by the students
during the visualization process. BIM is a process that provides a framework to
develop data rich product models. In this process, real world elements of a facility
such as beams, columns, and slabs are represented as objects in a three dimensional
(3D) digital model. In addition to modeling, it provides a framework that fosters the
integration of information from conception to decommissioning of the constructed
facility (Goedert & Meadati, 2008). This paper presents the findings of a study
conducted to evaluate the effectiveness of BIM in enhancing student learning in
structural concrete design courses in the department of civil and construction
engineering and the department of construction management at Southern Polytechnic
State University in the United States. The BIM 3D views used for the study were developed
using Autodesk's Revit Structure. The following section discusses the usefulness of
BIM in teaching environments.
Based on their learning styles, students can be identified as auditory, visual, or kinesthetic
learners, who learn through hearing, seeing, and doing, respectively (Marvin, 1998).
Teaching AEC courses while addressing students' different learning styles is a
challenging task. The traditional lecture is one of the most widely used styles
for teaching AEC courses. Sometimes the lecture format is complemented by
construction site visits. This teaching style provides an
auditory and visual learning environment. However, inclusion of site visits within the
course schedule is not always feasible due to reasons such as unavailability of
construction sites meeting the class needs, class schedule conflicts, and safety issues
(Haque et al. 2005). Additionally, the lack of laboratory and training facilities
impedes the creation of kinesthetic learning environments. Sometimes the traditional
lecture style also falls short as an effective communication tool for
transferring knowledge to students. Due to the lack of a conducive learning
environment, which stimulates auditory, visual, and tactile senses, currently AEC
students are unable to gain the required skills to solve real world problems. A user-
friendly interactive knowledge repository that provides a conducive learning
environment is needed to enhance students’ learning capabilities. BIM facilitates
development of such knowledge repositories and fosters conducive learning
environments. BIM serves as an excellent tool for data management. It facilitates easy
and fast access to the information stored in a single centralized database or in
different databases held at various locations through the 3D model. Some of the BIM
characteristics such as easy access to the information, visualization, and simulation
capabilities allow auditory, visual, and kinesthetic learning environments to emerge.
Any-time, interactive access to the repository through a 3D model creates a learning
environment beyond time and space boundaries and allows students to learn at
their own pace. These environments allow students to discover strengths and
weaknesses of their learning practices and facilitate self-improvement. As shown in
Figure 1, BIM has the potential to greatly enhance the educational experience of AEC
students in acquiring skills related to different areas and will provide faculty with a
tool that can improve teaching different courses in a more visual and interactive way.
Figure 1. BIM at the center of AEC skill areas: conceptual design, construction means & methods, structural analysis, estimation, and scheduling
The data collected for this study was large in scope and was gathered using
instruments that included a survey designed to be consistent with the objectives of
this research. Students from two classes were involved in this action research project.
The first course was Steel and Concrete Design from the Civil and Construction
Engineering Department; the second was the Applied Structures I course from the
Construction Management (CM) Department. Both are senior-level courses and
part of the undergraduate curriculum at Southern Polytechnic State University.
Since the main objective of this study is to measure whether or not BIM 3D views can
enhance student learning in structures courses, BIM was used as a teaching tool
in both courses during the semester, and many concrete structural elements were
presented to students using BIM 3D views in addition to the traditional 2D approach
(see Figure 2, Figure 3, and Figure 4). The questionnaire was given to students at the
end of the semester, after they had been exposed to the different BIM models. The
questionnaire was designed so that the questions, timing, scoring
procedures, and interpretations were administered and scored in a predetermined,
standard manner to ensure they were valid and relevant. The questionnaire was
composed of three sections: demographic questions, a qualitative part, and a
quantitative test. The goal of the demographic questions was to profile the students and
their background. The qualitative part consists of 10 questions with a 5-level Likert
scale (1=Strongly Agree, 2=Agree, 3=Neutral, 4=Disagree, and 5=Strongly Disagree)
focusing on students' opinions about BIM and whether or not they think BIM helped
them gain a better understanding of the taught material and improve their
visualization capabilities.
member. The second problem was similar to the first, but used a 3D BIM
view to represent the concrete member and its steel reinforcement. The problems in
the study covered three reinforced concrete members: a simply supported beam
(Figure 2), a one-way slab (Figure 3), and an isolated footing and column (Figure 4).
Students in both courses completed the questionnaire for the simply
supported beam at the end of the semester; the one-way slab and the isolated footing
and column questionnaires were completed by the CM students only. Students in
both sections were given enough time to complete the questionnaire. The content of
the questionnaire was evaluated by verifying that its items agreed with the
objectives of this study and the learning outcomes of both courses. The qualitative and
quantitative sections were used to study BIM's actual impact on improving students'
learning, and their perception of visualization using 2D and BIM.
Figures 2-4 (reinforcement annotations): main reinforcement #5@16" and #4@12"; shrinkage reinforcement #4@14" and #4@16"; #3 ties at 16" o.c.; ties #4@12"; 8#10 and 8#11 bars.
RESULTS
The data collected in the CE and CM courses was analyzed, and the results are
presented next. Demographic information about the study population is presented first,
followed by an analysis of student performance when responding to questions using
2D and 3D views of structural elements. The benefits of using BIM for visualization are
then discussed based on an analysis of student performance on the quantitative test,
and finally the issues found with performance and perceptions of visualization
using 2D are discussed.
As part of this study, demographic data was collected. Tables 1 and 2 display this
data.
There were a total of 39 students enrolled in the two courses included in the study.
The majority of the students were seniors at the time the study took place (85%,
n=40). A total of 52.5% (n=40) of the students were between the ages of 18 to 24
years and the balance was over 24 years of age. The following sections discuss the
results obtained by analyzing the respondents' performance on the questions asked
and their responses to the survey questions, which required them to rate their level of
agreement with the statements presented.
It was observed that performance improved with 10.6% more correct responses in the
CE class when students had a 3D representation of the beam design. In the CM class,
4.7% more correct responses were observed when students used the 3D representation
of the beam problem. A smaller increase of 4.8% was observed in the CM class when
students used a 3D representation of the slab design presented in the problem. A
similar increase of 4.8% was also observed in the CM class when students used the
3D representation for the foundation and column problem. These results show that
students performed better when a 3D graphic representation of the problem was
provided, particularly in the CM class. More data will allow a more in-depth analysis
of the factors that may influence the difference in performance observed between the
CE and CM students.
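The kind of 2D-versus-3D comparison described above can be sketched in a few lines; the response data below are invented for illustration and are not the study's raw data.

```python
# Hypothetical sketch: comparing percent-correct scores between the 2D and
# 3D conditions, as in the beam/slab/foundation problems described above.
# All numbers below are illustrative, not the study's actual responses.

def percent_correct(responses):
    """Fraction of correct responses, expressed as a percentage."""
    return 100.0 * sum(responses) / len(responses)

# 1 = correct, 0 = incorrect (invented example data for one problem)
cm_2d = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
cm_3d = [1, 1, 1, 1, 0, 0, 1, 0, 1, 1]

improvement = percent_correct(cm_3d) - percent_correct(cm_2d)
print(f"2D: {percent_correct(cm_2d):.1f}%  3D: {percent_correct(cm_3d):.1f}%  "
      f"improvement: {improvement:+.1f} points")
```

The same calculation, repeated per problem and per class, yields the percentage-point differences reported in the results.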
Analysis of the responses to the survey questions provided some insight into possible
reasons for the observed increases in performance in reinforcement related problems.
These problems included the 2D and 3D Beam Problem #1, the 2D and 3D Slab
Problem #2, and the 2D and 3D Foundation and Column Problem #3. It was
observed that on average, students expressed agreement with the statement “BIM 3D
856 COMPUTING IN CIVIL ENGINEERING
models helped me to visualize beam reinforcement” (average rating of 1.9, n=20, for
the CM class and 2.71, n=19, for the CE class).
The qualitative and quantitative data were analyzed together to study the relationship
between students' performance and their perceptions about visualization using 2D
and BIM 3D views. Students' perceptions were compared against their performance
using their responses to the statement “I fully understand beam reinforcement and
steel placement using 2D cross-sections” and their results on the 2D beam problems.
This review showed that 19% of the students in the CM class responded incorrectly to 2D
Beam Problem #1 and 2D Beam Problem #2 while still expressing agreement (average
rating of 2.10, n=20) with that statement, as shown in Table 4, regarding their
perceived level of visualization using 2D cross-sections. If this perception is accurate,
students should have answered these questions correctly. The results show that
students may have an inaccurate assessment of their visualization skills and may
underestimate the benefits of using 3D BIM for enhancing their visualization of
reinforcement in structural concrete elements.
Data was collected in a CE and a CM course to explore the benefits of using 3D BIM
models for assisting in visualization in concrete structures courses. During the Fall
2010 semester, an in-class exercise was conducted in the two courses to measure
student performance in solving problems using 2D and 3D models of the structural
members used in the problems. In addition, students were presented with several
statements regarding their perceptions about the value of 2D and 3D models for
visualization of the concepts covered in the problems. Students were required to
express their level of agreement with the statements through a 5-level Likert scale.
The data collected was analyzed to determine student performance when solving the
presented problems when 2D and 3D models were used. Improvements in student
performance were observed in several of the problems presented when 3D models
were used. An increase between 4.7% and 10.1% in the number of correct answers
was observed with the CM students and 10% with CE students.
The study also observed differences in performance between the two groups of
students for the beam problem. The results showed that the BIM 3D views seem to
benefit the CM students more than the CE students who participated in the study. These
differences may be due to a number of factors that were not included in this study
such as work experience, overall academic performance (i.e. GPA), and academic
maturity level (academic rank such as sophomore vs. seniors) when they take the
course. Future studies will consider these factors as well as the impact of using color
in the models and the view angle used in the images presented to students.
Using Applied Cognitive Work Analysis for a Superintendent to Examine
Technology-Supported Learning Objectives in Field Supervision Education
ABSTRACT
INTRODUCTION
coordinate and make decisions that will lead to successful management of the
jobsite. This results in complex work that is difficult to perform and challenging even
to expert practitioners. A consequence of this complexity is that it requires several
years for a person to develop the skills to become an expert superintendent. Once a
person becomes an expert, their mental models, sense of typicality, perceptual skills
and routines related to their job will be highly developed (Klein and Militello, 2005).
However, articulating the subtle aspects of their job and the basis for their decisions will
become a difficult task itself (Crandall et al., 2006; Smith, 2003). In this way, passing
on their expertise is complicated and novices have to face the same challenge of
making sense of a complex job. Cognitive Task Analysis (CTA) comprises procedures to
understand how people think and how they perform complex work (Crandall et al.,
2006). That is, CTA can provide insight into the superintendents’ complex mental
work that leads to attaining the main objective of managing field activities.
LITERATURE REVIEW
1986). Such analysis renders the required information that people need to accomplish
a given task that requires mental activity. In this sense, CTA studies are deemed
particularly useful to understand the details and subtle elements of work that people
consider for achieving their job’s objectives and responsibilities. The three primary
aspects for capturing cognition through CTA studies are knowledge elicitation, data
analysis, and knowledge representation. Inclusion of these elements in a CTA study
will facilitate reproduction of expertise.
In the construction domain few studies have used CTA methods. Distefano
and O’Brien (2009) analyzed experts on infrastructure assessment in small combat
units using the Applied Cognitive Task Analysis framework for interviews, aimed at
obtaining critical elements of performance that can be improved through information
technologies. Saurin, Saurin and Costella (2010) used the Critical Decision Method
framework to interview workers and gain insight into the causes of workers’
accidents on jobsites in order to improve error classification and safety procedures. It can be
noted that both CTA methods used are mostly focused on knowledge elicitation, and
they are aimed at performance improvement.
METHODOLOGY
perform to achieve such objectives is unveiled through the CWRs; in turn, this allows
the analyst to determine the IRRs, which are the information needed to perform the cognitive
work. Since system design is out of the scope of this research, only the cognitive
analysis section is of interest.
ACWA covers all the aspects of cognitive task analysis in the different steps
that form the cognitive analysis section. The knowledge elicitation method of ACWA
consists of targeting information about how goals and processes work, how feedback
about them is obtained, and what actions respond to incorrect functioning (Elm,
2002). The analyst is in charge of compiling the obtained data and turning it into the
main representation of the ACWA, which is the FAN. The analyst is also in charge of
verbalizing the CWRs and IRRs for each of the goals in the FAN. Three additional
knowledge elicitation methods were used to support data collection: observations,
the think-aloud method and the Critical Decision Method modified for day-to-day
activities. The think-aloud method consists of having someone narrate their thoughts
as they perform a task. The modified Critical Decision Method consists of having
an expert narrate their daily activities and searching for decision points even if
incidents are not critical (Crandall et al., 2006). These methods provided input for the
three steps of interest. The ACWA methodology does not have to be carried out in a
strictly sequential manner (Potter et al., 2002), but the cognitive analysis must be
carried through to completion to adequately describe the job in terms of goals and information
needs.
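As a rough illustration of the goal/CWR/IRR structure described above, a FAN can be modeled as linked goal nodes. The goal names, requirements, and links below are invented examples, not taken from the ACWA study itself.

```python
# Illustrative sketch (hypothetical content): representing a Functional
# Abstraction Network (FAN) as goal nodes, each carrying its Cognitive Work
# Requirements (CWRs) and the Information/Relationship Requirements (IRRs)
# needed to perform them.

class Goal:
    def __init__(self, name):
        self.name = name
        self.cwrs = []        # cognitive work requirements for this goal
        self.irrs = []        # information needed to perform the CWRs
        self.supports = []    # higher-level goals this goal contributes to

    def link_to(self, parent):
        self.supports.append(parent)

fan = {n: Goal(n) for n in ["Manage field activities",
                            "Coordinate subcontractors",
                            "Monitor schedule"]}
fan["Coordinate subcontractors"].cwrs.append("Detect trade conflicts")
fan["Coordinate subcontractors"].irrs.append("Daily crew locations")
fan["Coordinate subcontractors"].link_to(fan["Manage field activities"])
fan["Monitor schedule"].link_to(fan["Manage field activities"])

# Each IRR can be traced to the goal it serves and to the higher-level
# goals that goal supports:
for g in fan.values():
    for irr in g.irrs:
        print(f"{irr!r} -> {g.name} -> {[p.name for p in g.supports]}")
```

The point of the traceability shown in the loop is the one made in the text: every piece of information a superintendent needs can be tied back through the FAN to a job goal.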
Development
Knowledge Representation
ANALYSIS
cognitive analysis part have been developed. Identification of useful functionality that
will not hinder the mental model obtained is essential to determine the IT tools that
can reduce the complexity of the job. In addition, the obtained products can be used
to determine relevant learning objectives for novices, with the purpose of building
their own mental models of the knowledge domain. The potential benefit of
developing mental models that rely on computer systems for reduced complexity is
discussed in this section.
Each of the IRRs represents data that has to be processed in order to obtain
meaningful information that allows the practitioner to perform the CWRs. Some IRRs are readily
available but still have to be stored in the mind of the practitioner. Other IRRs are not
available as such, and the practitioner has to process available data to produce them.
Using the results of the ACWA study would also have an implication for
designing instructional programs. The relationships expressed in the FAN provide a
context for practice, in which goals are interrelated and information obtained for a
goal serves to attain other goals. This would fit modular instruction, in which learning
objectives for each module can build on one another. Then, the CWRs and IRRs can
determine the instructional strategy that better fits each learning objective. For
example, scenarios and roles can be designed for each job goal, or sets of job goals.
Overall, design of instruction can be grounded in the mental model of expert
superintendents, since such models are comprehensive and responsive to the
constraints of the field management domain.
CONCLUSIONS
REFERENCES
Crandall B., Klein G., Hoffman R.R. (2006). Working Minds – A practitioner’s guide
to Cognitive Task Analysis, The MIT Press, Cambridge, MA.
Distefano M.J., O'Brien W.J. (2009). “Comparative Analysis of Infrastructure
Assessment Methodologies at the Small Unit Level.” Journal of Construction
Engineering and Management, ASCE, 135(2), 96-107.
Elm, W. (2002). Applied cognitive work analysis: ACWA, Unpublished briefing,
<http://mentalmodels.mitre.org/cog_eng/ce_references_V.htm> (Dec. 18th,
2010).
Elm, W., Potter, S., Gualtieri, J., Roth, E., Easter, J. (2003) “Applied cognitive work
analysis: a pragmatic methodology for designing revolutionary cognitive
affordances.” Handbook of Cognitive Task Design, Hollnagel E., ed.,
Lawrence Erlbaum Associates, Mahwah, NJ, Ch. 16.
Hoffman, R. R., & Woods, D. D. (2000). “Studying cognitive systems in context.”
Human Factors, 42, 1-7.
Hollnagel, E. and Woods, D.D. (1983). “Cognitive systems engineering: new wine in
new bottles.” International Journal of Man-Machine Studies, 18, 583-600.
Klein, G., and Militello, L. (2005). “The knowledge audit as a method for cognitive
task analysis.” How professionals make decisions, H. Montgomery, R.
Lipshitz and B. Brehmer, eds., Lawrence Erlbaum Associates, Mahwah, NJ.
Potter S. S., Elm W. C., Roth E. M., Gualtieri J. W., and Easter J. R. (2002).
“Bridging the gap between cognitive analysis and effective decision aiding.”
State of the Art Report (SOAR): Cognitive Systems Engineering in Military
Aviation Environments: Avoiding Cogminutia Fragmentosa! McNeese, M.D.,
and Vidulich, M.A., eds., Wright-Patterson AFB, Human Systems
Information Analysis Center, 137-168.
Saurin, T.A., Saurin, M.G., and Costella, M.F. (2010). “Improving an algorithm for
classifying error types of front-line workers: Insights from a case study in the
construction industry,” Safety Science, 48, 422-429.
Smith, P.J. (2003). “Workplace learning and flexible delivery.” Review of
Educational Research, 73(1), 53–88.
Developing and Testing a 3D Video Game for Construction Safety Education
1Ph.D. Candidate, Ph.D. Program in the Built Environment, University of
Washington, 130E Architecture Hall, Box 351610, Seattle, WA 98195; PH (206) 616-
3205; FAX (206) 685-197; email: json@uw.edu
2Assistant Professor, Department of Construction Management, University of
Washington, 120 Architecture Hall, Box 351610, Seattle, WA 98195; PH (206) 616-
1915; FAX (206) 685-1976; e-mail: kenyulin@uw.edu
3Director and Professor, The Durham School of Architectural Engineering and
Construction, University of Nebraska-Lincoln, 1110 S. 67th St., Omaha, NE 68182;
PH (402) 554-3186; FAX (402) 554-3850; e-mail: er@unl.edu
ABSTRACT
Construction safety education has mostly relied on one-way transference of
knowledge from instructors to students through traditional lectures and media such as
textbooks. However, we argue that safety knowledge could be more effectively
acquired in experiential situations. The authors have developed a 3D video game
where students learn by themselves about safety issues in a virtual construction site.
Students, who assume the roles of safety inspectors in the game, explore a virtual site
to identify potential hazards and learn from the feedback provided by the game as a
result of their input. This paper reports on the game design and development process
as well as a preliminary assessment of the game’s effectiveness. The preliminary
assessment was conducted on five students and the results suggested a positive
outlook as well as areas for improvement. Further work to improve the game includes
incorporating additional violation scenarios, adding new game features to enrich the
game experience, and providing enhanced pedagogical opportunities.
INTRODUCTION
Promoting safety education and preparing our future workforce for a safer and
healthier work environment is without doubt a critical agenda item among other high
priorities in construction. However, traditional safety teaching practice based on the
textbook–chalkboard–lecture–homework–test paradigm has long been criticized as
inadequate and inappropriate for student learning (Nirmalakhandan et al. 2007). The
authors propose a 3D video game, Safety Inspector (SI), to explore how game
technology can intertwine with safety education and complement existing learning
approaches. For safety education, the game aims to provide a “safe” training
environment that engages students in comprehensive hazard recognition challenges as
a way to evaluate student performance and increase student learning interests. In
addition, the game development process is expected to serve as an operational model
of available technologies for those who are also interested in educational video games
in the Architectural/Engineering/Construction industry. In addressing these objectives,
this paper reports the authors’ overall game development process, the implementation
of a preliminary prototype system, and game evaluation.
Safety Violations
The safety violations targeted for game implementation are listed in Table 1.
Table 1 is a preliminary hazard classification derived from the Washington State
Department of Labor and Industries (WA L&I) safety training materials used to guide the game
development. One of the dimensions to categorize these violations is the level of
knowledge that learners need to perceive them. Using this dimension, the
categorization can naturally evolve into three game stages (e.g. easy, moderate, and
difficult). However, in the current stage of the game, a mix of easy and moderate
violation-recognition challenges is implemented for testing purposes.
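The knowledge-level categorization described above can be sketched as a simple mapping. The violation names and level assignments below are hypothetical examples for illustration, not the actual WA L&I-derived classification in Table 1.

```python
# Hedged sketch: mapping safety violations to game stages by the level of
# knowledge needed to recognize them. Violations and levels are invented.

KNOWLEDGE_LEVEL = {
    "missing hard hat": 1,          # easy: visible at a glance
    "unguarded floor opening": 2,   # moderate: needs code awareness
    "improper scaffold tie-in": 3,  # difficult: needs detailed knowledge
}

STAGE_BY_LEVEL = {1: "easy", 2: "moderate", 3: "difficult"}

def stage_for(violation):
    """Game stage in which this violation would appear."""
    return STAGE_BY_LEVEL[KNOWLEDGE_LEVEL[violation]]

# The current prototype mixes only easy and moderate challenges:
prototype = [v for v, lvl in KNOWLEDGE_LEVEL.items() if lvl <= 2]
print(prototype, stage_for("missing hard hat"))
```

A categorization like this lets the game stages grow naturally as more violations are classified.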
Game Implementation
Safety Inspector is powered by the Torque 3D game engine through Torque
Software Development Kit (SDK) V1.0. The engine runtime code is written in C++, with
tools written in C++ and a proprietary scripting language, TorqueScript. Main
functionalities of the Torque 3D game engine include 3D rendering, physics, and
animation. To simplify the game development processes, the Torque 3D game engine is
manipulated through Torque SDK. Editors and tool kits such as Terrain Editor, Shape
Editor, and Material Editor in Torque SDK enable developers to complete games
without laborious coding. Detailed game implementation processes are presented in
Fig. 2 and described in the following paragraphs.
1. Creating Terrain: One of the first objects added to the game was the construction
site terrain. The terrain was created (Fig. 3) by modifying a default terrain shape and
textures using the Terrain Editor in Torque SDK.
2. Creating 3D Objects: 3D game objects were produced and then imported into
Safety Inspector virtual space. These objects include, but are not limited to, worker
characters, fleets, buildings, equipment, tools, materials, and background objects.
Some objects were created from scratch and some were modified from existing 3D
models. Autodesk 3DS Max 2009 was used to create/edit most of the 3D objects.
Furthermore, the collision boundaries for each object were added in the object’s 3D
hierarchy to define the collision detection mechanism among game objects.
3. Exporting and Importing Game Objects: Completed 3D objects were imported into
the game in the format of DTS (Dynamix Three Space) or DIF (Dynamix Interior
File), both proprietary. DTS is generally used for representing non-structural game
objects such as characters, fleets, and equipment while DIF is used for representing
structural game objects such as buildings or other enclosing structures. DTS objects
were produced by exporting 3D objects through a DTS exporter (e.g.
max2dtsexporter) in Autodesk 3DS Max 2009 and DIF objects were created through
two steps, including exporting 3D objects into the file format of Torque Constructor
and then exporting again the 3D objects in Torque Constructor into the DIF format.
Exported game objects were incorporated into the virtual game space through Torque
3D SDK.
4. Customizing Game Code: The game engine has been customized so that required
properties, responsive actions, and dynamic behaviors of game objects could be
accomplished.
5. Creating the Graphical User Interface (GUI): A GUI was designed to display necessary
information such as the current total points and instructional messages so that learners
receive feedback throughout their game play.
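The scoring-and-feedback behavior behind the customized game code and GUI might look like the following sketch. This is plain Python rather than TorqueScript, and the hazard names and point values are invented for illustration.

```python
# Minimal sketch of the inspection-scoring idea: the player flags objects,
# and correctly flagged hazards earn points plus an instructional message.
# Hazard names and point values are hypothetical.

hazards = {"worker_no_harness": 10, "open_trench_no_barrier": 15}

def inspect(obj, score, found):
    """Return the updated (score, message) after the player flags `obj`."""
    if obj in hazards and obj not in found:
        found.add(obj)
        return score + hazards[obj], f"Correct! +{hazards[obj]} points"
    if obj in found:
        return score, "Already reported."
    return score, "No violation here."

score, found = 0, set()
for clicked in ["ladder_ok", "worker_no_harness", "worker_no_harness"]:
    score, msg = inspect(clicked, score, found)
    print(clicked, "->", msg, "| total:", score)
```

The per-click message is the kind of immediate feedback the GUI displays, and the running total corresponds to the current-points readout.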
SYSTEM EVALUATION
Before a full-scale implementation and evaluation is attempted, a small group
of students from the Department of Construction Management at the University of
Washington were invited to test the game and to provide feedback on their learning
experiences. A total of five students, who had taken the construction safety class
required in the CM curriculum, voluntarily participated in the game testing. They
played the game for ten minutes and then filled out a feedback survey to help evaluate
the research effort. A total of eighteen questions were listed in the survey. A 7-point
Likert scale (1 being the lowest level and 7 being the highest level) was used. Table 2
presents the survey questions and their results. Although these results are not
statistically significant given the small sample size, the evaluations still provide
some insights about game performance and useful feedback to improve the game for
future versions.
For example, 80% of the participants answered “Yes” to the question “Is the game user-friendly and easy to operate for you?”
ACKNOWLEDGMENTS
The authors would like to acknowledge the financial support for this research
received from the National Science Foundation Award 0753360. The authors would
also like to recognize their collaborators from the University of Texas at Austin and
the Rinker School of Building Construction at the University of Florida.
REFERENCES
Nirmalakhandan, N., Ricketts, C., McShannon, J., and Barrett, S. (2007). "Teaching Tools to
Promote Active Learning: Case Study." Journal of Professional Issues in Engineering
Education and Practice, 133(1), 31-37.
Attention and Engagement of Remote Team Members in
Collaborative Multimedia Environments
ABSTRACT.
INTRODUCTION
Jiazhi et al. [1] suggest that when collaborating through computer mediation, people
will look at targets that help them determine whether or not their messages have been
understood as intended, and that gaze patterns of speakers and listeners are closely
linked to the words spoken, and help in the timing and synchronization of utterances.
Vertegaal et al. [2] found that in multi-party conversations, speakers looked at the
person they were talking to 77% of the time and listeners looked at the speaker 88%
of the time.
According to Gutwin and Greenberg [3], awareness has four basic characteristics:
1. Awareness is knowledge about the state of a particular environment.
2. Environments change over time, so awareness must be kept up to date.
3. People maintain their awareness by interacting with the environment.
4. Awareness is usually a secondary goal—that is, the overall goal is not simply
to maintain awareness but to complete some task in the environment.
We can see that even though awareness is not a goal in itself, it is an
important condition in order to achieve the proper environment for collaborative
problem solving. Vertegaal et al. differentiate two levels of awareness in
cooperative work. The macro level refers to the awareness that conveys background
information about the activities of others prior to or outside of a meeting. The micro-
level of awareness, according to them, gives “online information about the activities of
others during the meeting itself.” Micro-level awareness usually has a more continuous
nature than its macro-level counterpart. It consists of two categories: conversational
awareness and workspace awareness. Vertegaal summarizes the elements of micro-
level awareness according to the attentive state, from the syntactical to the pragmatic
aspects of the interaction (Table 1).
The syntax level contains two subcategories. The locus of attention describes
the spatial aspects of attention, i.e., where the person directs their attention, while
attention span describes the temporal aspects of attention, i.e., the amount of time a
person can concentrate on a task without being distracted. This paper concentrates on
the measurement of this particular aspect, by using methods that enable us to
indirectly infer the attention of the participant, in order to establish if there is a
connection between the characteristics of the interface and the way in which people
attend to the interface. We are assuming in this case that the attention to the interface
will be acting as an indicator of the level of engagement that the person is having in
the collaboration process.
Although this variable in itself is not a unique component of the micro-level
awareness in the interaction, its study provided us with important insights about how
the characteristics of the interface affect this crucial aspect of non-collocated
interactions.
METHODOLOGY
side conversations, gaze foci, and use of ICT tools were noted. This provided a sense
of the engagement at the individual level occurring during the team meetings. Five
out of six teams had two or three members at Stanford. Observations included:
(1) How do the collocated participants make their engagement (or lack thereof) visible to
each other? (2) How do artifacts and ICT support or constrain engagement activities?
(3) When participants engage with ICT, where is their gaze? (4) When and how did
their gaze move between objects, from person to objects and back again?
TESTBED
All AEC teams hold weekly two-hour project review sessions similar to
typical building projects in the real world. During these sessions they present their
concepts, explain, clarify, question these concepts, identify and solve problems,
negotiate and decide on changes and next steps. Since the concepts, problems and
challenges are defined by the students who work on that specific project, their level
of attention and engagement is maximized. Consequently the students are highly
motivated to exchange and acquire as much knowledge as possible as they participate
in the cross-disciplinary dialogue. The interaction and the dialogue between team members
during project meetings evolved from presentation mode to inquiry, exploration,
problem solving, and negotiation. Similar to the real world, the teams have tight
deadlines, engage in design reviews, negotiate and decide on modifications. Most
importantly, students learn to use and combine diverse communication channels and
media to express and share their ideas and solutions. To view AEC student projects
please visit the AEC Project Gallery
(http://pbl.stanford.edu/AEC%20projects/projpage.htm).
The following describes Island and Ridge team collaborative ICT settings
according to Vertegaal’s micro-level “Functionality” characteristics, i.e., workspace
and conversational awareness:
Island team was composed of an architect in Puerto Rico, a structural engineer at
Stanford, an energy simulation engineer at Stanford, a construction manager at
UW Madison, and a life cycle financial manager at Bauhaus University in Germany.
Each of them worked in the respective university laboratory, using their laptops
on WiFi, with a headset for audio. They used GoToMeeting as their multimedia
collaboration environment. GoToMeeting allowed them to share their
applications that were running on their individual laptops, e.g., architect showing
3D images of the building, structural engineer showing structural component
options, construction manager showing cost estimates and schedules spreadsheets,
life cycle financial manager showing cash flow model diagrams. GoToMeeting
allows viewing, sharing, controlling one application at a time. This required
participants to switch presenter and control as they were toggling between the
different applications running on the different computers. It allowed all
participants to view and manipulate data only on one application at a time.
(Figure 1a).
Ridge team was composed of an architect in Puerto Rico, two structural engineers
at Stanford, and one construction manager in Stockholm Sweden. Each of them
was working in the respective university laboratory, using their laptops on WiFi,
with a headset for audio. They used the 3D Team Neighborhood in Teleplace as
their multimedia collaboration environment. The 3D Team Neighborhood
provided a highly immersive environment that enabled the team members to
construct in real time their collaboration space around them as the dialog and
interaction evolved during the meeting. Each team member could share their
content on any number of displays that were created on an as-needed basis, as well
as manipulate and annotate any content displayed in their shared workspace. All
participants were able to view and interact with their content and models in
context, i.e., in relation to the content and models shared by the other team
members. This allowed them to interpret, correlate, combine, and compare items
on different displays. In addition, they were constantly aware of where their team
members were looking and located with respect to them and the displayed content in
their shared 3D Team Neighborhood. The interaction and communication in this
multimedia immersive collaboration environment led to a free and continuous
flow of interaction and communication (Figure 1b).
Based on the position of the gaze of the two architecture students at the University of
Puerto Rico, we used the following categories to classify the EyeTracker information:
1. Screen: person looking at the computer screen, specifically to the window of the
application evaluated.
2. Notes: person taking notes related to discussion
3. Keyboard: person typing
4. Outside: person looking to anything else beyond the three previous categories.
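Classifying gaze samples into these four categories and summarizing the share of time spent in each can be sketched as follows; the sample trace below is invented for illustration, not actual EyeTracker data.

```python
# Illustrative sketch (made-up data): tallying per-frame gaze labels into
# the four categories above to infer attention to the interface.

CATEGORIES = ("screen", "notes", "keyboard", "outside")

def gaze_shares(samples):
    """samples: sequence of per-frame category labels -> share per category."""
    counts = {c: 0 for c in CATEGORIES}
    for s in samples:
        counts[s] += 1
    total = len(samples)
    return {c: counts[c] / total for c in CATEGORIES}

# Invented 10-sample trace (e.g., one label per video frame)
trace = ["screen"] * 6 + ["notes"] * 2 + ["outside"] * 2
shares = gaze_shares(trace)
print({c: f"{p:.0%}" for c, p in shares.items()})
```

Aggregating shares like these over a meeting gives the kind of attention-span summary used to compare the two collaboration environments.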
We analyzed the videos from the EyeTracker device based on these
categories, and represented the interaction graphically (Figure 1c and 1d). These
preliminary results were supported by the observations made in-person at Stanford
and in the 3D Team Neighborhood showing where the students and their avatars
gazed as well as their discourse dynamics. We used the notations “on task on screen”
and “off task engaged-observing screen content” for the situations in which the
students’ gaze was focused on the GoToMeeting or 3D Team Neighborhood content
on screen. The cases in which the students were “off task and disengaged” indicated
that their gaze was off screen, or on screen multitasking doing other activities
unrelated to the project or ongoing discourse. Five situations were observed:
one student on task on screen, the other(s) off task but engaged, observing
content on screen or taking notes;
one student on task on screen, the other student on a different task that is
directly related to the task and topic being discussed;
none on task but engaged, observing the screen or taking notes;
one student on task on screen, the other(s) off task and disengaged, i.e., looking
off screen or multitasking on screen;
none on task and disengaged (e.g., having a side conversation that is not related
to the task at hand, or each multitasking, performing different tasks such as working
on other homework, email, browsing, or chatting online).
DISCUSSION
These preliminary results show that participants tend to visually engage in the
3D Team Neighborhood environment for longer and more frequently than they do in
the GoToMeeting application-sharing environment. These preliminary observations are
supported by previously reported findings in the literature, in which 3D simulated
highly immersive and interactive environments seem to attract the attention and
engagement of the participants in more consistent and efficient ways [10]. It is
important to note that the AEC global teams had very effective team meetings once
they were fluent in using the functionalities of the ICT and embedded them into their
daily work practice. Nevertheless, in the case of webconferencing, participants who
were not presenting or whose decisions were not directly impacted tended to multitask, and
their gaze attention was directed elsewhere. In contrast, the 3D Team Neighborhood
collaboration environment created a rich multimedia and multimodal context that
kept the participants almost continuously engaged in the activity and discourse.
Attention has been studied extensively by scholars in psychology, pedagogy,
neuroscience, communication, and cognitive science. This study is a first step in a
long-term effort, with many opportunities to extend the breadth and depth of the
experiments, data collection, and analysis, as well as to increase the number of data points.
REFERENCES
1. Jiazhi, O., et al., Analyzing and predicting focus of attention in remote collaborative tasks, in
Proceedings of the 7th international conference on Multimodal interfaces. 2005, ACM:
Torento, Italy.
2. Vertegaal, R., B. Velichkovsky, and G.v.d. Veer, Catching the eye: management of joint
attention in cooperative work. SIGCHI Bulletin, 1997. 29(4).
3. Gutwin, C. and S. Greenberg, The importance of awareness for team cognition in distributed
collaboration, Report 2001-696-19. 2001, University of Calgary: Alberta, Canada.
4. Vertegaal, R., et al., Eye gaze patterns in conversations: there is more to conversational agents
than meets the eyes, in Proceedings of the SIGCHI conference on Human factors in
computing systems. 2001, ACM: Seattle, Washington, United States.
5. Antti, O., et al., Interaction in 4-second bursts: the fragmented nature of attentional resources in
mobile HCI, in Proceedings of the SIGCHI conference on Human factors in computing
systems. 2005, ACM: Portland, Oregon, USA.
6. Fruchter, R., Architecture/Engineering/Construction Teamwork: A Collaborative Design and
Learning Space. ASCE Journal of Computing in Civil Engineering, 1999. 13 (4): 261-270.
7. Fruchter, R., The Fishbowl: Degrees of Engagement in Global Teamwork. LNAI, 2006: 241-257.
8. GoToMeeting Webconferencing [cited 2010 4/26]; Available from:
http://www.gotomeeting.com/fec/
9. Teleplace: Virtual Worlds Collaboration Solutions for Program Management, Virtual Operations
Centers. [cited 2010 4/26]; Available from: http://www.teleplace.com/.
10. Reeves, B. and J.L. Read, Total Engagement: Using Games and Virtual Worlds to Change the
Way People Work and Businesses Compete. 2009, MA: Harvard Business School Publishing.
Teaching Design Optioneering: A Method for Multidisciplinary Design
Optimization
ABSTRACT
This paper describes a Design Optioneering methodology that is intended to
offer multidisciplinary design teams the potential to systematically explore a large
number of design options much more rapidly than currently possible using
conventional methods. Design Optioneering involves first defining a range of design
options using associative parametric design tools; then coupling this model with
integrated simulation-based analysis; and, finally, using computational design
optimization methods to systematically search through the defined range of
alternatives in search of design options that best achieve the problem objectives while
satisfying any constraints. The Design Optioneering method was tested by students
as part of a parametric design course at Stanford University in the spring of 2010. The
performance of the method is discussed in terms of the students’ ability to capture
the design intent using parametric modeling, integrate expert analysis domains, and
select a preferred option among a large number of alternatives. Finally, the potential
of Design Optioneering to reduce latency, further domain integration, and enable the
evaluation of more design alternatives in practice is discussed.
INTRODUCTION
Current Computer-Aided Design and Engineering (CAD/CAE) tools allow
architects and engineers to simulate many different aspects of building performance
(e.g., financial, structural, energy, lighting) (Fischer 2006). However, designers are
often not able to leverage simulation tools early in the design process because of the
time required to complete a design cycle involving the generation and analysis of a
design option using model-based CAD/CAE tools. It often takes multidisciplinary
design teams longer than a month to complete a single design cycle (Flager and
Haymaker 2007). High design cycle latency in current practice has been attributed to
inadequate software interoperability (Gallaher, O’Connor et al. 2004) and a lack of collaboration
between design disciplines (Akin 2002; Zhao and Jin 2003; Holzer, Tengono et al.
2007), among other issues.
Associative parametric CAD tools have been shown to reduce latency
associated with the generation of design options (Sacks, Eastman et al. 2005) as well
as to manage greater project complexity (Gerber 2009). A parameter in this context is
a design variable that can be associated or related to other parameters to define
883
884 COMPUTING IN CIVIL ENGINEERING
particular design logic. The designer can then manipulate a single parameter or set of
parameters to rapidly generate many unique design configurations (Szalapaj 2001).
Parametric modeling as a concept and mathematical construct (e.g. parametric curves
and surfaces), has been around for years with the first parametric CAD tools
emerging in 1989 (Eastman, McCracken et al. 2001). However, providing tools that
enable designers to readily develop these robust and rigorous input models that
describe their design intent in order to guide design generation remains a challenge
(Shea, Aish et al. 2005; Gerber 2007).
The use of associative parametric tools to reduce design cycle latency in
current AEC practice has been limited by two primary factors. First, there are
inherent differences in the way architects and engineers iteratively define and
represent design problems (Akin 2002). Therefore, it is often difficult for these
different disciplines to agree on a common parametric representation of the design,
particularly when opportunities for collaboration are limited by organizational and/or
geographic boundaries (Burry and Kolarevic 2003; Holzer, Tengono et al. 2007).
Few methods have been developed to instruct practitioners on how to use parametric
methods in collaborative, multidisciplinary environments, and those developed have
not been pervasively disseminated. Second, there is limited interoperability between
parametric CAD tools commonly used by architects and CAE tools commonly used
for engineering analysis. With a few exceptions (Shea, Aish et al. 2005; Holzer,
Hough et al. 2007; Flager, Welle et al. 2009), engineers are not able to provide timely
simulation-based performance feedback on the parametric variations generated by the
design team.
This paper introduces the Design Optioneering methodology that aims to
address the limitations associated with parametric modeling discussed above. The
paper is structured as follows. First, the Design Optioneering method is
described. Second, the context of initial use of the method by students in a
parametric design course at Stanford University is described and the findings of the
use-case are presented. Finally, the potential and implications of Design
Optioneering to reduce latency and enable the evaluation of more design alternatives
in practice are discussed.
Problem Formulation
The first step is to formally define the design problem including the design objective,
variables and constraints. The design objective is the goal of the optimization
exercise and generally involves maximizing or minimizing a real function (e.g. cost,
energy consumption, etc.). The constraints are the criteria that a design option must
satisfy to be considered feasible. Finally, the variables are the parameters of the
design that can be manipulated within a defined range to achieve the objectives
and satisfy the constraints.
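As a concrete illustration of this step, the sketch below formulates a deliberately small, hypothetical problem. The variables (window-to-wall ratio and shade depth), the cost and daylight models, and the constraint threshold are all invented for illustration; they are not taken from the course projects. An exhaustive search over the variable ranges then returns the lowest-cost feasible option:

```python
# Hypothetical formulation: minimize facade cost subject to a daylight constraint.
# Variables: window-to-wall ratio (wwr) and shade depth in meters, each bounded.

def cost(wwr, shade):
    # objective: simplified, illustrative cost model
    return 120 * wwr + 40 * shade

def daylight(wwr, shade):
    # constraint quantity: simplified, illustrative daylight score
    return 100 * wwr - 15 * shade

best = None
for wwr in [x / 20 for x in range(4, 17)]:        # 0.20 .. 0.80
    for shade in [x / 10 for x in range(0, 11)]:  # 0.0 .. 1.0
        if daylight(wwr, shade) >= 25:            # feasibility check
            c = cost(wwr, shade)
            if best is None or c < best[0]:
                best = (c, wwr, shade)

print(best)  # (30.0, 0.25, 0.0): lowest-cost feasible option in the grid
```

In a real formulation, the toy functions above are replaced by simulation-based analyses and the exhaustive loop by an optimization or sampling algorithm.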
The definitions of the problem objective, constraints, and variables are then used to
inform the creation of an associative parametric digital model. This involves creating
a parametric representation of the project in CAD that is driven by the design
variables specified. Designers can then test the parametric model by modifying the
variable values and observing the resulting design configuration to ensure that it is
consistent with design intent. This process is often iterative; observations made from
variable testing can lead to the selection of new variables and/or ranges as well as the
refinement of parametric model logic.
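The associative behavior described above can be sketched in a few lines; the parameter names and relations below (a span-to-depth rule and a fixed panel module) are hypothetical examples of design logic, not rules from the paper:

```python
# Hypothetical associative parametric model of a simple canopy bay.
# bay_width is the driving parameter; the other dimensions are derived from it,
# so modifying bay_width regenerates a consistent design configuration.

class CanopyBay:
    def __init__(self, bay_width):
        self.bay_width = bay_width

    @property
    def beam_depth(self):
        # association: span-to-depth rule of thumb (illustrative only)
        return self.bay_width / 20

    @property
    def panel_count(self):
        # association: panels follow a fixed 1.5 m module (illustrative only)
        return round(self.bay_width / 1.5)

bay = CanopyBay(bay_width=9.0)
print(bay.beam_depth, bay.panel_count)   # 0.45 6
bay.bay_width = 12.0                     # change the driving parameter...
print(bay.beam_depth, bay.panel_count)   # ...dependents update: 0.6 8
```

Variable testing, as described above, amounts to sweeping the driving parameter and checking that the derived configuration still matches design intent.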
Process Integration
The goal of this activity is to create an integrated process model that includes the
parametric CAD model created in the previous activity as well as any CAE models
used to assess design objectives and constraints. Process integration involves first
defining the information dependencies between all of the CAD/CAE tools used in the
design process. Next, the data flow between the tools is automated to reduce design
cycle latency that is pervasive in current practice. Finally, the integrity of the data
flow and the analysis representation is checked by modifying the variables in the
parametric CAD model and verifying that the necessary analysis configurations
update correctly, enabling rapid evaluation of all design domains.
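A minimal sketch of such an integrated process model follows; the stand-in functions for the CAD and CAE tools are invented for illustration. Changing the design variable re-runs the whole chain automatically, which is the integrity check described above:

```python
# Hypothetical integrated process model: CAD regeneration feeds two CAE
# analyses automatically, removing manual file exchange between tools.

def generate_geometry(span):
    # stand-in for the parametric CAD model
    return {"span": span, "area": span * 4.0}

def structural_check(geom):
    # stand-in for structural CAE: span limit (illustrative)
    return geom["span"] <= 12.0

def energy_use(geom):
    # stand-in for energy CAE: load proportional to floor area (illustrative)
    return 55.0 * geom["area"]

def run_cycle(span):
    geom = generate_geometry(span)   # step 1: regenerate the CAD model
    return {                         # step 2: push the data to the CAE tools
        "feasible": structural_check(geom),
        "energy": energy_use(geom),
    }

# integrity check: vary the design variable and confirm the analyses update
for span in (8.0, 10.0, 14.0):
    print(span, run_cycle(span))
```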
COURSE BACKGROUND
The course “CEE 135A/235A: Parametrics - Applications in Architecture and
Product Design” was originally conceived by the authors in 2008 to explore how to
capture and communicate design intent using parametric methods at both an
architectural and a product scale. The course was first offered in the fall term of 2008
to undergraduates and graduates in product design, architectural design and
engineering disciplines at Stanford University. The course evolved through two
quarter offerings at Stanford University’s Civil and Environmental Engineering
(CEE) Department. The more recent course offered in the spring of 2010 included
the addition of the Design Exploration module which involved coupling the
parametric model with integrated simulation-based analysis and using computational
design exploration and optimization techniques. For this course, which is described in
more detail below, author Flager developed the curriculum and served as the
primary instructor, and author Gerber participated as a guest lecturer.
Objectives
The pedagogical goals for the course are for students to: (1) become proficient with
parametric modeling methods and understand the strengths and limitations of these
methods with regard to capturing design intent; (2) learn to communicate design
intent to others in a multidisciplinary team; (3) understand how to integrate
parametric CAD tools with CAE tools; (4) be able to critically assess a given design
logic/process and its impact on the range of possible solutions, emphasizing the value
of solution space thinking; and, (5) hear from leading design practitioners about how
they are applying parametric design concepts to their own work.
The primary research goal for the course was to get user feedback on the
Design Optioneering method in the following areas: (1) effectiveness in capturing
design intent; (2) quality of performance feedback provided; (3) ability to
systematically search through the design space in search of preferred designs; and (4)
ease of use. A second research goal was to document how multidisciplinary design
teams collaborate using the Design Optioneering method and to compare these
observations to conventional design methods.
Structure
The course was organized in two modules: (1) defining the design space and (2)
exploring the design space. The former module provided instruction related to
parametric modeling methods and communicating and abstracting design intent into
computable and shareable constructs. The latter module dealt with methods for
integrating parametric CAD with CAE tools as well as computational optimization
and sampling methods to systematically search the design space for high performance
solutions. Class time was divided approximately equally between lecture and studio /
workshop components. The lectures were structured to give students a background in
parametric design and its applications. Topics included design theory, precedents in
architecture and product design, as well as methods for mapping design intent to
parametric logic and design exploration. Workshops were designed to provide students
with hands-on experience with the parametric modeling and simulation software
used to complete the design exercises. The workshop time was also used to mentor
individuals and teams on their design projects.
Assignments
The primary assignments for the course were the completion of three design
projects: (1) beverage container, (2) building façade, and (3) tall building (final
project). The first two design exercises are described below and the final project is
explained in the following section.
The objective of the first assignment was to introduce the class to associative
parametric modeling methods. The brief was to select a single factor to drive the
physical form of a beverage container (e.g. ambient temperature, user age, etc.).
Students began by sketching what they thought would make a good beverage
container for the extreme cases given the chosen driver and then identified at least
three geometric dimensions that responded to the customer needs identified from
their driving concept. Next, students described the dependencies between the design
driver, the customer needs, and the geometric parameters using a logic diagram.
Finally, the students created a 3-D parametric CAD model of the beverage container
and documented the possible geometric variations.
The second assignment was to design a façade system for a series of rail
station canopies to be built in various Chinese cities. The functional requirements for
the façade were to provide shading from direct sunlight during the summer and to
allow solar penetration and maximum daylighting during the winter. The design
challenge was to create a single parametric façade panel that could satisfy the
requirements above for the specified canopy geometries and geographic locations.
The second project instructed students in the value of developing and prototyping a
design logic for repeatable deployment, where each instance was topologically
identical but geometrically unique given the varying context of the panel. As with
the first assignment, the deliverables were a parametric logic diagram and 3-D
parametric CAD model that could be reconfigured to each of the specified station
locations.
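One way to sketch this "topologically identical but geometrically unique" idea: a single panel rule derives its overhang depth from the site's summer-solstice noon solar altitude, so the same logic yields different geometry per city. The cities, latitudes, window height, and shading rule below are illustrative assumptions, not the students' actual designs:

```python
import math

# Hypothetical instancing of one parametric panel across sites: every instance
# shares the same topology (window + overhang), but the overhang depth is
# derived from each site's latitude, so the geometry differs per city.

def noon_altitude(latitude_deg, declination_deg):
    # solar altitude at solar noon (degrees), standard approximation
    return 90.0 - latitude_deg + declination_deg

def overhang_depth(window_height, latitude_deg):
    # depth that shades the window at the summer solstice (declination +23.44)
    alt = math.radians(noon_altitude(latitude_deg, 23.44))
    return window_height / math.tan(alt)

# same panel logic, different geometry per city (latitudes approximate)
for city, lat in [("Shanghai", 31.2), ("Beijing", 39.9), ("Harbin", 45.8)]:
    print(city, round(overhang_depth(2.0, lat), 2))
```

Higher-latitude sites get deeper overhangs, while the panel's parametric logic stays identical.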
Problem Formulation
The objectives and constraints for each subsystem were included in the design
brief as described below:
The construction of the parametric model was one of the most challenging
aspects of the assignment. Student teams generally took one of two approaches to
parametric model creation. The first approach was to create the parametric model
collaboratively, with all team members participating concurrently in the
modeling process. Teams that used this approach were generally satisfied with the
quality of the parametric model, but felt that having all team members participating
concurrently in the modeling process limited the productivity of the team.
Alternatively, some teams assigned the architect the task of creating the parametric
model with relatively little input from others. In this scenario, the architect found it
difficult to communicate the parametric logic to the rest of the design team. In
addition, the other team members often found the parametric model deficient in that it
did not afford them enough flexibility to explore desired design variations that were
significant for their particular discipline.
Process Integration
A variety of analysis tools were required to assess the performance of a given design
option with respect to the design objectives and constraints defined above. The
software tools used and their purpose are described below.
Figure 1: Sample final project results showing optimal tall building forms from the
perspective of each subsystem (courtesy of John Basbagill, Spandana Nakka and Jieun Cha)
CONCLUSIONS
The Design Optioneering methodology was presented and applied by students
to a multidisciplinary design project involving the optimization of a tall building
massing considering architectural, structural, and façade performance. In general, the
students felt that Design Optioneering enabled them to substantially reduce design
cycle latency and to evaluate more design alternatives than conventional design
methods allow. It was observed that the method required a substantially different
approach to design than the students were accustomed to. Perhaps the most significant
changes involved the requirement to define the complete range of design alternatives
at the beginning of the process and the relatively long set up time required to create
the integrated process models. At the beginning of the class, students struggled to
understand how a given design parameterization might impact the range of design
forms and performance, but the students became much more skilled in this area with
practice. Further research is underway to make Design Optioneering more
collaborative and interactive as well as to understand what types of design problems
are best suited for this method.
ACKNOWLEDGEMENTS
We would like to thank John Barton, the director of the Architecture program
at Stanford University, Professor Martin Fischer and USC’s School of Architecture
Dean, Qingyun Ma for the opportunity to teach the course. The course was supported
by the Stanford Institute for Creativity and the Arts (SiCa), the Department of Civil
and Environmental Engineering, and the Center for Integrated Facility Engineering
(CIFE).
REFERENCES
Akin, Ö. (2002). Variants in Design Cognition. Design Knowing and Learning Cognition in Design
Education. C. Eastman, M. McCracken and W. Newstetter. Amsterdam, Elsevier: 105–124.
Audet, C., J. Dennis Jr, et al. (2000). A surrogate-model-based method for constrained optimization.
Eighth AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and
Optimization, AIAA-2000-4891.
Booker, A. (1998). Design and analysis of computer experiments. 7th AIAA/USAF/NASA/ISSMO
Symposium on Multidisciplinary Analysis and Optimization. St. Louis, MO AIAA.
Booker, A. J., J. E. Dennis, et al. (1999). "A rigorous framework for optimization of expensive
functions by surrogates." Structural and Multidisciplinary Optimization 17(1): 1-13.
Burry, M. and B. Kolarevic (2003). "Between intuition and process: Parametric design and rapid
prototyping." Architecture in the Digital Age: Design and Manufacturing. Ed. Branko
Kolarevic. London: Taylor & Francis: 54-57.
Eastman, C. M., W. M. McCracken, et al. (2001). Design Knowing and Learning: Cognition in Design
Education. Oxford, UK, Elsevier Science Ltd.
Fischer, M. (2006). Formalizing Construction Knowledge for Concurrent Performance-Based Design.
Intelligent Computing in Engineering and Architecture: 186-205.
Flager, F., A. Aadya, et al. (2009). Impact of High Performance Computing on Discrete Structural
Member Sizing Optimization of a Stadium Roof Structure. CIFE Technical Report Stanford,
CA, Stanford University: 1-10.
Flager, F. and J. Haymaker (2007). A Comparison of Multidisciplinary Design, Analysis and
Optimization Processes in the Building Construction and Aerospace Industries. 24th
International Conference on Information Technology in Construction. I. Smith. Maribor,
Slovenia: 625-630
Flager, F., B. Welle, et al. (2009). "Multidisciplinary Process Integration and Design Optimization of a
Classroom Building." Information Technology in Construction 14(38): 595-612.
Gallaher, M. P., A. C. O’Connor, et al. (2004). Cost Analysis of Inadequate Interoperability in the U.S.
Capital Facilities Industry. Gaithersburg, Maryland, National Institute of Standards and
Technology. NIST GCR 04-867: 210.
Gerber, D. J. (2007). Parametric Practices: Models for Design Exploration in Architecture.
Architecture. Cambridge, MA, Harvard Graduate School of Design. D.Des.
Gerber, D. J. (2009). The Parametric Affect: Computation, Innovation and Models for Design
Exploration in Contemporary Architectural Practice. Design and Technology Report Series.
Cambridge, MA, Harvard Design School.
Holzer, D., R. Hough, et al. (2007). "Parametric Design and Structural Optimisation For Early Design
Exploration." International Journal of Architectural Computing, 5(4): 625-643.
Holzer, D., Y. Tengono, et al. (2007). Developing a Framework for Linking Design Intelligence from
Multiple Professions in the AEC Industry. Computer-Aided Architectural Design Futures
(CAADFutures) 2007: 303-316.
Sacks, R., C. M. Eastman, et al. (2005). "A target benchmark of the impact of three-dimensional
parametric modeling in precast construction." PCI journal 50(4): 126.
Shea, K., R. Aish, et al. (2005). "Towards integrated performance-driven generative design tools."
Automation In Construction 14(2): 253-264.
Szalapaj, P. (2001). CAD Principles for Architectural Design, Architectural Press.
Zhao, L. and Y. Jin (2003). Work Structure Based Collaborative Engineering Design. ASME 2003
Design Engineering Technical Conferences and Computers and Information in Engineering
Conference. Chicago, IL. DETC2003/DTM-48681: 1-10.
Synectical Building of Representation Space: a Key to Computing Education
ABSTRACT
This paper proposes a method for building a design representation space capturing
domain knowledge and at the same time creating an opportunity to acquire
knowledge outside the problem domain. This dual emphasis increases the potential
for producing novel designs. The method combines the advantages of heuristic
thinking based on Synectics with traditional systematic and analytical thinking and is
intended mostly for use in computing education. It allows students to develop
a fundamental understanding of how to acquire the knowledge necessary for conceptual
design while preserving their ability to explore various domains and to expand
the representation space.
INTRODUCTION
The method is based on a five-stage process (Fig. 1): Problem Identification, Team
Selection, Problem Formulation, Knowledge Acquisition and Development of Design
Concepts, and Concept Evaluation and Selection.
The first three stages are preparation for the most important and difficult Stage 4 [3],
called “Knowledge Acquisition and Development of Design Concepts”. In the first
stage, called “Problem Identification”, the Team Leader identifies the problem and
presents it in a descriptive form. Next, he/she determines the relative position of the
problem with respect to the State of the Art (SOTA). In the second stage, the Team
Leader selects team members, called “Synectors”. The ideal team of Synectors
should be balanced considering at least nine main characteristics, listed below:
1. Domain Differentiation: Synectors should represent various domains, for
example four engineers and two to three professionals from non-engineering
domains (for example, biology, law, history, etc.)
2. Emotional involvement: differentiated levels of motivation
3. Thinking styles: global and local thinkers, members with legislative,
executive, and judicial thinking styles, etc.
4. Differentiated Age: optimal age is 25-40, but all ages are acceptable
5. Administrative Experience: one or two experienced executives understanding
management
6. Entrepreneurship: one or two entrepreneurs focused on action
7. Job Experience: Synectors should be experienced and successful
8. Differentiated Education: as many domains as possible, including Art,
Engineering, Biology, etc.
9. The “Almost” Individual included: people who are not very successful at
work but have potential
In Stage 3, called “Problem Formulation”, the entire team builds a group
understanding of the problem and works on the formulation of the specific
design tasks. In Stage 4, called “Knowledge Acquisition and Development of
Design Concepts”, all knowledge is acquired and design concepts are developed.
This integrated process is based on the assumption that human development of new
ideas (design concepts in our case) requires knowledge and is inspired by knowledge
from various domains. For this reason, the process of knowledge acquisition in the
proposed method is conducted using Synectics and both internal and external
Synectics sessions. Our idea of using Synectics for knowledge acquisition has been
inspired by great inventors, who also learned using unconventional methods. For
example, Leonardo da Vinci studied human anatomy by dissecting cadavers to
improve his inventions [5], and Thomas Edison sought knowledge in poetry, which
his mother taught him while schooling him at home [6].
When an internal Synectics session is conducted, the most fantastic and infeasible
concepts are created. They are not useless, because they can be considered seeds in
an evolutionary process that may ultimately lead to novel and feasible concepts. Next, the
initial concepts created using Analogies are transformed by Synectors in accordance
with selected metaphors [4]. As a result, the concepts gradually become
better and their feasibility is improved. This part of the session is very important,
because the conducted process leads to the seed questions for the External Synectics
Session. Then, all Synectors distribute the questions through the entire Knowledge
Acquisition Network, searching for new sources of inspiration and concepts, if
possible. An External Synectics Session could be something small, like a short
conversation with friends or family, or something big like an international
teleconference or a forum on the Internet. All knowledge acquired from these
interactions is then presented in the second internal Synectics Session [8]. In this
session the most interesting, novel, and plausible design concepts are produced. After
the development of a class of design concepts, Stage 5, called “Concept Evaluation”
is conducted (Figure 1). In this stage, the produced design concepts are evaluated
first and next the best concept, or several comparable concepts in terms of their
novelty, utility, or feasibility, are selected. At this stage, concepts are usually
presented in descriptive form. Their descriptions are then used to identify
symbolic attributes and their values. Next, other possible values for the individual
attributes are determined and in this way the entire ranges of variation are obtained
for all identified attributes. Finally, these attributes and their values are used to
construct the design knowledge representation space for a given problem. The
developed design representation space allows the preparation of patent claims and/or
of design specifications for the detailed design.
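The final step, building the representation space from attributes and their value ranges, amounts to forming all attribute-value combinations. The sketch below illustrates this; the attribute names and values are hypothetical (loosely echoing the water-exchange concepts of Table 2), not the ones actually produced in the session:

```python
from itertools import product

# Hypothetical attributes and value ranges identified from concept descriptions.
attributes = {
    "pumping_principle": ["valve", "membrane", "lock"],
    "energy_source": ["gravity", "electric"],
    "flow_control": ["continuous", "staged"],
}

# The design representation space is the set of all attribute-value combinations.
names = list(attributes)
space = [dict(zip(names, values)) for values in product(*attributes.values())]

print(len(space))   # 3 * 2 * 2 = 12 candidate design descriptions
print(space[0])
```

Each element of `space` is one symbolic design description; patent claims or design specifications can then be drafted over the attribute ranges rather than over a single concept.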
VALIDATION OF METHOD
Symbolic Analogy. One of the most powerful analogies is Symbolic Analogy. This
analogy represents a logical unit by a symbol. Very often, the symbols in this
analogy are natural objects, such as human body parts, trees, or leaves. Often,
ideas generated by the symbolic analogy could equally well be developed using a
direct analogy, but the mere act of looking for symbols affects the development of
new solutions. Thus, despite the seemingly similar results, application of only one of
them may be insufficient. The selected results from the Symbolic Analogy Stage of
Synectics Session are presented in Table 2.
Table 2. 1st, 2nd & 3rd class of concepts developed using Symbolic Analogy
Item 1 (SA):
  1st class of concepts: Human heart used as a pump in water exchange in the river
  2nd class of concepts: Valves of a combustion engine used to control water
  pumping; Dialysis machine used to clean the water in the river
  3rd class of concepts: Canal Lock System (with the use of a water exchange system)
…
Attributes determination. After concept selection, the function tree was
decomposed into various attributes in order to describe each function.
The nature and extent of such knowledge representation has a direct
impact on the novelty of the produced design concepts and often also determines whether a
given problem can be solved. The proposed method has been developed as a result of
extensive research on methods and tools for building design knowledge
representation space. It has been tested with a group of students and modified as a
result of the provided feedback. The method is not easy to use and it is appropriate
only for students familiar with Inventive Engineering and with Synectics. Also, all
team members must be carefully selected and prepared for their participation in the
team efforts. During the entire process the team cohesion must be maintained and
team members constantly motivated and encouraged to be involved and to contribute
to the process. The method is also sensitive to the internal group balance, i.e. no
group member is supposed to dominate the team and the Team Leader must
constantly react to the changes in the group’s dynamics.
The conducted experiments were successful. The team efforts produced a continual
flow of concepts and all members contributed in various ways, reflecting their
knowledge and personalities. Most likely, the team size (four members) was too
small, which might have had a negative impact on the results, because all team members
needed to be fully engaged at all times, a state that was sometimes nearly impossible to maintain.
Our experiments clearly demonstrated that it is possible to develop a
transdisciplinary design representation space for inventive conceptual design and that
this goal can be accomplished in a systematic manner. Synectics proved to be
difficult to use but it helped to acquire a rich body of knowledge. The method still
requires refinements but even in its present form it can be used for teaching how to
acquire transdisciplinary knowledge and how to use it to develop a design knowledge
representation space.
REFERENCES
ACKNOWLEDGEMENTS
This article is a result of research conducted by the first author at George Mason
University and in cooperation with the second author. The Synectics session was
held at George Mason University in November, 2010. The authors thank all
synectors for their participation and numerous contributions, including Mario
Cardullo, David Flanigan and Ali Adish. Finally, the authors would like to
acknowledge contributions of Izabela Koziolek (izabela.kw83@gmail.com) who has
prepared all drawings used in Figure 2.
Enhancing Construction Engineering and Management Education using a
COnstruction INdustry Simulation (COINS)
ABSTRACT
As the millennial generation enters the higher education system, many have
spent as many hours playing computer games as they have in the classroom during their
lifetime; therefore, a natural transition is for our learning environments to begin to
use techniques from the gaming world. Simulations have been used for decades
to help people learn, e.g., flight and driving simulators. Current on-line simulations
run the gamut from complicated mathematical models to interpersonal skills
development tools. Some simulations are entirely online, while others mix in real-world,
in-person rehearsals that follow the time spent online.
Much of the basis for simulation design has historically been rooted in
experiential learning. Human beings absorb information through the senses, yet
humans ultimately learn by doing. Learning also involves feeling things about the
concepts (emotions) and doing something (action). These elements need not be
distinct; they can be, and often are, integrated (Lewin 1995). In the book
Experiential Learning, David Kolb describes learning as a four-step process (Kolb
1994). He identifies the steps as first watching, second thinking (mind), third feeling
(emotion), and fourth doing (muscle). Kolb wrote that learners have immediate
concrete experiences that allow humans to reflect on new experience from different
perspectives (Kolb 1994). From these reflective observations, humans engage in
abstract conceptualization, creating generalizations or principles that integrate
observations into sound theories. Finally, humans use these generalizations or
theories as guides to further action. Active experimentation allows humans to test
what was learned in new and more complex situations. The result is another concrete
experience, but this time at a more complex level.
The first version of COINS was Building Industry Game (BIG) and was
originally developed to enhance student learning in construction management
departments. BIG originated out of an idea by Hal Johnston, Professor in the
Construction Management Department, and Emeritus Faculty member Jim Borland at Cal
Poly. The BIG simulation game focused on the commercial building sector of the
construction industry. BIG had a built-in estimating and scheduling simulation
and a limited accounting database. Students used BIG to emulate managing a
commercial building contractor. The origins of BIG began with Glenn Sears,
Professor Emeritus, of the University of New Mexico. Professors Johnston and
Borland were granted permission from Professor Sears to write, modify, convert BIG
to C++. The idea that BIG could become something much larger and more a robust
game came about with collaboration between Hal Johnston and Jim Borland at Cal
Poly. It was their goal that BIG would become part of a larger integrated
construction company simulation that incorporated more sectors of the construction
industry; COINS incorporates their vision.
Using BIG as a template, COINS was developed into a web-based simulation
written with a Java front-end and a PostgreSQL database. COINS was
developed using open-source software. The intent of COINS was to develop a
simulation beyond just an estimating game; the goal was to produce a simulation
that required students to create a strategy for human resources management, business
development/procurement of work, and project management.
Each project consists of nine (9) activities, which together comprise the
project schedule. These are listed in the table below.
For each project activity, there are five (5) different construction methods to
select from; therefore, the schedule and cost estimate depend on the methods
selected for each project activity.
The typical use of COINS involves dividing a class into teams, each of which forms
a virtual construction company. Student teams are able to hire virtual staff as needed,
deal with monthly problems, make choices, and experience the effects of their
decisions. During game play, participants are exposed to a range of real-world
scenarios, learn from their mistakes, and experience the totality of management
required of the construction professional. Each team is given an equal amount
of capital at the beginning of the
game.
COINS allows the game administrator (instructor) to place the player or team
into a situation or incident that could require a quick short-term solution or possibly a
long-term change in the company. Situations also take the form of cases that require
ongoing management by the team over an extended period of time. The game can
simulate the month-to-month problems, issues and decisions required to manage a
construction company successfully. Specific aspects of the game play are described
below: Human Resources Management, Business Development/Procurement of
Work, and Project Management.
Human Resources Management
The first order of business in game play involves students forming multiple
teams and creating a virtual construction company. They must develop a mission and
value statement to define their company. Student teams are given a username and
password by the instructor. Teams register their team members, and each student team
member plays a role in the company's organization. Teams are required to hire
personnel, creating main-office overhead (e.g., President, Marketing Director,
Estimator, Student Intern, Scheduler, Accountant). They are permitted to change
personnel as needed, whether for growth or for other reasons.
Business Development/Procurement of Work
As the period advances, the computer evaluates the estimates for each project
and awards a construction contract to the lowest responsive team. COINS also
generates an estimate internally for every project in an effort to check that the teams'
estimates are within reason. Teams evaluate the results and attempt to interpret their
competitors' strategies as the game progresses. Construction estimates are rejected if
they fall below a minimum amount calculated by the computer. In order to propose
on a project, teams must have cash-on-hand in addition to positive financial
indicators. These factors assist COINS in determining individual project size limits
for bonding purposes, which are at least 5% of the estimate. COINS does not permit
teams to become overloaded with too many projects; therefore, all teams have a
work-in-progress bonding limit that may not be exceeded.
The marketplace has both private projects, in which contracts are most
frequently negotiated and non-bonded, and public projects, where contracts are
typically bid and must be bonded; COINS contains both. After some success, a
company may be placed on select lists and even be considered for negotiated
projects.
Project Management
Players must monitor their financial position as work progresses and submit
requests for payment for their work to date. Also, teams must create strategies to
improve their bonding limits. A record of successful projects creates an opportunity
to obtain negotiated work. At the end of every period, each team receives:
Progress Report
Complete Dynamic Financial Report
Analysis Report of the work accomplished and the financial results to date
The amount of work completed during a period depends on the production
rate for the work packages selected for each activity and on uncertainty factors,
including weather conditions, labor availability, and fluctuating material costs.
Each team must evaluate the projects in progress, changing construction methods
if warranted, and at the very least submit a progress payment for the work
completed during that period. Accounts receivable affect cash flow, the cash
position on the balance sheet, and the team's bonding capacity. The end-of-period
financial reports show expenses incurred for:
Direct construction costs
Bidding costs
Consulting services
Liquidated damages, and
Interest on borrowed money
Changes in the company's financial position will change its ratios and are logged
along with changes to the company's appraisal metrics:
Financial liquidity
Financial success
Responsibility
Pace
Ethics
Name recognition
A financial report shows the final total worth of the firm in either case.
Maximization of profit and a strong, positive financial condition are the main
objectives, but additional emphasis can be placed on the company appraisal metrics.
At the conclusion of the game play, the instructor can either have the simulation
forecast the expected results of any on-going projects or use the actual results at that
time.
At Cal Poly, COINS has been used in several courses, including Professional
Practice, Construction Estimating, Construction Accounting, Management of the
Construction Firm, Business Practices, and most recently Heavy Civil
Construction Management. During the 2005/2006 academic year, COINS was first
used at the regional level. Teams from several universities in Associated Schools of
Construction (ASC) Regions 6 and 7 competed against each other.
The simulation has a built-in grading module that can be used to obtain
statistics on the various companies for comparison or for classroom grading of
the simulation. Faculty members can also develop their own method of
grading. To assess participation and student learning, the faculty member is able to
use the following criteria:
Number of instances a team proposes to perform a project
Number of instances a team's proposal is rejected (due to factors such as
insufficient bonding capacity, a substantially low cost estimate, etc.)
Number of instances a team procures a project
Number of instances the team retains earnings at the end of a cycle
Company’s appraisal metrics
learning. There is no substitute for time on task. The use of COINS assists students in
budgeting their time. Allocating realistic amounts of time means effective learning
for students and effective teaching for faculty. COINS communicates high
expectations. High expectations are important for everyone, even for the poorly
prepared, for those unwilling to exert themselves, and for the bright and
well-motivated. Expecting students to perform well becomes a self-fulfilling prophecy
when teachers and institutions hold high expectations for themselves and make extra
efforts. COINS respects diverse talents and ways of learning.
REFERENCES
Aldrich, C. (2005). Learning by Doing: A Comprehensive Guide to Simulations,
Computer Games, and Pedagogy in e-Learning and Other Educational Experiences.
John Wiley & Sons.
Kaye, J., and Castillo, D. (2002). Flash MX for Interactive Simulation: How to
Construct & Use Device Simulations. Delmar Learning. Companion CD-ROM with
full source code.
ABSTRACT
To teach programming language courses for undergraduate engineering
students, instructors face a plethora of challenges. Unlike students in similar
courses provided by the computer science department, engineering students must
review mathematical concepts as well as learn programming pragmatics in order to
solve engineering problems, e.g., matrix class creation and manipulation.
Additionally, practice is considered an essential part of the learning process for
deeper understanding. However, plagiarism is common among students'
source code. To resolve these problems, an ontology-based model and system,
called the Programming Language Online Exam Platform (PLOEP), is proposed to
support practice and examination in the programming course. A questionnaire
was designed and distributed to assess the effectiveness of PLOEP. Results show
that engineering students can learn programming concepts more efficiently and
effectively by taking exams on PLOEP. Finally, expanding the knowledge base of
PLOEP is recommended to cover more concepts, and other challenges associated
with PLOEP are discussed.
INTRODUCTION
Nowadays, programming language has become a required course in many
universities for undergraduate students in civil engineering departments.
However, the instructional focus for engineering students and for computer science
students may not be the same: instruction for engineering students may concentrate
more on the application of the programming language. To achieve a better
learning effect, exercise-oriented instruction is considered a proper approach for
students (Lahtinen et al., 2005), but generating many questions can be burdensome
for teachers. In addition, plagiarism may appear (Spinells et al., 2007) and reduce
the learning effect when such a large number of exercises is given. To resolve
these problems, an ontology-based approach is suggested in this research. Ontology
is a representative approach to knowledge sharing and reuse, and reasoning
mechanisms can operate over an ontology model. These characteristics can be
applied to generate questions dynamically. The proposed approach constructs an
online exam platform, named the Programming Language Online Exam Platform
(PLOEP), with an ontology model of the basic set concepts from high school
mathematics (Halmos, 1974) as the core question generator. After PLOEP is
described, the verification and validation performed in a freshman programming
language course are presented, with some discussion of the performance
evaluation. Finally, conclusions for the proposed approach are presented.
PROPOSED APPROACH
An online exam platform, named PLOEP, is the chosen approach for
delivering such exercise-oriented instruction in a programming language course.
The expected actors of PLOEP are sorted into four categories: senior teacher,
junior teacher, student, and grader. The whole instruction process with the aid
of PLOEP proceeds in four steps. First, the senior teacher designs an appropriate
question template for the varying questions. Second, the junior teacher obtains
questions of the requested difficulty levels, confirms the suitability of each
question for the test, and joins them into a test for students. Third, students take
the test for programming language practice. Finally, the junior teacher or the
grader scores the tests, since PLOEP does not include an automatic assessment
function. The system structure can be simplified to an online portion and a core
question generator: the senior teacher utilizes the question generator, and the
other three actors interact with the online portion.
The question generator is developed based on the PLOEP ontology
model. As shown in figure 1, the model contains two parts: a concept part for the
set concepts of high school mathematics, and a C++ implementation part. The
C++ implementation part contains the loop methods and data structures for
arranging a set. The concept part describes three categories of operators, shown
as shaded ellipses: basic operators, element operators, and principle operators.
These operators are assumed to take an original set and produce an output after
operating, with or without an input; they are described in the form
"OperatorName (Input): Output." The single-line arrow representing the "use"
relationship indicates that element operators and principle operators can be
implemented with basic operators. The inheritance hierarchies of the basic
operator and the element operator each have only two subclasses, whereas the
subclasses of the principle operator are more complicated. Principle operators
can be viewed as two groups: one group contains the classes that use "Belonging,"
and the other contains those using "Difference" and "Union." The difficulty level
shown by the stereotype in figure 1 is assigned to each operator. In the model, the
static difficulty levels range from 1 to 4, but a difficulty level can increase by 1
through the "use" relationship; thus, the exact difficulty levels of the questions
that can be produced from the model range from 1 to 5.
The questions generated from the PLOEP ontology model fit the
template shown in figure 2. The sentences in [ ] brackets are optional; they
appear when the question is generated through the "use" relationship. From the
template, the variation in the questions can be sorted into four types:
"which operator is to be implemented," "which loop method must be used,"
"which data structure must be used," and "which operator must be used to
implement another operator." The total number of questions that can be generated
by the PLOEP ontology model from these four variation types is calculated to be
3,624. With such a great number of questions, students have more opportunities
to practice the programming language.
VERIFICATION
The number of questions that can be generated from the PLOEP ontology model is
estimated at 3,624, as described above. A program was designed to inspect the
model's performance, including automatic test generation for the different
difficulty levels as well as the clarity of the descriptions and the grammatical
accuracy of the wording in the questions. Nevertheless, on account of the large
number of questions, it is hard to test all of the possibilities generated by the
PLOEP ontology model. Consequently, the verification of the model was simplified
to cover only each static difficulty level and some derived difficulty levels. After
the testing, the ability and accuracy of the automatic test generation of the PLOEP
ontology model were verified.
VALIDATION
The traditional approach in a programming language course is instruction
delivered primarily by the instructor, assisted by some fixed exercises. However,
instruction led mainly by the instructor, without sufficient practice, does not
impress much upon students. Also, fixed exercises create opportunities for
plagiarism, which decreases the learning effect. In addition, the examination that
accompanies the traditional approach is usually given on paper, which offers
students more opportunities to cheat; other examinations are computer-based, but
most of their questions are multiple-choice or cloze questions, which may be unable
to measure students' full understanding.
In order to verify that the exercise-oriented instruction proposed in this
research is better than the traditional approach, an experiment in a freshman programming
language course was implemented. The level of realization of programming
Instruction. The first stage started immediately after the mid-term exam of the
course. The content, including a review of the set concepts of high school
mathematics and the programming language knowledge related to sets, was
distributed over two days, December 13 and 20. Set arrangement in a C++ program
and some basic operations were taught on December 13, and advanced set
operations such as union and intersection were taught on December 20. In this
stage, students were divided into two groups. The instructor and the teaching
materials for the two groups were the same, but the instruction methods differed.
For the first group, the instructor taught the necessary concepts, and students spent
most of the class time doing exercises. For the second group, the instructor taught
the concepts, and students had no chance to do any exercises. After the teaching
process, the instructor explained the exercises to all students, in order to ensure
they all knew how to solve the set problems with the programming language.
Examination. After the instruction stage, a quiz was held to evaluate and compare
the learning performance of the two groups. The approach stated in the previous
section is designed for exercise-oriented instruction, providing many questions and
automatic test generation; however, the objective of the quiz was to test the
students fairly with the same questions. As a result, the quiz was paper-based,
setting aside the automatic generation part. Furthermore, the quiz was open-book,
and all questions were short-answer questions. The quiz contained two types of
questions: basic set questions and questions applying set concepts to an
engineering problem. The basic set questions covered the concepts introduced in
the course. For the application questions, unified soil classification was chosen
and adapted into programming language questions.
Since unified soil classification is taught in the soil mechanics course, which
is taken by senior students, the questions applied only the concept of the plasticity
chart. Also, the plasticity chart, shown in figure 3, was simplified to emphasize its
relationship to the set concepts: the organic soil types in the chart were removed.
Different types of soil can be viewed as sets, and applying set operations such as
intersection is one way to find the type of a given soil.
Assessment. After the quiz, the scores were calculated to assess the performance
of the proposed instruction method. The results are discussed in the next section.
DISCUSSION
The experiment was arranged between the mid-term exam and the final
exam. The form of the mid-term exam and the final exam differed from the quiz:
both were computer-based with static choice questions, but the questions and the
choices were presented in randomized order for each student. Before the mid-term
exam, basic C++ programming language concepts were introduced; in the period
between the quiz and the final exam, only a course review was provided, with no
other instruction. Thus, two indices are used for the learning performance
evaluation: Quiz-Mid (QM) and Final-Mid (FM). The name of each index indicates
how it is calculated; for example, the QM index is the quiz grade minus the
mid-term exam grade. The indices present the progress of the students clearly. The
results are shown in table 1; although the full score of the mid-term exam and the
final exam was 20 points and that of the quiz was 7 points, the scores are scaled to
100 points here. Group 1 represents the students who learned in the
exercise-oriented way, and group 2 represents the students instructed in the
traditional way.
Table 1. Learning performance indices, scaled to 100 points.
Group      QM        FM
Group 1   -10.550   -6.950
Group 2   -17.214   -14.920
CONCLUSION
A number of issues in virtual learning environments (VLEs) for
programming languages have been studied, but few of them concern the difference
between teaching civil engineering students and teaching computer science
students. This study constructs an ontology model to represent students' existing
knowledge together with C++ concepts. The "set" concepts taught in high school
mathematics courses are selected as the example of existing knowledge. The
concepts from set theory and the C++ mechanisms are integrated so as to
dynamically generate questions for students to practice. The verification indicates
that the question-generation function of the PLOEP ontology model works. The
experiment described in the validation section and the results presented in the
discussion show not only that exercise-oriented instruction is better for students'
understanding, but also that it reduces the chance of plagiarism, owing to the
dynamic characteristic of the automatic question generation. As a result, this
ontology-based approach provides an effective way to deliver a programming
language course.
REFERENCES
Halmos, P. R. (1974). Naive Set Theory, Springer-Verlag, New York.
Lahtinen, E., Ala-Mutka, K., and Jarvinen, H. M. (2005). "A study of the
difficulties of novice programmers." Proceedings of the 10th Annual SIGCSE
Conference on Innovation and Technology in Computer Science Education,
Caparica, Portugal.
Spinells, D., Zaharias, P., and Vrechopoulos, A. (2007). "Coping with plagiarism
and grading load: randomized programming assignments and reflective
grading." Computer Applications in Engineering Education, 15(2), 113-123.