IABSE2007 Laser Scan Bridge Inspection
3 authors, including James H. Garrett, Carnegie Mellon University.
All content following this page was uploaded by Pingbo Tang on 15 October 2014.
Summary
The current process for acquiring bridge geometric data for the National Bridge Inventory (NBI) is
based on manual surveying, manual data processing and interpretation. Hence, it is time-consuming
and error-prone. This paper presents a laser-scanning-based approach to acquire geometric data for
bridge inspection, describes a case study and discusses the advantages of this approach over current
practice from the perspectives of both bridge inspection and management. Both the current
approach and the laser-scanning-based approach consist of three major steps: data collection,
data processing and data interpretation. Yet, a comparison of these approaches highlights major
differences in the accuracy and comprehensiveness of the data collected. Based on the comparison,
we suggest a need for a formalized way to decompose higher level bridge inspection goals to enable
successful application of laser scanning technology for bridge inspection.
Keywords: bridge inspection, laser scanning, bridge management, inspection goal decomposition,
sensor planning, geometric feature extraction, geometric reasoning
1. Introduction
In the United States, the National Bridge Inventory (NBI) program requires bridge inspectors to
inspect more than 600,000 bridges at least once every two years. According to our research, of the
116 NBI data items, 16 are related to bridge geometric features (Table 1) and 11 are deduced from
bridge geometric features [1]. Hence, their geometric data collection is important for
bridge management. In spite of these facts, current bridge geometric data collection
methods are time-consuming and error-prone because they rely on manual data collection
methods [2]. Traditional geometric data collection instruments, such as tape, gauge and
total station, often involve the physical positioning of an inspector near hard-to-access
bridge components [3], which causes safety problems for the bridge inspectors and makes
the data collection process tedious and time-consuming [4, 2]. These instruments are also
only capable of acquiring measurements at discrete and sparse positions. Hence, they
cannot effectively capture the shape of the structure to detect its deflection pattern [5].
Moreover, most collected data are recorded in a paper-based manner or in the form of flat
text files without any semantic support, so that the geometric information interpretation
and retrieval process involves a large amount of manual data transfer work.

Table 1 Geometric Data Items in NBI and their Accuracy Requirements

Item Number  Name                                              Precision Requirement
34           Skew                                              Degree
116          Minimum Navigation Vertical Clearance,
             Vertical Lift Bridge                              10^-1 m
49           Structure Length                                  10^-1 m
48           Length of Maximum Span                            10^-1 m
50           Curb or Sidewalk Widths                           10^-1 m
51           Bridge Roadway Width, Curb to Curb                10^-1 m
52           Deck Width, Out to Out                            10^-1 m
32           Approach Roadway Width                            10^-1 m
47           Inventory Total Horizontal Clearance              10^-1 m
55           Minimum Lateral Under Clearance on Right          10^-1 m
56           Minimum Lateral Under Clearance on Left           10^-1 m
39           Navigation Vertical Clearance                     10^-1 m
40           Navigation Horizontal Clearance                   10^-1 m
10           Inventory Route Minimum Vertical Clearance        10^-2 m
54           Minimum Vertical Under Clearance                  10^-2 m
53           Minimum Vertical Clearance over Bridge Roadway    10^-2 m
In recent years, researchers and bridge management agencies identified laser scanning technology
as a promising alternative for bridge geometric data collection and documentation due to its high
accuracy and high data collection rate [6-8, 5]. Laser scanners can collect thousands of 3D points
per second and can achieve mm-level accuracy for every single point. With laser scanned data,
bridge inspectors can create 3D bridge models for any further analysis, such as virtual
measurements, geometric feature extraction and geometric spatial reasoning.
This paper presents a laser-scanning-based geometric data collection process for bridge inspection
using a case study and shows some advantages and disadvantages of using a laser scanner over
current manual practice from the perspectives of both bridge inspection and bridge management.
We describe both the current bridge geometric data acquisition process and the laser-scanning-based
process as a three-stage activity: data collection, data processing and data interpretation. A
comparison of these two processes shows that a laser-scanning-based process can provide more
accurate and comprehensive data that can serve future data needs. Some of the limitations of
using laser scanners can potentially be overcome by developing formalisms for planning data
collection in the field and for automated processing of the collected data to identify
geometric features of interest.
2. Case Study
We have utilized a scanner for collecting as-is conditions of an existing bridge. The scanner used is
a phase-based laser scanner. Table 2 shows the major technical data about this scanner, as specified
by the manufacturer. The data collection rate of this scanner can be up to 62,500 points per second
so that a typical scanning process only takes about 100 seconds. The accuracy of the positioning of
Table 2 Technical Data of Z+F Imager 5003 [9]

Range Resolution:                   1.0 mm/LSB, 16 bit
Sampling Rate:                      <= 500,000 pixels/sec
Typical data acquisition rate:      125,000 pixels/sec
Range Noise at 10 m:                Reflectivity 20% (dark grey): 3.0 mm rms;
                                    Reflectivity 100% (white): 1.3 mm rms
Range Noise at 25 m:                Reflectivity 20% (dark grey): 9.0 mm rms;
                                    Reflectivity 100% (white): 3.0 mm rms
Vertical Field of View:             310°
Horizontal Field of View:           360°
Vertical Resolution:                0.018°
Horizontal Resolution:              0.009°
Vertical Accuracy:                  0.02° rms
Horizontal Accuracy:                0.02° rms
Linearity Error:                    < 5 mm
Beam Divergence:                    0.22 mrad
Beam Diameter at 1 m distance:      3 mm, circular
Max. vertical scanning speed:       2,000 rpm
Max. Output Data Rate:              5 MByte/sec
Max. number of pixels, vertical:    20,000
Max. number of pixels, horizontal:  20,000
Scanning time (8,000 x 8,000 pixel image, total field of view): 140 sec
a single point using this scanner is about 9 mm at 25 m and about 3 mm at 10 m. The collected
range images are of high resolution. The vertical angular resolution of the scanner can be up
to 0.018° and its horizontal angular resolution up to 0.01°. Assuming that the distance
between the scanner and a planar surface facing it is 10 m, these angular resolution values
mean that the vertical surface sampling step is about 3.1 mm and the horizontal surface
sampling step about 1.7 mm. Considering that the accuracy required by the NBI for most bridge
geometric features is at a scale of several decimetres or centimetres, the data density of
this scanner is expected to satisfy these requirements.
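The sampling-step figures above follow directly from the small-angle arc-length relation; a short check (illustrative helper name):

```python
import math

def sampling_step(distance_m, angular_resolution_deg):
    """Approximate linear sampling step on a surface facing the scanner:
    arc length = distance x angular step (in radians)."""
    return distance_m * math.radians(angular_resolution_deg)

# At 10 m range, using the scanner's angular resolutions:
print(round(sampling_step(10.0, 0.018) * 1000, 1))  # vertical step, mm: 3.1
print(round(sampling_step(10.0, 0.01) * 1000, 1))   # horizontal step, mm: 1.7
```

Both steps are well below the decimetre- and centimetre-level NBI precision requirements in Table 1.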
Fig. 1 3D Model of the Case Study Bridge Generated from Laser Scanned Data

We scanned a highway bridge using the scanner and created a 3D model based on
the scanned data using a commercially
available system. This is a skewed highway bridge with a road underneath it. Fig. 1 shows this 3D
model. We took a total of 12 scans. For every scan, we only kept points within 10 m of the scanning
location for 3D modelling and data
analysis, because within 10 m, the
uncertainty in the data is less than 5mm.
Previously, engineers manually surveyed
this bridge. Based on their surveying
results, we created a 3D model using a
CAD software package. We exported the
3D model as a VRML model and then
imported that VRML model into a
commercially available laser scan data
processing package for virtual 3D
inspection of the bridge.
A comparison of the 3D model created
using survey information with the 3D
model generated from scanned data
demonstrates some benefits of using
laser scanned data. Fig. 2 shows an
overlay of these 3D models and
highlights the differences between these
two models.

Fig. 2 Comparison of the Laser-Scanned-Data-Based Bridge Model and the
Manual-Surveying-Results-Based Bridge Model

Several observations from this figure are noticeable. First, the bottom surface of the
superstructure has an expected deformation, with the middle part of the bridge deflecting. Such
deflections would be hard to capture by manual surveying. Second, one cross beam of the bridge is
at different locations in these two models, suggesting a surveying error during manual data
collection. This case demonstrated some of the benefits of data collection using laser scanners. In
the following sections, we will use this case study to compare current bridge inspection practice and
a proposed bridge inspection approach using laser scanning.
Fig. 5 Proposed Laser Scanning Based Geometric Data Collection Process for Bridge Inspection
[Process diagram: three activities — A1 Data Collection (decompose inspection goals; plan
for surveying; measure measurement goals; record and generate surveying report), A2 Data
Processing (extract data from surveying report; check correctness of data; populate bridge
management database) and A3 Data Interpretation (query surveying results; deduce inspection
goals; generate geometric inspection report) — governed by the bridge inspection
specification, surveying instrument technical data, data format specification and data
reporting regulations, and producing a surveying report, processed data in standard form, a
bridge management database and a geometric inspection report.]
The laser-scanning-based geometric data collection phase is composed of two sub-processes:
inspection goal decomposition and laser scan planning. In this new process, the goal of inspection
goal decomposition is to decompose bridge inspection goals into geometric features that can be
extracted from laser scanned data. For the structure length example, we can decompose
“structure length” into “measuring the front surfaces of the abutments”, since object
surfaces can be extracted directly from laser scanned data. The inspection goal
decomposition process generates a number of
measurement tasks that can be carried out by a laser scanner. Model-based object recognition
research shows that formalization and automation of this process is possible through knowledge
based reasoning [10]. With generated measurement tasks, bridge inspectors can generate scan plans
which can ensure the coverage of all measurement goals with required accuracy and level of detail
while satisfying time constraints on the site. A series of performance-oriented sensor space planning
research projects show that automatic scan planning is a maturing technique [11]. Next-best-view
sensor planning research also shows the possibility of real-time data completeness evaluation [12].
This discussion indicates that formalizing inspection goal decomposition and laser scan planning is
technically feasible and is important for efficient and effective on-site data collection with a laser
scanner.
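As a minimal sketch of this decomposition idea (the class and function names are illustrative, not from the paper), the structure length example could be represented as a goal that expands into scanner-measurable surface tasks:

```python
from dataclasses import dataclass, field

@dataclass
class MeasurementTask:
    surface: str        # a geometric feature extractable from point clouds
    accuracy_m: float   # required accuracy from the NBI (10^-1 m for item 49)

@dataclass
class InspectionGoal:
    name: str
    tasks: list = field(default_factory=list)

def decompose_structure_length():
    """Structure length (NBI item 49) reduces to locating the front
    surfaces of the two abutments and measuring between them."""
    goal = InspectionGoal("structure length")
    goal.tasks.append(MeasurementTask("abutment front surface (near)", 0.1))
    goal.tasks.append(MeasurementTask("abutment front surface (far)", 0.1))
    return goal

goal = decompose_structure_length()
print([task.surface for task in goal.tasks])
```

Each resulting task could then feed the scan-planning step, which must place the scanner so that both surfaces are covered at the required accuracy.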
Bridge inspectors need to process the raw 3D point clouds before data interpretation in order to
ensure the accuracy and completeness of the data. To ensure the data accuracy, they need to filter
the data because the raw 3D point clouds contain noise and other artifacts. Noise refers to
random uncertainties in the position values of the points. Using various filters, such as a
Gaussian filter, bridge inspectors can smooth out this noise. However, there is a risk of
losing object detail by smoothing the data, and it is difficult to ensure that the
uncertainty model used for filtering accurately models the random data collection behaviour
of a scanner. Artifacts refer to erroneous data in the point clouds due to unavoidable
hardware effects of the scanner. The computer vision community has designed many algorithms
to accurately detect and remove such artifacts from
from several scans to ensure that all needed features for inspection goal generation are extractable.
This process is known as data registration. Since at each location of a scanner only part of a bridge
is visible, and since point clouds from different locations are collected under different local
coordinate systems, bridge inspectors need to bring those point clouds to a common coordinate
system for extracting geometric features requiring information from multiple point clouds.
Automatic data registration techniques are maturing, and constraint-based registration
techniques are promising for domain-knowledge-guided data registration [13, 14]. The outputs of the
data processing phase are registered point clouds with noise and artifacts removed.
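A minimal sketch of two of the processing steps just described — bounding range to control uncertainty, and bringing a scan into a common coordinate frame given a rigid transform from registration. The function names and the example transform are illustrative, not the paper's implementation:

```python
import numpy as np

def register(points_local, R, t):
    """Transform a point cloud from a scanner's local coordinate frame
    into the common site frame: p_common = R @ p_local + t. R (3x3
    rotation) and t (3-vector) come from the registration step, e.g.
    from matched targets or an ICP-style algorithm."""
    return points_local @ R.T + t

def remove_range_outliers(points, scan_location, max_range=10.0):
    """Keep only points within max_range of the scan location, as done
    in the case study to bound positional uncertainty to ~5 mm."""
    d = np.linalg.norm(points - scan_location, axis=1)
    return points[d <= max_range]

# Example: a second scan taken 15 m along x and rotated 90° about z.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([15.0, 0.0, 0.0])
scan2_local = np.array([[1.0, 0.0, 0.0]])
print(register(scan2_local, R, t))  # a point at (1, 0, 0) maps to ~(15, 1, 0)
```

Once every scan is expressed in the common frame, features spanning multiple scans (e.g. both abutment faces) become measurable in one model.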
Bridge inspectors interpret the geometric data after the data processing phase. 3D point
cloud interpretation includes three major steps: geometric feature extraction, bridge
component recognition and generation of semantically rich point clouds. For geometric
feature extraction, bridge inspectors need to first segment the point clouds into patches
that can be reliably fitted to a specific primitive surface feature, such as a plane,
cylinder, ellipsoid, cone, or elliptical or hyperbolic paraboloid [15]. Several range
image segmentation algorithms are able to produce reasonable
segmentation results and it has already been shown that human-computer-interaction post-
processing of the raw segmentation results is efficient enough for generating excellent results [16,
17]. After point cloud segmentation, model-matching algorithms can automatically extract
geometric features and recognize bridge components from point clouds based on a proper feature-
based geometric representation of bridge components [15]. This model matching process can assign
semantic information to point clouds and generate the semantically rich point clouds. Semantically
rich point clouds specify which points belong to which bridge component and the geometric
attribute values of recognized bridge components. Based on this information, bridge inspectors can
calculate values of bridge inspection goals either manually or with automatic support of a semantic
reasoning mechanism for inspection goal generation. This process is an inverse process of that for
inspection goal decomposition: bridge inspectors query semantic data such as “abutment north
surface” from semantically rich point clouds when necessary and assemble extracted geometric
features into the values of inspection goals. Structured semantic information in the semantically rich
point clouds is a formal representation of the bridge geometry, so formalization and automation of
this inspection goal value acquisition process is feasible if we can develop a formalism for semantic
geometric information retrieval and inspection goal assembly based on semantically rich point
clouds.
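As one concrete illustration of the primitive-surface fitting step, a least-squares plane fit via SVD — a standard technique, not necessarily the algorithm used in [15] — can recover the normal of a planar patch (e.g. an abutment front surface) despite millimetre-level range noise:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to a segmented point-cloud patch.
    Returns (centroid, unit normal); the normal is the right singular
    vector of the centred points with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Synthetic patch: noisy samples of the plane z = 0, with ~3 mm rms
# range noise as at 10 m scanning range.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = rng.normal(0.0, 0.003, size=(200, 1))
patch = np.hstack([xy, z])
centroid, normal = fit_plane(patch)
print(abs(normal[2]))  # close to 1: the fitted normal is ~(0, 0, 1)
```

Fitted primitives like this carry the geometric attribute values (orientation, extent, position) that the semantically rich point clouds attach to recognized bridge components.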
Table 3 Comparison of Manual Surveying Based Process and Laser Scanning Based Process

Accuracy — Single Point Accuracy
  Manual Process: Angular accuracy: 5 seconds (Leica TPS405). Distance accuracy:
  3 mm ± 2 ppm distance within 170 m (reflectorless mode).
  Laser Scanning Based Process: Angular accuracy: 72 seconds. Distance accuracy:
  3 mm to 9 mm within 26 m, depending on object surface reflectivity.

Accuracy — 3D Model Accuracy
  Manual Process: Not applicable: sparse points are not useful for mesh generation
  based on geometric fitting.
  Laser Scanning Based Process: Mesh generation algorithms can fit surfaces against
  dense point clouds to get a 3D model with higher accuracy; model accuracy is
  x/sqrt(n), where x is the single point accuracy and n is the number of points used
  for surface fitting.

Data Comprehensiveness
  Manual Process: Sparse collected points cannot capture surface details for
  effective deformation detection, or require the definition of special surveying
  targets to capture an expected geometric pattern.
  Laser Scanning Based Process: Dense point clouds can capture details of the bridge
  surface and detect shape changes without having to define any surveying targets,
  as long as the surfaces of interest are within the field of view of a scanner.

Potential for Data Interpretation
  Manual Process: Sparse collected points require bridge inspectors to manually
  assign semantic meaning to each point for interpretation of the data.
  Laser Scanning Based Process: Many computer vision algorithms can recognize 3D
  objects from point clouds through point cloud segmentation, geometric feature
  extraction and model matching, so it is feasible to automate the data
  interpretation process.

Time, Spatial and Human Resource Requirements — On-site Time
  Manual Process: The highway bridge in the case study requires three engineers to
  work one whole day.
  Laser Scanning Based Process: Given scan planning support to ensure collecting all
  data with a small number of scans, the highway bridge in the case study requires
  one engineer to work less than 2 hours.

Time, Spatial and Human Resource Requirements — Off-site Time
  Manual Process: Weeks for data input and manual geometric reasoning.
  Laser Scanning Based Process: Given semantically rich point clouds, bridge
  inspectors may only need a few days to process and analyze the data, but proper
  pre-processing of the data, such as noise removal and point cloud segmentation,
  is necessary and needs additional computation time.
7. Acknowledgements
This material is based upon work supported by the National Science Foundation under Grant
Nos. 0420933 and 0121549. NSF's support is gratefully acknowledged. Some of the early data
collection in this research was carried out under funding provided by the Pennsylvania
Department of Transportation. Any opinions, findings, conclusions or recommendations
presented in this publication are those of the authors and do not necessarily reflect the
views of the National Science Foundation or the Pennsylvania Department of Transportation.
References
[1]  WESEMAN, W. A., Recording and Coding Guide for the Structure Inventory and Appraisal
     of the Nation's Bridges, 1995. http://www.fhwa.dot.gov/BRIDGE/mtguide.pdf
[2]  SANFORD, K. L., HERABAT, P., MCNEIL, S., Bridge Management and Inspection Data:
     Leveraging the Data and Identifying the Gaps, 1999.
[3]  VELINSKY, S. A., RAVANI, B., Advanced Highway Maintenance and Construction
     Technology, 2006. http://www.ahmct.ucdavis.edu/index.htm?pg=OtherStructures
[4]  ALLEN, C. E., DAN, M. F., Updating Bridge Reliability Based on Bridge Management
     Systems Visual Inspection Results, Journal of Bridge Engineering, 2003, 8, 374.
[5]  GORDON, S. J., LICHTI, D. D., STEWART, M. P., FRANKE, J., "Modelling Point Clouds
     for Precise Structural Deformation Measurement", presented at XXth ISPRS Congress,
     Istanbul, Turkey, 2004.
[6]  JASELSKIS, E. J., GAO, Z., WALTERS, R. C., Improving Transportation Projects Using
     Laser Scanning, Journal of Construction Engineering and Management, 2005, 131, 377.
[7]  FUCHS, P. A., WASHER, G. A., CHASE, S. B., MOORE, M., Applications of Laser-Based
     Instrumentation for Highway Bridges, Journal of Bridge Engineering, 2004, 9, 541.
[8]  KRETSCHMER, U., ABMAYR, T., THIES, M., FRÖHLICH, C., "Traffic Construction
     Analysis by Use of Terrestrial Laser-Scanning", presented at ISPRS working group
     VIII/2, Freiburg, Germany, 2004.
[9]  ZOLLER+FRÖHLICH, Technical Data Imager, 2005.
[10] FISHER, R. B., Applying Knowledge to Reverse Engineering Problems, Computer-Aided
     Design, 2004, 36, 501.
[11] SCOTT, W. R., ROTH, G., RIVEST, J.-F., "Performance-Oriented View Planning for
     Automatic Model Acquisition", presented at International Symposium on Robotics, 2000.
[12] BANTA, J. E., WONG, L. R., DUMONT, C., ABIDI, M. A., A Next-Best-View System for
     Autonomous 3-D Object Reconstruction, IEEE Transactions on Systems, Man and
     Cybernetics, Part A, 2000, 30, 589.
[13] SCOTT, W. R., ROTH, G., RIVEST, J.-F., "View Planning with a Registration
     Constraint", presented at Third International Conference on 3-D Digital Imaging
     and Modeling, 2001.
[14] HUBER, D. F., HEBERT, M., Fully Automatic Registration of Multiple 3D Data Sets,
     Image and Vision Computing, 2003, 21, 637.
[15] FISHER, R. B., FITZGIBBON, A. W., WAITE, M., TRUCCO, E., ORR, M., "Recognition of
     Complex 3-D Objects from Range Data", presented at the 7th IAPR Conference ICIAP,
     1993.
[16] HOOVER, A., JEAN-BAPTISTE, G., JIANG, X., FLYNN, P. J., BUNKE, H., GOLDGOF, D. B.,
     BOWYER, K., EGGERT, D. W., FITZGIBBON, A., FISHER, R. B., An Experimental
     Comparison of Range Image Segmentation Algorithms, IEEE Transactions on Pattern
     Analysis and Machine Intelligence, 1996, 18, 673.
[17] YU, Y., FERENCZ, A., MALIK, J., Extracting Objects from Range and Radiance Images,
     IEEE Transactions on Visualization and Computer Graphics, 2001, 7, 351.