


Laser Scanning for Bridge Inspection and Management

Pingbo Tang, PhD Student, Carnegie Mellon University, Pittsburgh, PA, USA, tangpingbo@cmu.edu
Burcu Akinci, Associate Professor, Carnegie Mellon University, Pittsburgh, PA, USA, bakinci@cmu.edu
James H. Garrett, Professor, Carnegie Mellon University, Pittsburgh, PA, USA, garrett@cmu.edu

Summary
The current process for acquiring bridge geometric data for the National Bridge Inventory (NBI) is
based on manual surveying and manual data processing and interpretation; hence, it is time-consuming
and error-prone. This paper presents a laser-scanning-based approach to acquiring geometric data for
bridge inspection, describes a case study and discusses the advantages of this approach over current
practice from the perspectives of both bridge inspection and management. Both the current approach
and the laser-scanning-based approach are composed of three major steps: data collection, data
processing and data interpretation. Yet a comparison of these approaches highlights major
differences in the accuracy and comprehensiveness of the data collected. Based on the comparison,
we suggest a need for a formalized way to decompose higher-level bridge inspection goals to enable
successful application of laser scanning technology for bridge inspection.

Keywords: bridge inspection, laser scanning, bridge management, inspection goal decomposition,
sensor planning, geometric feature extraction, geometric reasoning

1. Introduction
In the United States, the National Bridge Inventory (NBI) program requires bridge inspectors to
inspect more than 600,000 bridges at least once every two years. According to our research, of the
116 NBI data items, 16 are related to bridge geometric features (Table 1) and 11 are deduced from
bridge geometric features [1]. Hence, geometric data collection is important for bridge management.
In spite of these facts, current bridge geometric data collection methods are time-consuming and
error-prone because they rely on manual data collection methods [2]. Traditional geometric data
collection instruments, such as tape, gauge and total station, often require the physical positioning
of an inspector near hard-to-access bridge components [3], which causes safety problems for the
bridge inspectors and makes the data collection process tedious and time-consuming [4, 2]. These
instruments are also only capable of acquiring measurements at discrete and sparse positions;
hence, they cannot effectively capture the shape of the structure to detect deflection patterns [5].
Moreover, most collected data are recorded on paper or in flat text files without any semantic
support, so the geometric information interpretation and retrieval process involves a large amount
of manual data transfer work.

Table 1 Geometric Data Items in NBI and their Accuracy Requirements

Item Number | Name | Precision Requirement
34 | Skew | Degree
116 | Minimum Navigation Vertical Clearance, Vertical Lift Bridge | 10^-1 m
49 | Structure Length | 10^-1 m
48 | Length of Maximum Span | 10^-1 m
50 | Curb or Sidewalk Widths | 10^-1 m
51 | Bridge Roadway Width, Curb to Curb | 10^-1 m
52 | Deck Width, Out to Out | 10^-1 m
32 | Approach Roadway Width | 10^-1 m
47 | Inventory Total Horizontal Clearance | 10^-1 m
55 | Minimum Lateral Under Clearance on Right | 10^-1 m
56 | Minimum Lateral Under Clearance on Left | 10^-1 m
39 | Navigation Vertical Clearance | 10^-1 m
40 | Navigation Horizontal Clearance | 10^-1 m
10 | Inventory Route Minimum Vertical Clearance | 10^-2 m
54 | Minimum Vertical Under Clearance | 10^-2 m
53 | Minimum Vertical Clearance over Bridge Roadway | 10^-2 m
In recent years, researchers and bridge management agencies have identified laser scanning technology
as a promising alternative for bridge geometric data collection and documentation due to its high
accuracy and high data collection rate [6-8, 5]. Laser scanners can collect thousands of 3D points
per second and can achieve mm-level accuracy for every single point. With laser scanned data,
bridge inspectors can create 3D bridge models for any further analysis, such as virtual
measurements, geometric feature extraction and geometric spatial reasoning.
This paper presents a laser-scanning-based geometric data collection process for bridge inspection
using a case study and shows some advantages and disadvantages of using a laser scanner over
current manual practice from the perspectives of both bridge inspection and bridge management.
We describe both the current bridge geometric data acquisition process and the laser-scanning-based
process as a three-stage activity: data collection, data processing and data interpretation. A
comparison of these two processes shows that a laser-scanning-based process can provide more
accurate and comprehensive data that can also serve future data needs. Some of the limitations of
using laser scanners can potentially be overcome by developing formalisms for planning data
collection in the field and for automated processing of the collected data to identify geometric
features of interest.

2. Case Study
We have utilized a scanner for collecting the as-is conditions of an existing bridge. The scanner used
is a phase-based laser scanner. Table 2 shows the major technical data for this scanner, as specified
by the manufacturer. The data collection rate of this scanner can be up to 62,500 points per second,
so a typical scanning process takes only about 100 seconds.

Table 2 Technical Data of Z+F Imager 5003 [9]

Parameter | Value
Range resolution | 1.0 mm/LSB, 16 bit
Sampling rate | ≤ 500,000 pixels/sec
Typical data acquisition rate | 125,000 pixels/sec
Range noise at 10 m | 3.0 mm rms at 20% reflectivity (dark grey); 1.3 mm rms at 100% reflectivity (white)
Range noise at 25 m | 9.0 mm rms at 20% reflectivity (dark grey); 3.0 mm rms at 100% reflectivity (white)
Linearity error | < 5 mm
Vertical field of view | 310°
Horizontal field of view | 360°
Vertical resolution | 0.018°
Horizontal resolution | 0.009°
Vertical accuracy | 0.02° rms
Horizontal accuracy | 0.02° rms
Beam divergence | 0.22 mrad
Beam diameter at 1 m distance | 3 mm, circular
Max. vertical scanning speed | 2,000 rpm
Max. output data rate | 5 MByte/sec
Max. number of pixels, vertical | 20,000
Max. number of pixels, horizontal | 20,000
Scanning time (8,000 x 8,000 pixel image, total field of view) | 140 sec

The accuracy of the positioning of a single point by this scanner is about 9 mm at 25 m, and about
3 mm at 10 m. The collected range images are of high resolution: the vertical angular resolution of
the scanner can be up to 0.018° and the horizontal angular resolution can be up to 0.01°. Assuming
that a planar surface facing the scanner is 10 m away, these angular resolution values mean that the
vertical surface sampling step can be about 3.1 mm and the horizontal surface sampling step about
1.7 mm. Considering that the accuracy required by NBI for most bridge geometric features is at the
scale of several decimetres or centimetres, the data density of this scanner is expected to satisfy
these requirements.
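To make the data density argument concrete, the following minimal Python sketch reproduces the sampling-step arithmetic above; the function name `sampling_step` is ours, and the angular resolutions are simply the values quoted from Table 2.

```python
import math

def sampling_step(angular_resolution_deg: float, distance_m: float) -> float:
    """Approximate spacing (in mm) between adjacent samples on a surface
    facing the scanner, given the angular step and the stand-off distance."""
    return math.radians(angular_resolution_deg) * distance_m * 1000.0

# Values quoted in Table 2 and the text above, at a 10 m stand-off.
vertical_step = sampling_step(0.018, 10.0)    # ~3.1 mm
horizontal_step = sampling_step(0.01, 10.0)   # ~1.7 mm
print(f"vertical step: {vertical_step:.1f} mm, horizontal step: {horizontal_step:.1f} mm")
```

At the NBI precision requirements listed in Table 1 (decimetres to centimetres), sampling steps of a few millimetres leave a comfortable margin.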
We scanned a highway bridge using the scanner and created a 3D model based on the scanned data
using a commercially available system. This is a skewed highway bridge with a road underneath it.
Fig. 1 shows this 3D model. We took a total of 12 scans. For every scan, we only kept points within
10 m of the scanning location for 3D modelling and data analysis, because within 10 m the
uncertainty in the data is less than 5 mm.

Fig. 1 3D Model of the Case Study Bridge Generated from Laser Scanned Data

Previously, engineers had manually surveyed this bridge. Based on their surveying results, we
created a 3D model using a CAD software package. We exported this 3D model as a VRML model
and then imported the VRML model into a commercially available laser scan data processing
package for virtual 3D inspection of the bridge.

A comparison of the 3D model created using survey information with the 3D model generated from
scanned data demonstrates some benefits of using laser scanned data. Fig. 2 shows an overlay of
these two models and highlights the differences between them. Several observations from this
figure are noticeable. First, the bottom surface of the superstructure shows the expected deformation
pattern, with the middle part of the bridge deflecting. Such deflections would be hard to capture by
manual surveying. Second, one cross beam of the bridge is at different locations in the two models,
suggesting a surveying error during manual data collection. This case demonstrates some of the
benefits of data collection using laser scanners. In the following sections, we use this case study to
compare current bridge inspection practice with a proposed bridge inspection approach using laser
scanning.

Fig. 2 Comparison of the Laser-Scanned-Data-Based Bridge Model and the Manual-Surveying-Results-Based Bridge Model
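The 10 m range cut-off applied to each scan can be expressed as a simple distance filter. The NumPy sketch below assumes each scan is an (N, 3) array in the scanner's local coordinate system with the scanner at the origin; the array layout and function name are our assumptions, not the interface of any particular scan processing package.

```python
import numpy as np

def keep_points_within_range(points: np.ndarray, max_range_m: float = 10.0) -> np.ndarray:
    """Keep only points within max_range_m of the scanner origin.

    points: (N, 3) array of x, y, z coordinates in metres, expressed in the
    scanner's local coordinate system (scanner at the origin).
    """
    distances = np.linalg.norm(points, axis=1)
    return points[distances <= max_range_m]

# Example: a synthetic cloud with one point beyond the 10 m cut-off.
cloud = np.array([[1.0, 2.0, 0.5], [8.0, 5.0, 1.0], [12.0, 0.0, 0.0]])
print(keep_points_within_range(cloud))   # drops the point 12 m away
```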

3. Current Geometric Data Acquisition Process for Bridge Inspection


We use IDEF0, a general process modelling language, to describe the current and proposed bridge
geometric data acquisition processes as three-stage processes: data collection, data processing and
data interpretation. Fig. 3 shows the current data acquisition process for bridge inspection.

Fig. 3 Current Data Acquisition Process for Bridge Inspection (data collection: decompose inspection goals, plan for surveying, measure measurement goals, record and generate surveying report; data processing: extract data from surveying report, check correctness of data, populate bridge management database; data interpretation: query surveying results, deduce inspection goals, generate geometric inspection report)

The current geometric data collection process is composed of three sub-processes: inspection goal
decomposition, survey planning and data reporting. The inputs of this process are a set of bridge
geometric inspection goals. Those data items in Table 1 are all bridge inspection goals. The output
of this process is a survey data report containing geometric data for generating values of inspection
goals. Fig. 4 shows two examples of bridge inspection goals: structure length and minimum vertical
under clearance of a bridge. Bridge inspectors need to decide what measurements have to be taken
to determine the values of these inspection goals. We refer to this process as bridge inspection goal
decomposition. Fig. 4 is a longitudinal profile of the highway bridge of our case study. Since
the backsides of the abutments are hidden in soil, we cannot measure them directly to obtain the
structure length. Instead, we can decompose the inspection goal “structure length” into
measurements of the visible surfaces of the abutments. In this way, we can calculate the structure
length from the sizes of the abutments, which are known from the bridge design, and the positions
and orientations of the visible surfaces of the abutments. Similarly, we can decompose the
inspection goal “minimum vertical under clearance” into a set of measurements of the bottom points
of the superstructure and points on the road underneath the bridge. Then we can calculate a number
of vertical gaps; the smallest of these is the minimum vertical under clearance of the bridge.
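These two decompositions can be illustrated with a small Python sketch. It assumes deliberately simplified inputs: abutment front-face positions along the bridge axis and abutment depths taken from the design drawings, plus matched samples of superstructure bottom and road surface elevations. All names and numbers below are illustrative and are not values from the case study.

```python
import numpy as np

def structure_length(front_face_positions: tuple[float, float],
                     abutment_depths: tuple[float, float]) -> float:
    """Structure length from the visible abutment front faces plus the
    abutment depths known from the bridge design (all values in metres)."""
    span_between_faces = abs(front_face_positions[1] - front_face_positions[0])
    return span_between_faces + abutment_depths[0] + abutment_depths[1]

def min_vertical_under_clearance(superstructure_bottom_z: np.ndarray,
                                 road_surface_z: np.ndarray) -> float:
    """Minimum vertical gap between sampled points on the bottom of the
    superstructure and corresponding points on the road underneath."""
    return float(np.min(superstructure_bottom_z - road_surface_z))

# Illustrative numbers only.
print(structure_length((0.0, 24.6), (1.2, 1.2)))     # 27.0 m
bottom = np.array([5.32, 5.28, 5.30])
road = np.array([0.35, 0.40, 0.38])
print(min_vertical_under_clearance(bottom, road))    # 4.88 m
```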
Inspection goal decomposition is an instrument-dependent process. In the above structure length
example, in order to measure the surfaces of the abutments, we need to further decompose “the
measurements of surfaces” into measurements of several feature points on those surfaces if we only
have a total station, which can only position individual points, whereas the surfaces can be measured
directly if we have a laser scanner. After inspection goal decomposition, bridge inspectors have
identified a number of measurement tasks. They need to plan their surveying activities so that they
can efficiently carry out these measurements in the field. For example, bridge inspectors need to
decide where to put a total station so that they can accurately collect as many target points as
possible. This is a survey planning process.
Bridge inspectors need to manually generate a good survey plan based on their surveying
knowledge and experiences. After such planning, bridge inspectors survey target points using a total
station and other surveying equipment according to the survey plan. They record the geometric data
on paper-based bridge inspection data collection templates and generate bridge inspection data
reports. This process is a data reporting process. In some states, bridge inspectors use wearable
computers as on-site data recording tools in order to facilitate automatic bridge inspection report
generation.
Fig. 4 Inspection goal decomposition: structure length and minimum vertical under clearance

The current geometric data processing step requires bridge inspectors to manually transfer data
from on-site geometric data reports into specified formats and import those data into a bridge
management database. During this process, bridge inspectors need to manually check data
consistency to ensure information integrity and correctness, and they remove any erroneous data
items from the original data. Recently, bridge inspectors have started to use wearable computers as
on-site data collection tools so that the raw data are electronically available for further
processing. Even in this case, bridge inspectors still need to manually manage data format
requirements, data exchange and data integrity.
Geometric data processing generates data that satisfies the data format and integrity requirements,
and the geometric data interpretation process is the next step to provide domain-oriented semantic
information to support decisions about bridge management. Bridge inspectors need to manually
generate different views of the geometric data based on processed geometric data in the database in
order to extract information about a specific aspect of the status of the bridge. For example, in order
to post a limit of vehicle height for traffic flow underneath the bridge, bridge inspectors need to
query all points on the bottom surface of the superstructure and all points on the surface of the road
underneath the bridge, and then deduce the maximum allowed vehicle height according to the
specification about vertical under clearance posting. From this example, we can see that in many
cases the data interpretation process is an inverse process of the inspection goal decomposition: it
assembles measurements into values of inspection goals which are directly related to decisions
about bridge operation and management. Currently, the data interpretation process is done manually,
resulting in possible errors.

4. Laser Scanning Based Data Acquisition Process for Bridge Inspection


Based on our knowledge about laser scanning technology, we propose a new bridge geometric data
acquisition process which is composed of three major phases: geometric data collection, geometric
data processing and geometric data interpretation (Fig. 5). The proposed process, however,
leverages the high 3D data collection rate of laser scanners, and as a result each of these phases will
be carried out differently, with opportunities for automation.
Fig. 5 Proposed Laser Scanning Based Geometric Data Collection Process for Bridge Inspection (data collection: decompose inspection goals, plan for scanning, collect 3D point clouds; data processing: remove noise and artifacts, register point clouds; data interpretation: extract geometric features, recognize bridge components, generate semantically rich point clouds)
The laser-scanning-based geometric data collection phase is composed of two sub-processes:
inspection goal decomposition and laser scan planning. In this new process, the goal of inspection
goal decomposition is to decompose bridge inspection goals into geometric features that can be
extracted from laser scanned data. In the structure length example, we can decompose “structure
length” into “measuring front surfaces of abutments” since from laser scanned data, object surfaces
can be extracted directly. The inspection goal decomposition process generates a number of
measurement tasks that can be carried out by a laser scanner. Model-based object recognition
research shows that formalization and automation of this process is possible through knowledge-based
reasoning [10]. Based on the generated measurement tasks, bridge inspectors can generate scan plans
that ensure coverage of all measurement goals with the required accuracy and level of detail
while satisfying time constraints on site. A series of performance-oriented sensor space planning
research projects shows that automatic scan planning is a maturing technique [11].
sensor planning research also shows the possibility of real-time data completeness evaluation [12].
This discussion indicates that formalizing inspection goal decomposition and laser scan planning is
technically feasible and is important for efficient and effective on-site data collection with a laser
scanner.
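The scan planning methods cited above [11, 12] are considerably more sophisticated than what can be shown here, but the underlying idea of covering all measurement goals with a small number of scans can be sketched as a greedy coverage heuristic. Everything in the sketch below, including the candidate locations and visibility sets, is hypothetical.

```python
def greedy_scan_plan(candidate_locations: dict[str, set[str]],
                     required_targets: set[str],
                     max_scans: int) -> list[str]:
    """Greedily pick scanner locations until all measurement targets are
    covered or the scan budget is exhausted.

    candidate_locations maps a location name to the set of measurement
    targets visible (with acceptable accuracy) from that location.
    """
    plan: list[str] = []
    uncovered = set(required_targets)
    while uncovered and len(plan) < max_scans:
        # Pick the location that covers the most still-uncovered targets.
        best = max(candidate_locations,
                   key=lambda loc: len(candidate_locations[loc] & uncovered))
        if not candidate_locations[best] & uncovered:
            break  # no remaining location helps; coverage stays incomplete
        plan.append(best)
        uncovered -= candidate_locations[best]
    return plan

# Hypothetical visibility analysis results for a highway bridge.
visibility = {
    "north_shoulder": {"abutment_north_face", "deck_bottom_north"},
    "south_shoulder": {"abutment_south_face", "deck_bottom_south"},
    "under_bridge":   {"deck_bottom_north", "deck_bottom_south", "road_surface"},
}
targets = {"abutment_north_face", "abutment_south_face",
           "deck_bottom_north", "deck_bottom_south", "road_surface"}
print(greedy_scan_plan(visibility, targets, max_scans=12))
```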
Bridge inspectors need to process the raw 3D point clouds before data interpretation in order to
ensure the accuracy and completeness of the data. To ensure the data accuracy, they need to filter
the data because the raw 3D point clouds contain noise and other artifacts. Noise refers to random
uncertainties in the position values of those points. Using various filters, such as a Gaussian filter,
bridge inspectors can smooth out this noise. However, smoothing the data risks losing object detail,
and it is difficult to ensure that the uncertainty model used for filtering accurately models the
random data collection behaviour of a scanner. Artifacts refer to erroneous data in the point clouds
due to unavoidable hardware effects of the scanner. The computer vision community has designed
many algorithms to accurately detect and remove those artifacts from
point clouds. To ensure the completeness of the data, bridge inspectors need to combine information
from several scans to ensure that all needed features for inspection goal generation are extractable.
This process is known as data registration. Since at each location of a scanner only part of a bridge
is visible, and since point clouds from different locations are collected under different local
coordinate systems, bridge inspectors need to bring those point clouds to a common coordinate
system for extracting geometric features requiring information from multiple point clouds.
Automatic data registration techniques are maturing, and constraint-based registration techniques
are promising for domain-knowledge-guided data registration [13, 14]. The outputs of the
data processing phase are registered point clouds with noise and artifacts removed.
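The automatic and constraint-based registration methods cited above [13, 14] go well beyond what can be shown here; the sketch below only illustrates the core step of bringing one point cloud into another's coordinate system once corresponding points (for example, matched targets or features) are available, using the standard SVD-based least-squares rigid-body fit.

```python
import numpy as np

def rigid_transform(source: np.ndarray, target: np.ndarray):
    """Least-squares rigid-body transform (R, t) mapping source points onto
    corresponding target points, via the SVD-based (Kabsch) solution.

    source, target: (N, 3) arrays of corresponding 3D points, N >= 3.
    """
    src_centroid = source.mean(axis=0)
    tgt_centroid = target.mean(axis=0)
    H = (source - src_centroid).T @ (target - tgt_centroid)   # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

def register(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply the estimated transform to a whole point cloud."""
    return points @ R.T + t

# Example with three corresponding points measured in two different scans.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tgt = np.array([[2.0, 1.0, 0.0], [2.0, 2.0, 0.0], [1.0, 1.0, 0.0]])
R, t = rigid_transform(src, tgt)
print(np.allclose(register(src, R, t), tgt))   # True
```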
Bridge inspectors interpret the geometric data after the data processing phase. 3D point cloud
interpretation includes three major steps: geometric feature extraction, bridge component
recognition and generation of semantically rich point clouds. For geometric feature extraction,
bridge inspectors first need to segment the point clouds into patches that can be reliably fitted to a
specific primitive surface, such as a plane, cylinder, ellipsoid, cone, or elliptical or hyperbolic
paraboloid [15]. Several range image segmentation algorithms can produce reasonable segmentation
results, and it has been shown that human-computer-interaction post-processing of the raw
segmentation results is efficient enough to generate excellent results [16, 17]. After point cloud
segmentation, model-matching algorithms can automatically extract geometric features and
recognize bridge components from point clouds based on a proper feature-based geometric
representation of bridge components [15]. This model-matching process can assign
semantic information to point clouds and generate the semantically rich point clouds. Semantically
rich point clouds specify which points belong to which bridge component and the geometric
attribute values of recognized bridge components. Based on this information, bridge inspectors can
calculate values of bridge inspection goals either manually or with automatic support of a semantic
reasoning mechanism for inspection goal generation. This process is an inverse process of that for
inspection goal decomposition: bridge inspectors query semantic data such as “abutment north
surface” from semantically rich point clouds when necessary and assemble extracted geometric
features into the values of inspection goals. Structured semantic information in the semantically rich
point clouds is a formal representation of the bridge geometry, so formalization and automation of
this inspection goal value acquisition process is feasible if we can develop a formalism for semantic
geometric information retrieval and inspection goal assembly based on semantically rich point
clouds.
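As a minimal illustration of these interpretation steps, the sketch below fits a plane to an already-segmented patch by principal component analysis, stores it under a semantic label, and answers a simple geometric query. The labels, data layout and query are our assumptions and stand in for the much richer model matching and inspection goal assembly described above.

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Fit a plane to a segmented patch by PCA: returns (centroid, unit normal).

    points: (N, 3) array of points belonging to one segmented patch.
    """
    centroid = points.mean(axis=0)
    # The plane normal is the direction of least variance of the patch.
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]
    return centroid, normal / np.linalg.norm(normal)

# A toy "semantically rich" record: which points belong to which component,
# plus the fitted geometric feature (the labels and layout are assumptions).
semantic_patches = {}

def add_patch(label: str, points: np.ndarray) -> None:
    centroid, normal = fit_plane(points)
    semantic_patches[label] = {"points": points, "centroid": centroid, "normal": normal}

def face_to_face_distance(label_a: str, label_b: str) -> float:
    """Distance between two labelled faces measured along the first face's normal."""
    a, b = semantic_patches[label_a], semantic_patches[label_b]
    return float(abs(np.dot(b["centroid"] - a["centroid"], a["normal"])))

# Synthetic patches: two parallel vertical faces 24.6 m apart along x.
north = np.array([[0.0, y, z] for y in (0.0, 1.0, 2.0) for z in (0.0, 1.0)])
south = north + np.array([24.6, 0.0, 0.0])
add_patch("abutment_north_face", north)
add_patch("abutment_south_face", south)
print(face_to_face_distance("abutment_north_face", "abutment_south_face"))  # ~24.6
```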

5. Comparison of Current Practice and Laser Scanning Based Bridge Inspection Practices

Based on the descriptions of the current bridge inspection practice and the laser scanning based
bridge geometric data acquisition process in sections 3 and 4, we compare these two data collection
processes in Table 3.
From Table 3, we can see that laser scanning technology is a promising sensor solution for
automating geometric bridge inspection. First, at short ranges, such as tens of metres, its data
accuracy can compete with a total station, since at short range angular accuracy does not have a
substantial impact on point positioning accuracy. At the same time, a laser scanner can collect dense
point clouds because of its high data collection rate, while a total station cannot provide a large
amount of data in a short time. Another advantage of laser scanners is that 3D models generated
from dense data have much higher geometric feature accuracy than single points. Second,
laser-scanned dense point clouds provide richer information than sparse points collected by a total
station. Since dense point clouds sample object surfaces in small steps, it is possible to detect
unexpected bridge deformation patterns. For example, we did not know in advance the bridge
deformation pattern shown in Fig. 2, but we could detect it by analyzing the 3D bridge model
generated from dense point clouds. Third, as
discussed in section 4, recognition of 3D objects from dense point clouds is technically feasible,
and the automation of the generation of semantically rich point clouds is possible. Currently,
commercial reverse engineering tools, such as Polyworks, can help bridge inspectors to reconstruct
3D models of bridges and manually extract geometric features. However, because point clouds carry
no semantic information, manual data processing and interpretation are tedious and error-prone.
Object recognition algorithms make it possible for bridge inspectors to avoid manually identifying
and entering semantic information, such as the height of a column. All they need to do is manipulate
recognized 3D bridge components directly, and geometric features of those
components will be automatically extracted from dense data. Fourth, if we can develop a formalism
to provide automatic support for data collection, processing and interpretation based on existing
computer vision research, the time requirement and human resource requirement will decrease
substantially so that the bottleneck in the application of laser scanning for bridge inspection will be
overcome.

Table 3 Comparison of Manual Surveying Based Process and Laser Scanning Based Process

Dimension | Manual Process | Laser Scanning Based Process
Accuracy: single point accuracy | Angular accuracy: 5 seconds; distance accuracy: 3 mm ± 2 ppm within 170 m in reflectorless mode (Leica TPS405) | Angular accuracy: 72 seconds; distance accuracy: 3 mm to 9 mm within 26 m, depending on object surface reflectivity
Accuracy: 3D model accuracy | Not applicable: sparse points are not useful for mesh generation based on geometric fitting | Mesh generation algorithms can fit surfaces against dense point clouds to obtain a 3D model with higher accuracy; model accuracy is x/sqrt(n), where x is the single point accuracy and n is the number of points used for surface fitting
Data comprehensiveness | Sparsely collected points cannot capture surface details for effective deformation detection, or require special surveying targets to be defined to capture an expected geometric pattern | Dense point clouds can capture details of the bridge surface and detect shape changes without any surveying targets having to be defined, as long as the surfaces of interest are within the field of view of a scanner
Potential for data interpretation | Sparsely collected points require bridge inspectors to manually assign semantic meaning to each point for interpretation of the data | Many computer vision algorithms can recognize 3D objects from point clouds through point cloud segmentation, geometric feature extraction and model matching, so it is feasible to automate the data interpretation process
Time, spatial and human resource requirements: on-site time | The highway bridge of the case study requires three engineers to work one whole day | Given scan planning support to ensure collecting all data with a small number of scans, the highway bridge of the case study requires one engineer to work less than 2 hours
Time, spatial and human resource requirements: off-site time | Weeks for data input and manual geometric reasoning | Given semantically rich point clouds, bridge inspectors may only need a few days to process and analyze the data, but proper pre-processing of the data, such as removing data noise and point cloud segmentation, is necessary and needs additional computation time
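The x/sqrt(n) estimate of fitted-model accuracy in Table 3 can be made concrete with a two-line calculation; it assumes independent, identically distributed point noise, which only approximates the real behaviour of a scanner.

```python
import math

def fitted_feature_accuracy(single_point_accuracy_mm: float, n_points: int) -> float:
    """Approximate accuracy of a geometric feature fitted to n points,
    assuming independent point noise: x / sqrt(n)."""
    return single_point_accuracy_mm / math.sqrt(n_points)

# E.g. 3 mm single-point noise averaged over a 10,000-point planar patch.
print(fitted_feature_accuracy(3.0, 10_000))   # 0.03 mm
```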

6. Discussion and Conclusions


This paper describes and compares a laser scanning based geometric data acquisition process with
current bridge inspection practice. The power of laser scanning technology lies in its high data
collection rate and high accuracy. With a high data collection rate, it is possible to generate
detailed 3D bridge as-is models and extract geometric information. Compared to traditional
surveying instruments, such as a total station and gauge, laser scanners can capture multiple
inspection goals in one scan and can capture surface details for future analysis.
The challenges to the application of laser scanning to bridge inspection are threefold. First, it is
necessary to provide automatic support for inspection goal decomposition and for planning scans in
the field, to ensure that the data to be collected meet the accuracy requirements of bridge inspection goals.
Second, it is necessary to enable automatic recognition of bridge components from point clouds and
make point clouds semantically rich. Finally, it is necessary to provide automatic support for bridge
inspectors during data processing and data interpretation for obtaining the bridge inspection goals.

7. Acknowledgements
This material is based upon work supported by the National Science Foundation under Grant Nos.
0420933 and 0121549. NSF's support is gratefully acknowledged. Some of the early data collection
in this research was done under funding provided by the Pennsylvania Department of Transportation.
Any opinions, findings, conclusions or recommendations presented in this publication are those of
the authors and do not necessarily reflect the views of the National Science Foundation or the
Pennsylvania Department of Transportation.

References
[1] WESEMAN, W. A., Recording and Coding Guide for the Structure Inventory and Appraisal
of the Nation's Bridges, 1995. http://www.fhwa.dot.gov/BRIDGE/mtguide.pdf
[2] SANFORD, K. L., HERABAT, P., MCNEIL, S., Bridge Management and Inspection Data:
Leveraging the Data and Identifying the Gaps, 1999.
[3] VELINSKY, S. A., RAVANI, B., Advanced Highway Maintenance and Construction
Technology, 2006. http://www.ahmct.ucdavis.edu/index.htm?pg=OtherStructures
[4] ESTES, A. C., FRANGOPOL, D. M., Updating Bridge Reliability Based on Bridge Management
Systems Visual Inspection Results, Journal of Bridge Engineering, 2003, 8, 374.
[5] GORDON, S. J., LICHTI, D., STEWART, M. P., FRANKE, J., "Modelling Point Clouds
for Precise Structural Deformation Measurement", presented at XXth ISPRS Congress, Istanbul,
Turkey, 2004.
[6] JASELSKIS, E. J., GAO, Z., WALTERS, R. C., Improving Transportation Projects Using
Laser Scanning, Journal of Construction Engineering and Management, 2005, 131, 377.
[7] FUCHS, P. A., WASHER, G. A., CHASE, S. B., MOORE, M., Applications of Laser-Based
Instrumentation for Highway Bridges, Journal of Bridge Engineering, 2004, 9, 541.
[8] KRETSCHMER, U., ABMAYR, T., THIES, M., FRÖHLICH, C., "Traffic Construction
Analysis by Use of Terrestrial Laser Scanning", presented at ISPRS Working Group VIII/2,
Freiburg, Germany, 2004.
[9] ZOLLER+FRÖHLICH, Technical Data Imager, 2005.
[10] FISHER, R. B., Applying Knowledge to Reverse Engineering Problems, Computer-Aided
Design, 2004, 36, 501.
[11] SCOTT, W. R., ROTH, G., RIVEST, J.-F., "Performance-Oriented View Planning for
Automatic Model Acquisition", presented at the International Symposium on Robotics, 2000.
[12] BANTA, J. E., WONG, L. R., DUMONT, C., ABIDI, M. A., A Next-Best-View System for
Autonomous 3-D Object Reconstruction, IEEE Transactions on Systems, Man and Cybernetics,
Part A, 2000, 30, 589.
[13] SCOTT, W. R., ROTH, G., RIVEST, J.-F., "View Planning with a Registration Constraint",
presented at the Third International Conference on 3-D Digital Imaging and Modeling, 2001.
[14] HUBER, D. F., HEBERT, M., Fully Automatic Registration of Multiple 3D Data Sets, Image
and Vision Computing, 2003, 21, 637.
[15] FISHER, R. B., FITZGIBBON, A. W., WAITE, M., TRUCCO, E., ORR, M., "Recognition of
Complex 3-D Objects from Range Data", presented at the 7th IAPR Conference on Image Analysis
and Processing (ICIAP), 1993.
[16] HOOVER, A., JEAN-BAPTISTE, G., JIANG, X., FLYNN, P. J., BUNKE, H., GOLDGOF, D.
B., BOWYER, K., EGGERT, D. W., FITZGIBBON, A., FISHER, R. B., An Experimental
Comparison of Range Image Segmentation Algorithms, IEEE Transactions on Pattern Analysis and
Machine Intelligence, 1996, 18, 673.
[17] YU, Y., FERENCZ, A., MALIK, J., Extracting Objects from Range and Radiance Images,
IEEE Transactions on Visualization and Computer Graphics, 2001, 7, 351.
