Bridge Management From Data To Policy


Graduate Theses and Dissertations
Iowa State University Capstones, Theses and Dissertations

2011

Bridge management from data to policy


Basak Aldemir Bektas
Iowa State University

Follow this and additional works at: https://lib.dr.iastate.edu/etd

Part of the Civil and Environmental Engineering Commons

Recommended Citation
Aldemir Bektas, Basak, "Bridge management from data to policy" (2011). Graduate Theses and
Dissertations. 11951. https://lib.dr.iastate.edu/etd/11951

This Dissertation is brought to you for free and open access by the Iowa State University Capstones, Theses and Dissertations at
Iowa State University Digital Repository. It has been accepted for inclusion in Graduate Theses and Dissertations by an authorized
administrator of Iowa State University Digital Repository. For more information, please contact digirep@iastate.edu.
Bridge management from data to policy

by

Basak Aldemir Bektas

A dissertation submitted to the graduate faculty

in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Major: Civil Engineering (Transportation Engineering)

Program of Study Committee:


Omar Smadi, Co-major Professor
Reginald Souleyrette, Co-major Professor
Alicia Carriquiry
Fouad S. Fanous
Konstantina Gkritza

Iowa State University

Ames, Iowa

2011

Copyright © Basak Aldemir Bektas, 2011. All rights reserved.


TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
ACKNOWLEDGEMENTS
ABSTRACT

CHAPTER 1. GENERAL INTRODUCTION
    Background
    Objectives
    Dissertation Organization
    References

CHAPTER 2. AN INDEPENDENT LOOK AT FEDERAL BRIDGE PROGRAMS: FINDINGS FROM A NATIONAL SURVEY
    Abstract
    Introduction
    Background
    Methodology
    Survey Results
    Discussion
    Conclusions
    References

CHAPTER 3. A DISCUSSION ON THE EFFICIENCY OF NBI TRANSLATOR ALGORITHM
    Abstract
    Introduction
    Bridge Inspections and Bridge Management Systems in the United States
    Pontis Bridge Management System
    NBI Condition Ratings and Bridge Element Condition Data
    NBI Translator
    Statistical Comparison of Actual and Generated Ratings for Iowa Bridges
    How to Improve the Translator
    Conclusions
    References

CHAPTER 4. CART ALGORITHM FOR PREDICTING NBI CONDITION RATINGS
    Abstract
    Introduction
    Objective
    Methodology
    Results and Discussion
    Conclusions
    References

CHAPTER 5. GENERAL CONCLUSION
    Conclusions
    Recommendations for Future Research

APPENDIX A. SURVEY RESULTS
APPENDIX B. CART ANALYSES REPORTS


LIST OF FIGURES

Figure 2-1: NBI items used in sufficiency rating calculation
Figure 2-2: Criteria used for prioritizing bridge projects
Figure 2-3: Percent of the bridge projects in the program that derive from BMS recommendations
Figure 2-4: Current reasons reported for not using a BMS for bridge program development
Figure 2-5: States’ need for additional guidance in bridge management
Figure 3-1: Table for NBI generation, modified according to the guide
Figure 3-2: Comparison of actual and generated deck ratings
Figure 3-3: Comparison of actual and generated superstructure ratings
Figure 3-4: Comparison of actual and generated substructure ratings
Figure 3-5: Comparison of actual and generated culvert ratings
Figure 4-1: Comparison of actual and generated (translated) superstructure ratings
Figure 4-2: An example classification tree
Figure 4-3: An example end leaf and predicted probabilities for a classification tree
Figure 4-4: Adjustment of condition state quantities for painted steel elements
Figure 4-5: Comparison of field and CART ratings by NBI condition class (Deck, DOT A)
Figure 4-6: Comparison of field, CART, and BMSNBI ratings by NBI condition class (Deck, DOT A)
Figure 4-7: Comparison of field and CART ratings by NBI condition class (Deck, DOT B)
Figure 4-8: Comparison of field, CART, and BMSNBI ratings by NBI condition class (Deck, DOT B)
Figure 4-9: Comparison of field and CART ratings by NBI condition class (Superstructure, DOT A)
Figure 4-10: Comparison of field, CART, and BMSNBI ratings by NBI condition class (Superstructure, DOT A)
Figure 4-11: Comparison of field and CART ratings by NBI condition class (Superstructure, DOT B)
Figure 4-12: Comparison of field, CART, and BMSNBI ratings by NBI condition class (Superstructure, DOT B)
Figure 4-13: Comparison of field and CART ratings by NBI condition class (Superstructure, DOT C)
Figure 4-14: Comparison of field and CART ratings by NBI condition class (Substructure, DOT A)
Figure 4-15: Comparison of field, CART, and BMSNBI ratings by NBI condition class (Substructure, DOT A)
Figure 4-16: Comparison of field and CART ratings by NBI condition class (Substructure, DOT B)
Figure 4-17: Comparison of field, CART, and BMSNBI ratings by NBI condition class (Substructure, DOT B)
Figure 4-18: Comparison of field and CART ratings by NBI condition class (Substructure, DOT C)
LIST OF TABLES

Table 2-1: NBI condition rating guidelines in brief
Table 2-2: Deficiency status classification [15]
Table 2-3: HBP funding eligibility for NBI bridges
Table 3-1: NBI general condition rating guidelines*
Table 3-2: Condition state definitions of unprotected concrete deck
Table 4-1: NBI general condition rating guidelines [3]
Table 4-2: An example of Pontis bridge inspection data
Table 4-3: NBI generation [7]
Table 4-4: Number of BMS elements in the sample superstructure observations by state DOT
Table 4-5: Number of BMS elements in the sample substructure observations by state
Table 4-6: Accuracy of predictions by the distribution of error terms (Deck, DOT A)
Table 4-7: Comparison of the predictions by two methods (Deck, DOT A)
Table 4-8: Accuracy of predictions by the distribution of error terms (Deck, DOT B)
Table 4-9: Comparison of the predictions by two methods (Deck, DOT B)
Table 4-10: Accuracy of predictions by the distribution of error terms (Superstructure, DOT A)
Table 4-11: Comparison of the predictions by two methods (Superstructure, DOT A)
Table 4-12: Accuracy of predictions by the distribution of error terms (Superstructure, DOT B)
Table 4-13: Comparison of the predictions by two methods (Superstructure, DOT B)
Table 4-14: Accuracy of predictions by the distribution of error terms (Superstructure, DOT C)
Table 4-15: Accuracy of predictions by the distribution of error terms (Substructure, DOT A)
Table 4-16: Comparison of the predictions by two methods (Substructure, DOT A)
Table 4-17: Accuracy of predictions by the distribution of error terms (Substructure, DOT B)
Table 4-18: Comparison of the predictions by two methods (Substructure, DOT B)
Table 4-19: Accuracy of predictions by the distribution of error terms (Substructure, DOT C)
Table 4-20: CART predictions
Table 4-21: CART and BMSNBI comparison
Table 4-22: ANN study (best results) [12]
Table 4-23: Average absolute error of CART predictions

ACKNOWLEDGEMENTS

It is my pleasure to acknowledge some of the people who have contributed to this


dissertation and supported me through my PhD study. I owe my gratitude to all those people
who have made this journey special and rewarding.
My deepest gratitude goes to my major advisor, Dr. Omar Smadi. Omar has been my major
professor, my boss, and my guide during this endeavor. I owe him thanks for many things, but
especially for his genuine concern and generous support for my professional development.
He has been an exceptionally understanding boss and patient advisor during the final stages
of this work. His positive attitude, constructive criticism, and fairness are some of the many
qualities that make him a rare supervisor and the people who work with him very fortunate.
Luckily for me, I continue to learn from him.
My second major professor, Dr. Reginald Souleyrette, has always been a mentor and
teacher during my graduate studies at ISU. I am grateful for his kind support,
suggestions, intriguing discussions and supervision.
Dr. Alicia Carriquiry was more than a committee member; she was a third mentor and
teacher, and it was my privilege to work with her. Special thanks to her for her insightful
suggestions, guidance, and encouragement.
Dr. Nadia Gkritza has been my mentor for the Preparing Future Faculty Program. I would
like to thank her for her continuing support and for always being there for my questions.
Gratitude is also extended to all of my committee members for their time in reading the
manuscript and their valuable suggestions.
I am thankful to the Iowa DOT for providing data and funding during my graduate
studies. My sincere thanks also go to Kansas DOT and Montana DOT for providing the
needed data.
The bridge management community, including the FHWA, the TRB AHD35 Bridge
Management Committee, and bridge management engineers from state departments of
transportation, has provided kind support and information for my research. I am thankful for
their time and suggestions.

I have been fortunate to have many friends who made academic life more pleasant.
Particular thanks to Julie Sen, Tomoko Ogawa, Sue DeBlieck, Emrah & Cigdem Simsek,
Craig Mizera, Josh Hinds, Nikki D’Adamo, Phil Damery, and Yuri Svec for making Iowa a
second home. I am grateful for friends who have always been there for me; especially for
Ferhan Zeynep Celebi, Tugce Bahadir, Ela Kazanci, Rengin Pekoz, Revan Coban, and Nihal
Yilmaz.
One of my good fortunes has been my family. I owe so much to my parents Esmeray and
Vahap for their love, encouragement, sacrifice, and friendship. My brother, Barkın, has
always been my dear friend and joy.
Finally, special thanks to my encouraging and patient husband, Fatih, for being my
comrade for better or worse.

Başak Aldemir Bektaş



ABSTRACT

Bridge management involves all efforts to build, preserve, and operate bridge networks
cost-effectively with an objective to deliver the best value for the public tax dollars spent.
The dissertation consists of three complementary studies that address both bridge
management policies and condition data that contribute to bridge management practices.
This dissertation begins with an overview of federal and state government bridge
management efforts taken in conjunction with the federal bridge programs in the last 40
years. While the majority of states have implemented a bridge management system (BMS),
the level of implementation varies, and the overall input from BMSs to network-level
decisions remains minimal.
Survey findings from 40 states indicate that federal funding eligibility is the major criterion
that impacts state-level bridge management decisions. State transportation agencies need
federal guidance on areas such as using decision support tools, implementing BMSs, and
improving data quality. The findings from the study are useful to both practitioners and
policy makers, and identify challenges and needs for bridge management at both the federal
and state levels.
Following the policy study, a statistical comparison of field NBI condition ratings and
ratings generated by FHWA’s NBI Translator (BMSNBI) algorithm for Iowa bridges is
presented. Statistical analysis indicates that the ratings generated by the NBI Translator
algorithm are not representative of actual NBI ratings. Results from the research raise
questions about the effectiveness of the algorithm.
The final study in this dissertation presents a new methodology to predict National Bridge
Inventory (NBI) condition ratings from bridge management system (BMS) element condition
data, based on Classification and Regression Trees (CART). The proposed methodology
achieves significantly better accuracies than other methodologies reported in the literature for
the data set used in this study. The CART prediction methodology uses simple and logical
conditions of BMS element condition data to predict NBI condition ratings and has potential
use for federal and state transportation agencies to summarize bridge condition data.

CHAPTER 1.
GENERAL INTRODUCTION

BACKGROUND

Infrastructure, in the simplest terms, is “the basic physical and organizational
structures and facilities needed for the operation of a society” [1]. Traditionally, infrastructure
facilities with “high fixed costs, long economic lives, and strong links to economic
development” are owned, maintained and operated by the public sector, to a great extent [2].
The facilities referred to as infrastructure include highways, roads, and bridges; airports and
airways; rail systems; public transit; intermodal transportation; water supply; wastewater
treatment; water resources; solid waste and hazardous waste services; energy generation and
transmission facilities; schools; and so forth [3, 4].
In the United States, the Eisenhower Interstate Highway System forms the backbone of
the highway infrastructure. Since the beginning of its construction in the mid-1950s, the system
has enhanced mobility and economic development nationwide [5]. The highway financing
pattern in the United States was simultaneously designated by the Federal-Aid Highway Act
of 1956. The Highway Trust Fund (HTF), established then, is the main transportation fund
for financing the needs of the federal-aid highways. Tax revenues directed to the HTF are
derived from excise taxes on highway motor fuel and truck-related taxes on truck tires, sales
of trucks and trailers, and heavy vehicle use [6]. For eligible projects, states can use HTF
funds for up to 80% of the cost and must match the rest with state or local funds [8].
Prior to the Highway Act of 1976, federal funds were limited to new construction.
Consequently, maintenance, rehabilitation, and renewal activities by state and local
transportation agencies were quite limited or deferred [5]. These long-deferred needs have
compounded the growing demands on the highway system. Today, transportation agencies at
all levels of government face a substantial challenge in addressing this backlog of needs with
the limited resources available to them.
Since 1988, the American Society of Civil Engineers (ASCE) has published a Report
Card grading the nation’s infrastructure. The latest report, from 2009, estimates that
$2.2 trillion needs to be invested over five years to bring the nation’s infrastructure up to
good condition, double the current estimated spending [3]. Although the combined
investment by all levels of government in highway and bridge infrastructure has increased
over time and was estimated at $78.7 billion in 2006 [7], the gap between needs and
available funds remains wide and critical. ASCE estimates an annual investment need of
$186 billion over the next five years for highway and bridge infrastructure.
Congress expanded the scope of federal-aid-eligible activities by adding reconstruction to the
list in 1981 legislation, with special emphasis on bridge rehabilitation and replacement
[5]. By the late 1980s, however, these efforts had not made much difference, and it was
understood that a new approach would be needed to close the gap between infrastructure
needs and available resources. Decision makers then began looking to management sciences
such as finance, asset management, and accounting for alternative solutions [8].
Infrastructure asset management, as defined by the FHWA and AASHTO, is “a
systematic process of maintaining, upgrading, and operating physical assets cost-effectively.
It combines engineering principles with sound business practices and economic theory, and it
provides tools to facilitate a more organized, logical approach to decision-making” [9]. The
major goals of infrastructure asset management are “to build, preserve, and operate the
infrastructure systems cost-effectively with improved asset performance; to deliver the best
value for the public tax dollars spent; and to enhance the credibility and accountability of the
transportation agency to its governing executive and legislative bodies” [10].
The Intermodal Surface Transportation Efficiency Act of 1991 originally mandated that
states implement a variety of transportation management systems, including pavement
and bridge management systems [11], and thus many transportation agencies across the
country started implementing them.
Trends in public administration and transportation in the 1990s also provided
motivation to align transportation agency business practices with infrastructure asset
management principles [11]. In April 1992, the American Society for Public Administration
(ASPA) adopted “a resolution that endorses efforts by governments at all levels to develop and
adopt performance measures.” Later, in 1999, the Governmental Accounting Standards
Board (GASB) issued Statement 34, which specifies that “governments’ financial statements
must show a value for their infrastructure investments and the costs associated with
depreciation of those assets” [12]. Unlike in previous practice, transportation agencies
now needed to use either the depreciation method or a modified approach that requires an
asset management system to be implemented [12]. Since infrastructure asset management
focuses on explicit and clearly defined goals and on the value and continued maintenance
costs of assets over their life cycle, these shifts in public policy were good motivators for
transportation agencies to adopt an asset management philosophy and implement
infrastructure asset management systems.
The origin of bridge management programs in the United States dates to the early 1970s.
Bridge management systems (BMSs) were developed in the mid 1990s [13, 14]. Today, state
transportation agencies have established bridge inspection programs and a majority of them
have implemented a BMS.
Although the share of bridges classified as deficient fell from 34.2 percent in 1996 to
27.6 percent in 2006 [15], aging bridges still have substantial maintenance, rehabilitation,
and replacement needs, and agencies are challenged by the backlog of these needs. ASCE’s
2009 Report Card gives the nation’s bridges a grade of C (mediocre) [3] and estimates
that a $17 billion annual investment in the construction and maintenance of bridges is
needed, instead of the current $10.5 billion, to substantially improve the condition of the
nation’s bridges.
The collapse of the I-35W Bridge in 2007 drew attention to the safety of bridges and
elicited a self-assessment of bridge management activities by both the federal government
and state transportation agencies. Many aspects of bridge management, such as the federal
bridge programs, bridge management at the state level, and the tools and techniques used in
bridge management, are being questioned to find answers to a basic question: “How can we
do better?” This dissertation examines bridge management in the United States, addressing
both bridge management policies and bridge condition data, to suggest answers to that same
basic question.

OBJECTIVES

The objectives of the research for this dissertation are:

- To provide an independent review and explanation of the issues regarding the federal bridge programs;
- To obtain information from state transportation departments on their bridge management practices and how these practices are influenced by the federal bridge programs;
- To synthesize inputs from the major stakeholders;
- To develop a statistical model to predict National Bridge Inventory (NBI) deck, superstructure, and substructure condition ratings based on BMS element condition data; and
- To provide timely input to the reauthorization and future policy debate.

DISSERTATION ORGANIZATION

This dissertation is divided into five chapters. Chapter 1 provides a general introduction
that includes brief background information, dissertation objectives, and dissertation
organization.
Chapters 2-4 comprise papers that have been either published or prepared for
submission to peer reviewed journals. The papers are ordered in the dissertation as follows:

Aldemir Bektas B, Souleyrette R, Smadi O. An Independent Look at Federal Bridge
Programs: Findings from a National Survey. To be submitted to Transportation Research
Record (TRR).
Chapter 2 presents findings from a national survey on bridge management and federal
bridge programs. It includes a review of the issues and a discussion of the federal bridge
programs to provide input to the reauthorization or restructuring of the federal transportation
programs and future policy debate.

Aldemir Bektas B, Smadi O. A Discussion on the Efficiency of NBI Translator Algorithm.
A paper presented at the Tenth International Conference on Bridge and Structure Management
and published in Transportation Research E-Circular E-C128.
Chapter 3 presents a statistical comparison between the field NBI condition ratings and
the ratings generated by the BMSNBI algorithm for bridges in Iowa. The paper also includes
a review of bridge inspections and bridge condition data in the United States.

Aldemir Bektas B, Carriquiry A, Smadi O. CART Algorithm for Predicting NBI
Condition Ratings. A paper to be submitted to the Journal of Infrastructure Systems
(ASCE).
Chapter 4 presents a new methodology to predict NBI condition ratings from BMS
element condition data based on Classification and Regression Trees (CART). The statistical
results point to a method of predicting NBI condition ratings that is more accurate than the
previous algorithms in the literature.

Finally, Chapter 5 summarizes the major findings of the study and includes
recommendations for future work.
Appendix A gives the summary results of the national survey in Chapter 2.
Appendix B includes the reports from the CART analyses in Chapter 4.

REFERENCES

1. Infrastructure, in Online Compact Oxford English Dictionary.


2. National Council on Public Works Improvement, Fragile Foundations: A Report
on America’s Public Works, Final Report to the President and Congress. 1988:
Washington D.C. p. 33.
3. ASCE, 2009 Report Card for America's Infrastructure. 2009: Washington D.C. p.
168.
4. Moteff, J. and P. Parfomak, CRS Report for Congress, Critical Infrastructure and
Key Assets: Definition and Identification. 2004, Resources, Science, and Industry
Division: Washington D.C.
5. Dornan, D.L., Asset management: remedy for addressing the fiscal challenges facing
highway infrastructure. International Journal of Transport Management, 2002. 1(1):
p. 41-54.
6. FHWA Office of Policy Development, Highway Trust Fund Primer. 1998:
Washington D.C.

7. FHWA, 2006 Status of the Nation’s Highways, Bridges, and Transit: Condition and
Performance, Report to the Congress. 2007. p. 436.
8. Grigg, N.S., Infrastructure Engineering and Management. 1988. 496 p.
9. FHWA and AASHTO, Asset Management: Advancing the State of the Art Into the
21st Century Through Public-Private Dialogue. 1996.
10. Cambridge Systematics Inc., Transportation Asset Management Guide, Final Report,
National Cooperative Highway Research Program (NCHRP) Project 20-24(11).
2002.
11. USDOT FHWA Office of Asset Management, Asset Management Primer. 1999:
Washington D.C. p. 30.
12. Koechling, S., How to Convince your Accountant that Asset Management is the
Correct Choice for Infrastructure Under GASB 34. Leadership and Management in
Engineering, 2004. 4(1): p. 10-13.
13. Thompson, P.D. and R.W. Shepard, Pontis, in Transportation Research Circular
423: Characteristics of Bridge Management Systems, Transportation Research
Board. 1994.
14. Hawk, H., The BRIDGIT bridge management system. Structural Engineering
International, 1998. 8(4): p. 309.
15. FHWA, 2008 Status of the Nation’s Highways, Bridges, and Transit: Condition and
Performance, Report to the Congress. 2009: Washington D.C. p. 622.

CHAPTER 2.
AN INDEPENDENT LOOK AT FEDERAL BRIDGE PROGRAMS:
FINDINGS FROM A NATIONAL SURVEY

A paper to be submitted to Transportation Research Record (TRR)

B. Aldemir Bektas1, R. Souleyrette2, O. Smadi3


ABSTRACT

This paper presents findings from a national survey on bridge management and an
overview of the federal bridge programs in the United States. The collapse of the I-35W
Minnesota Bridge in 2007 led to efforts by state and federal transportation agencies to
improve the federal bridge programs. The main purpose of this study is to contribute to these
efforts by providing an independent review of the issues regarding the federal bridge
programs and synthesizing findings from a national survey to provide timely input to the
reauthorization and future policy debate.
The responses to a national survey from 40 states indicated that network-level bridge
management decisions at the state level are typically guided by federal funding eligibility. In
general, states are pleased with the federal bridge programs, but they want to have more
flexibility in using federal bridge funds. Although a majority of the states have implemented
a bridge management system (BMS), less than half consider BMS recommendations for
selecting bridge projects. Skepticism of the BMS simulation results, difficulties in BMS
simulation modeling, and resource limitations are the most reported issues that obstruct BMS
implementation at the national level.

1 Graduate student; primary researcher and author; Department of Civil, Construction, and
Environmental Engineering, Iowa State University, Ames, IA 50011.
2 Professor, Department of Civil, Construction, and Environmental Engineering, Iowa State University,
Ames, IA 50011.
3 Research Scientist; Institute for Transportation and Adj. Assistant Professor; Department of
Civil, Construction, and Environmental Engineering, Iowa State University, Ames, IA 50011.

Ninety percent of the respondent states do not believe that National Bridge Inventory
(NBI) data items cover their data needs for bridge management, and seventy percent believe
that more detailed BMS element condition data should be utilized in the federal bridge
funding allocation process. Development of clear performance measures and tools to guide
network-level bridge management decisions and funding allocation remains a critical need.
Using decision support tools, implementing BMSs, and improving data quality are the major
areas in which federal guidance is needed at the state level.

INTRODUCTION

The I-35W Bridge over the Mississippi River collapsed on August 1, 2007 in
Minneapolis, Minnesota, resulting in 13 fatalities and 145 injuries [1]. Although
investigation of this failure later suggested that it was due to a design error [2], the event
raised national concern about the condition of the nation’s bridges and triggered
self-criticism among federal and state agencies about how they oversee and guide the
management of bridge infrastructure.
Oversight and guidance of bridge management by the Federal Highway Administration
(FHWA) are executed through two federal bridge programs: the Highway Bridge Program
(HBP) and the National Bridge Inspection Program (NBIP). Current efforts to evaluate these
programs are particularly important since the conclusions will provide input to Congress as
it works on the next transportation bill to follow the “Safe, Accountable, Flexible, Efficient
Transportation Equity Act: A Legacy for Users (SAFETEA-LU).” SAFETEA-LU (Public
Law 109-59) was enacted on August 10, 2005 and provided a five-year authorization of
federal surface transportation programs for highways, highway safety, and transit.
Authorization ran out in 2009 [3].
The main purpose of this study is to contribute to the efforts to improve the federal bridge
programs from an academic perspective. It provides a review of the issues and related
literature, investigates the practices of the policy implementers at the state level and their
perception of the programs through a national survey, and suggests a set of recommendations
by synthesizing these components. The objectives of this study are as follows:

- To provide an independent review and explanation of the issues regarding the federal bridge programs;
- To obtain information from state transportation departments on their bridge management practices and how these practices are influenced by the federal bridge programs;
- To synthesize inputs from the major stakeholders; and
- To provide timely input to the reauthorization and future policy debate.
This paper focuses on bridge management policy and implementation at the state and
national levels rather than on the details of tools, methodologies, or techniques. We begin by
reviewing recent criticism and the background of the federal bridge programs. Background
information on the status and issues of bridge management system implementation in the
United States is also included.

BACKGROUND

A review of recent evaluations regarding the federal bridge programs

After the collapse of the I-35W Bridge in Minneapolis, the Secretary of Transportation
asked the U.S. Department of Transportation (USDOT) Office of Inspector General (OIG) to
evaluate FHWA’s management of bridge safety and oversight of the HBP. The OIG issued
reports in 2009 and 2010 [4, 5]. Together with a 2006 report on load ratings and postings for
structurally deficient structures [6], these reports raise issues regarding the federal bridge
programs.
In the 2006 report, some errors in the calculation of load ratings and in the posting
of maximum weight limits were identified [6]. As a result of the analysis, the report
projected that 10.5 percent of the load rating calculations for structurally deficient
structures on the National Highway System (NHS) were inaccurate.
The 2009 report reviewed audits of 10 FHWA Division Offices and observed that the
new Risk Assessment Tools for Bridge Load Ratings and Postings, suggested by
FHWA in a February 2007 memo, were either not being used or were being used
inconsistently. The OIG suggested that FHWA incorporate systematic data-driven
oversight to address nationwide bridge safety risks and to encourage states to expand
their use of bridge management systems. However, the OIG also reported that FHWA lacks
the statutory authority to require this.
In the 2010 report, the OIG concluded that FHWA lacks sufficient data to evaluate
states’ use of HBP funds and cannot link HBP expenditures to investments in deficient
bridges. Since deficiency measures (the number of deficient structures or the total deck area
of deficient structures) and sufficiency ratings are the only significant performance
measures guiding the HBP, tracking the connection between the HBP and spending on
deficient structures is critical for FHWA to show that apportioned funds are used for their
intended purpose.
The U.S. Government Accountability Office (GAO) also contributed to the evaluation. In
2008, the GAO published a report [7] on the condition of the national bridge network and
federal investment in it. The GAO concluded that the HBP lacks “focus, performance
measures, and sustainability.”
The GAO noted that it is difficult to determine the impact of HBP funds in reducing the
number of deficient bridges and increasing the average sufficiency ratings from 1998 through
2007 [7] because HBP is not the only funding source for states’ expenditures on bridges, and
local funding on bridges is not well documented. The report generally agreed with the
proposed legislation under review at that time by the U.S. Senate Committee on Environment
and Public Works. The National Highway Bridge Reconstruction and Inspection Act of 2008
(S.3338) recommended a risk-based prioritization process for selecting bridge projects, five-
year performance plans, and implementation of BMSs. This bill was not reported out of the
Committee.
On July 21, 2010, the U.S. House Subcommittee on Highways and Transit held a hearing
on “Oversight of the Highway Bridge program and the National Bridge Inspection Program”
as part of the effort to prepare for the reauthorization of Federal surface transportation
programs under SAFETEA-LU. Testimony was given by the USDOT OIG, FHWA, GAO,
and AASHTO. The GAO’s testimony [8] emphasized its previous findings. The USDOT
OIG [9] acknowledged FHWA’s response to its recommendations from the three reports they
published in the last four years. Central to these was to implement a pilot risk-assessment
program to identify high-priority bridge risks. However, a lack of progress in obtaining data
from the states on their expenditure of HBP funds was also noted.
FHWA’s statement [10] noted the achievements of the federal bridge programs over the
last 30 years, such as a reduction in the percentage of the Nation’s deficient bridges from
19.4 percent to 12 percent since 1994. Efforts such as domestic and international scans on
bridge inspections, the Bridge Research and Technology Program, providing training on
bridge inspections and BMS implementation assistance, and NBIS Compliance Reviews by
FHWA that aim to improve and monitor bridge management practices nationwide were also
acknowledged. FHWA concurred with the majority of the recommendations from the earlier
USDOT OIG and GAO reports, and reported:

- the development of a new NHI course on Load and Resistance Factor Rating methodology;
- the development of additional NBI data reports to identify load rating issues and data quality problems;
- the initiation of a risk assessment program for load rating and posting practices;
- the continuation of efforts to assist states in their BMS implementation;
- the implementation of a pilot program for data-driven, risk-based oversight of the NBIP;
- working with AASHTO to update the standards for element-level data; and
- the beginning of an enhancement to the Financial Management Information Systems (FMIS) to improve tracking of HBP spending and bridge projects.
AASHTO’s testimony focused on the necessity of a new vision for the HBP and the
importance of addressing the overall health of the bridge network through asset
management rather than a “worst-first” approach [11]. Referring to the USDOT’s 2006
Conditions and Performance report [12], which estimated a backlog of over $19 billion
in bridge needs, the testimony stated that the current level of funding is far below what is
needed to reconstruct or rehabilitate all deficient structures in the country. AASHTO
suggested a balanced asset management approach of addressing immediate problems,
replacing old bridges, and conducting preventive maintenance.

Background on federal bridge programs

The content of the existing federal bridge programs has also been influenced by previous
bridge collapses. Prior to the collapse of the Silver Bridge over the Ohio River in 1967, the
focus of the U.S. bridge program was on building and enhancing the infrastructure [7]. The
collapse evoked national concern about the safety and condition of the national bridge network.
The 1968 Federal-Aid Highway Act set the state transportation authorities in action to
collect and maintain an inventory of federal-aid highway system bridges. Congress later
established two major federal bridge programs: the Special Bridge Replacement Program
(SBRP, 1970), which assists states in replacing and rehabilitating bridges, and the National
Bridge Inspection Program (NBIP, 1971), which ensures periodic national inspections [7]. The
SBRP was enhanced and renamed by subsequent federal programs: first as the Highway
Bridge Replacement and Rehabilitation Program (HBRRP) and later as the Highway
Bridge Program (HBP).
The NBIP establishes standards and requirements for the inspection and evaluation of
bridges in the United States. National Bridge Inspection Standards (NBIS) were first issued
in 1971 [7], and these standards are used to guide state transportation agencies in complying
with the responsibilities for inspecting bridges, maintaining a current bridge inventory, and
reporting bridge condition data to FHWA. The HBP provides funding to enable states to
improve the condition of bridges through replacement, rehabilitation, and systematic
preventive maintenance and is the primary source of federal funding for bridges. The
allocation of HBP funds is based on an apportionment process, which is dependent on the
data from the NBIP.
Under the NBIP, states are required to inspect all bridges longer than 20 feet and to
report both condition and updated inventory data to FHWA on an annual basis. The
NBI data that states are obliged to submit are specified in the Recording and Coding
Guide for the Structure Inventory and Appraisal of the Nation’s Bridges [13] and include 94
NBI items. The NBIP is intended to identify the nation’s structurally deficient or functionally
obsolete bridges, to evaluate the overall condition of bridges in the national network, and to
form a statistical basis for developing the cost-to-repair estimates used in the HBP
apportionment formulae [14]. The NBI includes condition ratings for decks, superstructures,
substructures, and culverts, which are among the primary data items that determine the
deficiency status of bridges. NBI condition ratings are assigned on a scale of zero to nine
(with nine being excellent and zero being failed), according to the specifications in the
Recording and Coding Guide [13]. Table 2-1 gives a brief summary of the NBI condition
rating guidelines [13].

Table 2-1: NBI condition rating guidelines in brief


Code Description
N NOT APPLICABLE
9 EXCELLENT CONDITION
8 VERY GOOD CONDITION (No problems noted)
7 GOOD CONDITION (Some minor problems)
6 SATISFACTORY CONDITION (Minor deterioration in structural elements)
5 FAIR CONDITION (Sound structural elements with minor section loss)
4 POOR CONDITION (Advanced section loss)
3 SERIOUS CONDITION (Affected structural elements from section loss)
2 CRITICAL CONDITION (Advanced deterioration of structural elements)
1 IMMINENT FAILURE CONDITION (Obvious movement affecting structural stability)
0 FAILED CONDITION (Out of service)

The FHWA defines deficiency under two categories: structural deficiency and functional
obsolescence. Structural deficiency (SD) indicates poor condition and deterioration in
structural elements, while functional obsolescence (FO) indicates a design or configuration
that is no longer adequate for the traffic served. Deficiency status affects both the allocation
of federal funding and the eligibility to use federal bridge replacement funds. This
classification is summarized in Table 2-2.
Sufficiency rating (SR) in conjunction with deficiency status determines whether a
structure is eligible for rehabilitation only or eligible for both rehabilitation and replacement
(Table 2-3).
Table 2-2: Deficiency status classification [15]

Structurally Deficient                       Functionally Obsolete
A condition rating of 4 or less for:         An appraisal rating of 3 or less for:
  Deck (Item 58) OR                            Deck geometry (Item 68) OR
  Superstructure (Item 59) OR                  Underclearance (Item 69) OR
  Substructure (Item 60) OR                    Approach roadway alignment (Item 72)
  Culvert (Item 62)
OR                                           OR
An appraisal rating of 2 or less for:        An appraisal rating of 3 for:
  Structural evaluation (Item 67) OR           Structural evaluation (Item 67) OR
  Waterway adequacy (Item 71)                  Waterway adequacy (Item 71)

Table 2-3: HBP funding eligibility for NBI bridges

Bridge Classification                                         Sufficiency Rating  Eligibility for HBP funds
Not deficient                                                 81-100              Not eligible
Deficient (Structurally Deficient or Functionally Obsolete)  50-80               Eligible for rehabilitation
Deficient (Structurally Deficient or Functionally Obsolete)  0-49                Eligible for replacement or rehabilitation
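
As an illustrative sketch of the classification and eligibility logic in Tables 2-2 and 2-3, the following functions and their input format are hypothetical (not FHWA software), and "N" codes and other special cases are simplified away:

```python
def deficiency_status(ratings):
    """Classify a bridge from NBI item codes per Tables 2-2 and 2-3.

    `ratings` is a hypothetical dict keyed by NBI item number; items a
    bridge does not have (e.g., Item 62 on a non-culvert) are omitted.
    """
    sd = (any(ratings.get(i, 9) <= 4 for i in (58, 59, 60, 62)) or
          any(ratings.get(i, 9) <= 2 for i in (67, 71)))
    fo = (any(ratings.get(i, 9) <= 3 for i in (68, 69, 72)) or
          any(ratings.get(i, 9) == 3 for i in (67, 71)))
    if sd:
        return "Structurally Deficient"
    if fo:
        return "Functionally Obsolete"
    return "Not deficient"


def hbp_eligibility(status, sufficiency_rating):
    """HBP funding eligibility per Table 2-3."""
    if status == "Not deficient" or sufficiency_rating > 80:
        return "Not eligible"
    if sufficiency_rating >= 50:
        return "Eligible for rehabilitation"
    return "Eligible for replacement or rehabilitation"


# A bridge with a poor substructure (Item 60 = 4) and an SR of 47:
print(hbp_eligibility(deficiency_status({58: 6, 59: 5, 60: 4}), 47))
```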

The SR is a value between 0 and 100, where 100 represents an entirely sufficient bridge and
0 represents an entirely insufficient bridge. The formula for calculating the SR uses 20 of the
94 NBI data items, with an emphasis on condition ratings. These items are summarized by
the FHWA as shown in Figure 2-1 [13].

Figure 2-1: NBI items used in sufficiency rating calculation
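
At a high level, the SR has the following structure (the detailed factor computations specified in the Coding Guide are omitted here):

$$ SR = S_1 + S_2 + S_3 - S_4, \qquad 0 \le SR \le 100 $$

where $S_1$ (up to 55 points) reflects structural adequacy and safety, $S_2$ (up to 30 points) serviceability and functional obsolescence, $S_3$ (up to 15 points) essentiality for public use, and $S_4$ (up to 13 points) special reductions.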

Apportionment and use of HBP funds

HBP-eligible activities were later expanded in SAFETEA-LU to include systematic
preventive maintenance [16], although states can use HBP funds for this purpose only if
they have a systematic process for choosing such activities. The final decision on eligibility is
determined by mutual agreement between the FHWA division office and the state DOT.
FHWA’s apportionment process for the allocation of HBP funds among the states (U.S.
Code, Title 23, Chapter 1, § 144) includes the following steps [17]:

- Gathering NBI data and bridge construction unit costs (BCUC)
- Identifying HBP eligibility based on deficiency status
- Computing state apportionment factors
- Computing the funds that go to each state (with some standard adjustments for each state)

To compute the state apportionment factors, the rehabilitation and replacement needs for
eligible deficient structures in each state are calculated by multiplying deck areas by the
corresponding replacement and rehabilitation BCUC. The bridge investment requirement at
the national level is simply the total of the needs at the state level. The state apportionment
factor is calculated by dividing the state investment required by the national investment
required.
Regardless of the apportionment factor, states receive a minimum of 0.25 percent of total
funds, and no state can receive more than 10 percent of total funds [17].
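
As a minimal sketch of this computation (the function, its input format, and the renormalization after applying the floor and cap are assumptions for illustration, not the statutory procedure):

```python
def hbp_apportionment(state_needs, total_funds):
    """Sketch of the HBP apportionment logic described above.

    `state_needs` is assumed to map each state to the sum, over its
    eligible deficient bridges, of deck area multiplied by the
    rehabilitation or replacement unit cost (BCUC).
    """
    national_need = sum(state_needs.values())
    factors = {s: need / national_need for s, need in state_needs.items()}
    # Bounds described above: at least 0.25 percent and at most 10 percent
    # of total funds per state; renormalizing afterwards is an assumption.
    factors = {s: min(max(f, 0.0025), 0.10) for s, f in factors.items()}
    scale = sum(factors.values())
    return {s: total_funds * f / scale for s, f in factors.items()}
```

With roughly 50 recipients, the floor and cap bind only at the extremes; the actual process also applies further standard adjustments per state.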
Total HBP funds available for distribution in FY 2009 amounted to $4.3 billion [18]. Funds
allocated through the HBP are not direct cash payments; rather, they are made available to
the states through reimbursement for eligible projects, which include replacement,
rehabilitation, painting, seismic retrofitting, systematic preventive maintenance, and
installation of scour countermeasures (U.S. Code, Title 23, Chapter 1, § 144). By law, states
typically must provide matching funds of up to 50 percent of project costs. In 2006, the HBP
funded 45% of total capital outlays on bridge rehabilitation by all levels of government [19].

Bridge Management Systems in the United States

The aftermath of two bridge failures, the Silver Bridge in 1967 and the Schoharie Creek
Bridge in 1987 [20, 21], together with the increasing gap between available funds and the
needs of the national bridge network, stimulated growing research to develop bridge
management systems in the mid-1980s.
Shortly thereafter, the Intermodal Surface Transportation Efficiency Act of 1991 required
states to develop and implement BMSs. A BMS is a software package that provides a rational
and systematic approach to all the activities related to managing a network of bridges [22],
such as inspecting and storing bridge condition data, predicting future needs of the bridge
network, selecting maintenance and improvement actions cost-effectively, and tracking
maintenance activities. Although development of BMSs was made optional by the National
Highway System Designation Act of 1995, many states decided to continue implementing
them [23].
Some states set out to develop their own tools, while many decided to implement
available systems. The FHWA developed the Pontis Bridge Management System in 1989
[24], which later became the most popular bridge management tool in the United States [25].
The BRIDGIT bridge management system was later developed by the National Cooperative
Highway Research Program (NCHRP) but has not become as popular as Pontis [26]. Today,
although Pontis is licensed in 44 states [27], the level of implementation varies.
The bases of the Pontis BMS modeling approach are principles of operations research
and economic analysis. Pontis addresses preservation decisions separately from
improvement decisions [28]. The main inputs for Pontis preservation modeling are element-
level condition data, cost models, and deterioration models. In Pontis, bridges are
represented as a set of structural elements based on AASHTO’s Commonly Recognized
(CoRe) elements. For each element, a set of possible condition states is defined (up to five,
where the first represents perfect condition and the last the worst), and a set of feasible
actions is assigned to each condition state (such as do nothing, paint, or replace).
Pontis models functional improvements (widening, raising, strengthening, and
replacement) separately from preservation. Preservation cost models are developed by
assigning costs to each feasible action for each element, along with a failure cost for each
element. Costs are typically assigned through expert elicitation. Deterioration models are
based on Markov transition probabilities. Dynamic optimization models are used to identify
the optimal bridge preservation policy, which minimizes total life-cycle costs given the cost
and deterioration models. Improvement models have bridge-level formulas and inputs (e.g.,
cost of raising, user benefit of replacement).
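
As an illustration of the Markov deterioration component, a minimal sketch for a single five-state element follows; the transition probabilities are invented for the example and are not Pontis defaults:

```python
import numpy as np

# One-step transition matrix for a 5-state element (state 1 = best,
# state 5 = worst); each row gives the probabilities of staying in a
# state or deteriorating to the next one under a "do nothing" action.
P = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00, 0.00],
    [0.00, 0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 0.00, 1.00],  # worst state is absorbing
])

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # element starts as new
for _ in range(10):
    state = state @ P  # expected condition distribution each year
print(state.round(3))
```

Optimization over such chains then trades off action costs against the long-run condition distribution they induce.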
Currently, the most widely acknowledged performance measure based on current BMS
elements is the Health Index (HI) [29]. The HI is a single number (from 0 to 100) that
reflects the condition distribution of the different elements on a structure [28]. The index is a
weighted condition distribution of the BMS elements, with weights determined by expert
assignment or element failure costs. HI values typically accumulate at the higher end of the
0-100 range, so relative HI values do not always convey a clear notion of relative
performance.
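
A minimal sketch of an HI-style computation follows, assuming state values that scale linearly from 1 (best) to 0 (worst) and quantity-times-weight aggregation; the exact weighting conventions vary by agency:

```python
def health_index(elements):
    """Quantity- and weight-aggregated condition score on a 0-100 scale.

    `elements` is a list of (quantities_by_state, weight) pairs, where
    quantities_by_state lists the element quantity in each condition
    state (best state first) and weight reflects, e.g., failure cost.
    """
    earned, possible = 0.0, 0.0
    for quantities, weight in elements:
        n = len(quantities)
        for i, q in enumerate(quantities):
            value = 1.0 - i / (n - 1) if n > 1 else 1.0
            earned += q * value * weight
        possible += sum(quantities) * weight
    return 100.0 * earned / possible

# A deck element entirely in state 1 and a girder element split between
# states 2 and 3 yield an HI below 100:
print(health_index([([500, 0, 0, 0, 0], 2.0), ([0, 60, 40, 0, 0], 5.0)]))
```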
Some states that are experienced Pontis users and have long used the tool to develop
their bridge programs discovered a problem in the program results over long durations.
When simulations are performed over the long term, the condition of the network converges
to a level lower than what agencies would target in practice [30, 31]. Thus, when decision
making is based only on cost minimization, agencies cannot achieve a desired future network
condition. This phenomenon is addressed by Patidar et al. [32] in the recent “Multi-Objective
Optimization for Bridge Management Systems” study, the outcome of an NCHRP project.
This new approach is based on multi-objective optimization and considers more than one
performance criterion. Each performance criterion is represented with a utility function,
18

which is measured on a zero to one scale and has a unitless measure. The benefits of bridge
actions (i.e., project benefits) are represented by the utility value. Utility functions can be
defined for a variety of measures, such as condition, load capacity, risks, or functional needs.
The total utility of a project is equal to the weighted sum of all component utilities. Final
prioritization in this approach is based on a total utility/cost ratio. This new methodology will
be used in the new version of Pontis (5.2), planned for release in 2011.
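The prioritization logic can be sketched briefly as follows. The criterion weights, component utilities, and projects shown are hypothetical, and in practice the utility functions are curves elicited from agency experts rather than raw scores.

```python
# Hypothetical criterion weights (summing to 1) and candidate projects with
# pre-scaled component utilities on [0, 1] and an estimated cost.
weights = {"condition": 0.5, "load_capacity": 0.3, "functional": 0.2}

projects = [
    {"name": "Bridge A deck overlay", "cost": 250_000,
     "utility": {"condition": 0.7, "load_capacity": 0.1, "functional": 0.2}},
    {"name": "Bridge B replacement", "cost": 2_000_000,
     "utility": {"condition": 0.9, "load_capacity": 0.8, "functional": 0.9}},
]

def total_utility(p):
    """Weighted sum of component utilities (the 'project benefit')."""
    return sum(weights[c] * u for c, u in p["utility"].items())

# Final prioritization: rank by utility/cost ratio, best first.
ranked = sorted(projects, key=lambda p: total_utility(p) / p["cost"], reverse=True)
for p in ranked:
    print(p["name"], total_utility(p) / p["cost"])
# Bridge A ranks first on utility per dollar in this example.
```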
New AASHTO Bridge Element Inspection Manual
At present, two types of condition inspections are prevalent in the United States: the
obligatory NBI condition inspections and the optional BMS element condition
inspections. Because the two rely on different methodologies and rating systems, they
require separate condition inspections. State DOTs thus allocate resources to both,
although the two share the same underlying goal of assessing condition.
In 2010, the AASHTO Subcommittee on Bridges and Structures approved a new
element-level bridge inspection manual intended to replace this dual-inspection approach.
The new AASHTO Bridge Element Inspection Manual [33] replaces the AASHTO Guide to
Commonly Recognized Structural Elements. The new manual classifies bridge elements into
two sets: the National Bridge Elements (NBE) and the Bridge Management
Elements (BME).
The NBEs provide refined condition ratings for the primary structural components of
bridges, including decks, superstructures, substructures, and culverts, as defined in the
Federal Highway Administration's Recording and Coding Guide for the Structure
Inventory and Appraisal of the Nation's Bridges [33]. The intention of introducing NBEs is
to eventually replace the NBI condition inspections with NBE condition inspections,
providing a more detailed and objective condition assessment of the Nation's bridges.
Pontis 5.2 is expected to combine the new AASHTO element definitions and multi-
objective optimization, but the modeling is still in progress. The shift to NBE from NBI
condition ratings was discussed and received positive comments at the recent Pontis User
Group meeting (September 21-22, 2010, Newport, Rhode Island). However, at this time, it is
unclear when the shift to NBE may take place. Until the HBP apportionment process is
changed to be compatible with the new NBEs, NBI data will still need to be reported to meet
federal requirements.
The NBI Translator (BMSNBI)
Since the early 1990s, the NBI Translator has been available as a built-in module within
Pontis BMS [34] to map element-level condition data to NBI condition ratings. Ideally, for
the States that have BMS element condition inspections, use of the NBI Translator would
eliminate the need for NBI condition inspections. States have the option to report translated
NBI condition ratings to the FHWA; however, the majority of states continue to collect both
types of condition data due to reported skepticism of the accuracy of the algorithm [35].
The NBI Translator is frequently criticized for estimating lower ratings than the field NBI
ratings at the upper end of the condition ratings [36]. Since the lower end governs deficiency
status, the effect on HBP apportionments may be negligible; however, it is well documented
that an accurate mapping between the two rating systems has not been achieved [35-37].
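One simple way to quantify such criticisms is to compare translated and field ratings for the same structures. The sketch below computes an exact-match rate and the mean signed bias, overall and at the upper end of the scale; the paired ratings are fabricated for illustration and are not results from any study.

```python
# Hypothetical paired ratings (field NBI vs. NBI Translator output, 0-9 scale).
field      = [8, 7, 7, 6, 5, 8, 9, 6, 4, 7]
translated = [7, 6, 7, 6, 5, 7, 8, 6, 4, 6]

diffs = [t - f for f, t in zip(field, translated)]
exact_match = sum(d == 0 for d in diffs) / len(diffs)
mean_bias = sum(diffs) / len(diffs)

print(f"exact-match rate: {exact_match:.0%}")  # 50%
print(f"mean bias: {mean_bias:+.2f}")          # -0.50 (translator rates lower)

# Bias restricted to the upper end (field rating >= 7), where the
# underestimation is reported to concentrate:
upper = [t - f for f, t in zip(field, translated) if f >= 7]
print(f"upper-end bias: {sum(upper)/len(upper):+.2f}")  # -0.83
```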
The NBI Translator algorithm is also used within the National Bridge Investment
Analysis System (NBIAS), which is used to project future investment needs for repair,
rehabilitation, and replacement of bridges in the national network [19]. These projections are
part of the Condition and Performance Report on the nation’s highways, bridges, and transit
systems, prepared by FHWA and submitted to Congress biennially. The NBIAS follows the
principles of the Pontis BMS, uses the same deterioration and cost models, and incorporates
benefit-cost analysis into bridge investment evaluation. Utilization of these models also
requires Pontis BMS element-level condition data. Therefore, the NBIAS uses the NBI
Translator algorithm for a back-translation to create representative Pontis BMS elements for
structures based on reported NBI condition ratings. This same algorithm is also used to
predict future conditions in NBIAS scenario analysis.
METHODOLOGY
The focus of this study was a survey of state bridge management engineers, who typically
oversee all bridge management activities for the state. The purposes of the survey were to
analyze bridge management practices and identify how consistently state DOTs implement
federal bridge programs, and to explore how bridge management at the state level is
influenced by the federal bridge programs. The survey design was based on a literature
search [20, 24-27] and targeted computer-mediated survey research. A preliminary
draft was reviewed by a diverse group of experts from academia, FHWA, state DOTs, and
consultants from the bridge management community.
The first section of the survey focused on the bridge management data that is maintained
by the state. This included identifying the data items collected, the data items used for bridge
management, methods of data entry and storage, and methods to ensure quality control and
quality assurance (QC/QA).
The second section targeted BMS implementation. The survey asked whether the state is
implementing a BMS, the challenges in implementation, the benefits realized so far,
improvements needed, and the state’s current and future plans for using a BMS for decision
support.
The third section included questions on how the state developed its bridge program (the
list of selected bridge projects). This section identified all of the agency processes involved
in preparing the list of bridge rehabilitation, replacement, and maintenance projects, as well
as how the state agency assessed condition and risk, how it used a BMS (or other decision
support tools), how it involved local agencies, and how it used economic methods such as
benefit-cost analysis or life-cycle cost analysis (LCCA).
The final section explored the respondents’ perceptions of federal bridge programs. This
included identifying issues, comments, and suggestions they have and how the federal
bridge programs affect the way they manage their bridge network. This section also asked
for recommendations to improve federal bridge programs and to identify the primary areas
in bridge management where the states need more federal guidance.
Recent literature and related surveys [23, 38-41] on the subject were reviewed to form the
conceptual design. The main concern regarding the online survey was the level of response
from the states, because this was an independent study and participation was voluntary. The
survey included a combination of 52 closed- and open-ended questions. Open-ended
questions were reserved for salient issues such as network-level prioritization and
comments on federal bridge programs.
While the survey was being prepared, FHWA shared the recent annual NBIS Compliance
Reviews conducted by its division offices (2007 and 2008) and a Bridge Management
Systems Survey that was about to be sent to the states. These documents were reviewed to avoid
duplicate questions and to minimize shared content. Some of the common content was kept
in the survey if there was value in asking the question as an independent party (e.g.,
challenges in implementing BMSs, data quality concerns).
The survey was sent to one person, typically the state Bridge Management Engineer, who
oversees all bridge management activities for the state. Twenty-three states responded
within the first week, and the survey was completed within three months, with an 80 percent
response rate (40 states).
The survey results were analyzed to explore the relationships between identified
problems regarding the federal bridge programs and the bridge management practices at the
state level. A synthesis of the issues and findings is given in the discussion section, along
with a list of recommendations for improving the federal bridge programs.
SURVEY RESULTS
The major findings from the survey are presented in this section.
Bridge management data
Ninety percent of respondent states do not believe the NBI data items (required by the
NBIP) cover all their needs for bridge management. When they were asked which additional
data items they thought were necessary for bridge management, respondents indicated
AASHTO CoRe element condition data (32 states), condition information on paint or
protective coatings (26 states), condition information on deck joints and wearing surfaces
(25 states), and condition information on bearings (20 states). Other common necessary data
items included condition data on deck drainage systems, scour history, scour condition, and
seismic vulnerability, to name a few.
The list of items collected specifically by some state agencies is extensive. Thirty-seven
states collect agency-specific items. When these items were reviewed, some common
information collected under different definitions was observed.
The majority of states (29) use both NBI condition data and BMS element-level condition
data when assessing the condition of bridges, 6 states use only NBI condition data, and 5 states
use only element-level condition data. One issue observed from the responses was that
condition assessment from both systems does not always provide systematic input to the
development of the bridge programs. For example, using NBI condition ratings to identify
project candidates eliminates the opportunity to use BMS element-level condition data. Even
though BMS data can subsequently be used to compare individual structures, it is not
typically used to assess the condition of the entire bridge network.
In addition to inspection and inventory data, effective decision making in bridge
management needs other sources of information such as traffic and cost data [42]. Often
these other data items are collected and stored by different offices within the agency. Four
respondents indicated that inspection data are maintained in multiple database systems. Of
these, two indicated that duplicate data are being archived but not used.
Federal regulation 23 CFR 650.313 requires states to assure systematic quality control and
quality assurance procedures, and the FHWA provides states with a framework for a Bridge Inspection
QC/QA Program to help them comply with this requirement. Survey questions relating to this
framework were designed to determine how states comply with the regulations. Thirty-seven
respondents reported having a QC program, and 36 respondents reported having a QA
program. Implementation of the framework, however, varies, and in general, has not yet been
systematized.
Only 23 respondents reported having in-house training for bridge inspectors. The
majority of the QA programs reported were for NBI inspections (29 states), but 21 states
have QA programs for BMS inspections as well. Fifteen respondents report QA reviews for
all inspections, while 20 report random inspections. Data samples for the random reviews
range from 0.5 percent to 20 percent of all inspections.
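Selecting such a random sample is straightforward to implement; the sketch below draws a reproducible sample at a configurable review rate within the range respondents reported, using hypothetical inspection identifiers.

```python
import random

def qa_sample(inspection_ids, rate, seed=2011):
    """Draw a reproducible random sample of inspections for QA review.

    rate: fraction of inspections to review, e.g., 0.005 to 0.20 as
    reported by survey respondents. At least one record is always drawn.
    """
    rng = random.Random(seed)
    k = max(1, round(len(inspection_ids) * rate))
    return sorted(rng.sample(inspection_ids, k))

# Hypothetical inventory of 4,000 inspection records, reviewed at 5 percent.
ids = list(range(1, 4001))
print(len(qa_sample(ids, 0.05)))  # 200 records selected for QA review
```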
Bridge program development
Bridge program development is a key process that combines all efforts in bridge
management and transforms them into a tool to support funding decisions. Ideally, this is a
systematic process that is based on quality data, effective models, and economic methods that
result in the most cost-effective decisions. Survey results on bridge program development
process indicate the following:
- Seven respondents reported having no set criteria for prioritizing bridge projects.
  Three others reported criteria that are vague (e.g., “all/any available
  data/information is used by the bridge committee”).
- Local agency involvement varies. Nineteen respondents involve local agencies in the
  development of state bridge programs (21 do not). Some local agencies are partners in
  decision making.
- Twenty-seven respondents listed set criteria for prioritizing bridge projects. Figure 2-2
  presents the frequency of reported criteria. Most common are NBI condition data,
  deficiency status, and traffic.
Bridge Management Systems
Eighteen respondents reported the use of a BMS (Pontis or other) to develop their bridge
program. Figure 2-3 shows the percentage of projects in bridge programs derived from BMS
recommendations, as reported by these 18 respondents. The most common benefits reported
from the BMSs in order of importance were the bridge condition data inventory with historic
data and deterioration models; identifying, programming and tracking maintenance activities;
and systematic analyses.
The responses indicate that the decision support capabilities of BMSs are being used by
only a few states. The few states at this more advanced BMS implementation level are
typically content with the benefits realized. The same respondents indicated the value of
identifying and programming preventive maintenance and rehabilitation activities and
commented on how this approach helped them improve the condition of a larger share of
their bridge network instead of targeting replacement projects at only a small portion of it.
Improvements that states need in order to use BMSs more effectively can be generalized
under one area: modeling. The majority of responses to BMS questions that touched on
modeling issues referred to the Pontis BMS, since it is the dominant BMS used by the
states.
[Figure 2-2: Criteria used for prioritizing bridge projects. Bar chart of the number of states
that consider each criterion; from most to least frequently cited: NBI condition data, traffic,
structural deficiency, benefit-cost analysis, detour length, functional obsolescence, road
classification, sufficiency rating, posted bridges, BMS recommendations, proximity to other
projects, BMS element condition data, structure age, fracture critical condition, scour
critical condition, politics, local agency input, and inspector recommendations.]
[Figure 2-3: Percent of program bridge projects derived from BMS recommendations
(“What percentage of your bridge program is coming from BMS recommendations?”).
Number of states per response: BMS recommendations are not used, 22; less than 10
percent, 7; less than 50 percent, 4; less than 80 percent, 5; 100 percent, 2.]
Default Pontis cost values differ from actual construction costs, so each agency needs to
develop and customize its own cost models [30, 43]. Having historical element-level
condition data helps states improve their deterioration models [44]. However, developing
both cost and deterioration models takes time and requires a good data inventory and agency
commitment. Coordination between departments in the agency is also needed, since cost
information is typically kept by another department. The results of the survey are consistent
with the literature; 11 respondents identified developing the deterioration and cost models as
a major difficulty in implementing BMSs.
Survey responses indicated that difficulties in implementing a BMS differ with the level
of implementation. The states at earlier stages of implementation were typically challenged
by management buy-in (most reported), inspection and data requirements, development of
deterioration and cost models, questions on whether Pontis simulation results are realistic, the
amount of time required to learn and implement the BMS, and internal resistance from the
agency staff.
At later stages of implementation, the limited number of staff in these departments was
an issue. Since it takes considerable time to train new employees, turnover can be troubling.
Also, uncertainties regarding the national direction for bridge management and the expected
improvements for the Pontis BMS were noted by the respondents as impairing agency
commitment to implementation. Experienced states reported that despite the initial challenge
of developing agency procedures and policies to support the BMS, they now see the value
and perceive the challenges as necessary steps toward implementing a good BMS. As
inspectors and management gain experience, these difficulties also lessen.
The most common criteria states reported using to develop their bridge programs are NBI
data, particularly NBI condition ratings. Sufficiency ratings and deficiency status, which are
based mainly on NBI condition data and are the main drivers of HBP fund allocations, are
also the most common information states use to identify
candidate projects. This step is typically followed by a prioritization process by the central
office; local agencies affect this prioritization in 25 percent of the states.
As indicated earlier, 29 respondents reported using BMS element-level condition data
when they are evaluating bridge conditions. However, except for 11 respondents, the effect
of BMS element-level condition data in bridge program development could not clearly be
seen, either because the process was guided by NBI data only or the BMS element-level
condition data were used only for individual assessment of bridges and not network-level
comparisons. Thirty percent of the respondents had some form of a prioritization method
(e.g., ranking structurally deficient structures by sufficiency rating, ranking based on a
combined index of vulnerabilities and condition, or ranking by the use of multi-objective
utility functions).
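The most common of these methods, ranking structurally deficient bridges by sufficiency rating, amounts to a filter followed by a sort, as in the sketch below; all records are hypothetical.

```python
# Hypothetical bridge records: sufficiency rating (0-100) and a
# structurally deficient (SD) flag derived from NBI condition ratings.
bridges = [
    {"id": "A-101", "sufficiency": 48.2, "sd": True},
    {"id": "B-202", "sufficiency": 81.5, "sd": False},
    {"id": "C-303", "sufficiency": 35.9, "sd": True},
    {"id": "D-404", "sufficiency": 62.0, "sd": True},
]

# Candidates: SD bridges only, worst (lowest) sufficiency rating first.
candidates = sorted((b for b in bridges if b["sd"]),
                    key=lambda b: b["sufficiency"])
print([b["id"] for b in candidates])  # ['C-303', 'A-101', 'D-404']
```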
Whether states had a systematic process for the development of their bridge programs
was another concern. From the explanations provided for several questions, seven states
have a clear and systematic process for developing bridge programs. Other methods are
based on BMS recommendations (5 states), proximity of bridges to other large projects (4
states), structural vulnerabilities (3 states), and economic analyses (2 states). The typical
method is to identify candidates based on structural deficiency and sufficiency ratings,
combined with engineering judgment.
Figure 2-4 summarizes the reasons 28 respondents report for not using a BMS for bridge
program development. Skepticism of BMS simulation results and the challenge of
developing the cost and deterioration models are the primary reasons cited. Some
respondents report difficulty in advancing the level of implementation due to lack of
resources and staff limitations, while a few respondents are satisfied with their own systems.
Two respondents reported resistance within the agency as the major reason. Eighty percent
are planning to use a BMS for this purpose in the future.
[Figure 2-4: Reasons reported for not using BMS for bridge program development. Bar
chart of the number of respondents citing each reason; from most to least frequently cited:
skepticism on the BMS simulation results, modeling is incomplete or difficult, staff
limitations, lack of resources, complexity of the BMS, not enough cycles of inspection,
internal resistance to implementation, and current system is good.]
Thirty-two respondents confirmed addressing risks such as scour and seismic safety
while developing bridge programs. Addressing these vulnerabilities, however, was not as
pronounced in the responses to earlier questions. For example, only five respondents cited
vulnerabilities among their bridge program development and prioritization
criteria.
Twenty-five of the forty respondents do not consider LCCA when evaluating project
alternatives for new design and improvements. Although LCCA has long been recognized as
a technique to help transportation agencies in project-level investment decisions, there is no
consensus on methodology or cost parameters [45]. These parameters may include agency,
user, and vulnerability costs, and analysts must have reliable estimates of these costs as well
as probabilities that these costs will be incurred. Like deterioration and cost modeling,
LCCA is a dynamic process, and model estimates need to be updated over time.
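As an illustration of the computation underlying such an analysis, the sketch below discounts a stream of certain agency costs and probability-weighted risk costs to an expected present value for each alternative; every cost, probability, and the discount rate is a hypothetical placeholder.

```python
def present_value(cash_flows, rate=0.04):
    """Discount a list of (year, expected_cost) pairs to present value."""
    return sum(cost / (1 + rate) ** year for year, cost in cash_flows)

def expected_lcc(alternative, rate=0.04):
    """Expected life-cycle cost: certain costs plus probability-weighted risks."""
    certain = present_value(alternative["costs"], rate)
    risk = present_value(
        [(year, p * cost) for year, cost, p in alternative["risks"]], rate)
    return certain + risk

# Hypothetical comparison: periodic preventive maintenance vs. deferred rehab.
preventive = {
    "costs": [(0, 50_000), (10, 60_000), (20, 60_000)],  # agency costs
    "risks": [(25, 400_000, 0.05)],                       # (year, cost, prob)
}
deferred = {
    "costs": [(0, 0), (18, 900_000)],
    "risks": [(12, 400_000, 0.25)],
}
for name, alt in [("preventive", preventive), ("deferred", deferred)]:
    print(name, round(expected_lcc(alt)))
# Preventive maintenance has the lower expected life-cycle cost here.
```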
States’ perception of the federal bridge programs
The final section of the survey investigated respondents’ perceptions of the federal bridge
programs, how they benefit from the programs, and the types of challenges they face.
The majority of respondents acknowledged the NBIP as the principal source of
information on the condition of bridges and the condition assessment of their bridge
networks. Half of respondents also noted the NBIP and the data collected for NBI strongly
influenced their network-level prioritization. While a few respondents indicated that the HBP
does not affect their bridge management process to any great degree, the majority reported
that bridge management is driven by HBP requirements for eligible projects. The relaxation
of HBP rules to fund preventive maintenance projects was well received by some states,
which evaluated it as a very positive move. For example, one respondent stated,
“The relaxing of rules that used to require a bridge be eligible before bridge
money can be spent on it has helped…In many ways this has helped focus the
need on maintaining bridges rather than waiting until they deteriorate to the point
that a lot needs to be spent to upgrade them. You can get more done with limited
resources that way.”
On the other hand, some respondents criticized the HBP for not encouraging
preservation activities. Several respondents were aware of the recent change and wanted to
use HBP funds for preventive maintenance but found the requirements too restrictive or
commented on the difficulty of complying with them due to lack of time and personnel.
When respondents were asked whether federal bridge programs provide enough
flexibility to agencies, 13 of the 40 stated that they do not believe so. Eight of these 13
commented that the requirements for using HBP funds for preventive maintenance were too
restrictive. Final eligibility to use HBP
funds on project-specific preventive maintenance depends on mutual agreement between the
state FHWA division office and the state DOT. Some respondents reported that the
interpretation of requirements by the FHWA division offices may vary and that the
requirements should be more objective.
Thirty-one of the respondents are familiar with how FHWA uses NBI data in allocating
Federal bridge funds (apportionment factors for states depend on the rehabilitation and
replacement needs of eligible deficient structures identified by NBI condition ratings).
Twenty-nine respondents believe that NBI data is not sufficient for allocating federal bridge
funds. Twenty-seven respondents believe that BMS element-level condition data should be
incorporated into the allocation of federal bridge funds and preparation of the required
Condition and Performance Reports. Nine respondents indicated that the new National
Bridge Elements (NBE) proposed by AASHTO would improve the condition assessment
for the Nation’s bridges and federal bridge programs.
Respondents were also asked if they have any suggestions to improve allocation of
federal bridge funds and preparation of the Condition and Performance Reports. Several
of these suggestions are quoted below:
“Quantifying all funds spent on bridges is difficult to summarize. Funds are expended in
a variety of manners to address transportation needs throughout the State. The obvious
and easiest to quantify are those funds spent directly for the repair, rehab and replacement
of bridges. The FY 2010 Program will improve or replace approximately *** bridges.
However, we also spend funds indirectly on bridges through congestion management
projects, roadway resurfacing projects where bridges are included separately from other
bridge funding categories. State funds are expended for a variety of bridge maintenance
needs that are not included in typical program accounting.”
Quantifying the funds spent not only on bridges but also on other transportation assets or
programs is important to analyze the relationship between investment and effect. This
quantification is challenging since a variety of alternative resources can be used for any
transportation project. However, it can provide valuable information to determine the
effectiveness of federal programs and investment decisions. Unless expenditures are
documented and reported at a detailed level for all activities, by funding source
and by structure, tracking expenditures from the HBP is difficult. Whether such a
requirement would be reasonable or feasible at the national level is another issue.
“Federal apportionment is only moderately fair and accurate because there is a
wide variation in the way each state inspects and reports bridge data.”
“Without independent review of all bridge inspections or processes, there is no other way
for FHWA to report to congress. The question is "are all states getting similar inspection
results when seeing a bridge with similar conditions?" The main problem is this method
punishes states that do preventive maintenance as they will receive less than those who
‘let the bridges go.’”
While some variation is inevitable, the need for a more objective and consistent
framework to inspect and assess bridge conditions is acknowledged by the FHWA as well as
the bridge management community, and recent efforts toward national consistency and a
more objective, detailed inspection framework are supported by the FHWA.
“The NBI feeds the analysis done for the Condition and Performance Report but does
not necessarily determine how much congress allocates to transportation. The data and
impact relationship cannot be directly measured.”
The Condition and Performance Reports are informative but not compulsory documents.
Yet they have an important mission: providing the major input from the FHWA to Congress.
While not the only input, they are among the criteria that guide Congress in resource
allocation.
“We recently lost funds because our state’s bridges were in good shape. To get there, we
had to sell BONDS and part of our future. States that are willing to be creative to
generate funds should not be punished.”
This comment points out a major shortcoming in the current resource allocation model.
The model does not have a component that motivates improving the conditions of bridges;
rather, as deficient deck area increases, a state’s HBP apportionment can
increase.
“The funding should be partially based on the amount spent on preventive
maintenance. If you are not trying to preserve, why should you get a larger piece of the
pie by letting the bridges deteriorate to get more money?”
The current HBP apportionment process does not include bridge preservation needs in the
calculation of national bridge investment requirements. Although the necessity of timely
preventive maintenance and preservation activities is acknowledged by the FHWA [46], the
apportionment process does not address this necessity.
The final question of the survey asked respondents to identify the subject areas they felt
needed additional guidance from FHWA. The most common issues are presented in
Figure 2-5. Using decision support tools for supporting bridge program development was
cited most often, while bridge management implementation, funding restrictions, developing
QC/QA programs, and data quality are other areas in which states reported struggling.
[Figure 2-5: States’ need for additional guidance in bridge management (number of states):
using DS tools for supporting bridge program development, 21; bridge management
implementation, 20; funding restrictions, 16; bridge inspection QC/QA programs, 13; data
quality, 11; data management, 8; reporting requirements, 8; inspection workload, 6;
qualifications required for bridge inspection personnel, 3.]
Summary of major findings
The major findings from the survey can be summarized as follows:
- Ninety percent of the states do not believe NBI data items cover all their needs for
  bridge management.
- States as well as the federal government are concerned with data quality, but bridge
  inspection QC/QA programs typically are not yet completely established.
- Seven states have no set criteria for prioritizing their bridge needs, and for some other
  states the criteria are open-ended. Reported criteria are governed by NBI condition
  data, HBP eligibility, and road capacity.
- The majority of states have implemented a BMS, typically Pontis; however, the level
  of implementation varies. With few exceptions, states are challenged with
  implementation of BMS for decision support, especially due to difficulties in
  developing deterioration and cost models.
- General skepticism of the recommendations from the BMS discourages
  implementation and augments internal resistance to the efforts necessary to fully
  implement a BMS.
- States have extensive amounts of BMS element-level condition data, but often the
  data are not processed and used, especially for network-level assessment.
- States desire more flexibility in the HBP to use funds for preventive maintenance.
- Use of economic analysis techniques such as LCCA or benefit-cost analysis is
  limited.
DISCUSSION
Bridge Inspections and Condition Data
The extensive amount of data collected by the states typically does not translate into
information to support bridge management decision making. Investigating the differences
between actual data collected and the data used may help transportation agencies identify
redundant or duplicate data items. Ensuring that data collection is rational [23] can save
states both crew time and resources and help them keep up with the inspection
workload. How the data are transformed into concise, relevant inputs to various
decision making processes should guide efforts from inspection to reporting. Effective
communication of how these inputs guide the decision making with stakeholders such as
policy makers, planners, budgeters, and the public is also essential for the accountability of
bridge management programs.
Survey results suggest that although BMS element-level condition data based on
AASHTO CoRe elements have been collected in the United States since the 1990s, not many
states process or use this information extensively in bridge management. Project selection
and prioritization are typically driven by NBI condition data, as they drive federal funding.
Although NBI is the only mandated and most nationally available source of data, it cannot
sufficiently support all questions regarding bridge management at the state and federal
levels.
The possible shift from NBI condition ratings to NBE as intended with the new
AASHTO Bridge Inspection Manual will not be without its problems. Management of this
transition at the state and federal level is critical for its ultimate success and sustainability.
The NBE definitions in the new AASHTO Manual are largely consistent with the AASHTO
CoRe elements. However, significant changes to the condition state language have been
made. The states that already collect CoRe element condition data will need to revise their
inspection manuals and condition state definitions. NBE is proposed to replace the NBI
condition ratings with element-level detail for decks, superstructures, substructures, and
culverts. The element-level condition data for these primary structural components (e.g., pre-
stressed concrete arch, reinforced concrete abutment) are to be represented by percentages of
total quantity in four condition states. Many of these elements correspond to the
AASHTO CoRe elements by structural use, and the element numbers are kept the same for
ease of transition. However, the condition language and condition states are revised
considerably to better capture defects and make them more objective.
While the change in bridge inspection approach is a potentially positive move from a
condition assessment point of view, the transition has its challenges. The first challenge is
how to migrate already available and very valuable historic BMS condition data to the NBE
condition data. The number of condition states in the current CoRe elements can be 3, 4, or 5,
but in the proposed NBE every element will be in 4 condition states. Other significant
changes include the separation of wearing surfaces from decks, separation of steel protective
coatings from steel, and incorporation of smart flags into condition state language. These
migration issues are known to the developers of the new manual and are currently under
investigation. The shift will take time; meanwhile, the resulting ambiguity leaves states
anticipating the coming changes.
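As an illustration of the first challenge, one naive migration approach treats each scale's condition states as equal intervals on a normalized damage scale and redistributes quantities by interval overlap. This is only a sketch of the mapping problem, not the procedure under investigation by the manual's developers.

```python
def migrate_states(qty, n_new=4):
    """Redistribute element quantities from n_old to n_new condition states.

    Treats both scales as equal-width intervals on [0, 1] (best to worst)
    and splits each old state's quantity across overlapping new intervals.
    A crude proportional assumption, for illustration only.
    """
    n_old = len(qty)
    new_qty = [0.0] * n_new
    for i, q in enumerate(qty):
        lo, hi = i / n_old, (i + 1) / n_old  # old state's interval
        for j in range(n_new):
            jlo, jhi = j / n_new, (j + 1) / n_new
            overlap = max(0.0, min(hi, jhi) - max(lo, jlo))
            new_qty[j] += q * overlap / (hi - lo)
    return new_qty

# A 3-state CoRe distribution (70/25/5 percent) mapped to 4 NBE states.
print([round(x, 1) for x in migrate_states([70.0, 25.0, 5.0])])
# Quantities are redistributed across the four states; the total is preserved.
```

A real migration would require element-specific engineering judgment, since the revised condition state language does not align neatly with equal intervals.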
The NBI Translator (BMSNBI)
Within the data section of the survey, respondents were asked whether they use the NBI
Translator (BMSNBI) algorithm to translate BMS element-level condition data to NBI
condition ratings. Only three states report translated ratings to FHWA, and three other states
use translated ratings for internal purposes. To a great extent, HBP funding eligibility is
governed by the NBI condition ratings. Therefore, whether translated ratings adequately
represent field NBI condition ratings is an issue. Reported concerns [35-37] about the
accuracy of the translation between the two condition systems therefore suggest concerns about
the accuracy of investment projections. If NBEs are to be implemented, use of the NBI
Translator algorithm within the NBIAS to synthesize BMS elements from NBI condition
ratings will no longer be necessary. National bridge investment projections, needs, and
performance measures could then be based on simulation results from the NBE data.
The FHWA QC/QA framework is a positive initiative; however, additional guidelines on
querying, processing, and using the data will be needed to improve overall data quality.
Some inaccuracies and data quality problems are obvious at the bridge level but become less
recognizable when the network as a whole is queried (e.g., queries for periodic checks to
identify errors such as improved condition data when no improvement action was applied to
the structure). Without using and processing the data, it is difficult or impossible to identify
quality issues. In an agency where BMS implementation is not openly endorsed by all levels
of management, there is little motivation to do element inspections in the field or to process
and use the BMS element condition data.
Highway Bridge Program
Many states and FHWA acknowledge the value of a national bridge management policy
that is based on modern asset management principles and balances bridge preservation,
rehabilitation, and replacement activities based on objective data. Efforts at both the state and
federal levels are needed to advance the framework and tools to get there, but this is a vision
that would lead to cost-effective investments of tax dollars and sustainable bridge
management programs. However, state-level management buy-in is key to establishing a
framework to achieve that vision, as NBE and BMS implementation will require
full-time staff, horizontal communication, data sharing, training, and patience. Without
it, it will be difficult for FHWA to encourage BMS implementation among the states, since
executive-level endorsement is crucial for the implementation of asset management tools
[47] (and any other strategic change such as the new NBE).
For national and state-level network assessments, performance measures based on the
proposed NBEs will be needed, ideally with balanced distributions over their defined range.
Such performance measures would also serve the national agenda to identify “quantifiable
performance measures” and to provide “data-driven, risk based” oversight of the Nation’s
bridges, and they have the potential to enhance the HBP apportionment model.
In the survey, 34 respondents reported spending $4.5 billion on bridge replacement,
rehabilitation, and preservation. This compares to nearly the same amount spent from HBP
for all states in 2009. If the expenditures from the HBP were tracked, it would be possible to
investigate the link between expenditure and performance. The GAO recommended FHWA
track HBP funding, and FHWA responded positively. However, several challenges exist.
First, states have discretion in their spending, and FHWA does not have the authority to
require such information. Another challenge is a provision in the Intermodal Surface
Transportation Efficiency Act of 1991 that allows up to 40 percent of a state’s bridge
program apportionment to be transferred to the National Highway System (NHS) or the
Surface Transportation Program (STP). Whether to share the amount of transfers from the
HBP is, again, at the states’ discretion.
Since 1992, the amount of transfers from HBP to other transportation programs, as
reported by 35 states, equals $4.5 billion [48]. This amount is almost equal to last year’s
total HBP apportionment and reflects only the 70 percent of states that reported these
transfers. It is well known that bridge management needs at the state level go beyond all
available federal and local funds. States also spend funds from other large “core” formula
program apportionments on their bridges or spend more than the required minimum
matching share [14]. Therefore, it appears that states transfer these funds not because they
do not spend on bridges, but because they either want to allocate funds to projects that are
not eligible for HBP funds or prefer to use apportioned funds through more flexible
transportation programs.
In several questions in the survey, respondents reported that they need more flexibility in
spending HBP funds, but, at the same time, more documentation is necessary to track the
HBP funds and their impact on the national network. This is an inherent conflict between
FHWA and the state transportation departments and a challenge for the FHWA: providing
national consistency without being perceived as rigid, and being flexible in achieving the
goals of a national bridge management policy without being overly flexible. This hard-to-
achieve balance will be debated during the next reauthorization.
Successful bridge management practices at state departments of transportation depend
significantly on staff experience and expertise, due to the sophisticated nature of tools, long
implementation times, and customization needed for each agency. Side comments from the
survey emphasized the negative impact of staff fluctuation on the bridge management
programs. Institutional memory loss in strategic planning and decision making programs is a
significant problem [49]; knowledge management literature [50, 51] is available to assist state
transportation departments as they consider the sustainability of their bridge programs.
CONCLUSIONS
This study presented an overview of federal and state government bridge management
efforts taken in conjunction with the federal bridge programs in the last 40 years. Survey
results from 40 states identify challenges and needs for bridge management at both the
federal and state levels, useful to both practitioners and policy makers.
State transportation agencies collect extensive amounts of data on bridges, generally
including both NBI and BMS condition inspections. However, systematic transformation of
the extensive data into information to guide bridge management decisions is limited. Further,
survey results indicate that ninety percent of the states do not believe that federally required
NBI data items cover their data needs for bridge management.
HBP eligibility, NBI condition data, and road capacity guide network-level bridge
management decisions. While the majority of state agencies have implemented BMS, the
level of implementation varies, and the overall input from BMSs to network-level decisions
is as yet minimal.
Advancing implementation of BMSs in support of decision making at the national level
has many challenges. A modeling approach that is consistent with states’ expectations and
verified by data and experience is yet to be achieved. Current models are complex and
require continuous updates to verify assumptions and model inputs. Simplified network-level
tools and methodologies are needed that summarize available data into objective information
to guide bridge management decisions. Such tools that also consider economic analysis can
support cost-effective, network-level decisions for both state and federal governments.
Questions remain and further research is needed on technical, institutional,
and managerial aspects.
REFERENCES
1. National Transportation Safety Board, Highway Accident Report, Collapse of I-
35W Highway Bridge Minneapolis, Minnesota August 1, 2007. 2008: Washington
D.C.
2. Hao, S., I-35W Bridge Collapse. Journal of Bridge Engineering, 2010. 15(5): p. 608-
614.
3. Binder, S.J., The Straight Scoop on SAFETEA-LU. Public Roads, 2006. 69(5): p. 1-1.
4. USDOT OIG, Assessment of FHWA Oversight of the Highway Bridge Program and
the National Bridge Inspection Program. 2010: Washington D.C.
5. USDOT OIG, National Bridge Inspection Program: Assessment of FHWA's
Implementation of Data-Driven, Risk-Based Oversight. 2009: Washington D.C.
6. USDOT OIG, Audit of Oversight of Load Ratings and Postings on Structurally
Deficient Bridges on the National Highway System. 2006: Washington D.C.
7. GAO, HIGHWAY BRIDGE PROGRAM: Clearer Goals and Performance Measures
Needed for a More Focused and Sustainable Program, Report to Congressional
Committees, in GAO-08-1043. 2008: Washington D. C.
8. Highway Bridge Program, Condition of Nation’s Bridges Shows Limited
Improvement, but Further Actions Could Enhance the Impact of Federal
Investment, Statement of Phillip R. Herr, Director Physical Infrastructure Issues,
Government Accountability Office, in Committee on Transportation and
Infrastructure, House of Representatives. 2010, GAO: Washington D.C.
9. FHWA Has Taken Actions But Could Do More to Strengthen Oversight of Bridge
Safety and States' Use of Federal Bridge Funding, Statement of Joseph W. Comé,
Assistant Inspector General for Highway and Transit Audits, U.S. Department of
Transportation, in Committee on Transportation and Infrastructure United States
House of Representatives. 2010: Washington D.C.
10. Oversight of the Highway Bridge Program and the National Bridge Inspection
Program, Statement of King W. Gee Associate Administrator for Infrastructure
Federal Highway Administration U.S. Department of Transportation, in Committee
on Transportation and Infrastructure. 2010: Washington D.C.
11. Oversight of the Highway Bridge Program and the National Bridge Inspection
Program, Testimony of Malcolm T. Kerley, P.E. Chief Engineer Virginia Department
of Transportation, on behalf of AASHTO, in Committee on Transportation and
Infrastructure. 2010.
12. FHWA, 2006 Status of the Nation’s Highways, Bridges, and Transit: Condition and
Performance, Report to the Congress. 2007. p. 436.
13. FHWA, Recording and Coding Guide for the Structure Inventory and Appraisal of
the Nations Bridges, B.D. Office of Engineering, Bridge Management Branch, Editor.
1995: Washington D.C.
14. Kirk, R.S. and W.J. Mallett, Highway Bridges: Conditions and the Federal/State
Role, in Congressional Research Service. 2007.
15. FHWA. NON-REGULATORY SUPPLEMENT, OPI: HNG-33. 1987 [cited September
2010]; Available from:
http://www.fhwa.dot.gov/legsregs/directives/fapg/0650dsup.htm.
16. FHWA, SAFETEA-LU Cross Reference Links to Highway Material Related to Each
Section of SAFETEA-LU Including Fact Sheets, Guidance, and Regulations, Sec
1114, Highway Bridge Program, Fact Sheet.
17. NTSB. NTSB Docket Management System, Docket ID:44005, Bridge Design Group
- Attachment 7, FHWA's Apportionment Process for Highway Bridge Program
(HBP) Funds. 2009 [cited March 17, 2009]; Available from:
http://www.ntsb.gov/dockets/highway/hwy07mh024/401912.pdf.
18. FHWA, Notice, Apportionment of Fiscal Year (FY) 2009 Highway Bridge Program
Funds, N 4510.687, HCFB-1. 2008.
19. FHWA, 2008 Status of the Nation’s Highways, Bridges, and Transit: Condition and
Performance, Report to the Congress. 2009: Washington D.C. p. 622.
20. Small, E.P., et al., Current Status of Bridge Management System Implementation in
the United States, in Eighth Transportation Research Board Conference on Bridge
Management, TRB Transportation Research Circular 498. 1999: Washington D.C. p.
A-1/1-16.
21. Swenson, D.V. and A.R. Ingraffea, The collapse of the Schoharie Creek Bridge: a
case study in concrete fracture mechanics. International Journal of Fracture, 1991.
51(1): p. 73-92.
22. Hudson, R.W., et al., Microcomputer Bridge Management System. Journal of
Transportation Engineering, 1993. 119(1): p. 59-76.
23. Sanford, K.L., P. Herabat, and S. McNeil, Bridge Management and Inspection Data:
Leveraging the Data and Identifying the Gaps. Transportation Research Circular,
1999. 498: p. B-1/1-15.
24. Cambridge Systematics, Inc., Pontis Bridge Management Release 4.4 User Manual.
2005, AASHTO: Washington D.C. p. 572.
25. Thompson, P.D. and R.W. Shepard, Pontis, in Transportation Research Circular
423: Characteristics of Bridge Management Systems, Transportation Research
Board. 1994.
26. Hawk, H., The BRIDGIT bridge management system. Structural Engineering
International, 1998. 8(4): p. 309.
27. AASHTO (2009) BRIDGEWare Update Newsletter. 3.
28. Cambridge Systematics, Inc., Pontis Bridge Management Release 4.4.
Technical Manual. 2005: Cambridge, Massachusetts. p. 347.
29. Thompson, P.D., et al., The Pontis bridge management system. Structural
Engineering International, 1998. 8(4): p. 303.
30. Milligan, J.H., R.J. Nielsen, and E.R. Schmeckpeper, Short- and Long-Term Effects of
Element Costs and Failure Costs in Pontis. Journal of Bridge Engineering, 2006.
11(5): p. 626-632.
31. Johnson, M.B., Bridge Management Update, AASHTO SCOB Meeting,
Omaha Nebraska. 2008.
32. Patidar, V., et al., Multi-Objective Optimization for Bridge Management Systems, in
NCHRP Report 590, Project: NCHRP 12-67. 2007.
33. AASHTO, AASHTO Bridge Element Inspection Manual, 1st Edition. 2010.
34. Hearn, G., Frangopol, D., Chakravorty, M., Myers, S., Pinkerton, B., Siccardi, A.J. ,
Automated Generation of NBI Reporting Fields from Pontis BMS Database
Infrastructure: Planning and Management, American Society of Civil Engineers, J.L.
Gifford, D.R. Uzarski, S. McNeil, eds., Denver, 1993: p. 226-230.
35. Aldemir-Bektas, B. and O.G. Smadi, A Discussion on the Efficiency of NBI
Translator Algorithm, in Proceedings of the Tenth International Conference on
Bridge and Structure Management, October 20-22, 2008, Transportation Research E-
Circular. 2008: Buffalo, New York.
36. Sobanjo, J.O., Thompson, P. D., Kerr, R., Element-to-Component Translation of
Bridge Condition Ratings. Transportation Research Board Annual Meeting 2008
Compendium of Papers DVD, 2008. Paper #08-3149.
37. Al-Wazeer, A., Nutakor, C.,Harris, B., Comparison of Neural Network Method
Versus National Bridge Inventory Translator in Predicting Bridge Condition Ratings.
TRB 86th Annual Meeting Compendium of Papers CD-ROM, 2007. Paper #07-
0572.
38. Minchin, R.E., Jr., et al., Best Practices of Bridge System Management---
A Synthesis. Journal of Management in Engineering, 2006. 22(4): p. 186-195.
39. Small, E.P., Philbin, T., Fraher, M., Romack, G.P., Current Status of Bridge
Management System Implementation in the United States, in Eighth Transportation
Research Board Conference on Bridge Management, TRB Transportation Research
Circular 498. 1999: Washington D.C. p. A-1/1-16.
40. Rolander, D.D., Highway Bridge Inspection: State-of-the-Practice Survey.
Transportation Research Record, 2001. 1749(-1): p. 73.
41. Markow, M.J., Hyman, W.A., Bridge Measurement Systems for Transportation
Agency Decision Making, in NCHRP Sythesis 397. 2009.
42. Sanford, K.L., P. Herabat, and S. McNeil, Bridge Management and Inspection Data:
Leveraging the Data and Identifying the Gaps. Transportation Research Circular,
1999. 498: p. B-1/1-15.
43. Thompson, P.D., J.O. Sobanjo, and R. Kerr, Florida DOT Project-Level Bridge
Management Models. Journal of Bridge Engineering, 2003. 8(6): p. 345-352.
44. Thompson, P.D. and M.B. Johnson, Markovian bridge deterioration: developing
models from historical data. Structure and Infrastructure Engineering: Maintenance,
Management, Life-Cycle Design and Performance, 2005. 1(1): p. 85 - 91.
45. Hawk, H., Bridge Life-Cycle Cost Analysis, in NCHRP Report 483. 2003,
Transportation Research Board: Washington D.C.
46. FHWA, ACTION: Preventive Maintenance Eligibility, Memorandum, in HIAM-20.
2004.
47. Falls, L.C., et al., Asset Management and Pavement Management Using Common
Elements to Maximize Overall Benefits. Transportation Research Record, 2001. 1769:
p. 1-9.
48. FHWA, Office of Bridge Technology. Transfers from The Highway Bridge Program (By State and FY).
2010; Available from: http://www.fhwa.dot.gov/bridge/transfer.cfm.
49. Coffey, J.W., Knowledge modeling for the preservation of institutional memory.
Journal of knowledge management, 2003. 7(3): p. 38.
50. Wiig, K.M., Knowledge management: where did it come from and where will it go?
Expert Systems with Applications, 1997. 13(1): p. 1.
51. Stein, E.W., Organization memory: Review of concepts and recommendations for
management. International Journal of Information Management, 1995. 15(1): p.
17-32.
CHAPTER 3.
A DISCUSSION ON THE EFFICIENCY OF NBI TRANSLATOR ALGORITHM
A paper presented at the Tenth International Conference on Bridge and Structure
Management and published in Transportation Research E-Circular E-C128.
Basak Aldemir-Bektas
Center for Transportation Research and Education
Iowa State University
2711 South Loop Drive, Suite 4700
Ames, IA 50010-8664
basak@iastate.edu
Omar Smadi
Center for Transportation Research and Education
Iowa State University
2711 South Loop Drive,
Suite 4700, Ames, IA 50010-8664
smadi@iastate.edu
ABSTRACT
The National Bridge Inventory (NBI) database is an extensive source of information on
highway bridges in the United States. Among the more than 100 NBI data items, the
condition ratings for decks, superstructures, substructures, and culverts are of special
interest to bridge engineers and managers. The data for these condition ratings come from
biennial bridge inspections in the field. As a part of their bridge management programs, many states have
been collecting element-level condition data (mostly Pontis inspections) for more than 15
years. Element-level data provide more detailed condition data on sub-elements of the
aforementioned general NBI element categories. With such detailed condition data at hand,
there has been interest in developing algorithms capable of estimating the NBI condition
ratings from the Pontis element inspection data. If a sound
estimation tool could be developed, the biennial NBI inspections done for these condition
ratings would be deemed unnecessary. The NBI Translator is one of the algorithms that have
been developed to achieve that goal and also works as a built-in module within Pontis.
Recently, users of both Pontis and the translator have raised concerns about the accuracy
of this algorithm. This paper presents a literature review on bridge
management systems and bridge inspections in the United States. In addition, background on
the NBI Translator algorithm and discussions on the efficiency of the tool are provided. A
comparison study between the generated and actual values of the NBI ratings for bridges in
Iowa is also included. The paper concludes with a discussion on how to improve the
algorithm and use the translated results in a simplified network-level tool for bridge
management decision making.
Keywords: asset management—bridge management—element condition ratings—
NBI translator—NBI ratings
INTRODUCTION
In the past 40 years, there has been a shift from constructing new infrastructure to
maintaining and managing the built infrastructure in the United States. Assessment of the
deficiencies in the nation’s infrastructure gained significant importance during this period. As
the infrastructure gets older, more resources are required to maintain it at an acceptable level
of service. Since the funds eligible for maintenance and rehabilitation activities are limited,
effective resource allocation is now more necessary than ever. Agencies are required to keep
condition data on their pavements, bridges, and other infrastructure elements and justify their
reasons for decision making and funding requests.
As an important segment of the infrastructure system, bridges and their management have
also been in the spotlight for the last four decades. Unlike pavements, the failure of bridge
structures may result in disasters. Agencies in the United States learned from these incidents
and started implementing an extensive and comprehensive approach to bridge management.
The biennial National Bridge Inventory (NBI) inspection is an effort to support bridge
management and to form a basis for funding bridge improvements in the United States.
Agencies have also been collecting lower level detailed condition data for their bridge
management systems. Modeling NBI ratings from lower level element condition data has
been a topic of interest due to the significant resource savings it could facilitate (1). Several
models have been developed, but their accuracy remains in question.
BRIDGE INSPECTIONS AND BRIDGE MANAGEMENT SYSTEMS IN THE UNITED STATES
On December 15, 1967, the Silver Bridge on U.S. Highway 35 suddenly collapsed into
the Ohio River during rush hour (2). At the time of this tragic event, there were 37 vehicles
crossing the bridge, and 31 of them fell into the river. Forty-six lives were lost, and nine
people were severely injured (3). In addition to the loss of life, an important
road connecting West Virginia and Ohio was no longer in service. The catastrophe evoked
concern over the reliability of the national network of bridges in the United States.
The 1968 Federal-Aid Highway Act directed the states to collect and maintain an
inventory of Federal-aid highway system bridges. In the early 1970s, the National Bridge
Inspection Standards (NBIS) that form the basis of bridge inspection and inventory in the
United States today were developed and implemented by the Federal Highway
Administration (FHWA). This legislation guided the data collection on bridge condition all
over the nation. After the collapse of the Silver Bridge, the failure of the Mianus River
Bridge in 1983 and Schoharie Creek Bridge in 1987 were two other unfortunate events that
drew attention to the importance of keeping the nation’s bridges in sufficient condition and
keeping up-to-date condition data (4).
In general, bridges are inspected every two years, and the condition ratings are
reported to the FHWA. The inspection data are compiled by the FHWA into the NBI. After
the analysis of the data, reports on bridge conditions are prepared and submitted to
Congress. Decisions on the distribution of federal funding through programs such as the
Highway Bridge Replacement and Rehabilitation Program are based on these reports (5).
In addition to the biennial NBI inspections, many states also collect element-level bridge
condition data for their bridge management systems. With the Intermodal Surface
Transportation Efficiency Act of 1991, which required the states to develop and implement
bridge management systems, most of the states came to recognize the importance and
advantages of such systems. Although development of bridge management
systems was made optional in 1995 by the National Highway System Designation Act,
many states decided to implement bridge management systems and took action (6). Forty-
eight states were reported to be implementing a bridge management system as of September
1996 (7). Efforts to develop efficient national bridge management tools encouraged research
in the area. A research project initiated by FHWA resulted in the development of the Pontis
Bridge Management System, which later became the most popular bridge management tool in
the United States. Forty-two states reported that they considered implementing the Pontis
Bridge Management System. A few states preferred to develop their own bridge management
systems (Pennsylvania, Alabama, New York, and North Carolina). The state of Maine
implemented BRIDGIT, which was developed as a result of a National Cooperative Highway
Research Program (NCHRP) project (6).
PONTIS BRIDGE MANAGEMENT SYSTEM
As previously stated, Pontis (8) is the most popular bridge management system in the
United States; it aims to help transportation agencies make decisions
regarding the maintenance, rehabilitation, and replacement of bridge structures.
now aware that the aging highway system has considerable improvement needs; however,
funding resources are limited. Therefore, they need to make the best possible decisions for
improvement, and these decisions should be based on facts. The Pontis input data structure is
a relational database that contains complete bridge inventory and inspection data. FHWA and
American Association of State Highway and Transportation Officials (AASHTO) adopted
Commonly Recognized (CoRe) Elements for Bridge Inspection in order to standardize
element-level condition data collection within the United States. Bridges are represented by
the CoRe elements in Pontis, and the percentages of each element in its condition states are
recorded during inspection and stored in the database. For each bridge element, specific
condition states and related deterioration models were developed. Based on these detailed
element inspection data, the program tracks current conditions, simulates future conditions,
identifies bridge- and network-level needs, and makes project recommendations to gain the
maximum benefit from scarce funds.
Although Pontis has been extensively used for maintaining bridge element condition data
inventories, not all states currently use the tool for resource allocation or for identifying
future projects. Implementing a bridge management system is a major organizational change,
and it takes time to prepare the organization for such a strategic change and to customize the
implementation.

NBI CONDITION RATINGS AND BRIDGE ELEMENT CONDITION DATA

The FHWA Recording and Coding Guide for the Structure Inventory and Appraisal of the
Nation’s Bridges (Coding Guide) helps inspectors with the data collection process. States are
encouraged to use the coding guide for standardization purposes (5). The Structure Inventory
and Appraisal Sheet (SI&A) lists the NBI items necessary for inspecting individual
structures, and these items can be divided into three main categories: inventory items,
condition rating items, and appraisal rating items. The NBI condition rating for an element is
an evaluation of its current condition when compared to its new condition. In order to make
the NBI condition ratings as objective as possible, the inspectors are provided with the
general condition rating guidelines listed in Table 3-1. NBI condition rating elements are
different from bridge management system elements. Three subsystems of bridges and
culverts receive overall condition ratings in NBI inspections (5):
Item No. 58 Deck
Item No. 59 Superstructure
Item No. 60 Substructure
Item No. 62 Culverts

Table 3-1: NBI general condition rating guidelines*

Code  Description
N     NOT APPLICABLE
9     EXCELLENT CONDITION
8     VERY GOOD CONDITION (No problems noted)
7     GOOD CONDITION (Some minor problems)
6     SATISFACTORY CONDITION (Minor deterioration in structural elements)
5     FAIR CONDITION (Sound structural elements with minor section loss)
4     POOR CONDITION (Advanced section loss)
3     SERIOUS CONDITION (Affected structural elements from section loss)
2     CRITICAL CONDITION (Advanced deterioration of structural elements)
1     "IMMINENT" FAILURE CONDITION (Obvious movement affecting structural stability)
0     FAILED CONDITION (Out of service)

*Adapted from (Dunker and Rabbat 1995)



While the NBI condition ratings are assigned according to the 0-9 scale given in Table 3-
1, element-level data collected for bridge management systems are assigned on a scale of 1
to 3, 1 to 4, or 1 to 5 based on the particular element. Of 106 CoRe bridge elements, 21 CoRe
elements describe bridge decks, 35 CoRe elements describe superstructures, 20 CoRe
elements describe substructures, and 4 CoRe elements describe culverts. In addition, smart
flags are defined to describe special defects in miscellaneous bridge elements such as each
beam, column, or girder. The rest of the CoRe elements are a variety of items such as bridge
railings, joints, or bearings (1). Condition State 1 for an element is the best condition, while
condition states 3, 4, or 5 represent the worst condition for a particular element. In Table 3-2
condition state definitions of an unprotected concrete deck from Pontis element
configurations are provided as an example (9). The percentage/quantity of an element for
each defined condition state is recorded during Pontis inspections.

Table 3-2: Condition state definitions of unprotected concrete deck

Code  Description
1     No damage
2     Distress <= 2%
3     2-10% distress
4     10-25% distress
5     Distress >= 25%

The Pontis condition inspection data, with detail down to each individual element, led
agencies and experts in the field to question the redundancy of NBI inspections for the same
bridges. Pontis inspection results provide agencies with much more detailed condition data
for the aforementioned NBI items. Using data already at hand to meet other data
requirements where possible is essential, because data collection is a time- and resource-
consuming process. For the year 1986, NBI costs were estimated to be approximately $150 to
180 million (10). Although NBI data and Pontis inspection data differ in item definitions and
rating scales, researchers have been trying to translate bridge element condition data to
high-level NBI ratings to reduce the substantial cost and time spent on data collection
(11-13). Hearn et al. (11) developed an estimator model for this purpose, which was later
implemented as a software tool known as the NBI Translator or BMSNBI. The Pontis
program includes this software tool as a built-in module, and it can be used to translate
a defined set of element inspection states for specified bridges in the Pontis environment.

NBI TRANSLATOR

The NBI Translator was developed at the University of Colorado at Boulder, with the
collaboration of the Colorado Department of Transportation (DOT) (11, 13). The translator
generates condition ratings for deck (Item 58), superstructure (Item 59), substructure (Item
60), and culverts (Item 62) “by linking CoRe elements to corresponding NBI fields and
mapping bridge management system condition states to the NBI rating scale" (11). Bridge
inspection data containing both NBI ratings and element-level condition state data for
approximately 35,000 bridges were used to calibrate the NBI Translator (13).
Generation of NBI condition ratings is realized in four main steps (13). First, CoRe elements are
grouped into matching NBI fields. Then, NBI condition ratings are generated for individual
elements based on the quantities of that element in the different condition states. This
table-driven procedure is shown in Figure 3-1 (adapted from 13).

Requirements on element quantities                                        NBI Rating
P1 ≥ M1,9;  P1+P2 ≥ M2,9;  P1+P2+P3 ≥ M3,9;  P1+P2+P3+P4 ≥ M4,9               9
P1 ≥ M1,8;  P1+P2 ≥ M2,8;  P1+P2+P3 ≥ M3,8;  P1+P2+P3+P4 ≥ M4,8               8

Figure 3-1: Table for NBI generation

Hearn, Cavallin, and Frangopol (13) describe the table-driven element NBI generation as
follows: percentages of element quantities in condition states are denoted by Pi and taken
from element inspection records. Each row in Figure 3-1 checks a sum of percentages
against a minimum required sum. These minimum required sums, denoted by Mi,j, are called
mapping constants. As previously mentioned, the number and definition of condition states
differ among CoRe elements by material and use. For example, the condition states for a
steel deck are different from those for a reinforced concrete deck. Overall, 20 different maps
are required for generating NBI ratings. All four requirements for an NBI rating must be
satisfied simultaneously to assign that rating to a particular element. The calibration
process estimates these mapping constants. After assigning the NBI ratings for all elements,
NBI ratings for each item (deck, superstructure, substructure, and culverts) are calculated by
a weighted combination of element ratings. While the weights for the deck and superstructure
fields are based on relative quantity, the weights for the substructure field are based on the
number of spans. Finally, NBI condition ratings are modified based on the smart flag
condition reports. Smart flags may reduce the NBI ratings by a maximum of three points.
The objective of the calibration process is to find the mapping constants that will lead to
the minimum difference between the NBI ratings given by inspectors and the generated NBI
ratings from the element condition data.
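To make the mapping concrete, the following minimal Python sketch implements the table-driven check described above. The mapping constants shown are hypothetical placeholders for illustration, not the calibrated values from (13).

def generate_element_nbi(pcts, mapping_constants):
    """Assign an NBI rating to one element from condition-state percentages.

    pcts: percentages of element quantity in condition states 1..4
        (padded with zeros if the element has fewer states).
    mapping_constants: {candidate NBI rating j: [M1j, M2j, M3j, M4j]},
        the minimum required cumulative sums for that rating.
    """
    cumulative = [sum(pcts[:i + 1]) for i in range(4)]  # P1, P1+P2, ...
    for rating in sorted(mapping_constants, reverse=True):  # try best rating first
        if all(c >= m for c, m in zip(cumulative, mapping_constants[rating])):
            return rating
    return min(mapping_constants) - 1  # fell below every calibrated row

# Hypothetical constants for ratings 9 and 8 only:
M = {9: [95, 98, 100, 100], 8: [80, 95, 100, 100]}
print(generate_element_nbi([85, 12, 3, 0], M))  # prints 8

Here an element with 85% of its quantity in condition state 1 fails the requirements for a 9 but satisfies all four requirements for an 8.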

Discussions on the NBI Translator Algorithm

Although the PC-based version of the NBI Translator algorithm has been available since
1994, traditional NBI inspections of bridge subsystems are still being performed because the
translator results are not considered satisfactory. In some of the states that have access to the
NBI Translator through Pontis, bridge engineers reported concerns regarding the accuracy of
the tool. A recent study (14) on bridge management involving 17 state DOTs reported a
general skepticism about the estimation accuracy of the NBI Translator. Among these states,
only Oklahoma has been using the rating translator, and due to the variance of the generated
ratings, it is in the process of discontinuing its use. In another study, Scherschligt (15)
reports that the Kansas DOT evaluated NBI Translator results as an alternative to
performance measures for bridge prioritization. The coefficient of determination between
generated and actual ratings was only 25%, implying that the translator was able to explain
only 25% of the variation in the NBI ratings in the best case. The Kansas DOT decided that
the translator results were statistically insufficient and inconsistent and therefore eliminated
the NBI Translator from its candidate performance measures.
A study by Al-Wazeer, Nutakor, and Harris (1) proposes an alternative approach for NBI
generation to improve on the results of the NBI Translator. Based on data from Wisconsin and
Maryland, artificial neural network (ANN) models were developed, and results of ANN
models were statistically compared with the NBI Translator results. The statistical
comparison was based on the differences between the predicted and the actual observed NBI
ratings. NBI error ranges were defined as follows:
NBI Error = 0 (the difference between the predicted and the actual observed NBI
rating is zero)
NBI Error = 1 (the difference is equal to the absolute value of one)
NBI Error = 2 (the difference is equal to the absolute value of two)
NBI Error > 2 (the absolute value of the difference is greater than two)
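A minimal sketch of how predictions can be binned into these error classes, assuming paired lists of predicted and actual ratings:

from collections import Counter

def error_classes(predicted, actual):
    """Count |predicted - actual| occurrences in classes 0, 1, 2, and >2."""
    counts = Counter()
    for p, a in zip(predicted, actual):
        diff = abs(p - a)
        counts[diff if diff <= 2 else ">2"] += 1
    return counts

print(error_classes([7, 6, 8, 5], [7, 7, 5, 5]))  # Counter({0: 2, 1: 1, '>2': 1})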

Comparisons based on the aforementioned error ranges showed that the ANN model had
a higher estimation capability than the NBI Translator model for a particular state
when the data used for ANN training was from the same state. The superiority of the ANN
model to NBI Translator cannot be generalized, since the statistical results are valid for only
the data used in the study. However, the study drew attention to the importance of
customizing the prediction model for each state.

STATISTICAL COMPARISON OF ACTUAL AND GENERATED RATINGS FOR IOWA BRIDGES

For the state of Iowa, NBI generation from the element-level condition data was
performed using the built-in NBI Translator in Pontis software. Six hundred and eighty data
points were used for the analysis of culvert ratings, and 3,038 data points were used for the
analysis of substructure, superstructure, and deck ratings. Before using NBI Translator for
Iowa bridges, it was customized according to the element configuration of the Iowa Bridge
Management System. This customization was done by modifying the driver file,
Elements.prn, in the Pontis program folder, which defines the elements to be included in NBI
generation (13). First, the list of elements defined in the original Elements.prn file in the
program folder and Iowa elements defined in the Pontis inspection manual were compared to
find the differences. Some elements that were included in the original Elements.prn file were
not being used in the Iowa Pontis system; therefore, those elements were discarded in the
modified Elements.prn file. Some elements had different numbers in the Iowa system, and
they were also renumbered accordingly in the driver file.
The Elements.prn file contains seven fields of information. These information fields are
element ID (element number), element NBI field (e.g., deck, superstructure), element
material (e.g., unpainted steel, masonry, smart flag), element type (e.g., slab, truss bottom
chord), element dimension (e.g., each, square feet), and element name in both long and short
forms (13). There were some elements in the Iowa Pontis elements that were not defined
within the original Elements.prn file. In order to include these elements in the NBI
generation, all seven fields of information for each element were coded into the modified
Elements.prn file. The list of codes necessary for modifying the Elements.prn file is provided
by Hearn et al. (11). After all the modifications to the driver file are made, the original file in
the Pontis program folder is replaced by the modified version and used in NBI generation.
Figures 3-2 through 3-5 summarize the findings of the comparison. For each rating item,
the percentage distributions of actual and generated ratings within the data set are presented
as clustered column charts. Figure 3-2 shows that the NBI Translator estimates lower deck
ratings than the actual observed deck ratings. While 34% of actual deck ratings have values
of 8 and 9, the NBI Translator estimates no deck rating within this range.
Figure 3-3 shows the comparisons for superstructure ratings. While 19% of the actual
ratings are equal to 9, no observation equal to 9 appears in the generated ratings. The
percentages of generated 5, 6, and 8 ratings are greater than the actual case, while the
percentage of generated 7 ratings is lower than the actual case.
For the substructure ratings, once again, the NBI Translator algorithm tends to estimate
lower values than the actual assigned ratings (Figure 3-4). The percentages of 4, 5, and 6
ratings are very close for substructures. However, approximately 45% of the actual ratings
that are equal to 8 and 9 are lost in the generated ratings. For culverts, the algorithm
generates 20% more 8 ratings, 22% more 7 ratings, and 19% fewer 6 ratings than the
actual case. Once again, no 9 rating is generated by the algorithm (Figure 3-5).

[Clustered column charts: percentage of Iowa bridges by NBI rating (3-9), actual vs. generated ratings]

Figure 3-2: Comparison of actual and generated deck ratings

Figure 3-3: Comparison of actual and generated superstructure ratings

Figure 3-4: Comparison of actual and generated substructure ratings

Figure 3-5: Comparison of actual and generated culvert ratings

HOW TO IMPROVE THE TRANSLATOR

Ideally, an entirely objective system would be used to collect bridge condition data. This
would enable the development of objective, standard performance indicators for evaluating
bridge conditions that work well for all of the states. However, routine bridge inspections are
usually completed using visual inspection alone, and in this context they depend considerably
on the subjective assessments of the bridge inspectors (16). Although national and local
agencies provide guides and guidelines to assist bridge inspectors in the data collection
process and to make bridge inspections as objective as possible, the ratings retain a degree of
subjectivity. Visual inspection methods have been criticized in the literature for this
subjectivity (17-19). However, they remain the most common bridge inspection methods due
to budget constraints and the lack of convenient, feasible alternative methods.
The current NBI Translator algorithm was developed based on element-level condition
data and NBI ratings from approximately 35,000 bridges in 11 different states. This
extensive data set was used to develop a general translator algorithm that could be used in
all states regardless of location. Data from different states were pooled and used as the input
for developing this general algorithm. This approach is reasonable when the objective is to
develop a general estimator; however, it cannot capture the individual inspection practices of
different organizations. A general estimator developed from such an input structure may fail
to identify the variability that comes from the custom practices of different states. As
mentioned in an earlier section, a recent study that proposed an alternative algorithm (1)
reported a higher estimation capability when state-specific input data were used. Developing
a customized estimator based on state-specific data may result in a more efficient and
sufficient estimator and motivate agencies to use such a tool to estimate the NBI ratings,
which would eventually have a significant impact on bridge inspection costs.
The discrete nature of the NBI condition ratings makes ordinary least squares regression
unsuitable for developing an estimator in which the NBI condition ratings are the dependent
variables and element-level condition data are the independent variables. In addition, from a
statistical point of view, the structure of the potential predictor variables is complex. For
each general NBI rating category, there is a set of CoRe elements belonging to that category,
and the condition data are presented as the percentages of those elements in different
condition states. Because of these issues, there is no straightforward statistical model for
developing an alternative algorithm, but the data lend themselves to a generalized linear
model. Our current research focuses on developing an estimator based on a generalized
linear model; the main challenge is defining the most appropriate input structure for the
model. After the model is developed, the plan is to test it with data from other states and
discuss its potential as an alternative algorithm.
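As an illustration of the kind of model under consideration, the sketch below fits an ordinal (proportional-odds) logit in which the NBI rating is treated as an ordered response; the data frame, feature name, and single predictor are hypothetical, since defining the actual input structure is the open challenge noted above.

import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical training data: one row per deck inspection; the predictor is
# the percentage of deck quantity in condition state 1, and the ordered
# response is the field NBI deck rating.
df = pd.DataFrame({
    "pct_state1": [100, 95, 90, 80, 75, 60, 50, 40, 30, 20, 10, 0],
    "nbi_deck":   [8, 8, 7, 7, 6, 7, 6, 5, 6, 5, 4, 4],
})
model = OrderedModel(df["nbi_deck"], df[["pct_state1"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.params)  # slope for pct_state1 plus the fitted thresholds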

CONCLUSIONS

This research paper reviewed bridge condition data and management in the United States
and focused particularly on the estimation of NBI ratings from already-collected bridge-
element condition data. The best known algorithm for the purpose, which is also available
within the most popular bridge management software in the United States, was investigated
and evaluated using a case study for Iowa bridge data.
The results of the statistical comparison for Iowa bridges showed that the ratings
generated by the NBI Translator algorithm in its current configuration are not representative
of the actual NBI ratings. The results from this research support the previously reported
concerns about the accuracy of the translator algorithm. A model customized for Iowa could
estimate the NBI ratings more accurately; one option is to calibrate the mapping constants of
the translator algorithm with Iowa bridge data only, instead of using the constants calibrated
with data from 11 different states. An improved and more customized algorithm may yield
better estimates of NBI ratings.
ratings. As a follow-up to another study in the literature, an ANN model can be developed for
Iowa as another future research alternative. No matter what model is used, the current NBI
rating system is prone to variation from the subjectivity of inspector decisions and might be
hard to correlate to the more objective element-level condition data. Ultimately, an algorithm
that calculates a 0 to 9 rating in an objective and consistent manner might be what is needed
for improved bridge management data and decision making tools.
Whichever rating system an agency uses, the objective is to make consistent and
objective decisions regarding bridge maintenance, rehabilitation, and/or replacement. Future
research will cover the development of a simplified network-level tool that utilizes consistent,
objective data to aid decision makers and bridge managers in making resource allocation
decisions and assessing funding needs with realistic, easy-to-use models.

REFERENCES

1. Al-Wazeer, A., C. Nutakor, and B. Harris. Comparison of Neural Networks Method


Versus National Bridge Inventory Translator in Predicting Bridge Condition Ratings.
Presented at 86th Annual Meeting of the Transportation Research Board, Washington,
D.C., 2007.
2. LeRose, C. The Collapse of the Silver Bridge. West Virginia Historical Society Quarterly,
Vol. 15, No. 4, 2001. Available at www.wvculture.org/history/wvhs1504.html. Accessed
January 26, 2008.
3. A Highway Accident Report, Collapse of U.S. 35 Highway Bridge. HAR-71/01, NTIS
Number: PB-190202. National Transportation Safety Board, Point Pleasant, W.V., 1967.
Available at www.ntsb.gov/Publictn/1971/HAR7101.htm. Accessed January 26, 2008.
4. Small, E. P., T. Philbin, M. Fraher, and G. P. Romack. Current Status of Bridge Management
System Implementation in the United States. Presented at 8th Transportation Research Board
Conference on Bridge Management, Denver, Colo., 1999.
5. Dunker, K. F., and B. G. Rabbat. Assessing Infrastructure Deficiencies: The Case of
Highway Bridges. Journal of Infrastructure Systems, Vol. 1, No. 2, 1995, pp. 100–119.
6. Sanford, K. L., P. Herabat, and S. McNeil. Bridge Management and Inspection Data:
Leveraging the Data and Identifying the Gaps. Presented at 8th Transportation Research
Board Conference on Bridge Management, Denver, Colo., 1999.
7. Scheinberg, P. F. Transportation Infrastructure: States' Implementation of Transportation
Management Systems. Report Nr. GAO/T-RCED-97-79. U.S. General Accounting
Office, Washington, D.C., 1997.
8. Cambridge Systematics. A Brochure on Pontis® Bridge Management System. Available at
http://aashtoware.camsys.com/docs/brochure.pdf. Accessed December 17, 2007.
9. Pontis for Windows XP/2000®, Bridge Management System, Version 4.4.2, Build 442.
American Association of State Highway and Transportation Officials, 2005.
10. The Nation’s Public Works: Defining the Issues. National Council on Public Works
Improvement, Washington, D.C., 1986.
11. Hearn, G., D. Frangopol, M. Chakravorty, S. Myers, B. Pinkerton, and A. J. Siccardi.
Automated Generation of NBI Reporting Fields from Pontis BMS Database. In
Infrastructure Planning and Management, Committee on Facility Management and the
Committee on Urban Transportation Economics, Denver, Colo., 1993, pp. 226–230.
12. Hearn, G., J. Cavallin, and D. M. Frangopol. Generation of NBI Ratings from
Condition Reports for Commonly Recognized Elements. University of Colorado at
Boulder, Colorado Department of Transportation, Denver, and Federal Highway
Administration, 1997.
13. Hearn, G., J. Cavallin, and D. M. Frangopol. Generation of NBI Ratings from
Commonly Recognized (CoRe) Element Data. In Infrastructure Condition Assessment:
Art, Science, and Practice, 1997, pp. 41–50.

14. Hale, J. E., D. P. Hale, and S. Sharpe. Asset Management GASB 34 Compliance Phase III
(Bridges). Final Report, ALDOT Report Number 930-553R. Aging Infrastructure
Systems Center of Excellence and University Transportation Center for Alabama,
February 1, 2007.
15. Scherschligt, D. L. Pontis-Based Health Indices for Bridge Priority Evaluation. Paper
presented at 6th National Conference on Transportation Asset Management, Kansas City,
Missouri, November 1–3, 2005. Available at www.trb.org/conferences/preservation-
asset/presentations/11-2-Scherschlight.pdf. Accessed January 30, 2008.
16. Phares, B. M., G. A. Washer, D. D. Rolander, B. A. Graybeal, and M. Moore. Routine
Highway Bridge Inspection Condition Documentation Accuracy and Reliability. Journal
of Bridge Engineering, Vol. 9, No. 4, 2004, pp. 403–413.
17. Lenett, M. S., A. Griessmann, A. J. Helmicki, and A. E. Aktan. Subjective and Objective
Evaluations of Bridge Damage. In Transportation Research Record: Journal of the
Transportation Research Board, No. 1688, TRB, National Research Council,
Washington, D.C., 1999, pp. 76–86.
18. Agrawal, A. K. Reliability of Inspection Ratings of Highway Bridges in USA, UTRC
Reports, 2007. Available at www.utrc2.org/publications/assets/41/bridgereliability1.pdf.
Accessed February 2, 2008.
19. Reliability of Visual Inspection for Highway Bridges, Volume I: Final Report and Volume
II: Appendices. Techbrief, FHWA-RD-01-105. FHWA, U.S. Department of
Transportation, September 2001. Available at www.tfhrc.gov/hnr20/nde/01105.pdf.
Accessed January 28, 2008.

CHAPTER 4.
CART ALGORITHM FOR PREDICTING NBI CONDITION RATINGS

A paper to be submitted to The Journal of Infrastructure Systems (ASCE)


B. Aldemir Bektas4, A. Carriquiry5, O. Smadi6
ABSTRACT

This paper presents a new methodology to predict National Bridge Inventory (NBI)
condition ratings from bridge management system (BMS) element condition data using
Classification and Regression Trees (CART). Methodologies and use of two types of bridge
condition data collected in the United States are first discussed. The algorithms and accuracy
of predictions for FHWA’s BMSNBI (NBI Translator) and two other proposed methods from
the literature are then briefly summarized. The paper also discusses the need for and uses of
translated NBI ratings and potential problems due to the reported accuracy concerns. The
CART analyses were conducted with the bridge condition data from three states, using 2006
to 2010 data. The statistical results point to a more accurate prediction method than the
previous algorithms in the literature. Comparisons of predictions by the CART algorithm and
the BMSNBI indicated better accuracy of the CART algorithm on the same sample data sets.
The CART algorithm also achieved higher accuracies than the other proposed methods when
similar accuracy measures were compared. The methodology also provides an easy-to-use
prediction framework based on logical conditions on BMS element condition data. The
methodology does not assume expert weights for the impact of BMS elements on the
relevant NBI condition ratings, but rather defines BMS element categories as predictor
variables. Therefore, the results also reveal information about the statistical impact
of BMS element categories on the NBI condition ratings.

4 Graduate student; primary researcher and author; Department of Civil, Construction, and
Environmental Engineering, Iowa State University, Ames, IA 50011.
5 Professor; Department of Statistics, Iowa State University, Ames, IA 50011.
6 Research Scientist; Institute for Transportation and Adj. Assistant Professor; Department of
Civil, Construction, and Environmental Engineering, Iowa State University, Ames, IA 50011.

INTRODUCTION

Bridge condition data in the United States

Bridge condition information is fundamental to well-informed bridge management
decisions. Transportation agencies collect bridge condition data to assess current conditions
and identify future needs for maintenance, repair, rehabilitation, and reconstruction
activities, whichever decision-making methodology they use. These data are also used by the
federal government to evaluate national needs and to make funding allocation decisions.
The Federal Highway Administration (FHWA) implemented the National Bridge
Inspection Standards (NBIS) in the early 1970s [1]. The NBIS form the basis of bridge
inspection and bridge management in the United States today. Typically, bridges are inspected every
two years in the United States, and data on 116 NBI (National Bridge Inventory) items are
reported annually to the FHWA by the state transportation agencies. This data is compiled
into the NBI by the FHWA, and condition and performance reports based on the NBI data
are prepared and submitted to Congress [2]. Allocation of federal funding for bridges is
based on the NBI information, including condition and appraisal ratings [1].
NBI data are collected and recorded according to the guidelines in FHWA’s Recording
and Coding Guide [3]. NBI items 58 through 66 constitute the NBI condition ratings, and
four of them are especially important and more frequently utilized by the bridge
management community because they directly affect the Highway Bridge Program (HBP)
funding eligibility criteria. These NBI condition ratings are deck, superstructure,
substructure, and culvert condition ratings (items 58, 59, 60, and 62, respectively).
NBI condition ratings are assigned on a scale of zero to nine and according to the
specifications in the Recording and Coding Guide. Table 4-1 gives a summary of these
specifications.
Table 4-1: NBI general condition rating guidelines [3]

Code  Description
N     NOT APPLICABLE
9     EXCELLENT CONDITION
8     VERY GOOD CONDITION (No problems noted)
7     GOOD CONDITION (Some minor problems)
6     SATISFACTORY CONDITION (Minor deterioration in structural elements)
5     FAIR CONDITION (Sound structural elements with minor section loss)
4     POOR CONDITION (Advanced section loss)
3     SERIOUS CONDITION (Affected structural elements from section loss)
2     CRITICAL CONDITION (Advanced deterioration of structural elements)
1     "IMMINENT" FAILURE CONDITION (Obvious movement affecting structural stability)
0     FAILED CONDITION (Out of service)

In addition to NBI condition data, many states also collect element condition data to use
in their bridge managements systems (BMSs). The Intermodal Surface Transportation
Efficiency Act (ISTEA) of 1991 required states to develop and implement bridge
management systems. Although the implementation of BMSs was later made optional by the
National Highway System Designation Act of 1995, many states continued in their efforts to
implement a BMS [3]. This intent to develop and implement BMSs motivated research in the
area, and a research project initiated by FHWA resulted in the development of Pontis BMS.
Today, Pontis is the predominant BMS in the United States, used by 44 state agencies
[4]. The norm in BMS element definitions and inspections in the United States is the
American Association of State Highway and Transportation Officials’ (AASHTO’s)
Commonly Recognized (CoRe) Structural Elements Guide. The guide is intended for
national consistency in defining and inspecting BMS elements. However, it provides some
flexibility to the states to adapt the guide and element definitions to their needs and add
additional agency elements. The major BMS elements are structural elements that are sub-
elements of the three NBI elements (deck, superstructure, and substructure). They provide
more detailed condition information than the NBI condition ratings.
For each CoRe element there are several condition states that represent different stages of
deterioration, and the maximum number of condition states can be 3, 4, or 5, depending on
the particular element. Condition state 1 represents the best possible condition, while
condition states 3, 4, or 5 represent the worst. BMS element-level inspection data are
assigned as percentages of the total element quantities in these condition states. Agencies can
also define environments that indicate the severity of the external condition for an element.
Deterioration of the structures is partially affected by environmental conditions and other
operational factors such as traffic and loading conditions [5]. The environments enable
deterioration modeling specific to environmental conditions for the BMS elements. Four
standard environmental classifications designated as benign, low, moderate, and severe have
been defined to capture these effects. The deterioration models reflect more rapid
deterioration for more severe environments (e.g., a pile element in a stream or a deck
element subject to high average daily traffic). A number of structural units can also be
defined for larger multiple-span structures. Therefore, the element condition data for one
BMS element can be fragmented across several structural units and environments. BMS
elements also include a set of special elements called smart flags, which allow tracking of
distress conditions such as pack rust and deck cracking [5]. They indicate different patterns
of deterioration other than typical CoRe element deteriorations.
Table 4-2 gives an example Pontis inspection (modified from real data for illustrative
purposes) for a bridge where there are two deck, one superstructure (in two environments),
and six substructure elements. This structure has only one structural unit. BMS element
condition data represents the percentage of that element in that specific condition. For
example, for element 275 (a reinforced concrete backwall) in Table 4-2, 72% of that element
is in condition state 1 (the best condition), 25% of the element is in condition state
2, and 3% of the element is in condition state 3. The number of possible condition states is 4,
where condition state 4 represents the worst condition. The NBI condition ratings for this
structure from the same field inspection are 6 (satisfactory) for the deck and 5 (fair) for both
superstructure and substructure.

Table 4-2: An example Pontis bridge inspection data


# Env Element Name States Subsystem (Use) % in 1 % in 2 % in 3 % in 4 % in 5

22 1 P Conc Deck/Rigid Ov 5 Deck 0 100 0 0 0


359 1 Botm Deck Smart Flag 5 Deck 0 100 0 0 0
109 1 P/S Conc Beam 4 Superstructure 100 0 0 0
109 2 P/S Conc Beam 4 Superstructure 97 3 0 0
202 1 Pntd Stl H-Pile 5 Substructure 94 0 0 6 0
234 1 R/C Pier Cap 4 Substructure 99 1 0 0
234 2 R/C Pier Cap 4 Substructure 70 30 0 0
271 1 R/Conc Stub Abutment 4 Substructure 97 3 0 0
275 1 R/C Backwall w/Stub 4 Substructure 72 25 3 0
279 1 R/Conc Column 4 Substructure 62.5 37.5 0 0
300 1 Strip Seal Exp Joint 4 NA 70 30 0 0
301 1 Pourable Joint Seal 3 NA 0 100 0 0
310 1 Elastomeric Bearing 3 NA 100 0 0
313 1 Fixed Bearing 3 NA 0 100 0
321 1 R/Conc Approach Slab 4 NA 100 0 0 0
331 1 Conc Bridge Railing 4 NA 90 10 0 0

BMSNBI (NBI Translator)

Although the level of implementation varies among agencies and not many state
transportation agencies can use Pontis BMS to support decision making at the moment, all
Pontis licensees collect BMS element condition data and use the inspection module of
Pontis. Consequently, a majority of Pontis licensee states collect and manage bridge
condition data for both rating systems. Since BMS element condition data contain more
detailed information on the general NBI elements, state agencies, experts in the field, and the
FHWA questioned the redundancy of collecting NBI condition ratings. In response to this
interest, Hearn et al. [6], in collaboration with the Colorado Department of Transportation,
developed the BMSNBI (also known as the NBI Translator) software for the FHWA; the
tool maps BMS element condition data to NBI condition ratings and was calibrated with
data from 11 states.

BMSNBI generates NBI condition ratings by a four-step process [7]. First, elements are
grouped under matching NBI elements. Second, NBI ratings are assigned for each element.
This assignment is done according to the mapping process shown in Table 4-3 and is
repeated for each BMS element under each NBI element. Here, Pi represents the percentage
of element quantity in condition state i, and Mi,j is the mapping constant (the minimum
required total, in percent, of the quantities in the first i condition states for candidate rating j,
as shown on the left-hand side). For a BMS element to be assigned a specific NBI rating
(such as a 9 or an 8 in Table 4-3), all the percentage requirements represented by the
inequalities on the left-hand side must hold. So, for an element to be assigned an NBI rating
of 8, the percentage of element quantity in condition state 1 (P1) must be equal to or greater
than M1,8, the total of the percentages in the first two condition states (P1+P2) must be equal
to or greater than M2,8, and so on.

Table 4-3: NBI generation [7]

Requirements on element quantities                                        NBI Rating
P1 ≥ M1,9;  P1+P2 ≥ M2,9;  P1+P2+P3 ≥ M3,9;  P1+P2+P3+P4 ≥ M4,9               9
P1 ≥ M1,8;  P1+P2 ≥ M2,8;  P1+P2+P3 ≥ M3,8;  P1+P2+P3+P4 ≥ M4,8               8

After the NBI rating assignment for each BMS element is done, NBI ratings are
computed for each NBI element by a weighted combination of element ratings. Users can
choose between two options for the weights used in this calculation: equal weights for all
elements or weights based on relative element quantities. Finally, if there are any smart
flags, the NBI ratings are modified accordingly; because smart flags describe special defects
in miscellaneous bridge elements, this modification is a reduction in the NBI rating.
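A minimal sketch of this roll-up step, under the simplifying assumptions noted in the comments; the ratings, weights, and reduction shown are hypothetical, and the actual weighting and smart-flag rules are those defined for the BMSNBI [7].

def field_nbi(element_ratings, quantities, smart_flag_reduction=0):
    """Weighted combination of element NBI ratings for one NBI field.

    element_ratings: per-element NBI ratings already assigned by the mapping.
    quantities: element quantities used as relative weights (pass equal
        values to reproduce the equal-weights option).
    smart_flag_reduction: points to subtract, capped at three per the text.
    """
    total = sum(quantities)
    weighted = sum(r * q for r, q in zip(element_ratings, quantities)) / total
    rating = round(weighted) - min(smart_flag_reduction, 3)
    return max(rating, 0)  # NBI ratings are bounded below by zero

# Two superstructure elements, the larger one in better condition,
# with a one-point smart-flag reduction applied:
print(field_nbi([7, 5], [400, 100], smart_flag_reduction=1))  # prints 6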

The BMSNBI has been available as a built-in module within Pontis BMS since 1994,
and FHWA accepts translated NBI ratings instead of field NBI ratings for the annual NBI
submissions. However, only three states use BMSNBI for their NBI data submissions. In
2007, Hale et al. reported general skepticism among the states regarding the accuracy of the
BMSNBI algorithm [7]. The study involved 17 state departments of transportation (DOTs),
and the states reported that they are not comfortable using BMSNBI results for their NBI
submissions. Among these 17 states, Oklahoma had been using generated ratings for
submissions; however, it was in the process of discontinuing this practice at the time of the
study due to the variance between generated and field ratings. In another comparison study
between generated and field ratings, using data from Kansas bridges [8], the Kansas
DOT decided that the generated ratings were statistically insufficient and inconsistent,
considering that the coefficient of determination between generated and field ratings was
only 25% in the best case.
A recent comparison of the field and translated ratings was done for Iowa bridges [9].
When the number of bridges in different NBI rating categories was compared, the overall
condition of the network for the same NBI rating categories was different for actual and
generated ratings. Figure 4-1 is a graph from this study that shows this comparison for the
superstructure ratings. The BMSNBI is conservative in assigning NBI ratings and thus tends
to assign lower ratings. The translated ratings were not representative of the bridge network
condition based on the actual ratings, and the results supported the concerns about the
efficiency of the algorithm that have been previously reported.
A recent national survey conducted by Iowa State University to assess the impact of BMS
implementation on decision making at both the state and national levels also confirms the
limited use of the BMSNBI algorithm by the states. At present, only three states report
translated ratings to the FHWA for the annual NBI data submissions, and those states have
stopped collecting NBI condition ratings.

[Clustered column chart: percentage of Iowa bridges by NBI rating (3-9), actual vs. generated superstructure ratings]

Figure 4-1: Comparison of actual and generated (translated) superstructure ratings

The BMSNBI algorithm is not used only for NBI submissions; it has two other major uses.
The first is within Pontis BMS modeling framework (Pontis 4.x). The algorithm is used to
develop performance measures for forecasted future conditions, so the translated ratings
affect the simulation results [10]. Therefore, even though the majority of states do not use the
BMSNBI to submit NBI ratings, they are indirectly using its results as part of Pontis program
simulation. The second major use is within the National Bridge Investment Analysis
(NBIAS) software tool, which is used by FHWA to forecast bridge needs for the “Condition
and Performance” reports prepared for Congress [11]. Therefore, the problems in the
algorithm affect not only the NBI submissions, but also Pontis BMS simulations and
forecasted bridge needs reported to Congress.

Alternative Algorithms for NBI Condition Rating Prediction

The noted problems and lack of confidence in the BMSNBI algorithm prompted research
to develop alternative tools. Two recent studies from the literature, dated 2007 and 2008,
proposed alternative translators. The first was an Artificial Neural Networks (ANN) model
[12]. This study utilized data from the states of Wisconsin and Maryland. In the analyses,
when the training data for the ANN model and the data used for predictions were from the
same state, the ANN model was reported to perform better than the BMSNBI. The
comparison was done by looking at the percentages of predictions in four classes based on
the differences between the predicted and actual NBI ratings. This study achieved some
improvement in prediction accuracy with respect to the BMSNBI for the sample data set.
The authors, however, noted that with data from only two states, their results could not be
generalized as superior to the BMSNBI. In this ANN model, a single input vector contained
all element-level data, and the target output was the set of three NBI ratings. This modeling
approach does not provide an understanding of, or explore, the analytical relationship
between an NBI element and its matching BMS elements [13].
The second study from the literature was published in 2008 [13] and proposed another
alternative tool called the NewTranslator. The modeling methodology for the NewTranslator
is similar to that of the Bridge Health Index7. The tool is a proposed index calculation rather
than an estimator. The first step in the calculation is computing condition indices on a
zero-to-one scale for individual BMS elements. An NBI rating is then assigned to these
elements, assuming an index value of zero equals an NBI rating of 3 and an index value of
one equals an NBI rating of 9, with a linear relationship in between. NBI ratings for an NBI
element are calculated from the individually assigned NBI ratings of elements and element
weights based on expert opinion. Statistical analysis showed that the ratings from the
BMSNBI and the NewTranslator are not precisely the same and that the NewTranslator is
more accurate in assigning NBI ratings in the higher range. However, it was also reported
that the NewTranslator tends to indicate high NBI ratings for bridges with poor element
condition data.
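The index-to-rating step lends itself to a one-line formula; the sketch below simply reproduces the linear mapping described above (index 0 to NBI 3, index 1 to NBI 9).

def index_to_nbi(condition_index):
    """Map a 0-1 element condition index linearly onto the NBI 3-9 range."""
    return 3 + 6 * condition_index

print(index_to_nbi(0.0))  # 3.0
print(index_to_nbi(0.5))  # 6.0
print(index_to_nbi(1.0))  # 9.0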
Sobanjo et al. provided graphs to show the variation in the accuracy of the ratings
translated by their method [13]. The average absolute error (NewTranslator rating minus
field rating) was plotted against the field NBI rating classes. The average absolute error for
decks (as read from the graph) was close to 0.8 for NBI rating classes 7 and 8,
approximately 0.5 for NBI rating class 9, and above 1 for all other NBI rating classes. For
superstructures, the average absolute error was smaller than 1 only when the field NBI
superstructure rating was 9. For substructures, the average absolute error was smaller than 1
only when the field ratings were 8 and 9: close to 0.9 for NBI rating class 8 and almost zero
for NBI rating class 9.

7 The Bridge Health Index is a single number (from 0-100) that reflects the condition distribution of
the different elements on a structure [5]. This index reflects a weighted condition distribution of elements, and
the weights for elements are either expert assignments or element failure costs.
Sobanjo et al. [13] presented the method as promising due to its improved accuracy in
the higher range of NBI ratings, a known weakness of the BMSNBI. However, the fact that
the method assigns higher NBI ratings in the low NBI range for bridges with bad element
condition data was also noted as a problem. Poor conditions, especially in old structures, are
critical, and such information is too valuable to be overlooked. Also, since the method is a
computational index, the statistical relationship between specific BMS elements and field
NBI condition ratings is not addressed.
To address the general issues with NBI ratings, in 2010 the AASHTO Subcommittee on
Bridges and Structures approved a new element-level bridge inspection manual. The new
AASHTO Bridge Element Inspection Manual [14] replaces the AASHTO Guide to
Commonly Recognized Structural Elements. This new manual provides two sets of bridge
elements: the National Bridge Elements (NBE) and the Bridge Management Elements
(BME). The NBEs represent the primary structural components of bridges and are proposed
as a refinement of the deck, superstructure, substructure, and culvert condition ratings
defined in the FHWA’s Recording and Coding Guide for the Structure Inventory and
Appraisal of the Nation's Bridges [14]. The intention with the introduction of the NBEs is to
eventually replace the NBI condition inspections with NBE condition inspections to provide
a more detailed and objective condition assessment of the nation’s bridges. However, the
time for such a possible transition is not certain yet. Continuing use of the two different
bridge condition inspections still indicates a potential need to accurately predict NBI
condition ratings from BMS element condition data.

OBJECTIVE

The objective of this study is to develop a statistical model to predict NBI deck,
superstructure, and substructure condition ratings based on BMS element condition data.

METHODOLOGY

Classification and Regression Trees

Classification and Regression Trees (CART) is a statistical technique used for predicting
continuous dependent variables or categorical variables (classes) from one or more
continuous and/or categorical predictor variables [15]. The analysis essentially builds a tree
of logical conditions based on the predictor variables and is also commonly known as
recursive partitioning.
The prediction models in CART are obtained by recursively partitioning (splitting) the
data space by one predictor variable at a time and fitting a simple model to each partition.
While the classification trees are designed for categorical dependent (response) variables that
take a finite number of discrete values, regression trees are for continuous dependent
variables [16]. The trees are designed to minimize the expected error between the
observations and predictions for the dependent variables. Figure 4-2 presents an example
classification tree from the study.
For the CART analyses in this paper, SAS JMP Statistical Software Partition Platform
was used. In the SAS JMP platform, the splits are determined by maximizing a LogWorth
statistic that is related to the likelihood ratio chi-square statistic (reported as “G^2” in the
platform), which involves the ratios between the observed and expected frequencies [15] of
the dependent variable. When the response variable is categorical (e.g., field NBI condition
rating), the response rates become the fitted value. The most significant split (along the
ranges of continuous independent variables) can be determined by the largest likelihood ratio
chi-squared statistic. The split is chosen to maximize the difference in the responses between
the two groups after the split. For the CART models in this study, the categorical dependent
variables are the field NBI condition ratings, while the independent (predictor) variables are
the BMS element quantities in different condition states, presented as a percentage of the
total quantity (e.g., XPCTSTATE#).
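As a small numerical illustration of the statistic involved, the sketch below computes the likelihood-ratio chi-square (G^2) for one hypothetical two-way split with two response classes; actual analyses involve up to ten NBI rating classes.

import numpy as np

# Rows: the two groups created by a candidate split; columns: response classes.
observed = np.array([[30.0, 10.0],
                     [5.0, 25.0]])
row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
expected = row_totals * col_totals / observed.sum()  # independence model
g2 = 2.0 * np.sum(observed * np.log(observed / expected))
print(g2)  # a larger G^2 indicates a more significant split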

Figure 4-2: An example classification tree



The initial letters A, B, or C before the PCTSTATE# denote different types of BMS
elements (e.g., in a substructure, element A can be a column, element B can be an abutment,
and element C can be an abutment cap).
Splitting can continue until little predictive ability is gained by further splitting [17], as
judged by comparing column contributions or coefficient of determination (R^2) values;
where to end the splitting is a user decision. Recursive splits along the range of the same
independent variable indicate a trend rather than a different clustering within the data.
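For readers outside the SAS JMP environment, a minimal analogous sketch using scikit-learn is shown below; note that DecisionTreeClassifier splits on Gini impurity rather than the LogWorth criterion used in this study, and the feature names and data are hypothetical, following the PCTSTATE# convention.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical observations for one BMS element: percent of element quantity
# in condition states 1 and 2; the class label is the field NBI rating.
X = np.array([[100, 0], [95, 5], [90, 10], [75, 25],
              [60, 35], [50, 40], [30, 50], [10, 60]])
y = np.array([8, 7, 7, 6, 6, 5, 5, 4])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["APCTSTATE1", "APCTSTATE2"]))
print(tree.predict([[85, 15]]))        # most likely NBI rating class
print(tree.predict_proba([[85, 15]]))  # class probabilities, as in Figure 4-3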
The initial cluster in Figure 4-2 has 5,885 observations, and the first split is based on the
first condition state of element C. The best initial split, where CPCTSTATE = 91.67, divides
the observation set into two clusters. The marked cluster with 1,883 observations, as shown in
Figure 4-3 is one of the five ending clusters after four splits. For each end cluster (leaf), the
predicted probabilities for the observations to be in a specific dependent variable class are
reported. The largest predicted probability designates the class (dependent variable)
prediction for a specific cluster. The predicted probabilities for this cluster suggest that NBI
condition rating 7 is the most likely class prediction.

Figure 4-3: An example end leaf and predicted probabilities for a classification tree

Description of the Data Set

Sample BMS element condition data and NBI condition ratings from three state DOTs
(Montana, Iowa, and Kansas) were used for the CART analyses in this study. Deck condition
data could be acquired for only two state DOTs, while superstructure and substructure
condition data were available for all three. The results of the analyses are not identified by
the states. These states are notated as State DOT A, B, and C anonymously within this paper.
None of these three DOTs uses translated ratings for agency purposes or reports them to
the FHWA. However, for State DOTs A and B, the “Elements.prn” [7] file, which is
necessary to use the BMSNBI algorithm, was updated, and translated ratings were obtained
for a smaller set of observations for comparison purposes.
The BMS implementation in State DOT A is at its earlier stages. State DOT A does not
use BMS element condition data for condition assessment, and neither does it use the BMS
recommendations to determine bridge work candidates at present. State DOT A does not
have a quality assurance (QA) process for the BMS element condition data. Minimal in-
house training for bridge inspectors is offered by the state.
State DOTs B and C are at similar stages of BMS implementation. Both states use BMS
element condition data for assessing their bridge conditions, and they make use of BMS
decision support capabilities while they select bridge work candidates. They have QA
reviews for BMS element condition data. State DOT B provides in-house training for
bridge inspectors each year, while State DOT C provides similar training every two years.
State DOTs B and C have concurrent NBI and BMS condition data, while State DOT A
has only recently started doing concurrent inspections. Therefore, the data subsets for State
DOT A for simultaneous BMS element-level and NBI condition inspections were combined
from different data sources.

Modeling Approach

For the CART models in this study, the NBI condition rating classes from zero to nine
were defined as the dependent categorical variables. The independent variables were
assigned as the percentages of the related BMS element quantities in the condition states.
These percentages of total quantities in a condition state are notated as PCTSTATE# in the
analyses, where # designates the specific condition state. The number of possible condition
states for BMS elements varies and can be 3, 4, or 5, depending on the type of element. The
quantity of the BMS elements was included in the analyses as another predictor variable.
Different combinations and numbers of BMS elements represent the NBI deck, substructure,
and superstructure elements, depending on the structure design.

Deck
NBI deck elements are typically represented by a single BMS deck or slab element, which
can be in one of five condition states as a whole. Bridge railings and deck joints are also part
of the BMS data inventory but are not to be considered in the overall NBI deck evaluation
[3] and were thus not included in the analyses. Therefore, the deck observations in the
data set are represented by five cases: 100% of the element in one of the condition states
from 1 to 5.

Superstructure
Depending on the design and length of the bridges, NBI superstructure elements were
represented by up to 10 BMS elements in the data sets from State DOTs A, B, and C;
however, a majority of the observations had up to three major BMS elements. The number of
inspections and the number of BMS superstructure elements in these observations are given
in Table 4-4.
Table 4-4: Number of BMS elements in the sample superstructure observations by State DOT

Number of BMS        State DOT A           State DOT B           State DOT C
elements in the      # insp.    % insp.    # insp.    % insp.    # insp.    % insp.
superstructure
1                    5121       94.41%*    2241       46.26%*    7334       81.46%*
2                    190        3.51%      1179       24.34%*    848        9.42%*
3                    113        2.08%      1139       23.51%*    235        2.61%
4                                          150        3.10%      274        3.04%
5                                          99         2.04%      214        2.38%
6                                          24         0.50%      86         0.96%
7                                          8          0.17%      12         0.13%
8                                          0          0.00%      0          0.00%
9                                          0          0.00%      0          0.00%
10                                         4          0.08%
Total                5424                  4844                  9003

*Used for the CART models
A majority of the observations from State DOT A had one BMS element in the
superstructure, either a girder or a beam. In addition to a single beam or girder, State DOTs B
and C had a number of observations where a bearing element complemented the girder or
beam element. The condition of bearings is not to be considered when assigning NBI
condition ratings except under extreme conditions. Regardless, bearing condition was kept as
a predictor variable in the analysis when available, to see whether it had any significant
statistical effect on NBI superstructure ratings.
More than 23% of the State DOT B superstructure observations had three superstructure
BMS elements, typically a beam or a girder accompanied by two different bearing elements.
Different classification trees were fit for each of these groups, since the contributing BMS
elements (the predictor variables) were different.
All superstructure elements were in four condition states, with the exception of painted
steel elements. The AASHTO CoRe element condition state definitions for painted steel
elements were used in all three states. Since the definitions of condition states 2 and 3
together represent a close definition of typical condition state 2 for unpainted steel elements,
the percentages for painted steel elements were adjusted as shown in Figure 4-4.

PCTSTATE1             -> PCTSTATE1
PCTSTATE2 + PCTSTATE3 -> PCTSTATE2
PCTSTATE4             -> PCTSTATE3
PCTSTATE5             -> PCTSTATE4

Figure 4-4: Adjustment of condition state quantities for painted steel elements
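A minimal sketch of this adjustment, assuming the five condition-state percentages of a painted steel element as input:

def adjust_painted_steel(pcts):
    """Collapse five painted-steel condition states to four (merge states 2 and 3)."""
    p1, p2, p3, p4, p5 = pcts
    return [p1, p2 + p3, p4, p5]

print(adjust_painted_steel([80, 10, 5, 5, 0]))  # [80, 15, 5, 0]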

Substructure
The combination of BMS element types that represented the NBI substructure
elements across the three State DOTs also varied. Three typical combinations that
represented NBI substructures were as follows:
A single abutment element
A combination of one span support element (e.g., a wall, column, pier, or pile) with an abutment element
A cap element, complementing one span support element and an abutment

The number of inspections and the number of BMS substructure elements in these
observations are given in Table 4-5.
Table 4-5: Number of BMS elements in the sample substructure observations by state

Number of BMS        State DOT A           State DOT B           State DOT C
elements             # insp.    % insp.    # insp.    % insp.    # insp.    % insp.
1                    300        7.56%*     338        4.26%*     2162       21.91%*
2                    162        4.08%*     4169       52.59%*    962        9.75%*
3                    1992       50.20%*    3123       39.40%*    5887       59.66%*
4                    1514       38.16%*    273        3.44%      661        6.70%
5                                          19         0.24%      165        1.67%
6                                          5          0.06%      25         0.25%
7                                                                5          0.05%
Total                3968                  7927                  9867

*Used for the CART models

RESULTS AND DISCUSSION

This section reports the results using a series of tables and figures. The notation used in these tables and figures is as follows:
- Field deck, field super, or field sub: NBI condition ratings assigned by bridge inspectors in the field for deck, superstructure, or substructure
- CART deck, CART super, or CART sub: NBI condition ratings for deck, superstructure, or substructure predicted by the CART model
- Error (CART deck/CART super/CART sub): Error in prediction, calculated by subtracting the field NBI condition rating from the predicted value (e.g., for decks: CART deck - field deck)
- BMSNBI deck, BMSNBI super, or BMSNBI sub: NBI condition ratings predicted by the BMSNBI (available for subsets of the data, for State DOTs A and B)
- Error (BMSNBI deck/super/sub): Error in prediction by the BMSNBI, calculated by subtracting the field NBI condition rating from the BMSNBI prediction (e.g., BMSNBI deck - field deck)
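As a concrete illustration of these definitions, the sketch below computes the error terms and the two accuracy measures reported throughout this section (exact matches and predictions within one error term). The data and function name are hypothetical.

```python
import numpy as np

def accuracy_summary(field, predicted):
    # Error term as defined above: predicted rating minus field rating.
    error = np.asarray(predicted) - np.asarray(field)
    exact = np.mean(error == 0)               # exact matches
    within_one = np.mean(np.abs(error) <= 1)  # within one error term
    return exact, within_one

exact, within_one = accuracy_summary(field=[7, 6, 8, 5, 7], predicted=[7, 7, 7, 5, 6])
print(f"exact: {exact:.0%}, within one error term: {within_one:.0%}")
```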

Deck

NBI deck elements are represented by only one BMS element, and a deck BMS element is recorded in one of the five condition states as a whole element. Therefore, the element condition observations for decks can only be one of the five cases in which a single PCTSTATE# variable equals 100%. Given five possible types of observations, the deck observations can be partitioned into at most five clusters (leaves).
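A minimal sketch of this setup is shown below, using scikit-learn's DecisionTreeClassifier as a stand-in for the SAS JMP partition platform used in the study; the training rows simply enumerate the five possible deck observations, and the ratings follow the State DOT A leaf assignments described next.

```python
from sklearn.tree import DecisionTreeClassifier

# Each deck observation is a single BMS element wholly in one condition
# state, so exactly one PCTSTATE column is 100 and the others are 0.
X = [[100, 0, 0, 0, 0],   # columns: PCTSTATE1 ... PCTSTATE5
     [0, 100, 0, 0, 0],
     [0, 0, 100, 0, 0],
     [0, 0, 0, 100, 0],
     [0, 0, 0, 0, 100]]
y = [8, 6, 5, 4, 4]       # NBI ratings per leaf; states 4 and 5 share one leaf

tree = DecisionTreeClassifier(max_leaf_nodes=5).fit(X, y)
print(tree.predict([[0, 0, 100, 0, 0]]))  # deck wholly in state 3 -> [5]
```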

State DOT A
Table 4-6 gives the count and percentage of the errors in predicting NBI deck ratings by error class for State DOT A. Approximately 40% of the predicted NBI condition ratings matched the field NBI condition ratings, and 92.5% of the predictions were within one error term; 25% of the predictions were one NBI class above, and almost 28% were one NBI class below. Deck BMS element observations in condition states 4 and 5 were partitioned to one leaf and predicted as NBI condition rating 4. Deck BMS element observations in condition state 3 were assigned NBI condition rating 5. Deck BMS element observations in condition state 2 were predicted as NBI condition rating 6; however, the predicted probabilities for NBI condition ratings 6 and 7 were close: 0.384 and 0.372, respectively. The predicted probabilities for NBI condition ratings 7 (0.370) and 8 (0.374) were even closer for deck BMS element observations in condition state 1.
Table 4-6: Accuracy of predictions by the distribution of error terms (Deck, DOT A)

Error (CART deck)    count        %
-3                       7     0.22%
-2                      89     2.77%
-1                     894    27.78%
 0                    1277    39.68%
 1                     806    25.05%
 2                     118     3.67%
 3                      21     0.65%
 4                       6     0.19%
Total                 3218
(|error| <= 1: 92.51%; |error| <= 2: 98.94%)

Figure 4-5 compares the field and predicted NBI ratings by NBI condition class. Because the deck BMS element observations could only be partitioned into four NBI rating classes, the distributions of the field and predicted NBI ratings by class for the State DOT A deck data are not similar. Deck BMS element observations in condition states 4 and 5 were partitioned to one leaf and predicted as NBI condition rating 4, so the remaining observations (deck elements in condition states 1, 2, and 3) could be matched to only three NBI condition rating classes (5, 6, and 8).


Figure 4-5: Comparison of field and CART ratings by NBI condition class (Deck, DOT A)

Table 4-7 compares the errors in predictions from the CART algorithm and the BMSNBI for a smaller available set of observations, while Figure 4-6 gives the distribution of observations by method and NBI class. Exact predictions and predictions within one error term are both better for the CART results. However, CART assigns all of these observations to NBI condition rating 8 and the BMSNBI assigns them to NBI condition rating 7; hence, neither method provides a good overall picture of the NBI deck field ratings.
Table 4-7: Comparison of the predictions by two methods (Deck, DOT A)

         count                    % of predictions
Error    CART    BMSNBI        CART        BMSNBI
-2          0       118        0.00%       25.88%
-1        118       187       25.88%       41.01%
 0        187       135       41.01%       29.61%
 1        135        15       29.61%        3.29%
 2         15         1        3.29%        0.22%
 3          1         0        0.22%        0.00%
Total     456       456
(|error| <= 1: CART 96.49%, BMSNBI 73.90%)

Figure 4-6: Comparison of field, CART and BMSNBI ratings by NBI condition class (Deck, DOT A)

The same set of figures and tables with similar content is given in this section for the State DOT B deck analysis and then for the superstructure and substructure rating analyses of the three states. Several SAS JMP partition reports (a view of the classification tree, the number of splits, the coefficient of determination (R²), and the column contributions) are provided in Appendix B.
State DOT B
The CART deck predictions for State DOT B had overall better accuracies than for State
DOT A. 63% of the predictions were exact matches, and almost all observations were
predicted within one error term (Table 4-8).
Table 4-8: Accuracy of predictions by the distribution of error terms (Deck, DOT B)

Error (CART deck)    count        %
-3                       3     0.04%
-2                       3     0.04%
-1                    2496    32.97%
 0                    4765    62.95%
 1                     284     3.75%
 2                      12     0.16%
 3                       6     0.08%
 4                       1     0.01%
Total                 7570
(|error| <= 1: 99.67%)

As with the State DOT A deck observations, deck BMS element observations were clustered into four leaves. Only 21 of the 7,570 observations were in condition state 5, and these were predicted as NBI condition rating 3. BMS element condition states 4 and 3 were predicted as NBI condition ratings 5 and 6, respectively. Condition states 1 and 2 were clustered into one leaf, and the predicted NBI condition rating class for both condition states was 7. Figure 4-7 gives the distributions of field and predicted NBI ratings by NBI condition class. The CART analysis cannot distinguish between NBI ratings 7 and 8, but the numbers of predictions for NBI condition classes 5 and 6 are closer to the numbers of field ratings in those classes.

Figure 4-7: Comparison of field and CART ratings by NBI condition class (Deck, DOT B)

For 85% of the observations in the previous analysis, BMSNBI predictions were also available. Table 4-9 compares the errors from both methods in predicting NBI condition ratings. The percentage of exact matches is higher for the CART predictions by about 16 percentage points.
The BMSNBI overestimates the number of bridges with NBI condition ratings 5 and 6 (Figure 4-8), while CART underestimates it. However, for NBI condition rating class 7, the CART predictions outnumber both the BMSNBI predictions and the field ratings. Neither algorithm predicts NBI condition class 8, which is the second largest cluster in the field ratings.
Table 4-9: Comparison of the predictions by two methods (Deck, DOT B)

         count                    % of predictions
Error    CART    BMSNBI        CART        BMSNBI
-5          0         1        0.00%        0.02%
-4          0         6        0.00%        0.09%
-3          3        70        0.05%        1.08%
-2          3       175        0.05%        2.70%
-1       2169      2884       33.51%       44.56%
 0       4059      3042       62.72%       47.00%
 1        229       200        3.54%        3.09%
 2          9        78        0.14%        1.21%
 3          0        13        0.00%        0.20%
 4          0         3        0.00%        0.05%
Total    6472      6472
(|error| <= 1: CART 99.77%, BMSNBI 94.65%)

Figure 4-8: Comparison of field, CART and BMSNBI ratings by NBI condition class (Deck, DOT B)

Superstructure

State DOT A
Since the superstructures of the majority of the bridges in State DOT A were represented by one BMS element (a beam or a girder), CART analyses were done for the 2,533 single BMS element condition observations with matching NBI condition inspections. Unlike deck BMS elements, superstructure BMS elements can be distributed across several condition states. Therefore, the PCTSTATE# variables can be equal to or smaller than 100% and are numeric continuous variables.
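The sketch below illustrates the difference: with continuous PCTSTATE# predictors, the fitted tree consists of threshold conditions on the percentages rather than one leaf per condition state. The data are hypothetical, and the scikit-learn classifier is a stand-in for the JMP partition platform.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical superstructure observations: the element quantity is spread
# across condition states, so the PCTSTATE# predictors are continuous (0-100).
X = np.array([[100,  0,  0, 0],
              [ 92,  8,  0, 0],
              [ 60, 30, 10, 0],
              [ 20, 50, 25, 5]])   # PCTSTATE1 ... PCTSTATE4
y = np.array([8, 7, 6, 5])         # hypothetical field NBI superstructure ratings

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# The fitted rules are threshold splits such as "PCTSTATE1 <= 76.0":
print(export_text(tree, feature_names=["PCTSTATE1", "PCTSTATE2", "PCTSTATE3", "PCTSTATE4"]))
```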
The predictions from CART matched the field NBI rating class for 48% of the observations, and 91% of all predictions were within one error term (Table 4-10). The predicted NBI rating classes were 5, 6, 7, and 8, and NBI rating 5 was predicted for only 7 observations. While there is a significant number of field NBI rating observations in class 9 (Figure 4-9), no class 9 predictions were made.

Table 4-10: Accuracy of predictions by the distribution of error terms (Superstructure, DOT A)

Error (CART super)   count        %
-2                       8     0.32%
-1                     587    23.17%
 0                    1207    47.65%
 1                     505    19.94%
 2                     190     7.50%
 3                      31     1.22%
 4                       4     0.16%
 5                       1     0.04%
Total                 2533   100.00%
(|error| <= 1: 90.76%)


Figure 4-9: Comparison of field and CART ratings by NBI condition class (Superstructure, DOT A)

For a smaller set of 1,179 observations, the predictions from CART and the BMSNBI were compared (Table 4-11). The exact matches and predictions within one error term were close for both methods but slightly better for CART. The BMSNBI was more accurate for the lower NBI rating classes of 4 and 5 (Figure 4-10). The BMSNBI overestimated NBI rating class 6, while CART underestimated the same class; the reverse was observed for NBI rating class 7, and both methods overestimated NBI class 8. Neither method predicted NBI rating class 9, although more than 200 such observations were in the data set.
Table 4-11: Comparison of the predictions by two methods (Superstructure, DOT A)

         count                    % of predictions
Error    CART    BMSNBI        CART        BMSNBI
-4          0         1        0.00%        0.08%
-3          0        18        0.00%        1.53%
-2          2        59        0.17%        5.00%
-1        321       370       27.23%       31.38%
 0        539       532       45.72%       45.12%
 1        231       168       19.59%       14.25%
 2         73        29        6.19%        2.46%
 3         13         2        1.10%        0.17%
Total    1179      1179
(|error| <= 1: CART 92.54%, BMSNBI 90.75%)


Figure 4-10: Comparison of field, CART and BMSNBI ratings by NBI class (Superstructure, DOT A)

State DOT B
Half of the observations for State DOT B were single BMS elements (a beam or a girder) (Table 4-4); for the rest of the observations, the beam or girder element was accompanied by either one or two additional bearing elements. While the first condition state of the beam/girder elements was the major contributor to the splitting in the algorithm, the bearing elements also contributed. The percentage of exact matches in the predictions was much higher than for State DOT A: almost 80% of the predictions were exact matches, and most (98%) of the predictions were within one error term (Table 4-12). The higher NBI ratings of 7 and 8 were slightly overestimated, while the lower ratings were slightly underestimated. However, the overall distributions of field and predicted ratings by rating class were quite similar (Figure 4-11).

Table 4-12: Accuracy of predictions by the distribution of error terms (Superstructure, State DOT B)

Error (CART super)   count        %
-4                       2     0.04%
-3                       3     0.07%
-2                       1     0.02%
-1                     226     4.96%
 0                    3639    79.82%
 1                     589    12.92%
 2                      73     1.60%
 3                      26     0.57%
Total                 4559
(|error| <= 1: 97.70%)


Figure 4-11: Comparison of field and CART ratings by NBI condition class (Superstructure, DOT B)

For 1,715 observations, field and predicted ratings by both methods were compared (Table 4-13). The percentage of exact matches from the CART algorithm was 82%, similar to the results from the main data set. However, exact matches from the BMSNBI were only 27%. Predictions from the CART algorithm also had higher accuracy within one error term (98% versus 78%).
Table 4-13: Comparison of the predictions by two methods (Superstructure, State DOT B)

         count                    % of predictions
Error    CART    BMSNBI        CART        BMSNBI
-4          1         0        0.06%        0.00%
-3          2         1        0.12%        0.06%
-2          0       359        0.00%       20.93%
-1         77       807        4.49%       47.06%
 0       1406       471       81.98%       27.46%
 1        199        68       11.60%        3.97%
 2         24         8        1.40%        0.47%
 3          6         1        0.35%        0.06%
Total    1715      1715
(|error| <= 1: CART 98.08%, BMSNBI 78.48%)

When the predictions from both methods are compared with the field ratings by NBI rating class (Figure 4-12), the predictions from the CART algorithm closely follow the distribution of the field ratings. The BMSNBI overestimates NBI rating classes 5 and 6 and underestimates classes 7 and 8 by marked differences compared to the CART algorithm.

Figure 4-12: Comparison of field, CART and BMSNBI ratings by NBI class (Superstructure, DOT B)

State DOT C
81% of the observations from State DOT C were single BMS elements, and 9% of the
observations had an additional bearing element. 59% of the predictions by the CART
algorithm were exact matches, and 98% of the observations were predicted within one error
term (Table 4-14).
Table 4-14: Accuracy of predictions by the distribution of error terms (Superstructure, State DOT C)

Error (CART super)   count        %
-3                       1     0.01%
-2                      54     0.66%
-1                    2144    26.26%
 0                    4818    59.00%
 1                    1046    12.81%
 2                      63     0.77%
 3                      12     0.15%
 4                      10     0.12%
 5                       7     0.09%
 6                       6     0.07%
 7                       5     0.06%
Total                 8166   100.00%
(|error| <= 1: 98.07%)

The predictions for NBI classes 4 and 6 were closer in number to the field ratings in these
classes (Figure 4-13). However, NBI class 5 was underestimated and class 7 was
overestimated. There were 1,640 field ratings for NBI class 8, but no predictions.


Figure 4-13: Comparison of field and CART ratings by NBI condition class (Superstructure, DOT C)

Substructure

State DOT A
There were three BMS elements per substructure for 50% of the observations and four BMS elements for 38% (Table 4-5). CART analysis was done separately for the subsets of substructures having one, two, three, or four BMS elements. 55% of the predictions were exact matches, and 94% of the predictions were within one error term (Table 4-15).

Table 4-15: Accuracy of predictions by the distribution of error terms (Substructure, State DOT A)

Error (CART sub)     count        %
-3                       4     0.10%
-2                      20     0.50%
-1                     703    17.72%
 0                    2170    54.70%
 1                     863    21.75%
 2                     161     4.06%
 3                      40     1.01%
 4                       5     0.13%
 7                       1     0.03%
Total                 3967   100.00%
(|error| <= 1: 94.18%)

The CART analysis predicted NBI rating classes 5, 6, 7, 8, and 9 only (Figure 4-14). The predictions for classes 6 and 9 were fewer than the field ratings, while there were more predictions than field ratings for classes 7 and 8.


Figure 4-14: Comparison of field and CART ratings by NBI condition class (Substructure, DOT A)

For the available 1,939 observations, predictions from the BMSNBI and CART algorithms were compared (Table 4-16). The CART predictions were higher in number both for exact matches (54% versus 29%) and for predictions within one error term (95% versus 83%).
Table 4-16: Comparison of the predictions by two methods (Substructure, State DOT A)

         count                    % of predictions
Error    CART    BMSNBI        CART        BMSNBI
-4          0         1        0.00%        0.05%
-3          2        20        0.10%        1.03%
-2         10       263        0.52%       13.56%
-1        378       830       19.49%       42.81%
 0       1042       561       53.74%       28.93%
 1        425       212       21.92%       10.93%
 2         56        46        2.89%        2.37%
 3         21         5        1.08%        0.26%
 4          4         0        0.21%        0.00%
 5          0         1        0.00%        0.05%
 7          1         0        0.05%        0.00%
Total    1939      1939
(|error| <= 1: CART 95.15%, BMSNBI 82.67%)

The number of predictions from the BMSNBI was close to the number of field ratings for the low rating classes of 4, 5, and 6; however, the algorithm overestimated class 7 and did not predict NBI class 8, which has the highest number of observations among the classes (Figure 4-15). The distribution of the CART predictions among NBI classes more closely resembled that of the field ratings.

Figure 4-15: Comparison of field, CART and BMSNBI ratings by NBI class (Substructure, DOT A)

State DOT B
Two BMS elements represented the substructure for more than half of the observations
from State DOT B (Table 4-5). Three separate CART analyses were done for substructures
with one, two, and three BMS elements. 77.3% of the predictions were exact matches, and
98% of the predictions were within one error term (Table 4-17).

Table 4-17: Accuracy of predictions by the distribution of error terms (Substructure, State DOT B)

Error (CART sub)     count        %
-2                      20     0.26%
-1                     657     8.61%
 0                    5899    77.31%
 1                     948    12.42%
 2                      88     1.15%
 3                      14     0.18%
 4                       4     0.05%
Total                 7630
(|error| <= 1: 98.35%)

The distribution of the CART predictions was quite similar to that of the field ratings
(Figure 4-16).

Figure 4-16: Comparison of field and CART ratings by NBI condition class (Substructure, DOT B)

The predictions were compared for a smaller data set of 2,878 observations (Table 4-18). The percentage of exact matches by the CART algorithm was 78%, compared to 41% by the BMSNBI.
Table 4-18: Comparison of the predictions by two methods (Substructure, State DOT B)

         count                    % of predictions
Error    CART    BMSNBI        CART        BMSNBI
-4          0         3        0.00%        0.10%
-3          0         5        0.00%        0.17%
-2          4        54        0.14%        1.88%
-1        288      1573       10.01%       54.66%
 0       2246      1170       78.04%       40.65%
 1        319        70       11.08%        2.43%
 2         19         2        0.66%        0.07%
 3          1         1        0.03%        0.03%
 4          1         0        0.03%        0.00%
Total    2878      2878
(|error| <= 1: CART 99.13%, BMSNBI 97.74%)

Since the BMSNBI does not predict NBI class 8, the number of its predictions for NBI class 7 was much higher than both the field ratings and the CART predictions (Figure 4-17). The distribution of the CART predictions was similar to that of the field ratings.

Figure 4-17: Comparison of field, CART and BMSNBI ratings by NBI condition class (Substructure,
State DOT B)

State DOT C
Substructure observations from State DOT C were typically composed of three BMS
elements (60%), and the remaining observations were in general composed of one (22%) or
two (10%) BMS elements (Table 4-5). 64% of the predictions from the CART analysis were
exact matches (Table 4-19).
Table 4-19: Accuracy of predictions by the distribution of error terms (Substructure, State DOT C)

Error (CART sub)     count        %
-3                       4     0.04%
-2                      69     0.77%
-1                    1661    18.44%
 0                    5779    64.15%
 1                    1253    13.91%
 2                     197     2.19%
 3                      17     0.19%
 4                      10     0.11%
 5                       6     0.07%
 6                       7     0.08%
 7                       5     0.06%
Total                 9008
(|error| <= 1: 96.50%)

The CART algorithm did not predict NBI class 8. Therefore, the predictions for NBI class 7 were higher in number than the field ratings in that class (Figure 4-18). However, the overall distributions, apart from NBI class 7, were quite similar.


Figure 4-18: Comparison of field and CART ratings by NBI condition class (Substructure, DOT C)

Summary of Results

The accuracy of the CART predictions, represented by absolute errors, is summarized for all three states in Table 4-20. For all rating types, the percentage of exact matches was highest for State DOT B and lowest for State DOT A. The percentage of predictions within one error term was higher than 90% in every case.

Table 4-20: CART Predictions

                   State DOT A            State DOT B            State DOT C
                Error=0  |Error|<=1    Error=0  |Error|<=1    Error=0  |Error|<=1
Deck              40%       98%          63%       99.7%
Superstructure    48%       91%          80%       98%          59%       98%
Substructure      55%       94%          77%       98%          64%       97%

Comparison of absolute errors in predictions between the CART algorithm and the BMSNBI algorithm for State DOTs A and B is given in Table 4-21. For all rating types, the percentage of exact matches was higher for the CART algorithm. For State DOT B, the percentages of exact matches for superstructure and substructure ratings were higher than for the BMSNBI predictions by 54.5 and 37 percentage points, respectively.

Table 4-21: CART and BMSNBI Comparison

                        Deck             Superstructure        Substructure
                    DOT A   DOT B        DOT A   DOT B        DOT A   DOT B
CART Error=0         41%     63%          46%     82%          54%     78%
BMSNBI Error=0       30%     47%          45%     27%          29%     41%
|CART Error|<=1      96%     99.8%        93%     98%          95%     99%
|BMSNBI Error|<=1    74%     95%          91%     78%          83%     98%

Comparison with Other Proposed Methods

Percentages of exact predictions and predictions within one absolute error term from the ANN model reported by Al-Wazeer et al. [12] are included in Table 4-22. The table includes the best results from that study for both states and all rating types. The sample bridge data used in the ANN study come from two state DOTs that are different from State DOTs A, B, and C discussed in this paper. Since the sample data sets used by the ANN model and the CART algorithm are different, a true comparison of the accuracy of the two methods is not possible. However, it should be noted that the highest percentage of exact predictions by the ANN model is 48%, while it is almost 80% for the CART algorithm. Except for the CART deck predictions for State DOT A, the percentages of exact predictions and predictions within one error term achieved by the CART algorithm are higher than the corresponding ANN prediction percentages.

Table 4-22: ANN study (best results) [12]

                     Deck              Superstructure        Substructure
                 DOT M   DOT W         DOT M   DOT W         DOT M   DOT W
ANN Error=0       43%     41%           43%     44%           48%     39%
|ANN Error|<=1    88%     83%           85%     86%           93%     85%

NewTranslator is another proposed methodology for translating BMS element condition data to NBI condition ratings [13]. Sobanjo et al. present the variation in accuracy of the translated ratings by plotting the average of absolute errors for each NBI rating class [13]. The average of absolute errors is higher than 0.8 for all NBI rating classes except class 9. The comparable averages of absolute errors for the CART algorithm are given in Table 4-23. The highest average absolute error in this study is 0.7, and for the State DOT B and C predictions the averages of absolute errors are always below 0.5. Since the data sets used for the two methods are different, a true comparison of accuracy is not possible, but the averages of absolute errors achieved by the CART algorithm are markedly lower than the values reported for the NewTranslator.

Table 4-23: Average Absolute Error of CART predictions

                       Average Absolute Error
                 State DOT A   State DOT B   State DOT C
Deck                 0.69          0.38
Superstructure       0.63          0.23          0.44
Substructure         0.53          0.25          0.40
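For reference, the average absolute error in Table 4-23 is simply the mean of the absolute error terms over all observations for a rating type. A minimal sketch with hypothetical values:

```python
import numpy as np

def average_absolute_error(field, predicted):
    # Mean of |predicted - field| over all observations of a rating type.
    return np.mean(np.abs(np.asarray(predicted) - np.asarray(field)))

print(average_absolute_error([7, 6, 8], [7, 7, 7]))  # (0 + 1 + 1) / 3 = 0.67
```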

CONCLUSIONS

The statistical results from the CART method in this paper suggest a potentially more accurate method of predicting NBI condition ratings than the previous algorithms in the literature. Direct comparisons of the predictions from the CART algorithm and the BMSNBI indicated better accuracy for the CART algorithm. While a true comparison with the other two proposed methods in the literature was not possible, the CART algorithm achieved higher accuracies than these earlier methods when similar accuracy measures were compared.
This methodology does not make assumptions about the impacts (weights) of specific BMS elements on the related NBI condition ratings. Instead, because of the way the predictor variables were defined, the column contributions from the CART results indicate the statistical impact of a specific BMS element type (e.g., abutment, column, girder) and condition state on the prediction. Analyses of such information can be useful to state transportation agencies for exploring how the condition of different BMS elements contributes to the assignment of NBI condition ratings.
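A rough analogue of the column contributions in modern tree libraries is the impurity-based feature importance, which likewise attributes splitting credit to individual predictors. The sketch below is illustrative only; the data are hypothetical, and scikit-learn's importance measure is not identical to JMP's G² column contributions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical predictors for a girder (A) plus bearing (B) superstructure.
X = np.array([[100, 0, 100], [90, 10, 100], [60, 40, 80], [30, 70, 50]])
names = ["APCTSTATE1", "APCTSTATE2", "BPCTSTATE1"]
y = np.array([8, 7, 6, 5])   # hypothetical field NBI ratings

tree = DecisionTreeClassifier().fit(X, y)
for name, share in zip(names, tree.feature_importances_):
    print(f"{name}: {share:.2f}")  # share of splitting credit per predictor
```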
The statistical results from this study and the classification trees from the CART algorithm are specific to the states in this study and cannot be generalized. Yet, the study provides a prediction methodology based on simple logical conditions that can be used to create easy-to-understand business rules for state transportation agencies.

REFERENCES

1. Small, E.P., Philbin, T., Fraher, M., and Romack, G.P. Current Status of Bridge Management System Implementation in the United States. In Eighth Transportation Research Board Conference on Bridge Management, TRB Transportation Research Circular 498, Washington, D.C., 1999, pp. A-1/1-16.
2. FHWA. 2008 Status of the Nation's Highways, Bridges, and Transit: Condition and Performance, Report to Congress. Washington, D.C., 2009, p. 622.
3. FHWA. Recording and Coding Guide for the Structure Inventory and Appraisal of the Nation's Bridges. Office of Engineering, Bridge Division, Bridge Management Branch, Washington, D.C., 1995.
4. Lichtenstein, A.G. The Silver Bridge Collapse Recounted. Journal of Performance of Constructed Facilities, 1993, 7(4), p. 249.
5. Cambridge Systematics, Inc. Pontis Bridge Management Release 4.4 User Manual. AASHTO, Washington, D.C., 2005, p. 572.
6. Hearn, G., Frangopol, D., Chakravorty, M., Myers, S., Pinkerton, B., and Siccardi, A.J. Automated Generation of NBI Reporting Fields from Pontis BMS Database. In Infrastructure: Planning and Management (J.L. Gifford, D.R. Uzarski, and S. McNeil, eds.), American Society of Civil Engineers, Denver, 1993, pp. 226-230.
7. Hearn, G., Cavallin, J., and Frangopol, D.M. Generation of NBI Condition Ratings from Condition Reports for Commonly Recognized (CoRe) Elements. University of Colorado at Boulder, 1997, p. 52.
8. Herabat, P., and Tangphaisankun, A. Multi-Objective Optimization Model Using Constraint-Based Genetic Algorithms for Thailand Pavement Management. Journal of the Eastern Asia Society for Transportation Studies, 2005, 6, pp. 1137-1152.
9. Aldemir-Bektas, B., and Smadi, O.G. A Discussion on the Efficiency of NBI Translator Algorithm. In Proceedings of the Tenth International Conference on Bridge and Structure Management, October 20-22, 2008, Transportation Research E-Circular, Buffalo, New York, 2008.
10. Cambridge Systematics, Inc. Pontis Bridge Management Release 4.4 Technical Manual. Cambridge, Massachusetts, 2005, p. 347.
11. Frangopol, D.M., Gharaibeh, E.S., Kong, J.S., and Miyake, M. Optimal Network-Level Bridge Maintenance Planning Based on Minimum Expected Cost. Transportation Research Record, 2000, 1696, pp. 26-33.
12. Al-Wazeer, A., Nutakor, C., and Harris, B. Comparison of Neural Network Method Versus National Bridge Inventory Translator in Predicting Bridge Condition Ratings. TRB 86th Annual Meeting Compendium of Papers CD-ROM, 2007, Paper #07-0572.
13. Sobanjo, J.O., Thompson, P.D., and Kerr, R. Element-to-Component Translation of Bridge Condition Ratings. Transportation Research Board Annual Meeting 2008 Compendium of Papers DVD, 2008, Paper #08-3149.
14. AASHTO. AASHTO Bridge Element Inspection Manual, 1st Edition. 2010.
15. Breiman, L., Friedman, J.H., Olshen, R.A., and Stone, C.J. Classification and Regression Trees. Wadsworth, 1984.
16. Chou, P.A. Optimal Partitioning for Classification and Regression Trees. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1991, 13(4), pp. 340-354.
17. Gaudard, M., and Ramsey, P. Interactive Data Mining and Design of Experiments: The JMP Partition and Custom Design Platforms. North Haven Group, 2006.

CHAPTER 5. GENERAL CONCLUSION

CONCLUSIONS

This dissertation includes three complementary papers on bridge management practice, policy, and condition data in the United States.
Chapter 2, “An Independent Look at Federal Bridge Programs: Findings from a National
Survey,” presented findings from a national survey on bridge management and an overview
of the federal bridge programs in the United States. Survey results indicated the dominant
impact of federal funding eligibility on state-level bridge management decisions. Ninety
percent of responding states do not believe that federally required NBI data items cover their
data needs for bridge management. States collect an extensive amount of data on their
bridge network, including two types of condition inspections (NBI and BMS). However,
systematic use of the data to support cost-effective bridge management decisions is limited.
While the majority of reporting state departments of transportation have implemented a
BMS, the level of implementation is varied and the overall input from BMSs to network-
level decisions remains minimal. State transportation agencies need federal guidance on
areas such as using decision support tools, implementing BMSs, and improving data quality.
In Chapter 3, “A Discussion on the Efficiency of NBI Translator Algorithm,” a statistical
comparison of field NBI condition ratings and ratings generated by FHWA’s NBI Translator
(BMSNBI) algorithm for Iowa bridges was presented. Statistical analysis indicated that the
ratings generated by the NBI Translator algorithm are not representative of actual NBI ratings.
Results from the research raised questions about the effectiveness of the algorithm.
Chapter 4, “CART Algorithm for Predicting NBI Condition Ratings,” presented a new
methodology to predict National Bridge Inventory (NBI) condition ratings from bridge
management system (BMS) element condition data, based on Classification and Regression
Trees (CART). Analyses were conducted with bridge condition data from Iowa, Kansas, and
Montana for the years 2006 to 2010. The proposed methodology achieved significantly better
accuracies than other methodologies reported in the literature. CART predicted exact matches of field ratings as often as 80% of the time, and typically more than 60% of the time. In the best case, CART's exact-match percentage exceeded that of the BMSNBI by 55 percentage points, and it was typically at least 10 percentage points higher. The CART prediction methodology uses simple, logical conditions on BMS element condition data to predict NBI condition ratings and has potential use for federal and state transportation agencies in summarizing bridge condition data.

RECOMMENDATIONS FOR FUTURE RESEARCH

Advancing implementation of BMSs in support of decision making at the national level has many challenges. A modeling approach that is consistent with states' expectations and
verified by data and experience is yet to be achieved. Current models are complex and
require continuous updates to verify assumptions and model inputs. Simplified network-level
tools and methodologies are needed that summarize available data into objective information
to guide bridge management decisions. Such tools that also consider economic analysis can
support cost-effective, network-level decisions for both state and federal governments.
Future work on the CART algorithm to predict NBI condition ratings can utilize bridge
condition data from other states to explore the potential of the algorithm to summarize
BMS element condition data at the national level. An improved methodology can be used
by state departments of transportation for their reporting requirements and for assessment
of the national network and future needs at the federal level. Unified classification trees can
also provide insight on the relative impacts of BMS elements on the NBI condition ratings.
The quest for better tools and methodologies that help the bridge management community is an ongoing effort. Tools and methodologies, however, require sustained implementation to be useful. Questions remain, and further research is needed on technical, institutional, and managerial aspects.

APPENDIX A. SURVEY RESULTS


[Survey result pages appear as images in the original document.]

APPENDIX B. CART ANALYSES REPORTS

State A - Deck

RSquare=0.217
Number of observations=3218
Number of splits=4

State B - Deck

RSquare=0.319
Number of observations=7570
Number of splits=4

State A – Superstructure
1 BMS Element (Beam/Girder)

RSquare=0.255
Number of observations=2533
(Matching inspected ratings for 5121 total observations)
Number of splits=7

Column Contributions
Term            Number of Splits    G^2
quantity 0 0
PCTSTATE1 4 1695.07316
PCTSTATE2 1 33.5535357
PCTSTATE3 1 268.979469
PCTSTATE4 1 41.3441341
Total 7 2038.9503

[Classification tree diagram]

State B – Superstructure
1 BMS Element (Beam/Girder)

RSquare=0.585
Number of observations=2241
Number of splits=7

Column Contributions
Term            Number of Splits    G^2
QUANTITY 1 35.356938
PCTSTATE1 3 2076.59336
PCTSTATE2 1 65.0557929
PCTSTATE3 1 420.161174
PCTSTATE4 1 112.843095
Total 7 2710.01036

[Classification tree diagram]

State B – Superstructure
2 BMS Elements (Beam/Girder + Bearing)

RSquare=0.544
Number of observations=1179
Number of splits=7

Column Contributions
Term            Number of Splits    G^2
AQUANTITY 2 142.66644
APCTSTATE1 2 859.3234
APCTSTATE2 0 0
APCTSTATE3 1 203.000768
APCTSTATE4 0 0
APCTSTATE5 0 0
BPCTSTATE1 1 86.3498831
BPCTSTATE2 0 0
BPCTSTATE3 0 0
BPCTSTATE4 1 38.3234251
BPCTSTATE5 0 0
Total 7 1329.66392

[Classification tree diagram]

State B – Superstructure
3 BMS Elements (Beam/Girder + Bearing1 + Bearing2)

RSquare=0.419
Number of observations=1139
Number of splits=13

Column Contributions
Term            Number of Splits    G^2
AQUANTITY 3 100.930909
APCTSTATE1 2 581.664411
APCTSTATE2 1 26.9180658
APCTSTATE3 1 68.4517662
APCTSTATE4 1 45.9368756
APCTSTATE5 0 0
BPCTSTATE1 3 121.442587
BPCTSTATE2 0 0
BPCTSTATE3 0 0
BPCTSTATE4 0 0
BPCTSTATE5 0 0
CPCTSTATE1 1 27.3336803
CPCTSTATE2 0 0
CPCTSTATE3 1 184.653214
CPCTSTATE4 0 0
CPCTSTATE5 0 0
Total 13 1157.33151

[Classification tree diagram]

State C – Superstructure
1 BMS Element (Beam/Girder)

RSquare=0.324
Number of observations=7318
Number of splits=13

Column Contributions
Term            Number of Splits    G^2
PCTSTATE1 6 4401.25328
PCTSTATE2 1 76.7714123
PCTSTATE3 1 1026.89686
PCTSTATE4 5 487.957598
Total 13 5992.87915

[Classification tree diagram]

State C – Superstructure
2 BMS Elements (Beam/Girder + Bearing)

RSquare=0.357
Number of observations=848
Number of splits=12

Column Contributions
Term            Number of Splits    G^2
AQUANTITY 0 0
APCTSTATE1 2 340.217147
APCTSTATE2 1 21.2040499
APCTSTATE3 1 22.780726
APCTSTATE4 0 0
BQUANTITY 3 85.1514928
BPCTSTATE1 1 94.2676856
BPCTSTATE2 2 72.4753529
BPCTSTATE3 1 31.0721084
BPCTSTATE4 1 124.880254
Total 12 792.048817

[Classification tree diagram]

State A – Substructure
1 BMS Element (Abutment)

RSquare=0.383
Number of observations=300
Number of splits=12

Column Contributions
Term            Number of Splits    G^2
QUANTITY 2 27.036313
PCTSTATE1 6 284.009469
PCTSTATE2 2 22.1938733
PCTSTATE3 1 12.4991294
PCTSTATE4 1 14.7167928
Total 12 360.455578

[Classification tree diagram]

State A – Substructure
2 BMS Elements (Abutment + Column/Pile/Backwall)

RSquare=0.510
Number of observations=162
Number of splits=12

Column Contributions
Term            Number of Splits    G^2
AQUANTITY 3 31.991119
APCTSTATE1 1 17.2761433
APCTSTATE2 1 79.798327
APCTSTATE3 1 18.3633844
APCTSTATE4 0 0
BQUANTITY 1 10.8840956
BPCTSTATE1 3 64.50044
BPCTSTATE2 1 6.73822769
BPCTSTATE3 1 20.7433782
BPCTSTATE4 0 0
Total 12 250.295115

[Classification tree diagram]

State A – Substructure
3 BMS Elements (Cap + Abutment + Column/Pile/Backwall)

RSquare=0.225
Number of observations=1991
Number of splits=12

Column Contributions
Term            Number of Splits    G^2
AQUANTITY 1 26.4122332
APCTSTATE1 1 28.3318648
APCTSTATE2 1 33.3117303
APCTSTATE3 0 0
APCTSTATE4 0 0
BQUANTITY 1 42.058024
BPCTSTATE1 3 534.246131
BPCTSTATE2 0 0
BPCTSTATE3 1 154.561269
BPCTSTATE4 0 0
CQUANTITY 0 0
CPCTSTATE1 2 280.727141
CPCTSTATE2 0 0
CPCTSTATE3 2 91.8684755
CPCTSTATE4 0 0
Total 12 1191.51687

[Classification tree diagram]

State A – Substructure
4 BMS Elements (Cap + Abutment + Backwall/PierWall + Column)
(All reinforced concrete bridges and elements)

RSquare=0.269
Number of observations=1514
Number of splits=12

Column Contributions
Term            Number of Splits    G^2
AQUANTITY 1 35.8857674
APCTSTATE1 3 166.24809
APCTSTATE2 0 0
APCTSTATE3 0 0
APCTSTATE4 0 0
BQUANTITY 1 44.1795476
BPCTSTATE1 2 288.647503
BPCTSTATE2 0 0
BPCTSTATE3 1 484.438746
BPCTSTATE4 0 0
CQUANTITY 0 0
CPCTSTATE1 2 124.585784
CPCTSTATE2 0 0
CPCTSTATE3 2 133.078021
CPCTSTATE4 0 0
DQUANTITY 0 0
DPCTSTATE1 0 0
DPCTSTATE2 0 0
DPCTSTATE3 0 0
DPCTSTATE4 0 0
Total 12 1277.06346

[Classification tree diagram]

State B – Substructure
1 BMS Element (Abutment)

RSquare=0.528
Number of observations=338
Number of splits=21

Column Contributions
Term            Number of Splits    G^2
QUANTITY 10 168.854467
PCTSTATE1 7 261.334229
PCTSTATE2 1 23.0954215
PCTSTATE3 2 23.3417154
PCTSTATE4 1 21.6111224
Total 21 498.236955

[Classification tree diagram]

State B – Substructure
2 BMS Elements (Column/Pile + Abutment)

RSquare=0.401
Number of observations=4169
Number of splits=21

Column Contributions
Term            Number of Splits    G^2
AQUANTITY 3 89.8949668
APCTSTATE1 2 193.517993
APCTSTATE2 2 91.8635644
APCTSTATE3 0 0
APCTSTATE4 0 0
BQUANTITY 3 122.170241
BPCTSTATE1 6 2249.10621
BPCTSTATE2 2 142.811703
BPCTSTATE3 2 119.725323
BPCTSTATE4 1 42.1193004
Total 21 3051.2093
[Classification tree diagram]

State B – Substructure
3 BMS Elements (Column/Pile + Abutment + Cap)

RSquare=0.462
Number of observations=3123
Number of splits=20

Column Contributions
Term            Number of Splits    G^2
AQUANTITY 0 0
APCTSTATE1 3 222.343631
APCTSTATE2 0 0
APCTSTATE3 1 34.314392
APCTSTATE4 0 0
BQUANTITY 3 98.097465
BPCTSTATE1 3 1249.63916
BPCTSTATE2 0 0
BPCTSTATE3 2 206.380168
BPCTSTATE4 1 28.290579
CQUANTITY 1 32.035462
CPCTSTATE1 4 956.583323
CPCTSTATE2 0 0
CPCTSTATE3 1 92.2972147
CPCTSTATE4 1 62.0505558
Total 20 2982.03195
[Classification tree diagram]

State C – Substructure
1 BMS Element (Abutment)

RSquare=0.362
Number of observations=2161
Number of splits=20

Column Contributions
Term            Number of Splits    G^2
QUANTITY 8 186.136204
PCTSTATE1 6 1518.62454
PCTSTATE2 1 53.0960633
PCTSTATE3 3 247.38156
PCTSTATE4 2 112.887934
Total 20 2118.1263

[Classification tree diagram]

State C – Substructure
2 BMS Elements (Column/Pile + Abutment)

RSquare=0.367
Number of observations=962
Number of splits=18

Column Contributions
Term            Number of Splits    G^2
AQUANTITY 4 77.1859829
APCTSTATE1 4 509.599776
APCTSTATE2 0 0
APCTSTATE3 2 75.7886735
APCTSTATE4 0 0
BQUANTITY 2 51.1990672
BPCTSTATE1 4 159.256098
BPCTSTATE2 1 18.5583322
BPCTSTATE3 0 0
BPCTSTATE4 1 27.7872598
Total 18 919.37519

State C – Substructure
3 BMS Elements (Column/Pile + Abutment + Cap)

RSquare=0.305
Number of observations=5885
Number of splits=20

Column Contributions
Term            Number of Splits    G^2
AQUANTITY 0 0
APCTSTATE1 4 561.829206
APCTSTATE2 0 0
APCTSTATE3 2 281.341445
APCTSTATE4 1 50.7372446
BQUANTITY 0 0
BPCTSTATE1 5 1382.01782
BPCTSTATE2 0 0
BPCTSTATE3 1 75.8682568
BPCTSTATE4 1 99.2861114
CQUANTITY 0 0
CPCTSTATE1 2 1982.95039
CPCTSTATE2 0 0
CPCTSTATE3 3 181.095778
CPCTSTATE4 1 73.5930845
Total 20 4688.71933