Abrigo Funding Benchmark Data 2022Q3
PETER S. DEBBAUT
1 We would like to thank Garver Moore, Kevin Abbas, Jake Buchanan, and Neekis Hammond for comments on a previous draft. The views expressed in this article are those of the authors and do not necessarily represent the views of Abrigo.
Many more institutions will not be able to establish these parameters for one or more material
portfolio segments, due to either lack of specific data or lack of observable funding experience. By
aggregating the integrated data of thousands of loan portfolios, Abrigo can provide credible
guardrails that support an institutional assumption of one or more of these parameters. Our goal in
this research is to establish a bridge between detectable, first-party funding rates based on
historical information and a notion of extant funding rates based on a broader set of experience.
Practitioners will see wide dispersions in these results; we present this additional information not to
confuse, but to warn. Even with millions of instruments to work with, and even disaggregating
with a broad brush, there is variance in observed experience. This information is provided for
institutions to interpret, apply, and defend for their own reasons and at their own risk, and no
warranty is made as to the applicability or appropriateness of a given parameter for a given
institution.
Simply put, we conduct and provide this research as an arm's-length alternative to guessing, and we hope it is useful for institutions that have difficulty establishing a modeling parameter using first-party experience. These parameters may migrate as conditions change, or as more data is
collected; we expect to update and release Benchmark results and the attendant research report to
users of the Abrigo™ ALLL and Abrigo™ LLA solution annually, following the end of the third
quarter.2
While the use of these benchmarks is at a client's own risk, we can identify specific uses that we do
not recommend in the following, non-exhaustive list:
• Use of benchmark inputs where complete, accurate, and relevant data exists for the client
• Automatically applying the results of more recent benchmark studies without evaluation and consideration
• Asymmetrical application of benchmarks (version A used to specify a model, and version B used to apply that model)
• Use of benchmarks to justify a failure to capture complete and accurate first-party data on a go-forward basis
• Periodic changes to interpretation of the "shape" of the benchmarks, e.g., using the 25th performance percentile in period 1 and the 75th percentile in period 2
• Use of benchmarks without a strong back-testing and monitoring regime for internal performance
2 More frequent studies are available to interested institutions for an additional fee: contact consulting@abrigo.com for more details.
Definitions
The Lifetime Funding Rate (LFR) calculations performed via Benchmarks closely align with what a
user (or customer) would obtain through an engagement with Advisory Services. That is, for a
specific institution and call code within that institution, the results generated via Benchmarks
should closely align with the results generated by Advisory Services, holding underlying parameters
constant. The following provides working definitions for Default and the LFR, while the section
titled Funding Calculations presents technical definitions for these terms.
DEFAULT
LFR
Given a segmentation choice, the Lifetime Funding Rate, LFR, is given by the ratio of the maximum
increase in balances from the start of the measurement period (i.e. the maximum of balance during
the period less the balance at the start of the period, bounded by current available credit) to the
current available credit as of the start of the measurement period.
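The definition above can be sketched in code. This is an illustrative reading of the prose, not Abrigo's production implementation; the function name and inputs are assumptions.

```python
# Hypothetical sketch of the Lifetime Funding Rate (LFR) as defined above:
# the maximum in-period balance increase, bounded by the credit available
# at the start of the period, divided by that starting available credit.

def lifetime_funding_rate(balances, available_credit_at_start):
    """balances: sequence of observed balances over the measurement
    period; balances[0] is the period-start balance."""
    if available_credit_at_start <= 0:
        raise ValueError("available credit at period start must be positive")
    start = balances[0]
    # maximum increase over the starting balance (0 if balances never rise)
    max_increase = max(b - start for b in balances)
    # bound the increase by the credit available at the period start
    bounded = min(max(max_increase, 0.0), available_credit_at_start)
    return bounded / available_credit_at_start
```

For example, a line that starts at a balance of 100 with 80 of available credit and peaks at 140 has an LFR of 40 / 80 = 0.5.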
Data gathering/reconciliation
1. Initial Eliminations – The following observations will be eliminated from the analysis
population:
a. Assets categorized as “not-integrated-from-the-core” within the Abrigo database.
b. Assets categorized as “not-migration-analysis-enabled” within the Abrigo database.
c. Assets categorized as Purchased Credit Impaired (PCI/310-30)
d. Assets identified as impaired or individually analyzed per FAS 114 (310-10-35)
e. Assets that do not have a defined FAS code of ‘5’ or ‘91’.
f. Assets with zero, negative, or missing principal balance as of the period start date.
g. Assets that exhibit a days past due value greater than twenty-eight.
h. Assets with zero, negative, or missing current available credit as of the period start
date.
i. Assets classified as “in default” (corresponding with the definition of default
previously outlined).
j. Assets whose call report code is missing or unavailable.
k. Assets whose date-of-observation lies within 12 months of the most recent
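The initial eliminations above amount to a row-level filter. The sketch below expresses that filter under assumed field names; Abrigo's actual database schema is not documented here.

```python
# Illustrative filter for the initial eliminations (a)-(j); field names
# are assumptions, not Abrigo's actual schema.

def passes_initial_eliminations(asset):
    """Return True if the asset survives the elimination rules."""
    if not asset.get("integrated_from_core"):            # (a)
        return False
    if not asset.get("migration_analysis_enabled"):      # (b)
        return False
    if asset.get("pci_310_30"):                          # (c) PCI / 310-30
        return False
    if asset.get("individually_analyzed_310_10_35"):     # (d) FAS 114
        return False
    if asset.get("fas_code") not in ("5", "91"):         # (e)
        return False
    balance = asset.get("principal_balance")
    if balance is None or balance <= 0:                  # (f)
        return False
    if asset.get("days_past_due", 0) > 28:               # (g)
        return False
    credit = asset.get("available_credit")
    if credit is None or credit <= 0:                    # (h)
        return False
    if asset.get("in_default"):                          # (i)
        return False
    if asset.get("call_code") is None:                   # (j)
        return False
    return True
```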
Data presentation
The following eliminations will remove observations before presenting time series and aggregate
values within Abrigo’s Benchmarks product:
1. Given the segment, date, and institution, remove funding rates where the number of
available loans is less than 100.
2. Given the segment and date, and after applying step 1, remove all observations where the number of institutions is less than 25.
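These two thresholds can be applied in sequence, as sketched below. The record layout is an assumption made for illustration.

```python
# Sketch of the presentation-level eliminations, assuming records of the
# form {segment, date, institution, loan_count, ...}. The thresholds come
# from the two rules above.

from collections import defaultdict

MIN_LOANS = 100
MIN_INSTITUTIONS = 25

def presentable(records):
    # Rule 1: drop segment/date/institution cells with fewer than 100 loans.
    kept = [r for r in records if r["loan_count"] >= MIN_LOANS]
    # Rule 2: after rule 1, drop segment/date cells represented by fewer
    # than 25 institutions.
    institutions = defaultdict(set)
    for r in kept:
        institutions[(r["segment"], r["date"])].add(r["institution"])
    return [r for r in kept
            if len(institutions[(r["segment"], r["date"])]) >= MIN_INSTITUTIONS]
```

Note that rule 2 counts institutions only after rule 1 has run, so a thinly populated institution can cause an entire segment/date cell to drop out of the presented series.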
When evaluating benchmark results we test the accuracy of the calculations as well as the
completeness of the data. First, accuracy of the calculations is verified by selecting an institution
included within the benchmark data and matching, through recalculation, the answer within the
benchmark data to a calculation performed within Abrigo. Second, completeness of the data is
verified by querying the database for allowance customers that should be considered for inclusion
in the benchmark dataset (noting not all may be ultimately considered due to the underlying data
not meeting our criteria for inclusion). We then compare these customers to those in the benchmark
dataset, which helps ensure that all institutions were appropriately included.
As stated above, we have placed controls on the dataset’s reconciliation to known public data
sources such as FFIEC call reports in terms of reported loss, nonperforming assets, and
balances. These controls may significantly reduce the included loan population: reported customer
and loan counts present population counts of only portfolios passing our controls. In some cases,
these controls eliminate enough portfolios from the analysis that we do not have sufficient
populations to provide credible analysis.
Calculation accuracy is further tested by recalculating the cumulative funding rate for a given call code using an export of the raw data and then comparing these values to the benchmarks within Abrigo to confirm there are no differences.
Validated Sample
In this sub-section, we provide summary details on the underlying customer sample after validation
procedures have been applied (as described previously). Figure 1 presents the percentage of
validated portfolios (customers) broken out by Census Region (Panel A) and asset size (Panel B).
Aggregates by date
Time series observations for “Aggregate” LFR are given by
Also, we again note that the COVID-19 pandemic may have introduced volatility into the presented
series and/or aggregates. We have tried to exclude loans tied to the SBA’s Paycheck Protection
Program (PPP) to limit the effects of these loans on the presented data, but we do not believe we
have captured all of them. We will continue to work on capturing these loans and update the
following data accordingly.
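One plausible form of such an aggregate, sketched here purely as an assumption rather than Abrigo's published definition, is a pooled ratio: total bounded balance increases across all in-sample lines divided by total available credit at the period start.

```python
# Hypothetical pooled aggregation of per-line LFR inputs. Each element of
# `lines` is (bounded_increase, available_credit_at_start), where the
# increase is already capped at the line's starting available credit.

def aggregate_lfr(lines):
    total_increase = sum(inc for inc, _ in lines)
    total_credit = sum(cred for _, cred in lines)
    if total_credit <= 0:
        raise ValueError("no available credit in sample")
    return total_increase / total_credit
```

A pooled ratio weights each line by its available credit, so large lines dominate the aggregate; an unweighted mean of per-line LFRs would behave differently.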
All Loans
Figure 2. Funding Benchmarks - All Loans - Counts - Current