Benchmarking: The Benefits and Limitations in Research Space Planning

Wed, 01/27/2016 - 2:59pm

by Sarah Holton, AIA, LEED AP, CLSS, Payette

Benchmarking is a useful tool for comparing performance metrics from one project to another. When planning new research space, benchmark data assists the decision-making process by enabling the client to understand what peer institutions are doing in similar situations. Benchmark data can also compare metrics between different researchers within an organization. However, when relying on benchmark data, it's important to keep in mind some of its limitations. Data gathered by multiple institutions or companies can be difficult to compare because metrics aren't always calculated in the same way. While comparable data can provide insight into possible solutions, benchmark data alone does not solve unique, complex problems. With over 40 years of laboratory experience, Payette has built a robust benchmarking database that helps us frame those unique, complex problems. Often, we use benchmarking as the first step in the planning process, whether it applies to a project's sustainability goals or to department and floor planning.

Benchmarking and sustainability goals

Benchmarking is particularly useful in setting sustainability goals at the beginning of a building project. As part of the 2030 Commitment, which challenges all new buildings, developments and major renovations to be carbon-neutral by 2030, we have been tasked with carefully benchmarking our sustainable strategies. An important benchmark is Energy Use Intensity (EUI), expressed in kBtu/SF/year, which describes a building's predicted energy use. A low EUI signifies good energy performance, so this is a key metric to compare against similar projects. All new projects continually strive to better their energy performance and lower their projected building EUI compared to their competitors.
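
To make the metric concrete, here is a minimal sketch of an EUI calculation; the building size and utility figures are invented for illustration and are not drawn from any Payette project.

```python
# Hypothetical EUI calculation. All figures below are invented for illustration.

KWH_TO_KBTU = 3.412     # 1 kWh of electricity = 3.412 kBtu
THERM_TO_KBTU = 100.0   # 1 therm of natural gas = 100 kBtu

def eui(annual_kwh: float, annual_therms: float, gross_sf: float) -> float:
    """EUI (kBtu/SF/year) = total annual energy use in kBtu / gross floor area in SF."""
    total_kbtu = annual_kwh * KWH_TO_KBTU + annual_therms * THERM_TO_KBTU
    return total_kbtu / gross_sf

# A hypothetical 120,000 SF research building:
print(f"{eui(annual_kwh=4_500_000, annual_therms=60_000, gross_sf=120_000):.0f} kBtu/SF/year")
# -> 178 kBtu/SF/year; a lower value signals better energy performance
```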

Figure: New science buildings benchmarks showing predicted EUI

As our projects strive to reduce their overall energy usage, other sustainable strategies are carefully benchmarked and are also useful at a detailed planning level. For instance, we track air changes per hour in research labs and the face velocity required at the fume hoods as useful sustainability benchmarks when planning research buildings. The fume hood is arguably the single most important safety feature in a laboratory. It is also one of the major energy users and can use three and a half times more energy annually than the average American house. At a recent project meeting, upon learning that current practice has lowered fume hood face velocity to between 60 and 80 fpm through high-performance fume hoods, and that most of their current hoods were at 100 fpm, the client decided to aim to lower their standard to 60 fpm. Benchmark data quickly helped the client frame the problem and make a decision.
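
As a rough illustration of why face velocity matters, exhaust airflow scales linearly with face velocity for a given sash opening, so dropping from 100 fpm to 60 fpm cuts the air a hood exhausts (and the conditioned air it pulls from the lab) by roughly 40 percent. The sketch below assumes a hypothetical open sash area; it is not data from the project described above.

```python
# Sketch: fume hood exhaust airflow at different face velocities.
# The open sash area is an assumed value for illustration only.

OPEN_SASH_AREA_SF = 7.5  # assumed open sash area in sq ft (e.g. ~5 ft wide x 18 in. high)

def exhaust_cfm(face_velocity_fpm: float, open_area_sf: float = OPEN_SASH_AREA_SF) -> float:
    """Exhaust airflow (CFM) = face velocity (ft/min) x open sash area (sq ft)."""
    return face_velocity_fpm * open_area_sf

conventional = exhaust_cfm(100)      # conventional hood at 100 fpm
high_performance = exhaust_cfm(60)   # high-performance hood at 60 fpm

print(f"100 fpm: {conventional:.0f} CFM, 60 fpm: {high_performance:.0f} CFM")
print(f"Exhaust air reduction: {1 - high_performance / conventional:.0%}")  # 40%
```
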
Benchmarking and space planning

Benchmarking is also important at the department or floor planning level. As we begin to understand how many faculty members will occupy a building, how big their labs are and what kind of research they are involved in, we use benchmarks to estimate the initial program. These early benchmarks are more of a range per person and become more detailed as programming evolves and the constraints of the project are known.
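
A minimal sketch of how such an early, range-based program estimate might be assembled is shown below; the space types, NASF-per-researcher ranges and headcounts are placeholders, not Payette benchmarks.

```python
# Sketch of an early program estimate built from benchmark ranges.
# Space types, NASF-per-researcher ranges and headcounts are placeholder values.

benchmark_nasf_per_person = {     # net assignable SF per researcher: (low, high)
    "wet lab": (150, 250),
    "computational": (60, 100),
    "lab support": (50, 100),
}

headcount = {"wet lab": 40, "computational": 25, "lab support": 40}

low = sum(benchmark_nasf_per_person[space][0] * n for space, n in headcount.items())
high = sum(benchmark_nasf_per_person[space][1] * n for space, n in headcount.items())

print(f"Initial program estimate: {low:,} - {high:,} NASF")  # 9,500 - 16,500 NASF
```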

To illustrate how benchmarking works in a real-world scenario, let's take a look at a case study. We had a client with a challenging problem. They wanted to continue growing their school by hiring new faculty but were constrained by their current building footprint. Using benchmark data, the administration hoped to convince the faculty that they each individually had more space than their contemporaries at peer institutions and, therefore, should reduce their individual research footprints, which would allow for new hire growth. In this instance, the benchmark data was under critical scrutiny because the administration was using it to take space away from the researchers. The researchers examined the data and felt the existing conditions were not exactly like the conditions of the benchmarked institutions. This faculty was solving unique problems in a geographically dense area, conditions that were not directly comparable to other schools. This instance reinforces the notion that benchmarking data must be taken with a grain of salt. Often such data gets you in the ballpark but doesn't fully solve or answer the problem.

Figure: Research space types examples

In the example of this client, this research density benchmarking exercise did highlight some very clear outliers. At this institution some faculty occupied their space very densely while others occupied their space very loosely, and, ultimately, most of the research space fell within an average range. From this point, we dug in to work with the outliers to understand what they were doing and whether we could create higher-quality space in a smaller quantity. We observed inefficiencies within many of their labs, and our proposed redesign demonstrated how to provide more linear feet of bench with less overall square footage. In the end, benchmarking was a useful tool, but it was not a standalone tool in reaching a solution, because we were planning research space that was not directly comparable to other institutions. That is the limitation of benchmarking. However, a benchmarking tool becomes more powerful as more data is collected from varying institutions and clients.
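
As a hedged sketch of how such an outlier screen might be run, the example below flags labs whose NASF per researcher falls outside a simple 1.5 x IQR band; the density figures are invented and the rule is an assumption, not the method Payette used.

```python
# Sketch: flagging research-density outliers with a 1.5 x IQR rule.
# The NASF-per-researcher values are invented; the screening rule is an assumption,
# not the method described in the article.

import statistics

nasf_per_researcher = {
    "Lab A": 180, "Lab B": 210, "Lab C": 195, "Lab D": 320,  # Lab D: loosely occupied
    "Lab E": 205, "Lab F": 110,                              # Lab F: densely occupied
    "Lab G": 190, "Lab H": 200,
}

q1, _, q3 = statistics.quantiles(nasf_per_researcher.values(), n=4)
low_cut, high_cut = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)

for lab, density in nasf_per_researcher.items():
    if not low_cut <= density <= high_cut:
        print(f"{lab}: {density} NASF/researcher is an outlier worth revisiting")
```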

At Payette we have established some strategies for gathering and building reliable benchmark data:

• Establishing definitions at the outset is very important for reliable, comparable data.

• Graphing data and making it visual helps team members (client leadership and stakeholders as well as the design team) understand the story behind the data. Gathering and presenting good comparable benchmarking data can be an iterative process because, often, when the data is graphed or visually interpreted, outliers jump out. When outliers are identified they sometimes need to be revisited.

• Creating an easy process for collecting data while it is calculated in the drawing phase pays off; recalculating data afterwards often takes twice as long.

Figure: Benchmark ranges by research space types illustrating NASF per researcher

Benchmarking can be very powerful in helping clients make decisions. As a tool, benchmarking allows designers to frame the problem and provide a comparison to peer institutions' decisions and standards. However, no two institutions are exactly the same. Thus, each problem must be examined in detail to truly understand what researchers are doing and what their space requirements are. Careful consideration of relevant benchmarking data leads to smarter decisions for clients, whether it is a thoughtful response to space utilization or a new goal towards a sustainable future.
