
Journal of Building Performance Simulation
ISSN: 1940-1493 (Print) 1940-1507 (Online) Journal homepage: https://www.tandfonline.com/loi/tbps20

Effects of real-time simulation feedback on design for visual comfort

Nathaniel L. Jones & Christoph F. Reinhart

To cite this article: Nathaniel L. Jones & Christoph F. Reinhart (2019) Effects of real-time simulation feedback on design for visual comfort, Journal of Building Performance Simulation, 12:3, 343-361, DOI: 10.1080/19401493.2018.1449889

To link to this article: https://doi.org/10.1080/19401493.2018.1449889

Published online: 13 Mar 2018.

Journal of Building Performance Simulation, 2019
Vol. 12, No. 3, 343-361, https://doi.org/10.1080/19401493.2018.1449889

Effects of real-time simulation feedback on design for visual comfort

Nathaniel L. Jones* and Christoph F. Reinhart

Department of Architecture, Massachusetts Institute of Technology, Cambridge, MA, USA

(Received 14 October 2017; accepted 3 March 2018)

*Corresponding author. Email: nljones@mit.edu
© 2018 International Building Performance Simulation Association (IBPSA).

Building performance simulations and models of human visual comfort allow us to predict daylight-caused glare using
digital building models and climate data. Unfortunately, the simulation tools currently available cannot produce results fast
enough for interactive use during design ideation. We developed software with the ability to predict visual discomfort in real
time. However, we know little about how users react to simulation feedback presented in real time. In our study, 40 subjects
with backgrounds in building design and technology completed two shading design exercises to balance glare reduction
and annual daylight availability in two open office arrangements using two simulation tools with differing system response
times. Subjects with access to real-time simulation feedback tested more design options, reported higher confidence in design
performance and increased satisfaction with the design task, and produced better-performing final designs with respect to
spatial daylight autonomy and enhanced simplified daylight glare probability.
Keywords: daylighting; visual comfort; glare; simulation; real time; user behaviour; human–computer interaction; design

Introduction

Design firms increasingly seek to justify architectural decisions with data. However, the simulations that produce these data often run days or weeks after the design is finalized. Most building performance simulations validate completed architectural designs, and designers rarely run the simulations themselves or see the direct output. In the common use case, the user-friendliness and speed of simulation tools are of secondary importance. How must simulation tools themselves change in order to produce results during the design process, rather than afterward? What effect does the choice of simulation tool have on design outcomes? To answer these questions, we investigate how the design of simulation tools affects their use and their users' decisions.

Two factors that limit the expanded use of building performance simulation tools are cumbersome interfaces and feedback latency. Many simulation tools in common use lack user interfaces that are intuitive or easy to learn. Professional users expect a trade-off between simulation speed and accuracy, although we have shown previously that this need not be the case when new algorithms and parallelization techniques are used (Jones and Reinhart 2017a). These factors helped to create professional roles for building physics and lighting experts separate from the role of designers. Unfortunately, this professional separation further limits the ability of performance simulation to guide early development of designs.

If building performance simulation can be made faster and more intuitive without losing accuracy, it may be possible to train a new generation of building designers to use simulation as a design aid. In this paper, we investigate the effect of providing simulation results in real time. We compare two simulation tools for visual comfort based on their effects on user behaviour, design performance, and user satisfaction. We presented 40 subjects with two design problems, asking them to reduce the annual likelihood of glare in office environments while maintaining acceptable daylit illuminance values on the work plane. Each subject analysed one space with DIVA-for-Rhino 4.0 (Jakubiec and Reinhart 2011), a commercial lighting analysis plugin for the Rhinoceros CAD environment (Robert McNeel & Associates 2016), and one space with AcceleradRT, an experimental tool for physically based real-time rendering. Both tools provided subjects with false colour luminance maps representing occupant views and corresponding daylight glare probabilities. However, DIVA-for-Rhino required subjects to wait for feedback from each viewpoint, while AcceleradRT provided continuous feedback as subjects navigated the space.

From this study, we demonstrate a link between simulation feedback latency and the behaviour, design choices, and satisfaction of simulation users. To enable this analysis, we make several other contributions to the field of daylighting and building performance simulation. First, we present AcceleradRT, a novel real-time lighting simulation tool developed as a design aid. Second, we present a method for comparative usability testing of building performance simulation tools. Third, we present a modified visual comfort metric that accounts for annual, spatial, and directional variation in glare. Finally, we examine the role of user experience and discuss implications for tool development and visual comfort metrics.

Background

Software tools have changed the nature of design thinking significantly in the last half century. Digitally based design emphasizes specificity and detail over abstract representation (Oxman 2006), and designers exhibit different design strategies using CAD tools than when sketching, according to think-aloud design studies (Salman, Laing, and Conniff 2014). This may be in part because CAD (and simulation tool) development has been driven more by production needs than by an effort to foster creativity (Jonson 2005).

System response time

For simulation results to inform design decisions, they must be available before those decisions are made. We are referring to decisions made during active designing, not those made in the boardroom after the fact, so informed decisions require interactivity. System response time (SRT) is the time a user waits after entering input until the system begins to present results to the user (Doherty and Kelisky 1979). Studies conducted at IBM show that SRT affects productivity, but not simply by adding SRT to the time required for task completion. Instead, longer SRT seems to "disrupt the thought process" and cause a longer user response time as well (Doherty and Kelisky 1979). Reducing SRT increases productivity, job satisfaction, design quality, and perceived power and decreases anxiety levels (Doherty and Thadani 1982; Barber and Lucas 1983; Guynes 1988; Hoxmeier and DiCesare 2000). From these observations, Brady (1986) developed roll theory, which states that given immediate access to organized data and with concentration unbroken by distractions, "ideas and solutions will suggest more ideas and solutions to successive steps of the creative process, in a rapid and orderly flow." When "on a roll," an average user can exhibit higher productivity than an expert user faced with high SRT (Doherty and Thadani 1982).

Brady's theory seems intrinsically related to Csíkszentmihályi's concept of flow (1996). Flow is a focused mental state in which tasks become automatic and effortless, often referred to as being "in the zone." Flow describes the creative mindset generally and is not restricted to digital work, but its applicability to designers using digital media is clear. To integrate simulation into the flow of the design process, simulation results must be clearly organized and immediately available.

High SRT affects user behaviour, but the latency that users will accept depends on the speed of the user's thought process. Types of thought and their time scales can be classified as deliberate acts that carry expected responses (∼100 ms), cognitive operations that prompt unexpected responses (∼1 s), or unit tasks that involve planning and strategy (∼10 s) (Newell 1994). Users notice short delays in response to deliberate acts such as input device manipulation. In one study, touchpad users detected latency differences between 2.38 and 11.36 ms (Ng et al. 2012). However, users in another study often failed to detect touchpad response latencies below 40 ms (Jota et al. 2013). In a first-person shooter game, introducing 75-100 ms delays led to 50% fewer successful shots; players found 100 ms latencies noticeable and 200 ms latencies annoying (Beigbeder et al. 2004). Strategy games employ a slower type of thinking based in cognitive operations and unit tasks. Internet latencies up to several seconds did not significantly affect performance in real-time strategy games (Sheldon et al. 2003). However, even if high SRT goes unnoticed, it can alter thought processes. In a tile-moving game, subjects faced with higher latency worked harder to develop strategies and moved fewer tiles (O'Hara and Payne 1998, 1999). Similarly, increasing the SRT of web searches causes users to submit fewer queries (Brutlag 2009). Liu and Heer (2014) observe that reduced SRT correlates to greater numbers of mouse movements and application events, affecting both deliberate acts and cognitive operations of users. Their study suggests that SRT must be below 500 ms to support interactive user behaviour.
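These latency bands can be summarized in a short sketch. The Python snippet below is illustrative only and is not part of the study's tooling; it maps a system response time to the approximate thought-process scale of Newell (1994) and flags whether it falls under the 500 ms interactivity threshold suggested by Liu and Heer (2014).

```python
# Illustrative sketch (not from the study's codebase): map a system response
# time to the thought-process time scales of Newell (1994) and flag whether it
# supports interactive use per Liu and Heer (2014).

def classify_srt(srt_seconds: float) -> str:
    """Return the approximate cognitive band that a given SRT falls into."""
    if srt_seconds <= 0.1:
        return "deliberate act (~100 ms): expected response, e.g. mouse input"
    if srt_seconds <= 1.0:
        return "cognitive operation (~1 s): prompts an unexpected response"
    if srt_seconds <= 10.0:
        return "unit task (~10 s): involves planning and strategy"
    return "longer than a unit task: likely to disrupt the thought process"

def supports_interactive_use(srt_seconds: float) -> bool:
    """Liu and Heer (2014) suggest SRT below 500 ms for interactive behaviour."""
    return srt_seconds < 0.5

if __name__ == "__main__":
    for srt in (0.2, 2.0, 25.0):  # e.g. one rendered frame, DGP convergence, a batch run
        print(f"{srt:5.1f} s -> {classify_srt(srt)}; interactive: {supports_interactive_use(srt)}")
```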
Visual comfort

At its most basic, visual comfort can be defined as providing enough illumination without providing too much. Visual discomfort occurs when light levels are too dim to support a given task or when intense light sources and contrast cause glare. Concern for sustainability and connection to the outdoors leads designers to meet as much of the lighting need as possible through natural lighting, but sunlight is also the most common cause of glare. Performance metrics for daylighting must therefore consider both daylight availability and risk of glare.

Daylight availability metrics use illuminance values measured or simulated over a grid of points. Annual simulations using climate data can give daylight illuminance values for a grid of points for all hours of the year. Spatial daylight autonomy refers to the fraction of occupied space receiving at least 300 lux for at least 50% of occupied hours and is abbreviated sDA300,50% (IESNA 2012). The LEED (USGBC 2013) and WELL (International WELL Building Institute 2016) green building standards recognize sDA300,50% as an indicator of good daylighting.
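As an illustration of the metric just defined, the following Python sketch computes sDA300,50% from a hypothetical annual illuminance array of shape (occupied hours × sensor points); the array contents and dimensions are placeholders rather than data from this study.

```python
# A minimal sketch of the sDA calculation described above, assuming an annual
# illuminance array of shape (occupied_hours, sensor_points) in lux. The 300 lux
# and 50% thresholds follow IESNA (2012); the input data are hypothetical.
import numpy as np

def spatial_daylight_autonomy(illuminance, lux_threshold=300.0, time_fraction=0.5):
    """Fraction of sensor points receiving at least `lux_threshold` lux for at
    least `time_fraction` of occupied hours (sDA300,50%)."""
    illuminance = np.asarray(illuminance, dtype=float)
    hours_met = (illuminance >= lux_threshold).mean(axis=0)  # per-point fraction of hours
    return float((hours_met >= time_fraction).mean())

# Placeholder example: 3650 occupied hours x 819 sensor points
rng = np.random.default_rng(0)
grid = rng.lognormal(mean=5.5, sigma=1.0, size=(3650, 819))
print(f"sDA300,50% = {spatial_daylight_autonomy(grid):.0%}")
```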
Some grid-based metrics also use illuminance levels as a proxy for glare potential. Annual sun exposure, abbreviated ASE1000,250, refers to the fraction of occupied space receiving at least 1000 lux for at least 250 occupied hours (IESNA 2012) and is similarly used by the LEED and WELL standards. Useful Daylight Illuminance (UDI) sets both lower and upper thresholds (100 and 2000 lux, respectively) to encompass both daylight availability and glare potential in a single metric (Nabil and Mardaljevic 2006). There are drawbacks to using illuminance as a proxy for glare, however. Glare potential is not a measured likelihood of glare; rather, it is intuition that glare could occur, and there is little evidence to support the upper thresholds proposed by ASE1000,250 or UDI. Furthermore, it is difficult for designs to perform well in both sDA300,50% and ASE1000,250.

Glare metrics use luminance rather than illuminance and consider the location, size, and brightness of individual glare sources. Discomfort glare occurs when bright light sources in the field of view cause visual irritation or eyestrain. Increasing the source's size or luminance can lead to loss of contrast in the retinal image, resulting in disability glare (Hopkinson 1972). Daylight glare probability (DGP) describes the likelihood that an occupant will report the presence of glare in a given view (Wienold and Christoffersen 2006). We calculate DGP as:

    DGP = 5.87 × 10⁻⁵ E_v + 0.0918 × log₁₀( 1 + Σ_{i=1}^{n} L_{s,i}² ω_{s,i} / (E_v^1.87 P_i²) ) + 0.16,   (1)

where E_v is the scene vertical eye illuminance in lux, L_{s,i} and ω_{s,i} are the luminance and solid angle of the ith source, and P_i is the Guth position index representing the eye's sensitivity to the source direction. In human subject studies, values greater than 45% correspond to intolerable glare, while those under 35% predict imperceptible glare (Wienold 2009a).
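For clarity, the following Python sketch evaluates Equation (1) for a set of already-extracted glare sources. The example source values are hypothetical, and the glare-source detection step (such as the five-times-average-luminance rule discussed below) is not shown.

```python
# A sketch of Equation (1), assuming glare sources have already been extracted
# from a rendered luminance image. Each source is described by its average
# luminance L_s (cd/m2), solid angle omega_s (sr), and Guth position index P.
import math

def daylight_glare_probability(ev_lux, sources):
    """DGP per Wienold and Christoffersen (2006); `sources` is an iterable of
    (luminance, solid_angle, position_index) tuples."""
    glare_sum = sum(
        (lum ** 2) * omega / (ev_lux ** 1.87 * p ** 2)
        for lum, omega, p in sources
    )
    return 5.87e-5 * ev_lux + 0.0918 * math.log10(1.0 + glare_sum) + 0.16

# Hypothetical example: one bright window patch at a vertical eye illuminance of 2500 lux.
dgp = daylight_glare_probability(2500.0, [(8000.0, 0.02, 1.2)])
print(f"DGP = {dgp:.1%} -> {'above' if dgp > 0.35 else 'below'} the 35% rule of thumb")
```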
DGP prediction requires the rendering of an image, usually by physically based ray tracing, for every analysed time, position, and view direction, leading to very high simulation SRT. However, glare sources (typically defined as regions of the image with luminance at least five times the average scene luminance) tend to represent direct views to light sources, so the luminance terms of Equation 1 can be calculated through a faster zero-ambient-bounce luminance simulation. The enhanced simplified DGP metric (eDGPs) combines a zero-ambient-bounce luminance simulation with E_v calculated by a lower-resolution illuminance simulation to approximate DGP with high accuracy (relative mean bias error under 2% and relative root-mean-square error under 5%) (Wienold 2009b).

Although DGP and eDGPs evaluate glare at a specific time, location, and viewing direction, they may be extended to consider multiple times and locations. Wienold (2009b) proposes that situations exceeding certain DGP thresholds be limited to 5% of occupied hours at any point. A recent European standard codifies this requirement (CEN 2016). However, Wienold does not specify what portion of the floor area needs to be evaluated, leaving it up to the designer to choose critical locations and view directions for analysis. Kleindienst and Andersen's (2012) glare avoidance extent represents glare occurrence spatially and can be displayed graphically as a temporal map. To reduce computational complexity, they divide the year into a statistically representative sample of hours for simulation and approximate DGP using illuminance. DIVA-for-Rhino creates temporal maps of eDGPs but provides no quantitative metric to compare these visualizations (Jakubiec and Reinhart 2011). None of these methods quantify the annual, spatial, and directional glare occurrence together in a single metric.

Contrast on a computer screen or paper also plays a role in visual comfort. For legibility, we describe the minimum luminance difference between text and its background by the contrast ratio CR_min:

    CR_min = 2.2 + 4.84 × L_L^(-0.65),   (2)

where L_L is the lesser of the two luminance values (ISO 2008). Within a typical luminance range, this ratio is usually around 3:1 (ISO 1992). Veiling glare occurs when reflected light, such as sunlight falling on a computer screen, obscures text or images seen through the reflective surface. Based on our approximation, a typical monitor that emits 100 cd/m² will exhibit veiling glare when it reflects at least 50 cd/m².
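The following sketch applies Equation (2) and the veiling-glare rule of thumb quoted above. The monitor luminance values are the illustrative figures from the text rather than measurements, and the 50% reflection threshold is encoded here as the paper's approximation, not as a general derivation from the standard.

```python
# Sketch of the legibility checks described above. minimum_contrast_ratio
# implements Equation (2) (ISO 2008); veiling_glare encodes the rule of thumb
# quoted in the text (a ~100 cd/m2 monitor suffers veiling glare once
# reflections reach roughly half its emitted luminance). Values are illustrative.

def minimum_contrast_ratio(lesser_luminance):
    """CR_min = 2.2 + 4.84 * L_L^-0.65, with L_L the darker luminance in cd/m2."""
    return 2.2 + 4.84 * lesser_luminance ** -0.65

def veiling_glare(screen_emission, reflected_luminance):
    """Rule of thumb from the text, not a general result from the standard."""
    return reflected_luminance >= 0.5 * screen_emission

if __name__ == "__main__":
    for lum in (10.0, 50.0, 100.0):
        print(f"L_L = {lum:5.1f} cd/m2 -> CR_min = {minimum_contrast_ratio(lum):.2f}")
    print("Veiling glare at 50 cd/m2 reflection:", veiling_glare(100.0, 50.0))
```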
Various standards and rating systems that address visual discomfort emphasize adequate daylighting (IESNA 2012), views to the outdoors (USGBC 2013), circadian entrainment (International WELL Building Institute 2016; Konis 2017), visual interest (Amundadottir et al. 2017), or glare (Hopkinson 1972; Einhorn 1979; CIE 1995; Nazzal 2001; Harrold 2003; Wienold and Christoffersen 2006). To complicate matters, these phenomena are ephemeral and occupant-centric, so lighting designers must consider how each aspect of visual comfort will change with time, occupant position, and viewing direction (Jakubiec and Reinhart 2012; Amundadottir et al. 2017; Konis 2017; Jones and Reinhart 2017a). Design for visual comfort is an open-ended problem that in many cases requires the use of simulation. The choice of simulation tool may produce different performance results (Bellia, Pedace, and Fragliasso 2015) and elicit different user behaviour (Liu and Heer 2014). It therefore plays a critical role in the success of design outcomes.

Tools

The SRT of CAD tools has decreased substantially since the early days of computer graphics and is now short enough to avoid disrupting creative flow. However, many building performance simulation tools have long SRTs of minutes or hours. Excluding expert intuition and scaled physical models, all methods for predicting visual discomfort of which we are aware use physically based ray-tracing simulations (Whitted 1980), which generally have a high SRT.

For the past 20 years, RADIANCE has stood out as the most validated simulation software for physically based ray traced rendering and irradiance calculation in building design (Larson and Shakespeare 1998). Previous studies have validated its ability to predict daylit sensor readings in test settings (Ubbelohde and Humann 1998; Reinhart and Herkel 2000; Reinhart and Walkenhorst 2001; Reinhart and Andersen 2006; Reinhart and Breton 2009; Du and Sharples 2011; Yoon, Moon, and Kim 2016) and building atria (Grynberg 1989; Ng et al. 2001; Galasiu and Atif 2002). Our own analysis shows that RADIANCE accurately predicts periods of visual discomfort based on DGP and contrast ratio (Jones and Reinhart 2017a). DIVA-for-Rhino is a plugin for Rhinoceros CAD software that enables many types of RADIANCE analysis, including – relevant to this study – luminance maps and DGP calculations.

Accelerad is a RADIANCE derivative created by the authors that achieves faster computational performance by running simulations in parallel on the graphics processing unit (GPU) instead of using a single central processing unit (CPU) thread (Jones and Reinhart 2014; Jones and Reinhart 2015; Jones and Reinhart 2017a). This strategy allows hours-long RADIANCE simulations to run in minutes through the OptiX™ ray-tracing engine (Parker et al. 2010). AcceleradRT modifies RADIANCE still further by using progressive path tracing (Lafortune and Willems 1993) instead of light-backwards distribution ray tracing (Cook, Porter, and Carpenter 1984). Progressive path tracing makes images at intermediate stages of rendering immediately available, reducing SRT to fractions of a second. We have previously demonstrated that progressive path tracing produces coherence in contrast ratio and DGP calculations over time, and that these results may be used with some degree of certainty starting early in the rendering process (Jones and Reinhart 2016).

Experiment setup

We recruited 40 subjects aged 21-37 to take part in our study. Twenty-five subjects reported previous professional experience in architecture, ranging up to 13 years with a median of 4 years' experience. The remaining subjects were engaged in graduate-level architecture coursework or design research. Seventeen subjects had educational backgrounds in architecture, 12 in building technology, 7 in both, and 4 in related engineering disciplines. All but two reported normal colour vision.

Design task

We gave each subject two design problems in which they had to modify a space to mitigate glare issues while maintaining acceptable daylight levels. Each problem involved a Rhinoceros model of a 19 m × 10.9 m open-plan corner office space with floor-to-ceiling windows on two sides (Figure 1(a,b)). Subjects had 20 minutes to complete each problem. For each space, we asked subjects to identify regions where occupants could experience glare by marking on a printed floor plan, and then to choose a combination of shading devices stored in predefined layers of the Rhinoceros model that would best mitigate glare (Figure 1(c,d)). The two models had different climates (Minneapolis and Albuquerque), different orientations, different workspace layouts, and different shading devices, so subjects could not use the same strategy for both models. To assist their analysis and decision-making, each participant had access to DIVA-for-Rhino for one design problem and to AcceleradRT for the other. We randomly chose which model and which tool each subject experienced first to ensure an equal number of subjects in each condition.

Additionally, we sat subjects at computers of two different configurations. Half of the subjects used machines with faster 2.60 GHz Intel® Xeon® E5-2604 processors with 3.4 GHz overclocking, and half used machines with slower 2.40 GHz Intel® Xeon® E5620 processors with 2.66 GHz overclocking. Each machine had an NVIDIA® Tesla® K40 graphics accelerator with 2880 CUDA® cores for AcceleradRT's GPU-based computations.

Prior to starting each design problem, we gave each subject simple training in glare analysis and use of the tools. We instructed subjects to consider DGP, veiling glare, and work plane illuminance in their designs. As simple rules of thumb, we asked subjects to keep DGP below 35%, reflections on monitors below 50 cd/m², and reflected luminance on work surfaces above 48 cd/m², which roughly corresponds to 300 lux falling on a 50% reflective Lambertian surface. Subjects could test the latter two criteria only through the false colour rendering provided by each simulation tool (Figures 2 and 3).

After each design problem, subjects completed a survey. The survey assessed subjects' confidence and satisfaction while performing the task. It also asked subjects to describe their approach to the design problems and their impressions of the tools. We also asked each subject to record the combination of shading devices they chose to mitigate glare issues.

Figure 1. Floor plans of the two models used for the (a) Minneapolis and (b) Albuquerque climates and (c,d) the shading device options for each.

Tool design

We created a customized workflow for each tool that gave subjects access to four input parameters: layer state, date and time, view, and simulation initiation. Subjects added or removed shading devices from the Rhinoceros model using the layer panel within the main Rhinoceros window (Figure 4). We provided month, date, and hour controls within a Grasshopper visual programming interface for both tools. For DIVA-for-Rhino, we provided an object in the Rhinoceros model to serve as an avatar to choose position and viewing direction. Subjects could move and rotate the avatar using Rhinoceros' gumball navigation (shown in Figure 4). We chose this method because preliminary trials showed that subjects had difficulty placing the virtual camera if they could not see its location. Because AcceleradRT features an animated progressive rendering rather than a static image, users can navigate the scene with mouse gestures over the image. We used Grasshopper to place an avatar in the Rhinoceros viewport to show the current view, but subjects controlled the position and view direction through AcceleradRT rather than Rhinoceros. In both instances, the avatars held the viewpoint at 1.3 m above the floor, corresponding to a typical seated eye level. Finally, we provided a Grasshopper button to start the simulation process in DIVA-for-Rhino. After setting the desired layer state, time, and view, subjects could click the button to initiate a simulation using DIVA-for-Rhino's lowest quality setting. AcceleradRT performs simulations continuously, so its interface offered no equivalent feature.

Figure 4. The Rhinoceros interface used with both tools includes the layer controls (right) and avatar, shown here with the gumball navigation control used in DIVA-for-Rhino.

Both tools provide graphical output. We set up both tools to display the avatar's current view in 180° equiangular fisheye projection using a logarithmic false colour scale from 10 to 10,000 cd/m². This display allowed subjects to estimate the brightness of spots such as monitors and work surfaces in the model. In DIVA-for-Rhino, we displayed the DGP value in a prominent text box. AcceleradRT displays DGP using a dial widget with green and red colouring to represent acceptable and unacceptable values, respectively (shown in Figure 2).

Figure 2. The AcceleradRT user interface includes a luminance map image that supports mouse navigation and a DGP dial widget. Subjects control date and time through Grasshopper.

Figure 3. The DIVA-for-Rhino user interface includes a static luminance map image and a text box reporting DGP within Grasshopper. Viewpoint navigation happens in Rhinoceros.

We used a custom Grasshopper component to log each subject's layer state, time, view, and simulation interactions for both tools. A timestamp associated with each interaction allowed us to record the length of time each subject spent in each combination of layer state, time, and view, and, for DIVA-for-Rhino, the elapsed time during simulation runs.

Results

Forty subjects collectively give us 800 minutes of recorded user interaction for each tool. We analyse these data in three ways. First, we examine the user behaviours, processes, and strategies that subjects employed to see how they differed depending on the tool used. Second, we examine the proposed designs from each subject to see if the choice of tool affected design performance. Finally, we examine the subjects' survey responses to see if subjects developed preferences for the tools.

User behaviours and design strategies

We are interested in the amount of information that subjects accumulated while making their design decisions, and therefore we want to know how many views, times, and layer states subjects explored (Table 1). However, setting the desired configuration of view, time, or layer state might involve multiple interactions to set the month, date, hour, x and y positions, view rotation angle, or visibility states of multiple layers. Rather than counting the total number of interactions, we count only those that preceded sufficiently long pauses to indicate that the subject might have thought about the result. Using DIVA-for-Rhino, simulations took on average 20 s using the faster computers and 29 s using the slower computers, so we can assume that a series of interactions within a shorter timeframe forms a unit task such as setting up a time and viewpoint for the next simulation. Using AcceleradRT, where preliminary results are available in real time, subjects might reject an idea after rendering only a few frames if the results did not look promising. Each frame takes approximately 200 ms to render, so a series of interactions over a period of several seconds might represent multiple cognitive operations as the user queries more information about the scene. In 2 s, AcceleradRT renders about 10 frames, by which point global metrics such as DGP tend to stabilize in the animation (Jones and Reinhart 2016). We choose 2 s as the minimum duration below which the subjects were unlikely to gain information about a design.

Table 1. Average number of interactions followed by 2-s delays, by type.

Interaction type    AcceleradRT    DIVA-for-Rhino    Mean ratio
Time                17.8           11.2              1.73
Viewpoint           93.6           26.9              6.06
Layer state         45.6           17.1              7.98
Simulation          N/A            23.3              N/A
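The following Python sketch illustrates this counting rule on a hypothetical log; the record format (timestamp, interaction type, new value) is an assumption for illustration and does not reproduce the study's Grasshopper logging component.

```python
# A sketch of the log post-processing described above, assuming each record is
# (timestamp_seconds, interaction_type, new_value). Interactions are only
# counted when at least `min_dwell` seconds pass before the next interaction,
# mirroring the 2-s threshold used in the study. The log format is hypothetical.
from collections import Counter

def count_considered_interactions(log, session_end, min_dwell=2.0):
    """Count interactions of each type that remained active for >= min_dwell seconds."""
    counts = Counter()
    for (t, kind, _), next_record in zip(log, log[1:] + [(session_end, None, None)]):
        if next_record[0] - t >= min_dwell:
            counts[kind] += 1
    return counts

# Hypothetical trace: (seconds from start, type, value)
trace = [(0.0, "view", "A"), (1.2, "view", "B"), (4.0, "layer", "louvres"),
         (30.0, "time", "21 Jun 10:00"), (33.0, "view", "C")]
print(count_considered_interactions(trace, session_end=1200.0))
# -> Counter({'view': 2, 'layer': 1, 'time': 1}); the 1.2-s view change is ignored
```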
Figure 5 shows the fraction of states (combinations of view, time, and layer state) that subjects viewed for durations up to one minute. Using AcceleradRT, half of those states were active for no more than 2 s, and only 3.5% were active for at least 20 s. Using DIVA-for-Rhino, 80% of the states that subjects observed remained active for at least 2 s, and 40% remained active for at least 20 s. This indicates a change in the cognitive state of subjects when using DIVA-for-Rhino. One possible explanation is that the slower tool induced slower thought processes, so that actions that would have been fast cognitive operations became slower unit tasks. Another possibility is that subjects relied more on strategy to cope with the increased SRT and therefore performed fewer cognitive operations.

Figure 5. Subjects spent less time on average examining each combination of view, time, and layer state in AcceleradRT than in DIVA-for-Rhino.

Different user strategies are apparent when plotting the views visited by each subject. In Figure 6, the thickness of a vector is proportional to the length of time the subject spent in that view, and colours indicate DGP values for views where the subject ran a simulation in DIVA-for-Rhino. Some common strategies were to concentrate views near windows (Figure 6(a,b)) or in the brightest corner (Figure 6(c,d)), visit all workstations in the space (Figure 6(e,f)), or concentrate on a small number of views in the centre of the room (Figure 6(g,h)). These patterns are often clearer in the DIVA-for-Rhino sessions, where the subjects had more precise control over placement of their avatars. Subjects were able to visit more of the space using AcceleradRT and visited on average six times as many views for at least 2 s (Table 1). Aggregating the views visited by all of the subjects shows that subjects collectively covered a greater portion of the space with their analysis using AcceleradRT than using DIVA-for-Rhino (Figure 7).

Figure 6. Sample traces show different strategies that subjects used to explore the space using AcceleradRT (left column) and DIVA-for-Rhino (right column). Colours indicate DIVA-for-Rhino simulations with imperceptible glare (green), perceptible or disturbing glare (yellow), and intolerable glare (red).

Figure 7. Subjects explored a greater portion of the spaces using AcceleradRT. The darkness of view symbols corresponds to the cumulative time subjects spent analysing the view in that region and direction.

Subjects using AcceleradRT also explored a greater variety of times during the year than those using DIVA-for-Rhino. Figure 8 shows the hours of the year studied by subjects, with darkness indicating the cumulative time that subjects spent examining each hour. Subjects tended to focus on the first day of each month or on solstice and equinox days. Using AcceleradRT, subjects were likely to examine times earlier or later in the day than when using DIVA-for-Rhino. The average subject studied 1.7 times as many times for at least two seconds using AcceleradRT as they did using DIVA-for-Rhino (Table 1).

Figure 8. Subjects tended to view the spaces on the first of the month or on solstice and equinox days. The darkness of each hour on the temporal maps corresponds to the cumulative time subjects spent analysing that hour.

The greatest effect occurred for the number of layer states that subjects investigated with the two tools. On average, subjects viewed eight times as many layer states for at least 2 s using AcceleradRT as using DIVA-for-Rhino. Our method for counting layer states considers specifically the number of state changes, so some subjects may have alternated between a smaller number of layer states more often in order to compare them using AcceleradRT. The order in which subjects experienced the two tools played a factor. Those who used DIVA-for-Rhino first tried on average thirteen times as many layer states when they later used AcceleradRT, while those who used AcceleradRT first tried on average only three times more layer states than they did later with DIVA-for-Rhino (Table 1). We reason that early exposure to the faster tool conditioned subjects to explore the design space more thoroughly. We also note that some subjects spent the majority of their time analysing the base case; for both tools, 10% ran out of time before modifying any layer states.

A temporal view of user behaviour offers another window into the strategies that subjects employed. In Figure 9, coloured dots represent different types of subject interactions. Most subjects alternated between spatial exploration and changing the layer state, with infrequent changes to the time. This may be because the time controls required keyboard interaction and therefore seemed to involve more effort. Using DIVA-for-Rhino, most subjects adopted a pattern of changing either view or layer state followed by running a simulation, and the frequency of interaction stayed relatively constant for each subject. Using AcceleradRT, subjects displayed a much higher frequency of interaction, with many views and layer states observed for only a matter of seconds. A quarter of subjects using AcceleradRT entered a prolonged period of inactivity ranging from one to five minutes, usually towards the end of the 20-minute session. This inactivity may indicate the need for a break, attention turning to the printed floor plans instead of the screen, or early completion of the task. We observe periods of inactivity using DIVA-for-Rhino as well, but they are generally shorter and occur earlier in the sessions. This may indicate hesitance to start the task due to the less intuitive interface.

Figure 9. Timelines of subject interactions with the simulation tools show greater interaction with AcceleradRT and more regularly spaced interaction with DIVA-for-Rhino.

With DIVA-for-Rhino only, we can examine the effect of processor speed on user behaviour. Subjects using faster computers examined 50% more views and 30% more layer states, and ran 37% more simulations than subjects using the slower computers. Subjects using faster computers also spent less time reviewing results or changing the model between simulations; there was almost no difference in the fraction of time taken up by simulations between the two groups (45% for subjects with fast computers versus 48% for subjects with slow computers). This may indicate that subjects who had to wait longer for simulation results became distracted more often, valued the time in which they could interact with the models more, or were otherwise less inclined to run simulations.

Design performance

We define high-performing designs as those that maximize daylight availability while minimizing the risk of glare. In general, a trade-off exists between these two goals, and no single design will simultaneously provide the greatest daylit area and the least glare risk. Instead, we define optimal designs as those that lie on the Pareto frontier of these two criteria; a design is optimal if and only if no other design performs better in one respect without performing worse in the other. Based on the instructions we gave to subjects, we expected them to seek out Pareto optimal designs.
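The Pareto test can be written compactly; the sketch below is a generic implementation under the assumption that each candidate design is summarized by a single sDA300,50% value (to be maximized) and a single saeDGPs value (to be minimized). The candidate names and values are hypothetical, and this is not the authors' implementation.

```python
# A minimal sketch of the Pareto test used to judge design performance: a design
# is optimal if no other design has both higher sDA300,50% and lower saeDGPs.
# Inputs are (name, sDA, saeDGPs) tuples with hypothetical values.

def pareto_optimal(designs):
    """Return the subset of designs on the Pareto frontier."""
    frontier = []
    for name, sda, glare in designs:
        dominated = any(
            (s >= sda and g <= glare) and (s > sda or g < glare)
            for n, s, g in designs if n != name
        )
        if not dominated:
            frontier.append((name, sda, glare))
    return frontier

candidates = [("no shading", 0.95, 0.30), ("louvres", 0.80, 0.10),
              ("deep overhang", 0.55, 0.12), ("blackout", 0.10, 0.00)]
print(pareto_optimal(candidates))
# "deep overhang" is excluded: "louvres" gives more daylight with less glare risk
```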
To judge the performance of the designs that subjects proposed, we simulated each model with each possible combination of shading devices and calculated daylight availability and glare risk. For each metric, we divided the space according to a 0.5-m square grid of 819 sensors. To assess daylight availability, we computed spatial daylight autonomy (sDA300,50%) at a work plane elevation of 78 cm. To assess the risk of glare, we computed enhanced simplified daylight glare probability (eDGPs) in eight equally spaced directions around each sensor at a seated head height of 130 cm for all occupied hours using climate-based skies. We report the fraction of eDGPs values across all occupied hours, positions, and directions that exceeded 35%. We refer to this fraction as spatial annual eDGPs (saeDGPs).
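A minimal sketch of this aggregation follows, assuming the eDGPs values have already been simulated and stored in an array indexed by occupied hour, sensor position, and view direction; the placeholder data below stand in for the climate-based simulation results.

```python
# A sketch of the saeDGPs aggregation defined above, assuming eDGPs has already
# been computed for every occupied hour, sensor position, and view direction and
# stored in an array of shape (hours, positions, directions), with values in 0-1.
import numpy as np

def spatial_annual_edgps(edgps, threshold=0.35):
    """Fraction of all (hour, position, direction) samples with eDGPs above threshold."""
    edgps = np.asarray(edgps, dtype=float)
    return float((edgps > threshold).mean())

# Placeholder example standing in for 3650 occupied hours x 819 sensors x 8 directions
rng = np.random.default_rng(1)
samples = rng.beta(2.0, 8.0, size=(365, 82, 8))  # smaller array for the example
print(f"saeDGPs = {spatial_annual_edgps(samples):.1%}")
```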
Figure 10 shows the sDA300,50% and saeDGPs for each unique combination of shading devices (320 for Minneapolis and 144 for Albuquerque). Both metrics tend to increase as the unobstructed view to the sky increases, resulting in a Pareto frontier of designs that maintain a high sDA300,50% while minimizing saeDGPs. We highlight the final design solutions chosen by subjects from among all other available design choices: shape indicates the tool used by the subject choosing that design, and colour indicates the number of subjects who chose the design using that tool. Subjects using AcceleradRT were more likely to choose final designs with Pareto optimal combinations of sDA300,50% and saeDGPs than subjects using DIVA-for-Rhino; of subjects who selected final designs, 61% of AcceleradRT users chose optimal designs, while only 39% of DIVA-for-Rhino users did. We did not instruct subjects on how to value the trade-off between sufficient work plane illuminance and glare, and the responses show preferences at both ends of the spectrum and in the middle. In the west-facing Albuquerque model, subjects using AcceleradRT preferred lower saeDGPs than those using DIVA-for-Rhino, but the same trend was not apparent in the south-facing Minneapolis model. One design option for Minneapolis proved particularly popular and was chosen by over a quarter of subjects, including seven using AcceleradRT and four using DIVA-for-Rhino (Figure 11). For comparison, only three subjects using AcceleradRT and one using DIVA-for-Rhino chose the most popular design option for Albuquerque (Figure 12). Both of these popular designs are Pareto optimal.

Figure 10. Shading designs chosen using AcceleradRT were more likely to lie on the Pareto frontier (high spatial daylight autonomy (sDA300,50%) and low spatial annual enhanced simplified daylight glare probability (saeDGPs)) than those chosen using DIVA-for-Rhino.

Figure 11. The most popular design for Minneapolis, chosen by 11 subjects, used slanted louvres to shade the top two-thirds of the glazing.

Figure 12. The most popular design for Albuquerque, chosen by four subjects, used a light shelf and slanted louvre shading on the middle portion of the glazing.

User evaluations

Clear trends are evident in the subjects' own evaluations of their work using the two tools. After using each tool for 20 minutes, we asked subjects to evaluate their experience by answering 12 questions, each on a 7-point Likert scale. These questions dealt with subjects' confidence in their own work and psychological state during the task (Figure 13).

Figure 13. Subject responses to survey.

Subjects were more confident when they used AcceleradRT than when they used DIVA-for-Rhino. More than half of subjects rated their confidence higher when it came both to their assessment of the space prior to their intervention and to the performance of their final design. This was the case even though 55% of subjects ranked their familiarity with DIVA-for-Rhino higher, and a plurality of 38% trusted the two tools to predict glare equally well. Subjects who used AcceleradRT first were likely to rate themselves more confident when using AcceleradRT by a higher margin. Subjects who used the slower computers were likely to put more trust in AcceleradRT by a higher margin.

Most subjects found the task using AcceleradRT to be more enjoyable, more relaxing, less difficult, less frustrating, and less hurried than the task using DIVA-for-Rhino. More than half of the subjects also reported learning more from the task using AcceleradRT. The order in which subjects used the tools mattered. Subjects who had already completed the task using AcceleradRT before they tried DIVA-for-Rhino were likely to rate AcceleradRT even more enjoyable while rating DIVA-for-Rhino more difficult, frustrating, and hurried by comparison. Relaxation was not significantly affected by tool order, but subjects who used the slower computers did tend to rate AcceleradRT more relaxing by a wider margin than those who used the faster machines.

Finally, we asked subjects which tools they preferred overall. In all, 90% of subjects preferred AcceleradRT for glare prediction, while 10% preferred DIVA-for-Rhino. As reasons for preferring AcceleradRT, subjects cited its ease of use, interactivity, and smooth workflow. Subjects appreciated the ability to see immediately when a design would not work, to examine more views in order to test their designs, and to make changes "on the fly." Some felt that their time spent using DIVA-for-Rhino was less productive because they had to wait for each simulation to complete. Regarding the interface, several subjects found navigation difficult in AcceleradRT, but they found the dial widget display of DGP easier to read. The four subjects who preferred DIVA-for-Rhino tended to cite AcceleradRT's progressive rendering as a reason. Some felt that it was distracting, less precise, or inefficient compared to static renderings. One subject, apparently unaware that AcceleradRT was not in public release at the time, cited DIVA-for-Rhino's larger market share as a reason to prefer it.

Discussion

In this study, we found that changes in SRT correlate with differences in user behaviour, user confidence, user satisfaction, and optimality of design choices. In this section, we look for a mechanism to explain these correlations. Based on our analysis, we then make recommendations for the development of building performance simulation tools and metrics.

Explaining the connection

We would like to explain why subjects produced optimal solutions more often when they had access to a tool with low SRT. An obvious hypothesis is that the faster tool allowed them to explore more options in the allotted time and therefore gather more knowledge about the design space. Could increased exploration be a predictor of optimal final designs? This turns out not to be the case. In fact, within the tests of each tool, users who chose non-optimal designs viewed just as many (or more) options as users who chose designs on the Pareto frontier (Figure 14).

Figure 14. The average number of states (combination of view, time, and layer state) viewed by subjects for at least 2 s has little relationship to whether subjects chose optimal final designs.

If exploration does not account for differences in design performance, then perhaps some subjects' previous experience helped them in the task. Our second hypothesis is that varying levels of architectural experience among the participants affected their ability to choose optimal designs. To test this, we divided subjects into four roughly equal bins according to their self-reported length of professional experience. Experience aided users in low SRT situations. Subjects with three or more years' professional architectural experience chose optimal final designs using AcceleradRT twice as frequently as their less experienced counterparts (Figure 15). Subjects with backgrounds in architecture were also more likely to choose optimal designs using AcceleradRT, regardless of their level of experience (Figure 16). However, when faced with high SRT, subjects with professional experience or architectural training did no better than inexperienced subjects at choosing optimal final designs. It is worth noting that inexperienced subjects still reported feeling more confident when using AcceleradRT, even though low SRT did not make them significantly more likely to select optimal final designs.

Figure 15. Subjects with at least three years of professional architectural experience were much more likely to choose optimal designs using AcceleradRT but showed no advantage using DIVA-for-Rhino.

Figure 16. Subjects with educational backgrounds in architecture were more likely to choose optimal designs using AcceleradRT but showed no advantage using DIVA-for-Rhino.

We cannot attribute the performance of designs chosen by the subjects solely to exploration or experience, so instead we examine how experience related to the tool used. Our third hypothesis is that subjects were more likely to choose optimal final designs when their combination of experience and activity resulted in flow. Based on interviews with recognized experts in creative fields, Csíkszentmihályi (1996) characterizes flow with nine traits. We examine these traits in the context of our study:

1. Clear goals at every step. Subjects received clear instruction prior to completing the tasks. However, the design of graphic user interfaces could affect subjects' ease of recalling individual steps. In particular, the AcceleradRT window contained only the animated rendering, colour scale bar, and DGP dial widget. This design emphasized a process and a goal – view the rendering and manipulate it to reduce DGP to an acceptable value. This streamlined process is not evident in DIVA-for-Rhino's Grasshopper interface, where the relationship between components is represented abstractly, the rendered view does not exist until the first simulation finishes and then does not update immediately to reflect user interactions, and DGP is displayed as text.

2. Immediate feedback to one's actions. AcceleradRT produced renderings at five frames per second and converged DGP results within 2 s, giving the impression of immediate feedback. In contrast, subjects using DIVA-for-Rhino had to wait 20-29 s for results from each simulation.

3. Balance between challenges and skills. Csíkszentmihályi observes flow to occur when both the challenge of the task and skill level of the subject are high. In our study, the task was challenging. Our observation that subjects with past professional experience in architecture chose optimal designs more frequently suggests that their skill level was better matched to the problem as presented in AcceleradRT. On the other hand, DIVA-for-Rhino made the task more difficult, such that these subjects' skill levels were no longer well matched to it.

4. Actions and awareness are merged. This merger describes deliberate acts – the type of thought associated with gestures and mouse interaction characterized by speeds on the order of 100 ms (Newell 1994). In our study, navigation through the modelled offices using the mouse is the primary deliberate act. AcceleradRT's progressive rendering tied subjects' awareness of daylighting performance to the act of navigating the space.

5. Distractions are excluded from consciousness. More than half of subjects reported feeling more distracted when using DIVA-for-Rhino than when using AcceleradRT. However, the feeling of distraction was heavily swayed by tool order; subjects tended to report feeling more distracted during the second half of the experiment. In their comments, some subjects explained that distraction resulted from finishing the task early.

6. No worry of failure. More than half of subjects reported feeling more confident and more relaxed when using AcceleradRT than when using DIVA-for-Rhino.

7. Self-consciousness disappears. Although this is similar to five above, we do not have a way to test it directly.

8. Sense of time becomes distorted. A plurality (48%) of subjects felt that time passed more quickly while using DIVA-for-Rhino. However, like distraction, many attributed this to finishing early with the other tool. It is therefore possible that responses to this question do not accurately represent the feeling of flow while engaged in the task itself.

9. Activity becomes autotelic. Csíkszentmihályi proposes that individuals who experience flow seek it out for its own sake. We cannot test this directly as it would require participants to continue the experiment task of their own volition beyond the allotted time. However, a majority (63%) of subjects expressed finding the activity more enjoyable with AcceleradRT than with DIVA-for-Rhino, so we expect they would choose AcceleradRT if given reason to continue the task.

AcceleradRT aligns more closely with the characteristics of flow than DIVA-for-Rhino, and in general, low SRT tools seem better positioned to encourage flow. The balance between challenge and skill best explains the link we observe between experience and choosing optimal designs. We conclude that experienced subjects were better equipped to use the information they got through design exploration and thus entered a creative state of flow more easily.

Recommendations for tools

We wish to emphasize one take-away from our discussion of flow: the low SRT and simplified user interface of AcceleradRT appeared to reduce the skill level necessary to balance the challenge of the task. In effect, flow lowered the barrier to entry for performing the task. To promote competent use of building performance simulation tools, developers should incorporate principles of flow into their products. Tools need interactive speeds and intuitive user interfaces.

To promote design exploration through cognitive operations rather than less interactive unit tasks, preliminary simulation feedback should be available within 500 ms (Liu and Heer 2014). This is far less than the 20-29 s that DIVA-for-Rhino takes to render at its lowest quality settings. AcceleradRT provides preliminary results after rendering its first frame in approximately 200 ms, so it meets our definition of an interactive tool, and it provides converged DGP results within 2 s, which is approximately the SRT necessary for feedback to cognitive operations. However, our study method is not fine grained enough to reveal what type of thinking subjects engaged in between interactions or at what point they began to wait. Some subjects who preferred AcceleradRT mentioned the ability to dismiss a bad design idea based on an incomplete simulation result as a benefit, while a subject who preferred DIVA-for-Rhino called having to wait for convergence in the progressive rendering a drawback. These comments suggest that even a 2-s SRT can interrupt flow. Future studies should employ more fine-grained approaches such as think-aloud methods to learn how simulation tools affect users' thought processes (Liu and Heer 2014; Salman, Laing, and Conniff 2014).
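One way to operationalize "converged within 2 s" is a simple stability test on the per-frame DGP estimates, sketched below. This is an illustrative stopping rule only and is not AcceleradRT's actual convergence criterion; the frame values are invented.

```python
# A sketch of one possible stopping rule for a progressively refined DGP
# estimate: treat the value as converged once the last few per-frame estimates
# agree within a small tolerance. Illustrative only; not AcceleradRT's criterion.

def dgp_converged(frame_estimates, window=5, tolerance=0.01):
    """True when the most recent `window` DGP estimates span <= `tolerance`."""
    if len(frame_estimates) < window:
        return False
    recent = frame_estimates[-window:]
    return max(recent) - min(recent) <= tolerance

history = []
for frame_dgp in (0.52, 0.44, 0.41, 0.398, 0.395, 0.393, 0.394, 0.393):  # invented values
    history.append(frame_dgp)
    if dgp_converged(history):
        print(f"converged after {len(history)} frames at DGP = {history[-1]:.1%}")
        break
```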
Building performance simulation tools need intuitive user interfaces. AcceleradRT was successful in this regard; while none of the subjects had any previous experience with AcceleradRT, 28% rated themselves somewhat or very familiar with it after only 20 minutes of exposure. The ease with which subjects became familiar with the new tool may be a result of its simplified interface in which visible components corresponded directly to the task they were designed for.

Recommendations for metrics

We found that experienced subjects tended to achieve designs optimized for minimal annual glare risk with maximal daylight autonomy, even though the subjects had no access to gridded readings or annual simulation results. Daylighting metrics must account for spatial and temporal variance of visual discomfort. Proxy metrics for glare based on illumination only, such as ASE1000,250, UDI, or even simplified DGPs (Wienold 2009b), do not adequately measure occupant experience of glare. The final designs chosen by subjects in our study did not tend to fall on Pareto frontiers involving any of these metrics, suggesting that these metrics do not correspond well to the occupant-centric views seen by our subjects (Figure 17). Indeed, subjects were no more likely to choose designs on the Pareto frontiers with respect to ASE1000,250 or UDI than if they had chosen designs at random (indicated by horizontal lines in Figure 17). Furthermore, we found that designs with spatial UDI averages above the recommended 80% (Mardaljevic 2015) often had high risk for glare.

Figure 17. Subjects chose designs with optimal combinations of sDA300,50% and saeDGPs more often than optimal combinations of sDA300,50% and any other proxy metric for glare.

Comparisons across a large design space require that we condense visual comfort data. For this study, we developed a climate-based daylighting metric using eDGPs. For each position and view direction, we calculated eDGPs at all occupied hours and report the mean fraction of times when eDGPs exceeded 35%. Our method is similar to that proposed by Wienold (2009b) and adopted in the new European standard (CEN 2016), except that it considers multiple positions and view directions and does not impose a 5% limit on time exceeding the eDGPs threshold. We did not instruct subjects to consider an annual limit, nor did using point-in-time simulations seem to prompt them to do so (Figure 17). Furthermore, we are not aware of any scientific basis for the 5% limit. Ultimately, glare mappings that represent glare probability experienced by occupants across space and time in a single graphic representation may provide the most intuitive understanding of visual discomfort. However, such simulations remain outside the realm of practicality for large parametric spaces. Even using the dctimestep programme with input from Accelerad's GPU-accelerated rcontrib programme (Jones and Reinhart 2017b) to calculate eDGPs, predicting the glare probabilities in all 464 design variants of the two models took days and generated over a terabyte of data. Computing results at this level of detail as part of an iterative design process would surely interrupt flow.

Conclusion

Forty subjects completed two design problems involving the visual comfort of office spaces using two simulation tools. The two tools differed in SRT; AcceleradRT provided immediate feedback with progressive refinement, while DIVA-for-Rhino required 20-29 s to produce a result. Subjects demonstrated differences in user behaviour, confidence, satisfaction, and design performance in terms of daylight autonomy and glare depending on which tool they used. Low SRT correlated with more exploration of the space, higher confidence in design performance, increased satisfaction with the design task, and more final designs that fell on the design space's Pareto frontier with respect to sDA300,50% and saeDGPs.

These results should provide guidance for creators of performance simulation tools. Tools that present clear goals and can be embedded in existing iterative design processes have a lower barrier to entry than those that interrupt flow and require the user to wait for results. Designers can better mediate conflicting goals, such as maximizing daylight availability and eliminating glare, when they are able to view a scene from an occupant's vantage point in real time. Human subjects testing is an effective way to determine how well different simulation tools can guide their users. To measure the performance of design options proposed by the study's subjects, we developed a metric that considers eDGPs across all occupied hours, viewpoints, and directions. Considering annual spatial data for a parametric design space quickly becomes a big data problem. Future tools and metrics must therefore be sound in their representation of human perception and efficient in their use of computational resources.
360 N.L. Jones and C.F. Reinhart

show the same output. Currently, building performance simulation is primarily used to validate completed designs, when any problem it uncovers will be expensive to correct. However, far-reaching changes can be implemented at little cost when data are available early in the design process. If performance simulation is to inform design decisions as they are made, it must offer interactive results through an intuitive interface so that it does not interrupt flow.

Acknowledgements
The Tesla K40 accelerators used for this research were donated by the NVIDIA Corporation. Jon Sargent developed the Grasshopper components that allowed interactivity between Rhinoceros and AcceleradRT. The authors thank Philip Thompson for his technical support of the user study and Les Norford for providing space.

ORCID
Nathaniel Jones http://orcid.org/0000-0002-0041-1593
Christoph F. Reinhart http://orcid.org/0000-0001-6311-0416

References
Amundadottir, M. L., S. Rockcastle, M. S. Khanie, and M. Andersen. 2017. "A Human-centric Approach to Assess Daylight in Buildings for Non-visual Health Potential, Visual Interest and Gaze Behavior." Building and Environment 113: 5–21.
Barber, R. E., and H. C. Lucas. 1983. "System Response Time, Operator Productivity, and Job Satisfaction." Communications of the ACM 26 (11): 972–986.
Beigbeder, T., R. Coughlan, C. Lusher, J. Plunkett, E. Agu, and M. Claypool. 2004. "The Effects of Loss and Latency on User Performance in Unreal Tournament 2003®." Proceedings of 3rd ACM SIGCOMM Workshop on Network and System Support for Games, 144–151.
Bellia, L., A. Pedace, and F. Fragliasso. 2015. "The Impact of the Software's Choice on Dynamic Daylight Simulations' Results: A Comparison Between Daysim and 3DS Max Design®." Solar Energy 122: 249–263.
Brady, J. T. 1986. "A Theory of Productivity in the Creative Process." IEEE Computer Graphics and Applications Magazine 6: 25–34.
Brutlag, J. 2009. Speed Matters for Google Web Search. Mountain View: Google.
CEN Technical Committee 169. 2016. prEN 17037:2016 – Daylight of Buildings. Brussels: European Committee for Standardization.
CIE Technical Committee 3–13. 1995. CIE 117–1995 Discomfort Glare in Interior Lighting. Vienna: Commission Internationale de L'Eclairage.
Cook, R. L., T. Porter, and L. Carpenter. 1984. "Distributed Ray Tracing." SIGGRAPH Computer Graphics 18 (3): 137–145.
Csikszentmihalyi, M. 1996. Creativity: Flow and the Psychology of Discovery and Invention. 1st ed. New York: HarperCollins.
Doherty, W. J., and R. P. Kelisky. 1979. "Managing VM/CMS Systems for User Effectiveness." IBM Systems Journal 18 (1): 143–163.
Doherty, W. J., and A. J. Thadani. 1982. The Economic Value of Rapid Response Time. White Plains, NY: IBM.
Du, J., and S. Sharples. 2011. "The Assessment of Vertical Daylight Factors Across the Walls of Atrium Buildings, Part 1: Square Atria." Lighting Research and Technology 44 (2): 109–123.
Einhorn, H. 1979. "Discomfort Glare: A Formula to Bridge Differences." Lighting Research and Technology 11 (2): 90–94.
Galasiu, A. D., and M. R. Atif. 2002. "Applicability of Daylighting Computer Modeling in Real Case Studies: Comparison Between Measured and Simulated Daylight Availability and Lighting Consumption." Building and Environment 37 (4): 363–377.
Grynberg, A. 1989. Validation of Radiance. Berkeley, CA: Lawrence Berkeley Laboratories. Document ID 1575.
Guynes, J. L. 1988. "Impact of System Response Time on State Anxiety." Communications of the ACM 31 (3): 342–347.
Harrold, R. 2003. IESNA Lighting Ready Reference: A Compendium of Materials from the IESNA Lighting Handbook. 9th ed. New York: Illuminating Engineering Society of North America.
Hopkinson, R. 1972. "Glare from Daylighting in Buildings." Applied Ergonomics 3 (4): 206–215.
Hoxmeier, J. A., and C. DiCesare. 2000. "System Response Time and User Satisfaction: An Experimental Study of Browser-based Applications." AMCIS 2000 Proceedings, 140–145.
IESNA Daylighting Metrics Committee. 2012. Lighting Measurement #83, Spatial Daylight Autonomy (sDA) and Annual Sunlight Exposure (ASE). New York: IESNA Lighting Measurement.
International WELL Building Institute. 2016. The WELL Building Standard® v1. New York: Delos Living LLC.
ISO 9241–303. 2008. Ergonomics of Human–system Interaction – Part 303: Requirements for Electronic Visual Displays. Geneva: International Organization for Standardization.
ISO 9241–3. 1992. Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs) – Part 3: Visual Display Requirements. Geneva: International Organization for Standardization.
Jakubiec, J. A., and C. F. Reinhart. 2011. "DIVA 2.0: Integrating Daylight and Thermal Simulations Using Rhinoceros 3D and EnergyPlus." Proceedings of Building Simulation 2011: 12th Conference of International Building Performance Simulation Association, Sydney, 14–16 November, 2202–2209.
Jakubiec, J. A., and C. F. Reinhart. 2012. "The 'Adaptive Zone' – A Concept for Assessing Discomfort Glare Throughout Daylit Spaces." Lighting Research and Technology 44 (2): 149–170.
Jones, N. L., and C. F. Reinhart. 2014. "Physically Based Global Illumination Calculation Using Graphics Hardware." Proceedings of eSim 2014: The Canadian Conference on Building Simulation, 474–487.
Jones, N. L., and C. F. Reinhart. 2015. "Validation of GPU Lighting Simulation in Naturally and Artificially Lit Spaces." Proceedings of BS2015: 14th Conference of International Building Performance Simulation Association, Hyderabad, India, December 7–9, 1229–1236.
Jones, N. L., and C. F. Reinhart. 2016. "Real-time Visual Comfort Feedback for Architectural Design." PLEA 2016 Los Angeles – 32nd International Conference on Passive and Low Energy Architecture.
Jones, N. L., and C. F. Reinhart. 2017a. "Experimental Validation of Ray Tracing as a Means of Image-based Visual Discomfort Prediction." Building and Environment 113: 131–150.
Jones, N. L., and C. F. Reinhart. 2017b. "Speedup Potential of Climate-based Daylight Modelling on GPUs." 15th Conference of International Building Performance Simulation Association, 1438–1447.
Jonson, B. 2005. "Design Ideation: The Conceptual Sketch in the Digital Age." Design Studies 26 (6): 613–624.
Jota, R., A. Ng, P. Dietz, and D. Wigdor. 2013. "How Fast Is Fast Enough? A Study of the Effects of Latency in Direct-touch Pointing Tasks." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2291–2300.
Kleindienst, S., and M. Andersen. 2012. "Comprehensive Annual Daylight Design Through a Goal-based Approach." Building Research & Information 40 (2): 154–173.
Konis, K. 2017. "A Novel Circadian Daylight Metric for Building Design and Evaluation." Building and Environment 113: 22–38.
Lafortune, E. P., and Y. D. Willems. 1993. "Bi-directional Path Tracing." Proceedings of CompuGraphics 93: 145–153.
Larson, G. W., and R. Shakespeare. 1998. Rendering with Radiance: The Art and Science of Lighting Visualization. San Francisco, CA: Morgan Kaufmann.
Liu, Z., and J. Heer. 2014. "The Effects of Interactive Latency on Exploratory Visual Analysis." IEEE Transactions on Visualization and Computer Graphics 20 (12): 2122–2131.
Mardaljevic, J. 2015. "Climate-based Daylight Modelling and Its Discontents." CIBSE Technical Symposium, London, April 16–17.
Nabil, A., and J. Mardaljevic. 2006. "Useful Daylight Illuminances: A Replacement for Daylight Factors." Energy and Buildings 38 (7): 905–913.
Nazzal, A. A. 2001. "A New Daylight Glare Evaluation Method: Introduction of the Monitoring Protocol and Calculation Method." Energy and Buildings 33 (3): 257–265.
Newell, A. 1994. Unified Theories of Cognition. Cambridge, MA: Harvard University Press.
Ng, A., et al. 2012. "Designing for Low-latency Direct-touch Input." Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, 453–464.
Ng, E. Y.-Y., L. K. Poh, W. Wei, and T. Nagakura. 2001. "Advanced Lighting Simulation in Architectural Design in the Tropics." Automation in Construction 10 (3): 365–379.
O'Hara, K. P., and S. J. Payne. 1998. "The Effects of Operator Implementation Cost on Planfulness of Problem Solving and Learning." Cognitive Psychology 35 (1): 34–70.
O'Hara, K. P., and S. J. Payne. 1999. "Planning and the User Interface: The Effects of Lockout Time and Error Recovery Cost." International Journal of Human–Computer Studies 50 (1): 41–59.
Oxman, R. 2006. "Theory and Design in the First Digital Age." Design Studies 27 (3): 229–265.
Parker, S. G., et al. 2010. "OptiX: A General Purpose Ray Tracing Engine." ACM Transactions on Graphics – Proceedings of ACM SIGGRAPH 29 (4).
Reinhart, C. F., and M. Andersen. 2006. "Development and Validation of a Radiance Model for a Translucent Panel." Energy and Buildings 38 (7): 890–904.
Reinhart, C., and P.-F. Breton. 2009. "Experimental Validation of Autodesk® 3ds Max® Design 2009 and Daysim 3.0." LEUKOS: The Journal of the Illuminating Engineering Society of North America 6 (1): 7–35.
Reinhart, C. F., and S. Herkel. 2000. "The Simulation of Annual Daylight Illuminance Distributions – A State-of-the-art Comparison of Six RADIANCE-based Methods." Energy and Buildings 32 (2): 167–187.
Reinhart, C. F., and O. Walkenhorst. 2001. "Validation of Dynamic RADIANCE-based Daylight Simulations for a Test Office with External Blinds." Energy and Buildings 33 (7): 683–697.
Robert McNeel & Associates. 2016. Rhinoceros 5.
Salman, H. S., R. Laing, and A. Conniff. 2014. "The Impact of Computer Aided Architectural Design Programs on Conceptual Design in an Educational Context." Design Studies 35 (4): 412–439.
Sheldon, N., et al. 2003. "The Effect of Latency on User Performance in Warcraft III®." Proceedings of the 2nd Workshop on Network and System Support for Games, 3–14.
Ubbelohde, M. S., and C. Humann. 1998. "Comparative Evaluation of Four Daylighting Software Programs." In 1998 ACEEE Summer Study on Energy Efficiency in Buildings, Pacific Grove, CA, 3.325–3.340. Washington, DC: American Council for an Energy-Efficient Economy.
US Green Building Council (USGBC). 2013. LEED Reference Guide for Building Design and Construction, LEED V4. Washington, DC: USGBC.
Whitted, T. 1980. "An Improved Illumination Model for Shaded Display." Communications of the ACM 23 (6): 343–349.
Wienold, J. 2009a. "Daylight Glare in Offices." PhD Thesis. University Karlsruhe.
Wienold, J. 2009b. "Dynamic Daylight Glare Evaluation." Eleventh International IBPSA Conference, Glasgow, Scotland, 944–951.
Wienold, J., and J. Christoffersen. 2006. "Evaluation Methods and Development of a New Glare Prediction Model for Daylight Environments with the Use of CCD Cameras." Energy and Buildings 38 (7): 743–757.
Yoon, Y., J. W. Moon, and S. Kim. 2016. "Development of Annual Daylight Simulation Algorithms for Prediction of Indoor Daylight Illuminance." Energy and Buildings 118: 1–17.