
Slide 1

L5

Slide 2
Human Decision Making and interaction

Slide 3
Where are we in the information model of Wickens and colleagues1: Decision
making and response selection

Slide 4
Dual system
Daniel Kahneman2 proposed dual-system theory as a way to simplify the complexity of our mind: a functional model that explains the way we take decisions.
We use two systems:
 The fast system (System 1): the automatic mind. It evolved first and is the main controller of our perception and behaviour.
 The slow system (System 2): the conscious, monitored mind. It came later in evolution and usually takes a back seat in controlling human perception and behaviour.

Some scientists prefer the terms emotional mind (System 1) and rational mind (System 2).

System 1: Like perception, its judgments and decisions happen quickly


EMOTIONAL MIND: Fast thinking (automatic, frequent, emotional,
stereotypic, subconscious). It deals with unconscious and automatic mental
processes: 2 + 2 = ?

System 2: For important or hard judgments/decisions (slow)


RATIONAL MIND: Control of perception and behaviour. Slow, effortful,
infrequent, logical, calculating, conscious. It deals with conscious and complex
decisions (e.g., 55 * 3 - 94 = ?). System 2 COULD check System 1, but often
doesn’t.

1
Wickens, C. D., Gordon, S. E., Liu, Y., & Lee, J. (1998). An introduction to
human factors engineering.
2
Kahneman, Daniel. Thinking, fast and slow. Macmillan, 2011.
See in your handbook the example in CH 10 (section: WE HAVE TWO
MINDS) regarding the baseball and the bat. In this example, most people report
$10 for the ball, which is obviously wrong if you check. The correct
answer for the cost of the ball is $5. System 2 can work that out; System 1
cannot.
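The slide gives only the answers; assuming the standard version of the puzzle scaled to these figures (bat + ball = $110 in total, with the bat costing $100 more than the ball; these totals are not stated above), the System 2 check is simple algebra:

```python
# Bat-and-ball check that System 2 can do and System 1 typically skips.
# Assumed figures (not stated on the slide): bat + ball = $110 in total,
# and the bat costs $100 more than the ball.
total = 110
difference = 100

# Solve: ball + (ball + difference) = total
ball = (total - difference) / 2
bat = ball + difference

assert ball == 5           # the correct answer, not the intuitive $10
assert bat + ball == total
```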

Slide 5
The fast system (System 1)
judges quickly, and usually its judgements are good enough to allow us to
achieve our goals.

 It judges only on what is perceived in front of you;


 It filters perceptions based on goals and beliefs;
 It excludes potentially conflicting information before any action of
System 2.
 System 1 seeks a coherent story above all else and often leads us to jump
to conclusions.

System 1 works mainly in terms of shortcuts and heuristics!

Slide 6
The slow system (System 2)
The perceptions and judgments of system 1 come quickly, and are usually good
enough to allow us to get by in most situations.

System 2 is lazy and only sometimes necessary, i.e., we are lazy thinkers.
We use System 2 only for complex and important situations; usually, we rely on
System 1.

System 2 requires a mental and conscious effort, while System 1 is always
running in the background and takes no conscious effort.

Slide 7
Why we need both systems:
A fully automatic brain (only System 1) would be inflexible: it couldn’t
switch goals mid-action or quickly adjust its response to rapidly changing
situations, and it would be very ineffective in complex situations.
A fully rational brain (only System 2) would be slow and very ineffective
at dealing with all the tasks we can perform automatically. Think about driving:
most of the tasks you are doing are automatic, you have learned how to do them
“without thinking”. System 1 is dealing with that, so you can, for instance, think
about other aspects while you are driving, e.g., mentally go through the key
concepts for an exam while you are driving to the university to take that exam.

Slide 8
System 2 IS REQUIRED when we need to perform exactly right: when, for
instance, we are in situations that System 1 does not recognize and therefore has
no automatic response for, or when System 1 has multiple conflicting responses
and no quick-and-dirty way to resolve them.

The SLOW system is the controller of the FAST system!

Slide 9
The design should take advantage of the fact that we can act and think both fast
and slow.

Slide 10
Designing considering Fast and slow thinking means:
• Support System 1: Indicate system status and users’ progress toward
their goal. e.g., Home>About us, Progress bar. This simplifies
recognition for the users
• Support System 1 and avoid/minimize the use of System 2: Tell
people explicitly and exactly what they need to know. Don’t expect
them to deduce information. Don’t require them to figure things out by a
process of elimination. Give clear instructions.
• Support the use of System 1 - Minimize complexity. Don’t expect
people to optimize combinations of many interacting settings or
parameters.
• Avoid the use of System 2 - Don’t make users diagnose system
problems. People do not want to or are not able to solve complex issues
related to technology. You need to create support tools and an interface
for when things go bad!
• Avoid the use of System 2 - Let people use perception rather than
calculation. Some problems that might seem to require calculation can
be represented graphically, allowing people to achieve their goals with
quick perceptual estimates instead of calculation.
• Avoid the use of System 2 - Let the computer do the math. Don’t
make people calculate things the computer can calculate itself.

• Facilitate System 1 - Make the system familiar. Use standards and
graphics/functions people have seen before (recognition is better than
recall!)

All these practices are about letting people use System 1!

Slide 11
Decision and framing

Slide 12
A decision is not only a matter of System 1 and System 2; it is also
associated with (and affected by) how the information is presented.

Slide 13
Framing is one of the reasons why people’s judgments and preferences are
unstable over time:
presenting a choice in one way nudges people toward one option; presenting
the options differently enables people to decide differently.

The value of the options is certainly important, but how choices/options are
framed (presented) may strongly affect people’s decisions.

Slide 14
In this case,3 The Economist subscription was presented with three options:
• Print-only subscription: $59
• Web-only subscription: $125
• Print and web access: $125

Only 16% of the subscribers opted for option 1, none for option 2, and 84% for
option 3.

3
From the behavioural economist Dan Ariely:
https://www.ted.com/talks/dan_ariely_are_we_in_control_of_our_own_decisions
Slide 15
A large majority of the subscribers then opted for the less expensive option:
once the second option was removed, the print-and-web offer appeared way too
expensive.
So the middle option, web-only access at the same price as print and web, was
“persuading” people of the convenience of the “print and web” option.

Slide 16
How we present (manipulate) information sets up the context for people to
understand these options. Framing is a powerful mechanism to manipulate
information:
• Negative: Fake news, fake quotations, fake information, fake options;
• Positive: to enable users to perform better or quicker, or to increase safety and
give them a sense of control (e.g., placebo buttons at many pedestrian
crossings with traffic lights).

Slide 17
Exploiting our expectations and the limitations of our fast thinking (System 1) is
often accepted as a way to change human behaviour positively.
Persuasive systems attempt to change our behaviour by pushing our
motivations, by challenging us, by rewarding us, etc.

Fogg Behaviour Model4 (FBM)

How a system persuades and pushes people to perform certain actions is
something that a good persuasive system takes into account.
So we want to change or affect a behaviour (B), and to do that we can use some
triggers (T: design, information, rewards, etc.) to motivate (M) people and to
increase their ability (A) to find information, perform certain tasks, etc.
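As a rough illustration, the FBM can be sketched in code. This is a toy sketch under stated assumptions: Fogg’s model is qualitative, so the multiplicative combination of motivation and ability and the numeric threshold used here are illustrative choices, not part of the original model.

```python
def behaviour_occurs(motivation, ability, trigger_present, threshold=1.0):
    """Toy sketch of the Fogg Behaviour Model (B = MAT).

    A behaviour happens when motivation and ability together exceed an
    activation threshold AND a trigger is present. The multiplicative
    combination and the numeric threshold are illustrative assumptions;
    Fogg's model itself is qualitative.
    """
    return trigger_present and (motivation * ability) >= threshold

# High motivation can compensate for low ability (and vice versa),
# but without a trigger no behaviour occurs.
assert behaviour_occurs(motivation=2.0, ability=0.6, trigger_present=True)
assert not behaviour_occurs(motivation=2.0, ability=0.6, trigger_present=False)
```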

Slide 18
When persuasion is used in malicious ways we talk about manipulative design.
This is an approach to design systems and processes to influence or trick users

4
Fogg, B. J. (2009, April). A behavior model for persuasive design. In Proceedings of the 4th
international Conference on Persuasive Technology (pp. 1-7).
into taking particular actions that they might not otherwise take. Several
types of malicious tricks are reported in this infographic.

Slide 19
Design Errors and ROI
The bad design of everyday objects is a very problematic (and often
undiscussed) issue.
For instance, on October 21, 2020, about 10,000 units of this chair were recalled
from the US and Canadian markets because the locking mechanism on the chair
frame can disengage, posing a fall hazard. In total, 19 reports of chairs breaking,
including four reports of minor injuries from falls, were received.
All of this was avoidable with a simple test, for instance, by using the prototype
for a period of time before introducing it to the market.
Unfortunately, everyday products are often not tested because most companies
do not see the Return on Investment in the evaluation.

Slide 20
We discussed in this course that our brain can lie to us, and also that “Design
and Systems” can lie to us or use our limitations against us.
We need to be aware of that as designers and as users.
In today’s complex society we need to learn to rely more and more on the slow
system. Moreover, errors may always occur, but we need to demand design
responsibility from companies and institutions.

As designers, you should take a stand. You are going to deal with issues,
sometimes even important ones. There are two approaches to dealing with
problems that are embedded in the core of our society (i.e., unavoidable unless
we redesign the entire system):
1. You can decide to hide the problems from view, making them less
perceivable [left picture]. This will not mitigate the issue at all; it will
just make the system appear better.
2. You can try to mitigate the issue with design solutions. This will raise
awareness of the issue, open up the possibility of being inclusive, and
offer some solutions for the end-users [right picture].
I let you decide where you stand!

Slide 21
ACTIVITY L6
Slide 22
Priming: how a product is presented in terms of information (e.g., packaging,
marketing, instructions, etc.) may affect people’s perception of that object.
Perceptual frames: we have expectations about objects or events that are
usually encountered in a situation (e.g., you know most rooms in your home
well enough that you need not constantly scrutinize every detail). Our
expectations can be biased by this previous knowledge. For instance, if an
object is moved from one place to another in our house, it could take a while for
us to realise it.

Priming and perceptual frames are both related to the perception of
information, and to how this perception may be biased by the presentation of
information (priming) or by our expectations based on previous knowledge
(perceptual frames).

Framing is related to decision making: how the presentation of options and
choices affects our decisions. Perceptual frames may also affect our decisions,
directly or indirectly. If we believe that something should be in a certain
place, we will maintain this assumption until we discover that it is false, and
this discovery could have consequences, i.e., I cannot find the car keys, I
am going to be late!

These two phenomena have one thing in common: the context.

In fact, the manipulation of the context (priming) and our expectations regarding
the context (frames) may affect how we perceive reality, our decision-making
and ultimately our behaviour.

Slide 23
DESIGN PERSUASION & S1 & S2

Slide 24
Computers are good—reliable, fast, and accurate—at precisely what we are bad
at: remembering, calculating, deducing, monitoring, searching, enumerating,
comparing, and decision making.
People use computers for that reason. Designers need to build interfaces
and products that help people perform their tasks rationally and minimize
the need to use System 2.
This is the reason behind the title of Steve Krug’s very famous book:
<<Don’t make me think>>

Slide 25
Decision Support Systems (DSS) – DESIGN FOR SYSTEM 2
In the slide, an example of a dashboard (information visualization) to support
decisions regarding a supply chain.

In this case, we want to support accurate decisions by providing a way to
manage, filter and visualize multiple complex data: the management of
complexity.
Key aspects of DSS systems (support of System 2):
 Provide all options. If there are too many to simply list, organize or
abstract them into categories and subcategories, and provide summary
information for those, so people can evaluate and compare entire
categories of options at once.
 Help people find alternatives. Some solutions may be so counterintuitive
that people don’t consider them. Decision support systems can expose
options users might miss and can generate variants on user solutions that
are minor or even major improvements.
 Provide unbiased data. That is, data created or collected in an objective,
reproducible manner.
 Don’t make people calculate. Perform calculations, inferences, and
deductions for users where possible. Computers are good at that; people
aren’t.
 Check assertions and assumptions. Decisions are based not only on data
but also on assumptions and assertions. Decision support systems—
especially those supporting critical or complex decisions—should let
users declare any assumptions and assertions upon which a decision will
be based, and then should “sanity check” them for the user.

Slide 26
Persuasive Design for good – DESIGN FOR SYSTEM 1
Example from https://www.net-a-porter.com/en-nl/
You want to buy something. The system provides you with examples of other
products and accessories to wear with the product you are looking at, and other
(similar) products you may also like.
The system is designed to influence or persuade people by suggesting
associations that may push you to spend more or to grow your wish list. This is
different from simplifying complexity and supporting decision making. The
system is not presenting you with all the possible choices and giving you the
possibility to filter and manage the information. The system is suggesting what
else to buy in terms of predefined choices (based on predefined criteria). This
system is persuading you that, attached to the target product, there are other
items you want to look at.

In this specific case, if we refer to Fogg’s model, the target behaviour is to
inform you about what else you can buy in association with the product you are
looking at (Behaviour). The motivation is that fashion items are usually linked to
outfits that you can compose (Motivation); therefore, the system offers you
the ability to connect items and to create your personal choice among the
available options (Ability). In this case, we are talking about a legitimate set of
triggers: the system tells you how you could or should wear the product
(“WEAR IT” label) and what else “you may also like”. These triggers play
on System 1 (your fast reaction to the triggers), but they are legitimate
modalities: the interface is not playing tricks on you, for instance by suggesting
that there is limited availability of the product and that you should buy it now.

Slide 27
BAD PERSUASION - DESIGN FOR SYSTEM 1
This example of a web system, through which you can book hotel rooms all over
the world, plays on System 1. The interface continuously pushes end-users to
believe (without saying so) that the displayed deal is a special offer
(JACKPOT! YOU GOT A GREAT RATE!), and that someone else could still
steal this offer (3 other people looked at this in the last 10 minutes). It seems
to be a great deal; in fact, as reported in another trigger, “someone else just
booked on the same dates 8 minutes ago”.
The behaviour the system is persuading you towards is booking a room as soon
as possible, without doing further comparisons or searches.
Once you have selected an offer, the interface does not give you the ability to
make further comparisons with other offers, only to select within the offer
(Ability). To motivate you to book quickly, several triggers are proposed, all
based on unverifiable information that the end-users cannot check or
visualize.

This is very close to irresponsible persuasive design. It is not (yet) illegal to
adopt these practices, but we can consider these design patterns borderline
between persuasion and dark patterns.
Slide 28
Action, controls (CH 13) and errors

Slide 29
Selection and Execution of response

Slide 30
FITTS’ LAW. Your handbook is quite accurate about Fitts’ law; therefore, we
can discuss this law rapidly.

Slide 31
Fitts was interested in establishing the difficulty of a target selection task
when people are interacting with point-and-click interfaces. This law helps in
defining some useful principles that we have to consider when designing
interfaces.
When you move an object from one point to another (moving toward a target):
• The larger your target is and the nearer it is to your starting point, the faster
you can point to it.
• This applies to all the types of controls in which people may point and click.
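The relationship above is usually written as MT = a + b * log2(2D/W), where D is the distance to the target and W is its width; a and b are device- and user-dependent constants. A small sketch, with purely illustrative constants:

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.1):
    """Predicted movement time (seconds) under Fitts' law.

    MT = a + b * log2(2 * distance / width); a and b are
    device- and user-dependent (the defaults are illustrative only).
    """
    index_of_difficulty = math.log2(2 * distance / width)  # in bits
    return a + b * index_of_difficulty

# A large, near target is faster to hit than a small, far one.
near_large = fitts_mt(distance=100, width=50)  # ID = log2(4) = 2 bits
far_small = fitts_mt(distance=800, width=10)   # ID = log2(160), about 7.3 bits
assert near_large < far_small
```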

Slide 32
Following Fitts’ law, circular menus enable fast cursor-to-target motions. If
the cursor is at the centre of the circular menu, it is equidistant from all the
options, and you will have the same level of difficulty reaching each of them.
This is good in theory, and in some cases this solution is adopted with good
results; however, when interacting with digital devices, the cursor is not always
at the centre of the menu (no advantage).

Slide 33
Fitts’ law predicts that, if the pointer or finger is blocked from moving past the
edge of the screen, targets at the edge will be very easy to hit: people can just
yank the pointer toward the target until the edge stops it, with no need for slow,
fine adjustments at the end of the movement.
Thus, from the point of view of Fitts’ law, targets at the screen’s edge behave as
if they were much larger than they are. However, this edge-pointing detail of the
law applies mainly to desktop and laptop computers, because modern
smartphones and tablet computers don’t have raised edges that physically stop
fingers.

Slide 34
 Make click-targets—graphical buttons, menu items, links—big enough so
that they are easy for people to hit.
 Make the actual click-target at least as large as the visible click-target.
 Checkboxes, radio buttons, and toggle switches should accept clicks on
their labels as well as on the buttons, thereby increasing the clickable area.
 Leave plenty of space between buttons and links so people don’t have
trouble hitting the intended one.
 Place important targets near the edge of the screen to make them very
easy to hit.
 Display choices in pop-up and pie menus if possible.

Slide 35
Complexity advantage and Hick-Hyman law.

Slide 36
When we select a response/action to perform a task,
the time required to make a decision (select and execute the action) is a function
of the number of available options,
i.e., intuitively, the more options, the more time!
However, the time to make a decision when we have 4 equal options (2 bits of
information) is not double the time when we have 2 options (1 bit of information).

Slide 37
A person may often transmit more information, or perform faster per unit of
time, when operating with a few complex (information-rich) but manageable
choices than when working with several simple (information-reduced) choices.
Slide 38
The time to select an option grows roughly with the logarithm of the number
of possible choices, i.e., more options, more time to decide (Hick’s law).

The frequency of the decision also matters: people are better at selecting 1
option among 8 (one complex decision, 3 bits) than at selecting one option
among two, repeated 4 times (simple decisions, 4 × 1 bit), i.e., breaking
complexity into a set of small tasks may add unnecessary frustration.
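The comparison above can be sketched with the Hick-Hyman relationship RT = a + b * log2(N). The constants a and b below are purely illustrative, but the fixed cost a, paid once per decision, is what makes four small decisions slower than one larger one:

```python
import math

def hick_rt(n_options, a=0.2, b=0.15):
    """Reaction time for one choice among n_options (Hick-Hyman law).

    RT = a + b * log2(n_options); the constants a (fixed cost per
    decision) and b (cost per bit) are illustrative only.
    """
    return a + b * math.log2(n_options)

# One choice among 8 options carries 3 bits of information...
one_complex = hick_rt(8)
# ...while four successive choices among 2 carry 4 x 1 bit,
# but pay the fixed cost a four times.
four_simple = 4 * hick_rt(2)
assert one_complex < four_simple
```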

Slide 39
How do we handle complexity? It depends on the context: is the complexity
manageable? Which are the frequent actions?

Showing a large set of visible options is perceived by people as complex, i.e., too
many options slow down users’ reaction times and expose people to errors
and workload.

Reducing the number of stimuli/options to a manageable group
enables end-users to perform a faster decision-making process (and usually
make fewer errors), especially when the actions we need to perform are not
frequent ones.

SHOWING SEVERAL OPTIONS IS NOT ALWAYS BAD: if we show a
manageable set of options for frequent actions and minimize the information
regarding infrequent actions, people will interact in a smoother way.

Slide 40
Complexity advantage: Implications for design

Slide 41
• KEEP CHOICES SIMPLE!
o Do not give users more choices than are essential (e.g., when time is critical).
o Long menus, with lots of rarely chosen options, may not be desirable.
o Offering many choices = longer response times and an increased possibility
of mistakes.
o More items/options typically lead to greater similarity between items,
and confusion.
o Provide the necessary, most-used information/options.
• USING A SMALL NUMBER OF COMPLEX CHOICES WITH KEY
(frequent) OPTIONS IS USUALLY PREFERABLE TO MULTIPLE
SIMPLE CHOICES.
• COMPLEXITY IS PART OF ANY SYSTEM! IT CANNOT BE
ELIMINATED, BUT IT CAN BE MANAGED TO ENABLE BETTER
PERFORMANCE.
• SOMETIMES IT IS NECESSARY TO ADD COMPLEXITY IN A
CONTROLLED WAY - e.g., when we want to enable the user to think and to
perform slowly and accurately.

Slide 42
On online websites the payment process is often performed in sequential steps,
with few elements (sequential choices), instead of presenting people with a long
and complex form. This approach:
• increases the time to perform the task (login, select, insert in the basket,
confirm the selection, insert data, check the HTTPS connection, payment,
payment confirmation and tracking email);
• enables people to have more control over errors (error recovery).

HOW WE MANAGE AND DISTRIBUTE COMPLEXITY AFFECTS THE
USER JOURNEY!

Slide 43
In three empirical studies, Iyengar and Lepper (2000)5 showed
<<that, although having more choices might appear desirable, it may sometimes
have detrimental consequences for human motivation.>> Their study of
consumer behaviour suggested that the provision of extensive choices, though
initially appealing to choice-makers, may nonetheless undermine choosers’
subsequent satisfaction and motivation.

Netflix is a very good example of the choice paradox: too many choices often
lead to indecision!

5
Iyengar, S. S., & Lepper, M. R. (2000). When choice is demotivating: Can one
desire too much of a good thing? Journal of Personality and Social
Psychology, 79(6), 995.
How much time do you spend selecting a movie?

Slide 44
BREAK

Slide 45
Errors

Slide 46
The most basic distinction between errors:
 Commission/execution error: your action results in an error.
 Omission (failing to execute a necessary action): an error happens because
you forget to perform a certain action.

Slide 47
Skills, Rules, Knowledge model of Rasmussen6
A taxonomy of unsafe acts, distinguishing between voluntary and involuntary
actions. The diagram on the slide (pg 47) is important!

Slide 48
Violations of Rules
• Sometimes someone makes an intentional error: the knowledge and
awareness of the context or mode were correct, but the wrong action is still
chosen to perform the INTENDED action.
• A violation is when you want to do a specific action/task (intended) but you
DELIBERATELY PERFORM A PROCEDURAL ERROR or use a
wrong approach to achieve the goal (deliberate).
• This type of error constitutes a violation (and could be punishable by law).

6
Rasmussen, J. (1983). Skills, rules, and knowledge: signals, signs, and
symbols, and other distinctions in human performance models. IEEE
Transactions on Systems, Man, and Cybernetics, SMC-13, pp. 257-267.
Types of Violations:
• The procedures have changed (the rule no longer applies) but new rules are
not available yet, or rules are voluntarily ignored for convenience: ROUTINE
VIOLATION
• People have competing demands and need to speed up, so they decide
to break the rules: SITUATIONAL VIOLATION
• People deal with unexpected circumstances and try to solve a problem
that cannot be solved by following the rules: EXCEPTIONAL
VIOLATION
• People were asked to do so for ORGANISATIONAL REASONS
• People may decide to violate rules for personal reasons (INDIVIDUAL)

Slide 49
Sometimes people may do the wrong thing believing it to be right! In these
cases, we have an error in thinking and planning the action.

These actions are INTENDED, because people want to do the specific
action/task, but an ERROR is performed because people do not know that they
are making a mistake, for instance:
• Unusual situations that require thinking and applying a divergent problem-
solving strategy;
• People may rely on processes or information that are out of date (e.g., driving
an unfamiliar route with an outdated navigation system);
• People may misdiagnose a process and take inappropriate corrective action
(due to lack of experience or insufficient/incorrect information, etc.).

Slide 50
While a situation may be diagnosed and understood correctly, rule-based errors
may result from a failure to apply the correct rules for the selection of a
response, or from applying bad rules.
E.g., we decide to ignore an alarm in a real emergency because of a history
of spurious alarms…

Slide 51
Errors may result from slips of action, when the correct response is intended but
an incorrect action is actually released, i.e., an unintended response “slips” out of
the hands (Norman, 1981).
Slips of this sort are typically the result of poor human factors design, such as
incompatible control–display relationships or confusing displays or controls,
coupled with an operator who is well skilled and performing a task in a highly
automated mode, thereby not carefully monitoring his or her selection of
actions.

In skill-based errors due to slips, we want to perform an action/task but we end
up doing something else (UNINTENDED). This unintended action results in
an INADVERTENT error.

A simple, frequently performed physical action goes wrong:
• Pressing the wrong key while typing
• Saying the wrong word (slips of the tongue)
• Clicking the wrong choice in a menu (“delete” instead of “save”…)

Slide 52
Similar to slips we can inadvertently forget to perform an important/mandatory
action is forgotten (Omission of an important task).
So without intent, we did not finish appropriately an action because we forget
something and this resulted in an error.
i.e., Nowadays “MS Word” automatically saves documents every ‘tot’ minutes.
In the past, many potential masterpieces were lost because people forget to save
before close.

Slide 53
Controls (Actuators) of interactive systems

Slide 54
Control (actuator): the part of the system that is directly actuated by the operator,
e.g., by applying pressure (ISO 9355-1, 1999).
A control is adjusted or manipulated by the human hand to effect change in a
system.

Slide 55
What we have discussed in terms of principles and methods for usability and
UX assessment can also be applied to redesign controls, e.g.:
• Clustering information (e.g., card sorting)
• Hierarchical labelling: organise information in a hierarchical manner
• Demarcation + similarity and symmetry to manage and handle
complexity, i.e., breaking the interface into chunks so that it becomes easier
to process
• Usability inspection with experts + usability tests to review and inform the
redesign

Slide 56
Stereotypes + affordances
Controls are the tools to perform actions. Whenever we deal with controls, we
have expectations about how the movement of that control will be associated
with the corresponding action.

These expectations may be defined as stereotypes, due to previous knowledge of
similar systems. For instance, in the case of the picture in the slide, the
AFFORDANCE is that the control is “turnable”; the STEREOTYPE is that
turning it clockwise increases the volume.

Slide 57
In displays, the user’s control experience is defined by the formula of GAIN.

Gain is the ratio between the magnitude of the output signal (movement on the
display) and the magnitude of the input (physical movement of the control) at
steady state. Many systems contain a method by which the gain can be altered,
providing more or less "power" to the system.
E.g., the gas pedal of a car is an example of GAIN: the more you push down the
pedal, the higher the speed of the car. However, the relationship between the
action “pushing the gas pedal” and “increasing the speed of the car” is
controllable and alterable.

We can alter the RATIO of the controls for two reasons, for instance:
- We want people to perform a quick movement on the display with a minimal
amount of physical effort. Therefore, we want a LOW RATIO (HIGH display
movement / LOW physical movement), e.g., 3 units : 2 units = a GAIN of 1.5
units of display movement per unit of physical movement (HIGH GAIN).
- We want to slow down the users, to focus their attention and increase accuracy,
so we increase the physical movement required (or reduce the display
movement): HIGH RATIO (LOW display movement / HIGH physical
movement), e.g., 3 units : 6 units = a GAIN of 0.5 units of display movement
per unit of physical movement (LOW GAIN).
In the slide, you have an example of how you can change the GAIN of the
mouse (in Windows systems).
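The two numeric examples above reduce to one division; a minimal sketch:

```python
def gain(display_movement, control_movement):
    """Control-display gain: display output per unit of control input.

    The control/display RATIO is the inverse of the gain.
    """
    return display_movement / control_movement

# LOW ratio -> HIGH gain: quick display movement with little physical effort.
assert gain(display_movement=3, control_movement=2) == 1.5
# HIGH ratio -> LOW gain: slower, more accurate control.
assert gain(display_movement=3, control_movement=6) == 0.5
```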

Slide 58
When we operate a system, having control means being able to manage the
stimuli and the reactions of the system, and being able to continuously correct
actions and to manage complexity and lags.

http://www.youtube.com/watch?v=LRrePuCiaAY&feature=fvsr
