HCI Guidelines
Author: WP06.1
Identifier: D06.1.f
Type: Deliverable
Version: 1
Class: Public
Summary
This document discusses HCI privacy design in general and points to specific problems such as
ease of use vs. informed consent. The document gives guidance regarding HCI requirements and
makes proposals on user interfaces (UI) of the PRIME integrated identity management prototype,
especially for web-browsing scenarios. Test results are discussed but experiments are not
presented in full. This document also gives a brief overview of HCI issues for the two PRIME
application prototypes for collaborative eLearning and location-based services for mobile phones.
Application providers interested in adopting the PRIME architecture or similar architectures will
in this deliverable find suggestions for user interface design as well as discussions of problematic
issues. Moreover, PRIME ontologies are presented and an architecture-level sketch is given of
how UI components connect to the PRIME core.
The PRIME project receives research funding from the Community’s Sixth Framework Programme and the Swiss Federal
Office for Education and Science.
Foreword
Several PRIME partners have contributed text to chapters of this deliverable; see the list below.
This document was edited by John Sören Pettersson (Karlstads universitet), who also contributed
most of the text.
Table of Contents
Executive Summary
1 Introduction
2 Related work
2.1 Perceptions of fairness of data processing
2.1.1 User perception of and trust in web sites
2.1.2 Trustguide
2.2 Specific UI paradigms for privacy systems
2.3 Usability of security systems
2.4 PISA: from privacy principles to HCI requirements
2.4.1 PISA: Testing of user interfaces for a privacy agent
2.4.2 “Just-In-Time-Click-Through Agreements” (JITCTAs)
2.5 Privacy UI vocabularies and information structuring
2.5.1 Multi-layered structure for compliance with the EU directive
2.6 New vistas for IDM systems: Tools for exercising rights
10 Outlook
11 References
List of acronyms
AP Application Provider, Application Prototype
APV2 Application Prototype version 2
APWG Anti-Phishing Working Group
ATUS A Toolkit for Usable Security (from Universität Freiburg)
CeL Collaborative eLearning
DADA Drag-and-drop agreement
DSM Decision-suggesting Module (in BluES’n)
DRIM Dresden Identity Management
DS Data Subject
EC European Commission
EU European Union
EuroPriSe European Privacy Seal
HCI Human–Computer Interaction
HCI-P Privacy HCI
HCI-S Security HCI
IPV1 (PRIME) Integrated Prototype version 1
IPV2 (PRIME) Integrated Prototype version 2
IPV3 (PRIME) Integrated Prototype version 3
HTTP HyperText Transfer Protocol
IAP Intra-Application Partitioning
ID Identity
IDM Identity Management
ISO International Organization for Standardization
JSP JavaServer Pages
JITCTA Just In Time Click-Through Agreement
LBS Location-Based Services
LI Location Intermediary
MO Mobile Operator
OIP Ontology Information Point
OWL Web Ontology Language
P3P Platform for Privacy Preferences
PD Personal Data
PET Privacy Enhancing Technology
Executive Summary
PRIME project. The guiding principle in the PRIME project is to allow individuals to be in control of
their personal data. The notion of user control has been adopted in many recent user-centric identity
management initiatives. However, most of these approaches only provide the first steps on the way to
a new generation of identity management systems. They do not provide adequate safeguards for
personal data and are restricted to giving individuals limited control over their personal data.
Effective management of one’s own privacy in the online world requires new tools starting with the
minimisation of personal data disclosure. Furthermore, users can be empowered with tools that allow
them to evaluate the privacy capabilities and performance of service providers, and to match privacy
policies with user preferences and also to negotiate privacy policies with service providers. Service
providers would need systems that enforce agreed policies by technical means and keep track of actual
use, while users would need tools to keep track of data collection and agreed usage. In addition, user
applications should be able to connect to service providers to verify that data usage complies with the
negotiated policies; tools should be provided to users for exercising legal privacy rights.
Human-computer interaction. The tools just mentioned are, however, not effective instruments for
privacy protection if people in general cannot use them. Giving these functions informative and
intuitive user interfaces is therefore crucial for putting individuals in control of their personal data.
The Human-Computer Interaction (HCI) work package has in collaboration with other PRIME work
packages developed a host of concepts and concrete designs. User tests have gathered input from
ordinary Internet users on their ability to use the PRIME tools and their acceptance of such privacy-
enhancing technology. Guidelines for implementing appropriate HCI for privacy-related functionality
have been derived from the project results. The HCI work package approaches regarding PRIME’s
main functionalities are sketched in the following.
Minimisation of personal data disclosure. In an everyday web scenario, where an Internet user surfs
to the homepage of a service provider, browses several pages, and then decides to order a service,
keeping data disclosures to a minimum is often desirable.
Within the PRIME project an electronic visiting card concept called “PrivPref” has been elaborated,
whereby users with no knowledge of the basics of data processing are able to use different Privacy
Preference settings. Anonymous communication, pre-limited data sets, and pre-defined purposes for
data processing are features included in the different PrivPrefs.
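The deliverable does not specify a concrete data format for PrivPrefs. As a purely hypothetical sketch (the class and field names below are invented for illustration, not taken from the PRIME implementation), such a pre-defined setting could bundle the three features just mentioned:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PrivPref:
    """Hypothetical electronic visiting card: a named bundle of privacy settings."""
    name: str
    anonymous_communication: bool  # route traffic through an anonymising network
    allowed_data: frozenset        # pre-limited set of data items that may be disclosed
    allowed_purposes: frozenset    # pre-defined purposes for data processing

# Two example cards a user could switch between without having to
# understand the underlying details of data processing.
ANONYMOUS = PrivPref(
    name="Anonymous browsing",
    anonymous_communication=True,
    allowed_data=frozenset(),      # disclose nothing
    allowed_purposes=frozenset(),
)
SHOPPING = PrivPref(
    name="Shopping",
    anonymous_communication=False,
    allowed_data=frozenset({"name", "delivery_address", "email"}),
    allowed_purposes=frozenset({"order_fulfilment"}),
)
```

Switching from the “Shopping” card back to “Anonymous browsing” would then amount to selecting a different pre-defined object rather than editing individual settings.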
Another strand of design developed conceptually within PRIME concerns the “TownMap” metaphor
whereby different areas on a stylised street map represent different privacy preferences. The map
design is an attempt to make preference settings more accessible and, hopefully, more understandable
to users. The map design also makes it possible to show the user how the data transfers are taking
place, and it also allows the user to interact graphically by dragging data icons to service providers.
Evaluate the privacy capabilities and performance. A data disclosure function should be augmented
with some way to check the trustworthiness of service providers that are not familiar to the user.
Within PRIME the focus is on the privacy-related information that can be beneficial for the user to
have; this information is similar to the information available nowadays concerning price comparisons
and customer satisfaction ratings concerning product quality, delivery, etc. In essence, the service
providers should be trustworthy in processing the user’s personal data in a privacy-friendly manner.
To this end, an “Assurance Evaluation” component has been implemented within the PRIME project.
The Assurance Evaluation can rely partly on other parties performing more advanced checks and focus
on checking ‘trust marks’, especially privacy seals, issued by such parties.
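The interface of the Assurance Evaluation component is not detailed in this summary. As a rough, hypothetical sketch of the idea of relying on trust marks issued by third parties (seal names other than EuroPriSe are invented here):

```python
# Hypothetical sketch: reduce an assurance check to verifying a provider's
# claimed trust marks against seals whose issuers perform the deeper checks.
TRUSTED_SEALS = {"EuroPriSe", "ExampleSeal"}  # seals backed by trusted third parties

def recognised_seals(claimed_seals):
    """Return the subset of a provider's claimed seals that a trusted issuer backs."""
    return set(claimed_seals) & TRUSTED_SEALS
```

A real component would of course also have to verify that a seal was genuinely issued to this particular provider and is still valid, not merely claimed.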
Matching and negotiating privacy policies. Using pre-defined purposes for data processing makes it
possible to alert users to data requests that are excessive. This is included among the PrivPref
features. On some occasions it may be very hard to prescribe what constitutes sufficient data. For
instance, when someone buys something but also agrees that (some) personal data may be used for
sending future information on “special offers”, it would be hard to construct a PrivPref that can
foretell in advance what types of data and what retention period should be admitted. Such things will
depend very much on the user’s expectations, in each situation, regarding the promised special offers.
The PRIME HCI work package has developed concepts for manually setting the obligations that the
data processor must follow.
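The alerting step described above can be sketched as a simple set comparison. This is a hypothetical illustration, not the PRIME implementation, and all names are invented:

```python
def excessive_request(requested_items, requested_purposes, allowed_items, allowed_purposes):
    """Hypothetical check: return what a service requests beyond the active PrivPref."""
    extra_items = set(requested_items) - set(allowed_items)
    extra_purposes = set(requested_purposes) - set(allowed_purposes)
    return extra_items, extra_purposes

# A shop asks for more data, and for an extra purpose, than the active
# preference permits; the UI can then alert the user, who may manually set
# obligations (e.g. a retention period) for the extra items.
extras = excessive_request(
    requested_items={"name", "email", "phone"},
    requested_purposes={"order_fulfilment", "special_offers"},
    allowed_items={"name", "email"},
    allowed_purposes={"order_fulfilment"},
)
```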
Keep track of data collection. Within the PRIME prototypes, the “Data Track” function allows a user
to look up what personal data he or she has released to others. Data transmissions administered by the
user side PRIME system are logged and stored on the end user’s device. Being able to track what data
were disclosed, when, to whom, and how the data are further processed, is an important feature to
make the processing of personal data transparent. The Data Track stores transaction records
comprising the personal data sent, including pseudonyms used for anonymous transactions and
credentials used to prove the correctness of the data (especially important for non-anonymous
transactions); the transaction records also include the date of transmission, the purpose of data
collection, the recipient, and all further details of the privacy policy that the user and recipient have
agreed upon.
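The record fields listed above suggest a structure roughly like the following. This is a hypothetical Python sketch; the actual storage format of the Data Track is not given in this summary:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DataTrackRecord:
    """Hypothetical record of one data disclosure, stored on the end user's device."""
    disclosed_data: dict           # personal data sent, e.g. {"email": "..."}
    pseudonym: Optional[str]       # pseudonym used, if the transaction was anonymous
    credentials: list              # credentials proving correctness of the data
    transmission_date: date
    purpose: str                   # stated purpose of the data collection
    recipient: str
    agreed_policy: dict            # further details of the agreed privacy policy
```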
Exercising rights. The records in the Data Track contain valuable information in case a user feels that
his data have been wrongly used. Storing the pseudonyms used for transactions allows a user to
identify himself as a previous contact in case he wants to exercise his rights to access, rectification,
blocking, or deletion while still remaining pseudonymous. These features have so far only been
realised as dummy implementations, as they demand further elaboration on the services side, but they
constitute interesting prospects for empowering the user beyond the concept of “minimisation of data
disclosure”. To these features could be added links to online help functions of various benevolent
organisations, such as issuers of privacy seals, data protection boards, and consumer organisations.
Specific application prototypes. A demonstrator based on web scenarios has been built to demonstrate
the PRIME Integrated Prototype. There are also two other prototypes with specific scenarios: the
Collaborative eLearning prototype and the Location-Based Service prototype (mobile phones). These
have developed user interfaces emphasising functions relevant for privacy-friendly collaborative e-
learning and location-based services, respectively (brief descriptions can be found in this report).
Conclusion. The work on Human-Computer Interaction in the PRIME project has produced many
results on user interfaces for PRIME-based products and on users’ reaction to such technology. There
is now a need to present a comprehensive overview of critical comments and recommendations
pertaining to the kind of user interface designs implemented or considered within the PRIME project.
This is the motivation to produce a final deliverable on “HCI Guidelines”.
The target group of these guidelines consists of application providers interested in adopting the
PRIME architecture or similar architectures, as well as programmers and user interface designers
working in the area of privacy and security functionality. They will in this deliverable find suggestions
for user interface design as well as discussions of problematic issues. Moreover, PRIME ontologies
are presented and an architecture-level sketch is given to illustrate how user interface components
connect to the PRIME core.
1 Introduction
Informative and intuitive user interfaces are crucial for effective privacy protection, as stated in
Annex 1 of the project description of PRIME, Privacy and Identity Management for Europe (8.1.1,
Deliverables). The present deliverable provides guidelines for the design of the user interfaces (UIs)
for Privacy and Identity Management Systems (PIM Systems) with a focus on the user-side UIs, rather
than the services-side.
The work on Human-Computer Interaction (HCI) in the PRIME project has produced many results on
user interfaces for PRIME-based products and on users’ reaction to such technology. The HCI work
package has in collaboration with other PRIME work packages developed many interesting ideas and
concrete designs, and in user tests gathered input from ordinary Internet users on different aspects of
the question of user acceptance of privacy-enhancing technology. These results have been published in
deliverables1 and working papers as well as in conference papers and book chapters.
There is now a need to present a comprehensive overview of recommendations and critical comments
pertaining to the kind of UI designs implemented or considered within the PRIME project. This is the
motivation to produce a final deliverable on “HCI Guidelines”.
In order to define the state-of-the-art and to put PRIME results in perspective, this deliverable starts
(in chapter 2) with an account of the literature on privacy-related HCI questions. This account includes
comments on the relevance for the PRIME work.
Thereupon follows an “Introduction to PRIME UI paradigms (for web browsers)” in chapter 3. This
introduction is rather brief but is needed for the two subsequent chapters: the tabulations in chapter 4
of HCI requirements also mention PRIME proposals to meet these requirements; and the discussions
in chapter 5 on design considerations and evaluations depend on the specific prototypes with
their specific UIs.
The introduction in chapter 3 covers the main user interfaces that have been discussed for the service–
client web scenarios, i.e. the scenarios that have been used for demonstration of the different versions
of the PRIME integrated prototype. Section 3.1 gives a condensed presentation of these main user
interfaces, while somewhat more comprehensive presentations including illustrations are given in 3.2-
3.6.
Chapter 4 relates privacy principles to requirements on the HCI design. It starts with a discussion of
the different focuses a dedicated Privacy HCI discipline (HCI-P) may have. Different perspectives give
different focuses, and the rest of chapter 4 takes a few compelling perspectives and derives HCI
guidelines from them. Firstly, chapter 4 tabulates the HCI requirements and implications derived from
legal privacy principles. Secondly, chapter 4 lists a set of adoption criteria for privacy-enhanced
applications for end users. Thirdly, it derives eight general HCI guidelines for PIM systems. Finally,
guidelines for a user-oriented design process are presented and also advice on testing and evaluation.
The development of the PRIME integrated prototype has been paralleled by the development of some
minor demonstrator scenarios, all based on web browsing use. Chapter 5 collects ideas and lessons
learnt during this process. The chapter discusses specific points in some detail, with illustrations of
mock-ups used in user tests and of different UI elements that have been considered for good usability
of privacy functions. The considerations are often applicable outside the web-browsing sphere.
Application prototypes have been developed within PRIME, and these have naturally also been
endowed with UIs. The CeL, Collaborative eLearning, prototype includes privacy aspects of peer-to-
peer communication and thus moves the focus away from the service-client view of the demonstrators
of the Integrated Prototype. The LBS, Location-Based Service, prototype develops the service-to-service
issue, but for HCI the service-client relationship is perhaps the most interesting aspect of the
LBS prototype. Chapters 6 and 7 give overviews of the HCI issues encountered in these two
prototypes; more information on UIs and functionalities is to be found in the deliverable connected to
these prototypes, D04.1.b.
1
Publicly available PRIME deliverables are listed on page 2. One HCI deliverable, D06.1.a General
Mock-ups, is available for project members and parts can be made available for interested readers.
“Speak the user’s language” (Nielsen, 1994) is a well-known phrase in usability design. However,
there might be a conflict between legally binding statements and everyday language. It is essential to
achieve consistency of presentation of legally binding information and to optimise the usability of the
system by providing tested strings in multiple languages. Chapter 8 outlines ontologies meant to
provide standardised descriptions of concepts being communicated to the user. The PRIME ontologies
in their current state are rather primitive, but this chapter explains the potential of ontologies for HCI
design.
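As a minimal illustration of the idea (the data below is invented; PRIME’s actual ontologies, outlined in chapter 8, are richer than a simple lookup table), a standardised concept identifier could map to tested display strings per language:

```python
# Invented example data: one standardised concept identifier maps to tested,
# legally vetted display strings, so every application that communicates the
# concept to the user presents it consistently in the user's language.
CONCEPT_STRINGS = {
    "purpose:order_fulfilment": {
        "en": "Processing your order",
        "de": "Bearbeitung Ihrer Bestellung",
        "sv": "Behandling av din beställning",
    },
}

def display_string(concept, language, fallback="en"):
    """Look up the vetted string for a concept, falling back to a default language."""
    strings = CONCEPT_STRINGS[concept]
    return strings.get(language, strings[fallback])
```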
Chapter 9 contains a short overview of some technical aspects of how the user interface of an
application interacts with the PRIME system. The chapter gives an architecture-level sketch of how
UI components connect to the PRIME core.
Finally, some prospects for future work are given in chapter 10, “Outlook”. New projects are in
preparation; we mention in particular PrimeLife, which will extend the PRIME perspective into
online social networks and the life-long management of privacy and identity.
2 Related work
Scanning the fields of privacy and HCI yields a number of articles illuminating their intersections. In
general, the works reported have had different focuses, but in the last few years some comprehensive
sources have also appeared, such as Microsoft’s Privacy Guidelines for Developing Software Products
and Services and the SOUPS conference, Symposium On Usable Privacy and Security, held annually
since 2005 at Carnegie Mellon University in Pittsburgh. One may also mention Iachello and Hong’s
(2007) 137-page survey of HCI research on end-user privacy. Below, papers of various origins are
presented under the following headlines:
• Perception of fairness of data processing
• Specific UI paradigms for privacy systems
• Usability of security systems
• PISA: from privacy principles to HCI requirements
• Privacy UI vocabularies and information structuring
• New vistas for IDM systems: Tools for exercising rights
Special relevance to particular PRIME aspects is pointed out when deemed appropriate, but every
work is of general relevance for the PRIME goals.
2
To the surveys mentioned in that deliverable one can add the discussion by Perri 6 (2001) “Can we be
persuaded to become PET-lovers?” A study of web sites’ privacy policies and practices is found in a report from
Consumers International (2001).
hypermedia systems (in effect, web sites) but furthermore points out that “These guidelines can be
complemented or modified by users’ individual privacy preferences.” (2001) An important
consideration by Kobsa is that “a single solution for all privacy issues does not exist” (ibid.).
Turner (2001, 2002, 2003) reports on factors that affect the perception of security and privacy of e-
commerce web sites. He claims that users’ “perception of security of e-commerce sites does not
depend on the site’s technical security but on their immediate experience with the site and on their
history with the company and the site” (2003). He refers to a number of studies which have
investigated how users form their judgement of the security of web sites. His own test results (2001)
showed that for experts to feel secure when transacting with a web site depended on factors such as,
1. their deep technical knowledge, 2. knowledge of good security processes, and 3. the company’s
reputation. Ordinary users depended on factors such as 1. the company’s reputation, 2. their
experience with the web site, and 3. recommendations from independent third parties (compare
PRIME work on assurance evaluation; 3.5). In his report from 2002 he provides evidence that, when it
comes to security and trust on web sites, visual presentation is more important to users than technical
security information. Turner (2003) refers to Cheskin (1999), who holds that it is
necessary to understand how trust forms over time as a result of experience and that longitudinal
studies are needed.
According to Nielsen et al. (2000) “Trust is hard to build − and easy to lose. A single violation of trust
can destroy years of slowly accumulated credibility”. This report gives a broad overview of trust-
related issues brought up by users in a study where they were asked to carry out basic shopping
tasks on US e-commerce web sites. The results from the study show that many of the users “had little
trust even in renowned online shopping web sites”, in contrast to the finding of Turner (2001). To win
their trust they wanted the web sites to provide: Succinct and readily accessible information about the
company; Fair pricing, fully revealed; Sufficient and balanced product information; Correct, timely,
professional site design; Clear and customer-friendly policies; Appropriate use of personal
information; Trustworthy security; and Access to helpful people. This last point has been
elaborated upon in reports from the PRIME project in connection with results from our usability
evaluation on (dis)trust; more on this follows in section 2.6 in this chapter.
D’Hertefelt (2000) “noticed that people’s perception of security when doing on-line transactions
depends on the simplicity of the web site and on the availability of user support”, and he notes in
particular the observation that people feel secure simply because they perceive a site as easy to use.
His hypothesis is that “The feeling of security experienced by a user of an interactive system is
determined by the users feeling of control of the interactive system”. D’Hertefelt gives three design
guidelines for making people trust a web service: it should be comprehensible, predictable, and
flexible and adaptable. Patrick (2002 –
drawing from the PISA project; see below) discusses software agents from a trust perspective,
extending a model by Lee, Kim and Moon (2000) polarizing trust and cost to a model polarising trust
and perceived risk. Among the trust factors they mention interface design but also other things such as
predictable system performance and user’s experience as well as shared values and user’s ability to
trust. On the other hand “a lack of alternative methods to perform a task” can lead to “feelings of risk”.
One may note that already in 1993 Bellotti and Sellen described “a framework for design for privacy
in ubiquitous computing environments”, where they stated that “privacy is a user-interface design
issue”, concluding with an example of a local online video application. In particular they found that
control and feedback were important for privacy in a ubiquitous computing environment. Nowadays,
more than a decade later, with omnipresent global wired and wireless networks, design for privacy
should be the concern of almost every application developer, especially the user interface developers,
to make people feel secure with their computing environments. This is not yet the case, but some work
has been done, as is evident from the following sections.
Lately, the privacy policies published by web services have become a target for discussion. Many
proponents of fair data processing argue that web services hide such privacy policies, and a recent
study analysing their language found vague formulations that give the web site much freedom to do
whatever it wants with visitors’ data (Pollach, 2007; more on this in section 2.6).
Another recent study, “undertaken to determine whether a more prominent display of privacy
information will cause consumers to incorporate privacy considerations into their online purchasing
decisions,” found that the prominent display of privacy information “tends to lead consumers to
purchase from online retailers who better protect their privacy. Additionally, our study indicates that
once privacy information is made more salient, some consumers are willing to pay a premium to
purchase from more privacy protective websites.” (Tsai et al., 2007) Thus, there is an economic
impetus to both make the data handling less privacy invasive and to make the policy document easily
accessible to people (in Europe there are of course also legal reasons as noted in sections 2.4 and 2.5).
In spite of the fact that research on trust-enhancing factors for web shops has been conducted for
around a decade now, and web browsers are used by most groups in our society, there are still many
people who are hesitant to entrust web services with personal data. For good reasons, one may add,
because it is not only the debate over weak privacy policies that should make people wary, but also the
increasing number of reports on bogus web sites which try to imitate established and trusted web sites.
Added to this are ‘phishing’ email messages linking to such bogus sites. Phishers “use ‘spoofed’ emails to
lead consumers to counterfeit websites designed to trick recipients into divulging financial data such
as account usernames and passwords. Hijacking brand names of banks, e-retailers and credit card
companies, phishers often convince recipients to respond.” as the Anti-Phishing Working Group
explains (APWG, 2007). And often this really works: Karakasiliotis et al. (2007) found, in a web-based
survey presenting a mix of legitimate and illegitimate messages to highly educated persons, that, in
total, the 179 participants made incorrect classifications in 32% of the cases (i.e. classifying a bogus
message as legitimate or a legitimate one as illegitimate). It has also been shown that bogus websites have
many means to fool users to believe they are on a site which they are not. An illuminating usability
study on this topic was presented last year by Dhamija, Tygar, and Hearst (2006) “in which 22
participants were shown 20 web sites and asked to determine which ones were fraudulent. We found
that 23% of the participants did not look at browser-based cues such as the address bar, status bar and
the security indicators, leading to incorrect choices 40% of the time. We also found that some visual
deception attacks can fool even the most sophisticated users.” In another study, Schechter, Dhamija,
Ozment and Fischer conclude “We confirm prior findings that users ignore HTTPS indicators: no
participants withheld their passwords when these indicators were removed. We present the first
empirical investigation of site-authentication images, and we find them to be ineffective: even when
we removed them, 92% of participants who used their own accounts entered their passwords.” (2007)
Wu, Miller, and Little (2006) let test users enter sensitive information online via a browser sidebar. In
a usability study, this solution “decreased the spoof rate of typical phishing attacks from 63% to 7%.”
However, spoofing the ‘secure’ sidebar itself turned out to be an effective attack. There has been some
further discussion about how well security and privacy indicators work; see the overview by Cranor
(2006).3
2.1.2 Trustguide
British Telecom and Hewlett-Packard Labs in Bristol published a study on public attitudes in Britain
towards trust in our ever-evolving technological world. HP has since completed a new study,
Trustguide2, preliminary results of which were presented at the eChallenges event in autumn 2007. We
cite from this presentation as it also compares its main findings with the original Trustguide.
“The ability to specify preferences at the time personal information is released is clearly a popular
control, and one that can be easily understood by people if sensibly implemented. Preferences present
a useful and pragmatic alternative to other privacy enhancing techniques, e.g. anonymity and
pseudonymity. The key findings from our initial analysis of the study can be summarised as follows:
• Participants balanced convenience against risk (confirming Trustguide finding)
3
In addition one may refer to a Wiki on the W3C site: http://www.w3.org/2006/WSC/wiki/SharedBookmarks
and to “Phishing Tips and Techniques” by Gutmann http://www.cs.auckland.ac.nz/~pgut001/pubs/phishing.pdf.
• Participants were wary and distrusting of new, unfamiliar services and retailers, but over time
would become more trusting in favour of maximising convenience
• Government and Banks were (with some reluctance) seen as trusted bodies. Further
outsourcing of ID management by government to other private organisations was not popular
(contradicting Trustguide finding where government was least trusted)
• Participants were divided on whether data should be held centrally or locally
• Participants were sceptical about the level of privacy that technology could offer (to a lesser
extent confirming Trustguide finding)
• Personal preferences were seen as useful controls, worthy of investing limited time to properly
manage. Providing choice suggested control and engendered trust
• Multiple privacy preferences (ID cards) were not considered useful or necessary
• Participants felt that certain items of PII are more sensitive than others, but struggled to value
each precisely, particularly with different usage scenarios
• Participants demanded clarity and transparency of use of their PII (confirming Trustguide
finding)
• The use of government issued ID cards for commercial applications was not popular”
(Tweney & Crane, 2007)
For the last point, it has to be taken into account that Great Britain does not have government-issued
ID cards at present, but plans to issue them soon to all citizens. The authors note that “the findings
endorse other studies around management of PII, e.g. the EU PRIME Project.” (ibid.) As an example
from the PRIME project, one can refer to a study in which HP participated, where test users appreciated
the control which an interface for setting obligations for data controllers4 promised to give them
(Pettersson et al., 2006). The method in the Trustguide studies has also been akin to the one often
employed in PRIME usability studies and testing: participants are confronted with technology that
does not yet really exist, and the stimuli are used to provoke responses in discussions afterwards (the
Trustguide studies have used focus groups, while the usability studies in PRIME were limited to
individual post-test interviews or questionnaires).
on a computer screen. Continuous alerting was found hard to make non-intrusive on a small screen.
On the other hand, with PRIME anonymisation there should be no need for such a feature, because
simple browsing will not reveal any information at all about the user. Cranor and co-workers have
developed search engines around the Privacy Bird concept (Gideon et al., 2006; compare Tsai et al.,
ibid.).
The PISA project – Privacy Incorporated Software Agents – which has influenced the PRIME HCI
guidelines gets an extensive treatment in section 2.4.
Furthermore, one presently sees several identity management technologies being developed around the
world (Windows CardSpace, Liberty Alliance, SXIP, Bandit, Netcraft, etc., as well as formfillers in
web browsers); these are geared towards the end-users. The greatest contribution to privacy protection
in such systems is their facilitation of ‘safe’ data disclosure, but this is only one part of the issue and
we do not review these systems here.
HCI-P might be conceived. Rather than focusing on functions of a system, one point of departure
could be requirements for privacy, as done in the PISA project where legal requirements constituted
the starting point; cf. 2.4 below. More perspectives are conceivable as discussed in chapter 4.
A collection of guidelines by different authors is found in a paper by Herzog and Shahmehri (2007).
This will be discussed more in chapter 4, while here we only briefly mention the work by these two
authors. In a series of papers, collected and summarised in a recent dissertation by Almut Herzog
(2007), it has been demonstrated how complicated and error-prone the configuration of the security
policy file is. Since offline editing was found particularly inefficient and difficult for end users, Herzog and her colleagues “conducted a usability study of personal firewalls to identify usable ways of setting up
a security policy at runtime.” (2007, p. 5) This in turn resulted in a tool for setting Java security policy
at runtime which was later subject to a usability study supporting the validity of their ideas and their
design guidelines (which will be discussed more in chapter 4) and also supporting the idea that users
should be included in the whole development process.
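As an illustration of the runtime-policy idea (our own sketch, not Herzog's actual tool), a user-side system can defer each policy decision until the moment the access is actually attempted, and remember the answer instead of requiring an offline policy file:

```python
# Sketch: runtime security-policy setting instead of offline policy editing.
# All names here are illustrative, not taken from Herzog's tool.

policy: dict[tuple[str, str], bool] = {}   # (program, resource) -> allowed?

def ask_user(program: str, resource: str) -> bool:
    # Stand-in for a runtime dialog ("Allow applet X to access resource Y?").
    # In this sketch every new request is denied.
    return False

def check_access(program: str, resource: str) -> bool:
    key = (program, resource)
    if key not in policy:
        # Decide once, at runtime, when the access is first attempted.
        policy[key] = ask_user(program, resource)
    return policy[key]

policy[("applet", "/tmp/cache")] = True      # a decision made earlier
assert check_access("applet", "/tmp/cache") is True
assert check_access("applet", "/etc/passwd") is False  # asked, denied, remembered
assert ("applet", "/etc/passwd") in policy
```

The point of the design is that the user only ever sees questions about accesses that actually occur, rather than having to anticipate them all in advance.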
5
OECD (2003) has specified practical guidelines to assist government, business and individuals in promoting
privacy protection online at national and international levels and also proposes means of promoting education
and user awareness. However, OECD’s guidelines appear as too general to directly inform UI design and will
therefore not be treated here.
6
The definition of ‘data subject’ is wider than ‘the user’ but in this context they are equivalent. Definition
implied in EU Directive 95/46/EC, Article 2(a): “‘personal data’ shall mean any information relating to an
identified or identifiable natural person (‘data subject’)”.
In a very extensive table (Table 2, presented in a special Appendix), Patrick and Kenny present details
of the four privacy principles under consideration in their paper (i.e., the italicised principles in the list
above) and what HCI requirements they imply. “The core concepts in the requirements can be grouped
into four categories: (1) comprehension: to understand, or know; (2) consciousness: be aware, or
informed; (3) control: to manipulate, or be empowered; (4) consent: to agree.” (ibid.; also in Patrick et al., 2003, p. 254) Furthermore, possible solutions for UI design are given for each principle and its
sub-principles in that table. This was the basis for the corresponding tables in D06.1.a&c and is also
adapted in the present deliverable, in section 4.2. These PRIME versions have been the point of
reference for the PRIME HCI work as concerns the EU directives.
Finally, Patrick and Kenny present a “Privacy Interface Analysis”. They claim that “The result of a
well-documented privacy interface analysis is a set of design solutions that will ensure usable
compliance with the privacy principles.” (ibid.) This trust in inspection methods can be questioned,
but in brief, the Analysis can be described in the way the authors do in section 4.3:
1. Develop a detailed description of the application or service from a use case and internal
operation point of view.
2. Examine each HCI requirement described […] to see if it applies to this application, using
Table 2 as a guide.
3. For each requirement that must be met, scrutinize the generic privacy solutions provided in
Table 2 (and the interface design methods […]) to determine an appropriate specific solution.
4. Organize the solutions according to use cases and capture them in an interface requirements document.
5. Implement the interface according to the requirements document.
Patrick and Kenny also propose ‘Just-In-Time Click-Through Agreements’ (JITCTAs). “The main feature of a JITCTA is not to provide a large, complete list of
service terms but instead to confirm the understanding or consent on an as-needed basis. These small
agreements are easier for the user to read and process, and facilitate a better understanding of the
decision being made in-context. Also, the JITCTAs can be customized for the user depending on the
features that they actually use, and the user will be able to specify what terms they agree with, and
those they do not. It is hoped that the users will actually read these small agreements, instead of
ignoring the large agreements that they receive today.” (Patrick & Kenny, 2003)
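The JITCTA idea can be illustrated with a small sketch (the names and data structure are ours, not Patrick and Kenny's): each feature triggers a small agreement covering only the terms that feature needs, and the user may accept or decline individual terms:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a Just-In-Time Click-Through Agreement (JITCTA):
# instead of one large policy, each feature presents a small, readable
# agreement with only the terms it actually requires.

@dataclass
class Term:
    text: str        # one short, readable clause
    required: bool   # must be accepted for the feature to run

@dataclass
class JITCTA:
    feature: str
    terms: list[Term]
    decisions: dict[str, bool] = field(default_factory=dict)

    def decide(self, term_text: str, accepted: bool) -> None:
        self.decisions[term_text] = accepted

    def feature_enabled(self) -> bool:
        # The feature runs only if every *required* term was accepted;
        # optional terms may be declined individually.
        return all(self.decisions.get(t.text, False)
                   for t in self.terms if t.required)

agreement = JITCTA("newsletter", [
    Term("We store your e-mail address to send the newsletter.", True),
    Term("We may share your address with partners.", False),
])
agreement.decide("We store your e-mail address to send the newsletter.", True)
agreement.decide("We may share your address with partners.", False)
assert agreement.feature_enabled()
```

Note how the user can agree to the required clause while declining the optional one, which is exactly the per-term consent the quotation above describes.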
A caveat must be given for the tendency of people to automate behaviours so that the individual parts
of an action are executed without conscious reflection (Raskin, 2000). Thus, too many repetitious
click-throughs should be avoided. The PRIME HCI work package has developed a ‘clickless’
alternative concept, namely the Drag-And-Drop-Agreements, which, of course, can appear ‘just in
time’ (section 3.4).
Patrick and Kenny (2003) refer to some articles that discuss click-through agreements and how to
make them valid: Thougburgh (2001), Slade (1999), Halket and Cosgrove (2001), and Kunz (2002).
See also Microsoft’s Privacy Guidelines (2006).
Layer 1 – The short notice: “This must offer individuals the core information required under Article
10 of the Directive [95/46/EC] namely, the identity of the controller and the purpose of processing –
except when individuals are already aware – and any additional information which in view of the
particular circumstances of the case must be provided beforehand to ensure a fair processing. In
addition, a clear indication must be given as to how the individual can access additional information.”
(ibid.)
Layer 2 – The condensed notice, available at all times online but also in hard copy via written or
phone request, includes all relevant information required under the Directive as appropriate plus
information on redress mechanisms:
• The name of the company
• The purpose of the data processing
• The recipients or categories of recipients of the data
• Whether replies to the questions are obligatory or voluntary, as well as the possible
consequences of failure to reply
• The possibilities of transfer to third parties
• The right to access, to rectify and oppose
• Choices available to the individual
• A point of contact for questions and information on redress mechanisms either within the
company itself or details of the nearest data protection agency
Layer 3 – The full notice includes in addition to the points listed above also “national legal
requirements and specificities.” This layer, with national particularities, was hard to accommodate
within the frames of the PRIME project – in principle, the PRIME project has only presumed two
layers (one directly visible in UIs, one linked from these UIs).
The report contains three appendices with examples of the first two layers. It can be noted that the layered principle does not in itself guarantee fully readable, comprehensive notices when devices with small screens are used.
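As an illustration only (the controller name, URLs and field names below are invented), the layered structure can be thought of as nested records, each layer linking to the next:

```python
# Sketch of the three-layer notice structure: Layer 1 carries only the
# Article 10 core items plus a pointer to Layer 2; Layer 2 carries the
# condensed notice; Layer 3 (the full notice) is linked from Layer 2.
# All concrete values here are invented for illustration.

short_notice = {
    "controller": "Example Shop Ltd",
    "purpose": "Order processing and delivery",
    "more_info": "https://example.org/privacy/condensed",  # link to Layer 2
}

condensed_notice = {
    **short_notice,
    "recipients": ["payment provider", "delivery company"],
    "replies_obligatory": {"e-mail": True, "phone": False},
    "third_party_transfers": False,
    "rights": ["access", "rectify", "oppose"],
    "choices": ["newsletter opt-in"],
    "contact": "privacy@example.org",
    "full_notice": "https://example.org/privacy/full",     # link to Layer 3
}

def render_short(notice: dict) -> str:
    # What a small screen would show first: the minimal fair-processing info.
    return (f"{notice['controller']} processes your data for: "
            f"{notice['purpose']}. Details: {notice['more_info']}")

print(render_short(short_notice))
```

The condensed notice deliberately repeats the short-notice items, so a user who follows the Layer 2 link never loses the core information.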
2.6 New vistas for IDM systems: Tools for exercising rights
Iachello and Hong end their 100+ page survey of HCI research in end-user privacy with a section
entitled “Trends and challenges in privacy HCI research” (2007, pp. 96-113). In this section they
expound five “grand challenges”:
• Developing more effective and efficient ways for end-users to manage their privacy (p. 96)
(Developing standard privacy-enhancing interaction techniques; p. 6, cf. p. 114)
• Gaining a deeper understanding of people’s attitudes and behaviours toward privacy
• Developing a “Privacy HCI Toolbox”
• Improving organisational management of personal data
• Reconciling privacy research with technological adoption models
Within the PRIME HCI work package it has been noted that when identity management in the future
finds more standardised forms, it will be easier to investigate tentative HCI privacy principles as these
can be formulated for specific interactive procedures and people will by then be familiar with these
procedures. This is the common trend for all HCI research. Nevertheless, there is still a need for
conscious efforts to extend the present limits of privacy research within the HCI field: in particular, there are interesting prospects in the intersection of Iachello and Hong’s first and fourth challenges, as will be explained in this final section of the present chapter.
Being able to track what data were disclosed, when, to whom and how the data are further processed,
is an important feature providing transparency of personal data processing. However, today this
capability, if at all existent, resides on the service side only (i.e. at the data controller). Weitzner et al.
(2006) present a basic architecture for transparent and accountable data mining that consists of
general-purpose inference components to be installed at the services side, providing transparency of
inference steps and accountability of rules. Transparency in this context means that the history of data
manipulations and inferences is maintained and can be examined by authorised parties. Accountability
means that one can check whether the policies that govern data manipulations were adhered to.
Sackmann et al. (2006) present a secure logging mechanism for creating privacy evidence for ex post
enforcement of privacy policies. Both approaches thus comprise transparency tools to be installed at the services side that allow checking whether personal data there were processed in a way compliant with privacy legislation. In contrast to the approach taken by PRIME, however, they were not designed as end-user transparency tools.
The PRIME UI proposals include additional functionality in the history function, namely functions
that can enable users to exercise online their rights to rectify data and request their deletion (see section 4.2), help them to check that the data controller sticks to the obligations he has agreed to, and also help them to set obligations in the first place when providing the data controller with
personal data (see 3.4). In addition, within PRIME it has been discussed how to advise users about
their rights, because PRIME usability tests have shown that many users are not aware of their rights
(Fischer-Hübner et al., 2007). This should be compared with the Eurobarometer of 2003 that showed
that only one-third of EU citizens actually know of their right to retrieve information on what data is
stored about them and for what reason (Eurobarometer 2003, Q33 a.2).
There is a need to get assistance outside the privacy-enhancing system itself. As cited in 2.1, Nielsen
and co-workers found that “Access to helpful people” was a trust-giving factor. For a privacy-enhanced identity management system, this could consist of getting in contact with the data
processors, but also consumers’ organisations or data protection authorities. In the light of the findings
of Pollach (2007; again cf. 2.1), it is clear that users may need assistance reading the privacy policies before agreeing to send data, and also afterwards if they run into problems and would like to see whether they have any rights at all to claim. One may object that organisations working for the benefit of citizens
cannot be swamped by more or less justified complaints, but “access to helpful people” might in the
first instance be replaced by automated services or by pay-per-question manned assistance services
(Pettersson et al., 2005; Andersson et al., 2005). The important thing is that users can be guided to find
helpful information outside the data controller’s web site. According to the Eurobarometer, “The level
of knowledge about the existence of these independent authorities was low across the European Union
and two-thirds (68%) of EU citizens were not aware of their existence.” This underlines the opinion of PRIME that, as with the legal rights, the UI should both inform users about the possibilities to get assisted help and assist them in taking advantage of these possibilities. We consider the work
conducted by PRIME partners in this field as being quite visionary (Pettersson et al., 2006; Fischer-
Hübner et al., 2007).
“Assurance evaluation” is a function that evaluates different parameters indicating how good the
data receiver is at handling the user’s personal data. The result of the assurance evaluation is inserted
in the “Send personal data?” window mentioned in the preceding paragraph. It helps the user before
data has been disclosed.
“Data Track” is a database stored in the user-side PRIME system and logging all data transmitted to
service providers. It supports functions that help the user to keep track of his data disclosures and what
agreements he has entered, and could – in a well-developed PRIME world – actively help the user in
privacy-related questions after data has been disclosed.
In the remainder of this overview chapter, the paradigms and functions just mentioned are explained in
some more detail and sample screen shots of the graphical appearance of the UIs are given. Chapters 6
and 7 show the user interfaces of the two application prototypes of PRIME. It should be noted that the
application in chapter 7, location-based service for mobile phones, is client–service-oriented just like
the web browser application discussed in this chapter and chapter 5. In contrast, the Collaborative
eLearning application in chapter 6 represents a quite different approach with its focus on interpersonal, peer-to-peer communication. There is nothing that prevents the three paradigms mentioned above from being used also for peer-to-peer communication, but for peers holding personal data the EU
directives on the obligations of data controllers do not apply as clearly as for service providers and, for
instance, the history function, i.e. Data Track, will not necessarily contain the information needed to
assist the user in ways described below.
that this role is inappropriate, he has to select one of his other roles. The UI paradigm was embodied in
an early user-side prototype called DRIM (Dresden Identity Management) where the IDM functions
were displayed in side bars of an ordinary Internet browser (Mozilla Firefox). Even if it may seem
natural to select the visiting card one wants to use, it turns out that during web browsing, an ordinary
user will be following links to web sites he is not sure he wants to share his personal data with. Pre-
selecting one’s visiting card, as the role-centred paradigm presumes, is not a swift and privacy-friendly
way of surfing the web.
7
I.e., a new pseudonym is created for each transaction (Pfitzmann & Hansen, 2007).
8
I.e., a pseudonym chosen in regard to a specific communication partner (Pfitzmann & Hansen, 2007).
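The two pseudonym types in footnotes 7 and 8 can be sketched as follows (an illustration under our own assumptions about key handling, not the PRIME implementation; terminology from Pfitzmann & Hansen, 2007):

```python
import hashlib
import hmac
import secrets

# Illustrative sketch only: a transaction pseudonym is fresh for every
# transaction, while a relationship pseudonym is stable per communication
# partner. The master-key handling here is a stand-in, not PRIME's design.

MASTER_KEY = secrets.token_bytes(32)   # user-side secret

def transaction_pseudonym() -> str:
    # A fresh, unlinkable pseudonym for every single transaction.
    return secrets.token_hex(16)

def relationship_pseudonym(partner: str) -> str:
    # Deterministic per partner: the same site always sees the same
    # pseudonym (returning visitor), while different sites see different,
    # unlinkable values.
    return hmac.new(MASTER_KEY, partner.encode(),
                    hashlib.sha256).hexdigest()[:16]

assert relationship_pseudonym("shop.example") == relationship_pseudonym("shop.example")
assert relationship_pseudonym("shop.example") != relationship_pseudonym("bank.example")
assert transaction_pseudonym() != transaction_pseudonym()
```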
The approach to use different default settings for different areas within a town should make it easier
for a novice to see the options available once he has grasped the TownMap metaphor. Individual
bookmarks or bookmark menus are symbolised by houses. The user also has his own house in the map
(a prominent house at the baseline). Of course, the map display has to vanish or be reduced when the
user clicks on the house of one of the service providers, because then the web page of that service
should occupy the browser window.
In Figure 2 the user wants to add a shortcut link (similarly to dragging a web site icon from a present-
day browser’s address field to the desktop). The user picks a house from a toolbar and places it
somewhere on the map. This will make it possible not only to place a new bookmark in a specific area
of the TownMap but also to put an alternative privacy preference definition: if, for example, a web site
is already listed in the public space, now the user adds an access point to the same site but in his
neighbourhood to indicate that he should be recognised as a returning visitor when accessing the web
site this way.
Figure 3 shows a view when the user is browsing a site. The user has clicked on the TownMap symbol
in the browser bar and can now see a tilted TownMap and all or some of his shortcut links (in this
figure only five houses have been placed on the map). This could be refined9 but in any event, it allows using the spatial relationships further: the path between the user’s house and the bank, for instance, can be highlighted for indicating data flow and even for letting the user show preferred data flows, as will be explained in detail in section 5.2.2.
9
Compare the “Looking Glass” UI paradigm presented by SUN Microsystems “Project Looking Glass”, http://www.sun.com/software/looking_glass .
Icons for pseudonymity and (true) identity:
• The PRIME mask occurs as sole component in the icon for “PRIME Settings” but should also convey the basic PRIME idea of pseudonymity.
• The Face icon is meant to represent the user and is used for the function where personal data is stored on the user side system.
Mnemonic icons for privacy preferences:
• “PRIME Returning Customer” icon (or “PRIME Returning Visitor”) – no data, but using one pseudonym per site to be recognised as a returning visitor.
• “PRIME Anonymous” icon – no data, always a new pseudonym.
The data in a visiting card determine what the user side PRIME system can prepare for releasing when
a service makes a data request. The preference settings can be configured differently for different web
sites, so as to minimise the number of cards; for instance, certain web sites can be marked for
automatic disclosure of requested data, and certain data occurring in a visiting card can be marked for
automatic disclosure such as username and password so as to make login procedures automatic. If the
preference setting denies the service provider certain data, the user might fill in missing information
manually (cf. 3.4) or cancel the service request. Managing preference settings can be done off-line, but
it is quite a tricky thing to set all conditions if one aims at automated information management.
PRIME allows setting preferences on the fly whenever the user finds it useful to create new visiting cards or to extend or change existing ones.
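How a user-side system might triage a service's data request against a visiting card can be sketched as follows (function and field names are ours, not PRIME's): items marked for automatic disclosure are released silently, items present but not so marked require confirmation, and missing items must be filled in manually or the request cancelled.

```python
# Illustrative sketch of matching a service's data request against a
# visiting card and its preference settings. Names and values are invented.

def triage_request(requested: list[str], card: dict, auto: set[str]):
    release, confirm, missing = {}, {}, []
    for item in requested:
        if item in card and item in auto:
            release[item] = card[item]   # e.g. username/password for login
        elif item in card:
            confirm[item] = card[item]   # shown in "Send personal data?"
        else:
            missing.append(item)         # user fills in manually, or cancels
    return release, confirm, missing

card = {"username": "John_in_a_million", "password": "secret123",
        "email": "j@example.org"}
auto = {"username", "password"}          # marked for automatic disclosure

release, confirm, missing = triage_request(
    ["username", "password", "email", "phone"], card, auto)
# release holds username/password, confirm holds email, missing is ["phone"]
```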
10
“Send Personal Data?” is sometimes abbreviated to “Send data?” or “SD?” in PRIME reports and the function
behind it is called AskSendData. Adding “personal” helped test users understand this function.
about changing the settings for the PrivPref, switching PrivPref, creating a new PrivPref, and setting a
default PrivPref for this web site.
The data used to, so to speak, “fill out the form” in Figure 5 has been derived from an IPV2 PreSet
called “PRIME Returning Customer”; as mentioned in 3.2 PrivPrefs were called PreSets in IPV2. Data
types are not given except for data which lack an obvious classification from their face value (such as
“John_in_a_million”).
For a consent request to be valid according to the EU Directive 95/46/EC (Article 10), the data subject
has to be told about what data will be used, for what purpose, who is the data controller, and any
further information of relevance, except when the data subject is already aware of this. In principle, a
user asking for a service at a web site would always already be aware of purpose and controller.
However, as there might be web sites trying to trick the user, the PRIME solution presupposes that the
service sends a specification of these details, as can be seen in Figure 5, so that no unwanted uses are
tacitly included in the agreement, such as sending promotion mail or selling the data to other
businesses. Furthermore, the PRIME solution presupposes that service side identity is checked by the
user side PRIME system either by relying on trustworthy credentials provided by the service itself or
by deriving the information from web site ownership.
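A minimal sketch, under our own assumptions about field names, of how the user-side system could check that a service's request actually specifies the Article 10 items before populating the “Send personal data?” dialog:

```python
# Illustrative check that a service's data request carries the Article 10
# details (controller, purpose, requested data, link to the privacy notice).
# The field names are ours, not a PRIME protocol specification.

REQUIRED_FIELDS = ("controller", "purpose", "requested_data",
                   "privacy_notice_url")

def validate_request(request: dict) -> list[str]:
    # Returns the missing Article 10 items; an empty list means the dialog
    # can be populated, otherwise the request should be refused.
    return [f for f in REQUIRED_FIELDS if not request.get(f)]

ok = {"controller": "Example Shop Ltd",
      "purpose": "delivery",
      "requested_data": ["name", "address"],
      "privacy_notice_url": "https://example.org/privacy"}
assert validate_request(ok) == []
assert validate_request({"purpose": "delivery"}) == [
    "controller", "requested_data", "privacy_notice_url"]
```

A request that fails this check would be exactly the kind of tacit, under-specified data collection the paragraph above warns against.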
The “further information” is put in a second level (cf. section 2.5.1) accessible via the “Link to full
privacy notice” where the company’s privacy policy can be found. Naturally, the user might wonder
whether a particular site is trustworthy or not, and we have proposed an assurance evaluation, here
called “Privacy Functionality Check” – compare 3.5. Moreover, the service side might want to use
some of the data for other purposes than the service requested by the user. For instance, many
companies would like to send newsletters to customers. Additional purposes have to be confirmed by
the user (and should not be “hidden” in the privacy policy! Cf. Pollach, 2007). We have tested a
design for this too, as will be explained in chapter 5, where we also propose solutions for ascertaining that the user really makes conscious choices and does not merely react with least effort by clicking the “OK” button.
11
www.european-privacy-seal.eu and ready21.dev.visionteam.dk
logged and stored on the end user’s device (encrypted and password protected if it were a real
application, of course). The Data Track (see Figure 7) is a function available in all three UI paradigms
discussed above. The PRIME UI proposals also include additional functionality in the Data Track,
namely functions that can advise users about their rights and enable them to exercise their basic rights
to access data, or to rectify and request their deletion online (see section 4.2), and help them to check
on agreed obligations or to set obligations (see 3.4 and chapter 5).
Being able to track what data were disclosed, when, to whom and how the data are further processed,
is an important feature to provide transparency of personal data processing. The Data Track stores
transaction records comprising personal data sent including pseudonyms used for transactions and
credentials that were disclosed, date of transmission, purpose of data collection, recipient and all
further details of the privacy policy that the user and recipient have agreed upon (see also Pettersson et
al., 2006). The privacy policy constitutes a valuable document in case that a user feels that something
is wrong with how his data have been used. Storing the pseudonyms used for transactions allows a user
to identify himself as a previous contact in case that he wants to exercise his rights to access,
rectification, blocking or deletion while still remaining pseudonymous.
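A Data Track record along the lines described above might look as follows (the field names are ours; a real application would encrypt and password-protect the store):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Sketch of a user-side Data Track: one record per data disclosure,
# including the pseudonym used, so the user can later prove "it was me"
# while remaining pseudonymous. Field names are illustrative.

@dataclass
class DisclosureRecord:
    timestamp: datetime
    recipient: str
    pseudonym: str      # enables pseudonymous exercise of rights later
    data_sent: dict     # the personal data actually disclosed
    purpose: str
    agreed_policy: str  # the policy text both parties agreed upon

data_track: list[DisclosureRecord] = []

def log_disclosure(recipient, pseudonym, data_sent, purpose, policy):
    rec = DisclosureRecord(datetime.now(timezone.utc), recipient,
                           pseudonym, data_sent, purpose, policy)
    data_track.append(rec)
    return rec

def records_for(recipient: str) -> list[DisclosureRecord]:
    # "What has this company received from me, and under which agreement?"
    return [r for r in data_track if r.recipient == recipient]

log_disclosure("shop.example", "pseu-4711", {"email": "j@example.org"},
               "order confirmation", "policy v1 text")
assert len(records_for("shop.example")) == 1
```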
4.1 HCI-P
As mentioned in section 2.3, Johnston, Eloff and Labuschagne (2003) make a case for enhancing
security functions of computers by improving the user interfaces of such functions. They note that
anti-virus software and firewalls are nowadays present also in systems managed by the everyday user,
who is not a security expert. The user interface “needs to ensure that the user is guided so as to
minimise the potential for the user to be the ‘weakest’ link.” (ibid., p.676). They define security HCI
(HCI-S) as “the part of the user interface which is responsible for establishing the common ground
between a user and the security features of a system. HCI-S is human computer interaction applied in
the area of computer security.”12 (ibid., p.677f)
Their starting point is the ten usability principles by Jakob Nielsen, originally derived from an analysis
of 249 usability problems (Nielsen, 1994, cf. 1993; cited in Figure 8 below). “Given the established
nature of these criteria, it is a good starting point to expand and modify the list of criteria so that they
are relevant to an HCI [=human-computer interface] in a security environment,” according to Johnston
et al., who modify and condense Nielsen’s ten usability principles “to address only the essentials in a
security environment.” This yields six HCI-S criteria (listed in their Table 2; cf. Figure 8 below).
Based on the work conducted in PRIME, it is not unthinkable to develop an HCI genre “HCI-P”. But it is questionable whether HCI-P should consist merely of a specific list of usability principles to assess the
usability of privacy functions, even if ascertaining that the functions lead to “trust being developed”,
which Johnston et al. see as a goal for their HCI-S criteria, is a plausible ultimate goal. Before
discussing the HCI guidelines in the following sections, we will present HCI-P as potentially a many-
faceted thing even if the rest of the chapter will deal with only some of these aspects.
12
They vacillate between two interpretations of the abbreviation HCI: human-computer interaction (the field of),
and human-computer interface. For the latter we use user interface (UI), so unless otherwise stated HCI is in this
report to be understood as the field of human-computer interaction research and practice.
privacy principles derived from EU Directive 95/46/EC and what HCI requirements these privacy
principles imply. Thus, if the core of HCI-P should be a set of privacy-specific usability principles,
one would have to decide from which field(s) they should be originally derived in order to properly
guide the design of the part of a system which interfaces between the user and the privacy features of
the system. To this comes the question of what the privacy features of the system are.
Patrick and Kenny refer to discussions within the American Bar Association on the binding force of
agreements made by so-called ‘click-throughs’ (e.g. Halket & Cosgrove, 2001); this is when the user
clicks on a button or a link labelled I agree or OK, but for this to be an expression of consent, he has to
be informed about exactly what he agrees to. In several cases processing of personal data is not
allowed without explicit consent from the data subject, so one could argue that HCI-P should centre on
the concept of ‘informed consent’ and how to reach agreements electronically. Identifying and
categorising situations where individuals are in contact with professional service providers would then
be in focus and the target would be to find appropriate interaction solutions for such situations.
Another perspective would be gained if the scope is widened from the service–customer dichotomy to
include also digital peer-to-peer communication. For peer-to-peer communication privacy laws are
less applicable. How to reach agreements (i.e., user ‘giving consent’) will not appear as the core
solution from this perspective. The same holds for foreign services if the scope is widened to consider
data transfers to sites in countries outside the European Union that lack an adequate level of data protection. In either case the legal backing is weak, and ways to conceal one’s identity by
anonymisation and the use of pseudonyms13 will be one of the best methods to ensure control over
one’s own data. However, privacy threats occur through the growing linkability when mixing real data
with these pseudonyms. Therefore, in this broader perspective, user-controlled automation of the
management of partial identities and linkability estimation appear as interesting as consent-giving. A
strand of HCI-P would then centre on agent technologies, i.e., technologies that act on the behalf of
the user making least-linkability decisions for him (this was included in the very name of the PISA
project, ‘Privacy Incorporated Software Agent’).
More generally speaking, PET is by definition concerned with computers (Privacy-Enhancing
Technology). ‘Usability of PET’ is conceivable as the defining formula for privacy HCI, although this does not set it out as anything special for HCI workers, since it would include general usability issues as well as privacy-specific ones. The formula ‘usability of PET’ also raises the question of whether HCI-P should be concerned with end users only, or also with system administrators working at the data processors. Automation has been advocated as a means to keep to good practices
on the controller/processor side (Borking & Raab, 2001; European Commission, Memo/07/159).
Manual processes are more prone to errors and deserve special attention too, of course, but such
processes fall outside HCI unless there are specific computer-based tools meant to help the human
operator. Privacy policy authoring tools (Karat et al., 2005) are a special case, and a special case of
these are systems where the policies might be used to directly control the system (‘obligation
management’; Casassa Mont, 2004). Even if a high degree of automation at service side may pave the
way for end users actually accessing the server side databases directly via their own PCs and mobile
phones (as envisioned by the PRIME HCI group; cf. “Data Track” elsewhere in this report), the HCI
requirements are quite distinct between the two user groups – indeed, end users and system
administrators in fact use different systems, even if the systems interact. Thus, the focus of HCI-P would be split, which can be criticised, of course, but at the same time it should be recognised that the two sides are far from unrelated to each other.
To start at the other end of the HCI continuum, one could note that for people in general, relying on technology at times appears as a great obstacle, especially when it comes to security and privacy.
Participants in PRIME user tests have voiced the fear of releasing personal data and also the distrust
that an allegedly ‘privacy-enhancing’ identity management system would ever be able to really protect
people’s privacy. One can envision privacy HCI as methods to invite people to rely on PET. To
13
The term pseudonym is “Generally used to describe identifiers which are not connected to an individual’s civil
or official identity” (Chapter 3 in D14.1.a). For different types of pseudonyms, see Pfitzmann & Hansen (2007).
compare, Johnston et al. (ibid.) claim to have defined criteria to ascertain that security functions lead
to “trust being developed”. Moreover, one can envision a privacy HCI discipline that takes further
responsibility by incorporating in the HCI analysis the whole ‘trust system’ including data protection
authorities, consumer organisations, etc. (Pettersson, 2006; Hansen et al., 2007) If a data protection
authority recommends the use of a certain identity manager, presumably this would mean very much
for people’s trust in such systems. Admittedly, such systems can still be complicated to use: the trust
focus cannot exclude ordinary usability work from HCI-P, because people have a tendency to distrust
their own abilities to cope with technology-centred systems and if proven right, they will surely stop
using even a recommended system.
To continue with the trust focus: it also raises the question of how to derive design principles.
previous PRIME deliverables we have claimed that experimental data do not only highlight the
existence of trust problems but also give indications of how such problems could be countered by user
interface design (Pettersson et al., 2005; Hansen et al., 2007). In general, usability testing could be
developed further as indicated in the usability evaluation reported in PRIME deliverable D06.1.b.
There is a need to develop guidelines or standards for how to introduce test users to the area of PET as
well as measuring their ability to discern various privacy threats, ability to develop well-founded trust,
etc. Thus there are a number of specific circumstances concerning usability testing of PETs, a fact
which argues for targeting one strand of HCI-P on experimental methodology. The methodological
concerns could stretch beyond usability testing of prototypes – for instance, Iachello and Hong conclude on methodological issues for privacy HCI thus: “One salient question is whether surveys, focus
groups, and interviews should be structured to present both benefits and losses to participants. Clearly
a balanced presentation could elicit very different responses than a partial description” (2007, p.41).
To sum up, what has been discussed here are the following perspectives on HCI-P (Privacy HCI):
• As a set of privacy-specific usability design principles;
• As human-centred methods to reach agreements electronically;
• As agent technologies for ordinary people;
• As concerned with usability of PET (user side PET, or any PET);
• As a set of trust principles, or methods to invite people to rely on PET;
• As experimental methodology with a special focus.
The word ‘perspective’ suggests that they are not mutually exclusive (a more ambitious and
extensive review of approaches and topics for privacy HCI can be found in section 3 of Iachello and
Hong’s recent survey; ibid.). It should moreover be remembered that experts other than HCI experts
have a stake in the usability evaluation of a privacy and identity management system, as indeed the
multidisciplinary PRIME project demonstrates. In conclusion, for the development of good PET, HCI-
P is better seen as a many-faceted discipline with various prospects for methodological development.
management applications have been developed, but these requirements are in essence similar to the
EU directive, as far as user-control is concerned. Therefore, we find it uncontroversial to follow the
EU Data Protection Directive to derive HCI requirements for the demonstrator of the PRIME
Integrated Prototype.14
However, the socio-cultural requirements also include adoption criteria, and such cannot be found in
law codes. Thus, factors promoting individuals’ adoption of an application are listed here (4.3).
We moreover give eight general guidelines for the end-user side of PET. These do not emphasise
legal or social requirements applicable to a specific relation between one sort of data collector and
one sort of data source; instead, taking the privacy functions for granted whatever their exact effects,
they target three basic characteristics of PET: that privacy protection is a secondary goal of users,
that PET concepts can be quite hard for users to understand, and that true reversal of actions is often
not possible (4.4). These guidelines are inspired by recent works in security HCI, which are also listed.
Finally, the chapter ends with guidelines on the design and testing process because blending various
guidelines for the UI design will need a tuning process involving testing (4.5).
14 This is not meant to say that individual articles in a directive might not be challenged; see Pettersson (2007).
1.1 Privacy principle: Data subject (DS) is informed: DS is aware of transparency opportunities.
    HCI requirement: Users must be aware of the transparency options.
    Possible PRIME UI solutions: Opportunity to track the controller’s actions is made clearly visible in the interface design: there should be a legend “Data Track” available to accompany the data track (e.g., footprint) icon in the title or tool bar.

1.1.1 Privacy principle: For: PD collected from DS. Prior to PD collection: DS informed of Data Controller identity and Purpose Specification (PS). (Art. 10 EU Directive 95/46/EC)
    HCI requirement: Users know who is controlling their data, and for what purpose(s).
    Possible PRIME UI solutions: The identity (legal name, address, email) of the service provider (and possibly also the user’s denomination in bookmarks etc.) as well as the purposes should be reinforced in the “Send data?” dialogue boxes corresponding to short privacy notices that appear when sites request PD. This information is also part of the condensed and full privacy notices that should be retrievable via the PRIME interface.

1.1.2 Privacy principle: For: PD not collected from DS but from controller: DS informed by controller about processor identity and PS. (Art. 11 EU Directive 95/46/EC)
    HCI requirement: Users are informed of each processor who processes their data, and the users understand the limits to this informing.
    Possible PRIME UI solutions: A statement that data can be passed on to third parties (including the identity of the third party and the purposes of processing) should appear in the service side’s privacy policy and should be displayed either in the condensed privacy notices or the short privacy notices (i.e. “Send data?” windows) if this is regarded as necessary for guaranteeing fair data processing.

1.2 Privacy principle: When PD are used for direct marketing purposes, DS must be aware of the right to object (Art. 14 (b) EU Directive 95/46/EC).
    HCI requirement: Users understand that they can object to processing of their PD for direct marketing, and the limitations on those objections.
    Possible PRIME UI solutions: Information about the data subject’s rights and tools for exercising them has to appear in the privacy policy and hence should be displayed in the privacy notices (i.e. if multi-layered notices are used, it should appear in the condensed privacy notice or in the short notice of the “Send data?” windows if this is necessary for guaranteeing fair data processing). Relevant information could, for instance, be provided through a click-through agreement at registration. The interface should provide obvious tools for exercising the data subject’s rights. This could be part of the Data Track functions.

2 FINALITY AND PURPOSE LIMITATION
    Privacy principle: The use and retention of PD is bound to the purpose for which the PD were collected from the DS (Art. 6 EU Directive 95/46/EC).
    HCI requirement: Users control the use and storage of their PD.
    Possible PRIME UI solutions: Control through the user’s privacy preference settings and “Send data?” dialogue boxes (containing the core information of short privacy notices), which are designed to be prominent and obvious. Control as regards events that occur after the user has given her or his consent will be managed by the Data Track function (possibly including an alert function notifying the user even if the Data Track window has not been opened).

2.1 Privacy principle: The controller has legitimate grounds for processing the PD (see Principle 3.1) (Art. 7 EU Directive 95/46/EC). For direct marketing via email, DS has the right to opt in if there is no customer relationship (Art. 13 EU Directive 2002/58/EC).
    HCI requirement: Users unambiguously give explicit or implicit consent.
    Possible PRIME UI solutions: “Send data?” dialogue boxes in the form of JITCTAs or DADAs can be used to obtain unambiguous consent for the controller to process the PD.

2.2 Privacy principle: The processor can only go beyond the agreed PS if the processor’s PS is state security, or prevention/detection/prosecution of criminal offences, or economic interests of the state, or protection of the DS, or rights of other natural persons, or scientific/statistical analysis (Art. 3 II EU Directive 95/46/EC).
    HCI requirement: Users understand that their PD could be used for other purposes in special cases.
    Possible PRIME UI solutions: Information about these exceptions has to be part of the full privacy notices retrievable via the PRIME interface. In the Data Track, an explicit formulation about these special cases or a “Special case” button can be used.

2.3 Privacy principle: Retention: the DS should be presented a proposed retention period (RP) prior to giving consent, except where the PS is scientific/statistical. The controller ensures the processor complies with the RP. When the RP expires, the PD is preferably deleted or made anonymous.
    HCI requirements: Users are conscious of the RP prior to giving consent; users are aware of what happens to their data when the retention time expires.
    Possible PRIME UI solutions: The RP should be selectable both in the privacy preference setting form (conditions for automatic disclosure) and preferably also during browsing, in “Send data?” dialogue boxes. Information about the RP has to be part of the condensed privacy notice or short privacy notices (i.e. “Send data?” windows) if this is regarded as necessary for guaranteeing fair data processing. Information about the RP and about PD that has been deleted or made anonymous because of retention period expiry should be included in the Data Track function.

3 LEGITIMATE PROCESSING
    Privacy principle: The PD is processed within defined boundaries.
    HCI requirement: Users control the boundaries in which their PD is processed.
    Possible PRIME UI solutions: Interface elements (i.e. privacy preference setting form, “Send data?” dialogue boxes, data track windows) for making privacy decisions should be prominent and obvious.

3.1 Privacy principle: Permission: To legitimately process PD, the controller ensures that one or more of the following are true: the DS gives his explicit consent, the DS unambiguously requests a service requiring performance of a contract, the PS is legal obligation or public administration, or the vital interests of the DS are at stake (Art. 7 EU Directive 95/46/EC).
    HCI requirements: Users give informed consent to all processing of data; users understand when they are forming a contract for services, and the implications of that contract; users understand the special cases when their data may be processed without a contract.
    Possible PRIME UI solutions: “Send data?” dialogue boxes in the form of JITCTAs or DADAs can be used to obtain unambiguous consent for the controller to process the PD. JITCTAs or DADAs can also confirm the formation of a contract, and the implications/limitations of the contract. In the Data Track interface, a reminder of the special cases when data can be processed without a contract could be included. The Data Track contains all contracts agreed to.

3.2 Privacy principle: Special categories of data (Art. 8 EU Directive 95/46/EC): The controller may not process any PD that is categorised as religion, philosophical beliefs, race, political opinions, health, sex life, trade union membership, or criminal convictions, unless the DS has given their explicit consent or the processor is acting under a legal obligation.
    HCI requirement: When dealing with highly sensitive information, users provide explicit, informed consent prior to processing.
    Possible PRIME UI solutions: Clearly mark requested sensitive data in “Send data?” boxes corresponding to JITCTAs or DADAs, so that the user is aware that special information is given. The PISA project suggests a double JITCTA, but there are inherent risks in click-throughs that people are OK-ing too quickly. The second click-through should possibly be very different from the first one.

3.4 Privacy principle: PD should not be transferred to third countries outside the EU without an appropriate level of data protection. Exceptions: the DS has unambiguously consented to the data transfer, the transfer is necessary for performance of a contract between DS and controller or between controller and a third party in the interest of the DS, the transfer is necessary for protecting vital interests of the DS, the data is information from a public register, or the controller has an appropriate level of data protection (e.g. fulfils the Safe Harbor agreement) (Art. 25-26 EU Directive 95/46/EC).
    HCI requirement: The DS has to understand the implications, and provide informed consent.
    Possible PRIME UI solutions: A special warning should be displayed to users before data disclosure to third countries without an appropriate level of data protection. A JITCTA or DADAs can be used for obtaining unambiguous consent from the DS.

4 RIGHTS OF THE DATA SUBJECT
    Privacy principle: DS has the right to self-determination within the boundaries and balance of the Directive.
    HCI requirement: Users understand and can exercise their rights.
    Possible PRIME UI solutions: Tutorials at installation and privacy notices during run time should inform users about their rights and about how to exercise these rights online and/or at the physical address of the controller. The existence in the Data Track of online functions for exercising these rights is mentioned at installation and later in “Send data?” and in the UI of the Data Track function itself. The interface layout (drop-down menus, title bar and data track window) provides obvious tools for controlling the rights functions.

4.1 Privacy principle: Access rights of the DS (Art. 12 (a) EU Directive 95/46/EC) and possible exemptions and restrictions (Art. 13 EU Directive 95/46/EC).
    HCI requirements: Users are conscious of their rights, which include the right to know who has received what kind of data, from whom, when, and why, and they understand the exceptions to these rights; users understand and can exercise their rights.
    Possible PRIME UI solutions: The tracking functions are displayed prominently in PRIME interfaces (drop-down menus and title bar of the host application). The exceptions to the rights should also be shown in the tracking interface and should be explained in tutorials and named in the privacy notices. Online functions for exercising the user’s right of access, i.e. requesting the server side to provide information about the data stored about him/her, should be part of the Data Track and obvious to operate. Besides, an email / snail mail address for requests to access data has to be provided in the privacy notices, which can be used as a fall-back solution in cases where the online functions do not work.

4.2 Privacy principle: The DS’s control rights: DS may issue erase, block, rectify, or supplement commands on their PD (Art. 12 (b) EU Directive 95/46/EC).
    HCI requirement: Users are conscious of their rights; they can exercise control over their data, with the ability to erase, block, rectify, or supplement the data.
    Possible PRIME UI solutions: Links to online functions to exercise the rights to rectify/erase/block could be provided in the Data Track window as an extension to the Data Track functions. The commands to erase, block, and rectify, associated with the tracking logs, should be obvious to operate. Besides, an email / snail mail address for requests to rectify/block/erase data has to be provided in the privacy notices, which can be used as a fall-back solution in cases where the online functions do not work.
There is one more row in the PISA table, row 4.4 about “Derived Information”, meant to deal with privacy
threats from web data mining (see chapter 11 in the PISA Handbook from 2002). That privacy principle
cannot be matched to any directive article, wherefore it is left out of the table presented here.
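The transparency rows of the table (1.1 and 4.1–4.2) presuppose that the user side keeps a log of disclosures. As a minimal sketch, under our own naming assumptions (the field names and structure are illustrative, not the PRIME core’s actual data model), a Data Track entry and the row-4.1 question “who has received what kind of data?” could look like:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DataTrackEntry:
    # Hypothetical fields; the actual PRIME data model differs.
    recipient: str                          # data controller's legal name
    purpose: str                            # agreed purpose specification
    disclosed: dict                         # data type -> value released
    disclosed_on: date
    retention_until: Optional[date] = None  # None = "until I object"

def recipients_of(entries, data_type):
    """Answer the access-rights question: who has received this kind of data?"""
    return [e.recipient for e in entries if data_type in e.disclosed]

log = [
    DataTrackEntry("CDON.COM", "Register an order",
                   {"name": "A. User", "address": "..."},
                   date(2007, 3, 1), date(2007, 4, 1)),
    DataTrackEntry("Payment service", "Payment",
                   {"credit card": "..."}, date(2007, 3, 1)),
]
recipients_of(log, "name")  # -> ['CDON.COM']
```

A record of this shape would also carry the retention-period information that rows 2 and 2.3 require the Data Track to show after consent has been given.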
Table 2 Factors promoting adoption, HCI requirements, and possible PRIME UI solutions
(numbering as in PRIME Socio-cultural Requirements V2)
A.1 Social Settings Flexibility
    Requirement: The user should be able to configure the application to different socio-cultural settings.
    Possible PRIME UI solutions: The ontologies should provide, in different languages, standard UI texts as well as messages generated by the service side.

A.2 Minimised Skill Levels
    Requirement: The user should be able to use the application with a minimal amount of training.
    Possible PRIME UI solutions: Web browser: the relationship-centred paradigm in chapter 3, combined with default anonymous browsing and a non-permissive visiting card for new sites, makes it easy to start using the privacy functions. Various functions demand various skill levels, but the Data Track is proposed to contain “exercise” functions for acquainting users with PET concepts.

A.3 Accountability
    Requirement: The user should be held responsible and accountable in specific predefined conditions (anonymity will be revoked).
    Possible PRIME UI solutions: When anonymous credentials are used, these should have information texts telling about non-standard cases where the anonymity is revoked.

A.4 Trust in Communication Infrastructure
    Requirement: The user should be able to trust that the communication infrastructure mediates information without distortion and without making eavesdropping possible.
    Possible PRIME UI solutions: Information about the trustworthiness of the PRIME user side system must be provided by external bodies. Information about the infrastructure, including the service provider’s system, can be shown in an Assurance Evaluation window (see section 3.5).

A.5 Trust in Transaction Partners
    Requirement: The user should be able to trust their transaction partners.
    Possible PRIME UI solutions: Information about services is provided by the assurance module and shown in an Assurance Evaluation window (see section 3.5).

A.6 Affordability
    Requirement: (The user should be able to obtain and use the application at reasonable cost.)
    Possible PRIME UI solutions: Outside the scope of HCI guidelines, but it is a frequent answer in post-test interviews.
Table 3 Design guidelines for “applications that must set a security policy, their origin and motivation”
(Herzog & Shahmehri, 2007, Table 1)
S4. Decisions cannot be handled off-line: run-time set-up is to be preferred.
    This guideline is in conflict with the guideline “support internal locus of control by making the user initiate actions, not respond to system output” by Shneiderman and Plaisant (2004), and shows clearly that not all usability guidelines can be uncritically transferred to security applications, which are typically supportive and not primary-task applications, where the user is not likely to take any actions if not prompted to do so.

S5. Enforce least privilege wherever possible.
    The principle of least privilege comes from Saltzer and Schroeder (1975) and is one important principle of computer security, specifically of access control, which is what security policies are about. Garfinkel (2005) warns in this context of hyperconfigurability. Users have difficulties in managing too many options and cannot take in the consequences of their modifications. Garfinkel rather suggests “a range of well-vetted, understood and teachable policies” instead of exposing the user to fine-grained policy set-up.

S6. In a security alert, the user should be informed of the severity of the event and what to do.
    Nielsen (1994) proposes that error messages should contain instructions on what to do, not only what has happened. Still, the texts must be short and focused so that they are actually read. Details and additional explanations should be accessible but not blur the main message. Yee (2002) demands clarity so that the effects of any actions the user may take are clear to him/her before performing the action. Also Hardee et al. (2006) state that any decision support should contain the consequence of any action taken.

S7. Spend time on icons.
    Johnston et al. (2003) state that well-chosen icons can increase learnability. This is supported by Whitten (2004), who suggests icons for public-key encryption and motivates icon choices, and by Pettersson [in PRIME deliverable D06.1.c], who comments on the difficulty of choosing icons for privacy settings.

S8. Know and follow general usability guidelines and test, test, and test again.
    General usability guidelines are described by, e.g., Shneiderman and Plaisant (2004) or Nielsen (1993). However, these guidelines are often so general that they can be difficult to implement for a specific case. Therefore, actual usability testing with users from the intended user segment is essential.
animations.
    services to have legally correct contacts with customers in other EU countries. Graphics are more than icons, as demonstrated in the TownMap, which also made use of animations (and the user’s drag-and-drop agreements, DADAs).
P8. Know and follow general usability guidelines and test, test, and test again.
    As Herzog and Shahmehri note, guidelines are often so general that they can be difficult to implement for a specific case. Specific solutions must be sought. For the early PRIME proposal, prospective users were included from the start; paper prototyping or computer-based mockups are very quick ways to understand what users tend to misunderstand. More on testing follows in section 4.5.
As is obvious from the table, there are some particularities with privacy usability not found in security
usability. Customers are asked for data and to consent to the use of them. Laws play a prominent role
for what data subjects can do if they dislike some use of their personal data. Therefore, the guidelines
concerning the interaction design for privacy-enhanced identity management systems may have
implications extending much further than the guidelines for security systems even if the number of
guidelines may be kept small in both cases. One could also consider specifying guidelines for each
broadly defined function – i.e. functions such as navigating the web pages of a service, answering
requests for data, gauging trustworthiness and assessing privacy risks, keeping track of data releases
and conditions – so as to keep each set of guidelines more specific. Herzog and Shahmehri defined
their task as providing guidelines for applications that set security policies, which is somewhat more
specific than guidelines for usability in security applications, which in its turn is more specific than
general usability guidelines for graphical user interfaces. They demonstrate the differing content of
these different sorts of guidelines in a figure which is reproduced in Figure 8.
It is noteworthy that Herzog and Shahmehri introduce a guideline normally counted among guidelines
for the development process rather than as a guideline for the properties of the developed product,
namely “Test and test more”. It is indeed hard to find product guidelines that really produce a usable
user interface if no intermediate user testing has been conducted. The design process is the topic of
section 4.5.
‘Perform tasks’ relates to Control but also to Consciousness because if various UI elements are not
detected by the user, he is most likely not able to perform a desired PIM control task;
‘Understand privacy implications’ obviously relates to Comprehending, but presumes an awareness
(Consciousness) of actual PIM actions.
It should further be noted that it is not self-evident how to measure a test subject’s understanding of
privacy implications. In a post-test interview, the subjects can be asked to explain what implications
there are or asked to tell whether or not something specific can happen. The latter gives a more precise
measurement but it can be hard to construct reasonable and unambiguous test questions.
The criterion of ‘satisfaction’ has many aspects. Naturally, the PRIME project wants to have a user
side that appeals to people. But there is also a general question whether people feel the need for the
product. Furthermore, questions of trust come to the surface in all that has to do with privacy and
identity management. The user interface can only deal with part of this aspect – it is not enough
to have a program that appears totally trustworthy to people, because any fraudster could copy such a
design. Nevertheless, ‘trust’ should be included in post-test interviews or questionnaires because it is
strongly connected to the question of satisfaction, and the participants can be given different
preconditions to ponder on (e.g. who should recommend the PIM system for them to trust it will be
helpful), and also asked in what situations they would or would not dare to rely on the system.
Choice

• Does the application have mandatory data entry fields only for data necessary for providing a service (also a data minimisation requirement)?
• Does the application provide the users with choices with respect to the use and secondary use of their data, for instance by providing options with respect to:
  • whether or not to provide personal data;
  • what personal data are to be shared;
  • for what purpose data can be used?
• Does the application provide ways to express preferences/policies with respect to:
  • the purpose of use of the personal data;
  • who may have access to the personal data;
  • where they may be stored;
  • for how long they may be stored?

Inspection

• Does the application provide information on the user’s request about:
  • when personal data have been disclosed;
  • to what parties this data have been provided;
  • under what conditions the data have been provided;
  • who had access to the data;
  • for what purpose they had access to the data?
• Does the application, for each party involved in the transaction, provide information on the user’s request about:
  • when personal data have been disclosed;
  • to what parties this data have been provided;
  • under what conditions the data have been provided;
  • who had access to the data;
  • for what purpose they had access to the data?

Context

• Does the application provide ways to change privacy preferences according to context?
• Does the application provide different presets of privacy preferences that can accommodate contexts in which the user repetitively operates?
• Does the application provide easy changes between (predefined) context settings?

Ex-post control

• Does the application show the user’s rights to access, rectify, block, or erase disclosed (personal) data and the procedures to execute these rights?
• Does the application provide ways to access, rectify, block or erase disclosed (personal) data?

Social Settings Flexibility

• Does the application allow for changing interface language, symbol/icon sets, help files and documentation?
• Does the application allow for managing privacy settings for different social contexts?

• Does the application offer customisation options for more experienced users?
• Does the application provide an easy-to-use interface?
• Does the application provide comprehensive tutorials and help files?
• Does the application provide information about privacy risks and what the application can do to help prevent these risks from materialising?
However, it is hard to measure how well they understood that there are implications of their data
disclosures (name, address, phone number, etc.) that surpass the definition of the linkability settings.
Figure 9 Traditional “Go” button (left) and address field with two “Go’s” (right)
5.2.2 TownMap
In the TownMap (introduced in section 3.2.3) the icons for privacy preferences are replaced by areas
visualising privacy protection concepts with default privacy settings. Individual bookmarks or
bookmark menus are symbolised by houses within these areas. Predefined areas are suggested to be
the Neighbourhood (where relationship pseudonymity is used by default; footnote 8), the Public area
(where the “Anonymous” setting is used by default), and the Work area with another set of personal
data than for private use. The user also has his own house in the map, a prominent house at the
baseline. The approach of using different default settings for different areas should make it easier for a
novice to see the options available once he has grasped the TownMap metaphor.
After populating the map with his favourite web sites and simultaneously assigning privacy preferences,
the user will select the appropriate privacy settings while web surfing by just clicking the icons for the
web sites. The map display has to vanish or be reduced when the web page of the service provider
opens. The TownMap does not only contain houses representing service providers but also icons
representing data items. By combining the web page and a reduced view of the TownMap, the user can
use the TownMap icons and positions to send personal data when the service provider requests such
data – we called this a DADA, Drag-And-Drop Agreement. In Figure 10 the user is sending name (face with
speech bubble) and credit card information to a payment service. Furthermore, how the payment
service sends money to the shop (CDON.COM) can also be demonstrated by the PIM system moving
coin symbols from the representation of the payment service to the representation of the shop.
Figure 11 shows how the user moves the name icon (face with speech bubble) to the gate of his
garden. The gate has a footprint icon used in PRIME prototypes for representing the Data Track. By
dragging a copy of the name icon to the Data Track icon, the user queries the system for history
records concerning his name (or names, if he has used several): which service providers have received
his name? Section 5.8.1 discusses the topic of utilising movements in spatial relationships.
The ferry boat and the bridge are points where the user defines the privacy properties of the areas
beyond the river. This is of course highly metaphorical, and perhaps unnecessarily graphical. A simple
CrossRoad design was also conceived within the TownMap approach where the “roads” are used as
area boundaries. Figure 12 shows two views in this design. The user’s house is in the upper left corner,
which might be a bit unnatural when it comes to moving data icons from the user to the service
providers because the user’s mouse and therefore also the user’s hand movement have to go toward
the user instead of away from him. However, in a compressed view this position of the user’s house
icon might make it possible to suck this icon into the web browser’s toolbars (which would not be
natural in Figure 11) and the icon could be the access point to the data icon row if this is not part of a
tool bar. Areas could be signalled next to the address field as in Figure 9 by stylised CrossRoad views
showing only the position of the cross.
A preference test (with N = 34 test persons) was made using user interface animations (video clips)
where groups of test participants could see identity management carried out in the traditionally styled
user interface and also in the TownMap. Afterwards, participants individually filled in a form with
questions about impressions and preferences; then the third design was also shown, the simplified
CrossRoad map. Swedish university students aged 20 and above, some older than 45,
participated in the preference test; all had used Internet Explorer and only some had used other
browsers in addition. Our traditionally styled alternative was based on an Internet Explorer mock-up.
The traditionally styled browser got in general a positive response: more than half of the answers gave
positive descriptions of it. The maps, on the other hand, were considered by many to be messy. One
should bear in mind that the maps were populated already from the start, while a new user would have
found his own map empty (like the bookmark list in an unused copy of a browser). For more
discussion of how the test was set up, see Bergmann et al. (2006).
On the question about their impression of the display of data and money transaction, 19 answered that
it “facilitates” while 11 ticked “superfluous”. Nine of these eleven persons also ticked “looks
childish”; fifteen in all ticked “looks OK”. This result speaks in favour of using animation in
explanations.
When ranking the alternatives, 24 persons put the traditional browser as their primary choice. Seven
preferred the realistic TownMap and three preferred the simplified map. Two fifths of the participants
answered that they would like to be able to switch between designs. The test has been replicated in the
USA with 27 (young) university students: the results were in the main similar to the test conducted in
Sweden, although a majority of the American subjects wanted to be able to toggle between designs.
Comparing with the age groups among the Swedish participants one can see a clear trend: young
Internet users are in general in favour of the more graphical user interface represented by the
TownMap.
Animations of transactions “facilitates”
That animations of transactions “facilitates” was the opinion of about half of the test participants who
were shown UI animations of one TownMap design. Naturally, it should be possible for users to
switch off such mini-shows, but they complement the use of DADAs well. The question is how well
they work in traditional user interfaces where things are not already set in two-dimensional locations.
Possibly a reappearing diagram with fixed points for standard actors can serve to animate transactions,
even if all points except the user’s will have to be renamed for each new transaction. Again, see
section 5.8.1 for further discussion on this.
Statistical forms of personal — All data types except name and full and telephone numbers — Should be anon., else “Marketing”
Another thing is that the purposes declared by the service providers could be questioned by the
PRIME system if they are not listed in the PrivPref currently used by the user. We have to rely on the
user to select the appropriate preference for this check (i.e. the check that there is a match between
the requested service and the purpose of the data request).
Four privacy preference “templates” are provided by the PRIME system. Because the two basic ones
are directly usable as PrivPrefs, we do not suggest an abstraction into two levels (i.e. template vs.
PrivPref) but instead, as in IPV1 and IPV2, keep the PRIME pre-defined ones as unalterable privacy
preferences, even if numbers III and IV need data to be really meaningful. (From Tweney & Crane,
2007, one may infer that even two visiting cards may be too much for most users.)
III. PRIME Minimal Shopping, Data structure (see table), Transactional pseudonyms
Purpose Data Types Retention period
Register an order See separate table Until service completed
Physical delivery -“- -“-
Electronic delivery -“- -“-
Payment -“- -“-
IV. PRIME Profiled Shopping, Data structure (see table), Relational pseudonyms
Purpose Data Types Retention period
Accept an order See separate table Until I object (i.e. user decides)
Physical delivery -“- -“-
Electronic delivery -“- -“-
Payment -“- -“-
Profiling (registration) -“- -“-
“5.” A fifth and further preference settings will be created as the user uses I–IV.
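For illustration, the two pre-defined shopping templates above can be rendered as plain data structures. The field names below are assumptions made for this sketch, not the PRIME data model; the point is that each template fixes a pseudonym type and a retention period per purpose.

```python
# Illustrative encoding of the pre-defined shopping PrivPrefs III and IV.
# Field names are assumptions for this sketch only.

PRIME_MINIMAL_SHOPPING = {
    "pseudonym": "transactional",   # a new pseudonym per transaction
    "purposes": {
        "Register an order":   "Until service completed",
        "Physical delivery":   "Until service completed",
        "Electronic delivery": "Until service completed",
        "Payment":             "Until service completed",
    },
}

PRIME_PROFILED_SHOPPING = {
    "pseudonym": "relational",      # one pseudonym per customer relationship
    "purposes": {
        "Accept an order":          "Until I object",
        "Physical delivery":        "Until I object",
        "Electronic delivery":      "Until I object",
        "Payment":                  "Until I object",
        "Profiling (registration)": "Until I object",
    },
}
```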
Figure 13 Context Menu appearing on top of the web page when user clicks PRIME button
Figure 14 Data and proof fields have been put in editing mode
Figure 15 relates the ‘locked’ view to the ‘unlocked’ view when not only data but also proofs ( ) are
requested. When the consent dialogue window is based on the content of a privacy preference setting,
there can be several different reasons why the consent form cannot be filled in with the data
requested by the service provider. For data and proofs it could be that no default item has been set,
that data or proofs are missing, or even that the data type is excluded from the current preference
setting or that the proofs acquired by the user are not recognised by the service provider.
A mouse-over tool tip on the “What to do” link should already explain what the problem is, while a
click on the link could provide more information and state the need to unlock data fields and proofs, or
guide the user directly to further actions, such as opening a form to request new proofs as in Figure 16
(for “Proof Portfolio” in the “Personal Data” database, see section 5.7.3). In the ‘unlocked’ view (i.e.,
the edit view), the combo box will contain the ‘what to dos’.
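The mapping from mismatch reason to “What to do” hint could be sketched as a simple lookup. The reason codes and hint texts below are invented for illustration and are not taken from the PRIME specification.

```python
# Hypothetical mapping of consent-dialogue mismatch reasons to user hints.

WHAT_TO_DO = {
    "no_default":    "Unlock the field and pick one of your stored items.",
    "missing_data":  "Unlock the field and type the missing value.",
    "missing_proof": "Open the Proof Portfolio and request a new proof.",
    "type_excluded": "This data type is excluded by the current preference; "
                     "switch preference or edit it.",
    "proof_unknown": "The service provider does not recognise this proof; "
                     "request one from an accepted issuer.",
}

def what_to_do(reason):
    """Return the hint for a reason code, with a generic fallback."""
    return WHAT_TO_DO.get(reason, "Contact support for this request.")
```

In the ‘unlocked’ view, the same strings could populate the combo box of ‘what to dos’.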
There will be many popup windows when there are many mismatches. On the other hand, filling in
missing data or requesting credentials to prove one’s identity can make the personal data database rich
enough in the future to provide all data for the consent window; or it may be the case that all the
information is already in the user’s system and he only has to pick the right preference setting. At any
rate, if there are deviations from the preference settings, many alerts are not a bad thing. (The
“I agree” button should be greyed out in Figure 5 and Figure 15 because the data request is not yet
satisfied.)
The UI design included the possibility for the customer to set the following ‘obligations’ for the
service provider:
When my data are transferred to third parties to fulfill [Service Provider]’s services:
Ask each time Notify me Save detail about this
When my data are deleted: Notify me
Your data will be deleted as soon as it is not needed
Moreover, because conditions for data use must be specified relative to the intended purpose for the
data collection, the design included an expansion part in which two other purposes than fulfilling the
service request were included: statistical analyses and special offers, each with a specific data request.
If the user chose to opt in for any of these, the window provided options to set obligations concerning
data retention period and, for the statistical analysis, also notifications and/or logging when data
are used.
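The obligation settings offered in the tested UI can be illustrated as a small data structure. The keys below mirror the dialogue text of the design but are our own naming for this sketch, not any PRIME wire format.

```python
# Hypothetical encoding of the tested obligation settings.

def default_obligations():
    return {
        "third_party_transfer": {   # when data go to third parties
            "ask_each_time": False,
            "notify_me": True,
            "save_detail": True,
        },
        "deletion": {"notify_me": True},
        "opt_in_purposes": {},      # e.g. "statistical analyses", "special offers"
    }

def opt_in(obligations, purpose, retention, notify=False, log=False):
    """Opt in to an additional purpose with its own retention and notification."""
    obligations["opt_in_purposes"][purpose] = {
        "retention": retention, "notify": notify, "log": log,
    }
    return obligations
```

A user opting in to statistical analyses would thus add one entry with its own retention period and notification/logging flags, as in the expansion part of the tested window.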
Because the test participants responded rather unequivocally in favour of using obligations settings, we
decided not to elaborate further on testing mockups. A condensed account of this experiment is to be
found in Pettersson et al. (2006), where specific problem areas are discussed. To summarise the results,
the experiment showed that Internet users can be interested in such a function; it seemed to give the
test participants a sense of being in control. Moreover, the experiment showed obligation setting to be
manageable for people who had never done it before – however, some participants would not allow
data collection for anything but the primary purpose, which of course simplifies the procedure a lot.
OTHER PURPOSES
Service provider furthermore says: “Additional data for additional purposes: data types a’,b’,c’ for
purpose U; data types d’,e’,f’ for purpose V, and data types g’,h’,i’ for purpose W.”
These are typically opt-in ‘services’ only. Two cases are possible where one should raise an alert.
5. The SP asks for data types within the PRIME standard purpose definitions.
6. The SP asks for more data types than in –“–.
The PrivPrefs are not elaborated to set preferences for such additional requests. They are opt-in, and
the user has to fold out a section of the user interface to see them (the only preference setting could
concern whether or not to display the fold-out control at all, and whether to always show some types of
purposes, such as Marketing information, as is common on web pages).
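The two alert cases above can be made concrete with a small classifier: an opt-in request either stays within a PRIME standard purpose definition or exceeds it, and both cases alert the user. The standard-purpose table below is an illustrative stand-in for the PRIME ontology, not its actual content.

```python
# Illustrative stand-in for the PRIME standard purpose definitions.
STANDARD_PURPOSES = {
    "Marketing information": {"Email"},
}

def classify_extra_request(purpose, data_types):
    """Classify an additional (opt-in) data request; every case alerts the user."""
    standard = STANDARD_PURPOSES.get(purpose)
    if standard is None:
        return "alert: purpose unknown to the PRIME standard definitions"
    if set(data_types) <= standard:
        return "alert: opt-in request within the standard definition"
    return "alert: opt-in request exceeds the standard definition"
```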
In addition, there are other mismatches concerning retention periods, proofs, and Assurance Evaluation
standards. Certain data could also be marked as ‘risky’ to disclose, such as bank account numbers, or
as sensitive, such as health information. The division into specific purposes is also somewhat
space-consuming; see Figure 17.
Figure 17 PrivPref Profiled Shopping informs the “Send data?” about purposes and data types
Not much effort has been put into designing the icon for the PrivPref (it is the face icon and a shopping
cart); it is just a place holder when designing the window layouts. As the lower left corner indicates, it
is proposed that there should be a step-by-step consent dialogue as an alternative. Such a design would
treat each purpose in a specific pane. This also has the advantage of preparing for multiple-party
agreements, where a service provider gives customers direct access to subcontractors so that
purpose-specific data sets and their concomitant conditions can be directed straight to the data
processors. The data controller then in fact ceases to be a controller; instead the subcontractors make
direct agreements with customers and thus become data controllers for each data set disclosed by the user.
There is, however, also a drawback with the stepwise disclosure. When the user considers a data
request, he or she is not primarily interested in who the receiver is, but first in what the requested
data are. Some data one could give away to anyone without hesitation. In a stepwise disclosure it
is hard to topicalise the data as is done in Figure 5, Figure 15, and Figure 17.
When a privacy preference setting is chosen that contains little data and few or no purposes for data
use, there will be a lot of ‘alerts’. Then the stepwise approach has some advantages, even if it remains
unclear how the preferences should be selected. This approach is presently only at the mock-up stage,
but Figure 18 and Figure 19 show how the user could be guided in filling out or selecting the data.
Figure 19 might need a scroll bar – the figure shows an expanded view where all content is visible.
Figure 17 could also be seen as showing ‘too much’. One can ask whether the “I Accept” button should
be outside the scroll area or at the bottom of it, so as to prevent users from clicking it without having
read the whole data disclosure agreement.
Figure 19 Final page (blown up) stepwise mode of multipurpose “Send Personal Data?”
These multiple-purpose (or rather, from a user point of view, split-purpose or specific-purpose)
consent dialogues were still being discussed when this report was finalised, and they have not yet been
subjected to any user testing, but they might appear in IPV3. There have also been discussions on
making the design more CardSpace-like, since the solution shipped with Microsoft’s Vista is likely to
become familiar to many people. Naturally, approaching the CardSpace design would require
introducing specific PRIME concepts into that design.
test. Instead, one of them mentioned explicitly that the personal number found in his passport would be
sent, even though the window asking for data release stated the request as Proof of “age > 18” (built
on “Swedish Passport”). However, the same person appreciated using electronic cash (a form of
anonymous money issued by some bank or credit institute), as this procedure did not involve the credit
card number or any other personal data.
Possibly, the data release window could be more explicit as to the process of deriving anonymous
proofs from electronic identity cards:
Send proof of “age > 18” (build proof on parts of the certificate “Swedish Passport”)
Different icons and combinations of icons (such as a ‘proof’ icon combined with the PRIME mask to
symbolise ‘anonymised proof’) can perhaps strengthen this. In a usability test in June 2007 with 30 high
school students, we asked for icons for ‘a proof’ and ‘a certificate’. A sheet of paper with various
adornments was what the majority suggested – possibly a reflection of the icons for certificates that
exist today.
In any event, even if better-designed information can make people understand how little information
is sent by an anonymous credential, it has to be admitted that, for passports, the citizenship of the
holder is always derivable if the government issuing the passport is derivable from the anonymous
credential derived from the passport credential.
One can conclude that: (1) there is a risk that users think that all data normally visible in the physical
item referred to (i.e., a passport) will also be available to the data receiver; and (2) information
inferred from proof metadata is also a kind of data which is sent by the user’s system to the receiver.
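The two conclusions can be made concrete with a small data-flow sketch of what a predicate proof actually discloses. The structures below are hypothetical; real anonymous credential systems perform this derivation cryptographically, but the information flow is the point here.

```python
# Hypothetical illustration: a predicate proof derived from a passport
# credential releases only the predicate and the issuer metadata - not the
# attributes visible on the physical document.

passport = {
    "issuer": "Swedish Passport Authority",  # issuer metadata IS disclosed
    "name": "…",                             # present, but NOT disclosed
    "personal_number": "…",                  # present, but NOT disclosed
}

def derive_proof(credential, predicate_label):
    """Release only the claimed predicate plus the issuer metadata."""
    return {"predicate": predicate_label, "issuer": credential["issuer"]}

proof = derive_proof(passport, "age > 18")
# The receiver learns the predicate and, via the issuer, the holder's
# citizenship - but neither the name nor the personal number.
```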
When searching a history database such as the Data Track introduced in 3.6 (further discussed in 5.6),
users could be misled into thinking (1) that they can search on more than the history function has
actually recorded, and (2) that metadata found in the records were wrongly recorded or, initially,
wrongly released by the system, which would lower their confidence in the system. Naturally, the
assistance function helping the user to act on the basis of the transaction records could also inform the
user about these two aspects, but it is probably not reassuring for users to learn later that they have
misunderstood the arrangement with anonymised proofs.
PRIME system to look in the “Personal Data” database and use the data there to fill in the user consent
dialogue with user data. On the other hand, bearing in mind users’ general reluctance to do settings,
and their particular reluctance to fill in personal data on their own computer (which might leak
information to the Internet: D06.1.b; Andersson et al., 2005; Tweney & Crane, 2007), letting them
configure privacy preference settings on the fly has clear advantages. It is also clear that they
understand the implications of the settings if these are made in real situations (Herzog & Shahmehri,
2007). Other means of avoiding spoofing are therefore desirable, such as putting a personalised photo
on every PRIME UI. However, these concerns have not been within the scope of the HCI work in
PRIME and we leave this as an unresolved issue; it mainly concerns web browsers and not other
applications.
Another open question concerns how to inform users about data which are not so obviously personally
identifiable, such as items purchased and navigation history, and about data released by themselves but
easily misinterpreted as was the case for the information content of a proof disclosure (5.4.6).
bodies such as seal providers rather than letting the evaluation results be displayed directly on
individual users’ computers (3.5). For the future, though, one might propose using RSS feeds about
incidents and security weaknesses to inform users, or at least users’ PIM systems, so that these systems
can request reports from the data controllers concerned if the faults have involved the user’s data.
Another thing to bring up is how to select an illustrative icon for the function when it is no longer
called “Privacy Functionality Check” but instead “Assurance Evaluation”. While ‘functionality’ might
reasonably be signified with cogwheels, as has been done in some user tests, the concept ‘evaluation’
does not lend itself so easily to illustration in a typical icon style. A magnifying glass might suggest
‘scrutiny’ but has already been over-employed in today’s user interfaces: the magnifying glass is used
not only for ‘magnify/zoom in’ but also for ‘search’ (though some search functions use binoculars),
and in Microsoft Word also in the icons for “Print Preview” and “Document Map” (the latter is meant
to give an overview rather than a detailed view, but still Microsoft chose a magnifying glass for this
icon too). In a full-blown PRIME world, an icon based on some trust seal icons might be suggestive,
even if the wording “Assurance Evaluation” possibly speaks for itself once the user has received some
negative evaluation reports and has checked out the content of the function.
down boxes. This made it possible to remove all the radio buttons. A simple link “List all records” has
also been added.
When Windows Vista and CardSpace were released, the history function in CardSpace was compared
with the Data Track design. The test aimed at 20 participants but was concluded at N = 12 because the
same difficulties with each system seemed to appear for most users. Half of them used Data Track
before CardSpace, and the other half used CardSpace before Data Track. CardSpace’s history function
scored slightly better than Data Track, but the number of records used in CardSpace was smaller,
making it easier to see the sought object directly. Drawbacks found with CardSpace were: it was hard
for most test participants to understand that they had to search for history in each card; search was not
as advanced as in Data Track; it was hard to understand what had been sent. Drawbacks with the
tested Data Track design: hard to find information on ‘templates’ (PrivPrefs); hard to understand that
a record can be opened by just clicking on it; cumbersome to get all records; complicated terminology.
(In the PRIME work we have never emphasised recording the privacy preference settings in Data
Track; the essential thing is what data was actually sent, under what conditions, and to whom.)
Test participants seemed to find their way in CardSpace once they finally understood how it worked;
the division into ‘cards’ is easy to understand and nicely designed. Many found the free-text search in
Data Track very useful: it is easy to search both with the search box and with template sentences; the
function searching the whole history database is powerful; and it was good that there was a lot of
information.
Internet habits did not seem to influence how well one understood the two programs. Eleven persons
liked the idea of using software like the PRIME integrated prototype or CardSpace; 3 preferred
CardSpace and 8 Data Track.
Assistance functions
In the first evaluation round of early prototypes within the PRIME project, the need for at least two
trust-enhancing functions in the Data Track was identified. From section 6.5.2.1 of D06.1.b one can
see the need for a help function to rectify/block/erase data (and also for informing users that they have
legal rights to such actions). From section 6.5.2.2 one can derive the need for concrete help with
frightening records, if the user does not recognise some recipients and finds them suspicious. In such
cases, information on, and help to access, data inspection authorities and consumers’ organisations
could potentially provide the ‘added value’ that makes PET use the preferred choice.
Figure 21 shows an opening window with several assistance functions that go beyond the normal
scope of help functions in computer applications; it extends the help beyond the system per se and
includes relevant matters in the surrounding world.
Furthermore, some tests have been made on how ordinary internet users would perceive functions that
help them to contact data controllers. We relate one here.
Short mockup test on withdrawal of consent
Ten participants saw UI animations of a user disclosing data aided by PRIME preference settings.
When the animation ended the participants searched the Data Track and were asked to disallow a chat
forum to further use the user’s email address.
The identity manager itself seems to have gained some respect from the participants. Among the Data
Track transaction records there was an extra record involving a certain ‘watcher.com’ that should not
have been there. The test participants were not worried about this but assumed that the PRIME identity
manager had done what it was supposed to do and had locked watcher.com out of the personal data
repository (watcher.com was not recorded as having received any personal data). But on a direct
question whether PRIME could raise their trust in the Internet and e-commerce, the test participants
said they would rely on company brand (and experience) as the decisive factor. They did not seem to
realise that this makes them vulnerable to ‘spoofing’ attacks, where web pages are made to look like
pages from reputable companies (for a recent treatment of this steadily increasing problem, see
Dhamija et al., 2006).
As many as 8 of the 10 participants said that the preconfigured emails would facilitate withdrawal of
information; one of these held that it increased the trustworthiness of the identity manager, but seven
regarded it more as an extended service of the kind one should simply expect from this kind of
program. 3 of the 8 saw no need to use the aid function, as they declared they only buy from
companies they trust. Interestingly, in contrast to the eight more or less positive respondents, two
persons were of the opinion that the withdrawal function decreased the trustworthiness, but from their
answers it is clear that they referred to the e-services:
• One held that if it was possible to withdraw some information, then the company had
requested more information than necessary in the first place.
• The other person could see the value of correcting incorrect information, but added that it
should not be possible to fiddle and deceive by altering already established contractual
agreements.
In conclusion, even if assistive functions could be perceived as facilitators by many people, many
would probably still not recognise the need for such functions, as they regard themselves as using
services from trustworthy providers only. This could indicate that there is a great need for assisting
people who have been lured into deceptive web sites. Another problem is posed by people viewing
the mere possibility to alter data as an indication of non-trustworthiness. Such people would dismiss
providers and technologies that allow user control of user data. This is a didactic rather than an
infrastructural problem, but nevertheless a reaction that must be accounted for.
Simulations
Data Track simulations of linkability based on personal data, and not only pseudonymity, would be
interesting to compute. They could take the form of simulation games such as “If companies A and B
pool their customer databases, what can they then infer about me?”
Other linkability computations can be based on using available resources for computing the likelihood
that someone else has one’s name within a given district of a given city, etc. Naturally, such
computations would be most beneficial during a data release action, but having the possibility to do
them in Data Track together with other linkability computations may allow the user to better
understand linkability risks.
Clauß (2007) has made a thorough analysis of linkability including usability options.
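A toy version of the name-frequency computation suggested above can illustrate the idea: given a district population and the frequency of one's name, how likely is it that at least one other resident shares it? The function and all figures are invented for illustration; a real computation would draw on actual population registers, as discussed by Clauß (2007).

```python
import math

def p_other_same_name(population, name_frequency):
    """P(at least one other resident shares the name), Poisson approximation:
    1 - exp(-n*f), where n is the population and f the name frequency."""
    return 1.0 - math.exp(-population * name_frequency)

# e.g. a district of 10,000 people and a name borne by 1 in 5,000 citizens:
p = p_other_same_name(10_000, 1 / 5_000)
```

A low probability would mean the name alone nearly identifies the user within the district, which is exactly the kind of linkability risk the Data Track could visualise.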
Figure 22 A user interface to manage the IPV2 manager (fold out view)
For the drop-down menu in a web browser, there is probably no need to hide any of the controls, as
the menu is not visible until the user calls for it. Rather, such a menu might contain additional options
relating to changing preference settings (i.e. role/PreSet/PrivPref) and to continuing in the present
browser window or opening new windows at the same web site (pseudonym changes may thwart
shopping cart content, which could be used intentionally by a registered customer to browse without
letting the navigation pattern created be linkable to him; Figure 23).
Figure 23 Old mockup of changing to anonymous browsing when browsing as registered customer
Background pictures – to tell real PRIME UIs from faked ones, it might be necessary that the user
installs a picture that the user side PRIME system puts as a background to all its UIs.
Style – for legibility reasons and for personal satisfaction, the user should be able to choose font size
(incl. icon size), contrast, colouring, etc., and the user interface should be interpretable by text-to-
speech converters for blind users.
usage. One can ask if locus meaning can instead be determined relative to natural relations, to prevent
the just-mentioned increased cognitive load of arbitrary placement of identifiers before a position has
acquired a meaning. The PRIME HCI group has no empirical data to support this, but the effect would
be as follows. In the TownMap design, the home symbol and its data icons were placed ‘near’ the
user, i.e. along the bottom of the screen. When the user takes a data piece and moves it out, he moves
it away from himself. Along the same lines, one may ask if right-handed persons would find it more
natural to move personal data from the lower right-hand corner of the screen than from the left-hand
corner. Presumably, for the TownMap example, the standard of a UI with the home symbol to the left
will override any handedness effect. The question of the locus of the ‘user’ on the screen is important
for a graphical resolution of the problem that users do not really see the difference between ‘their’
PRIME-enabled browser and web services.
It is important to note how movements can draw on spatial relationships but also to note for what
purpose movements can be employed. Below, three cases are differentiated: movements used for
information animation, movements for users’ agreements, and movements for user’s statements.
For informing: As an example we can take the disposal of documents in Microsoft Windows. When
the user right-clicks a file icon and chooses Delete, and then answers Yes to the question whether he
wants to send the file to the Recycle Bin, Windows shows a small animation in a pop-up window: a
folder symbol to the left and a file icon moving from it to a replica of the Recycle Bin to the right
(Figure 25). This is a ‘standard show’ and does not change to hint at where the file icon and the
Recycle Bin icon are on the screen (so in this way it is clearly different from the case where the user
drags and drops the icon on the Recycle Bin).
Even if a town plan is not directly depicted, movements can be utilised if the underlying graphical
representation is not completely abstracted away from all 2-dimensionality. For instance, connecting
icons for various communication partners by some street grid will make it possible to illustrate desired
or actual communication paths, but also to illustrate possible side-channel attacks to warn the user.
Animation of the communication paths can highlight individual paths and show data flow directions.
Many users responded positively to the simple animated demonstration in the TownMap preference
test, even if they were negative about the graphical design (section 5.2.2).
In the two examples just given, animation could possibly be replaced by arrows, but movement is
known to attract attention. Flickering, however, is also known to attract attention, so movement is not
strictly needed to grab attention; but movement is possibly a more pleasant form of animation than
flickering.
For giving consent (confirmation): The possibility to let the user act 2-dimensionally might prevent
the automation of behaviour that makes people accidentally click OK and Agree buttons without
really having intended to (Raskin, 2000). This was called DADAs – Drag-And-Drop Agreements – in
D06.1.a, where it was also noted that one can construct this game such that the user not only has to
pick a predefined set of data symbols – which would be quite like clicking ‘Agree’ on a pop-up –
but has to pick the right data symbols, and furthermore drop them on the right receiver symbol.
Thereby, the system can to some extent check that the user has understood the request. (ToolTips
displaying the specific data content for each data icon can accompany the drag-and-drop actions.)
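The DADA check can be sketched as follows: consent is registered only if the user dropped exactly the requested data symbols on the correct receiver symbol. The function and its argument shapes are assumptions made for this illustration.

```python
# Sketch of the DADA (Drag-And-Drop Agreement) confirmation check.

def dada_confirmed(request, drops):
    """request: {"receiver": str, "data": set of data-type names};
    drops: iterable of (data_type, drop_target) pairs recorded from the UI."""
    dropped_on_receiver = {d for d, target in drops
                           if target == request["receiver"]}
    # Consent counts only if exactly the requested symbols reached the receiver.
    return dropped_on_receiver == set(request["data"])
```

An incomplete drag, or a drop on the wrong receiver symbol, thus fails the check, which is what lets the system verify that the user has understood the request rather than merely clicked through it.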
(Not) for stating conditions: The system’s check mentioned in the last paragraph requires that the
information has already been requested by the service provider, so that the drag-and-drop action really
is an act of confirming, not an act of stating conditions. Drag-and-drops can be performed by mistake;
when used to state the conditions of an agreement they are not as secure as drag-and-drops for
agreements and would need a final confirmation click, just as in normal, ‘click-based’ interaction a
final confirmation is sought by requesting yet another click from the user.
For minor statements, which can occur within an agreement ‘action’, one might avoid an extra
agreement click, as when the user has selected his VISA credit card rather than his MasterCard. In this
case it is hard for the system to know whether this is really what the user wants if he has not
previously selected one of the cards for the current privacy preference setting – on the other hand, if
the user has not specified which data items match the credit card data type in this preference setting, it
is probably not detrimental to use both cards.
5.8.1.1 Unresolved issues for the employment of spatial relationships
As mentioned above, the question of the locus of the ‘user’ on the screen is pertinent for a graphical
solution of the problem that some users do not really see the difference between ‘their’ PRIME-
enabled browser and web services. The position of the user-side PRIME system in such a graphical
representation may be important for communicating the PET point to the user.
Furthermore, it is an open empirical question whether drag-and-drop actions – properly designed –
can reinforce users’ feeling for the distinction between user side and services side (this question
concerns both DADAs and the drag-and-drops for stating wishes).
5.9 Icons
5.9.1.1 Use icons with care
Usability tests (section 6.4.1 of D06.1.b) of icons for ‘roles’ (i.e., PrivPrefs) showed that users may
verbalise facial icons very differently even though they might understand how to use them.
Furthermore, when test subjects were allowed to pick face icons and other icons of their own choice,
preferences often competed between subjects. Interestingly, even though some face icons were mainly
associated with criminals, this did not prevent people from selecting such icons.
Nevertheless, use of icons must be handled with care, especially when different kinds of ‘masking’
come into play, as they necessarily do when anonymisation and pseudonymisation are the core
functions of a system.
5.9.1.2 User can select icons for data sets
If icons are used to ease users’ recognition of data sets and privacy options belonging to the users
themselves, it is recommended that users be able to select the icons themselves. For pre-defined
settings, such as PRIME Anonymous, it might however be advisable to keep the predefined icon, as
this icon might be used in instruction texts and videos and play on other system icons. How combined
icons, such as the one for PRIME Returning Visitor, , will be constructed if the user can select one of
the component icons (such as the face, which has the meaning “me” in PRIME) is a delicate UI
programming issue.
5.9.1.3 Replace verbal references to icons with the icons themselves
Further, when icons are used, even for pre-defined privacy options, it is recommended that verbal
references to these icons not be made by referring to their graphical appearance, because the icons
may be replaced. This holds also for icons not intended for user selection – the icons may be changed
by application providers at a later stage without the full user interface being searched for possible
textual references to them. However, because users find such concrete references useful,
incorporation of the very icon into the text is recommended (this must be done dynamically if the
user is supposed to select the icon, but dynamic linking also facilitates updating by application
providers). The text must also categorise the icon in conjunction with its appearance in the text, for
two reasons: 1. the user might have forgotten in what circumstance the icon is used and needs to be
reminded; 2. if the user is speaking to a support function, he must be given words to explain. If space
permits and the text is still intelligible, possible denominations of the icon (or of what the icon stands
for) can be added too; but the icon and that name must be explicitly linked somewhere in the user
interface.
Let users choose whether icon and/or name for a function, dataset, etc. shall be used. Note, however,
that not all users will be able to utilise this option.
5.9.1.4 Careful use of colours and fine details
An icon once adopted may appear in different sizes depending on the situation. The same icon may
even be transposed between a computer and the small screen of a mobile phone handset. Fine details
should therefore be avoided. It should be noted that the photo-realistic icon style nowadays employed
in Windows and Mac OS may cause trouble: “This level of details serves only to distract from data
and function controls. In addition, although it might, in some instances, make sense to render in detail
objects people are familiar with, what is the sense of similarly rendering unfamiliar objects and
concepts (for example, a network)?” (Cooper & Reimann, 2003, p. 235)
Likewise, colour is not rendered reliably across LCD displays, and manuals describing the icons may
be photocopied in black and white. ‘Traffic light’ triplets should be shown vertically.
Colours may have competing meanings in the different situations where an icon can appear. In one
instance colour may be an easy way of telling icons apart, but in another situation, where light colours
are needed to alert the user, a yellow icon may attract unnecessary attention. It cannot be denied,
however, that colour is a very effective means of marking things.
For colour-blind people, there must be reliable alternatives to colour signals. For an example, see the
alternative disclosure setting icons presented in D06.1.a, Figures 17 & 18 (combined into one in
Figure 26; usability test data for normal-sighted people in D06.1.b, section 6.5.1). Note especially that
combining signs into a new icon is possible even if superposition is not employed. A colour is easy to
add to a black-and-white icon to indicate a shifted meaning, but juxtaposition is also a workable way.
5.10 Terms
The prototypes in PRIME have not gone through repeated usability tests and we add no conclusion
here on the various terms and phrases appearing in the previous sections. Notably, to facilitate intra-
and extra-project communication, English has been used. A user-centric approach would, however,
demand a more pluralistic approach as noted in the concluding section of the chapter on ontologies:
see section 8.3.
Figure 28 shows the adapted “Send personal data?” dialogue. Changes have become necessary since
the user needs support in selecting a partial identity instead of selecting data to be sent. The dialogue
includes:
1. The dynamic DSM Configuration Button signals the current settings regarding the context
management.
2. The introductory text informs the user why the dialogue is started.
3. This section provides information about the requested data.
4. This section presents information about the initiated action (especially in which workspace
and by which functional module the action was initiated) and about the degree to which the
provided data might be known to other users.
5. The section “Select partial identity” starts with an introductory text that shall help the user
understand the content of that section. Some phrases are highlighted, which means that more
information is presented in separate windows.
6. Via the “Create” button the user can generate a new pID. If the DSM suggests creating a new
pID instead of using an already existing one, the button is highlighted orange.
7. This section shows the rated list of pIDs. The rating is visualised via the scales on the left
hand sides of the Chernoff Faces (cf. below) representing the partial identities. If a pID gets
the highest rating at all, it is highlighted in orange. The user can have a look at attributes
already assigned to a specific pID using the button “attributes”.
8. This section provides information about the suggestion generated by the DSM.
More changes will be needed in the future. As one test subject said after a session: “The application is
difficult to use for first-time users. The icons do not show what they mean and what one is supposed to
do. It is complicated. Moreover, the ‘Send Personal Data?’ window was confusing and had a bad title
that does not tell me what to do – should I create an identity or what? There was a lot of information
about different things. I do not understand even though I read. Regarding the title, it feels like the title
does not correspond to what I am supposed to do. If I am supposed to create an identity, the title should
tell me so. I think the text in ‘Send Personal Data?’ is written to the system and not to me as a user.
The text is technical, described from the system’s view. I don’t really understand the purpose of the
identities, and why should I want to be anonymous? It would be better if one were able to choose
whether one wanted to be anonymous or not. This manner is just unnecessary and results in much workload.”
Data Track is conceptually integrated in the Information Centre, but presently the Information Centre
only contains information about the user’s alias and the online state of his pIDs and of the pIDs of
other users.
The following icons are introduced in the CeL AP:
Icons in the style of Chernoff faces (Chernoff, 1973) represent relevant partial
identities. In particular, they represent dynamically relevant features of these partial
identities. Within APv2, the following features are represented:
The colour of the face encodes the pID’s online state: the face is yellow if the pID is potentially
visible to other users (because awareness objects are generated and distributed to other users
interested in this information), and grey if it is not.
The style of the alias (“Office” in the example) encodes the administrative role of the pID: bold is used
for “owner”, italic for “guest” (as in the example), and plain for “participant”. Current versions will
also provide information about the degree of knowledge (represented by the shape of the eyes), former
communications (the shape of the mouth), the workspace in which the user acts (the margin of the
face), the functional role (an additional symbol), and existing notes or additional information delivered
to support collaboration (also by additional symbols). More details can be found in Franz et al.
(2006)15.
The DSM configuration button starts the DSM configuration tool. The combination
of the CeL Chernoff face (representing the awareness icon and, thus, the
recognition necessary for collaboration) with the PRIME mask (representing
privacy aspects, i.e. pseudonymity) is used in several places. The colour of the
face indicates whether the DSM is switched off; the spots represent the current
configuration: green spots represent settings regarding the level of partitioning (the more spots, the
more fine-grained the partitioning), while blue spots represent recognition settings (one spot means
that the user wants to be recognised in the present workspace only, while three spots mean that the
user wants to be recognised in the whole BluES’n system).
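The encoding scheme described above (face colour for online state, alias style for role, spots for the DSM configuration) can be summed up in a small sketch. This is not PRIME code; the function and parameter names are invented for illustration, but the mappings follow the text:

```python
# Hypothetical sketch (not PRIME code): mapping a partial identity's state to the
# visual features of its Chernoff-face icon, following the encoding described above.

def face_features(online, role, partitioning_spots, recognition_spots):
    """Return the icon features for a pID.

    online: bool          -> face colour (yellow = visible to others, grey = not)
    role: str             -> alias text style (owner / guest / participant)
    *_spots: int (1..3)   -> DSM configuration indicators (green / blue spots)
    """
    alias_styles = {"owner": "bold", "guest": "italic", "participant": "plain"}
    return {
        "face_colour": "yellow" if online else "grey",
        "alias_style": alias_styles[role],
        "partitioning_spots": ("green", partitioning_spots),
        "recognition_spots": ("blue", recognition_spots),
    }

# A guest pID that is currently invisible to other users:
print(face_features(online=False, role="guest",
                    partitioning_spots=2, recognition_spots=1))
```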
Finally, it can be mentioned that it is difficult to reserve a large area for the course content in the
workspaces when identity management and privacy information also compete for screen space. In the
present design of the prototype, ample space is dedicated to these secondary tasks, as they belong to
the core concepts of the prototype. See PRIME deliverable D04.1.b.1. We do not include figures of the
ordinary workspaces here. They do contain several innovative interactive features, but these are not
directly related to privacy awareness and management.
15
Also in a document by the BluES Team: Handbook for BluES’n. 21 September 2007.
To address the privacy issues associated with tracking a user in the LBS scenario, a location
intermediary (LI) is introduced as a decoupling entity (see Figure 29), allowing the user to employ
temporary pseudonyms when communicating with services (for more details see D04.1.b.3.8). Privacy
enhancements by PRIME in the push LBS scenario include:
1. AP does not know John’s allergies
2. AP neither gets a movement profile nor the location
3. MO does not know the services John uses
The prototype maps PRIME components and principles like the PRIME console, pseudonymisation,
access control, and appropriate consent to data release to the mobile platform, incurring some
limitations in other areas, but still offering solid privacy protection. The second version features
several user interface and technology improvements, progressing towards fully leveraging what is
offered by PRIME on a mobile platform. It also addresses several specific user interface challenges of
the LBS scenario.
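The decoupling described above rests on the user acting under temporary pseudonyms towards each service. A minimal sketch of the client-side bookkeeping such a scheme implies is given below; the class and the service names are invented for illustration, and a real implementation would of course involve the LI protocol rather than local state:

```python
# Hypothetical sketch: a user agent keeping temporary, per-service-session
# pseudonyms, so that neither the AP nor the MO can link sessions or services.
import secrets


class PseudonymManager:
    def __init__(self):
        self._sessions = {}  # (service, session_id) -> pseudonym

    def pseudonym_for(self, service, session_id):
        """Return a pseudonym that is stable within one service session but
        fresh (and hence unlinkable) across sessions and services."""
        key = (service, session_id)
        if key not in self._sessions:
            self._sessions[key] = secrets.token_hex(8)
        return self._sessions[key]


pm = PseudonymManager()
# Same session -> stable pseudonym; different service -> unlinkable fresh one.
print(pm.pseudonym_for("pollen-warning", 1) == pm.pseudonym_for("pollen-warning", 1))
print(pm.pseudonym_for("pollen-warning", 1) != pm.pseudonym_for("cinema-guide", 1))
```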
To describe ontologies, the World Wide Web Consortium (W3C) developed the Web Ontology
Language (OWL)16. OWL relies on the Resource Description Framework (RDF)17 to express the
ontology relations. RDF statements can be written in Notation3 (N3)18 as well as in XML (as in
Figure 33).
In the next sections we describe how PRIME makes use of these mechanisms to express the relations
between entities and objects, to describe them in a human- and machine-readable manner, and to offer
translations.
16
http://www.w3.org/TR/2004/REC-owl-features-20040210/
17
W3C Recommendation 10 February 2004 http://www.w3.org/TR/2004/REC-rdf-concepts-20040210/
18
http://www.w3.org/DesignIssues/Notation3.html
<rdf:RDF
xmlns:j.0="https://www.prime-project.eu/ont/Datamodel#"
xmlns="https://www.prime-project.eu/ont/PIIBase#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:owl="http://www.w3.org/2002/07/owl#"
xml:base="https://www.prime-project.eu/ont/PIIBase">
<owl:Ontology rdf:about="">
<owl:imports rdf:resource="https://www.prime-project.eu/ont/Datamodel"/>
<rdfs:comment xml:lang="en">PRIME Ontology defining basic attributes of
entities.</rdfs:comment>
</owl:Ontology>
<owl:ObjectProperty rdf:ID="inverse_of_userCentricSensitiveData_1">
<rdfs:subPropertyOf rdf:resource="https://www.prime-project.eu/ont/Datamodel#owner"/>
</owl:ObjectProperty>
<owl:FunctionalProperty rdf:ID="birthDate">
<rdf:type rdf:resource="https://www.prime-project.eu/ont/Datamodel#ConcreteProperty"/>
<rdfs:label xml:lang="de">Geburtstag</rdfs:label>
<rdfs:label xml:lang="en">birth date</rdfs:label>
<rdfs:comment xml:lang="en">The date of birth or founding of an entity.</rdfs:comment>
<rdfs:subPropertyOf rdf:resource="https://www.prime-
project.eu/ont/Datamodel#userCentricSensitiveData"/>
<j.0:range rdf:resource="http://www.w3.org/2001/XMLSchema#date"/>
</owl:FunctionalProperty>
...
</rdf:RDF>
19
http://www.w3.org/TR/rdf-sparql-query/
20
http://jena.sourceforge.net/
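Being plain RDF/XML, a Figure 33-style ontology can be consumed with standard XML tooling; a real implementation would use a proper RDF toolkit such as Jena (footnote 20), but a minimal standard-library sketch suffices to show how the language-tagged labels of a property like birthDate can be extracted:

```python
# Minimal stdlib sketch: extracting the language-tagged rdfs:label values of the
# birthDate property from a Figure 33-style RDF/XML fragment. (A production
# implementation would use an RDF toolkit such as Jena rather than raw XML.)
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"
OWL = "http://www.w3.org/2002/07/owl#"
XML_NS = "http://www.w3.org/XML/1998/namespace"  # namespace of xml:lang

fragment = f"""<rdf:RDF xmlns:rdf="{RDF}" xmlns:rdfs="{RDFS}" xmlns:owl="{OWL}">
  <owl:FunctionalProperty rdf:ID="birthDate">
    <rdfs:label xml:lang="de">Geburtstag</rdfs:label>
    <rdfs:label xml:lang="en">birth date</rdfs:label>
  </owl:FunctionalProperty>
</rdf:RDF>"""

root = ET.fromstring(fragment)
labels = {
    lbl.get(f"{{{XML_NS}}}lang"): lbl.text
    for prop in root.iter(f"{{{OWL}}}FunctionalProperty")
    if prop.get(f"{{{RDF}}}ID") == "birthDate"
    for lbl in prop.findall(f"{{{RDFS}}}label")
}
print(labels)  # {'de': 'Geburtstag', 'en': 'birth date'}
```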
a. The PRIME data model allows a uniform policy language to be attached to each separate data item.
b. Each enterprise data model uses its own vocabulary; using ontologies we can map them into one
model (from and to the PRIME model).
c. It allows easy translation between policy concepts.
d. It models the credential metadata and the rules that govern access to the credentials.
However, this flexibility also has its drawbacks: with a malicious ontology, one could completely
invert the meaning and functioning of the system. That is why only a fixed set of ontologies is
currently allowed.
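Point (b) can be sketched as a simple translation table derived from the ontology. The enterprise field names below are invented for illustration; the PRIME-side URIs follow the PIIBase namespace used in Figure 33:

```python
# Hypothetical sketch of point (b): mapping an enterprise's own vocabulary onto
# PRIME data-model terms (and back). The enterprise field names are invented;
# in PRIME the table would be derived from the (fixed, trusted) ontologies.
TO_PRIME = {
    "cust_firstname": "https://www.prime-project.eu/ont/PIIBase#givenName",
    "cust_dob": "https://www.prime-project.eu/ont/PIIBase#birthDate",
}
FROM_PRIME = {v: k for k, v in TO_PRIME.items()}  # the inverse mapping


def to_prime(record):
    """Translate an enterprise record into PRIME-model terms, dropping
    fields the ontology mapping does not cover."""
    return {TO_PRIME[k]: v for k, v in record.items() if k in TO_PRIME}


print(to_prime({"cust_dob": "1980-02-01", "internal_flag": True}))
```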
<owl:Class rdf:ID="givenName">
<rdfs:label xml:lang="en">First Name (Given Name)</rdfs:label>
<rdfs:label xml:lang="sv">Förnamn</rdfs:label>
...
Figure 34 Tentative example with resources for data type terms instantiation in two languages
In fact, other user interface elements, such as button labels and references to icon resources, can also
be stored directly in the ontology. This makes it possible to customise interface elements, such as
captions and descriptions, while preserving the relations between the different subjects and objects of
the PRIME system. Naturally, this automatic way of filling the UI with texts can entail problems,
because in some languages some expressions will be longer than in others, as shown in Figure 35,
making it necessary to compress the lettering or to extend, e.g., buttons, which can have unwanted
consequences.21 But in principle, automatic translations are very promising in a broader scope where
privacy policies and (end-users’) obligation settings are considered. Standard messages would make it
possible to automatically translate between the languages of the service provider and the customer,
providing for informed consent in the best possible way. Translatable standard messages would also
be of great advantage when contacting the data controller via the Data Track’s assistance function (cf.
sections 2.6, 3.6).
21
Naturally, the verbs in the Italian examples might be in imperative rather than infinitive: Crea un nuovo ruolo,
Importa/Sincronizza, Modifica i segnalibri.
Figure 35 How different languages would claim different space for button captions
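Filling the UI from ontology-stored labels, as described above, amounts to a lookup by language with a fallback. A minimal sketch (the in-memory table below is an assumption standing in for the ontology store):

```python
# Minimal sketch: choosing a UI caption from ontology-stored, language-tagged
# labels, falling back to English when no translation exists. The LABELS table
# is a stand-in for labels actually held in the ontology (cf. Figure 34).
LABELS = {  # term -> {language tag: caption}
    "givenName": {"en": "First Name (Given Name)", "sv": "Förnamn"},
    "birthDate": {"en": "birth date", "de": "Geburtstag"},
}


def caption(term, lang, fallback="en"):
    """Return the caption for `term` in `lang`, or the fallback language."""
    labels = LABELS[term]
    return labels.get(lang, labels[fallback])


print(caption("givenName", "sv"))  # Förnamn
print(caption("birthDate", "sv"))  # no Swedish label -> falls back: birth date
```

Note that this is where the layout problem of Figure 35 surfaces: the chosen caption's length varies by language, so the widget showing it must adapt.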
22
XUL, the XML User interface Language, is a Mozilla-specific construct. This limits the usage of the
PRIME console to Mozilla users. But since a standalone XUL runner is available, we may count it as a
special programming language with its advantages and disadvantages. For more information, see
http://www.xulplanet.com/ and http://extensionroom.mozdev.org/ .
23
See http://developer.mozilla.org/en/docs/XULRunner for further details.
24
Per default, https://localhost:8443/prime_impl_ConsoleMediator.
Table 7 JSPs for retrieving and manipulating data in the PRIME core
Function Description
accessRemoteResource Starts an access remote resource protocol for a given URL.
addApplicationAuthorization Adds a program (name) to the list of programs authorised to access the
core. In order to give a program access to the PRIME core, the user must
add the program to this list.
addContact Adds a new contact to the database.
addCredential Creates a new credential.
addPII Adds a new PII record to the database.
auth Handles authorisation requests – helper function only.
getDataTrack Lists all the stored data, transactions and interconnections, filtered by
options and search string.
getFormInput Form Filler Helper page. Returns some static Form Input to server.
getInputCategory Returns a fitting PRIME category for the name of the form field.
listApplicationAuthorizations Returns a list of application authorisations.
listCategories Generates a list of available categories as an RDF (Resource
Description Framework) file.
listContacts Generates a list with all known receivers or all receivers of a given
PreSet.
listCredentials Lists all credentials the user has as a RDF list.
listDecisionSuggestions Represents the interface to the PRIME DSM (Decision Suggestion
Module) and generates a prioritised list with all possible PreSets that
could be used for one dedicated contact, template, and purpose.
listPII Lists the PII of a user.
listPresets Lists all PreSets.
listSessions Lists all sessions the user ever had as RDF.
listTemplates Lists all templates (as strings) for a given session id.
listVerifyingData Lists the PII that were marked for disclosure by the user.
registerObserver The client registers a listener for the PRIME console interaction.
removeApplicationAuthorization Removes the given application authorisation.
removePII Removes either a whole PII record or only a sub-category.
removePreset Removes either a PII and/or a category from the PreSet or removes the
PreSet itself (if only the PreSet id is given).
resumeNegotiation Resumes the accessRemoteResource protocol for a given resource.
updateApplicationAuthorization Updates the given application name by removing the old one and adding a
new one.
updatePII Updates an existing PII record.
updatePreset Updates an existing PreSet record.
updatePresetIcon Updates the icon entry of an existing PreSet record.
updatePresetReceiver Updates the flags of a PreSet receiver.
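The JSPs in Table 7 are plain HTTP endpoints on the Mediator (per footnote 24, reachable under https://localhost:8443/prime_impl_ConsoleMediator by default). How a UI component addresses them can be sketched as below; the URL layout and the query parameter names are assumptions, only the function names come from the table:

```python
# Sketch of how a UI component might address one of the JSPs in Table 7.
# The base URL follows footnote 24; the path layout and parameter names
# (e.g. sessionId) are assumptions made for illustration.
import urllib.parse

BASE = "https://localhost:8443/prime_impl_ConsoleMediator"


def jsp_url(function, **params):
    """Build the request URL for a PRIME core JSP such as listPresets."""
    query = urllib.parse.urlencode(params)
    return f"{BASE}/{function}" + (f"?{query}" if query else "")


print(jsp_url("listPresets"))
print(jsp_url("listTemplates", sessionId="42"))
```

An actual call would then be a simple HTTPS request against the returned URL, with the response typically delivered as RDF (cf. listCategories, listCredentials above).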
The lower layer is the PRIME core itself, providing the privacy protocols (for instance
accessRemoteResource or listCredentials). It also comprises the context manager, which collects
information about all disclosed PII and is important for linkability computation and all other privacy-
related functionality.
If the user is accessing a protected resource (e.g. a web page which requires authentication), the
PRIME-enabled service provider returns a response containing the X-PRIME header flag (see Figure
38)25, indicating to a PRIME-enabled client that access to this resource can be achieved by the
mechanisms PRIME provides. The client can search all incoming packets for this marker and, if the
marker is found, connect to the service provider’s PRIME system (the address of the system is
delivered together with this flag). For legacy applications the response also contains a redirect (HTTP
code SEE_OTHER) to a ‘legacy’ login page (where the user can log in by entering a user name and
password). A client without a PRIME system would thus follow this link and display the legacy
page.
An interceptor (in our case an extension for Mozilla Firefox) is able to inspect incoming packets. If
the X-PRIME flag is present, it notifies the Mediator to start the “accessRemoteResource” protocol,
passes the address of the service provider’s PRIME system and the resource the client is trying to
access to the Mediator, and waits for a decision response. If additional data or interaction are
requested, a so-called AskSendData dialogue (see Figure 37) pops up and waits for interaction.
Finally, the client transmits the request fulfilment26 back to the server.
The service response contains a handle that reflects the decision of the service provider (either access
allowed or access denied, depending on the policy and on whether the user could verify his identity).
When receiving this handle, the interceptor stores it and connects a second time to the access-
restricted resource. This time, a component inspecting outgoing requests recognises that an access
handle for this web page has been received and adds it to the request header as the X-PRIME-Handle
flag, which serves as an access handle. The consequence of this approach is that service providers
only need to speak a simple HTTP protocol to give the client a hint. On the client side, a PRIME-
enabled component needs access to the response header and a ‘handle store’. When receiving a hint,
the interceptor can connect to the Mediator (also via HTTP) and wait for the decision.
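The two-pass interceptor logic just described can be sketched over plain header dictionaries. The header names (X-PRIME, X-PRIME-Handle) follow the text; everything else (function names, the stand-in negotiation) is a simplification for illustration, with no real network code:

```python
# Sketch of the interceptor logic described above, operating on response and
# request headers as plain dicts. Header names follow the text; the negotiate()
# stand-in replaces the actual accessRemoteResource protocol run.
handle_store = {}  # resource URL -> access handle


def negotiate(prime_system, resource):
    """Stand-in for the Mediator running accessRemoteResource (possibly
    popping up the AskSendData dialogue) and returning the access handle."""
    return f"handle-for-{resource}"


def on_response(url, headers):
    """First pass: detect the X-PRIME flag and store the resulting handle."""
    if "X-PRIME" in headers:
        handle_store[url] = negotiate(headers["X-PRIME"], url)


def on_request(url, headers):
    """Second pass: attach a stored handle so the server grants access."""
    if url in handle_store:
        headers["X-PRIME-Handle"] = handle_store[url]
    return headers


on_response("https://example.org/page", {"X-PRIME": "https://example.org/prime"})
print(on_request("https://example.org/page", {}))
```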
http://localhost:8080/tud_ipv2_Demonstrator/newsletter/personalize.jsp
25
The X-PRIME header flag is defined by PRIME. For detailed information about http flags see
http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Header . For header inspection use the very helpful
Firefox extension at http://livehttpheaders.mozdev.org/.
26
In normal applications this may be user name and password, credit card details, or a valid email address.
10 Outlook
During the life span of the PRIME project, many HCI aspects have been considered and several
proposals have been suggested and developed. At the same time, privacy principles have been
investigated, discussed, developed, and used as the basis for the designs made within the project.
This deliverable has given an account of the work pursued and the results obtained. In particular, the
relation between privacy principles and design proposals and guidelines has been accounted for
(chapter 4). Individual design constructs have also been discussed at some length, including
preference settings, consent management, evaluation of trust-enhancing information, and management
of history records.
There are of course many things that could be explored more deeply. Some of the members of the
PRIME consortium are now, together with new allies, negotiating a new Large-scale Integrated
Project with the European Commission (for Framework Programme 7), PrimeLife – Privacy and
Identity Management in Europe for Life. The PrimeLife proposal is more focused on life-long privacy
maintenance than PRIME, and also on collaborative scenarios in virtual communities. The proposal
contains many technical challenges for workable mechanisms for pseudo-identification and reputation
building, as well as standards for access control and policy languages, but also research on social
issues as well as some of the assistance functions discussed in chapter 5. For instance, in the proposal
it is mentioned that “For virtual community applications, a ‘Data Track’ function for providing
transparency of the personal data processing by other peers, may differ from the PRIME Data Track
for client–server applications, as those peers (individuals) might not provide the same data handling
policy information (if they provide such information at all). Furthermore, it might even be difficult to
identify all peers that obtain information about an individual – apart from the possibility that those
peers might even wish to act anonymously.”
Other members of the PRIME consortium will, with other new allies, undertake an FP7 Standard
Research Project named PICOS – Privacy and Identity for Community Services, which will address
trust and privacy issues in identity management in digital information and communication services
that support communities which already operate in the physical world.
The field of Privacy HCI – HCI-P as it was called in section 4.1 – is still not well investigated, and
new privacy threats arise as mobile and web technology develops, as exemplified by the application
prototypes in chapters 6 and 7. We think the broad scope of the PRIME project and the many aspects
that its architecture covers have provided unique ground for developing interaction concepts.
Admittedly, the HCI work is far from complete in the sense of delivering ready-made solutions.
Rather, chapters 4 and 5 give guidelines and examples that have to be adapted and refined in every
individual implementation of the PRIME architecture in real applications. But the scope of the PRIME
project has propelled the development of a more inclusive systems thinking and opened many vistas
for further investigation.
11 References
For PRIME deliverables (references of the form “D6.1.c”), see the list at the beginning of the document.
Andersson, C., J. Camenisch, S. Crane, S. Fischer-Hübner, R. Leenes, S. Pearson, J.S. Pettersson & D. Sommer.
Trust in PRIME. Proceedings of the 5th IEEE Int. Symposium on Signal Processing and IT, December 18-
21, 2005, Athens, Greece.
APWG (2007) web site http://www.antiphishing.org/
Article 29 Data Protection Working Party. Opinion on More Harmonised Information provisions. 11987/04/EN
WP 100, November 25 2004. http://europa.eu.int/comm/internal_market/privacy/workingroup/wp2004/
wpdocs04_en.htm
Bellotti, V. & A. Sellen, Design for Privacy in Ubiquitous Computing Environment. Proceedings of the Third
European Conference on Computer-Supported Cooperative Work, 13-17 September, 1993, Milano, Italy.
Bergmann, M., M. Rost & J.S. Pettersson. Exploring the Feasibility of a Spatial User Interface Paradigm for
Privacy-Enhancing Technology, Proceedings of the Fourteenth International Conference on Information
Systems Development (ISD’2005), Karlstad, August 2005. Published in Advances in Information
Systems Development, Springer-Verlag, 2006, pp. 437-448.
Borking, J. & C. Raab. Law. PETs and Other Technologies for Privacy Protection, Journal of Information, Law
and Technology, 1, University of Warwick, 2001.
Berners-Lee, T., J. Hendler & O. Lassila. The Semantic Web. Scientific American, May 2001.
Buxton, W. (“Bill”). Sketching user experiences. Getting the design right and the right design. Morgan
Kaufmann Publishers, 2007.
Carroll, J. Making Use: Scenario-Based Design of Human-Computer Interactions. MIT Press, 2000.
Casassa Mont, M. Dealing with Privacy Obligations. In Trust and Privacy in Digital Business, Springer, LNCS
3184, 120-131, 2004.
Chernoff, H. Using faces to represent points in k-dimensional space graphically. Journal of the American
Statistical Association 68 (342): 361-368. 1973.
Cheskin Research and Studio Arcetype/Sapient. E-Commerce trust study. January 1999, as referred in Turner
2003.
Clauß, S. Towards Quantification of Privacy within a Privacy-Enhancing Identity Management System.
Dissertation Technische Universität Dresden, Fakultät Informatik, December 2007.
Consumers International. Privacy@net – An international comparative study of consumer privacy on the
internet. January 2001. http://www.consumersinternational.org/document_store/Doc30.pdf
Cooper A. & R. Reimann. About Face 2.0 – The Essentials of Interaction Design. Wiley Publishing, USA, 2003.
Crane, S., H. Lacohee & S. Zaba. Trustguide: Trust in ICT. BT Technology Journal, Special Edition covering
HP-BT Alliance, October 2006.
Cranor, L.F. Web privacy with P3P. O´Reilly, 2002.
Cranor, L.F. What do they “indicate?”: evaluating security and privacy indicators. interactions, Vol.
XIII.3: 45-57, 2006.
Cranor, L.F. & S. Garfinkel, eds. Security and Usability: Designing Secure Systems that People Can Use.
O’Reilly, 2005.
D6.1.b, D6.1.c, etc., i.e. PRIME deliverables; see list at the beginning of the document.
Dhamija, R., J.D. Tygar & M. Hearst. Why Phishing Works. CHI 2006 Proceedings. ACM Conference
Proceedings Series. ACM Press 2006.
International Organization for Standardization, ISO 9241-11 Ergonomic requirements for office work with visual
display terminals (VDTs) – Part 11: Guidance on usability, 1998.
Kaiser, J. & M. Reichenbach. Evaluating security tools towards usable security – a usability taxonomy for the
evaluation of security tools based on a categorization of user errors. In Hammond, J., T. Gross & J.
Wesson, Usability Gaining a Competitive Edge. IFIP 17th World Computer Congress, Montreal, Canada,
Kluwer Academic Publishers, s. 25-30. August 2002. http://tserv.iig.uni-freiburg.de/telematik/forschung/
projekte/kom_technik/atus/publications.html
Karakasiliotis, A., S.M. Furnell & M. Papadaki. An assessment of end-user vulnerability to phishing attacks,
Journal of Information Warfare, Vol. 6(1): 17-28, 2007.
Karat, J., C.-M. Karat, C. Brodie & J. Feng. Privacy in information technology: Designing to enable privacy
policy management in organizations. Int. J. Human-Computer Studies 63, 153-174, 2005.
Kobsa, A. Personalized Hypermedia and International Privacy. Communications of the ACM 45(5): 64-67, 2002.
Kobsa, A. Tailoring Privacy to Users’ Needs. Invited Keynote, 8th International Conference on User Modeling,
Sonthofen, Germany, 2001. http://www.ics.uci.edu/~kobsa/papers/2001-UM01-kobsa.pdf
Kröger, V.-P. Security of User Interfaces – A Usability Evaluation of F-Secure SSH. Proceedings of the Helsinki
University of Technology, Seminar on Network Security, December 1999.
Kunz, C.L. Click-Through Agreements: Strategies for Avoiding Disputes on Validity of Assent. 2002.
http://www.Efscouncil.org/frames/Forum%20Members/Kunz_Click-thr_20%Agrmt_20Strategies.ppt. See
also C. L. Kunz, J. Debrow, M. Del Duca & H. Thayer, “Click-Through Agreements: Strategies for
Avoiding Disputes on Validity of Assent”. Business Lawyer, 57, 401, 2001.
Law, E.L.-C. & E. Hvannberg. Analysis of strategies for improving and estimating the effectiveness of heuristic
evaluation. In Hyrskykari, A. (ed.) Proceedings of the Third Nordic Conference on Human-Computer
Interaction, Tampere, Finland, October 23-27, 2004. Available in ACM Digital Library.
Microsoft Inc. Privacy Guidelines for Developing Software Products and Services, Version 2.1, October 10,
2006.
Molin, L. & J.S. Pettersson. How should interactive media be discussed for successful requirements
engineering? In Perspectives on multimedia: communication, media and technology, eds. Burnett,
Brunström & Nilsson. Wiley, 2003.
Nielsen, J. Usability Engineering. Morgan Kaufmann Publishers, 1993.
Nielsen, J. Heuristic evaluation. In Nielsen & Mack (Eds.), Usability Inspection Methods, John Wiley & Sons,
New York, NY, 1994. Cf. also http://www.useit.com/papers/heuristic/heuristic_ list.html
Nielsen, J., Jacob Nielsen’s Alertbox, User Education Is Not the Answer to Security Problems. October 25,
2004. http://www.useit.com
Nielsen, J. & R.L. Mack (Eds.), Usability Inspection Methods, John Wiley & Sons, New York, NY, 1994.
Nielsen, J., R. Molich, C. Snyder & S. Farell. E-commerce user experience: Trust. Nielsen Norman Group, 2000.
OECD, Privacy Online: OECD Guidance on Policy and Practice. (Print Paperback), 2003.
http://www.oecd.org/document/49/0,2340,en_2649_34255_19216241_1_1_1_1,00.html
P3P, Platform for Privacy Preferences (P3P) Project. 2004 and later. http://www.w3.org/P3P The latest version
of the P3P 1.1 element definitions can be found at http://www.w3.org/TR/P3P11/.
Patrick, A.S. Building Trustworthy Software Agents. IEEE Internet Computing, pp 46-53, November 2002,
http://computer.org/internet/
Patrick, A.S. & S. Kenny. From Privacy Legislation to Interface Design: Implementing Information Privacy in
Human-Computer Interaction. Paper presented at the Privacy Enhancing Technologies Workshop
(PET2003), Dresden/Germany, 2003.
Patrick, A.S., S. Kenny, C. Holmes & M. van Breukelen. Human Computer Interaction. Chapter 12 in Handbook
for Privacy and Privacy-Enhancing Technologies. PISA project. Eds. van Blarkom, Borking, Olk, 2002.
http://www.andrewpatrick.ca/pisa/handbook/handbook.html
Pearson, S. Towards Automated Evaluation of Trust Constraints, in Trust Management, LNCS 3986, Springer
Berlin/Heidelberg, 252-266, 2006.
Perri 6. Can we be persuaded to become PET-lovers? Appendix II of the Report on the OECD forum session on
Privacy-Enhancing Technologies (PETs), held at the OECD, Paris, 8 October 2001.
Pettersson, J.S. P3P and Usability – the Mobile Case. In Duquennoy, P., S. Fischer-Hübner, J. Holvast & A.
Zuccato (eds.) Risk and challenges of the network society. Karlstad University Studies 2004:35, 2004.
Pettersson, J.S. R1 – First report from the pilot study on privacy technology in the framework of consumer
support infrastructure. Working Report, December 2006. Dept. of Information Systems and Centre for
HumanIT, Karlstad University. http://www.humanit.org/projects.php?projekt_id=48&lang=en
Pettersson, J.S. R2 – Second report from the pilot study on privacy technology in the framework of consumer
support infrastructure. Working Report, July 2007. Dept. of Information Systems and Centre for
HumanIT, Karlstad University. http://www.humanit.org/projects.php?projekt_id=48&lang=en
Pettersson, J.S., C. Thorén & S. Fischer-Hübner. Making Privacy Protocols Usable for Mobile Internet
Environment. HCI International 2003, June 23-25 Crete, Greece.
Pettersson, J.S., S. Fischer-Hübner, N. Danielsson, J. Nilsson, M. Bergmann, S. Clauß, Th. Kriegelstein & H.
Krasemann. Making PRIME usable. SOUPS 2005 Symposium on Usable Privacy and Security, Carnegie
Mellon University, July 6-8, 2005, Pittsburgh. Available in ACM Digital Library.
Pettersson, J.S., S. Fischer-Hübner & M. Bergmann. Outlining Data Track: Privacy-friendly Data Maintenance
for End-users, to appear in Proceedings of the 15th International Conference on Information Systems
Development (ISD 2006), Budapest, 31st August - 2nd September 2006, Springer Scientific Publishers.
Pfitzmann, A. & M. Hansen. Anonymity, Unlinkability, Undetectability, Unobservability, Pseudonymity, and
Identity Management – A Consolidated Proposal for Terminology, v0.30, 26 November 2007.
http://dud.inf.tu-dresden.de/Anon_Terminology.shtml
PISA – Privacy Incorporated Software Agent project (http://www.pet-pisa.nl/pisa_org/pisa/index.html);
Handbook for Privacy and Privacy-Enhancing Technologies. PISA project. Eds. van Blarkom, Borking,
Olk, 2002. http://www.andrewpatrick.ca/pisa/handbook/handbook.html
Pollach, I. What’s wrong with Online Privacy Policies? Communications of the ACM, September 2007: 103-108.
Preece, J., Y. Rogers, H. Sharp, D. Benyon, S. Holland & T. Carey. Human-Computer Interaction. Addison-
Wesley, 1994.
Preece, J., Y. Rogers & H. Sharp. Interaction Design: Beyond Human-Computer Interaction. John Wiley &
Sons. 2002. 2nd ed. 2007 with authors listed as Sharp, Rogers & Preece.
PRIME Deliverables, D1.1, etc.; see list at the beginning of the present document.
Raskin, J. The Humane Interface – New Directions for Designing Interactive Systems. ACM Press, New York,
2000.
Reimann, R. & A. Cooper. About Face 3.0: The Essentials of Interaction Design. Wiley, 2006.
Rubin, J. Handbook of Usability Testing, Wiley, 1994.
Sackmann, S., J. Strüker & R. Accorsi. Personalization in Privacy-Aware Highly Dynamic Systems.
Communications of the ACM, Vol. 4(9): 32-38, 2006.
Saltzer, J.H. & M.D. Schroeder. The protection of information in computer systems. Proceedings of the IEEE,
Vol. 63(9): 1278-1308, 1975.
Sasse, M.A., S. Brostoff & D. Weirich. Transforming the weakest link – a human/computer interaction approach
to usable and effective security. BT Technology Journal, Vol. 19(3): 122-131, 2001.
Schechter S. E., R. Dhamija, A. Ozment & I. Fischer. The Emperor’s New Security Indicators: An evaluation of
web site authentication and the effect of role playing on usability studies, IEEE Symposium on Security
and Privacy, May 20-27, 2007, Oakland, California.
Shneiderman, B. & C. Plaisant. Designing the User Interface. Addison Wesley, 4th edition, 2004.
Slade, K.H. Dealing with customers: Protecting their privacy and enforcing your contracts. 1999.
http://www.haledorr.com/db30/cgi-bin/pubs/1999_06_CLE_Program.pdf
Thougburgh, D. Click-through contracts: How to make them stick. Internet Management Strategies, 2001.
http://www.loeb.com/FSL5CS/articles45.asp
Tsai, J., S. Egelman, L. Cranor & A. Acquisti. The Effect of Online Privacy Information on Purchasing
Behavior: An Experimental Study. WEIS 2007, Workshop on the Economics of Information Security,
June 7-8 2007, Carnegie-Mellon University, Pittsburgh.
Turner, C.W. How do consumers form their judgment of the security of e-commerce web sites? Workshop on
Human-Computer Interaction and Security Systems, CHI2003, April 5-10, 2003, Fort Lauderdale,
Florida. http://andrewpatrick.ca/CHI2003/HCISEC-papers.html
Turner, C.W. The online experience and consumers’ perceptions of e-commerce security. In Proceedings of the
Human Factors and Ergonomics Society 46th Annual Meeting, Sept. 2002.
Turner, C.W., M. Zavod & W. Yurcik. Factors that Affect the Perception of Security and Privacy of E-
commerce Web Sites. In Proceedings of the Fourth International Conference on Electronic Commerce
Research, Dallas TX, November 2001.
Tweney, E. & S. Crane. Trustguide2: An Exploration of Privacy Preferences in an Online World. In
Cunningham & Cunningham (Eds.): Expanding the Knowledge Economy: Issues, Applications, Case
Studies, IOS Press, 2007 Amsterdam, pp. 1379-1385.
Weitzner, D., H. Abelson, T. Berners-Lee, C. Hanson, J. Hendler, L. Kagal, D. McGuinness, G. Sussman & K.
Kasnow Waterman. Transparent Accountable Data Mining: New Strategies for Privacy Protection. MIT
Computer Science and Artificial Intelligence Laboratory Technical Report, MIT-CSAIL-TR-2006-007,
2006.
Whitten, A. & J.D. Tygar. Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0. In Proceedings of the
8th USENIX Security Symposium. Usenix, August 1999.
Wu, M., R.C. Miller & G. Little. Web Wallet: Preventing phishing attacks by revealing user intentions.
Symposium On Usable Privacy and Security (SOUPS) 2006, Carnegie Mellon University, July 12-14,
2006 Pittsburgh. Available in ACM Digital Library.
Yee, K-P. User interaction design for secure systems. Proceedings of the International Conference on
Information and Communications Security, ICIC’02, pp. 278-290. Springer-Verlag, 2002.
EU Directives
Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of
individuals with regard to the processing of personal data and on the free movement of such data, Official
Journal L No. 281, 23.11.1995.
Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing
of personal data and the protection of privacy in the electronic communications sector, Official Journal L
No. 201, 31.07.2002.