
Contract N° 507591

Title: HCI Guidelines

Author: WP06.1

Editor: John Sören Pettersson (Karlstads universitet)

Reviewers: Marit Hansen (ICPP)

Peter Keller (Swisscom AG)

Identifier: D06.1.f

Type: Deliverable

Version: 1

Date: 01 February 2008

Status: Final version

Class: Public

Summary
This document discusses HCI privacy design in general and points to specific problems such as
ease of use vs. informed consent. The document gives guidance regarding HCI requirements and
makes proposals for the user interfaces (UI) of the PRIME integrated identity management prototype,
especially for web-browsing scenarios. Test results are discussed but experiments are not
presented in full. This document also gives a brief overview of HCI issues for the two PRIME
application prototypes for collaborative eLearning and location-based services for mobile phones.

Application providers interested in adopting the PRIME architecture or similar architectures will
in this deliverable find suggestions for user interface design as well as discussions of problematic
issues. Moreover, PRIME ontologies are presented and an architecture-level sketch is given of
how UI components connect to the PRIME core.

Copyright © 2007-2008 by the PRIME consortium

The PRIME project receives research funding from the Community’s Sixth Framework Programme and the Swiss Federal
Office for Education and Science.

File name: pub_del_D06.1.f_ec_wp06.1_V1_final.doc


PRIME D06.1.f
Privacy and Identity Management for Europe

Members of the PRIME consortium:


IBM Belgium Belgium
IBM Zürich Research Laboratory Switzerland
Unabhängiges Landeszentrum für Datenschutz Germany
Technische Universität Dresden Germany
Deutsche Lufthansa AG Germany
Katholieke Universiteit Leuven Belgium
T-Mobile International Germany
Hewlett-Packard Ltd. United Kingdom
Karlstads universitet Sweden
Università degli Studi di Milano Italy
Joint Research Centre Italy
Centre National de la Recherche Scientifique France
Johann Wolfgang Goethe-Universität Frankfurt am Main Germany
Chaum LLC United States of America
Rheinisch-Westfälische Technische Hochschule Aachen Germany
Erasmus Universiteit Rotterdam The Netherlands
Universiteit van Tilburg The Netherlands
Fondazione Centro San Raffaele del Monte Tabor Italy
Swisscom AG Switzerland

Published PRIME documents


These documents are all available from the project web site located at http://www.prime-project.eu/
Excerpt of project “Description of work” 03-2004 with updates
D14.0.a Framework – Version 0 06-2004
D15.1.b Project presentation 09-2004
D05.1.a Overview of existing assurance methods 09-2004
D13.1.a Tutorials Version 0 09-2004
D01.1.a Requirements Version 0 10-2004
D14.2.a Architecture Version 0 10-2004
D06.1.b Evaluation of early prototypes 12-2004
D06.1.c HCI Guidance and Proposals 02-2005
D14.1.a Framework Version 1 03-2005
D16.1.a Annual Research Report I 04-2005
D01.1.b Requirements Version 1 05-2005
D11.2.a Integrated Prototype Version 1 (IPV1) 05-2005
D13.1.b Tutorials Version 1 05-2005
D14.2.b Architecture Version 1 05-2005
D15.1.d White Paper Version 1 07-2005
D06.1.d Evaluation of IPV1 07-2005
D04.2.a Evaluation of Initial Application Prototypes 03-2006
D16.1.e Annual Research Report II 04-2006
D14.1.b Framework Version 2 07-2006
D14.2.c Architecture Version 2 12-2006
D16.1.i Annual Research Report III 04-2007
D11.2.b User-side IDM Integrated Prototype V2 04-2007
D06.1.e Evaluation of Integrated Prototype Version 2 05-2007
D15.1.f White Paper Version 2 06-2007

Final version 1, dated 01 February 2008 Page 2



The PRIME Deliverable Series


Vision and Objectives of PRIME
Information technologies are becoming pervasive and powerful to the point that the privacy of citizens is now at
risk. In the Information Society, individuals need to be able to keep their autonomy and to retain control over
their personal information, irrespective of their activities. The widening gap between this vision and current
practices on electronic information networks undermines individuals' trust and threatens critical domains like
mobility, healthcare, and the exercise of democracy. The goal of PRIME is to close this gap.
PRIME develops the PRIME Framework to integrate all technical and non-technical aspects of privacy-
enhancing identity management and to show how privacy-enhancing technologies can indeed close this gap.
PRIME elicits the detailed requirements from legal, social, economic, and application points of view and shows
how they can be addressed. PRIME will enable the users to effectively control their private sphere thanks to the
PRIME Architecture that orchestrates the different privacy-enhancing technologies, including the human-
computer interface. To validate its results, PRIME develops prototypes and conducts experiments with end-users
in specific application areas.
PRIME advances the state of the art far beyond the objectives of the existing initiatives to address foundational
technology, through PRIME research on human-computer interface, ontologies, authorisation and cryptology,
anonymous communications, and privacy-enhancing identity management systems architecture and assurance
methods, taking into account legacy and emerging systems.
PRIME raises awareness of privacy-enhancing identity management through its white paper and tutorials, as
well as press releases, leaflets, slide presentations, and scientific publications. The following PRIME materials
are available from http://www.prime-project.eu:
Introduction to PRIME
• Press releases, leaflets, and slide presentations outline the project objectives, approach, and expected
results;
• The PRIME White Paper introduces privacy-enhancing identity management issues and PRIME's
vision, solutions, and strategy;
• Tutorials introduce major concepts of privacy-enhancing identity management for use by the
software development community and the general public.
PRIME technical materials
• PRIME Framework reviews privacy-enhancing identity management issues, PRIME legal, social,
and economic requirements, PRIME concepts and models, and PRIME architecture outline;
• PRIME Requirements analyses in-depth the legal, social, economic, and application requirements.
They comprise generic requirements, as well as specific, scenario-based requirements of selected
application areas including eLearning, location-based services, and airport security controls.
• PRIME Architecture describes in-depth the organisation and orchestration of the different privacy-
enhancing technologies in a coherent PRIME system;
• Annual research reports review the research results gained in PRIME over the past years, and the
research agenda for the subsequent years;
• HCI Guidance provides a comprehensive analysis of the Human-Computer Interface requirements
and solutions for privacy-enhancing identity management;
• Assurance methods surveys the existing assurance methods that are relevant to privacy-enhancing
identity management;
• Evaluation of prototypes assesses the series of early PRIME technology prototypes from the legal,
social, and economic standpoints;
• Scientific publications address all PRIME-related fields produced within the scope of the project.
PRIME work plan
PRIME global work plan provides an excerpt of the contract with the European Commission.


Foreword
Several PRIME partners have authored or contributed text to chapters of this deliverable; see the
list below. The document was edited by John Sören Pettersson (Karlstads universitet), who also
contributed most of the text.

Main contributors to content:


Executive Summary: John Sören Pettersson (Karlstads universitet). Introductory paragraphs adapted from PRIME White Paper V2.

Chapter 1: John Sören Pettersson assisted by Simone Fischer-Hübner (Karlstads universitet)

Chapter 2: John Sören Pettersson assisted by Simone Fischer-Hübner; input also from Ninni Danielsson (Karlstads universitet)

Chapter 3: Simone Fischer-Hübner and John Sören Pettersson; illustrations prepared by Jenny Nilsson and Nina Rönntorp, directed by the HCI team in Karlstad

Chapter 4: Section 4.2 by Simone Fischer-Hübner; other sections by John Sören Pettersson

Chapter 5: John Sören Pettersson assisted by several PRIME workers

Chapter 6: Elke Franz (Technische Universität Dresden) assisted by John Sören Pettersson. Test user citation collected and translated by Ninni Danielsson

Chapter 7: Jan Zibuschka (Johann Wolfgang Goethe-Universität Frankfurt am Main) and John Sören Pettersson, with input from Georg Kramer, Tobias Kölsch and Marc Wilhelm (T-Mobile International)

Chapter 8: Mike Bergmann (Technische Universität Dresden) with contributions from John Sören Pettersson

Chapter 9: Mike Bergmann

Chapter 10: John Sören Pettersson with contributions from Pete Bramhall (Hewlett-Packard Ltd.)


Table of Contents
Executive Summary .................................................................................................................................. 10

1 Introduction ............................................................................................................................. 12

2 Related work............................................................................................................................ 14
2.1 Perceptions of fairness of data processing .................................................................................... 14
2.1.1 User perception of and trust in web sites ......................................................................................... 14
2.1.2 Trustguide........................................................................................................................................ 16
2.2 Specific UI paradigms for privacy systems.................................................................................... 17
2.3 Usability of security systems........................................................................................................... 18
2.4 PISA: from privacy principles to HCI requirements..................................................................... 19
2.4.1 PISA: Testing of user interfaces for a privacy agent ....................................................................... 20
2.4.2 “Just-In-Time-Click-Through Agreements” (JITCTAs) ................................................................. 20
2.5 Privacy UI vocabularies and information structuring .................................................................. 21
2.5.1 Multi-layered structure for compliance with the EU directive ........................................................ 21
2.6 New vistas for IDM systems: Tools for exercising rights.............................................................. 22

3 Introduction to PRIME UI paradigms (for web browsers)................................................. 24


3.1 Identity management paradigms.................................................................................................... 24
3.2 UI paradigms for interacting with service providers..................................................................... 25
3.2.1 Role-centred paradigm .................................................................................................................... 25
3.2.2 Relationship-centred paradigm ........................................................................................................ 26
3.2.3 TownMap-based paradigm .............................................................................................................. 26
3.3 Setting privacy preference for PreSets/PrivPrefs .......................................................................... 28
3.4 “Send Personal Data?” .................................................................................................................. 28
3.5 Assurance evaluation ..................................................................................................................... 30
3.6 Data Track ...................................................................................................................................... 30

4 HCI guidance for privacy ....................................................................................................... 32


4.1 HCI-P.............................................................................................................................................. 32
4.1.1 Different approaches to privacy questions in HCI........................................................................... 32
4.1.2 The present approaches to privacy questions in HCI....................................................................... 34
4.2 From legal privacy requirements to HCI guidance....................................................................... 35
4.3 From socio-cultural requirements to HCI guidance..................................................................... 40
4.4 General guidelines for usable privacy ........................................................................................... 41
4.5 Testing and design process............................................................................................................. 44
4.5.1 Guideline for the design process...................................................................................................... 44
4.5.2 Commentary on testing.................................................................................................................... 45
4.5.3 Checklists for the socio-cultural requirements ................................................................................ 46
4.6 Conclusion on HCI principles ....................................................................................................... 48

5 Discussion on specific UI designs ........................................................................................... 49


5.1 Conceptual difficulties ................................................................................................................... 49
5.2 Relationship-centred paradigm...................................................................................................... 50
5.2.1 Accessing web sites and privacy preferences via bookmark lists.................................................... 50
5.2.2 TownMap ........................................................................................................................................ 50
5.3 Privacy preference setting .............................................................................................................. 53
5.4 Mapping preferences and sending personal data.......................................................................... 56
5.4.1 “Context Menu” for data disclosure and consent giving ................................................................. 56
5.4.2 Drag-and-drop for data disclosure and consent giving .................................................................... 57
5.4.3 Separate consent window “Send Personal data?”............................................................................ 57


5.4.4 Setting obligations for data processing ............................................................................................ 58


5.4.5 Consent dialogue aware of purpose – data types matching ............................................................. 59
5.4.6 Disclosing anonymous credentials .................................................................................................. 62
5.4.7 Anonymous email addresses for post-consent notification messages.............................................. 63
5.4.8 Unresolved issues for consent dialogues ......................................................................................... 63
5.5 Assurance evaluation and linkability estimations ......................................................................... 64
5.5.1 Test: “Privacy functionality check” at the moment of data release ................................................. 64
5.5.2 Proposal for “Assurance Evaluation” prototype window ................................................................ 66
5.6 Data Track ...................................................................................................................................... 67
5.7 PRIME Identity Manager .............................................................................................................. 70
5.7.1 Access to the administration functions ............................................................................................ 70
5.7.2 Administration of the user side identity manager ............................................................................ 71
5.7.3 Access to administration of data and preference settings ................................................................ 72
5.8 Animations and movements ........................................................................................................... 73
5.8.1 Utilising movements in spatial relationships ................................................................................... 73
5.9 Icons................................................................................................................................................ 75
5.10 Terms .............................................................................................................................................. 76

6 Rationale behind the CeL APV2 user interface.................................................................... 77

7 Rationale behind the LBS APV2 user interface ................................................................... 81

8 Ontologies for HCI .................................................................................................................. 84


8.1 Introduction.................................................................................................................................... 84
8.2 Ontology usage in PRIME ............................................................................................................. 85
8.3 Possible future extensions.............................................................................................................. 86

9 Overview of connections between PRIME core and UI components.................................. 88


9.1 PRIME UI modules........................................................................................................................ 88
9.1.1 PRIME Console............................................................................................................................... 88
9.1.2 PRIME “Send Personal Data?” (AskSendData) .............................................................................. 89
9.1.3 PRIME FormFiller........................................................................................................................... 89
9.1.4 PRIME Console Mediator ............................................................................................................... 89
9.2 Inter-module communication flow ................................................................................................ 91
9.3 HTTP X-PRIME header flag......................................................................................................... 91

10 Outlook................................................................................................................................. 93

11 References ............................................................................................................................ 94


Lists of Figures and Tables


Figure 1 Bookmark list with icons for privacy preferences (PrivPrefs) .......................................... 26
Figure 2 TownMap with building tools visible................................................................................ 27
Figure 3 Tilted TownMap visible .................................................................................................... 27
Figure 4 Deriving icons for privacy preferences from other icons .................................................. 28
Figure 5 One design of the “Send data?” dialogue window. ........................................................... 29
Figure 6 A simplified assurance control for end users..................................................................... 30
Figure 7 A Data Track design with several search options.............................................................. 31
Figure 8 Structured overview of guidelines for usability in security applications .......................... 43
Figure 9 Traditional “Go” button (left) and address field with two “Go’s” (right) ......................... 50
Figure 10 DADA to send credit card information ............................................................................. 51
Figure 11 Asking Data Track for information on name disclosures.................................................. 52
Figure 12 CrossRoad, two views ....................................................................................................... 52
Figure 13 Context Menu appearing on top of the web page when user clicks PRIME button.......... 56
Figure 14 Data and proof fields have been put in editing mode ........................................................ 57
Figure 15 Two views of consent dialogue with (missing) proofs (= certificates) ............................. 58
Figure 16 “What to do” when missing a proof .................................................................................. 58
Figure 17 PrivPref Profiled Shopping informs the “Send data?” about purposes and data types...... 60
Figure 18 First four steps in a stepwise disclosure act....................................................................... 61
Figure 19 Final page (blown up) stepwise mode of multipurpose “Send Personal Data?” ............... 62
Figure 20 Original UI and the explanation drawer translated into English ....................................... 65
Figure 21 Four buttons for quick access to assistance functions ....................................................... 69
Figure 22 A user interface to manage the IPV2 manager (fold out view) ......................................... 70
Figure 23 Old mockup of changing to anonymous browsing when browsing as registered customer......... 71
Figure 24 Proof Portfolio view within Personal Data window.......................................................... 73
Figure 26 Three eye icons to alternate with traffic light spots........................................................... 76
Figure 27 DSM Configuration Tool .................................................................................................. 78
Figure 28 CeL-adapted “Send personal data?”.................................................................................. 79
Figure 29 Location Intermediary (LI)................................................................................................ 81
Figure 30 Policy configuration and full privacy policy ..................................................................... 83
Figure 31 Service subscription and Data Track................................................................................. 83
Figure 32 Simple example of Subject-Predicate-Object relation....................................................... 84
Figure 33 RDF example defining properties of 'birthDate'................................................. 85
Figure 34 Tentative example with resources for data type terms instantiation in two languages...... 86
Figure 35 How different languages would claim different space for button captions ....................... 87
Figure 36 PRIME Console Main Window (in IPV2) ........................................................................ 89
Figure 37 Interaction flow ................................................................................................................. 91
Figure 38 HTTP header example with X-PRIME flag ...................................................................... 92

Table 1 Privacy Principles, HCI requirements, and possible PRIME UI solutions........................ 36


Table 2 Factors promoting adoption, HCI requirements, and possible PRIME UI solutions ........ 40
Table 3 Design guidelines for “applications that must set a security policy, their origin and motivation”. 41
Table 4 Privacy usability in relation to the security usability guidelines ....................................... 42
Table 5 Checklists for the socio-cultural requirements .................................................................. 46
Table 6 Data types (tentatively suggested) in relation to stated purposes ...................................... 54
Table 7 JSPs for retrieving and manipulating data in the PRIME core.......................................... 90


List of acronyms
AP Application Provider, Application Prototype
APV2 Application Prototype version 2
APWG Anti-Phishing Working Group
ATUS A Toolkit for Usable Security (from Universität Freiburg)
CeL Collaborative eLearning
DADA Drag-and-drop agreement
DSM Decision-suggesting Module (in BluES’n)
DRIM Dresden Identity Management
DS Data Subject
EC European Commission
EU European Union
EuroPriSe European Privacy Seal
HCI Human–Computer Interaction
HCI-P Privacy HCI
HCI-S Security HCI
IPV1 (PRIME) Integrated Prototype version 1
IPV2 (PRIME) Integrated Prototype version 2
IPV3 (PRIME) Integrated Prototype version 3
HTTP HyperText Transfer Protocol
IAP Intra-Application Partitioning
ID Identity
IDM Identity Management
ISO International Organization for Standardization
JSP Java Server Pages
JITCTA Just In Time Click-Through Agreement
LBS Location-Based Services
LI Location Intermediary
MO Mobile Operator
OIP Ontology Information Point
OWL Web Ontology Language
P3P Platform for Privacy Preferences
PD Personal Data
PET Privacy Enhancing Technology


PICOS Privacy and Identity for Community Services


pID Partial Identity (pIDs = partial identities)
PII Personally Identifiable Information (often synonymous with ‘Personal Data’)
PIM Privacy and Identity Management
PISA Privacy Incorporated Software Agent
PKI Public Key Infrastructure
PRIME Privacy and Identity Management for Europe
PS Purpose Specification
RDF Resource Description Framework
RP Retention Period
UI User Interface
URL Uniform Resource Locator
W3C World Wide Web Consortium
XML eXtensible Markup Language
XUL XML User interface Language


Executive Summary
PRIME project. The guiding principle in the PRIME project is to allow individuals to be in control of
their personal data. The notion of user control has been adopted in many recent user-centric identity
management initiatives. However, most of these approaches only provide the first steps on the way to
a new generation of identity management systems. They do not provide adequate safeguards for
personal data and are restricted to giving individuals limited control over their personal data.
Effective management of one’s own privacy in the online world requires new tools starting with the
minimisation of personal data disclosure. Furthermore, users can be empowered with tools that allow
them to evaluate the privacy capabilities and performance of service providers, and to match privacy
policies with user preferences and also to negotiate privacy policies with service providers. Service
providers would need systems that enforce agreed policies by technical means and keep track of actual
use, while users would need tools to keep track of data collection and agreed usage. In addition, user
applications should be able to connect to service providers to verify that data usage complies with the
negotiated policies; tools should be provided to users for exercising legal privacy rights.
Human-computer interaction. The tools just mentioned are however not instruments for effective
privacy protection if people in general cannot use them. Giving these functions informative and
intuitive user interfaces is therefore crucial for putting individuals in control of their personal data.
The Human-Computer Interaction (HCI) work package has in collaboration with other PRIME work
packages developed a host of concepts and concrete designs. User tests have gathered input from
ordinary Internet users on their ability to use the PRIME tools and their acceptance of such privacy-
enhancing technology. Guidelines for implementing appropriate HCI for privacy-related functionality
have been derived from the project results. The HCI work package's approaches to PRIME's
main functionalities are sketched below.
Minimisation of personal data disclosure. In an everyday web scenario, where an Internet user surfs
to the homepage of a service provider, browses several pages, and then decides to order a service,
keeping data disclosures to a minimum is often desirable.
Within the PRIME project an electronic visiting card concept called “PrivPref” has been elaborated,
whereby users with no knowledge of the basics of data processing shall be able to use different
Privacy Preference settings. Anonymous communication, pre-limited data sets, and pre-defined
purposes for data processing are features included in the different PrivPrefs.
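As an illustration, a PrivPref of this kind can be thought of as a small, named bundle of settings. The following Python sketch is purely illustrative; the class and field names (PrivPref, allowed_data, purposes) and the two example presets are assumptions for exposition, not the PRIME implementation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PrivPref:
    """Illustrative privacy-preference bundle (names are hypothetical)."""
    name: str
    anonymous_communication: bool          # route traffic anonymously?
    allowed_data: frozenset = frozenset()  # pre-limited set of data types
    purposes: frozenset = frozenset()      # pre-defined processing purposes


# Two example presets, roughly in the spirit described above.
ANONYMOUS = PrivPref("Anonymous", anonymous_communication=True)
PROFILED_SHOPPING = PrivPref(
    "Profiled Shopping",
    anonymous_communication=False,
    allowed_data=frozenset({"name", "address", "email"}),
    purposes=frozenset({"order processing", "delivery"}),
)
```

A user without data-processing expertise would then only pick a preset by name, while the preset itself pre-limits what may be disclosed and for which purposes.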
Another strand of design developed conceptually within PRIME concerns the “TownMap” metaphor
whereby different areas on a stylised street map represent different privacy preferences. The map
design is an attempt to make preference settings more accessible and, hopefully, more understandable
to users. It also makes it possible to show the user how data transfers take place, and allows
the user to interact graphically by dragging data icons to service providers.
Evaluate the privacy capabilities and performance. A data disclosure function should be augmented
with some way to check the trustworthiness of service providers that are not familiar to the user.
Within PRIME the focus is on privacy-related information that is beneficial for the user to
have; this information is similar to what is available nowadays for price comparisons and for
customer satisfaction ratings concerning product quality, delivery, etc. In essence, the service
providers should be trustworthy in processing the user's personal data in a privacy-friendly manner.
To this end, an “Assurance Evaluation” component has been implemented within the PRIME project.
The Assurance Evaluation can rely partly on other parties performing more advanced checks and focus
on checking ‘trust marks’, especially privacy seals, issued by such parties.
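The trust-mark checking step can be pictured as a simple lookup of a provider's claimed seals against a list of recognised seal issuers. This is a minimal sketch under that assumption; the function name, the set of issuers, and the seal strings are all hypothetical examples, not part of the implemented Assurance Evaluation component.

```python
# Hypothetical sketch: separate a provider's claimed privacy seals into
# those from recognised issuers and those the evaluation cannot vouch for.
TRUSTED_SEAL_ISSUERS = {"EuroPriSe", "ULD privacy seal"}  # example values


def evaluate_assurance(claimed_seals):
    """Return (recognised, unrecognised) seals for a simple trust summary."""
    recognised = [s for s in claimed_seals if s in TRUSTED_SEAL_ISSUERS]
    unrecognised = [s for s in claimed_seals if s not in TRUSTED_SEAL_ISSUERS]
    return recognised, unrecognised
```

In practice the more advanced checks (e.g. verifying that a seal was genuinely issued) would be delegated to the parties issuing the trust marks, as noted above.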
Matching and negotiating privacy policies. Using pre-defined purposes for data processing makes it
possible to alert users to data requests that are excessive. This is included among the PrivPref
features. On some occasions it may be very hard to prescribe how much data is enough. For instance,
in the case where someone buys something but also agrees that (some) personal data may be used for
Final version 1, dated 01 February 2008 Page 10


PRIME D06.1.f
Privacy and Identity Management for Europe

sending information in the future on “special offers”, it would be hard to construct a PrivPref that
can foresee in advance what types of data and what retention period should be permitted. Much depends
on the expectations the user has, in each situation, of the promised special offers. The
PRIME HCI work package has developed concepts for manually setting the obligations that the data
processor must follow.
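A minimal sketch of such a matching step follows. The data structures and names are invented here; PRIME’s actual policy matching is considerably richer:

```python
# Hypothetical PrivPref matching: flag requested data items that exceed
# what the user's pre-defined preference allows for a given purpose.

PRIVPREF = {
    "purchase": {"name", "delivery address"},   # purpose -> releasable items
}

def excessive_items(purpose, requested):
    """Return the requested items the UI should warn the user about."""
    allowed = PRIVPREF.get(purpose)
    if allowed is None:
        return set(requested)    # purpose not foreseen: warn about everything
    return set(requested) - allowed

# A request for a birth date during a purchase would trigger an alert:
print(excessive_items("purchase", {"name", "delivery address", "birth date"}))
```

The point of the sketch is only that excess is computable once purposes and data sets are pre-defined; the hard cases discussed above, such as retention periods for “special offers”, fall outside it.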
Keep track of data collection. Within the PRIME prototypes, the “Data Track” function allows a user
to look up what personal data he or she has released to others. Data transmissions administered by the
user-side PRIME system are logged and stored on the end user’s device. Being able to track what data
were disclosed, when, to whom, and how the data are further processed is an important feature for
making the processing of personal data transparent. The Data Track stores transaction records
comprising the personal data sent, including pseudonyms used for anonymous transactions and credentials
used to prove the correctness of the data (especially important for non-anonymous transactions); the
transaction records also include the date of transmission, the purpose of data collection, the recipient, and all
further details of the privacy policy that the user and recipient have agreed upon.
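The kind of record described above might be sketched as follows. The field names are illustrative only, not the actual Data Track schema:

```python
from dataclasses import dataclass

@dataclass
class TransactionRecord:
    """One entry in a locally stored Data Track (illustrative fields)."""
    date: str                 # when the data were transmitted
    recipient: str            # who received them
    purpose: str              # declared purpose of the collection
    data_sent: dict           # attribute name -> value released
    pseudonym: str = ""       # pseudonym used, if any
    credentials: tuple = ()   # credentials proving data correctness
    policy: str = ""          # full agreed privacy policy

def disclosures_to(records, recipient):
    """Support the transparency function: what did I tell this recipient?"""
    return [r for r in records if r.recipient == recipient]
```

Storing the pseudonym alongside each record is what later enables the rights-exercising functions discussed next.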
Exercising rights. The records in the Data Track contain valuable information in case a user feels that
his data have been wrongly used. Storing the pseudonyms used for transactions allows a user to identify
himself as a previous contact if he wants to exercise his rights to access, rectification,
blocking, or deletion while still remaining pseudonymous. These features have so far been realised only as
dummy implementations, as they demand some further elaboration on the services side, but they
constitute interesting prospects for empowering the user beyond the concept of “minimisation of data
disclosure”. To these features could be added links to the online help functions of various benevolent
organisations, such as issuers of privacy seals, data protection boards, and consumer organisations.
Specific application prototypes. A demonstrator based on web scenarios has been built to demonstrate
the PRIME Integrated Prototype. There are also two other prototypes with specific scenarios: the
Collaborative eLearning prototype and the Location-Based Service prototype (mobile phones). For these,
user interfaces have been developed emphasising functions relevant for privacy-friendly collaborative
eLearning and location-based services, respectively (brief descriptions can be found in this report).
Conclusion. The work on Human-Computer Interaction in the PRIME project has produced many
results on user interfaces for PRIME-based products and on users’ reaction to such technology. There
is now a need to present a comprehensive overview of critical comments and recommendations
pertaining to the kind of user interface designs implemented or considered within the PRIME project.
This is the motivation to produce a final deliverable on “HCI Guidelines”.
The target group of these guidelines consists of application providers interested in adopting the
PRIME architecture or similar architectures, as well as programmers and user interface designers
working in the area of privacy and security functionality. They will in this deliverable find suggestions
for user interface design as well as discussions of problematic issues. Moreover, PRIME ontologies
are presented and an architecture-level sketch is given to illustrate how user interface components
connect to the PRIME core.


1 Introduction
Informative and intuitive user interfaces are crucial for effective privacy protection, as stated in
Annex 1 of the project description of PRIME, Privacy and Identity Management for Europe (8.1.1,
Deliverables). The present deliverable provides guidelines for the design of the user interfaces (UIs)
for Privacy and Identity Management Systems (PIM Systems) with a focus on the user-side UIs, rather
than the services-side.
The work on Human-Computer Interaction (HCI) in the PRIME project has produced many results on
user interfaces for PRIME-based products and on users’ reaction to such technology. The HCI work
package has in collaboration with other PRIME work packages developed many interesting ideas and
concrete designs, and in user tests gathered input from ordinary Internet users on different aspects of
the question of user acceptance of privacy-enhancing technology. These results have been published in
deliverables1 and working papers as well as in conference papers and book chapters.
There is now a need to present a comprehensive overview of recommendations and critical comments
pertaining to the kind of UI designs implemented or considered within the PRIME project. This is the
motivation to produce a final deliverable on “HCI Guidelines”.
In order to define the state-of-the-art and to put PRIME results in perspective, this deliverable starts
(in chapter 2) with an account of the literature on privacy-related HCI questions. This account includes
comments on the relevance for the PRIME work.
Thereupon follows an “Introduction to PRIME UI paradigms (for web browsers)” in chapter 3. This
introduction is rather brief but is needed for the two subsequent chapters: the tabulations of HCI
requirements in chapter 4 also mention the PRIME proposals to meet these requirements, and the discussions
in chapter 5 on design considerations and evaluations depend on the specific prototypes with
their specific UIs.
The introduction in chapter 3 covers the main user interfaces that have been discussed for the service–
client web scenarios, i.e. the scenarios that have been used for demonstrations of the different versions
of the PRIME integrated prototype. Section 3.1 gives a condensed presentation of these main user
interfaces, while somewhat more comprehensive presentations, including illustrations, are given in 3.2-
3.6.
Chapter 4 relates privacy principles to requirements on the HCI design. It starts with a discussion of
the different focuses a dedicated Privacy HCI discipline, “HCI-P”, may have. Different perspectives give
different focuses, and the rest of chapter 4 takes a few compelling perspectives and derives HCI
guidelines from them. Firstly, chapter 4 tabulates the HCI requirements and implications derived from
legal privacy principles. Secondly, it lists a set of adoption criteria for privacy-enhanced
applications for end users. Thirdly, it derives eight general HCI guidelines for PIM systems. Finally,
guidelines for a user-oriented design process are presented, together with advice on testing and evaluation.
The development of the PRIME integrated prototype has been paralleled by the development of some
minor demonstrator scenarios, all based on web browsing use. Chapter 5 collects ideas and lessons
learnt during this process. The chapter discusses specific points in some detail, with illustrations of
mock-ups used in user tests and of different UI elements that have been considered for good usability of
privacy functions. The considerations are often applicable outside the web browsing sphere.
Application prototypes have been developed within PRIME, and these have naturally also been
endowed with UIs. The CeL, Collaborative eLearning, prototype includes privacy aspects of peer-to-
peer communication and thus moves the focus away from the service-client view of the demonstrators
of the Integrated Prototype. The LBS, Location-Based Service, prototype develops the service-to-

1 Publicly available PRIME deliverables are listed on page 2. One HCI deliverable, D06.1.a General Mock-ups, is available for project members and parts can be made available for interested readers.


service issue, but for HCI the service–client relationship is perhaps the most interesting aspect of the
LBS prototype. Chapters 6 and 7 give overviews of the HCI issues encountered in these two
prototypes; more information on UIs and functionalities is to be found in the deliverable connected to
these prototypes, D04.1.b.
“Speak the user’s language” (Nielsen, 1994) is a well-known phrase in usability design. However,
there might be a conflict between legally binding statements and everyday language. It is essential to
achieve consistency in the presentation of legally binding information and to optimise the usability of the
system by providing tested strings in multiple languages. Chapter 8 outlines ontologies meant to
provide standardised descriptions of the concepts being communicated to the user. The PRIME ontologies
are at the current state rather primitive, but this chapter explains the potential of ontologies for HCI
design.
Chapter 9 contains a short overview of some technical aspects of how the user interface of an
application interacts with the PRIME system. The chapter gives an architecture-level sketch of how
UI components connect to the PRIME core.
Finally, some prospects for future work are given in chapter 10, “Outlook”. New projects are in
preparation; we mention in particular PrimeLife, which will extend the PRIME perspective to
online social networks and the life-long management of privacy and identity.


2 Related work
Scanning the fields of privacy and HCI yields a number of articles illuminating interesting
intersections. In general, the works reported have had different focuses, but in the last few years
some comprehensive sources have also appeared, such as Microsoft’s Privacy Guidelines for Developing
Software Products and Services and the SOUPS conference, the Symposium on Usable Privacy and
Security, held annually since 2005 at Carnegie Mellon University in Pittsburgh. One may also
mention Iachello and Hong’s (2007) 137-page survey of HCI research on end-user privacy. Below,
papers of various origins are presented under the following headlines:
• Perception of fairness of data processing
• Specific UI paradigms for privacy systems
• Usability of security systems
• PISA: from privacy principles to HCI requirements
• Privacy UI vocabularies and information structuring
• New vistas for IDM systems: Tools for exercising rights
Special relevance to particular PRIME aspects is pointed out when deemed appropriate, but every
work is of general relevance for the PRIME goals.

2.1 Perceptions of fairness of data processing


Users’ trust in user-side and services-side PRIME-enabled systems is crucial. The articles reviewed in
this section provide information about how users experience trust in web sites, about factors that
affect perceptions of the security and privacy of e-commerce web sites (2.1.1), and about data
processing in general (2.1.2). HCI design for trust will be discussed later in this deliverable; in
this section we highlight a few conclusions that are in concordance with the HCI work pursued
in the PRIME project.

2.1.1 User perception of and trust in web sites


A review of several surveys about people’s attitudes towards privacy on the Internet is found in
PRIME deliverable D06.1.b2 (Chapter 3 and Appendix A). In the following some references are made
to publications not only on privacy but also on security. Questions of perceived security threats have
strong connections to perceived privacy threats, as can be seen in the D06.1.b review of user surveys.
For instance, in one of the surveys reviewed, 80% of the respondents were concerned about data
processor “not keeping data secure at the risk of being stolen” and 72% about processor “not
collecting information in a secure way.” (p. 43; originally from Information Commissioner UK, 2004)
According to Kobsa (2001, 2002), numerous surveys show that users are concerned about threats to
their privacy when using the Internet, e.g. about the disclosure of their personal information online
and about being tracked online. Kobsa claims that privacy will have to be tailored to “each
individual user, taking his or her preferences and the jurisdiction at both the system’s as well as the
user’s location into account” (2001; cf. 2002). Kobsa (2002) expresses the opinion that client-side
instead of server-side personalization would give users exclusive control of all purposely collected
data, an opinion which lends support to the PRIME HCI efforts to develop preference settings and data
disclosure dialogue boxes. He also suggests some guidelines for the design of personalised

2 To the surveys mentioned in that deliverable one can add the discussion by Perri 6 (2001), “Can we be persuaded to become PET-lovers?”. A study of web sites’ privacy policies and practices is found in a report from Consumers International (2001).


hypermedia systems (in effect, web sites) but furthermore points out that “These guidelines can be
complemented or modified by users’ individual privacy preferences.” (2001) An important
consideration by Kobsa is that “a single solution for all privacy issues does not exist” (ibid.).
Turner (2001, 2002, 2003) reports on factors that affect the perception of security and privacy of e-
commerce web sites. He claims that users’ “perception of security of e-commerce sites does not
depend on the site’s technical security but on their immediate experience with the site and on their
history with the company and the site” (2003). He refers to a number of studies which have
investigated how users form their judgement of the security of web sites. His own test results (2001)
showed that experts’ feeling of security when transacting with a web site depended on factors such as
(1) their deep technical knowledge, (2) knowledge of good security processes, and (3) the company’s
reputation. Ordinary users relied on factors such as (1) the company’s reputation, (2) their
experience with the web site, and (3) recommendations from independent third parties (compare
PRIME work on assurance evaluation; 3.5). In his report from 2002 he provides evidence for claiming
that when it comes to security and trust on web sites, visual presentation is more important to users than
technical security information. Turner (2003) refers to Cheskin (1999), who holds that it is
necessary to understand how trust forms over time as a result of experience and that longitudinal
studies are needed.
According to Nielsen et al. (2000), “Trust is hard to build − and easy to lose. A single violation of trust
can destroy years of slowly accumulated credibility”. This report gives a broad overview of trust-
related issues brought up by users in a study where they were asked to carry out basic shopping
tasks on US e-commerce web sites. The results from the study show that many of the users “had little
trust even in renowned online shopping web sites”, in contrast to the finding of Turner (2001). To win
their trust they wanted the web sites to provide: succinct and readily accessible information about the
company; fair pricing, fully revealed; sufficient and balanced product information; correct, timely,
professional site design; clear and customer-friendly policies; appropriate use of personal
information; trustworthy security; and access to helpful people. This last point has been
elaborated upon in reports from the PRIME project in connection with results from our usability
evaluation on (dis)trust; more on this follows in section 2.6 of this chapter.
D’Hertefelt (2000) “noticed that people’s perception of security when doing on-line transactions
depends on the simplicity of the web site and on the availability of user support”; he notes in
particular that people may feel secure simply because they believe the site is easy to use. His hypothesis is
that “The feeling of security experienced by a user of an interactive system is determined by the user’s
feeling of control of the interactive system”. D’Hertefelt gives three design guidelines for how to make
people trust a web service: it should be comprehensible, predictable, and flexible and adaptable. Patrick (2002 –
drawing on the PISA project; see below) discusses software agents from a trust perspective,
extending a model by Lee, Kim and Moon (2000) polarising trust and cost to a model polarising trust
and perceived risk. Among the trust factors mentioned are interface design but also other things such as
predictable system performance and the user’s experience, as well as shared values and the user’s ability to
trust. On the other hand, “a lack of alternative methods to perform a task” can lead to “feelings of risk”.
One may note that already in 1993 Bellotti and Sellen described “a framework for design for privacy
in ubiquitous computing environments”, where they stated that “privacy is a user-interface design
issue”, concluding with an example of a local online video application. In particular they found that
control and feedback were important for privacy in a ubiquitous computing environment. Nowadays,
more than a decade later, with omnipresent global wired and wireless networks, design for privacy
should be the concern of almost every application developer, especially the user interface developers,
to make people feel secure with their computing environments. This is not yet the case, but some work has
been done, as is evident from the following sections.
Lately, the privacy policies published by web services have become a target for discussion. Many
proponents of fair data processing hold that such privacy policies are effectively hidden by the web services,
and a recent study analysing their language found vague formulations giving the web sites much
freedom to do whatever they want with visitors’ data (Pollach, 2007; more on this in section 2.6).


Another recent study, “undertaken to determine whether a more prominent display of privacy
information will cause consumers to incorporate privacy considerations into their online purchasing
decisions,” found that the prominent display of privacy information “tends to lead consumers to
purchase from online retailers who better protect their privacy. Additionally, our study indicates that
once privacy information is made more salient, some consumers are willing to pay a premium to
purchase from more privacy protective websites.” (Tsai et al., 2007) Thus, there is an economic
impetus both to make the data handling less privacy invasive and to make the policy document easily
accessible to people (in Europe there are of course also legal reasons, as noted in sections 2.4 and 2.5).
Although research on trust-enhancing factors for web shops has by now been conducted for
around a decade, and web browsers are used by most groups in our society, there are still many
people who are hesitant to entrust web services with personal data. For good reasons, one may add,
because it is not only the debate over weak privacy policies that should make people wary, but also the
increasing number of reports on bogus web sites which try to imitate established and trusted web sites.
Added to this are ‘phishing’ email messages linking to such bogus sites. Phishers “use ‘spoofed’ emails to
lead consumers to counterfeit websites designed to trick recipients into divulging financial data such
as account usernames and passwords. Hijacking brand names of banks, e-retailers and credit card
companies, phishers often convince recipients to respond”, as the Anti-Phishing Working Group
explains (APWG, 2007). And often this really works: Karakasiliotis et al. (2007) found, in a web-based
survey presenting a mix of legitimate and illegitimate messages to highly educated persons, that
the 179 participants made incorrect classifications in 32% of the cases in total (i.e. classifying a bogus
message as legitimate or a legitimate one as bogus). It has also been shown that bogus websites have
many means to fool users into believing they are on a site which they are not. An illuminating usability
study on this topic was presented last year by Dhamija, Tygar, and Hearst (2006) “in which 22
participants were shown 20 web sites and asked to determine which ones were fraudulent. We found
that 23% of the participants did not look at browser-based cues such as the address bar, status bar and
the security indicators, leading to incorrect choices 40% of the time. We also found that some visual
deception attacks can fool even the most sophisticated users.” In another study, Schechter, Dhamija,
Ozment and Fischer conclude “We confirm prior findings that users ignore HTTPS indicators: no
participants withheld their passwords when these indicators were removed. We present the first
empirical investigation of site-authentication images, and we find them to be ineffective: even when
we removed them, 92% of participants who used their own accounts entered their passwords.” (2007)
Wu, Miller, and Little (2006) let test users enter sensitive information online via a browser sidebar. In
a usability study, this solution “decreased the spoof rate of typical phishing attacks from 63% to 7%.”
However, spoofing the ‘secure’ sidebar itself turned out to be an effective attack. There has been some
further discussion about how well security and privacy indicators work; see the overview by Cranor
(2006).3

2.1.2 Trustguide
British Telecom and Hewlett-Packard Labs in Bristol published a study on public attitudes in Britain
towards trust in our ever-evolving technological world. HP has since completed a new study,
Trustguide2, preliminary results of which were presented at the eChallenges event in autumn 2007.
We cite from this presentation as it also compares its main findings with the original Trustguide.
“The ability to specify preferences at the time personal information is released is clearly a popular
control, and one that can be easily understood by people if sensibly implemented. Preferences present
a useful and pragmatic alternative to other privacy enhancing techniques, e.g. anonymity and
pseudonymity. The key findings from our initial analysis of the study can be summarised as follows:
• Participants balanced convenience against risk (confirming Trustguide finding)

3 In addition one may refer to a Wiki on the W3C site, http://www.w3.org/2006/WSC/wiki/SharedBookmarks, and to “Phishing Tips and Techniques” by Gutmann, http://www.cs.auckland.ac.nz/~pgut001/pubs/phishing.pdf.


• Participants were wary and distrusting of new, unfamiliar services and retailers, but over time
would become more trusting in favour of maximising convenience
• Government and Banks were (with some reluctance) seen as trusted bodies. Further
outsourcing of ID management by government to other private organisations was not popular
(contradicting Trustguide finding where government was least trusted)
• Participants were divided on whether data should be held centrally or locally
• Participants were sceptical about the level of privacy that technology could offer (to a lesser
extent confirming Trustguide finding)
• Personal preferences were seen as useful controls, worthy of investing limited time to properly
manage. Providing choice suggested control and engendered trust
• Multiple privacy preferences (ID cards) were not considered useful or necessary
• Participants felt that certain items of PII are more sensitive than others, but struggled to value
each precisely, particularly with different usage scenarios
• Participants demanded clarity and transparency of use of their PII (confirming Trustguide
finding)
• The use of government issued ID cards for commercial applications was not popular”
(Tweney & Crane, 2007)
For the last point, it has to be taken into account that Great Britain does not have government-issued
ID cards right now, but plans to issue them soon to all citizens. The authors note that “the findings
endorse other studies around management of PII, e.g. the EU PRIME Project.” (ibid.) As an example
from the PRIME project, one can refer to a study in which HP participated, where test users appreciated the
control which an interface for setting obligations for data controllers4 promised to give them
(Pettersson et al., 2006). The method in the Trustguide studies has also been akin to the one often
employed in PRIME usability studies and testing: participants are confronted with technology that does not
yet really exist, and the stimuli are used to provoke responses in discussions afterwards (the Trustguide
studies have used focus groups, while the usability studies in PRIME were limited to individual post-
test interviews or questionnaires).

2.2 Specific UI paradigms for privacy systems


Gandon and Sadeh (2003) describe work on a semantic e-wallet that is meant to support “automated
identification and access of personal resources”. In the e-wallet the user can specify more details than
in other similar applications, even dynamic information such as telling some people which room he is
in, while simply telling others whether he is at work or not, or whether he is in town or not. The e-
wallet also includes scenarios where the user can pretend he is at one place, when he actually is
somewhere else. These are interesting PIM (privacy and identity management) aspects for future
location-based service applications not covered by the PRIME LBS Application Prototype (chapter 7).
The Privacy Bird, developed by AT&T within P3P, the Platform for Privacy Preferences Project,
functions as a plug-in to web browsers. It not only admits privacy preference settings by users but also
contains a feature for continuously, but not necessarily obtrusively, informing the user of the privacy
compliance of the web sites he visits. Moreover, Cranor (2002; Ch. 14) describes the user testing
conducted during the whole development process of the Privacy Bird. When Pettersson, Thorén, and
Fischer-Hübner (2003) tested a few interfaces for laymen who want to use P3P user agents for mobile
phones, the starting point was Privacy Bird. The tests were made with a faked WAP cell phone interface
4 ‘Data controller’ is defined in EU Directive 95/46/EC, Article 2(d), as “‘controller’ shall mean the natural or legal person, public authority, agency or any other body which alone or jointly with others determines the purposes and means of the processing of personal data”, whereas the ‘processor’ is the body “which processes personal data on behalf of the controller”, Article 2(e). Thus the processor is a subcontractor to the controller.


on a computer screen. Continuous alerting was found hard to make non-intrusive on a small screen.
On the other hand, with PRIME anonymisation there should be no need for such a feature, because
simple browsing will not reveal any information at all about the user. Cranor and co-workers have
developed search engines around the Privacy Bird concept (Gideon et al., 2006; compare Tsai et al.,
ibid.).
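The alerting behaviour can be caricatured in a few lines. The real Privacy Bird parses full P3P policies; the purpose names and preference set below are merely illustrative:

```python
# Sketch of a Privacy Bird-style check: compare purposes declared in a
# site's policy with the user's preferences and pick an indicator state.

USER_OBJECTS_TO = {"telemarketing", "profiling"}   # invented preference set

def bird_state(declared_purposes):
    """'red' on a conflict with the user's preferences, else 'green'."""
    return "red" if USER_OBJECTS_TO & set(declared_purposes) else "green"
```

As noted above, even such a binary indicator is hard to display continuously on a small screen without being intrusive.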
The PISA project – Privacy Incorporated Software Agents – which has influenced the PRIME HCI
guidelines gets an extensive treatment in section 2.4.
Furthermore, one presently sees several identity management technologies being developed around the
world (Windows CardSpace, Liberty Alliance, SXIP, Bandit, Netcraft, etc., as well as form-fillers in
web browsers); these are geared towards end-users. The greatest contribution of such systems to privacy
protection is their facilitation of ‘safe’ data disclosure, but this is only one part of the issue and
we do not review these systems here.

2.3 Usability of security systems


Identity management components sometimes figure prominently in papers on the usability of security
systems and are therefore of relevance to PRIME. Some other works on security and HCI are also
mentioned here.
Several authors have noticed the difficulties for end-users to set security parameters while heavily
simplified setting functions do not provide an adequate set of security levels (Kröger, 1999; Whitten &
Tygar, 1999; Jendricke & Gerd tom Markotten, 2000, Gerd tom Markotten 2002; several works by
Steven Furnell, i.a. 2004, 2005, 2006; Cranor & Garfinkel, eds., 2005). According to Nielsen’s (2004) report “User
Education Is Not the Answer to Security Problems”, accountability for security cannot be the users’
responsibility. He adheres to common recommendations about making security a built-in feature of all
computing elements and turning on all security settings by default, “since most people don’t mess with
defaults. Then, make it easy to modify settings so that users can get trusted things done without having
to open a wide hole for everybody”. This indeed sounds like the working premises adopted by PRIME,
with relationship ‘pseudonyms’ playing an important role in the user interface proposals while the
default is total anonymity (see next chapter, 3.2).
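Nielsen’s premise of protective defaults plus easy, targeted exceptions can be sketched as follows (the setting names are invented; this is not PRIME code):

```python
# Hypothetical settings model: start anonymous, relax per site only where
# the user has explicitly set something up, e.g. a relationship pseudonym.

DEFAULTS = {"anonymous": True, "release_data": False}

def effective_settings(site, exceptions):
    """Most protective defaults, overridden only by explicit per-site choices."""
    settings = dict(DEFAULTS)
    settings.update(exceptions.get(site, {}))
    return settings
```

A site without an explicit exception thus gets the fully protective defaults, while a trusted shop can be granted data release without weakening the settings for everybody else.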
Some proposals for how to advance the state of the art include:
• The Identity-Manager ‘iManager’ for which it is claimed that “all applications can be used
with only one security interface. This eases the handling of the security functionality”
(Jendricke & Gerd tom Markotten, 2000)
• “Integrated user-centred security engineering”, where the users are included in the whole
development process, to ensure “usable security” (Gerd tom Markotten, 2002)
• A measure for usability of security tools based on a categorisation of user errors (Kaiser &
Reichenbach, 2002)
• Developing a series of usability principles concerning the security features of a system
(Johnston et al., 2003)
In the last paper “HCI-S” is defined as “the part of a user interface which is responsible for
establishing the common ground between a user and the security features of a system. HCI-S is human
computer interaction applied in the area of computer security.” The HCI-S criteria presented by
Johnston et al. are based on Nielsen’s Ten Usability Heuristics (1994) but modified and reduced to be
relevant to security environments, such as encryption programs and firewalls. “HCI-S’s goal is to
improve the interface in order to improve the security. This leads to the system becoming more secure,
robust and reliable.” Johnston et al. claim that D’Hertefelt’s (2000; see above) analysis supports that
“the six HCI criteria lead to trust” (however, their comparison is somewhat unclear). It is not
unthinkable to develop an HCI genre “HCI-P” and a specific list of usability principles to check
specifically the usability of privacy functions. However, there are competing candidates for how an


HCI-P might be conceived. Rather than focusing on functions of a system, one point of departure
could be requirements for privacy, as done in the PISA project where legal requirements constituted
the starting point; cf. 2.4 below. More perspectives are conceivable as discussed in chapter 4.
A collection of guidelines by different authors is found in a paper by Herzog and Shahmehri (2007).
This will be discussed more in chapter 4; here we only briefly mention the work by these two
authors. In a series of papers, collected and summarised in a recent dissertation by Almut Herzog
(2007), it has been demonstrated how complicated and error-prone the configuration of the security
policy file is. Offline editing was found particularly inefficient and difficult for end users, and Herzog and
her colleagues “conducted a usability study of personal firewalls to identify usable ways of setting up
a security policy at runtime.” (2007, p. 5) This in turn resulted in a tool for setting Java security policy
at runtime, which was later subject to a usability study supporting the validity of their ideas and design
guidelines (discussed further in chapter 4) and also supporting the idea that users
should be included in the whole development process.

2.4 PISA: from privacy principles to HCI requirements


It is hard to find works specifically addressing the problem of how to make UI designs with
satisfactory usability including legal privacy aspects.5 The PISA (Privacy Incorporated Software
Agent) project has published an extensive report Handbook for Privacy-Enhancing Technologies
(2002). Chapter 12 in the Handbook deals with HCI questions (Human-Computer Interaction). We
will also refer to a paper by Patrick and Kenny (2003).
Patrick and Kenny list nine principles, “abstracted from the complexities of legal code” (ibid.), in a
“High-Level Summary of Privacy Principles” (Table 1, ibid.; italicised items indicate the principles
analysed in detail by the PISA project’s HCI team):
• Reporting the processing
• Transparent processing
• Finality & Purpose Limitation
• Lawful basis for data processing
• Data quality
• Rights
• Data traffic outside the EU (European Union)
• Processor processing
• Security
Patrick and Kenny remark that the principles “have HCI implications because they describe mental
processes and behaviours that the Data Subject [i.e., the user]6 must experience in order for a service
to adhere to the principles. For example, the principles require that users understand the transparency
options, are aware of when they can be used, and are able to control how their PII is handled.” (ibid.;
PII = Personally Identifiable Information which is more or less equivalent to Personal Data)

5 OECD (2003) has specified practical guidelines to assist government, business and individuals in promoting
privacy protection online at national and international levels and also proposes means of promoting education
and user awareness. However, OECD’s guidelines appear as too general to directly inform UI design and will
therefore not be treated here.
6 The definition of ‘data subject’ is wider than ‘the user’ but in this context they are equivalent. Definition
implied in EU Directive 95/46/EC, Article 2(a): “‘personal data’ shall mean any information relating to an
identified or identifiable natural person (‘data subject’)”.


In a very extensive table (Table 2, presented in a special Appendix), Patrick and Kenny present details
of the four privacy principles under consideration in their paper (i.e., the italicised principles in the list
above) and what HCI requirements they imply. “The core concepts in the requirements can be grouped
into four categories: (1) comprehension: to understand, or know; (2) consciousness: be aware, or
informed; (3) control: to manipulate, or be empowered; (4) consent: to agree.” (ibid.; also in Patrick et
al., 2003, p. 254) Furthermore, possible solutions for UI design are given for each principle and its
sub-principles in that table. This was the basis for the corresponding tables in D06.1.a&c and is also
adapted in the present deliverable, in section 4.2. These PRIME versions have been the point of
reference for the PRIME HCI work as concerns the EU directives.
Finally, Patrick and Kenny present a “Privacy Interface Analysis”. They claim that “The result of a
well-documented privacy interface analysis is a set of design solutions that will ensure usable
compliance with the privacy principles.” (ibid.) This trust in inspection methods can be questioned,
but in brief, the Analysis can be described in the way the authors do in section 4.3:
1. Develop a detailed description of the application or service from a use case and internal
operation point of view.
2. Examine each HCI requirement described […] to see if it applies to this application, using
Table 2 as a guide.
3. For each requirement that must be met, scrutinize the generic privacy solutions provided in
Table 2 (and the interface design methods […]) to determine an appropriate specific solution.
4. Organize the solutions according to use cases and capture them in an interface
requirements document.
5. Implement the interface according to the requirements document.

2.4.1 PISA: Testing of user interfaces for a privacy agent


All this would be fine if it were possible to design intelligible user interfaces directly from
requirements. As it is, the HCI discipline is full of methods for developing systems to meet
expectations rather than narrowly defined requirements. Methods for ensuring that UI designs really
meet expectations do of course include user testing of various kinds. This is elaborated upon in a more
comprehensive piece on Human Computer Interaction by the same authors plus Holmes and
Breukelen (Chapter 12 in the PISA Handbook from 2002, edited by van Blarkom, Borking & Olk).
While section 12.1 contains in essence the same material as Patrick and Kenny (ibid.), section 12.2,
entitled “Testing of User Interfaces for a Privacy Agent” contains a detailed description on how the
PISA Interface Prototype was developed and finally tested by more than fifty test subjects.
One can note the difference from how PRIME conducted usability tests. Some pilot testing was
conducted in the PISA project, but in principle their evaluation used a test involving 50+ test subjects
for one single version. In the D06.1.b usability evaluation of UI designs, many small tests were
conducted while continuously developing the user interfaces (a development which in no way is
completed given that PRIME is not meant to deliver commercial products). Both approaches can be
called HCI approaches. Perhaps the main difference is that between Psychology (Patrick and Kenny)
and Information Systems (Pettersson and Danielsson): in psychology the system has to be the
independent variable, while in information systems the development method could be seen as taking
that role.

2.4.2 “Just-In-Time-Click-Through Agreements” (JITCTAs)


Patrick and Kenny write about so-called ‘click-through agreements’, that is, when users are requested
to click an “I Agree” button or something similar to state their agreement with the provider’s conditions.
They try to avoid “a large, cumbersome, complicated User Agreement presented to the user only when
they begin to use a product or service” by introducing the concepts of ‘Just-In-Time-Click-Through
Agreements’ (JITCTAs). “The main feature of a JITCTA is not to provide a large, complete list of
service terms but instead to confirm the understanding or consent on an as-needed basis. These small
agreements are easier for the user to read and process, and facilitate a better understanding of the
decision being made in-context. Also, the JITCTAs can be customized for the user depending on the
features that they actually use, and the user will be able to specify what terms they agree with, and
those they do not. It is hoped that the users will actually read these small agreements, instead of
ignoring the large agreements that they receive today.” (Patrick & Kenny, 2003)
A caveat must be given for the tendency of people to automate behaviours so that the individual parts
of an action are executed without conscious reflection (Raskin, 2000). Thus, too many repetitious
click-throughs should be avoided. The PRIME HCI work package has developed a ‘clickless’
alternative concept, namely the Drag-And-Drop-Agreements, which, of course, can appear ‘just in
time’ (section 3.4).
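The behaviour of a JITCTA can be illustrated with a minimal sketch. The class and data structures below are ours, introduced for illustration only (they are not part of any PRIME implementation): consent is requested per data item only when a feature actually needs it, and remembered so the same small agreement is not shown repeatedly.

```python
# Hypothetical sketch of a Just-In-Time-Click-Through Agreement (JITCTA).
# Consent is requested per (recipient, purpose, item) only when needed,
# and remembered to avoid repetitious click-throughs (cf. Raskin, 2000).

class JitConsent:
    def __init__(self, ask_user):
        self.ask_user = ask_user      # callback that renders the small agreement
        self.granted = set()          # consents already given

    def request(self, recipient, purpose, items):
        """Return the subset of items the user consents to release."""
        released = []
        for item in items:
            key = (recipient, purpose, item)
            if key in self.granted or self.ask_user(recipient, purpose, item):
                self.granted.add(key)
                released.append(item)
        return released

# Example: the user agrees to everything except the phone number.
consent = JitConsent(lambda r, p, item: item != "phone")
out = consent.request("shop.example", "delivery", ["name", "address", "phone"])
print(out)  # ['name', 'address']
```

Note that a second request for already-consented items would not trigger a new dialogue, which is exactly the "as-needed basis" Patrick and Kenny describe.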
Patrick and Kenny (2003) refer to some articles that discuss click-through agreements and how to
make them valid: Thougburgh (2001), Slade (1999), Halket and Cosgrove (2001), and Kunz (2002).
More recently, see also Microsoft’s Privacy Guidelines (2006).

2.5 Privacy UI vocabularies and information structuring


The texts of privacy policies and preferences are not always easy for users to read (Pettersson
et al., 2003). There are three main problems: (1) the text may contain difficult words; (2) there is a lot
of text, which might be hard to skim when one’s focus is on the service provided; and (3) small screen
sizes on mobile devices may make it hard to get an overview.
A project that has devoted special attention to finding everyday English vocabulary for privacy policies
and preferences is the P3P “Platform for Privacy Preferences Project” (P3P, 2004). This project has
provided recommendations for Plain English formulations of the technical definitions found
in the P3P protocol. This would be an interesting input for a further development of the PRIME HCI
ontologies presented in section 8.3.
The Karlstad University group within PRIME has performed studies of non-native English speakers’
comprehension of privacy phrases from P3P and other sources. The findings are unequivocal: non-native
speakers may have considerable problems with even quite ordinary English words.
Pettersson (2004) discusses these findings and suggests linking difficult words to sub-
layers with explanations or to the user’s configuration menus. See also chapter 6 of D06.1.b for
vocabulary tests conducted within the first half-year of the PRIME project.

2.5.1 Multi-layered structure for compliance with the EU directive


Apart from the linguistic problems just mentioned, there is a general problem of how to ascertain the
legal compliance of privacy notices given by service providers to customers. Extensive texts will not
be read by customers – short texts may not provide all the information required by the law (in
particular, cf. the requirements stated in Article 10 of EU Directive 95/46/EC).
The Article 29 Data Protection Working Party (2004) recommends providing information in a “multi-
layered format under which each layer should offer individuals the information needed to understand
their position and make decisions”. The report gives support for “the concept of a multi-layered format
for data subject notices” and in particular for the “Acceptance of short notices as legally acceptable
within a multi-layered structure that, in its totality, offers compliance”. This is highly relevant for the
UI design in PRIME in combination with the Just-In-Time-Click-Through-Agreements, JITCTAs;
compare the quotation in section 2.4.2 above from Patrick and Kenny (2003). The recommendations
will be discussed in chapters 3 and 4 of the present deliverable.
The three layers recommended are:


Layer 1 – The short notice: “This must offer individuals the core information required under Article
10 of the Directive [95/46/EC] namely, the identity of the controller and the purpose of processing –
except when individuals are already aware – and any additional information which in view of the
particular circumstances of the case must be provided beforehand to ensure a fair processing. In
addition, a clear indication must be given as to how the individual can access additional information.”
(ibid.)
Layer 2 – The condensed notice, available at all times online but also in hard copy via written or
phone request, includes all relevant information required under the Directive as appropriate plus
information on redress mechanisms:
• The name of the company
• The purpose of the data processing
• The recipients or categories of recipients of the data
• Whether replies to the questions are obligatory or voluntary, as well as the possible
consequences of failure to reply
• The possibilities of transfer to third parties
• The right to access, to rectify and oppose
• Choices available to the individual
• A point of contact for questions and information on redress mechanisms either within the
company itself or details of the nearest data protection agency
Layer 3 – The full notice includes in addition to the points listed above also “national legal
requirements and specificities.” This layer, with national particularities, was hard to accommodate
within the frames of the PRIME project – in principle, the PRIME project has only presumed two
layers (one directly visible in UIs, one linked from these UIs).
The report contains three appendices with examples of the first two layers. It can be noted that the
layered principle does not in itself guarantee fully readable, comprehensive notices when
devices with small screens are used.
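The layered structure just described can be represented as a simple data structure. The sketch below is purely illustrative (the field names and example values are ours, not taken from the Working Party document); it shows how a short notice, a condensed notice, and a full notice could hang together, with the short layer always available as fallback.

```python
# Illustrative sketch of a multi-layered privacy notice (Article 29
# Working Party recommendation). All names and values are hypothetical.

LAYERS = ("short", "condensed", "full")

notice = {
    "short": {                      # Layer 1: core Article 10 information
        "controller": "Example Shop Ltd.",
        "purpose": "order processing and delivery",
        "more_info": "link to condensed notice",
    },
    "condensed": {                  # Layer 2: relevant Directive information
        "company": "Example Shop Ltd.",
        "purpose": "order processing and delivery",
        "recipients": ["delivery partner"],
        "obligatory": {"name": True, "phone": False},
        "third_party_transfer": False,
        "rights": ["access", "rectify", "oppose"],
        "contact": "privacy@example.com",
    },
    "full": {                       # Layer 3: national legal specificities
        "jurisdiction_text": "…",   # left open: varies per member state
    },
}

def render(notice, layer):
    """Return the requested layer, falling back to the short notice."""
    return notice.get(layer, notice["short"])

print(sorted(render(notice, "short")))  # ['controller', 'more_info', 'purpose']
```

On a small screen only the short layer would be shown directly, with the condensed and full layers reachable via links, which mirrors the two-layer approach presumed by PRIME.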

2.6 New vistas for IDM systems: Tools for exercising rights
Iachello and Hong end their 100+ page survey of HCI research in end-user privacy with a section
entitled “Trends and challenges in privacy HCI research” (2007, pp. 96-113). In this section they
expound five “grand challenges”:
• Developing more effective and efficient ways for end-users to manage their privacy (p. 96)
(Developing standard privacy-enhancing interaction techniques; p. 6, cf. p. 114)
• Gaining a deeper understanding of people’s attitudes and behaviours toward privacy
• Developing a “Privacy HCI Toolbox”
• Improving organisational management of personal data
• Reconciling privacy research with technological adoption models
Within the PRIME HCI work package it has been noted that when identity management in the future
finds more standardised forms, it will be easier to investigate tentative HCI privacy principles, as these
can be formulated for specific interactive procedures with which people will by then be familiar. This
is the common trend for all HCI research. Nevertheless, there is still a need for
conscious efforts to extend the present limits of privacy research within the HCI field: in particular,
there are interesting prospects in the intersection of Iachello and Hong’s first and fourth challenges, as
will be explained in this final section of the present chapter.


Being able to track what data were disclosed, when, to whom and how the data are further processed,
is an important feature providing transparency of personal data processing. However, today this
capability, if at all existent, resides on the service side only (i.e. at the data controller). Weitzner et al.
(2006) present a basic architecture for transparent and accountable data mining that consists of
general-purpose inference components to be installed on the service side, providing transparency of
inference steps and accountability of rules. Transparency in this context means that the history of data
manipulations and inferences is maintained and can be examined by authorised parties. Accountability
means that one can check whether the policies that govern data manipulations were adhered to.
Sackmann et al. (2006) present a secure logging mechanism for creating privacy evidence for ex post
enforcement of privacy policies. Both approaches thus comprise transparency tools to be installed on
the service side that allow checking whether personal data there were processed in a way
compliant with privacy legislation. They were, however, in contrast to the approach taken by PRIME,
not designed as end-user transparency tools.
The PRIME UI proposals include additional functionality in the history function, namely functions
that enable users to exercise online their rights to rectify data and request its deletion (see
section 4.2), to check that the data controller adheres to the obligations he has agreed to, and
to set obligations in the first place when providing the data controller with personal
data (see 3.4). In addition, within PRIME it has been discussed how to advise users about
their rights, because PRIME usability tests have shown that many users are not aware of their rights
(Fischer-Hübner et al., 2007). This should be compared with the Eurobarometer of 2003 that showed
that only one-third of EU citizens actually know of their right to retrieve information on what data is
stored about them and for what reason (Eurobarometer 2003, Q33 a.2).
There is a need to get assistance outside the privacy-enhancing system itself. As cited in 2.1, Nielsen
and co-workers found that “Access to helpful people” was a trust-giving factor. For a privacy-
enhanced identity management system, this could consist of getting in contact with the data
processors, but also with consumers’ organisations or data protection authorities. In the light of the findings
of Pollach (2007; again cf. 2.1), it is clear that users may need assistance reading privacy policies
before agreeing to send data, and also afterwards if they run into problems and would like to see if they
have any rights at all to claim. One may object that organisations working for the benefit of citizens
cannot be swamped by more or less justified complaints, but “access to helpful people” might in the
first instance be replaced by automated services or by pay-per-question manned assistance services
(Pettersson et al., 2005; Andersson et al., 2005). The important thing is that users can be guided to find
helpful information outside the data controller’s web site. According to the Eurobarometer, “The level
of knowledge about the existence of these independent authorities was low across the European Union
and two-thirds (68%) of EU citizens were not aware of their existence.” This underlines the opinion of
PRIME that, as with the legal rights, the UI should both inform users about the possibilities of getting
assistance and help users take advantage of these possibilities. We consider the work
conducted by PRIME partners in this field as being quite visionary (Pettersson et al., 2006; Fischer-
Hübner et al., 2007).


3 Introduction to PRIME UI paradigms (for web browsers)


The tabulations in chapter 4 of HCI requirements and of PRIME proposals to meet these requirements
as well as the discussions in chapter 5 on different design considerations and on evaluation results are
dependent on specific prototypes with specific UIs – hence this “Introduction to PRIME UI
paradigms (for web browsers)”. The introduction covers the main user interfaces developed to
illustrate the service–client web scenarios which have been used for demonstrating the different
versions of the PRIME integrated prototype.

3.1 Identity management paradigms


In order to facilitate interaction with different service providers, a web browsing computer user (or
mobile phone user) could have an electronic ‘visiting card’ providing the web sites with the data they
want to have. However, one card may not be enough. For example, sometimes the user wants to give
the home address, sometimes the address to his or her summerhouse, and sometimes a user does not
want to release the (real) name and address at all. So each user will need different visiting cards. In
addition to containing different sets of personal data, these electronic visiting cards could be charged
with various privacy preferences as regards how data should be handled, so that the visiting card is not
transmitted if the service provider does not have a privacy policy matching the requirements of the
user regarding how the data should be handled. The question then arises how to facilitate the selection
of the right visiting card. In PRIME we have discussed three different paradigms:
Role-centred paradigm where the user selects a visiting card which is used in all contacts until he
changes the visiting card.
Relationship-centred paradigm which functions like the bookmarks in a web browser: they are
aimed at a specific contact and the visiting card is chosen accordingly (the user has set his preferences
beforehand, perhaps at an earlier visit). This is the interaction paradigm of the web browser
demonstrators of PRIME Integrated Prototype 2 and 3, where the visiting cards are called “PreSets”
and “PrivPrefs”, respectively (the latter for ‘privacy preferences’).
The TownMap metaphor where houses in a town map constitute bookmarks or lists of bookmarks,
and the town districts classify the corresponding sites as trusted or non-trusted and the visiting cards
are chosen accordingly. Thus, the topography of the TownMap illustrates some basic privacy
preferences. The TownMap also allows for illustrating data flows by animation, so the map and the
web pages are sometimes shown in parallel.
In addition, some functions are common to all these paradigms, even if the UI of these
functions can be designed in different ways.
Different Privacy Preference Settings are contained in the different visiting cards or TownMap areas.
The settings can be changed when the visiting cards are used for releasing data but also in a specific
window for defining visiting cards (or by area characterisation in the TownMap paradigm). A
preference setting can include automated agreements to send certain personal data but it can also state
that a confirmation button should be presented to the user; see next paragraph. Supporting the
preference manager there is also a database for personal data including electronic certificates,
‘credentials’, from which the user draws his personal data when defining a preference setting.
“Send Personal Data?”: in the traditionally-styled paradigms (i.e., not the TownMap) a dialogue
window asks the user for confirmation when a service provider requests personal data. The user
confirms by clicking a button; this is called click-through agreement in chapter 2. The window is
labelled “Send Personal Data?”. As an alternative, PRIME also proposes a small context menu
appearing within the web page and meant for cases when the user only has to choose data for one or a
few categories. In the TownMap paradigm confirmation could be given by dragging and dropping data
symbols onto the symbol of the right service provider; we call this DADA, “drag-and-drop agreements”.


“Assurance evaluation” is a function that evaluates different parameters indicating how good the
data receiver is at handling the user’s personal data. The result of the assurance evaluation is inserted
in the “Send personal data?” window mentioned in the preceding paragraph. It helps the user before
data has been disclosed.
“Data Track” is a database stored in the user-side PRIME system and logging all data transmitted to
service providers. It supports functions that help the user to keep track of his data disclosures and what
agreements he has entered, and could – in a well-developed PRIME world – actively help the user in
privacy-related questions after data has been disclosed.
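The core of the Data Track can be sketched as a small user-side log. The class and record fields below are illustrative only (chosen by us for this sketch, not taken from the PRIME implementation): every disclosure is logged together with the agreement entered and the pseudonym used, so the user can later ask what was sent to a given provider.

```python
# Sketch of a user-side "Data Track": a log of all personal-data
# disclosures, queryable afterwards. Names and fields are illustrative.

import datetime

class DataTrack:
    def __init__(self):
        self.records = []

    def log(self, recipient, items, policy_ref, pseudonym):
        self.records.append({
            "when": datetime.datetime.now(datetime.timezone.utc),
            "recipient": recipient,
            "items": list(items),
            "policy": policy_ref,     # agreement entered at disclosure time
            "pseudonym": pseudonym,
        })

    def disclosures_to(self, recipient):
        """What did I send to this provider, and under which agreement?"""
        return [r for r in self.records if r["recipient"] == recipient]

track = DataTrack()
track.log("shop.example", ["name", "address"], "policy-v1", "pseu-4711")
track.log("bank.example", ["name"], "policy-v2", "pseu-0815")
print(len(track.disclosures_to("shop.example")))  # 1
```

Storing the policy reference with each record is what would later allow the user to check, or exercise rights against, the obligations the data controller agreed to.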
In the remainder of this overview chapter, the paradigms and functions just mentioned are explained in
some more detail and sample screen shots of the graphical appearance of the UIs are given. Chapters 6
and 7 show the user interfaces of the two application prototypes of PRIME. It should be noted that the
application in chapter 7, location-based service for mobile phones, is client–service-oriented just like
the web browser application discussed in this chapter and chapter 5. In contrast, the Collaborative
eLearning application in chapter 6 represents a quite different approach with its focus on inter-
personal, peer-to-peer communication. There is nothing that prevents the three paradigms mentioned
above from being used for peer-to-peer communication as well, but for peers holding personal data the EU
directives on the obligations of data controllers do not apply as clearly as for service providers and, for
instance, the history function, i.e. Data Track, will not necessarily contain the information needed to
assist the user in ways described below.

3.2 UI paradigms for interacting with service providers


In this section, we will present the main characteristics of the UI paradigms for identity management
that have been elaborated and tested by the PRIME HCI work package.
A particular feature prominent in all these attempts is the bundling of personal data and preference
settings with different electronic pseudonyms to form digital visiting cards. The visiting cards were
called roles or areas in the three main UI paradigms, namely the role-centred, the relationship-centred,
and the TownMap-based paradigm.
The first two paradigms are traditionally styled while the third one is based on the metaphor of a street
map and is an attempt to make preference settings more accessible and, hopefully, understandable to
users. On the other hand, the latter two share a common approach to the use of preference
settings, namely that the selection among the different preference settings is implicit when connecting
to each service provider. A user has different privacy needs as regards different communication
partners and pre-defined selection of visiting cards should facilitate this a great deal.
The three paradigms are presented in the three following subsections. The UI paradigms have been
embodied in an early prototype for IDM (cf. D06.1.b, ch. 6) and in some mock-ups and prototypes
produced for the PRIME project. In the PRIME integrated prototype Version 2 of the year 2007, the
word “role” was replaced first by “template” and later by “PreSet” to avoid confusion with other uses
of “role” (and “template”) in applications that include the PRIME kernel. In the final version, IPV3,
“PrivPref” will probably be used. “Visiting card” would naturally also be a candidate term, but this
has connotations that might restrict the user’s understanding of the concept. Microsoft uses
“Information Card” in their CardSpace.

3.2.1 Role-centred paradigm


Role-centred means that user control of data disclosure is primarily carried out via the roles, the
complex visiting cards just described which allow for pseudonymous contacts and definition of data
handling preferences. Within a role, the user can set and utilise different disclosure preferences for
different data types. The user then has to select the role he will be acting under when contacting
service providers, such as “Private” with his own address and telephone number or “Employee” with
company address and phone number or “Anonymous” without any real data, and whenever he thinks
that this role is inappropriate, he has to select one of his other roles. The UI paradigm was embodied in
an early user-side prototype called DRIM (Dresden Identity Management) where the IDM functions
were displayed in side bars of an ordinary Internet browser (Mozilla Firefox). Even if it may seem
natural to select the visiting card one wants to use, it turns out that during web browsing, an ordinary
user will be following links to web sites he is not sure he wants to share his personal data with. Pre-
selecting one’s visiting card, as the role-centred paradigm presumes, is not a swift and privacy-friendly
way of surfing the web.

3.2.2 Relationship-centred paradigm


An alternative approach could be to define different privacy preferences in relation to each
communication partner, and to use a blank visiting card for previously unknown web sites. In the
relationship-centred UI paradigm, the ordinary bookmarks (“Favorites” in Explorer) have privacy
preferences attached to them as shown by the icons to the right in Figure 1. A user might like to have
more than one privacy preference per bookmarked service provider; PRIME web browser designs
have been elaborated for this also as shown in Figure 1 (and elaborated in section 5.2.1). It should also
be noted that while the graphical user interface has to be somewhat elaborated, this UI proposal does
not introduce any extra actions during ordinary browsing, while on the other hand a role-centred UI
would force the user to repeatedly change roles (or change web sites if roles have default start sites,
but making a role list with a lot of alternative start pages only begs the question of why to re-invent the
ordinary bookmark list).

Figure 1 Bookmark list with icons for privacy preferences (PrivPrefs)

By default, a pre-defined visiting card based on transactional pseudonyms7 called “Anonymous” is
added to every new bookmark – this is the ‘blank’ visiting card mentioned in the previous paragraph.
In the example of Figure 1 the fully masked faces are the icons the user should use to select the
“Anonymous” PrivPref, i.e. the blank visiting card, when entering any of the sites listed. By using
transactional pseudonyms as default, the relationship-centred approach allows the privacy-enhancing
functions to be switched on from the start even if the user is not prepared to actively select among them.
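The relationship-centred default just described can be sketched in a few lines. The names below are hypothetical (our illustration, not the prototype’s code): a bookmark maps to its PrivPref, unknown sites fall back to “Anonymous”, and the pseudonym generated accordingly is either fresh per transaction or re-used per site.

```python
# Illustrative sketch of the relationship-centred default: each bookmark
# maps to a PrivPref; unknown sites fall back to "Anonymous", which uses
# a fresh transactional pseudonym for every transaction.

import uuid

bookmarks = {"bank.example": "name&address"}   # user-assigned PrivPrefs

def privpref_for(site):
    return bookmarks.get(site, "Anonymous")    # blank visiting card by default

_relationship_pseudonyms = {}                  # one pseudonym per known site

def pseudonym_for(site):
    if privpref_for(site) == "Anonymous":
        return uuid.uuid4().hex                # new pseudonym every time
    # otherwise re-use one pseudonym per site (relationship pseudonymity)
    return _relationship_pseudonyms.setdefault(site, uuid.uuid4().hex)

# Unknown site: two transactions are unlinkable via the pseudonym.
assert pseudonym_for("shop.example") != pseudonym_for("shop.example")
# Bookmarked site: the same pseudonym lets the bank recognise a returning user.
assert pseudonym_for("bank.example") == pseudonym_for("bank.example")
```

This default-to-anonymous behaviour is what lets the privacy-enhancing functions be active even for users who never configure anything.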

3.2.3 TownMap-based paradigm


In the TownMap-based UI paradigm the visiting cards are replaced by areas visualising privacy
protection concepts with default privacy settings. Predefined areas are the Neighbourhood (where
relationship pseudonymity8 is used by default), the Public area (where transactional pseudonymity is
used by default), and the Work area (where relationship pseudonymity is also used, but with default
privacy preference settings for another set of personal data than for private use). The Work area is
meant for service providers the user deals with in his role as business owner or employee.

7 I.e., a new pseudonym is created for each transaction (Pfitzmann & Hansen, 2007).
8 I.e., a pseudonym chosen in regard to a specific communication partner (Pfitzmann & Hansen, 2007).


The approach to use different default settings for different areas within a town should make it easier
for a novice to see the options available once he has grasped the TownMap metaphor. Individual
bookmarks or bookmark menus are symbolised by houses. The user also has his own house in the map
(a prominent house at the baseline). Of course, the map display has to vanish or be reduced when the
user clicks on the house of one of the service providers, because then the web page of that service
should occupy the browser window.

Figure 2 TownMap with building tools visible

In Figure 2 the user wants to add a shortcut link (similarly to dragging a web site icon from a present-
day browser’s address field to the desktop). The user picks a house from a toolbar and places it
somewhere on the map. This will make it possible not only to place a new bookmark in a specific area
of the TownMap but also to add an alternative privacy preference definition: if, for example, a web site
is already listed in the public space, now the user adds an access point to the same site but in his
neighbourhood to indicate that he should be recognised as a returning visitor when accessing the web
site this way.

Figure 3 Tilted TownMap visible

Figure 3 shows a view when the user is browsing a site. The user has clicked on the TownMap symbol
in the browser bar and can now see a tilted TownMap and all or some of his shortcut links (in this
figure only five houses have been placed on the map). This could be refined9 but in any event, it
allows using the spatial relationships further: the path between the user’s house and the bank, for
instance, can be highlighted for indicating data flow and even for letting the user show preferred data
flows, as will be explained in detail in section 5.2.2.

9 Compare the “Looking Glass” UI paradigm presented by SUN Microsystems, “Project Looking Glass”,
http://www.sun.com/software/looking_glass .

3.3 Setting privacy preference for PreSets/PrivPrefs


As mentioned in 3.2, in order to simplify identity management and make it more privacy-supportive, preference settings are done by bundling personal data and specific data handling preferences with electronic pseudonyms to form digital visiting cards. The idea is to anonymise communication by default, even hiding IP addresses or telephone numbers, and to let the services rely on only a pseudonym for maintaining a transaction. If a pseudonym is re-used at a later occasion, the service in question can in principle detect this and link the new transaction to the earlier one. However, the data released during two transactions can sometimes also be used to link the two transactions. This fact is hard to convey swiftly to test users, so we have used a restricted set of visiting cards with names and icons that should convey the implications of using them. The names have been such as "name&address" and "Anonymous"; two examples are found in Figure 4 in the lower two rows (the icons are composed of the PRIME mask and a face).
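The linkability point just made can be illustrated with a small sketch. This is hypothetical code, not part of PRIME; the function name and the transaction layout are invented for illustration:

```python
def linkable(t1, t2):
    """Rough check whether a service can link two transactions.

    Illustrative sketch only: a transaction is a dict holding the pseudonym
    used and the personal data released. Re-using a pseudonym links the
    transactions trivially; overlapping released data may link them too.
    """
    if t1["pseudonym"] == t2["pseudonym"]:
        return True
    # Any attribute value released in both transactions is a potential link.
    shared = set(t1["data"].items()) & set(t2["data"].items())
    return bool(shared)

# Different pseudonyms, but the same e-mail address links the transactions:
a = {"pseudonym": "p1", "data": {"email": "jd@example.org"}}
b = {"pseudonym": "p2", "data": {"email": "jd@example.org", "city": "Karlstad"}}
c = {"pseudonym": "p3", "data": {"city": "Berlin"}}
```

Here `linkable(a, b)` is true despite the fresh pseudonym, which is exactly what a card like "Anonymous" tries to avoid by releasing no data at all.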

Icons for pseudonymity and (true) identity:
• The PRIME mask occurs as the sole component in the icon for "PRIME Settings" but should also convey the basic PRIME idea of pseudonymity.
• The Face icon is meant to represent the user and is used for the function where personal data is stored on the user side system.

Mnemonic icons for privacy preferences:
• "PRIME Returning Customer" icon (or "PRIME Returning Visitor") – no data, but using one pseudonym per site to be recognised as a returning visitor.
• "PRIME Anonymous" icon – no data, always a new pseudonym.
Figure 4 Deriving icons for privacy preferences from other icons

The data in a visiting card determine what the user side PRIME system can prepare for releasing when
a service makes a data request. The preference settings can be configured differently for different web
sites, so as to minimise the number of cards; for instance, certain web sites can be marked for
automatic disclosure of requested data, and certain data occurring in a visiting card can be marked for
automatic disclosure such as username and password so as to make login procedures automatic. If the
preference setting denies the service provider certain data, the user might fill in missing information
manually (cf. 3.4) or cancel the service request. Managing preference settings can be done off-line, but
it is quite a tricky thing to set all conditions if one aims at automated information management.
PRIME allows setting preferences on the fly whenever the user finds it useful to create new visiting
cards or make extensions or changes to existing visiting cards.
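A visiting card of this kind can be pictured as a small record bundling a pseudonym policy with data items and disclosure marks. The following is a hypothetical sketch under invented names, not PRIME's actual data model:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VisitingCard:
    """A privacy preference ("PrivPref") as sketched in section 3.3.

    Illustrative only -- field names and policy values are invented.
    """
    name: str                 # label shown to the user, e.g. "name&address"
    pseudonym_policy: str     # "new_per_transaction" or "one_per_site"
    data: Dict[str, str] = field(default_factory=dict)      # bundled personal data
    auto_disclose: List[str] = field(default_factory=list)  # released without asking

# The two predefined privacy-preference cards named in Figure 4:
ANONYMOUS = VisitingCard("PRIME Anonymous", "new_per_transaction")
RETURNING = VisitingCard("PRIME Returning Customer", "one_per_site")

# A data-rich card; marking username and password for automatic
# disclosure makes the login procedure automatic:
shop_card = VisitingCard(
    "name&address", "one_per_site",
    data={"name": "Jane Doe", "username": "jd", "password": "secret"},
    auto_disclose=["username", "password"],
)
```

The point of the per-site overrides mentioned above is then simply that one such card can serve many sites, with site-specific disclosure marks layered on top.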

3.4 “Send Personal Data?”


While the PRIME visiting cards, i.e. the PrivPrefs etc., in principle can function by automatically
disclosing data if all conditions of the preference settings are fulfilled, the user can choose to always
get a confirmation box, a window called “Send Personal Data?” (see Figure 5).10 Moreover, in the
many instances when the user has not specified a data-rich visiting card as the default for a web site or if some conditions in the privacy preferences are not met, the confirmation box will pop up to ask for additional data and confirmation. Additions / changes will provoke questions from the PRIME system about changing the settings for the PrivPref, switching PrivPref, creating a new PrivPref, and setting a default PrivPref for this web site.

10 "Send Personal Data?" is sometimes abbreviated to "Send data?" or "SD?" in PRIME reports and the function behind it is called AskSendData. Adding "personal" helped test users understand this function.

Figure 5 One design of the “Send data?” dialogue window.

The data used to, so to speak, “fill out the form” in Figure 5 has been derived from an IPV2 PreSet
called “PRIME Returning Customer”; as mentioned in 3.2 PrivPrefs were called PreSets in IPV2. Data
types are not given except for data which lack an obvious classification from their face value (such as
“John_in_a_million”).
For a consent request to be valid according to the EU Directive 95/46/EC (Article 10), the data subject
has to be told about what data will be used, for what purpose, who is the data controller, and any
further information of relevance, except when the data subject is already aware of this. In principle, a
user asking for a service at a web site would always already be aware of purpose and controller.
However, as there might be web sites trying to trick the user, the PRIME solution presupposes that the
service sends a specification of these details, as can be seen in Figure 5, so that no unwanted uses are
tacitly included in the agreement, such as sending promotion mail or selling the data to other
businesses. Furthermore, the PRIME solution presupposes that service side identity is checked by the
user side PRIME system either by relying on trustworthy credentials provided by the service itself or
by deriving the information from web site ownership.
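Putting sections 3.3 and 3.4 together, the decision behind the confirmation flow — automatic disclosure when the preference conditions are met, otherwise a "Send Personal Data?" box — can be sketched roughly as follows. This is hypothetical code: the function, the request and card layouts, and the returned verdicts are all invented for illustration, not PRIME's real AskSendData API.

```python
def ask_send_data(request, card, auto_disclose_site=False):
    """Decide whether a service's data request can be answered automatically
    or whether the "Send Personal Data?" box must be shown to the user.

    request: {"controller": ..., "purpose": ..., "items": [...]}
    card:    {"data": {...}, "auto_disclose": [...]}   (a visiting card)
    auto_disclose_site: True if this web site is marked for automatic disclosure.
    """
    # Art. 10 of Directive 95/46/EC: a valid consent request must state the
    # requested data, the purpose, and the identity of the data controller.
    for key in ("controller", "purpose", "items"):
        if not request.get(key):
            return ("refuse", "request lacks " + key)
    # Items the card cannot supply: the user fills them in manually or cancels.
    missing = [i for i in request["items"] if i not in card["data"]]
    # Items present but not marked for automatic disclosure need confirmation.
    confirm = [i for i in request["items"]
               if i in card["data"]
               and i not in card["auto_disclose"] and not auto_disclose_site]
    if missing or confirm:
        return ("ask_user", {"missing": missing, "confirm": confirm})
    return ("release", {i: card["data"][i] for i in request["items"]})

# A card with username/password marked for automatic disclosure
# logs in without any pop-up:
login_card = {"data": {"username": "jd", "password": "secret"},
              "auto_disclose": ["username", "password"]}
login_req = {"controller": "Shop Ltd", "purpose": "login",
             "items": ["username", "password"]}
```

With `login_req` the verdict is "release"; had the site also asked for an e-mail address, the confirmation box would pop up listing the missing item.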
The “further information” is put in a second level (cf. section 2.5.1) accessible via the “Link to full
privacy notice” where the company’s privacy policy can be found. Naturally, the user might wonder
whether a particular site is trustworthy or not, and we have proposed an assurance evaluation, here
called “Privacy Functionality Check” – compare 3.5. Moreover, the service side might want to use
some of the data for other purposes than the service requested by the user. For instance, many
companies would like to send newsletters to customers. Additional purposes have to be confirmed by
the user (and should not be “hidden” in the privacy policy! Cf. Pollach, 2007). We have tested a
design for this too, as will be explained in chapter 5, where we also propose solutions for ascertaining that the user really makes conscious choices and does not merely react with least effort by clicking the "OK" button.

3.5 Assurance evaluation


The function in 3.4 for giving consent should be augmented with some way to check the
trustworthiness of service providers that are not familiar to the user. Naturally, an ordinary user would
like to have information on product quality etc., but here we focus on the privacy-related information
that can be beneficial for the user to have. In essence, the service providers should be trusted to
process the user’s personal data in a privacy-friendly manner. To this end, an “Assurance Evaluation”
component has been implemented within the PRIME framework (Pearson, 2006). Its scope goes
beyond what end users may digest – it could in principle provide service providers with advanced tools for checking out subcontractors, and Certification Authorities can also use it to check certified services. For ordinary people, however, the "Assurance Evaluation" has been slimmed down to rely partly on other parties performing the more advanced checks. In a usability test we simplified it to a "Privacy Functionality Check" as seen in Figure 6 (edited; the test was performed in Swedish; see 5.5.1).
Of course, the check of the service side system could be varied (third category in the figure), but in
principle it should conduct capability tests to verify statements made in the service policy (and/or
covered by the privacy seals; the EuroPriSe project may produce standards for seals, while the
European Consumer Centres have just launched a web-based solution, Howard the owl, for checking
trustmarks and other signs of trustworthiness that can be used when evaluating a web shop11).
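To make the idea concrete, a slimmed-down check of this kind could combine its three categories into a single end-user verdict roughly as follows. This is a minimal sketch under invented names and verdict values; the actual PRIME assurance component (Pearson, 2006) is considerably richer.

```python
def privacy_functionality_check(seal_valid, third_party_ok, capability_results):
    """Combine a privacy-seal check, trusted third-party assessments, and
    capability tests run against the service side into one end-user verdict.

    capability_results maps each tested policy statement to True/False
    (statement verified or not). Returns (verdict, details).
    All names and verdict values are illustrative, not PRIME's real API.
    """
    failed = [claim for claim, ok in capability_results.items() if not ok]
    if failed:
        # The service side does not do what its policy claims.
        return ("fail", failed)
    if seal_valid and third_party_ok:
        return ("pass", [])
    if seal_valid or third_party_ok:
        return ("warn", ["only partial assurance available"])
    return ("warn", ["no seal and no third-party assessment"])
```

The design choice mirrors the text: capability-test failures dominate, because they show the system does not behave as its policy states, while missing seals or assessments merely weaken the assurance.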

Figure 6 A simplified assurance control for end users

3.6 Data Track


Within the PRIME prototypes, the Data Track function allows users to look up what personal data
they have released to other sites. Data transmissions administered by the user side PRIME system are

11 www.european-privacy-seal.eu and ready21.dev.visionteam.dk


logged and stored on the end user’s device (encrypted and password protected if it were a real
application, of course). The Data Track (see Figure 7) is a function available in all three UI paradigms
discussed above. The PRIME UI proposals also include additional functionality in the Data Track, namely functions that can advise users about their rights and enable them to exercise their basic rights to access their data, or to rectify the data and request their deletion online (see section 4.2), and help them to check on agreed obligations or to set obligations (see 3.4 and chapter 5).

Figure 7 A Data Track design with several search options

Being able to track what data were disclosed, when, to whom, and how the data are further processed is an important feature to provide transparency of personal data processing. The Data Track stores transaction records comprising the personal data sent, including the pseudonyms used for transactions and the credentials that were disclosed, the date of transmission, the purpose of data collection, the recipient, and all further details of the privacy policy that the user and recipient have agreed upon (see also Pettersson et al., 2006). The privacy policy constitutes a valuable document in case a user feels that something is wrong with how his data have been used. Storing the pseudonyms used for transactions allows a user to identify himself as a previous contact in case he wants to exercise his rights to access, rectification, blocking or deletion while still remaining pseudonymous.
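The record just described can be sketched as a small structure plus a search helper. The field names and the helper are illustrative, not PRIME's actual record layout:

```python
from dataclasses import dataclass
import datetime

@dataclass
class DataTrackRecord:
    """One logged transaction, with the fields listed in section 3.6."""
    date: datetime.date
    recipient: str            # who received the data
    purpose: str              # purpose of the data collection
    pseudonym: str            # enables later pseudonymous access or rectification
    data_sent: dict           # personal data and credentials disclosed
    agreed_policy: str        # privacy policy agreed upon with the recipient

def data_track_search(records, recipient=None, item=None):
    """Answer "what did I send, to whom?" over the locally stored log."""
    hits = []
    for r in records:
        if recipient is not None and recipient.lower() not in r.recipient.lower():
            continue
        if item is not None and item not in r.data_sent:
            continue
        hits.append(r)
    return hits
```

In a real deployment the stored log would of course be encrypted and password protected on the user's device, as noted above.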


4 HCI guidance for privacy


This chapter relates user-oriented requirements on PET to UI design. There are different sources for
these requirements and there could also be different notions on what PET-oriented HCI work should
concentrate on. This chapter starts with a discussion entitled “HCI-P” as a tentative name for Human-
Computer Interaction applied in the area of personal data processing and privacy. The discussion
results in an enumeration of different directions for such an application, while the rest of the chapter
concentrates on requirements that need to be interpreted in user interface terms and in human-
computer interaction specifications. HCI guidelines for good usability design are also discussed in
relation to recent proposals within the data security area (mentioned in 2.3). Some renowned sets of
general usability principles are also introduced, and finally general guidelines for development and
evaluation techniques are presented. Chapter 5 expounds on different presentation / interaction
techniques and gives examples from PRIME research.

4.1 HCI-P
As mentioned in section 2.3, Johnston, Eloff and Labuschagne (2003) make a case for enhancing
security functions of computers by improving the user interfaces of such functions. They note that
anti-virus software and firewalls are nowadays present also in systems managed by the everyday user,
who is not a security expert. The user interface “needs to ensure that the user is guided so as to
minimise the potential for the user to be the ‘weakest’ link.” (ibid., p.676). They define security HCI
(HCI-S) as “the part of the user interface which is responsible for establishing the common ground
between a user and the security features of a system. HCI-S is human computer interaction applied in
the area of computer security.”12 (ibid., p.677f)
Their starting point is the ten usability principles by Jakob Nielsen, originally derived from an analysis
of 249 usability problems (Nielsen, 1994, cf. 1993; cited in Figure 8 below). “Given the established
nature of these criteria, it is a good starting point to expand and modify the list of criteria so that they
are relevant to an HCI [=human-computer interface] in a security environment,” according to Johnston
et al., who modify and condense Nielsen’s ten usability principles “to address only the essentials in a
security environment.” This yields six HCI-S criteria (listed in their Table 2; cf. Figure 8 below).
Based on the work conducted in PRIME, it is not unthinkable to develop an HCI genre "HCI-P". But it is questionable whether HCI-P should consist merely of a specific list of usability principles to assess the usability of privacy functions, even if ascertaining that the functions lead to "trust being developed", which Johnston et al. see as a goal for their HCI-S criteria, is a plausible ultimate goal. Before discussing the HCI guidelines in the following sections, we will present HCI-P as a potentially many-faceted thing, even if the rest of the chapter will deal with only some of these aspects.

4.1.1 Different approaches to privacy questions in HCI


To start with the paper by Johnston et al.: their reliance on general usability principles (i.e. Nielsen's) is
noticeable. In order to develop a privacy HCI, a similar list could be developed but of privacy-specific
usability principles to specifically check the usability of privacy functions. On the other hand, the
PISA project (2002) started with legal requirements to derive HCI principles. Patrick and Kenny note
that the EU Directives demand that the user (the “Data Subject”) must have the ability to be aware and
understand, and that such legal privacy principles “have HCI implications because they describe
mental processes and behaviours that the Data Subject must experience in order for a service to adhere
to the principles." (Patrick & Kenny, 2003) In a very extensive table, they present details of the privacy principles derived from EU Directive 95/46/EC and what HCI requirements these privacy principles imply.

12 They vacillate between two interpretations of the abbreviation HCI: human-computer interaction (the field of), and human-computer interface. For the latter we use user interface (UI), so unless otherwise stated, HCI is in this report to be understood as the field of human-computer interaction research and practice.

Thus, if the core of HCI-P should be a set of privacy-specific usability principles,
one would have to decide from which field(s) they should be originally derived in order to properly
guide the design of the part of a system which interfaces between the user and the privacy features of
the system. To this comes the question of what the privacy features of the system are.
Patrick and Kenny refer to discussions within the American Bar Association on the binding force of
agreements made by so-called ‘click-throughs’ (e.g. Halket & Cosgrove, 2001); this is when the user
clicks on a button or a link labelled I agree or OK, but for this to be an expression of consent, he has to
be informed about exactly what he agrees to. In several cases processing of personal data is not
allowed without explicit consent from the data subject, so one could argue that HCI-P should centre on
the concept of ‘informed consent’ and how to reach agreements electronically. Identifying and
categorising situations where individuals are in contact with professional service providers would then
be in focus and the target would be to find appropriate interaction solutions for such situations.
Another perspective would be gained if the scope is widened from the service–customer dichotomy to
include also digital peer-to-peer communication. For peer-to-peer communication privacy laws are
less applicable. How to reach agreements (i.e., user ‘giving consent’) will not appear as the core
solution from this perspective. The same holds for foreign services if the scope is widened to consider
data transfers to sites in countries outside the European Union that have no appropriate level of data
protection. In either case the legal backing is weak, and ways to conceal one's identity by
anonymisation and the use of pseudonyms13 will be one of the best methods to ensure control over
one’s own data. However, privacy threats occur through the growing linkability when mixing real data
with these pseudonyms. Therefore, in this broader perspective, user-controlled automation of the
management of partial identities and linkability estimation appear as interesting as consent-giving. A
strand of HCI-P would then centre on agent technologies, i.e., technologies that act on the behalf of
the user making least-linkability decisions for him (this was included in the very name of the PISA
project, ‘Privacy Incorporated Software Agent’).
More generally speaking, PET is by definition concerned with computers (Privacy-Enhancing
Technology). 'Usability of PET' is conceivable as the defining formula for privacy HCI. This formula, however, does not set privacy HCI out as anything special for HCI workers, as it would include general usability issues as well as privacy-specific issues. It also leads to the question whether HCI-P should be concerned with end users only, or also with system administrators working at the data processors. Automation has been advocated as a means to keep to good practices on the controller/processor side (Borking & Raab, 2001; European Commission, Memo/07/159).
Manual processes are more prone to errors and deserve special attention too, of course, but such
processes fall outside HCI unless there are specific computer-based tools meant to help the human
operator. Privacy policy authoring tools (Karat et al., 2005) are a special case, and a special case of these are systems where the policies might be used to directly control the system ('obligation management'; Casassa Mont, 2004). Even if a high degree of automation at the service side may pave the
way for end users actually accessing the server side databases directly via their own PCs and mobile
phones (as envisioned by the PRIME HCI group; cf. “Data Track” elsewhere in this report), the HCI
requirements are quite distinct between the two user groups – indeed, end users and system administrators in fact use different systems, even if the systems interact. Thus, the focus of HCI-P would be split, which can be criticised, of course, but at the same time it should be recognised that the two sides are far from unrelated to each other.
To start at the other end of the HCI continuum, one could note that for people in general, relying on technology at times appears to be a great obstacle, especially when it comes to security and privacy.
Participants in PRIME user tests have voiced the fear of releasing personal data and also the distrust
that an allegedly ‘privacy-enhancing’ identity management system would ever be able to really protect
people's privacy. One can envision privacy HCI as methods to invite people to rely on PET.

13 The term pseudonym is "Generally used to describe identifiers which are not connected to an individual's civil or official identity" (Chapter 3 in D14.1.a). For different types of pseudonyms, see Pfitzmann & Hansen (2007).

To compare, Johnston et al. (ibid.) claim to have defined criteria to ascertain that security functions lead
to “trust being developed”. Moreover, one can envision a privacy HCI discipline that takes further
responsibility by incorporating in the HCI analysis the whole ‘trust system’ including data protection
authorities, consumer organisations, etc. (Pettersson, 2006; Hansen et al., 2007) If a data protection
authority recommends the use of a certain identity manager, presumably this would mean very much
for people’s trust in such systems. Admittedly, such systems can still be complicated to use: the trust
focus cannot exclude ordinary usability work from HCI-P, because people have a tendency to distrust
their own abilities to cope with technology-centred systems and if proven right, they will surely stop
using even a recommended system.
To continue with the trust focus, it also raises the question of how to derive design principles. In
previous PRIME deliverables we have claimed that experimental data do not only highlight the
existence of trust problems but also give indications how such problems could be countered by user
interface design (Pettersson et al., 2005; Hansen et al., 2007). In general, usability testing could be
developed further as indicated in the usability evaluation reported in PRIME deliverable D06.1.b.
There is a need to develop guidelines or standards for how to introduce test users to the area of PET as well as for measuring their ability to discern various privacy threats, their ability to develop well-founded trust,
etc. Thus there are a number of specific circumstances concerning usability testing of PETs, a fact
which argues for targeting one strand of HCI-P on experimental methodology. The methodological
concerns could stretch beyond usability testing of prototypes – for instance, Iachello and Hong conclude on methodological issues for privacy HCI thus: "One salient question is whether surveys, focus
groups, and interviews should be structured to present both benefits and losses to participants. Clearly
a balanced presentation could elicit very different responses than a partial description” (2007, p.41).
To sum up, what has been discussed here are the following perspectives on HCI-P (Privacy HCI):
• As a set of privacy-specific usability design principles;
• As human-centred methods to reach agreements electronically;
• As agent technologies for ordinary people;
• As concerned with usability of PET (user side PET, or any PET);
• As a set of trust principles, or methods to invite people to rely on PET;
• As experimental methodology with a special focus.
The word ‘perspective’ suggests that they are not mutually exclusive (and a more ambitious and
extensive review of approaches and topics for privacy HCI can be found in section 3 of Iachello’s and
Hong’s recent survey; ibid.). It should moreover be remembered that other experts than HCI experts
have a stake in the usability evaluation of a privacy and identity management system, as indeed the
multidisciplinary PRIME project demonstrates. In conclusion, for the development of good PET, HCI-
P is better seen as a many-faceted discipline with various prospects for methodological development.

4.1.2 The present approaches to privacy questions in HCI


The rest of this chapter provides guidelines on how to meet privacy principles and HCI requirements.
When applicable, the sections below explain how this is done in the PRIME demonstrator UI and its
usage scenario, i.e. web-based services.
We start with legal requirements from the EU domain (4.2). Naturally, an analysis of the legal
directives could also include a criticism of the directives themselves rather than only an attempt to
follow them. However, the EU Directive has as a starting point the rights of individuals to exercise
control over personal data about themselves. Self-determination is, if not synonymous with the
concept of ‘privacy’, at least the essence of this concept. If a person is allowed to determine how data
about him is used and if it should be used at all, there is no invasive use of his data, and his privacy is
well respected. Indeed, within the PRIME project socio-cultural requirements for identity management applications have also been developed, but these requirements are in essence similar to the
EU directive, as far as user-control is concerned. Therefore, we find it uncontroversial to follow the
EU Data Protection Directive to derive HCI requirements for the demonstrator of the PRIME
Integrated Prototype.14
However, the socio-cultural requirements also include adoption criteria, and such cannot be found in
law codes. Thus, factors promoting individuals’ adoption of an application are listed here (4.3).
We moreover give eight general guidelines for the end-user side of PET. These guidelines do not emphasise legal or social requirements applicable to a specific relation between one sort of data collector and one sort of data source; rather, they take privacy functions for granted, whatever their exact effects, and instead target three basic characteristics of PET: that privacy protection is a secondary goal of users, that PET concepts can be quite hard for users to understand, and that true reversal of actions is often not possible (4.4). These guidelines are inspired by recent works in security HCI, which are also listed.
Finally, the chapter ends with guidelines on the design and testing process because blending various
guidelines for the UI design will need a tuning process involving testing (4.5).

4.2 From legal privacy requirements to HCI guidance


Important domain-specific HCI requirements can be derived from privacy legislation. The PISA project studied how privacy principles derived from the EU Data Protection Directive 95/46/EC can be translated into HCI requirements and what design solutions are possible to meet those requirements. The results have been summarised in Tables 12.3–12.6 in the PISA handbook from
2002 (Table 2 in Patrick & Kenny, 2003). As explained in chapter 2 above, the derived HCI
requirements were grouped into the four categories comprehension (to understand, or know),
consciousness (be aware or informed), control (to manipulate, or be empowered), consent (to agree).
For the purpose of the general PRIME mock-up design, the privacy principles and HCI requirements from the PISA table were used in a slightly adapted form to suggest solutions for the PRIME mock-ups.
Based on the lessons learned from the legal evaluation of the mock-ups (D6.1.c, section 3.2), we have
used these and other privacy principles and HCI requirements to derive proposed solutions for
PRIME.
The first two columns of Table 1 correspond, apart from some changes and extensions, to the first two columns of Tables 12.3–12.6 in the PISA handbook. The 3rd column in Table 1 suggests possible UI
design proposals related to the PRIME integrated prototype and the prototype for a location-based
service application. These UI design proposals have partly been implemented in the general mock-ups
and the integrated prototype.
Extensions to the PISA table were necessary in order to consider all essential legal privacy principles.
The PISA project’s legal HCI requirements were derived from the general EU Data Protection
Directive 95/46/EC. However, further legal requirements now have to be taken into consideration. In
particular, EU Directive 2002/58/EC on privacy and electronic communications lists further, more specific requirements that are, for instance, of importance for the UI design of location-based services.
Also, further HCI requirements can be derived from Art. 25-26 of the general EU Directive 95/46/EC
for data transactions to countries outside the European Union which have no appropriate level of data
protection. For those legal requirements, we have also derived HCI requirements and possible PRIME
UI solutions and have extended the table accordingly.
The abbreviations used in the table are the following:
PD = Personal Data
PS = Purpose Specification
RP = Retention Period

14 This is not meant to say that individual articles in a directive might not be challenged; see Pettersson (2007).


Table 1 Privacy Principles, HCI requirements, and possible PRIME UI solutions

Legal Privacy Principles – HCI requirements – Proposed PRIME UI solutions

1 TRANSPARENCY
HCI requirements: Users must be aware of the transparency options, and feel empowered to comprehend and control how their Personal Data (PD) is handled.
Proposed PRIME UI solutions: Inform about the Data Track function at installation (e.g. via tutorials). Inform about Data Track in the forms for privacy preference settings. Use a visible logo for Data Track in the PRIME UI (e.g., title or tool bar of a browser). Information about Data Track should be added to where the service side's privacy policy information is displayed and should in particular appear in the short privacy notices if a multi-layered approach is used.

1.1 Data subject (DS) is informed: DS is aware of transparency opportunities.
HCI requirements: Users must be aware of the transparency options.
Proposed PRIME UI solutions: The opportunity to track the controller's actions is made clearly visible in the interface design: there should be a legend "Data Track" available to accompany the Data Track (e.g., footprint) icon in the title or tool bar.

1.1.1 For: PD collected from DS. Prior to PD collection: DS informed of Data Controller identity and Purpose Specification (PS). (Art. 10 EU Directive 95/46/EC)
HCI requirements: Users know who is controlling their data, and for what purpose(s).
Proposed PRIME UI solutions: The identity (legal name, address, email) of the service provider (and possibly also the user's denomination in bookmarks etc.) as well as the purposes should be reinforced in the dialogue boxes "Send data?" corresponding to short privacy notices that appear when sites request PD. This information is also part of the condensed and full privacy notices that should be retrievable via the PRIME interface.

1.1.2 For: PD not collected from DS but from controller: DS informed by controller about processor identity and PS. (Art. 11 EU Directive 95/46/EC)
HCI requirements: Users are informed of each processor who processes their data, and the users understand the limits to this informing.
Proposed PRIME UI solutions: A statement that data can be passed on to third parties (including the identity of the third party and purposes of processing) should appear in the service side's privacy policy and should be displayed either in the condensed privacy notices or in the short privacy notices (i.e. "Send data?" windows) if this is regarded as necessary for guaranteeing a fair data processing.

1.2 When PD are used for direct marketing purposes, DS must be aware of the right to object (Art. 14 (b) of EU Directive 95/46/EC).
HCI requirements: Users understand that they can object to processing of their PD for direct marketing, and the limitations on those objections.
Proposed PRIME UI solutions: Information about the data subject's rights and tools for exercising them has to appear in the privacy policy and hence should be displayed in the privacy notices (i.e. if multi-layered notices are used, it should appear in the condensed privacy notice, or in the short notice of the "Send data?" windows if this is necessary for guaranteeing a fair data processing). Relevant information could, for instance, be provided through a click-through agreement at registration. The interface should provide obvious tools for exercising the data subject's rights. This could be part of the Data Track functions.


2 FINALITY AND Users control the use and Control through the user’s privacy
storage of their PD. preference settings and “Send Data?”
PURPOSE LIMITATION: dialogue boxes (containing core
The use and retention of PD information of short privacy notices),
is bound to the purpose for which are designed to be prominent and
which the PD were collected obvious.
from the DS. Control as regards events that occur after
(Art. 6 EU Directive the user has given her or his consent will
95/46/EC). be managed by the Data Track function
(possibly including an alert function
notifying the user even if the Data Track
window has not been opened).

2.1 The controller has legitimate grounds for processing the PD (see Principle 3.1) (Art. 7 EU Directive 95/46/EC). For direct marketing via email, the DS has the right to opt in if there is no customer relationship (Art. 13 EU Directive 2002/58/EC).
HCI requirements: Users unambiguously give explicit or implicit consent.
Proposed PRIME UI solutions: "Send data?" dialogue boxes in the form of JITCTAs or DADAs can be used to obtain unambiguous consent for the controller to process the PD.

2.2 The processor can only go beyond the agreed PS if the processor's PS is state security, or prevention/detection/prosecution of criminal offences, or economic interests of the state, or protection of the DS, or rights of other natural persons, or scientific/statistical analysis (Art. 3 II EU Directive 95/46/EC).
HCI requirements: Users understand that their PD could be used for other purposes in special cases.
Proposed PRIME UI solutions: Information about these exceptions has to be part of the full privacy notices to be retrievable via the PRIME interface. In the Data Track, an explicit formulation about these special cases or a "Special case" button can be used.

2.3 Retention: the DS should be presented a proposed retention period (RP) prior to giving consent, except where the PS is scientific/statistical. The controller ensures the processor complies with the RP. When the RP expires, the PD is preferably deleted or made anonymous.
HCI requirements: Users are conscious of the RP prior to giving consent, and users are aware of what happens to their data when the retention time expires.
Proposed PRIME UI solutions: The RP should be selectable both in the privacy preference setting form (conditions for automatic disclosure) and preferably also during browsing, in "Send data?" dialogue boxes. Information about the RP has to be part of the condensed privacy notice or the short privacy notices (i.e. "Send data?" windows) if this is regarded as necessary for guaranteeing fair data processing. Information about the RP and about PD that has been deleted or made anonymous because of retention period expiry should be included in the Data Track function.

3 LEGITIMATE PROCESSING: the PD is processed within defined boundaries.
HCI requirements: Users control the boundaries in which their PD is processed.
Proposed PRIME UI solutions: Interface elements (i.e. privacy preference setting form, "Send data?" dialogue boxes, Data Track windows) for making privacy decisions should be prominent and obvious.

3.1 Permission: To legitimately process PD, the controller ensures that one or more of the following are true: the DS gives his explicit consent, the DS unambiguously requests a service requiring performance of a contract, the PS is legal obligation or public administration, the vital interests of the DS are at stake (Art. 7 EU Directive 95/46/EC).
HCI requirements: Users give informed consent to all processing of data. Users understand when they are forming a contract for services, and the implications of that contract. Users understand the special cases when their data may be processed without a contract.
Proposed PRIME UI solutions: "Send data?" dialogue boxes in the form of JITCTAs or DADAs can be used to obtain unambiguous consent for the controller to process the PD. JITCTAs or DADAs can also confirm the formation of a contract, and the implications/limitations of the contract. In the Data Track interface, a reminder of special cases when data can be processed without a contract could be included. The Data Track contains all contracts agreed to.

3.2 Special categories of data (Art. 8 EU Directive 95/46/EC): The controller may not process any PD that is categorised as religion, philosophical beliefs, race, political opinions, health, sex life, trade union membership, or criminal convictions unless the DS has given their explicit consent or the processor is acting under a legal obligation.
HCI requirements: When dealing with highly sensitive information, users provide explicit, informed consent prior to processing.
Proposed PRIME UI solutions: Clearly mark requested sensitive data in "Send data?" boxes corresponding to JITCTAs or DADAs, so that the user is aware that special information is given. The PISA project suggests a double JITCTA, but there are inherent risks in click-throughs that people are OK-ing too quickly. The second click-through should possibly be very different from the first one.

3.3 Location data can only be processed when they are made anonymous, or with the informed consent of the DS. In the case of consent, the DS must continue to have the possibility to use simple means to temporarily refuse the data processing of his/her location data for each connection to the network or for each transmission of a communication (Art. 9 EU Directive 2002/58/EC).
HCI requirements: Users provide informed consent and are in control to temporarily revoke consent.
Proposed PRIME UI solutions: For location-based services (LBS) applications, JITCTAs or DADAs could be used to obtain user consent. If consent is obtained each time the LBS service is used, the DS implicitly has the possibility to temporarily refuse giving away his/her location. Otherwise, prominently placed PRIME UI functions should provide simple means for the DS to temporarily refuse the data processing.

3.4 PD should not be transferred to third countries outside the EU without an appropriate level of data protection. Exceptions: the DS has unambiguously consented to the data transfer, the transfer is necessary for performance of a contract between DS and controller (or between controller and a third party in the interest of the DS), the transfer is necessary for protecting vital interests of the DS, the data is information from a public register, or the controller has an appropriate level of data protection (e.g. fulfils the Safe Harbor agreement) (Art. 25-26 EU Directive 95/46/EC).
HCI requirements: The DS has to understand the implications, and provide informed consent.
Proposed PRIME UI solutions: A special warning should be displayed to users before data disclosure to third countries without an appropriate level of data protection. A JITCTA or DADAs can be used for obtaining unambiguous consent from the DS.


4 RIGHTS OF THE DATA SUBJECT: The DS has the right to self-determination within the boundaries and balance of the Directive.
HCI requirements: Users understand and can exercise their rights.
Proposed PRIME UI solutions: Tutorials at installation and privacy notices during run time should inform users about their rights and about how to exercise these rights both online and/or at the physical address of the controller. The existence in the Data Track of online functions for exercising these rights is mentioned at installation and later in "Send data?" and in the UI of the Data Track function itself. Interface layout (drop-down menus, title bar and Data Track window) provides obvious tools for controlling the rights functions.

4.1 Access rights of the DS (Art. 12 (a) EU Directive 95/46/EC) and possible exemptions and restrictions (Art. 13 EU Directive 95/46/EC).
HCI requirements: Users are conscious of their rights, which include the right to know who has received what kind of data, from whom, when, and why, and they understand the exceptions to these rights. Users understand and can exercise their rights.
Proposed PRIME UI solutions: The tracking functions are displayed prominently in PRIME interfaces (drop-down menus and title bar of the host application). The exceptions to the rights should also be shown in the tracking interface and should be explained in tutorials and named in the privacy notices. Online functions for exercising the user's right of access, i.e. requesting the server side to provide information about the data stored about him/her, should be part of the Data Track and obvious to operate. Besides, an email / snail mail address for requests to access data has to be provided in the privacy notices, which can be used as a fall-back solution in cases where the online functions do not work.

4.2 The DS's control rights: the DS may issue erase, block, rectify, or supplement commands on their PD (Art. 12 (b) EU Directive 95/46/EC).
HCI requirements: Users are conscious of their rights; they can exercise control over their data, with the ability to erase, block, rectify, or supplement the data.
Proposed PRIME UI solutions: Links to online functions to exercise the rights to rectify/erase/block could be provided in the Data Track window as an extension to the Data Track functions. The commands to erase, block, and rectify, associated with the tracking logs, should be obvious to operate. Besides, an email / snail mail address for requests to rectify/block/erase data has to be provided in the privacy notices, which can be used as a fall-back solution in cases where the online functions do not work.

4.3 The DS's right to object to processing for certain purposes (Art. 14 EU Directive 95/46/EC).
HCI requirements: Users are empowered to object to processing for certain purposes. Ideally, it should be possible for the data subjects to exercise these rights both online and at the physical address of the controller.
Proposed PRIME UI solutions: Links to online functions to exercise the right to object could be provided in the Data Track window as an extension to the Data Track functions. These online functions should be obvious to operate. Besides, an email / snail mail address for requests to object to data processing has to be provided in the privacy notices, which can be used as a fall-back solution in cases where the online functions do not work.

There is one further row in the PISA table, row 4.4 about "Derived Information", meant to deal with privacy threats from web data mining (see chapter 11 in the PISA Handbook from 2002). That privacy principle cannot be matched to any directive, which is why it is left out of the table presented here.
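Several rows of the table above describe the same interaction: a "Send data?" dialogue obtains purpose-bound consent, marks any Art. 8 special-category items, and records the proposed retention period. The following is a minimal sketch of that flow; all function names and record fields are invented for illustration and are not PRIME APIs:

```python
from datetime import date, timedelta

# Art. 8 special categories from row 3.2, which must be clearly marked
# in the "Send data?" box.
SPECIAL_CATEGORIES = {
    "religion", "philosophical beliefs", "race", "political opinions",
    "health", "sex life", "trade union membership", "criminal convictions",
}

def build_send_data_request(attributes, purpose, controller, retention_days):
    """Compose the content of a hypothetical "Send data?" box: the
    requested items, the purpose (PS) and controller they are bound to,
    the proposed retention period (RP), and a list of any
    special-category items that need extra-explicit consent."""
    return {
        "attributes": attributes,
        "sensitive": [a["name"] for a in attributes
                      if a.get("category") in SPECIAL_CATEGORIES],
        "purpose": purpose,
        "controller": controller,
        "retention_ends": date.today() + timedelta(days=retention_days),
    }

def record_consent(request, user_clicked_agree):
    """Produce a consent record only on an explicit user action; the
    consent is bound to the purpose and RP shown in the request."""
    if not user_clicked_agree:
        return None  # nothing is disclosed without unambiguous consent
    return {"consented_to": request, "consent": "explicit"}
```

The point of the sketch is that consent is never implied by silence: a missing click yields no disclosure record at all, and whatever is disclosed carries its purpose and retention period with it, ready to be shown later in a Data Track.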


4.3 From socio-cultural requirements to HCI guidance


The socio-cultural team in PRIME has been developing requirements that encompass the essence of
the legal ones (self-determination) as well as other requirements that relate to the users’ feeling of
comfort. The socio-cultural requirements are organised in two clusters: user control and factors
facilitating adoption of a PRIME-like system.
User control contributes to realising fundamental social values such as autonomy, individuality, and
dignity. User control is decomposed into a set of seven requirements preceding data disclosure that form
the 7 C’s of user control: consciousness, comprehension, choice, consent, context, confinement, and
consistency, and three post data disclosure requirements: chain control, inspection and ex-post user
control.
User control means that the user should be able to:
• control how personal data are handled
• object to processing
• control how long personal data are stored
• exercise the rights to examine and correct personal data
These objectives are obviously covered by Table 1, but the division into 10 subtopics has been
complemented by specific questions to ask during an evaluation of a PET system, and the questions
are rendered in the section where design and evaluation methods are discussed (in subsection 4.5.3).
The second cluster of socio-cultural requirements addresses the requirements on PET systems that aim at
being adopted by individual users of digital services. There are six adoption requirements: Social
Settings Flexibility, Minimised Skill Levels, Accountability, Trust in Communication Infrastructure,
Trust in Transaction Partners, and Affordability. In Table 2 the explications of each requirement are
listed in the Requirements column – they are directly interpretable as HCI requirements (except, of
course, the last one on affordability). In the rightmost column, PRIME UI proposals are put in relation
to each requirement. The table contains seven rows, not six: besides the six Socio-cultural adoption requirements, the lead row contains the general requirement that an application should be designed to maximise adoption by its target audience.
The adoption requirements, too, have been complemented with a checklist of questions to ask during an evaluation; the checklist is found in subsection 4.5.3.

Table 2 Factors promoting adoption, HCI requirements, and possible PRIME UI solutions
(numbering as in PRIME Socio-cultural Requirements V2)

Req No. | Socio-cultural Principles | Privacy Requirements (both Socio-cultural & HCI) | Proposed PRIME UI solutions

A PROMOTE ADOPTION
Requirement: The user should want to use a PRIME-enabled application.
Proposed PRIME UI solutions: Only partly a UI question; also a question of the functions provided and of pricing.

A.1 Social Settings Flexibility
Requirement: The user should be able to configure the application to different socio-cultural settings.
Proposed PRIME UI solutions: The ontologies should provide, in different languages, standard UI texts as well as messages generated by the services side.

A.2 Minimised Skill Levels
Requirement: The user should be able to use the application with a minimal amount of training.
Proposed PRIME UI solutions: Web browser: the relationship-centred paradigm in chapter 3, combined with a default of anonymous browsing and a non-permissive visiting card for new sites, makes it easy to start using the privacy functions. Various functions demand various skill levels, but the Data Track is proposed to contain "exercise" functions for acquainting users with PET concepts.


A.3 Accountability
Requirement: The user should be held responsible and accountable in specific predefined conditions (anonymity will be revoked).
Proposed PRIME UI solutions: When anonymous credentials are used, these should have information texts telling about non-standard cases where the anonymity is revoked.

A.4 Trust in Communication Infrastructure
Requirement: The user should be able to trust that the communication infrastructure mediates information without distortion and without making eavesdropping possible.
Proposed PRIME UI solutions: Information about the trustworthiness of the PRIME user-side system must be provided by external bodies. Information about the infrastructure, including the service provider's system, can be shown in an Assurance Evaluation window (see section 3.5).

A.5 Trust in Transaction Partners
Requirement: The user should be able to trust their transaction partners.
Proposed PRIME UI solutions: Information about services is provided by the assurance module and shown in an Assurance Evaluation window (see section 3.5).

A.6 Affordability
Requirement: (The user should be able to obtain and use the application at reasonable cost.)
Proposed PRIME UI solutions: Outside the scope of HCI guidelines, but it is a frequent answer in post-test interviews.
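Requirement A.1 can be illustrated with a small sketch of locale-dependent UI texts with a fallback language. The catalogue, keys, and function below are invented for illustration; in PRIME, such texts would come from the ontologies:

```python
# Hypothetical catalogue of standard UI texts per language.
UI_TEXTS = {
    "en": {"send_data": "Send personal data?", "cancel": "Cancel"},
    "sv": {"send_data": "Skicka personuppgifter?", "cancel": "Avbryt"},
}

def ui_text(key, locale, fallback="en"):
    """Look up a standard UI string for the user's locale, falling back
    to a default language when no translation exists."""
    return UI_TEXTS.get(locale, {}).get(key) or UI_TEXTS[fallback][key]
```

The fallback keeps the interface usable in socio-cultural settings that have not yet been fully localised, at the cost of mixing languages.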

4.4 General guidelines for usable privacy


There is a distinct resemblance between usability problems of security functions and usability
problems of privacy functions, not only because good digital privacy protection demands good data
security but also – as noted by Whitten and Tygar (1999) for security and later by many others for
privacy – because:
• Security and privacy protection are typically secondary goals to ordinary computer users,
• they contain difficult concepts that may be unintuitive to lay users, and
• true reversal of actions is not possible.
Herzog and Shahmehri (2007) provide a table with UI guidelines for "applications that must set a
security policy". This table is reproduced here as Table 3. The individual guidelines fit end-user PET
very well, even if there are variations in the range of designable user assistance, as explained in Table
4, "Privacy usability in relation to the security usability guidelines".

Table 3 Design guidelines for “applications that must set a security policy, their origin and motivation”
(Herzog & Shahmehri, 2007, Table 1)

Guideline (user-side security) | Origin and motivation


S1. Security must be visible without being intrusive.
Origin and motivation: Johnston et al. (2003), Nielsen (1994) and Yee (2002) propose visibility of system status as one criterion for successful HCI in security applications. Visibility contributes to the building of trust in the security application. However, users do not want to be ambushed with security alerts at all times (Sasse et al., 2001).
S2. Security applications must encourage learning.
Origin and motivation: As a first step towards learning, Nielsen (1994) demands that applications use the language of the users to enhance their understanding and consequently to support the learning process. Whitten and Tygar (1999) have shown that security is difficult to understand and that concepts from educational software could and should be borrowed. Johnston et al. (2003) propose learnability, which we take one step further: not only should the software be learnable but it should also encourage the user to learn about security issues.
S3. Give the user a chance to revise a hasty decision.
Origin and motivation: […] While security in principle has the barn-door property (Whitten & Tygar, 1999), in that the late closing of a security door may be exactly too late because the damage is already done, this is not always or absolutely the case. But if there is no convenient way for the user to "close the door", it will remain open, and this must be avoided. This issue is also recognised as revocability by Yee (2002) or easy reversal of actions by Shneiderman and Plaisant (2004), even though true reversal may not be possible because of the barn-door property.


S4. Decisions cannot be handled off-line: run-time set-up is to be preferred.
Origin and motivation: This guideline is in conflict with the guideline "support internal locus of control by making the user initiate actions, not respond to system output" by Shneiderman and Plaisant (2004) and shows clearly that not all usability guidelines can be uncritically transferred to security applications, which are typically supportive and not primary-task applications, and the user is not likely to take any actions if not prompted to do so.
S5. Enforce least privilege wherever possible.
Origin and motivation: The principle of least privilege comes from Saltzer and Schroeder (1975) and is one important principle of computer security and specifically of access control, which is what security policies are about. Garfinkel (2005) warns in this context of hyperconfigurability. Users have difficulties in managing too many options and cannot take in the consequences of their modifications. Garfinkel rather suggests "a range of well-vetted, understood and teachable policies" instead of exposing the user to fine-grained policy set-up.
S6. In a security alert, the user should be informed of the severity of the event and what to do.
Origin and motivation: Nielsen (1994) proposes that error messages should contain instructions on what to do, not only what has happened. Still, the texts must be short and focused so that they are actually read. Details and additional explanations should be accessible but not blur the main message. Yee (2002) demands clarity so that the effects of any actions the user may take are clear to him/her before performing the action. Also Hardee et al. (2006) state that any decision support should contain the consequence of any action taken.
S7. Spend time on icons.
Origin and motivation: Johnston et al. (2003) state that well-chosen icons can increase learnability. This is supported by Whitten (2004), who suggests icons for public-key encryption and motivates icon choices, and Pettersson [in PRIME deliverable D06.1.c], who comments on the difficulty of choosing icons for privacy settings.
S8. Know and follow general usability guidelines and test, test, and test again.
Origin and motivation: General usability guidelines are e.g. described by Shneiderman and Plaisant (2004) or Nielsen (1993). However, these guidelines are often so general that they can be difficult to implement for a specific case. Therefore, actual usability testing with users from the intended user segment is essential.
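Guideline S6 can be made concrete as a small sketch: an alert carries the event, its severity, and an instruction in the short main message, while longer explanations stay behind a separate field so they do not blur the main message. The function and fields below are assumptions for illustration, not taken from Herzog and Shahmehri:

```python
def make_alert(event, severity, instruction, details=""):
    """Compose an alert per guideline S6: a short main message stating
    what happened, how severe it is, and what to do; longer details are
    kept separate and shown only on demand."""
    if severity not in ("low", "medium", "high"):
        raise ValueError("unknown severity")
    return {
        "message": f"[{severity.upper()}] {event}. {instruction}",
        "details": details,  # accessible, but not in the main text
    }
```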

Table 4 Privacy usability in relation to the security usability guidelines

Guideline (user-side PET) | Motivation and proposed PRIME solutions


P1. Privacy-enhancing mechanisms, including assistive functions, must be visible without being intrusive.
Motivation and proposed PRIME solutions: Same motivation as for the 1st security usability guideline. Privacy protective means taken or offered by the system must be visible (proposed PRIME solutions: PreSets visible in the browser address bar; "Send Personal Data?"; Assurance computations etc. mentioned in 2.). Moreover, there must be conspicuously placed information assuring worried users that they will find helpful instructions within the system, and help functions should inform users about external help. (For the importance of the latter, cf. P2 and P3.)
P2. Privacy-enhancing mechanisms in applications must encourage learning.
Motivation and proposed PRIME solutions: In PRIME proposals, linkability computation and simulation have been suggested to enhance understanding of risks. Assurance computations of trustworthiness must contain information on the different sources for the indicators used, such as privacy seals (a.k.a. trust marks). Furthermore, since user tests in PRIME as well as surveys conducted by others have demonstrated a lack of knowledge among the general audience about their privacy rights, a function such as the Data Track must be supported with assistance functions guiding the user "in the real world" (5.6), not only with help functions for managing the PRIME system.
P3. Give the user a chance to revise a hasty decision.
Motivation and proposed PRIME solutions: In a data protection context, this could also include providing means to contact the recipient of data (i.e., the data controller) and ask for erasure or altered conditions for usage (Table 1).
P4. Run-time set-up is to be preferred.
Motivation and proposed PRIME solutions: This is somewhat more weakly formulated than the 4th guideline by Herzog and Shahmehri; it will take some time to see if off-line set-ups are preferred and manageable by ordinary users. Presumably, installing a pack of preference sets from a Consumer Advisory Bureau might feel safer than getting proposals for new privacy settings every now and then, even if a run-time solution can also derive the settings from a trusted third party. Much of this hinges on external trust structures, and the design will have to reflect this; see also 8.
P5. Promote least linkability wherever possible.
Motivation and proposed PRIME solutions: For users who do not bother or are unsure about settings, it is reasonable to have "maximum privacy" switched on from the start and at every new contact, so as to deflect e.g. phishing attacks. In the PRIME project, 3-4 PreSets have been proposed as standard 'visiting cards', where the default PreSet is always the 'anonymous' one based on transactional pseudonyms, and no personal data is released automatically or filled in automatically in a consent window, which means that the user will have to make some effort in order to release data or be linkable at all.
P6. In a privacy alert, the user should be informed of the severity of the event and what to do.
Motivation and proposed PRIME solutions: In principle this guideline needs no further motivation than the ones found in the corresponding security usability guideline. (Applicable in particular to the PRIME Assurance check, 3.5, but the exact UI design will be very much dependent on the exact content of such a function.)
P7. Spend time on terms, icons, graphics, and animations.
Motivation and proposed PRIME solutions: Consent is a kind of contractual agreement, where linguistic expressions matter (see a similar discussion in Molin & Pettersson, 2003); language support is also important for services to have legally correct contacts with customers in other EU countries. Graphics are more than icons, as demonstrated in the TownMap, which also made use of animations (and users' drag-and-drop agreements, DADAs).
P8. Know and follow general usability guidelines and test, test, and test again.
Motivation and proposed PRIME solutions: As Herzog and Shahmehri note, guidelines are often so general that they can be difficult to implement for a specific case. Specific solutions must be sought. For the early PRIME proposals, prospective users were included from the start; paper prototyping or computer-based mockups are very quick ways to understand what users tend to misunderstand. More on testing follows in section 4.5.
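The least-linkability default of P5 can be sketched in code. The PreSet names and fields below are assumptions made for illustration, not the actual PRIME configuration:

```python
# Every site the user has not explicitly configured gets the
# 'anonymous' PreSet, which releases no data automatically.
PRESETS = {
    "anonymous":    {"pseudonym": "transactional", "auto_release": []},
    "pseudonymous": {"pseudonym": "per-site",      "auto_release": []},
    "identified":   {"pseudonym": None, "auto_release": ["name", "address"]},
}

def preset_for(site, user_choices):
    """Return the PreSet to apply for a site, defaulting to 'anonymous'
    so that releasing data always takes a deliberate user action."""
    return user_choices.get(site, "anonymous")
```

The design choice here is default-deny: linkability is opt-in per site, which also limits what a phishing site can obtain without any user action.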

As is obvious from the table, there are some particularities of privacy usability not found in security
usability. Customers are asked for data and for consent to its use. Laws play a prominent role in what
data subjects can do if they dislike some use of their personal data. Therefore, the guidelines
concerning the interaction design for privacy-enhanced identity management systems may have
implications extending much further than the guidelines for security systems, even if the number of
guidelines may be kept small in both cases. One could also consider specifying guidelines for each
broadly defined function – i.e. functions such as navigating the web pages of a service, answering
requests for data, gauging trustworthiness and assessing privacy risks, keeping track of data releases
and conditions – so as to keep each set of guidelines more specific. Herzog and Shahmehri defined
their task as providing guidelines for applications that set security policies, which is somewhat more
specific than guidelines for usability in security applications, which in turn is more specific than
general usability guidelines for graphical user interfaces. They demonstrate the differing content of
these different sorts of guidelines in a figure which is reproduced in Figure 8.
It is noteworthy that Herzog and Shahmehri introduce a guideline normally counted among guidelines
for the development process rather than as a guideline for the properties of the developed product,
namely “Test and test more”. It is indeed hard to find product guidelines that really produce a usable
user interface if no intermediate user testing has been conducted. The design process is the topic of
section 4.5.

Figure 8 Structured overview of guidelines for usability in security applications (Herzog & Shahmehri, 2007).


4.5 Testing and design process


Having an extensive set of guidelines trying to capture many different application types will not
make the guidelines easy to follow. On the other hand, if fewer than a dozen guidelines are listed, they
have to be broad, and employing them will need much further specification and motivation.
When the starting point is an external set of requirements – as the legal requirements were in the PISA
project – each requirement has to be ticked off, hence the richness of Table 1. Still, motivation is
needed for why the particular UI solutions meet the requirements, and each UI proposal can be
contested until some reasonable body of experience of user behaviour is collected. Usability experiments
and observation of people using the application in real life is thus important. It should furthermore be
noted for the development process preceding the real life tests, that the designers of the UI must get
the ideas for the design from somewhere, and here user testing also plays a significant role: focus
groups and early prototype testing give feedback which can function as input to the design process.
This section deals with guidelines for the design process itself. Many variants can be found of
iterative, user-centred development processes and we will not enumerate them here. Suffice it to
mention Carroll (2000) on scenario-based design, Buxton (2007) on sketching interactive products,
Reimann and Cooper (2006) on personas and also the use of standard UI elements; finally, we refer
to Preece et al. (1994, 2002/2007) and other HCI textbooks for discussions of design processes, as
well as to Dumas and Redish (1999) and Rubin (1994) for usability testing. It may further be noted
that a testing methodology suitable when concepts are unknown or unclear to test participants consists
of confronting the participants with technology that does not yet really exist; the stimuli are used to
provoke responses in discussions afterwards (as often done in PRIME studies; see also Tweney & Crane, 2007).
In addition to the tables above, this section also gives a series of checklists for privacy aspects of
applications where identity management can be said to play a part.

4.5.1 Guideline for the design process


The following list suggests steps to take for a user-centred design process. Examples of the application
of the methods mentioned in the list may be found in e.g. PRIME deliverable D06.1.b. Steps 5 to 9
may be iterated several times.
1. Define one or several scenarios in which the PET user side system is meant to be used;
make it concrete: this-and-this service, this-and-this task, these kinds of users; define each
major step in the scenarios; do not give funny (phoney) names to services – they shall be
used in user tests.
2.* Develop several user interface proposals; step through the interaction according to the
scenarios and define every user input/action needed.
3. Test words/concepts in simple surveys; develop the questionnaires in steps with some pilot
tests first.
4. Put scenarios to user test: compose storyboards or user interface animations with
commentary speaker voice; demonstrate scenarios and let test participants answer
questionnaires on feelings, utility, and intelligibility of the proposed scenario.
5.* Redesign: make sure what the objectives of the system and of the coming tests are (note that
step 5 may re-orient the original direction presumed in step 1); make at least two designs to
avoid getting mentally blocked in refining one particular design (the “only” one).
6. Design usability tests grounded in the scenarios:
a. determine fidelity of the user test (paper prototyping, computer-based mockup, or running
prototype);
b. develop introductions to test participants on the basis of the result from step 3 (short
intros might be included in the user interface; longer ones might not be useful as test intros
at all if several new and/or complicated concepts have to be introduced – consider using user
interface animations as in step 3);


c. develop tasks including introductory, “warming-up” tasks;
d. determine data collection (normally screen recording and participants’ voice if they are
asked to comment on what they do);
e. develop pre-test and post-test questionnaires on user habits, impressions, and suggestions.
7. Pilot test with a few users, and change the test set-up accordingly; pilot test again;
8. Conduct the usability test: if several iteration rounds are planned, each round does not have
to include many participants or all features under development;
9.* Evaluate the test outcome including task results as well as pre- and post-test questionnaires.
* For inspecting user interfaces by using usability checklists, see the beginning of the next section.

4.5.2 Commentary on testing


Inspecting user interfaces by means of usability checklists (compare * in 4.5.1) has been proposed by
many. Nielsen's "heuristics" are among the most frequently cited examples. Johnston et al. rely on them
to develop their own set of security usability criteria. However, comparative studies have shown that
evaluation by inspection may yield many false alarms and still miss serious usability flaws (e.g. Law
& Hvannberg, 2004). Possibly, experience from user observations improves the quality of an
inspection-based evaluation considerably, but this does not mean that non-HCI professionals should not check
their designs against good sets of principles, as noted in Herzog's and Shahmehri's 8th guideline.
But it is also up to the evaluator to develop specific principles from steps 5 and 9 above and to interpret
principles such as "minimalist design" somewhat more deeply than what meets the eye at a first glance at a
single dialogue window.
Evaluations can be of different sorts depending on the goal of the evaluation. There is a difference
in method between looking for inspiration among test users’ suggestions and test results, and
comparing the number of completed tasks and users’ comprehension across two designs. See
Nielsen and Mack (eds., 1994) for the difference between formative and summative tests. For the latter,
the ISO standard on usability specification and measures defines usability as “the effectiveness,
efficiency, and satisfaction with which specified users achieve specified goals in particular
environments” (International Organization for Standardization, 1998). The ISO document gives
guidance on how to set up criteria for evaluation, but it does not prescribe a specific percentage rate of
successful task completion as the norm for a usable system. Instead, it is the evaluators who have to
justify, for each application type, user group, goal, and situation, what counts as effective and efficient
and what gives users a feeling of satisfaction in other senses as well.
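For summative evaluation in the ISO sense, the three measures can be operationalised per study. The sketch below is illustrative only: the session fields and the aggregation are our own assumptions, since the standard deliberately leaves concrete measures and thresholds to the evaluator.

```python
# Illustrative sketch: ISO 9241-11 leaves concrete measures to the evaluator,
# so the field names and aggregation below are assumptions, not a prescription.

def summarise_sessions(sessions):
    """Aggregate effectiveness, efficiency and satisfaction over test sessions.

    Each session is a dict with:
      'completed'    - bool, whether the user finished the task
      'time_s'       - float, task time in seconds
      'satisfaction' - a post-test rating, e.g. on a 1-5 scale
    """
    n = len(sessions)
    # Effectiveness as task completion rate over all sessions.
    effectiveness = sum(s['completed'] for s in sessions) / n
    # Efficiency as mean task time, counted over completed tasks only.
    finished = [s['time_s'] for s in sessions if s['completed']]
    efficiency = sum(finished) / len(finished) if finished else None
    # Satisfaction as the mean post-test rating.
    satisfaction = sum(s['satisfaction'] for s in sessions) / n
    return {'effectiveness': effectiveness,
            'efficiency_s': efficiency,
            'satisfaction': satisfaction}
```

It remains the evaluator’s job to argue, per application type and user group, what values of these measures count as acceptable.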
The legal requirements have to be met. However, for legal requirements there is no prescribed success
rate for a user group. Rather, every individual is able to refer to the law, and, in a court case, the data
controller will only be able to refer to reasonable provisions, such as following standard layouts and
conducting usability tests to eliminate weak spots. In this light, Table 1 gives at least a kind of checklist to
build one’s design and evaluations around. In the next subsection we also present checklists for the
socio-cultural requirements that could guide designers’ and evaluators’ work. But speaking in more
general terms, the following basic criteria have underlain the usability tests in the PRIME project:
• Users can perform tasks relevant to a specific UI (a window or a whole interaction paradigm)
• Users can understand privacy implications
• Users should feel satisfied with using PIM technology
Some comments on the three general criteria for a usable privacy and identity management system
can be made. The first two criteria can be compared to the four Cs of the PISA project mentioned
earlier in this chapter: Comprehension (= understanding), Consciousness (= being aware), Control (= being
able to exercise one’s rights), and Consent:

Final version 1, dated 01 February 2008 Page 45


PRIME D06.1.f
Privacy and Identity Management for Europe

‘Perform tasks’ relates to Control but also to Consciousness, because if various UI elements are not
detected by the user, he is most likely not able to perform a desired PIM control task;
‘Understand privacy implications’ obviously relates to Comprehension, but presumes an awareness
(Consciousness) of actual PIM actions.
It should further be noted that it is not self-evident how to measure a test subject’s understanding of
privacy implications. In a post-test interview, the subjects can be asked to explain what implications
there are or asked to tell whether or not something specific can happen. The latter gives a more precise
measurement but it can be hard to construct reasonable and unambiguous test questions.
The criterion of ‘satisfaction’ has many aspects. Naturally, the PRIME project wants to have a user
side that appeals to people. But there is also a general question whether people feel the need for the
product. Furthermore, questions of trust come to the surface in all that has to do with privacy and
identity management. The user interface can only deal with part of this aspect – it is not enough
to have a program that appears totally trustworthy to people, because any fraudster could copy such a
design. Nevertheless, ‘trust’ should be included in post-test interviews or questionnaires because it is
strongly connected to the question of satisfaction, and the participants can be given different
preconditions to ponder on (e.g. who should recommend the PIM system for them to trust it will be
helpful), and also asked in what situations they would or would not dare to rely on the system.

4.5.3 Checklists for the socio-cultural requirements


As mentioned in 4.3, the PRIME socio-cultural team has developed a series of checklists to be used
when evaluating end-user PETs. It should be noted, though, that results from evaluation by developers
(or any other non-users) quite often miss the point, as has been demonstrated in evaluation
experiments where inspection methods are compared with user tests (e.g. Law & Hvannberg, 2004,
mentioned above). Inspection by non-users frequently finds pseudo-problems and just as frequently
misses real usability issues. Nevertheless, the checklists below can be answered not only directly by
the evaluators themselves but also by observing test users acting with the system in question.

Table 5 Checklists for the socio-cultural requirements

User Control

User control contributes to realising fundamental social/legal values such as autonomy, individuality,
and dignity. User control is decomposed into a set of seven requirements preceding data disclosure that
form the 7 C’s of user control: consciousness, comprehension, choice, consent, context, confinement,
and consistency, and three post-data-disclosure requirements: chain control, inspection, and ex-post
user control.

Consciousness
• Does the application provide information to the users signalling events relevant to the collection,
use, and removal of personal data at the service provider’s end?

Comprehension
• Does the application provide sufficiently comprehensive explanations of the consequences of
relevant events with respect to the collection and use of personal data?
• Does the application provide sufficient general information about personal data, their collection
and use?
• Does the user understand the application itself?
• Is the user documentation sufficient in scope and understandability?

Consent
• Does the application offer the user ways to provide explicit consent to (personal) data disclosure?
• Does the application offer the user ways to provide explicit permission to use certain data for
performing the service contracted for?
• Does the application offer ways to treat sensitive personal data differently from the way it treats
other personal data?
• Does the application provide special warnings when data are not editable after disclosure?
• Does the application offer ways for the user to explicitly agree to the automatic collection and
processing of (personal) data?
• Does the application offer ways to revoke previously given consent?

Consistency
• Does the application provide information about the consequences of certain user actions involving
data disclosure?

Choice
• Does the application have mandatory data entry fields only for data necessary for providing a
service (also a data minimisation requirement)?
• Does the application provide the users with choices with respect to the use and secondary use of
their data, for instance by providing options with respect to:
  • whether or not to provide personal data;
  • what personal data are to be shared;
  • for what purpose data can be used;
  • when and for how long data may be used?

Confinement
• Does the application provide ways to express preferences/policies with respect to:
  • the purpose of use of the personal data;
  • who may have access to the personal data;
  • where they may be stored;
  • for how long they may be stored?

Context
• Does the application provide ways to change privacy preferences according to context?
• Does the application provide different presets of privacy preferences that can accommodate
contexts in which the user repetitively operates?
• Does the application provide easy changes between (predefined) context settings?

Inspection
• Does the application provide information on the user’s request about:
  • when personal data have been disclosed;
  • to what parties these data have been provided;
  • under what conditions the data have been provided;
  • who had access to the data;
  • for what purpose they had access to the data?

Chain Control
• Does the application provide information about the parties involved in a transaction?
• Does the application, for each party involved in the transaction, provide information on the user’s
request about:
  • when personal data have been disclosed;
  • to what parties these data have been provided;
  • under what conditions the data have been provided;
  • who had access to the data;
  • for what purpose they had access to the data?

Ex-post Control
• Does the application show the user’s rights to access, rectify, block, or erase disclosed (personal)
data and the procedures to execute these rights?
• Does the application provide ways to access, rectify, block, or erase disclosed (personal) data?

Promoting Adoption

The second cluster of socio-cultural requirements addresses the requirements that aim to stimulate the
adoption of PETs by the target audiences. There are six adoption requirements: Social Settings
Flexibility, Minimised Skill Levels, Accountability, Trust in Communication Infrastructure, Trust in
Transaction Partners, and Affordability.

Social Settings Flexibility
• Does the application allow for changing interface language, symbol/icon sets, help files and
documentation?
• Does the application allow for managing privacy settings for different social contexts?

Minimised Skill Levels
• Does the application provide a set of default settings that cover the needs of the majority of users?
• Does the application provide a minimum amount of pop-ups and choices in the human–computer
interaction?
• Does the application offer customisation options for more experienced users?
• Does the application provide an easy-to-use interface?
• Does the application provide comprehensive tutorials and help files?
• Does the application provide information about privacy risks and what the application can do to
help prevent these risks from materialising?

Accountability
• Does the application provide mechanisms to reveal the user’s civil identity in certain cases?
• Does the application provide interfaces to a set of organisational and legal arrangements that
describe the situations in which the application user can be held accountable and to whom?

Trust in Communication Infrastructure
• Does the application provide information about its trustworthiness?
• Does the application provide information about the infrastructure (network/medium) risks?
• Does the application provide information about the measures taken to minimise risks during the
communication of personal data?

Trust in Transaction Partners
• Does the application provide ways to establish the trustworthiness of the transaction partner?
• Does the application provide ways to circumvent the risks causing distrust (e.g., by offering
guarantees that contractual obligations will be met, or by direct performance of contractual
obligations such as direct (anonymous) payment)?

Affordability
• Does the application have a reasonable cost?
• Is the application easy to install and to maintain?
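Checklists of this kind can also be kept in machine-readable form during a development project. The sketch below is our own convention, not part of the PRIME checklists: the grouping mirrors two entries of Table 5, and an evaluator records an answer per question and lists the requirements that remain open.

```python
# Sketch only: the grouping mirrors Table 5, but the answer recording and
# the notion of an "open" requirement are our own conventions.

CHECKLIST = {
    'Consciousness': [
        'Does the application provide information to the users signalling '
        'events relevant to the collection, use, and removal of personal data?',
    ],
    'Consent': [
        'Does the application offer the user ways to provide explicit consent '
        'to (personal) data disclosure?',
        'Does the application offer ways to revoke previously given consent?',
    ],
}

def open_requirements(answers):
    """Return the requirements with at least one question not answered 'yes'.

    `answers` maps question text to 'yes' / 'no' / 'n/a'; an unanswered
    question counts as open, so nothing is silently passed over.
    """
    return sorted(
        req for req, questions in CHECKLIST.items()
        if any(answers.get(q, 'no') != 'yes' for q in questions)
    )
```

Recording answers this way also documents, for later test rounds, which questions were answered by inspection and which by observing test users.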

4.6 Conclusion on HCI principles


In conclusion, the formation of UI design principles for good privacy in human-computer interaction
can be based on different sources. Whatever the sources, such principles can be used by developers to
check their user interface proposals, but it is advisable not to rely solely on evaluation by
inspection. User testing is, however, not in itself a guarantee against bias, because degrees of
compliance with principles must always be reckoned with. Within the life span of PRIME we have not
been able to make a ‘final’ evaluation of the PRIME UI concepts. Instead, we have often limited the
claims of usability test results to the refinement of concrete UI designs (see examples in the following
chapter), but sometimes we have also extended the claims to the theoretical analysis of interaction paradigms
and to the refinement of principles. In addition to this, we have developed and discussed the test methodology,
which should be of interest to any future development project.


5 Discussion on specific UI designs


The exposé in chapter 3 of PRIME user interface paradigms for web browsing will in the present
chapter be complemented with accounts of individual windows and functions. Much of the processing
and negotiation between the PRIME user-side and services-side systems is not noticeable to the user.
Rather than relying on a description of the PRIME framework, which would be extensive, this chapter
is arranged like chapter 3, i.e. in accordance with the main user interface windows. The considerations are
often applicable outside the web browsing sphere and touch the main areas for privacy-enhanced
identity management, which are: negotiation between interaction parties, ‘awareness’ functions (of
privacy risks, trustworthiness, etc.), and utilisation of history functionality. To this could also be added
management of the PIM system itself.
The chapter starts by noting some general conceptual difficulties in appreciating PIM systems that must
be overcome by some users. The UI solutions sketched in the other sections sometimes include
proposed remedies for these general conceptual difficulties.

5.1 Conceptual difficulties


In PRIME deliverable D06.1.b the early prototypes were evaluated from several perspectives (not only
HCI). The implications for UI construction were later summarised in D06.1.c; here we reiterate some of
the more general findings made by the HCI evaluators:
• Problems for users in mentally differentiating between user-side and services-side PIM
• Problems in making people trust the claims about the system
• Linkability vs. “real-life” data
Differentiating between user side and services side
In section 6.5.5 of D06.1.b, it is noted: “Especially important is the finding that users do not really see
the difference between ‘their’ PRIME-enabled browser and web services.” It is important that users
are aware that the technology is their technology, not just any Internet technology, that is, that they
have control over the data stored at the user side. (Cf. Kobsa’s opinion mentioned in section 2.1.)
The user interface shall clearly distinguish between functions provided by the services side and functions
provided by the user-side PET system. Naturally, this relates to the HCI requirements of comprehension
and consciousness in section 4.2. For example, in one TownMap version the user’s ‘home’ contained
the PII symbols to be moved to the icon of a service provider, and thus the user sees a representation
of his own domain in relation to representations of other Internet entities. However, a “Send data?”
window with a data-entering facility might be harder to make uniquely the user’s; an attacker might be
able to create a similar-looking window (compare Wu et al., 2006, referred to in section 2.1).
Trust
To continue with the trust problem: even if users understand that they have special software on ‘their’
side, they will not necessarily believe that it will be able to help them. Comments from test users
indicate, however, exactly where trust breaks down, and it is on such points that UI development must
focus. Below, in the section on the Data Track, some trust-enhancing functions are suggested which at the
same time make it easier to exercise one’s privacy rights.
Network linkability vs. “real-life” data
As for the concept of linkability based on network addressability, ‘pseudonymity’, it could be noted
that in the tests where this feature has figured prominently, mainly two or three degrees of linkability
were considered. Test users seemed to cope fairly well with these when tasks were focused on this.


However, it is hard to measure how well they understood that there are implications of their data
disclosures (name, address, phone number, etc.) that go beyond what the linkability settings define.

5.2 Relationship-centred paradigm


5.2.1 Accessing web sites and privacy preferences via bookmark lists
Section 3.2.2 has already mentioned how bookmark lists can be combined with icons for privacy
preferences. By default a PRIME-enabled web browser would switch to the “Anonymous” preference
setting when entering a new site; “Anonymous” does not fill in the “Send data?” dialogue with any
data and furthermore is based on transactional pseudonyms, so as not to be traceable between different
visits (only allowing session cookies and not revealing true IP address).
The user may, however, want to define other preference settings for certain visited web pages. The user
is invited to do so in the “Send data?” dialogue if he/she chooses to go beyond the restrictions of the
“Anonymous” preference setting. Also when bookmarking a web page, the user has the opportunity to
set specific privacy preferences for a web site, resulting in a list as in Figure 1 on page 26.
In fact, in the PRIME mock-ups, we decided to always have the icon for the anonymous PrivPref
ready in the bookmark list, so that an anonymous ‘entrance’ to all bookmarked web sites could always be
made – one can hypothesise that even a user who sets the PrivPref of a “returning visitor” (Figure 4,
p. 28) as the default for a specific web site does not always want to be recognised when visiting that
web site. In Figure 1, the anonymous PrivPref is selected by clicking the masked man for each
bookmark, while the two other icons stand for PrivPrefs that can alternatively be activated and might
be recognisable by the service provider via the pseudonym that the PrivPref releases and/or by
some released personal data (if the service provider requests such data and the user has agreed to it).
Clicking on the name of a bookmark in Figure 1 implies selecting the leftmost preference setting if
there is more than one icon.
The solution described above works when a user accesses web sites via bookmarks. On the other hand,
when the user enters a web address in the address field of his browser the system should find the
default PrivPref for that site, if the user has defined one; otherwise the anonymous PrivPref should be
used because this is the standard setting and applies to all web sites if nothing else has been set by the
user.
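The lookup described above amounts to a simple default rule. A minimal sketch, assuming a hypothetical table that maps sites to the default PrivPrefs the user has defined:

```python
# Standard setting: applies to all web sites if nothing else has been set.
ANONYMOUS = 'PRIME Anonymous'

def privpref_for(site, user_defaults):
    """Resolve the privacy preference for a site entered by hand.

    `user_defaults` is a hypothetical mapping from a site (e.g. its host
    name) to the PrivPref the user chose as default for that site; the
    anonymous PrivPref is used whenever the user has defined none.
    """
    return user_defaults.get(site, ANONYMOUS)
```

The same rule also covers first visits: a site the user has never configured simply has no entry, so the anonymous PrivPref applies.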
More problematic is that users might find it hard to select the anonymous PrivPref when it is not the
default; the “Go” button of the web browser could have alternatives as in Figure 9 (right window),
even if users presumably would use the “Enter” key when they have keyed in an address. The icon to the
left of the address field shows the current PrivPref.

Figure 9 Traditional “Go” button (left) and address field with two “Go’s” (right)

5.2.2 TownMap
In the TownMap (introduced in section 3.2.3) the icons for privacy preferences are replaced by areas
visualising privacy protection concepts with default privacy settings. Individual bookmarks or
bookmark menus are symbolised by houses within these areas. Predefined areas are suggested to be
the Neighbourhood (where relationship pseudonymity is used by default; footnote 8), the Public area
(where the “Anonymous” setting is used by default), and the Work area with another set of personal
data than that for private use. The user also has his own house in the map, a prominent house at the
baseline. The approach of using different default settings for different areas should make it easier for a
novice to see the options available once he has grasped the TownMap metaphor.
After populating the map with his favourite web sites and simultaneously assigning privacy preferences,
the user will select the appropriate privacy settings while web surfing by just clicking the icons for the
web sites. The map display has to vanish or be reduced when the web page of the service provider
opens. The TownMap does not only contain houses representing service providers but also icons
representing data items. By combining the web page and a reduced view of the TownMap, the user can
use the TownMap icons and positions to send personal data when the service provider requests such –
we called it DADA, Drag-And-Drop Agreement. In Figure 10 the user is sending name (face with
speech bubble) and credit card information to a payment service. Furthermore, how the payment
service sends money to the shop (CDON.COM) can also be demonstrated by the PIM system moving
coin symbols from the representation of the payment service to the representation of the shop.

Figure 10 DADA to send credit card information

Figure 11 shows how the user moves the name icon (face with speech bubble) to the gate of his
garden. The gate has a footprint icon used in PRIME prototypes for representing the Data Track. By
dragging a copy of the name icon to the Data Track icon, the user queries the system for history
records concerning his name (or names, if he has used several): which service providers have received
his name? Section 5.8.1 discusses the topic of utilising movements in spatial relationships.
The ferry boat and the bridge are points where the user defines the privacy properties of the areas
beyond the river. This is of course highly metaphorical, and perhaps unnecessarily graphical. A simple
CrossRoad design was also conceived within the TownMap approach, where the “roads” are used as
area boundaries. Figure 12 shows two views of this design. The user’s house is in the upper left corner,
which might be a bit unnatural when it comes to moving data icons from the user to the service
providers because the user’s mouse and therefore also the user’s hand movement have to go toward
the user instead of away from him. However, in a compressed view this position of the user’s house
icon might make it possible to suck this icon into the web browser’s toolbars (which would not be
natural in Figure 11) and the icon could be the access point to the data icon row if this is not part of a
tool bar. Areas could be signalled next to the address field as in Figure 9 by stylised CrossRoad views
showing only the position of the cross.

Figure 11 Asking Data Track for information on name disclosures

Figure 12 CrossRoad, two views


A preference test (with N = 34 test persons) was made by using user interface animations (video clips)
in which groups of test participants could see identity management carried out in the traditionally styled
user interface and also in the TownMap. Afterwards, participants individually filled in a form with
questions about impressions and preferences; then the third design, the simplified CrossRoad map, was
also shown. Swedish university students aged 20 and above, some being older than 45,
participated in the preference test; all had used Internet Explorer and only some had used other
browsers in addition. Our traditionally styled alternative was based on an Internet Explorer mock-up.
The traditionally styled browser got in general a positive response: more than half of the answers gave
positive descriptions of it. The maps, on the other hand, were considered by many to be messy. One
should bear in mind that the maps were populated already from start, while a new user would have
found his own map empty (like the bookmark list in an unused copy of a browser). For more
discussion of how the test was set up, see Bergmann et al. (2006).
On the question about their impression of the display of data and money transactions, 19 answered that
it “facilitates” while 11 ticked “superfluous”. Nine of these eleven persons also ticked “looks
childish”; fifteen in all ticked “looks OK”. This result speaks in favour of using animation in
explanations.
When ranking the alternatives, 24 persons put the traditional browser as their primary choice. Seven
preferred the realistic TownMap and three preferred the simplified map. Two fifths of the participants
answered that they would like to be able to switch between designs. The test has been replicated in the
USA with 27 (young) university students: the results were in the main similar to the test conducted in
Sweden, although a majority of the American subjects wanted to be able to toggle between designs.
Comparing with the age groups among the Swedish participants one can see a clear trend: young
Internet users are in general in favour of the more graphical user interface represented by the
TownMap.
Animations of transactions “facilitates”
That animations of transactions “facilitates” was the opinion of about half of the test participants who
were shown UI animations of one TownMap design. Naturally, it should be possible for users to
switch off such mini-shows, but they complement the use of DADAs well. The question is how well
they work in traditional user interfaces where things are not already set in 2-dimensional locations.
Possibly a reappearing diagram with fixed points for standard actors can serve to animate transactions
even if all points except the user’s will have to be renamed for each new transaction. Again, see
section 5.8.1 for further discussion on this.

5.3 Privacy preference setting


In a ‘full’ PRIME scenario, the PRIME system on the user side will communicate and negotiate with a
PRIME-enabled system on the services side. The user-side system can display for the user the
requirements and the promises of the service provider. Of course, it is tedious for users to read and
react to all this information. Thus, if users can utilise a collection of predefined privacy preferences,
they can rely on their PRIME system to perform the reading and reacting automatically.
As explained in chapter 3, these preferences function as a sort of visiting card, although the word
‘visiting card’ itself can be misleading because sometimes the preferences contain no personal data at
all and they furthermore specify network linkability options (transaction or relation pseudonyms).
Preferences that one might wish to set include preferences for:
• Pseudonym type
• Purposes for which the data can be used
• Data types and data items (‘name’ and ‘John Primeur’)
• Retention periods


• Performance of the service provider (data controller)


The latter point is treated as an assurance evaluation. The evaluation may include more than evaluating
claims which the service provider makes about itself. It is a trustworthiness check as far as such a
thing can be performed automatically. An outline for this is presented in section 5.5.
For the first four points in the list above, it should be noted that the effects of the pseudonymity type
have to be explained in plain language if ordinary Internet users are to have a chance to understand the
differences. Adding purposes to the data disclosure options of a preference setting may seem to make
things very complicated, and we will discuss it below. Data items, on the other hand, are more easily
seen as needed in a preference setting that will act as a sort of visiting card, while the retention period
condition can, on the general level of a preference setting, probably not be more specific than “Use only
as long as needed for the requested service” and “Use until the user requests deletion”.
As was just noted, adding purposes to the data disclosure options of a preference setting may seem to
make things very complicated for the user. We therefore propose to predefine a limited number of
preference settings (such as PRIME Anonymous) and from these the user can act directly, by adding
data when such are requested by a web site and by, optionally, creating an extended preference setting
from this occasion (cf. guideline 4 in 0 and 0, section 4.4). The purposes should then be designed for
some typical cases, such as shopping. In an elaborated automatic purpose-binding architecture, it
should be possible to define several specific purposes rather than only “For the completion of the
requested service”. Each specific purpose may then need only a few data items. Thus, the preference
setting for Shopping may include “Physical delivery” and “Payment” as the purposes for why data is
collected from the customer. This makes it easy for the service provider to maintain a privacy-friendly
database. It moreover opens the way for user-side handling of split responsibilities if the fulfilment of the
service request is divided among several companies (typically: the shop, a delivery firm, and a payment
service). Each company would, via the PRIME user-side system, get only the purpose-dedicated data
even if the data request is treated in one window.
If the PRIME system is to validate a data request, it has to rely on the service provider’s
declaration of the purposes for which the collected data will be used. The stated purposes will exist in a
PRIME Standard Purposes list, and this list would contain, for each purpose, a set of needed data types;
see a hypothetical list in Table 6.

Table 6 Data types (tentatively suggested) in relation to stated purposes

Register an order
  Data types: Ordered items; session pseudonyms (= session cookies)
  Comment: for the shopping cart

Physical delivery
  Data types: Name and address (full) – alternative 1; pin code (received from the service provider)
  and pick-up point – alternative 2

Electronic delivery
  Data types: Email address – alternative 1; Internet link (the user is given a link) – alternative 2

Payment
  Data types: Credit card info; bank account info; eCoin; bonus points
  Comment: the alternatives here are not mutually exclusive

Registration
  Data types: User name (automatically generated); password (automatically generated)
  Comment: this is the minimal need

Marketing
  Data types: Email address; telephone number; name (for physical contact); address (in combination
  with name)

Commercialisation of data
  Data types: cf. Marketing; PRIME excludes profiling data
  Comment: “transfer” of contact info

Statistical
  Data types: All data types except name and full forms of personal and telephone numbers
  Comment: should be anonymous, otherwise “Marketing”

Profiling
  Data types: Ordered items; user name / user’s pseudonym (possibly more data types)
  Comment: here, the user wants profiling

Travel booking
  Data types: -- (“To be defined”) --

Another thing is that the purposes declared by the service providers could be questioned by the
PRIME system if they are not listed in the PrivPref currently used by the user. We have to rely on the
user to select the appropriate preference for this check (i.e. the check that there is a match between
the requested service and the purpose of the data request).
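The user-side check described above can be illustrated as containment tests against the Standard Purposes list. The sketch below is hypothetical: the list entries are abbreviated from Table 6, and the function and its return format are our own assumptions, not the PRIME implementation.

```python
# Abbreviated, hypothetical excerpt of the Standard Purposes list (cf. Table 6);
# the validation logic below is our own sketch.
STANDARD_PURPOSES = {
    'Physical delivery': {'name', 'address'},
    'Payment': {'credit card info', 'bank account info'},
    'Marketing': {'email address', 'telephone number'},
}

def questionable(request, privpref_purposes):
    """Return the parts of a data request the user side should question.

    `request` maps declared purposes to the data types asked for. A purpose
    is questionable if it is not listed in the active PrivPref; a data type
    is questionable if the Standard Purposes list does not tie it to the
    purpose under which it was requested.
    """
    problems = []
    for purpose, data_types in request.items():
        if purpose not in privpref_purposes:
            problems.append(('purpose not in PrivPref', purpose))
        # Data types the Standard Purposes list does not require for this purpose.
        extra = set(data_types) - STANDARD_PURPOSES.get(purpose, set())
        for dt in sorted(extra):
            problems.append(('data type not needed for purpose', dt))
    return problems
```

Such a list of problems could then drive the UI: unquestioned parts are filled in automatically, while questioned parts are flagged for the user’s explicit decision.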
Four privacy preference “templates” are provided by the PRIME system. Because the two basic ones
are directly usable as PrivPrefs, we do not suggest an abstraction into two levels (i.e. template vs.
PrivPref) but instead, as in IPV1 and IPV2, keep the PRIME pre-defined ones as unalterable privacy
preferences, even if numbers III and IV need data to be really meaningful. (From Tweney & Crane,
2007, one may infer that even two visiting cards may be too much for most users.)

I. PRIME Anonymous, No data, Transactional/“Sessional” pseudonyms/cookies

II. PRIME Returning Visitor, No data, Relational pseudonyms (remembers recipient)

III. PRIME Minimal Shopping, Data structure (see table), Transactional pseudonyms
Purpose Data Types Retention period
Register an order See separate table Until service completed
Physical delivery -“- -“-
Electronic delivery -“- -“-
Payment -“- -“-
IV. PRIME Profiled Shopping, Data structure (see table), Relational pseudonyms
Purpose Data Types Retention period
Accept an order See separate table Until I object (i.e. user decides)
Physical delivery -“- -“-
Electronic delivery -“- -“-
Payment -“- -“-
Profiling (registration) -“- -“-

“V.” A fifth and further preference settings will be created as the user uses I–IV.
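For illustration, the four predefined settings can be restated in machine-readable form. The field names below are our own assumptions; the values restate templates I–IV above.

```python
# Field names are our own assumptions; the values restate templates I-IV.
# 'alterable' reflects that the PRIME pre-defined PrivPrefs stay unalterable.
PRIVPREF_TEMPLATES = {
    'PRIME Anonymous': {
        'data': None,                       # no data disclosed
        'pseudonym': 'transactional',       # "sessional" pseudonyms/cookies
        'alterable': False,
    },
    'PRIME Returning Visitor': {
        'data': None,
        'pseudonym': 'relational',          # remembers recipient
        'alterable': False,
    },
    'PRIME Minimal Shopping': {
        'data': ['Register an order', 'Physical delivery',
                 'Electronic delivery', 'Payment'],
        'pseudonym': 'transactional',
        'retention': 'until service completed',
        'alterable': False,
    },
    'PRIME Profiled Shopping': {
        'data': ['Accept an order', 'Physical delivery',
                 'Electronic delivery', 'Payment', 'Profiling (registration)'],
        'pseudonym': 'relational',
        'retention': 'until the user objects',
        'alterable': False,
    },
}
```

Settings the user derives from these while answering data requests would be stored alongside, but as alterable entries.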


5.4 Mapping preferences and sending personal data


Setting aside for the moment the ambitious attempt outlined in the previous section, we start the
present section with some ‘plain’ ways of presenting services-side requests for data to users and asking
for their consent. Then we give short discussions of UIs for displaying how users’ intentions are
matched with service providers’ promises and data requests. There is no single best solution –
this section notes the rationale behind each one and does not preclude the combination of several
solutions.

5.4.1 “Context Menu” for data disclosure and consent giving


When the preference settings are such that data requests from service providers do not need much
manual labour from the user, context menus could appear directly on the web pages of PRIME-
enabled web sites. It would work by the user clicking a PRIME button on the web page, which then
makes the service-side system invoke its PRIME system to contact the user’s PRIME system. The user-
side system can fill out the requested data and display it in a popup on top of the web page. If some
data is missing, the user can select from a menu. By not allowing free-text input at a data request (in contrast
to guideline 4 in 0) but only selection from the user-side database, “Personal Data” (5.7.3), the user
cannot be fooled into filling in a faked PRIME consent dialogue sent out by a bogus website.
With the menu dialogue, selecting single data items or proofs (credentials) will be quick and easy. The
use of submenus as in Figure 13 may prevent the user from clicking the “I agree” button too quickly. In
the figure, this button is inactive because the user has not yet selected which e-mail address he wants
to disclose to “News online”.

Figure 13 Context Menu appearing on top of the web page when user clicks PRIME button

5.4.2 Drag-and-drop for data disclosure and consent giving


The utilisation of ‘DADAs’, i.e. ‘drag-and-drop agreements’, is discussed together with the TownMap.
See also section 5.8 about the difference between ‘giving consent’ (= confirming) and ‘stating
conditions’.

5.4.3 Separate consent window “Send Personal data?”


In Figure 5 the user had to select an e-mail address from several existing ones in the current privacy
preference setting “PRIME Returning Customer” (presumably, no address had been chosen as the
default in this preference setting). A preference set does not always contain all data requested by a
service side. In Figure 5, there was no ICQ name in the visiting card, so the consent dialogue puts a
link (blue text and underlining) next to this remark, because the user should either Cancel, as the
request does not match the user’s ‘preferences’, or change the preference setting – the latter is done by
checking the box “Unlock data fields and proof to edit”; cf. Figure 14 (in Figure 14 and Figure 15, the
preference settings are called ‘Templates’ rather than ‘PreSets’ as these screenshots come from
another design series; ‘PrivPrefs’ have been suggested for IPV3, and ‘Roles’ were used in IPV1). Note that
it is not only the Name data field that has been opened for editing (or for selection from John
Primeur’s database of personal data; hence the combo box); in addition, new elements have been
introduced on the right-hand side, namely controls relating to the privacy sets: select, create, and save.

Figure 14 Data and proof fields have been put in editing mode

Figure 15 relates the ‘locked’ view to the ‘unlocked’ view when not only data but also proofs are
requested. When the consent dialogue window is based on the content of a privacy preference setting,
there could be several different reasons why the consent form cannot be filled in with the data
requested by the service provider. For data and proofs it could be that no default item has been set,
that data or proofs are missing, or even that the data type is excluded from the current preference
setting or that the proofs acquired by the user are not recognised by the service provider.
Already a mouse-over tool tip on the “What to do” link should explain what the problem is, while a
click on the link could provide more information and state the need to unlock data fields and proofs, or
guide the user directly to further actions such as opening a form to request new proofs as in Figure 16
(for “Proof Portfolio” in the “Personal Data” database, see section 5.7.3). In the ‘unlocked’ view (i.e.,
the edit view), the combo box will contain the ‘what to dos’.
There will be many popup windows when there are many mismatches. On the other hand, filling in
missing data or requesting credentials to prove one’s identity can make the personal data database rich
enough to provide all data for future consent windows, or it may be the case that all
information is already in the user’s system and he only has to pick the right preference setting. At any
rate, if there are deviations from the preference settings, it is not bad that there are a lot of alerts. (The
“I agree” button should be greyed out in Figure 5 and Figure 15 because the data request is not yet
satisfied.)

Figure 15 Two views of consent dialogue with (missing) proofs (= certificates)

Figure 16 “What to do” when missing a proof

5.4.4 Setting obligations for data processing


In IPV2, setting data handling obligations is limited to “delete as soon as possible”. However, the
architecture behind the PRIME Obligation Management is much more capable than this, and an
intriguing question has been whether this capability, which facilitates privacy-friendly service-side
data administration, can be offered directly to the user, in the sense that the user can set obligations and
later check their fulfillment on the service side. Surveys have shown a desire among the public
for such functions (D1.1.a, Part 2). We designed UIs for setting conditions (not for checking
fulfillment, which could anyhow be done automatically) to see whether this part of obligation
management was manageable and appreciated by ordinary Internet users. Mockup-based pilot tests led to an
extended “Send Personal Data?” dialogue window, which was subjected to a usability test with 18
ordinary Internet users working through three short scenarios.

The UI design included the possibility for the customer to set the following ‘obligations’ for the
service provider:
When my data are transferred to third parties to fulfill [Service Provider]’s services:
[ ] Ask each time   [ ] Notify me   [ ] Save detail about this
When my data are deleted:
[ ] Notify me
Your data will be deleted as soon as it is not needed
Moreover, because conditions for data use must be specified relative to the intended purpose for the
data collection, the design included an expansion part in which two other purposes than fulfilling the
service request were included: statistical analyses and special offers, each with a specific data request.
If the user chose to opt in for any of these, the window provided options to set obligations concerning
data retention period and, for the statistical analysis, also notifications and/or logging when data
are used.
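The obligation options listed above can be captured in a small settings structure. This sketch is illustrative only; the field names and defaults are our assumptions (the IPV2 default retention being “delete as soon as possible”):

```python
from dataclasses import dataclass, field

@dataclass
class TransferObligation:
    # Options shown when data are transferred to third parties
    ask_each_time: bool = False
    notify_me: bool = False
    save_detail: bool = False

@dataclass
class Obligations:
    on_transfer: TransferObligation = field(default_factory=TransferObligation)
    notify_on_deletion: bool = False
    retention: str = "delete as soon as possible"  # IPV2 default
    # For opted-in secondary purposes such as statistical analysis:
    notify_on_use: bool = False
    log_on_use: bool = False

# Example: a customer who wants to be notified on transfer and deletion
o = Obligations(TransferObligation(notify_me=True), notify_on_deletion=True)
```

Per-purpose retention periods (for the opt-in purposes) could be attached by keeping one `Obligations` instance per declared purpose.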
Because the test participants responded rather unanimously in favour of using obligation settings, we
decided not to elaborate further on testing mockups. A condensed account of this experiment is found
in Pettersson et al. (2006), where specific problem areas are discussed. To summarise the results,
the experiment showed that Internet users can be interested in such a function; it seemed to give the
test participants a sense of being in control. Moreover, the experiment showed obligation setting to be
manageable by people who had never done it before – however, not every participant would allow
data collection except for the primary purpose, which of course simplifies the procedure a lot.

5.4.5 Consent dialogue aware of purpose–data type matching


To reconnect to the previous proposal on tightly matching data types to purposes, it is obvious that such a
feature must produce longer and more complicated “Send data?” dialogues. From the existence of user
preferences (expressed, inter alia, by the four PRIME predefined PrivPrefs in 5.3) and the existence of
some PRIME matching standard (Table 6), one can derive several cases of matches and mismatches.
Because the user has chosen a specific preference set, named after the user’s purpose, say Shopping,
the service should not request any data outside Shopping data processing purposes (Register an Order,
Delivery, Payment). Frequently, services ask for data for other purposes, such as sending newsletters.
Section 5.4.4 relates a study showing that people can manage to set conditions for such upcoming
requests (but some people do not consent to any such extra requests). Thus, there is no reason to
include them in the preference settings (except, possibly, a simple tick box “Never show any
data requests for purposes not directly related to the service I request.”).
There are six cases then:

PURPOSES CLAIMED TO RELATE DIRECTLY TO THE REQUESTED SERVICE


Service provider says: “Data and purposes directly related to the fulfillment of the requested service:
data types a,b,c for purpose X; data types d,e,f for purpose Y, and data types g,h,i for purpose Z.”
PRIME user side system analyses the data request as concerns both data types and purposes. Four
cases can occur of which three should cause an ‘alert’.
1. SP asks only for data items of data types and purposes recommended by the preference
setting.
2. SP asks for more data types than recommended by the current preference setting.*
3. SP asks for data for other purposes than recommended by the preference setting.
4. SP asks for more data types than recommended by the current preference setting* and for
other purposes than recommended by the current preference setting.
* These might directly reflect the recommendations in the PRIME standard purpose table but the
preference setting might include additional definitions by the user.

OTHER PURPOSES
Service provider furthermore says: “Additional data for additional purposes: data types a’,b’,c’ for
purpose U; data types d’,e’,f’ for purpose V, and data types g’,h’,i’ for purpose W.”
These are typically opt-in ‘services’ only. Two cases are possible, of which one should raise an alert.
5. SP asks for data types within the PRIME standard purpose definitions.
6. SP asks for more data types than in the PRIME standard purpose definitions.
The PrivPrefs are not elaborated to set preferences for such additional requests. They are opt-in, and
the user has to fold out a section of the user interface to see them (the only preference setting could
concern whether or not to display the fold-out control at all, and whether to always show some types of purposes,
such as Marketing information, as is common on web pages).
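The four primary-purpose cases and the two opt-in cases above can be summarised in a small classification routine. This is an illustrative sketch only – the function and parameter names are made up, and the real matching would also have to cover retention periods, proofs, and assurance standards:

```python
def classify_request(req_purposes, req_types, pref_purposes, pref_types,
                     primary=True):
    """Classify a service-side data request against the current preference
    setting. Returns (case_number, alert) following cases 1-6 in 5.4.5."""
    extra_purposes = not set(req_purposes) <= set(pref_purposes)
    extra_types = not set(req_types) <= set(pref_types)
    if primary:
        # Purposes claimed to relate directly to the requested service
        if not extra_purposes and not extra_types:
            return 1, False          # full match: no alert
        if extra_types and not extra_purposes:
            return 2, True           # more data types than recommended
        if extra_purposes and not extra_types:
            return 3, True           # other purposes than recommended
        return 4, True               # both data types and purposes deviate
    # Additional opt-in purposes, checked against the PRIME standard
    # purpose definitions (passed here via pref_types for simplicity)
    return (6, True) if extra_types else (5, False)
```

For example, a request for a card number under the Payment purpose, when the preference setting only lists the name, would be classified as case 2 with an alert.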

In addition, there are other mismatches concerning retention periods, proofs, and Assurance Evaluation
standards. Certain data could also be marked as ‘risky’ to disclose, such as bank account
numbers, or as sensitive, such as health information. The division into specific purposes is also
somewhat space-consuming; see Figure 17.

Figure 17 PrivPref Profiled Shopping informs the “Send data?” about purposes and data types

Not much effort has been put into designing the icon for the PrivPref (it is the face icon and a shopping
cart); it is just a placeholder when designing the window layouts. As the lower left corner indicates, it
is proposed that there should be a step-by-step consent dialogue as an alternative. Such a design would
treat each purpose in a specific pane. This also has the advantage of preparing for multiple-party
agreements, where a service provider gives customers direct access to subcontractors, so that
purpose-specific data sets and concomitant conditions go directly to the data processors; the data
controller then in fact ceases to be a controller, while the subcontractors make direct
agreements with customers and thus become data controllers for each data set disclosed by the user.
There is, however, also a drawback with stepwise disclosure. When the user considers a data
request, he or she is not primarily interested in who the receiver is, but first in what the
requested data are. Some data one could give away to anyone without hesitation. In a stepwise disclosure it
is hard to topicalise the data as done in Figure 5, Figure 15, and Figure 17.
When a privacy preference setting is chosen that contains little data and few or no purposes for data
use, there will be a lot of ‘alerts’. Then the stepwise approach has some advantages, even if it will be
unclear how to select the preferences. This approach is presently only in mockup state, but Figure 18
and Figure 19 show how the user could be guided in filling out or selecting the data.

Figure 18 First four steps in a stepwise disclosure act

Figure 19 might need a scroll bar – the figure shows an expanded view where all content is visible.
Figure 17, too, could be seen as showing ‘too much’. One can ask whether the “I Accept” button should be
outside the scroll area or at the bottom of it, so as to prevent any user from clicking it without having
read the whole data disclosure agreement.

Figure 19 Final page (blown up) stepwise mode of multipurpose “Send Personal Data?”

These multiple-purpose (or rather, from a user point of view, split-purpose or specific-purpose)
consent dialogues were still being discussed when this report was finalised; they have not been subjected
to any user testing yet, but they might appear in IPV3. There have also been discussions on making the
design more CardSpace-like, since it is likely that the solution shipped with Microsoft’s Vista will be
familiar to many people. Naturally, approaching the CardSpace design would require introducing
specific PRIME concepts in that design.

5.4.6 Disclosing anonymous credentials


In a mockup-based test, four test subjects were asked to explain what information was actually given
to a web service which demanded some proof of age when a passport was used to produce that proof.
The data release window was quite clear about the limited information handed over. However, their
answers were: “Passport number”, “Birth year, month, day”, “Personal number and name” (PNo =
birth date + unique digits), and “Personal number and what is in my passport [exemplifies]”.
Naturally, one could have informed them more about the “anonymity” of the anonymous credentials.
In later test sessions when we used the PRIME prototype IPV2 with mockup versions of the data
release window, the test participants seemed to understand encryption and the usefulness of this, but
there was no reflection of the information about anonymous proofs given in the introduction to the
test. Instead, one of them mentioned explicitly that his personal number found in his passport will be
sent, even though the window asking for data release stated the request as Proof of “age > 18” (built
on “Swedish Passport”). However, the same person appreciated using electronic cash (a form of
anonymous money issued by some bank or credit institute) as this procedure did not involve the credit
card number or any other personal data.
Possibly, the data release window could be more explicit as to the process of deriving anonymous
proofs from electronic identity cards:
Send proof of “age > 18” (build proof on parts of the certificate “Swedish Passport”)
Different icons and combinations of icons (e.g. a ‘proof’ icon combined with the PRIME mask to
symbolise ‘anonymised proof’) can perhaps strengthen this. In a usability test in June 2007 with 30 high
school students we asked for icons for ‘a proof’ and ‘a certificate’. A sheet of paper, with various
adornments, was what the majority suggested – possibly a reflection of the icons for certificates that
exist today.
In any event, even if better-designed information will make it possible for people to understand how
little information is sent by an anonymous credential, it has to be admitted that for passports, the
citizenship of the holder is always derivable if the issuing government is derivable from
the anonymous credential derived from the passport credential.
One can conclude that: (1) there is a risk that users think that all data normally visible in the physical
item referred to (i.e., a passport) will also be available to the data receiver; and (2) information
inferred from proof metadata is also a kind of data which is sent by the user’s system to the receiver.
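The misunderstandings reported above come down to the difference between a credential’s attributes and what a proof built on it actually discloses. The following conceptual sketch (no real cryptography; all names and values are made up) contrasts the passport attributes with what leaves the user’s system:

```python
# A made-up passport credential with the attributes visible on the
# physical document
passport = {
    "holder name": "John Primeur",
    "passport number": "12345678",
    "birth date": "1975-04-02",
    "issuer": "Swedish government",
}

def disclosed_by_proof(credential, predicate):
    # The receiver learns only the predicate, its (verified) truth value,
    # and proof metadata such as the issuer -- not the attributes the
    # proof was built on. Note that the issuer alone already reveals
    # the holder's citizenship, as observed for passports above.
    return {
        "predicate": predicate,
        "holds": True,   # assumed to be verified by the credential system
        "issuer": credential["issuer"],
    }

sent = disclosed_by_proof(passport, "age > 18")
```

The test participants’ answers (“passport number”, “birth date”, …) named exactly the attributes that are *not* in the disclosed set.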
When searching a history database such as the Data Track introduced in 3.6 (further discussed in 5.6),
users could be misled into thinking (1) that they can search on more than the history function has actually
recorded, and (2) that metadata found in the records are wrongly recorded or, initially, wrongly
released by the system, which would lower their confidence in the system. Naturally, the assistance
function helping the user to act on the basis of the transaction records could also give information on
these two aspects, but it is probably not reassuring for the user to learn later that
he/she has misunderstood the arrangement with anonymised proofs.

5.4.7 Anonymous email addresses for post-consent notification messages


Finally, it should be noted that several of the customer notification issues that complicate UI designs
of the consent dialogue can be solved if there is a reliable system for anonymous emailing. If the user-
side system can give a ‘call-back’ address with the pseudonym, and this address is hosted by a server
which the user’s system (including a mobile phone) can anonymously and repeatedly query for
incoming messages for all its anonymous email accounts, then allegedly necessary information about
contact details ‘in case of flight cancellation, out-of-stock events, product safety alarms, etc.’ can be
dropped – telephone numbers and email addresses for such things would not be needed. The
anonymous email account would furthermore be spam-free, because service providers would have
to show a (one-time) credential given by the user-side system before the mail server accepts and
stores a message to a specific email account.
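The spam-free property rests entirely on the one-time credential check at the mail server. A toy sketch of that mechanism, with made-up names and no real anonymity layer, might look as follows:

```python
import secrets

class AnonymousMailbox:
    """Toy anonymous mailbox: a message is stored only if it arrives
    together with a valid, unused one-time credential that the user side
    previously issued (e.g. handed to a service provider with a pseudonym)."""

    def __init__(self):
        self._tokens = set()   # unused one-time credentials
        self._inbox = []

    def issue_token(self):
        t = secrets.token_hex(16)
        self._tokens.add(t)
        return t               # given to the service provider at disclosure time

    def deliver(self, token, message):
        if token not in self._tokens:
            return False       # no valid credential: message refused (no spam)
        self._tokens.discard(token)   # one-time: a token cannot be reused
        self._inbox.append(message)
        return True

    def poll(self):
        # The user's system polls anonymously and repeatedly for messages
        msgs, self._inbox = self._inbox, []
        return msgs
```

A real deployment would additionally need anonymous transport (e.g. a mix network) so that polling does not link the accounts to the user; the sketch only shows the credential gate.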

5.4.8 Unresolved issues for consent dialogues


To what extent users will like having a host of visiting cards is not really known at present.
Microsoft’s new operating system, Windows Vista, with its CardSpace, may show this. There is of
course always the question of how different users’ needs can be met: how the owner of several one-man
companies arranges his stack of cards, compared to the one who uses the web mainly for private
browsing and only occasionally makes purchases, and then only in his own name. As noted in 2.1.2,
the Trustguide2 indicated that people in general are not inclined to appreciate several cards.
Using pre-filled preference sets has the advantage of making it impossible for bogus web services
to present PRIME look-alikes of consent dialogues. Only PRIME systems can provoke the user-side
PRIME system to look in the “Personal Data” database and use the data there to fill in the user consent
dialogue with user data. On the other hand, bearing in mind users’ general reluctance to do settings, and their
particular reluctance to fill in personal data on their own computer (which might leak information to
the Internet: D06.1.b; Andersson et al., 2005; Tweney & Crane, 2007), letting them configure privacy
preference settings on the fly has clear advantages. It is also clear that they understand the implications
of the settings if these are made in real situations (Herzog & Shahmehri, 2007). Other means of
avoiding spoofing are therefore desirable, such as putting a personalised photo on every PRIME UI.
However, these concerns have not been in the scope of the HCI work in PRIME and we leave this as
an unresolved issue; it mainly concerns web browsers and not other applications.
Another open question concerns how to inform users about data which are not so obviously personally
identifiable, such as items purchased and navigation history, and about data released by the users themselves but
easily misinterpreted, as was the case for the information content of a proof disclosure (5.4.6).

5.5 Assurance evaluation and linkability estimations


As mentioned in 3.5, it is desirable that the function for giving consent to data use is augmented with
some way to check the trustworthiness of service providers with whom the user is not familiar. We leave
out information on product quality etc., even if web-based listings could in principle be used for
automatic scanning of customer ratings; a similar listing, consumer agencies’ black lists, was included
in a mockup test showing the feasibility of making assurance evaluations.

5.5.1 Test: “Privacy functionality check” at the moment of data release


The problem could be stated as: “What do people want from an assurance check for various types of
web site?” The purpose would be to find out what people think is important to check in various
concrete scenarios, after they have been informed about the larger framework of the PRIME
architecture. It was a bit hard to find a name for this ‘check’ that could be easily understood by
laymen. We settled on ‘functionality check’ after checking (!) with some Internet users.
The kind of ‘functionality check’ we pretended our system to be able to perform consisted of four
parts: checking the company against a list of privacy seals; that the company is not blacklisted by national consumers’
organisation(s); that the company has good data security; and that the company’s system supports PRIME functions.
The last one is motivated by the fact that much of the interoperability needed depends on the services.
Furthermore, a reliable check of good data security depends on the service provider being PRIME-
enabled. The second item – blacklisting – would of course include more than just bad handling of
personal data, but the user would probably like to be warned about all kinds of offensive actions. We
ran the whole user interface in the native language of the participants (Swedish) because the specific
window for the functionality check was quite simple and easy to translate before the test started. For
the Swedish participants we abbreviated the name to just Functionality Check because of the
compounding rules of the Swedish language (‘privacy’ is often translated as ‘personlig integritet’, which
would make a literal translation produce an illegible four-word compound).
The left window in Figure 20 reads (with underlining indicating a link): Privacy Functionality
Check, Your PRIME system has noted the following matches/mismatches with your preferences:
• “Has good seal (privacy seal)” fulfilled
+ User-listed seals (You have not specified any seals)
+ PRIME defaults (current list)
• “Not on any black list” partly fulfilled
+ In your country
+ In provider’s country
– In other EU countries (Austria)
• “Provides tamper-resistant protection of data” fulfilled


+ Hardware-based protection
+ Encryption
• “Supports PRIME functions” fulfilled
If all subcriteria had been marked as not fulfilled, the bar sign would appear in the left column instead
of the “!”. Colour coding was used only on the main entries: Green for fulfilled, yellow for partly
fulfilled, and red for not fulfilled. The window did not include explicit reference to the service
provider being investigated. However, this window was shown either by the user demanding it from
the active browser window or by an automatic interruption if the user had activated a form-filler which
was also available in the active browser window.
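The colour coding per main criterion can be derived mechanically from its subcriteria, as in the tested window: green when all are fulfilled, yellow when some are, red when none is. The following sketch is illustrative only – the data structures and names are our assumptions, not the test implementation:

```python
def rate(subresults):
    """Rate one main criterion from its list of boolean subcriteria."""
    if all(subresults):
        return "green"    # fulfilled
    if any(subresults):
        return "yellow"   # partly fulfilled
    return "red"          # not fulfilled

# Subcriteria results corresponding to the check described for Figure 20
check = {
    "Has good seal": [True, True],
    # Blacklist clear in own and provider's country, but not in Austria:
    "Not on any black list": [True, True, False],
    "Provides tamper-resistant protection of data": [True, True],
    "Supports PRIME functions": [True],
}
report = {name: rate(subs) for name, subs in check.items()}
```

With these inputs, “Not on any black list” rates yellow (partly fulfilled), matching the “!” marker and the partial-fulfilment wording in the tested window.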

Figure 20 Original UI and the explanation drawer translated into English

Scenarios for the test


We used five scenarios, with one known web site and the rest unknown (made-up) ones, including a native
(Swedish) site blacklisted in Hungary and a foreign (Slovenian) site blacklisted in Austria.
It would be interesting to see how much safer a user feels when the Privacy Functionality Check
checks each site. Small-scale tests without real personal data at stake have limited value, but
varying the situations served the purpose of better illustrating different check results for our test
participants.
The first scenario demonstrated for the participants how one could access a form-filler function and
also the Functionality Check. After the (faked) web site visit, the Data Track function mentioned
earlier was demonstrated by showing details of the data disclosure in this scenario (recipient, purpose,
data sent) together with buttons for contacting the service provider. This was only to make PRIME
more intelligible so that the “Supports PRIME functions” check would produce a meaningful result to
the test participants.
The five scenarios were based on the following web sites:
1. Well-known Swedish media shop; all check criteria fulfilled.


2. Unknown Swedish bookshop; blacklisted (Hungarian list); PRIME-enabled.
3. Unknown Slovenian bookshop; blacklisted (Austrian list); PRIME-enabled.
4. Unknown Swedish news service; tamper-resistant protection of data cannot be checked; does
not support PRIME functions.
5. Unknown Swedish gambling site; lacks privacy seal listed by PRIME; PRIME-enabled.
The sensitive information in Scenarios 2 and 3 was credit card data, in Scenario 4 areas of interest (in
combination with the user’s name and email address), and in Scenario 5 – a gambling site – credit card
details.
Summary of test results
Twelve persons participated: six men and six women, 18–45 years old, all of whom used the Internet
daily or regularly.
In Scenario 2, half of the participants submitted the information requested by the faked Swedish book
site, although one decided even before seeing the functionality check report. Interestingly, in spite of
the blacklisting, one participant said: “If they hadn’t had the Functionality Check they would not have
been trustworthy. But there I could see that they were more trustworthy than I thought before.” Three
participants gave away information in Scenario 3; the rest were asked whether their denial
depended on the check results or not. Responses were mixed: foreignness and unfamiliarity played an
important role in their decision.
In Scenario 4, several participants did not think tamper resistance mattered when it only concerned
name and email address, which is information they could have faked if this had been for real. On the
other hand, in Scenario 5 no one wanted to pursue the subscription.
The post-test questionnaire asked the participants to rank four alternatives: “No functionality check”,
“Check result always visible somewhere”, “Check result presented as in the test” (it popped up if the
user tried to use the form-filler when not all criteria were fulfilled), and “Check result visible only if I
chose to view it.” Most participants ranked the design used in the test highest, and all but one person
wanted to have the functionality check (and the dissenting person might have misunderstood the ranking
principle, to judge from other answers in the same questionnaire).
The concept of a privacy seal was a bit problematic for some participants, and the included help function with
explanations was necessary, to judge from a few responses. Only one
person could think of additional information to include – namely, to clearly mark if there is a risk of
marketing mail rather than allude to the seals. Another person remarked that if there is too much
information one cannot digest it. In future tests it may be worth putting in more criteria, such as
“Consumers’ ratings”, and letting participants rank the individual entries or manually select
criteria during each scenario walkthrough.
On the question whether they would have a use for the functionality check (and, if not, why not), only one answered that
she does not purchase anything via the Internet because she is unsure about how secure it really is,
but added that if there were such a program as in the test, she might start to do Internet shopping. All
the others were positive to very positive.

5.5.2 Proposal for “Assurance Evaluation” prototype window


In the work preceding the final version of the integrated prototype (i.e., IPV3), the evaluation called
“Provides tamper-resistant protection of data” was dropped – such technical checks could be provided
by seal providers. Admittedly, today’s seals do not work by the seal issuer checking the approved
companies or products more often than every second year or so. But with electronic means such as trusted
computing platforms, there is definitely the possibility to continuously check that approved companies
keep up to standards. How to interpret such check results might, however, be better left to trusted
bodies such as seal providers rather than letting the evaluation results be displayed directly on
individual users’ computers (3.5). For the future, one might though propose to use RSS feeds about
incidents and security weaknesses to inform users, or at least the users’ PIM systems, so that these systems
can request reports from the data controllers concerned if the faults have involved the user’s data.
Another issue to bring up is how to select an illustrative icon for the function when it is no longer
called “Privacy Functionality Check” but “Assurance Evaluation”. While ‘functionality’ might
reasonably be signified with cogwheels, as has been done in some user tests, the concept ‘evaluation’
does not lend itself so easily to illustration in a typical icon style. A magnifying glass might suggest
‘scrutiny’ but has already been over-employed in today’s user interfaces: the magnifying
glass is used not only for ‘magnify/zoom in’ but also for ‘search’ (though some search functions use
binoculars), and in Microsoft Word also in the icons for “Print Preview” and “Document Map” (the latter
is meant to give an overview rather than a detailed view, yet Microsoft chose a magnifying glass
for this icon too). In a full-blown PRIME world, an icon based on some trust seal icons might be
suggestive, even if the wording “Assurance Evaluation” possibly speaks for itself once the user has received
some negative evaluation reports and has checked out the content of the function.

5.6 Data Track


The history function in the PRIME system, the Data Track, was introduced in section 3.6 and
illustrated in Figure 7. As mentioned, it can be used as a basis for many things. First, however, one has
to ensure the usability of the database as such, so that users really can find interesting information.
As people engage in many transactions, which may involve multiple providers simultaneously, the
implementation of a usable Data Track is difficult from an HCI perspective. Providing users with easy
tools to find relevant data disclosure records is a pertinent task. In PRIME several ways have been
considered as will be discussed in this subsection.
Two search methods are quite straightforward and might appear as the obvious choices: (1) sorting
stepwise by categories, such as ‘Personal Data’ and ‘Recipients (of data)’; and (2) a simple search box.
However, these two approaches seem unsatisfactory because users are unaware of what the system
does, as user tests performed by the PRIME group revealed.
More suitable methods that are currently pilot-tested include: (3) template sentences which put search
boxes within meaningful frames: “Who has received my [drop-down list with data]?”; and (4) a
scrollable transaction track that shows all the records at once. The records are shown in abbreviated
form as small pages stacked along a timeline (see Figure 5). A slider provides the possibility to
highlight an individual page in the stack. In this way, users can browse through the records without
having to understand sorting or to articulate refined search requests. Obviously, this method seems
more appropriate for the beginner, whose number of transaction records will be limited. With an
increasing number of transactions it becomes more and more difficult to find the desired record(s). For
the more advanced user, combinations of methods have to be explored and developed (see also
Pettersson et al., 2006). Recently, Apple presented a similar (but graphically much more
impressive) history function for the file system of a personal computer, to be part of the new
version of Mac OS X. The card stack is dressed in a Time Machine vocabulary. The Time Machine
resembles the Record Slider but is “windowless”, as befits a function encompassing the whole file
system. (A demo is available at http://www.apple.com/macosx/leopard/features/timemachine.html.)
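The two piloted search methods can be sketched in code. The following Python fragment is purely illustrative – the record fields, the example values, and the function names are our assumptions, not the PRIME implementation:

```python
from datetime import date

# Illustrative Data Track records; the real PRIME record format is richer.
RECORDS = [
    {"date": date(2007, 3, 1), "recipient": "books.example.com",
     "data": {"name": "Jan Svensson", "email": "jan@example.org"}},
    {"date": date(2007, 5, 12), "recipient": "forum.example.net",
     "data": {"email": "jan@example.org"}},
]

def simple_search(records, text):
    """Method (2): free-text search over recipients and disclosed values."""
    text = text.lower()
    return [r for r in records
            if text in r["recipient"].lower()
            or any(text in str(v).lower() for v in r["data"].values())]

def who_received(records, data_type):
    """Method (3): the template sentence "Who has received my <data type>?",
    executed as soon as the user picks an item in the drop-down list."""
    return sorted({r["recipient"] for r in records if data_type in r["data"]})
```

Here `who_received(RECORDS, "email")` would list both recipients, while `simple_search(RECORDS, "forum")` would narrow the view to a single record.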
In a preliminary mockup-based usability test, 45 high school students were instructed to answer
questions by searching the Data Track using methods (2), (3), and (4). Only a few persons scored less
than half correct answers, although no preliminary training was given. On a post-test question asking
whether they found the search engine for Data Track easy to understand and use, only 5 said “No”
(interestingly, this did not correlate with a low number of correct answers). The search function in
Data Track has been subjected to other tests, but it suffices here to say that the selection between
template sentences and the simple search box has been simplified further by putting the “Find” button
next to the search box, while the template search is executed as soon as the user selects an item in the drop-
down boxes. This made it possible to remove all the radio buttons. A simple link “List all records” has
also been added.
When Windows Vista and CardSpace were released, the history function in CardSpace was compared
with the Data Track design. The test aimed at 20 participants but was concluded at N = 12 because the
same difficulties with each system seemed to appear for most users. Half of them used Data Track
before CardSpace, and the other half used CardSpace before Data Track. CardSpace’s history function
scored slightly better than Data Track, but the number of records used in CardSpace was smaller,
making it easier to spot the sought object directly. Drawbacks found with CardSpace were: it was hard
for most test participants to understand that they had to search for history in each card; search was not
as advanced as in Data Track; it was hard to understand what had been sent. Drawbacks with the
tested Data Track design were: hard to find information on ‘templates’ (PrivPrefs); hard to understand
that a record can be opened by just clicking on it; cumbersome to get all records; complicated
terminology. (In the PRIME work we have never emphasised recording the privacy preference
settings in Data Track; the essential thing is what data was actually sent, under what conditions, and to
whom.)
On the positive side, test participants seemed to find their way in CardSpace once they finally
understood how it worked; the division into ‘cards’ is easy to understand; and it was nicely designed.
Many found the free-text search in Data Track very useful; it was easy to search both with the search
box and with the template sentences; the function searching the whole history database is powerful;
and it was good that there was a lot of information.
Internet habits did not seem to influence how well one understood the two programs. Eleven persons
liked the idea of using software like the PRIME integrated prototype or CardSpace; 3 preferred
CardSpace and 8 Data Track.
Assistance functions
In the first evaluation round of early prototypes within the PRIME project, the need for at least two
trust-enhancing functions in the Data Track was identified. From section 6.5.2.1 of D06.1.b one can
see the need for a help function to rectify/block/erase data (and also for informing users that they have
legal rights to such actions). From section 6.5.2.2 one can derive the need for concrete help with
frightening records, i.e. when the user does not recognise some recipients and finds them suspicious.
In such cases information on, and help to access, data inspection authorities and consumer
organisations could potentially provide the ‘added value’ that makes PET use the preferred choice.
Figure 21 shows an opening window with several assistance functions that go beyond the normal
scope of help functions in computer applications; it extends the help beyond the system per se and
includes relevant matters in the surrounding world.
Furthermore, some tests have been made on how ordinary internet users would perceive functions that
help them to contact data controllers. We relate one here.
Short mockup test on withdrawal of consent
Ten participants saw UI animations of a user disclosing data aided by PRIME preference settings.
When the animation ended, the participants searched the Data Track and were asked to disallow a chat
forum from further using the user’s email address.
The identity manager itself seems to have gained some respect from the participants. Among the Data
Track transaction records there was an extra record involving a certain ‘watcher.com’ that should not
have been there. The test participants were not worried about this but assumed that the PRIME identity
manager had done what it was supposed to do and had locked watcher.com out from the personal data
repository (watcher.com was not recorded to have received any personal data). But on a direct
question whether PRIME could raise their trust in the Internet and e-commerce, the test participants
said they would rely on company brand (and experience) as the decisive factor. They did not seem to
realise that this makes them vulnerable to ‘spoofing’ attacks where web pages are made to look like
pages from reputable companies (for a recent treatment of this steadily increasing problem, see
Dhamija et al., 2006).


Figure 21 Four buttons for quick access to assistance functions

As many as 8 of the 10 participants said that the preconfigured emails would facilitate withdrawal of
information; one of them held that it increased the trustworthiness of the identity manager, but seven
regarded it more as an extended service of a kind that is just what one should expect from this kind
of program. 3 of the 8 saw no need to use the aid function as they declared they only buy from
companies they trust. Interestingly, in contrast to the eight more or less positive respondents, two
persons were of the opinion that the withdrawal function decreased trustworthiness, but from their
answers it is clear that they referred to the e-services:
• One held that if it was possible to withdraw some information, then the company had
requested more information than necessary in the first place.
• The other person could see the value of correcting incorrect information, but added that it
should not be possible to fiddle and deceive by altering already established contractual
agreements.
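A preconfigured withdrawal email of the kind shown to the participants might be generated from a Data Track record roughly as follows. This is a sketch only; the record fields, the address, and the wording are illustrative assumptions, and real legal wording would have to be localised and vetted:

```python
def withdrawal_email(record, user_name):
    """Draft an email asking a data controller to stop using disclosed data.
    A production version would use vetted, localised legal phrasing
    (cf. the rectify/block/erase rights mentioned above)."""
    items = ", ".join(sorted(record["data"]))
    return (
        f"To: {record['contact']}\n"
        f"Subject: Withdrawal of consent\n\n"
        f"On {record['date']} I disclosed the following data to you: {items}.\n"
        f"I hereby withdraw my consent to further use of this data.\n\n"
        f"{user_name}"
    )

# Hypothetical record of the chat-forum disclosure from the test scenario.
msg = withdrawal_email(
    {"contact": "privacy@chatforum.example", "date": "2007-05-12",
     "data": {"email": "jan@example.org"}},
    "Jan Svensson")
```

The user would review and send the draft; the point of the aid function is that the contact address and the list of disclosed items come from the Data Track record, not from the user's memory.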
In conclusion, even if assistive functions could be perceived as a facilitator by many people, many
would probably not recognise the need for such functions as they regard themselves as using services
from trustworthy providers only. This could indicate that there is a great need for assisting people who
have been fooled by deceptive web sites. Another problem is posed by people viewing the mere
possibility to alter data as an indication of non-trustworthiness. Such people would dismiss providers
and technologies that allow user control of user data. This is a didactic problem rather than an
infrastructural one, but nevertheless a reaction that must be accounted for.
Simulations
It would be interesting to compute Data Track simulations of linkability based on personal data and
not only on pseudonyms. These could take the form of simulation games such as “If company A and B
pool their customer databases, what can they then infer about me?”
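Such a pooling game could be approximated by merging two recorded disclosure sets whenever they share an attribute value. The following sketch is a toy model under assumed data structures, not a full linkability analysis:

```python
def pooled_profile(sent_to_a, sent_to_b):
    """What companies A and B could jointly infer about the user if they
    pool their customer databases: if any disclosed attribute value is
    shared, the two partial profiles become linkable and merge."""
    shared = set(sent_to_a.items()) & set(sent_to_b.items())
    if not shared:
        return None  # no common attribute => profiles not trivially linkable
    merged = dict(sent_to_a)
    merged.update(sent_to_b)
    return merged

# Hypothetical disclosures taken from two Data Track record sets.
a = {"pseudonym": "bookworm17", "email": "jan@example.org", "city": "Karlstad"}
b = {"email": "jan@example.org", "name": "Jan Svensson"}
```

With the shared email address as link, the pooled profile connects the pseudonym used at A with the real name given to B, which is exactly the inference the simulation game should make visible to the user.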


Other linkability computations can be based on using available resources for computing the likelihood
that someone else has one’s name within a given district of a given city, etc. Naturally, such a
computation would be most beneficial during a data release action, but having the possibility to do it
in the Data Track together with other linkability computations may allow the user to better understand
linkability risks. Clauß (2007) has made a thorough analysis of linkability, including usability options.
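The name-likelihood computation can be sketched with a simple model: if a name has relative frequency f in the population and the district has N inhabitants, the chance that at least one other inhabitant shares the name is 1 − (1 − f)^(N−1). This formula is our illustrative assumption and is not taken from Clauß (2007):

```python
def p_name_shared(frequency, population):
    """Probability that at least one of the other (population - 1)
    inhabitants carries the same name, assuming names occur
    independently with the given relative frequency."""
    return 1.0 - (1.0 - frequency) ** (population - 1)

# A common name in a district of 10 000 people is almost surely shared,
common = p_name_shared(0.001, 10_000)
# while a rare name leaves the user far more identifiable.
rare = p_name_shared(0.000001, 10_000)
```

The complement of this probability is a rough measure of how identifying the name alone is within the district; a real implementation would draw the frequency from census-style resources.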

5.7 PRIME Identity Manager


5.7.1 Access to the administration functions
In the Microsoft Windows desktop UI, the stand-alone PRIME panel can be incorporated as a toolbar
in the taskbar. It can also be a drop-down menu in the browser, e.g. at the icon indicating the current
PRIME privacy setting, which should be close to the web address field since it is unique for each
browser window (the setting may be interpreted relative to the visited web site).
Figure 22 shows one conceivable distribution of controls. The privacy settings are in an expansion
pane, as the setting of these might be done mainly ‘on the fly’ in “Send Personal Data?” rather than in
advance of the utilisation of the privacy settings. But the arrangement of buttons in this panel, and in
an operating system UI toolbar, might be totally different once use patterns are established.

Figure 22 A user interface to manage the IPV2 manager (fold out view)

For the drop-down menu in a web browser, there is probably no need to hide any of the controls as
the menu is not visible until the user calls for it. Rather, such a menu might contain additional options
relating to changing preference settings (i.e. role/PreSet/PrivPref) and to continuing in the present
browser window or opening new windows at the same web site (pseudonym changes may thwart
shopping cart content, which could be used intentionally by a registered customer to browse without
letting the navigation pattern he creates be linkable to him; Figure 23).


Figure 23 Old mockup of changing to anonymous browsing when browsing as registered customer

5.7.2 Administration of the user side identity manager


Besides the privacy preference settings, there should be a possibility for the user to ‘customise’ the
PRIME system itself, that is, the “Send data?” dialogue and the PRIME administration UIs. These
PRIME Settings should preferably cover the following areas:
Language – the language (including date format, etc.) used in all PRIME UI components. This would
probably by default be derived from the operating system language settings, but there are reasons to
allow a specific language setting for the PRIME user side system. All requests and claims from
PRIME-enabled service providers (data controllers) will be checked by the PRIME user side system;
the service providers’ statements will consist of standard messages and should thus appear in the
language which the user prefers for such legally binding contracts (section 8.3).
Country – for the legal and other assistance functions in Data Track the user would benefit from being
directed to the most relevant external guidance. (The Assurance Evaluation can include evaluators
from foreign countries, so the selection here in PRIME Settings is merely to give the assistance
function the right legal and organisational base.)
Simple / advanced dialogues – since many PRIME concepts are rather complex, both beginners and
the most advanced users may want long explanatory notes tucked away in side-windows and tool tips.
The beginner might furthermore want to get rid of too many options. The “Send data?” dialogue is a
special case and could be treated as such; see immediately below.
“Send data?” – in any dialogue box acting on behalf of the service provider by asking for data and
permission to use the data for specific purposes, there is a lot of information that potentially should be
shown to the user. There could be several candidates for what the user could set: stepwise declaration
of data requests per purpose vs. a complete overview immediately; Context Menu vs. dialogue
window vs. browser side-bar; whether automatic disclosure should be possible to set in privacy
preference settings; whether individual items should need confirmation (check boxes); et cetera.
Icon themes – when a new privacy preference is created, we have assumed that the user assigns an icon
to it and not only a name. For the pre-defined preference settings (named “PRIME …”), the series of
icons in Figure 4 is clearly made up of combinations of ‘primitives’. At the same time, it could be nice
if the ‘self’ icon (the face) could be selected by the user (a stylised face looking like him/her or a
miniature photograph); see section 5.9.1.2.


Background pictures – to tell real PRIME UIs from faked ones, it might be necessary that the user
installs a picture that the user side PRIME system puts as a background to all its UIs.
Style – for legibility reasons and for personal satisfaction, the user should be able to choose font size
(incl. icon size), contrast, colouring, etc., and the user interface should be interpretable by text-to-
speech converters for blind users.
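The settings areas above could map onto a simple configuration structure with operating-system defaults. The field names below are our own illustration, not the prototype's actual interface:

```python
import locale
from dataclasses import dataclass
from typing import Optional

@dataclass
class PrimeSettings:
    """Sketch of the PRIME Settings areas discussed above (illustrative)."""
    language: Optional[str] = None    # None = follow the operating system
    country: Optional[str] = None     # base for legal assistance functions
    advanced_dialogues: bool = False  # simple dialogues by default
    icon_theme: str = "default"
    background_picture: Optional[str] = None  # anti-spoofing background
    font_scale: float = 1.0           # legibility / accessibility

    def effective_language(self) -> str:
        """An explicit setting wins; otherwise derive from the OS locale."""
        if self.language:
            return self.language
        os_locale = locale.getdefaultlocale()[0]  # e.g. "sv_SE"
        return os_locale.split("_")[0] if os_locale else "en"
```

The `effective_language` fallback mirrors the recommendation that the language would "probably per default be derived from the operating system language settings" while still allowing a PRIME-specific override.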

5.7.3 Access to administration of data and preference settings


The concept of ‘personal data’ covers data about the physical person, such as name, address, etc.,
as well as credentials to prove identity (or parts of one’s identity), and also data pertaining to
economic means, such as credit card numbers, other accounts such as Paynova and PayPal, and
furthermore bonus points/vouchers. Anonymous payment means such as eCoins can also, for
convenience’s sake, be put among the personal data related to economic means; and since they would
be based on some system of credentials, they might be traceable if the banks construct them so, which
indeed makes them personal data.
Several prototype-stage systems have used the word ‘wallet’ for all the data managed by the system.
However, because the payment means form quite a distinct group, we have used ‘PRIME Wallet’
for these means only. This also includes eCoins, which are ‘pure’ credentials rather than proofs of
some other data. Other credentials constitute the group ‘Proof Portfolio’.
Because all this information is user data, the three groups are accessed from the same control,
‘Personal Data’, in Figure 22. This deserves some comments:
• One could argue for some specific name, such as ‘Data Pool’, because the ‘ordinary’ data is
also called ‘personal data’. Some differentiation may be included in the final version of IPV3,
but informal tests with paper mock-ups performed in preparation of IPV2 have not indicated
any conceptual problems for non-initiated persons.
• This treatment of the terms also makes it possible to have the same symbol for all three
groups, namely the ‘face’ icon on the button ‘Personal Data’, which in turn makes it possible to
play with that icon for the various degrees of anonymity preferences irrespective of what data
is hidden by these preference settings (Figure 4; we do not champion the icon itself – it has
been criticised by test users as looking childish).
• This treatment of the terms demands a compromise in the user interface, as Figure 24
demonstrates: there the Proof Portfolio ‘link’ has been clicked after first clicking the Personal
Data button in Figure 22. The ‘link appearance’ disappears from the text “Proof Portfolio” and
the text is bolded. Tabs are used for various sub-lists of proofs, and so also in the Wallet view,
which is why superimposing a tab structure for the main groups Data, Portfolio, and Wallet
would make the user interface a little less navigable.
A special icon for proofs can be used to tag data which have a proof; this icon can then also be used
in “Send data?”. This is space consuming. Clicking such an icon in the Personal Data or PRIME
Wallet views shifts the window to the Proof Portfolio view, displaying the corresponding proof row.
A search in any of the three views of Personal Data can be done either internally or, for an identified
data item or type, in the Data Track, or in order to obtain a list of the privacy preference settings where
the item or type is included.
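This cross-linked search – from an identified data item to the Data Track records or preference settings that include it – might look as follows; the data structures are illustrative assumptions, not the IPV2 implementation:

```python
def records_mentioning(item_value, data_track):
    """Jump from an identified data item to the Data Track records
    in which that value was disclosed."""
    return [r for r in data_track if item_value in r["data"].values()]

def prefs_including(data_type, privacy_prefs):
    """List the privacy preference settings ('PrivPrefs') whose
    disclosure templates include the given data type."""
    return [name for name, types in privacy_prefs.items() if data_type in types]

# Hypothetical Data Track and preference-setting contents.
track = [{"recipient": "shop.example", "data": {"email": "jan@example.org"}},
         {"recipient": "forum.example", "data": {"nick": "bookworm17"}}]
prefs = {"PRIME Anonymous": set(),
         "PRIME Returning Visitor": {"nick"},
         "PRIME Customer": {"name", "email"}}
```

Starting from the email address, the user could thus reach the record of its disclosure to `shop.example`, or see that only the “PRIME Customer” setting would release it.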


Figure 24 Proof Portfolio view within Personal Data window

5.8 Animations and movements


5.8.1 Utilising movements in spatial relationships
Spatial relationships are not confined to the TownMap paradigm – for a spatial relationship there
only need to be two distinct loci. Instead of absolute positions as in the examples in 5.2.2, an
alternative is to use symbols to denote the meaning of each locus, allowing for the use of arbitrary
positions (‘symbol’ here means a text label and/or icon; conventional parts of a user interface like
window edges play more on relative position). It should be noted, however, that if this latter, more
arbitrary identification of locus content is used, people will need time to recognise and interpret the
two symbols.
The last caveat recognises that locus meaning can be expressed relative to what has been graphically
presented before. Positions will start to bear meaning because of the use those positions have been put
to, just like a word or a picture can start to carry a (new) meaning, often unintentionally, because of
usage. One can ask if locus meaning can instead be determined relative to natural relations, to prevent
the just-mentioned increased cognitive load for arbitrary placement of identifiers before a position has
acquired a meaning. The PRIME HCI group has no empirical data to support this, but the effect would
be as follows. In the TownMap design, the home symbol and its data icons were placed ‘near’ the
user, i.e. along the bottom of the screen. When the user takes a data piece and moves it out, he moves
it away from himself. Along the same lines one may ask if right-handed persons would find it more
natural to move personal data from the lower right-hand corner of the screen than from the left-hand
corner. Presumably, for the TownMap example, the standard of a UI with the home symbol to the left
will override any handedness effect. The question of the locus of the ‘user’ on the screen is important
for a graphical resolution of the problem that users do not really see the difference between ‘their’
PRIME-enabled browser and web services.
It is important to note how movements can draw on spatial relationships, but also to note for what
purposes movements can be employed. Below, three cases are differentiated: movements used for
information animation, movements for users’ agreements, and movements for users’ statements.
For informing: As an example we can take the disposal of documents in Microsoft Windows. When
the user right-clicks a file icon and chooses Delete, and then answers Yes to the question whether he
wants to send the file to the Recycle Bin, Windows shows a small animation in a pop-up window with
a folder symbol to the left and a file icon moving from it to a replica of the Recycle Bin to the right
(Figure 25). This is a ‘standard show’ and does not change to hint where the file icon and the Recycle
Bin icon are on the screen (so in this way it is clearly different from the case where the user drags and
drops the icon on the Recycle Bin).

Figure 25 MS Windows animation of deleting action

Even if a town plan is not directly depicted, movements can be utilised as long as the underlying
graphical representation is not completely abstracted away from all two-dimensionality. For instance,
connecting icons for various communication partners by some street grid will make it possible to
illustrate desired or actual communication paths, but also to illustrate possible side-channel attacks to
warn the user. Animation of the communication paths can highlight individual paths and show data
flow directions. Many users responded positively to a simple animated demonstration in the TownMap
preference test even if they were negative about the graphical design (section 5.2.2).
In the two examples just given, animation could possibly be replaced by arrows, but movement is
known to attract attention. Flickering also attracts attention, so movement as such is not needed to
grab attention; movement is, however, possibly a more pleasant form of animation than flickering.
For giving consent (confirmation): The possibility to let the user act two-dimensionally might prevent
the automation of behaviours that makes people accidentally click OK and Agree buttons without
really having intended to (Raskin, 2000). This was called DADAs – Drag-And-Drop Agreements – in
D06.1.a, where it was also noted that one can construct this game such that the user really has to pick
not only a predefined set of data symbols – which would be quite like clicking ‘Agree’ on a pop-up –
but the right data symbols, and furthermore drop them on the right
receiver symbol. Thereby, the system can to some extent check that the user has understood the
request. (ToolTips displaying the specific data content for each data icon can accompany the drag-and-
drop actions.)
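The system check described above reduces to a small comparison: the DADA counts as informed consent only if the user dropped exactly the requested data symbols on the requesting service's receiver symbol. A sketch under assumed data structures:

```python
def dada_confirms(request, dropped_items, dropped_on):
    """A Drag-And-Drop Agreement (DADA) counts as consent only if the user
    dropped exactly the requested data items onto the icon of the requesting
    service. Unlike a bare 'Agree' click, a wrong selection reveals that the
    request was not understood."""
    return (dropped_on == request["recipient"]
            and set(dropped_items) == set(request["items"]))

# Hypothetical data request shown in the "Send data?" dialogue.
req = {"recipient": "shop.example", "items": {"name", "email"}}
```

Dropping too few items, extra items, or dropping on the wrong receiver symbol would all fail the check and prompt the user again rather than silently completing the agreement.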
(Not) for stating conditions: The system check mentioned in the last paragraph requires that the
information has already been requested by the service provider, so that the drag-and-drop action really
is an act of confirming and not an act of stating conditions. Drag-and-drops can be performed by
mistake; hence, if they are used to state the conditions of an agreement, they are not as secure as
drag-and-drops for agreements and would need a last confirmation click, just as in normal,
‘click-based’ interaction a final confirmation is sought by requesting yet another click from the user.
For minor statements, which can occur within an agreement ‘action’, one might avoid an extra
confirmation click, as when the user has selected his VISA credit card rather than his MasterCard. In
this case it is hard for the system to know whether this is really what the user wants if he has not
previously selected one of the cards for the current privacy preference setting – on the other hand, if
the user has not specified which data items match the credit card data type in this preference setting, it
is probably not detrimental to use either card.
5.8.1.1 Unresolved issues for the employment of spatial relationships
As mentioned above, the question of the locus of the ‘user’ on the screen is pertinent for a graphical
solution of the problem that some users do not really see the difference between ‘their’ PRIME-
enabled browser and web services. The position of the user-side PRIME system in such a graphical
representation may be important for communicating the PET point to the user.
Furthermore, it is an open empirical question whether drag-and-drop actions – properly designed –
can reinforce users’ feeling for the distinction between user side and services side (this question
concerns both DADAs and drag-and-drops for stating wishes).

5.9 Icons
5.9.1.1 Use icons with care
Usability tests (section 6.4.1 of D06.1.b) of icons for ‘roles’ (i.e., PrivPrefs) showed that users may
verbalise facial icons very differently even though they understand how to use them. Furthermore,
when test subjects were allowed to pick face icons and other icons of their own choice, there were
often competing preferences between subjects. Interestingly, even though some face icons were
mainly associated with criminals, this did not prevent people from selecting such icons.
Nevertheless, the use of icons must be handled with care, especially when different kinds of ‘masking’
come into play, as they necessarily do when anonymisation and pseudonymisation are the core
functions of a system.
5.9.1.2 User can select icons for data sets
If icons are used to ease users’ recognition of datasets and privacy options belonging to the users
themselves, it is recommended that users have the ability to select the icons themselves. For pre-
defined settings, such as PRIME Anonymous, it might however be advisable to keep the predefined
icon, as this icon might be used in instruction texts and videos and play on other system icons. How
combined icons, such as the one for PRIME Returning Visitor, will be made if the user can select one
of their parts (such as the face which has the meaning “me” in PRIME) is a delicate UI programming
issue.
5.9.1.3 Replace verbal references to icons with the icons themselves
Further, when icons are used, even for pre-defined privacy options, it is recommended that verbal
references to these icons are not made by referring to the graphical appearance of these icons, because
the icons may be replaced. This holds also for icons not intended for user selection – the icons may be
changed by application providers at a later stage without the full user interface being searched for
possible textual references to these icons. However, because users find such concrete references
useful, incorporation of the very icon in the text is recommended (this must be done dynamically if
the user is supposed to select the icon, but dynamic linking facilitates updating by application
providers). The text must also categorise the icon in conjunction with its appearance in the text, for
two reasons: 1. the user might have forgotten in what circumstance the icon is used and needs to be
reminded; 2. if the user is speaking to a support service, he must be given words with which to
explain. If space permits and the text is still intelligible, possible denominations of the icon (or of
what the icon stands for) can be added too; but the icon and that name must be explicitly linked
somewhere in the user interface.
Let users choose whether the icon and/or the name of a function, dataset, etc. shall be used. Note,
however, that not all users will be able to utilise this option.
5.9.1.4 Careful use of colours and fine details
An icon, once adopted, may appear in different sizes depending on the situation. The same icon may
even be transposed between a computer and the small-screen handsets of mobile phones. Fine details
should therefore be avoided. It should be noted that the photo-realistic icon style nowadays employed
in Windows and Mac OS may cause trouble. “This level of details serves only to distract from data and
function controls. In addition, although it might, in some instances, make sense to render in detail
objects people are familiar with, what is the sense of similarly rendering unfamiliar objects and
concepts (for example, a network)?” (Cooper & Reimann, 2003, p. 235)
Likewise, colour is not reliable across LCD displays, and manuals describing the icons may be
photocopied in black and white. ‘Traffic light’ triplets should be shown vertically.
Colours may have competing meanings in the different situations where an icon can appear. In one
instance colour may be an easy way of telling icons apart, but in another situation, where light colours
are needed to alert the user, a yellow icon may attract unnecessary attention. It cannot be denied,
however, that colour is a very effective means of marking things.
For colour-blind people, there must be reliable alternatives to colour signals. For an example, see the
alternative disclosure setting icons presented in D06.1.a, Figures 17 & 18 (combined into one in
Figure 26; usability test data for normal-sighted people in D06.1.b, section 6.5.1). Note especially that
combination of signs into a new icon is possible even if superposition is not employed. A colour is
easy to add to a black-and-white icon to indicate a shifted meaning, but juxtaposition is also a
workable way.

Figure 26 Three eye icons to alternate with traffic light spots

5.10 Terms
The prototypes in PRIME have not gone through repeated usability tests, and we add no conclusions
here on the various terms and phrases appearing in the previous sections. Notably, to facilitate intra-
and extra-project communication, English has been used. A user-centric approach would, however,
demand a more pluralistic approach, as noted in the concluding section of the chapter on ontologies:
see section 8.3.


6 Rationale behind the CeL APV2 user interface


For eLearning and collaborative tools, the privacy questions do not (only) arise from the service–
client communication but from the peer-to-peer communication. In order to be able to collaborate,
people must have some information about other people (to find collaboration partners, and later to
recognise these partners by their pseudonyms). Another aspect is that tutors must have some
information about the learning progress of class members in order to be able to assist them
individually. Tutors and peers can collect information over time, and this can result in a user’s true
identity being more or less revealed. Within the PRIME project, a privacy-enhanced Collaborative
eLearning (CeL) environment has been developed that tries to avoid these problems. The CeL
prototype is based on the concept of workspaces representing different areas for working, as known
from real-life situations.
CeL allows management of partial identities (pIDs), that is, of data sets that only partially describe the
user. Intra-application partitioning (IAP) is crucial and is paralleled in the web scenarios in previous
chapters by identity management that depends on each individual service provider and not only on the
web browser used. The management includes two aspects:
1. Continuous privacy awareness should enable users to assess their current privacy state and,
therefore, motivate them to apply IAP at all.
2. Reasonable support for users in applying the provided techniques is necessary in order to
achieve a rational use of them. This is the task of the context-aware Decision-Suggesting
Module, DSM.
Some important UIs have received special designs for CeL: the PreSet manager, “Send personal
data?”, the Data Track, and icons.
Instead of managing PreSets, it is necessary to select a partial identity within the CeL AP. The
DSM has the task to support the user in this selection; particularly, it evaluates the current context
described by a number of features and generates a suggestion for the user regarding the decision under
which partial identity an initiated action should be performed. The DSM generates this suggestion
based on user-defined rules. The management of these rules can be compared to managing PreSets and
is supported by the DSM configuration tool (Figure 27) which provides the following possibilities
regarding the configuration:
1. Select the workspace to be configured. By default, the active workspace is selected but it is
also possible to make configurations for all workspaces.
2. DSM Configuration Button shows the current configuration for this workspace.
3. Select Mode: The user can select between “Normal Mode” showing simplified configuration
settings and “Expert Mode” for detailed settings.
4. Supported aspects: The user can select the aspect he would like to configure by clicking on
the respective tab. Currently, the level of partitioning (e.g., using different partial identities
within different workspaces) and recognition by other users (necessary for collaboration) are
considered.
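As an illustration, the DSM's rule-based rating of partial identities could work along the following lines. This is a minimal sketch with invented rule, context, and pID names; it is not the actual PRIME implementation.

```python
# Hypothetical sketch of the DSM's rule-based rating: each user-defined rule
# names context features it matches and contributes a score to one pID.
def rate_pids(context, rules, known_pids):
    """Return pIDs ordered by rule-based rating; unmatched pIDs score 0."""
    scores = {pid: 0 for pid in known_pids}
    for condition, pid, weight in rules:
        # a rule fires when all of its feature constraints hold in the context
        if all(context.get(k) == v for k, v in condition.items()):
            scores[pid] = scores.get(pid, 0) + weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative rules: (context condition, partial identity, weight)
rules = [
    ({"workspace": "Office"}, "pid-office", 2),
    ({"workspace": "Office", "module": "chat"}, "pid-office", 1),
    ({"workspace": "Private"}, "pid-anon", 3),
]
rated = rate_pids({"workspace": "Office", "module": "chat"},
                  rules, ["pid-office", "pid-anon"])
# the highest-rated pID comes first; in the dialogue it would be
# highlighted in orange
```

The rated list corresponds to the ordering shown in the “Send personal data?” dialogue (item 7 above).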

Final version 1, dated 01 February 2008 Page 77


PRIME D06.1.f
Privacy and Identity Management for Europe

Figure 27 DSM Configuration Tool

Figure 28 shows the adapted “Send personal data?” dialogue. Changes have become necessary since
the user needs support in selecting a partial identity instead of selecting data to be sent. The dialogue
includes:
1. The dynamic DSM Configuration Button signals the current settings regarding the context
management.
2. The introductory text informs the user why the dialogue is started.
3. This section provides information about the requested data.
4. This section presents information about the initiated action (in particular, in which workspace
and functional module the action was initiated) and about the degree to which the provided
data might be known to other users.


5. The section “Select partial identity” starts with an introductory text that helps the user
understand the content of that section. Some phrases are highlighted, which means that more
information is presented in separate windows.
6. Via the “Create” button the user can generate a new pID. If the DSM suggests creating a new
pID instead of using an already existing one, the button is highlighted orange.
7. This section shows the rated list of pIDs. The rating is visualised via the scales on the left
hand sides of the Chernoff Faces (cf. below) representing the partial identities. The pID with
the highest rating is highlighted in orange. The user can have a look at the attributes
already assigned to a specific pID using the button “attributes”.
8. This section provides information about the suggestion generated by the DSM.

Figure 28 CeL-adapted “Send personal data?”


More changes will be needed in the future. As one test subject said after a session: “The application is
difficult to use for first-time users. The icons do not show what they mean and what one is supposed to
do. It is complicated. Moreover, the “Send Personal Data?” window was confusing, and had a bad title
that does not tell me what to do – should I create an identity or what? There was a lot of information
about different things. I do not understand even though I read. Regarding the title, it feels like the title
does not correspond to what I am supposed to do. If I am supposed to create an identity the title should
tell me so. I think the text in “Send Personal Data?” is written to the system and not to me as a user.
The text is technically described from the system view. I don’t really understand the purpose with the
identities, and why should I want to be anonymous? It would be better if one was able to choose if one
wanted to be anonymous or not. This manner is just unnecessary and results in much workload.”
Data Track is conceptually integrated in the Information Centre but presently the Information Centre
only contains information about the user’s alias and the online state of his pIDs and of pIDs of other
users.
The following Icons are introduced in the CeL AP:
Icons in the style of Chernoff faces (Chernoff, 1973) represent relevant partial
identities. Particularly, they represent dynamically relevant features of these partial
identities. Within APv2, the following features are represented:
The colour of the face encodes the pID’s online state: the face is yellow if the pID is potentially
visible to other users, because awareness objects are generated and distributed to other users interested
in getting this information, and grey if not.
The style of the alias (“Office” in the example) encodes the administrative role of the pID: bold is used
for “owner”, italic for “guest” (as in the example), and plain for “participant”. Future versions will also
provide information about the degree of knowledge (presented by the shape of the eyes), former
communications (the shape of the mouth), the workspace in which the user acts (the margin of the
face), the functional role (by an additional symbol), existing notes or delivered additional information
for support of collaboration (also by additional symbols). More details can be found in Franz et al.
(2006)15.
The DSM configuration button allows starting the DSM configuration tool; the
combination of the CeL Chernoff Face representing the awareness icon and, thus,
recognition necessary for collaboration, and the PRIME mask representing privacy
aspects (pseudonymity) is used in several places. The colour of the face indicates
whether the DSM is switched off; the spots represent the current
configuration (green spots represent settings regarding the level of partitioning – the more spots, the
more fine-grained the partitioning; blue spots represent configurations regarding recognition – only
one spot means that the user wants to be recognised in the present workspace only while three spots
mean that the user wants to be recognised in the whole BluES’n system).
Finally, one can mention that it is difficult to reserve a large area for the course content in the
workspaces when identity management and privacy information also compete for screen space. In the
present design of the prototype, ample space is dedicated to these secondary tasks
as they belong to the core concepts of the prototype. See PRIME deliverable D04.1.b.1. We do not
include figures on the ordinary workspaces here. They do, however, contain several innovative
interactive features, but these are not directly related to privacy awareness and management.

15 Also in a document by the BluES Team: Handbook for BluES’n. 21 September 2007.


7 Rationale behind the LBS APV2 user interface


Mobile phones make it possible to provide information that depends on where the user is, even if the
user is not aware of his location. The mobile phone subscription makes it possible to charge the user
for such services. However, it is also a sensitive question which infrastructure or service providers
should have access to which information about the users. In the second version of the Location-Based
Service application prototype, LBS APV2, the use case is pollen warning:
• John Primeur (a mobile phone user) has a hazelnut pollen allergy
• He wants to be alerted before entering a region with such pollen
• Therefore he subscribes at the application provider (AP) for a pollen warning service
• He configures his allergy profile and permits access to his location at his mobile operator
(MO)
• When he enters an area where the hazelnut pollen concentration is high, he gets a warning by
SMS
This exemplifies a push service, where the service provider contacts the user whenever a relevant
external event is detected, based on his prior configuration of this service. This contrasts with the
pull-service LBS scenario used for LBS Prototype Version 1 (D04.1.a.3), where the user initiated each
service delivery by requesting information about the nearest pharmacist.

Figure 29 Location Intermediary (LI)

To address the privacy issues associated with tracking a user in the LBS scenario, a location
intermediary (LI) is introduced as a decoupling entity (see Figure 29), allowing the user to employ
temporary pseudonyms when communicating with services (for more details see D04.1.b.3.8). Privacy
enhancements by PRIME in the push LBS scenario include:
1. AP does not know John’s allergies
2. AP neither gets a movement profile nor the location
3. MO does not know the services John uses


4. LI does not know John’s allergies


5. “PRIME light” on user side:
a. Joining of pollen semantics and LI profile settings
b. Creating user pseudonyms for communication with LI
c. Policy management: create access policy at MO and pass it to LI
d. Distributed Data Track (cf. section 3.6) at PRIME light console
Privacy in location-based service scenarios entails several user interface design problems that are
addressed by the prototype. A short overview will be presented here:
• Limited device capabilities: The smaller screen size, combined with the limited input devices
and hardware performance of mobile devices, means that most user interface designs provided
in the context of the PRIME framework are not deployable on mobile devices out of the box.
The LBS prototype addresses this issue by adapting PRIME UI concepts where possible (e.g.
in the case of the PRIME user side console, of which the Data Track is displayed in Figure 31).
In cases where an overly complex UI on the mobile device would have been necessary, the
application prototype resorts to custom solutions tailored to the scenario, trying to minimise
the privacy impact.
• Visualisation of involved parties: To make location PETs accessible to the end users and thus
deployable in a real-life setting, it was necessary to map the complex server-side deployments
to the user interface. An icon for quick identification of the communication partner is provided
at the top of all subscription screens (see Figure 30 and Figure 31).
• Consent and location disclosure: A central point of the LBS scenario from the legal point of
view is the user’s consent when the MO hands out his location information to third party LBS
providers. The implementation has to guarantee that the user’s consent has been given under
the applicable legal parameters (e.g. the application has to make sure that the user has been
given enough and correct information to assure his informed consent). This is a requirement
for legal compliance of the system, a key requirement to do business in the area of LBS. As an
additional complexity, in the push service scenario, the location disclosure is initiated without
user interaction, requiring some means of pre-emptive consent. For this, the user may
configure time-based restrictions, which offer a simple yet effective way to restrict access to
his location information. The full legal policy is easily available to users (see Figure 30), but
hidden by default (again, see Figure 30), so it does not obstruct the main workflow. The user
is asked for consent in a fashion that should be manageable yet effective. The concept “giving
consent to being tracked” is understood by users.
• Dynamic personal information: Unlike classical personal information like name or birth date,
location information changes arbitrarily, and is updated dynamically, without user
intervention, in the context of a push service. This poses new challenges to identity
management systems, and specifically to the UI, as the prototype tries to conform quite
closely to PRIME principles and concepts. For example, the Data Track will fill up with
localisation events quite rapidly. To address this, filters to hide periodically updating events
were implemented (see Figure 31). This also exemplifies the basic principle of adopting
PRIME concepts without breaking them: The PRIME Light Console is still very similar to the
one used in full PRIME, only adapted for the mobile platform, and augmented by
functionality for handling dynamic personal information.
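The time-based restrictions mentioned in the consent bullet above could, for instance, be checked along these lines. This is a hedged sketch: the function and the hour-window representation are illustrative assumptions, not the prototype's code.

```python
# Sketch of a time-based restriction on location disclosure: requests from
# the LBS provider are only granted inside user-configured time windows.
def location_access_allowed(hour, windows):
    """windows: list of (start_hour, end_hour) pairs; end hour exclusive."""
    return any(start <= hour < end for start, end in windows)

# e.g. John permits localisation only in the morning and early evening
windows = [(6, 10), (17, 20)]
```

A pre-emptive consent check like this runs without user interaction, which is what the push scenario requires.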


Figure 30 Policy configuration and full privacy policy

Figure 31 Service subscription and Data Track

The prototype maps PRIME components and principles like the PRIME console, pseudonymisation,
access control, and appropriate consent to data release to the mobile platform, incurring some
limitations in other areas, but still offering solid privacy protection. The second version features
several user interface and technology improvements, progressing towards fully leveraging what is
offered by PRIME on a mobile platform. It also addresses several specific user interface challenges of
the LBS scenario.


8 Ontologies for HCI


8.1 Introduction
Talking about user interfaces, usability and identity management inevitably leads us to the problem of
understanding what the communication partner, the web service, or the simple data request means. If
for instance the user looks for the term “shark” on the Internet he will get results from various domains
as it could be a rock band, a fish, a software product, etc. If the systems had knowledge of the context
of the search query and the context of results they could help to focus and to select the most fitting
results. But Internet content and web services are mainly built for human readers, not for
computers. Often there is no machine-readable information about context and if there is, the
information may be inconsistent.
To solve these issues, the so-called Semantic Web was designed (Berners-Lee et al., 2001). It aims
to define a basis for all the terms and relations needed to describe the content of the web consistently
and sufficiently. So-called “ontologies” were introduced, describing relations between terms in simple
Subject-Predicate-Object statements. See Figure 32 for an example of the kind of relations that need
to be described; in this case that John is a name used by the user (in English, the word ‘given name’ is
often used for ‘first name’). PRIME uses this mechanism to enable client and server to exchange type
information and to be able to make precise requirement statements about protected resources (so-called
“claim requests”).

Figure 32 Simple example of Subject-Predicate-Object relation
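The triple model of Figure 32 can be illustrated with a minimal sketch (not PRIME code; the identifiers are invented): statements are plain Subject-Predicate-Object triples, and a query is a pattern match with a wildcard.

```python
# A toy triple store illustrating the Subject-Predicate-Object model of RDF.
triples = {
    ("user", "givenName", "John"),          # John is a name used by the user
    ("givenName", "subPropertyOf", "name"), # givenName is a kind of name
}

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}
```

Real RDF stores work on the same principle, with URIs instead of bare strings.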

To describe the ontologies, the World Wide Web Consortium (W3C) developed the so-called Web
Ontology Language (OWL)16. OWL relies on the Resource Description Framework (RDF)17 to
express the ontology relations. RDF statements can be written in Notation3 (n3)18 as well as in XML (as
in Figure 33).
In the next sections we describe how PRIME makes use of these mechanisms to express the relations
between entities and objects, to describe them in a human and machine readable manner and to offer
translations etc.

16 http://www.w3.org/TR/2004/REC-owl-features-20040210/
17 W3C Recommendation 10 February 2004 http://www.w3.org/TR/2004/REC-rdf-concepts-20040210/
18 http://www.w3.org/DesignIssues/Notation3.html


<rdf:RDF
xmlns:j.0="https://www.prime-project.eu/ont/Datamodel#"
xmlns="https://www.prime-project.eu/ont/PIIBase#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:owl="http://www.w3.org/2002/07/owl#"
xml:base="https://www.prime-project.eu/ont/PIIBase">
<owl:Ontology rdf:about="">
<owl:imports rdf:resource="https://www.prime-project.eu/ont/Datamodel"/>
<rdfs:comment xml:lang="en">PRIME Ontology defining basic attributes of
entities.</rdfs:comment>
</owl:Ontology>
<owl:ObjectProperty rdf:ID="inverse_of_userCentricSensitiveData_1">
<rdfs:subPropertyOf rdf:resource="https://www.prime-project.eu/ont/Datamodel#owner"/>
</owl:ObjectProperty>
<owl:FunctionalProperty rdf:ID="birthDate">
<rdf:type rdf:resource="https://www.prime-project.eu/ont/Datamodel#ConcreteProperty"/>
<rdfs:label xml:lang="de">Geburtstag</rdfs:label>
<rdfs:label xml:lang="en">birth date</rdfs:label>
<rdfs:comment xml:lang="en">The date of birth or founding of an entity.</rdfs:comment>
<rdfs:subPropertyOf rdf:resource="https://www.prime-
project.eu/ont/Datamodel#userCentricSensitiveData"/>
<j.0:range rdf:resource="http://www.w3.org/2001/XMLSchema#date"/>
</owl:FunctionalProperty>
...
</rdf:RDF>

Figure 33 RDF example defining properties of 'birthDate'

8.2 Ontology usage in PRIME


Ontologies can be used quite broadly. In PRIME, ontologies mainly model the data types. That means,
the ontology model covers the underlying data model and offers methods to access the data model
using the ontology layer. This allows reasoning based on the ontology. If the user has to show some
payment credential, for instance, the PRIME ontology states that payment is of the set:

Data               Concrete data type
5 Euro             Money
Amex 1234567890    Credit Card
500 Miles          Bonus Points
0987654321         Bank account
The system can then automatically suggest one of these items to fulfil the request about “payment”,
even if the concrete property type does not match the type “payment”, because the ontology maps the
items to “payment” members.
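A minimal sketch of this type-mapping lookup, using the data from the table above. The dictionary-based representation is an assumption for illustration; it is not the PRIME ontology API.

```python
# The ontology's subtype relation: each concrete data type is a member of
# the abstract "payment" concept.
subtype_of = {
    "Money": "payment",
    "CreditCard": "payment",
    "BonusPoints": "payment",
    "BankAccount": "payment",
}

# The user's stored data items, each with its concrete type.
items = [
    ("5 Euro", "Money"),
    ("Amex 1234567890", "CreditCard"),
    ("500 Miles", "BonusPoints"),
    ("0987654321", "BankAccount"),
]

def candidates(requested_type):
    """All items whose type is, or maps via the ontology to, the request."""
    return [value for value, t in items
            if t == requested_type or subtype_of.get(t) == requested_type]
```

A request for "payment" can thus be answered with any of the four items, even though none of them has the concrete type "payment".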
The central point in PRIME dealing with ontologies is the PRIME Ontology Information Point (OIP).
This module uses the query language SPARQL19 for accessing the ontology information. For handling
RDF, ontologies, and SPARQL within the PRIME prototype, the Jena Framework20 is used. The OIP
offers various predefined queries to execute on the PRIME data storage and to obtain descriptive
information about a dedicated data item, type, etc.
The use of ontologies for describing types and their relations in a general way enables the following
features of the PRIME prototype:

19 http://www.w3.org/TR/rdf-sparql-query/
20 http://jena.sourceforge.net/


a. The PRIME data model allows attaching a uniform policy language to each separate data item
b. Each enterprise data model uses its own vocabulary; using ontologies we can map them into one
model (from and to the PRIME model)
c. Easy translation between policy concepts
d. It models the credential metadata and the rules that govern the access to the credentials
However, so much flexibility also has its drawbacks. Using a malicious ontology, one could
completely invert the meaning and functioning of the system. That is why currently only a fixed set of
ontologies is allowed.

8.3 Possible future extensions


For future extensions of the PRIME system one may address the capability of the system to
dynamically include external ontologies. As mentioned above, this presupposes a trusted framework to
guarantee the consistency and correctness of the ontologies. This also requires some public key
infrastructure (PKI) to certify and prove the integrity and authenticity of the external ontologies.
Standardisation could be mentioned as a further activity. It should be possible to issue standardised
ontologies about relevant privacy-related processes to enhance and enrich enterprise data models by
accompanying them with appropriate ontologies. It can be noted that ontologies allow gathering data
and datatypes under common denominations, such that ‘name’ consists of ‘prefix’, ‘given name’,
‘middle name’, ‘family name’, ‘suffix’. Other name categories can be constructed, as can, for
instance, different address structures.
Finally, for HCI it is of particular relevance that the ontologies in PRIME can be extended to host
tooltip explanations and translations of data types into different European languages (Figure 34).

- <owl:Class rdf:ID=”givenName”>
<rdfs:label xml:lang=”en”>First Name (Given Name)</rdfs:label>
<rdfs:label xml:lang=”sw”>Förnamn</rdfs:label>
...

Figure 34 Tentative example with resources for data type terms instantiation in two languages

In fact, other user interface elements, such as button labels and references to icon resources, can
also be stored directly in the ontology. This makes it possible to customise interface elements, such as
captions and descriptions, while preserving the relations between different subjects and objects of the
PRIME system. Naturally, this automatic way of filling the UI with texts can entail problems because
in some languages some expressions will be longer than in others, as shown in Figure 35, making it
necessary to compress the letters or extend e.g. buttons, which can have unwanted consequences.21 But
in principle, automatic translations are very promising in a broader scope where privacy policies and
(end-user’s) obligation settings are considered. Standard messages would make it possible to
automatically translate between the languages of the service provider and the customer, providing for
informed consent in the best possible way. Translatable standard messages would also be of great
advantage for the use when contacting the data controller via the Data Track’s assistance function (cf.
sections 2.6, 3.6).
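Language-tagged labels of this kind could be looked up roughly as follows. This is an illustrative sketch with invented data that mirrors the language tags of Figure 34; it is not the PRIME OIP interface.

```python
# rdfs:label entries keyed by language tag, with a fallback to English
# when no label exists for the user's language.
labels = {
    "givenName": {"en": "First Name (Given Name)", "sw": "Förnamn"},
    "birthDate": {"en": "birth date", "de": "Geburtstag"},
}

def ui_label(datatype, lang, fallback="en"):
    """Return the label for 'datatype' in 'lang', falling back to English,
    and to the raw datatype name if no label is defined at all."""
    entry = labels.get(datatype, {})
    return entry.get(lang) or entry.get(fallback) or datatype
```

Such a fallback chain keeps the UI usable even for data types whose labels have not yet been translated.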

21 Naturally, the verbs in the Italian examples might be in imperative rather than infinitive: Crea un nuovo ruolo, Importa/Sincronizza, Modifica i segnalibri.


Figure 35 How different languages (English, Swedish, Italian) would claim different space for button captions


9 Overview of connections between PRIME core and UI components
The PRIME Integrated Prototype is built with a three-layer architecture. At the top there is the
common user interface, implemented in XUL22 to show the user forms and dialogues. The middle
layer is the so-called “Mediator”, built on Java Server Pages (JSP) technology, which connects the
upper and lower layer. And finally, the PRIME core represents the lower layer, where data is stored
and the privacy protocols are running. The PRIME core is realised as Java modules. The following
sections will explain the interconnection of these parts.

9.1 PRIME UI modules


In detail, the first layer is for displaying data and interacting with the user. We used XUL, which runs
in Mozilla Firefox and standalone in XULRunner23. The user interface is split into the parts
“PRIME Console”, “Send Personal Data?”, “PRIME FormFiller” and the “PRIME Mediator”.
The PRIME Console covers local identity management. It is responsible for maintaining, displaying
and accessing data (e.g. datatrack.xul, personaldata.xul, and presetmanager.xul).
When sending data to a communication partner, the PRIME user-side system distinguishes between
PRIME-enabled and non-PRIME-enabled partners. To do so, PRIME introduces an http header extension called
“X-flag”. A PRIME-enabled communication partner, requesting PII, appends an “X-flag” (see the X-
flag section below for more information) to the http header in the response to the user’s initial resource
request (the simple case is a normal page request).
The AskSendData module (which generates the “Send Personal Data?” dialogue and similar
dialogues) will be shown if the user’s PRIME system receives this X-flag. In this dialogue the user
can choose the data to transfer to the communication partner, or select pseudonyms or, more
generally, credentials to use.
The “FormFiller” (completing the top layer) will intercept the request and support the user in editing
the dialogue inputs in case the X-flag is missing, i.e. when the partner is not PRIME-enabled.
The PRIME Mediator encapsulates the PRIME core functionality into a web service interface to
enable external components to use this functionality in a convenient way.

9.1.1 PRIME Console


The PRIME Console is the user-centred management centre, enabling the user to set up and control
the privacy related data disclosure procedures; cf. Figure 36. The console is realised as a front end,
connecting the Mediator via the web service interface. The console uses the get and list functions
offered by the Mediator to request the corresponding data. Such web service requests are simple https
requests to the mediator URL24. Using specific function modules, such as getDataTrack.jsp,
listCategories.jsp, updatePII.jsp etc., the console interacts with the PRIME core. An overview of the
available functions and their details can be found in section 9.1.4.
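Such a web service request could be formed roughly like this. The default Mediator URL is taken from footnote 24; the helper function and parameter names are assumptions for illustration.

```python
# Sketch of how a console component might address the Mediator's JSP
# functions, e.g. listPII.jsp or getDataTrack.jsp, over https.
from urllib.parse import urlencode

MEDIATOR_URL = "https://localhost:8443/prime_impl_ConsoleMediator"

def mediator_request(function, **params):
    """Build the https request URL for one Mediator function."""
    query = urlencode(params)
    return f"{MEDIATOR_URL}/{function}.jsp" + (f"?{query}" if query else "")
```

For example, `mediator_request("getDataTrack", search="web")` would address the Data Track function with a search string.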

22 XUL, XML User interface Language, is a Mozilla special construct. This limits the usage of the PRIME Console to Mozilla users. But as far as a standalone XUL runner is available we may count it as a special programming language with its advantages and disadvantages. For more information, see http://www.xulplanet.com/ and http://extensionroom.mozdev.org/ .
23 See http://developer.mozilla.org/en/docs/XULRunner for further details.
24 Per default, https://localhost:8443/prime_impl_ConsoleMediator.


Figure 36 PRIME Console Main Window (in IPV2)

9.1.2 PRIME “Send Personal Data?” (AskSendData)


The PRIME Send Personal Data works similarly to a personal firewall application. It intercepts the
common application communication by parsing the communication (http header) to detect PRIME
related communication. If the so-called interceptor detects PRIME-related communication the
communication flow is intercepted and redirected to the AskSendData module. This component uses
the same interface as the console uses. Based on the so-called claim request, which a service
application sends to the user, this component collects all available information about the requested
data (data categories such as name, credit card number, birth date etc.). It connects to the PRIME core
to get the available data and to help the user decide which data to disclose. After the user has decided
what data to disclose and which policy to accept, this component stores the data and the
communication circumstances in the Data Track (see section 3.6), answers the claim request and,
finally, hands control back to the application it once intercepted.

9.1.3 PRIME FormFiller


Currently only a few web sites are PRIME-enabled, and in the future not all web sites will be. To
ensure that the user is supported not only on PRIME-enabled sites, and to guarantee privacy-aware
data handling and self-determination, the PRIME developers have foreseen a tool which supports the
user at “normal” sites. The main purpose of this tool is to assist the user in filling out forms and to log
which data were sent to which site (that is, to create records in the Data Track).
The FormFiller analyses a non-PRIME-enabled site when it loads and checks whether form
fields are present. If there are form fields on the site, the FormFiller connects to the PRIME core
via JSP to request the available PreSets and concrete PII. Using this data the FormFiller shows a small
popup, supporting the user in making his decisions. The FormFiller tries to assign the different input
fields to the single facts of the selected PreSet and fills in the corresponding data in the corresponding
field. In special cases some additional user action is required to complete the task and to re-assign data
to data fields. After the user has finished the input and submitted the form, he gets a confirmation
window with information about the data sent to the site. The user input will be sent by the Mediator to
the PRIME core for logging, to support the user in future form-filling on this or any similar site.
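The field-to-category assignment could be sketched as follows. This is a hypothetical sketch: the keyword lists and names are invented, and the prototype's actual heuristics may differ.

```python
# Form field names are matched against keyword lists per PRIME data
# category; unmatched fields are left for the user to assign manually.
CATEGORY_KEYWORDS = {
    "givenName": ["first", "given", "fname"],
    "familyName": ["last", "family", "surname"],
    "birthDate": ["birth", "dob"],
}

def guess_category(field_name):
    """Return the PRIME category a form field most likely belongs to."""
    name = field_name.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in name for k in keywords):
            return category
    return None  # requires manual re-assignment by the user

def fill_form(field_names, preset):
    """Map each form field to the matching value of the selected PreSet."""
    return {f: preset.get(guess_category(f)) for f in field_names}
```

Fields that cannot be matched stay empty, corresponding to the "additional user action" mentioned above.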

9.1.4 PRIME Console Mediator


Using the Java Server Pages (JSP) technology, the user interface gets access via a web service to the
data held in the PRIME core. There are two types of JSP. The first type comprises the “list*.jsp” pages (e.g.
listPII.jsp or listPresets.jsp) for delivering data from the PRIME core to the UI components. The
second type contains the JSPs for data manipulation in the PRIME core, for instance to adapt or create
data (e.g. addPII.jsp, updatePII.jsp or removePII.jsp). Table 7 gives an overview of the functions.
These functions connect the console and other external programs to the PRIME core.


Table 7 JSPs for retrieving and manipulating data in the PRIME core

Function Description
accessRemoteResource Starts an access remote resource protocol for a given URL.
addApplicationAuthorization Adds a program (name) to the list of programs authorised to access the
core. In order to give a program access to the PRIME core, the user must
add the program to this list.
addContact Adds a new contact to the database.
addCredential Creates a new credential.
addPII Adds a new PII record to the database.
auth Handles authorisation requests - helper function only.
getDataTrack Lists all the stored data, transactions and interconnections, filtered by
options and search string.
getFormInput Form Filler Helper page. Returns some static Form Input to server.
getInputCategory Gives back a fitting prime-category to the name of the form-field.
listApplicationAuthorizations Returns a list of application authorisations.
listCategories Generates a list of available categories as a RDF file (Resource
Description Framework).
listContacts Generates a list with all known receivers or all receivers of a given
PreSet.
listCredentials Lists all credentials the user has as a RDF list.
listDecisionSuggestions Represents the interface to the PRIME DSM (Decision Suggestion
Module) and generates a prioritised list with all possible PreSets that
could be used for one dedicated contact, template, and purpose.
listPII Lists the PII of a user.
listPresets Lists all PreSets.
listSessions Lists all sessions the user ever had as RDF.
listTemplates Lists all templates (as string) for a given session id.
listVerifyingData Lists the PII that were marked for disclosure by the user.
registerObserver The client registers a listener for the PRIME console interaction.
removeApplicationAuthorization Removes the given application authorisation.
removePII Removes either the whole PII record or only a sub category.
removePreset Removes either a PII and/or a category from the PreSet or removes the
PreSet itself (if only the PreSet id is given).
resumeNegotiation Resumes the accessRemoteResource protocol for a given resource.
updateApplicationAuthorization Updates the given application name by removing the old one and adding a
new one.
updatePII Updates an existing PII record.
updatePreset Updates an existing PreSet record.
updatePresetIcon Updates the icon entry of an existing PreSet record.
updatePresetReceiver Updates the flags of a PreSet receiver.

The lower layer is the PRIME core itself, providing the privacy protocols (for instance
accessRemoteResource or listCredentials). It also comprises the context manager, which collects
information about all disclosed PII and is important for linkability computation and all other privacy-
related functionality.


9.2 Inter-module communication flow


The interaction and data flow schema is shown in Figure 37. A detailed and commented version of this
sketch can be found in the PRIME tutorial (D13.1.e).

Figure 37 Interaction flow

The communication flow could be described as follows:


1) The AskSendData (“Send Personal Data?” dialogue) installs the callback hook to intercept the
communication
2) The client accesses a protected resource. If the service is not PRIME-enabled and there are no
data requested, there will be no interception.
3) In case the service is PRIME-enabled, it redirects the request using the X-PRIME header
flag described in section 9.3. The client system detects that flag and reroutes the request via the
accessRemoteResource function, creating a new session.
4) The service answers with a so-called claim request. In addition to a request for data, the claim
request contains all privacy relevant information about the service, i.e., policy statements,
purpose of data usage, address of the data processor, etc.
5) The claim request triggers the mediator to notify the client about the request. The “Send
Personal Data?” dialogue shows up, displaying all the necessary information.
6) The user selects the data to be disclosed, decides on the policy and obligations to be attached to
the data, and discloses the data. Internally the client system creates a new claim answering the
claim request. At the same time the system goes into a stand-by mode to wait for the response
about the access decision for the protected resource.
7) When the client receives the answer, the initiated session is closed. If access was granted,
the answer contains an access handle.
8) Using this handle, the client again requests the protected resource, and the service finally
delivers it.
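As a rough illustration, the eight steps can be condensed into the following toy simulation. The `MockService` class, the message dictionaries, and the handle values are invented for the example and do not reflect the actual PRIME wire format; only the ordering of the steps follows the list above.

```python
# Toy simulation of the eight-step flow; all message shapes are assumptions.

class MockService:
    """Stands in for a PRIME-enabled service protecting one resource."""
    def __init__(self):
        self.valid_handles = set()

    def get(self, resource, headers=None):
        headers = headers or {}
        if headers.get("X-PRIME-Handle") in self.valid_handles:
            return {"status": 200, "body": f"content of {resource}"}  # step 8
        # step 3: redirect carrying the X-PRIME flag
        return {"status": 303, "X-PRIME": "http://service/prime",
                "Location": "login.html"}

    def claim_request(self):
        # step 4: data request plus privacy-relevant information
        return {"requested": ["email"], "purpose": "newsletter",
                "processor": "service@example"}

    def submit_claim(self, claim):
        # step 7: evaluate the claim and, if acceptable, grant a handle
        if "email" in claim:
            handle = "handle-001"
            self.valid_handles.add(handle)
            return {"granted": True, "handle": handle}
        return {"granted": False}


def access_remote_resource(service, resource, user_data):
    response = service.get(resource)                       # step 2
    if response["status"] == 303 and "X-PRIME" in response:
        request = service.claim_request()                  # steps 3-5
        claim = {k: v for k, v in user_data.items()
                 if k in request["requested"]}             # step 6 (user consents)
        decision = service.submit_claim(claim)             # step 7
        if decision["granted"]:
            return service.get(resource,
                headers={"X-PRIME-Handle": decision["handle"]})  # step 8
    return response


result = access_remote_resource(MockService(), "/newsletter",
                                {"email": "alice@example.org"})
```

In the real system step 6 is of course interactive: the "Send Personal Data?" dialogue mediates the user's consent rather than a dictionary filter.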

9.3 HTTP X-PRIME header flag


The “accessRemoteResource” protocol supports the user in accessing resources that require some form of
authentication. To comply with the PRIME principles, this protocol is initiated by the client.


If the user is accessing a protected resource (e.g. a web page that requires authentication), the
PRIME-enabled service provider returns a response containing the X-PRIME header flag
(see Figure 38) [25], indicating to the PRIME-enabled client that access to this resource can be achieved
by mechanisms which PRIME provides. The client can search all incoming packets for this marker
and, if the marker is found, connect to the service provider’s PRIME system (the address of the system
is delivered together with this flag). For legacy applications the response also contains a redirect (HTTP
status code 303 SEE_OTHER) to a ‘legacy’ login page where the user can log in with a user name and
password. A client without a PRIME system would thus simply follow this link and display the legacy
page.
An interceptor (in our case an extension for Mozilla Firefox) inspects incoming packets.
If the X-PRIME flag is present, it notifies the Mediator to start the “accessRemoteResource”
protocol, passes the address of the service provider’s PRIME system and the resource the
client is trying to access to the Mediator, and waits for a decision response. If additional
data or interaction are requested, the so-called AskSendData dialogue (see Figure 37) pops up and waits for
user interaction. Finally, the client transmits the request fulfilment [26] back to the server.
The service response contains a handle that reflects the decision of the service provider (either access
allowed or access denied, depending on the policy and on whether the user could verify his identity). When
receiving this handle the interceptor stores it and connects a second time to the access-restricted
resource. This time, a component inspecting outgoing requests recognises that an access handle for
this web page was received and adds it to the request header as the X-PRIME-Handle flag, which
serves as the access handle. The consequence of this approach is that service providers only need to
speak plain HTTP to give the client a hint. On the client side, a PRIME-enabled component needs
access to the response header and a ‘handle store’. When receiving a hint, the interceptor can connect
to the Mediator (also via HTTP) and wait for the decision.
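The interceptor logic just described can be sketched as follows: spot the X-PRIME flag on incoming responses, remember the access handle once it arrives, and attach it as the X-PRIME-Handle flag on the second request. In PRIME this code lives inside a Firefox extension; the plain-Python class below is a simplified stand-in, and its method names are invented for the example.

```python
# Simplified stand-in for the Firefox-extension interceptor; method names
# and the dict-based handle store are assumptions for illustration.

class Interceptor:
    def __init__(self):
        self.handle_store = {}  # resource URL -> access handle

    def on_response(self, url, headers):
        """Inspect an incoming response; return the address of the service
        provider's PRIME system if the X-PRIME flag is present, else None."""
        return headers.get("X-PRIME")

    def store_handle(self, url, handle):
        """Remember the access handle the service granted for this resource."""
        self.handle_store[url] = handle

    def on_request(self, url, headers):
        """Before a request leaves, attach a stored handle if one exists."""
        if url in self.handle_store:
            headers = dict(headers)  # do not mutate the caller's headers
            headers["X-PRIME-Handle"] = self.handle_store[url]
        return headers


icp = Interceptor()
prime_addr = icp.on_response("http://host/personalize.jsp",
                             {"X-PRIME": "http://host/prime.jsp",
                              "Location": "login.html"})
icp.store_handle("http://host/personalize.jsp", "abc123")
out = icp.on_request("http://host/personalize.jsp", {"Host": "host"})
```

The split into `on_response` and `on_request` mirrors the two hooks the text describes: one component inspecting incoming packets and one inspecting outgoing requests.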

http://localhost:8080/tud_ipv2_Demonstrator/newsletter/personalize.jsp

GET /tud_ipv2_Demonstrator/newsletter/personalize.jsp HTTP/1.1


Host: localhost:8080

Keep-Alive: 300
Connection: keep-alive
Referer: http://localhost:8080/tud_ipv2_Demonstrator/newsletter/

HTTP/1.x 303 See Other


Server: Apache-Coyote/1.1
Set-Cookie: JSESSIONID=C0BE5A61615B0A7F3B0B27D30C4A8C2B;
Path=/tud_ipv2_Demonstrator
X-PRIME: http://localhost:8080//tud_ipv2_Demonstrator/newsletter/prime.jsp
Location: login.html
Content-Length: 0
Date: Mon, 23 Oct 2006 08:52:04 GMT

Figure 38 HTTP header example with X-PRIME flag

[25] The X-PRIME header flag is defined by PRIME. For detailed information about protocol headers see
http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Header. For header inspection, the
Firefox extension at http://livehttpheaders.mozdev.org/ is very helpful.
[26] In normal applications this may be a user name and password, credit card details, or a valid email address.
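Extracting the X-PRIME flag from a raw response head like the one in Figure 38 only requires splitting the status line from the header fields. The following minimal sketch is not the extension's actual parser (it ignores cookies and header continuation lines), but shows the idea; the raw text reproduces the relevant lines of Figure 38.

```python
# Minimal parse of a raw response head; illustrative only, not the
# Firefox extension's actual header-inspection code.

raw = """HTTP/1.x 303 See Other
Server: Apache-Coyote/1.1
X-PRIME: http://localhost:8080//tud_ipv2_Demonstrator/newsletter/prime.jsp
Location: login.html
Content-Length: 0"""


def parse_headers(raw_response):
    status_line, *header_lines = raw_response.splitlines()
    headers = {}
    for line in header_lines:
        if ":" in line:
            name, _, value = line.partition(":")  # split on first colon only
            headers[name.strip()] = value.strip()
    return status_line, headers


status, headers = parse_headers(raw)
# headers["X-PRIME"] now holds the address of the service's PRIME system,
# while headers["Location"] holds the legacy login page for non-PRIME clients.
```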


10 Outlook
During the life span of the PRIME project many HCI aspects have been considered and several
proposals have been suggested and developed. At the same time, privacy principles have been
investigated, discussed, developed, and also used as the basis for the designs made within the project.
This deliverable has given an account of the work pursued and the results obtained. In particular, the
relation between privacy principles and design proposals and guidelines has been accounted for
(chapter 4). Also, individual design constructs have been discussed at some length, including
preference settings, consent management, evaluation of trust-enhancing information, and management
of history records.
There are of course many things that could be explored more deeply. Some of the members of the
PRIME consortium are now, together with new allies, negotiating a new Large-scale Integrated
Project with the European Commission (for Framework Programme 7), PrimeLife – Privacy and
Identity Management in Europe for Life. The PrimeLife proposal is more focused on life-long privacy
maintenance than PRIME, and also on collaborative scenarios in virtual communities. The proposal
contains many technical challenges for workable mechanisms for pseudo-identification and reputation
building, as well as standards for access control and policy languages, but also research on social
issues as well as some of the assistance functions discussed in chapter 5. For instance, in the proposal
it is mentioned that “For virtual community applications, a ‘Data Track’ function for providing
transparency of the personal data processing by other peers, may differ from the PRIME Data Track
for client–server applications, as those peers (individuals) might not provide the same data handling
policy information (if they provide such information at all). Furthermore, it might even be difficult to
identify all peers that obtain information about an individual – apart from the possibility that those
peers might even wish to act anonymously.”
Other members of the PRIME consortium will, with other new allies, undertake an FP7 Standard
Research Project named PICOS – Privacy and Identity for Community Services, which will address
trust and privacy issues in identity management in digital information and communication services
that support communities which already operate in the physical world.
The field of Privacy HCI – HCI-P as it was called in section 4.1 – is still not well investigated and new
privacy threats arise as mobile and web technology develops, as exemplified by the application
prototypes in chapters 6 and 7. We think the broad scope of the PRIME project and the many aspects
that its architecture actually covers have given a unique ground for developing interaction concepts.
Admittedly, the HCI work is far from complete in the sense of delivering ready-made solutions.
Rather, chapters 4 and 5 give guidelines and examples that have to be adapted and refined in every
individual implementation of the PRIME architecture in real applications. But the scope of the PRIME
project has propelled the development of a more inclusive systems thinking and opened many vistas
for further investigation.


11 References
For PRIME deliverables (references of the form “D6.1.c”), see the list at the beginning of the document.

Andersson, C., J. Camenisch, S. Crane, S. Fischer-Hübner, R. Leenes, S. Pearson, J.S. Pettersson & D. Sommer.
Trust in PRIME. Proceedings of the 5th IEEE Int. Symposium on Signal Processing and IT, December 18-
21, 2005, Athens, Greece.
APWG (2007) web site http://www.antiphishing.org/
Article 29 Data Protection Working Party. Opinion on More Harmonised Information provisions. 11987/04/EN
WP 100, November 25 2004. http://europa.eu.int/comm/internal_market/privacy/workingroup/wp2004/
wpdocs04_en.htm
Bellotti, V. & A. Sellen, Design for Privacy in Ubiquitous Computing Environment. Proceedings of the Third
European Conference on Computer-Supported Cooperative Work, 13-17 September, 1993, Milano, Italy.
Bergmann, M., M. Rost & J.S. Pettersson. Exploring the Feasibility of a Spatial User Interface Paradigm for
Privacy-Enhancing Technology, Proceedings of the Fourteenth International Conference on Information
Systems Development (ISD’2005), Karlstad, August 2005. Published in Advances in Information
Systems Development, Springer-Verlag, 2006, pp. 437-448.
Borking, J. & C. Raab. Laws, PETs and Other Technologies for Privacy Protection, Journal of Information, Law
and Technology, 1, University of Warwick, 2001.
Berners-Lee, T., J. Hendler & O. Lassila. The Semantic Web. Scientific American, May 2001.
Buxton, W. (“Bill”). Sketching user experiences. Getting the design right and the right design. Morgan
Kaufmann Publishers, 2007.
Carroll, J.M. Making Use: Scenario-Based Design of Human-Computer Interactions, MIT Press, 2000.
Casassa Mont, M. Dealing with Privacy Obligations. In Trust and Privacy in Digital Business, Springer, LNCS
3184, 120-131. 2004.
Chernoff, H. Using faces to represent points in k-dimensional space graphically. Journal of the American
Statistical Association 68 (342): 361-368. 1973.
Cheskin Research and Studio Arcetype/Sapient. E-Commerce trust study. January 1999, as referred in Turner
2003.
Clauß, S. Towards Quantification of Privacy within a Privacy-Enhancing Identity Management System.
Dissertation Technische Universität Dresden, Fakultät Informatik, December 2007.
Consumers International. Privacy@net – An international comparative study of consumer privacy on the
internet. January 2001. http://www.consumersinternational.org/document_store/Doc30.pdf
Cooper A. & R. Reimann. About Face 2.0 – The Essentials of Interaction Design. Wiley Publishing, USA, 2003.
Crane, S., H. Lacohee & S. Zaba. Trustguide: Trust in ICT. BT Technology Journal, Special Edition covering
HP-BT Alliance, October 2006.
Cranor, L.F. Web privacy with P3P. O´Reilly, 2002.
Cranor, L.F. What do they “indicate?”: evaluating security and privacy indicators. interactions, Vol.
XIII.3: 45-57, 2006.
Cranor, L.F. & S. Garfinkel, eds. Security and Usability: Designing Secure Systems that People Can Use.
O’Reilly, 2005.
D6.1.b, D6.1.c, etc., i.e. PRIME deliverables; see list at the beginning of the document.
Dhamija, R., J.D. Tygar & M. Hearst. Why Phishing Works. CHI 2006 Proceedings. ACM Conference
Proceedings Series. ACM Press 2006.


D’Hertefelt, S. Trust and the Perception of Security. 3 January 2000.
http://www.interactionarchitect.com/research/report20000103shd.htm
Dumas, J.S. & J.C. Redish. A Practical Guide to Usability Testing, Intellect Ltd, Rev. ed. 1999.
Eurobarometer (2003) http://ec.europa.eu/public_opinion/index_en.htm, especially:
http://ec.europa.eu/public_opinion/archives/ebs/ebs_196_data_protection.pdf
European Commission. Privacy Enhancing Technologies (PETs), Memo/07/159, Brussels, 2 May 2007.
EuroPriSe (web) http://www.European-Privacy-Seal.eu/
Fischer-Hübner, S., J.S. Pettersson, M. Bergmann, M. Hansen, S. Pearson & M. Casassa Mont. HCI Designs for
Privacy-Enhancing Identity Management. In Digital Privacy: Theory, Technologies and Practices, eds. Acquisti,
Gritzalis, Lambrinoudakis & di Vimercati. Auerbach Publications, 2007.
Franz, E., K. Liesebach & K. Borcea-Pfitzmann. Privacy-Aware User Interfaces within Collaborative
Environments. Workshop on Contexts in Advanced Interfaces at AVI 2006, Venice, Italy, 2006.
Furnell, S.M. Using security: easier said than done? Computer Fraud & Security, April 2004, pp. 6-15.
Furnell, S.M. E-commerce security: a question of trust. Computer Fraud & Security, October 2004, pp. 10-14.
Furnell, S.M. Why users cannot use security. Computers & Security, Vol. 24(4): 274-279, 2005.
Furnell, S.M., A. Jusoh & D. Katsabas. The challenges of understanding and using security: A survey of end-
users, Computers & Security, Vol. 25(1): 27-35, 2006.
Gandon, F.L. & N.M. Sadeh. Semantic Web Technologies to Reconcile Privacy and Context Awareness. School
of Computer Science, Carnegie Mellon University, CMU-CS-03-211 / CMU-ISRI-03-107. 2003.
http://reports-archive.adm.cs.cmu.edu/anon/2003/CMU-CS-03-211.pdf
Garfinkel, S.L. Design Principles and Patterns for Computer Systems That Are Simultaneously Secure and
Usable. PhD Dissertation, Massachusetts Institute of Technology, May 2005.
Gerd tom Markotten, D. User-centered security engineering, Proceedings of the 4th EurOpen/USENIX
Conference NordU2002, February 2002. http://www.iig.uni-freiburg.de/telematik/atus/publications.html
Gideon, J., L. Cranor, S. Egelman & A. Acquisti. Power strips, prophylactics, and privacy, oh my!
Symposium On Usable Privacy and Security (SOUPS) 2006, Carnegie Mellon University, July 12-14,
2006 Pittsburgh. Available in ACM Digital Library.
Halket, T. D. & D. B. Cosgrove. Is your online agreement in jeopardy? 2001. http://www.cio.com/
legal/edit/010402_agree.html
Hansen, M., S. Fischer-Hübner, J.S. Pettersson & M. Bergmann. Transparency Tools for User-Controlled
Identity Management. Presented at eChallenges e-2007, 24-26 October 2007, The Hague. Published in:
Cunningham & Cunningham (Eds.): Expanding the Knowledge Economy: Issues, Applications, Case
Studies. IOS Press, Amsterdam 2007, pp. 1360-1367.
Hardee, J.S., R. West & Ch.B. Mayhorn. To download or not to download: an examination of computer security
decision making. interactions, Vol. XIII.3: 32-37, 2006.
Herzog, A. Usable Security Policies in Runtime Environments. Linköping Studies in Science and Technology,
Dissertation No. 1075. Linköping University, 2007.
Herzog, A. & N. Shahmehri. Usable set-up of runtime security policies. Proceedings of the International
Symposium on Human Aspects of Information Security and Assurance, July 2007.
Jendricke, U. & D. Gerd tom Markotten, “Usability meets Security – The Identity-Manager as your Personal
Security Assistant for the Internet”. Proceedings of the 16th Annual Computer Security Applications
Conference, December 2000. http://tserv.iig.uni-freiburg.de/telematik/forschung/projekte/kom_technik/atus/publications.html
Johnston, J., J. H.P. Eloff & L. Labuschagne. Security and human computer interfaces. Computers & Security,
Vol. 22 (8): 675-684, 2003.
Iachello, G. & J. Hong. End-user privacy in Human-Computer Interaction. Foundations and Trends in Human-
Computer Interaction, Vol. 1(1): 1-137, 2007.
Information Commissioner (UK) (IC UK DC, 2004) Annual Track Research Findings, Individuals, 2004.
http://www.informationcommissioner.gov.uk/cms/DocumentUploads/Annual%20Track%20Research%20
Findings%20data%20Controllers.pdf


International Organization for Standardization, ISO 9241-11 Ergonomic requirements for office work with visual
display terminals (VDTs) – Part 11: Guidance on usability, 1998.
Kaiser, J. & M. Reichenbach. Evaluating security tools towards usable security – a usability taxonomy for the
evaluation of security tools based on a categorization of user errors. In Hammond, J., T. Gross & J.
Wesson, Usability Gaining a Competitive Edge. IFIP 17th World Computer Congress, Montreal, Canada,
Kluwer Academic Publishers, pp. 25-30. August 2002.
http://tserv.iig.uni-freiburg.de/telematik/forschung/projekte/kom_technik/atus/publications.html
Karakasiliotis, A., S.M. Furnell & M. Papadaki. An assessment of end-user vulnerability to phishing attacks,
Journal of Information Warfare, Vol. 6(1): 17-28, 2007.
Karat, J., C.-M. Karat, C. Brodie & J. Feng. Privacy in information technology: Designing to enable privacy
policy management in organizations. Int. J. Human-Computer Studies 63, 153-174, 2005.
Kobsa, A. Personalized Hypermedia and International Privacy. Communications of the ACM 45(5): 64-67, 2002.
Kobsa, A. Tailoring Privacy to Users’ Needs. Invited Keynote, 8th International Conference on User Modeling,
Sonthofen, Germany, 2001. http://www.ics.uci.edu/~kobsa/papers/2001-UM01-kobsa.pdf
Kröger, V.-P. Security of User Interfaces – A Usability Evaluation of F-Secure SSH. Proceedings of the Helsinki
University of Technology, Seminar on Network Security, December 1999.
Kunz, C.L. Click-Through Agreements: Strategies for Avoiding Disputes on Validity of Assent. 2002.
http://www.Efscouncil.org/frames/Forum%20Members/Kunz_Click-thr_20%Agrmt_20Strategies.ppt. See
also C. L. Kunz, J. Debrow, M. Del Duca & H. Thayer, “Click-Through Agreements: Strategies for
Avoiding Disputes on Validity of Assent”. Business Lawyer, 57, 401, 2001.
Law, E.L.-C. & E. Hvannberg. Analysis of strategies for improving and estimating the effectiveness of heuristic
evaluation. In Hyrskykari, A. (ed.) Proceedings of the Third Nordic Conference on Human-Computer
Interaction, Tampere, Finland, October 23-27, 2004. Available in ACM Digital Library.
Microsoft Inc. Privacy Guidelines for Developing Software Products and Services, Version 2.1, October 10,
2006.
Molin, L. & J.S. Pettersson. How should interactive media be discussed for successful requirements
engineering? In Perspectives on multimedia: communication, media and technology, eds. Burnett,
Brunström & Nilsson. Wiley, 2003.
Nielsen, J. Usability Engineering. Morgan Kaufmann Publishers, 1993.
Nielsen, J. Heuristic evaluation. In Nielsen & Mack (Eds.), Usability Inspection Methods, John Wiley & Sons,
New York, NY, 1994. Cf. also http://www.useit.com/papers/heuristic/heuristic_list.html
Nielsen, J., Jacob Nielsen’s Alertbox, User Education Is Not the Answer to Security Problems. October 25,
2004. http://www.useit.com
Nielsen, J. & R.L. Mack (Eds.), Usability Inspection Methods, John Wiley & Sons, New York, NY, 1994.
Nielsen, J., R. Molich, C. Snyder & S. Farrell. E-commerce user experience: Trust. Nielsen Norman Group, 2000.
OECD, Privacy Online: OECD Guidance on Policy and Practice. (Print Paperback), 2003.
http://www.oecd.org/document/49/0,2340,en_2649_34255_19216241_1_1_1_1,00.html
P3P, Platform for Privacy Preferences (P3P) Project. 2004 and later. http://www.w3.org/P3P The latest version
of the P3P 1.1 element definitions can be found at http://www.w3.org/TR/P3P11/.
Patrick, A.S. Building Trustworthy Software Agents. IEEE Internet Computing, pp 46-53, November 2002,
http://computer.org/internet/
Patrick, A.S. & S. Kenny. From Privacy Legislation to Interface Design: Implementing Information Privacy in
Human-Computer Interaction. Paper presented at the Privacy Enhancing Technologies Workshop
(PET2003), Dresden/Germany, 2003.
Patrick, A.S., S. Kenny, C. Holmes & M. van Breukelen. Human Computer Interaction. Chapter 12 in Handbook
for Privacy and Privacy-Enhancing Technologies. PISA project. Eds. van Blarkom, Borking, Olk, 2002.
http://www.andrewpatrick.ca/pisa/handbook/handbook.html
Pearson, S. Towards Automated Evaluation of Trust Constraints, in Trust Management, LNCS 3986, Springer
Berlin/Heidelberg, 252-266, 2006.
Perri 6. Can we be persuaded to become PET - lovers? Appendix II of Report on the OECD forum session on
Privacy-Enhancing technologies (PETs), held at the OECD, Paris, 8 October, 2001.
Pettersson, J.S. P3P and Usability – the Mobile Case. In Duquennoy, P., S. Fischer-Hübner, J. Holvast & A.
Zuccato (eds.) Risk and challenges of the network society. Karlstad University Studies 2004:35, 2004.


Pettersson, J.S. R1 – First report from the pilot study on privacy technology in the framework of consumer
support infrastructure. Working Report, December 2006. Dept. of Information Systems and Centre for
HumanIT, Karlstad University. http://www.humanit.org/projects.php?projekt_id=48&lang=en
Pettersson, J.S. R2 – Second report from the pilot study on privacy technology in the framework of consumer
support infrastructure. Working Report, July 2007. Dept. of Information Systems and Centre for
HumanIT, Karlstad University. http://www.humanit.org/projects.php?projekt_id=48&lang=en
Pettersson, J.S., C. Thorén & S. Fischer-Hübner. Making Privacy Protocols Usable for Mobile Internet
Environment. HCI International 2003, June 23-25 Crete, Greece.
Pettersson, J.S., S. Fischer-Hübner, N. Danielsson, J. Nilsson, M. Bergmann, S. Clauß, Th. Kriegelstein & H.
Krasemann. Making PRIME usable. SOUPS 2005 Symposium on Usable Privacy and Security, Carnegie
Mellon University, July 6-8 July, 2005, Pittsburgh. Available in ACM Digital Library.
Pettersson, J.S., S. Fischer-Hübner & M. Bergmann. Outlining Data Track: Privacy-friendly Data Maintenance
for End-users, to appear in Proceedings of the 15th International Conference on Information Systems
Development (ISD 2006), Budapest, 31st August - 2nd September 2006, Springer Scientific Publishers.
Pfitzmann, A. & M. Hansen. Anonymity, Unlinkability, Undetectability, Unobservability, Pseudonymity, and
Identity Management – A Consolidated Proposal for Terminology, v0.30, 26 November 2007.
http://dud.inf.tu-dresden.de/Anon_Terminology.shtml
PISA – Privacy Incorporated Software Agent project (http://www.pet-pisa.nl/pisa_org/pisa/index.html);
Handbook for Privacy and Privacy-Enhancing Technologies. PISA project. Eds. van Blarkom, Borking,
Olk, 2002. http://www.andrewpatrick.ca/pisa/handbook/handbook.html
Pollach, I. What’s wrong with Online Privacy Policies? Communications of the ACM, September 2007: 103-108.
Preece, J., Y. Rogers, H. Sharp, D. Benyon, S. Holland & T. Carey. Human-Computer Interaction. Addison-
Wesley, 1994.
Preece, J., Y. Rogers & H. Sharp. Interaction Design: Beyond Human-Computer Interaction. John Wiley &
Sons. 2002. 2nd ed. 2007 with authors listed as Sharp, Rogers & Preece.
PRIME Deliverables, D1.1, etc.; see list at the beginning of the present document.
Raskin, J. The Humane Interface – New Directions for Designing Interactive Systems. ACM Press, New York,
2000.
Reimann, R. & A. Cooper. About Face 3.0: The Essentials of Interaction Design. Wiley, 2006.
Rubin, J. Handbook of Usability Testing, Wiley, 1994.
Sackmann, S., J. Strüker & R. Accorsi. Personalization in Privacy-Aware Highly Dynamic Systems.
Communications of the ACM, Vol. 4(9): 32-38, 2006.
Saltzer, J.H. & M.D. Schroeder. The protection of information in computer systems. Proceedings of the IEEE,
Vol. 63(9): 1278-1308, 1975.
Sasse, M.A., S. Brostoff & D. Weirich. Transforming the weakest link – a human/computer interaction approach
to usable and effective security. BT Technology Journal, Vol. 19(3): 122-131, 2001.
Schechter S. E., R. Dhamija, A. Ozment & I. Fischer. The Emperor’s New Security Indicators: An evaluation of
web site authentication and the effect of role playing on usability studies, IEEE Symposium on Security
and Privacy, May 20-27, 2007, Oakland, California.
Shneiderman, B. & C. Plaisant. Designing the User Interface. Addison Wesley, 4th edition, 2004.
Slade, K.H. Dealing with customers: Protecting their privacy and enforcing your contracts. 1999.
http://www.haledorr.com/db30/cgi-bin/pubs/1999_06_CLE_Program.pdf
Thougburgh, D. Click-through contracts: How to make them stick. Internet Management Strategies, 2001.
http://www.loeb.com/FSL5CS/articles45.asp
Tsai, J., S. Egelman, L. Cranor & A. Acquisti. The Effect of Online Privacy Information on Purchasing
Behavior: An Experimental Study. WEIS 2007, Workshop on the Economics of Information Security,
June 7-8 2007, Carnegie-Mellon University, Pittsburgh.
Turner, C.W. How do consumers form their judgment of the security of e-commerce web sites? Workshop on
Human-Computer Interaction and Security Systems, CHI2003, April 5-10, 2003, Fort Lauderdale,
Florida. http://andrewpatrick.ca/CHI2003/HCISEC-papers.html
Turner, C.W. The online experience and consumers’ perceptions of e-commerce security. In Proceedings of the
Human Factors and Ergonomics Society 46th Annual Meeting, Sept. 2002.


Turner, C.W., M. Zavod & W. Yurcik. Factors that Affect the Perception of Security and Privacy of E-
commerce Web Sites. In Proceedings of the Fourth International Conference on Electronic Commerce
Research, Dallas TX, November 2001.
Tweney, E. & S. Crane. Trustguide2: An Exploration of Privacy Preferences in an Online World. In
Cunningham & Cunningham (Eds.): Expanding the Knowledge Economy: Issues, Applications, Case
Studies, IOS Press, 2007 Amsterdam, pp. 1379-1385.
Weitzner, D., H. Abelson, T. Berners-Lee, C. Hanson, J. Hendler, L. Kagal, D. McGuinness, G. Sussman & K.
Kasnow Waterman. Transparent Accountable Data Mining: New Strategies for Privacy Protection. MIT
Computer Science and Artificial Intelligence Laboratory Technical Report, MIT-CSAIL-TR-2006-007,
2006.
Whitten, A. & J.D. Tygar. Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0. In Proceedings of the
8th USENIX Security Symposium. Usenix, August 1999.
Wu, M., R.C. Miller & G. Little. Web Wallet: Preventing phishing attacks by revealing user intentions.
Symposium On Usable Privacy and Security (SOUPS) 2006, Carnegie Mellon University, July 12-14,
2006 Pittsburgh. Available in ACM Digital Library.
Yee, K-P. User interaction design for secure systems. Proceedings of the International Conference on
Information and Communications Security, ICIC’02, pp. 278-290. Springer-Verlag, 2002.

EU Directives
Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of
individuals with regard to the processing of personal data and on the free movement of such data, Official
Journal L No. 281, 23.11.1995.
Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing
of personal data and the protection of privacy in the electronic communications sector, Official Journal L
No. 201, 31.07.2002.

