AI-driven public services and the privacy paradox: do citizens really care about their privacy?
Jurgen Willems, Moritz J. Schmid, Dieter Vanderelst, Dominik Vogel & Falk Ebinger
To cite this article: Jurgen Willems, Moritz J. Schmid, Dieter Vanderelst, Dominik Vogel & Falk
Ebinger (2022): AI-driven public services and the privacy paradox: do citizens really care about
their privacy?, Public Management Review, DOI: 10.1080/14719037.2022.2063934
ABSTRACT
Based on privacy calculus theory, we derive hypotheses on the role of perceived
usefulness and privacy risks of artificial intelligence (AI) in public services. In
a representative vignette experiment (n = 1,048), we asked citizens whether they
would download a mobile app to interact in an AI-driven public service. Despite
general concerns about privacy, we find that citizens are not susceptible to the
amount of personal information they must share, nor to a more anthropomorphic
interface. Our results confirm the privacy paradox, which we frame in the literature on
the government’s role to safeguard ethical principles, including citizens’ privacy.
KEYWORDS AI; virtual agents; privacy paradox; data privacy; vignette experiment
1. Introduction
Recent advances in artificial intelligence (AI), especially in deep learning, are poised to revolutionize the way governments and bureaucratic entities interact with their citizens (Vogl et al. 2020; Wirtz and Müller 2019; Wirtz, Weyerer, and Geyerer 2019). In particular, AI-driven conversational agents (such as chatbots) offer tremendous potential to transform public service interactions in an effective manner (Vogl et al. 2020;
Wirtz, Weyerer, and Geyerer 2019), along with re-defining service interactions in
a cost-efficient way (Eggers, Fishman, and Kishnani 2017; Mehr 2017). However,
since the power of these technologies lies in the linking of information on individuals,
they also entail ethical trade-offs, especially in the context of public services. On the one hand, these technologies promise public service processes that are better adjusted to the needs of citizens, with a strong focus on efficient and effective services, as well as the right to equal access, fair treatment, and privacy for citizens (Dickinson and Yates 2021). On the other hand, they raise several ethical concerns inherent to how AI applications work, such as limited or no insight into actual decision mechanisms in AI, lack of accountability, inherent biases on gender and race, and difficulties in interpretability (Miller and Keiser 2021; Busuioc 2021). These concerns stand in direct tension with the promised benefits, which leads to our two research questions:
(1) How do the perceived usefulness, data sharing requirements, and citizens’
overall privacy concerns influence their willingness to use AI-driven public
services?
(2) Do citizens act according to their privacy concerns in concrete contexts?
Moreover, given the growing literature on the role of more human-like AI interactions and designs, we also verify whether these elements differ depending on the level of anthropomorphism of these applications.
Answering these research questions is relevant to the growing debate on AI-driven
public services. First, insights on the role of perceived usefulness of an application as
well as the concrete data sharing requirements can help optimize AI-driven services. In
doing so, practical insights on concrete conditions and features of AI in public services
are developed, which, in turn, can directly help in formulating practical recommendations. Second, confirmation of the privacy paradox in the public arena would require a theoretical and practical debate on the inherent conflict between efficient public services, on the one hand, and core principles of modern public organizations, such as transparency, equality, democratic oversight, and citizens’ well-being, on the other.
We conducted an online vignette experiment with a representative sample of
Austrian citizens (n = 1,048). We opted for a between-subjects 2-by-2 design (data-sharing requirements × level of anthropomorphism in an AI application) in combination with measuring the exogenous variable of citizens’ general privacy concerns. The
combination of treatment variables for a specific privacy dilemma – based on an
implemented across a wide range of internet-based public services (Makasi et al. 2020;
Vogl et al. 2020). In the public sector, the main proclaimed benefits of chatbots are that
they allow organizations to reduce their administrative burden and enhance communication, which, in turn, improves service delivery (Androutsopoulou et al. 2019).
Hence, they are poised to radically improve citizens’ experience and engagement and to enable new forms of decision-making with the help of citizens’ interactions. Among
several use cases (e.g. UNA in Latvia, Bobbi in the City of Berlin, or NADIA in
Australia), the City of Vienna (Austria) introduced the ‘WienBot’ in 2017 to ensure
that information about the different services in the City of Vienna is accessible and
understandable (Urban Innovation Vienna, 2017).
However, AI-driven conversational agents have also been plagued with controversy
since users do not always feel that chatbot-mediated services demonstrate the appropriate public service values, such as trust, fairness, and transparency (Makasi et al.
2020). When implementing AI-driven conversational agents, public agencies are
responsible for appropriately handling citizens’ information to prevent it from being
used for unwarranted purposes. In short, governments must ensure and monitor their
citizens’ data privacy.
2.2 Privacy calculus theory – the role of perceived usefulness versus the
combination of privacy concerns with the amount of data to share
Over the last decade, smart devices have become ubiquitous. In the public domain,
conversational agents (chatbots) embedded in mobile apps offer cost-efficient service
interaction around the clock. As a result, the perceived usefulness of a mobile app, for
example, achieved by introducing more advanced AI features, plays a significant role in
accepting AI-driven public services (Vogl et al. 2020). Perceived usefulness is the extent
to which citizens consider the AI-driven application as valuable for their public service
needs and preferences.
However, this blend of features results in privacy risks if using AI-driven applications leads to undisclosed (and undesired) data usage. In the context of AI-driven
processes, this is a substantial threat. Indeed, the core principle of AI is that existing
data is used to update decisions in other and future public service encounters.
Data privacy is considered one of the most critical ethical issues of the information
age (Mason 1986; Smith, Milberg, and Burke 1996) and has been studied extensively
across multiple disciplines in the social sciences. It refers to individuals’ control over
the release of personal information (Belanger and Crossler 2011), including its collection, use, access, and correction of errors (Smith, Milberg, and Burke 1996; Keith et al.
2013). As such, data privacy has implications for human well-being and can be
conceptualized both as a personal right, making it subject to law enforcement, and as
a commodity (Davies 1997) that can be traded and marketed (Jentzsch, Preibusch, and
Harasser 2012; Smith, Dinev, and Xu 2012).
Citizens’ active and passive evaluations of perceived usefulness and privacy concerns form the building blocks of the privacy calculus theory, on which we formulate Hypotheses 1 to 4 in this section.
Starting from the assumption that data privacy is a commodity, the privacy calculus
theory states that an individual’s decision to disclose versus retain information is
a rational choice made by weighing the costs and benefits of information disclosure
(Becker and Murphy 1988). From this theoretical perspective, it has been argued that
the online sharing of personal information is affected by both the respective costs and
the anticipated benefits (Culnan and Armstrong 1999). Accordingly, individuals seem
to perform a privacy calculus, defined as a rational choice between the risks and
benefits when disclosing personal information (Culnan and Armstrong 1999). They
might express strong concerns about their privacy being infringed and still give their
personal details if they have something to gain in return; in other words, if engaging in the AI-driven public service is perceived as sufficiently useful, given their needs and preferences. However, considering the public context, perceived returns are not
necessarily focused on personal and private benefits but could also relate to creating
a public good that a citizen finds relevant. Perceived usefulness of the AI-driven public
service is, thus, a prerequisite for citizens to use AI-driven public services. Building on
the privacy calculus theory, we can assume that the willingness to use AI-driven public
services will be higher if individuals perceive them as more useful.
Against this background, we propose the first hypothesis, focusing on the paramount importance of perceived usefulness:
Hypothesis 1 (H1): The willingness to use AI-driven public services will be higher the more useful individuals perceive them to be.
However, the inherent trade-off of the privacy calculus does not only include
perceived usefulness (related to the expected benefits), it also includes the potential
risks to privacy (related to the expected potential costs). These risks relate to (1) the
citizens’ more general privacy concerns as well as (2) the amount and type of personal
data one must share to engage in a particular AI-driven public service.
Privacy concerns refer to individuals’ general beliefs about the risks and potential
negative consequences of sharing information (Zhou and Li 2014). They have co-
evolved with advances in information technology for more than a century (Castañeda
and Montoro 2007; Norris and Reddick 2012) and have been used frequently as
a predictor of privacy-enhancing behaviour when using online services, sharing information online, and engaging in privacy-protecting behaviours, such as deleting cookies
or un-tagging photos on social networking services (Dienlin and Trepte 2015; Zhou
and Li 2014). In the domain of e-commerce, for instance, concerns about online
privacy are associated with engaging in privacy-protective behaviours, including
removing one’s personal information (e.g. full name, address, etc.) from commercial
databases or completely refraining from self-disclosure (Son and Kim 2008;
Spiekermann, Grossklags, and Berendt 2001). Empirical research has shown that
overall concerns affect behaviour (e.g. Bamberg 2003; Reel et al. 2007). Therefore, it
is reasonable to expect that concerns regarding online privacy will be reflected in the
willingness to share information with online services or, in our case, AI-driven public
services. Against this background, we propose:
Hypothesis 2 (H2): The willingness to use AI-driven public services will be lower the
more concerned individuals are about their data privacy.
Citizens must trust that the use of their data is limited to the described purposes. Their concerns regarding the misuse of
their information may result in them not using the service. For example, Tsai et al.
(2011) found that the availability and accessibility of privacy policy information affect
individuals’ online behaviour. According to their findings, customers prefer online
businesses that better communicate their data privacy policy and provide more
information on potential dangers. While individuals might have an overall concern
regarding online privacy, they might be willing to share certain types of information
more readily than other types of data. The type of information required to be shared
can determine whether individuals engage in online actions. For example, Brown and Muchira (2004) found that internet users who have had a prior online experience where personal information was requested showed lower levels of online purchasing.
Similarly, Castañeda and Montoro (2007) have shown experimentally that requesting
more personal information makes users less inclined to complete an online action.
Drawing on this insight, we expect individuals to be more reluctant to rely on AI-
driven public services the more personal information they must grant access to:
Hypothesis 3 (H3): The willingness to use AI-driven public services will be lower if
individuals are required to share more personal information.
Hence, when assessing the potential cost of privacy risks, the overall privacy
concerns and the case-specific data to be shared are combined. Concretely, in addition
to the two main effects hypothesized above, namely the effects of overall privacy
concerns and sharing more personal information, we predict that citizens with high overall privacy concerns will be more reluctant to use AI-driven public services when the service requires them to share more personal information.
Hypothesis 4 (H4): The negative effect of overall privacy concerns on users’ willingness to
use AI-driven public services is stronger when more personal data is required.
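The cost-benefit weighing behind Hypotheses 1 to 4 can be sketched as a deliberately simplified toy model. All weights and thresholds below are illustrative assumptions, not constructs measured in the study:

```python
# Toy illustration of the privacy calculus: disclosure is modelled as a
# rational comparison of perceived benefit against perceived privacy cost.
# All numbers and weightings are hypothetical, for illustration only.

def privacy_calculus(benefit: float, concern: float, data_items: int) -> bool:
    """Return True if the citizen discloses (i.e. uses the service).

    benefit    -- perceived usefulness of the service (0..10)
    concern    -- general privacy concern (0..10)
    data_items -- number of personal data items the service requires
    """
    # Perceived cost grows with both general concern and the amount of data.
    perceived_cost = concern * 0.3 * data_items
    return benefit > perceived_cost

# A highly useful service is used despite moderate concern (privacy paradox):
print(privacy_calculus(benefit=8.0, concern=5.0, data_items=3))  # True: 8.0 > 4.5
# The same data request is declined when perceived usefulness is low:
print(privacy_calculus(benefit=2.0, concern=5.0, data_items=3))  # False: 2.0 < 4.5
```

The model is only a mnemonic for the theory: usefulness enters as a benefit, while concern and the amount of data jointly form the perceived cost, mirroring the hypothesized interaction in H4.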
a robot’s degree of human likeness, indeed, relates to feeling comfortable with the robot. However, as the human likeness increases, the emotional response increases only up to an ‘Uncanny Valley’, where the emotion suddenly turns negative, before increasing again as the likeness becomes almost indistinguishable from a human being (Murphy, Gretzel, and Pesonen 2019).
The easiest way to enhance the humanness of a virtual agent is the use of human
labels or identities. Cognitive psychologists have emphasized the importance of category-based perceptions activated by the social labels assigned to objects and noted that
individuals tend to use major attributes attached to labels to minimize cognitive effort
when making judgements or in forming impressions of others (Ashforth and
Humphrey 1997; Heyman and Gelman 1999).
Against this background, we propose the following hypothesis:
Hypothesis 5 (H5): The willingness to use AI-driven public services will be higher the more human (anthropomorphic) the interface is designed (for limited levels of anthropomorphic features, such as a human name).
While an anthropomorphic identity cue might lead users to appreciate the dialogue
and enjoy the interaction (Chung et al. 2020), they also need to share personal
information to receive a valuable recommendation or answer, which, in turn, can
evoke privacy concerns. However, users’ privacy concerns might differ for chatbots
when they convey a human-like appeal (Ischen et al. 2020). A human-like chatbot
might be perceived as more personal and less anonymous, leading to fewer privacy
concerns. Users might experience a closer connection to the human-like chatbot,
increasing the willingness to use it as a companion (Birnbaum et al. 2016). Hence,
interacting with a human-like chatbot can mimic interpersonal communication, positively influencing (personal) information disclosure and recommendation adherence.
Moreover, when relying on the assumption that an anthropomorphic interface can
induce trust in the AI-driven public service, which in turn might reduce the associated
concern when sharing personal data, we expect that the negative effect of having to
share more personal data will be buffered when the interface is more human.
Hypothesis 6 (H6): The negative effect of needing to share more personal information on
users’ willingness to use AI-driven public services is less strong when the interface is more
human (anthropomorphic) compared to a less human interface.
make them more focused on particular privacy cues in the vignettes. However, as we measured these independent variables after presenting the vignette, we verified for both measures whether they depended on the experimental treatments, and found no significant differences between the treatment groups. Moreover, the measures of general privacy concerns were collected after a set of distraction questions in the survey.
We also included two attention-check questions on the page following the willingness-to-download and perceived-usefulness items. We asked respondents to recall the correct vignette information about the chatbot’s name and the exact information they were required to share to download the app.
3.3 Analysis
As our main dependent variable is binary, we analyse the willingness to download the
app (yes/no) using binomial logistic regression analysis. The experimental treatments
(‘information to share’ and ‘anthropomorphism’), as well as the covariates (‘perceived
usefulness’ and ‘overall privacy concerns’) are the independent variables in this
regression. Data were analysed with R (R Core Team 2020, Version 1.2.5019).
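The authors ran their binomial logistic regression in R; the following is a minimal, dependency-free Python analogue on simulated data. The variable names mirror the study's design, but the data, effect sizes, and fitting routine are illustrative assumptions, not the study's data or estimation procedure:

```python
import math
import random

def fit_logistic(X, y, lr=0.3, epochs=600):
    """Fit a binomial logistic regression by plain gradient ascent on the
    log-likelihood. Returns [intercept, b1, b2, ...]."""
    n = len(X)
    beta = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(beta)
        for xi, yi in zip(X, y):
            z = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted P(download = 1)
            grad[0] += yi - p
            for j, x in enumerate(xi):
                grad[j + 1] += (yi - p) * x
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

# Simulated data (NOT the study's): willingness to download rises with
# perceived usefulness and falls with privacy concern; effect sizes are made up.
random.seed(1)
X, y = [], []
for _ in range(600):
    useful = random.uniform(-2, 2)   # centred 5-point scale
    concern = random.uniform(-2, 2)
    p_true = 1 / (1 + math.exp(-(0.1 + 1.0 * useful - 0.3 * concern)))
    X.append([useful, concern])
    y.append(1 if random.random() < p_true else 0)

beta = fit_logistic(X, y)
# Odds ratios, the quantities reported in Table 2: exp(coefficient).
or_useful, or_concern = math.exp(beta[1]), math.exp(beta[2])
print(f"OR usefulness: {or_useful:.2f}, OR concern: {or_concern:.2f}")
```

A real analysis would use a statistics package (as the authors did with R); the hand-rolled gradient ascent only keeps the sketch self-contained while showing how coefficients become the odds ratios reported in the results.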
4. Results
Table 1 reports the number of respondents and the percentage per treatment group
that would download the app, along with the lower and upper bounds of the 95%
confidence intervals.
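The per-group percentages in Table 1 come with standard binomial confidence intervals; a minimal sketch of how such a 95% interval is computed with the normal (Wald) approximation. The counts below are hypothetical placeholders, not the study's actual cell values:

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96):
    """95% Wald confidence interval for a binomial proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical treatment group: 150 of 250 respondents would download the app.
p, lo, hi = proportion_ci(150, 250)
print(f"{p:.1%} [{lo:.1%}, {hi:.1%}]")  # 60.0% [53.9%, 66.1%]
```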
Table 2 reports the estimated odds ratios from the binomial regression model to
explain whether people would download the application. Results are reported in four
models: with only the covariates (Model 1), with only the experimental treatments
Table 1. Overview of respondents per treatment group, and percentage that would download the app (with 95% confidence intervals).
(Model 2), with the main effects (Model 3), and with the hypothesized interaction effects (Model 4). Findings were consistent across these models, and further interpretation will be based on the full model (Model 4).
Hypothesis 1 stated that the willingness to use AI-driven public services will be higher the more useful individuals perceive them to be. This hypothesis is supported by the results: for every step on the perceived-usefulness scale, the odds of downloading the app increase by a factor of 2.66 (p < .001). Moreover, Hypothesis 2 is also supported, as respondents more concerned about their privacy show a decreased likelihood of downloading the app (odds ratio = 0.77, p = .012).
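Odds ratios are easy to misread as probability multipliers; the following arithmetic shows how a per-step odds ratio such as 2.66 translates into predicted probabilities. The 40% baseline is an illustrative assumption, not a figure from the paper:

```python
def apply_odds_ratio(p_base: float, odds_ratio: float, steps: int = 1) -> float:
    """Probability after multiplying the odds by odds_ratio once per step."""
    odds = p_base / (1 - p_base) * odds_ratio ** steps
    return odds / (1 + odds)

# Illustrative 40% baseline willingness; OR = 2.66 per usefulness-scale step.
p1 = apply_odds_ratio(0.40, 2.66, steps=1)
p2 = apply_odds_ratio(0.40, 2.66, steps=2)
print(round(p1, 3), round(p2, 3))  # 0.639 0.825
```

Note that the probability gain per step shrinks as the probability approaches 1, even though the odds ratio is constant.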
However, the other hypotheses are not supported. The experimental treatment groups in this study have at least 245 observations each, which provides sufficient power to detect a small-to-medium effect (Champely 2018, based on Cohen 1988). The amount of information individuals must share does not influence their willingness to use an AI-based public service app (H3). Even when people indicate that they are concerned about their privacy, this concern does not interact with the requirement to share more (personal) data (H4). This strongly
supports the privacy paradox being at play in this public service context. Moreover,
a more human-like interface did not impact this paradox either, i.e. there is no support
for H5 and H6. The anthropomorphic naming of an application does not interfere with
the dynamics of the privacy calculus theory and the privacy paradox.
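The power statement above follows the logic of the R pwr package (Champely 2018); a Python approximation using Cohen's arcsine effect size h for two proportions illustrates it. The 50% versus 60% download rates are illustrative assumptions, not the study's observed proportions:

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_proportions(p1: float, p2: float, n_per_group: int) -> float:
    """Approximate power of a two-sided, alpha = .05 two-proportion test
    with equal group sizes, via Cohen's arcsine effect size h."""
    h = abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))
    z_alpha = 1.959964  # two-sided 5% critical value
    return phi(h * math.sqrt(n_per_group / 2) - z_alpha)

# Illustrative: detecting a 50% vs. 60% download rate (h ~ 0.20, a small
# effect) with 245 respondents per group yields power of roughly 0.6;
# power rises steeply toward medium effect sizes.
print(round(power_two_proportions(0.50, 0.60, 245), 2))
```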
5. Discussion
Our research questions focused on (1) how perceived usefulness, data-sharing requirements, and citizens’ overall privacy concerns influence their willingness to rely on AI-driven public services, and (2) whether citizens act in accordance with their overall privacy concerns in concrete contexts.
Our empirical analysis shows that perceived usefulness is the main explanatory
factor for citizens’ willingness to download an AI-driven app to interact with and
request information on public services. Moreover, general privacy concerns do reduce
citizens’ willingness to download the AI-driven app; however, this does not interact
with the amount of personal information to be shared. In sum, in this context of public
services, citizens seem to trade off the usefulness of a particular AI application against their general privacy concerns; however, the amount of data that must be shared – and would thus be at the basis of privacy risks – is not considered in this trade-off.
Table 2. Binomial logistic regression explaining willingness to download the app (‘yes’ = 1/‘no’ = 0).
Hence, our empirical analysis supports the existence of a privacy paradox in the
context of AI-driven public service apps. Therefore, our results are in line with earlier
studies supporting the privacy paradox in for-profit and public contexts (e.g. Sevignani
2013). This implies that even when respondents had general privacy concerns, they
were still consenting to download and use a specific app, especially when the perceived
usefulness of the app was high. This is important for the growing field of AI in public
services for two reasons.
First, our study specifies an important element that should not be ignored in the
growing debate on how new technologies can and should (not) be used in a public service context (Bullock 2019; Criado and Gil-Garcia 2019; Lember, Brandsen, and Tõnurist 2019). Hypothesis 1, which was built on the privacy calculus theory, is
convincingly supported. This leads to questions about the relative value of private
information for citizens and public organizations. From a privacy calculus perspective, personal data has an economic value that can be traded for a benefit. However,
there is the additional risk that the personal information can be used for purposes
unknown and undesired by the person sharing the information, which might cancel
out any short-term and minimal personal benefits. However, personal data do not have the same economic value for a public organization as they have for many for-profit organizations (Douglass et al. 2014; Krishnamurthy and Awazu 2016). In fact,
aggregated personal data could potentially be used for better decision-making or
better public service delivery, which in turn might lead to an additional, indirect,
and shared benefit for citizens (Krishnamurthy and Awazu 2016). As a result,
privacy concerns related to providing data to a public institution should relate
less to the direct economic benefit one might gain or lose from it. Instead, its
value should relate to trading off one’s own service experience, as well as the overall
public value, with the risk that the data might be wrongly used. However, given
strict regulations in many countries and general principles of controllability and
transparency for public organizations, it is hard to argue that data security in a public setting would a priori be weaker than in a for-profit setting (Romansky and Noninska 2020).
Second, confirming the privacy paradox in a public context not only indicates an inconsistency between general concerns and actual behaviour, but also suggests that the actual behavioural logics of private market-based transactions and public service contexts are likely not very different. This calls for a broader discussion, as it is the role
of the state and public organizations to operate within boundaries of civil principles,
such as transparency, privacy, equality, democratic participation by citizens, etc. When
citizens’ behaviour is not consistent with these overall concerns, as translated, for example, into national and international legislation (Borlini 2017), public governance mechanisms could and should be developed to reduce these inconsistencies. Consequently, the confirmation of a privacy paradox in the public context, and the
debate on how to manage it from a civic values perspective, could be a relevant topic in the broader debate on the civic values of the increasing policy attention to behavioural public administration. While a growing body of literature has focused on confirming that various biases exist in human behaviour, as well as in citizens’ specific role in interaction with public organizations (for an overview, see Battaglio et al. 2019),
other more critical contributions have focused on what should, or should not, be done
with the knowledge of such behavioural biases (Berg 2003; Brown 2012). For example,
a significant critique concerning policy recommendations based on behavioural (public) insights is that they often treat citizens paternalistically, which contrasts with
civic values, such as democratic participation, transparency, and equality (Menard
2010; Schnellenbach 2012). A similar debate on the privacy paradox in the public
context seems necessary.
7. Conclusion
This experimental study shows that the privacy paradox exists in the context of AI-driven
public services. Despite substantial statistical power and attention checks in the survey, the experimental treatments (varying the level of personal information participants had to grant access to, and the anthropomorphic representation of the interface) did not have significant effects. This experiment opens the discussion about privacy concerns related to
automated, digital service interaction with citizens. Public administrators are urged to closely monitor potential privacy concerns when implementing such technologies. New technological approaches offer great potential to encourage, enable, and improve service interactions in a democratic and cost-efficient manner. If the ultimate goal is a sustainable implementation of AI-driven technologies in the public domain, those potentials need to be communicated accordingly, as citizens are likely to engage with them if they perceive them as useful and ethically justifiable.
Data availability
Data and research protocol are available at: <Link removed for anonymousness; will be made available
after the peer-review process>.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes on contributors
Jurgen Willems is Full Professor for Public Management and Governance in the Department of
Management at the Vienna University of Economics and Business (WU Wien). His teaching and
research cover a variety of topics on citizen-state and citizen-society interactions. He worked as
a researcher at the Vlerick Business School (Belgium) in the field of Management & ICT. Concrete
projects and executive teaching programs focused on Business Process Management and Business
Intelligence.
Moritz J. Schmid is Research and Teaching Assistant at the Institute for Public Management and Governance in the Department of Management at the Vienna University of Economics and Business
(WU Wien). Moritz Schmid’s research interests revolve around the managerial and governance
challenges that public sector organizations and bureaucratic entities are currently facing. He is
particularly interested in assessing the effects that technological advances have on public service
processes and their interactions with citizens by means of quantitative methods.
Dieter Vanderelst is Assistant Professor at the University of Cincinnati, with a joint appointment in
the departments of Psychology, Biological Sciences; Electrical Engineering & Computing Systems; and
Mechanical & Materials Engineering. His research focusses on a broad set of topics including ethical
boundaries for robot-human interactions.
Falk Ebinger is Postdoctoral Researcher at the Department of Management’s Institute for Public
Management and Governance at Vienna University of Economics and Business (WU Wien). He holds
a M.A. in Public Policy & Management from the University of Konstanz, Germany and earned
a doctorate (Dr.rer.soc.) at the Faculty for Social Science at Ruhr-University Bochum, Germany.
Before joining the WU he worked for several years as research fellow at the Chair for Public
Administration & Regional Politics at Ruhr-University Bochum and as senior research fellow and
substitute professor for Administrative Science at the Department of Politics and Public
Administration at the University of Konstanz.
Dominik Vogel is Assistant Professor of Public Management at the University of Hamburg. In his
research Dominik focusses on what motivates public sector employees, how public sector leadership
can succeed, how citizens interact with the administration and the performance management of public
organizations.
ORCID
Jurgen Willems http://orcid.org/0000-0002-4439-3948
Dominik Vogel http://orcid.org/0000-0002-0145-7956
Falk Ebinger http://orcid.org/0000-0002-1861-5359
References
Acquisti, A. 2004. “Privacy in Electronic Commerce and the Economics of Immediate Gratification.” In Proceedings of the 5th ACM Conference on Electronic Commerce, New York, NY, USA, 21–29. doi:10.1145/988772.982777.
Acquisti, A., and J. Grossklags. 2005. “Privacy and Rationality in Individual Decision Making.” IEEE
Security and Privacy Magazine 3 (1): 26–33. doi:10.1109/MSP.2005.22.
Acquisti, A., and S. Spiekermann. 2011. “Do Interruptions Pay Off? Effects of Interruptive Ads on
Consumers’ Willingness to Pay.” Journal of Interactive Marketing 25 (4): 226–240. doi:10.1016/j.
intmar.2011.04.003.
Chung, M., E. Ko, H. Joung, and K. Sang. 2020. “Chatbot E-service and Customer Satisfaction regarding Luxury Brands.” Journal of Business Research 117: 587–595. doi:10.1016/j.jbusres.2018.10.004.
Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Erlbaum.
Correia, L., and K. Wünstel. 2011. “Smart Cities Applications and Requirements.” White Paper of the Experts Working Group, Net!Works European Technology Platform. Accessed 21 March 2020. http://www.scribd.com/doc/87944173/White-Paper-Smart-Cities-Applications
Criado, J. I., and J. R. Gil-Garcia. 2019. “Creating Public Value through Smart Technologies and
Strategies: From Digital Services to Artificial Intelligence and Beyond.” International Journal of
Public Sector Management 32 (5): 438–450. doi:10.1108/IJPSM-07-2019-0178.
Culnan, M. J., and P. K. Armstrong. 1999. “Information Privacy Concerns, Procedural Fairness, and
Impersonal Trust: An Empirical Investigation.” Organization Science 10 (1): 104–115. doi:10.1287/
orsc.10.1.104.
Davies, S. 1997. “Re-engineering the Right to Privacy: How Privacy Has Been Transformed from a Right to a Commodity.” In Technology and Privacy: The New Landscape, edited by P. Agre and M. Rotenberg, 143–165. Cambridge, MA: MIT Press.
Dickinson, H., and S. Yates. 2021. “From External Provision to Technological Outsourcing: Lessons
for Public Sector Automation from the Outsourcing Literature.” Public Management Review 1–19.
doi:10.1080/14719037.2021.1972681.
Dienlin, T., P. K. Masur, and S. Trepte. 2019. “A Longitudinal Analysis of the Privacy Paradox.” SocArXiv. Accessed 23 May 2020. https://doi.org/10.31235/osf.io/fm4h7
Dienlin, T., and M. J. Metzger. 2016. “An Extended Privacy Calculus Model for SNSs: Analyzing
Self-Disclosure and Self-Withdrawal in a Representative U.S. Sample.” Journal of Computer-
Mediated Communication 21 (5): 368–383. doi:10.1111/jcc4.12163.
Dienlin, T., and S. Trepte. 2015. “Is the Privacy Paradox a Relic of the Past? an In-depth Analysis of
Privacy Attitudes and Privacy Behaviors.” European Journal of Social Psychology 45 (3): 285–297.
doi:10.1002/ejsp.2049.
Douglass, K., S. Allard, C. Tenopir, and M. Frame. 2014. “Managing Scientific Data as Public Assets:
Data Sharing Practices and Policies among Full-time Government Employees.” Journal of the
Association for Information Science and Technology 65 (2): 251–262. doi:10.1002/asi.22988.
Duffy, B. R. 2003. “Anthropomorphism and the Social Robot.” Robotics and Autonomous Systems
42 (4): 104–123. doi:10.1016/S0921-8890(02)00374-3.
Eggers, W.D., T. Fishman, and P. Kishnani. 2017. “AI-augmented Human Services: Using Cognitive
Technologies to Transform Program Delivery.” Accessed 14 April 2020. https://www2.deloitte.
com/content/dam/insights/us/articles/4152_AI-human-services/4152_AI-human-services.pdf
Fishbein, M., and I. Ajzen. 2010. Predicting and Changing Behavior: The Reasoned Action Approach.
New York: Psychology Press (Taylor & Francis).
Go, E., and S. S. Sundar. 2019. “Humanizing Chatbots: The Effects of Visual, Identity and
Conversational Cues on Humanness Perceptions.” Computers in Human Behavior 97: 304–316.
doi:10.1016/j.chb.2019.01.020.
Griffin, D., and A. Tversky. 1992. “The Weighing of Evidence and the Determinants of Confidence.”
Cognitive Psychology 24 (3): 411–435. doi:10.1016/0010-0285(92)90013-R.
Heirman, W., M. Walrave, and K. Ponnet. 2013. “Predicting Adolescents’ Disclosure of Personal
Information in Exchange for Commercial Incentives: An Application of an Extended Theory of
Planned Behavior.” Cyberpsychology, Behavior and Social Networking 16 (2): 81–87. doi:10.1089/
cyber.2012.0041.
Heyman, G. D., and S. A. Gelman. 1999. “The Use of Trait Labels in Making Psychological
Inferences.” Child Development 70 (3): 604–619. doi:10.1111/1467-8624.00044.
Howlader, D. 2011. “Moral and Ethical Questions for Robotics Public Policy.” Synthesis: A Journal of
Science, Technology, Ethics and Policy 2: 1–6.
Ischen, C., T. Araujo, H. Voorveld, G. van Noort, and E. G. Smit. 2020. “Privacy Concerns in Chatbot
Interactions.” In Chatbot Research and Design. Cham: Springer. doi:10.1007/978-3-030-39540-7_3.
Jaiswal, J. 2010. “Location-aware Mobile Applications, Privacy Concerns, and Best Practices.”
https://www.truste.com/resources/Whitepapers
Jentzsch, N., S. Preibusch, and A. Harasser. 2012. “Study on Monetising Privacy. An Economic Model
for Pricing Personal Information.” European Network and Information Security Agency (ENISA) 1–
76. https://www.enisa.europa.eu/publications/monetising-privacy
Kattel, R., V. Lember, and P. Tõnurist. 2020. “Collaborative Innovation and Human-machine
Networks.” Public Management Review 22 (11): 1652–1673. doi:10.1080/14719037.2019.1645873.
Keith, M., S. Thompson, J. Hale, J. Lowry, and C. Greer. 2013. “Information Disclosure on Mobile
Devices: Re-examining Privacy Calculus with Actual User Behavior.” International Journal of
Human-Computer Studies 71 (12): 1163–1173. doi:10.1016/j.ijhcs.2013.08.016.
Kernaghan, K. 2014. “The Rights and Wrongs of Robotics: Ethics and Robots in Public
Organizations.” Canadian Public Administration 57 (4): 485–506. doi:10.1111/capa.12093.
Kim, S. Y., B. H. Schmitt, and N. M. Thalmann. 2019. “Eliza in the Uncanny Valley:
Anthropomorphizing Consumer Robots Increases Their Perceived Warmth but Decreases Liking.”
Marketing Letters 30 (1): 1–12. doi:10.1007/s11002-019-09485-9.
Krishnamurthy, R., and Y. Awazu. 2016. “Liberating Data for Public Value: The Case of Data.gov.”
International Journal of Information Management 36 (4): 668–672. doi:10.1016/j.
ijinfomgt.2016.03.002.
Lember, V., T. Brandsen, and P. Tonurist. 2019. “The Potential Impacts of Digital Technologies on
Co-production and Co-creation.” Public Management Review 21 (11): 1665–1686. doi:10.1080/
14719037.2019.1619807.
Makasi, T., A. Nili, K. Desouza, and M. Tate. 2020. “Chatbot-mediated Public Service Delivery:
A Public Value Based Framework.” First Monday 25 (12). doi:10.5210/fm.v25i12.10598.
Mason, R. 1986. “Four Ethical Issues of the Information Age.” Management Information Systems
Quarterly 10 (1): 5–12. doi:10.2307/248873.
Mehr, H. 2017. Artificial Intelligence for Citizen Services and Government. Cambridge, MA: Harvard
Kennedy School, Ash Center for Democratic Governance and Innovation. https://ash.harvard.edu/
publications/artificial-intelligence-citizen-services-and-government
Meijer, A., L. Lorenz, and M. Wessels. 2021. “Algorithmization of Bureaucratic Organizations: Using
a Practice Lens to Study How Context Shapes Predictive Policing Systems.” Public Administration
Review 81 (5): 837–846. doi:10.1111/puar.13391.
Menard, J.-F. 2010. “A ‘Nudge’ for Public Health Ethics: Libertarian Paternalism as a Framework for
Ethical Analysis of Public Health Interventions?” Public Health Ethics 3 (3): 229–238. doi:10.1093/phe/
phq024.
Miller, S. M., and L. R. Keiser. 2021. “Representative Bureaucracy and Attitudes toward
Automated Decision Making.” Journal of Public Administration Research and Theory 31 (1):
150–165. doi:10.1093/jopart/muaa019.
Moon, Y. 2000. “Intimate Exchanges: Using Computers to Elicit Self-disclosure from Consumers.”
Journal of Consumer Research 26 (4): 323–339. doi:10.1086/209566.
Moon, M. J., J. Lee, and C. Roh. 2014. “The Evolution of Internal IT Applications and E-government
Studies in Public Administration: Research Themes and Methods.” Administration & Society
46 (1): 3–36. doi:10.1177/0095399712459723.
Mori, M. 1970. “The Uncanny Valley.” Energy 7 (4): 33–35. Accessed 4 August 2020. https://spectrum.
ieee.org/automaton/robotics/humanoids/the-uncanny-valley
Mori, M., K. F. MacDorman, and N. Kageki. 2012. “The Uncanny Valley [From the Field].” IEEE
Robotics & Automation Magazine 19 (2): 98–100. doi:10.1109/MRA.2012.2192811.
Murphy, J., U. Gretzel, and J. Pesonen. 2019. “Marketing Robot Services in Hospitality and Tourism:
The Role of Anthropomorphism.” Journal of Travel & Tourism Marketing 36 (7): 784–795.
doi:10.1080/10548408.2019.1571983.
Nam, T., and T. A. Pardo. 2011. “Conceptualizing Smart City with Dimensions of Technology, People,
and Institutions.” In: The Proceedings of the 12th Annual International Conference on Digital
Government Research, University of Maryland College Park, US.
Nass, C., and K. M. Lee. 2000. “Does Computer-synthesized Speech Manifest Personality?
Experimental Tests of Recognition, Similarity-attraction, and Consistency-attraction.” Journal of
Experimental Psychology: Applied 7 (3): 171–181. doi:10.1037/1076-898X.7.3.171.
Neirotti, P., A. De Marco, A. C. Cagliano, G. Mangano, and F. Scorrano. 2014. “Current Trends in
Smart City Initiatives: Some Stylised Facts.” Cities 38: 25–36. doi:10.1016/j.cities.2013.12.010.
Norberg, P., D. Horne, and D. Horne. 2007. “The Privacy Paradox: Personal Information Disclosure
Intentions Versus Behaviors.” Journal of Consumer Affairs 41 (1): 100–126. doi:10.1111/j.1745-
6606.2006.00070.x.
Norris, D. F., and C. G. Reddick. 2012. “Local E-government in the United States: Transformation or
Incremental Change?” Public Administration Review 73 (1): 165–175. doi:10.1111/j.1540-
6210.2012.02647.x.
R Core Team. 2020. “R: A Language and Environment for Statistical Computing.” Vienna, Austria.
https://www.R-project.org/
Reel, J. J., C. Greenleaf, W. K. Baker, S. Aragon, D. Bishop, C. Cachaper, P. Handwerk, et al. 2007.
“Relations of Body Concerns and Exercise Behavior: A Meta-analysis.” Psychological Reports
101 (3): 927–942. doi:10.2466/pr0.101.3.927-942.
Romansky, R., and I. Noninska. 2020. “Business Virtual Systems in the Context of E-governance:
Investigation of Secure Access to Information Resources.” Journal of Public Affairs 20 (15).
doi:10.1002/pa.2072.
Schnellenbach, J. 2012. “Nudges and Norms: On the Political Economy of Soft Paternalism.” European
Journal of Political Economy 28 (2): 266–277. doi:10.1016/j.ejpoleco.2011.12.001.
Sevignani, S. 2013. “The Commodification of Privacy on the Internet.” Science & Public Policy 40 (6):
733–739. doi:10.1093/scipol/sct082.
Singer, P. W. 2011. “Robots at War: The New Battlefield.” In The Changing Character of War,
edited by H. Strachan and S. Scheipers, 143–165. Oxford: Oxford University Press.
Smith, H.J., T. Dinev, and H. Xu. 2012. “Information Privacy Research: An Interdisciplinary Review.”
Management Information Systems Quarterly 35 (4): 989–1015. doi:10.2307/41409970.
Smith, H.J., S.J. Milberg, and S.J. Burke. 1996. “Information Privacy: Measuring Individuals’ Concerns
about Organizational Practices.” Management Information Systems Quarterly 20 (2): 167–196.
doi:10.2307/249477.
Son, J.-Y., and S. S. Kim. 2008. “Internet Users’ Information Privacy-Protective Responses:
A Taxonomy and A Nomological Model.” Management Information Systems Quarterly 32 (3):
503–529. doi:10.2307/25148854.
Spiekermann, S., J. Grossklags, and B. Berendt. 2001. “E-privacy in 2nd Generation E-Commerce:
Privacy Preferences versus Actual Behavior.” In: Proceedings of the 3rd ACM Conference on
Electronic Commerce, 14–17 October, Florida, USA.
Sundar, S., S. Kang, B. Zhang, E. Go, and M. Wu. 2013. “Unlocking the Privacy Paradox: Do Cognitive
Heuristics Hold the Key?” In: Proceedings of the 31st Annual Conference on Human Factors in
Computing Systems, Association for Computing Machinery, Paris, France, 811–816.
Taddicken, M. 2014. “The ‘Privacy Paradox’ in the Social Web: The Impact of Privacy Concerns,
Individual Characteristics, and the Perceived Social Relevance on Different Forms of Self-
disclosure.” Journal of Computer-Mediated Communication 19 (2): 248–273. doi:10.1111/
jcc4.12052.
Tsai, J. Y., S. Egelman, L. Cranor, and A. Acquisti. 2011. “The Effect of Online Privacy Information on
Purchasing Behavior: An Experimental Study.” Information Systems Research 22 (2): 254–268.
doi:10.1287/isre.1090.0260.
“Urban Innovation Vienna: WienBot.” Accessed 25 August 2021. https://smartcity.wien.gv.at/wienbot/
Vogl, T. M., C. Seidelin, B. Ganesh, and J. Bright. 2020. “Smart Technology and
the Emergence of Algorithmic Bureaucracy: Artificial Intelligence in UK Local Authorities.” Public
Administration Review 80 (6): 946–961. doi:10.1111/puar.13286.
Willems, J., L. Schmidthuber, D. Vogel, F. Ebinger, and D. Vanderelst. 2022. “Ethics of Robotized
Public Services: The Role of Robot Design and Its Actions.” Government Information Quarterly
39 (2): 101683. doi:10.1016/j.giq.2022.101683.
Wirtz, B., and W. Müller. 2019. “An Integrated Artificial Intelligence Framework for Public
Management.” Public Management Review 21 (7): 1076–1100. doi:10.1080/14719037.2018.1549268.
Wirtz, B., J. Weyerer, and C. Geyerer. 2019. “Artificial Intelligence and the Public Sector –
Applications and Challenges.” International Journal of Public Administration 42 (7): 596–615.
doi:10.1080/01900692.2018.1498103.
Zavattaro, S. M. 2013. “Social Media in Public Administration’s Future: A Response to Farazmand.”
Administration & Society 45 (2): 242–255. doi:10.1177/0095399713481602.
Zhou, T., and H. Li. 2014. “Understanding Mobile SNS Continuance Usage in China from the
Perspectives of Social Influence and Privacy Concerns.” Computers in Human Behavior 37:
283–289. doi:10.1016/j.chb.2014.05.008.