
Journal of Business Research

Collaborative Creativity in the Age of AI: Exploring the Intersection of Human-AI


Collaboration, Creativity and Trust
--Manuscript Draft--

Manuscript Number:

Article Type: Full length article

Section/Category: Innovation & Technology

Keywords: AI, creativity, human-AI collaboration, cobots, proximity of collaboration

Corresponding Author: Jesus Mascareño, Ph.D.


University of Groningen
Groningen, Groningen NETHERLANDS

First Author: Jesus Mascareño, Ph.D.

Order of Authors: Jesus Mascareño, Ph.D.

Aleksandra Przegalińska

Leon Ciechanowski

Abstract: Although the importance of Human-AI collaboration is increasingly acknowledged, its


relationship with creativity is not fully understood. Drawing from task-technology fit
theory, this study delves into the relationship between human-AI collaboration, trust,
and various facets of creativity. In an experimental study (N = 217), we found that
human-AI collaboration fosters a greater number of ideas, enhanced idea originality,
and improved idea feasibility compared to a human-only condition. A second
experimental study (N = 210) further revealed that higher proximity within human-AI
collaborations predicts increased idea productivity and feasibility, yet it does not lead to
greater idea originality. Additionally, our findings suggest that the effectiveness of this
proximity in fostering idea originality is contingent upon the level of trust in AI. Text
analytics on the communication patterns between humans and AI underscored these
results, showing variations in sentiment, language complexity, and communication
style in relation to the level of proximity and trust.

Suggested Reviewers: Soumyadeb Chowdhury


Associate Professor
s.chowdhury@tbs-education.fr
He has published articles on human-AI collaboration in your journal. Therefore, he is
knowledgeable on the subject and also familiar with your journal.

Gabriele Pizzi
Associate Professor, University of Bologna
gabriele.pizzi@unibo.it
He has published articles on human-AI collaboration in your journal. Therefore, he is
knowledgeable on the subject and also familiar with your journal.

Opposed Reviewers:

Powered by Editorial Manager® and ProduXion Manager® from Aries Systems Corporation
Title Page (WITH AUTHOR DETAILS)

Title page

Collaborative Creativity in the Age of AI: Exploring the Intersection of Human-AI

Collaboration, Creativity and Trust

Jesús Mascareño a,*, Aleksandra Przegalińska b,d, Leon Ciechanowski b,c

a Department of Human Resource Management and Organizational Behavior, University of Groningen, Groningen, The Netherlands;

b Management in Networked and Digital Societies (MINDS) Department, Kozminski University, Warsaw, Poland;

c MIT Center for Collective Intelligence, Massachusetts Institute of Technology, Cambridge, MA, USA;

d Harvard Center for Labour and Just Economy, Cambridge, MA, USA

*Corresponding author: Jesús Mascareño, Department of Human Resource Management and

Organizational Behavior, University of Groningen, Nettelbosje 2, 9747 AE Groningen, The

Netherlands. Phone: + 31 503634288, E-mail: j.m.mascareno@rug.nl

This paper was supported by the NCN 2021/41/B/HS4/03664 grant.


Manuscript (WITHOUT AUTHOR DETAILS)

Collaborative Creativity in the Age of AI: Exploring the Intersection of Human-AI

Collaboration, Creativity and Trust

Abstract

Although the importance of Human-AI collaboration is increasingly acknowledged, its

relationship with creativity is not fully understood. Drawing from task-technology fit theory,

this study delves into the relationship between human-AI collaboration, trust, and various

facets of creativity. In an experimental study (N = 217), we found that human-AI

collaboration fosters a greater number of ideas, enhanced idea originality, and improved idea

feasibility compared to a human-only condition. A second experimental study (N = 210) further

revealed that higher proximity within human-AI collaborations predicts increased idea

productivity and feasibility, yet it does not lead to greater idea originality. Additionally, our

findings suggest that the effectiveness of this proximity in fostering idea originality is

contingent upon the level of trust in AI. Text analytics on the communication patterns

between humans and AI underscored these results, showing variations in sentiment, language

complexity, and communication style in relation to the level of proximity and trust.

Keywords: AI, creativity, human-AI collaboration, cobots, proximity of collaboration

Artificial intelligence is reshaping the way organizations operate. AI helps

organizations to be more effective and efficient through data-led personalization,

optimization, and innovation (Grewal et al., 2021). AI-powered solutions are already being

used in many real-world applications, ranging from crop harvests and online retail to bank

loans and manufacturing (Fountaine et al., 2019; Grewal et al., 2021). A key area of

application is AI-powered collaborative robots (cobots) (Maedche et al., 2019). Cobot

technologies are developing rapidly (Adjoudani et al., 2019), becoming increasingly

affordable and accessible to organizations. In fact, cobotics has already become one of the

fastest-growing sectors of the robotics market (Goldberg, 2019), and it is expected to be worth

US$16.5 billion by 2028 (Lauraéus et al., 2021).

Similarly, the literature on cobots is rapidly expanding (Knudsen & Kaivo-Oja, 2020).

Although the human-AI collaboration literature has mainly focused on productivity-related

outcomes (e.g., El Zaatari et al., 2019; Schrier et al., 2010; Sowa et al., 2021), researchers are

beginning to uncover the relationship between human-AI collaboration and creativity (e.g.,

Dell'Acqua et al., 2024; Boussioux et al., 2023). However, the results are inconsistent across

studies, thus limiting understanding of the impact of human-AI collaboration on creativity.

Indeed, some studies show that human-AI collaboration might flatten creativity (Dell'Acqua

et al., 2024), whereas others find that human-AI creativity matches human creativity

(Hitsuwari et al., 2023) or that human-AI collaboration is more creative for certain aspects of

creativity (Boussioux et al., 2023). Thus, despite the increased research attention, our

knowledge of what human-AI collaboration can do to stimulate creativity is still rather

limited (Wilson & Daugherty, 2018), and more information on the topic is needed.

In this paper, we argue that the relationship between human-AI collaboration and

creativity is complex and multifaceted and we draw from task-technology fit theory (Goodhue

& Thompson, 1995) to understand the relationship between human-AI collaboration and

different dimensions of creativity. We argue that humans collaborating with AI-powered

cobots can produce more ideas, and more original and feasible ideas than humans alone. In

addition, we propose that the level of proximity between cobots and humans has different

effects on the different dimensions of creativity and that trust in the cobot will moderate this

relationship.

In sum, the aim of the present research is to contribute to creating consensus on the

effects of the human-AI collaboration on creativity. In addition, we aim to provide a more

nuanced understanding of this relationship by examining different dimensions of creativity,

by looking into the proximity of the collaboration between humans and cobots and by testing

the moderating effect of trust. We do so by using the results of two studies. The first study

addresses the relationship between human-AI collaboration and various dimensions of

creativity. The second study investigates proximity in the human-AI collaboration and its

relationship with different dimensions of creativity, and the moderating role of trust in this

relationship. Moreover, this work is a response to the call for more studies on how AI can

enhance different aspects of creativity (Dell'Acqua et al., 2024; Wilson & Daugherty, 2018),

and how to foster high performing human-AI collaborations (Hancock et al., 2011b). These

findings offer practical insights that will help practitioners unlock the potential of human-

AI collaboration.

Human-AI collaboration, creativity and task-technology fit theory

Rapid developments in the field of AI like the assignment of cognitive functions to

machines have given rise to a new generation of AI agents named cobots (Beghetto, 2021).

Cobots are AI-powered agents that are capable of collaborating with human subjects

(Beghetto, 2021; El Zaatari et al., 2019). Traditionally, cobots were employed in industrial

settings to take over repetitive tasks while humans handled unplanned tasks, providing the

balance between automation and flexibility (Knudsen & Kaivo-Oja, 2020, Maurtua et al.,

2017). However, cobots are now being increasingly integrated into knowledge-intensive work

(Sowa et al., 2021). The notion behind human-AI collaboration is that humans and cobots are

complementary and together they can create something novel that neither could do alone

(Feldman, 2017). Previous research has shown that cobots can cooperate directly with humans

at levels that rival human–human cooperation (Schrier et al., 2010), and that humans

collaborating with cobots perform better than humans and cobots alone (Gunser et al., 2022,

Hitsuwari et al., 2023). Human-AI collaboration takes place when humans delegate a certain

amount of agency to the AI resulting in an interdependence in their actions (Kang & Lou,

2022). Throughout the collaboration, humans negotiate agency with AI, either by accepting

AI’s agency or by exerting control over it (Kang & Lou, 2022, Sowa et al., 2021).

The literature on human-AI collaboration is growing rapidly and has generated an

extensive body of work that spans several disciplines. Indeed, empirical studies

have shown that human-AI collaboration is an important predictor of different organizational

outcomes. For example, in a set of experimental studies, De Melo and colleagues (De Melo et

al., 2017, 2018, 2019) found that human-AI collaboration can reduce the negative effects of

emotions on cooperation. In a more recent experimental study, Sowa et al. (2021) found that

humans working with cobots were substantially more productive than the non-assisted group.

Similarly, in an online experiment among groups, Takko et al. (2021) tested different setups

of human-cobot groups, and found that human-AI groups perform better but only when

humans take the lead in the group. More recently, researchers have begun to explore the

relationship between human-AI collaboration and creativity, probing into what was once

believed to be an exclusively human capability (Boden, 2009). For example, Dell'Acqua et al.

(2024) found, in a field experiment among business consultants, that human-AI collaboration

leads to higher quality of ideas, but there is a significant decrease in the variability of these

ideas compared to those generated by humans. In addition, Hitsuwari et al. (2023) found, in

an online experiment, that human–AI collaboration exhibited higher creativity in text

production. In another study, Boussioux et al. (2023) found that although human-AI

collaboration led to more valuable ideas, human ideas were more novel. All in all, existing

research offers inconclusive results on the relationship between human-AI collaboration and

creativity.

Creativity is the generation of novel and useful ideas (Amabile, 1983; West, 2002).

According to Amabile (1983), creative performance results from the confluence of

three within-individual components: domain-relevant skills, creativity-relevant processes, and

task motivation. Domain-relevant skills refer to the knowledge from a particular domain that

an individual can combine to create ideas. Creativity-relevant processes encompass a

cognitive style and personality to put existing ideas together in new combinations. Task

motivation captures the individual’s attitude towards the task. Individuals must perceive that

the task is interesting, satisfying and challenging (Amabile, 1983; 1988).

Ideas are considered the building blocks of creative performance and are commonly

assessed based on fluency, originality and feasibility (Paulus, 2000). Idea fluency refers to the

ability to produce a large number of ideas (Rietzschel & Nijstad, 2020). The more ideas

generated, the higher the idea fluency. However, idea fluency is not necessarily related to

quality of ideas. Original ideas are risky and involve uncertainty, as they often diverge from

established norms and conventional pathways (Paulus, 2000; Rietzschel et al., 2019).

Originality is often considered the most sought-after attribute of creative ideas (Nijstad et al.,

2010). Feasible ideas are those that require minimal effort or resources to implement,

alongside their anticipated effectiveness in achieving desired outcomes (Paulus, 2000;

Rietzschel et al., 2019). Overall, each dimension plays a crucial role in the creative process,

contributing to the overall effectiveness and impact of creativity.

In order to better understand the relationship between human-AI collaboration and

creativity, it may be useful to rely on a theory that has generated insights in predicting

technology success. Goodhue and Thompson’s (1995) task-technology fit (TTF) theory is one such theory.

TTF theory proposes that technology use and performance depend upon the degree of fit

between the functionalities of the technology and the characteristics of the task that must be

performed. Task–technology fit is defined as “the degree to which a technology assists an

individual in performing his or her portfolio of tasks” (Goodhue & Thompson, 1995, p. 216).

TTF posits that the alignment between task characteristics and technology characteristics

predicts the level of task-technology fit (Cai et al., 2022; Goodhue & Thompson, 1995). This

alignment, in turn, directly influences performance outcomes (Howard & Rose, 2019).

Furthermore, a high task-technology fit positively impacts user attitudes, subsequently driving

technology utilization. Increased utilization also contributes to performance outcomes.

Empirical studies have shown that TTF is suitable to explain the adoption and benefits of a

wide variety of technologies, including blockchain (Prockl et al., 2022), social media (Fu et

al., 2020), mobile applications (Kristianto, 2021), and unmanned aerial vehicles (Golizadeh et

al., 2019). TTF theory emphasizes the supporting role of technology in aiding humans to

accomplish tasks. Overall, task-technology fit theory provides a structured framework

for evaluating and understanding the relationship between technology and tasks in an

organizational context. Therefore, based on the TTF theory, we argue that creativity will

emerge from the alignment between the capabilities of AI-powered cobots and the demands of

creative tasks.

First, humans collaborating with cobots will be more creative because they will get

access to new knowledge. A strong foundation of knowledge, in a specific domain, is crucial

for generating creative ideas (Amabile, 1983). Cobots can assist humans in collecting and

synthesizing large amounts of data from a specific domain, and can perform more complex

analytical operations than is humanly possible (Sowa et al., 2021). This new knowledge will

expand the human’s set of possibilities from which a new idea can be generated.

Second, humans that collaborate with an AI-powered cobot may be more creative

because the cobot will stimulate humans’ divergent thinking. Creativity tasks require

divergent thinking to make unconventional connections, and consider alternative solutions

(Amabile, 1983). AI-powered cobots can combine information in ways that have never been

combined before (Das & Varshney, 2020), expanding the human’s cognitive flexibility. Thus,

humans collaborating with a cobot may be better equipped to meet the demands of creativity

by utilizing the cobot's input to generate ideas that are more divergent and novel.

Finally, human-AI collaboration will lead to creativity because humans will be more

intrinsically motivated to be creative. Creative tasks benefit from the individual’s intrinsic

motivation (Amabile, 1983). Motivation provides the energy and persistence needed to

navigate challenges, take risks, and explore unconventional ideas (Amabile, 1988). Cobots are

capable of learning and adapting to make humans enjoy the collaboration (Fan et al., 2017).

When individuals find a task enjoyable, they engage for longer periods, even beyond the point

at which they are rewarded (Deci, 1972). On this basis, we suggest that humans collaborating

with a cobot will be more creative than humans working alone. In sum, we argue that the

capabilities of cobots match the requirements inherent in creative tasks and, as a result,

humans will be more creative. This leads to the following hypothesis:

Hypothesis 1: Human-AI collaboration will yield higher creativity than humans

working alone will achieve.

Overview of the Present Research

To investigate the effects of human-AI collaboration on creativity, we adopted a

multiple-study approach to increase the external validity of the research. In Study 1, an

experimental study, we examined the relationship between human-AI collaboration and

several measures of creativity: idea productivity, idea feasibility, and idea originality. In

Study 2, a follow-up experimental study, we manipulated human-AI proximity on a creative

task and assessed idea productivity, idea originality and idea feasibility. In addition, we

examined the moderating role of trust on the relationship between human-AI proximity and

creativity.

Study 1

Participants and Procedure

Participants in this study were employees located in the Netherlands, Mexico and

Germany. A total of 210 participants (53% female) took part voluntarily in an online

experiment. Participants’ mean age was 30.0 years (SD = 12.68), and their average

organizational tenure was 5.26 years (SD = 7.23). Of the participants, 49.0% held a

bachelor’s degree. Respondents worked in health care (11.0%), food services (9.5%), education

(7.6%), and retail (5.7%).

Participants were invited to participate in an online study that purportedly was

building a crowdsourced collection of ideas to inspire future entrepreneurs. Employees were

informed that the study would take approximately 20 minutes to complete, that the collected

data would be treated confidentially, and that participation was voluntary. In the first part of

the experiment, participants were asked to fill in demographics. After being randomly

assigned to either the human or the human-AI condition, participants received specific

instructions. Participants in the human condition were asked to generate creative ideas for

new businesses and describe them in a short but clear manner. Participants in the human-AI

condition were asked to generate ideas together with a cobot in another window of their

web browser. After the data collection, participants were debriefed by informing

them of the purpose of the study.

A single "master" rater, blind to conditions, coded all ideas. To further ensure

reliability, a second rater, also blind to conditions, coded a sample of 321 ideas (representing

40% of the sample) to assess the inter-rater reliability (Heyman et al., 2014).

Experimental Cobot. For the experiment, we developed a cobot with the primary

objective of facilitating real-time interaction within professional environments (Rese &

Tränkner, 2024). The underlying technology of this cobot is the GPT-3.5 model, a product

of OpenAI. The cobot was set up on Streamlit to ensure a user-friendly interaction.
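The manuscript does not report the cobot’s implementation details; the sketch below illustrates the kind of conversation handling such a GPT-3.5-based cobot would need, using OpenAI’s chat-message format. The system-prompt wording and function names are illustrative assumptions, not the authors’ code; the Streamlit and API calls are indicated in comments only.

```python
# Minimal sketch of the experimental cobot's conversation state.
# ASSUMPTION: the system prompt wording is illustrative; the authors'
# actual prompts are not reported in the manuscript.

SYSTEM_PROMPT = (
    "You are a collaborative agent (cobot) helping a participant "
    "generate creative business ideas in real time."
)

def init_history():
    """Start a chat history in the OpenAI chat-completion message format."""
    return [{"role": "system", "content": SYSTEM_PROMPT}]

def add_turn(history, user_text, assistant_text):
    """Append one user/assistant exchange to the history."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

# In the deployed app, each participant message would be sent to the API
# roughly like this, with Streamlit (st.chat_input / st.chat_message)
# rendering the exchange:
#
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       messages=history,
#   ).choices[0].message.content
```

Keeping the full message history in each request is what gives the cobot continuity across the 20-minute ideation session.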

Measures

Creativity. To assess creativity, we measured idea productivity, idea originality and

idea feasibility. Idea productivity was measured by counting the total number of ideas

generated per participant. Idea originality and idea feasibility were each rated by

two raters on a Likert scale from 1 (not at all) to 5 (very much). The level of

agreement between coders was calculated using intraclass correlation (ICC3) (McGraw &

Wong, 1996). ICC3 indicated a good level of agreement for idea originality (ICC3 = .82) and

for idea feasibility (ICC3 = .84) (Cicchetti, 1994).
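The ICC(3,1) agreement index reported above can be computed from an ideas × raters matrix via a two-way ANOVA decomposition. The NumPy sketch below is illustrative, not the authors’ analysis script (which may have used a statistics package such as pingouin or SPSS):

```python
import numpy as np

def icc3(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single rater
    (McGraw & Wong, 1996). `ratings` is an (n_targets, k_raters) array."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)    # between ideas (targets)
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)    # between raters
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

Because ICC(3,1) measures consistency rather than absolute agreement, a constant offset between the two raters (one rater systematically scoring one point higher) does not lower the index.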

Results

Preliminary Analysis

Means, standard deviations and correlations among the measures employed in this

study are shown in Table 1. The independent variable, human-AI collaboration, was

significantly correlated with all three measures of creativity: idea productivity (r = .41, p <

.001), idea originality (r = .25, p < .001), and idea feasibility (r = .23, p < .001).

INSERT TABLE 1 ABOUT HERE
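Since the condition variable is a 0/1 dummy, the correlations above are point-biserial correlations, which coincide with the Pearson r. With the raw data they could be reproduced along these lines (variable names are illustrative):

```python
import numpy as np

def point_biserial(condition, outcome):
    """Pearson r between a 0/1 condition dummy and a continuous outcome;
    for a binary variable this equals the point-biserial correlation."""
    return float(np.corrcoef(np.asarray(condition, dtype=float),
                             np.asarray(outcome, dtype=float))[0, 1])

# e.g. point_biserial(condition, n_ideas) would reproduce r = .41
# given the raw data (hypothetical variable names).
```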

Hypothesis Testing

Idea productivity. A one-way analysis of variance (ANOVA) revealed that there was

a significant difference between the human and human-AI condition (F(1, 208) = 41.30, p <

.001). The mean comparison analysis revealed that participants in the human-AI condition (M

= 5.55, SD = 4.18) scored higher on idea productivity than those in the human condition (M =

2.64, SD = 2.33). The effect size was η² = 0.17, which is classified as a large effect

(Cohen, 1988).

INSERT FIGURE 1 ABOUT HERE

Idea originality. The ANOVA revealed that there was a significant

difference between the human and the human-AI condition (F(1, 208) = 14.39, p < .001).

The mean comparison analysis revealed that participants in the human-AI condition scored

higher (M = 2.38, SD = 0.84) on idea originality than those in the human condition (M = 1.92,

SD = 0.88). The effect size was η² = 0.06, which indicates a medium effect (Cohen, 1988).

INSERT FIGURE 2 ABOUT HERE

Idea feasibility. The ANOVA revealed that there was a significant difference

between the human and human-AI condition regarding idea feasibility (F(1, 208) = 41.30, p

< .001). The mean comparison analysis revealed that participants in the human-AI condition

scored higher (M = 3.50, SD = 0.64) on idea feasibility than those in the human condition (M

= 3.04, SD = 1.20). The effect size was η² = 0.05, which indicates a small effect (Cohen,

1988).

INSERT FIGURE 3 ABOUT HERE
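The one-way ANOVAs and η² values above follow a standard sums-of-squares decomposition. The NumPy sketch below is illustrative, not the authors’ analysis script; for the two-group case it returns the F statistic, degrees of freedom, and η² = SS_between / SS_total:

```python
import numpy as np

def one_way_anova_eta2(group_a, group_b):
    """Two-group one-way ANOVA. Returns (F, df_between, df_within,
    eta_squared), with eta_squared = SS_between / SS_total."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    grand = np.concatenate([a, b]).mean()
    ss_between = (len(a) * (a.mean() - grand) ** 2
                  + len(b) * (b.mean() - grand) ** 2)
    ss_within = np.sum((a - a.mean()) ** 2) + np.sum((b - b.mean()) ** 2)
    df_b, df_w = 1, len(a) + len(b) - 2
    f = (ss_between / df_b) / (ss_within / df_w)
    return f, df_b, df_w, ss_between / (ss_between + ss_within)

# The p-value would follow from the F distribution,
# e.g. scipy.stats.f.sf(f, df_b, df_w).
```

With two groups the F test is equivalent to a squared t test, so F(1, 208) here carries the same information as an independent-samples t test on the two conditions.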

Discussion Study 1 and Introduction Study 2

The results supported our hypothesis that humans collaborating with AI-powered

cobots will be more creative than humans alone. Specifically, participants in the human-AI

condition generated more ideas and scored higher in originality and feasibility than

participants in the human condition. Thus, this study highlights the importance of human-AI

collaboration for employee creativity.

In Study 1, we investigated the relationship between human-AI collaboration and

creativity; however, not all human-AI collaborations perform the same. Thus, Study 2 is a

follow-up study to further understand the relationship between human-AI collaboration and

creativity. First, we aimed to extend the findings of Study 1 by including human-AI proximity

as part of our human-AI construct to capture the different levels of collaboration (Feldman,

2017). Second, we included trust as a potential moderator to test whether the relationship

between human-AI proximity and creativity depends on another factor.

Human-AI proximity is defined as a collaborative approach in which humans and

cobots work closely together (El Zaatari et al., 2019; Sowa et al., 2021). According to El

Zaatari et al. (2019), proximity is a continuum ranging from low to high. In the

initial stage, humans and cobots do not collaborate but work independently (Feldman, 2017).

In the second stage, humans and cobots begin to complement each other by focusing on their

own expertise. In the third level of proximity, humans and cobots are interdependent,

complement each other’s strengths, and fill gaps in their collective knowledge in order to

achieve desired outcomes. Finally, in the fourth level of proximity, cobots become an

extension of the human mind, resulting in a complete dependence and full collaboration

between the two. High proximity between the human and the cobot is crucial as it ensures

well-synchronized actions, speed, efficiency, and dynamism (Hoffman, 2019).

We anticipate that high human-AI proximity will result in a greater creative output

compared to low proximity. To understand this relationship, again we draw from TTF theory.

TTF theory posits that as individuals repeatedly engage with a technology, they develop a

deeper understanding of its features, functions, and potential applications. This increased

familiarity allows users to more effectively match the capabilities of the technology with the

requirements of their tasks (Goodhue & Thompson, 1995). Through continued usage, users

often discover nuanced functionalities and optimize their workflows, leading to a more

seamless integration of the technology into their daily activities. As users become proficient,

they can exploit the full range of the technology's features, adapting it to various task

requirements.

Humans and cobots collaborating in high proximity will be more creative because they

will shorten their learning curve. We expect that humans and cobots in a high proximity

collaboration will devote more time and attention to their interactions and, as a result, learn

faster to synthesize information and generate ideas. Cobots depend on human feedback to

learn and evolve to become full-fledged collaborators (Hancock et al., 2011b). Indeed,

previous studies have shown that the more time humans spend collaborating with the cobot,

the more creative the cobot becomes (Vinanzi et al., 2021). Moreover, the performance of

human-AI collaborations is proportional to the depth and length of their learning interactions

(Pinto et al., 2021). Thus, the closer humans and cobot collaborate, the faster they learn to

generate creative ideas.

Humans and cobots engaged in a high proximity collaboration will be more creative

because they will develop a shared understanding. Humans and cobots that work closely

together will have greater opportunities to learn together, relate to one another, and,

consequently, develop a shared understanding. A shared understanding helps humans and

cobots to be aware of what the other can perceive and create a common knowledge that can

increase their chances of achieving their goals (Matarese et al., 2022). Previous studies have

shown that human-robot teams perform better when they have a shared understanding about

their overarching purpose, goals and roles (Demir et al., 2020).

Finally, high proximity will yield higher creativity because it will augment the human

and cobot’s hybrid intelligence. Hybrid intelligence is the combination of human and artificial

intelligence and exceeds either intelligence on its own (Dellerman et al., 2019b). All

human-AI collaborations entail a degree of hybrid intelligence. However, hybrid intelligence

is particularly high when humans and cobots engage in close collaboration because it

facilitates the exchange of information, feedback and data (Dellerman et al., 2019a, Ostheimer

et al., 2021). Indeed, high proximity gives the human and the cobot closer access to the

humans’ intuition, empathy, common sense, and to the AI’s consistency, speed and efficiency

(Dellerman et al., 2019c). In sum, according to the TTF theory, the iterative and experiential

nature of technology use will enhance the fit between user tasks and the capabilities of cobots,

consequently leading to increased creativity. In other words, humans in a high proximity

collaboration will be more creative than those in a low proximity collaboration. On this basis,

we hypothesized that:

Hypothesis 2: Higher proximity between humans and AI is expected to yield greater

creativity compared to lower proximity.

Trust is one of the critical factors that determine AI adoption in organizations (Glikson

& Woolley, 2020). Humans that trust cobots are better equipped to capitalize on their benefits

(Gervasi et al., 2020, Hancock et al., 2011b), which consequently can lead to higher

performance (Coronado et al., 2022). Thus, trust may help explain when the relationship

between human-AI proximity and creativity is stronger. Trust is “the attitude that an agent

will help achieve an individual’s goals in a situation characterized by uncertainty and

vulnerability” (Lee & See, 2004, p. 54). In a human-to-human collaboration, potential risks

are minimized through mutual trust (Wang et al., 2021). However, when it comes to human-

AI collaboration, trust is influenced by the perception of the cobot's ability to perform its

designated task effectively, rather than the system trusting the performance of the human

(Ashoori & Weisz, 2019; Emaminejad & Akhavian, 2022; Muir & Moray, 1996).

In this study, we argue that humans that trust their cobot counterpart will be more

creative for several reasons. First, higher trust may increase the information exchange needed

to generate ideas. Trust directly affects the willingness of humans to accept the information

produced by the cobot (Freedy et al., 2007, Hancock et al., 2011a). Moreover, trust dictates

the rate of the information exchange between humans and cobots (Pinto et al., 2022). Thus,

humans that trust cobots will obtain the cognitive resources needed to spark creativity because

they will be more willing and motivated to deepen information exchange activities. Second,

humans collaborating with cobots will be particularly creative when they trust the cobot, as

this will increase their engagement. Humans who recognize that the cobot is trustworthy will

be more motivated to reciprocate by working at their highest ability and investing more physical

and cognitive energies (Kahn, 1990). In addition, cobots are capable of understanding

humans and developing customized experiences to keep them engaged (Fan et al., 2017).

Hence, humans that trust cobots will yield higher creative output because they will invest

more effort into the creative process. Overall, we predict that the effectiveness of human-AI

collaboration in creativity will be most pronounced when humans trust the cobot.

Hypothesis 3: Trust will strengthen the relationship between human-AI proximity and

creativity, such that the relationship is stronger when trust is higher.

Study 2

Participants and Procedure

Two hundred and twenty-five respondents from the Netherlands and Germany took

part in an online experiment. Eight respondents were excluded for failing two out of two

control questions (Kittur et al., 2008). Participation required respondents to work at least 20

hours per week. Of the remaining 217 respondents, 119 were female. Participants' age ranged

from 18 to 65 with an average of 28.81 years (SD = 12.07). Participants’ tenure in the

organization was 5.15 years (SD = 7.93). Most participants worked in education (9.7%), food

service (8.8%), healthcare (8.3%), or IT (5.5%). Most of them (47.0%) had a bachelor’s

degree.

As in Study 1, participants were invited to take part in an online study that

purportedly was collecting business ideas to inspire future entrepreneurs. All participants

were informed that their participation was voluntary and anonymous, and their information

would be kept confidential. Participants provided online informed consent and were asked to

fill in demographics. Participants were randomly assigned to either the high proximity or low

proximity condition and presented with the experimental manipulation. Participants were then

asked to collaborate with a cobot to generate business ideas. Participants were given 20

minutes to complete the task. Afterwards, participants were debriefed, and thanked.

Human-AI proximity was manipulated through instructions presented at the beginning of the experiment (Rietzschel et al., 2017) and through the cobot's attitude during the task. The manipulations were constructed using the items of the human-robot collaboration fluency scale (Hoffman, 2019).

In the high proximity condition, participants were presented with instructions that encouraged them to collaborate with the cobot. Example sentences are "The agent is there to team up with you, complement and round out your capabilities" and "Involve the agent in generating ideas; both of you should contribute equally to the task." In addition, the cobot was programmed to assume an active role and to generate messages that proactively engaged the participant, for example, "Hi, let's put our minds together and generate ideas, what should we do first?" and "I'm excited to work with you."

In the low proximity (control) condition, participants were presented with instructions informing them that they would use a cobot to help them generate ideas. Example sentences are "The assistant is there to assist you and enhance your capabilities" and "Ask the agent to generate ideas, but you should contribute more to the task." In addition, the cobot was programmed to assume a supportive role and to generate messages that helped participants, for example, "Hi, I'm here to assist you to generate ideas, what do you want me to do first?" and "Happy to help."

A single "master" rater, blind to conditions, coded all ideas. To further ensure reliability, a second rater, also blind to conditions, coded a sample of 379 ideas (40% of all ideas) to assess inter-rater reliability (Heyman et al., 2014).

Experimental Cobot. For the second experiment, we employed the same cobot as in the first study. However, we migrated the cobot to PythonAnywhere to automate the response-saving process and to improve the bot's accessibility and functional efficiency.

Measures

Manipulation check human-AI proximity. The effectiveness of the human-AI proximity manipulation was assessed with 5 items based on Hoffman (2019). Participants were asked to reflect on their collaboration with the AI and to indicate their agreement with items like "The AI and I worked well synchronized together" and "The AI contributed equally to the completion of the tasks" on a scale ranging from 1 (strongly disagree) to 7 (strongly agree). Cronbach's alpha for this scale was .78.

Creativity. To assess creativity, we measured idea productivity, idea originality, and idea feasibility. Idea productivity was measured by counting the total number of ideas generated per participant. Idea originality and idea feasibility were each rated by two raters on a Likert scale from 1 (not at all) to 5 (very much). The level of agreement between coders was calculated using the intraclass correlation coefficient (ICC3) (McGraw & Wong, 1996), which indicated a good level of agreement for both idea originality (ICC3 = .85) and idea feasibility (ICC3 = .81) (Cicchetti, 1994).
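For reference, ICC(3,1) can be computed from a two-way ANOVA decomposition of an n ideas × k raters matrix (McGraw & Wong, 1996), as ICC3 = (MSR − MSE) / (MSR + (k − 1)·MSE). A Python sketch with invented ratings (not our data):

```python
def icc3(ratings):
    """ICC(3,1): two-way mixed model, consistency, single rater.

    `ratings` is a list of rows (one per idea), each holding k scores
    (one per rater). ICC3 = (MSR - MSE) / (MSR + (k - 1) * MSE), where
    MSR is the between-ideas mean square and MSE the residual mean square.
    """
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]

    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_error = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)
    mse = ss_error / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

# Hypothetical originality ratings: 6 ideas, 2 raters (1-5 scale).
ratings = [[4, 4], [2, 3], [5, 4], [1, 2], [3, 3], [4, 5]]
print(round(icc3(ratings), 2))
```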

Trust. Trust was measured with the 6-item trust scale from Merritt (2011). Items include "I have confidence in the advice given by AI" and "I believe AI is a competent performer." The items were answered on a 5-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). Cronbach's alpha for this scale was .84.
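Cronbach's alpha is likewise straightforward to compute: α = k/(k−1) · (1 − Σ item variances / variance of the total score). A stdlib Python sketch with invented item responses (not our data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """`items` is a list of k lists, each holding one item's responses
    across all participants. Uses population variances throughout."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]   # per-participant sum score
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical responses to a 3-item trust scale (1-5), 5 participants.
items = [
    [4, 2, 5, 3, 4],
    [4, 3, 5, 2, 4],
    [5, 2, 4, 3, 5],
]
print(round(cronbach_alpha(items), 2))
```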

Results

Preliminary Analysis

Means, standard deviations, and correlations among the measures employed in this study are shown in Table 2. Human-AI proximity was significantly correlated with idea productivity (r = .38, p < .001) and idea feasibility (r = .19, p < .001). Idea productivity was significantly positively correlated with idea feasibility (r = .23, p < .001). In addition, idea originality was significantly correlated with idea feasibility (r = .24, p < .001).

INSERT TABLE 2 ABOUT HERE

Hypothesis Testing

Idea productivity. A one-way analysis of variance (ANOVA) revealed a significant difference between the low and high proximity conditions (F(1, 216) = 36.74, p < .001). The mean comparison showed that participants in the high proximity condition (M = 6.89, SD = 4.02) scored higher on idea productivity than those in the low proximity condition (M = 4.06, SD = 2.79). The effect size was η2 = 0.15, which indicates a medium effect (Cohen, 1988).

INSERT FIGURE 4 ABOUT HERE

Idea originality. The ANOVA revealed no significant difference between the low and high proximity conditions (F(1, 216) = 1.11, p = .294) and a negligible effect size (η2 = 0.00). Descriptively, participants in the high proximity condition scored slightly higher (M = 3.25, SD = 0.92) on idea originality than those in the low proximity condition (M = 3.13, SD = 0.80).

INSERT FIGURE 5 ABOUT HERE

Idea feasibility. The ANOVA revealed a significant difference between the low and high proximity conditions regarding idea feasibility (F(1, 216) = 7.60, p < .05). The mean comparison showed that participants in the high proximity condition scored higher (M = 3.69, SD = 0.52) on idea feasibility than those in the low proximity condition (M = 3.42, SD = 0.86). The effect size was η2 = 0.03, which indicates a small effect (Cohen, 1988).

INSERT FIGURE 6 ABOUT HERE

To test Hypothesis 3, we ran a moderation model using the PROCESS macro developed by Hayes (2012) (Model 1). In this model, we tested the moderating effect of trust on the relationship between human-AI proximity and creativity. We present the results separately for each of the three dimensions of creativity, starting with idea originality, followed by idea feasibility, and finally idea productivity.
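Conceptually, PROCESS Model 1 reduces to an OLS regression with an interaction term, y = b0 + b1·proximity + b2·trust + b3·(proximity × trust), where the simple slope of proximity at a given trust level equals b1 + b3·trust. The sketch below reproduces that logic on invented numbers (not our dataset), using a plain normal-equations solver:

```python
def ols(X, y):
    """Solve the normal equations X'X b = X'y by Gaussian elimination."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(p)]
         for r in range(p)]
    v = [sum(X[i][r] * y[i] for i in range(n)) for r in range(p)]
    for r in range(p):                       # forward elimination w/ pivoting
        piv = max(range(r, p), key=lambda i: abs(A[i][r]))
        A[r], A[piv] = A[piv], A[r]
        v[r], v[piv] = v[piv], v[r]
        for i in range(r + 1, p):
            f = A[i][r] / A[r][r]
            for c in range(r, p):
                A[i][c] -= f * A[r][c]
            v[i] -= f * v[r]
    b = [0.0] * p
    for r in range(p - 1, -1, -1):           # back substitution
        b[r] = (v[r] - sum(A[r][c] * b[c] for c in range(r + 1, p))) / A[r][r]
    return b

# Invented toy data: proximity is a 0/1 dummy, trust is mean-centered.
prox  = [0, 0, 0, 0, 1, 1, 1, 1]
trust = [-1.0, -0.5, 0.5, 1.0, -1.0, -0.5, 0.5, 1.0]
orig  = [3.2, 3.1, 3.0, 3.1, 2.6, 2.9, 3.4, 3.8]
X = [[1.0, p_, t, p_ * t] for p_, t in zip(prox, trust)]
b0, b1, b2, b3 = ols(X, orig)
# Simple slopes of proximity at low (-1 SD) and high (+1 SD) trust:
low, high = b1 + b3 * -1.0, b1 + b3 * 1.0
# For this toy data the proximity effect is negative at low trust and
# positive at high trust, mirroring the pattern reported in the text.
```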

Idea originality. In terms of originality, the results supported our hypothesis (see Table 3). Human-AI proximity was not a significant predictor of idea originality (b = -.19, p = .129), whereas trust was a significant predictor (b = 2.12, p < .001). Importantly, the interaction between human-AI proximity and trust in predicting idea originality was significant (b = 1.62, p = .005).

INSERT TABLE 3 ABOUT HERE

Simple slopes analysis (see Figure 7) showed that human-AI proximity was negatively associated with idea originality when trust was low (i.e., one standard deviation below the mean; β = -0.48, 95% CI = [-0.82, -0.14]).

INSERT FIGURE 7 ABOUT HERE

Idea feasibility. In terms of feasibility, the results did not support our hypothesis (see Table 4). Human-AI proximity was a significant predictor of idea feasibility (b = 0.27, p = .012), whereas trust was not (b = 0.09, p = .344). The interaction between human-AI proximity and trust in predicting idea feasibility was not significant (b = -0.42, p = .379).

INSERT TABLE 4 ABOUT HERE

Idea productivity. In terms of idea productivity, the results did not support our hypothesis (see Table 5). Human-AI proximity was a significant predictor of idea productivity (b = 2.88, p < .001), whereas trust was not (b = -.40, p = .690). The interaction between human-AI proximity and trust in predicting idea productivity was not significant (b = 2.38, p = .301).

INSERT TABLE 5 ABOUT HERE

Additional Text Analyses

To gain deeper insights into the dynamics of human-AI communication and to compare language use across experimental groups, we conducted a comprehensive analysis of the text messages exchanged between human participants and the AI (chatbot). This analysis encompassed sentiment analysis, readability assessments, lexical diversity, average sentence length, and vocabulary level.

Sentiment Analysis. We utilized the VADER (Valence Aware Dictionary and sEntiment Reasoner) sentiment analysis algorithm to evaluate the affective tone of the messages, categorizing them into positive, negative, and neutral sentiments. The average sentiment scores revealed higher sentiment in AI-generated messages than in user-generated messages, with AI in the low proximity condition scoring M = 0.70 (SD = 0.36) and users in the low proximity condition M = 0.20 (SD = 0.32) (see Figure 8). In the high proximity condition, AI sentiment was M = 0.66 (SD = 0.36) and user sentiment M = 0.25 (SD = 0.33). Overall, AI sentiment was M = 0.67 (SD = 0.36) and user sentiment M = 0.23 (SD = 0.32). Notably, the correlation between human and AI sentiment was r = .318 (p < .001) in the high proximity group and r = .436 (p < .001) in the low proximity group. A Kruskal-Wallis H test indicated significant differences in sentiment scores across groups (H = 1234.869, p < .001). Post-hoc comparisons using the Mann-Whitney U test revealed a significant difference between the human low and human high proximity groups (U = 127827.5, p < .001, Cliff's Delta = -0.672), indicating higher user sentiment in the high proximity condition. Comparisons between human and AI participants within the same proximity condition were also significant (low proximity: U = 199712.0, p < .001; high proximity: U = 762612.5, p < .001).

INSERT FIGURE 8 ABOUT HERE
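To make the sentiment metric concrete: VADER assigns each message a compound score in [−1, 1]. The stdlib sketch below mimics its normalization step; the tiny lexicon and example messages are invented, and the real analyzer layers heuristics for negation, intensifiers, and punctuation over a lexicon of several thousand scored tokens.

```python
import math
import re

# Toy valence lexicon for illustration only; the real VADER lexicon
# holds thousands of human-scored tokens.
LEXICON = {"excited": 2.3, "happy": 2.7, "great": 3.1, "good": 1.9,
           "help": 1.7, "bad": -2.5, "boring": -1.3, "hate": -2.7}

def compound(text, alpha=15):
    """VADER-style compound score in [-1, 1]: sum the word valences,
    then normalize with x / sqrt(x^2 + alpha)."""
    x = sum(LEXICON.get(w, 0.0) for w in re.findall(r"[a-z']+", text.lower()))
    return x / math.sqrt(x * x + alpha)

print(round(compound("I'm excited to work with you"), 3))  # positive
print(round(compound("Let's list three ideas"), 3))        # neutral -> 0.0
```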

Readability Assessments. We employed the Flesch-Kincaid and Gunning Fog indices to assess the readability of the text. The Flesch-Kincaid index captures the text's complexity based on sentence length and word syllable count, while the Gunning Fog index evaluates the proportion of complex words, offering a view of the education level required to comprehend the text. The Kruskal-Wallis H test for Flesch-Kincaid scores was significant (H = 314.52, p < .001). Post-hoc analysis showed a significant difference between the human low and high proximity groups (U = 487432.0, p < .001, Cliff's Delta = 0.249), suggesting higher readability in the high proximity condition. Differences between human and AI within each proximity condition were also significant.

Significant differences were found in the Gunning Fog scores (H = 344.514, p < .001).

Post-hoc tests indicated significant differences between human low and high proximity groups

(U = 278699.5, p < .001, Cliff's Delta = -0.286), with high proximity showing lower

complexity in language.
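Both indices are simple functions of word, sentence, and syllable counts: Flesch-Kincaid grade = 0.39·(words/sentence) + 11.8·(syllables/word) − 15.59, and Gunning Fog = 0.4·[(words/sentence) + 100·(share of 3+ syllable words)]. A stdlib sketch with a rough vowel-group syllable counter (production tools use pronunciation dictionaries); the sample sentences are invented:

```python
import re

def _syllables(word):
    """Rough syllable count: number of vowel groups, minimum 1."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    wps = len(words) / len(sentences)               # words per sentence
    spw = sum(_syllables(w) for w in words) / len(words)
    complex_share = sum(_syllables(w) >= 3 for w in words) / len(words)
    fk_grade = 0.39 * wps + 11.8 * spw - 15.59      # Flesch-Kincaid grade
    fog = 0.4 * (wps + 100 * complex_share)         # Gunning Fog index
    return fk_grade, fog

fk, fog = readability("We should open a bicycle repair cafe. "
                      "Customers drink coffee while we fix their bikes.")
# fk is roughly a US grade level; fog estimates years of schooling needed.
```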

Lexical Diversity. To assess the variety of vocabulary used in the interactions, we

calculated the lexical diversity. This was achieved by measuring the ratio of unique words to

the total number of words, providing an indication of the breadth of vocabulary employed by

both humans and the AI. The Kruskal-Wallis H test showed significant variation in lexical

diversity (H = 897.296, p < .001). The lexical diversity was significantly higher in the human

high proximity group compared to the human low proximity group (U = 598382.0, p < .001,

Cliff's Delta = 0.533).
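The type-token ratio underlying this measure is a one-liner; the example message below is invented. Note that raw TTR is sensitive to text length, so comparisons are most meaningful between messages of similar length:

```python
import re

def lexical_diversity(text):
    """Type-token ratio: unique words / total words (case-folded)."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words)

msg = "a cafe where a barista fixes bikes while customers drink coffee"
print(round(lexical_diversity(msg), 2))  # 10 unique words out of 11
```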

Average Sentence Length. We calculated the average sentence length in the text

messages to understand the syntactic complexity, where longer sentences generally indicate

more intricate information structure and conveyance. Significant differences in average

sentence length were observed (H = 498.82, p < .001). The human high proximity group used

shorter sentences compared to the human low proximity group (U = 233451.5, p < .001,

Cliff's Delta = -0.402).

Vocabulary Level. We analyzed the level of vocabulary used, classifying words into

basic, intermediate, and advanced categories. This helped in assessing the sophistication of

the language used in the interactions and the potential cognitive load on the participants.

Results show significant differences in vocabulary level (H = 887.555, p < .001). The human

high proximity group showed a higher vocabulary level compared to the human low

proximity group (U = 166428.0, p < .001, Cliff's Delta = -0.574).

INSERT FIGURE 9 ABOUT HERE
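The Mann-Whitney U statistic and Cliff's delta used throughout these comparisons both reduce to counting pairwise wins, losses, and ties between the two samples. A stdlib sketch on invented score lists:

```python
def mann_whitney_cliffs(x, y):
    """Return the U statistic for sample x and Cliff's delta in [-1, 1].
    delta > 0 means values in x tend to exceed those in y."""
    gt = sum(a > b for a in x for b in y)    # pairs where x wins
    lt = sum(a < b for a in x for b in y)    # pairs where y wins
    ties = len(x) * len(y) - gt - lt
    u = gt + 0.5 * ties                      # U for x, ties counted half
    delta = (gt - lt) / (len(x) * len(y))
    return u, delta

low  = [0.1, 0.2, 0.2, 0.3]                  # e.g. sentiment, low proximity
high = [0.2, 0.4, 0.5, 0.6]                  # e.g. sentiment, high proximity
u, delta = mann_whitney_cliffs(low, high)
# delta = -0.75: low-proximity scores tend to fall below high-proximity ones.
```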

Discussion Study 2

The results of this study partly support our prediction that humans are more creative at higher levels of proximity. Human-AI proximity led to higher idea productivity and idea feasibility, but not to higher idea originality. A potential explanation is that high proximity may have diminished humans' intraindividual processes. Originality, at the individual level, depends on high individualism and intellectual autonomy (Rank et al., 2004), and an accentuated emphasis on collaboration may have increased humans' reliance on the cobot rather than on their own intraindividual processes. Another potential explanation is that humans and the cobot may have devoted more time to developing the collaboration than to carrying out an in-depth analysis of potential ideas. According to Tuckman's (1965) team development model, it is not until the final stage, once members have gained enough knowledge of each other, that teams focus on performing. Similarly, participants may have spent their time getting familiar with the cobot rather than exploring ideas. This is in line with findings showing that idea originality results from deep exploration of specific domain knowledge (Rank et al., 2004).

Results partially support the hypothesis that trust moderates the relationship between human-AI proximity and creativity. Trust moderated the relationship between human-AI proximity and idea originality. However, simple slopes analysis revealed that proximity negatively affected idea originality only when trust was low. In other words, human-AI proximity undermines originality when trust in the cobot is low. A potential explanation is that participants with low trust may have felt the need to rely on their own originality and, as a consequence, may not have fully benefited from the collaborative advantages offered by cobots. This is in line with research showing that if AI does not behave as expected, humans may avoid collaboration or limit interaction, hindering the potential benefits of human-AI collaboration (Dietvorst et al., 2015; Jessup et al., 2020). Another potential explanation is that trust in cobots requires time to develop, whereas distrust can have immediate negative effects. Indeed, previous research shows that humans are especially sensitive to distrust-inducing behaviors performed by robots (Jessup et al., 2020). In addition, trust did not strengthen the relationship between human-AI proximity and idea productivity or idea feasibility. One reason might be that participants did not experience the high levels of risk and uncertainty typically associated with creativity (Beghetto, 2021). In such situations, trust facilitates the exchange needed to deal with the uncertainty and risks associated with creativity (Madjar et al., 2011). However, trust in the cobot might not have been relevant for participants because the experiment, unlike a real-world situation, did not expose them to high risk or uncertainty. In sum, increasing proximity has further benefits for idea productivity and idea feasibility, but may come at the cost of lower originality when the AI is not trusted.

The text analytics results offer additional insights into the dynamics of human-AI collaboration. The sentiment analysis indicates that cobots, in both low and high proximity conditions, maintained a more consistently positive sentiment than human users. This suggests that the AI's communication style remains steady regardless of proximity, potentially providing a stabilizing influence in the collaborative process. However, the higher sentiment correlation in the low proximity group hints that in less integrated collaborations, human users' sentiments align more closely with the AI's, perhaps reflecting a more transactional interaction in which human sentiments are driven by AI responses.

The significant differences in readability scores (Flesch-Kincaid and Gunning Fog indices) between human and AI users across proximity conditions suggest that the complexity and sophistication of the language used by AI and humans vary with the level of collaboration. Higher proximity might demand more complex language use, possibly indicating a deeper, more nuanced exchange of ideas, which could benefit idea productivity and feasibility but not necessarily originality. This aligns with the idea that while close collaboration fosters the productivity and feasibility of ideas, it may not always encourage the intellectual autonomy required for original thinking. In such collaborations, the focus might lie more on idea development and refinement than on the generation of novel concepts. Additionally, the impact of trust on these dynamics is evident, particularly in how it moderates the relationship between proximity and idea originality. Trust appears to be a crucial factor in enabling the full potential of human-AI collaboration, especially in the context of generating original ideas.

General Discussion

A growing body of research has identified the role of human-AI collaboration in influencing organizational, team, and individual outcomes (e.g., Sowa et al., 2021; Takko et al., 2021). This research is the first, to our knowledge, to focus on the relationship between human-AI collaboration and creativity. Our first study showed that human-AI collaboration leads to higher idea productivity, idea originality, and idea feasibility. The results of our second study provide initial support for the notion that high human-AI proximity leads to higher idea productivity and idea feasibility than general human-AI collaboration, and that low trust has a negative effect on the relationship between human-AI proximity and idea originality. Taken together, these findings offer several theoretical and practical implications and suggest avenues for future research.

Theoretical Implications

Our study extends the theoretical development of human-AI collaboration in several ways. First, our work responds to the call for more studies on the cognitive impact of cobots (Guertler et al., 2023; Wilson & Daugherty, 2018) and points to the relevance of TTF theory in explaining human-AI collaboration as an important predictor of creativity. While creativity is considered not just the highest form of human intelligence but an exclusively human capacity (Boden, 1998; Boden, 2009; Cai et al., 2022), our results showed that human-AI collaboration can enhance human creativity. Specifically, human-AI collaboration stimulates the number, feasibility, and originality of ideas. Moreover, bringing together the literatures on human-AI collaboration and creativity offers a fruitful avenue for future research.

Second, our research may also have theoretical implications for the conceptualization of human-AI collaboration. In our study, we made use of the distinction between high and low human-AI proximity. Our findings show that high proximity between the human and the cobot yields higher idea productivity and idea feasibility but not higher idea originality. This is in line with research showing that not all human-AI collaboration configurations perform the same (e.g., Goldberg, 2019; Pinto et al., 2022; Vinanzi et al., 2021). In sum, these findings illustrate that to fully understand the influence of human-AI collaboration on creativity, a look at different collaborative approaches is necessary.

Finally, the results of the moderation analysis offer additional insight into the relationship between human-AI collaboration and creativity. Our findings suggest that human-AI proximity can diminish idea originality when trust in the cobot is low. However, contrary to our expectations, trust did not affect the relationship between human-AI collaboration and idea productivity or idea feasibility. This suggests that human-AI collaboration alone may already be sufficient for generating more, and more feasible, ideas. This is in line with studies suggesting that trust in cobots is primarily based on the cobot's ability to perform tasks effectively and not necessarily on exploring alternatives (e.g., Amabile, 1996; El Zaatari et al., 2019; Muir & Moray, 1996).

Practical Implications

Our research has several practical implications for managers. First, our findings suggest that employees who collaborate with AI-powered cobots will be more successful in terms of idea productivity, feasibility, and originality. Organizations may therefore redesign processes where creativity is needed to incorporate cobots. Incorporating cobots in organizational processes will require employees to develop a new set of skills. Thus, organizations may want to develop training programs to ensure employees acquire the skills needed to collaborate with cobots, such as prompting, curating, and cobot training. Indeed, the adoption of human-AI processes demands new skills from employees (Wilson & Daugherty, 2018). Second, our findings suggest that organizations can foster trust in cobots to prevent harm to idea originality. In practice, several managerial instruments can be employed to foster trust in cobots. For instance, organizations could design training and development programs that focus on AI transparency and cobots' potential applications; knowing what a cobot can do is one of the key determinants of trust formation (Mara et al., 2021). Trust should be the foundation on which human-AI collaboration is built. Finally, our results suggest that proximity in human-AI collaboration may be beneficial for idea productivity and idea feasibility. Consequently, organizations should strive to promote a culture that goes beyond mere human-AI collaboration and fosters proximity with cobots.

Limitations and Future Research

Like most studies, the present set of studies is not without limitations, and the findings should be interpreted in that light. First, both of our studies were experimental and could raise questions concerning external validity. Although we used an employee sample in both studies, it is unclear whether these results can be replicated in a natural environment. The experimental design was particularly appropriate for testing the relationship between human-AI collaboration and creativity because it allows causal conclusions (Antonakis et al., 2010) and allowed us to use objective measures of human-AI performance. Nevertheless, even though we obtained experimental evidence of the relationship between human-AI collaboration and creativity, future research should attempt to replicate and extend our findings in a field setting. Second, the use of convenience samples in both studies raises concerns regarding representativeness and generalizability. A strong point of these samples, however, is that they comprised employees from a wide range of organizations. In addition, convenience sampling is an appropriate strategy for testing theoretical relationships and mechanisms (Highhouse & Gillespie, 2009; Landers & Behrend, 2015). Nevertheless, future research should consider random sampling to ensure the generalizability of the results.

Conclusion

The current research contributes to the literature on human-AI collaboration and creativity. Our study focused attention on the determinant role of human-AI collaboration in creativity and examined the moderating role of trust in this relationship. Further, we distinguished between general human-AI collaborations and high proximity human-AI collaborations. As cobots continue to evolve, their potential applications in organizations are likely to expand, creating new opportunities for competitive advantage. It is hoped that the preliminary findings reported in this paper will stimulate future research on the relationship between human-AI collaboration and creativity.

Acknowledgment

The authors would like to thank Kuba Białczyk for his assistance in this project.

References

Ajoudani, A., Zanchettin, A. M., Ivaldi, S., Albu-Schäffer, A., Kosuge, K., & Khatib,

O. (2018). Progress and prospects of the human–robot collaboration. Autonomous Robots,

42(4), 957-975. https://doi.org/10.1007/s10514-017-9677-2

Amabile, T. M. (1983). The social psychology of creativity: A componential

conceptualization. Journal of Personality and Social Psychology, 45(2), 357-376.

https://doi.org/10.1037/0022-3514.45.2.357

Amabile, T. M. (1988). A model of creativity and innovation in organizations. In B. S. Cummings (Ed.), Research in organizational behavior. Greenwich, CT: JAI Press.

Amabile, T. M. (1996). Creativity in context: Update to the social psychology of creativity. Boulder, CO: Westview Press.

Antonakis, J., Bendahan, S., Jacquart, P., & Lalive, R. (2010). On making causal claims: A review and recommendations. The Leadership Quarterly, 21(6), 1086-1120. https://doi.org/10.1016/j.leaqua.2010.10.010

Ashoori, M., & Weisz, J. D. (2019). In AI we trust? Factors that influence trustworthiness of AI-infused decision-making processes. arXiv preprint arXiv:1912.02675. http://arxiv.org/abs/1912.02675

Baker, A. L., Phillips, E. K., Ullman, D., & Keebler, J. R. (2018). Toward an

understanding of trust repair in human-robot interaction: Current research and future

directions. ACM Transactions on Interactive Intelligent Systems (TiiS), 8(4), 1-30.

https://doi.org/10.1145/3181671

Beghetto, R. A. (2021). There is no creativity without uncertainty: Dubito ergo creo.

Journal of Creativity, 31, 1-5. https://doi.org/10.1037/aca0000323

Boden, M. A. (2009). Computer models of creativity. AI Magazine, 30(3), 23-34.

https://doi.org/10.1609/aimag.v30i3.2254

Boden, M. A. (1998). Creativity and artificial intelligence. Artificial Intelligence,

103(1-2), 347-356. https://doi.org/10.1016/s0004-3702(98)00055-1

Boussioux, L., Lane, J. N., Zhang, M., Jacimovic, V., & Lakhani, K. R. (2023). The crowdless future? How generative AI is shaping the future of human crowdsourcing.

Cai, J., Li, Z., Dou, Y., Li, T., & Yuan, M. (2022). Understanding adoption of high

off-site construction level technologies in construction based on the TAM and TTF.

Engineering, Construction and Architectural Management, 30(10), 4978-5006.

https://doi.org/10.1108/ECAM-07-2021-0613

Cicchetti, D. V. (1994). Guidelines, criteria, and rules of thumb for evaluating normed

and standardized assessment instruments in psychology. Psychological Assessment, 6(4), 284-

290. https://doi.org/10.1037/1040-3590.6.4.284

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum.

Colton, S., & Wiggins, G. A. (2012). Computational creativity: The final frontier? In L. De Raedt et al. (Eds.), ECAI 2012 (pp. 21-26). IOS Press.

Coronado, E., Kiyokawa, T., Ricardez, G. A. G., Ramirez-Alpizar, I. G., Venture, G.,

& Yamanobe, N. (2022). Evaluating quality in human-robot interaction: A systematic search

and classification of performance and human-centered factors, measures and metrics towards

an industry 5.0. Journal of Manufacturing Systems, 63, 392-410.

https://doi.org/10.1016/j.jmsy.2022.04.007

Daisley, B. (2020). Don’t Let Your Obsession with Productivity Kill Your Creativity.

Harvard Business Review, 10.

Das, P., & Varshney, L. R. (2020). Explaining artificial intelligence generation and

creativity: Human interpretability for novel ideas and artifacts. IEEE Signal Processing

Magazine, 39(4), 85-95. https://doi.org/10.1109/MSP.2022.3141365

Deci, E. L. (1972). The effects of contingent and noncontingent rewards and controls

on intrinsic motivation. Organizational Behavior and Human Performance, 8(2), 217-229.

https://doi.org/10.1016/0030-5073(72)90047-5

Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K.,

Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged

technological frontier: Field experimental evidence of the effects of AI on knowledge worker

productivity and quality. Harvard Business School Technology & Operations Mgt. Unit

Working Paper, (24-013).

Dellermann, D.; Calma, A.; Lipusch, N.; Weber, T.; Weigel, S.; & Ebel, P. (2019a,

January 8-11). The future of human-AI collaboration: A taxonomy of design knowledge for

hybrid intelligence systems. 52nd Hawaii International Conference on System Sciences

(HICSS). Maui, HI, United States.

Dellermann, D.; Ebel, P.; Söllner, M.; & Leimeister, J. M. (2019b). Hybrid

intelligence. Business & Information Systems Engineering, 61, 637-643.

https://doi.org/10.1007/s12599-019-00595-2

Dellermann, D.; Lipusch, N.; Ebel, P.; & Leimeister, J. M. (2019c). Design principles

for a hybrid intelligence decision support system for business model validation. Electronic

Markets, 29, 423-441. https://doi.org/10.1007/s12525-018-0309-2

De Melo, C. M.; Marsella, S. & Gratch, J. (2017) Increasing fairness by delegating

decisions to autonomous agents. In Proceedings of the International Joint Conference on

Autonomous Agents and Multiagent Systems, AAMAS, Brazil, 1, 419–425.

https://dl.acm.org/doi/10.5555/3091125.3091188.

De Melo, C. M.; Marsella, S.; & Gratch, J. (2018). Social decisions and fairness

change when people’s interests are represented by autonomous agents. Autonomous Agents

and Multi-Agent Systems, 32, 163–187. https://doi.org/10.1007/s10458-017-9376-6

De Melo, C. M.; Marsella, S.; & Gratch, J. (2019). Human cooperation when acting

through autonomous machines. Proceedings of the National Academy of Sciences, 116(9),

3482–3487. https://doi.org/10.1073/pnas.1817656116

Demir, M.; McNeese, N. J.; & Cooke, N. J. (2020). Understanding human-robot teams

in light of all-human teams: Aspects of team interaction and shared cognition. International

Journal of Human-Computer Studies, 140, 102436.

https://doi.org/10.1016/j.ijhcs.2020.102436

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People

erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology:

General, 144(1), 114-127. https://doi.org/10.1037/xge0000033

Emaminejad, N., & Akhavian, R. (2022). Trustworthy AI and robotics: Implications

for the AEC industry. Automation in Construction, 139, 1-15.

https://doi.org/10.1016/j.autcon.2022.104298

El Zaatari, S., Marei, M., Li, W., & Usman, Z. (2019). Cobot programming for

collaborative industrial tasks: An overview. Robotics and Autonomous Systems, 116, 162-180.

https://doi.org/10.1016/j.robot.2019.03.003

Fan, L.; Scheutz, M.; Lohani, M.; McCoy, M.; & Stokes, C. (2017). Do we need

emotionally intelligent artificial agents? First results of human perceptions of emotional

intelligence in humans compared to robots. In: Beskow, J., Peters, C., Castellano, G.,

O'Sullivan, C., Leite, I., Kopp, S. (eds) Intelligent Virtual Agents. IVA 2017. Springer, Cham.

https://doi.org/10.1007/978-3-319-67401-8_15

Feldman, S. (2017). Co-creation: Human and AI collaboration in creative expression. In Electronic Visualisation and the Arts (EVA 2017), Proceedings of EVA, United Kingdom, 422-429. https://doi.org/10.14236/ewic/EVA2017.84

Fountaine, T., McCarthy, B., & Saleh, T. (2019). Building the AI-powered

organization. Harvard Business Review, 97(4), 62-73.

Freedy, A.; de Visser, E.; Weltman, G.; & Coeyman, N. (2007). Measurement of trust

in human-robot collaboration. Proceedings of the 2007 International Conference on

Collaborative Technologies and Systems, USA, 106–114.

https://doi.org/10.1109/CTS.2007.4621745.

Fu, J., Shang, R. A., Jeyaraj, A., Sun, Y., & Hu, F. (2020). Interaction between task characteristics and technology affordances: Task-technology fit and enterprise social media usage. Journal of Enterprise Information Management, 33(1), 1-22. https://doi.org/10.1108/JEIM-04-2019-0105

Gervasi, R., Mastrogiacomo, L., & Franceschini, F. (2020). A conceptual framework to evaluate human-robot collaboration. The International Journal of Advanced Manufacturing Technology, 108, 841-865. https://doi.org/10.1007/s00170-020-05363-1

Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627-660. https://doi.org/10.5465/annals.2018.0057

Goldberg, K. (2019). Robots and the return to collaborative intelligence. Nature Machine Intelligence, 1(1), 2-4. https://doi.org/10.1038/s42256-018-0008-x

Golizadeh, H., Hosseini, M. R., Edwards, D. J., Abrishami, S., Taghavi, N., & Banihashemi, S. (2019). Barriers to adoption of RPAs on construction projects: A task-technology fit perspective. Construction Innovation, 19(2), 149-169. https://doi.org/10.1108/CI-09-2018-0074

Goodhue, D. L., & Thompson, R. L. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213-236. https://doi.org/10.2307/249689

Grewal, D., Guha, A., Satornino, C. B., & Schweiger, E. B. (2021). Artificial intelligence: The light and the darkness. Journal of Business Research, 136, 229-236. https://doi.org/10.1016/j.jbusres.2021.07.043

Guertler, M., Tomidei, L., Sick, N., Paul, G., Carmichael, M., Hernandez Moreno, V., & Hussain, S. (2023). When is a robot a cobot? Moving beyond manufacturing and arm-based cobot manipulators. Proceedings of the Design Society, 3, 3889-3898. https://doi.org/10.1017/pds.2023.390

Gunser, V. E., Gottschling, S., Brucker, B., Richter, S., Çakir, D., & Gerjets, P. (2022). The pure poet: How good is the subjective credibility and stylistic quality of literary short texts written with an artificial intelligence tool as compared to texts written by human authors? Proceedings of the Annual Meeting of the Cognitive Science Society, Ireland, 44, 1744-1750. https://doi.org/10.18653/v1/2022.in2writing-1.8

Gupta, A., Murali, A., Gandhi, D. P., & Pinto, L. (2018). Robot learning in homes: Improving generalization and reducing dataset bias. Advances in Neural Information Processing Systems, 31, 1-11. https://doi.org/10.48550/arXiv.1807.07049

Hancock, P. A., Billings, D. R., & Schaefer, K. E. (2011a). Can you trust your robot? Ergonomics in Design, 19(3), 24-29. https://doi.org/10.1177/1064804611415045

Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., De Visser, E. J., & Parasuraman, R. (2011b). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517-527. https://doi.org/10.1177/0018720811417254

Hayes, A. F. (2012). PROCESS: A versatile computational tool for observed variable mediation, moderation, and conditional process modeling [White paper]. Retrieved from http://www.afhayes.com/public/process2012.pdf

Heyman, R. E., Lorber, M. F., Eddy, J. M., & West, T. V. (2014). Behavioral observation and coding. In H. T. Reis & C. M. Judd (Eds.), Handbook of Research Methods in Social and Personality Psychology (pp. 345-372). New York: Cambridge University Press.

Highhouse, S., & Gillespie, J. Z. (2009). Do samples really matter that much? In C. E. Lance & R. J. Vandenberg (Eds.), Statistical and Methodological Myths and Urban Legends: Doctrine, Verity and Fable in the Organizational and Social Sciences (pp. 247-265). New York: Routledge.

Hitsuwari, J., Ueda, Y., Yun, W., & Nomura, M. (2023). Does human–AI collaboration lead to more creative art? Aesthetic evaluation of human-made and AI-generated haiku poetry. Computers in Human Behavior, 139, 107502. https://doi.org/10.1016/j.chb.2022.107502

Hoffman, G. (2019). Evaluating fluency in human–robot collaboration. IEEE Transactions on Human-Machine Systems, 49(3), 209-218. https://doi.org/10.1109/THMS.2019.2904558

Howard, M. C., & Rose, J. C. (2019). Refining and extending task–technology fit theory: Creation of two task–technology fit scales and empirical clarification of the construct. Information & Management, 56(6), 103-134. https://doi.org/10.1016/j.im.2018.12.002

Jessup, S. A., Gibson, A., Capiola, A. A., Alarcon, G. M., & Borders, M. (2020). Investigating the effect of trust manipulations on affect over time in human-human versus human-robot interactions. Proceedings of the 53rd Hawaii International Conference on System Sciences, USA, 1-10. https://doi.org/10.24251/HICSS.2020.068

Kahn, W. A. (1990). Psychological conditions of personal engagement and disengagement at work. Academy of Management Journal, 33(4), 692-724. https://doi.org/10.2307/256287

Kang, H., & Lou, C. (2022). AI agency vs. human agency: Understanding human–AI interactions on TikTok and their implications for user engagement. Journal of Computer-Mediated Communication, 27(5), 1-13. https://doi.org/10.1093/jcmc/zmac014

Keshvarparast, A., Battini, D., Battaia, O., & Pirayesh, A. (2023). Collaborative robots in manufacturing and assembly systems: Literature review and future research agenda. Journal of Intelligent Manufacturing, 1-54. https://doi.org/10.1007/s10845-023-02137-w

Kittur, A., Chi, E. H., & Suh, B. (2008). Crowdsourcing user studies with Mechanical Turk. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, USA, 453-456. https://doi.org/10.1145/1357054.1357127

Knudsen, M., & Kaivo-Oja, J. (2020). Collaborative robots: Frontiers of current literature. Journal of Intelligent Systems: Theory and Applications, 3(2), 13-20. https://doi.org/10.38016/jista.682479

Kristianto, Y. E. (2021). Strategy of technology acceptance model utilization for Halodoc, a telehealth mobile application with task technology fit as moderator variable. International Journal of Innovative Science and Research Technology, 6(8), 192-201.

Landers, R. N., & Behrend, T. S. (2015). An inconvenient truth: Arbitrary distinctions between organizational, Mechanical Turk, and other convenience samples. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8, 142-164. https://doi.org/10.1017/iop.2015.13

Lauraéus, T., Kaivo-oja, J., Knudsen, M. S., & Kuokkanen, K. (2021). Market structure analysis with Herfindahl-Hirchman Index and Lauraéus-Kaivo-oja Indices in the global cobotics markets. Economics and Culture, 18(1), 70-81. https://doi.org/10.2478/jec-2021-0006

Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80. https://doi.org/10.1518/hfes.46.1.50_30392

Madjar, N., Greenberg, E., & Chen, Z. (2011). Factors for radical creativity, incremental creativity, and routine, noncreative performance. Journal of Applied Psychology, 96(4), 730-743. https://doi.org/10.1037/a0022416

Maedche, A., Legner, C., Benlian, A., Berger, B., Gimpel, H., Hess, T., Hinz, O., Morana, S., & Söllner, M. (2019). AI-based digital assistants. Business & Information Systems Engineering, 61(4), 535-544. https://doi.org/10.1007/s12599-019-00600-8

Mara, M., Meyer, K., Heiml, M., Pichler, H., Haring, R., Krenn, B., Gross, S., Reiterer, B., & Layer-Wagner, P. (2021). CoBot Studio VR: A virtual reality game environment for transdisciplinary research on interpretability and trust in human-robot collaboration. International Workshop on Virtual, Augmented, and Mixed-Reality for Human-Robot Interaction (VAM-HRI 2021), Boulder, CO, United States.

Matarese, M., Rea, F., & Sciutti, A. (2022). Perception is only real when shared: A mathematical model for collaborative shared perception in human-robot interaction. Frontiers in Robotics and AI, 9, 1-16. https://doi.org/10.3389/frobt.2022.733954

Maurtua, I., Ibarguren, A., Kildal, J., Susperregi, L., & Sierra, B. (2017). Human–robot collaboration in industrial applications: Safety, interaction and trust. International Journal of Advanced Robotic Systems, 14(4), 1-10. https://doi.org/10.1177/1729881417716010

McGraw, K. O., & Wong, S. P. (1996). Forming inferences about some intraclass correlation coefficients. Psychological Methods, 1(1), 30-46. https://doi.org/10.1037/1082-989X.1.1.30

Merritt, S. M. (2011). Affective processes in human–automation interactions. Human Factors, 53(4), 356-370. https://doi.org/10.1177/0018720811411912

Muir, B. M., & Moray, N. (1996). Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation. Ergonomics, 39(3), 429-460. https://doi.org/10.1080/00140139608964474

Nijstad, B. A., De Dreu, C. K. W., Rietzschel, E. F., & Baas, M. (2010). The dual pathway to creativity model: Creative ideation as a function of flexibility and persistence. European Review of Social Psychology, 21, 34-77. https://doi.org/10.1080/10463281003765323

Ostheimer, J., Chowdhury, S., & Iqbal, S. (2021). An alliance of humans and machines for machine learning: Hybrid intelligent systems and their design principles. Technology in Society, 66, 1-11. https://doi.org/10.1016/j.techsoc.2021.101647

Paulus, P. B. (2000). Groups, teams, and creativity: The creative potential of idea-generating groups. Applied Psychology, 49(2), 237-262. https://doi.org/10.1111/1464-0597.00013

Pinto, A., Sousa, S., Simões, A., & Santos, J. (2022). A trust scale for human-robot interaction: Translation, adaptation, and validation of a human computer trust scale. Human Behavior and Emerging Technologies, 1-12. https://doi.org/10.1155/2022/6437441

Prockl, G., Roeck, D., Jensen, T., Mazumdar, S., & Mukkamala, R. R. (2022). Beyond task-technology fit: Exploring network value of blockchain technology based on two supply chain cases. Proceedings of the Hawaii International Conference on System Sciences 2022 (HICSS 55), USA, 1-10.

Rank, J., Pace, V. L., & Frese, M. (2004). Three avenues for future research on creativity, innovation, and initiative. Applied Psychology, 53(4), 518-528. https://doi.org/10.1111/j.1464-0597.2004.00185.x

Rese, A., & Tränkner, P. (2024). Perceived conversational ability of task-based chatbots – Which conversational elements influence the success of text-based dialogues? International Journal of Information Management, 74, 1-20. https://doi.org/10.1016/j.ijinfomgt.2023.102699

Rietzschel, E. F., & Nijstad, B. A. (2020). Group creativity. In M. Runco & S. Pritzker (Eds.), Encyclopedia of Creativity (3rd ed., Vol. 1, pp. 562-568). Academic Press. https://doi.org/10.1016/B978-0-12-8093245.06200-3

Rietzschel, E. F., Nijstad, B. A., & Stroebe, W. (2019). Why great ideas are often overlooked: A review and theoretical analysis of research on idea evaluation and selection. In P. B. Paulus & B. A. Nijstad (Eds.), The Oxford Handbook of Group Creativity (pp. 179-197). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190648077.013.11

Rietzschel, E. F., Wisse, B., & Rus, D. (2017). Puppet masters in the lab: Experimental methods in leadership research. In B. Schyns, R. J. Hall, & P. Neves (Eds.), Handbook of Methods in Leadership Research (pp. 48-72). Cheltenham: Edward Elgar Publishing.

Schrier, T., Erdem, M., & Brewer, P. (2010). Merging task-technology fit and technology acceptance models to assess guest empowerment technology usage in hotels. Journal of Hospitality and Tourism Technology, 1(3), 201-217. https://doi.org/10.1108/17579881011078340

Sowa, K., Przegalinska, A., & Ciechanowski, L. (2021). Cobots in knowledge work: Human–AI collaboration in managerial professions. Journal of Business Research, 125, 135-142. https://doi.org/10.1016/j.jbusres.2020.11.038

Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human–AI interaction (HAII). Journal of Computer-Mediated Communication, 25(1), 74-88. https://doi.org/10.1093/jcmc/zmz026

Takko, T., Bhattacharya, K., Monsivais, D., & Kaski, K. (2021). Human-agent coordination in a group formation game. Scientific Reports, 11(1), 1-9. https://doi.org/10.1038/s41598-021-90123-8

Tangen, S. (2005). Demystifying productivity and performance. International Journal of Productivity and Performance Management, 54(1), 34-46. https://doi.org/10.1108/17410400510571437

Tuckman, B. W. (1965). Developmental sequence in small groups. Psychological Bulletin, 63(6), 384-399. https://doi.org/10.1037/h0022100

Vinanzi, S., Cangelosi, A., & Goerick, C. (2021). The collaborative mind: Intention reading and trust in human-robot interaction. iScience, 24(2), 1-16. https://doi.org/10.1016/j.isci.2021.102130

Wang, X., Wong, Y. D., Chen, T., & Yuen, K. F. (2021). Adoption of shopper-facing technologies under social distancing: A conceptualisation and an interplay between task-technology fit and technology trust. Computers in Human Behavior, 124, 1-13. https://doi.org/10.1016/j.chb.2021.106900

West, M. A. (2002). Sparkling fountains or stagnant ponds: An integrative model of creativity and innovation implementation in work groups. Applied Psychology: An International Review, 51, 355-387. https://doi.org/10.1111/1464-0597.00951

Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 114-123.

Table 1: Correlations and Descriptive Statistics Study 1.

M SD 1 2 3 4
1. Human-AI collaboration 0.50 0.50 -
2. Idea productivity 4.26 0.38 .41** -
3. Idea originality 9.48 5.33 .25** .20** -
4. Idea feasibility 3.15 0.83 .23** .19** .16* -
*p < .05; **p < .001

Table 2: Correlations and Descriptive Statistics Study 2.

M SD 1 2 3 4 5
1. Human-AI proximity 1.48 0.50 -
2. Idea productivity 5.42 3.71 .38** -
3. Idea originality 3.19 0.86 -.07 -.06 -
4. Idea feasibility 3.55 0.73 .19** .23** .24** -
5. Trust 1.85 0.24 .27** .04 .00 .01 -
*p < .05; **p < .001

Table 3. Moderation Analysis of the Human-AI Proximity and Originality Relationship

B SE t p Adj. R2 Model F p
Intercept 3.14 0.06 51.87 <0.001 0.04 3.11 <0.05
Human-AI proximity -0.19 0.12 -1.52 0.129
Trust 0.42 0.28 1.50 0.136
Human-AI proximity x trust 1.62 0.57 2.85 0.005
Note. Bootstrap sample size = 10,000; level of confidence interval = 95%; unstandardized
regression coefficients.

Table 4. Moderation Analysis of the Human-AI Proximity and Feasibility Relationship

B SE t p Adj. R2 Model F p
Intercept 3.56 0.05 69.74 <0.001 0.04 3.00 0.031
Human-AI proximity 0.26 0.10 2.54 0.012
Trust 0.81 0.24 0.34 0.731
Human-AI proximity x trust -0.42 0.48 -0.88 0.378

Note. Bootstrap sample size = 10,000; level of confidence interval = 95%; unstandardized
regression coefficients.
Table 5. Moderation Analysis of the Human-AI Proximity and Productivity Relationship

B SE t p Adj. R2 Model F p
Intercept 5.34 0.24 21.84 <0.001 0.15 12.89 <0.001
Human-AI proximity 2.88 0.49 5.86 <0.001
Trust -0.45 1.13 -0.40 0.690
Human-AI proximity x trust 2.38 2.30 1.04 0.301
Note. Bootstrap sample size = 10,000; level of confidence interval = 95%; unstandardized
regression coefficients.
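The moderation analyses summarized in Tables 3-5 can be illustrated in code. The sketch below is not the authors' analysis script: it simulates data whose descriptives loosely echo Table 2 (binary proximity; trust around M = 1.85, SD = 0.24) and fits the PROCESS-style model as an OLS regression with a mean-centered moderator and a product term.

```python
# Illustrative moderation sketch; data and coefficients are simulated, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 210
proximity = rng.integers(0, 2, n)            # 0 = lower, 1 = higher proximity
trust = rng.normal(1.85, 0.24, n)            # descriptives loosely match Table 2

# Build an outcome with a genuine proximity x trust interaction for demonstration
originality = (3.14 - 0.19 * proximity + 0.42 * (trust - 1.85)
               + 1.62 * proximity * (trust - 1.85) + rng.normal(0, 0.8, n))

df = pd.DataFrame({
    "proximity": proximity,
    "trust_c": trust - trust.mean(),         # mean-center the moderator
    "originality": originality,
})

# 'proximity * trust_c' expands to both main effects plus their interaction
model = smf.ols("originality ~ proximity * trust_c", data=df).fit()
print(model.params)
```

Probing the interaction at low and high trust (e.g. +/- 1 SD of `trust_c`) would then reproduce the simple-slopes logic behind Figure 7.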

Figure 1. Two-way ANOVA of idea productivity across conditions.

Figure 2. Two-way ANOVA of idea originality across conditions.

Figure 3. Two-way ANOVA of idea feasibility across conditions.

Figure 4. Two-way ANOVA of idea productivity across conditions.

Figure 5. Two-way ANOVA of idea originality across conditions.

Figure 6. Two-way ANOVA of idea feasibility across conditions.

Figure 7. Interaction between human-AI proximity and trust quality in predicting idea

originality.

Figure 8. Rolling average sentiment of user and cobot messages by group.
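A rolling-average sentiment series of the kind plotted in Figure 8 can be sketched as follows. This is a toy illustration, not the authors' text-analytics pipeline: the mini sentiment lexicon and the message list are invented for demonstration, and each message score is smoothed with a three-message rolling mean.

```python
# Toy rolling-sentiment sketch; lexicon and messages are illustrative assumptions.
import pandas as pd

messages = ["great idea", "not sure this works", "I like it", "this is bad",
            "excellent suggestion", "okay", "wonderful", "terrible plan"]
lexicon = {"great": 1, "like": 1, "excellent": 1, "wonderful": 1,
           "bad": -1, "terrible": -1, "not": -1}

# Score each message by summing lexicon hits over its tokens
scores = [sum(lexicon.get(word, 0) for word in msg.split()) for msg in messages]

# Smooth with a 3-message rolling mean (min_periods=1 keeps the series full length)
rolling = pd.Series(scores).rolling(window=3, min_periods=1).mean()
print(rolling.tolist())
```

In practice, the scoring step would be replaced by a proper sentiment model, with separate series for user and cobot messages per group.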

Figure 9. Text analyses per condition.

Note. *p < .05; **p < .01; ***p < .001; ****p ≤ .0001

Author Biography


Jesus Mascareño is an assistant professor at the Department of Human Resource Management and Organizational Behavior, Faculty of Economics and Business, University of Groningen. His research focuses on creativity, innovation, and digitalization. He has experience as a technology consultant advising organizations on emerging technologies such as blockchain and AI.

Aleksandra Przegalińska received her PhD in philosophy of artificial intelligence from the Department of Philosophy of Culture at the Institute of Philosophy, University of Warsaw. She is currently an associate professor at the Department of Management in Digital and Networked Societies and Vice-Rector at Kozminski University. Until recently, she conducted research at the Massachusetts Institute of Technology. She is a Senior Research Associate at the Center for Labor and a Just Economy at Harvard University and a graduate of The New School for Social Research in New York. Her interests include the development of new technologies, natural language processing, the progress of humanoid artificial intelligence, social robots, and wearable technologies.

Leon Ciechanowski holds dual roles as a research scholar at MIT and an assistant professor at Kozminski University. He works in the fields of human-computer interaction, phenomenology, and the sense of agency and control. He is also a consultant in various business sectors (banking, data analysis, telecommunications, pharmaceuticals), where he prepares expert opinions and writes articles on fintech, artificial intelligence, Big Data, and other technological innovations.
