Manuscript Number: Jobr S 24 03108
Aleksandra Przegalińska
Leon Ciechanowskie
Gabriele Pizzi
Associate Professor, University of Bologna
gabriele.pizzi@unibo.it
He has published articles on human-AI collaboration in your journal, so he is knowledgeable
on the subject and familiar with the journal.
Opposed Reviewers:
Title Page (WITH AUTHOR DETAILS)
a Department of Human Resource Management and Organizational Behavior, University of
MA, USA
d Harvard Center for Labour and Just Economy, Cambridge, MA, USA
Abstract
Human-AI collaboration is increasingly common in organizations, yet its
relationship with creativity is not fully understood. Drawing from task-technology fit theory,
this study delves into the relationship between human-AI collaboration, trust, and various
dimensions of creativity. Our first study showed that human-AI collaboration fosters a greater
number of ideas, enhanced idea originality, and improved idea feasibility. Our second study
revealed that higher proximity within human-AI collaborations predicts increased idea
productivity and feasibility, yet it does not lead to greater idea originality. Additionally, our
findings suggest that the effectiveness of this proximity in fostering idea originality is
contingent upon the level of trust in AI. Text analytics on the communication patterns
between humans and AI underscored these results, showing variations in sentiment, language
complexity, and communication style in relation to the level of proximity and trust.
Artificial intelligence is reshaping the way organizations operate. AI helps
organizations with decision-making, optimization, and innovation (Grewal et al., 2021). AI-powered solutions are already being
used in many real-world applications, ranging from crop harvests and online retail to bank
loans and manufacturing (Fountaine et al., 2019; Grewal et al., 2021). A key area of
development is collaborative robotics (cobotics), which is becoming increasingly
affordable and accessible to organizations. In fact, cobotics has already become one of the
fastest-growing sectors of the robotics market (Goldberg, 2019), and it is expected to be worth
Similarly, the literature on cobots is rapidly expanding (Knudsen & Kaivo-Oja, 2020).
outcomes (e.g., El Zaatari et al., 2019; Schrier et al., 2010; Sowa et al., 2021), researchers are
beginning to uncover the relationship between human-AI collaboration and creativity (e.g.,
Dell'Acqua et al., 2024; Boussioux et al., 2023). However, the results are inconsistent across
Indeed, some studies show that human-AI collaboration might flatten creativity (Dell'Acqua
et al., 2024), whereas others find that human-AI creativity matches human creativity
(Hitsuwari et al., 2023) or that human-AI collaboration is more creative for certain aspects of
creativity (Boussioux et al., 2023). Thus, despite the increased research attention, our
knowledge of what human-AI collaboration can do to stimulate creativity is still rather
limited (Wilson & Daugherty, 2018), and more information on the topic is needed.
In this paper, we argue that the relationship between human-AI collaboration and
creativity is complex and multifaceted and we draw from task-technology fit theory (Goodhue
& Thompson, 1995) to understand the relationship between human-AI collaboration and
different dimensions of creativity. We argue that humans collaborating with AI-powered
cobots can produce more ideas, and more original and feasible ideas than humans alone. In
addition, we propose that the level of proximity between cobots and humans has different
effects on the different dimensions of creativity and that trust in the cobot will moderate this
relationship.
In sum, the aim of the present research is to contribute to creating consensus on the
relationship between human-AI collaboration and creativity by looking into the proximity of
the collaboration between humans and cobots and by testing
the moderating effect of trust. We do so by using the results of two studies. The first study
examines whether human-AI collaboration enhances different dimensions of
creativity. The second study investigates proximity in the human-AI collaboration and its
relationship with different dimensions of creativity, and the moderating role of trust in this
relationship. Moreover, this work is a response to the call for more studies on how AI can
enhance different aspects of creativity (Dell'Acqua et al., 2024; Wilson & Daugherty, 2018),
and how to foster high-performing human-AI collaborations (Hancock et al., 2011b). These
findings offer practical insights that will help practitioners unlock the potential of human-AI
collaboration.
Advances in intelligent machines have given rise to a new generation of AI agents named cobots (Beghetto, 2021).
Cobots are AI-powered agents that are capable of collaborating with human subjects
(Beghetto, 2021; El Zaatari et al., 2019). Traditionally, cobots were employed in industrial
settings to take over repetitive tasks while humans handled unplanned tasks, providing a
balance between automation and flexibility (Knudsen & Kaivo-Oja, 2020; Maurtua et al.,
2017). However, cobots are now increasingly integrated into knowledge-intensive work
(Sowa et al., 2021). The notion behind human-AI collaboration is that humans and cobots are
complementary and together they can create something novel that neither could do alone
(Feldman, 2017). Previous research has shown that cobots can cooperate directly with humans
at levels that rival human–human cooperation (Schrier et al., 2010), and that humans
collaborating with cobots perform better than humans and cobots alone (Gunser et al., 2022,
Hitsuwari et al., 2023). Human-AI collaboration takes place when humans delegate a certain
amount of agency to the AI resulting in an interdependence in their actions (Kang & Lou,
2022). Throughout the collaboration, humans negotiate agency with AI, either by accepting
AI’s agency or by exerting control over it (Kang & Lou, 2022, Sowa et al., 2021).
Human-AI collaboration has generated an extensive body of literature that spans several
disciplines. Indeed, empirical studies have linked human-AI collaboration to a range of
positive outcomes. For example, in a set of experimental studies, De Melo and colleagues (De Melo et
al., 2017; 2018, 2019), found that human-AI collaboration can reduce the negative effects of
emotions on cooperation. In a more recent experimental study, Sowa et al. (2021) found that
humans working with cobots were substantially more productive than the non-assisted group.
Similarly, in an online experiment among groups, Takko et al. (2021) tested different setups
of human-cobot groups, and found that human-AI groups perform better but only when
humans take the lead in the group. More recently, researchers have begun to explore the
relationship between human-AI collaboration and creativity, probing into what was once
believed to be an exclusively human capability (Boden, 2009). For example, Dell'Acqua et al.
(2024) found, in a field experiment among business consultants, that human-AI collaboration
leads to higher quality of ideas, but there is a significant decrease in the variability of these
ideas compared to those generated by humans. In addition, Hitsuwari et al. (2023) found, in
an online experiment, that human–AI collaboration exhibited higher creativity in text
production. In another study, Boussioux et al. (2023) found that although human-AI
collaboration led to more valuable ideas, human ideas were more novel. All in all, existing
research offers inconclusive results on the relationship between human-AI collaboration and
creativity.
Creativity is the generation of novel and useful ideas (Amabile, 1983; West, 2002).
According to Amabile (1983), creative outcomes are the result of the confluence of three
components: domain-relevant skills, creativity-relevant skills, and task motivation. Domain-relevant
skills refer to the knowledge from a particular domain that the individual possesses.
Creativity-relevant skills involve the use of one's cognitive style and personality to put
existing ideas together in new combinations. Task motivation captures the individual’s
attitude towards the task. Individuals must perceive that
Ideas are considered the building blocks of creative performance and are commonly
assessed based on fluency, originality and feasibility (Paulus, 2000). Idea fluency refers to the
ability to produce a large number of ideas (Rietzschel & Nijstad, 2020). The more ideas
generated, the higher the idea fluency. However, idea fluency is not necessarily related to the
quality of ideas. Original ideas are risky and involve uncertainty, as they often diverge from
established norms and conventional pathways (Paulus, 2000; Rietzschel et al., 2019).
Originality is often considered the most sought-after attribute of creative ideas (Nijstad et al.,
2010). Feasible ideas are those that require minimal effort or resources to implement
(Rietzschel et al., 2019). Overall, each dimension plays a crucial role in the creative process,
In order to better understand the relationship between human-AI collaboration and
creativity, it may be useful to rely on a theory that has generated insights in predicting
technology success. Goodhue and Thompson’s (1995) task-technology fit is one such theory.
TTF theory proposes that technology use and performance depend upon the degree of fit
between the functionalities of the technology and the characteristics of the task that must be
performed. Task-technology fit is defined as “the degree to which a technology assists an
individual in performing his or her portfolio of tasks” (Goodhue & Thompson, 1995, p. 216).
TTF posits that the alignment between task characteristics and technology characteristics
predicts the level of task-technology fit (Cai et al., 2022; Goodhue & Thompson, 1995). This
alignment, in turn, directly influences performance outcomes (Howard & Rose, 2019).
Furthermore, a high task-technology fit positively impacts user attitudes, subsequently driving
utilization. Empirical studies have shown that TTF is suitable to explain the adoption and benefits of a
wide variety of technologies, including blockchain (Prockl et al., 2022), social media (Fu et
al., 2020), mobile applications (Kristianto, 2021), and unmanned aerial vehicles (Golizadeh et
al., 2019). TTF theory emphasizes the supporting role of technology in aiding humans to
accomplish tasks. Overall, task-technology fit theory provides a structured framework
for evaluating and understanding the relationship between technology and tasks in an
organizational context. Therefore, based on the TTF theory, we argue that creativity will
emerge from the alignment between the capabilities of AI-powered cobots and the demands of
creative tasks.
First, humans collaborating with cobots will be more creative because they will gain
access to the domain-relevant knowledge needed for generating creative ideas (Amabile,
1983). Cobots can assist humans in collecting and
synthesizing large amounts of data from a specific domain, and perform more complex
analytical operations than is humanly possible (Sowa et al., 2021). This new knowledge will
expand the human’s set of possibilities from which a new idea can be generated.
Second, humans that collaborate with an AI-powered cobot may be more creative
because the cobot will stimulate humans’ divergent thinking. Creative tasks require divergent
thinking and cognitive flexibility (Amabile, 1983). AI-powered cobots can combine
information in ways that have never been
combined before (Das & Varshney, 2020), expanding the human’s cognitive flexibility. Thus,
humans collaborating with a cobot may be better equipped to meet the demands of creativity
by utilizing the cobot's input to generate ideas that are more divergent and novel.
Finally, human-AI collaboration will lead to creativity because humans will be more
intrinsically motivated to be creative. Creative tasks benefit from the individual’s intrinsic
motivation (Amabile, 1983). Motivation provides the energy and persistence needed to
navigate challenges, take risks, and explore unconventional ideas (Amabile, 1988). Cobots are
capable of learning and adapting to make humans enjoy the collaboration (Fan et al., 2017).
When individuals find a task enjoyable, they engage for longer periods, even beyond the point
at which they are rewarded (Deci, 1972). On this basis, we suggest that humans collaborating
with a cobot will be more creative than humans working alone. In sum, we argue that the
capabilities of cobots match the requirements inherent in creative tasks and, as a result,
enhance human creativity.
Hypothesis 1: Human-AI collaboration will yield higher creativity than humans working alone.
several measures of creativity: idea productivity, idea feasibility, and idea originality. In
the second study, participants collaborated with a cobot in an idea-generation
task, and we assessed idea productivity, idea originality, and idea feasibility. In addition, we
examined the moderating role of trust on the relationship between human-AI proximity and
creativity.
Study 1
Participants in this study were employees located in the Netherlands, Mexico and
experiment. Participants’ mean age was 30.0 years (SD = 12.68), and they had an average
organizational tenure of 5.26 years (SD = 7.23). Of the participants, 49.0% had obtained a
bachelor’s degree. Respondents worked in health care (11.0%), food services (9.5%), education
informed that the study would take approximately 20 minutes to complete, that the collected
data would be treated confidentially, and that participation was voluntary. In the first part of
the experiment, participants were asked to fill in demographics. After being randomly
assigned to either the human or the human-AI condition, participants received specific
instructions. Participants in the human condition were asked to generate creative ideas for
new businesses and describe them in a short but clear manner. Participants in the human-AI
condition were asked to generate ideas together with a cobot in a separate browser window.
After the data collection, participants were debriefed by informing them about the true
purpose of the study.
A single "master" rater, blind to conditions, coded all ideas. To further ensure
reliability, a second rater, also blind to conditions, coded a sample of 321 ideas (representing
40% of the sample) to assess the inter-rater reliability (Heyman et al., 2014).
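Inter-rater agreement of this kind is later quantified with an intraclass correlation (ICC3; McGraw & Wong, 1996), which can be derived from a two-way ANOVA decomposition of the ratings. A minimal pure-Python sketch, using hypothetical ratings rather than the study's data:

```python
def icc3(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single rater.

    ratings[i][j] is rater j's score for idea i (hypothetical data).
    """
    n = len(ratings)      # number of rated ideas
    k = len(ratings[0])   # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    # Between-ideas mean square and residual mean square from the two-way ANOVA
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ss_err = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
                 for i in range(n) for j in range(k))
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Two raters who agree perfectly up to a constant offset -> ICC3 = 1.0
print(icc3([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]))
```

Because ICC3 measures consistency, a constant difference between two raters does not lower the coefficient; only disagreement about the rank ordering of ideas does.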
Experimental Cobot. For the experiment, we developed a cobot with the primary
Tränkner, 2024). The underlying technology of this chatbot is the GPT-3.5 engine, a product of OpenAI.
Measures
Creativity. We used three measures of creativity: idea productivity, idea originality, and
idea feasibility. Idea productivity was measured by counting the total number of ideas
generated per participant. Idea originality and idea feasibility were each assessed by two
raters on a Likert scale from 1 (not at all) to 5 (very much). The level of
agreement between coders was calculated using intraclass correlation (ICC3) (McGraw &
Wong, 1996). ICC3 indicated a good level of agreement for idea originality (ICC3 = .82) and
Results
Preliminary Analysis
Means, standard deviations, and correlations among the measures employed in this
study are shown in Table 1. The independent variable, human-AI collaboration, was
significantly correlated with the three measures of creativity: idea productivity (r = .41, p <
.001), idea originality (r = .25, p < .001), and idea feasibility (r = .23, p < .001).
Hypothesis Testing
Idea productivity. A one-way analysis of variance (ANOVA) revealed that there was
a significant difference between the human and human-AI conditions (F(1, 208) = 41.30, p <
.001). The mean comparison analysis revealed that participants in the human-AI condition (M
= 5.55, SD = 4.18) scored higher on idea productivity than those in the human condition (M =
2.64, SD = 2.33). The effect size was η2 = 0.17, which is classified as a large effect
(Cohen, 1988).
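The F statistic and eta squared reported here can be reproduced from group sums of squares alone. A stdlib-only sketch (the per-participant idea counts below are hypothetical, not the study's data):

```python
def one_way_anova(*groups):
    """Return (F, eta squared) for a one-way between-subjects design."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-groups and within-groups sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    f = (ss_between / (k - 1)) / (ss_within / (n - k))
    eta_sq = ss_between / (ss_between + ss_within)  # SS_between / SS_total
    return f, eta_sq

# Hypothetical idea counts in two conditions (human vs. human-AI)
f, eta_sq = one_way_anova([1, 2, 3], [4, 5, 6])
```

Eta squared is simply the share of total variance attributable to the condition, which is why it can be read off the same sums of squares as the F test.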
Idea originality. The ANOVA analysis revealed that there was a significant
difference between the human and the human-AI conditions (F(1, 208) = 14.39, p < .001).
The mean comparison analysis revealed that participants in the human-AI condition scored
higher (M = 2.38, SD = 0.84) on idea originality than those in the human condition (M = 1.92,
SD = 0.88). The effect size was η2 = 0.06, which indicates a medium effect (Cohen, 1988).
Idea feasibility. The ANOVA analysis revealed that there was a significant difference
between the human and human-AI conditions regarding idea feasibility (F(1, 208) = 41.30, p
< .001). The mean comparison analysis revealed that participants in the human-AI condition
scored higher (M = 3.50, SD = 0.64) on idea feasibility than those in the human condition (M
= 3.04, SD = 1.20). The effect size was η2 = 0.05, which indicates a small effect (Cohen,
1988).
The results supported our hypothesis that humans collaborating with AI-powered
cobots will be more creative than humans alone. Specifically, participants in the human-AI
condition generated more ideas and scored higher in originality and feasibility than
participants in the human condition. Thus, this study highlights the importance of human-AI
collaboration for creativity.
In Study 1, we investigated the relationship between human-AI collaboration and
creativity; however, not all human-AI collaborations perform the same. Thus, Study 2 is a
follow-up study to further understand the relationship between human-AI collaboration and
creativity. First, we aimed to extend the findings of Study 1 by including human-AI proximity
as part of our human-AI construct to capture the different levels of collaboration (Feldman,
2017). Second, we included trust as a potential moderator to test whether the relationship
between human-AI proximity and creativity depends on trust in the cobot. Human-AI
proximity refers to the extent to which humans and cobots work closely together (El Zaatari
et al., 2019; Sowa et al., 2021). According to El Zaatari et al. (2019), proximity is a
continuum ranging from low to high proximity. In the
initial stage, humans and cobots do not collaborate but work independently (Feldman, 2017).
In the second stage, humans and cobots begin to complement each other by focusing on their
own expertise. In the third level of proximity, humans and cobots are interdependent,
complement each other’s strengths, and fill gaps in their collective knowledge in order to
achieve desired outcomes. Finally, in the fourth level of proximity cobots become an
extension of the human mind, resulting in a complete dependence and full collaboration
between the two. High proximity between the human and the cobot is crucial as it ensures
We anticipate that high human-AI proximity will result in a greater creative output
compared to low proximity. To understand this relationship, again we draw from TTF theory.
TTF theory posits that as individuals repeatedly engage with a technology, they develop a
deeper understanding of its features, functions, and potential applications. This increased
familiarity allows users to more effectively match the capabilities of the technology with the
requirements of their tasks (Goodhue & Thompson, 1995). Through continued usage, users
often discover nuanced functionalities and optimize their workflows, leading to a more
seamless integration of the technology into their daily activities. As users become proficient,
they can exploit the full range of the technology's features, adapting it to various task
requirements.
Humans and cobots collaborating in high proximity will be more creative because they
will shorten their learning curve. We expect that humans and cobots in a high proximity
collaboration will devote more time and attention to their interactions and, as a result, learn
faster to synthesize information and generate ideas. Cobots depend on human feedback to
learn and evolve to become full-fledged collaborators (Hancock et al., 2011b). Indeed,
previous studies have shown that the more time humans spend collaborating with the cobot,
the more creative the cobot becomes (Vinanzi et al., 2021). Moreover, the performance of
human-AI collaborations is proportional to the depth and length of their learning interactions
(Pinto et al., 2021). Thus, the closer humans and cobots collaborate, the faster they learn to
synthesize information and generate creative ideas.
Humans and cobots engaged in a high proximity collaboration will be more creative
because they will develop a shared understanding. Humans and cobots that work closely
together will have greater opportunities to learn together, relate to one another, and develop
a shared understanding. A shared understanding allows humans and
cobots to be aware of what the other can perceive and create a common knowledge that can
increase their chances of achieving their goals (Matarese et al, 2022). Previous studies have
shown that human-robot teams perform better when they have a shared understanding about
Finally, high proximity will yield higher creativity because it will augment the human
and cobot’s hybrid intelligence. Hybrid intelligence is the combination of human and artificial
intelligence and is higher than each intelligence on its own (Dellerman et al., 2019b). Hybrid
intelligence is particularly high when humans and cobots engage in close collaboration because it
facilitates the exchange of information, feedback and data (Dellerman et al., 2019a, Ostheimer
et al., 2021). Indeed, high proximity gives the human and the cobot closer access to the
humans’ intuition, empathy, common sense, and to the AI’s consistency, speed and efficiency
(Dellerman et al., 2019c). In sum, according to TTF theory, the iterative and experiential
nature of technology use will enhance the fit between user tasks and the capabilities of cobots;
as a result, humans in a high proximity collaboration will be more creative than those in a low
proximity collaboration. On this basis, we hypothesized that:
Hypothesis 2: Humans in a high proximity human-AI collaboration will be more creative
than those in a low proximity collaboration.
Trust is one of the critical factors that determine AI adoption in organizations (Glikson
& Woolley, 2020). Humans that trust cobots are better equipped to capitalize on their benefits
(Gervasi et al., 2020, Hancock et al., 2011b), which consequently can lead to higher
performance (Coronado et al., 2022). Thus, trust may help explain when the relationship
between human-AI proximity and creativity is stronger. Trust is “the attitude that an agent
will help achieve an individual’s goals in a situation characterized by uncertainty and
vulnerability” (See & Lee, 2004, p. 54). In a human-to-human collaboration, potential risks
are minimized through mutual trust (Wang et al., 2021). However, when it comes to human-
AI collaboration, trust is influenced by the perception of the cobot's ability to perform its
designated task effectively, rather than the system trusting the performance of the human
(Ashoori & Weisz, 2019; Emaminejad & Akhavian, 2022; Muir & Moray, 1996).
In this study, we argue that humans that trust their cobot counterpart will be more
creative for several reasons. First, higher trust may increase the information exchange needed
to generate ideas. Trust directly affects the willingness of humans to accept the information
produced by the cobot (Freedy et al., 2007, Hancock et al., 2011a). Moreover, trust dictates
the rate of the information exchange between humans and cobots (Pinto et al., 2022). Thus,
humans that trust cobots will obtain the cognitive resources needed to spark creativity because
they will be more willing and motivated to deepen information exchange activities. Second,
humans collaborating with cobots will be particularly creative when they trust the cobot, as
this will increase their engagement. Humans who recognize that the cobot is trustworthy will
be more motivated to reciprocate by working at their highest ability and investing more
physical and cognitive energy (Kahn, 1990). In addition, cobots are capable of understanding
humans and develop customized experiences to keep the humans engaged (Fan et al., 2017).
Hence, humans that trust cobots will yield higher creative output because they will invest
more effort into the creative process. Overall, we predict that the effectiveness of human-AI
collaboration in creativity will be most pronounced when humans trust the cobot.
Hypothesis 3: Trust will strengthen the relationship between human-AI proximity and
creativity.
Study 2
Two hundred and twenty-five respondents from the Netherlands and Germany took
part in an online experiment. Eight respondents were excluded for failing two out of two
control questions (Kittur et al., 2008). Participation required respondents to work at least 20
hours per week. Of the remaining 217 respondents, 119 were female. Participants' age ranged
from 18 to 65 with an average of 28.81 years (SD = 12.07). Participants’ tenure in the
organization was 5.15 years (SD = 7.93). Most participants worked in education (9.7%), food
service (8.8%), healthcare (8.3%), or IT (5.5%). Most of them (47.0%) had a bachelor’s
degree.
As in Study 1, participants were invited to take part in an online study that
purportedly was collecting business ideas to inspire future entrepreneurs. All participants
were informed that their participation was voluntary and anonymous, and their information
would be kept confidential. Participants provided online informed consent and were asked to
fill in demographics. Participants were randomly assigned to either the high proximity or low
proximity condition and presented with the experimental manipulation. Participants were then
asked to collaborate with a cobot to generate business ideas. Participants were given 20
minutes to complete the task. Afterwards, participants were debriefed, and thanked.
Human-AI proximity was manipulated with instructions that were presented at the
beginning of the experiment (Rietzschel et al., 2017), and by the cobot’s attitude during the task.
Manipulations were constructed using the items of the human-robot collaboration fluency scale
(Hoffman, 2019).
In the high proximity condition, participants were presented with instructions that
encouraged them to collaborate with the cobot. Example sentences are “The
agent is there to team up with you, complement and round out your capabilities” and “Involve
the agent in generating ideas and both should have equal contribution to the task.” In addition,
the cobot was programmed to assume an active role and to generate messages to proactively
engage with the participant. For example, “Hi, let's put our minds together and generate ideas,
In the control condition, participants were presented with instructions informing
them that they would use a cobot to help them generate ideas. Example sentences are “The
assistant is there to assist you and enhance your capabilities” and “Ask the agent to generate
ideas, but you should contribute more to the task.” In addition, the cobot was programmed to
assume a supportive role and to generate messages to help participants. Example prompts
include, “Hi, I’m here to assist you to generate ideas, what do you want me to do first?” and
“Happy to help”.
A single "master" rater, blind to conditions, coded all ideas. To further ensure
reliability, a second rater, also blind to conditions, coded a sample of 379 ideas (representing
40% of the sample) to assess the inter-rater reliability (Heyman et al., 2014).
Experimental Cobot. For the second experiment, we employed the same cobot as in the
first study. However, we migrated the cobot to PythonAnywhere to automate the
response-saving process and to improve the bot's accessibility and functional efficiency.
Measures
Manipulation Check. The proximity manipulation was assessed with 5 items based on
Hoffman (2019). Participants were asked to reflect on their collaboration with the cobot and
to indicate their agreement with items like "The AI and I worked well synchronized together"
and "The AI contributed equally to the completion of the tasks" on a scale ranging from 1
(strongly disagree) to 7 (strongly agree). Cronbach’s
Creativity. We used three measures of creativity: idea productivity, idea originality, and
idea feasibility. Idea productivity was measured by counting the total number of ideas
generated per participant. Idea originality and idea feasibility were each assessed by two
raters on a Likert scale from 1 (not at all) to 5 (very much). The level of
agreement between coders was calculated using intraclass correlation (ICC3) (McGraw &
Wong, 1996). ICC3 indicated a good level of agreement for idea originality (ICC3 = .85) and
Trust. Trust was measured with Merritt’s (2011) 6-item trust scale. Items
include “I have confidence in the advice given by AI” and “I believe AI is a competent
performer”. The items were answered on a 5-point scale ranging from 1 (strongly disagree) to
5 (strongly agree).
Results
Preliminary Analysis
Means, standard deviations, and correlations among the measures employed in this
study are shown in Table 5. Human-AI proximity was significantly correlated with idea
productivity (r = .38, p < .001) and idea feasibility (r = .19, p < .001). Idea productivity was
significantly positively correlated with idea feasibility (r = .23, p < .001). In addition, idea
Hypothesis Testing
Idea productivity. A one-way analysis of variance (ANOVA) revealed that there was
a significant difference between the low and high proximity conditions (F(1, 216) = 36.74, p <
.001). The mean comparison analysis revealed that participants in the high proximity
condition (M = 6.89, SD = 4.02) scored higher on idea productivity than those in the low
proximity condition (M = 4.06, SD = 2.79). The effect size was η2 = 0.15, which indicates a
large effect (Cohen, 1988).
Idea originality. The ANOVA analysis revealed that there was not a significant
difference between the low and high proximity conditions (F(1, 216) = 1.11, p = .294) and a
negligible effect size (η2 = 0.00). The mean comparison analysis revealed that participants in the high
proximity condition scored higher (M = 3.25, SD = 0.92) on idea originality than those in the
Idea feasibility. The ANOVA analysis revealed that there was a significant difference
between the low and high proximity conditions regarding idea feasibility (F(1, 216) = 7.60, p
< .05). The mean comparison analysis revealed that participants in the high proximity
condition scored higher (M = 3.69, SD = 0.52) on idea feasibility than those in the low
proximity condition (M = 3.42, SD = 0.86). The effect size was η2 = 0.03, which indicates a
small effect (Cohen, 1988).
In order to test Hypothesis 3, we ran a moderation model using the PROCESS macro
developed by Hayes (2012) (model 1). In this model we tested for the moderating effect of
trust on the relationship between Human-AI proximity and creativity. We present the results
separately for each of the three dimensions of creativity, starting with idea originality,
followed by idea feasibility and idea productivity.
Idea originality. In terms of originality, the results supported our hypothesis (see
Table 2). The results revealed that human-AI proximity was not a significant predictor of idea
originality (b = -.19, p = 0.129) and that trust (b = 2.12, p < 0.001) was not a predictor of
originality. However, the effect of trust on the relationship between human-AI proximity and
Simple slopes analysis (see Figure 1) showed that human-AI proximity was negatively
associated with idea originality when trust was low (i.e., one standard deviation below the
mean).
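A simple slopes analysis of this kind evaluates the conditional effect of proximity at chosen trust levels from the interaction model y = b0 + b1*proximity + b2*trust + b3*proximity*trust, where the slope of proximity is b1 + b3*trust. The sketch below uses the proximity coefficient reported in the text (b1 = -.19) together with a hypothetical interaction coefficient and SD, since those values are not reported in this excerpt:

```python
def simple_slope(b_proximity, b_interaction, trust):
    """Conditional effect of proximity at a given (centered) trust value:
    d(originality)/d(proximity) = b1 + b3 * trust."""
    return b_proximity + b_interaction * trust

b1 = -0.19   # proximity coefficient reported in the text
b3 = 0.25    # hypothetical interaction coefficient, for illustration only
sd = 1.0     # hypothetical SD of centered trust
slope_low = simple_slope(b1, b3, -sd)   # effect at 1 SD below the mean
slope_high = simple_slope(b1, b3, +sd)  # effect at 1 SD above the mean
```

With these illustrative numbers the conditional effect is negative at low trust (about -0.44) and near zero at high trust (about 0.06), the kind of pattern a significant proximity-by-trust interaction produces.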
Idea feasibility. In terms of feasibility, the results did not support our hypothesis (see
Table 3). The results revealed that human-AI proximity was a significant predictor of idea
feasibility (b = 0.27, p = 0.012) and that trust was not a predictor of idea feasibility (b = 0.09,
p = 0.344). The effect of trust on the relationship between human-AI proximity and idea feasibility was
Idea productivity. In terms of idea productivity, the results supported our hypothesis
(see Table 4). The results revealed that human-AI proximity was a significant predictor of
idea productivity (b = 2.88, p < 0.001) and that trust was not a predictor of idea productivity
(b = -.40, p = 0.690). The effect of trust on the relationship between human-AI proximity and idea
To gain deeper insights into the dynamics of human-AI communication, and to
compare the experimental conditions, we conducted an analysis of the text messages
exchanged between human participants and the AI (chatbot).
Sentiment Analysis. We used VADER (Valence Aware Dictionary and sEntiment
Reasoner) to evaluate the affective nature of the collaboration, categorizing messages into
positive, negative, and neutral sentiments. The average sentiment differed between AI- and
user-generated messages, with AI in low proximity conditions scoring M = 0.70 (SD = 0.36), and
users in low proximity M = 0.20 (SD = 0.32) (see Figure 8). In high proximity conditions, AI
sentiment was M = 0.66 (SD = 0.36), and for users, M = 0.25 (SD = 0.33). Overall, AI
sentiment was M = 0.67 (SD = 0.36), and user sentiment was M = 0.23 (SD = 0.32). Notably,
sentiment correlation in the high proximity group was r = .318, p < .001, and in the low
proximity group, r = .436, p < .001. The Kruskal-Wallis H test indicated significant
differences in sentiment scores across groups (H = 1234.869, p < .001). Post-hoc comparisons
using the Mann-Whitney U test revealed significant differences between the human low and
human high proximity groups (U = 127827.5, p < .001, Cliff's Delta = -0.672), indicating a
large effect. Comparisons between human and AI participants within the same proximity
condition also showed significant differences, in both the low proximity (U = 199712.0,
p < .001, Cliff's Delta = 0.0) and the high proximity conditions.
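The Cliff's Delta effect sizes reported alongside these Mann-Whitney tests are directly related to the U statistic (delta = 2U/(mn) − 1, when U counts pairs with x > y plus half the ties). A minimal, dependency-free sketch of the computation; the function name and sample values are illustrative, not the study's data:

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) - P(x < y) over all cross-group pairs.

    Ranges from -1 (all xs below ys) to +1 (all xs above ys);
    |delta| >= 0.474 is conventionally considered a large effect.
    """
    gt = sum(1 for x in xs for y in ys if x > y)  # pairs where x wins
    lt = sum(1 for x in xs for y in ys if x < y)  # pairs where y wins
    return (gt - lt) / (len(xs) * len(ys))

# Illustrative sentiment scores (not the study's data):
low_proximity = [0.1, 0.2, 0.3, 0.2]
high_proximity = [0.3, 0.4, 0.5, 0.6]
print(cliffs_delta(low_proximity, high_proximity))  # → -0.9375
```

In practice one would compute this over the per-message sentiment scores within each condition; the sign indicates which group tends to score higher.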
Readability. We used the Flesch-Kincaid and Gunning Fog indices to assess the
readability of the text. The Flesch-Kincaid index provided insights into the text's
complexity based on sentence length and word syllable count, while the Gunning Fog index
evaluated the number of complex words used, offering a view of the required education level
to comprehend the text. The Kruskal-Wallis H test for Flesch-Kincaid scores was significant
(H = 314.52, p < .001). Post-hoc analysis showed significant differences between human low
and high proximity groups (U = 487432.0, p < .001, Cliff's Delta = 0.249), suggesting higher
readability in high proximity conditions. Differences between human and AI within each
proximity condition were also significant.
Significant differences were found in the Gunning Fog scores (H = 344.514, p < .001).
Post-hoc tests indicated significant differences between human low and high proximity groups
(U = 278699.5, p < .001, Cliff's Delta = -0.286), with high proximity showing lower
complexity in language.
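Both indices follow standard published formulas: Flesch-Kincaid grade = 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59, and Gunning Fog = 0.4 × (words per sentence + 100 × share of complex words), where "complex" means three or more syllables. A rough sketch; the regex tokenizer and vowel-group syllable counter are deliberate simplifications, as the study does not specify its implementation:

```python
import re

def count_syllables(word):
    """Naive syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    """Flesch-Kincaid grade level and Gunning Fog index (standard formulas)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    wps = len(words) / len(sentences)  # average words per sentence
    fk = 0.39 * wps + 11.8 * (syllables / len(words)) - 15.59
    fog = 0.4 * (wps + 100 * complex_words / len(words))
    return fk, fog
```

Lower grade-level scores correspond to more readable text; production analyses typically rely on a text-analysis library with a dictionary-based syllable count rather than this heuristic.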
Lexical Diversity. We calculated the lexical diversity. This was achieved by
measuring the ratio of unique words to
the total number of words, providing an indication of the breadth of vocabulary employed by
both humans and the AI. The Kruskal-Wallis H test showed significant variation in lexical
diversity (H = 897.296, p < .001). The lexical diversity was significantly higher in the human
high proximity group compared to the human low proximity group (U = 598382.0, p < .001).
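The unique-to-total ratio described here is the type-token ratio; a minimal sketch (the study's exact tokenization is not specified, and raw TTR is sensitive to text length, so comparisons are safest between texts of similar size):

```python
def type_token_ratio(text):
    """Lexical diversity as unique words / total words (type-token ratio)."""
    words = text.lower().split()  # naive whitespace tokenization
    return len(set(words)) / len(words) if words else 0.0

print(round(type_token_ratio("the cat sat on the mat"), 3))  # → 0.833
```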
Average Sentence Length. We calculated the average sentence length in the text
messages to understand the syntactic complexity, where longer sentences generally indicate
greater complexity. Significant differences in average sentence length were observed
(H = 498.82, p < .001). The human high proximity group used shorter sentences compared to
the human low proximity group (U = 233451.5, p < .001).
Vocabulary Level. We analyzed the level of vocabulary used, classifying words into
basic, intermediate, and advanced categories. This helped in assessing the sophistication of
the language used in the interactions and the potential cognitive load on the participants.
Results show significant differences in vocabulary level (H = 887.555, p < .001). The human
high proximity group showed a higher vocabulary level compared to the human low
proximity group.
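Such a classification can be sketched with tiered word lists; the lists below are purely hypothetical placeholders, since the study does not report which vocabulary tiers or word lists it used:

```python
# Hypothetical tiers -- the study's actual word lists are not specified.
BASIC = {"the", "a", "is", "good", "make", "use", "new", "idea"}
INTERMEDIATE = {"feasible", "original", "generate", "collaborate"}

def vocabulary_level(word):
    """Classify a word as basic / intermediate / advanced via tier lists."""
    w = word.lower()
    if w in BASIC:
        return "basic"
    if w in INTERMEDIATE:
        return "intermediate"
    return "advanced"  # anything outside both lists

def vocabulary_profile(text):
    """Share of each tier across the words of a message."""
    levels = [vocabulary_level(w) for w in text.lower().split()]
    return {lvl: levels.count(lvl) / len(levels)
            for lvl in ("basic", "intermediate", "advanced")}
```

A real analysis would substitute frequency-based lists (e.g., graded vocabulary bands) for these placeholder sets.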
Discussion Study 2
The results of this study partly support our prediction that with higher levels of
proximity, humans are more creative. Human-AI proximity led to higher idea productivity
and idea feasibility, but not to idea originality. A potential explanation is that high proximity
may have diminished humans’ intraindividual processes. Originality, at the individual level, is
dependent on high individualism and intellectual autonomy (Rank et al., 2004), and an
accentuated emphasis on collaboration may have increased humans’ reliance on the cobot
rather than on their own intraindividual processes. Another potential explanation is that
humans and the cobot may have devoted more time to developing the collaboration than to
carrying out a more in-depth analysis of potential ideas. According to Tuckman's (1965) team
development model, teams do not focus on performing until the final stage, once members
have gained enough knowledge about one another. Similarly, humans may have spent their
time getting familiar with the cobot and not spent enough time exploring ideas. This is in line
with findings that show that idea originality is the result of deep exploration of a specific
domain.
Results partially support the hypothesis that trust moderates the relationship between
human-AI proximity and creativity. Trust moderated the relationship between human-AI
proximity and idea originality. However, simple slopes analysis revealed that human-AI
proximity was negatively related to idea originality only when trust was low. In other words,
human-AI proximity
undermines originality when trust in the cobot is low. A potential explanation is that
participants with low trust may have felt the need to rely on their own originality, and as a
consequence, they may not have fully benefited from the collaborative advantages offered by
cobots. This is in line with research that shows that if AI does not behave as expected, humans
may avoid collaboration or limit interaction, hindering the potential benefits of human-AI
collaboration (Dietvorst et al., 2015; Jessup et al., 2020). Another potential explanation is that
trust in cobots requires time to develop, whereas distrust can have immediate negative
effects. Indeed, previous research shows that humans are more sensitive to distrust behaviors
that are performed by robots (Jessup et al., 2020). In addition, trust did not strengthen the
relationship between human-AI proximity and idea productivity, and idea feasibility. One
reason might be that participants did not experience the high levels of risk and uncertainty
typically associated with creativity (Beghetto, 2021). In such situations, trust facilitates the
exchange needed to deal with the uncertainty and risks associated with creativity (Madjar et
al., 2011). However, trust in the cobot might not have been relevant for participants, as the
experiment did not pose a high risk or an uncertain situation to them, as a real-world situation
would. In sum, increasing proximity will have further benefits for idea productivity and idea
feasibility, but may come at a cost of lower originality when AI is not trusted.
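The simple-slopes pattern summarized here can be reproduced arithmetically: in a moderated regression, the slope of proximity at a given (mean-centered) trust value is b_proximity + b_interaction × trust. A sketch using the coefficients reported for the originality moderation model (b_proximity = -0.19, b_interaction = 1.62) and the sample's trust SD (0.24):

```python
def simple_slope(b_predictor, b_interaction, moderator_value):
    """Slope of the focal predictor at a given (centered) moderator value."""
    return b_predictor + b_interaction * moderator_value

# Coefficients from the originality moderation model; trust SD = 0.24.
b_proximity, b_interaction, trust_sd = -0.19, 1.62, 0.24

low_trust = simple_slope(b_proximity, b_interaction, -trust_sd)   # -1 SD
high_trust = simple_slope(b_proximity, b_interaction, +trust_sd)  # +1 SD
print(round(low_trust, 3), round(high_trust, 3))  # → -0.579 0.199
```

At one SD below the mean the proximity slope is clearly negative (about -0.58), while at one SD above it is slightly positive (about 0.20), which matches the reported pattern that proximity undermines originality only under low trust.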
The text analytics results offer additional insights into the dynamics of human-AI
collaboration. The sentiment analysis indicates that cobots, in both low and high proximity
conditions, maintained a more consistently positive sentiment compared to human users. This
suggests that AI's communication style remains steady regardless of proximity, potentially
providing a stabilizing influence in the collaborative process. However, the higher sentiment
correlation in the low proximity group hints that in less integrated collaborations, human
users' sentiments align more closely with the AI's, perhaps reflecting a more transactional
mode of interaction. The differences in readability scores (Flesch-Kincaid and Gunning Fog
indices) between human and AI users across different proximities suggest that the complexity
and sophistication of language used by AI and humans vary with the level of collaboration.
Higher proximity might demand more complex language use, possibly indicating a deeper,
more nuanced exchange of ideas which could be beneficial for idea productivity and
feasibility, but not necessarily for originality. This aligns with the idea that while close
collaboration fosters productivity and feasibility of ideas, it may not always encourage the
intellectual autonomy required for original thinking. Such complex exchanges may
nonetheless be useful in collaborations where the focus is on developing and refining ideas
rather than on generating novel concepts. Additionally, the impact of trust on these dynamics
is evident, particularly in how it moderates the relationship between proximity and idea
originality. Trust appears to be a crucial factor in enabling the full potential of human-AI
collaboration.
General Discussion
A growing body of research has identified the role of human-AI collaboration in
influencing organizational, team, and individual outcomes (e.g., Sowa et al., 2021; Takko et
al., 2021). This research is, to our knowledge, the first to focus on the relationship between human-
AI collaboration and creativity. Our first study showed that human-AI collaboration leads to
higher idea productivity, idea originality and idea feasibility. The results of our second study
provide initial support for the notion that where high human-AI proximity is sought, this will
lead to higher idea productivity and idea feasibility than in general human-AI collaboration,
and that low trust has a negative effect on the relationship between human-AI proximity and
idea originality. Taken together, these findings offer several theoretical and practical
implications.
Theoretical Implications
Our research contributes to the literature on human-AI collaboration and creativity in several
ways. First, our work is a response to the call for more studies on the cognitive impact of
cobots (Guertler et al., 2023; Wilson & Daugherty, 2018) and points at the relevance of TTF
theory for understanding human-AI collaboration. Although creativity is considered not just
the highest form of human intelligence but an exclusive human capacity (Boden, 1998;
Boden, 2009; Cai et al., 2022), our results showed that human-AI collaboration stimulates the
number, feasibility, and originality of ideas. Moreover, bringing the literatures
together on human-AI collaboration and creativity offers a fruitful avenue for future research.
Second, our research may also have theoretical implications for the
distinction between high and low human-AI proximity. Our findings show that high proximity
between the human and the cobot yields higher idea productivity and idea feasibility but not
idea originality. This is in line with research that shows that not all human-AI collaboration
configurations perform the same (e.g., Goldberg, 2019; Pinto et al., 2022; Vinanzi et al., 2021).
In sum, these findings illustrate that to fully understand the influence of human-AI
collaboration on creativity, the specific configuration of that collaboration must be taken into
account.
Finally, the results of the moderation analysis offer additional insight into the
relationship between human-AI collaboration and creativity. Our findings suggest that human-
AI proximity can diminish idea originality when there is low trust in the cobot. However, it is
important to mention that trust, contrary to our expectations, did not have an effect on the
relationship between human-AI collaboration and idea productivity and between human-AI
collaboration and idea feasibility. This suggests that human-AI collaboration alone may
already be sufficient for generating more ideas and more feasible ideas. This is in
line with studies that suggest that trust in cobots is primarily based on the cobot's ability to
perform tasks effectively and not necessarily on its ability to explore alternatives (e.g.,
Amabile, 1996; El Zaatari et al., 2019).
Practical Implications
Our research has several practical implications for managers. First, our findings
suggest that employees who collaborate with AI-powered cobots will be more successful in
generating creative ideas. However, integrating cobots into organizational processes will
require employees to develop a new set of skills. Thus,
organizations may want to develop training programs to ensure employees develop the skills
needed to collaborate with cobots, such as prompting, curating, and cobot training. Indeed, the
adoption of human-AI processes demands new skills from employees (Wilson & Daugherty,
2018). Second, our findings suggest that organizations can foster trust in cobots to prevent
low trust from undermining idea originality. There are several ways to foster trust in cobots.
For instance, organizations could design training and development
programs that focus on AI transparency and cobots’ potential applications. Knowing what a
cobot can do is one of the key determinants of trust formation (Mara et al., 2021).
Finally, our results suggest that proximity in human-AI collaboration may be beneficial for
idea productivity and idea feasibility. Consequently, organizations should strive to promote a
culture that goes beyond human-AI collaboration and promotes proximity with cobots.
Limitations
Like most studies, the present set of studies is not without its limitations and the
findings should be interpreted in light of these limitations. First, both of our studies were
experimental studies and could raise questions concerning external validity. Although we
used an employee sample in both studies, it is unclear if these results can be replicated in a
natural environment. The experimental design was particularly appropriate for testing the
hypothesized relationships, as it made it possible to draw causal conclusions (Antonakis et al.,
2010) and allowed us to use objective measures of creativity. Nevertheless, to further
establish the relationship between human-AI collaboration and creativity, future research
should attempt to
replicate and extend our findings in a field setting. Second, the use of convenience samples in
both studies raises concerns regarding representativeness and generalizability. A strong point
of convenience samples, however, is that they are well suited to testing theoretical
relationships and mechanisms (Highhouse & Gillespie, 2009; Landers & Behrend,
2015). Nevertheless, future research should consider the use of random sampling to ensure
the generalizability of these findings.
Conclusion
The current research contributes to the development of literature on human-AI
collaboration and creativity. Our study focused attention on the determinant role of human-AI
proximity and trust in shaping creative outcomes. As cobots continue to evolve, their potential
applications in organizations are likely to expand,
creating new opportunities for competitive advantage. It is hoped that the preliminary findings
reported in this paper will stimulate future research interest in the relationship between
human-AI collaboration and creativity.
Acknowledgment
The authors would like to thank Kuba Białczyk for his assistance in this project.
References
Ajoudani, A., Zanchettin, A. M., Ivaldi, S., Albu-Schäffer, A., Kosuge, K., & Khatib,
https://doi.org/10.1037/0022-3514.45.2.357
Antonakis, J., Bendahan, S., Jacquart, P., & Lalive, R. (2010). On making causal
https://doi.org/10.1016/j.leaqua.2010.10.010
http://arxiv.org/abs/1912.02675
Baker, A. L., Phillips, E. K., Ullman, D., & Keebler, J. R. (2018). Toward an
https://doi.org/10.1145/3181671
https://doi.org/10.1609/aimag.v30i3.2254
Boden, M. A. (1998). Creativity and artificial intelligence. Artificial Intelligence,
Boussioux, L., N Lane, J., Zhang, M., Jacimovic, V., & Lakhani, K. R. (2023). The
Crowdless Future? How Generative AI Is Shaping the Future of Human Crowdsourcing. The
Crowdless Future.
Cai, J., Li, Z., Dou, Y., Li, T., & Yuan, M. (2022). Understanding adoption of high
off-site construction level technologies in construction based on the TAM and TTF.
https://doi.org/10.1108/ECAM-07-2021-0613
Cicchetti, D. V. (1994). Guidelines, criteria, and rules of thumb for evaluating normed
290. https://doi.org/10.1037/1040-3590.6.4.284
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. Hillsdale:
Lawrence Erlbaum.
Colton, S., & Wiggins, G. A. (2012). Computational creativity: The final frontier? In
Luc De Raedt et al. (Eds.), Ecai 2012, IOS Press, pp. 21-26.
Coronado, E., Kiyokawa, T., Ricardez, G. A. G., Ramirez-Alpizar, I. G., Venture, G.,
and classification of performance and human-centered factors, measures and metrics towards
https://doi.org/10.1016/j.jmsy.2022.04.007
Daisley, B. (2020). Don’t Let Your Obsession with Productivity Kill Your Creativity.
Das, P., & Varshney, L. R. (2020). Explaining artificial intelligence generation and
creativity: Human interpretability for novel ideas and artifacts. IEEE Signal Processing
Magazine, 39(4), 85-95. https://doi.org/10.1109/MSP.2022.3141365
Deci, E. L. (1972). The effects of contingent and noncontingent rewards and controls
https://doi.org/10.1016/0030-5073(72)90047-5
Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K.,
Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged
productivity and quality. Harvard Business School Technology & Operations Mgt. Unit
Dellermann, D.; Calma, A.; Lipusch, N.; Weber, T.; Weigel, S.; & Ebel, P. (2019a,
January 8-11). The future of human-AI collaboration: A taxonomy of design knowledge for
Dellermann, D.; Ebel, P.; Söllner, M.; & Leimeister, J. M. (2019b). Hybrid
https://doi.org/10.1007/s12599-019-00595-2
Dellermann, D.; Lipusch, N.; Ebel, P.; & Leimeister, J. M. (2019c). Design principles
for a hybrid intelligence decision support system for business model validation. Electronic
https://dl.acm.org/doi/10.5555/3091125.3091188.
De Melo, C. M.; Marsella, S.; & Gratch, J. (2018). Social decisions and fairness
change when people’s interests are represented by autonomous agents. Autonomous Agents
and Multi-Agent Systems, 32, 163–187. https://doi.org/10.1007/s10458-017-9376-6
De Melo, C. M.; Marsella, S.; & Gratch, J. (2019). Human cooperation when acting
3482–3487. https://doi.org/10.1073/pnas.1817656116
Demir, M.; McNeese, N. J.; & Cooke, N. J. (2020). Understanding human-robot teams
in light of all-human teams: Aspects of team interaction and shared cognition. International
https://doi.org/10.1016/j.ijhcs.2020.102436
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People
erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology:
https://doi.org/10.1016/j.autcon.2022.104298
El Zaatari, S., Marei, M., Li, W., & Usman, Z. (2019). Cobot programming for
collaborative industrial tasks: An overview. Robotics and Autonomous Systems, 116, 162-180.
https://doi.org/10.1016/j.robot.2019.03.003
Fan, L.; Scheutz, M.; Lohani, M.; McCoy, M.; & Stokes, C. (2017). Do we need
intelligence in humans compared to robots. In: Beskow, J., Peters, C., Castellano, G.,
O'Sullivan, C., Leite, I., Kopp, S. (eds) Intelligent Virtual Agents. IVA 2017. Springer, Cham.
https://doi.org/10.1007/978-3-319-67401-8_15
Electronic Visualisation and the Arts (EVA 2017), Proceedings of EVA, United Kingdom,
422-429. https://doi.org/10.14236/ewic/EVA2017.84
Fountaine, T., McCarthy, B., & Saleh, T. (2019). Building the AI-powered
Freedy, A.; de Visser, E.; Weltman, G.; & Coeyman, N. (2007). Measurement of trust
https://doi.org/10.1109/CTS.2007.4621745.
Fu, J.; Shang, R. A.; Jeyaraj, A.; Sun, Y.; & Hu, F. (2020). Interaction between task
characteristics and technology affordances: task-technology fit and enterprise social media
https://doi.org/10.1108/JEIM-04-2019-0105
Glikson, E.; & Woolley, A. W. (2020). Human trust in artificial intelligence: Review
https://doi.org/10.5465/annals.2018.0057
Golizadeh, H., Hosseini, M. R., Edwards, D. J., Abrishami, S., Taghavi, N., &
https://doi.org/10.1108/CI-09-2018-0074
Grewal, D., Guha, A., Satornino, C. B., & Schweiger, E. B. (2021). Artificial
intelligence: The light and the darkness. Journal of Business Research, 136, 229-236.
https://doi.org/10.1016/j.jbusres.2021.07.043
Guertler, M.; Tomidei, L.; Sick, N.; Paul, G.; Carmichael, M.; Hernandez Moreno, V.;
& Hussain, S. (2023). When is a robot a cobot? Moving beyond manufacturing and arm-based
https://doi.org/10.1017/pds.2023.390
Gunser, V. E., Gottschling, S., Brucker, B., Richter, S., Çakir, D., & Gerjets, P.
(2022). The pure poet: How good is the subjective credibility and stylistic quality of literary
short texts written with an artificial intelligence tool as compared to texts written by human
authors?. Proceedings of the Annual Meeting of the Cognitive Science Society, Ireland, 44,
1744-1750. https://doi.org/10.18653/v1/2022.in2writing-1.8
Gupta, A., Murali, A., Gandhi, D. P., & Pinto, L. (2018). Robot learning in homes:
Hancock, P. A., Billings, D. R., & Schaefer, K. E. (2011a). Can you trust your robot?
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., De Visser, E. J., &
mediation, moderation, and conditional process modeling [White paper]. Retrieved from
http://www.afhayes.com/public/process2012.pdf
Heyman, R. E., Lorber, M. F., Eddy, J. M., & West, T. V. (2014). Behavioral
observation and coding. In H. T. Reis & C. M. Judd (Eds.), Handbook of Research Methods
in Social and Personality Psychology (pp. 345-372). New York: Cambridge University Press.
Highhouse, S., & Gillespie, J. Z. (2009). Do samples really matter that much? In
Lance, C. E., & Vandenberg, R. J. (Eds.), Statistical and Methodological Myths and Urban
Legends: Doctrine, Verity and Fable in the Organizational and Social Sciences (pp. 247–
Hitsuwari, J., Ueda, Y., Yun, W., & Nomura, M. (2023). Does human–AI
collaboration lead to more creative art? Aesthetic evaluation of human-made and AI-
https://doi.org/10.1016/j.chb.2022.107502
https://doi.org/10.1109/THMS.2019.2904558
Howard, M. C., & Rose, J. C. (2019). Refining and extending task–technology fit
theory: Creation of two task–technology fit scales and empirical clarification of the construct.
Jessup, S. A.; Gibson, A.; Capiola, A. A.; Alarcon, G. M.; & Borders, M. (2020).
Investigating the effect of trust manipulations on affect over time in human-human versus
https://doi.org/10.2307/256287
Kang, H., & Lou, C. (2022). AI agency vs. human agency: Understanding human–AI
interactions on TikTok and their implications for user engagement. Journal of Computer-
Keshvarparast, A., Battini, D., Battaia, O., & Pirayesh, A. (2023). Collaborative robots
in manufacturing and assembly systems: Literature review and future research agenda.
Kittur, A.; Chi, E. H.; & Suh, B. (2008). Crowdsourcing user studies with Mechanical
https://doi.org/10.38016/jista.682479
halodoc, a telehealth mobile application with task technology fit as moderator variable.
between organizational, mechanical turk, and other convenience samples. Industrial and
https://doi.org/10.1017/iop.2015.13
Lauraéus, T., Kaivo-oja, J., Knudsen, M. S., & Kuokkanen, K. (2021). Market
2021-0006
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate
Madjar, N., Greenberg, E., & Chen, Z. (2011). Factors for radical creativity,
Maedche, A., Legner, C., Benlian, A., Berger, B., Gimpel, H., Hess, T., Hinz, O.,
Morana, S., & Söllner, M. (2019). AI-based digital assistants. Business & Information
Mara, M.; Meyer, K.; Heiml, M.; Pichler, H.; Haring, R.; Krenn, B.; Gross, S.;
Reiterer, B.; & Layer-Wagner, P. (2021). CoBot Studio VR: A virtual reality game
Matarese, M., Rea, F., & Sciutti, A. (2022). Perception is only real when shared: A
Maurtua, I., Ibarguren, A., Kildal, J., Susperregi, L., & Sierra, B. (2017). Human–
https://doi.org/10.1177/1729881417716010
McGraw, K. O., & Wong, S. P. (1996). Forming inferences about some intraclass
correlation coefficients. Psychological Methods, 1(1), 30–46. https://doi.org/10.1037/1082-989X.1.1.30
Muir, B. M., & Moray, N. (1996). Trust in automation. Part II. Experimental studies
of trust and human intervention in a process control simulation. Ergonomics, 39(3), 429-460.
https://doi.org/10.1080/00140139608964474
Nijstad, B. A., De Dreu, C. K. W., Rietzschel, E. F., & Baas, M. (2010). The dual
https://doi.org/10.1080/10463281003765323
Ostheimer, J., Chowdhury, S., & Iqbal, S. (2021). An alliance of humans and
machines for machine learning: Hybrid intelligent systems and their design principles.
Paulus, P. B. (2000). Groups, teams, and creativity: The creative potential of idea-
0597.00013
Prockl, G., Roeck, D., Jensen, T., Mazumdar, S., & Mukkamala, R. R. (2022). Beyond
task-technology fit: Exploring network value of blockchain technology based on two supply
chain cases. Proceedings Hawaii International Conference on System Sciences 2022 (HICSS
Pinto, A., Sousa, S., Simões, A., & Santos, J. (2022). A trust scale for human-robot
interaction: Translation, adaptation, and validation of a human computer trust scale. Human
Rank, J., Pace, V. L., & Frese, M. (2004). Three avenues for future research on
https://doi.org/10.1111/j.1464-0597.2004.00185.x
https://doi.org/10.1016/j.ijinfomgt.2023.102699
Spritzker (Eds.), Encyclopedia of Creativity (3 ed., Vol. 1, pp. 562-568). Academic Press.
https://doi.org/10.1016/B978-0-12-8093245.06200-3
Rietzschel, E. F., Nijstad, B. A., & Stroebe, W. (2019). Why Great Ideas Are Often
Overlooked: A Review and Theoretical Analysis of Research on Idea Evaluation and
Selection. In P. B. Paulus, & B. A. Nijstad (Eds.), The Oxford Handbook of Group Creativity
https://doi.org/10.1093/oxfordhb/9780190648077.013.11
Rietzschel, E. F., Wisse, B., & Rus, D. (2017). Puppet masters in the lab:
Experimental methods in leadership research. In Schyns, B., Hall, R. J., & Neves, P. (Eds.),
Publishing.
Schrier, T., Erdem, M., & Brewer, P. (2010). Merging task‐technology fit and
https://doi.org/10.1108/17579881011078340
Sowa, K., Przegalinska, A., & Ciechanowski, L. (2021). Cobots in knowledge work:
142. https://doi.org/10.1016/j.jbusres.2020.11.038
Takko, T.; Bhattacharya, K.; Monsivais, D.; & Kaski, K. (2021). Human-agent
https://doi.org/10.1038/s41598-021-90123-8
https://doi.org/10.1108/17410400510571437
Tuckman, B. W. (1965). Developmental sequence in small groups. Psychological
Bulletin, 63(6), 384–399. https://doi.org/10.1037/h0022100
Vinanzi, S., Cangelosi, A., & Goerick, C. (2021). The collaborative mind: Intention
https://doi.org/10.1016/j.isci.2021.102130
Wang, X., Wong, Y. D., Chen, T., & Yuen, K. F. (2021). Adoption of shopper-facing
technology fit and technology trust. Computers in Human Behavior, 124, 1-13.
https://doi.org/10.1016/j.chb.2021.106900
Table 1: Correlations and Descriptive Statistics Study 1.
M SD 1 2 3 4
1. Human-AI collaboration 0.50 0.50 -
2. Idea productivity 4.26 0.38 .41** -
3. Idea originality 9.48 5.33 .25** .20** -
4. Idea feasibility 3.15 0.83 .23** .19** .16* -
*p < .05; **p < .001
Table 2: Correlations and Descriptive Statistics Study 2.
M SD 1 2 3 4 5
1. Human-AI proximity 1.48 0.50 -
2. Idea productivity 5.42 3.71 .38** -
3. Idea originality 3.19 0.86 -.07 -.06 -
4. Idea feasibility 3.55 0.73 .19** .23** .24** -
5. Trust 1.85 0.24 .27** .04 .00 .01 -
*p < .05; **p < .001
B SE t p Adj. R2 Model F p
Intercept 3.14 0.06 51.87 <0.001 0.04 3.11 <0.05
Human-AI proximity -0.19 0.12 -1.52 0.129
Trust 0.42 0.28 1.50 0.136
Human-AI proximity x trust 1.62 0.57 2.85 0.005
Note. Bootstrap sample size = 10,000; level of confidence interval = 95%; unstandardized
regression coefficients.
B SE t p Adj. R2 Model F p
Intercept 3.56 0.05 69.74 <0.001 0.04 3.00 0.031
Human-AI proximity 0.26 0.10 2.54 0.012
Trust 0.81 0.24 0.34 0.731
Human-AI proximity x trust -0.42 0.48 -0.88 0.378
Note. Bootstrap sample size = 10,000; level of confidence interval = 95%; unstandardized
regression coefficients.
Table 5. Moderation Analysis Summary of the Human-AI Proximity and Idea Productivity Relationship
B SE t p Adj. R2 Model F p
Intercept 5.34 0.24 21.84 <0.001 0.15 12.89 <0.001
Human-AI proximity 2.88 0.49 5.86 <0.001
Trust -.45 1.13 -.40 0.690
Human-AI proximity x trust 2.38 2.30 1.04 0.301
Note. Bootstrap sample size = 10,000; level of confidence interval = 95%; unstandardized
regression coefficients.
Figure 3. Two-way ANOVA of idea feasibility across conditions.
Figure 6. Two-way ANOVA of idea feasibility across conditions.
Figure 7. Interaction between human-AI proximity and trust quality in predicting idea
originality.
*p < .05; **p < .01; ***p < .001; ****p ≤ .0001
Author Biography
Aleksandra Przegalińska received her PhD in philosophy of artificial intelligence from the
Department of Philosophy of Culture of the Institute of Philosophy of the University of
Warsaw. She is currently an associate professor at the Department of Management in Digital
and Networked Societies and Vice Rector at Kozminski University. Until recently, she
conducted research at the Massachusetts Institute of Technology in Boston. She is a Senior
Research Associate at the Center for Labour and Just Economy at Harvard University. She
graduated from The New School for Social Research in New York. She is interested in the
development of new technologies, natural language processing, the progress of humanoid
artificial intelligence, social robots and wearable technologies.
Leon Ciechanowski currently holds dual roles as a research scholar at MIT and an assistant
professor at Kozminski University. He works in the field of human-computer interaction,
phenomenology, and the sense of agency and control. He is also a consultant in various
business sectors (banking, data analysis, telecommunications, pharmaceuticals), where he
prepares expert opinions and writes articles related to fintech, artificial intelligence, Big Data,
and other technological innovations.