
Artificial Intelligence, Augmented Reality, Location-based narratives, and user participation as means for creating engaging Extended Reality experiences

Argyro Papathanasiou1, Natali-Teresa Chavez1, Paraskevi Bokovou1, Christina Chrysanthopoulou1, Aikaterini-Dimitra Gerothanasi1, Nefeli-Maria Dimitriadi1

1 Aristotle University of Thessaloniki, School of Film, Ikoniou 1, Stavroupoli, Thessaloniki, Greece

Abstract
When the development of modern media and computers crossed paths, according to Lev Manovich, a new form of media was born, described by the term "new media" and characterized by multimodal narratives, in which multiple simultaneous information flows are used and the user's engagement is required. With the evolution of digital technologies, regarding both hardware and human-computer interaction interfaces, and software and multimedia content, a branch of "new media" has taken on the "status" of a reality, which we refer to as "extended reality". Extended Reality (XR) is an umbrella term which encompasses virtual, augmented, and mixed reality. The term covers the full spectrum from real to virtual in the concept of the reality-virtuality continuum introduced by P. Milgram and F. Kishino back in 1994. XR is a new, computer-mediated hybrid reality that we experience by participating in a multimodal experience comprising narrative audiovisual content. Moreover, the narrative audiovisual content developed by and for XR technologies needs to engage the viewer and address multimodal narrative needs. This paper aims to explore AI, AR, location-based narratives, and user participation as means for the creation of impactful XR experiences. In the paper we will study and present the use of:

● AR/MR tools in interactive film platforms, television, and location-based narratives,
● AI use in cinematic narratives,
● Interactive narrative as a means for increasing empathy and engagement, and
● User participation in both virtual and physical space, in new forms of performative arts, "extended" with digital technologies.

Keywords
Artificial Intelligence, Augmented Reality, Virtual Reality, Mixed Reality, Extended Reality, Location-Based Narratives, User Engagement, Multimodal Experience, Narrative Audiovisual, Interactive Platforms, Interactive Narrative, User Experience, Motion Capture, Virtual Actors, Performance Capture, Facial Motion Capture.

1. Introduction

This paper aims to investigate the impact of AR/MR technologies, AI, and location-based narratives in the film industry, from the stages of pre-production to the actual production, as means for the creation of engaging interactive experiences. Over the past few years, media which traditionally assigned the role of "spectator" to the audience, such as cinema, theater, performance art, and television, are re-establishing their relation to the audience, turning it into an active participant in the narrative. Interactive media not only engage the audience in a different way than traditional media, but are also capable of creating empathetic experiences, due to the audience's active participation and immersion. The authors will investigate multimodal narrative needs through AR/MR technologies.

In particular, they will examine the way these technologies can be utilized in documentary films and interactive film platforms to increase viewer engagement and create an experiential model of viewing. Moreover, these technologies will be researched in interactive film and TV platforms. Subsequently, the evolution of these technologies will be examined, to assess how they introduce new ways and channels of distribution of audiovisual content. Concerning AI, the authors will analyze the way such tools can be useful to film and theater acting and instigate viewer engagement.

Concluding, the authors will explore the influence of new technologies and tools as a new model of interactive storytelling, which aims to immerse the audience, create empathy, and aid in processing impactful experiences.

2. AR/MR tools in interactive film platforms, television and location-based narratives

2.1 Categorization of AR-MR applications


AR-MR applications are classified along either four or three axes. The four axes depend on what the application is centered on and are the following: technique-centered, user-centered, information-centered, and based on the target of the augmentation (Normand et al., 2012). The three axes are based on the degree of freedom of the detection used, on how the augmentation is presented in relation to the user, and on the relationship of the application content with time (pre-recorded or real-time); a small data-model sketch of these axes follows after the lists below.

The taxonomy focused on the degree of freedom in detection can be divided into two main categories:

● Marker-based augmented reality (i.e., QR codes or fiducial markers)
● Markerless AR, position-based or location-based via GPS

The second categorization axis, concerning how the augmentation is used by the user, is divided into two sub-categories:

● Mediated augmentation, where the augmentation takes place on a projection interface, as in optical see-through applications using screens or glasses, and in video see-through applications, where a back-located camera (such as the cameras of tablets and smartphones) provides the user with a view of both the physical world and the augmented elements. Mediated augmentation can be further divided into two subtypes: projection-based augmented reality and superimposition-based augmented reality.
● Direct augmentation, where the virtual content is added to the physical world without the need for a mediator or interface between the eyes and the real world.
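
To make the above taxonomy concrete, the following minimal Python sketch (our illustration, not part of Normand et al.'s work) encodes the three classification axes as a small data model that an XR production team could use to tag and filter its AR scenes; the class and scene names are hypothetical.

Listing 1: Illustrative data model for the three-axis AR classification

from dataclasses import dataclass
from enum import Enum

class Detection(Enum):            # degree of freedom of detection
    MARKER_BASED = "marker-based (QR code / fiducial markers)"
    MARKERLESS = "markerless (position- or location-based via GPS)"

class Presentation(Enum):         # how the augmentation reaches the user
    MEDIATED_OPTICAL = "mediated, optical see-through (glasses, screens)"
    MEDIATED_VIDEO = "mediated, video see-through (tablet / smartphone camera)"
    DIRECT = "direct (no interface between the eyes and the real world)"

class Timing(Enum):               # relationship of the content with time
    PRE_RECORDED = "pre-recorded"
    REAL_TIME = "real-time"

@dataclass
class ARScene:
    name: str
    detection: Detection
    presentation: Presentation
    timing: Timing

# Example: a GPS-triggered documentary scene viewed through a smartphone camera.
scene = ARScene("harbour_interview", Detection.MARKERLESS,
                Presentation.MEDIATED_VIDEO, Timing.PRE_RECORDED)
print(scene)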

2.2 AR during the film production stages: possibilities and solutions


The concept of AR, as derived from the above, can be described as the overlay of digital content on a real environment. To project AR content, glasses or mobile devices are used, allowing the viewer to experience a new model of viewing and to explore new concepts. MR, on the other hand, enables the user not only to see content overlaid on the real environment, but also to interact with it in physical space, since MR allows physical and virtual objects to co-exist in mixed reality environments and to interact with each other in real time.

If we consider the use of these technologies in cinema so far, and given their rapid development over recent years, we could say that AR and MR seek to pop the film out of the two-dimensional screen and place it in three-dimensional space.

Additionally, in the past few years there has been a tendency to use AR technology during the pre-production stages. This can help filmmakers in collecting material, storyboarding, or even in creating the actual film. It is worth noting that, apart from AR technologies, the use of VR in pre-production has also become more and more popular among filmmakers over the past few years.

As far as AR is concerned, it gives the filmmaker the ability to explore a whole new way of collecting material during the first stage, since it can be done with the camera of a mobile device and enables them to record data of the physical environment and the real actor, combined with the connected digital material. During film production, AR gives the ability to film digital and physical elements together. In this case, the filmmaker can create non-existing or fictional elements, as well as mix VFX and other CGI elements more easily. For example, the short video "Nest" by Dunkan Walker was filmed solely using an iPhone and ARKit tools. The creator used CGI characters, which were directed through a control panel on his phone, and had the ability to shoot them multiple times from different angles and scenes.

Regarding AR in documentary films, it can become an extremely useful tool in the hands of filmmakers, since they are able to incorporate digital elements within reality which cannot otherwise be found or filmed. Finally, AR can also be used for the projection of the documentary at specific sites, creating a new form of location-based narratives, since documentary films are inextricably linked with certain locations.

Figure 2: Outdoor night AR shot placement with the ProVis Pro iOS application. Documentary in progress by A. Papathanasiou.

2.3 AR-MR in new forms of location-based narratives

Since the broad acceptance of games as a new form of art (Santorineos & Dimitriadi, 2006) and of gamification as a major trend of digital culture, AR/MR technologies have created a bridge between interactive media content and real space, opening new possibilities for experimentation in cinematography.

The integration of AR/MR technologies during the last two years in a variety of devices – from "traditional" media such as television to mobile phones, and more – creates a new field of research in cinema focusing on the viewer experience, while expanding the audience and shifting the age range of the viewers to a younger and more technology-savvy audience. In parallel, there has been a trend over the past few years where new ways of distribution (subscription channels, platforms, etc.) strongly influence the ways of film projection (traditionally in cinema theaters and on television) and are heading towards a site-specific approach.

There are several content development approaches regarding the scenarios that may be developed for AR location-based narratives (a minimal sketch of such geofenced story nodes follows below):

● One approach "links" the actual location in a direct way with the virtual narration, as in documentary AR narratives, for example by "bringing" to life stories that unfolded in specific locations, such as locations where major historical events took place.
● Another approach gives a new meaning to a specific location – or/and creates a narrative storyline connecting multiple locations – which was nonexistent before the AR narrative.
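
As a minimal, hedged sketch of the first approach (our illustration, not an existing application), a location-based AR narrative can be modelled as a set of story nodes geofenced to real coordinates, each unlocking when the viewer walks within a given radius; coordinates, radii, file names and node titles below are hypothetical.

Listing 2: Geofenced story nodes for a location-based AR narrative

from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class StoryNode:
    title: str
    lat: float
    lon: float
    radius_m: float
    media: str                 # AR scene asset to load when the node unlocks

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance between two GPS points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

NODES = [
    StoryNode("Old harbour testimony", 40.6320, 22.9350, 40.0, "harbour_scene.usdz"),
    StoryNode("Market square memory", 40.6366, 22.9412, 30.0, "market_scene.usdz"),
]

def nodes_in_range(user_lat, user_lon):
    """Return the story nodes whose geofence the user is currently inside."""
    return [n for n in NODES
            if distance_m(user_lat, user_lon, n.lat, n.lon) <= n.radius_m]

# The mobile client would poll the device GPS and load the AR scene of any node in range:
print([n.title for n in nodes_in_range(40.6321, 22.9351)])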

The creation of augmented interactive digital storytelling is a new area of experimentation where analogue information can be combined with digital information to enhance and enrich the user experience with interactions.

The study and usability research of the various ways of multimodal interaction, and of the methods and techniques by which the narration unfolds via the integration of multimedia material (images, sounds, video, 2D graphics, 3D graphics and text) in an AR application, proves necessary in order to accelerate the development of new types of digital cinematic storytelling.


According to Ronald Azuma (2015), one of the ultimate uses of mixed and augmented reality is to allow the birth of new forms of narrative that connect virtual content with specific locations, while in "Compelling Experiences in Mixed Reality Interactive Storytelling", Fred Charles et al. (2004) also point out the need to explore the possibilities of Artificial Intelligence techniques in new AR and MR interactive narratives.

In these new forms of digital interactive storytelling, the following possibilities can be explored as interactive mechanisms that move the story into unexpected, personalized directions (see the sketch after this list):

1. the integration of the user's behavior, by tracking their gestures,
2. the integration of the user's intentions, by enabling interaction with virtual actors and/or environmental elements through speech recognition, and
3. the integration of the user's physical interaction with real objects, allowing them to change the real set and affect the virtual story accordingly.
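
A small event-driven sketch (our illustration only) shows how these three channels could feed a single story state machine: gesture events, recognised speech intents and tracked physical objects all map to the same set of narrative transitions. The event and scene names are invented for the example.

Listing 3: One story state machine fed by gesture, speech and object events

from typing import Callable, Dict, List, Tuple

class InteractiveStory:
    def __init__(self):
        self.scene = "intro"
        # (current scene, incoming event) -> next scene
        self.transitions: Dict[Tuple[str, str], str] = {
            ("intro", "gesture:point_at_door"): "courtyard",
            ("intro", "speech:ask_about_photo"): "flashback",
            ("courtyard", "object:lantern_lifted"): "night_walk",
        }
        self.listeners: List[Callable[[str], None]] = []

    def on_event(self, event: str) -> None:
        """Called by the gesture tracker, the speech recogniser or the object tracker."""
        next_scene = self.transitions.get((self.scene, event))
        if next_scene:
            self.scene = next_scene
            for listener in self.listeners:
                listener(next_scene)   # e.g. ask the renderer to load the next AR scene

story = InteractiveStory()
story.listeners.append(lambda scene: print("now playing:", scene))
story.on_event("speech:ask_about_photo")   # -> now playing: flashback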

2.4 Interactive film platforms

In the world of digital interactive film platforms, augmented and mixed reality open new avenues for filmmakers and viewers to create, experience and participate in a story.

The artwork in progress "Re-Rec Borders: an interactive documentary map" by the director Menelaos Karamaghiolis seeks to give the viewer a new multimodal storytelling experience. It combines traditional cinematography with augmented and virtual reality technologies, transferring the action from the 2D screen to real space and, specifically, to places related to the narration, or even to the routes of the protagonists of the documentary films contained in the platform.

This new way of viewing allows the spectators to experience the story of the protagonists all around the world, to participate in it, to immerse themselves and to increase their involvement. At the same time, they can create their own storyline, since this kind of platform allows for non-linear viewing of the stories. This way, every viewing is unique, even for the same spectator. However, this creates new challenges for the directors and raises concerns, since the viewers can choose their own point of view, viewing time and storyline, elements that in traditional cinematography are fixed and totally controlled by the director.

2.5 XR and Television

In the case of TV series, the scriptwriter may not follow traditional linear storytelling but will provide alternative narratives and multiple storylines so as to enable possible directions to be taken by the spectators. The spectators, in turn, may wish to interact, thus becoming co-creators, as they may eventually affect the story. Their participation is based on narratives pre-written by the scriptwriter, whose challenge is to allow the audience to be part of a linear story (see also Weijdom, 2017). This implies that the story must be written in such a way as to facilitate alternative directions while remaining coherent. Coherence is a fundamental quality of a good story and must be kept throughout, without being disrupted by the spectators' interaction.

There are different approaches that enable interactive storytelling. One is character-based (Cavazza et al., 2002), according to which stories derive from the way characters interact with each other. This, however, puts the story's coherence at risk. Another approach is plot-based (Grasbon and Braun, 2001; Paiva et al., 2001; Spierling et al., 2002), according to which the focus lies on the structure of the plot. This approach limits the possibility of interaction. Which of the two approaches is taken also depends on the genre of the story (Camanho et al., 2009). The authors furthermore contend that "the user's immersion in the story" should not be disrupted and that "interacting with a story is basically intellectual, rather than physical" (Camanho et al., 2009:24). Another parameter to be taken into account is how much the spectators wish to be involved and to interact with the story. Some might observe rather than actively take part. Others might interact extensively and take advantage of any interactive means provided. This implies that there should be a variety of levels for stepping into the story. A schematic sketch of such a plot-based branching structure follows below.
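
The following schematic sketch (our illustration, not a production system) shows one way a plot-based branching structure can keep coherence: every scene offers optional choices, all branches reconverge on mandatory beats written by the scriptwriter, and a default path gives passive viewers a purely linear cut. Scene and choice names are invented.

Listing 4: A plot-based branching scene graph with reconverging beats

SCENES = {
    "cold_open":     {"default": "arrival",       "choices": {"follow_sister": "arrival",
                                                              "stay_home": "phone_call"}},
    "phone_call":    {"default": "arrival",       "choices": {}},   # reconverges
    "arrival":       {"default": "confrontation", "choices": {"confront": "confrontation",
                                                              "eavesdrop": "secret"}},
    "secret":        {"default": "confrontation", "choices": {}},   # reconverges
    "confrontation": {"default": None,            "choices": {}},   # mandatory final beat
}

def play(viewer_choices=None):
    """Walk the graph; a passive viewer (no choices) always gets the default linear path."""
    viewer_choices = viewer_choices or {}
    scene, path = "cold_open", []
    while scene:
        path.append(scene)
        node = SCENES[scene]
        scene = node["choices"].get(viewer_choices.get(scene), node["default"])
    return path

print(play())                               # linear cut for a passive viewer
print(play({"cold_open": "stay_home"}))     # one interactive variation, same ending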

The question to be tackled is how a TV scene can be turned into an interactive narrative. To transform a story into an interactive one, several crucial parameters need to be taken into consideration, such as the ones discussed below. These will allow the viewers' engagement, empathy, immersion, and interaction with the story. Especially in a TV series, the viewer must be engaged so as to continue watching the rest of the season(s), since a TV series is not a one-time experience; it has duration in time.

The story cannot have the same starting point as non-interactive linear TV, since the viewer must be part of the story. The opening scene should be constructed in a completely different way, to attract the viewer to get involved.

Another parameter in new XR narratives is the viewers' POV. In an interactive script the viewer can be given the possibility to embody the protagonist, which gives the viewers the opportunity to explore the story and the character on their own initiative. The audience explores the world and the characters interactively. In TV narratives we explore the characters through their actions. In a new interactive scenario, it is the audience's actions that would lead to understanding the character, raising the viewer's empathy. Another option would be to give the viewers the possibility to switch POVs during the scene, and thus to embody different characters of the story.

Moreover, the screenwriter's challenge is to invent narrative element(s) that would direct the viewer, now a participant, towards a linear and coherent story.

The potential alternatives above must be examined and tested through XR technologies. For this purpose, the script will be enriched with new narrative elements, and multiple directions will need to be invented to cover the needs of the viewers/participants.

3. AI use in cinematic narratives

AI is currently used in cinema as a tool to facilitate, assist, enhance and enrich cinematic production, and as a new form of directing, acting and screenplay writing via machine learning and other techniques.

Specifically in acting, the technology that has played a crucial role is Motion Capture: "the process of recording the movements of a real body in 3D space as data, which is then used to create a computer-generated body" (Pizzo, 2016). More recently, the term Performance Capture has arisen: the capture of an actor's 3D performance through facial expressions and body language. The mapping is realized by cameras, laser scanners or even, as recently introduced, other AI tools that can take as input any standard 2D video, captured even by a smartphone (Salian, 2022). MoCap and FaMoCap (facial motion capture) share similar technology, but FaMoCap has higher resolution requirements, so that subtle expressions can be detected and tracked even from micro-movements (Kennedy, 2021).

The actor Andy Serkis holds a record in roles that required MoCap and is considered "the godfather of Motion Capture" (Stern, 2014) [Gollum (Jackson, 2001), King Kong (Jackson, 2005), Baloo (Serkis, 2018), etc.].

For Thanos in Avengers: Infinity War (A. and J. Russo, 2018), MoCap data runs through machine learning software that utilizes massive amounts of animation data from real actors to create a believable virtual actor (Digital Domain, 2018).
Figure 3: Masquerade test footage from the SIGGRAPH 2017 paper

More specifically, according to the company that created the Masquerade software, as presented in their most recent report, the AI:

1. collects the markers automatically, generating their 3D positions from the HMC (head-mounted camera) images,
2. automatically stabilizes the head by examining how the actor's head is moving in relation to the helmet,
3. corrects the missing/occluded markers,
4. enhances the marker geometry to final high-resolution geometry (thousands of points are tracked on the actor's face through a seated capture system, and in that way a large database of the actor's face motion and shapes is created),
5. corrects the actor's or the 3D character's performance (there is a machine learning component that lets artists train the system to change an actor's expression into a specific character expression and vice versa), and
6. provides eye gaze and eyelid tracking.

A schematic outline of this kind of marker-processing flow is sketched below.
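
The following runnable outline (our illustration only, not Digital Domain's code) mimics the kind of data flow the steps above describe: tracked facial markers go in, a stabilised, densified face representation comes out. The arrays are toy stand-ins for the learned models and real capture data.

Listing 5: Schematic marker-processing flow of a facial performance capture pipeline

import numpy as np

def stabilise(markers: np.ndarray, head_motion: np.ndarray) -> np.ndarray:
    """Remove helmet/head motion so only facial deformation remains (cf. step 2)."""
    return markers - head_motion                      # toy stand-in for the real solve

def fill_missing(markers: np.ndarray) -> np.ndarray:
    """Replace occluded / dropped marker samples (cf. step 3) with per-marker means."""
    markers = markers.copy()
    column_means = np.nanmean(markers, axis=0)
    rows, cols = np.where(np.isnan(markers))
    markers[rows, cols] = column_means[cols]
    return markers

def densify(markers: np.ndarray, points_out: int = 4000) -> np.ndarray:
    """Lift the sparse markers to high-resolution geometry (cf. step 4); a random
    linear map stands in for the model learned from the seated-capture database."""
    rng = np.random.default_rng(0)
    basis = rng.standard_normal((markers.shape[1], points_out))
    return markers @ basis

frames, n_markers = 240, 100                          # ~10 s of head-mounted-camera footage
markers = np.random.default_rng(1).standard_normal((frames, n_markers))
markers[50:55, 10] = np.nan                           # a briefly occluded marker
head_motion = np.zeros((frames, n_markers))

dense_face = densify(fill_missing(stabilise(markers, head_motion)))
print(dense_face.shape)                               # (240, 4000) per-frame geometry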

By developing Masquerade, the studio aimed to create machine-learning software using the same images as the portable helmet system. The actor's face would also be produced in high resolution, with accurate wrinkles, not just a few moving points. "It let all that detail in the actors' facial performance be captured. Then the 3D data captured is used to translate the performance to the actor's digital self or character. Then, preserve as much of the performance as possible" (Failes, 2020).

Other AI tools related to actors that are used in production facilitate:

1. Casting, where AI platforms can find the right actor from talent databases (Smartclick, n.d.),
2. Pre-production, where the use of virtual actors becomes an important part of previsualization, digital extras and remote rehearsals (Slater et al., 2000), and
3. Design, where morphing algorithms offer human-like expressions to virtual actors.

Figure 4: Acting in Virtual Reality. M. Slater, J. Howell, A. Steed, D-P. Pertaub, M. Gaurau, S. Springel.

Apart from superhero and epic films, CGI technology has also been used to create actors' digital twins. A prime example is Audrey Hepburn's digital recreation (McGee, 2014) for a commercial in 2013. It is not the first time that cinema technology has offered such an opportunity to the spectators: James Dean, Bruce Lee, Paul Walker, and other actors have appeared on the screen after their death. Some years ago, it was humans who made the selection among tons of images from multiple different angles to create these digital replicas. Nowadays this process is executed by well-trained AI tools which resynthesize legends by drawing information from a huge data pool (photographic and video material). Deepfakes, for example, are built with machine learning tools known as autoencoders and generative adversarial networks (GANs).
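
For readers unfamiliar with the technique, the sketch below (an illustration under simplifying assumptions, not a description of any production tool) shows the classic deepfake idea: one encoder shared by two decoders, each trained to reconstruct one identity, so that swapping decoders transfers an expression from one face to the other. Image size, network depth and data handling are reduced to a minimum.

Listing 6: Minimal shared-encoder / two-decoder autoencoder behind face swapping

import tensorflow as tf
from tensorflow.keras import layers, Model

IMG_SHAPE = (64, 64, 3)       # assumed face-crop resolution
LATENT = 256

def build_encoder() -> Model:
    inp = layers.Input(shape=IMG_SHAPE)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    z = layers.Dense(LATENT, activation="relu")(x)
    return Model(inp, z, name="shared_encoder")

def build_decoder(name: str) -> Model:
    z = layers.Input(shape=(LATENT,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(z, out, name=f"decoder_{name}")

encoder = build_encoder()
decoder_a, decoder_b = build_decoder("actor_a"), build_decoder("actor_b")

# Two autoencoders share the encoder; each is trained to reconstruct its own actor's faces.
autoencoder_a = Model(encoder.input, decoder_a(encoder.output))
autoencoder_b = Model(encoder.input, decoder_b(encoder.output))
autoencoder_a.compile(optimizer="adam", loss="mae")
autoencoder_b.compile(optimizer="adam", loss="mae")

# After training on two (hypothetical) face datasets, the swap is simply:
# decoder_b(encoder(frame_of_actor_a)) -> actor A's expression rendered on actor B's face.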

There is a need for further research and in-depth analysis of acting methods, actors' expressions, body movement and voice qualities, in order to create an accurate generic virtual actor from scratch using big data, and not just a modification of the characteristics and acting methods of a specific actor.

Figure 6: IMDb short movie poster

AI can also take on the role of the director. A bright example is the short movie Campari Red Diaries: Fellini Forward, a result of human and AI collaboration (UNIT9, 2021). Custom tools combined with algorithm-building frameworks such as TensorFlow and PyTorch, among others, were used to create a Fellini-style script and the Fellini shot prediction model. Identification and classification of different cinematic shot types was realized via a shot-type classifier model based on a ResNet-50 network; object detection was handled by Mask R-CNN and real-time multi-person tracking by YOLOv3 and DeepSort. For emotion detection within a live camera feed, Keras, the Python deep learning API, was used. In general, a supervised learning approach was followed (Stuart, 2021).
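
As an indication of what such a shot-type classifier involves (a hedged sketch of the general technique, not UNIT9's actual implementation), a ResNet-50 backbone can be fine-tuned to label frames by shot type; the class names, image size and data layout below are illustrative assumptions.

Listing 7: Fine-tuning a ResNet-50 backbone to classify cinematic shot types

import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50

SHOT_TYPES = ["extreme_close_up", "close_up", "medium", "long", "extreme_long"]

backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg",
                    input_shape=(224, 224, 3))
backbone.trainable = False                       # first train only the new classification head

inputs = layers.Input(shape=(224, 224, 3))
x = layers.Rescaling(1.0 / 255)(inputs)          # simplified normalisation for the sketch
x = backbone(x)
outputs = layers.Dense(len(SHOT_TYPES), activation="softmax")(x)
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Training would use film frames grouped in folders named after the shot types, e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory("shots/", image_size=(224, 224))
# model.fit(train_ds, epochs=5)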

Figure 7: Do You Love Me (2016)

AI tools can cover many aspects of film production. Indicatively:

● Writing scripts: large amounts of movie data are analyzed, and unique scripts are produced.
● Pre-production: repérage (location scouting) and scheduling.
● Subtitle creation (a minimal sketch follows below).
● Movie promotion.
● Movie editing.
● Music scores: Iamus, the first AI composer of contemporary classical music (Ball, 2012).
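
Subtitle creation is the easiest of these to illustrate. The sketch below (our example; the paper does not name a specific tool) uses the open-source Whisper speech-recognition model to transcribe a film's dialogue and writes the timed segments out as an .srt subtitle file; the file names are hypothetical.

Listing 8: AI-assisted subtitle creation with an open-source speech-to-text model

import whisper   # pip install openai-whisper

def to_srt_time(seconds: float) -> str:
    """Format seconds as the HH:MM:SS,mmm timestamps used in .srt files."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

model = whisper.load_model("base")                  # small general-purpose model
result = model.transcribe("documentary_cut.mp4")    # hypothetical input file

with open("documentary_cut.srt", "w", encoding="utf-8") as srt:
    for i, segment in enumerate(result["segments"], start=1):
        srt.write(f"{i}\n"
                  f"{to_srt_time(segment['start'])} --> {to_srt_time(segment['end'])}\n"
                  f"{segment['text'].strip()}\n\n")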

This mix of new media, in combination with the traditional process of acting on camera, constitutes a challenge for the actor. This interdisciplinary field will also encourage a deep theoretical examination of the newly developed technological interfaces and tools (immersive VR head-mounted displays, motion sensors, body and eye tracking systems, wired biofeedback sensors, EEG).

4. Interactive narrative as a means for increasing empathy and engagement

As Lev Manovich argues in The Language of New Media, non-interactive media are at their core interactive as well, since they are filtered through the perception of the audience. On the other hand, interactive media, from make-your-own-adventure text stories to video games and virtual reality experiences, pave the way for truly immersive storytelling. In these cases, the role of readers, players, or users ranges from being able to explore the story in unique ways to truly engaging with the narrative in an emotional way (Meadows, 2003).

Figure 8: Zork: The Great Underground Empire – Part I in 1980

There is a difference between interactivity that relies on exploration or puzzle solving, which creates a sense of engagement without affecting the narrative itself, and interactivity that relies on agency (Murray, 1997). Therefore, the question that emerges is whether absolute agency of the audience can be achieved, since every possible path of a narrative has already been scripted by its creator.

A player feels that they have agency when their decisions have impact, especially in situations where they are not omnipotent, such as in horror games (Torres, 2020). Another category would be games that pose ethical dilemmas that simultaneously affect the players' progress and the world itself, creating immersive and emotional experiences; a toy example of this choice-and-consequence loop is sketched below. Video games and virtual reality experiences that create a sense not only of immersion but of presence have become a popular medium for inducing empathy, because the user has this precise sense of presence in the environment and, consequently, in the narrative itself.
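
A toy sketch (our illustration) makes the choice-and-consequence loop concrete: an early ethical decision changes persistent world state, and a later scene plays out differently because of it, which is precisely what gives the player the feeling of agency. The scenario is invented.

Listing 9: Agency through consequence in a branching game scene

world = {"trusted_by_villagers": False, "supplies": 1}

def dilemma_share_supplies(share: bool) -> None:
    """Early ethical dilemma: sharing costs resources but changes how the world reacts."""
    if share:
        world["supplies"] -= 1
        world["trusted_by_villagers"] = True

def final_scene() -> str:
    """A later scene whose outcome depends on the earlier choice."""
    if world["trusted_by_villagers"]:
        return "The villagers hide you from the patrol."
    return "No one opens their door; the patrol finds you."

dilemma_share_supplies(share=True)
print(final_scene())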

"When someone talks about a video game, they use the pronoun 'I'. It gives you a platform to create immersion and engagement – a new level of empathy." (Ryan Green)

Virtual reality experiences, given that presence is ensured by the medium itself, can create positive empathetic responses. In the VR 360° video projects Clouds Over Sidra (United Nations – Schutte and Stilinović, 2017) and The Displaced (The New York Times – Sirkkunen et al., 2016), although both tell their story from a third-person perspective, the medium has been capable of engaging users and creating empathetic responses towards the stories of the protagonists (Bertrand et al., 2018).

Perhaps one of the most renowned paradigms of a virtual reality experience of an artistic nature, aiming to create an empathetic response among the participating individuals, is the project The Machine to Be Another (BeAnotherLab, 2015). The users, paired in twos each time, exchange their view via a virtual reality headset. They are also instructed to perform certain movements in absolute synchronization with one another, creating a strong sense of embodiment in each other's physicality.

5. User's participation in both virtual and physical space, in new forms of performative art

New forms of performative art are being created as we speak, thanks to new digital technologies. As a result, spectators become users and/or participants. The "ludic" and/or gamified aspect of the interactive experience opens the possibility for multi-user experiences with an "in situ" element (Dimitriadi, 2019), not only in virtual but also in real space.

There are numerous examples of hybrid multimedia performances that use mixed reality, where the continuum of space and time is broken but does not feel that way to the user. The artist Julien Daillère, for example, is completely liberated from space, as he provides his spectators with the option of becoming performers and participating in his teleperformance Esprit poétique while staying at their own place, just by using the telephone. Several participants become performers while the others remain spectators, still at their place, a new-found stage.

A similar technique was used by Romeo Castellucci in his latest spectacle Bros. Castellucci used twenty-three men who were not actors and who were required to act on stage as he commanded via earpieces, mimicking the "law and order" behavior known from policemen everywhere nowadays. In this case, the experience was different only for those participating, while the spectators held their usual position.

Multiple artists have been using video-based platforms that bring together different spaces, thus giving the audience the chance to participate and belong; being together even while apart has been especially crucial during the pandemic, when physical co-existence was scarce. This need brought about new experiments where theater meets cinema. The 7 Deaths of Antona by Bijoux de Kant, for example, is "theater made to be projected on screen". This non-performance, non-movie experiment is currently being shown in a space specially designed to match the space where Antona performs, making the spectators enter her world.

The space that includes the spectator in the performance is best shown in spectacles like Remote Thessaloniki (Rimini Protokoll), by Stefan Kaegi and Jörg Karrenbauer, which used the city of Thessaloniki as a stage, its spectators as performers and several sets of headphones as narrators: a location-based performance where an AI voice dictates what the spectator/performer is to do next. The user may or may not follow its orders; they still have a choice, but the feeling of belonging is too strong to ignore.

The 4th International Forest Festival of Thessaloniki included in its program two more participatory experiences: The Quiet Volume and Not to Scale, by Ant Hampton and Tim Etchells respectively, in which the audience pairs up, wears headsets, enters spaces other than the conventional stage – in one case a library, in the other a space inside a theater – and creates the story semi-narrated to them via the headsets. In one case books take part in the narration; in the other, the partners draw different things complementary to the story so that they may reach the end of the narration. These two experiences, while participatory, are not so much location-based as location-adapted; and while they did include the spectator, the sense of belonging was missing, as the participant interacted only with the chosen companion.

Experimentation with speech and sound is more and more commonly encountered in contemporary theater. Dimitris Karantzas, in his staging of The Persians by Aeschylus, chose to use amplifiers and microphones, with or without distortion of the voice heard each time, that would interrupt the Aeschylean tragedy, bringing forth "how a people cope with defeat, regardless of their racial identity, society, or era". To enhance this contemporary approach, he used non-actors as extras from the region where the spectacle was being shown each time, placed between the spectators' seats; this simple directorial choice made the spectators feel instantly included in the society of the tragedy.

In addition, artists from various backgrounds are experimenting with the crossroads of virtual reality and performance.

6. Conclusions

In conclusion, the use of AI, AR, and location-based narratives in the creation of engaging experiences shows that the passage from new media to Extended Reality requires a multitude of advanced tools, techniques, and technologies. Thus, it is necessary to develop best practices and workflow integrations that combine the possibilities of XR into a unified approach to creative development. Moreover, it is imperative to observe the evolution of XR technologies regarding new ways of creating and distributing audiovisual content, while making the experience even more immersive and engaging for the viewers and challenging the creators to adapt and become more conceptual.

7. References

Daillère Julien, Esprit poétique, 2022. URL: https://www.latraverscene.fr/teleperformances-theatre-audioguide/
Castellucci Romeo, Bros, 2022. URL: https://www.onassis.org/whats-on/bros-romeo-castellucci
Bijoux de kant, The 7 Deaths of Antona, 2022. URL: https://www.onassis.org/whats-on/the-7-deaths-of-antona-bijoux-de-kant
Kaegi Stefan and Karrenbauer Jörg, Remote Thessaloniki, Rimini Protokoll, 2021. URL: https://on.ntng.gr/default.aspx?lang=en-GB&page=135&fsid=5
Hampton Ant and Etchells Tim, Not to Scale, 2022. URL: https://on.ntng.gr/default.aspx?lang=en-GB&page=135&fsid=31&fid=2
Karantzas Dimitris, Kalampaka Geli, The Persians, 2022. URL: https://aefestival.gr/festival_events/perses/?lang=en
Karantzas Dimitris, Kalampaka Geli, "The Persians embrace us", in: The Persians of Aeschylus, programme, Athens Epidaurus Festival, Ancient Theatre of Epidaurus, Athens, 15 and 16 July 2022, p. 11.

[1] P. Ball, Iamus, classical music's computer composer, live from Malaga, July 1st, 2012. Retrieved June 2022, from URL: https://www.theguardian.com/music/2012/jul/01/iamus-computer-composes-classical-music.

[2] Digital Domain, Avengers: Infinity War, Digital Domain, 2018. Retrieved May 2022, from URL: https://digitaldomain.com/case-studies/avengers-infinity-war/.
[3] I. Failes, A brief history of Digital Domain's Masquerade, April 1st, 2020. URL: https://beforesandafters.com/2020/04/01/a-brief-history-of-digital-domains-masquerade/.
[4] J. Kennedy, Acting and Its Double: a Practice-Led Investigation of the Nature of Acting Within Performance Capture, Ph.D. thesis, Auckland University of Technology, 2018.
[5] P. Jackson (Director), Lord of the Rings: The Fellowship of the Ring [Motion Picture], 2001.
[6] P. Jackson (Director), King Kong [Motion Picture], 2005.
[7] M. McGee, The Guardian, October 8th, 2014. URL: https://www.theguardian.com/media-network/media-network-blog/2014/oct/08/how-we-made-audrey-hepburn-galaxy-ad.
[8] A. Pizzo, Actors and Acting in Motion Capture, Acting Archives Review, VI(11), 2016, pp. 0–26.
[9] A. Russo, J. Russo (Directors), Avengers: Infinity War [Motion Picture], 2018.
[10] Salian, NVIDIA AI Makes Performance Capture Possible With Any Camera, August 9th, 2022. Retrieved September 2022, from URL: https://blogs.nvidia.com/blog/2022/08/09/ai-performance-capture/.

[11] A. Serkis (Director), Mowgli: Legend of the Jungle [Motion Picture], New Zealand, 2018.
[12] M. Slater, J. Howell, A. Steed, D-P. Pertaub, M. Gaurau, S. Springel, Acting in Virtual Reality, in: Proceedings of the Third International Conference on Collaborative Virtual Environments (CVE), 2000.
[13] Smartclick (n.d.), How Artificial Intelligence Is Used in the Film Industry, 2022. URL: https://smartclick.ai/articles/how-artificial-intelligence-is-used-in-the-film-industry/.
[14] M. Stern, Motion capture maestro Andy Serkis on 'Dawn of the Planet of the Apes' and revolutionizing cinema, 2020. Retrieved November 2022, from URL: https://www.thedailybeast.com/motion-capture-maestro-andy-serkis-on-dawn-of-the-planet-of-the-apes-and-.
[15] S. Stuart, Can AI Direct Movies? This One Just Did, PCMag UK, September 3rd, 2021. Retrieved October 2022, from URL: https://uk.pcmag.com/news/135481/can-ai-direct-movies-this-one-just-did.
[16] UNIT9, Campari: Fellini Forward, 2021. Retrieved from URL: https://www.unit9.com/project/campari-fellini-forward/.

[17] M. M. Camanho, A. E. M. Ciarlini, A. L. Furtado, C. T. Pozzer, and B. Feijó, "A Model for Interactive TV Storytelling", VIII Brazilian Symposium on Games and Digital Entertainment, 2009.
[18] M. Cavazza, F. Charles, and S. Mead, "Character-based interactive storytelling", IEEE Intelligent Systems, special issue on AI in Interactive Entertainment, 17(4):17-24, 2002.
[19] D. Grasbon and N. Braun, "A morphological approach to interactive storytelling", in: Proc. CAST01, Living in Mixed Realities, special issue of Netzspannung.org/journal, the Magazine for Media Production and Intermedia Research, pp. 337-340, Sankt Augustin, Germany, 2001.
[20] A. Paiva, I. Machado and R. Prada, "Heroes, Villains, Magicians, …: Dramatis Personae in a Virtual Story Creation Environment", in: Proceedings of the 6th International Conference on Intelligent User Interfaces, 2001. DOI: 10.1145/359784.360314.
[21] U. Spierling, D. Grasbon, N. Braun, and I. Iurgel, "Setting the scene: Playing digital director in interactive storytelling and creation", Computers & Graphics, 26(1):31-44, 2002. DOI: 10.1016/S0097-8493(01)00176-5.
[22] J. Weijdom, "Mixed Reality and the Theatre of the Future: Fresh Perspectives on Arts and New Technologies", IETM – International Network for Contemporary Performing Arts, Brussels, 2017.


[23] D. Meadows, Digital Storytelling: Research-Based Practice in New Media, Visual Communication, 2(2), 189–193, 2003. DOI: 10.1177/1470357203002002004.
[24] J. H. Murray, Hamlet on the Holodeck: The Future of Narrative in Cyberspace, Free Press, New York, 1997.
[25] J. Juul, Half-Real: Video Games between Real Rules and Fictional Worlds, MIT Press, Cambridge, Mass., 2005.
[26] 10 Insights Into Empathy Video Games, 2016. URL: http://archive.pov.org/thankyouforplaying/empathy-video-games/.
[27] E. Stilinović, N. S. Schutte, Facilitating empathy through virtual reality, Motivation and Emotion, 41, 2017. DOI: 10.1007/s11031-017-9641-7.
[28] H. Väätäjä (Ed.), M. Turunen (Ed.), I. Ilvonen (Ed.), E. Sirkkunen (Ed.), T. Uskali (Ed.), C. Kelling (Ed.), J. Vanhalakka, VIRJOX: Engaging Services in Virtual Reality, Tampere University of Technology, 2018.
[29] P. Bertrand, J. Guegan, L. Robieux, C. A. McCall, F. Zenasni, Learning Empathy Through Virtual Reality: Multiple Strategies for Training Empathy-Related Abilities Using Body Ownership Illusions in Embodied Virtual Reality, Frontiers in Robotics and AI, 2018. DOI: 10.3389/frobt.2018.00026. PMID: 33500913; PMCID: PMC7805971.
[30] BeAnotherLab, The Machine to Be Another, 2015. URL: http://beanotherlab.org/home/work/tmtba/.
[31] M. Santorineos (ed.), N. Dimitriadi (co-ed.), Gaming Realities: A Challenge for Digital Culture, Fournos Centre for Digital Culture, Mediaterra 2006, Athens, Greece, 2006.
[32] N. Dimitriadi, "Exploring multi-user interaction in @postasis platform: 'Ludic' virtual choreography project", in: Digital Civilization, Where Trees Move, ed. Celestino Soddu & Enrica Colabella, Domus Argenia Publications, Rome, 2019.

Information about the authors

Argyro Papathanasiou holds a BASc in Music Technology & Acoustics Engineering and an M.A. in Art & VR (ASFA & Paris 8). Currently she is a PhD candidate at the School of Film (AUTh), focusing on XR systems in documentary films and location-based narratives. Her individual and team work has been presented at several conferences and festivals. She is co-founder and Managing Director of ViRA. She collaborates with Documatism as XR Head of Production, focusing on documentary films, cinematic interactive installations, and film platforms, with Studio Bahia NPO (USA) as an XR designer, and with NOUS VR as a production coordinator and XR designer.


Natalie-Teresa Chavez is a PhD candidate at the School of Film of AUTh, in the field of Virtual, Augmented, Mixed Reality and Cinema, and an actress. She obtained a B.A. in International Business and Politics (University of Macedonia), an MSc in Services Management (Athens University of Economics and Business) and an Acting Diploma accredited by the Greek Ministry of Culture. She possesses a Certificate of Pedagogical and Teaching (Hellenic Open University) and a Certificate in the Elements of AI (University of Helsinki). She has worked with various important directors in theater (Y. Houvardas et al.) and cinema (C. Nikou et al.).

Paraskevi Bokovou is a set and costume designer based in Thessaloniki, who has worked in feature and short films, television and theatre. She has a master's degree in film studies from the School of Fine Arts of the Aristotle University of Thessaloniki, where she was a student of Ioulia Stauridou, and another master's degree in theater from the University Paris 8 Vincennes – Saint-Denis (subject: Scènes du Monde, Histoire et création), where she collaborated with Érica Magris. She continues her studies as a doctoral candidate working with Nefeli Dimitriadi, experimenting with emerging new media practices.

Christina Chrysanthopoulou holds an M.Sc. in Architecture Engineering from the NTUA, and an M.A. in "Art and Virtual Reality", a collaboration between the ASFA and Paris 8 University. Currently, she is a PhD candidate at the School of Film Studies of the AUTH. She has several publications in conferences such as Hybrid City II in Athens, IEEE VR 2015 in Arles, and VS-Games 2017 in Barcelona, and has participated in various exhibitions and festivals, such as the Ars Electronica Festival (Linz, 2015), the A.MAZE festival (Berlin, 2018), ADAF 2020, and the Ars Electronica Garden Athens 2020.

Aikaterini-Dimitra Gerothanasi holds a BA in Film Studies, AUTH (2011), and an MA in Screenwriting with distinction, University of Salford, Manchester (2014). She completed the Postgraduate Programme in Writing and Development of TV Series at the Film and Television Academy Berlin (DFFB, 2015). She has collaborated with Atlantique Productions, Filmhuset and Wildside, amongst others, for her original TV series. She was selected for the mfi Script 2 Film Workshops 2012, the 6th Sarajevo Talent Campus 2012, the CANNESERIES INSTITUTE 2019, and the Series Mania UGC Writers Campus 2020. Since 2020 she has been a screenwriting tutor at the Catalyst Institute for Creative Arts and Technology, School for Film and Visual FX, Berlin. She is a PhD candidate at the School of Film Studies, AUTH.

Nefeli-Maria Dimitriadi is an artist and director, Assistant Professor of "Virtual/Augmented/Mixed Reality and Cinema" at the Film School of the Faculty of Fine Arts of the Aristotle University of Thessaloniki. She holds a PhD in Aesthetics, Science and Technology of Art, a master's degree in Visual Arts (Univ. Paris 8), and a Diploma in Fine Arts (School of Fine Arts, Aristotle University of Thessaloniki). The research work of Dr. Dimitriadi has been published in 33 scientific papers and articles in international conferences and scientific publications, and her artwork has been presented at 31 exhibitions and international festivals.
