Article
Initial User-Centered Design of a Virtual Reality
Heritage System: Applications for Digital Tourism
Florent Poux 1, * , Quentin Valembois 1 , Christian Mattes 2 , Leif Kobbelt 2 and Roland Billen 1
1 Department of Geography, Faculty of Sciences, Geomatics Unit, University of Liège, 4000 Liège, Belgium;
q.valembois@uliege.be (Q.V.); rbillen@uliege.be (R.B.)
2 Visual Computing Institute, RWTH Aachen University, 52062 Aachen, Germany;
mattes@cs.rwth-aachen.de (C.M.); kobbelt@cs.rwth-aachen.de (L.K.)
* Correspondence: fpoux@uliege.be

Received: 3 July 2020; Accepted: 6 August 2020; Published: 11 August 2020

Abstract: Reality capture allows for the high-accuracy reconstruction of the physical reality of cultural heritage sites. The obtained 3D models are often used for various applications such as promotional content creation, virtual tours, and immersive experiences. In this paper, we study new ways to
interact with these high-quality 3D reconstructions in a real-world scenario. We propose a user-centric
product design to create a virtual reality (VR) application specifically intended for multi-modal
purposes. It is applied to the castle of Jehay (Belgium), which is under renovation, to permit multi-user
digital immersive experiences. The article proposes a high-level view of multi-disciplinary processes,
from a needs analysis to the 3D reality capture workflow and the creation of a VR environment
incorporated into an immersive application. We provide several relevant VR parameters for scene optimization, the locomotion system, and the multi-user environment definition, tested in a
heritage tourism context.

Keywords: point clouds; virtual reality; 3D modeling; laser scanning; photogrammetry; reality
capture; 3D reconstruction; cultural heritage; heritage information system

1. Introduction
Reality capture allows for the preservation of cultural heritage information through 3D models,
studies, analysis, and visualizations. However, the potential of these digital assets is often not
fully realized if they are not used to interactively communicate their significance to experts and
non-specialists alike. Virtual reality (VR) provides an interface to spatial data such as 3D point clouds or 3D models, and allows users to experience them in an immersive way. It favors interaction and helps make culture
potentially more accessible to users with different perspectives. This highlights a flexibility requirement
for any VR heritage system to adapt cultural proposals and information about objects based on different
types of user categories.
Creating a digital replica for such multi-modal usage first demands surveying the location in high detail while gathering relevant contextual information to create realistic experiences. This is
primarily achieved using various 3D capture technologies combined with state-of-the-art mapping
algorithms that depict physical environments through a large variety of 3D digital assets (e.g., [1–3]
for tourism purposes). These are then reworked to populate various immersive experiences, such as
virtual reality and augmented reality (AR) experiments [4,5]. However, AR and VR taken as “one-shot”
products are often non-integrated realizations meeting limited objectives. Moreover, different worlds
with different perspectives (e.g., geoinformation vs. cultural heritage vs. computer graphics) diminish
the flexibility of a system if no balanced approach is mapped to the development of the immersive

Remote Sens. 2020, 12, 2583; doi:10.3390/rs12162583 www.mdpi.com/journal/remotesensing



application. The main problem in such scenarios is linked to the sustainability of the solution. This is
due to a lack of adaptability to changing needs and technological developments.
As the number of projects using reality capture technologies and immersive technologies grows,
clear guidelines and use case insights for conducting applied research projects are valuable. This is
why our study proposes modular and high-level concepts that can be extended to undertake similar
projects. We have been working in this area for some time, leading to the Terra Mosana Interreg EMR
project, to merge perspectives and work on transversal solutions to these challenges.
In this article, we present a user-centric application design to create a sustainable virtual heritage
system (V-HS) built upon a modular information system. It is fed by several data sources globally
meeting the needs of the organization for which the system is implemented. The approach focuses
on the strategy of adaptation to evolutions in terms of data acquisition on the one hand, and in
terms of visualization and interactions on the other hand. It integrates several concepts dealing with
3D acquisition, data processing, data integration, immersive development as well as considerations
for handling massive reality-based datasets. The work is guided by user needs and illustrates its
proficiency in a use case that highlights the design choices for the creation of a virtual reality application.
Setting up a multi-modal immersive application is constrained by several big data specificities, of which the volume (up to terabytes), the velocity (streams of data), and the variety (several geometric and semantic modalities) majorly influence the representativeness of the scene. This is not a straightforward
process and raises major data considerations to find an optimal working balance between scene fidelity
and user experience. Moreover, the interaction with the environment and the semantic sources
of information—initially available on-site or via audio guide—adds extra complications for their
correct integration.
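As an illustration of this balance between scene fidelity and user experience, a common first step when preparing terabyte-scale point clouds for real-time rendering is spatial subsampling. The following minimal voxel-grid downsampling sketch (our illustration; the function name and parameters are not from the authors' pipeline) replaces all points falling in the same cubic cell by their centroid:

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Replace all points that fall in the same cubic voxel by their
    centroid, trading geometric fidelity for rendering performance."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets[key].append((x, y, z))
    return [tuple(sum(c) / len(cell) for c in zip(*cell))
            for cell in buckets.values()]

# A dense line of 1000 points collapses to one centroid per 25 cm voxel.
cloud = [(i * 0.001, 0.0, 0.0) for i in range(1000)]
reduced = voxel_downsample(cloud, voxel_size=0.25)
print(len(cloud), "->", len(reduced))  # -> 1000 -> 4
```

Larger voxel sizes yield lighter scenes at the cost of detail; in practice, the cell size would be tuned per zone of interest so that close-up areas keep a higher fidelity.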
The remainder of the paper is structured as follows. Section 2 reviews the current state of the art on the constitution of heritage information systems and VR applications for tourism. It also covers existing 3D data capture workflows that provide the geometric base for the environment design.
Section 3 gives the reader the context of the study, which takes place on the castle of Jehay,
in Belgium.
Section 4 proposes a consistent methodology to design and develop a V-HS based on reality capture data and user needs. It aggregates the considerations from the previous sections and provides a replicable framework for the constitution of a VR application from 3D reality capture data.
To illustrate these considerations, Section 5 provides concrete feedback from a use case, highlighting the immersive functionalities that can add value for visitors during the 9-year planned renovations.
Finally, we extract key insights and discuss them in Section 6.

2. Related Works

2.1. Heritage Information System


As mentioned by M. Santana in [6], heritage information should provide relevant and timely data
linked to different work/analysis phases. The inclusion of all the required information (geometry,
temporality, semantics, sound, etc.) coupled with the management of its heterogeneity, its complexity,
and the multiplicity of actors leads to significant challenges of information integration [7]. The heritage
research community has quickly understood the need to develop systems for structuring information through data models that can manage heterogeneous and complex heritage information. The creation of a heritage information system often faces a first complication when linking semantic information to low-level and heterogeneous geometric information, such as 3D meshes [8] and point clouds [9,10].
Several existing projects propose a heritage information system in which an information system is
related to a three-dimensional representation of a heritage site or building. Among those approaches,
we wish to point out some relevant applications: the BIMLegacy project [11], the PHYT project [12],
and the 3DHOP platform [13].

The BIMLegacy project [11] is an online platform aimed at unifying and synchronizing heritage
architecture information. The contribution of the work is to propose a platform where stakeholders
can synchronize heritage information. The platform has been developed following the design science
research methodological approach. This method of development relies on a prototype definition that is
enriched and corrected in several iterations. The paper proposes a building information model (BIM)
legacy workflow, which means that the acquired data (point clouds) must first be modeled through a
scan-to-BIM approach. While this has the benefit of producing lightweight 3D models for a smooth and reliable 3D representation, modeling the data is time-consuming, and technological evolutions outside the BIM framework pose additional challenges.
The PHYT project [12] is a comprehensive project that merges several partners such as parietal
artists, archaeologists, and 3D information experts. Within the framework of the project, a 3D
environment has been developed to store and localize information from heterogeneous sources of
every specialist studying caves. The semantic information is attached to a 3D point that serves as
an identifier to retrieve data in the database. The 3D cave environment is represented as a 3D mesh.
Based on the semantic representation in the 3D environment, a bi-directional communication has been
set up from the 3D model to the database (selection from the 3D model) and from the database to the
3D environment (visualization of queries in the 3D environment) which is an interesting direction,
and extensible with more interactivity for non-experts.
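The two-way communication set up in the PHYT project, where a 3D point serves as an identifier to retrieve database records and vice versa, can be sketched with a point identifier acting as the join key between geometry and semantics. The schema below is a simplified illustration of the principle (table and column names are ours), not the project's actual data model:

```python
import sqlite3

# Each 3D anchor point carries an id joining its geometry (x, y, z)
# to the semantic records attached by the specialists studying the site.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE anchor (id INTEGER PRIMARY KEY, x REAL, y REAL, z REAL)")
db.execute("CREATE TABLE semantics (anchor_id INTEGER, author TEXT, note TEXT)")
db.execute("INSERT INTO anchor VALUES (1, 12.4, 3.1, 0.8)")
db.execute("INSERT INTO semantics VALUES (1, 'archaeologist', 'parietal art panel')")

# 3D model -> database: picking the point in the viewer yields its id,
# which retrieves the attached information.
note = db.execute("SELECT note FROM semantics WHERE anchor_id = ?", (1,)).fetchone()[0]

# Database -> 3D model: a semantic query returns the location to highlight.
pos = db.execute(
    "SELECT x, y, z FROM anchor JOIN semantics ON anchor.id = semantics.anchor_id"
    " WHERE note LIKE '%panel%'").fetchone()
print(note, pos)  # parietal art panel (12.4, 3.1, 0.8)
```

The same join key works in both directions, which is what makes the selection-from-model and visualization-of-queries workflows symmetric.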
The 3D Heritage Online Presenter (3DHOP) [13] is an open-source project for the creation of interactive web presentations of high-resolution 3D models. The environment is oriented toward use cases in cultural heritage. The strength of this system resides in the variety of stakeholders it addresses. The system is not intended to be a working platform but only a visualization tool for already implemented semantic information, and is unfortunately not VR-ready.
The analysis of existing shared data environments dedicated to cultural heritage information shows that very few working environments permit linking flexible information systems with multi-modal usage through immersive technologies. We believe that providing a unified scheme for different user categories is the most efficient solution, since each interpretation of heritage objects or artifacts depends on past experience and a context of interpretation. Most analyzed systems prefer to stream parametric models or 3D meshes because of their relative lightness for Web streaming, but none studied their compatibility with VR frameworks. For all these reasons, we aim
to develop an immersive system that is a working environment with a two-way link between 3D
geometry and semantic information through immersive interfaces.

2.2. Virtual Reality for Tourism


The interest in virtual reality for tourism purposes started with early speculations on the impact of this technology in the mid-1990s [14–17]. R. Cheong [14] envisioned the possibility of theoretically complete travel experiences in a safe and controlled environment using a VR headset, for greater on-site accessibility. The author relates the potential of artificial tourism as a competitor that could disrupt this industry. In 1995, P. Williams and J.S. Perry Hobson [17] stated that VR needs adequate hardware to reach “human” realism and comfort standards regarding motion sickness [18]. Furthermore, the authors conjecture the impact of virtual reality on accessibility: “for sensitive tourist
areas of the world where it is necessary to restrict access of tourists, VR could allow unlimited access”.
This conjecture is highlighted by works such as the digitization of the Chauvet Cave in France [19].
Discovered in 1994, this site was closed the same year to prevent further degradation of the cave, but by
using a VR application, anyone now has access to the place virtually. It also holds a digital copy of the
natural cave even if degradations occur on-site.
Nowadays, pushed by the evolution of modern technology and hardware capabilities, VR
advances deliver devices with better visual, audio, haptic, tracking, input, and smell experiences [20–22].
The initial concerns are now mostly resolved, and VR is categorized as a mature technology, according
to Gartner’s 2017 Hype Cycle for Emerging Technology [23]. This applies to applied sciences and
projects where many cultural sites have now adopted VR to enhance their user experience and attract
more tourists [24].
Virtual reality’s impact in the sector of tourism has been studied through six primary areas of
application [25]: education, accessibility, marketing, planning and management, heritage preservation,
and entertainment. Educators have recognized the teaching potential of VR, and shown that the feeling
of presence in a learning environment embedded with information helps the learning process [26,27].
Moreover, the ability to interact in VR is recognized as an essential learning factor in [28]. The impact
of virtual reality takes place in more unexpected areas. For instance, it allows for the collection of data
on the behavior of the user [29], with a direct application to cultural sites to test the popularity of a
new exhibit [25]. VR has also shown promise for the promotion of cultural sites, as related in [30,31].
Since marketing tourism relies heavily on the Internet [32], VR is a potential tool for supplying
information to tourists, as expressed by the authors in [25]. In entertainment, VR can be a single
product for “fun” and to attract tourists, as seen in various theme parks [33].
While the impact of VR on the tourism industry is acknowledged, challenges arise in the
development and design of VR applications for cultural tourism. For instance, the authors in [24]
and [34] show that investigations are often necessary to enhance the learning experience in a unique
cultural tourism context, where user interaction needs to be designed to encourage the engagement of
a learner. Furthermore, in [24] the author emphasizes the objectives of such implementation in cultural
tourism where “technological challenges such as inconsistent interaction are not only challenges for
user interaction, but detrimental for the tourist experience”.

2.3. Data Capture for Cultural Site


The observations in Section 2.2 bring attention to the developer-specific challenges that a VR application for cultural tourism faces. A first concern, found in the early stage of the application, is the data modalities that the rest of the development relies on. The data gathered from reality vary in form and utility scope. Three-dimensional point clouds and 3D meshes are the most common geospatial data types within VR applications, with examples in [24,35,36].
Point clouds are a remarkably simple yet efficient way of representing spatial data while keeping
a direct link to the raw sensory data. However, despite the ease of capturing point clouds, processing them is a challenging task. First, problems such as a maladjusted density, clutter, occlusion, random and systematic errors, surface properties, or misalignment are the main data-driven obstacles to a wider dissemination. These are often related to the data structure or capture-related environment specificities, as highlighted by F. Poux [37]. Secondly, structure-related problems usually emerge due to a lack of connectivity within point ensembles, which can make the surface information ambiguous [38]. Then, the compression and structuration of such datasets are heavy constraints that limit the integration of point clouds directly in VR scenarios [39]. Some works, such as [35,36,40], permit the integration of massive point cloud data, but effective visualization and interaction are still limited. Three-dimensional mesh representation is more commonly found within industry workflows, first propelled by the history of obtaining 3D information without reality capture devices. Mesh representation often addresses the challenges mentioned above. On top, meshes usually present a smaller number of connected vertices and encode an explicit topology [41].
Moreover, a texture can be bound through mapping to store additional information. Mesh representation from 3D surveyed data is most often derived from the point cloud data. As such, it includes several pre-processing steps to filter noise and outliers and to obtain a geometrical fit as clean as possible. On top, the balance between fit and complexity often has to be decided. In the end, orienting the underlying geometry for further processes is often linked with the vision one has, in terms of whether to work on original raw data or on interpreted shapes, with manual intervention in the tuning/optimization of the meshing parameters.
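The structuration constraint mentioned above is commonly addressed with hierarchical spatial structures, where the traversal depth acts as a level of detail: shallow levels give a coarse but cheap scene, deeper levels restore fidelity. A toy octree partition along these lines (a schematic sketch under our own simplifications, not the structuring scheme of [39]) could look like:

```python
def octree_cells(points, bounds, depth):
    """Recursively partition points into octants; the recursion depth acts
    as a level of detail: deeper trees keep more geometric fidelity."""
    (x0, y0, z0), (x1, y1, z1) = bounds
    if depth == 0 or len(points) <= 1:
        return [points] if points else []
    cx, cy, cz = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
    octants = [[] for _ in range(8)]
    for p in points:
        idx = (p[0] > cx) | ((p[1] > cy) << 1) | ((p[2] > cz) << 2)
        octants[idx].append(p)
    cells = []
    for i, oct_pts in enumerate(octants):
        sub = ((cx if i & 1 else x0, cy if i & 2 else y0, cz if i & 4 else z0),
               (x1 if i & 1 else cx, y1 if i & 2 else cy, z1 if i & 4 else cz))
        cells.extend(octree_cells(oct_pts, sub, depth - 1))
    return cells

pts = [(0.1, 0.1, 0.1), (0.9, 0.9, 0.9), (0.85, 0.9, 0.95), (0.2, 0.8, 0.4)]
coarse = octree_cells(pts, ((0, 0, 0), (1, 1, 1)), depth=1)
fine = octree_cells(pts, ((0, 0, 0), (1, 1, 1)), depth=3)
print(len(coarse), len(fine))  # -> 3 4
```

A VR renderer would stream deeper cells only for the octants close to the viewer, keeping the displayed point budget roughly constant.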
The heritage documentation encompasses a wide range of activities, including the recording and
documentation of reality through geometric and semantic data collection. To collect the geometric
information, several acquisition procedures can be set up. Nowadays, terrestrial laser scanning (TLS)
and computer-vision photogrammetry through structure from motion (SfM) [42] and dense image
matching (DIM) [43] are the two major techniques used in a digital documentation context [44,45].
Many examples of heritage site documentation can be retrieved where these two techniques are
used to collect geometrical information related to the objects [46,47]. For most of the applications,
some heritage information is collected over archaeological sites or heritage buildings. Based on
the gathered information, a 3D reconstruction of the current state of the artifact is proposed. Then,
a semantic enrichment takes place to add contextual information to the model and make it usable in a
heritage context.
Finally, a representation is proposed to communicate through a medium such as a web environment,
a 3D simulation software, or, more basically, by producing 2D documents such as maps and plans.
Some in-depth information can be found in [48,49]. A dichotomy still exists in the documentation
that the virtual environment can use to reproduce the cultural heritage sites [50]. Documentations are
found mostly either in 2D or 3D. The 2D technique suits well for small projects with a limited cost.
Still, it lacks cohesive information when recreating the whole structure in 3D, which is often the case to
model the virtual reality environment [50,51]. This rich information then serves numerous industries
(e.g., surveying and engineering; buildings and architecture; public safety and environmental issues)
and supports tourism and the promotion of cultural heritage sites. Their utility is found at many levels
and seeing the number of in-use applications gives a good idea of the potential they hold, as stated by
F. Poux in [52]. To digitally replicate a model of the physical world, one can leverage a set of tools
and techniques that will each have specificities, which makes choosing the right approach difficult.
Interestingly, photogrammetry is the most often used technique, as it obtains a coherent 3D mesh with lower effort than TLS and a realistic texture when the lighting conditions are addressed [53]. It is also accessible thanks to a lower price, as well as a higher model and texture registration accuracy.
Total stations (TS, Figure 1) are instruments capable of pointing and gathering the coordinates of
points of interest with a very high accuracy (typically in the range of millimeters). They are the favored
instrument of surveyors and have been in use for decades for various surveys due to their high precision
and the possibility to attach some semantic information on-site (geocoding). In cultural heritage and
archaeological applications, they are most often used as the reference measurement method, often a
base for quality analysis (QA) and quality control (QC) [54,55], but their usage can be extended for 3D
modeling frameworks. Secondly, a terrestrial laser scanner (TLS, Figure 1) can be seen as an automated total station with an angular resolution limiting the number of gathered points. Contrary to a TS, a TLS is operated almost fully automatically and records a full 360° panorama constituted of visible points (in the realm of one million points per second). TLS devices have been actively used since 2000 (e.g., [56–58]), and recently the same LiDAR technology has benefited from new platforms enabling dynamic capture or aerial surveys [59,60]. Finally, terrestrial photogrammetry (Figure 1) [61] aims essentially to extract a 3D
position from a set of 2D images [62]. On the industry side, new software based on structure from
motion [42] and multi-view stereo with bundle adjustment and SIFT, SURF, ORB, M-SURF, BinBoost
descriptors [63,64] allow a wide range of professionals and less-specialized users to recreate 3D content
from 2D poses [65–74]. Use cases range from object-scale reconstruction to city-scale reconstruction,
making this technique a promising way to get 3D point clouds. However, while the reconstruction
precision for middle- to large-scale applications is getting increasingly better [75], remote sensing via
active sensors is still favored in several infrastructure-related industries.
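The descriptor-matching step at the core of these SfM pipelines can be illustrated with a toy brute-force matcher over binary descriptors (in the spirit of ORB or BinBoost), combining the Hamming distance with Lowe's ratio test to discard ambiguous matches. This is a schematic sketch with made-up 8-bit descriptors, not production SfM code:

```python
def hamming(a, b):
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match_descriptors(query, train, ratio=0.75):
    """Brute-force matching with Lowe's ratio test: keep a match only if
    the best candidate is clearly better than the second best."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((hamming(q, t), ti) for ti, t in enumerate(train))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

# 8-bit toy descriptors from two "images"; only the first query descriptor
# has an unambiguous counterpart, the second is rejected as ambiguous.
img1 = [0b10110010, 0b01001101]
img2 = [0b10110011, 0b11111111, 0b00000000]
print(match_descriptors(img1, img2))  # -> [(0, 0)]
```

Real pipelines use 256-bit descriptors, approximate nearest-neighbour indices, and geometric verification (e.g., RANSAC on the fundamental matrix) on top of this filtering before the bundle adjustment stage.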
Consequently, data acquisition in parallel to the design of the virtual environment is not an
obvious task. It needs additional research to develop a methodological basis for cultural tourism VR
development. Independently from the recording technique that is used on an archaeological or heritage
site, it is worth mentioning that the acquisition allows only the gathering of information related to the
heritage object at the time of the analysis. All other information related to the past state of the site
must be reconstructed based on historical knowledge. The inclusion of all the required information
dimensions (temporality, semantics, management of heterogeneity, the complexity, the multiplicity of
actors) leads to a major information integration challenge [7]. It motivates the framework described in this article, aimed at creating a virtual reality environment for cultural tourism based on reality capture.

Figure 1. Reality capture sensors. (a) A total station (TS); (b) the Leica P30 terrestrial laser scanner (TLS); (c) a full-frame camera for terrestrial photogrammetry.

2.4. User-Centered Approaches


A user-centered design is a suite of methods emphasizing understanding people rather than
technology. According to [76], a user-centered design can be seen as a philosophy of product
development, where the product is not an end in itself but rather a means toward providing a good
experience for users. It is conducted as an iterative process in which designers focus on the users and
their needs for each phase of the design [77]. Applying this concept to the creation of a multi-modal
VR environment for cultural heritage adds another layer of complexity due to the different categories
of users. Some attempts and realizations [78–80] focus on parts of these various challenges and provide initial insights. The article by M. Reunanen et al. [78] illustrates a holistic user-centered approach to immersive digital cultural heritage installations through the case study of Vrouw Maria, a Dutch merchant ship that sank near the Finnish coast in 1771. The paper describes the constitution of a prototype, highlighting some 3D user interface issues as perspectives. The authors emphasize the importance of such a process for projects with a tight schedule, especially when considering the value of the methods used. In the use case, concept mapping, scenario creation, and iterative prototyping quickly produced useful lightweight results and were economically sound. Finally,
the author states that the concepts, needs, and requirements are essentially about users, not technology,
which makes it possible to tackle different design tasks using the same initial approach, which is
interesting for sustainable approaches. Similarly, L. Barbieri et al., in [79], propose a user-centered
design of a digital exhibit for archaeological museums that results in a fixed non-immersive screen
accessible for visitors. This approach has been carried out for the development of a virtual exhibit
hosted at the “Museum of the Bruttians and the Sea” (Italy). The paper gives some technical advice
and suggestions which can be adopted to overcome several typical and recurrent problems related to
the development of digital exhibits, especially when low budgets and space constraints are among the design requirements. The results of user testing and the opinions gathered from the participants in the study demonstrated that the adoption of a user-centered design approach can efficiently improve the design of digital exhibits and the user experience. While the title encompasses the terminology virtual reality, the work does not relate to any head-mounted device or true first-person point-of-view experience with complex interactions and hand gesture recognition. Finally, a
conceptual framework for designing a virtual heritage environment for cultural learning is given in [80],
which provides useful high-level insights into the key elements that a user-centered-design approach
provides for virtual heritage. Specifically, the paper approaches information design, information
presentation, some navigation mechanisms and environmental setting, with traditional 3D rendering as
a basis (e.g., navigation aids, minimaps, teleports, quality of graphics, contextual settings, etc.). While
the paper lays down interesting concepts, the prototype is tailored for casual users only. The authors
in [81] deliver additional concepts by looking at the simplicity, originality, and universality of a digital
solution for heritage tourism. In the article, guidelines are presented to foster ecology and cultural
heritage in natural and/or rural areas.
Remote Sens. 2020, 12, 2583 7 of 32

A user evaluation conducted in [82] on a VR experiment in Cyprus is an additional resource that


highlights some
Remote Sensing. outcomes
2020, x FOR PEERofREVIEW
a user-centered design. For example, the remote viewing of cultural 7 of 32
heritage spaces2020,
Remote Sensing. without
x FORhaving to visit the physical space is interesting thanks to interactive capabilities,
PEER REVIEW 7 of 32
closer
such or ability
as the view them from different
to manipulate some angles.
of theAdditionally, capabilities
objects for more that are
information notmove
or to available
closerduring
or view
traditional
closer
them from physical
ordifferent
view them museum visits and
fromAdditionally,
angles. different the Additionally,
angles. 3D reconstruction
capabilities notofavailable
artifacts
capabilities
that are prove
notto
that during
are be muchduring
available
traditional more
physical
engaging
traditional to users.
physical museum visits and the 3D reconstruction of artifacts
museum visits and the 3D reconstruction of artifacts prove to be much more engaging to users. prove to be much more
These
engaging references are very pertinent and propose interesting solutions, where
to users. are very pertinent and propose interesting solutions, where the studies address the studies address
These references
case These
studies with a specific
references are very emphasis and
pertinent on propose
the philosophy
interestingof solutions,
the user-centered
where design
studies process.
theprocess. address
case studies with a specific emphasis on the philosophy of the user-centered design However,
However,
case to
studies the
with best
a of our
specific knowledge,
emphasis onstudies
the that investigate
philosophy of thethe combination
user-centered of information
design process.
to the best of our knowledge, studies that investigate the combination of information systems, virtual
systems,
However,virtual
to thereality,
best ofandour3Dknowledge,
reality capture to facilitate a sustainable heritage information system
reality, and 3D reality capture to facilitate studies that investigate
a sustainable the combination
heritage information systemof information
design have
design
systems,have
virtualnotreality,
been andoutlined.
3D reality Additionally, features asuggested
capture to facilitate sustainableby information
heritage design
information and
system
not been outlined. Additionally, features suggested by information design and presentation [80] are
presentation [80] are mostly applied on multimedia devices
design have not been outlined. Additionally, features suggested by information design andwithout clearly highlighting
mostly applied on multimedia
technological devices without clearly highlighting technological the evolutions such
presentation evolutions such as
[80] are mostly VR applicability.
applied Finally,
on multimedia it devices
is necessary to study
without clearly generalization
highlighting
as of
VRthe applicability.
approach Finally,
to different it
useris necessary to study the generalization of the approach to different
technological evolutions such as categories.
VR applicability. Finally, it is necessary to study the generalization
user categories.
of the approach to different user categories.
3. Context and Problematics
3. Context and Problematics
3. Context and Problematics
The “Province de Liège”, i.e., the managers of the castle of Jehay, had a triple concern that led to
theThe
3D “Province de Liège”,
The “Province deheritage
capture of the i.e., the
Liège”, site
i.e., in
managers
theFigure
managers2. of the castle of Jehay, had a triple concern that led to
of the castle of Jehay, had a triple concern that led to
thethe
3D3Dcapture of the heritage site in Figure
capture of the heritage site in Figure 2. 2.

Figure 2. Three-dimensional point cloud of the castle of Jehay and its surrounding from the TLS (left),
cave of the
Figure castle of Jehay, top-view pointofcloud withofTLS scanandpositions (right).
Figure 2. 2. Three-dimensional
Three-dimensional pointcloud
point cloud the castle
of the castle of Jehay
Jehay andits
itssurrounding
surroundingfrom the
from TLS
the (left),
TLS (left),
cave of the castle of Jehay, top-view point cloud with TLS scan positions
cave of the castle of Jehay, top-view point cloud with TLS scan positions (right).(right).
Because of planned renovations, the province first wanted to keep an accurately documented state for archive purposes and digital documentation, and, accessorily, to better execute the renovations (see outdoor changes in Figure 3). Indeed, some parts of the castle are now sealed and inaccessible to visitors. In addition, some archaeological features and decorative elements (indoor ceilings) are not preserved during the renovation.
Secondly, due to the regular income from tourism, decision-makers wanted an exhibition that allows visitors to still experience the site while giving new, innovative insights.
Figure 3. TLS intensity-based rendering of the point cloud of the castle acquired before the renovation in February 2016 (left image) and in April 2019 during the renovation (right image).
Remote Sens. 2020, 12, 2583 8 of 32

Finally, contractors want to better manage the heritage site through various inventories, quality control of work in progress, and deformation analysis, and to get a detailed capture at different time intervals for insurance and archive purposes. These goals strongly guide the underlying approaches, which should provide a solution that can stand the test of time by being integrated around a more centralized conceptualization. The sustainability of such a system demands ways of managing the addition of new data while adapting to new demands using the latest technology. Rather than highlighting the techniques implemented, we detail in the following sections key concepts to manage the integration of technological evolutions into a modular system. Our proposition is to move away from the current trend of separating achievements (preservation vs. visits for non-specialists); the most sustainable approach is to set up a flexible, evolving system. This comes at a cost: we are moving away from the “one-shot” logic, and site managers must therefore acquire new skills to support the development and maintenance of the system while using and enriching it optimally.
4. Methodology
In this project, we were inspired by the design methods used in geographic information science, which are themselves derived from the methods developed in information science. The approach followed was user-centered prototyping. This choice was primarily dictated by the tight deadlines and agile/iterative deployment constraints, which such a design alleviates through the fast production of a working solution, as illustrated in [83] or [84]. Users were put at the center of the process, and we assumed that an information system must meet the needs of an organization or a set of users. It is quite common that these needs are not clearly identified or, in any case, not sufficiently formalized. The prototyping approach implies that new stages start with feedback on the previous stages. This section covers only the first pass as presented in Figure 4, which is then iteratively reproduced to obtain the various outputs detailed in Sections 5 and 6. We started with a needs analysis phase (Section 4.1) with an iterative co-construction between the development team and the users. This phase relates to the definition of the specifications, the development of the conceptual model, and the development of interfaces. Then, a double cascade development phase was initiated, which concerns data curation through multi-resolution 3D mapping (Section 4.2), followed by the optimization for the various usages of the data and knowledge integration (Section 4.3), which was made usable through various outputs, including several VR prototypes (Section 4.4). The operational systems included database management systems populated with 3D data and were searchable through VR interfaces.
Figure 4. Global workflow for the realization of a comprehensive virtual reality experience for heritage tourism. It comprises the three layers from the conceptual methodology to the technology adaptation through the functional layer.
Figure 4 shows the proposed framework and its functional application to our use case detailed in Section 5, as well as its technology adaptation for VR heritage tourism. In the next sub-sections, we describe each main block and provide a meta-model that contributes to the reflection on setting up a sustainable system.
4.1. Needs Analysis

The needs analysis aimed to identify the requirements of the project and to take note of the expected elements. It is about contextualizing the project and analyzing the expectations to give a framework to the project. A needs analysis usually gathers insights from several meetings at different points in time. This allows the different decision-makers involved in the process to describe enough parameters for constructive conceptual modeling. The contributions of a needs analysis permit the project to be based on real needs, orient the project toward satisfying these needs, and better construct the project.
After the framing phase, the process was oriented toward the user requirements and the collection of the technical requirements, to finally list the functional operations and prioritize them to best answer the constraints (Figure 5).
Figure 5. Needs analysis diagram. The solution is obtained by following a narrowing gathering process from the general need to user needs and system needs.
It is the sensitivity of the project teams and the involved persons that determines the attributes of the requirements gathering process, and whether it is focused on the technical or the user dimension. In absolute terms, a comprehensive needs gathering phase considers all dimensions and determines the project requirements from a holistic perspective (overall vision, cross-functional, and multi-disciplinary approach). The collection of needs was formalized in a framework note and is a powerful tool for building a coherent project, meeting user expectations in each context. Our approach then based its decisions on a Bayesian bandits decision-making algorithm. This choice was motivated by the fact that a user-centric design has many iterations, and we wanted to use the one that maximizes positive outcomes. To gather the needs, one seeks, among other things, to:
• Contextualize the project: main facts, target, positioning.
• Characterize the objectives of the project.
• Carry out the work of expressing the needs.
• Conduct the work of collecting user requirements.
• Establish the list of content needs.
• Draw up the list of functional requirements.
• Define, clarify, and explain the functionalities that meet each need.
• Order and prioritize functionalities in order of importance.
• Create a synoptic table of content functions and their impact on the product.
• Identify the resources to be activated for production.
In all cases, the purpose of collecting the requirements was to facilitate exchanges between all the profiles throughout the project. The results of this approach are listed in Section 5.1. The high-level conceptual models of the different aspects of a heritage tourist VR application were initially built from Figure 6, expressed from the initial needs analysis design.
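The paper does not detail its Bayesian bandits algorithm; as a hedged illustration only, the selection among candidate functionalities across prototyping iterations can be sketched as Thompson sampling, where each candidate keeps counts of positive and negative user feedback (the counts and names below are hypothetical, not the authors' implementation):

```python
import random

def thompson_select(successes, failures):
    """Thompson sampling: draw from each candidate's Beta posterior and
    pick the candidate functionality with the highest draw."""
    draws = [random.betavariate(s + 1, f + 1)  # Beta(1, 1) uniform prior
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)

# Hypothetical positive/negative feedback counts for three candidate features
successes = [12, 4, 7]
failures = [3, 9, 8]
chosen = thompson_select(successes, failures)
```

After each iteration, the feedback on the chosen feature updates its counts, which steers later iterations toward the options that maximize positive outcomes.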
Figure 6. Definition of the different categories influenced by the needs analysis. First, the data part is treated in Section 4.2, followed by the virtual reality (VR) environment creation in Section 4.3, and the optimizations needed for specific interactions are in Section 4.4.
This graph was cross-validated following a Bayesian bandits strategy with the needs analysis and was constrained to obtain several conceptual models of the functionalities to develop, and the entry points to keep for further processes if the need arises. Additionally, we highlighted the specificities of VR such as:
• VR needs to have a use: the value that AR and VR are promising in terms of providing needs to be clearly understood and relevant in the tourist context [24].
• Quality environment: by offering a high quality of resolution or sound, more authentic VR and AR environments in which tourists can be fully immersed should be provided [85].
• No distraction: avoid distractions for the users, bugs, irrelevant information [31].
• Consumer-centric: one should carefully consider how this object creates meaning for the visitor, how it connects to his/her values, and enables the visitor to create his/her version of the experience [24].
4.2. Reality Capture Methodology for 3D Data Acquisition
The mission first initiated the use of a terrestrial laser scanner (Leica P30, resolution set to 1.6 mm @ 10 m) for the full survey of the site. It was initially mounted with an external camera to obtain a colorization of the point cloud. Later, several photogrammetric campaigns took place indoors and outdoors. All this data was made redundant with a full topographic survey of the site via the total station (Leica TCRP 1200), including the precise control points distributed along strategic points (we used a GNSS receiver Trimble R10 in RTK mode on 15 established points measured for 5 min each; each point was also stationed with the total station to establish the main polygonal; the mean error was 1 mm in XY and 2 mm in Z over all points after a Helmert transform). Thus, the three methods were employed and combined for synergies. The workflow was designed to maximize the strengths of each technique, as proposed in Figure 7.
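The residual check quoted above (mean errors of 1 mm in XY and 2 mm in Z after a Helmert transform) can be illustrated with a least-squares similarity transform between two coordinate sets of the same control points. The sketch below uses the Umeyama/Kabsch formulation with NumPy; it is an illustration of the principle, not the survey software actually used:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity (Helmert-type) transform src -> dst:
    returns scale c, rotation R, translation t with dst ~ c * R @ src + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # enforce a proper rotation
    R = Vt.T @ D @ U.T
    c = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = dst.mean(axis=0) - c * R @ src.mean(axis=0)
    return c, R, t

def residuals(src, dst, c, R, t):
    """Mean absolute residuals in XY and in Z after applying the transform."""
    pred = c * (R @ src.T).T + t
    d = dst - pred
    return np.abs(d[:, :2]).mean(), np.abs(d[:, 2]).mean()
```

Feeding the total station coordinates and the scan coordinates of the control points into `residuals` gives exactly the kind of per-axis quality figure reported in the text.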
Figure 7. Three-dimensional reality capture workflow to maximize automation, both on-site and in the processing phase.
The focus was on maximizing the automation and geometric accuracy while keeping a minimal timeframe from acquisition to VR integration. Therefore, the survey included a full photogrammetric reconstruction of outdoor elements and statues and a full laser scanning survey for indoor spaces, and the registration and quality control were done using the dense total station topographic network. The control over the quality of the reconstruction is important to impose constraints on the 3D dense-matching reconstruction through the laser scanning and total station. Yet, to propose a fully non-hybrid workflow, the sole usage of a consumer-grade camera is possible (see Section 6 for more insights).
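As a toy illustration of the multi-resolution idea behind the data curation mentioned in Section 4 (not the actual algorithm used in the project), a dense point cloud can be decimated by keeping one averaged point per voxel, with the voxel size controlling the level of detail:

```python
import math

def voxel_downsample(points, voxel=0.05):
    """Average the points falling into each cubic cell of side `voxel`;
    larger voxel sizes yield coarser levels of detail."""
    buckets = {}
    for x, y, z in points:
        key = (math.floor(x / voxel), math.floor(y / voxel), math.floor(z / voxel))
        buckets.setdefault(key, []).append((x, y, z))
    # One representative (centroid) per occupied voxel
    return [tuple(sum(c) / len(c) for c in zip(*pts)) for pts in buckets.values()]

cloud = [(0.00, 0.00, 0.00), (0.01, 0.02, 0.00), (1.00, 1.00, 1.00)]
coarse = voxel_downsample(cloud, voxel=0.05)  # two representative points remain
```

Running the same cloud through several voxel sizes yields the stack of resolutions that a VR renderer can switch between depending on viewing distance.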
4.3. Populating the Prototype
Complementary to the VR-ready mesh acquired, several resources were used to populate all the environment data together, upon which the VR environment is built. Four types of data were used: 3D models, texts, audio, and semantics. To supplement the main 3D captured elements that were either missing or inadequate, some models were curated or designed. This aimed to increase the authenticity and realism of the environment regarding an immersive experience for the user [85]. Additional models can be generated by computer graphics software or by a similar capture as described in the previous section.

The data populating the prototype is represented in Figure 8, which reads from top to bottom. We link each level to its subpart and highlight the influence of reality capture against computer generation in the collection of these datasets.
Figure 8. Data type and subpart populating the VR environment prototype.
We note that textures, meshes, and audio can be the product of both computer generation techniques as well as reality capture. These are then combined and extended through other elements (e.g., semantics) to create VR-ready environment data.
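The four data types and their provenance (reality capture vs. computer generation) could be represented, for instance, with a minimal record type; the field names below are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class EnvironmentAsset:
    """One element of the VR environment data (illustrative schema)."""
    name: str
    kind: str                       # "3d_model" | "text" | "audio" | "semantics"
    source: str                     # "reality_capture" or "computer_generated"
    semantics: dict = field(default_factory=dict)  # attached semantic attributes

# A tiny sample scene mixing captured and generated assets
scene = [
    EnvironmentAsset("castle_mesh", "3d_model", "reality_capture"),
    EnvironmentAsset("room_label", "text", "computer_generated",
                     {"room": "great hall", "period": "17th c."}),
]
```

Keeping the provenance alongside each asset makes it straightforward to filter, update, or re-capture one category without rebuilding the whole environment.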
4.4. Virtual Reality Application Design
The development of a virtual reality application for cultural heritage tourism is not a trivial task. From the acquisition of data to the interaction the user has with the virtual environment, there is an intricate relationship between the development phases that plays an important role in the user experience.

We recall from the needs analysis that a certain authenticity and quality is expected for the tourism environment [85]. Even more, user-centric designs relate to a meaningful gameplay for the user [24], and distractions such as bugs must be avoided [31]. Three main subjects result from these needs: the VR environment, the user interaction, and the optimization.

Based on the needs analysis, we propose the methodology illustrated in Figure 9 for a VR application design. It extends the VR environment data diagram (Figure 8) and integrates it into the design of the VR application. The user-centric aspect is highlighted in the user interaction subpart.

It is noteworthy that the suggested methodology is also relevant for a regular (non-VR) 3D application. Therefore, the VR-specific elements present in the design of the application are highlighted in the diagram.
The VR environment is composed of the VR environment data described in Figure 8 and the “Visual Effects” component. Visual effects encompass a set of techniques meant to increase the visual fidelity of the scene, such as realistic shadows, particle systems, color correction, and other types of post-processing effects.

Concerning the user interaction, VR-specific elements are mainly concerned with how the user can trigger an input with the respective hardware, such as the VR headset. Simple inputs can be perceived from a particular user’s behavior, such as their gaze. However, modern headsets are now endowed with controllers that are tracked in space to simulate the presence of the hands in the virtual environment. These can be used to trigger an input with the controllers’ buttons. It is the “gameplay” that defines what the game response is to the user’s input: for example, grabbing an object if the user presses the grip button while their hand is near that object.
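The grab rule described above reduces to a proximity-and-button test; a minimal sketch follows, in which the 0.15 m reach threshold is an assumed value rather than a parameter from the paper:

```python
import math

GRAB_RADIUS = 0.15  # assumed hand-to-object reach, in meters (illustrative)

def can_grab(hand_pos, object_pos, grip_pressed):
    """Grab succeeds only when the grip button is pressed while the
    tracked hand is within reach of the object."""
    return grip_pressed and math.dist(hand_pos, object_pos) <= GRAB_RADIUS
```

In an engine, such a check would run every frame against the tracked controller pose before attaching the object to the virtual hand.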
Figure 9. Virtual reality application design for cultural heritage tourism.
5. Experiments and Results

We remind the readers that we follow the methodology proposed in Section 4, which is condensed in Figure 10. As such, we present in the following sub-sections a direct use case illustrating the methodology steps, step by step.
Figure 10. User-centric workflow for the development of a heritage VR application on Jehay castle.
thedevelopment
development ofof
a heritage
a heritageVRVR
application on Jehay
application castle.
on Jehay castle.
will define what the game response is from the user’s input. For example, grabbing an object if the
5.1. user is pressing
Expressed Needson the grip button while his hand is near that object.
5.1. Expressed Needs
5.After
Afteraasynthetic
Experiments approach,
and Results
synthetic wepropose
approach, we proposethethe decomposition
decomposition of arising
of the the arising
needsneeds expressed
expressed by the by
the castle
castle of
We of Jehay
Jehay
remind representatives
representatives inwetwo
in two
the readers that parts,parts,
follow the“keep
“keep a track”
a track”
proposed and and “keep-alive”.
“keep-alive”.
methodology Indeed,Indeed,
in Section while
4, which while
theisfirstthe
firstcondensed in Figure 10. As such, we present in the following sub-sections a direct use case illustrating needs
aspect
aspect is is related
related to to
thethe functionalities
functionalities of of
thethe information
information system,
system, the the second
second concerns
concerns thethe
needs
regarding
regarding thetheway
the methodology wayaasteps
visitor
visitor could interact with
bycould
step. with the
thecaptured
captureddata.
data.Here
Hereis is
anan
extract of gathered
extract of gathered needs,
needs,
related to the work of M. Treffer [86], through multiple
related to the work of M. Treffer [86], through multiple exchanges. exchanges.
Keep
Keep
Need's a atrack:
track:
analysis (5.1) 3D Data Acquisition (5.2) Populating the prototype (5.3) VR application design (5.4)

• The different parts of the castle must be distinguishable. One must be able to identify the
• TheFigure 10. User-centric
different
differentparts
rooms
workflow
of the
andcastle for thebe
must
walls,
development
as well
of a heritage
distinguishable. One
as smaller elements
VRmust
application
suchbe as onto
able Jehay castle. the different
identify
windows, doors, and
roomschimneys.
and walls, as well as smaller elements such as windows, doors, and chimneys.
5.1. Expressed Needs
• • Materials
Materials and their origins
and their mustmust
origins be recognized
be recognized for all
for surfaces of the
all surfaces castle.
of the castle.
• • After
The Thea period
period syntheticbeapproach,
must must identified wefor
propose
be identified each the decomposition
for each
part part of the
of the of the arising needs expressed by the
castle.
castle.
castle
• ofEmbedded
Jehay representatives
information inistwo parts, “keep
required thea building
track” andconstruction
forbuilding “keep-alive”.asIndeed,
well aswhile thebuilding
for the first
• Embedded information is required for the construction as well as for the building block.
aspect is related to the functionalities of the information system, the second concerns the needs
• block.
The information system must be expendable to supplement with other parts of the domain such
regarding the way a visitor could interact with the captured data. Here is an extract of gathered needs,
as the to
related castle’s garden.
the work of M. Treffer [86], through multiple exchanges.
Keep a track:
Keep-alive:
• The different parts of the castle must be distinguishable. One must be able to identify the
• One mustdifferent rooms
be able and walls,
to explore a 3D as well reconstruction.
digital as smaller elements such as windows, doors, and
chimneys.
• The solution should allow visitors to experience an immersive reality at a given point in time.
• Materials and their origins must be recognized for all surfaces of the castle.
• The•different stages
The period of renovation
must be identifiedoffor
theeach
castle
partcan be displayed,
of the castle. and one can observe the evolution
of the site as well as the construction of some specific elements.
• Embedded information is required for the building construction as well as for the building
block.
Keep-alive:
• One must be able to explore a 3D digital reconstruction.
• The solution should allow visitors to experience an immersive reality at a given point in
Remote Sens. 2020, 12, 2583
time. 14 of 32
• The different stages of renovation of the castle can be displayed, and one can observe the
evolution of the site as well as the construction of some specific elements.
• It is possible to interact with several objects such as doors.
• It is possible to interact with several objects such as doors.
• Visitors should be guided through an intuitive path definition depending on their level
• Visitors should be guided through an intuitive path definition depending on their level of
of experience.
experience.
• The level of realism and texture is important for tourists.
• The level of realism and texture is important for tourists.
• •TheThe solution should
solution allow
should multiple
allow visitors
multiple at once
visitors in a in
at once common scene.
a common scene.
These are
These arecombined
combinedand andserve as aas
serve basis to establish
a basis a 3D survey
to establish a 3Dregarding the proposed
survey regarding the framework
proposed
in Section 4, applied the following sub-section.
framework in Section 4, applied the following sub-section.

5.2. The 3D Acquisition

A total of 156 terrestrial laser scanner (TLS) scan stations were made (61 outside, 95 indoors), each composed of an average of 150 million points. A total of 17 total stations (TS) and 1847 images in seven exposures for high-dynamic-range capture and fusion were taken. The registration of the different entities was made using ground control points (GCPs) from 575 targets (141 outside and 434 indoors), registered using the Leica TCRP1200 total station in the global system ETRS89 Belgian Lambert 2008, by indirect georeferencing utilizing a Trimble R10 GNSS receiver in RTK mode. The misalignment between the TLS and photogrammetric reconstructions was assessed by comparing the iterative closest point (ICP) [87] adjustments with the independent registration obtained using TS GCPs only. We obtained a Gaussian distribution of residuals centered on 1 mm, permitting the validation of the registration. The standard deviation for the castle's facade is 21 mm, as only 202 photos were used for the initial reconstruction. Thus, low-overlapping areas with a high incidence angle, such as the roof (Figure 11), impacted the photogrammetric reconstruction's precision.

Figure 11. Color-coded distances between the TLS and photogrammetric point clouds after the TS-based registration.

To best design a robust photoshoot, it is important to first obtain a global skeleton from a highly overlapping scenario (above 80% lateral, longitudinal, and vertical overlap), as illustrated in Figure 11. The ability to use a platform such as an Unmanned Aerial Vehicle (UAV) would be beneficial if one wants to better manage a globally consistent overlap and resolution adaptation, even on roof elements. From this, we can precisely get to each point of interest by "zooming with the feet", i.e., progressively getting closer to each object of interest from this initial polygonal object (Figure 12). Taking pictures in the best conditions for a robust data acquisition must answer these criteria:

• It is necessary to prefer diffuse lighting of the scene to avoid the coexistence of shadow ranges and areas under direct sunlight (best results are obtained on a cloudy day without shadows);
• Direct illumination can be the source of image saturation if the scene strongly reflects light (shiny materials);
• The shots must be taken with the best resolution of the camera to produce the most accurate 3D model possible (the resolution greatly impacts the intended reconstruction precision).
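The 80% overlap requirement translates directly into a capture plan: the distance between consecutive shots is the image footprint times (1 − overlap). A small sketch under a simple pinhole-camera assumption (the sensor, lens, and distance values below are hypothetical, not the survey's actual settings):

```python
def footprint(sensor_size_mm: float, focal_mm: float, distance_m: float) -> float:
    """Ground/facade coverage of one image under the pinhole model:
    coverage = sensor size * object distance / focal length."""
    return sensor_size_mm * distance_m / focal_mm

def camera_spacing(image_footprint: float, overlap: float) -> float:
    """Distance between consecutive shots so that two successive image
    footprints share the requested overlap fraction."""
    if not 0.0 <= overlap < 1.0:
        raise ValueError("overlap must be in [0, 1)")
    return image_footprint * (1.0 - overlap)

# Full-frame sensor (36 mm wide), 24 mm lens, 10 m from the facade:
w = footprint(36.0, 24.0, 10.0)   # 15 m of facade covered per image
step = camera_spacing(w, 0.80)    # move 3 m between shots for 80% overlap
```

The same spacing rule applies laterally, longitudinally, and vertically; flight-planning software for UAVs performs exactly this computation from the requested ground sampling distance.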

Figure 12. Camera position and orientation (in red) for capturing a polygonal object. The overlap is over 80% for lateral and vertical position planning.

To permit a quick indoor acquisition, we captured the geometry using only the TLS, with a nodal-point full-dome overlay on 95 stations acquired with the Canon 5D Mark III full-frame camera for colorization, delivering results such as those presented in Figure 13.
Figure 13. Point cloud of the first floor of the castle of Jehay, before the renovations.

While the approach behind the reconstruction is really interesting, it exceeds the scope of this paper, and we refer the reader to the work of Remondino et al. [64] for more details. We employed the software ContextCapture [66] for the independent modeling of photos (statues, the exterior of the castle) and RealityCapture [74] for converting TLS scans into mesh elements for the indoor parts. The choice of the software solutions ContextCapture and RealityCapture was made by weighing their automation efficiency and the accuracy needed for the reconstruction, as well as the quality-control (QC) possibilities demanded by the contractor. However, open-source solutions exist, such as Meshroom [88], MicMac [89], VisualSfm [69] or even Graphos [90].

5.3. Data Integration (Geometry Optimization and Visualization Optimization)


From the 3D data acquisition phase, we generate the initial highly detailed mesh elements: two separate meshes for the floors, and one for the castle exterior. To deploy a ready-to-use virtual site, the data is integrated with a set of tools and techniques to favor a fluid human–machine interaction. As such, the stereoscopic effect requires rendering the scene once for each eye; therefore, a VR environment is prone to latency if too complex. A naïve approach to reduce the complexity of a scene is to downgrade the quality of the visual elements, such as the visual effects or the amount of detail of the 3D model's mesh. However, it is usually necessary to decimate the initial meshes, which are redundant and do not have an acceptable number of triangles and vertices. Thus, the obtained meshes are down-sampled using a quadric-based edge collapse decimation approach (with texture), as described in [91] and implemented in MeshLab. It permits the reduction of the castle exterior from 20 million triangles to 53,000 (Figure 14) without noticeable effects on the rendering quality for users.
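The quadric-based decimation cited above ([91]) ranks candidate edge collapses by a per-vertex error quadric: the sum, over the planes of the incident triangles, of squared point-to-plane distances. The sketch below shows only this error evaluation, not the full decimation loop implemented in MeshLab:

```python
def plane_quadric(a, b, c, d):
    """Fundamental quadric K_p = p pᵀ for the plane ax + by + cz + d = 0
    (coefficients assumed normalized so that a² + b² + c² = 1)."""
    p = (a, b, c, d)
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(q1, q2):
    """Quadrics accumulate by simple 4x4 matrix addition."""
    return [[q1[i][j] + q2[i][j] for j in range(4)] for i in range(4)]

def quadric_error(q, x, y, z):
    """vᵀ Q v with v = (x, y, z, 1): the sum of squared distances of the
    candidate position to the planes accumulated in Q."""
    v = (x, y, z, 1.0)
    return sum(v[i] * q[i][j] * v[j] for i in range(4) for j in range(4))

# Vertex whose incident triangles lie in the planes z = 0 and x = 0:
q = add_quadrics(plane_quadric(0, 0, 1, 0), plane_quadric(1, 0, 0, 0))
quadric_error(q, 0.0, 0.0, 0.0)   # 0.0: collapsing onto this position is free
quadric_error(q, 0.5, 0.0, 1.0)   # 1.25 = 0.5² + 1.0²
```

The decimation loop repeatedly collapses the edge whose optimal target position minimizes this error; the texture-preserving variant used here additionally penalizes UV distortion.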

Figure 14. Original mesh (left) to simplified mesh (right).


As for the indoor meshes extracted with RealityCapture [74], three-million-triangle mesh elements per floor were used, simplified with the same quadric-based edge collapse decimation approach, aiming for 50,000 triangles per floor at the lowest Level of Detail (LoD). Both the indoor and outdoor captured environments of the castle of Jehay are integrated into the game engine Unity [92]. This implicit outdoor–indoor space partition is leveraged through our LoD strategy described in Section 5.3.5. As such, the entire castle is separated into two scenes, the interior space and the exterior space, each further refined and calling different mesh complexities ranging between the initial high-polygon meshes and the lowest levels. This is then used for automated occlusion culling with frustum culling to ensure a stable framerate (illustrated in Figure 15; see Section 5.3.4).

Figure 15. Frustum and occlusion culling.
Some surrounding areas and vegetation of the castle of Jehay are modeled using computer-aided design techniques. Thus, the obtained mesh elements fall into both categories (computer-generated and reality capture), as seen in Section 4.3. Furthermore, other reality-captured elements are integrated, such as the chapel, as seen in Figure 16.

Figure 16. Three-dimensional model of Jehay's chapel obtained by reality capture.

Concerning this trade-off between visual fidelity and performance, one should find a balance between a fluid environment and the intended visual quality. We describe this process in more detail in the following sub-sections.

5.3.1. Hard Surface Normal-Map Baking


To reduce the complexity of a mesh without downgrading its visuals, a common technique in computer graphics is normal-map baking [91,93,94]. As described in [93], the workflow consists of creating a simplified version of the 3D mesh. The simplified mesh can be generated automatically [91] or by altering the mesh's topology in a 3D software. We created an additional texture, the normal map, which gathers the surface detail of the original mesh in a compressed image format. Finally, this detailed texture is applied to the simplified mesh and used to retrieve the same details, created by the lighting, as the original mesh. Figure 17 shows the results of this process.

Figure 17. Reworked mesh of the castle of Jehay and lightmap creation, with (up) and without (down) texture.

5.3.2. Draw Call Batching
To be drawn on the screen, a scene element must call the GPU in a process named a draw call. If the number of elements in the scene is high, the back-and-forth iterations between the elements and the GPU can be computationally demanding. To reduce the number of calls, an easy optimization technique is to use draw call batching to group all elements that share the same materials before calling [95].
Some game engines like Unity can automatically batch game objects that share the same
materials [92]. It is noteworthy that this automated batching requires the copying of the combined
geometry and thus increases the memory footprint. On the other hand, one can reduce the number
of draw calls by merging two meshes sharing the same material into one. Moreover, when two 3D
models only differ by the texture of their material, a common technique is to make an atlas [96] of the
two textures. Therefore, instead of having two draw calls for the separated textures, we can benefit
from only one draw call using the atlas.
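The effect of batching on the draw call count can be sketched by grouping a scene's objects by material (the scene contents below are hypothetical):

```python
from collections import Counter

def draw_calls_unbatched(scene):
    """One draw call per object (mesh, material) — the naive case."""
    return len(scene)

def draw_calls_batched(scene):
    """With draw call batching, objects sharing a material are grouped and
    submitted together: one call per distinct material."""
    return len(Counter(material for _mesh, material in scene))

# Hypothetical scene: walls and window frames share two materials, and the
# two texture variants of a door were merged into one atlas material.
scene = [("wall_n", "stone"), ("wall_s", "stone"), ("frame_1", "wood"),
         ("frame_2", "wood"), ("door_a", "door_atlas"), ("door_b", "door_atlas")]
draw_calls_unbatched(scene)   # 6
draw_calls_batched(scene)     # 3
```

The atlas trick from the paragraph above shows up here as the shared "door_atlas" material: without it, the two doors would require two separate calls even after batching.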

5.3.3. Lightmap Baking


As explained by Jason Gregory in [97], lighting is one of the most important aspects for visual quality. In a game engine, the lighting makes use of a mathematical model which can be described as a partial solution of the rendering equation, depending on the level of visual fidelity wanted [98]. Therefore, achieving realistic lighting requires additional calculations which can be computationally demanding. To overcome this issue, a common technique in real-time rendering is to pre-calculate (bake) the lighting and store the result in a texture map known as a light map. At runtime, the light map texture is projected on the objects in the scene and used to determine the lighting on them. On the downside, a light map requires extra memory in the game, and objects that are in the light map cannot change their illumination over time.
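A minimal sketch of the bake step, assuming a simple Lambertian diffuse term as the "partial solution" of the rendering equation (real lightmappers also integrate indirect bounces):

```python
import math

def bake_lightmap(normals, light_dir):
    """Pre-calculate the Lambertian term max(0, N·L) once per texel and store
    it; at runtime the shader only reads the stored value."""
    norm = math.sqrt(sum(c * c for c in light_dir))
    l = tuple(c / norm for c in light_dir)
    return [max(0.0, sum(n[i] * l[i] for i in range(3))) for n in normals]

# Two texels: one facing the light, one facing away. The geometry must stay
# static, which is why lightmapped objects cannot change illumination later.
texel_normals = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
lightmap = bake_lightmap(texel_normals, light_dir=(0.0, 0.0, 2.0))
# lightmap == [1.0, 0.0]
```

The stored values trade runtime shading cost for texture memory, which is exactly the downside noted above.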
5.3.4. Occlusion Culling and Frustum Culling
Occlusion culling and frustum culling are two techniques aimed at only rendering what the user sees. While frustum culling removes the objects that are outside of the user's view frustum, occlusion culling tends to remove what is occluded by another object. The process is illustrated in Figure 18. Frustum culling can be automated by calculating whether a bounding volume of the object (usually a box or sphere) has points under all six planes of the view volume. Occlusion culling is a more complex problem to automate. To calculate if an object occludes another object, a common technique is the generation of potentially visible sets (PVS). PVS can be defined manually, by cutting the scene into sub-regions, for example. However, other algorithms exist for automated PVS based on the subdivision of regions. Even if the result might be imperfect, automated PVS can provide a quick optimization without the need for additional work. The reader can find an overview of such techniques in [99], where the authors review several algorithms and discriminate them according to four entries: conservative, aggressive, approximate, and exact.
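The bounding-sphere frustum test described above can be sketched as six signed-distance checks (the plane coefficients below are illustrative; an engine extracts them from the camera's view-projection matrix):

```python
def inside_frustum(center, radius, planes):
    """Conservative sphere test: the object is culled only when its bounding
    sphere lies entirely behind one of the six frustum planes.
    Each plane is (a, b, c, d) with the normal pointing into the frustum."""
    for a, b, c, d in planes:
        signed_dist = a * center[0] + b * center[1] + c * center[2] + d
        if signed_dist < -radius:
            return False  # completely outside this plane: cull the object
    return True

# Axis-aligned box "frustum" [-10, 10]^3 written as six inward-facing planes.
planes = [(1, 0, 0, 10), (-1, 0, 0, 10),
          (0, 1, 0, 10), (0, -1, 0, 10),
          (0, 0, 1, 10), (0, 0, -1, 10)]
inside_frustum((0, 0, 5), 1.0, planes)    # True: rendered
inside_frustum((0, 0, 25), 1.0, planes)   # False: culled
```

The test is conservative on purpose: a sphere straddling a plane is kept, so nothing visible is ever dropped, at the cost of occasionally rendering an off-screen object.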

Figure 18. Culling optimization comparison. The second and third pictures display a top view of the scene; the view frustum is indicated between the white lines. We can remark how half of the terrain and castle are not rendered, as they are not inside the user's view frustum.

5.3.5. Levels of Details (LOD)
When far from a 3D model, the viewer is not able to perceive the full complexity of a mesh. Based on this observation, game engines tend to use different levels of complexity for a mesh: the levels of detail (LOD). Then, the calculated distance between the virtual camera and the object is used to display an adapted version of the mesh to optimize the performance. This technique can even improve the visual fidelity, as it reduces geometry aliasing. The downside of using such a technique is that the memory footprint induced by having multiple meshes for one object is higher.
At this step, we obtained several meshes with different levels of detail (e.g., for a part in Figure 19),
to be integrated in the VR environment. These were generated by optimizing both the geometry and
texture of elements to maximize the performance of the application through normal mapping, culling,
and LoD strategies. They constitute the virtual environment data used for populating the prototype,
which serves as the backbone for the VR environment design described in Section 5.4.1.
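The distance-based switch can be sketched as follows, using hypothetical switch distances for the three meshes of Figure 19 (engines such as Unity express the same idea as screen-size ratios on an LOD Group):

```python
def select_lod(distance, lods):
    """Pick the first LOD whose switch distance is not exceeded; lods is a
    list of (max_distance, mesh_name) sorted by increasing distance."""
    for max_distance, mesh in lods:
        if distance <= max_distance:
            return mesh
    return lods[-1][1]  # farther than every threshold: keep the coarsest mesh

# Hypothetical switch distances (meters) for the meshes of Figure 19.
lods = [(10.0, "castle_100k_tris"),
        (50.0, "castle_10k_tris"),
        (float("inf"), "castle_1k_tris")]
select_lod(4.0, lods)    # 'castle_100k_tris'
select_lod(120.0, lods)  # 'castle_1k_tris'
```

All three meshes stay resident in memory so the switch is instantaneous, which is the memory-footprint downside noted above.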

Figure 19. Levels of details for a part of the castle. From left to right, the models are respectively composed of 1000, 10,000, and 100,000 triangles.

5.4. VR Design

5.4.1. The Locomotion System
Inadequate
Inadequatelocomotion
locomotionsystems
systems lead
lead to
to VR
VR sickness which, similar
sickness which, similartotomotion
motionsickness,
sickness,result
resultinin
symptoms like nausea, headaches, or vomiting and can ruin the user’s experience
symptoms like nausea, headaches, or vomiting and can ruin the user’s experience [100]. In our case [100]. In our case
study,
study,we weneeded
neededthe theuser
userto
tobe
beable
able to
to move
move freely
freely to discover the
to discover thevirtual
virtualenvironment
environmentbybyhimself.
himself.
For
Forthis,
this,we
weused
usedthree
threedifferent
differenttypes
typesofof locomotion
locomotion systems simultaneously
simultaneously to toextend
extendthe
theuser’s
user’sarea
area
and
and control where the user could go while reducing VR sickness. We detail each locomotion systemin
control where the user could go while reducing VR sickness. We detail each locomotion system
Remote
the theSensing.
in following 2020,
following x FOR PEER
subsections
subsections REVIEW
below andand
below assess each
assess oneone
each as illustrated in in
as illustrated Figure 20.20.
Figure 20 of 32

Move-in place. The principal movement implemented was move-in-place. The basic idea of this
method is to imitate a walking motion while staying in place. To this end, the authors in [101] used a
knee cap to track the movement of the legs and assign the corresponding movement in VR. Another
technique is found in [102], where the tracking used is the motion of the headset to detect each step.
In our case study, we integrated a walk-in-place motion method using the swing of the controller.
The user could swing the arm as if he was running and he would move in the direction he was facing.
However, the downside of such a locomotion system is its precision.

Figure 20. Mean values and standard deviation of the summed scores for each locomotion type.
Figure 20. Mean values and standard deviation of the summed scores for each locomotion type. The
The higher the score, the more the participant felt present in the virtual environment. The maximum
higher the score, the more the participant felt present in the virtual environment. The maximum
theoretical score is indicated in blue.
theoretical score is indicated in blue.

Teleportation. A commonly used solution in VR for moving the user while reducing motion sickness is teleportation. Using a controller, the user can point at a direction, indicated by a straight line or a parabolic curve, where he wants to be teleported to. Then, at the press of a button, a fade transition occurs, and the user position is set to the desired location. We implemented a teleportation feature (materialized as clouds above the castle) combined with move-in-place to reach an unattainable zone (Figure 21). The user could point with the controller at the cloud and press a button to be teleported to it, resulting in a unique view of the entire castle and its surroundings from above, which is non-replicable on-site.

Figure 21. User’s view once teleported on a cloud.
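The parabolic pointing described above can be sketched by integrating a simple ballistic arc from the controller pose until it crosses the ground plane. The launch speed, gravity value, and flat-ground assumption below are illustrative simplifications, not the application's actual parameters.

```python
def teleport_target(origin, direction, speed=8.0, gravity=9.81, dt=0.02, max_steps=500):
    """Integrate a parabolic arc launched from the controller position
    along its pointing direction, and return the point where the arc
    crosses the ground plane (y = 0), or None if it never lands."""
    x, y, z = origin
    vx, vy, vz = (d * speed for d in direction)
    for _ in range(max_steps):
        nx, ny, nz = x + vx * dt, y + vy * dt, z + vz * dt
        vy -= gravity * dt
        if ny <= 0.0:  # crossed the ground plane: interpolate the hit point
            t = y / (y - ny)
            return (x + (nx - x) * t, 0.0, z + (nz - z) * t)
        x, y, z = nx, ny, nz
    return None
```

In a real engine the per-step test would be a raycast against the scene geometry rather than a comparison against a flat plane.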

Automatic routing. In some cases, and for passive tourism, it is optimal to prevent a visitor from controlling his VR movement as the visitor must get used to the locomotion system using the controllers. In the case study, we designed a unique way to explore the outside of the castle with a boat tour in the moat of the castle, as shown in Figure 22.

Figure 22. Boat tour around the castle.

The boat moved automatically along a curve during the tour, and the user had no control over the movement of the boat apart from looking in all directions. Because the user does not have control over its position, a smooth movement is thus required to avoid motion sickness as sharp turns provide a more stimulus-inducing section, in terms of the impression of motion in a stationary position [100,103]. Even though this method is not aimed at visitors who want to experience a free exploration movement, it is an optimal approach for first VR experiences, as it simplifies the user interaction. Therefore, we built a version of the application using only the automated boat tour for people new to VR and digital immersive experiences (Figure 23).

Figure 23. Curve of the boat (in green).
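A smooth predefined path such as the boat's curve is commonly sampled from a spline; the Catmull-Rom sketch below illustrates the idea. This is a generic technique, not the application's specific implementation, and constant perceived speed would additionally require arc-length reparameterization.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Interpolate a point on a Catmull-Rom segment between control
    points p1 and p2 (0 <= t <= 1); p0 and p3 shape the tangents.
    Points are tuples of any dimension (2D or 3D)."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * ((2 * b) + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

Because the curve passes exactly through its control points, waypoints placed in the moat directly define where the boat travels.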

Multi-touch movement. Another promising locomotion mode is multi-touch movement. It is heavily inspired by the multi-touch user experience found in touch screen devices. The VR controllers are rendered as abstracted hands which the user uses to “grab” the environment by pressing on the triggers of each controller (Figure 24). When one or more controllers are moved with their triggers held, the scene is transformed (pan, rotate, scale) while keeping the controllers’ scene position as close as possible to their original positions. It allows the user to move efficiently by grabbing the scene and pulling it closer with a single controller, or to even rotate and scale the scene using two controllers.
The transformations mentioned above are subject to a configurable subset of the following constraints:
• Keeping the up-axis of the scene aligned with the up-axis of the physical world.
• Keeping the scale of the scene fixed.
• Keeping the rotation of the scene fixed.
Usually, the first constraint is enforced, as tilting the scene increases the risk of simulator sickness while rarely providing a meaningful way to explore the scene. For users who are not familiar with the multi-touch interactions, it can also be desirable to enforce the other constraints as these prevent accidental rotation or scaling of the scene while moving around.

Figure 24. First-person view of multi-touch interactions with a TLS indoor scan. One hand is triggered and allows a translation in the direction of the visitor’s hand motion.
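The two-controller grab described above amounts to solving for the similarity transform that keeps the grabbed points under the hands. A minimal ground-plane sketch, assuming the up-axis constraint is enforced (so only scale, yaw, and translation remain):

```python
import math

def two_hand_transform(a0, b0, a1, b1):
    """Derive the (scale, yaw, translation) mapping the two grab points
    (a0, b0) onto the current controller positions (a1, b1) on the
    ground plane. Points are (x, y) tuples; this 2D reduction is an
    illustrative simplification of the full 3D case."""
    v0 = (b0[0] - a0[0], b0[1] - a0[1])
    v1 = (b1[0] - a1[0], b1[1] - a1[1])
    scale = math.hypot(*v1) / math.hypot(*v0)
    yaw = math.atan2(v1[1], v1[0]) - math.atan2(v0[1], v0[0])
    c, s = math.cos(yaw), math.sin(yaw)
    # Translation chosen so that a0, once scaled and rotated, lands on a1.
    tx = a1[0] - scale * (c * a0[0] - s * a0[1])
    ty = a1[1] - scale * (s * a0[0] + c * a0[1])
    return scale, yaw, (tx, ty)
```

Enforcing the fixed-scale or fixed-rotation constraints from the list above simply means clamping `scale` to 1 or `yaw` to 0 before applying the transform.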

Compared to previously mentioned movement patterns, multi-touch movement gives the user a higher degree of freedom to explore a VR scene. The scaling transformation (e.g., in Figure 24 where a full indoor scan is scaled in real-time to about 1:5000) allows for a quick exploration when scaling down or detailed observations when magnifying scales. We observed that users quickly became comfortable with this form of movement even in setups with large tracking spaces where regular walking is a valid method to move around the scene. They intuitively decided to remain stationary and relied almost exclusively on multi-touch movement.

5.4.2. Multi-Users VR Environments

Five computers were available on-site at the castle of Jehay. To benefit from this setup, we decided to integrate a multi-user system into the application. Our approach was to use the network system to synchronize the position and rotation of the headset and the controllers and use the data to animate a full body which mimics the body of the user in the virtual world; thereby, the visitors could see each other while visiting the castle (Figure 25).

Figure 25. Three users seeing each other in the castle of Jehay while playing together as seen in the bottom right corner.
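A minimal sketch of the pose synchronization described above, assuming a simple custom wire format (the application's actual networking layer is not specified here): each update packs a user id together with the position and orientation quaternion of the headset and both controllers.

```python
import struct

# Hypothetical wire format: uint32 user id, then 3 devices (headset,
# left controller, right controller), each as 3 position floats plus
# a 4-float orientation quaternion.
POSE_FMT = "<I21f"  # 1 uint32 + 3 devices * (3 pos + 4 quat) floats

def pack_poses(user_id, poses):
    """poses: three (position, quaternion) tuples, one per device."""
    flat = [user_id]
    for pos, quat in poses:
        flat.extend(pos)
        flat.extend(quat)
    return struct.pack(POSE_FMT, *flat)

def unpack_poses(data):
    values = struct.unpack(POSE_FMT, data)
    user_id, rest = values[0], values[1:]
    poses = [(rest[i:i + 3], rest[i + 3:i + 7]) for i in range(0, 21, 7)]
    return user_id, poses
```

Each packet is 88 bytes, so broadcasting poses between the five on-site machines at headset frame rates is a negligible bandwidth cost; the receiving side feeds the three poses to the avatar's inverse-kinematics solver.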

Creating a body in VR is a tedious task. The challenge consists of using the data from the positions of the headset and the controllers to recreate the entire body pose in a realistic form. We can address this problem using inverse kinematics [104], which determines a set of joint configurations for retrieving a movement. This method is commonly used in robotics and can be similarly used on the rig of a human 3D mesh. We invite the reader to study [104], which provides a complete summary of the techniques used to solve inverse kinematics. In this case study, the method implemented was “Forward And Backward Reaching Inverse Kinematics” (FABRIK) [105], which creates a set of points along a line to find each joint position. Then, a series of constraints were combined to retain a realistic movement [106], as shown in Figure 26.

Figure 26. The angle exceeds the threshold on the left figure and is corrected on the right figure.
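The core FABRIK iteration can be sketched as follows on a 2D joint chain. This minimal version omits the joint-angle constraints discussed above and assumes the target is reachable and the joints never coincide.

```python
import math

def fabrik(joints, lengths, target, tol=1e-4, max_iter=32):
    """Forward And Backward Reaching Inverse Kinematics on a 2D chain.
    joints: list of (x, y); lengths[i] is the bone between joints i and i+1.
    Returns the joint positions after convergence (or max_iter passes)."""
    base = joints[0]
    for _ in range(max_iter):
        # Backward pass: pin the end effector to the target, walk to the root.
        joints[-1] = target
        for i in range(len(joints) - 2, -1, -1):
            d = math.dist(joints[i], joints[i + 1])
            t = lengths[i] / d
            joints[i] = tuple(joints[i + 1][k] + (joints[i][k] - joints[i + 1][k]) * t
                              for k in range(2))
        # Forward pass: pin the root back to its original position.
        joints[0] = base
        for i in range(len(joints) - 1):
            d = math.dist(joints[i], joints[i + 1])
            t = lengths[i] / d
            joints[i + 1] = tuple(joints[i][k] + (joints[i + 1][k] - joints[i][k]) * t
                                  for k in range(2))
        if math.dist(joints[-1], target) < tol:
            break
    return joints
```

Each pass simply re-points every bone toward its neighbor and restores its length, which is why the method needs no matrix inversions and runs comfortably at VR frame rates.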

5.4.3. Panoramic 360° Pictures

A 360° panorama captures the complete surrounding into a single image that is then mapped to a geometry (sphere, cube, cylinder, etc.) through projective mappings. Several 360° panoramas were taken on-site from scanning positions and leveraged through the creation of a 360°-viewer VR-integrated feature. The viewer works through interactive portals between the real world and the virtual world for the user to see the comparison with the 3D model of the castle. When the user intersects the portal with one of the controllers, he is teleported inside the 360°-viewer, as seen in Figure 27.

Figure 27. User inside the 360°-viewer.
When the user intersects the portal with one of the controllers, he is teleported inside the 360°-viewer. For the VR design, we set the 360° panoramas as the main texture of a sphere where we invert the normals so the newly mapped texture is displayed from the interior of the sphere. By doing so, one could position the user inside the sphere as shown in Figure 28. It is noteworthy that, for a better quality, the sphere should be modeled with enough triangles and should not receive lighting.

Figure 28. On the left is the portal to a 360° image integrated in the VR application. On the right is one of the images displayed when the user interacts with the portal.
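The inverted-normal sphere can be produced by generating a UV sphere and flipping the triangle winding order, as in this hypothetical mesh-building sketch (the actual viewer's mesh construction is engine-specific):

```python
import math

def inward_sphere(radius=10.0, stacks=16, slices=32):
    """Build a UV sphere whose triangles wind toward the inside, so a
    360-degree panorama mapped on it is visible from the center.
    Returns (vertices, triangle index triples)."""
    verts, tris = [], []
    for i in range(stacks + 1):
        phi = math.pi * i / stacks
        for j in range(slices + 1):
            theta = 2.0 * math.pi * j / slices
            verts.append((radius * math.sin(phi) * math.cos(theta),
                          radius * math.cos(phi),
                          radius * math.sin(phi) * math.sin(theta)))
    for i in range(stacks):
        for j in range(slices):
            a = i * (slices + 1) + j      # current ring
            b = a + slices + 1            # ring below
            # Winding chosen so faces point toward the interior; swap the
            # last two indices of each triangle for an outward-facing sphere.
            tris.append((a, b, a + 1))
            tris.append((a + 1, b, b + 1))
    return verts, tris
```

The extra column of vertices per ring (`slices + 1`) duplicates the seam so the panorama's left and right edges can carry distinct texture coordinates.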

6. Discussions

The VR environment results in an interactive application which is endowed with a rich control over the user’s immersion. It is worth mentioning that the castle representatives, the most familiar with the site, were surprised to rediscover the castle from a point of view that was not replicable outside of the application, consolidating initial thoughts expressed in [82]. For example, the boat tour around the castle’s moat or the aerial view demonstrates the unique experience that a safe and controlled virtual
environment can achieve, even for accustomed users. It is often hard to gather information at the initial user’s needs phase, which justifies a cyclic application design to refine the initial prototype.
Moreover, reality capture data as a support for VR environments largely eases the application
update throughout the development. Indeed, during the feedback iteration phases with the client,
we were able to quickly modify the different elements populating the VR application (mesh complexity,
3D temporal curation, environment parameters, textures, etc.) which would not be the case if it was
based on a 360◦ video for example. The georeferencing of data also permits the swapping in place
of all the data for quick tests to assess the optimal geometric complexity for the on-site hardware
configurations. Furthermore, this characteristic stands even after the development. This allowed us
to quickly create a parallel version around the theme of Halloween built upon our original game by
solely modifying the lighting properties and adding new 3D models as well as new reality captured
elements at different intervals in time (see Figure 29). This modulation is an example of how to set up
the virtual heritage environment so that users will be interested to explore further.

Figure 29. Jehay’s castle with a Halloween theme.

It is noteworthy that the alterable properties of a virtual environment offer opportunities outside the tourism sector. We later used the application as a study environment for user’s behavior, such as a comparison for the VR locomotion system [107], as well as the base for the initial demands of a spatial information system. Additionally, usages were found in other fields such as psychology and architecture. In psychology, the application was expanded to study the phobia of a user inside a realistic environment. In architecture, the immersion was suitable to study the structure of the castle’s building. Furthermore, the V-HS design was used and adapted to create a multi-modal immersive experience to recreate a lost or hidden environment. It led to a better comprehension of the site and allowed people to discover important changes that are no longer visible, as illustrated in Figure 30 and presented in Supplementary Materials. These advances extend over possibilities given by references [78–80] and highlight the multi-user, generalization, and navigation adaptability of our system.

Figure 30. A virtual reality experiment showing the multi-modal data integration to serve the comprehension of construction phases.

Limitations were encountered during the testing phases of the development. Indeed, while most
of our testers were quickly getting familiar with the controls, some subjects not accustomed to VR felt
uncomfortable using the headset. It is more problematic as the personnel of the castle had to spare a
lot of time explaining how the VR environment worked, which is unpractical for large groups even if
all the controls were indicated inside the game. For this reason, a simplified version using only the
boat tour around the castle, which did not require any controls, was designed. It provides a solution
on how to ease the navigation in a virtual heritage environment to prevent users from leaving early.
This highlights the flexibility of the virtual heritage system to handle changes rapidly by following the
proposed user-centric design.
Hardware capabilities were another limitation. Concerning the headset, different problems
occurred that disrupted the user’s immersion such as the weight, the wire netting around the user and
if the user is wearing glasses (the glasses would not fit inside the headset). As mentioned in Section 5.3,
a major part of the workflow is spent on the integration of the data inside the virtual environment.
With the current state of hardware capabilities, this means meeting the hardware capabilities while
maximizing the quality of the environment. We recall from our literature review that this was one of
the first concerns about the implication of VR on the tourism sector by the author in [18]. We observe
that even if improvements have been made regarding the state of this technology, additional progress
is still necessary to deliver a comfortable user experience.
These concerns brought to our attention the importance of the context in which the immersion
takes place. Even if developers are cautious about the ease of use of the application, a focused
methodology is required through the user-centric design. The iterative steps should provide future
users with enough information to get easily familiar with the application and the proper hardware
setup that needs to be available to facilitate a user’s experience. As such, further studies are needed to
weigh the importance of these parameters and how to improve user experience on-site.
Furthermore, as mentioned in Section 4, this article covers only the first stages of the work which
consisted of acquiring 3D data and the development of the virtual reality application for a user-centric
design. As such, an additional study needs to complete the other stage of the workflow which would
benefit from a detailed study of the user’s feedback over the application.
The provided workflow is based on existing tools for a direct replication of the experiment
depending on the level of sophistication one needs. It includes the idea that oriented our choice
toward the specific acquisition design presented in Section 4.2 and the VR features presented in
Section 4.4. While we had access to several surveying techniques and hardware/software solutions, the
workflow is easily customizable to create a full low-cost approach. It will mainly impact the level of
automation, the human and machine time needed as well as the geometric reconstruction’s accuracy.
It is interesting to note that in that case, the workflow would orient the sensory choice toward terrestrial
photogrammetry approaches, which then would permit better control of the color mapping fidelity
onto the generated mesh.
Moreover, the 3D data processing phases toward a virtual reality application proved to be highly
dependent on the actual state of the VR technology. It first translated into a data derivation phase to
transform point clouds into mesh elements. The current research investigates a direct use of point
clouds in VR set-ups to better streamline the framework and leverage point-based rendering (Figure 31).
The first results are very encouraging and achieved VR-ready performances when using a spatial
structure (modified nested octree) to keep a targeted number of points to be displayed (Figure 32).
A semantic modeling approach, such as the one presented in [108], is an interesting perspective
that will be investigated. Additionally, our approach was aimed at optimizing the provided comfort
and immersion to the user by the techniques and interactions described in Sections 4 and 5. As the
hardware and software landscape rapidly changes, the enhanced immersive and graphical capabilities
will better handle raw point cloud inputs, facilitating the raw data re-use for V-HS.

Figure 31. Real-time VR rendering of the full point cloud (left) with visualization and manipulation
of semantic attributes (right). A shader is employed to enhance the edge rendering.
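A point budget of the kind described above (a targeted number of points kept by the modified nested octree) is typically enforced by visiting octree nodes in priority order and stopping once the budget is exhausted. A simplified sketch, where the node structure and the priority metric (e.g., projected screen-space size) are assumptions:

```python
def select_nodes(nodes, point_budget=2_000_000):
    """Greedily pick octree nodes in descending priority until the
    global point budget is reached. Each node is a dict with keys
    'id', 'points', and 'priority' (a hypothetical layout)."""
    chosen, total = [], 0
    for node in sorted(nodes, key=lambda n: n["priority"], reverse=True):
        if total + node["points"] > point_budget:
            continue  # skip nodes that would overshoot the budget
        chosen.append(node["id"])
        total += node["points"]
    return chosen, total
```

With the default budget of two million points, nodes fully in front of the camera (high priority) are refined first, which matches the behavior visible in Figure 32.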

[Figure 32 data: frames per second vs. points loaded (in millions) — 2 M: 102; 4 M: 49; 6 M: 41; 8 M: 23; 10 M: 23.]
Figure 32. The nodes in front of the camera (right image) are rendered in more detail than those partially in view. The target of 100 Frames Per Second (FPS) is found with two million points displayed, out of 2.3 billion points in total for the data (left image).
Figure 32. The nodes in front of the camera (right image) are rendered in more detail than those
partially in view. The target of 100suchFrames Perone
A semantic
Regarding modeling
the mode approach,
of locomotion inas atheVRSeconds (FPS) in
presented
application, is [108],
it found iswith two millionperspective
to note points
an interesting
is important that several
displayed, out of 2.3 billion points in total for the data (left image).
A semantic modeling approach, such as the one presented in [108], is an interesting perspective
that will be investigated. Additionally, our approach was aimed at optimizing the comfort and
immersion provided to the user by the techniques and interactions described in Sections 4 and 5.
As the hardware and software landscape rapidly changes, enhanced immersive and graphical
capabilities will better handle raw point cloud inputs, facilitating raw data re-use for the V-HS.

Regarding the mode of locomotion in a VR application, it is important to note that the several
presented modes require different degrees of pre-processing. Automatic movement requires the most
ahead-of-time input, as a meaningful exploration path covering the points of interest in the scene
has to be implemented. Both the move-in-place and teleportation modes require the recognition of
areas that are feasible to walk on or teleport to. Multi-touch movement, however, is a very
promising locomotion system because it has no such requirement, which opens the way to fully
automatic VR environment creation. Indeed, as the transformation parameters are computed
independently from any scene content, it is ideal for the visualization of 3D point clouds, giving
interesting research perspectives for future work on bringing 3D point clouds into VR environments.

Finally, on a more transversal note, this experiment shows the possibilities offered by conceptual
and modular bricks to establish a more sustainable approach built around a database structure
communicating with different interfaces and deliverables. This comes at a cost: we are moving away
from the “one-shot” logic, and site managers must, therefore, acquire new skills to support the
development and maintenance of the system while using and enriching it optimally. Nevertheless,
it shows that technological bricks can be added to provide additional outputs without remaking
everything from the beginning, which highly contributes to interoperable workflows. Moreover,
the V-HS can be used for other use cases, as hinted in the previous sections, particularly for
efficiently managing semantics and knowledge on top of spatial data.
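The recognition of walkable or teleportable areas required by the move-in-place and teleportation modes can be approximated directly from per-point surface normals. The sketch below is our illustration, not the paper's code: the function names and the z-up axis are assumptions, and a real pipeline would add clustering and clearance checks on top of this slope filter:

```python
import math

def is_walkable(normal, up=(0.0, 0.0, 1.0), max_slope_deg=10.0):
    """A point is a candidate walk/teleport target when its estimated
    surface normal deviates from the up axis by at most max_slope_deg."""
    dot = sum(n * u for n, u in zip(normal, up))
    norm = math.sqrt(sum(n * n for n in normal)) or 1.0
    cos_slope = abs(dot) / norm          # abs() tolerates flipped normals
    return cos_slope >= math.cos(math.radians(max_slope_deg))

def teleport_targets(points, normals, **kwargs):
    """Filter a point cloud down to the areas feasible to teleport to."""
    return [p for p, n in zip(points, normals) if is_walkable(n, **kwargs)]
```

Such a filter is exactly the kind of ahead-of-time pre-processing the teleportation mode needs, whereas multi-touch movement, as discussed above, can skip it entirely.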

7. Conclusions
The sustainability of a VR system demands ways of managing the addition of new data (and new
types of data) and adapting to new demands (output) using the latest technology. What is important
is not the techniques implemented, but the management of the integration of these evolutions into
a modular “system”. It is therefore essential to move away from the current trend of separating
achievements (preservation vs. visits for non-specialists) to allow a more sustainable approach through
evolutionary systems. The provided workflow is based on existing tools for a direct replication of
the experiment depending on the level of sophistication one needs. It includes ideas on how to
design a user-centric VR application and a concrete illustration on the use case of the castle of Jehay.
We propose detailed conceptual meta-models that aim at providing clear guidelines when creating
user-centric VR applications from reality capture data. We show that user feedback is key to the
success of the application design and that interaction modes, as well as rendering techniques, should
often be multi-faceted to cope with independent limitations. Future work will investigate better
ways to incorporate reality capture data modalities (e.g., point clouds) without heavy geometric and
scene optimizations.

Supplementary Materials: Videos of the castle of Jehay’s VR application are available for download at the
following link: https://dox.ulg.ac.be/index.php/s/WS237YvcciHGGsL. Videos of the Archeoforum of Liège VR
experiments are available at the following link: https://www.dailymotion.com/video/x7uazt4.
Author Contributions: Conceptualization, F.P.; methodology, F.P.; software, F.P. and Q.V.; validation, F.P. and Q.V.;
formal analysis, F.P. and Q.V.; investigation, F.P. and Q.V.; resources, F.P.; data curation, F.P.; Writing—original
draft preparation, F.P.; Writing—review and editing, F.P. and R.B.; visualization, F.P., Q.V. and C.M.; supervision,
F.P.; project administration, F.P., R.B. and L.K. All authors have read and agreed to the published version of
the manuscript.
Funding: This research was supported by “ASBL de gestion du château de Jehay” and partially funded by the
European Regional Development Fund, within the Terra Mosana Interreg EMR project.
Acknowledgments: We acknowledge all the members of the castle of Jehay and the “Province de Liege” for
their open-minded exchanges. We also would like to express our warm thanks to the various reviewers for their
pertinent remarks.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Ortiz, P.; Sánchez, H.; Pires, H.; Pérez, J.A. Experiences about fusioning 3D digitalization techniques for
cultural heritage documentation. In Proceedings of the ISPRS Commission V Symposium ‘Image Engineering
and Vision Metrology’, Dresden, Germany, 25–27 September 2006; pp. 224–229.
2. Mayer, H.; Mosch, M.; Peipe, J. Comparison of photogrammetric and computer vision techniques: 3D
reconstruction and visualization of Wartburg Castle. In Proceedings of the XIXth International Symposium,
CIPA 2003: New Perspectives to Save Cultural Heritage, Antalya, Turkey, 30 September–4 October 2003;
pp. 103–108.
3. Koch, M.; Kaehler, M. Combining 3D laser-Scanning and close-range Photogrammetry—An approach
to Exploit the Strength of Both methods. In Proceedings of the Computer Applications to Archaeology,
Williamsburg, VA, USA, 22–26 March 2009; pp. 22–26.
4. Bekele, M.K.; Pierdicca, R.; Frontoni, E.; Malinverni, E.S.; Gain, J. A survey of augmented, virtual, and mixed
reality for cultural heritage. J. Comput. Cult. Herit. 2018, 11, 1–36. [CrossRef]
5. Kersten, T.P.; Tschirschwitz, F.; Deggim, S.; Lindstaedt, M. Virtual reality for cultural heritage
monuments—from 3D data recording to immersive visualisation. In Proceedings of the Euro-Mediterranean
Conference, Nicosia, Cyprus, 29 October 2018; Springer: Cham, Switzerland, 2018; pp. 74–83.
6. Santana, M. Heritage recording, documentation and information systems in preventive maintenance.
Reflect. Prev. Conserv. Maint. Monit. Monum. Sites 2013, 6, 10. [CrossRef]
7. Chiabrando, F.; Donato, V.; Lo Turco, M.; Santagati, C. Cultural Heritage Documentation, Analysis and
Management Using Building Information Modelling: State of the Art and Perspectives BT. In Mechatronics
for Cultural Heritage and Civil Engineering; Ottaviano, E., Pelliccio, A., Gattulli, V., Eds.; Springer International
Publishing: Cham, Switzerland, 2018; pp. 181–202. ISBN 978-3-319-68646-2.
8. Liu, X.; Wang, X.; Wright, G.; Cheng, J.; Li, X.; Liu, R. A State-of-the-Art Review on the Integration of Building
Information Modeling (BIM) and Geographic Information System (GIS). Isprs Int. J. Geo-Inf. 2017, 6, 53.
[CrossRef]
Remote Sens. 2020, 12, 2583 28 of 32

9. Poux, F.; Billen, R. A Smart Point Cloud Infrastructure for intelligent environments. In Laser Scanning:
An Emerging Technology in Structural Engineering; Lindenbergh, R., Belen, R., Eds.; ISPRS Book Series; Taylor
& Francis Group/CRC Press: Boca Raton, FL, USA, 2019.
10. Poux, F.; Hallot, P.; Neuville, R.; Billen, R. Smart point cloud: Definition and remaining challenges. Isprs Ann.
Photogramm. Remote. Sens. Spat. Inf. Sci. 2016, IV-2/W1, 119–127. [CrossRef]
11. Palomar, I.J.; García Valldecabres, J.L.; Tzortzopoulos, P.; Pellicer, E. An online platform to unify and
synchronise heritage architecture information. Autom. Constr. 2020, 110, 103008. [CrossRef]
12. Dutailly, B.; Feruglio, V.; Ferrier, C.; Chapoulie, R.; Bousquet, B.; Bassel, L.; Mora, P.; Lacanette, D. Accéder en
3D aux données de terrain pluridisciplinaires-un outil pour l’étude des grottes ornées. In Proceedings of the
Le réel et le virtuel, Marseille, France, 9–11 May 2019.
13. Potenziani, M.; Callieri, M.; Scopigno, R. Developing and Maintaining a Web 3D Viewer for the CH
Community: An Evaluation of the 3DHOP Framework. In Proceedings of the GCH, Vienna, Austria, 12–15
November 2018; pp. 169–178.
14. Cheong, R. The virtual threat to travel and tourism. Tour. Manag. 1995, 16, 417–422. [CrossRef]
15. Perry Hobson, J.S.; Williams, A.P. Virtual reality: A new horizon for the tourism industry. J. Vacat. Market.
1995, 1, 124–135. [CrossRef]
16. Musil, S.; Pigel, G. Can Tourism be Replaced by Virtual Reality Technology? In Information and Communications
Technologies in Tourism; Springer: Vienna, Austria, 1994; pp. 87–94.
17. Williams, P.; Hobson, J.P. Virtual reality and tourism: Fact or fantasy? Tour. Manag. 1995, 16, 423–427.
[CrossRef]
18. Perry Hobson, J.S.; Williams, P. Virtual Reality: The Future of Leisure and Tourism? World Leis. Recreat. 1997,
39, 34–40. [CrossRef]
19. Google’s Latest VR App Lets You Gaze at Prehistoric Paintings. Engadget. Available online:
www.engadget.com (accessed on 28 March 2020).
20. Tsingos, N.; Gallo, E.; Drettakis, G. Perceptual audio rendering of complex virtual environments. Acm Trans.
Graph. (Tog) 2004, 23, 249–258. [CrossRef]
21. Dinh, H.Q.; Walker, N.; Hodges, L.F.; Song, C.; Kobayashi, A. Evaluating the importance of multi-sensory
input on memory and the sense of presence in virtual environments. In Proceedings of the IEEE Virtual
Reality (Cat. No. 99CB36316), Houston, TX, USA, 13–17 March 1999; pp. 222–228.
22. Tussyadiah, I.P.; Wang, D.; Jung, T.H.; tom Dieck, M.C. Virtual reality, presence, and attitude change:
Empirical evidence from tourism. Tour. Manag. 2018, 66, 140–154. [CrossRef]
23. Top Trends in the Gartner Hype Cycle for Emerging Technologies, 2017. Smarter with Gartner. Available
online: www.gartner.com (accessed on 28 March 2020).
24. Han, D.-I.D.; Weber, J.; Bastiaansen, M.; Mitas, O.; Lub, X. Virtual and augmented reality technologies to
enhance the visitor experience in cultural tourism. In Augmented Reality and Virtual Reality; Springer: Cham,
Switzerland, 2019; pp. 113–128. [CrossRef]
25. Guttentag, D.A. Virtual reality: Applications and implications for tourism. Tour. Manag. 2010, 31, 637–651.
[CrossRef]
26. Jacobson, J.; Ellis, M.; Ellis, S.; Seethaler, L. Immersive Displays for Education Using CaveUT. In Proceedings
of the EdMedia + Innovate Learning, Montreal, QC, Canada, 27 June–2 July 2005; Kommers, P., Richards, G.,
Eds.; Association for the Advancement of Computing in Education (AACE): Montreal, QC, Canada, 2005;
pp. 4525–4530.
27. Mikropoulos, T.A. Presence: A unique characteristic in educational virtual environments. Virtual Real. 2006,
10, 197–206. [CrossRef]
28. Roussou, M. Learning by doing and learning through play: An exploration of interactivity in virtual
environments for children. Comput. Entertain. (Cie) 2004, 2, 10. [CrossRef]
29. Gimblett, H.R.; Richards, M.T.; Itami, R.M. RBSim: Geographic Simulation of Wilderness Recreation Behavior.
J. For. 2001, 99, 36–42.
30. Marasco, A.; Buonincontri, P.; van Niekerk, M.; Orlowski, M.; Okumus, F. Exploring the role of next-generation
virtual technologies in destination marketing. J. Destin. Market. Manag. 2018, 9, 138–148. [CrossRef]
31. Tussyadiah, I.P.; Wang, D.; Jia, C.H. Virtual reality and attitudes toward tourism destinations. In Information
and Communication Technologies in Tourism 2017; Springer: Cham, Switzerland, 2017; pp. 229–239.
32. Buhalis, D.; Law, R. Progress in information technology and tourism management: 20 years on and 10 years
after the Internet—The state of eTourism research. Tour. Manag. 2008, 29, 609–623. [CrossRef]
33. Wei, W.; Qi, R.; Zhang, L. Effects of virtual reality on theme park visitors’ experience and behaviors:
A presence perspective. Tour. Manag. 2019, 71, 282–293. [CrossRef]
34. Wilkinson, C. Childhood, mobile technologies and everyday experiences: Changing technologies = changing
childhoods? Child. Geogr. 2016, 14, 372–373. [CrossRef]
35. Kharroubi, A.; Hajji, R.; Billen, R.; Poux, F. Classification and integration of massive 3d points clouds in
a virtual reality (VR) environment. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 165–171.
[CrossRef]
36. Virtanen, J.P.; Daniel, S.; Turppa, T.; Zhu, L.; Julin, A.; Hyyppä, H.; Hyyppä, J. Interactive dense point clouds
in a game engine. Isprs J. Photogramm. Remote Sen. 2020, 163, 375–389. [CrossRef]
37. Poux, F. The Smart Point Cloud: Structuring 3D Intelligent Point Data. Ph.D. Thesis, Université de Liège,
Liège, Belgique, 2019.
38. Berger, M.; Tagliasacchi, A.; Seversky, L.M.; Alliez, P.; Guennebaud, G.; Levine, J.A.; Sharf, A.; Silva, C.T.
A Survey of Surface Reconstruction from Point Clouds. Comput. Graph. Forum 2017, 36, 301–329. [CrossRef]
39. Poux, F.; Neuville, R.; Hallot, P.; Billen, R. Model for Semantically Rich Point Cloud Data. Isprs Ann.
Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-4/W5, 107–115. [CrossRef]
40. Schütz, M.; Krösl, K.; Wimmer, M. Real-Time Continuous Level of Detail Rendering of Point Clouds.
In Proceedings of the IEEE VR 2019, the 26th IEEE Conference on Virtual Reality and 3D User Interfaces,
Osaka, Japan, 23–27 March 2019; pp. 1–8.
41. Remondino, F. From point cloud to surface: The modeling and visualization problem. Isprs—Int. Arch.
Photogramm. Remote Sens. Spat. Inf. Sci. 2003, XXXIV, 24–28.
42. Koenderink, J.J.; van Doorn, A.J. Affine structure from motion. J. Opt. Soc. Am. A 1991, 8, 377. [CrossRef]
43. Haala, N. The Landscape of Dense Image Matching Algorithms. In Photogrammetric Week ’13; Fritsch, D., Ed.;
Wichmann: Berlin/Offenbach, Germany, 2013; pp. 271–284.
44. Quintero, M.S.; Blake, B.; Eppich, R. Conservation of Architectural Heritage: The Role of Digital
Documentation Tools: The Need for Appropriate Teaching Material. Int. J. Arch. Comput. 2007, 5,
239–253. [CrossRef]
45. Remondino, F. Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning. Remote Sens.
2011, 3, 1104–1138. [CrossRef]
46. Noya, N.C.; Garcia, Á.L.; Ramirez, F.C.; García, Á.L.; Ramírez, F.C. Combining photogrammetry and
photographic enhancement techniques for the recording of megalithic art in north-west Iberia. Digit. Appl.
Archaeol. Cult. Herit. 2015, 2, 89–101. [CrossRef]
47. Yastikli, N. Documentation of cultural heritage using digital photogrammetry and laser scanning. J. Cult.
Herit. 2007, 8, 423–427. [CrossRef]
48. Poux, F.; Neuville, R.; Van Wersch, L.; Nys, G.-A.; Billen, R. 3D Point Clouds in Archaeology: Advances in
Acquisition, Processing and Knowledge Integration Applied to Quasi-Planar Objects. Geosciences 2017, 7, 96.
[CrossRef]
49. Tabkha, A.; Hajji, R.; Billen, R.; Poux, F. Semantic enrichment of point cloud by automatic extraction and
enhancement of 360° panoramas. Isprs—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W17,
355–362. [CrossRef]
50. Napolitano, R.K.; Scherer, G.; Glisic, B. Virtual tours and informational modeling for conservation of cultural
heritage sites. J. Cult. Herit. 2018, 29, 123–129. [CrossRef]
51. Rua, H.; Alvito, P. Living the past: 3D models, virtual reality and game engines as tools for supporting
archaeology and the reconstruction of cultural heritage—the case-study of the Roman villa of Casal de Freiria.
J. Archaeol. Sci. 2011, 38, 3296–3308. [CrossRef]
52. Poux, F.; Billen, R. Voxel-based 3D point cloud semantic segmentation: Unsupervised geometric and
relationship featuring vs. deep learning methods. Isprs Int. J. Geo-Inf. 2019, 8, 213. [CrossRef]
53. Poux, F.; Neuville, R.; Billen, R. Point Cloud Classification Of Tesserae From Terrestrial Laser Data Combined
With Dense Image Matching For Archaeological Information Extraction. Isprs Ann. Photogramm. Remote Sens.
Spat. Inf. Sci. 2017, IV-2/W2, 203–211. [CrossRef]
54. Koutsoudis, A.; Vidmar, B.; Ioannakis, G.; Arnaoutoglou, F.; Pavlidis, G.; Chamzas, C. Multi-image 3D
reconstruction data evaluation. J. Cult. Herit. 2014, 15, 73–79. [CrossRef]
55. Mozas-Calvache, A.T.; Pérez-García, J.L.; Cardenal-Escarcena, F.J.; Mata-Castro, E.; Delgado-García, J. Method
for photogrammetric surveying of archaeological sites with light aerial platforms. J. Archaeol. Sci. 2012, 39,
521–530. [CrossRef]
56. Burens, A.; Grussenmeyer, P.; Guillemin, S.; Carozza, L.; Lévêque, F.; Mathé, V. Methodological Developments
in 3D Scanning and Modelling of Archaeological French Heritage Site: The Bronze Age Painted Cave of “Les
Fraux”, Dordogne (France). Isprs—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-5/W2, 131–135.
[CrossRef]
57. Kai-Browne, A.; Kohlmeyer, K.; Gonnella, J.; Bremer, T.; Brandhorst, S.; Balda, F.; Plesch, S.; Lehmann, D. 3D
Acquisition, Processing and Visualization of Archaeological Artifacts. In Proceedings of the EuroMed 2016:
Digital Heritage. Progress in Cultural Heritage: Documentation, Preservation, and Protection, Limassol,
Cyprus, 31 October–5 November 2016; Springer: Cham, Switzerland, 2016; pp. 397–408.
58. Armesto-González, J.; Riveiro-Rodríguez, B.; González-Aguilera, D.; Rivas-Brea, M.T. Terrestrial laser
scanning intensity data applied to damage detection for historical buildings. J. Archaeol. Sci. 2010, 37,
3037–3047. [CrossRef]
59. James, M.R.; Quinton, J.N. Ultra-rapid topographic surveying for complex environments: The hand-held
mobile laser scanner (HMLS). Earth Surf. Process. Landf. 2014, 39, 138–142. [CrossRef]
60. Lauterbach, H.; Borrmann, D.; Heß, R.; Eck, D.; Schilling, K.; Nüchter, A. Evaluation of a Backpack-Mounted
3D Mobile Scanning System. Remote Sens. 2015, 7, 13753–13781. [CrossRef]
61. Slama, C.; Theurer, C.; Henriksen, S. Manual of Photogrammetry; Slama, C., Theurer, C., Henriksen, S.W., Eds.;
Taylor & Francis Group/CRC Press: Boca Raton, FL, USA, 1980; ISBN 0-937294-01-2.
62. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. ‘Structure-from-Motion’
photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314.
[CrossRef]
63. Weinmann, M. Reconstruction and Analysis of 3D Scenes; Springer International Publishing: Cham, Switzerland,
2016; ISBN 978-3-319-29244-1.
64. Remondino, F.; Spera, M.G.; Nocerino, E.; Menna, F.; Nex, F. State of the art in high density image matching.
Photogramm. Rec. 2014, 29, 144–166. [CrossRef]
65. Pix4D Terrestrial 3D Mapping Using Fisheye and Perspective Sensors-Support. Available
online: https://support.pix4d.com/hc/en-us/articles/204220385-Scientific-White-Paper-Terrestrial-3D-
Mapping-Using-Fisheye-and-Perspective-Sensors#gsc.tab=0 (accessed on 18 April 2016).
66. Bentley 3D City Geographic Information System—A Major Step Towards Sustainable Infrastructure. Available
online: https://www.bentley.com/en/products/product-line/reality-modeling-software/contextcapture
(accessed on 12 February 2020).
67. Agisoft LLC Agisoft Photoscan. Available online: http://www.agisoft.com/pdf/photoscan_presentation.pdf
(accessed on 12 February 2020).
68. nFrames. Available online: http://nframes.com/ (accessed on 12 February 2020).
69. Changchang Wu VisualSFM. Available online: http://ccwu.me/vsfm/ (accessed on 12 February 2020).
70. Moulon, P. openMVG. Available online: http://imagine.enpc.fr/~moulonp/openMVG/ (accessed on
12 February 2020).
71. Autodesk. Autodesk: 123D catch. Available online: http://www.123dapp.com/catch (accessed on
12 February 2020).
72. PROFACTOR GmbH ReconstructMe. Available online: http://reconstructme.net/ (accessed on
12 February 2020).
73. Eos Systems Inc Photomodeler. Available online: http://www.photomodeler.com/index.html (accessed on
12 February 2020).
74. Capturing Reality s.r.o RealityCapture. Available online: https://www.capturingreality.com (accessed on
12 February 2020).
75. Bassier, M.; Vergauwen, M.; Poux, F. Point Cloud vs. Mesh Features for Building Interior Classification.
Remote Sens. 2020, 12, 2224. [CrossRef]
76. Garrett, J.J. The elements of user experience: User-centered design for the Web and beyond. Choice Rev. Online
2011, 49.
77. Gasson, S. Human-Centered Vs. User-Centered Approaches to Information System Design. J. Inf. Technol.
Theory Appl. 2003, 5, 29–46.
78. Reunanen, M.; Díaz, L.; Horttana, T. A holistic user-centered approach to immersive digital cultural heritage
installations: Case Vrouw Maria. J. Comput. Cult. Herit. 2015, 7, 1–16. [CrossRef]
79. Barbieri, L.; Bruno, F.; Muzzupappa, M. User-centered design of a virtual reality exhibit for archaeological
museums. Int. J. Interact. Des. Manuf. 2018, 12, 561–571. [CrossRef]
80. Ibrahim, N.; Ali, N.M. A conceptual framework for designing virtual heritage environment for cultural
learning. J. Comput. Cult. Herit. 2018, 11, 1–27. [CrossRef]
81. Cipolla Ficarra, F.V.; Cipolla Ficarra, M. Multimedia, user-centered design and tourism: Simplicity, originality
and universality. Stud. Comput. Intell. 2008, 142, 461–470.
82. Loizides, F.; El Kater, A.; Terlikas, C.; Lanitis, A.; Michael, D. Presenting Cypriot Cultural Heritage in Virtual
Reality: A User Evaluation. In LNCS; Springer: Cham, Switzerland, 2014; Volume 8740, pp. 572–579.
83. Sukaviriya, N.; Sinha, V.; Ramachandra, T.; Mani, S.; Stolze, M. User-Centered Design and Business Process
Modeling: Cross Road in Rapid Prototyping Tools. In Human-Computer Interaction–INTERACT 2007.
INTERACT 2007. Lecture Notes in Computer Science; Baranauskas, C., Palanque, P., Abascal, J., Barbosa, S.D.J.,
Eds.; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4662 LNCS, pp. 165–178. [CrossRef]
84. Kinzie, M.B.; Cohn, W.F.; Julian, M.F.; Knaus, W.A. A user-centered model for Web site design: Needs
assessment, user interface design, and rapid prototyping. J. Am. Med. Inform. Assoc. 2002, 9, 320–330.
[CrossRef] [PubMed]
85. Jung, T.; tom Dieck, M.C.; Lee, H.; Chung, N. Effects of virtual reality and augmented reality on visitor
experiences in museum. In Information and Communication Technologies in Tourism 2016; Springer: Cham,
Switzerland, 2016; pp. 621–635.
86. Treffer, M. Un Système d’ Information Géographique 3D Pour le Château de Jehay; Définition des besoins et
modélisation conceptuelle sur base de CityGML. Master’s Thesis, Université de Liège, Liège, Belgique, 2017.
87. Besl, P.J.; McKay, N.D. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992,
14, 239–256. [CrossRef]
88. AliceVision Meshroom-3D Reconstruction Software. Available online: https://alicevision.org/#meshroom
(accessed on 30 July 2019).
89. IGN France MicMac. Available online: https://micmac.ensg.eu/index.php/Accueil (accessed on 30 July 2019).
90. Gonzalez-Aguilera, D.; López-Fernández, L.; Rodriguez-Gonzalvez, P.; Hernandez-Lopez, D.; Guerrero, D.;
Remondino, F.; Menna, F.; Nocerino, E.; Toschi, I.; Ballabeni, A.; et al. GRAPHOS-open-source software for
photogrammetric applications. Photogramm. Rec. 2018, 33, 11–29. [CrossRef]
91. Garland, M. Quadric-Based Polygonal Surface Simplification. Master’s Thesis, Carnegie Mellon University,
Pittsburgh, PA, USA, 1999.
92. Unity. Unity-Manual: Draw Call Batching; Unity Technologies: San Francisco, CA, USA, 2017.
93. Webster, N.L. High poly to low poly workflows for real-time rendering. J. Vis. Commun. Med. 2017, 40, 40–47.
[CrossRef]
94. Ryan, J. Photogrammetry for 3D Content Development in Serious Games and Simulations; MODSIM: San Francisco,
CA, USA, 2019; pp. 1–9.
95. Cozzi, P.; Riccio, C. (Eds.) OpenGL Insights, 1st ed.; A K Peters/CRC Press: Boca Raton, FL, USA,
2012; ISBN 9781439893777.
96. NVIDIA Corporation SDK White Paper Improve Batching Using Texture Atlases; SDK White Paper; NVIDIA:
Santa Clara, CA, USA, 2004.
97. Gregory, J. Game Engine Architecture; A K Peters/CRC Press: Boca Raton, FL, USA, 2014; Volume 47, ISBN
9781466560062.
98. Kajiya, J.T. The Rendering Equation. Acm Siggraph Comput. Graph. 1986, 20, 143–150. [CrossRef]
99. Nirenstein, S.; Blake, E.; Gain, J. Exact From-Region Visibility Culling. In Proceedings of the Thirteenth
Eurographics Workshop on Rendering, Pisa, Italy, 26–28 June 2002.
100. LaViola, J.J., Jr. A discussion of cybersickness in virtual environments. ACM Sigchi Bull. 2000, 32, 47–56.
[CrossRef]
101. Templeman, J.N.; Denbrook, P.S.; Sibert, L.E. Virtual locomotion: Walking in place through virtual
environments. Presence: Teleoper. Virtual Environ. 1999, 8, 598–617. [CrossRef]
102. Tregillus, S.; Folmer, E. VR-STEP: Walking-in-Place using Inertial Sensing for Hands Free Navigation in
Mobile VR Environments. In Proceedings of the 2016 CHI Conference on Human Factors in Computing
Systems—CHI ’16, San Jose, CA, USA, 7–12 May 2016; pp. 1250–1255.
103. Mccauley, M.E.; Sharkey, T.J. Cybersickness: Perception of Self-Motion in Virtual Environment. Presence
Teleoper. Virtual Environ. 1992, 1, 311–318. [CrossRef]
104. Parger, M.; Mueller, J.H.; Schmalstieg, D.; Steinberger, M. Human upper-body inverse kinematics for
increased embodiment in consumer-grade virtual reality. In Proceedings of the 24th ACM Symposium on
Virtual Reality Software and Technology, Tokyo, Japan, 28 November 2018; pp. 1–10.
105. Aristidou, A.; Lasenby, J. FABRIK: A fast, iterative solver for the Inverse Kinematics problem. Graph. Model.
2011, 73, 243–260. [CrossRef]
106. Maciel, A.; De, S. Extending FABRIK with model constraints. Graph. Model. Wil. Online Libr. 2015, 73,
243–260.
107. Schyns, O. Locomotion and Interactions in Virtual Reality; Liège University: Liège, Belgique, 2018.
108. Poux, F.; Neuville, R.; Nys, G.-A.; Billen, R. 3D Point Cloud Semantic Modelling: Integrated Framework for
Indoor Spaces and Furniture. Remote Sens. 2018, 10, 1412. [CrossRef]

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
