
Journal Pre-proof

A toolbox for the rapid prototyping of crime scene reconstructions in virtual reality

Till Sieberth (Methodology, Software, Validation, Investigation, Writing – original draft, Visualization), Akos Dobay (Conceptualization, Data curation, Writing – review and editing, Visualization), Raffael Affolter (Software, Validation, Data curation, Writing – review and editing, Visualization), Lars Ebert (Conceptualization, Methodology, Validation, Writing – review and editing, Visualization, Supervision)

PII: S0379-0738(19)30418-9
DOI: https://doi.org/10.1016/j.forsciint.2019.110006
Reference: FSI 110006

To appear in: Forensic Science International

Received Date: 4 June 2019
Revised Date: 6 October 2019
Accepted Date: 21 October 2019

Please cite this article as: Sieberth T, Dobay A, Affolter R, Ebert L, A toolbox for the rapid
prototyping of crime scene reconstructions in virtual reality, Forensic Science International
(2019), doi: https://doi.org/10.1016/j.forsciint.2019.110006
This is a PDF file of an article that has undergone enhancements after acceptance, such as
the addition of a cover page and metadata, and formatting for readability, but it is not yet the
definitive version of record. This version will undergo additional copyediting, typesetting and
review before it is published in its final form, but we are providing this version to give early
visibility of the article. Please note that, during the production process, errors may be
discovered which could affect the content, and all legal disclaimers that apply to the journal
pertain.

© 2019 Published by Elsevier.


A toolbox for the rapid prototyping of crime
scene reconstructions in virtual reality
Till Sieberth1,2, Akos Dobay1, Raffael Affolter1, Lars Ebert1,2

(till.sieberth, akos.dobay, raffael.affolter, lars.ebert)@irm.uzh.ch

1 Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057 Zurich, Switzerland
2 3D Zentrum Zurich, University of Zurich, Winterthurerstrasse 190/52, CH-8057 Zurich, Switzerland

Highlights

 The use of virtual reality in forensic scenes requires a set of tools.
 Interaction and modification in virtual crime scenes allow for discussion of the scenes.
 Already a small number of tools in virtual scenes supports a variety of forensic applications.
Abstract

Virtual reality is recently finding its way into forensic work. The required 3D data are nowadays a standard dataset available in many cases, from homicides to traffic collisions, and include not only data from the scene but also of weaponry and involved persons. Current investigations use these 3D data to replicate the incident and as a basis for discussion among forensic personnel. However, modifying the scene on a 2D viewport is often cumbersome due to the loss of the third dimension, and a 3D operator is often required to perform the modifications on the scene. Virtual reality might improve this step through its ease of use and by visualising the third dimension. This publication presents a variety of tools that can be used in forensic investigations. In addition to the tools, examples of their forensic use are presented, showing that already a small number of tools supports a variety of forensic applications.

Keywords:

Virtual Reality, 3D Reconstruction, Crime Scene Investigation


1. Introduction
Currently, a vast number of different image modalities are used to document forensic cases.
Medical imaging such as post-mortem computed tomography (PMCT) and post-mortem
magnetic resonance imaging (PMMR) generate 3D volume datasets of the deceased [1–7].
Structured light scanners and photogrammetry document superficial injuries on the
deceased as well as on living persons [8–11]. Crime scenes, weapons and vehicles are
documented in 3D by using structured light scanners, photogrammetry and laser scanning
[12–15]. By integrating these different 3D data together with conventional evidence, forensic questions regarding the sequence of events, gunshot trajectories and other forensic evidence can be answered [16–19].

The general workflow for conducting a 3D reconstruction consists of four main steps. First,
the 3D data must be generated, which means employing the correct modality to document a
certain object, location or medical finding. Second, all data must be prepared for the
reconstruction. In this step, polygon models of relevant anatomical structures are extracted
from the volume datasets of medical scans, photogrammetric models and their textures are
calculated from two-dimensional digital photos, and characters are prepared for animation by
rigging them to a kinematic bone system. If required, the polygon count of the resulting
models is reduced or parts of a scene are rebuilt with existing 3D data based on low-polygon
primitives. The purpose of this step is to convert all the data into a polygonal mesh and
import them into the 3D animation software. In the third step, the person performing the
reconstruction confers with field experts to decipher the data in the context of the forensic
question that must be answered. Because of the complexity and the amount of data that can be available, this step can take a considerable amount of time. Subsequently, when the traces fit together, the reconstruction is performed, and the visualizations are created - either as 2D renderings or as virtual reality (VR) scenes [20, 21]. Therefore, the final reconstruction can consist of multiple plausible scenarios that are dependent on the forensic question and the available traces, evidence and materials.
What makes these reconstructions time consuming and therefore expensive is the fact that while the data are in 3D, the display used for the reconstructions as well as the input devices (the computer mouse and keyboard) work in 2D. Due to a lack of information and input capabilities, moving several polygon models into the desired position can be tedious and can require constant adjustment of the viewport perspective and the 3D object.

VR is a technology that could speed up the process of reconstructing incident scenes. In 2014, we proposed a system to display crime scene reconstructions to state attorneys [20]. Since then, VR has been used routinely for incident reconstructions, virtual crime scene visits or even forensic medical examinations [22, 23]. Current-generation VR systems consist of two to three components. First, a VR headset displays a perspectively correct image to the user. Second, controllers, whose positions are also tracked by the system, allow the user to interact with the virtual environment. Finally, some systems employ external trackers or lighthouses as external points of reference [24, 25]. The combination of 3D virtual reality headsets in conjunction with tracked controllers offers more than just visualizing static crime scene reconstructions. In this article, we present a new application for using VR techniques to accelerate 3D incident reconstructions.

2. Methods
To perform a 3D reconstruction in VR, several steps are required. First, the recorded data
are prepared. In this step, all the data are converted to polygon meshes and cropped to the
required area of interest to avoid data clutter and subsequent lagging in the VR visualization.
For large datasets, this process can also include reducing the polygon count and finding a
balance between the model detail and the polygon load. This step is performed in dedicated
software, depending on the modality. After the data are prepared as needed, scene
integration in Unity (Version 2018.1.8f1 Personal, Unity Technologies, San Francisco, United
States) is performed, and the required reconstructions are implemented. A preliminary
reconstruction is performed in VR. Subsequently, the reconstructed scene is exported and transferred to other 3D software, such as CAD software, to perform fine adjustments, render more realistic images or fit the scene in a larger context.

2.1 Hardware and Software

For the head-mounted display (HMD), we use the HTC Vive (HTC Corporation, New Taipei City, Taiwan), which includes the HMD, external lighthouses for positional and rotational tracking and two controllers. Finger tracking is conducted using the Leap Motion tracker (Leap Motion, Inc., San Francisco, California, United States), which is attached to the front side of the HMD. The final scene is then visualized using Unity. Support for VR is integrated into Unity using the SteamVR asset (v1.2.3, Valve Corporation, Bellevue, Washington, United States). All required tools are implemented in Unity in C#, using the SteamVR library and the Leap Motion Software Development Kit (SDK) [26–28].

2.2 Functions and Tools

Inside the virtual environment, the user has a variety of different tools at their disposal. These tools allow moving around in the scene, manipulating elements within the scene (moving, rotating), measuring, placing indicators and trajectories, taking screenshots, replacing the controller with objects and scaling the user for a better overview or for increased detail. It is important to note that modifications of the polygon meshes or the texture content within the virtual environment are deliberately not implemented in order to maintain the integrity of the data. Different tools can be selected and used with the controller. The control scheme varies from tool to tool, and the button functionalities are displayed in the virtual environment. All the scripts developed for this project are available upon request.
2.2.1 Teleportation
One of the most important features is the teleportation function provided with the SteamVR asset. This feature allows the user to teleport instantly from one position to another, effectively allowing the user to cover large distances within a fraction of a second. Furthermore, it is possible to limit the area that a user can reach by limiting the teleportation area, and it is possible to define points of interest (POIs) by adding a teleportation marker that can be reached with the teleportation feature.
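
As an illustration of how such a POI marker could be added programmatically, the following minimal sketch assumes the TeleportPoint component and prefab that ship with the SteamVR Interaction System; the PoiSpawner class and its AddPoi method are hypothetical and not part of the published toolbox.

```csharp
using UnityEngine;
using Valve.VR.InteractionSystem; // SteamVR Interaction System, part of the SteamVR asset

// Hypothetical helper: spawns a teleportation marker (POI) at a given position.
public class PoiSpawner : MonoBehaviour
{
    public GameObject teleportPointPrefab; // assign SteamVR's TeleportPoint prefab in the Inspector

    public void AddPoi(Vector3 position, string label)
    {
        GameObject marker = Instantiate(teleportPointPrefab, position, Quaternion.identity);
        TeleportPoint point = marker.GetComponent<TeleportPoint>();
        if (point != null)
            point.title = label; // the name displayed above the marker in VR (assumed field)
    }
}
```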

2.2.2 Network Connectivity


Connecting virtual reality HMDs via network connections allows experts working in different locations to exchange knowledge and discuss scenarios in the simulated 3D environment [29].

2.2.3 Screenshot Tool


The first tool is a screenshot tool, which allows the user to take virtual photos of the current view in the VR environment. The screenshots are saved with dates and timestamps as digital files in the case folder. The native screenshot resolution is the resolution displayed by the VR HMD and can be adjusted to render higher quality images with larger pixel counts. However, a larger pixel count does not necessarily result in a better image because the texture on the 3D model and the polygon count are limiting parameters for the screenshot quality [22].
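
A minimal sketch of this principle using Unity's built-in ScreenCapture API is shown below; the class name, case-folder path and supersize default are illustrative assumptions, not the authors' released code.

```csharp
using System;
using System.IO;
using UnityEngine;

// Hypothetical sketch: save a timestamped screenshot into a case folder,
// with an optional supersize factor for higher pixel counts.
public class ScreenshotTool : MonoBehaviour
{
    public string caseFolder = "Case_Screenshots"; // assumed output folder
    public int superSize = 2; // 2 doubles the native mirror-view resolution

    public void TakeScreenshot()
    {
        Directory.CreateDirectory(caseFolder);
        string filename = Path.Combine(caseFolder,
            DateTime.Now.ToString("yyyy-MM-dd_HH-mm-ss") + ".png");
        ScreenCapture.CaptureScreenshot(filename, superSize);
    }
}
```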

2.2.4 Measure Tool

The measure tool allows the use of VR controllers to measure distances. A scale bar can be placed into the scene via dragging. Upon the release of the button, the scale bar is visualized, and the total length of the ruler is displayed. By placing multiple markers, it is possible to measure around curves as well (Fig 1).

Figure 1. Measurement tool measuring the diameter of an injury with intermediate points along the curvature of the round object surface and the straight-line distance between two tables. Both measurements are performed in meters.
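
The underlying computation can be illustrated with a short sketch: the total length is the sum of the straight segments between consecutive markers, which is what makes measuring around curves possible. The class and method names are hypothetical.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of the measuring principle: sum the segment lengths
// between consecutive markers set with the controller.
public class PolylineMeasure : MonoBehaviour
{
    private readonly List<Vector3> markers = new List<Vector3>();

    public void AddMarker(Vector3 controllerPosition)
    {
        markers.Add(controllerPosition);
    }

    public float TotalLengthMeters()
    {
        float length = 0f;
        for (int i = 1; i < markers.Count; i++)
            length += Vector3.Distance(markers[i - 1], markers[i]); // Unity units correspond to meters here
        return length;
    }
}
```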

2.2.5 Trajectory Tool


The trajectory tool is similar to the measure tool. Here, however, a single click allows the user to set a marking sphere at the position of the controller, while clicking and dragging draws a trajectory (Fig 2). This straight-line trajectory drawn with the controller does not yet include a cone that takes into account the possible deviation in the firing trajectory.

Figure 2. Trajectory tool. The red line at the bottom is the scanned probe that marks a bullet in the wall and the presumed bullet trajectory. The green cylinder (encircled) shows the manually extended trajectory, which does not consider the possible deviation in the firing trajectory.
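
A sketch of the extension step: the short segment dragged with the controller defines a direction, which is then extended into a longer straight line. As noted above, no deviation cone is modeled; the names below are illustrative.

```csharp
using UnityEngine;

// Illustrative sketch: extend the segment dragged with the controller
// into a longer straight-line trajectory, drawn with a LineRenderer.
public static class TrajectoryExtension
{
    public static void Extend(Vector3 start, Vector3 end, float length, LineRenderer line)
    {
        Vector3 direction = (end - start).normalized;
        line.positionCount = 2;
        line.SetPosition(0, start);
        line.SetPosition(1, start + direction * length); // extended trajectory, no deviation cone
    }
}
```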

2.2.6 Moving and scaling the user


During scene reconstructions, room-scale visualizations require the user to move freely around in space without the limitation of being fixed to the floor of the scene. Scaling the user might be necessary whenever there are traces in inaccessible areas such as the floor or ceiling. Downscaling also helps with visualizing small objects or fine textures, which can usually be challenging in VR due to the relatively low resolution of the display. Furthermore, downscaling helps with fine controller-based movements as it suppresses hand-jitter effects. Upscaling of the user might be required to obtain an overview of a larger scene or to move faster over longer distances.

In this mode, two types of movement are possible: a combination of walking and teleporting as well as a fly-through movement, in which the trackpad on the VR controller is used to move in the direction of the current gaze, backwards or sideways (Fig 3).

Figure 3. Scaling and movement. The image on the left shows the scaled user, and the red figure appears proportionally large. The image on the right shows the user hovering over the scene.
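
One common way to implement user scaling in Unity, shown here as a hedged sketch, is to scale the root transform of the camera rig and compensate so that the view pivots around the head; the rig and head references and the class name are assumptions, not the toolbox code.

```csharp
using UnityEngine;

// Hypothetical sketch: scale the user by scaling the camera rig root.
// Assumes `cameraRig` is the tracking-space root (e.g. SteamVR's [CameraRig])
// and `head` is the HMD camera transform underneath it.
public class UserScaler : MonoBehaviour
{
    public Transform cameraRig;
    public Transform head;

    public void SetScale(float factor)
    {
        Vector3 headBefore = head.position;      // remember where the eyes were
        cameraRig.localScale = Vector3.one * factor;
        cameraRig.position += headBefore - head.position; // pivot around the head so the view does not jump
    }
}
```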

2.2.7 Interactions with objects

One of the key functionalities for reconstruction is interacting with objects in the virtual environment. After all objects have been loaded into the scene using the Unity3D user interface, they can be given the “interactive object” property. This property ensures that objects such as the floor or the walls cannot be altered. Objects that are interactive can be grabbed using the controller. Grabbed objects can be freely moved and rotated. To avoid accidental grabs of objects, it is also possible to freeze objects in their position, which requires the user to unfreeze the object first before it can be moved or adjusted. Interactive, unfrozen objects are highlighted when the controller touches them. To allow the user to perform matching operations, the transparency of objects with single-texture maps can also be adjusted (Fig 4).

Figure 4. Biped turned sideways by interacting with the controller.
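
The “interactive object” property, freezing and transparency adjustment could be modeled roughly as follows; this is an illustrative sketch, and the component assumes a single material with a transparency-capable shader.

```csharp
using UnityEngine;

// Illustrative sketch: a frozen object must be unfrozen before it can be
// grabbed, and its transparency can be adjusted for matching operations.
public class InteractiveObject : MonoBehaviour
{
    public bool frozen = false;
    private Renderer rend;

    void Awake() { rend = GetComponent<Renderer>(); }

    public bool CanBeGrabbed() { return !frozen; }

    public void SetTransparency(float alpha) // 0 = invisible, 1 = opaque
    {
        // Assumes a single material whose shader exposes a color with alpha.
        Color c = rend.material.color;
        c.a = Mathf.Clamp01(alpha);
        rend.material.color = c;
    }
}
```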


Jo

2.2.8 Controller model replacement and object-specific scripting


Depending on the person visiting the scene, it might be necessary to replace the controller model with objects or tools such as weapons, forensic rulers or pointers. These additional objects and tools can be connected with object-specific interactions. For example, when a pistol is attached to the controller, a push of the trigger button simulates the weapon discharging. These attachable objects and their respective actions can vary greatly and require case-dependent adjustments and scripting. Using 3D printing, the haptics of the object can be simulated (Fig 5).

Figure 5. Controller attachment made visible in reality.
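
As an example of such object-specific scripting, the sketch below wires a trigger press to a simulated discharge using the legacy SteamVR 1.x controller API (matching the v1.2.3 asset named in Section 2.1); the class and the audio-based discharge are illustrative assumptions.

```csharp
using UnityEngine;

// Hypothetical sketch against the legacy SteamVR 1.x API: a pistol model
// attached to the controller "fires" when the trigger button is pressed.
[RequireComponent(typeof(SteamVR_TrackedObject))]
public class PistolAttachment : MonoBehaviour
{
    public AudioSource gunshotSound; // assumed placeholder for the discharge effect
    private SteamVR_TrackedObject trackedObj;

    void Awake() { trackedObj = GetComponent<SteamVR_TrackedObject>(); }

    void Update()
    {
        var device = SteamVR_Controller.Input((int)trackedObj.index);
        if (device.GetPressDown(SteamVR_Controller.ButtonMask.Trigger))
        {
            gunshotSound.Play(); // a real tool might also spawn a trajectory line
        }
    }
}
```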

2.2.9 Recording the tracking information


Case reconstructions are only performed by professional forensic staff, but in some cases the 3D scene can also be used in questioning a witness [23]. In such cases, it is necessary to permanently document each movement, including movements of the HMD and controllers, controller button pushes and interactions with objects. All these movements can be tracked relative to the scene, allowing subsequent revisualizations of the performed actions and movements, which can be used in analyses or in witness statements [23] (Fig 6). In cases of 3D VR reconstruction, all alterations to the scene are documented.

Figure 6. The recorded motion path of a user. The headset and controllers are visible and can move along the original motion path, which is visualized by the red (HMD), green (left controller) and blue (right controller) paths.
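
The recording principle can be sketched as sampling the poses of the HMD and both controllers every frame; replay then means stepping through the stored samples. The structure below is a simplified assumption, not the toolbox code (button events and object interactions are omitted).

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: record HMD and controller poses each frame for later replay.
public class PoseRecorder : MonoBehaviour
{
    public Transform hmd, leftController, rightController;

    public struct PoseSample
    {
        public float time;
        public Vector3 hmdPos;   public Quaternion hmdRot;
        public Vector3 leftPos;  public Quaternion leftRot;
        public Vector3 rightPos; public Quaternion rightRot;
    }

    private readonly List<PoseSample> samples = new List<PoseSample>();

    void Update()
    {
        samples.Add(new PoseSample
        {
            time = Time.time,
            hmdPos = hmd.position,               hmdRot = hmd.rotation,
            leftPos = leftController.position,   leftRot = leftController.rotation,
            rightPos = rightController.position, rightRot = rightController.rotation
        });
    }
}
```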

2.2.10 Exporting
An important step is to save the adjustments and reconstructions of the scene and to export the scene to the actual reconstruction software. For this purpose, relevant single objects can be exported from the Unity3D hierarchy in .obj format, including their orientation and rotation attributes.
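
Unity has no built-in .obj exporter, so a minimal writer along the following lines is typically used; this sketch bakes each object's transform into the vertex positions and is an illustrative assumption (a production exporter would also handle normals, UVs, materials and the left- to right-handed coordinate flip).

```csharp
using System.IO;
using System.Text;
using UnityEngine;

// Hypothetical minimal sketch: export a single object's mesh to Wavefront .obj,
// applying the object's world translation, rotation and scale to the vertices.
public static class ObjExporter
{
    public static void Export(MeshFilter mf, string path)
    {
        Mesh mesh = mf.sharedMesh;
        var sb = new StringBuilder();
        foreach (Vector3 v in mesh.vertices)
        {
            Vector3 w = mf.transform.TransformPoint(v); // bake position/rotation/scale
            sb.AppendLine($"v {w.x} {w.y} {w.z}");
        }
        int[] tris = mesh.triangles;
        for (int i = 0; i < tris.Length; i += 3)
        {
            // .obj face indices are 1-based
            sb.AppendLine($"f {tris[i] + 1} {tris[i + 1] + 1} {tris[i + 2] + 1}");
        }
        File.WriteAllText(path, sb.ToString());
    }
}
```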

2.3 Hand and Finger Tracking


VR controllers are convenient tools. However, some users might have difficulties with the computer game-style interaction and therefore might not be able to learn how to perform the required interactions with the controller in a reasonable amount of time. For this purpose, finger tracking is a useful feature that requires dedicated hardware, which, in this work, is a Leap Motion tracker. This tool allows fingers and hands to perform actions within the field of view (Fig 7).

Figure 7. Finger tracking using the Leap controller on the HMD. Open hands and pointing index fingers can easily be differentiated, which increases the usability.
Grasping is a feature that allows the user to grasp objects and move them around before opening the hand and releasing the object, which is similar to the interaction performed with the controller (see 2.2.7).

Pointing at objects in VR is difficult for external viewers to see. For this purpose, the Leap Motion tracker was enabled to recognize a gesture in which the index finger is pointing while the other fingers are retracted. When this gesture is recognized, the direction in which the index finger points is represented by a “laser” emerging from the tip of the finger.
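
A hedged sketch of this gesture detection using the Leap Motion Unity SDK: the index finger must be extended while middle, ring and pinky are retracted, and a LineRenderer then draws the "laser" from the fingertip. The component wiring below is assumed, not taken from the paper.

```csharp
using Leap;
using Leap.Unity;
using UnityEngine;

// Hypothetical sketch: detect an index-pointing gesture and draw a laser ray.
// Assumes a LeapProvider in the scene and a LineRenderer assigned to `laser`.
public class PointingLaser : MonoBehaviour
{
    public LeapProvider provider;
    public LineRenderer laser;
    public float laserLength = 5f;

    void Update()
    {
        laser.enabled = false;
        foreach (Hand hand in provider.CurrentFrame.Hands)
        {
            Finger index = hand.Fingers[(int)Finger.FingerType.TYPE_INDEX];
            // Pointing gesture: index extended, the other fingers retracted.
            bool pointing = index.IsExtended
                && !hand.Fingers[(int)Finger.FingerType.TYPE_MIDDLE].IsExtended
                && !hand.Fingers[(int)Finger.FingerType.TYPE_RING].IsExtended
                && !hand.Fingers[(int)Finger.FingerType.TYPE_PINKY].IsExtended;
            if (pointing)
            {
                Vector3 tip = index.TipPosition.ToVector3();
                Vector3 dir = index.Direction.ToVector3();
                laser.SetPosition(0, tip);
                laser.SetPosition(1, tip + dir * laserLength); // the visible "laser"
                laser.enabled = true;
            }
        }
    }
}
```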

3. Discussion

In this article, we present a system that allows rapid prototyping of incident scene reconstructions in virtual reality.

The proposed system can incorporate a wide variety of scanning modalities, including laser scans of crime scenes, surface scans of objects and medical scans such as CT and MRI. The functionality of the system enables the user to quickly understand the data and explore the scene in the context of an intuitively formulated forensic question. Because of the natural interaction with the data compared to working with animation software such as 3ds Max, less time is lost during this phase of the reconstruction, thus potentially reducing costs. The VR system can be easily integrated into the reconstruction workflow, provided there is sufficient space available. Because the system is based on off-the-shelf gaming hardware and software, the costs and entry threshold are relatively low. To date, the system has been used in four real cases that involved matching shoeprints and other injury-inflicting tools.

Thus far, two applications for VR have been developed within the forensic holodeck project. State attorneys can visualize crime scene reconstructions and visit virtual crime scenes in which eyewitness accounts are provided in VR. During the virtual crime scene visit, data such as the witness's position, the rotation of the head and hands, audio and gunshot trajectories can be recorded. These data can be incorporated into the VR reconstruction environment as well, allowing the presented system to bridge the pure VR visualization and the crime scene visit and to show discrepancies between the witness's statement and the forensic reconstruction.
-p
However, there are some limitations of the presented system. It is important to carefully evaluate for each case whether the use of VR is indicated or whether traditional methods are more suitable. The use of VR is currently limited to evaluation and discussion purposes and can aid the process of answering forensic questions. The VR reconstruction is limited to 3D data only, and routine reconstruction tasks such as height estimations or reconstructions based on camera footage cannot be performed. Using finger trackers to reposition objects in the scene is possible but much less accurate than using the provided controllers. Currently, the data must be manually imported and exported between the VR environment and the animation software that is used for fine-tuning (3ds Max).


In the future, this issue could be solved by directly connecting the VR environment to 3ds Max via a plugin interface, thus allowing seamless switching between the desktop reconstruction and the VR reconstruction. A large variety of different modalities and data, ranging from laser scanning point clouds and polygon meshes from surface scanners to volumetric data from medical scans, are used for reconstructions. However, Unity, as gaming software, is limited to visualizing relatively low-resolution polygon meshes. This requires an additional processing step in which all the data are converted to polygon meshes with a reduced polygon count. Switching to a platform other than Unity might enable us to visualize the scanned data directly, further increasing the reconstruction speed. Future studies should investigate the amount of time that could be saved by using VR for reconstructions with respect to the type of forensic question.

Conflicts of Interest
There are no conflicts of interest.

Funding

No funding was received for this research.

Ethical Approval
No ethical approval was required for this article.

Author Credits Statements

Conceptualization (ideas; formulation or evolution of overarching research goals and aims): Lars Ebert, Akos Dobay
Methodology (development or design of methodology; creation of models): Till Sieberth, Lars Ebert
Software (programming, software development; designing computer programs; implementation of the computer code and supporting algorithms; testing of existing code components): Till Sieberth, Raffael Affolter
Validation (verification, whether as a part of the activity or separate, of the overall replication/reproducibility of results/experiments and other research outputs): Lars Ebert, Till Sieberth, Raffael Affolter
Investigation (conducting a research and investigation process, specifically performing the experiments, or data/evidence collection): Till Sieberth
Data Curation (management activities to annotate (produce metadata), scrub data and maintain research data, including software code where it is necessary for interpreting the data itself, for initial use and later reuse): Akos Dobay, Raffael Affolter
Writing – Original Draft (preparation, creation and/or presentation of the published work, specifically writing the initial draft, including substantive translation): Till Sieberth
Writing – Review & Editing (preparation, creation and/or presentation of the published work by those from the original research group, specifically critical review, commentary or revision, including pre- or post-publication stages): Lars Ebert, Akos Dobay, Raffael Affolter
Visualization (preparation, creation and/or presentation of the published work, specifically visualization/data presentation): Till Sieberth, Lars Ebert, Raffael Affolter, Akos Dobay
Supervision (oversight and leadership responsibility for the research activity planning and execution, including mentorship external to the core team): Lars Ebert

Acknowledgements
The authors express their gratitude to Emma Louise Kessler, MD for her generous donation
to the Zurich Institute of Forensic Medicine, University of Zurich, Switzerland. We also thank
Nicolas Krismer for his help with the 3D printed controller attachment.

References
1. Franckenberg S, Flach PM, Gascho D, Thali MJ, Ross SG Postmortem computed
tomography-angiography (PMCTA) in decomposed bodies - A feasibility study. J
Forensic Radiol Imaging 2015; 3:226–234.
2. Filograna L, Thali M Post-mortem computed tomography (PMCT) imaging of the
lungs: pitfalls and potential misdiagnosis. 2013; 1–13.
3. Laberke PJ, Ampanozi G, Ruder TD, Gascho D, Thali MJ, Fornaro J Fast three-
dimensional whole-body post-mortem magnetic resonance angiography. J Forensic
Radiol Imaging 2017; 10:41–46.
4. Ampanozi G, Schwendener N, Krauskopf A, Thali MJ, Bartsch C Incidental occult
gunshot wound detected by postmortem computed tomography. Forensic Sci Med
Pathol 2013; 9:68–72.
5. Thali MJ, Ross S, Oesterhelweg L, Grabherr S, Buck U, Naether S, et al Virtopsy.
Working on the future of forensic medicine. Rechtsmedizin 2007; 17:7–12.
6. Schweitzer W, Röhrich E, Schaepman M, Thali MJ, Ebert L Aspects of 3D surface
scanner performance for post-mortem skin documentation in forensic medicine using
rigid benchmark objects. J Forensic Radiol Imaging 2013; 1:167–175.

7. Ruder TD, Thali MJ, Hatch GM Essentials of forensic post-mortem MR imaging in
adults. Br J Radiol 2014; 87:.
8. Michienzi R, Meier S, Ebert LC, Martinez RM, Sieberth T Comparison of forensic
photo-documentation to a photogrammetric solution using the multi-camera system
“Botscan.” Forensic Sci Int 2018; 288:46–52.
9. Breitbeck R, Ptacek W, Ebert L, Furst M, Kronreif G Virtobot - A Robot System for
Optical 3D Scanning in Forensic Medicine. 2014; 84–91.
10. Kottner S, Ebert LC, Ampanozi G, Braun M, Thali MJ, Gascho D VirtoScan - a mobile,
low-cost photogrammetry setup for fast post-mortem 3D full-body documentations in
x-ray computed tomography and autopsy suites. Forensic Sci Med Pathol 2017; 13:34–43.
11. Leipner A, Baumeister R, Thali MJ, Braun M, Dobler E, Ebert LC Multi-camera system
for 3D forensic documentation. Forensic Sci Int 2016; 261:123–128.
12. Buck U, Albertini N, Naether S, Thali MJ 3D documentation of footwear impressions
and tyre tracks in snow with high resolution optical surface scanning. Forensic Sci Int
2007; 171:157–164.
13. Franckenberg S, Binder T, Bolliger S, Thali MJ, Ross SG Just Scan It! - Weapon
Reconstruction in Computed Tomography on Historical and Current Swiss Military
Guns. Am J Forensic Med Pathol 2016; 37:214–217.


14. Leipner A, Dobler E, Braun M, Sieberth T, Ebert L Simulation of mirror surfaces for
virtual estimation of visibility lines for 3D motor vehicle collision reconstruction.
Forensic Sci Int 2017; 279:106–111.
15. Buck U, Naether S, Braun M, Bolliger S, Friederich H, Jackowski C, et al Application
of 3D documentation and geometric reconstruction methods in traffic accident
analysis: With high resolution surface scanning, radiological MSCT/MRI scanning and
real data based animation. Forensic Sci Int 2007; 170:20–28.
16. Brüschweiler W, Braun M, Dirnhofer R, Thali MJ Analysis of patterned injuries and
injury-causing instruments with forensic 3D/CAD supported photogrammetry (FPHG):
An instruction manual for the documentation process. Forensic Sci Int 2003; 132:130–138.
17. Thali MJ, Braun M, Markwalder TH, Brueschweiler W, Zollinger U, Malik NJ, et al Bite
mark documentation and analysis: The forensic 3D/CAD supported photogrammetry
approach. Forensic Sci Int 2003; 135:115–121.
18. Buck U, Kneubuehl B, Näther S, Albertini N, Schmidt L, Thali M 3D bloodstain pattern
analysis: Ballistic reconstruction of the trajectories of blood drops and determination of
the centres of origin of the bloodstains. Forensic Sci Int 2011; 206:22–28.
19. Thali MJ, Braun M, Brüschweiler W, Dirnhofer R Matching tire tracks on the head
using forensic photogrammetry. Forensic Sci Int 2000; 113:281–287.

20. Ebert LC, Nguyen TT, Breitbeck R, Braun M, Thali MJ, Ross S The forensic holodeck:
an immersive display for forensic crime scene reconstructions. Forensic Sci Med
Pathol 2014; 10:623–626.
21. Thali MJ, Braun M, Buck U, Aghayev E, Jackowski C, Vock P, et al VIRTOPSY—
Scientific Documentation, Reconstruction and Animation in Forensic: Individual and
Real 3D Data Based Geo-Metric Approach Including Optical Body/Object Surface and
Radiological CT/MRI Scanning. J Forensic Sci 2005; 50:1–15.
22. Koller S, Ebert LC, Martinez RM, Sieberth T Using virtual reality for forensic
examinations of injuries. Forensic Sci Int 2019; 295:30–35.
23. Sieberth T, Dobay A, Affolter R, Ebert LC Applying virtual reality in forensics – a
virtual scene walkthrough. Forensic Sci Med Pathol 2019; 15:41–47.
24. HTC Corporation Vive | Discover Virtual Reality. 2017
25. Oculus VR LLC Oculus. 2017
26. Valve Corporation SteamVR Plugin. 2019.
https://assetstore.unity.com/packages/tools/integration/steamvr-plugin-32647.
Accessed 11 Apr 2019.
27. Unity Technologies Unity. 2019. https://unity.com/. Accessed 11 Apr 2019.

28. HTC Corporation VIVETM | Discover Virtual Reality Beyond Imagination. 2019.
https://www.vive.com/eu/. Accessed 11 Apr 2019.
29. Kersten TP, Büyüksalih G, Tschirschwitz F, Kan T, Deggim S, Kaya Y, et al The
Selimiye Mosque of Edirne, Turkey - An immersive and interactive virtual reality
experience using HTC Vive. Int Arch Photogramm Remote Sens Spat Inf Sci - ISPRS
Arch 2017; 42:403–409.
