
sensors

Article
Planetary-Scale Geospatial Open Platform Based on
the Unity3D Environment
Ahyun Lee *, Yoon-Seop Chang and Insung Jang
City and Geospatial ICT Research Section, Electronics and Telecommunications Research Institute (ETRI),
218 Gajeong-ro, Yuseong-gu, Daejeon 34129, Korea; ychang76@etri.re.kr (Y.-S.C.); e4dol2@etri.re.kr (I.J.)
* Correspondence: ahyun@etri.re.kr; Tel.: +82-420-860-1540

Received: 30 September 2020; Accepted: 19 October 2020; Published: 21 October 2020

Abstract: Digital twin technology based on building a virtual digital city similar to a real one
enables the simulation of urban phenomena or the design of a city. A geospatial platform is an
essential supporting component of digital twin cities. In this study, we propose a planetary-scale
geospatial open platform that can be used easily in the most widely used game engine environment.
The proposed platform can visualize large-capacity geospatial data in real time because it organizes
and manages various types of data based on quadtree tiles. The proposed rendering tile decision
method provides constant geospatial data visualization according to the camera controls of the
user. The implemented platform is based on Unity3D, and therefore, one can use it easily by
importing the proposed asset library. The proposed geospatial platform is available on the Asset
Store. We believe that the proposed platform can meet the needs of various three-dimensional (3-D)
geospatial applications.

Keywords: geographic information systems; terrain mapping; 3-D map

1. Introduction
A geospatial platform is the first step toward understanding the real world as a digital world.
A digital twin is a digital model of a physical entity [1,2]. By creating digital twins of objects, one can virtually simulate their behavior and predict simulation results [3]. Digital twins are attracting attention as a technology that can be used to address various industrial and social problems, with notable applications in manufacturing. A three-dimensional (3-D) geospatial platform is an essential part of a digital twin city.
It can digitally organize and visualize terrains, buildings, and other structures [4,5]. Virtual Singapore [6]
is a representative example of a simulation of a real city using digital geospatial data. This simulated
city can be used to study various urban phenomena and urban design scenarios.
Web-based geospatial platforms offer excellent user accessibility and convenience. We can use
Google Earth to explore anywhere in the world with a web browser [7,8]. Cesium allows users to
upload their own 3-D geospatial data to create and expand 3-D city models [9]. These web-based
platforms only require a web browser that supports web standards and WebGL; they do not require
the installation of additional libraries or programs [10,11]. However, they are constrained by the web browser: users cannot add functionality that the platform does not already support, and the browser execution environment imposes performance limits. On the
other hand, standalone geospatial platforms make it possible to use high-quality data and functions in
a high-performance computing environment, although user accessibility is limited [12–14]. The game
solution of Google Maps platform can use Google’s geospatial data based on the real world to build
a 3-D environment model to suit a user’s purposes [15]. Figure 1a shows a city model created by
importing Google geospatial data into a blank project. Figure 1b shows a city model that has been
modified and augmented by the user. The ArcGIS City Engine [16], a 3-D modeling software package that allows a user to design a city, is the software most widely used to create large-scale responsive and immersive urban environments based on geospatial data.

Figure 1. Examples of city models developed using the game solution of Google Maps platform [15]: (a) initial geospatial model from Google Maps platform; (b) model modified and augmented by a user.

The purpose of our proposed geospatial platform differs from that of the aforementioned standalone GIS platforms, wherein a user creates a new city model on the basis of real geospatial data. Our platform is similar to the web-based Google Earth; the only difference is that ours is built on the basis of a game engine instead. Moreover, our platform involves data structure and management functions that support the use of geospatial data in various ways. It displays user-generated geospatial data in real time on a planetary scale. The game solution of Google Maps platform is likewise based on a game engine and provides geospatial data, where users use geospatial models as shown in Figure 1a. The difference, however, between Google Maps and our proposed platform lies in the intended purpose of the platform. Google Maps aims to bring the geospatial data built by Google into the game engine, which the user can then edit as shown in Figure 1b [15,17]. On the contrary, our proposed platform was developed for the purpose of tiling-based geospatial data management for rendering on a planetary scale. Therefore, the functions of our platform, which is based on the game engine, resemble the data management and visualization functions of the web-based Google Earth.

We selected Unity3D, the most widely used game engine in the world, to build our proposed geospatial platform because of Unity3D's advantages of convenience, accessibility, and extensibility of external libraries. Unity3D is capable of debugging in various execution environments and is suitable for developing applications for various purposes [18–20]. Unity3D currently supports more than 25 platforms; more than 60% of the development of mixed reality or virtual reality devices that can maximize user concentration is progressing with the aid of Unity3D [21].

Some countries and cities have been collecting static geospatial data, such as terrain, building, and road data, for use on a digital twin city platform. They either choose an existing platform or develop a new platform, depending on the purpose of each digital twin city platform. VWorld contains the most precise geospatial data in South Korea [22]. It is similar to the web-based Google Earth; however, the terrain and building models are managed separately, as shown in Figure 2a. All the building models created as individual models are obtained via DataAPI [23]. Because a high-resolution building model is created separately from the terrain, it requires larger-scale data than Google Earth does. However, it enables simulations based on precise city models. SK Telecom, a Korean telecommunications company, analyzed the 3-D stereoscopic information-based radio wave environment and the 5G antenna visible area for a 5G wireless network design using the VWorld data, as shown in Figure 2b. This showcases the application of a digital twin city platform in which high-precision geospatial data are built.

Figure 2. Web-based VWorld geospatial data: (a) VWorld service site [22]; the selected building is shown in red; (b) 5G radio wave environment and antenna visible area analysis using VWorld data.

The VWorld map service provides two types of geospatial information platforms. Anyone can easily use a geospatial map through a WebGL platform based on web standards. The standalone platform provides various user-customizable functions and supports creating applications [24–26]. However, a user can only develop the functions supported by the platform. For example, it is possible to add 2-D/3-D markers, 2-D/3-D polygons, texts, etc., at specific locations. It is not possible, however, to develop applications for a special scenario, such as a satellite observation simulation or an autonomous driving simulation [27]. Development of a digital twin city platform requires convenient support for special scenarios. The platform can support different geospatial data formats. Our goal was to develop a universal digital twin city platform that can express geospatial data via real-time streaming and meet the specific needs of a digital city platform.
In this paper, we present a geospatial platform prototype based on the universal game engine Unity3D, as shown in Figure 3. The remainder of this paper is structured as follows. In Section 2, we describe the structure of the proposed platform based on Unity3D. In Section 3, we explain how to organize and manage geospatial data based on the quadtree tile structure. A validation experiment and an analysis of its results are discussed in Section 4. Finally, we present concluding remarks in Section 5.

Figure 3. Game window examples of the proposed geospatial platform.

2. Methods: Unity3D-Based Platform

The proposed geospatial platform allows users to easily build a geospatial application without geospatial expertise. The setup process is as simple as downloading the asset library and importing it into your Unity3D project, as shown in Figure 4. The platform enables the development of various geospatial applications through utilities and external assets in Unity3D.
Figure 5 shows the proposed platform structure. The proposed open platform was implemented based on Unity3D. The Globe class is the 3-D global model. The Earth model consists of 71.5 billion tile-based terrain models. The Sector class is a unit that contains a tile, and various types of geospatial data are included in the tile. Data for composing the sectors are requested from the external geospatial data server using the Data API. The platform uses the VWorld Data Server API [22]. The Sector Manager class manages the 71.5 billion tile-based sectors. The User Interface class includes the map movement and direction control, according to the geospatial characteristics of the globe. A user can develop a geospatial application by importing the proposed asset library, indicated by the blue blocks, into an empty project in Unity3D.
Figure 4. Proposed geospatial platform in the game engine editor.



Figure 5. Proposed platform structures. The blue blocks pertain to the proposed asset library in the game engine.
2.1. Camera Control

The camera control provided with the User Interface class has movements for observing geospatial data expressed as a globe [28]. It has transformations such as rotation, movement, and tilt, depending on mouse events. First, the camera has a rotational movement according to the latitude and longitude when the principal line of the camera is fixed towards the center point of the globe. Figure 6a shows an example of camera movements according to the rotations around the latitude and longitude. The red α is the latitude change and rotates around the right vector of the camera at the center point of the globe. The blue β is the longitude change and rotates around the up vector of the camera at the center point of the globe. When the left mouse button is pressed, the vertical movement of the cursor determines the angle in latitude, and the horizontal movement of the mouse cursor determines the angle in longitude.

angle = delta/s · elev/R/UR    (1)

delta denotes the amount of pixel-based movement of the mouse cursor on the 2-D screen. The movement
sensitivity can be set according to the size of s in (1). elev is the elevation of the camera position
calculated relative to the surface of the globe. R, the radius of the Earth, is 6,387,137 m, and UR is
the radius value used on the proposed platform. The transform property of the game object uses the
float type in Unity3D. When a value exceeds the precision that the float type's significand bits can
express, it is stored as an approximation. When a planetary-scale map is used, floating point
precision limitations can arise in Unity3D. Some models with large position coordinates are not
normally displayed, as they are too far from the origin of the game coordinate system. The proposed
platform sets the radius as 10,000 m, which is smaller than the actual radius 6,387,137 m. Depending
on the application, UR can be set to a user-defined value. Other property values can be defined in the
inspector window of the Globe game object.
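As a rough illustration of this scaled-radius arrangement, the following sketch converts real-world metres into platform units and evaluates Equation (1) as printed; the class and member names are ours, not the platform's published API.

public static class GlobeScale
{
    public const double R  = 6387137.0; // Earth radius used by the platform (m)
    public const double UR = 10000.0;   // down-scaled globe radius in Unity units

    // Real-world metres -> platform units (the same UR/R factor appears in Equation (2)).
    public static double ToUnityUnits(double metres) => metres * UR / R;

    // Equation (1) as printed: rotation angle for a mouse drag of delta pixels,
    // with sensitivity s and camera elevation elev above the globe surface.
    public static double OrbitAngle(double delta, double s, double elev)
        => delta / s * elev / R / UR;
}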

Figure 6. Camera control for the three-dimensional (3-D) globe-based map: (a) rotation based on latitude and longitude; (b) tilt transformation at the intersection point p between the camera's principal line and the surface of the globe; (c) rotation based on the camera principal line.
Figure 6b shows an example of the tilt transformation. The tilt transformation rotates on the right vector of the camera at the intersection point p between the camera's principal line and the surface of the globe. When the tilt angle is 0°, the camera's principal line points toward the center point of the globe. When the tilt angle changes, the camera's principal line does not face the center point of the globe, and the direction of the camera can be adjusted by looking away from the surface of the globe, as shown in Figure 3. The vertical movement of the cursor determines the tilt angle while the right mouse button is pressed. The horizontal change of the mouse cursor controls the blue γ rotation. The rotation of γ occurs on the p vector at the intersection point p, as shown in Figure 6c. Figure 7 shows example motions of the proposed camera control. The translation and rotation of the camera are similar to the motion of a satellite.
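A minimal Unity sketch of these camera motions, assuming a globe centred at the origin, is given below; the class and method names are illustrative rather than the platform's actual Camera Control implementation.

using UnityEngine;

public class GlobeOrbitCamera : MonoBehaviour
{
    public Vector3 globeCenter = Vector3.zero;

    // Figure 6a: latitude (alpha) and longitude (beta) rotations about the globe center.
    public void RotateLatLon(float latDeg, float lonDeg)
    {
        transform.RotateAround(globeCenter, transform.right, latDeg); // alpha
        transform.RotateAround(globeCenter, transform.up, lonDeg);    // beta
    }

    // Figure 6b,c: tilt and gamma rotations about the intersection point p of the
    // camera's principal line with the globe surface.
    public void TiltAndSpin(Vector3 p, float tiltDeg, float gammaDeg)
    {
        transform.RotateAround(p, transform.right, tiltDeg);               // tilt
        transform.RotateAround(p, (p - globeCenter).normalized, gammaDeg); // gamma on the p vector
    }
}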


Figure 7. Examples of the camera control: (a) rotation based on latitude and longitude in Figure 6a; (b) tilt transformation in Figure 6b; (c) rotation based on the camera principal line in Figure 6c.

2.2. Positioning Geospatial Model

On the proposed platform, we define the 3-D location of geospatial data items in terms of latitude, longitude, and elevation. The coordinate system of the proposed platform is left-handed. The Y-axis points in the head-up direction. The 3-D model source is placed perpendicular to the origin on the Y-axis, as shown in Figure 8a. The lowest value of a building is 0 on the Y-axis. The Z-axis is the south side of the building, where the building will be placed on the Earth's surface. The direction of the Z-axis points toward the South Pole when the building is positioned on the surface of the globe.

A 3-D model is assumed to be placed in advance at a position and orientation, as shown in Figure 8a, before the proposed positioning method is applied. Using the method, the 3-D model, with latitude, longitude, and elevation data, can be positioned perpendicular to the surface of the globe, as shown in Figure 8d. The latitude, longitude, and elevation of the model are converted into 3-D coordinates in the proposed 3-D geospatial platform as follows:

h = UR + elev · UR/R    (2)

x = h · cos(lat) · sin(lon)
y = h · sin(lat)    (3)
z = −h · cos(lat) · cos(lon)
Figure 8. Positioning 3-D geospatial model: (a) initial position and orientation of the 3D model; (b) rotation r1 on the Y-axis; (c) rotation r2 on the cross product of (x, y, z) and the Y-axis; (d) result of positioning 3-D geospatial model.

The Earth is assumed to be a complete sphere, not an ellipsoid. lat signifies latitude, lon signifies longitude, and elev signifies elevation. The initial 3-D source model illustrated in Figure 8a is transformed on the surface, as shown in Figure 8d, through two rotations and one translation. r1 is the angle between the (x, 0, z) vector and the Z-axis, whose value on the Y-axis is 0, as shown in Figure 8b. The source model first rotates r1 degrees around the Y-axis. r2 is the angle between the (x, y, z) vector and the Y-axis, as shown in Figure 8c. The source model then rotates r2 degrees around the cross product of (x, y, z) and the Y-axis. Through the r1 and r2 rotations, the model originally perpendicular to the Y-axis is rotated into a form perpendicular to the surface at the model's location. Finally, the source model translates to (x, y, z), as shown in Figure 8d. As a result of these two rotations and one translation, the source model is located in position with the latitude, longitude, and elevation of the model, as shown in Figure 9.
model, as shown in Figure 9.

Figure 9. Rendering results for representative buildings in Pyongyang, North Korea on the proposed platform: (a) Ryukyung Hotel; (b) Neungrado Stadium; (c) Kim Il Sung Square.

3. Methods and Results: Data Management and Rendering


The geospatial data used in the proposed platform are the VWorld data, which comprise high-precision data assembled for most cities in South Korea [22]. VWorld provides aerial images of the whole world, and provides aerial images within 12 cm accuracy for the Korean Peninsula (South and North Korea). VWorld has 3-D building and structure models for major cities in South Korea, such as Seoul, Busan, Incheon, and Daejeon. It also has 3-D buildings and structures for some representative buildings of Pyongyang in North Korea, such as the Ryukyung Hotel and the Neungrado Stadium, as shown in Figure 9. This section describes how the geospatial data that fit the structure of the VWorld tiles are managed. We also propose a method for visualizing, in real time, the geospatial data provided by the VWorld server, which has a total capacity of 30 TB or more.

3.1. Sector
On the proposed platform, all formats of geospatial data are visualized as quadtree-based tiles [29–31]. VWorld tiles consist of tiles from level 0 to level 15, with 16 steps in total. The surface of the globe is divided into five latitude steps and ten longitude steps, producing 50 tiles of 36° × 36° at level 0, as shown in Figure 10. As the camera approaches the surface of the globe, a tile is divided into four tiles, and the level of the tile is increased by one. Figure 11 shows an example in which a tile at level 0 is divided into four tiles at level 1 or 16 tiles at level 2. All 3-D terrain models have an aerial image with 256 × 256 resolution and a 64 × 64 × 2 mesh model. As the level of the tiles increases, the representation of the 3-D terrain model becomes more detailed.
PEER 64 × 2 mesh model. As the level of the tiles increases, the representation of the10 of 19
3-D terrain model becomes more detailed.

Figure 10. Quadtree-based tile structure (IDX, IDY): Level 0 tiles consist of 50 tiles of the Earth’s surface.


Figure 11. Example division of quadtree-based tiles (IDX, IDY): a level 0 tile (latitude: 18°–54°, longitude: 108°–144°, South Korea) is divided into four level 1 tiles or 16 level 2 tiles. (a) level 0 tile; (b) four level 1 tiles; (c) 16 level 2 tiles.
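For reference, the geographic extent of a tile follows directly from (level, IDX, IDY). The sketch below (the helper name is ours) reproduces the Figure 11 example: tile (level 0, IDX 8, IDY 3) spans latitude 18°–54° and longitude 108°–144°.

public static class TileBounds
{
    // Level-0 tiles are 36 degrees square, starting at (-90, -180);
    // each additional level halves the span.
    public static (double latMin, double latMax, double lonMin, double lonMax)
        Bounds(int level, int idx, int idy)
    {
        double span = 36.0 / (1 << level);  // 36 deg at level 0, 18 deg at level 1, ...
        double latMin = -90.0 + idy * span;
        double lonMin = -180.0 + idx * span;
        return (latMin, latMin + span, lonMin, lonMin + span);
    }
}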

The level of a displayed tile is determined by the distance between the camera and the terrain
model of the tile. When the distance between the camera and the terrain increases, the low-level
tiles are represented [32]. However, when the distance decreases, high-level tiles are represented.
Equation (4) is a method for determining the rendering level. c is the center position of the camera,
and s is the center position of the terrain model of the tile.

level = ⌊ log( R / √((c_x − s_x)² + (c_y − s_y)² + (c_z − s_z)²) ) / log 2 ⌋    (4)
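A direct transcription of Equation (4) is shown below; the clamp to the valid level range 0–15 is our addition, and units follow the platform's convention.

using UnityEngine;

public static class LevelSelector
{
    const float R = 6387137f; // radius constant used in Equation (4)

    // c: camera center, s: center of the tile's terrain model.
    public static int RenderLevel(Vector3 c, Vector3 s)
    {
        float distance = Vector3.Distance(c, s); // sqrt of the summed squared differences
        int level = Mathf.FloorToInt(Mathf.Log(R / distance, 2f));
        return Mathf.Clamp(level, 0, 15); // VWorld tiles exist for levels 0-15
    }
}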
A sector is a unit that contains various types of geospatial data in a tile area. Geospatial data
located in the latitude and longitude areas of the tile are grouped into the same sector. Each type of
geospatial data in a sector is managed as a layer. For example, the active layers are 3-D terrain models
and 3-D buildings. When a sector is created, the proposed platform requests that the 3-D terrain and
3-D building model data, which are included in the activated layer, be provided to the external data
server. When the request and creation of the sector are completed, it can be the target of rendering.
Figure 12 shows an example of a sector composition. The terrain model is created using an aerial image
as a texture source image and a 3-D terrain mesh model generated from a digital elevation model
(DEM). Various geospatial data types, such as building, facility, and local name data, are included in a
sector based on the terrain model. A sector is uniquely defined as (level, IDX, IDY). An index for sector
identification is defined as follows:

index = 50 · (2^(2·level) − 1)/3 + 10 · 2^level · IDY + IDX    (5)
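Since the index can reach roughly 71.5 billion, Equation (5) needs 64-bit arithmetic. The sketch below (the class name is ours) reproduces the formula: the first term counts all tiles on coarser levels, and 10·2^level is the number of tile columns at the given level. Index(0, 0, 0) evaluates to 0, consistent with the example discussed next.

public static class SectorIndexer
{
    public static long Index(int level, int idx, int idy)
    {
        // 50·(4^level − 1)/3: total number of tiles on levels 0 .. level−1.
        long tilesBelow = 50L * (((1L << (2 * level)) - 1) / 3);
        long columns = 10L * (1L << level); // 10·2^level tiles per row
        return tilesBelow + columns * idy + idx;
    }
}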
The size and location of a sector are defined by (level, IDX, IDY). For example, the index of the
sector (level:0, IDX:0, IDY:0) is 0. The latitude interval is −90°–−54°, and the longitude interval is
−180°–−144°. The size of each interval is 36°. In the Unity3D editor, the Sector game object is created
per sector under the Globe game object. It basically has a 3-D terrain model. The other layers of
geospatial data in a sector are created as new game objects under the Sector game object. The Sector
Manager class manages the Sector game objects. GameObject.Find() can access a game object in Unity3D.
However, as the number of game objects increases, the object search time with the find function
becomes slower. In the Sector Manager class, the Dictionary class variable is used to reference all Sector
game objects and Sector class variables created in the platform. These approaches can minimize the
time required to search for Sector game objects or Sector class variables.
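A minimal sketch of this dictionary-based lookup, reusing the indexer above, is shown below; the class shape is assumed for illustration and is not taken from the platform's source.

using System.Collections.Generic;
using UnityEngine;

public class SectorRegistry
{
    readonly Dictionary<long, GameObject> sectors = new Dictionary<long, GameObject>();

    public void Register(int level, int idx, int idy, GameObject sectorObject)
        => sectors[SectorIndexer.Index(level, idx, idy)] = sectorObject;

    // Constant-time lookup instead of a scene-wide GameObject.Find() search.
    public bool TryGetSector(int level, int idx, int idy, out GameObject sectorObject)
        => sectors.TryGetValue(SectorIndexer.Index(level, idx, idy), out sectorObject);
}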

Figure 12. Examples of a sector composition: (a) aerial image; (b) 3-D terrain mesh model generated from digital elevation model (DEM); (c) 3-D building model; (d) rendering result of geospatial data in a sector.

3.2. Determination of the Rendering Sectors

Figure 13 shows the task flow of the main loop. The Globe class is the main class in the proposed platform. The Camera Control class calculates the position and orientation of the camera according to the user input. The Sector Manager class searches for the target sectors according to the calculated camera position and orientation. When a sector is determined as a rendering target, the Sector game object is created if it does not exist. The Sector properties are defined as (level, IDX, IDY). The required data for sector creation are requested with (level, IDX, IDY) using the Data API from the Geospatial Data Server. Requesting data and creating game objects are performed asynchronously. Therefore, sector creation is delayed relative to the frame in which the sector is determined to be created. The rendering target sectors in a given frame are selected according to whether a sector's creation is completed. In advance of transforming the camera according to the user input, the rendering target sectors are determined from the calculated position and orientation of the camera. At the final step of the loop frame, the camera is transformed to the calculated position and orientation.
Figure 13. Task flow of the main loop.
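The per-frame ordering of Figure 13 can be summarized by an Update() skeleton of the following form; the method bodies are stubs, and only the ordering reflects the text.

using UnityEngine;

public class GlobeMainLoop : MonoBehaviour
{
    struct CameraPose { public Vector3 position; public Quaternion rotation; }

    void Update()
    {
        CameraPose pose = CalculateCameraPose(); // from user input (Section 2.1)
        FindAndRequestRenderSectors(pose);       // culling tests + asynchronous Data API requests
        ActivateCompletedSectors();              // enable renderers of finished sectors only
        ApplyCamera(pose);                       // the camera moves at the final step
    }

    CameraPose CalculateCameraPose()
        => new CameraPose { position = transform.position, rotation = transform.rotation };
    void FindAndRequestRenderSectors(CameraPose pose) { /* Section 3.2 */ }
    void ActivateCompletedSectors() { /* set MeshRenderer.enabled on created sectors */ }
    void ApplyCamera(CameraPose pose)
        => transform.SetPositionAndRotation(pose.position, pose.rotation);
}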

In the platform implemented in Unity3D, among all of the 71.5 billion sectors, only approximately 60 or fewer sectors are rendered in a frame with a screen resolution of 1920 × 1080. The most time-consuming step in a loop frame is the search for sectors to be rendered. We propose a method for determining the rendering sectors based on the quadtree-based tile structure; specifically, whether the target sectors are rendered is decided through three types of culling test. Figure 14 shows the task flow of the culling test process.
The first step is to determine whether a sector is the central sector. This sector is, among the
sectors through which the principal line of the camera passes, the closest to the camera. The shape of
the sector located on the surface of the globe is a curved surface. When the size of the sector is too
large, it may not pass the back-face culling test, which uses the four corner points and the center point
vectors of a 3-D terrain model in a sector. Therefore, the closest sector through which the principal
line of the camera passes is always rendered. The second step involves placing the target sector in
the camera view frustum. The third step is the back-face culling test. The target sectors, except the
center sector, must pass both the camera view frustum and back-face culling tests to enter the final
verification process. Finally, if the level of the target sector is the same as the rendering level in (4), it is
rendered in the current frame. If not, the child of the target sector becomes the target sector and the
three culling tests are performed again. Instead of targeting a total of 71.5 billion sectors, the proposed
method targets 50 sectors at level 0. It starts from all level 0 sectors and repeats the proposed test for
the child sectors until it reaches the rendering level at every frame.

Figure 14. Task flow of the proposed three types of culling test.
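A simplified version of the three tests is sketched below using Unity's frustum utilities. The back-face step is reduced to a normal-versus-view-direction check over the five stored normals, and all names are illustrative.

using UnityEngine;

public static class SectorCulling
{
    public static bool PassesCulling(Camera cam, Bounds sectorBounds,
                                     Vector3[] cornerAndCenterNormals,
                                     bool isCentralSector)
    {
        // Step 1: the central sector is always rendered.
        if (isCentralSector) return true;

        // Step 2: camera view frustum test.
        Plane[] frustum = GeometryUtility.CalculateFrustumPlanes(cam);
        if (!GeometryUtility.TestPlanesAABB(frustum, sectorBounds)) return false;

        // Step 3: back-face test on the four corner points and the center point:
        // the sector survives if any of the five normals faces the camera.
        foreach (Vector3 n in cornerAndCenterNormals)
            if (Vector3.Dot(n, cam.transform.forward) < 0f) return true;
        return false;
    }
}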
The process of rendering target sectors involves requesting data from the external geospatial data server. Two-dimensional or 3-D models are created as game objects using the downloaded data. At the initial stage of the loop frame, the enabled property of the mesh renderer components of all sectors is changed to false. When a sector is determined to be rendered, the enabled property is activated. The following rules are involved in the proposed rendering sector decision process; a minimal sketch of the resulting traversal is given after the list.

• The parent sector is one level lower than the child sector. The parent sector contains the area of the four child sectors.
• The child sectors are one level higher than the parent sector. The child sectors are divided by four from the parent sector.
• A created sector implies that its data are requested and downloaded, and that the sector is created as a game object in Unity3D.
• The child sectors can be created only when the parent sector is already created. Accordingly, all created sectors have a created parent sector, except the level 0 sectors.
• When a new layer is added, all created sectors become sectors whose creation is not completed. The sectors are created by requesting data for the newly added layer from level 0.
• A sector can only be rendered if its neighboring child sectors, which have the same parent sector, are created, except for the level 0 sectors. If no child sector has been created, the parent sector is the rendering target.
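Under these rules, the per-frame traversal can be sketched as follows; the culling tests of Figure 14 are omitted for brevity, and the node type is an assumption rather than the platform's class.

using System.Collections.Generic;

public class SectorNode
{
    public int Level;
    public bool IsCreated;        // data downloaded and game object built
    public SectorNode[] Children; // the four child sectors, or null

    // Collect sectors to draw this frame, starting from the 50 level-0 nodes.
    public void CollectRenderTargets(int renderLevel, List<SectorNode> targets)
    {
        if (!IsCreated) return; // nothing to draw yet

        // Draw this sector if it is at the rendering level, or if its four
        // children are not all created (the parent stands in for them).
        if (Level >= renderLevel || Children == null || !AllChildrenCreated())
        {
            targets.Add(this);
            return;
        }
        foreach (SectorNode child in Children)
            child.CollectRenderTargets(renderLevel, targets);
    }

    bool AllChildrenCreated()
    {
        foreach (SectorNode child in Children)
            if (child == null || !child.IsCreated) return false;
        return true;
    }
}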
The rendering target sectors are changed according to the camera control. Depending on the network environment, sector creation is difficult to complete within 16.67 ms, which is the frame reference time at 60 fps. This implies that some sectors are not created and are displayed as black empty spaces, as shown in Figure 15. By following the above rules, it is possible to avoid unnatural visualizations. Figure 15a shows the child sector a_3 of a, which is not created. Following the proposed rules, a child sector should be targeted only when all four child sectors that have the same parent sector have been created. Figure 15b shows a case in which parent sector b is not created. If all of the created child sectors have created parent sectors, b will be visualized normally.

Figure 15. Examples of rendering failure: changes in rendering target sectors due to changes in position and orientation of the camera: (a) the child sectors are rendered before the creation of all four child sectors; (b) the child sectors are created without the creation of a parent sector.
Figure 16 shows a rendering example of adjacent sectors of different levels. As it covers approximately 71.5 billion target sectors, only the rendering target sectors are activated according to the camera position and orientation. According to the decision rule of rendering sectors, all sectors except those at level 0 have a parent sector. A sector cannot be rendered before the same child sectors are created. Only the 11 colored sectors among the 41 sectors are rendered, as shown in Figure 16c.


Figure 16. Rendering example of adjacent sectors and the quadtree structure: (a) rendering result for sectors in the game window; (b) wireframes of rendering sectors; (c) quadtree structure of rendering sectors (only colored sectors are rendered).


4. Discussion
In a speed performance experiment, three types of computers were used: a desktop computer with an Intel® 4.20-GHz Core™ i7 CPU, an NVIDIA® GeForce® GTX 1080Ti GPU, and wired network access; a notebook computer with an Intel® 4.2-GHz Core™ i7-8650U CPU, an NVIDIA® GeForce® GTX 1060 GPU, and wireless network access; and a notebook computer with an Intel® 4.2-GHz Core™ i7-8650U CPU, no GPU, an Intel® HD Graphics 620 integrated graphics unit, and wireless network access.
Table 1 shows the performance results for each major step in the different types of environments.
Taking into consideration the flexible network environment, the speed measurements were obtained at
the frame at which the processes of downloading data using the DATA API from the external geospatial
data server and generating downloaded data as a game object had been completed. The camera
control step included the camera control by the user, calculation of the camera position and direction,
calculation of the central longitude and latitude (coincident with the principal line of the camera and
the terrain model), and other parameters related to the camera control. Our platform was able to
perform at a rate of over 64.5 fps on the notebook computer without a GPU.

Table 1. Results for each step of performance tests.

Step | Speed for Desktop with GPU (Graphics Processing Unit) (ms) | Speed for Notebook with GPU (ms) | Speed for Notebook without GPU (ms)
Camera control | 0.0092 | 0.0123 | 0.0375
Find draw sectors | 0.2202 | 0.2551 | 0.5930
Activate draw sectors | 0.6741 | 0.8201 | 2.8769
Frame rate (fps) | 6.0959 (164.04) | 8.8838 (112.56) | 15.4913 (64.55)
The resolution of the game window was 1024 × 768, and the screen resolution was 1920 × 1080. The speed results
were measured with an average speed of 3,000 frames. The sizes of the graphic buffers were as follows: triangles,
111.8 k; vertices, 269.4 k; textures, 339.6 MB; 859 game objects; 430 sectors. The frame rate is the time of the main
loop, including other steps.

The proposed platform can be employed as a general digital twin city platform that can utilize
geospatial data from various sources. Figure 17 shows the development pipeline employed to build
a digital twin city platform using the proposed platform. First, digitized geospatial data are to be
collected in the real city. Second, the digitized data must be generated in a tile unit with level-of-detail
(LOD) on the basis of the quadtree structure [33]. All the 3-D data must then be transformed as per the
proposed coordinate systems, as shown in Figure 8. In this study, we implemented a platform using
VWorld data that involves a quadtree LOD built on the basis of the actual cities. The blue blocks in
Figure 17 depict the steps followed in the proposed platform. The next step is to set the quadtree-based
tile units and layer properties on the proposed platform. This involves setting the division units on
the Earth’s surface and LOD steps, as shown in Figures 10 and 11. In our study, we have defined a
sector as a unit that groups various layers within a tile unit. When newly collected, quadtree-structured, digitized city model data are set as a new layer within a sector, the proposed platform enables the geospatial data to be rendered on a planetary scale.
Table 2 shows the differences between the platforms compared in this paper and the proposed platform. The purpose of our platform is to provide a general geospatial platform that supports various types of geospatial data. Some cities are constructing 3-D geospatial data models, such as terrains, buildings, and facilities, and plan to establish a digital twin city in the future. There are limitations to developing a digital twin city platform based on the web-based Google Earth or VWorld service sites. Additionally, even if the game engine-based game solution of Google Maps platform is used, the managing and rendering of large-scale tiled data with LODs in other formats is not supported.

Figure 17. Development pipeline for the digital twin city platform: the blue blocks show the steps followed in the proposed platform.

Table 2. Comparison of geospatial platforms (accessed 19 October 2020). The proposed platform does not provide its own geospatial data, but it was developed based on VWorld geospatial data. Our platform can substitute other geospatial data for the VWorld data. LOD: level-of-detail.
                                | Google Earth [7] | VWorld Service Site [22]                             | Game Solution of Google Maps [15] | Proposed Platform
Development environment         | web              | web                                                  | Unity3D                           | Unity3D
Scale                           | planetary-scale  | planetary-scale                                      | city scale                        | planetary-scale
Provides geospatial data        | yes              | yes                                                  | yes                               | no (VWorld)
Texture resolution of buildings | low              | high                                                 | low                               | -
Supported locations             | global           | South Korea, North Korea, and global (only terrains) | global                            | -
Tiled data with LODs management | yes              | yes                                                  | no                                | yes
External tiled data usage       | limit            | limit                                                | limit                             | support
Price                           | free             | free                                                 | USD 200 (monthly usage)           | free

5. Conclusions
In this study, we propose a planetary-scale geospatial open platform developed in a widely used game engine environment. We describe how to organize and manage various types of geospatial data. The proposed method creates game objects in Unity3D based on quadtree tiles, using VWorld geospatial data served from an external data server. We also propose a method for quickly searching the entire database, which spans over 30 TB and 71.5 billion tiles, to determine the data to be rendered; the experimental results show that this method provides constant geospatial data visualization under the user's camera control. One can easily use the proposed 3-D geospatial platform by importing a small asset library into a Unity3D project. The proposed geospatial platform is available on the Asset Store and has the potential to be widely used as a general geospatial platform for various purposes.
In future work, we will research a geospatial platform for the development of a digital twin that displays not only static city components, such as terrains, buildings, and facilities, but also dynamic, movable components, such as cars and pedestrians. This will require the detection and storage management of dynamic spatiotemporal data.

Author Contributions: Conceptualization, A.L., I.J.; methodology, A.L., Y.-S.C.; software, A.L., Y.-S.C.; validation,
A.L., Y.-S.C.; writing—original draft preparation, A.L.; writing—review and editing, Y.-S.C., I.J. All authors have
read and agreed to the published version of the manuscript.
Funding: This research was supported by a grant (20DRMS-B147287-03) for the development of a customized
realistic 3-D geospatial information update and utilization technology based on consumer demand, funded by the
Ministry of Land, Infrastructure and Transport of the Korean government.

Conflicts of Interest: The authors declare no conflict of interest.

References
1. Boschert, S.; Rosen, R. Digital twin—The simulation aspect. In Mechatronic Futures; Springer: Cham,
Switzerland; New York, NY, USA, 2016; pp. 59–74.
2. Schleich, B.; Anwer, N.; Mathieu, L.; Wartzack, S. Shaping the digital twin for design and production
engineering. CIRP Ann. 2017, 66, 141–144. [CrossRef]
3. Grieves, M.; Vickers, J. Digital twin: Mitigating unpredictable, undesirable emergent behavior in complex
systems. In Transdisciplinary Perspectives on Complex Systems; Springer: Cham, Switzerland; New York, NY,
USA, 2016; pp. 85–113.
4. Mohammadi, N.; Taylor, J. Knowledge discovery in smart city digital twins. In Proceedings of the 53rd
Hawaii International Conference on System Sciences, Hawaii, HI, USA, 7–10 January 2020.
5. Alam, K.M.; Saddik, A.E. C2PS: A digital twin architecture reference model for the cloud-based cyber-physical
systems. IEEE Access 2017, 5, 2050–2062. [CrossRef]
6. Ignatius, M.; Martin, M.; Wong, N.H. Developing Bescam: Integration of virtual Singapore with building
energy simulation and urban canopy modeling for climate and district energy demand assessment.
In Proceedings of the AMS International Conference on Urban Climate/14th Symposium on the Urban
Environment, New York, NY, USA, 6–10 August 2018.
7. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine:
Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27. [CrossRef]
8. Tian, F.; Wu, B.; Zeng, H.; Zhang, X.; Xu, J. Efficient identification of corn cultivation area with multitemporal
synthetic aperture radar and optical images in the Google Earth engine cloud platform. Remote Sens. 2019,
11, 629. [CrossRef]
9. He, B.; Mo, W.X.; Hu, J.X.; Yang, G.; Lu, G.J.; Liu, Y.Q. Development of power grid Web3D GIS based on
Cesium. In Proceedings of the 2016 IEEE PES Asia-Pacific Power and Energy Engineering Conference
(APPEEC), Xi’an, China, 25–28 October 2016; pp. 2465–2469.
10. Müller, R.D.; Qin, X.; Sandwell, D.T.; Dutkiewicz, A.; Williams, S.E.; Flament, N.; Seton, M. The GPlates
portal: Cloud-based interactive 3D visualization of global geophysical and geological data in a web browser.
PLoS ONE 2016, 11, e0150883. [CrossRef] [PubMed]
11. Gutbell, R.; Pandikow, L.; Kuijper, A. Web-based visualization component for geo-information. In International
Conference on Human Interface and the Management of Information; Springer: Cham, Switzerland; New York,
NY, USA, 2018; pp. 23–35.
12. Wood, L. Programming the Web: The W3C DOM specification. IEEE Internet Comput. 1999, 3, 48–54.
[CrossRef]
13. Zhu, C.; Tan, E.C.; Chan, T.K.Y. 3D Terrain visualization for Web GIS. Geospat. World 2009. Available online:
https://www.geospatialworld.net/article/3d-terrain-visualization-for-web-gis/ (accessed on 20 October 2020).
14. Angel, E.; Shreiner, D. Geometric objects and transformations. In Interactive Computer Graphics with WebGL,
7th ed.; Addison-Wesley Professional: Boston, MA, USA, 2014; pp. 165–244.
15. McQuire, S. One map to rule them all? Google Maps as digital technical object. Commun. Public 2019, 4,
150–165. [CrossRef]
16. Van Maren, G.; Shephard, N.; Schubiger, S. Developing with Esri CityEngine. In Proceedings of the ESRI
Developer Summit, San Diego, CA, USA, 26–29 March 2012.
17. Buyuksalih, I.; Bayburt, S.; Buyuksalih, G.; Baskaraca, A.P.; Karim, H.; Rahman, A.A. 3D modelling and
visualization based on the unity game engine—Advantages and challenges. In Proceedings of the ISPRS
Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, IV-4/W4, Safranbolu,
Karabuk, Turkey, 14–15 October 2017; pp. 161–166.
18. Wang, S.; Mao, Z.; Zeng, C.; Gong, H.; Li, S.; Chen, B. A new method of virtual reality based on Unity3D.
In Proceedings of the IEEE International Conference on Geoinformatics, Beijing, China, 18–20 June 2010;
pp. 1–5.
19. Xie, J. Research on key technologies base Unity3D game engine. In Proceedings of the IEEE International
Conference on Computer Science & Education (ICCSE), Melbourne, Australia, 14–17 July 2012; pp. 695–699.

20. Koh, E.; Park, G.; Lee, B.; Kim, D.; Sung, S. Performance Validation and Comparison of range/INS integrated
system in urban navigation environment using Unity3D and PILS. In Proceedings of the 2020 IEEE/ION
Position, Location and Navigation Symposium (PLANS), Portland, OR, USA, 20–23 April 2020; pp. 788–792.
21. Szajna, A.; Stryjski, R.; Woźniak, W.; Chamier-Gliszczyński, N.; Kostrzewski, M. Assessment of Augmented
Reality in Manual Wiring Production Process with Use of Mobile AR Glasses. Sensors 2020, 20, 4755.
[CrossRef] [PubMed]
22. Lee, A.; Jang, I. Implementation of an open platform for 3D spatial information based on WebGL. ETRI J.
2019, 41, 277–288. [CrossRef]
23. Jang, H.S.; Go, J.H.; Jung, W.R.; Jang, I.S. Performance evaluation of CDN method in V-World web service as
spatial information open platform service. Spat. Inf. Res. 2016, 24, 355–364. [CrossRef]
24. Lavoué, G.; Chevalier, L.; Dupont, F. Streaming compressed 3D data on the web using JavaScript and WebGL.
In Proceedings of the 18th International Conference on 3D Web Technology, San Sebastian, Spain, 20–22 June
2013; pp. 19–27.
25. Suárez, J.P.; Trujillo, A.; De La Calle, M.; Gómez-Deck, D.D.; Santana, J.M. An open source virtual globe
framework for iOS, Android and WebGL compliant browser. In Proceedings of the 3rd International
Conference on Computing for Geospatial Research and Applications, New York, NY, USA, 1–3 July 2012;
pp. 1–10.
26. Congote, J.; Segura, A.; Kabongo, L.; Moreno, A.; Posada, J.; Ruiz, O. Interactive visualization of volumetric
data with WebGL in real-time. In Proceedings of the 16th International Conference on 3D Web Technology,
New York, NY, USA, 20–22 June 2011; pp. 137–146.
27. Ham, Y.; Kim, J. Participatory Sensing and digital twin city: Updating virtual city models for enhanced
risk-informed decision-making. J. Manag. Eng. 2020, 36, 04020005. [CrossRef]
28. Gregory, J. The game loop and real-time simulation. In Game Engine Architecture, 3rd ed.; CRC Press: Danvers,
MA, USA, 2018; pp. 525–558.
29. Li, C.; Zhao, Z.; Sun, W.; Liu, Z. A fast quadtree-based terrain crack locating method that accounts for
adjacency relationships. Trans. GIS 2019, 23, 1374–1392. [CrossRef]
30. Pajarola, R. Large scale terrain visualization using the restricted quadtree triangulation. In Proceedings of
the Visualization ‘98, Research Triangle Park, NC, USA, 18–23 October 1998; pp. 19–26.
31. Wu, J.; Yang, Y.; Gong, S.R.; Cui, Z. A new quadtree-based terrain LOD algorithm. J. Softw. 2010, 5, 769–776.
[CrossRef]
32. Kraak, M.J.; Ormeling, F. Map characteristics. In Cartography: Visualization of Geospatial Data, 2nd ed.;
CRC Press: Danvers, MA, USA, 2020; pp. 51–84.
33. Chen, Q.; Liu, G.; Ma, X.; Mariethoz, G.; He, Z.; Tian, Y.; Weng, Z. Local curvature entropy-based 3D
terrain representation using a comprehensive Quadtree. ISPRS J. Photogramm. Remote Sens. 2018, 139, 30–45.
[CrossRef]

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional
affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
