
International Journal of

Geo-Information

Article
Indoor Reconstruction from Floorplan Images with a
Deep Learning Approach
Hanme Jang, Kiyun Yu and JongHyeon Yang *
Department of Civil and Environmental Engineering, Seoul National University, Seoul 08826, Korea;
janghanie1@snu.ac.kr (H.J.); kiyun@snu.ac.kr (K.Y.)
* Correspondence: yangjonghyeon@snu.ac.kr; Tel.: +82-10-5068-2628

Received: 19 November 2019; Accepted: 19 January 2020; Published: 21 January 2020

Abstract: Although interest in indoor space modeling is increasing, the quantity of indoor spatial
data available is currently very scarce compared to its demand. Many studies have been carried
out to acquire indoor spatial information from floorplan images because they are relatively cheap
and easy to access. However, existing studies do not take international standards and usability into consideration; they consider only 2D geometry. This study aims to generate basic data that can be converted to indoor spatial information using the IndoorGML (Indoor Geography Markup Language) thick wall model or the CityGML (City Geography Markup Language) level of detail 2 by creating vector-formed data while preserving wall thickness. To achieve this, recent convolutional neural networks were used on floorplan images to detect wall and door pixels. Additionally, centerline and
corner detection algorithms were applied to convert wall and door images into vector data. In this
manner, we obtained high-quality raster segmentation results and reliable vector data with node-edge
structure and thickness attributes that enabled the structures of vertical and horizontal wall segments
and diagonal walls to be determined with precision. Some of the vector results were converted into
CityGML and IndoorGML form and visualized, demonstrating the validity of our work.

Keywords: floorplan image; semantic segmentation; centerline; corner detection; indoor spatial data

1. Introduction
With developments in computer and mobile technologies, interest in indoor space modeling is
increasing. According to the U.S. Environmental Protection Agency, 75% of people living in cities
around the world spend most of their time indoors. Moreover, the demand for indoor navigation and
positioning technology is increasing. Therefore, indoor spatial information has a high utility value, and
it is important to construct it. However, according to [1,2], the amount of indoor spatial data available for applications such as indoor navigation and indoor modelling is very small relative to the demand. To address this, research has been conducted on generating indoor spatial information using various data such
as LiDAR (Light Detection and Ranging), BIM (Building Information Modeling), and 2D floorplans.
LiDAR is a surveying method that obtains the distance from the target by aiming a laser towards it and
measuring the time gap and difference in wavelength of the reflected light. Hundreds of thousands
of lasers are shot at the same time and return information from certain points on the target. Point
data acquired by LiDAR systems describe 3D worlds as they are; therefore, it has been widely used to
create mesh polygons that can be converted into elements of spatial data [3,4]. However, because of
the high cost of the LiDAR system and the time the process takes, only a few buildings have been modelled this way. In contrast, BIM—especially its IFC (Industry Foundation Classes) format—is a widely used open file format that describes 3D buildings edited with various software packages such as Revit and ArchiCAD [5]. It contains sufficient attributes and geometric properties of all building elements to be converted to spatial
data. Sometimes, IFC elements are projected into the 2D plane and printed as a floorplan for sharing

ISPRS Int. J. Geo-Inf. 2020, 9, 65; doi:10.3390/ijgi9020065 www.mdpi.com/journal/ijgi



on sites (to obtain opinions) or making a contract that is less formal than the original BIM vector data.
Floorplans have been steadily gaining attention as spatial data sources because they are relatively
affordable and accessible compared to other data [6]. However, when printed on paper, the geometric point and line information vanishes; only raster information remains. To overcome this, several methods for recovering vector data from raster 2D floorplans have been suggested. The OGC (Open Geospatial Consortium), the global standards committee for geospatial content and services, recently accepted two GML models, CityGML and IndoorGML, as the standard 3D data models for spatial modelling inside buildings. It is pertinent to follow a standardized data format when constructing indoor spatial data for the purpose of generating, utilizing, and converting it to other GIS formats.
Studies on extracting and modifying objects such as doors and walls from floorplan images can
be classified into two categories, according to whether they extract the objects in the image using a
predefined rule set or whether they extract objects using machine learning algorithms that are robust
to various data. Macé et al. [7] separated images and texts with geometric elements from a floorplan
using the Qgar tool [8] and Hough transform method. As their method uses well-designed rules for
certain types of floorplans, low accuracy is expected for other kinds. In addition, they could be difficult
to use as spatial information due to their lack of coordinates. Gimenez et al. [9] defined the category of
wall segments well and classified them at the pixel level using a predefined rule set. However, as these methods have many predefined hyper-parameters that must be chosen before processing, they lack generalizability. Ahmed et al. [10,11] separated text and graphics and categorized walls
according to the thickness of their line (i.e., thick, medium, or thin). This method outperformed others
in terms of finding wall pixels; however, doors and other elements were not considered and the result
was in a raster form unsuitable for spatial data.
However, as these studies processed floorplans according to predefined rules and parameters,
they lacked generality in terms of noise or multiple datasets. To overcome these problems, methods
using machine learning algorithms appeared. De las Heras et al. [12] constructed a dataset denoted CVC-FP (Centre de Visio per Computador Floorplan), which contains multiple types of 2D floorplans, then used it to extract walls with the classical SVM-BoVW (Support Vector Machine with Bag of Visual Words) machine learning algorithm and extracted room structures using the centerline detection method. Although the accuracy of the generated data was quite low, this research was the first to try to generate vector data from various types of 2D floorplan images. Dodge et al. [13] used FCN-2s to segment walls and added the R-FP (Rakuten Floorplan) dataset to the training data to improve segmentation results.
However, they did not conduct any postprocesses to construct indoor spatial information. Liu et al. [14]
applied ResNet-152 (Residual Network 152) to R-FP to extract the corners of walls and applied integer
programming on the extracted corners to construct vector data that represented walls. However, their
research was conducted on the assumption that every wall in the floorplan was either vertical or
horizontal, not diagonal. Furthermore, they did not consider the elements of indoor spatial information,
such as wall thickness.
In general, existing studies on constructing indoor spatial information from floorplans either
use the rule-based method, which only works well for specific drawings that meet predetermined
criteria; or machine learning algorithms, which have higher performance than the rule-based method
because they learn patterns from data automatically and show robustness in performance against noise.
Representative studies are summarized in Table 1. However, the studies discussed above also have
some limitations in terms of indoor spatial information. First, information regarding walls and doors
was simply extracted in the form of an image. In order to comply with the OGC standard, the objects
must be represented as vectors of points, lines, and polygons. Second, studies that performed vector
transformations simply expressed the overall shape of the building. According to Kang et al. [15],
IndoorGML adopts the thick wall model and uses the thickness of doors and walls to generate spatial
information. In addition, Boeters et al. [16] and Goetz et al. [17] suggest that geometric information

should be preserved as much as possible when generating CityGML data, typically to create interior and
exterior walls using wall thickness, and to maintain the relative position and distance between objects.
In this study, we aimed to generate vector data that could be converted into CityGML LoD2 (Level
of Detail 2) or IndoorGML thick wall models given floorplan images. To this end, we used CNNs
(Convolutional Neural Networks) to generate wall and door pixel images and used a raster-to-vector conversion method to convert the wall and door images into vector data. Compared to existing work,
the novelty of our study is as follows. First, we generated reliable vector data that could be converted
into CityGML LOD2 or IndoorGML thick wall models, unlike in previous studies. Second, we adopted
modern CNNs to generate high-quality raster data that could then be used to generate vector data.
Third, to fully recover wall geometry, we extracted diagonal wall segments as well as vertical and horizontal wall segments.

Table 1. Summary of the datasets, methods, and results of selected previous studies.

Study Dataset Method Result

[7] CVC-FP Qgar project, Hough transform Entity segmentation
[9] CVC-FP Predefined rule Room detection
[10] CVC-FP Contour extraction Room detection
[11] Defined in paper Predefined rule Room detection
[12] CVC-FP SVM-BoVW, centerline detection Room detection
[13] CVC-FP, R-FP Deep learning (FCN-2s) Pixel-level segmentation
[14] R-FP Deep learning (ResNet-152) Vector with points and lines
Note: FCN-2s: Fully Convolutional Network with stride 2; ResNet-152: Residual Network 152.

2. Materials and Methods


To generate vector data from floorplans, we divided our workflow into two processes: segmentation
and vector data generation (Figure 1). The segmentation process is divided into two stages, namely
training and inference stages, in which we extract wall and door images from raw 2D floorplan images.
To train the CNNs, we classified the dataset into three categories, containing 247 images for training, 25 images for validation, and 47 for testing. Data augmentation techniques such as flipping, shifting, and brightness changes were applied at the training stage, after which the CNNs were trained by being fed the augmented images and the constructed ground truths of the training set. In the inference stage, we segmented walls and doors in the test set using the trained models. In the vector data generation process, we converted the segmented wall and door images into vector data. The simple wall geometry was constructed using centerline detection and refined using the corner detection method on the centerline images. The thickness of each wall was calculated automatically by overlapping the wall image and the refined wall geometry. Finally, the wall graph and the door points, which were extracted separately using the corner detection method, were merged. All experiments were conducted using the TensorFlow and PyQGIS libraries in a Python environment.
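As an illustration of the augmentation step described above, the following minimal sketch applies a shared flip, a small shift, and a brightness change to an image and its ground-truth mask with TensorFlow; the shift range and brightness delta are illustrative assumptions, not the values used in our experiments.

import tensorflow as tf

def augment(image, mask):
    # Apply the same horizontal flip to the image and its mask.
    flip = tf.random.uniform(()) > 0.5
    image = tf.cond(flip, lambda: tf.image.flip_left_right(image), lambda: image)
    mask = tf.cond(flip, lambda: tf.image.flip_left_right(mask), lambda: mask)

    # Small random circular shift (up to 16 px, an assumed range), applied to both.
    dy = tf.random.uniform((), -16, 17, dtype=tf.int32)
    dx = tf.random.uniform((), -16, 17, dtype=tf.int32)
    image = tf.roll(image, shift=[dy, dx], axis=[0, 1])
    mask = tf.roll(mask, shift=[dy, dx], axis=[0, 1])

    # Brightness jitter on the image only (assumed maximum delta of 0.2).
    image = tf.image.random_brightness(image, max_delta=0.2)
    return image, mask

# Assumed usage with a tf.data.Dataset of (floorplan, ground_truth) pairs:
# dataset = dataset.map(augment).shuffle(256).batch(8)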

Figure 1. Workflow of the study, showing the input data, process, and results. Note: CNN = convolutional neural network.

2.1. Deep-Learning-Based Segmentation


While existing pattern recognition algorithms are difficult to train in an end-to-end manner (since the feature extractors that extract hand-crafted features and the classifiers are separated), deep networks integrate low-, mid-, and high-level features and classifiers in an end-to-end multilayer fashion [18]. Since Krizhevsky et al. [19] demonstrated astonishing results on the ImageNet 2012 classification benchmark, CNNs have shown excellent performance in many computer vision tasks, such as image classification, semantic segmentation, object detection, and instance segmentation [20–23]. To train a CNN for segmentation, a sufficient quantity of high-quality data is required. When data are difficult to obtain, weak annotations such as image-level tags, bounding boxes, or scribbles can be used to achieve results comparable to those of pixel-wise annotations [24]. In recent years, this approach has reached 90% of the performance of Deeplab v1 [25], but it still falls short of the accuracy of supervised learning with pixel-wise annotations.

In this study, we used a GCN (Global Convolutional Network), Deeplab v3 plus (an advanced version of Deeplab v1), and a PSPNet (Pyramid Scene Parsing Network), as they have demonstrated good performance in semantic segmentation on Pascal VOC2012, Microsoft COCO (Common Objects in Context), and other benchmarks. Furthermore, considering that floorplans contain only a small number of classes, we also used shallower models that showed efficient performance despite their small number of parameters. We used DarkNet53 (Figure 2) as a backbone, removed the final average pooling layer, fully connected layer, and SoftMax layer, and then appended five deconvolution layers to restore the original image size. After the first deconvolution layer, we added a skip connection before each subsequent deconvolution layer. To improve the accuracy of object extraction, we trained separate networks for walls and doors and constructed pixel-level annotations with LabelMe for our training data, which were trained in a fully supervised manner. We carried out data augmentation with shifts, rotations, flips, and brightness changes, and used the R-FP dataset as additional training data, similar to Dodge et al. [13]. At the same time, data characteristics can affect the segmentation results. If foreground pixels occupy only a small region compared to the background, the learning process often gets trapped in local minima of the loss function, yielding a network whose predictions are strongly biased towards the background [22]. To prevent this, we used a weighted cross-entropy loss and a dice loss during training. We used batch normalization and ReLUs (Rectified Linear Units), and trained the networks with the ADAM (Adaptive Moment Estimation) optimizer. Finally, we used ensemble techniques to improve the segmentation results.
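The two loss functions compared in this work can be sketched as follows; the foreground weight and the blending factor shown here are illustrative assumptions, not the values used in our experiments.

import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    # Soft dice over all pixels of the foreground channel (walls or doors).
    y_true = tf.reshape(y_true, [-1])
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    return 1.0 - (2.0 * intersection + eps) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + eps)

def weighted_cross_entropy(y_true, y_pred, pos_weight=10.0, eps=1e-7):
    # Pixel-wise cross entropy that up-weights the sparse foreground pixels.
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    loss = -(pos_weight * y_true * tf.math.log(y_pred)
             + (1.0 - y_true) * tf.math.log(1.0 - y_pred))
    return tf.reduce_mean(loss)

def combined_loss(y_true, y_pred, alpha=0.5):
    # Optional blend of the two losses; alpha is an assumed ratio.
    return alpha * dice_loss(y_true, y_pred) + (1.0 - alpha) * weighted_cross_entropy(y_true, y_pred)

# Assumed usage: model.compile(optimizer="adam", loss=dice_loss)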
Figure 2. Architecture of DarkNet53.
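The decoder described above can be sketched as follows. Since DarkNet53 is not bundled with Keras, a small stand-in encoder is used here, and the channel widths are illustrative assumptions; only the structure (five deconvolution layers, with a skip connection added before each deconvolution after the first) follows the description in the text.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_toy_segmentation_net(num_classes=1, input_size=256):
    # Encoder: a small stand-in for DarkNet53 producing features at 1/2 ... 1/32 resolution.
    inputs = layers.Input((input_size, input_size, 3))
    x, skips = inputs, []
    for filters in (32, 64, 128, 256, 512):
        x = layers.Conv2D(filters, 3, strides=2, padding="same",
                          activation="relu")(x)
        skips.append(x)

    # Decoder: five deconvolution layers restore the original size; a skip
    # connection is concatenated before each deconvolution after the first.
    for i, skip in enumerate(reversed(skips)):
        if i > 0:
            x = layers.Concatenate()([x, skip])
        x = layers.Conv2DTranspose(max(512 >> i, 32), 3, strides=2,
                                   padding="same", activation="relu")(x)

    outputs = layers.Conv2D(num_classes, 1, activation="sigmoid")(x)
    return Model(inputs, outputs)

model = build_toy_segmentation_net()
model.summary()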
2.2. Centerline-Based Approach
In order to convert the images of the walls into vector data, we first used the centerline detection process. Since most of the walls in the floorplan were straight lines, we represented them using polylines. We then applied corner detection algorithms to the output image. We constructed a node-edge graph using the output image, and then combined the two outputs into one to generate the wall and door outputs in vector form. We then applied the corner detection method and postprocessed the door image to generate the final door segment.
2.2.1. Centerline Detection
The final outcome required the exact coordinates of every corner of the wall segments and the edges between them. To accomplish this, the gap between the position of the real corners and the position of the candidates extracted from the image should be minimized, and isolated corners must be connected properly. The centerline (i.e., the collection of pixels in the middle of each wall) can be used as a guideline for two reasons. First, when using the Harris corner detection method (Section 2.2.2), the locations of corners can be found by identifying the feature points in the image. However, if we use the wall image from segmentation as the input, we find that the thickness of the wall causes inaccuracies in the locations of the corners. The location of the detected corners has errors in the x or y direction by half of the wall thickness compared to the exact location of the end points of each wall. In addition, due to the thickness of the wall, multiple corner points are detected on the inside and outside of the wall at the edges.

Second, the centerline is simply a raster form of the grid, and the pixels do not fall apart (Figure 3). By applying classical image processing, we can convert the image into a node-edge graph. As described in Figure 3, each pixel can work as a node and be connected with every adjacent pixel to create edges. Consequently, the overall appearance of the building wall can be viewed, which makes connecting corners possible.
Figure 3. Centerline detection and node-edge graph.
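The conversion from a centerline raster to a node-edge graph can be sketched as follows, assuming the centerline is available as a binary NumPy array; the use of 8-connectivity is an assumption, not a detail specified in the text.

import numpy as np

def centerline_to_graph(skeleton: np.ndarray):
    # Each foreground pixel becomes a node keyed by (row, col); edges connect
    # 8-adjacent foreground pixels. Only half of the neighbour offsets are
    # checked so that each edge is added exactly once.
    rows, cols = np.nonzero(skeleton)
    nodes = set(zip(rows.tolist(), cols.tolist()))
    edges = set()
    offsets = [(0, 1), (1, -1), (1, 0), (1, 1)]
    for r, c in nodes:
        for dr, dc in offsets:
            neighbour = (r + dr, c + dc)
            if neighbour in nodes:
                edges.add(((r, c), neighbour))
    return nodes, edges

# Example usage with a tiny L-shaped centerline.
skeleton = np.zeros((5, 5), dtype=np.uint8)
skeleton[2, 1:4] = 1   # horizontal run
skeleton[2:5, 3] = 1   # vertical run
nodes, edges = centerline_to_graph(skeleton)
print(len(nodes), "nodes,", len(edges), "edges")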
According to Gramblicka and Vasky [26], it is very important to select the correct algorithm when performing centerline detection. We found that the walls included in the drawings were composed of rectangles with long horizontal and short vertical sides and many perpendicular intersections. The algorithm proposed by Huang et al. [27], which improved the classical centerline detection algorithm presented by Boudaoud et al. [28], fit our data well and showed great performance when handling noise. It was, therefore, selected for use in this study.

2.2.2. Corner Detection

As shown in Figure 3, a node-edge graph can be a wall segment on its own. However, to express the wall with the minimum number of corners and straight lines, useless nodes should be removed. Only the nodes representing “real corners” (Figure 4), rather than those along the edge of the wall, should be included. To identify the nodes corresponding to the corners of the wall, we conducted the corner detection process. A corner can be defined as a “visually angular” point in the image. The methods used for corner detection can be classified into signal intensity, contour, and template methods, according to the searching algorithm. Among them, the signal intensity method is suitable for processing linear-shaped features such as walls. Zhang et al. [29] and Harris and Stephens [30] also reported that classical algorithms based on signal intensity show strong performance in point searching. Therefore, in this study, we used Harris corner detection, which is a representative signal intensity method, as our corner detection algorithm.

Figure 4. Corners of the wall: (a) corner type a; (b) corner type b.

2.2.3. Wall Segments and Thickness
To generate wall thickness attributes, the method of Jae et al. [31] was used. We used vector data as the input, which were simplified by combining the output of the Harris corner detection process and the node-edge graph from the centerline image. The vector data and the original raster image of the walls were overlapped, and the kernels moved using 1-pixel-width slides along the wall to determine the wall thickness (Figure 5). Wall thickness data are stored separately, such that they can be used to create spatial information, such as CityGML and IndoorGML.
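The moving-kernel idea can be sketched as follows: for a given centerline edge, the binary wall mask is sampled in 1-pixel steps perpendicular to the edge and the foreground run length is taken as the local thickness. This is a simplified illustration under those assumptions, not the exact implementation of [31].

import numpy as np

def estimate_thickness(wall_mask: np.ndarray, p0, p1, samples=10):
    # p0 and p1 are the (row, col) end points of a centerline edge.
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    direction = p1 - p0
    length = np.linalg.norm(direction)
    if length == 0:
        return 0.0
    normal = np.array([-direction[1], direction[0]]) / length
    thicknesses = []
    for t in np.linspace(0.1, 0.9, samples):
        centre = p0 + t * direction
        count = 1  # the centerline pixel itself
        for sign in (+1, -1):
            step = 1
            while True:
                # Step outwards perpendicular to the edge, one pixel at a time.
                r, c = np.round(centre + sign * step * normal).astype(int)
                if (0 <= r < wall_mask.shape[0] and 0 <= c < wall_mask.shape[1]
                        and wall_mask[r, c]):
                    count += 1
                    step += 1
                else:
                    break
        thicknesses.append(count)
    return float(np.median(thicknesses))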
Figure 5. Restoring wall thickness using the moving kernel method.

2.2.4. Door Detection
Similar to the wall image, the door image contains door objects in the form of raster pixels. It is used to create a vector object representing the doors, in a manner similar to the wall image processing. Since the floorplans used in this study are very complex and diverse in nature, every door is assumed to be in the shape shown in Figure 6a to increase overall accuracy. Following Or et al. [32], the assumptions regarding the shape of the doors are listed below:

• Every door consists of two orthogonal straight lines and one additional arc;
• The lengths of the two lines are almost identical;
• The arc is always a quarter of a circle;
• In some cases, one of the two straight lines is omitted.

Figure 6. Process of door extraction: (a) raw image; (b) rasterized door; (c) extracted points; (d) final output.

We applied a process similar to that described in Section 2.2.2 to extract the corners and determine the direction and shape of each door. Only three isolated points were produced from the door segmentation images (Figure 6b), without any other geometry, as shown in Figure 6c. To overcome this, we used these points to construct a virtual triangle, calculated the distances between the edges and corners of every wall, and decided on the direction. Finally, the two nearest points from the corners on the edge of the wall segment were used to make a line as a door segment, as shown in Figure 6d.
3. Results

3.1. Data Generation

From EAIS (the Korean government's web system for the computerization of architectural tasks such as building licensing, construction, and demolition), we downloaded floorplan images and manually annotated them for pixel-wise ground truth generation. To precisely reproduce the geometry of the architectural plan, we labeled various structural features, such as air duct shafts (AD) and access panels (AP) (Figure 7).
Figure 7. Examples of pixel-wise ground truth generation: (a) source image; (b) ground truth.

3.2. Results of Experiments

We used a total of 319 floorplan images divided into 246 training sets, 25 validation sets, and 47 test sets through random sampling for CNN training, validation, and hyperparameter tuning of Harris corner detection. The results of the experiments are described as follows. First, we demonstrate the results of extracting the wall and door segments while preserving the actual wall thickness using CNNs. Second, we demonstrate the results of converting the raster images into node-edge structure vector data.

3.2.1. Wall and Door Segmentation Results

The results are reported according to the model used, the loss function used, and whether the R-FP dataset was added during the training process. We did not add R-FP datasets to the door segmentation experiments because there are no pixel-wise annotations for the doors. All the following results are reported on the test sets. We used intersection over union (IoU) to evaluate the results.

The results of wall segmentation without R-FP datasets are shown in Table 2, whilst Table 3 shows the results with R-FP datasets. When the R-FP data were used during the training process, the networks yielded better results. In addition, when we used dice loss, we found a higher IoU than with weighted cross entropy loss. Among the various models used, the modified DarkNet53 produced the best results.

Table 2. Results of wall segmentation without R-FP datasets. Note: WCE = weighted cross entropy; GCN = global convolutional network; PSPNet = pyramid scene parsing network.

Network Dice Loss WCE Loss
DarkNet53 0.679 0.6578
GCN 0.633 0.6299
Deeplab v3 plus 0.612 0.6066
PSPNet 0.566 0.5637

Table 3. Results of wall segmentation with R-FP datasets.

Network Dice Loss WCE Loss
DarkNet53 0.688 0.6699
GCN 0.657 0.6352
Deeplab v3 plus 0.643 0.6125
PSPNet 0.572 0.5802

Table 4 shows the results of door segmentation. We found that, similar to wall segmentation, using the modified DarkNet53 and dice loss yielded better performance.

Table 4. Results of door segmentation with R-FP datasets.

Network Dice Loss WCE Loss
DarkNet53 0.7805 0.6233
GCN 0.6532 0.5849
Deeplab v3 plus 0.5605 0.4308
PSPNet 0.7405 0.6116

The door segmentation results showed better performance than wall segmentation, because in most cases the shape of the doors is uniform. However, the thickness, length, and types of walls vary across floorplan datasets. To improve wall segmentation performance, we ensembled three models that were trained with different hyperparameters, yielding an IoU of 0.7283. Table 5 shows the results of segmentation. As shown, the walls and doors are well segmented, and the results could be used as input data for vector generation.

Table 5. Wall and door segmentation results.

No Image Ground Truth (Wall) Prediction (Wall) Ground Truth (Door) Prediction (Door)
78 (image) (image) (image) (image) (image)
95 (image) (image) (image) (image) (image)
290 (image) (image) (image) (image) (image)

3.2.2. Vector Generation Results
The hyperparameters of Harris corner detection were set to blocksize = 2, ksize = 7, and k = 0.05 using the validation set. The improved centerline detection method generated relatively clear and simple data, as shown in Table 6. In particular, unintended branches and holes caused by thick wall pixels did not appear in our results. The intermediate results of the preceding steps are summarized in Table 7, using floorplan No. 290 as an example, from the original image to the final vector result. Pixels from wall segmentation were converted to a centerline and then to a node-edge graph. The corner detection method was applied simultaneously to indicate the exact locations of the corner points, and finally the vector data were generated. Further results are provided in Table 8.
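For reference, the Harris corner detection step with the hyperparameters listed above can be reproduced with OpenCV as in the following sketch; the synthetic input and the response threshold are illustrative assumptions.

import cv2
import numpy as np

# Synthetic cross-shaped centerline standing in for a real centerline image.
centerline = np.zeros((64, 64), dtype=np.float32)
centerline[32, 8:56] = 255.0
centerline[8:56, 32] = 255.0

# Harris response with the hyperparameters reported above.
response = cv2.cornerHarris(centerline, blockSize=2, ksize=7, k=0.05)

# Keep responses above 1% of the maximum; this threshold is an assumption.
rows, cols = np.nonzero(response > 0.01 * response.max())
print(list(zip(rows.tolist(), cols.tolist()))[:10])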
the validation
thethe theset.
validation set. Improved
validation
Improved set.centerline
the validationImproved
centerline set.detection
Improved
centerline
detection method
centerline
detection
method generated
detection
method
generated relatively
methodclear
generated
relatively and
generated
relatively
clear and simple
clear
simple data
relatively
and
data clear and
simple data simple data
theinvalidationthe
the validation
set. Improved
validation set.
set. Improved
centerline
Improved centerline
detection
centerline detection
method
detection method
generated
method generated
relatively
generated relatively
clear
relativelyand clear
simple
clear and
and data simple data
as validation
asshown
shown the
in as
asTable set.
Tablethe
shown
validation
shown 6. Improved
6.validation
Especially,
in
as
as
in Table
set.
shown
Especially,
shown
Table centerline
6.
Improved
6.set.
inin Improved
unintended
Especially,
Table
unintended
Table
Especially, detection
centerline
6.
6. centerline
branches
unintended
Especially,
branches
Especially,
unintended method
detection and generated
detection
unintended
and structure
branches
method
structure
unintended
branches method
and relatively
with
branches
with
branches
and generated
hole
structure
generated hole
structure and
and dueclear
due
with
relatively toand
structure
to the
hole
clear
the
structure
with hole simple
relativelythick
dueand
with
thick
with
due to data
clear
theand
wall
simple
hole
wall
tohole
the thick
due
due
thick tosimple
simple
data
to wall
the thick
the
wall
data
data
thick wall
wall
asasshown
shown in Table
as shown
shown
in Table 6.
as
in
as Especially,
shown
Table
shown
6. 6.in
in
Especially, unintended
Table
Especially,
Table 6.
6.
unintended branches
Especially,
unintended
Especially, branches and
unintended
branches
unintended
and structure branches
and
structure with
structure
branches with hole
and
and due
structure
with
structure
hole due to
holethe
to due
with
thethick
with hole
to
hole
thick wall
the due
thick
due
wall to
to the
wall
the thick
thick wall
wall
pixels
pixels as
didn’t
didn’t pixels
appearas
appear shown
in
didn’t
in Table
in our
our in6.
appear
pixels Table
Especially,
work.
didn’t
work. inThe 6.
The
our Especially,
appear unintended
results
work.
results in from
The
our
from unintended
results
work. branches
previous
previous Thefrom branches
works
works and
previous
results from
are and
structure
are works structure
summarized
previous
summarized with
are hole
inwith
summarized
works
in due
Table
Table arehole
to
7, 7, due
the
using
in to
thick
Table
summarized
using thewall
7, thick
using
in Table wall
7, using
pixels didn’t pixels
pixelsappear
didn’t inpixels
didn’t
pixels
pixels our
appear
didn’t
appear
work.
didn’t
didn’t in
inThe
our
appear
appear
our
appear work.
work.
results
in
in
in our
our
The
our
The work.
from
work.
results
results
work. previous
The
The
The
from
from results
results
previous
works
previous
results from
from
from
are previous
works
summarized
previous
works
previous are
are works
works
summarized
in are
summarized
works Table
are
are 7,summarized
in Table 7, in
using
summarized
in7,
summarizedTable in
7,from
in Table
using
Table
Table 7, using
using
7, using
thepixels
the didn’t
original
original the
pixels appear
image
image
the pixels
original
didn’tto
original indidn’t
tothe
the
the our
the
image
appear work.
finalappear
in
original
final
original
image to
vector
to The
vector
our the in
image
image
the results
our
result
final
work.result
final to
to work.
the
of
the
vector from
of
vector
The finalThe
floorplan
final previous
floorplan
results result
result results
from
vector
vectorof No.
ofNo. works
from
result
290
result 290
floorplan
previous
floorplan asof
of are
previous
as an
No.
works summarized
floorplan
an
No. works
example.
290
are
example.
floorplan
290 as
as an
No.
No.an in
are
Pixels
summarized Table
summarized
example.
290
Pixels
290
example.as from
from
as in
an
an using
wallin
Pixels
Table
example.
wall
example.
Pixels 7,
fromTable
using 7,7,
wall
Pixels
Pixels
wall
using
using
from
from wall
wall
thethe
original
the
originalimage
original
image to
the
the tothe
image final
original
original
the to
final vector
image
the
image final
vector result
to
to the
vector
the
result of
final
final
offloorplan
resultvector
vector
floorplanof No.
result
floorplan
result
No. 290of
of
290as
No. an
floorplan
floorplan
as example.
290
an asNo.
No.an
example. Pixels
290 as
example.
290 as
Pixelsanfrom
an Pixels wall
example.
example.
from from
wall Pixels
Pixelswall from
from wall
wall
the
segmentation
segmentation werethe
original
were
segmentation
segmentation original
image
changed to
were
segmentation
changed wereto
segmentation image
the
to changedto
final
were
centerline
changed the
centerline
were tovector
to final
and vector
result
centerline
changed
and
changed to
converted
centerline of
converted result
floorplan
and
centerline
and to
to centerline toof floorplan
converted No.
node-edge
and
node-edge
converted 290
to No.
as an
graph. 290
node-edge
converted
to graph.
and converted node-edge toas
example.
Corner
to an
Corner
graph.example.
node-edge
node-edge
graph. Pixels
detection
Corner
detection Pixels
from
graph. wall
detectionfrom
Corner
graph.detection
Corner wall
Corner detectiondetection
segmentation were
segmentation
segmentation changed
segmentation
were
segmentation to to
changedcenterline
were
were changed
to and
centerline
changed converted
tocenterline
to centerline
and location
centerline to
convertednode-edge
and
and converted graph.
to node-edge
node-edge
converted to Corner
tonode-edge
node-edge
graph.
node-edge detection
graph.
Corner
graph. Corner
detection
Corner detection
detection
method
method was
ISPRS
was Int. J.were
used
method
segmentation
used
method was
Geo-Inf.
was changed
segmentation
simultaneously
2020,
simultaneously
method
used 65
waswere
used9,changed
were
method centerline
to
used
simultaneouslychanged
toindicate
simultaneously
was used indicate
to and
to
to
centerlinethe
simultaneously
the
simultaneously
to converted
exact
indicate
exact
indicateand tothe
location
tothe to
and
exact
converted
indicate
indicate
exactofnode-edge
converted
of corner
location
to
the
corner
the
location exact graph.
to
points,
of
points,
exact of corner
location
and
location
corner Corner
and finally
points,
graph.
of detection
graph.
corner
finally
of corner
points, vector
and
Corner
vector
and Corner
finally
detection
points,
points,
finally and
and 10detection
vector
vector finally
of 16
finally vector
vector
method
method was
method
was used
used simultaneously
method
was
method used was
was
simultaneously used
used to
simultaneously indicate
simultaneously
to
simultaneously
to indicate the the exact
indicate to
to
exact location
theindicate
exact
indicate
location ofthecorner
location
theof exact
exact
corner points,
of location
corner
location
points, and of finally
corner
points,
of
andcorner and
finally vector
points,
finally
points,
vector and
and vectorfinally
finally vector
vector
data
data were method
were data
generated. method
generated.was
were
data wereFurther used was
Further
generated.
data
data were
were used
simultaneously
results
results simultaneously
Further
generated.
are
generated. are to
provided
results indicate
Further
provided
Further arein to
in indicate
the
Table
provided
results
Table exact
are
8. 8. inthe exact
location
Table
provided 8. location
inof corner
Table of
8. corner
points, points,
and and
finally finally
vector vector
data were generated.
data were data
generated.
Further
were
generated.
data were resultsFurther
generated.
Further
generated.
results
areresults
provided
Further
Further are
are inresults
provided
results Table
provided
results are
are
are
8. provided
in
provided
in
Table 8.in Table 8.
Tablein
provided in
8.in Table
Table 8.
data were
data generated.
were data Further
were
generated. results
generated.
Further are
Further
resultsprovided
results
are in
provided Table
are 8.Table
provided
in 8. Table 8.8.
Table 6.Table 6. Centerline
Centerline
Table Tabledetection
6.detection
Centerline results
6.detection
Centerline results
with two
resultswith two
algorithms.
with
detection twoalgorithms.
results algorithms.
with two algorithms.
Table 6. Centerline
Table 6.detection
Tableresults
Centerline with two
6.detection
Centerline algorithms.
detection
results withresults with two algorithms.
two algorithms.
Table 6. Centerline
Table detection
Table
6. Centerline
Table 6.results with
Centerline
detection two
results algorithms.
detection
with results with two
two algorithms. algorithms.
Table 6.Table
Centerline
Table 6. 6.
detection
6. Centerline Centerline
Centerline
detection detection
results detection
with two
results results
algorithms.
results
with twowithwith two
two algorithms.
algorithms.
algorithms.
Wall Image
Wall image Wall image
Wall image Wall imageClassical
Wall
Classical Classical
algorithm
Classical
image
Wall algorithm
image
Algorithm
[28]
[28]
Classical Classical
Improved
Classical
algorithm [28]
Improved
algorithm algorithm
Improved
algorithm
[28] algorithm
algorithm [28] Improved
[27]
algorithm
[28][27]
Improved Improved Algorithm
Improved[27]
algorithm [27] [27]
algorithm[27]
algorithm [27]
Wall image
Wall imageClassical
Wall
Wall algorithm
image
Classical
image [28]
algorithmImproved
Classical
Classical [28] algorithm
algorithm
Improved
algorithm [28] [27]
[28] Improved
Improved
algorithm
Improved algorithm
[27]
algorithm [27]
[27]
Wall image Wall
Wall image image
Classical algorithm
Classical [28]
Classical
algorithm Improved
algorithm
[28] algorithm
[28]
Improved [27]
algorithm algorithm
[27] [27]

Table Table 7. Intermediate


Table7.7.Intermediate
Intermediate
Table
Table 7. products
7. Intermediate
Intermediate
Table7.
products
Table products
7.and
and and
Intermediate
input/output
Intermediate
products andinput/output
input/output
products and ofofthe
products
products and
and
input/output ofof
theprocess.ofthe
process.
input/output theprocess.
process.
input/output
input/output
the process.ofthe
of theprocess.
process.
Table 7. Intermediate
Table products
Table
7. Intermediate
Table 7. and input/output
7.Intermediate
Intermediate
products and of the
products and
input/output
products and process.
input/output of
of the process.
input/output of the
the process.
process.
Table 7.Table
Intermediate
Table products
7. and
Intermediate input/output
products of
and the process.
input/output
7. Intermediate products and input/output of the process. of the process.
OriginalImage
Original
Original Image
Original Image
Image
Original Image Segmentation
Original Image
Segmentation
Original Image Image Segmentation
Segmentation
Image
Segmentation
Segmentation Image
Image Centerline
ImageCenterline
Segmentation Image Centerline
Image Centerline Centerline
Centerline
Centerline
Original Image
Original Image Segmentation
Original
Original Image ImageSegmentation
ImageSegmentation Image Centerline
Segmentation Image Centerline
Image Centerline
Centerline
OriginalOriginal
Image Original
Image Image Segmentation Image
Segmentation
Segmentation Image Image Centerline
Centerline Centerline

Node-Edge Node-Edge
Graph
Node-Edge Graph GraphCorner
Node-Edge
CornerGraph
Node-EdgeNode-Edge
Graph Detection
GraphCorner Detection
Detection Vector
Corner Detection
Vector
Corner Detection
Corner Detection Vector
Vector Vector Vector Vector
Node-Edge
Node-Edge Graph
Graph
Node-Edge Graph Corner
Node-Edge
Node-Edge Graph
GraphCorner
Corner Detection
Detection Corner
Detection
Corner Vector Vector
Detection
Detection Vector
Vector
Node-Edge Graph
Node-Edge
Node-Edge Corner
Graph Graph Detection
Corner Detection
Corner Detection Vector Vector Vector

Table8.8.Generated
Table Generated
Table 8.vector
Table 8.
vector outputs
Generated
Table
outputs
Table
Generated 8. with
vector
8.vector
with original
outputs
Generated
original
Generated and
with
vector
and
vector
outputs with segmentation
original
outputs
segmentation
outputs with
original and images.
andoriginal
with segmentation
original
images.and
segmentation images.
andsegmentation
segmentation
images. images.
images.
Table 8. Table 8.8.Generated
Generated
Table vector
Table
Generated
Table vector
outputs
8.outputs outputs
with
Generated
vector with
original
vector
outputs and original
outputs
with with
original and
segmentation
and segmentation
images.
original and
segmentation images.
segmentation
images. images.
Table 8.Table
Generated
Table 8. 8.
vector
8. Generated Generated
Generated withvector
vector
vector outputs outputs
original andwith
outputs
with with original
segmentation
original original
and and segmentation
images.
and segmentation
segmentation images.
images. images.
OriginalImage
Original Image
Original Image
Original Image Segmentation
Original
Segmentation
Original Image Image Segmentation
Segmentation
Image Segmentation
Image Segmentation
Image Vector
ImageVector Output
Image
Output
Image Vector Output
Vector Output
VectorOutput
Vector Output
Original
OriginalImage
Image
Original Image
OriginalSegmentation
Original Image ImageSegmentation
Image Segmentation
Segmentation Image Vector
Segmentation
Image Image
Image Output Vector
Vector Output Output
Vector
Vector Output
Output
OriginalOriginal
Image Original
Image Segmentation
Image Image
Segmentation
Segmentation Image Image Vector Output Vector
Vector Output Output

ISPRS Int.ISPRS Int.ISPRS


J. Geo-Inf.J.2020,Int.
Geo-Inf. J.2020,
9, 65 Geo-Inf.
9, 652020, 9, 65 12 of 17 12 of 17 12 of 17
ISPRS Int.ISPRS Int.ISPRS
J. Geo-Inf. J. Int.
9, 65J.J.2020,
Geo-Inf.
ISPRS
2020, Int. Geo-Inf.
9, 652020,
Geo-Inf. 2020, 9,
9, 65
65 12 of 17 12 of 17 12
12 of
of 17
17

ISPRS Int. J. Geo-Inf. 2020,


ISPRS9, Int.
65 J. Geo-Inf. 2020, 9, 65 12 of 17 12 of 17

ISPRS Int. J. Geo-Inf. 2020, 9, 65 11 of 16

Table 8. Cont.

Original Image Segmentation Image Vector Output

From the vector data and attributes, the CityGML LOD2 model and the cell spaces of the IndoorGML thick wall model were generated automatically. Every postprocess was performed with FME (Feature Manipulation Engine) by Safe Software, the results of which are shown in Table 9. The CityGML LOD2 model consists of wall, interior wall, ground surface, and door information. The cell spaces of IndoorGML, on the other hand, were generated by combining the extracted wall and door edges, closing the spaces in the floorplan.

Table 9. GML data model.

CityGML LOD2                                    IndoorGML CellSpace
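To make the thick wall conversion more concrete, the following minimal Python sketch (using the Shapely library, not the FME workflow applied in this study) buffers hypothetical node-edge wall segments by their thickness attribute and recovers the enclosed area; all coordinates and thickness values are invented for illustration.

# Minimal sketch: turning node-edge wall segments with a thickness attribute
# into thick-wall polygons. Illustrative only; the input edges and thickness
# values are hypothetical and this is not the FME postprocess used here.
from shapely.geometry import LineString
from shapely.ops import unary_union

# Each wall edge: ((x1, y1), (x2, y2), thickness in pixels)
wall_edges = [
    ((0, 0), (100, 0), 6),
    ((100, 0), (100, 80), 6),
    ((100, 80), (0, 80), 6),
    ((0, 80), (0, 0), 6),
]

# Buffer each centerline by half its thickness (flat caps keep wall ends square)
# and merge the resulting rectangles into a single wall footprint.
wall_polygons = [LineString([p1, p2]).buffer(t / 2.0, cap_style=2)
                 for p1, p2, t in wall_edges]
wall_footprint = unary_union(wall_polygons)

# The interior rings of the merged footprint approximate the enclosed cell
# space bounded by the walls, in the spirit of an IndoorGML CellSpace.
for polygon in getattr(wall_footprint, "geoms", [wall_footprint]):
    for ring in polygon.interiors:
        print("enclosed cell boundary (first points):", list(ring.coords)[:3])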
We evaluated the results of vector data generation using a confusion matrix. To determine whether nodes were correctly extracted or not, we compared their locations with the locations of nodes in the original image (Figure 8).
Figure 8. Comparison of (a) nodes generated by ground truth with (b) extracted nodes.
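As an illustration of this location-based comparison, the short Python sketch below matches extracted node coordinates to ground truth node coordinates within a pixel tolerance and derives precision, recall, and F1; the tolerance and the sample coordinates are hypothetical, and the exact matching rule used in the study may differ.

# Illustrative sketch of location-based node evaluation: an extracted node is
# counted as a true positive if its nearest ground-truth node lies within
# `tol` pixels and has not been matched yet. Tolerance and points are made up.
import numpy as np
from scipy.spatial import cKDTree

def score_nodes(pred_nodes, gt_nodes, tol=5.0):
    pred = np.asarray(pred_nodes, dtype=float)
    gt = np.asarray(gt_nodes, dtype=float)
    tree = cKDTree(gt)
    matched_gt = set()
    tp = 0
    for dist, idx in zip(*tree.query(pred, k=1)):
        if dist <= tol and idx not in matched_gt:
            matched_gt.add(idx)      # each ground-truth node may be matched once
            tp += 1
    fp = len(pred) - tp              # extracted nodes with no ground-truth match
    fn = len(gt) - tp                # ground-truth nodes that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example with three extracted nodes and two ground-truth nodes.
print(score_nodes([(10, 10), (50, 52), (90, 90)], [(10, 12), (50, 50)]))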

The F1 score was 0.8506, which indicates that the extracted nodes are appropriate for use as end points of the edges (Table 10). Table 8 demonstrates that more corners were detected than the actual number of corners, indicating that the false positive value is relatively larger than the false negative value. Extra corners were often detected due to misalignment of the raster pixels on the diagonal (Figure 9a). Uncleaned noise in the segmentation image resulted in corners on the cross-section of walls and furniture being regarded as corners. For example, we did not include the outline of an indoor entity in the ground truth label (Figure 9b,c), but misclassified the pixels around the entity as a wall, leading to the generation of two incorrect horizontal vector walls (Figure 9d). If we remove unnecessary nodes of diagonal components and improve the performance of segmentation, we can expect greater accuracy.
Table 10. Confusion matrix of node extraction.

                          Prediction
    Actual            Positive        Negative
    Positive          2198            267
    Negative          386             0

Figure 9. Wrong Prediction: (a) floorplan with diagonal wall; (b) original floorplan with furnishing; (c) ground truth pixel; (d) prediction pixel; (e) generated vector.

Finally, wall thickness, as measured by the number of pixels occupied by each wall, was added to the automatically generated vector. An example of the measurement of the thickness of each wall by the method described in Section 2.2.3 is shown in Figure 10. Of the numbers in parentheses, the first number represents the wall thickness from the results of wall segmentation, and the second number represents the thickness from the ground truth.

Figure 10. Number of pixels (estimated, ground truth).

In addition, we defined a "thickness error" index to evaluate the wall thickness result for all test sets. The equation of the thickness error is as follows:

\text{Thickness Error} = \frac{1}{n} \sum \frac{t_{seg} - t_{gt}}{t_{gt}}    (1)
where $t_{seg}$ is the wall thickness from the segmentation result and $t_{gt}$ is the wall thickness from the ground truth result.
The thickness error of 10 images was obtained by random sampling of the test sets, producing an error value of 0.224. This suggests that we can expect an average error of 1 pixel per five wall pixels. However, this value may be an overestimate, as the error from thin walls raises the overall thickness error. In fact, the $t_{seg} - t_{gt}$ values of almost 80% of the walls were lower than the estimated overall thickness error. Thus, we conclude that the wall thickness attribute is sufficiently accurate for use in generating indoor spatial data, such as in CityGML or IndoorGML.
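As a small worked illustration of Equation (1), the snippet below computes the relative thickness error for a few hypothetical (t_seg, t_gt) pairs, using the magnitude of the deviation as the error value is interpreted above; the numbers are invented and are not taken from the test sets.

# Illustrative computation of the thickness error of Equation (1): the mean
# relative deviation of segmented wall thickness from ground-truth thickness.
# The (t_seg, t_gt) pairs below are hypothetical.
def thickness_error(pairs):
    return sum(abs(t_seg - t_gt) / t_gt for t_seg, t_gt in pairs) / len(pairs)

walls = [(5, 5), (4, 5), (7, 6), (6, 5)]   # (t_seg, t_gt) in pixels
print(round(thickness_error(walls), 3))    # 0.142 for these made-up values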

4. Discussion
In this paper, we proposed a process for automatically extracting vector data from floorplan images, which serves as the basis for constructing indoor spatial information. To the best of our knowledge, this is the first study that takes OGC standards into account when extracting base data for constructing indoor spatial information from floorplan images. Our main contributions are as follows. First, we recovered vector data from floorplan images that can be used to generate indoor spatial information with CityGML LOD2 or the cell spaces of the IndoorGML thick wall model, which could not be accomplished with the image results of previous studies. Second, we generated high-quality segmentation results by adopting a modern CNN architecture, which enabled us to generate reliable node-edge graph-structured vector data with a thickness attribute. Third, we recovered both orthogonal and diagonal lines using recently developed centerline detection algorithms.
To further demonstrate the contributions of our work, we briefly compare it with previous methodologies. Segmenting entities in floorplan images is not an easy task; therefore, generating fine vector data had been challenging and was rarely attempted before a centerline detection method was applied to the output of learning-based algorithms by [11]. However, because the segmentation result still contained noise and bumpy shapes, the output of the centerline detection method did not look clean; it included many branches, caused by the characteristics of the centerline detection method itself, that had to be removed. This made it difficult to extract nodes in a practical manner. We first overcame these problems by improving the segmentation accuracy using a modern deep learning technique. Although a deep learning technique had already been applied in previous work to segment the entities, that work did not consider the generation of vector data and focused only on the image processing itself. Furthermore, the FCN-2s model in that work was based on FCN, the earliest and most primitive network architecture for segmentation, while our work used Darknet-53 as a backbone, which has proven to be an efficient architecture for classification and object detection tasks. In addition, we took advantage of modern training techniques, such as the Adam optimizer, dice loss, and data augmentation. With high-quality wall and door segmentation images obtained, the recently developed centerline algorithm was then applied, which produced reasonable centerlines with minimal branches and smoother edges. Because the centerline follows the real shape of the walls, a flexible wall description became possible. Previous vector generation methods seldom expressed the complex structure of wall edges successfully, dealing only with rectangular shapes or orthogonal lines, whereas our approach includes diagonal edges as well. In addition, we went beyond generating vector data by visualizing examples of real spatial data documents defined by OGC standards, such as CityGML and IndoorGML, to show the utility of our results. To our knowledge, this had not been attempted before, and a fully designed method for using 2D floorplan images to generate vector data that can serve as spatial data was presented here. Moreover, because floorplan images are easy to acquire, we expect that generating a large amount of indoor spatial information automatically, in an accessible and affordable manner, will become possible.
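For reference, the dice loss mentioned above can be written compactly; the NumPy sketch below is a generic formulation for a binary wall mask, not the exact training implementation used in this work.

# Generic dice loss for a binary segmentation mask, as commonly defined:
# loss = 1 - 2*|P ∩ G| / (|P| + |G|), with a small epsilon for stability.
# Illustrative formulation only, not the training code used in the paper.
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    pred = pred.astype(float).ravel()      # predicted wall probabilities in [0, 1]
    target = target.astype(float).ravel()  # binary ground-truth wall mask
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

mask = np.array([[0, 1], [1, 1]])
print(dice_loss(mask, mask))   # ~0.0 for a perfect prediction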
The limitations of our work are as follows. In this study, walls and doors are considered the main objects constituting spatial data. However, there are other objects, such as windows and stairs, that could also fulfill this role; they should be included in further studies to produce semantically rich data. In addition, every coordinate is a relative coordinate that cannot be visualized in widely used viewers, such as the FME inspector or the FZKViewer developed by the Karlsruhe Institute of Technology. The coordinate system connecting the real world and the generated data should be considered and included in the documents. From a technical viewpoint, the intermediate nodes on diagonal edges should be removed so that a single edge is not expressed with more than two nodes.
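As a possible remedy for the diagonal-edge issue, redundant interior nodes can be dropped when they are (nearly) collinear with their neighbours, so that a straight run is described by its two end nodes only; the Python sketch below is a simple illustration with a hypothetical tolerance, not a method evaluated in this study.

# Illustrative pruning of redundant nodes on a polyline: an interior node is
# removed when it is (nearly) collinear with its two neighbours, so a straight
# diagonal run keeps only its two end nodes. The tolerance is hypothetical.
def prune_collinear(nodes, tol=1e-3):
    if len(nodes) <= 2:
        return list(nodes)
    kept = [nodes[0]]
    for prev, cur, nxt in zip(nodes, nodes[1:], nodes[2:]):
        # Twice the triangle area spanned by (prev, cur, nxt); ~0 means collinear.
        cross = (cur[0] - prev[0]) * (nxt[1] - prev[1]) - (cur[1] - prev[1]) * (nxt[0] - prev[0])
        if abs(cross) > tol:
            kept.append(cur)
    kept.append(nodes[-1])
    return kept

# A diagonal wall sampled as many raster-aligned nodes collapses to its endpoints.
print(prune_collinear([(0, 0), (1, 1), (2, 2), (3, 3)]))   # [(0, 0), (3, 3)]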

Author Contributions: Conceptualization, Hanme Jang and JongHyeon Yang; methodology, Hanme Jang and
JongHyeon Yang; software, Hanme Jang and JongHyeon Yang; validation, JongHyeon Yang; formal analysis,
Hanme Jang; investigation, Hanme Jang and JongHyeon Yang; resources, Kiyun Yu; data curation, Hanme Jang
and JongHyeon Yang; writing—original draft preparation, Hanme Jang; writing—review and editing, Hanme
Jang and JongHyeon Yang; visualization, Hanme Jang and JongHyeon Yang; supervision, Kiyun Yu; funding
acquisition, Kiyun Yu. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by National Spatial Information Research Program (NSIP), grant number
19NSIP-B135746-03 and the APC was funded by Ministry of Land, Infrastructure, and Transport of the
Korean government.
Acknowledgments: The completion of this work could not have been possible without the assistance of many people in the GIS/LBS lab of the Department of Civil and Environmental Engineering at Seoul National University. Their contributions are sincerely appreciated and gratefully acknowledged. To all relatives, friends, and others who shared their ideas and provided financial and physical support, thank you.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Xu, D.; Jin, P.; Zhang, X.; Du, J.; Yue, L. Extracting indoor spatial objects from CAD models: A database
approach. In Proceedings of the International Conference on Database Systems for Advanced Applications,
Hanoi, Vietnam, 20–23 April 2015; Liu, A., Ishikawa, Y., Qian, T., Nutanong, S., Cheema, M., Eds.; Springer:
Cham, Switzerland, 2015; pp. 273–279.
2. Jamali, A.; Abdul Rahman, A.; Boguslawski, P. A Hybrid 3D Indoor Space Model. In Proceedings of the 3rd
International GeoAdvances Workshop, Istanbul, Turkey, 16–17 October 2016; The International Archives of
the Photogrammetry, Remote Sensing and Spatial Information Sciences: Hannover, Germany, 2016.
3. Shirowzhan, S.; Sepasgozar, S.M. Spatial Analysis Using Temporal Point Clouds in Advanced GIS: Methods
for Ground Elevation Extraction in Slant Areas and Building Classifications. ISPRS Int. J. Geo-Inf. 2019,
8, 120. [CrossRef]
4. Zeng, L.; Kang, Z. Automatic Recognition of Indoor Navigation Elements from Kinect Point Clouds; International
Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences: Hannover, Germany, 2017.
5. Isikdag, U.; Zlatanova, S.; Underwood, J. A BIM-Oriented Model for supporting indoor navigation
requirements. Comput. Environ. Urban Syst. 2013, 41, 112–123. [CrossRef]
6. Gimenez, L.; Hippolyte, J.-L.; Robert, S.; Suard, F.; Zreik, K. Review: Reconstruction of 3D building
information models from 2D scanned plans. J Build. Eng. 2015, 2, 24–35. [CrossRef]
7. Macé, S.; Locteau, H.; Valveny, E.; Tabbone, S. A system to detect rooms in architectural floor plan images.
In Proceedings of the 9th IAPR International Workshop on Document Analysis Systems, Boston, MA, USA,
9–11 June 2010; ACM: New York, NY, USA, 2010; pp. 167–174.
8. Rendek, J.; Masini, G.; Dosch, P.; Tombre, K. The search for genericity in graphics recognition applications:
Design issues of the qgar software system. In International Workshop on Document Analysis Systems; Springer:
Berlin, Heidelberg, September 2004; pp. 366–377.
9. Gimenez, L.; Robert, S.; Suard, F.; Zreik, K. Automatic reconstruction of 3D building models from scanned
2D floor plans. Automat. Constr. 2016, 63, 48–56. [CrossRef]
10. Ahmed, S.; Liwicki, M.; Weber, M.; Dengel, A. Improved automatic analysis of architectural floor plans.
In Proceedings of the 2011 International Conference on Document Analysis and Recognition (ICDAR),
Beijing, China, 18–21 September 2011; pp. 864–869.
11. Ahmed, S.; Liwicki, M.; Weber, M.; Dengel, A. Automatic room detection and room labeling from architectural
floor plans. In Proceedings of the 2012 10th IAPR International Workshop on Document Analysis Systems,
Gold Cost, QLD, Australia, 27–29 March 2012; pp. 339–343.
12. De las Heras, L.P.; Ahmed, S.; Liwicki, M.; Valveny, E.; Sánchez, G. Statistical segmentation and structural
recognition for floor plan interpretation. Int. J. Doc. Anal. Recog. 2014, 17, 221–237. [CrossRef]
13. Dodge, S.; Xu, J.; Stenger, B. Parsing floor plan images. In Proceedings of the 2017 Fifteenth IAPR International
Conference on Machine Vision Applications, Nagoya, Japan, 8–12 May 2017; pp. 358–361.
14. Liu, C.; Wu, J.; Kohli, P.; Furukawa, Y. Raster-to-vector: Revisiting floorplan transformation. In Proceedings
of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2195–2203.
15. Kang, H.K.; Li, K.J. A standard indoor spatial data model—OGC IndoorGML and implementation approaches.
ISPRS Int. J. Geo-Inf. 2017, 6, 116. [CrossRef]
16. Boeters, R.; Arroyo Ohori, K.; Biljecki, F.; Zlatanova, S. Automatically enhancing CityGML LOD2 models
with a corresponding indoor geometry. Int. J. Geogr. Inf. Sci. 2015, 29, 2248–2268. [CrossRef]
17. Goetz, M.; Zipf, A. Towards defining a framework for the automatic derivation of 3D CityGML models from
volunteered geographic information. Int. J. 3-D Inf. Model. 2012, 1, 1–16. [CrossRef]
18. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
19. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks.
In Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS’12),
Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
20. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference
on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969.
21. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution
for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV),
Munich, Germany, 8–14 September 2018; Springer: Cham, Switzerland, 2018; pp. 801–818.
22. Milletari, F.; Navab, N.; Ahmadi, S.A. V-net: Fully convolutional neural networks for volumetric medical
image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV),
Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
23. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890.
24. Rajchl, M.; Lee, M.C.; Oktay, O.; Kamnitsas, K.; Passerat-Palmbach, J.; Bai, W.; Damodaram, M.;
Rutherford, M.A.; Hajnal, J.V.; Kainz, B.; et al. Deepcut: Object segmentation from bounding box annotations
using convolutional neural networks. IEEE Trans. Med. Imaging 2016, 36, 674–683. [CrossRef] [PubMed]
25. Lee, J.; Kim, E.; Lee, S.; Lee, J.; Yoon, S. FickleNet: Weakly and Semi-Supervised Semantic Image Segmentation
Using Stochastic Inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 5267–5276.
26. Gramblicka, M.; Vasky, J. Comparison of Thinning Algorithms for Vectorization of Engineering Drawings.
J. Theor. Appl. Inf. Technol. 2016, 94, 265.
27. Huang, L.; Wan, G.; Liu, C. An improved parallel thinning algorithm. In Proceedings of the Seventh
International Conference on Document Analysis and Recognition, Edinburgh, UK, 6 August 2003; Volume 3,
p. 780.
28. Boudaoud, L.B.; Solaiman, B.; Tari, A. A modified ZS thinning algorithm by a hybrid approach. Vis. Comput.
2018, 34, 689–706. [CrossRef]
29. Zhang, H.; Yang, Y.; Shen, H. Line junction detection without prior-delineation of curvilinear structure in
biomedical images. IEEE Access 2017, 6, 2016–2027. [CrossRef]
30. Harris, C.; Stephens, M. A combined corner and edge detector. In Proceedings of the 4th Alvey Vision
Conference, Manchester, UK, 31 August–2 September 1988.
31. Jae Joon, J.; Jae Hong, O.; Yong il, K. Development of Precise Vectorizing Tools for Digitization. J. Korea Spat.
Inf. Soc. 2000, 8, 69–83.
32. Or, S.H.; Wong, K.H.; Yu, Y.K.; Chang, M.M.Y.; Kong, H. Highly automatic approach to architectural floorplan
image understanding & model generation. Pattern Recog. 2005.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
