

JOURNAL OF INFORMATION SYSTEMS &
OPERATIONS MANAGEMENT

Vol. 7 No. 2
December 2013

EDITURA UNIVERSITARA
Bucuresti
Foreword
Welcome to the Journal of Information Systems & Operations
Management (ISSN 1843-4711; IDB indexation: ProQuest, REPEC,
QBE, EBSCO, COPERNICUS). This journal is an open-access journal
published twice a year by the Romanian-American University.
The published articles focus on IT&C and belong to national and
international researchers and professors who want to share their
research results, exchange ideas and speak about their expertise, as
well as to Ph.D. students who want to improve their knowledge and
present their emerging doctoral research.
As a challenging and favorable medium for scientific discussion,
all issues of the journal contain articles dealing with current topics
from computer science, economics, management, IT&C, etc.
Furthermore, JISOM encourages the cross-disciplinary research of
national and international researchers and welcomes the contributions
which give a special “touch and flavor” to the mentioned fields. Each
article undergoes a double-blind review from an internationally and
nationally recognized pool of reviewers.
JISOM thanks all the authors who contributed to this journal by
submitting their work for publication, and also thanks all the
reviewers who gave their valuable time to review and evaluate
the manuscripts.
Last but not least, JISOM aims at being one of the distinguished journals
in the mentioned fields.
Looking forward to receiving your contributions,
Best Wishes
Virgil Chichernea, Ph.D.
Editor-in-Chief
JOURNAL OF INFORMATION SYSTEMS & OPERATIONS MANAGEMENT

GENERAL MANAGER
Professor Ovidiu Folcut

EDITOR IN CHIEF
Professor Virgil Chichernea

EDITORIAL BOARD

Academician Gheorghe Păun Romanian Academy


Academician Mircea Stelian Petrescu Romanian Academy
Professor Eduard Radaceanu Romanian Technical Academy
Professor Ronald Carrier James Madison University, U.S.A.
Professor Pauline Cushman James Madison University, U.S.A.
Professor Ramon Mata-Toledo James Madison University, U.S.A.
Professor Allan Berg University of Dallas, U.S.A.
Professor Kent Zimmerman James Madison University, U.S.A.
Professor Traian Muntean Universite de la Mediterranee, Aix –
Marseille II , FRANCE
Associate Professor Susan Kruc James Madison University, U.S.A.
Associate Professor Mihaela Paun Louisiana Tech University, U.S.A.
Professor Cornelia Botezatu Romanian-American University
Professor Victor Munteanu Romanian-American University
Professor Ion Ivan Academy of Economic Studies
Professor Radu Şerban Academy of Economic Studies
Professor Ion Smeureanu Academy of Economic Studies
Professor Floarea Năstase Academy of Economic Studies
Professor Sergiu Iliescu University “Politehnica” Bucharest
Professor Mircea Cirnu University “Politehnica” Bucharest
Professor Victor Patriciu National Technical Defence University,
Romania
Professor Stefan Ioan Nitchi University “Babes-Bolyai” Cluj Napoca
Professor Lucia Rusu University “Babes-Bolyai” Cluj Napoca
Professor Ion Bucur University “Politehnica” Bucharest
Associate Professor Costin Boiangiu University “Politehnica” Bucharest
Associate Professor Irina Fagarasanu University “Politehnica” Bucharest
Associate Professor Viorel Marinescu The Technical University of Civil
Engineering Bucharest
Lecturer Alexandru Tabusca Romanian-American University

Senior Staff Text Processing:


Lecturer Gabriel Eugen Garais Romanian-American University
Assistant lecturer Mariana Coancă Romanian-American University
Assistant lecturer Dragos-Paul Pop Romanian-American University
JISOM journal details 2012

No.  Item                                                Value
1    Category 2010 (by CNCSIS)                           B+
2    CNCSIS Code                                         844
3    Complete title / IDB title                          JOURNAL OF INFORMATION SYSTEMS & OPERATIONS MANAGEMENT
4    ISSN (print and/or electronic)                      1843-4711
5    Frequency                                           SEMESTRIAL
6    Journal website (direct link to journal section)    http://JISOM.RAU.RO
7    IDB indexation (direct link to journal section / search interface):
        ProQuest
        EBSCO
        REPEC        http://ideas.repec.org/s/rau/jisomg.html
        COPERNICUS   http://journals.indexcopernicus.com/karta.php?action=masterlist&id=5147
        QBE

Contact
First name and last name   Virgil CHICHERNEA, PhD, Professor
Phone                      +4-0729-140815 | +4-021-2029513
E-mail                     chichernea.virgil@profesor.rau.ro | vchichernea@gmail.com

ISSN: 1843-4711
The Proceedings of Journal ISOM Vol. 7 No. 2

CONTENTS
Editorial

Costin-Anton Boiangiu, Radu Ioanitescu
    VOTING-BASED IMAGE SEGMENTATION ................................................ 211
Fawad Khan, Professor Dr Kamran Siddiqui
    THE IMPORTANCE OF DIGITAL MARKETING. AN EXPLORATORY STUDY TO FIND THE
    PERCEPTION AND EFFECTIVENESS OF DIGITAL MARKETING AMONGST THE MARKETING
    PROFESSIONALS IN PAKISTAN ...................................................... 221
Andreea-Mihaela Pintilie, Mihai Zaharescu, Ion Bucur
    IMAGE REPRESENTATION USING PHOTONS ............................................. 229
Virgil Chichernea, Dragos-Paul Pop
    DATABASE DYNAMIC MANAGEMENT PLATFORM (DBDMS) IN OPERATIVE SOFTWARE SYSTEMS .... 239
Costin-Anton Boiangiu, Mihai Cristian Tanase, Radu Ioanitescu
    TEXT LINE SEGMENTATION IN HANDWRITTEN DOCUMENTS BASED ON DYNAMIC WEIGHTS ...... 247
Ion Ivan, Alin Zamfiroiu
    M-TOURISM EDUCATION FOR FUTURE QUALITY MANAGEMENT .............................. 264
Irina Mocanu, Tatiana Cristea
    HAND GESTURES RECOGNITION USING TIME DELAY NETWORKS ............................ 272
Mircea Ion Cîrnu
    CIRCULAR CONVOLUTION AND FOURIER DISCRETE TRANSFORMATION ....................... 280
Rauan Danabayeva
    MANAGEMENT OF INNOVATION IN THE MODERN KAZAKHSTAN: DEVELOPMENT PRIORITIES OF
    SCIENCE, TECHNOLOGY AND INNOVATION ............................................. 288
Mihai Cristian Tănase, Mihai Zaharescu, Ion Bucur
    2:1 UPSAMPLING-DOWNSAMPLING IMAGE RECONSTRUCTION SYSTEM ........................ 294
Andreea-Mihaela Pintilie, Costin-Anton Boiangiu
    STUDY OF NEUROBIOLOGICAL IMAGES BASED ON ONTOLOGIES USED IN SUPER-RESOLUTION
    ALGORITHMS ..................................................................... 300
Crișan Daniela Alexandra, Stănică Justina Lavinia
    A FUZZY COGNITIVE MAP FOR HOUSING DOMAIN ....................................... 309
Cristina Coculescu
    POSSIBILITIES OF DYNAMIC SYSTEMS SIMULATION .................................... 319
Alexandru Tăbușcă
    HTML5 – AUGMENTED REALITY, A NEW ALLIANCE AGAINST THE OLD WEB EMPIRE? .......... 325
Gabriel Eugen Garais
    CASE STUDY ON HIGHLIGHTING QUALITY CHARACTERISTICS OF MAINTAINABLE WEB
    APPLICATIONS ................................................................... 333
Camelia M. Gheorghe, Mihai Sebea
    MANAGING TECHNOLOGICAL CHANGE IN INTERNATIONAL TOURISM BUSINESS ................ 343
Qassim Al Mahmoud
    VERIFIABLE SECRET SHARING SCHEME BASED ON INTEGER REPRESENTATION ............... 350
Alexandru Pîrjan, Dana-Mihaela Petroşanu
    THE IMPACT OF 3D PRINTING TECHNOLOGY ON THE SOCIETY AND ECONOMY ................ 360
Marian Zaharia, Daniela Enachescu
    CONSIDERATIONS REGARDING THE INTERNET PURCHASES BY INDIVIDUALS IN ROMANIA AND
    EUROPE ......................................................................... 371
Andrei Tigora
    AN OVERVIEW OF DOCUMENT IMAGE ANALYSIS SYSTEMS ................................. 378
Oana Bălan, Alin Moldoveanu, Florica Moldoveanu, Anca Morar, Victor Asavei
    ASSISTIVE I.T. FOR VISUALLY IMPAIRED PEOPLE .................................... 391
Mihai Liviu Despa
    PROJECT MANAGEMENT DATA IN INNOVATION ORIENTED SOFTWARE DEVELOPMENT ............ 404
Mihălcescu Cezar, Sion Beatrice
    MODELING AND OPTIMIZING THE BUSINESS PROCESSES USING MICROSOFT OFFICE EXCEL .... 414
Anda Elena Olteanu, Mircea Ion Cîrnu
    A COMPARISON OF SOME NEW METHODS FOR SOLVING ALGEBRAIC EQUATIONS ............... 422

VOTING-BASED IMAGE SEGMENTATION

Costin-Anton Boiangiu1
Radu Ioanitescu2
ABSTRACT
When it comes to image segmentation, there is no single technique that can provide the best
possible result for any type of image. Therefore, based on different approaches, numerous
algorithms have been developed so far and each has its upsides and downsides, depending
on the input data. This paper proposes a voting method that tries to merge different results
of some well-known image segmentation algorithms into a relevant output, aimed to be, as
frequently as possible, better than any of the independent ones previously computed.

KEYWORDS: image segmentation, machine vision, voting, cluster identification, image processing
1. INTRODUCTION
In computer vision, segmentation refers to the process of partitioning an image into multiple
sets of pixels based on similarities. This is useful when it comes to extracting specific data,
for example, because it makes the image easier to analyze. Segmentation is used for locating
particular elements or boundaries (lines, curves etc.) in the image.
The result of the image segmentation process is a set of collections of pixels – or clusters
of pixels – that covers the entire image. Each of the pixels in one cluster is similar to the
others in the same cluster regarding a certain characteristic or property, such as color or
intensity, and based on the same criteria, adjacent clusters are significantly different.
This paper presents a method for segmenting an image using a voting technique that merges
independent results of several known algorithms into a single final output, aimed to be, in
as many cases as possible, better than any of the others in particular.
2. RELATED WORK
The number of image segmentation techniques has grown rapidly in the last decades and
more and more fields benefit from their advantages. Despite the wide range of applications,
segmentation techniques tend to be specific and cannot be applied across a broad range of cases.
This places a burden on the user, who has to select the right segmentation algorithm for a
given task, which is quite hard considering how complex the algorithms are and their
sometimes chaotic behavior on specific sample inputs. One such image segmentation
application example is medical data sets [1], [2], which have to address problems
like low image contrast, quality of input and high variance of input cases.
A large number of papers [3-12] have been published with the purpose of evaluating the
state of the art in the image segmentation domain and trying to explain
what the best choices are for a given set of image characteristics. However, this documentation

1 Associate Professor PhD Eng., ”Politehnica” University of Bucharest Romania, 060042 Bucharest,
costin.boiangiu@cs.pub.ro
2 Engineer, European Patent Office (EPO) Germany, Bayerstrasse 34, Munich, rioanitescu@epo.org
includes only a limited number of segmentation algorithms, which are hard to evaluate and
fine-tune for the specifics of every image.
In [13] a generic evaluation framework for image segmentation is presented. Ana Fred
proposes [14] a majority voting combination of clustering algorithms. In short, a set of
partitions is obtained by running multiple clustering algorithms and then the outputs are
merged into a single result. The main idea behind majority voting is that the judgment of a
group is superior to that of the individuals.
Other papers like [15] try to use voting for the removal of noise and other inherent artifacts that
result from the segmentation process, with efforts also going into providing specific
mathematical models and methods [16] and probabilistic analysis in order to evaluate the
relative quality of segmentation [17].
The technique described in [14] is briefly presented next. The results of the
clustering algorithms are mapped into a matrix, called the co-association matrix, where each
entry (i, j) represents the number of times the elements (pixels, in our case) i and j were found
in the same cluster. A pseudo-code for the algorithm is:
1. Input: a set of A partitions obtained from running A different clustering algorithms
   on the initial data set of N samples
2. Output: a partition of the initial data set
3. Initialization: set the co-association matrix to a null N x N matrix
4. For each clustering algorithm, update the co-association matrix:
   a. for a = 1 to A, let P be the a-th partition from the input set, corresponding to
      the result of the a-th algorithm
   b. for every sample pair (i, j) that belongs to the same cluster in P, set
      coassoc(i, j) = coassoc(i, j) + 1/A
5. Obtain the output partition by thresholding on the matrix values:
   a. for each sample pair (i, j) with coassoc(i, j) > threshold, combine i and j in the
      same cluster and also merge their clusters, if necessary
   b. for each remaining sample, form a one-element cluster
6. Return the result.
This served as a starting point for the implementation of the voting image segmentation
algorithm proposed by this paper.
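
As an illustration of the steps above, a minimal Python/NumPy sketch of the co-association voting is given here; the function names and the use of SciPy's connected-components routine for the thresholding step are choices made for this example, not the reference implementation of [14].

import numpy as np
from scipy.sparse.csgraph import connected_components

def co_association(partitions, n_samples):
    # partitions: list of A label arrays, each of length n_samples,
    # where labels[k] is the cluster id assigned to sample k.
    A = len(partitions)
    coassoc = np.zeros((n_samples, n_samples))
    for labels in partitions:
        labels = np.asarray(labels)
        # Samples sharing a label in this partition vote for each other.
        coassoc += (labels[:, None] == labels[None, :]) / A
    return coassoc

def vote_partition(coassoc, threshold=0.5):
    # Pairs whose co-association exceeds the threshold end up in the same
    # cluster; every sample left unconnected becomes a one-element cluster.
    adjacency = coassoc > threshold
    _, labels = connected_components(adjacency, directed=False)
    return labels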
3. AUXILIARY ALGORITHMS
Different segmentation algorithms have been designed, each with its own qualities and shortcomings,
and articles like [3], [7], [9] have been dedicated to studying the different trade-offs that
have to be made.
This section briefly describes the individual image segmentation algorithms used in this
paper to demonstrate the proposed concepts, with an emphasis on their upsides and
downsides in terms of over/under-segmentation, i.e., computing more or fewer segments
for a given input image than would ideally be expected. An in-depth analysis of each
is not made, as this is not the purpose of the paper.
A series of representative algorithms from different categories in the image segmentation
field are considered in order not to restrain the potential application domain. The paper

makes use of the following segmentation techniques: histogram based segmentation, region
growing based on neighboring pixels, segmentation using graphs, watershed transformation
and mean shift segmentation.
Histogram based segmentation
This algorithm focuses exclusively on pixel intensity in an image. The general idea behind
it is the use of local minima and maxima as the boundaries for a cluster of pixels. Thus, the
histogram of the image is required.

Figure 1. Histogram of a grayscale image


Fig. 1 displays a typical histogram of an 8-BPP (256 levels) grayscale image. The “peaks”
represent the local maxima and the lower values represent the local minima. Between each
adjacent minima/maxima or maxima/minima pair lies a cluster of pixels.
This technique is described in detail in [18], [19] and is known to give good results when
the foreground of the image is clearly delimited from its background. Because the pixel
distribution in the image is not taken into account, using this method on other types of
images does not lead to very relevant results. A histogram based segmentation algorithm is
generally considered to over-segment the input image.
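
To make the idea concrete, a rough Python/NumPy sketch that splits an 8-bit grayscale image at histogram valleys is shown below; the smoothing width and the valley-detection rule are assumptions made for this example, not parameters taken from [18], [19].

import numpy as np

def histogram_segment(gray):
    # gray: 2-D uint8 array. Each pixel is labeled by the histogram
    # interval (valley to valley) that its intensity falls into.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    # Smooth the histogram slightly so tiny fluctuations are not valleys.
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")
    # A valley is a bin lower than both of its neighbours.
    valleys = [i for i in range(1, 255)
               if smooth[i] < smooth[i - 1] and smooth[i] < smooth[i + 1]]
    # Assign each pixel the index of the interval containing its value.
    return np.digitize(gray, np.array(valleys))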
Region growing based segmentation
This is a simple concept, based on the assumption that an object does not generally change
color attributes in an abrupt manner. Therefore, neighboring pixels (that belong to the object
in question) should have relatively close intensity values. Thus, starting with a random pixel
and “growing” the region around it, pixels with similar characteristics can be found based
on a homogeneity criterion – close intensity values – and clustered in the same collection.
Optimized implementations of the region growing segmentation algorithms are the seeded
region growing described in [20] and the adaptive region growing described in [21]. Both
algorithms produce satisfactory results on most images, but generally have a tendency
towards over-segmentation.
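
A simplified region-growing sketch in Python follows (4-connectivity, fixed intensity tolerance relative to the seed); it only illustrates the basic concept and is neither the seeded [20] nor the adaptive [21] variant.

import numpy as np
from collections import deque

def region_growing(gray, tolerance=10):
    # Grow 4-connected regions of pixels whose intensity stays within
    # `tolerance` of the region seed; returns an integer label map.
    h, w = gray.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx]:
                continue
            current += 1
            seed_val = int(gray[sy, sx])
            labels[sy, sx] = current
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                            and abs(int(gray[ny, nx]) - seed_val) <= tolerance):
                        labels[ny, nx] = current
                        queue.append((ny, nx))
    return labels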
Graph based segmentation
The technique used is based on the “minimal spanning tree” concept and is described in
detail in [22], [23]. In short, a graph is built with pixels as nodes and edges that have an
associated weight measuring the dissimilarity of the connected pixels. The
segmentation is obtained by partitioning the graph into connected components so that the
edges between nodes in the same component have lower weights (lower dissimilarity)
than the edges between nodes in different components.

The algorithm is relatively accurate, under-segmenting the input image in some cases.
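
For quick experimentation, an off-the-shelf implementation of the Felzenszwalb-Huttenlocher method [22] is available in scikit-image; the parameter values below are arbitrary examples and the input file name is hypothetical, not a setting used in this paper.

from skimage import io
from skimage.segmentation import felzenszwalb

image = io.imread("input.png")   # hypothetical input image
# scale favours larger components, sigma pre-smooths the image,
# min_size merges away components smaller than the given pixel count.
labels = felzenszwalb(image, scale=100, sigma=0.8, min_size=50)
print(labels.max() + 1, "segments found")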
Watershed based segmentation
The Vincent-Soille algorithm is the classic approach when it comes to watershed based
segmentation. A detailed description of this technique can be found in [24].
A prerequisite for watershed transformation is blurring the input image first, so applying a
Gaussian filter to it is a recommended pre-processing step. Even so, the algorithm generally
tends to strongly over-segment the image.
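
A possible blur-then-flood pipeline, sketched with SciPy and scikit-image, is shown below; the marker-selection rule (connected low-gradient areas) is an assumption made for this example and not part of the Vincent-Soille procedure [24] itself.

import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_segment(gray, sigma=2.0, flat_quantile=0.3):
    # Blur first (the recommended pre-processing step), take the gradient,
    # seed markers in flat areas, then flood the gradient surface.
    smoothed = ndi.gaussian_filter(gray.astype(float), sigma=sigma)
    gradient = sobel(smoothed)
    markers, _ = ndi.label(gradient < np.quantile(gradient, flat_quantile))
    return watershed(gradient, markers)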
Mean shift based segmentation
The theoretical background behind this approach can be found in [25] and an actual
implementation is given in [26].
Most of the times the results obtained with this algorithm are accurate, with a slight
tendency for under-segmentation.
4. THE VOTING ALGORITHM
The technique proposed is a variation of the more general algorithm based on majority
voting clustering presented in the second section of this paper. Our algorithm mainly
follows the same processing steps, with two major differences however: not all voting
algorithms have the same “power of decision” and the actual implementation takes into
consideration – and avoids – using too much of the system’s resources.
Coping with memory requirements
When images are used as input data, each pixel is a valid sample, so the total number of
samples is given by the width of image times its height. Considering this and the fact that
the co-association matrix is defined as an N x N matrix (where N is the number of samples),
the result is that a very large structure will be needed to store the data. This could easily
result in exceeding the addressing capabilities of a 32 bit operating system. To avoid such
a scenario, a square-shaped sliding window across the image is used and only the pixels
inside it are processed at a given time. The co-association matrix is thus dependent on the
size of that window – adjustable by the user – instead of the image’s entire size.
After the matrix is filled with data, the second step of the algorithm takes place, where
pixels are being merged into clusters. This is done by assigning a label to those pixels – all
the pixels in a cluster will be given the same label, which is different from labels associated
with pixels in adjacent clusters. Once this is complete, the window slides forward and the
process is repeated.
Another aspect is the increment used to slide the window. If it were to slide by its entire
size, then different clusters would be obtained in each region of the image the window
overlaps, although a certain cluster could easily span across multiple such regions. A
workaround to this is not to slide the window by its entire size, but rather by its size minus
one row of pixels – both horizontally and vertically, where applicable. This way an overlap
with previously considered regions is obtained and the erroneous splitting of a pixel set is
avoided, because the first row and/or column of pixels are already labeled and therefore
newly considered pixels can be added to previously found clusters.
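
A schematic Python loop for this traversal is given below, stepping by the window size minus one so that consecutive windows overlap by one row/column of already-labeled pixels; the labeling callback is a placeholder for the voting step described in the next subsection.

def process_with_sliding_window(height, width, win_size, label_window):
    # label_window(y0, x0, y1, x1) is assumed to run the voting and
    # labeling step on the given rectangle, reusing labels already
    # assigned on its first row/column.
    step = max(1, win_size - 1)   # one row/column of overlap between windows
    y = 0
    while True:
        x = 0
        while True:
            y1 = min(y + win_size, height)
            x1 = min(x + win_size, width)
            label_window(y, x, y1, x1)
            if x1 >= width:
                break
            x += step
        if y1 >= height:
            break
        y += step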
Figure 2. The sliding window technique


Fig. 2 displays the process described above. The sliding window is shown in two
consecutive iterations – the black thick frame and the grey thick frame. Notice the
overlapping area between the two windows, which allows the grey pixels inside the second
one to be merged into the cluster found by the first one.
Weighted voting
Because each of the segmentation algorithms used had its “strengths and weaknesses”, a
method of selecting the better results was needed. The main choice criterion was the one
used for analysis in the third section of our paper: the over/under segmentation tendency of
an algorithm. Over-segmentation implies that different clusters contain pixels with the same
characteristics and thus can be combined into a larger one. Conversely, under-segmentation
represents the case when clusters span regions with different characteristics and should
be split.
The assumption used was that if an algorithm is known to strongly over-segment an image
and determines that two pixels are in the same cluster, then this is highly probable to be
true. Analogously, if an algorithm is known to strongly under-segment an image and
determines that two pixels are not in the same cluster, then this is highly probable to be true
as well. Considering all the aforementioned assumptions, a performance evaluation parameter
was introduced for each considered segmentation algorithm - called the over-segmentation
factor. This parameter ranges from 0 to 1, where 0 represents the factor for the most extreme
under-segmentation algorithm which would consider all pixels to be in one cluster, 1 the
factor for the most extreme over-segmentation algorithm which will consider each pixel to
have its own cluster, and 0.5 being the optimal segmentation algorithm which depends on
the image characteristics.
Based on the dynamic analysis of the output results of the independent segmentation
algorithms used, an over-segmentation factor is associated with each as follows: a low one
(user defined) for the algorithm that produces the fewest clusters, a high one (also user
defined) for the algorithm that produces the most clusters and a variable one to the other
algorithms (computed using linear interpolation based on the number of clusters they
produce). This is a preprocessing step. Next comes the filling of the co-association matrix:
when an algorithm finds two pixels that are/aren’t in the same cluster, a value proportional
to its over-segmentation factor is added/subtracted to/from the corresponding matrix
element. Thus the “weight” of each segmentation algorithm is determined in a manner that
encourages its decision if the probability of it being true is high.
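
A sketch of how the over-segmentation factors and the weighted matrix update could be computed in Python is given below; the linear interpolation follows the description above, while the weight applied when two pixels fall in different clusters is this example's interpretation of the text, not necessarily the authors' exact rule.

import numpy as np

def over_segmentation_factors(cluster_counts, low=0.2, high=0.8):
    # Interpolate a factor in [low, high] for every algorithm, from the one
    # producing the fewest clusters to the one producing the most.
    counts = np.asarray(cluster_counts, dtype=float)
    if counts.max() == counts.min():
        return np.full(len(counts), (low + high) / 2.0)
    return low + (counts - counts.min()) / (counts.max() - counts.min()) * (high - low)

def weighted_vote(coassoc, labels_per_algorithm, factors):
    # One weighted vote per algorithm: pixels sharing a cluster gain the
    # algorithm's factor; otherwise a weight of (1 - factor) is subtracted,
    # so under-segmenters are trusted on splits (an assumed reading).
    for labels, factor in zip(labels_per_algorithm, factors):
        labels = np.asarray(labels)
        same = labels[:, None] == labels[None, :]
        coassoc += np.where(same, factor, -(1.0 - factor))
    return coassoc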

The second step of the original approach remains the same – mainly clusters are determined
using a threshold value on the matrix. This is also user-defined.
Because the lowest and the highest values of the over-segmentation factor, as well as the
threshold value used in the cluster creation process are user-defined, the accuracy of the
final result is dependent on how well these parameters are chosen.
Complexity analysis
For the first step of the algorithm, the complexity is O(N²). The second step introduces the
cluster merging, which is done in O(log N) per operation using a disjoint-set forest (union-find)
with path compression. Therefore the total complexity of the algorithm is O(N² log N).
It should be noted that the complexity depends on the image size in pixels, so N represents the
image width times its height.
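
A compact Python sketch of such a disjoint-set forest (union-find with path compression and union by size) is shown below for reference; it is illustrative and not the authors' code.

class DisjointSet:
    # Disjoint-set forest with path compression and union by size,
    # giving near-constant amortized time per find/union operation.
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:       # path compression pass
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra                # attach the smaller tree under the larger
        self.size[ra] += self.size[rb]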
5. EXPERIMENTAL RESULTS
The algorithm performed well in general, giving good results on several test images
considered. The system used for testing had an Intel® Core™2 Duo 2.4 GHz Processor and
2 GB RAM.
The first test considered was the classic “Lena”, a perfect image for its natural color balance
and great range of frequencies encountered across different areas:

Figure 3. Lena input image


The results obtained when running the algorithm set on the image shown in Fig. 3 are
presented in Fig. 4 (the outputs for each independent algorithm, as well as the final output
of the voting algorithm).

Figure 4. Lena segmentation processing results. From left to right and top to bottom: Region
segmentation, Histogram segmentation, Watershed segmentation, Mean shift segmentation, Graph
segmentation and (the final result) the Voting segmentation

As can be seen in Fig. 4, the region growing, histogram and watershed transformation based
algorithms tend to over-segment the input image – especially watershed, which therefore
will have the highest over-segmentation factor associated. On the other hand, it can be
observed that the mean shift and graph segmentation methods tend to under-segment the
image. In this case, the graph based segmentation algorithm will have the lowest over-
segmentation factor associated.
The voting-based segmentation algorithm achieves a balance between the over- and under-
segmentation characteristics of the individual algorithms.
The current application is built with extensibility in mind, taking a framework approach
where adding a new algorithm is as simple as implementing an interface. The framework is
efficient because it runs all configured input segmentation algorithms in parallel.
The approach presented in this paper is an important part of a bigger project: a complete,
modular, fully automatic content conversion system developed for educational purposes.
Once finished, the system will allow large batches of documents to be processed fully
automatically and as a result more complex algorithms like [27] will be employed to
provide input segmentations. The voting process tuning will also benefit from the
availability of more diverse test data, allowing every input algorithm to adjust over/under
segmentation by using its running history.
Here are some other test images and the associated results for each of them (only the final
voting based result is shown):

Figure 5. Cameraman segmentation processing results
Figure 6. Livingroom segmentation processing results

Figure 7. Mandril segmentation processing results
Figure 8. Peppers segmentation processing results

Figure 9. Pirate segmentation processing results
Figure 10. Text segmentation processing results

Figure 11. Walkbridge segmentation processing results
Figure 12. Blonde woman segmentation processing results

Figure 13. Museum segmentation processing results
Figure 14. Question mark segmentation processing results

Figure 15. Map segmentation processing results


In the following table are presented the times obtained (for the voting algorithm only) on
the considered testing system:
Input image      Size (pixels)   Color depth (bits)   Run time (seconds)
Lena             512 x 512       24                   22.03
Cameraman        512 x 512        8                   19.14
Livingroom       512 x 512        8                   13.26
Mandril          512 x 512       24                   13.82
Peppers          512 x 512        8                   14.44
Pirate           512 x 512       24                   14.48
Text             975 x 821        8                   50.40
Walkbridge       512 x 512        8                   12.62
Blonde woman     512 x 512        8                   15.20
Question mark    300 x 375       24                   11.21
Museum           584 x 504       24                   23.48
Map              512 x 512       24                   16.16

Table 1. Output result analysis

6. CONCLUSIONS AND FUTURE WORK


The aim of this work is to propose a weighted voting algorithm that segments an image
based on the results of other well-known segmentation algorithms. The algorithm copes
with large memory requirements, takes into account the tendency of the individual
segmentation algorithms considered to over- or under-segment, and shows encouraging
accuracy results.
Using the proposed algorithm, the results obtained were promising for most images tested.
It can be therefore concluded that the idea of using several segmentation algorithms and
merging their output to obtain a relevant result is viable.
Another algorithm optimization approach could use a feedback loop by considering the
results of some segmentation algorithms to adjust the input of the other segmentation
algorithms from the framework. By knowing the general characteristics of a segmentation
algorithm and its results on an image, some meaningful information about the processed
image can thus be derived and looped back either into the voting process or the individual
segmentation parameters.
As future work there are two improvements to be taken into consideration. The first is
adding an extra post-processing step for eliminating all the small clusters which may appear
as being noise in the final result. The second is dynamically adapting the threshold
parameter used for the merging/splitting of clusters, thus ensuring a higher chance of getting
improved output results for a very varied range of input images.
7. ACKNOWLEDGMENT
The authors would like to thank Constantin Manoila and Lucian-Ilie Calin, for their great
ideas, support and assistance with this paper.
8. REFERENCES
[1] P. Herghelescu, M. Gavrilescu, V. Manta, "Visualization of Segmented Structures in 3D
Multimodal Medical Data Sets", Advances in Electrical and Computer Engineering, vol. 11, no.
3, 2011, pp. 99 - 104.
[2] L. D. Pham, X. Chenyang and J. L. Prince, "Current Methods in Medical Image Segmentation".
Annual Review of Biomedical Engineering, vol 2, pp. 315–337, 2000.
[3] S. K. Pal, N. R. Pal, “A Review on Image Segmentation Techniques”, Pattern Recognition, Vol.
26, No. 9, pp. 1277-1294, 1993.
[4] Fu K.S.,Mui J.K., "A survey on image segmentation", Elsevier Pattern Recognition journal,
Volume 13, Issue 1, pp. 3–16, 1981.
[5] R.M. Haralick, L.G. Shapiro, "Image Segmentation Techniques", CVGIP: Graphical Models
and Image Processing vol. 29, pp. 100-132, 1985
[6] A. Hoover, "An experimental comparison of range image segmentation algorithms", IEEE
Transactions on Pattern Analysis and Machine Intelligence, Volume 18, Issue 7, pp. 673-689.
[7] A. A. Aly, S. B. Deris, N. Zaki, Research review for digital image segmentation techniques,
International Journal of Computer Science & Information Technology (IJCSIT) vol. 3, no. 5,
oct 2011
[8] Y.J. Zhang, “A survey on evaluation methods for image segmentation,” Pattern Recognition,
vol. 29, no. 8, pp. 1335C1346, 1996.

[9] H. Zhang, J. E. Fritts, S. A. Goldman, “Image segmentation evaluation: a survey of unsupervised


methods”, Computer Vision and Image Understanding vol. 110, Elsevier Science Inc., New
York, pp. 260-280, 2008.
[10] J. Freixenet, X. Munoz, D. Raba, J. Martí and X. Cufí, "Yet Another Survey on Image
Segmentation: Region and Boundary Information Integration", Lecture Notes in Computer
Science, vol. 2352/2002, pp. 21-25, 2002.
[11] K. McGuinness, N. E. O’Connor, “A comparative evaluation of interactive segmentation
algorithms”, Pattern Recognition vol. 43, Elsevier Science Inc., New York, pp. 434-444, 2010.
[12] S. Chabrier, H. Laurent, B. Emile, C. Rosenburger, and P. Marche, “A comparative study of
supervised evaluation criteria for image segmentation”, EUSIPCO, pp. 1143-1146, 2004.
[13] J. S. Cardoso and L. Corte-Real, “Toward a Generic Evaluation of Image Segmentation”, IEEE
Transactions on Image Processing, No. 11, pp. 1773-1782.
[14] A. L. N. Fred, Finding consistent clusters in data partitions, Lecture Notes in Computer Science
vol. 2096, Springer-Verlag, London, pages 309-318, 2001.
[15] P. Jonghyun, T. K. Nguyen and L. Gueesang, "Noise Removal and Restoration Using Voting-
Based Analysis and Image Segmentation Based on Statistical Models", Energy minimization
methods in computer vision and pattern recognition, Lecture Notes in Computer Science, vol
4679/2007, 242-252, 2007.
[16] P. Correia and F. Pereira, “Objective evaluation of relative segmentation quality”, Image
processing, vol 1, pp. 308-311, 2000.
[17] K. Vincken, A. Koster and M. Viergever: Probabilistic multiscale image segmentation, IEEE
Transactions on Pattern Analysis and Machine Intelligence, 19:2, pp. 109–120, 1997.
[18] L. G. Shapiro, G. C. Stockman (2001): “Computer Vision”, New Jersey, Prentice-Hall, ISBN 0-
13-030796-3, pp 279-325, 2001.
[19] R. Ohlander, P. Keith, R. D. Raj, "Picture Segmentation Using a Recursive Region Splitting
Method", Computer Graphics and Image Processing, vol. 8, no. 3, pp. 313–333, 1978.
[20] R. Adams, L. Bischof: "Seeded Region Growing", IEEE Transactions on Pattern Analysis
Machine Intelligence, vol. 16, no. 6, pp. 641-647, 1994.
[21] Y. L. Chang, X. Li: “Adaptive Image Region-Growing, IEEE Transactions on Image
Processing”, vol. 3, no. 6, pp. 868-872, 1994.
[22] P. F. Felzenszwalb, D. P. Huttenlocher, "Efficient graph-based image segmentation",
International Journal of Computer Vision vol. 59, Kluwer Academic Publishers, Hingham, pp.
167-181, 2004.
[23] J. Shi, J. Malik, “Normalized Cuts and Image Segmentation”. IEEE Trans. Pattern Analysis and
Machine Intelligence, Vol. 22, No. 8, pp. 888-905, 2000.
[24] J. Roerdink, A. Meijster, “The watershed transform: definitions, algorithms and parallelization
strategies”, Fundamenta Informaticae vol. 41, IOS Press, Amsterdam, pp. 187-288, 2000.
[25] D. Comaniciu, P. Meer, “Mean shift: a robust approach towards feature space analysis”, IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 24, IEEE Computer Society,
Washington DC, pp. 603-619, 2002.
[26] C. M. Christoudias, B. Georgescu, P. Meer: “Synergism in low level vision”. 16th International
Conference on Pattern Recognition., Quebec City, Canada, August 2002, vol. IV, 150-155,
2002.
[27] E. Ganea, D. D. Burdescu, M. Brezovan, "New Method to Detect Salient Objects in Image
Segmentation using Hypergraph Structure", Advances in Electrical and Computer Engineering,
vol. 11, no 4, pp. 111-116, 2011

THE IMPORTANCE OF DIGITAL MARKETING. AN EXPLORATORY STUDY
TO FIND THE PERCEPTION AND EFFECTIVENESS OF DIGITAL
MARKETING AMONGST THE MARKETING PROFESSIONALS IN PAKISTAN

Fawad Khan1
Professor Dr Kamran Siddiqui2
ABSTRACT
The purpose of this exploratory research is to present the perceptions towards Digital
Marketing in Pakistan. This issue has rarely been addressed by the academicians and
researchers in Pakistan and elsewhere. This study used digital marketing parameters to
measure the awareness and effectiveness of digital marketing among marketing
professionals in Pakistan. 200 marketing professionals participated in this academic
exercise. Data was analyzed in two ways: a) through descriptive statistics and b) by summarizing
the data using factor analysis. Four major perception groups emerged from the
analysis, i.e., a) Skeptical, b) Enthusiast, c) Utilitarian and d) Parsimonious. The results
suggest that professionals in Pakistan are more skeptical towards digital marketing tools
and concepts. They do not fully understand the benefits of digital marketing in terms of
growth and cost effectiveness. Finally, the limitations of the study and its findings are
presented.

Key words: SEO, Google Analytics, META tags, Blogs


1. INTRODUCTION
There are not many studies conducted in Pakistan in the area of digital marketing. Digital
marketing is rapidly emerging as a new concept that is being aggressively adopted internationally
for marketing success. Today, social media channels such as Facebook, Twitter,
Google and other social media firms have successfully transformed the attitudes and
perceptions of consumers and, in the end, helped revolutionize many businesses. This was
done through a measurable, vast network of customers with trustworthy data and real-time
feedback of customer experiences.
It is much more convenient for businesses to conduct surveys online with the purpose of getting
relevant information from targeted groups and analyzing the results based on their
responses. Potential customers can look for reviews and recommendations to make
informed decisions about buying a product or using a service. On the other hand,
businesses can use the exercise to act on relevant feedback from customers and
meet their needs more accurately.
Change is constant and with time new ideas are accepted and adopted. For industry players
to realize the power of online marketing, its advantages must be clearly highlighted.

1 DHA Suffa University,Phase VII (Ext), DHA, Karachi-75500, PAKISTAN., e-mail:


fawad.khan@dsu.edu.pk, Tel: +923022914846, Fax: +9221-35244855
2 DHA Suffa University, Phase VII (Ext), DHA, Karachi-75500, PAKISTAN., e-mail: ksiddiqui@dsu.edu.pk,
Tel: +9221-35244865, Fax: +9221-35244855
2. LITERATURE REVIEW
The purpose of doing research in the area of digital marketing is that the field can seem huge,
intimidating and foreign. Businesses are looking for a clearer picture but do not know
where or how to start doing digital marketing.
as Face book, Twitter, Google and other social media firms have successfully transformed
the attitudes and perceptions of consumers and in the end helped revolutionized many
businesses. This was done through measurable vast network of customers with trustworthy
data with real-time feedback of customer experiences.
It is much more convenient for businesses to conduct surveys online with a purpose to get
relevant information from targeted groups and analyzing the results based on their
responses. Potential customers can look for reviews and recommendations to make
informed decisions about buying a product or using the service. On the other hand,
businesses can use the exercise to take action on relevant feedback from customers in
meeting their needs more accurately.
Digital marketing is the use of technologies to help marketing activities in order to improve
customer knowledge by matching their needs (Chaffey, 2013).
Marketing has been around for a long time. Business owners felt the need to spread the
word about their products or services through newspapers and word of mouth. Digital
marketing, on the other hand, is becoming popular because it utilizes mass media devices like
television, radio and the Internet. The most common digital marketing tool used today is
Search Engine Optimization (SEO). Its role is to maximize the way search engines like
Google find your website.
The digital marketing concept originated with the Internet and search engines' ranking of
websites. The first search engine, started in 1991, used a network protocol called Gopher
for query and search. After the launch of Yahoo in 1994, companies started to maximize
their ranking on the web (Smyth 2007).
When the Internet bubble burst in 2001, the market was dominated by Google and Yahoo for
search optimization. Internet search traffic grew in 2006, and search engine
optimization rose for major companies like Google (Smyth 2007). In 2007, the usage of
mobile devices drastically increased Internet usage on the move, and people all over the
world started connecting with each other more conveniently through social media.
In the developed world, companies have realized the importance of digital marketing. In
order for businesses to be successful they will have to merge online with traditional methods
for meeting the needs of customers more precisely (Parsons, Zeisser, Waitman 1996).
The introduction of new technologies has created new business opportunities for marketers to
manage their websites and achieve their business objectives (Kiani, 1998).
With the availability of so many choices for customers, it is very difficult for marketers to
create brands and increase traffic for their products and services. Online advertising is a
powerful marketing vehicle for building brands and increasing traffic for companies to
achieve success (Song, 2001). In terms of producing results and measuring
success for the advertising money spent, digital marketing is more cost-efficient for
measuring ROI on advertisement (Pepelnjak, 2008).
Today, monotonous advertising and marketing techniques have given way to digital
marketing. In addition, it is so powerful that it can help revive the economy and can create
tremendous opportunities for governments to function in a more efficient manner (Munshi,
2012). Firms in Singapore have tested the success of digital marketing tools as being
effective and useful for achieving results. (Teo, 2005). More importantly, growth in digital
marketing has been due to the rapid advances in technologies and changing market
dynamics (Mort, Sullivan, Drennan, Judy, 2002).
In order for digital marketing to deliver results for businesses, digital content characteristics such as
accessibility, navigation and speed are defined as key for marketing
(Kanttila, 2004). Another tried and tested tool for achieving success through digital marketing
is the use of word of mouth (WOM) on social media to make the site popular (Trusov,
2009). In addition, WOM is linked with attracting new members and increasing traffic on the
website, which in return increases visibility in terms of marketing.
Social media, with Facebook as an extraordinary example, has opened the door for businesses
to communicate with millions of people about products and services and has created new
marketing opportunities in the market. This is possible only if managers are fully aware
of how to use communication strategies to engage customers and enhance their
experience (Mangold, 2009). Marketing professionals must truly understand online social
marketing campaigns and programs and know how to run them effectively with
performance measurement indicators. As market dynamics all over the world are
changing in relation to young audiences' access to and usage of social media, it is
important that strategic integration approaches are adopted in the organization's marketing
communication plan (Rohm & Hanna, 2011).
Blogs as a tool for digital marketing have successfully created an impact on increasing
sales revenue, especially for products where customers can read reviews and write
comments about personal experiences. For businesses, online reviews have worked really
well as part of their overall marketing strategy (Zhang, 2013). Online service
tools are more influential than traditional methods of communication (Helm, Möller,
Mauroner, Conrad, 2013). One study has shown that users experience an increase in
self-esteem and enjoyment when they adopt social media, which is itself a motivating sign
for businesses and marketing professionals (Arnott, 2013). Web experiences affect the
mental processes of consumers and enhance their online buying decisions (Cetină, Cristiana,
Rădulescu, 2012). This study is very valuable for marketing professionals as it highlights
the importance of digital marketing.
The Internet is the most powerful tool for businesses (Yannopoulos, 2011). Marketing
managers who fail to recognize the importance of the Internet in their business marketing
strategy will be at a disadvantage, because the Internet is changing brand, pricing,
distribution and promotion strategies.
Pakistan has seen tremendous growth in media, with 20 million people having access to the
Internet, but marketers still insist on doing things the traditional way (Mohsin 2010).
Management and structure in Pakistan are still based on an ancient paradigm, while customers
are moving ahead with their demands and expectations. This gap is widening day by day,
with the limited skills and mindset available in Pakistan unable to solve the problem for
demanding customers. Companies in Pakistan, including the MNCs, are going the traditional way and
keeping the digital aspect just for show, to appear in tune with modern trends.
3. METHODOLOGY
Sampling: The sample comprises marketing professionals in Karachi, Pakistan. Karachi is
the biggest city in Pakistan in terms of business presence and commercial activity, which is
why it was chosen for this study. Hundreds of managers working in different organizations
from media, FMCG, pharmaceuticals, airlines, automobiles, petrochemicals and education
were surveyed in Karachi. The final sample was a random selection of 200 respondents from
the city of Karachi, of which 93% are men and 17% are women.
Research Instrument: This study uses Wilska's (2003) instrument to measure the perceptions
of professionals. All adapted measures use five-point Likert scales. Various non-statistical
validity checks were made prior to the questionnaire's actual implementation. Firstly, all of
these constructs were adopted from earlier studies providing acceptably reliable and valid
measures. Secondly, these measures had acceptable reliability figures, mostly stated in terms
of a Cronbach's alpha above 0.5; they have shown a reasonable internal consistency among
the items (Cronbach's alpha > 0.50; Wilska, 2003). Finally, these measures were processed in
a systematic manner in the earlier stages of the research project. In addition to these steps,
pre-testing of the questionnaire was also performed.
Data Collection: The strategy of using advertising agencies and their clients worked really
well in terms of questionnaire administration and provided a suitable environment
for the target participants' involvement, motivation and convenience. All
questionnaires were properly filled in and a 100% response rate was achieved.
Analysis
The data was analyzed in two ways: a) descriptive statistics; b) factor analysis.
Descriptive Analysis
The results of the study indicate that the majority of the participants perceive digital
marketing as a new addition to the promotion mix, but also hold the negative perception that digital
marketing can be misleading and is not useful for word of mouth (WOM) (see Table 1).
Perceptions: Digital Marketing ...                            M      SD
... is a new avenue for promotion mix.                       2.59   .816
... may provide content not in line with our beliefs.        2.59   .816
... can be misleading.                                        2.51   .750
... rewrites contents for privacy issues.                     2.50   .750
... accelerates revenue growth.                               2.31   .726
... has low investment.                                       2.31   .726
... provides customer's participation.                        1.91   .455
... generates immediate response from customers.              1.91   .455
... attracts attention very quickly.                          1.86   .426
... is much more measurable.                                  1.85   .398
... creates marketing opportunities.                          1.85   .398
... is useful for word of mouth (WOM).                        1.85   .398

Table 1. Perceptions towards digital marketing
Regarding perceptions towards digital marketing tools and their effectiveness, it was found that the
mobile phone, in terms of SMS and MMS, has the highest value, followed by online videos,
Google ranking, website content, YouTube and Facebook. All these tools are considered
most important for implementing digital marketing practices. Surprisingly, in-depth
understanding of the technical tools of digital marketing such as webinars, pay-per-click,
Google Analytics, blogs and META tags scored low, indicating a lack of application and
understanding of these tools (see Table 2).
Digital Marketing Tools                M      SD
Mobile Phone – MMS                    4.28   .450
Mobile Phone – SMS                    4.28   .450
Online Videos                         4.28   .450
SEO – Google Rankings                 4.28   .450
SEO – Keywords Tags                   4.28   .450
Website Contents                      4.28   .450
YouTube                               4.28   .450
Social Media – Facebook               4.03   .412
Social Media – LinkedIn               4.03   .412
Social Media – Twitter                4.03   .412
Webinars                              2.84   .943
Pay-per-click                         2.83   .941
Google Analytics                      2.31   .726
Inlinks                               2.31   .726
Blogs                                 1.85   .398
E-Newsletters                         1.85   .398
SEO – Title Tags                      1.25   .431
SEO – META Tags / descriptions        1.25   .431

Table 2. Perceptions towards digital marketing tools and their effectiveness
Factor Analysis of Perceptions towards Digital Marketing
The data was analyzed in a number of stages. Firstly, exploratory factor analysis was used
to determine the factor structure of the items related to marketing professionals' perception
of digital marketing. Secondly, a summated score was calculated for the resulting digital
marketing factors, and finally individual differences were measured for the marketing
professional mindset factors. Factor analysis was conducted for the digital marketing
perception mindset scale using a multi-step process which includes three steps: (a)
extracting the factors; (b) labeling the factors; (c) creating summated scales and examining
the descriptive statistics.
Analysis of the 12 items related to the digital marketing perception scale, using the maximum
likelihood method of extraction with direct oblimin rotation, yielded a four-factor solution,
to which various criteria were then applied for refinement. Initially, the solution was
examined to determine whether all the factors satisfied the Kaiser criterion (eigenvalues > 1),
and they did. All the items loading on each separate factor were found to cohere to some
degree, and therefore they were included in their respective factors. The above analysis
resulted in a final four-factor solution, comprising 12 items, all with communality values
greater than 0.3 (see Table 3).
Factors (mean and standard deviation of the summated scores):
    Factor 1 – Skeptical      (M = 2.54, SD = 0.74)
    Factor 2 – Enthusiast     (M = 1.85, SD = 0.39)
    Factor 3 – Utilitarian    (M = 1.89, SD = 0.40)
    Factor 4 – Parsimonious   (M = 2.31, SD = 0.72)

Item ("Digital marketing ...")                            Factor          Loading
... is a new avenue for promotion mix.                    Skeptical       0.95
... may provide content not in line with our beliefs.     Skeptical       0.95
... rewrites contents for privacy issues.                 Skeptical       0.94
... can be misleading.                                    Skeptical       0.93
... is much more measurable.                              Enthusiast      0.95
... creates marketing opportunities.                      Enthusiast      0.95
... is useful for word of mouth (WOM).                    Enthusiast      0.95
... provides customer's participation.                    Utilitarian     0.92
... generates immediate response from customers.          Utilitarian     0.92
... attracts attention very quickly.                      Utilitarian     0.72
... has low investment.                                   Parsimonious    0.98
... accelerates revenue growth.                           Parsimonious    0.98

Table 3. Factor structure of perceptions towards digital marketing
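
For readers who wish to reproduce this kind of analysis, a minimal sketch using the Python factor_analyzer package is given below; the input data file, the DataFrame of 200 respondents by 12 perception items and all parameter choices merely mirror the description above and are assumptions, not the authors' actual scripts.

import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical 200 x 12 table of five-point Likert answers.
responses = pd.read_csv("perception_items.csv")

fa = FactorAnalyzer(n_factors=4, rotation="oblimin", method="ml")
fa.fit(responses)

eigenvalues, _ = fa.get_eigenvalues()       # check the Kaiser criterion (> 1)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
communalities = fa.get_communalities()      # keep items with values > 0.3

print(eigenvalues[:4])
print(loadings.round(2))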

Factor 1 was labeled 'Skeptical'. This group is more skeptical about the importance and
benefits of digital marketing. They agree to a certain extent that digital marketing is a useful
tool for promotion, but on the other hand they also think that digital marketing leads
to privacy issues and misleading information. They have the highest mean value of 2.54
and a standard deviation of 0.74.
Factor 2 was labeled 'Enthusiast'. These professionals are enthusiastic about digital
marketing concepts and excited to include them for marketing success. They take the view
that digital marketing is useful for creating marketing opportunities and have
a positive outlook. They have the lowest mean value of 1.85 and a standard deviation of 0.39.

Factor 3 was labeled 'Utilitarian'. It reflects persons who are utilitarian in nature
and more usage oriented. They use digital marketing services in their routine work, and these
services play an important role in their professional marketing job. The analysis
shows that they are keen users of digital marketing and are mainly concerned with the
utility or usefulness of digital marketing concepts and tools. They have the third highest
mean value of 1.89 and a standard deviation of 0.40.
Factor 4 was labeled 'Parsimonious'. It reflects marketing professionals who consider
that digital marketing is important in terms of cost saving but who also give high importance
to growth. They have the second highest mean value of 2.31 and a standard deviation of
0.72.

Results/Findings
The results suggest that professionals in Pakistan are more skeptical towards digital
marketing tools and concepts. They do not fully understand the benefits of digital marketing
in terms of growth and cost effectiveness.
The Parsimonious group is more in favor of the cost advantages of digital marketing and considers
it an important tool for growth. This segment of marketing professionals is using digital
marketing strategies and reflects the new knowledge and training of professionals in Pakistan.
4. CONCLUSIONS
This survey examined the perceptions of marketing professionals in Pakistan towards
digital marketing. Although digital marketing tools and concepts are taking over traditional
methods of marketing internationally, it is still a new field for professionals operating in
Pakistan.
According to this survey, professionals are skeptical about the usage and benefits of digital
marketing and have been classified as Skeptical. They do consider it an important tool
for promotion, but at the same time they are concerned about the privacy issues and misleading
information associated with digital marketing. SMS and MMS are considered the most important
tools for conducting digital marketing, which shows a lack of understanding and in-depth usage of
digital marketing tools by marketing professionals in Pakistan.
5. REFERENCES
1. AJ Parsons, M Zeisser, R Waitman, “Organizing for digital marketing”, McKinsey
Quarterly, 1996
2. Boyd, D. M. & Ellison, N. B. 2007. “Social Network Sites: Definition, History and
Scholarship”, Journal of ComputerMediated Communication 13 (1), 210-230.
3. G. Reza Kiani, (1998) "Marketing opportunities in the digital world", Internet Research,
Vol. 8 Iss: 2, pp.185 – 194.
4. YS Wang, TI Tang, JE Tang, “An instrument for measuring customer satisfaction toward
web sites that market digital products and services”, Journal of Electronic Commerce
Research, VOL. 2, NO. 3, 2001
5. A Sundararajan, Leonard N., “Pricing Digital Marketing: Information, Risk Sharing and
Performance”, Stern School of Business Working NYU, 2003
6. DC Edelman , “Four ways to get more value from digital marketing”, McKinsey Quarterly,
2010
7. YB Song, “Proof That Online Advertising Works”, Atlas Institute, Seattle, WA, Digital
Marketing Insight, 2001.
8. J Chandler Pepelnjak,“Measuring ROI beyond the last ad”, Atlas Institute, Digital
Marketing Insight, 2008.
9. A Munshi, MSS MUNSHI, “Digital marketing: A new buzz word”, International Journal
of Business Economics & Management Research, Vol.2 Issue 7, July 2012.
10. Thompson S.H. Teo, “Usage and effectiveness of online marketing tools among Business-
to-Consumer (B2C) firms in Singapore”, International Journal of Information
Management, Volume 25, Issue 3, June Pages 203–213, 2005.
11. Mort, Gillian Sullivan; Drennan, Judy, “Mobile digital technology: Emerging issue for
marketing”, The Journal of Database Marketing”, Volume 10, Number 1, 1 September
2002 , pp. 9-23.

12. Rick Ferguson, "Word of mouth and viral marketing: taking the temperature of the hottest
trends in marketing", Journal of Consumer Marketing, Vol. 25 Iss: 3, pp.179 – 182, 2008.
13. Dickinger, Astrid, “An investigation and conceptual model of SMS marketing”, System
Sciences, Proceedings of the 37th Annual Hawaii International Conference, 5-8 Jan, 2004.
14. Nina Koiso-Kanttila, “Digital Content Marketing: A Literature Synthesis”, Journal of
Marketing Management, Volume 20, Issue 1-2, pg-45-65, 2004.
15. Michael Trusov, Randolph E. Bucklin, Koen Pauwels (2009). Effects of Word-of-Mouth
Versus Traditional Marketing: Findings from an Internet Social Networking Site. Journal
of Marketing: Vol. 73, No.5,pp.90-102.
16. Glynn Mangold, David Faulds, “Social media: The new hybrid element of the promotion
mix”, Business Horizons, Volume 52, Issue 4, , Pages 357–365, July–August 2009.
17. Hanna, Rohm, Crittenden, “We’re all connected: The power of the social media
ecosystem”, Business Horizons, Volume 54, Issue 3, Pages 265–273, May–June 2011.
18. Guoying Zhang, Alan J. Dubinsky, Yong Tan, “Impact of Blogs on Sales Revenue”,
International Journal of Virtual Communities and Social Networking, Vol .3, Pg 60-74,
Aug-2013.
19. Roland Helm, Michael Möller, Oliver Mauroner, Daniel Conrad, “The effects of a lack of
social recognition on online communication behavior”, Computers in Human Behavior Vol
29, pg 1065-1077, 2013.
20. Pai. P, Arnott. DC, “User adoption of social networking sites: Eliciting uses and
gratifications through a means–end approach”, Computers in Human Behavior, Volume 29,
Issue 3, Pages 1039–1053, May 2013.
21. Cetină. J, Cristiana. M, Rădulescu. V, “Psychological and Social Factors that Influence
Online Consumer Behavior”, Procedia - Social and Behavioral Sciences, Vol 62, Page 184-
188, 2012.
22. Yannopoulos. P, “Impact of the Internet on Marketing Strategy Formulation”, International
Journal of Business and Social Science, Vol. 2 No. 18; October 2011.
23. Smyth, G., “The History of Digital Marketing”, Inetasia, 2007.
24. Mohsin. U, “The Rise of Digital Marketing in Pakistan”, Express Tribune, June 21, 2010.
25. Chaffey. D, “Definitions of Emarketing vs Internet vs Digital marketing”, Smart Insight
Blog, February 16, 2013.

IMAGE REPRESENTATION USING PHOTONS

Andreea-Mihaela Pintilie1
Mihai Zaharescu 2
Ion Bucur3
ABSTRACT
Genetic brain maps can help physicians to discover patterns of brain structure and how it
changes in disease or reacts to medication. In order to generate a brain map, an image
model is needed. A much discussed subject nowadays is improving images
obtained from devices like MRI, PET, CT, etc. In the medical area there is a need to improve
image segmentation and image resolution; images might be blurred or might contain noise
due to the patient's movement during the acquisition process. Imaging studies of the
human brain at active medical institutions today routinely accumulate more than 5
terabytes of clinical data per year. The present paper concentrates on the neurological field
(brain imaging) and on the genetic field, based on the results of brain imaging. This paper
proposes a new medical image format representation using photons, a tree-like structure
intended to address the inefficiency problem on large medical datasets, and an algorithm
for eliminating noise from images.

KEYWORDS: Image representation, medical image acquisition, image formats, raster images, vector images, photon images, quad trees, binarization, segmentation, edge detection
1. INTRODUCTION
Image representation is important when trying to analyze different images. Nowadays the medical image is a very important component when trying to establish a diagnosis and when planning and evaluating surgical and radiotherapeutic / chemotherapeutic treatments. There are different registration methods for medical images, based on the purpose of the investigation the medical doctor is conducting for a patient. In the following section we provide a short classification of the registration methods in the medical imaging area. [1]
One of the first criteria is dimensionality. The main division is based on the spatial-time dimensions. We can register two 3D/3D images - obtained from two tomographic data sets with no time dimension - or 2D/2D images - obtained as separate slices from tomographic data - or even perform a 2D/3D registration used for the alignment of spatial data to projective data, where the first image can be a CT one, while the second one an X-ray.

1 Engineer, Department of Computer Science and Engineering, Faculty of Automatic Control and Computers
Science, University “Politehnica” of Bucharest, Splaiul Independenţei 313, Bucharest, 060042, Romania,
andreea.pintilie@cti.pub.ro
2 Engineer, Department of Computer Science and Engineering, Faculty of Automatic Control and Computers
Science, University “Politehnica” of Bucharest, Splaiul Independenţei 313, Bucharest, 060042, Romania,
mihai.zaharescu@cti.pub.ro
3 Associate Professor PhD Eng., Department of Computer Science and Engineering, Faculty of Automatic
Control and Computers Science, University “Politehnica” of Bucharest, Splaiul Independenţei 313, Bucharest,
060042, Romania, ion.bucur@cs.pub.ro


Time series of images are important when trying to monitor tumor growth or the effectiveness of a treatment, as well as for pre- or post-surgical monitoring of healing over different time intervals (short/medium/long).
Registration can furthermore be divided into extrinsic and intrinsic methods. When acquiring an extrinsic image, artificial, foreign, invasive objects are usually used; less invasive or even non-invasive markers can be used, but with lower accuracy. As an
example, before a neurosurgery operation, a stereotactic frame can be used for localization
and guidance.
Intrinsic methods rely mainly on patient generated image content only. Landmarks for
example can be locatable points of the morphology of the visible anatomy or geometrical
points providing geometric property such as corners, local curvature extremes etc. Usually,
the set of identified points is compared to the original image content and they are used to
measure distance. Another intrinsic method is segmentation: same anatomical structures are
extracted from two different images and used for the alignment procedure. One image can
be elastically deformed in order to be compared with the second image: can be zoomed in
or out. The voxel property registration method is different from the other two intrinsic
methods described above because the method is applied on the image gray values.
Finally, another registration method is the non-image based one. In this type of registration, two devices are calibrated to each other and the patient must stay motionless between the first and the second image acquisition.
Another classification of the registration task is based on the modalities that are involved.
In monomodal registration - the images belong to the same modality, while in multimodal
the images are provided by two different modalities. There are also cases where only one image is involved and the other "modality" is the patient himself or a model. As an example of monomodal registration, when trying to analyze myocardial dysfunction in a patient, two images can be acquired: under rest and under stress conditions. In the multimodal method an image obtained from a PET device
can be registered to an MR image in order to relate an area of dysfunction to anatomy or
the registration of an MR brain image to a defined model of gross brain structures.
When a single patient provides the images involved, the method is referred to as intrasubject registration; when accomplished using images from distinct patients (or a model and a patient) it is known as intersubject registration; when one image is constructed from a database, an ontology or a knowledge base and the other one is provided by a single patient, the method is called atlas registration.
All these image registration methods require different transformations:
 rigid: when only translation and rotations are allowed
 affine: if the transformation maps the parallel lines onto parallel lines
 projective: transformation maps lines on lines
 curved: transformation maps lines to curves
 global: if the transformation is applied to the entire image
 local: if subsections of the image have their own transformations defined
In general, rigid and affine transformations are used as global transformations and curved transformations are local. Affine transformations are used for rigid body movements when the scale factors applied on the image are suspected to be incorrect or unknown: as an example, during an MR image acquisition of the lungs the patient might breathe, which will modify the lung volume and will provide a distorted image. The projective transformation type can be used in 2D/3D image registration as a constrained elastic transformation, when a fully elastic transformation behaves inadequately.
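To make the distinction between these transformation classes more concrete, the short sketch below (an illustration added for clarity, not part of the original implementation; the class and method names are hypothetical) applies a rigid transformation (rotation plus translation) and a general affine transformation to a 2D point.

// Minimal sketch of rigid vs. affine 2D point transformations (illustrative only).
public final class TransformDemo {

    // Rigid transform: rotation by 'theta' radians followed by translation (tx, ty).
    static double[] rigid(double x, double y, double theta, double tx, double ty) {
        double xr = Math.cos(theta) * x - Math.sin(theta) * y + tx;
        double yr = Math.sin(theta) * x + Math.cos(theta) * y + ty;
        return new double[] { xr, yr };
    }

    // General affine transform: [x', y'] = A * [x, y] + t; parallel lines stay parallel.
    static double[] affine(double x, double y, double[][] a, double tx, double ty) {
        double xa = a[0][0] * x + a[0][1] * y + tx;
        double ya = a[1][0] * x + a[1][1] * y + ty;
        return new double[] { xa, ya };
    }

    public static void main(String[] args) {
        double[] p1 = rigid(10, 5, Math.PI / 6, 2, 3);                                // rotation + translation only
        double[] p2 = affine(10, 5, new double[][] {{1.2, 0.1}, {0.0, 0.9}}, 2, 3);   // includes scaling / shear
        System.out.printf("rigid:  (%.2f, %.2f)%n", p1[0], p1[1]);
        System.out.printf("affine: (%.2f, %.2f)%n", p2[0], p2[1]);
    }
}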
2. MOTIVATION
In medical image acquisition it is important to have a history in order to establish the rate
of progress / regress of a disease. Therefore, it is important to be able to map or to compare
different images containing the same anatomical structure.
The image file format is defined as the standardized means of organizing and storing digital
images. Image files are composed of either pixel or vector (geometric) data that are
rasterized to pixels in graphic display (such as monitor, print and other devices). The pixels
that constitute an image are ordered in a grid; each pixel has magnitudes of brightness
(expressed by a gray-scale value) and color. The file size increases directly with the number of pixels composing an image and with the color depth of the image: the greater the number of columns and rows, the greater the image resolution will be, hence the size. The size also increases when the pixel color depth increases: an 8-bit pixel can store 256 colors, while a 24-bit pixel can store about 16 million colors. The compression method associated with the file
format is also important when trying to establish the image size. The images are classified
into three families of graphics: raster, vector and metafile (which combine raster and vector
information).
Among the common image file formats we mention: JPEG (compression by eliminating non-visible frequencies, using the cosine transform, and storing color with fewer bits than luminance); GIF (a format limited to an 8-bit palette - 256 colors - suitable for storing graphics with relatively few colors, like shapes and diagrams, and not for natural images); TIF (saves 8 or 16 bits per color channel, for a 24- or 48-bit total); PNG (provides simple detection of common transmission errors as well as full file integrity checking; the format can support up to 16 million colors in a lossless format).
3. RAW (RAW IMAGE FORMAT)
The RAW formats use lossless or nearly lossless compression and produce smaller files than the fully processed images obtained from the same cameras. ISO 12234-2 represents a standard for the raw image format, but most cameras' raw formats are not standardized or documented. Raw images are also known as digital negatives, by analogy with negatives in film photography. After the acquisition, the image is processed by a raw converter into a "positive" file format for further manipulation, which often encodes the image in a device-dependent colorspace. Many medical devices use RAW data formats, such as Metafile.
Metafile represents a generic term for a file format that can store multiple types of data:
raster, vector and even type data. The common use is to provide support for the OS
computer graphics.

231
JOURNAL OF INFORMATION SYSTEMS & OPERATIONS MANAGEMENT

DICOM (Digital Imaging and Communications in Medicine)
DICOM represents a standard for storing, handling, transmitting and printing medical
imaging. Besides being an image format standard, it also represents a network
communication protocol. The protocol uses TCP/IP for communication between systems.
The DICOM data format contains a structure which allows maintaining not only image information, but also other patient information and medical identification data. A DICOM data object consists of a number of attributes, including items such as name, ID, etc., and also one special attribute containing the image pixel data. A single DICOM object can only
contain one attribute containing pixel data. Pixel data can be compressed using a variety of
standards, including JPEG, JPEG Lossless, and JPEG 2000.
When acquiring the images, both the time and the spatial dimensions are important. As a consequence, algorithms that map a 3D image onto a 2D image must be used; hence, images must support transformations. We decided to provide a new image format that supports different geometrical transformations and compression and that maintains its quality when geometrical transformations are applied.
4. OBJECTIVES
This paper is mainly concerned with finding an image format that can preserve the image quality even after different geometric transformations are applied on the image. A good example would be a patient who is having multiple MRI scans, representing different stages in a tumor's evolution. In order to provide, based on the scan images, a probability for the malignancy of the tumor, it is necessary to analyze the images, but analyzing images that have different dimensions, different intensities or that might even contain the object of interest in different positions can be a very difficult task.
A MapReduce approach will be used in order to improve the computational speed. In order to extract different objects from the image, the Canny edge detection algorithm will be used.
Another interesting aspect is extracting the tumor from a PET or MRI image. The main drawback would be the sensitivity to false edges, but usually the intensities of the tumor cells are higher than the ones of the normal brain cells, and they can be better preserved after considering the luminosity of the pixels around them.
5. IMAGE REPRESENTATIONS
Image representation based on spatial relationships
In the article "Similarity Searching for Chest CT Images Based on Object Features and Spatial Relation Maps" [2], an object-based image retrieval system for CT images is presented. The paper proposes an image segmentation method which combines the anatomical knowledge of the chest and the well-known watershed segmentation algorithm. The purpose of segmentation is to identify the mediastinum and the two lung lobes in a chest CT image. The proposed solution has as a first step the construction of ARGs (attributed relational graphs), which are used to describe the features of segmented objects.

232
JOURNAL OF INFORMATION SYSTEMS & OPERATIONS MANAGEMENT

Traditional text-based image retrieval systems retrieve images by text keywords. The keywords used cannot reliably describe the features of the images, such as shapes and textures.
Several methods have been proposed to approach the CBIR (content-based image retrieval) problem. To archive and retrieve images by content, the images need to be analyzed, indexed and stored in a database. The analysis of images can be divided into several steps, like image segmentation, feature extraction [14][15], etc. The contents of a medical image are complex, therefore automatic image segmentation cannot be easily achieved. Features of images can be the color, shape and texture of the image content [3]. Because most medical images are represented in gray levels, shape and texture features are more often used in describing medical images. Moreover, spatial relationships among different objects were proposed to further handle multiple objects in an image [4]. In order to achieve a good representation of separated objects in an image, different strategies should be considered for different kinds of images, especially when analyzing medical images acquired from different devices and/or from different parts of the body.
The first step is the preprocessing module. The CT image files store the pixel-values as CT
numbers. The CT number represents the relative X-ray absorption properties of different
organs or tissues with respect to water [5]. Most physiological organs and tissues have positive CT numbers, while air has a CT number of -1000. In a typical chest CT image, air occupies much of the space in the lung lobes, hence this property can be used to separate the lung lobes (and air) from the other parts of the image.
The second step is the Modified watershed segmentation algorithm. The method is based
on the geographic phenomenon of water flooding up in area with hills and valleys: the water
is filling up from the lowest to the highest valleys. The borders between two neighboring
valleys are detected when the water in them merges. The watershed segmentation algorithm
proposed is the following:
1. The input pixel values are sorted. The purpose is to access the pixels in the same
gray level efficiently.
2. Different regions are labeled when the flooding step is processed.
3. The image is segmented in small regions.
4. A region merging step is performed to acquire meaningful objects out of the numerous small segments.
The article proposes 4 merging steps:
1. Merging of the air part with the background: merge regions whose mean pixel values are less than a threshold T and which are in contact with the image background.
2. Merging of regions with similar CT numbers: merge two neighboring regions if the
difference in CT number is less than a threshold of 40.
3. Merging of lung lobes: merge regions surrounded by the labels as lung lobes.
4. Merging of mediastinum: merge regions positioned between the sternum, spinal
cord, and two lung lobes.
After the four merging steps, the three objects (mediastinum, left and right lung lobes) in
the chest are identified. After the segmentation step, the objects can be labeled. A spatial-
relationship model was used to describe the object features and the relations among different
objects. Three properties are used to describe the spatial relationships between two objects:


1. D (Distance): the minimum distance between the two objects
2. P (Relative Position): the angle between the line connecting the two mass centers and the horizontal line
3. R (Ratio): the ratio of the areas of the two objects
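As a brief illustration of the three properties above (added here for clarity and not taken from the cited paper), the following sketch computes D, P and R for two segmented objects given as lists of pixel coordinates; the class name is hypothetical.

import java.awt.Point;
import java.util.List;

// Illustrative sketch: computing the D, P, R spatial-relationship properties
// for two segmented objects represented as lists of pixel coordinates.
public final class SpatialRelations {

    // D: minimum distance between any two pixels of the two objects (brute force).
    static double minDistance(List<Point> a, List<Point> b) {
        double best = Double.MAX_VALUE;
        for (Point p : a)
            for (Point q : b)
                best = Math.min(best, p.distance(q));
        return best;
    }

    // P: angle (degrees) between the segment joining the two mass centers and the horizontal.
    static double relativePosition(List<Point> a, List<Point> b) {
        double[] ca = centroid(a), cb = centroid(b);
        return Math.toDegrees(Math.atan2(cb[1] - ca[1], cb[0] - ca[0]));
    }

    // R: ratio of the two object areas (pixel counts).
    static double areaRatio(List<Point> a, List<Point> b) {
        return (double) a.size() / b.size();
    }

    private static double[] centroid(List<Point> pts) {
        double sx = 0, sy = 0;
        for (Point p : pts) { sx += p.x; sy += p.y; }
        return new double[] { sx / pts.size(), sy / pts.size() };
    }
}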
Vector Image representation
Vector image representation is important because of the vector-based graphical content properties of being editable and scalable. The main trend in vector image representation is developing scalable (resolution-independent) representations of full-color raster images. In a usual approach the image plane is decomposed into triangular patches with potentially curved boundaries, and the color signals over the image plane are handled as height fields. The image representation is based on piecewise smooth subdivision surfaces. The image is modeled with discontinuity curves, providing the continuity conditions useful in vectorization and subsequent vector editing operations.
Vectorization techniques:
 Triangulation: In representations based on triangulation, each curvilinear feature
is approximated by multiple short line segments. This form of representation is not
resolution independent because the difference between a smoothly curved feature
and a polyline with only C0 continuity at the vertices becomes obvious when
zoomed in. A technique to overcome the problem is by fitting subdivision curves
to patch boundaries that have C2 continuity.
 Parametric patches: When grids or Bezier patches are involved, it is necessary to
use techniques that are based on higher-order parametric functions. An example of
a vectorization technique based on optimized gradient meshes is manually aligning
mesh boundaries with salient image features. The main drawback is the fact that
mesh placements can be time-consuming. Meshes might introduce color
discontinuities that can be approximated using degenerate quads, but the real
challenge is to align color discontinuities with the image features. [7]
 PDE solutions: The PDE solution is a mesh-free image representation. The
technique is based on diffusion curves [6] and it relies on curves with color and
blur attributes as boundary conditions of a diffusion process. The color variations
of a vector image are represented by the final solution of the diffusion process. The
main limitation is that the curves are not coupled together; therefore it is difficult to
edit the image using region-based color or shaping operations.
Quadtree representation
Quad trees are mainly used for image representations because they reduce the storage space
of images and the time required for image manipulations. The quad tree structure is efficient
to store 2D images. Maintaining a quad tree as an image representation makes it easier to
cluster images based on their characteristics such as color, shape, semantics or texture or
even by their history. To represent an image by a quadtree the image is recursively split in
four disjoint quadrants or squares having the same size. The root node represents the initial
quadrant containing the whole image. Different criteria can be used to define the
homogeneity property: pixel color, same texture etc. If the image is not homogeneous (according to the criterion established above), the quadtree root has four descendant nodes, which represent the four quadrants of the image: northwestern, northeastern, southwestern, and southeastern. A node is a leaf when its corresponding image region is homogeneous. Generally the quad tree is unbalanced.
Each quad tree node has a key, or a location code/quad code. The quad tree node and the
corresponding image have the same key.
The main operations applied on quad trees are reading and modifying an image quad tree, comparing two quad trees and determining the Q-similarity distance. When trying to modify an image quad tree, one can use the complement operation: each leaf node is accessed and has its value changed. As an example, assuming that the image has a total of 256 different intensity levels to represent a pixel p, the value of p will be replaced by (255 - p). In the case of a binary image the black and white values are interchanged. When comparing two quad trees, nodes with the same identifier are actually compared. The basic operations that can be applied on two quad trees are: union, intersection, comparison and difference. When trying to compare gray-scale or colored images, a similarity measure based on image features is necessary. The node values might be considered similar if the similarity distance between those two nodes is under a given threshold. The Q-similarity distance is defined as the distance between two quad trees. It represents the number of nodes having the same identifier and different values divided by the cardinality of the union of node identifiers.
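The following sketch (in Java, the language used for our implementation, but added here only as an illustration) builds a quad tree from a gray-scale image and applies the complement operation described above. It assumes a square image whose side is a power of two and defines homogeneity as a small max-min intensity difference; these assumptions and the class name are not part of the original design.

// Illustrative quad tree sketch: 8-bit gray-scale image stored as int[h][w],
// homogeneity = (max intensity - min intensity) below a threshold.
public final class QuadTree {
    final int x, y, size;          // top-left corner and side length of the covered square
    int value;                     // mean intensity for leaves
    QuadTree nw, ne, sw, se;       // null for leaf nodes

    QuadTree(int[][] img, int x, int y, int size, int threshold) {
        this.x = x; this.y = y; this.size = size;
        int min = 255, max = 0, sum = 0;
        for (int i = y; i < y + size; i++)
            for (int j = x; j < x + size; j++) {
                int v = img[i][j];
                min = Math.min(min, v); max = Math.max(max, v); sum += v;
            }
        if (max - min <= threshold || size == 1) {
            value = sum / (size * size);        // homogeneous region -> leaf node
        } else {
            int h = size / 2;                   // otherwise split into the four quadrants
            nw = new QuadTree(img, x,     y,     h, threshold);
            ne = new QuadTree(img, x + h, y,     h, threshold);
            sw = new QuadTree(img, x,     y + h, h, threshold);
            se = new QuadTree(img, x + h, y + h, h, threshold);
        }
    }

    // Complement operation: every leaf value v becomes 255 - v.
    void complement() {
        if (nw == null) { value = 255 - value; return; }
        nw.complement(); ne.complement(); sw.complement(); se.complement();
    }
}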
6. EDGE DETECTION
Edge detection (ED) is a term in image processing which refers to algorithms whose purpose is to identify points in a digital image at which the image brightness changes sharply or has discontinuities. The next section presents some of the methods used for edge detection.
Robert's Cross Operator
The Robert’s cross operator performs a simple, quick to compute, 2 – D spatial gradient
measurement on an image. It thus highlights regions of high spatial frequency which often
corresponds to edges. Pixel values at each point in the output represent the estimated
absolute magnitude of the spatial gradient of the input image at that point. In theory the operator consists of a pair of 2x2 convolution kernels (the standard Roberts cross kernels), one being simply the other rotated by 90°:

Gx = | +1  0 |      Gy = |  0 +1 |
     |  0 -1 |           | -1  0 |

These kernels are designed to respond maximally to edges running at 45° to the pixel grid, one kernel for each of the two perpendicular orientations. The edge gradient magnitude has the formula:

|G| = sqrt(Gx^2 + Gy^2)

The angle of orientation of the edge giving rise to the spatial gradient is given by:

theta = arctan(Gy / Gx) - 3*pi/4
Sobel Operator
The Sobel operator performs a 2–D spatial gradient measurement on an image and typically
it is used to find the approximate absolute gradient magnitude at each point in an input
image. In theory the operator consists of a pair of 3x3 convolution kernels (the standard Sobel kernels), one being the other rotated by 90°:

Gx = | -1  0 +1 |      Gy = | +1 +2 +1 |
     | -2  0 +2 |           |  0  0  0 |
     | -1  0 +1 |           | -1 -2 -1 |

These kernels are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one kernel for each of the two perpendicular orientations.
The edge gradient magnitude is given by:

|G| = sqrt(Gx^2 + Gy^2)

The angle of orientation of the edge giving rise to the spatial gradient is given by:

theta = arctan(Gy / Gx)
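A minimal sketch of applying the Sobel kernels to a gray-scale image stored as int[h][w] is shown below; it is added as an illustration of the convolution described above, not as the paper's actual implementation.

// Minimal Sobel gradient-magnitude sketch over an 8-bit gray-scale image (int[h][w]).
public final class SobelSketch {
    static final int[][] GX = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
    static final int[][] GY = { { 1, 2, 1}, { 0, 0, 0}, {-1, -2, -1} };

    // Returns the (clamped) gradient magnitude image; border pixels are left at 0.
    static int[][] magnitude(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int gx = 0, gy = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        gx += GX[dy + 1][dx + 1] * img[y + dy][x + dx];
                        gy += GY[dy + 1][dx + 1] * img[y + dy][x + dx];
                    }
                out[y][x] = Math.min(255, (int) Math.sqrt(gx * gx + gy * gy));
            }
        }
        return out;
    }
}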
Prewitt Compass Edge Detector
Compass edge detection is an alternative approach to differential gradient edge detection. When using compass edge detection, the image is convolved with a set of (in general 8) convolution kernels, each of which is sensitive to edges in a different orientation. For each pixel the local edge gradient magnitude is estimated with the maximum response of all 8 kernels at that pixel location:

|G| = max(|Gi| : i = 1 ... n)

where Gi is the response of kernel 'i' at the particular pixel location and 'n' is the number of convolution kernels. The local edge orientation is estimated with the orientation of the kernel that yields the maximum response. The whole set of 8 kernels is produced by taking one of the Prewitt kernels and rotating its coefficients circularly. Each of the resulting kernels is sensitive to an edge orientation ranging from 0° to 315° in steps of 45°, where 0° corresponds to a vertical edge.
Zero Crossing Detector
The zero crossing detector looks for places in the Laplacian of an image where the value of the Laplacian passes through zero, i.e. points where the Laplacian changes sign. Such points often occur at edges in images, i.e. points where the intensity of the image changes rapidly, but they also occur at places that are not as easy to associate with edges. It is best to think of the zero crossing detector as some sort of feature detector rather than as an edge detector.
The core of the zero crossing detector is the Laplacian of Gaussian filter. Once the image
has been Laplacian of Gaussian filtered, it only remains to detect the zero crossings. This
can be done in several ways. The simplest is to threshold the Laplacian of Gaussian output
to zero and a more accurate approach is to perform some kind of interpolation to estimate
the position of the zero crossing to sub–pixel precision.
Canny Edge Detector
The Canny operator was designed to be an optimal edge detector. The Canny operator works in a multistage process. First of all the image is smoothed by Gaussian convolution, then a derivative operator is applied. The resultant edges pass through a first threshold and, in order to connect broken edges, pixels that can fill the gaps are passed through a second, lower threshold. Using the edge directions, the edges are thinned to a width of one pixel. The effect of the Canny operator is determined by three parameters: the width of the Gaussian kernel and the lower and upper thresholds used by the tracker.
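The double-threshold (hysteresis) step mentioned above can be sketched as follows, assuming a gradient-magnitude image is already available; smoothing, gradient computation and thinning are omitted, so this is only an illustration of the linking stage, not the complete Canny operator.

import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of Canny-style hysteresis thresholding: strong edges (>= high threshold) seed the
// result, and weak edges (>= low threshold) are kept only if connected to a strong one.
public final class Hysteresis {
    static boolean[][] apply(int[][] magnitude, int low, int high) {
        int h = magnitude.length, w = magnitude[0].length;
        boolean[][] edge = new boolean[h][w];
        Deque<int[]> stack = new ArrayDeque<>();
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (magnitude[y][x] >= high && !edge[y][x]) {
                    edge[y][x] = true;
                    stack.push(new int[] { y, x });
                }
        while (!stack.isEmpty()) {                       // grow edges through weak pixels
            int[] p = stack.pop();
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    int ny = p[0] + dy, nx = p[1] + dx;
                    if (ny >= 0 && ny < h && nx >= 0 && nx < w
                            && !edge[ny][nx] && magnitude[ny][nx] >= low) {
                        edge[ny][nx] = true;
                        stack.push(new int[] { ny, nx });
                    }
                }
        }
        return edge;
    }
}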
7. BINARIZATION
A binary image is an image that only uses two values for its pixel values. Binary images are
often used in digital image processing as masks and are the result of certain operations such
as edge detection, segmentation, thresholding. The following section will present different
binarization methods [13].
 Histogram-based methods: These methods determine the binarization threshold
by analyzing the shape properties of the histogram, such as the peaks and valleys.
Pavlidis [11] constructs a histogram by using gray-image pixels with significant
curvature, or second derivative, and then selects a threshold based on the histogram.
 Clustering-based methods: The threshold is selected by partitioning the image’s
pixels into two clusters at the level that maximizes the between-class variance, or
minimizes the misclassification errors of the corresponding Gaussian density
functions [9].
 Entropy-based methods: These methods employ entropy information for binarization [8] (a sketch of this approach is given after this list).
 Object attribute-based methods: These methods select the threshold based on
some attribute quality (e.g., edge matching of Hertz and Schafer [10]) or the
similarity measure between the original image and the binarized image.
 Spatial binarization methods: These methods binarize an image according to the
higher-order probability or the correlation between pixels.
 Hybrid methods: local, adaptive methods [12]; algorithms for improving the above-mentioned methods [13].
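As referenced above, the entropy-based family of methods can be illustrated by the compact sketch below, written in the spirit of Kapur et al. [8]: the threshold is the gray level that maximizes the sum of the background and foreground entropies computed from the image histogram. It is an illustration under that assumption, not the exact procedure used in our implementation.

// Sketch of entropy-based threshold selection (in the spirit of Kapur et al. [8]):
// choose the threshold t that maximizes the sum of background and foreground entropies.
public final class EntropyThreshold {
    static int select(int[] histogram, int totalPixels) {
        double[] p = new double[256];
        for (int i = 0; i < 256; i++) p[i] = histogram[i] / (double) totalPixels;
        int bestT = 0;
        double bestH = Double.NEGATIVE_INFINITY;
        for (int t = 0; t < 255; t++) {
            double pb = 0;
            for (int i = 0; i <= t; i++) pb += p[i];
            double pf = 1.0 - pb;
            if (pb == 0 || pf == 0) continue;
            double hb = 0, hf = 0;
            for (int i = 0; i <= t; i++) if (p[i] > 0) hb -= (p[i] / pb) * Math.log(p[i] / pb);
            for (int i = t + 1; i < 256; i++) if (p[i] > 0) hf -= (p[i] / pf) * Math.log(p[i] / pf);
            if (hb + hf > bestH) { bestH = hb + hf; bestT = t; }
        }
        return bestT;   // pixels <= bestT go to one class, the remaining pixels to the other
    }
}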
8. IMPLEMENTATION
The project implementation is in Java, using Hadoop MapReduce for image processing. The first step of the algorithm was to apply the Canny edge detection algorithm on an image. Then we used an entropy-based binarization algorithm, as presented in the Binarization section, to remove the noise. We spread the photons on the edges using an iterative algorithm. The photons were spread based on the fact that the edges absorb the light, therefore the photons will model the edges. The image processing was done using MapReduce to improve the computational speed. We also maintained a quad tree containing the image, which provides access to a higher level of detail as the tree is traversed deeper.
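To give a rough idea of how per-image processing can be wired into Hadoop MapReduce, the skeleton below is provided as a hedged sketch only: the choice of (file name, raw image bytes) as the key/value pair and the detectEdges() helper are assumptions for illustration and do not reflect the exact job configuration used in the project.

import java.io.IOException;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hedged skeleton of per-image processing expressed as a Hadoop map task.
// Assumptions: each record is (file name, raw image bytes); detectEdges() stands in
// for the Canny + binarization + photon-spreading pipeline described above.
public class EdgeDetectionMapper extends Mapper<Text, BytesWritable, Text, BytesWritable> {

    @Override
    protected void map(Text fileName, BytesWritable imageBytes, Context context)
            throws IOException, InterruptedException {
        byte[] processed = detectEdges(imageBytes.copyBytes());   // per-image processing
        context.write(fileName, new BytesWritable(processed));    // emit the processed image
    }

    private byte[] detectEdges(byte[] raw) {
        // Placeholder: decode the image, run edge detection / binarization, re-encode.
        return raw;
    }
}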
9. ACKNOWLEDGEMENT
The authors would like to thank Costin-Anton Boiangiu for his original idea (paper under
review) onto which the described system is based, for support and assistance with this paper.
10. REFERENCES
[1] J.B.A. Maintz, M.A. Viergever, “A survey of medical image registration”, Conf Proc IEEE Eng Med Biol Soc. 2004;2:1298-301.
[2] Yu SN, Chiang CT, “Similarity searching for chest CT images based on object features and
spatial relation maps”.
[3] Yihong Gong, “Intelligent Image Databases: Towards Advanced Image Retrieval”, Robotics Institute, Carnegie Mellon University.
[4] Euripides G.M. Petrakis, “Design and evaluation of spatial similarity approaches for image
retrieval”, Image and Vision Computing, Volume 20, Issue 1, 1 January 2002, Pp 59–76.
[5] Louis M. Castanier, “An Introduction To Computerized X-Ray Tomography For Petroleum
Research”, May 1989, Stanford University Petroleum Research Institute.
[6] Alexandrina Orzan, Adrien Bousseau, Holger Winnemoller, Pascal Barla, Joelle Thollot,
David Salesin, “Diffusion Curves: A Vector Representation for Smooth-Shaded Images”.
[7] Tian Xia, Binbin Liao, Yizhou Yu, “Patch-Based Image Vectorization with Automatic
Curvilinear Feature Alignment”.
[8] J.N. Kapur, P.K. Sahoo, A.K.C. Wong, “A new method for gray-level picture thresholding
using the entropy of the histogram”, Computer Vision, Graphics, and Image Processing,
Volume 29, Issue 3, March 1985, Pages 273–285.
[9] J. Kittler, J. Illingworth, “Minimum error thresholding”, Pattern Recognition, V 19, I 1,
1986, Pp 41–47.
[10] L. Hertz, R. W. Schafer, “Multilevel thresholding using edge matching”, Journal Computer
Vision, Graphics, and Image Processing archive, Vol44 Is 3, Dec. 1988, Pp 279-295.
[11] Pavlidis, T. “Threshold selection using second derivatives of the gray scale image”,
Document Analysis and Recognition, 1993.
[12] Costin-Anton Boiangiu, Alexandra Olteanu, Alexandru Victor Stefanescu, Daniel Rosner,
Alexandru Ionut Egner (2010). „Local Thresholding Image Binarization using Variable-
Window Standard Deviation Response” (2010), Annals of DAAAM for 2010, Proceedings
of the 21st International DAAAM Symposium, 20-23 11 2010, Zadar, Croatia, pp. 133-134.
[13] Costin-Anton Boiangiu, Andrei Iulian Dvornic, Dan Cristian Cananau, „Binarization for
Digitization Projects Using Hybrid Foreground-Reconstruction”, Proceedings of the 2009
IEEE 5th International Conference on Intelligent Computer Communication and
Processing, Cluj-Napoca, August 27-29, pp.141-144, 2009.
[14] Costin-Anton Boiangiu, “The Beta-Shape Algorithm for Polygonal Contour
Reconstruction”, CSCS14 – The 14th International Conference on Control System and
Computer Science, Bucharest, July 2003.
[15] Costin-Anton Boiangiu, Bogdan Raducanu. “3D Mesh Simplification Techniques for
Image-Page Clusters Detection”. WSEAS Transactions on Information Science,
Applications, Issue 7, Volume 5, pp. 1200 – 1209, July 2008.


DATABASE DYNAMIC MANAGEMENT PLATFORM (DBDMS) IN OPERATIVE SOFTWARE SYSTEMS

Virgil Chichernea1
Dragos-Paul Pop2
ABSTRACT
This paper discusses the opportunity of using cloud computing and cloud service features
to reliably store and manipulate data and databases. It proposes a platform, the Database Dynamic Management Platform (DBDMS), which can be used to effectively handle database versioning of both schema and data. Opportunities and advantages that this system brings are discussed here, and a mathematical model is presented and analyzed for the proposed platform.

Keywords: cloud computing, cloud storage, dynamic database, schema versioning, data versioning
1. INTRODUCTION
In a globalized society with state-of-the-art information technologies, the Operative Software Systems (OSS) that companies use are always under pressure to adapt to the dynamics of ever more sophisticated demands for precise information delivery. These systems need to adapt as soon as possible to deliver information to all online connected users that need to access the Databases (DB) used by the systems to store information.
Because of these conditions, the databases used by OSS are always undergoing changes and updates, both in the data they store and in their structure and schemas, in order to quickly adapt to changes in requirements, at a low cost, so as to provide an optimal cost-benefit relationship.
These are a few of the basic sources of database structure and content updates that require
updating and storing new versions of databases:
 the evolution of the systems in question determined by the requirements to integrate
it in a globalized world; this evolution imposes an update dynamic for OSS in order
to provide adequate responses for attribute dynamics: exact information provided
timely, anywhere, anytime, to any number of online users;
 the explosive evolution of new information technologies imposes the adoption of
some precautionary measures in order for OSS to be transferred on new hardware
and on new operating systems;
 the evolution of Database Management Systems (DBMS) performance and data
storing opportunities;
 legal changes in the business environment and the IT&C environment;
 the list could go on;

1 Professor, Ph.D., Romanian-American University, Bucharest, Romania, chichernea.virgil@profesor.rau.ro


2 Teaching Assistant, Romanian-American University, Bucharest, Romania, Ph.D. Student, Academy of Economic Studies, Bucharest, Romania, pop.dragos.paul@profesor.rau.ro


The solutions offered by cloud DBMS are different from other locally found systems,
because they do not require physical installation and because they can be accessed with just
an Internet connection and a Web browser.
Regardless of where it is being accessed from (home, work, automobile, anywhere), the
DBMS stores and offers at any time the software, documents, files and data that are needed
to keep an OSS functioning. This can be seen as the technical means required to keep pace
with the ever growing activity of the system in question.
The DBDMS offers access from anywhere and any device connected to the Internet -
computer, mobile phone, tablet - without interruption, without time loss and at affordable
prices. In a cloud environment, the DBDMS platform handles in a permanent and
continuous way, the updates of structure and content for operational databases, using an
operational versions management system.
Access by company users to these versions of the operational databases is managed by this platform, removing downtime caused by technical and data security requirements and allowing for an increase in the productivity of company personnel.
The DBDMS platform is an advanced technical solution, without huge extra initial costs
(servers, licenses). The costs for servers and storage space decrease significantly, allowing
for a flexible configuration of access for authorized users to data stored in different
successive versions (content and dynamic structures) following current business needs. In
the cloud costs are predictable, easy to measure and always optimized.
The DBDMS platform can be adapted to the specifics of the system in question, maintaining
its essence unchanged and this way eliminating the barriers that appear in the natural
evolution of structures and data contents that are stored in these databases. System upgrades
are offered at a periodic rate in order to keep up with the level of competitiveness and be
against it. DBDMS can be integrated in a stable, unitary and perfectly operational platform
in any system regardless of company size and domain.
Any alteration of the requested data or data structure determines the instantaneous transfer of those changes to the other operational versions, thus saving time because there is no need to input the data again, and data querying remains easy no matter the database version the data is stored in.
The proposed cloud solutions work in tandem with the other software applications that are in use and that are to be kept in use.
The cloud DBDMS platform offers a safe and secure work environment which allows for safe and secure data storage, keeps unwanted users away and ensures that data is not lost regardless of the rate of changes, the number of users or the changes to files and documents. Access is reserved only to authorized users.
The cloud DBDMS solutions are designed to best address the requirements of each
department of the system in question and offer dedicated solutions that cover the entire
range of the following interest levels: efficient integrated internal process management
(data content and structure), streamlining of the operational flow, productivity increase and
optimization of the cost-performance relationship.


These are a few of the advantages offered by the DBDMS platform:
 Reduced costs – there are no initial acquisition, implementation and equipment
maintenance costs, because there is no need to create a new infrastructure;
 Flexibility – the solution is easy to implement, there is no need to install hardware
/ software, and it can be used from anywhere and any device. Moreover, at any time
new users can be added to the system;
 Scalability – costs are based on effective number of users and updating their
number is done immediately;
 Mobility – the solution can be accessed at any time and from anywhere, from any
equipment as long as there is a connection to the Internet;
 Modularity – each company activity is managed by a specific application. The solution's modules work either in an integrated manner, using a single database, or independently;
 Security – an advanced level of security is provided for stored data. This data is
protected against industrial espionage, theft or definitive loss of data;
 Integrability – the solution integrates the whole activity of a company in a unique
database. The tracking of all financial and accounting documents within the
company is allowed, along with processes undertaken on these documents and
contribution from each user that has worked on the database;
 Business process efficiency – the system allows for organizational objective fulfillment, providing real-time control and efficiency growth by offering decisional support;
 Coordination – The system allows for activity planning and work flow control in
real time.
2. FACILITIES OFFERED BY CLOUD STORAGE
Cloud storage is a network of storage for data and data objects (images, text documents,
sound and video files) in virtualized areas hosted by a provider. The provider operates data centers (BIGD) that are distributed among many servers and locations, and the personnel that require their own data to be stored rent memory locations at a price. Data
center operators virtualize available resources and provide, by request, memory locations
that users can use by themselves to store data and data objects. The security of files is
offered by the provider and by software applications that are being used.
The cloud storage system can be accessed through a web service application programming
interface (API) or through applications that use this API like cloud desktop storage, cloud
storage gateway or web-based content management systems.
The new concepts of cloud computing and cloud storage, first envisioned by Joseph Carl
Robnett Licklider in the 1960s, work with a large array of terms, from which we underline:
storage cloud, private cloud storage, mobile cloud storage, public cloud storage, hybrid
cloud storage, personal cloud storage, public cloud, cloud backup, cloud enablement, hybrid
cloud, cloud services, private cloud, cloud computing, Amazon Simple Storage Service -
Amazon S3, etc. [5], [6].
Cloud storage provides a virtual IT structure that can evolve as fast as the system in
question, offering a generous environment for developing operational software systems for companies and small and medium enterprises. Small companies can use a section of cloud storage and specialized software (sync) managed by the provider, through which they can run data storage and retrieval queries from any authorized mobile device. [4] Data back-up is
made outside of company headquarters, on multiple servers, which allows for a better
security model in case of force majeure situations like fires and floods. For example, we
can mention some of the best known cloud storage systems like Dropbox and SugarSync
or a large number of cloud drive systems like Google Drive, SkyDrive and others.
A lot of providers offer free space as a starting point, with ranges from 5 to 25 GB and only
charge for extra space or bandwidth.
Companies and especially small companies that use these services benefit from significant
saving in time and money.
These are some of the benefits:
 Cost reduction – the host cloud server optimizes the relationship between computing speed, storage space, time and running costs, and provides significant savings at company level when running software systems;
 Anytime and anywhere – desktop storage allows for cloud storage access of stored
files from any authorized device anytime via a specific software application (sync).
A user’s files are stored on multiple servers lowering the risk of technical incidents
to a minimum;
 Easy collaboration – saving and accessing files in cloud storage is available in a
multiuser-multitasking regime, so that all authorized users can access the same
stored data at the same time;
 Risk reduction – cloud storage provides data security by off-site data backup,
reducing the risk of virus infections and other cyber-attacks;
 Increase in efficiency – after migrating to cloud storage, small companies won’t
have problems regarding computing power, storage space or access to specialized
software;
Notions, concepts and definitions
Any software system stores data in records that are organized in files stored on magnetic
drives to allow for quick retrieval.
Let there be:
{R} – array of records
{S} – array of address in the storage space where data is written for these records
{C} – array of data requests for records in storage
A database DB (R, S, C) for a software system is the array R, stored on drives S, in order to satisfy the requests found in C (retrieval of the requested data in time).
Any database (DB) has a structure of R arrays applied over S (expressed by the structure of
files which store data) and a content for that data at the time t+t0.
Organizing the DB, O {DB {R, S, C}} is defined as both the content and the files in which
data is stored.


The main objective of any O {DB{R, S, C}} is the optimization of the relation between
storage space S and retrieval time t(ci) under the aspect of total cost.
In mathematical terms, this objective can be formulated as follows:
With M {O {DB{R, S, C}}} being given, identify a structure O{DB{R, S, C}} that can optimize the following relationship:
Min {t(ci), ∀ci ∈ C} at the same time with min Cost (O{BD{R,S,C}}) – under the aspect of total cost.
A general solution for this problem is difficult to obtain because of the complex structures
and the volume of data related to the arrays R, S, C.
In order to achieve this objective, a wide range of techniques for organizing O {DB{R, S,
C}} have been developed from simple files to state of the art RDBM systems.
The operational software system (OSS) needs to provide exact data in time, anywhere,
anytime, to any users connected online and, in this context, the contents of the database, as
a support for the OSS, is updated through the operational flows of the system in question.
The dynamics of these updates affect both the current contents of the DB (arrays R and S)
and the structure of O {DB{R, S, C}}, influenced by the dynamics of the requests in array
C.
We define a dynamic database (DDB) as the array Oi{BD(R,S,C)}, for i = 1, 2, …, n, in which Ok{BD(R,S,C)} represents O{BD(R,S,C)} at the point in time t = tk. We define the DBDMS platform as the software platform that manages the different versions Oi{BD(R,S,C)} of the OSS by using cloud facilities.
3. THE DDBMS PLATFORM FOR OPERATIVE SOFTWARE SYSTEMS
Through the information flows of the system in question (companies, central or local state
administrative units, banks, etc.) the arrays R, S and C are always updated (many times the
updates being in real time), both at the content and structure levels (files, file structures,
keywords, etc.).
In [3], aspects related to Boolean algebra for record arrays are presented, and an O{BD(R,S,C)} is proposed as a Boolean algebra of all possible answers, together with the mechanisms specific to large databases (BIGD).
Let us follow the dynamics of updates in an OSS and, implicitly, the dynamics of the
Oi{BD(R,S,C)} array.
In order to secure the normal functionality of the OSS and to avoid system crashes in case
of technical incidents, mechanisms for saving and restoring the state of the OSS have been
refined. These are among the most known such mechanisms:
 Backups of different versions of DB and transaction files; in case of technical
incidents (hardware or software malfunctions) the DB is to be restored based on
these files and updated with the transactions from transaction files from the last DB
save;


 For certain OSSs (like banking systems), along with backups the so-called mirroring system is used, i.e. the parallel and real-time activity of supplemental DB
updating mechanisms;
 In all classical OSSs, the altering of DB structure is forbidden.
The facilities offered by cloud systems and state-of-the-art software technologies lead to new approaches for the processes of updating, saving and restoring the DB (R, S, C). The
processes of updating the DB according to request dynamics have driven the development
of new platforms that can provide updates to data and structures, but also to save / restore
processes in real time to the last version of the O{BD(R,S,C)}.
In accordance with the dynamics of the OSS, and given the facilities offered by the cloud and by new software technologies (laptop, mobile phone, tablet), we introduce the following new notations for the DB:
R ← R + ΔRi
S ← S + ΔSi
C ← C + ΔCi
Where ΔRi, ΔSi, ΔCi represent changes in the structure of the DB, meaning either changes in the records of some DB files or the addition or deletion of some files.
In this new context, using the notations from above, the dynamic update of the DB can be expressed as follows:
Oi{BD(R + ΔRi, S + ΔSi, C + ΔCi)}, for i = 1, 2, … n
Where
 {ΔRi} – the new structures of the R array (expressed by either changes in the structures of existing DB files or by the addition or deletion of files);
 {ΔSi} – the array of addresses of the storage space in which these records are stored;
 {ΔCi} – the new requests of the system in question
With the new notation, let us consider a number of keywords (fields contained in the DB records), denoted k1, k2, k3 … kn, with the property that any record in {R + ΔRi} contains at least one ki keyword.
We denote with:
 {R(ki) + ΔRi(ki)} – the set of records that contain the keyword ki;
 {A(ki) + ΔAi(ki)} – the list of addresses that hold the records found in the array {R(ki) + ΔRi(ki)};
From [3] we can prove that any request from C for the DB can be written as a Boolean
function like:
F(k1, k2, k3, …, kn) = Ki
The answer to this request for data is found in a record collection B ⊆ B(R + ΔRi), where we name the array B(R + ΔRi) the array of all possible answers.
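As a small illustration of evaluating such a Boolean keyword request over the per-keyword record sets {R(ki)} (added here for clarity; the class and variable names are hypothetical and not part of the proposed platform), a request such as F = (k1 AND k2) OR k3 can be computed with set intersections and unions:

import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: evaluating a Boolean keyword request over keyword record sets.
public final class BooleanRequest {

    // Records containing both keywords (intersection).
    static Set<Integer> and(Set<Integer> a, Set<Integer> b) {
        Set<Integer> r = new HashSet<>(a);
        r.retainAll(b);
        return r;
    }

    // Records containing either keyword (union).
    static Set<Integer> or(Set<Integer> a, Set<Integer> b) {
        Set<Integer> r = new HashSet<>(a);
        r.addAll(b);
        return r;
    }

    public static void main(String[] args) {
        // recordsByKeyword maps each keyword ki to the identifiers of records containing it.
        Map<String, Set<Integer>> recordsByKeyword = Map.of(
                "k1", Set.of(1, 2, 3),
                "k2", Set.of(2, 3, 4),
                "k3", Set.of(7));
        Set<Integer> answer = or(and(recordsByKeyword.get("k1"), recordsByKeyword.get("k2")),
                                 recordsByKeyword.get("k3"));
        System.out.println(answer);   // a subset of the array of all possible answers
    }
}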


4. TECHNIQUES FOR SAVING / RESTORING THE DB IN OSS
In DBs of OSSs various files store numerical data and object data (images, text documents,
multimedia files) as well as addresses for records that allow the retrieval of this data in time.
The file for direct transactions is an intermediary file that contains all data used
to undertake the following operations on the DB:
 Inserting new records in certain DB files;
 Deleting records from certain DB files;
 Changing records in certain DB files.
The file for reflected transactions is a file attached to the direct transaction file of the
system in the following way:
 For any record in the direct transaction file there is a record in the reflected
transaction file defined as follows:
Let us consider:
o records of the direct transactions file: xi = {dki(t), k = 1 … n}; i = 1 … p; t ∈ T
o records of the database: x̃i = {aki(t), k = 1 … n}; i = 1 … p; t ∈ T
o records of the reflected file: x̄i = {d̄ki(t), k = 1 … n}; i = 1 … p; t ∈ T
where d̄ki(t) is defined as:
1. d̄ki(t) = −dki(t), if aki(t) := aki(t) + dki(t)
2. d̄ki(t) = dki(t), if aki(t) := aki(t) − dki(t)
3. x̄i := x̄i ∪ {d̄ki(t), k = 1 … n} if x̃i := x̃i ∩ {d̄ki(t), k = 1 … n}
4. x̄i := x̄i ∩ {d̄ki(t), k = 1 … n} if x̃i := x̃i ∪ {d̄ki(t), k = 1 … n}
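As an illustration of cases 1 and 2 above (an explanatory sketch added here, not part of the platform's implementation; the type and method names are hypothetical), a reflected transaction record can be derived from a direct one by storing the delta that would undo the applied update:

// Illustrative sketch: deriving a reflected transaction record from a direct one,
// for additive / subtractive field updates (cases 1 and 2 in the definition above).
public final class ReflectedTransaction {

    enum Op { ADD, SUBTRACT }

    // A direct transaction applies 'delta' to field 'k' of a DB record: a_k := a_k +/- delta.
    record Direct(int k, Op op, double delta) {}

    // The reflected record stores the delta that would undo the direct transaction.
    static double reflect(Direct d) {
        return d.op() == Op.ADD ? -d.delta() : d.delta();
    }

    public static void main(String[] args) {
        Direct deposit = new Direct(3, Op.ADD, 250.0);
        System.out.println(reflect(deposit));   // -250.0: replaying it restores the old value
    }
}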
 Both files will be stored in cloud storage, but on different servers, and will serve both for monitoring the correct evolution of the DB (by comparing records from transaction T at the moment t1) and for rebuilding the DB in case of a technical incident, by using the backup of the DB at the moment t, undertaking the direct and reflected transactions on that backup from the interval t to t + t1, and comparing the contents of the two rebuilt DBs obtained through the two rebuild procedures.
This way, by using the 3 entities (DB, direct transactions file, reflected transactions file) and the procedures for rebuilding the contents of the DB at the moment t + t1, procedures and methods are provided for rebuilding the DB at certain moments of its evolution. This is required on one hand by the possibility of the destruction of the DB (technical incidents, cyber-attacks, etc.) and on the other hand by the frequent reports demanded on the state of the system at a given point in time.

Figure 1. DB update and reflected transaction file creation
Figure 2. Restoring the DB at a certain stage


5. CONCLUSIONS
Cloud storage facilities for securely storing the three entities allows for designing and
maintaining dynamic DBs in service. These DBs can support, besides the classical
operations of updating (adding, deleting, changing records), the update of the DB structure,
the change of the structure of records in DB files and the addition and deletion of DB files
dynamically according to the evolution of the system in question.
6. ACKNOWLEDGEMENT
This work was co-financed from the European Social Fund through Sectorial Operational
Program Human Resources Development 2007-2013, project POSDRU/107/1.5/S/77213
„Ph.D. for a career in interdisciplinary economic research at the European standards”
7. BIBLIOGRAPHY
[1] Cloud storage – Wikipedia, the free encyclopedia, en.wikipedia.org/wiki/Cloud_storage
[2] Cloud storage & Unlimited, www. Online File storage –Raskpage Cloud
[3] Chichernea, V., Pop, D., “Techniques For Optimizing The Relationship Between Data Storage
Space And Data Retrieval Time For Large Databases”, Journal of Information Systems and
Operations Management, vol. 6, no.2, pp. 258-268, 2012
[4] Armbrust, M., Fox, A., Griffith, R., Joseph, A., & RH. (2009). Above the clouds: A Berkeley
view of cloud computing. University of California, Berkeley, Tech. Rep. UCB , 07–013.
[5] Lee, G., Rabkin, A., Katz, R., Stoica, I., Griffith, R., Joseph, A. D., … Zaharia, M. (2010). A
view of cloud computing. Communications of the ACM.
[6] Mirashe, S. P., & Kalyankar, N. V. (2010). Cloud Computing. (N. Antonopoulos & L. Gillam,
Eds.)Communications of the ACM, 51(7)


TEXT LINE SEGMENTATION IN HANDWRITTEN DOCUMENTS BASED ON DYNAMIC WEIGHTS

Costin-Anton Boiangiu1
Mihai Cristian Tanase2
Radu Ioanitescu3
ABSTRACT
Identification of text lines in documents, or text line segmentation, represents the first step
in the process called "text recognition", whose purpose is to extract the text and put it in a
more understandable format. The paper proposes a seam carving algorithm as an approach
to find the text lines. This algorithm uses a new method that allocates dynamic weights for
every processed pixel in the original image. With this addition, the resulting lines follow
the text more accurately. The downside of this technique is the computational time
overhead.

KEYWORDS: OCR, text line segmentation, handwritten documents, dynamic weights
1. INTRODUCTION
The process of extracting lines from a document is used as a basis for document structure
extraction, handwriting recognition or text enhancement. There are numerous methods
([13], [9], [5]) that address the printed document line extraction problem which is usually
reduced to global skew search (the text lines are parallel with each other, but not necessarily
horizontal). On the other hand, when dealing with handwritten document the problem
becomes more complex ([14], [12], [11], [10], [8], [7], [6], [4]): the lines are not parallel
with each other, same letters do not have same sizes, text lines have letters that extend to
other text lines, higher text organization cannot be defined (paragraphs, subsections etc.).
In any image, document or not, with handwritten or printed text, each pixel can be
associated with an importance (i.e., how much that pixel influences the overall image). In
this paper the importance of the pixels in an image is given by the energy map, which is
further used to compute the energy cost map and, finally, the seam carving algorithm is
used to detect the text lines.
2. RELATED WORK
There are many techniques that address the problem of text line segmentation. They are
generally divided into two categories:
 techniques that work directly on the gray scale image
 techniques that use, as input, the binary representation of the image.

1 Associate Professor PhD Eng, ”Politehnica” University of Bucharest Romania, 060042 Bucharest,
costin.boiangiu@cs.pub.ro.
2 Engineer, VirtualMetrix Design Romania, 060104 Bucharest, mihaicristian.tanase@gmail.com
3 Engineer, European Patent Office (EPO) Germany, Bayerstrasse 34, Munich, rioanitescu@epo.org

For gray scale images, there are algorithms that use the so-called projection profiles, representing the sum of all the pixel values along a given direction. This method, applied in a pre-computed direction which is usually obtained using the Hough transform, can accurately identify text lines in printed documents, but fails to produce even moderate results when applied to handwritten documents.
Different workarounds for this problem were proposed: statistical division of the text page, which tries to closely follow the local skew of the handwritten text; smearing methods, which fill the space in-between text characters; the Hough transform, which selects all lines that have the accumulated value in Hough space greater than a given threshold; the repulsive-attractive method, in which the current line is attracted by the pixels from the text and is repelled by previously found text lines. For binary images, there are various grouping methods that can be used in addition to the above mentioned techniques.
In this paper, the energy map of a gray-scale image is used to compute the energy cost map
which, in turn, represents the input for the seam carving algorithm. The result is a
combination between the original document and the lines that follow the text, which represent the segmentation boundaries.
Image as an energy map
Figure 1. Exemplification of the energy map concept: starting from an original image, removing insignificant pixels versus removing information pixels (before and after)
The energy map of an image represents the information quantity map. Each pixel in the
energy map has a value associated with it that represents the amount of information that the
given pixel stores in the image. If a high energy pixel is removed from the image, the
resulting image has a significant drop in detail, whereas removing a low energy pixel results
in a negligible information loss.
Figure 1 illustrates this concept. Removing a set of pixels, each belonging to a
homogeneous area will result in almost insignificant information loss compared to
extracting the second set of pixels.
From this example, an observation can be made: pixels belonging to homogeneous areas
have low energy and pixels belonging to areas with high variations have high energy. This
observation leads to the idea of viewing the energy map as the derivative of the original image.
In [4], the energy map is computed as the distance transform of the binary image (the objects
of interest in the image are the text characters). The output of distance transform represents


a map of pixels, each assigned the distance from that pixel to the closest text pixel. For
this energy map, high energy represents areas furthest from the text and small energy
represents text or areas very close to text.
Three methods are used for calculating the energy map:
1. Magnitude of the gradient, namely |∇f| = sqrt((∂f/∂x)^2 + (∂f/∂y)^2), whereby pixels at the edges receive higher energies.
2. Gaussian 1st derivative
3. Inverse distance transform
To use the distance transform, first, the original image needs to be transformed into a binary image (a binary threshold is applied), then the distance transform algorithm is applied, which outputs an image where small values are associated with the text and high values with the
background. To have a consistent implementation (i.e., high energy represents high
variations), the last image needs to be inverted, which results in an image where the high
energy values are associated with text.
3. DYNAMIC WEIGHTS
The next step in the process is to calculate the energy cost map. Similar to [11] and [4], this
cost map assigns to each pixel a minimum value calculated with the formula below:

M(i, j) = 2 * e(i, j) + min over k of ( w(neighbors_number/2 + k) * M(i + direction, j + k) )

where -neighbors_number/2 < k < neighbors_number/2 and direction represents the direction of processing the energy map, which is either left-to-right (+1) or reverse (-1).
To obtain a more accurate energy sum map, both energy sum maps (one for each direction) are
computed and the result is linearly interpolated, similar to [11]; the results are displayed in Figure 2.
To avoid discontinuous lines, a maximum neighborhood of 3 is used (i.e., neighbor pixels
are 8-connected). The weights are constant throughout the energy cost calculations. As shown
in the figures from the "Tests and Results" section, the results are good to excellent when the
document has close-to-horizontal lines, but when the lines are skewed, the constant
weight allocation leads to erroneous detection. This is where the idea of dynamic weight
allocation comes into play. To accurately follow the text, the direction of the text lines has
to be available at each pixel. In other words, for each pixel, the weights are calculated
according to the local direction of the text line at that pixel.
Figure 2. Energy sum map results (energy cost map left-to-right; energy cost map right-to-left; linear interpolation of the two energy cost map images)


Figure 3. Energy cost map of an image (original image and its energy cost map)


The algorithm of dynamic weights allocation is presented below.
 Input: original image, window width, window height
 For every pixel:
o For every variation of the window (by skewing)
 Sum up the pixels in the new skewed window
 If the sum is less than the current minimum then
 Update the current minimum
 Save the window variation
o Rescale the saved variation to [-neighbors_number / 2,
+neighbors_number / 2]
o Calculate the weights given the rescaled saved window variation
Experimentally, the best results were obtained when the width and height of the scanned
window are set to at least the average character width and height values. The minimum and
maximum variation angles are obtained empirically: the documents verified showed a
maximum skew of approximately 20 degrees. Using this maximum angle, the minimum
variation is set to -20 degrees and maximum to 20, leading to a 40 degrees interval, which
is more than enough for most documents.
To calculate the weights, a distance function is used:

$$f(x, x_0, y, y_0) = \sqrt{(x - x_0)^2 + (y - y_0)^2}$$
Because of the way the energy cost function is calculated, the formula becomes:
$$f(y, y_0) = \sqrt{1 + (y - y_0)^2}$$
which is equivalent to:
$$f(x) = \sqrt{1 + x^2}$$
To use the rescaled saved window variation, the formula becomes:
$$f(x, \text{var}) = \sqrt{1 + (x - \text{var})^2}$$
which is equivalent to the geometric translation of the variable "x" by "-var" units.
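To make the weight allocation concrete, the following sketch (with assumed parameter values and function names, not taken from the original implementation) estimates the local skew at a pixel by scanning sheared windows, as in the algorithm above, and then derives the weights from the distance formula just given:

```python
import numpy as np

def local_skew_variation(energy, i, j, win_w, win_h, angles_deg, half):
    """Estimate the local text direction at pixel (i, j).

    For each candidate skew angle, the pixels of a sheared window anchored at
    (i, j) are summed; the angle giving the smallest sum wins and is rescaled
    to the neighbour-index range [-half, +half].
    """
    rows, cols = energy.shape
    best_angle, best_sum = 0.0, np.inf
    for a in angles_deg:                       # e.g. np.arange(-20, 21) degrees
        slope = np.tan(np.radians(a))
        s = 0.0
        for dx in range(win_w):                # shear: shift each window column vertically
            r0 = int(round(i + dx * slope))
            for dy in range(-(win_h // 2), win_h // 2 + 1):
                r, c = r0 + dy, j + dx
                if 0 <= r < rows and 0 <= c < cols:
                    s += energy[r, c]
        if s < best_sum:
            best_sum, best_angle = s, a
    a_min, a_max = min(angles_deg), max(angles_deg)
    return (best_angle - a_min) / (a_max - a_min) * 2 * half - half

def dynamic_weights(var, half):
    """Weights from the distance function f(x, var) = sqrt(1 + (x - var)^2)."""
    ks = np.arange(-half, half + 1, dtype=float)
    return np.sqrt(1.0 + (ks - var) ** 2)
```

With this weighting, the neighbor lying along the estimated text direction receives the smallest weight and is therefore favored by the minimization in the cost map recurrence.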
The number of operations necessary is:
operations  w * h * nr_variations * window_width * window_height
The last step is to identify the text lines, known as the "seam carving" process. In this step
the energy cost map is scanned, searching for minimal values. The algorithm is:
Input: energy cost map having width w and height h, neighbors_number
Output: h lines

For each line (1..h):
    For each column (1..w):
        P = the pixel to the right that has the minimum cost in the energy cost map and belongs to the neighborhood defined by neighbors_number
        Add P to the current carved seam, identified by the line index
        If P belongs to a previously found seam, then
            Stop this loop
            Continue to the next line
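A compact sketch of this tracing step could look as follows (illustrative only; `M` is assumed to be the interpolated energy cost map computed earlier):

```python
def carve_seams(M, neighbors_number):
    """Trace one horizontal seam per image row through the cost map M.

    Starting from every row on the left border, the seam repeatedly moves to
    the cheapest pixel to the right inside the allowed vertical neighborhood
    and stops early when it merges into an already extracted seam.
    """
    rows, cols = M.shape
    half = neighbors_number // 2
    taken = set()                       # pixels already owned by a seam
    seams = []
    for start_row in range(rows):
        r, seam = start_row, [(start_row, 0)]
        for c in range(1, cols):
            lo, hi = max(0, r - half), min(rows - 1, r + half)
            r = min(range(lo, hi + 1), key=lambda rr: M[rr, c])
            if (r, c) in taken:         # merged into a previously found seam
                break
            seam.append((r, c))
        taken.update(seam)
        seams.append(seam)
    return seams
```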
4. TESTS AND RESULTS
To test the dynamic weights method, a number of images of handwritten documents have
been used. There are three types of tests, each calculating the energy map differently, as
described in Section 3.
The database of test input images consisted of about 500 documents of different types:
old letters, library index files, receipts, patents, and miscellaneous printed documents
(with pronounced skew and complex layouts, for which the automatic scanning deskew
phase had failed). Despite its relatively small size, the database was considered
reasonably relevant because of the great variety of input document types and, as a result,
the great variety of problems that occurred in the automatic batch tests carried out.
Some of the results are discussed below, in order to emphasize the possible solutions, the
evolution of the proposed approach, its limitations, strong and weak points and
computational costs. One aspect, however, that cannot be addressed using a database of this size
is evaluating the robustness in providing successful handwritten text segmentation in all
cases.
(Each part of Figure 5 shows the original image and the resulting line segmentation.)

Figure 5. (a) Gaussian 1st derivative using constant weights, test 1
Figure 5. (b) Gaussian 1st derivative using constant weights, test 2
Figure 5. (c) Magnitude of the gradient using constant weights, test 1
Figure 5. (d) Magnitude of the gradient using constant weights, test 2
Figure 5. (e) Inverse distance transform using constant weights, test 1
Figure 5. (f) Inverse distance transform using constant weights, test 2
Figure 5. (g) Gaussian 1st derivative using dynamic weights, test 1
Figure 5. (h) Gaussian 1st derivative using dynamic weights, test 2
Figure 5. (i) Magnitude of the gradient using dynamic weights
Figure 5. (j) Inverse distance transform using dynamic weights

5. CONCLUSION AND FUTURE WORK


This paper describes an improvement to text line segmentation for handwritten documents
based on seam carving using an energy map. Experimental results showed that using constant
weights when calculating the energy cost map leads to poor results when the text lines are
skewed. For this reason, a dynamic allocation of weights based on local text direction
information is proposed. In this way, pixels from the same line have a higher probability of
being selected when calculating the minimum values in the energy cost map.
As mentioned in the beginning of this paper, the dynamic weight calculation has very high
computational cost. One method to reduce this complexity is to set smaller parameter values
for the number of variations, window width and height. Other methods of optimization can
also be considered and applied in conjunction. Also, the seam carving algorithm can be
improved by constraining line curvature, which can solve the problem in the picture
“Inverted distance transform – Dynamic weights”, in which a seam passes through the space
between two words and intersects another seam.
The work presented in this paper is a building block of a much bigger project: a complete,
modular, fully automatic content conversion system developed for educational purposes. In
the near future, with the completion of the system and the automatic batch
processing of large image databases of all kinds of skewed documents (containing
handwriting or not), the dynamic weights method will be fully evaluated in order to assess
its real potential as a preprocessing phase for OCR applied to handwritten documents.
6. REFERENCES
[1] L. Likforman-Sulem, A. Zahour, B. Taconet, "Text Line Segmentation of Historical
Documents: a Survey", International Journal on Document Analysis and Recognition, vol.
9, no. 2/4, pp. 123-138, 2007.
[2] R. Strand, F. Malmberg, S. Svensson, "Minimal Cost-Path for Path-Based Distances",
Image and Signal Processing and Analysis, pp. 379-384, 2007.
[3] S. Avidan, A. Shamir, "Seam Carving for Content-Aware Image Resizing”, ACM
Siggraph, article 10, 2007.
[4] R. Saabni, J. El-Sana, "Language-Independent Text Lines Extraction Using Seam
Carving", Document Analysis and Recognition (ICDAR), pp. 563-568, 2011.
[5] R. P. dos Santos, G. S. Clemente, T. I. Ren and G. D.C. Calvalcanti , "Text Line
Segmentation Based on Morphology and Histogram Projection", Document Analysis and
Recognition (ICDAR), pp. 651- 655, 2009.
[6] V. Papavassiliou, T. Stafylakis, V. Katsouros, G. Carayanni, "Handwritten document
image segmentation into text lines and words", Pattern Recognition, vol. 43, no 1, pp.
369-377, 2010.
[7] X. Du, W. Pan, T. D. Bui, "Text Line Segmentation in Handwritten Documents Using
Mumford-Shah Model", Pattern Recognition, vol. 42, no 12, pp. 3136-3145, 2009.
[8] N. Tripathy and U. Pal , "Handwriting segmentation of unconstrained Oriya text",
Frontiers in Handwriting Recognition, pp. 306-311, 2004.
[9] S. Saha, S. Basu, M. Nasipuri and D. Kr. Basu, "A Hough Transform based Technique for
Text Segmentation", Journal of Computing, vol. 2, no. 2, 2010.
[10] D. J. Kennard, W. A. Barrett, "Separating Lines of Text in Free-Form Handwritten
Historical Documents", Document Image Analysis for Libraries, pp. 12-23, 2006.
[11] Asi, R. Saabni, J. El-Sana, "Text Line Segmentation for Gray Scale Historical Document
Images", Proceedings of the 2011 Workshop on Historical Document Imaging and
Processing, 120-126, 2011.
[12] Bar-Yosef, "Input sensitive thresholding for ancient Hebrew manuscript", Pattern
Recognition Letters, vol. 26, no. 8, pp. 1168-1173, 2005.
[13] M. Arivazhagan, H. Srinivasan and S. Srihari, "A Statistical approach to line
segmentation in handwritten documents", Proceedings of SPIE, 2007.
[14] I, Bar-Yosef, N. Hagbi, K. Kedem, I. Dinstein, "Line segmentation for degraded
handwritten historical documents", Document Analysis and Recognition, pp. 1161-1165,
2009.


A METHODICAL STUDY OF THE ROLE OF TRUST AT VARIOUS DEVELOPMENT STAGES OF VIRTUAL ORGANIZATIONS

Muhammad Yasir1
Abdul Majid2
ABSTRACT
Virtual organization (VO) is an outcome of technological advancements and the
introduction of structural flexibilities in organizations. It is a temporary combination of
internally independent parties to exploit the emerging market opportunities. The need to
demonstrate extra efficiency in its limited life-span, geographical distribution of parties,
lack of face-to-face communication, and the absence of complete information about
partners working at a distance call for establishing trust-based relationships in virtual
organizations. Almost all researchers agree upon the importance of trust for virtual
organizations, but its nature and role at the various development stages of these organizations
are still unexplored. Therefore, in this research we have conducted a methodical
investigation to propose models explaining the nature and role of trust at various
development stages of virtual organizations. The results of this research would help
members formulate effective trust-based relationships that could ultimately increase the
efficiency and performance of virtual organizations.

Key Words: Virtual organization, trust, development stages of virtual organizations


1. INTRODUCTION
Virtual organization (VO), is a contemporary organizational form with its most significant
features being: the use of information and communication technologies (ICTs) for
information sharing and communication; inter-agency collaboration; geographical
distribution and the inability to maintain frequent face-to-face interactions; and depending
more on trust than on control (Davidow & Malone, 1992; Travica, 1997). All these
characteristics developed among virtual organizations as a result of advancements in ICTs
and the need for flexible structures; but for maintaining virtuality and ensuring its benefits
for members, trust is the most crucial element.
Researchers have explained several forms of trust related to VOs but a comprehensive
model that might explain the differences in the need for trust at various stages of
development of VOs has not been proposed. In this research, we have identified the nature
and forms of trust as described by researchers with respect to virtual organizations and have
proposed the types of trust that could fit at a particular development stage of VO. The
proposed model will help the researchers and practitioners in designing a relationship
structure based on optimum trust for ensuring an effective virtual organization.

1 Department of Management Sciences, Hazara University Mansehra, Pakistan, Email Address:


muhammadyasir@hu.edu.pk
2 Department of Management Sciences, Hazara University Mansehra, Pakistan, Email Address:

abdulmajid@hu.edu.pk

2. LITERATURE REVIEW
2.1. Virtual Organization
Virtual organizations emerged as a result of advancement in technology (DeSanctis &
Jackson, 1994) and the need for managerial and structural flexibility (Wang, 2000). Bultje
& Wijk (1998) explain that the concept of virtuality entails a state or condition that
includes any or all of these phenomena: unreal but looking real; immaterial and supported by
ICTs; potentially present; existing but changing. In the light of these characteristics, the
concept of VOs could be defined as groups comprising independent parties that link
together temporarily by means of ICTs to accomplish common objectives (Davidow &
Malone, 1992; Grenier & Metes, 1995). Travica (1997) claims that parties in VOs are
geographically distributed which makes it impossible for them to maintain face-to-face
communication hence, there is a greater need to rely on ICTs. Moreover, there is an
increased significance of developing trust among parties as identified by the pioneering
researchers on the notion of trust in VOs. Cravens and Piercy (1994) explain that establishing
trust is significant because parties have to share their core competencies with the others
while collaborating towards the accomplishment of common objectives. Handy (1995),
meanwhile, claims that the key to establishing an effective VO is to determine the mechanisms to
run the systems based more on trust than on control.
2.2. Alternative Forms of Virtual Organizations
Virtuality with respect to organizations has several connotations, as explained by Bultje and
Wijk (1998). The relationship structure of parties, lifespan of organization, reliance on ICTs
or face-to-face communication, and the decision to employ trust or control are all the factors
that define the nature and alternative forms of virtuality in organizations. Some of the
variations in it as identified and discussed by the researchers include:
a. network structures (Miles & Snow, 1995; Crossman & Lee-Kelley, 2004);
b. virtual teams (Jarvenpaa & Leidner, 1999; Lee-Kelley, Crossman & Cannings, 2004; Kimble, 2011);
c. collaborative networked organizations (Msanjila & Afsarmanesh, 2007; Afsarmanesh & Analide, 2009);
d. virtual organization breeding environments (Camarinha-Matos & Afsarmanesh, 2007; Msanjila & Afsarmanesh, 2008);
e. virtual project teams (Oertig & Buergi, 2006);
f. virtual corporation (Davidow & Malone, 1992);
g. virtual enterprise (Mun et al. 2009);
h. virtual collaborative relationship (Paul & McDaniel, 2004);
i. virtual organization (Mowshowitz, 1997; Mun, Shin & Jung, 2011).
The forms of virtuality in organizations identified above could demonstrate different
characteristics and development stages which have not been covered here as they do not
come under the scope of this research. However, ‘virtual organization’ is taken as an
inclusive term to explain all the other forms of virtuality in organizations. Furthermore, for
the purpose of this research, we have identified the standard stages of team development
and studied the relevance of alternative forms of trust with each of these stages.


2.3. Development Stages of Virtual Organizations


Virtual organizations, although widely varying in their nature and form, demonstrate
several similar characteristics. Therefore, we have employed a standard model of group
development introduced by Tuckman (1965) to identify the development stages of VOs.
This model has also been used by Furst et al. (2004) as a standard for their study on
managing the lifecycle of virtual organizations.
According to Tuckman (1965) there are four stages of group development: forming,
storming, norming, and performing, as shown in Figure I. Forming, according to him, is the
first stage at which group/ team members implicitly or explicitly share information about
themselves and the tasks. In the second stage of storming, conflicts emerge among group
members as they attempt to clarify the goals as well as the tasks assigned to each of the group
members. The third stage of group development is norming, in which groups successfully
resolve conflicts and develop group norms to be followed by group members. The
final stage in many groups is performing, at which the members of the group collaborate
towards the accomplishment of joint objectives. However, temporary or project-based
structures, as virtual organizations usually are, also have a final stage of adjourning, at which
the group activities come to a close after the tasks are completed.

Figure I. Development Stages of Virtual Organizations (forming, storming, norming, performing, adjourning)


Furst et al. (2004) argue that establishing trust is very important at the forming stage of
virtual organizations to develop inter-personal relationships, as well as at all the later stages
to coordinate the activities of all the parties. These arguments have been further explained
in the following sections with the help of a review of literature on trust in virtual
organizations.
2.4. Trust in Virtual Organizations
The discussion on the significance of trust for virtual organizations started with the work of
Charles Handy (1995). Handy observes that establishing trust among members is pivotal
for the success of a virtual organization. However, he emphasizes that trust requires face-
to-face communication among parties and hence is limited by boundaries. This notion of trust,
although still popular among virtual organization researchers, has subsequently given way
to other research, such as by Jarvenpaa and Leidner (1999), Panteli and Duncan
(2004), and Kanawattanachai and Yoo (2007), that advocates the presence of trust even at the start of a

virtual relationship and in global teams linked by ICTs. They argue that information
technology mediated formal and informal communication could also help in developing
effective trust among parties. Regardless of the nature and form of organization, it is now
an established fact that trust among parties is extremely important for a successful virtual
relationship. Important for this research, however, is an understanding of the types of trust
in VOs.
Meyerson, Weick, and Kramer (1996) introduced the notion of swift trust in relation to
VOs. They explain that swift trust is a fragile form of trust that develops among parties as
soon as they come closer to each other for the accomplishment of joint objectives. The
concept was further advocated by Jarvenpaa, Knoll and Leidner (1998) and jarvenpaa and
Leidner (1999) who argue that swift trust could develop even among parties in a global
virtual organization. Others researchers such as McInerney (2002); Ramo (2004); Oertig
and Buergi (2006); Prasert and Yoo (2007); Webster and Wong (2008); Lambrechts et al
(2009); Altschuller and Benbunan-Fich (2010) and Al-Ani (2011) have also supported the
development of swift trust among parties in VOs. However, these researchers have a variety
of views on the emergence and role of swift trust. Whereas, many of them consider its
presence even at the time of starting a relationship, others such as Ramo (2004) and Oertig
and Buergi (2006) argue that it is very fragile and requires right communication at the right
time. Webster and Wong (2008) attribute its development to the clarity of roles of all the
parties while Al-Ani et al (2011) argue that swift trust would develop only in short-term
VOs when there is no time for trust development.
Paul and McDaniel (2004) present three forms of trust related to VOs i.e. calculative,
competence and relational trust. Calculative trust, according to them, is based on perceiving
trust as a system of economic exchange. Panteli and Sockalingam (2005) name it as calculus
based trust while Hsu et al. (2007) have used the term economy-based trust to explain the
same phenomenon. Panteli and Sockalingam (2005) explain that it is based on the rewards
or punishment attached with pursuing or violating the conditions of trust. All these
researchers agree that calculus/economy based trust should be developed at the early stages
of VO development. The second form of trust in VOs, as identified by Paul and McDaniel
(2004), is competence trust, which is based on the belief that the other party is competent
enough to perform its part of the joint activity. It is also termed knowledge based (Panteli
and Sockalingam, 2005) or information based (Hsu et al. 2007) trust. The third type of trust, i.e.
relational or benevolence or identification based trust, is based on the feelings and personal
attachments of a party towards the other/s, which operate irrespective of the business
motives. Panteli and Sockalingam (2005) argue that it is very unusual for short-term VOs
to develop identification based trust.
Radin (2006) classifies trust into thin, thick and deep trust. She explains that deep trust
develops in organizations when parties work together to achieve a certain goal while thick
trust develops among individuals with their strengthening interpersonal relationships. Thin
trust, on the other hand, is generalized like swift trust, which Peters and Manz (2007)
proclaim as stereotype based trust. Kanawattanachai and Yoo (2007) and Berry (2011) claim that there
are only two fundamental classifications of trust in VOs, i.e. cognitive based trust that is
developed with information technology mediated communication and affect based trust that
depends on the social and emotional skills of people. Lambrechts et al. (2009) also divide trust
in VOs into two forms, claiming that there are only swift trust and institutional trust.

According to Msanjila and Afsarmanesh (2010), trust in VOs could be classified into four
types: role based trust which develops to facilitate the adoption of responsibilities related
to the role of parties in collaborative relationship; reputation based trust that depends on the
opinion and judgment of people in a community; interaction based trust that depends on the
past experiences of working with a party; and risk based trust that parties develop among
each other on the basis of the reduced number of risks involved in a particular relationship.
However, in a recent study Hardwick et al (2013) present a simplified classification of trust
in VOs, suggesting that it has only two forms i.e. social based trust that depends on goodwill
and social relationships among parties, and technical based trust which depends on work-
related competences.
3. ROLE OF TRUST AT INDIVIDUAL DEVELOPMENT STAGES OF
VIRTUAL ORGANIZATIONS: A DISCUSSION
This section explains the role of trust at various stages of the development of VOs. We take
standard models to identify the development stages of VOs and trust and examine the
relevance of alternative forms of trust with each of these stages. As mentioned above,
Tuckman (1965) identified four fundamental stages of group development i.e. forming,
storming, norming, and performing which are followed by adjourning in the temporary
groups. Almost all the researchers on trust in VOs agree that trust is lower at the initial
stages of relationship but it exists in the form of swift trust. As the organization moves to
next stages of development, trust starts building among parties, reaching the highest levels
as the relationships are fully developed. The trend of trust at various stages of VO
development is shown in Figure II.

Figure II. Level of trust at various stages of virtual organization development

3.1. Forms of Trust at Each Stage of VO Development


As a result of the review of literature we were able to identify the following forms of trust
in VOs. Many of these forms could be categorized as different terminologies to represent
the same phenomenon. However, they can be sorted on the basis of their relevance and
impact on each of the development stages of VOs.
3.1.1. Trust at Forming Stage
As explained in the review of literature and indicated in Figure II, parties approach each
other with an initial trust at the commencement of a virtual relationship. This is a fragile
form of trust variously termed as swift trust, stereotype based trust, or thin trust. Although
trust at this stage is extremely weak, it is considered as necessary for VOs to initiate their
collaboration. This trust is usually based on personal judgments or past experiences of
working with similar parties.

3.1.2. Trust at Storming Stage


At the stage of storming conflicts start arising among parties as they try to reach agreements
on the distribution of tasks and the terms and conditions of virtual collaboration. Trust is
also shaken at this stage of development, although momentarily, as the parties negotiate to
reach an agreement among them and move on to the next stage of development. Swift trust
might play its role at the stage of storming too, but more important at this stage is the
development of calculus based trust. Calculative, calculus based, or economy based trust is
significant at the stage of storming as parties attempt to reach agreement while resolving
their conflicts, based on the belief that the resulting collaboration would be rewarding for
them.
3.1.3. Trust at Norming Stage
At the stage of norming, parties usually have well-developed norms and work procedures
to guide their collaboration. Therefore, as indicated in Figure II, trust is high and growing
at this stage of VO development. The forms of trust that define relationship of parties at this
stage are technical, cognitive, competence, information, or knowledge based trust. This
trust is based on the cognitive information and knowledge that parties receive about each
other as a result of their relationship. Furthermore at the stage of norming, managerial and
technical skills as well as competencies to perform the tasks effectively are the factors that
positively affect trusting relationship among parties; as is signified by competence based
and technical based trust.
3.1.4. Trust at Performing Stage
As indicated in Figure II, trust is at its highest level at the stage of performing in VO
development. This is mainly because performing is the final stage of development at which
all the factors contributing towards the formation of effective trust relationships are fully
functional. Moreover, with the passage of time in VO lifecycle, interpersonal relationships
based on identification and benevolence start developing among parties. Therefore, the
types of trust which become more active particularly at this stage of development are thick
trust, social based trust, relational, benevolence or identification based trust.
However, our research supports the argument of researchers such as Panteli and
Sockalingam (2005) who claim that identification based trust occurs very rarely in
temporary virtual organizations. Therefore, identification based trust and deep trust that
develop with the development of personal relationships among individuals are uncommon
in VOs. These types of trust would become effective only in the case of long-term projects
or the situations in which parties continue their collaboration for a new task after
successfully accomplishing their objective attached with the previous one. Furthermore,
Figure II also indicates that at the stage of adjourning in temporary VOs, trust does not
suddenly disappear but there is a rapid decline in it. As the parties usually are
geographically distributed with no face-to-face communication among members, stability
of trust could be ensured only if the parties continue to work on another collaborative task.
The resulting model identifying the types of trust at each stage of development of VOs is
given out in Figure III.


Figure III. Forms of trust at individual development stages of virtual organizations:
Forming – swift trust, stereotype based trust, thin trust;
Storming – swift trust + calculative, calculus based, economy based trust;
Norming – swift + calculative + technical, cognitive, competence, information, knowledge based trust;
Performing – swift + calculative + knowledge based + thick trust, social based trust, relational, benevolence or identification based trust.


4. CONCLUSION
This paper is based on a methodical study of the role of trust at each development stage of
virtual organization. As a result of the review of relevant literature, we have identified the
types of trust in virtual organizations. These have been further analyzed to investigate the
role of each type of trust and its significance for a particular development stage of VOs.
The resulting model indicates that swift trust is important at the forming stage, while at
the storming stage the effects of swift trust are still present but it is more important to
develop calculus based trust. The effects of both swift and calculus based trust help the
parties in developing technical and knowledge based trust, whereas swift, calculus, and
knowledge based trust combine to build relational trust at the performing stage of VOs.
Finally, we propose that identification based and deep trust among individual workers
develop only if the organizations continue to work together in the long term, and are not very effective
in temporary virtual relationships.
The results of this research would help in providing a better understanding of the role of trust
at each of the development stages of VOs. The same could also be investigated empirically
in future research endeavors.
5. REFERENCES
 Afsarmanesh, H & Analide, C. (2009). Virtual enterprise methods and approaches for
coalition formation- Guest editorial. International Journal of Production Research, 47(17),
4655-4659.
 Al-Ani, B., Horspool, A., & Bligh, M. C. (2011). Collaborating with ‘virtual strangers’:
Towards developing a framework for leadership in distributed teams. Leadership, 7(3), 219-
249.
 Altschuller, S. & Benbunan-Fich, R. (2010). Trust, performance, and the communication
process in ad hoc decision-making virtual teams. Journal of Computer-Mediated
Communication, 16, 27-47.


 Berry, G. R. (2011). A cross-disciplinary literature review: Examining trust on virtual


teams. Performance Improvement Quarterly, 24(3), 9-28.
 Bultje, R. & Wijk, J. (1998). Taxonomy of virtual organizations, based on definitions,
characteristics and typology. Newsletter of Virtual-organization.net, 2(3), 7-21.
 Camarinha-Matos, L. M. & Afsarmanesh, H. (2007). A framework for virtual organization
creation in a breeding environment. Annual Reviews in Control,31, 119-135.
 Cravens, D. W. & Piercy, N. F. (1994). Relationship marketing and collaborative networks
in service organizations. International Journal of Service Industry Management, 5(5), 39-
53.
 Crossman, A. & Lee-Kelley, L. (2004). Trust, Commitment and Team Working: The
Paradox of Virtual Organizations. Global Networks, 4(4), 375-390.
 Davidow, W. H. & Malone, M. S. (1992) The virtual corporation: structuring and
revitalizing the corporation for the 21st century. New York: Harper Collins Publishers.
 DeSanctis, G. & Jackson, B. (1994). Coordination of information technology management:
Team-based structures and computer-based communication systems. Journal of
Management Information Systems, 10(4), 85-110.
 Furst, S. A. Reeves, M., Rosen, B. & Blackburn, R. S. (2004). Managing the life cycle of
virtual teams. Academy of Management Executives, 18(2), 6-20.
 Grenier, R. & Metes, G. (1995). Going virtual - Moving your organization into the 21st
century. New Jersey: Prentice Hall.
 Handy, C. (1995). Trust and the virtual organization. Harvard Business Review, 73(3), 40-
50.
 Hardwick, J. Anderson, A. R. & Cruickshank, D. (2013). Trust formation processes in
innovative collaborations: Networking as knowledge building practices. European Journal
of Innovation Management, 16(1), 4-21.
 Hsu, M., Ju, T. L., Yen, C. & Chang, C. (2007). Knowledge sharing behavior in virtual
communities: The relationship between trust, self-efficacy, and outcome expectations.
International Journal of Human-Computer Studies, 65, 153-169.
 Jarvenpaa, S. L. & Leidner, D. E. (1999). Communication and trust in global virtual teams.
Organization Science, 10(6), 791-815.
 Jarvenpaa, S. L., Knoll, K. & Leidner, D. E. (1998). Is anybody out there? Antecedents of
trust in global virtual teams. Journal of Management Information System, 14(4), 29-64.
 Kanawattanachai, P. & Yoo, Y. (2002). Dynamic nature of trust in virtual teams. Journal of
Strategic Information System, 11, 187-213.
 Kanawattanachai, P. & Yoo, Y. (2007). The impact of knowledge coordination on virtual
team performance over time. MIS Quarterly,31(4), 783-808.
 Kimble, C. (2011). Building effective virtual teams: How to overcome the problems of trust
and identity in virtual teams. Global Business and Organizational Excellence, 30(2), 6-15.
 Lambrechts, F., Sips, K., Taillieu, T., & Grieten, S. (2009). Virtual organizations as
temporary organizational networks: boundary blurring, dilemmas, career characteristics
and leadership. Argumenta Oeconomica, 1(22), 55-82.
 Lee-Kelley, L., Crossman, A. & Cannings, A. (2004). A social interaction approach to
managing the "invisibles" of virtual teams”. Industrial Management & Data Systems,
104(8), 650- 657.
 Meyerson, D., Weick, K. E., & Kramer, R. M. (1996). Swift trust and temporary groups, In
Kramer, R.M. and Tyler, T.R. (Eds), Trust in Organizations: Frontiers of Theory and
Research. Thousand Oaks, California: Sage Publications.
 Miles, R. E. & Snow, C. C. (1995). The new network firm: A spherical structure built on a
human investment philosophy. Organizational dynamics, 23(4), 4-18.


 Mowshowitz, A. (1997). Virtual organization. Communications of the ACM, 40(9), 30-37.


 Msanjila, S. S. & Afsarmanesh, H. (2007). Modelling trust relationships in collaborative
networked organisations. International Journal of Technology Transfer and
Commercialisation, 6(1), 40-55.
 Msanjila, S. S. & Afsarmanesh, H. (2008). Trust analysis and assessment in virtual
organization breeding environments. International Journal of Production Research, 46(5),
1253–1295.
 Msanjila, S. S. & Afsarmanesh, H. (2010). “FETR: a framework to establish trust
relationships among organizations in VBEs, Journal of Intelligent Manufacturing, 21, 251-
265.
 Mun, J., Shin, M., & Jung, M. (2011). A goal-oriented trust model for virtual organization
creation. Journal of Intelligent Manufacturing, 22, 345-354.
 Mun, J., Shin, M., Lee, K., & Jung, M. (2009). Manufacturing enterprise collaboration
based on a goal-oriented fuzzy trust evaluation model in a virtual enterprise. Computers &
Industrial Engineering, 56, 888-901.
 Oertig, M. & Buergi, T. (2006). The challenges of managing cross-cultural virtual project
teams. Team Performance Management, 12(1/2), 23-30.
 Panteli, N. & Duncan, E. (2004). Trust and temporary virtual teams: alternative
explanations and dramaturgical relationships. Information Technology & People, 17(4),
423-441.
 Panteli, N., & Sockalingam, S. (2005). Trust and conflict within virtual inter-organizational
alliances: a framework for facilitating knowledge sharing. Decision Support Systems, 39,
599-617.
 Paul, D. L. & McDaniel, Jr., R. R. (2004). A Field Study of the Effect of Interpersonal Trust
on Virtual Collaborative Relationship Performance. MIS Quarterly, 28,(2), 183-227.
 Peters, L. M. & Manz, C. C. (2007). Identifying antecedents of virtual team collaboration.
Team Performance Management, 13(3), 117-129.
 Radin, P. (2006). To me, it’s my life: Medical communication, trust, and activism in
cyberspace, Social Science & Medicine, 62, 591-601.
 Ramo, H. (2004). Moments of trust: temporal and spatial factors of trust in organizations.
Journal of Managerial Psychology, 19(8), 760-775.
 Travica, B. (1997). The design of the virtual organization: A research model. In Gupta, J.,
Association of information systems: Proceedings of the Americas conference on
information systems, August 15-17, 1997. Indianapolis, 417-419.
 Tuckman, B. W. (1965). Developmental Sequence in Small Groups. Psychological
Bulletin, 63, 384-399.
 Wang, S. (2000). Meta-management of virtual organizations: toward information
technology support. Internet Research: Electronic Networking Applications and Policy,
10(5), 451-458.
 Webster, J. & Wong, W. K. P. (2008). Comparing traditional and virtual group forms:
identity, communication and trust in naturally occurring project teams. The International
Journal of Human Resource Management, 19(1), 41-62.


M-TOURISM EDUCATION FOR FUTURE QUALITY MANAGEMENT

Ion Ivan1
Alin Zamfiroiu2
ABSTRACT
Tourism is a main source of revenue in the GDP of many countries. For 2012, the
relative contribution of tourism to GDP was 11.9% in Croatia, 6.5% in Greece and 1.5% in
Romania. The tourism industry is characterized by a high level of technology,
qualification of staff and quality management. Increasing the quality and productivity in
the industry is achieved through education and certification of the workforce. Nowadays, due
to advanced technologies and a permanent lack of time, education is done more and more
through mobile devices. The characteristics of the processes of education using computer
applications based on mobile technologies and the security requirements for M-Learning
systems are presented. A metric is constructed to determine the behavior of informatics
applications in tourism education.

Keywords: Tourism, Mobile Learning, Quality Management, Security


1. EDUCATION AND E-LEARNING IN TOURISM INDUSTRY
Tourism is represented by activities performed for recreation, for resting or for leisure time.
In some countries such as Croatia, Greece, Egypt or Thailand tourism is very important
because of the income and the contribution to GDP. According to The Authority on World
Travel & Tourism (2013) contributions in the GDP for the year 2012 were substantial. The
top ten countries in the hierarchy after the absolute value of direct contribution to GDP are
shown in Table 1.
Index  Country  Absolute contribution ($ billions)  Relative contribution (%)
1 United States 438.5 2.8
2 China 215.4 2.6
3 Japan 127.6 2.1
4 France 99.7 3.8
5 Italy 81.9 4.1
6 Brazil 76.9 3.4
7 Spain 73.3 5.4
8 Mexico 68.3 5.8
9 United Kingdom 58.4 3.1
10 Germany 55.4 1.8
65 Romania 2.6 1.5
Table 1. Absolute and relative direct contribution of tourism to GDP
In Romania, in 2011, the contribution to GDP was $2.5 billion. Thus, the increase from 2011 to 2012 was
$0.1 billion (from $2.5 billion to $2.6 billion), which represents an increase of 4% in 2012 compared to 2011.

1 The Bucharest University of Economic Studies, ionivan@ase.ro


2 The Bucharest University of Economic Studies, zamfiroiu@ici.ro

This increase shows that Romania is a country in which the tourism industry is growing.
This growth must be supported by quality tourism and management features, so that all
activities can be monitored and controlled.
In addition to direct income, tourism also contributes indirectly to GDP through tourism-related
service industries, which include transport and accommodation services.
Control of the quality of tourism growth is ensured by measures concerning:
 seasonality and repeatability, in terms of the periods in which people undertake
recreational and leisure activities;
 processes and activities to attract tourists to areas representative of local or
regional tourism;
 service packages offered to tourists through loyalty schemes and promotional packages, with
optional services included in the base price;
 the workforce in tourism services: persons employed in tourism must prove
adequate training for the working environment, offering tourists high quality
services according to their expectations.
According to Butnaru et al. (2012) and Aldebert et al. (2011), the quality of tourism services is
influenced most by staff performance, especially in large units. Packages, food,
accommodation, entertainment, maintenance or other activities can be found in the offers of
several travel agencies, whereas staff performance cannot be offered identically in several
locations. Therefore, certification and continuous training are required for staff working
in the field of tourism. Thus, restaurant staff should be trained in the daily setting of table
linen, and hotel staff in the arrangement of linen, towels and soap in the bathroom.
Staff employed in tourism do not have much free time; rather, they have short free
periods between different activities, so training must take place over short periods of time and
in different locations. For such training, the most suitable approach is training through mobile
devices, which are available regardless of location and time. Mobile technology increasingly
penetrates the tourism industry, according to Ivan (2009).
Education involves reading courses on the mobile device and viewing videos with
demonstrations of placing cutlery, linen and towels.
Course fees are paid via the mobile operator: the amount is added to the phone bill and
the operator transfers the money to the education service provider.
Testing is also done through mobile devices. To obtain certification, the user then has
to travel during the season to an examination center where the final exam is taken, after which
the diploma is obtained.
2. FEATURES OF M-EDUCATION PROCESSES
Lee and Salman (2012) define the term M-Education as "Science in Hand"; it requires the use
of mobile devices to obtain information, to read courses or to train any person.
M-Education processes have the following features:
 diversity of users within the set of people who use mobile devices;


 distinct but homogeneous categories of users, such as teachers, students,
administrators, and developers of tutorials and learning materials for the platform; these
groups shape the modules of M-Learning applications, each category of users having
a specific module within the application;
 the application can be accessed anywhere, regardless of location: from
the car, in public transport, in the park, at work or elsewhere;
 short interaction time, because the application is accessed in various places and
users are not available for very long periods;
 small size of the screen that displays information: mobile devices are characterized
by small screens, so the information must be tailored to the small size of the screen;
 the amount of information transmitted via the Internet must be reduced because of the high
costs imposed by mobile phone operators for data traffic;
 the presentation of information should be done through images and sounds, because
a very long text is hard to follow on a small screen such as that of a mobile device;
 use of a common vocabulary known to all users; we recommend using
terms and symbols known to users from other similar applications, so existing
applications should be analyzed and a vocabulary of terms built for the new application;
 prioritization of frequently used items, placing them so that they are easily
accessible in order to speed up user interaction; thus a profile
of the user is created and the necessary information is provided quickly;
 constructing a map of the application so that the user knows how to
quickly find the desired information within the application;
 security of the confidential information stored about users logged into the application;
the information users provide at registration is kept confidential and secure;
 friendly, citizen-oriented interface; text input should be as
short as possible, and for any text entry the application must come up with auto-complete
suggestions from which the user can select the desired option;
 innovative applications bring services closer to the user and perform certain services or
calculations very quickly and on the spot, so the user is not forced to wait
very long; users are also allowed to upload material into the application and bring
their own contribution to the development of the platform, from the experience gained
through working in the field; content uploaded by users is validated by administrators
and then made available to the other users who use the platform for training;
 automatic testing, with complete test results provided to users immediately
after the test, so that the user is not forced to wait; in this way the application ensures staff
training, testing and certification exams;
 independence from administrators and application developers: the application must
be functional regardless of time or the availability of certain persons.
These particularities apply to all applications in the M-Learning environment. They
determine the characteristics of mobile applications used in the educational
system and determine the quality of mobile applications.
3. SECURITY REQUIREMENTS IN M-EDUCATION
Due to the non-homogeneity of the target group, m-education applications, like all mobile
applications, must meet certain security requirements, according to Ivan (2013):


 authenticate each user and, depending on the user type, direct them to the proper
module: teachers are directed to the module for teachers, students to the
module for students, and administrators to the administration module of the platform;
 ensure the consistency and accuracy of input data through validation at entry time;
validation must be complex so that the data is controlled adequately and rigorously;
 permit a user to access a single module at a time, so that a student taking an exam does not have
access to courses or tests during the exam; such a user is redirected to a module created
for this purpose, Figure 1; thus, if the user tries to access another module, he will be
automatically directed to the module already accessed;

Figure 1. Controlled access to the modules (the student, identified by a student ID, is directed to the Courses, Tests or Exams module)


 control of the data existing on the platform in order to preserve the integrity of the information
provided by the platform; because the platform is highly dynamic and users
can upload new information based on their experience, this information
must be regularly checked and the integrity of all information existing on the platform
ensured; this control is needed to increase the quality of the information on the
platform and of the mobile application used for learning;
 logging of all accesses to the platform, to be able to analyze who made certain
modifications; if problems are identified, the logs are checked
backwards until the cause of the problem is found and the user responsible for its
occurrence is identified; for any change in the platform the following information is
saved (a sketch of such a record is given after this list):
o user ID;
o date of modification;
o type of mobile device from which the change was made;
o reason for the change (improvement, contribution or other);
 encryption of the personal information stored on the platform, as well as of the information
stored by users and of the answers to tests or exams; because the application is mobile, the
examination is held at the same time by all students, and if a student finishes the exam earlier than
the other students, the answers provided are encrypted so that they cannot be accessed by
other students who have not yet finished the examination; another way to avoid
copying answers from students who take the test earlier is to change the exam questions
from one student to another; although such similar but different questions
are difficult to create, each student is given a grid of
questions, which differs from one student to another.
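A minimal sketch of the change-log record mentioned in the logging requirement above (field names are illustrative, not taken from any existing platform) could be:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ChangeLogEntry:
    """Information saved for every change made on the platform."""
    user_id: str
    modified_date: datetime
    device_type: str          # mobile device type from which the change was made
    reason: str               # improvement, contribution or other
```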


Fulfilling these requirements leads to compliance with the minimum security quality of the educational
environment for mobile applications. To increase the quality of these applications, it is
necessary to identify solutions that increase the security of the educational mobile applications
for tourism.
4. SOLUTIONS TO INCREASE THE SECURITY IN INFORMATICS
TOURISM APPLICATIONS
The continuous development of existing applications should be aimed at increasing their
security. Such development is done by adding new modules to the application to meet the
new security standards that have appeared since the last application development. Development does
not delete the old modules existing in the application. Removing some modules, especially
security modules, would lead to a decrease in the security and quality of the developed application,
according to Ivan et al. (2012) and Pocatilu (2010).
Siveco Romania has implemented solutions for education dedicated to a large audience. They
have educational portals, virtual laboratories and academic platforms.
Authentication in the application is based on username and password. To increase
security, this login model is kept, but an adjacent module is also developed which takes into
account the identification code of the mobile device from which the login is made. Thus a user will
only be allowed to log in from his own mobile phone. In this way, an examination taken by other
people in the name of a student is avoided, Figure 2.
Figure 2. Increased security when logging into the application (login to the platform with username + password, or with username + password + phone ID)
The old login model is still used in parallel, but in the new model the phone ID is
automatically checked by the application, without the user having to provide it. Access to the
platform is thus achieved by means of two login models.
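A minimal sketch of the two login models, assuming the platform stores for each username a password hash and the registered phone ID (names and storage layout are illustrative, not the platform's actual code):

```python
import hashlib

def login(accounts, username, password, phone_id=None):
    """Check the credentials against the stored account record.

    accounts : dict mapping username -> (sha256 password hash, registered phone ID)
    phone_id : when provided, the new model also verifies the device ID;
               when None, the old username/password model is used.
    """
    record = accounts.get(username)
    if record is None:
        return False
    password_hash, registered_phone_id = record
    if hashlib.sha256(password.encode()).hexdigest() != password_hash:
        return False
    return phone_id is None or phone_id == registered_phone_id
```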
For users, increased security is achieved by implementing new validation of the input they provide.
Thus, in addition to the existing checks, some new, more complex ones are implemented,
to provide an adequate analysis and certification of the information introduced.
It is also recommended to reduce the text input required from the user, because writing text on a mobile
device is very difficult. If some application modules contain text entries that cannot be
completely replaced with selection controls from which the user only selects the desired option,
it is recommended to develop suggestions for the text input controls. This solution increases
the speed of interaction with the mobile application and also increases security, because the user will not
enter wrong words, but only words already in the platform's default vocabulary.
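A small sketch of such vocabulary-based suggestions (illustrative only, assuming the default vocabulary is available as a collection of terms):

```python
def suggest(vocabulary, prefix, limit=5):
    """Auto-complete suggestions taken only from the platform's default
    vocabulary, so the user selects existing terms instead of typing new ones."""
    prefix = prefix.lower()
    matches = [term for term in sorted(vocabulary) if term.lower().startswith(prefix)]
    return matches[:limit]
```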


After logging in, registered users of the platform have access to courses, video tutorials, tests
and exams. Unregistered users are allowed access to sample classes, free tutorials and
simple tests, Figure 3.

Figure 3. Access to resources available on the platform (users access samples, free tutorials and free tests directly, and courses, tutorials, tests and exams on the secure platform after login)


Resources available to registered users are secured and encrypted. If unregistered users
request these resources, they are redirected to the registration page. After
registration and validation of the newly created account, access to these resources is allowed,
because from that moment the users are part of those registered on the platform.
5. BEHAVIOR METRIC FOR M-EIA (M-EDUCATION INFORMATICS
APPLICATIONS)
The behavior metric for informatics applications in M-Education contains indicators for calculating
the efficiency and quality of education on the tourism platform.
The registration indicator, RI, for people who view the free tutorials on the platform is
calculated by the formula:
$$RI = \frac{NRPV}{NPV}$$
where:
NPV – number of people who have viewed free tutorials;
NRPV – number of registered people who have viewed free tutorials.
The indicator takes values in the interval [0; 1]. If RI = 1, the free tutorials offered to
unregistered users have a high quality and have convinced all the users who viewed them to create an
account on the platform. If RI = 0, no user who viewed the free tutorials created an
account on the platform, so the quality of these materials is very low.
This indicator shows the quality of the materials offered for free viewing to users that are not
logged in or registered on the platform. Their quality determines whether users create an
account to access the rest of the tutorials or the exam and obtain certification.
The second indicator is the graduation indicator relative to the total number of registered
users on the platform, $GI_{TU}$, calculated by the formula:
$$GI_{TU} = \frac{NGP}{NRU}$$
where:
NGP – number of graduated people;

NRU – number of registered users on the platform.


The indicator takes values in the interval [0; 1] and determines the graduation rate relative
to the total number of students. In this way, the efficiency and quality of the teaching materials
provided through the platform are tracked, in relation to the exams taken for graduation or for
obtaining certification. For better accuracy regarding the quality level of the materials provided
on the platform, the graduation indicator relative to the number of users who have taken the
final exam, $GI_{FU}$, is calculated by the formula:
$$GI_{FU} = \frac{NGP}{NUF}$$
where:
NUF – number of users who have taken the final exam.
The indicator takes values in the interval [0; 1] and eliminates the difference created by the
users who access the platform but do not take the final exam.
Because instruction takes place only through the mobile device, the time spent learning can be
measured automatically. This determines whether the time spent learning directly influences the mark
in the final exam.
The set S of registered users is considered:
$$S = \{S_1, S_2, \ldots, S_n\}$$
where: n – number of registered users.
For each user $S_i$, the time spent on the platform, $LT_{S_i}$, is determined by the formula:
$$LT_{S_i} = \sum_{j=1}^{m_{S_i}} \left( LT_{S_i}^{j}\,\text{end} - LT_{S_i}^{j}\,\text{start} \right)$$
where:
$m_{S_i}$ – number of accesses by the user $S_i$ on the platform;
$LT_{S_i}^{j}\,\text{start}$ – start time of access j;
$LT_{S_i}^{j}\,\text{end}$ – end time of access j.
In this way we obtain the set LT, whose elements represent the time spent by each student
on the platform:
$$LT = \{LT_{S_1}, LT_{S_2}, \ldots, LT_{S_n}\}$$
From automated data collection we obtain the set D of the students' grades:
$$D = \{D_{S_1}, D_{S_2}, \ldots, D_{S_n}\}$$
From the two sets, the minimum time spent by a user who has obtained
certification, $LT_{gr}$, and the maximum time spent by a user who has not obtained certification,
$LT_{re}$, are calculated:
$$LT_{gr} = \min \{ LT_{S_i} \in LT \mid S_i \in S,\ i = 1:n,\ D_{S_i} \in D,\ D_{S_i} \geq 5 \}$$
$$LT_{re} = \max \{ LT_{S_i} \in LT \mid S_i \in S,\ i = 1:n,\ D_{S_i} \in D,\ D_{S_i} < 5 \}$$
Based on the calculated times, the influence of learning on the result obtained in the
final exam is determined.


If $LT_{gr} > LT_{re}$, then the learning time spent on the platform has a substantial influence
on the grade obtained in the final exam.
If $LT_{gr} \leq LT_{re}$, the grade obtained in the final exam is influenced by other factors in addition
to the time spent, the latter having little influence on that grade.
The indicators are automatically calculated within the platform through the automatic collection of
information and data; users do not have to complete questionnaires or leave feedback.
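The indicators above could be computed from the automatically collected data roughly as follows; this is a sketch with illustrative names, assuming (as in the reconstruction above) that a grade of at least 5 corresponds to obtaining certification:

```python
def behavior_indicators(viewers, registered_viewers, registered_users,
                        graduates, finalists):
    """RI = NRPV/NPV, GI_TU = NGP/NRU, GI_FU = NGP/NUF (0 when undefined)."""
    ri = registered_viewers / viewers if viewers else 0.0
    gi_tu = graduates / registered_users if registered_users else 0.0
    gi_fu = graduates / finalists if finalists else 0.0
    return ri, gi_tu, gi_fu

def learning_time_thresholds(sessions, grades, passing_grade=5):
    """LT_gr and LT_re from per-user session logs and final-exam grades.

    sessions : dict user -> list of (start, end) timestamps of platform accesses
    grades   : dict user -> grade obtained in the final exam
    """
    lt = {u: sum(end - start for start, end in s) for u, s in sessions.items()}
    passed = [t for u, t in lt.items() if grades.get(u, 0) >= passing_grade]
    failed = [t for u, t in lt.items() if grades.get(u, 0) < passing_grade]
    lt_gr = min(passed) if passed else None   # min time of a certified user
    lt_re = max(failed) if failed else None   # max time of a non-certified user
    return lt_gr, lt_re
```

If the computed lt_gr exceeds lt_re, the time spent learning is a strong predictor of the final grade, as discussed above.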
6. CONCLUSIONS
Applications for M-Tourism education are necessary because of many factors, such as:
 anytime and anywhere access to courses and tests;
 lack of time for tourism staff;
 ease of accessing information through mobile devices;
Mobile applications are geared toward citizens and, especially in the educational environment,
become customized applications that meet user requirements and perform functions according to
the user profile. The adjustment to the profile concerns both the vocabulary and the categories of
users, at different levels of diversity. Thus, within the same category of users there are
subcategories of users; the waiter category, for example, has three subcategories: terrace waiter,
restaurant waiter and experienced waiter for 5-star hotel restaurants.
Applications developed for m-education are built in modules, so that future development
of the application can be done both horizontally, by adding new modules, and vertically, by
adding new sub-modules.
Mobile applications will grow in importance for the training and qualification of staff,
who, if something is not clear, can be trained by accessing the lesson on the phone.
7. REFERENCES
1. Aldebert Bénédicte, Dang Rani, Christian Longhi, Innovation In The Tourism Industry: The
Case Of Tourism@, Tourism Management, Vol. 32, Nr. 5, 2011, Pp. 1204-1213.
2. Butnaru Gina Ionela, Miller Amanda, Conceptual Approaches On Quality And Theory Of
Tourism Services, Procedia Economics And Finance, Nr. 3, 2012, Pp. 375–380.
3. Lee, K.B. And Salman, R. (2012) The Design And Development Of Mobile Collaborative
Learning Application Using Android, Journal Of Information Technology And Application
In Education, Vol. 1, Nr. 1, 1-8.
4. Ivan Ion, Surcel Traian, Milodin Daniel, Tourism Management Information System Based
On Mobile Technology ,Proceedings Of The 2009 International Conference On Tourism
And Workshop on Sustainable Tourism within High Risk Areas of Environment Crisis,
22 - 25 April, 2009, Messina, ISBN 978-88-96116-20-3
5. Ivan Ion, Milodin Daniel, Zamfiroiu Alin (2013) Security Of M-Commerce Transactions,
Theoretical And Applied Economics, Vol. 20, Nr. 7, 59-76.
6. Ivan Ion, Boja Catalin, Zamfiroiu Alin (2012) Self- Healing For Mobile Applications,
Journal Of Mobile, Embedded And Distributed Systems - Jmeds, Vol. 4, Nr. 2, 96-106.
7. Pocatilu Paul (2010) Developing Mobile Learning Applications For Android Using Web
Services, Informatica Economică Vol. 14, No. 3, 106-115.
8. Siveco Romania, Http://Www.Siveco.Ro/, Accessed 15.08.2013.
9. The Authority On World Travel & Tourism, Http://Www.Wttc.Org, Accessed 09.08.2013.


HAND GESTURES RECOGNITION USING TIME DELAY NETWORKS

Irina Mocanu1
Tatiana Cristea2
ABSTRACT
This paper proposes a system for body gesture recognition using the coordinates of the body skeletal returned by a Kinect sensor, which are processed in order to compute a set of angles. Gesture recognition is achieved using a Time Delay Neural Network, implemented in two ways: with delay layer and with delay synapse. The resulting system was trained on a set of gestures such as: hands up with elbows bent, lifting arms, lowering arms, circling, greeting gesture and pointing assertion. Each gesture was repeated at least 5 times at different speeds. The accuracy of the proposed method is approximately 80%.

Keywords: gesture recognition, time delay networks, Kinect sensor, intelligent systems;
1. INTRODUCTION
A whole series of devices has become an essential part of our daily life. The ease of use of this equipment has become an intensely studied topic in current research in the field of human interaction with the surrounding environment [1]. A method that facilitates the communication of humans with their environment is the interpretation of their gestures. Thus there is no need for direct interaction with the devices; a gesture made from any position / location is enough to trigger one or more actions (for example turning on the light or the TV, or triggering an alarm).
This paper proposes a system for body gesture recognition using the coordinates
of the skeletal returned by a Kinect device. The coordinates of the skeletal provided
by the Kinect are processed in order to compute a set of angles for gesture
recognition. Angles are calculated based on their relevance to detect gestures: the
arm - forearm, neck - shoulders, arms - shoulders. The next step is to identify the actual
gesture based on the previously processed information. In [2], hand movements
are analysed using time delay neural networks with RGB images as inputs in order to
recognize hand movements. Based on that result we apply time delay neural networks
(TDNN) with Kinect skeletal as input in order to recognize hand gestures. Time Delay
Neural Network is implemented in two ways: with delay layer, and second with
delay synapse.
The rest of the paper is organised as follows. Section 2 presents some existing methods for
gesture recognition. The gesture recognition method with Time Delay Neural Network is

1 Lecturer, University POLITEHNICA of Bucharest, Splaiul Independentei No. 313, Bucharest 060042,
Romania, irina.mocanu@cs.pub.ro
2 Master Student, Artificial Intelligence, VU University Amsterdam, De Boelelaan 1105, 1081 HV Amsterdam,
The Netherlands, tatiana.cristea@gmail.com



described in Section 3. Experimental results, conclusions and future developments are presented in Sections 4 and 5.
2. RELATED WORKS
Methods for human gesture analysis can be classified into two classes, static and dynamic detection methods, as described in [3]. This classification arises from the different issues that the algorithms used to identify gestures from each category need to address. Static gestures can be identified using decision trees, as in [4]. The dynamic
gestures involve several aspects:
 it is necessary to be able to represent a temporal dependency between
actions;
 the learned patterns must be invariant in time and
 actions must not require alignment over time.
In [5] motion patterns are learned from the extracted trajectories from videos using a time-
delay neural network. The experiments described in the paper demonstrate recognition of
40 hand gestures of the American Sign Language. Experimental results show that hand
gestures can be extracted and identified with a high recognition rate. Paper [6] describes a model for recognizing general activities using Hidden Markov Models (HMM) and stochastic parsing. These activities are first identified as a series of low-level primitive actions, represented using HMMs, and then they are recognized by string parsing using a context-free grammar. In [7] human activities are recognized by matching temporal templates with saved instances of images of known activities. Variable Length Markov Models are used in [8] to model human behavior in order to include temporal dependencies. The above approaches require image
processing to extract relevant data to detect human body postures. This process
involves some limitations, such as depth information, which is difficult to extract from a single image. Thus, [9] adopts a system based on four synchronized cameras to recognize human postures.
3. GESTURE RECOGNITION
Based on the necessary aspects that we must consider in recognizing gestures we
present a method for gesture recognition using Time Delay Neural Networks
(TDNN) [10]. The TDNN will be implemented in two ways: with delay layer and with delay synapse. In both cases the TDNN has an input level, an output level and two hidden layers. The differences between the two types of TDNN are at the
construction level of the network unit. The basic unit used in many neural networks
computes the weighted sum of its inputs and then passes this sum through a nonlinear
function, most commonly a threshold or sigmoid function. In a TDNN a basic unit is
modified by introducing delays D1 to DN, as in Figure 1, as described in [10].


Figure 1. A Time Delay Neural Network unit


The J inputs of such a unit are now multiplied by several weights, one for each delay and one for the undelayed input.
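As an illustration only (not the JOONE-based implementation used later in the paper), the sketch below shows how such a delayed unit can be evaluated: every input contributes one weighted term per delay step, and the weighted sum is passed through the sigmoid.

import numpy as np

def tdnn_unit(inputs, weights):
    """One time-delay unit.

    inputs:  array of shape (J, N+1) - J input signals, each with the current value
             and N delayed values (columns correspond to delays D1..DN plus the undelayed input).
    weights: array of the same shape - one weight per input and per delay.
    """
    s = np.sum(inputs * weights)          # weighted sum over all inputs and all delays
    return 1.0 / (1.0 + np.exp(-s))       # sigmoid nonlinearity

# Example: 2 inputs, each with the current value and 3 delayed values.
x = np.array([[0.2, 0.1, 0.0, 0.4],
              [0.9, 0.8, 0.7, 0.5]])
w = np.random.default_rng(0).normal(size=x.shape)
print(tdnn_unit(x, w))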
The Recognition System
We detect human gestures by processing the human skeletal. In order to detect the human
skeletal we use a Kinect sensor. The information provided by Kinect is processed and we
obtain the relevant data which will be used to train the TDNN. After the training phase we
test the learned model and determine the error rate. The system architecture is presented in
Figure 2 and is composed of three main modules:
 the preprocessing module - receives the human skeletal from the Kinect, computes
a feature vector for each skeletal formed by a set of relevant angles;
 the training module – trains the TDNN; the input data is composed of the feature vectors computed by the preprocessing module; the parameters of the trained TDNN are saved in the TDNN Parameters database.
 the classification module – classifies a new gesture; it receives a feature vector of
a new human skeletal from the preprocessing module and classifies it using the
learned parameters of the neural network from the TDNN Parameters database.
Figure 2. The system architecture


The Kinect sensor produced by Microsoft (Microsoft Research, 2011) uses PrimeSense's 3D sensing technology [11] and is composed of three main parts: an infrared laser projector, an infrared camera, and an RGB color camera. The depth projector floods the surrounding space with infrared laser beams, creating a depth field that can be seen only by the IR camera. Due to infrared's insensitivity to ambient light, the sensor can be used in any lighting conditions. The hardware is the basis for creating an image that the processor can interpret; the software behind the sensor is what makes everything possible. Using statistics, probability, and testing of different natural human movements, the SDK
(software development kit) is able to track the movements of 20 main joints on a human
body, as described in Figure 3. This software is how the sensor can differentiate a human from other objects (robots and other devices) that happen to be in front of the IR projector, or distinguish different humans that share the same space. The sensor has the capability of tracking up to six different persons at a time, but at present the software can only track up to two active operators.

Figure 3. The skeletal computed by Kinect: (a) general view, (b) a skeletal example
The preprocessing module receives the skeletal provided by Kinect (the 20 joint points) and
computes a feature vector for each skeletal. The feature vector is composed of 5 angles
computed using only eight joint points from the Kinect skeletal: ShoulderLeft, ElbowLeft,
WristLeft, ShoulderRight, ElbowRight, WristRight, ShoulderCenter and Head: (α, β, γ, δ,
θ ), as shown in Figure 4.

 the angle between the upper arm and the lower arm: left - (α) and right - (θ);
 the angle between the shoulder and the upper arm: left - (γ) and right - (β);
 the angle between the head and the right shoulder: (δ).

Figure 4. The feature vector for a human skeletal


The feature vector will be passed as input to the training module or to the classifying module. Both the training module and the classifier module use a neural network with a sigmoid activation function, which accepts input data in the domain [0, 1]. Thus the feature vector will be normalized.
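A possible sketch of this preprocessing step is given below; the joint names follow the Kinect skeleton, and the normalization simply divides each angle by π so that the values fall in [0, 1] (both choices are illustrative assumptions, not the paper's exact implementation).

import numpy as np

def angle(a, b, c):
    """Angle at joint b (in radians) formed by the 3D points a-b-c."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def feature_vector(j):
    """j maps Kinect joint names to (x, y, z); returns (alpha, beta, gamma, delta, theta) scaled to [0, 1]."""
    alpha = angle(j["ShoulderLeft"], j["ElbowLeft"], j["WristLeft"])        # left upper arm - lower arm
    theta = angle(j["ShoulderRight"], j["ElbowRight"], j["WristRight"])     # right upper arm - lower arm
    gamma = angle(j["ShoulderCenter"], j["ShoulderLeft"], j["ElbowLeft"])   # left shoulder - upper arm
    beta = angle(j["ShoulderCenter"], j["ShoulderRight"], j["ElbowRight"])  # right shoulder - upper arm
    delta = angle(j["Head"], j["ShoulderCenter"], j["ShoulderRight"])       # head - right shoulder
    return np.array([alpha, beta, gamma, delta, theta]) / np.pi             # normalize to [0, 1]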
The training module uses a TDNN and must configure the architecture of the network: the number of synapses, the number of levels and the size of the time window. After that the network
is trained and the weights associated with the synapses are calculated. The weights are
computed using the back-propagation algorithm. Two types of networks must be constructed:
 TDNN with delay synapse and
 TDNN with delay layers.
The network parameters: the type of the network, the number of synapses, the number of
nodes, the number of levels, the weights, the temporal window and the mean square error
are saved in the TDNN Parameters Database.
The classifying module will classify a new gesture using a trained network with
parameters selected from the TDNN Parameters Database. The result of the classifying
module is a vector with the detected gestures of the supervised person.
4. EXPERIMENTAL RESULTS
The gesture recognition system was implemented in Java using the Java Object Oriented
Neural Network (JOONE) library [12]. JOONE is a free neural network framework to
create, train and test artificial neural networks. The aim is to create a powerful environment
both for enthusiastic and professional users, based on the newest Java technologies. JOONE
is composed by a central engine that is the fulcrum of all applications that are developed
with JOONE.
For testing we use 3500 frames with gestures. Each gesture was repeated at least 5 times
with different speed. We evaluate the two types of TDNN networks: with delay synapse
and with delay layer, with 10.000 epochs for training, varying the parameters of the
network: the number of neurons from the hidden layers, dimension of the temporal window,
and finding the optimal network configuration for a good accuracy of gesture recognition.
The obtained accuracy of the proposed method is approximately 80%, and the parameters of the optimal TDNN are presented below.
Gesture Recognition with Time Delay Neural Networks with Delay Synapse
The obtained results are presented in Figure 5. The optimal configuration for the TDNN is
obtained with the following parameters: the number of neurons in the hidden levels: 3 and
4 and the size of the temporal window: 4, 6 and 9. The error is high if we vary the number
of neurons from the hidden levels while maintaining constant size for the temporal
windows. Also if the number of neurons from the hidden layers is constant and we vary the
size of the temporal windows the global error is high. These facts are possible based on
how a gesture is formed: the gesture is outlined in the first temporal window and it is
accentuated in the second window and finally it is classified.


Figure 5. Global error depending on the number of neurons on the first and second hidden layers and on the sizes of the three temporal windows


Gesture Recognition with Time Delay Neural Networks with Delay Layer
The obtained results are presented in Figure 6 a) and b). The optimal configuration for the
TDNN is obtained with the following parameters: the number of neurons in the hidden
levels: 60 and 60, and the size of the temporal windows: 28, 6 and 3. The global error increases if we vary the size of the temporal window or the number of neurons in the hidden layers.

Figure 6. (a) Global error depending on the size of the temporal window


Figure 6. (b) Global error depending on the number of neurons on the first and second layers

5. CONCLUSIONS AND FUTURE WORK


The coordinates of the skeletal provided by the Kinect are processed in order to
compute a set of angles for gesture recognition. Angles are calculated based on their
relevance to detect gestures: the arm - forearm, neck - shoulders, arms - shoulders.
Next step is made to identify the actual gesture based on the information processed
previously. Gesture recognition is achieved using a Time Delay Neural Network,
implemented in two ways: with delay layer, and second with delay synapse.
The resulting system was trained on a set of gestures such as: hands up with elbows bent, lifting arms, lowering arms, circling, greeting gesture and pointing assertion. Each gesture was repeated at least 5 times at different speeds. The accuracy of the proposed method is approximately 80%. The best results were obtained for the following configurations: (i) a network with delay layers, with time windows 28, 6, 3 and 60 and 60 neurons in the hidden levels, and (ii) a network implemented with delay synapses, with time windows 4, 6, 9 and 3 and 4 neurons in the hidden levels.
As future work we intend to recognize more gestures and to integrate the recognition system into an ambient intelligent system for supervising people. The ambient intelligent system will detect the gestures of a supervised person in order to interact with the intelligent devices in the house and also to recognize the daily activities performed by the person.


6. REFERENCES
[1] Cook D, Das S. Smart Environments Technology, Protocols and Applications. 2004
[2] Modler, P. and Myatt, T. Recognition of Separate Hand Gestures by Time-delay Neural Networks
Based on Multi-state Spectral Image Patterns from Cyclic Hand Movements. IEEE International
Conference on Systems, Man, and Cybernetics,.2008, p. 1539-1544
[3] Moeslund TB, Granum, E. A Survey of Computer Vision-Based Human Motion Capture.
In: Computer Vision and Image; 2001; 81(3), p. 231-268
[4] Mocanu, I, Florea, A. M. A Multi-Agent system for Human Activity Recognition in Smart
Environments. Intelligent Distributed Computing V, Studies in Computational Intelligence, 382,
2012, p. 291-301
[5] Yang MH, Ahuja N. Recognizing Hand Gesture Using Motion Trajectories. In: Proc. IEEE Conf.
Computer Vision and Pattern Recognition; 1999, p. 466-472
[6] Ivanov YA, Bobick A.F. Recognition of Visual Activities and Interactions by Stochastic Parsing.
In: IEEE Trans. Pattern Analysis and Machine Intelligence; 2000; 22(8), p. 852-871
[7] Aarob B, Davis J. W. Action Recognition using Temporal Templates. http://www.cse.ohio-
state.edu/~jwdavis/CVL/Publications/motrec.pdf, accessed November 2013
[8] Galata A, Johnson N, Hogg D. Learning Variable-Length Markov Models of Behavior. In:
Computer Vision and Image Understanding; 2001; 81( 3), p. 398-413
[9] Chu CW, Cohen I. Posture and Gesture Recognition using 3D Body Shapes Decomposition.
http://iris.usc.edu/outlines/papers/2005/cwchu-v4hci05.pdf, accessed November 2013
[10] Waibel A, Hanazawa T, Hinton G, Lang KJ, Shikano K. Phoneme Recognition Using Time –
Delay Neural Networks. http://www.cs.toronto.edu/~hinton/absps/waibelTDNN.pdf, accessed
November 2013
[11] Microsoft Kinect for Windows SDK, Programming Guide.
http://research.microsoft.com/redmond/kinectsdk/docs/programmingguide_kinectsdk.pdf,
accessed November 2013
[12] Marrone P. Java Object Oriented Neural Engine. The Complete Guide. All you need to know
about Joone, www.joone.org, accessed September 2012


CIRCULAR CONVOLUTION AND DISCRETE FOURIER TRANSFORM

Mircea Ion Cîrnu1


ABSTRACT
We present the discrete circular convolution and its calculation algorithms. Using the discrete Fourier transform, the circular deconvolution, i.e. the inverse of the circular convolution, can be determined. We exemplify these concepts by solving a circular convolution equation. This type of convolution product applies to periodic phenomena that are analyzed discretely. With this paper we continue the series of articles about the types of discrete convolution products and their applications, some of which are cited in the References.

Keywords: circular convolution, circular deconvolution, discrete Fourier transform, circular convolution equations.
1. DISCRETE CIRCULAR CONVOLUTION
Given two finite sequences $x = (x_0, x_1, \dots, x_{N-1})$ and $y = (y_0, y_1, \dots, y_{N-1})$ of the same length $N$, the discrete circular convolution of period $N$ is the sequence $z = x \circledast y = (z_0, z_1, \dots, z_{N-1})$, where
\[ z_n = \sum_{k=0}^{N-1} x_k\, y_{(n-k) \bmod N} = \sum_{k=0}^{n} x_k\, y_{n-k} + \sum_{k=n+1}^{N-1} x_k\, y_{n-k+N}, \qquad n = 0, 1, \dots, N-1. \qquad (1) \]
We make the convention $\sum_{k=n}^{m} x_k = 0$ if $m < n$.

The circular convolution can also be considered for sequences of different lengths, by completing the shorter one with zeros on the right, so that the two sequences have the same length.
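A direct implementation of formula (1), with right zero-padding of the shorter sequence, might look as follows (an illustrative sketch, not part of the original paper):

def circular_convolution(x, y):
    # Pad the shorter sequence with zeros on the right so both have length N.
    N = max(len(x), len(y))
    x = list(x) + [0] * (N - len(x))
    y = list(y) + [0] * (N - len(y))
    # z_n = sum_{k=0}^{N-1} x_k * y_{(n-k) mod N}
    return [sum(x[k] * y[(n - k) % N] for k in range(N)) for n in range(N)]

print(circular_convolution([1, 2, 4, 5, 6], [7, 3, 9, 8]))  # [102, 111, 91, 73, 109]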
2. ALGORITHMS FOR CALCULUS OF DISCRETE CIRCULAR
CONVOLUTION
We give three types of circular convolution algorithms, which we will exemplify on the example $(1, 2, 4, 5, 6) \circledast (7, 3, 9, 8) = (102, 111, 91, 73, 109)$.
Circular algorithm
The elements of the longer sequence sit outside on a circle in the direct sense, i.e. counter-clockwise, and the elements of the shorter sequence sit inside the circle, in the indirect sense, i.e. clockwise. Their first elements must stand beside one another. Multiply the elements of the two sequences that stand beside one another and add the results. Then move the interior sequence one position clockwise and do the same operations. The procedure is finished when the first element of the interior sequence is beside the

1 Faculty of Applied Sciences, Polytechnic University of Bucharest, cirnumircea@yahoo.com



last element of the exterior sequence. The numbers resulted by this procedure are the
elements of the circular convolution.

Bilinear algorithm
The longer sequence is placed from left to right twice. Below it, the second sequence is placed from right to left, its first element being under the first element of the second copy of the upper sequence. Multiply the elements of the two rows that stand one under the other and add the results. Then move the bottom sequence one position to the right and do the same operations. The procedure finishes when the first element of the bottom sequence is under the last element of the second copy of the upper sequence. The numbers resulting from this procedure are the
elements of the circular convolution.
Example. The longer sequence is written twice: 1 2 4 5 6 1 2 4 5 6. The shorter sequence, written from right to left as 8 9 3 7, slides one position to the right at each step; the aligned products are summed:
32 + 45 + 18 + 7 = 102
40 + 54 + 3 + 14 = 111
48 + 9 + 6 + 28 = 91
8 + 18 + 12 + 35 = 73
16 + 36 + 15 + 42 = 109
so $(1, 2, 4, 5, 6) \circledast (7, 3, 9, 8) = (102, 111, 91, 73, 109)$.


Aliasing algorithm
This algorithm, the simplest one, consists in performing the complete linear convolution (see [1] or [6]) of the two sequences and then adding the remaining part to the part of the resulting product that has the same length as the longer factor sequence.
1 2 4 5 6
7 3 9 8

7 14 28 35 42
3 6 12 15 18
9 18 36 45 54
8 16 32 40 48

7 17 43 73 109 95 94 48
95 94 48

102 111 91 73 109
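The aliasing algorithm itself can be sketched in a few lines: compute the complete linear convolution and fold the tail back onto the head. The code below is an illustrative sketch, consistent with the worked example above:

def circular_by_aliasing(x, y):
    N = max(len(x), len(y))
    # Complete linear convolution (length len(x) + len(y) - 1).
    lin = [0] * (len(x) + len(y) - 1)
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            lin[i + j] += xi * yj
    # Fold the part beyond N back onto the first elements.
    out = lin[:N]
    for i, v in enumerate(lin[N:]):
        out[i] += v
    return out

print(circular_by_aliasing([1, 2, 4, 5, 6], [7, 3, 9, 8]))  # [102, 111, 91, 73, 109]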

3. DISCRETE FOURIER TRANSFORM


The Fourier transform of the finite sequence $x = (x_0, x_1, \dots, x_{N-1})$ is the sequence $\hat{x} = (\hat{x}_0, \hat{x}_1, \dots, \hat{x}_{N-1}) = F(x)$, with
\[ \hat{x}_n = \sum_{k=0}^{N-1} x_k\, \bar{\omega}_N^{kn}, \qquad n = 0, 1, \dots, N-1, \qquad (2) \]
where $\omega_N = e^{2\pi i / N}$ is the complex root of unity of order $N$, i.e.
\[ \omega_N^N = e^{2\pi i} = \cos 2\pi + i \sin 2\pi = 1, \]
and $\bar{\omega}_N = e^{-2\pi i / N} = \omega_N^{-1}$ is its complex conjugate.
Theorem 1. The inverse transform is given by the relation
\[ x_j = \frac{1}{N} \sum_{k=0}^{N-1} \hat{x}_k\, \omega_N^{jk}, \qquad j = 0, 1, \dots, N-1. \qquad (3) \]

Proof. Using the formula for the sum of a geometric progression, we have
\[ \sum_{n=0}^{N-1} \hat{x}_n \omega_N^{nk} = \sum_{n=0}^{N-1} \sum_{j=0}^{N-1} x_j \bar{\omega}_N^{jn} \omega_N^{nk} = \sum_{j=0}^{N-1} x_j \sum_{n=0}^{N-1} \omega_N^{(k-j)n} = N x_k + \sum_{j=0,\, j \ne k}^{N-1} x_j \frac{\omega_N^{(k-j)N} - 1}{\omega_N^{k-j} - 1} = N x_k, \]
because, for $j \ne k$, we have $\omega_N^{(k-j)N} = 1$ and $\omega_N^{k-j} \ne 1$. This yields formula (3).
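Formulas (2) and (3) translate directly into code; the sketch below (illustrative, not the author's) uses the same sign convention, with the conjugate root of unity in the direct transform:

import cmath

def dft(x):
    # Direct transform, formula (2): conjugate root of unity in the exponent.
    N = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * k * n / N) for k in range(N))
            for n in range(N)]

def idft(xh):
    # Inverse transform, formula (3).
    N = len(xh)
    return [sum(xh[k] * cmath.exp(2j * cmath.pi * j * k / N) for k in range(N)) / N
            for j in range(N)]

x = [1, 9, 9, 1]
print(dft(x))         # approximately [20, -8-8j, 0, -8+8j]
print(idft(dft(x)))   # recovers [1, 9, 9, 1] up to rounding errors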
Remark. The direct and inverse discrete Fourier transforms can be written in matrix form as $\hat{X} = F_N X$, respectively $X = F_N^{-1} \hat{X}$, where
\[ X = \begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_{N-1} \end{pmatrix}, \quad \hat{X} = \begin{pmatrix} \hat{x}_0 \\ \hat{x}_1 \\ \vdots \\ \hat{x}_{N-1} \end{pmatrix}, \quad F_N = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \bar{\omega}_N & \bar{\omega}_N^2 & \cdots & \bar{\omega}_N^{N-1} \\ 1 & \bar{\omega}_N^2 & \bar{\omega}_N^4 & \cdots & \bar{\omega}_N^{2(N-1)} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \bar{\omega}_N^{N-1} & \bar{\omega}_N^{2(N-1)} & \cdots & \bar{\omega}_N^{(N-1)^2} \end{pmatrix} \]
and
\[ F_N^{-1} = \frac{1}{N} \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega_N & \omega_N^2 & \cdots & \omega_N^{N-1} \\ 1 & \omega_N^2 & \omega_N^4 & \cdots & \omega_N^{2(N-1)} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \omega_N^{N-1} & \omega_N^{2(N-1)} & \cdots & \omega_N^{(N-1)^2} \end{pmatrix}. \]
Particularly, for $N = 4$, we have
\[ F_4 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{pmatrix} \quad \text{and} \quad F_4^{-1} = \frac{1}{4} \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & i & -1 & -i \\ 1 & -1 & 1 & -1 \\ 1 & -i & -1 & i \end{pmatrix}. \]

4. THE DISCRETE FOURIER TRANSFORM OF THE CIRCULAR CONVOLUTION

Theorem 2. For two column matrices $X = (x_0\ \ x_1\ \dots\ x_{N-1})^T$ and $Y = (y_0\ \ y_1\ \dots\ y_{N-1})^T$, we have the formula for the discrete Fourier transform of the circular convolution,
\[ F(X \circledast Y) = \hat{X}\hat{Y} = (\hat{x}_0 \hat{y}_0\ \ \hat{x}_1 \hat{y}_1\ \dots\ \hat{x}_{N-1} \hat{y}_{N-1})^T, \]
the product of the column matrices $\hat{X} = (\hat{x}_0\ \ \hat{x}_1\ \dots\ \hat{x}_{N-1})^T$ and $\hat{Y} = (\hat{y}_0\ \ \hat{y}_1\ \dots\ \hat{y}_{N-1})^T$ on the right-hand side of the above formula being performed elementwise.
Proof. In conformity with the above definitions and iterating the sums, an arbitrary component of the discrete Fourier transform of the circular convolution of the matrices X and Y is given by the formula
\[ F(X \circledast Y)_n = \sum_{k=0}^{N-1} (X \circledast Y)_k\, \bar{\omega}_N^{kn} = \sum_{k=0}^{N-1} \Big( \sum_{j=0}^{k} x_j y_{k-j} + \sum_{j=k+1}^{N-1} x_j y_{k-j+N} \Big) \bar{\omega}_N^{kn} \]
\[ = \sum_{k=0}^{N-1} \sum_{j=0}^{k} x_j \bar{\omega}_N^{jn}\, y_{k-j} \bar{\omega}_N^{(k-j)n} + \sum_{k=0}^{N-2} \sum_{j=k+1}^{N-1} x_j \bar{\omega}_N^{jn}\, y_{k-j+N} \bar{\omega}_N^{(k-j+N)n} \]
\[ = \sum_{j=0}^{N-1} x_j \bar{\omega}_N^{jn} \sum_{k=j}^{N-1} y_{k-j} \bar{\omega}_N^{(k-j)n} + \sum_{j=1}^{N-1} x_j \bar{\omega}_N^{jn} \sum_{k=0}^{j-1} y_{k-j+N} \bar{\omega}_N^{(k-j+N)n}. \]
Making the changes of variables $m = k - j$ in the first term and $m = k - j + N$ in the second term of the last formula, we have
\[ F(X \circledast Y)_n = \sum_{j=0}^{N-1} x_j \bar{\omega}_N^{jn} \sum_{m=0}^{N-1-j} y_m \bar{\omega}_N^{mn} + \sum_{j=1}^{N-1} x_j \bar{\omega}_N^{jn} \sum_{m=N-j}^{N-1} y_m \bar{\omega}_N^{mn} = \sum_{j=0}^{N-1} x_j \bar{\omega}_N^{jn} \sum_{m=0}^{N-1} y_m \bar{\omega}_N^{mn} = F(X)_n F(Y)_n. \]

5. CIRCULAR CONVOLUTION EQUATIONS

Using the discrete Fourier transform method, we solve as an example a circular convolution equation.
Example. Solve the circular convolution equation of period four,
\[ A \circledast X = B, \]
where $A = (1\ \ 9\ \ 9\ \ 1)^T$ and $B = (12+\alpha\ \ \ 12-\alpha\ \ \ 8+\alpha\ \ \ 8-\alpha)^T$. Discuss the solution with respect to the real parameter $\alpha$ and check the obtained solution.

Solution. We denote $B = B_1 + \alpha B_2$, where $B_1 = (12\ \ 12\ \ 8\ \ 8)^T$, $B_2 = (1\ \ {-1}\ \ 1\ \ {-1})^T$, $X = (x_0\ \ x_1\ \ x_2\ \ x_3)^T$ and $F(X) = \hat{X} = (\hat{x}_0\ \ \hat{x}_1\ \ \hat{x}_2\ \ \hat{x}_3)^T$. Applying the discrete Fourier transform to the equation, it takes the form
\[ F(A)\hat{X} = F(B_1) + \alpha F(B_2). \]

We have
\[ F(A) = F_4 A = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{pmatrix} \begin{pmatrix} 1 \\ 9 \\ 9 \\ 1 \end{pmatrix} = \begin{pmatrix} 20 \\ -8-8i \\ 0 \\ -8+8i \end{pmatrix}, \]
\[ F(B_1) = F_4 B_1 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{pmatrix} \begin{pmatrix} 12 \\ 12 \\ 8 \\ 8 \end{pmatrix} = \begin{pmatrix} 40 \\ 4-4i \\ 0 \\ 4+4i \end{pmatrix}, \]
\[ F(B_2) = F_4 B_2 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{pmatrix} \begin{pmatrix} 1 \\ -1 \\ 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 4 \\ 0 \end{pmatrix}, \]
therefore the equation becomes
\[ \begin{pmatrix} 20\hat{x}_0 \\ -8(1+i)\hat{x}_1 \\ 0 \\ -8(1-i)\hat{x}_3 \end{pmatrix} = \begin{pmatrix} 40 \\ 4(1-i) \\ 0 \\ 4(1+i) \end{pmatrix} + \alpha \begin{pmatrix} 0 \\ 0 \\ 4 \\ 0 \end{pmatrix} = \begin{pmatrix} 40 \\ 4(1-i) \\ 4\alpha \\ 4(1+i) \end{pmatrix}. \]

Equating the components of the column matrices in the above equation, it results in the system of equations
\[ 20\hat{x}_0 = 40, \quad -8(1+i)\hat{x}_1 = 4(1-i), \quad 0 = 4\alpha, \quad -8(1-i)\hat{x}_3 = 4(1+i), \]
from which the solutions
\[ \hat{x}_0 = 2, \quad \hat{x}_1 = -\frac{1-i}{2(1+i)} = \frac{i}{2}, \quad \hat{x}_3 = -\frac{1+i}{2(1-i)} = -\frac{i}{2} \]
and the compatibility condition $\alpha = 0$ are obtained. The unknown $\hat{x}_2 = p$ cannot be determined, remaining a real parameter. For $\alpha \ne 0$ the convolution equation has no solution. For $\alpha = 0$, the equation has an infinity of solutions, given by the formula
\[ X = F_4^{-1}\hat{X} = \frac{1}{4} \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & i & -1 & -i \\ 1 & -1 & 1 & -1 \\ 1 & -i & -1 & i \end{pmatrix} \begin{pmatrix} 2 \\ i/2 \\ p \\ -i/2 \end{pmatrix} = \frac{1}{4} \begin{pmatrix} 2+p \\ 1-p \\ 2+p \\ 3-p \end{pmatrix}, \qquad p \in \mathbb{R}. \]

Verification
We verify the above result by computing the circular convolution with the aliasing method. The partial products of the linear convolution of $X = \frac{1}{4}(2+p,\ 1-p,\ 2+p,\ 3-p)$ with $A = (1, 9, 9, 1)$ are
1·X: (2+p)/4, (1−p)/4, (2+p)/4, (3−p)/4
9·X: (18+9p)/4, (9−9p)/4, (18+9p)/4, (27−9p)/4
9·X: (18+9p)/4, (9−9p)/4, (18+9p)/4, (27−9p)/4
1·X: (2+p)/4, (1−p)/4, (2+p)/4, (3−p)/4
so the complete linear convolution is
(2+p)/4, (19+8p)/4, (29+p)/4, 8, (46−p)/4, (29−8p)/4, (3−p)/4.
Adding the last three elements to the first three (aliasing) gives
(2+p)/4 + (46−p)/4 = 12, (19+8p)/4 + (29−8p)/4 = 12, (29+p)/4 + (3−p)/4 = 8, 8,
that is $A \circledast X = (12, 12, 8, 8) = B$ for $\alpha = 0$, which confirms the obtained solution.
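The deconvolution procedure used in the example can also be sketched numerically: transform both sides, divide componentwise where the transform of A is non-zero, insert the free parameter where it vanishes, and invert. The code below is an illustrative sketch for α = 0 with an arbitrary value of p, not the author's implementation:

import numpy as np

A = np.array([1, 9, 9, 1], dtype=complex)
B = np.array([12, 12, 8, 8], dtype=complex)        # the case alpha = 0

Ah, Bh = np.fft.fft(A), np.fft.fft(B)
p = 3.0                                            # free real parameter (the component with F(A) = 0)
nonzero = np.abs(Ah) > 1e-12
Xh = np.where(nonzero, Bh / np.where(nonzero, Ah, 1), p)
X = np.fft.ifft(Xh).real
print(X)   # [(2+p)/4, (1-p)/4, (2+p)/4, (3-p)/4] = [1.25, -0.5, 1.25, 0.0] for p = 3

# Check: the circular convolution of A with X reproduces B.
N = len(A)
conv = [sum(A[k].real * X[(n - k) % N] for k in range(N)) for n in range(N)]
print(np.allclose(conv, B.real))                   # True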

Remark. The above example is the particular case $m = 8$, $n = 10$ of the same equation with the data
\[ A = (1\ \ \ m+1\ \ \ m+1\ \ \ 1)^T \quad \text{and} \quad B = (n+2+\alpha\ \ \ n+2-\alpha\ \ \ n-2+\alpha\ \ \ n-2-\alpha)^T. \]


The author uses such exercises with parameters for student assignments, especially for master students. Every student has a particular pair of parameters m and n; for example, they can be the row and the column of the student's seat in the seminar room. Thus, every student has, quantitatively, his own individual problem to solve, but all these problems are qualitatively the same.
6. REFERENCES
[1] M. I. Cîrnu, Algebraic equations. New solutions for old problems, Lambert Academic Publishing,
Germany, 2013.
[2] M. I. Cîrnu, Calculation of convolution products of piecewise defined functions and some
applications, Journal of Information Systems and Operations Management, 6(2012) no. 1, 41-52.
[3] M. I. Cîrnu, A certain integral-recurrence equation with discrete-continuous auto-convolution,
Archivum Mathematicum (Brno), 47 (2011), 267-272.
[4] M. I. Cîrnu, Determinantal formulas for sum of generalized arithmetic-geometric series, Boletin
de la Asociacion Matematica Venezolana, 43 (2011) no. 1, 25-38
[5] M. I. Cîrnu, Initial-value problems for first-order differential recurrence equations with auto-
convolution, Electronic Journal of Differential Equations, 2011 (2011) no. 2, 1-13.
[6] M. I. Cîrnu, Linear discrete convolution and its inverse, Journal of Information Systems and
Operations Management, Part I. Convolution, 4 (2010) no.1, 129-137, Part II. Deconvolution, 4
(2010) no. 2, 123-137.
[7] M. I. Cîrnu, First order differential recurrence equations with discrete auto-convolution,
International Journal of Mathematics and Computation, 4, 2009, 124-128.
[8] A. V. Oppenheim, R. W. Schafer, J. R. Buck, Discrete-time signal processing, Prentice Hall,
1999.


MANAGEMENT OF INNOVATION IN THE MODERN KAZAKHSTAN:


DEVELOPMENT PRIORITIES OF SCIENCE, TECHNOLOGY AND
INNOVATION

Rauan Danabayeva1
ABSTRACT
Kazakhstan's economy has expanded rapidly over the last decade, posting one of the fastest paces of growth in the region. The paper analyses the national innovation system, the institutional framework of innovation policy and the state of science, technology and innovation (STI) in the Republic of Kazakhstan. As a country with abundant natural resources, Kazakhstan is still facing
challenges in transforming into a knowledge-based economy. The strategic course of
Kazakhstan for industrial-innovative development provides necessary conditions for
elaboration and implementation of new scientific ideas and technologies. The strategy of
development of Kazakhstan till 2050 together with such documents as the Strategic
Development Plan up to 2020, or the State program of Forced Industrial-Innovative
Development of Kazakhstan for 2010-2014 provide regular, necessary conditions that
support the development of research, technology and innovation in Kazakhstan.

Keywords: innovation policy, industrial-innovative development programme, technology, economic growth, national innovation system.
1. INTRODUCTION
Kazakhstan is an upper-middle-income country according to the World Bank classification, with a GDP per capita of around US$12,000 in 2012 [1]. Large and sparsely populated, the country
is rich in natural resources, with very significant reserves of oil, gas, minerals. While the
development of its natural resources has provided a major impetus to the recent expansion
of Kazakhstan’s economy, the authorities have stressed the need to develop other sources
of growth and improve overall economic competitiveness. In order to support these aims,
growing resources are being devoted to the modernization of the economy and the
revamping of its infrastructure, seeking to facilitate economic diversification. Kazakhstan
set up different institutions and developed many programs aimed at encouraging innovation
and modernization. Kazakhstan has put a growing emphasis on the promotion of innovation
as a driver of economic development and diversification.
2. INNOVATION POLICY OF KAZAKHSTAN
Kazakhstan is becoming a critical part of the emerging "New Silk Road" that connects the East with Europe, Turkey and the Middle East. An advantageous geographical position, regional integration initiatives and an improving business climate are three key reasons why Kazakhstan is emerging as an attractive investment destination [2]. Kazakhstan has an increasingly business-friendly environment. The World Bank's Doing Business 2013 index ranks it 49th, up from 56th place in 2012

1 Al-farabi Kazakh National University, 71, Al-Farabi av., Almaty, 050040, Kazakhstan, Santiago de
Compostela University, Praza do Obradoiro, s/n, 15782,Spain, E-mail: rauan.danabaeva@mail.ru, Phone:
+7 707 3358666; +34 615875948

[3]. Overall, though, Kazakhstan was named one of the 10 economies improving the most across three or more areas of doing business between 2011 and 2012, and the World Bank has included Kazakhstan in its list of the world's 20 most attractive investment destinations. In 2012, Kazakhstan
trend is due to the successful results of the State program of Forced Industrial-Innovative
Development of Kazakhstan for 2010-2014. According to the report “Global Competitiveness
Report 2013-2014’ of World Economic Forum, Kazakhstan has improved by one position to rank
50th this year out of 144 countries [4]. The country benefits from a flexible and efficient labor market
(15th) and a stable economic environment (23rd) at a time when many countries are struggling in
these areas. Kazakhstan’s main challenges relate to its health care and primary education systems
(97th), its lack of business sophistication (94th), and its low innovation (84th)
According to this report Kazakhstan approached the group of countries driven by innovation
[4]. Priority is given to innovative policies to encourage and promote business innovation,
as well as the implementation of the technology transfer (table 1)

- Stage 1, factor-driven (38 economies): Kyrgyz Republic, India, Ghana, Bangladesh, Yemen, Mali, etc.
- Transition from stage 1 to stage 2 (20 economies): Algeria, Armenia, Azerbaijan, Bolivia, Kuwait, Moldova, Saudi Arabia, Philippines, etc.
- Stage 2, efficiency-driven (31 economies): China, Egypt, Romania, Thailand, Tunisia, Ukraine, etc.
- Transition from stage 2 to stage 3 (22 economies): Argentina, Brazil, Hungary, Kazakhstan, Latvia, Malaysia, Russian Federation, Turkey, etc.
- Stage 3, innovation-driven (37 economies): Australia, Austria, Germany, Japan, Korea Rep., United States, Norway, etc.

Table 1. Countries/economies at each stage of development.


In the GCI (Global Competitiveness Index) of the World Economic Forum, Kazakhstan joined the group of countries inspired by ‘management
efficiency’ and ‘innovations’ along with such countries as Brazil, Malaysia, Turkey, Russia
and others.
By 2016, GDP per capita in Kazakhstan is expected to reach US$15,000, compared with the current level of over US$12,000, and the country will be classified by the World Bank as a "high-income country". All in all, these are significant achievements for a country that only became independent some 20 years ago.
Innovation policy plays a great role in Kazakhstan's economic strategy. There is a clearly stated policy objective to move from a resource-based to a
knowledge-based economy, using earnings from the oil, gas, and mineral sector to facilitate
diversification and modernization [5]. A major challenge for innovation policies in
Kazakhstan is the weak domestic demand for innovation, which reflects the structural
characteristics of the economy and the dominance of extractive industries.


3. NATIONAL INNOVATION SYSTEM


Innovations are one of the key factors influencing the development and the progress of any society. The innovation capabilities of companies depend on a variety of factors, such as R&D expenditure, knowledge management processes, culture, organization structure, management systems etc. [6]. In the search for new, innovative ideas and solutions, undertakings tend to cooperate more and more often with colleges, universities and other public research institutions, for which this is actually one of their missions. The concept of the National Innovation System (NIS), proposed by Freeman, is widely used. Freeman's definition of an NIS is 'the network of institutions in the public and private sectors whose activities and interactions initiate, import, modify and diffuse new technologies' [7]. The central actors in the NIS are business enterprises, which require internal R&D capacities to innovate successfully. The concept of the NIS remains the basis for innovation policy in many countries.
Governments have an important role in fostering innovation. Innovation, like all economic
activity, is contingent on a number of conditions that interact with the different elements of
NIS. In particular these framework conditions define a suitable business environment that
facilitates entrepreneurship and innovation.
The programme for Innovative Development and Support for Technological Modernization
of Kazakhstan for 2010-2014 recognized the need to develop the NIS on the basis of
integrated, interrelated and systematic actions that address the different factors
influencing the generation, dissemination and commercialization of knowledge (Table 2)
[8].
Stage 1 (2010-2014): creation of the competitive industrial and technological base. Challenge: 1. Technology modernization – raising the technology level of the operating enterprises will promote the ability to accept innovations, and then not only to become consumers, but to become generators of innovative technologies.
Stage 2 (2014-2020): development of the innovation market. Challenge: 2. Creation of the economy bases for the future – a) to identify high-tech industries that will become a base for the technological competitiveness of the economy of Kazakhstan in the long term; b) to develop the own scientific competencies of the economy of Kazakhstan in the long term.
Stage 3 (2018-2025): increase of the innovation economy. Challenge: 3. Creation of the favorable innovative environment – increase of the coordination of NIS elements, analytical support to the innovative processes, science and innovation propaganda, improvement of the legislative base.

Table 2. Development of the national innovation system
The key to a successful National Innovation System rests on the creation of synergies
between the various Sector and Regional Innovation Systems. As modern science is a
multidisciplinary activity, knowledge-generation institutions have a major role to play in
creating such synergies, as they facilitate exchanges between scientists and engineers of
different disciplines.
The main strength of Kazakhstan is the support for Science, technology and Innovation at
senior levels in the government. The Government of the Republic of Kazakhstan has
adopted a wide range of policies and made substantial investments in support of innovation.
For instance, plans for increased spending on innovation by large state companies may
provide new impetus, including the decision to allocate 10% of the net profit of Samruk-Kazyna, the National Welfare Fund, to innovation-related projects. The need to increase
domestic demand for innovation, to diversify the concentration of economic activity, to
structure a comprehensive strategy for Human Capital Development, and to establish and
strengthen a tradition of commercializing research are among the key areas that need to be
given special attention. A major challenge for innovation policies in Kazakhstan is the weak
domestic demand for innovation. In this context, one way of overcoming this obstacle is to enter foreign markets with a high demand profile for innovative products and to diversify and reach new target markets other than Russia and China.
4. A KEY MECHANISM FOR DEVELOPMENT
The 2010-2014 state program on accelerated industrial and innovative development was
established to promote stable and well-balanced economic growth. The program targets
diversification of the economy and improved competitiveness by developing priority
sectors and supporting industrial development.
An industrialization map is the key mechanism used to implement the program. The Government and the business community work together to identify specific projects that meet the program's requirements and plot them on the industrialization map. Currently the industrialization map includes 779 projects, which have a combined value of KZT 11.2t
(US$74.7b). These projects will create approximately 220,000 jobs during their
construction period and around 181,000 jobs when they are put into operation. Contribution
of these projects to GDP in 2012 is 1.3% [9].
Results for the first three years (2010 to 2012) of the program:
 Number of projects put into operation: 537
 Total investment: KZT 2.1t (US$ 14b)
 Jobs created: 57,000.
The main programmatic document is the State Programme for Accelerated Industrial-Innovative Development (SPAIID) 2010-2014, part of the Development Strategy 2020 that
was approved in 2010 and covers 2010-2020. In addition to the SPAIID, the Development
Strategy 2020 includes a Health Programme, Education Programme, Language Programme
and others (Table 3). SPAIID has 13 sectoral programmes and ten functional programmes.
It builds on earlier measures and includes regional development plans and sector plans.
In accordance with the provisions of the SPAIID, the Ministry of Industry and New
Technologies is in charge of elaborating the intersectoral plan for scientific-technological
development until 2020. The priorities identified in this plan are reflected in the criteria
used for access to different mechanisms of support (grants, consulting services, business
incubation). Innovation grants in Kazakhstan are [10]:
1. Grant for industrial research;
2. Grant for supporting of high-tech goods production at the initial stage of
development;
3. Grant for patenting abroad or in regional patent organizations;
4. Grant for technology transfer;
5. Grant for technology commercialization;


6. Grants in the frame of the state program ‘Performance 2020’;


7. Grant for training of technical staff abroad;
8. Grant for attraction of highly qualified foreign professionals;
9. Grant for attraction of consulting, design and engineering organizations;
10. Grant for implementation of management and production technologies.

Task 1 – Own scientific competencies
Task 2 – Smart technology transfer
Task 3 – Innovation environment

Table 3. Concept of innovation development until 2020
5. CONCLUSION
For Kazakhstan it is essential not only to focus on industrial innovation, but also to complement it with suitable innovative business models (i.e., a combination of technological innovation and business innovation). Current investors are much more aware
of the country’s environment and are willing to explore further possibilities in the market.
Conversely, Kazakhstan needs to change the widely held perceptions of potential new
investors. Most seem not to have Kazakhstan on their investment radar or remain unaware
of the country’s attractive features, locations and sectors that present opportunities for
growth. To overcome these perceptions, it is essential that the Kazakhstan Government intensifies its efforts to communicate the country's potential to the rest of the world. Even in a challenging global environment, the message can get through that Kazakhstan is building a solid framework for moving up the value chain and is developing a welcoming
business culture that is conducive to innovation and growth.
Kazakhstan has the opportunity and potential to improve its capacity to innovate, and join
the world leaders in innovation. Towards achieving this, Kazakhstan should ensure the
effectiveness and coherence of all the constituent elements of the National Innovation
system. Ensuring the market economy with a dynamic innovation capacity requires not only
sound government policies and tools, but also private sector initiatives. Being a young
market economy, Kazakhstan has strong potential, and should give special attention to
effective partnership between public and private sector for generating an environment
conducive to a functional knowledge-based economy.
6. REFERENCES
1. www.worldbank.org.kz
2. State Program for Accelerated Industrial innovative development of the Republic of
Kazakhstan (approved by the President decree No 958 on 19 March 2010).


3. World Bank. Doing Business 2013: Kazakhstan - Smarter Regulations for small and
Medium-Size Enterprises-comparing business regulations for domestic firms in 185
economies.
4. The Global Competitiveness Report 2013-2014, World Economic Forum , Geneva 2013.
5. N.Kuchukova (2010), Sources of financing for the industrial-innovative development of
Kazakhstan, presentation at the Astana Economic Forum. 2010.
6. Cesar Camison, Vicente M.Monfort-Mir.Measuring innovation in tourism from the
Schumpeterian and the dynamic-capabilities perspectives. Tourism management, Volume
33, Issue 4, August 2012.
7. C. Freeman and L. Soete (1997), The Economics of Industrial Innovation, 3rd edition,
London, Pinter.
8. Strategy of Industrial and Innovative development of the Republic of Kazakhstan for 2003-
2015 (approved by President Decree No.1096 on 17 May 2003).
9. Report of EY’s attractiveness survey. Kazakhstan 2013 – Unlocking value.
10. Second Report under the studies on International/Regional trade integration: ‘Kazakhstan:
taking advantage of trade and openness for development., July 10,2012.


2:1 UPSAMPLING-DOWNSAMPLING
IMAGE RECONSTRUCTION SYSTEM

Mihai Cristian Tănase1


Mihai Zaharescu2
Ion Bucur3
ABSTRACT
This paper considers an efficient way to send images over an unreliable network link, so that even with a small bandwidth the receiver obtains data at the best possible quality in an acceptable time. The paper summarizes a study, presented in detail, and gives a new view of the process of sending images over the Internet.

KEYWORDS: Image processing, image compression, progressive coding, transfer protocol, image file format
1. INTRODUCTION
The study that was made in this paper is based on upsampling–downsampling images at the
2:1 ratio, considering most of the standard filters.
The study also resulted in a system that is constructed of two parts: the server (producer)
side and the client (consumer) side. Between this two parts can by any kind of network
connection: local, Internet, etc.
The result of the study is an implementation of the system that uses BMP (Bitmap format)
and constructs a pyramid (see below) of different sizes of the original image.
2. SYSTEM IMPLEMENTATION
System overview
Each of the system components has a package (a set of unique upsampling–downsampling filters) that is used to recursively construct a pyramid.
The pyramid building algorithm is straightforward: it is based on successive downsampling operations, followed by upsampling and encoding of the residuals.
Figure 1 presents an example of the resulting pyramid.

1 Engineer, Department of Computer Science and Engineering, Faculty of Automatic Control and Computers
Science, University “Politehnica” of Bucharest, Splaiul Independenţei 313, Bucharest, 060042, Romania,
mihaicristian.tanase@gmail.com
2 Engineer, Department of Computer Science and Engineering, Faculty of Automatic Control and Computers

Science, University “Politehnica” of Bucharest, Splaiul Independenţei 313, Bucharest, 060042, Romania,
mihai.zaharescu@cti.pub.ro
3 Associate Professor PhD Eng., Department of Computer Science and Engineering, Faculty of Automatic

Control and Computers Science, University “Politehnica” of Bucharest, Splaiul Independenţei 313, Bucharest,
060042, Romania,ion.bucur@cs.pub.ro

Figure 1. Constructing a pyramid of different image resolutions.


Each layer (level) of the pyramid is constructed by downsampling the layer below or
upsampling the layer above and modifying the result with the residual image. The residual
image is the difference between an image of a layer and the upsampled image of the layer
above.
The system works as presented in Figure 2.

Figure 2. Phases in the image transformation system.


A layer of the pyramid contains the level number (starting from 1), the original image
downsampled (resizing at half on both width and height) and the residual image, which is
the difference between the original image and the upsampled image of this layer.
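A compact sketch of this pyramid construction is given below; it assumes grayscale images with even dimensions, a 2×2 box filter for downsampling and pixel duplication for upsampling (the actual system also tests bilinear upsampling), so it is an illustration rather than the exact implementation.

import numpy as np

def downsample_box(img):
    # Average each 2x2 block into a single pixel (box filter).
    return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample_duplicate(img):
    # Duplicate every pixel 2x2 (pixel-multiplication upsampling).
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def build_pyramid(image, levels):
    """Each layer stores its number, the downsampled image and the residual
    (difference between the layer below and the upsampled current image)."""
    pyramid, current = [], image.astype(float)
    for level in range(1, levels + 1):
        small = downsample_box(current)
        residual = current - upsample_duplicate(small)   # values in [-255, 255]
        pyramid.append({"level": level, "image": small, "residual": residual})
        current = small
    return pyramid

pyr = build_pyramid(np.random.randint(0, 256, (512, 512)), levels=4)
print([layer["image"].shape for layer in pyr])   # [(256, 256), (128, 128), (64, 64), (32, 32)]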


In this implementation, bandwidth variation is taken into consideration and it influences the
chosen layer to be sent. The bandwidth is simulated by a random value between two chosen
thresholds4 (in the implementation, the lower threshold is 40 and the upper is 5120). To
select the layer, a linear interpolation is done between the lower and the upper bandwidth thresholds, mapping the current bandwidth value onto the range from 1 to the highest layer number of the pyramid. The formula is:
\[ S = (M - m) \cdot (Ct - Lt) / (Mt - Lt) \]
where S is the selected layer id; M the max layer id; m the min layer id; Ct the current bandwidth value; Lt the lower threshold and Mt the upper threshold.
In other words, when the bandwidth is close to the minimum accepted, the layer that is
selected, when packaging (preparing) for sending, is close to the minimum.
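The layer selection can be sketched as follows; the thresholds 40 and 5120 are the ones mentioned for the implementation, while the offset by the minimum layer id and the rounding are assumptions made for illustration:

def select_layer(bandwidth, max_layer, min_layer=1, lower=40, upper=5120):
    # Linear interpolation of the current bandwidth between the two thresholds,
    # mapped onto the range of available layer ids: S = (M - m) * (Ct - Lt) / (Mt - Lt).
    bandwidth = max(lower, min(upper, bandwidth))
    s = (max_layer - min_layer) * (bandwidth - lower) / (upper - lower)
    return min_layer + round(s)        # assumption: offset by the minimum id and round

print(select_layer(40, max_layer=9))    # 1  (lowest bandwidth -> minimum layer id)
print(select_layer(5120, max_layer=9))  # 9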
The following step is represented by the sending process. In the implementation this is
simulated by two different functions (one that fills up a package and the other that uses this
package to do the next presented calculations). To fully generate the original image, a client
(consumer) has to have all the residual images from the selected layer to the first, the bottom
one.
The last step is the consumer side of the system, in which the package formed is used to
generate recursively the originally-sized image.
Filters
In this system three filters are used: a box filter, a bilinear filter and one that generates pixels
by multiplying. For downsampling a box filter is used by averaging four neighbors into a
single pixel. For the upsampling the three interpolation methods mentioned are tested.
The residual image
On a given layer of the constructed pyramid, the residual image is the difference between
the image of the lower level5 and the upsampled image of the current layer. Because both
images contain values from 0 to 255, the pixel values in the residual image are in the interval
[-255, 255]. This is an important aspect as it will be used in the encoding process.
Of course, different filters (for upsampling or downsampling) will yield different residual
images (see Figure 3).

Figure 3. Comparison between two different upsampling filters:


bilinear interpolation (left) and pixel multiplication (right)

4 The bandwidth is a value in kb (kilobytes).


5 For the first layer of the pyramid, the lower layer is the original image.

One objective of the study is to select the best combination of filters that give the best
results. The optimal result is directly influenced by the encoded size of the residual image.
In the implementation, for entropy coding purposes the Adaptive Huffman encoding was
selected. [1]
3. TESTS AND RESULTS
When doing the tests, most of the parameters were altered (new images, different
bandwidth, etc).
The graphs in Figure 4 show the original image size (the blue line) and the sizes of all the
layers (values on the X-axis represent the number of downsamplings done, which are the
same as the layer ID’s in the pyramid).

Figure 4. Comparison between sizes in the pyramids and original images. From left to right:
ChristmasTree (512x512), ChristmasTree2 (512x512), Dinner (256x256)
Table 1 presents the results of the tests. One note here: going upward in the pyramid does not necessarily yield a smaller package size.
Image | Bandwidth | Selected layer | Original image size | Sent package size | Size reduction (%)
ChristmasTree (512x512) | 40 | 3 | 786486 | 288815 | 63.28
ChristmasTree (512x512) | 2540 | 9 | 786486 | 311293 | 60.42
ChristmasTree (512x512) | 5120 | 1 | 786486 | 399355 | 49.22
ChristmasTree2 (512x512) | 40 | 4 | 786486 | 247118 | 68.58
ChristmasTree2 (512x512) | 2540 | 2 | 786486 | 272330 | 65.37
ChristmasTree2 (512x512) | 5120 | 1 | 786486 | 370151 | 52.94
Dinner (256x256) | 40 | 2 | 49206 | 25890 | 47.38
Dinner (256x256) | 2540 | 1 | 49206 | 28630 | 41.82
Dinner (256x256) | 5120 | 7 | 49206 | 31688 | 35.60

Note: the selected layer represents the number of down-samplings executed on the original image.

Table 1. Test results


4. FUTURE WORK
In the study and, also, in the implementation, the priority was represented by the
construction of a system capable of sending images at different resolutions in
correspondence with the bandwidth. This is the reason for not having tested more
resampling filters. So, the next step is to implement all the standard resampling filters and
rerun the tests.
To obtain the most realistic results, the whole system has to be integrated and tested in a
real world network.
The system can be adapted to video compression [3] and adaptive transmission. Correlation
on frames in the already generated pyramid [2] can help introduce a warping transformation
matrix that utilizes the redundancy between frames [9].
Instead of splitting the image one resolution at a time, we can split a resolution layer by the
orientations of the features.[4][5] The advantage of this approach is that we can send a
random orientation from each resolution layer, giving a first impression that the image has
all the frequencies, and if there is time left we can send the remaining channels also.
Besides the lossless compression, a lossy compression obtained by thresholding small values would offer good quality over a small-bandwidth network. A proposal over the traditional algorithms is to use adaptive thresholding, adapted from binarization [4][5]. Whatever is black is split on a different layer, to be sent if there is spare time at the end.
Clustering is another possibility for sending elements that generate visual impact first. [8]
All the filtering tested was done directly on the image, in the spatial domain. An idea, however, is to filter the image in a different domain, for example after applying the Radon transform [7], in order to minimize coherent edge deteriorations. The lines detected on the residual image can be sent in vectorized form as a first iteration, to give a fast impression of better resolution [6].
5. CONCLUSIONS
This paper presents the implementation details and the results of a study which searches a
more efficient way of sending images over various networks. It focuses on building a
system that transforms images into their correspondent low resolution pairs, which leads to
a structure similar to a pyramid.
Also, to further simulate the network environment, the system takes an additional input, the bandwidth value. This implementation will need more refinement, as stated in the "Future work" section, but, as shown in the "Tests and results" section, it has an immediate benefit, reducing the total size of the image by close to 70% in some cases.


6. ACKNOWLEDGEMENT
The authors would like to thank Costin-Anton Boiangiu for his original idea (paper under
review) onto which the described system is based, for support and assistance with this paper.
7. REFERENCES
[1] Xudong Song, Yrjo Neuvo, “Image Compression Using Nonlinear Pyramid Vector
Quantization”, Multidimensional Systems and Signal Processing, Vol. 5, 1994, pp. 133-
149.
[2] M. Sankar Kishore, K Veerabhadra Rao, “A study of correlation technique on pyramid
processed images”, Sadhana, Vol. 25, 2000, pp. 37-43.
[3] Luc Vandendorpe, Benoit Macq, “Optimum quality and progressive resolution of video
signals”, Annales Des Télécommunications, Vol. 45, 1990, pp. 487-502.
[4] Costin-Anton Boiangiu, Alexandra Olteanu, Alexandru Victor Stefanescu, Daniel Rosner,
Alexandru Ionut Egner (2010). „Local Thresholding Image Binarization using Variable-
Window Standard Deviation Response” (2010), Proceedings of the 21st International
DAAAM Symposium, 20-23 October 2010, Zadar, Croatia, pp. 133-134.
[5] Costin-Anton Boiangiu, Andrei Iulian Dvornic. “Methods of Bitonal Image Conversion
for Modern and Classic Documents”. WSEAS Transactions on Computers, Issue 7,
Volume 7, pp. 1081 – 1090, July 2008.
[6] Costin-Anton Boiangiu, Bogdan Raducanu. “Robust Line Detection Methods”.
Proceedings of the 9th WSEAS International Conference on Automation and Information,
WSEAS Press, pp. 464 – 467, Bucharest, Romania, June 24-26, 2008.
[7] Costin-Anton Boiangiu, Bogdan Raducanu. “Effects of Data Filtering Techniques in Line
Detection”, Proceedings of the 19th International DAAAM Symposium, DAAAM
International (Vienna, Austria), pp. 0125–0126, 2008.
[8] Costin-Anton Boiangiu, Bogdan Raducanu. “3D Mesh Simplification Techniques for
Image-Page Clusters Detection”. WSEAS Transactions on Information Science and
Applications, Issue 7, Volume 5, pp. 1200 – 1209, July 2008.
[9] Costin-Anton Boiangiu, Daniel Rosner, Alexandra Olteanu, Alexandru Victor Stefanescu,
Alin Dragos Bogdan Moldoveanu, “Confidence Measure for Skew Detection in
Photographed Documents”, Annals of DAAAM for 2010, Proceedings of the 21st
International DAAAM Symposium, 20-23 October 2010, Zadar, Croatia, pp. 129-130.
STUDY OF NEUROBIOLOGICAL IMAGES BASED ON ONTOLOGIES USED IN SUPER-RESOLUTION ALGORITHMS
Andreea-Mihaela Pintilie 1
Costin-Anton Boiangiu2
ABSTRACT
This paper focuses on the applicability of ontologies in the medical image processing area.
Techniques for automatically describing medical images in a medical language that
doctors can operate with are presented. Super resolution algorithms are also highlighted
in order to prove their superiority over the usual interpolation methods on medical images. At
the end, a set of experimental results regarding ontologies is offered.

KEYWORDS: Neurobiological image, medical image, ontology, super resolution


1. APPLICABILITY OF ONTOLOGIES IN SUPER RESOLUTION
ALGORITHMS
Super resolution case studies
The ability to transcend the fundamental resolution limits of sensors using super-resolution
algorithms has shown significant progress and capability in the area of photographic
imaging. By far, the majority of applications using super-resolution technology have been
in the area of photographic imagery for either consumer or defense-type applications.
Recently, researchers integrated super-resolution algorithms in different imaging medical
applications. In order to improve the super resolution algorithms for medical applications,
the researchers took into consideration the differences between the photographic imaging
and medical imaging.
Image representation
An image can be represented in various feature spaces, and the FCM algorithm classifies
the image by grouping similar data points in the feature space into clusters. This clustering
is achieved by iteratively minimizing a cost function that is dependent on the distance of
the pixels to the cluster centers in the feature domain. The pixels on an image are highly
correlated, i.e. the pixels in the immediate neighborhood possess nearly the same feature
data. Therefore, the spatial relationship of neighboring pixels is an important characteristic
that can be of great aid in image segmentation. General boundary detection techniques
have taken advantage of this spatial information for image segmentation. However, the
conventional FCM algorithm does not fully utilize this spatial information.

1 Engineer, Department of Computer Science and Engineering, Faculty of Automatic Control and Computers
Science, University “Politehnica” of Bucharest, Splaiul Independenţei 313, Bucharest, 060042,
Romania, andreea.pintilie@cti.pub.ro
2 Associate Professor PhD Eng., Department of Computer Science and Engineering, Faculty of Automatic
Control and Computers Science, University “Politehnica” of Bucharest, Splaiul Independenţei 313, Bucharest,
060042, Romania, costin.boiangiu@cs.pub.ro
Figure 1: Super resolution steps


Pixel Classification
In a standard FCM technique, a noisy pixel might be wrongly classified because of its
abnormal feature data. The method presented in [28] incorporates spatial information and
alters the membership weighting of each cluster after the cluster’s distribution in the
neighborhood is considered. This scheme is proposed in order to reduce the effect of noise
and to bias the algorithm toward homogeneous clustering.
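To make the clustering step concrete, the sketch below implements the standard FCM update equations (alternating center and membership updates on a vector of pixel feature values) in Python with NumPy. It is a minimal illustration under our own naming, not the exact spatially-weighted variant of [28]; a comment marks where that spatial re-weighting would intervene.

```python
import numpy as np

def fuzzy_c_means(pixels, n_clusters=3, m=2.0, n_iter=100, eps=1e-5):
    """Standard fuzzy c-means on a 1-D feature vector (e.g. gray levels)."""
    rng = np.random.default_rng(0)
    u = rng.random((pixels.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)              # memberships sum to 1 per pixel
    p = 2.0 / (m - 1.0)
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ pixels / um.sum(axis=0)   # weighted cluster centers
        dist = np.abs(pixels[:, None] - centers[None, :]) + 1e-12
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        new_u = 1.0 / (dist ** p * (dist ** -p).sum(axis=1, keepdims=True))
        # A spatially regularised variant (in the spirit of [28]) would re-weight
        # new_u here using the memberships of the neighbouring pixels.
        if np.abs(new_u - u).max() < eps:          # stop when memberships stabilise
            u = new_u
            break
        u = new_u
    return u, centers

# Tiny example: two intensity populations, hard labels from the fuzzy memberships
img = np.concatenate([np.full(100, 0.2), np.full(100, 0.8)])
u, c = fuzzy_c_means(img, n_clusters=2)
labels = u.argmax(axis=1)
```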
Medical Image acquisition
Firstly, when a medical image is taken, a controlled source of illumination is often used
during the image acquisition. Strong illumination usually yields a higher signal-to-noise
ratio (SNR) [23]. However, in order to prevent tissue damage, the illumination energy is limited,
and therefore the SNR is below the SNR of a photographic image.
Secondly, imaging speed is another factor that has to be taken into consideration: short
acquisition times are needed in order to avoid patient movement and therefore minimize the
associated imaging artifacts, and the acquisition time also has an impact on the patients’ comfort.
Thirdly, the image processing artifacts are much less tolerable in medical images than in
photographic applications. Luckily, medical imaging systems operate under highly
controlled environments with highly similar objects. Algorithm developers can leverage
prior knowledge about the anatomy or biology to improve image quality. [21]
Finally, the majority of medical imaging applications involve creating images from
radiation propagation through three-dimensional objects. Thus, while the final images are
two-dimensional, they represent some form of projection through a three-dimensional
volume.
Low Radiation Digital X-ray Mammography
Super resolution is nowadays used in Low Radiation Digital X-ray Mammography. Today’s
digital detectors cannot shrink pixel sizes to increase resolution without sacrificing the SNR
measurement.[30]
To maximize image resolution, the researchers have explored digitally combining multiple
low-dosage images, each containing spatial shifts. The shifts were the result of patient
movement, intentional dithering of the detector, vibration in the imaging system, and small
movement of the imaging gantry. Providing high resolution imagery required sophisticated,
nonlinear reconstruction techniques to address the extremely low SNR of the captured
images, but using an improved super resolution algorithm solved the problem. [31]
Optical Coherence Tomography
The invention and utilization of Optical Coherence Tomography (OCT), first reported in
[28] in 1991, has had a profound impact on many fields of research, especially ophthalmic
sciences. OCT systems provide noninvasive yet high-resolution in-vivo images of retinal
structures, which can be used to define imaging biomarkers of the onset and progression of
many ophthalmic diseases. By employing an interferometer, several OCT based imaging
systems have been developed throughout the years, most notably the time-domain OCT
(TDOCT) and ultrahigh resolution OCT (UHROCT).
The advent of the Spectral Domain Optical Coherence Tomography (SDOCT) system has
further improved the image quality and acquisition speed. Today, several commercial
SDOCT systems are available with similar capabilities and 20-30 kHz A-scan rates.[28]
OWL Ontology
In medicine, besides UMLS, several large-scale domain ontologies have been implemented:
GALEN, NCI and the Medical Entities Dictionary. Though containing huge amounts of useful
domain knowledge, most available medical ontologies have been structured under different
design principles than those required for Semantic Web applications and therefore cannot
be directly integrated in such applications. On the one hand, most of the available ontologies
are not formalized in an appropriate representation language to be shared and reused. On
the other hand, they have been realized for very concrete tasks and their content is modeled
in an ambiguous way. [9]
Ontology in medical images analysis
According to the article [34] a medical image is more than an array of pixels. At least three
major distinct components of a medical image can be distinguished:
1. The particular anatomical structure from which the image is taken
2. The array of pixels of measured radiation, hydrogen density, etc.
3. The collection of features each of which is a cluster of pixels with similar pixel
values in the pixel array.
The article also states that there are at least four stages of medical image analysis:
1. Clustering pixels into features
2. Analyzing the shape of features and spatial relations between them as they are
depicted in the pixel array
3. Mapping features to anatomical structures and relations between features to
relations between (parts of) anatomical structures
4. Evaluating the depicted anatomical structure (and parts thereof) as normal or
pathological in the sense of canonical anatomy
The first stage of analyzing the image shown in Figure 2(a), is clustering pixels into features
like feature 1 and feature 2 as depicted in the drawing shown in Figure 2(b). This process
is complex and has the following problems:
1. difficulty of extracting crisp boundaries from the pixel array
2. heterogeneity of pixels that are to be clustered into a single feature

Figure 2: Ontology features classification


In order to cluster the pixels, besides using a fuzzy clustering algorithm and a super
resolution one to determine the boundaries of the image, the ontology analyzes the
anatomical structures in a top-down approach, to determine which qualitative relations hold
between the parts of (normal and pathological) anatomical structures.
2. USE OF OWL ONTOLOGY MODULE IN PROJECT IMPLEMENTATION

Figure 3: Project structure


OWL Ontology module
The ontology design we used for this project is pretty simple, but we intend to extend it. As
stated in the previous chapters, the ontology was designed in order to obtain some
representation of the brain and of the tumors located in different parts of the brain. The
structure is a hierarchical one, as presented in the following images.
The ontology is not centered on anatomical structures, as in the case of general-use
ontology, but rather on radiography entities because a segmentation application makes use
of entities which can be determined in radiography. Nevertheless, the parts of the
anatomical structures are very useful for refining the segmentation. For instance, more
precise coordinates might be added in order to determine the junction points of certain
bones or tissues. The main classes are RadiographyEntity and Tumor.
Figure 4: Proposed ontology classes


RadiographyEntity
RadiographyEntity contains the AnatomicalEntity class, at the beginning of the design the
Tumor was considered a RadiographyEntity, but the Tumor type must be different from
any of the types contained in RadiographyEntity and the location of the Tumor must have
as location an anatomical entity. In addition, we started with the idea that the patient has no
anatomical abnormality. This way the ontology is modular.
The class was used in order to group entities with the same role. The basic brain parts are
mapped on entities from this class. In future implementations, a class like abnormalities
must be taken into consideration, because the brain is unique from individual to individual.
SimpleEntity
The SimpleEntity class represents the basic parts that the brain is constituted from. It has
restrictions such as: the brain cells and bone cells are disjoint. After reading about the
structure and the composition of the brain, we noticed that there are different types of cells
for the skull and for the brain. Even the skull cells differ from one another by density and
number of pores. In the present implementation we only took into consideration the number
of pores present in the bone.
HeadEntity
The HeadEntity represents the level above the SimpleEntity. Instead of HeadEntity, any
other composed structure, such as the thorax or the rib cage, could be used. Besides the fact that the
composition of each element can be established and inference rules can be applied in order
to generate the structure of related elements, the spatial relations between objects were also
taken into consideration.
OpacityEntity
It is important to make a clear distinction between anatomical objects or tumor objects and
the opacities visible on radiographs, like edges, intensity and texture. In defining the opacity
class, we took into consideration the fact that these opacities are specific to each radiographic
image, and after reading about X-ray image interpretation we managed to define the
attributes: edges, intensity and texture. Moreover, these attributes are also very important
in image segmentation.
MalignityPossibility
The malignity possibility represents the class where the probability of a tumor being
malign or benign is expressed. If a tumor cannot be classified as definitely malign or benign, then
it can be, to a certain percentage, malign or benign.
Lobes
The class, as the name states, represents the lobes of the brain. The lobes are also parts of
right or left hemisphere.
Anatomical Landmarks
The landmarks are of great importance for image segmentation algorithms. Based on them,
the entire junction between the cranial structures can be determined.
Tumor
Important subclasses for image segmentation are the location and the size of the tumor.
After establishing the location and the size of the tumor, with the help of a radiologist
certain conclusions might be drawn such as: the possibility for a tumor to become malign
or to grow and the influence of the tumor on the cognitive and behavioral process. In the
present ontology the assumptions about the malignity of a tumor were made based on the
size of the tumor.
Relations
The relations are grouped into spatial relations, consistence relations and malignity relations.
The spatial relations have the role of designing the brain map and are usually expressed by
words like: toLeft, toRight, above, etc. or, for establishing the intersection point,
isIntersectionPoint.
The consistence relations are used in order to create the anatomical hierarchy previously
presented: hasCells, hasBones, consitsOf, etc.
The malignity relations refer to the malignity of a tumor.
Figure 5: Proposed ontology relations


3. EXPERIMENTAL RESULTS
Data properties
The data properties were used in order to establish certain intervals e.g. for a malign tumor
or for the opacity properties like texture type and edges type, the size of the tumor, the lobes
position in a certain hemisphere etc.
Individuals
Test
The ontology accepts individuals like the one presented in the “Individuals” section. An
individual can be added as a direct class type, or by using some rules that define the class. As an
example, a facial bone can be added either as a direct type of FacialBones
or as:
 (isPartOf some Bones)
 and (consitsOf only BoneCells)
 and (hasBones only FacialBonesCells)
 and (hasPores some integer[< 1])
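For readers who prefer an executable form, the sketch below shows how such a class expression could be written with the owlready2 Python library. The class and property names mirror the ones used above (including the original spelling of consitsOf), but the ontology IRI and the code structure are illustrative assumptions, not the project's actual implementation.

```python
from owlready2 import Thing, ObjectProperty, get_ontology

onto = get_ontology("http://example.org/radiography.owl")   # hypothetical IRI

with onto:
    class Bones(Thing): pass
    class BoneCells(Thing): pass
    class FacialBonesCells(BoneCells): pass

    class isPartOf(ObjectProperty): pass
    class consitsOf(ObjectProperty): pass        # spelling kept as in the ontology
    class hasBones(ObjectProperty): pass

    # A facial bone can be asserted as a direct type, or recognised through a
    # class expression analogous to the restriction listed above.
    class FacialBone(Bones):
        equivalent_to = [
            Bones
            & isPartOf.some(Bones)
            & consitsOf.only(BoneCells)
            & hasBones.only(FacialBonesCells)
            # the numeric facet "hasPores some integer[< 1]" would be expressed
            # with an owlready2 ConstrainedDatatype restriction on a data property
        ]

onto.save(file="radiography.owl")                 # e.g. for inspection in Protege
```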
Results
The ontology is very useful when trying to determine the intersection points between two
entities and when trying to establish a “map” of the human brain, in order to decide the
severity of the tumor.
Figure 6: Ontology data set and results


4. REFERENCES
[1] Jean-Pierre Schober, Thorsten Hermes, Otthein Herzog “Content-based Image Retrieval by
Ontology-based Object Recognition”, KI-2004 Workshop on Applications of Description
Logics.
[2] Paul M. Thompson “Bioinformatic and brain imaging: recent advances and neuroscience
applications” Ph. D, 2002.
[3] Dameron O Modlisation, “Representation et partage de connaissances anatomiques sur le
cortex cerebral”, These de doctorat d’Universite, Universite de Rennes 1 2003.
[4] Paul M.Thompson “Bioinformatics and brain imaging: recent advances and neuroscience
applications” 2002.
[5] Thompson PM et al. “Brain Image Analysis and Atlas Construction”, in: Fitzpatrick M,
Sonka M (ed.), Handbook on Medical Image Analysis, SPIE Press 2000.
[6] Thompson PM, Mega MS, Vidal C, Rapoport JL, Toga AW “Detecting Disease-Specific
Patterns of Brain Structure using Cortical Pattern Matching and a Population-Based
Probabilistic Brain Atlas” 2001.
[7] Benoit Da Mota, Vincent Frouin, Edouard Duchesnay, Soizic Laguitton, Gael Varoquaux,
Jean-Baptiste Poline, Bertrand Thirion “A fast computational framework for genome-wide
association studies with neuroimaging data” 20th International Conference on
Computational Statistics, 2012.
[8] Fernando Calamante, Jacques-Donald Tournier, Robin M. Heidemann, Alfred Anwander,
Graeme D. Jackson, Alan Connelly “Track density imaging (TDI): validation of super-
resolution property”, NeuroImage, Volume 56, Issue 3, 1 June 2011, Pages 1259–1266.
[9] Christopher Town “Ontological Inference for Image and Video Analysis” Machine Vision
and Applications, May 2006, Volume 17, Issue 2, pp 94-115 .
[10] Hai-Guang Li, Gong-Qing Wu, Xue-Gang Hu, Jing Zhang, Lian Li, Xindong Wu
“K-Means Clustering with Bagging and MapReduce”, System Sciences (HICSS), 2011 44th
Hawaii International Conference.
[11] Weiling Cai, Songcan Chen, Daoqiang Zhang “Fast and Robust Fuzzy C-Means Clustering
Algorithms Incorporating Local Information for Image Segmentation” Pattern
Recognition, Volume 40, Issue 3, March 2007, Pages 825–838.
[12] Weizhong Zhao, Huifang Ma, Qing He “Parallel K-Means Clustering Based on
MapReduce” Cloud Computing Lecture Notes in Computer Science Volume 5931, 2009,
pp 674-679 .
[13] Imianvan A.Al. Obi J.C “Diagnostic Evaluation Of Hepatitis Utilizing Fuzzy Clustering
Means” World Journal of Applied Science and Technology, Vol.3. No.1 (2011). 23-30. 23.
[14] Chih-Cheng Hung, Sameer Kulkarni, Bor-Chen Kuo, “A New Weighted Fuzzy CMeans
Clustering Algorithm for Remotely Sensed Image Classification” Selected Topics in Signal
Processing, IEEE Journal (Volume:5 , Issue: 3 ).
[15] Dzung L. Pham, Jerry. Prince “An Adaptive Fuzzy C Means Algorithm for Image
Segmentation in the Presence of Intensity Inhomogeneities”, Pattern Recognition Letters,
2009.
[16] Neelum Noreen, Khizar Hayat and Sajjad A. Madani “MRI Segmentation through
Wavelets and Fuzzy C-Means” World Applied Sciences Journal 13 (Special Issue of
Applied Math): 34-39, 2011.
[17] Leehter Yao, Kuei-Sung Weng “On A Type-2 Fuzzy Clustering Algorithm” 2012.
[18] Francesco Masulli, Andrea Schenone “A fuzzy clustering based segmentation system as
support to diagnosis in medical imaging” Artificial Intelligence in Medicine Volume 16,
Issue 2, June 1999, Pages 129–147.
[19] Robert L. Cannon, Jitendra V. Dave, James C. Bezdek “Efficient Implementation Of The
Fuzzy C-Means Clustering Algorithms” Pattern Analysis and Machine Intelligence, IEEE
Transactions (Volume: PAMI-8, Issue: 2 ).
[20] Payman Milanfar “Super resolution IMAGING” CRC Press, 2010.
[21] M. T. Merino “Super-Resolution with Additive-Substitutive Wavelets” IEEE Signal
Processing Magazine 06/2003.
[22] Tinku Acharya, Ping-Sing Tsai “Computational Foundations of Image Interpolation
Algorithms” Magazine Ubiquity archive Volume 2007 Issue October Article No. 4 .
[23] Dinggang Shen “Image registration by local histogram matching” 2006.
[24] David Capel, Andrew Zisserman “Super-resolution Enhancement of Text Image
Sequences” Pattern Recognition, Volume 40, Issue 4, April 2007, Pages 1161–1172.
[25] Assaf Zomet, Alex Rav-Acha, Shmuel Peleg “Robust Super-Resolution” Proceedings of
the International Conference on Computer Vision and Pattern Recognition (CVPR),
Hawaii, December 2001.
[26] Peyman Milanfar, Filip Sroubek, Jan Kamenick “Superfast superresolution” 2011.
[27] Whilhelm Burger, Mark James Burge “Digital Image Processing An algorithmic
introduction using Java” January 19, 2012 ISBN-10: 1846283795.
[28] D. Huang, E.A. Swanson, C.P. Lin, J.S. Schuman, W.G. Stinson, W. Chang, M.R. Hee, T.
Flotte, K. Gregory “Optical coherence tomography” Science 22 November 1991:Vol. 254
no. 5035 pp. 1178-1181 .
[29] Sean Owen, Robert Anil, Ted Dunning, Ellen Friedman “Mahout in Action” Manning
Publications (October 14, 2011) .
[30] Ding Feng “Segmentation of Bone Structures in X-ray Images”, Advances in Electrical and
Computer Engineering, 2013-02-28, Volume 13, Issue 1, Year 2013, page(s): 87 - 94 2006.
[31] M. Dirk Robinson, Stephanie J. Chiu, Cynthia A. Toth, Joseph A. Izatt, Joseph Y. Lo, Sina
Farsiu “New Applications of Super-resolution in Medical Imaging” Super-Resolution
Imaging, P. Milanfar, ed.(CRC, 2010), pp. 383–412.
[32] Tom White “Hadoop: The Definitive Guide” O'Reilly Media / Yahoo Press, 2011.
[33] Sang C. Suh and Kalyani Komatireddy “Clustering of Ontology in Preventive Health Care
Through Relational Ontology” KE’11, World comp 2011, Las Vegas, Nevada,USA.
[34] Thomas Bittner, Louis J. Goldberg and Maureen Donnelly “Ontology and qualitative
medical images analysis” 2010.
A FUZZY COGNITIVE MAP FOR HOUSING DOMAIN

Crișan Daniela Alexandra 1


Stănică Justina Lavinia 2
ABSTRACT
It is well known that local officials in city governments need to develop comprehensive
housing policies to guide their current and future decisions in the context of a specific
community that often faces different issues. In this paper, the authors propose an original
Fuzzy Cognitive Map (FCM) model for the housing domain; it analyzes the housing decisions
and identifies the reasons motivating people’s residential choices. The model uses ten
variables grouped into three categories. Stakeholders, including the authorities, could
exploit the model to build scenarios and use these in order to develop strategies and policies
that can lead to the improvement of households’ quality of life.

Keywords: fuzzy cognitive map (FCM), simulation, participative decision-making process, housing domain
1. INTRODUCTION
One of the main concerns of authorities, at international, national and local level, is to
increase the quality of life of the citizens. Studies show that the households’ residential
choice is directly linked to the well-being of the community they belong to. Therefore,
decision makers are actively involved in developing projects with the objective of
identifying the factors that positively or negatively influence people’s quality of life.
Housing location is an interesting topic, because the decisions of households to move or
stay in their residential location are determined by several issues such as: population
density, property and rental cost, job availability, access to education, access to recreational
activities and services and so on. All of these factors are closely tied to people’s decision
to live in a certain community.
In order to find the motivation of a household to relocate or stay in its current location, the
main variables that influence the location choices have been identified first. Considering
these factors as the main drivers of residential choice, a Fuzzy Cognitive Map model for
housing was designed. Fuzzy cognitive maps are qualitative models of a system, designed
through a participative process.
Once the variables of the housing model were identified, the next step consisted in defining
the causal relations between them. For visualizing and analyzing the model two software
tools were used: FCMapper[*] and Pajek[**]. Using FCMapper, the steady state of the
system was calculated starting from the weighted matrix of the model. In order to see the
impact a minor change of one factor could have over the other variables, the suggested
FCM model was used to run two scenarios.

1 PhD, Associate Professor, School of Computer Science for Business Management, Romanian-American
University; e-mail: crisan.daniela.alexandra@profesor.rau.ro
2 PhD, Lecturer, School of Computer Science for Business Management, Romanian-American University;
e-mail: lavinia.stanica@gmail.com
The advantage of FCMs is that they are very intuitive and, therefore, they can be used by all
citizens (even those with little experience in the domain) to participate in and understand the
decision-making process in a certain domain.
The work presented in this article is a result of research made in the “Future Policy
Modeling – FUPOL” project, through FP7 framework.
2. FUZZY COGNITIVE MAPS
Overview
Fuzzy Cognitive Maps (FCMs) are qualitative models of a system, consisting of variables
(or concepts) and the causal relationships between those variables. The variables could be
the actors, events, facts, or other entities which influence the described system. The causal
relationships are floating-point numbers between -1 and 1, reflecting the relative strength of the
influence between two variables. The cognitive maps can be graphically represented by a weighted graph,
where the nodes are the variables, while the arcs are the causal relationships. The direction
of an arc is decided by the direction of the corresponding causal relationship. The stronger
the causal relation, the greater (in absolute value, closer to 1) the weight of the corresponding arc;
the weaker the causal relation, the smaller (towards 0) the weight of the corresponding
arc.
Modeling systems using FCMs presents several advantages. FCMs lend themselves to a
participative process; they reflect the vision shared by politicians, officers, key partners,
stakeholders and communities. They are very intuitive, easy to understand and use by non-
specialists in the targeted domain.
FCMs can be used to simulate the evolution of the system in time. Given an initial state of
the system, an FCM can evolve over time until it reaches a state of equilibrium (the steady
state). This steady state can be used to make predictions or to test different scenarios: fine
modifications of one or several factors in the system will yield different behaviors in
time.
Formally, a FCM is a set of valued concepts and causal relationships:
{A,C,W}, where:
 N is the number of concepts;
 A=(Ai) i = 1, . . . , N is the set of concepts;
 C=(Ci∈ [0, 1]) i = 1, . . . , N is the set of values for the concepts;
 W=(Wij∈ [-1, 1]) i,j = 1, . . . , N is the set of weights (Wij is the causal
link/relationship between the concepts Ai and Aj).

The life-cycle of a Fuzzy Cognitive Map


The life-cycle of a FCM consists of three significant stages:
 (1) Designing the Fuzzy Cognitive Map
 (2) Running the Fuzzy Cognitive Map
 (3) Simulating scenarios

(1) Designing a Fuzzy Cognitive Map


There are several ways of designing the FCM:
 from questionnaires;
 through interviews conducted by a trained moderator (a specialist in FCMs);
 by data-mining, through extraction from written texts (possibly from social-media).
The causal relationship between factors is usually defined following these steps:
 a direct or inverse relation between two factors is identified; it is marked with an
arrow pointing in the direction of the relation;
 then, the strength of the relation is described, using linguistic (qualitative) weights,
such as: “strong”, “weak”, “lack”, “low”, “medium”, “high”, etc.;
 the linguistic weights are transformed into fuzzy sets (Table 1).
Fuzzy weights   Linguistic weights (How strong is the relationship?)
1.00            strongest
0.80            very strong
0.70            strong
0.60            moderately strong
0.50            weak
0.40            very weak
0.20            weakest
0.00            lack
Table 1. Transformation of the linguistic weights into fuzzy weights

(2) Running a fuzzy cognitive map


As mentioned above, one of the advantages of FCMs is that they can be run until they reach
a steady state, which can be used for predictions and simulations, in order to obtain a better
understanding of the process by the people involved, not necessarily specialists.
Running an FCM is a process similar to training a neural network. In fact, the similarity is
striking: the FCM consists of a set of nodes/concepts with initial values (time t = 0), and the nodes
are linked between them with weighted arrows. We can compute a node’s output with a
process similar to activating a neuron: if C1, C2, … , Cn are the values of all concepts
that influence the focused factor with the weights W1, W2, …, Wn, then the output of the
focused node is obtained by summing the weighted factors and applying a non-linear
transformation (a function F, named the activation function) to this sum:
This summing operation is “the activation of a concept node”. It involves
multiplying each input causal influence Ci arriving from another concept node i with the
weight or strength Wi of the corresponding causal link:

$net = \sum_{i=1..n} in_i \cdot w_i$

$out = F(net) = F\left( \sum_{i=1..n} in_i \cdot w_i \right)$
The role of the activation function is to limit the amplitude of the output of a node to the
interval [0, 1]. Literature specifies many forms of the activation functions, but the most
commonly used activation function is the sigmoid function.
At each step, the value of a concept is influenced by the values of the concept-nodes connected
to it, until the system converges to a point and no further changes take place.
So, running an FCM means iterating the FCM (repeatedly computing the values of the
concepts) until the system converges to a steady state. Therefore, an FCM can simulate the
system’s evolution over time in order to predict its future behavior.
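As an illustration of this iteration (our own minimal sketch, not FCMapper's implementation), the Python function below applies the sigmoid-activated update to a concept vector until the values stop changing; a scenario can then be simulated by clamping one driver concept to a new value during the run. The three-concept weight matrix is purely illustrative.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def run_fcm(W, c0, clamped=None, max_iter=1000, eps=1e-6):
    """Iterate a fuzzy cognitive map until it reaches its steady state.

    W       : (N, N) weight matrix, W[i, j] = influence of concept i on concept j
    c0      : (N,) initial concept values in [0, 1]
    clamped : optional {index: value} of concepts held fixed (scenario analysis)
    """
    c = np.array(c0, dtype=float)
    for _ in range(max_iter):
        new_c = sigmoid(c @ W)                 # each concept sums its weighted inputs
        if clamped:
            for i, v in clamped.items():       # keep the scenario driver at its new value
                new_c[i] = v
        if np.max(np.abs(new_c - c)) < eps:    # convergence: the steady state
            return new_c
        c = new_c
    return c

# Illustrative 3-concept map (weights are not the ones from the housing model)
W = np.array([[0.0,  0.6, 0.0],
              [0.0,  0.0, 0.4],
              [-0.5, 0.0, 0.0]])
steady   = run_fcm(W, c0=[0.5, 0.5, 0.5])
scenario = run_fcm(W, c0=[0.5, 0.5, 0.5], clamped={0: 0.8})
print(scenario - steady)                       # per-concept change, as in the scenario tables
```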
(3) Simulating scenarios
Simulating scenarios is, in fact, the most interesting feature of FCMs. Simulating scenarios
helps the decision maker to “predict the future”, and also explains to the beneficiaries
(households and so on) why some decisions have been taken. In scenario analysis, FCMs
indicate the direction in which the system will move given certain changes in the driving
variables. The scenarios do not offer any indication about the time axis (they are not able to
predict the moment when something will happen), but they give an idea of the magnitude
of the system fluctuations after a disturbance.
3. A FUZZY COGNITIVE MAP MODEL FOR HOUSING ISSUE
Developing the FCM model
In this section, a Fuzzy Cognitive Map for the Housing domain is proposed. It is based on
some features that characterize the quality of housing through social, demographic,
economic, and facilities factors.
The model comprises ten factors grouped in three categories:
1) The housing relevant factors:
 Owned housing
 Rental cost
 Property cost
 Available land
2) The socio-economic and demographic:
 Population
 Schools
 Employment
3) Accessibility to facilities:
 Secondary facilities
 Recreation facilities
 Service facilities
For the factors included in the model the following causal relationships between them have
been identified:
 Property cost has negative influence over Owned housing, Secondary facilities and
Service facilities
 Owned housing has negative influence over Available land
 Available land has negative influence over Property cost
 Population has positive influence over Owned housing, Rental cost, Recreation
facilities, Service facilities and Schools
 Rental cost has positive influence over Owned housing
 Secondary facilities, Recreation facilities and Service facilities have positive
influence over Employment and negative influence over Available land
 Schools has positive influence over Employment
Based on the identified causal relationships the weighted matrix associated to the model
was defined:
The rows give the influencing factor and the columns the influenced factor, both in the same order:
1 = Secondary facilities, 2 = Employment, 3 = Property cost, 4 = Owned housing, 5 = Available land,
6 = Population, 7 = Rental cost, 8 = Recreation facilities, 9 = Service facilities, 10 = Schools.

                          1      2      3      4      5      6      7      8      9     10
Secondary facilities    0.00   0.60   0.00   0.00  -0.80   0.00   0.00   0.00   0.00   0.00
Employment              0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00
Property cost          -0.40   0.00   0.00  -0.80   0.00   0.00   0.00   0.00  -0.40   0.00
Owned housing           0.00   0.00   0.00   0.00  -0.50   0.00   0.00   0.00   0.00   0.00
Available land          0.00   0.00  -0.40   0.00   0.00   0.00   0.00   0.00   0.00   0.00
Population              0.00   0.00   0.00   0.30   0.00   0.00   0.20   0.40   0.20   0.80
Rental cost             0.00   0.00   0.00   0.40   0.00   0.00   0.00   0.00   0.00   0.00
Recreation facilities   0.00   0.70   0.00   0.00  -0.20   0.00   0.00   0.00   0.00   0.00
Service facilities      0.00   0.80   0.00   0.00  -0.40   0.00   0.00   0.00   0.00   0.00
Schools                 0.00   0.30   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00
Table 2. The weighted matrix
The visual representation of the FCM model
For a better visualization of the FCM model, two freely available software programs have
been chosen:
 FCMapper[*] – a software tool for analyzing fuzzy cognitive maps, which can be
used to automatically calculate the indices associated with the factors, to perform the
dynamic simulations on the given model, and to transform the FCM model (coded
in matrix format) into a file that can be further used in other network-analysis
software tools.
 Pajek[**] – a software package for the visualization and analysis of large networks,
which provides a set of tools for analyzing such networks.
The digraph associated to the model was drawn using Pajek. The software tool provides six
types of objects that can be used as input for network analysis: networks, partitions,
permutations, clusters, hierarchies, and vectors. The networks are the main objects (vertices
and lines) used for drawing the graph. A network can be given in an input file in different
ways: using arcs/edges, using arcs-lists/edges-lists, or in matrix format. The input file can
contain additional information necessary for drawing the graph such as the coordinates of
each vertex or the color of the arcs.

Figure 1. The network file (b) imported in Pajek (a)
The input network file for the current model was created using FCMapper. Starting from
the adjacencies matrix, after defining the sets of vertices and arcs, the net file of the model
was generated (Figure 1 (b)). The network file was then imported in Pajek and the
associated digraph was drawn.
Figure 2. The digraph of the FCM model


In the above digraph, the causal relations between factors are represented using directed
links between the corresponding vertices. A positive causal relation between two factors
(drawn as a solid line) shows that increasing the first factor leads to an increase in the second
factor. A negative relation (drawn as a dotted line) means that increasing the first factor
generates a decrease in the other factor.
Analysis of the FCM model
Given an initial state of the system, represented by a set of values of its constituent concepts
(which are arbitrary, with no particular meaning, like the initial values of the weights in a neural
network), an FCM can evolve over time until it reaches a state of equilibrium, the steady
state (Table 3 (b)). This steady state can be used to make predictions or to build different scenarios.
Fine modifications of one or several factors in the equilibrium state will yield different
behaviors of the system. Two scenarios were imagined in our model: first, we increased the
Available land (from 0.28 to 0.5, factor 5, Table 3 (c)); second, we increased the
Population factor (from 0.5 to 0.7, factor 6, Table 3 (e)).
No  Factors/Concepts (a)    Steady state (b)  Scenario #1 (c)  Steady state for Scenario #1 (d)  Scenario #2 (e)  Steady state for Scenario #2 (f)
1   Secondary facilities    0.45              0.45             0.46                              0.45             0.45297123
2   Employment              0.7718162         0.7718162        0.7723359                         0.7718162        0.7775885
3   Property cost           0.4713597         0.4713597        0.450166                          0.4713597        0.471682
4   Owned housing           0.4957261         0.4957261        0.4999647                         0.4957261        0.51165578
5   Available land          0.2867168         0.5              0.5                               0.2867168        0.28348341
6   Population              0.5               0.5              0.5                               0.7              0.7
7   Rental cost             0.5249792         0.5249792        0.5249792                         0.5249792        0.53494295
8   Recreation facilities   0.549834          0.549834         0.549834                          0.549834         0.56954622
9   Service facilities      0.477878          0.477878         0.4799941                         0.477878         0.4878342
10  Schools                 0.598688          0.598688         0.5986877                         0.598688         0.63645254
Table 3. The steady state for the FCM in Housing domain and two possible Scenarios
Analyzing the results from Scenario 1, we can draw the following conclusion:
A change in the value of the Available land variable will have:
 medium positive influence on the Secondary facilities, Owned housing and Service
facilities
 weak positive influence on the Employment
 strong negative impact on the Property cost

Comparing the scenario with the steady state, we have obtained the following results:
% of Variables Changed: 45.45

Variable                Scenario 1 - Steady state   Overall change
Secondary facilities     0.002101471                medium pos. change
Employment               0.000519718                low pos. change
Property cost           -0.021193707                high neg. change
Owned housing            0.004238637                medium pos. change
Available land           0.213283241
Population               0                          no change
Rental cost              0                          no change
Recreation facilities    0                          no change
Service facilities       0.002115606                medium pos. change
Schools                  0                          no change

Positive changes (strength): Secondary facilities (medium change), Employment (weak change),
Owned housing (medium change), Service facilities (medium change)
Negative changes (strength): Property cost (strong change)
Table 4. Analysis for Scenario 1


For the second scenario we have reached the following conclusion:
A change of the Population variable will lead to
 strong positive change in Owned housing, Recreation facilities and Schools
 medium positive influence on the Employment, Rental cost and Service facilities
 weak positive influence on the Property cost
 medium negative impact on the Available land
 weak negative impact on the Secondary facilities
% of Variables Changed: 81.82

Variable                Scenario 2 - Steady state   Overall change
Secondary facilities    -0.000032                   very low neg. change
Employment               0.005772                   medium pos. change
Property cost            0.000322                   low pos. change
Owned housing            0.015930                   high pos. change
Available land          -0.003233                   medium neg. change
Population               0.200000
Rental cost              0.009964                   medium pos. change
Recreation facilities    0.019712                   high pos. change
Service facilities       0.009956                   medium pos. change
Schools                  0.037765                   high pos. change

Positive changes (strength): Employment (medium change), Property cost (weak change),
Owned housing (strong change), Rental cost (medium change), Recreation facilities (strong change),
Service facilities (medium change), Schools (strong change)
Negative changes (strength): Secondary facilities (very weak change), Available land (medium change)
Table 5. Analysis for Scenario 2
4. CONCLUSION
Residential location choice is a key determinant of activity-travel behavior and yet, little is
known about the underlying reasons why people choose to move, or not move, residences.
Such understanding is critical to being able to model residential location choices over time,
and design built environments that people find appealing. This approach attempts to fill this
gap by developing an FCM that models the decision of a household to move or not move
and highlights the reason(s) that led to this behavior.
The model described here has identified the main factors that influence the housing
decisions and the causal relations between them. Starting from the calculated equilibrium
state, two scenarios have been analyzed, so as to see how a minor variation in one variable
can influence the other factors and, therefore, the state of the system. These scenarios can
be used to make predictions of how the system will react to changes.
5. REFERENCES
 Allen C.A., Mugisa E.K., Improving Learning Object Reuse Through OOD: A Theory of
Learning Objects, Journal of Object Technology, vol. 9, no. 6, pp. 51–75, ISSN 1660-1769,
2010
 FUPOL Deliverable 2.2, 2012
 Haase D., Nina Schwarz, Simulation Models on Human–Nature Interactions in Urban
Landscapes: A Review Including Spatial Economics, System Dynamics, Cellular Automata
and Agent-based Approaches , 2009
 Hobbs B.F., Ludsin S.A., Knight R.L., Ryan P.A., Biberhofer J., Ciborowski J.J.H., Fuzzy
cognitive mapping as a tool to define management objectives for complex ecosystems, Ecol.
Appl. 12, 1548–1565, 2002.
 Johnson M. P, H. Heinz J., Economic and Statistical Models for Affordable Housing Policy
Design, Economic Analysis for Flexible Models, 2007
 Khan M. S., Quaddus, M., Group Decision Support Using Fuzzy Cognitive Maps for
Causal Reasoning, Group Decision and Negotiation 13: 463–480, 2004.
 Özesmi U., Conservation strategies for sustainable resource use in the Kizilirmak Delta in
Turkey. Ph.D. dissertation, University of Minnesota, St. Paul, 230 pp. 1999.
 Özesmi U., Modeling ecosystems from local perspectives: fuzzy cognitive maps of the
Kizilirmak Delta wetlands in Turkey. In: Proceedings of 1999 World Conference on Natural
Resource Modelling, Halifax, NS, Canada, 1999b.
 Özesmi U., Özesmi S.L., Ecological models based on people’s knowledge: a multi-step
fuzzy cognitive mapping approach, Ecological Modelling 176, pg. 43–64, 2003
 Schneider M., Shnaider E., Kandel A., Chew G., Automatic construction of FCMs. Fuzzy
Sets, Syst. 93, pp. 161–172, 1998

 [*] http://www.fcmappers.net/
 [**] http://pajek.imfm.si/doku.php?id=pajek
POSSIBILITIES OF DYNAMIC SYSTEMS SIMULATION

Cristina Coculescu1
ABSTRACT
Modeling dynamic systems can be done using several instruments and techniques,
simulation being among them. Simulating the operation of a system allows the evaluation of
how it will evolve under certain conditions or under its management using a specific
set of rules. In many cases, simulation is the only possible way of making such
evaluations. In this work we present general considerations about the possibilities of
simulating dynamic systems using dedicated simulation programs.

Keywords: dynamic system, numerical integration, simulation


JEL code: C6, C88
1. BASIC ASPECTS ABOUT DYNAMIC SYSTEMS
An important step for scientific knowledge is passing from a system or real process to the
corresponding mathematical model and hence to a physical one. This is possible also
because in nature there are several real physical phenomena which are described by the
same kinds of equations, even though the functions and variables within that mathematical structure
have different meanings. In other words, the unity of nature appears in an amazing similarity
of the differential equations describing different kinds of phenomena.
A dynamic system is one which evolves in time. If the set of times at which the system
evolves is a subset of:
 the set of integer numbers ℤ, then the system is called a discrete-time system (discrete
system);
 the set of real numbers ℝ, then the system has continuous time (continuous system).
In dynamic systems modeling, particular rules of mass and energy conservation are used,
mathematically expressed as balance equations, which are either ordinary differential equations,
where the derivation variable is time, t, or equations with partial derivatives, where there is at least one
derivation variable besides time (a space coordinate, for example).
The mathematical model of a dynamic system is the set of differential or integro-differential
equations which describe the system behavior under the action of the input values.
Because the differential equations which describe real-world processes are often non-linear and of
higher order, finding an analytical (exact) solution of the Cauchy problem is difficult.
The alternative is finding an approximate solution, that is, using a numerical algorithm.
Therefore, the solution found using a numerical method is a sequence of approximations of the
values of the exact solution, computed at time steps, usually equally spaced.

1Ph.D., Associate Professor, Romanian-American University, 1B Expozitiei Bd, Sector 1, Bucharest, E-mail:
cristina_coculescu@yahoo.com
In the study of complex systems, when finding an analytical solution is impossible and direct
experimentation on the real system is, for one reason or another, not operational, simulation techniques
are normally used.
Computer simulation of a system provides the method needed for the study of its behavior. It offers the
opportunity to correlate all the factors and to clarify the way they interact, being a
method which makes it possible to produce forecasts about the dynamic aspects of the simulated
system or process.
In this context, the term “simulation” is associated with the time evolution of a dynamic system
described using an adequate model, when a known input signal is applied to the system;
in other words, we consider the problem of finding the system output, also named the response to
a certain known input.
2. NUMERICAL INTEGRATION OF DIFFERENTIAL EQUATIONS.
Theoretical Basics
In this section, we present the main theoretical results which govern the numerical methods for
solving differential equations and systems of differential equations.
Let be differential equation:
𝑦 ′ = 𝑓(𝑥, 𝑦) (1)
with initial condition:
𝑦(𝑥0 ) = 𝑦0 (2)
The problem of finding the function 𝑦 = 𝑦(𝑥) that identically satisfies equation (1) and
condition (2) is named the Cauchy problem or the problem with initial conditions.
Differential equations are:
 ordinary (ODE) – where the derivation variable is the time, t;
 with partial derivatives (PDE) – where there is at least one derivation variable besides
time (a space coordinate, for example).

For many ordinary differential equations, classical integration methods are not applicable or
demand a large amount of computation. Therefore, approximate methods are needed, which are
of two kinds: analytical, which give the approximate solution as analytical expressions, and
numerical, in which the solution is given as a sequence of values, starting from an initial value
of the solution.
Among the analytical methods, we mention the step-by-step approximation method and the
method of finding the solution as a power series.
Numerically solving a Cauchy problem consists of finding a set of points 𝑦1 , 𝑦2 , … , 𝑦𝑛
which approximate the exact values 𝑦(𝑥1 ), 𝑦(𝑥2 ), … , 𝑦(𝑥𝑛 ) of the integral curve that passes
through the initial point (𝑥0 , 𝑦0 ).
Let us consider the integration step:
ℎ = 𝑥𝑖+1 − 𝑥𝑖 , 𝑖 = 0,1,2, … , 𝑛 − 1 (3)
Numerical methods for the integration of ordinary differential equations are divided into two
classes:
 methods with separate steps: for computing the ordinate 𝑦𝑖+1 , they need knowledge of
the coordinates of the prior point (𝑥𝑖 , 𝑦𝑖 ) and of the size of the integration step. Many of them
use the Taylor series development of the function 𝑦 = 𝑦(𝑥) around a point.
 methods with linked steps: for computing the ordinate 𝑦𝑖+1 , they demand knowledge of
the integration step h and of several prior points (𝑥𝑖 , 𝑦𝑖 ), (𝑥𝑖−1 , 𝑦𝑖−1 ), ... The
values 𝑦𝑖 , 𝑦𝑖−1 , ... are generally established using numerical methods with
separate steps.
Both kinds of methods can use either explicit or implicit algorithms for computing the
approximate values 𝑦1 , 𝑦2 , … , 𝑦𝑛 .
For a separate step method, an explicit algorithm is of the kind:

$y_{i+1} = y_i + h\,\varphi(x_i, y_i, h), \quad i = 0, 1, 2, \ldots, n-1 \qquad (4)$

and an implicit one:

$y_{i+1}^{c} = y_i + h\,\varphi(x_i, y_i, x_{i+1}, y_{i+1}^{P}, h), \quad i = 0, 1, 2, \ldots, n-1 \qquad (5)$

The approximation $y_{i+1}^{P}$ which appears in the right-hand side is named the prediction
value and it is computed by an explicit algorithm, while $y_{i+1}^{c}$ is called the corrected value of the
ordinate $y_{i+1}$. Similarly, we can build such explicit and implicit algorithms for linked-step
numerical methods.
Among the separate-step numerical methods we mention: the Taylor series development
method, the Euler method (the polygonal lines method), the Runge-Kutta methods and the improved Euler
method; among the linked-step ones: Adams-Moulton, Milne, etc.
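To make the separate-step idea concrete, here is a short Python sketch (our own didactic example, not library code) of the explicit Euler scheme (4) together with one classical fourth-order Runge-Kutta step, applied to the Cauchy problem y' = -2y, y(0) = 1.

```python
import numpy as np

def euler(f, x0, y0, h, n):
    """Explicit Euler: y_{i+1} = y_i + h * f(x_i, y_i)."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * f(x0, y0)
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return np.array(xs), np.array(ys)

def rk4_step(f, x, y, h):
    """One classical Runge-Kutta step of order 4."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda x, y: -2.0 * y                              # right-hand side of the test ODE
xs, ys = euler(f, 0.0, 1.0, h=0.1, n=20)
print(abs(ys[-1] - np.exp(-2.0 * xs[-1])))             # Euler error at x = 2
print(abs(rk4_step(f, 0.0, 1.0, 0.1) - np.exp(-0.2)))  # RK4 error after one step
```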
The numerical methods used for ordinary differential equations can be easily
extended to systems of ordinary differential equations.
Dynamic systems which have more than one independent variable (time, space coordinates)
are characterized by differential equations with partial derivatives (PDDE). The simulation of this
kind of system therefore consists of the numerical integration of the corresponding PDDE.
The numerical integration of PDDE relies on approximating the derivatives using finite differences,
which means the discretization of the ranges of the independent variables. Clearly, the numerical
integration will be more exact as the integration steps become smaller.
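As a minimal illustration of this discretization (with an example problem of our own choosing), the sketch below approximates the second space derivative of the heat equation u_t = u_xx by central finite differences on a grid and advances the solution with an explicit time step.

```python
import numpy as np

# Explicit finite-difference scheme for u_t = u_xx on [0, 1], with u = 0 at both ends
nx, nt = 51, 500
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2                      # explicit scheme is stable only for dt <= dx^2 / 2
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                 # initial condition

for _ in range(nt):
    # central difference: u_xx ~= (u[i-1] - 2*u[i] + u[i+1]) / dx^2
    u_xx = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2
    u[1:-1] += dt * u_xx              # interior update; boundary values stay at 0
```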
3. METHODS OF DYNAMIC SYSTEM SIMULATION IN MATLAB
Matlab is a high-performance program suite for numerical computing and graphic
representations in science and engineering. Developed over several years, Matlab is now
a standard in the academic environment and also in research and in the practical solving of problems of
experimental data processing, statistical control, signal processing, system identification, etc.
To simulate a dynamic system is the same thing as solving (integrating) the differential equation
which describes it, with known initial conditions, which represent the initial state of the
system.
The Matlab integrated environment provides a set of specialized functions for the numerical integration
of ODEs, which implement methods ranging from the simplest direct methods to multistep ones,
improved by sophisticated algorithms that vary the integration step in order to reduce the approximation
error. The names of these functions begin with the character string ode (ordinary
differential equations).
As far as the implementation of numerical integration methods for PDDE is concerned, Matlab
has no dedicated functions, because of the richness and variety of such methods.
Mainly, the final objective of every numerical method is to establish as accurate an
approximate solution of the problem as possible, using minimal memory and a minimal amount of computing
operations. The difference between the exact and the approximate value (computed using a
numerical method) of an ordinate is the total error of the computing step. The total error per step
consists of approximation errors (due to the numerical method), rounding errors (due to the limited
number of digits) and propagation errors, which appear because of the errors of prior steps.
The approximation error depends on the numerical method used and, more exactly, on the integration step h;
therefore, decreasing the approximation error is possible if the step h is correspondingly
decreased.
The rounding error depends on the computing facilities (how many significant digits we can use). It
can be reduced if double-precision mode is used. However, the rounding error increases
with the number of steps, because errors propagate from one step to another.
Accordingly, if the integration step is decreased in order to reduce the approximation error, the number
of steps will increase and, in consequence, the rounding error will be bigger. Therefore, there is
an optimal value of the integration step, h, for which the total step error is minimal.
The way in which rounding errors modify the precision of the computed values is clarified by the numerical
stability analysis of the integration methods. This consists of studying whether small variations in
the initial conditions give small variations in the found solution. If this happens, we can consider
the method to be numerically stable.
In certain cases, an integration step small enough to ensure the desired precision can be much bigger
than the one that would be necessary to ensure the stability of the method. Such ODEs are named stiff, as far
as numerical integration is concerned. In a systemic context, stiffness appears for ODEs that
model phenomena with several very different time scales.
The implicit Euler method is extremely simple but demands much smaller integration steps than
those used by other methods to reach the same precision. Hence, it is practically used
for systems of small complexity. However, it is worth mentioning that its use with a constant,
properly chosen step can be a good solution for simulating systems having stiff behavior.
Runge-Kutta methods are suitable for large categories of dynamic systems characterized by a
(linear or non-linear) balanced dynamic (without stiff behavior tendencies).
Adams predictor-corrector methods are recommended for simulating systems with a mild
or moderate stiff behavior.
The Bulirsch-Stoer algorithm can replace, sometimes with better results, the Adams predictor-
corrector algorithms.
Gear-type methods are the only ones which can give correct results when simulating systems having
a strongly stiff behavior. Their use is recommended only in such cases; for other kinds of
problems they are inefficient.
Consequently, the speed and precision with which the Cauchy problem attached to a
system of differential equations is solved depend on the integration method used, on the size of the
integration step and on the maximum error accepted by the user. Monitoring the errors and adapting
the integration step will avoid numerical instability, but the integration step needed for keeping
numerical stability may be too small for computing a solution that is otherwise very easy to approximate. In
conclusion, the integration step results from a compromise between the stability of the method and the
required precision.
As far as implementation is concerned, a stiff ODE can be solved faster using a dedicated
code than a standard one. This is the reason why the ode functions of the Matlab
library are classified into functions for stiff and for non-stiff ODEs.
Like most functions of the Matlab library, the ode functions can have a variable number of
input arguments. Whatever the case, the minimal (and most used) calling shape is:
[t,y] = odexyz('f', tspan, y0)
where odexyz is one of the ode functions.
The first input argument shows what must be solved (the right-hand side function of the ODE).
The second one specifies the integration interval (the simulation time span). The third input argument
gives the conditions for solving the given ODE: the vector y0, of dimension equal to the order of the
given ODE, contains the starting conditions.
The first output argument, t, is the vector of time values at which the step-by-step computation
has been performed (in the case of simulating the system response).
The second output argument, y, is a matrix having as many rows as t has elements, that is, the number
of integration steps, while the number of columns is equal to the order of the ODE to be solved.
The default integration parameters can be modified using additional input
arguments.
The main Matlab functions for ODE integration are:
 for standard equations (for non-stiff ODEs): ode23, ode45, ode113;
 for stiff ODEs: ode15s, ode23s, ode23t, ode23tb.
The functions ode23 and ode45 use a variable integration step, by pairing explicit Runge-Kutta (RK)
formulas of orders 2 and 3, respectively of orders 4 and 5, the precision of the obtained solutions
being sufficient for a large class of
applications.
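For comparison only: readers working outside Matlab can reproduce the same calling pattern with SciPy's solve_ivp in Python, which likewise takes a right-hand-side function, an integration interval and the initial conditions, and returns the time grid together with the solution. The test system below is only an illustration; it is not part of the Matlab toolset described above.

```python
from scipy.integrate import solve_ivp

def f(t, y):
    # Van der Pol oscillator written as a first-order system
    return [y[1], (1.0 - y[0] ** 2) * y[1] - y[0]]

sol = solve_ivp(f, t_span=(0.0, 20.0), y0=[2.0, 0.0], method="RK45", rtol=1e-6)
t, y = sol.t, sol.y   # y has one row per state variable, one column per time step
```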
The ODE functions are the basis for using Simulink (part of the Matlab integrated environment,
composed of functional blocks which implement the concept of a model for a dynamic
system) and its visual libraries in dynamic system simulation.
Simulink is a useful interactive environment for the modeling, analysis and simulation of a large
number of mathematical and physical systems. An optional extension of the Matlab program kit,
Simulink provides a graphical user interface for building dynamic system models represented
as block schemes.
The dynamic simulation of the studied system relies on knowing the system of differential equations,
on building the block scheme and, of course, on using a numerical integration method.
Simulation results can be displayed both graphically and as numerical tables.
Using the S-Functions library blocks, user blocks can be made, which are then integrated into existing
schemes realized by means of the blocks of the standard Simulink library.
4. CONCLUSIONS
Simulating the operation of a system allows the evaluation of how it will evolve under
certain conditions or under its management using a specific set of rules. In many cases,
simulation is the only possible way of making such evaluations.
For solving dynamic models, simulation programs need to be fast, easy to use and extensible. In
most cases, these rely on the practice and developments of the Fortran programming language,
which has sturdy numerical algorithms and a well-developed program library. However, in recent
years, the facilities offered by Windows, especially the graphical ones, have become superior, so that specific
programs with better graphic facilities have been developed.
The simulation of Simulink models demands the numerical integration of systems of differential
equations. Simulink provides a set of integration algorithms for simulating such equations.
Unfortunately, because of the variety of the dynamic behavior of a system, none of these methods
warrants an efficient and exact simulation of any kind of model.
The results of the simulations, as far as speed and precision are concerned, vary depending on the
model and on the initial conditions. The right choice of the integration method and the careful setting of the
simulation parameters are necessary conditions for obtaining exact results.
5. REFERENCES
[1] Arrowsmith, D. K., Place, C. M., An Introduction to Dynamical Systems, Cambridge University
Press, 1990;
[2] Băzăvan, P., Algoritmi Numerici în Studiul Sistemelor Dinamice, Editura Sitech, Craiova, 2005;
[3] Demidovitch, B, Maron, I., Éléments de Calcul Numérique, Mir, Moscow, 1973;
[4] Ebâncă, D., Metode de Calcul Numeric, Editura Sitech, Craiova, 1994;
[5] Stuart, A. M., Humphries, R., Dynamical Systems and Numerical Analysis, Cambridge
University Press, 1996;
[6] ***, SIMULINK, Dynamic System Simulation Software, User's Guide, The MathWorks Inc.
HTML5 – AUGMENTED REALITY, A NEW ALLIANCE AGAINST THE OLD WEB EMPIRE?

Alexandru Tăbușcă 1
ABSTRACT
The current paper presents the 2013 realities of the Augmented Reality paradigm in the context of the use of modern web technologies. The article starts with an educated, presentation-style summary of the presently recognized worldwide standards for Augmented Reality (AR) applications. I will not only present but also compare and comment on the current problems, advantages, development directions and guidelines for AR web applications. The focus of this article is on the development and use of AR web applications, especially for mobile devices such as smartphones, tablets, special form-factor laptops or ultrabooks.

Keywords: html5, augmented reality, canvas, video stream, web extras, qr code
1. INTRODUCTION
One of the hottest subjects in IT trends today is represented by "augmented reality" applications. The most talked about project of the IT giant Google, the iconic Google Glass, is of course the spearhead of this news, but by far not the only gadget based on augmented reality applications for introducing a new paradigm in IT consumer products and services. This trend comes as a normal step forward - today it is almost unthinkable that some previous generations were able to cope without the use of computers in their lives. Today, almost every aspect of our life has something to do with a computer. We do not only rely on computers; even the internet became something that we think of as a normal thing, like running water or electricity. Some years ago internet access was even established as a citizen's right in Finland, guaranteed by law [1]. Given these facts, we really need something new and exciting to drive forward the crowds of developers and users that live in the internet environment. As a consequence, AR seems the best option to light a new spark in the sky of web interactivity.
Arguably, the best and most successful hardware product already on the market in high volumes today is Microsoft's Kinect. The applications and games developed for this revolutionary device already take advantage of face tracking, geo-location and other software features that are available in connection with the augmented reality paradigm. Our children, and not only them, are now able to interact with software in ways that seemed science fiction only several years ago. We can now virtually travel inside the old Egyptian pyramids, admire medieval history recreations, walk on the moon or visit the Louvre museum directly from our home PC or even with the help of our super powerful smartphones. Even more, in front of the Kinect device, we can swipe, touch or push a button in the air, and we get a lot more detailed information available on demand. We can now go from Bucharest to Florence, frame the old Dome inside the sensor of our smartphone and get all the historical information available in any respectable tourist guide; and that is not all – in certain areas we can even get, through the phone's display, an entire image of the room's setup three hundred years ago.

1 Lecturer, PhD, School of Computer Science for Business Management – Romanian-American University; e-mail: alextabusca@rau.ro
The "augmented reality" concept is now known as AR, abbreviated like almost any other term related to the IT environment. The term actually refers not exactly to "augmentation" but more precisely to "duplication". Augmented reality, AR, is described as focusing on the replication of a real-world environment within a computer virtual reality. AR usually composes a new, complex image by mixing real details (such as location, part of the image, weather conditions etc.) with a setup completely generated by the machine, producing a result with "added" information that suggests an "augmented" image – therefore the A in AR. The virtual composite setup created by the computer aims to improve the user's image of a real thing by letting him see more details or even interact at some level with real or virtual items. The final aim of the AR concept would be to recreate an entire ecosystem inside which we cannot differentiate between the virtual and the real elements – the sort of scenario widely presented in the iconic Star Trek television show, tens of years ago, through the so-called "holodeck". The actual uses of these technologies today are quite often related to entertainment and gaming, but we have to remember that a very important role is also played by their insertion within the fields of robotics, micro-manufacturing, military simulation and engineering design.
AR was, at the beginning, based on the implementation of code in high-level programming languages, like C++ or Java. Today a new trend is rising – the use of the HTML family of languages for the development of AR applications. AR can actually be considered the heir of the QR codes. The "Quick Response Codes" can now be regarded as AR beta 1 – they are 2D codes that trigger an event when scanned and read by a specific QR reader application. Usually the QR codes send the user to a website or another web-based resource directly linked with the developer of the QR code. The most distinctive part separating AR and QR is actually the destination that the user is pointed to; the QR code lets you see the real thing (an object, a catalogue page, a poster etc.) and, if you interpret it through a dedicated application, you are sent to another location for more information related to the topic. Switching to AR, when you interpret the AR code through its software application you do not get sent away to another location – you actually receive the new detailed information and see it as a layer over the reality layer that you started from. We can say that QR sends you away while AR brings other things over to you.
The QR code's insertion on the market became quite relevant during 2011, mainly because of the very easy way to implement it; just encode the address of a website within a QR code and print or publish it somewhere for the client to see. AR, on the other hand, is not so easily integrated into our mainstream software or printed environments; AR in truth delivers a far more complex and exciting way to actually interact with your potential clients than QR does, but it might not be the right choice for all scenarios.
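As a hedged illustration of how little is needed on the QR side, the sketch below encodes a web address as a QR image using the open-source "qrcode" package for Node.js (a library chosen only for this example, not one named in the article); the URL is a placeholder.

// Hedged sketch: encoding a website address as a QR code with the
// open-source "qrcode" npm package; the URL below is a placeholder.
const QRCode = require('qrcode');

QRCode.toDataURL('https://example.com/catalogue/page-42', { width: 200 }, (err, dataUrl) => {
  if (err) throw err;
  // dataUrl is a PNG data URI that can be embedded in a page or sent to print,
  // for example as an <img src="..."> pointing at dataUrl.
  console.log(dataUrl.substring(0, 40) + '...');
});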
Most printed elements still rely on QR (magazine publishers, consumer packaged goods, printed catalogues, e-commerce websites etc.) to deliver content, information and support for a virtual shopping experience. The number of cases in which a company relies on augmented reality support is very small – this fact, in itself, also bringing an unexpected bonus for those companies: such complex and unique experiences are quite rare and represent something special for a client.
The professionals of the modern advertising industry agree that AR helps create very interactive experiences, more complex and immersive than the mere QR codes, and these are excellent assets for a campaign that aims to dazzle potential clients. However, in the same agreeing tone, the professionals argue that QR is still the more pragmatic solution. QR is able to provide quite a comparable experience with the benefit of being more versatile and much easier to implement.
AR and QR are mainly regarded as two different evolutionary steps on the same ladder – you first go with QR and, when you feel ready, step over to AR. In my opinion the matter cannot be viewed as settled just yet. I actually think both QR and AR codes will coexist for quite some time, both being related and not exactly competitors. While QR is much more suited for pragmatic and time-aware clients, AR is going to have more appeal for enthusiast and gadget-oriented clients. From my personal experience with a world-recognized brand such as IKEA2, the use of the most modern AR coding can become a very frustrating issue. The 2013 IKEA catalog has some items with attached codes that should trigger an AR experience – just scan a certain image with their dedicated application and you can see the coffee table, for example, imposed as a virtual layer on the live image of your living room seen through the smartphone's display… it sounds interesting, but it can actually take quite a lot of time to get it right and in some cases it does not work at all. On the other hand, lots of products have QR codes beside them, and the scan of these images works 99% of the time and sends you to a dedicated webpage with lots of information.
From the point of view of usability and user experience, QR is much more mature than AR, at least at this moment. QR code interpretation requires little, almost no effort on the part of the user in order to access it and offers a great way to see the product "acting" before making a decision to purchase something or not. Also, a big advantage of QR, as I already mentioned before, comes from the seller's perspective – these codes are quite inexpensive and very easy to implement. AR can be quite a challenging experience, with the clients required to hold up a phone in front of their face to find the right position for digital overlays on the physical objects. The production costs of AR can also be much higher, a negative aspect for the seller, but the investment might be worth it if the target client is prone to using a fun and immersive experience.
The retail business is the area where both QR codes and AR have found a good breeding ground. Companies always look to upgrade the clients' experiences beyond the physical aspect and into the electronic environment, in the meantime also bringing an amount of modern, added excitement to their stores.
For the following year, 2014, I think that the largest retailers will continue to spread the use of QR codes as a simple and practical way to engage users and deliver useful content, while AR is likely to continue to be used as a means to drive the users' excitement. I think that a real game-changer for AR can be the Google Glass. Excitement has already been building around this project for a while now, with almost everybody agreeing that the Google Glass project will finally offer the AR experience in a very simple and easy to use way – via the special glasses that users wear.
While the excitement about QR will decrease, partly because the codes are not "news" anymore and partly because they will be regarded simply as a quick short-cut to a digital site, the interest in AR might reach new heights with the public release of the Google Glasses – and surely with the entire plethora of copies and similar products from other companies.

2 IKEA – one of the largest furniture and accessories retailers in the world
2. FACTS
The use of AR is now targeted at mobile devices. Most test scenarios preview the use of AR with the help of smartphones or tablets. While personal computers indeed have much more computing power, they also have a larger range of options for bringing the virtual elements to the screen and, most importantly, these classic computers are quite difficult to imagine in the situations where AR is best suited. It would be quite difficult and completely strange to take the laptop out of your backpack, open it, point its camera towards a mall and start an AR experience while you struggle to keep your laptop in front of you. Even with a smartphone, a phablet or a tablet, the experience can be quite strange – as you might be required to make an entire series of adjustments to the position of the camera in order to obtain the best results.
The best suited technology for implementing AR applications can become, arguably, HTML5. While in truth HTML5 did not, and will not for quite a long time, replace the aging HTML4, it is best suited for the development of this new type of applications. HTML5 might actually be described as a mix of web technologies that matured over time: HTML, CSS and JavaScript. A very good analogy sees HTML5 "anatomically": the HTML represents the bones, the CSS represents the skin and the JavaScript represents the muscles. They were here long before this version 5, but now they seem to have been treated with growth steroids. In fact, I can mention three main reasons why HTML5 might be considered revolutionary rather than just evolutionary, why the concept has the right to be called HTML5 rather than 4.5.
First of all, in HTML5 the good old webpage does not have to look and act like a "classic" webpage. The rise of the Adobe Flash technology over the last years has mainly been just an attempt to overcome the natural limitations of what HTML was able to provide until now. Flash was initially targeted at animating websites and producing special visual effects. Soon after, complete websites emerged that were produced entirely in Adobe Flash, allowing the developers to obtain a different type of navigation and organization of the webpage, more complex access to certain sections of a webpage, and the ability to avoid the use of JavaScript (actually one can say that they replaced JavaScript with ActionScript).
Secondly, webpages no longer represent, in the majority of cases, one person or organization as regards the content. The common webpage today is usually an accessory of somebody's digital world, comprising text content, images, audio or video materials etc.
Lastly, webpages today have to be able to function easily across a lot of display devices. HTML5 embraced this trend and navigates under the banner of the "mobile" concept. Actually, at this moment, smartphones, phablets or tablets should be able to provide the same experience as a computer browser.
HTML5 introduced a new concept of programming interconnection that overcomes the previous limitations. HTML5 and CSS3 provide a very powerful and comprehensive set of utilities, tools and effects that can compete on par with everything that Adobe Flash had to offer. Actually, almost every website that was completely designed in Adobe Flash can now be developed only by implementing HTML5, CSS3 and JavaScript.
The most important advantages of this fact lie in exactly the areas where Adobe Flash was somewhat vulnerable: the user can select the text as in classical HTML, you can bookmark any page directly, and the source code is very accessible again, not wrapped inside a nice .swf package.
All these facts make HTML5-developed websites completely and easily indexable, the audio and video content is now very easily linked intra- or extra-website, and the overall experience of the website is almost the same on your office Dell PC, your iPad tablet or your Samsung smartphone.
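As a small illustration of how easily audio and video content is linked natively in HTML5, a minimal sketch follows (the file names are placeholders):

<!-- Native HTML5 media embedding; the browser picks the first source it can play. -->
<video controls width="480">
  <source src="presentation.webm" type="video/webm">
  <source src="presentation.mp4" type="video/mp4">
  Your browser does not support the HTML5 video element.
</video>

<audio controls>
  <source src="interview.ogg" type="audio/ogg">
  <source src="interview.mp3" type="audio/mpeg">
</audio>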
As I mentioned before, mobile devices can view HTML5 webpages or play audio and video content without special issues. In fact, HTML5 owes a lot of its interest to mobile devices, and in particular to Steve Jobs3 and his active campaign against Adobe Flash. From the beginning of the war against Adobe Flash, Steve Jobs' Apple provided the iPhone with the Safari browser, which had HTML5 support but lacked the Adobe Flash plugins. Today all mobile phones have HTML5 browsers or the capability to install one. On the other hand, I have to mention some facts that seem to be overlooked almost all the time – actually, HTML5 does not offer a special set of tools for working with mobile devices. Indeed, we have the capability to detect the orientation and screen size of a mobile device, but those commands also allow for the detection of the size of a browser window or a tablet display. The canvas element and the new audio and video features make it possible for the developer to deliver content to a mobile device. In truth, HTML5 is not targeted at mobile use – it just works normally on both classical and mobile devices; the developer no longer has to build two versions of the website, one for PCs and one for smartphones.
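A brief sketch of the detection capabilities mentioned above, using standard browser APIs (the same calls work for a desktop browser window and for a mobile display):

// Detecting viewport size and orientation with standard browser APIs.
function describeViewport() {
  const portrait = window.matchMedia('(orientation: portrait)').matches;
  return {
    width: window.innerWidth,        // current viewport width, in CSS pixels
    height: window.innerHeight,      // current viewport height
    orientation: portrait ? 'portrait' : 'landscape'
  };
}

// Re-evaluate whenever the window is resized or the device is rotated.
window.addEventListener('resize', () => console.log(describeViewport()));
console.log(describeViewport());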
As a test case, I will present a very simple application, developed by implementing HTML5 for an AR scenario. The application produces a virtual slideshow based on an AR code that should be read with any camera, either on a smartphone or a webcam.
For creating this test application I chose a quite simple AR scenario: show the camera an AR code and the application will impose a layer with another image on the camera live preview; take the code out of the camera's sight and then bring it back – a new image will be imposed on the live preview. The development is based on the capabilities of the WebRTC4 API and the JSARToolKit5 library and on the implementation described by Ilmari Heikkinen [2].
3 Steve Jobs – founder of Apple Corporation, an innovator and visionary who changed the digital paradigm of the 21st century; among the products that became market anchors and were backed by him we can mention the iPhone and the iPad.
4 WebRTC – a free, open project that enables web browsers with Real-Time Communications (RTC) capabilities via simple JavaScript APIs; information available publicly at http://www.webrtc.org
5 JSARToolKit – a JavaScript port of FLARToolKit, operating on canvas images and video element contents; available publicly at https://github.com/kig/JSARToolKit

To shortly present the schematic algorithm of the application, I can outline the main steps involved (a code sketch follows the list):
 we have to send an HTML5 canvas element to JSARToolKit;
 JSARToolKit returns a list of AR markers found in the image and the corresponding transformation matrices; on top of these markers we will place the AR layers with the images;
 we send the transformation matrix to the 3D library responsible for the image creation;
 in order to analyze the video taken from the camera live preview, we have to place the respective video on a canvas element and send it to JSARToolKit for frame-by-frame analysis.
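A hedged JavaScript sketch of these steps is given below. The class and method names (FLARParam, NyARRgbRaster_Canvas2D, FLARMultiIdMarkerDetector, NyARTransMatResult, detectMarkerLite, getTransformMatrix) follow Heikkinen's tutorial [2] and may differ between JSARToolKit versions; the drawing helper and element ids are hypothetical, so treat this as an outline of the pipeline rather than a drop-in implementation.

// Per-frame analysis loop; JSARToolKit names follow [2] and are indicative only.
const video = document.getElementById('camVideo');      // fed by getUserMedia
const canvas = document.getElementById('arCanvas');     // analysis canvas
const ctx = canvas.getContext('2d');

const raster = new NyARRgbRaster_Canvas2D(canvas);          // raster wrapper over the canvas
const param = new FLARParam(canvas.width, canvas.height);   // camera parameters
const detector = new FLARMultiIdMarkerDetector(param, 120); // 120 = marker size
detector.setContinueMode(true);

const resultMat = new NyARTransMatResult();

function processFrame() {
  // 1. copy the current video frame onto the canvas element
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  canvas.changed = true;                                // flag used by JSARToolKit in [2]

  // 2. ask JSARToolKit for the markers found in this frame (128 = threshold)
  const markerCount = detector.detectMarkerLite(raster, 128);

  for (let i = 0; i < markerCount; i++) {
    // 3. read the transformation matrix of marker i and hand it to the
    //    3D/overlay code that draws the image layer on top of the marker
    detector.getTransformMatrix(i, resultMat);
    drawOverlay(resultMat);                             // hypothetical drawing helper
  }
  requestAnimationFrame(processFrame);                  // analyse the next frame
}
requestAnimationFrame(processFrame);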

In order to make the application work we need to use a WebRTC-capable browser – at the moment of writing this article WebRTC is fully supported by the Google Chrome, Firefox and Opera browsers.
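A short sketch of the camera access step follows; modern browsers expose it through navigator.mediaDevices.getUserMedia, while in 2013 vendor-prefixed variants (e.g. webkitGetUserMedia) were still required, so a production page would need to cover both. The element id is illustrative.

// Checking for WebRTC camera support and starting the live preview.
const video = document.getElementById('camVideo');    // element id is illustrative

if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  navigator.mediaDevices.getUserMedia({ video: true, audio: false })
    .then((stream) => {
      video.srcObject = stream;   // use the webcam stream as the AR background
      video.play();
    })
    .catch((err) => console.error('Camera access was refused:', err));
} else {
  console.warn('This browser does not expose getUserMedia / WebRTC.');
}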
The first step is to download JSARToolKit and save the corresponding .js file inside our working folder, together with the magi.js file that is also required by JSARToolKit. Then, we have to build our main webpage, called "RAU-AR-test.html". Next, we have to select the pictures that we want to impose as the AR layer over the live preview of the camera. We save them in the same folder as the html file and the .js files. Next, we build our AR marker and print it on any piece of paper:

Figure 1. AR code to be printed
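A minimal sketch of what the RAU-AR-test.html page mentioned above might contain is shown below; the script file names and element ids are illustrative assumptions, not the article's actual sources.

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>RAU AR test</title>
</head>
<body>
  <!-- live camera preview and the canvas used for frame-by-frame analysis -->
  <video id="camVideo" autoplay></video>
  <canvas id="arCanvas" width="640" height="480"></canvas>

  <script src="JSARToolKit.min.js"></script>  <!-- the JSARToolKit library -->
  <script src="magi.js"></script>             <!-- helper library required by JSARToolKit -->
  <script src="app.js"></script>              <!-- our camera, detection and overlay code -->
</body>
</html>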


Then, we launch our web page in a WebRTC-enabled browser – my favorite browser is Firefox – and allow it to access our webcam for creating the background of the AR application:
Figure 2. The first imposition of the AR image layer over the marker
Each time we hide the marker from the camera and show it again, the AR application places another image from the batch that we have selected on the marker's position. Actually, each detection of the marker in front of the camera triggers another step in a "for" loop that brings up the images preloaded by the use of a JavaScript array:

Figure 3. Another image imposed over the marker, when the AR code is shown again later to the application inside the html webpage
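A minimal sketch of the image-cycling logic described above is given below; the variable names and file names are illustrative. Each time the marker reappears after having been hidden, the next preloaded image from the array becomes the overlay.

// Preload the slideshow images with a JavaScript array (file names are placeholders).
const overlayImages = ['photo1.jpg', 'photo2.jpg', 'photo3.jpg'].map((src) => {
  const img = new Image();
  img.src = src;
  return img;
});

let imageIndex = 0;
let markerWasVisible = false;

// Called once per analysed frame with the number of markers detected.
function updateOverlay(markerCount) {
  const markerVisible = markerCount > 0;
  if (markerVisible && !markerWasVisible) {
    // the marker has just reappeared: advance to the next image in the batch
    imageIndex = (imageIndex + 1) % overlayImages.length;
  }
  markerWasVisible = markerVisible;
  return overlayImages[imageIndex];   // image to draw over the marker position
}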
3. CONCLUSION
The new alliance between the Augmented Reality environment and HTML5 is the best bet at the moment for most web developers. This statement is fully defensible – if you just check out the sources for the above-mentioned application, you can see that something with great potential for a good user experience can be built with relatively little experience in the field and in a relatively short time frame. Right now, HTML5 is the best mix of web technologies, (mostly) standardized by a set of common specifications; it is relatively easy to learn but complex enough to be able to provide solutions for most scenarios met during a web application development cycle.
I started using HTML5 a couple of years ago and started to poke around AR applications only about a year ago. From my own experience I can say that HTML5 can give you a very enthusiastic start and let you progress very quickly, even stepping over some previously mandatory requirements that a web developer needed to know just a few years back. But I also have to mention that not everything is peachy in this AR-HTML5 alliance – as for any big potential alliance, downturns appear from time to time. HTML5 is not yet fully supported, or even supported in the same manner, on every important browser. I would personally recommend developing by means of the HTML5 approach in the case of a trendy project, with targeted and predictable customers that are tech-enthusiasts. On the other hand, I would warn about the use of this approach for enterprise-grade applications. The implementation of the HTML5 elements is not yet graded as enterprise-level and the cost involved in developing an enterprise-grade web application based on such an immature framework could be quite high. As a direct result, I foresee that at least for the next couple of years AR based on HTML5 will continue to gain popularity but will not replace any other technology already available and mature.
I do not think that a lot of retailers, for example, will scrape the price labels off their shelves and rely only on an AR code to convince a client to buy a product. Online retailing, usually the most modern and tech-savvy environment, is not very likely to drop its hard-won security implementations [3] for the fast integration of AR-only shopping. More importantly, some other support technologies used for AR development are still not fully supported by all our computer-like devices. Today, the statistics show that around 1.4 billion mobile devices are HTML5-enabled and support the WebRTC standard, while this year alone the mobile devices sold on the global market numbered 1.86 billion6.

All in all, after carefully considering all the facts and factors involved, I am sure that any dedicated web applications developer should envisage the field of AR implementations and, in order to make AR work, they can use HTML5 as one of the best tools available. A next step that I see coming in just a couple of years is related to hardware products. I am quite sure that the Google Glass project will be a success from the start – the first iterations of the device will bring wearable devices to the mainstream consumer and will make him comfortable with the idea and the interfaces involved. The next step will be to develop real sci-fi applications that would need a considerable amount of computing power which, for now, cannot be built into a smaller and smaller wearable device. But there already is a solution for this – we just have to link the Google Glasses, or devices of this kind, to a powerful processor that stays idle for a long time while being able to produce amazing numbers in terms of computing power. We can link the computing needs to a specific chipset, inside a PC or a dedicated one, based on the Nvidia CUDA architecture [4]. These CUDA-enabled devices can produce an enormous amount of power that could probably greatly benefit the advanced AR applications of tomorrow.
4. REFERENCES & BIBLIOGRAPHY
1. Tabusca, Silvia Maria - "The Internet Access as a Fundamental Right"; published in
“Journal of Information Systems and Operations Management”, Vol.4. No.2 / 2010, pp 206-
212, ISSN 1843-4711.
2. Heikkinen, Ilmari – “Writing Augmented Reality Applications using JSARToolKit”;
available online at http://www.html5rocks.com/en/tutorials/webgl/jsartoolkit_webrtc/; last
accessed on November 15th, 2013
3. Pirjan, Alexandru – “Electronic commerce security in the context of the means of payment
dematerialization”, published in “Journal of Information Systems & Operations
Management”, Vol.4 No.1 / 2010, pp. 184-194, ISSN 1843-4711
4. Pirjan, Alexandru; Petroșanu, Dana-Mihaela – "Solutions for Optimizing the Radix Sort Algorithm Function using the Compute Unified Device Architecture", published in "Journal of Information Systems & Operations Management", Vol.6 No.2 / 2012, ISSN 1843-4711

6 Source: http://www.ccsinsight.com/press/company-news/1655
CASE STUDY ON HIGHLIGHTING QUALITY CHARACTERISTICS OF MAINTAINABLE WEB APPLICATIONS

Gabriel Eugen Garais1
ABSTRACT
Building websites requires a maintenance-oriented approach to the design and implementation structures at both the logical and the physical level. This feature provides web application maintainability with greater flexibility and significantly increases usability. The life cycle of web applications is extended by adding the complexity of a maintainable structure. The case study presented highlights the basics of implementing a maintainable structure, which represents a quality feature in web application development.

Keywords: web applications, maintenance, maintainable application

The maintainability feature is implemented on three structural levels in the logic of the Infos application case study:
 - presentation level, by implementing the concept of visual themes, which allows adding, changing and interchanging user interfaces, with the provided functions working while the application is running;
 - business level, through the implementation of rules and procedures which reuse data structures between modules with different functionality, providing a high degree of compatibility between components and standardization of the components used in the application;
 - data access level, by implementing classes that mediate and standardize ADOdb database connections in order to optimize processing and source code writing.

The visual themes organize the structured display of interfaces and manage how visual content is distributed, as required by distributed applications. The look of the distributed online implementation is designed as viewing themes. The interfaces, as theme or themes, are independent of the content viewed in the application.
The visual theme aspect of the application interfaces is thereby separated from the content, which is stored entirely in the database and is displayed by various techniques in dedicated areas. Within the levels of organization, the visual themes of distributed applications are placed in the presentation logic.
The technical characteristics of the visual themes implemented in the Infos application are:
 - the coding of the appearance is accomplished using HTML and dynamic DHTML, populated by functions written in the PHP programming language;
 - visual themes contain dedicated content area typologies, such as small block areas for menus, small blocks for various content areas, and large blocks containing details;
 - dynamic generation of interfaces is achieved using Smarty view classes;
 - the programmatic structure of the visual themes is stored in the database; user personalization is limited to database storage only, so the file contents remain unaltered;
 - the structure and distribution of content blocks is modified directly from the administration panel of the application.

1 Lecturer PhD, Romanian-American University, Bucharest, garais.gabriel.eugen@profesor.rau.ro

Using the structure and components of the visual theme interfaces described above gives the Infos application its maintainable character. It allows designing, in parallel with the existing ones, new interfaces that are accessible only to a certain group of people until final validation. In this way the new interfaces are tested, until final validation, on the real system and on the existing, updated data sets of the database.
The defining visual elements composing the themes are blocks of text, references, colours and menus.
Text blocks are characterized by position, size and content. This visual element displays information on topics chosen by the user or set by default by the distribution administrator. There is a standard visual theme, originally constructed, which is then customized by users by activating and placing blocks with the desired content. The blocks contain visible reduce and enlarge elements to ensure user interactivity with the application blocks.
References located inside the visual themes are characterized by colour and appearance. References, called links, are HTML hyperlinks through which the user navigates the hierarchy of sections within the application. Users change the visual theme's custom colours and the look of references to match the rest of the components. The appearance of references is also altered in relation to the length and size of the display. These changes depend on the filling of the visual field with information whose significance is greater than a reference sequence, the latter being limited to a strict number of characters.
Figure 1 presents the defining graphical elements of the visual themes of the Infos application, outlining the position of the blocks, menus, text areas and the colour of references.
Figure 1 – Highlighting text blocks, menus and references (the figure marks the menu, the responsive text blocks and the hyper references)


Figure 2 shows a section of the application in which blocks have had their position swapped with the main area dedicated to full text (the dedicated blocks are modified by changing the position parameter).

Figure 2 – Highlighting text blocks with changed position


The colour element employs the colour palettes used in displaying the visual themes. Certain combinations of colours in the palettes must be checked when they are applied to text and backgrounds. Displaying blue text over a dark blue background strains reading and can make it impossible to distinguish between the text and the background involved. Colours are therefore chosen by users through a conditional action that enforces the contrast between the elements whose colours change.
Menus employ blocks or dedicated areas for displaying references that point to the main sections of the application. The display level of menus involves both singular references and hierarchically structured references. With singular reference display, reference is made only to the main sections of the application, while with hierarchically structured display, reference is made both to the main sections and to sections located on lower levels. Menus are located through block positions set by the designer, or moved by the user, through customization, to another area of the application.
For custom visual themes, maintainability is emphasized by testing whether a visual theme can be customized easily, by determining the differences between all the components chosen by two randomly selected users. By components we mean packaged component types characterized by blocks, text and placement.
The existence of eight components, characterized by modified blocks containing information, involves incorporating remote-access elements and text types added by internal users (PU5) to the internal database, as described in Table 1.
Table 1 – Description of the components tested for maintainability
Component | Description (type of text block formed from) | Method for the inclusion of information
cm1 | exchange rate | remotely
cm2 | weather | remotely
cm3 | brokers | remotely
cm4 | news | local database
cm5 | targeted searches | local database
cm6 | top events | local database
cm7 | authenticated users | local database
cm8 | events calendar | local database

Table 2 presents the frequency with which components are chosen by two different, randomly selected users, in order to test differences in the activation and use of components between users.
Table 2 – Choice of visual components by users
Components \ users | u1 | u2
cm1 | 1 | 1
cm2 | 1 | 1
cm3 | 3 | 0
cm4 | 1 | 2
cm5 | 1 | 5
cm6 | 1 | 1
cm7 | 1 | 2
cm8 | 0 | 1
Totals for user k, Nftcuk | 9 | 14
To determine the differences in use, the total component selection frequency Nftcuk is calculated for each user k, together with the overall total selection frequency TNftcu. From Table 2 it follows that
Nftcu1 = 9
Nftcu2 = 14
and the difference is |ΔNftcu| = 5.

The farther the value of the difference |ΔNftcu| is from zero, the greater the level of use of the facilities available for combining the components of visual themes, which suggests ease of use in customizing visual themes. The existence of components whose selection frequency is equal to or tends towards 0 determines the removal of that component and its replacement with a new component, with different content or with the same content as the first version.
The management of the modular structure necessary to achieve effective maintenance consists of displaying components by extracting data filtered based on the content and on content updates. The display is carried out by extracting this content from the database. For each type of publication a template is created, on which the HTML content is displayed. Each template contains a set of display sub-types, as follows: a template to display the list of publications; a template to display each page whose references are included in the list of publications; and a template for the printable version of each page.
Each list of publications is filtered based on information selection criteria, including field searching for words or combinations of keywords, and the display reacts to user behaviour. By reactive display we mean displaying the types of blocks whose occurrence is linked to a set of keywords set by the application administrator.
Content updating is performed by two management modules, one for image series and one for content. The image series management module, m9 InfoFoto, manages photo albums defined in a hierarchical structure. An image series is loaded directly from the application interface, and different sizes are automatically generated for each image: large size (the original item), a preview and a small thumbnail (icon). Each image is stored in the database along with its technical properties, title, description and keywords. These files are available for insertion in any kind of publication within the application, to users who have publisher permissions of level 1 or higher.
The content management module for publication types, m5 InfoStiri, is designed with hierarchical accessibility and has an information flow management system implemented. Based on the rules and permissions created for the requirements of the target group, levels and formats of input interfaces are designed for each group of user accounts. Each publication has attached lists, with sub-lists for subsequent filtering, containing a number of other sub-publications, thus creating a hierarchy of publications. Each publication type has attached a history of all the updates made on it. For system administrators and level 2 editors, a tabular zone displaying this history is inserted under each publication. On request, any of the intermediate stages through which a certain publication passed in the process of updating its content can be viewed, allowing various quality controls of the information flow.
Figure 3 shows how the history of the changes made to a news item is displayed, together with the distribution of hits coming from direct calls from search engines.

Figure 3 – History of changes undertaken in the InfoS News application


The maintainability research on the Infos distributed application defines the set of table structures TS = {ts1, ts2}. The Infos application modules that share the same data structure from the set TS provide several types of features and utilities. Table 3 shows the structure ts1, containing the common fields used by the modules, with classification fields differentiating between sections.
Table 3 – Fields of structure ts1
Field name | Data type | Dimension
id_publicatie | int | 11
tip_publicatie | tinyint | 2
titlu | varchar | 255
scurt_intro | text | -
text_extins | longtext | -
categorie | tinyint | 2
id_utilizator | int | 11
data_publicarii | date | 8

Table 4 shows the structure ts2, used by the modules to monitor access to and actions on the records of structure ts1. The structures are related through the fields ts2.camp_de_relationare and ts1.id_publicatie.
Table 4 – Fields of structure ts2
Field name | Data type | Dimension
titlu_imagine | varchar | 255
fisier_imagine | blob | -
link | varchar | 255
comentariu | text | -
camp_de_relationare | int | 11
Table 5 highlights the reuse of the TS structures in the set of modules, which provides a high degree of maintainability generated by flexibility for future changes as part of the maintenance process.
Table 5 – Reuse of structures in the set of modules
Structures si \ Modules mk | m1 | m2 | m3 | m4 | m5 | m6 | m7 | m8 | m9 | m10 | m11 | Nfmti
standard – ts1 | 3 | 2 | 3 | 5 | 15 | 3 | 4 | 7 | 8 | 1 | 4 | 55
details – ts2 | 1 | 3 | 2 | 2 | 2 | 5 | 3 | 1 | 6 | 1 | 1 | 27
TNfmt | 82

The condition on the number of reuses for a given data structure is the existence of at least two reuses across the set of application modules.
For the measured levels Nfmt1 = 55, Nfmt2 = 27 and TNfmt = 82, the resulting aggregate indicators are GrsT1 = Nfmt1 / TNfmt = 55 / 82 ≈ 0.67 and GrsT2 = Nfmt2 / TNfmt = 27 / 82 ≈ 0.33.

The values GrsT1 and GrsT2, which fall within the range [0.2, 1), indicate good results, reflected in the increased processing speed of the internal procedures and in the flexible and maintainable structure required by the subsequent maintenance processes that occur in the normal life cycle of distributed applications. The resulting GrsT1 and GrsT2 values are thus in the range that supports the reuse and standardization of the structures on which maintenance is conducted.
Maintainability at the data access level is ensured by using ADOdb abstract classes. The maintainability of the Infos distributed application is guaranteed through distributed access to different databases:
 - Infobd stores the information content of the Infos application, on which users perform editing, processing and updating operations;
 - InfoMonbd stores the data sets resulting from the monitoring algorithm operations performed on users;
 - InfoMediabd belongs to other distributed applications that perform media monitoring operations, and its content is queried in the Infos application.

To achieve fast queries, modification and deletion of the records contained in the databases, standardized ADOdb class definitions are used, which allow both the reuse of source code and changing the type of DBMS that manages the databases. The ADOdb classes contain pre-written procedures and rules, in which only the type of DBMS in which the database is implemented must be specified. Maintainability of data access is thus ensured with flexibility by using the classes generated by ADOdb to support the information management of the distributed application. Figure 4 is a schematic diagram of how data from the distributed application can be accessed.
Figure 4 – Using ADOdb classes to ensure the maintainability of the Infos application (the InfoS web application accesses the Infobd, InfoMonbd and InfoMediabd databases through ADOdb interface classes)


The following distribution of frequencies is considered for the Infos application's access to the ADOdb classes when querying the different databases. Table 6 extracts a series of iterations in which the Infos application performs different database queries based on the ADOdb classes.
Table 6 – Frequencies of access to the ADOdb classes in the Infos application
Query iteration | Access frequency Infobd | Access frequency InfoMonbd | Access frequency InfoMediabd | Total access frequency on databases through ADOdb classes
I1 | 1 | 4 | 1 | 6
I2 | 2 | 5 | 0 | 7
I3 | 1 | 3 | 1 | 5
Total access frequency on databases through ADOdb classes over the three iterations (I1, I2, I3): 16

To highlight the maintainable character obtained by using the ADOdb classes, the degree of access of the ADOdb classes, Gado, is determined.
From Table 6, the measured value is Gado = 0.187.
The result calculated for the degree of access of the ADOdb classes, Gado = 0.187, is positive and indicates a high degree of maintainability of the Infos application with respect to the database query iterations. The Gado result tends towards 0 while the indicator belongs to the interval (0, 1], which shows a high level of utility of the ADOdb classes in implementing the standard query interface between the web application and the holders of its data.
CONCLUSION
Web applications should have a maintainable structure designed in from the start, for flexible usability and coding. The life span of distributed web applications can expand dramatically through maintenance, so that the total cost is lowered and a higher profit is obtained in the end.
MANAGING TECHNOLOGICAL CHANGE IN INTERNATIONAL TOURISM BUSINESS

Camelia M. Gheorghe1
Mihai Sebea2
ABSTRACT
Recent advances in technologies adjust the traditional business model in tourism and are expected to create new ones. The question is whether companies understand the benefits of becoming even more innovative and creative when it comes to their "smart" business strategies, in order to fully differentiate these efforts from traditional business operations. Even more, many of the traditional consumers, in the new context of ICT development, have changed their buying habits and consumption behaviour. They act differently now, they have changed the way they interact with others and with business suppliers, and they have also become more interested in taking part in the creation process. But are these the essential characteristics of the next generation of tourists?

Going further, the infrastructure of the organization determines its readiness to respond to customer requirements. The new trends affect not only the shape of the offer, but also the design of the demand. As a consequence, customer relationship management and other fundamental information management systems are essential for businesses to scale up.

In that respect, the discussion of these topics will provide an overview of the present status of the field and also aims at extending the methodological insights regarding the appropriate approaches and responses of tourism suppliers to the new technological changes.

Keywords: tourism, technology, digital tourists, ICT, smart business, change


1. INTRODUCTION
Nowadays, information and communication technology (ICT) has become a critical tool for the development of the tourism industry. Technology not only mediates the access to destinations and improves the quality of experiences, but is also what allows consumers to participate in the creation process. In that respect the experiences are being transformed, as consumers are now more experienced and sophisticated and play an active part in co-creating their own experiences (Neuhofer, Buhalis & Ladkin, 2013). As customers gradually prefer to go their own way, the relationship between customers and companies is changing in favour of the customers, who are increasingly gaining power and control.
Therefore, the progress of science, the use of devices with multimedia features and the digital/mobile marketing campaigns have generated a new set of tools for tourists' experiences.

1 PhD, Romanian-American University, gheorghe.camelia.monica@profesor.rau.ro
2 PhD, Romanian-American University, sebea.mihai@profesor.rau.ro
But are managers prepared to make the distinction and to treat separately, yet equally well, the traditional tourists and the digital tourists? Are they able to keep pace with the new technological developments? How do suppliers manage technological change in order to offer great experiences?
This paper provides a theoretical review of the influences of ICT on tourism activities and presents findings in terms of tourism suppliers' readiness for technological change.
2. THEORETICAL FRAMEWORK
The widespread use of handheld devices, which are becoming increasingly powerful and flexible, and the development of fast mobile communication networks offer a new way of accessing information anytime and anywhere, thus changing the habits of mobile users (Ballard, 2007). In today's scenario, smartphones and applications can translate words live on screen, give real-time transportation advice, locate somebody anywhere in the world, act as a boarding pass, book the dinner reservation, and even help find a cheap, last-minute hotel room. Nowadays, it is more relevant to provide a dynamic view of the locations, like rankings of hotels or places, feedback on a point of interest etc. It is also expected that these applications should be integrated with popular social networks so that consumers can see feedback from other people in real time (Radha M De, 2012). It is clear that invisible, attentive and adaptive technologies that provide tourists with relevant services and information anytime and anywhere represent the future. The new display paradigm, stemming from the synergy of new mobile devices, context-awareness and Augmented Reality (AR), has the potential to enhance tourists' experiences and make them exceptional (Yovcheva, Buhalis & Gatzidis, 2013).
On the other hand, recent research regarding the use of technology in tourism outlines that online shared videos can provide mental pleasure to viewers by stimulating fantasies and daydreams, as well as bringing back past travel memories (Tussyadiah & Fesenmaier, 2008). But tourism information on the Internet is spread not only through the official websites. There are many unofficial channels, such as blogs, wikis and social networks, that also offer destination information (Inversini & Buhalis, 2009). The destination managers are investing considerable efforts (time and money) in order to market their destination online, without considering that unofficial information competitors are gaining more and more popularity among internet users (Inversini, Marchiori, Dedekind & Cantoni, 2010). The destinations need to manage their brand and online reputation holistically, by attempting to coordinate the players offering information about them and by amalgamating the entire range of information and service providers on platforms of experience creation (Inversini, Cantoni & Buhalis, 2010).
Social media are gaining prominence as an element of destination marketing organisation (DMO) marketing strategy, at a time when public sector cuts in their funding are requiring them to seek greater value in the way marketing budgets are spent. Social media offer DMOs a tool to reach a global audience with limited resources (Haysa, Page & Buhalis, 2012). Findings reveal that social media are predominantly used after holidays for experience sharing. Recent research also shows that there is a strong correlation between the level of influence from social media and changes made to holiday plans. Moreover, it is revealed that user-generated content is more trusted than official tourism websites, travel agents, and mass media advertising (Fotis, Buhalis & Rossides, 2011).
But the literature clearly demonstrates how tourism organizations should exploit the inbound and outbound communication, networking and collaboration capabilities of social media in order to include several other stakeholders in their management strategies and activities. The practical and research implications of social media in crisis management are essential nowadays for tourism policy makers, tourism suppliers and researchers (Sigala, 2011).
However, Information & Communication Technology is subject to social transformation, since it is used in airlines, hotels, travel agencies, tour operators and destination management organizations as an important support for the interplay with digital travellers. Therefore, the co-creation process is based not only on the interaction between the tourists and the suppliers, but also on the new network formed by the interaction between the tourists themselves, facilitated by the technology.
The impact of technology is tremendous in the transport sector. The recent research published by SITA3 shows the way technology is shaping the future of air travel. It considers that airline IT investment priorities continue to focus on mobilizing the passenger journey, as they have done for the past years, but airlines are also showing a strong interest in improving business intelligence to better understand their operations and customers. More than that, the airline industry is an industry that is rapidly adopting the digital world. In order to improve the formalities and procedures, some airports adopted new trends and installed smart gates for customers, which are secure, automated self-service alternatives to the conventional face-to-face border control process. The devices identify users through their passports, ID cards or e-Gate cards, and also use facial and eye recognition technology to verify the user.
The advent of digital media technology and the emergence of Internet-based content are raising the bar in terms of what consumers expect from in-room hotel technology. The latest technology gives hotels an opportunity to provide new products and services to guests, but it also brings challenges. Using a single remote for all the features in the room, turning the guest's smartphone into the remote control for the television or providing the hotel rooms with iPads loaded with a virtual concierge application are services adopted by smart hotels, but in the near future they will be available at many other hotels.
Regarding tourism firms, the role of Information & Communication Technologies
(ICTs) in increasing their performance, and that of tourism destinations at a macro-
economic level, is widely advocated (Wang & Fesenmaier, 2003).
But the impact of Information & Communication Technologies is essential in the case of
destinations as well. The studies reveal the opportunities for tourism destination marketing
organizations to communicate successfully their attractions and offerings through user
generated blog content (Volo, 2010). More than that, it is known that the totality of the ICTs
developed by Destination Marketing Organisations (DMO) for marketing their destinations
represents the Destination Management System (DMS). Today’s world has thousands of
destinations that have a DMS, the function of which is to oversee the entire offer of the
local or regional destinations situated on their territory, to carry out promotion for these,
and to serve as a distribution channel of bookable tourism products (Bédard & Biegala,

3 The Airline IT Trends Survey, SITA 2013, http://www.sita.aero/surveys-reports/industry-surveys-reports/airline-it-trends-survey-2013.
2010). The major role of a DMS is to act as an electronic intermediary providing
functionalities related to e-distribution, e-marketing and e-sales for the whole destination
and its tourism suppliers. As investment in and the adoption of ICTs are now an
indispensable component of tourism and hospitality business, researchers increasingly seek
to understand and communicate the significance of the new technologies, investigate and
interpret developments in e-tourism, and attempt to forecast the way ahead for both industry
and technological development (Sigala, 2011).
The industry leaders considered that the new Web era was imminent and heralded benefits
for supply and demand side interoperability, although management and technical
challenges could impede progress and delay realisation (Mistilis & Buhalis, 2012).
Websites and the mobile applications need to be evaluated from a usability perspective for
improving their quality for the end users (Triacca, Inversini & Bolchini, 2005). The main
reason for the continuous and rigorous upgrading is that studies regarding the effect
of the design factors of destination Web sites on first impression formation indicate that
virtual users were able to make quick judgments on tourism Web sites and that inspiration
and usability were the primary drivers evoking a favourable first impression. The
volume of good information and the credibility of the destination web page are also
important arguments for tourists to form a positive first impression (Kim & Fesenmaier,
2008).
Last but not least, agile competition and the fast development of information and
communication technologies have the capacity to modify the traditional time-space
interaction and to form different modern organizational structures of business systems – the
virtual organizations. Closer collaboration and the utilization of ICTs would enable tourism
business system actors to expand their supply and to enhance their competitiveness
(Hopeniene, Railiene & Kazlauskiene, 2009).
3. METHODOLOGY
The research methodology included: a) a textual analysis using the methodological
resources of conceptual clusters and the links between them: IT – business – tourist
experiences (Fig. 1), b) a survey regarding the readiness of tourism suppliers for
technological challenges, and c) synthetic parts.

Fig. 1. The key conceptual clusters of digital tourism


The first part consisted of the study and analysis of key texts on consumer behaviour in the
new digital context and of the discussion on the emergence of new business models in tourism.
The survey was conducted among 100 companies operating in the tourism sector and its main
objective was to establish the level of use of new technologies in their business operations
or in their relations with consumers. The direct quantitative research aimed at recording responses
to questions from a questionnaire and implied the following steps: setting the goal and
objectives of the research, defining the group, determining the poll unit, calculating the
sample size, choosing the sampling mechanism, conducting the research (information
collection, data processing, analysis and interpretation), and developing conclusions.
The constructive part of the methodology consisted in synthesizing the results in order to conclude
on possible new interpretations of the key conceptual clusters, with the aim of identifying and
discussing conceptual innovations regarding the new business models in tourism determined
by the adoption of new technologies and the new structures and strategies for tourism
suppliers based on digital tourist experiences, and of setting up new knowledge to prepare the tourism
suppliers for the smart tourists.
4. RESULTS
The analysis of key texts on consumer behaviour in the context of the development of
information and communication technology followed the methodological approach centred
on conceptual attitudes and behaviours via smart/digital tourist experiences, which is an
explicit novelty in the present-day controversies regarding traditional versus digital tourist
behaviour.
Regarding the survey results, some conclusions may be drawn, as follows:
 Because of the new mobile technology, the passive role of the tourists appears to have
been replaced by an active one, and tourists have become more experienced,
sophisticated and eager to be involved in the creation process;
 The new global technological environment has determined a shift in consumer
behaviour, and the new profile of the modern tourist has determined another approach
to business in terms of organization, marketing and connection with consumers;
 Many tourism suppliers are aware of the benefits of adopting the new
technologies, but not all of them are able to keep pace with the rhythm of
technological development;
 The new, adjusted type of business requires a workforce endowed with more
specific skills;
 Some of the suppliers consider that WoM (word of mouth) and traditional marketing tools
are more effective than the mobile or electronic information channels.
5. CONCLUSIONS
The research results have the potential of opening new areas of scientific investigation and
of exploring potentially new paradigms in the field of business science and administration in
tourism.
The future research will therefore focus its attention on:

 the change of roles; the shift of the gravity centre (the consumer becomes an active part of
creating touristic packages) – the “Prosumer” (professional, informed, expert
consumer);
 consumers co-create virtual communities, where the feedback given by consumers
co-creates the touristic experience;
 marketing and human resources strategies are directly influenced by smart
technologies;
 the shift from physical business to online business and perhaps mobile business (can
online business strategies be adapted or adopted by the tourism industry? Can we speak
in tourism of “mobile business models” in the same way we speak of “online
business”? Is this a paradigm shift?);
 how companies could understand the benefit of becoming even more innovative
and creative when it comes to their “smart” business strategies, in order to fully
differentiate these efforts from traditional business operations.

The notion of creating experiences has become paramount for successful business operations,
structures and strategies. Considering that the performance of businesses heavily relies on
minimising the imitation of tourism products and services and on maximising the creation
of valuable experiences, it is crucial for business suppliers to gain an in-depth understanding
of the paradigm shifts changing the conditions they are operating in. To that end, the
research set out to conceptualise technology as a tool for the next generation of tourism
(industry).
6. REFERENCES:
1) Bédard F. & Biegala T., Consumer Behaviour and Consumer Profile in the Use of Portal
Destination Management Systems (DMS), Conference ENTER, Switzerland, 2010;
2) Bourgouin F., Information communication technologies and the potential for rural tourism
SMME development: the case of the Wild Coast, Development Southern Africa Vol 19, No 1,
March 2002;
3) Fotis J., Buhalis D. & Rossides N., Social media impact on holiday travel planning: the case of
the Russian and the FSU markets. International Journal of Online Marketing 1(4): 1–19, 2011;
4) Hays S., Page S.J. & Buhalis D., Social media as a destination marketing tool: its use by national
tourism organisations. Current Issues in Tourism 16(3): 211–239, 2012;
5) Hopeniene R., Railiene G. & Kazlauskiene E., Potential of virtual organizing of tourism
business system actors, Engineering economics = Inžinerinė ekonomika. 3(63), 75-85, 2009;
6) Inversini A. & Buhalis D., Information Convergence in the Long Tail. The Case of Tourism
Destination Information. In W. Hopken, U. Gretzel & R. Law (Eds.), Information and
Communication Technologies in Tourism 2009 – Proceedings of the International Conference
in Amsterdam, Netherlands (pp. 381-392). Wien: Springer, 2009;
7) Inversini A., Cantoni L. & Buhalis D., Destinations information competitors and Web
reputation. Information Technology and Tourism 11: 221–234, 2010;
8) Inversini A., Marchiori E., Dedekind C. & Cantoni L., Applying a Conceptual Framework to
Analyze Web Reputation of Tourism Destinations. In U. Gretzel, R. Law, & M. Fuchs (Eds.),
Information and Communication Technologies in Tourism 2010 – Proceedings of the
International Conference in Lugano, Switzerland (pp. 321-332). Wien: Springer, 2010;
9) Kim H. & Fesenmaier D.R., Persuasive Design of Destination Web Sites: An Analysis of First
Impression, Journal of Travel Research OnlineFirst, published on January 14, 2008;

348
JOURNAL OF INFORMATION SYSTEMS & OPERATIONS MANAGEMENT

10) Mistilis, N., & Buhalis, D., Challenges and potential of the Semantic Web for tourism, e-Review
of Tourism Research (eRTR), Vol. 10, No. 2, 2012;
11) Morosan C. & Fesenmaier D.R., A conceptual framework of persuasive architecture of tourism
websites: Propositions and implications. Proceedings, Fourteenth International Conference on
Information and Communication Technology in Tourism, Ljubljana, Slovenia, 2007, pp. 243 –
254;
12) Neuhofer B., Buhalis D. & Ladkin A., A Typology of Technology-Enhanced Tourism
Experiences, International Journal of Tourism Research, Int. J. Tourism Res., Published online
in Wiley Online Library (wileyonlinelibrary.com) DOI: 10.1002/jtr.1958, 2013;
13) Radha M De, Mobile Tourism Planning, Academy of Technology, ATN Volume 3, Number 5,
2012;
14) Sigala M., Social Media and Crisis Management in Tourism: Applications and Implications for
Research, Information Technology & Tourism, ISSN 1098-3058, Volume 13, Number 4, 2011
, pp. 269-283(15);
15) Spencer A., Buhalis D. & Moital D., Staged technology adoption for small owner-managed travel
firms: An organizational decision-making and leadership perspective. Tourism Management,
2011;
16) Triacca L., Inversini A. & Bolchini D., Evaluating Web Usability with MiLE+, Website
Evolution IEEE Symposium, Budapest: Hungary, 2005;
17) Tussyadiah I.P. & Fesenmaier D.R., Mediating Tourist Experiences. Access to Places via Shared
Videos, Annals of Tourism Research, Vol. 36, No. 1, pp. 24–40, 2008;
18) Yovcheva Z., Buhalis D. & Gatzidis C., Engineering augmented tourism experiences. In Cantoni
L, Xiang Z (eds). Information and Communication Technologies in Tourism 2012. Springer
Verlag: Austria; 24–35;
19) Volo S., Bloggers’ Reported Tourist Experiences: Their Utility as a Tourism Data Source and
Their Effect on Prospective Tourists, Journal of Vacation Marketing 16 (4), 297-311, 2010;
20) Wang Y. & Fesenmaier D.R., Understanding the Motivation to Contribute to Online
Communities: An Empirical Study of an Online Travel Community, Electronic Markets, 13(1):
pp. 33 – 45, 2003;
21) Wang Y., Hwang Y.H. & Fesenmaier D.R., Futuring Internet Marketing Activities Using
Change Propensity Analysis, Journal of Travel Research, 45: 158-166, November 2006;
22) ****Air Transport Industry Insights, The Airline IT Trends Survey, SITA 2012;
23) ****Air Transport Industry Insights, The Airport IT Trends Survey, SITA 2013.

VERIFIABLE SECRET SHARING SCHEME BASED ON INTEGER REPRESENTATION

Qassim Al Mahmoud1
ABSTRACT
In Shamir’s scheme, security is based on the size of the field of a prime number P to which
the polynomial coefficients are reduced modulo P (they take values in some field Z_P, where
P is a large prime number). Thus, the adversary must know only the free coefficient of the
polynomial in order to break the scheme. Our scheme is based on integer representation
using the so-called g-adic expansion: any integer a can be used to build a polynomial of
degree k−1, k being the number of digits of the integer a, where the coefficients of that
polynomial are taken from the set {0, 1, ..., g−1}. We introduce a small contribution
improving Shamir’s scheme so as to share a secret S that is constructed from all of the
coefficients of the polynomial used, which is based on the integer representation. We then
apply Pedersen’s VSS scheme in order to improve our scheme into a Verifiable Secret
Sharing Scheme Based on Integer Representation.

Keywords: Shamir scheme, information dispersal, integer representation, verifiable secret sharing (VSS), g-adic expansion
1. INTRODUCTION
Secret sharing is a central topic in the information security field. In the early evolution
of cryptography, the exchange of keys between users was one of the problems for which
scientists tried to find an ideal solution. Over time they saw the need to protect the
information from the very people who are responsible for safeguarding the key. The security
of crucial information, such as credit evidence and confidential documents, has become one
of the most important issues nowadays. Usually, we encrypt these pieces of critical
information to protect the data. But how do we protect the encryption key?
Shamir raised this question in 1979 [1]. In practice, improving encryption alone cannot
solve this problem. Traditional methods of encryption are ill-suited for simultaneously
achieving high levels of confidentiality and reliability, as they usually keep keys in a single
and well-guarded location where a single point of failure exists. Moreover, it is also very
important that keys are neither lost nor exposed.
Secret sharing schemes have been introduced independently by Shamir [1] and Blakley [2]
as a solution for safeguarding cryptographic keys. Secret sharing schemes can be used in
any setting in which the access to an important resource has to be restricted.

Basic secret sharing schemes assume that the dealer who divides the secret and distributes
the shares to the participants is a mutually trusted party. In 1985, Chor et al. [3] extended the
original secret sharing and presented the notion of verifiable secret sharing (VSS). The
property of verifiability enables the participants to verify that their shares are consistent
according to the security property. In Feldman’s VSS scheme [4], the committed values are
publicly known and the privacy of the secret S depends on the difficulty of solving the
discrete logarithm problem. In other words, Feldman’s scheme is computationally secure.

1 Faculty of Mathematics and Computer Science, the University of Bucharest, Romania
Later, Pedersen [5] used a commitment scheme to remove this assumption from
Feldman’s VSS scheme and proposed a VSS scheme which is information-theoretically
secure. However, in Pedersen’s VSS scheme the dealer can succeed in distributing incorrect
shares if the dealer can solve the discrete logarithm problem (see the previous study).
The concept of the secret must be considered with respect to the group of people selected
as the group authorized to reconstruct it; the concept of secret sharing is built by dividing
this group into subsets, where each subset can retrieve the secret. In fact, this is the
definition of the access structure. We have previously mentioned that there is a collection
of subsets of the group that are authorized to possess the key for retrieving the stored
information when we need it; this collection is called the access structure. Let X = {1, 2, ..., n}
be the set of users and the access structure Γ ⊆ P(X); we give bounds on the amount of
information (shares) for each participant and then apply this to construct computational
schemes for general access structures.
Let us consider a set of groups Γ ⊆ P(X). The (authorized) access structure of a secret
sharing scheme is the set of all groups which are designed to reconstruct the secret. The
elements of the access structure will be referred to as the authorized groups/sets and the
rest are called unauthorized groups/sets. In other words, the unauthorized access structure
is completely specified by the set of the maximal unauthorized groups.
In secret sharing schemes, the number of participants in the reconstruction phase is
important for recovering the secret. Such schemes have been referred to as "threshold secret
sharing schemes."
In the Shamir threshold secret sharing scheme, the secret is chosen as the free coefficient
of a polynomial, so that only a fixed number (k) of people (or more) may reconstruct the
polynomial and then restore that secret. Fewer than the required number of people (k−1 or
fewer) should not be able to learn anything about the polynomial, and hence about the
secret; this is called a threshold secret sharing scheme.
In such a scheme the secret S is generated from fragments: the secret S refers to the
information and the shares refer to the fragments, so that in the scheme (S, (F_1, F_2, ..., F_n))
the problem is to find the secret S given the fragments (F_1, F_2, ..., F_n). The (k, n)-threshold
secret sharing scheme is based on information dispersal. Threshold information dispersal
schemes have been introduced by Rabin [6]. For any authorized subset A of the set of
participants with |A| ≥ k, the problem is to find the secret S from k of the fragments F_i.
Krawczyk [7] has proposed combining perfect threshold secret sharing schemes with
encryption and information dispersal in order to decrease the size of the shares, at the cost
of decreasing the level of security.

In [8], using the so-called g-adic representation of integers, any integer a can be used to
build a polynomial of degree k−1, where k is the number of digits of the integer a and the
coefficients of the polynomial are taken from the set {0, 1, ..., g−1}. We will improve
Shamir's scheme so as to share a secret S that is constructed from all of the coefficients of
the polynomial used, which is based on the integer representation, and we then apply
Pedersen's VSS to our scheme in order to construct a verifiable secret sharing scheme based
on the g-adic integer representation.
This paper is organized as follows: within the rest of this section, we give the definition
of the g-adic integer representation; section 2 presents the previous studies, which are divided
into three parts: the first explains the threshold Shamir secret sharing scheme, the second
presents (k, n)-threshold information dispersal and an important example (Krawczyk's
scheme), and the third is about Pedersen's VSS; in section 3, we explain our scheme, the
verifiable secret sharing scheme based on the g-adic integer representation, with a small
artificial example; in section 4, we discuss the security of our scheme; and finally the
conclusion is included in section 5.
Integer Representation [8]:
Our scheme, presented in section 3, is based on the integer representation given by
Theorem 1 and Definition 1 below, so it is very important to read and understand this
section well.
In [8], an integer can be represented using the so-called g-adic expansion. For an integer
g > 1 and a positive real number α, denote by log_g α the logarithm of α in base g. For
a set M, let M^k be the set of all sequences of length k with entries from M.
Example 1: We have log_2 8 = 3 because 2^3 = 8. Also log_8 8 = 1 because 8^1 = 8.
Example 2: The sequence (0, 1, 1, 1, 0) is an element of {0, 1}^5. Also
{1, 2}^2 = {(1,1), (1,2), (2,1), (2,2)}.
Theorem 1 [8]: Let g be an integer, g ≥ 2. For each positive integer a, there is a uniquely
determined positive integer k and a uniquely determined sequence
(a_1, a_2, ..., a_k) ∈ {0, 1, ..., g−1}^k with a_1 ≠ 0 and

a = Σ_{i=1}^{k} a_i g^{k−i}.    (1)

In addition, k = ⌊log_g a⌋ + 1 and a_i = ⌊(a − Σ_{j=1}^{i−1} a_j g^{k−j}) / g^{k−i}⌋ for 1 ≤ i ≤ k.
Definition 1 [8]: The sequence (a_1, a_2, ..., a_k) from Theorem 1 is called the g-adic expansion of
a. Its elements are called digits. Its length is k = ⌊log_g a⌋ + 1. If g = 2 the sequence is
called the binary expansion of a; if g = 16 the sequence is called the hexadecimal
expansion of a. For more details the reader may consult [8], [9].

In Definition 1 we have seen that any integer can be represented as a k-tuple of
elements from the set M = {0, 1, ..., g−1}, as in eq. (1). This definition of integer
representation will be useful in section 3.
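For illustration only, the following short Python sketch (ours, not part of [8]) computes the g-adic digits of Theorem 1 and checks eq. (1); the function names are hypothetical:

def g_adic_expansion(a, g):
    # Digits (a_1, ..., a_k) of the g-adic expansion of a, most significant first (Theorem 1).
    assert a > 0 and g >= 2
    digits = []
    while a > 0:
        digits.append(a % g)      # least significant digit first
        a //= g
    digits.reverse()              # a_1 (the leading, non-zero digit) first
    return digits

def from_digits(digits, g):
    # Recombine the digits as in eq. (1): a = sum_i a_i * g^(k-i).
    k = len(digits)
    return sum(d * g ** (k - i) for i, d in enumerate(digits, start=1))

# Example: the hexadecimal expansion of 5923 is (1, 7, 2, 3).
assert g_adic_expansion(5923, 16) == [1, 7, 2, 3]
assert from_digits([1, 7, 2, 3], 16) == 5923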
2. PREVIOUS STUDIES
In this section we mention, in general, the important threshold schemes in secret
sharing: the first part of this section presents Shamir’s secret sharing scheme, the second
part is about information dispersal and an important example (Krawczyk’s scheme),
and the third part is about Pedersen’s VSS.
Threshold Shamir Secret Sharing Scheme
In 1979, Shamir [1] introduced threshold secret sharing as a solution for
safeguarding cryptographic keys. His scheme generated the thinking on how to share a
secret among multiple participants, and it has been used until now as the root of a wide range
of research papers in computer security. Thus, we give an overview of his important scheme.
In Shamir's (k, n)-scheme, based on the Lagrange interpolating polynomial, there are n
shareholders P = {P_1, P_2, ..., P_n} and a dealer D. The scheme consists of two algorithms:

1- Share Generation Algorithm: Dealer D does the following:
 Picks a polynomial f(x) of degree (k−1) at random, f(x) = a_0 + a_1 x + ... + a_{k−1} x^{k−1},
in which the secret is S = a_0 and all coefficients a_0, a_1, ..., a_{k−1} are in a finite field
F_P = GF(P).
 Computes s_1 = f(1), s_2 = f(2), ..., s_n = f(n).
 Outputs a list of n shares (s_1, s_2, ..., s_n) and distributes each share to the corresponding
participant privately.
2- Secret Reconstruction Algorithm:
With any k shares (s_{i1}, s_{i2}, ..., s_{ik}), the secret S can be reconstructed: the k participants
work together, pooling their shares, and then apply Lagrange interpolation to find the
coefficients of the polynomial f(x) used, as follows:

f(x) = Σ_{i=0}^{k} L_i(x) f(x_i)    (2)

where k in f(x) is the order of the polynomial that approximates the function through the
given points, and

L_i(x) = Π_{j=0, j≠i}^{k} (x − x_j) / (x_i − x_j)    (3)

Then the secret is:

S = f(0) = Σ_{i∈A} s_i Π_{j∈A\{i}} x_j / (x_j − x_i)
We note that the above scheme satisfies the basic requirements of a secret sharing scheme,
as follows:
 With knowledge of any k or more shares, the secret S can be reconstructed.
 With knowledge of fewer than k shares, the secret S cannot be reconstructed.
Shamir's scheme is information-theoretically secure since the scheme satisfies these two
requirements without making any computational assumption. For more information on this
scheme, readers can refer to the original paper [1].
In Shamir's (k, n) SS, the dealer is a trusted third party who generates and distributes the shares
to the n participants by using such a polynomial; k participants (or more) can reconstruct the
secret S, and fewer than k learn nothing about S.
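As an illustration of the two algorithms above, a minimal Python sketch of Shamir's (k, n) scheme over Z_P follows; the prime P, the helper names and the sample secret are our illustrative choices, not values from [1]:

import random

P = 2**61 - 1  # an illustrative prime modulus; a real deployment would choose P to fit the secret

def shamir_split(secret, k, n, p=P):
    # Dealer: random polynomial of degree k-1 whose free coefficient is the secret.
    coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p
    return [(x, f(x)) for x in range(1, n + 1)]          # shares (x_i, s_i)

def shamir_reconstruct(shares, p=P):
    # Lagrange interpolation at x = 0 recovers the free coefficient (the secret).
    secret = 0
    for xi, si in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * xj % p
                den = den * (xj - xi) % p
        secret = (secret + si * num * pow(den, -1, p)) % p
    return secret

shares = shamir_split(123456789, k=3, n=5)
assert shamir_reconstruct(shares[:3]) == 123456789       # any 3 of the 5 shares suffice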
Threshold Information Dispersal
Shamir's secret sharing from section 2.1 above places a restriction on unauthorized groups,
which is the difference from information dispersal; in addition, the secret S is divided into
n shares of the same size as the secret S: if the secret has size m, each share has size m too.
Thus this method leads to an n-fold increase in total storage. Threshold information
dispersal schemes have been introduced by Rabin [6]. He proposed the Information
Dispersal Algorithm (IDA), which breaks a file (the secret S) of length L = |S| into n pieces
S_i, 1 ≤ i ≤ n, such that the secret S can be reconstructed from any k pieces S_i, each of
length |S_i| = L / k, so that every k pieces suffice for reconstructing the file.

Iftene [10] defined the information dispersal scheme more clearly, following the scheme
from the original paper.
Definition 2: Let n be an integer, n ≥ 2, and 2 ≤ k ≤ n. Informally, a (k−1)-threshold
information dispersal scheme is a method of generating (S, (F_1, F_2, ..., F_n)) such that, for
any set A with |A| ≥ k, the problem of finding the element S given the set {F_i | i ∈ A} is
"easy". The secret S refers to the information and the shares (F_1, F_2, ..., F_n) refer to the
fragments.

Krawczyk [7] presented a threshold information dispersal scheme very similar to
Shamir's threshold secret sharing scheme:
 The information S is chosen as the vector of coefficients of a random
polynomial f(x) of degree k−1 over some field;
 The fragments (F_1, F_2, ..., F_n) are chosen as follows: F_i = f(x_i), 1 ≤ i ≤ n,
where (x_1, x_2, ..., x_n) are pairwise distinct public values;
 Having the fragments {F_i | i ∈ A} for some group A with |A| ≥ k, the
polynomial f(x), and thus the information S, can be obtained using Lagrange's
interpolation formula as in equations (2, 3).

The information is chosen as the whole polynomial, as opposed to the perfect secret sharing
scheme in which the secret is chosen only as the free coefficient. Our scheme, presented in
section 3, is based on the Krawczyk scheme and differs in the way the polynomial f(x) is
chosen: the coefficients are chosen based on the integer representation.
Pedersen’s VSS Scheme
As mentioned, in Feldman’s VSS scheme [4] the committed values are publicly known and the
privacy of the secret S depends on the difficulty of solving the discrete logarithm problem. In
other words, Feldman’s scheme is computationally secure. In 1992, Pedersen [5] proposed
a non-interactive and information-theoretically secure VSS scheme based on Feldman’s
VSS scheme.
Let p and q be two large primes such that q | (p−1), and let g, h ∈ Z_p be two elements of order
q. There are n participants P = {P_1, P_2, ..., P_n} and a dealer D who will divide a secret s ∈ Z_p.
We describe Pedersen’s scheme below.
1- Share Generation Algorithm: Dealer D does the following:
 Picks a polynomial f(x) = a_0 + a_1 x + ... + a_{k−1} x^{k−1} of degree at most (k−1)
at random, in which the secret is S = a_0 = f(0) and all coefficients a_0, ..., a_{k−1} are in Z_p.
 Picks b_0, ..., b_{k−1} ∈ Z_p at random. Let k(x) = b_0 + b_1 x + ... + b_{k−1} x^{k−1}.
 Computes the shares (s_i, t_i) for i = 1, ..., n from the two polynomials f(x) and k(x)
as follows: (s_i, t_i) = (f(i), k(i)). Computes the commitment of each coefficient,
c_j = g^{a_j} h^{b_j} mod p, for j = 0, 1, ..., k−1.
 Outputs a list of n shares (s_i, t_i) and distributes each share to the corresponding
participant P_i privately. D also broadcasts the values c_j.
2- Share Verification: each participant P_i, who has received the share (s_i, t_i) and all the
broadcast information, can verify that the share defines a secret by testing:

g^{s_i} h^{t_i} mod p = Π_{j=0}^{k−1} c_j^{i^j} (mod p)    (4)

3- Secret Reconstruction Algorithm: It is the same as in Shamir's scheme.

In Pedersen’s scheme, the value g^{a_0} is not made publicly known; that is, the secret S is
embedded in the commitment c_0 = g^{a_0} h^{b_0}. Thus, no information about the secret S
is revealed directly: even if an attacker with unlimited computing power can solve log_g h, the
attacker still gets no information about the secret S.
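For concreteness, a minimal Python sketch of the dealing and verification steps (eq. (4)) follows; the tiny primes p, q, the generators g, h and the function names are our illustrative assumptions and are far too small for real use:

import random

p, q = 23, 11           # artificial primes with q | (p - 1)
g, h = 4, 9             # elements of order q in Z_p* (illustrative choice)

def pedersen_deal(secret, k, n):
    # f(x) hides the secret in a_0; k(x) supplies the blinding coefficients b_j.
    a = [secret] + [random.randrange(q) for _ in range(k - 1)]
    b = [random.randrange(q) for _ in range(k)]
    f = lambda x: sum(aj * x**j for j, aj in enumerate(a)) % q
    kx = lambda x: sum(bj * x**j for j, bj in enumerate(b)) % q
    shares = [(i, f(i), kx(i)) for i in range(1, n + 1)]
    commits = [pow(g, aj, p) * pow(h, bj, p) % p for aj, bj in zip(a, b)]
    return shares, commits

def pedersen_verify(i, si, ti, commits):
    # Check eq. (4): g^si * h^ti  =  prod_j c_j^(i^j)  (mod p).
    lhs = pow(g, si, p) * pow(h, ti, p) % p
    rhs = 1
    for j, cj in enumerate(commits):
        rhs = rhs * pow(cj, i**j, p) % p
    return lhs == rhs

shares, commits = pedersen_deal(secret=7, k=3, n=5)
assert all(pedersen_verify(i, si, ti, commits) for i, si, ti in shares)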
3. VERIFIABLE SECRET SHARING SCHEME BASED ON INTEGER REPRESENTATION
Now, after having reviewed the mathematical integer representation, we can present our
scheme. It is based on polynomials whose coefficients are constructed using the integer
representation; we then apply Pedersen's share verification algorithm, generating the shares
and the verification shares, with the difference that the secret will be the value f(g), where
g is the base of the g-adic expansion. The share verification is as in Pedersen's VSS. The
reconstruction algorithm is as follows: any authorized subset A with |A| ≥ k pools its shares
and uses polynomial interpolation to reconstruct the polynomial f(x), then computes
f(g) = S, where S is the secret.
Our scheme has three algorithms:
1- Construction of the Polynomial Used, with Share Generation Algorithm
 Choose a secret S that has length k (number of digits), as defined in Definition 1,
and let g be the base of the g-adic expansion; g is a public number.
 Represent S by using Definition 1 and Theorem 1, as follows:

S = Σ_{i=1}^{k} a_i g^{k−i},

where the coefficients (a_1, a_2, ..., a_k) ∈ {0, 1, ..., g−1}^k with a_1 ≠ 0.
 Construct the polynomial f(x) of degree (k−1) from equation (1) as:
f(x) = a_k x^0 + a_{k−1} x^1 + ... + a_1 x^{k−1}.
 Pick b_0, ..., b_{k−1} ∈ Z_p at random. Let k(x) = b_0 + b_1 x + ... + b_{k−1} x^{k−1}.
 Compute the shares (s_i, t_i) from the two polynomials f(x) and k(x) as follows:
(s_i, t_i) = (f(i), k(i)). Compute the commitment of each coefficient,
c_j = g^{a_j} h^{b_j} mod p, for j = 0, 1, ..., k−1.
 Output a list of n shares (s_i, t_i) and distribute each share to the corresponding
participant P_i privately. D also broadcasts the values c_j.
2- Share Verification: It is the same as Pedersen's share verification.
3- Secret Reconstruction Algorithm
In the reconstruction algorithm, any authorized subset A with |A| ≥ k pools its shares; the
polynomial f(x) can be reconstructed using Lagrange's interpolation formula as in
equations (2, 3), and then f(g) = S is computed, where S is the secret.

Note that g is a public number known to all n participants; when we use a large g and a
large length k, the scheme becomes more secure.
We note that when the secret and the base g are small numbers, our scheme is not secure,
whereas when we use large numbers (not necessarily prime numbers, unlike in Shamir's
scheme) our scheme appears more secure than Shamir's scheme.
The next section discusses the security of our scheme and compares it to Shamir's scheme.
Example (with artificial parameters)
1- Share Generation Algorithm
Let us share the integer secret S = 5923 among n = 10 participants.
We can represent the secret in the g-adic representation as follows. We will use the
hexadecimal (g = 16) representation, so we can rewrite the secret 5923 in hexadecimal as
follows:

5923 = 1·16^3 + 7·16^2 + 2·16^1 + 3

Then the secret polynomial used is:

f(x) = x^3 + 7x^2 + 2x + 3

The threshold is k = 4, such that any 4 or more participants can reconstruct the secret and
fewer than 4 participants cannot reconstruct it.
Now we use this polynomial in order to generate the shares, as in Shamir's scheme; the
shares are:

{s_1 = f(1) = 13, s_2 = f(2) = 43, s_3 = f(3) = 99, s_4 = f(4) = 187, s_5 = f(5) = 313,
s_6 = f(6) = 483, s_7 = f(7) = 703, s_8 = f(8) = 979, s_9 = f(9) = 1317, s_10 = f(10) = 1723}
2- Share Verification: It is the same as Pedersen's share verification.
3- Secret Reconstruction Algorithm: In the secret reconstruction algorithm, for any
authorized subset of the n participants of size 4, S can be obtained using Lagrange's
interpolation formula as in equations (2, 3) to reconstruct the polynomial f(x), and then by
computing f(16) = S.
Suppose that the participants {2, 3, 6, 7} want to reconstruct the secret S. They do the
following:
They use the shares

{s_2 = f(2) = 43, s_3 = f(3) = 99, s_6 = f(6) = 483, s_7 = f(7) = 703}

in order to reconstruct the secret polynomial used, f(x) = x^3 + 7x^2 + 2x + 3.
Then they find the secret as S = f(16) = 5923.
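The whole example can be checked with a short Python sketch (ours, for illustration only): it recovers the hexadecimal digits of S, rebuilds f(x), generates the ten shares, and reconstructs the secret from the shares of participants {2, 3, 6, 7} by exact Lagrange interpolation evaluated at g = 16:

from fractions import Fraction

g, S = 16, 5923
digits, a = [], S
while a:
    digits.append(a % g)
    a //= g
digits.reverse()                                  # (1, 7, 2, 3)

f = lambda x: sum(d * x**e for d, e in zip(digits, range(len(digits) - 1, -1, -1)))
shares = {i: f(i) for i in range(1, 11)}          # s_1 .. s_10 as listed above

def reconstruct(points, x0):
    # Lagrange interpolation over the rationals, evaluated at x0.
    total = Fraction(0)
    for xi, yi in points.items():
        term = Fraction(yi)
        for xj in points:
            if xj != xi:
                term *= Fraction(x0 - xj, xi - xj)
        total += term
    return total

subset = {i: shares[i] for i in (2, 3, 6, 7)}     # any 4 participants
assert shares[1] == 13 and shares[10] == 1723
assert reconstruct(subset, g) == 5923             # S = f(16)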

4. SECURITY ANALYSIS
Shamir's scheme is provably secure; that means that in a (k, n) scheme one can prove that it
makes no difference whether an attacker has k−1 valid shares at his disposal or none at all:
as long as he has fewer than k shares, there is no better option than guessing in order to find
out the secret.
We have noted that in Shamir's scheme the security is based on the number of elements in
the field of a prime number P to which the polynomial coefficients are reduced modulo P
(they take values in some field Z_P, where P is a large prime number). Thus, the adversary
must know only the free coefficient of the polynomial in order to break the scheme; on the
other hand, the probability that the adversary will guess the secret S is 1/P. Shamir's scheme
has a uniform distribution and his scheme is perfect [1]. Our scheme is more secure than
Shamir's scheme in the sense that the adversary must know all the coefficients of the
polynomial used. In addition, he must know the exact values of the coefficients
(a_1, a_2, ..., a_k), in their exact order. The probability for the adversary to guess the secret in
our scheme is 1/g^k. If k and g are two large numbers, our scheme appears to be more secure
than Shamir's scheme, since this probability is closer to zero. We have seen that if k = 1 and
g = P, our scheme has security similar to Shamir's scheme. In addition, our scheme has a
uniform distribution, as Shamir's scheme does.
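For concreteness, a small illustrative computation (the parameter values below are ours, not the paper's) comparing the two guessing probabilities:

P = 2**127 - 1        # a 128-bit prime modulus for Shamir's scheme
g, k = 2**32, 4       # base and digit count for the g-adic scheme
print(1 / P)          # ~5.9e-39: probability of guessing Shamir's free coefficient
print(1 / g**k)       # ~2.9e-39: probability of guessing all k digits in the right order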
In Pedersen's scheme, the value g^{a_0} is not made publicly known; that is, the secret S is
embedded in the commitment c_0 = g^{a_0} h^{b_0}. Thus, no information about the secret S
is revealed directly: even if an attacker with unlimited computing power can solve log_g h, the
attacker still gets no information about the secret S. In our scheme the secret S is embedded
in all the commitments c_i = g^{a_i} h^{b_i}, and the attacker therefore needs to compute all
of the broadcast information in order to break the scheme.
5. CONCLUSION
We introduced this scheme to be helpful in future works for anyone who is interested in
this domain of information security. Our future work will study the domain related to
secret sharing schemes based on integer representation, such as schemes that do not require
a dealer (dealer-free secret sharing schemes). In addition, we will develop our method
further in order to build the verification scheme.
In this paper, we have used the integer representation in order to construct the polynomial
that encodes the integer which is the secret S. Our scheme, presented in section 3, is based
on Shamir's scheme, which generates the shares and then distributes them to the
participants, and on the information dispersal scheme, in order to spread the secret S over
all the terms of the polynomial f(x) constructed by our scheme, the secret being f(g) = S.
We have seen that our scheme is more secure than Shamir's scheme when the two were
compared in section 4.

6. REFERENCES
1. A. Shamir, "How to share a secret". Communications of the ACM, 1979.
2. G. R. Blakley." Safeguarding cryptographic keys". In National Computer Conference, 1979,
volume 48 of American Federation of Information Processing Societies Proceedings, pages,
1979.
3. B. Chor, S. Goldwasser, S. Micali, B. Awerbuch, Verifiable secret sharing and achieving
simultaneity in the presence of faults, Proceedings of the 26th IEEE Symposium on Foundations
of Computer Science, 21–23 October, Oregon, Portland, IEEE Computer Society, 1985, pp. 383–
395.
4. Feldman, P., 1987. A practical scheme for non-interactive verifiable secret sharing. In:
Proceedings of the 28th IEEE Symposium on Foundations of Computer Science, 27–29 October.
IEEE Computer Society, Los Angeles, California, pp. 427–437.
5. Pedersen, T.P., 1992. Non-interactive and information-theoretic secure verifiable secret sharing.
In: Advances in Cryptology-CRYPTO’91, LNCS, vol. 576. Springer- Verlag, Berlin, pp. 129–
140.
6. M. O. Rabin." Efficient dispersal of information for security, load balancing, and fault tolerance".
Journal of ACM, 36(2):335–348, 1989.
7. H. Krawczyk. , Secret sharing made short. In D. R. Stinson, editor, Advances in Cryptology -
CRYPTO ’93, volume 773 of Lecture Notes in Computer Science, pages 136–146. Springer-
Verlag, 1994.
8. Johannes A. Buchmann, "Introduction to Cryptography". Department of Computer Science,
Technical University, Germany, ISBN 0-387-95034-6, Springer-Verlag New York, Heidelberg,
pages 2-6, September 2000.
9. Delfs, H., Knebl, H., "Introduction to Cryptography: Principles and Applications", second edition,
Georg-Simon-Ohm University of Applied Sciences Nürnberg, Department of Computer Science,
Germany, ISBN-13 978-3-540-49243-6, Springer Berlin Heidelberg New York, pages 35-37,
2007.
10. Sorin Iftene. "Secret Sharing Schemes with Applications in Security Protocols". Sci. Ann. Cuza
Univ.17-18, 2007.

THE IMPACT OF 3D PRINTING TECHNOLOGY ON THE SOCIETY AND ECONOMY

Alexandru Pîrjan1
Dana-Mihaela Petroşanu2
ABSTRACT
In this paper, we analyse the evolution of 3D printing technology, its applications and
numerous social, economic, geopolitical, security and environmental consequences. We
compare some of the most significant existing 3D printing solutions, taking into account
the acquisition price, the technical specifications, their main advantages and limitations.
Just as happened in past decades with personal computers and the Internet, the impact
of 3D printing will gradually increase in the future, leading to significant transformations
and redefining our everyday life, economy and society.

Keywords: 3-D printing technology, costs, impact.


1. INTRODUCTION
In 1981, Hideo Kodama of the Nagoya Municipal Industrial Research Institute (Nagoya,
Japan) studied and published for the first time the manufacturing of a printed solid
model, the starting point of “additive manufacturing”, “rapid prototyping” or “3D
printing technology” [1]. In the following decades, this technology has been substantially
improved and has evolved into a useful tool for researchers, manufacturers, designers,
engineers and scientists.
As the term suggests, “additive manufacturing” is based on creating materials and objects,
starting from a digital model, using an additive process of layering, in a sequential manner.
Most of the traditional manufacturing processes are based on subtractive techniques:
starting from an object having an initial shape, the material is removed (cut, drilled) until
the desired shape is obtained. Unlike the above-mentioned technique, the 3D printing is
based on adding successive material layers in order to obtain the desired shape.
Since 1984, when the first 3D printer was designed and realized by Charles W. Hull from
3D Systems Corp. [2], the technology has evolved and these machines have become more
and more useful, while their price points lowered, thus becoming more affordable.
Nowadays, rapid prototyping has a wide range of applications in various fields of human
activity: research, engineering, medical industry, military, construction, architecture,
fashion, education, computer industry and many others.
The 3D printing technology consists of three main phases - the modelling, the printing and
the finishing of the product:

1 PhD, Faculty of Computer Science for Business Management, Romanian-American University, 1B, Expozitiei Blvd., district 1, code 012101, Bucharest, Romania, E-mail: alex@pirjan.com
2 PhD, Department of Mathematics-Informatics I, University Politehnica of Bucharest, 313, Splaiul Independentei, district 6, code 060042, Bucharest, Romania, E-mail: danap@mathem.pub.ro


 In the modelling phase, in order to obtain the printing model, the machine uses
virtual blueprints of the object and processes them in a series of thin cross-sections
that are being used successively. The virtual model is identical to the physical one.
 In the printing phase, the 3D printer reads the design (consisting of cross-sections)
and deposits the layers of material, in order to build the product. Each layer, based
on a virtual cross section, fuses with the previous ones and, finally, after printing
all these layers, the desired object has been obtained. Through this technique, one
can create different objects of various shapes, built from a variety of materials
(thermoplastic, metal, powder, ceramic, paper, photopolymer, liquid).
 The final phase consists of finishing the product. In many cases, in order to
obtain an increased precision, it is more advantageous to print the object at a larger
size than the final desired one, using a standard resolution, and then to remove the
supplementary material using a subtractive process at a higher resolution.

Depending on the employed manufacturing technique, 3D printing can offer additional
improvements. Thus, in the printing process, one can use multiple materials in
manufacturing different parts of the same object or one can use multiple colours. If
necessary, when printing the objects, one can use certain supports that are being removed
or dissolved when finishing the product. Taking into account the importance of the 3D
printing technology, we have decided to analyse further the main available additive
processes, the advantages and limitations of this technology, to compare the most
significant existing 3D printing solutions. We have also decided to study the usefulness, the
implications and the future evolution that the 3D printing technology brings into the modern
society, economy and everyday life.
2. THE ADDITIVE TECHNOLOGY AND THE MATERIALS USED IN
RAPID PROTOTYPING
The existing 3D printers use a wide range of technologies and materials in order to print
objects starting from a digital design. In the following, we present the main 3D printing
technologies and materials:
 The inkjet head and the powder bed 3D printers sprinkle an initial thin layer of
powder with fine binder droplets. Then, a roller is used in order to spread and
compact a fresh layer of powder. In the end, an object consisting in powder layers
bound together is obtained. If it is necessary, the used binder could be dyed in order
to obtain a coloured final object. After the printing, one can also use treatments for
improving the material’s strength (with super glue) or for reducing the colour
fading (with UV protectants). The final object is made from several different
constituent materials, having different chemical and physical properties, thus being
a composite material.
 The powder bed and inkjet head 3D printers are also useful in creating objects using
ceramic powder. The printed objects are then subjected to heat treatment for drying
and glazing, thus improving the material’s strength and aspect.
 Stereolithography (SLA) is an additive manufacturing technology that uses a
liquid photopolymer (resin) and an ultraviolet laser light, in order to obtain
successive objects’ layers. In order to obtain a layer, the laser draws on the resin a

361
JOURNAL OF INFORMATION SYSTEMS & OPERATIONS MANAGEMENT

2D path, thus obtaining a cross section of the final object. The obtained layer is
then exposed to ultraviolet laser light, curing and solidifying the layer with the
previous ones. Through this technology, one can obtain very smooth final objects.
 Another 3D printing technology, the selective laser sintering (SLS) melts and fuses
fine particles layers of powdered materials like plastic (often nylon) or metal, using
a powerful laser beam. The laser crosses over a powder surface and after the
completion of a printed layer, the plastic powder is spread over. When the laser
crosses over this new layer, the powder particles melt, interfusing each other and
also with the previous layer. The SLS technology is useful when printing complex
objects having fine details.
 The photopolymer jetting technology spreads droplets of resin, using small jets
dispersed by movable heads similar to those of an inkjet printer. After spreading
the droplets, the resin is solidified using an ultraviolet lamp. If it is necessary, one
may also print a support material that surrounds the droplets and is removed in the
final steps of the printing. This technology is useful when one has to obtain models
having very fine details or smooth surfaces, using various materials.
 The direct metal laser sintering (DMLS) 3D printing technology uses a laser in
order to fuse particles of metal powder (e.g. titanium). This technology is similar
to the above-mentioned SLS technology that prints plastic materials. The DMLS
has the disadvantage of high costs and requires specific design guidelines.
 Another 3D printing method consists in the direct metal printing technique that
generates metallic models using powder particles (mainly stainless steel). The
method consists in several steps. In the first step, the designed object is printed,
using an inkjet process, in a bed of fine stainless steel powder. The plastic binder
is burned out using a heat treatment, while the steel particles are fused together.
Then, the empty spaces within the model are filled using molten bronze. In the end,
the final printed product, consisting of a porous steel material with bronze-filled
porosities, can be plated with gold (or other metals).
 The indirect printing methods are those based on creating models or molds that can
be further used in creating metal objects, based on traditional techniques.
3. THE MAIN ADVANTAGES AND LIMITATIONS OF THE 3D PRINTING
TECHNOLOGY
In order to analyse the impact of 3D printing technology on the society and economy, in
the following we study its main advantages and limitations.
The most important advantages offered by 3D printing are:
 Additive manufacturing offers the possibility of creating, in a short timeframe,
complex 3D objects, with fine details, from different materials. Through 3D
printing, the customer has the possibility to create complex objects and shapes that
are impossible to be obtained through any other existing technology.
 A very important advantage of creating objects using 3D printing technology
instead of traditional manufacturing methods is the waste reduction. As the
construction material is added layer after layer, the waste is almost zero and, during
production, only the material needed for obtaining the final object is used.
In the traditional manufacturing processes, based on subtractive techniques, the
final product is manufactured through cutting or drilling an initial object, thus
leading to a substantial loss of material.
 One can easily print small movable parts of the final object.
 The product’s digital design may be sent over the Internet at the customer’s
location, where he can print it.
 The customers also have the possibility of printing items in remote locations, taking
into account the fact that the Internet is nowadays widespread and in some countries is
even a legal right of the citizens [3].
 Some of the materials used in 3D printing have improved properties in terms of
strength and provide a wide range of superior finishing details, compared to the
materials used when manufacturing objects through traditional technologies.
 As the additive manufacturing is a computer-controlled technique, it reduces the
necessary amount of human interaction and requires a low level of expertise for the
operator. Furthermore, the process ensures that the final product represents a
perfect 3D version of the digital design, excluding the errors that could have
appeared when using other existing technologies. As AM reduces the waste in
the manufacturing process, it could help solve tough problems of humanity
such as the consumption of construction material resources, energy
consumption and environmental protection.
 Using the 3D printing technology one can produce complex designs useful in
various fields: fashion, industry, arts, jewellery, computer industry,
telecommunications, transports etc. AM has led to amazing advances in medicine,
being capable of saving lives, lowering health’s care costs and improving the
human life’s quality. For example, researchers have managed to create a 3D printer
useful in creating prosthetics, parts of the human body, organs and tissues. First, a
3D model of the final object is created using a scanner (computed tomography or
magnetic resonance imaging). Using 3D shapes, the organic material is printed and
afterwards it is implanted in the patient’s body. The researchers from the Wake Forest
University’s Institute for Regenerative Medicine (North Carolina) have
successfully created a reduced-size functional kidney. Another interesting case is
that of an eagle’s beak which, after being destroyed by a poacher, has been
successfully replaced by the researchers of the Kinetic Engineering Group with a
prosthetic one, built from titanium using a 3D printer. A very useful application of
the 3D printing is the Wilmington Robotic Exoskeleton, created using metal and
rubber bands. This device is useful for helping patients (especially children) having
underdeveloped arms, as it offers them the possibility of performing ample arm
motions, allowing personal customization and fine-tuning. Another important
innovation that employs the additive manufacturing was developed by the
Organovo Company that has built a 3D printer able to print tissues. One of their
most important achievements was to print in 30 minutes a blood vessel having the
length of 5 cm and the diameter of 1 mm. The Bespoke Innovations Company
realizes custom surfaces that cover prosthetic legs, thus obtaining a natural shape
and aspect. Their technology uses a 3D scanner and based on the obtained images,
the covering is designed and printed using various materials. Another amazing
application of the additive manufacturing was developed by LayerWise company
from Belgium, that has replaced a woman’s mandible that had to be removed (due
to severe illness), with a printed one. In order to obtain the digital model, the
company has used a computed tomography of the patient and then has printed the
replacement using titanium and a ceramic coating. Researchers from the Glasgow
University have synthesized custom laboratory equipment on a smaller scale. Using
specialized Computer Aided Design (CAD) software and a 3D printer, researchers
were able to print customized equipment using a polymer gel along with chemical
reagents. This can be particularly useful in the pharmacy industry.
 The advertising of the 3D printing devices could be efficiently achieved using the
World Wide Web, as these devices are targeted to tech-savvy users and thus, it is
not necessary to conduct expensive marketing campaigns (on radio, TV, etc.) [4].

Like any other technology, 3D printing has a series of disadvantages and limitations that
currently obstruct a large-scale expansion of this technology. The main disadvantages and
limitations of 3D printing are:
 The lack of legislation and regulations regarding 3D printing. For example,
guns (and this has already happened), weapons, parts for aircraft, military parts,
counterfeit parts for commercial or defence operations (designed for sabotage),
drugs or chemical weapons can be printed. In addition, all of these could
be achieved with ease, at reduced costs and very fast. Moreover, weapons could be
very easily disguised in non-hazardous products. Thus, 3D printing can become a
potential danger when used by criminals or counterfeiters. Nowadays, the
lawmakers are particularly interested in regulating firearms and, more generally,
the 3D printed products, but not the 3D manufacturing devices. Even if many
politicians promote, support and adhere to the previously mentioned strategy, another
opinion expressed by politicians is that the declaration and registration of 3D
printing devices should become mandatory and that the dissemination of blueprints
should be restricted. Some of the 3D printers’ manufacturers took mitigating these
risks into account and therefore introduced software limitations on the items that
can be printed.
 Another main disadvantage of 3D printers is the fact that children could print out
dangerous items. In order to prevent this, one can employ software limitations and
parental control.
 A major disadvantage of 3D printing is its high cost. At the actual price of the
device and materials, the 3D printing is the best solution when one needs to print a
small number of complex objects, but it becomes expensive to print a large number
of simple objects, when compared to traditional manufacturing techniques. In
addition, the 3D printing becomes unprofitable when printing large size objects.
The cost of a 3D printed large object is significantly higher than if it had been
traditionally manufactured.
 Due to the material costs (especially regarding the moulds), additive
manufacturing is not always the best technical choice, most of the moulds’
materials being degradable over time and sensitive to outdoor exposure.
 Sometimes, the 3D printed objects’ build quality is lower than if they had been
traditionally manufactured. Although additive manufacturing can print objects

having intricate designs, the final product can sometimes have flaws that might
affect not only the object’s design, but also its functionality and resistance.
In addition to the foregoing, another important factor that must be taken into account when
analysing the influence of 3D printing on human life is the impact that the wide spreading
of this technology has on the global economy and on the workforce requirements. In this
respect, the most important issues worth considering are:
 The possibility of manufacturing products on demand and at different locations
than when using traditional techniques, could reduce actual economic imbalances
and could modify the current hierarchy of the economic powers.
 As the additive manufacturing is a computer-controlled technique, it reduces the
necessary amount of human labour and thus it could lead to significant reductions
in work force requirements regarding the production, product delivery and
manufacturing jobs for export industries, as the AM technique allows
manufacturing products on demand and closer to the consumer’s location.
 On the other hand, the 3D printing technology’s development and spreading will
result in creating new professions, jobs and industries related to: the production of
the 3D printers, supplies, materials and printing cartridges; the products’
engineering and design; the software industry. Moreover, the AM technology could
use cheap recycled materials. Thus, the costs of expensive imports could be
reduced.
 The additive manufacturing’s development will also affect the import of the
construction materials, as it uses different materials than other techniques, some of
which could be locally supplied, without imports.
4. A SURVEY OF THE MOST SIGNIFICANT EXISTING 3D PRINTING
SOLUTIONS
In recent years, 3D printer devices became cheaper, better, more useful and important with
each day that passed. Their growing importance and the features offered have made them
increasingly widespread. From a historical point of view, since the appearance
of the RepRap 3D printer in 2007, a real 3D printer revolution has started. The RepRap
series was followed by the MakerBot cupcake CNC kit in 2009, Printrbot in 2011 and many
other devices satisfying the customers’ growing interest.
The most important criteria that a customer has to take into account when choosing a 3D printer are:
• The printer and its supplies should be affordable for frequent use.
• The printer’s hardware and software should be accessible and user-friendly, without requiring advanced software training.
• The printer should produce objects that meet the customer’s requirements and needs, within a short timeframe.
• The device’s size must fit the customer’s space, whether a large enterprise floor or a limited office space.

In the following, we present a brief historical survey of the most popular printers that are still available at relatively low prices [5].

The RepRap 3D printer series, developed in 2007 at the University of Bath by Dr. Adrian Bowyer, is a real milestone in the history of 3D printing devices. The name RepRap is an abbreviation of “replicating rapid prototyper”.
• Darwin, the first RepRap model, was designed to print its own parts in order to replicate itself. This model started a real revolution in the 3D printing domain.
• Mendel, the second generation of RepRap, was launched in 2009. In 2010, Josef Prusa released the Prusa Mendel, an improved model at an affordable price that is still available on the market today. This low-cost device prints relatively fast and produces objects of up to 200 mm × 200 mm × 110 mm.
• Dr. Adrian Bowyer and Jean-Marc Giacalone released a smaller and cheaper version of the Mendel, the RepRap Huxley, in 2010. This model has a reduced size and improved portability (being one of the smallest printers available on the market), an advantageous price, greater precision and fast printing, but it can only print small objects of up to 140 mm × 140 mm × 110 mm.

The Box Bots represent a category of 3D printing devices with a common characteristic: they are built using plywood frames obtained by laser-cutting panels. The Box Bots 3D printers are easier to calibrate and more precise than the previous models. In the following, we describe three models from this category, produced by different companies.
• The Cupcake CNC, launched by MakerBot Industries in 2009, is a low-cost device that uses a laser-cut plywood frame. The second generation of 3D printing devices produced by MakerBot Industries was the Thing-O-Matic, launched in 2010, an improved model that has the smallest build size of all the devices discussed here, the maximum printing size being 120 mm × 120 mm × 115 mm. The third generation of MakerBot’s 3D printers, the Replicator, launched in 2012, offers a larger maximum printing size of 225 mm × 145 mm × 150 mm. The standard Replicator comes assembled by the manufacturer, unlike the previous products, the Thing-O-Matic and the Cupcake, which are delivered to the buyer as kits to be assembled at home. The Replicator offers the possibility of using dual extruders, being the only low-cost 3D printing device that includes this as a standard option; it is therefore possible to print objects using two different filament colours. It also has the advantage of fast printing and does not require initial calibration.
• Another current 3D printing device is the MakerGear Mosaic M1, a precision machine that is easy to assemble, use and calibrate, with long-term reliability. It is characterized by a print bed that moves along two axes and lowers along the third as the object is printed. It offers a printing size of up to 127 mm × 127 mm × 127 mm, uses precision linear guides and rails along the axes, and has Teflon-coated screws.
• One of the fastest 3D printers on the market is the Ultimaker, which offers a very large printing volume: 210 mm × 210 mm × 220 mm. Unlike other 3D printing solutions, it does not include a heated print bed (which limits the printing materials and types of prints), but uses a thermal printhead moving at very high speed along two axes. It offers high accuracy and prints quickly at excellent quality,
offering the largest available printing volume and requiring little maintenance, but it is very expensive and hard to assemble.

The RepStrap 3D printing machines were developed in order to overcome the very high cost of printing the components of the first RepRap model, Darwin. In fact, these machines were designed solely for printing RepRap printer parts. The only model in this series for which the design and hardware are still available today is the whiteAnt CNC. It is constructed from basic materials such as plywood, has simple hardware and offers a build area of 160 mm × 190 mm × 125 mm, but it has some major disadvantages: it takes a long time to calibrate and runs slowly.
New devices incorporating new designs and techniques are being developed right now, offering customers many potential improvements. Day after day, 3D printing technology evolves along with the growing range of applications for three-dimensionally printed objects. Some of the newest 3D printers are presented below.
• Being built using aluminium extrusions, the new AO-100 from Aleph Objects and the MendelMax designed by Maxbots offer higher structural rigidity and are easier to assemble. Even though this new material raises the printers’ price, these printers ensure fast printing speeds at an increased level of precision. While the AO-100 has a build area of 200 mm × 190 mm × 100 mm, the MendelMax offers a maximum print area of 250 mm × 250 mm × 200 mm. The AO-100 comes preassembled, while the MendelMax is available as a kit. Both models offer an increased print bed size compared to the standard Prusa Mendel, improved precision and a symmetrical, more rigid frame, but they have the major disadvantage of being more expensive than the standard Prusa Mendel.
• Belonging to the latest generation of 3D printing devices, the RepRap Wallace and the Printrbot represent a real milestone in this field, as they have an almost frameless design and a reduced number of parts. They use two motors along one axis and thus do not require a frame, unlike other printers. The maximum build volume is 150 mm × 150 mm × 150 mm for the entry-level Printrbot and 200 mm × 200 mm × 200 mm for the standard RepRap Wallace model. The advantages of these models are their reduced costs, the possibility of printing a variety of shapes and sizes, and the fact that they can be built very quickly. A major limitation worth mentioning is that, while the Printrbot is sold as a kit, the other model is available only as separate parts sold by various merchants, plus a few components that need to be printed using other three-dimensional printing devices.

As 3D printing technology evolves at a fast pace, new models and features emerge day by day, and it becomes difficult to choose a device when one needs to buy one. Obviously, in making this decision, the essential aspects to keep in mind are the purpose for which the printer will be used and the economic factors (the printer’s cost, the printing material’s cost and the energy cost).
Comparing the technical specifications of the above-mentioned devices can help in this matter; therefore, in the following, we present a comparison taking into account the

following details: the maximum print volume, the printer’s resolution, the print speed and an approximation of the printer’s current price (as a kit).
First, we compare the maximum print volumes offered by the analysed 3D printers. For each of the devices mentioned above, we have considered the maximum actual printing size. The analysis shows that the Ultimaker offers the largest volume of them all (Figure 1).

Figure 1. The print volumes’ comparison


Then, we compare the maximum resolutions offered by the above-mentioned 3D printers. Even if the best criterion for analysing a printer’s quality is to view and physically compare the printed objects, we had to limit ourselves to the maximum printing resolutions given in the technical specifications. We have found that the best resolution is offered by the Ultimaker, while the Printrbot does not provide detailed prints. In this case, lower is better, as a 3D printer provides the best printing quality when it is able to print fine, accurate details (Figure 2).

Figure 2. The resolutions’ comparison


In the following, we compare the maximum print speeds offered by the analysed 3D printers. We have found that the fastest of all the analysed printers is the Ultimaker, followed by the Aleph Objects AO-100 (Figure 3).

Figure 3. The print speeds’ comparison


Then, we compare the analysed 3D printers’ prices, as available at specialized online stores in October 2013 (except for the whiteAnt CNC, whose price is not available). Analysing the devices’ costs, we have found that the cheapest choices are the RepRap Wallace and the Printrbot, while the most expensive device is the MakerBot Replicator (Figure 4).

Figure 4. The prices’ comparison


Analysing the materials used, one can note that the MakerBot Replicator, whiteAnt CNC and Printrbot work with 3 mm ABS (acrylonitrile butadiene styrene) filament, while the RepRap Mendel, RepRap Huxley, MakerGear Mosaic, Ultimaker and Aleph Objects AO-100 use 1.75 mm PLA (polylactic acid) filament. Both materials are polymers (plastics). ABS is made from oil-based resources, has a higher melting point and is stronger and harder than PLA. PLA is made from plant-based resources (corn starch or sugar cane) and is biodegradable. Both ABS and PLA have advantages and limitations, and choosing a printer according to the material used can be a good idea, as they have different properties: ABS has a longer lifespan and a higher melting point, while PLA is more malleable, easier to use, looks better and is suitable for creating artistic 3D objects.
5. CONCLUSIONS
In this paper, we have presented and analysed the impact of 3D printing technology on the
society and economy. After presenting, in the introduction, a brief history of 3D printing,
in the second section we have depicted the additive technology and the materials used in
rapid prototyping. In the third section, we have highlighted the main advantages and
limitations of the 3D printing technology, while in the fourth section we have made a survey
of the most significant existing 3D printing solutions. We have compared these 3D printing
solutions, taking into account their technical specifications and prices. One can conclude that the importance and social impact of 3D printing technology increase day after day and significantly influence human life, the economy and modern society.
6. REFERENCES
[1] Kodama H., Automatic Method for Fabricating a Three Dimensional Plastic Model with Photo
Hardening Polymer, Rev Sci Instrum, pp.1770-1773, 1981.
[2] http://www.pcmag.com/slideshow_viewer/0,3253,l=293816&a=289174&po=1,00.asp
[3] Tabusca Silvia, The Internet access as a fundamental right, Journal of Information Systems and
Operations Management, Vol.4, No.2/2010, Universitary Publishing House, Bucuresti, ISSN
1843-4711.
[4] Tabusca A., Electronic Online Advertising – The “Economic Crisis” is just the pretext of the
fall, Journal of Information Systems and Operations Management, Vol. 6, No.2/2012, pp. 410-
418, Universitary Publishing House, Bucuresti, ISSN 1843-4711.
[5] Evans B., Practical 3D Printers: The Science and Art of 3D Printing, Apress Publisher, 2012.

CONSIDERATIONS REGARDING THE INTERNET PURCHASES BY INDIVIDUALS IN ROMANIA AND EUROPE
Marian Zaharia1
Daniela Enachescu 2
ABSTRACT
The Internet and e-commerce have witnessed continuous development in recent decades, the share of individuals who use this method to purchase goods and services exceeding 60 percent of the total population in some European countries (Norway 71%, United Kingdom 67%, the Netherlands and Sweden 66%). Moreover, in some age groups this percentage exceeds 80 percent. Unfortunately, the share of individuals in Romania who use the Internet for making purchases barely exceeds 5 percent; from this viewpoint, Romania is detached in last place in the EU. Based on an analysis of the evolution of the percentage of individuals from EU countries who make Internet purchases, this paper presents the situation in Romania, compared to 10 European countries, in terms of the percentage of individuals who made Internet purchases in the last three months and in the last year.

Key words: Romania, E-Commerce, internet purchases


JEL classification: C19, O52
Abbreviations:
• IPI3 – Internet Purchases by Individuals in the last three months
• IPI12 – Internet Purchases by Individuals in the last year
1. INTRODUCTION
Together with the production of goods, trade has over time been one of the most important human activities, initially serving the purchase of basic necessities (e.g. trading in salt) and later, as human society developed, taking different forms, from the exchange of products (barter) to bills of exchange.
Since ancient times, the development of trade has been linked to the constant development of means of transport and communication, each encouraging the other. An important moment that facilitated the emergence of new forms of commerce was the appearance and development of electronic computers in the mid 20th century. Building on this, the development of information technology (IT) enabled the introduction of e-mail in 1972, which became accessible for commercial use in the late 1980s, control over Internet access being held until 1990 by the U.S. Department of Defense.
After 1990, the Internet developed rapidly. In 1992 the Internet Society was founded by the private sector, and after the development of the World Wide Web (WWW) service the Internet became ubiquitous. Developing a coherent information society based on IT&C is one of the main lines of action in the EU. Directive 98/34/EC and Directive 98/84/EC defined information society services as: ‘Any service normally provided for remuneration, at a

1 “Petrol-Gaze” University/Modeling, Economic Analysis and Statistics Department, Ploiesti, Romania,


marianzaharia53@gmail.com
2 “Petrol-Gaze” University/Modeling, Economic Analysis and Statistics Department, Ploiesti, Romania,
denachescu22@yahoo.com
distance, by means of electronic equipment for the processing (including digital compression) and storage of data, and at the individual request of a recipient of a service’.
Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’) specifies that the development of electronic commerce within the information society offers significant employment opportunities in the Community, particularly in small and medium-sized enterprises, that it will stimulate economic growth and investment in innovation by European companies, and that it can also enhance the competitiveness of European industry, provided that everyone has access to the Internet.
In the context of the application of the E-commerce Directive, the Commission commissioned two studies in 2007, one on the Economic Impact of the Electronic Commerce Directive and the other on the Liability of Internet Intermediaries, in order to improve and accelerate the development of electronic commerce in the EU. Based on these considerations, the following chapters carry out an analysis of how e-commerce has evolved, in particular Internet purchases by individuals in Romania and 10 other European countries. The primary data were taken from the EUROSTAT database (dataset isoc_ec_ibuy) and underlie the graphs and figures presented in the paper. To carry out the analysis and support the conclusions, statistical processing (Tomescu-Dumitrescu C, Bălăcescu A, 2008) and econometric processing (Caraiani P et al, 2010; Gogonea R.M, Zaharia M, 2008) were performed using the EViews and Excel software (Oprea C, Zaharia M, 2011).
2. EUROPEAN TRENDS IN INTERNET PURCHASES BY INDIVIDUALS IN
THE LAST THREE MONTHS
In the EU, the share of individuals who make purchases on the Internet increased continuously between 2007 and 2012. For example, the share of those who used the Internet for this purpose in the last 3 months increased from 23% in 2007 to 35% in 2012, the most significant increase occurring in 2008-2009 (from 24% to 28%). Although the economic crisis that started in 2009 had major implications for most sectors, the number of individuals making Internet purchases continued to rise. This shows a change in individuals’ behavior and a growing role of the Internet in the household activities of the population in EU countries.
On the other hand, the large differences in development between EU countries and the gaps between their levels of development lead us to conclude that the average IPI3 values recorded at EU level are not very informative, due to the high dispersion of the indicator’s values across countries. This conclusion is reinforced by the evolution of IPI3 in 11 European countries, shown in Figure 1.

Figure 1 Evolution of the proportion of individuals, from 11 European countries, who have made Internet purchases in the last 3 months
As can be seen, from the point of view of this indicator the 11 states studied may be divided into four groups. The first group, with IPI3 values well above the EU average, includes Denmark, Sweden and the Netherlands. In Denmark, over this period, the indicator increased by 17 percentage points to reach 60% in 2012. Note that the increase was linear, the economic and political developments of the period having no influence on IPI3 values. In Sweden, although the IPI3 value recorded in 2012 was 58%, 2 percentage points lower than Denmark’s, after a small decline of 1 percentage point in 2008 it had the highest increase among all countries surveyed (20 percentage points). A somewhat different trend of IPI3 was registered in the Netherlands: starting from 43% in 2007 and reaching a maximum of 53% in 2011, the IPI3 value decreased to 52% in 2012, still a value higher than the EU average.
The second group includes France, Austria and Belgium. This is the group of countries with IPI3 developments around the EU average. Note that the number of percentage points by which IPI3 increased in these countries (17 in France, 18 in Belgium and 13 in Austria) is higher than that recorded at EU level (12 points). Within this group, Belgium recorded the highest growth between 2008 and 2009 (9 percentage points), passing, as can be seen from Figure 1, from the third value group into the second.
The third group comprises the Czech Republic, Greece and Hungary, countries in which, although the IPI3 values are below the EU average, significant increases were recorded over the analyzed period (between 8 and 11 percentage points, IPI3 in 2012 being 18% in the Czech Republic, 16% in Greece and 15% in Hungary). Note that these values are about half the IPI3 value recorded in 2012 in the EU and that the growth rate is below the EU’s.
The fourth group includes Bulgaria and Romania. In these countries, over the analyzed period, the increase in IPI3 was 4 percentage points (from 2% in 2007 to 6% in 2012) in Bulgaria and only 1 percentage point (from 2% in 2007 to 3% in 2012) in Romania. The very low (disastrous) IPI3 value registered in Romania (11.6 times lower than the EU average) and its slow pace of growth highlight the extremely low level of knowledge and use of IT&C by individuals, and the low level of development in general.

A very important conclusion that emerges from the above is that the EU is witnessing a divergent evolution of individuals’ use of the Internet for purchasing goods and services. The gap between the countries where Internet purchases by individuals are a mass phenomenon and those where this method of procurement is little used increases continuously: if in 2007 the gap between the countries analyzed was 42 percentage points, in 2012 it reached 57 points.
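As an illustration of how such a divergence measure can be computed, the short Python sketch below takes a year-by-country table of IPI3 shares and reports the max–min gap per year. The 2012 values hard-coded here are only the ones quoted above (the full country set and the other years would have to be loaded from the Eurostat isoc_ec_ibuy dataset), so the snippet is a minimal sketch rather than the authors’ actual processing in EViews or Excel.

```python
# Illustrative sketch: max-min divergence (in percentage points) of the share
# of individuals making Internet purchases, per year. The 2012 values are the
# ones quoted in the text; a full analysis would load the complete
# isoc_ec_ibuy series from Eurostat.
ipi3 = {
    2012: {"Denmark": 60, "Sweden": 58, "Netherlands": 52,
           "Czech Republic": 18, "Greece": 16, "Hungary": 15,
           "Bulgaria": 6, "Romania": 3},
}

for year, shares in sorted(ipi3.items()):
    gap = max(shares.values()) - min(shares.values())
    print(f"{year}: gap = {gap} percentage points")   # 2012: gap = 57
```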
3. EUROPEAN TRENDS IN INTERNET PURCHASES BY INDIVIDUALS IN
THE LAST YEAR
If we resume the analysis and relax the time frame of individuals’ Internet purchases from the last 3 months to the last year, for the 11 countries analyzed we obtain the developments presented in Figure 2. Although somewhat similar to the IPI3 evolution, the evolution of the share of individuals who made Internet purchases in the last year (IPI12) presents some particularities.

Figure 2 Evolution of the proportion of individuals, from 11 European countries, who have made
Internet purchases in the last year
At EU level, the IPI12 increase over the analyzed period is more pronounced than that of IPI3 (14 percentage points, compared to 12 percentage points for IPI3). While groups one and four remain unchanged, in groups two and three there are some changes: France and the Czech Republic tend to pass into the higher value groups.
In the first group, Sweden recorded the highest IPI12 value of all countries surveyed (74%), 16 percentage points more than its IPI3 value and only one percentage point more than the IPI12 value registered in the same year in Denmark. In both countries, the increases in IPI12 are above the EU average. In the case of the Netherlands, 2012 brought a decline in the IPI12 value and an increasing gap between it and the other two countries, from about 1 percentage point in 2011 to 8 percentage points in 2012, so that over the entire period IPI12 increased by only 10 percentage points, versus 14 at EU level. If the trend registered in the Netherlands continues, its IPI12 will approach the EU average, but from above (decreasing).
In the second group, in Austria, IPI12 follows the evolution of the European average. A significant development is registered in Belgium: although in 2007 its IPI12 was 9
percentage points below the EU average, in 2012 it reached 45% (1 percentage point above the EU average). A remarkable evolution is also registered in France. While in 2009, in terms of IPI12, it could still be considered part of the second group, after 2009 its IPI12 values depart from those of the other two countries in the group, France recording in 2012 an IPI12 value of 57%, 9 percentage points more than Austria and 12 percentage points more than Belgium.
The Czech Republic, Hungary and Greece, which in the case of IPI3 formed a tight group, show in the case of IPI12 developments that, although relatively parallel, are at distinct value levels, the IPI12 values recorded in 2012 being 32% in the Czech Republic, 25% in Hungary and 20% in Greece. Note that over the analyzed period, while in the Czech Republic IPI12 increased by 15 percentage points (1 percentage point above the EU average), in Greece the increase was only 12 percentage points.
The last value group retains, in the case of IPI12, the same countries, Bulgaria and Romania, where the values recorded in 2012 were 9% in Bulgaria (6 percentage points more than in 2007) and 5% in Romania (2 percentage points more than in 2007).
In conclusion, taking into account the evolution of IPI12 values recorded in the analyzed countries, in this case too we have a divergent trend. Thus, if in 2007 the gap between the countries of the first and the last value group was 53 percentage points (maximum 56% in Denmark and minimum 3% in Romania and Bulgaria), in 2012 it reached 69 percentage points (maximum 74% in Sweden and minimum 5% in Romania).
Romania is far from the average IPI3 and IPI12 values recorded at EU level, and the disparities continue to rise. In Romania, in 2012, 95% of individuals did not use the Internet even once a year for purchases of goods and services.
4. ROMANIA'S PLACE, BETWEEN THE COUNTRIES NEAR BORDERS,
REGARDING THE PERCENTAGE OF INTERNET PURCHASES BY
INDIVIDUALS
To further highlight the extent to which the Internet is used by individuals for the purchase of goods and services, in this chapter we briefly present Romania’s place among the nearby countries for which data were available. We chose for comparison Hungary, Bulgaria and Serbia (Romania’s neighbors), Croatia (the 28th member state of the EU) and Turkey (a possible future candidate). Since the latest data available for Turkey were for 2010, data for that year were used for the comparison. The results are shown in Figure 3.
Regarding individuals who made Internet purchases in the last three months, in 2010 Hungary was in first place, with an IPI3 of 10% of all individuals. Only 1 percentage point behind was Croatia (9%), followed by Turkey (4%) and Bulgaria and Serbia (3%). With an IPI3 value of only 2%, Romania is at the bottom of the list; its percentage of individuals who make Internet purchases is about 5 times lower than Hungary’s or Croatia’s, and half of Turkey’s. For individuals who made Internet purchases in the last year, the hierarchy is maintained: Hungary 18%, Croatia 14%, Turkey, Bulgaria and Serbia 5% and, last, Romania 4%.

Figure 3 Share of individuals from countries around Romania who made Internet purchases in 2010, in the last three months and in the last year (* for Serbia, data from 2009 were used)
The situation of individuals using the Internet for purchases of goods and services in Romania continues to diverge even from that of neighboring countries. For example, while in 2010 IPI3 in Romania was 2%, 5 times lower than in Hungary and 1.5 times lower than in Bulgaria (gaps of 8 and 1 percentage points respectively), in 2012 the gap reached 12 points (an increase of 50%) relative to Hungary and 2 percentage points relative to Bulgaria.

Regarding the IPI12 indicator, the lags are even greater, increasing from 14 percentage points in 2010 to 20 percentage points in 2012 relative to Hungary, and from 1 percentage point in 2010 to 4 percentage points in 2012 relative to Bulgaria.
5. CONCLUSIONS
The Internet and, implicitly, e-commerce have witnessed continuous development in recent decades. In the EU, the percentage of individuals who make Internet purchases increased continuously between 2007 and 2012. Although the economic crisis that started in 2009 had major implications for most sectors, the number of people making Internet purchases continued to rise, which shows the increasing interest in, and role of, the Internet in the everyday purchases of the EU population.
The big differences in development between EU countries and the discrepancies between their levels of development led to a large dispersion of the percentage of individuals who make Internet purchases across countries. Moreover, we still see a divergent evolution of individuals’ habits of using the Internet for the purchase of goods and services. The gap between the countries in which Internet purchases by individuals are a mass phenomenon and the countries in which this method of purchase is little used continuously increases: among the countries analyzed, the gap grew from 42 percentage points in 2007 to 57 percentage points in 2012 for those who made Internet purchases in the last three months, and from 53 percentage points in 2007 to 69 percentage points in 2012 for those who made Internet purchases in the last year.
Romania is far from the average values recorded at EU level with respect to individuals using the Internet for purchases, and the gap is continually increasing. Given that in 2012, in Romania, 95% of individuals had not used the Internet in the last year for purchases of goods and
services, sustained action will be required both to popularize this service and to develop the necessary infrastructure.
In recent years the situation of Internet use by individuals in Romania has even worsened compared to Bulgaria, with which it shared last place in the EU in this regard. Thus, compared to 2007, the year in which the two countries were on a par for both IPI3 (2%) and IPI12 (3%), in 2012 Bulgaria was significantly ahead of Romania, recording values practically almost double Romania’s for both the IPI3 indicator (6% in Bulgaria compared to 3% in Romania) and the IPI12 indicator (9% in Bulgaria compared to 5% in Romania).
6. REFERENCES
1. Caraiani P., Solomon O., Despa R., Din A.M., Econometrie, Editura Universitară,
Bucureşti, 2010
2. Enăchescu D, Zaharia M., Considerations on the Impact of Information Technology
Evolution in Romania in the Last Six Years, in Economic Insight – Trends and Challenges,
vol.II(LXV), No.2/2013, p82-89
3. Gogonea R.M., Zaharia M., Econometrie cu aplicaţii în comerţ - turism – servicii, Editura
Universitară, Bucureşti, 2008
4. Tomescu-Dumitrescu C., Bălăcescu A., Statistică economică aplicată, Editura
Universitaria, Craiova, 2008
5. *** Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000
on certain legal aspects of information society services, in particular electronic commerce,
in the Internal Market ('Directive on electronic commerce')
6. *** Study on the Economic Impact of the Electronic Commerce Directive – Appendix A, Final Report, 7 September 2007, Official Journal of the European Communities, 5.8.98;
7. *** DIRECTIVE 98/48/EC OF THE EUROPEAN PARLIAMENT AND OF THE
COUNCIL of 20 July 1998 amending Directive 98/34/EC laying down a procedure for the
provision of information in the field of technical standards and regulation
8. http://epp.eurostat.ec.europa.eu/portal/page/portal/statistics/search_database
9. http://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=isoc_ec_ibuy&lang=en
10. http://ec.europa.eu/internal_market/e-commerce/directive/index_en.htm

AN OVERVIEW OF DOCUMENT IMAGE ANALYSIS SYSTEMS

Andrei Tigora1
ABSTRACT
This paper presents an overview of Document Image Analysis Systems, their component modules, the approaches these modules use, as well as the uses of such applications. One of the main goals is to present some of the most important technologies and methods behind the Document Image Analysis domain in order to evaluate the best approach when dealing with real-world documents. The other main goal is to provide a foundation for those starting to build such complex software systems and to give an elaborate technical answer to the question: “How can physical documents be made available to a large number of people?”

Keywords: Document image analysis, character recognition, OCR, image data extraction, image export
1. INTRODUCTION
Scanning physical pages and storing them in a digital format is a means of making physical
data available to the digital world. It also solves the problems of storage, paper
deterioration, accessibility and many others. However, what it does not do is make it faster
to pinpoint the data that is relevant for a certain endeavor; giving the pixels structure and
extracting the underlying information into a computer comprehensible representation is the
job of Document Image Analysis (DIA) systems. When talking about DIA systems, the first
thing that comes to mind is Optical Character Recognition [2][16], both because it has been
in use for quite some time in the American postal system, and because it is something users
have come to expect when interacting even with non-text files. However, OCR is no more than a small component of much larger applications, and represents a narrow view of what a DIA system really is.
Document image analysis is generally aimed at synthetic images [1], images that contain
symbolic objects, such as book pages, postal addresses on letters, engineering drawings,
sheet music, maps and so on. However, it can also deal with naturally occurring patterns as
well, such as fingerprint analysis [16]. The other category of images is represented by
“natural” images, such as photographs, satellite images, X-rays - objects that may be
captured with standard or non-standard cameras. A picture of a Chinese traffic sign may be
part of both categories, depending on what is expected of it, but this should not represent a
problem for the rest of the discussion.
2. PROCESSING STEPS
First, it should be noted that there is no uniquely recognized taxonomy of the processing steps images undergo from their raw form to computer-comprehensible data. Nagy [1] proposed a comprehensive taxonomy that catalogues procedures

1Engineer, Jinny Software Romania SRL 13 C Pictor Ion Negulici, Bucharest, Romania,
andrei.tigora@jinnysoftware.com

based on the granularity of the entities they deal with. The five identified levels, starting
from the lowest granularity, are as follows:
• Pixel level
• Primitive level
• Structure level
• Document level
• Corpus level

Nagy’s taxonomy also differentiates according to the nature of the input images, separating those whose content is mostly text from those made up mostly of non-text graphics. A similar differentiation between character-dominated and graphics-dominated images is also reflected in [2]. However, for some input images, depending on the desired output, processing steps from both categories might be necessary. For example, a scanned image of a circuit diagram will need both OCR processing and specialized line detection, together with a circuit identification component.
The classification used throughout this paper is loosely based on Nagy’s taxonomy [1]. The
next sections will present an overview of each level, with a focus on text documents.
3. PIXEL LEVEL PROCESSING
Pixel level processing deals with image-to-image transformations, attempting to turn the given images into versions more appropriate for the following levels. Algorithms that fall within this category handle noise reduction, binarization, character segmentation, character scaling and vectorization.
Noise Reduction
Noise in images has multiple causes, such as degraded input documents, imperfect capture devices, improper use of those devices, and compression and transmission errors. Before using the images, these optical abnormalities have to be compensated for.
Noise reduction aims at increasing the signal to noise ratio and it is done not only for
images, but for all forms of signals. In the world of digital processing it plays an important
role not only in digital image analysis, but also in medicine [20] and astronomy [23]. Noise
reduction can be applied on bitonal [16], grayscale [21][22], as well as directly on color
images [19][20]. Most noise reduction mechanisms make assumptions concerning the noise
distribution pattern [18]. A Gaussian distribution is most widely assumed, due to its simplicity [19][5], but Poisson distributions [20] are also considered, as well as other non-Gaussian distributions [5].
The approaches used for noise reduction nowadays are quite varied [5] and a lot more
complex than those that were in use ten years ago [2]. The classic approach is that of
morphological methods, which rely on sequences of erosion and dilation transformations
combined with segmentation heuristics [63]. These are non-linear filters, and together with
linear filters, such as those based on Gaussian filters [17], fall into the spatial domain filtering
category. More recently, special attention has been given to transform domain filtering;
among these, the best results appear to be produced by wavelet domain filters [21].
Although they perform better than other methods for natural images, they do not seem as
successful with synthetic ones, which represent the focus of document image analysis
systems.
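As a concrete illustration of the spatial-domain filters mentioned above, the sketch below applies a median filter followed by a morphological opening to a bitonal document image using SciPy. It is a minimal example under the assumption of a binary input with foreground pixels set to 1; it does not reproduce the specific pipelines of the cited works.

```python
import numpy as np
from scipy import ndimage

def despeckle(binary_img: np.ndarray) -> np.ndarray:
    """Simple spatial-domain noise reduction for a bitonal document image.

    binary_img: 2-D array with foreground pixels set to 1 and background to 0.
    """
    # A 3x3 median filter removes isolated salt-and-pepper pixels.
    smoothed = ndimage.median_filter(binary_img, size=3)
    # A morphological opening (erosion followed by dilation) removes
    # remaining small foreground specks while largely preserving stroke shape.
    opened = ndimage.binary_opening(smoothed, structure=np.ones((2, 2)))
    return opened.astype(binary_img.dtype)
```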
Binarization
Just like noise reduction, binarization [64] is relevant for multiple types of images, not only
for document images. Through the process of binarization, the pixels of an image are
assigned to two categories: foreground and background. This is perhaps the most radical
transformation an image undergoes, due to the significant information loss; yet, for a text
image, this simplification should not make the image lose any of its value, as characters
will become foreground and the rest of the image background.
The most used algorithms are thresholding-based [24][25][26]; thresholding methods compute a global or local value (threshold) [64][65] which is used as a delimiter between black and white pixels. Binarization and noise reduction can be combined in a single step in order to better classify what is foreground and what is background [62][63]. A different approach is used by dithering algorithms, either halftone [27] or ordered [28]; these attempt to reproduce the color densities within a given boundary using only a predefined set of color values. The last type of binarization algorithm is the error diffusion technique [29], which aims to minimize the representation error across the image through approximation and propagation of the color difference between neighboring pixels.
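To make the global thresholding idea concrete, the following NumPy sketch implements an Otsu-style threshold, one member of the classic thresholding family referenced above; it is illustrative only, assumes an 8-bit grayscale input with dark text on light paper, and does not reproduce the specific methods of [24][25][26].

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the global threshold maximizing between-class variance (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_t, best_var = t, between
    return best_t

def binarize(gray: np.ndarray) -> np.ndarray:
    """Foreground (text) = 1, background = 0, assuming dark text on light paper."""
    return (gray < otsu_threshold(gray)).astype(np.uint8)
```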
Deskew
Skew detection rather than deskewing itself is the main focus of the various algorithms used
for skew correction. Image skew is the result of improper handling of documents during the digitization process; depending on the device and the nature of the document, improper
placement may lead to the appearance of dark regions around the borders of the document,
known as marginal noise [8]. Although deskewing does not concern itself with removing
marginal noise, these two enhancements are usually performed one after the other. A
correctly aligned document simplifies the character and text line detection algorithms [7],
assuming that text itself has the expected orderly distribution.
Some approaches for skew estimation rely on projection profiles or Radon transform. These
algorithms rotate the image within certain limits and evaluate the obtained projected
profiles; the one with the largest variation corresponds to the skew angle [5][66]. Just like
the projection profiles, Hough transform methods execute a sweep of possible angle values
in the attempt to determine the image skew; they rely on the observation that most collinear
pixels will be encountered along lines that are parallel to a text line’s base line [4][7][31].
Filtering in the Hough space can improve results [69] but unfortunately neither of these
methods produces good results when dealing with documents that also contain images, as
they have different pixel distributions that affect the statistics.
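A minimal projection-profile skew estimator in the spirit described above might look as follows. This is a sketch, assuming a binarized page with foreground = 1; it sweeps candidate angles and keeps the one whose horizontal projection has the largest variance, and the angle range and step are arbitrary illustrative choices.

```python
import numpy as np
from scipy import ndimage

def estimate_skew(binary_img: np.ndarray,
                  max_angle: float = 10.0,
                  step: float = 0.5) -> float:
    """Return the estimated skew angle in degrees via projection profiles."""
    best_angle, best_score = 0.0, -1.0
    for angle in np.arange(-max_angle, max_angle + step, step):
        # Rotate the page by the candidate angle (order=0 keeps it binary).
        rotated = ndimage.rotate(binary_img, angle, reshape=False, order=0)
        profile = rotated.sum(axis=1)          # horizontal projection (row sums)
        score = np.var(profile)                # text lines make the profile peaky
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle

# Deskewing is then a single rotation by the negative of the estimate:
# deskewed = ndimage.rotate(binary_img, -estimate_skew(binary_img), reshape=True)
```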
Unlike the previous methods, the nearest neighbor approach first determines character blocks and then groups the ones that are closest, under the assumption that they must be part of the same text line. Once the neighbor groups are determined, the skew angle
is computed by analyzing the variations between neighboring entities [5][30][67]. These
methods also have a downside: they are very sensitive to noise, which can introduce
artificial entities that disturb the neighborhood creation process. There are however
methods which give a confidence factor for the detected rotation and can even correct bent
pages caused by the shape of the document or lens distortions [68].
Character Segmentation
Character segmentation deals with identifying the particular regions in an image that
correspond to a single character. Segmenting Latin-script printed texts in high quality images poses few problems, as there are few - at most three but usually only one - connected
components that compose a character. This is usually solved by determining the connected
components and running a nearest neighbor algorithm to assign the various diacritics to the
corresponding central entity.
However, with low quality images, characters tend either to merge or to be split into multiple entities, with some fonts being more prone to one behavior than the other. Such situations are also common for other scripts, such as Arabic, where the characters are linked, Chinese [56], where characters may be composed of several distinct components, or scripts that few people know how to interpret, like Lanna, which has touching and overlapping letters [14]. Therefore, some of the mechanisms initially designed to solve the problem of low quality images are actually employed for character segmentation in those specific scripts.
Separating merged characters may be done by explicit character identification [32][33][35], yet this limits the segmentation to a particular script, making it less generic than desired. Another solution relies on vertical projection [33][34], the aim being to determine the coordinate with the fewest pixels and to split along it. Thinning [33][34] may also be employed, as it may result in the “natural” elimination of the merge points, but it can just as well over-segment the existing characters.
Merging separated entities into a single character usually comes down to evaluating the distances between the various components. The components may be represented by their bounding boxes, by the pixels themselves, or by Voronoi diagrams, Delaunay triangulations or 3D meshes [70][71][72].
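As a sketch of the connected-component view described above (not the exact method of any cited work), the code below labels components in a binarized image and merges components whose bounding boxes lie close to each other, which is how a diacritic would typically be re-attached to its base letter; the merge distance is an illustrative parameter.

```python
import numpy as np
from scipy import ndimage

def character_boxes(binary_img: np.ndarray, merge_dist: int = 5):
    """Return (top, left, bottom, right) boxes, roughly one per character.

    binary_img: 2-D array, foreground = 1. Components whose bounding boxes lie
    within merge_dist pixels of each other (e.g. a letter and its diacritic)
    are merged into a single box.
    """
    labels, _ = ndimage.label(binary_img)
    boxes = [(sl[0].start, sl[1].start, sl[0].stop, sl[1].stop)
             for sl in ndimage.find_objects(labels)]

    def close(a, b):
        # Gap between the two boxes along each axis (0 if they overlap).
        dy = max(a[0], b[0]) - min(a[2], b[2])
        dx = max(a[1], b[1]) - min(a[3], b[3])
        return max(dx, 0) <= merge_dist and max(dy, 0) <= merge_dist

    merged = []
    for box in sorted(boxes, key=lambda b: b[1]):      # sweep left to right
        for i, other in enumerate(merged):
            if close(box, other):
                merged[i] = (min(box[0], other[0]), min(box[1], other[1]),
                             max(box[2], other[2]), max(box[3], other[3]))
                break
        else:
            merged.append(box)
    return merged
```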
4. PRIMITIVE LEVEL PROCESSING
The aim of primitive level processing is to identify the nature of the elements that have become accessible following the preprocessing steps. Whereas for text documents the focus is the eventual recognition of characters, for non-textual documents the aim is recognizing basic geometric entities [13]. For scanned maps, the detection of textual elements becomes harder, as the text has various orientations and is placed over variably colored backgrounds. Morphological operations together with statistical data help in identifying regions of interest [15].
Character Recognition
The early work in character recognition was performed on Latin script, as it is the dominant
script of the Western world. Though not yet perfect, in ideal circumstances, character
recognition rates are close to 100% [36]. All methods rely to some extent on matching a
candidate character against a set of features. The most basic approach is using a reference
bitmap of the character, which is then compared to a scaled version of the candidate. More
reliable reference sets are made of line strokes, stroke crossings and relative angles [1][37]. The
purpose is finding some features that can uniquely and unambiguously classify a certain
candidate object as belonging to exactly one group. The reference set of features is
determined through machine learning, usually a neural network.
Character classification was traditionally performed on a per-character basis, determining which features best fit the proposed character. The fitness function is usually relatively simple, computing the candidate’s deviation from the reference pattern as a distance from the reference’s list of features. However, experience over the years has shown that
this approach has a high degree of ambiguity, which can only be solved by using higher
level information, such as language patterns. By taking into account linguistic information,
the classification can become more precise, with the use of Generalized Hidden Markov
Models or Bayesian Networks, thus indicating what sequence of characters is more likely
to occur.
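The simplest form of the bitmap-matching approach mentioned above can be sketched as follows: each candidate glyph is scaled to a fixed size and assigned the label of its nearest reference bitmap, with pixel disagreement as the fitness function. This is a toy illustration under assumed 16×16 binary reference bitmaps; production systems use richer stroke-based features and learned classifiers, as described in the text.

```python
import numpy as np

def resize_nearest(glyph: np.ndarray, shape=(16, 16)) -> np.ndarray:
    """Scale a binary glyph to a fixed size with nearest-neighbour sampling."""
    rows = np.arange(shape[0]) * glyph.shape[0] // shape[0]
    cols = np.arange(shape[1]) * glyph.shape[1] // shape[1]
    return glyph[np.ix_(rows, cols)]

def classify(glyph: np.ndarray, references: dict) -> str:
    """Return the label of the reference bitmap closest to the candidate.

    references: dict mapping a character label to a 16x16 binary bitmap.
    The fitness function is simply the number of disagreeing pixels.
    """
    candidate = resize_nearest(glyph)
    distances = {label: np.count_nonzero(candidate != ref)
                 for label, ref in references.items()}
    return min(distances, key=distances.get)
```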
To some extent it can be said that the field has somehow matured, with developers and
researchers alike acknowledging the performances of current OCR products [11][36][41].
Most of the current research in character recognition is targeted towards Arabic and Chinese
scripts, as well as handwritten documents [2]. Handwritten and Arabic-script texts are similar in that in both cases the characters that form a word are linked to one another. Also,
while the shape of Arabic characters varies based on context, the shape of characters for
handwritten texts tends to display larger variation not only between documents written by
different people, but also within the same document.
For Chinese-based scripts, and others inspired by them, the main difficulty is the large number of available characters, which makes them unsuitable for evaluation using the approaches employed for Latin-based scripts. Yet, there are some mechanisms that respond well to the demands imposed by this script, such as Tesseract [37]. Overall, though, most algorithms developed for analyzing Latin script can be applied, with minor modifications or customizations, to any type of script [39][40].

5. STRUCTURE LEVEL PROCESSING


For this level of processing, the aim is to give “meaning” to the groups of entities, such as
identifying words, reconstructing text lines and interpreting tables.[10]
The problem of word segmentation must once again receive different treatment depending on the script used. Modern Latin-based scripts use the space as a means of separating words and, in some special cases, other symbols such as hyphens. Original Latin monumental inscriptions, though, did not separate words; the same applies to the Chinese language.
For Latin script based texts, separating words is a matter of evaluating distances between
consecutive characters. Yet, for noisy document images, the distances between characters
will be affected by the presence of unexpected dark pixels, so segmenting words requires
some sort of linguistic model information. This also holds true for Chinese texts, which do not use any kind of separator and rely on the reader’s knowledge to decide where to split the words.

Using linguistic information means that segmentation becomes an OCR type processing.
The oldest approach dates from the 1960s, when N-grams were proposed for error
correction [1] and later extended to word segmentation [55], as an alternative to the high
requirements of using a full dictionary [57]. Both the N-grams and dictionaries may be
obtained from preexisting repositories [55], or generated from “relevant” segmented
sources [57]. The actual segmentation evaluates candidate sequences, choosing the one that
has the highest probability, and in the case of varying length patterns slightly favoring
longer sequences, as they are less frequent. Linguistic information can be generated on the spot from the scanned document and then used in the segmentation of further documents, or for correction in post-processing [74].
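A minimal dynamic-programming segmenter in the spirit of the dictionary and N-gram approaches above could look like the sketch below. It is illustrative only: `log_prob` stands for whatever word or N-gram scoring model is available (a hypothetical function, not one prescribed by the cited works), and the small per-character bonus mirrors the slight preference for longer sequences mentioned in the text.

```python
import math

def segment(chars: str, log_prob, max_len: int = 20):
    """Split an unsegmented character stream into the most probable word list.

    chars:    recognized characters without spaces (e.g. raw OCR output).
    log_prob: function word -> log probability; returns -inf for unknown words.
    """
    n = len(chars)
    best = [0.0] + [-math.inf] * n     # best[i] = score of best split of chars[:i]
    back = [0] * (n + 1)               # back[i] = start index of the last word
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            word = chars[j:i]
            score = best[j] + log_prob(word) + 0.1 * len(word)  # long-word bonus
            if score > best[i]:
                best[i], back[i] = score, j
    # Reconstruct the segmentation by walking the back-pointers.
    words, i = [], n
    while i > 0:
        words.append(chars[back[i]:i])
        i = back[i]
    return list(reversed(words))
```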
6. DOCUMENT LEVEL PROCESSING
Modern document level processing is centered on document layout analysis, which is
grouping elements into logical sequences, and to some extent, identifying outstanding
elements of the text, such as authors, headers, etc. For layout analysis it is assumed that
entities such as words (or even text lines), tables and images have already been identified,
along with their bounding boxes. Information concerning the nature of the contents of the
bounding boxes is not compulsory. Grouping the different elements is a matter of observing the distance conventions that are usually respected when writing texts, and of finding and interpreting white spaces, lines [75] and any other form of separator [76]. Methods for accurately detecting font characteristics give valuable information [70]. Authors evaluate these distances relative to the text size [77], as the blank columns separating two text columns need to be large enough to unambiguously split two lines that have the same vertical coordinates [42]. For top-down approaches, so-called X-Y cut algorithms are employed, based on X-Y trees, which recursively split regions of the image along one of the two axes, eventually ending up with individual blocks [45].
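A compact recursive X-Y cut in the sense just described can be sketched as follows, assuming a binary page image with foreground = 1; the minimum gap and region sizes are arbitrary illustrative thresholds rather than values from the cited work.

```python
import numpy as np

def xy_cut(page: np.ndarray, min_gap: int = 10, min_size: int = 20):
    """Recursively split a binary page along the widest empty row/column band.

    Returns a list of (top, left, bottom, right) leaf regions.
    """
    def widest_gap(profile):
        """Largest run of empty rows/columns as (start, length)."""
        best, start, run = (0, 0), 0, 0
        for idx, value in enumerate(profile):
            if value == 0:
                start = idx if run == 0 else start
                run += 1
                if run > best[1]:
                    best = (start, run)
            else:
                run = 0
        return best

    def recurse(top, left, bottom, right, out):
        region = page[top:bottom, left:right]
        if min(region.shape) < min_size:
            out.append((top, left, bottom, right))
            return
        row_gap = widest_gap(region.sum(axis=1))   # horizontal cut candidate
        col_gap = widest_gap(region.sum(axis=0))   # vertical cut candidate
        axis, (gstart, glen) = max((("row", row_gap), ("col", col_gap)),
                                   key=lambda item: item[1][1])
        if glen < min_gap:                          # nothing left to split
            out.append((top, left, bottom, right))
            return
        cut = gstart + glen // 2                    # split in the middle of the gap
        if axis == "row":
            recurse(top, left, top + cut, right, out)
            recurse(top + cut, left, bottom, right, out)
        else:
            recurse(top, left, bottom, left + cut, out)
            recurse(top, left + cut, bottom, right, out)

    leaves = []
    recurse(0, 0, page.shape[0], page.shape[1], leaves)
    return leaves
```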
As previously stated, layout analysis does not limit itself to grouping elements; it also
performs text labeling. This can be achieved both through bottom-up and top-down
approaches. The labeling can rely exclusively on block page distribution [44], or be
enhanced with OCR information [45].
Other important issues are those of detecting tables, graphs, pictures and text regions within
a document image. The algorithms should be able to cut out each element from the image
without including parts from other objects, even if this means cutting in an irregular manner
[78][71].
One approach for detecting table regions is to identify the graphic elements that compose
the table: line segments and intersections [58]. This tends to be relatively resilient to noise,
but it cannot be applied to tables that do not have graphic components. Other solutions
attempt to analyze word distributions within a certain region and make decisions based on
their positions relative to other words.
In [59], the authors attempt to identify tables based on row sparsity, i.e. how large the white spaces on a particular line are. However, this assumes that the table cells have little content, which may not always be the case. An alignment-based approach is proposed in [60], first identifying narrow text columns and then grouping them together into tables. A hybrid implementation is described in [61]; this attempts to combine non-tabular
information, such as header and trailer positions, white spaces or lines separating table cells, and background color variations.
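As a toy version of the sparsity-based idea in [59] (not the authors’ actual implementation), the sketch below flags text lines whose whitespace fraction exceeds a threshold as potential table rows and then groups consecutive flagged lines into candidate tables; the threshold is an illustrative assumption.

```python
def sparse_line_indices(line_texts, min_space_ratio=0.4):
    """Return indices of lines 'sparse' enough to belong to a table.

    line_texts:       list of strings, one per physical text line.
    min_space_ratio:  minimum fraction of whitespace for a line to qualify.
    """
    candidates = []
    for idx, line in enumerate(line_texts):
        if not line:
            continue
        spaces = sum(ch.isspace() for ch in line)
        if spaces / len(line) >= min_space_ratio:
            candidates.append(idx)
    return candidates

def group_runs(indices):
    """Group consecutive sparse-line indices into candidate table regions."""
    runs, run = [], []
    for idx in indices:
        if run and idx != run[-1] + 1:
            runs.append(run)
            run = []
        run.append(idx)
    if run:
        runs.append(run)
    return runs
```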
7. CORPUS LEVEL PROCESSING
Once a document image has been completely analyzed and enhanced with information, the
interactions that are possible for an application specific digital version of the scanned
document should become available to the user. These include content indexing and search,
as well as validation and document updates. When cataloguing papers, humans first react to names, such as the title of the publication, so the expectation was that machines would do the same thing. Unfortunately, the documents that really require automated indexing are usually old and of low quality, which leads to poor OCR performance [36]. Therefore,
other approaches have been developed such as those based on document layout. The
algorithm is first trained on a set of images to learn the existing layouts, then for the
classification phase, for each new image the layout is determined [71] and a fitness score is
computed based on how well the polygons in the image fit the ones on the reference
[43][46]. This layout recognition approach can also be used as a retrieval mechanism, which
can prove to be a more robust mechanism than OCR.
However, information extraction mechanisms can rely on elements as low-level as pixels. The authors of [47] describe an image identification algorithm that uses pixel distribution measurements to identify particular images. A slightly higher level approach analyzes character shapes, more precisely the vertical segments of characters, to construct image feature sets that are then used for identifying a particular document [9].
8. PERFORMANCE EVALUATION
Evaluating the performance of a particular algorithm should not be a matter of debate, yet,
in the field of Document Image Analysis it often is. The first problem arises from the
modular, hierarchical structure of Document Analysis Systems. In this context, a
binarization algorithm cannot be evaluated simply by comparing its output to the ground
truth image. An image sharpness indicator is a useful starting point [73]; a more reliable indicator would be the correct identification of characters and other features by the higher level analyzers. Even so, an advanced OCR analyzer might in fact mask an average performance of the lower levels, so a comparison should be performed using alternative modules.
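One common way to turn the “correct identification of characters” into a number is character accuracy derived from edit distance, sketched below. This is a generic metric offered for illustration, not a benchmark prescribed by the cited works.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def character_accuracy(ocr_output: str, ground_truth: str) -> float:
    """Accuracy = 1 - (edit distance / ground-truth length)."""
    if not ground_truth:
        return 1.0 if not ocr_output else 0.0
    return 1.0 - edit_distance(ocr_output, ground_truth) / len(ground_truth)
```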
Choosing the reference images is also an important component. While there is a general consensus that using a high number of files is desirable [1], the community is split when it comes to which files should be used. This problem exists from binarization all the way up to the high level processing algorithms. The reference output, the so-called “ground truth”, is the great issue. For example, when binarizing artistic images and photographs it is highly unlikely to have a uniquely satisfying binary image, whereas for synthetic images the binary reference is usually unique - unless the noise in the image is for some reason more relevant than the actual contents. Despite its unique nature, another problem arises, that of generating the ground truth information. On one hand, one may start from the ground truth and distort it in various ways, so that the input that is processed is the distorted version of the original ground truth. The alternative would be to generate the ground truth from the documents that are usually processed by the program or platform. As
expected, both approaches have their drawbacks: the first one is relatively fast and gives the designers great control over the kinds of distortions they expect to test, but the distorted images tend to be “unnatural”, so algorithms that are evaluated against such inputs end up being fine-tuned for situations that are never encountered in real documents. The latter solution, on the other hand, while it does provide realistic input, offers no rapid means of generating the ground truth - it must be produced by hand by humans, making the approach non-scalable for current testing demands [38].
9. ANALYSIS SYSTEM APPROACHES
Ideally, a single software package should be able to extract the information from any type
of document it is presented, but the truth is that most times highly specialized software is
used to solve a specific problem. For optimum results, analysis systems, as well as their
components, tend to have a limited applicability. This may involve a limited set of processing steps that can be performed, imposing a restriction on the quality of the documents; limited language support, meaning that only documents written in a particular language may be processed; or a specific document layout, for archives of documents whose format repeats over and over again [12][48]. Yet, even when the input material is limited in terms of content, the required processing might be very complex, due to the high variety of other parameters such as paper color, non-standard text layout and wide-ranging handwriting styles, as is the case in [53].
Document image analysis systems may be limited in scope to things such as extracting
names from documents [4], or processing only tables, graphs and images [3]. These simplifications rely on the assumption that only certain information is worth using for document indexing.
The system described in [6] circumvents character recognition altogether, extracting “word
images” that are then matched with user inputted keywords through word-to-word
matching.
Some systems simply cannot compensate for the low quality of the documents that are
processed, so the designers developed systems that rely on user feedback. As stated in [36],
without human intervention, the results tend to degrade significantly, so human intervention
is compulsory for good results. The systems proposed in [7][12] use initial human input to create templates that are afterwards used to extract information from similarly constructed documents, with no further user intervention expected. More common, though, is allowing
human users to review the proposed results of the analysis and correct them accordingly
[12][51].
Other systems allow the user to gain full control of their capabilities [49], which could be
useful whenever the batch mode processing does not yield the expected results. A more
radical approach is to leave the entire document analysis to the end user [50], taking
advantage of a large community of scientists that are interested in those documents and can
collaborate on enhancing them. Some degree of automated processing could be integrated
nonetheless, as has been tried in [51]. However, in order to eliminate duplicated effort,
a single shared model of the documents should be used [52]. Crowdsourcing
is also used in [54], in order to review the results of the automated processing and improve
10. CONCLUSIONS
There is no universal solution to all problems, due to the high variety of input material.
However, finding optimal solutions for specific document input categories and a specific
(pre)processing stage is possible. In order to take advantage of a "best" solution in a "most
suitable" processing environment/phase, an intelligent Document Image Analysis System is
needed, one that would allow choosing both the algorithm/method to solve a specific problem
and the parameter settings most suitable for the task.
11. REFERENCES
1. G. Nagy, “20 Years of Document Image Analysis in PAMI”, IEEE Transactions on Pattern
Analysis and Machine Intelligence, Vol. 22, No. 1, January 2000.
2. R. Kasturi, L. O'Gorman, V. Govindaraju, "Document image analysis: A primer",
Sadhana, Vol. 27, No. 1, February 2002.
3. X. Lu , S. Kataria , W. J. Brouwer , J. Z. Wang , P. Mitra , C. Lee Giles, “Automated
analysis of images in documents for intelligent document search”, International Journal on
Document Analysis and Recognition, Vol. 12. No. 2, June 2009.
4. L. Likforman-Sulem, P. Vaillant, A. B. de la Jacopière, “Automatic name extraction from
degraded document images”, Pattern Analysis and Applications, Vol. 9, No. 2-3, October
2006.
5. T. Saba, G. Sulong, A. Rehman, “Document image analysis: issues, comparison of methods
and remaining problems”, Artificial Intelligent Review, Vol. 35, No. 2, February 2011.
6. C. B. Jeong, S.H. Kim , “A Document Image Preprocessing System for Keyword Spotting”,
Proceedings of the 7th international Conference on Digital Libraries: international
collaboration and cross-fertilization, December 2004.
7. C. Antonio Peanho, H. Stagni, F. S. C. da Silva, “Semantic information extraction from
images of complex documents”, Applied Intelligence, Vol. 37, No. 4, December 2012.
8. Z. Hu, X. Lin, H. Yan, “Document image retrieval based on multi-density features”,
Frontiers of Electrical and Electronic Engineering in China, Vol. 2, No. 2, 2007.
9. C. L. Tan, W. Huang, S. Y. Sung, Z. Yu, Y. Xu, “Text Retrieval from Document Images
Based on Word Shape Analysis”, Applied Intelligence, Vol. 18, No. 3, May-June 2003.
10. J.-Y. Ramel, S. Busson, M. L. Demonet, “AGORA: the interactive document image
analysis tool of the BVH project”, Second International Conference on Document Image
Analysis for Libraries 2006, 27-28 April 2006.
11. A. Antonacopoulos, D. Karatzas, “Document Image Analysis for World War II Personal
Records”, Proceedings of First International Workshop on Document Image Analysis for
Libraries, 2004.
12. J. He, A. C. Downton, “User-assisted archive document image analysis for digital library
construction”, Proceedings of Seventh International Conference on Document Analysis and
Recognition, 3-6 August 2003.
13. E.E. Regentova, S. Latifi, D. Chen, K. Taghva, D. Yao, “Document analysis by processing
JBIG-encoded images”, International Journal of Document Analysis and Recognition, Vol.
7, No. 4, September 2005.
14. S. Pravesjit, A. Thammano, “Segmentation of Historical Lanna Handwritten Manuscripts”,
Intelligent Systems (IS), 2012 6th IEEE International Conference, 6-8 September 2012.
15. S. Biswas, A. K. Das, “Text extraction from scanned land map images”, 2012 International
Conference on Informatics, Electronics & Vision (ICIEV), 18-19 May 2012.

16. L. O'Gorman, R. Kasturi, "Document Image Analysis", Los Alamitos: IEEE CS Press,
1995.
17. J. Lee, R.-H. Park, S. Chang, “Noise reduction using multiscale bilateral decomposition for
digital color images”, Signal, Image and Video Processing, August 2012.
18. H.-Y. Lim, D.-S. Kang, “Efficient noise reduction in images using directional modified
sigma filter”, The Journal of Supercomputing, December 2012.
19. J. Harikiran, R. Usha Rani, “Color Image Restoration Method for Gaussian Noise
Removal”, Information and Communication Technologies Communications in Computer
and Information Science, Vol. 101, 2010, pp 554-560.
20. S. Okawa, Y. Endo, Y. Hoshi, Y. Yamada, “Reduction of Poisson noise in measured time-
resolved data for time-domain diffuse optical tomography”, Medical & biological
engineering & computing, 50(1), 2012, pp 69–78.
21. A. D. E. Stefano, P. R. White, W. B. Collis, “Selection of Thresholding Scheme for Image
Noise Reduction on Wavelet”, 2004, pp 225–233.
22. M. Nachtegael, S. Schulte, D. Weken, D. Van Der, V. De Witte, E. E. Kerre, “Do Fuzzy
Techniques Offer an Added Value for Noise Reduction in Images?” Advanced Concepts
for Intelligent Vision Systems, 2005, pp. 658–665.
23. S. Harmeling, B. Scholkopf, H. C. Burger, “Removing noise from astronomical images
using a pixel-specific noise model”, IEEE International Conference on Computational
Photography, 2011, pp 1-8.
24. N. Otsu, “A threshold selection method from gray-level histograms”, IEEE Transactions
on Systems, Man and Cybernetics, 1979.
25. M. Sezgin, B. Sankur, “Survey over image thresholding techniques and quantitative
performance evaluation”, Journal of Electronic Imaging, 2004.
26. J. Zhang, J. Hu, “Image segmentation based on 2D Otsu method with histogram analysis”,
Proceedings of the International Conference on Computer Science and Software
Engineering. IEEE Computer Society, Washington, DC, USA, 2008.
27. A. R. Ulichney, “Halftone Characterization in the Frequency Domain”, Imaging Science
and Technology 47th Annual Conference, 1994.
28. A. R. Ulichney, “The void-and-cluster method for dither array generation”, SPIE, Vol.
1913, 1993.
29. R. W. Floyd, L. Steinberg, “An adaptive algorithm for spatial grey scale”, Proceedings of
the Society of Information Display, Vol. 17, pp. 75–77, 1976.
30. Y. Lu , C. L. Tan, “A nearest-neighbor chain based approach to skew estimation in
document images”, Pattern Recognition Letters, Vol. 24, pp. 2315–2323, 2003.
31. Y. M. Alginahi, “A survey on Arabic character segmentation”, International Journal on
Document Analysis and Recognition, 2012.
32. J. Wang, J. Jean, “Segmentation of Merged Characters by Neural Networks and Shortest-
Path”, Proceedings of the 1993 ACM/SIGAPP symposium on Applied computing, pp. 762–
769, 1993.
33. Y. M. Alginahi, “A survey on Arabic character segmentation”, International Journal on
Document Analysis and Recognition, 2012.
34. A. Zahour, B. Taconet, L. Likforman-Sulem, W. Boussellaa, “Overlapping and multi-
touching text-line segmentation by Block Covering analysis”, Pattern Analysis and
Applications, 2008.
35. A. Nomura, K. Michishita, S. Uchida, M. Suzuki, “Detection and Segmentation of
Touching Characters in Mathematical Expressions”, Seventh International Conference on
Document Analysis and Recognition, pp. 126–130, 2003.

36. R. Holley, “How Good Can It Get? Analysing and Improving OCR Accuracy in Large Scale
Historic Newspaper Digitisation Programs”, D-Lib Magazine, Vol. 15, Issue 3, pp. 1–13,
2009.
37. R. Smith, “An Overview of the Tesseract OCR Engine”, Ninth International Conference
on Document Analysis and Recognition, Vol 2, pp. 629–633, 2007.
38. J. Callan, P. Kantor, D. Grossman, “Information Retrieval and OCR: From Converting
Content to Grasping Meaning”, ACM SIGIR Forum, Vol. 36, Issue 2, 58–61, 2002.
39. F. Hedayati, J. Chong, K. Keutzer, “Recognition of Tibetan Wood Block Prints with
Generalized Hidden Markov and Kernelized Modified Quadratic Distance Function”,
Proceedings of the 2011 Joint Workshop on Multilingual OCR and Analytics for Noisy
Unstructured Text Data, 2011.
40. M. K. Jindal, M. Kumar, R. K. Sharma, "Offline Handwritten Gurmukhi Character
Recognition: Study of Different Feature-Classifier Combination”, Proceeding of the
Workshop on Document Analysis and Recognition Pages, pp. 94–99, 2012.
41. H. Déjean, J. Meunier, “Structuring Documents According to Their Table of Contents”,
ACM symposium on Document Engineering, pp. 2–9, 2005.
42. P. E. Mitchell, H. Yan, “Newspaper Document Analysis featuring Connected Line
Segmentation”, Proceedings of the Pan-Sydney area workshop on Visual information
processing, Vol. 11, pp. 77–81, 2001.
43. L. Golebiowski, “Automated Layout Recognition”, 1st ACM workshop on Hardcopy
document processing, pp. 41–45, 2004.
44. B. Rosenfeld, R. Feldman, Y. Aumann, “Structural Extraction from Visual Layout of
Documents”, Proceedings of Eleventh International Conference on Information and
Knowledge Management, pp. 203–210, 2002.
45. A. Takasu, K. Aihara, “Information Extraction from Scanned Documents by Stochastic
Page Layout Analysis”, Proceedings of the 2008 ACM symposium on Applied computing,
pp. 447, 2008.
46. L. Lecerf, D. Maupertuis, “Scalable Indexing for Layout Based Document Retrieval and
Ranking”, Proceedings of the 2010 ACM Symposium on Applied Computing, pp. 28–32,
2010.
47. Z. Hu, X. Lin, H. Yan, “Document Image Retrieval Based on Multi-Density Features”,
Frontiers of Electrical and Electronic Engineering in China, Vol. 2, Issue 2, 2007.
48. A. B. S. Almeida, R. D. Lins, G. D. F. Pereira e Silva, "Thanatos: Automatically Retrieving
Information from Death Certificates in Brazil”, Proceedings of the 2011 Workshop on
Historical Document Imaging and Processing, pp. 146–153, 2011.
49. R. D. Lins, G. D. F. Pereira e Silva, A. D. A. Formiga, “Enhancing a Platform to Process
Historical Documents”, Proceedings of the 2011 Workshop on Historical Document
Imaging and Processing, Vol. 0, pp. 169–176.
50. P. Tranouez, S. Nicolas, V. Dovgalecs, A. Burnett, L. Heutte, Y. Liang, R. Guest,
“DocExplore: Overcoming Cultural and Physical Barriers to Access Ancient Documents”,
Proceedings of the 2012 ACM symposium on Document engineering, pp. 205–208, 2012.
51. A. H. Toselli, E. Vidal, A. Juan, “Interactive Layout Analysis and Transcription Systems
for Historic Handwritten Documents Categories and Subject Descriptors”, Proceedings of
the 10th ACM Symposium on Document Engineering, pp. 219–222, 2010.
52. R. Sanderson, B. Albritton, R. Schwemmer, H. Van De Sompel, “SharedCanvas: A
Collaborative Model for Medieval Manuscript Layout Dissemination”, Proceedings of the
11th Annual International ACM/IEEE Joint Conference on Digital Libraries, pp. 175–184,
2011.

53. E. Matthaiou, E. Kavallieratou, “An information extraction system from patient historical
documents”, Proceedings of the 27th Annual ACM Symposium on Applied Computing, pp.
787, 2012.
54. T. Ishihara, T. Itoko, D. Sato, A. Tzadok, H. Takagi, “Transforming Japanese Archives into
Accessible Digital Books Categories and Subject Descriptors”, Proceedings of the 12th
ACM/IEEE-CS joint conference on Digital Libraries, pp. 91–100, 2012.
55. V. Zhikov, H. Takamura, “An Efficient Algorithm for Unsupervised Word Segmentation
with Branching Entropy and MDL”, Proceedings of the 2010 Conference on Empirical
Methods in Natural Language Processing, pp. 832–842, 2010.
56. A. Chen, “Chinese Word Segmentation Using Minimal Linguistic Knowledge”,
Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, pp. 148–
151, 2003.
57. P. Simon, S. Hsieh, L. Prevot, C.-R. Huang, “Rethinking Chinese Word Segmentation:
Tokenization, Character Classification, or Wordbreak Identification”, Proceedings of the
45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pp.
69–72, 2007.
58. L. A. Pereira Neves, J. M. de Carvalho, F. Bortolozzi, “A Table-form Extraction with
Artefact Removal”, Proceedings of the 2007 ACM Symposium on Applied Computing, pp.
622–626, 2007.
59. Y. Liu, P. Mitra, C. L. Giles, “Identifying table boundaries in digital documents via sparse
line detection”, Proceeding of the 17th ACM conference on Information and knowledge
mining, pp. 1311, 2008.
60. F. Shafait, R. Smith, “Table detection in heterogeneous documents”, Proceedings of the 8th
IAPR International Workshop on Document Analysis Systems, pp. 65–72, 2010.
61. G. Harit, P. Art, “Table Detection in Document Images Using Header and Trailer Patterns”,
Proceedings of the Eighth Indian Conference on Computer Vision, Graphics and Image
Processing, 2012.
62. C. A. Boiangiu, A. I. Dvornic. “Methods of Bitonal Image Conversion for Modern and
Classic Documents”, WSEAS Transactions on Computers, Issue 7, Volume 7, pp. 1081 –
1090, July 2008.
63. C. A. Boiangiu, A. I. Dvornic, D. C. Cananau, “Binarization for Digitization Projects Using
Hybrid Foreground-Reconstruction”, Proceedings of the 2009 IEEE 5th International
Conference on Intelligent Computer Communication and Processing, Cluj-Napoca, August
27-29, pp.141-144.
64. C. A. Boiangiu, A. Olteanu, A. V. Stefanescu, D. Rosner, N. Tapus, M. Andreica, “Local
Thresholding Algorithm Based on Variable Window Size Statistics”, Proceedings CSCS-
18, The 18-th International Conference on Control Systems and Computer Science, May
24-27 2011, Bucharest, Romania, Volume 2, Pp. 647-652.
65. C. A. Boiangiu, A. Olteanu, A. V. Stefanescu, D. Rosner, A. I. Egner, “Local Thresholding
Image Binarization using Variable-Window Standard Deviation Response”, Annals of
DAAAM for 2010, Proceedings of the 21st International DAAAM Symposium, 20-23
October 2010, Zadar, Croatia, pp. 133-134.
66. B. Raducanu, C. A. Boiangiu, A. Olteanu, A. Ștefănescu, F. Pop, I. Bucur, “Skew Detection
Using the Radon Transform”, Proceedings CSCS-18, The 18-th International Conference
on Control Systems and Computer Science, May 24-27 2011, Bucharest, Romania, Volume
2, Pp. 653-657.
67. D. Rosner, C. A. Boiangiu, A. Ștefănescu, N. Țăpuș, A. Olteanu, “Text Line Processing for
High-Confidence Skew Detection in Image Documents”, ICCP 2010 Proceedings, Cluj-
Napoca, Romania, August 26-28 2010, pp. 129-132.

68. C. A. Boiangiu, D. Rosner, A. Olteanu, A. V. Stefanescu, A. D. B. Moldoveanu,
"Confidence Measure for Skew Detection in Photographed Documents", Annals of DAAAM
for 2010, Proceedings of the 21st International DAAAM Symposium, 20-23 October 2010,
Zadar, Croatia, pp. 129-130.
69. C. A. Boiangiu, B. Raducanu, “Effects of Data Filtering Techniques in Line Detection”,
Annals of DAAAM for 2008, Proceedings of the 19th International DAAAM Symposium,
pp. 0125–0126.
70. C. A. Boiangiu, A. C. Spataru, A. I. Dvornic, D. C. Cananau, “Automatic Text Clustering
and Classification Based on Font Geometrical Characteristics”, Proceedings of the 9th
WSEAS International Conference on Automation and Information, WSEAS Press, pp. 468
– 473, Bucharest, Romania, June 24-26, 2008.
71. C. A. Boiangiu, D. C. Cananau, B. Raducanu, I. Bucur, “A Hierarchical Clustering Method
Aimed at Document Layout Understanding and Analysis”, International Journal of
Mathematical Models and Methods in Applied Sciences, Issue 1, Volume 2, 2008, Pp. 413-
422.
72. C. A. Boiangiu, B. Raducanu, “3D Mesh Simplification Techniques for Image-Page
Clusters Detection”, WSEAS Transactions on Information Science, Applications, Issue 7,
Volume 5, pp. 1200 – 1209, July 2008.
73. C. A. Boiangiu, A. V. Stefanescu, D. Rosner, A. Olteanu, A. Morar, “Automatic Slanted
Edge Target Validation in Large Scale Digitization Projects”, Proceedings of the 21st
International DAAAM Symposium, 20-23 October 2010, pp. 131-132.
74. C. A. Boiangiu, D. C. Cananau, S. Petrescu, A. Moldoveanu, “OCR Post Processing Based
on Character Pattern Matching”, 20th DAAAM World Symposium, pp. 141-144 Austria
Center Vienna (ACV), 25-28th of November 2009.
75. C. A. Boiangiu, B. Raducanu, “Line Detection Techniques for Automatic Content
Conversion Systems”, WSEAS Transactions on Information Science, Applications, Issue 7,
Volume 5, pp. 1200 – 1209, July 2008.
76. C. A. Boiangiu, D. C. Cananau, A. C. Spataru, “Detection of Arbitrary-Form Separators
Based on Filtered Delaunay Triangulation”, Proceedings of the 9th WSEAS International
Conference on Automation and Information, WSEAS Press, pp. 442 – 445, Bucharest,
Romania, June 24-26, 2008.
77. C. A. Boiangiu, D. C. Cananau, A. I. Dvornic, “White-Space Detection Techniques Based
on Neighborhood Distance Measurements”, Annals of DAAAM for 2008, Proceedings of
the 19th International DAAAM Symposium, pp. 131 – 132.
78. C. A. Boiangiu, M. Zaharescu, I. Bucur, “Building Non-Overlapping Polygons for Image
Document Layout Analysis Results”, The Proceedings of Journal ISOM, Vol. 6 No. 2 /
December 2012, pp. 428-436.

ASSISTIVE I.T. FOR VISUALLY IMPAIRED PEOPLE

Oana Bălan1
Alin Moldoveanu
Florica Moldoveanu
Anca Morar
Victor Asavei
1 Facultatea de Automatică şi Calculatoare, Universitatea Politehnica din Bucureşti, oanab_2005@yahoo.com
ABSTRACT
According to an international survey performed by the World Health Organization, it was
estimated that the number of visually impaired people in the year 2002 rose to about 161
million (2.6% of the world’s population). Therefore, a large number of people are suffering
from a visual handicap which impedes them from normally accomplishing their daily
activities. As a result, there is need for an assistive device based on an alternative modality,
that can complement or replace sight by another sense -auditory, haptic (tactile or
kinesthetic) [13], or a combination of both- and that can offer a means to deal with
blindness.

Rapid progress is ongoing in various fields of medicine, as advances in computer
technology are driving the development and evolution of simulation, visualization
and virtual reality systems. As a result, a convenient approach is the use of augmented
reality for the development of assistive devices for visually-handicapped people.

This paper presents the current state of research in the field of virtual reality and six
assistive devices for visually-impaired people, the technology employed to provide effective
and reliable benefits, and some of the most interesting and innovative applications in the
area of rehabilitation techniques based on other senses.

Keywords—sensory substitution, assistive IT, virtual reality, augmented reality


1. INTRODUCTION
According to an international survey performed by the World Health Organization, it was
estimated that the number of visually impaired people in the year 2002 rose to about 161
million (2.6% of the world’s population), of whom 124 million had sight deficiencies, while
37 million were blind (legal or total blindness) [13]. Therefore, a large number of persons
are suffering from a visual handicap which impedes them from accomplishing their daily
activities. As a result, there is need for an assistive device based on an alternative modality,
that can complement or replace sight by another sense -auditory, haptic (tactile or
kinesthetic) [13], or a combination of both- and that can offer a means to deal with
blindness.
Vision substitution techniques have been intensively studied over the last century.
The use of combined modalities to convey visual information (haptic, auditory and
auditory/haptic) is relatively recent, but has an ascending trend, due to the intense
attention devoted to developing assistive technologies [21], [22] and because of the
evident benefits of associating all possible communication and interaction channels.

The foundations of sensory substitution have been validated over the last 30 years [23], but
the existing devices seem to be inefficient with respect to the actual problems and situations
that appear in the day-to-day activities of visually handicapped subjects. Thirty years ago,
the American neuroscientist Paul Bach-y-Rita [6] conducted an experiment that laid a solid
foundation for the field of sensory substitution: he transferred the image captured by a video
camera onto a 20 by 20 matrix of vibrotactile pins, placed on the back of a dentist's chair.
Using this device, the blind subjects sitting on the chair could recognize and identify simple
visual patterns.
A reasonable visual prosthesis should be portable, take into account the end user's
needs and give good results in a short time scale [6].
Navigation aid devices, generally known as Electronic Travel Aids, improve the quality of
life of sight-impaired people by helping them to perceive the surrounding environment.
Electronic Travel Aids are classified as follows: obstacle detectors, which emit ultrasonic or
laser beams into the environment in order to calculate the distance between the subject and
the detected object; navigation systems, which provide information about the setting and the
position of the subject in a local or global coordinate system of the scenery; and
environmental sensors.
2. ASSISTIVE SYSTEMS FOR PREDEFINED ENVIRONMENTS
The Drishti navigation system [1] maps the information from the environment and answers
the user's queries through a speech recognition model. The mapping
information is stored in a database that can be dynamically updated as the user moves
through the environment and encounters other objects. The device also provides GPS
localization for outdoor travel.
A similar device is SmartGuide [19], whose initial purpose was to assist and monitor the
user's travel on a university campus. It also guides the user through speech to the desired
location and provides the available pathways that he can follow.
Guentert [10] proposed an audio software application for iPhone that can help blind people
travel safely by train. This system presents the disadvantage of not providing the user’s
position and not monitoring his path continuously.
Hub et al. [2] proposed a system based on a stereoscopic perspective of the scene for
monitoring the distance to the surrounding objects. The detected objects are
compared with a 3D model. This approach is not efficient, because of the large
amount of data involved and the impossibility of modeling complex or dynamic environments.
Despite their interactivity and feasibility, assistive devices for predefined environments
are limited to known environments and pre-designed settings.

3. VISION-SUBSTITUTION TRAVEL AID SYSTEMS FOR UNKNOWN ENVIRONMENTS
Virtual Reality Simulator For Visually Impaired People
Main Idea
Virtual Reality Simulator for Visually Impaired People [18] is a system developed in order
to create an auditory representation of the environment for people who suffer from sight
deficiencies. The main idea of this project concerns the calculation of the distance between
the subject and the nearby objects that surround him, and the conversion of this distance
into sound, thus rendering the entire virtual environment through hearing. The concept that
lies behind the transcription of the visual representation into auditory information is the
sonification process. The visual cortex of a blind person can become responsive to sound,
hence an assistive device such as Virtual Reality Simulator may enhance the neural
plasticity of the brain, providing synthetic vision with truly auditory sensations.
Results
The Virtual Reality Simulator is comprised of a set of small modules (the 3D Simulator,
the Tracking System and the Sound System) (Fig. 1), working independently or
simultaneously and performing various tasks. The components of the system are: the 3D
distance sensor (a pair of glasses), a hardware sound device and a pair of headphones. The
3D Simulator performs the rendering of the scene, distance calculations, depth map analysis
that provides the information for the Sound System and device-user communication. The
3D Simulator generates a depth map from the real scene and provides visual information
for the user interface about the virtual environment of the simulator. The 3D engine is
implemented in C++ and OpenGL and uses the Fox Toolkit Library. The simulator manages
two groups of scenes: the background group, which gives visual feedback and is used as a reference
system, and the main group of scenes, which is used for designing the depth map the user is
interacting with. The simulator loads the scenes from a 3D mesh with 3000 polygons. The
depth map, calculated from the 3D mesh, is a 2D representation that encodes, for each pixel,
the distance from the camera. For example, lighter values stand for close distances, while
darker values indicate farther distances. The depth calculation algorithms have been written
in GLSL shading language, enabling the addition of other algorithms, without
reprogramming the simulator [18].
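As an illustration of this convention, the following minimal Python sketch (the original implementation is in C++/GLSL, so this is only a rough analogue) converts a per-pixel distance buffer into such a grayscale depth map, assuming that lighter values encode closer objects; the near/far limits are illustrative.

```python
# Minimal sketch of the depth-map convention described above (assumed mapping:
# lighter pixel = closer object, darker = farther), turning a per-pixel
# distance buffer into an 8-bit image suitable for later sonification.
import numpy as np

def distances_to_depth_map(distances: np.ndarray,
                           near: float = 0.5,
                           far: float = 15.0) -> np.ndarray:
    """Encode per-pixel camera distances (meters) as a grayscale depth map."""
    clipped = np.clip(distances, near, far)
    # Normalize so that near -> 1.0 and far -> 0.0, then scale to 0..255.
    closeness = (far - clipped) / (far - near)
    return (closeness * 255).astype(np.uint8)

# Example: a 4x4 scene whose left side is nearest to the camera.
scene = np.array([[1.0, 2.0, 4.0, 8.0]] * 4)
depth_map = distances_to_depth_map(scene)
```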

Fig. 1- The Virtual Reality Simulator [18]


The sound renderer module converts the depth map into an auditory one, by spatializing the
sound according to its virtual location. Each position is characterized by a specific sound.
The 3D perception and accuracy detection is ensured by applying the HRTF (Head Related
Transfer Function) - a characteristic of how the sound is perceived from one position or a
specific angle in space. The sonification module consists of a series of techniques that
select, sort and transfer the sounds into a sequence that will be conveyed to the user through
a pair of headphones. The virtualization and mixing of sounds are performed in real-time.
The sonification strategies employed are: testing different available HRTF databases, using
different sample sizes for the HRTF impulses, randomizing the sounds’ order or presenting
them linearly, using various processing sounds for the convolution with the HRTFs.
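For readers unfamiliar with HRTF-based spatialization, the following hedged Python sketch shows the underlying convolution step: a mono signal convolved with a left/right head-related impulse response (HRIR) pair yields a binaural signal. The HRIR taps and tone parameters below are placeholders, not values from any of the HRTF databases mentioned above.

```python
# Hedged sketch of binaural spatialization by HRTF convolution: a mono source
# is convolved with the left/right HRIR measured for the target direction.
# The impulse responses used here are toy placeholders.
import numpy as np

def spatialize(mono: np.ndarray,
               hrir_left: np.ndarray,
               hrir_right: np.ndarray) -> np.ndarray:
    """Return a stereo (N, 2) signal localized at the HRIR's direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

# Example: a short 440 Hz tone placed with an 8-tap placeholder HRIR pair.
fs = 44100
t = np.arange(0, 0.1, 1.0 / fs)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
hrir_l = np.array([0.9, 0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.4, 0.2, 0.1, 0.05, 0.0, 0.0])
stereo = spatialize(tone, hrir_l, hrir_r)
```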
An experiment was conducted in which the subjects compared the performance of
the device when presented with an object or an obstacle (a column or a wall), in order to assess the
detection and representation of narrow and wide objects. The subjects were also given specific
navigation tasks in a training setup, where the user had to get accustomed to the
environment, identify the objects present in the scene and their location, recognize the scene,
complete the itinerary in a given time and correct previously made mistakes. The subjects
performed these tasks in training environments (virtual or real), moving around fake furniture,
walls or columns, in order to learn and identify the whole background. In this kind of
training session, the measured indicators are: completion time, correct orientation in space,
accuracy of localization and position detection, and the trajectories followed during the session.
The newest prototype integrates the 3D distance sensor in a pair of glasses and incorporates
an infrared Time of Flight 3D camera for the improvement of mobility simulations.
The Virtual Reality Simulator proved to be helpful for vision disabled people in different
research experiments performed indoors and outdoors, in virtual and real life situations.
Limitations
The main limitations that have to be overcome are the adaptability of the simulator, which
has to be designed to allow multiple setup configurations, the complexity of the
scenes, and tracker accuracy (which has to be configurable for a larger workspace). A possible
solution is to improve the calibration and distortion software.
Expected Evolution
As future work, the tracking calibration will be revised for more accurate detection. Also,
more common 3D file formats will be supported for the scene files, while the incorporation of a
physics simulation engine will create a more realistic environment, as the current one is mostly
inactive. Different sound encodings for various object categories are also candidates for
improvement and enhancement. In addition, using neural networks to obtain patterns
for sound conversion is an efficient and reliable method for the technological
advancement of the system.
The Vibe
Main Idea
The Vibe [13] [6] is an assistive system that converts video streams coming from a camera
into auditory information, rendered to the user through a pair of headphones. This device
correlates the coordinates of the image in the visual space with the audio
representation of the sound. For instance, it encodes top/bottom positions with a high/low tone,
and left/right locations with left/right panning.

Results
The Vibe maps the image extracted from the camera onto a set of receptive fields, distributed
uniformly over the picture. A receptive field is a set of pixels grouped in a limited area.
To each receptive field there corresponds a particular sound, characterized by a frequency value
and a panning, determined by the vertical and horizontal location of the receptive field's
center.
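A minimal sketch of this encoding is given below, under the assumption of simple linear mappings; the frequency range and the function itself are illustrative and are not taken from [6] or [13].

```python
# Minimal sketch of The Vibe's reported encoding (assumed linear mappings):
# the vertical centre of a receptive field sets the tone frequency and the
# horizontal centre sets the left/right panning. Ranges are illustrative.
def field_to_sound(cx: float, cy: float,
                   f_low: float = 300.0, f_high: float = 3000.0):
    """Map a receptive-field centre (cx, cy in 0..1, y = 0 at the top) to
    (frequency in Hz, pan in -1..+1, where -1 is full left)."""
    frequency = f_high - cy * (f_high - f_low)   # top of image -> high tone
    pan = 2.0 * cx - 1.0                         # left -> -1, right -> +1
    return frequency, pan

# Example: a receptive field centred in the upper-left quadrant of the image.
freq, pan = field_to_sound(cx=0.25, cy=0.2)
```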
An experimental procedure took place in a U-shaped car park maze. Twenty blindfolded subjects
equipped with The Vibe had to complete the track 3 times. The experiment had four
sessions (three for training and one for testing), separated by at least 24 hours. The results of
the experiment were evaluated based on the time of completion (Run Time) and the number
of mistakes, i.e., the number of times the subject crossed the boundary of the track (Number of
Contacts). Using various statistical calculations (the Friedman non-parametric test of variance
and the Student t-test), the conclusions drawn revealed two main advantages over other procedures:
significant results after a short learning time, even under noisy conditions, and a decrease in
the number of contacts after completing the training sessions, under both normal and reversed
conditions.
The experiment is practically relevant because it has an impact on the subject's mobility,
and is thus adequate for guiding the development of the visual assistive device according to
the user's actual needs.
Limitations
Pattern recognition depends on the resolution of the device, and the small number of points
available in the visual field is not usually enough for a good detection of the objects in
space. The current devices are not able to give a realistic insight into the practical situations
of using a sensory substitution system on a regular basis in daily life.
Expected Evolution
Recent developments and advances in the understanding of signal processing in the visual
system offered new pathways for the optimization of visual aids in the case of unsighted
subjects. These methods should be applied to evaluate the qualitative and quantitative
performances of the assistive devices when used in a realistic environment, so that they can
give good results in a short time scale.
SeeColor
Main Idea
The SeeColor project [13] [23] is a device designed to represent real-time images using the
auditory channel. The main idea of the system is to help blind subjects to reconstruct
mentally the frontal scenes of the environment. Colored objects of the reality are depicted
using three-dimensional sound sources that indicate position, localization and cardinality,
enabling vision impaired people to navigate in an unknown environment.
The mental simulation model is basically formed starting from audio. The researchers tried
to replace the visual sense, which is parallel, with a parallel representation of the audio
signals in time. The characteristics of the setting are mapped into multiple signal features,
encoded by sound duration or musical instrument sounds.
Results
The purpose of the SeeColor prototype is to transform typical colors (green, red, yellow, in
the case of a crosswalk) into musical instrument sounds. For example, the sounds rendered
in the 3-dimensional virtual environment that correspond to each colored pixel location are
a flute for green pixels and a piano for yellow ones. Object depth is encoded by the length of
the signal and has four possible durations that correspond to four depth values. As
concerns image processing, to build a more consistent scene of the environment, the goal
is to decrease the number of insignificant colors and retain only the important ones.
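The following hedged sketch illustrates this kind of color-and-depth encoding; the instrument chosen for red and the concrete duration values are assumptions added for the example, not parameters of the actual prototype.

```python
# Hedged sketch of a SeeColor-style encoding: the dominant colour of a pixel
# selects an instrument timbre and its depth is quantized to one of four
# durations. Instrument names (beyond flute/piano) and durations are assumed.
INSTRUMENT_FOR_COLOR = {"green": "flute", "yellow": "piano", "red": "violin"}
DURATIONS_S = [0.09, 0.16, 0.23, 0.30]   # four possible sound durations

def encode_pixel(color: str, depth_m: float, max_depth: float = 8.0):
    """Return (instrument, duration) for one coloured pixel."""
    instrument = INSTRUMENT_FOR_COLOR.get(color, "silence")
    # Quantize the depth into one of the four duration bins.
    bin_index = min(int(depth_m / max_depth * len(DURATIONS_S)),
                    len(DURATIONS_S) - 1)
    return instrument, DURATIONS_S[bin_index]

# Example: a green pixel roughly three metres away.
print(encode_pixel("green", 3.0))
```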
The experiments conducted over time proved the close connection between colors and musical
instrument sounds. It is easier and more accessible to link visual information to
specific instrumental sounds. An experiment carried out with the aid of 15 participants
demonstrated that sounds can help locate and associate objects of different or similar colors.
The prototype has also been tested for mobility assistance, where a subject had to follow a
red painted line on the ground. It was proven that the combination of real-time modulation
and the distance information gave correct and precise information to facilitate the subject's
displacement.
Limitations
This device is addressed to blind people who could see before their visual impairment. The
limitation of the SeeColor prototype lies in the unfeasibility of using this device with
congenitally blind people, because they are unable to distinguish colors and cannot grasp
perspective transformations. Even late blind users, however, suffer from imagery limitations
when scenes and images increase in complexity.
In addition, researchers should take into account the issue of reducing the size of the
devices, so that they are comfortable for the user and acceptable to be worn in public.
Nevertheless, lowering the cost of the system and using sophisticated technology for both
audio and video processing are essential for developing an efficient and accurate visual-
substitution system.
Expected Evolution
Future work concerns the extraction of basic color properties and, consequently, reducing as
much as possible the effects of lighting in the image. Work is underway to determine the salient
regions of the image, thus attracting the attention of the user towards the most noticeable
parts of the scene.
The vOICe
Main Idea
The main idea of The vOICe system [6] [13] [24] (the capital letters are the abbreviation
of "Oh I See") is to create a sensory substitution assistive device based on image-to-sound
renderings that basically uses the physical properties of sound to represent the surrounding
visual information. The vOICe is a very well-known prototype, whose principles led
to the realization of other experiments and trials over the last years. The goal is to provide
synthetic vision using a non-invasive prosthesis that exploits the capacity of adaptation of
the brain in the complete absence or deterioration of a sense.
Results
The vOICe maps a 64 x 64 gray-level picture into different sound levels. Views are refreshed
once per second, while each pixel has an associated sinusoidal tone, defined by the pixel's
vertical and horizontal position in the 64 x 64 resolution map. The video-to-audio mapping
associates height (vertical position) with pitch (high frequencies at the top of a column
and low frequencies at the bottom of a column) and brightness with amplitude (loudness),
in a left-to-right scan of the frame.
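A minimal Python sketch of such a column-by-column scan is given below, with an assumed frequency range, sample rate and frame size; the exact parameters of the real system are not reproduced here.

```python
# Minimal sketch of a vOICe-style scan (assumed parameters): a small grayscale
# frame is read left to right, one column per time slice; each pixel adds a
# sinusoid whose frequency depends on its row (higher rows -> higher pitch)
# and whose amplitude depends on its brightness.
import numpy as np

def frame_to_soundscape(frame: np.ndarray, fs: int = 22050,
                        scan_time: float = 1.0,
                        f_low: float = 500.0, f_high: float = 5000.0):
    rows, cols = frame.shape
    samples_per_col = int(fs * scan_time / cols)
    t = np.arange(samples_per_col) / fs
    freqs = np.linspace(f_high, f_low, rows)     # row 0 (top) -> highest pitch
    out = []
    for c in range(cols):                        # left-to-right scan
        brightness = frame[:, c].astype(float) / 255.0   # brightness -> loudness
        col_audio = (brightness[:, None]
                     * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        out.append(col_audio / rows)
    return np.concatenate(out)

# Example: an 8x8 frame containing a single bright diagonal line.
frame = np.eye(8, dtype=np.uint8) * 255
audio = frame_to_soundscape(frame)
```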
The vOICe technology consists of a head-mounted, non-invasive video camera that converts
images into soundscapes and transmits them via headphones. The wearable device also includes
a notebook PC and costs around 2500 dollars. The software for the device is available
for free download. Peter Meijer, the researcher who developed The vOICe system, is
convinced that, by using the brain's adaptive capacity (brain plasticity), blind subjects can
mentally reconstruct and translate the visual content of the environment in a fluent and
continuous way, like natural perception, without any conscious effort at all [24].
Blue Edge Bulgaria developed a simplified but portable version of The vOICe, compatible
with Nokia mobile phone cameras [24].
Limitations
Because each sound combines amplitude and frequency, extensive
training is required to interpret the resulting signal for a given group of pixels in the current scene. In
addition, the system does not provide depth information about the location of the
object in space. The left-to-right perception is not continuous, as expected from the visual
sense, which processes information from multiple directions at the same time.
Expected Evolution
For efficiency, an accelerometer is required to adjust the visual field and provide a
steady image, even when the head moves. Also, connecting an infrared sensor to
adjust the camera position according to the eye movements can provide a better
reconstruction of reality. In addition, future extensions include support for object
recognition (reading large prints, headlines, street signs, labels), eye tracking (augmented-
reality glasses for totally blind subjects, localization, absolute position of an object,
cardinality), location technology with GPS, and binocular vision with stereoscopic camera
hardware for better depth perception, in order to detect objects and obstacles.
The binocular paradigm uses the anaglyphic processing method, which combines two
viewpoints through two different color filters, usually red for the left eye and cyan or green
for the right eye. The vOICe analyses the 3D image generated from the anaglyphic approach
and creates a depth map that is translated into spatial sounds, according to the distance of
the landmark.

For instance, Minoru (a Japanese word that stands for "reality"), the world's first 3D camera
[25], can be successfully used in combination with The vOICe, to create a stereoscopic
anaglyphic processing of reality that can be transferred to visually-handicapped subjects
through auditory perceptions. Minoru offers depth image output, camera calibration,
and stereo and anaglyphic capture. Signal information extracted from the Minoru 3D camera can
be processed with OpenCV, a computer vision library developed by Intel, designed for real-
time image processing.
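As a hedged illustration of the anaglyphic composition described above, the following OpenCV/Python sketch combines a left and a right frame into a red/cyan image, following the red-left, cyan-right convention mentioned earlier; the file names are placeholders.

```python
# Hedged sketch of red/cyan anaglyph composition from a stereo pair:
# the left view is placed in the red channel, the right view in green and blue.
import cv2
import numpy as np

def make_anaglyph(left_bgr: np.ndarray, right_bgr: np.ndarray) -> np.ndarray:
    """Combine left/right BGR frames into a single red/cyan anaglyph image."""
    anaglyph = np.zeros_like(left_bgr)
    anaglyph[:, :, 2] = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)    # red   <- left
    gray_right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    anaglyph[:, :, 1] = gray_right                                    # green <- right
    anaglyph[:, :, 0] = gray_right                                    # blue  <- right
    return anaglyph

# Example usage (placeholder file names):
# left = cv2.imread("left.png"); right = cv2.imread("right.png")
# cv2.imwrite("anaglyph.png", make_anaglyph(left, right))
```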
Nevertheless, a disadvantage of the binocular anaglyphic stereoscopic system is the fact
that it is more difficult to distinguish between the foreground and the background scenes
than in the case of stereo frames.
Kinect For The Blind
Main Idea
Kinect for the Blind [26] was designed to help blind people orient themselves and determine
the position of obstacles on the street (direction, distance, dimensions). This device differs from the usual
assistive aids for the visually impaired (for example, the white cane) in that it can detect
obstacles simultaneously, in all directions (up, down, left and right),
while the white cane can track only one object at a time.
Results
This system works by using the Kinect for Xbox 360 sensor, which can determine depth
directly, not by composing it from stereo frames. It uses an infrared flash and a receiver that
measures per-pixel light delay. Kinect for the Blind transforms the depth map produced by
the sensor through a set of heuristic filtering methods and scales it down to a belly-mounted
8 x 4 tactile matrix.
The tactile matrix makes the user "feel" the depth of an object in space through pixels
that receive voltage periodically. The frequency of the voltage pulses indicates the distance
to a specific object: the more frequent they are, the closer is the object corresponding to that
pixel. Tactile data is transferred via a USB-to-UART FTDI interface and controlled by an
ATmega32 board.
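A minimal sketch of this depth-to-tactile reduction is shown below, assuming simple block averaging and a linear mapping from closeness to pulse rate; the grid size matches the 8 x 4 matrix above, but the distance and rate ranges are illustrative.

```python
# Minimal sketch of reducing a dense depth image to an 8x4 tactile matrix:
# each cell averages its block of depth pixels and the result is mapped to a
# pulse rate, closer objects pulsing faster. Ranges are assumed values.
import numpy as np

def depth_to_pulse_rates(depth_m: np.ndarray, rows: int = 4, cols: int = 8,
                         near: float = 0.5, far: float = 4.0,
                         min_hz: float = 1.0, max_hz: float = 10.0) -> np.ndarray:
    """Reduce a depth image (meters) to a rows x cols matrix of pulse rates (Hz)."""
    h, w = depth_m.shape
    cells = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = depth_m[r * h // rows:(r + 1) * h // rows,
                            c * w // cols:(c + 1) * w // cols]
            cells[r, c] = block.mean()
    closeness = np.clip((far - cells) / (far - near), 0.0, 1.0)
    return min_hz + closeness * (max_hz - min_hz)   # closer -> faster pulses

# Example: a 480x640 scene at 2 m with one near obstacle.
scene = np.full((480, 640), 2.0)
scene[200:300, 100:200] = 0.8
rates = depth_to_pulse_rates(scene)
```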

Visual-aid system | Sensory substitution | Substitution modality / encoding | Addressability | Commercial device
VR Simulator for Visually-Impaired People | Audio | Distance to objects is encoded by the amplitude of sound; the depth image map is converted into an audio map by using the HRTF | Congenitally, early and late blind people | No
The Vibe | Audio | Top/bottom position - frequency of sound (low/high pitch); left/right position - left/right panning | Congenitally, early and late blind people | No
SeeColor | Audio | The color of objects is encoded by musical instrument sounds; distance is encoded by the length of sound | Early and late blind people; not recommended for congenitally blind people | No
The vOICe | Audio | Top/bottom position - frequency of sound (low/high pitch) | Congenitally, early and late blind people | Yes
Kinect for the Blind | Haptic | Tactile matrix that enables users to perceive the depth of objects | Congenitally, early and late blind people | No
Real-Time Assistance Prototype | Audio | Distance is encoded by sound frequency (low/high pitch); 3D directional sound by using the Head Related Transfer Function | Congenitally, early and late blind people | No

Table 1 - A comparative study between the main visual-aid systems available

Limitations and Expected Evolution


Kinect for the blind does not offer a very accurate perception of depth.
Because the pins are quite massive, they can be softened by covering them with a piece of
napkin.
The Real-Time Assistance Prototype
Main Idea
The Real-Time Assistance Prototype [5], a device developed at the Research Center in
Graphic Technology of the Universidad Politecnica de Valencia, tracks objects in the
surrounding space and provides the user with an acoustic signal of the path as he navigates
the environment. The hardware device is comprised of a pair of stereo cameras, which record
information ranging from 32 degrees to the left to 32 degrees to the right, and headphones
for transmitting the signal to the subject. The system can detect objects in a natural
environment placed at a distance between 1 and 15 meters, covering a range of 64
degrees. The surrounding information is obtained using segmentation and depth-map
analysis algorithms. The acoustical system uses binaural sounds for better
localization in space. Binaural sounds are the result of convolving a monaural sound
with the sound localization cues (HRTF, the Head Related Transfer Function), which
describes how the sound changes shape between the point where it is emitted and the location
of the listener.

The acoustical system encodes an object's position in space based on object distance (inversely
proportional to the sound frequency), object direction (a sound displacement towards the
direction of the object) and object speed, which is proportional to the intensity of the pitch change.
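A hedged sketch of this encoding is given below, with proportionality constants invented for the example; the prototype's actual constants are not reproduced here.

```python
# Hedged sketch of the acoustic encoding summarized above: frequency falls off
# inversely with distance, azimuth is kept as a direction cue, and approach
# speed scales the rate of pitch change. Constants are illustrative assumptions.
def encode_object(distance_m: float, azimuth_deg: float, speed_mps: float,
                  k_freq: float = 2000.0, k_pitch: float = 50.0):
    """Return (frequency_hz, azimuth_deg, pitch_change_hz_per_s) for one object."""
    frequency = k_freq / max(distance_m, 1.0)   # closer -> higher frequency
    pitch_change = k_pitch * speed_mps          # faster approach -> faster change
    return frequency, azimuth_deg, pitch_change

# Example: an object 5 m away, 20 degrees to the right, approaching at 1 m/s.
print(encode_object(5.0, 20.0, 1.0))
```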
Results
Two experiments have been performed in order to analyze the performance of the Real-
Time Assistance Prototype. In the first experiment, the four totally blind subjects were
asked to remain still and to identify the direction of the sound, whereas in the second
experiment they had to identify the location of a moving sound and follow it. In the
second experiment, four object distances were tested: near (5 m), far (10 m), very far (12 m)
and very, very far (15 m). The results showed that even if the traveler correctly identified
the sound source, he lost it several times while following it. The prototype provides good
results for detecting far and moving objects, answering a request from the blind community,
which supported the idea of developing a system capable of recognizing when an object
gets nearer. The Real-Time Assistance Prototype can reliably detect objects situated at a
distance between 5 and 15 m.
Limitations and Expected Evolution
As a limitation, the system is unable to identify objects at ground level, due to the stereo
vision technology employed, which cannot process depth information for small objects.
Also, the user is constrained to a visual area of 64 degrees (32 degrees to the left and 32
degrees to the right).
In order to develop a perfect system, several improvements are needed: a sensory
device to detect near and far objects, a GPS system to enhance navigation and a head
positioning system.

Haptic Devices
Haptic devices use tactile and kinesthetic encoding and representation as their sensory
substitution method.
Haptic systems have evolved considerably over the last decades. One of the first devices
developed [11] was comprised of a video camera and electronic equipment made of an
array of 20x10 tactile receptors placed on the back of a chair. The user felt the impulses
transmitted through the receptors and could detect the location of different objects or
persons in space.
In [12], the disadvantage of using high voltage levels for conveying the electric energy to
the chair-back pins was eliminated by employing an array of 7x7 pins on the user's
tongue. It enabled the subject to identify primarily geometric shapes: circles, squares or
triangles.
Zeng et al. [8] proposed a system based on a TOF (Time-of-Flight) camera, a portable
Braille module with 30x2 resolution, a white cane that uses a Wii vibrating remote
controller and an audio system. The user could either "inspect" (stay in a fixed position) or
"monitor" (move through the environment). This system provides information about the
position (relative distance), shape, width, height and type of object (obstacle at ground,
middle or head level). This device is rather efficient, but presents the disadvantage of being
quite uncomfortable and time-consuming for the user, who has to read the result of the
scene analysis on the Braille display. The Braille display has a low resolution, thus offering
little detailed information about the setting.
4. A COMPARATIVE STUDY BETWEEN THE DEVICES PRESENTED
All six analyzed devices prove to be good solutions that address the problem of object
detection and navigation in the surrounding environment for visually-impaired people, as
shown by the experiments performed. The sensory substitution encoding techniques
are either audio or haptic, while the substitution modalities employed range from the use of
HRTF impulses, sound amplitude and frequency, to musical instrument sounds for encoding
distance, top/bottom position or the color of objects. These systems are dedicated to
congenitally, early or late blind people. Nevertheless, the only commercial device available
on the market is The vOICe, while the others are only experimental prototypes (Table 1).
5. CONCLUSIONS
Until now, many electronic devices that provide assistance to visually-impaired people have
been developed, but few of them are actually used on a daily basis [4] (Table 1). The aim
of sensory substitution systems is to restore to blind people the capacity to categorize and
localize objects rapidly. Such a device can address one of the major problems faced by
visually handicapped people: assistance for their daily life, cardinality, autonomy, mobility
and prevention of unexpected situations (obstacles or dangers). There are many devices that
help blind people's displacement (Electronic Travel Aids - ETA - and Electronic Orientation
Aids - EOA). The other available technologies (Location Based Services, artificial vision and
obstacle detection sensors) are dedicated to visually-handicapped subjects, but none of these
assistive devices enables a blind person to recognize and localize nearby objects.
Neuroscience research has shown that the visual cortex (the area of the brain responsible
for sight) can become responsive to sound [13]. On the other hand, it has been proven that the brain
has the capacity to categorize objects very fast [4]. A 2D model can be localized very quickly
(12 ms), and a large number of visual forms (around 40 kb) can be stored even on mobile
devices (PDAs, smartphones). SpikeNet can be used to reconstruct 3D objects from a pinhole
stereoscopic 60-degree camera, where distances can be calculated with a precision of 20
cm at a distance of one meter [4].
Image/video processing involves thresholding operations, segmentation, extraction of
contours, elimination of noise and other simplification techniques (contrast enhancement,
zooming and magnification). To describe the semantic content of a scene, object
recognition and video data interpretation are required. An advantage of this method is the
possibility to embed complex algorithms with high performance rates, even on portable devices [4].
Electronic devices can enhance the autonomy and mobility of blind users. It is important to
focus on the skill of subjects to detect and determine the position of targets in the visual
field, as for them, the most important aspect is to manage to navigate in unknown
environments, to locate obstacles and to identify similar objects.

The modalities of substituting a sensory channel (haptic, auditory and auditory/haptic) are
rather sequential, not involving multiple senses at the same time. To counteract the
limitations of these methods, a clear benefit would be brought by exploiting all the parallel
interaction channels.
Auditory encoding, even if it uses low-cost devices, must handle the issue of high
information loss when converting visual information to audio.
In conclusion, sensory substitution aids have the advantage of being simple, practical,
portable, removable, wearable, and offer a good alternative to implants and to surgical
interventions.
As concerns human-computer interaction, these devices need to be made more
ergonomic and easier to interact with. In terms of appearance, they have to be acceptable to
be worn in public and relatively inexpensive for the vast blind population, who cannot afford
to buy costly equipment [4].
Assistive electronic devices can address one of the major problems faced by the visually
handicapped people- assistance for their daily life, cardinality, autonomy, mobility and
prevention of random situations (obstacles and dangers).
6. REFERENCES
1. Abdelsalam Helal, Steven Edwin Moore, Balaji Ramachandran, "Drishti: An Integrated
Navigation System for Visually Impaired and Disabled", Proceedings of the 5th
International Symposium on Wearable Computers, pp. 149-156, 2001
2. Andreas Hub, Joachim Diepstraten, Thomas Ertl, "Design and Development of an Indoor
Navigation and Object Identification System for the Blind", Proceedings of the 6th
International ACM SIGACCESS Conference on Computers and Accessibility, pp. 147-152,
2004
3. Bronzino, J.D., The Biomedical Engineering Handbook, Second Edition, Volume I, CRC
Press
4. Dramas F., Oriola B., Katz B., Thorpe S., Jouffrais C., Designing an Assistive Device for
the Blind Based on Object Localization and Augmented Auditory Reality, ASSETS’08,
October 13-15, 2008, Halifax, Nova Scotia, Canada
5. Dunai L., Fajarnes Peris G., Praderas V.S., Garcia Defez B., Lengua Lengua I., Real-Time
Assistance Prototype- a new Navigation Aid for blind people, Research Center in Graphic
Technology, Universidad Politecnica de Valencia, 2010 IEEE
6. Durette B., Louveton N., Alleysson D., Herault J., Visuo-auditory sensory substitution for
mobility assistance: testing TheVIBE, Grenoble, France
7. Gorman P.J., Meier A.H., Krummel T., Simulation And Virtual Reality In Surgical
Education, Arch Surg/Vol 134, Nov. 1999
8. Limin Zeng, Denise Prescher, Gerhard Weber, "Exploration and Avoidance of Surrounding
Obstacles for the Visually Impaired", Proceedings of the 14th International ACM
SIGACCESS Conference on Computers and Accessibility, pp. 111-118, 2012
9. Lippincott W., Surgical Simulation and Virtual Reality: The Coming Revolution, Annals of
Surgery, Vol. 228, No. 5, 635-637, 1998
10. Markus Guentert, "Improving Public Transit Accessibility for Blind Riders: A Train Station
Navigation Assistant", Proceedings of the 13th International ACM SIGACCESS
Conference on Computers and Accessibility, pp. 317-318, 2011

11. Paul Bach-Y-Rita, Carter C. Collins, Frank A. Saunders, Benjamin White, Lawrence
Scadden, "Vision Substitution by Tactile Image Projection", Nature, vol. 221, pp. 963-964,
1969
12. Paul Bach-y-Rita, Kurt A. Kaczmarek, Mitchell E. Tyler, Jorge Garcia-Lara, "Form
perception with a 49-point electrotactile stimulus array on the tongue: A technical note",
Journal of Rehabilitation Research and Development, vol. 35(4), pp. 427-430, 1998
13. Pun T., Roth P., Bologna G., Moustakas K., Tzovaras D., Image and Video Processing for
Visually Handicapped People, EURASIP Journal on Image and Video Processing, Volume
2007
14. Riva G., Applications of Virtual Environments in Medicine, Methods Inf Med 5/2003, 2003
15. Schultheis M., Rizzo A., The Application of Virtual Reality Technology and Rehabilitation,
Rehabilitation Psychology, 2001, Vol. 46, No. 3, 296-311, 2001
16. Stone R., McCloy R., Virtual Reality in Surgery, BMJ. 2001 October 20; 323(7318): 912–
915, 2001
17. Szekely G., Satava R., Virtual Reality in Medicine, BMJ VOLUME 319, 1999
18. Torres-Gil, Casanova-Gonzalez M.A., Gonzalez-Mora O., Applications of Virtual Reality
for Visually Impaired People, Universidad de La Laguna, WSEAS Transactions on
Computers, Issue 2, Volume 9, February 2010
19. Z.H. Tee, L.M. Ang, K.P. Seng, J.H. Kong, R. Lo, M.Y. Khor, "SmartGuide System to
Assist Visually Impaired People in a University Environment", Proceedings of the 3rd
International Convention on Rehabilitation Engineering & Assistive Technology", 2009
20. Zajtchuk R., Satava R., Medical Applications of Virtual Reality, Communications of the
ACM, September 1997/Vol. 40, No. 9
21. R. G. Lupu, F. Ungureanu, V. Siriteanu, “Eye Tracking Mouse for Human Computer
Interaction”, The 4th IEEE International Conference on E-Health and Bioengineering -
EHB 2013
22. R. G. Lupu, F. Ungureanu, “Mobile Embedded System for Human Computer
Communication in Assistive Technology”, 2012 IEEE 8th International Conference on
Intelligent Computer Communication and Processing, 2012
23. http://cvml.unige.ch/doku.php
24. http://www.seeingwithsound.com/
25. http://www.minoru3d.com/
26. http://www.zoneos.com/kinectfortheblind.htm
27. http://www.vrphobia.com/
28. http://medicalaugmentedreality.com/
29. http://www.columbia.edu/cu/21stC/issue-1.4/doctor.html
30. http://www.vrs.org.uk/

PROJECT MANAGEMENT DATA IN INNOVATION ORIENTED SOFTWARE DEVELOPMENT
Mihai Liviu Despa1
1 Academy of Economic Studies, Bucharest, Romania, mihai.despa@yahoo.com
ABSTRACT
The focus of this article is on project management data acquisition, analysis, processing
and classification in the context of innovation oriented software development. The role
played by data in the decision-making process is highlighted. Main data categories, specific
to IT project management, are depicted. Data sources are described and analyzed. Data
collection process specific to software development project management is formalized into
a diagram. Data sorting and grading methods are submitted by offering practical examples
from the author’s own activity. Software tools for data management are indicated. Methods
of data analysis are presented. An indicator for data consistency is introduced. Key
characteristics of the indicator are submitted for analysis. Future research opportunities
regarding data management are suggested.

Keywords: project management, data, software development, innovation


1. PROJECT MANAGEMENT DATA
Data collection is the process of gathering information in an organized manner and sorting
it according to its relevancy [7]. The main resources of a project manager's daily activity
are data and information. On the basis of data and information a project manager can make
decisions, define strategies, correct deviations and measure progress. Data and information
accuracy will determine the effectiveness of the project manager.
The main categories of IT project management data are the following:
• requirements – represent all the specifications, instructions and needs submitted by
the parties involved in the project. Requirements are expressed in the form of
specifications by the project owner and represent quality, security and functionality
standards that the application needs to meet and integrate. Requirements are
expressed in the form of indications by future users of the application and represent
their expectations in terms of functionalities from the future software.
Requirements are expressed in the form of needs by the project team members and
represent wages, working hours or other related benefits.
• legislative framework – represents all the laws, norms and regulations under which the application to be developed falls. The legislative framework creates opportunities and constraints for a software development project. The project manager is required to know and optimally use the facilities provided by the existing legislative framework. Relevant examples are tax exemptions when employing persons who are part of certain social categories or when hiring a specialized graduate on a software developer position. The legislative framework also has a coercive character. For example, when designing an application the architect should take into consideration how the law requires personal data to be handled.

1 Academy of Economic Studies, Bucharest, Romania, mihai.despa@yahoo.com
• productivity – represents the efficiency of the work performed for the application that is the focus of the project. Productivity is calculated based on progress and allows for predictions and simulations related to project development. During the planning phase the project manager estimates the productivity of individual team members and the productivity of the project team in order to calculate the entire project timespan. Actual productivity has to be compared with the estimated productivity and the necessary corrections have to be applied when there are significant differences.
• communication – represents an extremely important element in a project and is directed to the project team, the project owner, suppliers, collaborators or end-users. Representative data on the communication process is obtained by assessing the availability and receptivity factors. Availability is determined by the degree to which the messages transmitted through a certain channel reach their destination. Availability must be complete; all messages must reach the recipient and the recipient must be aware of them. For example, if messages are transmitted via e-mail and the recipient does not check that e-mail, then their availability for this type of communication is not complete. In this situation the communication channel must be changed or backed up by an alternative channel. Receptivity represents the accuracy with which the received message was interpreted and the speed at which the necessary measures have been taken. For example, if the programming team receives an SMS stating that all efforts should be directed towards the graphical interface, but team members do not comply, then the channel used for transmitting the message does not impose the level of authority required for the task.
• bugs – represent malfunctions or components that are not compliant with the specifications transmitted by the project owner. Bugs are important data for the project manager because they provide an overview of the software application's quality. Bugs also constitute an element that generates tension in the relationship between the project manager and the project owner. Detailed knowledge of application bugs is an element of confidence for the project owner and demonstrates that the project manager controls deviations from the original plan.
• standards – represent all the required characteristics the application has to meet. Standards are set by the project owner, by common industry practices or by similar software applications already on the market. The project manager has to know all these standards in detail in order to ensure the development of a competitive application.
Data is valuable for a project manager only if it is consistent. In order to be considered
consistent data must comply with the following:
Validity. Data is valid if it is true under every possible interpretation. Validity represents
the extent to which the data is anchored in the reality of the project. In order to be valid, data
must be relevant and true for all aspects related to the project. For example data about the
project’s deadline is valid when it expresses the actual deadline requested by the project
owner.
Accuracy. Data is accurate if the measured value matches the actual value of an indicator.
Accuracy represents the degree to which measured values expressed as data approach the
real values of the project. For example data about productivity is accurate when it expresses
the real productivity level of the team or of an individual member. Accuracy is not the same
thing as precision. Precision is how specific a measurement is; accuracy is how close to
reality a measurement is.
Usability. Data is usable if it can be easily analyzed, interpreted and stored. Usability
expresses the degree to which data can be handled. For example data about productivity is
usable when it is expressed in a known language and in a friendly format.
Integrity. Data has integrity if it is complete and no aspect that may influence its analysis
and interpretation is missing. Integrity is the degree to which data contains all relevant
details. For example data about productivity has integrity if it provides the productivity
levels of all project team members.
In order to evaluate data consistency the Icd indicator is defined.
\[ Icd = \frac{1}{4n}\left(\sum_{k=1}^{n} Vd_k + \sum_{k=1}^{n} Ad_k + \sum_{k=1}^{n} Ud_k + \sum_{k=1}^{n} Id_k\right) \qquad (1) \]

Where:
n – number of data collected by the project manager
Vd – data validity rating given by analyst; ranges from 1 to 100
Ad – data accuracy rating given by analyst; ranges from 1 to 100
Ud – data usability rating given by analyst; ranges from 1 to 100
Id – data integrity rating given by analyst; ranges from 1 to 100

The Icd indicator ranges from 1 to 100, where 1 represents a project with the lowest degree
of consistency concerning data collection and 100 represents a project with impeccable data
collection processes. The threshold above which the data collecting process of a project is
considered consistent is 75. The threshold was determined empirically by analyzing actual
software development projects using the Icd indicator. Projects that have a value below 75
for the Icd indicator are regarded as unfit from the data collection process point of view.
The Icd indicator is an aggregate indicator and it offers information about all the data
collected. A project manager often requires information about the consistency of a specific
piece of data. In order to obtain information about the consistency of a specific piece of
data the Icd indicator can be used to address a single instance of the data collecting process.
Table 1 – Analysis of a specific piece of data

        Validity   Accuracy   Usability   Integrity   Icd
D1      Vd1        Ad1        Ud1         Id1         Icd(1)
D2      Vd2        Ad2        Ud2         Id2         Icd(2)
D3      Vd3        Ad3        Ud3         Id3         Icd(3)
D4      Vd4        Ad4        Ud4         Id4         Icd(4)
D5      Vd5        Ad5        Ud5         Id5         Icd(5)
...     ...        ...        ...         ...         ...
Dk      Vdk        Adk        Udk         Idk         Icd(k)
...     ...        ...        ...         ...         ...
Dn      Vdn        Adn        Udn         Idn         Icd(n)
In Table 1 the Icd indicator is presented as a tool for measuring the consistency of individual
data.
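To make the computation concrete, a minimal sketch in Python is given below. It is only an illustration: the record structure, function names and sample ratings are our own assumptions, and the analyst ratings are taken to be already available on the 1 to 100 scale defined above.

```python
# Minimal sketch of the Icd consistency indicator (formula (1) and Table 1),
# assuming each collected piece of data carries analyst ratings Vd, Ad, Ud, Id
# on the 1..100 scale. Names and sample values are illustrative only.
from dataclasses import dataclass

@dataclass
class DataItem:
    name: str
    vd: float     # validity rating
    ad: float     # accuracy rating
    ud: float     # usability rating
    integ: float  # integrity rating

def icd_single(item: DataItem) -> float:
    """Icd applied to a single instance of the data collecting process."""
    return (item.vd + item.ad + item.ud + item.integ) / 4

def icd_aggregate(items) -> float:
    """Aggregate Icd over the n collected data items, as in formula (1)."""
    return sum(icd_single(it) for it in items) / len(items)

items = [
    DataItem("deadline", 90, 85, 80, 95),        # Icd = 87.5
    DataItem("productivity", 70, 60, 75, 65),    # Icd = 67.5
]
print(icd_aggregate(items))  # 77.5, above the empirical threshold of 75
```

Both forms agree because the aggregate indicator (1) is simply the mean of the per-item values from Table 1.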
2. DATA COLLECTION SOURCES
Before the process of collecting data begins, the project manager has to define a set of
objectives that have to be met during this process. It is very important to carefully select the
sources from which data is collected. In innovative software development projects, for each
prospective source, the types of data intended to be collected must be defined. The
collection of unnecessary or irrelevant data generates unwanted costs and complicates the
process of sorting and classifying. The project manager of an innovative software project
collects data from the following sources:
• project team – transmits data related to requirements, productivity, communication and bugs. The project team has its own requirements and expectations from the project. In innovative software development projects team members have certain expectations regarding the financial and material benefits. The role of the project manager is to identify these expectations prior to the project's start and to assess whether they are achievable. If expectations are realistic and the project manager assumes them, he must also ensure that they are met on time. When project team members' expectations are not met, regardless of the project's progress, productivity is affected. The project manager needs to notice and correct any defect in the process of communication with the project team. Alarming data about the process of communication is any message sent and not acknowledged or any message that was not acted upon as the instructions required. The project manager needs to collect data on the individual productivity of project team members and on the aggregate productivity of the team. The project team, and especially the staff responsible for testing the application, need to report to the project manager all bugs, problems and difficulties encountered in the project.
• end-user – transmits data related to requirements, communication and bugs. The end user can provide important information regarding the expectations they have of the application before the actual design process starts or even during the development process. The end user is actually the one who is going to decide whether the application is really innovative or not, so end-user data is very important. The project manager has to identify and correct all the shortcomings of the communication process with prospective users of the application to ensure that the information he receives is relevant. Users can report and submit bugs discovered while using the demo version of the application, which was released for testing and marketing purposes, or while using the final version of the application, which was intended for widespread use.
• project owner – transmits data related to requirements, communication and bugs. The project owner imposes the specifications and standards that the software application must meet. Specifications have commitment value and they will often be transferred into the legal contract signed by the two sides involved in the project.
The project manager has to identify and correct all the shortcomings of the
communication process with the project owner. In innovative software
development projects the owner has to be involved as much as possible in all
development stages. The project owner has access to the application's test environment
throughout the project and has the opportunity to undertake parallel testing
operations and progress evaluation. Bugs thus identified will be passed on to the
project manager.
• similar projects – the project manager can collect important data from applications similar to the one he coordinates. Similar projects can provide information on both the implementation and the functionalities or facilities of related software applications. From similar applications already launched on the market, the project manager can obtain information about user traffic so that he may set up and properly size the application architecture. Innovation is not always based on a new software product; sometimes it requires perfecting an already existing solution.
• project environment – the environment in which the application is going to function should be carefully analyzed and the data obtained should be taken into account when designing the software solution. For example, an application intended for academic purposes should not focus on graphics, whereas an application intended for children should emphasize the graphic component. Innovative software projects should look for inspiration in the surrounding environment. The environment is an ideal source for collecting data.
Figure 1 presents the main types of data a project manager works with and their sources.

Fig. 1 - Data collection specific to IT project management [7]


3. DATA COLLECTION PROCEDURES AND INSTRUMENTS
The project manager is responsible for defining the modalities, procedures and instruments
used for data collection. For data collection the following instruments are used:
• interviews – involve the project team and end-users. The interview provides the formal framework in which data can be transmitted and received without interference. Individual interviews with project team members often reveal
important information about the progress of the project. The interview provides an
opportunity for each team member to talk to the project manager and address issues that in
another setting would be avoided or would be treated with reluctance. Through interviews
with project team members it is sought to obtain data related to requirements and
productivity. The end users of the application are also ideal interview candidates. In an
innovative software development project end-users are the ideal target for interviews
because they can provide valuable data. The concept of focus groups is a widespread
practice and involves interviewing a group of people about a product or service to be
launched or already on the market [1]. Through interviews with end users, the project
manager intends to obtain tips and suggestions for improvement.
• questionnaires – involve the project team, end-users and the project owner. Data collected through questionnaires are easily sorted, compared and classified because all participants respond to the same or similar questions. Questionnaires have the advantage of offering anonymity to responders. The data thus obtained are often closer to the reality of the project. Through questionnaires addressed to project team members it is sought to obtain data related to requirements and productivity. Through questionnaires addressed to end-users it is sought to obtain directions and suggestions for improvement. Through questionnaires addressed to the project owner it is sought to obtain data related to specifications and communication. The project manager of an innovative project should also fill out the questionnaires in order to test their relevance.
• reporting – involves the project team. Project team members must submit daily reports on the work done on the project. These reports allow the project manager to access data on every aspect of the development process. The data collected through the reports is used to determine productivity levels. The project manager of an innovative project should also submit daily activity reports even if he is the highest authority in the project team. Reports will help him track his activity during the entire project and provide valuable data sources for future projects.
• documentation – involves the project owner, similar applications, the end user and the project's environment. The project manager must conduct comprehensive documentation before starting a project as well as in its early stages. In innovative software development projects documentation should not cease at any project stage. The project manager must gather documentation about similar applications already on the market. Documentation is done by searching for information and references on specialized sites or blogs. The project manager must seek online information about the owner of the project and the projects undertaken by him in the past. Such research provides data about the reputation of the project owner. The project manager must also gather documentation about the future users of the application in order to obtain information about their preferences and value systems. Social networks are a conducive environment for such research. Finally, the project manager must gather documentation about the environment in which the application will run. The environment is closely linked to users but, unlike them, it is less volatile.
• direct observation – involves the project owner, similar applications, the project team and the project environment [2]. The project manager must rely on direct observation when collecting data on applications similar to the one he is going to
implement. In this case direct observation involves actually using these applications.
Direct observation of the project owner requires analyzing his manifestations and actions
and constructing, based on the collected data, a behavior pattern. Based on this pattern,
strategies will be defined in order to address certain issues that arise during the project
implementation. Direct observation of the project team provides clues about communication
problems, work issues or personal conflicts that affect the quality of the project. Direct
observation of the project's environment is beneficial because it brings forward issues that
otherwise would have been eluded. The project's environment does not allow for direct
observation in all scenarios. For example, the development of an application for the banking
environment will not provide the project manager access to that environment.
• open or imposed-theme discussions – involve the project owner and the project team. The project manager should initiate regular discussions with the owner of the project in order to collect data that provides vital clues about the owner's satisfaction with the project's progress. The project manager should initiate open or imposed-theme discussions at regular intervals with the project team members. In these discussions the project team members should be the main contributors, with the project manager acting as a simple mediator and observer. These discussions may highlight potential problems or early conflicts. The project manager of an innovative project should encourage open discussions related to the software application that is being developed and try to keep them informal.
• evaluations – involve the project team. Evaluations should be conducted periodically by the project manager. The evaluation of a project team member must consider issues such as: the difficulty of the tasks performed, the estimated time to perform a task compared to the actual execution time, the improvement proposals submitted and the collaboration with other team members. Through the evaluation process, the project manager obtains data related to the productivity of team members.
4. DATA HANDLING TOOLS
Data obtained in the collection process must be sorted and classified in order to be analyzed
quickly and used efficiently. Innovative software development projects rely heavily on
software solutions for data handling issues.
• Dropbox – is a cloud based file storage and sharing service [3]. Dropbox also offers the possibility of synchronizing and replicating on a local device's hard drive the data stored in the cloud. By using Dropbox, folders for storing, organizing and classifying data about every team member can be created and accessed from every device capable of browsing the web. Figures 2 and 3 show two instances of the mobile (iOS) version of the Dropbox application. Figure 2 presents the folder structure for the team members in the company run by the author of this article. Figure 3 shows the file structure of the folder of one of the team members within the same company.

Fig. 2 – Project team members Fig. 3 – Folder of a project team member


• E-mail client – e-mail client or mail user agent applications are used for managing e-mails. E-mails are an important source of data and information for project managers. Important decisions are often taken following the receipt of an e-mail [5]. Specifications, requirements, requests for proposals and bids are sent via e-mail. It is extremely important that this information be kept organized and archived for quick access when needed.
• Pivotal Tracker – is an application dedicated to task allocation, progress monitoring, productivity assessment and bug reporting. Figures 4 and 5 display two instances of the mobile (iOS) version of the Pivotal Tracker application. Figure 4 shows the bugs reported for a project implemented in the company managed by the author of this article. Figure 5 shows the details of a specific bug.

Fig. 4 – Pivotal Tracker iOS mobile version Fig. 5 – Pivotal Tracker iOS mobile version
• Microsoft Project – is an application dedicated to project management. Microsoft Project is used to define the stages, tasks, resources and budget of a project [4].
It helps establish dependencies between activities and allocate responsibility for each
activity. Microsoft Project provides tools for resource management and for generating
simulations related to project development. With the help of Microsoft Project the overall
progress of a project can be monitored. Figure 6 presents the project plan of a project
implemented by the company run by the author of this article.
Fig. 6 – Microsoft Project


The data should be analyzed in terms of quantity and quality. Useless data should be
discarded and valuable data should be sorted, analyzed and classified using the above-
mentioned tools.
5. CONCLUSIONS
Data and information accuracy have a decisive impact on the project manager's ability to
make correct decisions. The data that the project manager of an innovative project uses as
part of his daily activity consist of requirements, legislative framework, productivity,
communication, bugs and standards. It is important to collect data from trusted and reliable
sources. A project manager should rely on the following sources: project team, end-users,
project owner, similar projects and project environment. In order to collect data, procedures
and instruments are required. Project managers of innovative software development
projects use interviews, questionnaires, reporting, documentation, direct observation, open
discussions, imposed-theme discussions and evaluations as instruments for collecting data.
Dropbox is presented as a suitable tool for storing and sharing data. E-mail clients are tools
that help manage one of the most widely used communication environments: e-mail.
Project managers should take full advantage of e-mail clients. Pivotal Tracker is depicted as
a suitable tool for task allocation, progress monitoring, productivity assessment and bug
reporting.
Microsoft Project is presented as an application designed for managing project data
regarding resources. The research direction that is being outlined in the field of data
collection is automated filtering. If data is already handled by using software tools, then
pre-filtering data should become a mandatory attribute.
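As a rough illustration of what such automated pre-filtering could look like (an assumption on our part, reusing the hypothetical DataItem records and the icd_single function sketched in Section 1), data whose individual consistency falls below the empirical threshold of 75 would simply be set aside before analysis:

```python
# Hypothetical pre-filter: keep only the data items whose individual Icd
# value reaches the empirical consistency threshold of 75.
CONSISTENCY_THRESHOLD = 75

def prefilter(items, threshold=CONSISTENCY_THRESHOLD):
    return [item for item in items if icd_single(item) >= threshold]
```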

6. REFERENCES
[1] Z. Varvasovszky, R. Brugha - A stakeholder analysis, Health Policy and Planning, Vol. 15, No.
3, pg. 338-345, 2000, doi: 10.1093/heapol/15.3.338
[2] R. Sharma - Methods of Data Collection in Stakeholder Analysis, published online at
http://www.brighthubpm.com/project-planning/99511-methods-of-data-collection-in-stakeholder-
analysis/#.
[3] I. Drago, M. Mellia, M. M. Munafo, A. Sperotto, R. Sadre, A. Pras - Inside Dropbox:
Understanding Personal Cloud Storage Services, Proceedings of the 2012 ACM conference on
Internet measurement conference, 14-16 Nov. 2012, Boston, Massachusetts, USA pg. 481-494
[4] B. Biafore - Microsoft Project 2007: The Missing Manual, Publisher: Pogue Press, 2007, pg. 704,
ISBN-13: 978-0596528362
[5] V. Bellotti, N. Ducheneaut, M. Howard, I. Smith - Taking Email to Task: The Design and
Evaluation of a Task Management Centered Email Tool, Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems, Fort Lauderdale, Florida, USA, 5-10 Apr. 2003, pg. 345-352
[6] S. Berkun - The Art of Project Management, Publisher: O'Reilly, 2005, pg. 392, ISBN: 0-596-
00786-8
[7] M. Despa - The adaptive nature of managing software innovation, Journal of Information
Systems & Operations Management, No. 1, 2013

MODELING AND OPTIMIZING THE BUSINESS PROCESSES USING MICROSOFT OFFICE EXCEL

Mihălcescu Cezar1
Sion Beatrice2
ABSTRACT
Business process management is a management approach based on aligning all the
company's activities with the customers' needs and wishes. It is a method that promotes the
company's efficiency, but at the same time leaves room for innovation, flexibility and
integration with other software applications. Business process modeling tries to
continuously improve the work processes within a company. Business process modeling
helps companies to be more efficient and more able to change than companies that are
based on a traditional hierarchical management.

Key words: modeling, simulation, optimization, business, Excel


1. THE IMPORTANCE OF MODELING WITHIN THE BUSINESS PROCESS MODELING
Business process modeling is the easiest, lowest-risk and most profitable investment that a
company can make in order to become more efficient.
The key features of globalization, which also includes the services sector, target high
technology, capital investment, and management and marketing performance.
The globalization of services, generated by the use of modern technology, has led to a
quantitative growth but also to a qualitative leap in the development of services.
According to specialists, globalization determines a strong increase in competition between
business partners in services, both for the conquest of known market segments targeted by
the offer and for the conquest of new segments.
The analysis of the business processes will review in detail, through discussions with the
company's employees, the existing business processes. The interviews will be both direct
and cross interviews. After a detailed analysis of the processes, the consultants will
document and deliver the results.
After documenting the processes, the consultants will start to optimize them using business
process re-engineering techniques. This way, the analyzed company will become more
efficient and will use the best processes available at that moment.
After optimizing the business processes, the decision makers will ensure that the indicators
necessary to measure each process are in place, so that the user will be able to improve the
processes in the future, based on relevant performance indicators.

1 PhD, Professor, Department: the Economy of Domestic and International Tourism, University: Romanian-American University of Bucharest, mihalcescu.cezar.octavian@profesor.rau.ro
2 PhD, Lecturer, Department: the Economy of Domestic and International Tourism, University: Romanian-American University of Bucharest, sion.beatrice@profesor.rau.ro


Advantages from the operational costs' point of view:
• Automatization – removing manual work by automating the operations using software applications
• Capturing the information – capturing the information from the process, in order to better understand the flow and streamline the process
• Sequence – replacing sequential tasks with parallel tasks
• Disintermediation – removing the intermediaries from the processes, in order to make the process more efficient

Advantages:
• To become proactive
• To have well-defined roles and responsibilities, to know who to contact in each situation, to improve the timing in taking decisions
• To remove the work overload
• To learn from experience
• To increase the quality of the products and services
• To align the business strategy with the offered services
• To measure our performances
• To be scalable
2. DATA MODELING IN EXCEL
The spreadsheets can be used not only to create tables, charts and simple calculations, but
also to create complex mathematical models. Due to the ease with which they can be used,
many financial applications are solved with spreadsheet systems.
One of the most popular systems of this type is the software product Microsoft Office
Excel. Other spreadsheet programs can also be used, e.g. OpenOffice.org Calc.
Unfortunately, even if Excel is a software product with a very strong platform, which can
be used to solve a large variety of financial issues, most Excel users use only its basic
capacities.
In the modeling process of the financial decision, Microsoft Office Excel offers a range of
financial functions, which can be found within the Insert menu.
Modeling and simulation are necessary when direct experimentation on a real system is not
possible or desirable.
Simulation is a process through which a model of a real system is build and experiments
are made with this model, in order to understand the system’s behavior and/or evaluate
different strategies for the analyzed system.
Table 1: Simulation and optimization menu

Currently, an increase in the usage of simulation in different domains can be observed. This
situation was favored by many factors:
• The increasing number of simulation software tools
• The existence of simulation packages for specific problems
• The convenient prices at which most of the available software packages can be bought
• The fact that simulation software packages do not, generally, require particular technical expertise in order to be used
• The fact that current computer systems can provide the large amounts of data necessary for simulation experiments
Data modeling with PowerPivot
PowerPivot is an add-in for Office Professional Plus Excel 2013 which can be used to
perform powerful data analysis and to create sophisticated data models. PowerPivot allows
an Excel user to combine large amounts of data from various sources, to quickly perform
information analysis and to easily share details.
In both Excel and PowerPivot you can create a data model, a collection of tables with
connections. The data model that you see in a spreadsheet in Excel is the same as the one
that you see in the PowerPivot window. All the data that you input in Excel is available in
PowerPivot and vice versa.
The main difference between them is that you can create a much more sophisticated data
model by working in the PowerPivot window. Let us compare some activities.
Table 1: Activities from Excel with PowerPivot

Activity | Excel | PowerPivot
Import data from other sources, such as big corporate databases, public data flows, spreadsheets and text files on the computer | Import all data from a data source. | Filter the data and rename the columns and tables during the import.
Create the tables | The tables can be on any spreadsheet of the workbook. A spreadsheet can contain more tables. | The tables are organized in individual tabbed pages in the PowerPivot window.
Edit data within a table | You can edit values in individual cells of a table. | You cannot edit individual cells.
Create connections between tables | In the Connections dialog tab. | In See Chart or in the Create Connections dialog tab.
Create the calculations | Use the Excel formulas. | Write complex formulas using the Data Analysis Expressions (DAX) language.
Create hierarchies | - | Define hierarchies to be used everywhere in a workbook, including Power View.
Create the KPIs | - | Create KPIs to be used in PivotTable and Power View reports.
Create the Perspectives | - | Create Perspectives in order to limit the number of columns and tables that the users of the workbook can see.
Create PivotTable and PivotChart reports | Create PivotTable reports in Excel. | Click the PivotTable button from the PowerPivot window.
Improving a model for Power View | Create a database model. | Make improvements, such as identifying the default fields, the images and the unique values.
Use Visual Basic for Applications (VBA) | Use VBA in Excel. | Do not use VBA in the PowerPivot window.
Group the data | Group the data in a PivotTable report in Excel. | Use DAX in calculated columns and calculated fields.

The data you are working with in Excel and in the PowerPivot window are stored in an
analytical database within the Excel workbook, and a powerful local engine loads, queries
and updates the data from that database. Because the data are in Excel, they are immediately
available to PivotTable, PivotChart and Power View reports and to the other Excel features
that you use to aggregate and interact with data. The presentation and interactivity of all
data are provided by Excel 2013, and the data and the Excel presentation items are included
in the same workbook file. PowerPivot accepts files of up to 2 GB in size and allows you to
work with up to 4 GB of data in memory.
The workbooks that you modify with PowerPivot can be shared with other people in all the
ways you share other files. However, you obtain more advantages by publishing the
workbook in a SharePoint environment which has the Excel Services application enabled.
On the SharePoint server, the Excel Services application processes and renders the data in
a browser window where other persons can analyze them.
Table 2: Power pivot

In SharePoint 2013, you can add PowerPivot for SharePoint 2013 in order to get extra
support for collaboration and document management, including the PowerPivot Gallery,
the PowerPivot management dashboard in Central Administration, scheduled data refresh
and the ability to use a published workbook as an external data source from its location in
SharePoint.

Despite the general opinion, Microsoft Excel is designed for general purpose calculation,
charting and statistical operations, its applicability not being limited to the financial
accounting field.
Due to its inclusion in the Microsoft Office package, this software application is installed
on most computers running one of the Windows versions.
It is recommended to make calculations and charts in Microsoft Excel, because it has all
the advantages of using the combined components of the Microsoft Office applications:
efficiency, easy transfer of information between documents by copying, setting connections
between the original and the copy, and incorporation of new data by copying the existing
information or inserting it through objects.
In the next pages, a brief description of this software application is attempted, by describing
the main functions and how they can be used in order to make complex mathematical
calculations.
Data analysis by simulation with scenarios
In any institution from the tourism domain, at the level of the economic activities, there are
one or many indicators which, by their value, determine the values of one or many other
indicators. Solving such situations in Excel is done through a model, where the indicators
that influence represent the independent variables, and the influenced ones are the
dependent variables, also named objective functions.
In a model, the variation of one or more dependent variables can be studied based on one
or two independent variables. A range of values is assigned to the independent variables,
indicating a possible evolution scenario such as "if the value of the independent variable is
... the value of the function is ...". The scenario's results are obtained in an Excel table.
If a single independent variable is indicated, the table is said to be with one entry, and in
this case the variation of multiple dependent variables (objective functions) can be studied
through a simple script.
If two independent variables are indicated, the table is said to be with two entries, and in
this case the evolution of a single objective function can be followed. In this case, the
scenario management (Scenario Manager ...) from Microsoft Office Excel is used.
If more than two independent variables are necessary, or an optimal value of the objective
function is desired, or restrictions are applied on the variation, the Solver utility from Excel
is used.
The scenarios are used when it is necessary to study the variation of the values of two
objective functions (dependent variables), depending on the value of a single indicator that
influences them (independent variable). Depending on the values of the independent
variable, the objective functions will have different values.
Case study
A hotel rents rooms at a unit cost c = 215 and a profit per room p = 75. The total cost for a
number of rooms x = 100 is C = c * x, and the total profit is P = p * x^s, where s = 0.9 is the
exponent of the profit decrease with the increase in the number of rooms (due to the need
to provide more tourist places).
Goal: study the variation of C (total cost) and P (total profit) in relation to the number of
rooms, respectively 25, 50, 75 and 100.
Table 3: Example of a hotel rental rooms
Table 4: Data table
Table 5: Size exponent of profit calculations
Table 6: Show formulas for the case study
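Since Tables 3 to 6 only reproduce the Excel sheets, the same one-entry data table can also be recomputed outside the spreadsheet. The short Python fragment below is our own illustration of the case study formulas, not the original workbook.

```python
# One-entry data table for the hotel case study: C = c * x and P = p * x**s.
c, p, s = 215, 75, 0.9   # unit cost, unit profit, profit-decrease exponent

for x in (25, 50, 75, 100):          # scenarios for the number of rooms
    total_cost = c * x               # C = c * x
    total_profit = p * x ** s        # P = p * x^s
    print(f"x = {x:3d}   C = {total_cost:6d}   P = {total_profit:8.2f}")
```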
Monte Carlo Modeling Method
The Monte Carlo simulation method is more and more applied in the business domain, in
order to investigate stochastic problems or risk conditions, when the same course of action
might have several consequences whose probabilities can be estimated.
The variables whose values are not known with certainty, but can be described by
probability distributions, are called stochastic or probabilistic variables. In the simulation,
in order to mimic the variability of such variables it is necessary to generate their possible
values based on the probability distribution.
Probabilities have an important role in modeling situations where stochastic quantities are
involved. During simulation, the knowledge regarding probabilities is necessary both in the
construction phase of the simulation model and in the analysis phase of the simulation
results.
The probabilities can be obtained in several ways. The simplest is the subjective method,
through which experts estimate, on a scale from 0 to 1, the probability that a certain event
will take place. Another method is the objective one, or the method based on relative
frequencies, which uses historical data or data gained by direct measurement of the values
of a stochastic quantity.
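A toy Monte Carlo run in the spirit described above can be sketched as follows; the occupancy distribution, the number of trials and the reuse of the case study profit formula are assumptions made for illustration, not figures from the article.

```python
# Monte Carlo sketch: the number of occupied rooms is a stochastic variable,
# and the expected total profit P = p * x**s is estimated by averaging many
# simulated trials drawn from an assumed probability distribution.
import random

p, s = 75, 0.9
trials = 10_000

total_profit = 0.0
for _ in range(trials):
    x = random.triangular(25, 100, 80)   # assumed occupancy distribution
    total_profit += p * x ** s

print("Estimated expected profit:", total_profit / trials)
```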
3. CONCLUSIONS
In conclusion, modeling and simulation using Microsoft Office Excel can contribute to
understanding and improving a real system. Although a system can be extremely complex,
it is good to attempt to create a model that is as simple as possible. This is obtained both by
defining the boundaries of the analyzed system, so that only the features that are essential
in terms of the analysis's objective are included, and by defining some simplifying
assumptions.
The model can be improved by redefining the limits and relaxing the assumptions. On the
other hand, if you try to include in the model all the factors and relationships, the model
might become too complicated to be solved. That is why it is necessary to make a
compromise between the necessity to build a simple and easy model and the necessity to
get through the model a reasonable and plausible representation of the real problem.

The simplest model is the deterministic one, because it makes the assumption that all the
model's data are reliable. In some cases, this type of model is very valuable. In addition, by
applying sensitivity analysis, some of the simplifying assumptions underlying these models
can be relaxed, and the results of the resolution can be better understood and explained.
4. BIBLIOGRAPHY
• S.C. Albright, W.L. Winston, C.J. Zappe, 2009, Data Analysis and Decision Making with Microsoft® Excel
• Y. Murakami, T. Ishida, T. Kawasoe, 2003, Scenario description for multi-agent simulation, Proceedings of AAMAS '03, the second international joint conference on Autonomous agents and multiagent systems, pg. 369-376
• Cristina Maniu, Andreea Marin Pantelescu, 2013, Romanian's services in the actual context of the economic crisis, Annals of the „Constantin Brâncuşi" University of Târgu Jiu, Economy Series, Issue 1/2013, „Academica Brâncuşi" Publisher, ISSN 1844-7007
• Weiyu Tsai, Don G. Wardell, 2006, An Interactive Excel VBA Example for Teaching Statistics Concepts
• Berk, K.N. and P. Carey, 2004, Data Analysis with Microsoft Excel, 2nd Edition, Thomson Brooks/Cole, Pacific Grove, CA
• Fishman, S.G., 1997, Monte Carlo: concepts, algorithms, and applications, Springer-Verlag, New York Berlin Heidelberg
• C. Raţiu, F. Luban, D. Hîncu, N. Ciocoiu, 2009, Modelare economică, editura ASE
• C. Raţiu, F. Luban, D. Hîncu, N. Ciocoiu, 2007, Modelare economică. Studii de caz. Teste, editura ASE
• Vasilica Voicu, 2008, Sisteme suport pentru adoptarea deciziilor, editura Universitară

A COMPARISON OF SOME NEW METHODS FOR SOLVING ALGEBRAIC EQUATIONS

Anda Elena Olteanu1
Mircea Ion Cîrnu2
ABSTRACT
In this paper we compare different methods of solving algebraic equations based on
resolvent polynomial equations. These methods were considered by several authors,
including the second author of this article. Due to the fact that no proof of convergence was
given for these methods, and therefore the speed of convergence was not determined, this
study offers some important data in terms of effectiveness.

Mathematics Subject Classification: 65H05, 41A25

Keywords: algebraic equation, iteration method, resolvent polynomial equation,
Taylor expansion
1. INTRODUCTION
The Newton-Raphson iteration scheme for solving algebraic equations is derived from the
first-order Taylor expansion and gives a recurrence formula for the approximate solution
of an equation. However, more accurate methods were developed recently, based on
higher-order Taylor expansions, that reduce algebraic equations to polynomial ones,
named resolvent equations. The variations of the iterations can be determined from these
polynomial equations, i.e. the number that must be added to the known iteration to obtain
the following one, where the chosen variation is the root of the resolvent equation with the
least absolute value. Such a method was first given by J.H. He, [5], who obtained a quadratic
resolvent equation deduced from the second-order Taylor expansion. Depending on the initial
data, this method can be inapplicable if the resolvent equation has complex roots. However,
this limitation was overcome by D. Wei, J. Wu and M. Mei, [6], who considered a cubic
resolvent equation, therefore making the restriction on the choice of initial data unnecessary.
In paper [1], see also [3], several types of resolvent equations were given. The simplest
one uses the Taylor polynomial of the function defining the equation, considered at
the previous iteration. This method, which is a direct generalization of the Newton-Raphson
method, proves to be the most effective. In the same paper were obtained not only the
methods presented by He and Wei-Wu-Mei, but also some improved versions of these.
Furthermore, due to the fact that the methods for solving algebraic equations presented in
this article are reduced to solving polynomial ones, it is useful to mention that new, simple
methods for solving polynomial equations have been given in paper [2]; see also [3]. In the
following section, a summary of the methods for solving algebraic equations by their
reduction to the polynomial form is presented. In the last part, all the aforementioned

1 Faculty of Applied Sciences, Polytechnic University of Bucharest, Romania, e-mail: anda.olteanu12@yahoo.com
2 Department of Mathematics, Faculty of Applied Sciences, Polytechnic University of Bucharest, Romania, e-mail: cirnumircea@yahoo.com
methods will be validated in the MATLAB environment, and the results will be presented
at the end of this paper.
2. PRELIMINARY NOTES
We consider the algebraic equation
𝑓(𝑥) = 0 (1)

where 𝑓(𝑥) is a function of a real variable with derivatives up to the necessary order.
Iterative methods consist of determining a sequence of iterations 𝑥𝑛, 𝑛 = 0, 1, 2, …, that
approximate the exact real root 𝑥 of the equation, meaning that lim_{𝑛→∞} 𝑥𝑛 = 𝑥, starting
from the initial value 𝑥0. The most common method is the Newton-Raphson method, obtained
from the first-order Taylor expansion of the function 𝑓(𝑥). The iterations of this method
are determined from the relation
\[ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad n = 1, 2, \ldots \qquad (2) \]
using the initial data 𝑥1 and considering 𝑥0 = 0. Recent methods, specified in the
References Section, determine variations of iterations 𝑥𝑛 denoted
𝑡𝑛 = 𝑥𝑛+1 − 𝑥𝑛 , 𝑛 = 0, 1, 2, … (3)
starting with the initial data 𝑥0 and 𝑥1 , hence with the initial variation 𝑡0 = 𝑥1 − 𝑥0 .
We now consider the 𝑚-order Taylor expansion, where 𝑚 = 1, 2, … of the function 𝑓, at
the point 𝑥𝑛+1 , in relation to the values of the function and its derivatives at the previous
iteration 𝑥𝑛 , given by the following formula
𝑓(𝑥𝑛+1 ) = 𝑃𝑚,𝑛 (𝑡𝑛 ) + 𝑅𝑚,𝑛 , 𝑛 = 0, 1, 2, … (4)
where 𝑅𝑚,𝑛 is the remainder of the expansion, and
\[ P_{m,n}(t_n) = \sum_{k=0}^{m} \frac{f^{(k)}(x_n)}{k!}\, t_n^{k}, \qquad n = 0, 1, 2, \ldots \qquad (5) \]
is the 𝑚-order Taylor polynomial in 𝑥𝑛 with the unknown 𝑡𝑛 .
As shown in paper [1], if it is assumed that 𝑓(𝑥𝑛+1 ) = 0, 𝑅𝑚,𝑛 = 0, the expansion (4) takes
the form
\[ P_{m,n}(t_n) = \sum_{k=0}^{m} \frac{f^{(k)}(x_n)}{k!}\, t_n^{k} = 0, \qquad n = 1, 2, \ldots \qquad (6) \]
This is the simplest form of a resolvent equation, and the method which is based on this
formula is referred to as the Generalized Newton-Raphson method, since for 𝑚 = 1 in (6)
we obtain (2). For 𝑚 = 2, the resolvent equation of this method is given by
\[ \frac{1}{2} f''(x_n)\, t_n^{2} + f'(x_n)\, t_n + f(x_n) = 0, \qquad n = 1, 2, \ldots \qquad (7) \]
and for 𝑚 = 3 we have the formula
\[ \frac{1}{6} f'''(x_n)\, t_n^{3} + \frac{1}{2} f''(x_n)\, t_n^{2} + f'(x_n)\, t_n + f(x_n) = 0, \qquad n = 1, 2, \ldots \qquad (8) \]
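Although the authors implemented and validated these schemes in MATLAB, the iteration based on resolvent (6) can also be sketched compactly. The following Python fragment is only an illustration under our own assumptions (caller-supplied derivatives, numpy.roots for the resolvent, a fixed tolerance); at every step it selects the real root of least absolute value, as described above.

```python
# Sketch of the Generalized Newton-Raphson method based on resolvent (6):
# at each step solve sum_{k=0..m} f^(k)(x_n)/k! * t^k = 0 for t, take the
# real root with the least absolute value and set x_{n+1} = x_n + t.
import math
import numpy as np

def generalized_newton_raphson(derivs, x0, m=3, iterations=10, tol=1e-12):
    """derivs is a list [f, f', f'', ...] containing at least m+1 callables."""
    x = x0
    for _ in range(iterations):
        # Resolvent polynomial coefficients in t, highest degree first.
        coeffs = [derivs[k](x) / math.factorial(k) for k in range(m, -1, -1)]
        real_roots = [r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9]
        if not real_roots:
            raise ValueError("the resolvent equation has only complex roots")
        t = min(real_roots, key=abs)   # variation with the least absolute value
        x += t
        if abs(t) < tol:
            break
    return x

# Example equation from Section 3: f(x) = x^3 - exp(-x) and its derivatives.
f   = lambda x: x ** 3 - math.exp(-x)
df  = lambda x: 3 * x ** 2 + math.exp(-x)
d2f = lambda x: 6 * x - math.exp(-x)
d3f = lambda x: 6 + math.exp(-x)

print(generalized_newton_raphson([f, df, d2f, d3f], x0=0.5, m=3))
# should converge to 0.77288295914921...
```

Under these assumptions, starting from 0.5 with m = 3 the sketch should retrace, step by step, the corresponding column of the first table in Section 3.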
If it is assumed that 𝑓(𝑥𝑛+1 ) = 0, 𝑅𝑚,𝑛 = 𝑅𝑚,𝑛−1 , the following resolvent equation is
obtained
\[ P_{m,n}(t_n) + f(x_n) - P_{m,n-1}(t_{n-1}) = \sum_{k=0}^{m} \frac{f^{(k)}(x_n)}{k!}\, t_n^{k} + f(x_n) - \sum_{k=0}^{m} \frac{f^{(k)}(x_{n-1})}{k!}\, t_{n-1}^{k} = 0, \qquad n = 1, 2, \ldots \qquad (9) \]
The method based on this equation will be referred to as the Generalized He-Wei-Wu-Mei
method, since for 𝑚 = 2 we have the quadratic resolvent equation introduced by J.H. He,
[4], [5], and for 𝑚 = 3 we obtain the one presented by D. Wei, J. Wu and M. Mei, [6].
In paper [1], these methods were improved by simplifying the resolvent equation (9), hence
obtaining the method denoted the Improved Generalized He-Wei-Wu-Mei method
\[ P_{m,n}(t_n) + \sum_{j=1}^{n} f(x_j) - P_{m,0}(t_0) = \sum_{k=0}^{m} \frac{f^{(k)}(x_n)}{k!}\, t_n^{k} + \sum_{j=1}^{n} f(x_j) - \sum_{k=0}^{m} \frac{f^{(k)}(x_0)}{k!}\, t_0^{k} = 0, \qquad n = 1, 2, \ldots \qquad (10) \]
Particularly, considering 𝑥0 = 𝑥1 , therefore 𝑡0 = 0, we obtain the simplified resolvent
equations
\[ \sum_{k=0}^{m} \frac{f^{(k)}(x_1)}{k!}\, t_1^{k} = 0, \qquad n = 1 \qquad (11) \]
and
\[ \sum_{k=0}^{m} \frac{f^{(k)}(x_n)}{k!}\, t_n^{k} + \sum_{j=2}^{n} f(x_j) = 0, \qquad n \ge 2 \qquad (12) \]
Notice that both in the case of the resolvent equations (6) and (11) – (12), only a single
initial data is necessary in order to start the method, whereas the methods based on the
resolvent equations (9) and (10) require two such data.
3. NUMERICAL RESULTS
The purpose of this study is to find the numerical solution of the problem 𝑓(𝑥) = 0, with
the initial values 𝑥0 and 𝑥1 , using the abovementioned methods and to compare these
methods from the point of view of both speed of convergence and the simplicity of the
formula. In order to do that, a function was created using the MATLAB environment. The
function offers the numerical solution of all or just some of the methods presented,
depending on the calling sequence and has the following arguments: the function 𝑓(𝑥), the
initial points 𝑥0 , 𝑥1 , the number of iterations, a vector with components from 1 to 9,
representing the required methods, the precision of the result.
In the following we present the results obtained using the aforesaid Matlab function for
solving two particular algebraic equations, for different initial data.
Example
We consider the equation used in [5], [6] and [1], i.e. 𝑓(𝑥) = 𝑥³ − exp(−𝑥), with the
unique solution 𝑥 = 0.77288295914921012475….
Given the initial values 𝑥0 = 0 and 𝑥1 = 0.5, with 5 iterations and an error within 10⁻¹⁰,
we obtain

Method | 𝑥2 | 𝑥3 | 𝑥4 | 𝑥5 | 𝑥6
Newton-Raphson | .8549721905 | .7787105282 | .7729142691 | .7728829601 | .7728829591
(6), m = 2 ≡ (7) | .7838782597 | .7728823242 | .7728829591 | .7728829591 | .7728829591
(6), m = 3 ≡ (8) | .7728240718 | .7728829591 | .7728829591 | .7728829591 | .7728829591
J.H.He | .7102225859 | .7684413699 | .7727883639 | .7728829199 | .7728829591
Wei-Wu-Mei | .7738712000 | .7729427372 | .7728829591 | .7728829591 | .7728829591
(10), m = 2 | .7102225859 | .7684413699 | .7727883639 | .7728829199 | .7728829591
(10), m = 3 | .7738712000 | .7729427372 | .7728829591 | .7728829591 | .7728829591

The results show that the fastest convergent method is given by formula (8), as it gets to the
solution after only 2 steps. Moreover, both the Wei-Wu-Mei and its improved version,
presented by the second author of this article, obtain the numerical solution with a precision
of 10 digits after only 3 steps. Nevertheless, the improved version, (10), has an advantage
regarding the simplicity of the formula, as it does not require the use of derivatives in
previous iterations, but only in the first one, 𝑥0 . Now, in order to compare all the methods,
we have to consider 𝑥0 = 𝑥1 = 0.5 as the simplified method requires it, obtaining
Method | 𝑥2 | 𝑥3 | 𝑥4 | 𝑥5 | 𝑥6
Newton-Raphson | .8549721905 | .7787105282 | .7729142691 | .7728829601 | .7728829591
(6), m = 2 ≡ (7) | .7838782597 | .7728823242 | .7728829591 | .7728829591 | .7728829591
(6), m = 3 ≡ (8) | .7728240718 | .7728829591 | .7728829591 | .7728829591 | .7728829591
J.H.He | .7838782597 | .7616536459 | .7728888808 | .7728822810 | .7728829591
Wei-Wu-Mei | .7728240718 | .7729418401 | .7728829591 | .7728829591 | .7728829591
(10), m = 2 | .7838782597 | .7616536459 | .7728888808 | .7728822810 | .7728829591
(10), m = 3 | .7728240718 | .7729418401 | .7728829591 | .7728829591 | .7728829591
(11), m = 2 | .7838782597 | .7616536459 | .7728888808 | .7728822810 | .7728829591
(11), m = 3 | .7728240718 | .7729418401 | .7728829591 | .7728829591 | .7728829591

As we can see from the previous table, the three methods, the Generalized He-Wei-Wu-Mei,
the improved and the simplified versions, offer the same results, for both m = 2 and m =
3. This is due to the fact that they started out from the same formula, but by a telescoping
reduction of terms, the methods presented by He and Wei-Wu-Mei now have a simple,
improved expression, (10). In particular, when 𝑥0 = 𝑥1, the recurrence relation is given by
(11), (12), which is easier to apply.
However, considering 𝑥0 = 𝑥1 = 0, all the methods generate complex values from the first
iteration in the case of m = 2. The table below illustrates these inconveniences for the
chosen initial data, and also presents the case m = 5 for the methods presented in this paper,
with an error within 10⁻¹⁵. In this case, the methods offer the exact solution generally after
3 iterations, with the exception of the first one, i.e. the Generalized Newton-Raphson
method, which only needs 2 iterations.
Method | 𝑥2 | 𝑥3 | 𝑥4
Newton-Raphson | 1. | .812309030097381 | .774276548985500
(6), m = 2 ≡ (7) | complex values | complex values | complex values
(6), m = 3 ≡ (8) | .767315738099893 | .772882959140974 | .772882959149210
(6), m = 5 | .772764910588615 | .772882959149210 | .772882959149210
J.H.He | complex values | complex values | complex values
Wei-Wu-Mei | .767315738099893 | .778393341418516 | .772882959270339
(9), m = 5 | .772764910588615 | .773000981896338 | .772882959149210
(10), m = 2 | complex values | complex values | complex values
(10), m = 3 | .767315738099893 | .778393341418516 | .772882959270339
(10), m = 5 | .772764910588615 | .773000981896338 | .772882959149210
(11), m = 2 | complex values | complex values | complex values
(11), m = 3 | .767315738099893 | .778393341418516 | .772882959270339
(11), m = 5 | .772764910588615 | .773000981896338 | .772882959149210
Example
We will now consider the following equation, x − 2 + ln x = 0, with the solution x =
1.5571455989976115131…. Given the initial values 𝑥0 = 1 and 𝑥1 = 2, with 5 iterations
and an error within 10⁻⁹, we obtain
Method | 𝑥2 | 𝑥3 | 𝑥4 | 𝑥5 | 𝑥6
Newton-Raphson | 1.537901880 | 1.557098551 | 1.557145599 | 1.557145599 | 1.557145599
(6), m = 2 ≡ (7) | 1.554445141 | 1.557145600 | 1.557145599 | 1.557145599 | 1.557145599
(6), m = 3 ≡ (8) | 1.556698150 | 1.557145599 | 1.557145599 | 1.557145599 | 1.557145599
J.H.He | 1.435675595 | 1.563093412 | 1.557012513 | 1.557145611 | 1.557145599
Wei-Wu-Mei | 1.643227956 | 1.557324433 | 1.557146786 | 1.557145599 | 1.557145599
(10), m = 2 | 1.435675595 | 1.563093412 | 1.557012513 | 1.557145611 | 1.557145599
(10), m = 3 | 1.643227956 | 1.557324433 | 1.557146786 | 1.557145599 | 1.557145599

The results show anew that the Generalized Newton-Raphson method is the most effective.
It offers the numerical solution after only 2 iterations when m = 3, followed by the
methods presented by Wei-Wu-Mei and its improved version, which need 4 steps to get to
the solution.
We will now add the case m = 5 for 𝑥0 = 𝑥1 = 1.5. It is shown that all the methods of this
order obtain the solution with an error within 10⁻¹⁹, generally after 3 iterations.
Nonetheless, the Generalized Newton-Raphson method proves to be the most effective, as
for both m = 5 and m = 3 it gets to the solution within the specified precision after only 2
steps. The results are given below.

Method | 𝑥2 | 𝑥3 | 𝑥4
N-R | 1.5567209351351014579 | 1.5571455763466026667 | 1.5571455989976112910
(7) | 1.5571565174669614873 | 1.5571455989976112910 | 1.5571455989976115131
(8) | 1.5571452877916973634 | 1.5571455989976115131 | 1.5571455989976115131
(6), m = 5 | 1.5571455986971243224 | 1.5571455989976115131 | 1.5571455989976115131
J.H.He | 1.5571565174669614873 | 1.5571346805581998129 | 1.5571455989976119572
Wei-Wu-Mei | 1.5571452877916973634 | 1.5571459102035498656 | 1.5571455989976112910
(9), m = 5 | 1.5571455986971243224 | 1.5571455992980984817 | 1.5571455989976115131
(10), m = 2 | 1.5571565174669614873 | 1.5571346805581998129 | 1.5571455989976119572
(10), m = 3 | 1.5571452877916973634 | 1.5571459102035498656 | 1.5571455989976112910
(10), m = 5 | 1.5571455986971243224 | 1.5571455992980984817 | 1.5571455989976115131
(11), m = 2 | 1.5571565174669614873 | 1.5571346805581998129 | 1.5571455989976119572
(11), m = 3 | 1.5571452877916973634 | 1.5571459102035498656 | 1.5571455989976112910
(11), m = 5 | 1.5571455986971243224 | 1.5571455992980984817 | 1.5571455989976115131

Acceptable results are obtained for each method. Although the fastest methods are obtained
in the case m = 5, the case m = 3 of the Generalized Newton-Raphson method still seems
to be the most efficient, as its formula is easier to apply. The Wei-Wu-Mei method and the
improved one, given in [1], come second in terms of effectiveness. In
paper [1], both the improved and the simplified method were obtained by a telescoping
reduction of terms using the resolvent equation that generates the Generalized He-Wei-Wu-
Mei method. In fact, these three methods use the same equation, as demonstrated by their
identical numerical results presented previously for the specified iterative schemes.
However, both the improved and the simplified version have a clear advantage regarding
the ease of use in the case of elaborate calculations, compared to the Generalized He-Wei-
Wu-Mei method.
To conclude, the results presented in the above tables show the importance of considering
the polynomial equation (6) obtained in [1], to solve algebraic equations. Besides the fact
that it has the simplest form among the proposed resolvent equations, thus requiring fewer
calculations, it also provides the fastest convergence to the exact solution. As expected, it
also results that the higher the degree of the resolvent equation is, the faster the iteration
converges to the exact solution. Particularly, the results show the importance of considering
these new methods for solving algebraic equations based on resolvent polynomial
equations, instead of the traditional Newton-Raphson method.
4. REFERENCES
[1] M.I.Cîrnu, Newton-Raphson type methods, International Journal of Open Problems, 5(2012), 95-
104.
[2] M.I.Cîrnu, Solving polynomial equations, Mathematica Aeterna, 2(2012), 8, 651-667.
[3] M.I.Cîrnu, Algebraic Equations. New Methods for Old Problems, LAP Lambert Academic
Publishing, Saarbrücken, 2013.
[4] J.H.He, Improvement of Newton iteration method, International Journal of Nonlinear Sciences
and Numerical Simulation, 1(2000), 239-240.
[5] J.H.He, A new iteration method for solving algebraic equations, Applied Mathematics and
Computation, 135(2003), 81-84.
[6] D. Wei, J. Wu and M. Mei, A more effective iteration method for solving algebraic equations,
Applied Mathematical Sciences, 2(2008), 28, 1387-1391.

JOURNAL
of
Information Systems &
Operations Management

ISSN : 1843-4711
---
Romanian American University
No. 1B, Expozitiei Avenue
Bucharest, Sector 1, ROMANIA
http://JISOM.RAU.RO
office@jisom.rau.ro
