
Metadata of the chapter as visualized in SpringerLink

Book Title: Proceedings of the 21st EANN (Engineering Applications of Neural Networks) 2020 Conference
Chapter Title: Image Recoloring of Art Paintings for the Color Blind Guided by Semantic Segmentation
Copyright Year: 2020
Copyright Holder: The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

Corresponding Author: Stamatis Chatzistamatis
Division: Department of Cultural Technology and Communication
Organization: University of the Aegean
Address: University Hill, 81100, Mytilene, Lesvos Island, Greece
Email: stami@aegean.gr

Author: Anastasios Rigos
Division: Department of Cultural Technology and Communication
Organization: University of the Aegean
Address: University Hill, 81100, Mytilene, Lesvos Island, Greece
Email: a.rigos@aegean.gr

Author: George E. Tsekouras
Division: Department of Cultural Technology and Communication
Organization: University of the Aegean
Address: University Hill, 81100, Mytilene, Lesvos Island, Greece
Email: gtsek@ct.aegean.gr

Abstract: This paper introduces a semantic-segmentation-guided image recoloring approach for digitized art paintings, aimed at enhancing the color perception of color-blind people who suffer from protanopia and deuteranopia. Semantic segmentation using transfer learning between natural images and art paintings is applied to extract annotated color information. By using a standard technique, the annotated colors are transformed to simulate the effects of protanopia and deuteranopia. Then, a specialized objective function is minimized to recolor only the colors that differ significantly from the respective simulated ones, because these colors are perceived as confusing by the color blind. The effectiveness of the proposed method is demonstrated through comparison with other algorithms in several experimental cases.

Keywords: Color vision deficiency; Digitized art paintings; Image recoloring; Semantic segmentation; Deep network
Image Recoloring of Art Paintings for the Color Blind Guided by Semantic Segmentation

Stamatis Chatzistamatis, Anastasios Rigos, and George E. Tsekouras

Department of Cultural Technology and Communication, University of the Aegean,
University Hill, 81100 Mytilene, Lesvos Island, Greece
{stami,a.rigos}@aegean.gr, gtsek@ct.aegean.gr

Abstract. This paper introduces a semantic-segmentation-guided image recoloring approach for digitized art paintings, aimed at enhancing the color perception of color-blind people who suffer from protanopia and deuteranopia. Semantic segmentation using transfer learning between natural images and art paintings is applied to extract annotated color information. By using a standard technique, the annotated colors are transformed to simulate the effects of protanopia and deuteranopia. Then, a specialized objective function is minimized to recolor only the colors that differ significantly from the respective simulated ones, because these colors are perceived as confusing by the color blind. The effectiveness of the proposed method is demonstrated through comparison with other algorithms in several experimental cases.

Keywords: Color vision deficiency · Digitized art paintings · Image recoloring · Semantic segmentation · Deep network

1 Introduction

The human color vision system generates color perception by using three types of photoreceptor cells, called cones, to perform photon absorption: the L-cones, the M-cones, and the S-cones. The L-cones correspond to the red color, the M-cones to the green color, and the S-cones to the blue color. Malfunction of one or more photoreceptors results in color vision deficiency (CVD), also called color blindness, which comes in three types: monochromacy, dichromacy, and anomalous trichromacy [16, 17, 20]. The most challenging CVD is dichromacy, which embraces three categories [5, 16, 20]: (a) protanopia, caused by the absence of L-cones; (b) deuteranopia, caused by the absence of M-cones; and (c) tritanopia, caused by the absence of S-cones. Protanopes and deuteranopes cannot distinguish between red and green, while tritanopes cannot distinguish between blue and yellow.
People with strong CVDs face difficulties in daily life, where problematic color perception becomes annoying or even critical (e.g. road signs), making their access to colored content a challenge. Regarding this issue, the accessibility of color-blind people to cultural content, such as art paintings, has been acknowledged as an important demand both by worldwide organizations dealing with CVDs and by cultural organizations such as museums [7, 10, 12]. Many color-blind people are at a disadvantage when choosing to study or enjoy art paintings because they can only discern a confusing set of objects and colors.

© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2020
L. Iliadis et al. (Eds.): EANN 2020, INNS 2, pp. 1–13, 2020.
https://doi.org/10.1007/978-3-030-48791-1_20
So far, a wide variety of image recoloring methods has been proposed to enhance the color perception of the color-blind [16]. Hassan and Paramesran [5] set up the image recoloring in the XYZ color space using three steps: color normalization, angular color rotation, and color un-normalization. Huang et al. [8] performed the color enhancement by extracting key colors and determining an optimal mapping to maintain the contrast between pairs of those colors. In [9], special requirements were quantified by minimizing an objective function in the CIE Lab space. Rani and Rajeev [14] assisted deuteranopic viewers by enhancing the contrast between adjacent shades.
Although much effort has been put into the recoloring of natural images, relatively few works consider the case of digitized art painting images. Due to the complexity of such images, maintaining a natural color appearance in the recolored image is a very challenging problem [1, 18]. The requirement of color naturalness focuses on minimizing the perceptual difference between the colors in the original image and the respective modified colors in the recolored image [16].
In this paper, we introduce a novel approach that attempts to meet the requirement of naturalness for the protanopia and deuteranopia CVDs. A transfer-learning-based semantic image segmentation algorithm is implemented to extract annotated color information from the original image. The semantic segmentation is implemented by introducing a framework for transferring learning from a Mask R-CNN network [3, 15], trained on available large collections of labeled natural images, to the context of art paintings. Then, the colors identified above are transformed to simulate the corresponding color-blind perception [20]. This information is further processed by a specialized objective function, which is minimized to obtain the recolored art painting.
The material is organized as follows. Section 2 describes the proposed algorithm in
detail. Section 3 illustrates the simulation results and the respective analysis. Finally,
the paper concludes in Sect. 4.

2 The Proposed Recoloring Method

The flowsheet of the algorithm is illustrated in Fig. 1. Given an RGB color $C = (C_R\ C_G\ C_B)^T$, with $C_R, C_G, C_B \in [0, 255]$, its simulation to protanopia or deuteranopia is denoted as $C_D = (C_{R,D}\ C_{G,D}\ C_{B,D})^T$, where the subscript $D$ indicates protanopia or deuteranopia.
Fig. 1. The basic algorithmic steps of the recoloring process: Input Image → Transfer Learning between Natural and Art Images → Recoloring Process → Output Image.

The vector $C_D$ is calculated by matrix transformations involving the XYZ and LMS color spaces; the overall transformation is functionally represented as $C_D = f_D(C)$. For a detailed description of the color simulation procedure, the interested reader is referred to [20].
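The paper defers the details of $f_D$ to [20]; as a rough, non-authoritative sketch of that style of simulation, the fragment below linearizes sRGB with a single-gamma approximation (a simplification of the actual standard) and applies a dichromat projection matrix. The matrix entries are the reduction matrices commonly derived from Viénot et al. and should be treated as approximate:

```python
import numpy as np

# Reduced dichromat projection matrices in *linear* RGB, as commonly
# derived from Vienot et al. (1999); values are approximate.
PROTANOPIA = np.array([[0.11238, 0.88762, 0.0],
                       [0.11238, 0.88762, 0.0],
                       [0.00401, -0.00401, 1.0]])
DEUTERANOPIA = np.array([[0.29275, 0.70725, 0.0],
                         [0.29275, 0.70725, 0.0],
                         [-0.02234, 0.02234, 1.0]])

def simulate(rgb, matrix, gamma=2.2):
    """Simulate dichromat perception of an RGB color in [0, 255]."""
    linear = (np.asarray(rgb, dtype=float) / 255.0) ** gamma  # de-gamma
    sim = matrix @ linear                                     # project
    sim = np.clip(sim, 0.0, 1.0)
    return 255.0 * sim ** (1.0 / gamma)                       # re-gamma

# A pure red is strongly altered for a protanope; a gray is unchanged,
# since every matrix row sums to one.
red_sim = simulate([255, 0, 0], PROTANOPIA)
gray_sim = simulate([128, 128, 128], PROTANOPIA)
```

Note that the red and green output channels of a protanope simulation coincide by construction of the first two matrix rows, which is exactly the red/green confusion described in the introduction.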

2.1 Semantic Segmentation Using Transfer Learning Between Natural and Art Digitized Paintings
R-CNN [3] is one of the first deep learning object detection networks; it was inspired by AlexNet [11] and reuses segments of that network with minor changes. Its functionality is improved by using the Selective Search technique [19] to perform object recognition within the image instead of classifying the whole image as one object. After identifying the regions of interest (RoIs) with bounding boxes, a modified version of AlexNet is used to classify each object. At the last level of the network, a support vector machine (SVM) categorizes the object into a class. After the network's training process, one more step takes place to optimize the rectangular boundaries, which is treated as a simple regression problem: it takes the bounding boxes of the objects as input and refines them. The main drawbacks of R-CNN are the many feed-forward passes (~2,000) of AlexNet per image, one for each RoI, and the separate training of three subsequent models: (a) a CNN for extracting image features, (b) the SVM, and (c) a simple regression model to improve the bounding boxes.
Fast R-CNN [2] was designed to solve these problems and accelerate R-CNN. It performs only one feed-forward pass of the CNN over the whole image and then pools over every bounding box (Region of Interest Pooling, RoIPool). It integrates the three aforementioned models into a common network, where the regression takes place in parallel with the classification, which is carried out by a Softmax classifier instead of the SVM, taking the result of the RoIPool layer as input. The next improved network is Faster R-CNN, which uses a Region Proposal Network (RPN) to accomplish the tasks of the Selective Search algorithm. The latest development based on R-CNN is Mask R-CNN, which is Faster R-CNN [15] with the difference that, instead of bounding boxes, it detects the real contours (i.e. masks) of the objects in the image. To achieve this, a parallel branch of the network is responsible for classifying the pixels into objects; each mask is classified using binary per-pixel prediction rather than Softmax.
The above-mentioned deep learning algorithms achieve very good results on classification problems after being trained on large datasets. The lack of available training data in many domains, such as medical images and art images, is a well-known
problem in the scientific community. That problem is addressed by the transfer learning mechanism [4, 21]. The main idea behind this approach is to train a machine learning framework on a new task while exploiting the knowledge acquired by the framework in a previous, related task.

Fig. 2. The basic structure of the transfer learning based semantic segmentation scheme.

The flowsheet of the network used in this paper is depicted in Fig. 2. We use Matterport's implementation of Mask R-CNN (https://github.com/matterport/Mask_RCNN) for training on nine classes of art painting images, with seventy-five annotated images in each class. First, data preprocessing is carried out in the form of image augmentation: existing images from the training dataset are passed through a set of image transformation operations. Specifically, for each image, a horizontal flip is performed, the image is randomly cropped to a scale between 0.9 and 1 times the original dimension, a Gaussian blur with random standard deviation is applied, the contrast and brightness are adjusted, and a series of random affine transformations is applied so that each image ends up with dimensions 500 × 500. The nine trained classes are: person, boat, bird, horse, bowl, chair, table, vase, and fruit.
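The augmentation steps described above can be sketched in simplified form. The numpy stand-in below covers the flip, random crop, brightness/contrast jitter, and the resize to 500 × 500; the Gaussian blur and affine transformations of the actual pipeline are omitted, and the jitter ranges are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, out_size=500):
    """Simplified augmentation: horizontal flip, random crop at a
    0.9-1.0 scale, brightness/contrast jitter, and a nearest-neighbor
    resize so every image ends up out_size x out_size."""
    img = img[:, ::-1]                         # horizontal flip
    h, w = img.shape[:2]
    s = rng.uniform(0.9, 1.0)                  # random crop scale
    ch, cw = int(h * s), int(w * s)
    y0 = rng.integers(0, h - ch + 1)
    x0 = rng.integers(0, w - cw + 1)
    img = img[y0:y0 + ch, x0:x0 + cw]
    img = img * rng.uniform(0.8, 1.2) + rng.uniform(-10, 10)  # jitter
    # nearest-neighbor resize to out_size x out_size
    ys = np.arange(out_size) * ch // out_size
    xs = np.arange(out_size) * cw // out_size
    img = img[np.ix_(ys, xs)]
    return np.clip(img, 0, 255).astype(np.uint8)

out = augment(rng.uniform(0, 255, size=(600, 800, 3)))
```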
The ResNet101 architecture [6] is used as a backbone model to extract relevant features from the input image. Instead of using the weights pre-trained on MS COCO (http://cocodataset.org/), we initialize the backbone with weights pre-trained on ImageNet (http://www.image-net.org/). The backbone serves as a feature extractor: early layers detect low-level features (edges and corners), while later layers successively detect higher-level features (person, boat, etc.). To fine-tune the ImageNet-pre-trained model, we train only the model heads and the ResNet101 layers from stage 4 and up, since the earlier layers reuse weights already learned for extracting features from natural images. The image is converted from a tensor of shape (500, 500, 3) to a feature map of shape (32, 32, 2048). The extracted feature map is then fed into a Region Proposal Network (RPN) [15]. The RPN scans regions of the feature map with sliding bounding boxes, called anchors, trying to determine regions that contain objects; each anchor is assigned a class (possible object, not an object, or neutral). A proposal layer then picks the anchors most likely to contain an object and refines each anchor box to fit the object more closely, obtaining the RoIs. For each region that contains an object selected by the RoI classifier, the model generates 28 × 28 masks.

2.2 Image Recoloring

The chosen color space is the RGB space. Let us denote the input image as $I = [p_{ij}]$ $(1 \le i \le N,\ 1 \le j \le K)$, where $p_{ij}$ is the RGB color vector of the $(i, j)$ image pixel and $N \times K$ is the image size. The pixels of the input image are divided into two disjoint sets, $T_V$ and $T_U$. The set $T_V$ includes all pixels that belong to the objects identified by the semantic segmentation algorithm, while the set $T_U$ includes the rest of the pixels. Next, we form the set $S_V = \{C_{V,1}, C_{V,2}, \ldots, C_{V,|S_V|}\}$ of the distinct colors of $T_V$, and the set $S_U = \{C_{U,1}, C_{U,2}, \ldots, C_{U,|S_U|}\}$ of the distinct colors of $T_U$, where $|\cdot|$ denotes set cardinality. Colors in $S_V$ are transformed to simulate the dichromacy effect, $C_{k,D} = f_D(C_k)$ $(k = 1, 2, \ldots, |S_V|)$. Choosing a threshold $\varphi > 1$ by trial and error, if for some $k$ it holds that $\|C_k - C_{k,D}\| < \varphi$, then the colors $C_k$ and $C_{k,D}$ are similar, meaning that dichromats do not confuse the color $C_k$; therefore, this color must remain intact. All colors of $S_V$ satisfying the above condition are transferred to the set $S_U$. This updating mechanism reads as

$$\forall C_k \in S_V : \|C_k - C_{k,D}\| < \varphi \;\Rightarrow\; S_V = S_V - \{C_k\} \,\wedge\, S_U = S_U \cup \{C_k\}. \qquad (1)$$

After the end of the updating process, colors belonging to the set $S_U$ remain intact, whereas the colors belonging to the set $S_V$ are appropriately modified to enhance their perception by the color blind.
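The updating mechanism of Eq. (1) is a simple threshold test; a minimal sketch, assuming `simulate` stands for $f_D$ and `phi` for the paper's threshold $\varphi$:

```python
import numpy as np

def split_confusing_colors(colors_v, colors_u, simulate, phi=11.0):
    """Move colors of S_V whose simulation lies within phi of the
    original into S_U (Eq. 1): those colors are not confusing and are
    left intact by the recoloring stage."""
    s_v, s_u = [], [np.asarray(c, dtype=float) for c in colors_u]
    for c in colors_v:
        c = np.asarray(c, dtype=float)
        if np.linalg.norm(c - simulate(c)) < phi:
            s_u.append(c)        # perceived correctly: keep intact
        else:
            s_v.append(c)        # confusing: candidate for recoloring
    return s_v, s_u

# Toy simulation that dims the red channel by 40%: a saturated red is
# confusing (large shift), a near-black color is not.
toy_sim = lambda c: c * np.array([0.6, 1.0, 1.0])
kept, intact = split_confusing_colors([[200, 10, 10], [5, 5, 5]], [], toy_sim)
```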
Next, fuzzy c-means is used to divide the elements of $S_V$ into $n_1$ clusters with centers $V = \{v_1, v_2, \ldots, v_{n_1}\}$, and the elements of $S_U$ into $n_2$ clusters with centers $U = \{t_1, t_2, \ldots, t_{n_2}\}$. The cluster centers are called key colors. The target is to obtain a recoloring set $V_{rec} = \{v_{rec,1}, v_{rec,2}, \ldots, v_{rec,n_1}\}$ of the set $V$.
The error vector between the key color $v_i$ and its simulation $f_D(v_i)$ is $er_i = v_i - f_D(v_i)$ $(i = 1, 2, \ldots, n_1)$. Then, the recoloring process of $v_i$ is [1, 18]

$$v_{rec,i} = v_i + M_{D,i}\, er_i \qquad (2)$$

with

$$M_{D,i} = \begin{bmatrix} a & \mu_{D,i,3} & 0 \\ \mu_{D,i,1} & b & 0 \\ \mu_{D,i,2} & \mu_{D,i,4} & 1 \end{bmatrix} \qquad (3)$$

In the case of protanopia, $a \in [-1, 0]$, $b = 1$, and $\mu_{D,i,3} = \mu_{D,i,4} = 0$ are fixed, while $\mu_{D,i,1}$ and $\mu_{D,i,2}$ are adjustable positive parameters. In the case of deuteranopia, $a = 1$, $b \in [-1, 0]$, and $\mu_{D,i,1} = \mu_{D,i,2} = 0$ are fixed, while $\mu_{D,i,3}$ and $\mu_{D,i,4}$ are adjustable positive parameters. Considering protanopia, the implementation of (2) with the matrix in (3) reduces red in favor of green and blue, obtaining less saturated reds/oranges and more saturated greens, increasing the contrast and therefore decreasing the color confusion of a protanope viewer. In the case of deuteranopia, the above process reduces green in favor of red and blue, obtaining similar results. In both cases, the color confusion is alleviated.
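Eqs. (2)-(3) for the protanopia case can be sketched as follows; the $\mu$ values are treated as given here (in the paper they are determined by the optimization described below), $a = -0.5$ mirrors the experimental setting, and the toy $f_D$ is purely illustrative:

```python
import numpy as np

def recoloring_matrix_protanopia(mu1, mu2, a=-0.5):
    """M_{D,i} of Eq. (3) for protanopia: b = 1 and
    mu_{D,i,3} = mu_{D,i,4} = 0 are fixed."""
    return np.array([[a,   0.0, 0.0],
                     [mu1, 1.0, 0.0],
                     [mu2, 0.0, 1.0]])

def recolor_key_color(v, f_d, m):
    """Eq. (2): v_rec = v + M (v - f_D(v))."""
    v = np.asarray(v, dtype=float)
    return v + m @ (v - f_d(v))

# Toy f_D that collapses red toward green, roughly as protanopia does.
toy_fd = lambda c: np.array([0.1 * c[0] + 0.9 * c[1], c[1], c[2]])
v_rec = recolor_key_color([200.0, 50.0, 50.0], toy_fd,
                          recoloring_matrix_protanopia(0.3, 0.3))
```

With a negative, red is reduced while the positive mu values push the error into green and blue, which is the contrast-enhancing behavior described in the text.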
Given two key colors $v_i \in V$ and $t_j \in U$, their distance is $\|v_i - t_j\|$. After transforming the key color $v_i$ into $v_{rec,i}$, the distance between $v_{rec,i}$ and $t_j$ as seen by a dichromat is $\|f_D(v_{rec,i}) - f_D(t_j)\|$. The objective is to keep these two distances as similar as possible, so that the dichromat perceives the differences between colors in a way similar to a viewer with normal color vision. Thus, summing over all pairs,

$$E_1 = \frac{1}{n_1 n_2} \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} \big|\, \|v_i - t_j\| - \|f_D(v_{rec,i}) - f_D(t_j)\| \,\big| \qquad (4)$$

Following the same approach for all color pairs in the set $V$, we arrive at

$$E_2 = \frac{1}{n_1^2} \sum_{i=1}^{n_1} \sum_{j=1}^{n_1} \big|\, \|v_i - v_j\| - \|f_D(v_{rec,i}) - f_D(v_{rec,j})\| \,\big|. \qquad (5)$$

It can be easily seen that minimizing $E_1$ and $E_2$ enhances the contrast between the objects identified by the semantic segmentation, both when they are compared with each other and when they are compared with the rest of the image regions. To further preserve naturalness, $v_{rec,i}$ must be as close to $v_i$ as possible. To this end, we use the error function

$$E_3 = \frac{1}{n_1} \sum_{i=1}^{n_1} \|v_i - v_{rec,i}\| \qquad (6)$$

The overall objective function reads as

$$E = E_1 + E_2 + \gamma E_3 \qquad (7)$$

where $\gamma$ is a regularization factor that takes positive values and is used to balance the distinct parts of the function $E$.
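The objective of Eqs. (4)-(7) is straightforward to compute once the key colors and their recolored versions are available; a direct numpy transcription, with `f_d` as a placeholder for the simulation $f_D$ and an illustrative value for $\gamma$:

```python
import numpy as np

def objective(v, t, v_rec, f_d, gamma=0.5):
    """E = E1 + E2 + gamma * E3 from Eqs. (4)-(7)."""
    n1, n2 = len(v), len(t)
    fv = [np.asarray(f_d(x), dtype=float) for x in v_rec]
    ft = [np.asarray(f_d(x), dtype=float) for x in t]
    e1 = sum(abs(np.linalg.norm(v[i] - t[j]) - np.linalg.norm(fv[i] - ft[j]))
             for i in range(n1) for j in range(n2)) / (n1 * n2)
    e2 = sum(abs(np.linalg.norm(v[i] - v[j]) - np.linalg.norm(fv[i] - fv[j]))
             for i in range(n1) for j in range(n1)) / n1 ** 2
    e3 = sum(np.linalg.norm(v[i] - v_rec[i]) for i in range(n1)) / n1
    return e1 + e2 + gamma * e3

# With an identity simulation and v_rec == v, every term vanishes.
v = [np.array([10.0, 0.0, 0.0]), np.array([0.0, 10.0, 0.0])]
t = [np.array([0.0, 0.0, 10.0])]
e = objective(v, t, list(v), lambda c: c)
```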
The function $E$ is minimized with respect to the parameters $\mu_{D,i,1}$, $\mu_{D,i,2}$ for protanopia, and $\mu_{D,i,3}$, $\mu_{D,i,4}$ for deuteranopia. In each case, the total number of adjusted parameters is $2 n_1$. To perform the minimization, we employ the well-known differential evolution (DE) algorithm [13]. DE comprises three evolving learning phases: mutation, crossover, and selection. Two parameters must be defined: the mutation factor $F \in (0, 1]$, which controls the rate at which the population evolves, and the crossover rate $CR \in [0, 1]$, which controls the fraction of parameter values copied from one generation to the next. A detailed description of DE is provided in [13].
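DE itself is compact enough to sketch. The minimal numpy version below implements the classic DE/rand/1/bin scheme, which may differ in detail from the variant of [13] used by the authors; the mutation, crossover, and selection phases are marked in comments, and the default $F = 0.8$, $CR = 0.9$ match the experimental settings reported later:

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=50, f=0.8, cr=0.9,
                           iters=100, seed=0):
    """Minimal DE/rand/1/bin minimizer over box bounds [(lo, hi), ...]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    fit = np.array([cost(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            mutant = np.clip(a + f * (b - c), lo, hi)      # mutation
            cross = rng.random(len(bounds)) < cr           # crossover mask
            cross[rng.integers(len(bounds))] = True        # >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            tf = cost(trial)
            if tf <= fit[i]:                               # selection
                pop[i], fit[i] = trial, tf
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Sanity check on a convex bowl whose minimum is at (1, 1).
x, fx = differential_evolution(lambda p: (p[0] - 1) ** 2 + (p[1] - 1) ** 2,
                               [(0, 2), (0, 2)])
```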
The above procedure obtains the matrices $M_{D,i}$ $(i = 1, 2, \ldots, n_1)$ that minimize the function $E$. Each matrix corresponds to one key color of the set $V$. The recoloring of the image pixels $p_{ij}$ $(1 \le i \le N,\ 1 \le j \le K)$ proceeds as follows. If $p_{ij} \in S_V$, the pixel must be modified. The methodology of Viénot et al. [20] is applied to obtain the simulated color $f_D(p_{ij})$, and the corresponding error vector is calculated as $er_{ij} = p_{ij} - f_D(p_{ij})$. The closest key color to $p_{ij}$ is denoted $v_{\ell_0}$ and is defined by $\|p_{ij} - v_{\ell_0}\| = \min_{1 \le \ell \le n_1} \|p_{ij} - v_\ell\|$, with $v_\ell \in V$. Then, the recoloring of $p_{ij}$ is carried out in terms of Eq. (2):

$$p_{rec,ij} = p_{ij} + M_{D,\ell_0}\, er_{ij} \qquad (8)$$

Finally, all pixels of the input image with colors belonging to the set $S_V$ are appropriately recolored, while the rest are kept intact. As a result, the number of processed pixels is reduced and the naturalness of the recolored image increases, because the number of modified pixels is kept to a minimum.
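The per-pixel pass of Eq. (8) can be sketched as below; `key_colors` and `matrices` are assumed to come from the clustering and optimization stages, and `in_s_v` is a boolean mask marking pixels whose colors belong to $S_V$:

```python
import numpy as np

def recolor_image(img, in_s_v, key_colors, matrices, f_d):
    """Apply Eq. (8): each pixel in S_V gets the matrix of its nearest
    key color; all other pixels are left intact."""
    out = np.array(img, dtype=float)
    h, w = out.shape[:2]
    for i in range(h):
        for j in range(w):
            if not in_s_v[i, j]:
                continue                                  # keep intact
            p = out[i, j]
            er = p - f_d(p)                               # error vector
            dists = [np.linalg.norm(p - v) for v in key_colors]
            l0 = int(np.argmin(dists))                    # closest key color
            out[i, j] = p + matrices[l0] @ er
    return np.clip(out, 0, 255)

# Toy run: an identity simulation gives er = 0, so the image is unchanged.
img = np.full((2, 2, 3), 100.0)
mask = np.ones((2, 2), dtype=bool)
out = recolor_image(img, mask, [np.zeros(3)], [np.eye(3)], lambda c: c)
```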

3 Simulation Experiments

In this section, six art paintings are used to perform several experiments for protanopia and deuteranopia. The paintings are presented in Fig. 3. The proposed method is compared with two recoloring algorithms. The first was developed by Huang et al. [8] and concerns both protanopia and deuteranopia. The second was developed by Rani and Rajeev [14] and concerns only deuteranopia. The performance index used for the comparison is the naturalness index [9]:

$$J = \frac{1}{N K} \sum_{i=1}^{N} \sum_{j=1}^{K} \|p_{ij} - p_{rec,ij}\| \qquad (9)$$

For our experiments we set $\varphi = 11$, meaning that all colors lying within a sphere of radius 11 are considered similar to the color located at its center. An odd number is selected to achieve uniformity along the R, G, and B axes. The numbers of key colors for the sets $V$ and $U$ are $n_1 = n_2 = 12$; thus, in total there are 24 key colors. Based on a trial-and-error approach, in the case of protanopia we select $a = -0.5$, and in the case of deuteranopia $b = -0.5$. The domain of values for the adjustable parameters $\mu_{D,i,1}$, $\mu_{D,i,2}$, $\mu_{D,i,3}$, and $\mu_{D,i,4}$ is the interval [0, 1].
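The naturalness index of Eq. (9) is simply the mean per-pixel Euclidean color displacement between the original and the recolored image; a short numpy sketch:

```python
import numpy as np

def naturalness_index(original, recolored):
    """Eq. (9): mean Euclidean distance between original and recolored
    pixel colors; lower values mean a more natural recoloring."""
    diff = (np.asarray(original, dtype=float)
            - np.asarray(recolored, dtype=float))
    return np.linalg.norm(diff, axis=-1).mean()

# Identical images give J = 0; a uniform +3 shift on one channel gives J = 3.
a = np.zeros((4, 4, 3))
b = a.copy()
b[..., 0] += 3.0
j = naturalness_index(a, b)
```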

Fig. 3. (a) Painting 1 (by Terence Clarke), (b) painting 2 (by Stratis Axiotis), (c) painting 3 (by
Ektor Doukas), (d) painting 4 (by Raphael), (e) painting 5 (by William Redmore Bigg), and
(f) painting 6 (by Angeliki Leousi-Karatza).

Fig. 4. Semantic segmentation results for the first three paintings of Fig. 3.

For the differential evolution, the parameters are: $F = 0.8$, $CR = 0.9$, a population size equal to 50, and a maximum number of iterations $t_{max} = 100$. The parameter settings of the other two methods are the same as reported in the respective references.
The results of the transfer-learning-based semantic segmentation for the first three paintings are illustrated in Fig. 4. It can be seen that the obtained object recognition is sufficiently accurate.

Fig. 5. Results on painting 3 (protanopia): (a) recolored (proposed method), (b) recolored as
seen by a protanope (proposed method), (c) recolored (method of Huang et al. [8]), (d) recolored
as seen by a protanope (method of Huang et al. [8]).

Fig. 6. Results on painting 4 (protanopia): (a) recolored (proposed method), (b) recolored as
seen by a protanope (proposed method), (c) recolored (method of Huang et al. [8]), (d) recolored
as seen by a protanope (method of Huang et al. [8]).

Fig. 7. Results on paintings 1 and 5 (deuteranopia): (a) and (b) recolored (proposed method),
(c) and (d) recolored as seen by a deuteranope (proposed method), (e) and (f) recolored (method
of Rani and Rajeev [14]), (g) and (h) recolored as seen by a deuteranope (method of Rani and
Rajeev [14]).

Fig. 8. Dynamic behavior of the objective function E during the application of the DE algorithm for paintings 2 and 6, in the cases of protanopia (left) and deuteranopia (right).

Figures 5 and 6 visualize the recolored paintings and the corresponding protanope simulations for paintings 3 and 4, as obtained by the proposed algorithm and by the method in [8]. Note that the recolored paintings obtained by our approach use colors that are closer to the respective colors of the original paintings shown in Fig. 3.
The same visual characteristics are observed in Fig. 7, where the proposed method is compared with the recoloring algorithm developed in [14] for the case of deuteranopia on paintings 1 and 5.
Figure 8 depicts the minimization of $E$ as a function of the iteration during the DE implementation for paintings 2 and 6.

Table 1. Values of the naturalness index (J) obtained by the three algorithms for the six images, for the cases of protanopia and deuteranopia (best values for each case in bold).

Painting | Protanopia: Proposed | Protanopia: Huang et al. [8] | Deuteranopia: Proposed | Deuteranopia: Huang et al. [8] | Deuteranopia: Rani and Rajeev [14]
1 | 3.7168 | 12.3482 | 8.7589  | 13.8699 | 87.8429
2 | 9.0473 | 9.8727  | 5.0898  | 5.1581  | 25.8787
3 | 7.4945 | 20.0707 | 5.8445  | 2.1712  | 77.3267
4 | 5.5553 | 23.8336 | 9.3129  | 3.3042  | 83.9168
5 | 3.9349 | 14.6587 | 18.5893 | 63.8226 | 90.4925
6 | 1.9294 | 22.8655 | 8.0602  | 9.3072  | 89.9749

Quantitative comparative results in terms of the naturalness index of Eq. (9) are reported in Table 1. The results are quite convincing: apart from two experimental cases in deuteranopia (paintings 3 and 4), the proposed algorithm significantly outperforms the other two algorithms. The main conclusion drawn from the table is that the naturalness index is effectively minimized by the proposed recoloring approach, yielding colors that do not look odd to a viewer with normal color vision, while also maintaining an aesthetically pleasing result for dichromat viewers. This is supported by both the visual and the quantitative results.

4 Conclusions

The problem investigated in this paper was the recoloring of digitized art paintings in order to improve the color perception of dichromat viewers who suffer from protanopia or deuteranopia. Initially, a transfer-learning-based semantic segmentation mechanism was applied to extract meaningful color information. The colors of the objects recognized by the deep network were simulated to match the way a dichromat perceives them. Based on this information, the pixels of the input image were divided into two sets: the first set included colors that must be recolored, whereas the second included colors that remained intact. This process reduces the number of modified pixels, yielding a recolored image that retains its natural appearance. Then, a specially designed objective function was minimized using differential evolution. The optimization parameters are elements of a matrix transformation involved in the recoloring process. The structure of the objective function favored the selection of colors that keep the color differences in the original image similar to those in the dichromat simulation of the
recolored image. The method was tested in several experimental cases. The simulation results indicated that the proposed methodology can naturally modify the original colors, enhancing the perception of color-blind viewers.

Future efforts could extend the present algorithm by developing more sophisticated learning approaches and effective color transformations, in order to address the protanopia, deuteranopia, and tritanopia effects, as well as the color adaptation procedures.

Acknowledgments. This research is co-financed by Greece and the European Union (European
Social Fund-ESF) through the Operational Programme “Human Resources Development, Edu-
cation and Lifelong Learning 2014–2020” in the context of the project “Color perception
enhancement in digitized art paintings for people with Color Vision Deficiency” (MIS 5047115).

References
1. Doliotis, P., Tsekouras, G.E., Anagnostopoulos, C.N., Athitsos, V.: Intelligent modification
of colors in digitized paintings for enhancing the visual perception of color-blind viewers. In:
The Proceedings of the 5th International Conference on Artificial Intelligence Applications
and Innovations, pp. 293–301 (2009)
2. Girshick, R.: Fast R-CNN. In: The Proceedings of the 2015 IEEE International Conference
on Computer Vision (ICCV), Santiago, Chile, pp. 1440–1448 (2015)
3. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object
detection and semantic segmentation. In: The Proceedings of the 2014 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, pp. 580–587
(2014)
4. Gonthier, N., Gousseau, Y., Ladjal, S., Bonfait, O.: Weakly supervised object detection in
artworks. In: The Proceedings of the 2018 European Conference on Computer Vision
(ECCV 2018), Munich, Germany, pp. 692–709 (2018)
5. Hassan, M.F., Paramesran, R.: Naturalness preserving image recoloring method for people
with red–green deficiency. Sig. Process. Image Commun. 57, 126–133 (2017)
6. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: The
Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR), Las Vegas, NV, USA, pp. 770–778 (2016)
7. Ho, G.W.: Color, vision, and art: teaching, learning, and making art with color blind
awareness. M.Sc. thesis, University of Florida (2014)
8. Huang, J-B., Chen, C.S., Jen, T.S., Wang, S.J.: Image recolorization for the color blind. In:
The Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP 2009), pp. 1161–1164 (2009)
9. Huang, J.-B., Tseng, Y.-C., Wu, S.-I., Wang, S.-J.: Information preserving color
transformation for protanopia and deuteranopia. IEEE Signal Process. Lett. 14(10), 711–
714 (2007)
10. Johnston-Feller, R.: Color Science in the Examination of Museum Objects. The Getty
Conservation Institute, Los Angeles (2001)
11. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional
neural networks. Commun. ACM 60(6), 84–90 (2017)
12. Marmor, M.F., Lanthony, P.: The dilemma of color deficiency and art. Surv. Ophthalmol. 45
(5), 407–414 (2001)
13. Price, K.V., Storn, R.M., Lampinen, J.A.: Differential Evolution: A Practical Approach to
Global Optimization. Springer, Berlin (2005)
14. Rani, S.S., Rajeev, R.: Colour transformation for deuteranopic viewers. Int. J. Control Theor.
Appl. 9(10), 4527–4535 (2016)
15. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with
region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1137–1149 (2017)
16. Ribeiro, M., Gomes, A.J.P.: Recoloring algorithms for colorblind people: a survey. ACM
Comput. Surv. 52(4), 72:1–72:37 (2019)
17. Stockman, A., Sharpe, L.T.: The spectral sensitivities of the middle- and long-wavelength-
sensitive cones derived from measurements in observers of known genotype. Vision. Res.
40, 1711–1737 (2000)
18. Tsekouras, G.E., Chatzistamatis, S., Anagnostopoulos, C.N., Makris, D.: Color adaptation
for protanopia using differential evolution-based fuzzy clustering: a case study in digitized
paintings. In: The Proceedings of the 2018 IEEE Conference on Evolving and Adaptive
Intelligent Systems (EAIS 2018), Rhodes Island, Greece (2018)
19. Uijlings, J.R.R., van de Sande, K.E.A., Gevers, T., Smeulders, A.W.M.: Selective search for
object recognition. Int. J. Comput. Vision 104, 154–171 (2013)
20. Vienot, F., Brettel, H., Mollon, J.: Digital video colourmaps for checking the legibility of
displays by dichromats. Color Res. Appl. 24(4), 243–252 (1999)
21. Yin, R., Monson, E., Honig, E., Daubechies, I., Maggioni, M.: Object recognition in art
drawings: transfer of a neural network. In: The Proceedings of the 2016 IEEE International
Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China (2016)