
Artificial Intelligence Review (2023) 56:9605–9638
https://doi.org/10.1007/s10462-023-10405-7

Deep learning: survey of environmental and camera impacts on internet of things images

Roopdeep Kaur1 · Gour Karmakar1 · Feng Xia2 · Muhammad Imran1

Accepted: 16 January 2023 / Published online: 6 February 2023
© The Author(s) 2023

Abstract
Internet of Things (IoT) images are attracting growing attention because of their wide range of applications, which require visual analysis to drive automation. However, IoT images are predominantly captured from outdoor environments and thus are inherently impacted by camera and environmental parameters, which can adversely affect the corresponding applications. Deep Learning (DL) has been widely adopted in the fields of image processing and computer vision and can reduce the impact of these parameters on IoT images. Although many DL-based techniques are available in the current literature for analysing and reducing the environmental and camera impacts on IoT images, to the best of our knowledge, no survey paper presents state-of-the-art DL-based approaches for this purpose. Motivated by this, for the first time, we present a Systematic Literature Review (SLR) of existing DL techniques available for analysing and reducing environmental and camera lens impacts on IoT images. As part of this SLR, firstly, we reiterate and highlight the significance of IoT images in their respective applications. Secondly, we describe the DL techniques employed for assessing the environmental and camera lens distortion impacts on IoT images. Thirdly, we illustrate how DL can be effective in reducing the impact of environmental and camera lens distortion in IoT images. Finally, along with a critical reflection on the advantages and limitations of the techniques, we also present ways to address the research challenges of existing techniques and identify further research directions to advance the relevant research areas.

Keywords Camera lens distortions · Deep learning · Environmental impacts · Internet of things · Image quality

* Gour Karmakar
gour.karmakar@federation.edu.au
Roopdeep Kaur
roopdeepkaur@students.federation.edu.au
Feng Xia
f.xia@ieee.org
Muhammad Imran
m.imran@federation.edu.au
1 Institute of Innovation, Science and Sustainability, Federation University Australia, Ballarat, VIC 3353, Australia
2 School of Computing Technologies, RMIT University, Melbourne, VIC 3000, Australia


1 Introduction

Recently, a large volume of data has been generated by IoT (Gubbi et al. 2013) devices. The analysis of such big data is a cumbersome process. Therefore, an advanced machine learning technique called DL (LeCun et al. 2015) is required to extract value from these IoT data. DL techniques have been widely used in computer vision, especially for visual classification tasks in which the latest algorithms perform better than humans. Various works use Deep Neural Networks (DNN) (Yi et al. 2016) to classify high-resolution images into different categories (Krizhevsky et al. 2012; Simonyan and Zisserman 2014; Lee et al. 2015). Therefore, DL algorithms are in great demand in image processing. There are various cloud-based services by Google, Amazon, and Microsoft in which users do not need to train their own models for image processing. Recently, Google released the Cloud Vision Application Programming Interface (API) (Ofoeda et al. 2019) for the analysis of images. However, Hosseini et al. (2017) demonstrated that Google's API is not robust to noise in images. The Google API can detect objects and faces and also read out words in images. Nevertheless, when noise is added to images, the same API produces different outputs. If the noise from noisy input images is filtered, the API produces the same output as for the original images. Therefore, the efficiency of DL algorithms can be enhanced by pre-filtering noisy input images.
A DNN, namely AlexNet (Lu et al. 2019), has also achieved good performance in image processing. However, in practical applications, various kinds of impacts occur during image capturing. These impacts can be weather conditions, lighting conditions, or camera parameter impacts. Weather condition parameters include rain, snow, fog, wind, shadow, and darkness (Kapoor et al. 2016), while camera parameters are lens blurriness, dirtiness, and distortions (e.g., barrel distortion) (Buades et al. 2005).
Research shows these impacts can affect the performance of DL algorithms (Temel et al. 2019, 2017). Consequently, it is necessary to consider the impact of these parameters while using DL techniques for image processing. For this, there exist some DL-based techniques to analyse and reduce both environmental (Kazerouni et al. 2019; Tschentscher et al. 2017) and camera impacts (Lu et al. 2017; Liu et al. 2018) on IoT images. The existence of these techniques necessitates presenting their state-of-the-art approaches in a survey paper so that the research community can perceive the impacts and address them accurately. In this paper, for the first time, we present an SLR (Kitchenham et al. 2009) on DL-based techniques that are used to analyse and reduce the impacts of environmental and camera parameters on IoT images.
The contributions of the paper are summarised as follows:

1. We emphasise the significance of IoT images and how they are used in their respective applications.
2. We compile state-of-the-art DL-based approaches for the analysis of the impacts of dynamic outdoor environments and camera lenses on the quality of IoT images.
3. We incorporate the latest works on DL-based techniques for the reduction of both environmental and camera lens impacts on image quality.
4. We present further research directions to advance the impact analysis and reduction techniques for IoT images.

The structure of the paper is organised as follows: the review methodology is described in Sect. 2. Section 3 emphasises the applications of IoT images in various application domains. In Sect. 4, we provide diverse DL techniques for the analysis of impacts on IoT images. The reduction of environmental and camera parameter impacts on IoT images is demonstrated in Sect. 5. Section 6 presents further research challenges. Finally, the conclusion is given in Sect. 7.

2 Review methodology

The SLR approach has been used to conduct this literature survey. SLRs are logical and systematic methods of identifying, evaluating, and interpreting existing approaches related to a particular research problem. As it builds on primary research, an SLR can reduce the possibility of bias and is regarded as secondary research (Keele 2007). Therefore, in this paper, we opted to present the current state-of-the-art of environmental and camera impacts on DL, which is the main purpose of this survey.
The following research questions have been established to achieve the above objective:
Research Question-1 (RQ-1): What is the significance of the analysis of IoT images?
Research Question-2 (RQ-2): What are the different DL techniques used for the analysis
of various environmental and camera impacts on IoT images?
Research Question-3 (RQ-3): Which DL techniques are being used to reduce the envi-
ronmental and camera effects on IoT images?
Note that, considering the purpose and scope of the study, we have selected the above-mentioned three questions, which emphasise the significance, motivation and contributions of the study. To answer the research questions outlined above, appropriate keywords, research databases, and inclusion and exclusion criteria were selected. As the final search string, we used the following search keywords: “IoT images”, “IoT image capturing”, “IoT applications”, “environmental impacts”, “camera impacts”, “machine learning”, and “deep learning”. We also combined these keywords to get better results. Research databases such as Science Direct, IEEE Xplore, Google Scholar, ACM, and Springer, as well as websites such as www.statista.com and www.towardsdatascience.com, were utilised to search for relevant topics.
The following inclusion and exclusion criteria were utilised:
Inclusion criteria are (i) English-language data sources, (ii) articles and papers containing the searched keywords in their text, and (iii) peer-reviewed academic sources, such as conference proceedings, journal articles, and book chapters.
In our search criteria, we excluded (i) data sources written in languages other than English, (ii) articles mentioning the search keywords only in the references, and (iii) resources published before 2002.
Search results: about 19,100 search results relevant to the keywords were returned from Google Scholar. We applied the filtering criterion of “since 2000”. Further, we focused on three main areas: (i) applications of IoT images, (ii) analysis of environmental and camera impacts, and (iii) reduction of environmental and camera impacts.
The 123 papers are arranged according to the annual publications in IoT and the various DL techniques used for the analysis and reduction of environmental and camera lens distortion impacts. The distribution of articles is represented in Fig. 1. The utilisation of DL techniques for environmental and camera impact analysis and reduction was limited in the first decade of the 21st century because the third wave of DL emerged in 2006 with a breakthrough (Bansal 2020). It has rapidly increased from 2014 onwards and peaked at 17 articles in 2017. Further consideration was given to articles that mention the impacts of environmental and camera parameters on DL algorithms, which is the principal objective of this article. This will enlighten us about the emergence and effectiveness of various DL techniques for the analysis and reduction of environmental and camera impacts. Of the 123 papers shown in Fig. 1, 36 primary articles that use DL are included in the main body of this review. The distribution of these 36 articles between the analysis and reduction of environmental and camera impacts is demonstrated in Fig. 2.

Fig. 1 Annual publications in IoT and various DL techniques used for the analysis and reduction of environmental and camera lens distortion impacts
There are two major groups of selected primary articles, representing the analysis of environmental and camera impacts and the reduction of environmental and camera impacts. According to the search results, there are more articles on the analysis and reduction of environmental impacts than on camera impacts. Most research addresses the reduction of environmental impacts, at 44%. The second-highest number of articles (27%) is on the analysis of environmental impacts. Figure 2 also shows there has been a substantial amount of research, i.e., 14% and 13%, on the analysis and reduction of camera impacts, respectively. Active research is ongoing on the reduction of environmental impacts, and its popularity is increasing over the years. This shows the ever-increasing demand for DL algorithms in the analysis and reduction of environmental impacts. On the contrary, there is still a need to explore more DL techniques for the analysis and reduction of camera impacts.

Fig. 2 Percentage of articles published in analysis and reduction of environmental and camera impacts
The research questions mentioned above are addressed in the following sections. RQ-2 is elaborated in Sect. 4. In Sect. 5, RQ-3 is covered in detail. RQ-1 is addressed in the following section.

3 Applications of IoT images

The concept of the IoT came from a United States research scientist, Peter T. Lewis, in 1985 (Sharma 2016). IoT is regarded as the future of the Internet because the backbone of IoT is the Internet. The vision of IoT is to connect all smart devices (either things or humans carrying/using those smart devices) to the Internet. Figure 3 shows an example of IoT where everything (e.g., city, field, home, hospitals, vehicles) is connected to the Internet. Wireless technologies, such as 2G/3G/4G, WiFi, Bluetooth, etc., have been used in heterogeneous IoT applications, where billions of devices are connected by wireless communication technologies. 2G networks (currently covering 90% of the world's population) are for voice, 3G (currently covering 65%) is for data and voice, and 4G is for broadband internet experiences (since 2012). In recent years, 4G technology has significantly enhanced the capability of cellular networks to provide IoT devices with Internet access (Akpakwu et al. 2017). In 2012, Long-Term Evolution (LTE) (Ghosh et al. 2010) emerged as the fastest and most consistent type of 4G compared to other technologies, such as Worldwide Interoperability for Microwave Access (WiMax) (Taylor 2011), ZigBee (Nokia 2017), SigFox (SigFox 2018), Long Range (LoRa) (Vangelista et al. 2015), etc. The 5G networks and standards are expected to solve challenges facing 4G networks, such as more complex communication, computational capability, intelligence, etc., to meet the needs of Industry 4.0 (Zhang and Chen 2020) and smart environments (Costanzo and Masotti 2017; Li et al. 2018). The next-generation mobile communication and network technology is 6G. The development of the 6G mobile communication system (after the previous five generations from 1G to 5G) will be a revolution in mobile communication. 6G will be a holographic, multidimensional network with the ability to expand coverage over air, sea, and land, and integrate with AI, IoT, and blockchain (Lu and Ning 2020). AI, 5G/6G, and quantum computing are rising technologies that will drive innovations and contribute significantly to the future development of Industry 4.0, enabling industries to move to a higher level. The new technology is capable of integrating both new and classical Industry 4.0. Future industries and technologies will be transformed by artificial intelligence, 6G, and quantum computing (Sigov et al. 2022).

Fig. 3 IoT. Note, other connections indicate Wi-Fi/cellular communications/IoT network connections
IoT devices are considered smart because they are assumed to have human-like capabilities. These capabilities include intelligent processing, including inferencing for decision making, and two-way interaction with other smart devices. These capabilities also incorporate data transmission, joining a group for a particular mission, and leaving that group after mission completion. For this reason, these devices have processors, transmitters, memories, Global Positioning System (GPS) (El-Rabbany 2002), Radio-Frequency Identification (RFID) (Landt 2005) and the relevant software and tools that enable these devices to do intelligent processing.
IoT revenue worldwide has been ascending exponentially since 2020 and will keep increasing in the future, as shown in Fig. 4. Various sources suggest that it will be more than 1.5 Trillion United States Dollars (USD) by 2030 [36], (Morrish and Hatton 2020; Biswal 2021). For example, in 2020, IoT revenue was USD 144.95 billion, but by 2028, it will increase to USD 770.52 billion (Som and Kayal 2022).

Fig. 4 Global IoT revenue from 2020 to 2030 (in billion US dollars) (Som and Kayal 2022)

Currently, IoT is burgeoning in most application domains like industry (Javed et al. 2018), such as digital industry, robots, smart grid, and energy. It is also highly utilised in applications such as smart cities (Zhu et al. 2015), smart health (Islam et al. 2015), smart homes (Perera et al. 2014), smart transportation (Da Xu et al. 2014), smart agriculture (Talavera et al. 2017) and others (refer to Fig. 5). Figure 5 shows that most IoT usage is in smart cities (26%), smart industry (24%), smart health (20%), and smart homes (14%). We expect smart transportation, smart utilities, and smart health (wearables) will increase the adoption of IoT technologies in the future. IoT technology may facilitate intelligent transportation and further enhance greenness and sustainability (Golpîra et al. 2021). Examples of such IoT applications are presented in Fig. 3. In particular, for smart homes, as shown in Fig. 3, mobile phones, refrigerators, air conditioners and microwaves are connected to smart watches through Wireless-Fidelity (Wi-Fi), cellular communications or IoT network connections (these connections shown in Fig. 3 are referred to as other connections). Similarly, in smart health, healthcare service providers including hospitals and clinics are connected to peripheral devices (for example, blood pressure monitors, heart rate monitors, webcams, headphones), research departments, and other handheld devices through wired, Bluetooth, or other connections as depicted in Fig. 3. Recently, the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) (Pal et al. 2020) has been influencing different aspects of life and industry. IoT plays a crucial role in managing this disease, as an IoT-based system is able to manage and balance the impact of SARS-CoV-2 in smart cities through cluster identification, which identifies people who are not wearing face masks (Herath et al. 2021). Other examples include monitoring vaccine temperature using IoT (Almars et al. 2022).
Since IoT contains sensor networks and thus has sensing capability, image processing becomes an indispensable part of many IoT applications. For automation, understanding the circumstances around the application-focused things (e.g., application-specific scenarios, events) and dealing with the uncertainty associated with applications, image processing has been essential for IoT-based smart applications. For this reason, like other data types such as numeric and textual data, IoT images are being applied in various smart applications. Some contemporary IoT applications are provided in Table 1.

Fig. 5  IoT utilisation in various application domains

Table 1 Contemporary IoT applications

2019 (Li et al. 2019). Purpose: an image communication system with low complexity and low energy requirements for IoT-based monitoring applications. Application domain: smart agriculture and transportation. Data type: image. IoT N/W: Wi-Fi. ICT platform: cloud based. IoT devices: sensors. Impact (type): image transmission insecurity.

2021 (Piccialli et al. 2021). Purpose: an IoT-based system which can predict parking occupancy in the streets and enhance the flow of cars in overcrowded areas. Application domain: smart transportation and city traffic. Data type: numerical. IoT N/W: LPWAN. ICT platform: non-cloud based. IoT devices: sensors. Impact (type): air pollution and noise.

2021 (Mabrouki et al. 2021). Purpose: a weather (temperature, humidity) monitoring system for remote places with Arduino-based wireless sensor networks. Application domain: smart environment. Data type: numerical. IoT N/W: Wi-Fi. ICT platform: non-cloud based. IoT devices: temperature, humidity, and CO2 sensors. Impact (type): pollutants.

2021 (Sahu et al. 2021). Purpose: a security mechanism for e-healthcare applications. Application domain: smart health. Data type: graphical images. IoT N/W: Wi-Fi. ICT platform: cloud based. IoT devices: heart rate sensors. Impact (type): insecurity.

2021 (Xie et al. 2021). Purpose: an environment monitoring system in shallow sea. Application domain: smart environment. Data type: numerical. IoT N/W: NB-IoT. ICT platform: cloud based. IoT devices: environment monitoring sensors. Impact (type): unreliability.

2021 (Savic et al. 2021). Purpose: demonstration of anomaly detection for cellular IoT networks, which can be utilised in smart logistics, through a real-world data study. Application domain: smart logistics. Data type: numerical. IoT N/W: NB-IoT. ICT platform: non-cloud based. IoT devices: CIoT (Cellular IoT) user equipment. Impact (type): anomaly detection.

Table 1 includes applications associated with most of the application domains (e.g., smart agriculture, smart health, smart transportation) shown in Fig. 5.
Agriculture is one of the sectors in which the combination of IoT and image processing leads to efficient and cost-effective methods. These methods increase production to a great extent. IoT images can be utilised in smart agriculture to predict crop yield and to detect various crop diseases. Temperature, humidity, and soil moisture content can be predicted using the combination of IoT sensing networks and image processing networks. Kapoor et al. (2016) presented an approach that determines different environmental (under different sunlight conditions) and man-made factors (pesticides, insecticides) which could impact agricultural production. In this approach, different sensors and image-capturing cameras are assembled on an Arduino and the data are captured from these sensors. The camera output is stored on a Secure Digital (SD) card and then further processing is done with algorithms in MATLAB. A database is created for the histogram analysis. Four sets of the same plant are taken. A healthy set of plants is kept in set 1. The plants in set 2 are kept in extreme sunlight under moist conditions. Low to moderate sunlight is provided to set 3 with little to no water. The fourth set is treated with excesses of N, P, and K fertilizer under normal environmental conditions. There were visible morphological differences between each of these sets. As a result of these visible morphological differences, each plant can be analysed through the use of histograms. Depending on the set, each histogram plot will be different. Thus, the algorithm analyses the given image and identifies the specific problem that affected the given plant, as sketched below. Therefore, information collected concerning temperature, humidity, and moisture can be used to predict the health of the plant.
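Kapoor et al. implemented this histogram analysis in MATLAB; purely as an illustration of the idea, a small Python sketch that assigns a plant image to the closest stored reference histogram might look as follows (the function names and the chi-squared distance are illustrative choices, not details from the paper):

```python
import numpy as np

def grey_histogram(image, bins=64):
    """Normalised grey-level histogram of an 8-bit image (H x W array)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def classify_plant(image, reference_histograms):
    """Assign the image to the reference set whose histogram is closest.

    reference_histograms maps a condition label (e.g. 'healthy_set1',
    'extreme_sunlight_set2') to a histogram from grey_histogram().
    """
    h = grey_histogram(image)

    def chi2(p, q):
        # Chi-squared distance between histograms; smaller = more similar.
        return 0.5 * np.sum((p - q) ** 2 / (p + q + 1e-10))

    return min(reference_histograms, key=lambda label: chi2(h, reference_histograms[label]))
```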
IoT images are an indispensable part not only of agriculture but also of smart health. The combination of IoT images and other networks gives us the option of home-based healthcare and telehealth services. The detection of diseases is easily possible through these services while elderly patients remain at home. For example, Liu et al. (2019) demonstrated an approach to perform the acquisition and processing of teeth images with the combination of IoT networks and DL techniques. The approach decreases the mean diagnosis time by 37.5%, which increases the number of patients who can be treated by 18.4%.
Moreover, IoT images are also beneficial for smart transportation, as mentioned in Table 1, in which the early prediction of parking spaces is performed and linked to a mobile app. Users can learn about vacant parking spaces in congested areas through that mobile app. This results in less fuel consumption and saves time as well. Similar to the parking prediction, a traffic prediction technique was introduced by Goel et al. (2017) in which the authors presented a smart traffic monitoring system with Closed-Circuit Television (CCTV) (Cieszynski 2006) cameras. Based on the traffic, an alert is given to people about highly congested roads and alternate paths are suggested to them. As in smart transportation and smart health, IoT images are also utilised in wildlife or forest protection. These images are used to classify animals through IoT image networks and DL models so that we can save endangered species of animals and plants.
IoT images are also an indispensable part of the smart industry, where they enhance the manufacturing of products and thus increase overall production. Hsu et al. (2017) proposed a system for the automatic production of steel products.
Beyond the above applications, IoT images can also be used in smart cities, smart homes, transportation, and logistics. Hence, to apply IoT image networks efficiently for the above-mentioned applications, there is a need for good-quality and reliable images. Since the efficacy of IoT image-based applications is heavily affected by image quality, it is crucial to ensure that good-quality images are collected through the IoT networks.
Lighting, time, the position of the camera, and weather conditions (e.g., rain, wind, snow, fog) are all factors that affect capturing images in an outdoor environment. In windy weather, for instance, objects may lose their clarity, which results in blurry images. Other important factors include the time, camera setting, and distance at which images are taken in uncontrolled conditions. There are also differences between images taken in the morning, afternoon, and at night, which are caused by differences in lighting conditions (Kapoor et al. 2016), light sources, and the number of shadows in the image, which impact the processing accuracy. As a result, outdoor images require consideration of all the above-mentioned effects in the analysis (Fathi Kazerouni et al. 2019). Without considering all possible outdoor environmental impacts, decisions derived from outdoor image analysis can be erroneous. Therefore, many techniques have become available over the decades to analyse the environmental and camera impacts on images. In these techniques, the quality of images is objectively assessed by many image quality assessment metrics such as Peak Signal-to-Noise Ratio (PSNR) [60], Mean Square Error (MSE) (Aziz et al. 2015), Oriented FAST and Rotated BRIEF (ORB) (Rublee et al. 2011) and the Structural Similarity Index (SSIM) (Wang et al. 2004; Kaur et al. 2022). Research also shows the prominent evolution and ever-increasing adoption of DL applications to analyse the environmental and camera impacts on IoT images. Consequently, various DL-based techniques for environmental and camera impact analysis on IoT images are described in the following section, which also answers our RQ-2.
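As a point of reference, MSE and PSNR from the list above are simple to compute; the following is a minimal NumPy sketch (SSIM is more involved, and an implementation is available, for instance, as structural_similarity in scikit-image):

```python
import numpy as np

def mse(reference, distorted):
    """Mean Square Error between two equally sized images."""
    ref = reference.astype(np.float64)
    dst = distorted.astype(np.float64)
    return np.mean((ref - dst) ** 2)

def psnr(reference, distorted, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the reference."""
    err = mse(reference, distorted)
    if err == 0:
        return float("inf")  # images are identical
    return 10.0 * np.log10(max_val ** 2 / err)
```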

4 DL techniques used for analysing the impacts on IoT images

There is a variety of DL techniques (depicted in Table 2) that are used for analysing the impacts on IoT images. Table 2 summarises the purpose, research challenges, impact type, DL techniques, and dataset used for each approach. For example, Table 2 shows that Zhou et al. (2017) analysed the impact of lens blurriness using DNN classifiers (LeNet-5, MatConvNet) and the MNIST, CIFAR-10 and ImageNet datasets. Some approaches to alleviate those impacts are also articulated in that work. These impacts can be categorised into two major classes: (i) environmental and (ii) camera lens distortions.
There are diverse DL-based approaches, such as the Convolutional Neural Network (CNN) (Albawi et al. 2017), Recurrent Neural Network (RNN) (Pascanu et al. 2013), and many more, which are utilised to analyse and reduce the impact of environmental and camera distortion parameters. To illustrate, the basic structure of the VGG-16 CNN is delineated in Fig. 6. CNNs are a variety of artificial neural networks used in image and text recognition (Jain et al. 2007), classification, segmentation, face detection (Matsugu et al. 2003) and other tasks. A typical CNN consists of three types of layers: (i) a convolutional layer, (ii) a pooling layer, and (iii) a Fully Connected (FC) layer. Feature extraction is performed by the first two layer types, convolution and pooling, while the FC layer maps the extracted features to the final output. The convolutional layer plays a crucial role, performing mathematical operations like convolution. In Fig. 6, convolution layer 1 is fed with a 224 x 224 RGB image input. The black block represents the max pooling layer, which calculates the maximum value in every feature map patch. Following five blocks of convolutional layers, there are three FC layers. Lastly, there is a soft-max layer that performs multi-class classification (Simonyan and Zisserman 2014).
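The paper describes this architecture only in prose. As a concrete illustration, the sketch below (assuming PyTorch and torchvision, which ship a VGG-16 implementation) instantiates the network and pushes a single 224 x 224 RGB input through it; weights are random here, and pretrained ImageNet weights can be requested via torchvision's weights argument:

```python
import torch
from torchvision import models

vgg16 = models.vgg16()  # randomly initialised VGG-16

# `features` holds the convolution + max-pooling stack described above,
# and `classifier` holds the three fully connected layers.
print(vgg16.features)
print(vgg16.classifier)

# One 224 x 224 RGB image (batch of 1) yields one score per ImageNet class;
# soft-max turns the scores into multi-class probabilities.
x = torch.randn(1, 3, 224, 224)
probs = torch.softmax(vgg16(x), dim=1)  # shape: (1, 1000)
```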

Table 2 Different DL techniques for the analysis and reduction of impacts due to environmental and camera parameters during image capturing

To study the performance of different DNN classifiers, 2016 (Dodge and Karam 2016). Purpose: examine the impact of different types of image distortion, such as JPEG compression and noise, on the classification of images by DNNs. Research challenge addressed: the impact of image distortion on four different neural networks is analysed, and the Visual Geometry Group-16 (VGG-16) (Alippi et al. 2018) network outperforms the others for various distortions. Type of impact: contrast, JPEG compression, noise, blur (camera and environmental impact). DL type: Caffe Reference, VGG-CNN-S, VGG-16, GoogleNet. Dataset used: ImageNet (non-synthetic) (Russakovsky et al. 2015).

To enhance the Google cloud robustness, 2017 (Hosseini et al. 2017). Purpose: improve Google Cloud Vision API's robustness to noise. Research challenge addressed: improvement of system robustness to noise by adding a noise filter. Type of impact: Gaussian noise (environmental impact). DL type: Google's cloud vision API (DNN). Dataset used: ImageNet (Deng et al. 2009), Faces94 [70] (both).

Testing of DNN efficiency, 2017 (Lu et al. 2017). Purpose: check the effectiveness of DNN with varying camera distance and angle. Research challenge addressed: improvement in DNN for the classification of images under the impact of camera distance and angle. Type of impact: distance and angle (camera impact). DL type: YOLO detector (DNN). Dataset used: MSCOCO (Lin et al. 2014), German traffic signs dataset (Stallkamp et al. 2011) (synthetic).

To check the efficacy of DNN classifiers, 2017 (Zhou et al. 2017). Purpose: analyse the impact of blurriness and noise on DNN classifiers. Research challenge addressed: provided approaches to reduce the effects of image distortion. Type of impact: blurriness (camera impact) and noise (environmental impact). DL type: LeNet-5, MatConvNet (Deep Convolutional Neural Network (DCNN)). Dataset used: MNIST, CIFAR-10 (Krizhevsky et al. 2009), ImageNet (Deng et al. 2009) (both).

To raise the capability of classifiers under image distortions, 2017 (Tschentscher et al. 2017). Purpose: enhance the efficiency of classifiers under image distortions such as rain, cloud and snow. Research challenge addressed: enhanced the efficiency of the classifier, reaching 99.72%. Type of impact: fog, sun, rain, snow (environmental impact). DL type: Unreal Engine 4 (AI). Dataset used: real and virtual self-generated images (both).

Plant recognition system by DL technique, 2019 (Kazerouni et al. 2019). Purpose: build a plant recognition system using DNN under environmentally challenging conditions. Research challenge addressed: the accuracy of their technique is 99.5% compared to other state-of-the-art DL techniques (Bay et al. 2006). Type of impact: wind and rain (environmental impact). DL type: DL (Convolutional Neural Networks). Dataset used: Modern Natural Plant Dataset (MNPD) (non-synthetic).

A framework for improving the performance of lane and 2D object detection under bad weather conditions in terms of both low memory and accuracy, 2021 (Lee et al. 2021). Purpose: a strategy is presented that minimises image enhancement's detrimental effects while improving high-level task performance to a task-driven end. Research challenge addressed: a task-driven enhancement network is developed for less memory and computational cost, making it suitable for embedded systems of autonomous cars. Type of impact: rain and haze (environmental impact). DL type: Dense Convolutional Network (Huang et al. 2017). Dataset used: TuSimple dataset, KITTI (Geiger et al. 2012), and RID dataset (Li et al. 2019) (both).

A gradient flow based deep residual network for improving scene visibility degraded by fog, 2021 (Suganya and Kanagavalli 2021). Purpose: transmission map determination and foggy haze removal using a residual network based on the estimation of the ratio between the transmission map and the foggy image. Research challenge addressed: associating the input of haze and defogged images with maximum accuracy by generating the transmission map. Type of impact: fog (environmental impact). DL type: deep residual network. Dataset used: Image Defogging Research at LIVE [80], FRIDA3 (Tarel 2022) (both).

A transfer learning based general image enhancement model used for images degraded by specific natural environmental parameters such as rain and snow, 2021 (Yin and Ma 2021). Purpose: compare the efficacy of a general image enhancement model for rain, snow and underwater images with individual image enhancement models for rain, snow and underwater images. Research challenge addressed: address the general issues of limited training datasets and computational complexity. Type of impact: rain and snow (environmental impact). DL type: transfer learning. Dataset used: Outdoor-Rain (Li et al. 2019), Snow 100K (Liu et al. 2018), Underwater Image Enhancement Benchmark (UIEB) (Li et al. 2019) (synthetic).

An end-to-end model based on transformers that can restore an image damaged by any weather condition, 2022 (Valanarasu et al. 2022). Purpose: develop a single encoder-decoder based transformer that provides an efficient solution to the problem of adverse weather removal. Research challenge addressed: reduce the number of transformer parameters compared with approaches that use multiple encoders specific to each weather removal task. Type of impact: rain, fog, and snow (environmental impact). DL type: Trans-Weather (transformer). Dataset used: Test1 dataset (Li et al. 2019, 2020), the RainDrop test dataset (Qian et al. 2018) and the Snow100k-L test set (Liu et al. 2018) (both).

Comparison of three variants of the You Only Look Once (YOLO) object detection algorithm on vehicle detection models, 2022 (Boppana et al. 2022). Purpose: investigate single-shot detection, i.e., the YOLO algorithm with its variations, to detect vehicles in real time. Research challenge addressed: how to find benchmarking results for improving vehicle detection algorithms. Type of impact: rain, snow and haze (environmental impact). DL type: YOLO. Dataset used: AAU Extreme Weather Dataset (non-synthetic) (Bahnsen and Moeslund 2018).

Fig. 6  Basic structure of VGG16 CNN Model

The CNN mentioned in Fig. 6 is one example of a DL-based technique that is used for
the analysis and reduction of environmental and camera impacts. There exist various other
DL techniques which are discussed in the following sections.

4.1 Assessing environmental impacts on IoT images

To improve the reliability and effectiveness of the IoT applications mentioned in Sect. 3, it is pivotal to assess the environmental impacts on IoT images. As alluded to before, there are various weather conditions, such as rain, shadow, darkness, etc., that can affect the quality of IoT images. The efficacy of DL algorithms is affected by such challenging conditions because the impacted images contain many distortions that increase the uncertainty of DL algorithms.
To analyse the impact of rain, sun, and fog, Tschentscher et al. (2017) implemented a new simulation system using Unreal Engine 4 to enhance the efficiency of a traditional parking system. The traditional parking system was based on an embedded system without any use of smart cameras, and there was an insufficient amount of data on it. Therefore, the authors created a simulation model that can be used to capture data considering different environmental and camera impacts, because a large amount of data is required for the training of DL models. For this, a virtual model of a parking system containing many rows is made. The authors developed a simulated smart camera that reproduces all the characteristics of a genuine camera, such as lens blur and lens distortion.
The camera is used to observe and store three-dimensional information about the scene. Most features of the camera are similar to a real one; for example, it creates lens distortion and supports various aperture sizes. Recorded images (grey or coloured) are then stored in memory. Then the image processing system is implemented, which can fetch the images from the memory. Unreal Engine 4 is a game engine used here to compare a real scenario with its respective simulated one. To give the system a realistic look, materials are coated using Physically-Based Rendering (PBR), because PBR-based scenarios are more realistic than their corresponding non-PBR-based counterparts. A trigger volume detects whenever any car enters or leaves the parking space. Based on that, the parking details, such as the parking space ID and whether the space is empty or occupied, are written to a log file.
The main advantage of simulated conditions is full control over the environmental parameters. The authors covered every situation, such as differences in climate and diverse daytime lighting conditions. This system has control of exposure, focal length, aperture opening, and focus distance while capturing images. Some post-processing steps are implemented to reduce the effects of lens distortion, motion blur, and image noise. The lighting conditions are adjusted for three different weather conditions: sunny, foggy, and cloudy. The accuracy of the video-based parking guidance system is lower in cloudy and sunny environments. For the foggy environment, accuracy depends on the density of the fog, whose impact increases with camera distance.
Snow creates one of the most difficult challenges when capturing images in an outdoor environment; accordingly, the accuracy of their system in snow is 70%. Overall, the algorithm can work in difficult weather and lighting conditions, where a classifier is trained to turn lights on or off accordingly. The performance of the new parking classifier modelled with the simulated data is verified with various recorded real scenarios. The results show the classifier is capable of handling challenging conditions such as snow and rain, but the accuracy is low (70% in snow and 93% in rain).
As mentioned before, when dealing with snow, the accuracy of this system is 70%. Also, the video captured from the simulated environment was compared with its respective real-world scenarios. However, the authors have not leveraged this simulated video to demonstrate an improvement in the efficacy of the real-world/traditional parking guidance system. Hence, to improve this system, there is a need to train the model using this simulated video data for the parking guidance system and test the trained model using real/live parking traffic condition data under different traffic scenarios and environmental impacts.
Hence, to make the parking system more flexible and cover more weather conditions such as fog, a reliable system based on a DL algorithm executed on edge devices was proposed for smart parking surveillance systems. The impacts of weather conditions, temperature, and different times of the day are considered. The metrics used to indicate the performance goals of the system are: smartness, i.e., automatic detection and pattern recognition in a parking garage scene; efficiency, i.e., online processing; and reliability, i.e., detection performance that is reliable in all kinds of environments. For testing the proposed system, a busy Angle Lake parking garage with 1160 available parking slots is used. IoT devices are installed on the sixth floor in an outdoor environment to analyse the performance of the system in dynamic outdoor environmental conditions. The authors considered mainly four environmental parameters: cloud, rain, fog, and sun. First, the system is developed and tested in a lab environment, and then it is put into use in a real-world parking garage for three months. A computer vision-based algorithm is used to detect objects, and the modified SORT algorithm (Bewley et al. 2016) is utilised on the server side to decrease the load on the edge side. The system performs best in cloudy conditions compared with the other three conditions (rain, fog, and sun), with an accuracy of 97.5% on weekdays and 99.2% on weekends. It performs well in the rainy environment relative to the sunny and foggy environments, with 93.75% accuracy on weekdays and 96.2% accuracy on weekends. The accuracy of the system is highest at night compared with daytime and the other weather conditions mentioned above because the total number of occupied parking spaces and the traffic volume were both lower at night. The accuracy of the system is lower in foggy and sunny weather conditions because, in sunny weather, there are large shadows and reflections toward the camera lens. The results show the lowest accuracy in a foggy environment, because fog degrades image quality more than the other three factors (Ke et al. 2020).
Therefore, to improve the accuracy of the system in a foggy environment, Khan et al. (2019) introduced a deep CNN-based approach for detecting smoke in normal and foggy IoT environments. Detecting smoke in smart cities is not an easy job, especially when smoke is in its initial stages or in uncertain environments such as fog. Systems that use traditional feature extraction approaches produce a large number of false alarms and lower accuracy in smoke detection. Also, existing methods are expensive and not suitable for real-time use. The authors found that VGG-16 performs better and fine-tuned the last layer of the architecture from 1000 to 4 classes to classify smoke and non-smoke in foggy and normal environments (a sketch of this kind of fine-tuning is given below). The authors created their own benchmark smoke dataset and compared the performance of their DL-based model with that of existing DL-based techniques such as GoogleNet and AlexNet. Their proposed approach achieves much better accuracy (97.72%) compared with existing state-of-the-art methods. However, the authors only considered fog as an uncertain environmental parameter; detection accuracy under other environmental parameters such as shadow and sun is not discussed. The solution to these issues is to gather data under different environmental conditions, test the system with the collected data, and exploit the insights received from the results to improve the given approach.
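Khan et al. do not publish their training code in this survey; the following PyTorch sketch shows the kind of last-layer fine-tuning the text describes, replacing VGG-16's final 1000-way layer with a 4-way one (the 4-class count follows the text, while the framework and layer index are assumptions based on torchvision's VGG-16 layout):

```python
import torch.nn as nn
from torchvision import models

# VGG-16 backbone; pretrained ImageNet weights would normally be loaded here.
model = models.vgg16()

# Freeze the convolutional feature extractor so only the new head is trained.
for param in model.features.parameters():
    param.requires_grad = False

# In torchvision's VGG-16, classifier[6] is the final Linear(4096, 1000) layer.
# Replace it with a 4-way layer for the smoke/non-smoke x foggy/normal classes.
model.classifier[6] = nn.Linear(in_features=4096, out_features=4)

# An optimiser built from the trainable parameters now updates only the new layer.
trainable = [p for p in model.parameters() if p.requires_grad]
```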
Similar to smart transportation, the need to analyse the quality of IoT images for smart agriculture is also essential, because in smart agriculture, leaves are an important part of plants. There are many factors that can change the shape and size of leaves. This can be mainly due to environmental parameters such as a change in weather conditions (e.g., snow, sunlight, rain), which results in deformed or dried leaves. Also, windy weather affects the clarity of images: most of the images will be blurry, which can be due to lens blurriness and results in the apparent deformation of leaves. Another factor is fog, which changes the contrast of an image and causes light scattering and blocking due to water droplets. Rain also leads to an increase in the intensity of images.
Besides weather, lighting conditions can also hugely impact the imaging process. This can be attributed to the fact that images taken in different parts of the day, such as the morning or the afternoon, have a lot of variation in brightness and intensity. All the above factors can change the image content to a great extent and make images much more complex.
All the challenging conditions discussed, such as weather (rain, wind, temperature, humidity), complex backgrounds, time of photographing and changes in light intensity, make plant recognition difficult.
For this reason, Kazerouni et al. (2019) presented an automatic plant recognition technique. In this technique, DL is used to identify plants under the above-mentioned challenging conditions. The authors used a CNN in which the first five layers are convolutional layers and the other three are FC layers. They applied their model to the Modern Natural Plant Dataset (MNPD), in which the training dataset contains 800 images and the test dataset consists of 200 images, in comparison with previously existing methods (Kazerouni et al. 2017, 2017) in which 664 and 336 images were used as training and test set images, respectively. The accuracy of the previously existing systems presented in Kazerouni et al. (2017) and Kazerouni et al. (2017) is 94.9% and 93.9%, respectively. The overall plant recognition accuracy of this technique under challenging conditions such as rain, wind, temperature, humidity, and light intensity is reported as 99.5%. This system is distance-independent: it does not depend on the distance between the camera and the plant. Because of its deep CNN structure, this technique is regarded as the most effective.


However, only four species of plants are considered in the recognition process, and only two environmental parameters (rain and wind) and one camera parameter (distance) are considered in the results. There is a need to consider more species of plants, such as rice and cassava, that are difficult to recognize (Hassan and Maji 2022), and other environmental parameters such as shadow, darkness, and fog.
Image noise is also one of the environmental distortions that can occur while capturing IoT images (Buades et al. 2005). An increase in temperature leads to image noise. To analyse how image quality affects computer vision applications, Dodge and Karam (2016) evaluated the impact of noise on DNN models. For this experiment, they used four different neural networks. The first network is the Caffe Reference Model (Jia et al. 2014), which is an implementation of the AlexNet network (Krizhevsky et al. 2012). The second model is the VGG-CNN-S model (Chatfield et al. 2014), similar to the Caffe Reference Model but with greater efficiency. The third model is the VGG-16 model (Simonyan and Zisserman 2014), which has more convolutional layers than the previous two models. The last model is the GoogleNet model (Szegedy et al. 2015), which uses fewer parameters than the others. The performance of VGG-16 and GoogleNet under noise is better than that of the Caffe and VGG-CNN-S networks. This is because of their deeper structure. A deeper structure involves multiple layers in the model, which have the advantage of learning features at different abstraction levels; it allows the networks more room to learn features that are not affected by noise. The networks are surprisingly resilient to JPEG and JPEG2000 compression distortions: performance drops only at very low quality levels (quality parameter less than 10 for JPEG and PSNR less than 30 for JPEG2000). Note that the quality values are in the range 0-100 for JPEG and JPEG2000, where 0 indicates the lowest quality value and 100 indicates the highest quality value. Given that the compression level is sufficient, deep networks can be reasonably confident of performing well on compressed data. The disadvantages of deeper structures are more power consumption and a time-consuming training process, which consequently leads to an increase in run time and the requirement for a GPU with large memory. Moreover, if the network is very deep, this may result in vanishing or exploding gradients (Tian et al. 2020). Overall, they concluded that the VGG-16 network is more robust against noise than the other models. Synthetic Gaussian noise is added to the colour components of each pixel to analyse the impact. However, as per Dodge and Karam (2016), synthetic Gaussian noise cannot capture all types of noise from the deployment environment, which is the shortcoming of this method. Hence, for the accurate validation and performance analysis of those DL-based models, there is a necessity to use datasets captured under many different real-world noisy environmental scenarios.
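As a hedged illustration of this distortion protocol (the exact noise levels used by Dodge and Karam are not reproduced here), synthetic Gaussian noise can be added to every colour component of each pixel as follows, with sigma as an illustrative noise level:

```python
import numpy as np

def add_gaussian_noise(image, sigma=25.0, rng=None):
    """Add zero-mean Gaussian noise to every colour component of each pixel.

    image: uint8 array of shape (H, W, 3); sigma: noise standard deviation
    in grey levels (an illustrative value, not taken from the paper).
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.astype(np.float64) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```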
For assessing the environmental noise more accurately, since Gaussian noise is generally adopted to represent environmental noise, the impact of Gaussian noise is assessed using a DNN classifier (Zhou et al. 2017). However, other feasible alternatives could be unsupervised deep learning models such as Deep Belief Networks. Good results are reported on a noisy dataset called n-MNIST, which contains Gaussian noise. We believe that the noisy versions of the MNIST and Bangla datasets will help researchers better apply and extend research on understanding representations for noisy object recognition datasets. The advantage of using this particular approach is the improvement in accuracy when classifying distorted images. This improvement indicates that classification results on distorted images can be close to the results on original images. The authors used several datasets, including different small datasets (LeCun et al. 1998; Krizhevsky et al. 2009) and a large dataset, namely the ImageNet dataset (Deng et al. 2009). While studying the impact of noise, they noticed that the performance of the DNN classifier degrades. For this purpose, they re-trained their network with images containing noise, for which a large training dataset is required. They also adapted some of the first layers to the degraded images and adjusted the low-level filters of the DNN to match the features of degraded images. Therefore, to improve the performance of their model, they re-trained and fine-tuned on these datasets. For a high level of noise, better results are given by re-training on distorted datasets. For low levels of noise, good results are provided by fine-tuning and re-training together. Fine-tuning is preferable because it saves computational time, taking less training time than re-training the model (Zhou et al. 2017).
The effect of Gaussian noise on DNNs was also analysed by Basu et al. (2017). Additive white Gaussian noise was generated by adding noise at a Signal-to-Noise Ratio (SNR) of 9.5, a standard SNR value. The advantage of using SNR = 9.5 is that it simulates background clutter well: the noise represented by this SNR value is similar to the noise caused by electrical variations in cameras, television static, etc., in real-world applications. However, the disadvantage is that this system can only perform well under the impact of noise at an SNR greater than or equal to 9.5. An alternative way to add Gaussian noise is with a Gaussian distribution function with varying values of mean and Standard Deviation (SD). The authors achieved an improvement of 69% with their learning framework based on probabilistic quadtrees, compared with previous traditional deep belief networks, on the MNIST, n-MNIST, and noisy Bangla datasets.
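A sketch of generating additive white Gaussian noise at a target SNR is given below. This assumes the SNR of 9.5 is expressed in decibels, which the text does not state explicitly; under that assumption the noise power follows from the mean signal power:

```python
import numpy as np

def add_awgn(image, snr_db=9.5, rng=None):
    """Add white Gaussian noise so the result has roughly the given SNR (in dB).

    Noise power is derived from the mean signal power as
    P_noise = P_signal / 10**(snr_db / 10).
    """
    rng = np.random.default_rng() if rng is None else rng
    signal = image.astype(np.float64)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), signal.shape)
    return np.clip(signal + noise, 0, 255).astype(np.uint8)
```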
In these two above-mentioned noise analysis methods, the authors synthetically generated noise rather than obtaining it from natural noise sources. Hence, firstly, there is a requirement for a database captured in a real environment to analyse the impact of different environmental parameters. Secondly, to address the issue of noisy datasets, the authors plan to examine various pooling techniques like SPM (Lazebnik et al. 2006) and certain sparse representations such as sparse coding (Lee et al. 2006). So far, we have presented the techniques for assessing environmental impacts on IoT images. However, research shows (see Table 2) that camera lens distortions affect IoT images considerably. The following section details the techniques for analysing the camera lens distortion impacts on IoT images.

4.2 Assessing camera lens distortion impacts on IoT images

There are different types of camera impacts, such as lens blur, lens dirtiness, and barrel distortion, that occur while capturing IoT images. The current literature shows that DL techniques have been applied to analyse the impacts of lens blur (Zhou et al. 2017; Dodge and Karam 2016) and lens dirtiness on IoT images, as shown in Table 2. For example, lens blur can happen due to the movement of the camera or when the camera is not in focus (Nejati et al. 2016; Pomponiu et al. 2016; Do et al. 2014). Camera lens blur leads to the loss of high-frequency components and thus results in low resolution.
To analyse the impact of lens blurriness and JPEG compression, Dodge and Karam (2016) used the Caffe Reference Model (Jia et al. 2014), the VGG-CNN-S model (Chatfield et al. 2014), the VGG-16 model (Simonyan and Zisserman 2014) and the GoogleNet model (Szegedy et al. 2015) in their experiments. To create the blur effects, a Gaussian kernel is used and the SD of the Gaussian is varied from 1 to 9, with the filter window size set to four times the SD. Even though the authors used a fixed rule for the window size, the window size determines the extent of filtering: larger windows require more processing and produce higher levels of blurring. For example, if the filter window size were set to eight times the SD, it would lead to a higher level of Gaussian blurring than a window size equal to four times the SD. Blur creates a slight change in the first convolutional layer. In contrast, the response of the last convolutional layer differs significantly when blur is introduced at the first layer: small changes in the first layer propagate to create larger changes at the higher layers. This shows that the reduction in the performance of DNNs under blur is not limited to a particular model; rather, it is common to all considered DNN architectures. Their analysis shows all the models are very sensitive to lens blurriness, although the VGG-16 network appears more robust than the other networks. In this method, the authors considered only the lens blur effect on the performance of DNNs; other camera impacts, such as lens dirtiness and barrel distortion, are not considered.
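A minimal sketch of this blurring protocol (SD varied from 1 to 9, window set to roughly four times the SD), assuming OpenCV as the image library:

```python
import cv2

def gaussian_blur(image, sigma):
    """Blur with a Gaussian kernel whose window is about four times the SD."""
    ksize = int(4 * sigma)
    if ksize % 2 == 0:
        ksize += 1  # OpenCV requires an odd kernel size
    return cv2.GaussianBlur(image, (ksize, ksize), sigma)

# Generate the nine blur levels used in the study (img is a loaded image).
# blurred_set = [gaussian_blur(img, s) for s in range(1, 10)]
```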
The impact of blurriness has also been assessed with a DNN classifier (Zhou et al. 2017). The applications considered in this research involve different types of distortion, such as motion blur, defocus blur and combinations of them. The authors used various small and large datasets (LeCun et al. 1998; Krizhevsky et al. 2009; Deng et al. 2009) to study the impact of lens blurriness. They found that the performance of the DNN classifier degrades with an increase in the impact of lens blur. Retraining and fine-tuning are done to reduce the Top-1 error rate. Experimental results show a reduction in the Top-1 classification error rate on distorted images from 53.1% to 47.7%.
In Basu et al. (2017), the effect of lens blur is analysed; the authors achieved a 26% improvement for lens blurring on the n-MNIST dataset compared with previous traditional deep belief networks. The main advantage of this method is that it also considers a challenging condition, reduced contrast, for which the efficiency improves by 16%. Besides these, there are other challenging conditions, such as camera distance and camera movement, which degrade image quality. DL algorithms are greatly influenced by these camera perturbations: even a small camera movement in any direction, including along the x-axis or y-axis, can cause a trained neural network to misclassify.
challenges can create more serious problems when used in safety-critical systems such as
autonomous vehicles. For example, sometimes a stop sign can be regarded as a minimum
speed limit sign, which can cause serious accidents. A YOLO detector is used to measure the detection rate of different traffic signs contained in IoT images. Photos taken at varying distances are analysed, showing that distortion increases with distance. Results also show that a change in viewing angle can affect the effectiveness of the models (Lu et al. 2017). However, the proposed DL algorithm has been tested with stop signs only. Moreover, traffic signs come in different sizes and colours, and beyond lens blurriness, the effect of camera distance on image quality differs from sign to sign. Therefore, to demonstrate the effectiveness of this technique for autonomous vehicles, its performance needs to be further investigated using various traffic signs under different camera impacts (e.g., lens dirtiness and barrel distortion) and the environmental impacts mentioned in Sect. 1.
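The distance experiment can be emulated with a modern detector. The sketch below uses the ultralytics package as a stand-in for the original YOLO setup; the image files and distances are hypothetical, and a detector trained (or fine-tuned) on traffic signs is assumed.

```python
from ultralytics import YOLO

# Report detection confidence as capture distance grows.
model = YOLO("yolov8n.pt")
for dist in (5, 10, 20, 40):                   # capture distances in metres
    results = model(f"stop_sign_{dist}m.jpg")  # hypothetical file per distance
    for box in results[0].boxes:
        print(dist, model.names[int(box.cls)], float(box.conf))
```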
Temel et al. (2018) analysed the impact of lens dirtiness and lens blur. The authors introduced the Challenging Unreal and Real Environments for Object Recognition (CURE-OR) dataset, which consists of 1,000,000 images of 100 objects of various sizes, colours, and textures. Under challenging conditions such as lens blur and lens dirtiness, the performance of Amazon Rekognition and Microsoft Azure Computer Vision is shown to degrade significantly. A Gaussian operator is used to obtain blurred images, and dirty lens patterns are overlaid on original images to create dirty lens images. The Top-5 accuracy metric and a confusion metric were employed to report recognition performance; Top-5 accuracy means that any of the five most probable answers matches the expected answer. Under lens blur and dirty lenses, Amazon's and Microsoft's accuracy drops below 1%, because a dirty lens pattern, whose weighting corresponds to the challenge level, effectively blocks the edges and texture in an image. To conclude, the analysis of categorical misclassifications shows that recognition performance degrades as the challenge level increases for each challenge category (lens dirtiness and lens blur). In this lens blur analysis, recognition performance was measured only by Top-5 accuracy; Top-1 accuracy is not reported. Moreover, since a Gaussian operator is used to synthesise blur, the impact of naturally blurred images is not examined, which restricts the applicability of the findings in real-world environments. Other camera impacts such as barrel distortion are also not covered. Therefore, there is a requirement to capture databases in more realistic environments.
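Top-5 accuracy, as used above, can be computed directly from the classifier scores; a minimal NumPy sketch:

```python
import numpy as np

def top5_accuracy(scores: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose true label is among the five
    highest-scoring classes (scores: N x C, labels: N)."""
    top5 = np.argsort(scores, axis=1)[:, -5:]     # indices of the 5 largest scores
    hits = (top5 == labels[:, None]).any(axis=1)  # true label among them?
    return float(hits.mean())
```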
So far, we have provided the solution for RQ-2, presenting the DL techniques that focus on analysing environmental and camera impacts on IoT image capture. However, there also exist some DL approaches for reducing both environmental and camera impacts on IoT images, which correspond to the solution of RQ-3. These approaches are presented in the following sections.

5 Reducing impact on IoT images using DL

Table 2 and the above discussion depict that dynamic outdoor environmental parameters
such as fog, rain, snow, and camera parameters have adverse impacts on image processing
applications (e.g., transportation, agriculture, city, and health). Hence, there is a need to
minimise the impact of such parameters on IoT images, because these parameters limit the potential of application systems.

5.1 Reducing environmental impact on IoT images

From the early onset of the research on reducing the environmental impact on images,
there exist many techniques that are mainly not DL-based (Narasimhan and Nayar 2002;
Shwartz and Schechner 2006; Fattal 2008; Tan 2008; Tarel and Hautiere 2009; He et al.
2010). For example, haze is mostly created by weather conditions; it attenuates light and thus reduces image quality. The decrease in image quality leads to loss of information and improper recognition of objects. Various studies (Narasimhan and Nayar 2002; Shwartz and Schechner 2006; Fattal 2008; Tan 2008; Tarel and Hautiere 2009; He et al. 2010) have been carried out to reduce the impacts of haze using physical models such as the atmosphere scattering model (Gu et al. 2017) and contrast-improvement techniques. Although these methods achieve dehazing, they take a lot of computational time and are inefficient at removing haze from every part of the image.
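For context, the atmosphere scattering model expresses a hazy observation as I(x) = J(x)t(x) + A(1 − t(x)), where J is the clean scene radiance, A is the global atmospheric light, and t(x) = exp(−βd(x)) is the transmission at scene depth d(x). A minimal sketch that synthesises haze under this model (a clean image in [0, 1] and a per-pixel depth map are assumed):

```python
import numpy as np

def add_haze(J: np.ndarray, depth: np.ndarray,
             beta: float = 1.0, A: float = 0.8) -> np.ndarray:
    """Synthesise haze via I = J * t + A * (1 - t), t = exp(-beta * depth).
    J: clean H x W x 3 image in [0, 1]; depth: H x W scene depth map."""
    t = np.exp(-beta * depth)[..., None]  # per-pixel transmission
    return J * t + A * (1.0 - t)
```

Dehazing methods invert this relationship by estimating A and t(x) from the hazy image alone, which is what makes the problem ill-posed and computationally demanding.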
So far, we have presented non-DL-based environmental impact reduction techniques. However, with the advancement of neural networks, especially DL algorithms, a few relevant DL-based techniques are available in the current literature for reducing environmental impacts. The pioneering works in this area are approaches developed using DL algorithms (CNNs), neural network filters, and dense correspondence-based transfer learning. These algorithms decrease the impact of rain, shadow, darkness, clouds, snow and haze on IoT images and on images captured by traditional cameras.
A combined approach of DL (a neural network filter) and a fuzzy inference system to tackle the problem of haze in images is presented in Wang et al. (2016). Haze primarily occurs when light is attenuated, and estimating this attenuation is a nonlinear problem. For this reason, a fuzzy inference system is designed to estimate the variation in light intensity: the fuzzy inference system mimics the human brain's ability to handle word ambiguity, while the neural network emulates the brain's organisational structure and is capable of learning. Together, they remove the haze effect effectively. In this method, the factor e is used as a performance metric; it quantifies the ratio of restored edges that were not visible in the original image. Its value is 0.06, compared with −0.06 (Tan 2008), −0.01 (Tarel and Hautiere 2009) and 0.01 (He et al. 2010) for other techniques.
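The indicator e is commonly defined as e = (n_r − n_o)/n_o, where n_o and n_r are the numbers of visible edges before and after restoration. A minimal sketch follows, with the Canny detector standing in for the visibility-based edge detector used by the original metric:

```python
import cv2
import numpy as np

def edge_restoration_ratio(original_bgr, restored_bgr):
    """Approximate e = (n_r - n_o) / n_o, the relative gain in visible
    edges after restoration (Canny is an illustrative substitute for the
    visibility-level edge detector of the published metric)."""
    def edge_count(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return np.count_nonzero(cv2.Canny(gray, 50, 150))
    n_o = edge_count(original_bgr)
    n_r = edge_count(restored_bgr)
    return (n_r - n_o) / max(n_o, 1)  # e > 0 means edges were restored
```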
Along with haze, factors such as shadow, darkness, rain, snow and cloud can also degrade image quality. As alluded to before, in the field of transportation systems, there is a problem in understanding traffic scene images on rainy, cloudy, dark and sunny days.
To reduce the impact on traffic scene images, a series of experiments was done using a challenging dataset consisting of 1828 images covering different weather conditions. A Dense Correspondence-based Transfer Learning (DCTL) technique was used by Di et al. (2018), which consists of three main steps:

1. a fine-tuned convolutional neural network to extract deep representations of traffic scene images;
2. cross-domain metric learning and subspace alignment to construct compact and effective representations for cross-domain retrieval (a sketch of subspace alignment appears at the end of this discussion); and
3. cross-domain dense correspondences and a probabilistic Markov random field to transfer annotations from the best-matching image to the test image.

Traffic scene datasets are quite different from the datasets used to pre-train CNNs in the authors' case; because of this, the authors fine-tune the pre-trained CNN using traffic
scene data. Depending on the weather and illumination conditions, traffic scene images
may have dramatically different appearances, and the extracted deep features may
also not be consistent. Domain adaptation overcomes this challenge. For instance, “sunny day”, “cloudy day”, and “snowy day” belong to domain A, while “night”, “rainy night”, and “foggy day” are in domain B. Annotation information on scene images is transferred from the cross-domain best-matching image to the test image based on the dense correspondences established between them.
the test image based on the dense correspondences established between them. Differ-
ent images in different domains usually have different appearances due to differences
in weather or illumination conditions, but they share a similar spatial layout. Thus,
if proper correspondences are created, the annotation information on scene images in
domain A can be transferred to scene images in domain B. The dense correspond-
ences are constructed using Scale-Invariant Feature Transform (SIFT) flow (Liu et al.
2010, 2011) and the annotation information is transferred using a Markov random
field model. For all these challenging scenarios, the DCTL technique performs better than other state-of-the-art approaches such as the Geodesic Flow Kernel (GFK) (Gong et al. 2012) and Subspace Alignment (SA) (Fernando et al. 2013). This is because of the fine-tuning of the CNN to extract deep features and the subspace alignment used to construct compact representations for retrieving the cross-domain best-matching image. A 12.57% improvement is seen in the detection accuracy of traffic signs with the fine-tuning of the CNNs.
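To make step 2 of the pipeline concrete, the following minimal NumPy sketch shows the core of subspace alignment (Fernando et al. 2013), assuming deep features have already been extracted for both domains; the subspace dimension d is a hypothetical choice:

```python
import numpy as np

def subspace_alignment(Xs: np.ndarray, Xt: np.ndarray, d: int = 50):
    """Align the top-d PCA subspace of the source domain to that of the
    target domain. Xs, Xt: (n_samples, n_features) feature matrices."""
    _, _, Vs = np.linalg.svd(Xs - Xs.mean(0), full_matrices=False)
    _, _, Vt = np.linalg.svd(Xt - Xt.mean(0), full_matrices=False)
    Ps, Pt = Vs[:d].T, Vt[:d].T               # (n_features, d) subspace bases
    M = Ps.T @ Pt                             # alignment matrix between subspaces
    src_aligned = (Xs - Xs.mean(0)) @ Ps @ M  # source mapped towards the target
    tgt_proj = (Xt - Xt.mean(0)) @ Pt         # target in its own subspace
    return src_aligned, tgt_proj
```

Cross-domain retrieval of the best-matching image then reduces to nearest-neighbour search between `src_aligned` and `tgt_proj`.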
However, the DCTL technique cannot adequately compensate for possible variations in illumination. Consequently, an advanced method (Di et al. 2021) was recently proposed to address the problem of images captured in rainy and night scenes without pixel-level human annotations. For these experiments, the authors created a new dataset comprising 7000 different images. The proposed method outperforms the previously mentioned algorithms (e.g., the dense correspondence-based transfer learning approach (Di et al. 2017) and Dark Model Adaptation (Dai and Van Gool 2018)), which struggle with domain adaptation for the semantic segmentation of rainy night images. This issue is addressed in Di et al. (2021) using adversarial learning, which does not require pixel-level human annotations.
There are varied DL methods such as removal of rain (Ren et al. 2019; Ahn et al.
2022), removal of fog (Wang et al. 2019), removal of haze (Wang et al. 2022; Feng
et al. 2022) and many more. However, these methods cannot effectively handle comprehensive environmental conditions, and they were evaluated only on synthetic databases rather than real images.
To solve this issue of artificially generated environmental effects on images, Wei et al. (2019) proposed semi-supervised transfer learning for the removal of rain from images. The authors used 64 × 64 synthesised rainy/clean image patch pairs as supervised training data, the same as those used by the baseline CNN method. To formulate the loss function on supervised samples, they followed the network structure and negative residual mapping of DerainNet (Fu et al. 2017). Real-world rainy images were taken from an existing dataset (Wei et al. 2017) and Google image search, and the unsupervised samples were obtained by randomly cropping one million 64 × 64 patches from these images. Unlike deep learning methods that use only supervised image pairs with/without synthetic rain, the authors fed real rainy images, without the need for clean counterparts, into the network training process; both supervised and unsupervised rain images are thus used to train the CNN. Compared with other DL methods designed for this task, their method particularly alleviates the difficulty of collecting training samples and the risk of overfitting to them. Nevertheless, the model cannot handle extremely complex rainy images (those containing high or extremely high levels of rain). As the authors did not experiment with a dataset containing heavy rain, a future project aims to show the efficacy of their method for heavy rain (Wei et al. 2019). Also, only rain is covered in this experiment, indicating that other environmental parameters such as shadow, darkness, wind, and fog also need to be considered.
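The spirit of this semi-supervised objective can be sketched as follows. Note that Wei et al. (2019) model the residual of real rainy images with a parameterised rain distribution; a simple total-variation term is substituted here, and the tiny network is a placeholder.

```python
import torch
import torch.nn.functional as F
from torch import nn

# Placeholder deraining network (the actual architecture follows DerainNet).
net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 3, 3, padding=1))

def semi_supervised_loss(syn_rainy, syn_clean, real_rainy, lam=0.1):
    sup = F.mse_loss(net(syn_rainy), syn_clean)       # supervised term on pairs
    out = net(real_rainy)                             # derained real image
    tv = (out[:, :, :, 1:] - out[:, :, :, :-1]).abs().mean() \
       + (out[:, :, 1:, :] - out[:, :, :-1, :]).abs().mean()
    return sup + lam * tv                             # combined objective

# Example with random tensors standing in for 64 x 64 patches.
loss = semi_supervised_loss(torch.rand(8, 3, 64, 64),
                            torch.rand(8, 3, 64, 64),
                            torch.rand(8, 3, 64, 64))
loss.backward()
```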
Whereas that work focused on rain alone, Chen et al. (2020) devised an adaptive noise removal tool based on VGG. The proposed tool improves image quality with optimised algorithms and is beneficial for removing noise from images captured under sunny, foggy, cloudy, and rainy conditions. However, snowy weather remains part of their future work. Up to this point, environmental impact reduction techniques for IoT images have been presented; the techniques to reduce camera lens distortion impacts on IoT images are explained in the forthcoming section.


5.2 Reducing the impact of camera lens distortions on IoT images

As mentioned in Section 3.2, there are various types of camera lens distortions such as lens
blur, lens dirtiness, barrel distortion, or fisheye distortion.
There exist different works on lens deblurring; for example, Park et al. (2015) introduced a method called “self example image enhancement” to alleviate the image blurriness caused by the camera lens. More recently, Bhonsle (2021) used the Lucy-Richardson algorithm for image deblurring, although this method can only deblur images up to a certain extent. To overcome this limitation, a Krill Herd optimisation method was employed, and a comparison with pre-existing methods on the basis of PSNR, SNR, and SSIM demonstrated the efficacy of the system (Bhonsle 2021).
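For illustration, here is a minimal Richardson-Lucy deconvolution sketch with scikit-image (a recent version with the num_iter argument is assumed; the uniform blur kernel is hypothetical, and in practice the point spread function must be estimated from the lens or motion model):

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import data, restoration

image = data.camera() / 255.0                  # sample grayscale image in [0, 1]
psf = np.ones((5, 5)) / 25                     # hypothetical uniform blur kernel
blurred = convolve2d(image, psf, mode="same")  # simulate lens blur
deblurred = restoration.richardson_lucy(blurred, psf, num_iter=30)
```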
In Park et al. (2015), barrel distortion correction is done by an orthographic projection model based on a parabolic equation. An algorithm to correct barrel distortion in real-time applications using MATLAB and a BF561 DSP is presented in Awade et al. (2016). The main shortcoming of this method is that the distortion correction factor is adjusted manually according to the image distortion; for a faster and more automatic process, the correction factor could instead be adjusted automatically according to the focal length of the lens. Note that the camera lens distortion reduction approaches presented up to this point are non-DL-based. To our knowledge, only a few DL-based methods exist for camera lens distortion reduction; these are presented in this section.
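Automatic correction of this kind is typically driven by calibrated camera parameters. A minimal OpenCV sketch follows, where the camera matrix and distortion coefficients are hypothetical values that would normally come from cv2.calibrateCamera:

```python
import cv2
import numpy as np

img = cv2.imread("distorted.png")  # hypothetical barrel-distorted image
h, w = img.shape[:2]
# Approximate camera matrix; real values come from calibration.
K = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
dist = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])  # negative k1 models barrel distortion
undistorted = cv2.undistort(img, K, dist)
cv2.imwrite("corrected.png", undistorted)
```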
As alluded before, to address the issue of manual adjustment of the distortion correc-
tion factor, Liu et al. (2018) developed a DL-based model entitled “smart unstaffed retail
shop schemes.” The model has a front-end layer that consists of a camera and an intelligent
unmanned retail container. The authors used a dataset composed of 11,000 images covering different kinds of scenarios. A DL-based method, namely the Mask Regional Convolutional Neural Network (Mask R-CNN), is utilised to train their model, which achieves 96.7% recognition accuracy on the test set of images. Beyond the method introduced by Liu et al. (2018), some other methods exist for fisheye or barrel distortion reduction (Li et al. 2011; Park et al. 2009; Wang et al. 2015); these use the spherical projection technique (Bourke 2010), which makes the system less computationally complex.
Das et al. (2017) also showed that DNNs play a significant role in solving machine learning problems, particularly in image recognition. Recent studies indicate that DNNs are at high risk from adversarially generated instances that can confuse a DNN while appearing normal to human beings. To compensate for these effects, JPEG compression is an effective pre-processing step for removing such noise, because it discards high-frequency components. However, noise can arise from various sources, such as changing weather parameters and camera lens distortions, and the definition of noise is not clear in this approach.
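The pre-processing step amounts to a JPEG re-encode/decode round trip; a minimal sketch with Pillow (the quality setting is an arbitrary example value):

```python
from io import BytesIO
from PIL import Image

def jpeg_defense(img: Image.Image, quality: int = 75) -> Image.Image:
    """Re-encode an image as JPEG to suppress high-frequency (potentially
    adversarial) components, then decode it back for the classifier."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```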
Until now, we have presented the solution for RQ-3 where the DL techniques have focused
on reducing the environmental and camera impacts on IoT image capturing. Even so, there
are still a few research challenges that need to be addressed, which are discussed in the next
section.


6 Further research challenges

Even though many techniques are available in the literature, impact analysis and reduction techniques for IoT images, including the DL-based approaches, are still at an embryonic stage.
Table 3 presents future research challenges concerning particular techniques for the
analysis and the reduction of environmental and camera parameters including their impact
types. For example, a further research challenge for the automatic plant recognition tech-
niques is that the authors examined the effects of rain and wind on only four plant species.
The recommended ways to address future research challenges are also mentioned in this
table. The recommended way for addressing the above-mentioned research challenge is to
work on plant species like cassava that are difficult to identify (Hassan and Maji 2022).
Moreover, there are general ways to advance research in this area. Some of the possible
future research issues and their recommended solutions are presented as follows:

1. Objective quality assessment techniques (e.g., PSNR, SSIM, ORB, MSE, SC) (Kaur et al. 2022) and some DL-based quality assessment techniques (Dodge and Karam 2016) are not consistent, because they cannot capture impact levels accurately or represent impact levels in a way consistent with human perception across all environmental and camera impacts (a brief illustration of such objective metrics is given after this list). These inconsistent impact levels demand the development of a new image quality assessment technique, which could be based either on DL or on a statistical approach and should consider the characteristic aspects of both environmental and camera impacts on IoT images more accurately.
2. The recent trends for the industry, business, and governmental organisations are to use
IoT technologies to capture the value from data. These trends include big data usage,
including video and unstructured data, and introducing innovative products and cost-
effective digital or automatic services for better customer/user satisfaction and economic
boost. However, benchmark IoT datasets on different environmental and camera impacts
are not available with current research communities. The unavailability of such datasets
is one of the barriers to developing effective image quality assessment techniques and
assessing both environmental and camera impacts on IoT images accurately.
3. The performance analysis of existing DL-based distortion reduction techniques for all
possible environmental and camera distortion impacts on the IoT images needs to be
advanced further. For example, (Tschentscher et al. 2017) covers only the impacts of fog,
sun, rain, and snow but not other environmental impact factors such as wind, shadow,
and darkness. Likewise, the approach presented in Kazerouni et al. (2019) emphasises
only wind and rain; the remaining environmental parameters affecting IoT image capture, such as shadow, snow, and fog, are not considered. Similarly, even though Liu et al. (2018) reduced the impact of barrel distortion, other camera parameter impacts such as lens blurriness and dirtiness are not considered. These observations indicate that existing distortion reduction techniques do not take all environmental and camera lens distortion impacts into consideration.
4. Not all DL techniques are effective enough, from an accuracy point of view, at reducing the different environmental and camera impacts. Different applications of DL methods have different goals, and since DL presents itself as a potential tool for all applications, DL techniques have to be robust against different types of environmental and camera impacts. Therefore, there is a need to introduce more effective DL-based distortion reduction techniques that consider the purpose of the application, so that they can accurately handle application-specific images affected by many different environmental and camera impacts.

Table 3  Future research challenges associated with DL techniques for the analysis and reduction of environmental and camera impacts, and suggested ways to address those challenges. Here, Env, Cam, Anal, and Red represent environmental, camera, analysis, and reduction, respectively

Simulation system using Unreal Engine 4 to analyse the impact of rain, sun, and fog (Tschentscher et al. 2017) [Env ✓ | Cam × | Anal ✓ | Red ×]
  Challenge: no attempt is made to use the simulated video to demonstrate the effectiveness of real-world or traditional parking guidance systems.
  Way to address: the simulated video data needs to be used to train the parking guidance model, and the model's performance needs to be tested under various environmental impacts based on real parking traffic scenarios.

Analysis of the effect of lens blur on DNNs (Lu et al. 2017) [Env × | Cam ✓ | Anal ✓ | Red ×]
  Challenge: the experiments are limited to recognising stop signs in traffic databases, which is concerning in many real-world circumstances, especially for autonomous vehicles.
  Way to address: the efficacy of the proposed method needs to be tested with all the different road and traffic signs used in traffic control systems before applying it in intelligent transportation systems involving autonomous vehicles.

Deep CNN-based approach for detecting smoke in normal and foggy IoT environments (Khan et al. 2019) [Env ✓ | Cam × | Anal ✓ | Red ×]
  Challenge: fog is regarded as the only uncertain environmental parameter; other environmental impacts such as shadows and sunlight are not addressed.
  Way to address: the system needs to be tested under different environmental conditions, and the insights gained from these tests can help improve it.

Automatic plant recognition technique using DL (Kazerouni et al. 2019) [Env ✓ | Cam ✓ | Anal ✓ | Red ×]
  Challenge: only four plant species and two environmental parameters (rain and wind) are considered in the plant recognition process.
  Way to address: plant species that are difficult to identify, such as cassava (Hassan and Maji 2022), and other environmental parameters such as shadow, darkness, and fog need to be included.

Semi-supervised transfer learning for the removal of rain from images (Wei et al. 2019) [Env ✓ | Cam × | Anal × | Red ✓]
  Challenge: the model would struggle to handle complex rainy images, and rain is the only environmental parameter considered.
  Way to address: the model can be trained using difficult rainy conditions and situations created by other environmental parameters.

Analysis and comparison of three variants of the YOLO object detection algorithm for vehicle detection (Boppana et al. 2022) [Env ✓ | Cam × | Anal ✓ | Red ×]
  Challenge: how to measure the amount of degradation a vehicle detection model suffers from deteriorated input images.
  Way to address: develop a mathematical function that can measure the correlation between the quality of images and the performance of the vehicle detection model.

End-to-end transformer-based model that can restore an image damaged by any weather condition (Valanarasu et al. 2022) [Env × | Cam ✓ | Anal ✓ | Red ×]
  Challenge: only environmental parameters such as rain, snow and fog are considered.
  Way to address: other environmental impacts such as shadow and darkness can be included in the experiments.
5. IoT network is vulnerable to cyber-attacks. Research shows DL models are also threat-
ened by various types of attacks such as adversarial attacks (Ren et al. 2020), network
attacks on DL architectures (Wu et al. 2020) and data poisoning attacks (Chou et al.
2018). Therefore, appropriate cyber-security protection mechanisms either need to be
developed for integrating with or incorporated in IoT data gathering, transmission, and
processing steps as well as DL training and testing phases.
6. For most of the applications, real datasets (covering all environmental impacts such as
rain, fog, shadow, wind, darkness, and camera impacts such as lens blurriness, dirtiness,
and barrel distortion) are not available. There exists a model that simulates the camera to generate data for training traffic parking guidance systems, considering the lens blur and lens distortion camera impacts (Tschentscher et al. 2017). The main limitation of this simulation model is that it covers only those two camera impacts. Advancing DL techniques for the analysis and reduction of environmental and camera impacts requires addressing the dataset unavailability mentioned in this research challenge. Therefore, similar to the simulation model developed in Tschentscher et al. (2017), there is a need to develop a simulation model capable of generating data that closely represents various real-world environmental deployment scenarios. The development of such a simulation model also demands a technique to validate the simulated data against their real-world counterparts.
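As referenced in the first challenge above, the objective metrics in question are straightforward to compute; a brief scikit-image illustration, where the synthetic noise is an arbitrary stand-in for an environmental or camera distortion:

```python
from skimage import data
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.util import random_noise

clean = data.camera()                                   # sample grayscale image
noisy = (random_noise(clean, var=0.01) * 255).astype("uint8")
print("PSNR:", peak_signal_noise_ratio(clean, noisy))   # higher is better
print("SSIM:", structural_similarity(clean, noisy))     # 1.0 means identical
```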

7 Conclusions

There is not a plethora of DL techniques available for analysing the impact of environmental and camera lens distortions. From the literature, we perceive that DL-based image processing techniques can deal with the uncertainties associated with images and their processing, which lead to non-deterministic problems. This paper presents an SLR of recent state-of-the-art DL-based approaches for analysing and reducing environmental and camera lens impacts on images and identifies open research issues in the relevant areas. Even though DL techniques appear to have the potential to handle these impacts, many research gaps still need to be addressed in the future; some of these are discussed in Sect. 6. Moreover, some existing techniques cannot, on their own, analyse and reduce the repercussions of all environmental parameters: for instance, no single DL technique can analyse all the environmental impacts on image capture, such as rain, snow, fog, and many more. The study also shows that, compared with factors such as rain, snow, and fog, insufficient work has been done to analyse the impact of wind on IoT images. We also acknowledge that image communication through IoT networks raises many research issues, such as less reliable data forwarding, low latency requirements for real-time applications, and vulnerability to cyberattacks. This survey covers applications that use DL techniques to analyse and reduce camera impacts on IoT images; when sufficient DL-based techniques become available for such analysis and reduction in the future, further surveys can be conducted on specific applications such as autonomous driving.

Deep learning: survey of environmental and camera impacts on… 9633

Funding Open Access funding enabled and organized by CAUL and its Member Institutions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long
as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Com-
mons licence, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly
from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
Ahn WJ, Kang TK, Choi HD, Lim MT (2022) Remove and recover: deep end-to-end two-stage attention
network for single-shot heavy rain removal. Neurocomputing
Akpakwu GA, Silva BJ, Hancke GP, Abu-Mahfouz AM (2017) A survey on 5g networks for the internet
of things: communication technologies and challenges. IEEE Access 6:3619–3647
Albawi S, Mohammed TA, Al-Zawi S (2017) Understanding of a convolutional neural network. In: 2017
International Conference on Engineering and Technology (ICET), pp 1–6. Ieee
Alippi C, Disabato S, Roveri M (2018) Moving convolutional neural networks to embedded systems:
the alexnet and vgg-16 case. In: 2018 17th ACM/IEEE International Conference on Information
Processing in Sensor Networks (IPSN), pp 212–223. IEEE
Almars AM, Gad I, Atlam E-S (2022) Applications of ai and iot in covid-19 vaccine and its impact
on social life. In: Medical Informatics and Bioimaging Using Artificial Intelligence, pp 115–127.
Springer
Awade PG, Bodhula R, Chopade N (2016) Implementation of barrel distortion correction on dsp in real
time. In: 2016 International Conference on Computing Communication Control and Automation
(ICCUBEA), pp 1–6. IEEE
Aziz M, Tayarani-N MH, Afsar M (2015) A cycling chaos-based cryptic-free algorithm for image steganog-
raphy. Nonlinear Dyn 80(3):1271–1290
Bahnsen CH, Moeslund TB (2018) Rain removal in traffic surveillance: does it matter? IEEE Trans Intell
Transp Syst 20(8):2802–2819
Bansal V (2020) The Evolution of Deep Learning. https://towardsdatascience.com/the-deep-history-of-deep-learning-3bebeb810fb2
Basu S, Karki M, Ganguly S, DiBiano R, Mukhopadhyay S, Gayaka S, Kannan R, Nemani R (2017) Learn-
ing sparse feature representations using probabilistic quadtrees and deep belief nets. Neural Process-
ing Letters 45(3):855–867. Springer
Bay H, Tuytelaars T, Van Gool L (2006) Surf: Speeded up robust features. In: European Conference on
Computer Vision, pp 404–417. Springer
Bewley A, Ge Z, Ott L, Ramos F, Upcroft B (2016) Simple online and realtime tracking. In: 2016 IEEE
International Conference on Image Processing (ICIP), pp 3464–3468. IEEE
Bhonsle D (2021) Quality improvement of richardson lucy based de-blurred images using krill herd optimi-
zation. In: 2021 International Conference on Advances in Electrical, Computing, Communication and
Sustainable Technologies (ICAECT), pp 1–5. IEEE
Biswal J (2021) Internet of Things (IoT) total annual revenue worldwide. https://www.globaldata.com/global-iot-market-will-surpass-1-trillion-mark-2024-says-globaldata/
Boppana UM, Mustapha A, Jacob K, Deivanayagampillai N (2022) Comparative analysis of single-stage
yolo algorithms for vehicle detection under extreme weather conditions. In: IOT with Smart Systems,
pp 637–645. Springer
Bourke P (2010) Capturing omni-directional stereoscopic spherical projections with a single camera. In:
2010 16th International Conference on Virtual Systems and Multimedia, pp 179–183. IEEE
Buades A, Coll B, Morel J-M (2005) A review of image denoising algorithms, with a new one. Multiscale
Model Simul 4(2):490–530
Chatfield K, Simonyan K, Vedaldi A, Zisserman A (2014) Return of the devil in the details: Delving deep
into convolutional nets. arXiv preprint arXiv:1405.3531
Chen M, Sun J, Saga K, Tanjo T, Aida K (2020) An adaptive noise removal tool for iot image processing
under influence of weather conditions. In: Proceedings of the 18th Conference on Embedded Net-
worked Sensor Systems, pp 655–656


Chou E, Tramèr F, Pellegrino G, Boneh D (2018) Sentinet: detecting physical attacks against deep learning
systems
Cieszynski J (2006) Closed circuit television. Elsevier, Amsterdam
Costanzo A, Masotti D (2017) Energizing 5g: Near-and far-field wireless energy and data trantransfer as an
enabling technology for the 5g iot. IEEE Microwave Mag 18(3):125–136
Da Xu L, He W, Li S (2014) Internet of things in industries: a survey. IEEE Trans Ind Inf 10(4):2233–2243
Dai D, Van Gool L (2018) Dark model adaptation: semantic image segmentation from daytime to nighttime.
In: 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pp 3819–3824.
IEEE
Das N, Shanbhogue M, Chen S-T, Hohman F, Chen L, Kounavis ME, Chau DH (2017) Keeping the bad
guys out: protecting and vaccinating deep learning with jpeg compression. arXiv preprint arXiv:1705.02900
Face Recognition Data (Faces94), University of Essex, UK
Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L (2009) Imagenet: a large-scale hierarchical image data-
base. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp 248–255. IEEE
Di S, Feng Q, Li C-G, Zhang M, Zhang H, Elezovikj S, Tan CC, Ling H (2021) Rainy night scene under-
standing with near scene semantic adaptation. IEEE Trans Intell Transp Syst 22(3):1594–1602.
https://doi.org/10.1109/TITS.2020.2972912
Di S, Zhang H, Li C-G, Mei X, Prokhorov D, Ling H (2017) Cross-domain traffic scene understanding: a
dense correspondence-based transfer learning approach. IEEE Trans Intell Transp Syst 19(3):745–757
Di S, Zhang H, Li C-G, Mei X, Prokhorov D, Ling H (2018) Cross-domain traffic scene understanding: a
dense correspondence-based transfer learning approach. IEEE Trans Intell Transp Syst 19(3):745–
757. https://​doi.​org/​10.​1109/​TITS.​2017.​27020​12
Do T-T, Zhou Y, Zheng H, Cheung N-M, Koh D (2014) Early melanoma diagnosis with mobile imaging.
In: 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology
Society, pp 6752–6757. IEEE
Dodge S, Karam L (2016) Understanding how image quality affects deep neural networks. In: 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), pp 1–6. IEEE. https://doi.org/10.1109/QoMEX.2016.7498955
El-Rabbany A (2002) Introduction to GPS: the Global Positioning System. Artech house
Fattal R (2008) Single image dehazing. ACM Trans Graph 27(3):1–9
Feng W, Tong X, Yang X, Chen X, Yu C (2022) Coal mine image dust and fog clearing algorithm based on
deep learning network. In: 2022 4th Asia Pacific Information Technology Conference, pp 40–47
Fernando B, Habrard A, Sebban M, Tuytelaars T (2013) Unsupervised visual domain adaptation using
subspace alignment. In: Proceedings of the IEEE International Conference on Computer Vision, pp
2960–2967
Fu X, Huang J, Zeng D, Huang Y, Ding X, Paisley J (2017) Removing rain from single images via a deep
detail network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
pp 3855–3863
Geiger A, Lenz P, Urtasun R (2012) Are we ready for autonomous driving? the kitti vision benchmark suite.
In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp 3354–3361. IEEE
Ghosh A, Ratasuk R, Mondal B, Mangalvedhe N, Thomas T (2010) Lte-advanced: next-generation wireless
broadband technology. IEEE Wirel Commun 17(3):10–22
Goel D, Pahal N, Jain P, Chaudhury S (2017) An ontology-driven context aware framework for smart traffic
monitoring. In: 2017 IEEE Region 10 Symposium (TENSYMP), pp 1–5. IEEE
Golpîra H, Khan SAR, Safaeipour S (2021) A review of logistics internet-of-things: current trends and
scope for future research. J Ind Inf Integr 22:100194
Gong B, Shi Y, Sha F, Grauman K (2012) Geodesic flow kernel for unsupervised domain adaptation. In:
2012 IEEE Conference on Computer Vision and Pattern Recognition, pp 2066–2073. IEEE
Gu Z, Ju M, Zhang D (2017) A single image dehazing method using average saturation prior. Mathematical
Problems in Engineering 2017
Gubbi J, Buyya R, Marusic S, Palaniswami M (2013) Internet of things (iot): a vision, architectural ele-
ments, and future directions. Future Gen Comput Syst 29(7):1645–1660
Hassan SM, Maji AK (2022) Plant disease identification using a novel convolutional neural network. IEEE
Access 10:5390–5401. https://doi.org/10.1109/ACCESS.2022.3141371


He K, Sun J, Tang X (2010) Single image haze removal using dark channel prior. IEEE Trans Pattern Anal
Mach Intell 33(12):2341–2353
Herath H, Karunasena G, Herath H (2021) Development of an iot based systems to mitigate the impact
of covid-19 pandemic in smart cities. In: Machine Intelligence and Data Analytics for Sustainable
Future Smart Cities, pp 287–309. Springer
Hosseini H, Xiao B, Poovendran R (2017) Google’s cloud vision api is not robust to noise. In: 2017 16th
IEEE International Conference on Machine Learning and Applications (ICMLA), pp 101–105. IEEE
Hsu C-Y, Lin H-Y, Kang L-W, Weng M-F, Chang C-M, You T-Y (2017) 3d modeling for steel billet images.
In: 2017 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), pp 5–6. IEEE
Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. In:
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 4700–4708
Islam SR, Kwak D, Kabir MH, Hossain M, Kwak K-S (2015) The internet of things for health care: a com-
prehensive survey. IEEE Access 3:678–708
Jain V, Murray JF, Roth F, Turaga S, Zhigulin V, Briggman KL, Helmstaedter MN, Denk W, Seung HS
(2007) Supervised learning of image restoration with convolutional networks. In: 2007 IEEE 11th
International Conference on Computer Vision, pp 1–8. https://doi.org/10.1109/ICCV.2007.4408909
Javed F, Afzal MK, Sharif M, Kim B-S (2018) Internet of things (iot) operating systems support, network-
ing technologies, applications, and challenges: a comparative review. IEEE Commun Surv Tutor
20(3):2062–2100
Jia Y, Shelhamer E, Donahue J, Karayev S, Long J, Girshick R, Guadarrama S, Darrell T (2014) Caffe:
convolutional architecture for fast feature embedding. In: Proceedings of the 22nd ACM International
Conference on Multimedia, pp 675–678
Kapoor A, Bhat SI, Shidnal S, Mehra A (2016) Implementation of iot (internet of things) and image pro-
cessing in smart agriculture. In: 2016 International Conference on Computation System and Informa-
tion Technology for Sustainable Solutions (CSITSS), pp 21–26. IEEE
Kaur R, Karmakar G, Xia F (2022) Evaluating outdoor environmental impacts for image understanding and
preparation (accepted on february 21). In: Part 1: Overview of Image Processing and Intelligent Sys-
tem of Edited Book Entitled “Image Processing and Intelligent Computing Systems"
Kazerouni MF, Schlemper J, Kuhnert K-D (2017) Modern detection and description methods for natural
plants recognition. Int J Comput Inf Eng 10(8):1497–1512
Kazerouni MF, Saeed NTM, Kuhnert K-D (2019) Fully-automatic natural plant recognition system using
deep neural network for dynamic outdoor environments. SN Applied Sciences 1(7):1–18. Springer
Kazerouni MF, Schlemper J, Kuhnert K-D (2017) Automatic plant recognition system for challenging natu-
ral plant species
Ke R, Zhuang Y, Pu Z, Wang Y (2020) A smart, efficient, and reliable parking surveillance system with
edge artificial intelligence on iot devices. IEEE Trans Intell Transp Syst
Keele S et al (2007) Guidelines for performing systematic literature reviews in software engineering. Tech-
nical report, Citeseer
Khan S, Muhammad K, Mumtaz S, Baik SW, de Albuquerque VHC (2019) Energy-efficient deep cnn for
smoke detection in foggy iot environment. IEEE Internet Things J 6(6):9237–9245
Kitchenham B, Brereton OP, Budgen D, Turner M, Bailey J, Linkman S (2009) Systematic literature reviews
in software engineering-a systematic literature review. Inf Softw Technol 51(1):7–15
Krizhevsky A, Hinton G et al (2009) Learning multiple layers of features from tiny images
Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural net-
works. Adv Neural Inf Process Syst 25:1097–1105
Landt J (2005) The history of rfid. IEEE Potentials 24(4):8–11
Lazebnik S, Schmid C, Ponce J (2006) Beyond bags of features: Spatial pyramid matching for recognizing
natural scene categories. In: 2006 IEEE Computer Society Conference on Computer Vision and Pat-
tern Recognition (CVPR’06), 2:2169–2178. IEEE
LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition.
Proc IEEE 86(11):2278–2324
LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436–444
Lee H, Battle A, Raina R, Ng A (2006) Efficient sparse coding algorithms. Adv Neural Inf Process Syst 19:1
Lee C-Y, Xie S, Gallagher P, Zhang Z, Tu Z (2015) Deeply-supervised nets. In: artificial Intelligence and
Statistics, pp 562–570. PMLR
Lee Y, Jeon J, Ko Y, Jeon B, Jeon M (2021) Task-driven deep image enhancement network for autonomous
driving in bad weather. In: 2021 IEEE International Conference on Robotics and Automation (ICRA),
pp 13746–13753. IEEE


Li Y, Zhang M, Wang B (2011) An embedded real-time fish-eye distortion correction method based on
midpoint circle algorithm. In: 2011 IEEE International Conference on Consumer Electronics (ICCE),
pp 191–192. IEEE
Li S, Da Xu L, Zhao S (2018) 5g internet of things: a survey. J Ind Inf Integr 10:1–9
Li S, Araujo IB, Ren W, Wang Z, Tokuda EK, Junior RH, Cesar-Junior R, Zhang J, Guo X, Cao X (2019)
Single image deraining: a comprehensive benchmark analysis. In: Proceedings of the IEEE/CVF Con-
ference on Computer Vision and Pattern Recognition, pp 3838–3847
Li C, Guo C, Ren W, Cong R, Hou J, Kwong S, Tao D (2019) An underwater image enhancement bench-
mark dataset and beyond. IEEE Trans Image Process 29:4376–4389
Li L, Wen G, Wang Z, Yang Y (2019) Efficient and secure image communication system based on com-
pressed sensing for iot monitoring applications. IEEE Trans Multimed 22(1):82–95
Li R, Cheong L-F, Tan RT (2019) Heavy rain image restoration: Integrating physics model and conditional
adversarial learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern
Recognition, pp 1633–1642
Li R, Tan RT, Cheong L-F (2020) All in one bad weather removal using architectural search. In:
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp
3175–3185
Lin T-Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollár P, Zitnick CL (2014) Microsoft coco:
common objects in context. In: European Conference on Computer Vision, pp 740–755. Springer
Liu C, Yuen J, Torralba A (2010) Sift flow: Dense correspondence across scenes and its applications.
IEEE Trans Pattern Anal Mach Intell 33(5):978–994
Liu C, Yuen J, Torralba A (2011) Nonparametric scene parsing via label transfer. IEEE Trans Pattern
Anal Mach Intell 33(12):2368–2382
Liu L, Zhou B, Zou Z, Yeh S-C, Zheng L (2018) A smart unstaffed retail shop based on artificial intel-
ligence and iot. In: 2018 IEEE 23rd International Workshop on Computer Aided Modeling and
Design of Communication Links and Networks (CAMAD), pp 1–4. IEEE
Liu Y-F, Jaw D-W, Huang S-C, Hwang J-N (2018) Desnownet: context-aware deep network for snow
removal. IEEE Trans Image Process 27(6):3064–3073
Liu L, Xu J, Huan Y, Zou Z, Yeh S-C, Zheng L-R (2019) A smart dental health-iot platform based on intel-
ligent hardware, deep learning, and mobile terminal. IEEE J Biomed Health Inf 24(3):898–906
Lu S, Lu Z, Zhang Y-D (2019) Pathological brain detection based on alexnet and transfer learning. J
Comput Sci 30:41–47
Lu Y, Ning X (2020) A vision of 6g–5g’s successor. J Manag Anal 7(3):301–320
Lu J, Sibai H, Fabry E, Forsyth D (2017) No need to worry about adversarial examples in object detec-
tion in autonomous vehicles. arXiv preprint arXiv:1707.03501
Mabrouki J, Azrour M, Dhiba D, Farhaoui Y, El Hajjaji S (2021) Iot-based data logger for weather
monitoring using arduino-based wireless sensor networks with remote graphical application and
alerts. Big Data Min Anal 4(1):25–32
Matsugu M, Mori K, Mitari Y, Kaneda Y (2003) Subject independent facial expression recognition with
robust face detection using a convolutional neural network. Neural Netw 16(5–6):555–559
MathWorks: Compute peak signal-to-noise ratio (PSNR) between images
Morrish J, Hatton M (2020) Internet of Things (IoT) total annual revenue worldwide from 2019 to 2030. https://www.iot-now.com/2020/05/20/102937-global-iot-market-to-grow-to-1-5trn-annual-revenue-by-2030/
Narasimhan SG, Nayar SK (2002) Vision and the atmosphere. Int J Comput Vis 48(3):233–254. Springer
Nejati H, Pomponiu V, Do T-T, Zhou Y, Iravani S, Cheung N-M (2016) Smartphone and mobile image
processing for assisted living: Health-monitoring apps powered by advanced mobile imaging algo-
rithms. IEEE Signal Process Mag 33(4):30–48
Nokia (2017) LTE evolution for IoT connectivity. Nokia white paper
Ofoeda J, Boateng R, Effah J (2019) Application programming interface (api) research: a review of the
past to inform the future. Int J Enterprise Inf Syst 15(3):76–95
Pal M, Berhanu G, Desalegn C, Kandi V (2020) Severe acute respiratory syndrome coronavirus-2 (sars-
cov-2): an update. Cureus 12(3)
Park J, Byun S-C, Lee B-U (2009) Lens distortion correction using ideal image coordinates. IEEE Trans
Consumer Electron 55(3):987–991
Park J, Kim D, Kim D, Paik J (2015) Non-dyadic lens distortion correction and enhancement of fish-
eye lens images. In: 2015 IEEE International Conference on Consumer Electronics (ICCE), pp
275–276. IEEE
Pascanu R, Gulcehre C, Cho K, Bengio Y (2013) How to construct deep recurrent neural networks.
arXiv preprint arXiv:1312.6026


Perera C, Liu CH, Jayawardena S, Chen M (2014) A survey on internet of things from industrial market
perspective. IEEE Access 2:1660–1679
Piccialli F, Giampaolo F, Prezioso E, Crisci D, Cuomo S (2021) Predictive analytics for smart parking: a
deep learning approach in forecasting of iot data. ACM Trans Internet Technol 21(3):1–21
Pomponiu V, Nejati H, Cheung N-M (2016) Deepmole: Deep neural networks for skin mole lesion clas-
sification. In: 2016 IEEE International Conference on Image Processing (ICIP), pp 2623–2627. IEEE
Qian R, Tan RT, Yang W, Su J, Liu J (2018) Attentive generative adversarial network for raindrop
removal from a single image. In: Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pp 2482–2491
Ren K, Zheng T, Qin Z, Liu X (2020) Adversarial attacks and defenses in deep learning. Engineering
6(3):346–360
Ren D, Zuo W, Hu Q, Zhu P, Meng D (2019) Progressive image deraining networks: a better and simpler
baseline. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recogni-
tion, pp 3937–3946
Rublee E, Rabaud V, Konolige K, Bradski G (2011) Orb: an efficient alternative to sift or surf. In: 2011
International Conference on Computer Vision, pp 2564–2571. IEEE
Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M,
et al (2015) Imagenet large scale visual recognition challenge. Int J Comput Vis 115(3):211–252.
Springer
Sahu AK, Sharma S, Puthal D (2021) Lightweight multi-party authentication and key agreement protocol in
iot-based e-healthcare service. ACM Transactions on Multimedia Computing, Communications, and
Applications (TOMM) 17(2s):1–20
Savic M, Lukic M, Danilovic D, Bodroski Z, Bajović D, Mezei I, Vukobratovic D, Skrbic S, Jakovetić D
(2021) Deep learning anomaly detection for cellular iot with applications in smart logistics. IEEE
Access 9:59406–59419
Sharma C (2016) Correcting the IoT history. http://www.chetansharma.com/correcting-the-iot-history/
Shwartz S, Schechner Y (2006) Blind haze separation. In: 2006 IEEE Computer Society Conference on
Computer Vision and Pattern Recognition (CVPR’06), 2:1984–1991. IEEE
SigFox (2018) Available online, accessed 15 Jan: http://www.sigfox.com
Sigov A, Ratkin L, Ivanov LA, Xu LD (2022) Emerging enabling technologies for industry 4.0 and beyond.
Inf Syst Front, 1:1–11
Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556
Som A, Kayal P (2022) Ai, blockchain, and iot. In: Digitalization and the Future of Financial Services, pp
141–161. Springer
Stallkamp J, Schlipsing M, Salmen J, Igel C (2011) The german traffic sign recognition benchmark: a multi-
class classification competition. In: the 2011 International Joint Conference on Neural Networks, pp
1453–1460. https://doi.org/10.1109/IJCNN.2011.6033395
Statista (2020) Internet of Things (IoT) total annual revenue worldwide from 2019 to 2030. https://www.statista.com/statistics/1194709/iot-revenue-worldwide/
Suganya R, Kanagavalli R (2021) Gradient flow-based deep residual networks for enhancing vis-
ibility of scenery images degraded by foggy weather conditions. J Ambient Intell Hum Comput
12(1):1503–1516
Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015)
Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pp 1–9
Talavera JM, Tobón LE, Gómez JA, Culman MA, Aranda JM, Parra DT, Quiroz LA, Hoyos A, Garreta LE
(2017) Review of iot applications in agro-industrial and environmental fields. Comput Electron Agric
142:283–297
Tan RT (2008) Visibility in bad weather from a single image. In: 2008 IEEE Conference on Computer
Vision and Pattern Recognition, pp 1–8. IEEE
Tarel J-P (2022) FRIDA3 image databases. http://perso.lcpc.fr/tarel.jean-philippe/index.html
Tarel J-P, Hautiere N (2009) Fast visibility restoration from a single color or gray level image. In: 2009
IEEE 12th International Conference on Computer Vision, pp 2201–2208. IEEE
Taylor L (2011) ZigBee Alliance: “Interconnecting ZigBee & M2M networks”. In: Proc. ETSI M2M Workshop, pp 1–18
Temel D, Kwon G, Prabhushankar M, AlRegib G (2017) Cure-tsr: challenging unreal and real environments
for traffic sign recognition. arXiv preprint arXiv:1712.02463


Temel D, Lee J, Alregib G (2018) Cure-or: challenging unreal and real environments for object recognition.
In: 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), pp
137–144. https://doi.org/10.1109/ICMLA.2018.00028
Temel D, Chen M-H, AlRegib G (2019) Traffic sign detection under challenging conditions: a deeper
look into performance variations and spectral characteristics. IEEE Trans Intell Transp Syst
21(9):3663–3673
Texas: Perceptual Fog Density Assessment and Image Defogging Research at LIVE (2015). http://live.ece.utexas.edu/research/fog/index.html
Tian C, Fei L, Zheng W, Xu Y, Zuo W, Lin C-W (2020) Deep learning on image denoising: an overview.
Neural Netw 131:251–275
Tschentscher M, Pruß B, Horn D (2017) A simulated car-park environment for the evaluation of video-
based on-site parking guidance systems. In: 2017 IEEE Intelligent Vehicles Symposium (IV), pp
1571–1576. IEEE
Valanarasu JMJ, Yasarla R, Patel VM (2022) Transweather: transformer-based restoration of images
degraded by adverse weather conditions. In: Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition, pp 2353–2363
Vangelista L, Zanella A, Zorzi M (2015) Long-range iot technologies: the dawn of loraTM. In: Future Access
Enablers of Ubiquitous and Intelligent Infrastructures, pp 51–58. Springer
Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to struc-
tural similarity. IEEE Trans Image Process 13(4):600–612
Wang Z, Liang H, Wu X, Zhao Y, Cai B, Tao C, Zhang Z, Wang Y, Li S, Huang F et al (2015) A practi-
cal distortion correcting method from fisheye image to perspective projection image. In: 2015 IEEE
International Conference on Information and Automation, pp 1178–1183. IEEE
Wang J-G, Tai S-C, Lee C-L, Lin C-J, Lin T-H (2016) Using a hybrid of fuzzy theory and neural network
filter for image dehazing applications. In: 2016 International Joint Conference on Neural Networks
(IJCNN), pp 692–697. IEEE
Wang X, Zhang X, Zhu H, Wang Q, Ning C (2019) An effective algorithm for single image fog removal.
Mobile Networks and Applications, 1–9. Springer
Wang W, Wang A, Liu C (2022) Variational single nighttime image haze removal with a gray haze-line
prior. IEEE Trans Image Process
Wei W, Meng D, Zhao Q, Xu Z, Wu Y (2019) Semi-supervised transfer learning for image rain removal. In:
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 3872–3881.
https://doi.org/10.1109/CVPR.2019.00400
Wei W, Yi L, Xie Q, Zhao Q, Meng D, Xu Z (2017) Should we encode rain streaks in video as determin-
istic or stochastic? In: Proceedings of the IEEE International Conference on Computer Vision, pp
2516–2525
Wu Y, Wei D, Feng J (2020) Network attacks detection methods based on deep learning techniques: a sur-
vey. Security and Communication Networks 2020
Xie L, Xing C, Wu Y, Zhang D (2021) Design and realization of marine internet of things environmental
parameter acquisition system based on nb-iot technology. In: 2021 9th International Conference on
Communications and Broadband Networking, pp 257–260
Yi H, Shiyu S, Xiusheng D, Zhigang C (2016) A study on deep neural networks framework. In: 2016 IEEE
Advanced Information Management, Communicates, Electronic and Automation Control Conference
(IMCEC), pp 1519–1522. IEEE
Yin X, Ma J (2021) General model-agnostic transfer learning for natural degradation image enhancement.
In: 2021 International Symposium on Computer Technology and Information Science (ISCTIS), pp
250–257. IEEE
Zhang C, Chen Y (2020) A review of research relevant to the emerging industry trends: Industry 4.0, iot,
blockchain, and business analytics. J Ind Integr Manag 5(01):165–180
Zhou Y, Song S, Cheung N-M (2017) On classification of distorted images with deep convolutional neu-
ral networks. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), pp 1213–1217. IEEE
Zhu C, Leung VC, Shu L, Ngai EC-H (2015) Green internet of things for smart world. IEEE Access
3:2151–2162

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
