
Computers and Electronics in Agriculture 165 (2019) 104926

Contents lists available at ScienceDirect

Computers and Electronics in Agriculture


journal homepage: www.elsevier.com/locate/compag

Original papers

Study of shrimp recognition methods using smart networks


Zihao Liu a,⁎, Xiaojun Jia a, Xinsheng Xu b
a College of Mathematics, Physics and Information Engineering, Jiaxing University, Jiaxing, Zhejiang 314001, PR China
b College of Quality and Safety Engineering, China Jiliang University, Hangzhou, Zhejiang 310018, PR China

ARTICLE INFO

Keywords: Shrimp classification; Deep convolutional neural networks; Validation accuracy; Machine vision

ABSTRACT

Traditional shrimp recognition algorithms, based on machine vision, commonly utilize human-designed features, which are heavily dependent on human experience and can be inefficient and inaccurate. A smart deep convolutional neural network using an improved LeNet-5 structure (ShrimpNet) is proposed to address this problem. Shrimp image segmentation, normalization and data augmentation were initially performed. Given the morphological differences in the external features of shrimp, the LeNet-5 structure was modified into a three-layer parallel structure for efficient matching and identification. A combination classifier strategy was subsequently added to the fully connected layers to strengthen the feature expression of the corresponding classes. Finally, different architectures were explored by shrinking the depth and width to search for effective network structures that could act as alternatives for practical applications and reveal the practical use of ShrimpNet. Experimental results revealed that the smaller model (ShrimpNet-3) achieved a validation accuracy of 96.84% and a modeling time of 0.47 h on the constructed dataset. Therefore, the proposed method is promising for shrimp classification and quality measurement on production lines.

1. Introduction

With the increasing demand for food quality and safety, high-quality and pollution-free food is desired by people worldwide. The mainstream methods for food quality detection are nondestructive. Nondestructive methods are used in many research fields, such as vegetable freshness inspection (Pu et al., 2015; Zhang et al., 2014a, 2015), fruit external quality measurement (Arendse et al., 2015; Blasco et al., 2017; Nicolai et al., 2014), seafood quality grading (Borresen, 2018; Hassoun and Karoui, 2017; Kim et al., 2017) and fruit juice level classification (Chakraborty et al., 2014; Fernandez-Lozano et al., 2013).

Fresh shrimp are harvested from shrimp cultivation ponds manually, and some defective or diseased shrimp are typically mixed in with sound shrimp. These "dirty" shrimp include shrimp infected with bacteria or viruses, crushed shrimp, broken shrimp, and shrimp that have died from lack of oxygen. If the "dirty" shrimp are not recognized and removed quickly, they pose a significant threat to the fresh shrimp clusters and can deteriorate the overall shrimp product quality. In Chinese shrimp processing companies, a simple visual inspection by trained inspectors is performed on at least 10 kg of every 1 t of shrimp. The percentage of defective or diseased shrimp in the sample is then used to grade the specific shrimp class. This operation is time-consuming, and consistency between different operators is not guaranteed (Ni et al., 2019; Zihao et al., 2016a). Therefore, many researchers have tried to improve shrimp product quality and to promote shrimp quality improvement strategies based on machine vision technology. Machine vision, as a type of nondestructive technology, is extensively applied in agricultural product quality detection and measurement, especially for shrimp-related products.

Harbitz (2007) proposed a linear model for shrimp length measurement based on a log-log scale of the length in relation to the pixel area. The results indicated that less than 0.01 s per shrimp was required by the image processing algorithm. Lee et al. (2012) suggested a simple, fast, and accurate shape analysis method using turn angle cross-correlation. High recognition rates of 93.7% and 94.2% were achieved in classifying broken and good shrimp, respectively. Zhang et al. (2014b) studied online shrimp detection equipment to eliminate broken shrimp based on Evolution-COnstructed (ECO) features and the AdaBoost classifier. An overall classification accuracy of approximately 95.1%, with a 0.948 precision and a 0.920 recall, was obtained using the constructed model. Hanmei (2015) originally proposed double threshold segmentation and artificial neural network methods under static conditions. These methods were used, respectively, to extract the melanotic part and to classify different melanotic levels of melanosis shrimp, and high accuracy was achieved. Wei (2018) suggested two types of methods for the recognition of fresh and cooked shrimp.


⁎ Corresponding author.
E-mail address: lzh2017@cjlu.edu.cn (Z. Liu).

https://doi.org/10.1016/j.compag.2019.104926
Received 7 March 2019; Received in revised form 24 July 2019; Accepted 28 July 2019
Available online 12 August 2019
0168-1699/ © 2019 Elsevier B.V. All rights reserved.

Recognition of fresh shrimp was performed using the compacting degree feature based on contour analysis, and 99.6% accuracy was achieved. Recognition of cooked shrimp was performed using the center of gravity moment feature based on template matching, and 99.4% accuracy was achieved.

Based on the abovementioned studies, many researchers have sought the ideal characteristics for shrimp classification and the optimal methods with minimal human intervention. Fortunately, most shrimp classification algorithms are useful in specific situations. However, direct use of these methods to solve the multi-class shrimp classification problem addressed herein faces two issues:

(i) Previous studies on shrimp classification involved human-designed features, such as ECO features (Zhang et al., 2014b), tail moment features (TMFs) (Liu et al., 2016a), turn angle distribution analysis (TADA) (Lee et al., 2012) and area ratios (Zihao et al., 2017). Designing and optimizing these elaborate features is time-consuming and inefficient.
(ii) Although some effective algorithms have been developed successfully based on shrimp characteristics, the false negative rate reached 10 percent; that is, 10 percent of bad shrimp were not detected, leading to severe contamination of the fresh shrimp clusters. Severe quality and safety problems can occur when these polluted products are delivered to supermarkets.

To overcome the above problems in shrimp quality evaluation, a novel deep convolutional neural network (DCNN) based on an improved LeNet-5 structure (ShrimpNet) is proposed herein. DCNNs have provided theoretical answers to these questions (Arun et al., 2018; Banerjee and Das, 2018; Li et al., 2018; Wu et al., 2018). DCNNs can achieve state-of-the-art performance in many other recognition tasks, such as speech recognition (Zhao et al., 2019), face retrieval (Dong et al., 2016), agricultural product quality measurement (Fuentes et al., 2018), and image denoising (Zhang et al., 2010). Effective feature extraction and representation is the quintessence of DCNN methods. Large numbers of local features can be learned from the lower layers of a DCNN, and integrating these local features into the overall features of shrimp objects is the most fundamental work of ShrimpNet. Some of the local features learned by the lower convolutional layers appear to be meaningless individually, but grouping these local features together layer by layer can have a large impact on shrimp recognition performance.

A smart and small DCNN structure was explored in this paper by combining different efficient classifiers. Initially, a three-layer parallel structure was developed based on the functions of different convolutional strides. Optimization of the LeNet-5 structure from the top layer to the lower layers was then successively performed to investigate the performance of the newly formed nets. This step could provide insight into the network's internal structure. Finally, hyper-parameter optimization steps were conducted to obtain better results. Modifications were performed to solve the overfitting problem, which is the core reason for reduced validation accuracy. The contributions of this study are as follows:

(i) The proposed ShrimpNet was successfully used to classify shrimp using a DCNN.
(ii) Based on the external appearance of the shrimp, the network structure was modified into a three-layer parallel structure for efficient matching and identification.
(iii) A combination classifier scheme was assembled into the combination layer to optimize the features learned by ShrimpNet.

The proposed ShrimpNet structure overcomes the over-fitting problem brought by traditional DCNNs. Moreover, the newly constructed ShrimpNet can reduce the modeling time. Thus, a novel method to automatically discern defective shrimp was attained.

2. Methods and materials

2.1. Shrimp samples and system design

Shrimp samples were directly acquired from the Wu Mart Supermarket and a shrimp product processing company (Economic Development Zone of Hangzhou, China). Trained human experts from the company classified these shrimp samples into nine categories: crushed shrimp (CR), shrimp that lacked oxygen (LAO), shrimp that lacked tail meat (LATM), shrimp that lacked tails (LAT), sound shrimp (SS), red diseased shrimp (RD), red distorted shrimp (RDT), shrimp tail meat (TM), and white fish (WF). A total of 565 sound shrimp and 1166 defective shrimp (134 CR, 123 LAO, 222 LATM, 80 LAT, 126 RD, 116 RDT, 80 TM, and 285 WF) were included in the dataset. In the harvested fresh shrimp clusters, some white fish, small grasses, and other impurities are also harvested from the shrimp cultivation pond. Small grasses and other impurities can be eliminated using a simple mechanical process based on size differences. However, it is difficult to remove small white fish with sizes similar to the shrimp using mechanical methods (Liu et al., 2016a). The white fish mixed in the fresh shrimp clusters are potential threats to the quality of fresh shrimp. Thus, the white fish class is treated as one of the research objects in this paper. Half of the samples were randomly selected from the dataset to build a training set, whereas the remaining samples were used to build a testing set. All the experimental results are presented as the average of ten repeated experiments.

All the images were acquired online using a Charge Coupled Device (CCD) camera (DFK-23G618, Imaging Source Company, Germany) with a resolution of 640 × 480 × 24 bits. The lens attached to the CCD (VT-LEM0618-MP3, Vision Datum Technology Company, China) had two options: one for a normal field of view (12 mm) and the other for a relatively small field of view (16 mm). Through several repeated experiments, we found that samples of various sizes were all within the field of view (FOV) when the 16 mm lens was used. Thus, the 16 mm option was selected. Two online parameters were calculated based on two CCD camera properties (Zhang et al., 2018), the exposure time (1/1542) and the transmission gain (11.23 dB), to acquire clear images. Moreover, to ensure that only one shrimp appeared in the FOV, we minimized the shrimp loading speed and maximized the velocity of the conveyor belt. Furthermore, spaced food-grade boards were designed and installed before the shrimp entered the lighting box. The actual shrimp quality inspection line is depicted in Fig. 1, and the schematic diagram of the online quality inspection system for shrimp is presented in Fig. 2.

Fig. 1. Actual shrimp quality inspection line.

All the details of the operation of the shrimp classification machine are depicted in Fig. 2. As the shrimp moved on the conveyor belt and passed the sensor, the sensor was triggered immediately.


[Fig. 2 schematic: an industrial PC (IPC) and controller drive the CCD camera, light source, conveyor system and upper/lower air nozzles; shrimp move along the conveyor in the motion direction.]
Fig. 2. Schematic diagram of the online quality inspection system.

After a short delay, the CCD camera began to capture images, and a self-developed program was initiated. A single-chip AVR was used as the controller to open or close the electromagnetic valves, which were connected to air nozzles. Based on the results of the image processing, the flawed shrimp were ejected by the three air nozzles, and the sound shrimp continued moving with the conveyor belt and fell into the container at the end of the conveyor belt. The first nozzle was used to remove the flawed shrimp, the second nozzle was used to collect large sound shrimp, and the third nozzle was used to collect small sound shrimp. Different functions of the three air nozzles were implemented in the procedure, depending on the requirements. The air-blowing approach was adopted to minimize shrimp damage during the removal procedure.

Image acquisition and preprocessing algorithms were implemented using the MATLAB programming language, version R2017a. Subsequent DCNN construction and training algorithms were implemented using Python (3.6.2) and the deep learning toolbox PyTorch (0.4.1). The computer platform was a server with an Intel Xeon CPU (1.7 GHz), 32 GB of memory, and three GTX 1050Ti (4 GB) GPUs. The details of this system are shown in Fig. 2.

2.2. Shrimp image preprocessing and data augmentation

2.2.1. Shrimp image preprocessing
The shrimp image preprocessing included background segmentation, redundancy deletion, and image size normalization.

2.2.2. Background segmentation
Zihao et al. (2016a) presented a background segmentation algorithm developed for shrimp, and this algorithm was used in this study. The main process can be summarized by the following two points. Firstly, the shrimp region of interest (ROI) was extracted based on the difference between the object and background peaks in the image histogram (Fig. 3b). Depending on this difference, the threshold was obtained by averaging the two peak values with the following equation (Eq. (1.1)):

t = (pk1 + pk2) / 2    (1.1)

where t represents the threshold, and pk1 and pk2 represent the grey values of the two peaks. If the grey value was less than the threshold, the pixel value was set to 255 in the grey-level image; if the grey value was greater than the threshold, the pixel value was set to zero. After this manipulation, the grey image was similar to a structured binary image (Fig. 3c). Secondly, the single-channel image was transformed into a three-channel image. Thus, all the x- and y-coordinates with pixel values of 255 in the single-channel image were recorded, and these coordinate points in the original color image were set to 255. The extracted ROI shrimp image is shown in Fig. 3d. The whole process of background segmentation is depicted in Fig. 3.

2.2.3. Redundancy deletion and image size normalization
Redundancy deletion is the process in which the portions of the images acquired from the online equipment that contain no relevant information are removed. Image size normalization is the process of acquiring a standard rectangular image adapted to the internal structure of DCNNs. Redundancy deletion and image size normalization involve three sequential processes. First, four margin coordinates (top, bottom, left-most, and right-most) were computed based on the shrimp background segmentation algorithm. Second, the non-target areas in the images were deleted, and ten pixels were retained beyond each margin to avoid shape descriptor overflow errors when using the edge extraction algorithm. Finally, a linear interpolation method was used to resize the resulting images, and the final image was acquired with a size of 224 × 224 × 3. The processes of redundancy deletion and image normalization are shown in Fig. 4. Although the shape of the shrimp image changed, the mathematical topological structure of the original image's interior was retained. Through the above three steps, the original shrimp image of 640 × 480 × 3 (Fig. 3a) was converted to 224 × 224 × 3 for the CNN, and the rectangular image was transformed into a square shape. Fig. 5 displays some of the sample images in the nine classes after image preprocessing.


[Fig. 3 panels: (a) original colour image; (b) grey-level histogram (pixel value vs. pixel frequency); (c) thresholded binary-like image; (d) extracted ROI.]
Fig. 3. Background segmentation algorithm for the shrimp image.
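The preprocessing chain of Sections 2.2.2 and 2.2.3 can be summarized in a few lines. The sketch below is illustrative rather than the authors' MATLAB implementation; in particular, the way the two histogram peaks are located (the peak_gap heuristic) and the OpenCV calls are assumptions, since the paper only states that the object and background peaks are averaged (Eq. (1.1)) and that a 10-pixel margin and linear interpolation are used.

```python
import cv2
import numpy as np

def preprocess_shrimp(bgr, margin=10, out_size=224, peak_gap=50):
    """Illustrative sketch of Sections 2.2.2-2.2.3: histogram-peak thresholding
    (Eq. (1.1)), margin-preserving crop, and linear resizing to 224 x 224 x 3."""
    grey = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

    # Locate the two dominant histogram peaks; 'peak_gap' is an assumption,
    # the paper does not describe how the peaks are searched.
    hist = cv2.calcHist([grey], [0], None, [256], [0, 256]).ravel()
    pk1 = int(np.argmax(hist))
    hist_masked = hist.copy()
    hist_masked[max(0, pk1 - peak_gap):pk1 + peak_gap] = 0
    pk2 = int(np.argmax(hist_masked))

    t = (pk1 + pk2) / 2.0                                   # Eq. (1.1)
    binary = np.where(grey < t, 255, 0).astype(np.uint8)    # shrimp pixels -> 255

    # Bounding box of the ROI plus a 10-pixel margin (Section 2.2.3).
    ys, xs = np.nonzero(binary == 255)
    top = max(ys.min() - margin, 0)
    bottom = min(ys.max() + margin, bgr.shape[0] - 1)
    left = max(xs.min() - margin, 0)
    right = min(xs.max() + margin, bgr.shape[1] - 1)
    roi = bgr[top:bottom + 1, left:right + 1]

    # Linear interpolation to the fixed DCNN input size.
    return cv2.resize(roi, (out_size, out_size), interpolation=cv2.INTER_LINEAR)
```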

2.2.4. Data augmentation
Data augmentation technology has been adopted in many related fields (Liu et al., 2016b; Nyawira Ishtar et al., 2018; Wang et al., 2018). The construction of an original dataset in this work was necessary. However, the number of original shrimp images acquired from the online system was relatively small, containing only 1731 images overall in the nine classes (Table 1). This is not a large enough set to train a DCNN with excellent performance. Moreover, to avoid overfitting, a large-scale dataset was constructed and a data augmentation technique was developed. For each original image, an additional 15 different images were generated. These newly formed images were created by rotating the images clockwise by seven different angles (45°, 90°, 135°, 180°, 225°, 270°, and 315°), translating the images in the lower left direction by 50 and 100 pixels, reflecting the images to reverse the pixel order in each row and in each column, rescaling the images by factors of 2.0, 4.0, and 0.5 using bi-cubic interpolation, and down-sampling the images at intervals of 2. After these image transformations, the dataset was augmented from 1731 to 27,696 images. As the whole original image set was manipulated, the proportion of each shrimp class was kept the same. A detailed sample distribution in each class is listed in Table 1.

2.3. Proposed DCNN structure: ShrimpNet

2.3.1. ShrimpNet structure
The proposed architecture, ShrimpNet, was implemented and altered based on the LeNet-5 structure (LeCun et al., 1998). This five-layer network can be considered a self-learning process of local image features from low to mid to high levels. The first two layers are convolutional layers (Conv1 and Conv2), and the higher layers synthesize complex and abstract structural information across the large-scale sequences of convolutional layers. Interleaved with the max pooling layers, these layers can capture deformable parts and reduce the resolution of the convolutional outputs. The latter two fully connected layers (FC3 and FC4) capture complex co-occurrence statistics, which drop the semantics of spatial location. The combination layer was developed by combining three types of single classifiers to strengthen the feature expression ability. The final layer obtains a synthetic label decision that is produced based on the end-to-end network feature map. This architecture is appropriate for learning powerful local features from the complex shrimp image dataset. The overall framework of ShrimpNet is illustrated in Fig. 6.

The motivation for improving the small LeNet-5 architecture is illustrated by the following two points.

(i) Strategies for sorting shrimp are sought such that the model can execute in real-time and achieve rapid retraining by accepting new samples. Therefore, models with few parameters and a simple net structure that preserve good accuracy are necessary.
(ii) The dataset contains sound, defective and diseased shrimp. Their external appearances are complex. Although several human-designed external features have been developed, and some exploratory processes have been accomplished, the high demands for accuracy and efficiency are still not fully met. Therefore, the following question was considered: what type of DCNN model can be used to improve the performance and reduce the execution time?

[Fig. 4: the 640 × 480 × 3 source image is cropped to the four margin points of the shrimp plus 10 pixels on each side (378 × 117 × 3 in the illustrated example) and then normalized to 224 × 224 × 3.]
Fig. 4. Process of redundancy deletion and image normalization.


[Fig. 5 shows preprocessed sample images of the nine classes: SS, CR, LAO, LATM, LAT, RD, RDT, TM and WF.]
Fig. 5. Parts of the sample images in the nine classes after image preprocessing.

2.3.2. Steps for developing smart networks
The innovations and main contributions of ShrimpNet are as follows.

Step 1: Chunk the model and train it in parallel on three GPUs
Motivated by the design of the AlexNet structure (Krizhevsky et al., 2017), the original LeNet-5 structure was expanded lengthways into three blocks of parallel network structure (Fig. 6). The constructed network was trained correspondingly in three GPU modules. In each module, convolutional kernels of 5 × 5, 3 × 3 and 1 × 1 were integrated into the original LeNet-5 correspondingly. The Compute Unified Device Architecture (CUDA) modules can speed up the training process for the DCNN synchronously. Moreover, strong GPU computational capabilities are required to handle the tremendous matrix computations when training a DCNN. The three GPUs were all NVIDIA GTX 1050Ti (4 GB) models. If only one GPU were used to train this smart network and the GPU hardware design structure were not expanded, the maximal scale of the parallel DCNN could be limited. Therefore, each part of ShrimpNet was distributed onto the GPUs, and each GPU framebuffer only saved one third of the DCNN parameters, providing convenient communication for all the GPUs. Moreover, these GPUs had mutual access to the framebuffer, and this process did not access the host memory. ShrimpNet was designed to allow the GPUs to communicate only at certain layers of the network, controlling the performance loss of communication. Thus, simultaneously using multiple GPUs to train ShrimpNet was efficient.

Step 2: Add 3 × 3 and 1 × 1 filters to the model
Motivated by the efficient convolutional kernel size (filter) design of SqueezeNet (Forrest et al., 2016), 3 × 3 and 1 × 1 filters were added to the original LeNet-5 model. Given the number of filters, the addition of 1 × 1 and 3 × 3 filters mainly stemmed from the following two aspects: i) a network consisting of 1 × 1 and 3 × 3 filters has 25 and 3 times fewer parameters, respectively, than a network consisting only of 5 × 5 filters; therefore, the quantity of parameters decreased while attempting to preserve accuracy; ii) small filters easily find internal subtle patterns, such as texture- and shape-related features hidden in the deep parts of images. The representation of these features was important for increasing the recognition accuracy for certain types of shrimp. Note that recognition accuracy was a key consideration in the construction of ShrimpNet. Therefore, this strategy must be employed for good shrimp classification.

Step 3: Merge different convolutional strides in the convolutional layers
Motivated by the design of the SqueezeNet structure (Forrest et al., 2016), a DCNN structure was designed by integrating different convolutional strides into the three GPUs independently. When this process was performed in each convolutional layer, the convolutional stride size could have a large impact on the features learned by the DCNN. For example, a large convolutional stride is prone to capture image information such as size, shape, and location. On the contrary, small convolutional strides are prone to capture image information such as color, texture, and image granularity (Forrest et al., 2016). These factors can serve as references to identify different types of shrimp, based on the nine categories of samples provided by the dataset in this study. Some types of shrimp have considerable differences in color and shape, prompting the adoption of different convolutional strides to obtain a precise classification rate.

Step 4: Add a combination classifier algorithm to the model
In the traditional structure of DCNNs, the final classification layer uses a single classifier as the label decision tool. The learned features usually match the internal structure of a certain classifier. However, the decision made by a single classifier sometimes does not reflect the comprehensive features of the samples. Therefore, a combination classifier strategy (Fumera et al., 2008) was introduced into the constructed model. The classifier types involved the support vector machine (SVM) (Qiu et al., 2015), SoftMax (Jiang et al., 2018), and Random Forest (Paul et al., 2018). These three types of classifiers were inserted between FC4 and the final classification layer to complete the combination processes based on the Improved Majority (IMAJ) rules. Our team proposed the IMAJ algorithm in 2016 (Liu et al., 2016a); the flow chart of the IMAJ algorithm is displayed in Fig. 7. This method was performed using comprehensive decision thinking. It produces a label decision using the new convolutional features combined with the SVM, SoftMax, or Random Forest.
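A simplified sketch of this Step 4 combination strategy is given below, assuming scikit-learn classifiers fitted on feature vectors exported from FC4. The names and parameters are illustrative, and the elite-vote selection of the full IMAJ rule (Fig. 7) is omitted; only the plain majority vote over the three classifiers is shown.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression  # multinomial model as a SoftMax stand-in

def combination_predict(train_feats, train_labels, test_feats):
    """Fit SVM, SoftMax and Random Forest on FC4 features and merge their
    predictions by majority vote (a simplification of the IMAJ rule)."""
    classifiers = [
        SVC(),                                                           # SVM
        LogisticRegression(multi_class="multinomial", solver="lbfgs",
                           max_iter=1000),                               # SoftMax
        RandomForestClassifier(n_estimators=100),                        # Random Forest
    ]
    votes = []
    for clf in classifiers:
        clf.fit(train_feats, train_labels)
        votes.append(clf.predict(test_feats))
    votes = np.stack(votes)                        # shape: (3, n_samples)

    # Majority vote per sample; ties fall back to the most frequent label found first.
    final = []
    for column in votes.T:
        labels, counts = np.unique(column, return_counts=True)
        final.append(labels[np.argmax(counts)])
    return np.asarray(final)
```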

Table 1
Number distribution of samples in the old and new datasets.
Data style | CR | LAO | LAT | LATM | SS | RD | RDT | TM | WF
Label | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
Before augmentation | 134 | 123 | 222 | 80 | 565 | 126 | 116 | 80 | 285
After augmentation | 2010 | 1845 | 3330 | 1200 | 8475 | 1890 | 1740 | 1200 | 4275
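The 15 transformations of Section 2.2.4 that produce the per-class counts in Table 1 can be sketched as follows. This is an illustrative OpenCV implementation under stated assumptions: border handling and the exact translation convention are not specified in the paper, and the reflection operation is counted here as two separate flips so that each original yields 15 new images (1731 × 16 = 27,696).

```python
import cv2
import numpy as np

def augment(image):
    """Sketch of the 15 augmentation transforms applied to one shrimp image."""
    h, w = image.shape[:2]
    out = []
    # 1-7: clockwise rotations about the image centre.
    for angle in (45, 90, 135, 180, 225, 270, 315):
        m = cv2.getRotationMatrix2D((w / 2, h / 2), -angle, 1.0)   # negative = clockwise
        out.append(cv2.warpAffine(image, m, (w, h)))
    # 8-9: translations towards the lower left by 50 and 100 pixels.
    for shift in (50, 100):
        m = np.float32([[1, 0, -shift], [0, 1, shift]])
        out.append(cv2.warpAffine(image, m, (w, h)))
    # 10-11: reflections reversing the pixel order in each row / each column.
    out.append(image[:, ::-1])
    out.append(image[::-1, :])
    # 12-14: rescaling by factors 2.0, 4.0 and 0.5 with bi-cubic interpolation.
    for f in (2.0, 4.0, 0.5):
        out.append(cv2.resize(image, None, fx=f, fy=f, interpolation=cv2.INTER_CUBIC))
    # 15: down-sampling at intervals of 2.
    out.append(image[::2, ::2])
    return out                       # 15 new images per original image
```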


[Fig. 6: the 224 × 224 × 3 input feeds three parallel branches with 1 × 1, 3 × 3 and 5 × 5 kernels (strides 2, 5 and 9); each branch stacks Conv1 (6 filters), max pooling, Conv2 (16 filters), max pooling, and the fully connected layers FC3 (120) and FC4 (84); the combination layer merges SVM, SoftMax and Random Forest decisions through the IMAJ rule to produce the final label.]
Fig. 6. Overall framework of the ShrimpNet architecture.

The final label decisions made by the IMAJ algorithm must agree so that the "minority is subordinate to the majority" among the produced labels. The quintessence of the IMAJ algorithm lies in the final label decision being made by the "elite votes" instead of "all votes". By using the IMAJ algorithm, the few advantaged votes that contribute to the final accuracy can be retained, whereas the disadvantaged votes that decrease the final accuracy can be deleted. Through the previous four steps, the ShrimpNet model was constructed.

Step 5: Adjust ShrimpNet to find the optimal network structure
The objective of this step is similar to that of Step 2. From a practical point of view, Step 5 allows shrimp sorting applications to confirm that the model can execute in real-time and be continually refreshed by accepting new samples. Moreover, the strategies for changing the net structure within the overall architecture were also investigated in this step. Such strategies can enable fast running times with only small performance effects. The performance and corresponding running time of the model were analyzed by shrinking its depth (number of layers) and width (filters in each layer) until an optimal net structure was found.

Step 6: Add dropout techniques to ShrimpNet
Hinton et al. (2012) proposed the dropout technique, which is mainly used to randomly delete some superfluous neurons, thereby preventing overfitting when training a DCNN. By adding this technology, the generalization performance of ShrimpNet could be enhanced to a certain extent. In ShrimpNet, the two fully connected layers (FC3, FC4) employed dropout.

The specific details of ShrimpNet were analyzed by calculating the number of parameters in each layer, thereby computing the output image size and understanding the depth and width of ShrimpNet. The input consisted of the 224 × 224 × 3 preprocessed images. The number of local receptive fields in the Conv1 layer was six, whereas that in the Conv2 layer was 16. The number of neuron nodes in the first fully connected layer (FC3) was 120, whereas that in the second fully connected layer (FC4) was 84. We also supposed that the input parameters were (N, Cin, Hin, Win) and the output parameters were (N, Cout, Hout, Wout), where N denotes the batch size, C denotes the number of channels, H denotes the height in image pixels, and W denotes the width in image pixels. The output image size can be computed using the following equations (Eqs. (1.2) and (1.3)):

Hout = (Hin + 2 × padding[0] − dilation[0] × (kernel_size[0] − 1) − 1) / stride[0] + 1    (1.2)

Wout = (Win + 2 × padding[1] − dilation[1] × (kernel_size[1] − 1) − 1) / stride[1] + 1    (1.3)

where padding controls the amount of implicit zero-padding on both sides of each dimension, and padding[0] and padding[1] represent the x- and y-coordinate padding amounts, respectively. dilation controls the spacing between the kernel points, and dilation[0] and dilation[1] represent the x- and y-coordinate dilations, respectively. kernel_size indicates the filter size in each convolutional layer, and kernel_size[0] and kernel_size[1] represent the x- and y-direction filter sizes, respectively.
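Eqs. (1.2) and (1.3) can be applied directly; the short helper below evaluates the formula for one spatial dimension. Note that the output sizes listed in Table 2 appear to additionally reflect the max-pooling that follows each convolution, so they are smaller than the raw convolution outputs computed here.

```python
import math

def conv_output_size(in_size, kernel_size, stride, padding=0, dilation=1):
    """Spatial output size of a convolution along one dimension (Eqs. (1.2)/(1.3))."""
    return math.floor(
        (in_size + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1
    )

# Example: a 224-pixel dimension passed through the 5 x 5 / stride-9 branch.
print(conv_output_size(224, kernel_size=5, stride=9))   # -> 25 (before pooling)
```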

[Fig. 7 flowchart: K1 classifiers and K2 features yield K1 × K2 misclassification rates (MR); the MR set is sorted from small to large, the top-ranked L1 × L2 combinations (L1 ≤ K1, L2 ≤ K2) are selected and their MRs renewed, the corresponding L1 classifiers and L2 features are recalled, and the final label is obtained by majority vote.]
Fig. 7. Flowchart of the IMAJ algorithm.
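Putting Sections 2.3.1 and 2.3.2 together, the following is an illustrative PyTorch sketch (not the authors' released code) of a three-branch network in the spirit of Fig. 6. The kernel sizes, strides, filter counts and fully connected widths follow the values quoted above; the activation functions, pooling settings, LazyLinear sizing (PyTorch ≥ 1.8) and the single linear head are assumptions, and the paper's combination layer of Step 4 would replace that head.

```python
import torch
import torch.nn as nn

class ShrimpNetSketch(nn.Module):
    """Illustrative three-branch LeNet-5-style network: each branch uses one
    kernel-size/stride pair (1x1/2, 3x3/5, 5x5/9), 6 then 16 filters, and
    120/84 fully connected units with dropout on FC3 and FC4 (Step 6)."""

    def __init__(self, num_classes=9, dropout=0.8):
        super().__init__()
        self.branches = nn.ModuleList()
        for k, s in ((1, 2), (3, 5), (5, 9)):
            self.branches.append(nn.Sequential(
                nn.Conv2d(3, 6, kernel_size=k, stride=s), nn.ReLU(),
                nn.MaxPool2d(2, ceil_mode=True),
                nn.Conv2d(6, 16, kernel_size=k, stride=s), nn.ReLU(),
                nn.MaxPool2d(2, ceil_mode=True),
                nn.Flatten(),
                nn.LazyLinear(120), nn.ReLU(), nn.Dropout(dropout),   # FC3
                nn.Linear(120, 84), nn.ReLU(), nn.Dropout(dropout),   # FC4
            ))
        # Single softmax head shown for simplicity; ShrimpNet instead feeds the
        # branch features to the SVM/SoftMax/Random Forest combination layer.
        self.head = nn.Linear(3 * 84, num_classes)

    def forward(self, x):                              # x: (N, 3, 224, 224)
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.head(feats)

model = ShrimpNetSketch()
logits = model(torch.randn(2, 3, 224, 224))            # -> shape (2, 9)
```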


Table 2
Parameter design and computation of ShrimpNet.
Layer | GPU-parallel sub-layer | Output size | Filter size/stride | Total parameters
Conv1 | Conv11 | 56 × 56 × 6 | 1 × 1/2 | 18,816
Conv1 | Conv12 | 23 × 23 × 6 | 3 × 3/5 | 3147
Conv1 | Conv13 | 13 × 13 × 6 | 5 × 5/9 | 1014
Conv2 | Conv21 | 28 × 28 × 16 | 1 × 1/2 | 12,544
Conv2 | Conv22 | 2 × 2 × 16 | 3 × 3/5 | 64
Conv2 | Conv23 | 1 × 1 × 16 | 5 × 5/9 | 16
FC3 | FC31 | 28 × 28 × 16 × 120 | \ | 1,505,280
FC3 | FC32 | 2 × 2 × 16 × 120 | \ | 7680
FC3 | FC33 | 1 × 1 × 16 × 120 | \ | 1920
FC4 | FC41 | 120 × 84 | \ | 10,080
FC4 | FC42 | 120 × 84 | \ | 10,080
FC4 | FC43 | 120 × 84 | \ | 10,080

The padding, dilation, kernel_size, Hin and Win were all known in advance. Therefore, the output size for each image could be solved. The computed results are summarized in Table 2.

3. Results and discussion

3.1. Hyper-parameter optimization of ShrimpNet

The hyper-parameters include the learning rate, dropout ratio, batch-size, number of convolutional kernels, and convolutional kernel size. The first three parameters are analyzed in this section, while the latter two are analyzed in the next section.

The learning rate controls the convergence speed of the DCNN model; the model finishes training when the loss function converges to a steady state. A high learning rate results in rapid convergence, whereas a low learning rate results in slow convergence. Based on a contrast experiment, a learning rate of 10−4 was selected, as shown in Fig. 8.

Fig. 8. Effects of learning rate on validation accuracy and minimum train loss.

Dropout is a powerful technique for addressing overfitting when data are limited. This process removes net units with a fixed probability in the training stage, while the whole architecture is used at test time. In this experiment, the effect of varying this hyper-parameter was explored from 0.01 to 0.99, which is the generally recommended range (Wu et al., 2018). As shown in Fig. 9, the network performed well at 0.8.

Fig. 9. Effects of dropout ratio on validation accuracy.

The batch-size is the number of samples that are input into the network when training the model. Two concepts were introduced to optimize the batch-size: the rising speed ratio of the training accuracy (TADSR) and the descending speed ratio of the training loss (TLDSR). The two indexes were calculated using the following two equations (Eqs. (1.4) and (1.5)):

TADSR = (initialacc − steadyacc) / initialacc × 100%    (1.4)

TLDSR = (initialloss − steadyloss) / initialloss × 100%    (1.5)

where initialacc is the initial value of the training accuracy, and steadyacc is the final accuracy value when the model is at a steady state. Similarly, initialloss is the initial value of the training loss, and steadyloss is the final loss value when the model is at a steady state. Fig. 10 illustrates the comparison results with different batch-size values. As the batch-size increased, the two indices simultaneously increased and became nearly flat when the batch-size reached 200 and 256. Both values could optimize the model. Considering that a large batch-size may exhaust the memory, 200 was selected.

Fig. 10. Effects of batch-size on TLDSR and TADSR.

3.2. Model exploration in different architectures of ShrimpNet

ShrimpNet was varied to explore better-performing sub-models by adjusting the sizes of the layers or removing a certain layer. The run-time and accuracy of the overall architecture of ShrimpNet were first explored by completely removing each layer. Table 3 shows the impact of changing the overall architecture on the performance and runtime. The abbreviation TLDT in Table 3 is the training loss descent time; a smaller TLDT corresponds to a faster and more efficient modeling process.

In each case, the model was independently trained from scratch with the revised architecture.
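For concreteness, the hyper-parameters selected in Section 3.1 (learning rate 10−4, dropout ratio 0.8, batch-size 200) can be wired into a standard PyTorch training loop, as in the hedged sketch below. The optimizer (Adam) and the cross-entropy loss are assumptions, since the paper does not state them; ShrimpNetSketch refers to the illustrative module sketched in Section 2.3, and the random tensors stand in for the augmented shrimp images.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

LEARNING_RATE, BATCH_SIZE, DROPOUT = 1e-4, 200, 0.8   # values reported in Section 3.1

model = ShrimpNetSketch(num_classes=9, dropout=DROPOUT)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)   # optimizer is assumed
criterion = torch.nn.CrossEntropyLoss()                              # loss is assumed

# Placeholder tensors stand in for the 224 x 224 x 3 augmented dataset.
images = torch.randn(400, 3, 224, 224)
labels = torch.randint(0, 9, (400,))
loader = DataLoader(TensorDataset(images, labels), batch_size=BATCH_SIZE, shuffle=True)

model.train()
for epoch in range(2):                      # a real run uses many more epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```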


Table 3
Effects of changing the overall architecture of ShrimpNet (bold values in this Table represent the better experimental results).
Index | Architectural change | TLDSR (%) | TLDT (h) | Training time (h) | Validation accuracy
1 | Only retain Conv1 | 99.60 | 0.49 | 0.43 | 84.70%
2 | Overall ShrimpNet | 99.86 | 0.50 | 0.65 | 97.92%
3 | Remove Conv2 + Pooling2 | 99.86 | 0.54 | 0.58 | 85.79%
4 | Remove Pooling1 + Pooling2 | 99.92 | 0.48 | 0.60 | 91.45%
5 | Remove Pooling2 | 99.10 | 0.51 | 0.56 | 94.53%
6 | Remove Pooling1 | 99.65 | 0.53 | 0.61 | 94.70%
7 | Remove FC3 | 96.51 | 0.51 | 0.52 | 92.43%
8 | Remove FC4 | 97.26 | 0.46 | 0.53 | 93.23%
9 | Remove FC3, FC4 | 99.94 | 0.41 | 0.47 | 96.84%
10 | Adjust Conv1,2: 4, 12 filters | 96.16 | 0.43 | 0.56 | 86.36%
11 | Adjust Conv1,2: 32, 64 filters + dropout | 98.96 | 0.58 | 0.65 | 95.67%
12 | Adjust Conv1,2: 64, 128 filters + dropout | 98.66 | 0.76 | 0.81 | 96.53%
13 | Adjust Conv1,2: 128, 256 filters + dropout | 97.26 | 0.69 | 0.99 | 97.19%
14 | Adjust Conv1,2: 256, 512 filters + dropout | 97.18 | 0.72 | 1.17 | 97.23%

Removing the fully connected layers FC3 and FC4 simultaneously (called ShrimpNet-3) or deleting one of them only yielded a slight decrease in accuracy. This result is surprising, since these layers contain most of the model parameters. This adjustment can reduce the computational cost and burden with only a slight accuracy loss. Removing the two pooling layers (Pooling1-2) simultaneously or deleting one of them also yielded a relatively small difference in the final accuracy. However, only retaining the Conv1 layer, or removing the Conv2 layer and the Pooling2 layer simultaneously, yielded poor performance; therefore, the convolutional layers are important for obtaining good performance. The convolutional kernel configurations of the Conv1 and Conv2 layers in ShrimpNet were also modified, and the results are provided in Table 3 (No. 10-14). These results indicate that increasing the number of convolutional kernels in the Conv1 and Conv2 layers provided a benefit only for the training performance. Such an increase also enlarged the scale of the fully connected layers, introducing an over-fitting issue; thus, the dropout technology was added to ShrimpNet. In this contribution, the sub-model in which the FC3 and FC4 layers were removed (No. 9) was selected as the optimized model based on the smallest training time and the best validation accuracy.

Efficient feature self-learning is the kernel of ShrimpNet, and it is also the main reason that high performance was achieved. Moreover, the design of the two convolutional layers and the latter fully connected layers plays an essential role in the recognition rate. In the first convolutional layer (Conv1), the network can learn the key feature points and the obvious regions on the surface of the shrimp, including the shrimp eyes, spots on the abdomen, and tiny dots on the maxilla and uropod. In the second convolutional layer (Conv2), edge-related information consisting of pixel feature points can be learned from the shrimp image, including the profile shape of the abdomen, parts of the cephalothorax, and parts of the carapace. Many object feature differences between sound shrimp and other diseased or defective shrimp were detected in the two fully connected layers (FC3-4), including the subtle textural differences on the surface of the shrimp, the morphological differences of the rostrum, and the glossiness of the shrimp surface. These hidden features could be used to recognize real shrimp samples in ShrimpNet.

3.3. Confusion matrix for shrimp recognition

Table 4 was created to investigate how frequently the proposed method incorrectly identifies "dirty" shrimp as sound shrimp. In Table 4, the evaluation indices time and accuracy represent, respectively, the recognition time per shrimp and the total validation accuracy. From Table 4, the LAT class had the lowest validation accuracy (87.5%) among all nine classes. The reason is that there is a high similarity in the external appearance of the SS and LAT classes. Moreover, incorrectly classifying LATs as SS samples accounted for 88.66% of the overall misclassified samples of the LAT class. In this situation, LAT is a class that only lacks a part of the tail, which cannot cause pollution to sound shrimp clusters. Moreover, as shown in Table 4, no diseased or polluted shrimp classes were incorrectly graded into the sound shrimp class, which was an objective of this study.

3.4. Performance comparison between combination and individual classifiers

Table 5 shows the performance of the combination layer in ShrimpNet under different classifier combinations, confirming that the classifier combination scheme could achieve relatively high validation accuracy. However, the training time was not optimal, because the combination process spends time merging the three single classifiers. After this experiment, we confirmed that using ShrimpNet-SoftMax required the least time, but its accuracy was not optimal. Thus, there is a tradeoff between time consumption and accuracy, and the optimal combination must be selected based on the actual application requirements. The final classification layer of a traditional DCNN structure commonly uses a single classifier as the label decision tool. However, the decision made by a single classifier sometimes cannot capture the comprehensive features of the shrimp samples. Therefore, the use of a combination classifier strategy is a key factor in acquiring high performance.

3.5. Performance comparison with other sophisticated DCNNs

The proposed method was compared with other sophisticated DCNNs. The results were obtained on the same computational system and are summarized in Table 6. The system operation time evidently differed based on the depth of the DCNNs. For example, training the deep model DenseNet, consisting of 121 layers and achieving an accuracy of 99.81%, required more than half a day. Given that most parameters accumulate in the FC3 and FC4 layers of ShrimpNet, manipulations should be performed to free the memory space. Thus, the strategy of deleting the two fully connected layers was developed to form ShrimpNet-3. This strategy involved judiciously decreasing the quantity of parameters from 254.6 to 9.7 while attempting to preserve accuracy as much as possible. This step could improve the operational efficiency of the system using ShrimpNet. Although the accuracy of ShrimpNet is not surprising, this accuracy level for agricultural product recognition is completely acceptable. The advantages of ShrimpNet-3 lie mainly in its improved algorithm execution time and model scale.


Table 4
Confusion matrix for shrimp classification (bold values in this Table represent the numbers of correctly classified samples).
Testing stage | CR | LAO | LAT | LATM | SS | RD | RDT | TM | WF
CR | 1073 | 3 | 4 | 5 | 0 | 0 | 0 | 7 | 0
LAO | 0 | 986 | 7 | 2 | 0 | 0 | 0 | 8 | 0
LAT | 16 | 2 | 1436 | 4 | 172 | 0 | 0 | 11 | 0
LATM | 0 | 5 | 13 | 637 | 0 | 0 | 0 | 13 | 0
SS | 0 | 6 | 64 | 0 | 4441 | 0 | 0 | 2 | 0
RD | 1 | 1 | 0 | 0 | 0 | 1009 | 25 | 0 | 0
RDT | 0 | 0 | 0 | 0 | 0 | 9 | 934 | 0 | 0
TM | 8 | 2 | 5 | 4 | 0 | 0 | 0 | 641 | 0
WF | 0 | 0 | 0 | 0 | 7 | 5 | 0 | 0 | 2280
Time (ms) | 19.56 | 23.16 | 24.18 | 16.34 | 26.42 | 21.29 | 19.29 | 22.7 | 32.98
Accuracy (%) | 98.3 | 98.3 | 87.5 | 95.4 | 98.4 | 97.4 | 99.1 | 97.1 | 99.5
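The per-class accuracies in the last row of Table 4 follow directly from the matrix (correct predictions on the diagonal divided by the row totals). A short numerical check, with the rows transcribed from Table 4:

```python
import numpy as np

# Rows of Table 4 (true class x predicted class); order: CR, LAO, LAT, LATM, SS, RD, RDT, TM, WF.
confusion = np.array([
    [1073, 3, 4, 5, 0, 0, 0, 7, 0],
    [0, 986, 7, 2, 0, 0, 0, 8, 0],
    [16, 2, 1436, 4, 172, 0, 0, 11, 0],
    [0, 5, 13, 637, 0, 0, 0, 13, 0],
    [0, 6, 64, 0, 4441, 0, 0, 2, 0],
    [1, 1, 0, 0, 0, 1009, 25, 0, 0],
    [0, 0, 0, 0, 0, 9, 934, 0, 0],
    [8, 2, 5, 4, 0, 0, 0, 641, 0],
    [0, 0, 0, 0, 7, 5, 0, 0, 2280],
])

# Per-class accuracy = correctly classified samples / all samples of that class.
per_class = confusion.diagonal() / confusion.sum(axis=1)
print(np.round(per_class * 100, 1))   # e.g. LAT -> 87.5, as reported in Table 4
```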

Table 5
Comparison between combination and single classifiers (bold values in this Table represent the better experimental results).
Classifier choice | Training time (h) | Validation accuracy (%)
ShrimpNet-SVM | 0.63 | 93.34
ShrimpNet-SoftMax | 0.59 | 92.52
ShrimpNet-Random Forest | 0.61 | 94.13
ShrimpNet-Combination | 0.65 | 96.92

Table 6
Performance comparison of different DCNN constructions with the proposed ShrimpNet (bold values in this Table represent the better experimental results).
DCNN type | Model size (MB) | Data flow (MB) | Total time (h) | Accuracy (%)
LeNet | 109.6 | 4.2 | 1.06 | 95.92
AlexNet | 254.6 | 5.4 | 2.09 | 98.31
VGGNet (11) | 397.6 | 48.7 | 3.41 | 98.44
SqueezeNet | 198.4 | 8.6 | 1.54 | 92.46
GoogleNet (v3) | 598.3 | 99.8 | 7.31 | 97.44
ResNet (50) | 869.4 | 102.3 | 13.52 | 99.67
ResNet (101) | 1769.6 | 165.1 | 15.00 | 99.82
DenseNet (121) | 2894.7 | 234.8 | 17.87 | 99.81
ShrimpNet | 288.3 | 6.5 | 1.01 | 97.92
ShrimpNet-3 | 9.7 | 3.2 | 0.70 | 96.84

Table 7
Accuracy comparison of ShrimpNet with other shrimp recognition methods (bold values in this Table represent the better experimental results).
Feature representation | Classifier | Validation accuracy
ECO features (Zhang et al., 2014b) | AdaBoost | 95.1%
TADA features (Lee et al., 2012) | Threshold settings | 92.4%
Segmentation features (Hanmei, 2015) | Neural networks | 93.33%
Shape features (Wei, 2018) | K-Nearest Neighbor | 86.4%
Combination features (Liu et al., 2016a) | Combination rules | 92.7%
Iteration features (Zihao et al., 2016b) | Threshold settings | 94.34%
ShrimpNet-3 | Combination rules | 96.84%

The two indices were lower than those of the other sophisticated DCNNs in Table 6. ShrimpNet-3 achieved 96.84% accuracy with a 9.7 MB model size and 3.2 MB data flow. In addition, its model scale was small, nearly 298 and 182 times smaller than DenseNet (121) and ResNet (101), respectively. These results demonstrate the superiority of the proposed algorithm.

3.6. Performance comparison with other shrimp recognition methods

In this research, we compared ShrimpNet with other shrimp recognition methods; the results are shown in Table 7. Searching for the differences in the external appearances of the shrimp classes is traditionally achieved by the human eye. However, converting human experience into a machine vision algorithm is the mainstream, fully automatic way to evaluate shrimp quality. Therefore, for the results presented in Table 7, the feature extraction methods mainly depend on elaborate and complex mathematical models, which is crucial for improving the validation accuracy. However, the process of seeking the best sorting strategies based on human experience requires considerable amounts of time. Compared with the process of constructing such features by hand, the end-to-end strategy of ShrimpNet-3 allows convenient and rapid model construction. Moreover, this model exhibited better validation accuracy than the other shrimp recognition methods. Specifically, the accuracy of ShrimpNet-3 increased by 1.74% and 10.44% compared with the best (95.1%) and the worst (86.4%) accuracies of the traditional methods. Moreover, the average validation accuracy of the six traditional methods was 92.38%, which is 4.46% lower than that of ShrimpNet-3. These results demonstrate the superiority of the proposed algorithm.

Therefore, this preliminary study served as a proof-of-concept that a DCNN combined with fusion classifier strategies can be used to effectively assess shrimp quality. However, further studies must be performed to scale up the implementation of such tools. A shrimp quality evaluation model should be built using various shrimp classes arising from different geographic regions. Longer experimental windows would also allow the creation of large datasets and robust mathematical models, which should generalize well to additional shrimp classes and evaluation conditions.

4. Conclusions

In this study, the effectiveness of the proposed ShrimpNet was demonstrated by recognizing shrimp in industrial images. ShrimpNet was proposed based on a standard dataset that uses image augmentation technology to train a DCNN. After augmentation, this dataset achieved an unprecedented scale and was significantly enriched with respect to the variation of each species, allowing the construction of a powerful DCNN for shrimp classification. Certain results were obtained during the off-line modeling process, and the accuracy of ShrimpNet was comparatively high for the field of agricultural product recognition. Furthermore, this research suggested that the application of this method to recognize shrimp online is promising and efficient. The constructed ShrimpNet-3 has a small model size (9.7 MB) and small parameter flow (3.2 MB).


Moreover, with a tiny network structure and trained model parameters, ShrimpNet-3 can be used in online classification for rapid judgment and decision making.

This classification pipeline has not been used previously, and thus this paper is the first to report this method for the shrimp classification task. The approach can be improved further as follows. (i) Developing finer methods for discerning the LAT class, which currently lowers the total validation accuracy. (ii) Expanding the dataset to include additional significant species in the future. (iii) Implementing online learning that uses unlabeled new shrimp samples from the production line to update the model parameters in real time. (iv) Using an unsupervised machine learning algorithm to reduce human intervention during the training process. (v) Interpreting overlapping shrimp clusters, which remains a great challenge.

Acknowledgement & Funding

This work was financially supported by the Scientific Research Foundation of Jiaxing University, the city public welfare technology application research project of Jiaxing Science and Technology Bureau (No. 2018AY11008), the National Social Science Funds (No. 18ZDA079), 2019 Active Design Projects of Key R&D Plans of Zhejiang Province (No. 2019C01128) and 2019 Projects on Public Welfare Technology Research of Zhejiang Province (No. LGF19G030004). We thank LetPub for its linguistic assistance during the preparation of this manuscript.

Declaration of Competing Interest

The authors declare that they have no conflicts of interest.

Ethical approval

All applicable international, national, and/or institutional guidelines for the care and use of animals were followed.

Informed consent

Not applicable.

Appendix A. Supplementary material

Supplementary data to this article can be found online at https://doi.org/10.1016/j.compag.2019.104926.

References

Arendse, E., Fawole, O.A., Opara, U.L., 2015. Discrimination of pomegranate fruit quality by instrumental and sensory measurements during storage at three temperature regimes. J. Food Process. Preserv. 39 (6), 1867–1877.
Arun, P.V., Buddhiraju, K.M., Porwal, A., 2018. CNN based sub-pixel mapping for hyperspectral images. Neurocomputing 311, 51–64.
Banerjee, S., Das, S., 2018. Mutual variation of information on transfer-CNN for face recognition with degraded probe samples. Neurocomputing 310, 299–315.
Blasco, J., Munera, S., Aleixos, N., Cubero, S., Molto, E., 2017. Machine vision-based measurement systems for fruit and vegetable quality control in postharvest. Meas. Model. Autom. Adv. Food Process. 161, 71–91.
Borresen, T., 2018. Improving seafood safety and quality. J. Aquat. Food Prod. Technol. 27 (5), 543.
Chakraborty, D., Dutta, O., Sarkar, A., Ghoshal, S., Saha, S., 2014. Linear discriminant analysis based Indian fruit juice classification using NIR spectrometry data. In: 5th International Conference Confluence the Next Generation Information Technology Summit (Confluence), pp. 144–148.
Dong, Z., Jia, S., Wu, T., Pei, M., et al., 2016. Face video retrieval via deep learning of binary hash representations. In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16).
Fernandez-Lozano, C., Canto, C., Gestal, M., Andrade-Garda, J.M., Rabunal, J.R., Dorado, J., Pazos, A., 2013. Hybrid model based on genetic algorithms and SVM applied to variable selection within fruit juice classification. Sci. World J.
Forrest, N.L., Song, H., Matthew, W.M., Khalid, A., William, J.D., Kurt, K., 2016. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360.
Fuentes, M., Baigorri, R., Gonzalez-Gaitano, G., Garcia-Mina, J.M., 2018. New methodology to assess the quantity and quality of humic substances in organic materials and commercial products for agriculture. J. Soils Sedim. 18 (4), 1389–1399.
Fumera, G., Roli, F., Serrau, A., 2008. A theoretical analysis of bagging as a linear combination of classifiers. IEEE Trans. Pattern Anal. Mach. Intell. 30 (7), 1293–1299.
Hanmei, H., 2015. Research of melanotic cooked shrimp recognition method using machine vision. School of Biosystems Engineering and Food Science, Zhejiang University, p. 137.
Harbitz, A., 2007. Estimation of shrimp (Pandalus borealis) carapace length by image analysis. ICES J. Mar. Sci. 64 (5), 939–944.
Hassoun, A., Karoui, R., 2017. Quality evaluation of fish and other seafood by traditional and nondestructive instrumental methods: advantages and limitations. Crit. Rev. Food Sci. Nutrit. 57 (9), 1976–1998.
Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.R., 2012. Improving neural networks by preventing co-adaptation of feature detectors. Comput. Sci. 3 (4), 212–223.
Jiang, M.Y., Liang, Y.C., Feng, X.Y., Fan, X.J., Pei, Z.L., Xue, Y., Guan, R.C., 2018. Text classification based on deep belief network and softmax regression. Neural Comput. Appl. 29 (1), 61–70.
Kim, H.W., Hong, Y.J., Jo, J.I., Ha, S.D., Kim, S.H., Lee, H.J., Rhee, M.S., 2017. Raw ready-to-eat seafood safety: microbiological quality of the various seafood species available in fishery, hyper and online markets. Lett. Appl. Microbiol. 64 (1), 27–34.
Krizhevsky, A., Sutskever, I., Hinton, G.E., 2017. ImageNet classification with deep convolutional neural networks. Commun. ACM 60 (6), 84–90.
LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., 1998. Gradient-based learning applied to document recognition. Proc. IEEE 86 (11), 2278–2324.
Lee, D.J., Xiong, G.M., Lane, R.M., Zhang, D., 2012. An efficient shape analysis method for shrimp quality evaluation. In: 12th International Conference on Control, Automation, Robotics & Vision (ICARCV), pp. 865–870.
Li, S., Song, W.F., Qin, H., Hao, A.M., 2018. Deep variance network: an iterative, improved CNN framework for unbalanced training datasets. Pattern Recogn. 81, 294–308.
Liu, Z., Cheng, F., Hong, H., 2016a. Identification of impurities in fresh shrimp using improved majority scheme-based classifier. Food Anal. Methods.
Liu, Z.Y., Gao, J.F., Yang, G.G., Zhang, H., He, Y., 2016b. Localization and classification of paddy field pests using a saliency map and deep convolutional neural network. Sci. Rep. 6.
Ni, C., Wang, D.Y., Vinson, R., Holmes, M., Tao, Y., 2019. Automatic inspection machine for maize kernels based on deep convolutional neural networks. Biosyst. Eng. 178, 131–144.
Nicolai, B.M., Defraeye, T., De Ketelaere, B., Herremans, E., Hertog, M.L.A.T.M., Saeys, W., Torricelli, A., Vandendriessche, T., Verboven, P., 2014. Nondestructive measurement of fruit and vegetable quality. Annu. Rev. Food Sci. Technol. 5 (5), 285–312.
Nyawira Ishtar, B.K., Iris, Qian, Annie, Zhang, 2018. Understanding neural pathways in zebrafish through deep learning and high resolution electron microscope data.
Paul, A., Mukherjee, D.P., Das, P., Gangopadhyay, A., Chintha, A.R., Kundu, S., 2018. Improved random forest for classification. IEEE Trans. Image Process. 27 (8), 4012–4024.
Pu, Y.Y., Feng, Y.Z., Sun, D.W., 2015. Recent progress of hyperspectral imaging on quality and safety inspection of fruits and vegetables: a review. Compreh. Rev. Food Sci. Food Saf. 14 (2), 176–188.
Qiu, S.S., Gao, L.P., Wang, J., 2015. Classification and regression of ELM, LVQ and SVM for E-nose data of strawberry juice. J. Food Eng. 144, 77–85.
Wang, S.H., Lv, Y.D., Sui, Y.X., Liu, S., Wang, S.J., Zhang, Y.D., 2018. Alcoholism detection by data augmentation and convolutional neural network with stochastic pooling. J. Med. Syst. 42 (1).
Wei, Z., 2018. Study of online identification and elimination system for incomplete shrimp based on machine vision technology. School of Biosystems Engineering and Food Science, Zhejiang University, p. 89.
Wu, X., He, R., Sun, Z.N., Tan, T.N., 2018. A light CNN for deep face representation with noisy labels. IEEE Trans. Inf. Forens. Secur. 13 (11), 2884–2896.
Zhang, L., Dong, W., Zhang, D., Shi, G., et al., 2010. Two-stage image denoising by principal component analysis with local pixel grouping. Pattern Recogn.
Zhang, B.H., Li, J.B., Fan, S.X., Huang, W.Q., Zhang, C., Wang, Q.Y., Xiao, G.D., 2014a. Principles and applications of hyperspectral imaging technique in quality and safety inspection of fruits and vegetables. Spectrosc. Spectral Anal. 34 (10), 2743–2751.
Zhang, D., Lillywhite, K.D., Lee, D.J., Tippetts, B.J., 2014b. Automatic shrimp shape grading using evolution constructed features. Comput. Electron. Agric. 100, 116–122.
Zhang, Y., Wang, G.Y., Xu, J.T., 2018. Parameter estimation of signal-dependent random noise in CMOS/CCD image sensor based on numerical characteristic of mixed Poisson noise samples. Sensors 18 (7).
Zhang, Y.D., Wu, L.A., Wang, S.H., Ji, G.L., 2015. Comment on 'Principles, developments and applications of computer vision for external quality inspection of fruits and vegetables: a review (Food Research International; 2014, 62: 326–343)'. Food Res. Int. 70, 142.
Zhao, J.F., Mao, X., Chen, L.J., 2019. Speech emotion recognition using deep 1D & 2D CNN LSTM networks. Biomed. Signal Process. Control 47, 312–323.
Zihao, L., Fang, C., Wei, Z., 2017. Recognition-based image segmentation of touching pairs of cooked shrimp (Penaeus orientalis) using improved pruning algorithm for quality measurement. J. Food Eng. 195, 1–16.
Zihao, L., Fang, C., Zhang, W., 2016a. A novel segmentation algorithm for clustered flexional agricultural products based on image analysis. Comput. Electron. Agric. 126, 44–54.
Zihao, L., Fang, C., Zhaoyong, G., Zhaohong, Y., Mingchuan, Z., Junfeng, G., 2016b. An automatic system for eliminating shrimp impurities using iteration algorithm. Int. Agric. Eng. J. 4.
