An Anti-Fraud System For Car Insurance Claim Based On Visual Evidence
all these claims decrease the efficiency of insurance companies' service and increase the cost of staffing. A system that helps detect fraudulent claims is therefore needed if the process is to become more automatic. As shown in Figure ??, the user is required to upload several images following the requirements in order to open a new claim. The anti-fraud system then applies algorithms to extract features and searches the enrolled history database. If the system reports a high suspicion score for a certain claim, a human can be brought into the loop to manually retrieve the history and perform the loss assessment. To make the system both accurate and fast, we need a robust feature based on accurately locating the damage in the image. We adopt two state-of-the-art real-time detectors for damage detection and propose an approach to form robust features for anti-fraud searching. We introduce each part in the following sections.

Damage detection

In most cases, users are required to upload a few images showing a close view of the damage on the vehicle and another picture with a farther view that contains the whole body of the car as well as the plate, which shows the model and identity of the car. The first step toward obtaining a robust feature descriptor for fraud detection is damage detection. This task is challenging since it is not a traditional object detection problem and the damage can take different forms. Based on an analysis of our dataset, we consider three main damage types: scratch, dent, and crack.

Localizing the damage in an image uploaded by a user is a critical part of our system, since accurate detection yields more stable and discriminative features. In addition, to meet our real-time requirement without sacrificing detection accuracy, we employ YOLO, a state-of-the-art lightweight real-time detector, as part of the system. We propose our approach to train and evaluate on the new dataset we collect.

4.0.1 YOLO Detector

YOLO is a fast, accurate object detection framework. Its base network runs at 45 frames per second with no batch processing on a Titan X GPU, and a fast version runs at more than 150 fps, which allows streaming video to be processed in real time with less than 25 milliseconds of latency. It is extremely useful for real-time responsive services such as autonomous driving. YOLO frames object detection as a single regression problem: image pixels are mapped directly to bounding box coordinates and class probabilities. As damages such as scratches or dents may have different shapes, YOLO handles this by learning features through regression of the four box coordinates. With its standard model, YOLO achieves 45 fps with an mAP of 63.4 on VOC2007. These performances meet the efficiency requirements of an express anti-fraud system for insurance claims.

Damage Re-Identification

After damage detections are generated, robust features need to be extracted to overcome varying lighting conditions and viewing angles. Both global and local features are used to determine whether a claim has ever appeared in the history database, as each claim requires at least one image containing a close view of the damage and another image showing the whole body of the corresponding vehicle. To extract local features, we employ the pre-trained VGG16 [16] object recognition model as a feature extractor. The global feature consists of global deep features and a color histogram. We fuse these features to obtain a more discriminative descriptor and conduct a 1-to-N match between the probe images and the images in the post-claim history database.
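As a concrete illustration of the re-identification step above, the sketch below (our assumption of a typical implementation, not the authors' code) fuses an L2-normalized deep feature with a color histogram and performs the 1-to-N cosine match against a gallery of history-claim features; the deep feature would in practice come from the pre-trained VGG16 extractor:

```python
import numpy as np

def color_histogram(image, bins=16):
    """Per-channel color histogram, L1-normalized (8/16/32 bins are the variants tested)."""
    hist = np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def fuse_features(deep_feat, hist_feat):
    """Concatenate the L2-normalized deep feature with the color histogram."""
    deep = deep_feat / (np.linalg.norm(deep_feat) + 1e-12)
    return np.concatenate([deep, hist_feat])

def match_one_to_n(probe, gallery):
    """Rank gallery claims by cosine similarity to the probe feature (best first)."""
    g = np.stack(gallery)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    return np.argsort(-(g @ p))
```

A high similarity for the top-ranked gallery claim would then flag the probe claim as potentially duplicated.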
5. Experiments and result

5.1. Damage detection

Later, we conduct an experiment on a real car damage data set collected in a local parking lot. We design two protocols for training and testing. Considering that the real system is updated periodically to enhance the model, all the images recorded as history in the system

YOLO with LRN+dropout achieves better precision at 47.04 percent, with lower recall. The VGG-16 variant performs worse than the original YOLO model, and the Tiny YOLO model achieves the lowest performance due to the reduced size of its network architecture.
              YOLO original   YOLO VGG-16   YOLO with LRN   YOLO with LRN+dropout   YOLO tiny
precision         12.66          11.1           14.62              14.27               9.41
recall            85.6           82.01          89.72              78.66              72.49
precision         25.58          32.75          37.96              47.04              17.48
recall            76.61          57.58          81.75              63.54              66.84
precision         40.89          49.07          38.63              86.25              26.26
recall            66.32          27.25          72.49              17.74              60.15
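For clarity, the detection precision and recall reported above can be computed by greedily matching predicted boxes to ground-truth boxes at an IoU threshold. The following is a minimal sketch of that standard procedure (our assumption, not the authors' evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Greedily match each prediction to an unused ground-truth box at IoU >= thr."""
    matched, tp = set(), 0
    for p in preds:
        best, best_j = 0.0, None
        for j, g in enumerate(gts):
            if j not in matched and iou(p, g) > best:
                best, best_j = iou(p, g), j
        if best_j is not None and best >= thr:
            tp += 1
            matched.add(best_j)
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall
```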
[Figure 6: Fraud detection result at rank-1 rate and rank-10 rate with ground truth and detection result with different feature fusions. Four panels plot accuracy for the feature fusions hist:8, hist:16, hist:32, car-body, and scratch+car body: rank-1 and rank-10 accuracy with ground truth, and rank-1 and rank-10 accuracy with the detection result.]
The fusion of the color histogram feature does not always boost the performance. When adopting the 32-bin color histogram feature, a slight performance drop occurs. However, if we only employ the global feature obtained from the whole vehicle body, we gain only a 19.6 percent rank-1 rate, while incorporating the feature from the scratch raises the rank-1 rate to 56.52 percent. This indicates that the histogram feature might have a negative impact on performance when the colors of the candidate vehicles are very similar.

We also explore the matching performance by substituting the ground-truth detections with the ones yielded by the YOLO detector we trained. We observe only a tiny performance difference when directly employing the detector, which indicates that a well-trained detector can yield robust results for damage feature extraction.
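The rank-1 and rank-10 rates discussed above are standard CMC-style retrieval metrics. A minimal sketch of how they can be computed (a hypothetical helper, not the authors' code): given per-probe similarity scores against the gallery, a probe counts as a hit at rank k if its true identity appears among the k most similar gallery entries.

```python
import numpy as np

def rank_k_accuracy(similarities, true_ids, gallery_ids, k):
    """Fraction of probes whose true match appears in the top-k gallery ranks."""
    hits = 0
    for sims, true_id in zip(similarities, true_ids):
        order = np.argsort(-np.asarray(sims))          # gallery indices, best first
        top_k = [gallery_ids[i] for i in order[:k]]    # identities of top-k matches
        hits += true_id in top_k
    return hits / len(true_ids)
```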
6. Conclusion and discussion

In this paper, we introduce a useful data set of car damage, which is the first data set for car damage detection and matching. We propose a practical system pipeline to detect fraudulent claims before issuing the settlement in order to reduce losses for the insurance company. We investigate different deep architectures for better damage detection results and analyze different feature fusions and their impact on the final matching performance. Extensive experiments with different settings under a real scenario demonstrate the effectiveness of the proposed approach.

References

[1] Multi-camera vision system inspects cars for dents caused by hail.
[2] Y.-J. Cha, J. Chen, and O. Büyüköztürk. Output-only computer vision based damage detection using phase-based optical flow and unscented Kalman filters. Engineering Structures, 132:300–313, 2017.
[3] Y.-J. Cha, W. Choi, and O. Büyüköztürk. Deep learning-based crack damage detection using convolutional neural networks. Computer-Aided Civil and Infrastructure Engineering, 32(5):361–378, 2017.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), volume 1, pages 886–893. IEEE, 2005.
[5] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2147–2154, 2014.
[6] R. Girshick. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 1440–1448, 2015.
[7] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Region-based convolutional networks for accurate object detection and segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(1):142–158, 2016.
[8] S. Gontscharov, H. Baumgärtel, A. Kneifel, and K.-L. Krieger. Algorithm development for minor damage identification in vehicle bodies using adaptive sensor data processing. Procedia Technology, 15:586–594, 2014.
[9] S. Jayawardena et al. Image based automatic vehicle damage detection. 2013.
[10] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. In European Conference on Computer Vision, pages 21–37. Springer, 2016.
[11] D. G. Lowe. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 2, pages 1150–1157. IEEE, 1999.
[12] A. Mohan and S. Poobal. Crack detection using image processing: A critical review and analysis. Alexandria Engineering Journal, 2017.
[13] C. P. Papageorgiou, M. Oren, and T. Poggio. A general framework for object detection. In Sixth International Conference on Computer Vision, pages 555–562. IEEE, 1998.
[14] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 779–788, 2016.
[15] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91–99, 2015.
[16] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.