(11) EP 3 087 532 B1
(12) EUROPEAN PATENT SPECIFICATION
(45) Date of publication and mention of the grant of the patent:
06.09.2023 Bulletin 2023/36

(21) Application number: 14786656.0

(51) International Patent Classification (IPC):
G06V 10/20 (2022.01) G06V 20/56 (2022.01)

(52) Cooperative Patent Classification (CPC):
G06V 20/56; G06V 10/255

(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30) Priority: 23.12.2013 DE 102013022076

(43) Date of publication of application:
02.11.2016 Bulletin 2016/44

(73) Proprietor: Valeo Schalter und Sensoren GmbH
74321 Bietigheim-Bissingen (DE)

(72) Inventor: CHANUSSOT, Lowik
F-75019 Paris (FR)

(74) Representative: Claassen, Maarten Pieter et al
Valeo Schalter und Sensoren GmbH
CDA-IP
Laiernstraße 12
74321 Bietigheim-Bissingen (DE)

(56) References cited:
• CHARKARI N M ET AL: "A new approach for real time moving vehicle detection", Intelligent Robots and Systems '93 (IROS '93), Proceedings of the 1993 IEEE/RSJ International Conference, Yokohama, Japan, 26-30 July 1993, IEEE, New York, NY, USA, vol. 1, 26 July 1993, pages 273-278, XP010219078, DOI: 10.1109/IROS.1993.583109, ISBN: 978-0-7803-0823-7
• AGARWAL S ET AL: "Learning to Detect Objects in Images via a Sparse, Part-Based Representation", IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Computer Society, USA, vol. 26, no. 11, 1 November 2004, pages 1475-1490, XP011118904, ISSN: 0162-8828, DOI: 10.1109/TPAMI.2004.108
Note: Within nine months of the publication of the mention of the grant of the European patent in the European Patent
Bulletin, any person may give notice to the European Patent Office of opposition to that patent, in accordance with the
Implementing Regulations. Notice of opposition shall not be deemed to have been filed until the opposition fee has been
paid. (Art. 99(1) European Patent Convention).
bounding box) is determined by the detection algorithm, in which the target vehicle is depicted. Therein, basically, any detection algorithm can be used, such that presently the detection algorithm is not elaborated in more detail. The image processing device then determines a region of interest (ROI) within the partial region such that wheels of a common axle of the target vehicle are depicted in the ROI. The image processing device then identifies respective lateral outer edges of the depicted wheels and determines the width of the depicted target vehicle as the distance between the outer edges.

[0011] Accordingly, the invention takes the approach of detecting the wheels of the target vehicle as invariant points and of determining the width of the target vehicle in the image based on the lateral outer edges of the wheels. First, a ROI within the partial region (bounding box) is determined, in which the wheels of the target vehicle are presumably depicted. For example, a lower area of the partial region can be defined as the ROI, since it can be assumed that the wheels of the target vehicle are located in this lower area. Then, the outer edges of the wheels are searched for in this ROI. If these outer edges are identified, the distance between them is determined and used as the width (in pixels) of the depicted target vehicle. The above-mentioned partial region (bounding box) output by the detection algorithm can then be adapted to the new, more accurate width of the target vehicle. On the one hand, the invention has the advantage that the width of the target vehicle, and thus its position in the image, can be determined particularly precisely. Since the tires of the target vehicle are always black, the outer edges of the wheels can always be reliably detected. The acquisition of the width is therefore also particularly precise over a sequence of images. On the other hand, the invention has the advantage that, by determining the ROI, the outer edges of the wheels can be detected without much effort and without the image having to be subjected to pattern recognition. For example, filtering of the gradient direction of pixels of a gradient image can be performed here, with the result that exclusively the outer edges of the wheels remain after this filtering.

[0012] Further, the invention has the advantage that the refinement in position allows the real distance (in meters) of the target vehicle to be estimated accurately. Knowing the width (in pixels) and the camera calibration parameters, it is possible to estimate an accurate distance to the target vehicle, which is not influenced by the slope of the road or by pitch angle variation.

[0013] Preferably, the camera is a front camera, which is in particular disposed behind a windshield of the motor vehicle, for example directly on the windshield in the interior of the motor vehicle. The front camera then captures the environment in the direction of travel, or in the vehicle longitudinal direction, in front of the motor vehicle. This can in particular imply that a camera axis extending perpendicularly to the plane of the image sensor is oriented parallel to the vehicle longitudinal axis.

[0014] Preferably, the camera is a video camera, which is able to provide a plurality of images (frames) per second. The camera can be a CCD camera or a CMOS camera.

[0015] If a sequence of images (i.e. video) is provided, the detection algorithm can be applied to each image or to each n-th image. The target vehicle can then be tracked over the sequence of images, wherein the width of the target vehicle is determined according to the method according to the invention for each image or each n-th image.

[0016] The camera system can be a collision warning system, by means of which a degree of risk with respect to a collision of the motor vehicle with the target vehicle is determined and a warning signal is output depending on the current degree of risk, by which the risk of collision is signaled to the driver. Additionally or alternatively, the camera system can also be formed as an automatic brake assistance system, by means of which brake interventions are automatically performed depending on the degree of risk. Therein, for example, the time to collision and/or a distance of the target vehicle from the motor vehicle can be used as the degree of risk.

[0017] Preferably, a gradient image - i.e. an edge image - of the ROI is generated such that the outer edges are identified based on the gradient image. For providing the gradient image, for example, the Sobel operator can be used. In order to determine the position of the edges more accurately, NMS (non-maximum suppression) filtering can also be used. Based on such a gradient image, the outer edges of the wheels, i.e. the left edge of the left wheel as well as the right edge of the right wheel, can be identified particularly reliably and without much effort, without the image having to be subjected to pattern recognition.

[0018] In an embodiment, it is provided that a gradient direction is respectively determined for pixels of the gradient image, which specifies a direction of a brightness change at the respective pixel. Thus, the gradient direction specifies in which direction the brightness (for example from black to white) changes in the image at the respective pixel. The outer edges can then be identified based on the gradient direction of the pixels. Here, filtering of the pixels with respect to the gradient direction can be performed. Those pixels at which the brightness changes from bright to dark (from left to right) are interpreted as the left edge of the left wheel. By contrast, those pixels at which the brightness changes from dark to bright (from left to right) are interpreted as the right edge of the right wheel. By such filtering of the gradient direction of the pixels, the outer edges can be precisely detected.

[0019] If the width of the target vehicle in the image is known, a real width of the target vehicle can also be determined depending on this width in the image. Thereby, the relative position of the target vehicle with respect to the motor vehicle, and thus the current risk of collision, can be estimated even more accurately by the image processing device.
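The relation described in paragraph [0012] - a width in pixels plus camera calibration parameters yields a metric distance - corresponds to the standard pinhole-camera model. The following sketch is illustrative only and not part of the patent; the function name, the parameter names and the use of a focal length in pixels as the sole calibration parameter are assumptions:

```python
def estimate_distance_m(width_px: float, real_width_m: float,
                        focal_px: float) -> float:
    """Pinhole-camera relation: distance = f * W_real / w_image.

    `focal_px` is the focal length expressed in pixels, taken from the
    camera calibration (hypothetical parameter name, not from the patent).
    """
    if width_px <= 0:
        raise ValueError("width in pixels must be positive")
    return focal_px * real_width_m / width_px
```

For example, a vehicle with a real width of 1.8 m that appears 90 pixels wide to a camera with a focal length of 1000 pixels would be estimated at a distance of 20 m.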
[0020] Preferably, therein, a width of a lane in the image (in pixels) is determined at the level of the depicted target vehicle by the image processing device. The real width of the target vehicle can then be determined depending on the width of the lane in the image. If the width of the target vehicle in the image and additionally the width of the lane in the image at the level of the target vehicle are known, the real width of the target vehicle can be determined by taking the real width of the lane into account. Namely, the ratio of the width of the lane in the image to the real width of the lane is equal to the ratio of the width of the target vehicle in the image to the real width of the target vehicle. The real width of the target vehicle can be calculated from this relation.

[0021] The image processing device can identify longitudinal markers of the lane or of the roadway in the image, and the width of the lane (in pixels) can be determined as the distance between the longitudinal markers in the image at the level of the target vehicle. Thus, the width of the lane in the image can be determined without much effort.

[0022] Therein, the width of the lane (in pixels) is preferably determined at the level of, and thus in extension of, the bottom of the wheels in the image. Namely, the bottom of the wheels of the target vehicle defines the lower edge of the above-mentioned partial region (bounding box). In order to determine the real width of the target vehicle particularly precisely, the width of the lane also has to be determined at the correct position in the image. To ensure this, in this embodiment, the width of the lane is determined along a line which coincides with the bottom of the wheels of the target vehicle in the image.

[0023] In order to calculate the real width of the target vehicle based on the above-mentioned ratio, the real width of the lane can also be determined on the basis of the image, in particular based on the longitudinal markers. The real width of the lane can be determined at a very close distance from the ego vehicle, and thus not at the level of the target vehicle, but directly in front of the ego vehicle. This is particularly advantageous, since the real width of the lane can be determined without much effort at close distance. Namely, any known lane detection system having a limited range of detection can be used to determine the real width of the lane, and the estimation is not influenced by an unknown pitch angle of the ego vehicle or the slope of the road.

[0024] In addition, the invention relates to a camera system for a motor vehicle, wherein the camera system is formed for performing a method according to the invention.

[0025] A motor vehicle according to the invention, in particular a passenger car, includes a camera system according to the invention.

[0026] The preferred embodiments presented with respect to the method according to the invention, and the advantages thereof, correspondingly apply to the camera system according to the invention as well as to the motor vehicle according to the invention.

[0027] Further features of the invention are apparent from the claims, the figures and the description of figures. All of the features and feature combinations mentioned above in the description, as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone, are usable not only in the respectively specified combination, but also in other combinations or else alone.

[0028] Now, the invention is explained in more detail based on a preferred embodiment as well as with reference to the attached drawings.

[0029] There show:

Fig. 1 in schematic illustration a motor vehicle with a camera system according to an embodiment of the invention, wherein a target vehicle is in front of the motor vehicle;

Fig. 2 in schematic illustration an image, in which the target vehicle is detected;

Fig. 3 in schematic illustration a region of interest of the image according to Fig. 2;

Fig. 4 in schematic illustration a gradient image;

Fig. 5 in schematic illustration the gradient image after filtering with respect to a gradient direction;

Fig. 6 in schematic illustration the image according to Fig. 2, wherein the width of the target vehicle has been more precisely determined; and

Fig. 7 in schematic illustration an image, based on which a real width of the target vehicle is determined.

[0030] A motor vehicle 1 shown in Fig. 1 is a passenger car in the embodiment. The motor vehicle 1 includes a camera system 2, which for example serves as a collision warning system, by means of which the driver of the motor vehicle 1 can be warned of a risk of collision. Additionally or alternatively, the camera system 2 can be formed as an automatic brake assistance system, by means of which the motor vehicle 1 can be automatically decelerated due to a detected risk of collision.

[0031] The camera system 2 includes a camera 3, which is formed as a front camera. The camera 3 is disposed in the interior of the motor vehicle 1 on a windshield of the motor vehicle 1 and captures an environmental region 4 in front of the motor vehicle 1. For example, the camera 3 is a CCD camera or else a CMOS camera. The camera 3 is additionally a video camera, which provides a sequence of images of the environmental region 4 and communicates it to an image processing device not illustrated in the figures.

[0032] This image processing device and the camera 3 can optionally also be integrated in a common housing.
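The ratio stated in paragraph [0020] reduces to a one-line computation. The following is an illustrative sketch, not part of the patent; the function and parameter names are assumptions:

```python
def real_vehicle_width_m(vehicle_width_px: float,
                         lane_width_px: float,
                         lane_width_m: float) -> float:
    """Apply the relation of paragraph [0020]:
    lane_px / lane_m == vehicle_px / vehicle_m, with both pixel widths
    measured on the same image row (the bottom of the wheels)."""
    if lane_width_px <= 0:
        raise ValueError("lane width in pixels must be positive")
    return vehicle_width_px * lane_width_m / lane_width_px
```

For example, if the lane is 3.5 m wide in reality and 175 pixels wide in the image at the level of the target vehicle, a vehicle 90 pixels wide in the image has a real width of 1.8 m.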
[0033] As is apparent from Fig. 1, a target vehicle 6 is on a roadway 5 in front of the motor vehicle 1. The image processing device is set up such that it can apply a detection algorithm to the images of the environmental region 4, which is adapted to detect target vehicles 6. This detection algorithm can for example be stored in a memory of the image processing device and for example be based on the AdaBoost algorithm. Such detection algorithms are already prior art and are not described here in more detail. If the target vehicle 6 is detected, it can be tracked over time by the image processing device. For this too, corresponding tracking algorithms are known.

[0034] The tracking of the target vehicle 6 over time means that the target vehicle 6 is detected in each image or in each n-th image of the image sequence and thus its current position in the respective image is known. Thus, the target vehicle 6 is tracked over the sequence of images. The accuracy of the detection of the target vehicle 6 in the individual images thus immediately affects the accuracy of tracking.

[0035] An exemplary image 7 of the camera 3 is shown in Fig. 2. The detection algorithm detects the target vehicle 6’ and outputs a partial region 8 (bounding box) as a result, in which the target vehicle 6’ is depicted. The detection of the target vehicle 6’ thus includes that a width w as well as a height h of the target vehicle 6’ are determined, which are indicated by the partial region 8. In addition, the position x, y of the partial region 8 in the image 7 is also known. As is apparent from Fig. 2, the partial region 8 indicates the width w of the target vehicle 6’ only inaccurately. Thus, the relative position of the target vehicle 6 with respect to the motor vehicle 1 can also only be determined inaccurately, which in turn adversely affects the accuracy of tracking.

[0036] In order to improve the accuracy of determining the width of the target vehicle 6’ in the image 7, a region of interest ROI is determined within the partial region 8, in which wheels 9, 10 of the rear axle of the target vehicle 6’ are depicted. Therein, the ROI can be fixedly preset with respect to the partial region 8 and in particular represent a lower area of the partial region 8.

[0037] In order to determine the exact width of the depicted target vehicle 6’, exclusively the ROI is then evaluated. It is illustrated in enlarged manner in Fig. 3. In this ROI, the respective outer edges 11, 12 of the wheels 9, 10 are now searched for. Therein, the outer edges 11, 12 are the left edge 11 of the left wheel 9 as well as the right edge 12 of the right wheel 10 in the ROI. In order to detect these outer edges 11, 12, a gradient image 13 is generated from the ROI, as shown in Fig. 4. In order to generate the gradient image 13, for example, the Sobel operator can be applied to the ROI. Alternatively, another known edge filter can also be used. Optionally, NMS (non-maximum suppression) filtering can also be used to refine the position of the edges in the gradient image 13.

[0038] Thus, the gradient image 13 represents an edge image, in which exclusively the edges of the target vehicle 6’ in the ROI are depicted. A gradient direction is respectively associated with each pixel of the gradient image 13, which specifies the direction of the brightness change at the respective pixel. The gradient direction therefore defines in which direction the brightness changes at the respective pixel. Now, the pixels can be grouped depending on the respective gradient direction, wherein for example nine intervals of values can be defined for the gradient direction. The pixels are then grouped into, or associated with, these intervals. At the left outer edge 11, the brightness changes from bright to dark (from left to right). Thus, the pixels of the left outer edge 11 have a gradient direction of 0°. By contrast, the pixels of the right outer edge 12 have a gradient direction of 180°. If all of the pixels are now filtered out which have a gradient direction other than 0° and 180°, a filtered edge image 13’ according to Fig. 5 is provided, in which only the outer edges 11, 12 are depicted. The new width w’ of the target vehicle 6’ is now determined as the distance between the outer edges 11, 12.

[0039] As is apparent from Fig. 6, a precise bounding box 14 with the width w’ can now be provided, by which the position of the target vehicle 6’ in the image is precisely identified. Therein, a lower edge 15 of the bounding box 14 can coincide with the bottom of the wheels 9, 10.

[0040] If the exact width w’ of the target vehicle 6’ in the image 7 is known, the real width of the target vehicle 6 can also be precisely determined. With reference now to Fig. 7, a width 16 of a lane 17 in the image 7 (in pixels), on which the target vehicle 6’ is located, is determined. The width 16 of the lane 17 (in pixels) is determined at the level of the lower edge 15 of the bounding box 14. This means that the width 16 in the image 7 is determined along a line which coincides with the lower edge 15 and thus with the bottom of the wheels 9, 10.

[0041] In order to be able to determine the width 16 of the lane 17 in the image 7, longitudinal markers 18, 19 of the roadway 5 are identified by the image processing device. The width 16 then represents the distance between the longitudinal markers 18, 19 at the level of the lower edge 15.

[0042] In determining the real width of the target vehicle 6, it is assumed that the real width of the lane 17 remains constant and thus is equal at the level of the target vehicle 6 and at the level of the motor vehicle 1. The real width of the lane 17 can also be determined on the basis of the longitudinal markers 18, 19. The real width of the lane 17 can be determined at a very close distance from the ego vehicle 1, and thus not at the level of the target vehicle 6, but directly in front of the ego vehicle 1.

[0043] The real width (in meters) of the target vehicle 6 then results from the ratio of the real width of the lane 17 (in meters) to the width 16 of the lane 17 in the image 7 (in pixels), multiplied by the width w’ of the target vehicle 6’ in the image (in pixels).
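The gradient-direction filtering of paragraphs [0037] and [0038] can be sketched as follows. This is a simplified illustration, not the patented implementation: a central-difference filter stands in for the Sobel operator, the magnitude threshold is an arbitrary assumption, and only bright-to-dark transitions (the patent's 0° direction, left wheel edge) and dark-to-bright transitions (180°, right wheel edge) are kept, while all other pixels are discarded:

```python
import numpy as np

def wheel_edge_width_px(roi: np.ndarray, mag_thresh: float = 10.0) -> int:
    """Estimate the vehicle width from the outer wheel edges in a
    grayscale ROI, as the distance between the outermost columns that
    contain a strong horizontal brightness transition."""
    roi = roi.astype(float)
    gx = np.zeros_like(roi)
    # Horizontal central difference as a stand-in for the Sobel operator.
    gx[:, 1:-1] = roi[:, 2:] - roi[:, :-2]
    falling = gx < -mag_thresh   # bright -> dark (left to right): left edge
    rising = gx > mag_thresh     # dark -> bright (left to right): right edge
    cols = np.where(falling.any(axis=0) | rising.any(axis=0))[0]
    if cols.size < 2:
        return 0                 # no edge pair found
    # Width w' = distance between the outermost edge columns.
    return int(cols[-1] - cols[0])
```

On a synthetic ROI with a bright background and two dark wheel regions, the function returns the pixel distance between the outermost transitions; in a real system, a Sobel gradient with non-maximum suppression would take the place of the difference filter.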
[0044] In order to be able to detect the longitudinal markers 18, 19 at the level of the lower edge 15, image regions 20, 21 to the left and to the right of the lower edge 15 are defined. These image regions 20, 21 are defined at those positions at which the longitudinal markers 18, 19 are presumably also located. For this, information from a detection module for detecting longitudinal markers can also be used. The image regions 20, 21 can then be subjected to additional filtering in order to be able to detect the edges of the longitudinal markers 18, 19 more accurately. For example, a histogram filter and/or a Gaussian filter can be used for this.

characterized in that
depending on the width (w’) of the target vehicle (6’) in the image (7), a real width of the target vehicle (6) is determined by the image processing device.

5. Method according to claim 4,
characterized in that
a width (16) of a lane (17) in the image (7) at the level of the depicted target vehicle (6’) is determined by the image processing device and the real width of the target vehicle (6) is determined depending on the width (16) of the lane (17) in the image (7).
3. Method according to claim 2, characterized in that a gradient direction is respectively determined for pixels of the gradient image (13), which specifies a direction of a brightness change at the respective pixel, and the outer edges (11, 12) are identified based on the gradient direction of the pixels.

4. Method according to any one of the preceding claims, characterized in that, depending on the width (w’) of the target vehicle (6’) in the image (7), a real width of the target vehicle (6) is determined by the image processing device.

5. Method according to claim 4, characterized in that a width (16) of a lane (17) in the image (7) at the level of the depicted target vehicle (6’) is determined by the image processing device and the real width of the target vehicle (6) is determined depending on the width (16) of the lane (17) in the image (7).

6. Method according to claim 5, characterized in that longitudinal markers (18, 19) of the lane (17) in

10. Motor vehicle (1), comprising a camera system (2) according to claim 9.

Claims

1. Method for determining a width (w’) of a target vehicle (6) by means of a camera system (2) of a motor vehicle (1), wherein an image (7) of an environmental region (4) of the motor vehicle (1) is provided by a camera (3) of the camera system (2) and the width (w’) of the target vehicle (6’) in the image (7) is determined by an image processing device of the camera system (2), including the steps of:

- detecting the target vehicle (6’) in the image (7) by means of a detection algorithm by the image processing device, wherein the detection includes that a partial region (8) of the image (7) is determined by the detection algorithm, in which the target vehicle (6’) is depicted in its entirety,
- determining a region of interest (ROI) within the partial region (8) by the image processing device such that wheels (9, 10) of a common rear axle of the target vehicle (6’) are depicted in the region of interest (ROI),
- identifying respective outer edges (11, 12) of the wheels (9, 10) depicted within the region of interest (ROI), and
- determining the width (w’) of the depicted target vehicle (6’) as the distance between the outer edges (11, 12).

2. Method according to claim 1, characterized in that a gradient image (13) of the region of interest (ROI) is generated and the outer edges (11, 12) are identified based on the gradient image (13).

3. Method according to claim 2, characterized in that a gradient direction is respectively determined for pixels of the gradient image (13), which specifies a direction of a brightness change at the respective pixel, and the outer edges (11, 12) are identified based on the gradient direction of the pixels.

(6), a real width of the lane (17) is determined on the basis of the image (7), in particular based on the longitudinal markers (18, 19).

9. Camera system (2) for a motor vehicle (1), including a camera (3) for providing an image (7) of an environmental region (4) of the motor vehicle (1), and including an image processing device for determining a width (w’) of a target vehicle (6’) in the image (7), wherein the image processing device is adapted to perform a method according to any one of the preceding claims.

10. Motor vehicle (1) including a camera system (2) according to claim 9.