
Patentability Search Report

Lane Marker Detection and Annotation via Crowdsourcing
P002393
May 02, 2024
Contents
1. Objective of the Search.......................................................................................3

2. Scope of the Search.............................................................................................3

3. Key features of the invention.............................................................................4

4. Summary of Analysis...........................................................................................5

5. Granted Patents and Published Patent Applications......................................6

5.1 US11551459.............................................................................................................6

5.2 US20180322777A1.....................................................................................................8

5.3 US9336681B2.............................................................................................................9

5.4 JP4567375B2..........................................................................................................11

6. Search Strategy....................................................................................................13

7. Search Log............................................................................................................15

7.1 Patent database search.........................................................................15

7.2 Non Patent/Product search...................................................................18

8. Project Methodology............................................................................................19

9. Disclaimer..............................................................................................................20

© 2024, Effectual Knowledge Services Pvt. Ltd.; All Rights Reserved, Privileged & Confidential 2
1. Objective of the Search
The objective of the search is to technically evaluate the invention within the relevant technical field
as reflected in the prior art literature. The search is conducted to identify and analyze issued
patents/published applications and non-patent literature references that are relevant to the
invention disclosure provided by the client.

2. Scope of the Search


The search was conducted on the following databases:

Patent Databases
- Orbit
- Espacenet
- Google Patents

Non Patent Databases
- Google/Google Scholar
- IEEEXplore
- ResearchGate

3. Key features of the invention
The following key features were derived from the invention disclosure provided by the client:

Key Feature 1
The invention discloses lane marker detection and annotation via crowdsourcing.

Key Feature 2
Frames for which lane markings are missed, or are detected with low confidence, by the perception system of a specific AD (autonomous driving) vehicle are collected.

Key Feature 3
A frame that needs annotation is first converted to a grid image. The cells of the grid image can be split evenly (e.g., 6x6) or customized based on which parts of the lane marker detection have low confidence (more cells are created in those parts to improve the resolution of the annotation).
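The disclosure does not specify an algorithm for the customized split. A minimal Python sketch is given below; the function name, the 2x2 subdivision rule, and the 0.5 confidence threshold are illustrative assumptions rather than details from the disclosure.

```python
import numpy as np

def make_grid_cells(frame_h, frame_w, confidence, base=6, conf_threshold=0.5):
    """Split a frame into a base x base grid and subdivide any cell whose mean
    lane-marker detection confidence is low, so that low-confidence regions
    get finer annotation resolution. Returns (y0, y1, x0, x1) cell boxes."""
    cells = []
    ch, cw = frame_h // base, frame_w // base
    for i in range(base):
        for j in range(base):
            y0, y1 = i * ch, (i + 1) * ch
            x0, x1 = j * cw, (j + 1) * cw
            if confidence[y0:y1, x0:x1].mean() < conf_threshold:
                # Assumed rule: split a low-confidence cell into 2x2 sub-cells.
                ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
                cells += [(y0, ym, x0, xm), (y0, ym, xm, x1),
                          (ym, y1, x0, xm), (ym, y1, xm, x1)]
            else:
                cells.append((y0, y1, x0, x1))
    return cells
```

Under these assumptions an evenly confident frame yields the plain 6x6 grid of 36 cells, while each low-confidence cell contributes three additional cells.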

Key Feature 4
The grid image is then used by websites that use the service provided by the invention for authentication, i.e., for proving that a user is not a robot.

Key Feature 5
The same grid image may be used multiple times. The annotated data is sent back to the cloud service, which redirects it to the AD vehicle that collected the frames.
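The disclosure does not state how annotations from repeated uses of the same grid image are combined before being sent back. A majority vote over users' cell selections is one plausible aggregation rule, sketched here with assumed names:

```python
from collections import Counter

def aggregate_annotations(annotations):
    """annotations: one set of selected cell indices per user who solved the
    same grid image. A cell is kept when a strict majority of users marked it
    as containing a lane marker (majority vote is an assumed rule)."""
    counts = Counter(cell for ann in annotations for cell in ann)
    quorum = len(annotations) / 2
    return {cell for cell, n in counts.items() if n > quorum}
```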

Key Feature 6
The machine learning model is continuously trained locally on the obtained annotated data.

4. Summary of Analysis

Patents and Published Patent Applications (Y = key feature mapped; Y* = partial mapping):

US11551459: Y, Y [Description, Col. 1, Line 0045-0055]; Y, Y* [Description, Col. 1-2, Line 0056-0060 and 0063-0007]

US20180322777A1: Y*, Y, Y* [Detailed Description, Page 1, Para 0012]; Y, Y [Detailed Description, Page 2, Para 0013]

US9336681B2: Y [Summary, Col. 1, Line 44-54]; Y* [Description, Col. 2-3, Line 57-67 and 3-5]; Y* [Description, Col. 4, Line 19-24]

Non Patent Literature

No relevant non-patent results identified.

* indicates partial mapping
5. Granted Patents and Published Patent Applications
5.1 US11551459
Assignee: PLUSAI INC [US]
Inventor: SAGGU INDERJOT SINGH
Filed: June 27, 2022
Title: Ambiguous lane detection event miner
Abstract: A computer system obtains a plurality of road images captured by one or more cameras
attached to one or more vehicles. The one or more vehicles execute a model that facilitates driving of the
one or more vehicles. For each road image of the plurality of road images, the computer system
determines, in the road image, a fraction of pixels having an ambiguous lane marker classification. Based
on the fraction of pixels, the computer system determines whether the road image is an ambiguous image
for lane marker classification. In accordance with a determination that the road image is an ambiguous
image for lane marker classification, the computer system enables labeling of the image and adds the
labeled image into a corpus of training images for retraining the model.

[Relevant Figure]

[Relevant Text]
[Abstract,Col.1]
A computer system obtains a plurality of road images captured by one or more cameras attached
to one or more vehicles. The one or more vehicles execute a model that facilitates driving of the one
or more vehicles. For each road image of the plurality of road images, the computer system
determines, in the road image, a fraction of pixels having an ambiguous lane marker classification.
Based on the fraction of pixels, the computer system determines whether the road image is an
ambiguous image for lane marker classification. In accordance with a determination that the road
image is an ambiguous image for lane marker classification, the computer system enables labeling of
the image and adds the labeled image into a corpus of training images for retraining the model.
[Description, Col 1,Line-0045-0055]
Currently, fleet operators often collect large amounts of data from individual vehicles in order to learn
from existing road and traffic conditions. Typically, this data is sent from the vehicles to a remote
server for storage and analysis (e.g., at a later time). Transmitting such large amounts of data (e.g.,
HD video or LIDAR data) from many vehicles (e.g., over a cellular data network) consumes valuable
communication bandwidth and is prohibitively expensive. Furthermore, a lot of the data may be repetitive,
typical, and do not represent rare events from which autonomous driving models can learn.
[Description, Col. 1-2,Line-0056-0060 and 0063-0007]
Accordingly, there is a need for improved systems, methods, and devices that provide a more efficient
mechanism for collecting, monitoring, and learning from road condition data captured by a fleet of
vehicles, such as data pertaining to lane markers (e.g., lane markings).
a computer system (e.g., an event miner) determines a ratio of pixels having an ambiguous lane
marker classification in a road image collected by a vehicle, identifies “interesting events” associated
with lane marker detection in the road image, and determines whether the road image is an
ambiguous image for lane marker classification. In accordance with a determination that the road
image is an ambiguous image for lane marker classification, the computer system enables labeling of
the image and adds the labeled image into a corpus of training images for retraining a model for
autonomous driving.
[Description, Col.7,Line-0003-0014]
In some embodiments, deep learning techniques are applied by the vehicles 102, server(s) 104, or
both to process the vehicle data 112. For example, in some embodiments, after image data are
collected by the cameras of one of the vehicles 102, the image data is processed using an object
detection model to identify objects (e.g., road features including, but not limited to, vehicles, lane
lines, lane markers (e.g., lane markings), shoulder lines, road dividers, traffic lights, traffic signs,
road signs, cones, a pedestrian, a bicycle, and a driver of the first vehicle) in the vehicle driving
environment 100.
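The mining logic quoted above (a ratio of ambiguous pixels deciding whether an image is queued for labeling and retraining) can be sketched in a few lines. The ambiguity band and the 5% fraction threshold below are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

AMBIGUITY_BAND = (0.3, 0.7)  # assumed: scores in this band count as ambiguous
FRACTION_THRESHOLD = 0.05    # assumed: flag the image above 5% ambiguous pixels

def is_ambiguous_image(lane_scores):
    """lane_scores: per-pixel lane-marker classification scores in [0, 1].
    Returns True when the fraction of ambiguous pixels exceeds the threshold,
    i.e. the image should be labeled and added to the retraining corpus."""
    lo, hi = AMBIGUITY_BAND
    ambiguous = (lane_scores > lo) & (lane_scores < hi)
    return bool(ambiguous.mean() > FRACTION_THRESHOLD)
```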

5.2 US20180322777A1
Inventor: TITZE ANDREAS [DE]; ORTMANN STEFAN [DE]
Assignee: VOLKSWAGEN AG [DE]
Filed: November 09, 2016
Title: Method and system for creating a lane-accurate occupancy grid map for lanes

Abstract: A method for creating a lane-accurate occupancy grid map for lanes. In at least one mobile
device, an environment is sensed by a camera and evaluated by an evaluating unit. The evaluating unit
defines a section in the environment and determines a lane in the section. Objects in the environment or in
the section are also detected and classified by the evaluating unit. The object information, section
information, time information, and the lane information are transmitted to a map-creating device, which
creates a lane-accurate occupancy grid map for the lane therefrom. The lane-accurate occupancy grid
map can be transmitted back to the mobile device. Also disclosed is an associated system.
[Relevant Text]
[Detailed description, Page 1,Para-0012]
A method for creating a lane-accurate occupancy grid map for lanes. In at least one mobile device,
an environment is sensed by a camera and evaluated by an evaluating unit. The evaluating unit
defines a section in the environment and determines a lane in the section. Objects in the
environment or in the section are also detected and classified by the evaluating unit. The object
information, section information, time information, and the lane information are transmitted to a map-
creating device, which creates a lane-accurate occupancy grid map for the lane therefrom. The lane-
accurate occupancy grid map can be transmitted back to the mobile device. Also disclosed is an
associated system.
[Detailed Description, Page 2, Para 0013]
Provision is made for the mobile device to be a motorized transportation vehicle and for the map
creation device to be a central server with which the motorized transportation vehicle
communicates via a wireless communication connection. Further mobile devices are then further
motorized transportation vehicles, for example, which likewise communicate with the central server.
However, provision may also be made for the map creation device to be integrated in the mobile
device.
[Detailed Description, Page 1, Para 0011]
capturing an image sequence of an environment of the at least one mobile device by at least one
camera, identifying and classifying objects in the captured image sequence by an evaluation unit,
determining object positions of the objects relative to the at least one mobile device by the
evaluation unit.
[Description, Page 2, Para 0021]
The camera 4 captures a sequence of images of the environment 12 of the motorized transportation
vehicle 50. The captured sequence of images is passed from the camera 4 to the evaluation unit 5.
The evaluation unit 5 defines a section 13 from the sequence of images. This section 13 has a
predefined size.

[Detailed Description, Page 1, Para 0012]
A system for creating a lane-accurate occupancy grid map for lanes, the system comprising: at least one mobile device comprising: at least one camera for capturing an image sequence of an environment of the at least one mobile device, an evaluation unit, and a transmitting device for communicating with a map creation device. A driver or an automated controller can thus use the traffic density provided in this manner to carry out route planning.

5.3 US9336681B2
Inventor: ANNAPUREDDY KOUSHIK [FI]; FINLOW-BATES KEIR [FI]
Assignee: QUALCOMM INC [US]
Filed: February 27, 2015
Title: Navigation using crowdsourcing data
Abstract: Method, computer program product, and apparatus for providing navigation guidance to
vehicles are disclosed. The method may include receiving crowdsourcing data from a plurality of vehicles,
determining traffic data corresponding to a road using the crowdsourcing data, predicting traffic condition
of each lane of the road using the traffic data, and providing navigation guidance to a vehicle in
accordance with the traffic condition of each lane of the road. The crowdsourcing data includes on board
diagnostics data (OBD) correlated with time stamps and GPS locations of a vehicle, where the on board
diagnostics data includes odometer information, speedometer information, fuel consumption information,
steering information, and impact data.
[Relevant Image]

[Relevant Text]
[Summary, Col.1, Line-44-54]
The method may include receiving crowdsourcing data from a plurality of vehicles, determining
traffic data corresponding to a road using the crowdsourcing data, predicting traffic condition of
each lane of the road using the traffic data, and providing navigation guidance to a vehicle in
accordance with the traffic condition of each lane of the road. The crowdsourcing data includes on board
diagnostics data (OBD) correlated with time stamps and GPS locations of a vehicle, where the on board
diagnostics data includes odometer information, speedometer information, fuel consumption information,
steering information, and impact data.
[Description, Col 2, Line 58-65]
The processing logic comprises logic configured to send crowdsourcing data to a server, logic
configured to receive navigation guidance from the server, and logic configured to display the
navigation guidance on a display. The logic configured to send crowdsourcing data to a server
further comprises logic configured to receive on board diagnostics data.
[Description, Col 4, Line 19-24]
...from an on-board diagnostic module of a vehicle. The navigation controller 204 may then transmit the crowdsourcing data to a database associated with the crowdsourcing server 102. The crowdsourcing server 102 may be configured to data-mine the crowdsourcing data, constructing tables of road segments.
[Description, Col 2-3, Line 57-67 and 3-5]
a mobile station comprises a navigation controller including processing logic. The processing
logic comprises logic configured to send crowdsourcing data to a server, logic configured to receive
navigation guidance from the server, and logic configured to display the navigation guidance on a display.
The logic configured to send crowdsourcing data to a server further comprises logic configured to
receive on board diagnostics data from an on-board diagnostic module of a vehicle, logic configured
to receive GPS locations of the vehicle from a GNSS module, and logic configured to correlate the on
board diagnostics data and GPS locations of the vehicle with time stamps to generate the
crowdsourcing data.
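The correlation described above (OBD samples paired with GPS locations via time stamps to form crowdsourcing records) can be sketched as follows; the record layout, the function name, and the one-second tolerance are assumptions for illustration, not details from the patent.

```python
from bisect import bisect_left

def correlate(obd_samples, gps_fixes, max_skew=1.0):
    """obd_samples: list of (timestamp, obd_dict). gps_fixes: time-sorted list
    of (timestamp, lat, lon). Pairs each OBD sample with the nearest GPS fix
    within max_skew seconds to form time-stamped crowdsourcing records."""
    times = [t for t, _, _ in gps_fixes]
    records = []
    for ts, obd in obd_samples:
        i = bisect_left(times, ts)
        # Candidates: the fix just before and the fix just after ts.
        best = min(gps_fixes[max(0, i - 1):i + 1],
                   key=lambda f: abs(f[0] - ts), default=None)
        if best is not None and abs(best[0] - ts) <= max_skew:
            records.append({"t": ts, "lat": best[1], "lon": best[2], "obd": obd})
    return records
```

OBD samples with no GPS fix within the tolerance are simply dropped, which is one reasonable way to handle gaps in either stream.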

5.4 JP4567375B2
Inventor: SATO KEIJI
Assignee: CLARION CO LTD
Filed: May 25, 2004
Title: Auxiliary information presentation device

Abstract: PROBLEM TO BE SOLVED: To reduce lowering of visibility of a monitor due to various kinds of
lines superimposed.
SOLUTION: This device is provided with a camera for photographing a rear side of a vehicle, the monitor
for displaying the photographed video, a predicted locus calculation means for calculating a predicted
locus of the vehicle based on an angle of a steering wheel, a parking space recognition means for
recognizing a parking space in image, and a positional relation decision means for deciding whether or not
the positional relation of the recognized parking space and the vehicle satisfies a prescribed condition. A
function for displaying vehicle body direction information for showing the direction of a vehicle body by
further superimposing the information on the video is imparted to the device when the positional relation is
decided to satisfy the prescribed condition.
[Relevant Text]
[Description, Page 1, Para 0006]
An auxiliary information presentation device according to an aspect of the present invention that solves
the above-described problem is provided with a camera that captures the rear of a vehicle, a
monitor that displays captured images, and an expected trajectory of the vehicle based on the
angle of the steering wheel. A predicted trajectory calculating means for calculating and displaying
the calculated expected trajectory superimposed on the video, a parking space recognizing means
for recognizing a parking space, and a positional relationship between the recognized parking
space and the vehicle Positional relationship determining means for determining whether or not a
predetermined condition is satisfied, and when the positional relationship is determined to satisfy
the predetermined condition, vehicle direction information indicating a vehicle body direction is
further added to the video It is displayed in a superimposed manner.
[Description, Page 2, Para 0011]
When the auxiliary information presentation device according to the present invention is
employed, information indicating the direction of the vehicle, which is less necessary to be displayed when
the vehicle is away from the parking space, is not actively displayed, and the positional relationship is
made minute by approaching each other. In order to display the above information that is highly
necessary for adjustment, the number of lines constituting the driving assistance information
displayed on the monitor tends to be reduced. For this reason, the deterioration of the visibility of
the monitor by the lines as described above is reduced, and the driver can view the monitor image
comfortably.
[Description, Page 5, Para 0031]
When the vehicle width extension line display process according to the present embodiment is performed,
the vehicle width extension line, which is less necessary to be displayed when the vehicle 100 is away
from the parking space, is not intentionally displayed. The number of lines forming auxiliary
information is reduced. For this reason, the area hidden in the line as described above in the video
imaged by the camera is reduced, and the visibility is improved.
[Description, Page 5, Para 0029]
FIG. 5 is a flowchart showing the vehicle width extension line display process of the present embodiment.
This vehicle width extension line display process is a process in which the vehicle width extension
line is displayed on the monitor 16 as driving assistance information so as to be superimposed on
the video imaged by the camera when the vehicle 100 approaches the parking space.
[Description, Page 5, Para 0027]
When the driving assistance information is displayed on the monitor 16, the control unit 10
determines whether a pulse signal is output from the steering angle detection sensor 18, that is,
whether the angle of the steering wheel has changed (S12). When it is determined that the angle of
the steering wheel has changed (S12: YES), since the expected trajectory changes, the control unit
10 returns to the process of S8 and recalculates the expected trajectory. When it is determined that
the angle of the steering wheel has not changed (S12: NO), the control unit 10 performs the determination
process of S12 again after a predetermined timing.
[Description, Page 5, Para 0036]
When at least one straight line is recognized in the process of S22, the control unit 10 determines whether
there is a recognized straight line whose slope a is larger than 0 (S23). When it is determined that there
is a recognized straight line with an inclination a greater than 0 (S23: YES), the control unit 10
defines a straight line with an inclination a greater than 0 as a parking space in the vehicle width
direction. The process proceeds to S26. Further, when it is determined that no recognized straight line
has an inclination a greater than 0 (S23: NO), the control unit 10 regards that there is no straight line that
defines the parking space in the vehicle width direction, and S21. Return to the process. Here, taking FIG.
6 described later as an example, in this figure, white lines w1 and w3 are displayed in the area A1.
Here, since the slope a of the white line w1 is a positive slope, the white line w1 is considered to be
able to define a parking space in the vehicle width direction. Further, since the slope a of the white
line w3 is negative, the white line w3 is regarded as not defining a parking space in the vehicle width
direction.

6. Search Strategy
The following keywords and their semantic variants were used for searching relevant patents and
published applications:

Keywords
VEHICLE / AUTOMOBILE / CAR / AUTONOMOUS
LANE / ROUTE / PATHWAY/TRACK/ROAD
CAMERA / IMAGE SENSOR/CAMCORDER/CAM/IMAGING APPARATUS
CROWDSOURCING/SERVER
GRID/FRAMES/PIXELS/MATRIX
ANNOTATION/LABELING/REMARK
MARK/SIGN/SYMBOL/INDICATION/TRACE
COLLECT/GATHER/ACCUMULATE
RAW DATA/SOURCE DATA/PRIMARY DATA
AUTHENTICATION/AUTHORIZATION/VERIFICATION/VALIDATION

The following US classes were identified for searching relevant patents and published patent applications:
382 IMAGE ANALYSIS
382/104 Vehicle or traffic control (e.g., auto, bus, or train):
348 TELEVISION
348/148 Vehicular:
382/103 Target tracking or detecting:
701 DATA PROCESSING: VEHICLES, NAVIGATION, AND RELATIVE LOCATION
701/36 Vehicle subsystem or accessory control:

The following IPC/CPC classes were identified for searching relevant patents and published patent
applications:

IPC/CPC Classes Definition


G PHYSICS
G06 COMPUTING; CALCULATING; COUNTING (score computers for games
A63B71/06, A63D15/20, A63F1/18; combinations of writing implements with
computing devices B43K29/08)
G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
G06N3/00 Computer systems based on biological models
G06N3/0464 Convolutional networks [CNN, ConvNet]

G06N3/084 Back-propagation, learning methods

G06N3/09 Supervised learning


G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
G06V10/00 Arrangements for image or video recognition or understanding (character recognition in images or video)
G06V10/764 using classification, e.g. of video objects
G06V10/774 Generating sets of training patterns

7. Search Log
Based on the keywords, IPC classes, CPC classes and US classes mentioned in 'Search Strategy', the
following key strings and combinations thereof were used for searching relevant patents and published
patent applications. Please note that some of the patent and/or non-patent databases below do not
support key strings based on complex logic structure.
7.1 Patent database search

Source: Orbit

1. Keyword based (21 hits):
((LANE 1D TO 1D MARK+) AND (ANNOTATI+) AND (CROWDSOURCING))/TI/AB/CLMS/TX

2. Keyword based (361 hits):
((VEHICLE+ OR AUTOMOBILE+ OR CAR+ OR DRIVE+) AND (DISPLAY+ OR SCREEN+) AND (SENSOR+ OR CAMERA+) AND (LANE+ OR EDGE+ OR RAIL+ OR CURB+ OR PAVEMENT+) AND (MAP+ OR SATELLITE+ OR THEORETICAL+ OR GPS) AND (WHEEL+ OR STEERING+) AND (INTENT) AND (TURN+ AND ANGLE+) AND (INDICATOR+ OR BLINKER+))/TI/AB/CLMS (((VEICHLE? OR AUTOMO+ OR AUTONOMOUS OR CAR) AND (COLLECT+ OR ACCUMULAT+) AND (DATA OR INFORMATION) AND (SEND+ OR FORWARD+) AND (CROWDSOURCING))/TI/AB/CLMS/DESC/ODES AND ((HUMAN OR USER) AND (VERIFICATION OR AUTHENTICAT+) AND (FRAMES OR GRID))/TI/AB/CLMS/DESC/ODES)

3. Keyword based (167 hits):
(((VEICHLE? OR AUTOMO+ OR AUTONOMOUS OR CAR) AND (COLLECT+ OR ACCUMULAT+) AND (DATA OR INFORMATION) AND (SEND+ OR FORWARD+) AND (CROWDSOURCING))/TI/AB/CLMS/DESC/ODES AND ((HUMAN OR USER) AND (VERIFICATION OR AUTHENTICAT+) AND (FRAMES OR GRID))/TI/AB/CLMS/DESC/ODES AND (ANNOTAT+)/TI/AB/CLMS/DESC/ODES)

4. Keyword based (158 hits):
((((VEICHLE? OR AUTOMO+ OR AUTONOMOUS OR CAR) AND (COLLECT+ OR ACCUMULAT+) AND (DATA OR INFORMATION) AND (LANE? OR ROAD OR ROUTE? OR PATH OR SIGN) AND (SEND+ OR FORWARD+) AND (CROWDSOURCING) AND (ANNOTAT+))/TI/AB/CLMS/DESC/ODES AND ((HUMAN OR USER) AND (VERIFICATION OR AUTHENTICAT+) AND (FRAMES OR GRID))/TI/AB/CLMS/DESC/ODES) ((VEHICLE+ OR AUTOMOBILE+ OR CAR+) AND (DISPLAY+ OR INFOTAINMENT+) AND (ENVIRONMENT+ OR GEOGRAP+ OR CONTENT+ OR OBJECT+) AND (SENSOR+ OR CAMERA+) AND (MAP+ OR SATELLITE+ OR THEORETICAL+) AND (WHEEL+ OR STEERING+) AND (INTENT) AND (WHEEL_ANGLE+) AND (INDICATOR+ OR BLINKER+ OR SIGNAL+))/TI/AB/CLMS/DESC/ODES

5. Keyword based (1 hit):
((VEHICLE? OR AUTOMO+ OR CAR? OR AUTONOMOUS) S (DATA OR INFORMATION) S (LANE? OR PATHWAY OR ROUTE? OR TRACK+) S (CROWDSOURCING) S (ANNOTAT+))/TI/AB/CLMS/DESC/ODES

6. Class based (6 hits):
(((CROWDSOURCING) AND (ANNOTAT+))/TI/AB/OBJ/ADB/ICLM AND (G01C-021/34 OR B60R-011/04 OR B60Y-2200/00)/IPC/CPC)

7. Class based (27 hits):
(((crowdsourcing) and (authenticat+ or authori+ or validat+ or verifi+))/TI/AB/OBJ/ADB/ICLM AND (G06N-003/0464 or G06N-003/084 or G06N-003/09 or G06V-010/764 or G06V-010/774 or G01C-021/36 or G01C-021/3492 or G01C-021/3415)/IPC/CPC)

8. Inventor based (4 hits):
((((crowdsourcing) and (authenticat+ or authori+ or validat+ or verifi+))/TI/AB/OBJ/ADB/ICLM AND (G06N-003/0464 or G06N-003/084 or G06N-003/09 or G06V-010/764 or G06V-010/774 or G01C-021/36 or G01C-021/3492 or G01C-021/3415)/IPC/CPC) AND ((KOUSHIK 1D ANNAPUREDDY)/IN/OIN/INH/INV OR (KEIR 1D FINLOW-BATES)/IN/OIN/INH/INV OR (INDERJOT 1D SINGH 1D SAGGU)/IN/OIN/INH/INV))

9. Assignee based (5 hits):
((((crowdsourcing) and (authenticat+ or authori+ or validat+ or verifi+))/TI/AB/OBJ/ADB/ICLM AND (G06N-003/0464 or G06N-003/084 or G06N-003/09 or G06V-010/764 or G06V-010/774 or G01C-021/36 or G01C-021/3492 or G01C-021/3415)/IPC/CPC) AND (("OPERR TECHNOLOGIES")/PA/OPA/NPAN OR ("OPERR TECHNOLOGY")/PA/OPA/NPAN OR ("OPERR TEKNOLODZHIZ INK")/PA/OPA/NPAN OR ("STRONG FORCE TX PORTFOLIO 2018")/PA/OPA/NPAN))

Total hits analysed (1 OR 2 OR 3 OR 4 OR 5 OR 6 OR 7 OR 8 OR 9): 750

Source: Espacenet

1. (ti all "Lane" OR ti all "pathway" OR ti all "route" OR ti all "track") AND (nftxt all "mark" OR ta all "sign" OR ta all "indication" OR ta all "trace" OR ta all "symbol") AND (nftxt all "detection" OR nftxt all "finding" OR nftxt all "identification") AND (nftxt any "annotation" OR nftxt any "remark" OR nftxt any "labeling") AND nftxt any "crowdsourcing"

2. (ti all "Lane" OR ti all "pathway" OR ti all "route" OR ti all "track") AND (nftxt all "mark" OR ta all "sign" OR ta all "indication" OR ta all "trace" OR ta all "symbol") AND (nftxt all "detection" OR nftxt all "finding" OR nftxt all "identification") AND (nftxt any "annotation" OR nftxt any "remark" OR nftxt any "labeling") AND nftxt any "crowdsourcing" AND nftxt any "Human verification" AND nftxt = "Human" AND nftxt = "verification"

3. (ti all "Lane" OR ti all "pathway" OR ti all "route" OR ti all "track") AND (nftxt all "mark" OR ta all "sign" OR ta all "indication" OR ta all "trace" OR ta all "symbol") AND (nftxt all "detection" OR nftxt all "finding" OR nftxt all "identification") AND (nftxt any "annotation" OR nftxt any "remark" OR nftxt any "labeling") AND nftxt any "crowdsourcing" AND nftxt any "Human verification" AND nftxt = "Human" AND nftxt = "verification" AND ("crowdsourcing")

4. ntxt all "Imaging apparatus" OR ta all "Cam OR Camera" OR ta all "Server OR system OR crowdsourcing"

5. ntxt all "Imaging apparatus" OR ta all "Cam OR Camera" OR ta all "Server OR system OR crowdsourcing" AND ntxt all "annotation" or "labeling" or "remark"

Source: Google Patents

1. Vehicle camera to crowdsourcing and labeling images and send for authentication
2. (VEHICLE OR CAR+) AND (camera or cam or camcorder or imaging apparatus) AND (crowdsourcing)
3. Vehicle collect raw data of lanes and send to the crowdsourcing and labeling of images.

7.2 Non Patent/Product search

Source: IEEE Xplore

1. ("All Metadata":AUTONOMOUS) AND ("All Metadata":VEHICLE) AND ("All Metadata":CAMERA) AND ("All Metadata":CROWDSOURCING) AND ("All Metadata":LABELING)
2. ("All Metadata":VEHICLE) AND ("All Metadata":CAR) AND ("All Metadata":CAMERA) AND ("All Metadata":LANE) AND ("All Metadata":CROWDSOURCING)
3. ("All Metadata":AUTONOMOUS) AND ("All Metadata":CAR) AND ("All Metadata":CAMERA) AND ("All Metadata":CROWDSOURCING) AND ("All Metadata":LABELING)

Source: Google/Google Scholar

1. (vehicle) AND (camera) AND (collect) AND (data) AND (crowdsourcing)
2. (car OR vehicle) AND (camera) AND (data OR information) AND (crowdsourcing)
3. (car OR vehicle) AND (camera OR sensor) AND (steering) AND (wheel)

Source: ResearchGate

1. (vehicle) AND (camera) AND () AND (health)
2. (car OR vehicle) AND (camera) AND (face) AND (droop) AND (attack OR stroke OR paralysis) AND (intervene)
3. (vehicle OR automobile OR car OR machine OR automotive) AND (display OR infotainment) AND (environment OR object) AND (sensor OR camera)

8. Project Methodology
Following are the key steps followed during the course of the project:

Step 1
Understanding the technology of the invention disclosure/provisional patent; analyzing the key features of the invention.
Output: Key features of the invention; keywords for identifying relevant patents and published patent applications.

Step 2
Keyword-based search to identify related patents and published patent applications; IPC/US/CPC class-based search to identify relevant patents and published patent applications, with results restricted by broad keywords; citation search for the identified patents to locate relevant patents and published patent applications, with results restricted by broad keywords.
Output: List of patents and published patent applications that match the parameters set through the various search criteria.

Step 3
Shortlisting patents and published patent applications based on title, abstract, claims and description; identification of patents and published patent applications which potentially map onto the key features of the invention.
Output: Patents and published applications which potentially map onto the key features of the invention.

Step 4
Keyword-based search in non-patent databases; identification of non-patent prior art which potentially maps onto the key features of the invention.
Output: Non-patent prior art which potentially maps onto the key features.

9. Disclaimer
This report has been prepared by Effectual Services engineer(s) and contains analysis and
recommendations based on the understanding of the subject matter by the searcher. The searcher’s
analysis and recommendations are purely technical suggestions and should not be construed as legal
opinions under any circumstances. The client alone reserves the right to make a final decision on the subject
matter as disclosed. Further, Effectual Services may have used one or more third-party databases while
preparing this report, and cannot warrant the accuracy of the information obtained from third-party
databases. Such databases may also include translations, and Effectual Services cannot warrant the
accuracy/authenticity of such translations.

This report is technical in nature, and contains no legal opinion. The characterization, paraphrasing,
quotation, inclusion or omission of any references with regard to this report represents the personal, non-
legal judgment of the one or more technical researchers involved in the preparation of this report.
Therefore, no content of this report, including the characterization, paraphrasing, quotation, inclusion or
omission of any references, should be construed as having any legal weight or being legally dispositive in
any manner. This report is provided without any express or implied warranties, including fitness for a
particular purpose such as patentability, infringement, freedom-to-operate or invalidity opinion. Effectual
Services cannot be held responsible for any damages whether direct or consequential, based on use of
this report.
