
THE NATIONAL INSTITUTE OF ENGINEERING

MYSORE

DEPARTMENT OF MECHANICAL ENGINEERING


2022-23

PROJECT PRESENTATION
OBJECT DETECTION AND DIMENSIONING USING OPTICAL SENSORS

PRESENTED BY:
GAGAN RAJ C
4NI21MAR04
4th Sem, Industrial Automation and Robotics

GUIDED BY:
Dr. K.R PRAKASH
Head of Department & Professor,
Department of Mechanical Engineering
INTRODUCTION

This project presents a method for calculating object dimensions in real time from images. It uses a webcam and a white

paper background to detect the object. After detecting the object, it displays its dimensions in the specified measuring

units in real time. The proposed technique is implemented with a pre-designed system built on the OpenCV software

library. An advantage of this methodology is that it is very useful in the industrial field, as it simplifies human work.


INTRODUCTION TO OPENCV

OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software

library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of

machine perception in commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses

to use and modify the code. The library has more than 2500 optimized algorithms, including a comprehensive set

of both classic and state-of-the-art computer vision and machine learning algorithms.
LITERATURE SURVEY

2.1. Measuring Object Dimensions and its Distances Based on Image Processing Technique by Analysis the Image:

The measurement of object dimensions and distances is essential for many technical applications. The purpose of this study
was to find a suitable mathematical model providing an easy and accurate way to determine an object's dimensions and its
distance. The study was divided into two parts: the first was to determine the dimensions of objects using a digital camera
and a single laser pointer by placing the objects on a black screen at different distances from the camera; the second
step was to determine the ranges of the objects using the images of two laser spots. Results show a relationship between
different zoom and scale factors, and convergence and similarity between the experimental and theoretical values.
2.2. Estimating the object size from static 2D image:

In this paper the real size of an object is estimated from a static digital image. An analysis of selected algorithms used in
the area of computer vision is carried out, including an introduction of software tools efficient enough to measure the
object size. The proposed solution has to comply with the conditions for correct processing of the obtained information,
such as a suitable scene for measuring and correct adjustment of the camera. The final software solution calculates the
estimated size of the measured object using substitution variables created for this purpose. These variables are
based on the acquired images and their subsequent processing.
2.3. Object detection and measurement using stereo images:

This paper presents an improved method for detecting objects in stereo images and calculating the distance, size and
speed of these objects in real time. This is achieved by applying a standard background subtraction method to the
left and right images; subsequently, a method known as subtraction stereo calculates the disparity of detected objects.
This calculation is supported by several additional parameters, such as the centre of the object, the colour distribution and the
object size. The disparity is used to verify the plausibility of detected objects and to calculate the distance and position
of each object. From position and distance the size of the object can be extracted; additionally, the speed of objects can
be calculated when they are tracked over several frames. A dense disparity map produced during the learning phase serves as
an additional means of improving detection accuracy and reliability.
PROBLEM STATEMENT

• Object detection and measurement is a fundamental task in computer vision that aims to
identify and localize objects within an image or video stream, while also providing accurate
measurements of their size, shape, and position.
• This problem statement addresses the challenge of developing algorithms and models that can
accurately detect various objects of interest, or specific objects in industrial settings. The goal
is to enable machines to understand and interpret visual data in real-world scenarios, such as
manufacturing or quality-control processes.
• This can lead to increased efficiency, improved accuracy, and reduced human error in tasks
that require object detection and measurement.
OBJECTIVES

• Firstly, the main goal is to accurately identify and locate objects within an image or video. This is important for
various applications such as surveillance, autonomous navigation, and augmented reality.

• Secondly, object detection aims to classify the identified objects into predefined categories, providing valuable
information about the nature of the objects present.

• Lastly, the objective of object measurement is to estimate the size, shape, and orientation of the detected objects,
enabling further analysis and understanding of the scene.

• Overall, the primary objectives for object detection and measurement are to enhance perception, facilitate decision-
making processes, and enable more advanced and intelligent computer vision systems.

• These measurements are crucial for tasks like object tracking, counting instances of specific objects, and
understanding spatial relationships in a scene. In addition to the above, the hardware in which all the respective
processes take place has to be designed; this can also be described as the test environment.
METHODOLOGY

Image processing is a method to convert an image into digital form and perform operations on it, in order to get an
enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image, such as
a video frame or photograph, and the output may be an image or characteristics associated with that image. Usually an image
processing system treats images as two-dimensional signals while applying established signal processing methods to them.
Image processing basically includes the following three steps:

• Importing the image with an optical scanner or by digital photography.

• Analyzing and manipulating the image, which includes data compression, image enhancement, and spotting patterns that
are not visible to the human eye, as in satellite photographs.

• Output, the last stage, in which the result can be an altered image or a report based on the image analysis.

The project has four stages: (1) capturing the image, (2) the object measurement process, (3) saving the output, (4) displaying the output.
METHODOLOGY FLOW CHART

START
↓
CAMERA INITIALIZATION
↓
CROP IMAGE
↓
GREY SCALING
↓
BLURRING
↓
EDGE IDENTIFICATION
↓
DIMENSION MEASURING
HARDWARE DESIGN

The complete enclosure is made from acrylic sheet of 6 mm thickness, which blocks any natural light from
entering the setup environment.

The camera is placed above, so that the view is clear and the object can rest on a uniform surface
without any disruption. The camera used is a basic 720p, 30 fps webcam; considering this resolution,
good lighting is essential for the camera to pick up crisp images without blurring. Establishing the
camera-to-surface distance is a key factor, as it fixes a baseline aperture and focus value for the
experimental setup. LEDs are used to enhance the brightness and keep the image balanced, eliminating
noise in the image.
3D RENDERED DESIGN MODEL

DIMENSIONS OF THE HARDWARE SETUP (ALL DIMENSIONS ARE IN INCHES).


EXPERIMENTAL SETUP

The environment is created by an acrylic box with a web camera placed above it, gathering images in a
top-view reference, mainly because the orientation and stability of the object are much better when it is
placed flat than in any other orientation. Ambient light is provided for crisp, clear photos, as the acrylic
does not allow outside light to enter the work environment. The main reason is to establish a standard
light intensity so that the aperture of the camera can remain the same and the noise in the image is kept
as low as possible, which aids image processing. The circuit diagram for the connection of the LED
assembly is shown in fig. 10. Once the camera is placed, it is very important to calibrate it to the depth
of the surface on which it will be measuring; in the code this is expressed as a scale value that acts as a
multiplier.
LED CONNECTION CIRCUIT

Inside surface where the object can be placed for measurement

Inside upper view of the placement of the LED and web cam

Power circuit and master switch for the LED

External top view of the assembly


CANNY EDGE DETECTION:

Canny edge detection is a popular image processing technique used to detect edges in an image. It involves
several steps to achieve accurate edge detection. Firstly, the image is smoothed using a Gaussian filter to
reduce noise. Then, the gradient magnitude and direction are calculated using Sobel operators to determine
the intensity changes in different directions. Next, non-maximum suppression is applied to thin out the
detected edges by suppressing non-maximum pixels. Finally, a hysteresis thresholding technique is used to
determine the final edges by setting a high and low threshold. Pixels with gradient magnitudes above the
high threshold are considered strong edges, while those below the low threshold are considered non-edges.
Pixels with gradient magnitudes between the high and low thresholds are classified as weak edges. By
connecting strong edges to weak edges, a complete edge map can be generated.
STEPS INVOLVED IN CANNY EDGE DETECTION

1. Smoothing: The first step involves applying a Gaussian filter to the input image to reduce noise and remove small details.
This helps in obtaining a more accurate edge map.
2. Gradient Calculation: Next, the gradient magnitude and direction are calculated using derivative filters, typically the Sobel
operators. These filters highlight areas of rapid intensity changes, which correspond to edges in the image.
3. Non-maximum Suppression: In this step, only the maximum values of the gradient magnitude are kept, while all other
values are suppressed. This helps to thin out the edges and ensure that only the strongest edges are retained.
4. Hysteresis Thresholding: After non-maximum suppression, a threshold is applied to the gradient magnitude to identify
strong edges. Pixels with values above the high threshold are marked as strong edges, while those below the low threshold are
considered weak edges.
5. Edge Tracking by Hysteresis: To determine the final edges, a connectivity analysis is performed. Weak edges that are
connected to strong edges are considered part of the edge map, while isolated weak edges are discarded. This step helps to
fill in gaps and create a continuous edge map.

Overall, these steps work together to generate a more accurate and refined edge map that captures the important features
and boundaries in the image.
MORPHOLOGY:

Morphology is the broad set of image processing operations that process images based on shapes. It is also a tool
for extracting image components that are useful in the representation and description of region shape.
The basic morphological operations are:
1. Erosion
2. Dilation

Dilation: The value of the output pixel is the maximum value of all pixels in the neighbourhood. In a binary image, a pixel
is set to 1 if any of the neighbouring pixels have the value 1.
EROSION:

The value of the output pixel is the minimum value of all pixels in the neighborhood. In a binary image, a pixel is
set to 0 if any of the neighboring pixels have the value 0.

As can be seen from the two figures above, the second image is somewhat reduced compared to the first; this is
what erosion does: it removes pixels on object boundaries.
WHY CAMERA INITIALIZATION IS NEEDED?

Camera initialization is a crucial step in the process of setting up a camera for use. It involves configuring various
parameters and settings to ensure optimal performance and functionality. One important aspect of camera
initialization is selecting the appropriate resolution and frame rate for capturing images or videos. This decision
depends on factors such as the intended use of the camera, available storage capacity, and desired image quality.

Additionally, camera initialization includes calibrating focus, exposure, and white balance settings to achieve
accurate and well-balanced images. It is also essential to set up communication protocols and interfaces, such as
USB or Wi-Fi, to establish a connection between the camera and other devices. Furthermore, configuring image
stabilization options can help reduce motion blur and enhance overall image quality. Lastly, ensuring that all
firmware and software are up to date is vital for optimal performance and compatibility with other devices or
applications. Overall, camera initialization encompasses a range of technical considerations that are necessary for
achieving high-quality results in various imaging applications.
WHY IMAGE CROPPING IS NEEDED?

Image cropping is a widely used technique in the field of image processing. It involves removing unwanted parts of an
image to focus on the desired subject or to improve composition. The process of cropping an image involves selecting a
specific area and discarding the rest.

This technique is commonly used in various applications, such as photography, graphic design, and website
development. By cropping an image, one can enhance its visual appeal, remove distractions, or highlight specific details.
Additionally, image cropping allows for better framing and resizing to fit different platforms or mediums. It is important
to note that while cropping can be beneficial, it should be done carefully to avoid losing essential information or altering
the intended message of the image.
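Since OpenCV represents an image as a NumPy array of shape (height, width, channels), cropping is plain array slicing; the coordinates below are illustrative:

```python
import numpy as np

# Stand-in for a captured 640x480 BGR frame.
img = np.zeros((480, 640, 3), np.uint8)

# Top-left corner (y, x) and crop size (h, w) of the region of interest.
y, x, h, w = 100, 150, 200, 300
crop = img[y:y + h, x:x + w]   # rows first, then columns

print(crop.shape)  # (200, 300, 3)
```

Because the slice is a view into the original array, cropping is essentially free; copy with `crop.copy()` if the region must outlive the frame.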
INSTRUMENT AVAILABLE IN MARKET

Keyence image dimension measuring system is one of their innovative products that helps businesses ensure precise
measurements and quality control in their manufacturing processes. With its high-resolution imaging capabilities and
intuitive software, the Keyence image dimension measuring system provides real-time data analysis and visualization,
enabling businesses to make informed decisions and improve their overall product quality. In addition, the Keyence
image dimension measuring system also offers advanced features such as automatic data recording and statistical
analysis.

Although all dimensions are measured in a two-axis configuration, the system is equipped with a high-end image
capturing system and a high-end processor with auto-referencing capabilities and a repeatability accuracy of 0.1 microns.

The cost of the system mentioned above would be close to $18,000, including the touch probe and multi-stage
camera supplied with it.
FUNCTIONS USED IN CODE AND THEIR EXPLANATION
FIND CONTOURS
Contours can be explained simply as a curve joining all the continuous points (along a boundary) having the same color or
intensity. Contours are a useful tool for shape analysis and for object detection and recognition.
REORDER
The "reorder" helper function is used to change the order of elements in a list or sequence. It rearranges
the items based on specific criteria or patterns, which is useful when certain elements need to be sorted or
prioritized in a data structure before further processing.
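The presentation does not show the function body; in measurement scripts of this kind, `reorder` typically sorts the four corner points of a detected quadrilateral into a fixed order (top-left, top-right, bottom-left, bottom-right) before warping. A sketch under that assumption:

```python
import numpy as np

def reorder(pts):
    """Order 4 corner points as: top-left, top-right, bottom-left, bottom-right."""
    pts = np.asarray(pts, dtype=np.float32).reshape(4, 2)
    out = np.zeros((4, 2), np.float32)
    s = pts.sum(axis=1)              # x + y: smallest at top-left, largest at bottom-right
    d = np.diff(pts, axis=1)[:, 0]   # y - x: smallest at top-right, largest at bottom-left
    out[0] = pts[np.argmin(s)]
    out[3] = pts[np.argmax(s)]
    out[1] = pts[np.argmin(d)]
    out[2] = pts[np.argmax(d)]
    return out

corners = [(90, 10), (10, 10), (10, 90), (90, 90)]   # arbitrary detection order
print(reorder(corners))   # rows: TL, TR, BL, BR
```

A fixed corner order matters because the perspective warp maps source corners to destination corners positionally.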
WARPING
In Python, the term “warping” refers to a technique used in image processing and computer vision. It involves
manipulating the shape or perspective of an image to achieve desired effects or correct distortions. Warping can be used
to align images, remove lens distortions, create special effects, or transform images to fit a specific layout or template. It
is commonly used in tasks such as image registration, panorama stitching, object recognition, and augmented reality
applications.
FINDDIS
The "findDis" helper function is used to find the distance between two points in a given coordinate system. It
takes the coordinates of two points as input and calculates the distance using a mathematical formula,
such as the Euclidean distance formula. The result is then returned as the output of the function. This is useful in
various applications, such as calculating distances between geographical locations or measuring distances in a graph
or network.
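A sketch of such a helper under the Euclidean-distance assumption (the name `findDis` follows the slide; the project's exact implementation is not shown):

```python
import math

def findDis(p1, p2):
    """Euclidean distance between two 2D points given as (x, y) pairs."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

# Classic 3-4-5 right triangle as a sanity check.
print(findDis((0, 0), (3, 4)))  # 5.0
```

In the measurement pipeline, the pixel distance returned here is multiplied by the calibration scale to obtain millimetres.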
RESULTS OBSERVED FROM THE SETUP
SL. NO    SPECIMEN VALUE (in mm)    MEASURED VALUE (in mm)
   1              100                      103
   2               90                       92.05
   3               80                       83.05
   4               70                       71.7
   5               60                       61.9
   6               50                       51.5
   7               40                       41.9
   8               30                       31.7
   9               20                       21.2
  10               10                       10.9

[Chart: measured value (in mm) plotted against specimen value (in mm), with a linear trend line]

From the observed results it is evident that the absolute error lies between 0.9 mm and about 3 mm across the
various samples under test. With the current setup environment there is scope to increase the accuracy and reduce
the error further, which would require a higher-resolution camera and a better algorithm to process the image at a
finer level of detail.
The whole setup and implementation cost a fraction of the price of the system available in the market, though work
remains to be done to operate at high accuracy and to detect more features.
On the whole, the captured data was found to be satisfactory and close to the true values; hence the concept is
proved.
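The error band can be checked directly from the specimen and measured values in the table above:

```python
specimen = [100, 90, 80, 70, 60, 50, 40, 30, 20, 10]
measured = [103, 92.05, 83.05, 71.7, 61.9, 51.5, 41.9, 31.7, 21.2, 10.9]

# Absolute error per sample, in millimetres.
abs_err = [round(m - s, 2) for s, m in zip(specimen, measured)]
print(abs_err)                       # [3, 2.05, 3.05, 1.7, 1.9, 1.5, 1.9, 1.7, 1.2, 0.9]
print(min(abs_err), max(abs_err))    # 0.9 3.05
```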
CONCLUSION

In conclusion, the project successfully implemented object detection and dimensioning using an optical sensor. The
developed system accurately detected objects in real time and provided precise measurements of their dimensions. This
technology has immense potential in industries such as manufacturing, logistics, and retail, where efficient
object detection and dimensioning are crucial for optimizing processes and ensuring quality control. Further
improvements can be made by integrating machine learning algorithms to enhance the system's accuracy and expand
its capabilities.

Overall, this project demonstrates the feasibility and effectiveness of optical sensor dimensioning technology in
streamlining operations and improving productivity in diverse sectors. By accurately capturing the dimensions of
objects, businesses can minimize errors, reduce waste, and enhance overall efficiency. Moreover, the integration of
machine learning algorithms can enable the system to learn from past data and make more accurate predictions,
further enhancing its functionality.
FUTURE SCOPE

In addition to the current advancements in object detection and dimensioning using optical sensors, there are
several future possibilities that can be explored. One potential direction is the integration of artificial
intelligence algorithms to enhance the accuracy and efficiency of object detection and dimensioning. This could
involve training the system to recognize various objects and their dimensions, allowing for more precise
measurements and reducing errors. Furthermore, the development of compact and cost-effective optical sensors
could enable their widespread implementation in industries such as logistics, manufacturing, and retail,
revolutionizing the way objects are detected and dimensions are measured.
THANK YOU
