Homework 1


Student Name: Fauzy Satrio Wibowo

ID : 107618401

1. (15%) Exercise 2.27

A plant produces a line of translucent miniature polymer squares. Stringent quality
requirements dictate 100% visual inspection, and the plant manager finds the use of human
inspectors increasingly expensive. Inspection is semi-automated. At each inspection station, a
robotic mechanism places each polymer square over a light located under an optical system
that produces a magnified image of the square. The image completely fills a viewing screen
measuring 80*80 mm. Defects appear as dark circular blobs, and the inspector’s job is to look
at the screen and reject any sample that has one or more such dark blobs with a diameter of
0.8 mm or larger, as measured on the scale of the screen. The manager believes that if she can
find a way to automate the process completely, she will increase profits by 50%. She also
believes that success in this project will aid her climb up the corporate ladder. After much
investigation, the manager decides that the way to solve the problem is to view each
inspection screen with a CCD TV camera and feed the output of the camera into an image
processing system capable of detecting the blobs, measuring their diameter, and activating the
accept/reject buttons previously operated by an inspector. She is able to find a system that can
do the job, as long as the smallest defect occupies an area of at least 2*2 pixels in the digital
image. The manager hires you to help her specify the camera and lens system, but requires
that you use off-the-shelf components. For the lenses, assume that this constraint means any
integer multiple of 25 mm or 35 mm, up to 200 mm. For the cameras, it means resolutions of
512 * 512, 1024 * 1024, or 2048 * 2048 pixels. The individual imaging elements in these
cameras are squares measuring 8*8 µm and the spaces between imaging elements are 2µm.
For this application, the cameras cost much more than the lenses, so the problem should be
solved with the lowest-resolution camera possible, based on the choice of lenses. As a
consultant, you are to provide a written recommendation, showing in reasonable detail the
analysis that led to your conclusion. Use the same imaging geometry suggested in Problem
2.5.

Answer:

Solution:
Using the imaging geometry suggested in Problem 2.5, an object of size X on the viewing screen
produces an image of size

∆X = (λ * X) / (λ – Z)

where λ is the focal length of the lens and Z is the distance from the lens to the viewing
screen (the minus sign only reflects image inversion). Solving this geometry for the 80 mm
screen gives Z = 9λ.
Assuming a 25 mm lens (λ = 25 mm), the front lens therefore has to be located about 225 mm
from the viewing screen. With the smallest defect having a diameter of 0.8 mm on the screen,

∆X = (λ * 0.8) / (λ – Z) = (25 * 0.8) / (25 – 225) = 20 / (–200) mm = –0.1 mm = –100 µm

where the minus sign is again ignored because it only indicates an inverted position coordinate.

Conclusion – A defect of 0.8 mm diameter is imaged as a spot of about 100 µm on the chip of a
512*512 camera with a 25 mm lens, with the lens about 225 mm from the viewing screen. This is
well above the 48 µm minimum requirement, so the lowest-resolution (512*512) camera is
sufficient.
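
As a cross-check (not part of the original write-up), a small C++ sketch can tabulate, for each
off-the-shelf focal length and camera resolution, the lens-to-screen distance needed to map the
80 mm screen onto the chip and the size of a 0.8 mm defect in the resulting image. The distances
it prints follow from fitting the screen image onto the chip (8 µm elements with 2 µm gaps, i.e.
a 10 µm pitch), which is a slightly different convention from the Z = 9λ used above, so the
numbers are only indicative.

// check_camera.cpp - rough feasibility check for Exercise 2.27 (illustrative sketch only).
// Assumes the thin-lens style geometry used above: image = f * object / (Z - f).
#include <cstdio>

int main() {
    const double screen_mm = 80.0;    // viewing screen is 80 * 80 mm
    const double defect_mm = 0.8;     // smallest defect diameter on the screen
    const double pitch_mm  = 0.010;   // 8 um element + 2 um gap = 10 um pixel pitch
    const int resolutions[] = {512, 1024, 2048};
    // Off-the-shelf focal lengths: integer multiples of 25 mm or 35 mm, up to 200 mm.
    const int focals_mm[] = {25, 35, 50, 70, 75, 100, 105, 125, 140, 150, 175, 200};

    for (int res : resolutions) {
        double chip_mm = res * pitch_mm;                            // approximate chip side
        double defect_on_chip_mm = defect_mm * chip_mm / screen_mm; // magnification = chip/screen
        double pixels_covered = defect_on_chip_mm / pitch_mm;
        std::printf("%4d x %4d camera: defect image = %5.1f um (%.1f px)\n",
                    res, res, defect_on_chip_mm * 1000.0, pixels_covered);
        for (int f : focals_mm) {
            // Lens-to-screen distance Z that maps the 80 mm screen onto the chip:
            //   chip = f * screen / (Z - f)  =>  Z = f * (1 + screen / chip)
            double Z = f * (1.0 + screen_mm / chip_mm);
            std::printf("    f = %3d mm  ->  Z = %7.1f mm\n", f, Z);
        }
    }
    return 0;
}

Under this fitting assumption the 512*512 camera already images the 0.8 mm defect across roughly
five pixels, consistent with choosing the lowest-resolution camera.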
2. Path lengths between p and q in the image segment below (q is the 1 at the upper right, p
the 1 at the lower left):

0    3  1  2  1  1(q)
1    2  0  1  2  0
0    1  2  3  1  2
1(p) 1  0  1  2  1

For V = {0, 1}:
- 4-path: no 4-path from p to q exists, so its length is undefined.
- Shortest 8-path: length 5.
- Shortest m-path: length 6.

For V = {1, 2}:
- 4-path: no 4-path from p to q exists.
- Shortest 8-path: length 5.
- Shortest m-path: length 7.
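
The 4- and 8-path lengths above can be verified with a short breadth-first search over the image
segment, restricted to pixels whose values are in V. The sketch below is illustrative only and
not part of the submission; the m-path requires the additional mixed-adjacency test and is
omitted for brevity.

// path_length.cpp - shortest 4-/8-path between p and q for a given value set V.
#include <cstdio>
#include <queue>
#include <set>
#include <vector>

int shortestPath(const std::vector<std::vector<int>>& img,
                 std::pair<int,int> p, std::pair<int,int> q,
                 const std::set<int>& V, bool use8) {
    int rows = img.size(), cols = img[0].size();
    // 4-neighbour offsets; diagonal offsets are appended for 8-adjacency.
    std::vector<std::pair<int,int>> off = {{1,0},{-1,0},{0,1},{0,-1}};
    if (use8) { off.push_back({1,1}); off.push_back({1,-1});
                off.push_back({-1,1}); off.push_back({-1,-1}); }
    std::vector<std::vector<int>> dist(rows, std::vector<int>(cols, -1));
    std::queue<std::pair<int,int>> bfs;
    dist[p.first][p.second] = 0;
    bfs.push(p);
    while (!bfs.empty()) {
        auto cur = bfs.front(); bfs.pop();
        for (auto d : off) {
            int r = cur.first + d.first, c = cur.second + d.second;
            if (r < 0 || r >= rows || c < 0 || c >= cols) continue;
            if (dist[r][c] != -1 || V.count(img[r][c]) == 0) continue;
            dist[r][c] = dist[cur.first][cur.second] + 1;
            bfs.push({r, c});
        }
    }
    return dist[q.first][q.second];   // -1 means no such path exists
}

int main() {
    std::vector<std::vector<int>> img = {
        {0,3,1,2,1,1},    // q at (0,5)
        {1,2,0,1,2,0},
        {0,1,2,3,1,2},
        {1,1,0,1,2,1} };  // p at (3,0)
    std::set<int> V01 = {0,1}, V12 = {1,2};
    std::printf("V={0,1}: 4-path=%d  8-path=%d\n",
                shortestPath(img, {3,0}, {0,5}, V01, false),
                shortestPath(img, {3,0}, {0,5}, V01, true));
    std::printf("V={1,2}: 4-path=%d  8-path=%d\n",
                shortestPath(img, {3,0}, {0,5}, V12, false),
                shortestPath(img, {3,0}, {0,5}, V12, true));
    return 0;
}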

3. Grey Image Arithmetic Operation

Subtraction:

private void Sub_btn_Click(object sender, EventArgs e)
{
    Int32 wid = iImage.GetWidth(GrayImage);
    Int32 hei = iImage.GetHeight(GrayImage);
    Int32 lebar = iImage.GetWidth(GrayImage2);     // lebar  = width of the second image
    Int32 panjang = iImage.GetHeight(GrayImage2);  // panjang = height of the second image

    byte[,] ImgArray2D = new byte[hei, wid];
    byte[,] ImgArray2Dx = new byte[panjang, lebar];
    err = iImage.iPointerFromiImage(GrayImage, ref ImgArray2D[0, 0], wid, hei);
    err2 = iImage.iPointerFromiImage(GrayImage2, ref ImgArray2Dx[0, 0], lebar, panjang);

    if (err != E_iVision_ERRORS.E_OK)
    {
        MessageBox.Show(err.ToString(), "Error");
        return;
    }

    if (err2 != E_iVision_ERRORS.E_OK)
    {
        MessageBox.Show(err2.ToString(), "Error");
        return;
    }

    // Image processing: pixel-wise subtraction of the two grey images.
    // Both images are assumed to have the same dimensions.
    byte[,] ImgArray2Dy = new byte[hei, wid];
    for (int i = 0; i < hei; i++)
        for (int j = 0; j < wid; j++)
        {
            int diff = ImgArray2D[i, j] - ImgArray2Dx[i, j];
            ImgArray2Dy[i, j] = (byte)(diff < 0 ? 0 : diff);   // clamp negative results to 0
        }

    IntPtr imgPtr = iImage.iVarPtr(ref ImgArray2Dy[0, 0]);
    err3 = iImage.iPointerToiImage(GrayImage3, imgPtr, wid, hei);

    // Check the state
    if (err3 != E_iVision_ERRORS.E_OK)
    {
        MessageBox.Show(err3.ToString(), "Error");
        return;
    }

    // Display the result image.
    hbitmap3 = iImage.iGetBitmapAddress(GrayImage3);
    if (pictureBox3.Image != null)
        pictureBox3.Image.Dispose();
    pictureBox3.Image = System.Drawing.Image.FromHbitmap(hbitmap3);
    pictureBox3.Refresh();
}
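
For reference, the same pixel-wise subtraction can be written in a few lines with OpenCV. This
is a minimal sketch independent of the MiM_iVision library used above; the file names are
placeholders.

// subtract_demo.cpp - grey-image subtraction with OpenCV (illustrative only).
#include <opencv2/opencv.hpp>

int main() {
    // Placeholder file names; both images are assumed to have the same size.
    cv::Mat a = cv::imread("image1.bmp", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat b = cv::imread("image2.bmp", CV_LOAD_IMAGE_GRAYSCALE);
    if (a.empty() || b.empty()) return -1;

    cv::Mat diff;
    cv::subtract(a, b, diff);          // per-pixel a - b, saturated to [0, 255]

    cv::imshow("a - b", diff);
    cv::waitKey(0);
    return 0;
}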

4. Grey Image Intensity Operation


Source Code Gray Image Intensity Application:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using MiM_iVision;

namespace WindowsFormsApplication1
{
public partial class Form1 : Form
{
IntPtr gray = iImage.CreateGrayiImage();
E_iVision_ERRORS failed = E_iVision_ERRORS.E_NULL;
Int32[] Intensitybar = { 1, 2, 4, 8, 16, 32, 64, 128, 256 }; // selectable numbers of grey levels

public Form1()
{
InitializeComponent();
}

private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
    iImage.DestroyiImage(gray);
}

public E_iVision_ERRORS errorcheck(object sender, EventArgs e, IntPtr image)
{
openFileDialog1.Filter = "BMP|*.bmp";
string source;

if (openFileDialog1.ShowDialog() != DialogResult.OK)
{
MessageBox.Show(failed.ToString(), "No File Chosen!");
return failed;
}

source = openFileDialog1.FileName;
failed = iImage.iReadImage(image, source);

if (failed != E_iVision_ERRORS.E_OK)
{
MessageBox.Show(failed.ToString(), "Wrong Type");
return failed;
}

return E_iVision_ERRORS.E_OK;
}

public void finalimage(object sender, EventArgs e, IntPtr image)
{
IntPtr tempImage = iImage.iGetBitmapAddress(image);
pictureBox1.Image = System.Drawing.Image.FromHbitmap(tempImage);
pictureBox1.Refresh();
}
private void btn_loadimage_Click(object sender, EventArgs e)
{
failed = errorcheck(sender, e, gray);

if (failed == E_iVision_ERRORS.E_OK)
{
finalimage(sender, e, gray);
}
}

private void trackbar_image_Scroll(object sender, EventArgs e)
{
txb_Form1_IntensityLevel.Text =
Intensitybar[trackbar_image.Value].ToString();
Int32 intensity_init = trackbar_image.Value;
Int32 intensity_l = Intensitybar[intensity_init];

    changeIntensityLevel(sender, e, gray, intensity_l);
}
public void changeIntensityLevel(object sender, EventArgs e, IntPtr image,
    Int32 intensity_l)
{
    E_iVision_ERRORS failed;
    IntPtr tempgray = iImage.CreateGrayiImage();
    failed = iImage.iImageCopy(tempgray, image);

    Int32 newheight = iImage.GetHeight(tempgray);
    Int32 newwidth = iImage.GetWidth(tempgray);

    // Buffer sized exactly to the image so the row stride matches the copied data.
    byte[,] tempMatrix = new byte[newheight, newwidth];

    failed = iImage.iPointerFromiImage(tempgray, ref tempMatrix[0, 0],
        newwidth, newheight);
    if (failed != E_iVision_ERRORS.E_OK)
    {
        MessageBox.Show(failed.ToString(), "Error: ...");
        return;
    }

    // Constants for the conversion: divide the grey range into intensity_l buckets,
    // then rescale the bucket index back onto the 0-255 range.
    if (intensity_l < 2) intensity_l = 2;          // guard against division by zero
    Int32 Value1 = 256 / intensity_l;
    Int32 Value2 = 255 / (intensity_l - 1);

    for (int i = 0; i < newheight; i++)
        for (int j = 0; j < newwidth; j++)
        {
            tempMatrix[i, j] = (byte)(tempMatrix[i, j] / Value1 * Value2);
        }

// Display Picture
IntPtr temp;
unsafe
{
fixed (byte* bufPointer = &tempMatrix[0, 0])
{
failed = iImage.iPointerToiImage(tempgray, (IntPtr)bufPointer,
newwidth, newheight);
if (failed != E_iVision_ERRORS.E_OK)
{
MessageBox.Show(failed.ToString(), "Error");
return;
}
temp = iImage.iGetBitmapAddress(tempgray);
pictureBox1.Image = System.Drawing.Image.FromHbitmap(temp);
pictureBox1.Refresh();
}
}
iImage.DestroyiImage(tempgray);
}
}
}
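
The core of the operation above is the quantisation new = (old / (256/L)) * (255/(L-1)) for L
intensity levels. A minimal OpenCV sketch of the same idea (illustrative, not the MiM_iVision
version; the input file name is a placeholder) is shown below.

// intensity_levels.cpp - reduce a grey image to L intensity levels (illustrative).
#include <opencv2/opencv.hpp>

// Quantise to L levels using the same constants as the C# code above.
cv::Mat reduceLevels(const cv::Mat& src, int L) {
    CV_Assert(src.type() == CV_8UC1 && L >= 2 && L <= 256);
    int value1 = 256 / L;        // bucket width
    int value2 = 255 / (L - 1);  // spacing of the output levels
    cv::Mat dst = src.clone();
    for (int i = 0; i < dst.rows; i++)
        for (int j = 0; j < dst.cols; j++)
            dst.at<uchar>(i, j) = (uchar)(dst.at<uchar>(i, j) / value1 * value2);
    return dst;
}

int main() {
    cv::Mat gray = cv::imread("input.bmp", CV_LOAD_IMAGE_GRAYSCALE);
    if (gray.empty()) return -1;
    cv::imshow("4 levels", reduceLevels(gray, 4));
    cv::waitKey(0);
    return 0;
}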

Result:

5. Zooming and Shrinking Images by Nearest-Neighbour and Bilinear Interpolation

Main Code:
// rows0/cols0 hold the source image size (gray.rows, gray.cols), zoomFactor is the
// zoom/shrink factor, and METHOD selects the interpolation algorithm (see the Note at the end).
rows1 = (int)(gray.rows/zoomFactor);
cols1 = (int)(gray.cols/zoomFactor);
gray1 = Mat::zeros(cvSize(cols1, rows1), gray.type());   // cvSize takes (width, height)
#if (METHOD == NEAREST_NEIGHBOR)
float hRatio = (float)(rows0)/rows1;
float wRatio = (float)(cols0)/cols1;
float rr = 0, cc = 0;
for (int i = 0; i < rows1; i++)
for (int j = 0; j < cols1; j++)
{
//gray1.at<uchar>(i, j) = gray.at<uchar>(i*rows0/rows1, j*cols0/cols1);
rr = i*hRatio;
cc = j*wRatio;
gray1.at<uchar>(i, j) = gray.at<uchar>((int)rr, (int)cc);
}
#elif (METHOD == BILINEAR)
float hRatio = (float)(rows0-1)/rows1;
float wRatio = (float)(cols0-1)/cols1;
float rr = 0, cc = 0;
float rDiff = 0, cDiff = 0;
int A, B, C, D;
float value;
for (int i = 0; i < rows1; i++)
for (int j = 0; j < cols1; j++)
{
rr = (float)i*hRatio;
cc = (float)j*wRatio;
rDiff = rr - (int)rr;
cDiff = cc - (int)cc;
A = gray.at<uchar>((int)rr, (int)cc);
B = gray.at<uchar>((int)rr, (int)cc+1);
C = gray.at<uchar>((int)rr+1, (int)cc);
D = gray.at<uchar>((int)rr+1, (int)cc+1);
value = (int)(A*(1-rDiff)*(1-cDiff) + B*rDiff*(1-cDiff) +
C*(1-rDiff)*cDiff + D*rDiff*cDiff);
gray1.at<uchar>(i, j) = value;
//gray1.at<uchar>(i, j) = (int)(gray.at<uchar>(rDiff, cDiff)*(1-rDiff)*(1-cDiff) +
//    gray.at<uchar>(rDiff, cDiff+1)*rDiff*(1-cDiff) +
//    gray.at<uchar>(rDiff+1, cDiff)*(1-rDiff)*cDiff +
//    gray.at<uchar>(rDiff+1, cDiff+1)*rDiff*cDiff);
}
#elif (METHOD == BICUBIC)
double p[4][4] = {{1,3,3,4}, {7,2,3,4}, {1,6,3,6}, {2,5,7,2}};
float hRatio = (float)(rows0)/rows1;
float wRatio = (float)(cols0)/cols1;
float rr = 0, cc = 0;
float rDiff = 0, cDiff = 0;

Mat tempI = Mat::zeros(cvSize(cols0+3, rows0+3), gray.type());   // cvSize takes (width, height)


for (int i = 1; i <= rows0; i++)
for (int j = 1; j <= cols0; j++) tempI.at<uchar>(i, j) =
gray.at<uchar>(i-1, j-1);
for (int j = 1; j <= cols0; j++)
{
tempI.at<uchar>(0, j) = gray.at<uchar>(0, j-1);
tempI.at<uchar>(rows0, j) = gray.at<uchar>(rows0-1, j-1);
tempI.at<uchar>(rows0+1, j) = gray.at<uchar>(rows0-1, j-1);
}
for (int i = 1; i <= rows0; i++)
{
tempI.at<uchar>(i, 0) = gray.at<uchar>(i-1, 0);
tempI.at<uchar>(i, cols0) = gray.at<uchar>(i-1, cols0-1);
tempI.at<uchar>(i, cols0+1) = gray.at<uchar>(i-1, cols0-1);
}
tempI.at<uchar>(0, 0) = gray.at<uchar>(0, 0);
tempI.at<uchar>(0, cols0) = gray.at<uchar>(0, cols0-1);
tempI.at<uchar>(0, cols0+1) = gray.at<uchar>(0, cols0-1);
tempI.at<uchar>(rows0, 0) = gray.at<uchar>(rows0-1, 0);
tempI.at<uchar>(rows0+1, 0) = gray.at<uchar>(rows0-1, 0);
tempI.at<uchar>(rows0, cols0) = gray.at<uchar>(rows0-1, cols0-1);
tempI.at<uchar>(rows0, cols0+1) = gray.at<uchar>(rows0-1, cols0-1);
tempI.at<uchar>(rows0+1, cols0) = gray.at<uchar>(rows0-1, cols0-1);
tempI.at<uchar>(rows0+1, cols0+1) = gray.at<uchar>(rows0-1, cols0-1);

for (int i = 0; i < rows1; i++)


for (int j = 0; j < cols1; j++)
{
rr = i*hRatio;
cc = j*wRatio;
rDiff = rr - (int)rr;
cDiff = cc - (int)cc;

for (int ii1 = 0; ii1 < 4; ii1++)


for (int jj1 = 0; jj1 < 4; jj1++)
{
p[ii1][jj1] = tempI.at<uchar>((int)rr+ii1,
(int)cc+jj1);
}

int tvalue = (int)(bicubicInterpolate(p, cDiff, rDiff));


if (tvalue>255) tvalue = 255;
if (tvalue<0) tvalue = 0;
gray1.at<uchar>(i, j) = tvalue;
}
#endif
// Show new Image
imshow("before zoom/shrink", gray);
imshow("after zoom/shrink", gray1);
waitKey(0);
}

Nearest-Neighbour:
int[] temp = new int[w2*h2] ;
double x_ratio = w1/(double)w2 ;
double y_ratio = h1/(double)h2 ;
double px, py ;
for (int i=0;i<h2;i++) {
for (int j=0;j<w2;j++) {
px = Math.floor(j*x_ratio) ;
py = Math.floor(i*y_ratio) ;
temp[(i*w2)+j] = pixels[(int)((py*w1)+px)] ;
}
}

Bilinear Code:
#elif (METHOD == BILINEAR)
float hRatio = (float)(rows0-1)/rows1;
float wRatio = (float)(cols0-1)/cols1;
float rr = 0, cc = 0;
float rDiff = 0, cDiff = 0;
int A, B, C, D;
float value;
for (int i = 0; i < rows1; i++)
for (int j = 0; j < cols1; j++)
{
rr = (float)i*hRatio;
cc = (float)j*wRatio;
rDiff = rr - (int)rr;
cDiff = cc - (int)cc;
A = gray.at<uchar>((int)rr, (int)cc);
B = gray.at<uchar>((int)rr, (int)cc+1);
C = gray.at<uchar>((int)rr+1, (int)cc);
D = gray.at<uchar>((int)rr+1, (int)cc+1);
value = (int)(A*(1-rDiff)*(1-cDiff) + B*rDiff*(1-cDiff) +
C*(1-rDiff)*cDiff + D*rDiff*cDiff);
gray1.at<uchar>(i, j) = value;
//gray1.at<uchar>(i, j) = (int)(gray.at<uchar>(rDiff, cDiff)*(1-rDiff)*(1-cDiff) +
//    gray.at<uchar>(rDiff, cDiff+1)*rDiff*(1-cDiff) +
//    gray.at<uchar>(rDiff+1, cDiff)*(1-rDiff)*cDiff +
//    gray.at<uchar>(rDiff+1, cDiff+1)*rDiff*cDiff);
}

Bicubic Code:
double p[4][4] = {{1,3,3,4}, {7,2,3,4}, {1,6,3,6}, {2,5,7,2}};
float hRatio = (float)(rows0)/rows1;
float wRatio = (float)(cols0)/cols1;
float rr = 0, cc = 0;
float rDiff = 0, cDiff = 0;

Mat tempI = Mat::zeros(cvSize(cols0+3, rows0+3), gray.type());   // cvSize takes (width, height)


for (int i = 1; i <= rows0; i++)
for (int j = 1; j <= cols0; j++) tempI.at<uchar>(i, j) =
gray.at<uchar>(i-1, j-1);
for (int j = 1; j <= cols0; j++)
{
tempI.at<uchar>(0, j) = gray.at<uchar>(0, j-1);
tempI.at<uchar>(rows0, j) = gray.at<uchar>(rows0-1, j-1);
tempI.at<uchar>(rows0+1, j) = gray.at<uchar>(rows0-1, j-1);
}
for (int i = 1; i <= rows0; i++)
{
tempI.at<uchar>(i, 0) = gray.at<uchar>(i-1, 0);
tempI.at<uchar>(i, cols0) = gray.at<uchar>(i-1, cols0-1);
tempI.at<uchar>(i, cols0+1) = gray.at<uchar>(i-1, cols0-1);
}
tempI.at<uchar>(0, 0) = gray.at<uchar>(0, 0);
tempI.at<uchar>(0, cols0) = gray.at<uchar>(0, cols0-1);
tempI.at<uchar>(0, cols0+1) = gray.at<uchar>(0, cols0-1);
tempI.at<uchar>(rows0, 0) = gray.at<uchar>(rows0-1, 0);
tempI.at<uchar>(rows0+1, 0) = gray.at<uchar>(rows0-1, 0);
tempI.at<uchar>(rows0, cols0) = gray.at<uchar>(rows0-1, cols0-1);
tempI.at<uchar>(rows0, cols0+1) = gray.at<uchar>(rows0-1, cols0-1);
tempI.at<uchar>(rows0+1, cols0) = gray.at<uchar>(rows0-1, cols0-1);
tempI.at<uchar>(rows0+1, cols0+1) = gray.at<uchar>(rows0-1, cols0-1);

for (int i = 0; i < rows1; i++)


for (int j = 0; j < cols1; j++)
{
rr = i*hRatio;
cc = j*wRatio;
rDiff = rr - (int)rr;
cDiff = cc - (int)cc;

for (int ii1 = 0; ii1 < 4; ii1++)


for (int jj1 = 0; jj1 < 4; jj1++)
{
p[ii1][jj1] = tempI.at<uchar>((int)rr+ii1,
(int)cc+jj1);
}

int tvalue = (int)(bicubicInterpolate(p, cDiff, rDiff));


if (tvalue>255) tvalue = 255;
if (tvalue<0) tvalue = 0;
gray1.at<uchar>(i, j) = tvalue;
}
#endif

// Show new Image


imshow("before zoom/shrink", gray);
imshow("after zoom/shrink", gray1);
waitKey(0);

Result:
Image Processed by Bilinear:
Image Processed by Nearest-Neighbor:
Image Processed by Bicubic:
Conclusion:

1. When an image is shrunk and then zoomed back, the result is not as clear as the source
image, and each interpolation method gives a different result.
2. Nearest-Neighbor interpolation is the fastest but gives the worst quality.
3. Bilinear interpolation gives a better result than Nearest-Neighbor.
4. Bicubic interpolation gives nearly the same result as Bilinear, but on close inspection its
result is slightly clearer and smoother than Bilinear.

Addition:

- Bilinear interpolation interpolates a value between 2 points (in each direction) using a
first-degree polynomial.

- Cubic interpolation interpolates a value from 4 points using a third-degree polynomial.

- Bicubic interpolation is cubic interpolation in two dimensions, applied here to a
two-dimensional grid: the cubic interpolation formula is applied along one axis and then along
the other, so the method needs 16 points (a 4*4 neighbourhood) to interpolate a value.

- Bicubic interpolation formula: a code sketch is given below.
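
The main code above calls bicubicInterpolate(p, cDiff, rDiff) without defining it. A common
implementation consistent with that call (one cubic interpolation per row of the 4*4
neighbourhood p, then one more across the four results) is sketched below. This is an assumed
definition, not necessarily the exact one used in this project, and the x/y axis convention must
match the way p is filled.

// Cubic interpolation through four samples p[0..3] at parameter x in [0, 1]
// (Catmull-Rom style). bicubicInterpolate applies it row-wise, then across the rows.
double cubicInterpolate(double p[4], double x) {
    return p[1] + 0.5 * x * (p[2] - p[0]
         + x * (2.0*p[0] - 5.0*p[1] + 4.0*p[2] - p[3]
         + x * (3.0*(p[1] - p[2]) + p[3] - p[0])));
}

double bicubicInterpolate(double p[4][4], double x, double y) {
    double arr[4];
    arr[0] = cubicInterpolate(p[0], y);
    arr[1] = cubicInterpolate(p[1], y);
    arr[2] = cubicInterpolate(p[2], y);
    arr[3] = cubicInterpolate(p[3], y);
    return cubicInterpolate(arr, x);
}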


Note:

In this project I use all three algorithms: Nearest-Neighbor, Bilinear, and Bicubic.

A preprocessor definition named "METHOD" is used to switch between the algorithms, as shown
below.

To run the program you need OpenCV 2.4.9; change the METHOD define to test each algorithm.
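
For completeness, the selection constants referenced by the #if/#elif blocks are assumed to be
defined at the top of the source file, along the lines of the sketch below (the names are taken
from the code; the numeric values are arbitrary labels).

// Assumed method-selection macros (values are arbitrary labels):
#define NEAREST_NEIGHBOR 1
#define BILINEAR         2
#define BICUBIC          3

// Pick the interpolation algorithm to build and test:
#define METHOD BILINEAR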
