Phuong Nguyen SB ID: 107344539 MAT 331, Spring 2012 Project 1: Least-Squares Fitting


For the first part, the given data was plotted and named P1. The built-in least-squares command was used to find the best-fit line for the vertical approximation, named V.

Dmin and Dmax were created to make graphing easier. The obtained line was named P2 and plotted on the same graph as P1 using the display command. In the first part, we can also approach the problem by partially differentiating, with respect to m and b, the squared distance between each data point and the corresponding point on the best-fit line. The vertical distance between these two points can be expressed as e = f(x[i]) - y[i] = (m*x[i] + b) - y[i].
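The partial-differentiation approach for the vertical fit can be sketched as follows. This is a hypothetical reconstruction in Python (sympy) rather than the original Maple worksheet, and the data points are placeholders, not the project's data.

```python
# Hypothetical Python (sympy) sketch of the partial-differentiation
# approach; the data points are placeholders, not the project's data.
import sympy as sp

m, b = sp.symbols('m b')
data = [(0, 1), (1, 3), (2, 5), (3, 8)]  # placeholder points

# Sum of squared vertical distances between each y[i] and m*x[i] + b
H = sum(((m*xi + b) - yi)**2 for xi, yi in data)

# Minima of a differentiable function occur at critical points, so set
# both partial derivatives to zero and solve for m and b
coef = sp.solve([sp.diff(H, m), sp.diff(H, b)], [m, b])
print(coef)  # {m: 23/10, b: 4/5} for these placeholder points
```

Because the vertical error is quadratic in m and b, the two partial-derivative equations are linear and have a unique solution.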

Therefore, in order to minimize the horizontal distance, we need to represent our line y = mx + b = f(x) as a function of y instead. So we use Maple to solve for the inverse, called g(y), which in this case gives g(y) = (y-b)/m.

The least-squares method minimizes the sum of squared residuals, the differences between the observed values and the fitted values. This sum can be written as

H:=(g,data)->sum(ep(g,data[i])^2,i=1..nops(data)); (10)

By taking derivatives with respect to the slope (m) and the y-intercept (b), the error can be minimized. Since the minima of a differentiable function occur at critical points, we set these partial derivatives to 0 and solve for m and b.

coef:=solve({diff(H(g,data),m)=0,diff(H(g,data),b)=0}); (11)

The subs command was used to substitute the results into the line y = mx + b:

H1:=subs(coef,f(x)); (12)

The best-fit line H1 was then plotted together with the raw data using the same method as in the first part. For minimizing the absolute Euclidean distance, the distance function can be found as follows.
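The horizontal-fit steps above can be mirrored in Python (sympy); this is a hypothetical sketch, not the original worksheet, and the data points are placeholders.

```python
# Hypothetical Python (sympy) sketch mirroring the Maple steps for the
# horizontal fit; the data points are placeholders, not the project's data.
import sympy as sp

m, b, x, y = sp.symbols('m b x y')
data = [(0, 1), (1, 3), (2, 5), (3, 8)]  # placeholder points

g = (y - b) / m  # inverse of f(x) = m*x + b, as solved in the worksheet

# Sum of squared horizontal residuals, analogous to H(g, data)
H = sum((g.subs(y, yi) - xi)**2 for xi, yi in data)

# Set both partial derivatives to zero and solve, as in coef := solve(...)
coef = sp.solve([sp.diff(H, m), sp.diff(H, b)], [m, b], dict=True)

# Substitute back into y = m*x + b, as in H1 := subs(coef, f(x))
H1 = [sp.expand((m*x + b).subs(sol)) for sol in coef]
print(H1)
```

Note that the horizontal fit generally gives a different slope and intercept from the vertical fit, since it minimizes residuals measured along x rather than y.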

For the line ax + by + c = 0, the normal vector is v = (a, b).

The vector r from the point (x0, y0) to a point (x, y) on the line is r = (x - x0, y - y0).

Projecting r onto v, we get the distance |r . v| / |v| = |a*x0 + b*y0 + c| / sqrt(a^2 + b^2).

We have mx - y + b = 0 (from f = mx + b). Plugging this into the equation above, the Euclidean distance for our case is |m*x0 - y0 + b| / sqrt(m^2 + 1).
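The projection step above can be verified symbolically; the sketch below is an illustrative check in Python (sympy), not part of the original worksheet.

```python
# A quick symbolic check (not from the worksheet) that projecting r onto
# the normal v reduces to the usual point-line distance formula.
import sympy as sp

a, b, c, x0, y0, x, y = sp.symbols('a b c x0 y0 x y', real=True)

# A point (x, y) on the line a*x + b*y + c = 0 (assuming b != 0)
y_on_line = sp.solve(sp.Eq(a*x + b*y + c, 0), y)[0]

r = sp.Matrix([x - x0, y_on_line - y0])  # vector from (x0, y0) to the line
v = sp.Matrix([a, b])                    # normal vector of the line

# Signed distance: the projection of r onto v; x drops out entirely
d = sp.simplify(r.dot(v) / sp.sqrt(a**2 + b**2))
print(d)  # -(a*x0 + b*y0 + c)/sqrt(a**2 + b**2), up to sign
```

The result is independent of the point (x, y) chosen on the line, which is why the projection gives a well-defined distance.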

By typing in

ep1:=(f,pt)->(f(pt[1])-pt[2])/sqrt(m^2 + 1);

the rest of the procedure is exactly as described for the horizontal least-squares case, except that three different results were obtained. We observed which solution has the slope closest to those of the two previous parts, and the result is (18). In general, the critical points of the error function correspond to eigenvectors of the data's scatter matrix, and the error at each critical point equals the corresponding eigenvalue. Two of the three critical points are extrema spanned by either the smaller or the larger eigenvector. However, the critical point that corresponds to the smallest eigenvalue, which gives the global minimum, should be chosen as the result in order to have the least possible error.
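The eigenvalue interpretation can be checked numerically; the sketch below uses made-up data and numpy in place of the Maple computation.

```python
# Numerical check (made-up data, numpy instead of Maple) of the claim that
# critical points of the perpendicular error correspond to eigenvectors,
# and that the smallest eigenvalue gives the global minimum error.
import numpy as np

pts = np.array([(0, 1), (1, 3), (2, 5), (3, 8)], dtype=float)  # placeholders
centered = pts - pts.mean(axis=0)

# Scatter matrix of the centered data; its eigenvectors are the critical
# directions, and each eigenvalue is the sum of squared perpendicular
# distances for the line normal to that eigenvector.
S = centered.T @ centered
eigvals, eigvecs = np.linalg.eigh(S)  # eigh returns eigenvalues ascending

a, c = eigvecs[:, 0]        # normal vector of the best-fit line
slope = -a / c              # rewrite a*x + c*y = 0 (centered) as y = slope*x
print(slope, eigvals[0])    # best-fit slope and the minimal error
```

Projecting the centered points onto the smallest eigenvector and summing the squares reproduces that eigenvalue exactly, which is what makes it the least possible error.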
