E1 244 - Detection and Estimation Theory (Jan 2023)

Assignment 5
Due Date: April 02, 2023
Total Marks: 55

General Assignment Guidelines:


• Submission via Teams: Assignments will be allocated to you via the Teams “Assignments” feature. You
will have to upload your answer sheet via the same feature in Teams. Answer sheets sent to me or
the TAs by e-mail will not be considered. Please familiarize yourself with this feature.
• Late Submission Policy: Assignment submission beyond the deadline is allowed as per the policy below:

– Delay of 24 hours will attract a 20% penalty

– Delay of 48 hours will attract a 40% penalty
– Delay of 72 hours will attract a 60% penalty
– Assignments submitted beyond 72 hours will not be considered

The upload time reflected in Teams will be considered final. You are highly encouraged
to upload your answer sheets well before the deadline so that any potential connection/technical issues
can be resolved in time (recommended: at least 1 hour before the deadline).
• File Logistics: You can scan your handwritten answers, use a tablet, or typeset your answers in LaTeX.
The following logistics should be followed while uploading the answer sheet:

– Make sure your scans are properly visible. You can try using a scanning app (e.g., Adobe Scan) to
get better results.
– The total file size should not be too large (ideally, less than 5 MB). You can use available
apps/software to reduce the file size. Make sure that the answers remain clearly visible after
reducing the size.
– Only a single PDF file is allowed to be uploaded (other formats will not be accepted).
– The order of answers in your file should be the same as the order of questions. Check that you
have included all the pages in your file before uploading it.
– Name your file as: DET Assignmentx FirstName, where x is the assignment number (x=1,2,...)
– Mention your name, course name and submission date on the first page.
• Collaboration Policy: You are allowed to discuss the concepts/questions with your classmates. However,
the final answer should reflect your own understanding. Merely copying solutions from class-
mates/online sources will attract a significant penalty and strict disciplinary action (refer to Sections 13.2
and 13.3 of the IISc student information handbook - link). If you have collaborated/discussed with other
classmates or referred to an online resource for a particular question, clearly mention the names of
the classmates/online resources at the beginning of your answer.

Problem 2 carries 15 marks. Remaining questions carry 10 marks each.

Problem 1 Let yk = a r^k + wk for k = 1, 2, · · · , where a ∼ N(μa, σa²), 0 ≤ r ≤ 1, and wk ∼ i.i.d. N(0, σ²).
Assume that a is independent of wk. Determine equations to recursively compute the estimate and the error
covariance of a based on the sequential data.
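If you want a numerical sanity check on the recursion you derive, the sketch below simulates one common sequential LMMSE form (gain, innovation, and error-variance updates) for the model yk = a r^k + wk. All parameter values here are arbitrary illustrative choices, not part of the problem statement.

```python
import numpy as np

# Numerical sanity check for the Problem 1 recursion (illustrative only).
mu_a, sigma_a2 = 1.0, 4.0   # prior mean and variance of a (arbitrary)
r, sigma2 = 0.9, 0.5        # decay factor and noise variance (arbitrary)
K = 50

rng = np.random.default_rng(0)
a = rng.normal(mu_a, np.sqrt(sigma_a2))   # draw the true parameter once

a_hat, P = mu_a, sigma_a2                 # initialize at the prior
P_trace = [P]
for k in range(1, K + 1):
    h = r ** k                            # observation gain at step k
    y = h * a + rng.normal(0.0, np.sqrt(sigma2))
    g = P * h / (h * h * P + sigma2)      # sequential LMMSE gain
    a_hat = a_hat + g * (y - h * a_hat)   # innovation update
    P = (1.0 - g * h) * P                 # error-variance update
    P_trace.append(P)
```

Plotting P_trace against k shows the error variance decreasing and then leveling off, since the observation gain r^k decays with k.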

Problem 2 We will implement and simulate the Kalman filter in this problem. Consider the Gauss-Markov
process considered in the class with the following parameters
   
A = \begin{bmatrix} 0.8 & -0.25 & 0 \\ -0.8 & -0.1 & 0 \\ 0 & -0.5 & 0.4 \end{bmatrix}, \quad
B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad
C = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix}, \quad
W = \begin{bmatrix} 2 & -1 \\ -1 & 3 \end{bmatrix}, \quad
V = 1, \quad X_0 = I_3

Note that x0 ∼ N (0, X0 ), wk ∼ N (0, W ), and vk ∼ N (0, V ).


(a) Explain how you will generate an instance of a random vector z with distribution z ∼ N (0, Z). Mention
the code line(s) that you will use to generate this.
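One standard approach for part (a) (a sketch of one acceptable answer, not the only one) is to factor Z and color a standard normal vector: if Z = L Lᵀ and u ∼ N(0, I), then z = L u ∼ N(0, Z). The covariance matrix below is an arbitrary example.

```python
import numpy as np

# Draw z ~ N(0, Z) by coloring a standard normal vector.
rng = np.random.default_rng(42)
Z = np.array([[2.0, -1.0],
              [-1.0, 3.0]])          # symmetric positive definite (example)

L = np.linalg.cholesky(Z)            # lower-triangular factor, Z = L @ L.T
u = rng.standard_normal(Z.shape[0])  # u ~ N(0, I)
z = L @ u                            # z ~ N(0, Z)
```

If Z is only positive semidefinite, an eigenvalue decomposition can be used in place of the Cholesky factorization.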

(b) Write a function implementing a Kalman filter (KF). The function should take as input A, B, C, W, V, X0 , K,
where K is the total number of time steps for which you run the simulation. The function should do the fol-
lowing:
(i) Generate the initial condition x0 , and use it to generate x1 , x2 , · · · , xK and y0 , y1 , y2 , · · · , yK .
(ii) Implement the KF equations to compute x̂k/k and x̂k/k−1 . Make sure to initialize the KF appropriately.
(iii) The function should output the trajectories xk , x̂k/k , x̂k/k−1 for k = 0, 1, · · · , K.
(iv) Plot the state trajectory xk and the estimated trajectory x̂k/k−1 in the same figure for the first component
of the state vector. The function should generate these plots automatically, along with a legend that differen-
tiates them. What do you observe in these plots as you decrease the value of V? Include these plots
and explanations in your answer.
(v) Make sure the function works for any given A, B, C, W, V, X0 , K of any dimension, not just for
the values of these quantities given in the question.
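A minimal dimension-generic skeleton for part (b) might look as follows. This is a sketch under the standard KF assumptions, not the required implementation: the function name and return signature are illustrative, and the plotting step in (iv) is omitted here.

```python
import numpy as np

def kalman_filter(A, B, C, W, V, X0, K, rng=None):
    """Simulate x_{k+1} = A x_k + B w_k, y_k = C x_k + v_k, and run a KF.

    x_0 ~ N(0, X0), w_k ~ N(0, W), v_k ~ N(0, V). Returns the true states,
    filtered and predicted estimates, and trace(P_{k/k-1}) for k = 0..K.
    """
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    C = np.atleast_2d(C)
    V = np.atleast_2d(V)
    p = C.shape[0]

    # Simulate state and measurement trajectories.
    x = np.zeros((K + 1, n))
    y = np.zeros((K + 1, p))
    x[0] = rng.multivariate_normal(np.zeros(n), X0)
    for k in range(K + 1):
        y[k] = C @ x[k] + rng.multivariate_normal(np.zeros(p), V)
        if k < K:
            w = rng.multivariate_normal(np.zeros(W.shape[0]), W)
            x[k + 1] = A @ x[k] + B @ w

    # Run the filter, initialized with the prior of x_0.
    x_pred = np.zeros((K + 1, n))   # x-hat_{k/k-1}
    x_filt = np.zeros((K + 1, n))   # x-hat_{k/k}
    tr_P = np.zeros(K + 1)          # trace(P_{k/k-1})
    xp, P = np.zeros(n), X0.copy()  # x-hat_{0/-1} = 0, P_{0/-1} = X0
    for k in range(K + 1):
        x_pred[k], tr_P[k] = xp, np.trace(P)
        S = C @ P @ C.T + V                   # innovation covariance
        G = P @ C.T @ np.linalg.inv(S)        # Kalman gain
        xf = xp + G @ (y[k] - C @ xp)         # measurement update
        Pf = (np.eye(n) - G @ C) @ P
        x_filt[k] = xf
        xp = A @ xf                           # time update
        P = A @ Pf @ A.T + B @ W @ B.T
    return x, x_filt, x_pred, tr_P

# Example call with the matrices given in the problem:
A = np.array([[0.8, -0.25, 0.0], [-0.8, -0.1, 0.0], [0.0, -0.5, 0.4]])
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 2.0, 3.0]])
W = np.array([[2.0, -1.0], [-1.0, 3.0]])
V = np.array([[1.0]])
X0 = np.eye(3)
x, x_filt, x_pred, tr_P = kalman_filter(A, B, C, W, V, X0, K=200)
```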

(c) In this part, we will empirically verify the values of Pk/k−1 .


(i) Generate a theoretical plot of trace(Pk/k−1 ) as k varies from 0 to K. For this, you need to recursively
compute the value of trace(Pk/k−1 ) in your KF function. Include this plot in your answer.
(ii) Explain how you will empirically compute the value of trace(Pk/k−1 ) for each k = 0, 1, · · · , K by taking
sample means over different runs of your KF function.
(iii) Write a script that generates 10^5 sample trajectories of xk , x̂k/k , x̂k/k−1 , that is, runs your KF function
10^5 times, and uses the data to empirically compute trace(Pk/k−1 ). You need not store the data from all 10^5
runs; rather, you can keep track of sums of appropriate quantities over the different runs of the KF.
(iv) The script should automatically generate a single figure which includes the trajectories of the theoretical
trace(Pk/k−1 ) (same as part (i)) and the empirically calculated trace(Pk/k−1 ). Include a legend to clearly
mark which plot is theoretical and which is empirical. Include this plot in your answer and comment
on what you observe.

You should submit two files along with your answer: The KF function file and the script for
empirically computing trace(Pk/k−1 ).
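The running-sum idea in (c)(iii) can be sketched as follows, here on an illustrative scalar system (all parameters are arbitrary, and far fewer runs than the assignment requires are used) so the bookkeeping is easy to see: squared prediction errors are accumulated across runs and divided by the run count at the end, so no per-run trajectories are stored.

```python
import numpy as np

# Running-sum estimate of trace(P_{k/k-1}) on an illustrative scalar system
# (a, c, w, v, x0var and the run count are arbitrary choices).
a, c, w, v, x0var = 0.9, 1.0, 0.5, 1.0, 1.0
K, n_runs = 30, 5000

rng = np.random.default_rng(1)
err_sq_sum = np.zeros(K + 1)   # running sum of (x_k - xhat_{k/k-1})^2

for _ in range(n_runs):
    x = rng.normal(0.0, np.sqrt(x0var))   # x_0
    xp, P = 0.0, x0var                    # xhat_{0/-1}, P_{0/-1}
    for k in range(K + 1):
        err_sq_sum[k] += (x - xp) ** 2    # accumulate; nothing else stored
        y = c * x + rng.normal(0.0, np.sqrt(v))
        g = P * c / (c * c * P + v)       # Kalman gain
        xf = xp + g * (y - c * xp)        # measurement update
        Pf = (1.0 - g * c) * P
        xp, P = a * xf, a * a * Pf + w    # time update
        x = a * x + rng.normal(0.0, np.sqrt(w))

emp_trace = err_sq_sum / n_runs   # empirical estimate of trace(P_{k/k-1})
```

For the scalar case, trace(P_{k/k−1}) is just the prediction-error variance, and emp_trace should track the theoretical Riccati recursion closely as the run count grows.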

Problem 3
(a) Let y ∼ Unif[0, 1/θ], where θ > 0 is the parameter to be estimated. Prove that no function g(·) exists such
that g(y) is an unbiased estimator of θ.
(b) Let yk ∼ i.i.d. Unif[0, θ], for k = 1, 2, · · · , N . Show that the following regularity condition does not hold
for this problem:

E\left[\frac{\partial \ln f(y;\theta)}{\partial \theta}\right] = 0, \quad \text{for all } \theta > 0.

Problem 4
(a) Let X1 , X2 , X3 be i.i.d. N (θ, σ 2 ) random variables, and let the observations be Y1 = X1 + X2 and
Y2 = X2 + X3 . Derive the MVU estimator of θ based on Y1 and Y2 , and show that it is an efficient estimator.
(b) Let x̂1 and x̂2 be unbiased estimates of a scalar parameter x. The estimators have variances σ1² and σ2²
and correlation coefficient ρ, all of which are independent of x. Consider the averaged estimator x̂λ = λx̂1 + (1 − λ)x̂2 ,
where λ ∈ R. Find the λ that minimizes the variance of x̂λ , and specialize your answer to the case ρ = 0.

Problem 5 Let θ ∈ R^m be an unknown nonrandom parameter.


(a) Let y ∼ N (s(θ), R). Here, the mean is a function of θ and R is a known matrix. Show that the Fisher
information matrix is given by

[I_y(\theta)]_{jk} = \left(\frac{\partial s(\theta)}{\partial \theta_j}\right)^T R^{-1} \frac{\partial s(\theta)}{\partial \theta_k}, \quad 1 \le j, k \le m.
(b) Let y ∼ N (s, R(θ)). Here, the covariance is a function of θ and s is a known vector. Show that the Fisher
information matrix is given by

[I_y(\theta)]_{jk} = \frac{1}{2}\,\mathrm{Trace}\left(R^{-1}(\theta)\frac{\partial R(\theta)}{\partial \theta_j} R^{-1}(\theta)\frac{\partial R(\theta)}{\partial \theta_k}\right), \quad 1 \le j, k \le m.
(c) Let y ∼ N (s(θ), R(θ)). Here, both the mean and the covariance are functions of θ. Show that the Fisher
information matrix is given by the sum of the two matrices from parts (a) and (b).
