The prior mean m in a regression model can be thought of as the OLS estimate using dummy observations, where the dummy observations y are generated as y = Pm, with P having dimension k × k and m having dimension k × 1. The regression model is then y = Xβ + ε, with X = P, so the OLS estimate is simply m, the prior mean. The variance of the OLS estimate using the dummy observations is then the prior variance M. Dummy observations thus provide a way to mechanically derive the prior and posterior distributions in Bayesian regression.
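As a quick sketch of the posterior claim (using the same notation, and treating σ² as normalized to 1, as in the derivation below): stack the dummy observations underneath the actual data (y, X) and run OLS on the stacked system. Then

β_stacked = (X'X + P'P)^{-1} (X'y + P'P m) = (X'X + M^{-1})^{-1} (X'y + M^{-1} m),

which is the standard posterior mean combining the data with the prior N(m, M).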
Here is an illustration of where the dummy observations come from:
The prior mean m can be thought of as the OLS estimate of an auxiliary regression of y on X, where y = Pm, with P of dimension (k × k), k being the number of coefficients, and m of dimension (k × 1). Then y has dimension (k × 1) and can be thought of as a vector of "fake data" (dummy observations), and X = P. The regression model is y = Xβ + ε, with β_OLS = (X'X)^{-1} X'y, so substituting the expressions for y and X yields β_OLS = (P'P)^{-1} P'P m. The P'P terms cancel and what remains is m, the prior mean implied by the choices of y and X. The variance of the OLS estimate is var(β_OLS) = (X'X)^{-1} (note that the σ² in the usual formula is our d_ii and is estimated separately; hence it is not included here). Again substituting the expression for X gives var(β_OLS) = (P'P)^{-1}, and with P'P = M^{-1} as defined on slide 74, var(β_OLS) = M, the prior variance.
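And here is a minimal numerical sketch of these mechanics in Python/NumPy. The specific numbers for m and M are made up for illustration, and P is taken as a Cholesky factor of M^{-1} so that P'P = M^{-1} as above:

import numpy as np

# Made-up prior for illustration: mean m and positive definite variance M.
k = 3
m = np.array([0.5, -1.0, 2.0])
M = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.5, 0.2],
              [0.1, 0.2, 1.0]])

# Choose P so that P'P = M^{-1} (a Cholesky factor of the prior precision).
L = np.linalg.cholesky(np.linalg.inv(M))  # lower triangular, L L' = M^{-1}
P = L.T                                   # then P'P = L L' = M^{-1}

# Dummy observations: X = P, y = P m.
X = P
y = P @ m

# OLS on the dummy observations.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)  # (X'X)^{-1} X'y
var_ols = np.linalg.inv(X.T @ X)              # (X'X)^{-1}, with sigma^2 set aside as in the text

print(np.allclose(beta_ols, m))  # True: OLS on the dummies recovers the prior mean m
print(np.allclose(var_ols, M))   # True: (P'P)^{-1} recovers the prior variance M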
That’s the "mechanics" and the beauty of dummy observations. :-)