Multiple Regression Analysis - Inference
Otto-von-Guericke-Universität Magdeburg
Outline
Topics we cover:
- Introduction
- 4.1 Sampling Distributions of the OLS Estimators
- 4.2 Hypothesis Testing: The t Test
- 4.3 Confidence Intervals
- 4.4 Testing Hypotheses about a Single Linear Combination of the Parameters
- 4.5 Testing Multiple Linear Restrictions: The F Test
- 4.6 Reporting Regression Results

Introduction
4.1 Sampling Distributions of the OLS Estimators
- Knowing the expected value and variance of the OLS estimators is useful for describing the precision of the OLS estimators.
- However, in order to perform statistical inference, we need to know more than just the first two moments of β̂j; for statistical inference, we need to know the full sampling distribution of the β̂j.
- Note that even under the Gauss-Markov assumptions, the distribution of β̂j can have virtually any shape.
- To obtain the sampling distributions of the β̂j, we make use of the fact that when we condition on the values of the independent variables in our sample (the x's), the sampling distributions of the OLS estimators depend on the underlying distribution of the errors.
- So, to obtain the sampling distributions of the β̂j, we will proceed in the following steps:
  1. First, we specify the distribution of the error term u.
  2. Then, we derive the resulting distribution of y (holding x constant).
  3. This in turn determines the distribution of (β̂0, β̂1, ..., β̂k).
- Why? Because the OLS estimator is a linear function of the y-observations, so the distribution of y carries over to β̂ (see the sketch below).
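To make the linearity point concrete, here is a minimal numpy sketch (the data-generating numbers are invented for illustration): the OLS coefficient vector is exactly W y, where the weight matrix W depends only on X.

import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # design matrix with constant
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)     # simulated y

# OLS: beta_hat = (X'X)^{-1} X' y = W y, a linear function of the y-observations
W = np.linalg.inv(X.T @ X) @ X.T        # the weights depend only on the regressors
beta_hat = W @ y
assert np.allclose(beta_hat, np.linalg.lstsq(X, y, rcond=None)[0])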
Assumption MLR.6 (Normality)
The population error u is independent of the explanatory variables x1, x2, ..., xk and is normally distributed with zero mean and variance σ²: u ∼ Normal(0, σ²).
- To emphasize that we are assuming more than before, we will refer to the full set of Assumptions MLR.1 through MLR.6.
MLR.1–MLR.6: The Classical Linear Model
- Under the CLM assumptions, the OLS estimators β̂0, β̂1, ..., β̂k have a stronger efficiency property than under the Gauss-Markov assumptions:
  - It can be shown that the OLS estimators are the minimum variance unbiased estimators,
  - i.e. OLS has the smallest variance among unbiased estimators, no longer only among those estimators that are linear in the yi.
  (For further discussion of this property, see Appendix E in the textbook.)
- The CLM population assumptions can be summarized compactly as:

y | x ∼ Normal(β0 + β1x1 + β2x2 + ... + βkxk, σ²),

where x is shorthand for (x1, x2, ..., xk).
- The CLM model with a single explanatory variable: y = β0 + β1x + u.
- Normal error terms imply that y can take any value on the real line. Sometimes, simple reasoning suggests that y cannot have a normal distribution, e.g.:
  - Example 1: hourly wages: y ≥ 0.
  - Example 2: hourly wages with a minimum wage floor: y ≥ ymin.
  - Example 3: number of children born in a particular family: y ≥ 0.
  ... so, again, this is an empirical issue. We will discuss the consequences of nonnormality for statistical inference in Topic 5.
- Normality can nonetheless be a reasonable approximation, even if it does not hold exactly.
Reasonableness of the Normality Assumption in Practical Applications
- Normality of the error term translates into normal sampling distributions of the OLS estimators:

Theorem (Normal Sampling Distributions)
Under the CLM assumptions MLR.1 through MLR.6, conditional on the sample values of the independent variables,

β̂j ∼ Normal(βj, Var(β̂j)),

and therefore

(β̂j − βj) / sd(β̂j) ∼ Normal(0, 1),

where sd(β̂j) denotes the standard deviation of β̂j (again, see the previous topic).
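A small Monte Carlo sketch of this result (all simulation settings are invented): with normal errors and the x-values held fixed, the standardized slope estimator behaves like a standard normal draw.

import numpy as np

rng = np.random.default_rng(1)
n, beta0, beta1, sigma = 50, 1.0, 2.0, 1.5
x = rng.uniform(0, 10, size=n)                           # regressors held fixed
X = np.column_stack([np.ones(n), x])
sd_b1 = sigma * np.sqrt(np.linalg.inv(X.T @ X)[1, 1])    # true sd of beta1_hat

z = np.empty(10_000)
for r in range(z.size):
    y = beta0 + beta1 * x + sigma * rng.normal(size=n)   # normal errors (MLR.6)
    z[r] = (np.linalg.lstsq(X, y, rcond=None)[0][1] - beta1) / sd_b1

print(z.mean(), z.std())    # approximately 0 and 1, as Normal(0, 1) predicts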
Proof of Theorem
The proof of the theorem is not that difficult, given the properties of normally distributed random variables (for these properties, see Appendix B in the textbook):
→ We sketch the proof of the first part of the theorem in 5 steps:
1. Each β̂j can be written as:

β̂j = βj + (Σ_{i=1}^n r̂ij ui) / (Σ_{i=1}^n r̂ij²) = βj + Σ_{i=1}^n wij ui,

where wij = r̂ij/SSRj, r̂ij is the i-th residual from the regression of xj on all the other independent variables, and SSRj is the sum of squared residuals from this regression.
- Since the wij depend only on the independent variables (because of the way the r̂ij are obtained), they can be treated as nonrandom.
⇒ So, β̂j is just a linear combination of the errors in the sample {ui : i = 1, 2, ..., n} (see the numerical sketch below).
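A numpy sketch of this representation (simulated data; the coefficient values are invented): the coefficient on x1 from the multiple regression equals the partialled-out formula, and equals β1 plus the weighted sum of the errors.

import numpy as np

rng = np.random.default_rng(2)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
u = rng.normal(size=n)
y = 1.0 + 0.8 * x1 - 0.5 * x2 + u

# r1: residuals from regressing x1 on the other regressors (constant and x2)
Z = np.column_stack([np.ones(n), x2])
r1 = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]

b1 = (r1 @ y) / (r1 @ r1)            # partialled-out OLS coefficient on x1
w = r1 / (r1 @ r1)                   # the weights w_i1 from the slide
assert np.isclose(b1, 0.8 + w @ u)   # beta1_hat = beta1 + sum_i w_i1 u_i

X = np.column_stack([np.ones(n), x1, x2])
assert np.isclose(b1, np.linalg.lstsq(X, y, rcond=None)[0][1])  # same as full OLS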
- The second part of the theorem follows immediately from the fact that when we standardize a normal random variable by subtracting off its mean and dividing by its standard deviation, we end up with a standard normal random variable.
- In Topic 5, we will show that the normality of the OLS estimators is still approximately true in large samples even without normality of the errors.
4.2 Hypothesis Testing: The t Test
Theorem (t Distribution for the Standardized Estimators)
Under the CLM assumptions MLR.1 through MLR.6:

(β̂j − βj) / se(β̂j) ∼ t_{n−k−1} = t_{df},

where k + 1 is the number of unknown population parameters and n − k − 1 the degrees of freedom (df).
- This result differs from the last theorem (Normal Sampling Distributions) in some notable respects:
  - The last theorem (Normal Sampling Distributions) showed that under the CLM assumptions, (β̂j − βj)/sd(β̂j) ∼ Normal(0, 1).
  - The t distribution in this new theorem (t Distribution for the Standardized Estimators) comes from the fact that the constant σ in sd(β̂j) has been replaced with the random variable σ̂.
    (The proof that this leads to a statistic which has a t distribution with n − k − 1 degrees of freedom is difficult and not very instructive, so we skip it here.)
  - Accounting for this additional estimation step, the statistic in the new theorem contains the term se(β̂j) (the standard error), which replaces sd(β̂j) from the last theorem.
- The null hypothesis we most often test is that a single parameter is zero:

H0: βj = 0

- Interpretation: once x1, x2, ..., xj−1, xj+1, ..., xk have been accounted for, xj has no effect on (the expected value of) y.
- Note: we cannot state the null hypothesis as "xj does have a partial effect on y", because this is true for any value of βj other than zero. Classical testing is suited for testing simple hypotheses like the null hypothesis above.
- The statistic we use to test this null hypothesis (against any alternative) is called "the" t statistic or "the" t ratio of β̂j and is defined as:

tβ̂j ≡ β̂j / se(β̂j)
- tβ̂j measures how many estimated standard deviations β̂j is away from zero.
⇒ Values of tβ̂j sufficiently far from zero will result in a rejection of H0.
- Why does tβ̂j have features that make it reasonable as a test statistic to detect βj ≠ 0?
  1. Since se(β̂j) is always positive, tβ̂j has the same sign as β̂j: if β̂j is positive, then so is tβ̂j, and if β̂j is negative, so is tβ̂j.
  2. For a given value of se(β̂j), a larger value of β̂j leads to larger values of tβ̂j. If β̂j becomes more negative, so does tβ̂j.
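A compact sketch of how these t ratios can be computed from scratch (the function name and interface are mine, a minimal illustration rather than the textbook's code):

import numpy as np

def ols_t_ratios(X, y):
    """OLS coefficients, standard errors, and t ratios for H0: beta_j = 0."""
    n, p = X.shape                       # p = k + 1 columns, including the constant
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)     # unbiased estimator of sigma^2
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, se, beta / se           # t ratio: estimated sd's away from zero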
- Note that in any interesting application, the point estimate β̂j will never be exactly zero, whether or not H0 is true. The relevant question is: how far is β̂j from zero?
- A sample value of β̂j very far from zero provides evidence against H0: βj = 0.
- However, we must recognize that there is sampling error in our estimate β̂j, so the size of β̂j must be weighed against its sampling error.
- Since the standard error of β̂j is an estimate of the standard deviation of β̂j, tβ̂j measures how many estimated standard deviations β̂j is away from zero.
- This is precisely what we do in testing whether the mean of a population is zero, using the standard t statistic from introductory statistics.
- Values of tβ̂j sufficiently far from zero will result in a rejection of H0.
- The precise rejection rule depends on the alternative hypothesis and the chosen significance level of the test.
Testing against One-Sided Alternatives: Case 1 (H1: βj > 0)
- Note that when we state the alternative this way, we are really saying that the null hypothesis is H0: βj ≤ 0:
  - For example, if βj is the coefficient on education in a wage regression, we only care about detecting that βj is different from zero when βj is actually positive.
- Recall from introductory statistics that the null value that is hardest to reject in favor of H1: βj > 0 is βj = 0. In other words, if we reject the null βj = 0, then we automatically reject βj < 0.
⇒ Therefore, it suffices to act as if we are testing H0: βj = 0 against H1: βj > 0, effectively ignoring βj < 0, and that is the approach we take here.
- Rejection rule: reject H0: βj = 0 in favor of H1: βj > 0 at the chosen significance level if the t statistic exceeds the critical value c from the t_{n−k−1} distribution:

tβ̂j > c
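A minimal decision-rule sketch (the significance level, degrees of freedom, and t value below are hypothetical):

from scipy import stats

alpha, df = 0.05, 28                 # hypothetical significance level and n - k - 1
c = stats.t.ppf(1 - alpha, df)       # one-sided critical value
t_stat = 2.41                        # hypothetical t ratio from an estimated model
print(f"c = {c:.3f}, reject H0: {t_stat > c}")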
Step-by-Step: How Do We Test for the Significance of a Coefficient?
- Compute the t statistic for the coefficient of interest:

tβ̂j = β̂j / se(β̂j)
- There is a pattern in the critical values (see a table of critical values of the t distribution):
  - As the significance level falls, the critical value increases, so we require a larger and larger value of tβ̂j in order to reject H0.
  - Thus, if H0 is rejected at, say, the 5% level, then it is automatically rejected at the 10% level as well.
  (It makes no sense to reject the null hypothesis at, say, the 5% level and then redo the test to determine the outcome at the 10% level.)
- Note that as the degrees of freedom in the t distribution get larger, the t distribution approaches the standard normal distribution:
  - E.g., when n − k − 1 = 120, the 5% critical value for the one-sided alternative is c = 1.658, compared with the standard normal value of 1.645. These are close enough for practical purposes.
⇒ So, for degrees of freedom greater than 120, one can use the standard normal critical values.
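The comparison is easy to reproduce with scipy:

from scipy import stats

# One-sided 5% critical values: t distribution with 120 df vs. standard normal
print(stats.t.ppf(0.95, 120))   # ~1.658
print(stats.norm.ppf(0.95))     # ~1.645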
Testing against One-Sided Alternatives: Case 2 (H1: βj < 0)
- The other one-sided alternative is that the parameter is less than zero, i.e.:

H1: βj < 0

- Rejection rule: reject H0: βj = 0 in favor of H1: βj < 0 at the chosen significance level if

tβ̂j < −c
- Illustrative example:
  - If the significance level is 5% and the degrees of freedom are 18, then c = 1.734.
  - H0: βj = 0 is therefore rejected in favor of H1: βj < 0 at the 5% level if tβ̂j < −1.734.
- It is important to remember that, to reject H0 against the negative alternative H1: βj < 0, we must get a negative t statistic:
  - A positive t ratio, no matter how large, provides no evidence in favor of H1: βj < 0.
- The rejection rule places the rejection region in the left tail of the t_{n−k−1} distribution, to the left of −c. [Figure not reproduced.]
Two-Sided Alternatives: Case 3 (H1: βj ≠ 0)
- The two-sided alternative is:

H1: βj ≠ 0

- Rejection rule: reject H0: βj = 0 in favor of H1: βj ≠ 0 at the chosen significance level if

|tβ̂j| > c,

where the critical value c puts half of the significance level in each tail of the t_{n−k−1} distribution (e.g., for a test at the 5% level, c is the 97.5th percentile).
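A minimal two-sided sketch (the numbers below are hypothetical):

from scipy import stats

alpha, df = 0.05, 40                   # hypothetical values
c = stats.t.ppf(1 - alpha / 2, df)     # alpha/2 probability in each tail
t_stat = -2.30                         # hypothetical t ratio
print(f"|t| = {abs(t_stat):.2f}, c = {c:.3f}, reject H0: {abs(t_stat) > c}")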
Testing Other Hypotheses about βj
- Sometimes we want to test whether βj is equal to some given constant other than zero. In this case, the null hypothesis is:

H0: βj = aj

- The appropriate t statistic is:

t = (β̂j − aj) / se(β̂j)

As before, t measures how many estimated standard deviations β̂j is away from the hypothesized value of βj. The usual t statistic is obtained when aj = 0.
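A sketch for a general null value (the estimate, standard error, aj, and df below are hypothetical):

from scipy import stats

beta_hat, se, a_j, df = 0.76, 0.10, 1.0, 200    # hypothetical values
t_stat = (beta_hat - a_j) / se                  # se's away from the hypothesized value
p_val = 2 * stats.t.sf(abs(t_stat), df)         # two-sided p-value
print(t_stat, p_val)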
Example 4.4: Campus Crime and Enrollment
- The model relates campus crime to student enrollment, log(crime) = β0 + β1 log(enroll) + u, and the hypothesis of interest is H0: β1 = 1 against H1: β1 > 1 (crime increases more than proportionally with enrollment).
  (See Appendix A for properties of the natural logarithm and exponential functions.)
- For given β0 and u = 0, this equation is graphed for β1 < 1, β1 = 1, and β1 > 1 (figure not reproduced here).
- A further example: an estimated wage equation, with standard errors in parentheses:

ŵage = −0.905 + 0.541 schooling,   n = 526, R² = 0.1648
        (0.685)   (0.053)
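As a check, the t ratio and two-sided p-value for schooling can be recomputed from the reported numbers (with k = 1 regressor, so df = 526 − 2 = 524):

from scipy import stats

beta_hat, se = 0.541, 0.053
df = 526 - 1 - 1                      # n - k - 1 with k = 1
t_stat = beta_hat / se                # roughly 10.2
p_val = 2 * stats.t.sf(t_stat, df)    # essentially zero
print(t_stat, p_val)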
Summary: Critical Values for a Large Number of Degrees of Freedom
- For large degrees of freedom, the standard normal critical values can be used:

Significance level:   10%     5%      1%
One-sided c:          1.282   1.645   2.326
Two-sided c:          1.645   1.960   2.576
Computing p-Values for t Tests
- Rather than testing at a prespecified significance level, it is more informative to ask: given the observed value of the t statistic, what is the smallest significance level at which the null hypothesis would be rejected? This level is the p-value.
- Equivalently, the p-value is the probability of observing a t statistic as extreme as we did if the null hypothesis is true; small p-values are evidence against H0.
- Note that, once the p-value has been computed, a classical test can be carried out at any desired level:
  - If α denotes the significance level of the test (in decimal form), then H0 is rejected if p-value < α; otherwise, H0 is not rejected at the 100·α% level.
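A sketch of the rule (the t value and df below are hypothetical):

from scipy import stats

alpha, df, t_stat = 0.05, 60, 1.89           # hypothetical values
p_val = 2 * stats.t.sf(abs(t_stat), df)      # two-sided p-value
print(f"p = {p_val:.4f}, reject at the 5% level: {p_val < alpha}")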
Two Caveats
4.3 Confidence Intervals
- Under the CLM assumptions, a confidence interval (CI) for the population parameter βj is easily constructed. A 95% CI is given by:

β̂j ± c · se(β̂j),

where c is the 97.5th percentile of the t_{n−k−1} distribution.
- The CI provides a duality with two-sided tests: H0: βj = aj is rejected against H1: βj ≠ aj at the 5% significance level if and only if aj lies outside the 95% confidence interval.
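A minimal sketch, reusing the schooling estimate reported earlier (df = 524 as before):

from scipy import stats

beta_hat, se, df = 0.541, 0.053, 524
c = stats.t.ppf(0.975, df)                   # 97.5th percentile of t_{n-k-1}
print(beta_hat - c * se, beta_hat + c * se)  # 95% confidence interval for beta_j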
4.4 Testing Hypotheses about a Single Linear Combination of the Parameters
- The previous two sections have shown how to use classical hypothesis testing (the t test) or confidence intervals to test hypotheses about a single parameter βj at a time.
- In applications, however, we often must test hypotheses involving more than one population parameter.
- We will cover two cases:
  1. In Section 4.4: testing a single restriction involving several parameters (a single hypothesis): a modified t test.
  2. In Section 4.5: testing several restrictions jointly (multiple hypotheses): the F test.
Example (Returns to education)
- Consider a simple model to compare the returns to education at (two-year) junior colleges (jc) and four-year colleges (univ):

log(wage) = β0 + β1 jc + β2 univ + β3 exper + u,

where jc (univ) is the number of years attending a two-year college (four-year college), and exper is months in the workforce. The population consists of working people with a high school degree.
- The hypothesis of interest is whether one year at a junior college is worth as much as one year at a university, i.e. H0: β1 = β2, tested against the alternative H1: β1 < β2.
- The only thing that makes testing the equality of two different parameters more difficult than testing about a single βj is obtaining the standard error in the denominator of the t statistic:
  - Obtaining the numerator, β̂1 − β̂2, is trivial once we have performed the OLS regression.
⇒ So, how do we compute the denominator of the t statistic, se(β̂1 − β̂2)?
- Note: se(β̂1 − β̂2) ≠ se(β̂1) − se(β̂2)!
- To compute se(β̂1 − β̂2), we first obtain the variance of the difference. Using the results on variances in Appendix B in the textbook, we have:

Var(β̂1 − β̂2) = Var(β̂1) + Var(β̂2) − 2 Cov(β̂1, β̂2)

- The standard deviation of β̂1 − β̂2 is just the square root of this, and since [se(β̂1)]² is an unbiased estimator of Var(β̂1), and similarly for [se(β̂2)]², we have:

se(β̂1 − β̂2) = { [se(β̂1)]² + [se(β̂2)]² − 2 s12 }^(1/2),

where s12 is an estimate of Cov(β̂1, β̂2).
- However, while this approach is feasible, it requires us to estimate the covariance between the two slope estimators, Cov(β̂1, β̂2).
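If the full estimated covariance matrix of the OLS estimators is available, the computation is direct. A minimal numpy sketch (the function and its interface are mine, not from the slides):

import numpy as np

def se_of_difference(X, y, j1, j2):
    """se(beta_j1_hat - beta_j2_hat) from the estimated covariance matrix."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)
    V = sigma2 * np.linalg.inv(X.T @ X)   # estimated Var-Cov matrix of beta_hat
    # Var(b1 - b2) = Var(b1) + Var(b2) - 2 Cov(b1, b2)
    return np.sqrt(V[j1, j1] + V[j2, j2] - 2 * V[j1, j2])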
- However, rather than trying to compute se(β̂1 − β̂2) from the above equation, it is much easier to estimate a different model that directly gives us the standard error of interest:
  - Define a new parameter as the difference between β1 and β2: θ1 = β1 − β2.
  - Then, we want to test:

H0: θ1 = 0 against H1: θ1 < 0

  - The t statistic from before, i.e. t = (β̂1 − β̂2)/se(β̂1 − β̂2), written in terms of θ̂1 is just t = θ̂1/se(θ̂1).
⇒ So, the challenge is now finding se(θ̂1). Since θ1 = β1 − β2 implies β1 = θ1 + β2, substituting into the model gives:

log(wage) = β0 + θ1 jc + β2 (jc + univ) + β3 exper + u

⇒ Regressing log(wage) on jc, totcoll = jc + univ, and exper therefore delivers θ̂1 as the coefficient on jc, together with its standard error se(θ̂1), directly from the standard OLS output.
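A small simulation sketch (all data-generating numbers are invented) confirming that the reparameterized regression reproduces θ̂1 = β̂1 − β̂2 exactly:

import numpy as np

rng = np.random.default_rng(3)
n = 1_000
jc, univ = rng.poisson(1, n), rng.poisson(2, n)
exper = rng.uniform(0, 240, n)
logwage = 1.5 + 0.07 * jc + 0.10 * univ + 0.002 * exper + rng.normal(0, 0.4, n)

# Original parameterization: separate coefficients on jc and univ
X1 = np.column_stack([np.ones(n), jc, univ, exper])
b = np.linalg.lstsq(X1, logwage, rcond=None)[0]

# Reparameterization: the coefficient on jc is now theta1 = beta1 - beta2
X2 = np.column_stack([np.ones(n), jc, jc + univ, exper])   # totcoll = jc + univ
theta = np.linalg.lstsq(X2, logwage, rcond=None)[0]

assert np.isclose(theta[1], b[1] - b[2])   # theta1_hat = beta1_hat - beta2_hat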
4.5 Testing Multiple Linear Restrictions: The F Test
Testing Exclusion Restrictions
- We begin with an example to illustrate why testing the significance of a group of variables can be useful:

Example (Major league baseball players' salaries)
- The following model explains major league baseball players' salaries:

log(salary) = β0 + β1 years + β2 gamesyr + β3 bavg + β4 hrunsyr + β5 rbisyr + u,

where salary is the 1993 total salary, years is years in the league, gamesyr is average games played per year, bavg is the career batting average, hrunsyr is home runs per year, and rbisyr is runs batted in per year.
- Suppose we want to test the null hypothesis that, once years in the league and games per year have been controlled for, the statistics measuring performance (bavg, hrunsyr, rbisyr) have no effect on salary, i.e.:

H0: β3 = 0, β4 = 0, β5 = 0

(I.e., essentially, H0 states that productivity as measured by baseball statistics has no effect on salary.)
- The F test is based on the sums of squared residuals (SSR) of the models:
- Intuition of the test: if relevant variables are dropped, we should see a substantial increase in the sum of squared residuals SSR.
- In our example: does the SSR increase significantly when we drop bavg, hrunsyr, and rbisyr, i.e., how large is SSRr − SSRur?
⇒ Thus, we want to reject the null hypothesis if the increase in the SSR in going from the unrestricted model to the restricted model is large.
- Because it is no more difficult, we will derive the F test directly for the general case:
- The unrestricted model with k independent variables (and hence k + 1 parameters) is:

y = β0 + β1x1 + ... + βkxk + u

- Suppose that we have q exclusion restrictions to test, so that the null hypothesis, e.g., states that the last q variables have zero coefficients:

H0: βk−q+1 = 0, ..., βk = 0

- The alternative hypothesis is simply that H0 is false (i.e. at least one of the parameters listed in H0 is different from zero).
- When we impose the restrictions under H0, we get the restricted model:

y = β0 + β1x1 + ... + βk−q xk−q + u
- The F statistic (or F ratio) measures the relative increase in the SSR when moving from the unrestricted to the restricted model. It is defined as:

F ≡ [(SSRr − SSRur)/q] / [SSRur/(n − k − 1)]

  - SSRr = the sum of squared residuals from the restricted model.
  - SSRur = the sum of squared residuals from the unrestricted model.
! The F statistic is always nonnegative (as SSRr can be no smaller than SSRur).
! The denominator of F is just the unbiased estimator of σ² = Var(u) in the unrestricted model.
- Numerator and denominator degrees of freedom:
  - Numerator degrees of freedom = dfr − dfur = q, i.e. the number of restrictions imposed in moving from the unrestricted to the restricted model (q independent variables are dropped).
    Why? The df in each case equals n minus the number of estimated parameters; n is identical, but the number of estimated parameters differs by q.
  - Denominator degrees of freedom = dfur = n − k − 1.
- To use the F statistic, we must know its sampling distribution under the null in order to choose critical values and rejection rules.
- It can be shown that, under H0 (and assuming the CLM assumptions hold), F is distributed as an F random variable with (q, n − k − 1) degrees of freedom. We write this as:

F ∼ F_{q,n−k−1}

- Why?
  - It can be shown that the F statistic defined above is actually the ratio of two independent chi-square random variables, each divided by its respective degrees of freedom, i.e. q and n − k − 1.
  - This is the definition of an F-distributed random variable (see Appendix B).
- This result allows us to use the tabulated F distribution to find critical values.
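Putting the pieces together, a minimal sketch of the whole test (the helper name and interface are mine):

import numpy as np
from scipy import stats

def f_test_exclusion(X_ur, X_r, y, q):
    """F statistic and p-value for q exclusion restrictions."""
    def ssr(X):
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        e = y - X @ b
        return e @ e
    n, p_ur = X_ur.shape                          # p_ur = k + 1
    df_ur = n - p_ur                              # n - k - 1
    F = ((ssr(X_r) - ssr(X_ur)) / q) / (ssr(X_ur) / df_ur)
    return F, stats.f.sf(F, q, df_ur)             # p-value from F_{q, n-k-1}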
Step-by-Step Approach to the F Test
1. Estimate the unrestricted model and obtain SSRur.
2. Estimate the restricted model (imposing the q restrictions of H0) and obtain SSRr.
3. Compute the F statistic: F = [(SSRr − SSRur)/q] / [SSRur/(n − k − 1)].
4. Choose a significance level and look up the critical value c from the F_{q,n−k−1} distribution.
5. Reject H0 in favor of H1 if F > c (equivalently, if the p-value is below the chosen significance level).
- We have seen how the F statistic can be used to test whether a group of variables should be included in a model.
- But what happens if we apply the F statistic to the case of testing the significance of a single independent variable?
  [e.g. H0: βk = 0 with q = 1, to test the single exclusion restriction that xk can be excluded from the model]
- It can be shown that the F statistic for testing exclusion of a single variable is equal to the square of the corresponding t statistic: F = t².
- Since t²_{n−k−1} has an F_{1,n−k−1} distribution, the two approaches lead to the same outcome, provided that the alternative hypothesis is two-sided.
- As t statistics are more flexible for testing a single hypothesis (they can be directly used to test against one-sided alternatives) and are easier to obtain than F statistics, there is no reason to use an F statistic to test hypotheses about a single parameter.
- Warning:
  - It is possible to group a bunch of insignificant variables with a significant variable and conclude that the entire set of variables is jointly insignificant; a joint F test can thus hide the significance of an individual variable.
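The equivalence of the two critical values is easy to verify numerically:

from scipy import stats

df = 60   # illustrative degrees of freedom
print(stats.t.ppf(0.975, df) ** 2)   # squared two-sided 5% t critical value (~4.00)
print(stats.f.ppf(0.95, 1, df))      # 5% critical value of F(1, df): the same number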
The R-Squared Form of the F Statistic
- Using the fact that SSR = SST(1 − R²) for both models (they share the same dependent variable and hence the same total sum of squares), the F statistic can also be written in terms of the R-squareds:

F = [(R²ur − R²r)/q] / [(1 − R²ur)/(n − k − 1)] = [(R²ur − R²r)/q] / [(1 − R²ur)/dfur]
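A one-line computation from the R-squareds (the values below are hypothetical):

# F statistic from the R-squareds of the two models (hypothetical values)
r2_ur, r2_r, q, df_ur = 0.63, 0.60, 3, 347
F = ((r2_ur - r2_r) / q) / ((1 - r2_ur) / df_ur)
print(F)   # compare with the critical value of F(q, df_ur)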
- Note:
  - The R² is reported with almost all regressions (the SSR is not), so it is easy to use the R²s from the unrestricted and restricted models to test for the exclusion of some variables.
  - In the numerator, the unrestricted R² comes first. In the SSR-based version of the F statistic, the SSR of the restricted model (SSRr) comes first.
Testing General Linear Restrictions
- Testing exclusion restrictions is by far the most important application of F statistics.
- Sometimes, however, the restrictions implied by a theory are more complicated than just excluding some independent variables.
- In such cases, it is still straightforward to use the F statistic for testing.

Example (Housing prices: actual and assessed)
- Suppose you want to test whether the assessed house price (assess) is a rational valuation. If it is, then a 1% change in assess should be associated with a 1% change in price, i.e. β1 = 1. In addition, lotsize, sqrft, and bdrms should not help to explain log(price), once the assessed value has been controlled for.
- The unrestricted model is:

y = β0 + β1x1 + β2x2 + β3x3 + β4x4 + u,

where y = log(price), x1 = log(assess), x2 = log(lotsize), x3 = log(sqrft), and x4 = bdrms.
- The various hypotheses together imply the null hypothesis:

H0: β1 = 1, β2 = 0, β3 = 0, β4 = 0

i.e. 4 restrictions, but only 3 of them are exclusion restrictions.
- The restricted model is therefore:

y = β0 + x1 + u

- In order to impose the restrictions, we estimate the following model:

y − 1·x1 = β0 + u

  - We first compute a new dependent variable, y − x1,
  - and then we regress it on a constant.
- We compute the F statistic as before:

F = [(SSRr − SSRur)/SSRur] · [(n − 5)/4]

- Note: we cannot use the R² form of the F statistic here:
  - Our regression for the restricted model now has a different dependent variable. Thus, the total sum of squares will be different, and we can no longer rewrite the test in the R² form.
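A sketch of the computation (the function is a minimal illustration under the setup above, not the textbook's code; note that regressing y − x1 on a constant alone makes the restricted residuals the deviations from the mean):

import numpy as np

def f_test_rational_assessment(y, x1, X_other):
    """F test of H0: beta1 = 1 and the other slopes = 0 (4 restrictions)."""
    n, q = y.size, 4
    # Unrestricted: y on a constant, x1, and the three other regressors
    X_ur = np.column_stack([np.ones(n), x1, X_other])
    b = np.linalg.lstsq(X_ur, y, rcond=None)[0]
    ssr_ur = np.sum((y - X_ur @ b) ** 2)
    # Restricted: y - x1 regressed on a constant only
    z = y - x1
    ssr_r = np.sum((z - z.mean()) ** 2)
    df_ur = n - X_ur.shape[1]                    # n - 5
    return ((ssr_r - ssr_ur) / q) / (ssr_ur / df_ur)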
4.6 Reporting Regression Results
- We can use stars to indicate the result of a simple t test on each coefficient.
- Possibly report additional relevant tests/statistics at the bottom of the table.
- Example table entry, with the standard error in parentheses and stars indicating significance (e.g. *** = significant at the 1% level):

Experience²   −0.001***
              (0.000)