
University of Groningen

Exam Introduction to Econometrics


Monday, June 20, 2013, 6:30 PM – 9:30 PM

1. 70 points
Consider a sample of 4744 working individuals, born in 1958 and living in Great Britain. We know
their hourly wage (wage) in 1991, the number of years of education (educ), the number of years of
experience (exper). We also know their mathematical ability (math) and their verbal ability (verbal)
in 1965. Let us define the variable ability = math + verbal, the variable lwage = log(wage) and the
variable educ2 = educ^2 (the square of educ).
Consider the following OLS regression:
lwagei = β1 + β2 educi + β3 educ2i + β4 experi + β5 abilityi + β6 mathi + β7 SWi + β8 SEi + β9 femalei + εi (1)

where femalei is a dummy equal to 1 if the person is female. In order to account for regional differences
in wages, we have also added two regional dummies to equation (1): the dummy variable SEi (SWi )
indicates that the respondent lives in the South East (South West) part of Great Britain1 . The results
of regression (1) are presented in Table 1 (cf. column 'equation 1').

(a) Provide a careful economic interpretation of the estimated coefficient on SWi .


(b) How large is the return to education for a worker with 10 years of education (educi = 10)? And
for a worker with 20 years of education?
(c) Use a t test to check the validity of the null hypothesis that mathematical ability and verbal ability
have the same effect on lwagei . Please mention all steps of the testing procedure (α = 0.05).
(d) Table 1 also presents the results of a regression model without the regional dummies (cf. column
'equation 2'). Check by means of an F test whether the regional dummies are jointly significant.
Please mention all steps of the testing procedure (α = 0.05).
(e) Now, we consider the possibility that the β coefficients differ across subgroups of males and females.
We have estimated the following model for both subgroups (cf. columns (equation male) and
(equation female) in Table 1):
lwagei = β1k +β2k educi +β3k educ2i +β4k experi +β5k abilityi +β6k mathi +β7k SWi +β8k SEi +εi , k = male, female
(2)
Test the null hypothesis that all slope coefficients of model (2) are the same across the
subgroups of "females" and "males". Notice that according to the null hypothesis the intercept
term β1 might differ across the two subgroups (α = 0.05).
(f) Up to now we have assumed that the error term is homoskedastic. In order to check the validity
of this assumption we have run the following auxiliary regression (see column ’aux. regression 1’
in Table 1):
e2i = γ1 + γ2 educi + γ3 educ2i + γ4 experi + γ5 abilityi + γ6 mathi + γ7 SWi + γ8 SEi + γ9 femalei + ui (3)

where ei is the residual of the estimated equation (1). Please carry out a test to show that the
error term of equation (1) is heteroskedastic (α = 0.05).
(g) In light of the results of the heteroskedasticity test above, we have to conclude that the results of
the Chow test which you obtained in part (1e) of this exercise, cannot be trusted. However, this
test can be carried out in an alternative way which takes into account the heteroskedastic nature
of the error term. Please indicate precisely how this alternative Chow test can be carried out.
Write down the regression model which you need to estimate in order to carry out this alternative
test.
(h) Now consider the following simplified version of regression equation (1)
lwagei = β1 + β2 educi + β4 experi + β7 SWi + β8 SEi + β9 femalei + ui (4)

Why might educ be endogenous?


1 Reference group consists of respondents living in the Northern part of Great Britain.
(i) We treat educi as an endogenous right hand side variable in equation (4). The reduced form
equation for education (the "first stage equation") is specified as follows:

educi = δ1 + δ2 experi + δ3 SWi + δ4 SEi + δ5 femalei + δ6 payrsedi + δ7 mayrsedi + υi (5)

where payrsedi and mayrsedi denote father’s and mother’s number of years of education respec-
tively. The parameters of equation (4) are estimated by means of 2SLS (cf. column ’2sls’ of Table
1). Table 1 also presents the estimation results of equation (5) (see column ’first stage reg’).
Under which conditions will the 2SLS estimator yield consistent estimators of the β parameters
in equation (4)? Do you think that these conditions are satisfied?
(j) Use the information presented in Table 1 to test the hypothesis that in equation (4) the right hand
side variable educi is exogenous (α = 0.05).
(k) Use the information presented in Table 1 to perform the Sargan test of overidentifying restriction(s)
(α = 0.05). What is the underlying null hypothesis of the Sargan test?

ANSWERS EXERCISE 1
(a) Keeping other factors constant, people living in the South West earn approximately 21.9 % (precise
calculation: (exp(0.219) − 1) · 100 = 24.48 %) more than the reference group, i.e. people who live in
the Northern part of Great Britain.
(b) Marginal effect at educ = 10: b_educ + 2 · b_educ2 · 10 = 0.3834481 + 2 · (−0.0097959) · 10 = 0.1875301
Marginal effect at educ = 20: b_educ + 2 · b_educ2 · 20 = 0.3834481 + 2 · (−0.0097959) · 20 = −0.0083879
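The arithmetic above can be sketched in a few lines; the coefficients are copied from Table 1, column 'equation 1', and the helper name is ours:

```python
# Marginal effect of education in the quadratic specification (1):
# d lwage / d educ = b_educ + 2 * b_educ2 * educ.
# Coefficient values taken from Table 1, column 'equation 1'.
b_educ = 0.3834481
b_educ2 = -0.0097959

def return_to_education(educ):
    """Marginal return to one extra year of education at a given level of educ."""
    return b_educ + 2 * b_educ2 * educ

print(return_to_education(10))  # roughly 0.188, i.e. about 18.8% per extra year
print(return_to_education(20))  # slightly negative at 20 years
```

Because the specification is quadratic, the return declines linearly in educ and turns negative just before 20 years.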
(c) Equation (1) can be rewritten as follows:

lwagei = β1 + β2 educi + β3 educ2i + β4 experi + β5 (mathi + verbali ) + β6 mathi + β7 SWi + β8 SEi + β9 femalei + εi (6)

So we want to test the following H0 : β6 = 0 (then verbali and mathi have the same effect, β5 , on
lwagei ).
Steps testing procedure
• Formulate H0 : β6 = 0, H1 : β6 ≠ 0
• α = 0.05
• Test statistic: t = b6 / se(b6 ). This test statistic follows a Student t distribution with n − K degrees of
freedom; n = 4744 and K = 9.
• Compute the value of the t statistic. Notice that t = 0.00372/0.00758 = 0.49076517
• Look up critical value (α = 0.05, n − K = 4735): 1.96
• Since |0.491| < 1.960 we do not reject the H0 . We find no evidence that mathematical ability
and verbal ability have different effects on the hourly wage rate.
(d) Steps F-test
• Formulate H0 and H1
H0 : β7 = 0, β8 = 0, H1 : H0 not true
• Test statistic

F (#r, n − K) = [(SSRr − SSRu )/#r] / [SSRu /(n − K)]

This test statistic follows an F distribution with #r numerator degrees of freedom and
n − K denominator degrees of freedom.
In this case, n = 4744, K = 9, #r = 2.
• Compute the value of the F statistic. Notice that SSRr = 670.794638, SSRu = 643.600006:

F (2, 4735) = [(670.794638 − 643.600006)/2] / [643.600006/4735] = 100.03619

• Look up critical value (α = 0.05, #r = 2, n − K = 4735): 3.000


• Since 100.03619 > 3.000, we reject the H0 : the regional dummies SE and SW are jointly
statistically significant at the 5 % level.
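The F computation above is plain arithmetic on the two reported SSRs; a minimal sketch (the helper name is ours, the numbers come from Table 1):

```python
# F test of joint significance of the regional dummies, using SSRs from
# Table 1 (restricted model: 'equation 2'; unrestricted model: 'equation 1').
def f_statistic(ssr_r, ssr_u, n_restrictions, df_resid):
    """F = ((SSR_r - SSR_u)/#r) / (SSR_u/(n - K))."""
    return ((ssr_r - ssr_u) / n_restrictions) / (ssr_u / df_resid)

F = f_statistic(670.794638, 643.600006, 2, 4744 - 9)
print(F)          # ~100.04
print(F > 3.000)  # exceeds the 5% critical value of F(2, inf): reject H0
```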
(e) Steps F test
• Formulate H0
H0 : βi^female = βi^male , i = 2, . . . , 8
Apart from the intercept term, the coefficients of the ”female equation” are the same as those
of the ”male equation”.
Number of restrictions imposed by H0 : #r = 7
• Computation of the F statistic

F (#r, n − K) = [(SSRr − SSRu )/#r] / [SSRu /(n − K)]

where K = number of regression parameters in the unrestricted model.


Unrestricted model (separate equations for female=1 and female=0)
Number of regression parameters in unrestricted model: 2 × 8 = 16
SSR unrestricted model: SSRu = SSRf emale + SSRmale = 323.880 + 315.191 = 639.071
SSR restricted model: SSRr = 643.600 (see column 'equation 1' of Table 1)
So the value of the F statistic equals:

F (7, 4744 − 16) = [(643.600 − 639.071)/7] / [639.071/(4744 − 16)] = 4.7866606

• Look up critical value: c = 2.010


• Since 4.7866606 > 2.010, we reject the H0 . We conclude that at least one of the slope coefficients
differs across the subgroups of males and females.
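The Chow computation is the same SSR-based F statistic, with the pooled regression as the restricted model; a sketch with the numbers from Table 1:

```python
# Chow test: pooled (restricted) regression vs. separate male/female regressions.
# SSRs taken from Table 1; 7 slope restrictions, K = 16 parameters unrestricted.
ssr_pooled = 643.600       # column 'equation 1' (intercept shift via female dummy)
ssr_u = 323.880 + 315.191  # SSR_female + SSR_male
n, K, r = 4744, 16, 7

F = ((ssr_pooled - ssr_u) / r) / (ssr_u / (n - K))
print(F)  # ~4.79, above the 5% critical value 2.010 of F(7, inf)
```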
(f) Steps heteroskedasticity test
• Formulate H0 (error term is homoskedastic.)

H0 : γ2 = γ3 = . . . = γ9 = 0 or H0 : E(ε2i |xi ) = σ 2 (constant variance)

Number of restrictions imposed by H0 : #r = 8
• Computation χ2 statistic

χ2 (8) = n · R2 = 4744 ∗ 0.007 = 33.208

• Look up critical value χ2 (8): c = 15.50731


• 33.208 > 15.50731, we reject the H0 . We conclude that the error term is heteroskedastic (the
variance of the error term varies with background characteristics).
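The same auxiliary-regression (n·R²) test can be illustrated end to end on simulated data; the data-generating process below is ours (the exam's NCDS data are not available here), with a single regressor so the statistic is compared with a χ²(1) critical value:

```python
import numpy as np

# Sketch of the auxiliary-regression heteroskedasticity test of (3) on
# simulated data; variable names only loosely mirror the exam's model.
rng = np.random.default_rng(0)
n = 4744
educ = rng.normal(12, 2, n)
# error standard deviation increases with educ -> heteroskedastic by design
y = 1.0 + 0.1 * educ + rng.normal(0, 0.1 + 0.05 * np.abs(educ), n)

X = np.column_stack([np.ones(n), educ])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b

# auxiliary regression of e^2 on the regressors; LM statistic = n * R^2
g, *_ = np.linalg.lstsq(X, e**2, rcond=None)
fitted = X @ g
r2 = 1 - np.sum((e**2 - fitted) ** 2) / np.sum((e**2 - e.mean()**0 * np.mean(e**2)) ** 2)
lm = n * r2
print(lm > 3.841459)  # 5% critical value of chi2(1): reject homoskedasticity
```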
(g) Estimate the following equation by means of OLS and compute the heteroskedasticity-robust
standard errors:
lwagei = β1 + β2 educi + β3 educ2i + β4 experi + β5 abilityi + β6 mathi + β7 SWi + β8 SEi + β9 femalei +
β10 femalei · educi + β11 femalei · educ2i + β12 femalei · experi + β13 femalei · abilityi +
β14 femalei · mathi + β15 femalei · SWi + β16 femalei · SEi + εi
(7)
Next test the following H0 : β10 = . . . = β16 = 0, or H0 : Rβ = 0, where the (7 × 16) matrix
R is defined as follows: R = (O, I7 ), where O is a (7 × 9) matrix containing zero elements and I7
is the identity matrix of order 7.
The associated test statistic (Wald statistic) is equal to (robust option in Stata):

W ≡ n (Rb)′ [R Avar̂(b) R′]^{−1} (Rb) →d χ2 (7) (8)

where Avar̂(b), the estimated heteroskedasticity-robust covariance matrix of √n(b − β), is equal
to

Avar̂(b) = Sxx^{−1} ( (1/n) Σ_{i=1}^n ei^2 xi xi′ ) Sxx^{−1} (9)

with Sxx = (1/n) Σ_{i=1}^n xi xi′.
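The sandwich formula in (9) is easy to compute directly; a minimal sketch on simulated data (the data-generating process and names are ours):

```python
import numpy as np

# White heteroskedasticity-robust covariance of the OLS estimator,
# Avar(b) = Sxx^{-1} (1/n sum e_i^2 x_i x_i') Sxx^{-1}, on toy data.
rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 0.5 + 2.0 * x + rng.normal(0, 1 + np.abs(x), n)  # heteroskedastic errors

b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b

Sxx = X.T @ X / n
meat = (X * (e**2)[:, None]).T @ X / n               # (1/n) sum e_i^2 x_i x_i'
avar = np.linalg.inv(Sxx) @ meat @ np.linalg.inv(Sxx)  # Avar of sqrt(n)(b - beta)
robust_se = np.sqrt(np.diag(avar) / n)               # robust standard errors
print(robust_se)
```

The Wald statistic of (8) then only requires picking out the interaction coefficients with R and inverting R·Avar̂(b)·R′.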

(h) Compared with model (1), model (4) omits the following regressors: educ2i , mathi and abilityi .
These three variables are obviously positively correlated with educi . Since ui = εi + β3 educ2i +
β5 abilityi + β6 mathi , educi is an endogenous rhs variable, i.e. correlated with the error term
ui of equation (4).
(i) Assumptions
3.1: Linearity Model (4) is indeed linear in the parameters. This assumption does not need
to be mentioned.
3.2’: random sample It is reasonable to assume that we have a random sample available. This
assumption does not need to be mentioned
3.4: rank condition (instrument relevance) The matrix E(xi zi0 ) should be of full column
rank where xi = (1, experi , SEi , SWi , f emalei , payrsedi , mayrsedi )0 and
zi = (1, educi , experi , SEi , SWi , f emalei )0 . This condition is likely to be satisfied
because the endogenous rhs variable is likely to be partially correlated with the ’excluded
instruments' payrsedi and mayrsedi . The results from the first stage regression suggest that the
two excluded instruments payrsedi and mayrsedi are strongly jointly significant: the F
statistic takes on a large value (F = 46.736). This number is larger than 10, which gives us
confidence that the instrument relevance condition is met.
3.3: Instrument exogeneity E(xi ui ) = 0. This assumption is definitely not satisfied.
The variables payrsedi and mayrsedi are definitely not orthogonal to ui = εi + β3 educ2i +
β5 abilityi + β6 mathi , because payrsedi and mayrsedi are certainly correlated with educ2i :
we have shown above that these two variables are partially correlated with educi . By
omitting the variable educ2i from the specification, it becomes impossible to find suitable instruments.
Notice also that the two excluded instruments are likely to be partially correlated with
mathi and abilityi .
(j) We just concluded that payrsedi and mayrsedi are not valid instruments. Strictly speaking it does
not make sense to carry out the exogeneity test and the Sargan test, but for the sake of the
exercise we carry out these tests anyway.
Exogeneity test: H0 : E(educi ui ) = 0
The Hausman-Wu test of exogeneity can be carried out by running the following auxiliary regression:

lwagei = β1 + β2 educi + β4 experi + β7 SWi + β8 SEi + β9 f emalei + θυ̂i + i (10)

where υ̂i denotes the residual of the first stage regression. The test statistic is equal to the t-
value of the estimated parameter associated with the first stage residual (see Table 1, column 'aux
regression 2').

−0.0372
tυ̂ = = −1.6986301
0.0219
Since | − 1.6986301| < 1.96 (α = 0.05), the H0 of exogeneity is not rejected. We do not find strong
evidence that the right hand side variable educi is endogenous.
(k) H0 : E(xi ui ) = 0, i.e. all instruments (the predetermined regressors and the excluded instruments)
should be orthogonal to the error term ui (xi = (1, experi , SEi , SWi , f emalei , payrsedi , mayrsedi )0 )
The Sargan test can be carried out by running an auxiliary regression of the second stage residual
ûlwage on all instruments (i.e. predetermined regressors plus the excluded instruments), see the last
column of table 1. The Sargan statistic is equal to

n × R2 = 4744 ∗ 0.0004 = 1.8976


where R2 is the R-squared of the regression mentioned above. The asymptotic distribution of this
test statistic is χ2 (K − L), where K the number of instruments and L the number of regressors
in equation (4). In our case K = 7 and L = 6 (K − L = 1). So the critical value (α = 0.05) is
equal to 3.84.
Since 1.8976 < 3.84, we do not reject H0 . Conclusion: the H0 that the instruments are orthogonal
to the error term of eq. (4) is not rejected.
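The whole IV pipeline of parts (i)–(k) — first stage, second stage, Hausman-Wu residual inclusion, and the Sargan n·R² statistic — can be sketched on simulated data. Everything below (instruments z1, z2, the coefficients, the data-generating process) is illustrative, not the exam's data:

```python
import numpy as np

# End-to-end sketch: 2SLS via two OLS stages, Hausman-Wu residual-inclusion
# test, and the Sargan n*R^2 statistic, all on simulated data.
rng = np.random.default_rng(2)
n = 4000
z1, z2 = rng.normal(size=n), rng.normal(size=n)          # excluded instruments
ability = rng.normal(size=n)                             # unobserved confounder
educ = 1 + z1 + 0.5 * z2 + ability + rng.normal(size=n)  # endogenous regressor
lwage = 0.1 * educ + 0.5 * ability + rng.normal(size=n)

def ols(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

ones = np.ones(n)
Z = np.column_stack([ones, z1, z2])   # instrument matrix

# first stage: regress educ on instruments, keep fitted values and residuals
d = ols(Z, educ)
educ_hat = Z @ d
v_hat = educ - educ_hat

# second stage: regress lwage on fitted educ -> 2SLS slope
b2sls = ols(np.column_stack([ones, educ_hat]), lwage)

# Hausman-Wu: add the first-stage residual to the structural equation;
# a significant coefficient on v_hat signals endogeneity of educ
bh = ols(np.column_stack([ones, educ, v_hat]), lwage)

# Sargan: regress the 2SLS residual on all instruments, statistic = n * R^2
u = lwage - np.column_stack([ones, educ]) @ b2sls
g = ols(Z, u)
r2 = 1 - np.sum((u - Z @ g) ** 2) / np.sum((u - u.mean()) ** 2)
sargan = n * r2
print(b2sls[1], bh[2], sargan)
```

Note that the Sargan residual is computed with the original (not fitted) endogenous regressor, exactly as in the answer above.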

2. 16 points

a) Let u = y − E(y|x). Show that E[g(x)u] = 0 for any function g(x).


HINT : E[g(x)u] = E(E[g(x)u|x])
b) Show that
var(y) = E[var(y|x)] + var (E(y|x)) (11)
HINT 1 : The conditional variance of a random scalar y given a random vector x is defined as

var(y|x) = σ 2 (x) = E[(y − E(y|x))2 |x] (12)

HINT 2 : Use the add and subtract strategy. y − E(y) can be rewritten as

y − E(y) = (y − E(y|x)) + (E(y|x) − E(y))

HINT 3 : Let u = y − E(y|x). Then E[g(x)u] = 0 for any function g(x) (see part a of this
exercise).

ANSWERS exercise 2

2a)

E[g(x)u] = E(E[g(x)u|x]) = E(g(x)E[u|x]) = E(g(x)E[y − E[y|x] | x]) =

E(g(x)(E[y|x] − E[y|x])) = E(g(x) · 0) = 0

2b) Let u = y − E(y|x) and g(x) = E(y|x) − E(y)

var(y) = E[(y − E(y))2 ]


= E[((y − E(y|x)) + (E(y|x) − E(y)))2 ]
= E[(u + g(x))2 ]
= E[u2 ] + E[g(x)2 ] + 2E[g(x)u]
= E[u2 ] + E[g(x)2 ]
= E[(y − E(y|x))2 ] + E[(E(y|x) − E(y))2 ]
= E{E[(y − E(y|x))2 |x]} + E[(E(y|x) − E(y))2 ]
= E[var(y|x)] + var(E(y|x))

In the step where the cross term 2E[g(x)u] is dropped we use hint 3:

E[g(x)u] (= E[(E(y|x) − E(y))(y − E(y|x))]) = 0

3. 14 points
Consider the simple regression model with one nonconstant explanatory variable z

y i = β1 + β2 z i + ε i (13)

and let xi be a binary (dummy) instrumental variable for z (i.e. xi can take on only two values 0 and
1). Rewriting model (13) yields:
yi = z̃i′ β + εi (14)

where z̃i = (1, zi )′ and β = (β1 , β2 )′ . Since the model is just identified, one can use the following IV
estimator to estimate β:
β̂ = ( Σ_{i=1}^n x̃i z̃i′ )^{−1} Σ_{i=1}^n x̃i yi (15)

where x̃i = (1, xi )′ and β̂ = (β̂1 , β̂2 )′ .
(a) Use equation (15) to show that

β̂2 = ( Σ_{i=1}^n (xi − x̄)(zi − z̄) )^{−1} Σ_{i=1}^n (xi − x̄)(yi − ȳ) (16)

ANSWER
Equation (15) implies that
( n                  Σ_{i=1}^n zi    ) ( β̂1 )   ( Σ_{i=1}^n yi    )
( Σ_{i=1}^n xi      Σ_{i=1}^n xi zi ) ( β̂2 ) = ( Σ_{i=1}^n xi yi )

or
β̂1 = ȳ − z̄ β̂2 (17a)

( Σ_{i=1}^n xi ) β̂1 + ( Σ_{i=1}^n xi zi ) β̂2 = Σ_{i=1}^n xi yi (17b)

Substitution of equation (17a) into (17b) and rearranging yields


( Σ_{i=1}^n xi (zi − z̄) ) β̂2 = Σ_{i=1}^n xi (yi − ȳ) (18)

Since Σ_{i=1}^n x̄(zi − z̄) = Σ_{i=1}^n x̄(yi − ȳ) = 0, equation (18) can be rewritten as follows:

( Σ_{i=1}^n (xi − x̄)(zi − z̄) ) β̂2 = Σ_{i=1}^n (xi − x̄)(yi − ȳ) (19)

or

β̂2 = ( Σ_{i=1}^n (xi − x̄)(zi − z̄) )^{−1} Σ_{i=1}^n (xi − x̄)(yi − ȳ) (20)

q.e.d.
(b) Use formula (16) to show that

β̂2 = (ȳ1 − ȳ0 ) / (z̄1 − z̄0 ) (21)

where ȳ1 and z̄1 are the sample averages of yi and zi over the part of the sample with xi = 1, and
where ȳ0 and z̄0 are the sample averages of yi and zi over the part of the sample with xi = 0. This
estimator, known as the grouping estimator, was first suggested by Wald (1940).

6
ANSWER Let the set I1 = {i | xi = 1} contain n1 elements and the set I0 = {i | xi = 0} contain n0
elements. Given the binary nature of the variable xi , equation (18) can be rewritten as follows:

β̂2 = ( Σ_{i=1}^n xi (zi − z̄) )^{−1} ( Σ_{i=1}^n xi (yi − ȳ) )
   = ( Σ_{i∈I1} zi − n1 z̄ )^{−1} ( Σ_{i∈I1} yi − n1 ȳ )
   = ( n1 (z̄1 − z̄) )^{−1} ( n1 (ȳ1 − ȳ) )
   = (z̄1 − z̄)^{−1} (ȳ1 − ȳ)
   = ( z̄1 − [ (n1/n) z̄1 + (1 − n1/n) z̄0 ] )^{−1} ( ȳ1 − [ (n1/n) ȳ1 + (1 − n1/n) ȳ0 ] )
   = ( (1 − n1/n) (z̄1 − z̄0 ) )^{−1} ( (1 − n1/n) (ȳ1 − ȳ0 ) )
   = (z̄1 − z̄0 )^{−1} (ȳ1 − ȳ0 )
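The algebraic identity above is easy to confirm numerically: on any sample with a binary instrument, the covariance formula (16) and the grouping formula (21) give the same number. The data below are simulated for illustration:

```python
import numpy as np

# Check that the IV estimator with a binary instrument reduces to the
# grouping (Wald) estimator (ybar1 - ybar0) / (zbar1 - zbar0).
rng = np.random.default_rng(3)
n = 500
x = (rng.random(n) < 0.4).astype(float)  # binary instrument
z = 1.0 + 2.0 * x + rng.normal(size=n)   # regressor
y = 3.0 + 1.5 * z + rng.normal(size=n)

# IV formula (16), covariance form
b2_iv = (np.sum((x - x.mean()) * (y - y.mean()))
         / np.sum((x - x.mean()) * (z - z.mean())))

# grouping estimator (21)
b2_wald = ((y[x == 1].mean() - y[x == 0].mean())
           / (z[x == 1].mean() - z[x == 0].mean()))

print(b2_iv, b2_wald)  # numerically identical
```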

Table 1: Exercise 1, regression results (standard errors in parentheses; empty cells: variable not included)

VARIABLES    | equation 1         | equation 2        | equation male     | equation female    | aux. regression 1   | 2sls              | first stage reg.  | aux. regression 2 | aux. regression 3
Dep. variable| lwage              | lwage             | lwage             | lwage              | e^2_lwage *         | lwage             | educ              | lwage             | û_lwage ***
educ         | 0.383 (0.0332)     | 0.398 (0.0338)    | 0.391 (0.0431)    | 0.353 (0.0517)     | 0.0540 (0.0250)     | 0.162 (0.0220)    |                   | 0.162 (0.0217)    |
educ2        | -0.00980 (0.00118) | -0.0101 (0.00121) | -0.0104 (0.00153) | -0.00830 (0.00185) | -0.00186 (0.000894) |                   |                   |                   |
exper        | 0.0507 (0.00207)   | 0.0520 (0.00211)  | 0.0505 (0.00478)  | 0.0492 (0.00241)   | 0.000472 (0.00156)  | 0.0680 (0.00821)  | -0.360 (0.00855)  | 0.0680 (0.00809)  | -1.96e-05 (0.00182)
ability      | 0.0271 (0.00467)   | 0.0257 (0.00477)  | 0.0253 (0.00636)  | 0.0294 (0.00686)   | 0.00438 (0.00353)   |                   |                   |                   |
math         | 0.00372 (0.00758)  | 0.00502 (0.00773) | 0.0164 (0.0104)   | -0.00998 (0.0110)  | -0.00664 (0.00572)  |                   |                   |                   |
SE           | 0.155 (0.0130)     |                   | 0.197 (0.0173)    | 0.108 (0.0196)     | 0.0415 (0.00984)    | 0.141 (0.0164)    | 0.405 (0.0634)    | 0.141 (0.0162)    | 3.64e-05 (0.0135)
SW           | 0.219 (0.0236)     |                   | 0.226 (0.0315)    | 0.213 (0.0356)     | 0.00760 (0.0179)    | 0.170 (0.0346)    | 1.133 (0.114)     | 0.170 (0.0341)    | 0.000634 (0.0243)
female       | -0.256 (0.0121)    | -0.255 (0.0123)   |                   |                    | 0.0165 (0.00913)    | -0.203 (0.0230)   | -0.882 (0.0563)   | -0.203 (0.0227)   | 0.000400 (0.0120)
payrsed      |                    |                   |                   |                    |                     |                   | 0.0556 (0.0115)   |                   | 0.00288 (0.00245)
mayrsed      |                    |                   |                   |                    |                     |                   | 0.00604 (0.0123)  |                   | -0.00355 (0.00262)
υ̂ **         |                    |                   |                   |                    |                     |                   |                   | -0.0372 (0.0219)  |
Constant     | -2.130 (0.225)     | -2.233 (0.229)    | -2.156 (0.311)    | -2.185 (0.346)     | -0.269 (0.170)      | -1.005 (0.389)    | 17.06 (0.149)     | -1.005 (0.383)    | 0.00618 (0.0317)
Observations | 4,744              | 4,744             | 2,490             | 2,254              | 4,744               | 4,744             | 4,744             | 4,744             | 4,744
R2           | 0.439              | 0.415             | 0.289             | 0.398              | 0.007               | 0.393             | 0.307             | 0.412             | 0.000
SSR          | 643.600            | 670.795           | 315.191           | 323.880            | 367.285             | 695.766           | 15366.676         | 674.546           | 695.495
F test exclusion restrictions |   |                   |                   |                    |                     |                   | 46.736            |                   |

Standard errors in parentheses
* e_lwage denotes the residual of regression (1) (cf. column 'equation 1')
** υ̂ denotes the residual of the first stage regression (cf. column 'first stage reg.')
*** û_lwage denotes the 2sls residual of equation (4) (cf. column '2sls')
Table: Critical values of the Chi-Square distribution
Significance level
Degrees of freedom   0.10   0.05   0.01
1 2,705544 3,841459 6,634897
2 4,60517 5,991465 9,21034
3 6,251389 7,814728 11,34487
4 7,77944 9,487729 13,2767
5 9,236357 11,0705 15,08627
6 10,64464 12,59159 16,81189
7 12,01704 14,06714 18,47531
8 13,36157 15,50731 20,09023
9 14,68366 16,91898 21,66599
10 15,98718 18,30704 23,20925
11 17,27501 19,67514 24,72497
12 18,54935 21,02607 26,21697
13 19,81193 22,36203 27,68825
14 21,06414 23,68479 29,14124
15 22,30713 24,99579 30,57791
16 23,54183 26,29623 31,99993
17 24,76904 27,58711 33,40866
18 25,98942 28,8693 34,80531
19 27,20357 30,14353 36,19087
20 28,41198 31,41043 37,56623
21 29,61509 32,67057 38,93217
22 30,81328 33,92444 40,28936
23 32,0069 35,17246 41,6384
24 33,19624 36,41503 42,97982
25 34,38159 37,65248 44,31411
26 35,56317 38,88514 45,64168
27 36,74122 40,11327 46,96294
28 37,91592 41,33714 48,27824
29 39,08747 42,55697 49,58788
30 40,25602 43,77297 50,89218
31 41,42174 44,98534 52,19139
32 42,58474 46,19426 53,48577
33 43,74518 47,39988 54,77554
34 44,90316 48,60237 56,06091
35 46,05879 49,80185 57,34207
36 47,21217 50,99846 58,61921
37 48,36341 52,19232 59,8925
38 49,51258 53,38354 61,16209
39 50,65977 54,57223 62,42812
40 51,80506 55,75848 63,69074
41 52,94851 56,94239 64,95007
42 54,0902 58,12404 66,20624
43 55,23019 59,30351 67,45935
44 56,36854 60,48089 68,70951
45 57,50531 61,65623 69,95683
46 58,64054 62,82962 71,2014
47 59,77429 64,00111 72,44331
48 60,90661 65,17077 73,68264
49 62,03754 66,33865 74,91947
50 63,16712 67,50481 76,15389
100 118,498 124,3421 135,8067
Table: Critical values of the t-distribution
Significance level
1-tailed 0,1 0,05 0,025 0,005
2-tailed 0,2 0,1 0,05 0,01
degrees of freedom
1 3,078 6,314 12,706 63,657
2 1,886 2,920 4,303 9,925
3 1,638 2,353 3,182 5,841
4 1,533 2,132 2,776 4,604
5 1,476 2,015 2,571 4,032
6 1,440 1,943 2,447 3,707
7 1,415 1,895 2,365 3,499
8 1,397 1,860 2,306 3,355
9 1,383 1,833 2,262 3,250
10 1,372 1,812 2,228 3,169
11 1,363 1,796 2,201 3,106
12 1,356 1,782 2,179 3,055
13 1,350 1,771 2,160 3,012
14 1,345 1,761 2,145 2,977
15 1,341 1,753 2,131 2,947
16 1,337 1,746 2,120 2,921
17 1,333 1,740 2,110 2,898
18 1,330 1,734 2,101 2,878
19 1,328 1,729 2,093 2,861
20 1,325 1,725 2,086 2,845
21 1,323 1,721 2,080 2,831
22 1,321 1,717 2,074 2,819
23 1,319 1,714 2,069 2,807
24 1,318 1,711 2,064 2,797
25 1,316 1,708 2,060 2,787
26 1,315 1,706 2,056 2,779
27 1,314 1,703 2,052 2,771
28 1,313 1,701 2,048 2,763
29 1,311 1,699 2,045 2,756
30 1,310 1,697 2,042 2,750
40 1,303 1,684 2,021 2,704
60 1,296 1,671 2,000 2,660
90 1,291 1,662 1,987 2,632
120 1,289 1,658 1,980 2,617
∞ 1,282 1,645 1,960 2,576


Table: 5% Critical values of the F distribution
Numerator degrees of freedom
1 2 3 4 5 6 7 8 9 10
Denominator degrees of freedom
10 4,965 4,103 3,708 3,478 3,326 3,217 3,135 3,072 3,020 2,978
11 4,844 3,982 3,587 3,357 3,204 3,095 3,012 2,948 2,896 2,854
12 4,747 3,885 3,490 3,259 3,106 2,996 2,913 2,849 2,796 2,753
13 4,667 3,806 3,411 3,179 3,025 2,915 2,832 2,767 2,714 2,671
14 4,600 3,739 3,344 3,112 2,958 2,848 2,764 2,699 2,646 2,602
15 4,543 3,682 3,287 3,056 2,901 2,790 2,707 2,641 2,588 2,544
16 4,494 3,634 3,239 3,007 2,852 2,741 2,657 2,591 2,538 2,494
17 4,451 3,592 3,197 2,965 2,810 2,699 2,614 2,548 2,494 2,450
18 4,414 3,555 3,160 2,928 2,773 2,661 2,577 2,510 2,456 2,412
19 4,381 3,522 3,127 2,895 2,740 2,628 2,544 2,477 2,423 2,378
20 4,351 3,493 3,098 2,866 2,711 2,599 2,514 2,447 2,393 2,348
21 4,325 3,467 3,072 2,840 2,685 2,573 2,488 2,420 2,366 2,321
22 4,301 3,443 3,049 2,817 2,661 2,549 2,464 2,397 2,342 2,297
23 4,279 3,422 3,028 2,796 2,640 2,528 2,442 2,375 2,320 2,275
24 4,260 3,403 3,009 2,776 2,621 2,508 2,423 2,355 2,300 2,255
25 4,242 3,385 2,991 2,759 2,603 2,490 2,405 2,337 2,282 2,236
26 4,225 3,369 2,975 2,743 2,587 2,474 2,388 2,321 2,265 2,220
27 4,210 3,354 2,960 2,728 2,572 2,459 2,373 2,305 2,250 2,204
28 4,196 3,340 2,947 2,714 2,558 2,445 2,359 2,291 2,236 2,190
29 4,183 3,328 2,934 2,701 2,545 2,432 2,346 2,278 2,223 2,177
30 4,171 3,316 2,922 2,690 2,534 2,421 2,334 2,266 2,211 2,165
40 4,085 3,232 2,839 2,606 2,449 2,336 2,249 2,180 2,124 2,077
60 4,001 3,150 2,758 2,525 2,368 2,254 2,167 2,097 2,040 1,993
90 3,947 3,098 2,706 2,473 2,316 2,201 2,113 2,043 1,986 1,938
120 3,920 3,072 2,680 2,447 2,290 2,175 2,087 2,016 1,959 1,910
∞ 3,840 3,000 2,600 2,370 2,210 2,100 2,010 1,940 1,880 1,830
