Panel Data Models


Econometrics

Panel Data Models


Outline
• Why Use Panel Data Models

• The Fixed Effects and Random Effects Models

• Extensions and Applications


• Difference-in-differences
Why Use Panel Data Models
Why Use Panel Data Models
• You are following multiple observations over a period – panel data has space
(𝑖) and time (𝑡) dimensions
• You could just treat these as “pooled data” – ignoring the space and time dimensions and
treating each 𝑖 × 𝑡 as an “observation” that just happened to be 𝑖 at time 𝑡
• So, why use panel data?
• With cross-sectional data, it can be difficult to establish causality
• Everything observed at the same time → reverse causality problems
• Treatment effects? Hard to establish if you only observe post-treatment period, and do not
observe baseline
• Explicitly control for heterogeneity – different people, different lives; different years,
different times
• Better suited to capture dynamics of change – baseline vs. treatment
• Increase observations and degrees of freedom – greater consistency and precision of
estimates.
The Fixed Effects
and Random Effects
Models
The Fixed Effects and Random Effects Models
• Will OLS work?
• Well, yes… but, it depends

• Setting up
• First, know what type of panel you’ll be dealing with – because this can affect how
you estimate your equation
• Balanced panel (each 𝑖 has the same number of 𝑡) vs. Unbalanced panel (𝑖’s have different 𝑡)
• Short panel – where 𝑁 > 𝑇 vs. Long panel – where 𝑁 < 𝑇
• Why does this matter? Aside from different estimation, longer 𝑇 would mean being subject to time-
series concepts (i.e., stationarity, cointegration, etc.)
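The balanced/unbalanced and short/long distinctions above can be read straight off the (𝑖, 𝑡) index of a dataset. A minimal Python sketch (with a made-up index, not tied to any particular dataset) that classifies a panel along both dimensions:

```python
# Sketch (hypothetical data): classify a panel from its (i, t) index pairs.
from collections import defaultdict

def classify_panel(index_pairs):
    """index_pairs: iterable of (entity_id, period) tuples."""
    periods_per_entity = defaultdict(set)
    for i, t in index_pairs:
        periods_per_entity[i].add(t)
    counts = {len(ts) for ts in periods_per_entity.values()}
    balanced = (len(counts) == 1)          # every i observed for the same number of t
    N = len(periods_per_entity)            # cross-section dimension
    T = max(counts)                        # (longest) time dimension
    shape = "short (N > T)" if N > T else "long (N < T)" if N < T else "square (N = T)"
    return balanced, N, T, shape

# Entity 3 is missing 2002, so the panel is unbalanced.
pairs = [(i, t) for i in (1, 2, 3) for t in (2000, 2001, 2002) if not (i == 3 and t == 2002)]
print(classify_panel(pairs))  # → (False, 3, 3, 'square (N = T)')
```

Checking this before estimating matters because, as the slide notes, unbalanced panels and long panels are handled differently.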

• At macro levels, a panel dataset is relatively simple to imagine


• Check out the World Bank, Penn World Tables, Philippine Statistical Yearbook databases
The Fixed Effects and Random Effects Models
• Setting up
• At a micro level, I can note two sub-classifications

• Real panel data – “longitudinal data” which follows EXACTLY the same subjects over
time (e.g., Panel Study of Income Dynamics/PSID in the US, Indonesian Family Life
Survey/IFLS, Household Income and Labor Dynamics in Australia/HILDA Survey)

• “False” panel data – though “pseudo” panel data may be a better term


• Repeated cross-section – not necessarily the same set of subjects followed over time (e.g., PSA’s
FIES and LFS are not panel data – while some households are constants across rounds, they are
untraceable, different rounds typically survey a different set of people)
• Pseudo panel – tracks a cohort (i.e., a stable group of individuals), rather than individuals
themselves, over time. The criteria for the formation of cohorts needs to be observable for all, and
must be constant over time (e.g., year of birth) (see Guillerm, 2017). – although at times you can
create cohorts based on multiple criteria (year of birth x family size x educational attainment – a
bit debatable but may work for samples where the latter two are relatively time-invariant).
The Fixed Effects and Random Effects Models
• Setting up

• Can OLS run this?


• If it were a simple panel… Yes!
• Say, you’re using two rounds of the LFS – where a large number of observations are sampled over
time.
• These are not the same set of households – so it’s not “panel” data in the strict sense
• Your data is independently but not identically distributed – meaning there was no selection into
sampling (correlation) over time, but the different times mean that the two samples are
structurally different from each other
• Can you estimate this properly? Yes! OLS is still consistent if you account for the differences in
time… use time fixed effects or “era effects” – a “time dummy”
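The “era effects” idea can be made concrete with arithmetic: in a regression with only a time dummy as regressor, OLS collapses to a difference in period means. A Python sketch with made-up wage samples (hypothetical numbers, two LFS-style rounds of different people):

```python
# Sketch (made-up numbers): a "time dummy" in a pooled repeated cross-section.
# With only a period dummy as regressor, OLS reduces to a difference in period means:
# the intercept is the round-1 mean, and the dummy coefficient is the round-2 shift.

def era_effect(y_round1, y_round2):
    mean1 = sum(y_round1) / len(y_round1)
    mean2 = sum(y_round2) / len(y_round2)
    return mean1, mean2 - mean1   # (intercept, time-dummy coefficient)

wages_2019 = [10.0, 12.0, 11.0]   # hypothetical samples, different people each round
wages_2022 = [13.0, 15.0, 14.0]
print(era_effect(wages_2019, wages_2022))  # → (11.0, 3.0)
```

The dummy coefficient (3.0 here) absorbs whatever is structurally different about the later round, which is exactly why pooled OLS stays consistent once the time fixed effect is included.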
The Fixed Effects and Random Effects Models
• Setting up
• Panel data stacking
• Sort the data by 𝑖 then 𝑡 – so for each 𝑖, arrange observations for 𝑡 = 1, …, 𝑇, ∀ 𝑖 = 1, …, 𝑁
• Consider 𝑁 = 10 countries, 𝑇 = 2000, …, 2010:

  𝒊    𝒕      𝒀𝒊𝒕        𝑿𝒊𝒕
  1    2000   𝑌1,2000    𝑋1,2000
  1    2001   𝑌1,2001    𝑋1,2001
  1    ⋮      ⋮          ⋮
  1    2010   𝑌1,2010    𝑋1,2010
  2    2000   𝑌2,2000    𝑋2,2000
  2    2001   𝑌2,2001    𝑋2,2001
  2    ⋮      ⋮          ⋮
  2    2010   𝑌2,2010    𝑋2,2010
  ⋮    ⋮      ⋮          ⋮
  10   2000   𝑌10,2000   𝑋10,2000
  10   ⋮      ⋮          ⋮
  10   2010   𝑌10,2010   𝑋10,2010

• In Stata
• If you know what your 𝑖 and 𝑡 are (usually 𝑖 is a firm or country, 𝑡 is a year), you can use the 𝑠𝑜𝑟𝑡 command to sort first by 𝑖, then by 𝑡: 𝑠𝑜𝑟𝑡 𝑖 𝑡
• Then declare that you are using panel data using 𝑥𝑡𝑠𝑒𝑡 or 𝑡𝑠𝑠𝑒𝑡: 𝑥𝑡𝑠𝑒𝑡 𝑖 𝑡 or 𝑡𝑠𝑠𝑒𝑡 𝑖 𝑡
• The time series dimension is often a numerical variable, but be careful if you’re working with higher-frequency time series such as quarterly or monthly data. There’s a proper way of specifying this in your dataset – either as 2010:1, …, 2010:4 for quarterly, or sometimes 2010𝑞1, …, 2010𝑞4
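The stacking itself is just a two-key sort. A Python sketch (toy rows, mirroring what Stata’s 𝑠𝑜𝑟𝑡 𝑖 𝑡 does):

```python
# Sketch (toy rows): "stacking" panel data — sort observations by entity i, then period t.

rows = [
    {"i": 2, "t": 2001, "Y": 4.0}, {"i": 1, "t": 2001, "Y": 2.0},
    {"i": 2, "t": 2000, "Y": 3.0}, {"i": 1, "t": 2000, "Y": 1.0},
]
stacked = sorted(rows, key=lambda r: (r["i"], r["t"]))   # analogue of Stata's: sort i t
print([(r["i"], r["t"]) for r in stacked])  # → [(1, 2000), (1, 2001), (2, 2000), (2, 2001)]
```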
The Fixed Effects and Random Effects Models
• Fixed Effects Model – Least-Squares Dummy Variable (LSDV) Model
• Allows for heterogeneity among subjects by allowing each entity or time period its own
intercept value.
• Why? Because each entity or time period can be characterized by idiosyncratic qualities or “uniqueness”
not captured by the regressors in the model.
• Say, you’re talking about firms – each firm has its own managerial style or board structure – time-
invariant characteristics that define it
• Or years – each year is characterized by events and growth/change in aggregate demand that affects all
firms
• This model directly controls for these by allowing for differential intercepts

• Take the case where structural differences are attributed to different entities 𝑖
• 𝑌𝑖𝑡 = 𝛽0𝑖 + 𝛽1 𝑋𝑖𝑡 + 𝑢𝑖𝑡 ∀ 𝑖 = 1, … , 𝑁 and 𝑡 = 1, … , 𝑇
• Note the differential intercepts – each 𝑖 has its own intercept, representing a distinct average 𝑌 for a
particular 𝑖 – “on average, 𝑌 for this 𝑖 is different from other 𝑗 ≠ 𝑖, and the manner this occurs is fixed for
each 𝑖” – hence “fixed effects”
• These differential intercepts represent time-invariant characteristics of each 𝑖
• Can also be written as 𝑌𝑖𝑡 = 𝛽0 + 𝛽1 𝑋𝑖𝑡 + 𝛿𝑖 + 𝑢𝑖𝑡 where 𝛿𝑖 are the fixed effects per 𝑖, ∀ 𝑖 = 1, …, 𝑁 − 1,
or 𝑌𝑖𝑡 = 𝛽0 + 𝛽1 𝑋𝑖𝑡 + ∑𝑖=1,…,𝑁−1 𝛿𝑖 𝐷𝑖 + 𝑢𝑖𝑡
• Note that we only add 𝑁 − 1 dummy variables (differential intercepts) in this notation, to avoid the dummy variable trap.
• Sometimes called LSDV M1
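The “each 𝑖 has its own average 𝑌” reading of the fixed effects can be verified with arithmetic. A Python sketch (toy data, and deliberately omitting regressors – with an 𝑋 in the model the intercepts are no longer raw means): with no 𝑋, 𝛽0 is the base entity’s mean and each 𝛿𝑖 is that entity’s deviation from it.

```python
# Sketch (toy data): with no X, the LSDV intercepts are just entity means —
# beta_0 is the base entity's mean and each delta_i is that entity's deviation from it.

def lsdv_intercepts(panel):
    """panel: dict mapping entity id -> list of Y_it; the first key is the base entity."""
    entities = list(panel)
    base = sum(panel[entities[0]]) / len(panel[entities[0]])
    deltas = {i: sum(ys) / len(ys) - base for i, ys in panel.items() if i != entities[0]}
    return base, deltas

Y = {1: [4.0, 6.0], 2: [9.0, 11.0], 3: [1.0, 3.0]}
print(lsdv_intercepts(Y))  # → (5.0, {2: 5.0, 3: -3.0})
```

Entity 2 sits 5 units above entity 1 on average, entity 3 sits 3 units below – exactly what the differential intercepts capture.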
The Fixed Effects and Random Effects Models
• Fixed Effects Model – Least-Squares Dummy Variable (LSDV) Model
• The effect of controlling for fixed effects was illustrated with a figure on the original slides (not reproduced here)
• This can also be done for a time-varying intercept – if we assume structural
differences occur across time and are experienced by all entities (LSDV M2)
• 𝑌𝑖𝑡 = 𝛽0𝑡 + 𝛽1 𝑋𝑖𝑡 + 𝑢𝑖𝑡 ∀ 𝑖 = 1, … , 𝑁 and 𝑡 = 1, … , 𝑇
• 𝑌𝑖𝑡 = 𝛽0 + 𝛽1 𝑋𝑖𝑡 + 𝜏𝑡 + 𝑢𝑖𝑡 where 𝜏𝑡 are the fixed effects per 𝑡, ∀ 𝑡 = 1, … , 𝑇 − 1
• 𝑌𝑖𝑡 = 𝛽0 + 𝛽1 𝑋𝑖𝑡 + ∑𝑡=1,…,𝑇−1 𝜏𝑡 𝐷𝑡 + 𝑢𝑖𝑡

• Can also be done for both entity-varying and time-varying intercept – this is called
a two-way fixed effects estimator (LSDV M3)
• 𝑌𝑖𝑡 = 𝛽0𝑖𝑡 + 𝛽1 𝑋𝑖𝑡 + 𝑢𝑖𝑡 ∀ 𝑖 = 1, … , 𝑁 and 𝑡 = 1, … , 𝑇
• 𝑌𝑖𝑡 = 𝛽0 + 𝛽1 𝑋𝑖𝑡 + 𝛿𝑖 + 𝜏𝑡 + 𝑢𝑖𝑡 where 𝛿𝑖 , 𝜏𝑡 are the fixed effects per 𝑖 and 𝑡, ∀ 𝑖 = 1, …, 𝑁 − 1 and 𝑡 = 1, …, 𝑇 − 1
• 𝑌𝑖𝑡 = 𝛽0 + 𝛽1 𝑋𝑖𝑡 + ∑𝑖=1,…,𝑁−1 𝛿𝑖 𝐷𝑖 + ∑𝑡=1,…,𝑇−1 𝜏𝑡 𝐷𝑡 + 𝑢𝑖𝑡

• In Stata – regress using OLS with factor-variable notation: add 𝑖.𝑣𝑎𝑟 for the dummies, or 𝑖𝑏#.𝑣𝑎𝑟 to set the base category, where # is a value of 𝑣𝑎𝑟
The Fixed Effects and Random Effects Models
• Fixed Effects Model – Least-Squares Dummy Variable (LSDV) Model
• We can test whether the inclusion of differential intercepts is significant by using the
Wald test for linear restrictions
• 𝐹 = [(𝑅𝑆𝑆𝑅 − 𝑅𝑆𝑆𝑈𝑅) ⁄ # 𝑟𝑒𝑠𝑡𝑟𝑖𝑐𝑡𝑖𝑜𝑛𝑠] ⁄ [𝑅𝑆𝑆𝑈𝑅 ⁄ 𝑑𝑓𝑈𝑅] ~ 𝐹𝛼 (# 𝑟𝑒𝑠𝑡𝑟𝑖𝑐𝑡𝑖𝑜𝑛𝑠, 𝑑𝑓𝑈𝑅)
• 𝐻0 : restrictions (that is, the fe’s 𝛿𝑖 = 0, 𝜏𝑡 = 0) are valid,
• 𝐻1 : restrictions are invalid
• OLS (without fe’s, called “Naïve” or N ) is considered the restricted model, LSDV’s are unrestricted
models

• Can test: 1.) M1 vs N, 2.) M2 vs N, 3.) M3 vs N;


• But can we test 4.) M1 vs M3? Yes; 5.) M2 vs M3? Yes; 6.) M1 vs M2? No… Why not?
• Why test at all? To find the most appropriate specification of FEM…
• But not always… only do this if you don’t know which one is the best.
• How would you know? Use your instinct, knowledge of the topic, knowledge of the dataset, previous
literature or prior experience. Make sure you do it for good reason.
• In Stata – use the 𝑡𝑒𝑠𝑡 command (the generic Wald test) to compare the
LSDVs to N, then use 𝑑𝑖𝑠𝑝𝑙𝑎𝑦 and 𝑑𝑖𝑠𝑝𝑙𝑎𝑦 𝑖𝑛𝑣𝐹𝑡𝑎𝑖𝑙(𝑚, 𝑑𝑓𝑈𝑅 , 𝛼) to manually compute
the F-statistic and the critical value when comparing the LSDVs to each other
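The F statistic above is plain arithmetic once the two residual sums of squares are in hand. A Python sketch with hypothetical RSS values (the 240/120 numbers and the 9-dummy, 98-df setup are made up for illustration):

```python
# Sketch (hypothetical RSS values): the restricted-vs-unrestricted F statistic for
# testing whether the differential intercepts are jointly zero.

def wald_f(rss_r, rss_ur, n_restrictions, df_ur):
    return ((rss_r - rss_ur) / n_restrictions) / (rss_ur / df_ur)

# Naive pooled OLS (restricted) vs LSDV M1 with 9 entity dummies (unrestricted):
F = wald_f(rss_r=240.0, rss_ur=120.0, n_restrictions=9, df_ur=98)
print(round(F, 2))  # → 10.89
```

A value this far above the 5% critical value would lead us to reject the restrictions, i.e., the fixed effects belong in the model.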
The Fixed Effects and Random Effects Models
• Other Fixed Effects Models
• Within-Group Estimator (WG Estimator)
• Eliminating fixed effects by expressing variables as “deviations from their means” – de-meaned or
mean-corrected values
• For each group/entity, get the means of each variable and then calculate the deviation per 𝑡,
expressed as 𝑦𝑖𝑡 = 𝑌𝑖𝑡 − 𝑌̄𝑖 , 𝑥𝑖𝑡 = 𝑋𝑖𝑡 − 𝑋̄𝑖 , ∀ 𝑖 = 1, …, 𝑁
• Estimate the equation 𝑦𝑖𝑡 = 𝛽1 𝑥𝑖𝑡 + 𝑢𝑖𝑡
• Why doesn’t this have an intercept? And yet retain 𝑢𝑖𝑡 ?
• Try to do this as an exercise

• This gives consistent estimates of the slope 𝛽1 , but inefficient (i.e., larger variance) because of
smaller variation in variables (and therefore, larger variation in 𝑢𝑖𝑡 ).
• This may also render any time-invariant variables (e.g., sex, race, industry) inestimable, and may
remove long-run effects, because de-meaning wipes out anything constant within 𝑖.

• In Stata – this can be done using the ”𝑥𝑡𝑟𝑒𝑔” command with the “𝑓𝑒” option – 𝑥𝑡𝑟𝑒𝑔 𝑦 𝑥, 𝑓𝑒
provided you did the “𝑥𝑡𝑠𝑒𝑡” command at the beginning.
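The within-group recipe above can be sketched end to end in a few lines. A Python illustration (toy data, single regressor): de-mean 𝑌 and 𝑋 per entity, then the slope is the ratio of de-meaned cross-products.

```python
# Sketch (toy data): the within-group (WG) estimator — de-mean Y and X per entity,
# then the slope is sum(x*y) / sum(x*x) over the de-meaned values.

def within_slope(panel):
    """panel: dict entity -> list of (Y_it, X_it) pairs."""
    sxy = sxx = 0.0
    for obs in panel.values():
        ybar = sum(y for y, _ in obs) / len(obs)
        xbar = sum(x for _, x in obs) / len(obs)
        for y, x in obs:
            sxy += (x - xbar) * (y - ybar)
            sxx += (x - xbar) ** 2
    return sxy / sxx

# Two entities at very different levels, but with the same within slope of 2:
data = {1: [(1.0, 0.0), (3.0, 1.0)], 2: [(10.0, 5.0), (12.0, 6.0)]}
print(within_slope(data))  # → 2.0
```

Note that the level difference between the two entities (which pooled OLS would partly attribute to 𝑋) is eliminated by the de-meaning, so only within-entity variation identifies 𝛽1.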
The Fixed Effects and Random Effects Models
• Other Fixed Effects Models
• First-difference estimator
• An alternative to the WG estimator – in principle, it also effectively removes any entity-specific,
time-invariant heterogeneity
• You take the first difference of all variables in the equation
• For any 𝑍𝑖𝑡 ∈ 𝑌𝑖𝑡 , 𝑋𝑖𝑡 , 𝑢𝑖𝑡 , ∆𝑍𝑖𝑡 = 𝑍𝑖,𝑡 − 𝑍𝑖,𝑡−1
• Estimate the equation ∆𝑌𝑖𝑡 = 𝛽1 ∆𝑋𝑖𝑡 + ∆𝑢𝑖𝑡

• Again, note that there is no intercept (why?)


• But this is how any source of heterogeneity is eliminated – through the elimination of the intercepts and
therefore any time-invariant component
• Try to show this for yourself.

• In Stata – This can be done manually by doing an OLS regression but transforming the variables
using “𝑑#. 𝑣𝑎𝑟” operator where “#” is the number of differences, usually either 1 or 2. You must
have done 𝑥𝑡𝑠𝑒𝑡 or 𝑡𝑠𝑠𝑒𝑡 first
• For example, 𝑟𝑒𝑔𝑟𝑒𝑠𝑠 𝑑1. 𝑖𝑛𝑐𝑜𝑚𝑒 𝑑1. 𝑒𝑑𝑢𝑐
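The first-difference estimator admits an equally short sketch (same toy data as the within example; with only two periods the two estimators coincide, which this reproduces):

```python
# Sketch (toy data): the first-difference estimator — regress ΔY on ΔX with no
# intercept, so the slope is sum(Δx*Δy) / sum(Δx*Δx) over within-entity differences.

def first_difference_slope(panel):
    """panel: dict entity -> list of (Y_it, X_it) pairs, already sorted by t."""
    sxy = sxx = 0.0
    for obs in panel.values():
        for (y0, x0), (y1, x1) in zip(obs, obs[1:]):   # consecutive periods within i
            dy, dx = y1 - y0, x1 - x0
            sxy += dx * dy
            sxx += dx * dx
    return sxy / sxx

data = {1: [(1.0, 0.0), (3.0, 1.0)], 2: [(10.0, 5.0), (12.0, 6.0)]}
print(first_difference_slope(data))  # → 2.0
```

Differencing never mixes observations across entities, so any time-invariant component of 𝑖 drops out – the same heterogeneity-elimination logic as de-meaning.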
The Fixed Effects and Random Effects Models
• Random Effects Model
• Also known as the error components model (ECM)
• A criticism of the FEM: the inclusion of dummy variables is a representation of our lack of
knowledge about the “true” model – so why not express that through the disturbance term?
• 𝑌𝑖𝑡 = 𝛽0𝑖 + 𝛽1 𝑋𝑖𝑡 + 𝑢𝑖𝑡 , ∀ 𝑖 = 1, …, 𝑁
• But 𝛽0𝑖 = 𝛽0 + 𝜀𝑖 … so it becomes 𝑌𝑖𝑡 = 𝛽0 + 𝛽1 𝑋𝑖𝑡 + 𝜀𝑖 + 𝑢𝑖𝑡
• Where 𝜀𝑖 is the cross-sectional error component representing the unobserved
heterogeneity across 𝑖, with 𝜀𝑖 ~ 𝑁(0, 𝜎𝜀²)
• So, we now have a new error term, 𝜔𝑖𝑡 , such that 𝜔𝑖𝑡 = 𝜀𝑖 + 𝑢𝑖𝑡
• Usual assumptions of the ECM include
• 𝜀𝑖 ~ 𝑁(0, 𝜎𝜀²)
• 𝑢𝑖𝑡 ~ 𝑁(0, 𝜎𝑢²)
• 𝐸(𝜀𝑖 𝑢𝑖𝑡) = 0; 𝐸(𝜀𝑖 𝜀𝑗) = 0 ∀ 𝑖 ≠ 𝑗; 𝐸(𝑢𝑖𝑡 𝑢𝑖𝑠) = 0 ∀ 𝑡 ≠ 𝑠; 𝐸(𝑢𝑖𝑡 𝑢𝑗𝑡) = 0 ∀ 𝑖 ≠ 𝑗 – meaning error
components are not correlated across cross-section and time-series units
• 𝐸(𝑋𝑖𝑡 𝜔𝑖𝑡) = 0 – exogeneity of regressors must be preserved, otherwise the ECM will be inconsistent.
The Fixed Effects and Random Effects Models
• Random Effects Model
• Note that by the assumptions about the distribution of the error components, 𝐸(𝜔𝑖𝑡) = 0
and 𝑣𝑎𝑟(𝜔𝑖𝑡) = 𝜎𝜀² + 𝜎𝑢²
• If 𝜎𝜀2 = 0, then that means the model is no different from the Naïve model – we can
just use OLS.
• Use the Breusch-Pagan Lagrange Multiplier Test – which tests the null hypothesis,
𝐻0 : 𝜎𝜀2 = 0 or just use Naïve vs 𝐻1 : 𝜎𝜀2 > 0 or use REM.
• If we reject the null hypothesis, that means REM is better than Naïve

• In Stata, use 𝑥𝑡𝑟𝑒𝑔 𝑦 𝑥, 𝑟𝑒 following 𝑥𝑡𝑠𝑒𝑡 or 𝑡𝑠𝑠𝑒𝑡


• Test the validity of the model using 𝑥𝑡𝑡𝑒𝑠𝑡0
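What 𝑥𝑡𝑡𝑒𝑠𝑡0 computes can be sketched directly. A Python illustration with hypothetical pooled-OLS residuals (the balanced-panel form of the Breusch–Pagan LM statistic, stated here from the standard textbook formula):

```python
# Sketch (hypothetical residuals): the Breusch–Pagan LM statistic for H0: sigma_eps^2 = 0,
# from pooled-OLS residuals e_it of a balanced panel:
#   LM = N*T / (2*(T-1)) * ( sum_i (sum_t e_it)^2 / sum_it e_it^2  -  1 )^2  ~  chi2(1)

def bp_lm(residuals):
    """residuals: dict entity -> list of T pooled-OLS residuals."""
    N = len(residuals)
    T = len(next(iter(residuals.values())))
    num = sum(sum(e) ** 2 for e in residuals.values())       # squared within-entity sums
    den = sum(v ** 2 for e in residuals.values() for v in e)  # total sum of squares
    return (N * T) / (2 * (T - 1)) * (num / den - 1) ** 2

# Residuals that share a strong entity-level component:
e = {1: [1.0, 1.2, 0.8], 2: [-1.1, -0.9, -1.0], 3: [0.2, -0.1, -0.1]}
print(round(bp_lm(e), 2))  # → 8.31, above the chi2(1) 5% critical value of 3.84
```

Because the residuals cluster by entity, the statistic exceeds 3.84 and we would reject 𝐻0 – REM beats the Naïve model here.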
The Fixed Effects and Random Effects Models
• FEM vs REM
• FEM and REM can give very different results
• Can test this formally using the Hausman test – a 𝜒² statistic computed as
𝐻 = (𝛽𝑐 − 𝛽𝑒)′ (𝑉𝑐 − 𝑉𝑒)⁻¹ (𝛽𝑐 − 𝛽𝑒), where the 𝛽 are coefficient vectors, the 𝑉 are covariance
matrices, 𝑐 is the consistent estimator (usually the FEM), and 𝑒 is the efficient estimator (usually the REM)
• This essentially tests whether FEM and REM have the same coefficients – but
also that 𝐻0 : 𝐸(𝑋𝑖𝑡 𝜀𝑖) = 0, or 𝐸(𝑋𝑖𝑡 𝜔𝑖𝑡) = 0, which means the new error term does not
correlate with the regressors (exogeneity holds)
• If they differ, then use FEM because REM is not appropriate – this implies
𝐻1 : 𝐸(𝑋𝑖𝑡 𝜀𝑖) ≠ 0, or 𝐸(𝑋𝑖𝑡 𝜔𝑖𝑡) ≠ 0
• The bottom line, in a way, is 𝐻0 : REM is better vs. 𝐻1 : FEM is better, and this depends on
whether the error component is correlated with the regressors (endogeneity)

• In Stata, the command is ℎ𝑎𝑢𝑠𝑚𝑎𝑛 𝑐𝑜𝑛𝑠𝑖𝑠𝑡𝑒𝑛𝑡_𝑒𝑠𝑡𝑖𝑚𝑎𝑡𝑜𝑟 𝑒𝑓𝑓𝑖𝑐𝑖𝑒𝑛𝑡_𝑒𝑠𝑡𝑖𝑚𝑎𝑡𝑜𝑟, where
the consistent estimator is the FEM and the efficient estimator is the REM
• Before you run ℎ𝑎𝑢𝑠𝑚𝑎𝑛, first estimate each model, then use 𝑒𝑠𝑡𝑠𝑡𝑜 𝑛𝑎𝑚𝑒 to store the FEM
and REM under their respective names. Say, you saved FEM as 𝑓𝑖𝑥𝑒𝑑 and REM as 𝑟𝑎𝑛𝑑𝑜𝑚 –
then you run ℎ𝑎𝑢𝑠𝑚𝑎𝑛 𝑓𝑖𝑥𝑒𝑑 𝑟𝑎𝑛𝑑𝑜𝑚 and look at the p-value
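For intuition, the Hausman formula reduces to simple arithmetic when there is a single coefficient. A Python sketch with hypothetical estimates (all four numbers are made up):

```python
# Sketch (hypothetical estimates): the Hausman statistic for a single coefficient,
# H = (b_FE - b_RE)^2 / (var_FE - var_RE), compared against a chi2(1) critical value.

def hausman_scalar(b_fe, v_fe, b_re, v_re):
    # FE is consistent, RE is efficient, so v_fe > v_re and the denominator is positive.
    return (b_fe - b_re) ** 2 / (v_fe - v_re)

H = hausman_scalar(b_fe=0.80, v_fe=0.010, b_re=0.55, v_re=0.004)
print(round(H, 2))  # → 10.42
```

Since 10.42 exceeds the chi-squared(1) 5% critical value of 3.84, the two estimates differ systematically and we would prefer the FEM.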
The Fixed Effects and Random Effects Models
• FEM vs REM – some guidelines based on panel dimensions
• (Long panel) If N is small, and T is large, very likely FEM may be preferable
• (Short panel) If N is large and T is small, this depends – if we have reason to believe
that errors will be correlated to regressors (endogeneity), then use FEM. Otherwise,
REM.
• If N is not based on a random drawing from a larger population, then FEM.
• If N is a random drawing from a larger population, then REM.
• REM can estimate coefficients on time-invariant variables if we choose to; FEM generally
cannot, since such variables are collinear with the fixed effects.
• Check the literature
• Do both, then do the Hausman
• In the end, trust your instinct
Extensions and Applications
Extensions and Applications
• Difference-in-differences (DID)
estimator
• A very useful impact evaluation
technique to capture the effects of
a policy or intervention on groups
over time – can make causal
inference (provided assumptions
are met)
• Often used with simple panel data
(repeated cross section, or
longitudinal data with at least 2
time periods) – but this can be
extended to longer panel data via
event study methodology
• This requires a sudden exogenous
source of variation – “treatment”
Extensions and Applications
• Difference-in-differences (DID) estimator
• Say that for some outcome 𝑌, a group 𝑇 was affected by an intervention and a group 𝐶 was not.
The intervention happened in a period 𝐴𝑓𝑡𝑒𝑟, which succeeded the period 𝐵𝑒𝑓𝑜𝑟𝑒.
• The DID equation is essentially a difference-in-means equation given by
• 𝑌 = 𝛼 + 𝛽𝑇 + 𝛾𝐴𝑓𝑡𝑒𝑟 + 𝛿(𝑇 × 𝐴𝑓𝑡𝑒𝑟)
• Where 𝑇 = 1 if in the treatment group and 𝐴𝑓𝑡𝑒𝑟 = 1 if in the period after the intervention, 0 otherwise
• 𝑌𝐶,𝐵𝑒𝑓𝑜𝑟𝑒 = 𝛼; 𝑌𝐶,𝐴𝑓𝑡𝑒𝑟 = 𝛼 + 𝛾
• 𝑌𝑇,𝐵𝑒𝑓𝑜𝑟𝑒 = 𝛼 + 𝛽; 𝑌𝑇,𝐴𝑓𝑡𝑒𝑟 = 𝛼 + 𝛽 + 𝛾 + 𝛿

                      Before     After             After − Before
  Control             𝛼          𝛼 + 𝛾             𝛾
  Treated             𝛼 + 𝛽      𝛼 + 𝛽 + 𝛾 + 𝛿     𝛾 + 𝛿
  Treated − Control   𝛽          𝛽 + 𝛿             DID = 𝜹

• 𝐷𝐼𝐷 = (𝑌𝑇,𝐴𝑓𝑡𝑒𝑟 − 𝑌𝑇,𝐵𝑒𝑓𝑜𝑟𝑒) − (𝑌𝐶,𝐴𝑓𝑡𝑒𝑟 − 𝑌𝐶,𝐵𝑒𝑓𝑜𝑟𝑒) = [(𝛼 + 𝛽 + 𝛾 + 𝛿) − (𝛼 + 𝛽)] − [(𝛼 + 𝛾) − 𝛼]
= (𝛾 + 𝛿) − 𝛾 = 𝛿, which is the DID estimator
• 𝛼 is the autonomous value – without treatment, before the intervention
• 𝛽 represents the systematic difference between treated and control groups
• 𝛾 represents catchall era effects
• In Stata, estimate this using OLS
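The double-difference algebra above can be checked numerically. A Python sketch with made-up group means (constructed from 𝛼 = 5, 𝛽 = 2, 𝛾 = 1, 𝛿 = 3, so the estimator should recover 𝛿 = 3):

```python
# Sketch (made-up group means): the DID estimator as a double difference,
# matching delta in Y = alpha + beta*T + gamma*After + delta*(T x After).

def did(y_t_after, y_t_before, y_c_after, y_c_before):
    return (y_t_after - y_t_before) - (y_c_after - y_c_before)

# alpha = 5, beta = 2, gamma = 1, delta = 3:
Y_C_before, Y_C_after = 5.0, 6.0        # alpha, alpha + gamma
Y_T_before, Y_T_after = 7.0, 11.0       # alpha + beta, alpha + beta + gamma + delta
print(did(Y_T_after, Y_T_before, Y_C_after, Y_C_before))  # → 3.0
```

The first difference (11 − 7 = 4) mixes the treatment effect with the era effect; subtracting the control group’s change (6 − 5 = 1) strips the era effect out, leaving 𝛿.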
Extensions and Applications
• Difference-in-differences (DID)
estimator
• Assumptions
• Parallel trends assumption – in the absence of
a treatment, both treated and control groups
can have a structural difference (parallel), but
must have the same growth trajectory
• Implicit Assumptions:
• Treated and control groups must be similar in terms of certain characteristics – lest there may be selection.
Alternatively, shock must be exogenous – no selection into treatment.
• Homogeneous treatment effect – the effect on all treated groups is the same
• As opposed to heterogenous treatment intensity/differing effects
• Treatment timing is the same
• As opposed to staggered timing
• Once treated, always treated
• As opposed to possible program end or program exit
• Violation of the first one is relatively simple to address – do some matching to make sure groups are
comparable, and reduce the selection problem.
• Violation of 2,3,4 is a lot harder to address as this requires some very advanced models that have been
developed only recently.
Extensions and Applications
• Another useful estimator is the Arellano–Bover/Blundell–Bond (or just Arellano–Bond)
two-step System GMM estimator (or just System GMM)
• Look into this yourselves, or we could study this as an independent case.
References
• Cameron, A. C., and Trivedi, P. K. (2005). Microeconometrics: Methods and Applications. New York: Cambridge University Press.
• Guillerm, M. (2017). Pseudo-panel methods and an example of application to Household Wealth data. Economics and Statistics, 491-492. DOI: 10.24187/ecostat.2017.491d.1908.
• Gujarati, D., and Porter, D. (2009). Basic Econometrics (5th ed.). Singapore: McGraw-Hill.
