
Term Paper on “Regression Methodologies & Review of Literature”

Submitted To:
Tarik Hossain
Assistant Professor,
Department of AIS,
Comilla University.

Submitted By:
Serial No. Name Roll No.

1. Antor Das Antu 11606008

2. Sharima Alam 11606012

3. Afroza Rahman 11606016

4. Nazia Narjis 11606022

5. Md. Salahuddin 11606024

6. Sanjida Anowar Anni (Group Leader) 11606032

7. Susmita Paul 11606043

8. Jagjit Majumder 11606052

9. Farhana Akter 11606057

Dept. of AIS, 10th Batch,


MBA Session: 2019-2020,
Comilla University.

Date of Submission: 07/05/2021


Letter of Transmittal

7 May, 2021
Tarik Hossain
Assistant Professor,
Department of Accounting & Information Systems,
Comilla University, Cumilla.
Subject: Submission of Term Paper on “Regression Methodologies & Review of Literature”

Sir,
By the grace of almighty Allah, the benevolent and merciful, and with your generous help, we have successfully completed the term paper on “Regression Methodologies & Review of Literature” that you asked us to prepare.
We have tried our best to complete the term paper within the given time period and to discuss the topics elaborately and clearly. Though we were determined to make the report as good as possible, there may still be some mistakes, which we hope you will consider mercifully.
We therefore sincerely expect that you will be kind enough to accept our report for evaluation and oblige thereby.
Sincerely yours,
Sanjida Anowar Anni
Group Leader,
On behalf of Group C,
Department of Accounting & Information Systems,
Comilla University.
Acknowledgement

First of all, we remember almighty God for making us successful in preparing this term paper. Without
the assistance of a dedicated and cognizant teacher, a student cannot nourish his or her caliber
alone; a conversant teacher nurtures his learners judiciously by rendering them his or her
ultimate excellence. We are truly grateful to our honorable course teacher of Applied Research
Methodology, Tarik Hossain, Assistant Professor, Department of Accounting and Information
Systems, Comilla University, for providing us the chance to prepare a comprehensive term paper
on “Regression Methodologies & Review of Literature”. From the core of our hearts, we
dedicate our precious tribute and endless homage to him for helping us so much in preparing
ourselves immaculately, and we hope to receive his solicitude in the future as well.
However, there might be some errors and mistakes, so we seek your kind consideration, as we are
in the process of learning.
Table of Contents

Serial No. Contents

1 Regression
2 Seven Classical OLS Conditions
3 Two-Stage Least Squares (2SLS) Regression Analysis
4 Three-Stage Least Squares (3SLS)
5 Generalized Method of Moments
6 Conditions of GMM Regression
7 Two-step Feasible GMM
8 Vector Regression and Time Series Analysis
9 VAR
10 VECM (Vector Error Correction Model)
11 ARDL (Autoregressive Distributed Lag)
12 Dummy Regression for Developing Likert Scale
13 Tips for Writing the Literature Review
14 Tricks to Write an Effective Literature Review
15 The Way to Choose Current Literature of a Relevant Topic of Interest
16 Ways to Find Literature Review
17 Way to Gather Literature Review
18 Conclusion
19 Bibliography
Regression Methodologies

Regression
Regression is a statistical method used in finance, investing, and other disciplines that attempts to
determine the strength and character of the relationship between one dependent variable (usually
denoted by Y) and a series of other variables (known as independent variables). Regression helps
investment and financial managers to value assets and understand the relationships between
variables, such as commodity prices and the stocks of businesses dealing in those commodities.

OLS Model

Ordinary least-squares (OLS) models assume that the analysis is fitting a model of a relationship
between one or more explanatory variables and a continuous or at least interval outcome variable
that minimizes the sum of square errors, where an error is the difference between the actual and
the predicted value of the outcome variable. The most common analytical method that utilizes
OLS models is linear regression (with a single or multiple predictor variables).
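As a minimal sketch of this idea (assuming NumPy and hypothetical simulated data, not any dataset from this paper), a single-predictor OLS fit can be computed directly from the least-squares criterion:

```python
# Hypothetical example: does hours studied predict exam score?
import numpy as np

rng = np.random.default_rng(0)
hours = rng.uniform(0, 10, size=200)
score = 40 + 5 * hours + rng.normal(0, 3, size=200)  # true intercept 40, slope 5

# Build the design matrix with a constant column and minimize the
# sum of squared errors ||y - Xb||^2 via least squares.
X = np.column_stack([np.ones_like(hours), hours])
beta_hat, *_ = np.linalg.lstsq(X, score, rcond=None)

residuals = score - X @ beta_hat
```

Because the design matrix includes a constant column, the fitted residuals average to zero by construction, which connects directly to the classical OLS conditions discussed below.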

Fixed Effects Model

A fixed effects model is a statistical model in which the model parameters are fixed or non-random quantities. This is in contrast to random effects models and mixed models in which all or
some of the model parameters are random variables. In many applications
including econometrics and biostatistics a fixed effects model refers to a regression model in
which the group means are fixed (non-random) as opposed to a random effects model in which
the group means are a random sample from a population. Generally, data can be grouped
according to several observed factors. The group means could be modeled as fixed or random
effects for each grouping. In a fixed effects model each group mean is a group-specific fixed
quantity.

Two common assumptions can be made about the individual specific effect: the random effects
assumption and the fixed effects assumption. The random effects assumption is that the
individual unobserved heterogeneity is uncorrelated with the independent variables. The fixed

effect assumption is that the individual specific effect is correlated with the independent
variables.

If the random effects assumption holds, the random effects estimator is more efficient than the
fixed effects model. However, if this assumption does not hold, the random effects estimator is
not consistent.

Random Effect Model

A random effects model, also called a variance components model, is a statistical model where
the model parameters are random variables. It is a kind of hierarchical linear model, which
assumes that the data being analyzed are drawn from a hierarchy of different populations whose
differences relate to that hierarchy. In econometrics, random effects models are used in panel
analysis of hierarchical or panel data when one assumes no fixed effects (it allows for individual
effects). The random effects model is a special case of the fixed effects model.

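The practical difference between the two assumptions can be illustrated with a small simulation (a sketch assuming NumPy and synthetic panel data, where the group effects are deliberately correlated with the regressor): pooled OLS is then biased, while the within (demeaning) fixed-effects estimator is not.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, t_obs = 50, 10
group = np.repeat(np.arange(n_groups), t_obs)

# Group-specific effects, deliberately correlated with the regressor:
# this is the fixed effects situation, and it biases pooled OLS.
alpha = rng.normal(0, 2, n_groups)
x = alpha[group] + rng.normal(0, 1, group.size)
y = alpha[group] + 1.5 * x + rng.normal(0, 1, group.size)  # true slope 1.5

def within(v):
    # The "within" transformation: subtract each group's mean.
    means = np.array([v[group == g].mean() for g in range(n_groups)])
    return v - means[group]

x_w, y_w = within(x), within(y)
beta_fe = (x_w @ y_w) / (x_w @ x_w)   # fixed-effects (within) slope
beta_pooled = np.polyfit(x, y, 1)[0]  # pooled OLS slope, ignores the groups
```

Here the within estimator stays close to the true slope of 1.5, while the pooled slope is pulled upward by the correlation between the group effects and x.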

Seven Classical OLS Conditions


There are seven classical OLS conditions for linear regression. The first six are mandatory to
produce the best estimates. While the quality of the estimates does not depend on the seventh
assumption, analysts often evaluate it for other important reasons that we will cover. Below are these
conditions:
1. The regression model is linear in the coefficients and the error term
2. The error term has a population mean of zero
3. All independent variables are uncorrelated with the error term
4. Observations of the error term are uncorrelated with each other
5. The error term has a constant variance.
6. No independent variable is a perfect linear function of other explanatory variables
7. The error term is normally distributed (optional)
OLS Condition 1: The regression model is linear in the coefficients and the error term

This assumption addresses the functional form of the model. In statistics, a regression model is
linear when all terms in the model are either the constant or a parameter multiplied by an
independent variable. We build the model equation only by adding the terms together. These
rules constrain the model to one type:

Y = β0 + β1X1 + β2X2 + … + βkXk + ε

OLS Condition 2: The error term has a population mean of zero

The error term accounts for the variation in the dependent variable that the independent variables
do not explain. Random chance should determine the values of the error term. For our model to
be unbiased, the average value of the error term must equal zero.

Suppose the average error is +7. This non-zero average error indicates that our model
systematically underpredicts the observed values. Statisticians refer to systematic error like this
as bias, and it signifies that our model is inadequate because it is not correct on average.

Stated another way, we want the expected value of the error to equal zero. If the expected value
is +7 rather than zero, part of the error term is predictable, and we should add that information to
the regression model itself. We want only random error left for the error term.

OLS Condition 3: All independent variables are uncorrelated with the error term

If an independent variable is correlated with the error term, we can use the independent variable
to predict the error term, which violates the notion that the error term represents unpredictable
random error. We need to find a way to incorporate that information into the regression model
itself.

This condition is also referred to as exogeneity. When this type of correlation exists, there is
endogeneity. Violations of this assumption can occur because there is simultaneity between the

independent and dependent variables, omitted variable bias, or measurement error in the
independent variables.

OLS Condition 4: Observations of the error term are uncorrelated with each other

One observation of the error term should not predict the next observation. For instance, if the
error for one observation is positive and that systematically increases the probability that the
following error is positive, that is a positive correlation. If the subsequent error is more likely to
have the opposite sign, that is a negative correlation. This problem is known both as serial
correlation and autocorrelation. Serial correlation is most likely to occur in time series models.

For example, if sales are unexpectedly high on one day, then they are likely to be higher than
average on the next day. This type of correlation isn’t an unreasonable expectation for some
subject areas, such as inflation rates, GDP, unemployment, and so on.

OLS Condition 5: The error term has a constant variance (no heteroscedasticity)

The variance of the errors should be consistent for all observations. In other words, the variance
does not change for each observation or for a range of observations. This preferred condition is
known as homoscedasticity (same scatter). If the variance changes, we refer to that as
heteroscedasticity (different scatter).

OLS Condition 6: No independent variable is a perfect linear function of other explanatory


variables

Perfect correlation occurs when two variables have a Pearson’s correlation coefficient of +1 or
−1. When one of the variables changes, the other variable also changes by a completely fixed
proportion. The two variables move in unison.

Perfect correlation suggests that two variables are different forms of the same variable. For
example, games won and games lost have a perfect negative correlation (-1). The temperatures in
Fahrenheit and Celsius have a perfect positive correlation (+1).

Ordinary least squares cannot distinguish one variable from the other when they are perfectly
correlated. If we specify a model that contains independent variables with perfect correlation, our
statistical software can’t fit the model, and it will display an error message. We must remove one
of the variables from the model to proceed.
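The Fahrenheit/Celsius example can be checked numerically (a sketch assuming NumPy): adding both temperature scales to a design matrix leaves it rank-deficient, which is exactly why OLS cannot produce a unique solution.

```python
import numpy as np

fahrenheit = np.array([32.0, 50.0, 68.0, 86.0, 104.0])
celsius = (fahrenheit - 32.0) * 5.0 / 9.0  # a perfect linear function of fahrenheit

# Design matrix with a constant plus both temperature scales:
X = np.column_stack([np.ones(5), fahrenheit, celsius])

# The celsius column is a linear combination of the constant and
# fahrenheit columns, so the matrix has rank 2, not 3, and the
# normal equations have no unique solution.
rank = np.linalg.matrix_rank(X)
```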

OLS Condition 7: The error term is normally distributed (optional)

OLS does not require that the error term follows a normal distribution to produce unbiased
estimates with the minimum variance. However, satisfying this assumption allows you to
perform statistical hypothesis testing and generate reliable confidence intervals and prediction
intervals.
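Several of the conditions above have simple in-sample diagnostics. The sketch below (assuming NumPy and simulated homoscedastic data) checks the residual mean (condition 2), compares residual variances across the regressor's range (condition 5), and verifies that residuals are uncorrelated with the regressor, the in-sample analogue of condition 3:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 500)
y = 2 + 3 * x + rng.normal(0, 1, 500)  # constant error variance by construction

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Condition 2: with a constant in the model, residuals average to zero.
mean_resid = resid.mean()

# Condition 5: residual variance for low-x vs high-x observations;
# under homoscedasticity the ratio should be near 1.
low, high = resid[x < 5], resid[x >= 5]
var_ratio = low.var() / high.var()

# Condition 3 (in-sample analogue): residuals are orthogonal to the regressor.
corr_xe = np.corrcoef(x, resid)[0, 1]
```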

Two-Stage Least Squares (2SLS) Regression Analysis

Two-Stage least squares (2SLS) regression analysis is a statistical technique that is used in the
analysis of structural equations. This technique is the extension of the OLS method. It is used
when the dependent variable’s error terms are correlated with the independent variables.
Additionally, it is useful when there are feedback loops in the model. In structural equations
modeling, we use the maximum likelihood method to estimate the path coefficient. This
technique is an alternative in SEM modeling to estimate the path coefficient. This technique can
also be applied in quasi-experimental studies.

Assumptions:

• Models (equations) should be correctly identified.

• The error variance of all the variables should be equal.

• Error terms should be normally distributed.

• It is assumed that outliers have been removed from the data.

• Observations should be independent of each other.

New Variables: We can change NEWVAR settings on the TSET command prior to 2SLS to
evaluate the regression statistics without saving the values of predicted and residual variables, or
we can save the new values to replace the values that were saved earlier, or we can save the new
values without erasing values that were saved earlier (see the TSET command). We can also use
the SAVE subcommand on 2SLS to override the NONE or the default CURRENT settings on
NEWVAR.

Covariance Matrix: We can obtain the covariance matrix of the parameter estimates in addition
to all of the other output by specifying PRINT=DETAILED on the TSET command prior to
2SLS. We can also use the PRINT subcommand to obtain the covariance matrix, regardless of
the setting on PRINT.

There are a few things to keep in mind as we enter our instruments:

• In order to calculate TSLS estimates, our specification must satisfy the order condition for
identification, which says that there must be at least as many instruments as there are coefficients
in your equation. There is an additional rank condition which must also be satisfied. See
Davidson and MacKinnon (1993) and Johnston and DiNardo (1997) for additional discussion.

• For econometric reasons that we will not pursue here, any right-hand side variables that are not
correlated with the disturbances should be included as instruments.

• EViews will, by default, add a constant to the instrument list. If we do not wish a constant to be
added to the instrument list, the Include a constant check box should be unchecked.
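Outside of any particular package, the two stages can also be written out by hand. The following sketch (assuming NumPy, with simulated data in which the regressor is correlated with the error and z is a valid instrument) shows naive OLS being biased while 2SLS recovers the true coefficient:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
z = rng.normal(size=n)                        # instrument: drives x, unrelated to u
u = rng.normal(size=n)                        # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # x is endogenous: correlated with u
y = 1.0 + 2.0 * x + u                         # true coefficient on x is 2

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: regress x on a constant and the instrument, keep fitted values.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ ols(Z, x)

# Stage 2: regress y on a constant and the stage-1 fitted values.
beta_2sls = ols(np.column_stack([np.ones(n), x_hat]), y)

# Naive OLS of y on x is biased upward by the x-u correlation.
beta_ols = ols(np.column_stack([np.ones(n), x]), y)
```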

Three-Stage Least Squares (3SLS)

The term three-stage least squares (3SLS) refers to a method of estimation that combines system-equation
estimation, sometimes known as seemingly unrelated regression (SUR), with two-stage least
squares estimation. It is a form of instrumental variables estimation that permits correlations of
the unobserved disturbances across several equations, as well as restrictions among coefficients
of different equations, and improves upon the efficiency of equation-by-equation estimation by
taking into account such correlations across equations. Unlike the two-stage least squares (2SLS)
approach for a system of equations, which would estimate the coefficients of each structural
equation separately, the three-stage least squares estimates all coefficients simultaneously. It is
assumed that each equation of the system is at least just-identified. Equations that are
underidentified are disregarded in the 3SLS estimation.

Three-stage least squares originated in a paper by Arnold Zellner and Henri Theil (1962). In the
classical specification, although the structural disturbances may be correlated across equations
(contemporaneous correlation), it is assumed that within each structural equation the

disturbances are both homoscedastic and serially uncorrelated. The classical specification thus
implies that the disturbance covariance matrix within each equation is diagonal, whereas the
entire system’s covariance matrix is not diagonal.

The Zellner-Theil proposal for efficient estimation of this system is in three stages, wherein the
first stage involves obtaining estimates of the residuals of the structural equations by two-stage
least squares of all identified equations; the second stage involves computation of the optimal
instrument, or weighting matrix, using the estimated residuals to construct the disturbance
variance-covariance matrix; and the third stage is joint estimation of the system of equations
using the optimal instrument. Although 3SLS is generally asymptotically more efficient than
2SLS, if even a single equation of the system is mis-specified, 3SLS estimates of coefficients of
all equations are generally inconsistent.

The 3SLS estimator has been extended to estimation of a nonlinear system of simultaneous
equations by Takeshi Amemiya (1977) and Dale Jorgenson and Jean-Jacques Laffont (1975). An
excellent discussion of 3SLS estimation, including a formal derivation of its analytical and
asymptotic properties, and its comparison with full-information maximum likelihood (FIML), is
given in Jerry Hausman (1983).

Generalized Method of Moments

In econometrics and statistics, the generalized method of moments (GMM) is a generic method
for estimating parameters in statistical models. Usually it is applied in the context of
semiparametric models, where the parameter of interest is finite-dimensional, whereas the full shape
of the data's distribution function may not be known, and therefore maximum likelihood
estimation is not applicable.

The method requires that a certain number of moment conditions be specified for the model.
These moment conditions are functions of the model parameters and the data, such that their
expectation is zero at the parameters' true values. The GMM method then minimizes a certain
norm of the sample averages of the moment conditions, and can therefore be thought of as a
special case of minimum-distance estimation.

The GMM estimators are known to be consistent, asymptotically normal, and efficient in the
class of all estimators that do not use any extra information aside from that contained in the
moment conditions. GMM was advocated by Lars Peter Hansen in 1982 as a generalization of the
method of moments, introduced by Karl Pearson in 1894. However, these estimators are
mathematically equivalent to those based on "orthogonality conditions" (Sargan, 1958, 1959) or
"unbiased estimating equations" (Huber, 1967; Wang et al., 1997).

Suppose the available data consist of T observations {Y_t}, t = 1, …, T, where each observation Y_t
is an n-dimensional multivariate random variable. We assume that the data come from a certain
statistical model, defined up to an unknown parameter θ ∈ Θ. The goal of the estimation problem
is to find the “true” value of this parameter, θ0, or at least a reasonably close estimate. A general
assumption of GMM is that the data Y_t be generated by a weakly stationary ergodic stochastic
process.
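A minimal worked example of the idea (assuming NumPy and simulated exponential data; this is an exactly identified case, so the "norm of the sample moments" reduces to squaring a single moment condition):

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical data from an exponential distribution with true rate 0.5
data = rng.exponential(scale=2.0, size=10_000)  # mean = 1/rate = 2

# Moment condition g(theta) = E[Y] - 1/theta, zero at the true rate.
def g(theta, y):
    return np.mean(y) - 1.0 / theta

# One moment, one parameter: minimize g(theta)^2 over a grid
# (exactly identified, so the minimum drives the moment to ~0).
grid = np.linspace(0.1, 2.0, 2000)
objective = np.array([g(t, data) ** 2 for t in grid])
theta_hat = grid[np.argmin(objective)]
```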

GMM Regression

In econometrics and statistics, the generalized method of moments (GMM) is a generic method
for estimating parameters in statistical models. These moment conditions are functions of the
model parameters and the data, such that their expectation is zero at the parameters' true values.

Conditions of GMM Regression

1. The generalized method of moments (GMM) is a method for constructing estimators. GMM
uses assumptions about specific moments of the random variables instead of assumptions about
the entire distribution, which makes GMM more robust than ML, at the cost of some efficiency.

2. An advantage of the two-step approach is that the number of equations and parameters in the
nonlinear GMM step does not grow with the number of perfectly measured regressors, conferring
a computational simplicity not shared by the asymptotically more efficient one-step GMM
estimators.

Two-step feasible GMM

• Step 1: Take W = I (the identity matrix) or some other positive-definite matrix, and compute a
preliminary GMM estimate θ̂(1). This estimator is consistent for θ0, although not efficient.

• Step 2: Compute the weighting matrix Ŵ from the step-1 residuals; it converges in probability
to Ω−1, and therefore if we recompute the GMM estimate with this weighting matrix, the
estimator will be asymptotically efficient.

Time Series Analysis

A time series is a sequence of data points, measured typically at successive points in time spaced
at uniform time intervals. Time series are used in statistics, signal processing, pattern
recognition, econometrics, mathematical finance, weather forecasting, earthquake prediction,
electroencephalography, control engineering, astronomy, and communications engineering. Time
series analysis comprises methods for analyzing time series data in order to extract meaningful
statistics and other characteristics of the data.

Vector Regression and Time Series Analysis

In this section, we introduce ε-support vector regression and time series analysis which are used
to forecast Bayan-nur’s total water requirement.

ε-Support Vector Regression

Consider the training set

T = {(x1, y1), ..., (xl, yl)} ∈ (R^n × Y)^l,

where xi ∈ R^n, yi ∈ Y = R, i = 1, ..., l.

VAR

The VAR model is useful for describing the dynamic behavior of economic and financial time
series and for forecasting. It often provides superior forecasts to those from univariate time
series models and from elaborate theory-based simultaneous equations models.

Building a VAR model involves the following steps:

1. Analyze the time series characteristics.

2. Test for causation amongst the time series.

3. Test for stationarity.

4. Transform the series to make it stationary, if needed.

5. Find the optimal lag order.

6. Prepare training and test datasets.

7. Train the model.
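Once the series are stationary, each VAR equation is just an OLS regression on lagged values. Below is a sketch of fitting a bivariate VAR(1), assuming NumPy and a simulated zero-mean process (so no constant term is needed):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 2000
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])  # true coefficient matrix; eigenvalues inside the unit circle
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(0, 1, 2)

# Each VAR equation is an OLS regression of y_t on y_{t-1}
# (the process has mean zero, so the constant is omitted here).
Y, X = y[1:], y[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
```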

VECM (vector error correction model)

Generally, after examining the long-run relationship between the variables, standard Granger
causality tests based on a VAR system with a vector error correction model are used to determine
the direction of causality between the variables. If there is cointegration between the variables,
a vector error correction model can be used; this method is known as the augmented Granger
causality test. In this approach, an error correction term (ECT) is added to the VAR system. A
significant t-statistic on the parameter of the ECT indicates evidence of the existence of a
long-run relationship and long-run causality between the variables.

VECM estimation and analysis: cointegration analysis may demonstrate, for example, that oil
prices, GDP, and carbon emissions have long-run equilibrium relationships while, in the short
term, the three are in disequilibrium. The short-term imbalance and dynamic structure can then
be expressed as a VECM.

ARDL (Auto Regressive Distributed lag)

The autoregressive distributed lag (ARDL) model is an ordinary least squares (OLS) based model
which is applicable both to non-stationary time series and to time series with a mixed order of
integration. A dynamic error correction model (ECM) can be derived from ARDL through a
simple linear transformation.
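As a sketch, an ARDL(1,1) model can be estimated by OLS on lagged values (assuming NumPy and simulated data; the formula (b_x0 + b_x1)/(1 − b_y1) used below is the standard ARDL long-run multiplier):

```python
import numpy as np

rng = np.random.default_rng(6)
T = 3000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.7 * x[t - 1] + rng.normal()
    # ARDL(1,1): y depends on its own lag plus current and lagged x.
    y[t] = 0.5 * y[t - 1] + 1.0 * x[t] + 0.3 * x[t - 1] + rng.normal()

# OLS of y_t on [const, y_{t-1}, x_t, x_{t-1}]
X = np.column_stack([np.ones(T - 1), y[:-1], x[1:], x[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

# Implied long-run multiplier of x on y: (b_x0 + b_x1) / (1 - b_y1),
# which for the true parameters is (1.0 + 0.3) / (1 - 0.5) = 2.6.
long_run = (beta[2] + beta[3]) / (1 - beta[1])
```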

Dummy Regression for Developing Likert Scale

A dummy variable is a numeric variable that represents categorical data, such as gender, race,
political affiliation, etc.

In statistics and econometrics, particularly in regression analysis, a dummy variable is one that
takes only the value 0 or 1 to indicate the absence or presence of some categorical effect that
may be expected to shift the outcome.

 They can be thought of as numeric stand-ins for qualitative facts in a regression model,
sorting data into mutually exclusive categories (such as smoker and non-smoker).
 A dummy independent variable (also called a dummy explanatory variable) which for
some observation has a value of 0 will cause that variable's coefficient to have no role in
influencing the dependent variable, while when the dummy takes on a value 1 its
coefficient acts to alter the intercept.

For example: Suppose membership in a group is one of the qualitative variables relevant to
a regression. If group membership is arbitrarily assigned the value of 1, then all others would
get the value 0. Then the intercept would be the constant term for non-members but would be
the constant term plus the coefficient of the membership dummy in the case of group
members.
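The group-membership example above can be computed directly (a sketch assuming NumPy and made-up numbers): regressing the outcome on a constant and the dummy reproduces the group means.

```python
import numpy as np

# Hypothetical data: members are coded 1, non-members 0.
member = np.array([1, 1, 1, 0, 0, 0, 1, 0])
outcome = np.array([12.0, 13.0, 11.0, 7.0, 8.0, 9.0, 12.0, 8.0])

X = np.column_stack([np.ones_like(outcome), member])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# With a single dummy, the intercept equals the non-member mean and the
# dummy coefficient equals the difference between the group means.
intercept, delta = beta
```

Here the non-member mean is 8.0 and the member mean is 12.0, so the dummy coefficient is 4.0.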

Dummy Independent Variables

Figure 1: Graph showing wage = α0 + δ0·female + α1·education + u, with δ0 < 0.

Dummy variables are incorporated in the same way as quantitative variables are included (as
explanatory variables) in regression models.

For example, consider a Mincer-type regression model of wage determination, wherein wages are
dependent on gender (qualitative) and years of education (quantitative):

wage = α0 + δ0·female + α1·education + u

 In the model, female = 1 when the person is a female and female = 0 when the person is
male.
 δ0 is the difference in wages between females and males, holding education constant.
 Thus, δ0 helps to determine whether there is discrimination in wages between males
and females.

For example: If δ0 > 0 (positive coefficient), then women earn a higher wage than men
(keeping other factors constant). The coefficients attached to the dummy variables are called
differential intercept coefficients. The model can be depicted graphically as an intercept shift
between females and males. In the figure, the case δ0 < 0 is shown (wherein men earn a higher
wage than women).

Dummy Dependent Variables

A model with a dummy dependent variable (also known as a qualitative dependent variable) is
one in which the dependent variable, as influenced by the explanatory variables, is qualitative in
nature. Some decisions regarding 'how much' of an act must be performed involve a prior
decision making on whether to perform the act or not. For example, the amount of output to
produce, the cost to be incurred, etc. involve prior decisions on whether to produce or not,
whether to spend or not, etc. Such "prior decisions" become dependent dummies in the
regression model.

For example:

 The decision of a worker to be a part of the labour force becomes a dummy dependent
variable.
 The decision is dichotomous, i.e., the decision has two possible outcomes: yes and no.
 So the dependent dummy variable Participation would take on the value 1 if participating,
0 if not participating.

Likert Scale

Likert items are used to measure respondents’ attitudes to a particular question or statement. One
must recall that Likert-type data are ordinal data, i.e. we can only say that one score is higher than
another, not the distance between the points.

 A Likert item is simply a statement that the respondent is asked to evaluate by giving it a
quantitative value on any kind of subjective or objective dimension, with level of
agreement/disagreement being the dimension most commonly used. Well-designed Likert
items exhibit both "symmetry" and "balance".
 Symmetry means that they contain equal numbers of positive and negative positions
whose respective distances apart are bilaterally symmetric about the "neutral"/zero value
(whether or not that value is presented as a candidate).
 Balance means that the distance between each candidate value is the same, allowing for
quantitative comparisons such as averaging to be valid across items containing more than
two candidate values.

The format of a typical five-level Likert item, for example, could be:

1. Strongly disagree

2. Disagree

3. Neither agree nor disagree

4. Agree

5. Strongly agree
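Coding such an item for a dummy or ordinal analysis is mechanical (a sketch in plain Python with invented responses); because Likert data are ordinal, the median and mode are safer summaries than the mean:

```python
# Map a five-level Likert item to ordinal codes and summarize the responses.
SCALE = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

responses = ["Agree", "Strongly agree", "Disagree", "Agree",
             "Neither agree nor disagree", "Agree"]
codes = [SCALE[r] for r in responses]

# Ordinal data: the (upper) median and the mode are safe summaries,
# while the mean would assume equal spacing between categories.
median = sorted(codes)[len(codes) // 2]
mode = max(set(codes), key=codes.count)
```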

Review of Literature
A literature review is a survey of scholarly sources on a specific topic. It provides an overview of
current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing
research.

Tips for Writing the Literature Review

To write a proper literature review, we should:

 Start with a short and concise introduction, which provides the reader with an outline of
the literature review. This introduction should include the topic covered and the order of
our arguments. In addition, we can add a brief rationale for all this in the introduction.
 Add a short summary of our arguments and the evidence at the end of each section. We
can also use quotations wherever appropriate.
 Acknowledge various opinions even if they do not agree with our point of view. Do not
ignore opposing viewpoints – this can only make our argument look weaker.
 Use an academic, formal style and language. Keep our writing concise at all times, and
avoid using personal language and colloquialisms.
 Be objective. We should always be respectful of other opinions. This academic paper is
not the place for emotive language or strong personal opinions.
 Avoid plagiarism at all costs. To achieve this, separate our sources from the hypothesis.
Use the literature to prove a point, but reference it properly.

Tricks to Write an Effective Literature Review

Use effective keywords

The first and foremost task is to find effective and relevant literature articles to conduct a
literature review. Undoubtedly, it is important that the articles chosen for the literature review
are peer-reviewed and come from authoritative sources. Universities in different countries may
have different norms for the selection of articles.

Reading through the articles

This is the toughest task while writing literature reviews, because you will need to identify the
appropriate literature from the different peer-reviewed journals. However, it takes a lot of time
to read so many journal articles.

References

In the previous step, we suggested that you read through the literature review of the article in
hand. While reading through the literature review, you will find several other references to
articles which you can use for your research.

Up-to-Date content

It is a matter of fact that the world is changing at a fast pace, with the introduction of new
technology, changes in the environment, etc. This will, of course, impact the results of the
research.

The Way to Choose Current Literature of a Relevant Topic of Interest

There are five steps in choosing current literature on a relevant topic of interest:

Step 1: Choosing a topic

Step 2: Finding information

Step 3: Evaluating content

Step 4: Recording information

Step 5: Synthesizing content

Ways to Find Literature Review

Chronology of Events

If our review follows the chronological method, we could write about the materials according to
when they were published. This approach should be followed only if a clear path of research
building on previous research can be identified and these trends follow a clear chronological
order of development, for example, a literature review that focuses on continuing research about
the emergence of German economic power after the fall of the Soviet Union.

By Publication

Order our sources by publication chronology only if the order demonstrates a more
important trend. For instance, we could order a review of the literature on environmental studies of
brownfields by date if the progression revealed, for example, a change in the soil collection practices of
the researchers who wrote and/or conducted the studies.

Thematic

Thematic reviews of literature are organized around a topic or issue, rather than the progression
of time. However, progression of time may still be an important factor in a thematic review. For
example, a review of the Internet’s impact on American presidential politics could focus on the
development of online political satire. While the study focuses on one topic, the Internet’s impact
on American presidential politics, it will still be organized chronologically reflecting
technological developments in media. The only difference here between a "chronological" and a
"thematic" approach is what is emphasized the most: the role of the Internet in presidential
politics. Note however that more authentic thematic reviews tend to break away from
chronological order. A review organized in this manner would shift between time periods within
each section according to the point made.

Methodological

A methodological approach focuses on the methods utilized by the researcher. For the Internet in
American presidential politics project, one methodological approach would be to look at cultural
differences between the portrayal of American presidents on American, British, and French
websites. Or the review might focus on the fundraising impact of the Internet on a particular

political party. A methodological scope will influence either the types of documents in the
review or the way in which these documents are discussed.

Other Sections of Literature Review

Once we've decided on the organizational method for our literature review, the sections we need
to include in the paper should be easy to figure out because they arise from our organizational
strategy. In other words, a chronological review would have subsections for each vital time
period; a thematic review would have subtopics based upon factors that relate to the theme or
issue. However, sometimes we may need to add additional sections that are necessary for our
study but do not fit in the organizational strategy of the body. What other sections we include in
the body is up to us, but we should include only what is necessary for the reader to locate our
study within the larger scholarship framework.

Here are examples of other sections we may need to include depending on the type of review we
write:

Current Situation: information necessary to understand the topic or focus of the literature
review.

History: the chronological progression of the field, the literature, or an idea that is necessary to
understand the literature review, if the body of the literature review is not already a chronology.

Selection Methods: the criteria we used to select (and perhaps exclude) sources in our
literature review. For instance, we might explain that our review includes only peer-reviewed
articles and journals.

Standards: the way in which we present our information.

Questions for Further Research: What questions about the field has the review sparked? How
might further research proceed as a result of the review?

How to Write the Literature Review

Once we've settled on how to organize our literature review, we're ready to write each section.
When writing our review, keep in mind these issues.

Use Evidence

A literature review section is, in this sense, just like any other academic research paper. Our
interpretation of the available sources must be backed up with evidence that demonstrates that
what we are saying is valid.

Be Selective

Select only the most important points in each source to highlight in the review. The type of
information we choose to mention should relate directly to the research problem, whether it is
thematic, methodological, or chronological. Related items that provide additional information
but that are not key to understanding the research problem can be included in a list of further
readings.

Use Quotes Sparingly

Some short quotes are okay if we want to emphasize a point, or if what an author stated cannot
be easily paraphrased. Sometimes we may need to quote certain terminology that was coined by
the author, is not common knowledge, or is taken directly from the study. We should not use
extensive quotes as a substitute for our own summary and interpretation of the literature.

Summarize and Synthesize

Remember to summarize and synthesize our sources within each thematic paragraph as well as
throughout the review. Recapitulate important features of a research study, but then synthesize it
by rephrasing the study's significance and relating it to our own work.

Use Caution When Paraphrasing

When paraphrasing a source that is not our own, be sure to represent the author's information or
opinions accurately and in our own words. Even when paraphrasing an author’s work, we still
must provide a citation to that work.

Conclusion
This report is about “Regression Methodologies & Review of Literature”. In this report we
have tried to explain the literature review and the various methodologies of regression, both of
which are important topics in Applied Research Methodology.

Bibliography

Website:

1. www.scribbr.com
2. www.editage.com
3. uow.libguides.com
4. eml.berkeley.edu
5. en.wikipedia.org
6. www.moresteam.com
7. www.statisticssolutions.com
8. www.ibm.com
