PSYCHOLOGICAL STATISTICS

Second sem / lesson 1 - ___ / midterms / PPT & lecture based

DESCRIPTIVE STATISTICS

- describes, shows, and summarizes the basic features of a data set found in a given study, presented in a summary that describes the data sample and its measurements.
- The prime purpose of descriptive statistics is to convey information regarding a data set. It helps in reducing a large chunk of data into a few relevant pieces of information.

INFERENTIAL STATISTICS

- a field of statistics that uses analytical tools for drawing conclusions about a population by examining random samples.
- The goal of inferential statistics is to make generalizations about a population.

DESCRIPTIVE STATISTICS vs. INFERENTIAL STATISTICS
 Descriptive: quantifies characteristics of the data; measures of central tendency and measures of dispersion are the tools used; describes characteristics of the sample; measures are variance, range, mean, median, etc.
 Inferential: makes conclusions about the population using analytical tools, hypothesis testing, and regression analysis; makes inferences about an unknown population; measures are t-tests, z-tests, linear regression, etc.

TYPES OF DESCRIPTIVE STATISTICS

1. FREQUENCY DISTRIBUTIONS
o Describes the number of observations for each possible value of a variable.
o Depicted using graphs and tables

TYPES OF FREQUENCY DISTRIBUTIONS

 Ungrouped frequency distributions
o Categorical variables
o Number of observations for each value of the variable

 Grouped frequency distributions
o Quantitative variables can employ this style of FD
o Number of observations made for the variable's various class intervals

STEPS:
 Identify the highest and lowest values
 Range = highest value – lowest value
 Class interval width = range / √(sample size)
 Calculate the class intervals: lower limit <= a < upper limit
 Add the class interval width to the lowest value to form each interval
 Count the frequency in each interval

 Relative frequency distributions
o the proportion of data for a variable's values or class intervals
o rf = f / (fa + fb + fc + fd), expressed as a %

 Cumulative frequency distribution
o how often observations fall below certain values
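A minimal Python sketch of the steps above; the scores and the choice to round the class width up to a whole number are assumptions for illustration, not from the lecture:

import math

# Hypothetical raw scores
scores = [12, 15, 22, 8, 31, 27, 19, 24, 35, 14, 29, 21, 17, 26, 33, 10]

# Range and class interval width = range / sqrt(sample size)
lowest, highest = min(scores), max(scores)
data_range = highest - lowest
width = math.ceil(data_range / math.sqrt(len(scores)))  # round up to a whole number

# Build class intervals starting from the lowest value (lower limit <= x < upper limit)
intervals = []
lower = lowest
while lower <= highest:
    intervals.append((lower, lower + width))
    lower += width

# Count the frequency in each interval
freq = {iv: sum(1 for x in scores if iv[0] <= x < iv[1]) for iv in intervals}

# Relative frequency: rf = f / total frequency, expressed as a percentage
n = len(scores)
rel_freq = {iv: f / n * 100 for iv, f in freq.items()}

# Cumulative frequency: how many observations fall below each upper limit
cum = 0
for iv in intervals:
    cum += freq[iv]
    print(f"{iv[0]:>2}-{iv[1] - 1:<2}  f={freq[iv]}  rf={rel_freq[iv]:.1f}%  cf={cum}")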
2. CENTRAL TENDENCY
o Finds the mean, median, and mode
 MEAN: μ = ΣX / n
 MEDIAN: the middle value of the ordered data; its position is (n + 1) / 2
 MODE (grouped data): Mo = L + ((fm – f1) / ((fm – f1) + (fm – f2))) × h, where L is the lower boundary of the modal class, h the class width, fm the frequency of the modal class, and f1 and f2 the frequencies of the classes before and after it
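A short Python sketch of the three measures; the scores and the frequency-table values fed into the grouped-mode formula are made up for illustration:

from statistics import mean, median

scores = [12, 15, 22, 8, 31, 27, 19, 24, 35, 14]

print(mean(scores))    # μ = ΣX / n
print(median(scores))  # middle value of the ordered data

# Grouped-data mode: Mo = L + ((fm - f1) / ((fm - f1) + (fm - f2))) * h
# L  = lower boundary of the modal class, h = class width,
# fm = frequency of the modal class,
# f1 = frequency of the class before it, f2 = frequency of the class after it
def grouped_mode(L, h, fm, f1, f2):
    return L + ((fm - f1) / ((fm - f1) + (fm - f2))) * h

# Hypothetical frequency table where the modal class is 20-29 (f = 12),
# preceded by a class with f = 7 and followed by a class with f = 5
print(grouped_mode(L=20, h=10, fm=12, f1=7, f2=5))  # ≈ 24.17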

3. MEASURES OF VARIABILITY

Variability – describes how far apart data points lie from each other and from the center of the distribution.

 RANGE
o difference between the highest and lowest values

 INTERQUARTILE RANGE (IQR)
o the third quartile (Q3) minus the first quartile (Q1)

 STANDARD DEVIATION
o the typical distance of scores from the mean: σ = √( Σ(X – μ)² / N )

 VARIANCE
o the square of the standard deviation: σ² = Σ(X – μ)² / N
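A Python sketch of these measures using the standard library; the scores are made up, and pstdev/pvariance use the population formulas above (stdev/variance would divide by n − 1 instead):

import statistics

scores = [12, 15, 22, 8, 31, 27, 19, 24, 35, 14]

# Range: highest value minus lowest value
value_range = max(scores) - min(scores)

# Interquartile range: Q3 - Q1 (exact quartile values depend on the convention used)
q1, _, q3 = statistics.quantiles(scores, n=4)
iqr = q3 - q1

# Population variance and standard deviation (divide by N, as in the formulas above)
variance = statistics.pvariance(scores)
std_dev = statistics.pstdev(scores)

print(value_range, iqr, variance, std_dev)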
___________________________________________

Z SCORE

- describes the relationship between a value and the mean of a set of values, expressed in standard deviation units.
- Greater than the mean = positive z; less than the mean = negative z.
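A quick Python sketch of the z-score formula z = (X − μ) / σ, with made-up scores:

import statistics

scores = [12, 15, 22, 8, 31, 27, 19, 24, 35, 14]
mu = statistics.mean(scores)
sigma = statistics.pstdev(scores)  # population standard deviation

# z = (X - μ) / σ : positive when the value is above the mean, negative when below it
def z_score(x):
    return (x - mu) / sigma

print(z_score(31))  # above the mean -> positive z
print(z_score(12))  # below the mean -> negative z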
CORRELATION

- helps to analyze the covariation of two or more variables.

TYPES:

 POSITIVE
o as the independent variable increases in value, so does the dependent variable

 NEGATIVE
o as the independent variable increases in value, the dependent variable falls in value

 LINEAR
o the amount of change in one variable tends to bear a constant ratio to the amount of change in the other

 NON-LINEAR
o the amount of change in one variable does not bear a constant ratio to the amount of change in the other variable

 SIMPLE
o 2 variables are being studied

 MULTIPLE
o 3 or more variables are being studied

 PARTIAL
o three or more variables are involved, but only two are considered while keeping the other variables constant

METHODS OF STUDYING CORRELATION

 SCATTER DIAGRAM METHOD
o A graph of observed plotted points where each point represents the values of X and Y as a coordinate. It portrays the relationship between these two variables graphically.

 GRAPHIC METHOD
o the individual values of the two variables are plotted on graph paper
o one curve is drawn for the X variable and another for the Y variable

 KARL PEARSON'S COEFFICIENT OF CORRELATION
o developed by Karl Pearson
o known as the Product Moment Correlation Coefficient
o measures the degree of linear relationship between two variables
o measures the value that shows the strength of the relationship between the two variables in a correlation
o the most common method of determining the relationship between variables
o denoted by r; −1 ≤ r ≤ +1

USE WHEN:
o both variables are quantitative (e.g., interval scale)
o the variables are normally distributed
o the relationship is linear
o both descriptive and inferential

 SPEARMAN'S RANK CORRELATION COEFFICIENT
o developed by Charles Spearman
o a nonparametric measure of rank correlation
o evaluates the capability of a monotonic function to express the relationship between two variables
o requires that your data be continuous data with a monotonic relationship, or ordinal data
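A hedged Python sketch of both coefficients using SciPy (assumed to be available); the paired values are invented for illustration:

from scipy import stats

# Hypothetical paired scores for two quantitative variables
x = [2, 4, 5, 7, 9, 10, 12, 13]
y = [10, 14, 15, 20, 24, 27, 30, 33]

# Pearson's product-moment correlation (linear relationship, interval data): -1 <= r <= +1
r, p_value = stats.pearsonr(x, y)

# Spearman's rank correlation (monotonic relationship, ordinal data)
rho, p_value_s = stats.spearmanr(x, y)

print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")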
___________________________________________

MULTIPLE LINEAR REGRESSION

 LINEAR REGRESSION
o predicts the value of one variable using knowledge of another
o only two continuous variables (an independent variable and a dependent variable) can be utilized in a linear regression
o the parameter that is utilized to calculate the dependent variable (the result) is known as the independent variable
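A minimal Python sketch of a simple linear regression using scipy.stats.linregress; the hours/score data are invented for illustration:

from scipy import stats

# Hypothetical data: one independent variable (X) predicting one dependent variable (Y)
hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]
exam_score    = [52, 55, 61, 64, 70, 72, 79, 83]

result = stats.linregress(hours_studied, exam_score)

# Y is predicted from X as: Y = intercept + slope * X
print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}")
print(f"r = {result.rvalue:.3f}, r^2 = {result.rvalue**2:.3f}")

predicted = result.intercept + result.slope * 9  # predict Y for a new X value
print(predicted)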
 MULTIPLE LINEAR REGRESSION
o known simply as multiple regression; a statistical technique that predicts the outcome of a response (dependent variable) using several explanatory variables (independent variables)
o an extension of linear regression, which employs only one explanatory variable
o widely used in econometrics and financial analysis
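A sketch of a multiple linear regression fitted by ordinary least squares with NumPy; the two predictors and the scores are invented for illustration:

import numpy as np

# Hypothetical data: two independent variables predicting one dependent variable
study_hours = [2, 4, 5, 7, 8, 10, 11, 13]
sleep_hours = [6, 7, 5, 8, 6, 7, 8, 7]
exam_score  = [55, 62, 60, 74, 70, 80, 85, 88]

# Design matrix with a column of 1s for the intercept
X = np.column_stack([np.ones(len(exam_score)), study_hours, sleep_hours])
y = np.array(exam_score, dtype=float)

# Ordinary least squares: solve for b in y ≈ X @ b
b, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, b_study, b_sleep = b

print(f"score ≈ {intercept:.2f} + {b_study:.2f}*study_hours + {b_sleep:.2f}*sleep_hours")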
 LIMITATIONS
o Linearity
 It assumes that there is a linear relationship between the dependent and independent variables.
o Multicollinearity
 occurs when the predictor/independent variables are highly correlated with each other. When this happens, it becomes difficult to distinguish the independent effects of each independent variable on the dependent variable.
o Outliers
 are observations that are significantly different from the other observations in a data set.
o Causality
 The major conceptual limitation of all regression techniques is that one can only ascertain relationships, but never be sure about the underlying causal mechanism.

MULTIPLE LINEAR REGRESSION vs. LINEAR REGRESSION
 Multiple linear regression: an extension of Simple Linear Regression; many-to-one; two or more IVs / one DV
 Linear regression: one-to-one; one IV / one DV

INTERPRETING THE RESULT

 The Model Summary Table reports the strength of the relationship between the model and the dependent variable.
 R in regression analysis is called the multiple correlation coefficient; it indicates the linear correlation between the observed and model-predicted values of the dependent variable.
 R-Squared (R² or the coefficient of determination) is a statistical measure in a regression model that determines the proportion of variance in the dependent variable that can be explained by the independent variables.
 Adjusted R-Squared is a corrected goodness-of-fit (model accuracy) measure for linear models. It identifies the percentage of variance in the target field that is explained by the input or inputs.
 Unstandardized coefficients are those that the linear regression model produces after its training using the independent variables, which are measured in their original scales, i.e., in the same units in which the data set is taken from the source to train the model.
 Unstandardized beta (B) represents the slope of the line between the predictor variable and the dependent variable.
 The standard error is an estimate of the standard deviation of the coefficient, the amount it varies across cases. It can be thought of as a measure of the precision with which the regression coefficient is measured.
 Beta coefficients or beta weights are the estimates resulting from a regression analysis where the underlying data have been standardized so that the variances of the dependent and independent variables are equal to 1.
 The t statistic is the coefficient divided by its standard error.
 The sign of a linear regression coefficient tells you whether there is a positive or negative correlation between each independent variable and the dependent variable.
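A Python sketch of how R, R², and adjusted R² are computed from observed and model-predicted values; the y_hat predictions below are hypothetical numbers, not output from a real model:

import numpy as np

# Observed y, hypothetical model-predicted y_hat, n observations, k predictors (here k = 2)
y = np.array([55, 62, 60, 74, 70, 80, 85, 88], dtype=float)
y_hat = np.array([56.1, 62.3, 61.0, 73.5, 71.2, 79.8, 84.6, 87.5])
n, k = len(y), 2

# R: correlation between the observed and model-predicted values
R = np.corrcoef(y, y_hat)[0, 1]

# R-squared: proportion of variance in y explained by the predictors
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# Adjusted R-squared: corrects R^2 for the number of predictors
adj_r_squared = 1 - (1 - r_squared) * (n - 1) / (n - k - 1)

print(f"R = {R:.3f}, R^2 = {r_squared:.3f}, adjusted R^2 = {adj_r_squared:.3f}")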