Scale Development


SCALE DEVELOPMENT AND VALIDATION

• Key Steps in Scale Development
• Exploratory Factor Analysis
• Confirmatory Factor Analysis
• Assessing Scale Reliability
• Establishing Construct Validity (Convergent and Discriminant)
PROFILE
 Resource Person: Khawaja Fawad Latif

 Assistant Professor, Department of Management Sciences, COMSATS University Islamabad, Attock Campus

 Published over 20 Research papers in high-quality impact factor journals that include Corporate
Social Responsibility and Environmental Management, Social Indicators Research, Total Quality
Management and Business Excellence, Journal of Intellectual Capital, Leadership and Organization
Development Journal, Applied Research in Quality of Life, Journal of Enterprise Information
Management, Aslib Journal of Information Management, Evaluation and Program Planning,
Knowledge Management Research and Practice, International Journal of Hospitality Management,
and Studies in Higher Education.
 Analytical expertise includes IBM-SPSS, SMART-PLS, AMOS, Mplus, R (Lavaan), fuzzy-set
Qualitative Comparative Analysis (fsQCA), and Atlas.ti
 Google Scholar: Khawaja Fawad Latif
 Publons: https://publons.com/researcher/1347031/khawaja-fawad-latif/
CONTENTS
What is a Scale and Why Do We Need It?

Key Steps in Scale Development

Exploratory Factor Analysis

Confirmatory Factor Analysis

Assessing Scale Reliability

Establishing Construct Validity (Convergent and Discriminant)


WHAT IS A SCALE AND WHY WE NEED IT
A questionnaire (also called a test or a scale) is defined as a set of items designed to measure
one or more underlying constructs, also called latent variables (Fabrigar & Ebel-Lam, 2007).

In other words, it is a set of objective and standardized self-report questions whose responses
are then summed to yield a score. An item score is defined as the number assigned to
performance on the item, task, or stimulus (Dorans, 2018: p. 578).

Scale development, or construction, is the act of assembling and/or writing the most
appropriate items that constitute test questions (Chadha, 2009) for a target population.
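The "summed score" idea above can be sketched in a couple of lines of Python (the item responses are hypothetical):

```python
# Hypothetical responses to a 5-item Likert scale (1 = strongly disagree, 5 = strongly agree)
responses = [4, 5, 3, 4, 2]

# The scale score is simply the sum of the item scores
scale_score = sum(responses)               # 18
mean_score = scale_score / len(responses)  # 3.6, useful for comparing scales of different lengths
```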
START WITH AN OPERATIONAL DEFINITION
 The very first step is the operational definition of the constructs.
 Simply put: how do you plan to operationalize (use) the construct in your study?
 How do you define the constructs in your study?
 Measurement is guided by the operational definition.
OPERATIONAL DEFINITION
Operationalization is the process of strictly defining variables into measurable factors. The process defines vague
concepts and allows them to be measured empirically and quantitatively.
SCALE DEVELOPMENT PROCESS

Kyriazos & Stalikas, 2018


Conceptualization / Domain of the Scale

Item generation
• Item generation through existing literature, via previously available survey instruments in the literature (if any)
• Item generation through an open-ended questionnaire survey with experts
• Item generation through focus group discussion

Item categorization into different dimensions
• Validation of the item categorization from both academic and field experts

Initial Data Collection – Pilot Testing
• Exploratory Factor Analysis (EFA) to assess the factor structure
• Reliability Analysis
• Scale modification and refinement of items based on results of pilot testing

Conducting the Main Survey
• EFA
• Confirmatory Factor Analysis (CFA)
• Assessment of Reliability
• Convergent and Discriminant Validity

Proposing the final items

Latif & Sajjad (2018)


WHAT IS FACTOR ANALYSIS
1. Factor analysis is used as a data reduction technique.
2. Factor analysis takes a large number of variables and
reduces or summarizes them by representing them as a
smaller set of factors or components.
3. Factor analysis is a method for investigating whether a
number of variables of interest are linearly related to a
smaller number of unobservable factors. This is done by
grouping variables based on inter-correlations among a set
of variables.
WHAT IS FACTOR ANALYSIS
1. A common usage of factor analysis is in developing objective instruments
for measuring constructs which are not directly observable in real life.
2. The factor analysis technique mainly examines the systematic
interdependence among a set of observed variables, and the researcher is
mainly focused on determining the basis of commonality among these
variables.
3. Factor analysis has been extensively used in research for data reduction
and summarization. The main objective of factor analysis is to summarize
the information contained in a large number of variables into a small
number of factors.
4. How well do the items go together? This is the key question when we are
building a new scale.
EFA VS CFA
 When applied to a research problem, these methods can be used to either
confirm a priori established theories or identify data patterns and relationships.
 Specifically, they are confirmatory when testing the hypotheses of existing
theories and concepts and exploratory when they search for latent patterns in
the data in case there is no or only little prior knowledge on how the variables
are related.
 When exploratory factor analysis is applied to a data set, the method searches
for relationships (variables with high correlation are grouped together)
between the variables in an effort to reduce a large number of variables to a
smaller set of composite factors (i.e., combinations of variables). The final set
of composite factors is a result of exploring relationships in the data and
reporting the relationships that are found (if any).
BASIC TERMINOLOGIES IN FACTOR ANALYSIS
The following is the list of some basic terms frequently used in the factor analysis
 Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy: The Kaiser-Meyer-Olkin
(KMO) measure of sampling adequacy is an index used to examine the appropriateness of
factor analysis. This statistic shows the proportion of variance among the variables included
in the study that is common variance. A high value of the statistic (from 0.5 to 1) indicates
that factor analysis is appropriate for the data in hand, whereas a low value (below 0.5)
indicates that it is inappropriate.
 Bartlett’s test of Sphericity: Bartlett's test of sphericity is a test statistic used to examine the
hypothesis that the variables are uncorrelated in the population. In other words, it tests whether
the population correlation matrix is an identity matrix: each variable correlates perfectly with
itself (r = 1) but has no correlation with the other variables (r = 0).
 A significant result (Sig. < 0.05) indicates that the correlation matrix is not an identity matrix
(with an identity matrix, factor analysis is meaningless); i.e., the variables relate to one another
enough to run a meaningful EFA.
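Both statistics can be computed directly from a correlation matrix. A minimal numpy/scipy sketch follows; the simulated data are a toy example, and `kmo` and `bartlett_sphericity` are illustrative helper names, not a standard API:

```python
import numpy as np
from scipy import stats

def kmo(R):
    """Kaiser-Meyer-Olkin measure of sampling adequacy from a correlation matrix R."""
    inv = np.linalg.inv(R)
    # Partial correlations obtained from the inverse of the correlation matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    off = ~np.eye(R.shape[0], dtype=bool)
    r2 = np.sum(R[off] ** 2)        # squared simple correlations
    p2 = np.sum(partial[off] ** 2)  # squared partial correlations
    return r2 / (r2 + p2)

def bartlett_sphericity(R, n):
    """Bartlett's test of the hypothesis that R is an identity matrix (n = sample size)."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, df)  # statistic and p-value

# Toy data: six items driven by one common factor, so factoring is appropriate
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))
X = latent + 0.6 * rng.normal(size=(300, 6))
R = np.corrcoef(X, rowvar=False)

kmo_value = kmo(R)                            # should exceed 0.5 here
chi2, p_value = bartlett_sphericity(R, n=300) # p should be well below 0.05 here
```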
BASIC TERMINOLOGIES IN FACTOR ANALYSIS
 Communality: Communality is the amount of variance a variable shares
with all the other variables being considered. It is also the proportion of a
variable's variance explained by the common factors. Small values indicate
variables that do not fit well with the factor solution and should possibly be
dropped from the analysis. Normally, values less than .60 are removed.
 Uniqueness: Gives the proportion of a variable's variance that is not
associated with the factors. Uniqueness is equal to 1 − communality, so
communality = 1 − uniqueness.
 Percentage of Variance: Gives the percentage of variance that can be
attributed to each specific factor relative to the total variance in all the
factors.
BASIC TERMINOLOGIES IN FACTOR ANALYSIS
 Eigenvalue: The eigenvalue represents the total variance explained by
each factor. Factors with eigenvalues over one (1) are retained for further
study.
 Scree Plot: A plot of eigenvalues against factor number, in the order of
extraction. This plot is used to determine the optimal number of factors to
retain in the final solution.
 Factor Loadings: Also referred to as factor-variable correlations. Factor
loadings are simple correlations between the variables and the factors.
 Factor Matrix: A factor matrix contains the factor loadings of all the
variables on all the factors extracted.
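The eigenvalue-over-one (Kaiser) rule can be illustrated with simulated data; the two-factor structure below is an assumption of the toy example:

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy data: six items, the first three driven by factor 1, the last three by factor 2
f = rng.normal(size=(500, 2))
X = np.column_stack([f[:, 0]] * 3 + [f[:, 1]] * 3) + 0.6 * rng.normal(size=(500, 6))

R = np.corrcoef(X, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending, as on a scree plot

# Kaiser criterion: retain factors whose eigenvalue exceeds 1
n_factors = int((eigenvalues > 1).sum())  # recovers the two-factor toy structure
```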
ROTATION METHOD
Makes the loading patterns easier to interpret.
 Varimax (most common): minimizes the number of variables with extreme loadings (high or
low) on a factor and keeps the factors uncorrelated, making it possible to identify each variable
with a factor. Components are always orthogonal, so each component explains non-redundant
information.
 Quartimax: minimizes the number of factors needed to explain each variable. Tends to generate
a general factor on which most variables load with medium to high values; not very helpful for
research.
 Direct oblimin (DO): factors are allowed to be correlated.
 Promax (use this one if you're not sure): computationally faster than DO; used for large
datasets.
 Simplimax: aims at generating a simple structure.
ROTATIONS
 Rotations that allow for correlation are called oblique
rotations; rotations that assume the factors are not
correlated are called orthogonal rotations.
 Varimax returns factors that are orthogonal; oblimin
allows the factors to be correlated (non-orthogonal).
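Varimax itself fits in a few lines of numpy; this is a sketch of the standard SVD-based algorithm, and the unrotated loading matrix is hypothetical:

```python
import numpy as np

def varimax(L, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a loadings matrix L (items x factors)."""
    p, k = L.shape
    R = np.eye(k)  # rotation matrix, kept orthogonal at every step
    var = 0.0
    for _ in range(max_iter):
        LR = L @ R
        # Gradient of the varimax criterion
        G = L.T @ (LR ** 3 - LR @ np.diag((LR ** 2).sum(axis=0)) / p)
        U, s, Vt = np.linalg.svd(G)
        R = U @ Vt
        if s.sum() - var < tol:
            break
        var = s.sum()
    return L @ R

# Hypothetical unrotated loadings: two factors, no simple structure yet
L = np.array([[0.6, 0.6], [0.7, 0.5], [0.6, -0.6], [0.7, -0.5]])
rotated = varimax(L)
```

Because the rotation is orthogonal, each item's communality (row sum of squared loadings) is unchanged; only the distribution of loadings across factors shifts toward simple structure.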
Measurement Model
The measurement model is the part of the model that examines the
relationships between the latent variables and their measures. The
measurement model helps establish the reliability and validity of the
constructs.
RELIABILITY AND VALIDITY

WHAT IS RELIABILITY?
Reliability is:
 the consistency of your measurement instrument
 the degree to which an instrument measures the same way each time it is
used under the same conditions with the same subjects
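In practice, the consistency of a multi-item scale is most often quantified with Cronbach's alpha. A minimal numpy sketch, using simulated (hypothetical) responses:

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for a respondents-by-items score matrix X."""
    k = X.shape[1]
    item_variances = X.var(axis=0, ddof=1).sum()
    total_variance = X.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return k / (k - 1) * (1 - item_variances / total_variance)

# Toy data: four items reflecting one underlying construct
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
X = latent + 0.6 * rng.normal(size=(200, 4))

alpha = cronbach_alpha(X)  # high here, since the items hang together
```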

RELIABILITY
 Imagine that you are using a ruler to measure a book,
and it measures about 7 inches across.
What do you think would happen if you waited 10 minutes
and measured the book again? How long would it be then?
…Probably still 7 inches
What if you spun the ruler around and shook it up really good?
Now what would it say?
…Probably still 7 inches

RELIABILITY
Your ruler…
 was consistent
 measured the same way each time it was used
under the same conditions with the same object
The book did not change and therefore the
ruler reported back the same measurement
Your ruler is RELIABLE

© A. Taylor
Do not duplicate without author’s permission
RELIABILITY
 What about this reliable instrument…
This clock reads 6:15
If nothing changes – if time stands still,
will the clock still say the same thing?
YES! It’s very reliable. You always
know exactly what it is going to say.
The problem is, even if time doesn’t
stand still, the clock will not move…but
it IS still reliable.

WHAT IS VALIDITY?
Validity asks
 if an instrument measures what it is supposed to

 how “true” or accurate the measurement is

RELIABLE BUT NOT VALID
These instruments are very RELIABLE.
They both report consistently – too consistently.

But neither measures what it is supposed to:
• The scale is not really measuring weight
• The clock is not measuring time
They are NOT VALID

PUTTING RELIABILITY AND VALIDITY
TOGETHER
 Every instrument can be evaluated on two dimensions:
 Reliability
 How consistent it is given the same conditions

 Validity
 If it measures what it is supposed to and how accurate it is

CONVERGENT AND
DISCRIMINANT VALIDITY
 Convergent validity takes two measures that are supposed
to be measuring the same construct and shows that they
are related.
 Conversely, discriminant validity shows that two measures
that are not supposed to be related are, in fact, unrelated.